Feature Article - October 2016
by Do-While Jones
Ten years ago we pondered the complexity of life. It is time to revisit the topic.
Some creationists argue that life is too complex to have happened by chance. It is a reasonable argument—but it is subjective on two levels. First, how do you measure and assign a numerical value to the complexity of living things? Second, if you can assign an objective numerical value to complexity, what is the threshold value that corresponds to “too complex?”
The “interdisciplinary approach” was the hot new idea at the end of the 20th century (even though it dates back to the ancient Greeks). The idea is that you build a team of people from different academic backgrounds. The TV show, Scorpion, is a good example because the team consists of members from diverse disciplines, including a psychologist, mathematician, and mechanical engineer. People from different backgrounds bring different skills to the task of problem solving.
A team of biologists studying the problem of biological complexity could benefit from members with different backgrounds. In particular, someone with a computer science background could be very helpful.
In the 1970s, software engineers (that is, computer programmers) were forced to address the complexity issue. The more complex a computer program is, the more errors it is likely to contain, and the harder (and more costly) it will be to find and fix those errors. Software developers recognized that they desperately needed a way to measure complexity and relate complexity to cost. In 1976, Thomas McCabe invented a tool to measure software complexity that became the industry standard.1 I worked in a group that used this tool to measure the cyclomatic complexity of every software module; any module whose complexity exceeded 20 had to be rewritten and simplified before it could be approved.
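For readers unfamiliar with the metric, McCabe's cyclomatic complexity for structured code works out to the number of decision points plus one. The sketch below is only a rough illustration of that idea (counting branching keywords in the source text), not the control-flow-graph analysis a real tool such as McCabe's performs:

```python
# Rough sketch of cyclomatic complexity for structured code:
# M = (number of decision points) + 1. Counting branching keywords
# in the text is an approximation; real tools analyze the actual
# control flow graph of the program.
import re

DECISION_KEYWORDS = r"\b(if|elif|for|while|and|or|case|except)\b"

def cyclomatic_complexity(source: str) -> int:
    return len(re.findall(DECISION_KEYWORDS, source)) + 1

example = """
def classify(x):
    if x < 0:
        return "negative"
    elif x == 0:
        return "zero"
    for d in str(x):
        if d == "7":
            return "lucky"
    return "positive"
"""
print(cyclomatic_complexity(example))  # 5: four decision points + 1
```

A straight-line program with no decisions scores 1; under a rule like the one described above, any module scoring over 20 would be sent back for simplification.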
Scientist Robert M. Hazen may not have the same appreciation for complexity that software engineers do. He gave a lecture titled Emergence, in which he claimed that simple systems naturally become more complex and more capable. We addressed his lecture in 2006.2 We hope you will go back and read that essay. In it, we happened to mention the McCabe cyclomatic complexity criterion. We were shocked, humbled, and honored to receive an email from Thomas McCabe himself in which he said he had similar thoughts. We published part of his email in the August 2006 newsletter.3 His email contained a preliminary draft of a paper he was writing on the subject.
His email began,
This letter has some thoughts that follow from your article "Emergent Complexity" dated March 06. I agree with and enjoyed your application of the McCabe complexity to emergent intelligence and in fact I have several observations about emergent intelligence in software systems. I'll relay my thoughts on macro emergent intelligence later but I have some thoughts about applying complexity at the micro biological level --- where life begins.
It ended with the words,
There's much more here, perhaps this 'teaser' will get a healthy discussion going. My fortuitous breakthrough was to see the mathematics at work with our computer algorithms --- maybe the same mathematics is at work with our biological algorithms.
The second complexity analysis is to work directly with the DNA double helix and treat it as a mathematical structure. More on this later.
We didn’t publish the paper in his email because it was preliminary, incomplete, very technical, and copyrighted.
Ten years have gone by, and we haven’t heard from him since. In recent years we have tried to contact him on several occasions, but we could not find him on social media, and he is no longer associated with the company he founded. Presumably he has retired; he may even have died by now (as so many of my professional associates have).
We hope that he is still pursuing the topic, is still reading our newsletter, and will respond. If any of you readers know how to contact him, we would greatly appreciate it if you could send us his contact information. Until then, we must proceed on the assumption that Tom has no more to say on the subject, and carry on without him.
If we ignore the spiritual and cognitive aspects of life, we can consider life to be nothing more than a metabolic process.
Metabolism (from Greek: μεταβολή metabolē, "change") is the set of life-sustaining chemical transformations within the cells of living organisms. The three main purposes of metabolism are the conversion of food/fuel to energy to run cellular processes, the conversion of food/fuel to building blocks for proteins, lipids, nucleic acids, and some carbohydrates, and the elimination of nitrogenous wastes. These enzyme-catalyzed reactions allow organisms to grow and reproduce, maintain their structures, and respond to their environments. The word metabolism can also refer to the sum of all chemical reactions that occur in living organisms, including digestion and the transport of substances into and between different cells, in which case the set of reactions within the cells is called intermediary metabolism or intermediate metabolism.4
Subjectively, these processes appear to be very complex. In our first essay ten years ago5, we tried to show you the huge (3,724 square inch) two-part chart of Metabolic Pathways published by Roche Applied Science. The photo of the whole chart reduced all the pathways so much that they could not be read. The enlarged photo of a small portion of the chart was readable, but lost the immensity of the chart. That two-part chart is now available on-line in an interactive form6. We encourage you to check it out that way.
The McCabe measurement of software complexity depends upon counting the number of linearly independent paths from the input to the output. The more paths, and the more decisions about which path the process should take, the more complex the computer program is.
The Roche chart depicting all the metabolic pathways in living cells contains many more process paths than any computer program I ever wrote. So, just on the basis of the number of processes, and the many interconnections between them, the entire metabolic system is obviously very complex. But exactly how complex is it? What is the best way to count paths and connections?
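Counting paths in a network is exactly what McCabe's graph formula does: M = E − N + 2P, where E is the number of edges (connections), N the number of nodes (processes), and P the number of connected components. As a hedged illustration, here is that formula applied to a tiny, invented pathway fragment (the node names are placeholders, not real biochemistry):

```python
# McCabe's cyclomatic number for a graph: M = E - N + 2P,
# where E = edges, N = nodes, P = connected components.
# The "pathway" below is an invented toy example, not real biochemistry.
def cyclomatic_number(edges, nodes):
    # Assume one connected component (P = 1) for this toy graph.
    return len(edges) - len(nodes) + 2

nodes = {"glucose", "A", "B", "C", "pyruvate"}
edges = [
    ("glucose", "A"), ("A", "B"), ("A", "C"),
    ("B", "pyruvate"), ("C", "pyruvate"),
]
print(cyclomatic_number(edges, nodes))  # 5 - 5 + 2 = 2
```

Even this five-node toy scores 2 because the path from glucose to pyruvate branches once; a chart with thousands of interconnected reactions would score enormously higher.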
A functioning computer consists of many hardware components (such as a Central Processing Unit, some memory, input and output devices, a power supply and wires to connect them all together). But the hardware won’t do anything without software (a computer program) to tell it what to do.
In the same way, an automobile consists of hardware components (engine, transmission, wheels, etc.). But the car won’t do anything without a driver telling the hardware what to do. Even the experimental “self-driving” cars need some sort of controller that simulates a human driver.
The point is that living things are like computers and automobiles in that they all consist of tangible material and an intangible process that controls the material. Both are necessary. The material can’t do anything without a controlling process, and a process can’t do anything without material to carry out the process’ intention. Any biological complexity measurement has to take into account the complexity of the processing as well as the complexity of the physical mechanism needed to perform the process.
When evolutionists speculate about how the first living cell formed, they always focus on the hardware and ignore the software. That is, they try to imagine simple, natural ways in which amino acids, proteins and enzymes could form accidentally. Then they speculate about how a membrane could have formed around the necessary organic molecules.
But even given all the proper organic molecules in a suitable membrane (that is, the hardware necessary for life) where does the software come from?
The fictional Doctor Frankenstein created a monster by sewing together a bunch of body parts. (An engineer would have simply started with the body of someone who had died in his sleep. All the necessary body parts and fluids are already there. No assembly required.) Assembling the hardware was the easy part. The hard part is bringing the body to life. That is, the hard part is loading the software and booting the system. It takes more than a lightning bolt to do that.
That brings us back to metabolism. Metabolism is the set of processes that the cellular hardware has to perform. Generally speaking, those processes are very complex. This month’s Evolution in the News column examines just one of those processes, so we don’t want (or need) to go into detail here. All we want to say at this point is that one must evaluate the complexity of the metabolic process as well as the complexity of the physical structure needed to execute that process.
The purpose of this essay is to stimulate a discussion of complexity with the goal of finding an objective method of measuring biological complexity. The premise is that a lot of this work has already been done by computer scientists and software developers. So, let’s start there.
This work dates back to the 1970s. That’s when computers started to be embedded in smart appliances. For the first time, people were programming computers to do something other than keeping track of the company payroll or solving equations. Frankly, we didn’t know what we were doing, and we were making it up as we went along. We were making a lot of expensive mistakes. I became moderately famous by learning from those mistakes, presenting solutions in the professional literature, and speaking at conferences. So let’s go back to the beginning of software complexity measurements and start there to develop some biological complexity measurements.
The first thing we did was to count lines of code. It stands to reason that if it takes 100 pages to print a computer program, that computer program might be twice as complex as a program that can be printed on 50 pages. Oh, if it were only that simple! There’s more to it than just program size—but size does matter. Generally speaking, the longer a program is, the more complex it is.
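Counting lines of code is easy to do mechanically. A minimal sketch, assuming (as most such counters do) that blank lines and comments are skipped:

```python
def count_lines_of_code(source: str) -> int:
    """Count non-blank, non-comment lines -- the crudest size metric."""
    count = 0
    for line in source.splitlines():
        stripped = line.strip()
        if stripped and not stripped.startswith("#"):
            count += 1
    return count

program = """
# payroll example
def pay(hours, rate):
    # overtime above 40 hours
    if hours > 40:
        return 40 * rate + (hours - 40) * rate * 1.5
    return hours * rate
"""
print(count_lines_of_code(program))  # 4
```

The weakness is obvious: a verbose program and a genuinely intricate one can have the same line count, which is why size alone was never enough.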
The simplest (and admittedly crudest) measurement is to count the number of biological molecules in a cell. Count the number of different molecules in brain cells, bone cells, muscle cells, and skin cells. The cell with the most different kinds of molecules is the most complex. The number of molecules is an objective measurement that can be compared easily.
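The counting scheme just described can be sketched directly. The molecule inventories below are invented placeholders to show the comparison, not measured data about real cells:

```python
# Toy comparison of cell types by number of distinct molecule kinds.
# These inventories are hypothetical placeholders, not real measurements.
cells = {
    "neuron": {"ATP", "NADH", "dopamine", "myelin protein", "actin"},
    "skin":   {"ATP", "NADH", "keratin", "collagen"},
}

# Rank cell types by how many distinct kinds of molecules they contain.
ranked = sorted(cells, key=lambda c: len(cells[c]), reverse=True)
for cell in ranked:
    print(cell, len(cells[cell]))
# By this crude metric, the neuron (5 kinds) ranks as more complex
# than the skin cell (4 kinds).
```

Like counting lines of code, this measures size rather than intricacy, but it is at least objective and repeatable.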
The number of different functions something can perform might be a better measure of complexity than size is. In 1979, Allan Albrecht came up with the idea of measuring “function points” common to all software processes. Perhaps there are corresponding biological function points that could be measured.
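Albrecht's method weights five kinds of "function points" (inputs, outputs, inquiries, internal files, external interfaces) and sums them. The sketch below uses the standard "average" weights from the function point literature; the counts themselves are an invented example:

```python
# Sketch of Albrecht-style unadjusted function points: weighted counts
# of five function types. The weights are the standard "average" weights;
# the counts below are an invented example, not a real system.
WEIGHTS = {
    "external_inputs": 4,
    "external_outputs": 5,
    "external_inquiries": 4,
    "internal_files": 10,
    "external_interfaces": 7,
}

def unadjusted_function_points(counts: dict) -> int:
    return sum(WEIGHTS[kind] * n for kind, n in counts.items())

counts = {
    "external_inputs": 3,       # 3 * 4  = 12
    "external_outputs": 2,      # 2 * 5  = 10
    "external_inquiries": 1,    # 1 * 4  = 4
    "internal_files": 2,        # 2 * 10 = 20
    "external_interfaces": 0,   # 0 * 7  = 0
}
print(unadjusted_function_points(counts))  # 46
```

A biological analog would have to decide what counts as an "input," an "output," and a "file" for a cell, which is exactly the kind of question this essay hopes to provoke.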
This month’s Evolution in the News column mentions the number of subunits in the NADH:ubiquinone oxidoreductase enzyme. Counting subunits might be a valid complexity measurement. (Unpronounceability might be an equally good measurement of complexity!)
We admit we don’t have the answers to the questions, “How complex are living things?” and “What is too complex to have happened by chance?” But at least we are asking the question and thinking about possible answers. We hope to encourage you to do the same thing. Even if we can’t put a numerical value on the complexity of a living cell, at least we can get some subjective appreciation for how complex life is.
1 Thomas J. McCabe, “A Complexity Measure,” IEEE Transactions on Software Engineering, SE-2(4), 1976
2 Disclosure, March 2006, “Emerging Complexity”
3 Disclosure, August 2006, “Measuring Complexity”
4 Wikipedia, “Metabolism”
5 Disclosure, March 2006, “Emerging Complexity”
6 Part 1: Metabolic Pathways http://biochemical-pathways.com/#/map/1
  Part 2: Cellular and Molecular Processes http://biochemical-pathways.com/#/map/2