In a recent interview, Richard Dawkins, a fanatical atheist and a leading spokesman for Darwinian evolution, was asked if he could produce an example of a mutation or evolutionary process which led to an increase in information. Although this has been known for some time to be a significant issue, during a recorded interview, Dawkins was unable to offer any such example of a documented increase in information resulting from a mutation.
After some months, Professor Dawkins has offered an essay responding to this question in the context of the interview, and it will be examined here. It is pointed out that speculation and selective use of data is no substitute for evidence. Since some statements are based on Thomas Bayes’ notion of information, this is evaluated in Part 2 and shown to be unconvincing. Some ideas are based on Claude Shannon’s work, and Part 3 shows this to be irrelevant to the controversy. The true issue, that of what coded information, such as found in DNA, human speech and the bee dance, is and how it could have arisen by chance, is simply ignored. Part 4 discusses the Werner Gitt theory of information.
After several years, we continue to request from the Darwinist theoreticians: propose a workable model and show convincing evidence for how coded information can arise by chance!
Part 1: Biological Systems Function
Because Information is Present
As scientists penetrate ever deeper into the details of nature, the feeling is growing that we have some serious rethinking to do. We cannot use known properties of non-living matter and naturalistic laws to explain how living organisms arose.
Until recently, limited technology and scientific knowledge led to a simplistic view of living systems. Based on the limited resolution offered by microscopes last century, Ernst Haeckel (1834–1919) concluded that a cell was a ‘simple little lump of albuminous combination of carbon.’ That simple organic compounds might produce such a clump of ‘stuff’ by accident did not seem too improbable at the time.
Let’s contrast this with a modern view as expressed by biologist and ex-evolutionist Dr. Gary Parker:
‘A cell needs over 75 "helper molecules", all working together in harmony, to make one protein (R-group series) as instructed by one DNA base series. A few of these molecules are RNA (messenger, transfer, and ribosomal RNA); most are highly specific proteins.
‘When it comes to "translating" DNA’s instructions for making proteins, the real "heroes" are the activating enzymes. Enzymes are proteins with special slots for selecting and holding other molecules for speedy reaction. Each activating enzyme has five slots: two for chemical coupling, one for energy (ATP), and most importantly, two to establish a non-chemical three-base "code name" for each different amino acid R-group. You may find that awe-inspiring, and so do my cell-biology students! [Even more awe-inspiring, since the more recent discovery that some of the activating enzymes have editing machinery to remove errant products, including an ingenious "double sieve" system.]
‘And that’s not the end of the story. The living cell requires at least 20 of these activating enzymes I call "translases," one for each of the specific R-group/code name (amino acid/tRNA) pairs. Even so, the whole set of translases (100 specific active sites) would be (1) worthless without ribosomes (50 proteins plus rRNA) to break the base-coded message of heredity into three-letter code names; (2) destructive without a continuously renewed supply of ATP energy [as recently shown, this is produced by ATP synthase, an enzyme containing a miniature motor, F1-ATPase.] to keep the translases from tearing up the pairs they are supposed to form; and (3) vanishing if it weren’t for having translases and other specific proteins to re-make the translase proteins that are continuously and rapidly wearing out because of the destructive effects of time and chance on protein structure!’
One can give such descriptions some serious thought, or repeat the evolutionist’s mantra, ‘But with enough time anything is possible’ and change the topic real fast!
Self-organization over Vast Time Periods?
Is there any reason, a priori, to assume that inanimate chemicals, unguided, will aggregate in manners which reflect neither statistical, thermodynamical nor mechanistic principles, and display behavior which we would otherwise clearly identify as the product of design? Random chemical reactions proceed by discrete mechanisms with rate constants that can be studied in fine detail. These follow very exact statistical rules that are at the lowest level constrained by spatial and thermodynamical laws.
Now, as time passes, molecular bonds may break, allowing individual molecules to separate and later undergo other chemical reactions. Eventually, with enough time, one expects a universal trend towards the thermodynamically most stable distribution. However, some molecules have energy barriers to breaking, especially at cold temperatures, and end up as amorphous ‘tar’, the nemesis of every organic chemist. These two outcomes are a priori the expected outcome of random chemical changes, given enough time.
Now, what do we observe today among living forms? A steady state of chemical structures plus amorphous junk? Let’s take a second look at how organic material is found currently, supposedly after billions of years, in living organisms. Dr. Paul Nelson informs us, following the theistic biochemist Dr. Michael Behe:
‘A typical cell contains thousands and thousands of different types of proteins. Assembled from amino acids in chains "anywhere from 50 to 1000 amino acids" long, proteins fold up into "very precise" three-dimensional structures, and those structures determine their precise functions.’
That doesn’t sound like nature behaving as expected. Let’s take a closer look at just one of those proteins mentioned above to see how serious the problem is. From any biochemical textbook we find that a precise 3-dimensional structure is necessary for it to be functional in any useful manner. In some portions of the chain a little leeway can be tolerated, in others the right amino acid must be in place:
‘This means that if, say, a P does not appear at position 78 of a given protein, the protein will not fold regardless of the proximity of the rest of the sequence to the natural protein.’ 
Might a single protein nevertheless somehow arise unaided? No! Of the possible chemical bonds between amino acids, all must be peptide bonds, although the natural tendency is for the reverse reaction to occur. Then the protein must consist of only L (left-handed) amino acids, although the inherent symmetry of the chemical reaction predicts a 50/50 mixture of left- and right-handed forms for every amino acid (except the achiral glycine) in the protein! Then the right sequence of amino acids must be combined. Robert Sauer, a biochemist at MIT, systematically deleted small pieces from viral proteins and inserted altered pieces back into the genes at the sites of the deletions to determine how much variation at various portions of the sequence can be tolerated. As one might expect, some portions tolerate more degrees of freedom than others.
Sauer’s conclusion: the likelihood of finding a folded protein by a random mutational search is about 1 in 10^65. That is equivalent to correctly guessing one particular atom out of our whole galaxy.
And that would be one single, isolated, worthless protein, which would quickly fall apart in the presence of water or ultraviolet light from the sun!
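The handedness requirement alone can be put into rough numbers. Below is a minimal sketch assuming a hypothetical 150-residue protein and an exact 50/50 left/right chance at each position (ignoring the achiral glycine for simplicity); it illustrates only the chirality factor, separate from Sauer’s overall 1-in-10^65 figure:

```python
# Illustrative only: probability that every residue of a short protein
# is the L-form, given an exact 50/50 L/R chance at each position.
# The 150-residue length is an assumption chosen for this estimate.
residues = 150
p_all_left = 0.5 ** residues
print(f"P(all {residues} residues L-form) = {p_all_left:.1e}")  # ~7.0e-46
```

Even this single factor, before sequence or bonding requirements are considered, is vanishingly small.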
Something Sounds Wrong Somewhere!
But we know that fertilized eggs develop into grown humans, and we can synthesize fairly complex vitamins for our consumption. Even the Bible states that after 6 days of Creation, God ‘rested’, or ceased from His creative work, and thereafter His activity was upholding His creation (Col. 1:17). Increasing complexity in the development of individual organisms, and useful functionality, appear to be facts, independent of whether you believe in creation or evolution! Why do we find this immense complexity if natural processes predict the opposite, i.e. the most thermodynamically stable distribution and the ‘amorphous tar’?
The answer lies in the third fundamental property of living organisms, and that is information.
Where Does Information Come From?
The atheist must propose a solution in which inanimate matter alone develops information. Why is this? Consider a cherry tree. Before it dies, it must pass on instructions to organize organic material and thereby regenerate multiple copies of itself. A single copy would not be good enough since external conditions prevent survival from being 100% effective. We observe that living beings are able to reproduce more than one copy of themselves on average per lifetime. This is behavior that does not arise through natural processes from inanimate matter.
How might the necessary increase of information content be pumped (coded) into DNA? This question was posed to Professor Dawkins, a vocal atheistic evolutionist. As shown on the video A Frog to a Prince, he was unable to answer the question. The Australian Skeptics made some typically lame excuses for Dawkins and scurrilous accusations against the producers of the video and creationists in general, which were thoroughly refuted. Dawkins himself responded to the ‘information challenge’. Careful reading of his essay reveals no arguments or insights that have not been used for years by those wishing to deny the existence of a Creator. It seems worthwhile to spend some effort going over these dead ideas, not because any particular person has brought them up again, but simply because evolutionists persist in resurrecting them.
A closer look at what Dawkins and others mean by the word ‘information’ will be examined in more detail in the following three parts of this essay.
Now, we shall see later that Dawkins offers an impoverished concept of information which serves neither his nor our purposes, and does not address the issues of how functional systems needed to support life processes could arise in an unguided fashion. My purpose is not to criticize a person but the ideas. He has the ability to tell very entertaining stories to illustrate ideas in a manner which indeed reflects the underlying thinking of many evolutionists, but that’s all they are: stories. Since Dawkins’ article appears to be representative of the beliefs of many people, we shall take a closer look at the case he has presented.
There are several statements in Dawkins’ article which are not justified from the biological point of view. After first discussing information content, Dawkins implies incompetence on the part of a Creator. This is a recurring theme in evolutionist literature:
‘Can we measure the information capacity of that portion of the genome which is actually used? We can at least estimate it. In the case of the human genome it is about 2%, considerably less than the proportion of my hard disc that I have ever used since I bought it.’
Here several comments are in order. First, a comparison of creationist and atheistic positions shows a constant trend whereby the latter assume an oversimplified view of nature that becomes more complex as research continues and our knowledge advances. For example, Darwin’s view of genetics via pangenes has been shown to be hopelessly simplistic. Furthermore, from about 180 vestigial organs once claimed for the human body we are down to perhaps none today. In the late 1950s, artificial intelligence researchers claimed human thought could be encompassed by a ‘General Problem Solver’ based on about 100 rules. This proved to be hopelessly naïve.
A mind-set that postulates that life arose by chance is inclined to oversimplify the difficulties involved. Experience should teach us that if those claiming 2% efficiency for the human genome believed in a Creator, they would look more seriously at how it actually works instead of disparaging what in reality no one really understands.
Dembski discussed this point recently:
‘But design is not a science stopper. Indeed, design can foster inquiry where traditional evolutionary approaches obstruct it. Consider the term “junk”. Implicit in this term is the view that because the genome of an organism has been cobbled together through a long, undirected evolutionary process, the genome is a patchwork of which only limited portions are essential to the organism. Thus on an evolutionary view we expect a lot of useless DNA. If, on the other hand, organisms are designed, we expect DNA, as much as possible, to exhibit function. And indeed, the most recent findings suggest that designating DNA as “junk” merely cloaks our current lack of knowledge about function. For instance, in a recent issue of the Journal of Theoretical Biology, John Bodnar describes how “non-coding DNA in eukaryotic genomes encodes a language which programs organismal growth and development.” Design encourages scientists to look for function where evolution discourages it.’ [Emphasis added]
Secondly, genes are only a part of the DNA molecule, and how these map to a final outcome is far from understood.
Consider heterochromatin, the highly repetitive DNA which codes for little or no protein, and which represents about 15% of DNA in human cells and about 30% in fly cells. Zuckerkandl observes:
‘Despite all arguments made in the past in favor of considering heterochromatin as junk, many people active in the field no longer doubt that it plays functional roles ... They may individually be junk, and collectively gold.’ 
That is, the nucleotides should be evaluated as an ensemble and not exclusively in context of isolated specific genes.
One protein can become part of an enzyme that then allows a different gene to produce a different kind of protein. Craig Venter, who founded the Celera company to crack the human genome, states that of the 70,000 human genes the function of no more than 5000 is known. Genes may have multiple functions, such as SLC6A3-9, whose presence reportedly decreases the likelihood that a person will smoke. Venter points out that the manner in which multiple genes interact is not understood; 3196 genes are involved in the brain alone.
Dawkins’ comments about only 2% of the human genome being used assume that researchers know far more about human DNA than is the case.
Thirdly, the statement reflects a deliberate attempt to make the DNA’s reproductive ingenuity look as bad as possible. Other estimates of information capacity to date go as high as 15%.
Fourth, there is an implied assumption that the best solution must be 100% information density. There are other possible trade-offs, such as reliability and the rates of production of various proteins. Genes jammed together might lead to more duplication failures, and the chemical interactions to produce proteins must also be optimally timed: neither too fast nor too slow. The spatial location and density of genes must be interpreted with such considerations in mind.
Here is an example. How much of a computer’s ‘mouse’ is absolutely necessary to function? Probably less than Dawkins’ proposed 2%. It appears to be primarily superfluous space. The absolute minimum cable length is surely under a millimetre. The two buttons on mine could easily be crammed into a fraction of its space. But it would then not perform its intended function. My fingers would not be able to press only the desired button in a comfortable manner.
Atheists are now faced with two problems: where does information come from in the first place, and how could it increase over time? Dawkins proposes one of the usual models, whereby change is introduced randomly into the DNA and natural selection then weeds out unwanted results.
Is there Evidence of Evolutionary Improvement over Time?
From Dawkins’ article we read:
‘The dozen or so different globins inside you are descended from an ancient globin gene which, in a remote ancestor who lived about half a billion years ago, duplicated, after which both copies stayed in the genome.
‘There were then two copies of it, in different parts of the genome of all descendant animals. One copy was destined to give rise to the alpha cluster (on what would eventually become Chromosome 11 in our genome), the other to the beta cluster (on Chromosome 16)...
‘We should see the same within-genome split if we look at any other mammals, at birds, reptiles, amphibians and bony fish, for our common ancestor with all of them lived less than 500 million years ago. Wherever it has been investigated, this expectation has proved correct.’ 
This is typical of a class of statements one is often confronted with from the evolutionary story-telling community, so I might as well invest a little time on it. Notice the total lack of evidence (has the ancient globin been demonstrated to have existed?), lack of plausibility (what would the intermediate molecular structures look like and what use would they have had?) and implied compulsion to accept only one of many possible answers (why shouldn’t blood-processing organisms all be endowed with such an ingenious resource by their Creator?).
I shall also point out how conveniently data can be picked and chosen to try to demonstrate a preconception. The fact that mould, yeast and the root nodules of beans also contain hemoglobin is inexplicably (but conveniently) not mentioned!
Clearly what we see here is a story, not evidence. There are many ways of evaluating similarities found among living organisms.
Dr. Kofahl, a chemist, observes:
‘A good example of alleged molecular homology is afforded by the α- and β-hemoglobin molecules of land vertebrates including man. These supposedly are homologous with an ancestral myoglobin molecule similar to human myoglobin. Two α- and two β-hemoglobin associate together to form the marvellous human hemoglobin molecule that carries oxygen and carbon dioxide in our blood. But myoglobin acts as single molecules to transport oxygen in our muscles. Supposedly, the ancient original myoglobin molecules slowly evolved along two paths until the precisely designed α- and β-hemoglobin molecules resulted that function only when linked together in groups of four to work in the blood in a much different way under very different conditions from myoglobin in the muscle cells. What we have today in modern myoglobin and hemoglobin molecules are marvels of perfect designs for special, highly demanding tasks. Is there any evidence that intermediate, half-evolved molecules could have served useful functions during this imaginary evolutionary change process, or that any creature could survive with them in its blood? There is no such information. Modern vertebrates can tolerate very little variation in these molecules. Thus, the supposed evolutionary history of the allegedly homologous globin molecules is a fantasy, not science.’
Hemoglobin has been extensively studied, and is often used in the creation/evolution discussions. Parker points out that,
‘We find hemoglobin in nearly all vertebrates, but we also find it in some annelids (the earthworm group), some echinoderms (the starfish group), some molluscs (the clam group), some arthropods (the insect group), and even in some bacteria! In all these cases, we find the same kind of molecule, complete and fully functional.’
He echoes Dickerson’s observation that, ‘It does not seem possible that the entire eight-helix folded pattern appeared repeatedly by time and chance.’
Kofahl has also made a similar observation:
‘Hemoglobin molecules occur not only in vertebrate animals, but also in yeast, the mould, Neurospora, and in the root nodules of beans. Hemoglobin occurs in some species of every major category of life except sponges, coelenterates (jelly fish, etc.) and protochordates (mostly worm-like marine creatures having a limited nerve chord, which are supposedly ancestral to vertebrates which have hemoglobin). It is obvious that this distribution of hemoglobin does not fit with the idea that similarities indicate common descent, for nobody could believe that humans inherited their hemoglobin molecules from yeast. And independent evolution of hemoglobin in so many different species appears highly improbable.’ 
Dr. Hannu Lång, a Molecular Geneticist at the University of Helsinki, commented about Dawkins’ statements with respect to tetrameric hemoglobins:
‘It is not strange that all mammals have tetrameric hemoglobins in their blood; monomeric ones can’t transport oxygen from lungs to muscles. Every animal has slightly different hemoglobins, animals live in different environmental conditions and hemoglobin design has to be accommodated to these environmental changes accordingly.’ 
The biochemist Dr. Bob Hosken, Senior Lecturer in Food Technology at the University of Newcastle, Australia, has independently provided further information on this. He began his postgraduate career comparing amino acid sequences of Australia’s unique fauna. While he said that trying to work out phylogenetic relationships was ‘very interesting’, he also said:
‘[T]he most exciting thing ... was the opportunity it provided for relating the molecular architecture of each species of haemoglobin to the unique physiological requirements of the animal species studied.
‘In other words, in a study of the relation between the structure and function of haemoglobin in various marsupial and monotreme species, I found it more meaningful to interpret haemoglobin structure in relation to the unique physiological demands of each species. A marsupial mouse has a greater rate of metabolism than a large kangaroo, so small marsupials need a haemoglobin with a structure designed to deliver oxygen to tissues more efficiently than that required in large animals, and I found this to be exactly the case. I also investigated the relation of haemoglobin structure and oxygen transport in the echidna and platypus, and again found the oxygen delivery system of the platypus was well suited to diving, while in the echidna it was suited to burrowing.’ 
Let’s look at some other aspects of hemoglobin to see whether there is evidence of a common ancestry:
‘When it comes to comparing similarities among amino acids in alpha hemoglobin sequences, crocodiles have much more in common with chickens (17.5%) than with vipers (5.6%). Averaging all the data for three various reptiles, three kinds of crocodiles and three kinds of birds shows, completely contrary to the predictions of evolutionary descent from a common ancestor, that the greatest similarity is between the crocodiles and chickens...’
Any similarity, whether at the morphology or cellular level, could be used to support the evolutionist theory. Why limit oneself to hemoglobin? University of California, Berkeley, law professor Phillip Johnson, a leading figure in the Intelligent Design community, points out that there are over 40 different kinds of eyes which, because of their fundamentally differing structure, must have ‘evolved’ separately, since the alternative of a common ancestor would be unpalatable to evolutionists. Notice the double standard Dawkins and others use. When data appears consistent with one model, it is used as evidence, and when not, a new story or name like ‘convergence’ is invented. One must ask what the driving force is which produces such miracles repeatedly and independently.
We have a choice of many criteria that could be used to define similarity between species.
‘By comparing lysozyme and lactalbumin, Dickerson was hoping to “pin down with great precision” where human beings branched off the mammal line. The results were surprising. In this test, it turned out that humans are more closely related to the chicken than to any living mammal tested!’ 
Let’s continue with Dawkins’ thesis. He claims:
‘Genomes are littered with non-functional pseudogenes, faulty duplicates of functional genes that do nothing ... And there’s lots more DNA that doesn’t even deserve the name pseudogene. It, too, is derived by duplication, but not duplication of functional genes. It consists of multiple copies of junk, ‘tandem repeats’, and other nonsense...’ 
Behe addresses the issue of duplicate genes as follows:
‘The sequence similarities are there for all to see ... By itself, however, the hypothesis of gene duplication says nothing about how any particular protein or protein system was first produced.’
He amplifies later:
‘For example, the DNA in each of the antibody-producing cells of your body is very similar to that of the others, but not identical. The similarities are due to common descent; that is, all the cells in your body descended from one fertilized egg cell. The differences, however, are not due to Darwinian natural selection. Rather, there is a very clever, built-in program to rearrange antibody genes. Billions of different kinds of antibody genes are “intentionally” produced by your body from a pre-existing stock of just a few hundred gene pieces.’ 
Better designs are always those which are more fault-tolerant; that is, new eventualities are anticipated.
Living things have by far the most compact information storage/retrieval system known. This stands to reason if a microscopic cell stores as much information as several sets of Encyclopædia Britannica. To illustrate further, the amount of information that could be stored in a pinhead’s volume of DNA is staggering. It is the equivalent information content of a pile of paperback books 500 times as tall as the distance from Earth to the moon, each with a different, yet specific content.
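The book-pile comparison above can be turned into a rough count with simple arithmetic. The sketch below uses the Earth–moon distance and the 500× figure from the text; the 2 cm paperback thickness is an assumption made purely for this estimate:

```python
# Illustrative only: how many paperbacks the text's comparison implies.
earth_moon_m = 384_400 * 1000       # Earth-moon distance in metres
stack_height_m = 500 * earth_moon_m # the text's 500x figure
book_thickness_m = 0.02             # assumed 2 cm per paperback
books = stack_height_m / book_thickness_m
print(f"~{books:.1e} books")        # ~9.6e12, i.e. trillions of books
```

However the book thickness is varied within reason, the implied count remains in the trillions.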
However, Dawkins apparently has a low opinion of DNA and how it works. This makes it easy to gloss over the issue of how all the necessary information arose. Let’s look a little more closely at this genetic apparatus.
Many genes tend to be involved in one function and must work together ab initio. Dr. Lucy Shapiro of Stanford, in writing about the flagellum, the filament in bacterial cells that is driven by a rotary motor and is used for propulsion, writes:
‘To carry out the feat of co-ordinating the ordered expression of about 50 genes, delivering the protein products for these genes to the construction site, and moving the correct parts to the upper floors while adhering to the design specification with a high degree of accuracy, the cell requires impressive organisational skills.’ 
An older evolutionary book discussed various tools found among living beings, such as electrochemical generators, traps, snares, nets, nooses, pitfalls, lures, hooks, press-studs, parachutes and so on, all of which follow the principles of physics and mechanics. The author, Andrée Tétry, a leading French biologist and anti-Darwinian evolutionist, deliberately sought a naturalistic explanation for the existence of life. But she concluded:
‘But how could these organic inventions, these small tools, appear? It seems most improbable that a single mutation could have given rise simultaneously to the various elements which compose, say, a press-stud or hooking device. Several mutations must therefore be assumed, but this implies the further assumption of close co-ordination between different and distinct mutations. Such indispensable co-ordination is a major stumbling block, for no known mutations occur in this way.’ 
Here we find examples of what Behe calls ‘Irreducible Complexity’: systems composed of individual parts which only make sense when all components are present, and for which developing each part individually is inconceivable. Behe’s most convincing examples involve functional systems composed of individual members which are single molecules. Examples include aspects of blood clotting, closed circular DNA, electron transport, the bacterial flagellum, telomeres, photosynthesis and transcription regulation. It is absurd to argue that the individual parts arose sequentially (or in parallel) and uncoordinated.
Can multi-part systems, which are themselves only a component of a living organism, arise by chance? Professor Siegfried Scherer, a creationist microbiologist, published a paper in the Journal of Theoretical Biology on the energy-producing mechanism of bacterial photosynthesis. He estimated that moving from ‘fermentative bacteria, perhaps similar to Clostridium’ to fully photosynthetic bacteria would require no fewer than five new proteins. His calculations show that ‘the range of probabilities estimated is between 10^-40 and 10^-104.’ (Note: the total number of particles in the universe is estimated at around 10^80.) And this is a trivial change compared to producing organs such as a brain or heart.
To Scherer’s astronomical number, one must factor in the consideration of what all can go wrong when photons interact with ‘chromophores’, the portions of molecules able to absorb light in photosynthesis. If not properly designed, ‘free radicals’ can be generated which would wreak havoc on the cell.
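To put Scherer’s lower figure alongside the particle count mentioned above, here is a toy comparison in Python. It is illustrative only and models no chemistry; it simply compares the two probabilities on a common scale:

```python
# Illustrative only: Scherer's pessimistic estimate compared with the
# chance of picking one specific particle from the ~10^80 in the universe.
p_low = 1e-104        # Scherer's lower probability estimate
p_particle = 1 / 1e80 # odds of guessing one given particle at random
ratio = p_particle / p_low
print(f"The low estimate is ~{ratio:.0e} times less likely")  # ~1e+24
```

That is, even the already hopeless task of picking one particle out of the entire universe is about 10^24 times more likely than the low end of Scherer’s range.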
Might Information Content Increase through Materialistic Processes?
Does Dawkins offer us any suggestions as to how information content might increase over time in living organisms? We read:
‘Mutation is not an increase in true information content, rather the reverse, for mutation, in the Shannon analogy, contributes to increasing the prior uncertainty. But now we come to natural selection, which reduces the ‘prior uncertainty’ and therefore, in Shannon’s sense, contributes information to the gene pool. In every generation, natural selection removes the less successful genes from the gene pool, so the remaining gene pool is a narrower subset.’
‘Of course the total range of variation is topped up again in every generation by new mutations...’
‘According to this analogy, natural selection is by definition a process whereby information is fed into the gene pool of the next generation.
If natural selection feeds information into gene pools, what is the information about? It is about how to survive.’ 
Apparently, mutations provide change, and selection makes sure the good changes are favored, and this is defined by Dawkins as an increase in information. Since the amount of total change available after duplication of genes is greater, and since Dawkins states that mutations decrease the true information content, it is not clear why a larger number of initially identical genes, each now undergoing random mutations, is going to help his argument. He now begins with a larger ‘prior uncertainty’. The following parts of this essay will examine this question in more detail.
It seems all these additional genes are going to add to the confusion produced by DNA duplicating errors. The total number of chances for failure increases, meaning more proteins with the wrong structures will be produced.
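Dawkins’ Shannon-style bookkeeping can be made concrete with a toy calculation. The sketch below (the allele names and frequencies are invented purely for illustration) computes the Shannon entropy of a gene pool before and after selection removes variants. Selection does indeed lower the ‘prior uncertainty’ in Shannon’s sense, but this says nothing about where new coded instructions come from:

```python
import math
from collections import Counter

def shannon_entropy(pool):
    """Shannon entropy (bits) of the allele-frequency distribution."""
    counts = Counter(pool)
    n = len(pool)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A toy gene pool of four equally common allele variants (invented data).
before = ['A'] * 25 + ['B'] * 25 + ['C'] * 25 + ['D'] * 25
# Selection removes two variants, narrowing the pool.
after = [a for a in before if a in ('A', 'B')]

print(shannon_entropy(before))  # 2.0 bits (four equally common alleles)
print(shannon_entropy(after))   # 1.0 bit  (two equally common alleles)
```

The entropy drops because variants were removed, which is exactly the sense in which selection only narrows an existing pool rather than writing new instructions into it.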
In addition, it would appear that specialization by selection should tend to decrease the genetic information. Darwin found wingless beetles stranded on the island of Madeira. Perhaps the beetles that could fly were all blown out to the ocean by the wind, and so drowned before they could propagate their genes. But should conditions change, those beetles can no longer regain a valuable function, flying. Selection inevitably removes information from the gene pool.
We should also consider regulatory genes that switch other genes ‘on’ or ‘off’. That is, they control whether or not the information in a gene will be decoded, so the trait will be expressed in the creature. This would enable very rapid and ‘jumpy’ changes, which are still changes involving already created information, not generation of new information, even if latent (hidden) information was turned on. For example, horses probably have genetic information coding for extra toes, but it is switched off in most modern horses. Sometimes a horse is born today where the genes are switched on, and certainly many fossil horses also had the genes switched on. This phenomenon explains the fossil record of the horse, showing that it is variation within a kind, not evolution. It also explains why there are no transitional forms showing gradually smaller toe size.
Virtually all mutations are harmful or at best neutral to the organism and prevent the messages encoded on DNA from being passed on as intended. A greater number of redundant genes compounds the problem. Consider what the long-term effect of mutations is according to Parker:
‘The more time that goes by, the greater the genetic burden or genetic corruption. Natural selection can’t save us from this genetic decay, since most mutations are recessive and can sneak through a population hidden in carriers, only rarely showing up as the double recessive which can be “attacked” by natural selection. As time goes by, accumulating genetic decay threatens the very survival of plant, animal, and the human populations’. 
The late Professor Pierre-Paul Grassé, widely regarded as one of the most distinguished of French zoologists, although not a creationist, denied emphatically that mutations and selection can create new complex organs, assigning to DNA duplication errors the role of mere fluctuation.
Dr. Demick, a practising pathologist, likens the activity of mutations to ‘A Blind Gunman’. He points out:
‘First, that the human mutation problem is bad and getting worse. Second, that it is unbalanced by any detectable positive mutations. To summarize, recent research has revealed literally tens of thousands of different mutations affecting the human genome, with a likelihood of many more yet to be characterized. These have been associated with thousands of diseases affecting every organ and tissue type in the body. In all this research, not one mutation that increased the efficiency of a genetically coded human protein has been found. Each generation has a slightly more disordered genetic constitution than the preceding one.’ 
Dr. Jonathan Wells, a cell biologist currently at the University of California, Berkeley, states specifically with reference to Dawkins’ article:
‘But there is no evidence that DNA mutations can provide the sorts of variations needed for evolution ... The sorts of variations which can contribute to Darwinian evolution, however, involve things like bone structure or body plan. There is no evidence for beneficial mutations at the level of macroevolution, but there is also no evidence at the level of what is commonly regarded as microevolution.’ 
‘The claim that mutations explain differences among genes, which in turn explain differences among organisms, is the Neo-Darwinian equivalent of alchemy. Compare:
We know that mutations happen, and that they alter DNA sequences; organisms differ in their DNA sequences, so the differences between organisms must be due (ultimately) to mutations.
We know that we can change the characteristics of metals by chemical means; lead and gold have different characteristics; therefore it must be possible to change lead into gold by chemical means.
In both cases, the mechanisms invoked to explain the phenomena are incapable of doing so. Darwinists (like alchemists) have misconceived the nature of reality, and thus hitched their wagon to an imaginary horse.’ 
Israeli MIT-trained biophysicist Dr. Lee Spetner inspired the original question as to where the information arose in living beings through his book Not By Chance. He made the following observations about Dawkins’ essay:
Let me coin the word ‘biocosm’ to denote the union of all living organisms at any particular time. Then we can say that the information in the biocosm of today is vastly greater than that in the putative primitive organism.
If Neo-Darwinian theory (NDT) is to account for the evolution of all life, as it claims to, it must account for this vast increase of biocosmic information [which would be needed to transform bacteria into humans].
Since NDT is based on a long series of small steps then, on the average, each step must have added some information.
According to NDT, a step consists of the appearance of random genetic variation acted upon by natural selection. (The randomness is important to NDT to avoid having to invoke some mechanism for the organism’s ‘need’ to induce mutations that are adaptive to it.)
Because the steps in evolution are very small, and because there is supposed to have been a vast amount of evolutionary change, there must have been a very large number of such steps. Likewise, a very large number of steps should have added information to the biocosm.
Mutations provide the raw material from which natural selection chooses. If a single step of mutation followed by natural selection adds information, then the mutation that gets selected must provide an increase in genetic information.
Considering the great sweep of evolution for which NDT claims to account, and considering the huge number of steps that are supposed to have led to that evolution, there must have been a huge number of random mutations that added at least a little information to the biocosm.
Therefore, with all the mutations that have been studied on the molecular level, we should find some that add information.
The fact is that none have been found, and that is why Dawkins cannot give an example. 
Dawkins, and others who postulate that inanimate material can produce life unaided with a necessary constant increase in information, are going to have to face up to the fact that a lot of very smart people are taking an increasingly dim view of what is being presented as ‘fact’ in many textbooks.
It seems fair to point out that evolutionists have yet to provide even a single concrete example of a mutation leading to an increase of information as requested.
[Return to Top]
Part 2: The Concept of Information:
A Bayesian Approach
Evolutionists have been asked, and often ask themselves, how they can justify the assumption of a steady increase in average information content over eons of time as required by their theories. Now, most people have some feeling for what the word information means, but it is not so easy to define. In addition, there are several notions, depending on context, which can confuse the discussion. Relevant for our discussion are concepts of information, which I shall refer to as:
- (I) the Bayesian
- (II) the Shannon
- (III) the Gitt
We will deal with the first one in this part of this essay.
What does Dawkins mean by ‘information’? We will see in this section that (I) is alluded to, and in Part 3 that he uses definition (II) on occasion. Unfortunately, the truly relevant one, (III), is not dealt with at all (see Part 4)!
Dawkins writes (numbering added; emphasis in original):
(a) ‘Let’s estimate, [Shannon suggested], the receiver’s ignorance or uncertainty before receiving the message, and then compare it with the receiver’s remaining ignorance after receiving the message. The quantity of ignorance-reduction is the information content. ...’
(b) ‘But now we come to natural selection, which reduces the "prior uncertainty" and therefore, in Shannon’s sense, contributes information to the gene pool...
(c) ‘Information is what enables the narrowing down from prior uncertainty (the initial range of possibilities) to later certainty (the "successful" choice among the prior probabilities). According to this analogy, natural selection is by definition a process whereby information is fed into the gene pool of the next generation.’
We recognize from the statements above that Bayes’ theorem (see below) [after Thomas Bayes (1702–1761)], used also in modern Decision Theory, is being invoked. Claude Shannon incorporated Bayes’ ideas somewhat in his own work. The word ‘information’ is indeed sometimes used in such a sense.
Unfortunately, Dawkins declines to develop his argument to the point where anyone can quantify and evaluate the plausibility of his explanation of how the complexity of organisms could increase over time. To consider the suitability of his use of prior and posterior probabilities, let’s take a short detour, look at the mathematics involved, and decide whether the case for unguided progress has been advanced.
In Dawkins’ statements above, (b) should correctly say ‘reduces the “posterior uncertainty” compared to the “prior uncertainty”’; his ‘information’ refers to the term P(E|F)/P(E) in equation (1) below (this is the usage of ‘information’ in this mathematical context).
Bayes’ theorem says:
P(F|E) = P(F) (P(E|F)/P(E)) (1)
The symbol | means ‘given that’.
F is a Fact or belief. Usually it refers to a hypothesis under consideration.
P(F) is the prior probability, that is, the probability that F is true or will occur before being provided with additional statements.
E is some Event or Evidence that is generally, but might not be, causally related to F.
P(E) is the probability that E will occur.
P(F|E) is the posterior probability, that is, the probability of event F occurring (or our confidence that F is indeed true) after being told event E occurred.
P(E|F) is the probability that event E will occur should F be true.
The probability that event E will occur, P(E), may be a function of several other factors, Fn. Simplifying by assuming the factors F1, F2, ..., Fn are mutually exclusive and exhaustive,
P(E) = P(E|F1)P(F1) + P(E|F2)P(F2) + ... + P(E|Fn)P(Fn) (2)
Mathematically, the probabilities can be expressed with various distribution functions instead of simple fractions between 0 and 1. We need not get into this level of detail.
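Equations (1) and (2) are simple enough to sketch in a few lines of code. All the probability values below are illustrative placeholders, not figures from this essay; the sketch only shows the mechanics of the update.

```python
# A minimal sketch of Bayes' theorem as given in equations (1) and (2).
# All numbers here are illustrative, not taken from the essay.

def posterior(p_f, p_e_given_f, p_e):
    """Equation (1): P(F|E) = P(F) * P(E|F) / P(E)."""
    return p_f * p_e_given_f / p_e

def total_probability(likelihoods, priors):
    """Equation (2): P(E) = sum over i of P(E|Fi) * P(Fi),
    for mutually exclusive and exhaustive factors F1..Fn."""
    return sum(l * p for l, p in zip(likelihoods, priors))

# Toy case with two mutually exclusive hypotheses F1 and F2:
priors = [0.3, 0.7]                    # P(F1), P(F2)
likelihoods = [0.9, 0.2]               # P(E|F1), P(E|F2)
p_e = total_probability(likelihoods, priors)       # 0.27 + 0.14 = 0.41
print(posterior(priors[0], likelihoods[0], p_e))   # P(F1|E) ~ 0.659
```

Note that P(F1|E) > P(F1): observing E raised our confidence in F1 exactly by the factor P(E|F1)/P(E), which is the quantity at issue in what follows.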
Now, how can E and F be defined, to advance Dawkins’ argument? What values are to be assigned? We are left with nothing a peer group could evaluate. Let’s see if we can help him out a little. We would like things whose values could at least be ‘guesstimated’ to some extent.
F = the probability that a useful protein will arise by chance
E = the event that the organism survives and passes on at least as many offspring as its parents. This is a compound event whose overall probability is composed of several terms as shown in equation (2).
How do we handle P(E|F)/P(E)? As shown in statements (a)–(c), this is what Dawkins labels ‘information’ (what reduces the posterior uncertainty compared to the prior uncertainty). Among the huge number of parameters which affect which organisms pass on their genes, a Neo-Darwinist must argue that natural selection makes the phenomena not quite totally random, and that this leads to a value of P(E|F)/P(E) slightly greater than 1.
Using Sauer’s work, the probability of a protein arising by chance is around P(F) = 1.0×10⁻⁶⁵. Should P(E|F)/P(E) be estimated to be greater than 1, say 1.1 (under normal circumstances an average advantage of 10% due exclusively to the single new protein, in the presence of the other extraneous ‘noise’ survival factors, would represent a massive selective advantage; a typical value suggested is around 1.001, corresponding to a selection coefficient of 0.1%), then from Bayes’ equation (1) we obtain P(F|E) = 1.1×10⁻⁶⁵. That is, the new estimate for the chances of one protein arising by chance is theoretically raised by a minuscule amount, but it is still infinitesimal.
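The arithmetic just described can be checked directly; the two figures (the 10⁻⁶⁵ prior attributed to Sauer and the generous ratio of 1.1) are the ones given in the text.

```python
# Checking the update described above: P(F|E) = P(F) * [P(E|F)/P(E)].
p_f = 1.0e-65        # prior: probability of a useful protein arising by chance
ratio = 1.1          # P(E|F)/P(E), the generously assumed selective edge
p_f_given_e = p_f * ratio
print(p_f_given_e)   # ~1.1e-65: slightly raised, still infinitesimal
```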
That appears to be the gist of Dawkins’ use of the concept of information so far. We cannot guess as to what the new protein does, do not know why it was produced nor whether it is now a step towards something new such as an enzyme.
This answer is highly unsatisfying, since the original question deals with how the know-how to produce complex structures, such as eyes, bone joints, a heart, became encoded on DNA. These involve a vast amount of co-ordination in perfect timing.
Let’s complete the analysis, remaining with this notion of ‘information’, and show why many scientists (I suspect the atheistic evolutionist Professor Gould of Harvard independently stumbled on the same problem) realize that random mutations cannot explain the development of ever more complex organisms. The key is that on average, P(E|F)/P(E) may actually be < 1! This would be a knockout conclusion for Neo-Darwinists.
Let’s reconsider P(E|F)/P(E). How much more likely is it on average that an organism with one additional protein, generated ab initio, with or without more duplicated genes, will survive than a sister exclusively because of that one protein? In the best case, this single protein would become functional concurrent with a drastic change in the environment for which the protein could be of some immediate use. This would offer some measurable advantage. But it becomes ‘just-so’ story-telling to invent such environmental catastrophes so often.
Now, it is questionable whether any mutation can be shown to lead to some kind of improvement without causing deleterious functioning of some processes already encoded on the DNA (this is very different from the question whether one mutation could allow some members to temporarily survive some drastic environmental change). Presumably a very bad mutation leads to death, weeding out such mutated genes from that species’ gene pool forever. Nevertheless, that member with a single new protein, whose offspring will eventually dominate the species population, will inevitably passively carry a large number of slightly defective but not yet deadly genes.
In other words, when I determine that a new protein is present in one or several organisms, I then know that many generations have passed since the protein-building process started, and that a huge number of bad, but individually not yet deadly, mutations have been accumulating. This time bomb may indeed mean that P(E|F)/P(E) on average may actually be < 1 — the chances of survival for a large number of members with one improvement but a huge number of disadvantages could militate against enhanced survival chances!
This is an inevitable consequence of the law of increase in entropy to which all matter is subject in the long run. This genetic load will get worse with an increasing number of generations. By invoking duplicate ‘junk’ genes, Dawkins is merely increasing the potential for more flaws. When told that an organism has a new protein, I know that many generations must have passed since the point where no evidence for that protein existed, and so the current member has inherited a lot of momentarily hidden flaws. Its temporary survival is a curse in disguise for the species as a whole. I therefore suspect that P(E|F)/P(E) would indeed be < 1.
Nevertheless, survival is not the real issue, but rather an increase in information. The penalty for generating a new protein is a degrading of many other functions that have been damaged by all the concurrent mutations not related to producing that protein.
Now, to obtain anything interesting, such as a new organ, far more than a single new protein is needed. Getting two of the right ones which will eventually lead to a new structure, all at once or sequentially, can only occur, if at all, after vastly more generations have passed, accompanied by a vastly greater genetic load. In fact, time becomes the greatest enemy of evolution.
Conclusion: the source of information, even when defined as per Dawkins, remains an intractable problem for evolutionary theory.
[Return to Top]
Part 3: The Concept of Information:
A Shannon Approach
Shannon’s theory of information, while useful in the context of telecommunications, does not seem to help anyone much in the evolution/creation debate. The purpose in spending the effort at all here is to clarify why a richer concept, discussed in Part 4, becomes necessary.
Building on ideas Shannon developed in 1948, Dawkins expresses (here in summarized form) the key ingredients of his view of information.
‘Redundancy was a second technical term introduced by Shannon, as the inverse of information. ...Redundancy is any part of a message that is not informative, either because the recipient already knows it (is not surprised by it) or because it duplicates other parts of the message. ...
‘Note that Shannon’s definition of the quantity of information is independent of whether it is true. The measure he came up with was ingenious and intuitively satisfying. Let’s estimate, he suggested, the receiver’s ignorance or uncertainty before receiving the message, and then compare it with the receiver’s remaining ignorance after receiving the message. The quantity of ignorance-reduction is the information content. Shannon’s unit of information is the bit, short for ‘binary digit’. One bit is defined as the amount of information needed to halve the receiver’s prior uncertainty, however great that prior uncertainty was ...’ 
‘In practice, you first have to find a way of measuring the prior uncertainty – that which is reduced by the information when it comes. For particular kinds of simple message, this is easily done in terms of probabilities. ...
‘In a message that is totally free of redundancy, after there’s been an error there is no means of reconstructing what was intended. Computer codes often incorporate deliberately redundant ‘parity bits’ to aid in error detection. DNA, too, has various error-correcting procedures which depend upon redundancy. ...
‘DNA carries information in a very computer-like way, and we can measure the genome’s capacity in bits too, if we wish. DNA doesn’t use a binary code, but a quaternary one. Whereas the unit of information in the computer is a 1 or a 0, the unit in DNA can be T, A, C or G. ...
‘Whenever prior uncertainty of recipient can be expressed as a number of equiprobable alternatives N, the information content of a message which narrows those alternatives down to one is log₂N (the power to which 2 must be raised in order to yield the number of alternatives N). ...
‘When the prior uncertainty is some mixture of alternatives that are not equiprobable, Shannon’s formula becomes a slightly more elaborate weighted average, but it is essentially similar.’ 
‘The true information content is what’s left when the redundancy has been compressed out of the message’ 
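The two formulas quoted above, log₂N for equiprobable alternatives and the ‘slightly more elaborate weighted average’ for unequal ones, can be sketched as follows; the probability values used are illustrative.

```python
import math

def info_equiprobable(n):
    """Bits needed to single out one of n equally likely alternatives: log2(n)."""
    return math.log2(n)

def shannon_entropy(probs):
    """Shannon's 'weighted average' for unequal alternatives:
    H = -sum(p * log2(p))."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(info_equiprobable(2))          # 1.0 bit  (e.g. girl/boy)
print(info_equiprobable(4))          # 2.0 bits (one DNA letter: T, A, C or G)
print(shannon_entropy([0.5, 0.5]))   # 1.0 -- agrees with log2(2) when equal
print(shannon_entropy([0.9, 0.1]))   # ~0.47 bits for a biased choice
```

The last line illustrates why the weighted average is needed: a heavily biased choice resolves less uncertainty than an even one.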
Testing the Relevance of this Definition of Information
Experiment #1. Tell me about the information content of the following message:
Experiment #2. In each of the following pairs, which member contains the most information?
11010010101 or 1001101
E=mc² or the big brown dog which
Experiment #3. Tell me about the information content of the following message:
We see we are in trouble. Every attempt at an answer seems to start with, ‘It depends’. Let’s give some thought to these three experiments.
In experiment #1, the bits could represent characters in a computer’s extended ASCII code or a whole sentence in a secret agent’s code look-up book. What did I actually intend by this bit sequence? The first 4 bits represent 1 out of 16 books (2⁴; we’ll let 0000 represent book 0, the first one) I have agreed to in advance with my secret agent in Bolivia; the next 8 bits represent a page number, between 1 and 256 (2⁸; 00000000 represents the first page); the final 4 bits represent a sentence on that page. In case you are interested, book number 15 was Sun Tzu’s ‘The Art of War’, page 146, sentence 3, and the intended message was: ‘Expendable agents are those of our own spies who are deliberately given fabricated information.’
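The 4 + 8 + 4 bit budget described above can be decoded mechanically. The bit string below is hypothetical, since the essay does not give the actual sequence; it simply encodes the stated example of book 15, page 146, sentence 3 (with sentences assumed to be numbered from 1, like pages).

```python
# Decoding a 16-bit message under the scheme described in the text:
# 4 bits -> book (2**4 = 16 choices, 0000 = book 0),
# 8 bits -> page (2**8 = 256 choices, 00000000 = page 1),
# 4 bits -> sentence on that page (assumed: 0000 = sentence 1).

def decode(bits):
    assert len(bits) == 16
    book = int(bits[:4], 2)            # 0..15
    page = int(bits[4:12], 2) + 1      # 1..256
    sentence = int(bits[12:], 2) + 1   # 1..16
    return book, page, sentence

# Book 15 (1111), page 146 (index 145 = 10010001), sentence 3 (index 2 = 0010):
print(decode("1111" + "10010001" + "0010"))   # (15, 146, 3)
```

The point stands regardless of the decoding details: without the pre-agreed look-up scheme, the 16 bits carry nothing.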
In experiment #2 we cannot choose between the members of each pair without knowing what the intended meaning was or how it was coded. The short code, E=mc², provides a huge amount of information and causes a great amount of surprise. This information content cannot be captured by Dawkins’ concept of information.
The three letters in experiment #3 might have several meanings. Mine was ‘Bishop to e6’ (= ‘Bishop to [White’s] King 6’, in the obsolete descriptive notation). Its information value depends on the particular positions of the other chess pieces. Shannon might have argued that the number of potential moves available reflects the space of possibilities, and that the densest communication of the move chosen can be represented through bits of 0 and 1. This is not very helpful, nor does it reflect the usual meaning of information in this context, or mine. In this case the total information is what I can now surmise, after receiving the message, about the intentions of my chess opponent, and this depends on the context. Some possible conclusions might be:
- ‘What a dummy. That move makes no sense, he doesn’t have a clue about what I am up to. He’s dead meat.’
- ‘Aha, that closes off the last escape route for my king. Given the layout of the whole chess board, his strategy seems to be a direct attack on the big guy.’
- ‘Oh no, he just opened my queen to attack by his rook and simultaneously attacked my unprotected knight. Time to bump the table over.’
The definition of information in terms of binary bits across a communication channel has been analyzed and evaluated as being of use in only some limited contexts by many information theorists. This includes Professor Gitt (see Part 4), who then developed a detailed theory involving sender and receiver pairs, which indeed allows us to identify how the three experiments outlined above can be handled. These ideas will be discussed later.
Let’s consider an example Dawkins offers, based on Shannon’s theory:
‘An expectant father watches the Caesarian birth of his child through a window into the operating theatre. He can’t see any details, so a nurse has agreed to hold up a pink card if it is a girl, blue for a boy. How much information is conveyed when, say, the nurse flourishes the pink card to the delighted father? The answer is one bit – the prior uncertainty is halved.’
Is this a persuasive notion? There are several weaknesses. The blue card is known in advance not to refer to the color of the baby’s face. A huge amount of knowledge must be assumed in advance between the parties before the communication can take place. Now, suppose the nurse shows up quickly with a pink card. The father concludes several things, with various levels of certainty:
The baby is probably healthy, since the nurse would hardly bother telling him about a dead girl.
With far above 50% likelihood the nurse is correct that it is a girl: hopefully the card was not intended for another party with a different meaning; presumably the nurse has not forgotten the agreed code, nor does she wish to play a mean joke on him.
The baby is very likely 100% girl and does not possess parts from both sexes, since this possibility has not been anticipated in developing the code. A card held up suggests such possibilities are unlikely, since the sender would otherwise not know how to react.
If the nurse does not show up within 48 hours, in high likelihood things have not gone well.
To understand how much information transfer between sender and receiver is occurring it would seem that what is encoded in the message alone is only part of the picture. There are cases where the receiver benefits from a multiplier effect when the transmitted information is augmented with existing knowledge on the part of the receiver.
Suppose military headquarters is looking for a volunteer for a dangerous mission. A candidate is found, and the message is sent that ‘Candidate X can do 24 push-ups’. Suppose the receiver of this information knows that the elite troop consists of females who can do between 15 and 26 push-ups, and males who can do between 50 and 110. Then, in addition to a rough estimate of the physical strength of the candidate, the receiver now also knows the candidate’s sex. Assume that the original recruitment criteria for women required them all to have IQs above 110, and that it was known that only 3 women could actually do 24 or more push-ups, all of them black. Then the amount of true information available to the receiver may now be greater than that transmitted by the message, and indeed greater than that available to the sender! This concept is not captured effectively by Shannon’s formula for maximum information transmission but is readily grasped by the average speaker.
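The ‘multiplier effect’ can be illustrated in a few lines: the receiver combines the transmitted number with background knowledge and ends up knowing more than the message itself carried. The push-up ranges are those given above; the function and variable names are mine.

```python
# Background knowledge held by the receiver (ranges from the text):
FEMALE_PUSHUPS = range(15, 27)   # females: 15..26 push-ups
MALE_PUSHUPS = range(50, 111)    # males: 50..110 push-ups

def infer_sex(pushups):
    """Combine the transmitted number with the receiver's prior knowledge."""
    if pushups in FEMALE_PUSHUPS and pushups not in MALE_PUSHUPS:
        return "female"
    if pushups in MALE_PUSHUPS and pushups not in FEMALE_PUSHUPS:
        return "male"
    return "unknown"

# The message 'Candidate X can do 24 push-ups' yields the sex as a bonus:
print(infer_sex(24))   # female
```

Nothing in the message itself says ‘female’; the extra conclusion exists only because sender and receiver share background knowledge the message never carried.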
Here’s another example from Dawkins:
‘The great biologist J B S Haldane used Shannon’s theory to compute the number of bits of information conveyed by a worker bee to her hive-mates when she ‘dances’ the location of a food source (about 3 bits to tell about the direction of the food and another 3 bits for the distance of the food).’ 
This is also very unconvincing. The amount of information is not defined exclusively by the message, but what can be assumed as additional knowledge on the part of the sender and receiver. The sender bee decides when the receiver is ready to receive the message. The intensity of ‘wagging’ reflects the sender bee’s opinion as to the size of the food source available, and she apparently decides how often the message must be repeated before the content is understood and memorized. The sender can assume that flight adjustments by the receiver (around this bush, over that tree, away from that wasp nest) will be made. The return path does not need to be explained. Although the whole communication appears to be instinctive, somewhere along the line additional knowledge had to be built in. The sender or receiver must ‘decide’ whether it is worthwhile sending the colleagues that distance for that particular quantity of food, and whether the journey to and fro should be made now or whether darkness will prevent a successful completion of the mission.
The amount of information that needs to be coded depends on the known resources assumed to be available to the receiver. If the bee decided to instruct a friendly ant to collect the same pollen a very different kind of message would need to be used with a very different minimum content, whose length cannot be calculated simply by Shannon’s equation. The message length required to ensure the intention can be realized is a function of pre-existing understanding between both parties. All sender-receiver members need to be on the same ‘wavelength’ before it is possible to determine what needs to be transmitted in the coded message.
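For completeness, Haldane’s figures can be restated in Shannon’s own terms: 3 bits suffice to single out one of 2³ = 8 equally likely alternatives, so the dance would distinguish roughly 8 direction classes and 8 distance classes. A sketch (the class counts are my inference from the bit figures, not stated in the text):

```python
import math

def bits_needed(n_alternatives):
    """Bits required to single out one of n equally likely alternatives."""
    return math.ceil(math.log2(n_alternatives))

# ~3 bits each for direction and distance implies about 8 classes of each:
print(bits_needed(8))       # 3 bits for direction
print(bits_needed(8) * 2)   # 6 bits in total, matching Haldane's estimate
```

This counting exercise is exactly what the surrounding paragraphs argue is inadequate: it says nothing about the shared background the bees must already possess.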
In general, information theory, as discussed in Part 4, is based on sender-receiver notions which assume the sender can intelligently or instinctively evaluate the needs of the receiver and act accordingly. In speech, the loudness and speed of the voice can be estimated and adjusted to facilitate understanding by the receiver, without excessive waste. It is not clear how Shannon’s digital concept of maximum information content deals with such analogue subtleties.
Now, we have not yet developed a useful definition of information, but let’s continue with Dawkins’ ideas.
‘... [W]e can make an approximate estimation of the information contents of the two bodies as follows. Imagine writing a book describing the lobster. Now write another book describing the millipede down to the same level of detail. Divide the word-count in one book by the word-count in the other, and you have an approximate estimate of the relative information content of lobster and millipede.’ 
This looks more promising. Information content in a comparative sense can be grasped.
Let’s see whether the suggestion, in this form, is satisfactory. I have two bottles, one of which contains a kilogram of benzene (C₆H₆) and the other a kilogram of fluid polyethylene ((CH₂)ₙ). I now proceed to write a book with all the properties of both substances: I describe various spectroscopic observations such as the infrared (IR) peaks, mass spectra (MS), nuclear magnetic resonance (NMR), and so on. Benzene, given its high symmetry (a regular hexagon), is immediately shown to be incomparably simpler. In fact, the proton NMR for benzene shows a single peak, whereas for polyethylene the peaks seem uncountable and totally inseparable. We continue with rheological properties, such as viscosity as a function of temperature. Then we describe the distillation behavior, followed by an analysis by gas chromatography (GC) and liquid chromatography (LC). Once again, our polyethylene sample is shown to be incomparably more complex.
Now, let’s consider how both materials could be generated, and we write a detailed book for each. In both cases we start with a very simple molecule, ethane (which can be converted into ethylene and other more interesting compounds). We describe the chemical steps needed, including the exact processing details, which include reaction temperatures and in the case of benzene, distillation at a given point in time, to force ethylene to generate either benzene or polyethylene. The result? Benzene is actually far more difficult to synthesize! The information needed to generate the simpler material is greater than for the more complex.
Since DNA must encode the information to drive every step along the pathway from fertilized egg to adult, a description of the final product alone, i.e., the mature organism, is an insufficient criterion to compare information content.
A simpler example would be to compare two chemical molecules derived from a benzene ring. Each has two substituents in the 1 and 4 positions (i.e. opposite each other – the para isomer). The first compound uses two methyl (CH₃) groups (each of which consists of four atoms) as substituents, whereas the second uses two fluorines (each a single atom). The relative book sizes of description and synthetic preparation are once again in the opposite direction.
Now, this is not nit-picking. I am attempting to suggest a word of caution. Looking only at the physical genome of the organism may not capture the total information picture actually present very well. The Designer understands the ecosystem the organism will be involved in. The average number of offspring can be optimized to compensate for survival changes, and nutritional needs can be provided by the genomes present in other organisms.
A second observation is that DNA also stores information for contingencies that may or may not arise, and this is not reflected in the physical description of the final organism.
Surprisingly, Dawkins may have suspected difficulties such as those mentioned above because he candidly tells us:
‘The great evolutionary biologist George C Williams has pointed out that animals with complicated life cycles need to code for the development of all stages in the life cycle, but they only have one genome with which to do so. A butterfly’s genome has to hold the complete information needed for building a caterpillar as well as a butterfly. A sheep liver fluke has six distinct stages in its life cycle, each specialized for a different way of life.’ 
We must now wonder just what it is Dawkins is trying to communicate. The information coded must also include that which is necessary to guide every step of the individual stages along and to provide for contingencies such as disease or temperature changes.
Let’s look more closely at this problem. Dr. Jonathan Wells and Dr. Paul Nelson offer an example in which virtually indistinguishable organisms are produced, but much more information is required in one case to guide the individual steps to get there.
‘Most frogs begin life as swimming tadpoles, and only later metamorphose into four-legged animals. There are many species of frogs, however, which bypass the larval stage and develop directly. Remarkably, the adults of some of these direct developers are almost indistinguishable from the adults of sister species that develop indirectly. In other words, very similar frogs can be produced by direct and indirect development, even though the pathways are obviously radically different. The same phenomenon is common among sea urchins and ascidians.’ 
The same principle is found between species:
‘Similar features are often produced by very different developmental pathways. No one doubts that the gut is homologous throughout the vertebrates, yet the gut forms from different embryonic cells in different vertebrates. The neural tube, embryonic precursor of the spinal cord, is regarded as homologous throughout the chordates, yet in some its formation depends on induction by the underlying notochord while in others it does not. Indeed, as developmental biologist Pere Alberch noted in 1985, it is “the rule rather than the exception” that “homologous structures form from distinctly dissimilar initial states.”’
Origin vs. Transmission of Information
Transmission of information appears sometimes to be confused with its origin. Consider two simple systems which ‘carry’ information.
(i) a car battery
(ii) a computer algorithm
To create such systems requires a deep understanding of natural phenomena, so that a goal can be met and the solution optimized. Once the intellectual work has been carried out, the knowledge hidden behind each could be stolen and duplicated without any need to understand how or why the system works. The information itself is duplicated and retained on a physical medium. But such information is not intrinsic to the matter itself and cannot be understood by knowing its properties.
Also, for matter organized as in examples (i) and (ii) above to perform an intended goal, additional physical components are necessary, such as computer hardware or an engine. These are anticipated and understood by the creator of the information system. In these senses I argue that assessing the total quantity of information content often requires a broader view than merely looking at the carefully arranged matter.
Dr. Kofahl provides an interesting example that leads one to question whether Shannon’s notion of information, transmitted as a message, captures the essential issue:
‘One mystery is how one virus has DNA which codes for more proteins than it has space to store the necessary coded information.
‘The mystery arose when scientists counted the number of three-letter codons in the DNA of the virus, ΦX174. They found that the proteins produced by the virus required many more code words than the DNA in the chromosome contains. How could this be? Careful research revealed the amazing answer. A portion of a chain of code letters in the gene, say -A-C-T-G-T-C-C-A-G-, could contain three three-letter genetic words as follows: -A-C-T*G-T-C*C-A-G-. But if the reading frame is shifted to the right one or two letters, two other genetic words are found in the middle of this portion, as follows: -A*C-T-G*T-C-C*A-G- and -A-C*T-G-T*C-C-A*G-. And this is just what the virus does. A string of 390 code letters in its DNA is read in two different reading frames to get two different proteins from the same portion of DNA. Could this have happened by chance? Try to compose an English sentence of 390 letters from which you can get another good sentence by shifting the framing of the words one letter to the right. It simply can’t be done. The probability of getting sense is effectively zero.’
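The frame-shifted reading Kofahl describes is easy to make concrete. The short sketch below splits the toy fragment from the quote (not the actual viral sequence) into codons in each of the three reading frames:

```python
def codons(seq: str, frame: int) -> list[str]:
    """Split a DNA string into 3-letter codons, starting at offset `frame` (0, 1 or 2)."""
    shifted = seq[frame:]
    return [shifted[i:i + 3] for i in range(0, len(shifted) - 2, 3)]

# Toy fragment from the quoted example (not a real gene).
fragment = "ACTGTCCAG"

print(codons(fragment, 0))  # ['ACT', 'GTC', 'CAG']
print(codons(fragment, 1))  # ['CTG', 'TCC']
print(codons(fragment, 2))  # ['TGT', 'CCA']
```

Each shift of the reading frame yields an entirely different set of code words from the same letters, which is the point of the quoted example.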
Dawkins is aware of this, but provides no materialistic explanation for its origin. The total information prepared in the above genome by the sender (God) presupposes co-ordination with the receiver as to how the message is to be processed. Two schemes, of identical message lengths, could allow either one or two proteins to be generated from the same DNA sequence. In each gene there is no redundancy, yet one provides twice as much information about the protein(s) to be generated as the other does.
Dembski has argued in a mathematically rigorous way that what he calls Complex Specified Information (CSI) cannot arise by natural causes:
‘Natural causes are in-principle incapable of explaining the origin of CSI. To be sure, natural causes can explain the flow of CSI, being ideally suited for transmitting already existing CSI. What natural causes cannot do, however, is originate CSI. This strong proscriptive claim, that natural causes can only transmit CSI but never originate it, I call the Law of Conservation of Information. It is this law that gives definite scientific content to the claim that CSI is intelligently caused.’ 
Why does change ever occur in the sense of microevolution? Random variation, leading to small fluctuations among existing genes, is fully compatible with our view that God created unique and fully functional plant and animal categories which are to ‘reproduce after their kind’. Before Darwin’s time, natural selection was viewed as a method of culling members of a population which were no longer as well adjusted to the environment as the norm; it is an information-removing process.
Conclusion. The question as to the origin of information necessary to develop greater complexity and to guide an organism’s development has not been answered by Prof. Dawkins. A discussion of Shannon’s notions is not the same as providing an example as requested.
Part 4: The Concept of Information (the Gitt Approach)
The key intuition is that some knowledge is ‘pressed’ on to a physical medium (matter or energy), the intellectual content of which was prepared by an original sender and after a time lapse a final receiver will decode the message and use it.
Occasionally messages are accepted and passed on via several sender/receiver pairs. I am calling the final receiver, or its substitute, the intended target, for whom the message was generated in the first place.
Here are some principles about coded information, drawn largely from Prof. Gitt’s theory, which should facilitate future discussions of how nature is able to perform in manners seemingly inconsistent with known mechanistic and probabilistic processes.
(a) Information is more than the physical coding used to represent it. The sender and receiver must agree in advance on conventions to represent whatever is to be communicated in the future.
(b) Information exchange requires that the frame of reference or context be agreed to in advance.
(c) Random processes cannot generate coded information; rather, they only reflect the underlying mechanistic and probabilistic properties of the components which created that physical arrangement.
(d) Information efficiency may be denser than implied by Shannon’s log2(n) equation, since a common basis of understanding exists between sender and receiver, often allowing implications with various degrees of certainty to be assumed by both parties, in addition to the raw data of the message.
(e) In addition to the data encoded in the physical message, the intention of the original sender must be considered. An encoding system can be devised to ensure transmission accuracy or to avoid understanding by an unwanted party.
(f) A message allows information to survive over time. Assuming that the physical medium is not destroyed, there is some flexibility as to when the receiver can interpret the information.
(g) The underlying meaning of coded information is external to the mere nature and properties of the sender.
(h) The physical medium upon which a message is encoded is subject to physical laws such as a natural trend towards increased entropy in the long run (and thereby loss of encoded information which is dependent on a physical medium).
(i) Information content of messages is more easily quantified in a comparative than absolute sense.
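For reference, the Shannon measure mentioned in the principles above, log2(n) bits for one choice among n equally likely symbols, can be computed directly. This minimal sketch only illustrates the raw-data measure that the point about denser information claims to go beyond:

```python
import math

def shannon_bits(n: int) -> float:
    """Bits needed to specify one of n equally likely alternatives: log2(n)."""
    return math.log2(n)

# Four DNA bases: 2 bits per base, so a 3-base codon carries 6 bits of raw data.
print(shannon_bits(4))              # 2.0
print(3 * shannon_bits(4))          # 6.0
# 26 letters of the English alphabet: about 4.7 bits per letter.
print(round(shannon_bits(26), 2))   # 4.7
```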
My suggestion (f) that information, encoded in the form of a physical message, can be used to bridge a time span to communicate, and that this time can be variable, offers useful insights. When certain bacteria penetrate our bodies, an immune reaction is activated. This suggests that the necessary information was already present telling the body how to act. The sender (immune system) responds to an environmental stimulus, generates the necessary message, whose conventions and frame of reference had already been anticipated, and the receiver can take appropriate action. The necessary machinery was prepared by information stored in the DNA. What we see here is a complex chain of sender/receiver members which are able to respond to an external stimulus.
The activation of a message could occur a short time after the egg has been fertilized, or might never occur should the need never arise. But the infrastructure is in place.
Consider point (g). Say we are told the exact point where a billiard ball impacted a second one and are also told the velocity and direction of the deflected ball. Do we have enough ‘information’ to calculate the speed and direction of the first ball? This sense of ‘information’, sometimes used in mathematics, is not the one we are discussing. The feedback provided in this example is inherent to the properties and state of the first ball and its immediate environment, such as the friction of the table.
True information encoded in a message looks quite different. Human speech can communicate an intention which has no relationship to the physical nature of the sender or of the transmission medium. DNA can communicate how organs are to be developed, step by step, or how body temperature is to be regulated in the future. These messages are more than mechanistic outcomes based on the nature or state of the sender.
Consider point (i). Suppose a chemistry teacher shows us a bottle of a pure chemical compound and offers us a choice of knowing: the melting point; heat capacity; or infrared spectra. Based on what we may already know about that sample, the three choices would offer differing amounts of information.
Note that it is now easy to evaluate experiments 1 to 3 from Part 3. Information is far more than the coded message. It requires an understanding of what the sender and receiver already know and can do with the message. The relationship E = mc² has very deep information content for someone already possessing the necessary mathematical and physical knowledge who also knows what the letters in the equation represent.
Types of Sender and Receiver
(a) Intelligent Sender and Intelligent Receiver
Clearly intelligent sender/receiver pairs exist, such as people. The path between the sender and final target can, of course, involve intermediate sender/receiver pairs. In addition, the message can be received and re-coded in various manners, preserving all or most of the original intended information. Examples include the use of human translators or transmission across various media (voice → radio waves → tape recorder → paper → computer diskette).
(b) Intelligent Sender and Non-Intelligent Receiver
Can an intelligent sender communicate with a non-intelligent receiver? Sure. Humans can interact with computers, for example. The sender transmits a database query and the result is sent back. The exchange can be interactive, such as working with a computer expert system. Of course the message encoding (computer language) and additional infrastructure (hardware and communications devices) needs to be set up in advance by an intelligent agent.
(c) Non-Intelligent Sender and Intelligent Receiver
Can a non-intelligent sender/receiver pair or sequence of pairs occur? Certainly. Automated production equipment can rely on a controller, which sends messages to on-line measuring devices to ensure the process is running as desired and corrective action can be taken. Once again, this can only function if an intelligent agent, who knows the purpose of the system, sets up the whole arrangement. The sender must be able to monitor the environment and interpret some kind of a signal. The non-intelligent sender must then be able to automatically generate a message (e.g., ‘the pressure is rising’), which the receiver will be able to process (‘slow down the feed rate of X, increase the flow of cooling water, and send an alarm to Mrs Smith’).
(d) Non-Intelligent Sender and Non-Intelligent Receiver
Now let’s consider an absolute extreme case. The sender and receiver can only react mechanically. Suppose the set-up must be fully automatic, meaning that should the sender or receiver be destroyed, a substitute has already been provided.
Compared to all the alternatives, this one requires the greatest intelligence from the agent who designed the system. Eventualities need to be anticipated, and all resources for repair and energy need to be prepared in advance. Do we find anything so enormously complex? Yes: it is called life!
Careful analysis shows again and again that the process (sender codes a message → receiver decodes and uses the intended information) does not arise without the active involvement of a living intelligence at some point. This has been systematically analyzed by Professor Gitt, who showed that coded information cannot arise by chance. Coded information obeys fundamental laws of nature, which in summarized form can be expressed as follows:
Professor Gitt’s Universal Laws for Information
It is impossible to set up, store, or transmit information without using a code.
It is impossible to have a code apart from a free and deliberate convention.
It is impossible to have information without a sender.
It is impossible that information can exist without having had a mental source.
It is impossible for information to exist without having been established voluntarily by a free will.
It is impossible for information to exist without all five hierarchical levels: statistics, syntax, semantics, pragmatics, and apobetics [the purpose for which the information is intended, from the Greek apobeinon = result, success, conclusion].
It is impossible that information can originate in statistical processes.
Gitt’s book, which has been published in several languages, develops these principles in great depth. The inviolability of these laws has been accepted in numerous university discussions and conferences, irrespective of the participants’ commitment to evolution or creation (the most determined attempt I witnessed at trying to find a loophole, unsuccessful by the way, occurred at a creationist computer science conference in Hagen, Germany, in 1997). Like any proposed law of nature, a single exception would suffice to disprove it.
Re-examination of Information Present in Nature
In a nutshell, evolutionary theory postulates that living organisms arose from inanimate matter, and complex organisms from simpler ones. An intelligent designer is excluded from any consideration. This implies an increase in functional order, in information. But where does this information come from?
By information, the average person might mean something like, ‘How does the body know what to do?’ Implied is an intuition that an Intelligent Agent has coded the required know-how to drive the necessary chemical processes correctly. Indeed, the Bible states in passages such as in John 1, that intelligence, will and purpose preceded and were instrumental in creating the physical universe.
The sender/receiver scheme par excellence is found in the DNA encoding system. The intended outcome requires that the receiver use the message to produce proteins, which then act in a variety of manners to allow or hinder formation of additional proteins or to create enzymes or tissue necessary for the organism. The message leads to far more than a reflexive or mechanistic outcome. The genes on DNA store intention as to a complicated final goal to be met. These goals are, in Dembski’s parlance, ‘specified’: matter is thereby organized so as to perform clearly recognized actions necessary to support life. There is every reason to believe that these messages were not generated at random, with some happening to be favored (‘selected’ according to Neo-Darwinian theory) and simply propagated throughout a population.
Radically Different Pathways Can Lead to the Same Outcome
Clearly, instructions as to what must be done to attain a useful outcome have been encoded in all living organisms, and these drive physical processes to a target outcome. The message (genes) may look very different, but the same or a very similar target outcome (homology) is still met.
That homology poses a severe problem for evolutionary theories has been widely recognized. Spetner points out that the steps which build up to macroevolution must fulfil two conditions:
They must be able to be part of a long series in which the mutation in each step is adaptive.
The mutations must, on the average, add a little information to the genome.
In addition, a sufficiently large number of useful mutations must be available on the genome at each step to allow the mutants a chance to propagate the DNA copying error throughout the population. About 1 million useful changes in a base must be available at each step, or a useful mutation will not be able to spread throughout the population on strictly probabilistic grounds. His detailed calculations then illustrate how hopeless it is to expect the same complex feature to reappear, given the incomparably greater chances of other survival-enhancing alternatives which must be present for the Neo-Darwinist evolutionary model to be consistent. Since hemoglobin appears in beans (or rather, in the bacteria in the nodules) and in buffaloes, and no one claims one descended from the other, the alternative is that similar genes to produce hemoglobin must have arisen by random chance again and again, against impossible odds. This is in spite of the fact that the intermediate steps could not possibly be useful until hemoglobin and the accompanying apparatus were functional.
In his 1971 monograph, Homology, An Unsolved Problem, Sir Gavin de Beer, a staunch evolutionist and one of the truly great embryologists of this century, posed a question for evolutionary theory which is still unanswered:
‘But if it is true that through the genetic code, genes code for enzymes that synthesize proteins which are responsible (in a manner still unknown in embryology) for the differentiation of the various parts in their normal manner, what mechanism can it be that results in the production of homologous organs, the same ‘patterns,’ in spite of their not being controlled by the same genes? I asked this question in 1938, and it has not been answered.’ 
Let’s go a little deeper to see the magnitude of the problem. Biologist Brian Goodwin noted that ‘genes are responsible for determining which molecules an organism can produce,’ but ‘the molecular composition of organisms does not, in general, determine their form.’ 
Criticizing the notion of genetic programs, H.F. Nijhout concluded:
‘The only strictly correct view of the function of genes is that they supply cells, and ultimately organisms, with chemical materials.’
These chemical materials, proteins, can be used in a complex variety of ways, including interacting with the genome to aid the formation of additional kinds of proteins which otherwise would not form. There is not a simple direct causal relationship between one or several genes and a specific biological activity or structure. Sydney Brenner recognized this problem when he realized that the information required to specify the neural connections of even a simple worm far exceeds the information content of its DNA. The complete process, including regulatory activities such as accelerating or preventing the formation of additional proteins according to need, strongly suggests that a genome processes information a priori as to final outcomes. This allows vastly different pathways to still lead to the same or very similar outcomes.
It must not be forgotten that very similar genes can also result in different outcomes.
Very Similar Genes Can Lead to Very Different Outcomes
There are numerous examples, such as: ‘although mice and flies share a similar gene which affects eye development (eyeless), the fly’s multifaceted eye is profoundly different from a mouse’s camera-like eye.’
It is hard to avoid the conclusion that DNA encodes messages, which imply a deeper understanding of the down-stream process which will result.
It is legitimate to ask where the information, as coded in DNA, arose from in the first place. As in all sender-receiver information transfer systems:
The sender or builder of the sender must know what needs to go into the message.
For communication to function effectively, what the receiver is already aware of or can do with the information needs to be known by the sender or its builder, to ensure the coded message passes on the complete intention.
Basically, the evolutionists would like us to believe that the messages can be altered without consultation between the sender and receiver. Random noise is added to the message, or parts of valuable information are excluded (mutations), and one then hopes for the best (selection).
Can Information Content be Increased Using Additional Random Noise Plus Selection?
As already mentioned, persuasive examples of mutations producing a species’ long-term improvement are virtually, possibly even totally, unknown. The smaller the change, the smaller the selective advantage (denoted by the symbol s). But the smaller the selective advantage, the more likely that random effects (e.g. genetic drift) will eliminate it; its probability of survival is about 2s. Consider a mutation with a selection coefficient s = 0.001 or 0.1%, a supposedly typical value, i.e. the number of surviving offspring is 0.1% greater for organisms with the mutation than without it. This mutation has only one chance in 500 of surviving, even though it is beneficial.
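The arithmetic behind the one-in-500 figure can be checked directly, using the stated approximation (Haldane’s classic result) that a beneficial mutation’s probability of ultimate survival is about 2s:

```python
def survival_probability(s: float) -> float:
    """Approximate survival (fixation) chance of a beneficial mutation: about 2s."""
    return 2 * s

s = 0.001                      # a 0.1% selective advantage
p = survival_probability(s)    # 0.002
print(p, "=> one chance in", round(1 / p))  # 0.002 => one chance in 500
```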
Here is an analogy to see how Neo-Darwinism is supposed to work. The example is not perfect but allows us to easily grasp how counter-intuitive the evolutionary model is.
Say all Shakespeare’s writings are copied onto A4-size sheets of paper and a large number of photocopies are made. Some of the sheets of paper get fed in askew, others get jammed, sometimes two stick together causing the top sheet to be skipped.
A machine discards the copies which are blatantly flawed, that is, natural selection allows some of the members from the next generation to be removed from circulation. Copies that are still somewhat legible by a control machine are retained.
Since we don’t want a huge number of improved works by Shakespeare (the population size remains fairly constant at around n members), we withdraw from the surviving copies a subset of size n. A key fact we must not overlook is that none of the copies is a precisely 100% perfect replica of the original. They are merely not so desperately flawed that they cannot ‘survive’. Each contains slight, but not yet deadly, flaws.
We repeat the copying process many times, introducing more and more small flaws with each step. Miraculously, on the rarest of occasions, a single letter in one of the copies shows up different in a way which makes the copy ‘better’ in the opinion of the control machine (which does not know what the final masterpiece is going to look like). Perhaps a crease makes a ‘g’ look like an ‘o’ when photocopied.
However, our machine can only detect or select for subsequent duplication 0.5% of these fortunate accidents. The n copies selected for further copying include on average 0.5% of the ‘good’ changes. It is important not to overlook that these ‘improved’ portions of the original are different in every case among the sample of size n. The ‘g’ which now looks like an ‘o’ virtually never shows up in the same position for these 0.5% ‘improved’ copies.
We repeat the process again and again. Statistically speaking, ‘good’ changes virtually never occur to photocopies which already have some improvements. And should any copies show several good changes, they will be leading in totally different directions! (The control machine has no goal.)
If we increase the number of copies, n, we end up with more potential improvements but a decreased chance that any one of them will ever propagate throughout all n copies after several iterations.
Over the millions of generations, the small flaws have been accumulating until nothing is legible any more. Even should a new masterpiece arise, the photocopier does not stop working but works merrily away, introducing more and more small flaws.
Do you believe this process will produce better literature?
You may object that according to the above reasoning, no classes of organisms should survive tens of millions of years. You are right, of course.
Here’s another thought: could one start with a few pages of a work by Shakespeare and hope that somehow the missing ones will be generated by copying errors of those available? Some would suggest the photocopier could first produce some duplicate identical pages (genes), and some will remain unchanged while the others will undergo good accidents leading to new thoughts or information. This is unreasonable.
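The photocopier analogy above can be turned into a toy harness for experimenting with copy-error accumulation under blind selection. All parameters here (error rate, population size, number of copies per original) are arbitrary illustrations, and scoring ‘legibility’ against the original text is a simplification, since the analogy’s control machine has no such reference; this is a sketch to play with, not a model:

```python
import random

random.seed(1)  # reproducible toy run

ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def copy_with_errors(text: str, error_rate: float) -> str:
    """One photocopy pass: each character has a small chance of being garbled."""
    return "".join(random.choice(ALPHABET) if random.random() < error_rate else ch
                   for ch in text)

def legibility(copy: str, original: str) -> float:
    """Stand-in for the 'control machine': fraction of characters still matching."""
    return sum(a == b for a, b in zip(copy, original)) / len(original)

original = "to be or not to be that is the question " * 5
n = 20
population = [original] * n

for generation in range(100):
    # Each copy is photocopied three times; the n most legible copies survive.
    offspring = [copy_with_errors(c, 0.002) for c in population for _ in range(3)]
    offspring.sort(key=lambda c: legibility(c, original), reverse=True)
    population = offspring[:n]

print("best legibility after 100 generations:",
      round(legibility(population[0], original), 3))
```

Varying the error rate, the selection strictness, or the scoring rule shows how sensitive the outcome is to the assumptions built into the set-up.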
Can’t Organisms Somehow Start Simple and Add Complexity?
It is my conviction that the information for critical bodily functions, such as a heart or brain in mammals, must be present and functional immediately for that class of organism to survive. Suppose our body lacked the CFTR gene (or it were not yet functional), which produces a trans-membrane protein that regulates chloride ion transport across the cell membrane. Or suppose that it were missing the RB gene on the 13th chromosome, whose job it is to identify abnormal tumour growth, especially in a child’s rapidly growing retina, and kill such tumours. If one tiny piece of the puzzle is missing, all the other thousands of functional genes become worthless, since the organism cannot survive.
How sensitive is our human copy machine to error? The CFTR gene has 250,000 base pairs. Over 200 mutations have been described which lead to cystic fibrosis (CF). The most common mutation, ΔF508 at position 508 on the peptide chain, involves the deletion of three nucleotides. Three out of 250,000 nucleotides are not copied correctly and the gene cannot function! It is simply not correct to pretend that nature offers endless degrees of freedom to monkey around with the highly interdependent and very sensitive machinery of cell duplication. Furthermore, as discussed above, time is the greatest enemy of evolutionary theory, since most mutations are recessive and for the time being non-lethal. These accumulate from generation to generation and increase the genetic burden.
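The effect of a three-nucleotide (in-frame) deletion of this kind, which removes exactly one amino acid from the protein, can be seen with a toy translation. The sequence and the miniature codon table below are illustrative inventions, not the real CFTR gene:

```python
# Toy codon table: just enough codons for the illustration (not the full genetic code).
CODON_TABLE = {
    "ATG": "Met", "TTT": "Phe", "GGT": "Gly", "AAA": "Lys", "TGA": "STOP",
}

def translate(dna: str) -> list[str]:
    """Read 3-letter codons from the start of the sequence until a STOP or the end."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        amino_acid = CODON_TABLE.get(dna[i:i + 3], "???")
        if amino_acid == "STOP":
            break
        protein.append(amino_acid)
    return protein

gene = "ATGGGTTTTAAATGA"                   # Met-Gly-Phe-Lys, then STOP (toy sequence)
in_frame_deletion = gene[:6] + gene[9:]    # delete one whole codon (3 nucleotides)

print(translate(gene))                # ['Met', 'Gly', 'Phe', 'Lys']
print(translate(in_frame_deletion))   # ['Met', 'Gly', 'Lys'] -- one amino acid lost
```

Even though the reading frame is preserved and only a single amino acid is lost, the text’s point is that this small change is enough to destroy the gene product’s function.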
A Challenge to Professor Dawkins and Other Evolutionists
It seems fair to point out that Professor Dawkins has yet to provide a concrete example of a mutation leading to an increase of information as requested.
He may wish to take whatever time is necessary, without the pressure of a video camera or interviewers, and, from the trillions of random genetic changes which he postulates must have occurred, select a number which show an indisputable increase in the information content of the biocosm. Given his belief in natural selection, not all of, say, 20 examples might survive professional scrutiny, but surely the best cases would. With this contribution he would be in a position to claim he has indeed answered the original question posed to him.
Dr. Royal Truman
1. Farley, John, 1979. The Spontaneous Generation Controversy from Descartes to Oparin, 2nd ed, p. 73, The Johns Hopkins University Press, Baltimore.
2. Nureki, O. and nine others, 1998. Enzyme structure with two catalytic sites for double-sieve selection of substrate. Science 280(5363):578–582.
3. Sarfati, J.D., 1999. Decoding and editing design: double sieve enzymes. Creation Ex Nihilo Technical Journal 13(1):5–7.
4. Hiroyuki Noji et al., 1997. Direct observation of the rotation of F1-ATPase. Nature 386(6622):299–302. Comment by Block, S. Real engines of creation. Same issue, pp. 217–219.
5. Boyer, P., 1993. The binding change mechanism for ATP synthesis — some probabilities and possibilities. Biochim. Biophys. Acta 1140:215–250.
6. Abrahams, J.P. et al., 1994. Structure at 2.8 Å resolution of F1-ATPase from bovine heart mitochondria. Nature 370(6491):621–628. Comment by Cross, R.L. Our primary source of ATP. Same issue, pp. 594–595.
7. Sarfati, J.D., 1998. Design in living organisms: Motors. CEN Tech. J. 12(1):3–5.
8. Parker, G., 1994. Creation: Facts of Life, 6th ed. p. 28, Master Books, Green Forest, AR.
9. Quoted in Nelson, Paul, Thinking About the Theory of Design, http://www.arn.org/docs/orpages/or152/152main.htm.
10. Behe, M.J., 1996. Darwin’s Black Box: The Biochemical Challenge to Evolution, The Free Press, New York. Reviewed by Ury, T.H., 1997. CEN Tech. J. 11(3):283–291. See also Dr. Robert DiSilvestro (a biochemist), 1999. Rebuttals to common criticisms of the book Darwin’s Black Box, http://www.leaderu.com/science/disilvestro-dbb.html (3 June 1999 update).
11. Sarfati, J.D., 1998. Origin of life: the polymerization problem. CEN Tech. J. 12(3):281–284.
12. Sarfati, J.D., 1998. Origin of life: the chirality problem. CEN Tech. J. 12(3):263–266.
13. (a) Meyer, Stephen, The Message in the Microcosm: DNA and the Death of Materialism, http://www.arn.org/docs/meyer/sm_message.htm
(b) Meyer, Stephen, The Origin of Life and the Death of Materialism, http://www.arn.org/docs/meyer/sm_origins.htm
14. Sauer, cited by Meyer, Ref. 13(b).
15. From a Frog to a Prince, Keziah Video Productions, 1998, in conjunction with Answers in Genesis and the Institute for Creation Research.
16. Williams, B., 1998. Creationist Deception Exposed. The Skeptic 18(3):7–10.
17. Brown, G., 1998. Skeptics choke on Frog: Was Dawkins caught on the hop? Answers in Genesis Prayer News (Australia), Nov. 1998, p. 3.
In particular, by the time this question was put to Dawkins, he knew full well that the producers were creationists. This undermines the story that this question alerted him to that fact for the first time, and that his pause was merely for deciding whether to expel them from his house.
18. Dawkins, R., 1998. The ‘Information Challenge’. The Skeptic 18(4):21–25.
19. Dawkins referred to his new book Unweaving the Rainbow: Science, Delusion and The Appetite For Wonder, Houghton Mifflin Company, Boston/New York, 1998. See its critique: Truman, R., 1999. CEN Tech. J. 13(1):33–36.
20. Dawkins, Ref. 18, p. 24.
21. Wiedersheim, R., 1895. The Structure of Man: an Index to his Past History, Macmillan, London, translated by H. and M. Bernard.
22. Bergman, J. and Howe, G., 1990. ‘Vestigial Organs’ are Fully Functional, Creation Research Society Books, Kansas City.
23. ‘Vestigial’ Organs: What do they prove?, online at http://www.answersingenesis.org/.
24. (a) Dembski, William, Science and Design, First Things 86 (October 1998): 21–27.
(b) Dembski, William A., Science and Design, http://www.arn.org/ftissues/ft9810/dembski.html
(c) Dembski, William A., 1998. The Design Inference: Eliminating Chance Through Small Probabilities, Cambridge Studies in Probability, Induction and Decision Theory, Cambridge University Press.
25. Wells, Jonathan and Nelson, Paul. Homology: A Concept in Crisis, Origins & Design 18(2):12–19; online at http://www.arn.org/docs/odesign/od182/hobi182.htm.
26. Raff, Rudolf A., 1996. The Shape of Life: Genes, Development, and the Evolution of Animal Form. Chicago: The University of Chicago Press.
27. (a) Emile Zuckerkandl, Neutral and Nonneutral Mutations: The Creative Mix—Evolution of Complexity in Gene Interaction Systems, Journal of Molecular Evolution 44 (1997):S2–S8.
(b) Nelson, Paul, The Junk Dealer Ain’t Selling That No More, http://www.arn.org/docs/odesign/od182/ls182.htm#anchor569108.
 Interview in Der Spiegel, 37:272, 1998. [RETURN TO TEXT]
 Smoker’s Nature, Newsweek Feb. 1, 1999, p. 4. [RETURN TO TEXT]
 Dawkins, Ref. 18, p. 23. [RETURN TO TEXT]
 Kofahl, Robert E., Probability and the Origin of Life, http://www.parentcompany.com/creation_essays/essay44.htm. [RETURN TO TEXT]
 Parker, Ref. 8, p. 44 [RETURN TO TEXT]
 Dickerson, Richard E., and Irving Geis, 1969. The Structure and Action of Proteins. New York: Harper and Row. [RETURN TO TEXT]
 Hannu Lång, 1999. Pers. comm. [RETURN TO TEXT]
Hosken, B., in Ashton, J., ed., 1999. In Six Days: Why 50 [Ph.D.] scientists choose to believe in creation, New Holland, Sydney/Auckland/London/Cape Town, pp. 110113. [RETURN TO TEXT]
 Morris, Henry and Parker, G., 1997. What is Creation Science? Master Books, p. 59. [RETURN TO TEXT]
 Johnson, Phillip E., The Storyteller and the Scientist, http://www.arn.org/docs/johnson/behedawk.htm. [RETURN TO TEXT]
 Behe, Ref. 10, pp. 8990. [RETURN TO TEXT]
 Behe, Michael J., Behe Responds to the Boston Review, http://www.arn.org/docs/behe/mb_brresp.htm. [RETURN TO TEXT]
 Gitt, W., 1997. ‘Dazzling Design in Miniature’, Creation Ex Nihilo 20(1):6, December 1997–February 1998, online at http://www.answersingenesis.org/. [RETURN TO TEXT]
 Behe, Ref. 10, pp. 69–73. [RETURN TO TEXT]
 Shapiro, Lucy, 1995. The Bacterial Flagellum: From Genetic Network to Complex Architecture, Cell 80:525–527. [RETURN TO TEXT]
 Tétry, A., Theories of Evolution, in Rostand, J. & Tétry, A., Larousse Science of Life: A Study of Biology, Sex, Genetics, Heredity and Evolution (1962), Hamlyn: London, 1971, pp. 428–432. [RETURN TO TEXT]
 Behe, Ref. 10, pp. 39–45. [RETURN TO TEXT]
 Scherer, S., 1983. Basic Functional States in the Evolution of Light-driven Cyclic Electron Transport, Journal of Theoretical Biology 104:289–299. [RETURN TO TEXT]
 Scherer, Ref. 45, p. 296. [RETURN TO TEXT]
 Dawkins, Ref. 18, p. 25; emphasis in original. [RETURN TO TEXT]
 Wieland, Carl, 1997. Beetle Bloopers: even a defect can be an advantage sometimes. Creation Ex Nihilo 19(3):30. [RETURN TO TEXT]
 Sarfati, J.D., 1999. The non-evolution of the horse. Creation 21(3):28–31. [RETURN TO TEXT]
 Parker, Ref. 8, p. 125. [RETURN TO TEXT]
 Grassé, P.-P., 1973. L’Evolution du Vivant, translated into English as The Evolution of Living Organisms (1977). Discussed by Professor Johnson in Darwinism’s Rules of Reasoning, http://www.arn.org/docs/johnson/drr.htm. [RETURN TO TEXT]
 Demick, D.A., February 1999. The Blind Gunman. Impact #308, Institute for Creation Research; http://www.icr.org/pubs/imp/imp-308.htm. [RETURN TO TEXT]
 Wells, J. 1999. Pers. Comm. [RETURN TO TEXT]
 Spetner, Lee, 1997–8. Not By Chance! Shattering the Modern Theory of Evolution, The Judaica Press, Inc. See also review by Wieland, C., Creation 20(1):50–51, Dec. 1997. [RETURN TO TEXT]
 Spetner, Lee, 1999. Pers. Comm. [RETURN TO TEXT]
 Ury, Thane H., 1997. Mere Creation Conference, CEN Tech. J. 11(1):25–30. [RETURN TO TEXT]
 Dawkins, Ref. 18, p. 21. [RETURN TO TEXT]
 Dawkins, Ref. 18, p. 25. [RETURN TO TEXT]
 Lindley, D.V., 1985. Making Decisions, 2nd edition, John Wiley & Sons. [RETURN TO TEXT]
 Hey, John D., 1985. Data in Doubt, Basil Blackwell Ltd. (especially ch. 4). [RETURN TO TEXT]
 Spetner, Ref. 54, p. 102. [RETURN TO TEXT]
 See Sarfati, J.D., 1999. The Second Law of Thermodynamics. [RETURN TO TEXT]
 Dawkins, Ref. 18, p. 21 (emphasis in original). [RETURN TO TEXT]
 Dawkins, Ref. 18, p. 22. [RETURN TO TEXT]
 Dawkins, Ref. 18, p. 22 (emphasis in original). [RETURN TO TEXT]
 Dawkins, Ref. 18, p. 22. [RETURN TO TEXT]
 Gilbert, Scott F., 1994. Developmental Biology, 4th ed. Sunderland, MA: Sinauer Associates. [RETURN TO TEXT]
 Alberch, Pere, 1985. Problems with the Interpretation of Developmental Sequences, Systematic Zoology 34(1): [RETURN TO TEXT]
 Barrell, B.G., Air, G.M., and Hutchinson, C.A. III, 1976. Overlapping genes in bacteriophage φX174, Nature 264:34–41. Cited in: Denton, M., 1985. Evolution: a Theory in Crisis, Adler & Adler, Bethesda, Maryland, p. 343. [RETURN TO TEXT]
 Dawkins, Ref. 18, p. 24. He notes that the genetic material was actually RNA, not DNA, but he said ‘the principle is the same’. [RETURN TO TEXT]
 Gitt, Werner, 1997. In the Beginning was Information, Christliche Literatur-Verbreitung e.V., Bielefeld, Germany. [RETURN TO TEXT]
 Gitt, Ref. 71, p. 80. [RETURN TO TEXT]
 First used in Gitt, W., 1981. Information und Entropie als Bindeglieder diverser Wissenschaftszweige [Information and Entropy as Links between Diverse Branches of Science], PTB-Mitt. 91:117; cited in: Gitt, Ref. 71, p. 76. [RETURN TO TEXT]
 Spetner, Ref. 54, ch. 4. [RETURN TO TEXT]
 Kofahl, Robert E., Do Similarities Prove Evolution from a Common Ancestor?, http://www.parentcompany.com/creation_essays/essay11.htm. [RETURN TO TEXT]
 De Beer, G., 1971. Homology, An Unsolved Problem, Oxford University Press, London. [RETURN TO TEXT]
 Goodwin, Brian C., 1985. What Are the Causes of Morphogenesis? Bioessays 3:32–36; quote on p. 32. [RETURN TO TEXT]
 Nijhout, H.F., 1990. Metaphors and the Role of Genes in Development, Bioessays 12:441–446; quote on p. 444. [RETURN TO TEXT]
 Brenner, Sydney, 1973. The Genetics of Behaviour, British Medical Bulletin 29:269–271. [RETURN TO TEXT]
 Actually 2s/(1 − e^(−2sN)), where s = selection coefficient and N is the population size. This asymptotically converges down to 2s where sN is large. So it’s much harder for large populations to substitute beneficial mutations. But smaller populations have their own disadvantages, e.g. they are less likely to produce any good mutations, and are vulnerable to the deleterious effects of inbreeding and genetic drift. See Spetner, Ref. 54, ch. 3. [RETURN TO TEXT]
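The behaviour of the fixation-probability expression in the note above can be checked numerically; the following is a minimal sketch (the function name is ours, not from the cited source):

```python
import math

def fixation_probability(s, N):
    """Probability that a beneficial mutation with selection
    coefficient s becomes fixed in a population of size N,
    using the formula quoted in the note: 2s / (1 - e^(-2sN))."""
    return 2 * s / (1 - math.exp(-2 * s * N))

# For large s*N the denominator approaches 1, so the value
# converges down to 2s; for small populations it is larger.
large_pop = fixation_probability(0.01, 100000)  # close to 2s = 0.02
small_pop = fixation_probability(0.01, 10)      # noticeably above 0.02
```

For s = 0.01, a population of 100,000 gives essentially 2s = 0.02, while a population of 10 gives roughly 0.11, illustrating the note's point that the per-mutation fixation chance is lower in large populations.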