

Barry Setterfield on cDK


Setterfield’s response to Skiff’s criticism
featured in Kelly’s book, Creation and Change

© 2024 Barry Setterfield.  All Rights Reserved.


Introduction

When Barry Setterfield and Trevor Norman published their work on the decay of the speed of light in 1987, entitled “The Atomic Constants, Light and Time”, it eventually sparked a great deal of controversy, not only over the idea of a decaying speed of light (cDK) but over the way Setterfield and Norman had handled the data. Accusations were made that they had mishandled the data and pre-selected figures to fit their theory. Critics also noted that the data necessarily came from the relatively short period over which it has been possible to measure the speed of light directly, and were then extrapolated backwards in time. Because the earlier measurements were the more subject to error, a number of physicists felt that no reliable curve could be fitted to the data at all. Statistician Alan Montgomery examined the data and, after working with it, concluded that the Setterfield-Norman paper had used it correctly. Much of the material concerning and explaining this can be found at Lambert Dolphin’s website (http://www.best.com/~dolphin//constc.shtml).

In the meantime, Douglas Kelly, in his book Creation and Change: Genesis 1.1-2.4 in the light of changing scientific paradigms (1997, Christian Focus Publications, Great Britain) discusses this issue in terms of Genesis. Endeavoring to present both sides of the cDK argument, he asked for a comment from Professor Frederick N. Skiff. Professor Skiff responded with a private letter which Kelly published on pp. 153 and 154 of his book. The letter is quoted below, followed by Barry Setterfield’s response.

Helen Fryman
January 25, 1999

[Webmaster’s note: Kelly’s book supports a literal Genesis, with creation in six consecutive normal days about 6000 years ago.]


From Professor Frederick N. Skiff:
Associate Professor of Physics, University of Iowa

I see that Setterfield does indeed propose that Planck’s constant is also changing. Therefore, the fine structure constant ‘α’ could remain truly constant and the electron velocity in the atom could then change in a fashion proportional to the speed of light. His hypothesis is plausible.

My concern was that if you say

1) The speed of light is changing. And
2) The electron velocity in the atom is proportional to the speed of light,
then you will generate an immediate objection from a physicist unless you add
3) Planck’s constant is also changing in such a way as to keep the fine structure ‘constant’ constant.

The last statement is not a small addition. It indicates that his proposal involves a certain relation between the quantum theory (in the atom) and relativity theory (concerning the speed of light). The relation between these theories, in describing gravity, space and time, is recognized as one of the most important outstanding problems in physics. At present these theories cannot be fully reconciled, despite their many successes in describing a wide range of phenomena. Thus, in a way, his proposal enters new territory rather than challenging current theory. Actually, the idea has been around for more than a decade, but it has not been pursued for lack of proof. My concerns are the following:

The measurements exist over a relatively short period of time. Over this period of time the speed changes by only a small amount. No matter how good the fit to the data is over the last few decades, it is very speculative to extrapolate such a curve over thousands of years unless there are other (stronger) arguments that suggest that he really has the right curve. The fact is that there are an infinite number of mathematical curves which fit the data perfectly (he does not seem to realize this in his article). On the other hand, we should doubt any theory which fits the data perfectly, because we know that the data contain various kinds of errors (which have been estimated). Therefore the range of potential curves is even larger, because the data contain errors. There is clearly some kind of systematic effect, but not one that can be extrapolated with much confidence. The fact that his model is consistent with a biblical chronology is very interesting, but not conclusive (there are an infinite number of curves that would also agree with this chronology). The fact that he does propose a relatively well known, and simple, trigonometric function is also curious, but not conclusive.

The theoretical derivation that he gives for the variation of the speed of light contains a number of fundamental errors. He speaks of Planck’s constant as the quantum unit of energy, but it is the quantum unit of angular momentum. In his use of the conversion constant b he seems to implicitly assume that the ‘basic’ photon has a frequency of 1 Hz, but there is no warrant for doing this. His use of the power density in an electromagnetic wave as a way of calculating the rate of change of the speed of light will not normally come out of a dynamical equation which assumes that the speed of light is a constant (Maxwell’s Equations). If there is validity in his model, I don’t believe that it will come from the theory that he gives. Unfortunately, the problem is much more complicated, because the creation is very rich in phenomena and delicate in structure.

Nevertheless, such an idea begs for an experimental test. The problem is that the predicted changes seem to be always smaller than what can be resolved. I share some of the concerns of the second respondent in the Pascal Notebook article.* One would not expect the rate of change of the speed of light to be related to the current state-of-the-art measurement (the graph on page 4 of the Pascal Notebook**) unless the effect is due to bias. Effects that are ‘only there when you are not looking’ can happen in certain contexts in quantum theory, but you would not expect them in such a measurement as the speed of light.

Those are my concerns. I think that it is very important to explore alternative ideas. The community which is interested in looking at theories outside of the ideological mainstream is small and has a difficult life. No one scientist is likely to work out a new theory from scratch. It needs to be a community effort, I think.

* A reference to “Decrease in the Velocity of Light: Its Meaning For Physics” in The Pascal Centre Notebook, Vol. 1, No. 1, July 1990. The second respondent to Setterfield’s theory was Dr. Wytse Van Dijk, Professor of Physics and Mathematics, Redeemer College, who asked (concerning Professor Troitskii’s model of the slowing down of the speed of light): “Can we test the validity of Troitskii’s model? If his model is correct, then atomic clocks should be slowing compared to dynamic clocks. The model could be tested by comparing atomic and gravitational time over several years to see whether they diverge. I think such a test would be worthwhile. The results might help us to resolve some of the issues relating to faith and science.” (p. 5)

** This graph correlates the accuracy of measurements of the speed of light c with the rate of change in c between 1740 and 1980.


Barry Setterfield’s response:

During the early 1980s it was my privilege to collect data on the speed of light, c. In that time, several preliminary publications on the issue were presented. In them the data list increased with time as further experiments determining c were unearthed. Furthermore, the preferred curve to fit the data changed as the data list became more complete. In several notable cases, this process produced trails on the theoretical front and elsewhere which have long since been abandoned as further information came in. In August of 1987, our definitive Report on the data was issued as “The Atomic Constants, Light and Time” in a joint arrangement with SRI International and Flinders University. Trevor Norman and I spent some time making sure that we had all the facts and data available, and had treated them correctly statistically. In fact the Maths Department at Flinders University was anxious for us to present a seminar on the topic. That Report presented all 163 measurements of c by 16 methods over the 300 years since 1675. We also examined all 475 measurements of 11 other c-related atomic quantities by 25 methods. These experimental data determined the theoretical approach to the topic. From them it became obvious that, with any variation of c, energy is conserved in all atomic processes. A best-fit curve to the data was presented.

In response to criticism, it was obvious the data list was beyond contention - we had included everything in our Report. Furthermore, the theoretical approach withstood scrutiny, except on the two issues of the redshift and gravitation. The main point of contention with the Report has been the statistical treatment of the data, and whether or not these data show a statistically significant decay in c over the last 300 years. Interestingly, all professional statistical comment agreed that a decay in c had occurred, while many less qualified statisticians claimed it had not! At that point, a Canadian statistician, Alan Montgomery, liaised with Lambert Dolphin and me, and argued the case well against all comers. He presented a series of papers which have withstood the criticism of both the Creationist community and others. From his treatment of the data it can be stated that c decay (cDK) has at least formal statistical significance.

However, my forthcoming redshift paper (which also resolves the gravitational problem) takes the available data right back beyond the last 300 years. In so doing, a complete theory of how cDK occurred (and why) has been developed in a way that is consistent with the observational data from astronomy and atomic physics. In simple terms, the light from distant galaxies is redshifted by progressively greater amounts the further out into space we look. This is also equivalent to looking back in time. As it turns out, the redshift of light includes a signature as to what the value of c was at the moment of emission. Using this signature, we then know precisely how c (and other c-related atomic constants) has behaved with time. In essence, we now have a data set that goes right back to the origin of the cosmos. This has allowed a definitive cDK curve to be constructed from the data and ultimate causes to be uncovered. It also allows all radiometric and other atomic dates to be corrected to read actual orbital time, since theory shows that cDK affects the run-rate of these clocks.

A very recent development on the cDK front has been the London Press announcement on November 15th, 1998, of the possibility of a significantly higher light-speed at the origin of the cosmos. I have been privileged to receive a 13 page pre-print of the Albrecht-Magueijo paper (A-M paper) which is entitled “A time varying speed of light as a solution to cosmological puzzles”. From this fascinating paper, one can see that a very high initial c value really does answer a number of problems with Big Bang cosmology. My main reservation is that it is entirely theoretically based. It may be difficult to obtain observational support. As I read it, the A-M paper requires c to be at least 10^60 times its current speed from the start of the Big Bang process until “a phase transition in c occurs, producing matter, and leaving the Universe very fine-tuned ...”. At that transition, the A-M paper proposes that c dropped to its current value. By contrast, the redshift data suggests that cDK may have occurred over a longer time.

Some specific questions relating to the cDK work have been raised. Helen Fryman wrote to me that someone had suggested “that the early measurements of c had such large probable errors attached, that (t)his inference of a changing light speed was unwarranted by the data.” This statement may not be quite accurate, as Montgomery’s analysis does not support this conclusion. However, the new data set from the redshift resolves all such understandable reservations.

There have been claims that I ‘cooked’ or mishandled the data by selecting figures that fit the theory. This can hardly apply to the 1987 Report as all the data is included. Even the Skeptics admitted that “it is much harder to accuse Setterfield of data selection in this Report”. The accusation may have had some validity for the early incomplete data sets of the preliminary work, but I was reporting what I had at the time. The rigorous data analyses of Montgomery’s papers subsequent to the 1987 Report have withstood all scrutiny on this point and positively support cDK. However, the redshift data in the forthcoming paper overcomes all such objections, as the trend is quite specific and follows a natural decay form unequivocally.

Finally, Douglas Kelly’s book “Creation and Change” contained a very fair critique of cDK by Professor Fred Skiff. However, a few comments may be in order here to clarify the issue somewhat. Douglas Kelly appears to derive most of his information from my 1983 publication “The Velocity of Light and the Age of the Universe”. He does not appear to reference the 1987 Report, which updated all previous publications on the cDK issue. As a result, some of the information in this book is outdated, and several corrections are needed in the “Technical And Bibliographical Notes For Chapter Seven” on pp. 153-155. In the paragraph headed “1. Barry Setterfield”, the form of the decay curve presented there was updated in the 1987 Report, and has been further refined by the redshift work, which has data back essentially to the curve’s origin. As a result, a different date for creation emerges, one in accord with the text that Christ, the Apostles and Church Fathers used. Furthermore, this new work gives a much better idea of the likely value for c at any given date. The redshift data indicate that the initial value of c was (2.54 x 10^10) times the speed of light now. This appears conservative when compared with the initial value of c from the A-M paper of 10^60 times c now.

Professor Skiff then makes several comments. He suggests that cDK may be acceptable if “Planck’s constant is also changing in such a way as to keep the fine structure ‘constant’ constant.” This is in fact the case as the 1987 Report makes clear.
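
For reference, the relation both writers have in mind can be made explicit. In SI form the fine structure constant is

    \alpha = \frac{e^{2}}{4 \pi \varepsilon_{0} \hbar c} \approx \frac{1}{137}

so it stays fixed under a changing c only if the combination e^2/(ε₀ħc) is unchanged; on the simplest reading, with the electron charge and the permittivity of free space held constant, this requires Planck’s constant to vary as 1/c. That is the compensation Professor Skiff describes, and it appears to be what the 1987 Report intends in having Planck’s constant change so as to keep the fine structure constant constant.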

Professor Skiff then addresses the problem of the accuracy of the measurements of c over the last 300 years. He rightly points out that there are a number of curves which fit the data. Even though the same comments still apply to the 1987 Report, I would point out that the curves and data that he is discussing are those offered in 1983, rather than those of 1987. It is unfortunate that the outcome of the more recent analyses by Montgomery is not even mentioned in Douglas Kelly’s book.
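
Professor Skiff’s general point about short baselines is easy to illustrate. The sketch below is purely illustrative: it uses invented, synthetic numbers (not the 1987 data set or any curve actually proposed) and fits simple polynomial trends of increasing degree to the same short, noisy series; all fit the measurements about equally well, yet their extrapolations far outside the measured interval can differ greatly.

# Purely illustrative sketch with synthetic numbers - not the Norman-Setterfield data.
# Polynomial trends of increasing degree are fitted to the same short, noisy series;
# all fit the measurements comparably well, yet their extrapolations far outside the
# measured interval typically differ by much more than the fitted residuals suggest.
import numpy as np

rng = np.random.default_rng(1)
years = np.linspace(1700.0, 1980.0, 30)              # hypothetical measurement epochs
x = 1980.0 - years                                   # years before 1980
c_now = 299792.458                                   # km/s
c_true = c_now + 0.02 * x                            # assumed small drift (invented)
c_obs = c_true + rng.normal(0.0, 1.0, x.size)        # add measurement noise

for degree in (1, 2, 3):
    fit = np.poly1d(np.polyfit(x, c_obs, degree))
    rms = np.sqrt(np.mean((c_obs - fit(x)) ** 2))
    print(f"degree {degree}: rms residual {rms:5.2f} km/s over 1700-1980; "
          f"extrapolated c at year 0: {fit(1980.0):12.1f} km/s")

Polynomials are used here only because they are the simplest way to generate a family of comparably good fits; nothing about the choice reflects the curves proposed in 1983 or 1987.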

Professor Skiff is also correct in pointing out that the extrapolation from the 300 years of data is “very speculative”. Nevertheless, geochronologists extrapolate by factors of up to 50 million to obtain dates of 5 billion years on the basis of less than a century’s observations of half-lives. However, the Professor’s legitimate concern here should be largely dissipated by the redshift results, which take us back essentially to the origin of the curve and define the form of that curve unambiguously. The other issue that the Professor spends some time on is the theoretical derivation for cDK, and a basic photon idea which was used to support the preferred equation in the 1983 publication. Both that equation and the theoretical derivation were short-lived. The 1987 Report presented the revised scenario. The upcoming redshift paper has a completely defined curve that has a solid observational basis throughout. The theory of why c decayed, along with the associated changes in the related atomic constants, is rooted firmly in modern physics with only one very reasonable basic assumption needed. I trust that this forthcoming paper will be accepted as contributing something to our knowledge of the cosmos.
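
Taking the figures in the preceding paragraph at face value, the extrapolation factor quoted works out directly:

    \frac{5 \times 10^{9}\ \text{years}}{\sim 10^{2}\ \text{years of half-life observations}} \approx 5 \times 10^{7} = 50\ \text{million}.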

Professor Skiff also refers to the comments by Dr. Wytse Van Dijk, who said that “If (t)his model is correct, then atomic clocks should be slowing compared to dynamical clocks.” This has indeed been observed. In fact it is mentioned in our 1987 Report. There we point out that the lunar and planetary orbital periods, which comprise the dynamical clock, had been compared with atomic clocks from 1955 to 1981 by Van Flandern and others. Assessing the evidence in 1984, Dr. T. C. Van Flandern concluded that “the number of atomic seconds in a dynamical interval is becoming fewer. Presumably, if the result has any generality to it, this means that atomic phenomena are slowing with respect to dynamical phenomena ...” This is the observational evidence that Dr. Wytse Van Dijk and Professor Skiff required. Further details of this assessment by Van Flandern can be found in “Precision Measurements and Fundamental Constants II”, pp. 625-627, National Bureau of Standards (US) Special Publication 617 (1984), B. N. Taylor and W. D. Phillips, editors.

In conclusion, I would like to thank Fred Skiff for his very gracious handling of the cDK situation as presented in Douglas Kelly’s book. Even though the information on which it is based is outdated, Professor Skiff’s critique is very gentlemanly and is deeply appreciated. If this example were to be followed by others, it would be to everyone’s advantage.

BARRY SETTERFIELD
January 25, 1999

