Introduction
Science has done some remarkable things. From Thales’ prediction of a solar eclipse in 585 BC to the detection of gravitational waves in September 2015, from enabling heart transplants to enabling the atomic bomb, science has made many impressive, useful, and powerful contributions that sometimes change history. Science also influences our day-to-day life in ways we rarely notice. Every time you turn on the lights, run the dishwasher, use your phone, or drive your car down the highway, you are relying on the practical fruit of multiple scientific theories.
But what is science? And what is a theory? And to what extent are they true?
This article will take a closer look at the nature of science and ask to what degree scientific knowledge is certain, true knowledge of the material world.
Sometime, somewhere, we’ve all heard or even said something like, “That’s not scientifically proven; it’s just a theory!” Most people don’t realize there are two problems with this statement:
- Science is in the business of making theories: this is the main goal of science! So, to belittle scientific theories by calling something “just a theory” is to belittle the central aim of science itself.
- Scientific theories and most scientific claims cannot be proven to be true. It is fair only to say that a good theory has significant explanatory power, that it effectively models reality under certain conditions, and that it is approximately true.
Deduction vs. Induction in Traditional and Modern Science
Science was not always what it is today. Maybe that seems obvious. What I mean is that the methods and goals of science have dramatically changed over the years.
Aristotle was one of the first and most influential natural scientists in history. He sought a qualitative knowledge (scientia in Latin, episteme in Greek) of material reality, explaining the behavior of things by describing what causes them to be the way they are. Only by knowing the causes of things, Aristotle taught, can one truly know the thing (Posterior Analytics, 71b 8–12). Knowledge of this kind, which reaches certainty, is based on demonstration or proof from first principles, a method called deduction.
Geometry, or indeed any branch of mathematics, is an excellent example of deduction. Beginning with basic principles, we can use logic to arrive at the necessary consequences that follow from them. Aristotle would call these principles “causes” and the consequences “effects.” Deduction can reach true, certain knowledge by going from cause to effect. In geometry we can achieve true knowledge of things by reasoning from true first principles. For example, based on the principle “all triangles have the sum of their angles equal to 180°,” one can say, “This shape ABC is a triangle, and these are its angles. Therefore, the sum of the measures of these angles is 180°.”
Deduction can reach certain and true knowledge if it is based on true principles. Traditional science, a tradition pioneered largely by Aristotle and continued for approximately 2,000 years, used deduction as its primary tool to reach knowledge. Modern science does not take this approach.
Modern science, instead of going from cause to effect, goes in the reverse order: from observed effects to reasonable principles that explain them. This method of reasoning is referred to as induction. If geometry were done by induction, it would take careful measurements of the angles of many triangles, find that in example after example the angles add up to about 180°, and at best conclude that all triangles probably have angle sums of approximately 180°. This would not be certain and true knowledge. The angles of all triangles do in fact add up to 180°, but the certainty of this conclusion can only be arrived at by deduction, not by induction.
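To make the contrast concrete, here is a small illustrative sketch (in Python; the numbers and the noise model are invented purely for illustration) of what “geometry by induction” would look like: measure many triangles with imperfect instruments and generalize from the results.

```python
import random

def measured_angle_sum(noise_deg=0.5):
    """Simulate measuring the three angles of one random triangle.

    The true angles always sum to 180 degrees; each individual
    measurement carries a small random instrument error (an
    assumption made purely for this illustration).
    """
    a = random.uniform(10, 160)
    b = random.uniform(10, 170 - a)
    c = 180 - a - b
    return sum(angle + random.gauss(0, noise_deg) for angle in (a, b, c))

# "Inductive" geometry: measure many triangles, then generalize.
samples = [measured_angle_sum() for _ in range(10_000)]
mean = sum(samples) / len(samples)
print(f"Mean of measured angle sums: {mean:.2f} degrees")
# The mean comes out close to 180, but induction alone only licenses
# "all triangles probably have angle sums of approximately 180 degrees,"
# never the certainty that deduction from first principles provides.
```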
Modern science performs experiments and gathers data, makes observations, and uses the data and observations to reason inductively to develop theories and test hypotheses that accurately model the natural world.
In 1687, Sir Isaac Newton published his Mathematical Principles of Natural Philosophy, or Principia for short. In contrast to Aristotle, Newton presents a more accurate and precise formulation of the laws of nature, and he uses inductive rather than deductive methods. In the essay “General Scholium,” appended to the 1713 edition of the Principia, Newton famously said “hypotheses non fingo” (“I frame no hypotheses”). The phrase is part of a longer passage that is far more interesting for many reasons:
But hitherto I have not been able to discover the cause of those properties of gravity from phenomena, and I frame no hypotheses; for whatever is not deduced from the phenomena is to be called an hypothesis; and hypotheses, whether metaphysical or physical, whether of occult qualities or mechanical, have no place in experimental philosophy…. In this philosophy particular propositions are inferred from the phenomena, and afterwards rendered general by induction.[1] (emphasis mine)
In this excerpt, Newton describes a new way of doing science, which he refers to as “experimental philosophy.” We would nowadays call this “modern science” or simply “science.” He also describes the method of reasoning used in this “experimental philosophy,” namely induction: “…particular propositions are inferred from the phenomena and afterwards rendered general by induction.” Although the English “hypotheses” renders the very word Newton used in the original Latin text, perhaps a better word for our purposes would be “assumptions,” since “hypothesis” has taken on a different meaning than the one Newton intended at the time of his writing.
Newton does make some deductive arguments in the Principia. These deductive arguments, however, are made with careful wording. Instead of making an airtight deductive syllogism in the standard “if… then…” form, he qualifies the syllogism with quam proxime (“nearly”). After arguing that if an orbiting body sweeps out equal areas in equal times (Kepler’s second law), then it is acted on by a centripetal force, he says:
And if these areas are proportional to the times nearly, the remaining force will tend to the other body T nearly.[2]
He does this so that he can later use these conclusions in an inductive argument. He shows data demonstrating that planets sweep out approximately equal areas in equal times, and therefore that the forces acting on them are very nearly centripetal, which he then shows are due to the gravitational attraction of the orbited body.
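For readers who want to see the mathematics behind this step, a standard modern derivation (not Newton’s own geometric argument) makes the logic explicit. The rate at which the line from the central body to the orbiting body sweeps out area is proportional to the orbiting body’s angular momentum, and the angular momentum changes only under a torque:

$$
\frac{dA}{dt} \;=\; \tfrac{1}{2}\,\lvert \mathbf{r} \times \mathbf{v} \rvert \;=\; \frac{L}{2m},
\qquad
\frac{d\mathbf{L}}{dt} \;=\; \mathbf{r} \times \mathbf{F}.
$$

If equal areas are swept out in equal times, dA/dt is constant, so L is constant, the torque r × F vanishes, and the force must point along the line toward the central body, i.e., it is centripetal. If the areas are only nearly equal, the force is only nearly centripetal, which is exactly the qualified quam proxime form in which Newton states his conclusion.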
It is a fundamental principle of logic that arguing from effect to cause (induction) is a logical fallacy, or at the very least that such an argument cannot reach certain knowledge. For this reason the conclusions reached by this method of reasoning in science are at best only probable. The deductions of modern science, exemplified by Newton in his Principia, are made chiefly so that their conclusions can later serve in an inductive argument.
The Inductive Method of Reasoning in Modern Science Cannot Reach True Certainty
If we hold that theories are true and certain because we repeatedly get experimental results in agreement with theoretical predictions, we fall into a logical fallacy (affirming the consequent), akin to saying, “If it’s raining outside, I will hear pattering on the roof (theory). I hear pattering on the roof, therefore it is raining outside (experimental results and conclusion).” If we observe that there is pattering on the roof, we can only conclude that it is probably raining outside.
Now to their credit, scientific theories are not as simplistic as this example. To be given the status of a scientific theory, a model must make many experimentally verified predictions. The predictions of a scientific theory would be more akin to, “If it’s raining outside then there’s pattering on the roof, the ground outside is wet, it’s cloudy, I feel water falling on my head when I go outside, I see water falling from the sky when I go outside, I sometimes see a rainbow….” But the important point is that the logic of the scientific empirical method is the same. The methods used by science to test theories reach only possible or at best probable truth, not certainty. This is a flaw intrinsic to the method of reasoning used by modern science, namely, the inductive method.
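The difference between the two argument forms can be written out explicitly (standard propositional logic, added here only for clarity):

$$
\text{Modus ponens (valid):}\;\; P \rightarrow Q,\; P \;\therefore\; Q
\qquad
\text{Affirming the consequent (invalid):}\;\; P \rightarrow Q,\; Q \;\therefore\; P
$$

Empirical testing has the second form: the theory P predicts observations Q; we observe Q; and the most we may conclude is that P is probable, not that it is certain.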
Even so, this should not bother us. It is reasonable to rely on the explanatory power of tested scientific models, even if they do not reach the level of certainty. We follow a similar process in everyday life when we make practical decisions based on the empirical evidence that is before us. A reasonable person does not require certainty to make most decisions. The model building that is characteristic of modern science is similar to the way we evaluate evidence and make decisions in everyday life.
Theories in Modern Science
In his Mathematical Principles of Natural Philosophy, Newton describes some foundational ideas for modern scientific thought, including the mechanics of centripetal forces and his three Laws of Motion. He argues against a contemporary hypothesis of gravity and develops his own Universal Law of Gravitation. These developments unified terrestrial and celestial mechanics (giving a framework to describe both how rocks fall to the ground and how planets orbit the sun), an incredible and unprecedented achievement. Though Newton’s methods differed from Aristotle’s in several ways, he retained a confidence, similar to Aristotle’s, that he was penetrating the mysteries of nature and achieving true knowledge of the natural world. In his preface to the first edition, he writes the following:
But as artificers do not work with perfect accuracy, it comes to pass that mechanics is so distinguished from geometry, that what is perfectly accurate is called geometrical, what is less so, is called mechanical. But the errors are not in the art, but in the artificer. He that works with less accuracy is an imperfect mechanic; and if any could work with perfect accuracy, he would be the most perfect mechanic of all; for the description of right lines and circles, upon which geometry is founded, belongs to mechanics.[3]
From Newton’s perspective, the laws of science were such that if one had adequate knowledge of the position and velocity of every particle in the universe, one should be able to predict the positions and velocities of those particles at any future or past time. Newton and many of his followers believed that Newton’s Laws were descriptions of the laws of the universe as they are in the mind of God.
Scientific determinism, the notion that scientific theories express certain and true knowledge of the world and can predict any phenomenon given enough information, lost ground in the late 1800s and early 1900s. Discoveries such as the randomness of radioactive decay undermined it, and theories like general relativity and quantum mechanics, with their extremely strange and non-intuitive predictions, led scientists to wonder what it really meant to understand the physical world if such strange theories were the best we could do to “explain” it.
The Vienna Circle was a group of scientists, mathematicians, and philosophers who met to discuss these issues. Their conclusion was flawed: they developed a mistaken philosophy called logical positivism, which essentially holds that a statement is meaningful and true if and only if it can be verified by observation and experiment. The Vienna Circle also held that it would one day be possible to develop a unified theory of everything in mathematics and physics. This harked back to Newton’s conviction that experimental science does in fact attain certain and true knowledge. Inherent flaws in logical positivism led it to be severely criticized and largely rejected by scientists and philosophers by the 1960s.
Scientists today have returned to a humbler approach. Few professional scientists believe that scientific theories represent certain, true, unchanging knowledge about the natural world. Thomas Kuhn is considered by many to be the “murderer” of the logical positivism held by the Vienna Circle. His momentous book, The Structure of Scientific Revolutions,[4] was in many ways a critique of this Vienna Circle philosophy and is widely accepted by scientists to be an accurate analysis of how science is and ought to be done.
The main goal of science is to develop theories, but that is not what scientists typically spend their time on. In fact, “normal science,” as Thomas Kuhn calls it, is composed of three kinds of investigation:
- Making accurate measurements of the properties of things that a theory shows to be important (e.g., measuring the gravitational constant, the speed of light, the mass/charge of an electron)
- Applying a theory to various situations in order to clarify it; making small modifications and solving important problems (e.g., applying Newtonian mechanics to the motion of various planets or to projectile motion)
- Testing hypotheses that are deduced from a theory (e.g., testing general relativity’s prediction that the light of a star will be bent by the gravitational field of the sun).
Now it is important to note that all three of these activities are at the service of theories. Although the most common activity of scientists is not developing theories, the primary goal of science is still the development and refinement of theories; the work of normal science, being in service of those theories, is of secondary importance.
Constructive and Principle Theories
Einstein distinguishes between two kinds of theories: constructive theories and principle theories.[5] Constructive theories are based on somewhat arbitrary, imagined constructions or “ideal models” that seek to explain part of the natural world. Principle theories, on the other hand, are based on a few empirically confirmed principles. Many philosophers of science hold that constructive theories are better able to explain the underlying phenomena behind what we observe, whereas principle theories are more resilient and harder to disprove.
Most theories in science are constructive theories. To see some excellent examples of constructive theories, we can look at the history of the various models of the atom developed by Dalton, Thomson, Rutherford, and Bohr.
In 1803, John Dalton, drawing on ideas from the ancient Greek philosopher Democritus, proposed that there is a smallest particle of which all matter in the universe is composed: the atom. Dalton pictured the atom as an indestructible billiard ball, neither created nor destroyed in chemical reactions, combining in whole-number ratios to form compounds.
Dalton’s model was partially correct, and it helped scientists to make fruitful discoveries for almost a hundred years. But like any constructive theory, since it was arbitrarily constructed, it was doomed to be replaced.
In 1897, J.J. Thomson discovered the electron, thus showing that atoms have parts (they are not indivisible, as Dalton had said). He proposed the “Plum Pudding Model” because he was British and probably liked plum pudding, but presumably also because he thought it was a good analogy. In this model, negatively charged electrons (the “plums”) are suspended in a positively charged material (the “pudding”). This model had great explanatory power but was even shorter-lived, being replaced within twelve years.
In 1909, Ernest Rutherford discovered the atom’s nucleus, a positively charged core where most of the mass resides. Rutherford’s model placed this nucleus at the center, with electrons moving around outside it and most of the atom an empty void.
Just four years later, Niels Bohr explained how atoms are stable and why they emit discrete colors when electrons move from an excited state to a lower-energy state. Bohr’s “planetary model” held that electrons are confined to fixed orbits around the positive nucleus (similar to the way our planets orbit the sun). These orbits correspond to how much energy the electron has, and light is emitted when an electron passes from one energy level to another. Bohr was able not only to provide a pictorial description of his model but to describe mathematically how the electrons are arranged. His model could explain many experimental results, but it only really worked for hydrogen or hydrogen-like atoms. Eventually it, too, was replaced by the modern quantum model.
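To give a sense of the quantitative success Bohr’s model enjoyed, the sketch below (Python, purely illustrative) uses the Bohr energy levels of hydrogen, roughly E_n = −13.6 eV / n², to compute the wavelengths of the visible emission lines (the Balmer series), which agree closely with the colors actually observed.

```python
# Visible hydrogen emission lines (Balmer series) from the Bohr model.
# Energy levels: E_n = -13.6 eV / n^2 (approximate).
# A photon emitted in a drop from n_upper to n=2 has wavelength
# lambda = h*c / (E_upper - E_lower).

RYDBERG_ENERGY_EV = 13.6   # approximate ionization energy of hydrogen
HC_EV_NM = 1240.0          # h*c expressed in eV*nm (approximate)

def balmer_wavelength_nm(n_upper: int) -> float:
    """Wavelength of the photon emitted when the electron drops
    from level n_upper down to level n=2 (the visible transitions)."""
    delta_e = RYDBERG_ENERGY_EV * (1 / 2**2 - 1 / n_upper**2)
    return HC_EV_NM / delta_e

for n in range(3, 7):
    print(f"n = {n} -> 2 : {balmer_wavelength_nm(n):.0f} nm")
# Prints roughly 656 nm (red), 486 nm (blue-green), 434 nm and 410 nm
# (violet): the discrete colors the model was built to explain.
```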
Principle theories, which are based on a few empirically verifiable principles and not based on imagined or idealized models, are less common – but much more powerful. Some examples of principle theories are thermodynamics and special relativity.
Einstein’s Theory of Special Relativity is based on just two empirically verified principles. The first is verified by nearly all of our experience: the laws of physics are the same for someone standing still and for someone moving at a constant speed relative to that person. Picture a person standing on the ground and another person standing in a train moving at a constant speed. If the person on the train didn’t look out the window, they would not be able to detect that they were moving. A ball dropped in a train moving at constant speed falls exactly the way a ball does when dropped by someone standing on solid ground.

The second principle is less intuitive but has been verified over and over again by experiment: the speed of light relative to an observer is fixed and does not increase or decrease depending on the speed of its source. Unlike a ball thrown from a moving car, whose speed relative to you depends on the motion of the car, light does not change speed: the speed of the light coming from a car moving toward you is exactly the same as the speed of the light coming from a car moving away from you. This principle is not intuitive, so don’t worry if you are confused by it; in fact, if you aren’t confused by it, you have probably misunderstood it.

Based on just these two principles and common sense, Einstein was able to derive all of Special Relativity, which has since replaced Newtonian Mechanics. One well-known repercussion of Special Relativity is that GPS satellites need to adjust their atomic clocks (which are accurate to about a nanosecond) by several microseconds every day, because time does not pass at the same rate for a satellite moving at 14,000 km/hr relative to us on earth.
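As a rough check on that figure (an illustrative back-of-the-envelope calculation using only the special-relativistic effect, and ignoring the gravitational corrections that belong to General Relativity), a satellite speed of 14,000 km/hr is about 3.9 × 10³ m/s, so the fractional rate at which the moving clock runs slow is approximately

$$
\frac{\Delta t}{t} \;\approx\; \frac{v^{2}}{2c^{2}}
\;=\; \frac{(3.9 \times 10^{3}\ \text{m/s})^{2}}{2\,(3.0 \times 10^{8}\ \text{m/s})^{2}}
\;\approx\; 8 \times 10^{-11},
$$

which over one day (86,400 seconds) comes to roughly 7 microseconds, on the order of the “several microseconds” of daily correction mentioned above.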
Neither Constructive Nor Principle Theories Reflect Truth About the Natural World
The distinction between constructive and principle theories helps us evaluate to what extent scientific theories can claim certainty.
Most theories (the constructive type) are based on imagined constructions: free inventions of the mind or idealized models. As a consequence, constructive theories are never permanent (though they often remain useful) and are eventually replaced by a more accurate constructive theory or by a principle theory. It is important to point out that many theories also describe the behavior of fictitious, or at best inferred, entities of which we have no direct experience. And sometimes the descriptions of these entities are apparently contradictory.
For example, the electron, the photon, and, it turns out, any so-called “particle” will at times behave like a particle and at other times like a wave. So which one is it? At best we call it a particle-wave and must be satisfied with the fact that we cannot picture it, even though it exists in physical space.
Jacob Bronowski[6], a British mathematician, holds that the ambiguity of these kinds of definitions is what allows science to progress. If the electron were defined strictly according to what we once thought it was, a particle, then when its wavelike properties were discovered, scientists would have been unable to attribute those properties to what they knew was a particle; they might even have had to posit a new kind of entity. This is one example of how the ambiguity and uncertainty intrinsic to science is a positive thing. Without it, scientific progress would be impossible.
Even principle theories are often based on empirically grounded but idealized, never fully verified principles. For example, Newton’s law that an object in motion will stay in motion unless acted on by an outside force is one of the principles of Newtonian Mechanics, yet Newton himself certainly never observed an object staying in motion forever, because he was never able to remove all forces. An object left entirely to itself, coasting forever in a straight line, would be a perfect instance of this law, yet no one has ever observed or produced one.
To summarize, constructive theories do not reflect the truth of reality because they rely on imagined constructions of the mind or idealized models, which are never permanent. Principle theories cannot reflect the full truth of reality because they rely on idealized principles that, however well grounded empirically, are never verified exactly.
Gödel’s Incompleteness Theorem and Modern Scientific Models
Fr. Stanley Jaki (philosopher of science) and Stephen Hawking (renowned physicist) both describe the implications of “Gödel’s Incompleteness Theorem” for scientific theories. Gödel’s Incompleteness Theorem, simply put, says that no mathematical system (arithmetic, geometry, algebra, calculus) can be a universal theory of math, able to prove every true mathematical statement. This is not because there are infinitely many things to prove, nor because it is impossible to know things, but because every mathematical system is based on principles, axioms, or assumptions that it cannot itself prove. It must point outside of itself to something else.
Interesting sidenote: the first line of Aristotle’s Posterior Analytics sounds strangely similar to a major premise for Gödel’s Incompleteness Theorem: “All teaching and all intellectual learning come about from already existing knowledge. This is evident if we consider it in every case; for the mathematical sciences are acquired in this fashion….”[7] Since science uses mathematics to model nature (and the more developed a scientific theory is the more math it uses), a complete unified theory of everything in science is not possible – or at least, if it is possible, we would never know that we had achieved it because we couldn’t use the theory to prove the validity of its axioms. Jaki and Hawking both have interesting comments about this.
At first, Hawking was depressed when he realized this[8]: if a complete, unified theory of everything is not possible, doesn’t that mean that science is doomed to failure from the outset? But as he came to terms with the idea, he realized that Gödel’s Theorem is not the death knell of science but rather the guarantee that humans will always have reason to do science and learn more about the natural world.
Fr. Stanley Jaki[9], a Benedictine priest, saw a bit further. Since scientific theories can never completely explain everything about the natural world, the world itself ultimately points to something beyond itself, something not governed by the laws of nature. This, he argued, is an indirect proof for the existence of a creator-God.
Conclusion
David Bohm provides an excellent analysis of the problems with considering theories as reflecting the truth about the world in his essay “Fragmentation and Wholeness”.[10]
The word “theory” derives from the Greek “theoria,” which has the same root as “theatre,” in a word meaning “to view” or “to make a spectacle.” Thus, it might be said that a theory is primarily a form of insight, i.e., a way of looking at the world, and not a form of knowledge of how the world is.
…If we supposed that theories gave true knowledge, corresponding to “reality as it is,” then we would have to conclude that Newtonian theory was true until around 1900, after which it suddenly became false, while relativity and quantum theory suddenly became the truth. Such an absurd conclusion does not arise, however, if we say that all theories are insights, which are neither true nor false but, rather, clear in certain domains, and unclear when extended beyond these domains.
This is the correct way to understand theories: as insights. Theories provide different perspectives from which to view the same reality, similar to the way that two-dimensional drawings can provide different perspectives of the same three-dimensional figure. The fact that scientific theories are not meant to be, and cannot be, proven to be true representations of material reality does not imply that we have uncertainty in all areas of knowledge or that we cannot know anything for sure. G.K. Chesterton, in his book Orthodoxy, warns against going mad from applying logic too strictly to everything:
“The poet only asks to get his head into the heavens. It is the logician [or the misled scientist] who seeks to get the heavens into his head. And it is his head that splits.”[11]
[1] Newton, Isaac. Newton’s Principia: The Mathematical Principles of Natural Philosophy (1846). Translated by Andrew Motte, Kessinger Publishing, 2007.
[2] Ibid.
[3] Ibid.
[4] Kuhn, Thomas S. The Structure of Scientific Revolutions. University of Chicago Press, 1962.
[5] Einstein, Albert. Ideas and Opinions. Crown Publishers, 1954, pp. 227–232.
[6] Bronowski, Jacob. The Origins of Knowledge and Imagination. Yale University Press, 2008.
[7] Barnes, Jonathan, translator. Posterior Analytics (Clarendon Aristotle Series). 2nd ed., Clarendon Press, 1994. 71a-71a3.
[8] Hawking, Stephen. “Gödel and the End of Physics.” physics.sfasu.edu/astro/news/20030308news/StephenHawking20030308.htm. Accessed 3 Oct. 2022.
[9] Jaki, Stanley. A Late Awakening and Other Essays. Real View Books, 2022.
[10] Bohm, David. Wholeness and the Implicate Order. Routledge, 2002.
[11] Chesterton, G.K. Orthodoxy. Dover Publications, 2020.