Physics and Cosmology
Abstract and Keywords
This article considers the role of physics in transforming cosmology into a research field which relies heavily on fundamental physical knowledge. It begins with an overview of astrophysics and the state of physical cosmology prior to the introduction of relativity, followed by a discussion of Albert Einstein’s application of his new theory of gravitation to cosmology. It then examines the development of a theory about the possibility of an expanding universe, citing the work of such scientists as Edwin Hubble, Alexander Friedmann, Georges Lemaître, and George Gamow; the emergence of the field of nuclear archaeology to account for the origins of the early universe; and the controversy sparked by the steady-state theory. It also describes the discovery of a cosmic microwave background of the kind that Alpher and Herman had predicted in 1948 before concluding with a review of modern cosmological hypotheses such as the idea of ‘multiverse’.
Although cosmology is very old, dating even to preliterate societies, in a scientific sense it is a relatively recent branch of knowledge. Physical cosmology—here taken to be the study of the universe that does not rely only on astronomical observations but also on physical laws and methods—is even younger, and belongs largely to the twentieth century. While physicists today may consider cosmology to belong to physics, in an historical perspective this is not the case. As the study of the universe at large has developed over the last century, it has been a subject mainly cultivated by astronomers, physicists, and mathematicians—and one should not forget the philosophers. Although the philosophical and conceptual problems related to the universe cannot be ignored or cleanly separated from the more scientific aspects, the following summary account focuses on the work achieved by physicists in transforming cosmology into a research field which relies intimately on fundamental physical knowledge.
Twentieth-century cosmology developed neither smoothly nor linearly, but in spite of many mistakes and blind alleys it progressed remarkably. It is understandable that physicists take pride in having unravelled at least some of the major secrets of the universe. ‘Cosmology has become a true science in the sense that ideas not only are developed but also are being tested in the laboratory’, asserted two American astrophysicists in 1988. ‘This is a far cry from earlier eras in which cosmological theories proliferated and there was little way to confirm or refute any of them other than on their aesthetic appeal.’1 Perhaps so, but how did it come to that?
29.1 The Rise of Astrophysics
The physical cosmology of the twentieth century relied on knowledge and methods of astrophysics—a new interdisciplinary research field that emerged in the previous century and did much to change the very definition of astronomy.2 As late as the early nineteenth century, astronomy was conceived as a purely observational science using mathematical methods, not as part of the physical sciences. According to the traditional understanding of astronomy, ‘astrophysics’ was a misnomer and ‘astrochemistry’ even more so. This changed quite drastically after the introduction of spectroscopy, which from its very beginning in the 1860s was used as a method to obtain information about the physical state and chemical composition of the Sun and other celestial bodies.
Even before the invention of the spectroscope, the Austrian physicist Christian Doppler had argued that if a light source is moving relative to the observer with radial velocity v, one would observe a change in its wavelength given by Δλ = λ′ − λ, where λ′ is the measured and λ the emitted wavelength. According to Doppler, the change related to the speed of light c as

Δλ/λ = v/c
The Doppler effect was soon verified for sound waves, whereas its validity for light remained controversial for several decades. Only in 1868 did the British gentleman astronomer and pioneer of astrospectroscopy William Huggins announce that he had found a small shift in wavelength for the light from Sirius, which he took to imply that the star moved away from the Earth. It took another twenty years until the optical Doppler effect was firmly demonstrated for an astronomical body, when the German astronomer Hermann Vogel examined the rotation of the Sun.
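The classical Doppler relation lends itself to a simple numerical check. The following sketch (the wavelengths are purely illustrative, not Huggins’ actual figures) shows the order of magnitude of the velocities involved:

```python
# Classical Doppler shift: Delta(lambda)/lambda = v/c (valid for v << c).
# Illustrative numbers only: an emitted line at 486.1 nm observed shifted
# by 0.05 nm, roughly the order of shift reported for stellar spectra.
C = 299_792.458  # speed of light, km/s

def radial_velocity(lambda_emitted_nm, lambda_observed_nm):
    """Radial velocity (km/s) from the classical Doppler formula; positive = receding."""
    return C * (lambda_observed_nm - lambda_emitted_nm) / lambda_emitted_nm

v = radial_velocity(486.1, 486.15)
print(f"{v:.0f} km/s")  # a shift of 0.05 nm corresponds to ~31 km/s
```

Shifts of this size, a ten-thousandth of the wavelength, explain why the optical Doppler effect took decades to demonstrate convincingly.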
The introduction of spectroscopy, based on the seminal invention of the spectroscope by the Heidelberg physicist Gustav Robert Kirchhoff and his chemist colleague Wilhelm Robert Bunsen, effectively founded the physical and chemical study of the stars.3 With the new method it became possible to identify the chemical elements in stellar atmospheres and to classify stars according to their surface temperature. Astrospectroscopy also led to several suggestions of new elements not known by the chemists, such as helium, nebulium, and archonium. Of these unknown elements, hypothesized on the basis of unidentified spectral lines in the heavens, only helium turned out to be real. The existence of helium in the Sun’s atmosphere was suggested by the British astronomer Norman Lockyer in about 1870, and 25 years later the element was detected in terrestrial sources by his compatriot, the chemist William Ramsay. Helium would turn out to be uniquely important to cosmology, but at the time it was just a curiosity, an inert gas supposed to be very rare and of neither scientific nor industrial interest.
The astrochemistry that emerged in the Victorian era opened up new and exciting questions, such as the possibility that atoms might be complex and perhaps exist in more elemental forms in the stars. Lockyer and a few other scientists ventured to extend the perspective of astrospectroscopy to what may be called ‘cosmospectroscopy’. For example, in an address to the 1886 meeting of the British Association for the Advancement of Science, William Crookes speculated that the elements had not always existed but were formed in the cosmic past under conditions very different from those known today. He imagined that all matter was formed through processes of ‘inorganic Darwinism’ and that it was originally in ‘an ultragaseous state, at a temperature inconceivably hotter than anything now existing in the visible universe’. According to Crookes, matter had not existed eternally but had come into being in the distant past:
Let us start at the moment when the first element came into existence. Before this time, matter, as we know it, was not. It is equally impossible to conceive of matter without energy, as of energy without matter . . . Coincident with the creation of atoms all those attributes and properties which form the means of discriminating one chemical element from another start into existence fully endowed with energy.4
Cosmic speculations of the type entertained by Lockyer and Crookes were one aspect of the new astrophysics; another was the study of heat radiation and its application to astronomy. Spectrum analysis had its background in Kirchhoff’s fundamental investigations of what he called blackbody radiation, the study of which would eventually lead to the hypothesis of energy quantization. Even before Max Planck’s law of blackbody radiation, the physics of heat radiation was applied to estimate the surface temperature of the Sun. In 1895 the German physicist Friedrich Paschen used measurements of the solar constant and the Wien displacement law to determine the temperature of the Sun to about 5,000° C, fairly close to its modern value. Just as scientists about 1900 could not imagine the future importance of helium in cosmology, so they could not imagine how important the law of blackbody radiation would come to be in later cosmological research.
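An estimate of the kind Paschen made is easy to reproduce with modern constants. A minimal sketch, assuming a solar spectrum peaking near 500 nm (an illustrative modern value, not Paschen’s own data, and using the Wien displacement law rather than his solar-constant method):

```python
# Wien's displacement law: lambda_max * T = b, giving a blackbody
# temperature from the wavelength of peak emission.
WIEN_B = 2.898e-3  # Wien displacement constant, m*K

def surface_temperature(lambda_max_m):
    """Blackbody temperature (K) for a spectrum peaking at the given wavelength (m)."""
    return WIEN_B / lambda_max_m

T_sun = surface_temperature(500e-9)
print(f"{T_sun:.0f} K")  # ~5800 K, close to the modern solar surface value
```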
29.2 Physical Cosmology Before Relativity
While astronomers took little interest in cosmological questions in the second half of the nineteenth century, physicists, philosophers, and amateur cosmologists discussed such questions on the basis of the fundamental laws of physics, which at the time meant Newton’s law of gravitation and the two laws of thermodynamics. It was recognized early on that the second law of thermodynamics, expressing a universal tendency towards equilibrium in any closed system, might have profound cosmological consequences. Indeed, Rudolf Clausius, the inventor of the concept of entropy, formulated the second law as ‘the entropy of the world tends towards a maximum’ (my emphasis). If the entropy of the universe continues to increase towards some maximum value it would seem to imply that in the far future the universe would not only become lifeless but also devoid of structure and organization. Once this state was reached, the universe would stay in it for ever. This is the prediction of the ‘heat death’, first stated explicitly by Hermann von Helmholtz in a lecture of 1854 and subsequently discussed by numerous scientists and philosophers. Clausius’ version of 1868 is as follows:
The more the universe approaches this limiting condition in which the entropy is a maximum, the more do the occasions of further change diminish; and supposing this condition to be at last completely attained, no further change could evermore take place, and the universe would be in a state of unchanging death.5
Clausius further pointed out that the second law contradicted the notion of a cyclic universe, the popular view that ‘the same conditions constantly recur, and in the long run the state of the world remains unchanged’.
The heat-death scenario was controversial not only because it predicted an end to all life and activity in the universe, but also because it was sometimes used as an argument for a universe of finite age; that is, a cosmic beginning (which was often thought of as a creation). For, so the argument goes, if the universe had existed for an infinity of time, and entropy had always increased, the heat death would have occurred already; since the universe is manifestly not in a state of maximum entropy, it can have existed for only a finite period of time. The ‘entropic creation argument’, discussed and advocated by Christian scientists in particular, was controversial because of its association with religious ideas of a divinely created world.6 It was severely criticized by philosophers and scientists (including Ernst Mach, Pierre Duhem, and Svante Arrhenius), who denied that the law of entropy increase could be legitimately applied to the universe as a whole. The heated discussion concerning cosmic entropy faded out around 1910, without resulting in either consensus or an improved scientific understanding of the physical state of the universe. Yet the heat-death scenario continued to be part of cosmology in the later phases of that science.
Newton’s law of gravitation had great success in celestial mechanics and possessed an unrivalled scientific authority, and it was generally assumed that the law would be equally successful in accounting for the distribution of the countless numbers of stars that filled the universe, which most physicists and astronomers (following Newton) considered to be infinitely large. However, in 1895 the German astronomer Hugo von Seeliger proved that an infinite Euclidean universe with a uniform mass distribution could not be brought into agreement with Newton’s law of gravitation. It would, as he phrased it, lead to ‘insuperable difficulties and irresolvable contradictions’.7 The following year, in an elaboration of his so-called gravitation paradox, Seeliger suggested a way of avoiding it by modifying Newton’s law at very great distance r. Instead of keeping to Newton’s law of gravitation, he proposed that a body of mass m moving in the gravitational field of a central mass M would experience a force given by

F(r) = (GmM/r²) exp(−Λr)
That is, he introduced an attenuation factor of the form exp(–Λr), where Λ is a very small constant. With this rescue manoeuvre he could escape the gravitational collapse of the infinite Newtonian universe. Other scientists reached the same goal without changing Newton’s law. For example, the Swedish astronomer Carl Charlier showed in 1908 that if the assumption of uniformity was abandoned and replaced by a suitable fractal structure of star systems there would be no gravitational paradox. One way or the other, the majority of astronomers found it difficult to conceive of a universe that was not spatially infinite.
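The effect of Seeliger’s attenuation factor can be illustrated numerically; the value of Λ below is an arbitrary assumption, chosen only to show that the modification is negligible at planetary distances and bites only when r approaches 1/Λ:

```python
# Seeliger's 1896 proposal: Newton's inverse-square force damped by exp(-Lambda*r).
# Lambda (in 1/m) is an illustrative value, not one Seeliger specified.
import math

def newton(m, M, r, G=6.674e-11):
    """Newtonian gravitational force (SI units)."""
    return G * m * M / r**2

def seeliger(m, M, r, Lam, G=6.674e-11):
    """Newton's force multiplied by Seeliger's attenuation factor exp(-Lambda*r)."""
    return newton(m, M, r, G) * math.exp(-Lam * r)

# At r << 1/Lambda the two laws are indistinguishable:
ratio = seeliger(1, 1, 1e11, Lam=1e-22) / newton(1, 1, 1e11)
print(ratio)  # ~1.0 at solar-system distances; the cut-off matters only when r ~ 1/Lambda
```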
Only a handful of physicists and astronomers realized the possibility of a closed and finite space of the type that the mathematician Bernhard Riemann had introduced in the mid-nineteenth century. Although ‘curved’ non-Euclidean space was familiar to the mathematicians, it was rarely considered by physicists and astronomers. Perhaps the first to do so was the German astrophysicist Karl Friedrich Zöllner in a work of 1872, and in 1900 Karl Schwarzschild discussed the possibility that the geometry of space might be determined by astronomical measurements. He expressed a preference for space being closed and finite, but did not develop his ideas into a cosmological theory. By and large, curved space as a resource for cosmology had to wait until Einstein’s general theory of relativity.
One of the great problems in astronomy and cosmology around 1900 was whether the nebulae, and especially the spiral nebulae, were structures similar to the Milky Way or were perhaps much smaller structures located within it. The first view, known as the ‘island universe’ theory, dated from the eighteenth century and received some support from spectroscopic measurements, though without being accepted by the majority of astronomers. According to the alternative view, the Milky Way system was essentially the entire material universe, placed in a possibly infinite space or ethereal medium. ‘No competent thinker’, said the British astronomer Agnes Clerke in 1890, ‘can now, it is safe to say, maintain any single nebula to be a star system of coordinate rank with the Milky Way . . . With the infinite possibilities beyond, science has no concern’.8 The whole question of island universes versus the Milky Way universe remained unsolved until the mid-1920s, when Edwin Hubble succeeded in determining the distance to the Andromeda nebula, thereby providing firm evidence in favour of the island universe theory.
To the extent that there was a consensus view of the stellar universe in the early twentieth century, it slightly favoured the idea that there was nothing outside the limits of the Milky Way. Based on observations and statistical analysis, eminent astronomers such as Seeliger, Schwarzschild, and the Dutchman Jacobus Kapteyn suggested models of the Milky Way universe conceived as a huge non-uniform conglomeration of stars. Their models of the universe had in common that they pictured the Milky Way as an ellipsoidal disk only a few tens of thousands of light years in diameter. Assuming that starlight was not absorbed by interstellar matter, they estimated the mass density of the Milky Way universe to be about 10⁻²³ g/cm³.
29.3 Einsteinian Cosmology
In a pioneering paper of 1917, entitled ‘Kosmologische Betrachtungen zur allgemeinen Relativitätstheorie’ (‘Cosmological considerations on the general theory of relativity’), Einstein applied his new theory of gravitation to the universe at large. In doing so, he was faced with conceptual problems of a kind similar to those which had troubled earlier scientists from Newton to Seeliger. In particular, how can the advantages of a finite universe be reconciled with the necessity for a universe without boundaries? Einstein’s solution was to circumvent the problem by conceiving the universe as spatially closed, in accordance with his general theory of relativity based on non-Euclidean geometry. Guided by what little he knew of the available observational evidence, he suggested that the universe was spatially finite but of a positively curved geometry—‘spherical’ in four dimensions. He also assumed the universe to be static, meaning that the radius of curvature did not vary systematically in time, and that it was filled uniformly with matter of low density.
Einstein’s closed universe was formally described by the field equations of general relativity, but in a modified form which included a ‘for the time being unknown universal constant’. This constant (Λ), which soon became known as the cosmological constant, expressed the mean density of matter ρ in the universe and was thereby also related to the radius of curvature R. The relations announced by Einstein were

Λ = κρ/2 = 1/R²
where κ denotes Einstein’s constant of gravitation, related to Newton’s by κ = 8πG/c². From a physical point of view the new constant could be conceived as causing a cosmic repulsion balancing the attractive force of gravitation. In this respect, Einstein’s constant corresponded to the one that Seeliger had introduced in 1896, but when Einstein published his cosmological theory he was unaware of Seeliger’s work. In a letter of August 1918 he explained his reasons for introducing the cosmological constant:
Either the world has a centre point, has on the whole an infinitesimal density, and is empty at infinity, whither all thermal energy eventually dissipates as radiation. Or: All the points are on average equivalent, the mean density is the same throughout. Then a hypothetical constant λ [Λ] is needed, which indicates at which mean density this matter can be at equilibrium. One definitely gets the feeling that the second possibility is the more satisfactory one, especially since it implies a finite magnitude for the world.9
Contrary to Einstein’s original belief that his closed model was the only solution to the cosmological field equations, later in 1917 the Dutch astronomer Willem de Sitter produced another solution corresponding to an alternative world model.10 Remarkably, de Sitter’s model contained no matter (ρ = 0), but was nonetheless spatially closed. Moreover, it followed from the spacetime metric that light emitted by a test body would be redshifted, almost as if the body were moving away from the receiver. In fact, although the ‘de Sitter effect’ was not a Doppler effect, de Sitter suggested that it might be related to the measurements of nebular radial velocities that Vesto Slipher had been reporting since 1913. Working at the Lowell Observatory in Arizona, Slipher found that the light from most spiral nebulae was shifted towards the red end of the spectrum, indicating a recessional velocity of up to 2,000 km/s. In the 1920s the galactic redshifts attracted increasing attention among astronomers and cosmologists, who suspected some simple relation between the redshifts of the spirals and their distances. The observed redshifts were generally seen as evidence for de Sitter’s ‘model B’, while the undeniable existence of matter in the universe counted against it and for Einstein’s ‘model A’.
Whatever the credibility of solutions A and B as candidates for the real structure of the universe, from about 1920 there developed a minor industry based on the two models. It was predominantly a mathematical industry, with mathematically-minded physicists and astronomers analysing the properties of the two solutions and proposing various modifications of them. Although the industry was dominated by mathematicians and theoretical physicists, at least some astronomers were aware of the cosmological models and endeavoured to relate them to observations. During the course of their work to understand and elaborate the two relativistic world models, a few astronomers and physicists proposed solutions that combined features of model A and model B. Towards the end of the 1920s there was a tendency to conclude that neither of the two models could represent the real universe, yet finding a compromise turned out to be frustratingly difficult.11
If both Einstein’s model A and de Sitter’s model B were inadequate, and if these were the only solutions, how could cosmology still be based on the theory of general relativity? The alternative of abandoning general relativity and returning to some classical framework was not seriously considered. The ‘obvious’ solution—to search for evolutionary models different from both A and B—had already been published at the time, but was as unknown to most cosmologists as it was unwelcome. The conceptual climate that governed mathematical cosmology in the 1920s was that of a physically static universe, and the scientists engaged in the field tried hard (but of course unconsciously) to avoid breaking with the paradigm.
Although the mathematical cosmology of the 1920s did not include the physics of matter and radiation, there were a few attempts to adopt a more physical approach to the study of the universe. For example, in 1928 a Japanese physicist, Seitaro Suzuki, developing earlier ideas of Richard Tolman in the United States, investigated the cosmological significance of the relative abundances of hydrogen and helium, the two most common elements in the universe. He suggested, almost prophetically, that the mass ratio could be explained only on the assumption of an early state of the universe at a temperature higher than 10⁹ degrees. It is not clear whether Suzuki believed that such a state had actually existed in the early universe.
The first application of thermodynamics to the Einstein universe model was made in a study by the German physicist Wilhelm Lenz, who in 1926 found that the temperature T of an enclosed blackbody radiation would depend on the radius of curvature R, as

T² = (1/R) √(2c²/(κa))
where a denotes the constant in the Stefan–Boltzmann radiation law. Since a can be expressed by Planck’s constant h, Lenz implicitly introduced quantum theory in a cosmological context. A more explicit connection had been suggested the previous year by Cornelius Lanczos, a Hungarian theorist who, in a study of a periodic Einstein-like world model, was led to introduce a certain ‘world period’ P, depending on the radius of the universe:

P = 4π²mR²/h
where m denotes the mass of the electron. Although neither Lanczos nor others developed the idea, Lanczos’ suggestion that the quantum nature of microphysics reflected the state of the cosmos is noteworthy. ‘The solution of the quantum secrets’, he wrote, ‘are hidden in the spatial and temporal closedness of the world’.12 The same kind of micro–macro theorizing would later form the basis of Arthur Eddington’s unorthodox ‘fundamental’ theory, which he developed in great detail in the 1930s.
29.4 The Expanding Cosmos
The collapse of the static-universe paradigm took place by the complex interaction of two widely separate approaches—one observational and the other theoretical. By the late 1920s, Hubble turned to the problem of the redshifts of extragalactic nebulae and the supposed relationship between redshift and distance. Mostly relying on the redshifts found earlier by Slipher, in an important paper of 1929 he concluded that they varied nearly linearly with the distances of the galaxies—a relationship he expressed as

v = Hr
Hubble did not really measure recessional velocities v, but understood v as ‘apparent velocities’—that is, redshifts that could be conveniently transformed to velocities by means of the Doppler formula. In his paper of 1929 he suggested vaguely that spectral shifts might be interpreted in terms of de Sitter’s cosmological model, but his general attitude was to stay away from interpretations and keep to observational data. For the empirical constant H, eventually known as the Hubble constant or parameter, he derived a value of about 500 km/s/Mpc (1 Mpc = one megaparsec ≅ 3.26 million light years). Whereas the linear relation of 1929 was not particularly convincing, new and much improved measurements two years later, now published jointly with his assistant Milton Humason, left no doubt that the redshift–distance relation was linear and real.
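The numerical implications of Hubble’s value are easily worked out. The sketch below (with rounded modern conversion factors) shows the linear relation at work, and also the characteristic expansion timescale 1/H that H ≅ 500 km/s/Mpc implies—about two billion years, a figure that would soon prove troublesomely short:

```python
# Hubble's 1929 relation v = H*r, and the timescale 1/H it implies.
H_1929 = 500.0            # km/s per Mpc, Hubble's original value
KM_PER_MPC = 3.086e19     # kilometres in one megaparsec
SECONDS_PER_YEAR = 3.156e7

def recession_velocity(distance_mpc, H=H_1929):
    """Apparent recession velocity (km/s) at a given distance (Mpc)."""
    return H * distance_mpc

# 1/H, converted from (Mpc*s)/km to years:
t_hubble_years = KM_PER_MPC / (H_1929 * SECONDS_PER_YEAR)
print(recession_velocity(2.0))          # 1000.0 km/s at 2 Mpc
print(round(t_hubble_years / 1e9, 1))   # ~2.0 (billion years)
```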
It is important to realize that Hubble did not conclude, either in 1929 or in later publications, that the universe was in a state of expansion.13 A cautious empiricist, Hubble adhered to his data and refrained from clearly interpreting them as evidence for an expanding universe. Nor did other astronomers and physicists initially consider the Hubble relation as observational proof that the universe is expanding. It took more than observations to effect a change in the world view from a static to an expanding universe. This change occurred only in the early part of 1930, when it was recognized that both theory and observation strongly indicated that the static universe was no longer tenable.14
Unknown to Hubble and most others in the scientific community, the possibility of an expanding universe had already been formulated by theorists—first by the Russian physicist Alexander Friedmann in a paper appearing in Zeitschrift für Physik in 1922. In his systematic analysis of Einstein’s cosmological field equations, Friedmann showed that the solutions of Einstein and de Sitter exhausted all the possibilities for stationary world models. More importantly, for closed models he found a variety of dynamic solutions where the curvature of space depends on time, R = R(t). These solutions included a class in which the world began at R = 0 for t = 0 and subsequently expanded monotonically. Another of the solutions to which Friedmann called attention corresponded to a cyclic universe, starting at R = 0 and ending, after having passed R = Rmax, at R = 0. He examined world models both with and without a cosmological constant, and in a companion paper of 1924 he extended his analysis by examining also open models with negative space curvature.
The Friedmann equations described all possible dynamic world models satisfying the cosmological principle—that is, assuming the universe to be homogeneous and isotropic at a large scale. However, although the Russian theorist clearly realized the significance of evolutionary or dynamic world models, the emphasis of his work was on the mathematical rather than the physical and astronomical aspects. He did not highlight expanding solutions nor argue that the universe we observe is in fact in a state of expansion. Nor did he refer to astronomical data such as Slipher’s galactic redshifts. The mathematical style and character of Friedmann’s work may have been one reason why it failed to attract attention, to be rediscovered several years later. Even the few physicists who were aware of his work, such as Einstein, did not see it as a strong argument against the static world. Friedmann’s singularly important work is a prime example of what has been called prematurity in scientific discovery.15
When five years later the Belgian theorist Georges Lemaître arrived at the conclusion that the universe expands in conformity with the laws of general relativity, he was unaware of Friedmann’s earlier work. Motivated by a desire to find a solution that combined the advantages of Einstein’s model A and de Sitter’s model B, in 1927 he found the same differential equations for R(t) that Friedmann had published earlier. Although from a mathematical point of view Lemaître’s work was very similar to Friedmann’s, from the point of view of physics and astronomy it differed strikingly from it. Trained in both astronomy and physics, Lemaître wanted to find the solution that corresponded to the one and only real universe as described by astronomical data, and by galactic redshifts in particular. He concluded that the best model was a closed universe expanding monotonically from a static Einstein state with a world radius of about 270 Mpc. As the expansion continued, the mass density would gradually diminish and eventually approach that of the empty de Sitter state.
In striking contrast to Friedmann’s paper, galactic redshifts were all-important to Lemaître, who explained that they had to be understood as a cosmic effect of the expansion of the universe. The redshifts were caused by the galaxies being carried with the expanding space in such a way that the ‘apparent Doppler effect’ expressed the increase in R between emission and reception of light. For the approximate relationship between recession velocity and distance, he found

v = kr
where k is a quantity that depends on the time since the expansion began—an early version of the Hubble constant. He estimated from astronomical data that k ≅ 625 km/s/Mpc. As he further noted, at some time in the future, when z became equal to one, the universe would expand so rapidly that no light from the galaxies would reach us.
Lemaître’s theory of 1927 is today recognized as a cornerstone in cosmology and the true foundation of the expanding universe, but at the time it made no more impact than Friedmann’s earlier work. It is telling that Einstein (who knew about the works of both Friedmann and Lemaître) in an article of 1929 for the Encyclopaedia Britannica maintained his faith in the static universe. ‘Through the general theory of relativity’, he wrote, ‘the view that the continuum is infinite in its time-like extent but finite in its space-like extent has gained in probability’.16 The expanding universe only became a reality in the spring of 1930, after Eddington, de Sitter, and other leading astronomers became aware of Lemaître’s theory and realized how well it fitted with Hubble’s empirical redshift–distance relation. Eddington quickly abandoned the static universe, replacing it with Lemaître’s expanding model (which for this reason is known as the ‘Lemaître–Eddington model’). Einstein too accepted the dynamic solutions of Friedmann and Lemaître, realizing that in an expanding universe the cosmological constant was not necessary. He had long been dissatisfied with the Λ constant he had introduced in 1917, and now abandoned it for good.
By 1935 the theory of the expanding universe was accepted by the majority of astronomers and physicists, and was the subject of detailed investigations in the scientific literature. It was also disseminated to the public through a number of popular works, such as James Jeans’ The Mysterious Universe (1930), James Crowther’s An Outline of the Universe (1931), de Sitter’s Kosmos (1932), and Eddington’s The Expanding Universe (1933). Far more demanding were the few textbooks oriented towards the small community of cosmologists, of which the most important were Tolman’s Relativity, Thermodynamics and Cosmology (1934) and Otto Heckmann’s Theorien der Kosmologie (1942).
General acceptance of the expanding universe among leading physicists and astronomers did not extend to the entire scientific community. A considerable minority denied that the universe was in a state of expansion, and consequently had to produce alternative explanations of the observed redshifts. There was a variety of ways in which this could be done on the basis of a static and eternal conception of the universe. Some scientists felt attracted by explanations according to which the redshifts were caused by gravitational mechanisms or by a hypothesis of ‘tired light’, as advocated in different ways by Fritz Zwicky, William MacMillan, and Walther Nernst. Although these kinds of non-expansion explanation were fairly popular in the 1930s, they did not succeed in reinstating the static universe at the expense of the new expanding paradigm. Mainstream astrophysicists and cosmologists agreed that the expansion was real, though a few thought that it was not a consequence of the general theory of relativity. From about 1933 to 1948 the alternative cosmological system of the Oxford physicist Edward A. Milne attracted great interest. Milne’s cosmos was uniformly expanding, but was not governed by Einstein’s equations and did not operate with the notion of curved space. After Milne’s death in 1950 his cosmology based on ‘kinematic relativity’ ceased to attract attention.
As illustrated by the Lemaître–Eddington model, a continually expanding universe does not need to have a finite age. Although Friedmann had formally introduced the idea of a finite-age or ‘big bang’ universe in his work of 1922, in a physical–realistic sense it dates from 1931. In a brief letter to Nature of 9 May that year, Lemaître made the audacious proposal that the universe had once come into being in what he picturesquely described as a huge radioactive explosion of a primeval ‘atom’ of nuclear density (ρ ≅ 10¹⁵ g/cm³). Following the original explosion ‘by a kind of super-radioactive process’, the universe expanded at a furious rate and eventually evolved to the present state of a low-density expanding universe (ρ ≅ 10⁻³⁰ g/cm³). While Lemaître’s first communication was brief and purely qualitative, he soon developed it into a proper scientific theory based on the cosmological field equations including a positive cosmological constant. In the first detailed account ever of the ‘big bang theory’ (a name not yet coined), he described it as follows:
The first stages of the expansion consisted of a rapid expansion determined by the mass of the initial atom, almost equal to the present mass of the universe . . . The initial expansion was able to permit the radius [of space] to exceed the value of the equilibrium radius [of the Einstein world]. The expansion thus took place in three phases: a first period of rapid expansion in which the atom-universe was broken down into atomic stars, a period of slowing-down, followed by a third period of accelerated expansion. It is doubtless in this third period that we find ourselves today.17
Lemaître’s model of 1931 was a solution to the Friedmann equations, which allowed for several more world models sharing the feature of R = 0 at t = 0. One such model was proposed jointly by Einstein and de Sitter, who in 1932 presented a theory of a flat, continually expanding universe without a cosmological constant. It followed from the theory that the expansion was just balanced by the gravitational attraction, leading to a ‘critical’ value of the mass-density given by ρ_c = 3H²/8πG, where H is the Hubble constant and G the constant of gravitation.
The scale factor—a measure of the distance between two galaxies—increased slowly in cosmic time according to R ~ t^(2/3). The Einstein–de Sitter model belonged to the ‘big bang’ class, since R = 0 at t = 0, but this was a feature with which neither Einstein nor de Sitter was comfortable, and which they consequently avoided. Like most other physicists and astronomers, they did not consider Lemaître’s proposal to be a realistic candidate for the evolution of the cosmos.18
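The numbers involved can be made concrete with a short calculation. The sketch below evaluates the critical density ρ_c = 3H²/8πG of the Einstein–de Sitter model; the value H₀ = 70 km/s/Mpc is an assumed modern figure chosen purely for illustration (the value accepted in 1932 was several times larger).

```python
import math

# Critical density of the Einstein-de Sitter model: rho_c = 3 H^2 / (8 pi G).
# H0 = 70 km/s/Mpc is an assumed modern value for illustration only;
# the Hubble constant accepted in the 1930s was far larger.
G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
MPC = 3.0857e22          # one megaparsec in metres
H0 = 70e3 / MPC          # Hubble constant converted to s^-1

rho_c = 3 * H0**2 / (8 * math.pi * G)   # kg/m^3
print(f"rho_c = {rho_c:.2e} kg/m^3")    # ~9e-27 kg/m^3, a few hydrogen atoms per cubic metre
```

With a larger (1930s-era) Hubble constant the same formula gives a proportionally larger critical density, since ρ_c scales as H².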
The general response to the idea of a ‘big bang’ in the distant past was to ignore it or reject it as an unfounded speculation. After all, why believe in it? If the universe had really once been in a highly compact, hot, and radioactive state, would it not have left some traces that could still be subjected to analysis? Lemaître believed that there were indeed such fossils from the far past, and that these were to be found in cosmic rays; but his suggestion failed to receive support. In addition to the lack of observational evidence, the proposal of a world born in a radioactive explosion was thought to be contrived and conceptually weird. Few cosmologists in the 1930s were ready to admit ‘the beginning of the universe’ as a question that could be addressed meaningfully by science. Although the majority of cosmologists wanted to avoid the question, finite-age models of roughly the same kind as Lemaître’s were not dismissed entirely. Among the few physicists who took an interest in such models were Paul Dirac in England, Pascual Jordan in Germany, and George Gamow in the United States.
29.5 Nuclear Archaeology and the Early Universe
Lemaître’s ‘fireworks’ universe never received serious attention, but after the Second World War it was independently revived by the Russian-born American physicist George Gamow—a pioneer of nuclear physics—and a small group of collaborators. Gamow’s approach to early-universe cosmology differed markedly from that of earlier researchers, in the sense that it focused on nuclear and particle physics with the aim of explaining the build-up of elements shortly after the ‘big bang’ at t = 0. Ignoring the conceptual problems of the ‘creation’ of the universe, Gamow considered the very early universe to be an extremely hot and compact crucible, an exotic laboratory for nuclear physical calculations. If he could calculate, on the basis of nuclear physics, how the elements were formed in the original inferno, and the calculated element abundances corresponded to those found in nature, he would have provided strong evidence in favour of the ‘big bang’ origin of the universe. This general research programme had been considered by a few earlier physicists—notably by Carl Friedrich von Weizsäcker in a work of 1939—but without being developed to any extent. The goal of what has aptly been called ‘nuclear archaeology’ was to reconstruct the history of the universe by means of hypothetical cosmological or stellar nuclear processes, and to test these by examining the resultant pattern of element abundances.19
In 1946 Gamow published his first preliminary paper, in which he argued that the problem of the origin of the elements could be solved only by combining the relativistic expansion formulae with the experimentally known rates of nuclear reactions. Together with his research assistant Ralph Alpher, two years later he presented a much-improved version of the ‘big bang scenario’ which assumed as a starting point a high-density ‘soup’ of primordial neutrons. Being radioactive, some of these would decay into protons and electrons, and the protons would combine with neutrons to form deuterons, and eventually, by the mechanism of neutron capture, heavier nuclei. Calculations based on this picture were promising insofar as they could be fitted to produce a nuclear-abundance curve not very different from the one known observationally. More or less independently, Alpher and Gamow realized that at the high temperature required for the nuclear reactions (about 10⁹ K), radiation would dominate over matter and continue to do so until the universe had cooled as a result of the expansion. To produce a reliable picture of the early universe, they had to take into account both matter and radiation.
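The clock governing Alpher and Gamow’s scenario is the beta decay of the free neutron. A minimal sketch of the surviving fraction, assuming the modern mean lifetime of about 880 s (a precision not available to them at the time):

```python
import math

# Fraction of free neutrons surviving after time t: N/N0 = exp(-t/tau),
# where tau is the neutron mean lifetime (~880 s by modern measurements;
# an assumed figure here, used only to illustrate the time-scale).
TAU_N = 880.0  # seconds

def surviving_fraction(t_seconds: float) -> float:
    return math.exp(-t_seconds / TAU_N)

# After a quarter of an hour roughly a third of the neutrons remain,
# so deuteron-building captures must compete with beta decay.
print(f"{surviving_fraction(900):.2f}")
```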
In a work of 1948, with Robert Herman, another of Gamow’s associates, Ralph Alpher found that the initially very hot radiation would have cooled with the expansion and today appear as low-intensity radiation with a temperature of about 5 K. They argued that this background radiation still filled the entire universe, and thus in principle should be detectable as a feeble fossil from the early radiation-dominated universe. However, Alpher and Herman’s brilliant prediction of cosmic background radiation failed to attract the interest of physicists and astronomers.20 It would be another seventeen years before the background radiation was discovered, and then with dramatic consequences for the development of cosmology.
The research programme carried out mainly by Gamow, Alpher, and Herman focused on the formation of heavier atomic nuclei in the very early universe. Detailed calculations made in the early 1950s resulted in a cosmic helium content between 29% and 36% (by weight), which was in satisfactory agreement with the observed amount of helium in the universe, at the time known only very roughly. On the other hand, Gamow and his group failed to account for the heavier elements of atomic number Z > 2, which was seen as a serious problem and even a reason to dismiss the theory. Another problem—which Gamow’s theory shared with most other finite-age models—was that it led to too short a time-scale. The age of the universe τ in terms of the Hubble time (the inverse of the Hubble constant, T = 1/H) follows from a particular cosmological model—in the case of the Einstein–de Sitter model, τ = 2T/3. Since apparently reliable astronomical measurements indicated T ≅ 2 billion years, this led to a universe younger than the Earth! (The problem disappeared in the mid-1950s when it was shown that H is much smaller than believed previously.)
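The time-scale difficulty can be spelled out in two lines of arithmetic, using the pre-1950s figure T ≅ 2 billion years and the Einstein–de Sitter relation τ = 2T/3 (the Earth's modern radiometric age of about 4.5 billion years is assumed for comparison):

```python
# Age of an Einstein-de Sitter universe: tau = (2/3) * T, with T = 1/H the
# Hubble time. T ~ 2 Gyr was the apparently reliable pre-1950s measurement.
T_HUBBLE_GYR = 2.0       # Hubble time in billions of years (1940s value)
EARTH_AGE_GYR = 4.5      # modern radiometric age of the Earth, for comparison

tau = 2.0 * T_HUBBLE_GYR / 3.0
print(f"tau = {tau:.2f} Gyr")     # ~1.33 Gyr
print(tau < EARTH_AGE_GYR)        # True: a universe 'younger' than the Earth
```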
The low appreciation of Gamow’s ‘hot big bang’ model in the 1950s is illustrated by the view of three distinguished British physicists, who in 1956 concluded about the theory: ‘At present, this theory cannot be regarded as more than a bold hypothesis’.21 In fact, by that time the theory had effectively come to a halt. A dozen or so physicists (but no astronomers!) had been engaged in developing Gamow’s theory after 1948, but a few years later interest decreased drastically. Between 1956 and 1964 only a single research paper was devoted to what around 1950 had appeared to be a flourishing research programme. The reasons for this remarkable lack of interest are complex, and must be ascribed to both social and scientific factors. The reason was not that the theory had been refuted by observations in any direct sense, for it had not. The scientific problems that faced Gamow’s theory of the ‘big bang’—a name that dates from about 1950—were not the only reason for its decline. Another reason was that it was widely seen as a theory of the creation of the universe—a concept which most physicists and astronomers considered to be outside the domain of science, or even pseudoscientific.
It is also worth recalling that although Gamow’s theory built on the authoritative general theory of relativity (in the form of the Friedmann equations), it did not follow from it. Some relativistic models, such as the Einstein–de Sitter model, start with a singularity—a ‘state’ in which all matter and space are concentrated in a single point. But the idea of a nuclear explosion—as first suggested by Lemaître and later in much greater detail by Gamow and Alpher—is a foreign element added to general relativity theory rather than being part of it. This helps explain why many astronomers who were in favour of a finite-age universe starting in a singularity were nonetheless opposed to Gamow’s scenario of the early universe. For example, the English–American mainstream cosmologist George McVittie was a supporter of a relativistic evolution universe of, say, the Einstein–de Sitter type, but criticized those ‘imaginative writers’ who had woven fanciful notions such as the ‘big bang’ round the predictions of general relativistic cosmology.
29.6 A Cosmological Controversy
At a time when ‘big bang’ cosmology was still a somewhat immature research programme, it and similar relativistic models were challenged by an entirely different theory of the universe, initially referred to as the ‘continuous-creation’ theory, but soon to become known as the ‘steady-state’ model. The basic message of the steady-state theory was that the large-scale features of the universe had always been, and would always be, the same, which implied that there was neither a cosmic beginning nor a cosmic end. However, contrary to earlier cosmological views of this kind, the new theory accepted the expansion of the universe as an observational fact.
The steady-state theory had its beginning in discussions between three young Cambridge physicists, Fred Hoyle, Hermann Bondi, and Thomas Gold, who agreed that the standard evolution cosmology based on Einstein’s field equations was deeply unsatisfactory. Their alternative was developed in two rather different versions—one by Hoyle, and the other jointly by Bondi and Gold. Both of the founding papers appeared in summer 1948 in Monthly Notices of the Royal Astronomical Society. Although Hoyle’s approach differed considerably from that of Bondi and Gold, the two theories had so much in common that they were seen generally as merely two versions of the same theory of the universe. Both papers were characterized by philosophically based objections to the standard cosmology based on the Friedmann equations. For example, Hoyle considered what he called ‘creation-in-the-past’ theories to go ‘against the spirit of scientific enquiry’, because the creation could not be causally explained. Bondi and Gold similarly objected to the lack of uniqueness of the standard relativistic theory:
In general relativity a very wide range of models is available, and the comparisons [between theory and observation] merely attempt to find out which of these models fits the facts best. The number of free parameters is so much larger than the number of observational points that a fit certainly exists, and not even all the parameters can be fixed.22
Bondi and Gold proposed to extend the ordinary cosmological principle into what they called the perfect cosmological principle. This principle or assumption remained the defining foundation of the steady-state theory, especially as conceived by Bondi and Gold and their relatively few followers. It states that the large-scale features of the universe do not vary with either space or time. According to Bondi and Gold it was a postulate or fundamental hypothesis: ‘We regard the principle as of such fundamental importance that we shall be willing if necessary to reject theoretical extrapolations from experimental results if they conflict with the perfect cosmological principle even if the theories concerned are generally accepted’.23 One could argue for the perfect cosmological principle philosophically or in terms of the consequences it implied, but not derive it from other physical laws. Although it could be supported by observations, it could not be proved observationally. On the other hand, it could be disproved by observations—namely, if consequences of the principle were unequivocally contradicted by observations. What might look to be an a priori principle, and was often accused of being one, was not really of such a nature.
The expansion of the universe apparently contradicts the perfect cosmological principle because the expansion implies that the average density of matter decreases with time. Rather than admitting a contradiction, the steady-state theorists drew the conclusion that matter is continually and spontaneously created throughout the universe at such a rate that it precisely compensates for the thinning-out caused by the expansion. Bondi and Gold showed easily that the creation rate must be given by 3ρH ≈ 10⁻⁴³ g/s/m³, where ρ is the average density of matter and H is the Hubble constant. The small creation-rate made it impossible to detect the creation of new matter by direct experiment, yet the hypothesis did have detectable consequences. The new matter was supposed to be created in the form of hydrogen atoms, or perhaps neutrons or electrons and protons separately, but this was no more than a reasonable assumption.
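The figure 3ρH follows from a one-line bookkeeping argument; in modern notation (not Bondi and Gold's own), the derivation runs:

```latex
% Expansion alone dilutes matter in a comoving volume V \propto R^3:
\left(\frac{d\rho}{dt}\right)_{\mathrm{expansion}}
  = -3\,\frac{\dot{R}}{R}\,\rho = -3H\rho .
% The perfect cosmological principle demands \dot{\rho}=0, so matter must be
% created at the compensating rate per unit volume:
\left(\frac{d\rho}{dt}\right)_{\mathrm{creation}} = +3H\rho .
```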
The simple steady-state theory of the universe led to several definite and testable predictions, and in addition to these, to several consequences of a less definite nature—qualitative expectations of what new observations would be like. Thus, it followed from the theory not only that the matter density must remain constant, but also that it had the definite value of ρ = 3H²/8πG, which is precisely the critical density characteristic of the 1932 Einstein–de Sitter model. While in the relativistic model this density implies a slowing down of the expansion, in the steady-state model it corresponds to an exponentially growing expansion. According to this theory, the so-called deceleration parameter—a measurable quantity that expresses the rate of slowing down of the expansion—must have the value q₀ = −1. There were a few other predictions, including that the ages of galaxies must be distributed according to a certain statistical law. In other words, the steady-state theory led to unambiguous predictions that could be compared with measurements. In contrast to the class of ‘big bang’ theories, it was eminently falsifiable.
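That an exponentially expanding universe has deceleration parameter exactly −1 can be checked numerically. The sketch below evaluates q = −R̈R/Ṙ² for R(t) = e^{Ht} by central finite differences; the values of H, t₀, and the step size are arbitrary illustrative choices:

```python
import math

# Deceleration parameter q = -R''(t) R(t) / R'(t)^2, evaluated numerically
# for the steady-state scale factor R(t) = exp(H t).
H = 2.0e-18    # s^-1, an assumed order-of-magnitude Hubble constant
t0 = 1.0e17    # s, an arbitrary epoch
h = 1.0e13     # s, finite-difference step

R = lambda t: math.exp(H * t)
R1 = (R(t0 + h) - R(t0 - h)) / (2 * h)             # first derivative
R2 = (R(t0 + h) - 2 * R(t0) + R(t0 - h)) / h**2    # second derivative

q = -R2 * R(t0) / R1**2
print(f"q0 = {q:.3f}")   # -1.000 for exponential expansion, independent of H and t0
```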
The theory proposed by Hoyle, Bondi, and Gold was controversial from its very beginning, and widely considered provocative—not least because of its assumption of spontaneous creation of matter, in apparent contradiction to the law of energy conservation. The response to the theory was in part based on observations, but to no less an extent also on objections of a more philosophical nature. As Bondi and Gold had applied methodological and epistemic arguments in favour of the new theory, so its opponents applied similar arguments against it. Foremost among the critics was Herbert Dingle, an astrophysicist and philosopher of science, who since the 1930s had fought a crusade against cosmologies of the more rationalistic kind, such as Milne’s. He now felt it necessary to warn against steady-state cosmology, which he accused of being dogmatic and plainly non-scientific. In an unusually polemical presidential address of 1953 delivered before the Royal Astronomical Society he charged that the new cosmological model was a cosmythology—a mathematical dream that had no credible connection with physical reality:
It is hard for those unacquainted with the mathematics of the subject, and trained in the scientific tradition, to credit that the elementary principles of science are being so openly outraged as they are here. One naturally inclines to think that the idea of the continual creation of matter has somehow emerged from mathematical discussion based on scientific observation, and that right or wrong, it is a legitimate inference from what we know. It is nothing of the kind, and it is necessary that that should be clearly understood. It has no other basis than the fancy of a few mathematicians who think how nice it would be if the world were made that way.24
Dingle judged the perfect cosmological principle totally unacceptable—to be ad hoc as well as a priori. According to him, the principle had precisely the same dubious nature as the perfectly circular orbits and immutable heavens of Aristotelian cosmology. These were more than mere hypotheses; they were central elements in the paradigm that ruled ancient and medieval cosmology, and as such they were inviolable within the framework of the paradigm. Dingle claimed that the perfect cosmological principle had a similar status.
Despite fundamental disagreements, both parties in the cosmological debate agreed that ultimately the question had to be settled by observation and not by philosophical argument. Bondi, who was greatly inspired by Karl Popper’s falsificationist philosophy of science, declared that the steady-state theory would have to be abandoned if observations contradicted just one of its predictions. But he and other steady-state theorists also emphasized that in any conflict between observation and theory, observation was as likely to be found at fault as was theory. A variety of tests, some direct and others indirect, were used in the controversy between the steady-state model and relativistic evolution models. Among the more important were (i) galaxy formation, (ii) nucleosynthesis, (iii) redshift–magnitude relationship, (iv) radio-astronomical source counts, (v) distribution of quasars, and (vi) the cosmic microwave background.
It was agreed that any acceptable cosmological theory should be able to explain the formation and distribution of galaxies—a difficult problem that was approached in different ways by the two competing theories. After much theoretical work the situation was undecided in the sense that the problem was realized to be too complex to warrant any definite conclusion with regard to the two rival conceptions of the world. In other words, galaxy formation failed to work as the test it was hoped to be. Much the same was the case with regard to the problem of nucleosynthesis, where the ‘big bang’ theories could explain the formation of helium, but not the formation of heavier elements. According to the steady-state theory all elements had to be the products of nuclear reactions in the interior of stars. The first satisfactory explanation of this kind appeared in an ambitious and comprehensive theory published in 1957 by Fred Hoyle in collaboration with William Fowler, Margaret Burbidge, and Geoffrey Burbidge. The so-called B²FH theory was a landmark work in stellar nucleosynthesis, but weak with respect to the predicted amount of helium and deuterium. By the early 1960s the general view was that nucleosynthesis could probably not yet be used to distinguish unambiguously between the two cosmological theories.
A more promising and relatively straightforward test seemed to derive from the variation of the speed of recession of the galaxies with their distances, from which the deceleration parameter and the curvature of space could be inferred. Whereas evolution cosmologies predicted that the recessional velocity would be disproportionately greater for distant (older) galaxies, according to the steady-state model the velocity would increase in direct proportion to the distance. As mentioned, the deceleration parameter of the steady-state theory was as small as q₀ = −1, which distinguished it from most evolutionary models based on the Friedmann equations. Data due to Humason and Allan Sandage at Mount Wilson Observatory indicated a slowed-down expansion, corresponding to a q₀ value considerably greater than −1. This was in agreement with the evolutionary view, but the data were not certain enough to constitute a crucial test, except for scientists (such as Sandage) already convinced about the truth of ‘big bang’ cosmology. Although Sandage and most other astronomers believed that the accumulated redshift–magnitude observations spoke against the steady-state alternative, the alleged refutation was not clear enough to convince those in favour of the theory.
The most serious challenge to steady-state cosmology came from the new science of radio astronomy. Martin Ryle of Cambridge University, a leader of the new science, soon became an active opponent of the steady-state theory, which, he considered, was ruled out by data from radio sources showing their distribution with regard to intensity. Ryle’s group found a distribution which disagreed flatly with the prediction of steady-state cosmology but could be accommodated by the class of ‘big bang’ theories. Consequently, he concluded that ‘there seems no way in which the observations can be explained in terms of a steady-state theory’.25 Although this conclusion from Ryle’s Halley Lecture of 1955 turned out to be premature—the data were not nearly as good as Ryle thought—improved data of 1961 did speak out clearly against the cosmological theory of Hoyle and his allies. The radio-astronomical test was accepted by the large majority of astronomers as the final overthrow of steady-state cosmology. However, although the radio-astronomical consensus weakened the theory considerably, it was just possible to keep it alive by introducing suitable modifications, which was what Hoyle and a few other steady-state protagonists did. In spite of the solid foundation of Ryle’s new data, none of the steady-state cosmologists accepted that they amounted to a refutation of their theory, and none of them converted to the view of evolutionary cosmology because of the verdict from radio astronomy.
29.7 Microwaves from the Heavens
The steady-state theory of the universe received its death-blow in 1965 with the discovery of a cosmic microwave background of the kind that Alpher and Herman had predicted in 1948. Apparently unaware of the earlier prediction, in 1964 the Princeton physicist Robert Dicke came to suspect the existence of cold blackbody radiation as a relic from a cosmic bounce in which an earlier universe, after having suffered a ‘big crunch’, was reborn in a ‘big bang’. In early 1965, James Peebles, a former student of Dicke’s, calculated the temperature of the hypothetical radiation to be about 10 K, and preparations were made in Princeton to measure the radiation. But before they got that far, they learned about experiments made by two physicists at Bell Laboratories. Experimenting with a radiometer redesigned for use in radio astronomy, Arno Penzias and Robert Wilson found an antenna temperature of 7.5 K where it should have been only 3.3 K. They were unable to explain the reason for the discrepancy and only realized that the excess temperature—what they had thought of as ‘noise’—was of cosmological origin when they saw a preprint of Peebles’ work.
The discovery of the cosmic background radiation, rewarded with the Nobel Prize, was serendipitous, insofar as the Penzias–Wilson experiments were not aimed at finding a radiation of cosmological significance and were not initially interpreted as such. It is also noteworthy that the discovery and interpretation were made by physicists. None of the discoverers of the microwave background—arguably the most important discovery in modern cosmology—was an astronomer or a specialist in astrophysics. The Bell and Princeton physicists published their work as companion papers in the July 1965 issue of the Astrophysical Journal, reporting the discovery and interpretation of fossil radiation from the ‘big bang’. While Penzias and Wilson simply announced their finding of an excess temperature at wavelength 7.3 cm, the Princeton group (Dicke, Peebles, Peter Roll, and David Wilkinson) covered the cosmological implications. Neither of the papers mentioned the earlier works of Alpher and Herman.26
The ‘big bang’ interpretation of the observed 7.3-cm microwave background was accepted immediately by the majority of astronomers and physicists, who welcomed it as final proof that the universe had come into being some 10 billion years ago in an explosive event. If the observation of Penzias and Wilson was to be interpreted in this way, the radiation had to be blackbody-distributed, the demonstration of which obviously required more than a single wavelength. The extension to other wavelengths took time, but by the early 1970s there remained no doubt that the spectrum was in fact that of blackbody radiation at a temperature of about 2.7 K. The importance of the 1965 discovery was, first of all, that it provided strong support to the ‘big bang’ picture and effectively ruled out alternative views of the universe. Whereas the microwave background followed naturally from ‘big bang’ assumptions, it could be reproduced from steady-state cosmology only by introducing additional hypotheses of an ad hoc nature. This strategy was followed by Hoyle, Jayant Narlikar, and a few others, but it made no impression on the large majority of astronomers and physicists, who found it to be artificial and unnecessary.
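Why a 2.7 K blackbody shows up as a *microwave* background can be seen from Wien's displacement law; a quick sketch (the displacement constant is the standard CODATA figure):

```python
# Wien's displacement law: lambda_max = b / T, with b = 2.898e-3 m*K.
# For the 2.7 K cosmic background the spectral peak lies near 1 mm,
# squarely in the microwave region.
WIEN_B = 2.898e-3   # m*K, Wien displacement constant

def peak_wavelength_m(T_kelvin: float) -> float:
    return WIEN_B / T_kelvin

lam = peak_wavelength_m(2.7)
print(f"{lam * 1e3:.2f} mm")   # ~1.07 mm
```

Penzias and Wilson's 7.3 cm measurement thus sampled the long-wavelength (Rayleigh–Jeans) tail of this spectrum, which is why confirming the blackbody shape required measurements at many other wavelengths.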
The microwave background also provided indirect support for the ‘big bang’ theory by leading to improved calculations of the primordial formation of helium-4 and other very light nuclear species such as helium-3 and deuterium. For example, in 1966 Peebles calculated the helium abundance on the assumption of a radiation temperature of 3 K and arrived at 26–28% helium, depending on the value of the present density of matter. The result agreed excellently with observations, which Peebles took to be further confirmation of the hot ‘big bang’ model. The work done on primordial production of helium-4 not only amounted to strong support for the ‘big bang’, but also helped determine the matter density of the universe. As to the cosmic abundance of primordial deuterium, it was only with the launching in 1972 of the Copernicus satellite that it became possible to determine the quantity as D/H ≅ 1.4 × 10⁻⁵. From this value the American astrophysicists John Rogerson and Donald York derived a value of the density of ordinary (baryonic) matter that indicated an open, ever-expanding universe.
The low density of baryonic matter suggested the existence of large amounts of ‘dark matter’ in the universe—an idea which had been around for some time, and which in the form of ‘dark stars’ can be found as early as the second half of the eighteenth century.27 While studying galactic clusters the Swiss–American astronomer Fritz Zwicky concluded in 1933 that the gravitation from visible matter was not nearly enough to keep the clusters together. He suggested that large amounts of non-luminous or dark matter must be present, and that this might increase the average density of matter in the universe to about the critical value Ω = ρ/ρ_crit = 1, corresponding to space being flat. However, Zwicky’s prophetic arguments received little notice, and only in the 1970s did Vera Rubin and others produce convincing evidence that the greater part of matter in the universe must exist in some unknown, dark form.28 Of course, the discovery of dark matter, quite different from the ordinary kind composed of electrons and nucleons, raised a new question: what is it?
The confidence in the standard hot ‘big bang’ model that emerged in the late 1960s, primarily as a result of the cosmic microwave background, did not imply that all the main problems of cosmology had been solved: far from it. It meant only that most experts now worked within the same paradigm and agreed upon which were the main problems and how they might be solved. General relativity theory rose to become a strong research area of physics and astrophysics in the same period, and it was part of the programme that the universe at large could be described, and could only be described, by means of Einstein’s cosmological field equations. Cosmological models not based on general relativity persisted, and new ones were proposed, but being outside the established paradigm they were of marginal significance. What mattered was to determine the cosmological parameters from observations with such a degree of precision that they allowed selection of the best solution to the field equations. This best solution would then describe a cosmological model that most probably would correspond to the real universe.
As Sandage and other observational cosmologists saw it, the relevant parameters were first and foremost the Hubble constant and the deceleration parameter, both of which were quantities that could in principle be derived from observations. The aim of this research programme was epitomized in the title of a paper which Sandage published in 1970: ‘Cosmology: a search for two numbers’.29 However, it proved more difficult than expected to find unambiguous values for the two numbers. Measurements disagreed to such an extent that it was impossible to say with any confidence whether the geometry of the universe was open, flat, or closed. Nor could the age of the universe be pinned down to a value any more precise than very roughly 10 billion years. Most cosmologists assumed the cosmological constant to be zero (that is, non-existent), but their belief was as much grounded in philosophical preferences as in observations. Throughout the period, a non-zero and probably positive cosmological constant remained a possibility. In the absence of firm observational guidance, many cosmologists preferred the flat Einstein–de Sitter model—not because it was confirmed by observation, but because it was simple and not clearly ruled out by observations. It was a compromise model rather than a concordance model.
The change that cosmology experienced in the wake of the mid-1960s manifested itself not only cognitively but also socially. Before that time, cosmology as a scientific discipline scarcely existed, though it did exist as a scientific activity pursued part-time by a small number of physicists and astronomers who did not consider themselves as ‘cosmologists’. Textbooks were few, and varied considerably in content and approach, such as Tolman’s Relativity, Thermodynamics and Cosmology (1934), Bondi’s Cosmology (1952), and McVittie’s General Relativity and Cosmology (1956). The decades following the discovery of the microwave background witnessed a growing integration of cosmology into university departments, and courses, conferences, and textbooks became much more common than previously. For the first time, students were taught standard cosmology and brought up in a research tradition with a shared heritage and shared goals.
The number of students increased, connections between physicists and astronomers strengthened, and new textbooks appeared that defined the content and context of the new science of the universe. While in earlier decades cosmology had to some extent been characterized by national differences, these largely disappeared. Originally, ‘big bang’ cosmology had been an American theory, steady-state theory had belonged to the British, and the Russians had hesitated to venture into cosmology at all. Now the field became truly international—or nearly so. It was no longer possible to determine an author’s nationality from the cosmological theory he or she advocated. For example, the kind of cosmological research carried out by the Russian physicist Yakov Zel’dovich and his school was solidly founded on the new standard ‘big bang’ theory and did not differ from the research by his American and British colleagues. Only in China under the Cultural Revolution was ‘big bang’ cosmology still, in the 1970s, considered ideologically suspect, and suppressed for political reasons—which was the case for spatially closed models in particular.30
The change—some would say revolution—also manifested itself quantitatively, in a rapid and continual increase in publications dealing with cosmological issues.31 Whereas the annual number of scientific articles on cosmology had on average been about thirty during 1950–62, between 1962 and 1972 the number increased from fifty to 250. As another way of expressing the growth, the annual number of papers on cosmology increased at an average rate of 6.4 additional papers per year between 1955 and 1967, while in the period 1968–80 the rate was twenty-one additional papers per year. By 1980 the annual number of papers was about 330. Still, compared with other fields of physics and astronomy, cosmology remained a small and loosely organized science, split between the two large sciences of physics and astronomy. For example, there were no scientific societies of cosmology, nor were there any journals specifically devoted to cosmological research or carrying the name ‘cosmology’ in their title. The growing number of papers on cosmology was published in the traditional journals of physics, astronomy, and astrophysics. Most of the important papers from the 1960s to the 1980s appeared in Physical Review, Nature, Science, The Astrophysical Journal, Monthly Notices of the Royal Astronomical Society, and Astronomy and Astrophysics.
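The growth rates quoted above can be turned into rough totals with simple arithmetic. The following sketch uses the figures from the text; the assumption of strictly linear growth is an illustrative simplification, not the underlying bibliometric data:

```python
# Growth rates of the annual number of cosmology papers, as quoted in the text.
# Strictly linear growth is assumed here purely for illustration.
rate_1955_67 = 6.4  # additional papers per year, 1955-1967
rate_1968_80 = 21   # additional papers per year, 1968-1980

# Increase in annual output over each 12-year period:
added_early = rate_1955_67 * (1967 - 1955)  # ~77 more papers per year by 1967
added_late = rate_1968_80 * (1980 - 1968)   # 252 more papers per year by 1980
print(round(added_early), added_late)
```

The second figure is consistent with the annual output reaching a few hundred papers by 1980, as stated in the text.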
29.8 The First Three Seconds
Much of the work within the new framework of ‘big bang’ cosmology was concerned with the early universe, which physicists wanted to explain in terms of fundamental nuclear and particle physics. In a sense, this was a continuation of the approach adopted much earlier by Gamow and his associates, but since the early 1950s, when this work took place, particle physics had changed dramatically. Progress in high-energy physics offered new possibilities for establishing intimate connections between particle physics and cosmology—two areas of science that entered an increasingly symbiotic relationship. The union was described on a popular level by the eminent particle theorist Steven Weinberg in his best-selling The First Three Minutes (1977), in which he describes how the world came into being shortly after the ‘big bang’.
(p. 913) The new field of ‘particle cosmology’ became the playground of specialists in high-energy theory who, more often than not, had neither training in nor knowledge of astronomy. In 1984 more than two hundred scientists—most of them young particle physicists and astrophysicists—convened at a conference on ‘Inner Space, Outer Space’ at the Fermi National Accelerator Laboratory (Fermilab) outside Chicago. The organizers of the conference spoke of the new revolution in which particle physics promised to reveal some of the deepest mysteries of the universe, and, they said, ‘holds forth the possibility of sorting out the history of the universe back to times as early as seconds or even earlier!’ Moreover:
Although the earliest history of the universe is only now starting to come into focus, the potential revolutionary implications are very apparent. We may very well be close to understanding many, if not all of the cosmological facts left unexplained by the standard cosmology. At the very least it is by now clear the answers to some of our most pressing questions lie in the earliest moments of the universe.32
Eight years after the Fermilab conference the symbiosis between particle physics and cosmology manifested itself in the foundation of a new journal, Astroparticle Physics, aimed specifically at research on the borderline between astrophysics, cosmology, and elementary-particle physics.
An early and impressive result of the new research programme in particle cosmology related to the number of neutrino species. In the mid-1970s, two types of neutrino had been detected: the electron neutrino and the muon neutrino. There might be more neutrino species, but experiments did not reveal how many. In 1977, three particle physicists—David Schramm, Gary Steigman, and James Gunn—used cosmological data and theory to argue that the number of neutrino species could not be greater than six, and subsequent refined calculations sharpened the bound to three. This prediction, based solely on cosmological arguments, was confirmed in 1993 when results from CERN, the European centre of high-energy physics, indicated that there were indeed three species of neutrino, and no more.
Particle cosmology has resulted in several other successes of this kind. In addition, advances in high-energy physics have led to a partial understanding of one of the old enigmas of cosmology: namely, why the universe consists of matter with only slight traces of antimatter. Ever since Paul Dirac predicted the existence of antimatter (positrons and antinucleons) in the early 1930s, this exotic kind of matter had occupied a niche in cosmological thinking. Antinucleons were assumed to be as abundant as nucleons in the very early universe, and almost all of these would annihilate into photons until the universe had expanded to such a size that annihilation became rare. The result would be a nearly complete annihilation of matter, in obvious contradiction to observation. The problem could be resolved by assuming a slight excess of matter over antimatter in the early universe, but this merely pushed the asymmetry back to the initial condition of the universe, and hence was not a real explanation. Other discussions of antimatter in a cosmological context assumed the existence of an ‘anticosmos’ in addition to our cosmos. As the nuclear physicist Maurice Goldhaber speculated in 1956:
(p. 914) Should we simply assume that nucleons and antinucleons were originally created in pairs, that most nucleons and antinucleons later annihilated each other, and that ‘our’ cosmos is a part of the universe where nucleons prevailed over antinucleons, the result of a very large statistical fluctuation, compensated by an opposite situation elsewhere?33
With the emergence in the 1970s of a new class of theories that unified the electromagnetic, weak, and strong forces of nature (‘grand unified theory’, or GUT) it turned out that such airy speculations were unnecessary. The new unified theories did not conserve the number of nucleons (or, more generally, baryons), and on this basis it proved possible to explain the slight initial excess of baryons over antibaryons.
The most important impact of high-energy physics on cosmology was undoubtedly the introduction, around 1980, of the so-called ‘inflationary’ scenario—a radically new conception of the very early universe.34 The principal originator of this highly successful theory or scenario was Alan Guth, a young American particle theorist, who in a landmark paper of 1981 proposed what had happened shortly after the Planck era, given by the time tPl after t = 0 and defined as

tPl = √(ħG/c⁵) ≅ 10⁻⁴³ s
Before the Planck time the universe is supposed to have been governed by some as yet unknown laws of quantum gravity, which means that t = tPl marks the effective beginning of cosmological theory. According to Guth, the universe started in a state of ‘false vacuum’ that expanded at a phenomenal rate during a very short period of time (about 10⁻³⁰ s), and then decayed to a normal vacuum filled with hot radiation energy. In spite of the briefness of the inflation phase, the early universe inflated by the amazing factor of about 10⁴⁰. What made Guth’s inflation theory appealing was primarily that it was able to solve two problems that the conventional ‘big bang’ theory did not address. One was the horizon problem—the problem that in the very early universe, distant regions could not have been in causal contact: that is, they could not communicate by means of light signals. So why is the universe so uniform? The other problem, known as the flatness problem, concerns the initial density of the universe, which must have been extremely close to the critical value in order to result in the present universe. Within a few years the inflationary scenario became very popular and broadly accepted as an integral part of the consensus model of the universe. Since its publication in Physical Review in 1981, Guth’s pioneering paper on inflation has received more than 3,500 citations in the scientific literature.
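The Planck time follows directly from the fundamental constants via tPl = √(ħG/c⁵). A minimal sketch of the arithmetic, using rounded standard SI values for the constants:

```python
import math

# Fundamental constants in SI units (rounded standard values)
hbar = 1.054571817e-34  # reduced Planck constant, J s
G = 6.67430e-11         # Newtonian gravitational constant, m^3 kg^-1 s^-2
c = 2.99792458e8        # speed of light, m/s

# Planck time: t_Pl = sqrt(hbar * G / c^5)
t_planck = math.sqrt(hbar * G / c**5)
print(f"{t_planck:.2e} s")  # ~5.39e-44 s, i.e. of order 10^-43 s
```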
The original inflation scenario was quickly developed into a number of new versions, some of which were ‘eternal’ or ‘chaotic’, meaning that they operated with inflation giving rise to a multitude of ever-reproducing subuniverses. In spite of their great explanatory power, inflationary models were considered controversial by some cosmologists, who complained that they lacked foundation in tested physics and had no connections to theories of quantum gravity. In addition, other critics pointed out that there really was no inflation theory but only a wide range of inflationary models which included so many forms that, taken together, they could hardly be (p. 915) falsified observationally. In spite of these and other objections of a scientific and methodological nature, the inflation paradigm continued to prosper and to remain the preferred theory of the very early universe. As yet, there is no proof that inflation really happened.35
While particle physicists and astrophysicists eagerly studied the earliest universe, hoping to understand it down to the Planck time, the future state of the universe was rarely a subject of scientific interest. After all, how could one know what the universe would look like trillions of years from now? Was the heat death discussed in the late nineteenth century still the best answer, or did the new cosmology offer a brighter prospect? As the British astrophysicist Malcolm Longair remarked in an address in 1985: ‘The future of our Universe is a splendid topic for after-dinner speculation’.36 While this was undoubtedly a common view, by that time a few physicists had engaged in the study of the far future of the universe, considering the field more than just after-dinner speculation. What has been called ‘physical eschatology’ began in the 1970s with the work of Martin Rees, Jamal Islam, Freeman Dyson, and a few others. What these physicists did was to extrapolate the current state of the universe into the far future, conservatively assuming that the presently known laws of physics would remain valid. The favoured scenario in this kind of research was the open, continually expanding case, where the picture would typically start with the extinction of stars and their later transformation into neutron stars or black holes. Even later chapters in the future history of the universe—say 10³⁵ years from now—might involve hypothetical processes such as proton decay and evaporation of black holes.
Some of the studies of the far-future universe included speculations on the survival of intelligent life—either humans, or their supposedly much more intelligent descendants (which might be self-reproducing robots rather than beings of flesh and blood). In a lecture entitled ‘Time Without End’ in 1978, published the following year in Reviews of Modern Physics, Dyson argued that in an open universe life might survive indefinitely. Other physicists took up this kind of scientifically informed speculation, which of course appealed greatly to the general public, and which has continued to be cultivated by a minority of astrophysicists and cosmologists.
29.9 The ΛCDM Paradigm
Progress in late-twentieth-century cosmology was not limited to the application of nuclear physics and particle physics to the early universe. On the contrary, much of the progress was observational, due to advanced technology and new generations of high-precision instruments. Observations of the cosmic microwave background were greatly improved in quantity and quality with the launching in 1989 of the COBE satellite, carrying instruments specially designed to measure the background radiation over a wide range of wavelengths. Data from the satellite’s spectrophotometer (p. 916) produced a very detailed picture of the radiation, confirming that it was distributed precisely as blackbody radiation with a temperature of 2.735 K.
Even more importantly, another of the instruments on the COBE satellite measured tiny variations in the intensity of the microwave background from different directions in space. Since the days of Penzias and Wilson it had been known that the radiation had a high degree of uniformity, but it was agreed that it could not be completely uniform, for in that case the formation of structures, eventually leading to galaxies and stars, would not have occurred. COBE’s detector found what was hoped for: small temperature or density variations in the early universe that could provide the seeds from which galaxies were formed. The characteristic density variation turned out to be Δρ/ρ ≅ 10⁻⁵. This was a result of great importance also because it was in excellent agreement with predictions based on the inflation model. As a consequence, the inflationary scenario gained in credibility and was, in some version or other, accepted by a majority of cosmologists. In 2006 the principal coordinators of the extensive COBE project—John Mather and George Smoot—shared the Nobel prize for physics ‘for their discovery of the blackbody form and anisotropy of the cosmic microwave background radiation’. This was the first time—105 years after the Nobel award was instituted—that the prize was awarded for cosmological research.
While the detection of density variations in the microwave background agreed with theoretical expectations, the discovery, some years later, of the acceleration of the universe came as a surprise to most astronomers and cosmologists. As early as 1938, Fritz Zwicky and Walter Baade had proposed using the relatively rare supernovae (instead of the standard Cepheid variables) to measure the cosmic expansion as given by the Hubble constant and the deceleration parameter. However, another half century passed before the idea was incorporated in a large-scale research programme—first with the Supernova Cosmology Project (SCP) and then with the rival High-z Supernova Research Team (HZT). Both of the collaborations were international, the first being based in the United States, and the second in Australia. The two groups studied the redshifts of a particular form of supernova known as Type Ia, which could be observed at very large distances.
Although the SCP team originally acquired data which indicated a relatively high-density universe with little or no role for the cosmological constant, in 1998 results obtained by the two groups began to converge towards a very different and largely unexpected picture of the universe.37 An important part of the new and observation-based consensus view was that the universe is in a state of acceleration, with a deceleration parameter q0 ≅ –0.75. Suggestions of an accelerating universe had been made earlier—the first time in 1927, with Lemaître’s expanding model—but this was the first time that the idea received solid observational support. The new series of observations showed convincingly that the total density Ωtotal was very close to 1, which implied a spatially flat universe. Since both the ordinary mass density and the density of dark matter were considerably less, the universe was assumed to be dominated by ‘dark’ vacuum energy. The best data from the early years of the twenty-first century demonstrated a composition of (p. 917)

Ωtotal = Ωmatter + Ωvacuum ≅ 0.28 + 0.72 = 1
where Ωvacuum refers to the vacuum-energy density relative to the critical density. Ωmatter includes both baryonic and exotic dark matter.
Most physicists agreed that the dark energy was given by the cosmological constant, though other interpretations were also suggested. It had been known for a long time that Einstein’s cosmological constant, interpreted in terms of quantum mechanics, corresponded to empty space having a negative pressure or, expressed differently, that it represented a repulsive force that blows up space. The dark energy associated with the cosmological constant has the remarkable property that the energy density (rather than the energy itself) is a conserved quantity. This implies that as the universe expands, the amount of dark energy will increase and dominate ever more. The accelerating universe is a runaway universe. While this had been known theoretically since the 1960s, it was only around 2000 that the hypothetical dark Λ-energy became a reality. As many physicists and astronomers saw it, the cosmological constant had been ‘discovered’.
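The runaway behaviour can be illustrated numerically. With a constant vacuum-energy density the Hubble rate H is constant, the scale factor grows exponentially as a(t) = a₀ exp(Ht), and the dark energy contained in an expanding region grows as a³. A minimal sketch, in which the value of H is arbitrary and chosen only for illustration:

```python
import math

H = 1.0  # constant Hubble rate in a Lambda-dominated universe (arbitrary units)

def scale_factor(t, a0=1.0):
    # Exponential (de Sitter-like) expansion: a(t) = a0 * exp(H * t)
    return a0 * math.exp(H * t)

def dark_energy(t, rho_vac=1.0):
    # Constant energy density: energy in an expanding region scales as a^3
    return rho_vac * scale_factor(t) ** 3

# The dark energy in the region grows without bound as space expands:
print(dark_energy(0.0), round(dark_energy(2.0), 1))  # 1.0 403.4
```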
With the emergence of the new picture of the universe, the nature of dark energy became a top priority in fundamental physics. Another and related priority, of a somewhat older age, was to understand the nature of the dark matter that was known to dominate over the ordinary matter in a ratio of about 5:1 (of Ωmatter ≅ 0.28, a part of 0.233 is due to dark matter and the remaining 0.047 to ordinary matter). Already around 1990 it was agreed that the major part of the mysterious dark matter was ‘cold’, meaning that it is made up of relatively slowly moving particles unknown to experimenters but predicted by physical theory. The particles of cold dark matter (CDM) were collectively known as WIMPs—‘weakly interacting massive particles’. Several such hypothetical particles have been suggested as candidates for the exotic dark matter, and some are more popular than others, but the nature of the dark-matter component remains unknown. The new picture of the universe, mainly consisting of dark Λ-energy and cold dark matter, is often referred to as the ΛCDM model or even the ΛCDM paradigm.
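The density budget quoted above is easy to check. A short sketch using the figures from the text, with Ωvacuum ≅ 0.72 implied by flatness (Ωtotal ≅ 1):

```python
# Density parameters from the text (relative to the critical density)
omega_dark_matter = 0.233  # cold dark matter
omega_baryonic = 0.047     # ordinary (baryonic) matter
omega_vacuum = 0.72        # dark energy (cosmological constant)

omega_matter = omega_dark_matter + omega_baryonic
omega_total = omega_matter + omega_vacuum

print(round(omega_matter, 2))   # 0.28
print(round(omega_total, 2))    # 1.0 -> a spatially flat universe
print(round(omega_dark_matter / omega_baryonic, 1))  # 5.0 -> the ~5:1 ratio
```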
Cosmologists and astrophysicists were understandably excited over the new developments that promised a new chapter in the history of cosmology. In an article published in 2003 the leader of the SCP project, Saul Perlmutter, gave voice to the excitement:
We live in an unusual time, perhaps the first golden age of empirical cosmology. With advancing technology, we have begun to make philosophically significant measurements. These measurements have already brought surprises. Not only is the universe accelerating, but it apparently consists primarily of mysterious substances . . . With the next decade’s new experiments, exploiting not only distant supernovae, but also the cosmic microwave background, gravitational lensing of galaxies, and other cosmological observations, we have the prospect of taking the next step toward the ‘Aha!’ moment when a new theory makes sense of the current puzzles.38
(p. 918) 29.10 Postscript: Multiverse Speculations
The early twenty-first century may be the first golden age of empirical cosmology, but it has also seen the return of the kind of higher speculations that traditionally have been part of cosmology. Speculations are still very much alive in cosmological research, but in mathematical forms that make them different from the cosmic speculations of the past. Physical eschatology is one example, and there are many more.39 For example, there are currently many theories of the universe before the ‘big bang’—a notion which according to classical ‘big bang’ theory makes no sense but can nonetheless be defended on the basis of many-dimensional string theory or other modern theories of quantum gravity. These cosmological theories of a cyclic or eternal universe are speculative to the extent that they build on physics which has not been tested experimentally. On the other hand, they do lead to testable predictions concerning, for instance, primordial gravitational waves and the fine structure of the microwave background radiation.
Perhaps the most controversial of the modern cosmological hypotheses is the idea of numerous separate universes, or what is known as the ‘multiverse’—a term first used in a scientific context as late as 1998.40 Although speculations of other universes extend far back in time, the modern multiverse version is held to be quite different, and scientific in nature. The basic claim of the multiverse hypothesis is that there exists a huge number of other universes, causally separate and distinguished by different laws and parameters of physics. We happen to inhabit a very special universe, with laws and parameters of just such a kind that they allow the evolution of intelligent life-forms. This general idea became popular among some physicists in the 1990s, primarily motivated by developments in inflation theory but also inspired by the anthropic principle and the many-worlds interpretation of quantum mechanics. The main reason why the multiverse is taken seriously by a growing number of physicists, however, is that it has received unexpected support from the fundamental theory of superstrings. Based on arguments from string theory, in 2003 the American theorist Leonard Susskind suggested that there exists an enormous ‘landscape’ of universes, each of them corresponding to a vacuum state described by the equations of string theory.41
The Russian–American cosmologist Andrei Linde is another prominent advocate of the landscape multiverse, the general idea of which he describes as follows:
If this scenario [the landscape] is correct, then physics alone cannot provide a complete explanation for all properties of our part of the Universe . . . According to this scenario, we find ourselves inside a four-dimensional domain with our kind of physical laws, not because domains with different dimensionality and with alternative properties are impossible or improbable, but simply because our kind of life cannot exist in other domains.42
Remarkably, many cosmologists and theoretical physicists have become convinced that our universe is just one out of perhaps 10⁵⁰⁰ universes. According to this most (p. 919) radical hypothesis, things are as they are because they happen to be so, and had they been different, we would not be here to see them. In some of the other universes there are no electrons, the gravitational force is stronger than the electromagnetic force, and magnetic monopoles are abundant. There may even be tartan elephants.
Understandably, the increasing popularity of multiverse cosmology has caused a great deal of controversy in the physics community.43 Not only is the multiverse a strange creature; it does not help that it is intimately linked to other controversial issues such as string theory and the anthropic principle. The overarching question is whether or not multiverse cosmology is a science. Have physicists in this case unknowingly crossed the border between science and philosophy, or perhaps between science and theology? Almost all physicists agree that a scientific theory must say something about nature in the sense that it must be empirically testable, but they do not always agree on what ‘testability’ means or how important this criterion is relative to other criteria. In spite of the undeniable progress of observational and theoretical cosmology during the last few decades, the field cannot, and probably never will, escape its philosophical past.
Alpher, Ralph A., and Robert C. Herman (2001). Genesis of the Big Bang. New York: Oxford University Press.
Bernstein, Jeremy, and Gerald Feinberg, eds. (1986). Cosmological Constants: Papers in Modern Cosmology. New York: Columbia University Press.
Brandenberger, Robert (2008). ‘Alternatives to cosmological inflation’, Physics Today 61 (March), 44–49.
Carr, Bernard J., ed. (2007). Universe or Multiverse? Cambridge: Cambridge University Press.
——and George F. R. Ellis (2008). ‘Universe or multiverse?’ Astronomy & Geophysics 49, 2.29–2.37.
Clausius, Rudolf (1868). ‘On the second fundamental theorem of the mechanical theory of heat’, Philosophical Magazine 35, 405–419.
Clerke, Agnes M. (1890). The System of the Stars. London: Longmans, Green and Co.
Crookes, William (1886). ‘On the nature and origin of the so-called elements’, Report of the British Association for the Advancement of Science, 558–576.
Dingle, Herbert (1953). ‘Science and modern cosmology’, Monthly Notices of the Royal Astronomical Society 113, 393–407.
Earman, John, and Jesus Mosterin (1999). ‘A critical look at inflationary cosmology’, Philosophy of Science 66, 1–49.
Einstein, Albert (1929). ‘Space-time’, pp. 105–108, in Encyclopedia Britannica, 14th edn, Vol. 21. London: Encyclopedia Britannica.
——(1998). The Collected Papers of Albert Einstein. Volume 8. Edited by Robert Schulmann, A. J. Kox, Michael Janssen, and József Illy. Princeton: Princeton University Press.
Goldhaber, Maurice (1956). ‘Speculations on cosmogony’, Science 124, 218–219.
Guth, Alan H. (1997). The Inflationary Universe. Reading, MA: Addison-Wesley.
Harper, Eamon, W. C. Parke, and G. D. Anderson, eds. (1997). The George Gamow Symposium. San Francisco: Astronomical Society of the Pacific.
(p. 920) Hearnshaw, J. B. (1986). The Analysis of Starlight: One Hundred and Fifty Years of Astronomical Spectroscopy. Cambridge: Cambridge University Press.
Hetherington, Norriss (2002). ‘Theories of an expanding universe: Implications of their reception for the concept of scientific prematurity’, pp. 109–123, in Ernest B. Hook, ed., Prematurity in Scientific Discovery: On Resistance and Neglect. Berkeley: University of California Press.
Israel, Werner (1987). ‘Dark stars: The evolution of an idea’, pp. 199–276, in Stephen Hawking and Werner Israel, eds., Three Hundred Years of Gravitation. Cambridge: Cambridge University Press.
Jones, G. O., J. Rotblat, and G. J. Whitrow (1956). Atoms and the Universe: An Account of Modern Views on the Structure of Matter and the Universe. New York: Charles Scribner’s Sons.
Kaiser, David (2006). ‘Whose mass is it anyway? Particle cosmology and the objects of theory’, Social Studies of Science 36, 533–564.
Kerszberg, Pierre (1989). The Invented Universe: The Einstein–de Sitter Controversy (1916–17) and the Rise of Relativistic Cosmology. Oxford: Clarendon Press.
Kirshner, Robert P. (2004). The Extravagant Universe: Exploding Stars, Dark Energy and the Accelerating Cosmos. Princeton: Princeton University Press.
Kolb, Edward W., et al., eds. (1986). Inner Space, Outer Space: The Interface Between Cosmology and Particle Physics. Chicago: University of Chicago Press.
Kragh, Helge (1996). Cosmology and Controversy: The Historical Development of Two Theories of the Universe. Princeton: Princeton University Press.
——(2008). Entropic Creation: Religious Contexts of Thermodynamics and Cosmology. Aldershot: Ashgate.
——(2009). ‘Contemporary history of cosmology and the controversy over the multiverse’, Annals of Science 66, 529–551.
——(2010). Higher Speculations: Grand Theories and Failed Revolutions in Physics and Cosmology. Oxford: Oxford University Press.
——and Robert Smith (2003). ‘Who discovered the expanding universe?’ History of Science 41, 141–162.
——and Dominique Lambert (2007). ‘The context of discovery: Lemaître and the origin of the primeval-atom universe’, Annals of Science 64, 445–470.
Lanczos, Cornelius (1925). ‘Über eine zeitlich periodische Welt und eine neue Behandlung des Problems der Ätherstrahlung’, Zeitschrift für Physik 32, 56–80.
Lemaître, Georges (1931). ‘L’expansion de l’espace’, Revue des Questions Scientifiques 17, 391–410.
Linde, Andrei (2007). ‘The inflationary universe’, pp. 127–150, in Bernard Carr, ed., Universe or Multiverse? Cambridge: Cambridge University Press.
Longair, Malcolm S. (1985). ‘The universe: present, past and future’, The Observatory 105, 171–188.
——(2006). The Cosmic Century: A History of Astrophysics and Cosmology. Cambridge: Cambridge University Press.
North, John (1990). The Measure of the Universe: A History of Modern Cosmology. New York: Dover Publications.
Norton, John (1999). ‘The cosmological woes of Newtonian gravitation theory’, pp. 271–324, in Hubert Goenner et al., eds., The Expanding World of General Relativity. Boston: Birkhäuser.
Nussbaumer, Harry, and Lydia Bieri (2009). Discovering the Expanding Universe. Cambridge: Cambridge University Press.
(p. 921) Paul, Erich R. (1993). The Milky Way Galaxy and Statistical Cosmology 1890–1924. Cambridge: Cambridge University Press.
Peebles, P. James E., Lyman A. Page, and R. Bruce Partridge (2009). Finding the Big Bang. Cambridge: Cambridge University Press.
Perlmutter, Saul (2003). ‘Supernovae, dark energy, and the accelerating universe’, Physics Today 56 (April), 53–60.
Rubin, Vera (1983). ‘Dark matter in spiral galaxies’, Scientific American 248 (June), 88–101.
Ryan, Michael P., and L. C. Shepley (1976). ‘Resource letter RC-1: Cosmology’, American Journal of Physics 44, 223–230.
Ryle, Martin (1955). ‘Radio stars and their cosmological significance’, The Observatory 75, 127–147.
Sandage, Allan (1970). ‘Cosmology: A search for two numbers’, Physics Today 23 (February), 34–41.
Schramm, David N., and Gary Steigman (1988). ‘Particle accelerators test cosmological theory’, Scientific American 262 (June), 66–72.
Seeliger, Hugo von (1895). ‘Über das Newtonsche Gravitationsgesetz’, Astronomische Nachrichten 137, 129–136.
Smeenk, Chris (2005). ‘False vacuum: Early universe cosmology and the development of inflation’, pp. 223–257, in A. J. Kox and Jean Eisenstaedt, eds., The Universe of General Relativity. Boston: Birkhäuser.
Sullivan, Woodruff T. (1990). ‘The entry of radio astronomy into cosmology: Radio stars and Martin Ryle’s 2C survey’, pp. 309–330, in Bruno Bertotti, R. Balbinot, Silvio Bergia, and A. Messina, eds., Modern Cosmology in Retrospect. Cambridge: Cambridge University Press.
Susskind, Leonard (2006). The Cosmic Landscape: String Theory and the Illusion of Intelligent Design. New York: Little, Brown and Company.
Tassoul, Jean-Louis, and Monique Tassoul (2004). A Concise History of Solar and Stellar Physics. Princeton: Princeton University Press.
Williams, James W. (1999). ‘Fang Lizhi’s big bang: A physicist and the state in China’, Historical Studies in the Physical and Biological Sciences 30, 49–114.
(14). Nussbaumer and Bieri (2009).
(23). Bondi and Gold (1948), p. 255.
(40). For a historically informed introduction, see Kragh (2009). See also the collection of articles in Carr (2007).