Thursday 2 October 2014

Cosmology for Beginners - The Birth of the Universe

PEOPLE are only slowly coming to terms with the full implications of the COBE satellite’s discovery of ripples from the birth of the Universe. One of the most startling is that our Universe may be just one among many, and that it has “evolved” from simpler forebears — that it is, in some sense, alive.







Among other things, the COBE data seem to confirm that the Universe emerged from a period of exponential expansion, known as inflation, at about the time those ripples were imprinted on the background radiation. At first sight, though, there is no obvious reason why the inflation process should have gone on for just long enough and at just the right rate to produce a Universe in which stars and galaxies can form. A shorter, less intense burst of inflation would have left the proto-universe too jumbled up, and also in danger of quickly recollapsing back down into a singularity, while a longer, stronger burst of inflation would have spread the stuff of the proto-universe so thin that no stars and galaxies could ever form.


This fine-tuning problem is generally regarded as the biggest difficulty with inflation. The problem is essentially another example of the Goldilocks effect — why is inflation, like so many other properties of the Universe, “just right” to allow our (the Universe’s) existence?


But the simplest version of inflation, complete with the fine-tuning problem, can explain everything we can see, including the ripples in the cosmic background radiation. And the fine-tuning problem itself can be resolved once we realise that the Universe itself is alive and has evolved.


Almost halfway through the 1990s, it seems that we know, more precisely than anybody has ever known before, what the Universe is made of, as well as how the Universe came into existence. “Dark matter”, sufficient to make the Universe closed upon itself like a black hole, is required to explain how galaxies move, and the COBE ripples; it is just a matter of time before the nature of the dark matter is revealed (indeed, some may already have been detected — Science, 25 September). And we know that the ultimate fate of the Universe itself is that one day the present expansion will be first halted and then reversed, so that it collapses back into a singularity that is a mirror image of the one that gave it birth.


We actually live inside a huge black hole. And what’s more, we have a pretty good idea of what happens to anything that collapses towards a singularity inside a black hole.


The idea of the Universe as a black hole is not new, although until recently it was distinctly unfashionable. In the 1980s relativists realised that there is nothing to stop the material that falls into a singularity in our three dimensions of space and one of time from being shunted through a kind of spacetime warp and emerging as an expanding singularity in another set of dimensions — another spacetime. Mathematically, this “new” spacetime is represented by a set of four dimensions (three of space and one of time), just like our own but with all of the new dimensions at right angles to all of the familiar dimensions of our own spacetime. Every singularity, on this picture, has its own set of spacetime dimensions, forming a bubble universe within the framework of some “super” spacetime, which we can refer to simply as “superspace”.


One way to picture what this involves is to use the old analogy between the three dimensions of expanding space around us and the two-dimensional expanding surface of a balloon that is being steadily filled with air. The analogy is not with the volume of air inside the balloon, but with the expanding skin of the balloon, stretching uniformly in two dimensions, but curved around upon itself in a closed surface.


Imagine a black hole as forming from a tiny pimple on the surface of the balloon, a small piece of the stretching rubber that gets pinched off, and starts to expand in its own right. There is a new bubble, attached to the original balloon by a tiny, narrow throat — the black hole. And this new bubble can expand away happily in its own right, to become as big as the original balloon, or even bigger, without the skin of the original balloon (the original universe) being affected at all. There can be many bubbles growing out of the skin (the spacetime) of the original universe in this way at the same time. And, of course, new bubbles can grow out of the skin of each new universe, ad infinitum.


Instead of the collapse of a black hole representing a one-way journey to nowhere, many researchers now believe that it is a one-way journey to somewhere — to a new expanding universe in its own set of dimensions. Instead of a black-hole singularity “bouncing” to become an exploding outpouring of energy blasting back into our Universe, it is shunted sideways in spacetime.


The dramatic implication is that many — perhaps all — of the black holes that form in our Universe may be the seeds of new universes. And, of course, our own Universe may have been born in this way out of a black hole in another universe. Moreover, the fact that the laws of physics in our Universe seem to be rather precisely “fine tuned” to encourage the formation of black holes means that they are actually fine-tuned for the production of more universes.


This is a spectacular shift of viewpoint, which most cosmologists are still struggling to come to grips with. If one Universe exists, then it seems that there must be many — very many, perhaps even an infinite number of universes. Our Universe has to be seen as just one component of a vast array of universes, a self-reproducing system connected only by the “tunnels” through spacetime (perhaps better regarded as cosmic umbilical cords) that join a “baby” universe to its “parent”.


It is relatively easy to see how such a family of universes can continue to exist, and to reproduce, once something like our own Universe exists. But how did the whole thing get started? Where did the first universe, or universes, come from?


The key concept is quantum uncertainty. This says that there is always an intrinsic uncertainty in many physical properties of the Universe and things in the Universe. The most commonly quoted example is the uncertainty that relates the position of a particle to its motion. Momentum is a measure of where a particle is going, and quantum uncertainty makes it impossible to measure the position of, say, an electron and its momentum at the same time. This is not a result of the inadequacies of our measuring equipment, but a fundamental law of nature, which has been thoroughly tested and proved in many experiments. An object like an electron simply does not have both a precise momentum and a precise position.


Another pair of uncertain variables linked in this way is energy and time. Again, the uncertainty only applies on a subatomic scale, as far as any practical consequences are concerned. But what quantum physics tells us is that any tiny region of the vacuum, which we think of as “empty space”, might actually contain a small amount of energy for a short time. In a sense, it is allowed to possess this energy if the Universe doesn’t have time to “notice” the discrepancy. The more energy there is involved, the shorter the time allowed. But because particles are made of energy (E = mc²), this means that particles are allowed to pop into existence in the vacuum of empty space. They are made out of nothing at all, and can only exist provided that they pop back out of existence again very quickly.
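The trade-off between borrowed energy and allowed time can be put in rough numbers. A minimal sketch, assuming the standard form of the relation (ΔE·Δt ≈ ħ/2) and standard values for the constants, estimates how long a virtual electron-positron pair is allowed to exist:

```python
# Rough lifetime of a virtual electron-positron pair from the
# energy-time uncertainty relation, delta_E * delta_t ~ hbar / 2.
HBAR = 1.0546e-34   # reduced Planck constant, J s
M_E = 9.109e-31     # electron mass, kg
C = 2.998e8         # speed of light, m/s

delta_E = 2 * M_E * C**2          # energy "borrowed" to make the pair, J
delta_t = HBAR / (2 * delta_E)    # longest time the pair can exist, s

print(f"borrowed energy:  {delta_E:.2e} J")
print(f"allowed lifetime: {delta_t:.2e} s")   # of order 10^-22 seconds
```

The more massive the particles, the larger the borrowed energy and the briefer the permitted existence, which is why the fleeting pairs never show up directly.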


On this picture, the quantum vacuum is a seething froth of particles, constantly appearing and disappearing, and giving “nothing at all” a rich quantum structure. The rapidly appearing and disappearing particles are known as virtual particles, and are said to be produced by quantum fluctuations of the vacuum.


It may seem that quantum theory has run wild when pushed to such extremes, and common sense might tell you that the idea is too crazy to be true. Unfortunately for common sense, these quantum fluctuations have a measurable influence on the way “real” particles behave. The nature of the electric force between charged particles, for example, is altered by the presence of virtual particles, and measurements of the nature of the electric force show that it matches the predictions of quantum theory, rather than matching up to the common sense way it would behave in a “bare” vacuum.


What has all this got to do with the creation of universes? It all hinges upon the fact that we live inside a black hole. In 1973, Edward Tryon, of the City University of New York, suggested that our entire Universe might simply be a fluctuation of the vacuum (Nature, vol. 246, p. 396). He pointed out the curious fact that our Universe contains zero energy — provided it is indeed closed and forms a black hole. The point Tryon jumped off from — the secret of making universes out of nothing at all, as vacuum fluctuations — is that the gravitational energy of the Universe is negative.


The way to understand this is that if you think of a collection of matter, such as the atoms that make up a star, or the bricks that make up a pile, the “zero of gravitational energy” associated with those objects is when they are far apart — as far apart as it is possible for them to be. The strange thing is, as the objects fall together under the influence of gravity they lose energy. They start with none, and end up with less. So gravitational energy is negative, from the perspective in which everyday energy (the mc² in those atoms and bricks) is positive. Any object in the Universe, like a planet, or the Sun, which is not spread out as far as possible literally has a negative amount of gravitational energy. And if it shrinks, its gravitational energy becomes more negative.


The reason this was so interesting to Tryon is that the energy of all the matter in the Universe, all the mc², is positive. What’s more, if you take a lump of matter and squeeze it into a singularity, then at the singularity the negative gravitational energy of the mass is exactly equal and opposite to its mass energy.
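The claimed cancellation can be checked with order-of-magnitude arithmetic. In this sketch the gravitational energy of a mass M confined within radius R is taken as GM²/R, ignoring numerical factors of order one; at R = GM/c², the scale of the black hole radius, it exactly matches the mass energy Mc²:

```python
# Order-of-magnitude check: the gravitational energy GM^2/R of a
# mass M grows as R shrinks, and matches the mass energy Mc^2 when
# R reaches GM/c^2 (numerical factors of order one are ignored).
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8         # speed of light, m/s
M_SUN = 1.989e30    # mass of the Sun, kg

R = G * M_SUN / C**2            # about 1.5 km for the Sun
grav_energy = G * M_SUN**2 / R  # magnitude of gravitational energy, J
mass_energy = M_SUN * C**2      # E = mc^2, J

print(f"R = {R:.0f} m")
print(f"gravitational energy: {grav_energy:.3e} J")
print(f"mass energy:          {mass_energy:.3e} J")
```

At that radius the two energies are equal by construction, which is the algebraic heart of Tryon's observation: the closer matter is squeezed towards a singularity, the more completely its negative gravitational energy offsets its positive mass energy.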


This trick will not work for the formation of a star (an idea suggested long before by the physicist Pascual Jordan), because any star trying to form from a singularity in this way would be inside a black hole, invisible to the Universe at large. But it will work for the creation of an entire universe, within the black hole.


Provided that the Universe is indeed closed, like the inside of a black hole, the energy involved in making a universe from a singularity is indeed zero! It is, in the words of Alan Guth, “the ultimate free lunch”. Quantum uncertainty allows bubbles of energy to appear in the vacuum, and energy is equivalent to mass. According to the rules of quantum uncertainty, the less mass-energy such a bubble has, the longer it can exist. So why couldn’t a bubble with no overall mass-energy last forever?







The snag with all this, and the reason why Tryon’s idea didn’t cause much of a stir in 1973, is that whatever the quantum rules may allow, as soon as a universe containing as much matter as ours does start to expand away from a singularity, its enormous gravitational force (imagine the pull of gravity associated with an object containing the entire mass of the Universe in a volume smaller than an atomic nucleus) would pull it back together and make it collapse back into a new singularity in far less than the blink of an eye.


What Tryon was saying, in effect, was that not just virtual particles but virtual universes might be popping in and out of existence in the vacuum. In 1973, he had no idea how such a virtual universe might be made real. But inflation provides a mechanism which can catch hold of a tiny, embryonic universe during that split second of its virtual existence, and whoosh it up to a respectable size before gravity can do its work. Then, it will take billions (or hundreds of billions) of years for gravity to slow the expansion, bring it to a halt, and make the universe contract back into a singularity.


In the 1980s, this idea of a universe being created out of nothing at all was developed by many researchers, including Tryon himself, Guth, and Alex Vilenkin, of Tufts University. The consensus is that yes, indeed, universes can be born out of nothing at all as a result of quantum fluctuations. And the same powerful influence of inflation can transform any baby universe in the same way — it doesn’t matter how much, or how little, matter goes into a black hole to make the singularity; once the new singularity starts expanding into its own set of dimensions to make a new universe, the balance between mass energy and gravitational energy means that the new universe can be any size at all.


But there is still a puzzle of fine-tuning, because there is no obvious reason why inflation itself should have just the right strength to “make” a universe like our own out of a tiny quantum fluctuation of the vacuum. The “natural” size for a universe is still down in the subatomic region where quantum effects rule, on the scale of the Planck length, 10⁻³⁵ of a metre. This is where evolution comes in. Nobody would argue, these days, that human beings appeared out of nothing at all on the face of the Earth. We are complex creatures that could not arise “just by chance” out of a brew of chemicals, even in some warm little pond. Simpler kinds of living organisms came first, and it took hundreds of millions of years of evolution on Earth to progress from single-celled life forms to complex organisms like ourselves.
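The figure quoted for the Planck length follows from combining the three fundamental constants ħ, G and c in the standard definition, √(ħG/c³). A quick check, using standard values for the constants:

```python
import math

# The Planck length, sqrt(hbar * G / c^3) -- the scale at which
# quantum effects dominate the geometry of spacetime.
HBAR = 1.0546e-34   # reduced Planck constant, J s
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8         # speed of light, m/s

planck_length = math.sqrt(HBAR * G / C**3)
print(f"Planck length: {planck_length:.3e} m")   # about 1.6e-35 m
```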


The new understanding of cosmology suggests that something similar has happened with the Universe. It is a large and complex system, which cannot have appeared “just by chance” out of a random quantum fluctuation of the vacuum. Simpler universes came first, and it may have taken hundreds of millions of universal generations to progress from a Planck-length fluctuation to complex universes like our own. Lee Smolin, of Syracuse University, has been a leading protagonist for this idea, which also takes on board notions about baby universes developed by Andrei Linde, of the Lebedev Institute, in Moscow.


The key element that Smolin has introduced into the argument is the idea that every time a black hole collapses into a singularity and a new baby universe is formed, the basic laws of physics are altered slightly as spacetime itself is crushed out of existence and reshaped. The process is analogous (perhaps more than analogous) to the way mutations provide the variability among organic life forms on which natural selection can operate. Each baby universe is, says Smolin, not a perfect replica of its parent, but a slightly mutated form.


The original, natural state of such baby universes is indeed to expand out to only about the Planck length, before collapsing once again. But if the random changes in the workings of the laws of physics — the mutations — happen to allow a little bit more inflation, a baby universe will grow a little larger. If it becomes big enough, it may separate into two, or several, different regions, that each collapse to make a new singularity, and thereby trigger the birth of a new universe. Those new universes will also be slightly different from their parents. Some may lose the ability to grow much larger than the Planck length, and will fade back into the quantum foam. But some may have a little more inflation still than their parents, growing even larger, producing more black holes and giving birth to more baby universes in their turn. The number of new universes that are produced in each generation will be roughly proportional to the volume of the parent universe. There is even an element of competition involved, if the many baby universes are in some sense vying with one another, jostling for spacetime elbow room within superspace.


In his published papers, even Smolin has stopped short of suggesting that the Universe is alive. But heredity is an essential feature of life, and this description of the evolution of universes only works if we are dealing with living systems. On this picture, universes pass on their characteristics to their offspring with only minor changes, just as people pass on their characteristics to their children with only minor changes.


Universes that are “successful” are the ones that leave most offspring. Provided that the random mutations are indeed small, there will be a genuinely evolutionary process favouring larger and larger universes.


The end product of this process should be not one but many universes which are all about as big as it is possible to get while still being inside a black hole (as nearly flat as possible), and in which the parameters of physics are such that the formation of stars and black holes is favoured. Our Universe exactly matches that description. This explains the otherwise baffling mystery of why the Universe we live in should be “set up” in what seems, at first sight, such an unusual way. Just as you would not expect a random collection of chemicals to suddenly organise themselves into a human being, so you would not expect a random collection of physical laws emerging from a singularity to give rise to a Universe like the one we live in.


Before Charles Darwin and Alfred Wallace came up with the idea of evolution, many people believed that the only way to explain the existence of so unlikely an organism as a human being was by supernatural intervention; recently, the apparent unlikelihood of the Universe has led some people to suggest that the Big Bang itself may have resulted from supernatural intervention. But there is no longer any basis for invoking the supernatural. We live in a Universe which is exactly the most likely kind of universe to exist, if there are many living universes that have evolved in the same way that living things on Earth have evolved.


Cosmologists are now having to learn to think like biologists and ecologists, and to develop their ideas not within the context of a single, unique Universe, but in the context of an evolving population of universes. Each universe starts from its own big bang, but all the universes are interconnected in complex ways by black hole “umbilical cords”, and closely related universes share the “genetic” influence of a similar set of physical laws. The ripples in time traced out by the sensors on board COBE are, on this new picture, just a tiny part of a much more complex and elaborate structure, a structure which maintains itself far from equilibrium, and in which universes in which the laws of physics resemble those in our Universe are far more common than they ought to be if those universes had arisen by chance.


The COBE discoveries do not mark the end of the science of cosmology, but the beginning of a new science of cosmology, much bigger in scope, probing further in both space and time than cosmologists could have imagined even a few years ago.


Pulsars


The discovery of three planets orbiting a pulsar known as PSR B1257+12 has revealed a system with properties that almost exactly match those of the inner Solar System, made up of Mercury, Venus and the Earth. The similarities are so striking that it seems there may be a law of nature which ensures that planets always form in certain orbits and have certain sizes; and it lends credence to the significance of a mathematical relationship that relates the orbits of the planets in our Solar System, which many astronomers have dismissed as mere numerology.


PSR B1257+12 is a rapidly spinning neutron star, containing slightly more matter than our Sun packed into a sphere only about 10 kilometres across. As the star spins, it flicks a beam of radio noise around, like the beam of a lighthouse, producing regularly spaced pulses of radio noise detectable on Earth. It can only have been produced in a supernova explosion, long ago, which would have disrupted any planetary system the star possessed at the time. So the present planets associated with the pulsar are thought to have formed from the debris of a companion star disrupted by the pulsar.


The three planets cannot be seen directly, but are revealed by the way in which they change the period of the pulsar’s pulses as they orbit around it. There is enough information revealed in the changing pulses to show that the three planets have masses roughly equal to 2.8 times the mass of the Earth, 3.4 times the mass of the Earth, and 1.5 per cent of the mass of the Earth. And they are spaced, respectively, at distances from the pulsar equivalent to 47 per cent of the distance from the Earth to the Sun, 36 per cent of the Sun-Earth distance, and 19 per cent of the Sun-Earth distance.


Tsevi Mazeh and Itzhak Goldman, of Tel Aviv University, have pointed out that the ratio of these distances, 1: 0.77: 0.4, is extremely close to the ratio of the distances of the Earth, Venus and Mercury from the Sun, which is 1: 0.72: 0.39. And the masses of the three inner planets in the Solar System are 1 Earth mass, 82 per cent of the mass of the Earth, and 5.5 per cent of the mass of the Earth. In each case, two outer planets with roughly the same mass have an inner companion with a much smaller mass.
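The comparison Mazeh and Goldman drew can be reproduced directly from the figures quoted above, scaling each set of distances to the outermost of the three planets:

```python
# Orbital-distance ratios, scaled to the outermost of the three
# planets in each system, using the figures quoted in the text.
pulsar_distances = [0.47, 0.36, 0.19]   # fractions of the Earth-Sun distance
solar_distances = [1.00, 0.72, 0.39]    # Earth, Venus, Mercury, in AU

pulsar_ratios = [d / pulsar_distances[0] for d in pulsar_distances]
solar_ratios = [d / solar_distances[0] for d in solar_distances]

print("pulsar system:", [round(r, 2) for r in pulsar_ratios])
print("solar system: ", [round(r, 2) for r in solar_ratios])
```

The two sets of ratios, roughly 1 : 0.77 : 0.40 and 1 : 0.72 : 0.39, agree to within a few per cent, which is the coincidence that caught the astronomers' attention.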


All this is doubly intriguing because for more than 200 years astronomers have puzzled over a relationship called Bode’s law, which relates the orbits of the planets in the Solar System. The law says that if you take the sequence 0, 3, 6, 12 . . . (with each number after 3 twice the previous number in the sequence), add 4 to each number and divide by 10, you end up with the distances of the planets from the Sun in terms of the distance to the third planet (Earth, in the case of the Solar System). Bode’s law works out as far as the orbit of Uranus, but nobody knows why; now, it seems that it also works for the planets of pulsar PSR B1257+12.
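The rule as stated can be coded up in a few lines and set beside the measured distances. In this sketch the "actual" values, in astronomical units, are standard figures added for comparison, with the asteroid Ceres standing in for the gap Bode's law predicts between Mars and Jupiter:

```python
# Bode's law: take the sequence 0, 3, 6, 12, ... (doubling after 3),
# add 4 to each term and divide by 10, to get planetary distances
# in astronomical units (Earth-Sun distances).
def bode_sequence(n):
    seq = [0, 3]
    while len(seq) < n:
        seq.append(seq[-1] * 2)
    return [(x + 4) / 10 for x in seq[:n]]

# Measured mean distances, in AU (Ceres stands in for the asteroid belt).
planets = ["Mercury", "Venus", "Earth", "Mars", "Ceres",
           "Jupiter", "Saturn", "Uranus"]
actual = [0.39, 0.72, 1.00, 1.52, 2.77, 5.20, 9.54, 19.19]

for name, predicted, measured in zip(planets, bode_sequence(8), actual):
    print(f"{name:8s} Bode: {predicted:5.1f}  actual: {measured:5.2f}")
```

The fit holds out to Uranus and then breaks down for Neptune, which is one reason many astronomers regard the "law" as numerology rather than physics.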


The indications are that there is a universal mechanism for the formation of planets around stars. If it works for systems as diverse as a pulsar and our Sun, the chances are that it works for all stars, and that “Solar” Systems very like our own may be the rule, rather than the exception, among the stars of the Milky Way.


On 24 February 1968, Nature carried the announcement of the discovery of pulsars. Then, just three of these objects were known, and their nature was a puzzle. Today, more than 550 are known, and they are among the most important phenomena in astrophysics.

John Gribbin


ALTHOUGH pulsars were first identified by chance in 1967, that discovery had its roots in a scientific development carried out during World War Two by scientists who had been diverted from more abstract research. That development was radar. Before the war, astronomers only had observations of the Universe made at visible wavelengths, using optical telescopes. Although the fact that radio waves from space could be detected on Earth had been noticed in the 1930s (by Karl Jansky, working at the Bell Laboratories in New Jersey) there was no time for radio astronomy to develop properly before the war broke out. During the war, radar systems along the coast of the English Channel suffered from interference which was identified as radio noise coming from the Sun, and this fanned the interest of scientists involved in radar work; after the war, in many cases initially using war surplus radar equipment, some of them began to probe the Universe at wavelengths longer than those of visible light, in the radio part of the electromagnetic spectrum. This new window on the Universe transformed astronomy in the 1950s.


Radio astronomy has one great advantage over optical astronomy. The bright blue light of the sky, that makes the stars invisible by day, is actually blue light from the Sun that has been bounced around the Earth’s atmosphere (“scattered”) by tiny particles in the air, so that it comes at us from all directions. Red light, with longer wavelengths, is not scattered anywhere near so much, which is why sunsets are red. This kind of scattering does not happen at radio wavelengths, so provided they are not pointed directly at the Sun radio telescopes are not dazzled in the way that our eyes, or photographic equipment attached to telescopes, are dazzled during the day (and, in any case, the Sun is nowhere near as bright at radio wavelengths as it is at visible wavelengths). So radio astronomers can observe interesting objects in the heavens twenty four hours a day, and don’t have to shut down when the Sun is above the horizon.


In fact, the Sun does influence radio waves coming to us from space. But astronomers are cunning enough to make use of this “interference” with the signals they receive to find out more about the objects in space that emit the radio waves. There is a constant stream of material escaping from the surface of the Sun and blowing out into space and across the Solar System. This is a very tenuous cloud of gas known as the solar wind. The atoms in this wind are not electrically neutral, because even at the surface of the Sun conditions are sufficiently energetic to remove electrons from the outside of the atoms — the solar wind is an electrically charged plasma, although very much more tenuous than the hot plasma that exists inside a star like the Sun. The density of this plasma varies, as clouds of material move out from the Sun, and one effect of this is to make radio waves passing through the plasma vary slightly in strength — they “twinkle”, or scintillate, in just the way that variations in the atmosphere of the Earth make starlight twinkle.


But stars are only affected in this way because their images are very small — just points of light. Planets, which show as tiny discs in the sky, do not twinkle, because the tiny fluctuations are averaged out over the visible disc. Of course, stars are really bigger than planets; they only look like points of light, instead of discs, because they are so far away. The same rule applies to radio sources affected by the solar wind — but it provides extra information about radio sources because, unlike stars, some of them are so large that they do show up as extended features on the sky, not just as points. Especially in the early days of radio astronomy (less so today), it was difficult to get a precise “picture” of a radio source, a detailed map equivalent to a photograph of a star, so it was not always obvious whether the noise was coming from a point source or an extended one. Ones that twinkle, however, are definitely point sources; ones that do not twinkle are extended objects. And one inference is that twinkling radio sources must be a very long way away.


It works both ways. The fact that distant radio sources twinkle also reveals information about the nature of the solar wind, and it was this line of attack that led a young radio astronomer called Anthony Hewish to begin investigating such scintillating radio sources, as they are known, at the new radio astronomy observatory in Cambridge in the 1950s. From using scintillations as a probe of the solar wind in the 1950s, he moved on to using scintillations as a probe of the nature of radio sources, using a government grant of just £17,000 to build a new radio telescope. The pioneering radio astronomer Sir Bernard Lovell has described this award of funds as “one of the most cost-effective in scientific history.” For it was with the new telescope that one of Hewish’s research students, Jocelyn Bell, discovered the first pulsar in 1967.


Bell (now Jocelyn Burnell) had been born in Belfast in 1943, and graduated from the University of Glasgow in 1965. During the next two years, she started her PhD studies in Cambridge, and worked on the construction of Hewish’s new telescope — which bore little resemblance to the kind of bowl shaped antennas that the term “radio telescope” immediately conjures up in the minds of most people. You need a special kind of telescope to observe scintillation of radio sources, because it has to be able to respond to very rapid fluctuations in the strength of the radio noise coming from space. Your eyes, for example, can see stars twinkling because they react very quickly to changes in starlight, in “real time”, to use the computer jargon; but a photographic plate, exposed for several minutes (or several hours) will show an image built up over all that time (“integrated” for all that time). A photograph will show fainter stars than you can ever see with your unaided eyes, but it will never reveal twinkling. In the same way, a radio telescope that integrates the signal from a distant object for a long time might be useful in locating the object, but it will never reveal scintillation. The new scintillation telescope designed by Hewish would operate in real time, with a very rapid response to fluctuating signals.


It was more like an orchard than the everyday image of a telescope. A field covering four and a half acres was filled with an array of 2,048 regularly spaced dipole antennas. Each dipole (a long rod aerial) was mounted horizontally on an upright, so that it was a couple of metres above the ground, making a letter “T” with a wide cross bar. The length of the cross bar was chosen to fit the wavelength of radio noise that Hewish was interested in observing. (And, in fact, the cross bar was slightly below the top of its supporting pole; mixing the analogy, each of the dipoles, mounted across its support, looked like the crossed yard of a square rigged sailing ship, slung across its mast.) All of these antennas had to be wired up correctly, so that any radio noise that they picked up would be combined into one signal, which was fed into a receiver where the fluctuating signals were recorded automatically in pen and ink as wiggly lines on a long strip of paper continuously unrolling from a chart recorder. By varying the way in which the inputs from each of the 2,048 antennas were added together, this system made it possible to sweep a strip of the sky running north and south, and directly overhead at Cambridge. But in order to do this, the wiring had to be just right. This tedious wiring task was obviously just the job for a research student.







The aim of the project was to identify very distant radio sources, known as quasars, by their scintillation. By the summer of 1967 (almost exactly at the time when I arrived in Cambridge to begin my own PhD studies, at the then-new Institute of Theoretical Astronomy), the new telescope was up and running, and showing up scintillating radio sources, as intended. You can’t “steer” a field full of antennas the way you can move a dish antenna around to look at different parts of the sky, but with a system like the one now being used by Bell for her real doctoral work you let the rotation of the Earth sweep everything around so that you cover the whole sky once every twenty four hours. Because the scintillation is caused by the solar wind, it is strongest when the Sun is high in the sky. But the Cambridge team left their system switched on permanently — having built it, it cost very little to run, and you never know when you might find something interesting and unexpected.


On 6 August 1967, that is exactly what happened. Each sweep around the sky produced a strip of chart 30 metres long, adorned with three wiggly lines from the pen recorders. As the telescope swept around the sky, any particular source would be “visible” to it for just three or four minutes, at a time when it was directly overhead. Bell’s job was to examine kilometres of chart to find anything that looked interesting in the wiggles. When she studied the chart for 6 August, she found a tiny fluctuation, about one centimetre long, corresponding to a faint source of radio noise observed by the telescope in the middle of the night, when it was pointing in the opposite direction from the Sun. It couldn’t be scintillation; most probably, it was interference from some human activity. Bell marked what she called the bit of “scruff” on the chart, and ignored it.


But the scruff kept coming back — almost, but not quite, at the same time every night. In September, Bell had enough information to show that the scruff was always coming from the same part of the sky, re-appearing at intervals not 24 hours apart, but 23 hours and 56 minutes long. This was an important clue, since, because of the motion of the Earth in its orbit around the Sun, the apparent passage of the stars overhead does indeed repeat every 23 hours and 56 minutes, not every 24 hours. Just when Bell and Hewish had decided that they had found something interesting, and set up a high speed recorder to monitor the fluctuations of the scruff, it faded from view for a few weeks. But in November it was back — and the new recorder showed that the scruff was actually a radio source fluctuating regularly with a period of 1.3 seconds.
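That 23-hours-and-56-minutes clue is easy to check for yourself. The Earth makes one extra rotation per year relative to the Sun, so the sidereal day (one rotation relative to the fixed stars) is shorter than the 24-hour solar day by a factor of 365.25/366.25. A quick back-of-the-envelope sketch:

```python
# The Earth makes one extra turn per year relative to the Sun, so the
# sidereal day is shorter than the solar day by 365.25/366.25.
SOLAR_DAY_S = 24 * 3600  # mean solar day in seconds

sidereal_day_s = SOLAR_DAY_S * 365.25 / 366.25

hours = int(sidereal_day_s // 3600)
minutes = int((sidereal_day_s % 3600) // 60)
print(f"Sidereal day ≈ {hours} h {minutes} min")  # ≈ 23 h 56 min
```

Anything fixed among the stars, rather than tied to human activity on Earth, returns overhead on exactly that schedule.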


This was such a surprise that, in spite of the fact that the source stayed in the same place among the fixed stars, Hewish dismissed it, once again, as interference from a human source of radio noise. Nobody had ever seen any astronomical object vary that rapidly — the most rapidly varying stars known in 1967 fluctuated with periods of about eight hours. But continuing observations gradually ruled out any possibility of human interference, and showed that the pulses themselves were extraordinarily precise, recurring every 1.33730113 seconds exactly, and each lasting for just 0.016 of a second. Together, these measurements showed that the source of the pulses must be very small. Because light travels at a finite speed, and nothing can travel faster, fluctuations in any signals from any source can only keep in step with one another if the source is small enough so that a light ray can travel right across it during the interval between pulses. It works like this. If a star like the Sun is so far away that we can only see it as a point of light, the brightness of the star, as we see it, depends on the brightness of different patches of the surface of the star, added together. You can imagine that the northern hemisphere of the star might get 10 per cent brighter while the southern hemisphere got 10 per cent dimmer, and the result would be that we saw no change in the total brightness of the star. We would only see the brightness fluctuate if the whole star dimmed and brightened in step. And that can only happen if the variations happen slowly enough that there is time for some sort of message to get from the north pole to the south pole, saying, in effect, “I’m about to start getting brighter, so you’d better do so as well”.
The “message” might be a regular variation in pressure, or a repeated change in the way convection is carrying energy outward from inside the star; the point is that whatever the physical cause of the variation, its influence can only spread at the speed of light, or less, so the whole star can only respond in step to a disturbance if it is small enough for the appropriate message to reach every part of it before the message changes. Otherwise, some parts will be getting brighter and others dimmer, in a confused mess of variations. A precise pulse 0.016 seconds long, repeating precisely every 1.33730113 seconds, could only come from something very small indeed — about the size of a planet, or even less.
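The light-travel argument can be turned into numbers directly, using the pulse figures quoted above; nothing coherent can be bigger than the distance light covers in one variation time:

```python
# A source can only vary coherently on a timescale t if light can cross
# it in that time: maximum size ≈ c * t.
C_KM_S = 299_792.458  # speed of light in km/s

pulse_width_s = 0.016      # duration of each pulse
pulse_period_s = 1.33730113  # interval between pulses

size_from_width_km = C_KM_S * pulse_width_s
size_from_period_km = C_KM_S * pulse_period_s

print(f"Limit from pulse width:  {size_from_width_km:,.0f} km")   # ≈ 4,800 km
print(f"Limit from pulse period: {size_from_period_km:,.0f} km")
```

The tighter limit, from the pulse width, is smaller than the Earth (which is nearly 13,000 km across) — hence “about the size of a planet, or even less”.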


Hewish and his team had to face the very real possibility, as of November 1967, that what they had detected was indeed a signal coming from a planet — a beacon radiated by another intelligent civilization. Tongues only slightly in their cheeks, they speculated among themselves that they might have made contact with little green men, and dubbed the source “LGM 1”. And Hewish decided to keep the lid on news of the discovery until they had carried out more observations. It was just as well that he did.


Working with another research group in Cambridge at that time, I knew, as did all the astronomers, that the radio people at the Cavendish were up to something. But just what it was they were up to, nobody could prise out of them. Well, we thought, no doubt they would tell us in their own good time. I wasn’t really very interested, anyway; I was too deeply embroiled in the first real task that I had been set as a research student, developing a computer programme that would describe the way in which stars oscillate, or vibrate. At the end of 1967, this seemed about as useful as spending your days wiring up a field full of antennas, and I still had no clear idea of how I might turn this work into anything useful enough to earn my PhD. By the end of February 1968, however, everything had changed.


Just before Christmas, Bell found another piece of scruff, coming from another part of the sky. This one turned out to be a similar source, pulsing with comparable precision to LGM 1, but with a period of 1.27379 seconds; soon, there were two more to add to the list, with periods of 1.1880 seconds and 0.253071 seconds, respectively. The more sources were discovered, the less likely the little green man explanation seemed. And, in any case, careful observations of the first of these objects had shown, by the beginning of 1968, no trace of the variations that you would expect if they were actually coming from a planet in orbit around a star. They must, after all, be natural. The LGM tag was quietly dropped, and Hewish decided it would be safe to go public — first with a seminar in Cambridge, to let the rest of the astronomers there in on the act, and then, almost immediately, with a paper in Nature (the issue dated 24 February 1968) announcing the discovery to the world.


The radio astronomers had indeed discovered a new kind of rapidly varying radio source. The title of the discovery paper was “Observation of a Rapidly Pulsating Radio Source”, and the term “pulsating radio source” soon gave rise to the name “pulsar”, which stuck. But what were these pulsars that Bell had discovered?


With the announcement of the discovery of pulsars, all hell broke loose among the theorists. A whole new kind of previously unsuspected astronomical object had been discovered, and somebody was going to make their name by finding an explanation for the phenomenon. In the discovery paper, Hewish, Bell and their colleagues pointed towards what seemed the obvious possibilities. If the radio pulses were being produced by a natural process, not by an alien civilization, they had to be coming from a compact star. Nothing else could supply the energy required to power the pulses. A star the size of a planet like the Earth had to be a white dwarf; anything smaller (also allowed by the rapid pulsations) would have to be a neutron star.


At that time, astronomers knew that white dwarfs existed. But neutron stars — objects so dense that the mass of the Sun would be packed into a sphere a mere 10 km across — were regarded as a wild theoretical speculation.


Many stars were known to oscillate, or vibrate, breathing in and out as a result of regular variations in the processes producing energy inside them, and varying in brightness as a result. Maybe this could also happen in compact radio stars: “The extreme rapidity of the pulses,” said the Cambridge team, “suggests an origin in terms of the pulsation of an entire star.” And they pointed out that the rapid speed of the fluctuation meant that the star doing the pulsating had to be either a white dwarf or a neutron star. There was one snag. Although calculations of the pulsation periods of white dwarfs had been carried out by theorists in 1966, the basic periods they came up with were no lower than 8 seconds, a little too big to explain the pulsars. On the other hand, even a simple calculation showed that neutron stars would vibrate with periods much shorter than those of the first pulsars discovered, around a few thousandths of a second. White dwarfs looked the better bet, if some way could be found to allow them to vibrate a little more rapidly than the earlier calculations had suggested. Further calculations showed that by allowing for the effects of rotation, white dwarfs might vibrate as rapidly as ten times a second. But as more observations of more pulsars were made by radio astronomers around the world (a couple of dozen by the end of 1968; scores more by now), it became clear that pulsars could not possibly be white dwarfs, after all.


The problem was that the fastest possible vibration period, using unrealistic amounts of rotation, was still greater than the periods of some of the new pulsars being discovered. One discovery was particularly significant. It was made by astronomers using the 300-foot dish antenna at Green Bank, West Virginia — just about any kind of radio telescope can observe pulsars, once you know what to look for. They found a pulsar flicking on and off thirty times a second, near the centre of a glowing cloud of gas known as the Crab Nebula. The high speed of the Crab pulsar, as it became known, was already enough to put the white dwarf model in trouble (and even faster pulsars have been found since). Its location, however, was even more significant than its speed.


The Crab Nebula is actually the debris from a supernova explosion — one which was observed from Earth by Chinese astronomers in 1054 AD. Walter Baade had pointed out, years before, that if supernova explosions left neutron stars behind, the best place to look for a neutron star would be in the middle of the Crab Nebula. He had even identified a particular star in the Crab Nebula that he said might be the neutron star left behind by the explosion. Until 1968, almost everybody else thought he was wrong — although, as the fact that neutron stars were even mentioned by Hewish’s team in the pulsar discovery paper shows, by the middle of the 1960s a few theorists were dabbling with calculations of the structure and behaviour of such objects. But the radio observations showed that the Crab pulsar seemed to be in the same place as the star Baade was so interested in. Further studies showed that this star was actually flicking on and off, in visible light, thirty times a second — something that nobody could have conceived as being possible just a few months before. A star that flickered so rapidly was beyond the wildest imaginings of the most daring theorist. Yet flicker it did. It was indeed the pulsar, energetic enough to be detected as visible light, not just with lower energy radio waves.


By the time those observations were made, at the Steward Observatory on Kitt Peak, Arizona, in January 1969, everyone was convinced that pulsars are indeed neutron stars. And it had also become clear that, in spite of their name, they are not pulsating, but rotating, beaming radio waves (and in some cases light) out through space from an active site on their surface. The pulses produced by a pulsar are the equivalent of a celestial lighthouse (but natural, not the product of an alien civilization), flicking its beam past the Earth repeatedly as the underlying star rotates. There is now an overwhelming weight of evidence that this is indeed the case, and that pulsars are neutron stars spinning so fast that in many cases a spot on the equator of such a star is being whirled around at a sizeable fraction of the speed of light.


The person who put the idea down on paper, and published it in Nature in the early summer of that year (volume 218, page 731), was Tommy Gold, who thereby gained fame as the man who worked out the true nature of pulsars. In fact, not long before the announcement of the discovery of pulsars (and after Jocelyn Bell had first noticed the bit of scruff on her charts) Franco Pacini had published a paper in Nature, late in 1967, in which he pointed out that if an ordinary star did collapse to form a neutron star, the collapse would make it spin faster (like a spinning ice skater drawing in her arms) and strengthen the star’s magnetic field, as it was squeezed, along with the matter, into a smaller volume. Such a rotating magnetic dipole, said Pacini, would pour out electromagnetic radiation, and this could explain details of the way the central part of the Crab Nebula still seems to be being pushed outwards, nearly a thousand years after those Chinese astronomers saw the supernova explode. It may seem a little unfair that Gold to some extent stole Pacini’s thunder by linking the rotating neutron star idea with pulsars; however, it’s worth mentioning that similar ideas about the source of energy in the Crab Nebula had been aired by Soviet researchers a couple of years previously, and that as far back as 1951 Gold had speculated, at a conference held at University College, London, that intense radio noise might be generated in the neighbourhood of collapsed, dense stars.
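The ice-skater argument is simple to quantify: with angular momentum conserved and the moment of inertia proportional to MR², the spin period shrinks with the square of the radius. As an order-of-magnitude toy — starting, purely for illustration, from the Sun’s roughly 25-day rotation, and ignoring the mass and angular momentum a real collapsing star would shed:

```python
# Conservation of angular momentum: I * omega = const, with I ∝ M R²,
# gives P_new = P_old * (R_new / R_old)**2 for collapse at fixed mass.
SUN_RADIUS_KM = 6.96e5        # radius of the Sun
SUN_PERIOD_S = 25 * 86400     # ~25-day solar rotation (illustrative)
NS_RADIUS_KM = 10             # radius of a neutron star

ns_period_s = SUN_PERIOD_S * (NS_RADIUS_KM / SUN_RADIUS_KM) ** 2
print(f"Collapsed spin period ≈ {ns_period_s:.4f} s")  # well under a millisecond
```

The idealised answer comes out at a fraction of a millisecond — unrealistically fast, precisely because real collapses are messier, but it shows why spins of a second or faster are effortlessly reached.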


One key feature of the application of these ideas to pulsars, predicted by Gold in his 1968 paper, was that rotating neutron stars ought to slow down slightly, spinning less fast as time passes; when measurements carried out by a team using the thousand-foot dish antenna built into a natural valley at Arecibo, Puerto Rico, found that the pulse rate from the Crab pulsar was indeed slowing down, by about a millionth of a second per month, “Gold’s model” could no longer be doubted.
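That slow-down rate even lets you estimate the pulsar’s age. A standard rough estimate is the “characteristic age” τ = P/(2Ṗ), which assumes the star has been braking steadily since birth; plugging in the Crab’s thirty-pulses-a-second period and a slow-down of about a millionth of a second per month gives:

```python
# Characteristic spin-down age: tau = P / (2 * Pdot), a rough estimate
# assuming steady braking since the pulsar was born.
P = 0.033                        # Crab pulsar period in s (~30 pulses/s)
pdot = 1e-6 / (30 * 86400)       # ~1 microsecond of slow-down per month

tau_s = P / (2 * pdot)
tau_years = tau_s / (365.25 * 86400)
print(f"Characteristic age ≈ {tau_years:.0f} years")  # ≈ 1,400 years
```

That is satisfyingly close to the roughly nine centuries that had then elapsed since the supernova of 1054 — strong support for the rotating neutron star picture.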


Jocelyn Bell got her PhD (and Hewish later received a Nobel prize) chiefly for discovering pulsars; I got my PhD partly for proving that pulsars could not possibly be pulsating white dwarf stars. And since 1968, dozens of astronomers have built entire careers upon her discovery of that little piece of scruff.


Redshift record may be longstanding


PROSPECTS of finding any quasars with redshifts higher than the present record of 4.9 are very low, according to an analysis carried out by Steve Warren, of the University of Oxford. Quasar redshifts are interpreted as a measure of the distance of these objects from us in the expanding Universe, and because light travels at a finite speed we see quasars with higher redshifts as they were when the Universe was younger. Warren’s analysis strongly suggests that there was a burst of quasar formation at a particular epoch in the life of the Universe. Working out just how many quasars there are at different redshifts is difficult, because faint objects become harder to see at greater distances. It has been clear for some time that the number density of quasars really does increase between redshifts of 2 and 3; but it has not been clear whether the lack of observations of many quasars with redshifts above 4 is because there really are few of them, or because they are too far away to be seen. Warren has compared the distribution of quasars with different brightnesses but the same redshifts against the numbers actually seen at each redshift. He told a specialist meeting of the Royal Astronomical Society in February that the best explanation of the observations is that there is a strong decline in the number density of quasars at redshifts above 3.3. At a redshift of 4, there are only one-sixth as many quasars as at a redshift of 3.3, so that the number density at a redshift of 4 is the same as the number density at a redshift of 2. It will, therefore, be “pretty hard work”, as Warren puts it, to find any quasar with a redshift greater than 5, no matter how good our telescopes become. The distribution of these objects does, however, definitely support the idea that very many galaxies (perhaps most) go through a quasar phase in their lifetimes.
A quasar “lives” for about a hundred million years, and they are seen over a range of redshifts corresponding to a span of ten billion years, so less than one galaxy in a hundred need be active in this way at any one time. The model that best fits all of these observations is that galaxies are centred around supermassive black holes, and that quasar activity is produced in young galaxies by gas and other material falling in to the holes. After all the debris has been consumed, the galaxy settles down to a quiet life.
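The duty-cycle arithmetic behind that “one galaxy in a hundred” figure is a one-liner:

```python
# If each quasar shines for ~100 million years out of a span of
# ~10 billion years, the fraction active at any one time is the ratio.
lifetime_yr = 1e8   # ~100 million years of quasar activity per galaxy
span_yr = 1e10      # ~10 billion years over which quasars are seen

fraction_active = lifetime_yr / span_yr
print(f"Fraction of galaxies active at once: {fraction_active:.0%}")  # 1%
```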


Tunnelling out of inflation


INFLATION theory, although widely favoured by cosmologists, is not the only way to explain why the temperature of the Universe is the same in all directions. According to David Hochberg and Thomas Kephart, of Vanderbilt University, in Nashville, Tennessee, the same effect could have been achieved by allowing energy to leak through quantum wormholes when the Universe was young.


The uniformity of the Universe, and in particular the remarkable smoothness of the background radiation measured by the COBE satellite, poses a puzzle known as the horizon problem. If the Universe was born in a Big Bang some 15 billion years ago, as the weight of evidence suggests, and has been expanding ever since, then regions on opposite sides of the sky today could not have been “touching” one another (that is, in causal contact) at the time the background radiation last interacted with ordinary matter before it “decoupled” and went its own way. So how do those regions both “know” that they should have the same temperature, 2.74 K?


Standard cosmological thinking today argues that very early in its life the entire observable Universe was in a region so small that it was in causal contact, and that in order for it to become as large as it is today this seed, much smaller than an atomic nucleus, was “inflated” to macroscopic size by a burst of exponentially fast expansion. But Hochberg and Kephart point out that under the extreme conditions of the primordial Universe, other bizarre effects come into play. In particular, the energy density of the Universe at that time and the curvature of spacetime by intense gravitational fields would have been so great that it would allow tiny quantum wormholes to open up. These are tunnels through hyperspace with a black hole “throat” at each end. Each wormhole, the researchers say, is like a “handle” on ordinary space (Physical Review Letters, vol 70 p 2665).


These wormholes might be too small to allow matter to pass through them, and they could be very short-lived. But “to be of interest, wormholes need only stay open long enough for radiation to traverse the throat”, allowing the temperature on opposite sides of the early Universe (and everywhere else) to equalise.


There is, though, a possible snag with this scenario. As the Vanderbilt team points out, some solutions to the equations describing wormholes suggest that they may form tunnels through time, as well as through space. If so, radiation from the fireball era of the Universe could leak out into the Universe today, destroying the uniformity of the background radiation that the model was set up to explain.


The Universe getting Older


The debate about the age of the Universe has been fanned by the latest measurement of the Hubble constant, a measure of how rapidly the Universe is expanding. Several recent measurements have suggested a relatively high value for this parameter, implying that the time elapsed since the Big Bang may be less than the ages of the oldest stars — an obvious nonsense. But the new technique provides scope for the Universe to be older than its contents, as common sense dictates. The key feature of the new study, reported by Peter Nugent and colleagues at the University of Oklahoma, working with Peter Hauschildt, of Arizona State University, is that it is based on direct measurements of the distances to other galaxies, with no intermediate steps. The measurements which give a high value of the Hubble constant involve several stepping stones, the key one being the properties of stars known as Cepheid variables, which are regarded as “standard candles” whose brightness indicates their distance.


The direct technique used by Nugent and his colleagues (Physical Review Letters, vol 75 p 394) is based on studies of the expanding shells of gas produced when stars explode as supernovae. The interval between the beginning of the supernova outburst and the time when it reaches maximum brightness can be used to calculate the absolute brightness of the supernova at its peak. The apparent brightness then gives the distance to the supernova, and hence to its parent galaxy, in a single step. Comparing this with the redshift in the light from that galaxy then gives the Hubble constant directly.
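The logic of the one-step method can be sketched with made-up illustrative numbers (the magnitudes and redshift below are hypothetical, not values from the Nugent paper): the standard distance modulus relation converts apparent and absolute brightness into a distance, and the galaxy’s redshift then yields the Hubble constant directly.

```python
# One-step distance ladder sketch with hypothetical numbers.
C_KM_S = 299_792.458  # speed of light in km/s

def distance_mpc(m, M):
    """Distance in Mpc from the distance modulus m - M = 5 log10(d / 10 pc)."""
    d_pc = 10 ** ((m - M + 5) / 5)
    return d_pc / 1e6

m_apparent = 19.0    # illustrative apparent peak magnitude of the supernova
M_absolute = -19.5   # illustrative absolute peak magnitude from the physics
z = 0.084            # illustrative redshift of the parent galaxy

d = distance_mpc(m_apparent, M_absolute)
H0 = C_KM_S * z / d  # Hubble's law: recession velocity cz = H0 * d
print(f"d ≈ {d:.0f} Mpc, H0 ≈ {H0:.0f} km/s/Mpc")
```

With these particular toy numbers the answer lands near 50, the value the real one-step measurements favour.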


The value that comes out of the new study for the Hubble constant is between 40 and 60, in standard units, with a best estimate of 50. This contrasts with other estimates, based on the Cepheid technique, of 80 or more — but other “one step” techniques, including those based on gravitational lensing, also give a low value. A value of 50 would be entirely adequate to make the Universe some 20 billion years old, comfortably older than the oldest stars. It therefore looks increasingly likely that there is something wrong with the traditional stepping stone technique for measuring cosmological distances based on Cepheid variables.
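The connection between the Hubble constant and the age of the Universe is easy to see: the “Hubble time”, 1/H0, sets the timescale (a matter-dominated universe would actually be younger, about two-thirds of this, but the scaling is the point):

```python
# Hubble time 1/H0, converting km/s/Mpc into billions of years.
MPC_KM = 3.086e19   # kilometres in one megaparsec
GYR_S = 3.156e16    # seconds in one billion years

def hubble_time_gyr(H0):
    """1/H0 in billions of years, for H0 in km/s/Mpc."""
    return (MPC_KM / H0) / GYR_S

for H0 in (50, 80):
    print(f"H0 = {H0}: 1/H0 ≈ {hubble_time_gyr(H0):.1f} billion years")
```

A value of 50 gives a Hubble time near 20 billion years, comfortably older than the oldest stars; a value of 80 gives only about 12 billion years, which is where the trouble with the high estimates comes from.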







