Billion-year-old water could harbor early life.

From the National Post.

An international research team reported Wednesday that miners near Timmins are tapping into an ancient underground oasis that may harbour prehistoric microbes. The water flowing out of fractures and bore holes in one mine near Timmins dates back more than a billion years, perhaps 2.6 billion, making it the oldest water known to exist on Earth, says the team that details the discovery in the journal Nature.

This water doesn’t predate life on Earth (we think life first appeared around 3.5 billion years ago), but it suggests that life may have originated below the Earth’s surface, where it would have been protected from UV radiation from the Sun. Before the evolution of blue-green algae and the subsequent increase of atmospheric oxygen needed to form the ozone layer, which today shields the Earth’s surface from harmful amounts of UV radiation, life would have been restricted to the oceans. And now, perhaps, underground as well. If ancient living organisms are found in this isolated underground water… I find this simply amazing to contemplate.

A flash of insight

Near the end of April, a record-setting Gamma-Ray Burst was observed in the constellation Leo. Not only did scientists observe the highest-energy gamma-ray photons ever measured from such an event, about 35 billion times more energetic than photons of visible light, but the duration of the event also set records. The hours-long ‘burst’ enabled other telescopes to observe the region of the sky containing the source object. The redshift, and therefore the distance, was remarkably small for a GRB, too. This GRB was exceptionally energetic, remarkably long-lived, and closer to us than 95% of other GRBs seen. The image below is from NASA’s Swift X-ray telescope.
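To put that in perspective with a quick back-of-the-envelope conversion (assuming a typical visible-light photon carries roughly 2.5 electron-volts of energy):

35,000,000,000 × 2.5 eV ≈ 90 GeV

so the record photons carried roughly 90 billion electron-volts apiece, comparable to the particle energies produced in large accelerators.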

Swift X-ray image of the gamma-ray burst.

The follow-up observations are expected to detect the underlying supernova by the middle of May. The data collected may reveal new information about the physical processes involved in creating the most luminous events in the Universe.

Observation is the judge of the truth of an idea

It was thought in the Middle Ages that people simply made many observations, and the observations themselves suggest laws. But it does not work that way. It takes much more imagination than that. So the next thing we have to talk about is where the new ideas come from. Actually, it does not make any difference, as long as they come. We have a way of checking whether an idea is correct that has nothing to do with where it came from. We simply test it against observation. So in science we are not interested in where an idea comes from.

There is no authority who decides what is a good idea. We have lost the need to go to an authority to find out whether an idea is true or not. We can read an authority and let him suggest something; we can try it out and find out if it is true or not. If it is not true, so much the worse–so the “authorities” lose some of their “authority.”

Richard Feynman

Science is the belief in the ignorance of experts

Retraction Watch is a blog that “track[s] retractions as a window into the scientific process.” A very interesting Retraction Watch post came up recently which led to a New York Times Magazine article on scientific fraudster Diederik Stapel. Dr. Stapel currently has 53 retracted scientific papers.

A fair number of published, and therefore peer-reviewed, scientific papers have been retracted over the last few years, especially in medicine. A PNAS study published last year found:

A detailed review of all 2,047 biomedical and life-science research articles indexed by PubMed as retracted on May 3, 2012 revealed that only 21.3% of retractions were attributable to error. In contrast, 67.4% of retractions were attributable to misconduct, including fraud or suspected fraud (43.4%), duplicate publication (14.2%), and plagiarism (9.8%). Incomplete, uninformative or misleading retraction announcements have led to a previous underestimation of the role of fraud in the ongoing retraction epidemic. The percentage of scientific articles retracted because of fraud has increased ∼10-fold since 1975.

The paper suggested that the role of scientific misconduct has been underestimated because the retraction notices often downplay or hide the true reason for the retraction.

Which brings us back to the Stapel retractions. In the NYT article, which is well worth reading in its entirety, Stapel reveals how he got away with his bogus research for so long:

The key to why Stapel got away with his fabrications for so long lies in his keen understanding of the sociology of his field. “I didn’t do strange stuff, I never said let’s do an experiment to show that the earth is flat,” he said. “I always checked — this may be by a cunning manipulative mind — that the experiment was reasonable, that it followed from the research that had come before, that it was just this extra step that everybody was waiting for.” He always read the research literature extensively to generate his hypotheses. “So that it was believable and could be argued that this was the only logical thing you would find,” he said. “Everybody wants you to be novel and creative, but you also need to be truthful and likely. You need to be able to say that this is completely new and exciting, but it’s very likely given what we know so far.”

Science, itself, isn’t a guarantee of truth. Science is a method for finding the truth, if scientists are ruthlessly honest with other scientists, with the data, but most of all, with themselves. My belief is that the current practice of Science places far too little emphasis on replicating results with different scientists in different labs, using just the information provided in scientific papers. The proof of the pudding is in the eating, and the proof of research is in the replication. Simply finding new results isn’t enough, but the way we do Science today rewards the new and essentially ignores what happens next.

The effects of dishonest research are often not confined to the retracted paper. Scientists build on earlier results, and a bogus result can contaminate later research, sending honest scientists chasing down bunny trails. Being careful of unreplicated results is one way of dealing with this problem. Insisting that peer-reviewed journals publish not only the paper but also the raw data (and associated analysis tools!) used in the paper permits independent inspection of the reasoning in the paper. It turns out to be really hard to fake data! The ultimate technique for weeding out erroneous results is independent replication of claimed results, but that path is not valued today. We shouldn’t seek just the novel, but rather the true. The Scientific method has correctives for mistakes, inadvertent as well as dishonest, but we must take advantage of those techniques.

Richard Feynman said it best: Science is the belief in the ignorance of experts.

Evidence, Skepticism, and the Scientific Method

Judith Luber-Narod, a high-school science teacher at the Abby Kelley Foster Charter Public School in Worcester, Mass., has incorporated climate change into her environmental studies classes, even though she teaches in a somewhat conservative area.

“I hesitated a little bit talking about something controversial,” she said. “But then I thought, how can you teach the environment without talking about it?”

Her students, on the other hand, love topics some deem controversial, she said. She devised an experiment in which she set up two terrariums with thermometers and then increased the level of carbon dioxide, the main greenhouse gas, in one of them.

The students watched as that terrarium got several degrees hotter than the other.

“I say to them, ‘I’m here to show you the evidence,’ ” she said. “ ‘If you want to believe the evidence when we’re done, that’s up to you.’ ”

I’m still working on that Dark Energy post, but it is proving to be ‘interesting’ to write. In the meantime, I wanted to talk a little about the role of experiment and skepticism in Science. The excerpt above comes from a New York Times science article, “New Guidelines Call for Broad Changes in Science Education.” I don’t mean to be hard on the teacher. I do mean to be a little hard on the author and editors. But mostly, I’d like to use this as a cautionary tale showing why Good Science is not easy to do.

So, what’s wrong with the little experiment designed to show students how the greenhouse gas carbon dioxide raises Earth’s temperature? Almost everything. In particular, it is a great example of how a little knowledge is a dangerous thing, and how the role of experiment is often misunderstood.

First, the greenhouse effect is not really how a greenhouse warms. The glass of a greenhouse will indeed absorb infrared radiation, reradiating some heat, which would otherwise escape to the outside, back into the greenhouse. But this effect is quite minor; real greenhouses warm because the glass enclosure blocks convection, preventing warm air from rising away and being replaced by cooler air from outside. It is (relatively) easy to demonstrate this by replacing the glass panes of a greenhouse with panes made of rock salt. Rock salt is transparent to infrared radiation, so it does not stop radiative cooling, yet salt panes block the formation of convective air currents just as well as glass. A greenhouse with rock salt panes warms like a glass greenhouse, so the real warming mechanism is the elimination of convective flow, not the reduction in radiative cooling.

Similarly, in almost all terrarium experiments like the one described above, the real warming mechanism at work is not the carbon dioxide keeping infrared radiation from carrying off heat energy, but the carbon dioxide inhibiting the formation of convective air currents. The carbon dioxide, being denser than air, stays within the open-topped terrarium. It doesn’t get hot enough to rise over the rim of the terrarium and allow cooler outside air to flow in. This is (relatively) easy to demonstrate by using argon gas instead of carbon dioxide. Argon is denser than air, and argon is transparent to infrared radiation (like the rock salt). A terrarium filled with argon gas will heat just as well as one filled with carbon dioxide. Ergo, the warming effect has very little to do with the carbon dioxide reducing radiative cooling of the objects in the terrarium.
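A quick check of molar masses (standard values, not something measured in the classroom experiment) shows why both gases pool in an open-topped container:

air ≈ 29 g/mol, CO₂ ≈ 44 g/mol (about 1.5 times denser than air), Ar ≈ 40 g/mol (about 1.4 times denser than air)

Both gases are dense enough to sit in the terrarium and suppress convection, but argon is transparent to infrared, so any extra warming it produces cannot be radiative.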

So, what about the teacher’s “I’m here to show you the evidence. If you want to believe the evidence when we’re done, that’s up to you”? The problem is that the experimental result (the terrarium warming) has more than one explanation, and the experiment isn’t designed to eliminate effects other than greenhouse-gas-style radiative warming. Good science is really hard, because even if you see a predicted effect, it is necessary to rule out alternative explanations for the observed evidence. If your hypothesis predicts A, but the evidence shows B, the hypothesis is wrong. But if the hypothesis predicts A and the evidence shows A, that doesn’t necessarily show the hypothesis is correct. Experiments must be designed to test the other explanations for A and rule them out before the evidence can be said to support the hypothesis.

Science requires skepticism. Science requires more than even a theory agreeing with the evidence. Sometimes, what you see isn’t quite what you (or your teacher) think it is. Don’t be hasty to agree with authority. Be skeptical.

Expanding on the concept of Dark Energy

Last post was about Dark Matter. Now we’ll talk about Dark Energy: what is the evidence for it, why is it needed, and what exactly is it?

Our tale starts shortly after 1905, when Einstein began developing the theory of gravity we call General Relativity. By 1915, he had worked out a system of equations that described the structure of spacetime, the famous Einstein field equations. Cosmologists began searching for solutions to these equations, and the solutions they found described a spacetime that was expanding or contracting, but not static. This was troubling, since before Edwin Hubble discovered evidence for the expansion of the universe, virtually everyone thought the universe was an eternal, static entity. So Einstein went back to the drawing board and added a term, the Cosmological Constant, Λ in the equation below, modifying the field equations so they allowed solutions giving a static universe. However, these solutions were unstable: the slightest perturbation would set the universe expanding or contracting without end.

Einstein Field Equations
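For reference, in standard notation (a reconstruction of the textbook form, not a transcription of the image above), the field equations with the cosmological constant read:

R_μν − (1/2) R g_μν + Λ g_μν = (8πG/c⁴) T_μν

The left-hand side describes the curvature of spacetime, the right-hand side the matter and energy filling it; the Λ term is the extra piece Einstein inserted to permit a static solution.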

In 1929, Hubble published observations showing the general expansion of the universe (which we call the Hubble Flow). When Einstein learned that the universe was not static, but expanding, he dropped the Cosmological Constant, referring to it as the biggest blunder of his career. Had he taken the equations seriously, he could have predicted the expansion of the universe years before it was discovered by observation. In fact, the cosmologist Georges Lemaître did propose an expanding universe, based on solutions of Einstein’s field equations, two years before Hubble’s discovery.

The expansion of space causes distant objects to recede from each other.

Cosmologists immediately realized that the gravitational attraction of all the mass in the universe would work to slow the rate of expansion. Astronomers tried to measure that rate, to determine whether the universe would eventually stop expanding and begin to contract, or continue to expand forever. The observations showed the rate of expansion was near the dividing line between expanding forever and eventually collapsing inward in a Big Crunch. But these measurements didn’t include objects in the distant (and therefore older) universe, because those objects were too faint to measure with the instruments of the day.
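That dividing line corresponds to what cosmologists call the critical density. For an expansion rate H (the Hubble parameter), the standard formula is

ρ_c = 3H² / (8πG)

If the average density of the universe exceeds ρ_c, gravity eventually wins and the expansion reverses; if the density falls short, the universe expands forever. The measurements of the day put the actual density close to this dividing value.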

By the 1990s, the technology was available to measure the light from very distant exploding stars, or supernovae. It turns out that one class of supernovae, Type Ia, produces a consistent peak light output, so by measuring how bright the peak of a Type Ia supernova appears to us, astronomers can infer the distance to the supernova. The dimmer the peak, the farther the supernova is from us. In addition to distance, a star’s motion directly toward or away from the Earth can be determined by measuring the shift in the color of its light. For objects at cosmological distances, the line-of-sight velocity is so large that we know the expansion of the universe is responsible for virtually all of it.
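The reasoning is just the inverse-square law. If every Type Ia supernova peaks at roughly the same intrinsic luminosity L, the flux F we receive falls off with distance d as

F = L / (4πd²), so d = √( L / (4πF) )

Measure how faint the peak appears, and the distance follows.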

You are now ready for the last piece of the puzzle. Since light travels at a fixed, finite velocity, the more distant the star, the longer it takes for the light to reach the Earth. If light must travel 500 million years to reach the Earth, then when we measure the object emitting that light, we are looking back 500 million years in time. If we measure the line-of-sight velocity (the object will be moving away from us at that distance) of a supernova so distant its light takes 500 million years to reach us, we are measuring the expansion speed of the universe 500 million years ago! By finding and measuring lots of faint supernovae, scientists can plot the line-of-sight velocity (the speed of the universe’s expansion) against the peak brightness (which gives the distance to the supernova, and therefore how far back in time we are seeing) of each supernova. That plot shows how the expansion of the universe has changed over time. And that is exactly what two teams of astronomers did, measuring supernovae so distant that some had exploded 4 billion years ago and their light was only now reaching the Earth. It was hard and exacting work.
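In the standard bookkeeping (general relations, not the details of either team’s analysis), the recession velocity comes from the redshift z of the supernova’s spectrum,

1 + z = λ_observed / λ_emitted, with v ≈ c·z for modest redshifts,

while the peak brightness gives the distance, and the light-travel time tells us how long ago we are sampling the expansion. Each supernova thus contributes one point to the expansion-history plot.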

The results were completely unexpected. Stunningly unexpected. Both teams found the expansion of the universe is speeding up, not slowing down, as the universe ages. The expectation had been that gravity, always working to pull matter together, would gradually slow the expansion as the universe got older. Indeed, the earlier measurements had not been able to see as far back in time, and therefore couldn’t clearly show the speedup. Something in the fabric of spacetime was pushing outward, forcing the universe to expand faster and faster, more than overcoming the gravity that was working to slow the expansion.

You can probably guess that we call this mysterious force, the one causing the universe to expand faster and faster, Dark Energy. No one knew it existed until measurements of the distant supernovae revealed the accelerating expansion to the two teams. The discovery resulted in a Nobel Prize shared by the leaders of the two teams, Saul Perlmutter, Brian Schmidt, and Adam Riess (three laureates being the customary maximum). Today, we don’t really know much about Dark Energy, except that it is stupendously powerful, that it pervades all of space, and that it was completely unexpected. Well, maybe not to Einstein and Lemaître, because Dark Energy behaves very much like the Cosmological Constant in Einstein’s field equations, which produces a force counterbalancing the pull of gravity.

We don’t know much, but what we do know is this: Dark Energy dominates the universe today, accounting for a little less than three quarters of its total energy content. While it is far too weak to be detected on small scales, Dark Energy fills all of space and is the dominant influence on the evolution of our universe. In the next post, we’ll talk a little about the various conjectures put forth to explain the origin and nature of Dark Energy.

Shedding some light on Dark Matter

What is Dark Matter? Why did scientists propose Dark Matter in the first place?

For over 80 years, astronomers have had a problem. They can measure the amount of matter we can see, the luminous matter of the Universe, consisting of dust, gas, and stars. Stars emit light, and that light illuminates the gas and dust of the interstellar medium, in some places causing the gas to glow with its own light. But what if stars aren’t illuminating some of the matter? We wouldn’t see it. Even if matter isn’t emitting or reflecting light, though, we can tell where it is by the gravitational effects it produces. And back in the 1930s, astronomers discovered that the orbits of stars within our galaxy indicated a lot of mass, much more than could be accounted for by the luminous matter. Careful observations of clusters of galaxies showed a similar pattern: the orbits of galaxies in large clusters indicated far more mass than could be seen. In fact, when astronomers started looking, it seemed that everywhere they looked on galactic scales or larger, they saw indications of much more mass than could be accounted for by the luminous matter. Galactic rotation, the distribution of gas and dust, gravitational lensing, and recently the tiny asymmetries in the Cosmic Microwave Background: all the gravitational calculations point to much more mass than we can see.
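The logic behind those orbital measurements is simple Newtonian dynamics. For a star (or a galaxy in a cluster) on a roughly circular orbit of radius r, the orbital speed v and the mass M(r) enclosed within the orbit are related, to a good approximation, by

v² = G·M(r) / r, so M(r) = v²·r / G

Observed rotation speeds in galaxies stay roughly flat far beyond the visible disk, which means the enclosed mass keeps growing with radius long after the luminous matter has run out.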

What form does the missing mass take? If the mass consists of baryons (that is, just normal matter made of protons, neutrons, and electrons), there are two problems. One, there are theoretical reasons for believing the Big Bang simply didn’t produce as much baryonic matter as we need to account for the gravitational effects. Two, since baryonic matter interacts with light, we’d see it if it were near stars. One solution to the latter problem is to put the dark baryonic matter out in the halo of galaxies where there are few stars and therefore little light. But even there, we should see some interaction with the dim light from distant objects, and astronomers have looked carefully without finding enough of those interactions to account for the missing mass if that mass consists of baryons. So the missing mass is unlikely to be baryonic matter.

Our missing mass must therefore consist of exotic particles. The particles cannot be baryons, they cannot interact with the electromagnetic field (in other words, they don’t emit or reflect light), and they must have mass. There are various candidate particles: axions, supersymmetric particles, and neutrinos. There is observational evidence that neutrinos can’t be a significant fraction of the missing mass. But as of today, we simply do not know what kinds of particles make up the bulk of the Dark Matter of the universe.

There is at least one other explanation for the missing mass. If our current understanding of gravity is slightly wrong at very large distances, we can account for the observed large-scale orbital dynamics with just the luminous matter. This theory, or really family of theories, since there are several versions of modified gravity, was first proposed in the early 1980s. However, if the luminous matter produces the bulk of the gravitational effects, the center of mass calculated from the luminous matter in a system must also be the center of mass inferred from the system’s dynamics. Observations of at least one pair of colliding galaxy clusters show that the center of mass of the luminous matter is not the same as the center of mass inferred from the gravitational measurements. It turns out to be not so easy to adjust the modified-gravity theories to account for these observations. So for now, most cosmologists and astronomers side with the Dark Matter explanation (exotic particles with mass and no electromagnetic interaction) to solve the missing mass problem.

And that’s what Dark Matter is and why scientists think it exists, even though no one has ever detected Dark Matter itself, aside from its gravitational effects.

Dark Energy Camera at Cerro Tololo

Dark Energy Camera at Cerro Tololo

The Dark Energy Camera will survey more than 300 million galaxies over the next five years. Scientists hope to learn more about Dark Energy, the hypothesized force that is speeding up the expansion of the Universe. The camera is mounted on a telescope in the silver dome.

Confidence in Science; more Dark Matter musings

In April of 2012, a team of astronomers at the European Southern Observatory, led by Christian Moni Bidin of the Universidad de Concepción in Chile, published a paper claiming that there was no gravitational evidence of Dark Matter within about 13,000 ly of the Sun (“In conclusion, the observations point to a lack of Galactic DM at the solar position, contrary to the expectations of all the current models of Galactic mass distribution”). The paper came with a rather confident Press Release. Now, I don’t mean to disparage the research. The idea was a very clever one: attempting to detect the gravitational effects of nearby Dark Matter by looking at the orbits of stars around the galactic center. But when a Press Release, and especially a confident Press Release, comes out at the same time a paper is made public, I tend to have misgivings about the actual strength of the research. Indeed, in this case, the ESO paper’s results depended on several assumptions, including one about the expected orbital velocities of stars above the galactic plane.

Within a few months of the ESO paper, Jo Bovy and Scott Tremaine from the Institute for Advanced Study published a paper questioning the assumption about the velocities of stars away from the galactic plane. The Bovy and Tremaine paper is generally agreed to be correct, and when the ESO paper’s analysis was redone with the orbital velocities corrected using a more data-driven method suggested by Bovy and Tremaine, the Dark Matter reappeared.

The moral of the story? Even really smart researchers make mistakes. The ESO paper was peer-reviewed and accepted for publication, but peer-review isn’t a guarantee of correctness. Extraordinary claims require extraordinary evidence, and it is always wise to be cautious when strong claims (Dark Matter exists! No, we’ve ruled it out near the Sun!) are made. Let a lot of smart people think over the analysis for a bit, even if the research is peer-reviewed.

Why AMS hasn’t (yet!) demonstrated the existence of dark matter

The recent publicity around AMS’s first observational results sparked quite a bit of speculation about what those observations mean for dark matter. AMS, the Alpha Magnetic Spectrometer, is a particle detector attached to the International Space Station, and it observes a wide variety of high-energy particles that, for historical reasons, are generally called cosmic rays. The experimentalists running AMS hope to discover new physics by studying these cosmic rays.

The first announced results from AMS came out a few days ago, and despite what you might read in the press, dark matter has not been detected. What has been detected is the largest sample of antimatter cosmic rays to date, in the form of positrons (the antiparticle of the electron). Since a particle/antiparticle collision results in the total annihilation of both particles (releasing pure energy), it would be easy to spot significant concentrations of antimatter in the universe, or at least the boundary between significant antimatter concentrations and ordinary matter. Just look for a blaze of gamma-ray photons! We don’t see this phenomenon, so antimatter is rare in the observable universe. But AMS has observed about 400,000 positrons (out of roughly 25 billion cosmic ray events) since it began taking data in 2011, by far the largest sample of antimatter collected to date. Since we see little to no concentration of antimatter in the universe, the antiparticles detected by AMS must have been created relatively near the Earth (the farther they travel, the higher the probability they will annihilate in a collision with matter). It turns out that we expect these positrons to have been created within our own galaxy, which is ‘near’ by astronomical standards, by various known physical processes. The earliest satellite observations of positrons indicated an excess over what was predicted from those known processes, and AMS has now confirmed that excess positron flux. That means either new physical processes are at work nearby, or something is wrong with the expectation that positrons do not survive the trip across the intergalactic medium. Either explanation would be new physics.
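That ‘blaze of gamma rays’ has a very specific signature. When a positron and an electron annihilate at rest, their mass-energy comes out as a pair of photons:

e⁺ + e⁻ → 2γ, each photon carrying E = mₑc² ≈ 511 keV

so astronomers can search for this characteristic gamma-ray line (and the even more energetic radiation from proton-antiproton annihilation) at any boundary between matter and antimatter regions. No such boundaries are seen.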

One explanation involving new physical processes involves dark matter. Dark matter consists of particles that do not interact with the electromagnetic field (hence the ‘dark’ part) but do interact with the gravitational field (the ‘matter’ part). Dark matter might also explain some other puzzling observations in astronomy, but if it is the correct explanation, there is a huge amount of it; in fact, the dark matter explanation requires that dark matter be the dominant form of matter in the Universe. In most dark matter theories, collisions between dark matter particles can produce ordinary particles, including antimatter. It is the possibility that dark matter collisions are producing the positron excess that has everyone so excited. But the best we can say at the moment is that AMS’s results are consistent with various dark matter theories that have been put forth, and that’s about it. The principal investigator, Dr. Sam Ting, has been very careful to avoid overstating the implications of these first observations. There are other possibilities besides dark matter which might explain the positron excess. The ‘shape’ of the positron excess when plotted against positron energy can be used to rule out some of the competing theories; in particular, a sharp drop-off near the top of the energy range measured by AMS would be consistent with some (but not all!) dark matter theories, and incompatible with the most likely non-dark-matter explanations. Only more observations, enough to be statistically meaningful, will enable scientists to distinguish with confidence between the various explanations. So there is a lot more work to be done.
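The reason the high-energy drop-off matters is kinematic. In the simplest picture (a sketch of the expectation, glossing over the details of the annihilation channels), a positron produced when two dark matter particles of mass m annihilate cannot carry away more energy than the annihilation releases per particle:

E_positron ≲ m·c²

so the positron fraction should rise and then cut off fairly sharply near the dark matter particle’s mass, while more conventional astrophysical sources, such as nearby pulsars, are generally expected to fall off more gradually.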