Friday 8 April 2016

Pill for long-term drug release

Engineers develop a pill for long-term drug release. New tablet attaches to the lining of the GI tract, resists being pulled away.


long-term drug release, pill, medicine, GI tract, gastrointestinal tract, antibiotics

Researchers have created a new type of dual-sided pill that attaches to the gastrointestinal tract. One side of the pill sticks to mucosal surfaces, while the other is omniphobic, meaning that it repels everything it encounters.
Illustration: Christine Daniloff/MIT


Scientia — Researchers from MIT and Brigham and Women’s Hospital have designed a new type of long-term drug release pill that, once swallowed, can attach to the lining of the gastrointestinal tract and slowly release its contents. The tablet is engineered so that one side adheres to tissue, while the other repels food and liquids that would otherwise pull it away from the attachment site.


Such extended-release pills could be used to reduce the dosage frequency of some drugs, the researchers say. For example, antibiotics that normally have to be taken two or three times a day could be given just once, making it easier for patients to stick to their dosing schedule.


“This could be adapted to many drugs. Any drug that is dosed frequently could be amenable to this kind of system,” says Giovanni Traverso, a research affiliate at MIT’s Koch Institute for Integrative Cancer Research, a gastroenterologist at Brigham and Women’s Hospital, and one of the senior authors of a paper describing the device in the April 6 issue of the journal Advanced Healthcare Materials.





Robert Langer, the David H. Koch Institute Professor and a member of the Koch Institute, is also a senior author of the paper. The paper’s lead author is Young-Ah Lucy Lee, a technical assistant at the Koch Institute.


Two faces


Over the past several decades, Langer’s lab has developed many types of materials that can be implanted in the body or attached to the skin for long-term drug release. To achieve similar, long-term drug release in the gastrointestinal tract, the researchers focused on a type of material known as mucoadhesives, which can stick to the mucosal linings of organs such as the stomach.


Scientists have previously explored using this kind of material for drug delivery to the GI tract, but it has proven difficult because food and liquid in the stomach become stuck to the tablet, pulling it away from the tissue before it can deliver its entire drug payload.


“The challenge with mucoadhesives is that the GI tract is a very rough and abrasive environment,” says Lee, a 2014 Wellesley College graduate who began this project as her senior thesis.


To overcome this challenge, the researchers decided to create a dual-sided device, also called a Janus device after the two-faced Roman god. One side sticks to mucosal surfaces, while the other is omniphobic, meaning that it repels everything it encounters.


For the mucoadhesive side, the researchers used a commercially available polymer known as Carbopol, which is often used industrially as a stabilizing or thickening agent. The omniphobic side consists of cellulose acetate that the researchers textured so that its surface would mimic that of a lotus leaf, which has micro and nanoscale protrusions that make it extremely hydrophobic. They then fluorinated and lubricated the surface, making it repel nearly any material.


The researchers used a pill presser to combine the polymers into two-sided tablets, which can be formed in many shapes and sizes. Drugs can be either embedded within the cellulose acetate layer or placed between the two layers.


Long-term attachment


Using intestinal tissue from pigs, the researchers tested three versions of the tablet — a dual-sided mucoadhesive tablet, a dual-sided omniphobic tablet, and the Janus version, with one mucoadhesive side and one omniphobic side.


To simulate the tumultuous environment of the GI tract, the researchers flowed a mix of food including liquids and small pieces of bread and rice along the tissue and then added the tablets. The dual-sided omniphobic tablet took less than 1 second to travel along the tissue, and the dual-sided mucoadhesive stuck to the tissue for only 7 seconds before being pulled off. The Janus version stayed attached for the length of the experiment, about 10 minutes.





Tejal Desai, a professor of bioengineering and therapeutic sciences at the University of California at San Francisco, says this approach could make it possible to deliver larger quantities of drugs through the GI tract.


“The ability to precisely engineer the adhesiveness of a particle opens up possibilities of designing particles to selectively adhere to specific regions of the GI tract, which in turn can increase the local or systemic concentrations of a particular drug,” says Desai, who was not involved in the work.


The researchers now plan to do further tests in animals to help them tune how long the tablets can stay attached, the rate at which drugs are released from the material, and the ability to target the material to specific sections of the GI tract.


In addition to delivering antibiotics, the two-sided material may help to simplify drug regimens for malaria or tuberculosis, among other diseases, Traverso says. The researchers may also further pursue the development of tablets with omniphobic coatings on both sides, which they believe could help patients who have trouble swallowing pills.


“There are certain medications that are known to get stuck, particularly in the esophagus. It causes this massive amount of inflammation because it gets stuck and it causes irritation,” Traverso says. “Texturing the surfaces really opens up a new way of thinking about controlling and tuning how these drug formulations travel.”


The research was funded by the Bill and Melinda Gates Foundation, the National Institutes of Health, and the Alexander von Humboldt Foundation.




– Credit and Resource –


Anne Trafton | MIT News Office





Monday 4 April 2016

First temperature map of a super-Earth

First temperature map of a super-Earth reveals a lava world. The super-Earth exoplanet lacks an atmosphere and is covered in rivers of magma.


super-Earth, lava world, exoplanet, atmosphere, lakes of boiling hot magma, MIT

This artist’s rendering shows the super-Earth 55 Cancri e in front of its parent star. The planet is essentially a heat-seeking fireball, orbiting extremely close to its star. It circles in just 18 hours, compared to Earth’s leisurely 365-day journey around the sun.
Image: ESA/Hubble, M. Kornmesser


Scientia — Astronomers from MIT, Cambridge University, and elsewhere have generated the first temperature map of a super-Earth exoplanet, revealing an inhospitable world covered in rivers and lakes of boiling hot magma.


Temperatures on the planet are so high that any atmosphere is likely to have been burned off or vaporized into space. The results are published today in the journal Nature.





The planet, named 55 Cancri e, resides in the constellation Cancer, at a relatively close 40 light years from Earth. It’s thought to have a rocky, rather than gaseous, composition, and at roughly twice the size of our planet, it is considered a super-Earth. But that is where all Earthly similarities end, as 55 Cancri e is essentially a heat-seeking fireball, orbiting extremely close to its star. It circles in just 18 hours, compared with Earth’s leisurely 365-day journey around the sun.


Because of its scorching orbit, scientists have thought 55 Cancri e must be incredibly hot. But the new temperature map shows that even the planet’s extreme proximity to its star can’t explain its blistering heat; in fact, the planet may be boiling from the inside out.


Based on their calculations, the scientists estimate that the planet’s day side — the side permanently facing its star — reaches an unbearable 3,000 kelvins, or 4,940 degrees Fahrenheit, while its night side — the side that never sees the star — is a more moderate 1,400 K, or 2,000 F.


This huge temperature difference suggests that the planet has little to no atmosphere, because having one would generate winds to distribute heat more evenly. The night side may consist of lava that’s cooled and hardened, much like the lava flows found in Hawaii. The day side, in contrast, is a constantly boiling cauldron of magma-filled rivers and lakes.


“The planet’s day side gets to ridiculously high temperatures, which is completely unexpected,” says co-author Julien de Wit, a postdoc in MIT’s Department of Earth, Atmospheric and Planetary Sciences. “It is so hot that we need an unknown, additional source to make it that hot.”


One possible source, de Wit says, may be a process called tidal dissipation: Since the planet is so close to its star, the star’s gravitational pull may be strong enough to put a constant squeeze on the planet, causing 55 Cancri e to inwardly roast.


“It would basically boil the planet from the inside out,” de Wit says.



“A big bowl of magma”


The scientists, led by former MIT postdoc Brice-Olivier Demory, now a postdoc at Cambridge University, generated the temperature map from observations of the planet made by NASA’s Spitzer Space Telescope. Over 80 hours, Spitzer recorded the planet’s light as it circled its star several times, giving the researchers measurements of light from each face of the planet in turn. From this, the team reconstructed a brightness map of the entire planet.


The researchers then used known size measurements of the planet and star, as well as the star’s temperature, to convert the planet’s brightness measurements to temperature.
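

That conversion amounts to comparing the planet’s infrared glow with its star’s. Below is a minimal sketch of the idea, assuming blackbody emission at Spitzer’s 4.5-micron channel and rough literature values for the radii and stellar temperature; the eclipse depths fed in are hypothetical placeholders, not the measured data, and this is not the authors’ actual pipeline.

```python
# Back-of-envelope sketch (not the authors' pipeline): turn an observed
# planet-to-star flux ratio into a brightness temperature, assuming both
# bodies radiate as blackbodies at Spitzer's 4.5-micron channel.
# The radii and stellar temperature below are rough literature values for
# 55 Cancri e and its star; the eclipse depths are hypothetical placeholders.

import math

h = 6.626e-34   # Planck constant, J s
c = 2.998e8     # speed of light, m/s
kB = 1.381e-23  # Boltzmann constant, J/K

def planck(lam, T):
    """Blackbody spectral radiance B(lambda, T)."""
    return (2 * h * c**2 / lam**5) / math.expm1(h * c / (lam * kB * T))

def brightness_temperature(flux_ratio, rp_over_rs, lam, T_star):
    """Invert F_p/F_s = (R_p/R_s)^2 * B(lam, T_p)/B(lam, T_s) for T_p."""
    B_planet = flux_ratio / rp_over_rs**2 * planck(lam, T_star)
    return (h * c / (lam * kB)) / math.log(1 + 2 * h * c**2 / (lam**5 * B_planet))

R_EARTH, R_SUN = 6.371e6, 6.957e8               # metres
rp_over_rs = (2.0 * R_EARTH) / (0.95 * R_SUN)   # assumed planet/star radius ratio
T_star = 5200.0                                 # assumed stellar temperature, K
lam = 4.5e-6                                    # Spitzer IRAC 4.5-micron channel

for depth_ppm in (100, 150, 200):               # hypothetical eclipse depths
    T_p = brightness_temperature(depth_ppm * 1e-6, rp_over_rs, lam, T_star)
    print(f"{depth_ppm} ppm  ->  ~{T_p:.0f} K")
```

With numbers in this range, the recovered brightness temperatures land in the few-thousand-kelvin regime the team reports.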


“This is the first temperature map of a rocky planet,” de Wit notes. “We’ve been able to map really big planets, because they have more of a signal. But because this one is relatively close to us, and really warm, we’re able to do this for the first time for such a small planet. And we find that this is basically a big bowl of magma — essentially, a lava world.”





The temperature map suggests that 55 Cancri e lacks an atmosphere and that it may be heating up by some mechanism other than its star’s radiation. The map also shows a hotspot on the planet — an area of intense heat — that is not directly underneath the blazing star but curiously off-center, shifted about 42 degrees to the east.


“Either by chance or because of some weird physical processes, there is a really large pull of magma to the east,” de Wit says. “Such large-scale lava flow is unusual, but it’s possible.”


“It’s nothing short of remarkable that we have actual direct evidence of possible surface features on an extrasolar planet,” says Gregory Laughlin, professor of astronomy and astrophysics at the University of California at Santa Cruz. “55 Cancri e is an incredibly alien world, and so it’s fascinating to speculate what these first tantalizing hints are telling us about what this planet looks like up close.”


Laughlin, who did not contribute to the study, adds that 55 Cancri e represents a class of super-Earths that are relatively common in the galaxy, though “compared to our Solar System’s roster of planets, it’s utterly bizarre.”


Nailing down science

Going forward, the team hopes to identify the source of the planet’s extreme heat, using more observations from Spitzer as well as from NASA’s James Webb Space Telescope, which is scheduled to launch in 2018. The observations that were used to generate 55 Cancri e’s temperature map were made possible thanks to enhancements that increased Spitzer’s sensitivity over the last few years.


“We were finding a signal that is so tiny compared to all the previous signals we’re used to observing from Spitzer,” de Wit says. “We were really pushing the boundaries and were able to nail down a very precise piece of science.”


In the end, de Wit says that extreme exoplanets like 55 Cancri e make our own planetary system seem increasingly unusual by comparison.


“Our solar system seems to be quite unique, when you look at all the crazy stuff that’s been discovered over the last 20 years,” de Wit says. “It’s fascinating, the more you think about it; it is a pretty rare, comfortable place here.”


This work is based on observations made with the Spitzer Space Telescope, which is operated by the Jet Propulsion Laboratory (JPL) and Caltech, under contract to NASA. Support for this work was provided by JPL/Caltech.




– Credit and Resource –


Jennifer Chu | MIT News Office





Hybrid system cuts coal-plant emissions

Hybrid system could cut coal-plant emissions in half. Combining gasification with fuel-cell technology could boost efficiency of coal-powered plants.


greenhouse gas emissions, coal, generate electricity, electricity efficiency, hybrid system, coal-plant, fuel-cell technology, coal gasification

This illustration depicts a possible configuration for the combined system proposed by MIT researchers. At the bottom, steam (pink arrows) passes through pulverized coal, releasing gaseous fuel (red arrows) made up of hydrogen and carbon monoxide. This fuel goes into a solid oxide fuel cell (disks near top), where it reacts with oxygen from the air (blue arrows) to produce electricity (loop at right).
Illustration: Jeffrey Hanna


Scientia — Most of the world’s nations have agreed to make substantial reductions in their greenhouse gas emissions, but achieving these goals is still a considerable technological, economic, and political challenge. The International Energy Agency has projected that, even with the new agreements in place, global coal-fired power generation will increase over the next few decades. Finding a cleaner way of using that coal could be a significant step toward achieving carbon-emissions reductions while meeting the needs of a growing and increasingly industrialized world population.





Now, researchers at MIT have come up with a plan that could contribute to that effort by making it possible to generate electricity from coal with much greater efficiency — possibly reaching as much as twice the fuel-to-electricity efficiency of today’s conventional coal plants. This would mean, all things being equal, a 50 percent reduction in carbon dioxide emissions for a given amount of power produced.
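

The arithmetic behind that claim is simple: for a fixed electrical output, the amount of coal burned, and therefore the carbon dioxide released, scales inversely with the fuel-to-electricity efficiency. A quick sketch, using an assumed emissions factor for coal rather than any figure from the paper:

```python
# Illustrative check (not from the paper): for a fixed electrical output, CO2
# emitted scales as 1/efficiency, so doubling efficiency halves emissions.
# The emissions factor is a rough assumed value for coal combustion.

CO2_PER_MWH_THERMAL = 0.34  # tonnes CO2 per MWh of heat from coal (assumed)

def co2_per_mwh_electric(efficiency):
    """Tonnes of CO2 per MWh of electricity at a given fuel-to-electricity efficiency."""
    return CO2_PER_MWH_THERMAL / efficiency

conventional = co2_per_mwh_electric(0.30)  # today's typical coal plant
hybrid = co2_per_mwh_electric(0.60)        # upper end of the proposed system

print(f"conventional: {conventional:.2f} t CO2/MWh")
print(f"hybrid:       {hybrid:.2f} t CO2/MWh")
print(f"reduction:    {100 * (1 - hybrid / conventional):.0f}%")  # prints 50%
```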


The concept, proposed by MIT doctoral student Katherine Ong and Ronald C. Crane (1972) Professor Ahmed Ghoniem, is described in their paper in the Journal of Power Sources. The key is combining into a single system two well-known technologies: coal gasification and fuel cells.


Coal gasification is a way of extracting burnable gaseous fuel from pulverized coal, rather than burning the coal itself. The technique is widely used in chemical processing plants as a way of producing hydrogen gas. Fuel cells produce electricity from a gaseous fuel by passing it through a battery-like system where the fuel reacts electrochemically with oxygen from the air.


The attraction of combining these two systems, Ong explains, is that both processes operate at similarly high temperatures of 800 degrees Celsius or more. Combining them in a single plant would thus allow the two components to exchange heat with minimal energy losses. In fact, the fuel cell would generate enough heat to sustain the gasification part of the process, she says, eliminating the need for a separate heating system, which is usually provided by burning a portion of the coal.


Coal gasification, by itself, works at a lower temperature than combustion and “is more efficient than burning,” Ong says. First, the coal is pulverized to a powder, which is then heated in a flow of hot steam, somewhat like popcorn kernels heated in an air-popper. The heat leads to chemical reactions that release gases from the coal particles — mainly carbon monoxide and hydrogen, both of which can produce electricity in a solid oxide fuel cell.
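

To see why the fuel cell’s waste heat can plausibly sustain the gasifier, a back-of-envelope balance helps. The figures below are standard thermochemistry values and an assumed fuel-cell electrical efficiency, not numbers from the paper: steam gasification of carbon (C + H2O -> CO + H2) absorbs about 131 kJ per mole of carbon, while a solid oxide fuel cell oxidizing the resulting CO and H2 rejects a large fraction of the syngas heating value as heat.

```python
# Back-of-envelope heat balance with assumed textbook values (not the paper's
# model): does solid oxide fuel cell waste heat cover the heat absorbed by
# steam gasification of the coal's carbon?

DH_GASIFICATION = 131.0   # kJ absorbed per mol C:  C + H2O -> CO + H2
LHV_CO = 283.0            # kJ released per mol CO oxidized to CO2
LHV_H2 = 242.0            # kJ released per mol H2 oxidized to H2O (lower heating value)
FC_ELECTRICAL_EFF = 0.60  # assumed fuel-cell electrical efficiency

syngas_energy = LHV_CO + LHV_H2              # chemical energy per mol of carbon gasified
waste_heat = (1 - FC_ELECTRICAL_EFF) * syngas_energy

print(f"heat needed by gasifier: {DH_GASIFICATION:.0f} kJ per mol C")
print(f"fuel-cell waste heat:    {waste_heat:.0f} kJ per mol C")
print(f"surplus:                 {waste_heat - DH_GASIFICATION:.0f} kJ per mol C")
```

With these assumptions the fuel cell rejects roughly 210 kJ of heat per mole of carbon gasified, comfortably more than the 131 kJ the gasifier absorbs, which is the sense in which the combined plant can do without a separate heating system.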


In the combined system, these gases would then be piped from the gasifier to a separate fuel cell stack, or ultimately, the fuel cell system could be installed in the same chamber as the gasifier so that the hot gas flows straight into the cell. In the fuel cell, a membrane separates the carbon monoxide and hydrogen from the oxygen, promoting an electrochemical reaction that generates electricity without burning the fuel.


Because there is no burning involved, the system produces less ash and other air pollutants than would be generated by combustion. It does produce carbon dioxide, but this is in a pure, uncontaminated stream and not mixed with air as in a conventional coal-burning plant. That would make it much easier to carry out carbon capture and sequestration (CCS) — that is, capturing the output gas and burying it underground or disposing of it some other way — to eliminate or drastically reduce the greenhouse gas emissions. In conventional plants, nitrogen from the air must be removed from the stream of gas in order to carry out CCS.


One of the big questions answered by this new research, which used simulations rather than lab experiments, was whether the process would work more efficiently using steam or carbon dioxide to react with the particles of coal. Both methods have been widely used, but most previous attempts to study gasification in combination with fuel cells chose the carbon dioxide option. This new study demonstrates that the system produces two to three times as much power output when steam is used instead.





Conventional coal-burning power plants typically have very low efficiency; only 30 percent of the energy contained in the fuel is actually converted to electricity. In comparison, the proposed combined gasification and fuel cell system could achieve efficiencies as high as 55 to 60 percent, Ong says, according to the simulations.


The next step would be to build a small, pilot-scale plant to measure the performance of the hybrid system in real-world conditions, Ong says. Because the individual component technologies are all well developed, a full-scale operational system could plausibly be built within a few years, she says. “This system requires no new technologies” that need more time to develop, she says. “It’s just a matter of coupling these existing technologies together well.”


The system would be more expensive than existing plants, she says, but the initial capital investment could be paid off within several years due to the system’s state-of-the-art efficiency. And given the importance of reducing emissions, that initial capital expense may be easy to justify, especially if new fees are attached to the carbon dioxide emitted by fossil fuels.


“If we’re going to cut down on carbon dioxide emissions in the near term, the only way to realistically do that is to increase the efficiency of our fossil fuel plants,” she says.


“The exploration of unconventional hybrid cycles” undertaken by Ong and Ghoniem “represents the future of clean energy production in this country,” says David Tucker, a research scientist at the U.S. Department of Energy’s National Energy Technology Laboratory in West Virginia, who was not involved in this research. “Many technologies that may seem unfeasible at first glance hold the greatest promise as solutions to difficult problems. The first step is always to evaluate the potential of these cycles,” as the MIT team has done, he says.




– Credit and Resource –


David L. Chandler | MIT News Office





Metal alloys that stand up to hydrogen

How to make metal alloys that stand up to hydrogen. New approach to preventing embrittlement could be useful in nuclear reactors.


hydrogen, nuclear reactors, metal alloys

An artist’s rendering of nuclear fuel rods in front of a colorful computational valley predicted for alloying compositions.
Image: Mostafa Youssef and Lixin Sun


Scientia — High-tech metal alloys are widely used in important materials such as the cladding that protects the fuel inside a nuclear reactor. But even the best alloys degrade over time, victims of a reactor’s high temperatures, radiation, and hydrogen-rich environment. Now, a team of MIT researchers has found a way of greatly reducing the damaging effects these metals suffer from exposure to hydrogen.


The team’s analysis focused on zirconium alloys, which are widely used in the nuclear industry, but the basic principles they found could apply to many metallic alloys used in other energy systems and infrastructure applications, the researchers say. The findings appear in the journal Physical Review Applied, in a paper by MIT Associate Professor Bilge Yildiz, postdoc Mostafa Youssef, and graduate student Ming Yang.





Hydrogen, which is released when water molecules from a reactor’s coolant break apart, can enter the metal and react with it. This leads to a reduction in the metal’s ductility, or its ability to sustain a mechanical load before fracturing. That in turn can lead to premature cracking and failure. In nuclear power plants, “the mechanical integrity of that cladding is extremely important,” Yildiz says, so finding ways to improve its longevity is a high priority.


But it turns out that the initial entry of the hydrogen atoms into the metal depends crucially on the characteristics of a layer that forms on the metal’s surface.


A coating of zirconium oxide naturally forms on the surface of the zirconium in high-temperature water, and it acts as a kind of protective barrier. If carefully engineered, this layer of oxide could inhibit hydrogen from getting into the crystal structure of the metal. Or, under other conditions, it could emit the hydrogen in gas form.


While researchers have been studying hydrogen embrittlement for decades, Yildiz says, “almost all of the work has been on what happens to hydrogen inside the metal: What are the consequences, where does it go, how does it lead to embrittlement? And we learned a lot from those studies.” But there had been very little work on how hydrogen gets inside in the first place, she says. How hydrogen can enter through this surface oxide layer, or how it can be discharged as a gas from that layer, has not been quantified.


“If we know how it enters or how it can be discharged or ejected from the surface, that gives us the ability to predict surface modifications that can reduce the rate of entry,” Yildiz says. Her team has found that it’s possible to do just that, improving the barrier’s ability to block incoming hydrogen, potentially by as much as a thousandfold.


The hydrogen first has to dissolve in the oxide layer before penetrating into the bulk of the metal beneath. But its dissolution can be controlled by doping that layer — that is, by introducing atoms of another element or elements into it. The team found that hydrogen solubility in the oxide follows a valley-shaped curve that depends on the doping element’s ability to introduce electrons into the oxide layer.
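

One schematic way to picture that valley (a toy model for illustration only, not the first-principles defect calculation reported in the paper): hydrogen can be taken up by the oxide in a donor-like (proton) or acceptor-like (hydride) form, and adding electrons to the oxide makes one form costlier while making the other cheaper, so total uptake is lowest at an intermediate doping level. All numbers in the sketch are invented.

```python
# Toy illustration only (not the paper's first-principles defect calculation):
# hydrogen enters the oxide in a donor-like (proton) or acceptor-like (hydride)
# form.  As a dopant adds electrons to the oxide, the energy cost of the proton
# rises while that of the hydride falls, so total uptake traces a valley with a
# minimum at an intermediate doping level.  All numbers are invented.

import math

kT = 0.09  # eV, roughly the 800-degree-C range of a reactor

def relative_solubility(electron_donation):
    """Schematic hydrogen uptake vs. an electron-donation parameter (eV)."""
    e_proton = 0.5 + electron_donation    # proton uptake costs more with added electrons
    e_hydride = 1.5 - electron_donation   # hydride uptake costs less with added electrons
    return math.exp(-e_proton / kT) + math.exp(-e_hydride / kT)

for x in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"electron donation {x:.2f} eV -> relative uptake {relative_solubility(x):.1e}")
```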


“There is a certain type of doping element that minimizes hydrogen’s ability to penetrate, whereas other doping elements can introduce a maximum amount of electrons in the oxide, and facilitate the ejection of hydrogen gas right at the surface of the oxide,” says Youssef. So being able to predict the dopants that belong to each type is the essential trick to making an effective barrier.


The team’s findings suggest two potential strategies, one aimed at minimizing hydrogen penetration and one at maximizing the ejection of hydrogen atoms that do get in.


The blocking strategy is “to target the bottom of the valley” by incorporating the right amount of an element, such as chromium, that produces this effect. The other strategy is based on different elements, including niobium, that propel hydrogen out of the oxide surface and protect the underlying zirconium alloy.


The doping could be accomplished by incorporating a small amount of the dopant metal into the initial zirconium alloy matrix, so that this in turn gets incorporated into the oxidation layer that naturally forms on the metal, the team says.





The team stresses that what they found is likely to be a general approach that can be applied to all kinds of alloys that form oxidation layers on their surfaces, as most do. Their approach could lead to improvements in longevity for alloys used in fossil fuel plants, bridges, pipelines, fuel cells, and many other applications.


“Any place you have metals exposed to high temperatures and water,” Yildiz says — for example on equipment used in oil and gas extraction — is a potential situation where this work might be applicable.


“The behavior of hydrogen has been cited by the commercial nuclear power community as perhaps the greatest challenge in nuclear reactor fuel performance under normal operating conditions, and more recently, as a safety issue under accident conditions,” says Gary Was, a professor of sustainable energy, environmental, and earth systems engineering at the University of Michigan, who was not involved in this work. The approach to the problem taken by these researchers, he says, “is unique and intriguing but as important, it is the first attempt to provide a physics-based understanding of the behavior of hydrogen in zirconium alloys.”


While this paper doesn’t yet answer all questions about this material, he says, “the approach taken by Yildiz and her students raises discussion of this topic to a higher level, and will undoubtedly stimulate more research that is grounded in physics rather than empiricism.”


The work was supported by the Consortium for Advanced Simulation of Light Water Reactors, funded by the U.S. Department of Energy, and computational support was provided by the U.S. National Science Foundation.




– Credit and Resource –


David L. Chandler | MIT News Office




New Aspects of Pluto and its Moons

Science Papers Reveal New Aspects of Pluto and its Moons


Pluto, NASA, solar system, planet, moons, New Horizons spacecraft

This image of haze layers above Pluto’s limb was taken by the Ralph/Multispectral Visible Imaging Camera (MVIC) on NASA’s New Horizons spacecraft. About 20 haze layers are seen; the layers have been found to typically extend horizontally over hundreds of kilometers, but are not strictly parallel to the surface. For example, scientists note a haze layer about 3 miles (5 kilometers) above the surface (lower left area of the image), which descends to the surface at the right.
Credits: NASA/JHUAPL/SwRI/Gladstone et al./Science (2016)


Scientia — A year ago, Pluto was just a bright speck in the cameras of NASA’s approaching New Horizons spacecraft, not much different than its appearances in telescopes since Clyde Tombaugh discovered the then-ninth planet in 1930.

But this week, in the journal Science, New Horizons scientists have authored the first comprehensive set of papers describing results from last summer’s Pluto system flyby. “These five detailed papers completely transform our view of Pluto – revealing the former ‘astronomer’s planet’ to be a real world with diverse and active geology, exotic surface chemistry, a complex atmosphere, puzzling interaction with the sun and an intriguing system of small moons,” said Alan Stern, New Horizons principal investigator from the Southwest Research Institute (SwRI), Boulder, Colorado.






Above are New Horizons’ views of the informally named Sputnik Planum on Pluto (top) and the informally named Vulcan Planum on Charon (bottom). The Sputnik Planum strip measures 228 miles (367 kilometers) long, and the Vulcan Planum strip measures 194 miles (312 kilometers) long. Illumination is from the left. The bright, nitrogen-ice plains are defined by a network of crisscrossing troughs. This observation was obtained by the Ralph/Multispectral Visible Imaging Camera (MVIC) at a resolution of 1,050 feet (320 meters) per pixel. The Vulcan Planum view in the bottom panel includes the “moated mountain” Clarke Mons just above the center of the image. The water ice-rich plains display a range of surface textures, from smooth and grooved at left, to pitted and hummocky at right. This observation was obtained by the Long Range Reconnaissance Imager (LORRI) at a resolution of 525 feet (160 meters) per pixel.
Credits: NASA/JHUAPL/SwRI


After a 9.5-year, 3-billion-mile journey – launching faster and traveling farther than any spacecraft to reach its primary target – New Horizons zipped by Pluto on July 14, 2015. New Horizons’ seven science instruments collected about 50 gigabits of data on the spacecraft’s digital recorders, most of it coming over nine busy days surrounding the encounter.


The first close-up pictures revealed a large heart-shaped feature carved into Pluto’s surface, telling scientists that this “new” type of planetary world – the largest, brightest and first-explored in the mysterious, distant “third zone” of our solar system known as the Kuiper Belt – would be even more interesting and puzzling than models predicted.


The newly published Science papers bear that out; a list of the top findings follows this article.


“Observing Pluto and Charon up close has caused us to completely reassess thinking on what sort of geological activity can be sustained on isolated planetary bodies in this distant region of the solar system, worlds that formerly had been thought to be relics little changed since the Kuiper Belt’s formation,” said Jeff Moore, lead author of the geology paper from NASA’s Ames Research Center, Moffett Field, California.


Scientists studying Pluto’s composition say the diversity of its landscape stems from eons of interaction between highly volatile and mobile methane, nitrogen and carbon monoxide ices with inert and sturdy water ice. “We see variations in the distribution of Pluto’s volatile ices that point to fascinating cycles of evaporation and condensation,” said Will Grundy of the Lowell Observatory, Flagstaff, Arizona, lead author of the composition paper. “These cycles are a lot richer than those on Earth, where there’s really only one material that condenses and evaporates – water. On Pluto, there are at least three materials, and while they interact in ways we don’t yet fully understand, we definitely see their effects all across Pluto’s surface.”






This enhanced color view of Pluto’s surface diversity was created by merging Ralph/Multispectral Visible Imaging Camera (MVIC) color imagery (650 meters or 2,132 feet per pixel) with Long Range Reconnaissance Imager panchromatic imagery (230 meters or 755 feet per pixel). At lower right, ancient, heavily cratered terrain is coated with dark, reddish tholins. At upper right, volatile ices filling the informally named Sputnik Planum have modified the surface, creating a chaos-like array of blocky mountains. Volatile ice also occupies a few nearby deep craters, and in some areas the volatile ice is pocked with arrays of small sublimation pits. At left, and across the bottom of the scene, gray-white methane ice deposits modify tectonic ridges, the rims of craters, and north-facing slopes. The scene in this image is 260 miles (420 kilometers) wide and 140 miles (225 kilometers) from top to bottom; north is to the upper left.
Credits: NASA/JHUAPL/SwRI


Above the surface, scientists discovered Pluto’s atmosphere contains layered hazes, and is both cooler and more compact than expected. This affects how Pluto’s upper atmosphere is lost to space, and how it interacts with the stream of charged particles from the sun known as the solar wind. “We’ve discovered that pre-New Horizons estimates wildly overestimated the loss of material from Pluto’s atmosphere,” said Fran Bagenal, from the University of Colorado, Boulder, and lead author of the particles and plasma paper. “The thought was that Pluto’s atmosphere was escaping like a comet, but it is actually escaping at a rate much more like Earth’s atmosphere.”


SwRI’s Randy Gladstone of San Antonio is the lead author of the Science paper on atmospheric findings. He added, “We’ve also discovered that methane, rather than nitrogen, is Pluto’s primary escaping gas. This is pretty surprising, since near Pluto’s surface the atmosphere is more than 99 percent nitrogen.”


Scientists also are analyzing the first close-up images of Pluto’s small moons—Styx, Nix, Kerberos and Hydra. Discovered between 2005 and 2012, the four moons range in diameter from about 25 miles (40 kilometers) for Nix and Hydra to about six miles (10 kilometers) for Styx and Kerberos. Mission scientists further observed that the small satellites have highly anomalous rotation rates and uniformly unusual pole orientations, as well as icy surfaces with brightness and colors distinctly different from those of Pluto and Charon.


They’ve found evidence that some of the moons resulted from mergers of even smaller bodies, and that their surface ages date back at least 4 billion years. “These latter two results reinforce the hypothesis that the small moons formed in the aftermath of a collision that produced the Pluto-Charon binary system,” said Hal Weaver, New Horizons project scientist from the Johns Hopkins University Applied Physics Laboratory in Laurel, Maryland, and lead author of the Science paper on Pluto’s small moons.


About half of New Horizons’ flyby data has now been transmitted home – from distances where radio signals at light speed need nearly five hours to reach Earth – with all of it expected back by the end of 2016.


“This is why we explore,” said Curt Niebur, New Horizons program scientist at NASA Headquarters in Washington. “The many discoveries from New Horizons represent the best of humankind and inspire us to continue the journey of exploration to the solar system and beyond.”




– Credit and Resource –


NASA





Top New Horizons Findings Reported in Science

New Horizons, Pluto, Moons, solar system, planets

This high-resolution image captured by NASA’s New Horizons spacecraft combines blue, red and infrared images taken by the Ralph/Multispectral Visual Imaging Camera (MVIC). The bright expanse is the western lobe of the “heart,” informally called Sputnik Planum, which has been found to be rich in nitrogen, carbon monoxide and methane ices.
Credits: NASA/JHUAPL/SwRI





  1. The age-dating of Pluto’s surface through crater counts has revealed that Pluto has been geologically active throughout the past 4 billion years. Further, the surface of Pluto’s informally named Sputnik Planum, a massive ice plain larger than Texas, is devoid of any detectable craters and estimated to be geologically young – no more than 10 million years old.

  2. Pluto’s moon Charon has been discovered to have an ancient surface. As an example, the great equatorial expanse of smooth plains on Charon informally named Vulcan Planum (home of the “moated mountains” informally named Kubrick and Clarke Mons) is likely a vast cryovolcanic flow or flows that erupted onto Charon’s surface about 4 billion years ago. These flows are likely related to the freezing of an internal ocean that globally ruptured Charon’s crust.

  3. The distribution of compositional units on Pluto’s surface – from nitrogen-rich, to methane-rich, to water-rich – has been found to be surprisingly complex, creating puzzles for understanding Pluto’s climate and geologic history. The variations in surface composition on Pluto are unprecedented elsewhere in the outer solar system.

  4. Pluto’s upper atmospheric temperature has been found to be much colder (by about 70 degrees Fahrenheit) than had been thought from Earth-based studies, with important implications for its atmospheric escape rate. Why the atmosphere is colder is a mystery.

  5. Composition profiles for numerous important species in Pluto’s atmosphere (including molecular nitrogen, methane, acetylene, ethylene and ethane) have been measured as a function of altitude for the first time.

  6. Also for the first time, a plausible mechanism for forming Pluto’s atmospheric haze layers has been found. This mechanism involves the concentration of haze particles by atmospheric buoyancy waves (called “gravity waves” by atmospheric scientists), created by winds blowing over Pluto’s mountainous topography.

  7. Before the flyby, the presence of Pluto’s four small moons raised concerns about debris hazards in the system. But the Venetia Burney Student Dust Counter counted only a single dust particle within five days of the flyby. This is similar to the density of dust particles in free space in the outer solar system — about 6 particles per cubic mile — showing that the region around Pluto is, in fact, not filled with debris.

  8. New Horizons’ charged-particle instruments revealed that the interaction region between the solar wind and Pluto’s atmosphere is confined on the dayside of Pluto to within 6 Pluto radii, about 4,500 miles (7,000 kilometers). This is much smaller than expected before the flyby, and is likely due to the reduced atmospheric escape rate found from modeling of ultraviolet atmospheric occultation data.

  9. The high albedos (reflectiveness) of Pluto’s small satellites – about 50 to 80 percent – are entirely different from the much lower albedos of the small bodies in the general Kuiper Belt population, which range from about 5 to 20 percent. This difference lends further support to the idea that these satellites were not captured from the general Kuiper Belt population, but instead formed by agglomeration in a disk of material produced in the aftermath of the giant collision that created the entire Pluto satellite system.



– Credit and Resource –


NASA





Programming language for living cells

A programming language for living cells. New programming language lets researchers design novel biological circuits.


Scientia — MIT biological engineers have created a programming language that allows them to rapidly design complex, DNA-encoded circuits that give new functions to living cells.


Using this language, anyone can write a program for the function they want, such as detecting and responding to certain environmental conditions. They can then generate a DNA sequence that will achieve it.


“It is literally a programming language for bacteria,” says Christopher Voigt, an MIT professor of biological engineering. “You use a text-based language, just like you’re programming a computer. Then you take that text and you compile it and it turns it into a DNA sequence that you put into the cell, and the circuit runs inside the cell.”





Voigt and colleagues at Boston University and the National Institute of Standards and Technology have used this language, which they describe in the April 1 issue of Science, to build circuits that can detect up to three inputs and respond in different ways. Future applications for this kind of programming include designing bacterial cells that can produce a cancer drug when they detect a tumor, or creating yeast cells that can halt their own fermentation process if too many toxic byproducts build up.


The researchers plan to make the user design interface available on the Web.


No experience needed

Over the past 15 years, biologists and engineers have designed many genetic parts, such as sensors, memory switches, and biological clocks, that can be combined to modify existing cell functions and add new ones.


However, designing each circuit is a laborious process that requires great expertise and often a lot of trial and error. “You have to have this really intimate knowledge of how those pieces are going to work and how they’re going to come together,” Voigt says.


Users of the new programming language, however, need no special knowledge of genetic engineering.


“You could be completely naive as to how any of it works. That’s what’s really different about this,” Voigt says. “You could be a student in high school and go onto the Web-based server and type out the program you want, and it spits back the DNA sequence.”


The language is based on Verilog, which is commonly used to program computer chips. To create a version of the language that would work for cells, the researchers designed computing elements such as logic gates and sensors that can be encoded in a bacterial cell’s DNA. The sensors can detect different compounds, such as oxygen or glucose, as well as light, temperature, acidity, and other environmental conditions. Users can also add their own sensors. “It’s very customizable,” Voigt says.
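

To give a flavor of the kind of logic such a program specifies, here is an illustrative Python emulation rather than Cello’s actual Verilog syntax or gate library; the sensor names and the target behavior are invented for this example. It composes a three-input decision entirely from NOT and NOR operations, the sort of inverting logic that repressor-based genetic gates implement.

```python
# Illustrative emulation, not Cello's Verilog syntax or its actual gate library.
# Sensor names and the desired behavior ("respond only when A and B are present
# and C is absent") are invented.  The circuit is composed purely of NOT and NOR
# operations, the inverting logic that repressor-based genetic gates implement.

def NOT(a: bool) -> bool:
    return not a

def NOR(a: bool, b: bool) -> bool:
    return not (a or b)

def circuit(sensor_a: bool, sensor_b: bool, sensor_c: bool) -> bool:
    """Equivalent to (A AND B) AND (NOT C), built from NOT/NOR gates."""
    a_and_b = NOR(NOT(sensor_a), NOT(sensor_b))  # A AND B == NOR(NOT A, NOT B)
    return NOR(NOT(a_and_b), sensor_c)           # X AND NOT C == NOR(NOT X, C)

# Truth table: the output gene would be switched on only in the single state
# where sensors A and B are triggered and sensor C is not.
for a in (False, True):
    for b in (False, True):
        for c in (False, True):
            print(a, b, c, "->", circuit(a, b, c))
```

In the real workflow, of course, the compiler’s output is not a truth table but a DNA sequence whose regulatory parts carry out the same logic inside the cell.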


The biggest challenge, he says, was designing the 14 logic gates used in the circuits so that they wouldn’t interfere with each other once placed in the complex environment of a living cell.





In the current version of the programming language, these genetic parts are optimized for E. coli, but the researchers are working on expanding the language for other strains of bacteria, including Bacteroides, commonly found in the human gut, and Pseudomonas, which often lives in plant roots, as well as the yeast Saccharomyces cerevisiae. This would allow users to write a single program and then compile it for different organisms to get the right DNA sequence for each one.


Biological circuits

Using this language, the researchers programmed 60 circuits with different functions, and 45 of them worked correctly the first time they were tested. Many of the circuits were designed to measure one or more environmental conditions, such as oxygen level or glucose concentration, and respond accordingly. Another circuit was designed to rank three different inputs and then respond based on the priority of each one.


One of the new circuits is the largest biological circuit ever built, containing seven logic gates and about 12,000 base pairs of DNA.


Another advantage of this technique is its speed. Until now, “it would take years to build these types of circuits. Now you just hit the button and immediately get a DNA sequence to test,” Voigt says.


His team plans to work on several different applications using this approach: bacteria that can be swallowed to aid in digestion of lactose; bacteria that can live on plant roots and produce insecticide if they sense the plant is under attack; and yeast that can be engineered to shut off when they are producing too many toxic byproducts in a fermentation reactor.


The lead author of the Science paper is MIT graduate student Alec Nielsen. Other authors are former MIT postdoc Bryan Der, MIT postdoc Jonghyeon Shin, Boston University graduate student Prashant Vaidyanathan, Boston University associate professor Douglas Densmore, and National Institute of Standards and Technology researchers Vanya Paralanov, Elizabeth Strychalski, and David Ross.




– Credit and Resource –


Anne Trafton | MIT News Office



