Saturday 28 May 2016

Link Between Primordial Black Holes and Dark Matter

Scientist suggests possible link between primordial black holes and dark matter



Left: This image from NASA’s Spitzer Space Telescope shows an infrared view of a sky area in the constellation Ursa Major. Right: After masking out all known stars, galaxies and artifacts and enhancing what’s left, an irregular background glow appears. This is the cosmic infrared background (CIB); lighter colors indicate brighter areas. The CIB glow is more irregular than can be explained by distant unresolved galaxies, and this excess structure is thought to be light emitted when the universe was less than a billion years old. Scientists say it likely originated from the first luminous objects to form in the universe, which includes both the first stars and black holes. Credit: NASA/JPL-Caltech/A. Kashlinsky (Goddard)


Scientia — Dark matter is a mysterious substance composing most of the material universe, now widely thought to be some form of massive exotic particle. An intriguing alternative view is that dark matter is made of black holes formed during the first second of our universe’s existence, known as primordial black holes. Now a scientist at NASA’s Goddard Space Flight Center in Greenbelt, Maryland, suggests that this interpretation aligns with our knowledge of cosmic infrared and X-ray background glows and may explain the unexpectedly high masses of merging black holes detected last year.

“This study is an effort to bring together a broad set of ideas and observations to test how well they fit, and the fit is surprisingly good,” said Alexander Kashlinsky, an astrophysicist at NASA Goddard. “If this is correct, then all galaxies, including our own, are embedded within a vast sphere of black holes each about 30 times the sun’s mass.”





In 2005, Kashlinsky led a team of astronomers using NASA’s Spitzer Space Telescope to explore the background glow of infrared light in one part of the sky. The researchers reported excessive patchiness in the glow and concluded it was likely caused by the aggregate light of the first sources to illuminate the universe more than 13 billion years ago. Follow-up studies confirmed that this cosmic infrared background (CIB) showed similar unexpected structure in other parts of the sky.


In 2013, another study examined how the cosmic X-ray background (CXB) detected by NASA’s Chandra X-ray Observatory compared with the CIB in the same area of the sky. The first stars emitted mainly optical and ultraviolet light, which today is stretched into the infrared by the expansion of space, so they should not contribute significantly to the CXB.


Yet the irregular glow of low-energy X-rays in the CXB matched the patchiness of the CIB quite well. The only object we know of that can be sufficiently luminous across this wide an energy range is a black hole. The research team concluded that primordial black holes must have been abundant among the earliest stars, making up at least about one out of every five of the sources contributing to the CIB.


The nature of dark matter remains one of the most important unresolved issues in astrophysics. Scientists currently favor theoretical models that explain dark matter as an exotic massive particle, but so far searches have failed to turn up evidence these hypothetical particles actually exist. NASA is currently investigating this issue as part of its Alpha Magnetic Spectrometer and Fermi Gamma-ray Space Telescope missions.


“These studies are providing increasingly sensitive results, slowly shrinking the box of parameters where dark matter particles can hide,” Kashlinsky said. “The failure to find them has led to renewed interest in studying how well primordial black holes—black holes formed in the universe’s first fraction of a second—could work as dark matter.”


Physicists have outlined several ways in which the hot, rapidly expanding universe could produce primordial black holes in the first thousandths of a second after the Big Bang. The older the universe is when these mechanisms take hold, the larger the black holes can be. And because the window for creating them lasts only a tiny fraction of the first second, scientists expect primordial black holes would exhibit a narrow range of masses.


Primordial black holes, if they exist, could be similar to the merging black holes detected by the LIGO team in 2015. This computer simulation shows in slow motion what this merger would have looked like up close. The ring around the black holes, called an Einstein ring, arises from all the stars in a small region directly behind the holes whose light is distorted by gravitational lensing. The gravitational waves detected by LIGO are not shown in this video, although their effects can be seen in the Einstein ring. Gravitational waves traveling out behind the black holes disturb stellar images comprising the Einstein ring, causing them to slosh around in the ring even long after the merger is complete. Gravitational waves traveling in other directions cause weaker, shorter-lived sloshing everywhere outside the Einstein ring. If played back in real time, the movie would last about a third of a second. Credit: SXS Lensing

On Sept. 14, 2015, gravitational waves produced by a pair of merging black holes 1.3 billion light-years away were captured by the Laser Interferometer Gravitational-Wave Observatory (LIGO) facilities in Hanford, Washington, and Livingston, Louisiana. This event marked the first-ever detection of gravitational waves as well as the first direct detection of black holes. The signal provided LIGO scientists with information about the masses of the individual black holes, which were 29 and 36 times the sun’s mass, plus or minus about four solar masses. These values were both unexpectedly large and surprisingly similar.
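The two reported masses combine into a quantity called the chirp mass, which is what a merger’s waveform constrains most tightly and is why LIGO could pin the component masses down at all. As a back-of-envelope sketch (the standard textbook formula applied to the article’s numbers, not a calculation from the article itself):

```python
# Chirp mass of a compact binary: M_chirp = (m1*m2)**(3/5) / (m1+m2)**(1/5),
# returned in the same units as the inputs (here, solar masses).
def chirp_mass(m1, m2):
    return (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2

# Component masses reported for the detection (solar masses)
print(round(chirp_mass(29.0, 36.0), 1))  # 28.1
```

A chirp mass near 28 solar masses is consistent with the unexpectedly heavy pair the article describes.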





“Depending on the mechanism at work, primordial black holes could have properties very similar to what LIGO detected,” Kashlinsky explained. “If we assume this is the case, that LIGO caught a merger of black holes formed in the early universe, we can look at the consequences this has on our understanding of how the cosmos ultimately evolved.”


In his new paper, published May 24 in The Astrophysical Journal Letters, Kashlinsky analyzes what might have happened if dark matter consisted of a population of black holes similar to those detected by LIGO. The black holes distort the distribution of mass in the early universe, adding a small fluctuation that has consequences hundreds of millions of years later, when the first stars begin to form.


For much of the universe’s first 500 million years, normal matter remained too hot to coalesce into the first stars. Dark matter was unaffected by the high temperature because, whatever its nature, it primarily interacts through gravity. Aggregating by mutual attraction, dark matter first collapsed into clumps called minihaloes, which provided a gravitational seed enabling normal matter to accumulate. Hot gas collapsed toward the minihaloes, resulting in pockets of gas dense enough to further collapse on their own into the first stars. Kashlinsky shows that if black holes play the part of dark matter, this process occurs more rapidly and easily produces the lumpiness of the CIB detected in Spitzer data even if only a small fraction of minihaloes manage to produce stars.


As cosmic gas fell into the minihaloes, their constituent black holes would naturally capture some of it too. Matter falling toward a black hole heats up and ultimately produces X-rays. Together, infrared light from the first stars and X-rays from gas falling into dark matter black holes can account for the observed agreement between the patchiness of the CIB and the CXB.


Occasionally, some primordial black holes will pass close enough to be gravitationally captured into binary systems. The black holes in each of these binaries will, over eons, emit gravitational radiation, lose orbital energy and spiral inward, ultimately merging into a larger black hole like the event LIGO observed.
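The “over eons” timescale for such an inspiral can be estimated with the standard Peters (1964) quadrupole formula for a circular binary; the 0.1 AU starting separation below is purely illustrative, not a value from the article:

```python
# Peters (1964) coalescence time for a circular binary:
# t = 5 c^5 a^4 / (256 G^3 m1 m2 (m1 + m2))
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg
YEAR = 3.156e7       # seconds per year

def merger_time_years(m1_solar, m2_solar, a_m):
    m1, m2 = m1_solar * M_SUN, m2_solar * M_SUN
    t = 5 * C**5 * a_m**4 / (256 * G**3 * m1 * m2 * (m1 + m2))
    return t / YEAR

# Two ~30-solar-mass black holes starting 0.1 AU apart (illustrative)
print(f"{merger_time_years(30, 30, 0.1 * 1.496e11):.1e}")  # ~5.9e+08 years
```

A few hundred million years for a fairly tight starting orbit shows why such binaries merge within cosmic history but not quickly.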


“Future LIGO observing runs will tell us much more about the universe’s population of black holes, and it won’t be long before we’ll know if the scenario I outline is either supported or ruled out,” Kashlinsky said.


Kashlinsky leads a science team centered at Goddard that is participating in the European Space Agency’s Euclid mission, which is currently scheduled to launch in 2020. The project, named LIBRAE, will enable the observatory to probe source populations in the CIB with high precision and determine what portion was produced by black holes.




– Credit and Resource –


More information: A. Kashlinsky, “LIGO Gravitational Wave Detection, Primordial Black Holes and the Near-IR Cosmic Infrared Background Anisotropies,” The Astrophysical Journal Letters (2016). DOI: 10.3847/2041-8205/823/2/L25. On arXiv: arxiv.org/abs/1605.04023


Journal reference: The Astrophysical Journal Letters


Provided by: NASA





Neuroscientists illuminate role of autism gene

Neuroscientists illuminate the role of an autism-linked gene. Loss of the Shank gene prevents neuronal synapses from properly maturing.


Scientia — A new study from MIT neuroscientists reveals that a gene mutation associated with autism plays a critical role in the formation and maturation of synapses — the connections that allow neurons to communicate with each other.




Many genetic variants have been linked to autism, but only a handful are potent enough to induce the disorder on their own. Of these, mutations in a gene called Shank3 are among the most common, occurring in about 0.5 percent of people with autism.





Scientists know that Shank3 helps cells respond to input from other neurons, but because there are two other Shank proteins, and all three can fill in for each other in certain ways, it has been difficult to determine exactly what the Shank proteins are doing.


“It’s clearly regulating something in the neuron that’s receiving a synaptic signal, but some people find one role and some people find another,” says Troy Littleton, a professor in the departments of Biology and of Brain and Cognitive Sciences at MIT, a member of MIT’s Picower Institute for Learning and Memory, and the senior author of the study. “There’s a lot of debate over what it really does at synapses.”


Key to the study is that fruit flies, which Littleton’s lab uses to study synapses, have only one version of the Shank gene. By knocking out that gene, the researchers eliminated all Shank protein from the flies.


“This is the first animal where we have completely removed all Shank family proteins,” says Kathryn Harris, a Picower Institute research scientist and lead author of the paper, which appears in the May 25 issue of the Journal of Neuroscience.


Synaptic organization


Scientists already knew that the Shank proteins are scaffold proteins, meaning that they help to organize the hundreds of other proteins found in the synapse of a postsynaptic cell — a cell that receives signals from a presynaptic cell. These proteins help to coordinate the cell’s response to the incoming signal.


“Shank is essentially a hub for signaling,” Harris says. “It brings in a lot of other partners and plays an organizational role at the postsynaptic membrane.”


In fruit flies lacking the Shank protein, the researchers found two dramatic effects. First, the postsynaptic cells had many fewer boutons, which are the sites where neurotransmitter release occurs. Second, many of the boutons that did form were not properly developed; that is, they were not surrounded by all of the postsynaptic proteins normally found there, which are required to respond to synaptic signals.


The researchers are now studying how this reduction in functional synapses affects the brain. Littleton suspects that the development of neural circuits could be impaired, which, if the same holds true in humans, may help explain some of the symptoms seen in autistic people.


“During critical windows of social and language learning, we reshape our connections to drive connectivity patterns that respond to rewards and language and social interactions,” he says. “If Shank is doing similar things in the mammalian brain, one could imagine potentially having those circuits form relatively normally early on, but if they fail to properly mature and form the proper number of connections, that could lead to a variety of behavioral defects.”


Pinpointing an exact link to autism symptoms would be difficult to do in fruit fly studies, however.





“Although the core molecular machines that allow neurons to communicate are highly conserved between fruit flies and humans, the anatomy of the various circuits that are formed during evolution are quite different,” Littleton says. “It’s hard to jump from a synaptic defect in the fly to an autism-like phenotype because the circuits are so different.”


An unexpected link


The researchers also showed, for the first time, that loss of Shank affects a well-known set of proteins that make up the Wnt (also known as Wingless) signaling pathway. When a Wnt protein binds to a receptor on the cell, it initiates a series of interactions that influence which genes are turned on. This, in turn, contributes to many cell processes including embryonic development, tissue regeneration, and tumor formation.


When Shank is missing from fruit flies, Wnt signaling is disrupted because the receptor that normally binds to Wnt fails to be internalized by the cell. Normally, a small segment of the activated receptor moves to the cell nucleus and influences the transcription of genes that promote maturation of synapses. Without Shank, Wnt signaling is impaired and the synapses do not fully mature.


“The Shank protein and the Wnt protein family are thought to be involved in autism independently, but the fact that this study discovered that Wnt and Shank are interacting brings the story into better focus,” says Bryan Stewart, a professor of cell and systems biology at the University of Toronto at Mississauga, who was not involved in the research. “Now we can look and see if those interactions between Wnt and Shank are potentially responsible for their role in autism.”


The finding raises the possibility of treating autism with drugs that promote Wnt signaling, if the same connection is found in humans.


“Because the link to Wnt signaling is new and hasn’t been picked up in mammalian studies, we really hope that that can inspire people to look for a connection to Wnt signaling in mammalian models, and maybe that can offer another avenue for how loss of Shank could be counteracted,” Harris says.




– Credit and Resource –


The research was funded by the National Institutes of Health and the Simons Center for the Social Brain at MIT.


Anne Trafton | MIT News Office





Automatic bug finder could make complex analysis practical

Automatic bug finder. System could make complex analysis practical for programs that import huge swaths of code.


Scientia — Symbolic execution is a powerful software-analysis tool that can be used to automatically locate and even repair programming bugs. Essentially, it traces out every path that a program’s execution might take.




But it tends not to work well with applications written using today’s programming frameworks. An application might consist of only 1,000 lines of new code, but it will generally import functions — such as those that handle virtual buttons — from a programming framework, which includes huge libraries of frequently reused code. The additional burden of evaluating the imported code makes symbolic execution prohibitively time consuming.


Computer scientists address this problem by creating simple models of the imported libraries, which describe their interactions with new programs but don’t require line-by-line evaluation of their code. Building the models, however, is labor-intensive and error prone, and the models require regular updates, as programming frameworks are constantly evolving.


Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory, working with colleagues at the University of Maryland, have taken an important step toward enabling symbolic execution of applications written using programming frameworks, with a system that automatically constructs models of framework libraries.





The researchers compared a model generated by their system with a widely used model of Java’s standard library of graphical-user-interface components, which had been laboriously constructed over a period of years. They found that their new model plugged several holes in the hand-coded one.


They described their results in a paper they presented last week at the International Conference on Software Engineering. Their work was funded by the National Science Foundation’s Expeditions Program.


“Forty years ago, if you wanted to write a program, you went in, you wrote the code, and basically all the code you wrote was the code that executed,” says Armando Solar-Lezama, an associate professor of electrical engineering and computer science at MIT, whose group led the new work. “But today, if you want to write a program, you go and bring in these huge frameworks and these huge pieces of functionality that you then glue together, and you write a little code to get them to interact with each other. If you don’t understand what that big framework is doing, you’re not even going to know where your program is going to start executing.”


Consequently, a program analyzer can’t just dispense with the framework code and concentrate on the newly written code. But symbolic execution works by stepping through every instruction that a program executes for a wide range of input values. That process becomes untenable if the analyzer has to evaluate every instruction involved in adding a virtual button to a window — the positioning of the button on the screen, the movement of the button when the user scrolls down the window, the button’s change of appearance when it’s pressed, and so on.
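The blowup described above is essentially combinatorial: every independent branch in the imported framework code can double the number of paths a symbolic executor must follow. A deliberately simplified sketch of that growth (an illustration of the scaling, not the analyzers discussed in the article):

```python
# Toy illustration of path explosion: with n independent two-way branches,
# a symbolic executor that explores every path must follow 2**n of them.
from itertools import product

def count_paths(num_branches):
    # Enumerate every combination of branch outcomes (taken / not taken)
    # and count them, mimicking exhaustive path exploration.
    return sum(1 for _ in product([False, True], repeat=num_branches))

for n in (1, 10, 20):
    print(n, count_paths(n))  # 1 2 / 10 1024 / 20 1048576
```

Twenty branches already mean over a million paths, which is why evaluating a whole GUI framework line by line is untenable and library models are needed.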


For purposes of analysis, all that matters is what happens when the button is pressed, so that’s the only aspect of the button’s functionality that a framework model needs to capture. More precisely, the model describes only what happens when code imported from a standard programming framework returns control of a program to newly written code.


“The only thing we care about is what crosses the boundary between the application and the framework,” says Xiaokang Qiu, a postdoc in Solar-Lezama’s lab and a co-author on the new paper. “The framework itself is like a black box that we want to abstract away.”


To generate their model, the researchers ran a suite of tutorials designed to teach novices how to program in Java. Their system automatically tracked the interactions between the tutorial code and the framework code that the tutorials imported.


“The nice thing about tutorials is that they’re designed to help people understand how the framework works, so they’re also a good way to teach the synthesizer how the framework works,” Solar-Lezama says. “The problem is that if I just show you a trace of what my program did, there’s an infinite set of programs that could behave like that trace.”





To winnow down that set of possibilities, the researchers’ system tries to fit the program traces to a set of standard software “design patterns.” First proposed in the late 1970s and popularized in a 1995 book called “Design Patterns,” design patterns are based on the idea that most problems in software engineering fit into just a few categories, and their solutions have just a few general shapes.


Computer scientists have identified roughly 20 design patterns that describe communication between different components of a computer program. Solar-Lezama, Qiu, and their Maryland colleagues — Jinseong Jeon, Jonathan Fetter-Degges, and Jeffrey Foster — built four such patterns into their new system, which they call Pasket, for “pattern sketcher.” For any given group of program traces, Pasket tries to fit it to each of the design patterns, selecting only the one that works best.


Because a given design pattern needs to describe solutions to a huge range of problems that vary in their particulars, in the computer science literature, they’re described in very general terms. Fortunately, Solar-Lezama has spent much of his career developing a system, called Sketch, that takes general descriptions of program functionality and fills in the low-level computational details. Sketch is the basis of most of his group’s original research, and it’s what reconciles design patterns and program traces in Pasket.


“The availability of models for frameworks such as Swing [Java’s library of graphical-user-interface components] and Android is critical for enabling symbolic execution of applications built using these frameworks,” says Rajiv Gupta, a professor of computer science and engineering at the University of California at Riverside. “At present, framework models are developed and maintained manually. This work offers a compelling demonstration of how far synthesis technology has advanced. The scalability of Pasket is impressive — in a few minutes, it synthesized nearly 2,700 lines of code. Moreover, the generated models compare favorably with manually created ones.”




– Credit and Resource –


Larry Hardesty | MIT News Office





Juno Spacecraft Crosses Jupiter/Sun Gravity Boundary

NASA’s Juno Spacecraft Crosses Jupiter/Sun Gravitational Boundary


Scientia — Since its launch five years ago, there have been three forces tugging at NASA’s Juno spacecraft as it speeds through the solar system. The sun, Earth and Jupiter have all been influential — a gravitational trifecta of sorts. At times, Earth was close enough to be the frontrunner. More recently, the sun has had the most clout when it comes to Juno’s trajectory. Today, it can be reported that Jupiter is now in the gravitational driver’s seat, and the basketball court-sized spacecraft is not looking back.



This artist’s rendering shows NASA’s Juno spacecraft making one of its close passes over Jupiter.
Credits: NASA/JPL-Caltech


“Today the gravitational influence of Jupiter is neck and neck with that of the sun,” said Rick Nybakken, Juno project manager at NASA’s Jet Propulsion Laboratory in Pasadena, California. “As of tomorrow, and for the rest of the mission, we project Jupiter’s gravity will dominate as the trajectory-perturbing effects by other celestial bodies are reduced to insignificant roles.”






Juno was launched on Aug. 5, 2011. On July 4 of this year, it will perform a Jupiter orbit insertion maneuver — a 35-minute burn of its main engine, which will impart a mean change in velocity of 1,212 mph (542 meters per second) on the spacecraft. Once in orbit, the spacecraft will circle the Jovian world 37 times, skimming to within 3,100 miles (5,000 kilometers) above the planet’s cloud tops. During the flybys, Juno will probe beneath the obscuring cloud cover of Jupiter and study its auroras to learn more about the planet’s origins, structure, atmosphere and magnetosphere.
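The burn figures quoted above are internally consistent, as a quick unit-conversion check shows (pure arithmetic on the numbers already given; no new mission data):

```python
# Sanity-check the Jupiter orbit insertion burn: the 542 m/s delta-v
# expressed in mph, and the mean acceleration implied by a 35-minute burn.
MPS_TO_MPH = 2.23694       # 1 m/s in miles per hour

delta_v = 542.0            # change in velocity, m/s (from the article)
burn_time = 35 * 60.0      # burn duration, seconds

print(round(delta_v * MPS_TO_MPH))    # 1212 mph, matching the article
print(round(delta_v / burn_time, 3))  # 0.258 m/s^2 mean acceleration
```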


Juno’s name comes from Greek and Roman mythology. The mythical god Jupiter drew a veil of clouds around himself to hide his mischief, and his wife — the goddess Juno — was able to peer through the clouds and reveal Jupiter’s true nature.


NASA’s Jet Propulsion Laboratory, Pasadena, California, manages the Juno mission for the principal investigator, Scott Bolton, of Southwest Research Institute in San Antonio. Juno is part of NASA’s New Frontiers Program, which is managed at NASA’s Marshall Space Flight Center in Huntsville, Alabama, for NASA’s Science Mission Directorate. Lockheed Martin Space Systems, Denver, built the spacecraft. The California Institute of Technology in Pasadena manages JPL for NASA.






– Credit and Resource –


NASA





Life on Ceres

Life on Ceres? Mysterious changes in the bright spots still baffle scientists


Scientia — Bright spots on the dwarf planet Ceres continue to puzzle researchers. When a team of astronomers led by Paolo Molaro of the Trieste Astronomical Observatory in Italy recently observed these features, they found something unexpected: the spots brighten during the day and show other variations as well. This variability remains a mystery.




The bright features were discovered by NASA’s Dawn spacecraft, which is orbiting the dwarf planet and continually returning data about it. The spots reflect far more light than their much darker surroundings. Their composition is still debated: scientists are unsure whether they are made of water ice, of evaporated salts, or of something else.


Molaro and his colleagues studied the spots on Ceres in July and August 2015 using the High Accuracy Radial velocity Planet Searcher (HARPS), as the European Southern Observatory (ESO) reported earlier this year. The instrument, mounted on ESO’s 3.6-meter telescope at La Silla Observatory in Chile, enables radial-velocity measurements with the highest accuracy currently available.






Using HARPS, the researchers detected unexpected changes in the mysterious bright spots. At first they suspected an instrumental problem, but after double-checking they concluded that the radial-velocity anomalies were likely real. The team then noticed that the anomalies coincided with periods when the bright spots in Occator crater were visible from Earth, and so linked the two.


However, the detected variations continue to perplex astronomers, who have yet to find a plausible explanation for them.


“We know nothing about these changes, really. And this increases the mystery of these spots,” Molaro told Astrowatch.net.


One of the proposed hypotheses is that the observed changes could be triggered by the presence of volatile substances that evaporate due to solar radiation. When the spots are on the side illuminated by the sun they form plumes that reflect sunlight very effectively. The scientists suggest that these plumes then evaporate quickly, lose reflectivity and produce the observed changes.


“It is already well known that a lot of water hides beneath the surface of Ceres, so water ice or clathrate hydrates are the most natural hypotheses. But a proper answer will hopefully be provided by scientists working on the Dawn team in the coming months,” Molaro said.


He noted that the indication of variability needs to be confirmed by direct imaging of Occator’s bright spot at the highest available spatial resolution.


“These kinds of measurements are underway. I would say that the detection of variability improves our ignorance rather than our understanding of this planetary body,” Molaro said.






The team is currently applying for further observations by the end of this year to repeat, in a more systematic way, what they did in their pilot project. An important aspect of their work is to have demonstrated a new way to study Ceres from the ground, one that could remain useful even after the end of the Dawn mission. For now, they are eager to see the results from the Dawn spacecraft in the coming months.


If the team’s theory is confirmed, Ceres would appear to be internally active. While this dwarf planet is known to be rich in water, it is unclear whether that is related to the bright spots. It is also still debated whether Ceres, with its vast reservoir of water, could be a suitable place to host microbial life.


“Life as we know it on Earth needs liquid water, biogenic elements and a stable source of energy. Is Ceres a good place to have these things simultaneously and for a substantial amount of time, like billions of years? Nobody knows at the moment,” Molaro concluded.


A little about Ceres




  • Discovered: January 1, 1801, by Giuseppe Piazzi of Italy (the first asteroid/dwarf planet discovered)

  • Size: 975 by 909 kilometers (606 by 565 miles)

  • Shape: Spheroid

  • Rotation: Once every 9 hours, 4.5 minutes

The object is known by astronomers as “1 Ceres” because it was the very first minor planet discovered. As big across as Texas, Ceres’ nearly spherical body has a differentiated interior – meaning that, like Earth, it has denser material at the core and lighter minerals near the surface. Astronomers believe that water ice may be buried under Ceres’ crust because its density is less than that of the Earth’s crust, and because the dust-covered surface bears spectral evidence of water-bearing minerals. Ceres could even boast frost-covered polar caps.


Astronomers estimate that if Ceres were composed of 25 percent water, it may have more water than all the fresh water on Earth. Ceres’ water, unlike Earth’s, is expected to be in the form of water ice located in its mantle.
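The comparison above can be checked with rough figures. The Ceres mass and Earth fresh-water values below are our own assumptions for the sake of illustration, not numbers from the article:

```python
# Rough check of the "more water than all Earth's fresh water" estimate.
# Assumed figures (not from the article): Ceres mass ~9.4e20 kg;
# Earth's fresh water ~35 million km^3, roughly 3.5e19 kg.
CERES_MASS_KG = 9.4e20
EARTH_FRESH_WATER_KG = 3.5e19

ceres_water = 0.25 * CERES_MASS_KG  # the article's 25 percent scenario

print(ceres_water > EARTH_FRESH_WATER_KG)            # True
print(round(ceres_water / EARTH_FRESH_WATER_KG, 1))  # 6.7
```

Even with generous uncertainty in both inputs, the Ceres figure comes out several times larger, which supports the claim.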




– Credit and Resource –


NASA





Friday 27 May 2016

New concept battery technology

New concept turns battery technology upside-down. Pump-free design for flow battery could offer advantages in cost and simplicity.


Scientia — A new approach to the design of a liquid battery, using a passive, gravity-fed arrangement similar to an old-fashioned hourglass, could offer great advantages due to the system’s low cost and the simplicity of its design and operation, says a team of MIT researchers who have made a demonstration version of the new battery.



A new concept for a flow battery functions like an old hourglass or egg timer, with particles (in this case carried as a slurry) flowing through a narrow opening from one tank to another. The flow can then be reversed by turning the device over.
Image courtesy of the researchers.


Liquid flow batteries — in which the positive and negative electrodes are each in liquid form and separated by a membrane — are not a new concept, and some members of this research team unveiled an earlier concept three years ago. The basic technology can use a variety of chemical formulations, including the same chemical compounds found in today’s lithium-ion batteries. In this case, key components are not solid slabs that remain in place for the life of the battery, but rather tiny particles that can be carried along in a liquid slurry. Increasing storage capacity simply requires bigger tanks to hold the slurry.





But all previous versions of liquid batteries have relied on complex systems of tanks, valves, and pumps, adding to the cost and providing multiple opportunities for possible leaks and failures.


The new version, which substitutes a simple gravity feed for the pump system, eliminates that complexity. The rate of energy production can be adjusted simply by changing the angle of the device, thus speeding up or slowing down the rate of flow. The concept is described in a paper in the journal Energy and Environmental Science, co-authored by Kyocera Professor of Ceramics Yet-Ming Chiang, Pappalardo Professor of Mechanical Engineering Alexander Slocum, School of Engineering Professor of Teaching Innovation Gareth McKinley, and POSCO Professor of Materials Science and Engineering W. Craig Carter, as well as postdoc Xinwei Chen, graduate student Brandon Hopkins, and four others.
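Why tilting the device changes the output rate can be sketched with a simplified model. This is our assumption, not the authors’ analysis: we take the driving force on the slurry to scale with the component of gravity along the flow channel, i.e. with the sine of the tilt angle.

```python
# Hedged sketch: relative gravity-driven flow rate vs. tilt angle,
# assuming the drive scales as sin(tilt) (our simplification, not the
# MIT team's model; real slurry rheology is far more complex).
import math

def relative_flow_rate(tilt_deg):
    # Flow rate relative to a fully vertical (90 degree) orientation.
    return math.sin(math.radians(tilt_deg))

print(round(relative_flow_rate(90), 2))  # 1.0 : vertical, fastest flow
print(round(relative_flow_rate(30), 2))  # 0.5 : shallower tilt, half drive
```

Under this assumption, shallower angles slow the flow smoothly toward zero at horizontal, matching the article’s description of tuning output by changing the device’s angle.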


Chiang describes the new approach as something like a “concept car” — a design that is not expected to go into production as it is but that demonstrates some new ideas that can ultimately lead to a real product.


The original concept for flow batteries dates back to the 1970s, but the early versions used materials that had very low energy-density — that is, they had a low capacity for storing energy in proportion to their weight. A major new step in the development of flow batteries came with the introduction of high-energy-density versions a few years ago, including one developed by members of this MIT team, that used the same chemical compounds as conventional lithium-ion batteries. That version had many advantages but shared with other flow batteries the disadvantage of complexity in its plumbing systems.


The new version replaces all that plumbing with a simple, gravity-fed system. In principle, it functions like an old hourglass or egg timer, with particles flowing through a narrow opening from one tank to another. The flow can then be reversed by turning the device over. In this case, the overall shape looks more like a rectangular window frame, with a narrow slot at the place where two sashes would meet in the middle.


In the proof-of-concept version the team built, only one of the two sides of the battery is composed of flowing liquid, while the other side — a sheet of lithium — is in solid form. The team decided to try out the concept in this simpler form before pursuing its ultimate goal: a version in which both sides (the positive and negative electrodes) are liquid and flow side by side through an opening while separated by a membrane.


Solid batteries and liquid batteries each have advantages, depending on their specific applications, Chiang says, but “the concept here shows that you don’t need to be confined by these two extremes. This is an example of hybrid devices that fall somewhere in the middle.”


The new design should make possible simpler and more compact battery systems, which could be inexpensive and modular, allowing for gradual expansion of grid-connected storage systems to meet growing demand, Chiang says. Such storage systems will be critical for scaling up the use of intermittent power sources such as wind and solar.


While a conventional, all-solid battery requires electrical connectors for each of the cells that make up a large battery system, in the flow battery only the small region at the center — the “neck” of the hourglass — requires these contacts, greatly simplifying the mechanical assembly of the system, Chiang says. The components are simple enough that they could be made through injection molding or even 3-D printing, he says.





In addition, the basic concept of the flow battery makes it possible to choose independently the two main characteristics of a desired battery system: its power density (how much energy it can deliver at a given moment) and its energy density (how much total energy can be stored in the system). For the new liquid battery, the power density is determined by the size of the “stack,” the contacts through which the battery particles flow, while the energy density is determined by the size of its storage tanks. “In a conventional battery, the power and energy are highly interdependent,” Chiang says.
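The decoupling Chiang describes can be captured in a back-of-the-envelope model: stack area sets the power rating, tank volume sets the energy rating. The sketch below is purely illustrative — the function name and the per-area and per-volume constants are assumptions, not figures from the paper.

```python
# Toy model of a flow battery's decoupled ratings.
# power_per_cm2_w and energy_per_l_wh are made-up illustrative constants.

def flow_battery_ratings(stack_area_cm2, tank_volume_l,
                         power_per_cm2_w=0.1, energy_per_l_wh=50.0):
    """Power scales with stack (contact) area; energy scales with tank volume."""
    power_w = stack_area_cm2 * power_per_cm2_w
    energy_wh = tank_volume_l * energy_per_l_wh
    return power_w, energy_wh

# Doubling only the tanks doubles stored energy but leaves power unchanged,
# which is impossible in a conventional battery where the two are coupled.
p1, e1 = flow_battery_ratings(stack_area_cm2=100, tank_volume_l=10)
p2, e2 = flow_battery_ratings(stack_area_cm2=100, tank_volume_l=20)
assert p1 == p2 and e2 == 2 * e1
```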


The trickiest part of the design process, he says, was tuning the characteristics of the liquid slurry to control the flow rates. The thick slurry behaves a bit like ketchup in a bottle — hard to get flowing at first, but prone to flowing too suddenly once it starts. Getting the flow just right required a long process of fine-tuning both the liquid mixture and the design of the mechanical structures.


The rate of flow can be controlled by adjusting the angle of the device, Chiang says, and the team found that at a very shallow angle, close to horizontal, “the device would operate most efficiently, at a very steady but low flow rate.” The basic concept should work with many different chemical compositions for the different parts of the battery, he says, but “we chose to demonstrate it with one particular chemistry, one that we understood from previous work. We’re not proposing this particular chemistry as the end game.”


Venkat Viswanathan, an assistant professor of mechanical engineering at Carnegie Mellon University, who was not involved in this work, says: “The authors have been able to build a bridge between the usually disparate fields of fluid mechanics and electrochemistry,” and in so doing developed a promising new approach to battery storage. “Pumping represents a large part of the cost for flow batteries,” he says, “and this new pumpless design could truly inspire a class of passively driven flow batteries.”


The work was supported by the Joint Center for Energy Storage Research, funded by the U.S. Department of Energy. The team also included graduate students Ahmed Helal and Frank Fan, and postdocs Kyle Smith and Zheng Li.




– Credit and Resource –


David L. Chandler | MIT News Office




New concept battery technology

new formula for concrete

Finding a new formula for concrete. Researchers look to bones and shells as blueprints for stronger, more durable concrete.


Scientia — Researchers at MIT are seeking to redesign concrete — the most widely used human-made material in the world — by following nature’s blueprints.




In a paper published online in the journal Construction and Building Materials, the team contrasts cement paste — concrete’s binding ingredient — with the structure and properties of natural materials such as bones, shells, and deep-sea sponges. As the researchers observed, these biological materials are exceptionally strong and durable, thanks in part to their precise assembly of structures at multiple length scales, from the molecular to the macro, or visible, level.


From their observations, the team, led by Oral Buyukozturk, a professor in MIT’s Department of Civil and Environmental Engineering (CEE), proposed a new bioinspired, “bottom-up” approach for designing cement paste.





“These materials are assembled in a fascinating fashion, with simple constituents arranging in complex geometric configurations that are beautiful to observe,” Buyukozturk says. “We want to see what kinds of micromechanisms exist within them that provide such superior properties, and how we can adopt a similar building-block-based approach for concrete.”


Ultimately, the team hopes to identify materials in nature that may be used as sustainable and longer-lasting alternatives to Portland cement, which requires a huge amount of energy to manufacture.


“If we can replace cement, partially or totally, with some other materials that may be readily and amply available in nature, we can meet our objectives for sustainability,” Buyukozturk says.


Co-authors on the paper include lead author and graduate student Steven Palkovic, graduate student Dieter Brommer, research scientist Kunal Kupwade-Patil, CEE assistant professor Admir Masic, and CEE department head Markus Buehler, the McAfee Professor of Engineering.


“The merger of theory, computation, new synthesis, and characterization methods have enabled a paradigm shift that will likely change the way we produce this ubiquitous material, forever,” Buehler says. “It could lead to more durable roads, bridges, structures, reduce the carbon and energy footprint, and even enable us to sequester carbon dioxide as the material is made. Implementing nanotechnology in concrete is one powerful example [of how] to scale up the power of nanoscience to solve grand engineering challenges.”


From molecules to bridges


Today’s concrete is a random assemblage of crushed rocks and stones, bound together by a cement paste. Concrete’s strength and durability depend partly on its internal structure and configuration of pores. For example, the more porous the material, the more vulnerable it is to cracking. However, there are no techniques available to precisely control concrete’s internal structure and overall properties.
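The porosity–strength link mentioned above is often summarized with simple empirical relations for porous solids. As a hedged illustration (not a model from the MIT paper), the sketch below uses the classical Ryshkewitch exponential relation with made-up constants:

```python
import math

def ryshkewitch_strength(porosity, sigma0_mpa=100.0, b=7.0):
    """Ryshkewitch empirical relation: strength falls exponentially with porosity.

    sigma0_mpa (the hypothetical zero-porosity strength) and the fitting
    exponent b are illustrative values, not measured cement-paste data.
    """
    return sigma0_mpa * math.exp(-b * porosity)

# More pores -> weaker material, consistent with the article's point
# that porosity makes concrete more vulnerable to cracking.
assert ryshkewitch_strength(0.25) < ryshkewitch_strength(0.10)
```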


“It’s mostly guesswork,” Buyukozturk says. “We want to change the culture and start controlling the material at the mesoscale.”


As Buyukozturk describes it, the “mesoscale” represents the connection between microscale structures and macroscale properties. For instance, how does cement’s microscopic arrangement affect the overall strength and durability of a tall building or a long bridge? Understanding this connection would help engineers identify features at various length scales that would improve concrete’s overall performance.


“We’re dealing with molecules on the one hand, and building a structure that’s on the order of kilometers in length on the other,” Buyukozturk says. “How do we connect the information we develop at the very small scale, to the information at the large scale? This is the riddle.”





Building from the bottom up


A comparison of natural materials and cement paste demonstrates the steps by which smaller pieces assemble to form larger structures. Image courtesy of the researchers.


To start to understand this connection, he and his colleagues looked to biological materials such as bone, deep sea sponges, and nacre (an inner shell layer of mollusks), which have all been studied extensively for their mechanical and microscopic properties. They looked through the scientific literature for information on each biomaterial, and compared their structures and behavior, at the nano-, micro-, and macroscales, with that of cement paste.


They looked for connections between a material’s structure and its mechanical properties. For instance, the researchers found that a deep sea sponge’s onion-like structure of silica layers provides a mechanism for preventing cracks. Nacre has a “brick-and-mortar” arrangement of minerals that generates a strong bond between the mineral layers, making the material extremely tough.


“In this context, there is a wide range of multiscale characterization and computational modeling techniques that are well established for studying the complexities of biological and biomimetic materials, which can be easily translated into the cement community,” says Masic.


Applying the information they learned from investigating biological materials, as well as knowledge they gathered on existing cement paste design tools, the team developed a general, bioinspired framework, or methodology, for engineers to design cement, “from the bottom up.”


The framework is essentially a set of guidelines that engineers can follow, in order to determine how certain additives or ingredients of interest will impact cement’s overall strength and durability. For instance, in a related line of research, Buyukozturk is looking into volcanic ash as a cement additive or substitute. To see whether volcanic ash would improve cement paste’s properties, engineers, following the group’s framework, would first use existing experimental techniques, such as nuclear magnetic resonance, scanning electron microscopy, and X-ray diffraction to characterize volcanic ash’s solid and pore configurations over time.


Researchers could then plug these measurements into models that simulate concrete’s long-term evolution, to identify mesoscale relationships between, say, the properties of volcanic ash and the material’s contribution to the strength and durability of an ash-containing concrete bridge. These simulations can then be validated with conventional compression and nanoindentation experiments, to test actual samples of volcanic ash-based concrete.


Ultimately, the researchers hope the framework will help engineers identify ingredients that, like biomaterials, are structured and evolve in ways that improve concrete’s performance and longevity.


“Hopefully this will lead us to some sort of recipe for more sustainable concrete,” Buyukozturk says. “Typically, buildings and bridges are given a certain design life. Can we extend that design life maybe twice or three times? That’s what we aim for. Our framework puts it all on paper, in a very concrete way, for engineers to use.”


This research was supported in part by the Kuwait Foundation for the Advancement of Sciences through the Kuwait-MIT Center for Natural Resources and the Environment, the National Institute of Standards and Technology, and Argonne National Laboratory.




– Credit and Resource –


Jennifer Chu | MIT News Office




new formula for concrete

Automating DNA origami opens many new uses

Automating DNA origami opens door to many new uses. Like 3-D printing did for larger objects, method makes it easy to build nanoparticles out of DNA.


Scientia — Researchers can build complex, nanometer-scale structures of almost any shape and form, using strands of DNA. But these particles must be designed by hand, in a complex and laborious process.



3D surface- versus DNA-based renderings of diverse DNA nanoparticles designed autonomously by the algorithm DAEDALUS.
image by Digizyme


This has limited the technique, known as DNA origami, to just a small group of experts in the field.


Now a team of researchers at MIT and elsewhere has developed an algorithm that can build these DNA nanoparticles automatically.


In this way the algorithm, which is reported together with a novel synthesis approach in the journal Science this week, could allow the technique to be used to develop nanoparticles for a much broader range of applications, including scaffolds for vaccines, carriers for gene-editing tools, and archival memory storage.





Unlike traditional DNA origami, in which the structure is designed manually, the algorithm starts with a simple, 3-D geometric representation of the final shape of the object, and then decides how it should be assembled from DNA, according to Mark Bathe, an associate professor of biological engineering at MIT, who led the research.


“The paper turns the problem around from one in which an expert designs the DNA needed to synthesize the object, to one in which the object itself is the starting point, with the DNA sequences that are needed automatically defined by the algorithm,” Bathe says. “Our hope is that this automation significantly broadens participation of others in the use of this powerful molecular design paradigm.”


The algorithm first represents the object as a perfectly smooth, continuous outline of its surface. It then breaks the surface up into a series of polygonal shapes.


Next, it routes a long, single strand of DNA, called the scaffold, which acts like a piece of thread, throughout the entire structure to hold it together.
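One way to picture this routing step is as a walk over the mesh’s edge graph in which a single continuous thread visits the whole structure. The sketch below is a toy illustration of that idea — a depth-first walk that uses each spanning-tree edge twice, producing one closed route — and is not the published DAEDALUS implementation.

```python
# Toy illustration of single-scaffold routing (not the DAEDALUS code):
# walk the mesh's edge graph depth-first so that every tree edge is
# traversed exactly twice, giving one continuous closed route, like a
# single thread holding the whole structure together.

def route_scaffold(mesh_edges):
    # Build an adjacency map from the undirected edge list.
    adj = {}
    for u, v in mesh_edges:
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, []).append(u)

    start = next(iter(adj))
    visited, route = {start}, [start]

    def dfs(u):
        for v in adj[u]:
            if v not in visited:
                visited.add(v)
                route.append(v)   # go down the edge (first traversal)
                dfs(v)
                route.append(u)   # backtrack (second traversal of same edge)

    dfs(start)
    return route

# A square face (4 vertices): one closed walk, each tree edge used twice.
path = route_scaffold([(0, 1), (1, 2), (2, 3), (3, 0)])  # [0, 1, 2, 3, 2, 1, 0]
```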



A new algorithm for DNA origami starts with a simple, 3-D geometric representation of the final shape of the object, and then decides how it should be assembled from DNA.
Image courtesy of the researchers.


The algorithm weaves the scaffold in one fast and efficient step, which can be used for any shape of 3-D object, Bathe says.


“That [step] is a powerful part of the algorithm, because it does not require any manual or human interface, and it is guaranteed to work for any 3-D object very efficiently,” he says.


The algorithm, which is known as DAEDALUS (DNA Origami Sequence Design Algorithm for User-defined Structures) after the Greek craftsman and artist who designed labyrinths that resemble origami’s complex scaffold structures, can build any type of 3-D shape, provided it has a closed surface. This can include shapes with one or more holes, such as a torus.


In contrast, a previous algorithm, published last year in the journal Nature, is only capable of designing and building the surfaces of spherical objects, and even then still requires manual intervention.


Most previous work on DNA origami has explored 2-D and 3-D structures whose design required some steps to be done by hand, says Paul Rothemund, a research professor at Caltech, who was not involved in the paper.


“The current work provides a complete pipeline, starting from a 3-D form, and arriving at a DNA design and a corresponding predicted atomic model which can be compared quantitatively with experiments,” he says.


The team’s strategy in designing and synthesizing the DNA nanoparticles was also validated using 3-D cryo-electron microscopy reconstructions by Bathe’s collaborator, Wah Chiu at Baylor College of Medicine.


The researchers are now investigating a number of applications for the DNA nanoparticles built by the DAEDALUS algorithm. One such application is a scaffold for viral peptides and proteins for use as vaccines.





The surface of the nanoparticles could be designed with any combination of peptides and proteins, located at any desired location on the structure, in order to mimic the way in which a virus appears to the body’s immune system.


The researchers demonstrated that the DNA nanoparticles are stable for more than six hours in serum, and are now attempting to increase their stability further.


The nanoparticles could also be used to encapsulate the CRISPR-Cas9 gene editing tool. The CRISPR-Cas9 tool has enormous potential in therapeutics, thanks to its ability to edit targeted genes. However, there is a significant need to develop techniques to package the tool and deliver it to specific cells within the body, Bathe says.


This is currently done using viruses, but these are limited in the size of package they can carry, restricting their use. The DNA nanoparticles, in contrast, are capable of carrying much larger gene packages and can easily be equipped with molecules that help target the right cells or tissue.


The team is also investigating the use of the nanoparticles as DNA memory blocks. Previous research has shown that information can be stored in DNA, in a similar way to the 0s and 1s used to store data digitally. The information to be stored is “written” using DNA synthesis and can then be read back using DNA sequencing technology.



“The current work provides a complete pipeline, starting from a 3-D form, and arriving at a DNA design and a corresponding predicted atomic model which can be compared quantitatively with experiments,” says MIT’s Mark Bathe.
Image courtesy of the researchers.


Using the DNA nanoparticles would allow this information to be stored in a structured and protected way, with each particle akin to a page or chapter of a book. Recalling a particular chapter or book would then be as simple as reading that nanoparticle’s identity, somewhat like using library index cards, Bathe says.


The most exciting aspect of the work, however, is that it should significantly broaden participation in the application of this technology, Bathe says, much like 3-D printing has done for complex 3-D geometric models at the macroscopic scale.


Bathe’s co-authors on the paper are Rémi Veneziano, a postdoc in the Department of Biological Engineering; Sakul Ratanalert, a graduate student in the departments of Biological Engineering and Chemical Engineering; and others from Baylor College of Medicine and Arizona State University.




– Credit and Resource –


MIT | Helen Knight




Automating DNA origami opens many new uses