Friday, 30 September 2016

Protoplanetary Disk Around Young Star Shows Spiral

Protoplanetary Disk Around a Young Star Exhibits Spiral Structure


Scientia — Astronomers have found a distinct spiral-arm structure in the disk of gas and dust surrounding the young star Elias 2-27. While spiral features have been observed on the surfaces of protoplanetary disks, these new observations, from the ALMA observatory in Chile, are the first to reveal that such spirals occur at the disk midplane, the region where planet formation takes place. This is important for planet formation: structures such as these could either indicate the presence of a newly formed planet, or else create the conditions necessary for a planet to form. As such, these results are a crucial step towards a better understanding of how planetary systems like our Solar System came into being.




An international team led by Laura Pérez, an Alexander von Humboldt Research Fellow from the Max Planck Institute for Radio Astronomy (MPIfR) in Bonn, also including researchers from the Max Planck Institute for Astronomy (MPIA) in Heidelberg, has obtained the first image of a spiral structure seen in thermal dust emission coming from a protoplanetary disk, the potential birthplace of a new Solar System. Such structures are thought to play a key role in allowing planets to form around young stars. The researchers used the international observatory ALMA to image the disk around the young star Elias 2-27, in the constellation Ophiuchus, at a distance of about 450 light-years from Earth.


Planets are formed in disks of gas and dust around newborn stars. But while this general concept goes back a long way, astronomers have only recently gained the ability to observe such disks directly. One early example is the discovery of disk silhouettes in front of extended emission in the Orion Nebula by the Hubble Space Telescope in the 1990s, the so-called proplyds. The ability to observe not only each disk as a whole, but also its sub-structure, is more recent still. Gaps in protoplanetary disks, in the form of concentric rings, were first observed with ALMA in 2014.


These new observations are of particular interest to anyone interested in the formation of planets. Without such structures, planets might not have been able to form in the first place. The reason is as follows: in a smooth disk, planets can only grow step by step. Dust particles within the gas of the disk occasionally collide and clump together, and by successive collisions, ever larger particles, grains, and eventually solid bodies form. But as soon as such bodies reach a size of about one meter, drag by the surrounding gas of the disk makes them migrate inwards, towards the star, on a time scale of 1000 years or shorter. The time needed for such bodies to collect sufficient mass by successive collisions, eventually reaching a size where gas drag becomes a negligible influence, is much longer than that.
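A rough back-of-the-envelope check of that drift timescale, in Python (the 50 m/s inward drift speed is an assumed, commonly quoted order of magnitude for meter-sized bodies near one astronomical unit, not a number taken from the article):

```python
# Order-of-magnitude check (assumed numbers): a metre-sized body feeling the gas
# headwind drifts inward at very roughly 50 m/s; from 1 au that gives an infall time of
AU_M = 1.495978707e11     # astronomical unit in metres
YEAR_S = 3.156e7          # one year in seconds
drift_speed = 50.0        # m/s, assumed characteristic radial drift speed
print(f"infall time ~ {AU_M / drift_speed / YEAR_S:.0f} yr")   # ~100 yr
```

A hundred years or so is indeed "1000 years or shorter", and far less than the time such a body would need to keep growing by collisions alone.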



Infrared image of the Rho Ophiuchi star formation region at a distance of 450 light years (left). The image on the right shows thermal dust emission from the protoplanetary disk surrounding the young star Elias 2-27. Credit: © NASA/Spitzer/JPL-Caltech/WISE-Team (left image), B. Saxton (NRAO/AUI/NSF); ALMA (ESO/NAOJ/NRAO), L. Pérez (MPIfR) (right image).





So how can bodies larger than about a meter form in the first place? Without a good explanation, we cannot understand how planetary systems, including our own Solar System, came into being.


There are several possible mechanisms that would allow primordial rocks to grow larger more quickly, until they finally reach a size where mutual gravitational attraction forms them into full-size planets. “The observed spirals in Elias 2-27 are the first direct evidence for the shocks of spiral density waves in a protoplanetary disk”, says Laura M. Pérez from MPIfR, the lead author of the paper. “They show that density instabilities are possible within the disk, which can eventually lead to strong disk inhomogeneities and further planet formation.” Such instabilities are not confined to the scale of planet formation. In fact, the best-known example is density waves in disk galaxies, which create the spectacular spiral arms of spiral galaxies.


The two sweeping spiral arms in Elias 2-27 extend more than 10 billion kilometers from the newborn star, farther out than the Kuiper Belt in our Solar System. “The presence of spiral density waves at these extreme distances may help explain puzzling observations of extrasolar planets at similarly far-away locations”, Pérez notes. “Such planets cannot form in situ under our standard picture of planet formation.” But in regions of increased density, planet formation could proceed much faster, both because of those regions’ own gravity and because of the more confined space, which would make collisions of grains or rocks more probable. In this way, the problem of how to grow beyond meter- or ten-meter-sized objects could be solved. On the other hand, planets that have already started forming within a disk can launch spiral waves in the disk as they orbit their host stars. Distinguishing those two roles of spiral and other features – consequences of planet formation, or the cause of it? – will require a deeper understanding of such features, which in turn requires high-resolution images that show details of these structures.


The new observations with ALMA targeted the young star Elias 2-27, a member of a much larger star-forming region known as the ρ Ophiuchi star-forming complex. Elias 2-27 is estimated to have formed about a million years ago – a very short time compared with the roughly five-billion-year age of our Sun. The star was already known to have a circumstellar disk, but judging by previous observations (at resolutions showing details in the range of 0.6″–1.1″) this appeared to be a featureless, axisymmetric disk. The new ALMA observations, with a spatial resolution of 0.24″, were made in the millimeter-wave regime, at a wavelength of 1.3 millimeters. They trace the thermal emission from dust grains, which may make up between 1 and 10% (by mass) of protoplanetary disks. In this way, astronomers were able to trace a gigantic spiral pattern at distances between about 100 astronomical units (that is, 100 times the average distance between the Sun and the Earth) and 300 astronomical units from the central star.
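As a quick sanity check on the scales quoted in this article, here is a short Python sketch using only standard unit conversions (none of these numbers come from the new data themselves):

```python
# What linear scale does a 0.24-arcsecond beam correspond to at ~450 light-years,
# and how far is 10 billion kilometres in astronomical units?
import math

LY_KM = 9.4607e12            # kilometres per light-year
AU_KM = 1.495978707e8        # kilometres per astronomical unit
ARCSEC_RAD = math.pi / (180 * 3600)

distance_au = 450 * LY_KM / AU_KM              # distance to Elias 2-27 in au
beam_au = 0.24 * ARCSEC_RAD * distance_au      # small-angle approximation
print(f"0.24 arcsec at 450 ly ≈ {beam_au:.0f} au")       # a few tens of au
print(f"10 billion km         ≈ {1e10 / AU_KM:.0f} au")  # ~67 au
```

So the 0.24″ beam resolves structure a few tens of astronomical units across, fine enough to map spirals extending from roughly 100 to 300 astronomical units, and the 10-billion-kilometer arm extent quoted earlier does lie beyond the main Kuiper Belt region at roughly 30 to 50 astronomical units.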



Telescopes of the Atacama Large Millimetre Array (ALMA) directed at the night sky in the Chilean Atacama desert, 5100 m above sea level. © ESO/C. Malin.


“Finally, we see the disks around young stars in all their beauty and diversity – and now this includes seeing spiral structures. These observations will be a great help in understanding the formation of planets”, says Thomas Henning, Director at MPIA, and a co-author of the paper.


“Over the past two decades astronomers have found a great variety of exoplanets. To understand that diversity, we need to understand the early stages of planet formation. Detailed images such as those delivered by ALMA provide crucial information about the various mechanisms at work in protoplanetary disks”, adds Hendrik Linz, also from MPIA.




The interaction with a planet that has already formed and is now orbiting within the disk is one plausible explanation for these spiral features. ALMA detected a narrow band in the disk with significantly less dust, but such a small gap is not consistent with the large planet that would be needed to create the observed spiral arms. On the other hand, the disk’s own gravity causes instabilities which can trigger the formation of the spiral pattern. Taking into account estimates for the total mass of the disk, and the shape and symmetry of the spiral pattern, the authors consider this possibility likely as well.


“Similar observations with ALMA will become increasingly common, and more and more detailed images showing inhomogeneous structures in disk density will become available”, concludes Karl Menten, Director at MPIfR and also a co-author of the paper. “Astronomers should increasingly be able to investigate the properties of such features, and to eventually define their role in the planetary formation process.”


– Credit and Resource –


Provided by: National Radio Astronomy Observatory





Pinpoint laser heating creates magnetic nanotextures

Pinpoint laser heating creates a maelstrom of magnetic nanotextures


Scientia — A simulation study by researchers from the RIKEN Center for Emergent Matter Science has demonstrated the feasibility of using lasers to create and manipulate nanoscale magnetic vortices. The ability to create and control these ‘skyrmions’ could lead to the development of skyrmion-based information storage devices.



Figure 1: Schematic representation of skyrmion creation by local heating using a laser. Credit: Mari Ishida, RIKEN Center for Emergent Matter Science (lower part); modified with permission from ref. 1 © 2014 W. Koshibae & N. Nagaosa (insets)


The information we consume and work with is encoded in binary form (as ‘1’s or ‘0’s) by switching the characteristics of memory media between two states. As we approach the performance and capacity limits of conventional memory media, researchers are looking toward exotic physics to develop the next generation of magnetic memories.






One such exotic phenomenon is the skyrmion—a stable, nanoscale whirlpool-like magnetic feature characterized by a constantly rotating magnetic moment. Theoretically, the presence or absence of a skyrmion at any location in a magnetic medium could be used to represent the binary states needed for information storage. However, researchers have found it challenging to reliably create and annihilate skyrmions experimentally due to the difficulty in probing the mechanics of these processes in any detail. The challenge lies in the incredibly short timescale of these processes, which at just a tenth of a nanosecond is up to a billion times shorter than the timescale observable under the Lorentz microscope used to measure magnetic properties.


The study authors, Wataru Koshibae and Naoto Nagaosa, sought a solution to this problem by constructing a computational model that simulates the heating of a ferromagnetic material with pinpoint lasers (Fig. 1). This localized heating creates both skyrmions and ‘antiskyrmions’. The simulations, based on known physics for these systems, showed that the characteristics of skyrmions are heavily dependent on the intensity and spot size of the laser. Further, by manipulating these two parameters, it is possible to control skyrmion characteristics such as creation time and size.
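The paper's full spin-dynamics simulations are far beyond a short example, but the quantity that makes a skyrmion "topologically nontrivial", its integer topological charge (winding number), is easy to illustrate. The minimal Python sketch below assumes an idealized Néel-type skyrmion profile (a textbook ansatz, not the authors' simulated textures) and evaluates the discretized charge Q = (1/4π) Σ m · (∂x m × ∂y m):

```python
# Minimal sketch: build an idealized Néel skyrmion spin texture on a 2-D grid and
# evaluate its topological charge; a single skyrmion gives Q close to +1 or -1
# (sign depends on convention and core polarity), a trivial texture gives Q ≈ 0.
import numpy as np

def skyrmion_texture(n=128, radius=20.0, width=4.0):
    """Unit spins m(x, y) for a Néel skyrmion centred on an n x n grid."""
    x = np.arange(n) - n / 2
    X, Y = np.meshgrid(x, x, indexing="ij")
    r, phi = np.hypot(X, Y), np.arctan2(Y, X)
    # Polar angle goes from pi at the core to 0 far away (core spin down, background up).
    theta = 2.0 * np.arctan2(np.sinh(radius / width), np.sinh(r / width))
    return np.stack([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)], axis=-1)

def topological_charge(m):
    """Discretized Q = (1/4*pi) * sum of m . (dm/dx x dm/dy) over the grid."""
    dmx, dmy = np.gradient(m, axis=0), np.gradient(m, axis=1)
    density = np.einsum("ijk,ijk->ij", m, np.cross(dmx, dmy))
    return density.sum() / (4.0 * np.pi)

print(f"topological charge Q ≈ {topological_charge(skyrmion_texture()):+.3f}")
```

It is this integer winding, robust against smooth deformations of the texture, that makes the presence or absence of a skyrmion a natural stand-in for a binary ‘1’ or ‘0’.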


“Heat leads to random motion of magnetic spins,” explains Nagaosa. “We therefore found it surprising that local heating created a topologically nontrivial ordered object, let alone composite structures of skyrmions and antiskyrmions.” The issue of control is what differentiates these structures.


Nagaosa believes that, as skyrmions are quite stable, these nanoscale features could conceivably be used as information carriers if a reliable means of creating them at will can be achieved. Koshibae and Nagaosa’s work could therefore form the basis for the development of state-of-the-art memory devices. The work also provides valuable information on the creation of topological particles, which is crucial for advancing knowledge in many other areas of physics.






– Credit and Resource –


More information: Koshibae, W. & Nagaosa, N. “Creation of skyrmions and antiskyrmions by local heating.” Nature Communications 5, 5148 (2014). DOI: 10.1038/ncomms6148


Provided by RIKEN




Electric eel delivers a Shock as powerful as a Taser

Electric eel delivers Taser-like shocks


Scientia — The electric eel – the scaleless Amazonian fish that can deliver an electrical jolt strong enough to knock down a full-grown horse – possesses an electroshock system uncannily similar to a Taser.




That is the conclusion of a nine-month study of the way in which the electric eel uses high-voltage electrical discharges to locate and incapacitate its prey. The research was conducted by Vanderbilt University Stevenson Professor of Biological Sciences Kenneth Catania and is described in the article “The shocking predatory strike of the electric eel” published in the Dec. 5 issue of the journal Science.


People have known about electric fish for a long time. The ancient Egyptians used an electric marine ray to treat epilepsy. Michael Faraday used eels to investigate the nature of electricity and eel anatomy helped inspire Volta to create the first battery. Biologists have determined that a six-foot electric eel can generate about 600 volts of electricity – five times that of a U.S. electrical outlet. This summer scientists at the University of Wisconsin-Madison announced that they had sequenced the complete electric eel genome.


Until now, however, no one had figured out how the eel’s electroshock system actually worked. In order to do so, Catania equipped a large aquarium with a system that can detect the eel’s electric signals and obtained several eels, ranging up to four feet in length.


As he began observing the eels’ behavior, the biologist discovered that their movements are incredibly fast. They can strike and swallow a worm or small fish in about a tenth of a second. So Catania rigged up a high-speed video system that ran at a thousand frames per second so he could study the eel’s actions in slow motion.





Catania recorded three different kinds of electrical discharges from the eels: low-voltage pulses for sensing their environment; short sequences of two or three high-voltage millisecond pulses (called doublets or triplets) given off while hunting; and volleys of high-voltage, high-frequency pulses when capturing prey or defending themselves from attack.


He found that the eel begins its attack on free-swimming prey with a high-frequency volley of high-voltage pulses about 10 to 15 milliseconds before it strikes. In the high-speed video, it became apparent that the fish were completely immobilized within three to four milliseconds after the volley hit them. The paralysis was temporary: If the eel didn’t immediately capture a fish, it normally regained its mobility after a short period and swam away.


“It’s amazing. The eel can totally inactivate its prey in just three milliseconds. The fish are completely paralyzed,” said Catania.


These observations raised an obvious question: How do the eels do it? For that, there was no clear answer in the scientific literature.




“I have some friends in law enforcement, so I was familiar with how a Taser works,” said Catania. “And I was struck by the similarity between the eel’s volley and a Taser discharge. A Taser delivers 19 high-voltage pulses per second while the electric eel produces 400 pulses per second.”


The Taser works by overwhelming the nerves that control the muscles in the target’s body, causing the muscles to involuntarily contract. To determine if the eel’s electrical discharge had the same effect, Catania walled off part of the aquarium with an electrically permeable barrier. He placed a pithed fish on the other side of the barrier from the eel and then fed the eel some earthworms, which triggered its electrical volleys. The volleys that passed through the barrier and struck the fish produced strong muscle contractions.


To determine whether the discharges were acting on the prey’s motor neurons – the nerves that control the muscles – or on the muscles themselves, he placed two pithed fish behind the barrier: one injected with saline solution and the other injected with curare, a paralytic agent that targets the nervous system. The muscles of the fish with the saline continued to contract in response to the eel’s electrical discharges, but the muscle contractions in the fish given the curare disappeared as the drug took effect. This demonstrated that the eel’s electrical discharges were acting through the motor neurons, just like Taser discharges.


Next Catania turned his attention to the way in which the eel uses electrical signals for hunting. The eel is nocturnal and doesn’t have very good eyesight. So it needs other ways to detect hidden prey.





The biologist determined that the closely spaced doublets and triplets that the eel emits correspond to the electric signal that motor neurons send to muscles to produce an extremely rapid contraction.




“Normally, you or I or any other animal can’t cause all of the muscles in our body to contract at the same time. However, that is just what the eel can cause with this signal,” Catania said.


Putting together the fact that the eels are extremely sensitive to water movements with the fact that the whole-body muscle contraction causes the prey’s body to twitch, creating water movements that the eel can sense, Catania concluded that the eel is using these signals to locate hidden prey.


To test this hypothesis, Catania connected a pithed fish to a stimulator. He put the fish in a clear plastic bag to protect it from the eel’s emissions. He found that when he stimulated the fish to twitch right after the eel emitted one of its signals, the eel would attack. But when the fish failed to respond to its signal, the eel did not attack. The result supports the idea that the eel uses its electroshock system to force its prey to reveal their location.


“If you take a step back and think about it, what the eel can do is extremely remarkable,” said Catania. “It can use its electrical system to take remote control of its prey’s body. If a fish is hiding nearby, the eel can force it to twitch, giving away its location, and if the eel is ready to capture a fish, it can paralyze it so it can’t escape.”







Origami and kirigami applied to metamaterials

Topological origami and kirigami techniques applied experimentally to metamaterials




Scientia — A team of researchers with members from Universiteit Leiden in the Netherlands, Cornell University and the University of Massachusetts has, for the first time, developed metamaterials based on topological origami and kirigami techniques. In their paper published in Physical Review Letters, the team describes their techniques and the benefits of such materials.




Over the past several years, as researchers have looked for new ways to create metamaterials—artificial materials with well-defined, tunable properties—they have become increasingly interested in the Japanese arts of origami (paper folding) and kirigami (paper folding and cutting). Centuries of working with paper have led to constructs that exhibit remarkable properties (it has also been noted that these arts could yield metamaterials whose Poisson ratio, curvature and metastable states can be tuned using nothing but geometric criteria), which modern researchers would like to apply to new metamaterial development efforts. In this new endeavor, the researchers used their knowledge of origami to construct a metamaterial that is soft along one edge while remaining stiff along the other—two distinct topological phases in a structure made from just a single base material.


The work by the team was an extension of research done two years ago by a team at the University of Pennsylvania that came up with the idea of “topological mechanics”, which was itself based on topological states seen in quantum mechanics. That work led to the discovery that simple mechanical structures could be created that are polarized, meaning in this sense soft along one side and hard along the other.
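To give a flavor of what such a mechanical topological invariant looks like, here is a toy one-dimensional Python sketch in the spirit of that topological-mechanics idea (it is not the origami pattern built in the new paper). It assumes a chain with one displacement u_n per unit cell and one spring-like constraint e_n = a*u_n + b*u_{n+1}; the winding number of C(k) = a + b*exp(ik) around the origin then predicts which end of an open chain carries the soft, zero-energy motion:

```python
# Toy 1-D "topological mechanics" sketch (assumed model, not the paper's origami):
# one degree of freedom per cell, one constraint e_n = a*u_n + b*u_{n+1} per cell.
# The winding of C(k) = a + b*exp(ik) decides which edge hosts the floppy mode.
import numpy as np

def winding_number(a, b, nk=2048):
    k = np.linspace(0.0, 2.0 * np.pi, nk, endpoint=False)
    ck = a + b * np.exp(1j * k)
    dphase = np.diff(np.angle(np.concatenate([ck, ck[:1]])))
    dphase = (dphase + np.pi) % (2.0 * np.pi) - np.pi     # keep each step in (-pi, pi]
    return int(round(dphase.sum() / (2.0 * np.pi)))

def zero_mode(a, b, ncells=40):
    """Null vector of the open chain's constraint matrix (ncells dofs, ncells-1 constraints)."""
    C = np.zeros((ncells - 1, ncells))
    for i in range(ncells - 1):
        C[i, i], C[i, i + 1] = a, b
    _, _, vt = np.linalg.svd(C)        # one more dof than constraints -> one zero mode
    return np.abs(vt[-1])

for a, b in [(1.0, 0.5), (0.5, 1.0)]:
    u = zero_mode(a, b)
    side = "left" if u[0] > u[-1] else "right"
    print(f"a={a}, b={b}: winding = {winding_number(a, b)}, soft mode at the {side} edge")
```

Swapping which of a and b is larger flips the winding number and moves the floppy mode from one edge of the chain to the other, a one-dimensional caricature of a structure that is soft on one side and stiff on the other.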


The researchers report that their metamaterial was made by hooking plastic four-sided units together via hinges, resulting in a single long but thin rectangular structure. Squishing the structure from the ends caused it to buckle in a uniform way, leading to the formation of hills and valleys—the folds slowly transitioned from soft at one end to rigid at the other. The team notes that bigger versions of the material could be made, but only by applying kirigami techniques, i.e. cutting out certain sections. They suggest other materials could be created using similar techniques for mechanical or industrial applications.




– Credit and Resource –


More information: Bryan Gin-ge Chen et al., “Topological Mechanics of Origami and Kirigami,” Physical Review Letters (2016). DOI: 10.1103/PhysRevLett.116.135501. On arXiv: http://arxiv.org/abs/1508.00795


ABSTRACT: Origami and kirigami have emerged as potential tools for the design of mechanical metamaterials whose properties such as curvature, Poisson ratio, and existence of metastable states can be tuned using purely geometric criteria. A major obstacle to exploiting this property is the scarcity of tools to identify and program the flexibility of fold patterns. We exploit a recent connection between spring networks and quantum topological states to design origami with localized folding motions at boundaries and study them both experimentally and theoretically. These folding motions exist due to an underlying topological invariant rather than a local imbalance between constraints and degrees of freedom. We give a simple example of a quasi-1D folding pattern that realizes such topological states. We also demonstrate how to generalize these topological design principles to two dimensions. A striking consequence is that a domain wall between two topologically distinct, mechanically rigid structures is deformable even when constraints locally match the degrees of freedom.


Journal reference: Physical Review Letters





Star Escapes Black Hole with Minor Damage

Star Escapes Black Hole with Minor Damage


Scientia — Astronomers have gotten the closest look yet at what happens when a black hole takes a bite out of a star—and the star lives to tell the tale.






We may think of black holes as swallowing entire stars—or any other object that wanders too close to their immense gravity. But sometimes, a star that is almost captured by a black hole escapes with only a portion of its mass torn off. Such was the case for a star some 650 million light years away toward Ursa Major, the constellation that contains the “Big Dipper,” where a supermassive black hole tore off a chunk of material from a star that got away.


Astronomers at The Ohio State University couldn’t see the star itself with their All-Sky Automated Survey for Supernovae (ASAS-SN, pronounced “assassin”). But they did see the light that flared as the black hole “ate” the material that it managed to capture.







In a paper to appear in the Monthly Notices of the Royal Astronomical Society, they report that the star and the black hole are located in a galaxy outside of the newly dubbed Laniakea Supercluster, of which our home Milky Way Galaxy is a part.


If Laniakea is our galactic “city,” this event—called a “tidal disruption event,” or TDE—happened in our larger metropolitan area. Still, it’s the closest TDE ever spotted, and it gives astronomers the best chance yet of learning more about how supermassive black holes form and grow.




ASAS-SN has so far spotted more than 60 bright and nearby supernovae; one of the program’s other goals is to try to determine how often TDEs happen in the nearby universe. But study co-author Krzysztof Stanek, professor of astronomy at Ohio State, and his collaborators were surprised to find one in January 2014, just a few months after ASAS-SN’s four telescopes in Hawaii began gathering data.


To Stanek, the fact that the survey made such a rare find so quickly suggests that TDEs may be more common than astronomers realized.








Thursday, 29 September 2016

Pinpointing Brain Circuit That can Keep Fear at Bay

Pinpointing a brain circuit that can keep fear at bay – Study suggests path to prolonging treatment effectiveness for phobias or post-traumatic stress disorder.




Scientia — People who are too frightened of flying to board an airplane, or fear spiders so much they cannot venture into the basement, can seek a kind of treatment called exposure therapy. In a safe environment, they repeatedly face cues such as photos of planes or black widows, as a way to stamp out their fearful response — a process known as extinction.



MIT scientists have identified a way to enhance the long-term benefit of exposure therapy in rats, offering a way to improve the therapy in people suffering from phobias and more complicated conditions such as post-traumatic stress disorder (PTSD).
Image: Christine Daniloff/MIT




Unfortunately, the effects of exposure therapy are not permanent, and many people experience a relapse. MIT scientists have now identified a way to enhance the long-term benefit of extinction in rats, offering a way to improve the therapy in people suffering from phobias and more complicated conditions such as post-traumatic stress disorder (PTSD).


Work conducted in the laboratory of Ki Goosens, a research affiliate of the McGovern Institute for Brain Research, has pinpointed a neural circuit that becomes active during exposure therapy in the rats. In a study published Sept. 27 in eLife, the researchers showed that they could stretch the therapy’s benefits for at least two months by boosting the circuit’s activity during treatment.


“When you give extinction training to humans or rats, and you wait long enough, you observe a phenomenon called spontaneous recovery, in which the fear that was originally learned comes back,” Goosens explains. “It’s one of the barriers to this type of therapy. You spend all this time going through it, but then it’s not a permanent fix for your problem.”


According to statistics from the National Institute of Mental Health, 18 percent of U.S. adults are diagnosed with a fear or anxiety disorder each year, with 22 percent of those patients experiencing severe symptoms.


How to quench a fear

The neural circuit identified by the scientists connects a part of the brain involved in fear memory, called the basolateral amygdala (BLA), with another region called the nucleus accumbens (NAc), which helps the brain process rewarding events. Goosens and her colleagues call it the BLA-NAc circuit.


Researchers have been considering a link between fear and reward for some time, Goosens says. “The amygdala is a part of the brain that is tightly linked with fear memory but it’s also been linked to positive reward learning as well, and the accumbens is a key reward area in the brain,” she explains. “What we’ve been thinking about is whether extinction is rewarding. When you’re expecting something bad and you don’t get it, does your brain treat that like it’s a good thing?”


To find out if there was a specific brain circuit involved, the researchers first trained rats to fear a certain noise by pairing it with foot shock. They later gave the rats extinction training, during which the noise was presented in the absence of foot shock, and they looked at markers of neural activity in the brain. The results revealed the BLA-NAc reward circuit was recruited by the brain during exposure therapy, as the rats gave up their fear of the bad noise.


Once Goosens and her colleagues had identified the circuit, they looked for ways to boost its activity. First, they paired a sugary drink with the fear-related sound during extinction training, hoping to associate the sound with a reward. This type of training, called counterconditioning, associates fear-eliciting cues with rewarding events or memories, instead of with neutral events as in most extinction training.


Rats that received the counterconditioning were significantly less likely than those given regular extinction training to spontaneously revert to their fearful states up to 55 days later, the scientists found.


They also found that the benefits of extinction could be prolonged with optogenetic stimulation, in which the circuit was genetically modified so that it could be stimulated directly with tiny bursts of light from an optical fiber.




The ongoing benefit that came from stimulating the circuit was one of the most surprising — and welcome — findings from the study, Goosens says. “The effect that we saw was one that really emerged months later, and we want to know what’s happening over those two months. What is the circuit doing to suppress the recovery of fear over that period of time? We still don’t understand what that is.”


Another interesting finding from the study was that the circuit was active during both fear learning and fear extinction, says lead author Susana Correia, a research scientist in the Goosens lab. “Understanding if these are molecularly different subcircuits within this projection could allow the development of a pharmaceutical approach to target the fear extinction pathway and to improve cognitive therapy,” Correia says.


Immediate and future impacts on therapy

Some therapists are already using counterconditioning in treating PTSD, and Goosens suggests that the rat study might encourage further exploration of this technique in human therapy.


And while it isn’t likely that humans will receive direct optogenetic therapy any time soon, Goosens says there is a benefit to knowing exactly which circuits are involved in extinction.


In neurofeedback studies, for instance, brain scan technologies such as fMRI or EEG could be used to help a patient learn to activate specific parts of their brain, including the BLA-NAc reward circuit, during exposure therapy.


Studies like this one, Goosens says, offer a “target for a personalized medicine approach where feedback is used during therapy to enhance the effectiveness of that therapy.”


“This study provides beautiful basic science support for the approach of counterconditioning, and it is my hope that this clinical approach might be tested on a larger scale in PTSD treatment,” says Mireya Nadal-Vicens, a researcher at Harvard Medical School and Massachusetts General Hospital, who was not involved with the study. “In addition to perhaps being more effective clinically, it might also be a more tolerable, less initially distressing treatment due to the reward conditioning component.”


Other MIT authors on the paper include technical assistant Anna McGrath, undergraduate Allison Lee, and McGovern principal investigator and Institute Professor Ann Graybiel.


The study was funded by the U.S. Army Research Office, the Defense Advanced Research Projects Agency (DARPA), and the National Institute of Mental Health.




– Credit and Resource –





Visible light based Imaging for Medical Devices

Algorithm could enable visible-light-based imaging for medical devices, autonomous vehicles – System accounts for the deflection of light particles passing through animal tissue or fog.


Scientia — MIT researchers have developed a technique for recovering visual information from light that has scattered because of interactions with the environment — such as passing through human tissue.



In the experiments, a laser beam was fired through a “mask” (a thick sheet of plastic with slits cut through it) and then through a 1.5-centimeter “tissue phantom”; the scattered light was collected by a high-speed camera that measures its time of arrival. The setup is described in full below.
Image courtesy of the researchers.




The technique could lead to medical-imaging systems that use visible light, which carries much more information than X-rays or ultrasound waves, or to computer vision systems that work in fog or drizzle. The development of such vision systems has been a major obstacle to self-driving cars.


In experiments, the researchers fired a laser beam through a “mask” — a thick sheet of plastic with slits cut through it in a certain configuration, such as the letter A — and then through a 1.5-centimeter “tissue phantom,” a slab of material designed to mimic the optical properties of human tissue for purposes of calibrating imaging systems. Light scattered by the tissue phantom was then collected by a high-speed camera, which could measure the light’s time of arrival.


From that information, the researchers’ algorithms were able to reconstruct an accurate image of the pattern cut into the mask.



An imaging algorithm from the MIT Media Lab’s Camera Culture group compensates for the scattering of light. The advance could potentially be used to develop optical-wavelength medical imaging and autonomous vehicles.

Video: Camera Culture Group



“The reason our eyes are sensitive only in this narrow part of the spectrum is because this is where light and matter interact most,” says Guy Satat, a graduate student at the MIT Media Lab and first author on the new paper. “This is why X-ray is able to go inside the body, because there is very little interaction. That’s why it can’t distinguish between different types of tissue, or see bleeding, or see oxygenated or deoxygenated blood.”


The imaging technique’s potential applications in automotive sensing may be even more compelling than those in medical imaging, however. Many experimental algorithms for guiding autonomous vehicles are highly reliable under good illumination, but they fall apart completely in fog or drizzle; computer vision systems misinterpret the scattered light as having reflected off of objects that don’t exist. The new technique could address that problem.

Satat’s coauthors on the new paper, published today in Scientific Reports, are three other members of the Media Lab’s Camera Culture group: Ramesh Raskar, the group’s leader, Satat’s thesis advisor, and an associate professor of media arts and sciences; Barmak Heshmat, a research scientist; and Dan Raviv, a postdoc.



An illustration shows the researchers’ experimental setup. The data captured by the camera can be thought of as a movie — a two-dimensional image that changes over time.
Illustration courtesy of the researchers.




Expanding circles

Like many of the Camera Culture group’s projects, the new system relies on a pulsed laser that emits ultrashort bursts of light, and a high-speed camera that can distinguish the arrival times of different groups of photons, or light particles. When a light burst reaches a scattering medium, such as a tissue phantom, some photons pass through unmolested; some are only slightly deflected from a straight path; and some bounce around inside the medium for a comparatively long time. The first photons to arrive at the sensor have thus undergone the least scattering; the last to arrive have undergone the most.


Where previous techniques have attempted to reconstruct images using only those first, unscattered photons, the MIT researchers’ technique uses the entire optical signal. Hence its name: all-photons imaging.


The data captured by the camera can be thought of as a movie — a two-dimensional image that changes over time. To get a sense of how all-photons imaging works, suppose that light arrives at the camera from only one point in the visual field. The first photons to reach the camera pass through the scattering medium unimpeded: They show up as just a single illuminated pixel in the first frame of the movie.


The next photons to arrive have undergone slightly more scattering, so in the second frame of the video, they show up as a small circle centered on the single pixel from the first frame. With each successive frame, the circle expands in diameter, until the final frame just shows a general, hazy light.


The problem, of course, is that in practice the camera is registering light from many points in the visual field, whose expanding circles overlap. The job of the researchers’ algorithm is to sort out which photons illuminating which pixels of the image originated where.


Cascading probabilities

The first step is to determine how the overall intensity of the image changes in time. This provides an estimate of how much scattering the light has undergone: If the intensity spikes quickly and tails off quickly, the light hasn’t been scattered much. If the intensity increases slowly and tails off slowly, it has.


On the basis of that estimate, the algorithm considers each pixel of each successive frame and calculates the probability that it corresponds to any given point in the visual field. Then it goes back to the first frame of video and, using the probabilistic model it has just constructed, predicts what the next frame of video will look like. With each successive frame, it compares its prediction to the actual camera measurement and adjusts its model accordingly. Finally, using the final version of the model, it deduces the pattern of light most likely to have produced the sequence of measurements the camera made.
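The authors' reconstruction works probabilistically on full two-dimensional, time-resolved data; purely as an intuition pump, here is a toy one-dimensional Python sketch of the same underlying idea, namely using every frame and modelling later-arriving photons as more heavily blurred copies of the hidden pattern. The Richardson-Lucy-style multiplicative update, the Gaussian blur model and all of the numbers are assumptions of this sketch, not the paper's method:

```python
# Toy 1-D stand-in for "all-photons imaging" (assumed model, NOT the authors' algorithm):
# each time frame sees the hidden pattern through a blur that widens with arrival time,
# and a Richardson-Lucy-style multiplicative update uses every frame jointly.
import numpy as np

def blur_matrix(n, sigma):
    """Column j spreads a source at pixel j over camera pixels (columns sum to 1)."""
    idx = np.arange(n)
    K = np.exp(-0.5 * ((idx[:, None] - idx[None, :]) / sigma) ** 2)
    return K / K.sum(axis=0, keepdims=True)

rng = np.random.default_rng(0)
n, n_frames = 64, 12
truth = np.zeros(n)
truth[[18, 19, 20, 40, 41]] = 1.0                          # hidden "mask" pattern

sigmas = 3.0 + 1.5 * np.arange(n_frames)                   # assumed growth of scattering
kernels = [blur_matrix(n, s) for s in sigmas]
frames = [np.clip(K @ truth + 0.01 * rng.standard_normal(n), 1e-6, None)
          for K in kernels]                                 # blurred, noisy measurements

est = np.ones(n)                                            # flat initial guess
for _ in range(300):
    ratio = sum(K.T @ (y / np.clip(K @ est, 1e-6, None))
                for K, y in zip(kernels, frames))
    est *= ratio / n_frames                                 # joint multiplicative update

err_raw = np.linalg.norm(frames[0] / frames[0].max() - truth)
err_est = np.linalg.norm(est / est.max() - truth)
print(f"error of raw earliest frame:  {err_raw:.2f}")
print(f"error of all-frames estimate: {err_est:.2f}")       # typically noticeably smaller
```

In this toy setting the jointly deconvolved estimate should recover the hidden slit pattern considerably better than the raw earliest frame; the point is simply that the late, heavily scattered frames still carry usable information rather than being thrown away.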


One limitation of the current version of the system is that the light emitter and the camera are on opposite sides of the scattering medium. That limits its applicability for medical imaging, although Satat believes that it should be possible to use fluorescent particles known as fluorophores, which can be injected into the bloodstream and are already used in medical imaging, as a light source. And fog scatters light much less than human tissue does, so reflected light from laser pulses fired into the environment could be good enough for automotive sensing.


“People have been using what is known as time gating, the idea that photons not only have intensity but also time-of-arrival information and that if you gate for a particular time of arrival you get photons with certain specific path lengths and therefore [come] from a certain specific depth in the object,” says Ashok Veeraraghavan, an assistant professor of electrical and computer engineering at Rice University. “This paper is taking that concept one level further and saying that even the photons that arrive at slightly different times contribute some spatial information.”


“Looking through scattering media is a problem that’s of large consequence,” he adds. But he cautions that the new paper does not entirely solve it. “There’s maybe one barrier that’s been crossed, but there are maybe three more barriers that need to be crossed before this becomes practical,” he says.




– Credit and Resource –


Larry Hardesty | MIT News Office



