Friday 30 September 2016

Protoplanetary Disk Around Young Star Shows Spiral

Protoplanetary Disk Around a Young Star Exhibits Spiral Structure


Scientia — Astronomers have discovered distinct spiral arms in the disk of gas and dust surrounding the young star Elias 2-27. While spiral features have been observed on the surfaces of protoplanetary disks, these new observations, from the ALMA observatory in Chile, are the first to reveal that such spirals also occur at the disk midplane, the region where planet formation takes place. This is of importance for planet formation: structures such as these could either indicate the presence of a newly formed planet, or else create the conditions necessary for a planet to form. As such, these results are a crucial step towards a better understanding of how planetary systems like our Solar System came into being.




An international team led by Laura Pérez, an Alexander von Humboldt Research Fellow from the Max Planck Institute for Radio Astronomy (MPIfR) in Bonn, also including researchers from the Max Planck Institute for Astronomy (MPIA) in Heidelberg, has obtained the first image of a spiral structure seen in thermal dust emission coming from a protoplanetary disk, the potential birthplace of a new Solar System. Such structures are thought to play a key role in allowing planets to form around young stars. The researchers used the international observatory ALMA to image the disk around the young star Elias 2-27, in the constellation Ophiuchus, at a distance of about 450 light-years from Earth.


Planets are formed in disks of gas and dust around newborn stars. But while this general concept has a long history, astronomers have only recently gained the ability to observe such disks directly. One early example is the discovery of disk silhouettes in front of extended emission in the Orion Nebula by the Hubble Space Telescope in the 1990s, the so-called proplyds. The ability to observe not only each disk as a whole, but also its sub-structure, is more recent still. Gaps in protoplanetary disks, in the form of concentric rings, were first observed with ALMA in 2014.


These new observations are of particular interest to anyone interested in the formation of planets. Without structures like these, planets might not have been able to form in the first place! The reason is as follows: in a smooth disk, planets can only grow step by step. Dust particles within the gas of the disk occasionally collide and clump together, and through successive collisions, ever larger particles, grains, and eventually solid bodies form. But as soon as such bodies reach a size of about one meter, drag from the surrounding gas of the disk makes them migrate inwards, towards the star, on a time scale of 1,000 years or shorter. The time needed for such bodies to collect sufficient mass through successive collisions, eventually reaching a size at which gas drag becomes a negligible influence, is much longer than that.
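To put rough numbers on this barrier, here is a back-of-the-envelope sketch in Python. The drift speed of roughly 50 m/s for meter-sized bodies at 1 au is a standard textbook figure, not a number from the study:

```python
# Toy estimate of the "meter-size barrier" drift timescale.
# Assumed (not from the article): a peak radial drift speed of ~50 m/s
# for meter-sized bodies at 1 au, a standard textbook value.
AU_M = 1.496e11          # astronomical unit in meters
DRIFT_SPEED = 50.0       # assumed inward drift speed, m/s

drift_time_s = AU_M / DRIFT_SPEED
drift_time_yr = drift_time_s / (365.25 * 24 * 3600)
print(f"Time to drift 1 au inwards: ~{drift_time_yr:.0f} years")
# ~95 years -- comfortably within the '1,000 years or shorter' quoted above.
```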



Infrared image of the Rho Ophiuchi star formation region at a distance of 450 light years (left). The image on the right shows thermal dust emission from the protoplanetary disk surrounding the young star Elias 2-27. Credit: © NASA/Spitzer/JPL-Caltech/WISE-Team (left image), B. Saxton (NRAO/AUI/NSF); ALMA (ESO/NAOJ/NRAO), L. Pérez (MPIfR) (right image).





So how can bodies larger than about a meter form at all? Without a good explanation, we could not understand how planetary systems, including our own Solar System, came into being.


There are several possible mechanisms that would allow primordial rocks to grow larger more quickly, until they finally reach a size where mutual gravitational attraction forms them into full-size planets. “The observed spirals in Elias 2-27 are the first direct evidence for the shocks of spiral density waves in a protoplanetary disk”, says Laura M. Pérez from MPIfR, the leading author of the paper. “They show that density instabilities are possible within the disk, which can eventually lead to strong disk inhomogeneities and further planet formation.” Such instabilities are not confined to the scale of planet formation. In fact, the best-known example is density waves in disk galaxies, which create the spectacular spiral arms of spiral galaxies.


The two sweeping spiral arms in Elias 2-27 extend more than 10 billion kilometers away from the newborn star, farther out than the Kuiper Belt in our Solar System. “The presence of spiral density waves at these extreme distances may help explain puzzling observations of extrasolar planets at similar far-away locations”, Pérez notes. “Such planets cannot form in situ under our standard picture of planet formation.” But in regions of increased density, planet formation could proceed much faster, both due to those regions’ gravity and due to the more confined space, which would make collisions of grains or rocks more probable. In this way, the problem of how to grow beyond meter- or ten-meter-sized objects could be solved. On the other hand, planets that have already started forming within a disk can launch spiral waves in the disk as they orbit their host stars. Distinguishing those two roles of spiral and other features – consequences of planet formation, or the cause of it? – will require a deeper understanding of such features, which in turn requires high-resolution images that show details of these structures.


The new observations with ALMA targeted the young star Elias 2-27, a member of a much larger star-forming region known as the ρ Ophiuchi star-forming complex. Elias 2-27 is estimated to have formed about a million years ago – a very short time compared to the roughly five-billion-year age of our Sun. The star was already known to have a circumstellar disk, but judging by previous observations (at resolutions showing details in the range of 0.6″-1.1″) this appeared to be a featureless, axisymmetric disk. The new ALMA observations, with a spatial resolution of 0.24″, were made in the millimeter-wave regime, at a wavelength of 1.3 millimeters. They trace the thermal emission from dust grains, which may make up between 1 and 10% (by mass) of protoplanetary disks. In this way, astronomers were able to trace a gigantic spiral pattern at distances between about 100 astronomical units (that is, 100 times the average distance of the Sun from the Earth) and 300 astronomical units away from the central star.
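As a quick back-of-the-envelope check (our own arithmetic, not a calculation from the paper), the small-angle relation converts that 0.24″ resolution into a physical scale at the distance of Elias 2-27:

```python
# Convert ALMA's angular resolution into a physical scale at Elias 2-27.
# Small-angle relation: size [au] = resolution [arcsec] * distance [pc].
LY_PER_PC = 3.2616           # light-years per parsec

distance_ly = 450.0          # distance quoted in the article
resolution_arcsec = 0.24     # ALMA resolution quoted in the article

distance_pc = distance_ly / LY_PER_PC
size_au = resolution_arcsec * distance_pc
print(f'{resolution_arcsec}" at {distance_pc:.0f} pc resolves ~{size_au:.0f} au')
# ~33 au -- fine enough to trace spiral arms spanning 100-300 au.
```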



Telescopes of the Atacama Large Millimetre Array (ALMA) directed at the night sky in the Chilean Atacama desert, 5100 m above sea level. © ESO/C. Malin.


“Finally, we see the disks around young stars in all their beauty and diversity – and now this includes seeing spiral structures. These observations will be a great help in understanding the formation of planets”, says Thomas Henning, Director at MPIA, and a co-author of the paper.


“Over the past two decades astronomers have found a great variety of exoplanets. To understand that diversity, we need to understand the early stages of planet formation. Detailed images such as those delivered by ALMA provide crucial information about the various mechanisms at work in protoplanetary disks”, adds Hendrik Linz, also from MPIA.




The interaction with a planet that has already formed and is now orbiting within the disk is one plausible explanation for these spiral features. ALMA detected a narrow band in the disk with significantly less dust, but such a small gap is not consistent with the large planet needed to create the observed spiral arms. On the other hand, the disk’s own gravity can cause instabilities that trigger the formation of the spiral pattern. Taking into account estimates for the total mass of the disk, and the shape and symmetry of the spiral pattern, the authors consider this possibility likely as well.


“Similar observations with ALMA will become increasingly common, and more and more detailed images showing inhomogeneous structures in disk density will become available”, concludes Karl Menten, Director at MPIfR and also a co-author of the paper. “Astronomers should increasingly be able to investigate the properties of such features, and to eventually define their role in the planetary formation process.”


– Credit and Resource –


Provided by: National Radio Astronomy Observatory





Pinpoint laser heating creates magnetic nanotextures

Pinpoint laser heating creates a maelstrom of magnetic nanotextures


Scientia — A simulation study by researchers from the RIKEN Center for Emergent Matter Science has demonstrated the feasibility of using lasers to create and manipulate nanoscale magnetic vortices. The ability to create and control these ‘skyrmions’ could lead to the development of skyrmion-based information storage devices.



Figure 1: Schematic representation of skyrmion creation by local heating using a laser. Credit: Mari Ishida, RIKEN Center for Emergent Matter Science (lower part); modified with permission from ref. 1 © 2014 W. Koshibae & N. Nagaosa (insets)


The information we consume and work with is encoded in binary form (as ‘1’s or ‘0’s) by switching the characteristics of memory media between two states. As we approach the performance and capacity limits of conventional memory media, researchers are looking toward exotic physics to develop the next generation of magnetic memories.






One such exotic phenomenon is the skyrmion—a stable, nanoscale whirlpool-like magnetic feature characterized by a constantly rotating magnetic moment. Theoretically, the presence or absence of a skyrmion at any location in a magnetic medium could be used to represent the binary states needed for information storage. However, researchers have found it challenging to reliably create and annihilate skyrmions experimentally due to the difficulty in probing the mechanics of these processes in any detail. The challenge lies in the incredibly short timescale of these processes, which at just a tenth of a nanosecond is up to a billion times shorter than the timescale observable under the Lorentz microscope used to measure magnetic properties.


The study authors, Wataru Koshibae and Naoto Nagaosa, sought a solution to this problem by constructing a computational model that simulates the heating of a ferromagnetic material with pinpoint lasers (Fig. 1). This localized heating creates both skyrmions and ‘antiskyrmions’. The simulations, based on known physics for these systems, showed that the characteristics of skyrmions are heavily dependent on the intensity and spot size of the laser. Further, by manipulating these two parameters, it is possible to control skyrmion characteristics such as creation time and size.


“Heat leads to random motion of magnetic spins,” explains Nagaosa. “We therefore found it surprising that local heating created a topologically nontrivial ordered object, let alone composite structures of skyrmions and antiskyrmions.” The degree of control achievable is what differentiates these structures.


Nagaosa believes that as skyrmions are quite stable, these nanoscale features could conceivably be used as an information carrier if a reliable means of creating them at will can be achieved. Koshibae and Nagaosa’s work could therefore form the basis of the development of state-of-the-art memory devices. The work also provides valuable information on the creation of topological particles, which is crucial for advancing knowledge in many other areas of physics.






– Credit and Resource –


More information: Koshibae, W. & Nagaosa, N. “Creation of skyrmions and antiskyrmions by local heating.” Nature Communications 5, 5148 (2014). DOI: 10.1038/ncomms6148


Provided by RIKEN




Electric eel delivers a Shock as powerful as a Taser

Electric eel delivers Taser-like shocks


Scientia — The electric eel – the scaleless Amazonian fish that can deliver an electrical jolt strong enough to knock down a full-grown horse – possesses an electroshock system uncannily similar to a Taser.




That is the conclusion of a nine-month study of the way in which the electric eel uses high-voltage electrical discharges to locate and incapacitate its prey. The research was conducted by Vanderbilt University Stevenson Professor of Biological Sciences Kenneth Catania and is described in the article “The shocking predatory strike of the electric eel” published in the Dec. 5 issue of the journal Science.


People have known about electric fish for a long time. The ancient Egyptians used an electric marine ray to treat epilepsy. Michael Faraday used eels to investigate the nature of electricity and eel anatomy helped inspire Volta to create the first battery. Biologists have determined that a six-foot electric eel can generate about 600 volts of electricity – five times that of a U.S. electrical outlet. This summer scientists at the University of Wisconsin-Madison announced that they had sequenced the complete electric eel genome.


Until now, however, no one had figured out how the eel’s electroshock system actually worked. In order to do so, Catania equipped a large aquarium with a system that can detect the eel’s electric signals and obtained several eels, ranging up to four feet in length.


As he began observing the eels’ behavior, the biologist discovered that their movements are incredibly fast. They can strike and swallow a worm or small fish in about a tenth of a second. So Catania rigged up a high-speed video system that ran at a thousand frames per second so he could study the eel’s actions in slow motion.





Catania recorded three different kinds of electrical discharges from the eels: low-voltage pulses for sensing their environment; short sequences of two or three high-voltage millisecond pulses (called doublets or triplets) given off while hunting; and volleys of high-voltage, high-frequency pulses when capturing prey or defending themselves from attack.


He found that the eel begins its attack on free-swimming prey with a high-frequency volley of high-voltage pulses about 10 to 15 milliseconds before it strikes. In the high-speed video, it became apparent that the fish were completely immobilized within three to four milliseconds after the volley hit them. The paralysis was temporary: If the eel didn’t immediately capture a fish, it normally regained its mobility after a short period and swam away.


“It’s amazing. The eel can totally inactivate its prey in just three milliseconds. The fish are completely paralyzed,” said Catania.


These observations raised an obvious question: How do the eels do it? For that, there was no clear answer in the scientific literature.




“I have some friends in law enforcement, so I was familiar with how a Taser works,” said Catania. “And I was struck by the similarity between the eel’s volley and a Taser discharge. A Taser delivers 19 high-voltage pulses per second while the electric eel produces 400 pulses per second.”


The Taser works by overwhelming the nerves that control the muscles in the target’s body, causing the muscles to involuntarily contract. To determine if the eel’s electrical discharge had the same effect, Catania walled off part of the aquarium with an electrically permeable barrier. He placed a pithed fish on the other side of the barrier from the eel and then fed the eel some earthworms, which triggered its electrical volleys. The volleys that passed through the barrier and struck the fish produced strong muscle contractions.


To determine whether the discharges were acting on the prey’s motor neurons – the nerves that control the muscles – or on the muscles themselves, he placed two pithed fish behind the barrier: one injected with saline solution and the other injected with curare, a paralytic agent that targets the nervous system. The muscles of the fish with the saline continued to contract in response to the eel’s electrical discharges, but the muscle contractions in the fish given the curare disappeared as the drug took effect. This demonstrated that the eel’s electrical discharges were acting through the motor neurons, just like Taser discharges.


Next Catania turned his attention to the way in which the eel uses electrical signals for hunting. The eel is nocturnal and doesn’t have very good eyesight. So it needs other ways to detect hidden prey.





The biologist determined that the closely spaced doublets and triplets that the eel emits correspond to the electric signal that motor neurons send to muscles to produce an extremely rapid contraction.




“Normally, you or I or any other animal can’t cause all of the muscles in our body to contract at the same time. However, that is just what the eel can cause with this signal,” Catania said.


Putting together the fact that the eels are extremely sensitive to water movements with the fact that the whole-body muscle contraction causes the prey’s body to twitch, creating water movements that the eel can sense, Catania concluded that the eel is using these signals to locate hidden prey.


To test this hypothesis, Catania connected a pithed fish to a stimulator. He put the fish in a clear plastic bag to protect it from the eel’s emissions. He found that when he stimulated the fish to twitch right after the eel emitted one of its signals, the eel would attack. But when the fish failed to respond to its signal, the eel did not attack. The result supports the idea that the eel uses its electroshock system to force its prey to reveal their location.


“If you take a step back and think about it, what the eel can do is extremely remarkable,” said Catania. “It can use its electrical system to take remote control of its prey’s body. If a fish is hiding nearby, the eel can force it to twitch, giving away its location, and if the eel is ready to capture a fish, it can paralyze it so it can’t escape.”







Origami and kirigami applied to metamaterials

Topological origami and kirigami techniques applied experimentally to metamaterials




Scientia — A team of researchers with members from Universiteit Leiden in the Netherlands, Cornell University and the University of Massachusetts has developed for the first time metamaterials that are based on topological origami and kirigami techniques. In their paper published in Physical Review Letters, the team describes their techniques and the benefits of such materials.




Over the past several years, as researchers have looked for new ways to create metamaterials—artificial materials with well-defined, tunable properties—they have become increasingly interested in the Japanese arts of origami (paper folding) and kirigami (paper folding and cutting). Thousands of years of working with paper have led to constructs that exhibit remarkable properties; it has also been noted that these ancient arts could lead to metamaterials whose Poisson ratio, curvature and metastable states can be tuned using nothing but geometric criteria. In this new endeavor, the researchers used their knowledge of origami to construct a metamaterial that is soft along one edge while remaining stiff on the other—two distinct topological phases in a structure made from just a single base material.


The work by the team was an extension of research done two years ago by a team at the University of Pennsylvania that came up with the idea of “topological mechanics”, which was itself based on topological states seen in quantum mechanics. That led to the discovery that simple mechanical structures could be created that were polarized, which, in this sense, means soft along one side and hard along the other.


The researchers report that their metamaterial was made by hooking plastic four-sided units together via hinges, which resulted in a single long but thin rectangular structure. Squishing the structure from the ends caused it to buckle in a uniform way, leading to the formation of hills and valleys—the folds slowly transitioned from soft at one end to rigid at the other. The team notes that bigger versions of the material could be made, but only by applying kirigami techniques, i.e. cutting out certain sections. They suggest other materials could be created using similar techniques for mechanical or industrial applications.




– Credit and Resource –


More information: Bryan Gin-ge Chen et al. “Topological Mechanics of Origami and Kirigami,” Physical Review Letters (2016). DOI: 10.1103/PhysRevLett.116.135501. On arXiv: http://arxiv.org/abs/1508.00795


ABSTRACT: Origami and kirigami have emerged as potential tools for the design of mechanical metamaterials whose properties such as curvature, Poisson ratio, and existence of metastable states can be tuned using purely geometric criteria. A major obstacle to exploiting this property is the scarcity of tools to identify and program the flexibility of fold patterns. We exploit a recent connection between spring networks and quantum topological states to design origami with localized folding motions at boundaries and study them both experimentally and theoretically. These folding motions exist due to an underlying topological invariant rather than a local imbalance between constraints and degrees of freedom. We give a simple example of a quasi-1D folding pattern that realizes such topological states. We also demonstrate how to generalize these topological design principles to two dimensions. A striking consequence is that a domain wall between two topologically distinct, mechanically rigid structures is deformable even when constraints locally match the degrees of freedom.


Journal reference: Physical Review Letters





Star Escapes Black Hole with Minor Damage



Scientia — Astronomers have gotten the closest look yet at what happens when a black hole takes a bite out of a star—and the star lives to tell the tale.






We may think of black holes as swallowing entire stars—or any other object that wanders too close to their immense gravity. But sometimes, a star that is almost captured by a black hole escapes with only a portion of its mass torn off. Such was the case for a star some 650 million light years away toward Ursa Major, the constellation that contains the “Big Dipper,” where a supermassive black hole tore off a chunk of material from a star that got away.


Astronomers at The Ohio State University couldn’t see the star itself with their All-Sky Automated Survey for Supernovae (ASAS-SN, pronounced “assassin”). But they did see the light that flared as the black hole “ate” the material that it managed to capture.







In a paper to appear in the Monthly Notices of the Royal Astronomical Society, they report that the star and the black hole are located in a galaxy outside of the newly dubbed Laniakea Supercluster, of which our home Milky Way Galaxy is a part.


If Laniakea is our galactic “city,” this event—called a “tidal disruption event,” or TDE—happened in our larger metropolitan area. Still, it’s the closest TDE ever spotted, and it gives astronomers the best chance yet of learning more about how supermassive black holes form and grow.




ASAS-SN has so far spotted more than 60 bright and nearby supernovae; one of the program’s other goals is to try to determine how often TDEs happen in the nearby universe. But study co-author Krzysztof Stanek, professor of astronomy at Ohio State, and his collaborators were surprised to find one in January 2014, just a few months after ASAS-SN’s four telescopes in Hawaii began gathering data.


To Stanek, the fact that the survey made such a rare find so quickly suggests that TDEs may be more common than astronomers realized.








Thursday 29 September 2016

Pinpointing Brain Circuit That can Keep Fear at Bay

Pinpointing a brain circuit that can keep fear at bay – Study suggests path to prolonging treatment effectiveness for phobias or post-traumatic stress disorder.




Scientia — People who are too frightened of flying to board an airplane, or fear spiders so much they cannot venture into the basement, can seek a kind of treatment called exposure therapy. In a safe environment, they repeatedly face cues such as photos of planes or black widows, as a way to stamp out their fearful response — a process known as extinction.



MIT scientists have identified a way to enhance the long-term benefit of exposure therapy in rats, offering a way to improve the therapy in people suffering from phobias and more complicated conditions such as post-traumatic stress disorder (PTSD).
Image: Christine Daniloff/MIT




Unfortunately, the effects of exposure therapy are not permanent, and many people experience a relapse. MIT scientists have now identified a way to enhance the long-term benefit of extinction in rats, offering a way to improve the therapy in people suffering from phobias and more complicated conditions such as post-traumatic stress disorder (PTSD).


Work conducted in the laboratory of Ki Goosens, a research affiliate of the McGovern Institute for Brain Research, has pinpointed a neural circuit that becomes active during exposure therapy in the rats. In a study published Sept. 27 in eLife, the researchers showed that they could stretch the therapy’s benefits for at least two months by boosting the circuit’s activity during treatment.


“When you give extinction training to humans or rats, and you wait long enough, you observe a phenomenon called spontaneous recovery, in which the fear that was originally learned comes back,” Goosens explains. “It’s one of the barriers to this type of therapy. You spend all this time going through it, but then it’s not a permanent fix for your problem.”


According to statistics from the National Institute of Mental Health, 18 percent of U.S. adults are diagnosed with a fear or anxiety disorder each year, with 22 percent of those patients experiencing severe symptoms.


How to quench a fear

The neural circuit identified by the scientists connects a part of the brain involved in fear memory, called the basolateral amygdala (BLA), with another region called the nucleus accumbens (NAc), which helps the brain process rewarding events. Goosens and her colleagues call it the BLA-NAc circuit.


Researchers have been considering a link between fear and reward for some time, Goosens says. “The amygdala is a part of the brain that is tightly linked with fear memory but it’s also been linked to positive reward learning as well, and the accumbens is a key reward area in the brain,” she explains. “What we’ve been thinking about is whether extinction is rewarding. When you’re expecting something bad and you don’t get it, does your brain treat that like it’s a good thing?”


To find out if there was a specific brain circuit involved, the researchers first trained rats to fear a certain noise by pairing it with foot shock. They later gave the rats extinction training, during which the noise was presented in the absence of foot shock, and they looked at markers of neural activity in the brain. The results revealed the BLA-NAc reward circuit was recruited by the brain during exposure therapy, as the rats gave up their fear of the bad noise.


Once Goosens and her colleagues had identified the circuit, they looked for ways to boost its activity. First, they paired a sugary drink with the fear-related sound during extinction training, hoping to associate the sound with a reward. This type of training, called counterconditioning, associates fear-eliciting cues with rewarding events or memories, instead of with neutral events as in most extinction training.


Rats that received the counterconditioning were significantly less likely to spontaneously revert to their fearful states for up to 55 days after training, compared to those that received regular extinction training, the scientists found.


They also found that the benefits of extinction could be prolonged with optogenetic stimulation, in which the circuit was genetically modified so that it could be stimulated directly with tiny bursts of light from an optical fiber.




The ongoing benefit that came from stimulating the circuit was one of the most surprising — and welcome — findings from the study, Goosens says. “The effect that we saw was one that really emerged months later, and we want to know what’s happening over those two months. What is the circuit doing to suppress the recovery of fear over that period of time? We still don’t understand what that is.”


Another interesting finding from the study was that the circuit was active during both fear learning and fear extinction, says lead author Susana Correia, a research scientist in the Goosens lab. “Understanding if these are molecularly different subcircuits within this projection could allow the development of a pharmaceutical approach to target the fear extinction pathway and to improve cognitive therapy,” Correia says.


Immediate and future impacts on therapy

Some therapists are already using counterconditioning in treating PTSD, and Goosens suggests that the rat study might encourage further exploration of this technique in human therapy.


And while it isn’t likely that humans will receive direct optogenetic therapy any time soon, Goosens says there is a benefit to knowing exactly which circuits are involved in extinction.


In neurofeedback studies, for instance, brain scan technologies such as fMRI or EEG could be used to help a patient learn to activate specific parts of their brain, including the BLA-NAc reward circuit, during exposure therapy.


Studies like this one, Goosens says, offer a “target for a personalized medicine approach where feedback is used during therapy to enhance the effectiveness of that therapy.”


“This study provides beautiful basic science support for the approach of counterconditioning, and it is my hope that this clinical approach might be tested on a larger scale in PTSD treatment,” says Mireya Nadal-Vicens, a researcher at Harvard Medical School and Massachusetts General Hospital, who was not involved with the study. “In addition to perhaps being more effective clinically, it might also be a more tolerable, less initially distressing treatment due to the reward conditioning component.”


Other MIT authors on the paper include technical assistant Anna McGrath, undergraduate Allison Lee, and McGovern principal investigator and Institute Professor Ann Graybiel.


The study was funded by the U.S. Army Research Office, the Defense Advanced Research Projects Agency (DARPA), and the National Institute of Mental Health.




– Credit and Resource –





Visible light based Imaging for Medical Devices

Algorithm could enable visible-light-based imaging for medical devices, autonomous vehicles – System accounts for the deflection of light particles passing through animal tissue or fog.


Scientia — MIT researchers have developed a technique for recovering visual information from light that has scattered because of interactions with the environment — such as passing through human tissue.



The experimental setup, described in detail below.
Image courtesy of the researchers.




The technique could lead to medical-imaging systems that use visible light, which carries much more information than X-rays or ultrasound waves, or to computer vision systems that work in fog or drizzle. The development of such vision systems has been a major obstacle to self-driving cars.


In experiments, the researchers fired a laser beam through a “mask” — a thick sheet of plastic with slits cut through it in a certain configuration, such as the letter A — and then through a 1.5-centimeter “tissue phantom,” a slab of material designed to mimic the optical properties of human tissue for purposes of calibrating imaging systems. Light scattered by the tissue phantom was then collected by a high-speed camera, which could measure the light’s time of arrival.


From that information, the researchers’ algorithms were able to reconstruct an accurate image of the pattern cut into the mask.



An imaging algorithm from the MIT Media Lab’s Camera Culture group compensates for the scattering of light. The advance could potentially be used to develop optical-wavelength medical imaging and autonomous vehicles.

Video: Camera Culture Group



“The reason our eyes are sensitive only in this narrow part of the spectrum is because this is where light and matter interact most,” says Guy Satat, a graduate student at the MIT Media Lab and first author on the new paper. “This is why X-ray is able to go inside the body, because there is very little interaction. That’s why it can’t distinguish between different types of tissue, or see bleeding, or see oxygenated or deoxygenated blood.”


The imaging technique’s potential applications in automotive sensing may be even more compelling than those in medical imaging, however. Many experimental algorithms for guiding autonomous vehicles are highly reliable under good illumination, but they fall apart completely in fog or drizzle; computer vision systems misinterpret the scattered light as having reflected off of objects that don’t exist. The new technique could address that problem.

Satat’s coauthors on the new paper, published today in Scientific Reports, are three other members of the Media Lab’s Camera Culture group: Ramesh Raskar, the group’s leader, Satat’s thesis advisor, and an associate professor of media arts and sciences; Barmak Heshmat, a research scientist; and Dan Raviv, a postdoc.



An illustration shows the researchers’ experimental setup. The data captured by the camera can be thought of as a movie — a two-dimensional image that changes over time.
Illustration courtesy of the researchers.




Expanding circles

Like many of the Camera Culture group’s projects, the new system relies on a pulsed laser that emits ultrashort bursts of light, and a high-speed camera that can distinguish the arrival times of different groups of photons, or light particles. When a light burst reaches a scattering medium, such as a tissue phantom, some photons pass through unmolested; some are only slightly deflected from a straight path; and some bounce around inside the medium for a comparatively long time. The first photons to arrive at the sensor have thus undergone the least scattering; the last to arrive have undergone the most.


Where previous techniques have attempted to reconstruct images using only those first, unscattered photons, the MIT researchers’ technique uses the entire optical signal. Hence its name: all-photons imaging.


The data captured by the camera can be thought of as a movie — a two-dimensional image that changes over time. To get a sense of how all-photons imaging works, suppose that light arrives at the camera from only one point in the visual field. The first photons to reach the camera pass through the scattering medium unimpeded: They show up as just a single illuminated pixel in the first frame of the movie.


The next photons to arrive have undergone slightly more scattering, so in the second frame of the video, they show up as a small circle centered on the single pixel from the first frame. With each successive frame, the circle expands in diameter, until the final frame just shows a general, hazy light.


The problem, of course, is that in practice the camera is registering light from many points in the visual field, whose expanding circles overlap. The job of the researchers’ algorithm is to sort out which photons illuminating which pixels of the image originated where.


Cascading probabilities

The first step is to determine how the overall intensity of the image changes in time. This provides an estimate of how much scattering the light has undergone: If the intensity spikes quickly and tails off quickly, the light hasn’t been scattered much. If the intensity increases slowly and tails off slowly, it has.


On the basis of that estimate, the algorithm considers each pixel of each successive frame and calculates the probability that it corresponds to any given point in the visual field. Then it goes back to the first frame of video and, using the probabilistic model it has just constructed, predicts what the next frame of video will look like. With each successive frame, it compares its prediction to the actual camera measurement and adjusts its model accordingly. Finally, using the final version of the model, it deduces the pattern of light most likely to have produced the sequence of measurements the camera made.
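To make this concrete, here is a heavily simplified toy sketch of an “all-photons”-style reconstruction (our own illustration, not the researchers’ actual algorithm). It models each later frame of the movie as the hidden image blurred by a progressively wider kernel, then refines the estimate frame by frame in the prediction-and-correction spirit described above:

```python
# Simplified "all-photons" toy reconstruction (illustrative only; the
# actual MIT algorithm uses a richer probabilistic model).
import numpy as np
from scipy.ndimage import gaussian_filter

def reconstruct(frames, sigmas, n_iters=50):
    """frames: (T, H, W) time-resolved measurements; sigmas: per-frame blur
    widths, standing in for how much scattering each arrival time implies."""
    estimate = np.ones_like(frames[0])
    for _ in range(n_iters):
        for frame, sigma in zip(frames, sigmas):
            predicted = gaussian_filter(estimate, sigma) + 1e-9
            # Multiplicative correction where prediction and data disagree
            # (a Richardson-Lucy-style update with a symmetric kernel).
            estimate *= gaussian_filter(frame / predicted, sigma)
    return estimate

# Synthetic demo: a hidden cross pattern observed through growing blur.
truth = np.zeros((32, 32))
truth[8:24, 15:17] = 1.0               # vertical bar
truth[12, 10:22] = 1.0                 # horizontal bar
sigmas = np.linspace(0.5, 4.0, 8)      # later photons have scattered more
frames = np.stack([gaussian_filter(truth, s) for s in sigmas])
recovered = reconstruct(frames, sigmas)
print("brightest recovered pixel:", np.unravel_index(recovered.argmax(), recovered.shape))
```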


One limitation of the current version of the system is that the light emitter and the camera are on opposite sides of the scattering medium. That limits its applicability for medical imaging, although Satat believes that it should be possible to use fluorescent particles known as fluorophores, which can be injected into the bloodstream and are already used in medical imaging, as a light source. And fog scatters light much less than human tissue does, so reflected light from laser pulses fired into the environment could be good enough for automotive sensing.


“People have been using what is known as time gating, the idea that photons not only have intensity but also time-of-arrival information and that if you gate for a particular time of arrival you get photons with certain specific path lengths and therefore [come] from a certain specific depth in the object,” says Ashok Veeraraghavan, an assistant professor of electrical and computer engineering at Rice University. “This paper is taking that concept one level further and saying that even the photons that arrive at slightly different times contribute some spatial information.”


“Looking through scattering media is a problem that’s of large consequence,” he adds. But he cautions that the new paper does not entirely solve it. “There’s maybe one barrier that’s been crossed, but there are maybe three more barriers that need to be crossed before this becomes practical,” he says.




– Credit and Resource –


Larry Hardesty | MIT News Office






Friday 10 June 2016

In The Life of The ISS

In The Life of The ISS – Question Time


This post is dedicated to our friend Mihaela from Facebook, who recently asked us a question that is partially related to the International Space Station (ISS). The question was…

How many sunrises can an astronaut see from an orbiting space station, if it orbits the Earth in 90 minutes, and why?


This is a brilliant question, and thank you, Mihaela, for asking us to look into it for you. There are two parts to the answer. The first part is the straightforward answer; we will then go through a little more information about the ISS, with some visualisations that help show why the aforementioned occurs. I have also used the ISS as an example because it is the most commonly known space station (seriously, isn’t it pretty cool that we have space stations just whipping above us as we speak?), plus there are some great apps available that allow you to track the ISS.


The Short Answer…

The ISS takes about 92 minutes to orbit the Earth once, whereas a point on Earth takes 24 hours to complete a full rotation, during which we see one sunrise and one sunset.


As it takes approximately 92 minutes for the ISS to complete an orbit, the crew witnesses a sunrise or a sunset every 45 minutes or so, as the station effectively passes over the point on Earth that would be having a sunrise or a sunset at that particular time.




This means that over the course of 24 hours (one full rotation of the Earth) the ISS will observe between 15 and 16 sunrises and sunsets.
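For those who like to see the arithmetic, a quick sketch:

```python
# The arithmetic behind the answer: orbits (and hence sunrises) per day.
MINUTES_PER_DAY = 24 * 60
ORBIT_MINUTES = 92                  # one ISS orbit, as quoted above

orbits_per_day = MINUTES_PER_DAY / ORBIT_MINUTES
print(f"{orbits_per_day:.1f} orbits per day")    # ~15.7
# One sunrise and one sunset per orbit -> roughly 15-16 of each per day.
```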






Now a little more information on the ISS


The space station is made of many pieces. The pieces were put together in space by astronauts. The space station’s orbit is about 220 miles above Earth. NASA uses the station to learn about living and working in space. These lessons will help NASA explore space.




Questions and Facts

How Old Is the Space Station?

The first piece of the International Space Station was launched in 1998. A Russian rocket launched that piece. After that, more pieces were added. Two years later, the station was ready for people. The first crew arrived on November 2, 2000. People have lived on the space station ever since. Over time more pieces have been added. NASA and its partners around the world finished the space station in 2011.


How Big Is the Space Station?

The space station is as big inside as a house with five bedrooms. It has two bathrooms, a gymnasium and a big bay window. Six people are able to live there. It weighs almost a million pounds. It is big enough to cover a football field including the end zones. It has science labs from the United States, Russia, Japan and Europe.


What Are the Parts of the Space Station?

The space station has many parts. The parts are called modules. The first modules had parts needed to make the space station work. Astronauts also lived in those modules. Modules called “nodes” connect parts of the station to each other. Labs on the space station let astronauts do research.


On the sides of the space station are solar arrays. These arrays collect energy from the sun. They turn sunlight into electricity. Robot arms are attached outside. The robot arms helped to build the space station. They also can move astronauts around outside and control science experiments.


Airlocks on the space station are like doors. Astronauts use them to go outside on spacewalks.


Docking ports are like doors, too. The ports allow visiting spacecraft to connect to the space station. New crews and visitors enter the station through the docking ports. Astronauts fly to the space station on the Russian Soyuz. The crew members use the ports to move supplies onto the station.


Why Is the Space Station Important?

The space station is a home in orbit. People have lived in space every day since the year 2000. The space station’s labs are where crew members do research. This research could not be done on Earth.


Scientists study what happens to people when they live in space. NASA has learned how to keep a spacecraft working for a long time. These lessons will be important in the future.


NASA has a plan to send humans deeper into space than ever before. The space station is one of the first steps. NASA will use lessons from the space station to get astronauts ready for the journey ahead.







Some Quick Facts about the ISS

  • 1. It took an astounding 136 space flights on seven different types of launch vehicles to build it.

  • 2. It flies at 4.791 miles per second (7.71 km/s). That’s fast enough to go to the Moon and back in about a day.

  • 3. It weighs almost 1 million pounds including visiting spacecraft. Picture 120,000 gallons of milk in supermarket cartons in your mind.

  • 4. It has 8 miles of wire just to connect the electrical power system. That would be enough to connect a hair dryer in Newark, New Jersey, to a power plug in New York City.

  • 5. It has a complete surface area the size of a US football field, which actually makes it almost as large as the Tantive IV, the Corellian Corvette that carried Princess Leia.

  • 6. It has more livable space than a 6-bedroom house.

  • 7. It has two bathrooms, a gymnasium and a 360-degree bay window.

  • 8. It’s been the spaceport for 89 Russian Soyuz spacecraft, 37 Space Shuttle missions, three SpaceX Dragons, four Japanese HTV cargo spacecraft, and four European ATV cargo spacecraft.

  • 9. All its research experiments and spacecraft systems are housed in a bit more than one hundred telephone-booth sized racks.

  • 10. The US solar array surface area on the station is 38,400 sq. feet (0.88 acre), which is large enough to cover 8 basketball courts.

  • 11. According to NASA, “there are 52 computers controlling the ISS.” Just for the US segment, there are “1.5 million lines of flight software code run on 44 computers communicating via 100 data networks transferring 400,000 signals.”

  • 12. Its internal pressurized volume is 32,333 cubic feet, which is about the same as that of a Boeing 747 jumbo jet.

  • 13. The ISS crews have eaten about 25,000 meals since 2000. That’s a staggering “seven tons of supplies per three astronauts for six months.” That’s 32,558 Big Macs.

  • 14. 211 people from 15 countries have visited the ISS so far.

  • 15. When it reaches the end of its life, some of the most modern Russian modules—like Nauka—will be reused to make a third space station to support interplanetary missions to Mars, the Moon and Saturn, serving as a launching and return point.



Some Questions Answered by the NASA Team

Do you believe that the design of the ISS will cause a problem in case of a meteor shower? Why?

That’s a really good question. The space environment is a very harsh environment: there’s radiation and micrometeorite strikes, and other things in the environment that cause it to be very hazardous. So, one of the things that we’ve designed the space station for is to protect the astronauts against micrometeorites striking the outer shell of the space station. Now, in doing so, the basic design philosophy of the pressurized modules has been to develop an inner shell, which contains the pressurized interior of the space station, and then a layer of insulation around that inner shell, and then an outer armor plating, if you will, on the exterior. And what that does is protect against small pieces of debris that strike the station and can cause leaks. Now, for larger pieces of debris: they actually track them and have to actually move the space station out of the way of the larger pieces that could cause serious damage to the station.


What kind of contingency plan does the ISS have in case of an emergency? How long do life support systems on board last for the stranded astronauts? Is there such a thing as an emergency launch to the ISS using the current space shuttles?

Well, there are several redundant systems on the space station, which really enable the astronauts to survive for long periods of time without a space shuttle or a Russian re-supply ship coming to bring additional supplies. Now, in the event of an outright emergency, where the lives of the astronauts were threatened, they would have to evacuate the space station using the Soyuz module, but the life support systems themselves are designed to last for months at a time without a re-supply ship.


What would you say to date has been the greatest benefit to mankind from the space station, and what is its predicted benefits?

Well, I think it’s all a matter of judgment, but to me the greatest benefit of the space station is the international cooperation to date that we’ve had with over 16 different countries contributing to the International Space Station; countries that were at one time, enemies of each other, have now come together to do something that will benefit mankind. I think down the road the space station will bring great leaps in science, in medical fields, in the materials manufacturing fields, and it will also teach us a lot about long duration human space flight so that we can expand our civilization beyond Earth.


Why is the center truss section called S-Zero?

That’s actually a really good question, because the trusses are named for whether they’re on the starboard side or the port side; so you have S-Zero, S-One, P-One, S-Three, P-Three, P-Four, S-Four, P-Five, S-Five. Well, S-Zero being in the middle, I guess they couldn’t decide whether to call it S-Zero or P-Zero, and maybe they flipped a coin or whatever else and decided to call it S-Zero, but it’s actually in the center, it’s not on the starboard side or the port side, so it could have just as easily been named P-Zero.


Does the International Space Station have any hardware or machines that were specifically invented for it and cannot be found anywhere else? What are they?

Well, the International Space Station has lots of unique hardware elements that were designed specifically for the International Space Station. They also use off-the-shelf technology when possible; one instance of that is the cameras that they use on the space station for the interior of the space station are actually just off-the-shelf camcorders. But, some, there’s certainly a great amount of technology that was developed specifically for the International Space Station to function specifically in the space environment. I think one of the best examples of that is the Canadian robotic arm. The Canadian robotic arm was developed specifically for the International Space Station and fills the task of actually constructing the International Space Station, and it doesn’t even function in the Earth environment in the one-G conditions that we have here on Earth.


When will the International Space Station be completed?

Well, that’s also a very interesting question. The core complete milestone that we are reaching for right now is due in the mid-2004 timeframe. Now, after we finish building what’s essentially the core of the International Space Station, then we have a lot of additional options to add elements developed by international partners, and other additional features that we might want to add. The fact that the space station was designed the way it was allows us, once we get to the core complete milestone, to expand it to provide lots of additional capabilities.


Which ISS docking port is being used by the Soyuz TM-34 spacecraft? Also, where on the station will Endeavour and Leonardo dock?

Well, the Soyuz module is nominally docked to the end of the Russian service module. Now, there are additional docking ports on the Russian functional cargo block, I’m sorry, on the bottom of the service module, where the Soyuz modules can be docked. And when they bring a second one up onto orbit in order to switch out the first one when they have to replace them, they actually have to move one of the Soyuz modules from the end of the service module to the bottom of the service module, and the second service module goes on to the end. The space shuttle, on the other hand, docks to the American side of the space station, to the Destiny laboratory. And the MPLM, Leonardo, in this case, is docked to Node-1, which was also built by an American company, Boeing.


Is it possible to give the times and locations of when the ISS passes over Central California?

Well, it’s actually possible to find out when the space station will be passing over your head no matter where you live, and there’s a website, it’s http://spaceflight.nasa.gov, and if you go that website, you can follow links and actually no matter what city you are in the country, you can find out when the space station will be traveling overhead.


With respect to the space station, why can’t we just shoot the trash off towards the sun instead of bringing it back to Earth?

Well, that’s actually a question that I used to wonder about when I was growing up, why didn’t we just put all the trash into the sun to save our garbage problems here on Earth. Unfortunately, it would take a lot of rocket power to get anything to the Sun, and so it’s sort of a limiting factor to be able to launch something out the sphere of influence of the Earth. Now, the trash on the International Space Station, not all of it is brought back to Earth. Some of it is placed in the Russian Progress modules, which are sent on a trajectory into the Earth’s atmosphere that burns it back up. So it’s not all brought back to Earth, just some of it in the MPLM modules.


After the completion of the ISS, how much will it contribute to the flight of humans to Mars, and return trips to the Moon?

Well, this kind of goes with the earlier question about what the benefits of the International Space Station are. If we’re going to go to Mars, or spend long periods of time on the Moon, we have to learn what the effects of long-term space flight are going to be on our astronauts. We don’t have a lot of information about what the space environment does to our astronauts beyond, say, six months. There are astronauts, particularly from Russia, who have spent more time than that in space, but very few, so we don’t have a large amount of data, and it’d be very risky to send astronauts to Mars, to spend, say, a year and a half outside of the Earth environment, or more, without knowing exactly what the effects of the long-term exposure to space would be. So, the International Space Station, in addition to just developing the technology to live in space for long periods of time, gives us the information that we need about how long astronauts can safely stay in space.


Is it possible to use a flywheel mechanism to produce power for the space station? Have there been any experiments using this technology to produce power in space?

Well, it’s actually not possible to use flywheels to generate power in the classical sense, but you can use flywheels to store power. So, you would have to use some other source to generate the power, but then to store it you could spin up flywheels and then use the kinetic energy from the flywheels to actually store energy. But because the power requirements of the space station are so large, it’s a lot more practical for us to use batteries to store power on the station. So, the answer to the question is no, we don’t use flywheels to store power.
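As a rough illustration of the physics in that answer (toy numbers of our own choosing, not NASA figures), the energy stored in a spinning flywheel is E = ½Iω²:

```python
# Toy flywheel energy-storage calculation (illustrative numbers only).
# Kinetic energy E = 1/2 * I * w^2, with I = 1/2 * m * r^2 for a solid disk.
import math

m = 100.0                       # flywheel mass, kg (assumed)
r = 0.5                         # flywheel radius, m (assumed)
rpm = 20000.0                   # spin rate, rev/min (assumed)

I = 0.5 * m * r**2              # moment of inertia of a solid disk
w = rpm * 2 * math.pi / 60      # angular speed, rad/s
E = 0.5 * I * w**2              # stored kinetic energy, joules
print(f"~{E / 3.6e6:.1f} kWh stored")   # ~7.6 kWh for these numbers
```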


How many different civilian contracting companies, on average, participate in the building of one of our space station modules?

Well, most of the American space station modules were developed and built by the prime contractor for the space station, which is Boeing. Now, Boeing has dozens, if not hundreds of subcontractors that it uses to build everything from the smallest screw used on the space station to a complex computer, or a solar array. So, there’s one prime contractor, but dozens, if not hundreds of subcontractors.


When the space station needs to make an orbital adjustment, do the occupants of the space station feel the movement of the adjustment?

Well, the reason why I think that’s such a good question is because it really highlights one of the most fundamental laws of physics we have. There are basically three laws of motion that Isaac Newton postulated hundreds of years ago, and one of those laws is that force equals mass times acceleration. Now, the key thing about these laws is that no matter where you are in the universe, they are true. So whether you’re on Earth or whether you’re in space, these laws are true. Now, this particular law, force equals mass times acceleration: when you press the gas pedal in your car, your car accelerates, you go from, say, 55 miles an hour to 60 miles an hour. That acceleration is what causes you to feel that force. Now, in space, when they fire the thrusters on the space station, the space station also accelerates. But the acceleration is generally very, very small. So sometimes the astronauts might not notice the space station is accelerating. But that also brings in another interesting point, in what they might see: since the astronauts are floating free with respect to the space station, when the space station fires its thrusters, the space station would move, and the astronauts, not touching one of the surfaces, would not move, so they would see the space station actually moving around them.


During a 24 hour period, how many times does the ISS orbit the Earth?

Well, the space station orbits Earth about every 90 minutes, so that means in a 24 hour day, the space station orbits approximately 16 times.
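(The arithmetic: a day is 24 × 60 = 1,440 minutes, and 1,440 ÷ 90 = 16 orbits.)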


In operating, maintaining, and troubleshooting problems on the ISS, how involved does the ISS crew get versus the control center team?

Well, that’s a very good question. NASA has an entire army of people supporting the operations of the International Space Station. Of course, the astronauts are often the first line of defense, and especially in emergency situations, they have to make quick, critical decisions that will keep everybody safe. Now, the mission control people are a huge part of supporting that and laying out the plans for those emergency situations. But, in the event that something goes wrong on the station, NASA has the ability to go back to the people who actually designed the hardware and ask them what they think about the problem, and whether it’s something they might have seen before in ground testing. So it’s a collaborative effort across all of the different countries that build the hardware we use on the International Space Station.


On certain days we are able to visualize the space station as it seems to streak across the sky. How fast is the ISS traveling?

Well, in order for the space station to stay in orbit, it has to travel at about seven kilometers per second, which is the equivalent of around 15,500 miles per hour. So that’s pretty fast!
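(For the conversion: 7 km/s × 3,600 s/h = 25,200 km/h, and 25,200 ÷ 1.609 km per mile ≈ 15,700 mph, the same ballpark as the figure above.)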


I hope you enjoyed this post, and Mihaela, I hope this answered your question and then some 🙂 thank you so, so much for the question. If you or anyone else has any more questions, please let us know on our Scientia Facebook page or use the forums on Scientia to start your own thread.



– Credit and Resource –


NASA





Saturday 28 May 2016

Link Between Primordial Black Holes and Dark Matter

Scientist suggests possible link between primordial black holes and dark matter



Left: This image from NASA’s Spitzer Space Telescope shows an infrared view of a sky area in the constellation Ursa Major. Right: After masking out all known stars, galaxies and artifacts and enhancing what’s left, an irregular background glow appears. This is the cosmic infrared background (CIB); lighter colors indicate brighter areas. The CIB glow is more irregular than can be explained by distant unresolved galaxies, and this excess structure is thought to be light emitted when the universe was less than a billion years old. Scientists say it likely originated from the first luminous objects to form in the universe, which includes both the first stars and black holes. Credit: NASA/JPL-Caltech/A. Kashlinsky (Goddard)


Scientia — Dark matter is a mysterious substance composing most of the material universe, now widely thought to be some form of massive exotic particle. An intriguing alternative view is that dark matter is made of black holes formed during the first second of our universe’s existence, known as primordial black holes. Now a scientist at NASA’s Goddard Space Flight Center in Greenbelt, Maryland, suggests that this interpretation aligns with our knowledge of cosmic infrared and X-ray background glows and may explain the unexpectedly high masses of merging black holes detected last year.

“This study is an effort to bring together a broad set of ideas and observations to test how well they fit, and the fit is surprisingly good,” said Alexander Kashlinsky, an astrophysicist at NASA Goddard. “If this is correct, then all galaxies, including our own, are embedded within a vast sphere of black holes each about 30 times the sun’s mass.”





In 2005, Kashlinsky led a team of astronomers using NASA’s Spitzer Space Telescope to explore the background glow of infrared light in one part of the sky. The researchers reported excessive patchiness in the glow and concluded it was likely caused by the aggregate light of the first sources to illuminate the universe more than 13 billion years ago. Follow-up studies confirmed that this cosmic infrared background (CIB) showed similar unexpected structure in other parts of the sky.


In 2013, another study compared the cosmic X-ray background (CXB) detected by NASA’s Chandra X-ray Observatory to the CIB in the same area of the sky. The first stars emitted mainly optical and ultraviolet light, which today is stretched into the infrared by the expansion of space, so they should not contribute significantly to the CXB.


Yet the irregular glow of low-energy X-rays in the CXB matched the patchiness of the CIB quite well. The only object we know of that can be sufficiently luminous across this wide an energy range is a black hole. The research team concluded that primordial black holes must have been abundant among the earliest stars, making up at least about one out of every five of the sources contributing to the CIB.
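To give a flavor of the kind of comparison involved, here is a minimal sketch of testing whether the patchiness of two masked sky maps matches, using a simple correlation on toy data. This illustrates the general idea only; the actual study worked with measured fluctuation power spectra of real Chandra and Spitzer maps, not this toy statistic.

```python
# Minimal sketch: does the "patchiness" of two background maps (e.g. CIB vs CXB) agree?
# Toy illustration of the general technique, not the study's actual analysis pipeline.
import numpy as np

def cross_correlation(map_a: np.ndarray, map_b: np.ndarray, mask: np.ndarray) -> float:
    """Pearson correlation of two sky maps over unmasked pixels."""
    a = map_a[mask] - map_a[mask].mean()
    b = map_b[mask] - map_b[mask].mean()
    return float(np.sum(a * b) / np.sqrt(np.sum(a**2) * np.sum(b**2)))

# Toy data: two maps sharing a common fluctuation field plus independent noise.
rng = np.random.default_rng(0)
common = rng.normal(size=(128, 128))
cib = common + 0.5 * rng.normal(size=(128, 128))
cxb = common + 0.5 * rng.normal(size=(128, 128))
mask = np.ones((128, 128), dtype=bool)  # in practice, known sources are masked out

print(f"correlation: {cross_correlation(cib, cxb, mask):.2f}")  # high if patchiness matches
```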


The nature of dark matter remains one of the most important unresolved issues in astrophysics. Scientists currently favor theoretical models that explain dark matter as an exotic massive particle, but so far searches have failed to turn up evidence these hypothetical particles actually exist. NASA is currently investigating this issue as part of its Alpha Magnetic Spectrometer and Fermi Gamma-ray Space Telescope missions.


“These studies are providing increasingly sensitive results, slowly shrinking the box of parameters where dark matter particles can hide,” Kashlinsky said. “The failure to find them has led to renewed interest in studying how well primordial black holes—black holes formed in the universe’s first fraction of a second—could work as dark matter.”


Physicists have outlined several ways in which the hot, rapidly expanding universe could produce primordial black holes in the first thousandths of a second after the Big Bang. The older the universe is when these mechanisms take hold, the larger the black holes can be. And because the window for creating them lasts only a tiny fraction of the first second, scientists expect primordial black holes would exhibit a narrow range of masses.
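As a rough guide (a standard back-of-the-envelope estimate, not a result from this paper), a primordial black hole formed at time $t$ after the Big Bang can contain at most roughly the mass within the cosmic horizon at that time:

\[
M_{\mathrm{PBH}} \lesssim \frac{c^{3}\,t}{G} \approx 2\times10^{5}\,M_{\odot}\left(\frac{t}{1\ \mathrm{s}}\right),
\]

so a black hole of about 30 solar masses would correspond to formation around $t \approx 10^{-4}$ seconds, consistent with the “first thousandths of a second” mentioned above.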


Primordial black holes, if they exist, could be similar to the merging black holes detected by the LIGO team in 2015. This computer simulation shows in slow motion what this merger would have looked like up close. The ring around the black holes, called an Einstein ring, arises from all the stars in a small region directly behind the holes whose light is distorted by gravitational lensing. The gravitational waves detected by LIGO are not shown in this video, although their effects can be seen in the Einstein ring. Gravitational waves traveling out behind the black holes disturb stellar images comprising the Einstein ring, causing them to slosh around in the ring even long after the merger is complete. Gravitational waves traveling in other directions cause weaker, shorter-lived sloshing everywhere outside the Einstein ring. If played back in real time, the movie would last about a third of a second. Credit: SXS Lensing

On Sept. 14, 2015, gravitational waves produced by a pair of merging black holes 1.3 billion light-years away were captured by the Laser Interferometer Gravitational-Wave Observatory (LIGO) facilities in Hanford, Washington, and Livingston, Louisiana. This event marked the first-ever detection of gravitational waves as well as the first direct detection of black holes. The signal provided LIGO scientists with information about the masses of the individual black holes, which were 29 and 36 times the sun’s mass, plus or minus about four solar masses. These values were both unexpectedly large and surprisingly similar.
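As an aside on how such masses are inferred: a gravitational-wave signal most tightly constrains a particular combination of the two masses known as the chirp mass. Below is the standard textbook definition (not specific to this article), evaluated for the masses quoted above.

```python
# Chirp mass: the mass combination a gravitational-wave signal constrains best.
# Standard definition from gravitational-wave astronomy; the 29 and 36 solar-mass
# values are the ones quoted in the article.
def chirp_mass(m1: float, m2: float) -> float:
    return (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2

print(f"{chirp_mass(29.0, 36.0):.1f} solar masses")  # ~28 solar masses
```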





“Depending on the mechanism at work, primordial black holes could have properties very similar to what LIGO detected,” Kashlinsky explained. “If we assume this is the case, that LIGO caught a merger of black holes formed in the early universe, we can look at the consequences this has on our understanding of how the cosmos ultimately evolved.”


In his new paper, published May 24 in The Astrophysical Journal Letters, Kashlinsky analyzes what might have happened if dark matter consisted of a population of black holes similar to those detected by LIGO. The black holes distort the distribution of mass in the early universe, adding a small fluctuation that has consequences hundreds of millions of years later, when the first stars begin to form.


For much of the universe’s first 500 million years, normal matter remained too hot to coalesce into the first stars. Dark matter was unaffected by the high temperature because, whatever its nature, it primarily interacts through gravity. Aggregating by mutual attraction, dark matter first collapsed into clumps called minihaloes, which provided a gravitational seed enabling normal matter to accumulate. Hot gas collapsed toward the minihaloes, resulting in pockets of gas dense enough to further collapse on their own into the first stars. Kashlinsky shows that if black holes play the part of dark matter, this process occurs more rapidly and easily produces the lumpiness of the CIB detected in Spitzer data even if only a small fraction of minihaloes manage to produce stars.


As cosmic gas fell into the minihaloes, their constituent black holes would naturally capture some of it too. Matter falling toward a black hole heats up and ultimately produces X-rays. Together, infrared light from the first stars and X-rays from gas falling into dark matter black holes can account for the observed agreement between the patchiness of the CIB and the CXB.


Occasionally, some primordial black holes will pass close enough to be gravitationally captured into binary systems. The black holes in each of these binaries will, over eons, emit gravitational radiation, lose orbital energy and spiral inward, ultimately merging into a larger black hole like the event LIGO observed.
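To put a timescale on “over eons”: general relativity gives a standard formula (due to Peters, 1964; not from Kashlinsky’s paper) for how long a circular binary takes to merge through gravitational-wave emission. The initial separation below is an assumed value for illustration.

```python
# Merger time of a circular black-hole binary via gravitational-wave emission
# (Peters 1964): t = 5 c^5 a^4 / (256 G^3 m1 m2 (m1 + m2)).
# The initial separation a0 is an assumed value for illustration.
G = 6.674e-11      # m^3 kg^-1 s^-2
C = 2.998e8        # m/s
M_SUN = 1.989e30   # kg
YEAR = 3.156e7     # s

def merger_time(m1: float, m2: float, a: float) -> float:
    """Seconds until merger for masses in kg at initial separation a in meters."""
    return 5 * C**5 * a**4 / (256 * G**3 * m1 * m2 * (m1 + m2))

m = 30 * M_SUN
a0 = 0.1 * 1.496e11  # 0.1 astronomical units, assumed
print(f"{merger_time(m, m, a0) / YEAR:.2e} years")  # ~6e8 years for these inputs
```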


“Future LIGO observing runs will tell us much more about the universe’s population of black holes, and it won’t be long before we’ll know if the scenario I outline is either supported or ruled out,” Kashlinsky said.


Kashlinsky leads a science team centered at Goddard that is participating in the European Space Agency’s Euclid mission, currently scheduled to launch in 2020. The project, named LIBRAE, will enable the observatory to probe source populations in the CIB with high precision and determine what portion was produced by black holes.




– Credit and Resource –


More information: A. Kashlinsky, “LIGO gravitational wave detection, primordial black holes and the near-IR cosmic infrared background anisotropies,” The Astrophysical Journal Letters (2016). DOI: 10.3847/2041-8205/823/2/L25. On arXiv: arxiv.org/abs/1605.04023


Journal reference: The Astrophysical Journal Letters


Provided by: NASA





Neuroscientists illuminate role of autism gene

Neuroscientists illuminate role of autism-linked gene. Loss of Shank gene prevents neuronal synapses from properly maturing.


Scientia — A new study from MIT neuroscientists reveals that a gene mutation associated with autism plays a critical role in the formation and maturation of synapses — the connections that allow neurons to communicate with each other.




Many genetic variants have been linked to autism, but only a handful are potent enough to induce the disorder on their own. Among these variants, mutations in a gene called Shank3 are among the most common, occurring in about 0.5 percent of people with autism.





Scientists know that Shank3 helps cells respond to input from other neurons, but because there are two other Shank proteins, and all three can fill in for each other in certain ways, it has been difficult to determine exactly what the Shank proteins are doing.


“It’s clearly regulating something in the neuron that’s receiving a synaptic signal, but some people find one role and some people find another,” says Troy Littleton, a professor in the departments of Biology and of Brain and Cognitive Sciences at MIT, a member of MIT’s Picower Institute for Learning and Memory, and the senior author of the study. “There’s a lot of debate over what it really does at synapses.”


Key to the study is that fruit flies, which Littleton’s lab uses to study synapses, have only one version of the Shank gene. By knocking out that gene, the researchers eliminated all Shank protein from the flies.


“This is the first animal where we have completely removed all Shank family proteins,” says Kathryn Harris, a Picower Institute research scientist and lead author of the paper, which appears in the May 25 issue of the Journal of Neuroscience.


Synaptic organization


Scientists already knew that the Shank proteins are scaffold proteins, meaning that they help to organize the hundreds of other proteins found in the synapse of a postsynaptic cell — a cell that receives signals from a presynaptic cell. These proteins help to coordinate the cell’s response to the incoming signal.


“Shank is essentially a hub for signaling,” Harris says. “It brings in a lot of other partners and plays an organizational role at the postsynaptic membrane.”


In fruit flies lacking the Shank protein, the researchers found two dramatic effects. First, the postsynaptic cells had many fewer boutons, which are the sites where neurotransmitter release occurs. Second, many of the boutons that did form were not properly developed; that is, they were not surrounded by all of the postsynaptic proteins normally found there, which are required to respond to synaptic signals.


The researchers are now studying how this reduction in functional synapses affects the brain. Littleton suspects that the development of neural circuits could be impaired, which, if the same holds true in humans, may help explain some of the symptoms seen in autistic people.


“During critical windows of social and language learning, we reshape our connections to drive connectivity patterns that respond to rewards and language and social interactions,” he says. “If Shank is doing similar things in the mammalian brain, one could imagine potentially having those circuits form relatively normally early on, but if they fail to properly mature and form the proper number of connections, that could lead to a variety of behavioral defects.”


Pinpointing an exact link to autism symptoms would be difficult to do in fruit fly studies, however.





“Although the core molecular machines that allow neurons to communicate are highly conserved between fruit flies and humans, the anatomy of the various circuits that are formed during evolution are quite different,” Littleton says. “It’s hard to jump from a synaptic defect in the fly to an autism-like phenotype because the circuits are so different.”


An unexpected link


The researchers also showed, for the first time, that loss of Shank affects a well-known set of proteins that comprise the Wnt (also known as Wingless) signaling pathway. When a Wnt protein binds to a receptor on the cell, it initiates a series of interactions that influence which genes are turned on. This, in turn, contributes to many cell processes including embryonic development, tissue regeneration, and tumor formation.


When Shank is missing from fruit flies, Wnt signaling is disrupted because the receptor that normally binds to Wnt fails to be internalized by the cell. Normally, a small segment of the activated receptor moves to the cell nucleus and influences the transcription of genes that promote maturation of synapses. Without Shank, Wnt signaling is impaired and the synapses do not fully mature.


“The Shank protein and the Wnt protein family are thought to be involved in autism independently, but the fact that this study discovered that Wnt and Shank are interacting brings the story into better focus,” says Bryan Stewart, a professor of cell and systems biology at the University of Toronto at Mississauga, who was not involved in the research. “Now we can look and see if those interactions between Wnt and Shank are potentially responsible for their role in autism.”


The finding raises the possibility of treating autism with drugs that promote Wnt signaling, if the same connection is found in humans.


“Because the link to Wnt signaling is new and hasn’t been picked up in mammalian studies, we really hope that that can inspire people to look for a connection to Wnt signaling in mammalian models, and maybe that can offer another avenue for how loss of Shank could be counteracted,” Harris says.




– Credit and Resource –


The research was funded by the National Institutes of Health and the Simons Center for the Social Brain at MIT.


Anne Trafton | MIT News Office



