Friday 30 October 2015

Observing the unseen with Euclid

Scientia — In the long-running hunt for elusive dark matter, seeing the unseen is the key to scientific success. The latest addition to the fleet of probes searching for dark matter, the European Space Agency’s (ESA) Euclid spacecraft, is being designed to deliver breakthrough results by observing the as-yet unobserved.




“Euclid will observe the unseen, meaning the part of the universe that does not emit or absorb light, but we know it is there because of the global properties of the universe, its geometrical properties and its expansion rate,” Giuseppe D. Racca, ESA’s Euclid Project Manager told Phys.org.






To find traces of dark matter, the spacecraft will map the large-scale structure of the universe over the entire extragalactic sky. According to ESA, the probe will measure galaxies out to redshifts of about two, looking back 10 billion years to cover the period over which dark energy came to accelerate the expansion of the universe. Dark matter is invisible, but it has gravity and acts to slow the expansion. Ordinary matter is now thought to make up only about 4 percent of the universe; the rest is dominated by dark matter and dark energy.
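
Those figures are easy to sanity-check numerically. Below is a minimal sketch in Python, assuming the astropy package and its bundled Planck 2015 cosmology (neither is mentioned in the article); the lookback time to redshift z = 2 comes out close to the 10 billion years quoted above.

    # Sketch: lookback time to redshift z = 2 in a standard flat LCDM cosmology.
    # Assumes astropy is installed; Planck15 is one of its built-in cosmologies.
    from astropy.cosmology import Planck15

    z = 2.0
    t = Planck15.lookback_time(z)   # time elapsed since the observed light left its source
    print(f"Lookback time to z = {z}: {t:.2f}")   # roughly 10.5 Gyr, i.e. ~10 billion years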


To achieve its ambitious goals, besides a 1.2 m-diameter telescope, Euclid will be equipped with two powerful instruments designed to carry out scientific observations.


“We have two instruments, one visible imager, called VIS, capable of resolving images of galaxies in the 550 to 900 nm passband. The VIS nominal survey images are used to determine the shapes of at least 30 galaxies per arcmin², for a total of 1.5 billion galaxies,” Racca said.


“The other instrument is NISP [an infrared instrument], designed to carry out slitless spectroscopy and imaging photometry in the near-infrared (NIR) wavelength range. The NISP spectroscopy measures the redshifted H-alpha emission line of galaxies,” he added.


VIS and NISP, large-format cameras, will be used to characterize the morphometric, photometric and spectroscopic properties of galaxies. Euclid will use two techniques to observe the invisible, called galaxy clustering (GC) and weak lensing (WL).


In Euclid’s case, GC means measuring the redshift distribution of galaxies from an H-alpha emission-line survey using near-infrared slitless spectroscopy. This method also provides direct information on the validity of general relativity, because it allows monitoring of the evolution of structures subject to two competing effects: gravity, which drives the clumping of matter, and the accelerated expansion, which opposes it.
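
As a worked example (the numbers here are illustrative, not taken from the article): the H-alpha line has a rest wavelength of about 656.3 nm, so at redshift z it is observed at

    \[
    \lambda_{\mathrm{obs}} = (1+z)\,\lambda_{\mathrm{rest}},
    \qquad
    \lambda_{\mathrm{obs}}(z=1.5) \approx 2.5 \times 656.3\ \mathrm{nm} \approx 1.64\ \mu\mathrm{m},
    \]

which lies in the near-infrared, well outside the 550–900 nm passband of VIS. That is why the spectroscopic redshifts are measured with the near-infrared NISP instrument rather than with the visible imager.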




The WL technique measures the distortion of galaxy shapes due to gravitational lensing by the (predominantly dark) matter distribution between the galaxies and the observer. The resulting galaxy shear field can be transformed into a map of the matter distribution. WL requires extremely high image quality, because any image distortions introduced by the optical system must be suppressed or calibrated out in order to measure the true distortions caused by gravity.
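
The article does not say how the shear field is converted into a matter map; one standard method (not named in the text, so take it as illustrative) is the Kaiser–Squires inversion, which relates the two measured shear components to the lensing convergence (a weighted projection of the matter density) in Fourier space:

    \[
    \hat{\kappa}(\boldsymbol{\ell})
    = \frac{\ell_1^{2}-\ell_2^{2}}{\ell_1^{2}+\ell_2^{2}}\,\hat{\gamma}_1(\boldsymbol{\ell})
    + \frac{2\,\ell_1\ell_2}{\ell_1^{2}+\ell_2^{2}}\,\hat{\gamma}_2(\boldsymbol{\ell}).
    \]

Because the measured shear enters this reconstruction directly, percent-level systematic errors in galaxy shapes propagate straight into the recovered matter map, which is why the image-quality requirement described above is so stringent.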


“It is important that the two techniques are performed at the same time. The complementarity of the two probes will, indeed, provide important additional information on possible systematics, which limit the accuracy of each of the probes,” Racca noted.


The Euclid mission is currently on track for the planned launch in late 2020. It was selected for implementation as a Medium-class mission in ESA’s Cosmic Vision program in October 2011 and formally adopted in June 2012. The mission will operate from the Sun–Earth Lagrangian point (L2), situated 1.5 million km from Earth. Science and spacecraft operations will be conducted by ESA. The Euclid Consortium, consisting of more than one hundred scientific institutes, is responsible for the development and delivery of the spacecraft’s instruments and the scientific data processing.




– Credit and Resource –


ESA




Observing the unseen with Euclid

The Euclid Spacecraft

Scientia


Euclid is an ESA medium class astronomy and astrophysics space mission. Euclid was selected by ESA in October 2011 (see the Euclid ESA page). Its launch is planned for Q1 2020. In June 2012 ESA officially selected the “Euclid Consortium” as the single team with scientific responsibility for the mission, the data production and the scientific instruments.


The Euclid mission aims to understand why the expansion of the Universe is accelerating and what the nature of the source responsible for this acceleration, which physicists refer to as dark energy, actually is. Dark energy represents around 75% of the energy content of the Universe today, and together with dark matter it dominates the Universe’s matter-energy content. Both are mysterious and of unknown nature, yet they control the past, present and future evolution of the Universe.






Euclid will explore how the Universe evolved over the past 10 billion years to address questions related to fundamental physics and cosmology on the nature and properties of dark energy, dark matter and gravity, as well as on the physics of the early universe and the initial conditions which seed the formation of cosmic structure.


The imprints of dark energy and gravity will be tracked by using two complementary cosmological probes to capture signatures of the expansion rate of the Universe and the growth of cosmic structures: Weak gravitational Lensing and Galaxy Clustering (Baryonic Acoustic Oscillations and Redshift Space Distortion).


To accomplish the Euclid mission, ESA has selected Thales Alenia Space (see also the ESA press release) for the construction of the satellite and its Service Module, and Airbus Defence and Space (formerly Astrium) for the Payload Module.


Euclid will be equipped with a 1.2 m diameter Silicon Carbide (SiC) mirror telescope made by Airbus Defence and Space, feeding two instruments, VIS and NISP, built by the Euclid Consortium: a high-quality panoramic visible imager (VIS), a near-infrared 3-filter (Y, J and H) photometer (NISP-P) and a slitless spectrograph (NISP-S). With these instruments, physicists will probe the expansion history of the Universe and the evolution of cosmic structures by measuring the modification of galaxy shapes induced by the gravitational lensing effects of dark matter, and the three-dimensional distribution of structures from spectroscopic redshifts of galaxies and clusters of galaxies.


The satellite will be launched on a Soyuz ST-2.1B rocket and then travel to the L2 Sun-Earth Lagrangian point for a six-year mission.


Euclid will observe 15,000 deg² of the darkest sky, free of contamination by light from our Galaxy and our Solar System (see the ESA Euclid mission summary). Two “Euclid Deep Fields” covering around 20 deg² each will also be observed, extending the scientific scope of the mission to the high-redshift universe.


The complete survey represents hundreds of thousands of images and several tens of petabytes of data. About 10 billion sources will be observed by Euclid, of which more than 1 billion will be used for weak lensing, and several tens of millions of galaxy redshifts will also be measured and used for galaxy clustering. The scientific analysis and interpretation of these data is led by the scientists of the Euclid Consortium.


Euclid Scientific Objectives


Euclid is primarily a cosmology and fundamental physics mission. Its main scientific objective is to understand the source of the accelerating expansion of the Universe and to uncover the very nature of what physicists refer to as dark energy.


Euclid will address the following questions:


  • is dark energy merely a cosmological constant, as first discussed by Einstein, or is it a new kind of field that evolves dynamically with the expansion of the universe?

  • alternatively, is dark energy a manifestation of a breakdown of General Relativity and of deviations from the law of gravity?

  • what are the nature and properties of dark matter?

  • what are the initial conditions which seed the formation of cosmic structure?

  • what will be the future of the Universe over the next ten billion years?



The imprints of dark energy and gravity will be detected from their signatures on the expansion rate of the Universe and the growth of cosmic structures, using gravitational lensing effects on galaxies (Weak Lensing) and the properties of galaxy clustering (Baryonic Acoustic Oscillations and Redshift Space Distortion). Baryon acoustic oscillations provide a direct distance-redshift probe of the expansion rate of the Universe. Weak lensing provides an almost direct probe of dark matter, combining angular distances, which probe the expansion rate, with the matter density contrast, which probes the growth rate of structure and gravity. Redshift space distortion, in contrast, probes the growth rate of cosmic structures and gravity. Taken together, these three measurements are solid and complementary probes of the effects of dark energy.


These observations will be complemented by independent measurements, also derived from Euclid data, of clusters of galaxies and the Integrated Sachs-Wolfe effect. They will be used to cross-check the results obtained from Weak Lensing, Baryonic Acoustic Oscillations and Redshift Space Distortion, and to better understand and control systematic errors.



Illustration of the primary probes of the Euclid mission. Left: Baryon Acoustic Oscillations (BAO) and Redshift Space Distortion (RSD). Right: Weak Lensing (WL). Courtesy Euclid Consortium/Science Working Group.






– Credit and Resource –


For more information, check out the Euclid site from our awesome friends at the ESA.




The Euclid Spacecraft

How Crayons are Made

Scientia — So as I am writing this the date is 30 October 2015 and, like me, you might be getting ready for Halloween. I was making a drawing with my nephew and using coloured crayons to colour it in, when I wondered how crayons are made. I vaguely remember watching a video on it years ago, but decided to make a post on it, because come on… who doesn’t want to know how crayons are made… right :)





How Crayons are Made


Step One: The Melting


Twice a week, railcars full of uncolored paraffin wax pull up to the factory. An oil-filled boiler heats the cars with steam, and workers pump the now-molten glop into a silo. Each silo holds up to 100,000 pounds of wax, and the plant empties a silo nearly every day.






Step Two: The Mixing


From the silos, the wax moves through pipes to the mix kettles. Operators add a strengthening additive and dump in a bag of powdered pigment. The amount varies by the saturation and opacity of the color—yellow requires only a few pounds per 250-pound batch; black requires a lot more.



Once all the ingredients are placed into the ‘kettle’ (mixing drum), the batch is stirred to ensure that all the components are fully mixed together.



Step Three: Casting


The liquid mixture is poured into crayon-shaped metal casts by an injection machine and left to harden. A rotary blade then scrapes off the excess wax, which is remelted and reused.



Now that the excess has been taken away, you are left with nice shiny crayons, which are pushed from the metal cast onto a moving belt and carried to the next stage of the process.



Step Four: Labels


Each crayon is wrapped with an adhesive label. Crayola state that they use two wraps to add strength.



Step Five: Sorting and Packing


The crayons are sorted into their respective colours and placed into small filter drums. These drums drop one crayon of each colour into the crayon pack.





Step Six: Scanning and Box Packing


The filled boxes of crayons run along a belt. A laser etches a date code on the cardboard, and a metal detector makes sure nothing but crayon is inside. Then robotic packing machines bundle the boxes onto pallets ready for shipping.


Crayon Facts


  • The favorite color crayon for most people in the US is blue.

  • The 100 billionth crayon was made in 1996 by Fred Rogers of Mister Rogers’ Neighborhood. The color was Blue Ribbon.

  • The largest crayon in the world, Big Blue, weighs 1500 pounds, is 15 feet long and 16 inches in diameter. It was made from 123,000 old blue crayons that were gathered from kids around the country. It would color an entire football field.

  • Crayola’s Easton manufacturing plant produces 650 crayons per minute.

  • Crayons are made from paraffin, a waxy substance derived from wood, coal, or petroleum.

  • Paraffin was produced commercially by 1867, and crayons appeared around the turn of the century. The early crayons were black and sold mainly to factories and plants, where they were used as waterproof markers. Colored crayons for artistic purposes were introduced in Europe around the same time, but like the black crayons, they contained materials that were toxic (usually charcoal and wax) and thus were not appropriate for children. The Binney & Smith Company, who still make crayons, had a canny grasp of the American educational market, having previously marketed dustless chalk for chalkboards. This company sold its first package of eight colored crayons, suitable for use in schools by children, in 1903.

How Crayons are Made Video









How Crayons are Made

New Material for Solar Cell Technology

Scientia — The University of Liverpool is part of an international research team that has demonstrated a new semiconductor material, made from abundant elements rather than rare ones, which can be “tuned” for use in solar cells.






Semiconductors are vital to the electronics industry and are used in everything from smartphones to solar panels, but they rely on rare elements such as tellurium, gallium and indium, and there is increasing concern over their cost and availability.


Researchers focussed on the compound zinc tin nitride (ZnSnN2), which has recently been synthesized by research groups around the world. It uses zinc and tin – metals readily available through mature recycling facilities – rather than expensive and rare metals.


Working with the compound, they discovered an innovative tuning process, which means it could provide a possible replacement for the semiconductor materials currently used in solar cells.


It had been thought that the band gap (a semiconductor’s defining characteristic) was too large for device applications such as solar cells. However, the researchers found that the alloy’s band gap can be “tuned” by introducing disorder, in the form of randomly distributed zinc and tin atoms, into the otherwise perfectly ordered crystalline lattice.


Dr Tim Veal, a reader in Physics and researcher in the University of Liverpool’s Stephenson Institute for Renewable Energy, said: “Such tuneability is typically achieved in other material systems by alloying, or blending in other elements, to obtain the desired result. However, this is not necessary with ZnSnN2, given the recent discovery.”


Dr Steve Durbin, Professor and Chair of Electrical and Computer Engineering at Western Michigan University, added: “We use a sophisticated crystal growth technique known as molecular beam epitaxy. It allowed us to control crystal quality by carefully adjusting parameters such as temperature and the ratio of incident atomic (or molecular) beams.”


Dr Veal added: “By doing so, the team has been able to observe a wide range of disorder in a number of samples, and correlated this with a significant reduction in band gap energy – paving the way for this material to be considered for solar cell applications.”


The research, funded by the UK Engineering and Physical Science Research Council and US National Science Foundation, is published in the journal Advanced Energy Materials.




– Credit and Resource –


More information: Tim D. Veal et al. Band Gap Dependence on Cation Disorder in ZnSnN2 Solar Absorber, Advanced Energy Materials (2015). DOI: 10.1002/aenm.201501462


Journal reference: Advanced Energy Materials


Provided by: University of Liverpool




New Material for Solar Cell Technology

Tuesday 27 October 2015

Drug-resistance Mechanism in Tumor Cells

Biologists unravel drug-resistance mechanism in tumor cells – Targeting the RNA-binding protein that promotes resistance could lead to better cancer therapies.



P53, which helps healthy cells prevent genetic mutations, is missing from about half of all tumors. Researchers have found that a backup system takes over when p53 is disabled and encourages cancer cells to continue dividing. In the background of this illustration are crystal structures of p53 DNA-binding domains.
Image: Jose-Luis Olivares/MIT (p53 illustration by Richard Wheeler/Wikimedia Commons)


Scientia — About half of all tumors are missing a gene called p53, which helps healthy cells prevent genetic mutations. Many of these tumors develop resistance to chemotherapy drugs that kill cells by damaging their DNA.


MIT cancer biologists have now discovered how this happens: A backup system that takes over when p53 is disabled encourages cancer cells to continue dividing even when they have suffered extensive DNA damage. The researchers also discovered that an RNA-binding protein called hnRNPA0 is a key player in this pathway.






“I would argue that this particular RNA-binding protein is really what makes tumor cells resistant to being killed by chemotherapy when p53 is not around,” says Michael Yaffe, the David H. Koch Professor in Science, a member of the Koch Institute for Integrative Cancer Research, and the senior author of the study, which appears in the Oct. 22 issue of Cancer Cell.


The findings suggest that shutting off this backup system could make p53-deficient tumors much more susceptible to chemotherapy. It may also be possible to predict which patients are most likely to benefit from chemotherapy and which will not, by measuring how active this system is in patients’ tumors.


Rewired for resistance

In healthy cells, p53 oversees the cell division process, halting division if necessary to repair damaged DNA. If the damage is too great, p53 induces the cell to undergo programmed cell death.


In many cancer cells, if p53 is lost, cells undergo a rewiring process in which a backup system, known as the MK2 pathway, takes over part of p53’s function. The MK2 pathway allows cells to repair DNA damage and continue dividing, but does not force cells to undergo cell suicide if the damage is too great. This allows cancer cells to continue growing unchecked after chemotherapy treatment.


“It only rescues the bad parts of p53’s function, but it doesn’t rescue the part of p53’s function that you would want, which is killing the tumor cells,” says Yaffe, who first discovered this backup system in 2013.


In the new study, the researchers delved further into the pathway and found that the MK2 protein exerts control by activating the hnRNPA0 RNA-binding protein.


RNA-binding proteins are proteins that bind to RNA and help control many aspects of gene expression. For example, some RNA-binding proteins bind to messenger RNA (mRNA), which carries genetic information copied from DNA. This binding stabilizes the mRNA and helps it stick around longer so the protein it codes for will be produced in larger quantities.


“RNA-binding proteins, as a class, are becoming more appreciated as something that’s important for response to cancer therapy. But the mechanistic details of how those function at the molecular level are not known at all, apart from this one,” says Ian Cannell, a research scientist at the Koch Institute and the lead author of the Cancer Cell paper.


In this paper, Cannell found that hnRNPA0 takes charge at two different checkpoints in the cell division process. In healthy cells, these checkpoints allow the cell to pause to repair genetic abnormalities that may have been introduced during the copying of chromosomes.


One of these checkpoints, known as G2/M, is controlled by a protein called Gadd45, which is normally activated by p53. In lung cancer cells without p53, hnRNPA0 stabilizes mRNA coding for Gadd45. At another checkpoint called G1/S, p53 normally turns on a protein called p21. When p53 is missing, hnRNPA0 stabilizes mRNA for a protein called p27, a backup to p21. Together, Gadd45 and p27 help cancer cells to pause the cell cycle and repair DNA so they can continue dividing.




Personalized medicine

The researchers also found that measuring the levels of mRNA for Gadd45 and p27 could help predict patients’ response to chemotherapy. In a clinical trial of patients with stage 2 lung tumors, they found that patients who responded best had low levels of both of those mRNAs. Those with high levels did not benefit from chemotherapy.


“You could measure the RNAs that this pathway controls, in patient samples, and use that as a surrogate for the presence or absence of this pathway,” Yaffe says. “In this trial, it was very good at predicting which patients responded to chemotherapy and which patients didn’t.”


“The most exciting thing about this study is that it not only fills in gaps in our understanding of how p53-deficient lung cancer cells become resistant to chemotherapy, it also identifies actionable events to target and could help us to identify which patients will respond best to cisplatin, which is a very toxic and harsh drug,” says Daniel Durocher, a senior investigator at the Samuel Lunenfeld Research Institute of Mount Sinai Hospital in Toronto, who was not part of the research team.


The MK2 pathway could also be a good target for new drugs that could make tumors more susceptible to DNA-damaging chemotherapy drugs. Yaffe’s lab is now testing potential drugs in mice, including nanoparticle-based sponges that would soak up all of the RNA binding protein so it could no longer promote cell survival.




– Credit and Resource –


Anne Trafton | MIT News Office




Drug-resistance Mechanism in Tumor Cells

Sonic tractor beam

Scientia — A team of researchers from the Universities of Bristol and Sussex in collaboration with Ultrahaptics have built the world’s first sonic tractor beam that can lift and move objects using sound waves.


Tractor beams are mysterious rays that can grab and lift objects. The concept has been used by science-fiction writers and in programmes like Star Trek, and has since come to fascinate scientists and engineers. Researchers have now built a working tractor beam that uses high-amplitude sound waves to generate an acoustic hologram which can pick up and move small objects.






The technique, published in Nature Communications, could be developed for a wide range of applications, for example a sonic production line could transport delicate objects and assemble them, all without physical contact. On the other hand, a miniature version could grip and transport drug capsules or microsurgical instruments through living tissue.


Asier Marzo, PhD student and the lead author, said: “It was an incredible experience the first time we saw the object held in place by the tractor beam. All my hard work has paid off, it’s brilliant.”


Bruce Drinkwater, Professor of Ultrasonics in the University of Bristol’s Department of Mechanical Engineering, added: “We all know that sound waves can have a physical effect. But here we have managed to control the sound to a degree never previously achieved.”


Sriram Subramanian, Professor of Informatics at the University of Sussex and co-founder of Ultrahaptics, explained: “In our device we manipulate objects in mid-air and seemingly defy gravity. Here we individually control dozens of loudspeakers to tell us an optimal solution to generate an acoustic hologram that can manipulate multiple objects in real-time without contact.”



Holograms are tridimensional light-fields that can be projected from a two-dimensional surface. We have created acoustic holograms with shapes such as tweezers, twisters and cages that exert forces on particles to levitate and manipulate them. Credit: Image courtesy of Asier Marzo, Bruce Drinkwater and Sriram Subramanian, copyright © 2015.


The researchers used an array of 64 miniature loudspeakers to create high-pitched, high-intensity sound waves. The tractor beam works by surrounding the object with high-intensity sound, which creates a force field that keeps the object in place. By carefully controlling the output of the loudspeakers, the object can be held in place, moved or rotated.
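
To give a feel for how such an array is steered, here is a minimal sketch of phased-array focusing, the basic idea behind aiming sound from many small emitters at one point. The 40 kHz frequency, 8 x 8 grid and 1 cm spacing are illustrative assumptions and are not taken from the paper; the real device applies further optimised phase patterns on top of a simple focus to build the tweezer, vortex and cage traps described below.

    # Sketch: choose per-emitter phases so all 64 waves arrive in phase at one point.
    import numpy as np

    SPEED_OF_SOUND = 343.0                   # m/s in air
    FREQ = 40_000.0                          # Hz (assumed ultrasonic emitters)
    K = 2 * np.pi * FREQ / SPEED_OF_SOUND    # wavenumber

    # Assumed geometry: 8 x 8 grid of emitters in the z = 0 plane, 1 cm pitch.
    xs, ys = np.meshgrid(np.arange(8) * 0.01, np.arange(8) * 0.01)
    emitters = np.stack([xs.ravel(), ys.ravel(), np.zeros(64)], axis=1)

    focus = np.array([0.035, 0.035, 0.05])   # desired trap position, 5 cm above the array

    # Drive each emitter with a phase that cancels its extra path length to the focus.
    distances = np.linalg.norm(emitters - focus, axis=1)
    phases = (-K * distances) % (2 * np.pi)
    print(phases.reshape(8, 8))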




The team have shown that three different shapes of acoustic force field work as tractor beams. The first is an acoustic force field that resembles a pair of fingers or tweezers. The second is an acoustic vortex, in which objects become trapped at the core, and the third is best described as a high-intensity cage that surrounds the objects and holds them in place from all directions.



Acoustic holograms are projected from a flat surface and contrary to traditional holograms, they exert considerable forces on the objects contained within. The acoustic holograms can be updated in real-time to translate, rotate and combine levitated particles enabling unprecedented contactless manipulators such as tractor beams. Credit: Asier Marzo, Bruce Drinkwater and Sriram Subramanian, copyright © 2015


– Credit and Resource –


More information: DOI: 10.1038/ncomms9661


Journal reference: Nature Communications


Provided by: University of Sussex




Sonic tractor beam

Huge Flare from Black Hole

Scientia — The baffling and strange behaviors of black holes have become somewhat less mysterious recently, with new observations from NASA’s Explorer missions Swift and the Nuclear Spectroscopic Telescope Array, or NuSTAR. The two space telescopes caught a supermassive black hole in the midst of a giant eruption of X-ray light, helping astronomers address an ongoing puzzle: How do supermassive black holes flare?



This diagram shows how a shifting feature, called a corona, can create a flare of X-rays around a black hole. Credit: NASA/JPL-Caltech


The results suggest that supermassive black holes send out beams of X-rays when their surrounding coronas—sources of extremely energetic particles—shoot, or launch, away from the black holes.


“This is the first time we have been able to link the launching of the corona to a flare,” said Dan Wilkins of Saint Mary’s University in Halifax, Canada, lead author of a new paper on the results appearing in the Monthly Notices of the Royal Astronomical Society. “This will help us understand how supermassive black holes power some of the brightest objects in the universe.”






Supermassive black holes don’t give off any light themselves, but they are often encircled by disks of hot, glowing material. The gravity of a black hole pulls swirling gas into it, heating this material and causing it to shine with different types of light. Another source of radiation near a black hole is the corona. Coronas are made up of highly energetic particles that generate X-ray light, but details about their appearance, and how they form, are unclear.


Astronomers think coronas have one of two likely configurations. The “lamppost” model says they are compact sources of light, similar to light bulbs, that sit above and below the black hole, along its rotation axis. The other model proposes that the coronas are spread out more diffusely, either as a larger cloud around the black hole, or as a “sandwich” that envelops the surrounding disk of material like slices of bread. In fact, it’s possible that coronas switch between both the lamppost and sandwich configurations.


The new data support the “lamppost” model—and demonstrate, in the finest detail yet, how the light-bulb-like coronas move. The observations began when Swift, which monitors the sky for cosmic outbursts of X-rays and gamma rays, caught a large flare coming from the supermassive black hole called Markarian 335, or Mrk 335, located 324 million light-years away in the direction of the constellation Pegasus. This supermassive black hole, which sits at the center of a galaxy, was once one of the brightest X-ray sources in the sky.


“Something very strange happened in 2007, when Mrk 335 faded by a factor of 30. What we have found is that it continues to erupt in flares but has not reached the brightness levels and stability seen before,” said Luigi Gallo, the principal investigator for the project at Saint Mary’s University. Another co-author, Dirk Grupe of Morehead State University in Kentucky, has been using Swift to regularly monitor the black hole since 2007.


In September 2014, Swift caught Mrk 335 in a huge flare. Once Gallo found out, he sent a request to the NuSTAR team to quickly follow up on the object as part of a “target of opportunity” program, where the observatory’s previously planned observing schedule is interrupted for important events. Eight days later, NuSTAR set its X-ray eyes on the target, witnessing the final half of the flare event.




After careful scrutiny of the data, the astronomers realized they were seeing the ejection, and eventual collapse, of the black hole’s corona.


“The corona gathered inward at first and then launched upwards like a jet,” said Wilkins. “We still don’t know how jets in black holes form, but it’s an exciting possibility that this black hole’s corona was beginning to form the base of a jet before it collapsed.”


How could the researchers tell the corona moved? The corona gives off X-ray light that has a slightly different spectrum—X-ray “colors”—than the light coming from the disk around the black hole. By analyzing a spectrum of X-ray light from Mrk 335 across a range of wavelengths observed by both Swift and NuSTAR, the researchers could tell that the corona X-ray light had brightened—and that this brightening was due to the motion of the corona.


Coronas can move very fast. The corona associated with Mrk 335, according to the scientists, was traveling at about 20 percent the speed of light. When this happens, and the corona launches in our direction, its light is brightened in an effect called relativistic Doppler boosting.
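
To gauge the size of that effect (the formula is standard special relativity; the exact boosting exponent depends on the source geometry and spectrum, so the numbers are illustrative rather than the paper’s own calculation), a source moving at speed \(\beta c\) at angle \(\theta\) to the line of sight has Doppler factor

    \[
    \delta = \frac{1}{\gamma\,(1-\beta\cos\theta)}, \qquad \gamma = \frac{1}{\sqrt{1-\beta^{2}}},
    \]

so for \(\beta = 0.2\) heading straight toward us (\(\theta = 0\)), \(\delta = \sqrt{(1+\beta)/(1-\beta)} \approx 1.22\). The observed flux scales as a positive power of \(\delta\) (often quoted as \(\delta^{3+\alpha}\) for a discrete blob with spectral index \(\alpha\)), so even this modest factor produces a noticeable brightening of the corona’s X-ray light.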


Putting this all together, the results show that the X-ray flare from this black hole was caused by the ejected corona.


“The nature of the energetic source of X-rays we call the corona is mysterious, but now with the ability to see dramatic changes like this we are getting clues about its size and structure,” said Fiona Harrison, the principal investigator of NuSTAR at the California Institute of Technology in Pasadena, who was not affiliated with the study.


Many other black hole brainteasers remain. For example, astronomers want to understand what causes the ejection of the corona in the first place.




– Credit and Resource –


Journal reference: Monthly Notices of the Royal Astronomical Society




Huge Flare from Black Hole

Monday 26 October 2015

Cassini’s Enceladus Final Flyby

Scientia — In late 2015, NASA’s Cassini spacecraft is making its final three flybys of Enceladus, the little moon that stunned scientists with the revelation it harbors a global ocean under its icy shell, active geysers of water-ice feeding one of Saturn’s rings and the first tantalizing signs of hydrothermal activity beyond Earth. All these discoveries have vaulted Enceladus to one of the top future destinations for exploration and the search for signs of potential life beyond Earth.


  • 14 October 2015: Cassini aligned itself to get some of the best shots of Saturn’s moon Enceladus, looking at the moon’s north polar region from an altitude of just 1,142 miles (1,839 kilometers).

    This high-resolution Cassini image shows a landscape of stark contrasts on Saturn’s moon Enceladus. Thin cracks cross over the pole — the northernmost extent of a global system of such fractures. Credit: NASA/JPL/Space Science Institute – Provided by NASA



    Scientists expected the north polar region of Enceladus to be heavily cratered, based on low-resolution images from the Voyager mission, but the new high-resolution Cassini images show a landscape of stark contrasts. “The northern regions are crisscrossed by a spidery network of gossamer-thin cracks that slice through the craters,” said Paul Helfenstein, a member of the Cassini imaging team at Cornell University, Ithaca, New York. “These thin cracks are ubiquitous on Enceladus, and now we see that they extend across the northern terrains as well.”







  • 28 October 2015: Cassini will make a daring flight through the moon’s famous plume only 30 miles (48 kilometers) above Enceladus’ south pole. The flyby is Cassini’s deepest-ever dive through the jets. The encounter will allow Cassini to obtain the most accurate measurements yet of the plume’s composition, and new insights into the ocean world beneath the ice.

    NASA’s Cassini spacecraft will sample the ocean of Saturn’s moon Enceladus on Wednesday, Oct. 28, when it flies through the moon’s plume of icy spray.


    Cassini launched in 1997 and entered orbit around Saturn in 2004. Since then, it has been studying the huge planet, its rings and its magnetic field. Here are some things to know about the mission’s upcoming close flyby of Enceladus:


    1. Enceladus is an icy moon of Saturn. Early in its mission, Cassini discovered Enceladus has remarkable geologic activity, including a towering plume of ice, water vapor and organic molecules spraying from its south polar region. Cassini later determined the moon has a global ocean and likely hydrothermal activity, meaning it could have the ingredients needed to support simple life.

    2. The flyby will be Cassini’s deepest-ever dive through the Enceladus plume, which is thought to come from the ocean below. The spacecraft has flown closer to the surface of Enceladus before, but never this low directly through the active plume.

    3. The flyby is not intended to detect life, but it will provide powerful new insights about how habitable the ocean environment is within Enceladus.

    4. Cassini scientists are hopeful the flyby will provide insights about how much hydrothermal activity — that is, chemistry involving rock and hot water — is occurring within Enceladus. This activity could have important implications for the potential habitability of the ocean for simple forms of life. The critical measurement for these questions is the detection of molecular hydrogen by the spacecraft.

    5. Scientists also expect to better understand the chemistry of the plume as a result of the flyby. The low altitude of the encounter is, in part, intended to afford Cassini greater sensitivity to heavier, more massive molecules, including organics, than the spacecraft has observed during previous, higher-altitude passes through the plume.

    6. The flyby will help solve the mystery of whether the plume is composed of column-like, individual jets, or sinuous, icy curtain eruptions — or a combination of both. The answer would make clearer how material is getting to the surface from the ocean below.

    7. Researchers are not sure how much icy material the plumes are actually spraying into space. The amount of activity has major implications for how long Enceladus might have been active.






  • 19 December 2015: Cassini’s final targeted flyby will allow the spacecraft to measure heat flow from the moon’s interior at an altitude of 3,106 miles, or 4,999 kilometers.









– Credit and Resource –


NASA




Cassini’s Enceladus Final Flyby

Wednesday 21 October 2015

Physicists realize a quantum Hilbert hotel


When the light “petals” (quantum states with an infinite number of values representing the infinite number of hotel rooms) in the top row are multiplied by 3, the number of petals in the bottom row is tripled—analogous to “tripling infinity.” Credit: Václav Potoček, et al. ©2015 American Physical Society.


Scientia — In 1924, the mathematician David Hilbert described a hotel with an infinite number of rooms that are all occupied. Demonstrating the counterintuitive nature of infinity, he showed that the hotel could still accommodate additional guests. Although clearly no such brick-and-mortar hotel exists, in a new paper published in Physical Review Letters, physicists Václav Potoček, et al., have physically realized a quantum Hilbert hotel by using a beam of light.




In Hilbert’s thought experiment, he explained that additional rooms could be created in a hotel that already has an infinite number of rooms because the hotel manager could simply “shift” all of the current guests to a new room according to some rule, such as moving everyone up one room (to leave the first room empty) or moving everyone up to twice their current room number (to create an infinite number of empty rooms by leaving the odd-numbered rooms empty).
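
A tiny sketch makes the two shifting rules concrete (this is just illustrative arithmetic on room numbers, not anything from the paper):

    # Hilbert hotel room shifts, shown on the first few of infinitely many rooms.
    def shift_by_one(room: int) -> int:
        # Guest in room n moves to room n + 1, freeing room 1.
        return room + 1

    def shift_by_doubling(room: int) -> int:
        # Guest in room n moves to room 2n, freeing every odd-numbered room.
        return 2 * room

    rooms = range(1, 9)
    print([shift_by_one(n) for n in rooms])       # [2, 3, 4, 5, 6, 7, 8, 9]
    print([shift_by_doubling(n) for n in rooms])  # [2, 4, 6, 8, 10, 12, 14, 16]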


In their paper, the physicists proposed two ways to model this phenomenon—one theoretical and one experimental—both of which use the infinite number of quantum states of a quantum system to represent the infinite number of rooms in the hotel. The theoretical proposal uses the infinite number of energy levels of a particle in a potential well, and the experimental demonstration uses the infinite number of orbital angular momentum states of light.


The scientists showed that, even though there is initially an infinite number of these states (rooms), the states’ amplitudes (room numbers) can be remapped to twice their original values, producing an infinite number of additional states. On one hand, the phenomenon is counterintuitive: by doubling an infinite number of things, you get infinitely many more of them. And yet, as the physicists explain, it still makes sense because the total sum of the values of an infinite number of things can actually be finite.


“As far as there being an infinite amount of ‘something,’ it can make physical sense if the things we can measure are still finite,” coauthor Filippo Miatto, at the University of Waterloo and the University of Ottawa, told Phys.org. “For example, a coherent state of a laser mode is made with an infinite set of number states, but as the number of photons in each of the number states increases, the amplitudes decrease so at the end of the day when you sum everything up the total energy is finite. The same can hold for all of the other quantum properties, so no, it is not surprising to the trained eye.”
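
Miatto’s laser example can be made concrete with a standard textbook formula (this is general quantum optics, not a result of the paper): a coherent state \(|\alpha\rangle\) is a superposition of all infinitely many photon-number states \(|n\rangle\),

    \[
    |\alpha\rangle = e^{-|\alpha|^{2}/2} \sum_{n=0}^{\infty} \frac{\alpha^{n}}{\sqrt{n!}}\,|n\rangle,
    \qquad
    \sum_{n=0}^{\infty} |\langle n|\alpha\rangle|^{2} = 1,
    \qquad
    \langle \hat{n}\rangle = |\alpha|^{2},
    \]

so although infinitely many number states contribute, their amplitudes fall off fast enough that the total probability and the mean photon number (and hence the energy) stay finite, exactly as Miatto describes.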



Steps to realizing the theoretical version of a quantum analogue of the Hilbert hotel, which is done by remapping the amplitudes shown in (a) (which initially have an infinite number of values) to twice their original values, as shown in (f), by expanding and shrinking the potential well. Credit: Václav Potoček, et al. ©2015 American Physical Society


The physicists also showed that the remapping can be done not only by doubling, but also by tripling, quadrupling, etc., the states’ values. In the laser experiment, these procedures produce visible “petals” of light that correspond to the number that the states were multiplied by.


The ability to remap energy states in this way could also have applications in quantum and classical information processing, where, for example, it could be used to increase the number of states produced or to increase the information capacity of a channel.




– Credit and Resource –


More information: Václav Potoček, et al. “Quantum Hilbert Hotel.” Physical Review Letters. DOI: 10.1103/PhysRevLett.115.160505


Journal reference: Physical Review Letters




Physicists realize a quantum Hilbert hotel

Using ultrasound to improve drug delivery

New approach could aid in treatment of inflammatory bowel disease.


Scientia — Using ultrasound waves, researchers from MIT and Massachusetts General Hospital (MGH) have found a way to enable ultra-rapid delivery of drugs to the gastrointestinal (GI) tract. This approach could make it easier to deliver drugs to patients suffering from GI disorders such as inflammatory bowel disease, ulcerative colitis, and Crohn’s disease, the researchers say.


Currently, such diseases are usually treated with drugs administered as an enema, which must be maintained in the colon for hours while the drug is absorbed. However, this can be difficult for patients who are suffering from diarrhea and incontinence. To overcome that, the researchers sought a way to stimulate more rapid drug absorption.






“We’re not changing how you administer the drug. What we are changing is the amount of time that the formulation needs to be there, because we’re accelerating how the drug enters the tissue,” says Giovanni Traverso, a research affiliate at MIT’s Koch Institute for Integrative Cancer Research, a gastroenterologist at MGH, and one of the senior authors of a paper describing the technique in the Oct. 21 issue of Science Translational Medicine.


“With additional research, our technology could prove invaluable in both clinical and research settings, enabling improved therapies and expansion of research techniques applied to the GI tract. It demonstrates for the first time the active administration of drugs, including biologics, through the GI tract,” says Daniel Blankschtein, the Hermann P. Meissner Professor in Chemical Engineering, who is also a senior author of the paper.


Robert Langer, the David H. Koch Institute Professor at MIT and a member of the Koch Institute, is also a senior author of the paper. The study’s lead author is Carl Schoellhammer, a graduate student in chemical engineering.



See how researchers use ultrasound waves to deliver drugs to the gastrointestinal tract. Video: Melanie Gonick/MIT (animation courtesy of the researchers)


Enhanced delivery

Langer began exploring the possibility of using ultrasound to enhance drug delivery 30 years ago. In 1995, he and Blankschtein reported in Science that ultrasound could enable delivery of drugs through the skin, but until now it had not been explored in the GI tract.


“We’ve been working on ultrasound as a means to enhance transport through materials and skin since the mid-1980s, and I think the implications of this new approach have the potential to aid many patients,” Langer says.


Ultrasound improves drug delivery by a mechanism known as transient cavitation. When a fluid is exposed to sound waves, the waves induce the formation of tiny bubbles that implode and create microjets that can penetrate and push medication into tissue.


In this study, the researchers first tested their new approach in the pig GI tract, where they found that applying ultrasound greatly increased absorption of both insulin, a large protein, and mesalamine, a smaller molecule often used to treat colitis.


“Demonstrating delivery of molecules with a wide range of sizes, including active biologics, underscores the potentially broad areas in which this technology could be applied,” says Schoellhammer, who won the $15,000 Lemelson-MIT “Cure it!” Student Prize earlier this year for this research and for a microneedle pill that delivers drugs directly into GI tissue. The team also reached the finals of the MIT $100K Entrepreneurship Competition.




Better treatment

The researchers next investigated whether ultrasound-enhanced drug delivery could effectively treat disease in animals.


In tests in mice, the researchers found that they could resolve colitis symptoms by delivering mesalamine followed by one second of ultrasound every day for two weeks. Giving this treatment every other day also helped, but delivering the drug without ultrasound had no effect.


They also showed that ultrasound-enhanced delivery of insulin effectively lowered blood sugar levels in pigs.


“The capabilities of the technology are quite well demonstrated. This technology has great utility in localized as well as systemic delivery of drugs,” says Samir Mitragotri, a professor of systems biology and bioengineering at the University of California at Santa Barbara. Mitragotri was also an author of the 1995 Science paper on ultrasound drug delivery but was not involved in the new research.


While inflammatory GI diseases are an obvious first target for this type of drug delivery, it could also be used to administer drugs for colon cancer or infections of the GI tract, Traverso says. The researchers are now performing additional animal studies to help them optimize the ultrasound device and prepare it for testing in human patients.




– Credit and Resource –


Anne Trafton | MIT News Office




Using ultrasound to improve drug delivery

Thursday 15 October 2015

How the brain controls sleep

Brain structure generates pockets of sleep within the brain.


Scientia — Sleep is usually considered an all-or-nothing state: The brain is either entirely awake or entirely asleep. However, MIT neuroscientists have discovered a brain circuit that can trigger small regions of the brain to fall asleep or become less alert, while the rest of the brain remains awake.




This circuit originates in a brain structure known as the thalamic reticular nucleus (TRN), which relays signals to the thalamus and then the brain’s cortex, inducing pockets of the slow, oscillating brain waves characteristic of deep sleep. Slow oscillations also occur during coma and general anesthesia, and are associated with decreased arousal. With enough TRN activity, these waves can take over the entire brain.






The researchers believe the TRN may help the brain consolidate new memories by coordinating slow waves between different parts of the brain, allowing them to share information more easily.


“During sleep, maybe specific brain regions have slow waves at the same time because they need to exchange information with each other, whereas other ones don’t,” says Laura Lewis, a research affiliate in MIT’s Department of Brain and Cognitive Sciences and one of the lead authors of the new study, which appears today in the journal eLife.


The TRN may also be responsible for what happens in the brain when sleep-deprived people experience brief sensations of “zoning out” while struggling to stay awake, the researchers say.


The paper’s other first author is Jakob Voigts, an MIT graduate student in brain and cognitive sciences. Senior authors are Emery Brown, the Edward Hood Taplin Professor of Medical Engineering and Computational Neuroscience at MIT and an anesthesiologist at Massachusetts General Hospital, and Michael Halassa, an assistant professor at New York University. Other authors are MIT research affiliate Francisco Flores and Matthew Wilson, the Sherman Fairchild Professor in Neurobiology and a member of MIT’s Picower Institute for Learning and Memory.


Local control

Until now, most sleep research has focused on global control of sleep, which occurs when the entire brain is awash in slow waves — oscillations of brain activity created when sets of neurons are silenced for brief periods.


However, recent studies have shown that sleep-deprived animals can exhibit slow waves in parts of their brain while they are still awake, suggesting that the brain can also control alertness at a local level.




The MIT team began its investigation of local control of alertness or drowsiness with the TRN because its physical location makes it perfectly positioned to play a role in sleep, Lewis says. The TRN surrounds the thalamus like a shell and can act as a gatekeeper for sensory information entering the thalamus, which then sends information to the cortex for further processing.


Using optogenetics, a technique that allows scientists to stimulate or silence neurons with light, the researchers found that if they weakly stimulated the TRN in awake mice, slow waves appeared in a small part of the cortex. With more stimulation, the entire cortex showed slow waves.




“We also found that when you induce these slow waves across the cortex, animals start to behaviorally act like they’re drowsy. They’ll stop moving around, their muscle tone will go down,” Lewis says.


The researchers believe the TRN fine-tunes the brain’s control over local brain regions, enhancing or reducing slow waves in certain regions so those areas can communicate with each other, or inducing some areas to become less alert when the brain is very drowsy. This may explain what happens in humans when they are sleep-deprived and momentarily zone out without really falling asleep.


“I’m inclined to think that happens because the brain begins to transition into sleep, and some local brain regions become drowsy even if you force yourself to stay awake,” Lewis says.


“The strength of this paper is that it’s the first to use optogenetics to try to dissect the role of part of the thalamo-cortical circuitry in generating slow waves in the cortex,” says Mark Opp, a professor of anesthesiology and pain medicine at the University of Washington who was not part of the research team.




Natural sleep and general anesthesia

Understanding how the brain controls arousal could help researchers design new sleep and anesthetic drugs that create a state more similar to natural sleep. Stimulating the TRN can induce deep, non-REM-like sleep states, and previous research by Brown and colleagues uncovered a circuit that turns on REM sleep.


Brown adds, “The TRN is rich in synapses — connections in the brain — that release the inhibitory neurotransmitter GABA. Therefore, the TRN is almost certainly a site of action of many anesthetic drugs, given that a large class of them act at these synapses and produce slow waves as one of their characteristic features.”


Previous work by Lewis and colleagues has shown that unlike the slow waves of sleep, the slow waves under general anesthesia are not coordinated, suggesting a mechanism for why these drugs impair information exchange in the brain and produce unconsciousness.




– Credit and Resource –


Anne Trafton | MIT News Office




How the brain controls sleep

Latest Experiment at The Large Hadron Collider

Latest experiment at the Large Hadron Collider reports first results


Scientists precisely count particles produced in a typical proton collision.


Particles created from the proton collision stream out from the center of the Compact Muon Solenoid detector. They are first detected by the Silicon Tracker, whose data can be used to reconstruct the particle trajectories, indicated by yellow lines. An Electromagnetic Calorimeter detects energy deposited by electrons and photons, indicated by green boxes. The energy detected by the Hadronic Calorimeter, the primary component of jets, is indicated by blue boxes. Particles reaching the outermost parts of the detector are indicated in red.


Scientia — After a two-year hiatus, the Large Hadron Collider, the largest and most powerful particle accelerator in the world, began its second run of experiments in June, smashing together subatomic particles at 13 teraelectronvolts (TeV) — the highest energy ever achieved in a laboratory. Physicists hope that such high-energy collisions may produce completely new particles, and potentially simulate the conditions that were seen in the early universe.






In a paper to appear in the journal Physics Letters B, the Compact Muon Solenoid (CMS) collaboration at the European Organization for Nuclear Research (CERN) reports on the run’s very first particle collisions, and describes what an average collision between two protons looks like at 13 TeV. One of the study leaders is MIT assistant professor of physics Yen-Jie Lee, who leads MIT’s Relativistic Heavy Ion Group, together with physics professors Gunther Roland and Bolek Wyslouch.


In the experimental run, researchers sent two proton beams hurtling in opposite directions around the collider at close to the speed of light. Each beam contained 476 bunches of 100 billion protons, with collisions between protons occurring every 50 nanoseconds. The team analyzed 20 million “snapshots” of the interacting proton beams, and identified 150,000 events containing proton-proton collisions.
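
As a quick sanity check on those numbers (the arithmetic is mine, not the paper’s):

    \[
    \frac{150{,}000\ \text{selected collision events}}{20 \times 10^{6}\ \text{snapshots}} \approx 0.75\%,
    \]

so fewer than one snapshot in a hundred contained an identified proton-proton collision in this early run.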


For each collision that the researchers identified, they determined the number and angle of particles scattered from the colliding protons. The average proton collision produced about 22 charged particles known as hadrons, which were mainly scattered along the transverse plane, immediately around the main collision point.


Compared with the collider’s first run, at a collision energy of 7 TeV, the recent experiment at 13 TeV produced 30 percent more particles per collision.


Lee says the results support the theory that higher-energy collisions may increase the chance of finding new particles. The results also provide a precise picture of a typical proton collision — a picture that may help scientists sift through average events looking for atypical particles.


“At this high intensity, we will observe hundreds of millions of collisions each second,” Lee says. “But the problem is, almost all of these collisions are typical background events. You really need to understand the background well, so you can separate it from the signals for new physics effects. Now we’ve prepared ourselves for the potential discovery of new particles.”


Shrinking the uncertainty of tiny collisions

Normally, 13 TeV is not a large amount of energy — about that expended by a flying mosquito. But when that energy is packed into a single proton, less than a trillionth the size of a mosquito, that particle’s energy density becomes enormous. When two such energy-packed protons smash into each other, they can knock off constituents from each proton — either quarks or gluons — that may, in turn, interact to produce entirely new particles.
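
To put a number on that comparison (the unit conversion is standard, not from the article):

    \[
    13\ \mathrm{TeV} = 13 \times 10^{12} \times 1.602 \times 10^{-19}\ \mathrm{J} \approx 2.1 \times 10^{-6}\ \mathrm{J},
    \]

about two microjoules: a macroscopically tiny amount of energy, but enormous when squeezed into a single proton roughly \(10^{-15}\) m across.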


Predicting the number of particles produced by a proton collision could help scientists determine the probability of detecting a new particle. However, existing models generate predictions with an uncertainty of 30 to 40 percent. That means that for high-energy collisions that produce a large number of particles, the uncertainty of detecting rare particles can be a considerable problem.


“For high-luminosity runs, you might have up to 100 collisions, and the uncertainty of the background level, based on existing models, would be very big,” Lee says.


To shrink this uncertainty and more precisely count the number of particles produced in an average proton collision, Lee and his team used the Large Hadron Collider’s CMS detector. The detector is built around a massive magnet that can generate a field that’s 100,000 times stronger than the Earth’s magnetic field.


Typically, a magnetic field acts to bend the charged particles produced in proton collisions, and this bending allows scientists to measure a particle’s momentum. However, an average collision mostly produces lightweight particles with very low momentum — particles that, in a magnetic field, end up coiling their way toward the main collider’s beam pipe instead of bending toward the CMS detector.
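
The relation behind that momentum measurement is the standard formula for a charged particle curving in a magnetic field (general accelerator physics, not something stated in the article): for a particle of unit charge,

    \[
    p_{T}\,[\mathrm{GeV}/c] \;\approx\; 0.3\, B\,[\mathrm{T}]\; r\,[\mathrm{m}],
    \]

so in a field of a few tesla, a particle with transverse momentum well below a GeV curls up with a radius of tens of centimetres or less. That is why the softest particles spiral toward the beam pipe rather than reaching the outer detector layers, and why the counting described next was done with the magnet switched off.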




To count these charged, lightweight particles, the scientists analyzed the data with the detector’s magnet off. While they couldn’t measure the particles’ momentum, they could precisely count the number of charged particles, and measure the angles at which they arrived at the detector. The measurements, Lee says, give a more accurate picture of an average proton collision, compared with existing theoretical models.


“Our measurement actually shrinks the uncertainty dramatically, to just a few percent,” Lee says.


Simulating the early universe

Knowing what a typical proton collision looks like will help scientists set the collider to essentially see through the background of average events, to more efficiently detect rare particles.


Lee says the new results may also have a significant impact on the study of the hot and dense medium from the early universe. In addition to proton collisions, scientists also plan to study the highest-energy collisions of lead ions, each of which contain 208 protons and neutrons. When accelerated in a collider, lead ions flatten into disks due to a force called the Lorentz contraction. When smashed together, lead ions can generate hundreds of interactions between protons and produce an extremely dense medium that is thought to mimic the conditions of space just after the Big Bang. In this way, the Large Hadron Collider experiment could potentially simulate the condition of the very first moments of the early universe.


“One microsecond after the Big Bang, the universe was very dense and hot — about 1 trillion degrees,” Lee says. “With lead ion collisions, we can reproduce the early universe in a ‘small bang.’ If we can understand what one proton collision looks like, we may be able to get some more insights about what will happen when hundreds of them occur at the same time. Then we can see what we can learn about the early universe.”


This research was funded, in part, by the U.S. Department of Energy.


– Credit and Resource –


Jennifer Chu | MIT News Office




Latest Experiment at The Large Hadron Collider