Friday 8 January 2016

A New Way to Store Solar Heat

A new way to store solar heat. Material could harvest sunlight by day, release heat on demand hours or days later.



The layer-by-layer solar thermal fuel polymer film comprises three distinct layers (4 to 5 microns in thickness for each). Cross-linking after each layer enables building up films of tunable thickness.
Courtesy of the researchers


Scientia — Imagine if your clothing could, on demand, release just enough heat to keep you warm and cozy, allowing you to dial back on your thermostat settings and stay comfortable in a cooler room. Or, picture a car windshield that stores the sun’s energy and then releases it as a burst of heat to melt away a layer of ice.

According to a team of researchers at MIT, both scenarios may be possible before long, thanks to a new material that can store solar energy during the day and release it later as heat, whenever it’s needed. This transparent polymer film could be applied to many different surfaces, such as window glass or clothing.





Although the sun is a virtually inexhaustible source of energy, it’s only available about half the time we need it — during daylight. For the sun to become a major power provider for human needs, there has to be an efficient way to save it up for use during nighttime and stormy days. Most such efforts have focused on storing and recovering solar energy in the form of electricity, but the new finding could provide a highly efficient method for storing the sun’s energy through a chemical reaction and releasing it later as heat.


The finding, by MIT professor Jeffrey Grossman, postdoc David Zhitomirsky, and graduate student Eugene Cho, is described in a paper in the journal Advanced Energy Materials. The key to enabling long-term, stable storage of solar heat, the team says, is to store it in the form of a chemical change rather than storing the heat itself. Whereas heat inevitably dissipates over time no matter how good the insulation around it, a chemical storage system can retain the energy indefinitely in a stable molecular configuration, until its release is triggered by a small jolt of heat (or light or electricity).



In the researchers’ platform for testing macroscopic heat release, a heating element provides sufficient energy to trigger the solar thermal fuel materials, while an infrared camera monitors the temperature. The charged film (right) releases heat enabling a higher temperature relative to the uncharged film (left).
Courtesy of the researchers


Molecules with two configurations

The key is a molecule that can remain stable in either of two different configurations. When exposed to sunlight, the energy of the light kicks the molecules into their “charged” configuration, and they can stay that way for long periods. Then, when triggered by a very specific temperature or other stimulus, the molecules snap back to their original shape, giving off a burst of heat in the process.
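The two-configuration mechanism can be sketched as a toy state machine (illustrative only; the energy value is arbitrary, not a measured property of the real molecules):

```python
# Toy model of a solar thermal fuel (STF) molecule with two stable
# configurations. The energy value is illustrative, not a measured
# property of the real azobenzene materials.
class STFMolecule:
    STORED_ENERGY = 1.0  # arbitrary energy units per molecule

    def __init__(self):
        self.charged = False  # starts in the low-energy configuration

    def absorb_light(self):
        # Sunlight kicks the molecule into its metastable "charged" shape.
        self.charged = True

    def trigger(self):
        # A small stimulus (heat, light, or electricity) snaps the molecule
        # back to its original shape, releasing the stored energy as heat.
        if not self.charged:
            return 0.0
        self.charged = False
        return self.STORED_ENERGY

m = STFMolecule()
m.absorb_light()
print(m.trigger())  # 1.0 -- the stored burst of heat
print(m.trigger())  # 0.0 -- already discharged; must be recharged by light
```

Triggering a second time releases nothing, mirroring the article's point that the energy comes out once and the material must then be recharged by light.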


Such chemically based storage materials, known as solar thermal fuels (STF), have been developed before, including in previous work by Grossman and his team. But those earlier efforts “had limited utility in solid-state applications” because they were designed to be used in liquid solutions and were not capable of forming durable solid-state films, Zhitomirsky says. The new approach is the first based on a solid-state material, in this case a polymer, and the first based on inexpensive materials and widespread manufacturing technology.


“This work presents an exciting avenue for simultaneous energy harvesting and storage within a single material,” says Ted Sargent, University Professor at the University of Toronto, who was not involved in this research.


Manufacturing the new material requires just a two-step process that is “very simple and very scalable,” says Cho. The system is based on previous work that was aimed at developing a solar cooker that could store solar heat for cooking after sundown, but “there were challenges with that,” he says. The team realized that if the heat-storing material could be made in the form of a thin film, then it could be “incorporated into many different materials,” he says, including glass or even fabric.


To make the film capable of storing a useful amount of heat, and to ensure that it could be manufactured easily and reliably, the team started with materials called azobenzenes that change their molecular configuration in response to light. The azobenzenes can then be stimulated by a tiny pulse of heat to revert to their original configuration and release much more heat in the process. The researchers modified the material’s chemistry to improve its energy density — the amount of energy that can be stored for a given weight — its ability to form smooth, uniform layers, and its responsiveness to the activating heat pulse.
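The energy-density figure of merit mentioned here is simply stored energy divided by mass; a minimal sketch with hypothetical numbers (not the paper's measured values):

```python
def energy_density_j_per_g(stored_energy_joules, mass_grams):
    """Energy density: energy stored per unit mass, in J/g."""
    return stored_energy_joules / mass_grams

# A hypothetical 2 g film storing 200 J of chemical energy:
print(energy_density_j_per_g(200.0, 2.0))  # 100.0 J/g
```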



A spin-coating process enables deposition of the solar thermal fuel polymer material from solution. The film can then be readily charged with ultraviolet light. This approach can be extended to a variable thickness in a layer-by-layer process.
Courtesy of the researchers





Shedding the ice

The material they ended up with is highly transparent, which could make it useful for de-icing car windshields, says Grossman, the Morton and Claire Goulder and Family Professor in Environmental Systems and a professor of materials science and engineering. While many cars already have fine heating wires embedded in rear windows for that purpose, anything that blocks the view through the front window is forbidden by law, even thin wires. But a transparent film made of the new material, sandwiched between two layers of glass — as is currently done with bonding polymers to prevent pieces of broken glass from flying around in an accident — could provide the same de-icing effect without any blockage. German auto company BMW, a sponsor of this research, is interested in that potential application, he says.


With such a window, energy would be stored in the polymer every time the car sits out in the sunlight. Then, “when you trigger it,” using just a small amount of heat that could be provided by a heating wire or puff of heated air, “you get this blast of heat,” Grossman says. “We did tests to show you could get enough heat to drop ice off a windshield.” Accomplishing that, he explains, doesn’t require that all the ice actually be melted, just that the ice closest to the glass melts enough to provide a layer of water that releases the rest of the ice to slide off by gravity or be pushed aside by the windshield wipers.



A heating element is used to provide sufficient energy to trigger the solar thermal fuel materials, while an infrared camera monitors the temperature. (View full screen to see animation.)
Courtesy of the researchers


The team is continuing to work on improving the film’s properties, Grossman says. The material currently has a slight yellowish tinge, so the researchers are working on improving its transparency. And it can release a burst of about 10 degrees Celsius above the surrounding temperature — sufficient for the ice-melting application — but they are trying to boost that to 20 degrees.


Already, the system as it exists now might be a significant boon for electric cars, which devote so much energy to heating and de-icing that their driving ranges can drop by 30 percent in cold conditions. The new polymer could significantly reduce that drain, Grossman says.


“The approach is innovative and distinctive,” says Sargent, from the University of Toronto. “The research is a major advance towards the practical application of solid-state energy-storage/heat-release materials from both a scientific and engineering point of view.”




– Credit and Resource –


Written by: David L. Chandler | MIT News Office


Provided by: The work was supported by an NSERC Canada Banting Fellowship and by BMW.





Massive Galaxy Cluster Identified

Most distant massive galaxy cluster identified. Formed in the first 4 billion years of the universe, cluster is 1,000 times more massive than the Milky Way.



Astronomers have detected a massive, sprawling, churning galaxy cluster that formed only 3.8 billion years after the Big Bang. The cluster, shown here, is the most massive cluster of galaxies yet discovered in the first 4 billion years after the Big Bang.
Image: NASA, European Space Agency, University of Florida, University of Missouri, and University of California


Scientia — The early universe was a chaotic mess of gas and matter that only began to coalesce into distinct galaxies hundreds of millions of years after the Big Bang. It would take several billion more years for such galaxies to assemble into massive galaxy clusters — or so scientists had thought.

Now astronomers at MIT, the University of Missouri, the University of Florida, and elsewhere, have detected a massive, sprawling, churning galaxy cluster that formed only 3.8 billion years after the Big Bang. Located 10 billion light years from Earth and potentially comprising thousands of individual galaxies, the megastructure is about 250 trillion times more massive than the sun, or 1,000 times more massive than the Milky Way galaxy.





The cluster, named IDCS J1426.5+3508 (or IDCS 1426), is the most massive cluster of galaxies yet discovered in the first 4 billion years after the Big Bang.


IDCS 1426 appears to be undergoing a substantial amount of upheaval: The researchers observed a bright knot of X-rays, slightly off-center in the cluster, indicating that the cluster’s core may have shifted some hundred thousand light years from its center. The scientists surmise that the core may have been dislodged from a violent collision with another massive galaxy cluster, causing the gas within the cluster to slosh around, like wine in a glass that has been suddenly moved.


Michael McDonald, assistant professor of physics and a member of MIT’s Kavli Center for Astrophysics and Space Research, says such a collision may explain how IDCS 1426 formed so quickly in the early universe, at a time when individual galaxies were only beginning to take shape.


“In the grand scheme of things, galaxies probably didn’t start forming until the universe was relatively cool, and yet this thing has popped up very shortly after that,” McDonald says. “Our guess is that another similarly massive cluster came in and sort of wrecked the place up a bit. That would explain why this is so massive and growing so quickly. It’s the first one to the gate, basically.”


McDonald and his colleagues presented their results this week at the 227th American Astronomical Society meeting in Kissimmee, Florida. Their findings will also be published in The Astrophysical Journal.



To get a more precise estimate of the galaxy cluster’s mass, Michael McDonald and his colleagues used data from the Hubble Space Telescope, the Keck Observatory, and the Chandra X-ray Observatory.
Courtesy of the researchers


“Cities in space”

Galaxy clusters are conglomerations of hundreds to thousands of galaxies bound together by gravity. They are the most massive structures in the universe, and those located relatively nearby, such as the Virgo cluster, are extremely bright and easy to spot in the sky.


“They are sort of like cities in space, where all these galaxies live very closely together,” McDonald says. “In the nearby universe, if you look at one galaxy cluster, you’ve basically seen them all — they all look pretty uniform. The further back you look, the more different they start to appear.”


However, finding galaxy clusters that are farther away in space — and further back in time — is a difficult and uncertain exercise.


In 2012, scientists using NASA’s Spitzer Space Telescope first detected signs of IDCS 1426 and made some initial estimates of its mass.


“We had some sense of how massive and distant it was, but we weren’t fully convinced,” McDonald says. “These new results are the nail in the coffin that proves that it is what we initially thought.”





“Tip of the iceberg”

To get a more precise estimate of the galaxy cluster’s mass, McDonald and his colleagues used data from the Hubble Space Telescope, the Keck Observatory, and the Chandra X-ray Observatory.


“We were basically using three completely different methods to weigh this cluster,” McDonald explains.


Both the Hubble and Keck Observatories recorded optical data from the cluster, which the researchers analyzed to determine the amount of light that was bending around the cluster as a result of gravity — a phenomenon known as gravitational lensing. The more massive the cluster, the more gravitational force it exerts, and the more light it bends.
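The qualitative claim here can be anchored to the standard deflection formula for a point-mass lens (textbook physics, not a result of this paper):

```latex
\hat{\alpha} = \frac{4\,G\,M}{c^{2}\,b}
```

where \(\hat{\alpha}\) is the deflection angle, \(M\) the lensing mass, and \(b\) the impact parameter of the light ray: doubling the mass doubles the bending.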


They also examined X-ray data from the Chandra Observatory to get a sense of the temperature of the cluster. High-temperature objects give off X-rays, and the hotter a galaxy cluster, the more the gas within that cluster has been compressed, making the cluster more massive.
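The step from gas temperature to mass rests on the standard hydrostatic-equilibrium estimate (again textbook, not specific to this study):

```latex
M(<r) = -\,\frac{k_{\mathrm B}\,T\,r}{G\,\mu\,m_{\mathrm p}}
\left( \frac{\mathrm d\ln\rho_{\mathrm{gas}}}{\mathrm d\ln r}
     + \frac{\mathrm d\ln T}{\mathrm d\ln r} \right)
```

where \(T\) is the gas temperature, \(\rho_{\mathrm{gas}}\) the gas density, \(\mu\) the mean molecular weight, and \(m_{\mathrm p}\) the proton mass: hotter gas at a given radius implies a deeper gravitational well and hence a larger enclosed mass \(M(<r)\).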


From the X-ray data, McDonald and his colleagues also calculated the amount of gas in the cluster, which can be an indication of the amount of matter — and mass — in the cluster.


Using all three methods, the group calculated roughly the same mass — about 250 trillion times the mass of the sun. Now, the team is looking for individual galaxies within the cluster to get a sense for how such megastructures can form in the early universe.


“This cluster is sort of like a construction site — it’s messy, loud, and dirty, and there’s a lot that’s incomplete,” McDonald says. “By seeing that incompleteness, we can get a sense for how [clusters] grow. So far, we’ve confirmed about a dozen or so galaxies, but we’re just seeing the tip of the iceberg, really.”


He hopes that scientists may get an even better view of IDCS 1426 in 2018, with the launch of the James Webb Space Telescope — an infrared telescope that is hundreds of times more sensitive than the Spitzer Telescope that first detected the cluster.


“People had kind of put away this idea of finding clusters in the optical and infrared, in favor of X-ray and radio signatures,” McDonald says. “We’re now re-emerging and saying it’s actually a fantastic way of finding clusters. It suggests that maybe we need to branch out a little more in how we find these things.”




– Credit and Resource –


Written by: Jennifer Chu | MIT News Office


Provided and researched by: NASA





Saturday 2 January 2016

Atom-Sized Pieces Into Practical Products

Program Seeks Ability to Assemble Atom-Sized Pieces Into Practical Products. DARPA selects 10 performers to develop technologies that bridge the existing manufacturing gap between nanometer-scale pieces and millimeter-scale components.


Image caption: Microscopic tools such as this nanoscale “atom writer” can be used to fabricate minuscule light-manipulating structures on surfaces. DARPA has selected 10 performers for its Atoms to Product (A2P) program whose goal is to develop technologies and processes to assemble nanometer-scale pieces—whose dimensions are near the size of atoms—into systems, components, or materials that are at least millimeter-scale in size. (Image credit: Boston University)
Scientia — DARPA recently launched its Atoms to Product (A2P) program, with the goal of developing technologies and processes to assemble nanometer-scale pieces—whose dimensions are near the size of atoms—into systems, components, or materials that are at least millimeter-scale in size. At the heart of that goal is a frustrating reality: Many common materials, when fabricated at nanometer-scale, exhibit unique and attractive “atomic-scale” behaviors, including quantized current-voltage behavior, dramatically lower melting points, and significantly higher specific heats—but they tend to lose these potentially beneficial traits when they are manufactured at larger “product-scale” dimensions, typically on the order of a few centimeters, for integration into devices and systems.





DARPA recently selected 10 performers to tackle this challenge: Zyvex Labs, Richardson, Texas; SRI, Menlo Park, California; Boston University, Boston, Massachusetts; University of Notre Dame, South Bend, Indiana; HRL Laboratories, Malibu, California; PARC, Palo Alto, California; Embody, Norfolk, Virginia; Voxtel, Beaverton, Oregon; Harvard University, Cambridge, Massachusetts; and Draper Laboratory, Cambridge, Massachusetts.


“The ability to assemble atomic-scale pieces into practical components and products is the key to unlocking the full potential of micromachines,” said John Main, DARPA program manager. “The DARPA Atoms to Product Program aims to bring the benefits of microelectronic-style miniaturization to systems and products that combine mechanical, electrical, and chemical processes.”


The program calls for closing the assembly gap in two steps: From atoms to microns and from microns to millimeters. Performers are tasked with addressing one or both of these steps and have been assigned to one of three working groups, each with a distinct focus area.







– Credit and Resource –


DARPA





Hubble Views Two Galaxies Merging

Galaxies Merging



This image, taken with the Wide Field Planetary Camera 2 on board the NASA/ESA Hubble Space Telescope, shows the galaxy NGC 6052, located around 230 million light-years away in the constellation of Hercules.


Scientia — This image, taken with the Wide Field Planetary Camera 2 on board the NASA/ESA Hubble Space Telescope, shows the galaxy NGC 6052, located around 230 million light-years away in the constellation of Hercules.

It would be reasonable to think of this as a single abnormal galaxy, and it was originally classified as such. However, it is in fact a “new” galaxy in the process of forming. Two separate galaxies have been gradually drawn together, attracted by gravity, and have collided. We now see them merging into a single structure.


As the merging process continues, individual stars are thrown out of their original orbits and placed onto entirely new paths, some very distant from the region of the collision itself. Since the stars produce the light we see, the “galaxy” now appears to have a highly chaotic shape. Eventually, this new galaxy will settle down into a stable shape, which may not resemble either of the two original galaxies.





What is the Wide Field Planetary Camera 2?

The Wide Field and Planetary Camera 2 (WFPC2) was Hubble’s workhorse camera for many years. It recorded images through a selection of 48 colour filters covering a spectral range from far-ultraviolet to visible and near-infrared wavelengths. The ‘heart’ of WFPC2 consisted of an L-shaped trio of wide-field sensors and a smaller, high resolution (Planetary) Camera placed at the square’s remaining corner.


WFPC2 produced most of the stunning images that have been released as public outreach images over the years. Its resolution and excellent quality were some of the reasons that WFPC2 was the most used instrument in the first 13 years of Hubble’s life.


WFPC2 was replaced by WFC3 during Servicing Mission 4 in 2009.


After being brought down by the Space Shuttle, WFPC2 was put on display at the Smithsonian National Air and Space Museum in Washington DC, alongside parts from its predecessor, WFPC1.



WFPC2 is being readied for insertion into Hubble during the First Servicing Mission.



This colourful image from the Hubble Space Telescope shows the collision of two gases near a dying star. Astronomers have dubbed the tadpole-like objects in the upper right-hand corner ‘cometary knots’ because their glowing heads and gossamer tails resemble comets.
Credit: Robert O’Dell, Kerry P. Handron (Rice University, Houston, Texas) and NASA/ESA





– Credit and Resource –


Image credit: ESA/Hubble & NASA, Acknowledgement: Judy Schmidt
Text credit: European Space Agency





Machines that learn like people

Machines that learn like people. Algorithms could learn to recognize objects from a few examples, not millions; may better model human cognition.




Scientia — Object-recognition systems are beginning to get pretty good — and in the case of Facebook’s face-recognition algorithms, frighteningly good.


But object-recognition systems are typically trained on millions of visual examples, which is a far cry from how humans learn. Show a human two or three pictures of an object, and he or she can usually identify new instances of it.


Four years ago, Tomaso Poggio’s group at MIT’s McGovern Institute for Brain Research began developing a new computational model of visual representation, intended to reflect what the brain actually does. And in a forthcoming issue of the journal Theoretical Computer Science, the researchers prove that a machine-learning system based on their model could indeed make highly reliable object discriminations on the basis of just a few examples.





In both that paper and another that appeared in October in PLOS Computational Biology, they also show that aspects of their model accord well with empirical evidence about how the brain works.


“If I am given an image of your face from a certain distance, and then the next time I see you, I see you from a different distance, the image is quite different, and simple ways to match it don’t work,” says Poggio, the Eugene McDermott Professor in the Brain Sciences in MIT’s Department of Brain and Cognitive Sciences. “In order to solve this, you either need a lot of examples — I need to see your face not only in one position but in all possible positions — or you need an invariant representation of an object.”


An invariant representation of an object is one that’s immune to differences such as size, location, and rotation within the plane. Computer vision researchers have proposed several techniques for invariant object representation, but Poggio’s group had the further challenge of finding an invariant representation that was consistent with what we know about the brain’s machinery.


What nerves compute

Nerve cells, or neurons, are long, thin cells with branching ends. In the cerebral cortex, which is where visual processing happens, each neuron has about 10,000 branches at each end.


Two cortical neurons thus communicate with each other across 10,000 distinct chemical junctions, known as synapses. Each synapse has its own “weight,” a factor by which it multiplies the strength of an incoming signal. The signals crossing all 10,000 synapses are then added together in the body of the neuron. Patterns of stimulation and electrical activity change the weights of synapses over time, which is the mechanism by which habits and memories become ingrained.


A key operation in the branch of mathematics known as linear algebra is the dot-product, which takes two sequences of numbers — or vectors — multiplies their elements together in an orderly way, and adds up the results to yield a single number. In the cortex, the output of a single neural circuit could thus be thought of as the dot-product of two 10,000-variable vectors. That’s a very large calculation that each neuron in the brain can do at a stroke.
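The neuron-as-dot-product picture can be written out in a few lines (a toy sketch with four synapses standing in for 10,000; the numbers are made up):

```python
# A neuron's output modeled as a dot-product: each incoming signal is
# multiplied by its synapse's weight, and the products are summed in
# the body of the neuron.
def neuron_output(signals, weights):
    assert len(signals) == len(weights)
    return sum(s * w for s, w in zip(signals, weights))

# Toy example with 4 synapses instead of 10,000.
signals = [1, 2, 3]    # incoming signal strengths
weights = [4, -1, 2]   # synaptic weights
print(neuron_output(signals, weights))  # 1*4 + 2*(-1) + 3*2 = 8
```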


Poggio’s group developed an invariant representation of objects that’s based on dot-products. Suppose that you make a little digital movie of an object rotating 360 degrees in a plane — say, 24 frames, each depicting the object as rotated a little bit further than it was in the last one. You store the movie as a sequence of 24 stills.


Suppose next that you’re presented with a digital image of an unfamiliar object. Because the image can be interpreted as a string of numbers describing the color values of pixels — a vector — you can calculate its dot-product with each of the stills from your movie and store that sequence of 24 numbers.


Invariance

Now, if you’re presented with an image of the same object rotated, say, 90 degrees, and you calculate its dot-product with your sequence of stills, you’ll get the same 24 numbers. They won’t be in the same order: What was the dot-product with the first still will now be the dot-product with the seventh. But they’ll be the same numbers.


That list of numbers, then, is a representation of the new object that is invariant to rotation. Similar sequences of stills, which depict an object at various sizes, or at various locations around the frame, will yield sequences of dot-products that are invariant to size and location.
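The argument can be checked with a toy model in which rotating the object by one 15-degree step cyclically shifts its pixel vector (a sketch of the idea only, not the authors' implementation):

```python
# Toy model: an "image" is a list of numbers, and rotating the object by
# one 15-degree step cyclically shifts that list. The 24 stored "stills"
# are the base object at each of its 24 rotations.
def rotate(vec, steps):
    steps %= len(vec)
    return vec[steps:] + vec[:steps]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

base = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5, 8,
        9, 7, 9, 3, 2, 3, 8, 4, 6, 2, 6, 4]   # a made-up 24-pixel object
stills = [rotate(base, k) for k in range(24)]  # the stored 24-frame "movie"

def signature(image):
    # The 24 dot-products of the image against the stored stills.
    return [dot(image, s) for s in stills]

sig0 = signature(base)              # object as originally seen
sig90 = signature(rotate(base, 6))  # same object rotated 90° (6 of 24 frames)

assert sig0 != sig90                  # the order of the numbers changed...
assert sorted(sig0) == sorted(sig90)  # ...but they are the same 24 numbers
assert sig90[6] == sig0[0]            # the first still's value moved 6 slots
```

Real images do not rotate by pure cyclic shifts of their pixels, which is why the paper's construction is more involved; the toy only captures the permute-then-compare logic that makes the sorted signature invariant.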


In their new paper, Poggio and his colleagues — first author Fabio Anselmi, a postdoc in Poggio’s group; Joel Leibo, a research affiliate at the McGovern Institute and a research scientist at Google DeepMind; Lorenzo Rosasco, a visiting professor in the Department of Brain and Cognitive Sciences; and Jim Mutch and Andrea Tacchetti, graduate students in Poggio’s group — demonstrate that, if the goal is to produce an object representation invariant to rotation, size, and location, then the ideal template is a set of images known as Gabor filters. And Gabor filters, it turns out, are known to offer a good description of the image-processing operations performed by the so-called “simple cells” in the visual cortex.





Three dimensions

While this technique works well for visual transformations within a plane, however, it doesn’t work as well for rotation in three dimensions. The dot-product between a new image and that of, say, a car seen straight on would be very different from the dot-product of the same image and that of a car seen from the side.


But Poggio’s group has shown that if the template of still images depicts an object of the same type as the new object, dot-products will still yield adequately invariant descriptions. And this observation accords with recent research by MIT’s Nancy Kanwisher and others, indicating that the visual cortex has regions specialized for recognizing particular classes of objects, such as faces or bodies.


In the work described in PLOS Computational Biology, Poggio and his colleagues — Leibo, Anselmi, and Qianli Liao, a graduate student in electrical engineering and computer science — built a computer system that assembled a set of still images and used the dot-product algorithm to learn to classify thousands of random objects.


For each of the object classes that the system learned, it produced a set of templates that predicted the size and variance of the regions in the human visual cortex devoted to corresponding classes. That suggests, the researchers argue, that the brain and their system may be doing something similar.


The researchers’ invariance hypothesis is “a powerful approach to bridge the large gap between contemporary machine learning, with its emphasis on millions of labeled examples, and the primate visual system that in many instances can learn from a single example,” says Christof Koch, a professor of biology and engineering at Caltech and chief scientific officer of the Allen Institute for Brain Science. “This sort of elegant mathematical framework will be necessary if we are to understand existing natural intelligent systems, on the road to building powerful artificial systems.”


The researchers’ work was sponsored, in part, by MIT’s Center for Brains, Minds, and Machines, which is funded by the National Science Foundation and directed by Poggio.




– Credit and Resource –


Larry Hardesty | MIT News Office





Optoelectronic microprocessors built using existing chip manufacturing

Optoelectronic microprocessors built using existing chip manufacturing. High-performance prototype means chipmakers could now start building optoelectronic chips.



Researchers have produced a working optoelectronic chip that computes electronically but uses light to move information. The chip has 850 optical components and 70 million transistors, which, while significantly fewer than the billion-odd transistors of a typical microprocessor, is enough to demonstrate all the functionality that a commercial optical chip would require.
Image: Glenn J. Asakawa


Scientia — Using only processes found in existing microchip fabrication facilities, researchers at MIT, the University of California at Berkeley, and the University of Colorado have produced a working optoelectronic microprocessor, which computes electronically but uses light to move information.


Optical communication could dramatically reduce chips’ power consumption, which is not only desirable in its own right but essential to maintaining the steady increases in computing power that we’ve come to expect.





Demonstrating that optical chips can be built with no alteration to existing semiconductor manufacturing processes should make optical communication more attractive to the computer industry. But it also makes an already daunting engineering challenge even more difficult.


“You have to use new physics and new designs to figure out how you take ingredients and process recipes that are used to make transistors, and use those to make photodetectors, light modulators, waveguides, optical filters, and optical interfaces,” says MIT professor of electrical engineering Rajeev Ram, referring to the optical components necessary to encode data onto different wavelengths of light, transmit it across a chip, and then decode it. “How do you build all the optics using only the layers out of a transistor? It felt a bit like an episode of ‘MacGyver’ where he has to build an optical network using only old computer parts.”


The project began as a collaboration between Ram, Vladimir Stojanović, and Krste Asanovic, who were then on the MIT Department of Electrical Engineering and Computer Science faculty. Stojanović and Asanovic have since moved to Berkeley, and they, Ram, and Miloš A. Popović, who was a graduate student and postdoc at MIT before becoming an assistant professor of electrical engineering at Colorado, are the senior authors on a paper in Nature that describes the new chip.


They’re joined by 19 co-authors, eight of whom were at MIT when the work was done, including two of the four first authors: graduate students Chen Sun and Jason Orcutt, who has since joined IBM’s T. J. Watson Research Center.


Powering down

The chip has 850 optical components and 70 million transistors, which, while significantly less than the billion-odd transistors of a typical microprocessor, is enough to demonstrate all the functionality that a commercial optical chip would require. In tests, the researchers found that the performance of their transistors was virtually indistinguishable from that of all-electronic computing devices built in the same facility.


Computer chips are constantly shipping data back and forth between logic circuits and memory, and today’s chips cannot keep the logic circuits supplied with enough data to take advantage of their ever-increasing speed. Boosting the bandwidth of the electrical connections between logic and memory would require more power, and that would raise the chips’ operating temperatures to unsustainable levels.


Optical data connections are, in principle, much more energy efficient. And unlike electrical connections, their power requirements don’t increase dramatically with distance. So optical connections could link processors that were meters rather than micrometers apart, with little loss in performance.
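The distance argument above can be made concrete with a rough model: the switching energy of an electrical wire grows linearly with its length (roughly capacitance per unit length times length times voltage squared), while an optical link pays a roughly fixed laser/modulator/detector cost regardless of distance. The sketch below is a hedged, order-of-magnitude illustration; the capacitance, voltage, and per-bit figures are assumed values, not measurements from the researchers' chip.

```python
# Illustrative comparison: energy per bit vs. link length for an electrical
# wire versus an optical link. All numbers are rough assumptions chosen only
# to show the scaling behavior, not measured values from the MIT chip.

def electrical_energy_per_bit(length_mm, cap_per_mm_fF=200.0, vdd=1.0):
    """C*V^2 switching energy grows linearly with wire length."""
    cap_farads = cap_per_mm_fF * 1e-15 * length_mm
    return cap_farads * vdd ** 2  # joules per bit

def optical_energy_per_bit(length_mm, fixed_pJ=1.0):
    """Transmit/receive cost is roughly independent of distance."""
    return fixed_pJ * 1e-12  # joules per bit

for mm in (1, 10, 100, 1000):
    e = electrical_energy_per_bit(mm)
    o = optical_energy_per_bit(mm)
    print(f"{mm:5d} mm: electrical ~{e * 1e12:8.1f} pJ/bit, "
          f"optical ~{o * 1e12:.1f} pJ/bit")
```

Under these assumptions the electrical cost overtakes the optical one within centimeters, which is why optics could link processors meters apart with little penalty.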


The researchers’ chip was manufactured by GlobalFoundries, a semiconductor manufacturing company that uses a silicon-on-insulator process, meaning that in its products, layers of silicon are insulated by layers of glass. The researchers build their waveguides — the optical components that guide light — atop a thin layer of glass on a silicon wafer. Then they etch away the silicon beneath them. The difference in refractive index — the degree to which a material bends light — between the silicon and the glass helps contain light traveling through the waveguides.
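The confinement mechanism described above is total internal reflection: light striking the silicon/glass boundary at a shallow enough angle cannot escape into the lower-index glass. A minimal sketch, using textbook refractive indices near the 1550 nm telecom wavelength (not values from the paper), computes the critical angle:

```python
import math

# Total internal reflection at a core/cladding boundary occurs for incidence
# angles beyond the critical angle arcsin(n_cladding / n_core).
# Indices below are standard textbook values near 1550 nm, used only to
# illustrate why the silicon/glass contrast traps light in the waveguide.

n_si = 3.48    # silicon core
n_sio2 = 1.44  # silica (glass) cladding

critical_angle = math.degrees(math.asin(n_sio2 / n_si))
print(f"critical angle ≈ {critical_angle:.1f} degrees")
```

The large index contrast gives a small critical angle, so most rays traveling along the waveguide are trapped, which is what makes tight on-chip optical routing possible.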





Conflicting needs

One of the difficulties in using transistor-manufacturing processes to produce optical devices is that transistor components are intended to conduct electricity, at least some of the time. But conductivity requires free charge carriers, which tend to absorb light particles, limiting optical transmission.


Computer chips, however, generally use both negative charge carriers — electrons — and positive charge carriers — “holes,” or the absence of an electron where one would be expected. “That means that somewhere in there, there should be some way to block every kind of [carrier] implant that they’re doing for every layer,” Ram explains. “We just had to figure out how we do that.”


In an optoelectronic chip, at some point, light signals have to be converted to electricity. But contact with metal also interferes with optical data transmission. The researchers found a way to pattern metal onto the inner ring of a donut-shaped optical component called a ring resonator. The metal doesn’t interact with light traveling around the resonator’s outer ring, but when a voltage is applied to it, it can either modify the optical properties of the resonator or register changes in a data-carrying light signal, allowing it to translate back and forth between optical and electrical signals.
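The ring resonator's behavior can be summarized by its resonance condition: a wavelength resonates when it fits a whole number of times around the ring, m·λ = n_eff·2πR. Shifting the effective index n_eff slightly (for example, with an applied voltage) moves the resonance, which is the basis of modulating or sensing a data-carrying signal. The sketch below is illustrative; the radius, index, and index shift are assumed values, not the chip's actual parameters.

```python
import math

# Ring resonator resonance condition: m * lambda = n_eff * 2 * pi * R.
# A small voltage-induced change in n_eff shifts the resonant wavelength,
# which is how the resonator can modulate or detect an optical signal.
# All numbers are illustrative assumptions.

def resonant_wavelength_nm(n_eff, radius_um, m):
    circumference_nm = 2 * math.pi * radius_um * 1e3
    return n_eff * circumference_nm / m

radius_um = 5.0   # assumed ring radius
n_eff = 2.5       # assumed effective index

# Choose the resonance order m closest to the 1550 nm telecom band.
m = round(n_eff * 2 * math.pi * radius_um * 1e3 / 1550)
lam0 = resonant_wavelength_nm(n_eff, radius_um, m)
lam1 = resonant_wavelength_nm(n_eff + 1e-3, radius_um, m)  # tiny index change
print(f"order m={m}: resonance at {lam0:.1f} nm, "
      f"shifts by {lam1 - lam0:.3f} nm")
```

Even a change in the third decimal place of the index moves the resonance by a measurable fraction of a nanometer, enough to switch a narrow resonance on or off a fixed laser wavelength.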


On the new chip, the researchers demonstrated light detectors built from these ring resonators that are sensitive enough to bring the energy cost of transmitting a bit of information down to about a picojoule, or one-tenth of what all-electronic chips require, even over very short distances.
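To put the picojoule-per-bit figure in perspective, a quick back-of-the-envelope calculation converts it to link power at a typical chip-scale data rate. The 10 Gb/s rate below is an assumed figure for illustration, not a number from the paper:

```python
# Back-of-the-envelope: what ~1 pJ/bit means in sustained link power.
# The bit rate is an assumed illustrative figure.

energy_per_bit_J = 1e-12   # ~1 picojoule per bit (from the article)
bit_rate_bps = 10e9        # assume a 10 Gb/s link
link_power_W = energy_per_bit_J * bit_rate_bps
print(f"~{link_power_W * 1e3:.0f} mW per 10 Gb/s link")  # -> ~10 mW
```

At ten times that energy cost, an equivalent all-electronic link would draw around 100 mW, which across many links adds up to a significant share of a chip's power budget.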


The new paper “certainly is an important result,” says Jagdeep Shah, a researcher at the U.S. Department of Defense’s Institute for Defense Analyses who, as a program director at the Defense Advanced Research Projects Agency, initiated the program that sponsored the researchers’ work. “It is not at the megascale yet, and there are steps that need to be taken in order to get there. But this is a good step in that direction.”


“I think that the GlobalFoundries process was an industry-standard 45-nanometer design-rule process,” Shah adds. “I don’t think that there need be any concern that there’s any foundry that can’t make these things.”




– Credit and Resource –


Larry Hardesty | MIT News Office



