June 06, 2009

On-chip Photonics Advances Toward Scalable Quantum Computers

A team of physicists and engineers at Bristol University has demonstrated exquisite control of single particles of light — photons — on a silicon chip, a major advance towards long-sought-after quantum technologies, including super-powerful quantum computers and ultra-precise measurements.

The Bristol Centre for Quantum Photonics has demonstrated precise control of four photons using a microscopic metal electrode lithographically patterned onto a silicon chip. The photons propagate in silica waveguides — much like in optical fibres — patterned on a silicon chip, and are manipulated with the electrode, resulting in a high-performance miniaturized device.

Making two photons “talk” to each other to generate the all-important entangled states is much harder, but Professor O’Brien and his colleagues at the University of Queensland demonstrated this in a quantum logic gate back in 2003 [Nature 426, 264 (2003)].

Last year, the Centre for Quantum Photonics at Bristol showed how such interactions between photons could be realised on a silicon chip, pointing the way to advanced quantum technologies based on photons [Science 320, 646 (2008)].

Photons are also required to “talk” to each other to realise the ultra-precise measurements that harness the laws of quantum mechanics. In 2007 Professor O’Brien and his Japanese collaborators reported such a quantum metrology measurement with four photons [Science 316, 726 (2007)].

“Despite these impressive advances, the ability to manipulate photons on a chip has been missing,” said Mr Politi.

The team coupled photons into and out of the chip, fabricated at CIP Technologies, using optical fibres. Application of a voltage across the metal electrode changed the temperature of the silica waveguide directly beneath it, thereby changing the path that the photons travelled. By measuring the output of the device they confirmed high-performance manipulation of photons in the chip.
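As a concrete (and deliberately simplified) picture of what the heated electrode does, consider an idealized Mach-Zehnder interferometer: two 50:50 waveguide couplers with a voltage-tunable phase between them. The sketch below is a generic model under that assumption, not the Bristol team's actual circuit; it shows how the detection probability at one output traces an interference fringe as the heater-controlled phase is swept.

```python
import numpy as np

# Generic sketch of a thermo-optically tuned interferometer (an idealized
# Mach-Zehnder, not the Bristol device itself): two 50:50 couplers with a
# heater-controlled phase shift phi on one arm. A single photon's
# detection probability at one output traces a fringe as phi is swept.

def mzi_output_probability(phi):
    """Probability of detecting the photon at output port 0."""
    bs = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)   # 50:50 coupler
    ps = np.array([[np.exp(1j * phi), 0], [0, 1]])   # phase shifter on arm 0
    state = bs @ ps @ bs @ np.array([1, 0])          # photon enters mode 0
    return abs(state[0]) ** 2                        # = sin^2(phi / 2)

# sweep the phase and read off the fringe visibility
phis = np.linspace(0, 2 * np.pi, 201)
p = np.array([mzi_output_probability(x) for x in phis])
visibility = (p.max() - p.min()) / (p.max() + p.min())
print(f"ideal fringe visibility = {visibility:.3f}")
```

A lossless device gives visibility 1; measured fringe contrast against this ideal is the usual figure of merit for such circuits.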

The researchers proved that one of the strangest phenomena of the quantum world, namely “quantum entanglement”, was achieved on-chip with up to four photons. Quantum entanglement of two particles means that the state of either of the particles is not defined, but only their collective state, and results in an instantaneous linking of the particles.
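The statement that "the state of either of the particles is not defined, but only their collective state" can be made numerically concrete. This is a generic two-qubit illustration, not the specific four-photon states of the paper: for a Bell state, tracing out one particle leaves the other completely undefined (maximally mixed), yet the joint outcomes are perfectly correlated.

```python
import numpy as np

# Generic two-qubit illustration of the entanglement statement above
# (not the specific four-photon states of the paper): for the Bell state
# (|00> + |11>)/sqrt(2), neither particle has a definite state on its
# own, yet joint measurement outcomes are perfectly correlated.

bell = np.zeros(4, dtype=complex)
bell[0] = bell[3] = 1 / np.sqrt(2)        # (|00> + |11>)/sqrt(2)
rho = np.outer(bell, bell.conj())         # density matrix of the pair

# partial trace over particle B leaves particle A maximally mixed (I/2):
# its individual state is completely undefined
rho_A = np.trace(rho.reshape(2, 2, 2, 2), axis1=1, axis2=3)
print(rho_A.real)                         # 0.5 * identity

# yet only the correlated outcomes |00> and |11> ever occur
probs = np.abs(bell) ** 2
print(probs)
```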

This on-chip entanglement has important applications in quantum metrology and the team demonstrated an ultra-precise measurement in this way.

“As well as quantum computing and quantum metrology, on-chip photonic quantum circuits could have important applications in quantum communication, since they can be easily integrated with optical fibres to send photons between remote locations,” said Alberto Politi.

“The really exciting thing about this result is that it will enable the development of reconfigurable and adaptive quantum circuits for photons. This opens up all kinds of possibilities,” said Prof O’Brien.

“The most exciting thing about this work is its potential for scalability. The small size of the [device] means that far greater complexity is possible than with large-scale optics.”

Abstract of the Nature Photonics paper, "Manipulation of multiphoton entanglement in waveguide quantum circuits":

On-chip integrated photonic circuits are crucial to further progress towards quantum technologies and in the science of quantum optics. Here we report precise control of single photon states and multiphoton entanglement directly on-chip. We manipulate the state of path-encoded qubits using integrated optical phase control based on resistive elements, observing an interference contrast of 98.2 ± 0.3%. We demonstrate integrated quantum metrology by observing interference fringes with two- and four-photon entangled states generated in a waveguide circuit, with respective interference contrasts of 97.2 ± 0.4% and 92 ± 4%, sufficient to beat the standard quantum limit. Finally, we demonstrate a reconfigurable circuit that continuously and accurately tunes the degree of quantum interference, yielding a maximum visibility of 98.2 ± 0.9%. These results open up adaptive and fully reconfigurable photonic quantum circuits not just for single photons, but for all quantum states of light.
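For context on why the quoted contrasts matter (standard quantum-metrology background, not claims from the abstract itself): with N independent photons the phase uncertainty of an interferometric measurement is bounded by the standard quantum limit, while a path-entangled N-photon NOON state produces fringes that oscillate N times faster in phase and can approach the Heisenberg limit:

```latex
\Delta\phi_{\text{SQL}} = \frac{1}{\sqrt{N}}, \qquad
|\psi_{\text{NOON}}\rangle = \frac{|N,0\rangle + |0,N\rangle}{\sqrt{2}}
\;\Rightarrow\; P(\phi) \propto 1 + V\cos(N\phi), \qquad
\Delta\phi_{\text{HL}} = \frac{1}{N}.
```

A standard rule of thumb is that a fringe visibility V above 1/sqrt(N) (50% for the four-photon case) suffices to beat the standard quantum limit, which the quoted contrasts comfortably exceed.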

5 pages of supplemental information to the Nature Photonics paper.

Moore's Law: Some See the End, Others See Ways Forward

Eli Harari, the chief executive of SanDisk, indicates that the two-dimensional size reduction of flash components is likely to end within five years; after that, they will stack layers for higher three-dimensional densities.

Once the industry goes from its current 64-billion-bit flash chip to a 256-billion-bit chip (that’s 32 gigabytes), it will hit that brick wall of too few electrons per cell.
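The bit-to-gigabyte conversion above checks out; as a quick sanity check:

```python
# Quick check of the capacity figures quoted above: flash chip sizes are
# quoted in bits, storage sizes in decimal gigabytes (1 GB = 1e9 bytes).
bits_per_byte = 8

current_chip = 64e9 / bits_per_byte / 1e9    # 64-billion-bit chip
next_chip = 256e9 / bits_per_byte / 1e9      # 256-billion-bit chip

print(current_chip)  # 8.0 GB
print(next_chip)     # 32.0 GB
```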

Mr. Harari said the company has been able to build chips with four or eight layers. That's the good news. The bad news is that information can be written to those chips only once. That might be all right for distributing software or video games, but most flash memory is sold for use in devices like cameras, which need memory that can be erased and rewritten.

Forbes has an interview with Nvidia chief Jen-Hsun Huang, who talks about a plan for beating Moore's Law.

The chips are too power hungry, and so the computer industry is at a bit of a crossroads. And that's why GPU computing, this technology that we invented, has captured the imagination of the whole industry. Microsoft, with Windows 7, is going to include DirectX compute, which is basically GPU computing. Apple, with the Snow Leopard operating system, is going to have OpenCL, which is their version of GPU computing for their operating system. So all of a sudden the two most important operating systems in the world will include GPU computing, building the GPU's parallel-computing technology into their core.

Here we [Nvidia] are with our technology called CUDA [a C-based architecture for coding on GPUs] and GPU computing. And all of a sudden the speed-up is 50 times, 100 times, 200 times. And people are just astonished by the speed-ups. By the end of 2010, you're going to see GPU computing in the vast majority of the world's personal computers.

NVIDIA ION is a system/motherboard platform that includes NVIDIA's GeForce 9400M (MCP79) GPU and Intel's Atom on a Pico-ITXe motherboard designed for netbook and nettop devices. In February 2009, Microsoft certified the upcoming ION-based PCs as Vista-capable. The small form factor ION-based PCs are expected to be released in the summer of 2009, starting at $299.99. ION systems will be DirectX 10 capable and will support full 1080p HD video with true-fidelity 7.1 audio. ION systems can also use NVIDIA CUDA technology.

Intel announced that it would invest more than $7 billion upgrading its factories in the United States.
"We doubled down on manufacturing," Maloney says. "People said, 'You're insane!'" But Intel's entire competitive advantage, its ability to keep pace with Moore's Law and even exceed it, is in its manufacturing processes.

The new Nehalem EP chip is expected to allow one server to replace nine existing ones and pay for itself in just eight months. "Our job," Maloney says, "is to give them something so wonderful that they'll spend money again."

Graphene could replace copper for computer chip interconnects.

A key problem with copper interconnects is that at nanoscale dimensions conductance is affected by scattering at the grain boundaries and sidewalls, Murali explained. "These add up to increased resistivity, which nearly doubles as the interconnect sizes shrink to 30nm." At the ~20nm scale the increased resistance would offset performance increases and negate the gains made in higher density -- a roadblock to performance increases, if not to actual scaling.
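To see why a near-doubled resistivity matters, here is a toy R = ρL/A estimate, not a device simulation. The 1 µm length, square 30 nm cross-section, and bulk copper resistivity are illustrative assumptions; the doubling factor is the article's figure.

```python
# Toy illustration of the interconnect problem described above
# (simple R = rho * L / A estimate, not a device simulation).
# Assumptions: a copper wire 1 um long with a square 30 nm x 30 nm
# cross-section; bulk copper resistivity 1.68e-8 ohm*m; and, per the
# article, an effective resistivity that "nearly doubles" at this
# scale from grain-boundary and sidewall scattering.

rho_bulk = 1.68e-8          # ohm*m, bulk copper at room temperature
width = height = 30e-9      # m
length = 1e-6               # m
area = width * height

r_bulk = rho_bulk * length / area
r_scaled = 2 * rho_bulk * length / area   # "nearly doubled" resistivity

print(f"ideal bulk resistance: {r_bulk:6.1f} ohm")
print(f"with size effects:     {r_scaled:6.1f} ohm")
```

Tens of ohms per micron of wire, doubled again by size effects, is what erodes the speed gains that smaller transistors would otherwise deliver.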

The conductivity of graphene nanoribbon (GNR) strips is found to be limited by impurity scattering as well as line-edge roughness scattering; as a result, the best reported GNR resistivity is three times the limit imposed by substrate phonon scattering. This letter reveals that even moderate-quality graphene nanowires have the potential to outperform copper for use as on-chip interconnects.

EEStor Predicts Production-Quality Components by End of 2009

Tom Weir, VP at EEStor Inc told BariumTitanate blog: "Our objective is to complete component testing by September 2009. In parallel, we will be finalizing our second objective which consists of the assembly processes necessary to deliver production quality components and/or EESU's by the end of 2009."

EEStor claims to have revolutionary ultracapacitors with storage capacity higher than lithium ion batteries.

Zenn, the electric car company, has increased its stake in EEStor.

If EEStor can meet its product performance claims and scale up production, it will revolutionize electric and hybrid cars. Range and performance will go up.

Carbon Nanotubes Make Aluminum as Hard as Steel But One Third the Weight

The hardness of the composite aluminum and carbon nanotubes is several times greater than that of unalloyed aluminum, tensile strengths comparable to those of steel can be achieved, and the impact strength and thermal conductivity of the lightweight metal can be improved significantly.

High hardness levels and tensile strengths could only be achieved in aluminum by a complex alloying process based on rare and expensive metals.

"Our carbon nanotubes are an attractive alternative to such complicated alloys. Baytubes carbon nanotubes can also significantly reinforce aluminum materials already alloyed with metals," says Adams.

The density of CNT-reinforced aluminum is only around one third that of steel. Therefore, the material can be used in any number of applications in which the goal is to reduce weight and energy consumption.

With its combination of high strength and low weight, Baytubes-reinforced aluminum is a welcome alternative to steel, expensive specialty metals such as titanium, and carbon-fiber-reinforced plastics.

"This new class of materials has great potential for the production, for example, of screws and other connecting elements, allowing existing manufacturing processes (stamping, CNC) to be retained. Lightweight, heavy-duty components for wheelchairs or athletic equipment are also ideal candidates for the material," says Adams.

Baytubes-reinforced aluminum I-beams could conceivably be manufactured for the construction industry, because they are much lighter than steel I-beams, making it possible to construct taller buildings. Steel I-beams currently are a factor limiting the maximum height of a skyscraper, because of their inherent weight.

Promising applications exist too in the automotive and aircraft industries.

Carnival of Space 106

This week we are hosting the Carnival of Space.

1. Crowlspace has a feature on the Catalytic Nuclear Fusion Interstellar Ramjet.

How do we make the reaction go faster than a regular interstellar ramjet? Physicist Daniel Whitmire proposed we burn the hydrogen via the well-known CNO bi-cycle. Basically a hydrogen nucleus fuses to a carbon-12, then another is fused to it to make nitrogen-14, then two more to make oxygen-16, which is highly 'excited' and spits out a helium nucleus (He-4), returning the cycle to carbon-12. Since the carbon-12 isn't consumed it's called a "catalytic" cycle, but it's not chemical catalysis as we know it; it is the excited oxygen-16, not the nitrogen-14, that releases the alpha particle. Call it "nuclear chemistry".
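For reference, the quoted summary compresses several intermediate beta decays; the "excited oxygen-16" is the compound nucleus formed in the final proton capture. The full textbook CNO-I chain is:

```latex
\begin{aligned}
{}^{12}\mathrm{C} + p &\rightarrow {}^{13}\mathrm{N} + \gamma, &
{}^{13}\mathrm{N} &\rightarrow {}^{13}\mathrm{C} + e^{+} + \nu_e, \\
{}^{13}\mathrm{C} + p &\rightarrow {}^{14}\mathrm{N} + \gamma, &
{}^{14}\mathrm{N} + p &\rightarrow {}^{15}\mathrm{O} + \gamma, \\
{}^{15}\mathrm{O} &\rightarrow {}^{15}\mathrm{N} + e^{+} + \nu_e, &
{}^{15}\mathrm{N} + p &\rightarrow {}^{12}\mathrm{C} + {}^{4}\mathrm{He}.
\end{aligned}
```

The net effect is four protons fusing into one helium-4 nucleus, with the carbon-12 recovered to catalyze the next pass.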

2. Centauri Dreams sends "Antimatter Propulsion: A Critical Look"

This is a review of those parts of Frank Close's new book Antimatter that address using the stuff for space propulsion, and the problems the idea creates, including how to store antimatter efficiently (and without massive storage facilities) and how to create sufficient amounts of antimatter in the first place.

3. Nextbigfuture has details on a proposed modified space elevator with rotating space hoops instead of just a tether.

4. There is also an update on the Space Elevator Games, which finally have a set date.

5. There is also a look at alternatives if there are delays launching replacement satellites for the GPS (global positioning system).

6. Supernova Condensate has an article "You are now leaving the Milky Way"

The star SDSS J090745.0+024507 which is also known as "The Outcast Star" is being flung outwards at a blistering speed from our galaxy.

7. Simostronomy contributes an interview with Dr Andrew Drake.

8. Astronoise discusses some "ask the astronomer" options which you can use to get answers about Astronomy.

9. 21st Century Waves has "10 Spiritual Connections With the Human Exploration of Space"

10. Kentucky Space posted a small piece with a couple of pictures about a solar booth that they will use to test satellite EPS (electrical power systems). As with the construction of a clean room, it's another piece in Kentucky Space's ongoing infrastructure development.

11. There are new online lunar maps at the LPI.

USGS Geological Atlas of the Moon 1:5,000,000

I-703 Geologic Map of the Near Side of the Moon
I-948 Geologic Map of the East Side of the Moon
I-1034 Geologic Map of the West Side of the Moon
I-1047 Geologic Map of the Central Far Side of the Moon
I-1062 Geologic Map of the North Side of the Moon
I-1162 Geologic Map of the South Side of the Moon
These maps are also available through the U. S. Geological Survey's Astrogeology website.

USGS Shaded Relief Maps of the Moon 1:5,000,000

I-1089 Shaded Relief Map of the Mare Orientale Area of the Moon
I-1218-A Map Showing Relief and Surface Markings on the Lunar Far Side
I-1218-B Shaded Relief Map of the Lunar Far Side
I-1326A Shaded Relief Map of the Lunar Polar Regions
I-1326-B Map Showing Relief and Surface Markings of the Lunar Polar Regions
I-2276-A Shaded Relief and Surface Markings Map
I-2276-B Shaded Relief Map of the Lunar Near Side

12. Orbital Hub looks at Imaging Space And Time With the Hubble Space Telescope.

The history of astronomical discoveries, beginning with Galileo Galilei in 1609 and continued by William Herschel, William Huggins, George Ellery Hale, and Edwin Hubble, is presented in 'Hubble – Imaging Space And Time', a book authored by David DeVorkin and Robert W. Smith. The book is replete with spectacular images captured by the Hubble Space Telescope. Images of the Carina Nebula, Eagle Nebula, Orion Nebula, and Swan Nebula, just to name a few, are a celebration of color and convey the majestic beauty of the Cosmos.

13. Science blog: Cat Dynamics looks at sunspot cycle 24.

14. Beyond Apollo looks at MODAP (1963).

15. Robot explorers looks at Asteroid Belt flythroughs and Jupiter flybys (1965)

16. Cheap Astronomy has a collection of over a dozen podcasts with topics like the Space Race, neutrino telescopes, gravity wells and much more. Several of the podcasts have links to transcripts. On the 3rd of June, Cheap Astronomy delivered the 365 Days of Astronomy podcast episode investigating the scientific bonanza that would arise from being able to detect the cosmic neutrino background. Find it in full at the IYA 365 Days site, or at Cheap Astronomy minus the fab George Hrab intro and end-credits.

17. Music of the Spheres has a Podcast of the Jet Propulsion Laboratory's (JPL's) Greatest Hits

18. Chandra's blog discusses an image of the center of the Milky Way Galaxy.

19. Collect Space looks at the unpacking of Hubble artifacts and astronaut mementos from the space shuttle Atlantis.

Space shuttle Atlantis landed on May 24, completing the final mission to service the Hubble Space Telescope. As technicians work to prepare the orbiter for its next flight, they will find a variety of Hubble artifacts and astronauts' mementos to unpack first.

20. Out of the Cradle's Ken Murphy has an extensive report, with many links, on his trip to the International Space Development Conference. Get a rundown of the many goodies available.

21. Alan Boyle's Cosmic Log has a slideshow of pictures of the underbelly of NASA's Spirit rover, and asks "Will the Mars rover roll again?"

22. Starts with a Bang discusses wide-angle lenses and astronomy. Take a look at some panoramas of the sky.

23. Discovery Blog: Bad Astronomy looks at meteors misidentified by the mainstream media. "If a rock that falls from space is a meteorite, then one that's misidentified is a meteorwrong. And I am sure that's what we have in Texas."

24. Discovery Blog: Cosmic Variance considers "Did a meteor bring down Air France 447?"

Obviously for any given flight the chances are very, very small that a meteor will bring down an airliner, but as Hailey and Helfand pointed out in a letter to the NYT in 1996, the correct question to ask is this: “What is the probability that, for all flights in history, one or more could have been downed by a meteor?” They concluded that there was a 1-in-10 chance that this could happen.

Helfand, an astronomer, is presumably the one who estimated that “approximately 3,000 meteors a day with the requisite mass strike Earth”. This is a difficult number to get. How much mass? How fast does it need to be moving? But let’s assume that this number is correct; it translates to 125 meteors per hour.

Next we need to know the total number of flight hours at altitude for all commercial planes. In 2000 there were about 18 million flights per year. Clearly in the past 20 years (which we'll take as our reference, since it spans 1989-2009, with both flights 800 and 447) it was not always so… but let's take a guess that the 18 million figure is roughly correct for that 20-year period. That would yield 360 million commercial airline flights from 1989-2009. Hailey and Helfand assumed that each flight was two hours in duration. Again, a tough number to find online, so we'll take it at face value, giving us 720 million flight hours in our reference period.

They also claim that if there were 3500 planes in the air at any time, this would correspond to covering two-billionths of Earth’s surface.
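The numbers above chain together into a back-of-envelope Poisson estimate. This sketch uses only the figures quoted in the post (it is the same spirit as, not a reproduction of, Hailey and Helfand's calculation); with these inputs it comes out near 1-in-20, the same order of magnitude as their quoted 1-in-10.

```python
import math

# Chaining together the numbers quoted above (all taken from the article;
# a back-of-envelope estimate, not Hailey and Helfand's exact calculation).

meteors_per_hour = 3000 / 24          # "3,000 meteors a day" -> 125/hour
flight_hours = 18e6 * 20 * 2          # 18M flights/yr x 20 yr x 2 hr each
planes_aloft = 3500                   # planes in the air at any instant
coverage = 2e-9                       # fraction of Earth's surface they cover

# Each flight hour exposes one plane, covering (coverage / planes_aloft)
# of the surface, to meteors_per_hour incoming meteors; summing over all
# flight hours gives the expected number of strikes in the 20-year window.
lam = meteors_per_hour * flight_hours * coverage / planes_aloft
p_at_least_one = 1 - math.exp(-lam)   # Poisson P(>= 1 strike)
print(f"expected strikes: {lam:.3f}, P(>=1) = {p_at_least_one:.3f}")
```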

25. Discovery:Free Space looks at NASA plans for invitation-only launch access for Twittering and blogging members of the media.

26. Spacewriter Ramblings discusses the cosmic dance of the collision of galaxies.

27. At A Babe in the Universe, we get a first peek at a full-scale mockup of the Altair lunar lander. The current design of the lander will be four stories tall. If all goes well, in 10 years people will again walk on the Moon.

28. Astroblogger describes his new toy, an adaptor which clamps over the eyepiece tube of his telescope and allows him to attach his digital camera.

D-day and World War II Context [written June 6, 2008]

June 6, 1944, was D-Day; H-Hour was 6:30 am.

The assault was conducted in two phases: an air assault landing of American and British airborne divisions shortly after midnight, and an amphibious landing of Allied infantry and armoured divisions on the coast of France commencing at 06:30 British Double Summer Time.

The operation was the largest single-day invasion of all time, with over 130,000 troops landed on June 6, 1944. 195,700 Allied naval and merchant navy personnel were involved. The landings took place along a stretch of the Normandy coast divided into five sections: Gold, Juno, Omaha, Sword and Utah.

D-Day/Normandy Invasion Casualties
United States: 1,465 dead; 5,138 wounded, missing or captured
United Kingdom: 2,700 dead, wounded or captured
Canada: 500 dead; 621 wounded or captured

The combined deaths in this one battle are more than the fatal losses of America and its allies after five years of the Iraq war.

Nazi Germany: Between 4,000 and 9,000 dead, wounded or captured

By D-Day 157 German divisions were stationed in the Soviet Union, 6 in Finland, 12 in Norway, 6 in Denmark, 9 in Germany, 21 in the Balkans, 26 in Italy and 59 in France, Belgium and the Netherlands. However, these statistics are somewhat misleading since a significant number of the divisions in the east were depleted; German records indicate that the average personnel complement was at about 50% in the spring of 1944.

The Importance of the Soviet Union in Winning World War 2
Not to diminish the great effort of the USA in WW2 and the great sacrifice of D-Day, but it is important to know the historical contribution of the Soviet Union in WW2.
A great deal of the credit for the success of D-Day has been given to tricking Hitler into placing more of his troops at Calais. It was also important that the 12th Panzer Division did not move quickly into the conflict. Without the eastern-front drain and commitment of divisions, there would have been more armor and divisions all over France and everywhere else.

At the beginning of June 1944 the 12th Panzer Division was declared ready for combat operations. The Division's tank strength at this time was 81 Panther Ausf. A/G and 104 Panzer IV Ausf. H/J tanks. The division was also equipped with Jagdpanzer IV tank destroyers, three prototype Wirbelwind flakpanzer vehicles, along with a number of 20 mm, 37 mm and 88 mm flak guns, Hummel, Wespe and sIG 33 self-propelled guns, and regular towed artillery pieces.

Tanks on the east front peaked at 5,202 in November 1944.

So a large part of the credit for a successful invasion goes to the Soviets having regrouped from their losses in 1941 and turned things around in 1942.

The Soviets lost 26 million people in the war. About 11 million of those were military losses. The Red Army lost 3 million men in the summer of 1941 (killed or missing). They lost about 4.5 million in the last 6 months of 1941.

Stalin's Keys to Victory by Walter Dunn details the amazing recruitment effort that rebuilt and replaced the Red Army three times over 18 months.

The United States lost 418,500 people over the course of World War 2.

The Eastern front was the largest theater of war in history and was notorious for its unprecedented ferocity, destruction, and immense loss of life. More people fought and died on the Eastern Front than in all other theaters of World War II combined. With over 30 million dead, many of them civilians, the Eastern Front has been called a war of extermination.

Over the course of WW2, the US mobilized an army of 100 divisions.

The Germans had mobilized 400 divisions.
The Soviets had mobilized 700 divisions.

The Soviet losses in 1941-1943 would not have been so severe if Stalin had not purged his experienced military officers in 1938.
The Soviets might not have been able to motivate and recruit so successfully for the defence of Mother Russia if not for the brutality and harsh treatment by the Nazis of the one third of Russia that they conquered in 1941.
The Soviet wars preceding WW2 also left a larger reserve of military veterans to rebuild the Red Army after the devastating initial losses.
The Americans helped supply gear, such as trucks, through the Lend-Lease program, but the Soviets made their own guns and tanks, in factories that American engineers had helped build in the 1930s. The Soviets had learned the mass-production lesson of building only as good as you need: a tank lasted only about six months before being destroyed, so it did not matter if its engine was poorly made and would break down in two to five years. The tank would not last that long.

The recruitment and production effort to get the people and weapons put together while fighting the most fierce battles in history is an interesting and informative study.

The USA probably could still have won WW2 if the Soviets had been defeated (unable to regroup after 1941, or losing Moscow and Stalingrad), but it would have been far more costly, and the USA would have needed an army four to six times larger than the one it had. Or the US would have had to wait until 1945, when it developed the nuclear bomb.

Operation Barbarossa was the initial German invasion of the Soviet Union; Germany committed 4.5 million men.

In 1941, the Soviet armed forces in the western districts were outnumbered by their German counterparts: 4.3 million Axis soldiers vs. 2.6 million Soviet soldiers. The overall size of the Soviet armed forces in early July 1941, though, amounted to a little more than 5 million men: 2.6 million in the west, 1.8 million in the far east, with the rest being deployed or training elsewhere.

Soviets: At least 802,191 killed, unknown wounded, and some 3,300,000 captured.

Battle of Stalingrad

Germans: 750,000 killed or wounded, 250,000 captured
Soviets: 700,000 killed, wounded or captured, 40,000+ civilian dead

The Battle of Moscow

Germans: 248,000–400,000 casualties
Soviets: 650,000–1,280,000 casualties

Battle of Kursk

Germans: 50,000 dead, wounded, or captured
Soviets: 500,000 dead, wounded, or captured

Autumn and winter 1943 on the eastern front

Battle of Crimea 8 April 1944 - 12 May 1944

Soviet: 85,000 all causes
German/Romanian: 97,000 all causes

The Germans were already being pushed back quite a ways by D-Day. On the western Allied side, Rome had been captured by June 4th, 1944 (the Italian campaign started with the invasion of Sicily in July 1943), and before that North Africa had been taken.

Belorussian Offensive, June 22, 1944, two weeks after D-Day.

Germans: 300,000-400,000 killed, wounded and taken prisoner.
Soviets: 60,000 KIA/MIA, 110,000 WIA/sick

June 05, 2009

University of Florida Has New Light-Driven Nanomotors

Sunlight prompts a newly developed molecular nanomotor to unclasp in this artist’s illustration.

In a paper expected to appear soon in the online edition of the journal Nano Letters, the University of Florida team reports building a new type of “molecular nanomotor” driven only by photons, or particles of light. While it is not the first photon-driven nanomotor, the almost infinitesimal device is the first built entirely with a single molecule of DNA — giving it a simplicity that increases its potential for development, manufacture and real-world applications in areas ranging from medicine to manufacturing, the scientists say.

In its clasped, or closed, form, the nanomotor measures 2 to 5 nanometers — 2 to 5 billionths of a meter. In its unclasped form, it extends as long as 10 to 12 nanometers. Although the scientists say their calculations show it uses considerably more of the energy in light than traditional solar cells, the amount of force it exerts is proportional to its small size.

Applications in the larger world are more distant. Powering a vehicle, running an assembly line or otherwise replacing traditional electricity or fossil fuels would require untold trillions of nanomotors, all working together in tandem — a difficult challenge by any measure.

“The major difficulty lies ahead,” said Weihong Tan, a UF professor of chemistry and physiology, author of the paper and the leader of the research group reporting the findings. “That is how to collect the molecular level force into a coherent accumulated force that can do real work when the motor absorbs sunlight.”

Tan added that the group has already begun working on the problem.

“Some prototype DNA nanostructures incorporating single photo-switchable motors are in the making which will synchronize molecular motions to accumulate forces,” he said.

To make the nanomotor, the researchers combined a DNA molecule they created in the lab with azobenzene, a chemical compound that responds to light. A high-energy photon prompts one response; lower energy another.

To demonstrate the movement, the researchers attached a fluorophore, or light-emitter, to one end of the nanomotor and a quencher, which can quench the emitting light, to the other end. Their instruments recorded emitted light intensity that corresponded to the motor movement.

DARPA and Other Programmable Matter Projects

The goal of the Programmable Matter Program is to demonstrate a new functional form of matter, based on mesoscale particles, which can reversibly assemble into complex 3D objects upon external command. These 3D objects will exhibit all the functionality of their conventional counterparts.

Programmable Matter represents the convergence of chemistry, information theory, and control into a new materials design paradigm referred to as "InfoChemistry": building information directly into materials.

Wired also has coverage.

One team from Harvard is working on a kind of “generalized Rubik’s Cube” that can fold into all kinds of shapes. Another is trying to order large strands of synthetic DNA to bind together in a “molecular Velcro.” An MIT group is building “’self-folding origami’ machines that use specialized sheets of material with built-in actuators and data. These machines use cutting-edge mathematical theorems to fold themselves into virtually any three-dimensional object.”

The Programmable Matter program is now approximately five months into its second phase, which is scheduled to last about 15 months. The first phase of the effort involved five teams: two from Harvard University, two from the Massachusetts Institute of Technology (MIT) and one from Cornell University. Zakin notes that all of the teams successfully met their goals and are all now working on phase two. The teams are made up of experts from a range of disciplines: computer scientists, roboticists, biologists, chemical engineers, mechanical engineers, physicists and artists. Zakin describes the research on programmable matter as "the ultimate interdisciplinary endeavor." Another important part of the program is that the five teams are collaborating with each other, not competing; each team has its own strengths and weaknesses, and they share information. The teams meet on a regular basis and present their results to each other to help facilitate the information sharing.

At the end of phase two, the teams must be able to assemble four or five three-dimensional solids of a specific size and shape from a set of building blocks. Zakin notes that not all of the building blocks have to be used to create a specific shape, but they must demonstrate the ability to build objects the size and shape of a tool. The teams must also demonstrate that when the building blocks form a shape, they can adhere with the strength of a standard industrial/engineering plastic.

Once programmable matter’s capabilities have been proven, phase three will begin looking at the different applications for the technology. This phase will focus on using the science for specific applications, either through this program or other DARPA efforts.

Zakin observes that much more can be done with the science of programmable matter. One possible direction for the technology is programming adaptability into the material itself. The Programmable Matter program is a first step, he explains. Adaptability, for example, could produce electronics that can cope with heat and dust in the desert and then shift to resist humidity and moisture in a jungle environment.

The team working with DNA is planning to use it as a “molecular Velcro.” [DNA Origami] The team’s scientists believe that it is necessary to get enough DNA on a surface to achieve adhesion. “DNA strands stick together. Each pair that sticks together is an adhesive. The trick is getting enough, and that means getting a density of DNA on a certain area,” he says. In the program’s first phase, the researchers demonstrated the highest density of DNA coverage on a surface ever achieved. Zakin says that this approach has potential applications in biological sciences and medicine.

To achieve the Programmable Matter vision, key technological breakthroughs will center on the following critical areas:

* Encoding information into chemistry, or fusing materials with machines.
* Fabrication of mesoscale particles with arbitrary complex shapes, composition, and function.
* Interlocking/adhesion mechanisms that are strong and reversible.
* Global assembly strategies that translate information into action.
* Mathematical theory for construction of 3D objects from particles.

Intel is continuing to work on Claytronics, which is a separate effort from the DARPA project.

DARPA also runs the ChemBots program.

The goal of the Chemical Robots (ChemBots) Program is to create a new class of soft, flexible, mesoscale mobile objects that can identify and maneuver through openings smaller than their dimensions and perform various tasks.

The program seeks to develop a ChemBot that can perform several operations in sequence:

* Travel a distance;
* Traverse an arbitrary-shaped opening much smaller than the largest characteristic dimension of the robot itself;
* Reconstitute its size, shape, and functionality after traversing the opening;
* Travel a distance; and
* Perform a function or task using an embedded payload.

This program creates a convergence between materials chemistry and robotics through the application of any one of a number of approaches, including gel-solid phase transitions, electro- and magneto-rheological materials, geometric transitions, and reversible chemical and/or particle association and dissociation.

How Close We Live and Work and How We Move Has Large Effects on Human Behavior

1. Increasing population density, rather than boosts in human brain power, appears to have catalysed the emergence of modern human behaviour, according to a new study by UCL (University College London) scientists published in the journal Science. High population density leads to greater exchange of ideas and skills and prevents the loss of new innovations. It is this skill maintenance, combined with a greater probability of useful innovations, that led to modern human behaviour appearing at different times in different parts of the world.

This work further emphasizes the importance of cluster development for economic development.

2. A separate study from Northwestern University found that the mere act of physically approaching a potential partner, versus being approached, seemed to increase desire for that partner.

Regardless of gender, those who rotated experienced greater romantic desire for their partners, compared to those who sat throughout the event. The rotators, compared to the sitters, tended to have a greater interest in seeing their speed-dating partners again.

“Given that men generally are expected -- and sometimes required -- to approach a potential love interest, the implications are intriguing,” Finkel said.

This would suggest that men and women differ less in how they choose mates than commonly believed, and that the cultural expectation that men approach women is itself creating the observed differences.

Architectural design and city planning, in combination with technology (and informed by advanced time-and-motion studies and comprehensive event tracking and recording), seem likely to yield further increases in productivity via improved collaboration, connection and other factors.

Non-desktop Nanofactories and the Bootstrap Pathway

J Storrs Hall, Foresight President, discusses general desktop nanofactories versus larger nanofactories and specialized nanofactories.

This is part of a series of articles that have been posted at Accelerating Future, CRNano (Center for Responsible Nanotech), Foresight and this site (nextbigfuture) that consider how nanofactories will emerge and the impact.

From the outside, the line from here to nanofactories goes through 3D printers. To the user, a nanofactory as described in the above technical discussions is just a 3D printer that can produce a wider variety of products than the ones now available. My guess is that they start being called nanofactories when the process includes nanoscale printing (as in embedded circuitry, or surface nanoantennas for color effects and photovoltaics), which can be done now, so it shouldn’t be too long before someone includes it in a solid-freeform-fab process.

So: why should we expect a sudden jump in 3D printer capabilities? We shouldn’t. They will continue getting cheaper for the same capabilities, and more capable for the same price; but on the same smooth growth curve we’ve been seeing all along.

On the other end of the scale, a billion-dollar factory will always be able to out-produce a million-dollar factory, and that will always outproduce a thousand-dollar countertop machine. The fact that the workstation I’m using to write this essay could out-calculate all the Cray-1s ever built doesn’t mean that they quit building supercomputers: it means that for the same money they build unimaginably monsterific humongulated gigantoid supercomputers. The same will be true of factories.

At CRNano, Chris Phoenix discusses microfactories and why they are much more limited than nanofactories.

J Storrs Hall had a paper from 1998 that addresses nanofactory architecture and introduces the concept of a "Zeno Factory," the end product of iterative design bootstrapping.

Iterative Design Bootstrap - J Storrs Hall

This is the optimal bootstrapping pathway.

Given a single "hand built" simple system, we could:

* Build a complex system immediately atom by atom. This would take 2x10^10 seconds (over 600 years).
* Reproduce simple systems until, 69 days later, one has macroscopic capability.

Simply assume that building the first complex system is the goal, and employ the formula above; it gives an optimum of 17 generations. This results in a first, reproductive, phase of 472 hours, resulting in 131,072 assemblers, which build the complex system in a second phase of 42 hours, for a total of 21 and a half days.
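
The two-phase arithmetic above can be checked with a short sketch. The model is an assumption inferred from the figures in the text: replicators double each generation with a fixed generation time (about 27.8 hours, backed out of the 472-hour first phase), and the complex system's 2x10^10 seconds of work divides evenly among the assemblers.

```python
# Assumed model, inferred from the figures above: replicators double each
# generation with fixed generation time T_R, and the complex system's
# T_C assembler-seconds of work divides evenly among the assemblers.

T_C = 2e10      # seconds for ONE simple system to build the complex system
T_R = 1.0e5     # seconds per generation (~27.8 h, backed out of 472 h / 17)

def total_time(n):
    """Replicate for n generations, then build with 2**n assemblers."""
    return n * T_R + T_C / 2 ** n

best_n = min(range(1, 40), key=total_time)
print(best_n)                        # 17 generations
print(2 ** best_n)                   # 131072 assemblers
print(total_time(best_n) / 86400)    # about 21.4 days in total
```

Running it reproduces the optimum of 17 generations, 131,072 assemblers and a total of roughly 21 and a half days.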

This has implications with respect to an optimal overall approach to bootstrapping replicators. Suppose we have a series of designs, each some small factor, say 5.8, more complex, and some small factor, say 2, faster to replicate, than the previous. Then we optimally run each design 2 generations and build one unit of the next design. As long as we have a series of new designs whose size and speed improve at this rate, we can build the entire series in a fixed, constant amount of time no matter how many designs are in the series. (It's the sum of an arbitrarily long series of exponentially decreasing terms. Perhaps we should call such a scheme "Zeno's Factory".)

For example, with the appropriate series of designs starting from the simple system above, the asymptotic limit is a week and a half. (About 100 hours for design 1 to build design 2, followed by 50 hours for design 2 to build design 3, plus 25 hours for design 3 to build design 4, etc.) Note that this sequence runs through systems with a generation time similar to the "complex system" at about the 25th successive design. Attempts to push the sequence much further will founder on physical constraints, such as the ability to provide materials, energy, and instructions in a timely fashion. Well before then we will run into the problem of not having the requisite designs. Since all the designs need to be ready essentially at once, construction time is to all intents and purposes limited by the time it takes to design all the replicators in the series.
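
The "Zeno's Factory" convergence is just the sum of a geometric series, which a few lines make concrete. The 100-hour first build time is taken from the text; everything else follows from the assumed 2x speedup per design:

```python
# Build times form a geometric series: 100 h, 50 h, 25 h, ...
first_build_hours = 100.0   # design 1 builds design 2 (from the text)
speedup = 2.0               # each design replicates twice as fast

def chain_time(n_designs):
    """Total hours to build a chain of n successive designs."""
    return sum(first_build_hours / speedup ** k for k in range(n_designs))

print(chain_time(5))                                    # 193.75 hours
asymptote = first_build_hours * speedup / (speedup - 1)
print(asymptote)                                        # 200 hours, ever
```

No matter how many designs are added to the chain, the total never exceeds the 200-hour asymptote, which is the "fixed, constant amount of time" claimed above.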

Systems Analysis of Self-replicating Manufacturing
J Storrs Hall has done a systems analysis for self-replicating manufacturing systems. There are several results of this analysis that have implications for their design. First, replicators do not benefit from raw internal parallelism but do benefit from concurrency of effort involving specialization and pipelining. Given the enormous range of possible replicator designs, the optimal pathway from a given (microscopic) replicator to a given (macroscopic) product generally involves a series of increasingly complex replicators. The optimal procedure for replicators of any fixed design to build a given product is to replicate until the quantity of replicators is 69% of the quantity of desired product, and then divert to building the product.

Therefore, it is desirable to design a replicator with the understanding that it will reproduce itself only for a few generations, and then build something else. Furthermore, it is crucial to design replicators that can cooperate in the construction of objects larger and more complex than themselves. I have outlined a system that embodies these desiderata.

Finally, considerations encountered in the design of the present system have convinced the author that, given a precursor or bootstrap technology (e.g. self-assembly from biomolecules) that can produce usable parts, the first thing to be built from the parts should be a parts-assembly mechanism and not a parts-fabrication mechanism. A parts-assembly robot constitutes a self-replicating kernel in an environment of parts, and a growth path that maintains the self-replicating system property (i.e. that each vertex of its diagram is the terminus of an endogenous capital edge) appears to work best.

The most important constraint is the availability of designs for the successive systems.

J Storrs Hall says that there will not be a lengthy time when there will be perfected desktop nanofactory capability.

You’re within a month of replacing the entire infrastructure of the Earth, every last farmer’s hut and the plants and animals grown for food as well as the cars, trucks, roads, and cities, with one vast, integrated machine. Luxury apartment, robot servants, personal aircraft, you name it, for everyone (and all still a tiny fraction of the capabilities of the overall machine). Ask for anything, and it will simply ooze out of the nearest wall, which will of course be a solid slab of productive nanomachinery (or Utility Fog). To recycle anything, just drop it on the floor.

This would also mean that issues of who has access to the actual machines and factories become less vital. Widespread specialized production systems and high-quality products would be mostly unconstrained. There remains the question of how things play out in the intermediate phase, when the technology does not yet work well but is already having a big impact.

We can project forward from
* Flexible electronics production
* Advanced rapid manufacturing
* DNA nanotechnology, guided self-assembly and other developing methods

and see how close we can get to the start of useful bootstrapping.

A Book Review on Nanofuture by Aileen Grace Delima (STS - nanotechnology)

Space Elevator Climbing and Beaming Competition July 14-16, 2009

The Space Elevator Games 2009 finally have a firm date: July 14-16, 2009.

Here is the schedule:
6/05: non-US press badging deadline
6/08: Aviation Safety Review Board (must pass)
6/15: Tether System test flight (Dryden)
6/18: Team Laser safety/qual tests (Dryden)
7/09: Teams arrive for setup (Dryden)
7/14 - 7/16: Power Beaming Challenge (Dryden)
7/17: SE day at SFF conference (Ames)
8/13: Tether Challenge (Seattle)
8/13 - 8/17: Space Elevator Conference (Seattle)

WHO: Prize purse by NASA CC, management by The Spaceward Foundation, six competing teams.
WHAT: The Space Elevator Power Beaming Challenge, a 1-km vertical laser-powered race demonstrating the concept of the Space Elevator. Total prize purse is $2,000,000.
WHERE: NASA Dryden Flight Research Center / Edwards Air Force Base

NASA’s Centennial Challenges Program, NASA’s Dryden Flight Research Center, and the Spaceward Foundation are announcing that the 2009 Power-Beaming Challenge, part of Spaceward’s Space Elevator Games, will be held at NASA’s Dryden Flight Research Center at the Edwards Air Force Base in California’s Mojave Desert on July 14-16, 2009.

“We are very pleased that we can host this year’s Climber / Power-Beaming competition at our facility” said John Kelly, Deputy Mission Director for Exploration at the Dryden Flight Research Center. “Dryden has a rich history of research into the technologies of tomorrow, reaching back to the first supersonic flights and then later to the Apollo lunar missions. The Space Elevator can revolutionize our ability to travel to space, and it is only natural that testing of the concept will take place at our facilities.”

Power-beaming Teams
* LaserMotive, Seattle, WA, Led by Jordin Kare
* McGill University, Montreal, Quebec, Canada, Led by Cyrus Foster
* Kansas City Robotics, Kansas City, MO, Led by Brian Turner
* University of Saskatchewan, Saskatoon, Saskatchewan, Canada, Led by: Clayton Ruszkowski
* NSS Space Elevator Team, National Space Society, Ellicott City, Maryland, Led by Bert Murray
* M Climber, University of Michigan, Ann Arbor, MI, Led by Andrew Lyjak

June 04, 2009

EUV Lithography Could Be Commercial 2012-2014

ASML Holding NV (Netherlands) tipped a roadmap for the introduction of its first extreme ultraviolet (EUV) lithography machines at the IMEC Technology Forum.

Martin van den Brink, executive vice president for products and technology, said "EUV is the cost-effective successor of 193-nm lithography below 20-nm," and that ASML believes it can extend EUV down to sub-5nm.

The company took delivery of a new [power] source in May and is "ready to integrate a system" van den Brink said.

This will be the first example of ASML's NXE platform of EUV lithography machines: the NXE3100, capable of between 60 and 100 wafers per hour throughput. It will have a numerical aperture of 0.25 and should be capable of 28-nm resolution, the same as is being achieved on the EUV Advanced Development Tool installed at IMEC, but at a commercial throughput.

AMD, IBM and others launched a renewed effort in EUV in February 2009. The companies are developing a 22-nm "test chip," with plans to embark on a similar program at the 15-nm node. EUV, however, is not targeted for production until the 15-nm node in 2013 or 2014.

The improved throughput is expected to come from a new light-source from Cymer Inc. (San Diego, Calif.) capable of 100-watt illumination, which typically translates to 60-wph throughput. This was shipped to IMEC in May for installation on the ADT. In the second half of 2009 a further improvement in the source should take the illumination level to 200 watts and the potential throughput to 100-wph or more, he said.

Meanwhile ASML plans to apply a number of the techniques already developed for optical lithography, such as increasing the numerical aperture and allowing off-axis illumination, to further improve the achievable resolution.

Van den Brink showed the conference a slide with the NXE 3300 and NXE 3350 addressing 22- and 16-nm resolution respectively, while the NXE 3XX0, with an NA of 0.4, would push down to 11-nm resolution. The target date for the shipment of the first preproduction machine, the NXE 3100, is Q2 2010.

Van den Brink also discussed likely introduction strategies for EUV lithography in memory fabs. He indicated that while commercial purchasing of tools would not begin before 2012, it could be delayed until 2014. However, on an economic analysis the optimum strategy for fabs was a gradual ramp of EUV lithography tool purchases and EUV production, rather than a fast ramp in either 2012 or 2014.

June 03, 2009

Ultracold Two-Atomic-Species Quantum Computer Proposal

Apparatus for generating a two-color optical lattice. Co-propagating beams at both wavelengths are incident on a diffractive optical element (DOE) shown in (a), formed by a photolithographed gold-coated fused silica surface consisting of a regular array of raised equilateral triangles. The image shown was obtained with an atomic force microscope. In (b), reflected light is diffracted, primarily into three first-order beams at each wavelength in a triangular pattern. These beams are then routed by a pair of lenses to form a pair of triangular optical lattices shown in (c), on the image plane of the DOE. The relative position of the two lattices is controlled with a set of electro-optic phase modulators, formed by patterned deposition of mirror/electrodes onto the rear surface of a single lithium-niobate crystal. The lattice structure shown was imaged with a microscope objective onto a CCD camera.

Ultracold molecules: vehicles to scalable quantum information processing in the New Journal of Physics

In this paper, we describe a novel scheme to implement scalable quantum information processing using Li–Cs molecular states to entangle 6Li and 133Cs ultracold atoms held in independent optical lattices. The 6Li atoms will act as quantum bits to store information and 133Cs atoms will serve as messenger bits that aid in quantum gate operations and mediate entanglement between distant qubit atoms. Each atomic species is held in a separate optical lattice and the atoms can be overlapped by translating the lattices with respect to each other. When the messenger and qubit atoms are overlapped, targeted single-spin operations and entangling operations can be performed by coupling the atomic states to a molecular state with radio-frequency pulses. By controlling the frequency and duration of the radio-frequency pulses, entanglement can be either created or swapped between a qubit messenger pair. We estimate operation fidelities for entangling two distant qubits and discuss scalability of this scheme and constraints on the optical lattice lasers. Finally we demonstrate experimental control of the optical potentials sufficient to translate atoms in the lattice.

Truly scalable quantum information processing remains an elusive goal. This is due in part to the stringent requirements on long coherence times, the technical difficulties in implementing high-fidelity entangling operations, and the challenge of storing and controlling interactions between many quantum bits (qubits). While neutral atoms provide a natural advantage in coupling weakly to their environment and to other atoms at long distance, atomic interactions at short range, well described by contact interactions, can be strong and coherent, and their effect can be controlled by overlapping the atomic wavefunctions. In particular, the strength of this contact interaction is highly sensitive to the underlying molecular structure, and can be precisely manipulated by introducing direct coupling mechanisms between free atoms and molecules.

A system using both ultracold molecules and atoms held in an optical lattice may be a promising system to realize a scalable quantum computer due to the high degree of control available in these systems.

The proposal presented here is a novel approach to use two atomic species, each manipulated by a separate optical lattice potential. Highlighted is the fabrication of lattice structure independent of optical wavelength, use of molecular states to induce entanglement between atoms and introduction of single site addressability without the need for spatially resolved manipulations.

A key aspect of this approach is the introduction of auxiliary messenger atoms used both to probe and to manipulate quantum states and entanglement in an array of qubit atoms. By utilizing two separate species of atom for these two roles and carrying information in their internal states, it becomes technically feasible to manipulate spatial overlap of atoms and thereby their interactions, without disrupting the sensitive quantum coherences. We propose to use fermionic 6Li atoms as qubits, prepared in the lattice with ideally one atom per site. Bosonic 133Cs will act as messenger atoms to aid in the gate operations and mediate entanglement among the qubits, and will be less densely populated, on average, by one atom per 100 sites of a separate lattice potential of identical structure to the first. By shifting the relative alignment of the lattices through optical phases, each 133Cs atom can, in principle, be transported to any distant 6Li atom; similar schemes can be found. Since there may be many 133Cs atoms, multiple copies of the same computation can proceed in parallel.

Implementation of the necessary gates proceeds via coupling to a molecular state |M⟩. Not only does the molecular state allow entangling operations to be carried out between the atoms, but it also allows single-qubit addressing. Part (a) shows how to execute a targeted single-qubit rotation. When the atoms are overlapped, rf pulses allow the qubit atom to be rotated into a superposition state. Part (b) shows the sequence to entangle two distant qubits. After entangling messenger and qubit (step 1), the messenger can be transported to a second qubit and subsequently entanglement can be exchanged (step 2).

We have presented a scheme for scalable quantum information processing based on two species of ultracold atoms held in controlled bichromatic optical lattice potentials, including methods to entangle 6Li and 133Cs atoms locally through coupling to bound 6Li–133Cs molecules, and methods to transport entanglement to distant atoms through multiple quantum manipulations. We have identified simple quantum logic gate operations possible in this scenario. Methods are based on the production of translatable optical lattices at two wavelengths with identical structure, for which we have demonstrated a novel realization utilizing diffractive optics and electro-optic modulation. We have discussed gate operations in detail, identifying necessary timescales for entangling via a molecular state and transporting atoms adiabatically. This compares favorably with the expected coherence time, including the effects of off-resonant scattering, qubit tunneling, external field instabilities and state-dependent light shifts. Finally, we have analyzed the effects of realistic experimental uncertainties to ascertain expected fidelities, and compared this with measured errors in lattice construction; with incremental improvement in passive stability, fidelities of > 97% are achievable in entangling distant qubits.

Plot of limits on the intensities of the 681 nm optical lattice L1 versus the 1064 nm optical lattice L2. The diagonal black lines show the bounds imposed by requiring independent control of L1 over 6Li and L2 over 133Cs. Also shown are the tunneling rate limit and off-resonant scattering rate limit for both 6Li (blue lines) and 133Cs (red lines) for a decoherence rate of 2 s–1. The green shaded box shows the available parameter space satisfying all of the above conditions. The black dot corresponds to conditions assumed for calculations in the text.

The capability to independently control the two atomic species allows us to have single site addressability of the qubit atoms. This is accomplished by shifting the optical phases of the messenger lattice, allowing the 133Cs messenger atom to be translated to any 6Li qubit atom. This is a necessary step for many operations in this proposal, including detection and creation of a universal gate set.

What We Might Expect of Early Nanofactories

J Storrs Hall, President of the Foresight Institute, considers what will be the likely situation with early nanofactories.

The first nanofactories will probably be DNA/RNA/protein gadgets requiring thousands of steps by skilled scientists to coax them to build a new gadget (which will consist only of DNA/RNA/protein), or diamondoid gadgets in high vacuum requiring thousands of steps by skilled scientists to coax them to build a new gadget (which will consist only of diamondoid), or possibly even tungsten carbide gadgets doing EDM with nanotubes, requiring thousands of steps by skilled scientists to coax them to build a new gadget (which will consist only of tungsten carbide, the nanotubes having to be supplied from outside). Early nanofactories will be cranky and experimental, expensive, require expensive inputs, be able to produce only very limited products, and be very lucky to replicate themselves before they break down.

Michael Anissimov at Accelerating Future believes that there could be rapid and sudden emergence from this early phase.

95% of the investment costs in building a nanofactory will go into building nanoscale machines, including an assembler, making them work reliably, putting them into a cooperative, redundant architecture that works without letting in molecular contamination, letting molecular contamination escape its internal confines, and so on. These are all low-level problems. If they aren’t all pretty much solved, you are going to get precisely nowhere. The cost of building the first nanofactory will be immense. But if you have a basic nanoscale modular architecture that can reliably build itself up from the micron level to the centimeter level, then it’s not going to matter whether you are building 100 centimeter-scale units or a million. The salient scaling issues are at the nanoscale and microscale. By the time you’re at the macroscale, the system has to be completely automated, and hence likely inexpensive.

The number one expense in any product comes from human input, attention, and craftsmanship on a per-unit basis: the less you need, the cheaper it is. Desktop nanofactories will need to be almost completely automated, or they wouldn’t exist in the first place. You cannot micromanage every one of 10^17 fabrication events and expect to leave the work bench any time in this geologic eon.

So there is general agreement that current trends point towards specialized, potentially buggy, multi-step "mainframe"-style systems, with micro/nano-level components in the assembly systems.

How useful will the specialized kluge systems be?
How long will it take to get from "almost" working to reliably and cheaply doing end-to-end molecular manufacturing?
Once you start bootstrapping up the capability ladder, how fast can you proceed?

Chris Phoenix, at the Center for Responsible Nanotechnology, has recently been considering issues around a "fast-takeoff".

The starting point is being able to perform atomically precise chemistry at a reasonable rate (beyond the proof-of-concept handful of reactions, which is where we are now).

Key issues are around reduction of error rates.

Multi-staging to control errors is discussed, which also relates to J Storrs Hall's comment about initial work being done with multi-stage process factories.

In some processes, it may be relatively easy to build a 100 atom part (of which maybe 98% will be perfect), then test and throw out the 2%, then stick the perfect parts together to make a perfect meta-part.

In other processes, an error will not only destroy the workpiece, but also the tool. So if a 10,000-atom tool is destroyed by each error, then the success rate needs to be 99.99% or better.
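
These two regimes can be made concrete by compounding a per-atom error rate, assuming errors strike independently (an illustrative model, not from the text):

```python
# Illustrative model (not from the text): assume each atom placement fails
# independently with probability p, so a k-atom part is perfect with
# probability (1 - p) ** k.

def part_yield(p_err_per_atom, atoms):
    return (1.0 - p_err_per_atom) ** atoms

# A 100-atom part at ~98% yield implies roughly a 2e-4 per-atom error rate:
print(part_yield(2e-4, 100))      # ~0.980

# The same per-atom rate on a 10,000-atom tool:
print(part_yield(2e-4, 10_000))   # ~0.135 -- the tool is usually destroyed

# To lose the tool less than ~0.01% of the time, errors must be far rarer:
print(part_yield(1e-8, 10_000))   # ~0.9999
```

The same per-atom error rate that gives a comfortable 98% yield on 100-atom parts destroys a 10,000-atom tool about 86% of the time, which is why tool-destroying processes need orders-of-magnitude better error rates.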

In a multi-stage process, each stage may have a different error rate for a different reason, and need different error handling.

Tools that can only be built by molecular manufacturing may reduce the error rate drastically. For example, sorting rotor cascades can give you any purity you need, and atomically perfect sliding seals can exclude all contaminants. In that case, it may be a quick step from just barely good enough, to so good you don't even have to think about it.

A more general argument is that once you have general-purpose molecular manufacturing, you can probably build improved versions of your tools right away. So... although I can't prove that any given molecular manufacturing pathway will have a fast takeoff thanks to error rates crossing a threshold of significance, it does seem pretty likely.

This follows a general trend of argument: the difference between unfeasible and adequate is generally bigger than the difference between adequate and excellent. Feed "excellent" into an exponential growth equation, and you get a fast takeoff.

This does not mean that the difference between adequate and excellent is less than one year, or that the transition will be perfectly smooth. Also, many early scenarios for a fast takeoff assumed more pre-design work (particularly system design work), so that once capability became adequate, pre-planned upgrade strategies could be rolled out rapidly. That pre-design is not happening yet, but will likely happen during the transition from clearly feasible through almost adequate to barely adequate and beyond.

There are industries and product lines already using nanopatterned surfaces and nanoparticles that would be able to take almost-adequate nanofactory technology and run with it: various kinds of computer disks and memory, and certain communication and computer components. Synthetic biology, medicine, gene therapy, stem cells, microbiology and other areas would also get a big boost from almost-nanofactory-ready technology.

How useful will the specialized kluge systems be? I would say very useful and very profitable. This means that, beyond the current few billion dollars per year going into the loosely defined and mostly chemistry-oriented research area of "nanotechnology," money will become more sharply focused on real molecular manufacturing development, and one hundred times that money and effort will come from the big industries that benefit from each small step from feasible to barely adequate to excellent. However, the funding transition could be slower from clearly feasible to barely adequate; we have seen that many people are willing to deny "clearly feasible." We are likely to continue at our current pace, with slight increases, into the specialized kluge phase that J Storrs Hall described. Some companies will have to make whacks of dough and make splashy impacts to jar people awake to the fact that the game is finally on and molecular manufacturing is leaving the station.

Analog RF Radio Chip is Faster than Digital and Uses 100 Times Less Power and Could Enable Universal Radio

Associate Professor of Electrical Engineering Rahul Sarpeshkar, left, and Soumyajit Mandal display their RF (radio frequency) cochlea, a low-power, ultra-broadband radio chip. The chip, held by Mandal, is attached to an antenna, held by Sarpeshkar. The diagram on the computer monitor shows the wiring layout of the chip.

MIT engineers have built a fast, ultra-broadband, low-power radio chip, modeled on the human inner ear, that could enable wireless devices capable of receiving cell phone, Internet, radio and television signals.

Rahul Sarpeshkar, associate professor of electrical engineering and computer science, and his graduate student, Soumyajit Mandal, designed the chip to mimic the inner ear, or cochlea. The chip is faster than any human-designed radio-frequency spectrum analyzer and also operates at much lower power.

The analog RF cochlea chip is faster than any other RF spectrum analyzer and consumes about 100 times less power than what would be required for direct digitization of the entire bandwidth. That makes it desirable as a component of a universal or "cognitive" radio, which could receive a broad range of frequencies and select which ones to attend to.

The RF cochlea, embedded on a silicon chip measuring 1.5 mm by 3 mm, works as an analog spectrum analyzer, detecting the composition of any electromagnetic waves within its perception range. Electromagnetic waves travel through electronic inductors and capacitors (analogous to the biological cochlea's fluid and membrane). Electronic transistors play the role of the cochlea's hair cells.
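
A toy sketch of the underlying idea (not the MIT design; component values are illustrative assumptions): a bank of LC resonators with exponentially spaced center frequencies covers many decades of bandwidth with relatively few stages, the way the cochlea's logarithmic frequency map does.

```python
import math

# Toy model of a cochlea-style analyzer: LC resonators whose center
# frequencies are spaced by a constant ratio (log-uniform), so 50 stages
# span four decades. The 1 nH inductance is an illustrative assumption.

def resonance_hz(L, C):
    """Resonant frequency of an LC tank: f = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L * C))

def cap_for(f_hz, L):
    """Capacitance that tunes inductance L to resonate at f_hz."""
    return 1.0 / ((2.0 * math.pi * f_hz) ** 2 * L)

def exponential_bank(f_low, f_high, stages, L=1e-9):
    """(frequency, capacitance) per stage, constant ratio between stages."""
    ratio = (f_high / f_low) ** (1.0 / (stages - 1))
    freqs = [f_low * ratio ** k for k in range(stages)]
    return [(f, cap_for(f, L)) for f in freqs]

bank = exponential_bank(1e6, 1e10, stages=50)   # 1 MHz to 10 GHz
print(len(bank))                                # 50 stages span 4 decades
print(bank[1][0] / bank[0][0])                  # constant ratio per stage
```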

June 02, 2009

Chip Scale Atomic Clocks, GPS Towers

There is some concern that if there are delays in launching replacement GPS satellites, there could be a degradation in the availability and accuracy of GPS (global positioning).

There are currently 31 operational GPS satellites.

According to the GAO report, if the Air Force keeps to its current launch schedule, the chance that we'll still have 24 working GPS satellites dips below 95 percent next year, and falls to about 80 percent in the 2011-2012 timeframe. Those might sound like good odds, but consider this: the GAO warns that if the Air Force falls behind on its planned deployment of next-generation GPS III satellites (the first is slated to go up in 2014) by just two years (and there's a "considerable risk" that it will, according to the report), the delay could leave us with a mere 10 percent chance of 24 working GPS satellites by 2017.

If there is a problem with launchers for the US Air Force and the satellites do start failing, European or Russian launchers could be commissioned. There is ZERO chance that we will fall below 24 working satellites for more than one year. The reason is that if the satellites start failing fast enough, the US military will throw money at the problem and procure the launchers and additional version II satellites needed to maintain performance for the US military.

For full accuracy with fewer satellites, or for enhanced accuracy, we will have chip-scale atomic clocks (or larger but still affordable atomic clocks). Military receivers, and civilian receivers equipped with atomic clocks, could get 24-satellite accuracy even with 21 or possibly fewer satellites. Also, regional GPS signals could be broadcast from towers.

TV Towers for GPS alternative

Chip scale atomic clocks are being developed by DARPA

The goal of the Chip-Scale Atomic Clock program is to create ultra-miniaturized, low-power, atomic time and frequency reference units that will achieve, relative to present approaches:

* >200X reduction in size (from 230 cm3 to <1 cm3),
* >300X reduction in power consumption (from 10 W to <30 mW), and
* Matching performance (1 X 10^-11 accuracy ⇒ 1 µs/day).

An example of the future payoff is a wristwatch-sized, high-security UHF communicator / jam-resistant GPS receiver.
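The accuracy target translates directly into clock drift and ranging error. A quick back-of-the-envelope calculation (taking the stated 1×10^-11 fractional frequency accuracy at face value):

```python
C = 299_792_458.0          # speed of light, m/s
frac_accuracy = 1e-11      # DARPA CSAC fractional frequency accuracy target
seconds_per_day = 86_400

time_error = frac_accuracy * seconds_per_day   # seconds of drift per day
range_error = C * time_error                   # equivalent pseudorange error

print(f"clock drift : {time_error * 1e6:.2f} microseconds/day")
print(f"range error : {range_error:.0f} m of pseudorange per day of coasting")
```

So a receiver coasting on such a clock accumulates under a microsecond of time error per day, a few hundred meters of equivalent range error, which is why the clock must still be periodically re-synchronized to GPS time.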

Benefits of Atomic Clock Augmentation for GPS

Most GPS receivers use inexpensive quartz oscillators as a time reference. The receiver has a clock bias from GPS time, but this bias is removed by treating it as an unknown when solving for position. In effect, the receiver clock is continually calibrated to GPS time. If a highly stable reference were used, however, the receiver time could be based on this clock without solving for a bias. "Clock coasting" (with no clock model), as it is referred to, requires an atomic clock with superior long-term stability. The analysis in this chapter shows the potential for clock coasting to improve vertical positioning accuracy.

4.2 Atomic Clock Benefits

Currently, GPS receivers need four measurements to solve for three-dimensional position and time. The receiver clocks are not synchronized with GPS time, which necessitates the fourth measurement. Using an atomic clock synchronized to GPS time in the receiver would eliminate the need for one of the measurements. Clock coasting has been shown to provide a navigation solution during periods which otherwise might be declared GPS outages. Misra has proposed a clock model in order to make the receiver clock available as a measurement continuously.

Although five or more satellites are visible at any time with the current 24-satellite constellation, availability can be compromised by satellite failures. The negative impact of satellite failures could therefore be reduced by atomic clock augmentation. Redundant oscillators could be used in the receiver to lower the probability of a receiver clock failure. Thus, availability of GPS positioning would be increased. van Graas has noted that adding an atomic clock improves availability more than adding three GPS satellites or a geostationary satellite. Also, a perfect clock helps more than including the altimeter as a measurement.

Strontium clock accurate to 1 second over 200 million years

Micromagic Clock: Microwave Clock Based on Atoms in an Engineered Optical Lattice

A new class of atomic microwave clocks is proposed based on the hyperfine transitions in the ground state of aluminum or gallium atoms trapped in optical lattices. For such elements magic wavelengths exist at which both levels of the hyperfine doublet are shifted at the same rate by the lattice laser field, canceling its effect on the clock transition. A similar mechanism for the magic wavelengths may work in microwave hyperfine transitions in other atoms which have the fine-structure multiplets in the ground state.

Andrei Derevianko and Kyle Beloy of the University of Nevada in Reno and colleagues have come up with the idea of trapping the atoms in place using lasers. This means their energy states could be monitored in an area only a few micrometres across, potentially leading to more accurate measurements. This is difficult to get right, though, because the lasers distort an atom's energy levels in a complex way, making it impossible to define a jump that equates to a second.

Derevianko's team overcome this problem by finding a laser frequency that alters both hyperfine states by exactly the same amount - a trick that works in aluminium and gallium but not as well in caesium (www.arxiv.org/abs/0808.2821). "Then, the energy difference between the levels is the same as if the atoms are in vacuum," says Derevianko.

Using this method, the team has calculated the second to be 1506 million cycles of microwaves for aluminium-27 and 2678 million cycles for gallium-69.

Although the atoms can be trapped in an area only a few micrometres across, the lasers, and cooling and computing equipment will add to the bulk. Nevertheless, the team say the clocks may be portable and could be used in space-based experiments that require extremely accurate timekeeping, such as those for detecting gravitational waves or for testing Einstein's theories.

Tom Heavner, who works on fountain clocks at NIST, describes the proposal as forward-thinking and original. "It is a really clever way to meld together the old-style clocks with new laser technology," he says.

A Manufacturable Chip-Scale Atomic Clock

Several factors are converging to enable atomic clocks to be manufactured with very small dimensions and run at low operating power. MOEMS technology, high-speed VCSELs, microelectronics, wafer-scale packaging, and the all-optical CPT method of exciting atomic transitions are key ingredients in the quest to make precision time-keeping devices with chip-scale dimensions. In this paper we report on the design and process that enable an atomic clock to be made with a total volume of 1.7 cm3, a total power budget of 57 mW, and an Allan deviation at 1 hour of 5×10^-12.
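The Allan deviation figure quoted above is the standard stability metric for such clocks, and it can be estimated from fractional-frequency data with a few lines of code. A minimal sketch (non-overlapping estimator, synthetic white-frequency noise, not data from the actual device):

```python
import math
import random

def allan_deviation(freq, m):
    """Non-overlapping Allan deviation at tau = m * tau0, from fractional
    frequency samples y_i taken every tau0 seconds.
    AVAR = 0.5 * mean((ybar_{k+1} - ybar_k)^2) over adjacent tau-averages."""
    n = len(freq) // m
    ybar = [sum(freq[i * m:(i + 1) * m]) / m for i in range(n)]
    diffs = [(ybar[k + 1] - ybar[k]) ** 2 for k in range(n - 1)]
    return math.sqrt(0.5 * sum(diffs) / len(diffs))

# white frequency noise averages down as 1/sqrt(tau)
random.seed(0)
y = [random.gauss(0, 1e-10) for _ in range(100_000)]
for m in (1, 10, 100):
    print(f"tau = {m:4d} * tau0  sigma_y = {allan_deviation(y, m):.2e}")
```

For white frequency noise the deviation falls as the square root of the averaging time, which is why a clock quoted at 5×10^-12 at 1 hour can be much noisier over a single second.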

Current methods for achieving GPS-level navigation and location accuracy include robotic vision algorithms and coupled MEMS-based dead-reckoning modules.

The vast majority of these efforts have focused on the use of inexpensive MEMS-based dead-reckoning units to provide position, navigation, and timing (PNT) information. Unfortunately, these approaches suffer from errors in estimating angular rotation and accrue PNT errors in a linear fashion. Small, low-cost implementations provide only a short-duration benefit before the accumulated errors render them useless. Magnetically hostile environments such as light industrial buildings further degrade PNT effectiveness. Attempts to provide useful MEMS-based error-correction sources within acceptable size, weight, and power limits tend to be limited to techniques that are equally susceptible to distortion. Stride-based correction mechanisms require precise calibration to the individual and are immature when dealing with combat/first-responder body poses.
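The linear error accrual is easy to demonstrate. A toy sketch (hypothetical stride length and heading bias, not measured MEMS parameters) shows how a constant heading bias makes dead-reckoning position error grow in proportion to distance travelled:

```python
import math

def dead_reckon(stride_m, n_steps, heading_bias_deg):
    """Toy dead-reckoning walk along the x-axis. A constant heading bias
    makes the position error grow linearly with distance travelled."""
    bias = math.radians(heading_bias_deg)
    x = y = 0.0
    for _ in range(n_steps):
        x += stride_m * math.cos(bias)
        y += stride_m * math.sin(bias)
    true_x = stride_m * n_steps        # intended path: straight along x
    return math.hypot(x - true_x, y)

# 0.75 m stride, 1 degree of uncorrected heading bias
for steps in (100, 1000, 10000):
    err = dead_reckon(0.75, steps, heading_bias_deg=1.0)
    print(f"{steps:6d} steps -> {err:8.2f} m of position error")
```

Ten times the distance gives ten times the error, which is why unaided low-cost MEMS units are only useful for short durations.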

Rockwell Collins has addressed these issues by challenging the fundamental assumption that MEMS aiding will provide an acceptable solution. Instead, robotic-vision-based navigation is aided by the MEMS. Current robotic vision algorithms such as Simultaneous Localization and Mapping (SLAM) are throughput-intensive and are not presently practical for the dismounted user. The MEMS unit provides excellent instantaneous sensor-pointing information that significantly reduces the SLAM processing requirements. Feature Constellation Tracking (FCT) algorithm improvements further reduce the processing requirements by allowing intelligent thinning of the features tracked. This approach has allowed the Ghostwalker IR&D project to demonstrate accurate GPS-denied indoor and outdoor navigation without prior knowledge of features or telltale emissions.

DARPA is also funding inertial navigation system development.
