February 15, 2008

Feed in Tariffs - support for renewable power

Some call Feed in Tariffs justified compensation for the unpaid externalities of other energy sources like coal and natural gas, but the incremental Feed in Tariff payment is still support. Even REN21 of France calls it support. Feed in Tariffs pay a subsidy for 20 years on installed renewable capacity. The EUR 5.3 billion figure is an older estimate from 2001, and the amount has gone up since then with bigger programs in Spain and other places. I like wind and solar power, but I think it is wrong to say that wind and solar are not getting enough support relative to nuclear power.

Feed in Tariffs at Wikipedia

A feed in tariff is an incentive structure that boosts the adoption of renewable energy through government legislation. Regional or national electricity utilities are obligated to buy renewable electricity (electricity generated from renewable sources such as solar photovoltaics, wind power, biomass, and geothermal power) at above-market rates.

Feed in tariff presentation

Not just Germany: Spain and Denmark also have big feed in tariff systems. Many other European countries and Canada have feed in tariff systems as well; they are just smaller programs than the German one.

Estimated impact on end-use electricity prices (according to the European Commission):
Between 4% and 5% for Germany and Spain
Around 15% for Denmark

Back in 2001, the Netherlands (more than EUR 1.5 billion), the UK (circa EUR 1.5 billion) and Germany (circa EUR 1.8 billion) provided substantial off-budget support to electricity consumption. Feed in tariff support has gone up since then.

European Environment Agency figures in 2004 gave indicative estimates of total energy subsidies in the EU-15 for 2001: solid fuel (coal) EUR 13.0 billion, oil and gas EUR 8.7 billion, nuclear EUR 2.2 billion, renewables EUR 5.3 billion.

REN21 in France (pro-renewables) calls Feed in Tariffs support.
The European Environment Agency estimated at least $0.8 billion in on-budget support and $6 billion in off-budget support for renewable energy in Europe in 2001. A large share of the off-budget support was due to feed-in tariffs, with purchase obligations and competitive tendering representing other forms of off-budget support.

Calculating the difference between the Feed in Tariff and the market price gives the level of support:
40 billion kWh × 3 euro cents per kWh (the 8-cent rate for wind less the 5-cent market price) = EUR 1.2 billion.
That is only Germany, not the Spain and Denmark subsidies. Spain and Denmark combined have about the same wind capacity as Germany, so that roughly doubles the differential-only subsidy to EUR 2.4 billion. There are still over a dozen other European countries, like France and the UK, but those programs are smaller.

The solar PV feed-in rate is about 8 to 20 times the market rate, so subtracting out the market-price portion of that subsidy does not change the figure much.

About 2 billion kWh for solar in Germany at 30 cents per kWh, a 25-cent premium over market price. Some figures I have seen for the solar feed in tariff are as high as 71 cents/kWh. That adds roughly 400 million more for the solar part. Double that for the rest of Europe, or triple it to get to a world figure.

So roughly EUR 2.4 billion for wind (Europe only) and EUR 0.8 billion for solar (Europe only) gives about EUR 3.2 billion as the Europe-only feed in tariff differential above market price. About 30 other countries have feed in tariffs for renewables, and the US, Canada, Japan and other countries also have subsidies.
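A quick sketch of the differential arithmetic above, using the post's own round numbers (the 40 billion kWh, 8-cent and 5-cent rates, and the solar figures are rough estimates, not official statistics; note the solar line works out closer to EUR 0.5 billion than EUR 400 million, which would nudge the Europe-only total toward EUR 3.4 billion):

```python
def differential_subsidy(kwh, feed_in_rate_eur, market_rate_eur):
    """Support = generation times the premium of the feed-in rate over market (EUR)."""
    return kwh * (feed_in_rate_eur - market_rate_eur)

# Wind: Germany alone, then doubled for Spain + Denmark (similar combined capacity)
germany_wind = differential_subsidy(40e9, 0.08, 0.05)   # ~EUR 1.2 billion
europe_wind = 2 * germany_wind                          # ~EUR 2.4 billion

# Solar: ~2 billion kWh in Germany at a ~25-cent premium, doubled for Europe
germany_solar = differential_subsidy(2e9, 0.30, 0.05)   # ~EUR 0.5 billion
europe_solar = 2 * germany_solar

total_europe = europe_wind + europe_solar
print(f"Europe-only differential: EUR {total_europe / 1e9:.1f} billion")
```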

Most utilities in the USA charge 2-32 cent/kwh added charges for wind.
Figure from the American wind energy association

Plus there is the 1.5 cent per kwh production tax credit

Wind production incentives

Also, I don't agree with subtracting avoided carbon emissions from that total. Carbon emissions should be counted separately, as an externality or subsidy for coal, oil and natural gas. Otherwise the same adjustment would have to be made for nuclear.

Some estimates for solar have fairly high greenhouse gas emissions, although still better than coal and natural gas.

The Solar PV incentives swamp the market price figure.

So solar and wind should be supported, but it is not true that they are not getting enough support. Coal and oil are the things that should be penalized and shifted away from and it will take support for every other energy source to make that happen in a timely way.

Robotic Surgery today

Da Vinci surgical robots, which sell for nearly $1.4 million and weigh half a ton, are seeing sales growth of 60% per year.
Intuitive said the 78 units sold in the fourth quarter of 2007 lifted the worldwide installed base to 795. Beyond prostate surgeries, the robots are now being employed for hysterectomies, fibroid removal and other gynecological procedures, while making inroads into numerous other areas like heart valve replacement and kidney surgery. Intuitive robots were used in 85,000 procedures last year and the company expected a 55 percent increase in 2008.

Intuitive Surgical’s da Vinci® Surgical System combines superior 3D visualization along with greatly enhanced dexterity, precision and control in an intuitive, ergonomic interface with breakthrough surgical capabilities.

The da Vinci Surgical System is improving patient experiences and outcomes by fundamentally changing surgery in three ways:

Simplifies many existing MIS (minimally invasive surgery) procedures
Many surgical procedures performed today using standard laparoscopic technique may be performed more quickly and easily using the da Vinci Surgical System. This is because the da Vinci System delivers increased clinical capability while maintaining the same "look and feel" of open surgery.

Makes difficult MIS operations routine
Traditional laparoscopy has never become widely applied outside a limited set of routine procedures. Only a select group of highly skilled surgeons routinely attempt complex procedures using a minimally invasive approach. The da Vinci Surgical System finally allows more surgeons to perform complex procedures using a minimally invasive approach – routinely and with confidence.

Makes new MIS procedures possible
A number of procedures that could not be performed using traditional MIS technologies can now be performed using the da Vinci Surgical System. The advanced feature set and extensive EndoWrist® instrumentation of the da Vinci System enable surgeons to perform more procedures through 1-2 cm incisions.

Patients may experience the following benefits:

-Reduced trauma to the body
-Reduced blood loss and need for transfusions
-Less post-operative pain and discomfort
-Less risk of infection
-Shorter hospital stay
-Faster recovery and return to normal daily activities
-Less scarring and improved cosmesis

They also have a 3D high definition system

Da Vinci has many systems for robotically assisted minimally invasive surgery

Robots are being used to test potentially hazardous chemicals on cells grown in a laboratory instead of on animals. Robots will allow over 10,000 screens on cells and molecules in a single day, compared with 10 to 100 studies a year on rodent models.

Quantum annealing can be millions of times faster than classical computing

Picture is the 16 qubit prototype. There was a 28 qubit prototype as well. A new announcement seems to imply 2000-4000 qubits by the end of 2008. "low thousands of qubits by the end of the year [2008]". The die has room for a million qubits.

A research paper has results that compare the time for a quantum annealer to reach the same level of accuracy as classical simulated annealing. They obtain times of 10 milliseconds for the quantum annealer versus 10 hours of simulated annealing time – a speed-up of more than six orders of magnitude. The speed improvement in the analyzed case was 3,600,000 times.
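The six-orders-of-magnitude figure is just the ratio of the two quoted times:

```python
# Sanity check on the claimed speed-up: 10 hours of simulated annealing
# versus 10 milliseconds of quantum annealing.
classical_seconds = 10 * 3600        # 36,000 s
quantum_seconds = 10 / 1000          # 0.010 s
speedup = classical_seconds / quantum_seconds
print(f"{speedup:,.0f}x")            # 3,600,000x
```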

Dwave Systems has an analog quantum computer, which should have a 2000-4000 qubit version by the end of 2008. It appears likely that they will be able to solve certain classes of optimization problems significantly faster than current methods. Optimization problems are important for businesses like airlines and delivery companies like Fedex. If they make money solving those problems, then they will have more money to research and develop better versions of their system (cleaner qubits that stay coherent longer) that could solve a wider range of problems.

It is important to note that for any given problem, heuristics superior to simulated annealing almost always exist. Therefore comparing the performance benefits of quantum vs. classical annealing does not fully answer the question of what the expected speed-up of quantum annealing over the best known classical approaches is. In order to perform this analysis, more specificity with the instance class involved and the specific heuristic being used to solve the problem are required.

D-Wave processors are designed to harness a fundamental principle of nature that operates in both quantum and classical regimes - the propensity for all physical systems to minimize their free energy.

Free energy minimization in a classical system is often referred to as annealing. For example, in metallurgy, annealing a metal involves heating it and then cooling it. This type of thermal annealing allows a metal that is originally filled with defects (a metastable ‘high energy’ state) to become crystalline and defect-free (the minimum free energy state).

The simulation of this type of thermal annealing using classical computers is known as simulated annealing, which is a commonly used heuristic approach to solving certain classes of hard optimization problems.

Quantum annealing slide

Another research paper from Dwave: Quantum annealing may provide good solutions [a good approximate answer] in a short time, although finding the global minimum [perfect answer] via AQC can take an extremely long time. The energy gaps considered here are only of the avoided-crossing type, which correspond to first-order quantum phase transitions. This means that only certain classes of quantum computer algorithms would work at this time.

For problems that have an exponentially large number of local minima close to the global minimum, the gap becomes exponentially small, making the computation time exponentially long. The quantum advantage of adiabatic quantum computation may then be accessed only via local adiabatic evolution, which requires phase coherence throughout the evolution and knowledge of the spectrum. Such problems, therefore, are not suitable for adiabatic quantum computation.

One type of problems for which quantum mechanics may provide an advantage over classical computation is optimization. In optimization problems, one is interested in finding solutions that optimize some function subject to some constraints. Usually, not only the best solution, but also solutions close to it are of interest.
So optimization problems are ones where AQC could provide a speedup over current systems while still returning a useful answer.

The global scheme of adiabatic quantum computation maintains its performance even for strong decoherence. The more efficient local adiabatic computation, however, does not improve scaling of the computation time with the number of qubits n as in the decoherence-free case, although it does provide some “prefactor” improvement. The scaling improvement requires phase coherence throughout the computation, limiting the computation time and the problem size n. This means that some algorithms like the Adiabatic Grover search (faster database search) would not be sped up unless the quality of the qubits is improved beyond the level that Dwave Systems will initially be starting at.

I had taken the highlights of a Dwave presentation from Nov 2007 on how their quantum computer works

More quantum computer research papers



Carnival of Space Week 41, one article on technology adoption and access

Carnival of space week 41 is up at the New Frontiers Blog. It is the largest carnival of space ever with 2 articles.

My contribution was my article on the magnetic catapult, a launch device concept that is superior to railguns and to an improved superconducting coilgun.

Hobbyspace discusses how goods spread from luxuries for the rich to commodities for the middle class, and how this should apply to space access.

Household consumption should be used instead of income to measure differences in society. The difference in consumption between the upper middle class or moderately affluent and the lower middle class is not that large.

The top fifth of American households earned an average of $149,963 a year in 2006. As shown in the first accompanying chart, they spent $69,863 on food, clothing, shelter, utilities, transportation, health care and other categories of consumption. The bottom fifth earned just $9,974, but spent nearly twice that — an average of $18,153 a year. Lower-income families have access to various sources of spending money that doesn’t fall under taxable income. These sources include portions of sales of property like homes and cars and securities that are not subject to capital gains taxes, insurance policies redeemed, or the drawing down of bank accounts. While some of these families are mired in poverty, many (the exact proportion is unclear) are headed by retirees and those temporarily between jobs, and thus their low income total doesn’t accurately reflect their long-term financial status.

Comparing the incomes of the top and bottom fifths, we see a ratio of 15 to 1. If we turn to consumption, the gap declines to around 4 to 1. If we look at consumption per person, the difference between the richest and poorest households falls to just 2.1 to 1.
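The quoted ratios follow directly from the figures above; the household-size ratio in the last line is my own back-of-envelope inference, since the article does not state household sizes:

```python
# Ratios implied by the article's 2006 averages for the top and bottom fifths.
top_income, bottom_income = 149_963, 9_974
top_spend, bottom_spend = 69_863, 18_153

income_ratio = top_income / bottom_income      # ~15 to 1
consumption_ratio = top_spend / bottom_spend   # ~3.8 to 1, "around 4 to 1"

# A per-person consumption ratio of 2.1 implies top-fifth households average
# about 1.8x as many people as bottom-fifth households (inferred, not stated).
implied_size_ratio = consumption_ratio / 2.1
print(round(income_ratio), round(consumption_ratio, 1), round(implied_size_ratio, 1))
```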

This is related to other studies of the increasing speed of market penetration and of product adoption in emerging markets.

Toward Practical visible light metamaterial Hyperlens and new wireless metamaterial breakthroughs

Xiang Zhang (one of the makers of the UV light hyperlens): "Theoretically, the biggest obstacle [to progress on a visible light hyperlens] is the inevitable loss in the metamaterials. This means that -- to obtain reasonable transmission -- the hyperlens can't be very bulky; this will limit the magnification and imaging area …. Low loss metals and high refractive index dielectrics will greatly boost this field but are extremely hard to find."

Pendry agrees that loss is a major issue for optical metamaterials. "Typically we might use highly conducting metals such as copper or silver which work just fine until we try to use them at the highest frequencies where they become lossy. We need the help of materials scientists to develop new alloys with lower losses."

At the end of 2007, Princeton researchers reported they had developed an optically-thick low-loss negative-index material consisting of alternating layers of highly doped InGaAs and AlInAs -- no metals. Interestingly, the Princeton material uses an entirely different effect than the others to implement the negative index. The optical properties are created by anisotropy in the material's dielectric response rather than resonances in both the permeability and permittivity of the constituent layers. This gives it an inherent flexibility.

On top of its other advantages, the Princeton metamaterial is the first in a new class not only in terms of mechanism but in its potential to be practically fabricated.

Wireless equipment developers are also starting to realize the potential of using components based on metamaterials. In fact Netgear introduced two new routers that use metamaterial antenna systems—WNR3500 and WNDR3300—at CES 2008.

Rayspan has developed metamaterial-based MIMO antenna arrays exhibiting performance characteristics equivalent to conventional MIMO antenna arrays while taking up less space.

What essential communications components and subsystems are enabled by metamaterials? Metamaterials technology brings three powerful enabling capabilities: (1) the ability to strongly manipulate the propagation of electromagnetic waves in the confines of small structures, (2) simultaneous support of multiple RF functions, and (3) the freedom to precisely determine a broad set of parameters which include operating frequency and bandwidth; positive, negative and zero phase offsets; constant phase propagation; and matching conditions and number and positioning of ports.

These capabilities make possible a broad range of metamaterial components and subsystems:

- Physically small, but electrically large components such as compact antennas sized on the order of a signal’s wavelength/10 while providing performance equal to or better than conventional antennas sized wavelength/2 - a five times size reduction.

- Broadband matching circuits, phase-shifting components and transmission lines which preserve phase linearity over frequency ranges five to ten times greater than those provided by conventional counterparts.

- Multi-band components whose frequencies of operation can be tailored to specific applications and are not limited to harmonic frequency multiples.

metamaterials and superlens

Metamaterials background and superlens

February 14, 2008

China and Taiwan's future

Ma Ying-jeou will be elected president of Taiwan on March 22, 2008. Tensions will then ease further. Look for 2011-2012 (start of talks) or 2013-2016 (possible completion during a second term) for talks on peace treaties and unification between Taiwan and China. Those who are still predicting war between the US and China over Taiwan are behind the curve. I would expect a European Union style loose and gradual unification.

Ma has indicated that unification can only happen with a democratic China, so the prize of unification with Taiwan could be a motivation towards more democracy in China (that way the KMT-Taiwan leader could have a shot at ruling all of a unified Taiwan-China).

Yang said progress on direct air and sea links was possible by the spring of 2009, but that any breakthrough on political relations — including a framework for a peace treaty — was unlikely until the second half of Ma's four-year term. George Tsai of Taipei's Chinese Culture University said the pace of a Taiwan-China rapprochement would depend on China's attitude toward Ma, possible Democratic Progressive Party efforts to derail progress, and Ma's calculations about a re-election bid in 2012.

[I believe that the recent redistricting in Taiwan has structurally ensured KMT victories. Much the way districting in the USA ensures that 98% of incumbent congressmen and senators get re-elected.]

Recent polls published by the government's Mainland Affairs Council indicate about 14 percent of respondents favor unification with Beijing — either now or in the future — while about twice that number support independence.

But most Taiwanese also want greater economic engagement with China.

They believe that Chen's policies to restrict investment and prohibit direct air and maritime links were major factors in the island's relatively anemic annual growth during his nearly eight years in office.

Annual growth stands at 3.8 percent, against 6.5 percent in the early and mid-1990s when the Nationalists held the presidency.

Taiwan real estate is getting a boost based on the expected win

Gene Therapy enhanced immune system of mice against viruses

Researchers at McGill University have discovered a way to boost an organism’s natural anti-virus defences, effectively making its cells immune to influenza and other viruses. Drugs could be produced for humans that would generate the same effect of kicking the body’s anti-virus mechanisms into overdrive. As a drug, the immunity boost would stop when the drug was no longer in the body. People could either have the immunity increase on all the time or have it activated whenever there was a need, based on detected pathogens.

This would be part of a powerful knockout punch together with DNA sequencing methods that can identify any virus or pathogen, even artificial ones and previously unknown natural ones: rev up the immune system, then rapidly identify and distribute antidotes to any sequenced problem. These advances would also work well with cheap lab-on-chip systems for spotting any outbreak of a growing catalog of known pathogens.

The process – which could lead to the development of new anti-viral therapies in humans – involved knocking out two genes in mice that repress production of the protein interferon, the cell’s first line of defence against viruses. Without these repressor genes, the mouse cells produced much higher levels of interferon, which effectively blocked viruses from reproducing. The researchers tested the process on influenza virus, encephalomyocarditis virus, vesicular stomatitis virus and Sindbis virus.

The researchers detected no abnormalities or negative side-effects resulting from enhanced interferon production in the mice, Dr. Costa-Mattioli said. Dr. Sonenberg explained that the process of knocking out genes is not possible in humans, but the researchers are optimistic new pharmaceutical therapies will evolve from their research.

“If we are able to target 4E-BP1 and 4E-BP2 with drugs, we will have a molecule that can protect you from viral infection. That’s a very exciting idea,” Dr. Costa-Mattioli said. “We don't have that yet, but it’s the obvious next step.”

Gene sequencing discovers and identifies mystery microbes and could help find the sources of thousands of previously unidentified cases

In the picture, cells infected with the new virus have been stained using antibodies.

After several other identification techniques failed, the new sequencing approach was used to discover a never-before-seen virus that was likely responsible for the deaths of three transplant patients who received organs from the same donor.

As many as 40 percent of cases of central nervous system disease cannot be traced back to a specific culprit. For respiratory illness, the figure is 30 to 60 percent. In the United States alone, 5,000 deaths each year result from unidentified food-borne infections.

It would also be useful in identifying any newly synthesized virus or microbes.

As powerful as 454 sequencing is for discovering new pathogens, it is not fast or cost efficient enough for use in routine screening of transplant tissue. But microbes discovered using this technique could be incorporated into existing screening techniques.

The technique, called unbiased high-throughput pyrosequencing, or 454 sequencing, was developed by 454 Life Sciences, owned by Roche. This is the first time it was used to probe for the cause of an infectious-disease outbreak in humans, and experts say that it could ultimately usher in a new era in discovering and testing for agents of infectious disease.

To find the mystery pathogen responsible for the deaths, Lipkin's team extracted RNA from the tissues of two of the patients and prepared the sample by treating it with an enzyme that removed all traces of human DNA; this enriched the sample for viral sequences. The researchers then amplified the RNA into millions of copies of the corresponding DNA using a reverse transcriptase polymerase chain reaction (PCR). Usually, PCR requires some advance knowledge of the sequence in question because it relies on molecular primers that match the string of code to be amplified. But 454 sequencing avoids that problem by using a large number of random primers.

The resulting strands of DNA were sequenced using pyrosequencing, which determines the sequence of a piece of DNA by adding new complementary nucleotides one by one in a reaction that gives off a burst of light. Pyrosequencing allows for fast, simultaneous analysis of hundreds of thousands of DNA fragments. Although traditional pyrosequencing generally produces relatively short chunks of sequence compared with earlier sequencing techniques, 454 Life Sciences has improved upon the technology such that longer reads are possible.

February 13, 2008

US Dept of Energy analysis of central power costs

Click on the picture to get a larger view.

New Nuclear Plant Orders
A new nuclear technology competes with other fossil-fired and renewable technologies as new generating capacity is needed to meet increasing demand, or replace retiring capacity, throughout the forecast period. The cost assumptions for new nuclear units are based on an analysis of recent cost estimates for nuclear designs available in the United States and worldwide. The capital cost assumptions in the reference case represent the expense of building a new single unit nuclear plant of approximately 1,000 megawatts at a new “Greenfield” site. Since no new nuclear plants have been built in the US in many years, there is a great deal of uncertainty about the true costs of a new unit. The estimate used for AEO2007 is an average of the construction costs incurred in completed advanced reactor builds in Asia, adjusting for expected learning from other units still under construction.

Nuclear Uprates
The AEO2007 nuclear power forecast also assumes capacity increases at existing units. Nuclear plant operators can increase the rated capacity at plants through power uprates, which are license amendments that must be approved by the U.S. Nuclear Regulatory Commission (NRC). Uprates can vary from small (less than 2 percent) increases in capacity, which require very little capital investment or plant modifications, to extended uprates of 15-20 percent, requiring significant modifications. Historically, most uprates were small, and the AEO forecasts accounted for them only after they were implemented and reported, but recent surveys by the NRC and EIA have indicated that more extended power uprates are expected in the near future. The NRC approved 5 applications for power uprates in 2005, and another 13 were approved or pending in 2006. AEO2007 assumes that all of those uprates will be implemented, as well as others expected by the NRC over the next 15 years, for a capacity increase of 2.7 gigawatts between 2006 and 2030. Table 43 provides a summary of projected uprate capacity additions by region. In cases where the NRC did not specifically identify the unit expected to uprate, EIA assumed the units with the lowest operating costs would be the next likely candidates for power increases.

Nuclear Cost Cases
For nuclear power plants, two nuclear cost cases analyze the sensitivity of the projections to lower and higher costs for new plants. The cost assumptions for the low nuclear cost case reflect a ten percent reduction in the capital and operating cost for the advanced nuclear technology in 2030, relative to the reference case. Since the reference case assumes some learning occurs regardless of new orders and construction, the reference case already projects a 17 percent reduction in capital costs between 2006 and 2030. The low nuclear case therefore assumes a 25 percent reduction between 2006 and 2030. The high nuclear cost case assumes that capital costs for the advanced nuclear technology do not decline from 2006 levels (Table 49). Cost and performance characteristics for all other technologies are as assumed in the reference case.
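The low-case percentages compound rather than add; a quick check of the arithmetic as stated (a sketch of the stated percentages, not EIA's model):

```python
# How the reference-case 17% decline and the low-case extra 10% compound
# into "a 25 percent reduction between 2006 and 2030".
reference_decline = 0.17   # already assumed in the AEO2007 reference case
low_case_extra = 0.10      # further reduction in the low nuclear cost case

remaining_cost = (1 - reference_decline) * (1 - low_case_extra)  # ~0.747 of 2006 cost
total_reduction = 1 - remaining_cost
print(f"{total_reduction:.1%}")   # ~25.3%, rounded to 25% in the text
```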

Nuclear power is the energy source that has fared the best over time in terms of cost-estimate stability.

Recent Sept 2007 report on rising utility construction costs.

New implantable device can extract stem cells from the bloodstream

Cell Traffix, using a microtube device coated with the protein P-selectin, has isolated and collected adult stem cells residing in human bone marrow to eight times greater purity than can be obtained through traditional centrifugation.

The device, a length of plastic tubing coated with proteins, could lead to better bone-marrow transplants and stem-cell therapies, and it also shows promise as a way to capture and reprogram cancer cells roaming the bloodstream. The system could capture and differentiate stem cells and other cells in the body and allow them to be altered or replaced inside or outside the body. It is a path to killing cancer and replacing aged cells with rejuvenated cells. There has also been promising work in reprogramming adult stem cells to revert to embryonic stem cells.

Researchers used genetic alteration to turn back the clock on human skin cells and create cells that are nearly identical to human embryonic stem cells, which have the ability to become every cell type found in the human body. Reprogramming adult stem cells into embryonic stem cells could generate a potentially limitless source of immune-compatible cells for tissue engineering and transplantation medicine.

The behavior of stem cells, or any new tissue, in the body has a great deal to do with the holistic functioning of signaling networks and the cellular environment. The Cell Traffix device can enable alterations in the signaling to regular cells, cancer cells and stem cells.

Direct capture of blood-borne nucleated cells from circulation using P-selectin and non-coated control surfaces in implanted devices. Following incorporation into the femoral artery of anesthetized rats and 1-h blood perfusion, P-selectin coated tubes (A) showed a significantly greater average concentration of captured nucleated cells than non-coated control tubes (B) [184·6 ± 19·9 cells/mm2 for P-selectin tubes (40 μg/ml) vs. 4·7 ± 1·4 cells/mm2 for control surfaces (P < 0·01), bar = 50 μm]. (C) Total cell yields from 50 cm implanted tubes with cell adhesion molecule surfaces were significantly greater than the yield from non-specific binding in control tubes (**P < 0·01).

The new device mimics a small blood vessel: it's a plastic tube a few hundred micrometers in diameter that's coated with proteins called selectins. The purpose of selectins in the body seems to be to slow down a few types of cells so that they can receive other chemical signals. A white blood cell, for instance, might be instructed to leave the circulation and enter a wound, where it would protect against infection. "Selectins cause [some] cells to stick and slow down," says Michael King, a chemical engineer at the University of Rochester who's developing the cell-capture devices. Different types of selectins associate with different kinds of cells, including platelets, bone-marrow-derived stem cells, and immune cells such as white cells.

Nanowerk describes and ridicules a pure brute force nanotechnology robot approach to cellular repair. The new work by Cell Traffix shows that there could be other more clever paths to being able to achieve cellular rejuvenation.

Twenty-eight percent of the cells captured by King's implants were stem cells. "This is astounding given how rare they are in the bloodstream," says King. Implants would probably not be able to capture enough stem cells for transplant. But King believes that filtering a donor's blood through a long stretch of selectin-coated tubing outside the body, in a process similar to dialysis, would be very efficient. "This technique will clearly be useful outside the body" as a means of purifying bone-marrow-derived stem cells, says Daniel Hammer, chair of bioengineering at the University of Pennsylvania.

Mike King holding the cell capture device
Hammer believes that King's devices will also have broader applications as implants that serve to mobilize a person's own stem cells to regenerate damaged tissues. By slowing down cells with selectins and then exposing them to other kinds of signals, says Hammer, King's devices "could capture stem cells, concentrate them, and differentiate them, without ever having to take the cells out of the body." There might be a way to use selectins to extract neural stem cells, too. "This is a very broad-reaching discovery," says Hammer. Indeed, King says that he has already had some success using selectin coatings to reprogram cancer cells. Leukemia is a blood cancer, but King expects that the anticancer coating would work for solid tumors as well. Devices lined with these coatings might be implanted into cancer patients to prevent or slow metastasis. The company hopes to begin clinical testing of the anticancer coatings by early 2010.

Implanted CellTraffix Device Extracts Adult Stem Cells Directly from the Bloodstream, online in the British Journal of Haematology

Cancer killing invention also harvests stem cells

Other stem cell work: embryonic stem cells can be used to create functional immune system blood cells, a finding which is an important step toward using embryonic stem cells as an alternative source of cells for bone marrow transplantation.

Any protein can be made from synthesized DNA on a chip

Harvard has a new protein array lab on a chip. The system could produce any desired protein from synthesized DNA placed on the chip. It is another step in making protein production and engineering and biotechnology in general faster and cheaper.

Schematic representation of screening protein-protein interactions with NAPPA.
(A) On each spot, a target plasmid and an affinity capture molecule are linked to the prepared glass slide.
(B) The slide is bathed with the cell-free in vitro transcription/translation mix, containing one or more query plasmids expressing different affinity tags.
(C) The target proteins are expressed and immobilized on the spots, as are query proteins if they bind to that target.
(D) The wells are then washed to remove unbound protein.
(E) Target-query complexes are detected by fluorescence.

Existing protein arrays involve the tedious and lengthy process of expressing proteins in living cells followed by purifying, stabilizing, and spotting the samples. This process is a bottleneck in the preparation of the arrays. Moreover, functionally active proteins require careful manipulation, and the less that is needed the better. Our approach to developing a protein array, a Nucleic Acid-Programmable Protein Array (NAPPA), replaces the complex process of spotting purified proteins with the straightforward and much simpler process of spotting plasmid DNA.

NAPPA Microarray. Genes (35) involved in eukaryotic DNA replication are expressed and immobilized in situ as GST fusion proteins on a microarray format. Expression of target proteins was detected by fluorescence. The samples were arrayed using a GMS 417 pin arrayer with 900µm spacing.

New error correction in chips will allow 32% more performance or 35% less power

New error correction circuitry will enable 32% more performance or 35% less power usage or a mix of those gains in new computer processors within 3 years. Basically the new circuits make overclocking safe and reliable.

On the chip, next to certain ubiquitous circuits called flip-flops, they placed similar circuits called latches. Though both the flip-flops and the latches had the same input, the data reached the latches a quarter or a half cycle later. Data is supposed to come into the flip-flop at a particular phase of the clock signal, but if there’s an error, it comes in late and the latch catches it instead. If the bits measured by the flip-flop and the latch don’t match, a controller knows there was an error and tells the processor to rerun whatever instruction was affected.
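The flip-flop-plus-shadow-latch scheme described above can be sketched as a toy timing model. This is an illustrative simplification, not Intel's or Blaauw's actual circuit; the function names and timing parameters are invented for the example:

```python
def razor_error(data_arrival, clock_edge, shadow_delay):
    """True if the shadow latch catches data the flip-flop missed.

    The flip-flop samples at clock_edge; the shadow latch samples a
    fraction of a cycle later, at clock_edge + shadow_delay. If the data
    arrives between the two sample points, the flip-flop holds a stale
    value while the latch holds the new one -- a detectable mismatch.
    """
    ff_saw_new_data = data_arrival <= clock_edge
    latch_saw_new_data = data_arrival <= clock_edge + shadow_delay
    return latch_saw_new_data and not ff_saw_new_data


def total_cycles(arrival_times, clock_edge=1.0, shadow_delay=0.5):
    """Each instruction costs one cycle, plus one replay cycle whenever
    a timing error is detected and the instruction must rerun."""
    cycles = 0
    for t in arrival_times:
        cycles += 1
        if razor_error(t, clock_edge, shadow_delay):
            cycles += 1  # rerun the affected instruction
    return cycles
```

In this toy model, running at lower voltage slows the logic (later arrival times); as long as errors stay rare, the occasional replay costs far less than the voltage margin it saves, which is the tradeoff behind the 35% power figure.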

Of course it takes time to detect the error and rerun the instruction. But that minor drop in performance is more than compensated for by the performance gained when all the error-free circuits do their work so much faster. Blaauw says in his setup, which is known as Razor II and is a simplified version of a similar scheme he presented three years ago, detecting and correcting errors costs about 2.5 percent of the chip’s total power consumption. But by using Razor II, he can run it at so much lower a voltage that he’s putting 35 percent less power into the chip overall and getting the same performance. Intel, on the other hand, kept the power consumption the same and reported a gain in performance of between 25 percent and 32 percent. “We work hard for a few percentage points of improvement, so getting this much is a lot,” Mooney says.

It could take about three years to develop the setup for commercial use, although Mooney says that Intel has no plans for a product based on the technology at the moment. Intel researchers are trying to implement the technology using fewer, smaller transistors and reducing the clock power to make the whole thing more efficient.

Ultimate airships from Warren Design Vision and others

Back in 1997, Mike deGyurky, a program manager at the Jet Propulsion Laboratory (JPL), had a design for a giant blimp, perhaps a mile in length, with a cargo capacity of 50,000 tons or more.

Mike and others at JPL had the SkyTrain concept, a series of blimp "box cars" connected together for less drag and more fuel efficiency. The largest of the box car designs carried about 45,000 tons of cargo. Updated designs would be even better now, because the originals depended on thin film material for the skin and thin film solar cells for power. Both of those have seen a lot of improvement, and more improvement is anticipated.

JPL was and is not in the business of building airships, but it was directly connected to an institution that was: the California Institute of Technology (Caltech). Caltech had been involved in Zeppelin research during the early 1930s. Theodore von Karman, a professor of aerodynamics at Caltech who had participated in airship research in Germany (including the construction of the Zeppelin "Los Angeles", built as part of wartime reparations to the US), had proposed high speed dirigibles in his autobiography. This connection was first noted by deGyurky.

There were a number of design questions that arose, some of which remained unanswered at the time. Among them: how would the system be powered? Elements of the SkyTrain could be covered with new ultra-lightweight solar cells to the point of being completely solar powered. A conventionally fueled backup would be necessary for staging operations. The 1994 analysis showed that this would reduce the pure solar SkyTrain cruising speed to around 43 mph. [A proposed Skycat airship design should have a speed of 97 mph. The old Zeppelins had a maximum speed of about 65 mph. There is an Aeroscraft hybrid airship that has a top speed of 174 miles per hour.]

The 1994 analysis developed the concept through a simple comparative analysis of three prototypical airships, each autonomously powered and controlled and able to link and unlink with sibling cars at will.

The first SkyBoxCar analyzed was a small technology demonstrator. It was sized so that 50% of its buoyant capacity was used for lifting cargo. As a demonstrator, it would fly at relatively low altitudes and would be unable to take advantage of favorable gradients in wind and solar irradiance. It would cruise at 31,000 feet at a speed of 30.7 miles per hour with a cargo capacity of 4000 pounds. It would be 125 feet long and would cost $2.5 million.

The second SkyBoxCar analyzed corresponded to a Mack truck: it was sized to carry 45,000 pounds of payload. This increase in scale produced an efficiency increase to 74%, compared to the 50% of its smaller sibling. It would cruise at 31,500 feet at a speed of approximately 43.4 mph. At 244 feet long, it would be four fifths the length of a football field. It would cost approximately $9,485,000.

In the interest of sheer immensity, a third SkyBoxCar was modeled with a cargo capacity of 45,000 metric tons. This radical increase in size produced an efficiency increase to 98%, compared to the 74% of its smaller sibling. Like its siblings, however, it would cruise at 33,500 feet at a speed of 43.4 mph. At 2900 feet long, it would be over half a mile in length and would cost a little over 1.3 billion dollars. A SkyBoxCar of this size violates the concept of bite-sized chunks, but it is of academic interest because of its lifting efficiency and flight envelope.
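The rising efficiency with scale (50%, then 74%, then 98%) is what the square-cube law predicts: buoyant lift grows with volume (roughly length cubed) while hull mass grows roughly with surface area (length squared). A minimal sketch of that intuition follows; the single fitted constant k is my assumption for illustration, not a number from the JPL analysis:

```python
def payload_fraction(length_ft, k=62.5):
    """Toy square-cube model: lift ~ L^3, structure mass ~ k * L^2,
    so the payload fraction is 1 - k / L. The constant k = 62.5 is
    fitted to the demonstrator's 50% at 125 ft; it is not taken from
    the JPL sizing analysis."""
    return 1 - k / length_ft

payload_fraction(125)   # 0.50  -- the demonstrator
payload_fraction(244)   # ~0.74 -- the Mack-truck-sized SkyBoxCar
payload_fraction(2900)  # ~0.98 -- the giant SkyBoxCar
```

Even this one-constant toy model lands close to all three reported efficiencies, which suggests the trend really is dominated by simple geometric scaling.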

"With a train of 50 airships, as opposed to 50 independent airships, you could realize perhaps a 50 [to 98] percent savings in energy, and the savings go up as the speed of travel increases."

Mass produced airships could have a projected cost of 10 cents per ton-mile, compared with the 40-to-50 cents per ton-mile charged by standard air carriers.

Skycat airship. A completed ship should be flying in 2008 and production models in early 2009.

If we had cheap carbon nanotubes [prices likely falling from $200/kg to $4/kg over the next few years] able to provide most of the strength at the macroscale, along with next-generation solar cells, then Skycats built at the scale of the giant airships could travel at 300-600 mph and carry over one hundred thousand tons. That could make sense in the 2015-2025 timeframe.

P-791, an experimental aerostatic/aerodynamic hybrid airship developed by the Lockheed Martin corporation. The first flight of the P-791 was made on 31 January 2006. The P-791 appears to be essentially identical to the SkyCat design.

The cancelled DARPA Walrus airship (500-1000 tons of cargo) whose work continues with Skycat and P-791.

February 12, 2008

Possible breakthrough for cheap Carbon nanotubes

Images of (A,B) shaped CNT solids monoliths, (C) shards, and (D) powders derived from said monoliths. Credit: Naval Research Laboratory.

Multi-walled carbon nanotubes (CNTs) have been produced in high yields in bulk solid compositions using commercially available aromatic containing resins. The concentration of multi-walled carbon nanotubes (MWNTs) and metal nanoparticles can be easily varied within the shaped carbonaceous solid.

The CNTs obtained by this patented method are not formed from gaseous components, as is common with the current CNT production based on chemical vapor deposition (CVD) methods, but rather evolve from metal and carbon nanoparticles that form within the carbonaceous solid during the carbonization process above 500°C. Only a small amount of the organometallic compound or metal salt is needed to achieve the formation of CNTs in high yield, but large quantities of the metal source can be used, depending on the application, if desired.

The solid-state method enables the large-scale production of MWNTs in moldable solid forms, films, and fibers using low-cost precursors and equipment, thereby reducing economic barriers that are inherent with carbon nanotube materials produced by more conventional methods, such as CVD.

The use of commercially available resins is a potentially inexpensive route to CNTs. Using this simple, potentially cost-effective method could result in the production of CNTs in large quantities and various shapes. Scientists are evaluating them for possible use in numerous aerospace, marine, and electronic applications.

Carbon nanotube production in 2007

More on the carbon nanotube market

Currently largest wind turbine will generate 7+ MW

Aubrey De Grey was on the Colbert Report

February 11, 2008

Magnetic Catapult a feasible advanced earth launcher

A 9000 meter long magnetic catapult was proposed in 2003 by Warren D. Smith, a mathematician at Temple University. It was designed to launch 5 meter long, 1 meter diameter projectiles at 2250 gees of constant acceleration to a launch velocity of 20 km/s (nearly twice Earth escape velocity, Mach 58). It would cost about $2-20 billion to build, and operating costs would be $10-100 million/year. This system is not a railgun and is not a coilgun. It is similar to a superconducting coilgun but is better than the quench gun proposal.
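The quoted numbers are mutually consistent under constant acceleration from rest (v² = 2aL); a quick sanity check:

```python
import math

g = 9.81                 # m/s^2
accel = 2250 * g         # claimed constant acceleration
barrel_length = 9000.0   # meters

# From rest under constant acceleration: v^2 = 2 * a * L
exit_velocity = math.sqrt(2 * accel * barrel_length)  # ~19,900 m/s
time_in_barrel = exit_velocity / accel                # ~0.9 seconds
```

So the 9 km length, 2250 gees, and 20 km/s figures all fit together, with the projectile spending under a second in the launcher.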

2250 gees would allow electronics to be launched

It is an engineering project on the same scale as the largest particle accelerators. So it is difficult but achievable. The system should be cheaper and safer than chemical rocket launches.

The magnetic catapult would be different from an electromagnetic railgun or linear electric motors.

The magnetic catapult has the following advantages:
1. No electrical or mechanical contact between the projectile and anything else
2. No capacitors or other external energy storage devices. The superconducting magnets of the launcher are the energy storage device, and the energy is stored at essentially uniform density in the form of a magnetic field.
3. The accumulation of the stored energy may be accomplished gradually, losslessly and purely mechanically
4. Essentially 100% of the stored energy is converted to projectile kinetic energy. Guns, railguns and rotary pellet launchers suffer damage from friction and other wasted energy.
5. Magnetic catapult has inherently stable operation. Instabilities of other designs at these velocities could be catastrophic.
6. No switching at large currents and voltages (unlike linear electric motors). Switching only occurs when current and voltage are zero. This minimizes stress on the switch and maximizes energy efficiency.

Railguns use two sliding contacts that permit a large electric current to pass through the projectile. This current interacts with the strong magnetic fields generated by the rails and this accelerates the projectile.

The magnetic catapult is similar to a superconducting coilgun (quench gun), which is contactless and uses a magnetic field generated by external coils arranged along the barrel to accelerate a magnetic projectile. However, the quench gun releases a lot of heat during coil quenching.

The picture to the left is a standard copper coil coilgun. The magnetic catapult would need cryogenics (to cool the superconductors) and a lot of superconducting material. The magnetic catapult design will need to address the issue of magnetic quenching. During projectile flyby of a superconducting ring, the magnetic environment near that ring can change by as much as plus or minus 12 Tesla. To protect against quenching, a finely intermixed composite material made of both YBCO and an electrically and magnetically inert companion material is needed. TiO2 might be a suitable material. With twice as much TiO2 as YBCO, the inner rings would be 3 times bigger. The total energy losses related to the quenching issue are about 8%.

Another disadvantage is that the system does not scale down well: some test sections could be built, but only a nearly full scale system would really indicate whether the system would work. However, it seems promising and worth detailed testing and modeling.

The superconducting magnets that are needed for the project are on the leading edge of developments in that area. There are record superconducting magnets with a 50 cm bore size at 7 Tesla, and a 20 cm bore size at 8.1 Tesla. The full scale system needs 17 Tesla magnets with a 100 cm bore size. There is an Atlas hybrid magnet being built to hold 1.2 GJ with a 22 meter diameter. The highest field superconducting magnet is 26.8 Tesla, and researchers believe they can soon reach 50 Tesla.

Magnet Lab researchers tested a small coil (9.5-millimeter clear bore) in the lab's unique 19-tesla, 20-centimeter wide-bore, 20-megawatt Bitter magnet. However, I think the main issue is one of cost: larger bore sizes mean more wire and more expense. More information on superconducting magnets

A CERN superconducting magnet has an inside diameter of 6.3 meters, a length of 12.5 meters, and generates a magnetic field of 4 T (about 80,000 times stronger than the Earth's). Once completed, the CMS superconducting magnet will boast a notable record: with its 2.6 gigajoules of energy, it will hold the world record for energy ever stored in a magnet.

The magnetic catapult is described in a 42 page postscript file.

The Magnetic Catapult Design
The projectile will be a cylindrical shaped permanent magnet. The accelerator will consist of a sequence of coaxial stationary rings, each of which is also a supercurrent loop magnet. The projectile and the rings all generate the same amount of magnetic flux, which will be assured by initially magnetizing all the rings.

[Thread a ring-shaped piece of superconductor with a magnet whose North end is on one side of the ring and whose South end is on the other. Cool the ring to its superconducting temperature, then remove the magnet. The magnet's flux no longer traverses the ring, but the total magnetic flux through the ring must remain unchanged. More discussion on page 11 of the paper.]

During launch, the projectile passes through the superconducting rings. The South end of the first ring attracts the North end of the projectile. Once the projectile reaches a central position inside that ring (where the supercurrent is zero), the superconductivity in that ring is switched off, converting it into an insulator. The projectile continues on without decelerating. This repeats along each of the magnetic rings.

The entire accelerator is enclosed in a pipe with a superconductive inner coating. The outer pipe has the following purposes:
1. It is a vacuum vessel
2. It is an EM shield
3. It is a thermal insulator
4. It would help to levitate the projectile
5. The sequence of rings forms a long solenoid; the field must come back the other way on the outside of the solenoid.

The projectile bursts a membrane at the end of the tube, or it passes through a double door airlock.

The system should be built in a mountain like Annapurna or Dhaulagiri in Nepal. The projectile would exit above half to two thirds of the atmosphere. The projectile would need to have material that would burn off (ablate) to carry away the heat. Only a few percent of the total projectile would need to be sacrificed.

Cost estimate
The Superconducting Super Collider (SSC) was to cost $8.25 billion but ran into cost overruns and was cancelled. The SSC was to be in an 87 kilometer long tunnel with 10,000 helium-cooled, 7 m long, 6.6 Tesla magnets. The Magnetic Launcher would be almost ten times shorter, the magnets need not be as precise, the outer vacuum shells are smaller, and the vacuum need not be as high. The Magnetic Launcher would be built very robustly and be located on a high mountain. The Brookhaven Relativistic Heavy Ion Collider (RHIC) was 4 kilometers long with 1740 superconducting magnets. The RHIC cost $600 million, or roughly $155,000 per linear meter. Those prices would enable the 9 km launcher to be made for $1.4 billion. Another cost estimate looks at the costs of components (tunneling, Dewaring, structural support and superconductors) for a maximum cost of $800,000 per meter, or $7.2 billion for a 9 km long launcher.
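The scaling arithmetic behind those figures:

```python
launcher_length_m = 9000

# RHIC benchmark: $600 million over 4 km of superconducting-magnet tunnel
rhic_cost_per_m = 600e6 / 4000               # $150,000/m (the article rounds to ~$155,000)
low_estimate = 155_000 * launcher_length_m   # ~$1.4 billion for 9 km

# Component-based upper bound from the paper: $800,000 per meter
high_estimate = 800_000 * launcher_length_m  # $7.2 billion for 9 km
```

The order-of-magnitude spread between the two estimates is consistent with the $2-20 billion construction range quoted for the proposal.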

In the previous article I had on scaling up railguns, wear on the launch tube remained an issue, and replacing the worn launch tube was the primary cost driver for the advanced railgun proposal.

Overview of electromagnetic guns

Japan taking small steps to 1 gigawatt space based solar power by 2030

The Japan Aerospace Exploration Agency (JAXA) has begun to develop the hardware for deploying a 1 gigawatt space based solar power system by 2030. The first steps will be small lab tests.

JAXA, which plans to have a Space Solar Power System (SSPS) up and running by 2030, envisions a system consisting of giant solar collectors in geostationary orbit 36,000 kilometers above the Earth’s surface. The satellites convert sunlight into powerful microwave (or laser) beams that are aimed at receiving stations on Earth, where they are converted into electricity.

The researchers will use a 2.4-meter-diameter transmission antenna to send a microwave beam over 50 meters to a rectenna (rectifying antenna) that converts the microwave energy into electricity and powers a household heater.

JAXA ultimately aims to build ground receiving stations that measure about 3 kilometers across and that can produce 1 gigawatt (1 million kilowatts) of electricity — enough to power approximately 500,000 homes.
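Those numbers imply about 2 kW of continuous power per home, and a fairly modest power density at the ground station for a 3 km rectenna:

```python
import math

plant_output_w = 1e9        # 1 gigawatt
homes_served = 500_000
watts_per_home = plant_output_w / homes_served  # 2000 W continuous per home

rectenna_diameter_m = 3000
rectenna_area_m2 = math.pi * (rectenna_diameter_m / 2) ** 2
ground_power_density = plant_output_w / rectenna_area_m2  # ~140 W/m^2
```

About 140 W/m² averaged over the rectenna is well below full sunlight (roughly 1000 W/m²), which is consistent with the low beam intensities usually proposed for microwave power transmission.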

Low earth orbit spaced based solar power could be more easily scaled

Space Island Group has almost completed financing for a prototype 10-25 megawatt system that it claims will be in orbit within 18 months, at a total cost of $200 million.

"It will 'site-hop' across base stations in Europe, beaming 90 minutes of power to each one by microwave." If the test proves successful, a 1 gigawatt installation for the UK domestic market would be the next step.

15 proteins in urine are biomarkers for spotting coronary artery disease (CAD)

A set of 15 proteins found in urine can distinguish healthy individuals from those who have coronary artery disease (CAD), a new study has found.

Coronary artery disease is the most common type of cardiovascular disease, occurring in about 5 to 9% (depending on sex and race) of people aged 20 and older.

In 2001, the death rate from coronary artery disease was 228 per 100,000 white men, 262 per 100,000 black men, 137 per 100,000 white women, and 177 per 100,000 black women. There are over 1 million deaths per year worldwide.

Due to the ease of obtaining samples, urinary protein analysis is emerging as a powerful tool to detect and monitor disease.

The researchers next examined how predictive their protein panel was and found it could identify the presence of CAD 83% of the time. The panel had a specificity of over 98%, meaning the test produced almost no false positives; its inaccuracies were primarily misdiagnosing CAD individuals as healthy. The researchers also observed that the protein signatures of CAD individuals became more normal after exercise, suggesting these biomarkers can be used both to help diagnose CAD and to monitor the progress of treatment.
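To keep the two accuracy measures straight, here is how they are computed. The cohort of 100 CAD patients and 100 healthy controls is my hypothetical illustration, chosen only to match the reported 83% detection rate and >98% true-negative rate:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = fraction of diseased people correctly flagged
    (high sensitivity means few false negatives).
    Specificity = fraction of healthy people correctly cleared
    (high specificity means few false positives)."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# Hypothetical: 100 CAD patients (83 flagged, 17 missed) and
# 100 healthy controls (99 cleared, 1 false alarm).
sens, spec = sensitivity_specificity(tp=83, fn=17, tn=99, fp=1)  # (0.83, 0.99)
```

With numbers like these, nearly everyone the test flags really has CAD, while the errors are concentrated in the 17% of CAD patients the panel misses.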

A USB stick sized device has been created for genetic screening in minutes for tens of dollars. A similar cost device seems possible for screening for the proteins that identify coronary artery disease.

This is another major piece of the vision that I and many others have: to transform public health with widespread, frequent biomarker tracking that identifies people in the early stages of disease, or those with increased risk factors, and to shift medicine toward cheaper prevention of disease development.

This should also be used to change drug approvals, with improved biomarkers identifying earlier when a drug is having an effect.

More papers by Anna Dominiczak

Cardiovascular disease statistics

I provided details for the Wired Magazine Defense blog

A prism of engineered material — a metamaterial composed of an arrangement of nano-coils of precious metals such as gold or silver — embedded in a solid glass-like material. The prism structure has a negative refractive index, which makes it truly transparent to light, allowing light to pass freely through with no reflection.

This site was mentioned on the Wired Magazine defense blog. I helped to decode the meaning of
"exploiting the optical plasmon phenomenology characteristics of nanoscale structures"
from a DARPA project about transparent displays.

Light interacts with fabricated nanoscale patterns, creating new effects that alter the wavelengths and enable super-microscopes or invisibility.

It refers to using transparent metamaterials to alter light to create displays. Most metamaterials used to this point have been made of metal, and some have been semiconductors.

Concentric rings of plastic on gold allow an optical microscope to resolve objects too small to otherwise be seen (Image: Science/Maryland University)

Metamaterials can shape visible light and other wavelengths (sound, microwaves, etc.). Metamaterials have tended to be metals with particular patterns. The wavelengths interact with the patterns and can be guided by them to make things invisible or to give wavelengths negative indexes of refraction (enabling superlenses for better microscopes).

Metamaterials can also be made out of non-metals, but to affect visible light they would need to have nanoscale dimensions (the "nanoscale structures" part).

Plasmons are what interact with the metamaterial to create the effect.

Plasmons: the quanta of waves produced by collective effects of large numbers of electrons in matter when the electrons are disturbed from equilibrium. Metals provide the best evidence of plasmons because they have a high density of electrons free to move.

It sounds like DARPA wants to make big transparent metamaterial displays for windshields of planes or vehicles. The material could also react to light and block out lasers that were trying to blind or damage the occupants, or react to a radiation flash from a nuclear device.

Contact lens and displays in glasses

Contact lens display
