Archive for the ‘Memory technology’ Category

Synapse on a Chip

The memristor – the so-called "missing link of electronics," a memory technology that can change its resistance in varying levels – has been around on paper for nearly 40 years. However, it wasn't until 2010 that a group at the University of Michigan led by Dr. Wei Lu demonstrated, in a paper just published in Nano Letters, that it can be used to build brain-like computers. New Scientist reports that "memristors can behave uncannily like the junctions between neurons in the brain." Scientific American describes a US military-funded project that is trying to use the memristor "to make neural computing a reality." DARPA's Systems of Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE) program is funded to create "electronic neuromorphic machine technology that is scalable to biological levels."
The discovery of the memristor derives from the search for a rigorous mathematical foundation for electronics by a young electronics engineer at the University of California, Berkeley, Leon O. Chua. Chua’s analysis suggested there was a fourth foundational circuit element missing from the standard trio of resistor, capacitor and inductor. He called it "memristor." In 1971, he published a seminal paper on this missing basic circuit element.
In 1976, Chua and Sung-Mo Kang published another paper describing a large class of devices and systems they called "memristive devices and systems." Proof of the existence of such devices remained elusive until 2008, when R. Stanley Williams and his research team at HP developed a two-terminal titanium dioxide nanoscale device that exhibited memristor characteristics. A memristive device requires its atoms to change location when a voltage is applied, and that happens much more easily at the nanoscale (10⁻⁹ meters). Here is HP Senior Fellow R. Stanley Williams at the whiteboard describing the memristor:

HP sees an immediate application of memristors for a new kind of computer memory that could be used in place of the dynamic random access memory (DRAM) commonly found in today’s desktop and laptop computers. When you turn your computer off, whatever you are working on in conventional DRAM is lost unless you save it. With memristor-based computers, your document or spreadsheet (or other data) would be stored without having to save it, with little power drained from the computer’s battery. HP is obviously interested in marketing memristors for computers, cell phones, video games – anything that requires a lot of memory without a lot of battery drain.
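To make that behavior concrete, here is a minimal sketch using the commonly cited linear ion drift model of a memristor – not HP's actual device equations – in which resistance depends on the history of the charge that has flowed through the device. All parameter values are illustrative assumptions:

```python
# Minimal linear ion drift memristor sketch. The model and every
# parameter value below are illustrative assumptions, not HP's figures.

R_ON, R_OFF = 100.0, 16000.0   # assumed fully-doped / undoped resistances (ohms)
D = 10e-9                      # assumed film thickness: 10 nm
MU_V = 1e-14                   # assumed dopant mobility, m^2/(V*s)

def simulate(voltages, dt=1e-3, w=0.1):
    """Step the model; w in [0, 1] is the doped fraction of the film."""
    resistances = []
    for v in voltages:
        r = R_ON * w + R_OFF * (1.0 - w)   # doped/undoped regions in series
        i = v / r                          # instantaneous current
        w += MU_V * R_ON / D**2 * i * dt   # state drifts with charge flow
        w = min(max(w, 0.0), 1.0)          # pin to physical bounds
        resistances.append(r)
    return resistances

# Apply a bias, then remove it: resistance drops while the pulse is on,
# then simply stays where it is.
rs = simulate([1.0] * 1000 + [0.0] * 1000)
```

Note that when the voltage is zero no current flows, so the state w – and with it the resistance – is frozen: the device "remembers" without drawing power, which is exactly the nonvolatile property HP wants to exploit.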

The National Institute of Standards and Technology (NIST) promotes U.S. innovation and industrial competitiveness and is also very interested in memristor technology. Here's a video of Dr. Nadine Gergel-Hackett, who researches memristor technology there, describing how memristors can be used to develop flexible chips that "do the twist":

As flexible chips that retain their memory when turned off, memristors could clearly soon have a big impact on the electronics marketplace. But what makes them so well suited to building "brain-like computers"? An HP Labs article makes the case that memristor technology "could one day lead to computer systems that can remember and associate patterns in a way similar to how people do," improve facial recognition technology, and possibly "provide more complex biometric recognition systems [that] could enable appliances that learn from experience and computers that can make decisions."
The remarkable characteristics of the memristor that make it interesting to HP, DARPA, and NIST as a neural computing substrate come from an unlikely source in the biological world: the slime mold Physarum polycephalum. The slime mold appears as a fungus-like gel during certain phases of its lifecycle, hence the name. Like ectoplasm from the movie Ghostbusters, this glutinous single-celled organism – without a single neuron to its name – can sense and react to its environment and even solve simple puzzles. It can also anticipate events.

When Leon Chua first discovered this missing foundational circuit element, he suspected that memristors might have something to do with how biological organisms learn. Experiments with slime molds in 2008 by Tetsu Saigusa at Hokkaido University in Sapporo sparked additional research at the University of California, San Diego by Max Di Ventra. Di Ventra was familiar with Chua's work and built a memristive circuit that was able to learn and predict future signals. This ability turns out to be similar to the electrical activity involved in the ebb and flow of potassium and sodium ions across cellular membranes: synapses altering their response according to the frequency and strength of signals. New Scientist reports that Di Ventra's work confirmed Chua's suspicion that "synapses were memristors." "The ion channel was the missing circuit element I was looking for," says Chua, "and it already existed in nature."
Jumping forward to 2010, the work of Dr. Wei Lu's University of Michigan team now confirms that memristor circuits indeed behave like synapses. Lu's team used a mixture of silicon and silver to join two metal electrodes, mimicking the way synapses allow neurons to learn new firing patterns – not unlike a slime mold's ability to anticipate events. The relative timing of electrical signals in two neurons determines how readily later messages can jump across the synapse between them: when a pair fires together, that synapse becomes more likely to pass later messages between the two. "Cells that fire together, wire together," says Lu.
Just like a synapse, the memristor changes its resistance in varying levels. Dr. Lu found that memristors can simulate synapses because electrical synaptic connections between two neurons can seemingly strengthen or weaken depending on when the neurons fire. "The memristor mimics synaptic action," Lu concludes. Dr. Nadine Gergel-Hackett at NIST acknowledges the Michigan team’s successful creation of a brain synapse analog. "This work is a large step towards the realization of biology-inspired computing," she says.
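A rough way to picture the "fire together, wire together" rule is the generic spike-timing-dependent plasticity update used in computational neuroscience. This is my own illustrative sketch with arbitrary constants, not Lu's circuit:

```python
import math

def stdp_update(weight, dt_spike, a_plus=0.05, a_minus=0.05, tau=20.0):
    """dt_spike = t_post - t_pre in ms; returns the new synaptic weight.

    If the pre-synaptic neuron fires just before the post-synaptic one
    (dt_spike > 0), the connection is strengthened; the reverse timing
    weakens it. Closer spike timing produces a larger change.
    """
    if dt_spike > 0:
        weight += a_plus * math.exp(-dt_spike / tau)   # potentiation
    else:
        weight -= a_minus * math.exp(dt_spike / tau)   # depression
    return min(max(weight, 0.0), 1.0)                  # keep in [0, 1]

w = 0.5
w = stdp_update(w, 5.0)    # pre leads post by 5 ms: weight goes up
w = stdp_update(w, -5.0)   # post leads pre: weight comes back down
```

In a memristor "synapse," the weight is simply the device's conductance, nudged up or down by the voltage pulses the timing of the two neurons' spikes produces across it.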
The human brain contains about 10 billion nerve cells, or neurons. On average, each neuron is connected to other neurons through about 10,000 synapses. While Lu's research is promising, it will likely be a while until researchers can demonstrate circuits with even tens of thousands of memristor "synapses." Nevertheless, DARPA's SyNAPSE project appears committed to scaling memristor technology to biological levels.
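The scale gap is easy to quantify from the figures above:

```python
import math

# Back-of-the-envelope arithmetic using the article's figures.
neurons = 10e9               # ~10 billion neurons
synapses_per_neuron = 1e4    # ~10,000 connections each
total_synapses = neurons * synapses_per_neuron   # ~1e14 synapses

# A demonstration with "tens of thousands" of memristor synapses would
# still be roughly nine to ten orders of magnitude short of biology.
demo = 5e4
orders_short = math.log10(total_synapses / demo)
```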


The Impact of Lithography Challenges

I recently had the opportunity to talk with Dr. Nick Tredennick, and he shared some of the topics from his recent presentation, The Last Convergence. The presentation defines three phases in the convergence of semiconductor-related technologies, with the first phase driven by companies like Digital that built modules from discrete components. During the second, integrated circuits began to aggregate discrete components; that expanded performance capability paved the way for minicomputers to displace mainframes. Dr. Tredennick concludes that the subsequent progress in semiconductor cost/performance characteristics brought about the high volume and broad acceptance of personal computers, which pushed even minicomputers and workstations into a separate niche. He also observed that “…all of these transitions were in the second dimension; that is, planar components in planar assemblies. In addition, cost-performance drove the design of high-volume systems.”
Dr. Tredennick believes that wafer stacking is the next convergence: a “…compelling change that will create seismic events throughout the semiconductor business.” He sees two elements contributing to this convergence. The first is that the shift of personal computers to more mobile platforms also shifted design objectives from a direct cost-performance relation toward the current emphasis on cost-performance per watt. Changing to this new performance target will drive the further integration of chips into 3D devices, given the increased flexibility and functionality of 3D packages.
The second element is more revolutionary and is triggered by the anticipated transition to new light sources as the industry moves toward smaller circuit geometries.
The industry has assumed that the historic improvements in semiconductor performance would continue to follow the timetable of Moore’s Law, even though the investments necessary to achieve smaller transistors have risen exponentially with each new generation.
There is broad agreement today that the anticipated increases in performance, the increasing demand for higher wafer volume, and the ability to continue the traditional decline in costs will all become more difficult to sustain as manufacturing processes march toward sub-20-nm line widths. We may in fact have already reached the point where the next lithography shrink cannot be made on schedule, given the expense and the unresolved technical barriers ahead.
As companies continue to compete for cost/performance advantages while working through these manufacturing challenges, other ways of increasing performance will have to be pursued. Dr. Tredennick expects that these competing OEM and semiconductor companies will likely find that wafer and die packaging techniques yield better cost and performance results, at lower R&D cost, than can be achieved by relying exclusively on shrinking transistor sizes.
There is no doubt that we will eventually get to smaller transistors through the use of new illumination sources and other process improvements. However, the time gap between today’s leading-edge processes and the future deployment of new light sources will create the opportunity for R&D investment in other cost/performance alternatives. Furthermore, by the time most manufacturers have completed that transition, the industry will have been fundamentally transformed by the convergence to these new stacked-packaging concepts.
The impact of that manufacturing scenario will challenge one of the basic assumptions on which the semiconductor industry has relied. While value transistors once came from trailing-edge processes for which the plant, equipment, and development costs had been fully amortized, this changes as progress in process lithography slows: the most efficient source of value transistors will migrate to the leading-edge fab facilities as their production processes mature. As the reduction in minimum lithography dimensions slows to less than the historic rate, the cost of wafer processing will be driven more by operating cost. Once the efficiencies of high-volume manufacturing take hold, fabs that have made the transition to leading-edge processes will also be positioned to make value transistors.
“If the cost of silicon falls toward zero and progress in shrinking transistors slows, then where do system designers turn” for more performance and to add their own value? Dr. Tredennick concludes that the answer to this question will also be found in die and wafer stacking technologies.
The full impact of this scenario is far beyond the scope of this blog. Our worm’s-eye view of the universe is limited to memory technologies, so in the next post we can only ask what impact this broad trend might have on new memory technologies.


After first apologizing to Dr. Tredennick if I have somewhat mangled the points of his presentation, I will say that I completely agree with his assessment of the manufacturing challenges ahead.
I also believe that this manufacturing issue has a double-barreled impact on the production costs of memory technologies. 
The first impact concerns the ability to attain the smallest lithography sizes. Some of the memory companies are already preparing for high-volume production at 25 to 30 nm over the next 12 to 18 months and appear to be ahead of the high-volume, production-level processes for logic products. The memory companies will therefore likely be the first to test these limits. Whatever additional costs these complexities contribute to the cost/performance ratio, the issue will likely apply equally to the new memory technologies and to the older DRAM/NAND technologies.
However, I suspect there is a second impact on memory products that will not be borne equally by the new and old memory technologies. The issue for NAND/DRAM is that they are charge storage technologies, as opposed to the resistance- and state-based storage structures almost universally adopted by the new memory technologies. As lithography below 20 nm significantly reduces the volume available for storing charge, it becomes harder for a cell to maintain a consistent and predictable level of stored energy, and both the performance and endurance of the cells change. This issue has not been shown to apply to the resistance and state-change memory cells.
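A crude, assumption-heavy sketch shows why shrinking hits charge storage specifically: the charge a cell can hold scales with its area, so the electron count falls with the square of the feature size. The capacitance density and write voltage below are invented round numbers for illustration, not any vendor's figures:

```python
Q_E = 1.602e-19  # electron charge in coulombs

def stored_electrons(node_nm, v=3.0, c_per_um2=1e-15):
    """Electrons on a storage node, assuming ~1 fF/um^2 and a 3 V write.

    Both parameters are illustrative assumptions; only the area scaling
    (electrons ~ node_nm squared) is the point being made.
    """
    area_um2 = (node_nm * 1e-3) ** 2   # cell area shrinks quadratically
    return c_per_um2 * area_um2 * v / Q_E

# Halving the feature size cuts the stored electrons by 4x; below 20 nm
# only a handful remain, so losing a few electrons can flip a bit.
counts = {node: stored_electrons(node) for node in (90, 45, 20)}
```

Resistance- and state-change cells sidestep this particular squeeze because their stored state is a material property rather than a dwindling packet of electrons.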
Clever engineers have always found ways to extend the life of any technology or process, but at some point they can only do so by altering the slope in the decline of the traditional cost-per-bit or cost-per-functionality of the IC. 
I foresee the need to add more error correction or some other form of physical compensation to the existing high-volume memory technologies, which will noticeably slow the decline in the cost-per-bit of NAND and DRAM. As a result of those additional costs, the decline in the cost-per-bit of charge storage memory technologies will be on a less favorable slope than that of most of the new and emerging memory technologies.
We have already suggested one scenario by which new memory technologies can gain a foothold—we’ve presented Dr. Makimoto’s Wave as a possible harbinger of a broad shift in target applications and noted that Intel already appears to be moving in a like manner.  When you change the target application away from the monolithic desktop PC architectures, you also change the value proposition of the technologies. 
Here we have now added Dr. Tredennick’s observation of a second potentially major tremor that will likewise shake the foundation of traditional memory cost-per-bit expectations.

Innovative Silicon moves floating-body RAM off SOI

EE Times

LONDON — Innovative Silicon Inc. (Santa Clara, Calif.), a pioneer of floating-body memory, has said it has adapted its technology to operate at less than one volt and without requiring a silicon-on-insulator substrate. A test chip built at Hynix Semiconductor Inc. using 54-nm design rules demonstrates that Z-RAM technology is a contender as the lowest-cost and lowest-power DRAM replacement technology, the company said.

Innovative Silicon, founded in 2002 and still privately held, said it is not providing engineering detail of its achievement at this time. A paper jointly authored by Hynix and Innovative Silicon has been submitted for presentation at the 2010 VLSI Technology Symposium; the paper will reveal more details of the cell operating voltages, the company said.

Implementing floating-body RAM on bulk silicon avoids the need for expensive silicon-on-insulator (SOI) substrates. Innovative Silicon said Z-RAM is set to be lower-cost than traditional DRAM at sub-40-nm production nodes while also meeting double-data-rate performance requirements.

“We are very excited about the upgrades to our Z-RAM technology, as they tackle, head-on, the requirements of the large memory manufacturers to have the technology available on bulk silicon and with lower costs than any other DRAM technology — including conventional DRAM,” said Mark-Eric Jones, president and CEO of Innovative Silicon, in a statement. “Conventional DRAM has been the low cost, random-access memory technology for 40 years, but the memory industry is on the verge of transitioning to the capacitor-free Z-RAM technology.”

Pierre Fazan, chairman and chief technology officer, said: “The Z-RAM technology now has all of the key ingredients to fully replace stand-alone DRAM. It is implemented on bulk silicon and has demonstrated cell operating voltages below one volt with no degradation to its multi-second static retention time, and delivers greater than a 1,000-times improvement in dynamic or ‘disturb’ retention time. The Z-RAM technology’s operating voltage is now 50 to 75 percent lower than any other floating body or thyristor memory announced to date, and it is the only FB memory technology to cover the entire ITRS memory roadmap.”

“The advances in power and voltage demonstrated in our 54-nm test chips show that the Z-RAM technology has solved the most challenging issues we have seen with floating-body memories. These results validate that the Z-RAM technology has great potential to replace conventional DRAM over the next few memory generations,” said Sungjoo Hong, vice president of DRAM R&D at Hynix Semiconductor, in the same statement.