Archive for the ‘variation’ Category

EETimes.com – IBM warns of ‘design rule explosion’ beyond 22-nm

PORTLAND, Ore.—An IBM researcher warned of “design rule explosion” beyond the 22-nanometer node during a paper presentation earlier this month at the International Symposium on Physical Design (ISPD). Kevin Nowka, senior manager of VLSI Systems at the IBM Austin Research Lab, described the physical design challenges beyond the 22-nm node, emphasizing that sub-wavelength lithography has made silicon image fidelity a serious challenge.

“Simple technology abstractions that have worked for many generations like rectangular shapes, Boolean design rules, and constant parameters will not suffice to enable us to push designs to the ultimate levels of performance,” Nowka said.

Solving “design rule explosion,” according to Nowka, involves balancing area against image fidelity by considering the physical design needs at appropriate levels of abstraction, such as within cells. Nowka gave examples of how restricted design rules could reap a three-fold improvement in variability with a small area penalty.

Beyond the 22-nm node, Nowka said he envisions physical design rules that are more technology-aware and that make use of pre-analysis and library optimization for improved density and robustness.

IBM described a solution to “design rule explosion” at the 22-nm node, illustrated in an SRAM chip design.

Also at ISPD, which was held March 14 to 17 in San Francisco, Mentor Graphics Corp. proposed that hardware/software co-design be used for chips, their packages and their printed circuit (pc) boards. A Mentor executive offered an example in which a 26 percent cost savings was realized by performing such a co-optimization of all three systems simultaneously.

“Thinking outside of the chip” was the key, according to John Park, business development manager for Mentor’s System Design division. Park claimed that by optimizing the interconnect complexity across all three levels of a design (chip, package and pc board), pin counts, packaging costs and high-speed I/O can all be improved. According to Park, the chip-to-package-to-pc-board design flow needs to be performed in parallel, because constraints on pc boards often place requirements on package design, while package requirements can in turn constrain chip design; both effects are ignored by current design flows.

Serge Leef, Mentor’s vice president of new ventures and general manager of the company’s System-Level Engineering division, invited the automotive industry to adopt the EDA design methodology for on-board electronics.

According to Leef, the typical automobile today has up to 60 electronic control units (ECUs), up to 10 different data networks, several megabytes of memory and miles of wiring—all of which could be better designed by EDA-like software.

“Software components are like VLSI macros and standard cells; ECUs are like physical areas on the layout onto which IC blocks are mapped; signal-to-frame mapping is like wire routing,” said Leef.

According to Leef, new software tools are needed that copy the EDA methodology but are optimized for the simultaneous, conflicting constraints of automotive electronics, permitting analysis and optimization of designs so that fewer prototype test cars need to be built.
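Leef’s analogy is essentially an assignment problem, so as a rough illustration (not Mentor’s tool or method), the sketch below maps a handful of invented software components onto ECUs while minimizing cross-ECU network traffic, much as a placer maps IC blocks onto layout regions to minimize wire length. All component names, CPU loads and message rates are made up.

```python
from itertools import product

# Invented miniature of the analogy: software components ~ IC blocks,
# ECUs ~ layout regions, and inter-component messages ~ wires to be routed.
components = {"engine_ctrl": 40, "abs": 25, "infotainment": 60, "body_ctrl": 20}  # CPU load, %
traffic = {("engine_ctrl", "abs"): 120,      # messages/s exchanged between pairs
           ("engine_ctrl", "body_ctrl"): 45,
           ("infotainment", "body_ctrl"): 30}
ecus = ["ecu_a", "ecu_b"]
ECU_CAPACITY = 100  # max CPU load per ECU, %

def bus_traffic(assignment):
    """Messages that must cross the data network because their endpoints sit on different ECUs."""
    return sum(rate for (a, b), rate in traffic.items() if assignment[a] != assignment[b])

def feasible(assignment):
    """No ECU may be loaded beyond its CPU capacity."""
    return all(sum(load for c, load in components.items() if assignment[c] == e) <= ECU_CAPACITY
               for e in ecus)

# Exhaustive search over all component-to-ECU mappings (fine at this toy scale;
# a real tool would need heuristics, much as placement and routing do).
names = list(components)
best = min((dict(zip(names, combo)) for combo in product(ecus, repeat=len(names))),
           key=lambda a: bus_traffic(a) if feasible(a) else float("inf"))
print(best, "cross-ECU traffic:", bus_traffic(best), "messages/s")
```

The exhaustive search stands in for whatever optimization a production tool would use; the point is only that the objective (cross-network traffic under per-ECU capacity limits) plays the same role that wire length plays in physical design.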

In perhaps the boldest presentation at ISPD, keynote speaker Louis Scheffer, a former Cadence Design Systems Inc. Fellow who is now at Howard Hughes Medical Institute, proposed adapting EDA tools to model the human brain. Scheffer described the similarities and differences between the functions of VLSI circuitry and biological neural networks, pointing out that the brain is like a smart sensor network with both analog and digital behaviors that can be modeled with EDA.

Variability Aware System Modeling (IMEC) @ 2007 IMEC TAD conference

Variability Aware Modeling has progressed in the past year to a level that enables answering qualitative questions on technology choices. We describe in detail the process to predict system (IC) yield from technology variability, and apply this to a concrete system under the following technology assumptions: low VTH, high VTH, and dual or triple VTH, when assigning parts of the circuits or critical paths to one of the VTH choices. Strategies for VTH assignment play a role. Geometrical correlations of variability (local vs. global, random vs. systematic) are taken into account, leading to predictions of wafer-to-wafer and even batch-to-batch variation of yield or process drift.

In a second, more speculative part we anticipate how this framework can enable the design of systems that tackle variability with runtime countermeasures, such as coarse-grain and fine-grain self-adaptive or centrally controlled circuits.
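The abstract does not spell out the yield-prediction flow, but its core idea (separating systematic, die-level variation from random, within-die variation and propagating both to a timing-limited parametric yield) can be illustrated with a short Monte Carlo sketch. Everything below is a hypothetical toy model: the alpha-power delay expression, the sigma values, the path count and the timing budget are invented for illustration and are not IMEC’s numbers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy threshold-voltage variation model (all numbers are invented):
# each die gets one global (die-to-die / wafer-to-wafer) Vth shift, and
# every critical path adds its own local (within-die) random component.
N_DIES = 10_000        # Monte Carlo samples
N_PATHS = 500          # critical paths per die
VTH_NOM = 0.30         # nominal Vth of the chosen (low-VTH) option, in volts
SIGMA_GLOBAL = 0.015   # systematic, die-level sigma (V)
SIGMA_LOCAL = 0.010    # random, path-level sigma (V)
T_MAX = 0.97           # normalized timing budget

def path_delay(vth, vdd=1.0, alpha=1.3):
    """Alpha-power-law style delay: paths slow down as Vth rises toward Vdd."""
    return 0.55 * vdd / (vdd - vth) ** alpha

global_shift = rng.normal(0.0, SIGMA_GLOBAL, size=(N_DIES, 1))
local_shift = rng.normal(0.0, SIGMA_LOCAL, size=(N_DIES, N_PATHS))
vth = VTH_NOM + global_shift + local_shift

# A die fails when its slowest critical path misses the timing budget,
# so parametric yield is the fraction of dies whose worst path still fits.
worst_delay = path_delay(vth).max(axis=1)
print(f"Predicted parametric yield: {np.mean(worst_delay <= T_MAX):.1%}")
```

A second, high-VTH option assigned to non-critical paths would be modeled the same way, with a different nominal Vth per path group; choosing which paths get which option is where the VTH-assignment strategies mentioned in the abstract come in.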