Posts Tagged ‘Moore’s Law’

Chief Scientist of Nvidia Condemns Moore’s Law, Microprocessors – X-bit labs


William Dally, chief scientist and senior vice president of research at Nvidia, said in a column that Moore’s Law was no longer enabling the scaling of computing performance on microprocessors. In addition, Mr. Dally argued that central processing units (CPUs) in general could no longer meet the demand for high performance.

“[Moore’s Law] predicted the number of transistors on an integrated circuit would double each year (later revised to doubling every 18 months). This prediction laid the groundwork for another prediction: that doubling the number of transistors would also double the performance of CPUs every 18 months. [Moore] also projected that the amount of energy consumed by each unit of computing would decrease as the number of transistors increased. This enabled computing performance to scale up while the electrical power consumed remained constant. This power scaling, in addition to transistor scaling, is needed to scale CPU performance. But in a development that’s been largely overlooked, this power scaling has ended. And as a result, the CPU scaling predicted by Moore’s Law is now dead. CPU performance no longer doubles every 18 months,” said Bill Dally in a column published at Forbes.
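Dally’s argument rests on the interplay of transistor scaling and power scaling (the latter often called Dennard scaling). A minimal numeric sketch of that interplay, using hypothetical figures not taken from the article:

```python
# Illustrative sketch (hypothetical numbers): how constant-power scaling
# lets performance track transistor count, and what happens once
# per-transistor power stops falling.

def chip_power(transistors, power_per_transistor):
    """Total power drawn by a chip, in watts."""
    return transistors * power_per_transistor

# Hypothetical starting point: 1M transistors at 1 microwatt each = 1 W.
n0, p0 = 1_000_000, 1.0e-6

# While power scaling holds: transistor count doubles and per-transistor
# power halves, so total chip power stays flat -- performance can double
# at constant power.
n1, p1 = n0 * 2, p0 / 2
assert chip_power(n1, p1) == chip_power(n0, p0)

# After power scaling ends: transistor count still doubles, but
# per-transistor power no longer halves, so total chip power doubles
# too. This is the "power wall" Dally describes.
n2, p2 = n0 * 2, p0
print(chip_power(n2, p2) / chip_power(n0, p0))  # -> 2.0
```

This is why, as Dally says, transistor scaling alone no longer translates into CPU performance scaling: without the power side of the bargain, doubling transistors doubles the power bill.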

Perhaps CPU performance no longer doubles every year and a half, but, firstly, those chips are universal and very flexible, and, secondly, they can be manufactured in large volumes. Graphics chips, which from time to time outpace Moore’s Law, often cannot be manufactured in large volumes because of poor yields. Moreover, although GPUs can deliver more raw horsepower than CPUs, they are not as universal or flexible.

Even though developers of central processing units historically concentrated on increasing clock speeds, five years ago Advanced Micro Devices and Intel Corp. shifted to more parallel multi-core microprocessors that run at moderate clock speeds. However, Mr. Dally also claims that multi-core x86 CPUs will ultimately not solve the problem of insufficient computing performance.

“Building a parallel computer by connecting two to 12 conventional CPUs optimized for serial performance, an approach often called multi-core, will not work. This approach is analogous to trying to build an airplane by putting wings on a train. Conventional serial CPUs are simply too heavy (consume too much energy per instruction) to fly on parallel programs and to continue historic scaling of performance,” said Mr. Dally.

It is only logical that Nvidia calls central processing units obsolete, since it neither produces nor develops them. The big question is whether AMD and Intel will give up and let Nvidia capture part of the high-performance computing market, where multi-core CPUs rule today.

“Parallel computing is the only way to maintain the growth in computing performance that has transformed industries, economies, and human welfare throughout the world. The computing industry must seize this opportunity and avoid stagnation, by focusing software development and training on throughput computers – not on multi-core CPUs. Let’s enable the future of computing to fly – not rumble along on trains with wings,” concluded the chief scientist of Nvidia.


FT.com / UK – Moore’s Law hits economic limits


A leading chip manufacturing company will break ground on a 200-acre site in upstate New York this week, starting construction on a $4.2bn state-of-the-art factory.

But these days, new chip “fabs” are as rare as the April 1965 issue of Electronics Magazine.

Intel paid $10,000 for a single copy of the magazine in 2005. This was the issue which featured an article by the company’s co-founder Gordon Moore, in which he predicted transistor densities on chips would continue to double about every two years.

In accordance with what we now know as Moore’s Law, we have moved from pieces of silicon with 16 transistors on them in the 1960s to ones with 600m today. More than 1m could fit on the full stop at the end of this sentence.
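As a rough sanity check on those figures (assuming Moore’s 1965 article as the start date and roughly 2009, when this piece appeared, as the end), the doubling period implied by the jump from 16 transistors to 600m can be computed:

```python
import math

# Rough consistency check of the article's figures. Dates are assumed:
# Moore's 1965 paper to roughly 2009, when this piece appeared.
start_transistors = 16            # a 1960s chip, per the article
today_transistors = 600_000_000   # ~600m, per the article
years = 2009 - 1965

# Number of doublings needed to get from 16 to 600m transistors,
# and the average time each doubling took.
doublings = math.log2(today_transistors / start_transistors)
period_years = years / doublings
print(f"{doublings:.1f} doublings, one roughly every {period_years:.2f} years")
```

The result, a doubling roughly every 21 months, lands between Moore’s original annual estimate and the later two-year revision, so the article’s figures are consistent with the law as stated.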

Globalfoundries’ “Fab 2” in New York will achieve another level of miniaturisation when it begins volume production in 2012, making chips with circuitry just 28 billionths of a metre wide.

But its construction is a rarity because of another aspect of Moore’s Law. We have yet to reach a scientific limit to further miniaturisation, but an economic one is fast approaching, according to some experts.

“The high cost of semiconductor manufacturing equipment is making continued chipmaking advancements too expensive to use for volume production, relegating Moore’s Law to the laboratory and altering the fundamental economics of the industry,” wrote Len Jelinek, chief analyst for semiconductor manufacturing at the iSuppli research firm, last month.

Mr Jelinek predicted that Moore’s Law would no longer drive volume chip production from 2014, sparking intense debate in Silicon Valley.

His reasoning is that circuitry widths will dip to 20nm (nanometres or billionths of a metre) or below by that date. But the tools to make them would be too expensive for companies to recover their costs over the lifetime of production.

The costs and risks involved in building new fabs have already driven many makers of logic chips (processor or controller chips) towards a “fabless” chip model, where they outsource much of their production to chip “foundries” in Asia.

The 14 chipmakers who were in the game at the 90nm level have been reduced to nine at the current 45nm level. Only two of them – Intel and Samsung – have firm plans for 22nm factories.

Intel argues that only companies with about $9bn in annual revenues can afford to be in the business of building new fabs, given the costs of building and operating the factories and earning a decent 50 per cent margin. That leaves just Intel, Samsung, Toshiba, Texas Instruments and STMicroelectronics.

Andy Bryant, Intel’s chief administrative officer, says Gordon Moore’s original article was an economics paper rather than a technical treatise.

“Moore’s Law is really not about the science, it’s about the business model that the science drives,” he says. “What Gordon said was the model is driven by the cost reductions that are allowed – the science takes the technology into more and more devices and the volume will explode because the cost comes down, so it is an economic model.”

So as long as demand can be maintained by consumers and businesses wanting the latest gadgets and features, Moore’s Law, and the huge investments it entails, will continue to make economic sense for fab builders.

“You see a lot of companies, who can no longer afford to be in manufacturing, wanting to declare the end of Moore’s Law, but it’s not in the near future,” he says.

However, with fewer companies to whom they can sell factory equipment and tools, these are tough times for big suppliers such as Applied Materials.

Hans Stork, Applied’s chief technology officer, says there will have to be consolidation among chip equipment suppliers. But he sees opportunities in other segments of a chip market split three ways into logic chips, D-Ram memory and Nand flash storage.

“We still see considerable upside on the storage side, there’s a lot of innovative work being done on solid-state drives to make them mainstream – that could be a big lever for us,” he says.

Those chip companies priced out of the market for the next level of miniaturisation are expected to extend the life of current technology with fresh innovation and marketing ploys.

Intel rival Advanced Micro Devices chose to split the company, with AMD becoming a fabless chipmaker and the new Globalfoundries spun off as a fab owner to make chips for AMD and others. Globalfoundries is working with IBM and has been bankrolled by the Abu Dhabi government. Business alliances and government support are other ways that companies can compete with Intel and Samsung.

“No one company or country is really going to drive this forever, I think it’s going to be collaborative in many forms that we are yet to see,” says Doug Grose, Globalfoundries chief executive.

EETimes.com – IBM warns of ‘design rule explosion’ beyond 22-nm


PORTLAND, Ore.—An IBM researcher warned of “design rule explosion” beyond the 22-nanometer node during a paper presentation earlier this month at the International Symposium on Physical Design (ISPD).

Kevin Nowka, senior manager of VLSI Systems at the IBM Austin Research Lab, described the physical design challenges beyond the 22-nm node, emphasizing that sub-wavelength lithography has made silicon image fidelity a serious challenge.

“Simple technology abstractions that have worked for many generations like rectangular shapes, Boolean design rules, and constant parameters will not suffice to enable us to push designs to the ultimate levels of performance,” Nowka said.

Solving “design rule explosion,” according to Nowka, involves balancing area against image fidelity by considering the physical design needs at appropriate levels of abstraction, such as within cells. Nowka gave examples of how restricted design rules could reap a three-fold improvement in variability with a small area penalty.

Nowka said he envisions physical design rules beyond the 22-nm node that are more technology-aware and that make use of pre-analysis and library optimization for improved density and robustness.

IBM described a solution to “design rule explosion” at the 22-nanometer node, illustrated in an SRAM chip design.

Also at ISPD, which was held March 14 to 17 in San Francisco, Mentor Graphics Corp. proposed that hardware/software co-design be used for chips, their packages and their printed circuit (pc) boards. A Mentor executive offered an example in which a 26 percent cost savings was realized by performing such a co-optimization of all three systems simultaneously.

“Thinking outside of the chip” was the key, according to John Park, business development manager for Mentor’s System Design division. By optimizing the interconnect complexity among all three levels of a design (chip, package and pc board), Park claimed that pin counts, packaging costs and high-speed I/O can be optimized. According to Park, the chip-to-package-to-pc board design flow needs to be performed in parallel, because constraints on pc boards often place requirements on package design, while package requirements can in turn constrain chip design, both of which are ignored by current design flows.

Serge Leef, Mentor’s vice president of new ventures and general manager of the company’s System-Level Engineering division, invited the automotive industry to adopt the EDA design methodology for on-board electronics.

According to Leef, the typical automobile today has up to 60 electronic control units (ECUs), up to 10 different data networks, several megabytes of memory and miles of wiring—all of which could be better designed by EDA-like software.

“Software components are like VLSI macros and standard cells; ECUs are like physical areas on the layout onto which IC blocks are mapped; signal-to-frame mapping is like wire routing,” said Leef.

New software tools are needed, according to Leef, which can copy the EDA methodology but be optimized for solving the simultaneous conflicting constraints in automotive electronics, permitting analysis and optimization of designs in order to reduce the number of test cars that have to be prototyped.

In perhaps the boldest presentation at ISPD, keynote speaker Louis Scheffer, a former Cadence Design Systems Inc. Fellow who is now at Howard Hughes Medical Institute, proposed adapting EDA tools to model the human brain. Scheffer described the similarities and differences between the functions of VLSI circuitry and biological neural networks, pointing out that the brain is like a smart sensor network with both analog and digital behaviors that can be modeled with EDA.