Archive for the ‘IBM’ Category

Oracle Plans to Acquire Chipmakers, Industry-Specific Software – Bloomberg


Oracle Corp., building on a run of more than 65 acquisitions during the past five years, is looking to purchase semiconductor companies and makers of industry-specific software, Chief Executive Officer Larry Ellison said.

“You’re going to see us buying chip companies,” Ellison, 66, said yesterday at Oracle’s annual meeting in San Francisco. Acquiring chipmakers would further Oracle’s push into computer hardware, initiated in January with its purchase of Sun Microsystems Inc., a server manufacturer.

Ellison said he wants to follow the approach of Apple Inc. CEO Steve Jobs by owning more of the intellectual property that underpins computer chips. Apple has bought semiconductor makers to help develop devices such as the iPad and iPhone. Oracle already acquired some chip know-how from Sun, which makes servers based on its own chip design, Sparc, while also using personal-computer chips from Intel Corp. and Advanced Micro Devices Inc.

Oracle may buy a semiconductor company with technology for servers, said Doug Freedman, an analyst at Gleacher & Co. in San Francisco. Potential targets include AMD, International Business Machines Corp.’s chip division and Nvidia Corp., he said.

“You’ve got to think it’s focused on enterprise hardware, on the server,” he said. “AMD jumps off the screen.”

Drew Prairie, a spokesman for Sunnyvale, California-based AMD, said the company doesn’t comment on rumors or speculation. Hector Marinez, a spokesman for Santa Clara, California-based Nvidia, also declined to comment. Lori Bosio, a representative of Armonk, New York-based IBM, didn’t return messages left after business hours yesterday.

Software Deals?

Oracle also plans to buy more makers of software focused on certain industries, Ellison said. By zeroing in on specific areas, the company aims to stand out from rivals such as SAP AG.

“We want to play in every important industry,” he said.

Oracle, based in Redwood City, California, fell 8 cents to $27.12 yesterday in Nasdaq Stock Market trading. The shares have climbed 11 percent this year.

Oracle, the world’s second-largest software company, had $23.6 billion in cash and short-term investments at the end of its fiscal first quarter. The company also hired former Hewlett-Packard Co. CEO Mark Hurd as co-president Sept. 6, leading analysts to predict that Oracle may acquire more hardware companies. HP is the world’s biggest computer maker.

As Oracle scouts for acquisition targets, it’s also working to make its hardware business more profitable by decreasing sales of lower-priced systems. Hardware sales yielded a gross profit margin of 48 percent in the first quarter, compared with 72.5 percent for the company as a whole.

Doubling Sales

Oracle plans to double the size of its hardware business, which generated $1.7 billion last quarter, Ellison said during a conference call last week.

The company is selling customers more high-end computer systems that contain computing power, storage, network connectivity and software designed to work together. Oracle introduced two such systems this week: Exalogic and a new version of its Exadata computer.

“I would be stunned if 10 years from now, most data centers didn’t rely on these engineered systems,” Ellison said. “We’re betting that this is the future of computing.”

To contact the reporters on this story: Aaron Ricadela in San Francisco at aricadela@bloomberg.net; Ian King in San Francisco at ianking@bloomberg.net


X-bit labs – Symbiosis Between Globalfoundries and IBM Needed for Long-Term Success – Analyst


Close Collaboration with IBM Can Provide Globalfoundries Advantage in 20nm and Thinner Nodes

by Anton Shilov
05/11/2010 | 11:05 PM

The emergence of Globalfoundries could be one of the most significant events in recent semiconductor history, an analyst said. While for Advanced Micro Devices the creation of Globalfoundries resulted in cost savings only, for Advanced Technology Investment Company it is the chance to create the world’s most powerful contract maker of chips. However, in order to achieve this, Globalfoundries will need to work closely with IBM and adopt its chip design tools.

Last week it was reported that Globalfoundries could acquire semiconductor manufacturing operations from IBM, which would provide it additional clients, manufacturing capacity and, perhaps, certain intellectual property. While it is crucial for Globalfoundries to boost its advanced manufacturing capacity in the mid term to compete against Taiwan Semiconductor Manufacturing Company, its long-term success depends on collaborating with IBM more broadly and adopting IBM’s chip design tool-systems, which would give Globalfoundries a decisive competitive advantage at 20nm, 14nm, and finer nodes, according to Boris Petrov, managing partner of the Petrov Group.

“IBM will maintain its integrated circuit (IC) process technology leadership via research, but the critical business requirement is also that its Common Platform silicon alliance continues to be successful. […] To be successful Globalfoundries would have to meet cost economics that IBM has apparently failed to meet. This evolution stage represents an immense opportunity – if Globalfoundries, jointly with IBM, is able to construct and implement a new and differentiated vision,” Boris Petrov has written in a new column dedicated to Globalfoundries.

IBM Design Approach: When Perfection Means Isolation

The three primary areas of concern to an electronic system designer are power, timing, and noise. An optimal design technology addresses them in an integrated manner; such a system approach is the spirit as well as a distinctive differentiation of IBM’s chip design approach.

“The foundation of IBM’s leadership position in technology-based services is IBM’s focus on automation; in the case of ICs it is IBM’s focus on automation of system-level design processes. Before actual implementation in silicon, IC design entirely resides in software – at the system architecture, modeling, and application levels. Such software-based IC designs and their design tools are among the most complex software ever developed, and their complexity will continue to increase,” said Mr. Petrov.

IBM’s “abstraction engines” model basic concepts (shapes, timing, other) at such high levels that they are also used in IC-unrelated modeling (financial, materials, biological, other), notes Mr. Petrov. As chip designs become bigger and more complex, such an approach will become increasingly compulsory for successful “first-pass” design with billions of transistors in 28nm, 20nm, and finer lithography technology nodes. The recent woes with TSMC’s 40nm process and potential issues with 32nm have already cost chip designers millions of dollars, forced TSMC to cancel its 32nm fabrication process, and pushed virtually the whole industry to reconsider its roadmaps. Yet nothing is likely to limit demand for higher-performance computing; going forward, fabless chip designers will have to work closely with foundries, and the latter will have to concentrate on creating design tools that ensure advanced designs can be manufactured in high volumes and on time.

“The chip design factory approach to silicon integration will likely be the cornerstone of the sub-40nm semiconductor industry. In the sub-32nm chip designs, the emphasis decisively shifts away from an individual expertise and tools approach (the “presence of a super-engineer” concept) to a tightly integrated chip design factory approach,” explained the analyst.

IBM’s IC design focus continues to be on the needs of state-of-the-art technology, yet the center of the chip business has moved away from proprietary modeling and toward open systems, which are mandatory for adopting third-party intellectual property and creating third-party chips. Verification flow, making designs manufacturable without having to model down at the transistor level, and power and timing closure in 28nm and finer lithography all present immense new challenges, the analyst stresses. IBM has already expanded and integrated its tool systems with industry-standard tools for commodity solutions. Nonetheless, the overall concept has remained unchanged: IBM’s tool systems continue to be aimed at leading-edge chips, while third-party partners maintain and support the older tools.

What is important here is that only a handful of companies – AMD and IBM itself among them – require state-of-the-art fabrication processes or designs. As a result, IBM’s focus on perfection means isolation from the volume market. Consequently, despite its advantage in design systems, IBM has had limited success outside internal use.

From Extreme to Mainstream

The mainstream merchant market’s cost requirements and IBM’s profit-margin requirements are too far apart; it is therefore unlikely that IBM will put much more effort into developing its foundry business. IBM’s cost structure and focus on its own demands often make IBM the IC design partner of last resort: a client selects and pays for IBM services because it has nowhere else to turn and because IBM provides an expensive guarantee of on-time delivery of differentiated chips.

On the other hand, the chips that contain billions of transistors and are considered “extreme” today will become mainstream tomorrow, and the companies developing them will have to use chip design tools that not only support such complexity but also ensure low power consumption and on-time introduction. Complex devices – such as central processing units or graphics processing units – tend to increase their transistor counts rather rapidly, and in less than ten years’ time there will be chips containing tens of billions of transistors. Needless to say, Globalfoundries and other contract manufacturers will have to provide tools to develop chips of that complexity, and the potential acquisition, adoption, and deployment of IBM’s chip design expertise and suite of IC design tool-systems would be just what the doctor ordered for the company.

“The time for full demonstration of the power and superiority of IBM’s [chip design] approach is perhaps ahead. Perhaps, it will be the only approach possible in advanced lithography, with ICs with tens of billions transistors,” said Mr. Petrov.

If the analyst is correct and IBM sells its tools to Globalfoundries, the latter may find itself in a much stronger competitive position in the years ahead. With IBM’s suite of chip design tool-systems, Globalfoundries could become the only contract maker of semiconductors able to produce state-of-the-art chips with tens of billions of transistors, or at least move well ahead of its rivals.

Globalfoundries Should Convert IBM’s Design Tools for Volume Production

“To successfully deploy IBM’s IC design tool systems and expertise to much larger and rapidly growing segments of the consumer market, Globalfoundries would have to be able [to] take the good and differentiated and to reject the obsolete and gold-plated,” said Boris Petrov.

At present Globalfoundries is fighting for manufacturing volumes via expansion of capacity as well as high yields on chips made using leading-edge process technologies. But going forward – as chip designs get even more complex and mainstream customers become unable to design them from scratch – Globalfoundries will have to provide deep design support along with robust services, which is where IBM’s technologies of today will be required. The difficult challenge will be to drop the overly expensive technologies and convert the immensely valuable ones into fiscal gold.

“A key implication of Globalfoundries and the industry’s evolution is that chip design is becoming synonymous with an industrial robotic factory. System vendors need tightly integrated chip design and wafer foundry factories. If Globalfoundries is able to obtain, adapt, and cost effectively deploy IBM’s chip design capabilities it will have a decisive and sustainable competitive advantage in advanced technology nodes for its foundry customers,” asserts the analyst.

Globalfoundries impact and evolution could be significant, says Petrov Group – Part I


The emergence of Globalfoundries (GF) could be one of the most significant events in recent semiconductor history. While the new company faces significant near-term operational and, especially, organizational challenges, there are also several likely strategic actions that GF will undertake to enable its evolution and transformation, according to Boris Petrov, managing partner of The Petrov Group.

Among several major stages in GF’s evolution, two are quite predictable. The first stage could be near term: the acquisition of IBM’s IC fabrication facilities. There is another and related stage of GF’s evolution that is possibly more significant but also more difficult to implement: GF’s potential acquisition and adoption of IBM’s IC design expertise.

Evolution Stage One: IBM and Its IC Fabrication Business

Why would GF take on IBM’s semiconductor manufacturing, which struggles to turn a profit even in good years? Why would this be a good fit for GF? Is this what GF needs most among its operational and strategy challenges that are threatening its very launch? What will it accomplish by taking over IBM’s semiconductor manufacturing? To clarify these questions one should first analyze the changing role of the IC fabrication business at IBM.

IBM’s core businesses are systems and engineering services; IBM is a technology-based solutions company that functions at multiple economic and geopolitical levels worldwide. IBM’s primary challenge is how to profitably double in size and grow to a US$200 billion company within a decade or so. IC fabrication is unlikely to play any role in the solution to this complex corporate challenge.

IBM’s “DNA” and culture strive to automate business processes of any kind; this is a core competency of IBM that allows IBM to embrace and penetrate all major businesses. IBM’s research has been a fountain of basic invention, often of entire businesses and industries; its US$3 billion Research division is a growth foundation for the entire company. It has remained unsurpassed by any commercial or government research institution; it is the benchmark that defies the common belief that creativity and innovation cannot excel within a large corporate bureaucracy.

IBM is also a military-like business machine, always at war, and, despite what other companies may publicly say, IBM is typically 5-7 years ahead of the industry in strategy formulation. For all its positives, IBM also has considerable challenges that plague the company. For example, it has lost market share in the IT business for many years.

IBM’s semiconductor business, including IC fabrication, was historically a key strategic element of its entire server system and software businesses. In the semiconductor industry, system companies have been IBM’s customer engagement targets because IBM can enable them technologically along the entire 360-degree silicon integration continuum (from concept design, to silicon, to board, to end-product). Here IBM stands alone with an array of advanced technologies in each segment of the silicon integration continuum – a one-stop technological weaponry-shopping place.

However, there has been a shift in the electronics industry – from Computing to the Consumer IC sector, where low cost is the primary requirement. If we broadly define the Consumer sector to include cell phones, notebooks, netbooks and game consoles, such a Consumer-like sector will soon account for 60% of the total IC industry. But low-cost IC fabrication is not among IBM’s core strengths.

The Petrov Group projects that IBM’s US$3 billion Research division will continue to drive IBM’s evolution as well as the evolution of the entire IC industry – and much beyond silicon. IBM’s materials science and microelectronics research will not only be maintained but also accelerated. However, to accomplish its research and corporate growth goals IBM no longer needs IC revenues, which have hovered for decades around the US$2.5 billion level.

Internal IC fabrication stops being a requirement if IBM can ensure access to fabrication of its custom-designed microprocessors. If GF can provide the IC fabrication that IBM needs, then IBM no longer needs its internal semiconductor manufacturing capability.

If this is indeed a win-win evolution stage, what would be the benefits to GF? There are many benefits, including acquisition of IBM’s advanced SOI (Silicon on Insulator) technology and customers, and acquisition of SiGe and RF CMOS productized processes (IBM’s device models are considered the best in the industry). IBM is the source of all of GF’s advanced processing technology; it is IBM’s technology that makes the Common Platform such an increasingly invaluable brand in IC manufacturing. By manufacturing advanced microprocessors for AMD and IBM, GF would effectively preempt fabrication in that challenging segment; penetration into processor manufacturing has been one of Taiwan Semiconductor Manufacturing Company’s (TSMC’s) corporate objectives for many years. GF would also broaden its customer base and further increase its manufacturing scale. The economics of IC fabrication in advanced nodes is exceedingly harsh – being first to market and having early volume production are mandatory prerequisites for profitability.

In Summary

It is highly probable that IBM will consider a business alliance of some sort with GF. In GF, IBM has a potential partner with an infrastructure and management style that has elements inherited from Big Blue itself. With its fabrication facilities worldwide, and a foundation in complex processor design and manufacturing, GF should be able to incorporate state-of-the-art support for IBM, drive business economies, and ensure growth, noted Petrov. GF could be an ideal outlet for IBM’s IC fabrication business, enabling IBM to sell a business that has not met financial performance requirements for years while still retaining the depth, scope, and resources needed not only to provide manufacturing security for IBM but also to further ensure the success of the foundry business.

Evolution Stage Two: Acquisition of IBM’s IC Design Tool Systems and Expertise

Automation of system-level processes of any kind is the very cornerstone of IBM’s technological power; IBM’s chip design automation tools are part of this core corporate capability. In 2005 the Petrov Group published a report titled “Inside IBM Research: Focusing on Design Automation and Productivity.” The report’s insights are still relevant today. IBM’s corporate DNA is to build tools for automation of processes. The results of this focus are self-evident in IBM Research itself – an organization that consistently outperforms its global counterparts. The Petrov Group’s analysis of IBM’s innovation machine confirmed six unique capabilities that should be of high interest to any corporation that aspires to research-productivity excellence.

The three primary areas of concern to an electronic system designer (Petrov Group calls them “system survival” requirements) are power, timing, and noise. An optimal design technology would address them in an integrated manner; such a system approach is the essence as well as a unique differentiation of IBM’s chip design approach. A useful metaphor is that each IBM tool is either a leaf, or a branch, or the trunk of IBM’s design automation tree, that is, of IBM’s EDA tool ecosystem.

Such an integrated system approach is the essence of IBM’s renowned first-pass design success. IBM’s “abstraction engines,” or “the tree trunks” in the Petrov Group description, have a life cycle of 30+ years; they model basic concepts (shapes, timing, other) at such high levels that they are also used in IC-unrelated modeling (financial, materials, biological, other). As chip designs become larger and more complex, such an approach will be increasingly mandatory for successful “first-pass” design with billions of transistors in 28nm, 20nm, and finer lithography technology nodes.

IBM’s IC design focus continues to be on the needs of state-of-the-art technology. The focus has moved away from proprietary modeling and toward open systems, which are mandatory for adopting third-party intellectual property (IP). Verification flow, making designs manufacturable without having to model down at the transistor level, and power and timing closure in 28nm and finer lithography all present immense new challenges. IBM’s tool systems remain more of a “bow wave,” aimed at modeling and designing at the bleeding edge while relying on others to maintain and support the older tools. IBM has already augmented and integrated its tool systems with industry-standard tools for commodity solutions.

Despite its advantage in design systems, IBM has had limited success outside internal use. The external mainstream merchant market’s cost and IBM’s profitability margin requirements are too far apart. IBM’s cost structure and focus on internal requirements often make IBM the IC design partner of last resort; a customer selects and pays for IBM services only because it has nowhere else to turn. IBM provides an expensive guarantee of on-time delivery of differentiated ICs; CEOs can sleep peacefully knowing that their products will not miss holiday introduction dates.

To successfully deploy IBM’s IC design tool systems and expertise to the much larger and rapidly growing segments of the consumer market, GF would have to be able to take the good and differentiated and to reject the obsolete and gold-plated. This would require that GF enter this evolution stage with a clear strategy, very talented people, and continuing close cooperation with IBM. The difficult challenge will be to “mine for the nuggets” and convert the immensely valuable technology into fiscal gold.

Summary

IBM will maintain its IC process technology leadership via research, but the critical business requirement is also that its Common Platform silicon alliance continues to be successful. The IC industry has moved away from the Computing sector and toward the Consumer sector. To be successful GF would have to meet cost economics that IBM has apparently failed to meet. This evolution stage represents an immense opportunity – if GF, jointly with IBM, is able to construct and implement a new and differentiated vision.

A key implication of GF’s and the industry’s evolution is that chip design is becoming synonymous with an industrial robotic factory. System vendors need tightly integrated chip design and wafer foundry factories. If GF is able to obtain, adapt, and cost effectively deploy IBM’s chip design capabilities it will have a decisive and sustainable competitive advantage in advanced technology nodes for its foundry customers, asserts Boris Petrov.

EETimes.com – IBM warns of ‘design rule explosion’ beyond 22-nm


PORTLAND, Ore.—An IBM researcher warned of “design rule explosion” beyond the 22-nanometer node during a paper presentation earlier this month at the International Symposium on Physical Design (ISPD). Kevin Nowka, senior manager of VLSI Systems at the IBM Austin Research Lab, described the physical design challenges beyond the 22-nm node, emphasizing that sub-wavelength lithography has made silicon image fidelity a serious challenge.

“Simple technology abstractions that have worked for many generations like rectangular shapes, Boolean design rules, and constant parameters will not suffice to enable us to push designs to the ultimate levels of performance,” Nowka said.

Solving “design rule explosion,” according to Nowka, involves balancing area against image fidelity by considering the physical design needs at appropriate levels of abstraction, such as within cells. Nowka gave examples of how restricted design rules could reap a three-fold improvement in variability with a small area penalty.
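As a toy illustration of the difference between a conventional Boolean rule and a restricted rule (hypothetical numbers and checks, not IBM’s actual rule deck), consider a one-dimensional row of gates. A Boolean rule accepts any layout with enough spacing; a restricted rule accepts only gates on a fixed pitch, which gives lithography a single, well-characterized pattern to print:

    # Toy contrast between a Boolean design rule (minimum spacing) and a
    # restricted design rule (gates only on a fixed pitch), in the spirit of
    # the RDRs Nowka describes. All numbers are hypothetical.
    MIN_SPACING = 30       # nm; Boolean rule: any spacing >= 30 nm is legal
    ALLOWED_PITCH = 120    # nm; restricted rule: gates sit on a 120 nm grid

    def boolean_rule_ok(gate_x):
        """Classic rule: every neighboring pair just needs enough spacing."""
        xs = sorted(gate_x)
        return all(b - a >= MIN_SPACING for a, b in zip(xs, xs[1:]))

    def restricted_rule_ok(gate_x):
        """Restricted rule: gates snap to one pitch, one pattern to print."""
        return all(x % ALLOWED_PITCH == 0 for x in gate_x)

    layout = [0, 120, 240, 395]        # last gate off-grid but legally spaced
    print(boolean_rule_ok(layout))     # True  -- passes the Boolean rule
    print(restricted_rule_ok(layout))  # False -- off-pitch gate is rejected

The restricted check rejects layouts the Boolean check happily accepts; that lost freedom is the “small area penalty” paid for printing only patterns whose variability is well understood.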

Nowka envisions physical design rules beyond the 22-nm node that are more technology-aware and that make use of pre-analysis and library optimization for improved density and robustness.

IBM described a solution to “design rule explosion” at the 22 nm node, illustrated in an SRAM chip design.

Also at ISPD, which was held March 14 to 17 in San Francisco, Mentor Graphics Corp. proposed that hardware/software co-design be used for chips, their packages and their printed circuit (pc) boards. A Mentor executive offered an example in which a 26 percent cost savings was realized by performing such a co-optimization of all three systems simultaneously.

“Thinking outside of the chip” was the key, according to John Park, business development manager for Mentor’s System Design division. By optimizing the interconnect complexity among all three levels of a design – chip, package and pc board – Park claimed that pin counts, packaging costs and high-speed I/O can be optimized. According to Park, the chip-to-package-to-pc-board design flow needs to be performed in parallel because constraints on pc boards often place requirements on package design, while package requirements can in turn constrain chip design, both of which are ignored by current design flows.

Serge Leef, Mentor’s vice president of new ventures and general manager of the company’s System-Level Engineering division, invited the automotive industry to adopt the EDA design methodology for on-board electronics.

According to Leef, the typical automobile today has up to 60 electronic control units (ECUs), up to 10 different data networks, several megabytes of memory and miles of wiring—all of which could be better designed by EDA-like software.

“Software components are like VLSI macros and standard cells; ECUs are like physical areas on the layout onto which IC blocks are mapped; signal-to-frame mapping is like wire routing,” said Leef.

According to Leef, new software tools are needed that copy the EDA methodology but are optimized for solving the simultaneous, conflicting constraints of automotive electronics, permitting analysis and optimization of designs so as to reduce the number of test cars that must be prototyped.
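As a rough sketch of what Leef’s analogy implies (all component names, traffic numbers, and the greedy heuristic here are hypothetical, not Mentor’s tools): treat ECUs as placement regions and software components as cells, and assign components so that heavily communicating pairs land on the same ECU, much as a placer clusters connected standard cells to shorten wires:

    # Toy "placement" of software components onto ECUs, echoing Leef's analogy:
    # components ~ standard cells, ECUs ~ layout regions, signals ~ wires.
    # Real automotive tools would also handle timing, safety, and bus protocols.

    # hypothetical signal traffic (messages/sec) between component pairs
    traffic = {
        ("engine_ctrl", "transmission"): 500,
        ("engine_ctrl", "dashboard"): 50,
        ("transmission", "dashboard"): 20,
        ("door_lock", "dashboard"): 5,
    }
    components = ["engine_ctrl", "transmission", "dashboard", "door_lock"]
    NUM_ECUS, ECU_CAPACITY = 2, 2   # capacity plays the role of region area

    def crossing_traffic(assignment):
        """Traffic crossing ECU boundaries -- the 'wirelength' to minimize."""
        return sum(t for (a, b), t in traffic.items()
                   if a in assignment and b in assignment
                   and assignment[a] != assignment[b])

    def greedy_place():
        """Place the heaviest communicators first, each on its cheapest ECU."""
        load = {e: 0 for e in range(NUM_ECUS)}
        assignment = {}
        order = sorted(components, key=lambda c: -sum(
            t for pair, t in traffic.items() if c in pair))
        for comp in order:
            candidates = [e for e in range(NUM_ECUS) if load[e] < ECU_CAPACITY]
            best = min(candidates,
                       key=lambda e: crossing_traffic({**assignment, comp: e}))
            assignment[comp] = best
            load[best] += 1
        return assignment

    placement = greedy_place()
    print(placement, "crossing traffic:", crossing_traffic(placement))

On this toy input the two heaviest communicators share an ECU and only 70 messages/sec cross the boundary, the same flavor of result a placer delivers when it minimizes cut nets.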

In perhaps the boldest presentation at ISPD, keynote speaker Louis Scheffer, a former Cadence Design Systems Inc. Fellow who is now at Howard Hughes Medical Institute, proposed adapting EDA tools to model the human brain. Scheffer described the similarities and differences between the functions of VLSI circuitry and biological neural networks, pointing out that the brain is like a smart sensor network with both analog and digital behaviors that can be modeled with EDA.

Gate First, or Gate Last: Technologists Debate High-k – 2010-03-10 15:41:15 | Semiconductor International


David Lammers, News Editor — Semiconductor International, 3/10/2010

As high-k rolls out beyond Intel Corp. [1] to both mobile and high-performance applications, the industry now faces a divided landscape. [2] Intel and Taiwan Semiconductor Manufacturing Co. Ltd. (TSMC, Hsinchu, Taiwan) — the largest MPU provider and pure-play foundry, respectively — are backing the replacement metal gate (RMG) or gate-last approach. Their competitors — Advanced Micro Devices Inc. (AMD, Sunnyvale, Calif.), GlobalFoundries Inc. (Sunnyvale, Calif.), IBM Corp. (Armonk, N.Y.), and other members of the Fishkill Alliance — are using the gate-first approach, at least for the 28 nm node. [3] United Microelectronics Corp. (UMC, Hsinchu, Taiwan) said it will use a hybrid approach, employing a gate-last method for the more difficult PMOS transistor. [4]

No matter what deposition flow is used, high-k offers benefits, which is why the stakes are so high for getting it right. Not only does high-k sharply reduce gate leakage, the gate capacitance also scales with a thinner equivalent oxide thickness (EOT). Though mobility may not be quite as high as in a chip using a native oxide, cutting the EOT with high-k enables a shorter gate length and improves the drive current.
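As a back-of-the-envelope illustration of why the stack works (the textbook relation with representative numbers, not any foundry’s actual film thicknesses): the equivalent oxide thickness scales each layer’s physical thickness by the ratio of SiO2’s dielectric constant (3.9) to the layer’s own, and gate capacitance tracks the inverse of EOT:

    \mathrm{EOT} = t_{\mathrm{IL}} + \frac{3.9}{k_{\text{high-}k}}\, t_{\text{high-}k},
    \qquad C_{\mathrm{gate}} \propto \frac{1}{\mathrm{EOT}}

Plugging in, say, 2.0 nm of HfO2 (k ≈ 20) over a 0.5 nm SiO2 interfacial layer gives EOT ≈ 0.5 + 2.0 × 3.9/20 ≈ 0.9 nm: the gate capacitance of a sub-1 nm oxide from a stack whose 2.5 nm physical thickness keeps tunneling leakage low.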

However, pressure is building on the gate-first approach. Some high-k experts argue that the high-temperature steps following the high-k dielectric and metal gate deposition cause the Vt to shift, affecting PMOS performance in particular. Others, including John Pellerin, director of technology at GlobalFoundries, argue that the gate-first approach requires less-restrictive layout design rules, provides for a smaller die size, and eases IP porting, while meeting the performance needs of customers at the 32/28 nm node.

“We are unequivocally committed” to the gate-first approach at 28 nm, Pellerin said. “The die size and scaling potential are very critical factors. We get a lot of feedback that people are seeking ease of migration” as they move to a high-k solution.

S.Y. Chiang, senior vice president at TSMC, said the semiconductor industry went through a similar discovery process two decades ago, when early CMOS developers tried to use an N+ poly gate for both the N-channel and P-channel devices.

“When the industry began to do PMOS, companies found an N+ poly gate doesn’t work well,” Chiang said. “It was difficult to lower the Vt, so some people tried to add a counter dopant into the active region of the silicon channel to try to match the Vt. That caused a lot of problems, and made gate control and short channel effects (SCE) much worse.”

With that history in mind, Chiang said the gate-first approach to high-k ran into similar Vt control problems. Efforts to use capping layers improved gate-first performance, but Chiang said a gate-first cap-layer process “gets very, very complicated and difficult to do.”

Asked about the restrictive design rules (RDRs) required for the gate-last method, Chiang said TSMC has been working with the layout teams at its largest customers to adjust to the gate-last high-k flow.

“With the gate-last technology, we do have some restrictions. There is difficulty in planarizing it. However, if the layout team is willing to change to a new layout style, then they can get a layout density that is as good as with the gate-first approach. And it is not that difficult,” he said, adding that “everybody — the process people as well as the layout people — need to adjust the way they do things in order to make the products competitive.”

Chiang said TSMC’s early 28 nm high-k customers are all large companies that are well-equipped to handle layout changes. “We have had face-to-face meetings, and our high-k strategy has been very well accepted. Later, we will offer more help to the layout people at our smaller customers.”

The fact that TSMC was willing to switch to a gate-last approach “says something about the performance advantage of the gate-last approach,” said Dean Freeman, semiconductor manufacturing analyst at Gartner Inc. “Gate first gets you a little bit tighter layout, but TSMC must have seen something they didn’t like when they did their shuttle runs” comparing the gate-first and gate-last wafers.

Thomas Hoffmann, an IMEC (Leuven, Belgium) high-k research manager, raised some of the performance challenges with the gate-first approach at the 2009 International Electron Devices Meeting (IEDM). [5] In a follow-up interview, Hoffmann said the gate-first deposition method makes some sense for low-power devices that don’t require the ultimate in performance.

“For low-power companies, such as Renesas and others, gate-first is possibly the best trade-off. They don’t require all the low Vt’s and high performance, which is harder to do with gate first. But as they proceed beyond 28 nm, companies will need the extra performance advantage that gate last will deliver.”

Fig. 1. Cap layers can improve the Vt of gate-first gate stacks. (Source: IMEC)

However, for performance-oriented companies that require a lower Vt, Hoffmann said “gate last is a must for high-performance applications. IBM obviously must provide high-performance solutions, and I think they need to bring in additional tricks to achieve low Vt’s with the gate-first approach. Those tricks have a cost in terms of process complexity or yield. At the end of the day, offsets are possible, but perhaps that is why the other companies in the Fishkill Alliance may be getting nervous.”

Although gate last requires careful control of the etching and CMP steps, gate first also has its process control challenges, Hoffmann said. One of the key steps in gate first is deposition of the capping layer either below or on top of the high-k to adjust the Vt. For example, a thin layer — <1 nm — of La2O3 is deposited on the NMOS devices to achieve the appropriate Vt. The lanthanum layer must be removed from the PMOS devices, which requires patterning with resists, careful etching to avoid damage, and other “highly selective” process steps, Hoffmann said. An Al2O3 capping layer on the PMOS devices is employed to control the Vt (Fig. 1).

“You want the benefit of lanthanum for NMOS, but then you have to remove it for the PMOS,” Hoffmann said. “It is not simple at all to remove resist over a material that is very thin to begin with, while avoiding damage to the capping layer. It requires proper control and selectivity.”

Glen Wilk, business unit manager for ALD and epitaxial products at ASM International NV (Almere, Netherlands), said technologists have been debating the performance, complexity and cost issues between gate-first and gate-last deposition for many years. “What I do see coming is that as the technology scales, it is playing more to the strengths of the gate-last approach. There is better ability to set and control the work functions, a better choice of electrodes due to the lower thermal budget. You get the Vt where you want it and get it to stay there.”

As the industry scales, users of the gate-first approach will find it “difficult to control the PMOS characteristics,” Wilk said. Getting the optimum PMOS work function will “get tougher as devices scale, as the thermal budget gets tighter. It will be tougher to make [gate first] work. There will be an industry-wide focus on gate last.”

The benefits of the gate-last approach, in terms of extra strain and overall work function control, make gate last the best option for both high-performance and low-power applications, Wilk said. “Memory companies may have a little more room to play with. They may be able to accept gate first for a while. It really is becoming important not only for the high-performance guys, but also for low standby power, to look at gate last.”

Taking a dual approach is not the way to go, Wilk said. “Foundries want to have one solution, not many solutions,” he argued. “If they use gate last for performance, they will find a way to make gate last work for low standby power. They need one way to manage. If we are going to get to it, let’s get to it. Let’s not keep trying to force an approach that is going against the sweet spot of gate last.”

Hans Stork, CTO at Applied Materials Inc. (Santa Clara, Calif.), said the gate-first approach requires a carefully controlled etch of the capping layers used to control the Vt, while the gate-last method requires expertise at metal deposition and CMP. “Extendibility wise, gate last appears to have the better long-term outlook.”

Stork notes that foundries are paying close attention to Intel’s 32 nm system-on-chip (SoC) process, which uses a 0.95 nm EOT high-k layer for the high-performance and low-power transistors. “Intel’s SoC process extends the gate-last, high-performance process to low-leakage applications and low-voltage operations,” he said. “It is in the sweet spot for cell phone chips.” Customers are watching how the gate-first vs. gate-last alternatives deliver on work function control, cost/productivity, and yields. Large fabless companies such as Qualcomm Inc. (San Diego) that compete in the cell phone space, Stork said, will demand that their foundry suppliers “match Intel’s performance so they can remain competitive.”

At IEDM, Qualcomm technology executives said they are “very comfortable” with the gate-last technology direction endorsed by TSMC last July. In January, Qualcomm said it also will use GlobalFoundries at the 28 nm node. That will set up a head-to-head competition between the Qualcomm cell phone applications processors made at TSMC with a gate-last high-k process, and the gate-first approach used at GlobalFoundries. The 40 nm Qualcomm-designed cell phone CPUs are high-performance chips, running at 1 GHz in the recently introduced Google smartphone, for example, while requiring mobile-appropriate levels of power consumption.

Mark Bohr, director of process architecture at Intel’s Hillsboro, Ore.-based technology and manufacturing group, said the Atom-based products that use the 32 nm SoC process [6] may be about a year away, though the exact schedule depends on the product groups (Fig. 2).

Fig. 2. Intel’s 32 nm NMOS (left) and PMOS transistors have a gate pitch of 112.5 nm and use a second-generation high-k/metal gate technology.

Asked if the gate-last process results in a larger die size due to more restrictive design rules (RDRs), Bohr said Intel’s RDRs at the 45 nm node have nothing to do with the replacement gate technology, and everything to do with Intel’s desire to stick with non-immersion lithography. “The gridded design was not to enable our high-k/metal gate,” Bohr said, but to support dry lithography.

Zero interface layer

Researchers — including Intel’s Bohr — seem to agree that HfO2 will continue to be used as the base dielectric material for the medium term. Rather than switch to new materials with relatively modest increases in the dielectric constant, the industry is better off improving hafnium-based dielectrics, though some companies are attempting to tweak HfO2 with proprietary additives.

Much attention is being paid to reducing the oxide interfacial layer, which, for example, can account for ~5 Å of a ~10 Å EOT gate insulation layer. “Most thinking in the industry now is how to optimize hafnium, rather than start another five-year quest for a new material,” said Paul Kirsch, manager of Sematech’s high-k program. “From a time and effort perspective, let’s improve the effective k, eliminating the SiO2 interface.” [7]
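The payoff follows directly from the textbook EOT relation (a generic illustration, not Sematech’s specific stack): because the interfacial layer is SiO2-like (k ≈ 3.9), it enters the EOT at its full physical thickness, so scavenging it buys more than thinning the high-k film could:

    \mathrm{EOT} = \underbrace{t_{\mathrm{IL}}}_{\sim 5\ \text{Å}}
    + \frac{3.9}{k}\, t_{\text{high-}k} \approx 10\ \text{Å}
    \quad\xrightarrow{\ \text{ZIL}\ }\quad \mathrm{EOT} \approx 5\ \text{Å}

Removing the ~5 Å interface layer thus roughly halves the EOT while leaving the high-k film itself untouched.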

At IEDM in December, several papers on zero interface layer (ZIL) technology were presented, including a presentation from the IBM-led Fishkill Alliance, which has used the gate-first approach for the 32/28 nm generation. [8] An IMEC ZIL paper at IEDM also used a gate-first approach (Fig. 3).

Fig. 3. An IMEC high-k/metal gate device with no interfacial layer. Indicated thicknesses are in nm. (Source: IMEC)

T.P. Ma, a professor at Yale University, said ZIL is attractive, but most of the scavenging agents require relatively high-temperature steps to remove the interface layer. That lends itself to the gate-first approach, which supports higher temperatures for the gate stack.

Ma said his understanding is that ZIL “requires a high-temperature chemical reaction” to successfully scavenge the SiO2 interface layer. The gate-first approach, for all of its Vt challenges, is designed to withstand high temperatures, Ma said, while the gate-last approach “tries to avoid” high temperature exposure. The IBM and Sematech ZIL results have been “a pleasant surprise” in that the 5 Å EOT layers have shown acceptable leakage characteristics, Ma said.

The early Sematech ZIL work did involve a gate-first deposition method, according to Raj Jammy, vice president of materials and emerging technologies at Sematech. “The ZIL approach does not necessarily depend upon high temperatures, but depends on the oxygen scavenging species,” he said, adding that different species have different thermal processing needs in order to be effective (Fig. 4).

Fig. 4. An interface layer of 5 Å can account for one-half of the EOT. Sematech created a zero-interface-layer device in 2009. (Source: J. Huang et al., IEEE VLSI Symposium 2009, Sematech)

An IMEC researcher said, “Our approach to reaching a zero interface layer does indeed require a thermal budget. However, there are other ways of growing an interface-free gate-stack. So this in itself is not a reason for selecting one before the other.” He added that it should be possible to “combine the low EOT of the ZIL gate-first gate stack with an improved effective work function using replacement gate.”

There “is more to do” to improve on the dielectric material and to reduce the capacitance of the metal electrodes, Bohr said. Asked about the merits of completely removing the interface layer, he said, “My impression is that is not very useful,” partly because ZIL devices do not exhibit the best channel mobility. “If you create the right kind of interface layer, it doesn’t trap a lot of charge.”

Gartner’s Freeman said high-k/metal gate technology will be a critical differentiator between TSMC and GlobalFoundries, starting at the 28 nm node. One possibility is that IBM and GlobalFoundries will do a “very quick about-face” at the 22 nm node, adopting a gate-last technology. Another possibility is that the gate-first approach may prove more capable of removing the interface layer. “Interface control will be absolutely critical at 16 nm,” Freeman said.

References

1. J. Markoff, “Intel Says Chips Will Run Faster, Using Less Power,” New York Times, Jan. 27, 2007, p. 1.
2. D. Lammers, “Pressure Builds on Gate-First High-k,” Semiconductor.net, Dec. 9, 2009.
3. D. Lammers, “GlobalFoundries Adds Qualcomm, Supports Gate-First Technology at 28 nm Generation,” Semiconductor.net, Jan. 7, 2010.
4. G.H. Ma, et al., “A Novel ‘Hybrid’ High-k/Metal Gate Process for 28 nm High Performance CMOSFETs,” 2009 IEDM, p. 655.
5. T. Hoffmann, “High-k/Metal Gates: Industry Status and Future Direction,” 2009 IEDM Short Course.
6. C.H. Jan et al., “A 32 nm SoC Platform Technology With 2nd Generation High-k/Metal Gate Transistors,” 2009 IEDM, p. 647.
7. J. Huang et al., “Gate First High-k/Metal Gate Stacks With Zero SiOx Interface Achieving EOT=0.59 nm for 16nm Application,” 2009 Symposium on VLSI Technology.
8. T. Ando, et al., “Understanding Mobility Mechanisms in Extremely Scaled HfO2 (EOT 0.42 nm) Using Remote Interfacial Layer Scavenging Technique and Vt-tuning Dipoles With Gate-First Process,” 2009 IEDM, p. 423.

IBM Nanophotonic Switch Promises Faster Energy-Efficient Computing – 2010-03-04 13:41:03 | Semiconductor International


Alexander E. Braun, Senior Editor — Semiconductor International, 3/4/2010

IBM scientists in Yorktown Heights, N.Y., have taken a significant step toward replacing electrical signals that communicate via copper wires between computer chips with silicon circuits that communicate using pulses of light.

The device, called a nanophotonic avalanche photodetector (NAP), is the fastest and smallest switch for directing traffic in on-chip optical communications, ensuring that optical messages are efficiently routed. It could enable breakthroughs in energy-efficient computing with significant implications for the future of electronics.

Nanophotonic germanium photodetector generating an electron and hole avalanche. (Source: IBM)

The NAP exploits the “avalanche effect” in germanium. Analogous to a snow avalanche on a steep mountain slope, an incoming light pulse initially frees just a few charge carriers, which in turn free others until the original signal is amplified many times. Conventional avalanche photodetectors are unable to detect fast optical signals because the avalanche builds slowly.
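A textbook way to see why a nanoscale multiplication region helps (the standard McIntyre model, offered as background rather than IBM’s published analysis): the excess noise that multiplication adds grows with the gain M as

    F(M) = k_{\mathrm{eff}}\, M + (1 - k_{\mathrm{eff}})\left(2 - \frac{1}{M}\right)

where k_eff is the ratio of hole to electron ionization coefficients. Keeping multiplication effectively single-carrier (k_eff → 0) bounds the noise factor below 2, and confining the avalanche to a region tens of nanometers thick shortens the buildup time that limits conventional detectors’ speed.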

“This invention brings the vision of on-chip optical interconnections much closer to reality,” said T.C. Chen, a vice president at IBM Research. “With optical communications embedded into the processor chips, the prospect of building power-efficient computer systems with performance at the exaflop level might not be very distant.”

IBM’s avalanche photodetector can receive optical information signals at 40 Gb/sec and simultaneously multiply them tenfold. Since the NAP operates with just a 1.5 V voltage supply — 20× smaller than previous demonstrations — many of these tiny communication devices could potentially be powered by just a regular AA-size battery, while traditional avalanche photodetectors require 20-30 V power supplies.

“This dramatic improvement in performance is the result of manipulating the optical and electrical properties at the scale of just a few tens of atoms to achieve performance well beyond accepted boundaries,” said Solomon Assefa, one of the principal researchers. “These tiny devices can detect very weak pulses of light and amplify them with unprecedented bandwidth and minimal addition of unwanted noise.”

In the NAP, avalanche multiplication takes place within a few tens of nanometers and happens very fast. The small size also means that multiplication noise is suppressed by 50-70% with respect to conventional avalanche photodetectors. The device is made of silicon and germanium, materials already widely used in production of microprocessor chips. Moreover, it is fabricated using standard semiconductor manufacturing processes; thus, thousands of these devices could be built side-by-side with silicon transistors for high-bandwidth on-chip optical communications.

IBM has worked on this for the past few years, according to Assefa. “We have developed what you might call a nanophotonic tool kit,” he said. “We have made most of the devices that we need, such as modulators to modulate the light, waveguides, switches and all the other components to build on-chip interconnects. The NAP is the last piece of the puzzle, which we needed to have one chip send encoded pulses of light, and the next chip receive it and distribute it. Now the next step is to continue the development and production of these nanophotonic devices along thin-film transistors. If we put all of this together, we believe that within 10-15 years we will be able to integrate onto the chips with the microprocessors, this photonic interconnect system for networks.”

Assefa indicated that microprocessors are becoming extremely powerful, with several increasingly advanced cores being packed in. “This has great potential; however, the next step to make these computers even more powerful is to enable them to communicate. For this, high efficiency and low power usage are important to send as much data as possible from one chip to another.”

With copper, it would be very difficult to send this tremendous amount of data because considerably more power would be required and the excessive heat created would have to be dissipated. “If copper is replaced by light pulses, considerable power is saved and one can send a hundredfold more data than is possible with copper wires,” Assefa said. “At present, we are talking about communication between chips. In the future, similar devices could be used for communication between the different cores. In principle, you can think about a cell phone-sized computer as powerful as today’s supercomputers. It is in these kinds of applications that silicon nanophotonics will have a considerable effect.”

Assefa said he views this as more than just a scientific breakthrough — it is a nanophotonics paradigm shift. “There are many nanoscale effects that we could harness for our benefit,” he said. “In the NAP, for example, we can confine extremely high electric fields over a few tens of atoms. This is a new realm of physics. There is much that can be done to improve device performance. The 1.5 V low-power requirement is a direct consequence of this nanoengineering. If we can continue to leverage these nanoscale effects, many more advances will be made. There are no additional costs, no new processes that must be introduced. There are only advantages.”

Design Rules: From Restriction to Prescription (IBM) @ IMEC 2007 TAD conference


  • DFM: for each solution proposed, we need to understand designability and manufacturability
  • Improve design efficiency
  • Improve model-to-hardware correlation
  • Improve manufacturability
  • What is being designed?
  • How and when?
  • What happened to design rules?
  • How do we get to a design methodology that works efficiently at 22nm?