Archive for February, 2010

AMD, Nvidia Lose Share As Intel Leads The Graphics Market | Sramana Mitra on Strategy

According to a recent Jon Peddie Research report, Intel (NASDAQ:INTC) increased its graphics chip market share to 55.2%, driven by strong Atom sales, while Nvidia’s (NASDAQ:NVDA) share declined to 24.3% and AMD’s (NYSE:AMD) share slipped slightly to 19.9%. Let’s take a closer look.

The quarterly research report included the first shipments of a new category, CPU-integrated graphics (CIG), and predicts a rapid decline in shipments of traditional integrated graphics processors (IGPs). AMD gained share in the notebook IGP segment but lost share in both the desktop and notebook GPU segments due to constraints in 40nm supply. Nvidia gained some share in desktop discretes while slipping in desktop and notebook IGPs. This downward trend will likely continue as Nvidia’s focus shifts to tablets and cell phones with its new Tegra chip.

In the higher-end advanced graphics card market, where Nvidia dominates, Intel had earlier planned to compete with an advanced graphics card, Larrabee. However, Intel recently scrapped its plans for a commercial launch and said that it would instead use it as a software development platform for graphics and high-performance computing.

Nvidia last week reported fiscal year 2010 revenue of $3.3 billion, down about 3%, and a net loss of $68 million. As the market rebounded, fourth quarter revenue grew 9% q-o-q and doubled from a year earlier, and the company swung to a profit of $131.1 million from a loss of $147.7 million a year ago. It ended the quarter with $1.73 billion in cash.

Gross margin in the quarter improved to 44.7% on strong demand, but the company was unable to meet that demand due to supply constraints and probably missed out on about $100 million in revenue. Last quarter’s coverage is available here, and recent coverage of Intel is available here.

Nvidia’s GPU revenue was up 22% q-o-q; within that, desktop GPU revenue was up 19% and notebook GPU revenue up 27%, while Quadro graphics revenue was up 25%.

For the first quarter, Nvidia expects revenue to be flat with the fourth quarter, as it expects to face supply constraints for at least half the year. Gross margin is expected to be 44 to 45%.

Nvidia has a lot of new products in the pipeline apart from the new Tegra chip. It also introduced Optimus switchable graphics technology, which provides the performance of discrete graphics while still delivering long battery life. Asus announced a laptop using this new technology, and Acer last week announced a netbook based on ION. Nvidia expects these two technologies to make up for its Media and Communications Processor (MCP) business, which it is transitioning out of.

AMD, on the other hand, reported fourth quarter graphics revenue of $427 million, up 40% q-o-q. Total revenue in the fourth quarter was up 42% to $1.646 billion, and net income was $1.18 billion, including the $1.25 billion settlement paid by Intel. For the full year, revenue declined 7% to $5.4 billion, and the company swung to a profit of $304 million from a loss of $3.1 billion.

Gross margin in the quarter improved to 45%, and AMD ended the quarter with $1.8 billion in cash. AMD, which also competes with Intel in the computing chips business, reported Computing Solutions revenue of $1.214 billion, up 14% q-o-q, with notebook processor sales up 19% q-o-q.

AMD expects revenue to be down seasonally for the first quarter. It is currently trading around $8 with a market cap of $5.5 billion and a 52-week high of $9.95 set on December 24. Nvidia is trading around $16 with a 52-week high of $18.78 set on December 30 and a market cap of $9 billion.

EETimes.com – Six reasons why no one wants an Atom-based SoC

SAN JOSE, Calif. — You would think an x86 core would be a pretty hot item for a system-on-chip design. So why is no one biting on Intel Corp.’s offer last March to sell rights to an Atom core for SoCs made at TSMC?

Here’s some armchair speculation. Most of it comes down to one thing: this new SoC model might have some people inside Intel a little scared.

1) Intel is charging high royalties

Intel did not make the terms of its Atom SoC business publicly available when it launched the deal. It’s a new business model for Intel, and maybe the processor giant is being a little too greedy—aka fearful—about releasing the crown jewels of its processor designs.

2) Intel has some other nasty business terms

Atom royalties could be in line. After all, the prices ARM charges are probably widely known, so Intel should have a model on which to base its prices.

But I would not be surprised if Intel has a real fear about losing control of its intellectual property. Unlike ARM, Intel has spent years and millions litigating against rivals such as AMD, Cyrix and others who cloned the x86. The processor giant can’t afford to let China Inc. get hold of any proprietary details about its designs.

Thus I suspect there could be some onerous business or legal handcuffs that come with being an Atom licensee. If so, Intel could be scaring off customers.

3) Intel is not providing adequate visibility into its core

Again, fear of having one of its novel x86 designs cloned by rivals may have motivated Intel to keep a tight rein on how much technical detail it discloses about the core. SoC designers won’t want to trust their chip design to a core that isn’t well documented—especially not when there are plenty of alternative cores from ARM, MIPS and others that provide plenty of technical details about their internal plumbing.

4) The design might be a dog

Intel has disclosed no details about the Atom core it is making available through TSMC. Maybe it is some sort of step-child of the Atom cores Intel itself markets. The theory that Intel fears making its best IP openly available comes into play here, too.

The TSMC core could suck too much power. Even Intel’s own Atom cores are power hogs compared to ARM cores in the same general neighborhood of performance, demanding a watt or two where ARM might use a couple hundred milliwatts.

There could also be performance problems. Intel is gifted at cranking out processor designs when it owns the process technology they are made in. It is less skilled in designing for someone else’s standard foundry process—and it likely does not have its best design engineers on the task.

5) No one wants to go first

If I just got my $15 million in VC or corporate funding to do a new SoC design, I may not want to risk blowing the money on a brand new core just ported to a new process technology, provided by a large and paranoid company for whom the IP business is a new experiment.

No, I think I’d prefer a proven core and process and a core provider who has been in the game awhile.

6) Intel doesn’t know how to sell processor cores

Perhaps the PC processor giant just doesn’t know how to sell this stuff? The simple fix would be to hire a handful of enterprising ARM sales and application engineers.

Whatever the problems really are, I suspect they can be solved—if Intel really wants to be in this business. The x86 has a long history in PC and embedded markets. There are bazillions of apps, tools and peripherals for it.

Such a rich ecosystem should attract a lively SoC business, if Intel has the will to do what it needs to do to become a solid silicon IP provider. Time will tell whether that’s really in Intel’s soul, or whether the company just can’t get beyond its PC processor DNA.

EETimes.com – Update: Intel-TSMC Atom partnership on hold

SAN FRANCISCO — Intel Corp. Thursday (Feb. 25) acknowledged that it has no immediate plans to bring to market any Atom chips manufactured by Taiwan Semiconductor Manufacturing Co. Ltd. (TSMC), confirming a report that the groundbreaking partnership announced by the two companies last year has hit a lull.

Earlier Thursday the New York Times reported that the partnership between Intel and TSMC had been placed on hiatus due to lack of customer demand. The report quoted Robert Crooke, general manager of Intel’s Atom and SoC development group, saying that the No. 1 chip vendor has not given up on the partnership.

Intel spokesperson Bill Kircos said no TSMC-manufactured Atoms are on the immediate horizon, though he added that the companies have achieved several hardware and software milestones and said they would continue to work together.

“It’s been difficult to find the sweet spot of product, engineering, IP and customer demand to go into production,” Kircos said.

Intel (Santa Clara, Calif.) said last March it would port unspecified Atom processor cores to TSMC’s technology platform, including processes, IP, libraries and design flows. The deal—the first in which Intel would transfer a processor technology to an outside silicon foundry—was widely seen as Intel’s attempt to beef up its presence in the embedded space, where the architecture of ARM Holdings plc dominates.

The delay in moving Atom products into production at TSMC appears to signal slower-than-expected progress in executing Intel’s strategy of growing revenue outside the PC market by pushing its x86 architecture deeper into the embedded market and elsewhere.

Intel has been trying for years to grow revenue outside of the PC space, where its microprocessors dominate.

Last year the company made a series of moves seen as gearing up for a broader push into new markets—including the TSMC deal, the acquisition of embedded software specialist Wind River Systems Inc. and the $1.25 billion settlement of longstanding legal issues with microprocessor rival Advanced Micro Devices Inc. Placing its partnership with TSMC on hiatus would appear to cast doubt on Intel’s strategy to grow revenue by driving its x86 architecture beyond the PC to the embedded market and elsewhere.

Jim McGregor, chief technology strategist at market research firm In-Stat, said placing the TSMC partnership on hold does not mean Intel’s x86 will fail to make further inroads into the embedded and consumer markets. But he said Intel’s quest will require patience.

“It takes time. It takes a lot of time,” McGregor said. “They have to build a relationship and build the product. You can’t just flip a switch.”

McGregor noted that companies using ARM or MIPS architectures to build embedded and consumer products currently have many choices of chip vendors, something that would change if they moved to x86, where Intel is the only choice. Moving to an x86 architecture would also require huge changes in design and software, McGregor said.

“Intel, by some accounts, is guilty of its own success,” McGregor said. “No matter what, they are viewed as the 800-pound gorilla when they enter a new market.” This adds a layer of concern for anyone considering doing business with Intel, McGregor said.

Still, McGregor believes Intel is bound to propagate x86 deeper into markets beyond the PC. He noted that x86 now dominates in the field of medical imaging systems, whereas 12 years ago the PowerPC architecture reigned supreme in that niche. Intel’s embedded group already has more than $2 billion in annual revenue, McGregor said.

“Intel is targeting hundreds of applications with Atom,” McGregor said. “Some they’ll win, some they won’t. The area where they will probably have the biggest initial impact is embedded applications.”

McGregor added that he had received no official word that the Intel-TSMC partnership is on hold. But, he said, to his knowledge TSMC has not manufactured any Atom products for Intel. “As far as I know, Intel has not secured the volume of design wins that would necessitate that,” McGregor said.

Taking the road from ASIC to FPGA – 25/02/2010 – Electronics Weekly

Guest columnist Tom Feist, director of tools marketing at Xilinx, restates the arguments for considering FPGAs – no DFM, no mask prep, no DRC/LVS.

If you’ve been in the electronics industry even just a short time, you’ve likely witnessed firsthand the increasing adoption of FPGAs over ASICs and ASSPs for a growing number of applications, from aerospace and defense to medical, industrial, automotive, broadcast and communications equipment.

With the introduction of each new silicon process technology, ASICs and ASSPs are becoming more cost-prohibitive and risky to produce, and are only feasible for a shrinking number of the highest-volume applications. ASICs are rapidly going the way of the dodo and the standard gate array.

With multi-million-gate capacity, high-speed serial I/O, large blocks of on-board RAM and DSP slices, today’s FPGAs are extremely powerful systems-on-chip that are ideal for most applications. Even the smallest devices in our high-volume Spartan-6 FPGA line are faster and have greater capacity than the most advanced ASICs on the market just six years ago, and the largest devices in our new Virtex-6 FPGA family boast performance, capacity and functionality greater than many of today’s mainstream ASICs.

With all this as a backdrop, we are seeing a growing number of engineering teams transitioning from ASIC into FPGA design teams. Many of these teams would like to apply the tools, flows and methodologies they’ve previously used or developed for ASIC design projects to their new FPGA projects. They often find that while the flows are similar in many ways, in other ways they are, quite happily, different.

Those making the transition are often amazed and quite relieved by the fact that they don’t need as many tools and no longer have to account for the expense associated with advanced ASIC flows.

Because Xilinx has already produced the FPGA silicon, design teams don’t have to run any of the tools that ensure the proper creation of masks or silicon features, such as hundred-thousand-dollar (or pound) DRC/LVS, design-for-manufacturing or mask-correction layout tools. In most cases, design teams will no longer need advanced IC signal integrity tools either, because we already accounted for those issues when we designed the silicon.

In fact, new converts to FPGA will be shocked to find that the entire tool suite for FPGA design costs less than an entry-level ASIC-class HDL linting and debugging tool. And a senior ASIC design engineer may even become teary-eyed and misty with nostalgia, as the FPGA flow is reminiscent of the days of the ASIC handoff, in which designers spent the bulk of their time on front-end design (architecture creation, logic design and verification) and would hand off their design at logic synthesis to an ASIC vendor to do the physical design.

They’re even amazed at how much less time they have to spend on their design projects, especially in performing verification. Where a typical ASIC flow is 18 months, a typical FPGA flow for the largest FPGA devices is 8 months, often shorter. Where verification in its many forms consumes upwards of 60 to even 75 percent of overall ASIC development time, in the FPGA world it typically consumes only 30% of overall design time. This is simply because in the ASIC world, if a single error makes its way to production silicon, it can require a multimillion-dollar respin and a severe delay in production that can be devastating, even destroy a company. ASIC design teams therefore spend considerable time trying to send flawless designs to the fab, only to have missed some critical functional or timing error. In FPGA, if you have a logic mistake, you can fix it and simply reprogram the FPGA—even after you’ve deployed the product in the field.
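
To put those schedule figures in concrete terms, here is a quick back-of-the-envelope calculation in Python; the 65 percent ASIC verification share is an assumed midpoint of the 60 to 75 percent range quoted above, so the output is illustrative only:

    # Rough comparison of verification effort, using the figures quoted above.
    asic_flow_months = 18      # typical ASIC flow, per the column
    fpga_flow_months = 8       # typical large-FPGA flow, per the column

    asic_verif_share = 0.65    # assumed midpoint of the quoted 60-75% range
    fpga_verif_share = 0.30    # ~30% quoted for FPGA flows

    asic_verif = asic_flow_months * asic_verif_share   # ~11.7 months
    fpga_verif = fpga_flow_months * fpga_verif_share   # ~2.4 months

    print(f"ASIC verification: ~{asic_verif:.1f} months")
    print(f"FPGA verification: ~{fpga_verif:.1f} months")
    print(f"Roughly {asic_verif / fpga_verif:.1f}x less calendar time verifying")

Under those assumptions, an ASIC team spends nearly a year of its schedule verifying, against a couple of months for the FPGA team.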

Users are also happy to find that in the FPGA tool world, the old saying “you get what you pay for” doesn’t really apply to FPGA vendor tools. In fact, for the price, most users are extremely pleased with the functionality and breadth of the tools that FPGA vendors produce for their customers. What’s more, FPGA vendor flows are relatively open and support interfaces to third-party tools, which allows users to add the tools of their choosing to their FPGA flows.

Today we’re seeing many of these teams adding high-level co-design and verification tools, commonly called ESL (electronic system level) tools, and advanced logic verification tools to their FPGA flows. ESL tools allow designers to describe, in system-level languages, the desired functionality of their entire systems (software as well as hardware), test that functionality, and, once it is optimized, move their architectural description to RTL and implementation in the FPGA. Today there are several tools and flows using different languages that perform high-level co-design and co-verification, but no single de facto standard has emerged, as each of these tools thus far typically targets a particular application rather than broad application spaces.

Another area that is maturing quite rapidly, due in large part to the legacy of ASICs, is advanced logic verification. Because a single mistake in ASIC design code can have such devastating consequences, EDA vendors, at the urging of ASIC teams, have developed highly advanced logic design flows that include assertion-based verification (ABV) and high-level HDLs such as SystemVerilog. While Xilinx’s ISE Design Suite doesn’t yet include native support for ABV or SystemVerilog, many customers connect third-party simulation environments to ISE to ensure their code is as clean as possible before programming their designs into the FPGA.

Of course, FPGA users have the enormous advantage of being able to program their design into their FPGA at any time and test not only the logic functionality of the FPGA but its overall functionality in the context of their system. We provide an advanced tool in the ISE Design Suite called ChipScope Pro that is very useful in this silicon-in-system debugging. What’s great about this extra layer of verification is that it allows folks to identify not only issues with the FPGA but also shortcomings in the rest of their system. What’s even better is that once you identify a system issue, you can often add functionality to your FPGA design that makes up for the shortcomings of another (often unmodifiable) IC or discrete component in your system.

Indeed, as more ASIC designers move to FPGAs, they are often quite surprised and pleased by how much simpler, faster and less stressful it is to program an FPGA than to design an ASIC. At the same time, we continuously learn from our customers (who are increasingly not only hardware designers but also embedded software and algorithm architects) and look for ways to better our flows. If you haven’t used our tools or our FPGAs, I encourage you to give them a try, starting with our free WebPACK or a 30-day free evaluation of our full ISE Design Suite.

Tom Feist is the Senior Marketing Director for the ISE Design Suite at Xilinx. He can be reached at tom.feist@Xilinx.com.

EETimes.com – TSMC’s R&D boss addresses 40-nm yields, high-k, litho

SAN JOSE, Calif. — At the TSMC Japan Executive Forum in Yokohama this week, Shang-Yi Chiang, senior vice president of R&D at Taiwan Semiconductor Manufacturing Co. Ltd. (TSMC), addressed several issues about the silicon foundry giant.

Chiang discussed TSMC’s 40-nm capacity, yield issues, high-k and lithography. EE Times obtained a transcript of the presentation. Here are some of the issues discussed:

1. 40-nm capacity issues

(As previously reported, Nvidia Corp. and others are struggling to get their 40-nm wafers from TSMC.)

Chiang: “There are four major messages I’d like to deliver in this presentation. Number one is that 40-nanometer technology happened to be in very high demand in the early stage. We saw customer demand ramp up more quickly than anything we had seen before for 40-nanometer, and we are working very hard to make up the volume.

At this stage we only have fab 12 ready for production of 40 nanometer, and we are able to do about 80,000 wafers per quarter at the moment. These are twelve-inch wafers. This will be doubled by the end of this year, to 160,000 twelve-inch wafers of 40-nanometer capacity, partly from fab 12 and partly from fab 14.”

2. 40-nm yield issues

(For some time, TSMC has struggled with 40-nm issues.)

Chiang: “You have all heard about TSMC’s challenges during the early part of last year. I am glad to report to you that all these problems are behind us. We resolved the yield problem in the second half of last year. So we’re glad the yield issue is over, and we are building capacity very aggressively to fulfill the very high demand from our customers.

Moving to 45 and 40 nanometer is a lot more challenging. This is the first time we used 193-nanometer immersion. That means the photoresist is immersed in water during exposure, which creates a very high potential for defects; for us this is a very big challenge. We began to develop the third generation. We also began to use the second-generation low-k material, with a k value of 2.5, and at this k value the material becomes quite fragile, so there are a lot of potential issues on the package side. That is why moving to 40 nanometer is getting pretty challenging, pretty difficult to do.”

3. Progress at 28-nm

(Several months ago, TSMC rolled out its 28-nm process, which will have several options.)

Chiang: “The first node we’re going to release for 28-nanometer will be what we call the 28 LP. This is our poly gate and silicon oxynitride version. We will establish production at the end of June this year, about four months from now, and this is for the low-power application. Again, no high-k metal gate.”

4. Progress on high-k and metal gate

(At 28-nm, TSMC is expected to have a high-k/metal-gate option.)

Chiang: “The first high-k metal gate, which we call 28 HP, for the high-performance application, will be introduced at the end of September this year, followed three months later, in December, by the 28 HPL. This is the first high-k metal gate introduction for the low-power application.

At this moment the only way we know how to do that is the gate-last approach. So I firmly believe everybody will migrate to gate last in future generations, possibly as early as the 22- or 20-nanometer node. Even some of our competitors who claim gate first has unique advantages will, I firmly believe, move to gate last very quickly.

There are advantages to the gate-last process. It happens that you can achieve higher strain from silicon germanium, because by the time you remove the polysilicon from the gate, the silicon germanium can push the channel in a little bit to get more strain in the channel. But by far the biggest advantage, and the reason TSMC chose gate last, is that you can have different work functions with different gate metals.”

5. Plans for 22-nm node

(In the past, TSMC has talked about the 22-nm node.)

Chiang: “Going forward, we plan to introduce the 22-nanometer node about two years after we introduce 28 nanometer, so we would like the first introduction to be in Q3 of 2012, and this is for the high-performance version, followed by the low-power version around the end of Q1 2013.

Going to 22 nanometer and beyond, we would like new modules to be introduced; at 22 or 20 nanometer, the first will be that we go to the second-generation high-k metal gate.”

6. Litho directions

(TSMC has inserted 193-nm immersion and is looking at EUV and maskless.)

Chiang: “We will continue using 193 immersion with double patterning in the early stage, and we will migrate to EUV or multiple-e-beam direct write if one of these technologies becomes more mature and more cost effective.

On the lithography side, it seems like the industry’s mainstream trend will be to go to EUV, and you may or may not have heard about the cost of EUV. If you buy one EUV tool with a matched track, it will cost something like 80 million dollars for just one tool. I was shocked to sign a P.O. only a couple of weeks ago, shortly before my vacation: I signed a 1.9 million euro purchase order for a clamp.

This clamp is a custom-made clamp only for EUV. We have to mount it on the ceiling, and it will be used to lift the EUV tool when we install the machine and when we do maintenance. The tool is so heavy that no other tool can lift it up, and this custom-made clamp costs us 1.9 million euros, just to buy a clamp. It’s really shocking.”

EETimes.com – Comment: ARM must beware of the ‘tied-selling’ trap

LONDON — The success of ARM Holdings plc with its series of low-power processing cores, and its relatively small-scale success — so far — with its Mali graphics processing cores, puts ARM in a potentially dangerous position.

ARM (Cambridge, England) gives royalty discounts when additional processor cores are integrated on a single chip. In the era of multicore processor architectures this clearly makes sense, as it is applied to dual-core and quad-core Cortex-A9s and even heterogeneous aggregations of ARM cores. However, Warren East, CEO of ARM, has said that discounts are also extended when a chip company includes a Mali graphics core alongside an ARM general-purpose processor core.

And therein lies the danger.

Now a discount could be seen as wholly reasonable, but if potential licensees came to believe that working with another graphics core provider could hurt their access to ARM’s main processor cores, or even just to roadmap information and callbacks from engineers, then ARM could find itself being criticized — or even prosecuted — for “tied selling.” Such allegations have plagued Intel, the world’s largest and most successful chip vendor, in the past.

ARM’s main rival in the graphics core licensing business is Imagination Technologies Group plc (Kings Langley, England). Imagination has been licensing its graphics cores for a considerable time and as a result has started to enjoy success. As ARM CEO East said earlier this month, “We are the new kid on the block. Imagination is very much the incumbent.” Indeed, prior to acquiring its own graphics capability, ARM was active in promoting the virtues of ARM and Imagination cores working in tandem. “As such we only have ourselves to blame,” East said.

IP core royalties are usually set as a percentage of the chip price. East told analysts at a conference to discuss ARM’s most recent financial results: “If you integrate a Mali core on the same chip as another ARM microprocessor, then it’s similar to integrating two ARM microprocessors on the chip. Typically both cores come under royalty and the second one has a discount. And perhaps because of the special-purpose functionality in the Mali core, the discount is probably not as great as it would be for a second general-purpose ARM microprocessor.”
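
East’s description lends itself to a toy calculation. The Python sketch below models a per-chip royalty with a discounted second core; every price, rate and discount in it is invented for illustration, since ARM’s actual terms are not public:

    # Toy model of the royalty scheme East describes: each core carries a
    # royalty set as a percentage of the chip price, and every core after
    # the first is discounted. All numbers are invented for illustration;
    # ARM's actual rates are not public.

    def chip_royalty(chip_price, core_rates, discount):
        """Total per-chip royalty; first core at full rate, later cores discounted."""
        total = 0.0
        for i, rate in enumerate(core_rates):
            factor = 1.0 if i == 0 else (1.0 - discount)
            total += chip_price * rate * factor
        return total

    price = 10.00              # hypothetical chip price, in dollars
    rate = 0.015               # hypothetical 1.5% royalty per core

    # Second general-purpose core: deep discount (assumed 50%).
    print(f"{chip_royalty(price, [rate, rate], discount=0.50):.4f}")   # 0.2250

    # Mali as the second core: smaller discount (assumed 25%), per East's
    # remark that the Mali discount is "probably not as great".
    print(f"{chip_royalty(price, [rate, rate], discount=0.25):.4f}")   # 0.2625

The structural point survives whatever the real numbers are: the less the second core is discounted, the more a licensee pays for choosing a non-ARM graphics core alongside an ARM CPU.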

With regard to discounting policy, it sounds like ARM is treading on the right side of the tied-selling line, but such debates can become highly nuanced, and a case may not hinge on pricing alone. ARM could even come up with a pricing policy that showed all discounts accruing on the main processor, but that would not necessarily exonerate it.

I am sure that ARM will have continued success with its Mali cores. And when that happens, ARM could argue that by controlling its own graphics architecture, the physical intellectual property underlying it, and how it integrates with its own general-purpose processors, it has created technical advantages in the marketplace that it should be free to exploit. A rival might argue that ARM was exploiting its dominance (at least in some markets) to force customers to take the Mali graphics core along with a general-purpose core, excluding the competition in an abuse of market position.

Of course there is an argument that being attacked over antitrust issues is a nice problem to have, as it means a company is successful. The corollary is that when a company is successful it becomes worth suing. Which is why ARM must do all it can to avoid falling into the tied-selling trap.

EETimes.com – Junctionless transistor is ready for 20-nm node, says researcher

LONDON — Professor Jean-Pierre Colinge of the Tyndall National Institute (Cork, Ireland), co-author of the paper “Nanowire transistors without junctions” published recently by Nature Nanotechnology, has said that junctionless transistors could be implemented commercially at around the 20-nm manufacturing node.

The junctionless transistor is based on the use of a control gate around a silicon nanowire. The gate can be used to modulate the resistance of the nanowire and to “squeeze” the electron channel to nothing, thus turning off the device. Doping is used to produce p- and n-type FETs, but there are no steep dopant gradients or junctions, which promises simplified manufacturing.

Such a major change in the structure of the fundamental electronic device could be expected to require a great deal of independent research, and an introduction at or around 20-nm would require companies to switch more or less immediately. However, a switch to the junctionless transistor could fit in with previously forecast moves by the industry away from planar transistors and towards FinFETs and multi-gate and wrap-around gate structures.

Speaking to EE Times by telephone Professor Colinge said: “It’s not shown in the Nature paper but we have made a silicon nanowire measuring about 10 nanometers by 10 nanometers. Now there is a rule of thumb that the gate length should be about twice the nanowire dimensions to avoid short channel effects. I think junctionless transistors could intersect with ITRS [International Technology Roadmap for Semiconductors] at 20-nm.”

Professor Colinge continued: “The junctionless transistor could compete now but it will take time for semiconductor companies to get used to the idea. People are scared of the high doping levels.”

According to the paper published in Nature Nanotechnology, dopant levels of between 2 x 10^19 and 5 x 10^19 atoms per cubic centimeter were used. The high doping levels are required to ensure a high current drive and good source and drain contact resistance, the paper states.
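
For a sense of scale, here is a quick back-of-the-envelope count in Python of how few dopant atoms actually sit under the gate of such a device. The 10 x 10-nm cross-section is from the work described above, the 20-nm gate length follows the twice-the-nanowire-dimension rule of thumb Colinge quotes, and uniform doping is assumed for illustration:

    # Rough count of dopant atoms under the gate of a junctionless
    # nanowire transistor. Cross-section from the work above; gate length
    # from the "about twice the nanowire dimensions" rule of thumb.
    # Uniform doping is assumed for illustration.

    NM_TO_CM = 1e-7                       # 1 nm = 1e-7 cm

    width = height = 10 * NM_TO_CM        # nanowire cross-section, in cm
    gate_length = 20 * NM_TO_CM           # ~2x the nanowire dimension, in cm

    volume_cm3 = width * height * gate_length    # about 2e-18 cm^3

    for doping in (2e19, 5e19):           # atoms/cm^3, from the Nature paper
        atoms = doping * volume_cm3
        print(f"{doping:.0e} cm^-3 -> ~{atoms:.0f} dopant atoms under the gate")

    # ~40 to ~100 atoms: few enough that random dopant fluctuation matters,
    # one reason "people are scared of the high doping levels".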

But Professor Colinge pointed out that the junctionless transistor scales far better than a conventional transistor, which needs complex dopant gradients and profiles to be implanted and controlled over diminishing distances. “The junctionless device does scale better. You still need the high-resolution etching, but you don’t have to scale the gate oxide as aggressively as you do on a regular device,” he said.

He also said that while the most recent paper does not address the issue, there is no reason why silicon junctionless transistors should not be amenable to induced strain to increase electron mobility. At the same time the junctionless approach could also be applied to materials other than silicon. “Yes it is applicable to other materials such as compound semiconductors,” Professor Colinge said.

Another difference between a conventional transistor and a junctionless transistor is that the junctionless device is normally on, although this is made more complex by the use of doped gate materials, Professor Colinge said. But he stressed that for the purposes of logic, memory and small-signal operation there is no difference from a conventional transistor. “So far as we know the devices are interchangeable and there are no implications for the layout of logic,” he said.

The research presented in the Nature paper was funded in part by Tyndall’s participation in two European-funded projects, Nanosil and EuroSOI+.

EETimes.com – Xilinx confirms: Samsung, TSMC in, UMC out at 28-nm

SAN FRANCISCO — Xilinx Inc. said Monday (Feb. 22) it will use leading foundry Taiwan Semiconductor Manufacturing Co. Ltd. (TSMC) as one of two foundry suppliers for its 28-nm FPGAs, a major strategy shift that has been the subject of industry rumors and analyst speculation for weeks.

Xilinx (San Jose, Calif.) said it is using TSMC and Samsung Electronics Co. Ltd.’s foundry division to make 28-nm parts, which are expected to begin sampling by the end of this year. Xilinx has long used a two-foundry strategy at each process node. Samsung first joined Xilinx’ foundry supplier roster at the 40-nm node, supplanting Toshiba.

Xilinx’ shift to TSMC is a bitter pill for rival foundry United Microelectronics Corp. (UMC), which has been a foundry supplier to Xilinx for more than a decade. Some analysts blamed 65-nm yield issues at UMC for a supply glitch that last summer materially impacted Xilinx sales, speculation which UMC later denied.

UMC (Hsinchu, Taiwan) will presumably continue to manufacture Xilinx parts at 65-nm, 40-nm and other nodes.

Suresh Menon, product development vice president for Xilinx’ programmable platforms development group, said Xilinx evaluates foundries’ process technology at every process node in order to determine which suppliers to use.

“In this generation, as in every generation, we look at the needs for our next generation FPGAs and identify what process we need to deliver that generation of products,” Menon said.

Xilinx is emphasizing power management in its 28-nm products (see related story). Menon said static power is a very significant portion of the total power dissipation of a chip at 28-nm. The choice of process technology at 28-nm is critical to achieving maximum power efficiency, he added.

Menon said TSMC and Samsung offered Xilinx the best process technology options for high-performance, low-power process technology at 28-nm. Xilinx has been working with the two companies on 28-nm development for more than two years, he said.

Some analysts, including Ian Ing of Broadpoint Amtech, have been saying for several weeks that Xilinx was expected to use TSMC at the 28-nm node.

Altera Corp., the chief competitor to Xilinx in the programmable logic market, has used TSMC as its foundry supplier for years. Menon said the Xilinx-TSMC agreement would not restrict TSMC from working with Xilinx competitors. He noted that this is typical of TSMC’s business practices and said TSMC manufactures parts for competing suppliers in many markets, including graphics chips.

Xilinx and Samsung last week announced that Spartan-6 FPGAs have achieved volume production on Samsung’s 45-nm process.

Observations: The (good and bad) future of the Internet

SAN DIEGO — “We know even now that we are at some fundamental limits of what the Internet can handle,” warned University of California, San Diego researcher kc claffy [sic capitalization] at the beginning of her talk at the American Association for the Advancement of Science meeting in San Diego. “We have one big expectation—being able to innovate,” she said. “And it is unclear whether we will be able to do that.”

claffy’s warnings are based on the observation that the Internet’s infrastructure is, for the most part, hidden. In the U.S. there are on the order of one hundred Internet service providers that control the fiber lines and the routers that direct traffic throughout the network. Each of these ISPs has agreements with the others to exchange traffic. In essence, these agreements say if you move my bits, I’ll move yours. However, all these agreements are not just independent and unregulated, they’re secret. Proprietary corporate information. This makes it impossible to understand how traffic will get redirected when, say, one path fails. It makes it impossible to understand just how strong the overall system is when something goes wrong. It makes it impossible to map the overall structure of the Internet (something intensely frustrating to claffy, whose job it is to map the overall structure of the Internet). And it also makes it difficult to predict how the Internet will grow.
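
claffy’s point about redirection can be made concrete with a toy model: treat ISPs as nodes and peering agreements as edges, fail one link, and watch the route change. All of the names and links in this Python sketch are invented; with the real agreements secret, nobody outside the ISPs can build this graph for the actual Internet:

    # Toy peering model: ISPs as nodes, agreements as edges. Fail one link
    # and the path between two ISPs silently changes. All names and links
    # are invented.
    from collections import deque

    peering = {
        "ISP-A": {"ISP-B", "ISP-C"},
        "ISP-B": {"ISP-A", "ISP-D"},
        "ISP-C": {"ISP-A", "ISP-D"},
        "ISP-D": {"ISP-B", "ISP-C"},
    }

    def shortest_path(graph, src, dst):
        """Breadth-first search for a shortest path, or None if unreachable."""
        queue, seen = deque([[src]]), {src}
        while queue:
            path = queue.popleft()
            if path[-1] == dst:
                return path
            for nxt in graph[path[-1]] - seen:
                seen.add(nxt)
                queue.append(path + [nxt])
        return None

    # Two equal-length routes exist; BFS returns one of them.
    print(shortest_path(peering, "ISP-A", "ISP-D"))

    # Fail the A-B link: traffic must shift to the A-C-D route.
    peering["ISP-A"].discard("ISP-B")
    peering["ISP-B"].discard("ISP-A")
    print(shortest_path(peering, "ISP-A", "ISP-D"))   # ['ISP-A', 'ISP-C', 'ISP-D']

In the real network the graph has hundreds of ISP nodes and its edges are trade secrets, which is exactly why nobody can predict the rerouting behavior from the outside.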

One thing is for sure, though: The Internet will continue to grow. We just don’t know if the current system for addressing content on the Internet will be able to accommodate this growth. Every location on the Internet—every web site, every user—has associated with it a specific address, called an Internet Protocol (IP) address. The current addressing system—called IPv4—has about four billion possible addresses. The Internet is expected to outgrow this batch of addresses in about two years. For more than a decade researchers have been working on the next generation of addresses—the IPv6 system—which has approximately enough addresses to last until the heat death of the universe. But IPv6 and IPv4 are not compatible, so anyone working with a new IPv6 address would not be able to access Web sites using old IPv4 addresses. “Everyone would have to switch at the same time—Google, Verizon, everyone,” claffy told me after her presentation. Yet a massive instantaneous global switchover of the Internet’s entire addressing system is, in short, unlikely.
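
For a sense of the scale gap between the two addressing systems, here is a quick sketch using Python’s standard ipaddress module (the two addresses at the end are documentation-range examples, not real hosts):

    # Scale of the IPv4 vs. IPv6 address spaces, using Python's standard
    # ipaddress module.
    import ipaddress

    ipv4 = ipaddress.ip_network("0.0.0.0/0")    # the entire IPv4 space
    ipv6 = ipaddress.ip_network("::/0")         # the entire IPv6 space

    print(f"IPv4 addresses: {ipv4.num_addresses:,}")    # 4,294,967,296
    print(f"IPv6 addresses: {ipv6.num_addresses:.3e}")  # ~3.403e+38

    # The two formats are not interchangeable, which is the crux of the
    # transition problem described above.
    a4 = ipaddress.ip_address("192.0.2.1")      # documentation-range IPv4
    a6 = ipaddress.ip_address("2001:db8::1")    # documentation-range IPv6
    print(type(a4).__name__, type(a6).__name__) # IPv4Address IPv6Address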

Irwin Jacobs, co-founder and former CEO of Qualcomm, also spoke, addressing the spread of the wireless Internet. He started with some figures that underscored estimates of the Internet’s rapid growth. According to Google, he said, half of all Internet connections today come from mobile devices, and mobile web adoption and growth rates today are eight times what wireline-based adoption rates were ten years ago. According to one estimate by Ericsson, the number of wirelessly connected devices worldwide will rise from an estimated 4.6 billion today to 50 billion by 2020. That would be around seven devices for every man, woman and child on the planet. (Jacobs later clarified that he thought these numbers were “ambitious.”)

One well-publicized challenge for mobile 3G networks is dealing with the ever-increasing amount of video coming through the system. “Video to phones accounts for half the bits now,” he said. One option for reducing strain on 3G networks is expanding the portion of the wireless spectrum used to distribute content. An example of how this could work is FLO TV, a service from Qualcomm that uses the old UHF channel 55 to broadcast over 20 channels to wireless devices. The system is now in place in 68 metropolitan areas, he said.

Whatever happens going forward, the Federal Communications Commission’s upcoming Broadband Access Plan, expected in two weeks, will surely shape the landscape of the Internet over the years to come. Let’s hope it can cope with the growth.

EETimes.com – SPIE: TSMC jumps on EUV bandwagon

SAN JOSE, Calif. — In a major development, ASML Holding NV said that Taiwan Semiconductor Manufacturing Co. Ltd. (TSMC) will take delivery of ASML’s extreme ultraviolet (EUV) lithography system.

At some point, possibly next year or sooner, TSMC will take delivery of a TwinScan NXE:3100 tool from ASML. The NXE:3100 is a “pre-production” EUV tool, said to have an NA of 0.25.

This represents a change in direction for the silicon foundry giant, which at least in the past has dismissed EUV. For next-generation lithography the company has been backing maskless lithography, and it will continue to do so.

After months of collaboration, TSMC and Mapper Lithography BV recently claimed that Mapper’s tool, located at TSMC’s Fab 12 GigaFab, is printing features so far unachievable with current immersion lithography technology. In 2008, TSMC and Mapper concluded an agreement under which Mapper would ship its first 300-mm multiple-electron-beam maskless lithography platform to TSMC for process development and device prototyping.

TSMC is also expected to be one of the first dedicated foundries conducting on-site EUV development. It will install the new system at its Fab 12 GigaFab for development of future technology nodes.

TSMC is evaluating EUV and other lithography technologies for their potential to enable cost-effective manufacturing at future technology nodes. EUV employs a much shorter wavelength and has the potential to reduce the costs associated with the techniques currently used to stretch 193-nm immersion lithography, making it a promising lithography technology for manufacturing ICs at future advanced technology nodes.

“TSMC will use a TwinScan NXE:3100 for research and development of future advanced technology nodes,” said Shang-yi Chiang, TSMC’s senior vice president of research and development, in a statement. “EUV is one of next-generation lithography technologies we are investigating.”

Others have also jumped on the EUV bandwagon, including Hynix, Intel, Samsung and Toshiba.
