The Impact of Lithography Challenges
I recently had the opportunity to talk to Dr. Nick Tredennick, and he shared some of the topics from his recent presentation titled The Last Convergence. His presentation defines three phases in the convergence of semiconductor-related technologies. The first phase was driven by companies like Digital that built modules from discrete components. During the second, integrated circuits began to aggregate discrete components; that expanded performance capability paved the way for minicomputers to displace mainframes. Dr. Tredennick concludes that subsequent progress in semiconductor cost/performance characteristics brought about the high volume and broad acceptance of the personal computer, which pushed even minicomputers and workstations into a separate niche. He also observed that “…all of these transitions were in the second dimension; that is, planar components in planar assemblies. In addition, cost-performance drove the design of high-volume systems.”
Dr. Tredennick believes that wafer stacking is the next convergence; this is a “…compelling change that will create seismic events throughout the semiconductor business.” He sees two elements contributing to this convergence. The first is that the shift of personal computers to a more mobile platform also shifted design objectives from a direct cost-performance relation toward the current emphasis on cost-performance per watt. Chasing this new performance target will drive the further integration of chips into 3D devices, given the increased flexibility and functionality of 3D packages.
The second element is more revolutionary and is triggered by the anticipated transition to new light sources as the industry moves toward smaller circuit geometries.
The industry has assumed that the historic improvements in semiconductor performance would continue to follow the timetable of Moore’s Law, even though the investments necessary to achieve smaller transistors have risen exponentially with each new generation.
There is broad agreement today that the anticipated increases in performance, the increasing demand for higher wafer volume, and the ability to continue the traditional decline in costs will all become more difficult to sustain as manufacturing processes march toward sub-20nm line widths. We may in fact have already reached the point where the next shrink in lithography cannot be made on schedule, given the expenses and unresolved technical barriers ahead.
As companies continue to compete for cost/performance advantages while working through these manufacturing challenges, they will have to pursue alternative routes to higher performance. Dr. Tredennick expects that these competing OEM and semiconductor companies will likely find that wafer- and die-packaging techniques yield better cost and performance results, at lower R&D cost, than can be achieved by relying exclusively on shrinking transistor sizes.
There is no doubt that we will eventually get to smaller transistors through the use of new illumination sources and other process improvements. However, the time gap between today’s leading-edge processes and the future deployment of new light sources will create the opportunity for R&D investment in other cost/performance alternatives. Furthermore, by the time most manufacturers have completed that transition, the industry will have been fundamentally transformed by the convergence to these new stacked-packaging concepts.
The impact of that manufacturing scenario will challenge one of the basic assumptions upon which the semiconductor industry has relied. Value transistors once came from trailing-edge processes for which the plant, equipment, and development costs had been fully amortized. As progress in process lithography slows, however, the most efficient source of value transistors will migrate to leading-edge fab facilities as their production processes mature. With the reduction in minimum lithography slowing below its historic rate, the cost of wafer processing will be driven more by operating cost. Once the efficiencies of high-volume manufacturing take hold, fabs that have made the transition to leading-edge processes will also be positioned to make value transistors.
“If the cost of silicon falls toward zero and progress in shrinking transistors slows, then where do system designers turn” for more performance and a way to add their own value? Dr. Tredennick concludes that the answer to this question will also be found in die- and wafer-stacking technologies.
The full impact of this scenario is far beyond the scope of this blog. Our worm’s-eye view of the universe is limited to memory technologies, so in the next blog we can only ask what impact this broad trend might have on new memory technologies.
After first apologizing to Dr. Tredennick if I have somewhat mangled the points of his presentation, I will say that I completely agree with his assessment of the manufacturing challenges ahead.
I also believe that this manufacturing issue has a double-barreled impact on the production costs of memory technologies.
The first impact concerns the ability to attain the smallest lithography sizes. Some of the memory companies are already preparing for high-volume production on 25 to 30nm processes over the next 12 to 18 months, and they appear to be ahead of the high-volume, production-level processes for logic products. The memory companies will therefore likely be the first to test these limits. Whatever additional costs these complexities contribute to the cost/performance ratio will likely apply equally to the new memory technologies and to the older DRAM/NAND technologies.
However, I suspect there is a second impact on memory products that will not be borne equally by the new and old memory technologies. The issue for NAND/DRAM is that they are charge-storage technologies, as opposed to the data-storage structures almost universally used by the new memory technologies. As lithography dimensions below 20nm significantly reduce the physical volume in which a stored charge can be held, it becomes harder for these cells to maintain a consistent and predictable level of energy, and both the performance and the endurance of the cells suffer. This issue has not been shown to apply to resistance-based and state-change memory cells.
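To see why shrinking geometry hits charge-storage cells so hard, consider a rough back-of-the-envelope model. The sketch below is purely illustrative, not vendor data: it assumes the charge a cell can hold scales roughly with cell area, calibrated to the commonly cited ballpark of around 100 electrons per NAND floating gate at the 25nm node.

```python
def electrons_per_cell(node_nm, ref_node_nm=25.0, ref_electrons=100.0):
    """Rough estimate of stored electrons per cell, assuming the stored
    charge scales with cell area (node squared). Reference values are
    illustrative assumptions, not measured data."""
    return ref_electrons * (node_nm / ref_node_nm) ** 2

# Show how quickly the charge budget erodes with each shrink.
for node in (45, 32, 25, 20, 16):
    print(f"{node}nm: ~{electrons_per_cell(node):.0f} electrons")
```

Under these assumptions, a move from 25nm to 16nm cuts the stored charge by more than half, so the loss of even a few electrons becomes a measurable fraction of the signal, which is why consistency and endurance degrade.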
Clever engineers have always found ways to extend the life of any technology or process, but at some point they can only do so by flattening the slope of the traditional decline in the cost-per-bit or cost-per-function of the IC.
I foresee the need to add more error correction or some other form of physical compensation for the existing high-volume memory technologies that will noticeably slow the decline in cost-per-bit of NAND and DRAM. And as the result of those additional costs, the decline in the cost-per-bit of charge storage memory technologies will be on a less favorable slope than that of most of the new and emerging memory technologies.
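That slope argument can be made concrete with a toy model. The sketch below is a hypothetical illustration, not a forecast: the shrink rate and the error-correction overhead rate are assumed values chosen only to show how a recurring compensation cost flattens one technology's cost-per-bit curve relative to another's.

```python
def cost_per_bit(start_cost, shrink_rate, overhead_rate, years):
    """Cost-per-bit after `years` generations: each generation the shrink
    cuts cost by `shrink_rate`, while added ECC/compensation inflates it
    by `overhead_rate`. All rates are illustrative assumptions."""
    cost = start_cost
    for _ in range(years):
        cost *= (1 - shrink_rate) * (1 + overhead_rate)
    return cost

# Hypothetical: both start at the same cost and shrink 30% per generation,
# but the charge-storage technology pays a 10% compensation overhead.
charge_storage = cost_per_bit(1.0, shrink_rate=0.30, overhead_rate=0.10, years=5)
new_memory = cost_per_bit(1.0, shrink_rate=0.30, overhead_rate=0.00, years=5)
print(f"charge-storage: {charge_storage:.3f}, new memory: {new_memory:.3f}")
```

Even a modest recurring overhead compounds: after five generations the charge-storage technology in this toy model ends up roughly 60% more expensive per bit than the technology that avoids the overhead, despite identical shrink rates.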
We have already suggested one scenario by which new memory technologies can gain a foothold—we’ve presented Dr. Makimoto’s Wave as a possible harbinger of a broad shift in target applications and noted that Intel already appears to be moving in a like manner. When you change the target application away from the monolithic desktop PC architectures, you also change the value proposition of the technologies.
Here we have added Dr. Tredennick’s observation of a second, potentially major tremor that will likewise shake the foundation of traditional memory cost-per-bit expectations.