Archive for the ‘Design technology’ Category

Analog and RF Design Issues in High-k & Multi-Gate CMOS Technologies (IFX, 2009 IEDM 18.3)

HK-related design

  • 12-bit ADC in 32nm HK CMOS: small Vt variations must be handled; error correction by non-binary search algorithms can efficiently compensate for resolution and performance degradation due to hysteresis effects (see the sketch after this list)
  • 1/f noise from HK devices is now low enough
  • Phase noise/power in VCOs using HK/MG –> state of the art.
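
To make the error-correction idea concrete, here is a minimal, hypothetical Python sketch of a non-binary (sub-binary radix) successive-approximation search; the function name, radix, and error values are my own illustrations, not from the paper. Because consecutive step sizes shrink by a radix smaller than 2, the search ranges overlap, and a wrong early decision can still be recovered by later steps.

    # Hypothetical sketch (not from the paper) of a non-binary, sub-binary-radix
    # successive-approximation search. With radix < 2 consecutive search ranges
    # overlap, so a wrong early decision (e.g. from a comparator Vt shift or
    # hysteresis) can still be corrected by the remaining steps.
    def nonbinary_sar(vin, vref=1.0, radix=1.8, steps=16, first_step_error=0.0):
        estimate = vref / 2.0
        step = vref / 2.0
        for i in range(steps):
            step /= radix  # radix = 2.0 would be a plain binary search
            err = first_step_error if i == 0 else 0.0
            if vin + err > estimate:  # comparator decision
                estimate += step
            else:
                estimate -= step
        return estimate

    # A 50 mV comparator error on the first decision:
    print(round(nonbinary_sar(0.47, radix=2.0, first_step_error=0.05), 3))  # ~0.5  (binary: unrecoverable)
    print(round(nonbinary_sar(0.47, radix=1.8, first_step_error=0.05), 3))  # ~0.47 (redundancy recovers)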

MuG (multi-gate) related design

  • sidewall roughness –> degrades noise and RF performance
  • increased series resistance
  • high fringing cap
  • VCO, PLL –> show competitive jitter and phase noise/power FOMs (see the FOM sketch after this list)
  • LNA –> good noise and power matching behavior
  • Self-heating increases current consumption
  • Better short-channel control –> beneficial for output impedance, gain, Vt matching
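
For reference, the phase noise/power FOM used to rank VCOs and PLLs here is the standard one; below is a small Python sketch of it. Only the formula is standard, and the example numbers are made up for illustration.

    import math

    # Standard VCO phase-noise/power figure of merit (lower is better):
    #   FOM = L(df) - 20*log10(f0/df) + 10*log10(Pdc / 1 mW)   [dBc/Hz]
    def vco_fom(phase_noise_dbc_hz, f0_hz, offset_hz, p_dc_mw):
        return (phase_noise_dbc_hz
                - 20 * math.log10(f0_hz / offset_hz)
                + 10 * math.log10(p_dc_mw))

    # Illustrative numbers only: a 5 GHz VCO with -120 dBc/Hz at 1 MHz offset,
    # drawing 10 mW, scores about -184 dBc/Hz.
    print(round(vco_fom(-120.0, 5e9, 1e6, 10.0), 1))  # -184.0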

Design Challenges for 22nm CMOS and Beyond (S. Borkar, Intel, 2009 IEDM)

paper

Past: segmented hard-macro functional blocks –> the system was built using a combination of those hard macros with a bit of soft macro.

    • relied on custom design methodology for higher performance and smaller area.
    • –> due to increasing design-rule complexity, restrictions, and regularity, there is little room left to improve custom design
    • custom design focuses more on local design than on global design –> suboptimal in the overall sense.
  • Future: systems with soft macros only.
    • 22nm and beyond means system design with design automation at all levels.
    • soft macros rather than hard macros
      • described at a higher level of abstraction, such as an RTL description.
    • the system can be built at a higher level of abstraction using these soft-macro functional blocks
      • processors, buses, other functional blocks
      • the system is optimized using a system-level optimizer
    • physical design is optimized as well.
    • custom design is limited to memory arrays, register files (a toy sketch of this flow follows the list)
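
As a toy illustration of this flow, here is a hypothetical Python sketch; all block names, variants, and area/delay numbers below are invented, not from the paper. The point it shows: soft macros stay parameterizable until implementation, so a system-level optimizer can pick the variant mix that meets a constraint, which a frozen hard macro cannot offer.

    from itertools import product

    # Toy model: each soft macro keeps several candidate implementations
    # (area, delay) until a system-level optimizer chooses among them.
    class SoftMacro:
        def __init__(self, name, variants):
            self.name = name
            self.variants = variants  # {variant_name: (area, delay)}

    cpu = SoftMacro("cpu", {"small": (1.0, 10.0), "fast": (2.5, 6.0)})
    bus = SoftMacro("bus", {"32bit": (0.3, 4.0), "64bit": (0.6, 2.5)})
    dsp = SoftMacro("dsp", {"serial": (0.8, 12.0), "parallel": (2.0, 5.0)})

    def optimize(system, max_area):
        # Exhaustive system-level optimization: best worst-case delay
        # among the variant mixes that fit the area budget.
        best = None
        for choice in product(*(m.variants.items() for m in system)):
            area = sum(impl[0] for _, impl in choice)
            delay = max(impl[1] for _, impl in choice)
            if area <= max_area and (best is None or delay < best[1]):
                best = ([name for name, _ in choice], delay, area)
        return best

    print(optimize([cpu, bus, dsp], max_area=4.5))
    # -> (['small', '32bit', 'parallel'], 10.0, 3.3)  with these toy numbers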

2009 DAC keynote — Fu-Chieh Hsu @ TSMC

Overcoming the new design complexity barrier: Alignment of technology and business models

http://videos.dac.com/46th/tuekey/fu-chiehhsu.html

Rebel Science News: Why Parallel Programming Is So Hard

The Parallel Brain

The human brain is a super parallel signal-processing machine and, as such, it is perfectly suited to the concurrent processing of huge numbers of parallel streams of sensory and proprioceptive signals. So why is it that we find parallel programming so hard? I will argue that it is not because the human brain finds it hard to think in parallel, but because what passes for parallel programming is not parallel programming in the first place. Switch to a true parallel programming environment and the problem will disappear.

Fake Parallelism

What is the difference between a sequential program and a parallel program? A sequential program is an algorithm, a list of instructions arranged in a specific order such that predecessors and successors are implicit. Is there such a thing as a parallel algorithm? In my opinion, the term ‘parallel algorithm’ is an oxymoron because an algorithm, at least as originally defined, is a sequence of steps. There is nothing parallel about algorithms, whether they are running on a single processor or on multiple processors. A multithreaded application consists of multiple algorithms (threads) running concurrently. Other than the ability to share memory, this form of parallelism is really no different from multiple communicating programs running concurrently on a distributed network. I call it fake parallelism.
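
To make the point concrete, here is a small, hypothetical Python illustration (not from the original article): two threads that share memory are still just two ordinary sequential algorithms that happen to run concurrently.

    import threading

    counter = 0
    lock = threading.Lock()

    def sequential_algorithm(name, steps):
        # An ordinary algorithm: a list of instructions in a specific order.
        global counter
        for _ in range(steps):
            with lock:  # shared memory is the only real coupling
                counter += 1
        print(name, "finished its own sequential instruction stream")

    t1 = threading.Thread(target=sequential_algorithm, args=("thread-1", 1000))
    t2 = threading.Thread(target=sequential_algorithm, args=("thread-2", 1000))
    t1.start(); t2.start()
    t1.join(); t2.join()
    print("counter =", counter)  # 2000, but the interleaving is non-deterministic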

True Parallelism

In a truly parallel system, all events are synchronized to a global clock so that they can be unambiguously identified as being either concurrent or sequential. Synchronization is an absolute must in a deterministic parallel system; otherwise events quickly get out of step and inferring temporal correlations becomes nearly impossible. Note that ‘synchronous processing’ is not synonymous with ‘synchronous messaging’. A truly parallel system must use asynchronous messaging; otherwise the timing of events becomes chaotic and unpredictable. The human brain is a temporal signal-processing network that needs consistent temporal markers to establish correlations. While single-threaded programs provide adequate temporal (sequential) cues, concurrent threads are non-deterministic, so concurrent temporal cues are hard to establish, which leads to confusion. See also Parallel Programming: Why the Future Is Synchronous for more on this subject.
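
As a sketch of what such a deterministic model might look like (the Cell class and its behavior are my own illustration, not the article's): every element steps on a shared virtual clock, and a signal sent during tick N is buffered and delivered at tick N+1, so the order of events is identical on every run.

    class Cell:
        # Hypothetical element of a synchronous reactive network.
        def __init__(self, name):
            self.name = name
            self.inbox, self.next_inbox = [], []
            self.targets = []

        def step(self, tick):
            for signal in self.inbox:
                print(f"tick {tick}: {self.name} received {signal!r}")
                for t in self.targets:           # asynchronous messaging:
                    t.next_inbox.append(signal)  # delivered on the NEXT tick

    def run(cells, ticks):
        for tick in range(ticks):
            for c in cells:                      # phase 1: everyone processes
                c.step(tick)
            for c in cells:                      # phase 2: advance the clock
                c.inbox, c.next_inbox = c.next_inbox, []

    a, b = Cell("A"), Cell("B")
    a.targets, b.targets = [b], [a]              # A and B ping-pong a signal
    a.inbox = ["ping"]
    run([a, b], ticks=4)                         # same output on every run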

It is beneficial to view a computer program as a communication system in which elementary processes send signals to one another. In this light, immediately after execution, an operation (predecessor) in an algorithm sends a signal to the next operation (successor) in the sequence, meaning essentially, ‘I’m done; now it’s your turn’. Whereas in an algorithmic program every element or operation is assumed to have only one predecessor and one successor, in a parallel program there is no limit to the number of predecessors or successors an element can have. This is the reason that sequential order must be explicitly specified in a parallel program. Conversely, concurrency is implicit: no special construct is needed to specify that two or more elements are to be executed simultaneously.
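
A minimal sketch of this signal-based view (the graph and element names below are hypothetical): each element fires an ‘I’m done’ signal to its successors, an element runs once all of its predecessors have signaled, and every element whose inputs are ready executes in the same cycle, so sequence is explicit in the wiring while concurrency is implicit.

    graph = {                 # element -> its successors (explicit sequencing)
        "read_a": ["add"],
        "read_b": ["add"],
        "add":    ["square", "log_it"],  # fan-in of 2, fan-out of 2
        "square": [],
        "log_it": [],
    }
    preds = {n: 0 for n in graph}
    for succs in graph.values():
        for s in succs:
            preds[s] += 1     # count predecessors per element

    signaled = {n: 0 for n in graph}
    ready = [n for n in graph if preds[n] == 0]  # sources run first
    cycle = 0
    while ready:
        print(f"cycle {cycle}: run concurrently -> {sorted(ready)}")
        nxt = []
        for n in ready:
            for s in graph[n]:        # "I'm done; now it's your turn"
                signaled[s] += 1
                if signaled[s] == preds[s]:
                    nxt.append(s)
        ready, cycle = nxt, cycle + 1
    # cycle 0: ['read_a', 'read_b']; cycle 1: ['add']; cycle 2: ['log_it', 'square']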

Composition vs. Decomposition

The common wisdom in the industry is that the best way to write a parallel program is to break an existing sequential program down into multiple threads that can be assigned to separate cores in a multicore processor. Decomposition, it seems, is what the experts are recommending as the correct method of parallelization. However, this raises a couple of questions. If composition is the proper method of constructing sequential programs, why should parallel programs be any different? In other words, if we use sequential elements or components to build a sequential program, why should we not use parallel elements or components to build parallel programs? If the compositional approach to software construction is known to work for sequential programs, it follows that the same approach should be used for parallel software construction. It turns out that signal-based parallel software lends itself well to the use of plug-compatible components that can snap together automatically. Composition is natural and easy. Decomposition is unnatural and hard.

Conclusion

In conclusion, the reason that parallel programming is hard is that it is not what it is claimed to be. As soon as parallel applications become implicitly parallel, synchronous and compositional in nature, parallel programming will be at least an order of magnitude easier than sequential programming. Debugging is a breeze in a deterministic environment, cutting development time considerably.
