Archive for the ‘General trend’ Category

Singularity: Kurzweil on 2045, When Humans, Machines Merge – TIME

Thursday, Feb. 10, 2011

2045: The Year Man Becomes Immortal

 

On Feb. 15, 1965, a diffident but self-possessed high school student named Raymond Kurzweil appeared as a guest on a game show called I’ve Got a Secret. He was introduced by the host, Steve Allen, then he played a short musical composition on a piano. The idea was that Kurzweil was hiding an unusual fact and the panelists — they included a comedian and a former Miss America — had to guess what it was.

On the show (see the clip on YouTube), the beauty queen did a good job of grilling Kurzweil, but the comedian got the win: the music was composed by a computer. Kurzweil got $200.

Kurzweil then demonstrated the computer, which he built himself — a desk-size affair with loudly clacking relays, hooked up to a typewriter. The panelists were pretty blasé about it; they were more impressed by Kurzweil’s age than by anything he’d actually done. They were ready to move on to Mrs. Chester Loney of Rough and Ready, Calif., whose secret was that she’d been President Lyndon Johnson’s first-grade teacher.

But Kurzweil would spend much of the rest of his career working out what his demonstration meant. Creating a work of art is one of those activities we reserve for humans and humans only. It’s an act of self-expression; you’re not supposed to be able to do it if you don’t have a self. To see creativity, the exclusive domain of humans, usurped by a computer built by a 17-year-old is to watch a line blur that cannot be unblurred, the line between organic intelligence and artificial intelligence.

That was Kurzweil’s real secret, and back in 1965 nobody guessed it. Maybe not even him, not yet. But now, 46 years later, Kurzweil believes that we’re approaching a moment when computers will become intelligent, and not just intelligent but more intelligent than humans. When that happens, humanity — our bodies, our minds, our civilization — will be completely and irreversibly transformed. He believes that this moment is not only inevitable but imminent. According to his calculations, the end of human civilization as we know it is about 35 years away.

Computers are getting faster. Everybody knows that. Also, computers are getting faster faster — that is, the rate at which they’re getting faster is increasing.

True? True.

So if computers are getting so much faster, so incredibly fast, there might conceivably come a moment when they are capable of something comparable to human intelligence. Artificial intelligence. All that horsepower could be put in the service of emulating whatever it is our brains are doing when they create consciousness — not just doing arithmetic very quickly or composing piano music but also driving cars, writing books, making ethical decisions, appreciating fancy paintings, making witty observations at cocktail parties.

If you can swallow that idea, and Kurzweil and a lot of other very smart people can, then all bets are off. From that point on, there’s no reason to think computers would stop getting more powerful. They would keep on developing until they were far more intelligent than we are. Their rate of development would also continue to increase, because they would take over their own development from their slower-thinking human creators. Imagine a computer scientist that was itself a super-intelligent computer. It would work incredibly quickly. It could draw on huge amounts of data effortlessly. It wouldn’t even take breaks to play Farmville.

Probably. It’s impossible to predict the behavior of these smarter-than-human intelligences with which (with whom?) we might one day share the planet, because if you could, you’d be as smart as they would be. But there are a lot of theories about it. Maybe we’ll merge with them to become super-intelligent cyborgs, using computers to extend our intellectual abilities the same way that cars and planes extend our physical abilities. Maybe the artificial intelligences will help us treat the effects of old age and prolong our life spans indefinitely. Maybe we’ll scan our consciousnesses into computers and live inside them as software, forever, virtually. Maybe the computers will turn on humanity and annihilate us. The one thing all these theories have in common is the transformation of our species into something that is no longer recognizable as such to humanity circa 2011. This transformation has a name: the Singularity.

The difficult thing to keep sight of when you’re talking about the Singularity is that even though it sounds like science fiction, it isn’t, no more than a weather forecast is science fiction. It’s not a fringe idea; it’s a serious hypothesis about the future of life on Earth. There’s an intellectual gag reflex that kicks in anytime you try to swallow an idea that involves super-intelligent immortal cyborgs, but suppress it if you can, because while the Singularity appears to be, on the face of it, preposterous, it’s an idea that rewards sober, careful evaluation.

People are spending a lot of money trying to understand it. The three-year-old Singularity University, which offers inter-disciplinary courses of study for graduate students and executives, is hosted by NASA. Google was a founding sponsor; its CEO and co-founder Larry Page spoke there last year. People are attracted to the Singularity for the shock value, like an intellectual freak show, but they stay because there’s more to it than they expected. And of course, in the event that it turns out to be real, it will be the most important thing to happen to human beings since the invention of language.

The Singularity isn’t a wholly new idea, just newish. In 1965 the British mathematician I.J. Good described something he called an “intelligence explosion”:

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion,” and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.

The word singularity is borrowed from astrophysics: it refers to a point in space-time — for example, inside a black hole — at which the rules of ordinary physics do not apply. In the 1980s the science-fiction novelist Vernor Vinge attached it to Good’s intelligence-explosion scenario. At a NASA symposium in 1993, Vinge announced that “within 30 years, we will have the technological means to create super-human intelligence. Shortly after, the human era will be ended.”

By that time Kurzweil was thinking about the Singularity too. He’d been busy since his appearance on I’ve Got a Secret. He’d made several fortunes as an engineer and inventor; he founded and then sold his first software company while he was still at MIT. He went on to build the first print-to-speech reading machine for the blind — Stevie Wonder was customer No. 1 — and made innovations in a range of technical fields, including music synthesizers and speech recognition. He holds 39 patents and 19 honorary doctorates. In 1999 President Bill Clinton awarded him the National Medal of Technology.

But Kurzweil was also pursuing a parallel career as a futurist: he has been publishing his thoughts about the future of human and machine-kind for 20 years, most recently in The Singularity Is Near, which was a best seller when it came out in 2005. A documentary by the same name, starring Kurzweil, Tony Robbins and Alan Dershowitz, among others, was released in January. (Kurzweil is actually the subject of two current documentaries. The other one, less authorized but more informative, is called The Transcendent Man.) Bill Gates has called him “the best person I know at predicting the future of artificial intelligence.”

In real life, the transcendent man is an unimposing figure who could pass for Woody Allen’s even nerdier younger brother. Kurzweil grew up in Queens, N.Y., and you can still hear a trace of it in his voice. Now 62, he speaks with the soft, almost hypnotic calm of someone who gives 60 public lectures a year. As the Singularity’s most visible champion, he has heard all the questions and faced down the incredulity many, many times before. He’s good-natured about it. His manner is almost apologetic: I wish I could bring you less exciting news of the future, but I’ve looked at the numbers, and this is what they say, so what else can I tell you?

Kurzweil’s interest in humanity’s cyborganic destiny began about 1980 largely as a practical matter. He needed ways to measure and track the pace of technological progress. Even great inventions can fail if they arrive before their time, and he wanted to make sure that when he released his, the timing was right. “Even at that time, technology was moving quickly enough that the world was going to be different by the time you finished a project,” he says. “So it’s like skeet shooting — you can’t shoot at the target.” He knew about Moore’s law, of course, which states that the number of transistors you can put on a microchip doubles about every two years. It’s a surprisingly reliable rule of thumb. Kurzweil tried plotting a slightly different curve: the change over time in the amount of computing power, measured in MIPS (millions of instructions per second), that you can buy for $1,000.

As it turned out, Kurzweil’s numbers looked a lot like Moore’s. They doubled every couple of years. Drawn as graphs, they both made exponential curves, with their value increasing by multiples of two instead of by regular increments in a straight line. The curves held eerily steady, even when Kurzweil extended his backward through the decades of pretransistor computing technologies like relays and vacuum tubes, all the way back to 1900.

Kurzweil then ran the numbers on a whole bunch of other key technological indexes — the falling cost of manufacturing transistors, the rising clock speed of microprocessors, the plummeting price of dynamic RAM. He looked even further afield at trends in biotech and beyond — the falling cost of sequencing DNA and of wireless data service and the rising numbers of Internet hosts and nanotechnology patents. He kept finding the same thing: exponentially accelerating progress. “It’s really amazing how smooth these trajectories are,” he says. “Through thick and thin, war and peace, boom times and recessions.” Kurzweil calls it the law of accelerating returns: technological progress happens exponentially, not linearly.
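
In effect, Kurzweil is fitting an exponential to a price-performance series and reading off its doubling time. Here is a minimal sketch of that exercise in Python; the MIPS-per-$1,000 figures in it are invented placeholders, not Kurzweil’s data, and exist only to show why an exponential trend looks like a straight line on a log plot.

```python
# A minimal sketch of the kind of curve-fitting the article describes:
# take price-performance samples (MIPS per $1,000) over time, fit a line
# to log2(value) vs. year, and read off the implied doubling time.
# The sample numbers below are illustrative placeholders, not Kurzweil's data.
import numpy as np

years = np.array([1980, 1990, 2000, 2010])
mips_per_1000_dollars = np.array([0.01, 1.0, 100.0, 10000.0])  # hypothetical values

slope, intercept = np.polyfit(years, np.log2(mips_per_1000_dollars), 1)
doubling_time_years = 1.0 / slope

print(f"Implied doubling time: {doubling_time_years:.2f} years")
# On a log scale an exponential trend plots as a straight line, which is why
# the 'eerily steady' curves the article mentions are easy to eyeball.
```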

Then he extended the curves into the future, and the growth they predicted was so phenomenal, it created cognitive resistance in his mind. Exponential curves start slowly, then rocket skyward toward infinity. According to Kurzweil, we’re not evolved to think in terms of exponential growth. “It’s not intuitive. Our built-in predictors are linear. When we’re trying to avoid an animal, we pick the linear prediction of where it’s going to be in 20 seconds and what to do about it. That is actually hardwired in our brains.”

Here’s what the exponential curves told him. We will successfully reverse-engineer the human brain by the mid-2020s. By the end of that decade, computers will be capable of human-level intelligence. Kurzweil puts the date of the Singularity — never say he’s not conservative — at 2045. In that year, he estimates, given the vast increases in computing power and the vast reductions in the cost of same, the quantity of artificial intelligence created will be about a billion times the sum of all the human intelligence that exists today.
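
The arithmetic behind numbers like these is just compounding. A small sketch, with doubling times chosen purely for illustration (they are not figures taken from Kurzweil’s books), shows how sensitive the 2045 projection is to that one assumption:

```python
# A back-of-the-envelope sketch of why exponential extrapolation produces such
# startling numbers by 2045. The doubling times below are assumptions for
# illustration, not figures taken from Kurzweil's books.
def growth_factor(start_year, end_year, doubling_time_years):
    doublings = (end_year - start_year) / doubling_time_years
    return 2 ** doublings

for doubling_time in (2.0, 1.5, 1.0):
    factor = growth_factor(2011, 2045, doubling_time)
    print(f"doubling every {doubling_time} years -> x{factor:,.0f} by 2045")
# Doubling every year for 34 years already gives a factor of roughly 17 billion,
# which is why small changes in the assumed doubling time swing the result
# from 'large' to 'incomprehensible'.
```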

The Singularity isn’t just an idea. It attracts people, and those people feel a bond with one another. Together they form a movement, a subculture; Kurzweil calls it a community. Once you decide to take the Singularity seriously, you will find that you have become part of a small but intense and globally distributed hive of like-minded thinkers known as Singularitarians.

Not all of them are Kurzweilians, not by a long chalk. There’s room inside Singularitarianism for considerable diversity of opinion about what the Singularity means and when and how it will or won’t happen. But Singularitarians share a worldview. They think in terms of deep time, they believe in the power of technology to shape history, they have little interest in the conventional wisdom about anything, and they cannot believe you’re walking around living your life and watching TV as if the artificial-intelligence revolution were not about to erupt and change absolutely everything. They have no fear of sounding ridiculous; your ordinary citizen’s distaste for apparently absurd ideas is just an example of irrational bias, and Singularitarians have no truck with irrationality. When you enter their mind-space you pass through an extreme gradient in worldview, a hard ontological shear that separates Singularitarians from the common run of humanity. Expect turbulence.

In addition to the Singularity University, which Kurzweil co-founded, there’s also a Singularity Institute for Artificial Intelligence, based in San Francisco. It counts among its advisers Peter Thiel, a former CEO of PayPal and an early investor in Facebook. The institute holds an annual conference called the Singularity Summit. (Kurzweil co-founded that too.) Because of the highly interdisciplinary nature of Singularity theory, it attracts a diverse crowd. Artificial intelligence is the main event, but the sessions also cover the galloping progress of, among other fields, genetics and nanotechnology.

At the 2010 summit, which took place in August in San Francisco, there were not just computer scientists but also psychologists, neuroscientists, nanotechnologists, molecular biologists, a specialist in wearable computers, a professor of emergency medicine, an expert on cognition in gray parrots and the professional magician and debunker James “the Amazing” Randi. The atmosphere was a curious blend of Davos and UFO convention. Proponents of seasteading — the practice, so far mostly theoretical, of establishing politically autonomous floating communities in international waters — handed out pamphlets. An android chatted with visitors in one corner.

After artificial intelligence, the most talked-about topic at the 2010 summit was life extension. Biological boundaries that most people think of as permanent and inevitable Singularitarians see as merely intractable but solvable problems. Death is one of them. Old age is an illness like any other, and what do you do with illnesses? You cure them. Like a lot of Singularitarian ideas, it sounds funny at first, but the closer you get to it, the less funny it seems. It’s not just wishful thinking; there’s actual science going on here.

For example, it’s well known that one cause of the physical degeneration associated with aging involves telomeres, which are segments of DNA found at the ends of chromosomes. Every time a cell divides, its telomeres get shorter, and once a cell runs out of telomeres, it can’t reproduce anymore and dies. But there’s an enzyme called telomerase that reverses this process; it’s one of the reasons cancer cells live so long. So why not treat regular non-cancerous cells with telomerase? In November, researchers at Harvard Medical School announced in Nature that they had done just that. They administered telomerase to a group of mice suffering from age-related degeneration. The damage went away. The mice didn’t just get better; they got younger.

Aubrey de Grey is one of the world’s best-known life-extension researchers and a Singularity Summit veteran. A British biologist with a doctorate from Cambridge and a famously formidable beard, de Grey runs a foundation called SENS, or Strategies for Engineered Negligible Senescence. He views aging as a process of accumulating damage, which he has divided into seven categories, each of which he hopes to one day address using regenerative medicine. “People have begun to realize that the view of aging being something immutable — rather like the heat death of the universe — is simply ridiculous,” he says. “It’s just childish. The human body is a machine that has a bunch of functions, and it accumulates various types of damage as a side effect of the normal function of the machine. Therefore in principle that damage can be repaired periodically. This is why we have vintage cars. It’s really just a matter of paying attention. The whole of medicine consists of messing about with what looks pretty inevitable until you figure out how to make it not inevitable.”

Kurzweil takes life extension seriously too. His father, with whom he was very close, died of heart disease at 58. Kurzweil inherited his father’s genetic predisposition; he also developed Type 2 diabetes when he was 35. Working with Terry Grossman, a doctor who specializes in longevity medicine, Kurzweil has published two books on his own approach to life extension, which involves taking up to 200 pills and supplements a day. He says his diabetes is essentially cured, and although he’s 62 years old from a chronological perspective, he estimates that his biological age is about 20 years younger.

But his goal differs slightly from de Grey’s. For Kurzweil, it’s not so much about staying healthy as long as possible; it’s about staying alive until the Singularity. It’s an attempted handoff. Once hyper-intelligent artificial intelligences arise, armed with advanced nanotechnology, they’ll really be able to wrestle with the vastly complex, systemic problems associated with aging in humans. Alternatively, by then we’ll be able to transfer our minds to sturdier vessels such as computers and robots. He and many other Singularitarians take seriously the proposition that many people who are alive today will wind up being functionally immortal.

It’s an idea that’s radical and ancient at the same time. In “Sailing to Byzantium,” W.B. Yeats describes mankind’s fleshly predicament as a soul fastened to a dying animal. Why not unfasten it and fasten it to an immortal robot instead? But Kurzweil finds that life extension produces even more resistance in his audiences than his exponential growth curves. “There are people who can accept computers being more intelligent than people,” he says. “But the idea of significant changes to human longevity — that seems to be particularly controversial. People invested a lot of personal effort into certain philosophies dealing with the issue of life and death. I mean, that’s the major reason we have religion.”

Of course, a lot of people think the Singularity is nonsense — a fantasy, wishful thinking, a Silicon Valley version of the Evangelical story of the Rapture, spun by a man who earns his living making outrageous claims and backing them up with pseudoscience. Most of the serious critics focus on the question of whether a computer can truly become intelligent.

The entire field of artificial intelligence, or AI, is devoted to this question. But AI doesn’t currently produce the kind of intelligence we associate with humans or even with talking computers in movies — HAL or C3PO or Data. Actual AIs tend to be able to master only one highly specific domain, like interpreting search queries or playing chess. They operate within an extremely specific frame of reference. They don’t make conversation at parties. They’re intelligent, but only if you define intelligence in a vanishingly narrow way. The kind of intelligence Kurzweil is talking about, which is called strong AI or artificial general intelligence, doesn’t exist yet.

Why not? Obviously we’re still waiting on all that exponentially growing computing power to get here. But it’s also possible that there are things going on in our brains that can’t be duplicated electronically no matter how many MIPS you throw at them. The neurochemical architecture that generates the ephemeral chaos we know as human consciousness may just be too complex and analog to replicate in digital silicon. The biologist Dennis Bray was one of the few voices of dissent at last summer’s Singularity Summit. “Although biological components act in ways that are comparable to those in electronic circuits,” he argued, in a talk titled “What Cells Can Do That Robots Can’t,” “they are set apart by the huge number of different states they can adopt. Multiple biochemical processes create chemical modifications of protein molecules, further diversified by association with distinct structures at defined locations of a cell. The resulting combinatorial explosion of states endows living systems with an almost infinite capacity to store information regarding past and present conditions and a unique capacity to prepare for future events.” That makes the ones and zeros that computers trade in look pretty crude.

Underlying the practical challenges are a host of philosophical ones. Suppose we did create a computer that talked and acted in a way that was indistinguishable from a human being — in other words, a computer that could pass the Turing test. (Very loosely speaking, such a computer would be able to pass as human in a blind test.) Would that mean that the computer was sentient, the way a human being is? Or would it just be an extremely sophisticated but essentially mechanical automaton without the mysterious spark of consciousness — a machine with no ghost in it? And how would we know?

Even if you grant that the Singularity is plausible, you’re still staring at a thicket of unanswerable questions. If I can scan my consciousness into a computer, am I still me? What are the geopolitics and the socioeconomics of the Singularity? Who decides who gets to be immortal? Who draws the line between sentient and nonsentient? And as we approach immortality, omniscience and omnipotence, will our lives still have meaning? By beating death, will we have lost our essential humanity?

Kurzweil admits that there’s a fundamental level of risk associated with the Singularity that’s impossible to refine away, simply because we don’t know what a highly advanced artificial intelligence, finding itself a newly created inhabitant of the planet Earth, would choose to do. It might not feel like competing with us for resources. One of the goals of the Singularity Institute is to make sure not just that artificial intelligence develops but also that the AI is friendly. You don’t have to be a super-intelligent cyborg to understand that introducing a superior life-form into your own biosphere is a basic Darwinian error.

If the Singularity is coming, these questions are going to get answers whether we like it or not, and Kurzweil thinks that trying to put off the Singularity by banning technologies is not only impossible but also unethical and probably dangerous. “It would require a totalitarian system to implement such a ban,” he says. “It wouldn’t work. It would just drive these technologies underground, where the responsible scientists who we’re counting on to create the defenses would not have easy access to the tools.”

Kurzweil is an almost inhumanly patient and thorough debater. He relishes it. He’s tireless in hunting down his critics so that he can respond to them, point by point, carefully and in detail.

Take the question of whether computers can replicate the biochemical complexity of an organic brain. Kurzweil yields no ground there whatsoever. He does not see any fundamental difference between flesh and silicon that would prevent the latter from thinking. He defies biologists to come up with a neurological mechanism that could not be modeled or at least matched in power and flexibility by software running on a computer. He refuses to fall on his knees before the mystery of the human brain. “Generally speaking,” he says, “the core of a disagreement I’ll have with a critic is, they’ll say, Oh, Kurzweil is underestimating the complexity of reverse-engineering of the human brain or the complexity of biology. But I don’t believe I’m underestimating the challenge. I think they’re underestimating the power of exponential growth.”

This position doesn’t make Kurzweil an outlier, at least among Singularitarians. Plenty of people make more-extreme predictions. Since 2005 the neuroscientist Henry Markram has been running an ambitious initiative at the Brain Mind Institute of the École Polytechnique Fédérale de Lausanne in Switzerland. It’s called the Blue Brain project, and it’s an attempt to create a neuron-by-neuron simulation of a mammalian brain, using IBM’s Blue Gene super-computer. So far, Markram’s team has managed to simulate one neocortical column from a rat’s brain, which contains about 10,000 neurons. Markram has said that he hopes to have a complete virtual human brain up and running in 10 years. (Even Kurzweil sniffs at this. If it worked, he points out, you’d then have to educate the brain, and who knows how long that would take?)

By definition, the future beyond the Singularity is not knowable by our linear, chemical, animal brains, but Kurzweil is teeming with theories about it. He positively flogs himself to think bigger and bigger; you can see him kicking against the confines of his aging organic hardware. “When people look at the implications of ongoing exponential growth, it gets harder and harder to accept,” he says. “So you get people who really accept, yes, things are progressing exponentially, but they fall off the horse at some point because the implications are too fantastic. I’ve tried to push myself to really look.”

In Kurzweil’s future, biotechnology and nanotechnology give us the power to manipulate our bodies and the world around us at will, at the molecular level. Progress hyperaccelerates, and every hour brings a century’s worth of scientific breakthroughs. We ditch Darwin and take charge of our own evolution. The human genome becomes just so much code to be bug-tested and optimized and, if necessary, rewritten. Indefinite life extension becomes a reality; people die only if they choose to. Death loses its sting once and for all. Kurzweil hopes to bring his dead father back to life.

We can scan our consciousnesses into computers and enter a virtual existence or swap our bodies for immortal robots and light out for the edges of space as intergalactic godlings. Within a matter of centuries, human intelligence will have re-engineered and saturated all the matter in the universe. This is, Kurzweil believes, our destiny as a species.

Or it isn’t. When the big questions get answered, a lot of the action will happen where no one can see it, deep inside the black silicon brains of the computers, which will either bloom bit by bit into conscious minds or just continue in ever more brilliant and powerful iterations of nonsentience.

But as for the minor questions, they’re already being decided all around us and in plain sight. The more you read about the Singularity, the more you start to see it peeking out at you, coyly, from unexpected directions. Five years ago we didn’t have 600 million humans carrying out their social lives over a single electronic network. Now we have Facebook. Five years ago you didn’t see people double-checking what they were saying and where they were going, even as they were saying it and going there, using handheld network-enabled digital prosthetics. Now we have iPhones. Is it an unimaginable step to take the iPhones out of our hands and put them into our skulls?

Already 30,000 patients with Parkinson’s disease have neural implants. Google is experimenting with computers that can drive cars. There are more than 2,000 robots fighting in Afghanistan alongside the human troops. This month a game show will once again figure in the history of artificial intelligence, but this time the computer will be the guest: an IBM super-computer nicknamed Watson will compete on Jeopardy! Watson runs on 90 servers and takes up an entire room, and in a practice match in January it finished ahead of two former champions, Ken Jennings and Brad Rutter. It got every question it answered right, but much more important, it didn’t need help understanding the questions (or, strictly speaking, the answers), which were phrased in plain English. Watson isn’t strong AI, but if strong AI happens, it will arrive gradually, bit by bit, and this will have been one of the bits.

A hundred years from now, Kurzweil and de Grey and the others could be the 22nd century’s answer to the Founding Fathers — except unlike the Founding Fathers, they’ll still be alive to get credit — or their ideas could look as hilariously retro and dated as Disney’s Tomorrowland. Nothing gets old as fast as the future.

But even if they’re dead wrong about the future, they’re right about the present. They’re taking the long view and looking at the big picture. You may reject every specific article of the Singularitarian charter, but you should admire Kurzweil for taking the future seriously. Singularitarianism is grounded in the idea that change is real and that humanity is in charge of its own fate and that history might not be as simple as one damn thing after another. Kurzweil likes to point out that your average cell phone is about a millionth the size of, a millionth the price of and a thousand times more powerful than the computer he had at MIT 40 years ago. Flip that forward 40 years and what does the world look like? If you really want to figure that out, you have to think very, very far outside the box. Or maybe you have to think further inside it than anyone ever has before.

10 semiconductor themes for 2011

SAN JOSE, Calif. – What’s in store for 2011? Gus Richard, an analyst with Piper Jaffray, has listed his 10 predictions, or themes, for ICs in 2011 (and beyond):

1. The Fourth Wave of Computing

”In our view, the era of the mobile Internet, thin client or ultra mobile computer is upon us. It is the 4th wave of computing. In our view, in the 4th wave, the critical capability is not the processor capability, but rather connectivity or bandwidth as well as very low power. The iPad, iPhone, and Android operating system are all early winners in this new era, leading the 4th wave.”

2. ASIC to PLD Conversion

”ASICs and ASSPs are being replaced by programmable logic devices, PLDs. With each successive node the cost of a design goes up. The cost of a 45-nm SOC chip design is estimated to be roughly $80 million and a 32-nm SOC is $130 million. We estimate that the addressable market of these chips needs to be roughly $400 million and $650 million to make a reasonable return assuming a 50 percent gross margin.”
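
Those figures imply a simple break-even rule: if the design cost can only absorb some fraction of gross profit, the required addressable market follows directly. The sketch below reproduces the $400 million and $650 million numbers under an assumed cap of roughly 40 percent of gross profit; that cap is inferred from the quoted figures, not stated in Richard’s note.

```python
# Rough economics behind the quoted figures: at a 50% gross margin, a market of
# $400M yields $200M of gross profit, so an $80M design is 40% of gross profit.
# The 40% cap is an inferred assumption used only to reproduce the arithmetic.
def required_market(design_cost_m, gross_margin=0.50, max_design_share_of_gross=0.40):
    return design_cost_m / (gross_margin * max_design_share_of_gross)

for node, design_cost in (("45 nm", 80), ("32 nm", 130)):
    print(f"{node}: ${design_cost}M design -> ~${required_market(design_cost):,.0f}M addressable market")
# 45 nm: $80M design -> ~$400M addressable market
# 32 nm: $130M design -> ~$650M addressable market
```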

3. The Super Cycle and Increased Capital Intensity

”Over the last 10 to 15 years more and more companies have gone fabless or fab lite as fewer and fewer companies have been able to afford the cost of a leading edge 300-mm manufacturing plant. The question is who is going to pay for the increasing capital intensity. Clearly the dominant manufacturers are going to keep spending, including Intel, Samsung, Toshiba, TSMC and Global Foundries.

While we haven’t heard of any discussions, we think it is only a matter of time before companies like TSMC and UMC start asking customers to share the burden. We think this will be the source of capital that takes capital intensity back to a cycle high in the mid 20 percent range.”

4. Lithography’s Increasing Share of Wallet

”Lithography is increasing as a percentage of fab spending. The last generation ASML lithography system, the XT 193-nm immersion, cost 30 million euros; the current generation, NXT, cost 40 million euros. The EUV systems are also more expensive than 193-nm immersion tools. The pre-production tools shipping today cost 42 million euros and production tools due to ship in late 2011 will cost 65 to 70 million euros.”

5. The Low Down on the Slow Down of Moore’s Law

”There are only three ways to increase the output of a fab. The first is to scale (shrink) the size of the transistor and other structures on a chip (Moore’s Law), the second is to move to a larger diameter wafer and the third is to increase the number of wafers processed. For all but Intel and Samsung, Moore’s Law is slowing and the transition to next generation process technology is grinding to a halt.

6. Increasing Levels of Innovation and Killer Applications

”There is now increasing visibility into new drivers of semiconductor demand. It has been a long time since the semiconductor industry benefited from a killer app that could move the needle in terms of growth. We think there are several demand drivers now occurring. These include the 4th wave of computing, or the ultra mobile era dominated by smart phones and tablets; the proliferation of internet connectivity to an increasing array of devices (ubiquitous connectivity, or ‘Ubiquinet,’ since mere ‘internet’ is no longer appropriate); the need to upgrade the communication infrastructure to support an increasing plethora of devices; the increased use of video over IP; and the need to support the mobile internet. We believe these trends are driving an upgrade cycle for electronics as well as increasing semiconductor content in existing devices and a crop of new devices.”

7. Increased Investment in Communication Infrastructure

”We are seeing a shift to data from voice in mobile phones, increased delivery of video over the internet (IP) and the emergence of cloud computing. This is driving a need for infrastructure upgrades.”

8. Home Networking

”Bandwidth to the home is increasing at a rate of 40 percent per year. Fiber-to-the-home (FTTH) is a significant driver of this growth in developed economies and video over IP is driving the growth of FTTH. The need to support HD video on an increasing number of devices and more connected devices in the home are driving home networking. The solutions will be both WiFi and networks over existing wires, in our view.”

9. LED Lighting

”The growth of LED lighting is being driven by increasing global regulation banning the incandescent bulb, which should accelerate over the next 2 years. Currently the EU prohibits the sale of 75W and 100W bulbs and moves to an outright ban on all incandescent bulbs by Dec 2012. The US begins a similar tiered restriction in 2012 with the 100W bulb, with all bulbs banned by 2014. While initially much of the incandescent bulb replacement will be CCFL, we think over time LED will be the solution of choice as they provide a better quality light and the cost comes down as volumes increase. We estimate that the number of general purpose light bulbs has been doubling every year to an estimated 200 million this year and 400 million units next year.”

10. The Analog Bifurcation and Over Investment

”During the lost decade of the 00s, the analog semiconductor market saw renewed interest and likely too much investment. While analog is rich with niche market opportunities, we think it will be harder for companies in the analog market going forward. In analog like other chip markets, gross margin is inversely proportional to volume. That is to say the higher the volume the lower the gross margin and the lower the volume the higher the gross margin.

Moreover, we would expect TI to use its new 300mm fab to drive market share in very high-volume standard analog markets as well. While not all analog companies will be impacted, those whose products and business models overlap with TI will come under pressure.”

Experiments offer tantalizing clues as to why matter prevails in the universe

A large collaboration of physicists working at the Fermilab Tevatron particle collider has discovered evidence of an explanation for the prevalence of matter over antimatter in the universe. They found that colliding protons in their experiment produced short-lived B meson particles that almost immediately broke down into debris that included slightly more matter than antimatter. The two types of matter annihilate each other, so most of the material coming from these sorts of decays would disappear, leaving an excess of regular matter behind.

This sort of matter/antimatter asymmetry accounts for the fact that just about all the material in the universe is made of the normal matter we’re familiar with. The results are being published this week in papers appearing simultaneously in the APS journals Physical Review Letters and Physical Review D.

Physicists have long known about processes described by current theory that would produce tiny excesses of matter, but the amounts the theories predict are far smaller than necessary to create the amount of matter we observe. The Tevatron experiments suggest that we are on the verge of accounting for the quantities of matter that exist today. But the truly exciting implication is that the experiment implies that there is new physics, beyond the widely accepted Standard Model, that must be at work. If that’s the case, major scientific developments lie ahead.

The results emerge from a complicated and challenging analysis, and have yet to be confirmed by other experiments. If the matter/antimatter imbalance holds up under the scrutiny of researchers at the Large Hadron Collider in Europe and competing research groups at Fermilab, it will likely stand as one of the most significant milestones in high-energy physics, according to Roy Briere of Carnegie Mellon University in Pittsburgh. Briere summarizes the experimental results and their implications in a Viewpoint article in the current edition of APS Physics.

Bloter.net – A few trends suggested by HP’s acquisition of Palm

HP has announced that it will acquire the dying Palm for $1.2 billion, with the deal expected to close around July of this year. Palm built both the ‘Palm Pre’ smartphone and ‘webOS,’ the smartphone operating system inside it. With this deal, HP secures another operating system in addition to its Unix OS, HP-UX.

According to Gartner data, Palm held 0.7% of the smartphone OS market in 2009. Even 0.7% would not be a problem if it were rising, but that share has kept falling. Palm’s stock also plunged recently when Apple indicated that its fourth-generation iPhone would support the CDMA camp: Palm had been supplying smartphones to CDMA-based U.S. carriers, and now Apple said it would serve that market too. The market Palm did have was about to disappear.

So why, in this situation, did HP spend a full $1.2 billion to buy Palm?

The cracks in the Wintel camp are widening

I see HP’s acquisition of Palm as a prime sign that the operating system and chip vendors and the PC and notebook makers, the perfect match of the PC era, are gradually approaching the time to part ways. The chemistry among Microsoft’s Windows, Intel, and HP (which absorbed Compaq) was beyond imagination: it won the personal PC war against a giant like IBM and showed enough force to capture more than 95% of the worldwide market. Countless manufacturers still compete under the Microsoft and Intel umbrella. That market will not disappear overnight, but its influence will not be what it once was.

HP rose to the No. 1 position in this market by acquiring Compaq. Dell once ruled the market through its distribution innovations, but HP invested heavily in the same kind of innovation and has clearly overtaken Dell.

The problem is that as the mobile device market opens up, subtle cracks are forming in this once-perfect partnership.

Intel has already struck out on its own. Targeting the many mobile device markets, Intel has taken a strong interest not just in chips but in operating systems, Moblin being the prime example. The big news has already broken: Nokia has thrown its weight behind Intel’s move.

For the software platform market for future computing devices, Intel and Nokia created MeeGo, a Linux-based software platform that merges Moblin with Nokia’s Maemo.

Here is part of the material Intel and Nokia released on February 15:

This new platform supports hardware architectures across a broad range of devices, including portable mobile computers, netbooks, tablets, media phones, connected TVs, and in-vehicle infotainment systems.

MeeGo provides the Qt application development environment and is built on the Moblin core operating system and reference user experience. Using Qt, developers can build applications for a variety of devices and platforms and distribute them through Nokia’s Ovi Store and the Intel AppUp Center. MeeGo is hosted by the Linux Foundation and governed according to best practices of the open-source development model. The first version of MeeGo is scheduled for release in the second quarter of 2010, with devices based on it planned for the second half of the year.

Intel and Nokia expect device manufacturers, network operators, semiconductor companies, and software vendors and developers around the world to adopt MeeGo.

From Microsoft’s standpoint, it must now watch its strongest ally, Intel, build its own operating system and try to assemble a new ecosystem, offering chips and an OS together in the mobile market. Microsoft had been using Windows Embedded to erode, piece by piece, the embedded OS market that Linux dominated, but forces threatening that strategy are emerging, and Intel is at the forefront.

Rival Google, for its part, is pulling countless digital device makers into its camp with Chrome OS and Android. In a fast-changing mobile device market, people are questioning whether Microsoft’s model, a huge organization that moves slowly, still makes sense.

In this environment, how long will HP, the world’s largest maker of digital devices, keep looking only to Microsoft and handling nothing but manufacturing? As the netbook and smartphone markets show, Taiwan’s manufacturing-focused companies are steadily expanding by supporting every operating system on offer. They may struggle in advanced markets, but in China and other emerging markets they keep growing on the strength of cheap products. Their manufacturing skills and distribution networks are falling into place one by one; once they add a bit more marketing muscle, little will stand between them and the established manufacturers. How long can HP be satisfied with that kind of market?

Palm’s OS can solve part of HP’s dilemma. Perhaps HP bet $1.2 billion so boldly because Apple has already shown that the model HP wants to pursue can succeed. Apple puts ‘iPhone OS’ on its whole line of connected devices, from the iPod to the iPhone to the iPad. The iPad in particular was both a serious threat and a glimpse of new possibilities for HP, which had long led the tablet market.

HP had planned to introduce the ‘Slate,’ a tablet entirely different from its earlier tablet products. It was to run Microsoft Windows 7 on an Intel Atom processor and to set itself apart from Apple’s iPad with a touchscreen, a webcam, USB, and a memory card slot. Now, however, rumors of a delayed launch keep surfacing; the suspicion is that, having bought Palm, HP wants to swap in Palm’s webOS and replace the device’s brain.

Expecting talk of a breakup with Microsoft after the Palm deal, HP preemptively put up a smokescreen, saying its cooperation with Microsoft will continue, but no one takes that at face value. HP can now pursue its dream of building an ecosystem of application vendors, developers, and telecom carriers around its own mobile operating system. It reads like a declaration that HP cannot forever be content with a cutthroat manufacturing market built on whatever operating system Microsoft hands down.

It also solves, in one stroke, HP’s long-standing problem of having to align every product schedule with Microsoft’s next operating system and having to ship Microsoft’s products whether or not they were competitive. Microsoft, for its part, may be feeling acutely, as its allies defect one by one, that the mobile market demands an entirely different approach; in the smartphone OS market it is already showing a very different posture with Windows Phone 7.

The interesting question is whether Microsoft will, like Apple, port Windows Phone 7 across other mobile devices or keep supplying its existing embedded OSes. If manufacturers leave Microsoft’s OS in large numbers, Microsoft’s grip on the market can only weaken quickly. Internet Explorer’s share of the browser market has already fallen to around 50% worldwide, and it is being proven in one place after another that in the new, web-centric era you no longer have to use Microsoft’s products.

The cloud changes everything

HP’s Palm acquisition cannot be explained without the currently fashionable ‘cloud.’ In the PC era, when networks were underdeveloped, there were real limits to gathering everything in a central place and connecting to it for processing, so the computing power of clients such as PCs and notebooks mattered a great deal.

With wired and wireless broadband networks in place, that is changing. The smartphone is the prime example, and the arrival of the netbook tells the same story. In an era of mobility, the many wireless digital devices consumers will own do not need to carry everything on board. What matters is how efficiently they can connect to the internet and how efficiently they can use the countless services spread across it. Portability and connectivity are what count now.

Google can wield such power while turning out operating systems like Chrome and Android because it can tie devices running those operating systems tightly to its many internet services. Without buying and installing software, users can go online and use what they want when they want it, and they get the same services reliably on whatever device they happen to hold. All the data is stored in data centers on the far side of the cloud.

Apple shows how far a manufacturer can maximize its value, and hold an advantage in a market quite unlike its competitors’, by combining software and services. It has succeeded in building an enormous ecosystem, and the impact is staggering. Application developers can target the users of the more than 85 million devices already in circulation, a number that keeps growing. They do not need branch offices; they can develop an application in their own office and reach users holding Apple devices anywhere in the world.

The power of this kind of ecosystem keeps growing.

HP, which dominates the PC, notebook, and printer markets, knows that without an operating system, building such an ecosystem is impossible from the start. Nokia has thrown Symbian to the open-source community, yet it joined hands with Intel to push beyond smartphones into other mobile device markets, and it is building its own ecosystem around ‘Ovi.’ Apart from using open source, it is following Apple’s path step for step, while also aiming at tablet and netbook markets it had never entered.

Can HP just sit and watch? Small as it is, Palm still has devoted fans and an ecosystem. If HP pours in R&D, a sales organization, marketing staff, and money, it has a real chance. All that data sits in data centers beyond the cloud, and the data center is HP’s home turf. This is not an HP interested only in manufacturing.

HP supplies every solution and piece of equipment the cloud needs, but it is going a step further and investing heavily in services as well.

HP is also preparing eight cloud services aimed at consumers.

Roughly, they are: Snapfish (www.snapfish.com), a digital photo printing, free online photo album, and photo sharing service; MagCloud (http://magcloud.com), which prints user-created magazines at professional quality; BookPrep (http://www.hp.com/idealab/us/en/bookprep.html), an on-demand printing service; MarketSplash (www.marketsplash.com), for buying and selling business ideas, products, and plans; HP Upline, an online data backup service offered only in the U.S.; CloudPrint, which lets you print from anywhere via a mobile phone; a printing partnership with MySpace; and Friendlee, a mobile, location-based social networking service.

The plan is to keep pouring out services aimed at its consumer customers, and the reason is one everyone knows: when such services are bundled in, sales of the matching hardware also rise. Anyone can build a similar device, but building a service is not easy, so HP can naturally extend its competitiveness in manufacturing into services. If the services fail, differentiating itself from the latecomers on manufacturing alone will not be easy.

The Palm acquisition is very likely to be the link and the catalyst that connects this bigger HP picture, letting it go a step beyond the enterprise-oriented ecosystem it already has.

Individuals and companies alike are now handing their important information over to the cloud. SK Telecom recently said the personal cloud computing (PCC) market would reach 18 trillion won worldwide by 2013. The cloud war for consumers has begun: mobile carriers, PC and smartphone makers, and internet service companies are fighting fiercely over that market, and tying devices to cloud services is no longer optional.

Offering carriers everything they want

Anyone can see that by acquiring Palm, HP is also aiming at the smartphone market. The smartphone market is growing 20% a year, and some reports size it at $100 billion (roughly 111 trillion won).

It just seems unlikely that HP will throw its weight into that market by colliding head-on with the ecosystems Microsoft and Google already own. Personally, I think that by acquiring Palm, HP secured a large body of intellectual property along with the people and technology to respond to the smartphone market. Even if it continues to ship Microsoft’s and Google’s smartphone operating systems, having people in-house who truly understand that market matters.

On a somewhat different note, even before the Palm deal HP had been building up software and services aimed at the telecom market, most notably HP Communications and Media Solutions (CMS). HP has provided solution consulting, solutions, and solution management services for carriers’ businesses, and on that basis has actively pursued smart network construction, content management and monetization, innovation in operations and management, and new business models.

Recently it has also acquired vendors of digital device management (MDM) solutions.

Add devices on top of that, and HP becomes a supremely competitive company able to offer carriers everything they want. Even the iPad is designed to run on carriers’ data networks. HP can develop the mobile devices it will pour into the worldwide market in step with the direction the world’s carriers want to go, and on the timetable that the carriers and HP choose.

However strong IBM is at services and however impressive its partners, there can be a difference from HP, which can show up holding the very products carriers want. HP has the consumer market that IBM lacks, and carriers, who cannot help being sensitive to those users’ demands, need a strategic partner that can provide everything. Today a carrier partners with a new device maker and then agonizes separately over the content and services to put on the device; HP can be at the table from the earliest stage of all those deliberations.

This is strictly a B2B business, and it is why I have come to think that Palm could be the link that lets HP target the consumer and enterprise markets at once.

In that light, I wonder whether Korea’s manufacturers, by passing on Palm, let one of their few chances slip away. Even for a company intent simply on differentiating its devices, it was an easy way to acquire experienced software engineers and a pile of intellectual property, and these are not companies short of cash: Samsung Electronics’ profit for the first quarter of this year topped 4 trillion won. Perhaps there were circumstances, or perhaps they believe they remain competitive enough as manufacturers, and that paying development subsidies to a few capable software companies targeting the Android and Bada markets and taking delivery of the software they need will give them good odds.

Korea’s manufacturers now have to do software and build new clouds as well. They could work through it all from scratch, one piece at a time, but with competitors racing ahead, I worry how long they can keep responding ad hoc from within.

Maybe these are far-fetched thoughts to be having about HP’s acquisition of Palm.

EETimes.com – Viewpoint: Is semiconductor industry consolidation inevitable?

The inevitability of semiconductor industry consolidation seems to be widely accepted. Hitachi and Mitsubishi combine to form Renesas; next Renesas and NEC combine; AMD acquires ATI; etc. Articles discussing the consolidation that has already occurred seem to appear everywhere, with dire forecasts for the future.

Yet the data show exactly the opposite. According to my chief market researcher, Merlyn Brunken, the semiconductor industry has been slowly “deconsolidating” since the 1960s. Consider the market share of the No. 1 semiconductor supplier, Intel. Intel’s market share today, about 13 percent, is the same as the No. 1 supplier’s was 35 years ago, when it was Texas Instruments. In between, NEC was No. 1. The names have changed but the market shares haven’t.

What about the combined market shares of the top five semiconductor suppliers? That percentage has been slowly declining since the 1960s and is about 33 percent in the most recently reported data, less than the 35 percent reported in 1972. And the combined market share of the top ten? Also decreasing, albeit very slowly, from 48 percent in 1972 to 46 percent in 2008.
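
The concentration measure being tracked here is simply the combined revenue share of the top N suppliers. A minimal sketch of that calculation, using invented revenue figures rather than the actual rankings:

```python
# Computing the top-N supplier share of a market from vendor revenues.
# The revenue numbers here are invented purely to show the mechanics.
def top_n_share(vendor_revenues, total_market, n):
    top = sorted(vendor_revenues, reverse=True)[:n]
    return sum(top) / total_market

vendor_revenues_busd = [35, 28, 17, 13, 10, 9, 8, 7, 6, 5]  # hypothetical top vendors, $B
total_market_busd = 300                                      # hypothetical total market, $B

for n in (1, 5, 10):
    print(f"top {n} share: {top_n_share(vendor_revenues_busd, total_market_busd, n):.1%}")
```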


We might speculate that consolidation has been focused on companies in the manufacturing-intensive part of the semiconductor industry, like the DRAM industry, especially during the past decade. The data show, however, that the market shares of the top one, three, five and ten DRAM suppliers have all decreased since the turn of the century. Did they consolidate earlier? Not in the last 25 years. In fact, the market shares of the top one, three and five DRAM suppliers are exactly the same today as in 1983 (the earliest year for which I have DRAM data) and the top ten have modestly less market share than in 1983.


So what’s going on? Why isn’t the semiconductor industry consolidating like most maturing industries? The answer seems to lie in the way the semiconductor industry regularly reinvents itself. Turnover of names of the top ten semiconductor companies is high, with more than 50 percent disappearing from the top ten since the 1950s.


Maintaining, or growing, market share in the semiconductor industry requires achieving a leading position in whatever new technology is driving growth. Few companies have been able to do that. To be in the top ten in the 1950s meant you were a leader in germanium or silicon transistors. In the 1960s, sustained growth required leadership in bipolar integrated circuits, eliminating 40 percent of the original top ten in a single decade and 80 percent of the top ten by the 1970s. In that decade, continued growth was driven by MOS memory, bringing three Japanese companies, NEC, Hitachi and Toshiba, into the top ten. In the 1980s and 90s, leaders in microprocessors moved up the ranking, particularly Intel and Motorola. This changed in the 1990s, as SoC leaders like STMicroelectronics and TI gained momentum. In the most recent decade, fabless companies like Qualcomm, and the primary foundry, TSMC, moved into the top ten.

Looking back across the past six decades, TI is the only company that has remained in the top ten throughout semiconductor history. When consolidation does occur, it is mostly among those companies that fail to adapt to the next level of emerging technology.

What about the future of this turnover phenomenon? Will market shares of leading semiconductor companies continue to decrease? That, of course, remains uncertain. But the semiconductor industry does have an attribute that makes it different from other major industries. That characteristic is the phenomenal growth in unit volume. Nearly 15 percent more chips and 50 percent more transistors are shipped each year than the prior year. Compare that 50 percent compound average growth rate to a mere 0.1 percent for automobiles over the last ten years, 1 percent for crude oil, and 9.3 percent for computers.

Growing cumulative unit volume at such an aggressive rate drives costs down the learning curve, reducing the cost per transistor 35 percent per year over the last ten years. And that enables totally new applications to move into the mass consumer market on a regular basis. For instance, a 2001 MP3 audio player cost about the same as today’s portable video player, which has ten times the memory capacity in the same size and power envelope as the 2001 version. If unit volume growth continues (and it shows every sign of doing so), new applications will emerge and semiconductor pervasion will continue to increase. The result? There will be new leaders, as those emerging applications become the growth drivers of the decades ahead.
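
A quick arithmetic check on how that 35 percent annual decline compounds over a decade (the 35 percent figure is the article’s; the rest is simple exponentiation):

```python
# Compounding a 35% per-year reduction in cost per transistor over a decade.
years = 10
annual_reduction = 0.35
remaining_fraction = (1 - annual_reduction) ** years

print(f"cost per transistor after {years} years: {remaining_fraction:.3%} of the starting cost")
print(f"i.e. roughly a {1 / remaining_fraction:,.0f}x reduction")
```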


One of the more amazing aspects of the increasing pervasion of semiconductors into new applications is the significant growth in revenue of existing applications as the cost per unit decreases. Consider the digital camera. Most of the semiconductor content of a digital camera consists of non-volatile FLASH memory and the image sensor. In the early 1990s, solid state image sensors sold for $20-25. Image sensors were a negligible portion of the semiconductor total available market (TAM) until the current decade. During the 1990s the price per sensor fell dramatically from the $20-25 range to about $5. At this price point, unit volume soared, making image sensors more than 3 percent of the semiconductor TAM in the last few years. At the same time, NAND Flash memory prices fell nearly 65 percent per year, propelled by the growing unit demand. The result was a substantial net growth in the market for digital cameras and the semiconductors required to make them.


This model is repeated again and again in the semiconductor industry. Unit volume growth drives reduction in the cost per unit of semiconductors, fueling market growth and new applications. The revenue generated by this growth provides funds for development of new technologies and further cost reductions, enabling additional applications. In the case of the digital camera, the cost reductions are only the beginning. Low cost solid state imaging and storage have enabled a host of high-volume applications of digital photography such as security, medical imaging, automotive applications and many more areas. The total semiconductor TAM grows because of—not in spite of—the 35 percent per year reduction in the unit price of transistors.


When we have an application that is extremely high volume, the cost reduction activity drives changes that enable increased semiconductor consumption globally. The handheld wireless phenomenon is a good example. New subscriptions for wireless communications in India alone in the fourth quarter of 2009 were more than 50 million. That volume has driven down costs for application specific chip designs so that a $15 cell phone is now a reality. That’s a long way from the $3,995 price tag in 1983 for the early commercial handheld cellular phones (Motorola’s DynaTAC 8000X).

What does all this say about our future as an industry? As long as transistor unit volume continues to grow at such high rates, we will continue to enable new applications that grow the total market. On the surface, the makeup of the semiconductor market looks very stable, with the only change in the last 15 years being the five point gain of wireless communications at the expense of computing. But underneath, there have been dramatic changes. The market for desktop personal computers has matured while the market for notebooks and netbooks is growing rapidly. Consumer electronics has remained a relatively constant percent of the semiconductor TAM in the last fifteen years but the growth of video games, flat screen TVs, MP3 players, digital cameras, etc. has dramatically altered the content of that segment.


For the future, new applications will continue to change the character of the semiconductor market as it grows. New companies will enter the semiconductor industry with new architectures, new device structures and new packaging techniques to satisfy emerging needs.

What about consolidation of the semiconductor industry in the years ahead? It’s possible that our industry will mature and broad consolidation will occur. But if history has anything to teach us, it is that new applications will create new challenges. Companies that innovate to meet those challenges will move ahead. And semiconductor companies that continually reinvent themselves will successfully compete with newcomers for the top ten positions.

Walden Rhines is chairman and CEO of EDA vendor Mentor Graphics Corp.

EETimes.com – IBM warns of ‘design rule explosion’ beyond 22-nm

PORTLAND, Ore.—An IBM researcher warned of “design rule explosion” beyond the 22-nanometer node during a paper presentation earlier this month at the International Symposium on Physical Design (ISPD).

Kevin Nowka, senior manager of VLSI Systems at the IBM Austin Research Lab, described the physical design challenges beyond the 22-nm node, emphasizing that sub-wavelength lithography has made silicon image fidelity a serious challenge.

“Simple technology abstractions that have worked for many generations like rectangular shapes, Boolean design rules, and constant parameters will not suffice to enable us to push designs to the ultimate levels of performance,” Nowka said.

Solving “design rule explosion,” according to Nowka, involves balancing area against image fidelity by considering the physical design needs at appropriate levels of abstraction, such as within cells. Nowka gave examples of how restricted design rules could reap a three-fold improvement in variability with a small area penalty.

Nowka envisions physical design rules beyond the 22-nm node that are more technology-aware and which make use of pre-analysis and library optimization for improved density and robustness, he said.

IBM described a solution to “design rule explosion” at the 22 nanometer node illustrated in an SRAM chip design.

Also at ISPD, which was held March 14 to 17 in San Francisco, Mentor Graphics Corp. proposed that hardware/software co-design be used for chips, their packages and their printed circuit (pc) boards. A Mentor executive offered an example in which a 26 percent cost savings was realized by performing such a co-optimization of all three systems simultaneously.

“Thinking outside of the chip” was the key, according to John Park, business development manager for Mentor’s System Design division. By optimizing the interconnect complexity among all three levels of a design—chip, package and pc board—Park claimed that pin counts, packaging costs and high speed I/O can be optimized. According to Park, the chip-to-package-to-pc board design flow needs to be performed in parallel because constraints on pc boards often place requirements on package design, while package requirements can in turn constrain chip design, both of which are ignored by current design flows.

Serge Leef, Mentor’s vice president of new ventures and general manager of the company’s System-Level Engineering division, invited the automotive industry to adopt the EDA design methodology for on-board electronics.

According to Leef, the typical automobile today has up to 60 electronic control units (ECUs), up to 10 different data networks, several megabytes of memory and miles of wiring—all of which could be better designed by EDA-like software.

“Software components are like VLSI macros and standard cells; ECUs are like physical areas on the layout onto which IC blocks are mapped; signal-to-frame mapping is like wire routing,” said Leef.

New software tools are needed, according to Leef, which can copy the EDA methodology but be optimized for solving the simultaneous conflicting constraints in automotive electronics, permitting analysis and optimization of designs in order to reduce the number of test cars that have to be prototyped.

In perhaps the boldest presentation at ISPD, keynote speaker Louis Scheffer, a former Cadence Design Systems Inc. Fellow who is now at Howard Hughes Medical Institute, proposed adapting EDA tools to model the human brain. Scheffer described the similarities and differences between the functions of VLSI circuitry and biological neural networks, pointing out that the brain is like a smart sensor network with both analog and digital behaviors that can be modeled with EDA.

DARPA Wants to Override Evolution to Make Immortal Synthetic Organisms | Popular Science

Death-resistant synthetic beings? Don’t worry, there’s a genetically encoded kill-switch


It’s been a long time since a Pentagon project from the DARPA labs truly evoked a “WTF DARPA?!” response, but our collective jaw dropped when we saw the details on a project known as BioDesign. DARPA hopes to dispense with evolutionary randomness and assemble biological creatures, genetically programmed to live indefinitely and presumably do whatever their human masters want. And, Wired’s Danger Room reports, when there’s the inevitable problem of said creatures going haywire or realizing that they’re intelligent and have feelings, there’s a planned self-destruct genetic code that could be triggered.

Unsurprisingly, molecular biologists have weighed in with huge caveats and raised fingers of objection. First, they say that DARPA has the wrong idea about hoping to overcome evolution’s supposed randomness, and that evolution really represents a super-efficient design algorithm. Then there’s the problem of guaranteeing immortal life for any biological creature in the first place — just look here and here at some really smart people who have yet to find that fountain of youth.

DARPA has committed just a piddling $6 million out of next year’s budget toward BioDesign. But it will also put $20 million toward a new synthetic biology program and give $7.5 million for speeding up the analysis and editing of cellular genomes. We’re pretty sure that means the Pentagon agency hasn’t considered a future where police “blade runners” help violently “retire” escaped lab replicants of humans.

“It’s too bad she won’t live! But then again, who does?” said Edward Olmos to Harrison Ford in Blade Runner, long before the actor morphed into the gruff but lovable admiral of Battlestar Galactica. Never mind even the experts, let’s trust Olmos. He’s helped hunt down replicants and save humanity from genocidal Cylon robots of our own making. Are you listening, DARPA?

Go wild with the robotic submarine stalkers, the lightning harnessing, and the cyborg insect spies. Just … give this BioDesign thing a bit more thought.