A close-up look at a doomed-yet-brilliant start-up computer company that never quite grasped the basics of business.
Some day we will build a thinking machine. It will be a truly intelligent machine. One that can see and hear and speak. A machine that will be proud of us.
— From a Thinking Machines brochure
* * *
In 1990, seven years after its founding, Thinking Machines was the market leader in parallel supercomputers, with sales of about $65 million. Not only was the company profitable; it also, in the words of one IBM computer scientist, had cornered the market “on sex appeal in high-performance computing.” Several giants in the computer industry were seeking a merger or a partnership with the company. Wall Street was sniffing around for an initial public offering. Even Hollywood was interested. Steven Spielberg was so taken with Thinking Machines and its technology that he would soon cast the company’s gleaming black Connection Machine in the role of the supercomputer in the film Jurassic Park, even though the Michael Crichton novel to which the movie was otherwise faithful specified a Cray.
In August of last year Thinking Machines filed for Chapter 11. It had gone through three CEOs in two years and was losing money at a considerably faster rate than it had ever made it.
What caused this high-flying company to come crashing to earth? The standard explanation is that Thinking Machines was a great company victimized by the sudden cutbacks in science funding brought about by the end of the cold war.
The truth is very different. This is the story of how Thinking Machines got the jump on a hot new market — and then screwed up, big time.
* * *
Until W. Daniel Hillis came along, computers more or less had been designed along the lines of ENIAC. In that machine a single processor completes instructions one at a time, in sequence. “Sequential” computers are good at adding long strings of numbers and at other feats of arithmetic. But they’re seriously deficient at the kinds of pattern-recognition tasks that a two-week-old puppy can master effortlessly — identifying faces or figuring out where it is in a room. Puppies can do that because their brains — like those of all animals, including humans — are “massively parallel” computers. Instead of looking at information one jigsaw-puzzle piece at a time, a brain processes millions, even billions, of pieces of data at once, allowing images and other patterns to leap out.
While a graduate student at MIT’s Artificial Intelligence (AI) Lab, Hillis, whom everyone knows as Danny, had conceived of a computer architecture for his thesis that would mimic that massively parallel process in silicon. Hillis called the device a “connection machine”: it had 64,000 simple processors, all of them executing the same instruction at the same time, each on its own piece of data. To get more speed, more processors would be added. Eventually, so the theory went, with enough processors (perhaps billions) and the right software, a massively parallel computer might start acting vaguely human. Whether it would take pride in its creators would remain to be seen.
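The lockstep scheme Hillis described — one instruction broadcast to thousands of processors, each working on its own piece of data — is what computer architects call SIMD (“single instruction, multiple data”). A toy sketch of the idea, with hypothetical names and plain Python standing in for hardware (this is an illustration, not Thinking Machines’ actual software):

```python
# Toy model of SIMD: one instruction is broadcast to every "processor,"
# and all of them apply it to their own local data in lockstep.

def simd_step(instruction, lanes):
    """Apply one broadcast instruction to every processor's data.

    On real SIMD hardware all lanes execute simultaneously; the list
    comprehension here merely models that single logical step.
    """
    return [instruction(x) for x in lanes]

# Sixteen toy "processors," each holding one data element.
lanes = list(range(16))

# One instruction ("double your value") runs on all lanes at once.
result = simd_step(lambda x: 2 * x, lanes)
print(result)
```

The appeal of the design is that speed scales by adding lanes, not by making any single processor faster — exactly the bet the Connection Machine made with its tens of thousands of simple processors.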
Hillis is what good scientists call a very bright guy — creative, imaginative, but not quite a genius. He is also an inveterate tinkerer, whose work has always been more fascinating than practical. On the fifth floor of Boston’s Computer Museum, for instance, is a minimalist computer constructed of fishing line and 10,000 Tinkertoy parts. Hillis built it to play and win at tic-tac-toe, which it invariably does. His other work includes a robot finger that can differentiate between a washer and a screw but is flummoxed by a piece of gum; a propeller-driven jumpsuit that allows its wearer literally to walk on water; and a home robot constructed of paint cans, lightbulbs, and a rotisserie motor.
At the AI Lab, Hillis had become a disciple of legendary AI guru Marvin Minsky. The two were determined to build a connection machine as a tool with which to develop software programs for artificial intelligence. Because the cost would be prohibitive for a university laboratory, they decided to form a company. They went looking for help and found Sheryl Handler.
Handler had participated in the start-up of the Genetics Institute, a Harvard-based genetic-engineering firm. Her background was eclectic: she had studied interior design, held a master’s degree in landscape architecture from Harvard, and at the time was pursuing a doctorate in city planning at MIT. She was also running her own nonprofit consulting firm, specializing in third-world resource planning. She had a taste for classical music and a fine appreciation for style. She’d even been the subject of a Dewars Profile that ran with the quote “My feminine instinct to shelter and nurture contributes to my professional perspective.”
Handler also had a talent for cultivating friendships with brilliant and famous people. One of her Genetics Institute colleagues later called her a “professional schmoozer.” She quickly proved her usefulness by connecting the people who would build the Connection Machine with CBS founder William Paley. Hillis, Minsky, and Handler pitched the idea to Paley and former CBS president Frank Stanton in a meeting to which Hillis wore his customary jeans and T-shirt. Still, he managed to impress the television moguls, who with others eventually agreed to kick in a total of $16 million to the venture.
In May 1983, despite the lack of a business plan, the company was founded and set up shop in a dilapidated mansion outside Boston that once was owned by Thomas Paine, the author of the Revolutionary War pamphlet Common Sense. Hillis and Handler called their new company Thinking Machines because, says Hillis, “we wanted a dream we weren’t going to outgrow.” As it turned out, there was never much danger of that.
In the beginning, Thinking Machines didn’t need to make good business decisions because it had the Defense Advanced Research Projects Agency. A research arm of the Defense Department, DARPA was looking for computer architectures that would enable tanks, missiles, and other weapons to recognize enemy targets and understand spoken orders. In 1984 Hillis and his colleagues at Thinking Machines repackaged Hillis’s thesis and pitched it to DARPA. The agency responded by offering the company a multiyear $4.5-million contract. Now all Thinking Machines had to do was build one of the world’s fastest computers in two years’ time.
The company promptly went on a hiring binge. Its prime hunting grounds were the computer-science departments of MIT, Carnegie-Mellon, Yale, and Stanford — which happened to house four of the world’s leading AI labs. Everyone, from programmers to administrative assistants, had to be interviewed by Handler, who had a very specific, if mysterious, idea of who would be good enough to work for Thinking Machines. (Many researchers later reported that once they were hired, they never got to speak to Handler again — even when they were alone with her in an elevator.)
In fact, Thinking Machines was becoming Handler’s aesthetic creation as much as the Connection Machine was Hillis’s. In the summer of 1984 the company moved into its new home — the top two floors of the old Carter Ink Building in Cambridge, Mass., a few blocks from MIT. Handler personally oversaw the design of the office space, insisting that each office be painted a different and specific color. Huge open spaces were created to stimulate idea sharing and creativity. A plush cafeteria was put in, complete with a gourmet chef. Couches were scattered throughout the offices so that researchers could take naps or even sleep there overnight, which many of them did. And the soft-drink machine was wired to a terminal. Researchers who wanted a drink simply typed in their choice.
In short, Thinking Machines was becoming a hacker’s paradise. The thinking, says Lew Tucker, one of the company’s research directors, was that “if they were fed, they’d practically live at Thinking Machines.” If Hillis disapproved, he didn’t make it known. Having taken to commuting in an antique fire engine, he could hardly play the pragmatist to Handler’s stylist.
In May 1985, Thinking Machines announced the impending completion of the first Connection Machine, the CM-1. The announcement would be made on the third floor of the Carter Ink Building. Handler had every surface on the new floor repainted a slightly different shade of mauve. When it was done, she wasn’t satisfied. So she had her researchers and scientists paint it again.
The CM-1 was an AI researcher’s dream. Unfortunately, few AI labs could afford a $5-million computer, and, as Howard Resnikoff, one of the company’s early executives, had predicted, hardly anyone else was interested. When it came to general scientific computing, the CM-1 was “a dog,” in the words of Gordon Bell, a computer guru and architect of the famous VAX computer at Digital Equipment Corp. It had no facility for running FORTRAN, the de facto standard computer language of science; nor could it do what are known as “floating-point operations,” the operations that manipulate numbers in scientific computation.
Thinking Machines sold seven CM-1s, but only because DARPA brokered and subsidized most of the deals. If the company was going to stay in business, it would need a machine that could pull its weight outside AI research. Unfortunately, according to Resnikoff, the decision to tailor the CM-1 to the AI “nonmarket” cost Thinking Machines three years in the real-world marketplace.
In April 1987, Thinking Machines announced the arrival of the CM-2, a machine the scientific community actually could use. The CM-2 was able to run FORTRAN and to do floating-point operations. It was also a piece of work artistically: a five-foot cube of cubes — done up in what Thinking Machines employees called “Darth Vader black” — in whose innards red lights flickered mysteriously. But the machine’s exotic massively parallel technology still needed special software, which meant its users had to learn new programming techniques. The CM-2 might be more like the human brain than a sequential computer like the Cray was, but scientists knew how to write programs for the Cray. Many of Thinking Machines’ first customers, says Dave Waltz, who ran the company’s AI group, did most of their computing on the floating-point processors, ignoring the 64,000 single-bit processors.
As a result, there still wasn’t much of a market for Connection Machines. But thanks to the support of DARPA, which continued to broker deals, Thinking Machines didn’t have to seriously contemplate building a machine that had a natural market. “Our charter,” says Tucker, “wasn’t to look at a machine and figure out the commercial profit. Our charter was to build an interesting machine.” But the definition of interesting would soon change.
* * *
In the late 1980s, DARPA and the Bush administration, having accepted the fact that the end of the cold war had reduced the urgency for military supercomputing, came up with a new challenge for parallel computing. They began to talk about solving what D. Allan Bromley, the president’s science adviser, dubbed “grand challenge” scientific problems: modeling the global climate, analyzing the folding of proteins, mapping the human genome, predicting earthquakes, revealing the nuances of quantum mechanics. The problems didn’t require artificial intelligence, just enormous computing power.
The official name of the new project was the High Performance Computing and Communications (HPCC) program, and DARPA was the lead agency, with a projected budget of several billion dollars through 1996 to accomplish its goals. At the top of the list: building a computer capable of a teraflop — a trillion floating-point operations per second.
Not surprisingly, Thinking Machines had an inside track on getting a chunk of the projected budget. While other computer companies were out wooing customers, Handler had been cultivating a friendship with Bromley. As soon as Thinking Machines promised it would have a scaled-down version of a teraflop machine ready by 1992, the agency awarded the company an initial contract of $12 million.
In the meantime, several computer companies were exploring a new technology — a compromise between the comfort of sequential computing and the performance of massively parallel machines. A sort of “moderately parallel” design, the technology entailed stringing together a smaller number of the powerful, cheap, off-the-shelf microprocessors used in PCs and workstations — rather than the thousands of highly customized but less powerful processors used in the Connection Machines — into a single supercomputer that would work with existing software.
The cost advantages of using off-the-shelf chips, as well as the functional advantage of running existing software, seemed overwhelming — especially considering the fact that few customers outside the tiny AI community had much interest in Thinking Machines’ massively parallel design. Even Hillis eventually came around and chose the moderately parallel design for the company’s next generation of machine. Unfortunately, the old dream died hard: the decision came only after 18 months of internal bickering. Once again, the company was off to a late start.
What’s more, there were signs that the company was still chasing the wrong market. Industry analysts in 1992 were projecting that the growth in supercomputers was not in science but in business applications — in particular in what’s known as “database mining,” an area that could well become, as IBM parallel-computing expert Art Williams put it, “the killer application” for parallel computers. With the country in a recession, businesses needed every competitive advantage they could get, which meant knowing their customers’ preferences and buying habits in intimate detail. They had begun to collect all conceivable data and were feeding them into their mainframes, looking for any insight that would help them maximize profits. But it sometimes took mainframes hours, even days, to churn out the answer to a single question. So large companies were beginning to check out parallel computers.
In fact, Thinking Machines had sold two Connection Machines to American Express. That got management at Thinking Machines talking about starting a business supercomputer group, an idea that appears at first to be a no-brainer. But at Thinking Machines the idea got stuck in endless discussions. Hillis and Handler already were bitter about having to target general scientific computing rather than artificial intelligence; they weren’t about to jump on the idea of servicing mere merchants. Hillis later complained about the injustice of a world where “the real money is in handling Wal-Mart’s inventory rather than searching for the origins of the universe.”
Nonetheless, thanks to DARPA, Thinking Machines went into the black for the first time. In 1989 the company reported a profit of $700,000 on revenues of $45 million. Handler promptly signed a 10-year lease with the Carter Ink Building for a whopping $6 million a year — about $37 a square foot. (Lotus Development Corp., which was virtually across the street from Thinking Machines, was paying $8 a square foot.) Thinking Machines also hired another 120 employees, bringing the total to over 400. Meanwhile, the company had developed an image as one of the leading high-tech companies in the country. It was, says Stephen Wolfram, founder of Wolfram Research, maker of the highly successful Mathematica software, “the place that foreign trade delegations would come to visit to see where American business was at these days.”
Yet competition was looming. Cray Research launched a crash program in 1990 to get a massively parallel machine on the market within two years. IBM was doing the same. Even Fujitsu Limited, one of Japan’s major supercomputer manufacturers, was in the process of opening a parallel-computing lab, looking toward marketing a 1,000-processor machine.
If there was ever a time that Thinking Machines could, and needed to, put itself on a solid financial and competitive foundation by merging with a deep-pocketed company or by going public, it was now. But Handler nixed all deal making. She felt the company could get a wildly successful teraflop machine out on its own.
As the company forged ahead with its frantic effort to bring the new machine out on time, the corporate culture started to shift from openness to paranoia. Employees weren’t allowed to discuss the machine with one another in the cafeteria. Customers were kept in the dark. The new machine was dubbed the CM-5, to foil hackers acting as corporate spies who presumably would be rummaging through the company’s files looking for a nonexistent CM-3.
Thinking Machines announced the CM-5 in October 1991. Hillis claimed it had the highest “theoretical” peak performance of any supercomputer ever, if you added enough processors to it. The reality: at the time the CM-5 was announced, the machine was slower than its predecessor, the CM-2. Among other problems, the standard chips the company had chosen weren’t ready, so some machines had to ship with slower, earlier-generation chips. Meanwhile, competitors like Intel, Kendall Square Research (KSR), MasPar Computer, and nCube were starting to ship faster supercomputers. More than ever, Thinking Machines was depending on its DARPA edge to move its products.
Then, in August 1992, as DARPA was about to start the process of determining which supercomputer vendors would win the lion’s share of its planned spending spree, the Wall Street Journal broke the story that the agency had been playing favorites. It turned out that DARPA had subsidized — sometimes to the tune of the entire purchase price — the sale of some 24 Connection Machines in recent years. The subsidies added up to a gift to Thinking Machines of $55 million — 20% of the company’s lifetime revenues to that point.
DARPA had greased Intel’s supercomputing wheels too but had left the rest of the supercomputer industry to fend for itself. And now the other players were howling. Perhaps the clearest and most damning criticism came from KSR founder Henry Burkhardt: “Vendors handed money by the government have no interest in solving customers’ problems,” he growled.
An embarrassed Bush administration put an end to Thinking Machines’ DARPA gravy train. For the first time the company had to sell its machines on their merits in an open market. At the end of 1992, Thinking Machines reported a loss for the year of $17 million. The CM-5 wasn’t selling, and the company was hemorrhaging money. Hillis was no longer spending much time in the office. The first round of layoffs had started. Salaries were frozen. Requests for new laptop computers were being denied.
Meanwhile, Handler had an enormous marble archway installed in the atrium of the Carter Ink Building. When a national supercomputer conference was held in Seattle, she decided to stay in San Francisco and commute to Seattle from the swank Stanford Court Hotel. She commissioned a $40,000 logo design for a CM-5 sweatshirt and then rejected it. While the company was sinking, she focused her attention on putting out a cookbook with recipes from the company’s now-infamous cafeteria. Increasingly paranoid, she had a video camera aimed at her personal parking spot and, by some accounts, made people take meetings with her in her parked car. She hired a bodyguard, telling her colleagues that she had received death threats.
Some members of Thinking Machines’ board suddenly seemed to realize that the person who had been running the company all those years had no business skills. The board discussed dumping Handler, but she managed to get her biggest enemies there kicked off.
In early 1993 a new president was brought in, but Handler, who remained CEO, quickly got rid of him. Later in the year a lawyer named Richard Fishman was hired as president. Fishman was a longtime friend of Handler, but when he realized that no outsider would fund the sinking company while Handler remained at its helm, he engineered her ouster.
Fishman focused the company on the business market and began looking for a partner. Sun and IBM were interested, says Tucker, but weren’t willing to take on Thinking Machines’ mounting debt, which included six more years of rent at the Carter Ink Building, a $36-million commitment.
In mid-August, Thinking Machines filed for bankruptcy protection, and Fishman resigned. Soon Hillis himself left the company that had been founded around his thesis. Thinking Machines would reemerge as a small software firm selling programs for its former competitors’ parallel computers.
As late as 1989, says Fishman, Thinking Machines was still three years ahead of the rest of the world in parallel-processing technology. “While others caught up,” he says, “Thinking Machines was losing time, losing customers, and not moving on to the next generation.” Had the CM-5 been built without the miscues and the wasted time, the company might have gone on to live up to its considerable promise. But, as one of the company’s senior scientists would later put it, what if pigs could fly?
* * *
Gary Taubes is a New York-based science and technology writer. His most recent book is Bad Science: The Short Life and Weird Times of Cold Fusion.