Tuesday, March 29, 2016

List of Worst Failed Computer Projects

Failed Computer Projects

Ada programming language


NeXT Computer (1988): Based on Motorola's new 25MHz 68030 CPU and including 8MB-64MB of RAM, a 330MB hard drive, and a 1120x832 grayscale display, Steve Jobs' NeXT Computer cost $10,000 a pop. It was inaccessible to most and didn't sell very well. Despite its limited commercial success, NeXT played a pivotal role in history.

The 16 Worst Failed Computers of All Time
www.maximumpc.com/the-16-worst-failed-computers-of-all-time
14. Commodore Plus/4 (1984): Commodore released like 2,000 computers in about 5 years' time. That's baffling. The Plus/4 was a home computer with a built-in office suite: the word processor, spreadsheet, database, and graphing software that gave the machine its name.

12. IBM PS/2 (1987): How ya gonna do it? PS/2 it! Or not. The Personal System/2 was IBM's failed attempt to regain control of the clone market via a closed, proprietary architecture.

The HP 3000 (as first shipped) was a failure: the design did not take into account the cost of moving data from memory into the stack registers.

HP 300 (Wikipedia): The HP 300 "Amigo" was a computer produced by Hewlett-Packard (HP) in the late 1970s, based loosely on the stack-based HP 3000 but with virtual memory for both code and data. It introduced built-in networking, automatic spelling correction, multiple windows (on a character-based screen), and labels adjacent to vertically stacked user function keys, a scheme now used on ATMs and gas pumps. The HP 300 featured the HP-IB (later IEEE-488) interface as its I/O bus, an 8" floppy disk, and a built-in fixed 12MB hard drive of the kind later common on PCs.

The HP 300 was cut short from being a commercial success despite the huge engineering effort, which included HP-developed and -manufactured silicon-on-sapphire (SOS) processor and I/O chips. HP Computer Systems Division General Manager Doug Spreng decided that the file system differences between the division's money-making HP 3000 line and the burgeoning HP 300 would keep the HP 300 from being successful, and killed the product. HP built two semi-truck loads of units before shutting down the HP 300 production line, to meet customer contractual agreements.


iAPX 432 final project: The iAPX 432 was a flop and was discontinued only four years after its release. Speed was the main reason the processor failed, although many programmers also did not see Ada as the way of the future and therefore ignored the chip. The 432 was so slow because it verified many memory accesses (each check causing a memory read), its instructions were not aligned and took a while to decode (a sketch of why appears below), it did not have a large enough cache, it did not have enough registers, and it was split across extra chips which had to communicate. The success of the 80286 sealed its fate; as mentioned above, the 80286 was four times faster than the 432. The iAPX 432 taught Intel a lot about what could and could not be done, and it was impressive that such a complex system could be created with the available technology, but it was too complicated for practical use.

Tandem rejects iAPX paper: why is it 4x slower than the 8086? 50% of compiler-generated instructions were unnecessary.

Dvorak: Once released, the chip proved to be a woofing dog; the designers gave up on it as a product and moved forward with some of the ideas used to design the chip. It's believed that it was given up on after 1984, although supplies of the chipset may still have been available as late as 1993. The ideas behind the chip continued and slowly evolved into what is today's Intel i960 embedded processor. Everything changed back once the 432 hit the market and was determined to be a dog. The 432 was simply too ambitious an undertaking.
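A concrete illustration of the decoding problem: the 432's instructions were bit-aligned and variable length (roughly 6 to 321 bits), so the decoder could never fetch an opcode with a single aligned load. Here is a minimal C sketch of the per-field work; the field widths and the byte stream are made up for illustration, and this is obviously not 432 microcode:

#include <stdint.h>
#include <stdio.h>

/* Extract `len` bits starting at absolute bit offset `pos` from a
   byte stream: the work a bit-aligned decoder must do for every
   field, where a byte-aligned ISA gets by with one indexed load. */
static uint32_t get_bits(const uint8_t *code, size_t pos, unsigned len)
{
    uint32_t v = 0;
    for (unsigned i = 0; i < len; i++) {
        size_t bit = pos + i;
        v |= (uint32_t)((code[bit >> 3] >> (bit & 7)) & 1u) << i;
    }
    return v;
}

int main(void)
{
    const uint8_t code[] = { 0xB5, 0x3C, 0x7E };  /* made-up stream */
    /* Hypothetical 6-bit opcode at bit 0, 10-bit operand at bit 6:
       every field costs shifts and masks at an arbitrary offset.  */
    printf("opcode=%u operand=%u\n",
           get_bits(code, 0, 6),
           get_bits(code, 6, 10));
    return 0;
}

A byte-aligned machine reads its opcode with one load; here every field costs a loop of shifts and masks, and in hardware the equivalent logic sits on the critical path of every single instruction.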

http://dtrace.org/blogs/bmc/2008/07/18/revisiting-the-intel-432/ Posted on July 18, 2008
our many failures go largely unstudied — and the rich veins of wisdom that these failures generate live on only in oral tradition passed down by the perps (occasionally) and the victims (more often).

A counterexample to this — and one of my favorite systems papers of all time — is Robert Colwell‘s brilliant Performance Effects of Architectural Complexity in the Intel 432. This paper, which dissects the abysmal performance of Intel’s infamous 432, practically drips with wisdom, and is just as relevant today as it was when the paper was originally published nearly twenty years ago.

For those who have never heard of the Intel 432, it was a microprocessor conceived of in the mid-1970s to be the dawn of a new era in computing, incorporating many of the latest notions of the day. But despite its lofty ambitions, the 432 was an unmitigated disaster both from an engineering perspective (the performance was absolutely atrocious) and from a commercial perspective (it did not sell — a fact presumably not unrelated to its terrible performance). To add insult to injury, the 432 became a sort of punching bag for researchers, becoming, as Colwell described, “the favorite target for whatever point a researcher wanted to make.”

But as Colwell et al. reveal, the truth behind the 432 is a little more complicated than trendy ideas gone awry; the microprocessor suffered from not only untested ideas, but also terrible execution. For example, one of the core ideas of the 432 is that it was a capability-based system, implemented with a rich hardware-based object model. This model had many ramifications for the hardware, but it also introduced a dangerous dependency on software: the hardware was implicitly dependent on system software (namely, the compiler) for efficient management of protected object contexts (“environments” in 432 parlance). As it happened, the needed compiler work was not done, and the Ada compiler as delivered was pessimal: every function was implemented in its own environment, meaning that every function was in its own context, and that every function call was therefore a context switch! As Colwell explains, this software failing was the greatest single inhibitor to performance, costing some 25-35 percent on the benchmarks that he examined.
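To get a feel for what "every call is a context switch" does to performance, here is a minimal C sketch, mine rather than Colwell's and not 432 code: it compares a plain function call against one that must also copy a context descriptor in and out of memory on every entry and exit, standing in for per-function environment management. The 256-byte env_t and the iteration count are assumptions for illustration only.

#include <stdio.h>
#include <string.h>
#include <time.h>

enum { CALLS = 10000000 };

/* Stand-in for a 432-style protected environment: a block of access
   descriptors that gets materialized on entry and torn down on exit. */
typedef struct { char descriptors[256]; } env_t;

static env_t shared;

static int plain_add(int a, int b) { return a + b; }

static int env_add(int a, int b)
{
    env_t e;
    memcpy(&e, &shared, sizeof e);   /* "enter environment" */
    int r = a + b;
    memcpy(&shared, &e, sizeof e);   /* "leave environment" */
    return r;
}

static double bench(int (*f)(int, int))
{
    clock_t t0 = clock();
    volatile int sink = 0;           /* keep the calls from vanishing */
    for (int i = 0; i < CALLS; i++)
        sink += f(i, i);
    (void)sink;
    return (double)(clock() - t0) / CLOCKS_PER_SEC;
}

int main(void)
{
    double cheap = bench(plain_add);
    double heavy = bench(env_add);
    printf("plain: %.2fs  per-call environment: %.2fs  (%.1fx)\n",
           cheap, heavy, heavy / cheap);
    return 0;
}

Even this crude stand-in should make the call-heavy path several times slower, and the 432's real environment machinery was far heavier than a pair of memcpys.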

If the story ended there, the tale of the 432 would be plenty instructive — but the story takes another series of interesting twists: because the object model consumed a bunch of chip real estate (and presumably a proportional amount of brain power and department budget), other (more traditional) microprocessor features were either pruned or eliminated. The mortally wounded features included a data cache (!), an instruction cache (!!) and registers (!!!). Yes, you read correctly: this machine had no data cache, no instruction cache and no registers — it was exclusively memory-memory. And if that weren’t enough to assure awful performance: despite having 200 instructions (and about a zillion addressing modes), the 432 had no notion of immediate values other than 0 or 1. Stunningly, Intel designers believed that 0 and 1 “would cover nearly all the need for constants”, a conclusion that Colwell (generously) describes as “almost certainly in error.” The upshot of these decisions is that you have more code (because you have no immediates) accessing more memory (because you have no registers) that is dog-slow (because you have no data cache) that itself is not cached (because you have no instruction cache). Yee haw!
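The compounding effect is easy to count on paper. Under the simple (and assumed) model of one memory reference per operand, a short C tally for a = b + 5; c = a * 2; contrasts a register machine with immediates against a memory-to-memory machine whose only immediates are 0 and 1:

#include <stdio.h>

int main(void)
{
    /* a = b + 5; c = a * 2; on a register machine with immediates:
         load  r1, b     ; 1 data read
         add   r1, #5    ; constant rides in the instruction
         store r1, a     ; 1 data write
         add   r1, r1    ; a*2, value still in a register
         store r1, c     ; 1 data write                        */
    int reg_refs = 3;

    /* Same code on a memory-to-memory machine whose only
       immediates are 0 and 1, so 5 and 2 must live in memory:
         add a, b, k5    ; read b, read k5, write a
         mul c, a, k2    ; re-read a, read k2, write c         */
    int mem_refs = 6;

    printf("data references: %d vs %d (%.1fx); on the 432 every one\n"
           "of them, plus every instruction fetch, goes all the way\n"
           "to memory, because there is no cache to hit.\n",
           reg_refs, mem_refs, (double)mem_refs / reg_refs);
    return 0;
}

The ratio only grows with longer expression chains, since the register machine keeps intermediates in registers while the memory-to-memory machine round-trips every one of them through uncached memory.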


