AidSpace Blog

The Mystery of the Massively Parallel Processor

Several months ago, according to statistics that measure the public’s access to the museum’s collections via our web site, the one artifact on exhibit at the Udvar-Hazy Center that our online users visited the most was….the Massively Parallel Processor.

Massively Parallel Processor

The what? The Massively Parallel Processor, or MPP, is a pair of large blue boxes crammed full of circuit boards, tucked away in the northwest corner of the McDonnell Space hangar at the Udvar-Hazy Center. It is admittedly not much to look at, compared to, say, the Enola Gay, which is currently the most queried artifact online. While the web and new media people try to figure out if the MPP’s exalted online status was an anomaly or not, let me explain what the MPP is. Perhaps after I describe it, you may feel that it deserves more recognition.

We all know how fast computer technology has advanced in the past few decades—many of us carry hand-held devices with more computing power than the supercomputers of an earlier era, never mind the Apollo Guidance Computer that took astronauts to the Moon between 1968 and 1972. But in spite of those advances, the basic design of computers has not changed that much. Nearly all of them follow a design first sketched by the Hungarian mathematician John von Neumann, in a report he wrote for the U.S. Army in 1945. In that report, he argued that an optimal computer would have a single processor, performing basic operations on a single piece of data at a time, transferring data to and from a high-speed memory. He argued that such a design was the only way that human beings could manage the complexity of computer design, especially the complexity of programming. Over the succeeding decades, computer circuits have gotten much faster, and memories have gotten much larger. And of course computers have gotten much smaller and use far less power. But the basic “von Neumann architecture,” with its single instruction stream and single channel to memory, has remained.

John von Neumann Credit: United States Department of Energy

The Massively Parallel Processor was an experimental machine intended to break what has been called the “von Neumann bottleneck,” by having a program manipulate not one but thousands of pieces of data at a time—in this case, over 16,000 memory locations, each with its own associated processor. That was especially important for computers that processed images, which consist of thousands of picture elements, or “pixels,” each of which needs to be manipulated, but each of which also bears a close relationship to its immediate neighbors.
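To make the contrast concrete, here is a small sketch (in Python with NumPy, purely as a modern stand-in for the MPP's hardware) of the two styles of computation. The scalar loop mimics a von Neumann machine visiting one pixel at a time; the array operation mimics a single MPP instruction acting on all 16,384 elements of a 128 × 128 grid at once. The image data and the shift operation are invented for illustration.

```python
import numpy as np

# A toy 128 x 128 "image" -- the MPP's grid held 16,384 processors,
# one per element of a 128 x 128 array.
rng = np.random.default_rng(0)
image = rng.random((128, 128))

# von Neumann style: a single processor walks the pixels one at a
# time, here copying each pixel from its left-hand neighbor.
shifted_scalar = np.zeros_like(image)
for row in range(128):
    for col in range(1, 128):
        shifted_scalar[row, col] = image[row, col - 1]

# Data-parallel style: one array "instruction" performs the same
# neighbor shift on all 16,384 elements at once.
shifted_simd = np.zeros_like(image)
shifted_simd[:, 1:] = image[:, :-1]

# Both approaches produce the same result.
assert np.array_equal(shifted_scalar, shifted_simd)
```

The point of the sketch is that the neighbor communication central to image processing—each pixel consulting the pixel beside it—becomes a single collective operation rather than thousands of sequential ones.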

The MPP was built for the Goddard Space Flight Center in Greenbelt, Maryland, by the Goodyear Aerospace Corporation of Akron, Ohio—a division of Goodyear well-known for its lighter-than-air craft, but a company that also was a pioneer in supplying advanced computers to military and aerospace customers. It was designed in the late 1970s, delivered to Goddard in 1983, and operated into the 1990s.

Was the MPP a success? It worked well, demonstrating that a parallel architecture was feasible and that it was indeed possible to program one. But it did not lead to a line of “non-von Neumann” computers. The laptops and hand-held devices we use employ advanced versions of the classic architecture. Yet in many current high-performance computer installations, such as those used by Google to search the Internet, parallel architectures are heavily used. Perhaps the large number of Internet queries is coming from Google’s server farms, which are going to the National Air and Space Museum’s website to check up on their grandfather.

Paul Ceruzzi is a curator specializing in aerospace computing and electronics in the Division of Space History at the National Air and Space Museum.


4 thoughts on “The Mystery of the Massively Parallel Processor”

  1. Current CPUs are a hybrid between classic von Neumann architecture and massively parallel machines like the MPP. However, graphics cards generally are massively parallel. And Google and such take Sun’s old slogan that “the computer is the network, the network is the computer” and use it to take Massively Parallel to entirely new heights. In fact, much of the difference is one of scale – how capable the sub-units are, and how fast the interconnections are.

    Still, the MPP deserves a lot of recognition, and I’m glad to see it’s getting it, no matter where it’s coming from!

  2. What a wonderful little dollop of computing history! I am making some inquiries here at Goddard to see if anybody is still here who worked with the MPP. I would like to write about it on the “Geeked On Goddard” science blog. -Dan Pendick

  3. As a computer scientist and an aviation geek, I’m so glad to see this post on a very important computational architecture.

    Is there any chance that one of the MasPar machines from Goddard wound up in the NASM archives? I’d love to see a post or two about that machine.

    Thanks!

  4. The MasPar machines were basically commercial adaptations of the original MPP-1 architecture. The MPP-1 wasn’t exactly “parallel” by modern use of the word; it had a single instruction flow but each instruction could operate on a large number of data elements at once. Much of the technology from this lives on in the vector processing hardware of typical PC graphics cards today.
