Hewlett Packard Enterprise has finally revealed a working prototype of The Machine, a research project first announced in 2014 that hoped to “reinvent the fundamental architecture of computing.” The achievement is bittersweet, though, as it seems that HPE no longer plans to commercialise The Machine as a complete solution—instead, bits and pieces of the project will filter down into other commercial HPE servers and technologies.
HPE unveiled the prototype at an event in London on Monday. The Machine wasn’t actually powered up in London; rather, it’s located in a lab in Fort Collins, Colorado. There was a single node of the prototype on display in London, though. The chassis is very deep (about a foot deeper than a usual server rack) and thin, with an SoC on one end, RAM in the middle, and then oodles (2-4 terabytes) of persistent memory taking up the rest of the case.
Persistent memory isn’t the only exciting bit, though: the SoC is attached to the persistent memory via a silicon photonics fabric. Presumably those ribbons that connect to the SoC and run along the edge of the chassis are fibre-optic cabling. HPE says that the SoC can also use the persistent memory in other nodes, again via a silicon photonics fabric, resulting in hundreds or thousands of terabytes of total accessible memory.
Persistent memory, if you haven’t heard of it before, is exactly what it sounds like: memory that can survive a power outage. HPE is already using persistent memory in some of its servers, and presumably the same tech is being used here in the prototype. A few different types of persistent memory have been mooted over the last few years, including 3D XPoint and HP’s own fabled memristors, but for now the only type to have been commercialised is the NVDIMM: a stick of RAM with flash memory on board, so that the contents of memory can be backed up to flash if the power fails. Micron has been selling the stuff for a while, with per-stick capacity now up to 16GB.
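To give a sense of what this buys software, here’s a minimal sketch—emphatically not HPE’s actual software stack—of how an application on Linux might treat byte-addressable persistent memory as ordinary memory by mapping it straight into its address space. The /mnt/pmem/journal path and the DAX-capable filesystem behind it are assumptions made purely for illustration.

```c
/*
 * Minimal sketch: using a persistent-memory region as ordinary memory.
 * Assumes a Linux system where the region is exposed as a file on a
 * DAX-capable filesystem at the hypothetical path /mnt/pmem/journal.
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define REGION_SIZE (64 * 1024)

int main(void)
{
    int fd = open("/mnt/pmem/journal", O_CREAT | O_RDWR, 0644);
    if (fd < 0) { perror("open"); return 1; }
    if (ftruncate(fd, REGION_SIZE) != 0) { perror("ftruncate"); return 1; }

    /* Map the persistent region directly into the process's address space. */
    char *pmem = mmap(NULL, REGION_SIZE, PROT_READ | PROT_WRITE,
                      MAP_SHARED, fd, 0);
    if (pmem == MAP_FAILED) { perror("mmap"); return 1; }

    /* Plain loads and stores now touch the persistent region; there is no
     * separate serialisation step and no read()/write() system calls. */
    strcpy(pmem, "state that should survive a power cycle");
    msync(pmem, REGION_SIZE, MS_SYNC);   /* flush writes out to the media */

    munmap(pmem, REGION_SIZE);
    close(fd);
    return 0;
}
```

The point of the model is that persistence becomes a property of memory itself rather than of a separate storage device, which is the sort of programming model a memory-centric design like The Machine is built around.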
Recently we discussed the performance benefits of keeping entire databases in memory. HPE’s own rather optimistic slides reckon that modifying existing tools and database systems to work in-memory can realise performance gains of up to 300 times. But the ultimate goal, a complete rethink of how we actually do computing triggered by The Machine, is a performance boost of about 8,000 times.
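As a rough illustration of where those gains come from—this is a toy comparison, not anything resembling HPE’s benchmarks—the sketch below times random lookups served from an in-memory array against the same lookups issued as one-at-a-time reads against a file. The file name, record count, and lookup count are invented.

```c
/*
 * Toy comparison: random lookups from an in-memory array vs. the same
 * lookups issued as individual reads against a file. The file name,
 * record count, and lookup count are arbitrary.
 */
#define _POSIX_C_SOURCE 200809L
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

#define RECORDS (1 << 20)   /* 1M eight-byte records, roughly 8MB */
#define LOOKUPS (1 << 18)

static double now_sec(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

int main(void)
{
    /* Build the dataset and write a copy of it out to a scratch file. */
    long *data = malloc(RECORDS * sizeof(long));
    for (long i = 0; i < RECORDS; i++)
        data[i] = i * 3;

    int fd = open("dataset.bin", O_CREAT | O_RDWR | O_TRUNC, 0644);
    if (fd < 0 || write(fd, data, RECORDS * sizeof(long)) < 0) {
        perror("dataset.bin");
        return 1;
    }

    /* Lookups served straight out of memory. */
    double t0 = now_sec();
    long sum_mem = 0;
    for (long i = 0; i < LOOKUPS; i++)
        sum_mem += data[rand() % RECORDS];
    double t_mem = now_sec() - t0;

    /* The same lookups, each one going through the file instead. */
    t0 = now_sec();
    long sum_file = 0;
    for (long i = 0; i < LOOKUPS; i++) {
        long value = 0;
        pread(fd, &value, sizeof(value),
              (off_t)(rand() % RECORDS) * sizeof(long));
        sum_file += value;
    }
    double t_file = now_sec() - t0;

    printf("in-memory: %.3fs  file-backed: %.3fs  (checksums %ld/%ld)\n",
           t_mem, t_file, sum_mem, sum_file);
    free(data);
    close(fd);
    return 0;
}
```

The file-backed path pays for a system call per lookup even when the data is sitting in the operating system’s page cache, and for a trip to the storage device when it isn’t; removing that machinery entirely is the kind of win HPE’s slides are claiming.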
The architecture of the SoC is unknown at this point, though it’s probably some kind of Intel x86-64 or ARMv8 chip. If I had to guess, the SoC is closely integrated with the silicon photonics, but unfortunately we don’t know much about that side of things either. HP has been working on silicon photonics (on-chip transmission and reception of optical signals) for years, but there’s no indication that the company’s tech is anywhere near mature enough for The Machine. Intel is probably the furthest along in commercialising silicon photonics, followed by IBM.
The main purpose of the event in London, though, wasn’t to talk about the prototype; rather, it was to quietly kill off hopes that The Machine, as an actual, er, machine, would ever be commercialised. One of the slides (pictured above) explicitly stated that The Machine’s goal is now to “demonstrate progress, not develop products.” The next slide showed how some of the tech advances will percolate down into other HPE products.
This is at odds with HP’s original proclamation in 2014 that it would bring The Machine to market in a few years or “fall on its face trying.”
Still, a lot has changed since The Machine was first conceived, including HP splitting into two separate companies. The company also expected memristor-based non-volatile memory to be commercialised by now. Pivoting The Machine into a technology incubator seems like a very sensible idea, but completely reinventing the architecture of computing in a series of baby steps is of course a little less exciting than the original moonshot.