Days of Futures Past

The picture above is ASCI Red, the world’s fastest supercomputer in 1999-2000. It was about the size of a large tennis court, sucked a couple of MW and cost around $55M (it went through various incarnations). And that’s not to mention the staff of acolytes and air-conditioned buildings required to make it work. Its delivered performance was about 2.4 TFlops (thousand billion floating point operations per second), with a theoretical maximum of around 3.2 TFlops, delivered by an array of nearly 10,000 processors, all chuntering away in parallel.

Now look at this: the box (image from egpu.io) that now sits on my desk, for use as a rendering engine for Augmented and Virtual Reality development and for computational AI experiments. It’s what’s called an eGPU: a very high-bandwidth box containing a current-generation graphics card (in this case, a GTX 1080Ti) that sits alongside my computer and offloads all the stuff that CPUs really shouldn’t have to deal with.

Now for some of its figures: it has slightly over 12 billion transistors and 3,500 processors, and consumes around 350 W when at full chat. Given the right class of problem, its maximum throughput is around the 12 TFlops level: that’s 5x that of the world’s leading supercomputer of 18 years ago [1]. Oh yes, and the whole ensemble cost around $1500: 1/37,000th of the price tag of ASCI Red. Want an illustration of Moore’s Law allied to mass-market economies of scale? This is it.
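
For anyone who fancies checking the arithmetic, here’s a minimal Python sketch using the rounded figures quoted above (the footnote’s caveat applies: the GPU number assumes single-precision throughput).

# Back-of-the-envelope comparison using the rounded figures quoted in the text,
# not vendor datasheet values; the GPU peak assumes single precision.
asci_red_tflops = 2.4        # ASCI Red, delivered TFLOPS
asci_red_cost_usd = 55e6     # approximate programme cost

egpu_tflops = 12.0           # GTX 1080 Ti eGPU, peak TFLOPS
egpu_cost_usd = 1500         # card plus enclosure

print(f"Throughput: {egpu_tflops / asci_red_tflops:.0f}x ASCI Red's delivered figure")
print(f"Cost: roughly 1/{asci_red_cost_usd / egpu_cost_usd:,.0f} of the price tag")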

I’ve been working in and around VR and AI for quite some time. From a peripheral association with the Alvey programme in the 1980s, through various experiments in VR with the rise of VRML, then work with Silicon Graphics in the 90s, to my current stewardship of a very exciting AR/AI tech and content company, the common denominator has always been pushing the leading edge of the available technologies for input and display, tracking, rendering and computation.

The difference is that now we’re operating at a far higher level than before: where once we were dealing with megabytes, we now casually throw terabytes around; where we struggled with expensive megaflops, we now call up teraflops into existence, on-demand, from thin air; and, where we struggled with pixellated multi-kilo displays and pathetic ultrasonic sensors, we now simply wave our pocket supercomputers at the world – if, that is, we’ve remembered which pocket we last put them in.

And that matters. Twenty years ago, we understood the principles of what we wanted to do but simply didn’t have the tech – at any price – to get the experience above the threshold of acceptability. Now, we’re there: we can create immersive experiences that surpass the perceptual threshold for the willing suspension of disbelief and which (even more importantly) have driven lag and rendering times below the threshold of the infamous “Barfogenic Zone”, something that’s plagued VR and simulation systems since their inception.
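
To put rough numbers on that: the rule of thumb most VR developers work to is a motion-to-photon latency somewhere under about 20ms, and a 90Hz headset leaves barely 11ms to render each frame. A quick Python sketch of the budget – the overhead figure is an assumed round number for illustration, not a measurement:

# Rough frame-time budget for a 90 Hz VR headset (round numbers, not hard limits).
refresh_hz = 90
frame_budget_ms = 1000 / refresh_hz     # ~11.1 ms to render each frame

motion_to_photon_target_ms = 20         # commonly cited comfort threshold
sensing_and_scanout_ms = 8              # assumed overhead: tracking, compositor, display scan-out

render_slice_ms = motion_to_photon_target_ms - sensing_and_scanout_ms
print(f"Frame budget at {refresh_hz} Hz: {frame_budget_ms:.1f} ms")
print(f"Render slice of a {motion_to_photon_target_ms} ms motion-to-photon target: ~{render_slice_ms} ms")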

And, lest any gamers reading this are thinking, “WTF? We’ve been doing that for years!”, indeed you have, but not with the level of environmental mapping, sensor fusion, scene rationalisation and real-time analytics required to seamlessly integrate virtual and physical worlds, and imbue each with intelligent behaviours and knowledge of its counterpart reality. That takes a LOT of machine cycles and, more to the point, eats battery life.

So, despite these frankly insane advances in technological capability, we’re still pushing the boundaries of the possible, especially in applications which combine experience delivery with intelligent behaviours, data and sensor fusion and adaptive analytics: that’s a model as demanding of computing resources as anything we’ve yet seen, combining, as it does, several areas of high-performance computing in a single system. So we’re still having to do smart stuff, technically and creatively, to work around the limitations of each part of the experience-feedback-understanding cycle.

It is, however, a hugely important synergy that the parallel architecture which allows gamers to engage in mass mayhem in 4K at 120fps is near-identical to the one that enables AI engineers to train Machine Learning to misidentify your cat – so AI benefits massively from the mass-market engineering of companies like NVIDIA and AMD. Thanks to the demand from the consumer market, AI researchers and developers are now able to put a whole bunch of these GPUs in a standard chassis and have available – for, say, £20,000 – computing power that they couldn’t have had at any price a decade ago.
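
The sums there are worth spelling out. The figures below are illustrative assumptions – eight cards at the 1080 Ti’s quoted peak – rather than a quote for any particular box:

# Illustrative only: what a multi-GPU chassis at roughly that price might deliver.
gpus_per_chassis = 8          # assumed card count for a ~£20,000 build
tflops_per_gpu = 12.0         # GTX 1080 Ti peak, as quoted above
asci_red_tflops = 2.4         # for comparison

chassis_tflops = gpus_per_chassis * tflops_per_gpu
print(f"~{chassis_tflops:.0f} TFLOPS peak per chassis, "
      f"roughly {chassis_tflops / asci_red_tflops:.0f}x ASCI Red's delivered figure")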

It is, however, less than useful that the same tools are ideal for the cryptographic work associated with mining Bitcoin and its ilk. Right now, we’re seeing a huge drought of GPUs as the gaming and AI communities suffer from the depredations of bulk-buying cryptocurrency delusionists.

So the next time you worry about the rise of AI taking your job away, for once you don’t need to blame the Baby Boomers: this time it’s your gamer offspring destroying the global zeitgeist from their LED-lit, loudly whirring bedrooms.


 

[1] Terms and conditions apply, depending on precision and word length.
