AI and the Naming of Names

AI (that’s Artificial Intelligence – I have to be clear here as I live in a farming community and conversations have been known to take a strange turn) is the flavour of the moment and is riding high on the arm-waving curve of the hype cycle. We’ve been here before though – as a notion, AI has been through more loops of the hype cycle than most technologies, with successive waves of mutually reinforcing innovation and fiction conspiring to promise more than contemporary understanding could deliver.

This is perhaps unsurprising, given the breadth and fuzziness of the term ‘Artificial Intelligence’. Its aspirational imprecision encourages spurious claims of ‘intelligence’ for systems that are anything but, yet which seek to jump on the AI bandwagon and use its momentum to bolster their own ends. Their inevitable underperformance then creates parasitic drag on the whole field, at the ultimate cost of public and investor confidence. The consequential crash’n’burn phase becomes all the more likely, at which point many of the bright jewels of the field’s genuine players get thrown out along with the dross. Again.

But such is the nature of the beast: technologies rise, are over-sold, the market corrects and the whole topic fades into the background until there’s an eventual convergence of enabling technologies and needs, at which point things start to take off again, usually on a sounder base than hitherto. And that is where I believe us now to be.

Need and Acceptance

Firstly, there is a market with both big, societal problems to solve and small-but-complex personal needs that we’d really like to be able to hand off to our machine assistants – anything from understanding and predicting climate change(1), or predicting the consequences of corporate (mis)behaviour, down to monitoring our health and booking our holidays. More to the point, there’s increasing recognition at all levels that we need help to deal with the complexity and uncertainty that come with radical information overload.

It won’t be a smooth ride, though: as ‘intelligent’ systems and their behaviours become both more ubiquitous and more visible to us, there will naturally be a plethora of unintended consequences. Not the least of these will be working out how, and whether, to trust the judgement of an autonomous system when it’s dealing with outcomes from processes too complex for us to comprehend directly, short of waiting for the outcome to validate the judgement one way or the other. On the other hand, we have a decidedly patchy track record ourselves in intentional meddling in complex constructs such as national or global economies, and a very poor record of understanding and accepting the consequences of intervening in natural systems.

Enabling and Delivering

Secondly, there’s a confluence of enabling thought models and technologies. On the thought-model side, we’re getting much better at taking multi-disciplinary views and at understanding the non-deterministic dynamics of our creations. On the technology side, we have casual access to capabilities that were science fiction a couple of decades ago: most of us carry massively powerful, connected computers in our pockets(2); Cloud services such as AWS provide the instant, cost-effective scalability of computing resources that AI often needs; GPU arrays let us build incredibly cheap, massively parallel supercomputers in our labs; automated, connected sensors for pretty much everything are reaching commodity-level pricing; and the growing exposure of massive raw data sets under various Big Data initiatives gives us a fertile loam on which to grow our services. And all of it can be tied together, in much of the world (large parts of the UK excepted), with fast, cheap and reliable fibre and 4G networks.

So I do believe that, this time around, there is both real substance to the models and a critical convergence of disciplines and enablers that will allow ‘AI’ to become an integral part of our lives, doing stuff that people realise they want rather than what we technologists think they ought to have.

But that brings us back to my problem with the term ‘Artificial Intelligence’ itself. It’s a semantic problem but, I think, an important one: the juxtaposition of ‘Artificial’ and ‘Intelligence’ places a burden of expectation (Intelligence) on something for which no universal definition exists. The very term sets up a shibboleth – a test where no-one can really agree on the bar that needs to be crossed: each discipline, application and practitioner has their own, continually changing, definition of intelligence, and the more we learn (and the more we therefore realise that we don’t know), the higher that bar is raised.

In fact, a very good working definition of Artificial Intelligence is the old and wryly stated assertion that AI is “anything we haven’t done yet” – no matter how many years of blood, sweat and electrons have been shed cracking any given problem, as soon as it’s done and out there, it seems easy. At that point we immediately decide that it isn’t ‘real’ AI and move on to the next unobtainable goal. That says more about the nature of human intelligence than of machine intelligence, but it makes the assessment of the latter something of an exclusion problem: we can define it (however we choose), or we can measure it, but we can’t have both at the same time – the act of measuring (and understanding) inevitably redefines the problem.

So, if we set aside the near-theosophy inherent in ‘Artificial Intelligence’, what does that leave us with? Well, Cognitive Bingo: perm pretty much any combination of Autonomous, Emergent, Self-organising, Adaptive, Deep, Machine, Cybernetic and Learning and you’ll find that someone, somewhere has probably used it to try to describe their field. That’s simply because what we fondly call AI is such a broad church of highly inter-related disciplines that it is really difficult to come up with a useable catch-all. Hence the persistence of ‘Artificial Intelligence’, however debased we may regard the term to be.

For the current state of the world, I find myself using either Machine Learning or Autonomous Systems, depending on context. I’m still rather ambivalent, however: neither term encapsulates the cybernetics of a system’s interaction with humans, whom I fondly imagine to be the ultimate beneficiaries, and neither addresses the whole emerging area of “Machine Doing” – the point at which these entities move beyond being an extension of what we do, mediated by us, to actually doing things based entirely on their own judgement. From that point, I start to use the term ‘Volitional System’, whilst looking over my shoulder for glowing spheres materialising in the dark.

Most of our AIs are though, right now, learning systems, so Machine Learning will do, pro tem.

So, putting aside the naming of names: rather than getting hung up on whether something is intelligent or not, and what in fact that means to the beetle on the street, I prefer to think in terms of a system’s footprint against certain capabilities. If a system…

  • is demonstrably adaptive, capable of continuing to be effective in its target domain, without human intervention, as that domain evolves
  • can explain its conclusions to a level where we can trust it to a point commensurate with the criticality of the problem (taking us to a bad restaurant versus managing a nuclear plant)
  • can be readaptive (put it in a different environment with the same class of problem and it must still be able to function, albeit with a modicum of training/seeding)
  • can deal with a degree of complexity and uncertainty in its target domain that is not otherwise addressable by prescriptively modelled systems

…then it can reasonably be said to fit the definition of an <insert name here>. And I don’t especially care what you call it. As long as it isn’t an unqualified ‘Artificial Intelligence’. There are, however, a couple of (only marginally flippant) steps beyond: if a system…

  • can itself effect actions based on its decisions, via online or physical agency (robots)
  • operates in a true cybernetic feedback loop with its environment, be that with AHBs (Actual Human Beings) or other machine systems
  • behaves (appropriately) in ways that its developers hadn’t imagined

…then things are getting really interesting, at which point we can probably start to dust off the I-word for real. Or at least start referring to Genuine People Personalities.
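For the code-minded, here is how that capability footprint might be sketched – purely illustrative, in Python; all of the names and the pass/fail logic below are mine, not any established API:

from dataclasses import dataclass

@dataclass
class CapabilityProfile:
    # The four base capabilities from the first list above
    adaptive: bool             # stays effective, unaided, as its domain evolves
    explainable: bool          # can justify conclusions commensurate with criticality
    readaptive: bool           # functions in a new environment with modest reseeding
    handles_complexity: bool   # copes where prescriptively modelled systems cannot

    # The (only marginally flippant) steps beyond
    has_agency: bool = False       # can effect actions, online or physical
    cybernetic_loop: bool = False  # true feedback loop with AHBs or other machines
    surprises_us: bool = False     # behaves (appropriately) in unimagined ways

    def fits_the_definition(self) -> bool:
        """Call it an <insert name here> if all four base boxes are ticked."""
        return all((self.adaptive, self.explainable,
                    self.readaptive, self.handles_complexity))

    def dust_off_the_i_word(self) -> bool:
        """The point at which things get really interesting."""
        return self.fits_the_definition() and all(
            (self.has_agency, self.cybernetic_loop, self.surprises_us))

The booleans, of course, paper over the real difficulty: each one is a judgement call with no universally agreed bar, which is rather the point of this piece.

But, before we do that, there’s something else to consider: the relationship between the fields of Artificial Intelligence, Illusory Intelligence and Real Stupidity. Which is coming next.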



The author has been intermittently involved with AI in a number of forms, particularly Illusory Intelligence and Real Stupidity (his own and that of machine systems), since the 1980s, when he taught himself LISP on the periphery of the UK’s Alvey research programme. He has nearly recovered from that experience, with just the occasional (((flashback))). He was the architect of the conversation engine and bots in Douglas Adams’ Starship Titanic when CTO of TDV, co-organised the Digital Biota III A-Life conference in 1998 and has since been focussing on architectures for emergent, self-organising systems. He was co-founder of udu, Inc. in 2012 and continues to work on encouraging swarm systems to do something useful. He is particularly interested in the cybernetics of the dynamic between machine systems and AHBs.



1. For around £20k, I can put together a 30,000-core array of NVIDIA Tesla graphics cards (packaged as supercompute modules) that will outperform the world’s fastest supercomputer as of November 2000 (Lawrence Livermore Laboratory’s ASCI White). I have no idea what ASCI White cost, but I will safely wager that it was more than an inflation-adjusted £20k, and that it wouldn’t fit under a desk and just gently warm your knees. In the what-has-it-got-in-its-pocketses? stakes, my iPhone 6s Plus turns in just under 1 GFlops on the supercomputer Linpack benchmark which, in the first global supercomputer TOP500 list from June 1993, would have put my PHONE in about 260th place.
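For the sceptical, the back-of-envelope arithmetic behind that wager runs along these lines. The card count and per-card figures are my illustrative assumptions (a K80-class card with ~5,000 CUDA cores and ~2.9 TFlops double-precision peak); ASCI White’s Linpack figure is from the November 2000 TOP500 list. Peak TFlops and Linpack Rmax aren’t strictly comparable, so treat this as order-of-magnitude only:

cards = 6                    # assumption: ~£3.3k per card, so ~£20k all in
cores_per_card = 4992        # assumption: K80-class card
peak_tflops_per_card = 2.9   # assumption: double-precision peak per card

desk_array_cores = cards * cores_per_card         # ~30,000 cores
desk_array_tflops = cards * peak_tflops_per_card  # ~17.4 TFlops peak
asci_white_rmax_tflops = 4.9                      # ASCI White, Nov 2000 Linpack Rmax

print(f"desk array: {desk_array_cores} cores, ~{desk_array_tflops:.1f} TFlops peak")
print(f"ASCI White: {asci_white_rmax_tflops} TFlops Linpack")
print("wager holds:", desk_array_tflops > asci_white_rmax_tflops)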

2. A paper I have from 1985, entitled “Trends in Supercomputing and Computational Physics”, notes that weather forecasting will soon require a computer capable of 10 GFlops raw performance (not comparable with a LINPACK benchmark) – the equivalent of about 10 contemporary Cray-2s. At Rutherford, with our steam-powered Atlas 10, we used to dream of Cray-2s. Ambitious stuff, but I’ll just note in passing that, on one floating-point benchmark (SGEMM), my phone does in fact exceed 10 GFlops.
