Most AI practitioners will argue that the risk to humanity from AI doesn’t (and won’t) come from an AI waking up one day, deciding that the best way to solve the world’s problems is to wipe out humanity, and then serendipitously finding that it’s in control of the world’s nuclear weapons. On the principle that cock-up trumps conspiracy pretty much every time, we’re far more likely to take a range of hits from the misapplication of an AI that’s either too stupid to do the job it’s been asked to do, or deployed by people incapable of understanding its limitations (or who indeed don’t care, as long as they’ve cashed out before it all falls apart). Broadly speaking, machine systems fail for one or more of these reasons: