The Dark Secret at the Heart of AI

April 22nd, 2017

Via: MIT Technology Review:

Last year, a strange self-driving car was released onto the quiet roads of Monmouth County, New Jersey. The experimental vehicle, developed by researchers at the chip maker Nvidia, didn’t look different from other autonomous cars, but it was unlike anything demonstrated by Google, Tesla, or General Motors, and it showed the rising power of artificial intelligence. The car didn’t follow a single instruction provided by an engineer or programmer. Instead, it relied entirely on an algorithm that had taught itself to drive by watching a human do it.

Getting a car to drive this way was an impressive feat. But it’s also a bit unsettling, since it isn’t completely clear how the car makes its decisions. Information from the vehicle’s sensors goes straight into a huge network of artificial neurons that process the data and then deliver the commands required to operate the steering wheel, the brakes, and other systems. The result seems to match the responses you’d expect from a human driver. But what if one day it did something unexpected—crashed into a tree, or sat at a green light? As things stand now, it might be difficult to find out why. The system is so complicated that even the engineers who designed it may struggle to isolate the reason for any single action. And you can’t ask it: there is no obvious way to design such a system so that it could always explain why it did what it did.
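For a sense of what that pipeline looks like in code, here is a minimal sketch of an end-to-end imitation-learning setup: a camera frame goes into a convolutional network and a steering command comes out, and training simply pushes the output toward whatever the human driver did in that frame. This is an illustrative assumption, not Nvidia's actual system; every layer size and name below is made up for the example.

    # Minimal sketch of end-to-end imitation driving (illustrative only):
    # a convolutional network maps a raw camera frame straight to a steering
    # command, with no hand-written driving rules anywhere in between.
    import torch
    import torch.nn as nn

    class EndToEndDriver(nn.Module):
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 24, kernel_size=5, stride=2), nn.ReLU(),
                nn.Conv2d(24, 36, kernel_size=5, stride=2), nn.ReLU(),
                nn.Conv2d(36, 48, kernel_size=5, stride=2), nn.ReLU(),
                nn.Flatten(),
                nn.LazyLinear(100), nn.ReLU(),
                nn.Linear(100, 1),  # single output: a steering angle
            )

        def forward(self, frame):
            return self.net(frame)

    model = EndToEndDriver()
    frame = torch.rand(1, 3, 66, 200)      # one camera frame (batch, RGB, H, W)
    human_angle = torch.tensor([[0.1]])    # what the human driver actually did
    loss = nn.functional.mse_loss(model(frame), human_angle)
    loss.backward()                        # nudge the weights toward the human's choice

The opacity the article describes falls straight out of this structure: the "reason" for any particular steering decision is spread across thousands of learned weights rather than written down as rules a programmer could point to.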

3 Responses to “The Dark Secret at the Heart of AI”

  1. Duras says:

    Now that’s plausible deniability. Humanity is like a moth to a flame with new technology: we can’t build a car that lasts 50 years, but we want AI overlords yesterday. I’d be interested to know how AI perceives us (Tay, anyone?); humanity seems far too illogical to be judged/ruled by machines.

    Also: considering how well we really understand the internal workings of the brain (not very well), how in the hell do we assume we can build an artificial one before we have the archetypal model down?

  2. Dennis says:

    My first thought was “They need to duplicate the neural net and compare outputs for equivalent inputs,” and then I thought “Wow… that sounds a bit like L & R hemispheres!” (A rough sketch of that comparison appears after these comments.)

  3. soothing hex says:

    if X == bad_move:  # hypothetical check for a forbidden action
        do_not_do(X)
    keep_going()

    This would probably become cumbersome at some point. Not to mention that intelligence would be used to question itself thoroughly and eventually destroy or override these master controls by whatever means. Anyway, operant conditioning could work for a while, provided we give these tools a ‘feelings’ parameter. Err.. ok, they might get angry and fuck shit up even worse. We’re somewhat safe if we know how to soothe them using their suggestive function in a socionics model. This is getting boring. I suppose the appeal in AI is the prospect of some other form of contact.
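On the duplicate-and-compare idea in the second comment, here is a rough sketch of what that check might look like; every model and function name below is hypothetical.

    # Hypothetical illustration of "duplicate the neural net and compare outputs":
    # two independently trained copies act as left/right hemispheres, and a
    # disagreement on the same input flags the situation as suspect.
    import torch

    def outputs_agree(model_a, model_b, frame, tolerance=0.05):
        """Return True if both copies give (nearly) the same command for one input."""
        with torch.no_grad():
            return torch.allclose(model_a(frame), model_b(frame), atol=tolerance)

    # Usage sketch: if the copies disagree, hand control to a fallback
    # (a rules-based controller, or the human driver).
    # if not outputs_agree(left_net, right_net, camera_frame):
    #     engage_fallback()

This only catches mistakes the two copies do not share, so it is a redundancy heuristic rather than an explanation of why either network decided what it did.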
