Thursday, January 10, 2019

Artificial Intelligence


Before I start, understand that I do not work with, dabble in, or have even played with any kind of AI/ML (Artificial Intelligence / Machine Learning) systems, so everything that follows is my interpretation, based on the relatively superficial articles and discussions I've read, and may be partially or even completely wrong. You've been warned.

There's been a lot of talk about AI lately, about how it is going to massively change everything, from work to society to... everything. I'm a bit skeptical, though, as everything I've read points to the current state of AI being mostly hype. Even the term AI is wrong at the moment. There is absolutely no intelligence involved - only small mathematical models that have been trained to give specific outputs for various inputs: Machine Learning. Like identifying objects in pictures - is this a cat? A chair? An elephant?
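To make the "small mathematical model" point concrete, here's a minimal sketch of what such a model can look like at its simplest: a single perceptron whose weights get nudged until its outputs match the training labels. The task and data are invented for illustration - real image classifiers are vastly larger, but the principle of "adjust numbers until the answers come out right" is the same.

```python
def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Fit weights w and bias b so that the model's 0/1 guess matches the labels."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            # The model's current guess: 1 if the weighted sum is positive.
            guess = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            # Nudge the weights toward the expected answer.
            error = y - guess
            w = [wi + lr * error * xi for wi, xi in zip(w, x)]
            b += lr * error
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Toy task: "is this point in the upper-right region?" - a stand-in for
# "is this a cat?". The trained model does this one task and nothing else.
data = [(0.1, 0.2), (0.2, 0.1), (0.8, 0.9), (0.9, 0.7)]
labels = [0, 0, 1, 1]
w, b = train_perceptron(data, labels)
print(predict(w, b, (0.85, 0.8)))  # prints 1
```

There's no understanding anywhere in there - just arithmetic repeated over examples until the errors go away.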

Don't get me wrong here, making that work is an impressively difficult task. But the problem is that this is just about the point where these systems stop. They do that one - and only one - task well, and nothing else. Yet they are still immensely useful.

These things are essentially black boxes, however - you feed them training data, telling them what the expected response for that data is, until you reliably get the result you want. Which is nice, but if you ask specialists how these models work internally, you get no real answer. They don't know what happens inside; it "just works". Until it doesn't, and you can't really even figure out why it suddenly stopped working. Say, wouldn't it be fun if your self-driving car's traffic sign detection suddenly went haywire just because there's a balloon or something else equally unexpected somewhere in the camera's view...
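Part of why unexpected input is so dangerous is that a trained classifier typically has no notion of "I don't know" - it maps any input onto one of the classes it learned. The toy sketch below shows the effect with a deliberately simple nearest-centroid classifier; the sign classes and all the numbers are invented for illustration.

```python
import math

def centroid(points):
    """Mean position of a list of 2D points."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(2))

def train(stop_signs, speed_signs):
    """'Training' here just means remembering the mean of each class."""
    return {"stop": centroid(stop_signs), "speed": centroid(speed_signs)}

def classify(model, x):
    """Return whichever known class is nearest - there is no reject option."""
    return min(model, key=lambda label: math.dist(model[label], x))

model = train(
    stop_signs=[(1.0, 1.1), (0.9, 1.0), (1.1, 0.9)],
    speed_signs=[(5.0, 5.2), (4.8, 5.1), (5.1, 4.9)],
)

print(classify(model, (1.0, 1.0)))      # in-distribution input: "stop"
print(classify(model, (400.0, -73.0)))  # a balloon? It still confidently answers "speed"
```

The second call is the balloon scenario in miniature: the input resembles nothing the model was trained on, yet it gets a confident answer anyway, with no hint that anything is wrong.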

I'm using cars as an example a lot here, since people reading this are likely to have at least a passing understanding of driving as a challenge, even for the human mind.

Worse yet, you can't even train these algorithms once they are deployed. Or more specifically, you can't allow them to be trained (or allow them to train themselves) in an uncontrolled environment - like when they're being used in your car - because that might too easily train them to behave erratically, or simply wrongly.

And then we return to the issue of these models doing just one job. How do you combine them to produce a useful result, even in unexpected situations? And how do you define "unexpected" in the first place - remember, these models occasionally behave in weird ways when they see data they didn't expect. So what does your car's collision avoidance system do when it suddenly sees a five-foot-high milkshake in front of it?

All that being said, I have no doubt that in very strictly defined jobs and environments these systems are absolutely wonderful - say, as a quality control system picking out bad products in a factory, or something like that. But considering everything I mentioned above, I dare say that we're still very, very far away from a useful general-purpose artificial intelligence, and the super-minds envisioned in both utopian and dystopian sci-fi are currently just that - fiction.

But I still don't rule out the possibility of it suddenly getting much closer due to some new innovation.





