Saturday, April 14, 2018

Meet your friendly, fallible robotic driver


The prediction seems to be that self-driving cars are here, or at least coming very soon. Quite a few companies - most (in)famously Tesla - have been developing self-driving technology for some time now. Tesla's tech isn't really self-driving, however, despite the name they use ("Autopilot"); it's just a somewhat better version of the lane-keeping systems found in higher-end cars today.

I am not against self-driving cars, really, but I will not be an early adopter here. The tech absolutely must mature a lot before I am willing to trust myself in the care of a robot driver.

One figure thrown around lately is 90%. That is, today the most advanced systems are about 90% reliable, and the remaining 10% requires human attention. Tesla's recent accidents mostly come down to that last 10% - the driver didn't catch on that the system was failing, and the end result wasn't pretty.

That 10% is today. But it will get worse - much, much worse - before it gets better.

I was absolutely certain that I had written a related post earlier, about machine translation, but I don't seem to be able to find it right now. Oh well.

The 10% error figure is a bit fuzzy, but let's - for the sake of argument - define it to mean that on about 10% of typical trips the system will need a human to take over, and very quickly.
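
To get a feel for what such per-trip figures mean in practice, here is a minimal back-of-the-envelope sketch in Python. The two-trips-per-day assumption is mine, purely for illustration:

    # Toy numbers: expected takeovers per driver per year, assuming
    # ~2 trips per day (my assumption) and reading the failure figure
    # as "fraction of trips where a human must take over".
    TRIPS_PER_DAY = 2
    TRIPS_PER_YEAR = TRIPS_PER_DAY * 365

    for failure_rate in (0.10, 0.05, 0.02, 0.01, 0.0001):
        takeovers_per_year = failure_rate * TRIPS_PER_YEAR
        days_between = 1 / (failure_rate * TRIPS_PER_DAY)
        print(f"{failure_rate:7.2%} of trips -> ~{takeovers_per_year:5.1f} "
              f"takeovers/year, one every ~{days_between:,.0f} days")

At 10% you'd be grabbing the wheel every few days - annoying, but it keeps you alert. At 1% it happens roughly once every couple of months, and at 0.01% once in well over a decade. That last row is why the rarer the failures get, the harder it is to stay ready for them.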

See, at 10%, today, people already have trouble concentrating on the road enough to catch on when things start going wrong. And right here is the first problem - the systems aren't good enough to notice by themselves that things are going wrong. It's the human driver who must notice, often very quickly ("wait, why is the car heading towards that media---crash").

But that isn't the worst of it. What if the failure figure gets lower; say, a year from now it's down to 5%; a year after that, 2%; another year, 1%?

If people already have serious problems catching on when things go wrong 10% of the time, how are they - how could they - catch on when errors are even rarer? The short answer, of course, is that they can't. We're only human. We will doze off, play with our phones, daydream - anything and everything except watch the road.
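
That "worse before it gets better" claim can be sketched as a toy model. To be completely clear: the vigilance curve below is made up for illustration - it is my assumption, not measured data - and only encodes the idea that human attention scales with how often the system actually demands it:

    # Purely illustrative toy model - the vigilance curve is invented,
    # tuned so that a 10% failure rate gives an 80% catch rate.
    def catch_probability(failure_rate: float) -> float:
        """Assumed: humans stay alert in proportion to how often
        the system actually fails."""
        return min(1.0, 8.0 * failure_rate)

    for rate in (0.10, 0.05, 0.02, 0.01):
        unhandled = rate * (1.0 - catch_probability(rate))
        print(f"system fails {rate:5.1%} of trips, human catches "
              f"{catch_probability(rate):5.1%} of those -> unhandled "
              f"failure on {unhandled:6.3%} of trips")

Under this invented curve, cutting the system's failure rate from 10% to 5% actually increases the rate of unhandled failures, because human attention falls faster than the failure rate does. The exact numbers mean nothing; the shape is the worry.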

The translation post I thought I had written (or have written, but can't find right now) was tangentially related. Today machine translation is - again - about 90% good, so we proofread and correct the mistakes. But when it gets to the 98-99% range, we won't bother anymore, and there will be embarrassing mistakes in our brochures and technical documents and whatever else.

The difference is, of course, that bad translation doesn't (well, generally, there are some exceptions) get people killed. Bad driving most certainly does.

And this is the reason I won't be adopting self-driving tech anytime soon. 1% is nowhere near a low enough failure figure here. I'll be waiting for 0.01% figures first. Or, alternatively, for a slightly worse device that can actually yell out early enough that it can't handle the situation, so I can take over. But then again, that is also very difficult to detect - if it were easy, those cars that have crashed would have stopped instead. When things go wrong, they tend to go wrong quickly.






