views: 340

answers: 9
It seems that safety requirements tend to frown on systems that use AI to satisfy safety-related requirements (particularly where large potential risks of destruction or death are involved). Can anyone suggest why? I always thought that, provided you program your logic properly, the more intelligence you put into an algorithm, the more likely that algorithm is to prevent a dangerous situation. Are things different in practice?

+20  A: 

Most AI algorithms are fuzzy -- typically learning as they go along. For items of critical safety importance, what you want is deterministic behavior. Deterministic algorithms are easier to prove correct, which is essential for many safety-critical applications.
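
For illustration, a minimal sketch (Python, with made-up names and thresholds) of the kind of deterministic check that is easy to enumerate and review, next to the learned alternative that isn't:

    PRESSURE_LIMIT_KPA = 850.0  # fixed, reviewable design constant

    def interlock_trip(pressure_kpa: float, sensor_ok: bool) -> bool:
        """Deterministic: the same inputs always give the same answer,
        so every case fits in a trace table a reviewer can check."""
        if not sensor_ok:
            return True  # fail safe on a sensor fault
        return pressure_kpa >= PRESSURE_LIMIT_KPA

    # By contrast, a learned classifier's decision boundary is whatever the
    # training data produced, and it can shift if the model keeps learning:
    #   trip = model.predict([pressure_kpa, temperature, flow])  # hard to audit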

tvanfosson
+1 Excellent answer.
Michael Haren
At a minimum you need deterministic and comprehensible behavior at the interlock level. Any reason (other than cultural bias) not to use fuzzy stuff to provide a predictive alert layer?
dmckee
It occurs to me that the answer to my own question is probably "resources". As in, they are better spent on more deterministic code, or on more thorough validation. ::sigh::
dmckee
+3  A: 

I would think that the reason is twofold.

First, it is possible that the AI will make unpredictable decisions. Granted, those decisions can be beneficial, but when talking about safety concerns, you can't take risks like that, especially when people's lives are on the line.

The second is that the "reasoning" behind the decisions can't always be traced (sometimes a random element is used in generating results with an AI), and when something goes wrong, not being able to determine "why" in a very precise manner becomes a liability.
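
For example (a hypothetical Python sketch), a decision that depends on a random draw can't be replayed after an incident, while a deterministic one with logged inputs can:

    import random

    def stochastic_decision(risk_score: float) -> bool:
        # Randomized element: rerunning this later may give a different
        # answer, so "why did it trip?" has no single reproducible trace.
        return risk_score + random.gauss(0.0, 0.1) > 0.5

    def auditable_decision(risk_score: float, log: list) -> bool:
        # Deterministic: the logged inputs fully determine the output.
        tripped = risk_score > 0.5
        log.append(("risk_score", risk_score, "tripped", tripped))
        return tripped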

In the end, it comes down to accountability and reliability.

casperOne
Look at what happened to Will Smith in I, Robot!
Michael Haren
+1  A: 

I would guess that AI systems are generally considered more complex. Complexity is usually a bad thing, especially when it relates to "magic", which is how some people perceive AI systems.

That's not to say that the alternative is necessarily simpler (or better).

When we've done control-systems coding, we've had to show trace tables for every single code path and every permutation of inputs. This was required to ensure that we didn't put equipment into a dangerous state (for employees or infrastructure), and to "prove" that the programs did what they were supposed to do.
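
As a rough sketch of what that looks like (Python, toy control function, hypothetical names): with a small, discrete input space you really can enumerate every permutation and compare against a trace table.

    from itertools import product

    def valve_command(estop: bool, door_closed: bool, pressure_high: bool) -> str:
        """Toy control logic: open the valve only when it is clearly safe."""
        if estop or pressure_high or not door_closed:
            return "CLOSE"
        return "OPEN"

    # Trace table: expected output for each input permutation.
    expected = {(False, True, False): "OPEN"}  # every other combination closes

    for estop, door_closed, pressure_high in product([False, True], repeat=3):
        want = expected.get((estop, door_closed, pressure_high), "CLOSE")
        assert valve_command(estop, door_closed, pressure_high) == want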

That'd be awfully tricky to do if the program were fuzzy and non-deterministic, as @tvanfosson indicated. I think you should accept that answer.

Michael Haren
+3  A: 

The more complex a system is, the harder it is to test. And the more crucial a system is, the more important it becomes to have 100% comprehensive tests.

Therefore, for crucial systems, people prefer to have sub-optimal features that can be tested, and to rely on human interaction for complex decision making.
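
A small sketch of what "sub-optimal but testable" can mean in practice (Python, made-up thresholds): the automated part handles only the unambiguous cases and defers the rest to a person.

    def classify_alarm(reading: float) -> str:
        """Handle only the clear-cut cases; a human decides the rest."""
        if reading < 10.0:
            return "IGNORE"             # clearly safe
        if reading > 90.0:
            return "SHUTDOWN"           # clearly dangerous
        return "DEFER_TO_OPERATOR"      # complex judgement stays with a human

    # Three bands, easy to test exhaustively at the boundaries:
    assert classify_alarm(5.0) == "IGNORE"
    assert classify_alarm(50.0) == "DEFER_TO_OPERATOR"
    assert classify_alarm(95.0) == "SHUTDOWN"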

Roy Peled
+1  A: 

The key statement is "provided you program your logic properly". Well, how do you "provide" that? Experience shows that most programs are chock full of bugs.

The only way to guarantee that there are no bugs would be formal verification, but that is practically infeasible for all but the most primitively simple systems, and (worse) it is usually done on specifications rather than code, so you still don't know whether the code correctly implements your spec even after you've proven the spec to be flawless.
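
To illustrate that gap (a hedged Python sketch with an invented example): even a flawless specification is a separate artifact from the code, and all that ties them together below is a check over the inputs someone thought to try.

    def spec_ok(result: float) -> bool:
        """Specification: the delivered output must stay within 0..200."""
        return 0.0 <= result <= 200.0

    def clamp_output(raw: float) -> float:
        """Implementation intended to satisfy the spec."""
        return max(0.0, min(raw, 200.0))

    # Checking the code against the spec on sampled inputs -- evidence, not proof.
    for raw in (-50.0, 0.0, 1.0, 199.9, 200.0, 10_000.0):
        assert spec_ok(clamp_output(raw)), raw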

Michael Borgwardt
+3  A: 

From a safety standpoint, one is often concerned with guaranteed predictability/determinism of behavior and rapid response time. While it's possible to achieve either or both with AI-style programming techniques, as a system's control logic becomes more complex it becomes harder to provide convincing arguments about how the system will behave (convincing enough to satisfy an auditor).

joel.neely
+4  A: 

Haven't you seen any of the Terminator or Matrix films? We can't trust the machines.

Dan Dyer
Brilliant answer!
Jamie Chapman
+1  A: 

I think it is because AI is very hard to understand, and that makes it impossible to maintain.

Even if an AI program is considered fuzzy, or "learns" by the moment it is released, it has been tested extensively against all known cases (and has already learned from them) before it is even finished. In most cases this "learning" changes some "thresholds" or weights in the program, and after that it is very hard to really understand and maintain the code, even for its creators.
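
As a rough illustration (Python, invented numbers): a hand-written rule keeps its intent in a named constant, while a learned version hides a comparable decision in fitted coefficients that change with every retraining.

    # Hand-written rule: the intent is visible in the code itself.
    OVERHEAT_LIMIT_C = 95.0  # documented design limit

    def overheating(temp_c: float) -> bool:
        return temp_c > OVERHEAT_LIMIT_C

    # Learned rule: a similar decision buried in fitted weights. Why 0.73?
    # Only the training run knows, and retraining changes the answer.
    WEIGHTS = [0.0112, -0.47]  # produced by some training process

    def overheating_learned(temp_c: float) -> bool:
        return WEIGHTS[0] * temp_c + WEIGHTS[1] > 0.73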

This has been changing over the last 30 years with the creation of languages that are easier for mathematicians to understand, making it easier for them to test and deliver new pseudo-code around the problem (like the MATLAB AI toolbox).

DFectuoso
A: 

There are enough ways that ordinary algorithms, when shoddily designed and tested, can wind up killing people. If you haven't read about it, you should look up the case of the Therac-25. This was a system where the behaviour was supposed to be completely deterministic, and things still went horribly, horribly wrong. Imagine if it were trying to reason "intelligently", too.

dwf