Humans or Autonomous Systems – which makes the safer decisions?

Which makes the ‘safer’ decisions – a human or a machine, and is that even the right question? My main research area for the AAIP is assuring the safety of decisions made by Autonomous Systems (AS), and I’ve recently published a paper on a technique for assessing how this can be done.

As part of my research, I’m currently trying to learn more about the models used to evaluate human decision-making, to see whether they can provide insight into assessing AS decision-making, and whether the psychology of human decision-making can inform future research. This latest strand of my research has me wondering…

Anecdotally, the consensus on whether society would accept (trust) an AS that could make decisions autonomously seems to be predicated on whether it will be safer than, or at least as safe as, a human. But if we don’t know how safe human decision-making is, is this a useful metric?

To be clear, this article is not considering decision-support systems (autonomous or otherwise), nor am I in any position to traverse the minefield that is the ethics of decision-making. This article considers only the act of decision-making in an operating scenario. For clarity, a scenario comprises one or more scenes (a snapshot of the environment, including scenery and dynamic elements, as well as actors’ and observers’ self-representations, and the relationships amongst these entities), and describes the temporal development between several scenes in a sequence. Every scenario starts with an initial scene, and the temporal development is characterised by a set of actions and events.
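To make that definition concrete, here’s a minimal sketch of a scenario as a data structure, in Python; every name in it is illustrative rather than drawn from any standard.

```python
from dataclasses import dataclass, field

# Illustrative only: field names are my own, not from any standard.

@dataclass
class Scene:
    """A snapshot of the environment at one instant."""
    scenery: list[str]                          # static elements
    dynamic_elements: list[str]                 # moving entities
    self_representations: dict[str, str]        # actors' / observers' views of themselves
    relationships: list[tuple[str, str, str]]   # (entity, relation, entity)

@dataclass
class Scenario:
    """An initial scene plus the temporal development between scenes."""
    initial_scene: Scene
    # Each step pairs an action or event with the scene it leads to.
    development: list[tuple[str, Scene]] = field(default_factory=list)

    def scenes(self) -> list[Scene]:
        """All scenes in temporal order."""
        return [self.initial_scene] + [scene for _, scene in self.development]
```

Right…back to it…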

My reading to date reveals a plethora of models with which psychologists, sociologists, and decision researchers analyse HOW decisions are made (heuristics, fast-and-frugal heuristics, naturalistic decision-making, etc.), but nothing (so far) on how safe those decisions are. I’m no expert on human factors, and certainly no psychologist nor sociologist, but my understanding is that such researchers within the safety science discipline concentrate their efforts on helping humans make the correct (safe) decision. For complex socio-technical systems, for example, this assistance is provided by delivering accurate and understandable data / information at the right time, in the right format, so that a human can select the correct option.

This act of ‘option selection’ is key – neither humans nor machines ‘make decisions’; rather, they select a revealed option. Research suggests that human decision-making is often influenced by cognitive, social, and historical biases, which can be beneficial or detrimental depending on the context in which a decision is made. Benefits are realised by speeding up the decision-making process (or option selection) – see ‘Thinking, Fast and Slow’ for a much more articulate description than I could offer – and detriments manifest when historical biases lead to the wrong decision in a particular context and situation.
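To illustrate the ‘option selection’ framing, here is a toy sketch of my own (not drawn from the literature above): a deliberative decider and a fast-and-frugal one choose from the same set of revealed options; the heuristic simply consults fewer cues, which is quicker but can select a different option in the same context.

```python
from typing import Callable

# Toy model of decision-making as option selection. All cue names and
# weights are invented for illustration; higher cue values are better.
Option = dict[str, float]

def select(options: list[Option], score: Callable[[Option], float]) -> Option:
    """Option selection: return the highest-scoring revealed option."""
    return max(options, key=score)

def deliberative_score(option: Option) -> float:
    """'Slow' deliberation: weigh every available cue."""
    return sum(option.values())

def fast_frugal_score(option: Option) -> float:
    """Fast-and-frugal heuristic: consult only one salient cue."""
    return option.get("safety", 0.0)

options = [
    {"safety": 0.6, "time_saved": 0.9},  # e.g. overtake now
    {"safety": 0.8, "time_saved": 0.2},  # e.g. wait behind
]
print(select(options, deliberative_score))  # weighs all cues -> overtake
print(select(options, fast_frugal_score))   # one good reason -> wait
```

The point of the sketch is only that the two strategies can pick different options from an identical set; whether the heuristic’s pick is the ‘safer’ one depends entirely on the context.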

Other than perhaps through the use of Reinforcement Learning (RL), AS are not at risk from historical biases, but neither can they benefit from faster decision-making predicated on heuristics. And if speed were the only advantage heuristics confer, one could reasonably argue that a computer processes data much faster than a human anyway.
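As a toy illustration of how RL might import something like a historical bias (again my own sketch, not from my paper): a tabular Q-learner’s option selection is shaped entirely by the rewards it happened to experience, so a skewed history produces a skewed preference.

```python
import random

# Toy single-state Q-learner. The learned values - and hence the
# selected option - are a product of the agent's reward history.
ACTIONS = ["brake", "swerve"]
ALPHA = 0.1                       # learning rate
q = {a: 0.0 for a in ACTIONS}     # learned value of each option

def update(action: str, reward: float) -> None:
    """Standard incremental value update."""
    q[action] += ALPHA * (reward - q[action])

def select_option(epsilon: float = 0.1) -> str:
    """Epsilon-greedy option selection over the learned values."""
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(q, key=q.get)

# A skewed experience history: 'brake' happened to be rewarded more often.
for _ in range(100):
    update("brake", reward=1.0)
    update("swerve", reward=0.0)

print(select_option(epsilon=0.0))  # -> 'brake', a preference learned from history
```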

…so perhaps the question is actually whether research into assuring the safety of AS decision-making can provide insights into the safety of human decision-making? Watch this space…
