Assuring the Safety of Decision-making in Autonomous Systems

We’ll be presenting our paper on ‘Analysing the Safety of Decision-Making in Autonomous Systems’ later this year at Safecomp 2022 in Munich (which we’re absolutely STOKED about, by the way!).  In the paper (sorry, no peeking until then) we analyse how the safety of Autonomous System (AS) decision-making can be assured.  Whilst we’re excited to explain our Decision Safety Analysis approach, we recognise that a number of unresolved issues remain outside the paper’s scope, including:

  • The efficacy and aim of an Operational Domain Model
  • Whether AS should/can make decisions in the same way as humans
  • How we can be confident that we have elicited all options on which the AS ‘decides’

… this article concerns the latter.

What is Decision-Making Anyway?

Although there is continued debate (and difference of opinion) amongst psychologists as to how humans make decisions (including the role of a priori experience, historical and other biases, etc.), a decision is largely accepted as being the selection of one or more options from a finite set that is available at the time the decision is made.

That an option may not be ‘visible’ to a human does not discount it as an available option, and even when an option exists, an individual human may not be capable of selecting it.  As decisions transfer from the human to the AS, we must be demonstrably confident that we have identified all potential options available to the AS for selection.

In the increasingly complex operating environments in which AS are being deployed, there are potentially infinite environment variables which may affect the safety of decisions taken by the AS.  Yet the problem may not necessarily be as intractable as it first appears.

Consider, if you will, an autonomous vehicle approaching a roundabout.  The environment variables that impact the safety of the AS’s decision as to whether it should enter the roundabout could include (this is not an exhaustive list):

  • The presence of other vehicles (autonomous and/or ‘traditional’) on the roundabout
  • The presence of other vehicles (autonomous and/or ‘traditional’) waiting to enter the roundabout (and their location in relation to ‘give way’ rules pertinent to the country in which the roundabout is located)
  • Weather conditions (temperature, precipitation, humidity)
  • Time of day
  • Visibility
  • State of road surface (wet/dry)
  • The presence of pedestrians waiting to cross the road (and their location in relation to ‘give way’ rules pertinent to the country in which the roundabout is located)
  • Whether any areas of prohibited waiting will be breached on entry (due to traffic density on the roundabout)
  • Other pertinent rules of the ‘Highway Code’
  • Etc.

Whilst this may present a messy problem for understanding and perception, when it comes to the options available to the AS, it can only make decisions about system variables under its control.  In this instance, that amounts to nothing more than moderating speed and trajectory.  For example:

  • Enter the roundabout (at variations in speed)
  • Stop and wait
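Because the option set is built only from variables under the AS’s control, it can be enumerated explicitly.  As a minimal sketch (the candidate entry speeds and option names here are purely illustrative assumptions, not taken from our paper):

```python
# Hypothetical sketch: the AS's option set at the roundabout is finite
# because it is constructed only from system variables the AS controls
# (here, speed). The candidate entry speeds (km/h) are an illustrative
# discretisation, not values from any real vehicle.
ENTRY_SPEEDS_KMH = [10, 20, 30]

def enumerate_options():
    """Return every selectable option: 'stop and wait', or
    'enter' the roundabout at one of the candidate speeds."""
    options = [("stop_and_wait", None)]
    options += [("enter", speed) for speed in ENTRY_SPEEDS_KMH]
    return options

print(enumerate_options())
# [('stop_and_wait', None), ('enter', 10), ('enter', 20), ('enter', 30)]
```

The point of the sketch is that, however messy the perception problem, the *decision* space stays small and enumerable — provided, of course, that we have conceived of every option in the first place.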

However, as humans we are limited to compiling only the list of options that we can conceive of, and we are therefore only capable of assessing the safety of AS decision-making in terms of those options.  What if there are other options available of which we could never conceive?

A 2011 paper on using AS to model the three-dimensional axis in Air Traffic Control revealed options identified by an algorithm that were assessed as more effective, and perhaps safer, than any options a human Air Traffic Controller could conceive.  Do we need to follow the same algorithmic approach when eliciting potential options for the assessment of safe AS decision-making?  Perhaps.  Keep watching this space as the research unfolds…
