Stolen From the Lips of Dinosaurs: Part 2

Way back when (19th July 2012 to be precise), I penned the following brief missive:

Throughout our careers, and indeed in our daily lives, we safety professionals hear things that make us shiver to the bone!

Things you really DO NOT want to hear in your organisation:

• They must be safe – they haven’t crashed!
• Why are we trying to change something that isn’t broken?
• But we’ve always done it like that.
• Look son, never mind all that stuff – I’ll show you how it’s really done…
• Don’t you think this safety nonsense is a bit over the top?
• Why would we want to supervise or check – we train our guys properly and trust them?
• Sometimes you just have to cut corners…
• What do you need that book for – can’t you remember how to do it?

It was fascinating to look back at a snapshot from almost a decade ago. It is clear that this rant was vented at a time when I was still influenced by my own military experience (I had only left the Service that very year). On reflection, I can see that it speaks mainly to the military safety professionals of the day (I do hope such folly is no longer heard on hangar floors).

How would such a rant read today, then? Do we suffer still from the same inherited bilge?

Although time and technologies have progressed at pace, I remain unconvinced that attitudes to system safety (in any industry) have benefited from an equally seismic progression. I accept that I am no longer able to speak authoritatively on current military matters, but I can comment on the present state of practice and literature in system safety engineering.

How would my ‘list of doom’ read some ten years on? Thinking from a system safety engineering practice perspective (observed through my consultancy), the list of things you really DO NOT want to hear in your organisation would now perhaps be:

• The suppliers have followed IEC 61508 for their software, and as it achieves SIL n I can put that failure rate in my Loss Model
• We’ve modelled the probability of software failure
• We’ve modelled the frequency of software failure
• Software fails
• Why do we care about software – there isn’t any in our (extremely complex socio-technical) system
• DAL A is equivalent to SIL 4
• If we comply with regulations, we can’t be held accountable, therefore we’re safe
• If the regulator agrees, we’re safe
• And undoubtedly the worst of all… There’s more aircraft at the bottom of the sea than there are ships in the sky

And as for the state of the literature, here’s a list of things you really DO NOT want to read in a peer-reviewed paper (or perhaps it’s just me):

• Autonomous cars can’t drive under the influence of alcohol or drugs, so they will be safer than conventional vehicles
• I can prove my (insert catchy acronym) is safe, because it can meet these (made up?) safety requirements
• I have applied my (insert equally catchy acronym) to this fabricated case study, and look – it would have prevented that accident (honest, guv)
• We’ll take the humans out of the loop with unmanned maritime vessels; if we subtract the accidents that involved humans from the maritime accident database, it is obvious that those accidents won’t happen again… so unmanned vessels will clearly be safer
• If we apply Tort Law, we’re safe
• Drivers will happily pay between 5 and 10 times the price for this ‘new improved system’ for the greater societal good
• Our data has statistical significance because we ran 2 simulations (see the sketch after this list for why two runs prove nothing)
• Car 1 came to a halt in 3.4 and Car 2 in 4.2 (no units, just 3.4 and 4.2…I assumed ‘rabbits’)
• And perhaps the worst of all…Any operating environment presents a finite number of discrete states
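On that ‘two simulations’ point, here is a minimal sketch (in Python; the stopping-distance scenario, its distribution, and the ci95 helper are all invented purely for illustration) of why two runs tell you almost nothing. With n = 2, the 95% confidence interval on a mean uses a t-multiplier of roughly 12.7, so the interval is typically enormous:

```python
# A toy illustration: the 95% confidence interval on a mean estimated
# from 2 samples versus 1000 samples. All numbers here are made up.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)

# Pretend each simulation run reports a stopping distance in metres
# (note the units!), drawn from an unknown distribution that we model
# here, for illustration only, as normal with mean 3.8 m and sd 0.5 m.
two_runs = rng.normal(loc=3.8, scale=0.5, size=2)
many_runs = rng.normal(loc=3.8, scale=0.5, size=1000)

def ci95(samples):
    """95% confidence interval for the mean, via the t-distribution."""
    n = len(samples)
    mean = samples.mean()
    sem = samples.std(ddof=1) / np.sqrt(n)        # standard error of the mean
    half_width = stats.t.ppf(0.975, df=n - 1) * sem  # t-multiplier ~12.7 at n=2
    return mean - half_width, mean + half_width

print("n = 2    95% CI:", ci95(two_runs))    # typically metres wide
print("n = 1000 95% CI:", ci95(many_runs))   # a few centimetres wide
```

Two runs leave the confidence interval so wide that almost any conclusion is compatible with the data; claiming ‘statistical significance’ from them is wishful thinking dressed up as analysis.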

What would your list read like? Please do share it in the comments section below.
