Chronic and Acute Harm in Functional Safety
I’ve written previously about how the world of safety engineering is slowly coming to grips with the realisation that risk is no longer linear (and probably never has been) – especially with the advent of autonomous systems.
Safety engineering has historically focussed on acute harm – an immediate, impactful mishap, if you like. We have yet (largely) to consider the contribution of chronic harm – harm that accumulates over time, with increasing levels of ‘severity’.
Setting the debates of risk science versus safety science to one side for now, let us (for the sake of explication) consider risk as the product of severity and likelihood…but how do we ‘measure’ either element for chronic harm? Perhaps we simply cannot, and must instead seek to eradicate it? “Easier said than done” I hear you cry…although it is hard to tell, as I am shouting it as well…but let us try.
To eradicate chronic harm, we must first locate its sources. From an intrinsic safety perspective, one can readily imagine how poor manual handling technique could translate into a musculoskeletal injury over a protracted period, as a consequence of many cycles of poor technique. The severity aspect is simple enough to qualify, but what about likelihood? Do we assess the likelihood of an injury occurring ‘per lift’? We could, but that presumes the harm manifests from a single event – and not ‘the final straw that broke the camel’s back’. The harm didn’t happen on that last, fateful evolution; the muscles, ligaments, and/or tendons were weakened over time.
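To make the ‘final straw’ point concrete, here is a minimal sketch (every number in it is invented purely for illustration) contrasting a per-event likelihood model with a cumulative-damage model of the manual handling example:

```python
# Illustrative sketch only - all parameters are hypothetical, not real
# injury statistics. It contrasts two ways of modelling harm from lifting.

PER_LIFT_INJURY_PROB = 1e-6   # assumed chance of an acute injury on any single lift
DAMAGE_PER_BAD_LIFT = 0.001   # assumed tissue 'damage' accrued per poor-technique lift
INJURY_THRESHOLD = 1.0        # assumed damage level at which a chronic injury manifests


def acute_model(lifts: int) -> float:
    """Probability of at least one acute injury, treating lifts as independent events."""
    return 1 - (1 - PER_LIFT_INJURY_PROB) ** lifts


def chronic_model(lifts: int) -> bool:
    """Chronic injury manifests only once accumulated damage crosses a threshold."""
    return lifts * DAMAGE_PER_BAD_LIFT >= INJURY_THRESHOLD


# Up to lift 999 the chronic model reports no injury; on lift 1000 it
# 'suddenly' does - the final straw - yet the acute model still attributes
# only a tiny, evenly spread probability to any individual lift.
```

The point of the sketch is that a ‘per lift’ likelihood assessment never identifies the harmful event, because under the cumulative model there isn’t one – the injury belongs to the whole history of lifts, not to the last.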
And what of functional safety (a matter on which I feel more qualified to opine…intrinsic safety isn’t really my thing)? Chronic harm in functional safety is not the degradation of system elements over time (through erosion, abrasion, or cycles of stress, for example) – we have that covered, with varying degrees of success. Chronic harm in complex socio-technical systems could instead manifest from trust, or rather a lack thereof. If an end-user, or other actor, does not trust your wonderfully designed system, how does that impact the assumptions you have made regarding its interaction with people?
This trust may not be lacking from the moment a system is introduced into service, but may erode over time – perhaps as confidence in its output degrades with each spurious display, or lessens with every freeze of the screen. Or, more simply, the humans in the loop may just not trust the deployed technology. How do we measure this risk? In cases such as this, one cannot even begin to measure the severity of harm, let alone its likelihood.
The solution? We can pontificate, posture, and debate over units of measurement, safety philosophies, and types of analyses ad nauseam. Or, we could seek to eradicate chronic harm in all its forms, and instead argue compellingly over its absence in toto.
So, let the games begin…you can start by sharing your opinions in the comments section below. May the odds be forever in your favour…