We’re currently designing and developing a butlerbot for use in one of our brand-new University Buildings. When I say ‘we’, I mean ‘there are some extremely clever people, with whom I have the pleasure of working, who are designing and building a butlerbot’. I’m playing the role of the annoying safety scientist…a role I have honed well over the years, even if I do say so myself!
The project will eventually grow to a fleet of the little tykes, and users will be able to request a robot to come to their location, collect a small package, and deliver the package to the required destination. The building in question will have a management system with which the butlerbots can interact to request the lift and open doors as they travel around the building.
I am seriously geeking out, naturally – but there are many research benefits to this project that present quite a unique opportunity to investigate, test, and demonstrate concepts which many of my colleagues are working on.
Safe Operating Context
A Safe Operating Context (SOC) is designed to assure the operating context of an Autonomous System (AS), and is established by defining the scope of required activities and the inherent capabilities of the proposed autonomous system. This scope will of course vary – dependent on many factors, including the ‘level’ of autonomy required and any human interactions (including human oversight). We can use this project to ensure we can elicit a complete set of safety objectives to assure the AS in operation. Key to this will be both the creation and validation of the Operating Domain Model for the butlerbot, and complete and compelling Use Cases.
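As a purely illustrative sketch (the structure and field names below are my own shorthand, not a published template), you could imagine capturing an SOC as a simple record tying the required activities and inherent capabilities to the autonomy level, the assumed oversight, and the safety objectives they give rise to:

```python
from dataclasses import dataclass

@dataclass
class SafeOperatingContext:
    """Illustrative record only; the field names are mine, not a published template."""
    required_activities: list[str]    # the scope of activities we need from the AS
    inherent_capabilities: list[str]  # what the proposed AS can actually do
    autonomy_level: str               # the 'level' of autonomy required
    human_oversight: str              # any human interaction/oversight assumed
    safety_objectives: list[str]      # objectives elicited to assure the AS in operation

soc = SafeOperatingContext(
    required_activities=["collect a small package", "deliver it to the destination"],
    inherent_capabilities=["indoor navigation", "lift and door requests via the building system"],
    autonomy_level="supervised autonomy",
    human_oversight="a human can recall or halt the robot",
    safety_objectives=["do not collide with people", "do not leave the operating domain"],
)
```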
Operating Domain Model
Operating Domain Models (ODMs) (or ODDs in the automotive world) contain the scope of operation within which we require the AS to be safe. The key research questions we can contribute to solving are, from a safety perspective:
- PRECISELY what is an ODM:
  - What needs to be contained in an ODM?
  - What needs to be kept out of an ODM?
- How is an ODM to be used?
- What is an ODM to be used for?
- What actions are needed should the AS stray out from the ODM?
Autonomous vehicles have perhaps the ‘largest’ (and that’s probably the wrong word) ODM (ODD), but we can start from a relatively contained and constrained environment (the building and its stakeholders) to develop the research and test the scalability of the ‘answers’.
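To make those questions concrete, here is a deliberately naive strawman (mine, and certainly not the answer) of an ODM expressed as machine-checkable constraints, together with a ‘have we strayed outside it?’ check and a placeholder minimum-risk action. The areas, limits, and actions are invented for illustration:

```python
# A naive strawman ODM: a set of machine-checkable constraints for the butlerbot.
ODM = {
    "permitted_areas": {"floor_1", "floor_2", "lift_car"},
    "excluded_areas": {"plant_room", "toilets"},   # see Geofencing below
    "max_payload_kg": 2.0,                         # small packages only
    "max_speed_mps": 1.0,
}

def within_odm(state: dict) -> bool:
    """Return True if the robot's current state sits inside the ODM."""
    return (
        state["area"] in ODM["permitted_areas"]
        and state["area"] not in ODM["excluded_areas"]
        and state["payload_kg"] <= ODM["max_payload_kg"]
        and state["speed_mps"] <= ODM["max_speed_mps"]
    )

def on_odm_exit():
    """Placeholder minimum-risk action should the AS stray out of the ODM."""
    print("Stop, hold position, and escalate to a human supervisor")
```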
Use Cases
On the face of it, use cases are quite a simple artefact. What does the system need to do? When? Why? With whom? The introduction of autonomy brings extra considerations for use cases, however, and we need to think more carefully, or differently (and most likely for longer), about the following (a rough sketch of how these might be captured follows the list):
- Actors involved directly in the task (users, the fabric of the building, the building management system etc.)
- Actors not involved in the task, but who may interact with the use case (visitors, tradespeople, emergency services, other staff etc.)
- What assumptions the AS must make (and how these can be validated in operation)
- Preconditions (and how the AS must ensure they are met before, and during, the ‘case’)
- Clear start and end points of the use case
- Actions on completion (in other words, the butlerbot has delivered the package, now what?).
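As promised, a rough sketch of how those considerations might be captured as a plain record. The field names and example values are illustrative only, not the Use Cases we have actually written:

```python
from dataclasses import dataclass

@dataclass
class UseCase:
    """Illustrative structure only; not the template we are actually using."""
    name: str
    direct_actors: list[str]     # involved directly in the task
    indirect_actors: list[str]   # not involved, but may interact with the use case
    assumptions: list[str]       # to be validated in operation
    preconditions: list[str]     # checked before, and monitored during, the case
    start: str                   # clear start point
    end: str                     # clear end point
    on_completion: list[str]     # the package is delivered... now what?

deliver_package = UseCase(
    name="Deliver a small package",
    direct_actors=["requesting user", "recipient", "building management system"],
    indirect_actors=["visitors", "tradespeople", "emergency services", "other staff"],
    assumptions=["the lift can be requested via the building system"],
    preconditions=["package within payload limit", "destination inside the operating domain"],
    start="a user requests a robot to their location",
    end="the recipient takes the package at the destination",
    on_completion=["confirm delivery to the requester", "return to the charging dock"],
)
```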
We have created the Use Cases and are embarking on a range of safety analyses in which the cases (along with the ODM and SOC) will be ‘tested in anger’ for their efficacy, completeness, and correctness.
Safety of Decision-Making
We need to assure the safety of decisions taken by the AS – and we have done this in a manner that informs and defines the contribution made by perception and understanding. We’ve just submitted a paper on this, so forgive me for staying tight-lipped on this for now. Our real-world case study…? You guessed it!
Safety Analysis Techniques
What is wrong with existing safety analysis techniques? In truth, probably nothing. Well, there is perhaps nothing wrong with the intent and structure of existing techniques – but the implementation of these techniques and methods needs more intelligent thought applied before they are undertaken. As part of the safety assurance of these autonomous systems, we’ll be assessing whether and why existing safety engineering methods/analyses/tools would elicit causes of failure pertinent to autonomous systems, and what may need to change. With luck we will avoid the need for yet another acronymic paper (or YAAPing, as Drew Rae would call it).

It’s not only the safety science research that will benefit: the design and development of these lovable little rogues will allow us to demonstrate a lot of work from our AAIP research pillars.
Localisation/Persistent Mapping
One of the considerations with autonomous robots is whether and how they can detect static objects such as translucent doors or take appropriate action around areas with blind corners. Localisation techniques and persistent mapping technology can now be deployed around the building to test their efficacy.
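As a toy example of what persistent mapping buys you (the map cells, features, and speed limits below are invented), the map can remember static features that live sensors may miss, and behaviour can be conditioned on them:

```python
# Invented map data: a persistent map remembering static features that live sensors
# may miss (translucent doors) or cannot see past (blind corners).
GLASS_DOOR, BLIND_CORNER = "glass_door", "blind_corner"

persistent_map = {
    (12, 4): GLASS_DOOR,     # lidar/cameras may 'see' straight through this
    (30, 17): BLIND_CORNER,  # restricted sight line around the corner
}

def speed_limit_at(cell: tuple[int, int], default_mps: float = 1.0) -> float:
    """Reduce the speed limit near known static hazards recorded in the persistent map."""
    if persistent_map.get(cell) in (GLASS_DOOR, BLIND_CORNER):
        return 0.3  # crawl past features the live sensors may not handle reliably
    return default_mps
```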
Geofencing
Imagine the scene: you have recently sat down somewhere composed of, say, porcelain…hark! What is that sound of wheels? There are certain places we don’t want a butlerbot to boldly go (forgive the split infinitive, Captain), and we will test geofencing technology to prevent the inadvertent straying of anything automated.
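A toy version of the idea might look like this: the point-in-polygon (ray casting) test is standard, but the ‘bathroom’ polygon coordinates are entirely made up.

```python
# Toy geofence: keep the butlerbot out of an excluded zone.
Point = tuple[float, float]

def inside(point: Point, polygon: list[Point]) -> bool:
    """Even-odd (ray casting) test: is `point` inside `polygon`?"""
    x, y = point
    result = False
    j = len(polygon) - 1
    for i, (xi, yi) in enumerate(polygon):
        xj, yj = polygon[j]
        if (yi > y) != (yj > y) and x < (xj - xi) * (y - yi) / (yj - yi) + xi:
            result = not result
        j = i
    return result

# Hypothetical exclusion zone, in building coordinates (metres): the bathrooms.
BATHROOMS = [(10.0, 2.0), (14.0, 2.0), (14.0, 6.0), (10.0, 6.0)]

def may_enter(point: Point) -> bool:
    """Geofence check: the butlerbot may not enter the excluded zone."""
    return not inside(point, BATHROOMS)
```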
Perception
It’s incredible how seemingly benign environmental factors can impact the ability of an autonomous system to sense and understand aspects of its environment. The new lab facilities will allow us to research how factors like illuminance (lux) and the warmth of the light can prevent certain camera technologies from detecting apparently ‘easy’ objects.
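By way of a sketch of the kind of sweep we might run in the lab (the lighting levels are plausible placeholders, and `detects` stands in for whichever detector is under test, not a real API):

```python
import itertools

LUX_LEVELS = [50, 200, 500, 1000]    # illuminance settings to sweep
COLOUR_TEMPS_K = [2700, 4000, 6500]  # 'warmth' of the light, in kelvin

def run_sweep(detects) -> dict:
    """Record whether an 'easy' target object is detected under each lighting condition."""
    results = {}
    for lux, temp_k in itertools.product(LUX_LEVELS, COLOUR_TEMPS_K):
        results[(lux, temp_k)] = detects(lux=lux, colour_temp_k=temp_k)
    return results
```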
All of this and more, from a humble, unassuming butlerbot.