
Reliable Autonomous Systems

Autonomous systems must be robust against environmental stresses to be reliable.

Neil Yorke-Smith recently gave a lecture titled “Towards a Framework for Certification of Reliable Autonomous Systems,” in which he discussed many points from his and his co-authors’ paper on arXiv. This is an extremely important topic in AI and multiagent systems, yet there are still too few answers, with interesting threads branching out in many different directions.

The beginnings of their three-layer reference framework for autonomy certification are (quoting; a rough sketch of the layering in code follows the list):

  1. The separation of high-level control from low-level control in systems architectures. This is a common trend amongst hybrid systems, especially hybrid control systems, whereby discrete decision/control is used to make large (and discrete) step changes in the low-level (continuous) control schemes.
  2. The identification and separation of different forms of high-level control/reasoning. Separate high-level control or decision making can capture a wide range of different reasoning aspects, most commonly ethics or safety. Many of these high-level components give rise to governors/arbiters for assessing options or runtime verification schemes for dynamically monitoring whether the expectations are violated.
  3. The verification and validation of such architectures as the basis for autonomous systems analysis. Fisher, Dennis, and Webster use the above structuring as the basis for the verification of autonomous systems. By separating out low-level control and high-level decision making diverse verification techniques can be used and integrated. In particular, by capturing the high-level reasoning component as a rational agent, stronger formal verification in terms of not just ‘what’ and ‘when’ the system will do something but ‘why’ it chooses to do it can be carried out, hence addressing the core issue with autonomy.

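To make that layering concrete for myself, here is a minimal Python sketch, assuming a simple altitude-and-speed scenario; it is my own illustration rather than anything from the paper, and the class and parameter names (HighLevelAgent, LowLevelController, Setpoint) are hypothetical. The high-level layer makes discrete decisions and records a reason, while the low-level layer only tracks the continuous setpoint it is handed.

```python
# A minimal sketch (hypothetical names, not from the paper) of the layered
# separation: a high-level agent makes discrete decisions, a low-level
# controller tracks continuous setpoints, and a thin interface keeps them apart.

from dataclasses import dataclass


@dataclass
class Setpoint:
    """Continuous target handed from the decision layer to the control layer."""
    altitude_m: float
    speed_mps: float


class HighLevelAgent:
    """Discrete decision making: chooses *what* to do and records *why*."""

    def decide(self, obstacle_ahead: bool) -> tuple[Setpoint, str]:
        if obstacle_ahead:
            # A discrete step change imposed on the continuous control scheme.
            return Setpoint(altitude_m=120.0, speed_mps=8.0), "climb to avoid obstacle"
        return Setpoint(altitude_m=80.0, speed_mps=12.0), "continue on route"


class LowLevelController:
    """Continuous control: tracks the current setpoint with a simple proportional law."""

    def __init__(self, gain: float = 0.2):
        self.gain = gain

    def step(self, current_altitude: float, target: Setpoint) -> float:
        # Return a climb-rate command proportional to the altitude error.
        return self.gain * (target.altitude_m - current_altitude)


if __name__ == "__main__":
    agent, controller = HighLevelAgent(), LowLevelController()
    setpoint, reason = agent.decide(obstacle_ahead=True)
    command = controller.step(current_altitude=80.0, target=setpoint)
    print(f"decision: {reason}; climb-rate command: {command:.1f} m/s")
```

The point of the separation is that each layer can then be analysed with techniques suited to it: agent verification or model checking for the discrete decisions, control-theoretic analysis for the continuous tracking.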
I wanted to capture some of my thoughts.

  • To earn an Airline Transport Pilot Certificate, a human pilot needs (among other requirements) 1,500 flight hours. This is a kind of “burn-in” process in which the pilot is exposed to and challenged by a wide variety of situations. It is a kind of evolutionary process. No certification process can assure 100% infallibility, because situational novelty is boundless, but this one does demonstrate a kind of robustness. The goal is to strive for minimal error rates over the range of all realistic situations.
  • It is reasonable to base the initial certification requirements for autonomous systems on the requirements for humans. But humans and computers have different strengths and weaknesses, and these must be taken into account. For example, humans can only learn so much, whereas a computer’s memory capacity is enormous. And while computers’ current reasoning capabilities are limited, human reasoning is radically more flexible and fluid, able to decide on a course of action given limited and faulty information. Human decision making isn’t infallible, but it is remarkably practical and useful.
  • The key open issue for me is still the mechanism(s) to assure the reliability of the high-level control structures in autonomous systems; a toy sketch of one candidate mechanism, a runtime governor, follows these notes.

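As a toy illustration of the runtime-verification flavour of such a mechanism, the following sketch, again my own and not the paper’s, checks each proposed setpoint against explicitly stated expectations before it would reach the low-level controller; the expectation names and altitude limits are invented for the example.

```python
# A toy sketch (my own illustration, not the paper's mechanism) of a governor
# that sits between the high-level agent and the actuators, rejecting any
# decision that violates an explicitly stated expectation at runtime.

from typing import Callable

# An expectation is a named predicate over a proposed altitude (hypothetical form).
Expectation = tuple[str, Callable[[float], bool]]

EXPECTATIONS: list[Expectation] = [
    ("altitude within corridor", lambda altitude_m: 30.0 <= altitude_m <= 150.0),
    ("no negative altitude", lambda altitude_m: altitude_m >= 0.0),
]


def govern(proposed_altitude_m: float) -> tuple[bool, list[str]]:
    """Check a proposed altitude against every expectation; report violations."""
    violations = [name for name, holds in EXPECTATIONS if not holds(proposed_altitude_m)]
    return (not violations, violations)


if __name__ == "__main__":
    for altitude in (120.0, 500.0):
        ok, violated = govern(altitude)
        status = "accepted" if ok else f"rejected ({', '.join(violated)})"
        print(f"proposed altitude {altitude} m: {status}")
```

A governor like this only rejects decisions at runtime rather than proving them correct in advance, which is why it complements, rather than replaces, formal verification of the high-level reasoning.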