Autonomous systems need the ability to comply with a given ethical theory since they can have a significant impact on society. However, while moral philosophers have studied ethical theories for thousands of years, these theories remain difficult to operationalize for anyone who works on autonomous systems. We therefore propose an ethically compliant autonomous system that optimizes the completion of a task subject to a given ethical framework.
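One way to read this idea is as constrained policy selection: task performance is optimized only over behaviors the ethical framework permits. The sketch below is illustrative, not the proposed system; the policies, values, and compliance predicate are hypothetical placeholders.

```python
def select_policy(policies, task_value, is_ethical):
    """Return the highest-value policy among those that comply with the
    ethical framework; raise if no compliant policy exists."""
    compliant = [p for p in policies if is_ethical(p)]
    if not compliant:
        raise ValueError("no policy satisfies the ethical framework")
    return max(compliant, key=task_value)

# Hypothetical example: policy "b" has the highest task value but violates
# the framework, so the system selects the best compliant policy instead.
values = {"a": 8.0, "b": 10.0, "c": 3.0}
best = select_policy(["a", "b", "c"], values.get, lambda p: p != "b")
```

The key design point is that compliance acts as a hard filter applied before optimization, so task value can never be traded against an ethical violation.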
Autonomous systems use decision-making models that make simplifying assumptions to reduce the complexity of the real world. However, given these assumptions, autonomous systems can encounter exceptions: situations that fall outside the scope of their models. We therefore propose an introspective autonomous system that resolves exceptions by interleaving regular decision making with exception handling, maintaining a belief over whether an exception has occurred.
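A belief over whether an exception has occurred can be maintained with a standard Bayesian update, switching from regular decision making to exception handling once the belief crosses a threshold. This is a minimal sketch under that assumption; the observation likelihoods and the threshold are illustrative, not taken from the proposed system.

```python
def update_belief(belief, observation, p_obs_given_exception, p_obs_given_normal):
    """Bayesian update of the belief that an exception has occurred."""
    numerator = p_obs_given_exception[observation] * belief
    denominator = numerator + p_obs_given_normal[observation] * (1.0 - belief)
    return numerator / denominator

def act(belief, threshold=0.8):
    """Interleave: handle the exception once the belief is high enough."""
    return "handle_exception" if belief >= threshold else "continue_task"

# Hypothetical likelihood models for a binary observation.
p_exc = {"anomaly": 0.9, "nominal": 0.1}   # P(obs | exception)
p_nom = {"anomaly": 0.2, "nominal": 0.8}   # P(obs | no exception)

belief = 0.1  # low prior belief in an exception
for obs in ["anomaly", "anomaly", "anomaly"]:
    belief = update_belief(belief, obs, p_exc, p_nom)
mode = act(belief)  # repeated anomalies push the belief past the threshold
```

A single anomalous observation leaves the system in its regular mode; only accumulated evidence triggers exception handling, which keeps the interleaving stable under noise.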
Autonomous systems often use approximate planners that rely on state abstractions to solve large MDPs in real-time decision-making problems. However, these planners eliminate details needed to produce effective behavior in autonomous systems. We therefore propose a partially abstract MDP with a set of abstract states, each compressing a set of ground states to condense irrelevant details, and a set of ground states, expanded from selected abstract states to retain relevant details.
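The mixed state space can be pictured with a small data structure: regions stay abstract by default, and only the regions relevant to the current decision are expanded back into ground states. This is a sketch of the representation only, not of the planner; the state names are hypothetical.

```python
class PartiallyAbstractStateSpace:
    """Sketch of a partially abstract state space: abstract states each
    compress a set of ground states, and selected abstract states can be
    expanded to expose their ground states."""

    def __init__(self, abstraction):
        # abstraction: dict mapping abstract state -> set of ground states
        self.abstraction = abstraction
        self.expanded = set()  # abstract states currently expanded

    def expand(self, abstract_state):
        """Mark a region as relevant: expose its ground states."""
        self.expanded.add(abstract_state)

    def states(self):
        """Current mixed state space: ground states for expanded regions,
        abstract states everywhere else."""
        result = []
        for a, grounds in self.abstraction.items():
            if a in self.expanded:
                result.extend(grounds)   # retain relevant details
            else:
                result.append(a)         # condense irrelevant details
        return result

# Hypothetical abstraction: two abstract states covering five ground states.
space = PartiallyAbstractStateSpace({"A1": {"g1", "g2", "g3"}, "A2": {"g4", "g5"}})
space.expand("A1")
current = space.states()  # g1, g2, g3, and the still-abstract A2
```

Expanding only "A1" yields four states instead of five, so the planner pays for detail only where it matters.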
Autonomous systems use anytime planners that offer a trade-off between plan quality and computation time in real-time decision-making problems. However, to optimize this trade-off, an autonomous system must decide when to interrupt an anytime planner (i.e., stop thinking) and act on the current plan (i.e., start doing). We therefore propose two metareasoning techniques that use online performance prediction and reinforcement learning to estimate the optimal stopping point of an anytime planner.
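The stopping decision can be illustrated with a very simple online performance predictor: extrapolate the most recent quality improvement and stop once it no longer exceeds the cost of the extra computation time. This is a toy sketch of the metareasoning idea, not the proposed techniques; the quality curve and time cost are hypothetical.

```python
def should_stop(quality_history, time_cost):
    """Stop thinking (and start doing) when the predicted quality gain from
    one more step of computation no longer exceeds the cost of that time.
    The predictor here extrapolates the last observed improvement."""
    if len(quality_history) < 2:
        return False  # too little data to predict; keep planning
    predicted_gain = quality_history[-1] - quality_history[-2]
    return predicted_gain <= time_cost

# Hypothetical anytime planner with diminishing returns in plan quality.
quality = [0.0, 0.5, 0.8]
stop_early = should_stop(quality, time_cost=0.06)   # recent gain is large
quality += [0.9, 0.95]
stop_late = should_stop(quality, time_cost=0.06)    # recent gain is small
```

Early on, the marginal gain dwarfs the time cost, so planning continues; once the quality curve flattens, the same rule interrupts the planner and acts on the current plan.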
Autonomous systems often operate in environments where their behavior and the behavior of other agents are influenced by a signal. However, if an autonomous system cannot observe the signal directly, the consequences can be disastrous. We therefore develop an approach to agent-aware state estimation that enables an autonomous system to indirectly observe a signal by exploiting the behavior of other agents in the environment.
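The core inference can be sketched as a Bayesian update over the hidden signal, treating other agents' observed actions as evidence. This is a minimal sketch, assuming a known model of how each signal value influences agent behavior; the traffic-light signal, actions, and probabilities below are illustrative.

```python
def infer_signal(prior, observed_actions, action_model):
    """Posterior over a hidden signal given the observed actions of other
    agents, where action_model[signal][action] = P(action | signal) and the
    agents' actions are assumed conditionally independent given the signal."""
    unnormalized = {}
    for signal, p in prior.items():
        likelihood = 1.0
        for action in observed_actions:
            likelihood *= action_model[signal][action]
        unnormalized[signal] = likelihood * p
    total = sum(unnormalized.values())
    return {s: v / total for s, v in unnormalized.items()}

# Hypothetical scenario: a traffic light the system cannot see directly.
prior = {"red": 0.5, "green": 0.5}
action_model = {
    "red":   {"stop": 0.9, "go": 0.1},
    "green": {"stop": 0.2, "go": 0.8},
}
posterior = infer_signal(prior, ["stop", "stop"], action_model)
```

Seeing two other vehicles stop makes "red" far more likely than "green", even though the light itself was never observed.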
Autonomous systems cannot always complete their task and can even act dangerously because they use limited decision-making models. Ideally, to overcome these problems, autonomous systems should rely on human assistance when necessary. We therefore propose a competence-aware system that learns through experience to select among its levels of autonomy, each requiring a different degree of human assistance.
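One simple way to realize this idea is to track success statistics per situation and level of autonomy, and to choose the most autonomous level whose estimated success rate clears a confidence threshold, falling back to more human assistance otherwise. This sketch uses frequency estimates for illustration and is not the proposed learning method; the situations and levels are hypothetical.

```python
class CompetenceAwareSystem:
    """Sketch: pick the most autonomous level that has proven reliable in
    the current situation; otherwise fall back to more human assistance.
    Levels are ordered from least to most autonomous."""

    def __init__(self, levels, threshold=0.8):
        self.levels = levels
        self.threshold = threshold
        self.counts = {}  # (situation, level) -> (successes, trials)

    def update(self, situation, level, success):
        """Learn from experience: record the outcome of acting at a level."""
        s, n = self.counts.get((situation, level), (0, 0))
        self.counts[(situation, level)] = (s + int(success), n + 1)

    def choose_level(self, situation):
        for level in reversed(self.levels):  # try most autonomous first
            s, n = self.counts.get((situation, level), (0, 0))
            if n > 0 and s / n >= self.threshold:
                return level
        return self.levels[0]  # default to the most human assistance

# Hypothetical experience: reliable in hallways, unreliable at intersections.
cas = CompetenceAwareSystem(["supervised", "autonomous"])
for _ in range(5):
    cas.update("hallway", "autonomous", True)
for _ in range(3):
    cas.update("intersection", "autonomous", False)
hallway_level = cas.choose_level("hallway")
intersection_level = cas.choose_level("intersection")
```

The system operates autonomously where experience supports it and requests supervision where it has failed, so its level of autonomy improves with experience rather than being fixed in advance.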