
Autonomy Reasoning in Sensible Agent-Based Systems



Introduction

    Often, decisions about how, when, and with whom to collaborate distinguish the most successful problem solvers. "Intelligence" alone may be insufficient to achieve desired outcomes due to global constraints, conflicts, or a lack of resources, knowledge, or time. Problem solvers who can leverage the capabilities of others through collaboration have greater potential to overcome such difficulties. However, statically defined decision-making interactions can unduly constrain a collaborative group over time, either by limiting members’ ability to take initiative or by imposing the communication overhead of group interaction when it is not needed. This may ultimately reduce both group and individual productivity. Problem solvers should therefore be able to adjust their collaborative decision-making interactions as needed. Such adjustment includes the capability to form new alliances with other problem solvers as well as the capability to strike out alone, taking the initiative to achieve a goal when no appropriate help is available.

Autonomy as Decision-Making Control

    Although no widely accepted definition exists for agent "autonomy," this research has identified decision-making control as a fundamental dimension of autonomy. Agent autonomy is often interpreted as freedom from human intervention, oversight, or control. However, in multi-agent systems, a human user may be far removed from the operations of any particular agent. Additionally, the behavior of autonomous agents is generally viewed as goal-directed. That is, autonomous agents act with the purpose of achieving their goals. Many researchers also consider pro-activeness to be a defining property of autonomous agents. Incorporating these properties, autonomy becomes an agent’s active use of its capabilities to pursue its goals without intervention, oversight, or control by any other agent. However, no agent can be completely free from all types of intervention with respect to any goal. This research identifies three distinct types of intervention:

        (1) modification of an agent’s environment,

        (2) influence over an agent’s beliefs, and

        (3) intervention in an agent’s determination of which goals/sub-goals/intentions it will pursue.

All three types of intervention are equally important considerations for agent design and operation. However, this research relates the term "autonomy" most clearly to intervention in the decision-making processes that an agent uses to determine how its goals should be pursued. Therefore, autonomy is an agent’s active use of its capabilities to pursue its goals, without intervention by any other agent in the decision-making processes used to determine how those goals should be pursued.

Dynamic Adaptive Autonomy

    This research seeks to address the need for dynamically adjustable decision-making interactions for Sensible Agents. Sensible Agents are encapsulated, automated software entities that attempt to achieve assigned goals. The Sensible Agent architecture has been designed for operation in complex, dynamic problem-solving environments, which demand flexibility and adaptability from successful automated problem solvers. This research supports the Sensible Agent capability of Dynamic Adaptive Autonomy (DAA). An agent’s degree of autonomy is determined by the amount of control the agent has over the outcome of a decision-making process. DAA contributes to the flexibility and adaptability of Sensible Agents by allowing them to modify their decision-making interaction styles during system operation. As the agent’s decision-making interaction style changes, so does its level of autonomy (ranging from command-driven, to true consensus, to locally autonomous/master). The process of autonomy reasoning in Sensible Agents estimates the most appropriate level of autonomy for each of an agent’s goals (which may change throughout system operation) and attempts to maintain the necessary decision-making interactions to achieve that level of autonomy. This research has provided empirical evidence that the performance of Sensible Agents does vary across different decision-making frameworks given the same situation. As situations change, agents should be allowed to implement the most effective decision-making framework for the current situation.
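The autonomy spectrum described above can be sketched as a simple enumeration. The level names follow the text; the per-goal mapping and the goal names are purely illustrative, since autonomy reasoning assigns (and may revise) a level for each goal at run time:

```python
from enum import Enum

class AutonomyLevel(Enum):
    """Spectrum of decision-making interaction styles (illustrative sketch)."""
    COMMAND_DRIVEN = 1      # agent carries out decisions made by another agent
    TRUE_CONSENSUS = 2      # agent shares decision-making control equally
    LOCALLY_AUTONOMOUS = 3  # agent makes decisions alone for its own goal
    MASTER = 4              # agent makes decisions for itself and for others

# Hypothetical per-goal assignment; autonomy reasoning may revise this mapping
# as the situation changes during system operation.
autonomy = {"reach_waypoint": AutonomyLevel.LOCALLY_AUTONOMOUS,
            "allocate_power": AutonomyLevel.TRUE_CONSENSUS}
```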

Decision-Making Frameworks

Agents in a Multi-Agent System (MAS) can interact in sensing, planning, and executing. When a group of agents comes together to pursue a goal, how should the group interact? A Decision-Making Framework (DMF) defines the allocation of decision-making and action-execution responsibilities among a set of agents pursuing one or more goals. Specifically, a DMF consists of (1) the decision-making control set {D}, specifying which agents make decisions about the goals; (2) the authority-over set {C}, specifying which agents must carry out the decisions made; and (3) the set of goals {G} under consideration. Agents form a DMF for one or more goals, and an agent may participate simultaneously in multiple DMFs for different goals. The set of DMFs covering all goals in the system is the Global Decision-Making Framework (GDMF) [1]. The GDMF is the set of ({D}, {C}, {G}) assignments such that every goal is in exactly one DMF, although a single ({D}, {C}, {G}) assignment may apply to multiple goals. A Multi-Agent System with one fixed DMF has a Static Decision-Making Framework. Agents that can change DMFs have the Adaptive Decision-Making Frameworks (ADMF) capability: a search through the space of possible DMFs for one predicted to perform well given the system’s goals, agent capabilities, and perceived environmental factors.
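As a sketch of the definitions above, a DMF can be represented directly as the triple ({D}, {C}, {G}), and the GDMF invariant — every goal in exactly one DMF — checked over a collection of DMFs. The class, agent, and goal names here are assumptions for illustration, not the project's implementation:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DMF:
    """One Decision-Making Framework: the triple ({D}, {C}, {G})."""
    deciders: frozenset   # {D}: agents that make decisions about the goals
    executors: frozenset  # {C}: agents that must carry out those decisions
    goals: frozenset      # {G}: the goals under consideration

def is_valid_gdmf(dmfs, all_goals):
    """GDMF invariant: every system goal appears in exactly one DMF."""
    covered = [g for dmf in dmfs for g in dmf.goals]
    return sorted(covered) == sorted(set(all_goals))

# Illustrative GDMF: a1 and a2 jointly decide g1, which a2 executes;
# a3 is locally autonomous for g2 and g3 (it decides and executes alone).
gdmf = [DMF(frozenset({"a1", "a2"}), frozenset({"a2"}), frozenset({"g1"})),
        DMF(frozenset({"a3"}), frozenset({"a3"}), frozenset({"g2", "g3"}))]
```

Note that a single ({D}, {C}, {G}) assignment covering several goals, as the text allows, corresponds here to one `DMF` whose `goals` set has several members.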

Performance of ADMF

Prior research has shown that a Multi-Agent System using the ADMF capability improves system performance compared to one using a single, static Decision-Making Framework. Specifically, the empirical results of that research supported the following hypotheses.

(1)     The GDMF under which agents perform best differs as the situation that the agents encounter differs.

(2)     Agents operating under ADMF can perform significantly better than agents operating under static DMFs given run-time situation changes.

(3)     Agents operating under ADMF, without exhaustive prior knowledge, can perform significantly better than agents operating under static DMFs given run-time situation changes.

The current research seeks to disentangle the influence of planning and model-sharing factors on Decision-Making Framework performance. The current set of experiments controls for the Model-Sharing Framework through (1) whether agents share world models within their DMF and (2) whether agents share world models across DMFs. The planning parameters examined are (3) the probability that agents will choose not to plan during a processing phase despite having an unachieved goal (decision-procrastination), (4) the probability that agents will choose to implement a solution that their world model predicts will worsen performance (risk-aversion), and (5) the order in which agents plan for a set of related goals (planning-order-bias). The results and analysis of the experiments show that different organizations perform differently under different planning and model-sharing factors, and that a close relation exists between reasoning about how to form teams of agents and the planning and acting within those teams.
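A minimal sketch of how the two stochastic planning parameters could govern a single planning decision. The function and parameter names are assumptions for illustration, not the experimental code; planning-order-bias would additionally determine the order in which a set of related goals is fed through such a decision:

```python
import random

def plan_step(rng, p_procrastinate, p_implement_worse, predicted_delta):
    """One planning decision for an unachieved goal in a processing phase.

    p_procrastinate: decision-procrastination -- probability of choosing
        not to plan this phase despite an unachieved goal.
    p_implement_worse: risk-aversion parameter -- probability of implementing
        a solution whose predicted effect on performance is negative.
    predicted_delta: the world model's predicted change in performance.
    """
    if rng.random() < p_procrastinate:
        return "skip"                      # procrastinate this phase
    if predicted_delta < 0:
        if rng.random() < p_implement_worse:
            return "implement"             # accept a predicted-worse solution
        return "reject"
    return "implement"
```

With `p_procrastinate = 0` and `p_implement_worse = 0`, the agent plans every phase and never accepts a solution its world model predicts will hurt performance; raising either probability trades off responsiveness against that caution.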


Copyright © 2002 by The University of Texas at Austin, The Laboratory for Intelligent Processes and Systems