Perspective Modeler: Evaluating
Information Trustworthiness
SUMMARY
A Sensible Agent requires an accurate picture of its surroundings.
Because Sensible Agents neither rely on nor have access to a global, “omniscient”
view of the world, the Perspective Modeler (PM) contains the Sensible Agent’s
explicit knowledge model of its local (subjective) viewpoint of the world.
The PM’s model contains knowledge about the self-agent (the agent whose perspective
is being used), other agents (not limited to Sensible Agents), and the environment.
The PM is designed to support improved decision making in high-level planners while also providing potential reactive solutions, achieving robustness in the face of a dynamic environment, hardware failures, limited communication bandwidth, or the time delays introduced by end-to-end communication. Technology embedded in the Perspective Modeler includes the assessment of information completeness and uncertainty, as well as the maintenance of belief models based on those assessments.
The Perspective Modeler interprets data received from information sources (including the self-agent’s sensors and other agents) and updates its models accordingly, using a Belief Revision process based on the source’s reputation, the certainty the source places on the data, and the age of the data. The PM also maintains beliefs about states and events external to the self-agent and predicts the actions of other agents and the environment. New belief revision research includes work on temporal issues, addressing the question of how confidence in a piece of information should be discredited, or depreciated, as time passes. This integration of “information staleness” factors is important when weighing older, more certain data against more recent but less certain information.
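A minimal Python sketch of how source reputation, reported certainty, and information staleness might be combined into a single weight. The exponential half-life discount and all names here are illustrative assumptions, not the published Sensible Agent formulation.

```python
def information_weight(reputation, reported_certainty, age_seconds,
                       half_life_seconds=300.0):
    """Weight a report by its source's reputation, the certainty the
    source placed on it, and how stale it is.

    The exponential-decay staleness model is an assumption chosen for
    illustration: confidence halves every half_life_seconds.
    """
    staleness = 0.5 ** (age_seconds / half_life_seconds)
    return reputation * reported_certainty * staleness

def revise_belief(candidate_reports):
    """Adopt the candidate report with the highest combined weight."""
    return max(candidate_reports, key=lambda c: information_weight(
        c["reputation"], c["certainty"], c["age"]))
```

Under this sketch, with a 300-second half-life a 600-second-old report keeps only a quarter of its weight, so a fresh but less certain report from an equally reputable source can outweigh an older, more certain one.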
The Perspective Modeler maintains reputation values, essential for belief revision, for each information source. Current reputation management research [1] focuses on two methods for assessing an information source’s reputation: (1) Direct Trust Revision, in which a source’s reputation is revised based on its past transaction history with the agent, using dissimilarity metrics to measure the quality of the information received, and (2) Recommended Trust Revision, in which a source’s reputation is affected by trust information recommended by other agents. Reputation management strategies provide “soft security”: in domains with unreliable or malicious information sources, they maintain high truth accuracy by identifying and isolating the sources of fraudulent data.
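The two revision methods can be sketched as follows. The absolute-error dissimilarity metric and the linear blending rates are assumptions made for illustration; the actual metrics appear in [1].

```python
def dissimilarity(reported, observed):
    """Illustrative dissimilarity metric: absolute error clipped to [0, 1].
    A stand-in for the metrics used in [1]."""
    return min(abs(reported - observed), 1.0)

def direct_trust_revision(reputation, reported, observed, rate=0.1):
    """Direct Trust Revision sketch: move a source's reputation a fraction
    `rate` toward the quality (1 - dissimilarity) of its latest report."""
    quality = 1.0 - dissimilarity(reported, observed)
    return (1.0 - rate) * reputation + rate * quality

def recommended_trust_revision(reputation, recommendations, weight=0.2):
    """Recommended Trust Revision sketch: blend in reputations recommended
    by other agents, each weighted by the recommender's own reputation.
    `recommendations` is a list of (recommender_reputation, recommended_value)."""
    if not recommendations:
        return reputation
    total = sum(r for r, _ in recommendations)
    if total == 0:
        return reputation
    recommended = sum(r * v for r, v in recommendations) / total
    return (1.0 - weight) * reputation + weight * recommended
```

A source that repeatedly reports values far from what is later observed sees its reputation decay toward zero, at which point its future reports carry little weight in belief revision, which is the isolation effect described above.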
Three types of beliefs, with their corresponding representations, are held
in the PM: behavioral, declarative, and intentional, as depicted in Figure
1. The behavioral knowledge in the Perspective Modeler allows an intelligent
agent to dynamically assess the current behavioral states and to model the
possible future behavioral states of itself, other agents, and the environment.
It also provides an agent with reactivity for response in dynamic and uncertain
environments. The behavioral model specifies the execution model of
an agent (defined as the states, the transitions between those states, and
the events affecting the transitions) that defines how to perform its designated
tasks. The declarative knowledge provides the agent with its best estimate
of the beliefs of itself and of others. The declarative model specifies (1)
the data for which an agent is responsible and the resources assigned to it, and
(2) the attributes that characterize an agent. The intentional
knowledge represents the agent’s goals. The intentional model captures
the intentions of an agent in the form of the Intended Goal Structure (IGS).
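The behavioral model described above, defined as states, transitions between states, and the events that trigger those transitions, can be sketched as a small state machine. The state and event names below are hypothetical, not drawn from the Sensible Agent specification.

```python
class BehavioralModel:
    """Minimal execution model: states, transitions, and triggering events."""

    def __init__(self, initial, transitions):
        # transitions maps (state, event) -> next state
        self.state = initial
        self.transitions = transitions

    def on_event(self, event):
        """Advance the execution model. Unknown events leave the state
        unchanged, giving the agent a simple reactive fallback."""
        self.state = self.transitions.get((self.state, event), self.state)
        return self.state

# Hypothetical execution model for a task-performing agent.
model = BehavioralModel("idle", {
    ("idle", "task_assigned"): "planning",
    ("planning", "plan_ready"): "executing",
    ("executing", "task_done"): "idle",
    ("executing", "hardware_fault"): "recovering",
    ("recovering", "recovered"): "idle",
})
```

By running the same transition table forward from a hypothesized state, the PM can also model the possible future behavioral states of other agents and of the environment, as described above.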
Each Sensible Agent’s PM is implemented in LISP running on Solaris or Linux machines. The knowledge models are written in LOOM, and LOOM’s ontology has been employed. Other Sensible Agent modules connect to the PM module through the Inter-Language Unification system (ILU), an OMG CORBA-compliant ORB. All modules are implemented as CORBA objects, so they can run on different platforms and different machines to maximize portability. PM services have also been successfully deployed on the CoABS Grid.
BIBLIOGRAPHY
[1] Barber, K. S., and J. Kim, “Soft Security: Isolating Unreliable Agents from
Society,” 5th Workshop on Deception, Fraud and Trust in Agent Societies,
1st International Joint Conference on Autonomous Agents and Multi-Agent Systems
(AAMAS 2002), Bologna, Italy, July 15-19, 2002.
Copyright ©2002 by The University of Texas at Austin,
The Laboratory for Intelligent Processes and Systems