
Situation-aware trust management in multi-agent systems

by Han Yu

Institution: Nanyang Technological University
Year: 2014
Keywords: DRNTU::Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence
Record ID: 1374914
Full text PDF: http://hdl.handle.net/10356/59239


Trust is a mechanism people use to facilitate interactions in human societies, where risk and uncertainty are common. Over the past decade, the importance of trust management in computational intelligence research (e.g., in multi-agent systems (MASs)) has been recognized by both industry and academia. Computational trust models have been proposed for evaluating the trustworthiness of a trustee agent based on a wide range of evidence. Nevertheless, two important research problems remain open in this field. Firstly, how can the adverse effects of biased third-party testimonies on the accuracy of trustworthiness evaluation be mitigated? Secondly, how can trust-aware task delegation decisions be made so that the capacities of trustee agents are used efficiently to achieve high social welfare? This thesis presents research addressing these two problems. It first proposes a novel reinforcement learning based trust evidence aggregation model, the Actor-Critic Trust (ACT) model, to address the problem of biased testimonies. Using the ACT model, individual truster agents dynamically learn to adjust the selection of witness agents, the weight given to each of their testimonies, and the weights given to the collective opinion of the witnesses versus first-hand trust evidence when producing a trustworthiness evaluation. The model operates on observable changes in the MAS environment and has been shown to be robust against collusion among witness agents. The ACT model eliminates the manual tuning of weight parameters required by most existing trust models and makes agents more adaptive in changing environments.
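The weight-adjustment idea behind this kind of model can be sketched as follows. This is a simplified, hypothetical illustration, not the thesis's actual ACT formulation: a truster keeps one weight per witness plus a weight for its own first-hand evidence, aggregates a trustworthiness score as a weighted average, and then nudges each weight (actor-critic style) according to how well that source's report matched the observed interaction outcome. All names and the specific learning rule are assumptions made for the sketch.

```python
class TrustAggregator:
    """Illustrative weighted aggregation of witness testimonies and
    first-hand evidence, with outcome-driven weight updates."""

    def __init__(self, witnesses, lr=0.1):
        self.w = {name: 1.0 for name in witnesses}  # per-witness weights
        self.w_direct = 1.0                         # weight on first-hand evidence
        self.lr = lr                                # learning rate (assumed)

    def evaluate(self, testimonies, direct):
        """Weighted average of testimonies (dict witness -> score in [0,1])
        and the truster's own first-hand evidence."""
        total = self.w_direct + sum(self.w[n] for n in testimonies)
        score = self.w_direct * direct
        score += sum(self.w[n] * t for n, t in testimonies.items())
        return score / total

    def update(self, testimonies, direct, outcome):
        """Critic step: increase the weight of sources whose report was close
        to the actual outcome, decrease it otherwise (floored at 0.01)."""
        for n, t in testimonies.items():
            self.w[n] = max(0.01, self.w[n] + self.lr * (1 - 2 * abs(t - outcome)))
        self.w_direct = max(0.01, self.w_direct + self.lr * (1 - 2 * abs(direct - outcome)))
```

Under this rule, a colluding witness that consistently misreports sees its weight shrink over repeated interactions, so its testimony contributes less to future evaluations.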
This work then goes beyond the existing trust management research framework by removing a widespread assumption implicitly adopted by existing research: that a trustee agent can process an unlimited number of interaction requests per discrete time unit without compromising its performance as perceived by truster agents. The trust management problem is re-formalized as a multi-agent trust game based on the principles of the Congestion Game, and is solved with two trust-aware interaction decision-making approaches: 1) the Social Welfare Optimizing approach for Reputation-aware Decision-making (SWORD), and 2) the Distributed Request Acceptance approach for Fair utilization of Trustee agents (DRAFT). SWORD is designed for MASs where a central trusted entity is available, while DRAFT is designed for individual trustee agents in fully distributed MASs. Both approaches have been demonstrated to help an MAS achieve significantly higher social welfare than existing trust-aware interaction decision-making approaches. Theoretical analyses show that the social welfare produced by the two approaches can be brought closer to the optimum by adjusting a single key parameter. With these two approaches, the research framework used by current multi-agent trust models can be enriched to handle more realistic operating environment conditions.
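The congestion-game intuition above can be illustrated with a toy delegation rule: rather than routing every request to the single most reputable trustee (which overloads it and degrades perceived performance), requests are spread across trustees that still have spare per-time-step capacity, preferring higher reputation. This greedy sketch only conveys the intuition; it is not the actual SWORD or DRAFT algorithm, and all names and data shapes are assumptions.

```python
def delegate(requests, trustees):
    """Assign each request to the most reputable trustee with spare capacity.

    trustees: dict name -> (reputation, capacity per time step).
    Returns dict request -> trustee name, or None if all are saturated.
    """
    load = {name: 0 for name in trustees}        # requests assigned so far
    assignment = {}
    # Rank trustees by reputation, highest first.
    ranked = sorted(trustees, key=lambda n: trustees[n][0], reverse=True)
    for req in requests:
        for name in ranked:
            if load[name] < trustees[name][1]:   # spare capacity this step?
                assignment[req] = name
                load[name] += 1
                break
        else:
            assignment[req] = None               # everyone saturated; defer
    return assignment
```

For example, with trustee A (reputation 0.9, capacity 2) and trustee B (reputation 0.7, capacity 2), five simultaneous requests would be split two to A, two to B, and one deferred, instead of all five queuing at A.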