K. Kravari, N. Bassiliades, “DISARM: A Social Distributed Agent Reputation Model based on Defeasible Logic”, Journal of Systems and Software, Vol. 117, pp. 130–152, July 2016.

Author(s): K. Kravari, N. Bassiliades


Appeared In: Journal of Systems and Software, Vol. 117, pp. 130–152, July 2016

Keywords: Multi-agent Systems; Agent Reputation; Distributed Trust Management; Logic-Based Approach; Defeasible Reasoning; Semantic Web


Abstract: Agents act in open, and thus risky, environments with limited or no human intervention. Deciding whom to trust in order to interact is not only necessary but also challenging. To this end, trust and reputation models, based on interaction trust or witness reputation, have been proposed. Yet they often face skepticism, since they usually presuppose a centralized authority whose trustworthiness and robustness may be questioned. Distributed models, on the other hand, are more complex, but they are better suited to personalized estimations based on each agent's interests and preferences. Furthermore, distributed approaches allow the study of a particularly challenging aspect of multi-agent systems: the social relations among agents. To this end, this article proposes DISARM, a novel distributed reputation model. DISARM treats multi-agent systems as social networks, enabling agents to establish and maintain relationships and thereby limiting the disadvantages of common distributed approaches. Additionally, it is based on defeasible logic, modeling the way intelligent agents, like humans, draw reasonable conclusions from incomplete and possibly conflicting (thus inconclusive) information. Finally, we provide an evaluation that illustrates the usability of the proposed model.
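To give a flavor of the defeasible-logic idea mentioned in the abstract, the toy sketch below (not DISARM's actual inference engine; the rule names, literals, and trust scenario are invented for illustration) shows how a superiority relation resolves conflicting defeasible rules:

```python
# Toy illustration of defeasible conflict resolution: each rule whose
# premises already hold defeasibly supports a literal, and a
# superiority relation decides which of two conflicting rules wins.

def defeasible_conclusion(rules, superior):
    """rules: list of (rule_name, literal) pairs whose premises hold.
    superior: set of (winner, loser) rule-name pairs.
    Returns the literals that survive conflict resolution."""
    conclusions = set()
    for name, lit in rules:
        # the complementary literal, e.g. "trust_B" vs "~trust_B"
        neg = lit[1:] if lit.startswith("~") else "~" + lit
        attackers = [n for n, l in rules if l == neg]
        # a rule's conclusion stands only if it beats every attacker
        if all((name, a) in superior for a in attackers):
            conclusions.add(lit)
    return conclusions

# Hypothetical trust scenario: past interactions (r1) support trusting
# agent B, but a recent negative witness report (r2) is ranked higher.
rules = [("r1", "trust_B"), ("r2", "~trust_B")]
superior = {("r2", "r1")}
print(defeasible_conclusion(rules, superior))  # {'~trust_B'}
```

With the priorities reversed (r1 over r2), the same procedure would instead conclude trust_B, mirroring how defeasible reasoning lets stronger evidence override, rather than contradict, weaker evidence.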