November 15, 2007

Bounded autonomy: autonomy and dependence

“The agents have bounded autonomy.” What could this mean? Let us try to contribute to an answer with the help of (Conte and Castelfranchi 1995: chs. 2 and 3).

To be autonomous, an agent must be capable of generating new goals as means for achieving existing goals of its own. But, except for heavenly beings, autonomous agents are not self-sufficient: autonomy is limited by dependence. An agent depends on a resource when it needs that resource to perform some action that achieves one of its goals. Beyond resource dependence there is social dependence: an agent x depends on another agent y when, to achieve one of x’s goals, x needs an action of y. Agents can even treat other agents as resources. There is mutual dependence between two agents when they depend on each other to achieve one and the same goal. Dependences imply interests: a world state that favours the achievement of an agent’s goals is an interest of that agent.
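These dependence relations can be made concrete in a minimal sketch. Everything below (the Agent class, the needed_actions table, the function names) is an illustrative assumption of mine, not Conte and Castelfranchi’s formalism:

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    goals: set = field(default_factory=set)    # goals the agent wants to achieve
    actions: set = field(default_factory=set)  # actions the agent can perform

# x socially depends on y for a goal when achieving it requires some
# action that x cannot perform but y can.
def depends_on(x: Agent, y: Agent, goal: str, needed_actions: dict) -> bool:
    needed = needed_actions[goal]
    return goal in x.goals and any(
        a not in x.actions and a in y.actions for a in needed
    )

# Mutual dependence: x and y depend on each other for one and the same goal.
def mutually_dependent(x: Agent, y: Agent, goal: str, needed_actions: dict) -> bool:
    return depends_on(x, y, goal, needed_actions) and depends_on(y, x, goal, needed_actions)
```

For example, if lifting a table requires lifting both ends and each agent can only lift one end, the two agents are mutually dependent for that goal.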

The relations of dependence and interest hold whether an agent is aware of them or not. Objective relations between two or more agents, or between agents and the external world, are those relations that could be described by a non-participant observer even if they are not in the participants’ minds. There is, then, an objective base of social interaction. There is social interference between two agents when the achievement of one’s goals has some effect (positive or negative) on the other’s achieving its goals, whether those effects are intended or unintended by either agent.
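Because interference is defined from effects alone, an outside observer can compute it without access to the agents’ minds. A toy sketch, where the effects table is a hypothetical stand-in for a world model (all names are mine, not the book’s):

```python
def interference(x_name: str, y_goals: set, effects: dict) -> str:
    # effects maps (agent, goal) -> +1 if that agent's activity favours
    # the goal's achievement, -1 if it hinders it; absent pairs count as 0.
    total = sum(effects.get((x_name, g), 0) for g in y_goals)
    if total > 0:
        return "positive"
    if total < 0:
        return "negative"
    return "none"
```

Note that nothing here depends on what x or y intends: the interference is read off from the effects, matching the observer-based definition above.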

The limited autonomy of social agents also comes from the influencing relations between them. By acquiring beliefs about their interests, agents can acquire goals. An agent can have true beliefs about its interests, when those beliefs overlap with its objective interests; true beliefs about interests help in setting goals and planning action. But an agent can also have false beliefs about its interests, or be ignorant of some of its objective interests. Furthermore, the same agent can have conflicting interests (e.g., immediate vs. long-term interests).

Now, an agent can adopt another agent’s goals: if y has a goal g, and x wants y to achieve g as long as x believes that y wants to achieve g, we say that x has adopted y’s goal. Goal adoption can result from influencing: y can work to get x to adopt some of y’s goals. Through influencing, new goals can replace older ones. An agent x can even influence another agent y to adopt a goal g that serves x’s needs, even if g is not an interest of y.
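The adoption condition and the influencing move can be sketched as two small functions. The ("wants", y, g) triple for representing a belief, and the function names, are illustrative assumptions, not Conte and Castelfranchi’s notation:

```python
def influence(x_beliefs: set, y_name: str, g: str) -> set:
    # y influences x by inducing in x the belief that y wants g
    return x_beliefs | {("wants", y_name, g)}

def adopt_goal(x_goals: set, x_beliefs: set, y_name: str, g: str) -> set:
    # x adopts y's goal g only while x believes that y wants g;
    # without that belief, x's goals are unchanged
    if ("wants", y_name, g) in x_beliefs:
        return x_goals | {g}
    return x_goals
```

The design choice here mirrors the definition: adoption is conditional on x’s belief about y’s goal, so influencing works by manipulating beliefs, not goals directly.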

So, the bounded autonomy of agents comes from the relations of dependence and influencing that hold among them, and between them and the real world.

REFERENCE
(Conte and Castelfranchi 1995) CONTE, Rosaria, and CASTELFRANCHI, Cristiano, Cognitive and Social Action, London, UCL Press, 1995



Comments, criticism, questions, suggestions:

Pedro Lima 18 November 2007 at 16:01  

Bounded autonomy issues, from viewpoints related to the ones presented here, have been tackled by Multi-Agent and Multi-Robot Systems researchers. For instance, the interesting quote "Objective relations between two or more agents or between agents and the external world are those relations that could be described by a non-participant observer even if they are not in the participants' minds" corresponds to the definition of relational behaviours by (Lima and Custódio, 2006), used in the SocRob (http://socrob.isr.ist.utl.pt) project since 1998, and partially inspired by the notion of relational roles in (Drogoul and Zucker, 1998).

Similarly, an agent adopting another agent’s goals is a concept resembling (though different from) the notion of commitment expressed in (Cohen and Levesque, 1991), where the authors introduce the notion of joint commitment.

(Lima and Custódio, 2006) Lima, Pedro, and Custódio, Luís, "Multi-Robot Systems", Chapter I of Innovations in Robot Mobility and Control, S. Patnaik and S. Tzafestas (eds.), Springer Verlag, Berlin, 2006

(Drogoul and Zucker, 1998) Drogoul, A., and Zucker, J., "Methodological Issues for Designing Multi-Agent Systems with Machine Learning Techniques: Capitalizing Experiences from the RoboCup Challenge", Technical Report LIP6 1998/041, Laboratoire d'Informatique de Paris 6, 1998


(Cohen and Levesque, 1991) Cohen, P. R., and Levesque, H. J., "Teamwork", Noûs, Vol. 25, pp. 487-512, 1991
