November 25, 2007

Social sciences and artificial societies


Epstein and Axtell argue that the modelling of artificial societies can constitute a new kind of explanation of social phenomena (Epstein and Axtell 1996:20).

Lansing (2002) argues that the modelling of artificial societies can profit from a broad historical perspective on the disputes among social scientists and philosophers about how to study social phenomena. To exemplify, he points out the parallel between some writings of Theodor Adorno on the positivist dispute in German sociology and the question that introduces Growing Artificial Societies: “How does the heterogeneous micro-world of individual behaviors generate the global macroscopic regularities of the society?” (Epstein and Axtell 1996:1) This is a classical problem of the social sciences: the micro-macro link problem, or the problem of social order.

A number of researchers take both the micro and the macro perspective together within Multi-agent Systems (MAS) modelling. Let us give just a few examples.

(Hexmoor et al. 2006), using game-theoretic concepts, studies norms as a possible solution to coordination problems. A normative agent is seen as an autonomous agent whose behaviour is shaped by the norms prevailing in the society, and one that decides whether to adopt or dismiss a norm on the basis of its goals, its representation of norms, its evaluation of the consequences of non-compliance, and the state of the environment.
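
As a toy illustration of that decision (a minimal sketch in the game-theoretic spirit of the paper, not Hexmoor et al.'s actual model; the function name and the numbers are purely illustrative), an agent might adopt a norm whenever complying pays at least as much as violating it once the expected sanction is taken into account:

# Illustrative sketch only -- not the model of Hexmoor et al. (2006).
def adopt_norm(payoff_comply, payoff_defect, sanction_cost, detection_prob):
    """Return True if the agent adopts the norm.

    payoff_comply  -- utility of the norm-following action
    payoff_defect  -- utility of the norm-violating action
    sanction_cost  -- penalty applied if a violation is detected
    detection_prob -- probability that the society detects a violation
    """
    expected_defect = payoff_defect - detection_prob * sanction_cost
    return payoff_comply >= expected_defect

# Defection pays slightly more in isolation, but likely sanctions make
# compliance the rational choice.
print(adopt_norm(payoff_comply=5, payoff_defect=7,
                 sanction_cost=10, detection_prob=0.5))   # True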

(Malsch and Weiß 2000), opposing more traditional (negative) views of conflict within MAS, suggest relaxing the assumption that coordination can be designed to perfection and acknowledging the beneficial effects of conflict for social life, as an opportunity to restructure social institutions. They further suggest importing conflict theories from sociology, even if “the best theory of conflict” does not exist.

(Sabater and Sierra 2005) reviews a selection of trust and reputation models used both in “virtual societies” (such as electronic markets, where reputation serves as a trust-enforcing mechanism to deter cheaters and fraud) and in fields like teamwork and cooperation.
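
To fix ideas, here is a minimal sketch of the kind of mechanism such models refine (it is not any particular model from the review; the class, the threshold and the ratings are invented for illustration): an agent records ratings of past interactions and trusts a partner only if the partner's average rating clears a threshold.

from collections import defaultdict

# Toy reputation store -- illustrative only, not a model from Sabater and Sierra (2005).
class ReputationStore:
    def __init__(self, trust_threshold=0.6):
        self.ratings = defaultdict(list)     # partner id -> ratings in [0, 1]
        self.trust_threshold = trust_threshold

    def record(self, partner, rating):
        """Store the outcome of one interaction (1.0 = fully satisfactory)."""
        self.ratings[partner].append(rating)

    def reputation(self, partner):
        """Average rating; unknown partners get a neutral 0.5."""
        history = self.ratings[partner]
        return sum(history) / len(history) if history else 0.5

    def trusts(self, partner):
        return self.reputation(partner) >= self.trust_threshold

store = ReputationStore()
store.record("seller_a", 1.0)
store.record("seller_a", 0.8)
store.record("seller_b", 0.2)
print(store.trusts("seller_a"), store.trusts("seller_b"))   # True False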

(Alonso 2004) argues for using rights and argumentation in MAS. If agents must comply with norms automatically, they can no longer be seen as autonomous. If they can violate norms to maximize their utility, the advantages of the normative approach evaporate and the normative framework does not stabilize the collective. The concept of rights offers a middle way out of this dilemma. Individuals have basic rights to execute certain sets of actions (under certain conditions), but rights are implemented collectively: agents are not allowed to inhibit the exercise of others’ rights, and the collective is obliged to prevent such inhibitory actions. Rights are not piecemeal permissions; they represent a system of values. Nobody can trade rights (not even their own); rights are beyond utility calculus. Systems of rights do not eliminate autonomy. Because they are typically incomplete or ambiguous, some argumentation mechanism must be at hand to solve underspecification problems.
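
The intuition of rights as collectively enforced, conditional permissions can be sketched as follows (this is an illustration of the general idea, not Alonso's formalism; every name here is made up):

# Illustrative sketch of "rights as a middle way" -- not Alonso's formal framework.
class RightsSystem:
    def __init__(self):
        # each right: (holder, action, condition on the current world state)
        self.rights = []

    def grant(self, holder, action, condition=lambda state: True):
        self.rights.append((holder, action, condition))

    def has_right(self, agent, action, state):
        return any(h == agent and a == action and cond(state)
                   for h, a, cond in self.rights)

    def collective_allows(self, action, target, target_action, state):
        """The collective is obliged to block actions that would inhibit
        another agent's exercise of a right."""
        inhibits = action == "block:" + target_action
        if inhibits and self.has_right(target, target_action, state):
            return False
        return True

rights = RightsSystem()
rights.grant("agent_x", "bid_in_auction", condition=lambda s: s["auction_open"])
state = {"auction_open": True}
print(rights.has_right("agent_x", "bid_in_auction", state))           # True
print(rights.collective_allows("block:bid_in_auction",
                               "agent_x", "bid_in_auction", state))   # False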

“Socionics” is a combination of sociology and computer science (Malsch & Schulz-Schaeffer 2007). The Socionics approach does not ignore emergence and self-organisation in societies. For example, the Social Reputation approach belongs to a strand of research on emergent mechanisms of social order: (Hahn et al. 2007) models reputation as a mechanism of flexible social self-regulation, valuable when agents working within the Socionics framework need to decide with whom to cooperate in particular circumstances. However, emergent self-organisation is often of no help in modelling complex social interaction, because such interaction involves individuals “capable of reflexively anticipating and even outwitting the outcome of collective social interaction at the global level of social structure formation” (Malsch & Schulz-Schaeffer 2007:§2.8). Why ignore that social norms and regulations exist in human societies? The projects described within the Socionics framework are in search of integrated approaches to both sides of a persistent controversy: is social structure an emergent (“bottom-up”) outcome of social action, or is social action constituted (“top-down”) by social structure? (Malsch & Schulz-Schaeffer 2007:§3.1)

The question now is: facing such a variety, how would we choose the most promising concept to deal with the problem of social order in artificial societies?



REFERENCES

(Epstein and Axtell 1996) EPSTEIN, J.M., and AXTELL, R., Growing Artificial Societies: Social Science from the Bottom Up, Washington D.C., The Brookings Institution and the MIT Press, 1996

(Lansing 2002) LANSING, J.S., “‘Artificial Societies’ and the Social Sciences”, in Artificial Life, 8, pp. 279-292

(Hexmoor et al. 2006) HEXMOOR, H., VENKATA, S.G., and HAYES, R., “Modelling social norms in multiagent systems”, in Journal of Experimental and Theoretical Artificial Intelligence, 18(1), pp. 49-71

(Malsch and Weiß 2000) MALSCH, T., and WEISS, G., “Conflicts in social theory and multiagent systems: on importing sociological insights into distributed AI”, in TESSIER, C., CHAUDRON, L., and MÜLLER, H.-J. (eds.), Conflicting Agents. Conflict Management in Multi-Agent Systems, Dordrecht, Kluwer Academic Publishers, 2000, pp. 111-149

(Sabater and Sierra 2005) SABATER, J., and SIERRA, C., “Review on Computational Trust and Reputation Models”, in Artificial Intelligence Review, 24(1), pp. 33-60

(Alonso 2004) ALONSO, E., “Rights and Argumentation in Open Multi-Agent Systems”, in Artificial Intelligence Review, 21(1), pp. 3-24

(Malsch & Schulz-Schaeffer 2007) MALSCH, Thomas and SCHULZ-SCHAEFFER, Ingo, “Socionics: Sociological Concepts for Social Systems of Artificial (and Human) Agents”, in Journal of Artificial Societies and Social Simulation, 10(1)

(Hahn et al. 2007) HAHN, Christian, FLEY, Bettina, FLORIAN, Michael, SPRESNY, Daniela and FISCHER, Klaus, “Social Reputation: a Mechanism for Flexible Self-Regulation of Multiagent Systems”, in Journal of Artificial Societies and Social Simulation, 10(1)





November 19, 2007

The doctrinal paradox, the discursive dilemma, and some problems of deliberative capabilities in multi-agent systems


Deliberative capabilities of multi-agent systems do not necessarily emerge from their individual members’ deliberative capabilities alone. However, we do not need any kind of telepathy (wireless direct communication between robots) or collective consciousness in order to conceptualize those capabilities. Pettit (2004) helps us understand the problem, leading us from the doctrinal paradox, identified in jurisprudence, to a generalized discursive dilemma that most deliberative collectives may face.

This is an example of the doctrinal paradox.
A three-judge court has to decide a tort case and judge the defendant liable if and only if the defendant’s negligence was causally responsible for the injury to the plaintiff and the defendant has a duty of care toward the plaintiff. Now, which decision has been taken when judges voted as follows?



            Cause of harm?   Duty of care?   Liable?
            (Premise 1)      (Premise 2)     (Conclusion)
Judge A     Yes              No              No
Judge B     No               Yes             No
Judge C     Yes              Yes             Yes


With a conclusion-centered procedure, the court decides “No”. With a premise-centered procedure, the court decides “Yes”, the conclusion following deductively from the conjunction of positive answers to both premises. The doctrinal paradox consists in reaching different outcomes for the same case with the same votes but different procedures. The same paradox can arise when the conclusion is linked to a disjunction of premises, for example when an appellant is to be given a retrial if either inadmissible evidence was used or a forced confession took place.



            Inadmissible evidence?   Forced confession?   Retrial?
Judge A     Yes                      No                   Yes
Judge B     No                       Yes                  Yes
Judge C     No                       No                   No
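
The disagreement between the two procedures is easy to reproduce computationally. The sketch below (illustrative code, not taken from Pettit) encodes the retrial example above and aggregates the very same votes in both ways:

# Votes on (inadmissible evidence?, forced confession?) -- the retrial example above.
votes = {
    "Judge A": (True, False),
    "Judge B": (False, True),
    "Judge C": (False, False),
}

def majority(values):
    return sum(values) > len(values) / 2

def conclusion_centered(votes):
    # Each judge derives a conclusion (retrial iff either premise holds);
    # the individual conclusions are then aggregated by majority.
    conclusions = [p1 or p2 for p1, p2 in votes.values()]
    return majority(conclusions)

def premise_centered(votes):
    # Each premise is aggregated by majority; the conclusion is then derived.
    premise1 = majority([p1 for p1, _ in votes.values()])
    premise2 = majority([p2 for _, p2 in votes.values()])
    return premise1 or premise2

print(conclusion_centered(votes))   # True  -> retrial granted
print(premise_centered(votes))      # False -> retrial denied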



The paradox is not confined to courts and the legal domain. It can arise within many groups, such as appointment and promotion committees or committees deciding who is to win a certain contract or prize. “It will arise whenever a group of people discourse together with a view to forming an opinion on a certain matter that rationally connects, by the lights of all concerned, with other issues” (Pettit 2004:170).

In a generalized version, the paradox is known as the discursive dilemma. Purposive groups (organizations with a specific function or goal, such as states, political parties or business corporations) will almost inevitably confront the discursive dilemma in an especially interesting version: they have to take a series of decisions over a period of time in a consistent and coherent way.
Take as an example a political party that takes each major decision by majority vote. It announces in March that it will not increase taxes if it gets into government, and announces in June that it will increase defence spending. In September it must announce whether it will increase spending in other policy areas. The following matrix (where A, B and C stand for voting behaviour patterns) shows the structure of the dilemma.



     Increase taxes?   Increase defence spending?   Increase other spending?
A    No                No                           No (reduce)
B    No                No (reduce)                  Yes
C    Yes               Yes                          Yes



If the party allows a majority vote on the last issue, it risks incoherence and, hence, discredit.
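
The risk can be made concrete with a small sketch (illustrative code; the budget constraint is an assumed simplification, not Pettit's formulation): every individual voting pattern respects the rule that new spending must be financed by new taxes or by a cut elsewhere, yet the issue-by-issue majority package violates it.

# Each pattern: (increase taxes?, defence spending, other spending),
# where spending answers are "increase", "hold" or "reduce".
patterns = {
    "A": (False, "hold", "reduce"),
    "B": (False, "reduce", "increase"),
    "C": (True, "increase", "increase"),
}

def coherent(taxes_up, defence, other):
    """Assumed constraint: any new spending needs new taxes or a cut elsewhere."""
    any_increase = "increase" in (defence, other)
    any_cut = "reduce" in (defence, other)
    return (not any_increase) or taxes_up or any_cut

def majority(flags):
    return sum(flags) > len(flags) / 2

def aggregate(patterns):
    """Issue-by-issue majority voting over the three questions."""
    members = list(patterns.values())
    taxes_up = majority([m[0] for m in members])
    def settle(i):
        if majority([m[i] == "increase" for m in members]):
            return "increase"
        if majority([m[i] == "reduce" for m in members]):
            return "reduce"
        return "hold"
    return taxes_up, settle(1), settle(2)

print([coherent(*p) for p in patterns.values()])   # [True, True, True]
package = aggregate(patterns)
print(package, coherent(*package))                 # (False, 'hold', 'increase') False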

This kind of situation can occur partly because in ordinary social life people (even within organizations) do not form preferences and take decisions on the basis of complete information and deep theoretical grounding. So collectives that pursue their own goals, whether those goals involve the outside world and/or their own members, must adopt some kind of collective reason, some mechanism that sustains coherent global behaviour towards those goals. Collective reason does not necessarily emerge from individuals’ reason alone.


REFERENCE

(Pettit 2004) PETTIT, Philip, “Groups with Minds of their Own”, in SCHMITT, Frederick (ed.), Socializing Metaphysics, New York, Rowman and Littlefield, 2004, pp. 167-193

More information at Philip Pettit’s web page.


November 15, 2007

Bounded autonomy: autonomy and dependence

“The agents have bounded autonomy.” What could this mean? Let us try to contribute to an answer with the help of (Conte and Castelfranchi 1995: chs. 2 and 3).

To be autonomous, an agent must be capable of generating new goals as means for achieving existing goals of its own. But, except for heavenly beings, autonomous agents are not self-sufficient: autonomy is limited by dependence. An agent depends on a resource when it needs that resource to perform some action that achieves one of its goals. Beyond resource dependence there is social dependence: an agent x depends on another agent y when, to achieve one of its goals, x needs an action of y. Agents can even treat other agents as resources. There is mutual dependence between two agents when they depend on each other to achieve one and the same goal. Dependences imply interests: a world state that favours the achievement of an agent’s goals is an interest of that agent.
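
These relations can be made concrete with a small sketch (illustrative data structures, not Conte and Castelfranchi's formal apparatus; agents, goals and actions are invented):

# Illustrative dependence network: goal -> action needed, action -> who/what supplies it.
needs = {
    "agent_x": {"eat": "obtain_food", "publish": "review_paper"},
    "agent_y": {"publish": "write_section"},
}
providers = {
    "obtain_food": {"field"},          # a resource, not an agent
    "review_paper": {"agent_y"},
    "write_section": {"agent_x"},
}

def depends_on(agent, other):
    """agent depends on other if some goal of agent needs an action other can supply."""
    return any(other in providers[action] for action in needs[agent].values())

def mutually_dependent(a, b, goal):
    """Both agents need each other for one and the same goal."""
    return (goal in needs[a] and goal in needs[b]
            and b in providers[needs[a][goal]]
            and a in providers[needs[b][goal]])

print(depends_on("agent_x", "field"))                       # resource dependence: True
print(depends_on("agent_x", "agent_y"))                     # social dependence: True
print(mutually_dependent("agent_x", "agent_y", "publish"))  # mutual dependence: True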

The relations of dependence and interest hold whether an agent is aware of them or not. Objective relations between two or more agents, or between agents and the external world, are relations that could be described by a non-participant observer even if they are not in the participants’ minds. So there is an objective basis of social interaction. There is social interference between two agents when the achievement of one agent’s goals has some (positive or negative) effect on the other agent achieving its goals, whether those effects are intended or unintended by either agent.
The limited autonomy of social agents also comes from the influencing relations between them. By acquiring beliefs about their interests, agents can acquire goals. An agent can have true beliefs about its interests, when those beliefs overlap with its objective interests. True beliefs about interests can help in setting goals and planning action. But an agent can also have false beliefs about its interests, as well as ignore some of its objective interests. Furthermore, the same agent can have conflicting interests (e.g. immediate vs. long-term interests).

Now, an agent can adopt another agent’s goals. If y has a goal g and x wants y to achieve g, as long as x believes that y wants to achieve g, we can say that x has adopted y’s goal. Goal adoption can be the result of influencing: y can work to make x adopt some of y’s goals. Through influencing, new goals can replace older ones. An agent x can influence another agent y to adopt a goal g that suits x’s needs, even if g is not in y’s interest.
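
A toy sketch of goal adoption through influencing (again illustrative, not the book's formal model): y instils in x the belief that a goal serves x's interests, and x thereupon adopts it, even though it is really y's goal.

# Illustrative only -- not Conte and Castelfranchi's formalism.
class Agent:
    def __init__(self, name, goals=None):
        self.name = name
        self.goals = set(goals or [])
        self.beliefs = set()    # beliefs about what serves this agent's interests

    def adopt_if_believed_useful(self, goal):
        """The agent adopts a goal it believes serves its interests."""
        if ("serves_my_interests", goal) in self.beliefs:
            self.goals.add(goal)

    def influence(self, other, goal):
        """Instil in the other agent the (possibly false) belief that the goal
        serves the other agent's interests."""
        other.beliefs.add(("serves_my_interests", goal))
        other.adopt_if_believed_useful(goal)

x = Agent("x")
y = Agent("y", goals={"help_y_move_house"})
y.influence(x, "help_y_move_house")
print("help_y_move_house" in x.goals)   # True: x has adopted y's goal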

So, the bounded autonomy of the agents comes from the relations of dependence and influencing holding among them, and between them and the real world.

REFERENCE
(Conte and Castelfranchi 1995) CONTE, Rosaria, and CASTELFRANCHI, Cristiano, Cognitive and Social Action, London, The University College London Press, 1995




Autonomy


Pfeifer and Bongard (2007), dealing with design principles for collective systems, suggest that, according to the “level of abstraction principle”, collective intelligence refers not only to groups of individuals, as in human societies, but equally “to any kind of assembly of similar agents”, including groups of modules in modular robotic systems or the organs that make up entire organisms (Pfeifer and Bongard 2007:241-243). Now, the “level of abstraction principle” raises the following question: putting individuals in human societies on the same footing as organs or modules amounts to ignoring the different degrees of autonomy enjoyed by a human lung and a human individual. Pim Haselager helps to elaborate on that question.

According to Haselager, the following definition sums up various interpretations of autonomous agents circulating within AI: “Autonomous agents operate under all reasonable conditions without recourse to an outside designer, operator or controller while handling unpredictable events in an environment or niche” (Haselager 2005:518). This could be a working definition within robotics, relating more autonomy to less intervention of human beings while the robot is operating, and ruling out completely predetermined environments.
However, from some philosophical perspectives this conception of autonomy is unsatisfactory, because it lacks an appropriate emphasis on the reasons for acting. A truly autonomous agent must be capable of acting according to her own goals and choices, whereas robots do not choose their goals: programmers and designers are the sole providers of the robots’ goals. Nevertheless, roboticists can safely ignore this “free-will concept of autonomy”. Mechanistically inclined philosophers do the same: for them, free will is just an illusion, and even adult human beings have no real choices.

Haselager offers a third concept of autonomy that could narrow the gap between autonomy-in-robotics and autonomy-in-philosophy. This concept focuses on homeostasis and the intrinsic ownership of goals.
A system can have goals of its own, even if it cannot freely choose them, if they matter to its success or failure. A robot owns its goals “when they arise out of the ongoing attempt, sustained by both the body and the control system, to maintain homeostasis” (Haselager 2005:523). For example, a robot regulating its energy level is in some way aiming at a goal of its own. This remains true even though the robot is not free to ignore that specific goal. Evolutionary robotics, which allows the human programmer to withdraw from the design of that behaviour, increases autonomy further. The approach could be improved still more by co-evolving body and control system, and by adding autopoiesis to homeostasis. In any case, our understanding of autonomy, in both technical and philosophical terms, could benefit from these ways of experimenting with how goals become grounded in artificial creatures.
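
A minimal sketch of that idea (an assumed toy control loop, not Haselager's proposal or any real robot architecture): the robot's behaviour is organized around keeping an internal variable, its energy level, inside a viable band.

def homeostatic_controller(energy, low=0.3, high=0.9):
    """Choose an action that keeps the energy level inside [low, high].
    Keeping this internal variable viable is a goal the robot 'owns':
    its success or failure matters to the robot's own persistence."""
    if energy < low:
        return "recharge"
    if energy > high:
        return "work"
    return "continue_current_task"

# Toy run: energy drains while working, rises while recharging.
energy = 0.5
for step in range(10):
    action = homeostatic_controller(energy)
    energy += 0.15 if action == "recharge" else -0.1
    print(step, action, round(energy, 2))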

Whether full autonomy is attainable remains an open question.


REFERENCES

(Pfeifer and Bongard 2007) PFEIFER, R., and BONGARD, J., How the Body Shapes the Way We Think, Cambridge, Massachusetts, The MIT Press, 2007

(Haselager 2005) HASELAGER, Willem F.G., “Robotics, philosophy and the problems of autonomy”, in Pragmatics & Cognition, 13(3), pp. 515-532





