November 15, 2007

Autonomy


Pfeifer and Bongard (2007), dealing with design principles for collective systems, suggest that, according to the “level of abstraction principle”, collective intelligence refers not only to groups of individuals, as in human societies, but equally “to any kind of assembly of similar agents”, including groups of modules in modular robotic systems or the organs that make up an entire organism (Pfeifer and Bongard 2007:241-243). The “level of abstraction principle” raises the following question: putting individuals in human societies on the same footing as organs or modules seems to ignore the very different degrees of autonomy enjoyed by, say, a human lung and a human individual. Pim Haselager helps to elaborate on that question.

According to Haselager, the following definition sums up various interpretations of autonomous agents circulating within AI: “Autonomous agents operate under all reasonable conditions without recourse to an outside designer, operator or controller while handling unpredictable events in an environment or niche” (Haselager 2005:518). This could serve as a working definition within robotics: it ties greater autonomy to less human intervention while the robot is operating, and it rules out completely predetermined environments.
From some philosophical perspectives, however, this conception of autonomy is unsatisfactory, because it lacks an appropriate emphasis on the reasons for acting. A truly autonomous agent must be capable of acting according to her own goals and choices, whereas robots do not choose their goals: programmers and designers are their sole providers. Nevertheless, roboticists can safely ignore this “free-will concept of autonomy”. Mechanistically inclined philosophers do the same: for them, free will is just an illusion, and even adult human beings have no real choices.

Haselager offers a third concept of autonomy that could narrow the gap between autonomy-in-robotics and autonomy-in-philosophy. This concept focuses on homeostasis and the intrinsic ownership of goals.
A system can have goals of its own, even if it cannot freely choose them, provided they matter to its success or failure. A robot owns its goals “when they arise out of the ongoing attempt, sustained by both the body and the control system, to maintain homeostasis” (Haselager 2005:523). For example, a robot regulating its own energy level is, in some sense, pursuing a goal of its own, even though it is not free to ignore that goal. Evolutionary robotics increases autonomy further by allowing the human programmer to withdraw from the design of that behaviour. The approach could be improved still more by co-evolving body and control system, and by adding autopoiesis to homeostasis. In any case, our understanding of autonomy, in both technical and philosophical terms, could benefit from these ways of experimenting with how goals become grounded in artificial creatures.
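To make the homeostasis idea concrete, here is a minimal sketch of such a control loop, in Python. Everything in it (the names SETPOINT, VIABLE_MIN and step, and the particular thresholds) is invented for illustration and does not come from Haselager's paper; the point is only that recharging is never commanded from outside, but emerges from the attempt to keep an internal variable within a viable range.

# Minimal homeostatic control loop (illustrative sketch only;
# all names and thresholds are invented for this example).

SETPOINT = 0.8      # desired internal energy level (fraction of capacity)
VIABLE_MIN = 0.2    # below this, homeostasis is lost and the robot fails

def step(battery_level, at_charger):
    """One control cycle: act so as to reduce the homeostatic error."""
    if battery_level < VIABLE_MIN:
        return "failure"        # homeostasis lost: the robot ceases to operate
    error = SETPOINT - battery_level
    if error > 0.3:
        return "seek_charger"   # large deficit: recharging dominates behaviour
    elif at_charger and error > 0:
        return "recharge"       # top up while any deficit persists
    else:
        return "explore"        # homeostasis satisfied: pursue other behaviour

# The "goal" of keeping battery_level near SETPOINT is not chosen by the
# robot, yet success or failure at it matters to the robot's continued
# operation -- Haselager's sense in which the goal is the robot's own.
print(step(0.4, at_charger=False))   # -> seek_charger
print(step(0.9, at_charger=False))   # -> explore

In an evolutionary-robotics setting, thresholds like these would themselves be tuned by selection for survival rather than set by hand, which is the sense in which the programmer withdraws from the design of the behaviour.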

Whether full autonomy is attainable remains an open question.


REFERENCES

(Pfeifer and Bongard 2007) PFEIFER, R., and BONGARD, J., How the Body Shapes the Way We Think, Cambridge, Massachusetts: The MIT Press, 2007

(Haselager 2005) HASELAGER, Willem F.G., “Robotics, philosophy and the problems of autonomy”, in Pragmatics & Cognition, 13(3), 515-532, 2005
