Institutional Robotics<br>
An ongoing scientific, philosophical, and pragmatic research programme on Collective Robotics.<br>
A strategy to conceptualize multi-robot systems, which takes institutions as the main tool of the social life of robots.<br>
<br>
<strong><span style="font-size:130%;">Vida Institucional Artificial / Institutional Artificial Life</span></strong> <span style="font-size:85%;">(2009-11-04)</span><br />
<br />
A lecture in the 2009 edition of the series "From Human Societies to Artificial Societies" (session of March 26), organized by the Institute for Systems and Robotics (Instituto Superior Técnico, Lisbon) under the Intelligent Systems Laboratory.<br />
<br />
Below are the video of the lecture and the slides used on the occasion. The lecture can be reconstructed by combining the two resources: advance the slides manually as the talk progresses. Everything is in Portuguese.<br />
<br />
<object height="281" width="500"><param name="allowfullscreen" value="true" /><param name="allowscriptaccess" value="always" /><param name="movie" value="http://vimeo.com/moogaloop.swf?clip_id=7392071&server=vimeo.com&show_title=1&show_byline=0&show_portrait=0&color=00ADEF&fullscreen=1" /><embed src="http://vimeo.com/moogaloop.swf?clip_id=7392071&server=vimeo.com&show_title=1&show_byline=0&show_portrait=0&color=00ADEF&fullscreen=1" type="application/x-shockwave-flash" allowfullscreen="true" allowscriptaccess="always" width="500" height="281"></embed></object><br />
<br />
<div style="text-align: center;"><object data="http://static.slidesharecdn.com/swf/ssplayer2.swf?doc=vidainstitucionalartificial-090327162050-phpapp01&stripped_title=vida-institucional-artificial" height="355" type="application/x-shockwave-flash" width="425"><param name="allowFullScreen" value="true" /><param name="allowScriptAccess" value="always" /><param name="src" value="http://static.slidesharecdn.com/swf/ssplayer2.swf?doc=vidainstitucionalartificial-090327162050-phpapp01&stripped_title=vida-institucional-artificial" /><param name="allowfullscreen" value="true" /></object><br />
<br />
More on this conference <a href="http://institutionalrobotics2009.isr.ist.utl.pt/?p=72">here</a>.<br />
<br />
More on the whole series of conferences <a href="http://institutionalrobotics2009.isr.ist.utl.pt/">here</a>.<br />
</div>
<br /><br />
<strong><span style="font-size:130%;">Bio, Nano, Robo – New Challenges for Historians of Technology</span></strong> <span style="font-size:85%;">(2009-03-06)</span><br />
<div align="justify"><br /><span style="font-style: italic;">The <a href="http://www.shotlisbon2008.com/">SHOT (Society for the History of Technology) 2008 Annual Meeting</a> took place in Lisbon last October. I served as chair of session 1, on “Bio, Nano, Robo – New Challenges for Historians of Technology”, and as commentator on the three presentations in the programme. The following is a version of the commentary I gave on that occasion.</span><br /><br />We have just heard three interesting presentations on the history of nanotechnology, genetics as an economic endeavour, and robotics.<br />This session is about “new challenges for historians of technology”. Why must nanotechnology, genetics and robotics be taken as new challenges for historians of technology or historians of science?<br />I don’t know whether it is clear to you why nanotechnology, genetics and robotics fit well in the same session of a meeting like this one. It is not to me. But that is perhaps a difficulty of mine, because I am not a historian of technology, not even a historian.<br />I am a philosopher, currently working as a researcher at the Institute for Systems and Robotics at the Technical University of Lisbon. From that perspective, and not from the perspective of a historian, I decided to suggest something on this issue. I will suggest that we can deal with these questions within the framework of the sciences of the artificial.<br />Let me try to explain what I mean, as a commentary on the presentations.<br /><br />Christian Kehrt challenges the usual definitions of nanotechnology and suggests increasing our understanding of the field by taking a closer look at the visions of the actors involved. The point is that visions enable actors to open new fields of research and provide new symbolic resources and broader fields of possibilities, so triggering further technological development.<br />Christian identifies one of the core visions of nanotechnology: the idea of molecular engineering, which provides the grounding for a major venture: to transgress the limits of nature, to become a <span style="font-style: italic;">deus ex machina</span>, using molecular components and atoms.<br />My question is: what’s new about that vision?<br />As early as 1637 the French philosopher René Descartes, in his <span style="font-style: italic;">Discourse on the Method</span>, wrote that science can give us the practical knowledge needed to “render ourselves the lords and possessors of nature” (his words). This statement is usually taken as a manifesto of modern science. It is usually interpreted as a programme for physics and other scientific disciplines dealing with the external physical world.<br />However, Descartes added that becoming the lords and possessors of nature is a desirable result “especially for the preservation of health”. And he was using a broad concept of health and of medicine, as this quote testifies.
Health is the first and fundamental of all the blessings of this life, «for the mind is so intimately dependent upon the condition and relation of the organs of the body, that if any means can ever be found to render men wiser and more ingenious than hitherto, I believe that it is in medicine they must be sought for».<br />So, the grand vision of modern science, to make ourselves the lords and possessors of nature, is directed, from the beginning, to our own inner mechanisms, the bodily and mental mechanisms of human beings.<br />Still, as Christian Kehrt said, one of the core visions of nanotechnology is to transgress the limits of nature. This is perhaps new compared with Descartes’ vision.<br />Yet drawing a clear distinction between transgressing and respecting the limits of nature can prove harder than expected.<br /><br />Sally Hughes’ presentation can be taken as proof that there are people and companies struggling to modify the very concepts of respecting and transgressing the limits of nature.<br />She gave us some elements of the history of Genentech, the first company entirely devoted to genetic engineering. Genentech had to face a host of problems from its first days to its current status as the most successful biotechnology company in the world: problems with technology, with law, with politics, and with regulation. One of those problems was an intense legal dispute over the question “can one patent life?”. Genentech, despite the high level of risk and uncertainty that characterized the environment of the early development of this venture, made a spectacular debut on the New York stock market in the fall of 1980.<br />So, in a sense, Genentech managed to successfully modify the status of the question “what is it to respect nature?”. And it did so largely from outside the walls of scientific institutions: in courts, in companies, in the media, to change opinion. The social construction of the life sciences is not only for scientists. How can we deal with this? How can we even understand this?<br /><br />Kathleen Richardson presented the robots as they were invented, or reinvented, in the 1920s by Karel Čapek, as a means to provoke reflection about human societies, about what it means to be human. According to Kathleen, the targets of Čapek’s criticism were well-known political practices and ideologies.<br />Both utopian and dystopian elements were linked to robots. The utopian elements relate to the vision of a future society where machines work and humans only supervise them. The dystopian elements relate to production chains where workers are themselves tools of the machines.<br />Both the utopian and the dystopian elements Kathleen Richardson mentions as inspiring Čapek’s play relate to the collective organization of human societies: what place workers deserve within production structures, what social organization is most desirable.<br />We still need to concentrate on the question “what is it to be a human being?” to better understand the meaning of a family of technologies that have the potential to change the answer to that question.<br />Nowadays, however, utopian as well as dystopian elements of this kind are better placed at the individual level rather than at the collective level. We expect that medicines and prostheses can give us a better body, perhaps a better mind, a new face, the chance to replace a leg or a hand that we have lost.
Some expect to live longer, others to be able to choose children with such-and-such features and without such-and-such other ones. And some of us want all those opportunities to be a matter of individual choice.<br /><br />So, core visions of nanotechnology dream of transgressing the limits of nature, of playing <span style="font-style: italic;">deus ex machina</span> with molecules – even if perhaps we are not sure what “nature” means.<br />Companies deploy huge efforts on many lines of battle to ensure not only that we can engineer the living, but also that we can patent the results.<br />And old-fashioned fears about robots as creatures of ambiguous status are giving way to hopes of gaining access to new bodies and new minds for ourselves and our progeny.<br />How can we make sense of all this within a single framework?<br />How can we put the activities of companies, like Genentech; scientific and technological activities, like those being developed under the nano-banner; and cultural and political activities, like those related to Čapek’s play – how can we put all this together and understand it within a coherent vision?<br />My suggestion is to try to use the concept of the “sciences of the artificial” to do so.<br />I will borrow the main idea from Herbert Simon.<br />There is a book by Herbert Simon, <span style="font-weight: bold;">The Sciences of the Artificial</span>, first published in 1969, that can be of help here.<br />According to Simon, we cannot understand what it means to be artificial by opposing artificial to natural. Since “The world we live in today is much more a man-made world than it is a natural world”, defining the artificial as (wo)man-made has vanishing utility.<br />As Henry Petroski once wrote (<span style="font-weight: bold;">The Evolution of Useful Things</span>, 1994, ix), “Other than the sky and some trees, everything I can see from where I now sit is artificial. The desk, books, and computer before me; the chair, rug, and door behind me; the lamp, ceiling, and roof above me; the roads, cars, and buildings outside my window, all have been made by disassembling and reassembling parts of nature.”<br />So, we need to take a different way.<br />Artificial things, artefacts, are not only (wo)man-made things.<br />Simon’s suggestion is to define an artefact as an interface between an inner environment, the organization of the artefact itself, and an outer environment. If the inner environment is appropriate to the outer environment, or vice versa, the artefact is adapted to its purpose.<br />Now, artificial entities, or artefacts, are “all things that can be regarded as adapted to some situation”. And – this is a crucial point – according to this definition we must consider “living systems that have evolved through the forces of organic evolution” as artefacts or artificial things.<br />Some examples of Simon’s application of this definition are as follows.<br />The “economic man” is an artificial system. The inner system of the economic man is adapted to the outer economic environment, and its behaviour has evolved under the pressure of the economic environment.<br />In psychological terms, human beings are also artificial creatures. “Human beings, viewed as behaving systems, are quite simple. The apparent complexity of our behaviour over time is largely a reflection of the complexity of the environment in which we find ourselves.”
Psychology, the study of this adaptive process, must be seen as one of the sciences of the artificial.<br />If artefacts are all kinds of things that become adapted to a situation, we can ask how to make artefacts, how to make things that have wanted properties. The answer to that question is what Herbert Simon calls “the science of design”, the science of creating the artificial.<br />Engineers are professional designers. But doctors who prescribe remedies for a sick patient are also designers. A manager who devises a new sales plan for a company is a designer. A political leader who draws up a social welfare policy for a state is a designer. Schools of engineering, architecture, business, education, law, and medicine are all concerned with design.<br />If you are working on devising courses of action aimed at changing existing situations into preferred ones, you are designing; “The natural sciences are concerned with how things are. (…) Design, on the other hand, is concerned with how things ought to be (…)”.<br />Now, my suggestion is that all the presentations in this session can be seen as different approaches to a study of the artificial, or to applied aspects of the sciences of the artificial. They are all different approaches to the grand design of human beings, of human societies and of human culture, where artefacts carry almost all the meaning in our culture.<br />One point I want to make is that, in talking of “the sciences of the artificial”, nobody is proposing either a new scientific discipline or new departments at our universities. To some extent, Christian Kehrt got close to this vision when he said that, to understand the hopes and fears of nanotechnology, it is crucial to study the history of biotechnology and microelectronics. Our suggestion is that we need a new perspective: how are we shaping future human societies by engineering so many aspects of ourselves and of our communities? And how could we – historians, biologists, physicists, computer scientists, social scientists, and philosophers – work together on this new perspective? For example, trying to understand how technologies are taking the lead in this process. And trying to understand whether it is all right for our future to be shaped mainly by technological possibilities, sometimes without proper reflection on what it means to be human.<br />Could this perspective, from the “sciences of the artificial”, make sense to historians of technology?<br /></div>
<br /><br />
<strong><span style="font-size:130%;">The roundabout case study</span></strong> <span style="font-size:85%;">(2009-02-26)</span><br />
<div align="justify">Institutional Robotics is a new strategy to conceptualize multi-robot systems, which takes institutions as the main tool of the social life of robots with bounded rationality and bounded autonomy. This institutional approach intends to draw inspiration from philosophical and social-science research on social and collective phenomena, and is mainly inspired by concepts from Institutional Economics, an alternative to mainstream neoclassical economic theory. <div align="justify"><p>The goal is to have multiple robots developing activities in a shared environment with humans, in such a way that humans can interact with robots "naturally", intuitively, without needing to learn specific techniques to deal with them.
The focus is not one-to-one interaction, but social behaviour in physical and social environments populated by many natural as well as artificial agents. So the robots must be able to recognize institutions and institutional indicators that humans also recognize as structuring forms of their complex social relationships. This includes, for instance, rules, routines, signs, forms of organization of the material world, social roles, and social forms such as organizations or teams.</p><p><b>References</b>:<br /></p><p>SILVA, Porfírio, and LIMA, Pedro U., "Institutional Robotics", in Fernando Almeida e Costa <i>et al.</i> (eds.), Advances in Artificial Life. Proceedings of the 9th European Conference, ECAL 2007, Berlin and Heidelberg, Springer-Verlag, 2007, pp. 595-604 <a href="http://maquinaespeculativa.weblog.com.pt/Institutional%20Robotics%20na%20Springer.pdf" class="external text" rel="nofollow">link</a></p><p>SILVA, Porfírio, VENTURA, Rodrigo, and LIMA, Pedro U., "Institutional Environments", in Proc. of Workshop AT2AI: From agent theory to agent implementation, AAMAS 2008 - 7th International Conference on Autonomous Agents and Multiagent Systems, Estoril, Portugal, 2008 <a href="http://institutionalrobotics.wordpress.com/files/2008/03/institutional-environments_at2ai-6_workingnotes.pdf" class="external text" rel="nofollow">link</a></p><br /><br /><span style="font-size:130%;">The roundabout case study</span><br /><br />To put some of the basic concepts of Institutional Robotics to a first test, there is an ongoing case study with a minimalist setting. We want a set of robots to become able to behave as car drivers in an urban traffic scenario. The minimal setup represents several roundabouts connected by a small system of streets. The robots will have to know how to deal with basic aspects of the road code, some traffic signs, and agents playing special roles (police robots). Some more general rules, typical of human societies ("respect the integrity of other agents", for example), must also be acknowledged and respected by the robots. Teams of e-pucks (the small robots being used) should be able to act in a "normal", "conformist" way in the institutional environment while competing to carry out a particular task (for example, collecting energy). But the robots may also be able, guided by utility-based considerations, to opt for non-observance of the institutional framework. The experiment will address the consequences of that co-existence of "conformist" and "non-conformist" behaviours within the same "robotic society".<br /><br />
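To make the utility-based choice concrete, the following is a minimal sketch, in Python, of the kind of decision at stake. All names, payoffs and probabilities here are invented for illustration; this is not the controller actually running on the e-pucks.<br /><br />
<pre>
# Hypothetical sketch: a robot weighing observance of an institutional rule
# against a utility-driven violation. All names and numbers are invented.

def expected_utility(gain, sanction_probability, sanction_cost):
    """Expected utility of an action whose gain may be offset by a sanction."""
    return gain - sanction_probability * sanction_cost

def choose_behaviour(gain_conform, gain_violate, police_nearby):
    # A police robot nearby raises the chance that a violation is sanctioned.
    p_sanction = 0.8 if police_nearby else 0.1
    u_conform = expected_utility(gain_conform, 0.0, 0.0)
    u_violate = expected_utility(gain_violate, p_sanction, sanction_cost=5.0)
    return "conform" if u_conform >= u_violate else "violate"

# Cutting across the roundabout saves energy, but only pays off
# when no police robot is likely to sanction the violation.
print(choose_behaviour(1.0, 3.0, police_nearby=True))   # conform
print(choose_behaviour(1.0, 3.0, police_nearby=False))  # violate
</pre>
<br />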
The case study explores an aspect that is essential in many institutions: most of the time, institutions have both material and mental aspects. The roundabout in a traffic scenario instantiates that property. On the one hand, the roundabout, just by its physical features, constrains behaviour: vehicles cannot simply drive straight on; drivers must choose either to turn right or to turn left if they want to proceed. Now, doing that (deciding, in a conformist way, in Portugal, to go right) implies invoking a mental entity, a rule. It is well known that this rule is not the same in all countries. But it always combines with the material features of the roundabout to play its role in an institutional environment.<br /></div><br /><div align="center"><br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhB1U9mPT1VxkO7faiOL0v8fdN2sh5pGrtfmScM90Memng6AgeaT-tGTKtHaww_u6GJNM4-t_CeS007rKEj-HDMuNLyMUY8C7qS3lTeF0qbVPdaS4x0YhS1hch4DyrIvV9r8VWtBN8zDTXk/s1600-h/26set08-rotunda-epucks-142w.jpg"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 400px; height: 300px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhB1U9mPT1VxkO7faiOL0v8fdN2sh5pGrtfmScM90Memng6AgeaT-tGTKtHaww_u6GJNM4-t_CeS007rKEj-HDMuNLyMUY8C7qS3lTeF0qbVPdaS4x0YhS1hch4DyrIvV9r8VWtBN8zDTXk/s400/26set08-rotunda-epucks-142w.jpg" alt="" id="BLOGGER_PHOTO_ID_5307080082370335586" border="0" /></a><span style="font-size:85%;">Part of the experimental setup<br />at Researchers’ Night 2008 (26th September 2008, Centro Cultural de Belém)</span><br /><br /><br /><br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh_D0TnnuPG9L1hqEb24wrUIwLjjJtFmocWgh8JbTgwDzFVLyEX-NU1-hG7p5WOmScwgMBX7sXYkoSvhyztIpr9NymiN8SSrzM4HonWF98KXlZwj9HUZ-ouReg_MNFgNUKitb87mp91iz9x/s1600-h/Experiment_small.png"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 400px; height: 310px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh_D0TnnuPG9L1hqEb24wrUIwLjjJtFmocWgh8JbTgwDzFVLyEX-NU1-hG7p5WOmScwgMBX7sXYkoSvhyztIpr9NymiN8SSrzM4HonWF98KXlZwj9HUZ-ouReg_MNFgNUKitb87mp91iz9x/s400/Experiment_small.png" alt="" id="BLOGGER_PHOTO_ID_5307080086298604306" border="0" /></a><span style="font-size:85%;">A later step of the experiment<br /></span><br /><br /><object width="425" height="344"><param name="movie" value="http://www.youtube.com/v/mEImm1aAH_o&hl=pt-br&fs=1"><param name="allowFullScreen" value="true"><param name="allowscriptaccess" value="always"><embed src="http://www.youtube.com/v/mEImm1aAH_o&hl=pt-br&fs=1" type="application/x-shockwave-flash" allowscriptaccess="always" allowfullscreen="true" width="425" height="344"></embed></object><br /><span style="font-size:85%;">Basic behaviours: obstacle avoidance, wall following.<br /></span><br /><br /><object width="425" height="344"><param name="movie" value="http://www.youtube.com/v/PUuHuMEyc1A&hl=pt-br&fs=1"><param name="allowFullScreen" value="true"><param name="allowscriptaccess" value="always"><embed src="http://www.youtube.com/v/PUuHuMEyc1A&hl=pt-br&fs=1" type="application/x-shockwave-flash" allowscriptaccess="always" allowfullscreen="true" width="425" height="344"></embed></object><br /><span style="font-size:85%;">Cognition does not preclude emergence: one e-puck got stuck on a small elevation; another robot, just passing through and unaware of the situation, smoothed down the elevation with its own weight and freed its fellow.</span><br /><br />(José Nuno Pereira, at ISR/IST,
is a crucial participant in the roundabout case study.)<br /></div></div>
<br /><br />
<strong><span style="font-size:130%;">Social sciences and artificial societies</span></strong> <span style="font-size:85%;">(2007-11-25)</span><br />
<div align="justify"><br />Epstein and Axtell argue that artificial societies modelling can constitute a new kind of explanation of social phenomena (Epstein and Axtell 1996:20).<br /><br />Lansing (2002) argues that the modelling of artificial societies can profit from a broad historical perspective on the disputes among social scientists and philosophers on how to study social phenomena. To exemplify, he points out the parallel between some writings of Theodor Adorno on the positivist dispute in German sociology and the question that introduces Growing Artificial Societies: “How does the heterogeneous micro-world of individual behaviors generate the global macroscopic regularities of the society?” (Epstein and Axtell 1996:1) This is a classical problem of the social sciences, <em>the micro-macro link problem</em> or <em>the problem of social order</em>. <br /><br />A number of research efforts take both perspectives together within Multi-agent Systems (MAS) modelling. Let us give just a few examples.<br /><br />(Hexmoor et al. 2006), using game-theoretic concepts, studies norms as a possible solution to coordination problems. A normative agent is seen as an autonomous agent whose behaviour is shaped by the norms prevailing in the society, and one who decides, on the basis of its goals, its representation of norms, its evaluation of the consequences of not complying, and the state of the environment, whether to adopt a norm or dismiss it.<br /><br />(Malsch and Weiß 2000), opposing more traditional (negative) views on conflict within MAS, suggest relaxing the assumption that coordination can be designed to perfection and acknowledging conflicts’ beneficial effects for social life, as opportunities for restructuring social institutions. They further suggest importing conflict theories from sociology, even if “the best theory of conflict” does not exist.<br /><br />(Sabater and Sierra 2005) reviews a selection of trust and reputation models in use both in “virtual societies” (such as electronic markets, where reputation is used as a trust-enforcing mechanism to avoid cheaters and fraud) and in fields like teamwork and cooperation. <br /><br />(Alonso 2004) argues for using rights and argumentation in MAS. If agents must comply with norms automatically, they are not seen as autonomous any more. If they can violate norms to maximize utilities, the advantages of the normative approach evaporate and the normative framework does not stabilize the collective. The concept of rights offers a middle way out of the dilemma. Individuals have basic rights to execute some sets of actions (under certain conditions), but rights are implemented collectively. Agents are not allowed to inhibit the exercise of others’ rights, and the collective is obliged to prevent such inhibitory action. Rights are not piecemeal permissions; they represent a system of values. Nobody can trade rights away (not even their own); rights are beyond utility calculus. Systems of rights do not eliminate autonomy.
Because rights systems are typically incomplete or ambiguous, some argumentation mechanism must be at hand to solve underspecification problems.<br /><br />“Socionics” is a combination of sociology and computer science (Malsch & Schulz-Schaeffer 2007). The Socionics approach does not ignore emergence and self-organisation in societies. For example, the Social Reputation approach belongs to a strand of research about emergent mechanisms of social order. (Hahn et al. 2007) models reputation as a mechanism of flexible social self-regulation, valuable when agents, working within the framework of Socionics, need to decide with whom to cooperate in certain circumstances. However, emergent self-organisation is often of no help in modelling complex social interaction, because the latter involves individuals “capable of reflexively anticipating and even outwitting the outcome of collective social interaction at the global level of social structure formation” (Malsch & Schulz-Schaeffer 2007:§2.8). Why ignore that social norms and regulations exist in human societies? The projects described within the Socionics framework are in search of integrated approaches to both sides of a persistent controversy: is social structure an emergent (“bottom up”) outcome of social action, or is social action constituted (“top down”) from social structure? (Malsch & Schulz-Schaeffer 2007:§3.1)<br /><br /><em>The question now is: facing such a variety, how would we choose the most promising concept to deal with the problem of social order in artificial societies?</em><br /><br /><br /><br />REFERENCES<br /><br />(Epstein and Axtell 1996) EPSTEIN, J.M., and AXTELL, R., <strong>Growing Artificial Societies: Social Science from the Bottom Up</strong>, Washington D.C., The Brookings Institution and the MIT Press, 1996<br /><br />(Lansing 2002) LANSING, J.S., “<a href="http://www.ic.arizona.edu/~lansing/ArtSoc.pdf">‘Artificial Societies’ and the Social Sciences</a>”, in <em>Artificial Life</em>, 8, pp. 279-292<br /><br />(Hexmoor et al. 2006) HEXMOOR, H., VENKATA, S.G., and HAYES, R., “<a href="http://www.cs.siu.edu/~hexmoor/CV/PUBLICATIONS/JOURNALS/JETAI-06/JETAI-DH.pdf">Modelling social norms in multiagent systems</a>”, in <em>Journal of Experimental and Theoretical Artificial Intelligence</em>, 18(1), pp. 49-71<br /><br />(Malsch and Weiß 2000) MALSCH, T., and WEIß, G., “<a href="http://www.agent.ai/doc/upload/200405/mals00_1.pdf">Conflicts in social theory and multiagent systems: on importing sociological insights into distributed AI</a>”, in TESSIER, C., CHAUDRON, L., and MÜLLER, H.-J. (eds.), <strong>Conflicting Agents. Conflict Management in Multi-Agent Systems</strong>, Dordrecht, Kluwer Academic Publishers, 2000, pp. 111-149<br /><br />(Sabater and Sierra 2005) SABATER, J., and SIERRA, C., “<a href="http://www.iiia.csic.es/~jsabater/Publications/2005-AIR.pdf">Review on Computational Trust and Reputation Models</a>”, in <em>Artificial Intelligence Review</em>, 24(1), pp. 33-60<br /><br />(Alonso 2004) ALONSO, E., “Rights and Argumentation in Open Multi-Agent Systems”, in <em>Artificial Intelligence Review</em>, 21(1), pp. 3-24<br /><br />(Malsch & Schulz-Schaeffer 2007) MALSCH, Thomas, and SCHULZ-SCHAEFFER, Ingo, “<a href="http://jasss.soc.surrey.ac.uk/10/1/11.html">Socionics: Sociological Concepts for Social Systems of Artificial (and Human) Agents</a>”, in <em>Journal of Artificial Societies and Social Simulation</em>, 10(1) <br /><br />(Hahn et al.
2007) HAHN, Christian, FLEY, Bettina, FLORIAN, Michael, SPRESNY, Daniela, and FISCHER, Klaus, “<a href="http://jasss.soc.surrey.ac.uk/10/1/2.html">Social Reputation: a Mechanism for Flexible Self-Regulation of Multiagent Systems</a>”, in <em>Journal of Artificial Societies and Social Simulation</em>, 10(1) <br /><br /></div>
<br /><br />
<strong><span style="font-size:130%;">The doctrinal paradox, the discursive dilemma, and some problems of deliberative capabilities in multi-agent systems</span></strong> <span style="font-size:85%;">(2007-11-19)</span><br />
<div align="justify"><br />Deliberative capabilities of multi-agent systems do not necessarily emerge from their individual members' deliberative capabilities alone. Still, we do not need any kind of telepathy (wireless direct communication between robots) or collective consciousness in order to conceptualize those capabilities. Pettit (2004) helps us understand the problem, leading us from the doctrinal paradox, identified in jurisprudence, to a generalized discursive dilemma that most deliberative collectives may face.<br /><br /><strong>This is an example of the doctrinal paradox.</strong><br />A three-judge court has to decide a tort case and judge the defendant liable if and only if the defendant’s negligence was causally responsible for the injury to the plaintiff and the defendant had a duty of care toward the plaintiff. Now, which decision has been taken when the judges vote as follows?</div><br /><br /><table bgcolor="white" bordercolor="green" cellspacing="1" cols="4" cellpadding="1" width="80%" align="center" border="2"><tr align="middle"><td></td><td>Cause of harm?<br>(Premise 1)</td><td>Duty of care?<br>(Premise 2)</td><td>Liable?<br>(Conclusion)</td></tr><tr align="middle"><td>Judge A</td><td>Yes</td><td>No</td><td>No</td></tr><tr align="middle"><td>Judge B</td><td>No</td><td>Yes</td><td>No</td></tr><tr align="middle"><td>Judge C</td><td>Yes</td><td>Yes</td><td>Yes</td></tr></table><br /><br /><div align="justify">With a conclusion-centered procedure, the court decides “No”. With a premise-centered procedure, the court decides “Yes”, the conclusion following deductively from the conjunction of positive answers to both premises. <strong>The doctrinal paradox consists in having different outcomes for the same case with the same votes but different procedures.</strong> The same paradox can arise with a conclusion linked to a disjunction of premises: for example, when an appellant should be given a retrial either if inadmissible evidence has been used or if a forced confession has taken place.</div><br /><br /><table bgcolor="white" bordercolor="green" cellspacing="1" cols="4" cellpadding="1" width="80%" align="center" border="2"><tbody><tr align="middle"><td></td><td>Inadmissible evidence?</td><td>Forced confession?</td><td>Retrial?</td></tr><tr align="middle"><td>Judge A</td><td>Yes</td><td>No</td><td>Yes</td></tr><tr align="middle"><td>Judge B</td><td>No</td><td>Yes</td><td>Yes</td></tr><tr align="middle"><td>Judge C</td><td>No</td><td>No</td><td>No</td></tr></tbody></table><br /><br /><div align="justify"><br />The paradox is not confined to courts and the legal domain. It can arise within many groups, like appointment and promotion committees, or committees deciding who is to win a certain contract or a prize.
“It will arise whenever a group of people discourse together with a view to forming an opinion on a certain matter that rationally connects, by the lights of all concerned, with other issues” (Pettit 2004:170).<br /><br /><strong>In a generalized version, the paradox is named the discursive dilemma.</strong> Purposive groups (organizations with a specific function or goal, like states, political parties or business corporations) will almost inevitably confront the discursive dilemma in an especially interesting version. <strong>They have to take a series of decisions over a period of time in a consistent and coherent way.</strong><br />Take as an example a political party that takes each major decision by majority vote. It announces in March that it will not increase taxes if it gets into government, and announces in June that it will increase defence spending. In September it must announce whether it will increase spending in other policy areas. The following matrix (where A, B, C stand for voting behaviour patterns) shows the dilemma’s structure.</div><br /><br /><br /><table bgcolor="white" bordercolor="green" cellspacing="1" cols="4" cellpadding="1" width="80%" align="center" border="2"><tbody><tr align="middle"><td></td><td>Increase taxes?</td><td>Increase defence spending?</td><td>Increase other spending?</td></tr><tr align="middle"><td>A</td><td>No</td><td>No</td><td>No (reduce)</td></tr><tr align="middle"><td>B</td><td>No</td><td>No (reduce)</td><td>Yes</td></tr><tr align="middle"><td>C</td><td>Yes</td><td>Yes</td><td>Yes</td></tr></tbody></table><br /><br /><div align="justify"><br />If the party allows a majority vote on the last issue, it risks incoherence and, hence, discredit.<br /><br />This kind of situation can occur partly because, in ordinary social life, people (even within organizations) do not form preferences and take decisions on the basis of complete information and deep theoretical grounds. So, collectives that aim to achieve their own goals, involving the outside world and/or their own members, must adopt some kind of collective reason, some mechanism to sustain coherent global behaviour towards those goals. <strong>Collective reason does not necessarily emerge from individuals’ reason alone.</strong><br /><br />
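The gap between the two aggregation procedures is easy to check mechanically. Here is a small Python sketch (a toy illustration of the first table above, not code from Pettit’s paper) that aggregates the judges’ votes both ways:<br /><br />
<pre>
# Toy illustration of the doctrinal paradox: the same votes yield different
# collective decisions under conclusion-centered vs premise-centered procedures.

votes = {  # (cause of harm?, duty of care?); liability = premise 1 AND premise 2
    "Judge A": (True, False),
    "Judge B": (False, True),
    "Judge C": (True, True),
}

def majority(values):
    return sum(values) > len(values) / 2

# Conclusion-centered: each judge derives liability, then the court votes on it.
conclusion_centered = majority([p1 and p2 for (p1, p2) in votes.values()])

# Premise-centered: the court votes on each premise, then derives liability.
premise1 = majority([p1 for (p1, _) in votes.values()])  # True (Judges A and C)
premise2 = majority([p2 for (_, p2) in votes.values()])  # True (Judges B and C)
premise_centered = premise1 and premise2

print(conclusion_centered, premise_centered)  # False True -- the paradox
</pre>
<br />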
REFERENCE<br /><br />(Pettit 2004) PETTIT, Philip, “<a href="http://www.princeton.edu/~ppettit/papers/GroupswithMinds_2004.pdf">Groups with Minds of their Own</a>”, in SCHMITT, Frederick (ed.), <strong>Socializing Metaphysics</strong>, New York, Rowman and Littlefield, 2004, pp. 167-193 (click to get the paper as a pdf file)<br /><br />More information at <a href="http://www.princeton.edu/~ppettit/index.htm">Philip Pettit's Web Page</a><br /></div>
<br /><br />
<strong><span style="font-size:130%;">Bounded autonomy: autonomy and dependence</span></strong> <span style="font-size:85%;">(2007-11-15)</span><br />
<div align="justify">“The agents have bounded autonomy.” What could this mean? Let us try to contribute to an answer with the help of (Conte and Castelfranchi 1995: Chps. 2 and 3).<br /><br />To be autonomous, an agent must be capable of generating new goals as means for achieving existing goals of its own. But, except for heavenly beings, autonomous agents are not self-sufficient. Their autonomy is limited by dependence. An agent depends on a resource when he needs it to perform some action to achieve one of his goals. Beyond resource dependence, there is social dependence: an agent <em>x</em> depends on another agent <em>y</em> when, to achieve one of his goals, <em>x</em> needs an action of <em>y</em>. Agents can even treat other agents as resources. There is mutual dependence between two agents when they depend on each other to achieve one and the same goal. Dependences imply interests: a world state that favours the achievement of an agent’s goals is an interest of that agent.<br /><br />The relations of dependence and interest hold whether an agent is aware of them or not. Objective relations between two or more agents, or between agents and the external world, are those relations that could be described by a non-participant observer even if they are not in the participants’ minds. So, there is an objective base of social interaction. There is social interference between two agents when the achievement of one’s goals has some (positive or negative) effect on the other achieving his goals – be those effects intended or unintended by any agent.<br />The limited autonomy of social agents comes also from influencing relations between them. By acquiring beliefs about their interests, agents can acquire goals. An agent can have true beliefs about his interests, when they overlap with objective interests. True beliefs about interests can help in setting goals and planning action. But an agent can also have false beliefs about interests, as well as ignore some of his objective interests. Furthermore, there can be conflicting interests of the same agent (e.g., immediate vs. long-term interests).<br /><br />Now, an agent can adopt another agent’s goals. If <em>y</em> has a goal <strong>g</strong> and <em>x</em> wants <em>y</em> to achieve <strong>g</strong> as long as <em>x</em> believes that <em>y</em> wants to achieve <strong>g</strong>, we can say that <em>x</em> has adopted <em>y</em>’s goal. Goal adoption can be a result of influencing: <em>y</em> can work to have <em>x</em> adopt some of <em>y</em>’s goals. By influencing, new goals can replace older ones. An agent <em>x</em> can influence another agent <em>y</em> to adopt a goal <strong>g</strong> according to <em>x</em>’s needs, even if that goal <strong>g</strong> is not an interest of <em>y</em>. <br /><br />So, the <em>bounded</em> autonomy of the agents comes from the relations of dependence and influencing holding among them, and between them and the real world.<br /><br />
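These definitions lend themselves to a direct formal reading. The following Python sketch (an illustration with invented agent and goal names, not code from Conte and Castelfranchi) represents social dependence as a relation and checks it for mutual dependence:<br /><br />
<pre>
# Minimal sketch of the dependence relations just defined (invented examples).
# depends[(x, g)] = y  means: agent x needs an action of agent y
# in order to achieve goal g (social dependence).

depends = {
    ("robot1", "open_door"): "robot2",  # robot1 needs robot2's action...
    ("robot2", "open_door"): "robot1",  # ...and vice versa, for the same goal
    ("robot1", "recharge"): "robot3",   # one-way dependence only
}

def socially_depends(x, g):
    """Return the agent on whom x depends for goal g, if any."""
    return depends.get((x, g))

def mutually_dependent(x, y, g):
    """Mutual dependence: x and y depend on each other for one and the same goal."""
    return socially_depends(x, g) == y and socially_depends(y, g) == x

print(mutually_dependent("robot1", "robot2", "open_door"))  # True
print(mutually_dependent("robot1", "robot3", "recharge"))   # False
</pre>
<br />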
REFERENCE<br />(Conte and Castelfranchi 1995) CONTE, Rosaria, and CASTELFRANCHI, Cristiano, <strong>Cognitive and Social Action</strong>, London, The University College London Press, 1995<br /></div>
<br /><br />
<strong><span style="font-size:130%;">Autonomy</span></strong> <span style="font-size:85%;">(2007-11-15)</span><br />
<div align="justify"><br />Pfeifer and Bongard (2007), dealing with design principles for collective systems, suggest that, according to the “level of abstraction principle”, collective intelligence refers not only to groups of individuals, as in human societies, but equally “to any kind of assembly of similar agents”, including groups of modules in modular robotic systems or the organs that make up entire organisms (Pfeifer and Bongard 2007:241-243). Now, the “level of abstraction principle” raises the following question: to put individuals in (for example) human societies on the same footing as organs or modules amounts to ignoring the different degrees of autonomy enjoyed by a human lung and a human individual. Pim Haselager helps to elaborate on that question.<br /><br />According to Haselager, the following definition sums up the various interpretations of autonomous agents circulating within AI: “Autonomous agents operate under all reasonable conditions without recourse to an outside designer, operator or controller while handling unpredictable events in an environment or niche” (Haselager 2005:518). This could be a working definition within robotics, relating more autonomy to less intervention by human beings while the robot is operating, and ruling out completely predetermined environments.<br />However, from some philosophical perspectives this conception of autonomy would be unsatisfactory, because it lacks an appropriate emphasis on the reasons for acting. A truly autonomous agent must be capable of acting according to her own goals and choices, while robots do not choose their goals: programmers and designers are the sole providers of goals to the robots. Notwithstanding, roboticists can safely ignore this “free-will concept of autonomy”. Mechanistically inclined philosophers do the same. For them, free will is just an illusion, and even adult human beings have no real choices.<br /><br />Haselager offers a third concept of autonomy that could narrow the gap between <em>autonomy-in-robotics</em> and <em>autonomy-in-philosophy</em>. This concept of autonomy focuses on homeostasis and the intrinsic ownership of goals.<br />A system can have its own goals, even if it cannot freely choose them, if they matter to its success or failure. A robot owns its goals “when they arise out of the ongoing attempt, sustained by both the body and the control system, to maintain homeostasis” (Haselager 2005:523). For example, a robot regulating its level of energy is in some way aiming for a goal of its own. This is still true despite the fact that the robot is not free to ignore that specific goal. Evolutionary robotics, by allowing the human programmer to withdraw from the design of that behaviour, increases autonomy further. That approach could be improved with the co-evolution of body and control systems, as well as by adding <em>autopoiesis</em> to homeostasis.
Notwithstanding, our understanding of autonomy, in both technical and philosophical terms, could benefit from those ways of experimenting with how goals become grounded in artificial creatures.<br /><br />Whether full autonomy is attainable remains an open question.<br /><br /><br />REFERENCES<br /><br />(Pfeifer and Bongard 2007) PFEIFER, R., and BONGARD, J., <strong>How the Body Shapes the Way We Think</strong>, Cambridge: Massachusetts, The MIT Press, 2007<br /><br />(Haselager 2005) HASELAGER, Willem F.G., “<a href="http://www.nici.kun.nl/~haselag/publications/PragCogHaselager05.pdf">Robotics, philosophy and the problems of autonomy</a>”, in <em>Pragmatics & Cognition</em>, 13(3), pp. 515-532 (click to open the pdf file)<br /><br /><a href="http://www.nici.kun.nl/~haselag/">Haselager page</a></div>
<br /><br />
<strong><span style="font-size:130%;">A Instituição, Manuel Botelho, 1985</span></strong> <span style="font-size:85%;">(2007-10-19)</span><br />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhT898ev8nxGhdexY27V6CXFgmfroQNPoAxH2PvhaDNjdJoM8Vo3JJnTf8o7t6MR3tNBDEuTq_JUKEckxiEZ7ygFeuK7bhIRNKnXJBOXGvJ4467idQ4I5rKQqc-GEoQ-nx2uVyMGD77rb8G/s1600-h/MBotelho-AInstituicao-1985w.jpg"><img id="BLOGGER_PHOTO_ID_5122806656011299618" style="DISPLAY: block; MARGIN: 0px auto 10px; CURSOR: hand; TEXT-ALIGN: center" alt="" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhT898ev8nxGhdexY27V6CXFgmfroQNPoAxH2PvhaDNjdJoM8Vo3JJnTf8o7t6MR3tNBDEuTq_JUKEckxiEZ7ygFeuK7bhIRNKnXJBOXGvJ4467idQ4I5rKQqc-GEoQ-nx2uVyMGD77rb8G/s400/MBotelho-AInstituicao-1985w.jpg" border="0" /></a><br /><div><em>The Institution</em> (<em>A Instituição</em>), Manuel Botelho, 1985, charcoal on paper, collection of the Fundação Calouste Gulbenkian, Lisbon<br /><br />(click to zoom in)</div>
<br /><br />
<strong><span style="font-size:130%;">Best Philosophy Paper Award - ECAL 2007</span></strong> <span style="font-size:85%;">(2007-10-18)</span><br />
<div align="justify">The paper "Institutional Robotics", by Porfírio Silva (a philosopher) and Pedro Lima (a roboticist), received the Best Philosophy Paper Award at ECAL 2007 - the 9th European Conference on Artificial Life, held in Lisbon from 10th to 14th September 2007.<br /><br /><a href="http://maquinaespeculativa.weblog.com.pt/Institutional%20Robotics%20na%20Springer.pdf">SILVA, Porfírio, and LIMA, Pedro U., "Institutional Robotics", <em>in</em> Fernando Almeida e Costa <em>et al.</em> (eds.), Advances in Artificial Life. Proceedings of the 9th European Conference, ECAL 2007, Berlin and Heidelberg, Springer-Verlag, 2007, pp.
595-604</a><br /><br /></div><p align="center"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhTKccKHBgUBOLu9Np8Hkt-Jm2AGSFumzszYYISwNsESY_7YWwH4yn3QGkCtnmQL6qHSnSAiLf3oFmp2ty78Cy5uhW3TDVfC4D2oF_Sg5tNB0lc4As5Lntxl0l9Nh-RorJPuJWiv3lsf8o/s1600-h/best-phil-paper.jpg"><img id="BLOGGER_PHOTO_ID_5110144645978356930" style="DISPLAY: block; MARGIN: 0px auto 10px; CURSOR: hand; TEXT-ALIGN: center" alt="" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhTKccKHBgUBOLu9Np8Hkt-Jm2AGSFumzszYYISwNsESY_7YWwH4yn3QGkCtnmQL6qHSnSAiLf3oFmp2ty78Cy5uhW3TDVfC4D2oF_Sg5tNB0lc4As5Lntxl0l9Nh-RorJPuJWiv3lsf8o/s400/best-phil-paper.jpg" border="0" /></a> <em>Certificate for Best Philosophy Paper</em>, ECAL 2007</p><p align="center"><em><span style="font-size:85%;">(click to zoom in)</span></em></p>
<br /><br />
<strong><span style="font-size:130%;">First appearance of the concept in a paper</span></strong> <span style="font-size:85%;">(2007-10-17)</span><br />
<div align="justify">The very first use of the concept "Institutional Robotics" in a scientific paper is documented by the following link (paper in Portuguese, pdf file, click to download):<br /><br /><a href="http://maquinadeturing.planetaclix.pt/roboinst.pdf">SILVA, Porfírio, "Por uma robótica institucionalista: um olhar sobre as novas metáforas da inteligência artificial", <em>in</em> Trajectos, 5 (Autumn 2004), pp. 91-102</a></div><br /><br /><div align="justify">Abstract:</div><div align="justify">Toward Institutional Robotics. A view on the new metaphors of artificial intelligence.<br /><br />Collective Robotics represents a deep renewal of the classic model of inquiry in Artificial Intelligence. This renewal goes hand in hand with new inspiring metaphors, which are now closer than ever to the social sciences. A notable example of this is RoboCup, the World Championship of Robotic Soccer, whose 2004 edition took place in Lisbon.<br /><br />This text, part of an ongoing critical reflection on the sciences of the artificial, starts with an account of the relevant mutations Artificial Intelligence is experiencing. Afterwards, we analyse the dynamics of the new inspiring metaphors giving rise to new strands of the old objective of building intelligent machines. Finally, inspired by unorthodox approaches to the philosophy of economics, we suggest an institutional approach to collective robotics.</div>
<br /><br />
<strong><span style="font-size:130%;">Institutional Robotics</span></strong> <span style="font-size:85%;">(2007-10-16)</span><br />
<div align="center"><strong>Porfírio Silva</strong> (*) and <strong>Pedro U. Lima</strong> (**)</div><div align="center"><br />(*) Philosophy Department, Faculdade de Letras, University of Lisbon</div><div align="center">(**) Institute for Systems and Robotics, Instituto Superior Técnico, Technical University of Lisbon</div><br /><div align="justify"><br /><strong>Abstract.</strong> Pioneer approaches to Artificial Intelligence have traditionally neglected, in chronological sequence, the agent's body, the world where the agent is situated, and the other agents.
With the advent of Collective Robotics approaches, important progress was made toward embodying and situating the agents, together with the introduction of collective intelligence. However, the currently used models of social environments are still rather poor, jeopardizing attempts to develop truly intelligent robot teams. In this paper, we propose a roadmap for a new approach to the design of multi-robot systems, mainly inspired by concepts from Institutional Economics, an alternative to mainstream neoclassical economic theory. Our approach intends to sophisticate the design of robot collectives by adding, to the currently popular emergentist view, the concepts of physically and socially bounded autonomy of cognitive agents, uncoupled interaction among them, and deliberately set up coordination devices.<br /><br /><strong>Key words:</strong> Collective Robotics, Institutional Economics, Institutional Robotics<br /></div><br /><br /><p><strong><span style="font-size:130%;">1 Introduction</span></strong></p><div align="justify">Three great neglects are at the heart of Good Old-Fashioned Artificial Intelligence: the neglect of the body, of the world, and of other agents. Collective Robotics is an important attempt to surpass these neglects: because it embodies intelligence in physical robots; because it places robots in largely natural physical environments; because it locates intelligence in the collective. Nevertheless, most multi-robot systems model extremely poor social environments. Our aim is to put forward a new conceptual approach to the design of control systems for artificial robotic societies. In Section 2 some weaknesses of popular guiding principles for collective systems design are identified. In Section 3 we look for inspiration coming from fields of the sciences of the artificial other than robotics. In Section 4 we sketch out a new strategy to conceptualize multi-robot systems: Institutional Robotics, which takes institutions as the main tool of the social life of robots with bounded rationality and bounded autonomy.</div><br /><br /><div align="center">--------------------[596]--------------------</div><br /><div align="justify"><br /><strong><span style="font-size:130%;">2 Emergence, Uncoupled Interaction, Bounded Autonomy, and Collective Inefficiency</span></strong><br /><br />Two out of the four design principles for collective systems suggested in [1:241–243] represent popular views among practitioners of AI and Robotics. According to the “level of abstraction principle”, collective intelligence refers not only to groups of individuals, as in human societies, but equally “to any kind of assembly of similar agents”, including groups of modules in modular robotic systems or the organs that make up entire organisms. The “design for emergence principle” states that a desired functionality should not be directly programmed into a group of agents, but should emerge from a set of simple rules of local interaction, relying on self-organizing processes. These two principles raise three questions.<br />First, the exclusive focus on emergence and self-organization stems from the prominent role conferred on local interaction. No reason is given to ignore indirect or mediated interaction, which [2:14] considers characterized by properties such as name uncoupling (interacting entities do not have to know one another explicitly), space uncoupling and time uncoupling (they need neither be at the same place nor coexist at the same time).
Communication is an example of such an indirect (not local) interaction.<br />Second, to put individuals in human societies on the same footing as organs or modules amounts to ignoring the different degrees of autonomy enjoyed by a human lung and a human individual. According to [3:518], the following could be a working definition within robotics: “Autonomous agents operate under all reasonable conditions without recourse to an outside designer, operator or controller while handling unpredictable events in an environment or niche”. However, for some philosophical perspectives this conception of autonomy would be unsatisfactory, because a truly autonomous agent must be capable of acting according to his own goals, while designers are the sole providers of goals to the robots. A concept of autonomy arising out of the ongoing attempt to maintain homeostasis could improve our understanding of autonomy and of how goals become grounded in artificial creatures [3].<br />Whether full autonomy is attainable is a remaining question. A sharply negative answer to that question is offered by [4]. Autonomous agents must be capable of generating new goals as means for achieving existing goals of their own, but they are not necessarily self-sufficient. An agent depends on a resource when he needs it to perform some action to achieve one of his goals. There is also social dependence: an agent x depends on another agent y when, to achieve one of his goals, x needs an action of y. Two agents can be mutually dependent. Dependences imply interests: a world state that favours the achievement of an agent’s goals is an interest of that agent. Dependence and interest relations are objective relations, holding whether an agent is aware of them or not. The limited autonomy of social agents comes also from influencing relations between them. By acquiring (true or false) beliefs about their interests, agents can acquire goals. Now, an agent x can influence another agent y to adopt a goal according to x’s needs, even if that goal is not an interest of y.<br /></div><br /><br /><div align="center">--------------------[597]--------------------</div><br /><div align="justify">Within this approach, cognition does not preclude emergence. To form goals and establish plans for their achievement, agents must be cognitive. However, bounded rationality combines with bounded autonomy to give place to emergent phenomena: there are deliberately planned actions, but they may produce unintended effects beyond the reach of the agent’s understanding or awareness.<br />Third, no reason is given to rule out coordination devices deliberately set up by agents in some multi-agent systems (legislation and organisations in human societies, for example). The remaining question is whether such devices are desirable. Could we show that, at least in some situations, merely emergent processes may lead to inefficient solutions to collective problems? If so, we would have a hint as to why multi-agent systems may need coordination devices. A set of experiments within MAS, reported in [5], advances our understanding of the problem. There, situations previously identified in experimental economics are simulated with a version of the Genetic Algorithm (GA). The GA population represents a collection of sets of rules associated with the set of actions available to the agents; the fitness function for each agent maximizes his payoffs.<br /><em>Co-ordination problem 1.</em> A set of individuals, kept in isolation from one another, must choose one of 16 colours.
</div><br /><br /><div align="center">--------------------[597]--------------------</div><br /><div align="justify">Within this approach, cognition does not preclude emergence. To form goals and establish plans for achieving them, agents must be cognitive. However, bounded rationality combines with bounded autonomy to give rise to emergent phenomena: there are deliberately planned actions, but they may produce unintended effects beyond the reach of the agents' understanding or awareness.<br />Third, no reason is given to rule out coordination devices deliberately set up by agents in some multi-agent systems (legislation and organisations in human societies, for example). The remaining question is whether such devices are desirable. Could we show that, at least in some situations, merely emergent processes may lead to inefficient solutions to collective problems? If so, we would have a hint as to why multi-agent systems may need coordination devices. A set of experiments with MAS, reported in [5], advances our understanding of the problem. There, situations previously identified in experimental economics are simulated with a version of the Genetic Algorithm (GA). The GA population represents a collection of sets of rules associated with the set of actions available to the agents; the fitness function for each agent maximizes its payoff.<br /><em>Co-ordination problem 1.</em> A set of individuals, kept in isolation from one another, must choose one of 16 colours. Each participant's choice is rewarded according to the rule: multiply a fixed amount of money by the number of players who chose the same colour. The experiment is repeated a number of times. After each repetition, players are informed of the frequencies and payoffs by colour, so participants can change their choices the next time, which they indeed do to maximize payoffs. Individual behaviours rapidly converge: the rule "choose colour x", where x is the colour most often selected, emerges as a shared convention. The "spontaneous order hypothesis" seems to work.<br /><em>Co-ordination problem 2.</em> A new experimental situation departs from the previous one in just one detail. The payoff to each individual now depends not only on the frequency of the chosen colour, but also on an "intrinsic" characteristic of each colour, which remains unknown to the players. For example, all other factors remaining equal, choosing colour number 16 pays 16 times more than choosing colour number 1. Convergence of all participants on colour 16 is the most valuable outcome for every participant, but that convergence is unlikely to occur in the absence of any opportunity to agree on a joint strategy. An initial accidental convergence on any colour creates an attractor capable of strengthening itself from repetition to repetition. Even if a participant has a god's-eye view of the situation, an isolated choice of the theoretically best option will neither improve that individual's payoff nor move the collective dynamics onto a path conducive to a higher collective payoff. Self-organizing processes may lead to inefficient solutions to a collective problem. The "spontaneous order hypothesis" is in trouble even with mere co-ordination problems, where the best for each individual is also the best for the collective (for the other individuals). The situation gets worse with a "co-operation problem", where the best outcome for the collective and the best outcome for an individual do not necessarily coincide.<br /><em>Co-operation problem.</em> Now the individuals must post a monetary contribution (from 0 to a predefined maximum) in an envelope and announce the amount it contains. The sum of all the contributions is multiplied by a positive factor<br /></div><br /><br /><div align="center">--------------------[598]--------------------</div><br /><div align="justify"><br />("invested") and the resulting collective payoff is apportioned among the individuals. For each participant, the share of the collective payoff is proportional to the announced contribution, not to the posted contribution. As all participants know these rules, they realize that, to maximize payoff, an individual must contribute nothing and announce the maximum. So it is no surprise that, after some initial rounds, free-riding behaviour emerges: posted contributions tend to zero while announced contributions are kept close to the maximum. The group collectively follows a path that all of its members consider undesirable: soon there will be no more money to distribute.<br />This set of experiments suggests that collective order does not always emerge from individual decisions alone. 
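<br />The lock-in of co-ordination problem 2 is easy to reproduce in miniature. The following sketch is ours, not the GA of [5]: a crude majority-imitation rule stands in for the adaptive learning there, but it exhibits the same attractor. Whatever colour happens to dominate the first round becomes the convention, and a lone deviation to colour 16 always pays less than staying with the crowd.<br /><pre>
# Co-ordination problem 2 in miniature: frequency-driven imitation locks
# the group in to an arbitrary colour, although colour 16 pays 16x more.
import random

N_PLAYERS, N_COLOURS = 30, 16
value = lambda c: c + 1                      # hidden intrinsic value of colour c
choices = [random.randrange(N_COLOURS) for _ in range(N_PLAYERS)]

for _ in range(20):                          # repeated rounds
    freqs = [choices.count(c) for c in range(N_COLOURS)]
    majority = max(range(N_COLOURS), key=freqs.__getitem__)
    choices = [majority] * N_PLAYERS         # everyone imitates the convention

locked = choices[0]
print("locked-in colour:", locked + 1)                              # rarely 16
print("payoff staying with the crowd:", value(locked) * N_PLAYERS)  # at least 30
print("payoff of a lone switch to 16:", value(15) * 1)              # only 16
</pre><br />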
Coordination devices deliberately set up by agents could be useful.<br /><br /><strong><span style="font-size:130%;">3 Artefacts in Institutional Environments</span></strong></div><div align="justify"><br />The previous Section has shown that some concepts can be added to emergentist views in order to make the design of artificial collective systems more sophisticated: the physically and socially bounded autonomy of cognitive (not merely reactive) agents; uncoupled interaction among them; deliberately set up coordination devices. How could we put all these concepts together? Concepts from the social sciences have already inspired fields of the sciences of the artificial other than robotics. Relying on some results of that cross-fertilization, we will arrive at the unifying concept of "institutional environment". It will later lead us to Institutional Robotics.<br />Epstein and Axtell argue that artificial society modelling can constitute a new kind of explanation of social phenomena [6:20]. Lansing [7] argues that the modelling of artificial societies can profit from a broad historical perspective on the disputes among social scientists and philosophers over how to study social phenomena. As an example, he points out the parallel between some writings of Theodor Adorno on the positivist dispute in German sociology and the question that introduces [6]: "How does the heterogeneous micro-world of individual behaviors generate the global macroscopic regularities of the society?". This is a classical problem of the social sciences: the micro-macro link problem, or the problem of social order. A number of research efforts take both perspectives together within multi-agent systems (MAS) modelling. A few examples: [8] studies norms as a possible solution to coordination problems; [9] suggests relaxing the assumption that coordination can be designed to perfection and importing conflict theories from sociology; [10] reviews trust and reputation models; within the framework of "Socionics" (a combination of sociology and computer science [11]), the Social Reputation approach [12] models reputation as an emergent mechanism of flexible self-regulation; [13] argues for using basic individual rights in MAS, combined with some argumentation mechanism.<br />Facing such a variety, how should we choose the most promising concept? Perhaps we need them all. "It does not seem possible to devise a coordination strategy that always works well under all circumstances; if such a strategy existed, our human societies could adopt it and replace the myriad coordination </div><br /><br /><div align="center">--------------------[599]--------------------</div><br /><div align="justify"><br />constructs we employ, like corporations, governments, markets, teams, committees, professional societies, mailing groups, etc." [14:14] So we keep them all, and more – but we need a unifying concept to give the whole some consistency.<br />"Environment" is such a concept. [2] suggests the need to go deeper than the subjective view of MAS, in which the environment is somehow just the sum of some data structures within agents. What we need to take into account is the active character of the environment: some of its processes can change its own state independently of the activity of any agent (a rolling ball that keeps moving); multiple agents acting in parallel can have effects that any single agent will find difficult to monitor (a river can be poisoned by a thousand people each depositing a small portion of a toxic substance in the water, even if each individual portion is itself innocuous) [2:36]. Because there are many things in the world that are not inside the minds of the agents, an objective view of the environment must deal with the system from a point of view external to the agents [15:128].
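<br />A small sketch (ours; the quantities are invented) makes the objective view concrete: the environment below runs processes of its own and silently aggregates the concurrent actions of many agents.<br /><pre>
# An "objective" environment: it changes state by itself and aggregates
# concurrent actions that no single agent can monitor.
class Environment:
    def __init__(self):
        self.ball_position = 0.0
        self.ball_velocity = 1.0
        self.toxin = 0.0              # aggregate of many tiny deposits

    def deposit(self, amount):        # called by many agents in parallel
        self.toxin += amount

    def tick(self):                   # the world moves on regardless of agents
        self.ball_position += self.ball_velocity

env = Environment()
for _ in range(1000):                 # a thousand individually innocuous portions
    env.deposit(0.001)
env.tick()
print(env.ball_position, env.toxin >= 1.0)   # -> 1.0 True: the river is poisoned
</pre><br />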
One can wonder whether this is relevant to robotics, where agents already sense and act in real (not just software) environments. We suggest that the answer is affirmative. Dynamic environmental processes independent of agents' purposes, and almost unpredictable aggregate effects of multiple simultaneous actions, are not phenomena restricted to physical environments. Similar phenomena can occur in organizational environments: if nine out of ten clients of a bank decide to withdraw all their money on the same date, bankruptcy could be the unintended effect. And, most of the time, social environments in robotics are poorly modelled. So the objective view of the environment could apply not only to physical features, but also to the social environment of the agents. We further suggest that both physical and social environments are populated with strange artefacts: artefacts with material and mental aspects. Let us see why, following [16].<br />An artefact is something made by an agent to be used by another (or the same) agent. An artefact need not be an object: footprints left on a minefield for those who follow are artefacts. Artefacts shaped for coordinating the agents' actions are coordination artefacts. Even single-agent actions can be coordinated actions if they contribute to solving an interference problem with other agents. Some artefacts have physical characteristics representing opportunities and constraints which are sufficient conditions to enable a single-agent coordinated action, even if the agent does not recognize them (the wall of a house keeps people inside and outside separated). Sometimes the agent must additionally recognize the opportunities and constraints of the artefact: sitting at a table with other people requires some knowledge ("do not try to sit at a place already occupied").<br />More interesting artefacts are associated not only with physical but also with cognitive opportunities and constraints (deontic mediators, such as permissions and obligations). Recognizing all of these enables a single-agent coordinated action: a driver approaching a roundabout is obliged, by the physical properties of the artefact alone, to slow down and turn right or left to proceed; traffic regulations add something more, indicating which direction all drivers have to choose so as not to crash into one another.
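<br />The distinction can be sketched in code (ours, loosely after [16]; the artefacts and constraint labels are invented): physical constraints bind every agent, while deontic mediators bind only the agents that recognize the artefact.<br /><pre>
# Physical constraints bind every agent; deontic mediators (permissions,
# obligations) bind only agents that recognize the artefact.
class Artefact:
    def __init__(self, physical, deontic):
        self.physical = set(physical)      # e.g. geometry that forces behaviour
        self.deontic = set(deontic)        # e.g. rules that must be recognized

    def constraints_for(self, agent):
        effective = set(self.physical)     # always in force
        if agent.recognizes(self):
            effective |= self.deontic      # in force only through cognition
        return effective

class Agent:
    def __init__(self, known=()):
        self.known = set(known)
    def recognizes(self, artefact):
        return artefact in self.known

roundabout = Artefact(physical={"slow down", "turn to proceed"},
                      deontic={"give way on entry", "circulate one way"})
novice = Agent()                           # constrained by the geometry alone
driver = Agent({roundabout})               # also bound by traffic regulations
print(sorted(roundabout.constraints_for(novice)))
print(sorted(roundabout.constraints_for(driver)))
</pre><br />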
Furthermore, artefacts can be completely dematerialized. Such artefacts enable single-agent coordinated actions only by means of cognitive opportunities<br /></div><br /><br /><div align="center">--------------------[600]--------------------</div><br /><div align="justify"><br />and constraints recognized by the acting agent. Social conventions and norms are relevant examples of this kind. A traffic convention to drive on the right works independently of any material device.<br />Consider now multi-agent coordinated actions. "There exist some artefacts such that the recognition of their use by an agent and the set of cognitive opportunities and constraints (deontic mediators) are necessary and sufficient conditions to enable a multiagent coordinated action" [16:320]. Institutions are artefacts of this kind. The definition takes institutional actions as multi-agent coordinated actions performed by a single agent. How could this be? Because of a cognitive mediation intertwined with the agents' behaviours. While traditional views on institutions take them as structured sets of rules and conventions, in [16] the basic coordination artefact is the institutional role played by an agent with the permission of others. A group of agents recognizes that an agent (Paul) plays a role (priest) and so believes he has the artificial power to perform a multi-agent coordinated action (the marriage of John and Mary). Both the recognition and the belief are intertwined with the behaviour of treating Paul as a priest and of treating John and Mary, from some point in time on, as a married couple.<br />The single-agent action of an agent playing a role is the vehicle action for a collective action, just as flipping a switch is the vehicle action for the supra-action of turning on the light. In this context, the agent relies on some external aspects of the world (the functioning of the electrical circuit). To get John and Mary married, the priest must perform a certain set of bodily movements counting as marrying. That set of movements is the vehicle action for the supra-action of marrying John and Mary. Again, the collective of agents relies on some external aspects of the world: the institutional context [16:312,320–321].<br />So we have our unifying concept: institutional environments populated with a special kind of artefact.
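<br />This "counting as" mechanism can be caricatured in code (a toy sketch of ours; the names and the single hard-coded role are invented, and nothing here claims to reproduce the formal account of [16]):<br /><pre>
# Role recognition makes a single-agent "vehicle action" count as a
# multi-agent coordinated action.
class InstitutionalContext:
    def __init__(self):
        self.roles = {}                # agent -> role the collective recognizes
        self.married = set()

    def recognize(self, agent, role):
        self.roles[agent] = role       # recognition by the group of agents

    def perform(self, agent, vehicle_action, couple):
        # the same bodily movements count as marrying only for a recognized priest
        if self.roles.get(agent) == "priest" and vehicle_action == "marriage rite":
            self.married.add(frozenset(couple))
            return "John and Mary are married"
        return "nothing institutional happened"

ctx = InstitutionalContext()
print(ctx.perform("Paul", "marriage rite", ("John", "Mary")))  # nothing happens
ctx.recognize("Paul", "priest")
print(ctx.perform("Paul", "marriage rite", ("John", "Mary")))  # now it counts
</pre><br />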
<br /><br /><strong><span style="font-size:130%;">4 Institutional Robotics</span></strong></div><div align="justify"><br />With the "institutional environment" concept as a starting point, in this Section we sketch out a new strategy to conceptualize multi-robot systems. Some global inspiration comes from Institutional Economics [17], an alternative to mainstream neoclassical economic theory. "Market-based multi-robot coordination" is an earlier example of importing views from Economics into Robotics [18]. We do the same, but with different assumptions.<br />(1) The control system for a robotic collective is a network of institutions. All institutions exist as means for some activity of some set of robots. As a principle, institutions are generic: they are not designed for any specific set of robots.<br />(2) Institutions are coordination artefacts and come in many forms: organizations, teams, hierarchies, conventions, norms, roles played by some robots, behavioural routines, stereotyped ways of sensing and interpreting certain situations, material artefacts, some material organization of the world. A particular institution can be a composite of several institutional forms.<br />(3) Institutions can be mental constructs. An example of a "mental institution" is a program controlling a sequence of operations.<br /></div><br /><br /><div align="center">--------------------[601]--------------------</div><br /><div align="justify"><br />(4) Institutions can be material objects functioning exclusively by means of their physical characteristics, given the physical characteristics of the robots (a wall separating two buildings effectively implements a prohibition on visiting the neighbours if the robots are not able to climb it). Some rules (or other kinds of mental constructs) can be associated with a material object to create a more sophisticated institution (a wall separating two countries is taken as a border; there are some doors in the wall to let robots cross the border; some regulations apply to crossing the border).<br />(5) The boundaries between institutional and purely physical aspects of the world are not sharp. Not all material objects are institutions. If the wall separating the buildings is seen as just an element of the physical world, some robots gaining access to the opposite building with newly acquired tools or physical capabilities will not be regarded as breaching a prohibition. However, modifications of the material world that create new possibilities of interaction can become institutional issues. If the collective prefers to preserve the previous situation of separated buildings, the robots' new capability to climb the wall could give rise to new regulations. Material objects are devices for institutions when they implement some aspect of the collective order. The continuing existence of a material object can be uncoupled from the continuing existence of the institutional device it implements (the wall could be demolished without eliminating the border; the border can be eliminated without demolishing the wall). So a material leftover of a discarded institution can persist as an obstacle in the world.<br />(6) Enforcement mechanisms can be associated with institutions to prevent violations (or to redress their negative effects). Examples are fines and reputation.<br />(7) The institutional environment at any point in the history of a collective is always a mix of inherited and newly adopted forms. So the designer of a robotic collective must shape the first version of any system of institutional robotics. However, that first institutional setup must be neither purely centralized, nor fully decentralized, nor purely distributed. That means the following. Not all robots are equal in power: not all agents have the same computational power, nor do all access the same information, nor are all allowed to take decisions in all domains. There are some hierarchical relations among robots: for any robot, access to information and permission to act are bounded by the decisions of some others. However, different hierarchies apply to different issues, and the same robot can be at the top of one hierarchy and at the bottom of others. Some robots, by virtue of one-to-one relationships not framed by any hierarchy, are able to establish short cuts to, and influence, top-level decision makers that would otherwise be beyond reach. There is neither a single robot nor a small group of robots in charge of all collective decisions all the time. Still, some kind of elitism is possible: different (possibly partially overlapping) groups of robots share the ruling over different domains of decision. The elite must eventually be renewed: robots can be removed from power, and robots can gain access to power.<br />(8) Agents are robots, hardware/software "creatures" operating in real physical environments. Robots are able to modify, to some extent, the material organization of their physical world.<br /></div><br /><div align="center">--------------------[602]--------------------</div><br /><div align="justify"><br />(9) The continuing functioning of any robot depends on some material conditions (available energy, for example). Whatever set of tasks a robot has to fulfil, some of them must be related to survival. 
There could be some institutions in charge of managing living conditions for all or some robots.<br />(10) All robots have built-in reactive behaviours, routines, and deliberative competences. Robots have partial models of themselves (they know some, but not all, of their internal mechanisms). Some of the internal mechanisms known to the robots can be accessed and modified by the robots themselves.<br />(11) Every agent is created with links to some subset of the institutional network existing within the collective. (Nobody is born alone in the wild.) To some extent, agents are free to join institutions and to disconnect themselves from them. However, under certain circumstances some institutions could be made compulsory for every agent or for some subset of agents. Some institutions can filter access, either according to some objective rules or according to the will of those already connected. Disconnecting from an institution forfeits access to the resources under its control, as well as participation in the decision-making processes taking place within it.<br />(12) Each robot has a specific individual identification (a name). All robots are able to identify, if not all, at least some of the others by their names.<br />(13) Any agent disconnected from all institutions will be treated by the other agents as just an aspect of the material world. To recover from that extreme situation and reconnect to the institutional network, an agent must be helped by some benevolent agent.<br />(14) World models are a special kind of institution. Being created with pre-established links to some institutions, every robot is endowed with some partial world models. World models can be specific to aspects of the physical world, specific to aspects of the social world, or combine aspects of both. None of the robots is endowed with a complete model of the world (unless gods are allowed). Inconsistencies between the partial world models of a single robot are allowed.<br />(15) There will be some process of collective world modelling. For example, a shared model of the physical world can result from co-operative perception (sensor fusion [19:17–22]: merging sensor data from sensors spread over different robots and applying confidence criteria to weight their contributions to a unified picture of some aspect of the environment; a minimal sketch of such fusion is given after this list).<br />(16) The functioning of the agents' sensory apparatus can be modulated by their links to some institutions (adhering to an institution can augment the power, or distort the functioning, of some sensor). Institutional links can also modify access to pieces of information available at the collective level.<br />(17) From the point of view of an external observer, the world model of a robot can be inaccurate. Inaccuracies can result from objective factors, either intrinsic to the robotic platform (such as sensor limitations) or extrinsic (inappropriate vantage points for obtaining data from some regions of the environment). Other inaccuracies can result from subjective factors: robots can have "opinions" and "ideologies".<br />(18) An "opinion" is an individual deviation from the world models provided by institutions. 
(Even if a specific "opinion" of an individual agent is objective<br /></div><br /><br /><div align="center">--------------------[603]--------------------</div><br /><div align="justify"><br />knowledge gathered by virtue of some privileged vantage point, such that an external observer would prefer to rely on that opinion instead of accepting the "common sense", it means nothing to the other agents as long as they are deprived of that god's-eye view.) By virtue of bearing an "opinion", the behaviour of a robot can be modified.<br />(19) An "ideology" is a set of "opinions" shared by a subset of the agents. Its acceptance among agents largely overlaps with the sets of agents linked to some subset of the institutional network. An "ideology" can be "offered" by an institution to any agent prone to adhere, or it can be a condition for adhesion. An "ideology" can result from a modification of the sensor fusion process (a modification of the criteria used to weight different individual contributions, for example). "Ideologies" can be about the physical or the social world. By modifying the perception of the agents and their behaviours, "ideologies" can affect the functioning of institutions in many ways: for example, by providing alternative stereotyped ways of sensing certain situations ("ignore such and such data streams") or by undermining mechanisms of social control ("break that rule and we will pay the fine for you, with a prize").<br />(20) Decision-making processes are a special kind of institution. Many aspects of the collective dynamics can be subject to co-operative decision-making [19:34–46].<br />(21) Institutional building is a special issue for decision-making processes: "constitutional rules" for the functioning of some already existing institutions can be deliberated by the robots themselves; robots can deliberately set up new institutions or abandon old ones. Some institutions will have specific mechanisms to facilitate institutional building.<br />(22) Institutional building is fuelled by "institutional imagination": robots can conceive alternative institutions, or alternative constitutional rules for existing institutions, not in order to implement them in the short term, but as "thought experiments". The results of those thought experiments can be put forward to specific institutional building mechanisms.<br />(23) The functioning of an institution can be modified not by deliberative means, but by the accumulation of small modifications initiated by some robots and not opposed by others.<br />(24) An institution fades away when no agent is linked to it any more. Robots can keep memories of old institutions and reintroduce them in the future.<br />
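As promised in item (15), here is a minimal sketch of confidence-weighted co-operative perception (ours; the figures are invented, and real fusion along the lines of [19] would of course be far richer):<br /><pre>
# Co-operative perception as in item (15): estimates from sensors on
# different robots are merged into one shared picture, weighted by confidence.
def fuse(readings):
    """readings: list of (estimate, confidence) pairs from several robots."""
    total = sum(conf for _, conf in readings)
    return sum(est * conf for est, conf in readings) / total

# three robots measure the same object's position with unequal confidence
shared = fuse([(2.0, 0.9), (2.4, 0.5), (3.1, 0.1)])
print(round(shared, 2))    # -> 2.21, a confidence-weighted consensus
</pre><br />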
<br /><strong><span style="font-size:130%;">5 Conclusion</span></strong></div><div align="justify"><br />This paper suggested a new strategy to conceptualize multi-robot systems: Institutional Robotics, which takes institutions as the main tool of the social life of robots with bounded rationality and bounded autonomy. We plan to set up a working group of people with multidisciplinary backgrounds (e.g., philosophy, cognitive science, biology, computer engineering, artificial intelligence, systems and control engineering) to work on it, including further brainstorming, refinement of the concepts, and actual implementation.<br /></div><br /><div align="center">--------------------[604]--------------------</div><br /><div align="justify"><br /><strong><span style="font-size:130%;">References</span></strong><br /><br />1. Pfeifer, R., Bongard, J.: How the Body Shapes the Way We Think. The MIT Press, Cambridge (2007)<br /><br />2. Weyns, D., Parunak, H.v.D., Michel, F., Holvoet, T., Ferber, J.: Environments for Multiagent Systems: State-of-the-Art and Research Challenges. In: Weyns, D., Parunak, H.v.D., Michel, F. (eds.) E4MAS 2004. LNCS (LNAI), vol. 3374, pp. 1–47. Springer, Heidelberg (2005)<br /><br />3. Haselager, W.F.G.: Robotics, philosophy and the problems of autonomy. Pragmatics & Cognition 13(3), 515–532 (2005)<br /><br />4. Conte, R., Castelfranchi, C.: Cognitive and Social Action. University College London Press, London (1995)<br /><br />5. Castro Caldas, J., Coelho, H.: The Origin of Institutions: socio-economic processes, choice, norms and conventions. Journal of Artificial Societies and Social Simulation 2(2) (1999)<br /><br />6. Epstein, J.M., Axtell, R.: Growing Artificial Societies: Social Science from the Bottom Up. Brookings Institution Press, Washington (1996)<br /><br />7. Lansing, J.S.: "Artificial Societies" and the Social Sciences. Artificial Life 8, 279–292 (2002)<br /><br />8. Hexmoor, H., Venkata, S.G., Hayes, R.: Modelling social norms in multiagent systems. Journal of Experimental and Theoretical Artificial Intelligence 18(1), 49–71 (2006)<br /><br />9. Malsch, T., Weiß, G.: Conflicts in social theory and multiagent systems: on importing sociological insights into distributed AI. In: Tessier, C., Chaudron, L., Müller, H.-J. (eds.) Conflicting Agents: Conflict Management in Multi-Agent Systems, pp. 111–149. Kluwer Academic Publishers, Dordrecht (2000)<br /><br />10. Sabater, J., Sierra, C.: Review on Computational Trust and Reputation Models. Artificial Intelligence Review 24(1), 33–60 (2005)<br /><br />11. Malsch, T., Schulz-Schaeffer, I.: Socionics: Sociological Concepts for Social Systems of Artificial (and Human) Agents. Journal of Artificial Societies and Social Simulation 10(1) (2007)<br /><br />12. Hahn, C., Fley, B., Florian, M., Spresny, D., Fischer, K.: Social Reputation: a Mechanism for Flexible Self-Regulation of Multiagent Systems. Journal of Artificial Societies and Social Simulation 10(1) (2007)<br /><br />13. Alonso, E.: Rights and Argumentation in Open Multi-Agent Systems. Artificial Intelligence Review 21(1), 3–24 (2004)<br /><br />14. Durfee, E.H.: Challenges to Scaling Up Agent Coordination Strategies. In: Wagner, T.A. (ed.) An Application Science for Multi-Agent Systems, pp. 113–132. Kluwer Academic Publishers, Dordrecht (2004)<br /><br />15. Weyns, D., Schumacher, M., Ricci, A., Viroli, M., Holvoet, T.: Environments in Multiagent Systems. The Knowledge Engineering Review 20(2), 127–141 (2005)<br /><br />16. Tummolini, L., Castelfranchi, C.: The cognitive and behavioral mediation of institutions: Towards an account of institutional actions. Cognitive Systems Research 7(2-3), 307–323 (2006)<br /><br />17. Hodgson, G.M.: Economics and Institutions: A Manifesto for a Modern Institutional Economics. 
Polity Press, Cambridge (1988)<br /><br />18. Dias, M.B., Zlot, R.M., Kalra, N., Stentz, A.: Market-based multirobot coordination: a survey and analysis. Proceedings of the IEEE 94(7), 1257–1270 (2006)<br /><br />19. Lima, P.U., Custódio, L.M.: Multi-Robot Systems. In: Innovations in Robot Mobility and Control. Studies in Computational Intelligence, vol. 8, pp. 1–64. Springer, Heidelberg (2005)<br /></div><br /><div align="center">------------------------------------</div><br /><div align="justify">Reference to this paper:<br /><br /><span style="font-size:130%;color:#33cc00;">SILVA, Porfírio, and LIMA, Pedro U., "Institutional Robotics", <em>in</em> Fernando Almeida e Costa <em>et al.</em> (eds.), <strong>Advances in Artificial Life. Proceedings of the 9th European Conference, ECAL 2007</strong>, Berlin and Heidelberg, Springer-Verlag, 2007, pp. 595-604</span> </div>Porfirio Silvahttp://www.blogger.com/profile/04171586201793747291noreply@blogger.com0tag:blogger.com,1999:blog-3380128759975579658.post-52643561263109225242007-10-15T23:16:00.000+01:002008-12-09T00:57:01.793+00:00Institutional Robotics: what's that?<div align="center"><strong><span style="font-size:130%;color:#330099;">A new strategy to conceptualize multi-robot systems is suggested:<br /><em>Institutional Robotics</em>, which takes institutions as the main tool of the social life of robots with bounded rationality and bounded autonomy.</span></strong> </div><div align="justify"><br /><br /></div><p align="justify"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh63PQuWYkTxDQuLmdeXMmQQYCFvJH4oSB47eV0ocHg0oQAcxBRazce8QPEwqKP2ZuzUFhSc3WYpExPh-akS1AAwXA6L82_7y6tR4xDSwBKTWilSa-FQm_6upnTjKECnGzrVDwjoucTFQpR/s1600-h/Nobody-is-born-in-the-wild+web.jpg"><img id="BLOGGER_PHOTO_ID_5114636498637908034" style="DISPLAY: block; MARGIN: 0px auto 10px; CURSOR: hand; TEXT-ALIGN: center" alt="" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh63PQuWYkTxDQuLmdeXMmQQYCFvJH4oSB47eV0ocHg0oQAcxBRazce8QPEwqKP2ZuzUFhSc3WYpExPh-akS1AAwXA6L82_7y6tR4xDSwBKTWilSa-FQm_6upnTjKECnGzrVDwjoucTFQpR/s400/Nobody-is-born-in-the-wild+web.jpg" border="0" /></a><br /><br />- The control system for a robotic collective is a network of institutions<br />- Institutions: mental constructs and/or material objects: organizations, teams, conventions, norms, roles, behavioural routines, stereotyped ways of sensing and interpreting, material artefacts<br />- Enforcement mechanisms (fines, reputation)<br />- Every agent is linked to some nodes of the institutional network. 
Access to resources and decision-making processes depend on institutional links.<br />- Agents disconnected from all institutions are just aspects of the material world<br />- The institutional environment is a mix of inherited and newly adopted forms </p><p><br /><br /></p><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi8DOh_xBGybbJy37ZmCXgys7uux1fTcdnwzXLsKEBbvr4UoaX0jKGa7HiumzL1fh8N31aOCcYnor9xsvr8t3VWWZ8CJvMk2Iez86Gxpp66QlYV7AFCI1qfEaUx8nympadCw4Mh4wPC2rgB/s1600-h/poder+copy.jpg"><img id="BLOGGER_PHOTO_ID_5114637155767904338" style="DISPLAY: block; MARGIN: 0px auto 10px; CURSOR: hand; TEXT-ALIGN: center" alt="" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi8DOh_xBGybbJy37ZmCXgys7uux1fTcdnwzXLsKEBbvr4UoaX0jKGa7HiumzL1fh8N31aOCcYnor9xsvr8t3VWWZ8CJvMk2Iez86Gxpp66QlYV7AFCI1qfEaUx8nympadCw4Mh4wPC2rgB/s400/poder+copy.jpg" border="0" /></a><p align="justify"><br /><br /><br />- Robots are able to modify, to some extent, the material organization of their physical world<br />- Some material objects implement aspects of the collective order<br />- Robots struggle for survival (energy)<br />- Built-in reactive behaviours, routines, and deliberative competences<br />- Robots have partial models of their internal mechanisms; some can be accessed<br />- Robots have names<br /><br /></p><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhqjhvDyUvLp7MF2XRbj2L3SrVAWHO6OR_oI1Jlj3lSdq-siPAL_aE0AjFd3DntHjcBLIa-yE439-c8vtKH_Yel2vH5GwPpy3w3y4V6gwx-XeKvMr9fJhBzB593gPWTMrkU7pLAp7isZmJm/s1600-h/percepcao-coop-web.jpg"><img id="BLOGGER_PHOTO_ID_5114637632509274210" style="DISPLAY: block; MARGIN: 0px auto 10px; CURSOR: hand; TEXT-ALIGN: center" alt="" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhqjhvDyUvLp7MF2XRbj2L3SrVAWHO6OR_oI1Jlj3lSdq-siPAL_aE0AjFd3DntHjcBLIa-yE439-c8vtKH_Yel2vH5GwPpy3w3y4V6gwx-XeKvMr9fJhBzB593gPWTMrkU7pLAp7isZmJm/s400/percepcao-coop-web.jpg" border="0" /></a><p align="justify"><br /><br />- Partial models of the physical and/or social world are a special kind of institution<br />- Collective world modelling (co-operative perception)<br />- Functioning of agents' sensory apparatus modulated by links to institutions<br />- Inaccuracies in world models result from objective or subjective factors:<br />"opinions": individual deviations from world models provided by institutions<br />"ideology": a set of "opinions" "offered" by an institution to agents prone to adhere<br />- "Opinions" and "ideologies" affect behaviour<br /><br /><br /></p><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhJCfFL5syth8to4pKj1m8JVf_eoGtsGjnu4ObKFB8dKquq8dPBeEE5tmyOqoqNuxYpWK8B8b6LbTo8-PxiEgOTxAFUgSyFCEgTNDSKcGJG9blSulusub_Vz3dG_F0CFhrRErRU8ZZ0uHhd/s1600-h/desigualdades-institucionai.jpg"><img id="BLOGGER_PHOTO_ID_5114638199444957298" style="DISPLAY: block; MARGIN: 0px auto 10px; CURSOR: hand; TEXT-ALIGN: center" alt="" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhJCfFL5syth8to4pKj1m8JVf_eoGtsGjnu4ObKFB8dKquq8dPBeEE5tmyOqoqNuxYpWK8B8b6LbTo8-PxiEgOTxAFUgSyFCEgTNDSKcGJG9blSulusub_Vz3dG_F0CFhrRErRU8ZZ0uHhd/s400/desigualdades-institucionai.jpg" border="0" /></a><p align="justify"><br /><br />- Decision-making processes are a special kind of institution (co-operative decision-making)<br />- Institutional building: robots can deliberately set up new institutions<br />- "Institutional imagination": robots conceive alternative institutions<br />- Emergence:<br />the accumulation of small modifications changes institutions without 
deliberation;<br />unused institutions fade away<br /><br /><br /></p><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjQdHe-62upE9j3gg93h7ZfhvGY06bFXY7aHcFqUFGzvFqbK9nw9WjFecBNi2b3AwL6Z4pntiTfiAjxYS886HrDmHk796G1Ihfep0ZgHq92-nw6HLIFzBLfAuPh3-hNkTZDNdRaGAuir_yO/s1600-h/QRIO-parlamento-web.jpg"><img id="BLOGGER_PHOTO_ID_5114638641826588802" style="DISPLAY: block; MARGIN: 0px auto 10px; CURSOR: hand; TEXT-ALIGN: center" alt="" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjQdHe-62upE9j3gg93h7ZfhvGY06bFXY7aHcFqUFGzvFqbK9nw9WjFecBNi2b3AwL6Z4pntiTfiAjxYS886HrDmHk796G1Ihfep0ZgHq92-nw6HLIFzBLfAuPh3-hNkTZDNdRaGAuir_yO/s400/QRIO-parlamento-web.jpg" border="0" /></a>Porfirio Silvahttp://www.blogger.com/profile/04171586201793747291noreply@blogger.com0