November 04, 2009

Vida Institucional Artificial / Institutional Artificial Life

Lecture given in the series “Das Sociedades Humanas às Sociedades Artificiais” (“From Human Societies to Artificial Societies”), 2009 edition, session of 26 March. Organized by the Institute for Systems and Robotics (Instituto Superior Técnico, Lisbon), within the Intelligent Systems Laboratory.

Below are the video of this lecture and the slides used on the occasion. The lecture can be reconstituted by combining these two resources: advance the slides manually as the talk progresses. Everything is in Portuguese.






More on this conference here.

More on the whole series of conferences here.


March 06, 2009

Bio, Nano, Robo – New Challenges for Historians of Technology


The SHOT (Society for the History of Technology) 2008 Annual Meeting took place in Lisbon last October. I served as chair of Session 1, on “Bio, Nano, Robo – New Challenges for Historians of Technology”, and as commentator on the three presentations on the programme. The following is a version of the commentary I gave on that occasion.

We have just heard three interesting presentations on the history of nanotechnology, genetics as an economic endeavour, and robotics.
This session is about “new challenges for historians of technology”. Why must nanotechnology, genetics and robotics be taken as new challenges for historians of technology, or for historians of science?
I don’t know whether it is clear to you why nanotechnology, genetics and robotics fit well in the same session of a meeting like this one. It is not clear to me. But that is perhaps a difficulty of mine, because I am not a historian of technology, not even a historian.
I am a philosopher, currently working as a researcher at the Institute for Systems and Robotics at the Technical University of Lisbon. From that perspective, and not from the perspective of a historian, I decided to suggest something on this issue. I will suggest that we can deal with these questions within the framework of the sciences of the artificial.
Let me try to explain what I mean, as a commentary on the presentations.

Christian Kehrt challenges the usual definitions of nanotechnology and suggests deepening our understanding of the field by taking a closer look at the visions of the actors involved. The point is that visions enable actors to open new fields of research and provide new symbolic resources and broader fields of possibilities, thereby triggering further technological development.
Christian identifies one of the core visions of nanotechnology, the idea of molecular engineering, which grounds a major venture: to transgress the limits of nature, to become a deus ex machina, working with molecular components and atoms.
My question is: what’s new about that vision?
As early as 1637, the French philosopher René Descartes wrote in his Discourse on the Method that science can give us the practical knowledge needed to “render ourselves the lords and possessors of nature” (his words). This statement is usually taken as a manifesto of modern science, and it is usually interpreted as a programme for physics and the other scientific disciplines dealing with the external physical world.
However, Descartes added that becoming the lords and possessors of nature is a desirable result “especially for the preservation of health”. And he was using a broad concept of health and of medicine, as this quote testifies: health is the first and fundamental of all the blessings of this life, “for the mind is so intimately dependent upon the condition and relation of the organs of the body, that if any means can ever be found to render men wiser and more ingenious than hitherto, I believe that it is in medicine they must be sought for.”
So, the grand vision of modern science, to make ourselves the lords and possessors of nature, is directed, from the beginning, at our own inner mechanisms, the bodily and mental mechanisms of human beings.
Still, as Christian Kehrt said, one of the core visions of nanotechnology is to transgress the limits of nature. This is perhaps new compared with Descartes’ vision.
Yet, drawing a clear distinction between transgressing and respecting the limits of nature can prove harder than expected.

Sally Hughes’ presentation can be taken as proof that there are people and companies striving to modify the very concepts of respecting and of transgressing the limits of nature.
She gave us some elements of the history of Genentech, the first company entirely devoted to genetic engineering. Genentech had to face a host of problems on its way from its first days to its current status as the most successful biotechnology company in the world: problems with technology, with law, with politics, and with regulation. One of those problems was an intense legal dispute over the question “can one patent life?”. Despite the high level of risk and uncertainty that characterized the early development of the venture, Genentech made a spectacular debut on the New York stock market in the fall of 1980.
So, in a sense, Genentech managed to modify the status of the question “what is it to respect nature?”. And it did so largely from outside the walls of scientific institutions: in courts, in companies, in the media shaping public opinion. The social construction of the life sciences is not only for scientists. How can we deal with this? How can we even understand it?

Kathleen Richardson presented robots as they were invented, or reinvented, in the 1920s by Karel Čapek: a means to provoke reflection about human societies, about what it means to be human. According to Kathleen, the targets of Čapek’s criticism were well-known political practices and ideologies.
Both utopian and dystopian elements were linked to robots. Utopian elements relate to the vision of a future society where machines work and humans only supervise them. Dystopian elements relate to production chains where workers are themselves tools of the machines.
Both the utopian and the dystopian elements Kathleen Richardson mentions as inspiring Čapek’s play relate to the collective organization of human societies: what place workers deserve within production structures, what form of social organization is most desirable.
We still need to concentrate on the question “what is it to be a human being?” to better understand the meaning of a set of technologies that have the potential to change the answer to that question.
However, nowadays, utopian as well as dystopian elements of this kind are better placed at the level of the individual than at the collective level. We expect medicines and prostheses to give us a better body, perhaps a better mind, a new face, the chance to replace a leg or a hand that we have lost. Some expect to live longer, others to be able to choose children with such-and-such features and without such-and-such other ones. And some of us want all those opportunities to be a matter of individual choice.

So, core visions of nanotechnology dream of transgressing the limits of nature, of playing deus ex machina with molecules, even if perhaps we are not sure what “nature” means.
Companies deploy huge efforts on many lines of battle to ensure not only that we can engineer the living, but also that we can patent the results.
Old-fashioned fears about robots as creatures of ambiguous status are now giving way to hopes of gaining access to new bodies and new minds for ourselves and our progeny.
How can we make sense of all this within a single framework?
How can we put together the activities of companies like Genentech, scientific and technological activities like those being developed under the nano banner, and cultural and political activities like those related to Čapek’s play, and understand all this within a coherent vision?
My suggestion is to try to use the concept of “sciences of the artificial” to do so.
And I will borrow the main idea from Herbert Simon.
There is a book by Herbert Simon, The Sciences of the Artificial, first published in 1969, that could be of help here.
According to Simon, we cannot understand what it means to be artificial by opposing the artificial to the natural. Since “the world we live in today is much more a man-made world than it is a natural world”, defining the artificial as (wo)man-made is of vanishing utility.
As Henry Petroski once wrote (The Evolution of Useful Things, 1994, ix), “Other than the sky and some trees, everything I can see from where I now sit is artificial. The desk, books, and computer before me; the chair, rug, and door behind me; the lamp, ceiling, and roof above me; the roads, cars, and buildings outside my window, all have been made by disassembling and reassembling parts of nature.”
So, we need to take a different route.
Artificial things, artefacts, are not only (wo)man-made things.
Simon’s suggestion is to define an artefact as an interface between an inner environment, the organization of the artefact itself, and an outer environment. If the inner environment is appropriate to the outer environment, or vice versa, the artefact is adapted to its purpose.
Now, artificial entities, or artefacts, are “all things that can be regarded as adapted to some situation”. And, crucially, according to this definition we must consider “living systems that have evolved through the forces of organic evolution” as artefacts or artificial things.
Some examples of Simon’s application of this definition are as follows.
The “economic man” is an artificial system. The inner system of the economic man is adapted to the outer economic environment, and its behaviour has evolved under pressure from the economic environment.
In psychological terms, human beings are also artificial creatures. “Human beings, viewed as behaving systems, are quite simple. The apparent complexity of our behaviour over time is largely a reflection of the complexity of the environment in which we find ourselves.” Psychology, the study of this adaptive process, must be seen as one of the sciences of the artificial.
If artefacts are all kinds of things that have become adapted to a situation, we can ask how to make artefacts, how to make things that have desired properties. The answer to that question is what Simon calls “the science of design”, the science of creating the artificial.
Engineers are professional designers. But a doctor who prescribes remedies for a sick patient is also a designer. A manager who devises a new sales plan for a company is a designer. A political leader who draws up a social welfare policy for a state is a designer. Schools of engineering, architecture, business, education, law, and medicine are all concerned with design.
If you are working on devising courses of action aimed at changing existing situations into preferred ones, you are designing; “The natural sciences are concerned with how things are. (…) Design, on the other hand, is concerned with how things ought to be (…)”.
Now, my suggestion is that all the presentations in this session could be seen as different approaches to the study of the artificial, or to applied aspects of the sciences of the artificial. They are all different approaches to the grand design of human beings, of human societies and of human culture, a culture in which artefacts carry almost all the meaning.
One point I want to make is that, in talking of “the sciences of the artificial”, nobody is proposing either a new scientific discipline or new departments at our universities. To some extent, Christian Kehrt came close to this vision when he said that, to understand the hopes and fears of nanotechnology, it is crucial to study the history of biotechnology and microelectronics. Our suggestion is that we need a new perspective: how are we shaping future human societies by engineering so many aspects of ourselves and of our communities? And how could we, historians, biologists, physicists, computer scientists, social scientists, and philosophers, work together on this new perspective? For example, by trying to understand how technologies are taking the lead in this process, and whether it is all right that our future is being shaped mainly by technological possibilities, sometimes without proper reflection on what it means to be human.
Could this perspective, from the “sciences of the artificial”, make sense to historians of technology?


February 26, 2009

The roundabout case study

Institutional Robotics is a new strategy for conceptualizing multi-robot systems, which takes institutions as the main tool of the social life of robots with bounded rationality and bounded autonomy. This institutional approach draws inspiration from philosophical and social-science research on social and collective phenomena, and is mainly inspired by concepts from Institutional Economics, an alternative to mainstream neoclassical economic theory.

The goal is to have multiple robots carrying out activities in an environment shared with humans, in such a way that humans can interact with the robots "naturally", intuitively, without needing to learn specific techniques to deal with them. The focus is not one-to-one interaction, but social behaviour in physical and social environments populated with many natural as well as artificial agents. So the robots must be able to recognize institutions and institutional indicators that humans also recognize as structuring forms of their complex social relationships. These include, for instance, rules, routines, signs, forms of organization of the material world, social roles, and social forms such as organizations or teams.
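As a purely illustrative aid, here is a minimal sketch of how such institutional elements might be represented inside a robot controller. All class and function names (Rule, Institution, permitted_actions) are hypothetical, introduced only for this example; they are not the data structures actually used in the Institutional Robotics project.

```python
# Illustrative only: one possible way to make institutions first-class objects
# that a robot controller can query. Class and function names are hypothetical.
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class Rule:
    """A behavioural constraint the robot can check against a perceived situation."""
    name: str
    applies: Callable[[Dict], bool]    # does the rule apply in the current situation?
    permits: Callable[[str], bool]     # is a candidate action permitted under it?


@dataclass
class Institution:
    """A bundle of rules, roles and perceivable signs that structures interaction."""
    name: str
    rules: List[Rule] = field(default_factory=list)
    roles: List[str] = field(default_factory=list)   # e.g. "driver", "police"
    signs: List[str] = field(default_factory=list)   # e.g. traffic signs the robot can recognize


def permitted_actions(institutions: List[Institution], situation: Dict,
                      candidates: List[str]) -> List[str]:
    """Keep only the candidate actions allowed by every rule that applies here."""
    return [action for action in candidates
            if all(rule.permits(action)
                   for inst in institutions
                   for rule in inst.rules
                   if rule.applies(situation))]
```

The point of the sketch is only that institutions become objects the robot can query when selecting actions, rather than constraints hard-wired into its controller.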



References:


SILVA, Porfírio, and LIMA, Pedro U., "Institutional Robotics", in Fernando Almeida e Costa et al. (eds.), Advances in Artificial Life. Proceedings of the 9th European Conference, ECAL 2007, Berlin and Heidelberg, Springer-Verlag, 2007, pp. 595-604. link


SILVA, Porfírio, VENTURA, Rodrigo, and LIMA, Pedro U., "Institutional Environments", in Proc. of Workshop AT2AI: From Agent Theory to Agent Implementation, AAMAS 2008 - 7th International Conference on Autonomous Agents and Multiagent Systems, Estoril, Portugal, 2008. link





The roundabout case study

To put some of the basic concepts of Institutional Robotics to a first test, there is an ongoing case study with a minimalist setting. We want a set of robots to become able to behave as car drivers in an urban traffic scenario. The minimal setup represents several roundabouts connected by a small system of streets. The robots will have to know how to deal with basic aspects of the road code, some traffic signs, and agents playing special roles (police robots). Some more general rules, typical of human societies (“respect the integrity of other agents”, for example), must also be acknowledged and respected by the robots. Teams of e-pucks (the small robots being used) should be able to act in a “normal”, “conformist” way within the institutional environment while competing to accomplish a particular task (for example, collecting energy). But the robots should also be able, guided by utility-based considerations, to opt not to observe the institutional framework. The experiment will address the consequences of that coexistence of "conformist" and "non-conformist" behaviours within the same “robotic society”.
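A minimal sketch of that conformist versus non-conformist choice might look as follows. The utility terms, the expected-sanction model and the conformity_bias parameter are assumptions made for the sake of the example, not the decision rule actually used in the case study.

```python
# Illustrative only: a utility-based choice between observing and ignoring the
# institutional framework. The utility terms and sanction model are assumptions.

def choose_action(candidates, task_utility, permitted, expected_sanction,
                  conformity_bias=1.0):
    """Pick the candidate action with the highest net utility.

    task_utility(action)      -> expected task reward (e.g. energy collected)
    permitted                 -> set of actions allowed by the institutional framework
    expected_sanction(action) -> estimated cost of violating the framework
    conformity_bias           -> how strongly this robot weighs compliance
    """
    def net_utility(action):
        utility = task_utility(action)
        if action not in permitted:
            # Non-conformist option: discount the expected institutional sanction.
            utility -= conformity_bias * expected_sanction(action)
        return utility

    return max(candidates, key=net_utility)
```

A robot with a high conformity_bias would behave as a “conformist”; with a low bias, a large enough energy payoff could lead it to ignore the institutional framework, which is exactly the coexistence the experiment is meant to study.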


The case study explores an aspect that is essential in many institutions: most of the time, institutions have both material and mental aspects. The roundabout in a traffic scenario instantiates that property. On the one hand, the roundabout, just by its physical features, constrains behaviour: vehicles cannot drive straight on; drivers must choose either to turn right or to turn left if they want to proceed. On the other hand, doing that (deciding, in a conformist way, to go right in Portugal) implies invoking a mental entity, a rule. It is well known that this rule is not the same in all countries. But it always combines with the material features of the roundabout to play its role in an institutional environment.
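To make that combination concrete, here is a small hypothetical sketch: the material aspect restricts what is physically possible at the roundabout, while the mental aspect, a country-dependent rule, selects among the remaining options. Names and values are illustrative only, not part of the actual case study code.

```python
# Illustrative only: material constraint (no "go straight") plus a mental rule
# (a country-dependent convention) jointly determine the robot's choice.

PHYSICALLY_POSSIBLE = {"turn_right", "turn_left"}   # material constraint of the artefact

TURN_RULE = {                                       # mental aspect: a convention, not physics
    "Portugal": "turn_right",
    "United Kingdom": "turn_left",
}

def enter_roundabout(country: str, conformist: bool = True) -> str:
    if conformist:
        choice = TURN_RULE[country]                 # the rule picks among the possible options
    else:
        choice = "turn_left"                        # e.g. a non-conformist shortcut, still physically possible
    assert choice in PHYSICALLY_POSSIBLE            # the artefact itself bounds every choice
    return choice
```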



Part of the experimental setup
at Researchers’ Night 2008 (26th September 2008, Centro Cultural de Belém)





A step ahead in the experiment



Basic behaviours: obstacle avoidance, wall following.
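For illustration, a minimal sketch of these two behaviours for a differential-drive robot with a ring of infrared proximity sensors (as on the e-puck) could look like this. Sensor ordering, gains and the target distance are assumptions, not the controller actually used in the case study.

```python
# Illustrative only: Braitenberg-style obstacle avoidance and a simple
# proportional wall-follower. Proximity readings are assumed to grow as
# obstacles get closer; gains and targets are placeholder values.

def avoid_obstacles(prox, base_speed=0.1, gain=0.01):
    """Each side's sensors speed up the wheel on the same side, turning the robot away."""
    half = len(prox) // 2
    left_activation = sum(prox[:half])    # sensors on the left half of the ring
    right_activation = sum(prox[half:])   # sensors on the right half of the ring
    left_speed = base_speed + gain * left_activation
    right_speed = base_speed + gain * right_activation
    return left_speed, right_speed


def follow_wall(right_prox, base_speed=0.1, target=300.0, gain=0.0005):
    """Keep a roughly constant distance to a wall on the robot's right-hand side."""
    error = right_prox - target               # positive = too close to the wall
    left_speed = base_speed - gain * error    # too close: slow the left wheel,
    right_speed = base_speed + gain * error   # speed up the right wheel, turning away
    return left_speed, right_speed
```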



Cognition does not preclude emergence: one e-puck got stuck on a small elevation; another robot, just passing by and unaware of the situation, flattened the elevation with its own weight and freed its fellow.

(José Nuno Pereira, at ISR/IST, is a crucial participant in the roundabout case study.)

