November 19, 2007

The doctrinal paradox, the discursive dilemma, and some problems of deliberative capabilities in multi-agent systems


Deliberative capabilities of multi-agent systems do not necessarily emerge from their individual members' deliberative capabilities alone. Yet we do not need any kind of telepathy (wireless direct communication between robots) or collective consciousness in order to conceptualize those capabilities. Pettit (2004) helps us understand the problem, leading us from the doctrinal paradox, identified in jurisprudence, to a generalized discursive dilemma that most deliberative collectives may face.

Here is an example of the doctrinal paradox.
A three-judge court has to decide a tort case and judge the defendant liable if and only if the defendant's negligence was causally responsible for the injury to the plaintiff and the defendant had a duty of care toward the plaintiff. Now, which decision has the court taken when the judges vote as follows?



           Cause of harm?   Duty of care?   Liable?
           (Premise 1)      (Premise 2)     (Conclusion)
Judge A    Yes              No              No
Judge B    No               Yes             No
Judge C    Yes              Yes             Yes


With a conclusion-centered procedure, the court decides “No”. With a premise-centered procedure, the court decides “Yes”, since the conclusion follows deductively from the conjunction of positive answers to the two premises. The doctrinal paradox consists in the same case, with the same votes, yielding different outcomes under different procedures. The same paradox can arise when the conclusion is linked to a disjunction of premises: for example, when an appellant should be granted a retrial either if inadmissible evidence was used or if a confession was forced. Both procedures are sketched in code after the following table.



           Inadmissible evidence?   Forced confession?   Retrial?
Judge A    Yes                      No                   Yes
Judge B    No                       Yes                  Yes
Judge C    No                       No                   No
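
To make the contrast concrete, here is a minimal sketch in Python of the two procedures, run on both examples. The function names and the encoding of votes as tuples of boolean premise answers are illustrative assumptions, not anything from Pettit's paper.

```python
def majority(votes):
    """True iff a strict majority of the boolean votes is True."""
    return sum(votes) > len(votes) / 2

def conclusion_centered(profiles, doctrine):
    """Each judge derives an individual conclusion; the court takes a
    majority vote on those conclusions."""
    return majority([doctrine(p) for p in profiles])

def premise_centered(profiles, doctrine):
    """The court takes a majority vote on each premise separately, then
    applies the doctrine to the collective premise answers."""
    collective = tuple(majority([p[i] for p in profiles])
                       for i in range(len(profiles[0])))
    return doctrine(collective)

# Tort case: liable iff cause of harm AND duty of care (judges A, B, C).
tort_votes = [(True, False), (False, True), (True, True)]
tort = lambda p: p[0] and p[1]
print(conclusion_centered(tort_votes, tort))   # False: the court says "No"
print(premise_centered(tort_votes, tort))      # True:  the court says "Yes"

# Retrial case: retrial iff inadmissible evidence OR forced confession.
retrial_votes = [(True, False), (False, True), (False, False)]
retrial = lambda p: p[0] or p[1]
print(conclusion_centered(retrial_votes, retrial))  # True:  "Yes"
print(premise_centered(retrial_votes, retrial))     # False: "No"
```

Same votes, opposite verdicts in both cases: the choice of procedure, not the judges' opinions, settles the outcome.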



The paradox is not confined to courts and the legal domain. It can arise within many groups, such as appointment and promotion committees, or committees deciding who is to win a certain contract or prize. “It will arise whenever a group of people discourse together with a view to forming an opinion on a certain matter that rationally connects, by the lights of all concerned, with other issues” (Pettit 2004:170).

In a generalized version, the paradox is known as the discursive dilemma. Purposive groups (organizations with a specific function or goal, such as states, political parties or business corporations) will almost inevitably confront the discursive dilemma in an especially interesting version: they have to take a series of decisions, over a period of time, in a consistent and coherent way.
Take as an example a political party that takes each major decision by majority vote. In March it announces that it will not increase taxes if it gets into government; in June it announces that it will increase defence spending. In September it must announce whether it will increase spending in other policy areas. The following matrix (where A, B and C stand for voting behaviour patterns) shows the dilemma's structure.



     Increase taxes?   Increase defence spending?   Increase other spending?
A    No                Yes                          No (reduce)
B    No                No (reduce)                  Yes
C    Yes               Yes                          Yes



If the party allows a majority vote on the last issue, it risks incoherence and, with it, discredit.
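
To see the structure in code, here is a minimal sketch in Python. It assumes, as the example implies, a budget constraint under which the party cannot expand both kinds of spending without raising taxes; the block encoding and the coherence test are illustrative assumptions.

```python
def majority(votes):
    """True iff a strict majority of the boolean votes is True."""
    return sum(votes) > len(votes) / 2

# Issues: (increase taxes?, increase defence?, increase other spending?)
blocks = {
    "A": (False, True, False),   # no new taxes, more defence, cut other
    "B": (False, False, True),   # no new taxes, cut defence, more other
    "C": (True, True, True),     # raise taxes, increase both
}

def coherent(position):
    taxes, defence, other = position
    # Assumed budget constraint: cannot expand both spending areas
    # without raising taxes.
    return not (defence and other and not taxes)

assert all(coherent(p) for p in blocks.values())   # each block is coherent

party_line = tuple(majority([p[i] for p in blocks.values()])
                   for i in range(3))
print(party_line)            # (False, True, True)
print(coherent(party_line))  # False: the majority platform is incoherent
```

Every voting block holds a coherent position, yet issue-by-issue majority voting commits the party to an incoherent platform: that is the discursive dilemma.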

Situations of this kind can occur partly because, in ordinary social life, people (even within organizations) do not form preferences and take decisions on the basis of complete information and deep theoretical foundations. So collectives that pursue goals of their own, whether those goals involve the outside world or their own members, must adopt some kind of collective reason: some mechanism to sustain coherent global behaviour towards those goals. Collective reason does not necessarily emerge from individuals' reason alone.


REFERENCE

(Pettit 2004) Pettit, Philip, “Groups with Minds of their Own”, in Frederick Schmitt (ed.), Socializing Metaphysics, New York: Rowman and Littlefield, 2004, pp. 167–193.

More information at Philip Pettit's web page.

2 Comments, criticism, questions, suggestions:

Benoit Hardy-Vallée, PhD 20 November 2007 at 15:37  

My guess is that if you want to have robots that build
institutions, you will need minimally:

1- a collective-action problem (such as a public goods game)
2- reciprocal altruists (agents who play tit for tat)
3- "altruistic punishers" (agents who punish free-riders)

If you have all that, you may have something like an institution.
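
(For illustration only: a toy Python sketch of this recipe, assuming a repeated public goods game. The agent behaviours, the payoff rules and all parameters are arbitrary assumptions, not the commenter's design.)

```python
# Assumed parameters: endowment per round, pot multiplier, size of the
# fine a punisher imposes, and the cost of imposing it.
ENDOWMENT, MULTIPLIER, FINE, FINE_COST = 10.0, 1.6, 4.0, 1.0

def play_round(types, last):
    """One round: decide contributions, split the pot, apply punishment."""
    contrib = []
    for i, kind in enumerate(types):
        others = [c for j, c in enumerate(last) if j != i]
        if kind == "free_rider":
            contrib.append(False)
        elif kind == "reciprocator":
            # Tit-for-tat analogue: contribute iff most others did last round.
            contrib.append(sum(others) > len(others) / 2)
        else:  # "punisher": always contributes
            contrib.append(True)
    share = MULTIPLIER * ENDOWMENT * sum(contrib) / len(types)
    payoff = [share + (0.0 if c else ENDOWMENT) for c in contrib]
    # Altruistic punishers pay a cost to fine last round's free-riders.
    for i, kind in enumerate(types):
        if kind == "punisher":
            for j, c in enumerate(last):
                if not c:
                    payoff[j] -= FINE
                    payoff[i] -= FINE_COST
    return contrib, payoff

types = ["free_rider", "reciprocator", "reciprocator", "punisher"]
contrib = [True] * len(types)        # optimistic first-round memory
for _ in range(5):
    contrib, payoff = play_round(types, contrib)
print(payoff)  # punishment makes free riding less attractive each round
```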

Porfirio Silva 21 November 2007 at 16:23  

For sure, we want robots able to "live" in institutional environments. And with some "institutional imagination" and "institutional building" capabilities, not to invent radically new institutions, but to make some (minor?) adjustments to them if needed. Because "Nobody is born alone in the wild. Not even robots."
Anyway: I'm actually trying to design a scenario inspired by public goods games.
