Post by Sapphire Capital on Sept 5, 2008 20:36:00 GMT 4
Moral Grammar and Intuitive Jurisprudence: A Formal Model of Unconscious Moral and Legal Knowledge
John Mikhail
Georgetown University - Law Center
July 20, 2008
THE PSYCHOLOGY OF LEARNING AND MOTIVATION: MORAL COGNITION AND DECISION MAKING, D. Medin, L. Skitka, C. W. Bauman, D. Bartels, eds., Vol. 50, Academic Press, 2009
Georgetown Public Law Research Paper No. 1163422
Abstract:
Could a computer be programmed to make moral judgments about cases of intentional harm and unreasonable risk that match those judgments people already make intuitively? If the human moral sense is an unconscious computational mechanism of some sort, as many cognitive scientists have suggested, then the answer should be yes. So too if the search for reflective equilibrium is a sound enterprise, since achieving this state of affairs requires demarcating a set of considered judgments, stating them as explanandum sentences, and formulating a set of algorithms from which they can be derived. The same is true for theories that emphasize the role of emotions or heuristics in moral cognition, since they ultimately depend on intuitive appraisals of the stimulus that accomplish essentially the same tasks. Drawing on deontic logic, action theory, moral philosophy, and the common law of tort, particularly Terry's five-variable calculus of risk, I outline a formal model of moral grammar and intuitive jurisprudence along the foregoing lines, which defines the abstract properties of the relevant mapping and demonstrates their descriptive adequacy with respect to a range of common moral intuitions, which experimental studies have suggested may be universal or nearly so. Framing effects, protected values, and implications for the neuroscience of moral intuition are also discussed.
papers.ssrn.com/sol3/Delivery.cfm/SSRN_ID1193522_code395700.pdf?abstractid=1163422&mirid=2