
Computing morals, saving cups

Whether taken from religion, deep philosophical reflection or (god help us) enrapturing television programs like Game of Thrones, unique moral codes guide us through ambiguous and sticky situations. How else would we know how to act when vying for free rooms in Eaton? Why else would we stop stealing cups from Dewick?

Moral codes are shaped by experience, culture, societal norms, the scores of infinitesimal components that make up individuals and a mother’s love. But there’s another, arguably more fundamental sculptor: neural hardwiring and human biology.

Today, a growing niche of evolutionary biologists, neuroscientists, linguists and other academics is probing moral intuitions and the objectivity of ethical reasoning. Intellectuals like Sam Harris and Marc Hauser raise salient questions: What, if such a thing exists, is the common moral denominator, the set of parameters within which a mother’s love and everything else craft morality? Does morality have an underlying grammatical component?

To tackle these questions, moral theorists, such as Georgetown Professor of Law John Mikhail, have constructed a framework for human moral cognition aptly named Universal Moral Grammar (UMG). Influenced by Noam Chomsky’s theory of universal grammar, the linguistic concept that the human ability to use grammar is hardwired in the brain, UMG outlines sets of rules, principles and computations we use, consciously or otherwise, to make judgments of right and wrong. As if “computations” didn’t sound mechanical enough, here’s where it gets robotic. 

If these computations can be written into code, robots may become capable of employing emotional and moral reasoning in their decision making. Faux-emotional experiences in robots and computers would be akin to their faux-conscious experiences: computers already simulate thinking, and now they may be able to simulate emotion. This means that, when robots rule the world, maybe they’ll be forgiving.

These computations, these codes, are preliminary, but they would allow experimenters to recreate the most fundamental components of human emotional instinct. Much of this faux-reasoning involves deontic logic, a field concerned with whether actions are permissible, obligatory or forbidden. To sort emotionally charged decisions into these categories, researchers have analyzed moral dilemmas: scenarios in which moral intuition trumps pure logic.
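
To make that framing concrete, here is a minimal sketch in Python of the three deontic categories and a toy classifier. The category names come straight from deontic logic; the classification rules and the feature flags (harms_as_means, prevents_greater_harm) are invented for illustration, not taken from any researcher's actual model.

```python
from enum import Enum

class Deontic(Enum):
    # The three statuses deontic logic assigns to actions.
    FORBIDDEN = "forbidden"
    PERMISSIBLE = "permissible"
    OBLIGATORY = "obligatory"

def classify(action: dict) -> Deontic:
    """Toy classifier sorting an action into a deontic category.

    The rules below are placeholders: real models would derive the
    status from structured features of a scenario, not from two flags.
    """
    if action.get("harms_as_means"):
        return Deontic.FORBIDDEN
    if action.get("prevents_greater_harm"):
        return Deontic.OBLIGATORY
    return Deontic.PERMISSIBLE

# A lever pull that saves five at the cost of one, as a side effect:
print(classify({"prevents_greater_harm": True}))  # Deontic.OBLIGATORY
```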

The classic dilemma, which investigators risk posing ad nauseam, is a series of hypothetical trolley scenarios designed to tease out the factors behind decision making, prominent among them side effects and means. When told that they could pull a lever so a runaway trolley kills one person instead of five, for example, most people (90 percent) say they would pull it. Conversely, when told that they could stop the same trolley by pushing a man off a platform into its path, killing him, few people (10 percent) say they would. The former scenario is an example of harm as a side effect; the latter, harm as a means.

A more relevant scenario might be euthanasia. Death as a side effect of cutting off life support might feel more permissible than death via lethal injection as a means of relieving a patient of agony. Side effects seem naturally to have larger scopes of permissibility than means do. Using such moral dilemmas, researchers have begun thinking about how to encode innate human morality.
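
As a rough illustration of that asymmetry, the sketch below reduces the side-effect versus means distinction to a single rule. This is not Mikhail's actual model; the Scenario fields, the hard prohibition on harm as a means and the simple head count are assumptions made only to show the shape such a computation might take.

```python
from dataclasses import dataclass

@dataclass
class Scenario:
    lives_saved: int
    lives_lost: int
    harm_is_means: bool  # True when the death is the mechanism, not a byproduct

def permissible(s: Scenario) -> bool:
    """Toy 'moral grammar' rule for the side-effect/means asymmetry.

    Harm produced as a side effect is weighed against the lives saved;
    harm used as a means is ruled out regardless of the tally.
    """
    if s.harm_is_means:
        return False
    return s.lives_saved > s.lives_lost

lever = Scenario(lives_saved=5, lives_lost=1, harm_is_means=False)  # divert the trolley
push = Scenario(lives_saved=5, lives_lost=1, harm_is_means=True)    # push the man

print(permissible(lever))  # True: most people (90 percent) say they would pull the lever
print(permissible(push))   # False: few people (10 percent) say they would push the man
```

Softening that hard prohibition into a weighted penalty would let the same skeleton express why cutting off life support can feel permissible while a lethal injection does not.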

Robotic emotional reasoning carries great implications for the nature, origin and purpose of moral knowledge. It offers perspective on how intuitive and visceral decisions are made. The more we know about our own irrationality, the better we can control our emotions, ideas and decisions, which on impulse are not always socially acceptable or politically correct, and the better we can know ourselves.

Questioning the hardwired fundamentals of human emotion is, then, kind of cool and kind of important. Without them, Dewick might not have any cups left.