COA philosophy professor Dr. J. Gray Cox's “Transforming Rationality to Sustain the World” is featured at the 24th World Congress of Philosophy in Beijing, China. Credit: Junesoo Shin ’21

In “Transforming Rationality to Sustain the World: Dialogical Rationality as a Key to the Ecological, Political, Technological and Moral Existential Crises We Face,” Cox looks to Quaker communal discernment, Gandhian satyagraha, and a wide variety of other traditions of nonviolent negotiation and conflict transformation for examples of how we, and the artificial intelligence we create, must think if we are to avoid disaster as a species.

“The civilization globally dominant on our planet is structured by modes of reasoning in economics, governance, technology and morality that threaten our species with ecological collapse, mutually assured destruction, domination by superhuman machine intelligences, and the annihilation of meaning for human life,” Cox writes.

“A species which imposes such radical existential threats upon itself must, in some sense, have a problem rooted not simply in its environment and desires but also in the manner in which it reasons about these and seeks to adapt. Our dominant reasoning strategies are, in a profound sense, irrational.”

Humanity’s problematic, so-called “rational” modes of reasoning are monological, Cox says: idealized as the product of a single, systematic activity carried on by a single agent who starts with clear and evident statements and uses rules of inference to draw conclusions. Centuries of rational thought since the Enlightenment have led to cultures of conflict and exploitation, environmental disregard, and spiraling planetary crises, he says.

“We need to shift the way we reason from monological maximization of our individual utility preferences to a collaborative construction of new meanings, structures and institutions that carry on historical projects for creating a safe, just and wise world,” he says.

In exploring this transition, Cox dives into a host of dialogical models that demonstrate successful reasoning processes involving negotiations between two or more agents, who typically have substantively different practices for interacting with the world and systematically different languages, beliefs, and norms. In these interactions, people work together to develop new answers, along with new languages, practices, and successful plans of action.

“In these examples, stakeholders are committed to seeking genuine agreement through nonviolent practices of investigation and persuasion that forgo violent threats to coerce an unwilling consent. Such dialogical reasoning is essential to being human. ‘Incarnating’ it in global economic, political, technological and spiritual institutions provides our only coherent hope for survival that can give life enduring meaning,” he says.

College of the Atlantic professor Dr. J. Gray Cox teaches courses in the history of philosophy, ethics, social and political philosophy, peace and conflict, sustainability, and language learning. Credit: Junesoo Shin ’21

Just as humans themselves have pursued rational, monological thinking to the point of irrationality and destruction, Cox argues, so too will AIs if we do not instill in their core code an alternative: a dialogical approach to reasoning. AIs must be able to partake in dialogue as they participate in the world, and thus incorporate various kinds of morality, he says. They must somehow be instilled with a Gandhian view.

Cox calls for a transition away from monological algorithms that pursue profit, and toward the creation of a wiser planet. While businesses and some governments work to create “smart” systems, monitored by AI according to a specific set of criteria, Cox calls for “wise” systems.

“The technology of science and institutional management that currently fuels consumption is guided, fundamentally, by mono-logical algorithms that pursue profit and GDP through creation of an ever ‘smarter planet.’ This threatens us with the creation of artificial intelligences that may surpass us in power and perhaps render us useless and extinct,” he writes. “To insure that any AI that runs our planet is friendly to humans, good in intents, and wise in actions, we need to insure that the methods of dialogical reasoning – including Gandhian satyagraha – are ‘em-bodied’ in its program structures and incarnations.”

Cox is calling for the creation of a wiser Earth through the pursuit of nonviolent conflict transformation. In his paper he identifies other ways in which dialogical reasoning could be applied to achieve a wiser planet. For example, he proposes an integration of “gifts of giving,” which would have consumers spend their income as agents of history rather than as addicts to consumption, directing it toward acts of solidarity, socially responsible investment, and political change. According to Cox, such gifts are an important step in nurturing our shift to a new framework of reasoning.

The World Congress of Philosophy, organized by the International Federation of Philosophical Societies, brings together philosophers from around the world every five years to share innovative thoughts and practices. The XXIV World Congress of Philosophy: Learning to Be Human is held in Beijing, China, from August 13 to August 20, 2018.

Cox holds a BA from Wesleyan University and an MA and Ph.D. in Philosophy from Vanderbilt University. He taught at Middle Tennessee State University and Earlham College before joining the COA faculty in 1994. He grew up locally on Mount Desert Island and was a guinea pig student in COA’s first experimental classes in the summer of 1971. He is a cofounder and clerk of the Quaker Institute for the Future, a nonprofit organization promoting research on social and environmental concerns out of the spiritual tradition of the Religious Society of Friends.