Opinion: Humanity needs to talk about AI
Is artificial intelligence going to replace us?
I posed this question at a recent panel discussion, co-sponsored by Emporia’s Current Club and the Emporia State University School of Humanities and Social Sciences, called “A Poet, a Pastor, a Philosopher, and a Programmer.” Kevin Rabas, Trevor Hoag and the Rev. Jan Todd joined programmer Bryan Schmiedeler for a lively discussion.
Panelists were split on whether AI would achieve sentience, or self-awareness. The programmer, Schmiedeler, argued that AI sentience may come by accident. The poet, Rabas, argued that, like a virus, AI can be dangerous even when it is not sentient. The pastor, Todd, doubted whether AI could yet develop emotional intelligence, an essential part of sentience. The philosopher, Hoag, argued that AI will indeed achieve sentience, and that other life forms besides humans already have it.
Regarding the matter of AI replacing us, Rabas quoted Frank Herbert’s 1965 book “Dune,” in which one character tells another, “Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them.”
Also on the replacement question, Todd noted that pastors are already using AI to help write sermons, and sermon quality has improved as a result. She stressed that human intelligence is relational and asked whether AI could develop the same relational intelligence. Hoag suggested that humans and AI may develop a symbiotic (mutually beneficial) relationship, but that humans would have to get over our belief that we are the only intelligent life in order to adapt to this. Rabas added that while computers typically beat humans at chess, a computer-human team can beat either a computer or a human playing alone. Schmiedeler answered most unequivocally that AI is indeed going to replace us — sadly.
For my next question, I invoked the concerns of physicist J. Robert Oppenheimer, who worried that humans were better at developing technology than at learning how to use it responsibly. Rabas alluded to biologist E.O. Wilson, who stated that humans have paleolithic emotions, medieval institutions, and godlike technology. Todd noted that we reap what we sow, that sometimes we realize our mistakes too late, that there are nefarious actors who would use AI for ill, and that technology has always been part of the human story. Hoag asked who controls AI, and wondered if it is the world’s richest people. He noted the views of philosopher Martin Heidegger, who wrote that technology itself can be an agent of change. Schmiedeler wondered if a “small catastrophe” with AI would be enough to get humans to take its potential dangers more seriously.
Audience members asked about whether AI may have already anticipated and answered these questions without telling us, about the massive demand for water needed to cool AI data centers, and about “wet” (combined biological and computational) intelligence. Schmiedeler noted a case in which two different AI apps being developed by Google created their own language which only they could understand, without being prompted to do so. Caught unawares, Google ended the project.
Panelists agreed that humanity needs to have more civil, open discussions about the impact of AI, starting right away. Our panel was a great example of just such a discussion.
— Michael Smith is a professor of political science at Emporia State University.