How can we resolve conflicts without compromising our ethics and moral vision? Each year, wars that citizens never get an opportunity to vote on are fought in our name or with our support. How can we make our voices heard?
“Warism - taking war for granted as morally acceptable, even morally required - is the primary obstacle to peace.”

Duane L. Cady is a philosopher and Professor Emeritus at Hamline University. He was nominated for the 1991 Grawemeyer World Order Award, was named Outstanding Educator of the Year by the United Methodist Foundation for Higher Education, and a festschrift in his honor was published in 2012. Cady is best known for his works on pacifism, including Moral Vision: How Everyday Life Shapes Ethical Thinking and From Warism to Pacifism: A Moral Continuum.
THE CREATIVE PROCESS · ONE PLANET PODCAST
What are your reflections on the importance of the humanities and the liberal arts?
DUANE L. CADY
So when I think about the importance of the liberal arts and the humanities, they literally are the skills that free us. I think our freedom depends on the liberal arts, learning about music, learning about mathematics, learning about history, reading books, the kinds of things that we need that are not going to turn into a market job, at least not immediately. But employers understand they need people who can think, and who are articulate when it comes to speaking and writing. And I also think the humanities generally are well-named because they in fact humanize us. Once we see the rich literature, artwork, theater, and so on, of previous generations, it gives us a much deeper perspective on who we might be, and how we might put our freedom and lives to good use.
*
So we can think about life as an organism, or as a market, as many do. Or as a gift, or a machine. And when you consider other societies, the range gets even bigger. Reason can calculate options and guide decisions consistent with these frameworks, but reason cannot tell us which framework to be in. It doesn't prove or provide our conceptual framework. So what I'm saying is that morality is rational, but it's not something we could call scientific or mathematical. In my view, over the last hundred years philosophy has more and more modeled itself after science and mathematics, so it's become more technical and more narrow, very different from what it had been for its first couple of millennia. If ethics is to be meaningful in a pluralistic world, we need ways of thinking that can take us beyond both relativism, which regards all moral visions as equal and choice among them as arbitrary, and dogmatism, which regards one's own moral view as superior to all others and as justifiably imposed on them.
The task for us is to understand how we come to have moral visions and then to consider the ethics of negotiating between and among them, including collisions between moral visions. So my interest is in the extent to which various forms of reason take part in these different projects. I argue that contemporary technical philosophers tend to avoid this kind of problem. They tend to think of reason as much more narrow, whereas I want to include things like ordinary experience, the arts, theater, and reading a book. All of those things can have an effect. Or, as we can see with current events, experiencing moral horror, even at a distance, on TV or in the newspaper, can help shape our moral vision. So it's not necessarily a matter of narrowly technical reasoning. That is probably enough to give an idea of what I might do.
THE CREATIVE PROCESS · ONE PLANET PODCAST
Climate change is present in our minds. We see these global conflicts arise, and the expense of being involved in them, instead of fighting what I feel is coming for all of us: the inequities caused by climate change, where we're already seeing climate refugees.
And so I would love to see a diversion of the funds now used for war, something like a Marshall Plan or a Peace Corps. Investing in, say, in the US, a Peace Corps for the environment. Instead of this huge military, sending soldiers overseas to these hundreds of bases around the world, bring some back home to build valuable renewable energy infrastructure. It could be classified as a security measure to protect the American people from climate change and from future wars caused by global warming, food insecurity, and dwindling resources.
CADY
One of the statistics that sticks in my mind is that the U.S. military burns more fossil fuel in a year than the whole country of Sweden. It's just staggering. And where do we dump military waste? Usually on poor people, and usually in parts of the country like Native American reservations, where we dig into the ground and bury our nuclear waste. And why do we do that? Well, we wouldn't want it near us.
THE CREATIVE PROCESS · ONE PLANET PODCAST
For me, it feels like a wasted opportunity. The military has the organizational capacity, huge budgets, and a trained workforce that could be building things instead of destroying things. That's my wish.
CADY
All the money we're investing in Ukraine or to support Israel, people need to understand that that money doesn't go to Ukraine and doesn't go to Israel. It goes to United States corporations that build weapons. We give them the money so they can buy weapons from us, and then the weapons go there. And right now we're arguably fighting proxy wars, using U.S. weapons to test them out and to create tremendous profits for U.S. corporations.
THE CREATIVE PROCESS · ONE PLANET PODCAST
In the last 20 years we've had many developments. AI is supposed to boost productivity and pioneer medical and climate breakthroughs, but it also presents clear regulatory, security, and governance dilemmas, with implications for the military and for bioweapons. So how would you update your book, bearing in mind these new technologies and current events?
CADY
As soon as someone mentions artificial intelligence, I think back more than 25 years to when Garry Kasparov, the world chess champion, played against the IBM computer Deep Blue. In the first match, Kasparov won. But in the rematch, Kasparov lost. We learned that one of the ways Deep Blue managed to beat Kasparov is that they had programmed all of Kasparov's chess matches into it, so he was, in a sense, playing himself. I think that says a lot about artificial intelligence. I'm sympathetic to Chomsky's view that artificial intelligence isn't intelligence at all. It's a kind of theft. You plug in enough information, and certain logical techniques and algorithms can generate a lot of output, but it isn't creative. It simply reproduces things that have already happened. Some of the AI creators are concerned that it could even start to make decisions that run against the decisions we might like, and the implications for the military are pretty scary.