LEE McINTYRE

Philosopher · Author of On Disinformation: How To Fight For Truth and Protect Democracy · How to Talk to a Science Denier

One thing people don't realize is that the goal of disinformation is not simply to get you to believe a falsehood. It's to demoralize you into giving up on the idea of truth, to polarize us around factual issues, to get us to distrust people who don't believe the same lie. And even if somebody doesn't believe the lie, it can still make them cynical. I mean, we've all had friends who don't even watch the news anymore. There's a chilling observation from the political theorist Hannah Arendt: when you constantly lie to someone, the consequence is not necessarily that they believe the lie, but that they begin to lose their critical faculties, to give up on the idea of truth, so that they can no longer judge for themselves what's true and what's false. That's the scary part, the nexus between post-truth and autocracy. That's what the authoritarian wants. Not necessarily to get you to believe the lie, but to get you to give up on truth, because when you give up on truth, then there's no blame, no accountability, and they can just assert their power. There's a connection between disinformation and denial.

RAPHAËL MILLIÈRE

Asst. Professor in Philosophy of AI · Macquarie University

I'd like to focus more on the immediate harms that the kinds of AI technologies we have today might pose. With language models, the kind of technology that powers ChatGPT and other chatbots, there are harms that might result from regular use of these systems, and then there are harms that might result from malicious use. Regular use would be how you and I might use ChatGPT and other chatbots to do ordinary things. There is a concern that these systems might reproduce and amplify, for example, racist or sexist biases, or spread misinformation. These systems are known to, as researchers put it, “hallucinate” in some cases, making up facts or false citations. And then there are the harms from malicious use, which might result from bad actors using the systems for nefarious purposes. That would include disinformation on a mass scale. You could imagine a bad actor using language models to automate the creation of fake news and propaganda to try to manipulate voters, for example. And this takes us into the medium-term future, because we're not quite there, but another concern would be language models providing dangerous, potentially illegal information that is not readily available on the internet for anyone to access. As they get better over time, there is a concern that in the wrong hands, these systems might become quite powerful weapons, at least indirectly, and so people have been trying to mitigate these potential harms.

ROBERT PIPPIN

Author of The Culmination: Heidegger, German Idealism, and the Fate of Philosophy · Modernism as a Philosophical Problem · Hegel’s Idealism

Philosophy is both an academic discipline and something that everybody does. Everybody has to have reflective views about what's significant, and they also have to justify to themselves why it's significant or important. The nature of justice itself, and the various views about justice that have been developed in philosophy, can reach a very high level. So there's this unusual connection between philosophy and human life. We've inherited from the Middle Ages this incredible tradition that has now developed into a chance for young people to spend four or five years, in a way, released from the pressures of life. The opportunity to pursue your ideas a little further in these four years, exempt from the pressures of social life, gives philosophy a kind of position unique in the academy. In confronting what the best minds in the history of the world have had to say about these issues, the hope is that they provide, for the people privileged enough to confront philosophy, a better and more thoughtful approach to the fundamental questions that everybody has to confront.