How to Fight for Truth & Protect Democracy in A Post-Truth World? - Highlights - LEE McINTYRE

Philosopher · Author of On Disinformation: How To Fight For Truth and Protect Democracy · How to Talk to a Science Denier

One thing people don't realize is that the goal of disinformation is not simply to get you to believe a falsehood. It's to demoralize you into giving up on the idea of truth, to polarize us around factual issues, to get us to distrust people who don't believe the same lie. And even if somebody doesn't believe the lie, it can still make them cynical. I mean, we've all had friends who don't even watch the news anymore. There's a chilling quotation from political theorist Hannah Arendt about how when you always lie to someone, the consequence is not necessarily that they believe the lie, but that they begin to lose their critical faculties, that they begin to give up on the idea of truth, and so they can't judge for themselves what's true and what's false anymore. That's the scary part, the nexus between post-truth and autocracy. That's what the authoritarian wants. Not necessarily to get you to believe the lie, but to get you to give up on truth, because when you give up on truth, then there's no blame, no accountability, and they can just assert their power. There's a connection between disinformation and denial.

On Disinformation: How to Fight for Truth & Protect Democracy in the Age of AI - LEE McINTYRE

Can we have real conversations with AI? How do illusions help us make sense of the world? - Highlights - KEITH FRANKISH

Editor of Illusionism as a Theory of Consciousness · Cambridge University Press’ Elements in Philosophy of Mind
Author of Mind and Supermind · Consciousness

What I like about the sort of view I have is that it represents us as fully part of the world, fully part of the same world. We're not sealed off into little private mental bubbles, Cartesian theaters, where all the real action is happening in here, not out there. No, I think we're much more engaged with the world. It's not all happening in some private mental world. It's happening in our engagement with the shared world, and that seems to me a vision that I find much more uplifting, comforting, and rewarding.

Is Consciousness an Illusion? with Philosopher KEITH FRANKISH

What can AI teach us about human cognition & creativity? - Highlights - RAPHAËL MILLIÈRE

Asst. Professor in Philosophy of AI · Macquarie University

I'd like to focus more on the immediate harms that the kinds of AI technologies we have today might pose. With language models, the kind of technology that powers ChatGPT and other chatbots, there are harms that might result from regular use of these systems, and then there are harms that might result from malicious use. Regular use would be how you and I might use ChatGPT and other chatbots to do ordinary things. There is a concern that these systems might reproduce and amplify, for example, racist or sexist biases, or spread misinformation. These systems are known to, as researchers put it, “hallucinate” in some cases, making up facts or false citations. And then there are the harms from malicious use, which might result from some bad actors using the systems for nefarious purposes. That would include disinformation on a mass scale. You could imagine a bad actor using language models to automate the creation of fake news and propaganda to try to manipulate voters, for example. And this takes us into the medium-term future, because we're not quite there, but another concern would be language models providing dangerous, potentially illegal information that is not readily available on the internet for anyone to access. As they get better over time, there is a concern that in the wrong hands, these systems might become quite powerful weapons, at least indirectly, and so people have been trying to mitigate these potential harms.

How can we ensure that AI is aligned with human values? - RAPHAËL MILLIÈRE

The Pursuit of Happiness - JEFFREY ROSEN - President & CEO of the National Constitution Center

President & CEO of the National Constitution Center
Author of The Pursuit of Happiness: How Classical Writers on Virtue Inspired the Lives of the Founders and Defined America

That idea of planting seeds for future generations came from the Tusculan Disputations. There’s something especially empowering about Cicero. And it's very striking that Thomas Jefferson and John Adams and so many in the Founding Era viewed this manual about overcoming grief as the definition for achieving happiness. And I think it's because it's a philosophy of self-mastery, self-improvement, and self-empowerment.

Can AI help us understand animal language? - Author SY MONTGOMERY & Illustrator MATT PATTERSON

Author · Illustrator

I did know that I wanted this book about turtles to also be about time. It's one of the two big questions in philosophy. The one big mystery that I had tackled in a previous book, The Soul of an Octopus, was the mystery of consciousness. The other big hard problem in philosophy is time. And I felt, you know, who better to lead me in this exploration than turtles, who live in some cases for centuries, who've been around since they arose with the dinosaurs, yet survived the asteroid impact. They are the embodiment of patience and wisdom.

Tech, Spirituality & Selfhood: TARA ISABELLA BURTON - Author of Here in Avalon, Social Creature, & Self-Made

Author of Here in Avalon · Social Creature
Strange Rites: New Religions for a Godless World 
Self-Made: Curating Our Image from Da Vinci to the Kardashians
So this idea that we can present ourselves as works of art, that we can create ourselves, has always had a particular sort of aristocratic coding, historically associated with monarchs who created their public image and their public persona, including through fashion. Today, if we don't self-promote, self-create, and self-brand, will we find the right partner? Get into the right college? Even secure the best job?

HOWARD GARDNER - Author of A Synthesizing Mind & Frames of Mind: The Theory of Multiple Intelligences - Co-director of The Good Project

Author of A Synthesizing Mind · Frames of Mind: The Theory of Multiple Intelligences
Co-director of The Good Project

The garden that makes up one's mind is always to some extent the flowers and trees that we get from our families genetically, but also what we get from our families culturally. I became interested in synthesis and I wrote the memoir quite a while ago, but now, with the advent of large language instruments or ChatGPT, the pressure to figure out what synthesis is, and what these computing systems can or can't do where human beings are still the privileged cohort in carrying out those tasks, has made the interest in synthesis more important than ever. If we're trying to decide what policy to follow, whether it's an economic policy about interest rates or, since we're talking now during the beginning of the war in the Middle East, what policies to follow militarily, economically, and ethically, do we entrust that to some kind of computational system, or is this something to which human judgment needs to be brought to bear?

What distinguishes our consciousness from AI & machine learning? Highlights: LIAD MUDRIK - Neuroscientist, Tel Aviv University

Neuroscientist · Principal Investigator Liad Mudrik Lab · Tel Aviv University

So when I say that I am a conscious creature, I mean that I don't only analyze information about the world, or even just respond to the world, because your thermostat also responds to the world. When I sense the world, I don't only process information. I also have a qualitative experience. To adopt the famous title of Thomas Nagel's paper, it feels like something, in his case, to be a bat; in our case, to be me. It feels like something to drink coffee, right? So the question is what allows us as human beings not only to process information but also to experience it? And this is what we are trying to understand, basically. And I should say, I said us as human beings, but I think that animals also have such conscious experience.

So to what extent does our cognition affect perception itself? And I belong to those researchers who think that it does. We are affected by what we expect to see. And sometimes we even perceive the expected as opposed to the world as it is. That also pertains to day-to-day life, to politics, to the reality you construct for yourself. So the brain is an amazing, amazing piece of machinery. And one of the things that it does best is to create these narratives into which we project ourselves. So it creates a model of the world.

LIAD MUDRIK - Neuroscientist - Principal Investigator Liad Mudrik Lab, Tel Aviv University

How is AI Changing Education, Work & the Way We Learn? - MICHAEL S. ROTH, President of Wesleyan University

President of Wesleyan University
Author of The Student: A Short History

So I wrote this book and it was a lot of fun because I had to learn so much. The book examines three iconic teachers: Confucius, Socrates, and Jesus. And I look at how each of those teachers encourages a certain kind of student. In the case of Confucius, the student as follower, someone who will take on the path that you've developed. In the case of Socrates, the student as critical interlocutor or critical conversation partner, someone who will, in dialogue with you, learn what they don't know, how to take things apart. And in the case of Jesus and the apostles, students trying to imitate a way of life, to transform themselves and strive towards being the kind of person that Jesus incarnated. And so that's the beginning of the book, these models of studenthood, if I could use that word, and of being a teacher. And then I look at the way in which these ideas reverberate in the West across a long period of time. So I'm interested in the idea of the student before there were schools. What did we expect young people to learn even when they weren't going to school?

Highlights - NICK BOSTROM - Author of Superintelligence: Paths, Dangers, Strategies - Founding Director, Future of Humanity Institute, Oxford

Founding Director of Future of Humanity Institute, University of Oxford
Philosopher, Author of Superintelligence: Paths, Dangers, Strategies

I do think, though, that there is a real possibility that within the lifetime of many people who are here today, we will see the arrival of transformative AI, machine intelligence systems that not only can automate specific tasks but can replicate the full generality of human thinking. So that everything that we humans can do with our brains, machines will be able to do, and in fact do faster and more efficiently. What the consequences of that are is very much an open question and, I think, depends in part on the extent to which we manage to get our act together before these developments. That means, on the one hand, working out the technical issues in AI alignment, figuring out exactly the methods by which you could ensure that such very powerful cognitive engines will be aligned with our values, will actually do what we intend for them to do, as opposed to something else. And then, of course, also the political challenges of ensuring that such a powerful technology will be used for positive ends. So depending on how well we perform on those two challenges, the outcome, I think, could be extremely good or extremely bad. And I think all of those possibilities are still in the cards.

NICK BOSTROM - Founding Director, Future of Humanity Institute, Oxford - Philosopher, Author
