JARON LANIER on Tech, Music, Creativity & Who Owns the Future - Highlights

VR Pioneer · Musician · Author JARON LANIER
Who Owns the Future? · Dawn of the New Everything: Encounters with Reality & Virtual Reality · Ten Arguments for Deleting Your Social Media Accounts Right Now

AI is obviously the dominant topic in tech lately, and I think occasionally there's AI that's nonsense, and occasionally there's AI that's great. I love finding new proteins for medicine and so on. I don't think we serve ourselves well when we put our own technology up as if it were a new God that we created. I think we're really getting a little too full of ourselves to think that.

When we do that, I think we confuse ourselves too easily. This goes back to Alan Turing, the main founder of computer science, who had this idea of the Turing test. In the test, you can't tell whether the computer has gotten more human-like or the human has gotten more computer-like. People are very prone to becoming more computer-like. When we're on social media, we let ourselves be guided by the algorithms, so we start to become dumb in the way the algorithms want us to. You see that all the time. It's really degraded our psychologies and our society.

AI, Virtual Reality & Dawn of the New Everything w/ JARON LANIER, VR Pioneer, Musician, Author

Father of VR · Musician · Author JARON LANIER
Who Owns the Future? · Dawn of the New Everything: Encounters with Reality & Virtual Reality · Ten Arguments for Deleting Your Social Media Accounts Right Now

How to Fight for Truth & Protect Democracy in A Post-Truth World? - Highlights - LEE McINTYRE

Philosopher · Author of On Disinformation: How To Fight For Truth and Protect Democracy · How to Talk to a Science Denier

One thing people don't realize is that the goal of disinformation is not simply to get you to believe a falsehood. It's to demoralize you into giving up on the idea of truth, to polarize us around factual issues, to get us to distrust people who don't believe the same lie. And even if somebody doesn't believe the lie, it can still make them cynical. I mean, we've all had friends who don't even watch the news anymore. There's a chilling quotation from political theorist Hannah Arendt about how, when you always lie to people, the consequence is not necessarily that they believe the lie, but that they begin to lose their critical faculties, that they begin to give up on the idea of truth, and so they can't judge for themselves what's true and what's false anymore. That's the scary part, the nexus between post-truth and autocracy. That's what the authoritarian wants: not necessarily to get you to believe the lie, but to get you to give up on truth, because when you give up on truth, then there's no blame, no accountability, and they can just assert their power. There's a connection between disinformation and denial.

On Disinformation: How to Fight for Truth & Protect Democracy in the Age of AI - LEE McINTYRE

Philosopher · Author of On Disinformation: How To Fight For Truth and Protect Democracy · How to Talk to a Science Denier

What can AI teach us about human cognition & creativity? - Highlights - RAPHAËL MILLIÈRE

Asst. Professor in Philosophy of AI · Macquarie University
I'd like to focus more on the immediate harms that the kinds of AI technologies we have today might pose. With language models, the kind of technology that powers ChatGPT and other chatbots, there are harms that might result from regular use of these systems, and then there are harms that might result from malicious use. Regular use would be how you and I might use ChatGPT and other chatbots to do ordinary things. There is a concern that these systems might reproduce and amplify, for example, racist or sexist biases, or spread misinformation. These systems are known to, as researchers put it, “hallucinate” in some cases, making up facts or false citations. And then there are the harms from malicious use, which might result from some bad actors using the systems for nefarious purposes. That would include disinformation on a mass scale. You could imagine a bad actor using language models to automate the creation of fake news and propaganda to try to manipulate voters, for example. And this takes us into the medium-term future, because we're not quite there, but another concern would be language models providing dangerous, potentially illegal information that is not readily available on the internet for anyone to access. As they get better over time, there is a concern that in the wrong hands, these systems might become quite powerful weapons, at least indirectly, and so people have been trying to mitigate these potential harms.

How can we ensure that AI is aligned with human values? - RAPHAËL MILLIÈRE

Asst. Professor in Philosophy of AI · Macquarie University

What does the future hold for our late-stage capitalist society with mega-corps controlling everything? - Highlights - KYLE HIGGINS, KARINA MANASHIL & KID CUDI

Eisner Award-nominated Comic Book Author KYLE HIGGINS
Emmy-nominated Producer KARINA MANASHIL & KID CUDI on the Making of Moon Man

So, as we started talking, we went through what this could look like. What could a new Black superhero in 2024 look like? What would the threats be? What might the world look like if it's maybe not even five minutes in the future? I would argue it's like two and a half minutes in the future. And then what kind of really complex, emotionally layered journey could we put this character through?

Comics, Music, Ethics & AI: KYLE HIGGINS, KARINA MANASHIL & KID CUDI on the Making of Moon Man

Eisner Award-nominated Comic Book Author KYLE HIGGINS
Emmy-nominated Producer KARINA MANASHIL & KID CUDI on the Making of Moon Man

HOWARD GARDNER - Author of A Synthesizing Mind & Frames of Mind: The Theory of Multiple Intelligences - Co-director of The Good Project

Author of A Synthesizing Mind · Frames of Mind: The Theory of Multiple Intelligences
Co-director of The Good Project

The garden that makes up one's mind is always, to some extent, the flowers and trees that we get from our families genetically, but also culturally. I became interested in synthesis and wrote the memoir quite a while ago, but now, with the advent of large language models like ChatGPT, the pressure to figure out what synthesis is, and what these computing systems can or can't do in tasks where human beings are still the privileged cohort, has made the interest in synthesis more important than ever. If we're trying to decide what policy to follow, whether it's an economic policy about interest rates or, since we're talking now at the beginning of the war in the Middle East, what policies to follow militarily, economically, and ethically, do we entrust that to some kind of computational system, or is this something to which human judgment needs to be brought to bear?