Does AI-generated Perfection Detach Us from Reality, Life & Human Connection? - Highlights - HENRY AJDER

AI/Deepfakes/Synthetic Media Advisor · BBC Presenter
Adobe · Meta · European Commission · Partnership on AI · House of Lords

I would like to preserve a real sense of empathy and humility, which comes with understanding that the world is messy, that people are messy, that defects and imperfections exist, that things don't always go the way you want, as much as you wish they could. Imperfection is part of life, and I guess my concern is that AI-generated content, which smooths and perfects a version of reality into precisely what you want, and makes you feel pressured to represent yourself in this absolutely perfect way, fundamentally gives you no room for error and detaches you from the reality of growth and life and how people work. We should empathize with other people. Everyone has their challenges. Things don't always have to be exactly how you want them to be, or how other people want them to be. And that involves having some humility about yourself as a messy creature, as we all are. I hope that's retained, but I do see this move towards the smoothed and shaped reality that AI is enabling potentially creating more of a disconnect from that imperfect, messy, but also quite beautiful world. This polished but ultimately plastic version of reality is increasingly becoming the default for some people over the fleshy, messy, human side of things.

How is AI Changing Our Perception of Reality, Creativity & Human Connection? w/ HENRY AJDER - AI Advisor

What does the future hold for our late-stage capitalist society with mega-corps controlling everything? - Highlights - KYLE HIGGINS, KARINA MANASHIL & KID CUDI

Eisner Award-nominated Comic Book Author KYLE HIGGINS
Emmy-nominated Producer KARINA MANASHIL & KID CUDI on the Making of Moon Man

So, as we started talking, we went through what this could look like: what a new Black superhero in 2024 could look like, what the threats would be, what the world might look like if it's maybe not even five minutes in the future (I would argue it's more like two and a half minutes in the future), and what kind of really complex, emotionally layered journey we could put this character through.

Comics, Music, Ethics & AI: KYLE HIGGINS, KARINA MANASHIL & KID CUDI on the Making of Moon Man

MAX BENNETT - Author of A Brief History of Intelligence: Evolution, AI, and the Five Breakthroughs That Made Our Brains - CEO of Alby

Author of A Brief History of Intelligence: Evolution, AI, and the Five Breakthroughs That Made Our Brains
Cofounder & CEO of Alby · Fmr. Cofounder & CPO of Bluecore

So, modern neuroscientists are questioning whether there really is one consistent limbic system. But usually when we're looking at the limbic system, we're thinking about things like emotion, volition, and goals. And those types of things, I would argue, reinforcement learning algorithms already have, at least on a primitive level, because the way we get them to achieve goals, like playing a game of Go and winning, is we give them a reward signal, or a reward function, and then we let them self-play and teach themselves by maximizing that reward. But that doesn't mean they're self-aware; it doesn't mean they're experiencing anything at all. There's a fascinating set of questions in the AI community around what's called the reward hypothesis: how much of intelligent behavior can be understood through the lens of just trying to optimize a reward signal? We are more than just reward optimizers. We do things to try to reinforce our own identities. We do things to try to understand ourselves. These are attributes that are hard to explain from a simple reward signal, but they do make sense under other conceptions of intelligence, like Karl Friston's active inference, where we build a model of ourselves and try to reinforce that model.
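Bennett's point, that goal-directed behavior can emerge from nothing more than maximizing a reward signal, is the core idea of reinforcement learning. As a minimal sketch of that principle only (a toy tabular Q-learning agent of my own construction, not the deep self-play systems used for Go; the environment, states, and hyperparameters are all illustrative assumptions):

```python
import random

# Toy sketch: a tabular Q-learning agent on a 6-cell track learns to
# reach the goal cell purely by maximizing a scalar reward signal.
N_STATES = 6           # cells 0..5; cell 5 is the goal
ACTIONS = [+1, -1]     # step right or step left
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Deterministic environment: reward 1 only on reaching the goal."""
    nxt = max(0, min(N_STATES - 1, state + action))
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0), nxt == N_STATES - 1

random.seed(0)
for _ in range(500):   # episodes of trial and error
    s, done = 0, False
    while not done:
        # epsilon-greedy: mostly exploit current value estimates
        if random.random() < EPS:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        nxt, r, done = step(s, a)
        best_next = max(Q[(nxt, act)] for act in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = nxt

# The learned greedy policy steps right (toward the goal) in every state.
policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)}
```

The agent is never told what the goal is; the behavior that emerges is entirely an artifact of the reward function, which is exactly the sense of the reward hypothesis discussed above, and exactly why reward maximization alone implies nothing about self-awareness or experience.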