What does the future hold for our late-stage capitalist society with mega-corps controlling everything? - Highlights - KYLE HIGGINS, KARINA MANASHIL & KID CUDI

Eisner Award-nominated Comic Book Author KYLE HIGGINS
Emmy-nominated Producer KARINA MANASHIL & KID CUDI on the Making of Moon Man

So, as we started talking and going through what this could look like: What would a new Black superhero in 2024 look like? What would the threats be? What might the world look like if it's maybe not even five minutes in the future? I would argue it's more like two and a half minutes in the future. And then, what kind of really complex, emotionally layered journey could we put this character through?

Comics, Music, Ethics & AI: KYLE HIGGINS, KARINA MANASHIL & KID CUDI on the Making of Moon Man

Eisner Award-nominated Comic Book Author KYLE HIGGINS
Emmy-nominated Producer KARINA MANASHIL & KID CUDI on the Making of Moon Man

So, as we started talking and going through what this could look like: What would a new Black superhero in 2024 look like? What would the threats be? What might the world look like if it's maybe not even five minutes in the future? I would argue it's more like two and a half minutes in the future. And then, what kind of really complex, emotionally layered journey could we put this character through?

MAX BENNETT - Author of A Brief History of Intelligence: Evolution, AI, and the Five Breakthroughs That Made Our Brains - CEO of Alby

Author of A Brief History of Intelligence: Evolution, AI, and the Five Breakthroughs That Made Our Brains
Cofounder & CEO of Alby · Fmr. Cofounder & CPO of Bluecore

So, modern neuroscientists are questioning whether there really is one consistent limbic system. But usually when we're looking at the limbic system, we're thinking about things like emotion, volition, and goals. And those types of things, I would argue, reinforcement learning algorithms already have, at least on a primitive level, because the way we get them to achieve goals, like playing a game of Go and winning, is we give them a reward signal, or a reward function, and then we let them self-play and teach themselves by maximizing that reward. But that doesn't mean that they're self-aware; it doesn't mean that they're experiencing anything at all. There's a fascinating set of questions in the AI community around what's called the reward hypothesis, which asks how much of intelligent behavior can be understood through the lens of just trying to optimize a reward signal. We are more than just trying to optimize reward signals. We do things to try and reinforce our own identities. We do things to try and understand ourselves. These are attributes that are hard to explain from a simple reward signal, but they do make sense under other conceptions of intelligence, like Karl Friston's active inference, where we build a model of ourselves and try to reinforce that model.
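
As a rough sketch of the reward-maximization loop Bennett describes, where an agent teaches itself purely by maximizing a scalar reward signal, here is a minimal tabular Q-learning example in Python. The toy chain environment, reward function, and hyperparameters are illustrative assumptions for this page, not anything discussed in the episode.

import random

# Illustrative toy setup (not from the interview): a 5-state chain where the
# only reward is given for reaching the rightmost state.
N_STATES = 5
ACTIONS = [-1, +1]            # step left or right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Return (next_state, reward); the reward signal fires only at the goal."""
    nxt = max(0, min(N_STATES - 1, state + action))
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0)

for episode in range(500):
    state = 0
    while state != N_STATES - 1:
        # Epsilon-greedy: mostly exploit current value estimates, occasionally explore.
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            # Random tie-breaking so early episodes still wander toward the goal.
            action = max(ACTIONS, key=lambda a: (q[(state, a)], random.random()))
        nxt, reward = step(state, action)
        # Q-learning update: nudge the estimate toward reward + discounted future value.
        best_next = max(q[(nxt, a)] for a in ACTIONS)
        q[(state, action)] += ALPHA * (reward + GAMMA * best_next - q[(state, action)])
        state = nxt

# The learned greedy policy heads straight for the rewarding state: [1, 1, 1, 1].
print([max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)])

Nothing in this loop gives the agent self-awareness or an experience of anything; it only nudges value estimates toward a number, which is exactly the gap between the reward hypothesis and the richer conceptions of intelligence Bennett points to.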