Assembling Tomorrow: A Guide to Designing a Thriving Future - SCOTT DOORLEY & CARISSA CARTER - Directors of Stanford’s d.school

Creative & Academic Director · Stanford d.school
Co-authors of Assembling Tomorrow: A Guide to Designing a Thriving Future

Today, someone is putting the finishing touches on a machine-learning algorithm that will change the way you relate to your family. Someone is trying to design a way to communicate with animals in their own language. Someone is cleaning up the mess someone else left behind seventy years ago yesterday. Today, someone just had an idea that will end up saving one thing while it harms another.

To be a maker in this moment—to be a human today—is to collaborate with the world. It is to create and be created, to work and be worked on, to make and be made. To be human is to tinker, create, fix, care, and bring new things into the world. It is to design. You—yes, you!—might design products or policy, services or sermons, production lines or preschool programs. You might run a business, make art, or participate in passing out meals to the poor. You may write code or pour concrete, lobby for endangered species legislation or craft cocktails. Wherever you fit in, you are part of shaping the world. This is design work.

– Assembling Tomorrow: A Guide to Designing a Thriving Future

Is understanding AI a bigger question than understanding the origin of the universe? - NEIL JOHNSON

Professor of Physics · GWU · Head of the Dynamic Online Networks Lab

It gets back to this core question: why does AI come out with what it does? That's the burning question. To me as a scientist, it's bigger than the origin of the universe, and here's the reason why. The origin of the universe happened; that's why we're here. It's almost a historical question, asking why it happened. The AI future is not a historical question. It's a now and future question.

How can we develop AI systems that are more respectful, ethical, and sustainable? - DR. SASHA LUCCIONI

Founding Member of Climate Change AI
AI Researcher & Climate Lead · Hugging Face

My work is really about figuring out how, right now, AI is using resources like energy and emitting greenhouse gases, and how it's using our data without our consent. I feel that if we develop AI systems that are more respectful, ethical, and sustainable, we can help future generations so that AI will be less of a risk to society. The way I got into this field was working on the environmentally beneficial applications of AI, and I do believe that's an impactful way of using AI techniques, because there's so much data about the climate: satellite data and sensor data. The way to go about this is to work with domain experts. AI is never going to solve the problem on its own, but it can be a tool. So I think that there's a lot of promise there.

BRIAN DAVID JOHNSON - Author of The Future You: How to Create the Life You Always Wanted - Futurist in Residence, ASU’s Center for Science & the Imagination

Author of The Future You: How to Create the Life You Always Wanted
Director of Arizona State University’s Threatcasting Lab
Futurist in Residence, ASU’s Center for Science & the Imagination

Let's talk about technology, the role of humanity, the role of being human, and what it means to be present in that. We need to keep humans at the center of everything that we do; everything that we do in our lives is about humans. It begins with humans and ends with humans. There might be technologies and businesses and all these things in between, but we should measure the effect on humans.

When I talk to people about artificial intelligence or technology, I'm generally asking them two questions: What are you optimizing for? What's the effect that you're trying to get? Developing technology for technology's sake can be kind of interesting... but then, is the reason you're doing it just that you think it's interesting? And ultimately, if you're doing it for something beyond your own gratification, why are you doing it?

So much of what I do is talking to governments and militaries and large organizations to say we always have to keep humans in the loop. You have to keep humans at the center because it's about us. That really is incredibly important, and it's one of the central ideas about the future: the future should be about humans. Where are humans going? What do we want as humans? And how are we using technology to make us more human, or healthier, or happier, or more productive?

SUSAN SCHNEIDER - Director, Center for the Future Mind, FAU - Fmr. NASA Chair

Founding Director · Center for the Future Mind · Florida Atlantic University
Author of Artificial You: AI and the Future of Your Mind
Fmr. NASA Chair · Fmr. Distinguished Scholar at the US Library of Congress

It's hard to tell exactly what the dangers are, but one thing we certainly need to track is that beings who are vastly intellectually superior may not respect weaker beings, given our own past. It's really hard to tell exactly what will happen. The first concern I have is with surveillance capitalism in this country: the constant surveillance of us, because the US is a surveillance capitalist economy, and it's the same elsewhere in the world, right? With Facebook and all these social media companies, things have just been going deeply wrong. And so it leads me to worry about how the future is going to play out. These tech companies aren't going to be doing the right thing for humanity. And this gets to my second worry, which is: how is all this going to work for humans, exactly? It's not clear where humans will even be needed in the future.

AI & THE FUTURE OF HUMANITY

What will the future look like? What are the risks and opportunities of AI? What role can we play in designing the future we want to live in? In this first episode of our new channel, philosophers, futurists, AI experts, science fiction authors, activists, and lawyers reflect on AI, technology, and the Future of Humanity.

ANIL SETH - Author of Being You: A New Science of Consciousness - Co-director of Sussex Centre for Consciousness Science

Author of Being You: A New Science of Consciousness
Co-director of the Sackler Centre for Consciousness Science · Canadian Institute for Advanced Research Program on Brain, Mind & Consciousness

This is a point in philosophy: the world as it is can never be directly apprehended by our minds. We are shielded from it by what's called a sensory veil. There is, for instance, no such thing as colors out there. As the artist Cézanne said, "The colors are where the brain and the universe meet." And color is, I think, a really good example because it is, in a sense, less than what's there: our eyes are only sensitive to three wavelengths of this huge electromagnetic spectrum, which goes all the way from x-rays and gamma rays to radio waves, and we live in a tiny, thin slice of that reality. But then, out of those three wavelengths we experience, our brains generate many more than three colors, an almost infinite palette of colors. So there's no sense in which our perception could ever reveal the world as it really is; rather, it reveals the world in a way that's very useful for us as organisms hell-bent on continuing to live and to survive.