Highlights - Nick Bostrom - Founding Director, Future of Humanity Institute, Oxford

Founding Director of Future of Humanity Institute, University of Oxford
Philosopher, Author of Superintelligence: Paths, Dangers, Strategies

I do think, though, that there is a real possibility that within the lifetime of many people who are here today, we will see the arrival of transformative AI: machine intelligence systems that not only can automate specific tasks but can replicate the full generality of human thinking, so that everything we humans can do with our brains, machines will be able to do, and in fact do faster and more efficiently. What the consequences of that are is very much an open question and, I think, depends in part on the extent to which we manage to get our act together before these developments. That means, on the one hand, working out the technical issues in AI alignment, figuring out exactly the methods by which you could ensure that such very powerful cognitive engines will be aligned with our values, will actually do what we intend for them to do, as opposed to something else. And then, of course, there are also the political challenges of ensuring that such a powerful technology will be used for positive ends. So depending on how well we perform on those two challenges, the outcome, I think, could be extremely good or extremely bad. And I think all of those possibilities are still in the cards.

Highlights - Nicholas A. Christakis, Director, Human Nature Lab, Yale

Author of Blueprint: The Evolutionary Origins of a Good Society
Director of the Human Nature Lab at Yale · Co-director of the Yale Institute for Network Science

So these kinds of problems in what I call hybrid systems of humans and machines are a key focus of my lab right now. Margaret Traeger, who's now at Notre Dame, did a wonderful project in which we had groups of three humans and a humanoid robot work together to solve a problem.

We manipulated the humanity of the robot. For example, sometimes we had the robot tell stupid, corny dad jokes. Or we had the robot break the ice by saying, "You know, robots can make mistakes, too." That kind of thing. And what we found was that the interactions among the humans could be changed by the simple programming of the robot.
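Purely as an illustrative sketch of what "the simple programming of the robot" might look like, here is a minimal mock-up of condition assignment and scripted utterances; the condition names, lines, and assignment scheme below are hypothetical, not the study's actual materials.

```python
import random

# Hypothetical utterance scripts for manipulating the robot's "humanity".
# Condition names and lines are illustrative, not the study's materials.
SCRIPTS = {
    "vulnerable": [
        "You know, robots can make mistakes, too.",
        "Why did the robot go on vacation? It needed to recharge.",  # a corny dad joke
    ],
    "neutral": [
        "Beginning the next round.",
        "The round is complete.",
    ],
    "silent": [],  # the robot produces no social talk at all
}

def assign_condition(session_id: int, seed: int = 42) -> str:
    """Randomly assign one session (three humans plus one robot) to a condition."""
    rng = random.Random(seed + session_id)
    return rng.choice(sorted(SCRIPTS))

def robot_utterance(condition: str, round_number: int) -> str | None:
    """What the robot says between task rounds in this condition, if anything."""
    lines = SCRIPTS[condition]
    return lines[round_number % len(lines)] if lines else None

for session in range(3):
    cond = assign_condition(session)
    print(f"session {session}: {cond!r} -> {robot_utterance(cond, 0)!r}")
```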

Nicholas A. Christakis, Director, Human Nature Lab, Yale

Author of Blueprint: The Evolutionary Origins of a Good Society
Director of the Human Nature Lab at Yale · Co-director of the Yale Institute for Network Science

We're not attempting to invent super-smart AI to replace human cognition. We are inventing dumb AI to supplement human interaction. Are there simple forms of artificial intelligence, simple programming of bots, such that, when they are added to groups of humans (because those humans are smart or otherwise positively inclined), they help the humans to help themselves? Can we get groups of people to work better together, for instance, to confront climate change, or to reduce racism online, or to foster innovation within firms?

Can we have simple forms of AI that are added into our midst that make us work better together? And the work we're doing in that part of my lab shows abundantly that that's the case. We've published a stream of papers showing that we can do that.
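To make the idea concrete, here is a toy simulation, not the lab's experimental code, in the spirit of networked coordination experiments with "dumb" bots (for example, Shirado and Christakis, Nature, 2017): agents play a graph-coloring game, and a few bots that occasionally make random moves are added to the network. Every name and parameter below is an assumption for illustration.

```python
import random

def make_random_graph(n: int, degree: int, rng: random.Random) -> dict[int, set[int]]:
    """Build a small random graph where each node gets at least `degree` neighbors."""
    nbrs = {i: set() for i in range(n)}
    for i in range(n):
        while len(nbrs[i]) < degree:
            j = rng.randrange(n)
            if j != i:
                nbrs[i].add(j)
                nbrs[j].add(i)
    return nbrs

def conflicts(node: int, color: int, colors: dict, nbrs: dict) -> int:
    """Number of neighbors sharing this node's color."""
    return sum(1 for j in nbrs[node] if colors[j] == color)

def play(n: int = 20, degree: int = 3, n_colors: int = 4,
         noisy_bots: frozenset = frozenset(), noise: float = 0.1,
         max_steps: int = 2000, seed: int = 0):
    """Agents repeatedly pick the color that minimizes conflicts with neighbors.
    Nodes in `noisy_bots` sometimes pick a random color instead; that occasional
    randomness can knock the group out of a stuck, locally optimal configuration.
    Returns the step at which no conflicts remain, or None if never solved."""
    rng = random.Random(seed)
    nbrs = make_random_graph(n, degree, rng)
    colors = {i: rng.randrange(n_colors) for i in range(n)}
    for step in range(max_steps):
        node = rng.randrange(n)
        if node in noisy_bots and rng.random() < noise:
            colors[node] = rng.randrange(n_colors)  # the bot's "dumb" random move
        else:
            # Myopic best response: deterministic, so groups can get stuck.
            colors[node] = min(range(n_colors),
                               key=lambda c: conflicts(node, c, colors, nbrs))
        if all(conflicts(i, colors[i], colors, nbrs) == 0 for i in range(n)):
            return step
    return None

# Results vary by seed; the qualitative point is the comparison, not the numbers.
print("steps without bots:    ", play(seed=1))
print("steps with noisy bots: ", play(noisy_bots=frozenset({0, 1, 2}), seed=1))
```

The design intuition this sketch gestures at, consistent with the lab's published network experiments, is that a small amount of well-placed randomness can help a human group escape configurations it would otherwise stay stuck in.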