How to change our extractive mindset to a regenerative mindset? PAUL SHRIVASTAVA - Highlights

The Club of Rome & The Limits to Growth w/ Co-President PAUL SHRIVASTAVA

Less than two weeks into the new year, the world’s wealthiest 1% have already used their fair share of the global carbon budget allocated for 2025. Climate change is here. It's already causing devastation to the most vulnerable populations. We are living with an extractive mindset, where we are extracting one way out of the life system of the Earth. We need to change from that extractive mindset to a regenerative mindset. And we need to change from the North Star of economic growth to a vision of eco-civilizations. Those are the two main principles that I want to propose, and that the Club of Rome suggests, as we try to transform our current organization towards regenerative living and eco-civilization.

Highlights - Nick Bostrom - Founding Director, Future of Humanity Institute, Oxford

Founding Director of Future of Humanity Institute, University of Oxford
Philosopher, Author of Superintelligence: Paths, Dangers, Strategies

I do think, though, that there is a real possibility that within the lifetime of many people who are here today, we will see the arrival of transformative AI: machine intelligence systems that not only can automate specific tasks but can replicate the full generality of human thinking. So that everything that we humans can do with our brains, machines will be able to do, and in fact do faster and more efficiently. What the consequences of that are is very much an open question and, I think, depends in part on the extent to which we manage to get our act together before these developments. In terms of, on the one hand, working out our technical issues in AI alignment, figuring out exactly the methods by which you could ensure that such very powerful cognitive engines will be aligned to our values, will actually do what we intend for them to do, as opposed to something else. And then, of course, also the political challenges of ensuring that such a powerful technology will be used for positive ends. So depending on how well we perform on those two challenges, the outcome, I think, could be extremely good or extremely bad. And I think all of those possibilities are still in the cards.
