ALBERTO SAVOIA

Google’s 1st Engineering Director · Innovation Agitator Emeritus
Author of The Right It: Why So Many Ideas Fail and How to Make Sure Yours Succeed

As much as I would love to take the credit, Google Ads was a big team, and I was fortunate to be brought in as a director who managed that team. I think the reason it was so successful is that innovations and new ideas compound. They build one upon the other. The reason ads was so successful for Google is that search was so successful for Google. When you have search and you have billions of people coming in every day, maybe every hour, searching for all kinds of things, you have this treasure trove of data. If you have a billion searches per day, do you know how many experiments you can run? Google is famous for doing a lot of A/B experiments; that's how we collect the data. So what actually enabled Google to be so successful and to grow is this mental attitude, which is the same one that Amazon and some of these other really successful technology companies have, of doing a lot of experiments on small samples and continually refining based on that data. If you're dealing with a lot of people, you can do those experiments, and that's why these companies are successful. The sad thing is what happens to companies that do not operate that way, that do not try to operate on data and run all of those experiments: those are the ones that get left behind. Innovation is experimentation.
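Savoia's point about experimentation can be made concrete with a small sketch. The example below is purely illustrative and not from the interview: all names and numbers are hypothetical. It compares click-through rates for a control and a variant using a two-proportion z-test, the kind of basic check that underlies a simple A/B experiment.

```python
# Illustrative sketch only: a basic two-proportion z-test of the kind that
# underpins a simple A/B experiment. All names and numbers are hypothetical.
from math import sqrt, erf

def two_proportion_z_test(clicks_a, views_a, clicks_b, views_b):
    """Return (z statistic, two-sided p-value) comparing two click-through rates."""
    p_a = clicks_a / views_a
    p_b = clicks_b / views_b
    p_pooled = (clicks_a + clicks_b) / (views_a + views_b)
    se = sqrt(p_pooled * (1 - p_pooled) * (1 / views_a + 1 / views_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical experiment: variant B shows a slightly higher click-through rate.
z, p = two_proportion_z_test(clicks_a=4_120, views_a=100_000,
                             clicks_b=4_390, views_b=100_000)
print(f"z = {z:.2f}, p = {p:.4f}")  # a small p-value suggests the difference is real
```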

NICK BOSTROM

Founding Director of Future of Humanity Institute, University of Oxford
Philosopher, Author of Superintelligence: Paths, Dangers, Strategies

I do think, though, that there is a real possibility that within the lifetime of many people who are here today, we will see the arrival of transformative AI: machine intelligence systems that not only can automate specific tasks but can replicate the full generality of human thinking. So that everything we humans can do with our brains, machines will be able to do, and in fact do faster and more efficiently. What the consequences of that are is very much an open question and, I think, depends in part on the extent to which we manage to get our act together before these developments arrive. That means, on the one hand, working out the technical issues in AI alignment, figuring out exactly the methods by which you could ensure that such very powerful cognitive engines will be aligned with our values, will actually do what we intend for them to do, as opposed to something else. And then, of course, there are also the political challenges of ensuring that such a powerful technology will be used for positive ends. So depending on how well we perform on those two challenges, the outcome, I think, could be extremely good or extremely bad. And I think all of those possibilities are still in the cards.