Y Combinator Summary: In 2014, you were named president of Y Combinator. Did it feel scary to be leading this thing that had gained this kind of massive reputation? It didn’t feel scary at all. It felt a little sad. Like it took PG a while to convince me because I really wanted to do another startup. And so there was like some sense in which it was an admission of defeat about, “I can’t run a company.”
Speaker 2
Yeah. So this was your life, Loopt was your life, with support from Y Combinator for several years, but ultimately, it just didn’t take off. It was acquired for, I think, about 5 million. From that sale, I earned some, but this is another thing that is just sort of, I’m trying to think about, I made orders of magnitude more money from angel investments that I made.
Speaker 1
But I spent no time on the thing I poured my life into.
Speaker 2
Right. But after that happened, in 2014, you were named president of Y Combinator. Already by that point, Y Combinator had kind of become this sort of legendary incubator. Tell me a little bit about taking on that role. Did it feel scary, because again, you were still pretty young, to be leading this thing that had gained this kind of massive reputation?
Speaker 1
You know, it, this may be some sort of like weird flaw of mine. It didn’t feel scary at all.
Speaker 3
It felt a little sad.
Speaker 1
Like it took PG a while to convince me because I really wanted to do another startup. I wanted to like prove myself. I was, you know, 28 or something at the time. I didn’t feel ready to like go retire and do a career in venture, which is really how I thought of it. I didn’t like have a ton of respect for the career path. And so there was like some sense in which it was an admission of defeat about, you know, I just, I can’t run a company. So I’m going to go do the easy job or I’m going to go do the retirement job or something. And it took me a long time

We Follow the Technology Summary: The way we pick projects to work on is not as exciting as everybody would hope. We realize that it’s very hard to sit in a vacuum and predict where everything is going to go. Some day we’ll come back to it when the robots and the simulators get better.
Speaker 1
But what happened is it was too hard of an environment, and we ended up pausing our robotics work. Some day we’ll come back to it when the robots and the simulators get better. It’s clearly, you know, it would be great to have, but it turned out to be hard in the wrong kind of ways. The way we pick projects to work on is not as exciting as everybody would hope. I think there’s this belief that you have a bunch of people sitting in a room picking this brilliant secret strategy. We just run our own reinforcement learning algorithm: we do more of what works. We follow the technology. We realize that it’s very hard to sit in a vacuum and predict where everything is going to go. Most people who have tried that have been catastrophically wrong, and it’s very smart people trying it. The technology just kind of surprises all of us. And so we pay close attention to what’s working, we put more effort into what’s working, we then put effort into figuring out how we can deploy it safely and beneficially, and then we go do the next thing on top of that. But it’s very much like driving on a country road at night with headlights on. You can always see before you have to make a turn, but you have no idea what the next few turns look like.
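(A minimal, purely illustrative Python sketch of the “do more of what works” idea described above. The project names, payoff numbers, and the epsilon-greedy bandit loop are all invented for the example and are not anything OpenAI actually runs.)

    import random

    # Toy epsilon-greedy bandit: allocate more effort to whichever direction
    # is observed to be working, while still exploring a little.
    projects = {"language models": 0.0, "robotics": 0.0, "games": 0.0}  # running estimates
    counts = {name: 0 for name in projects}

    def observe_progress(name):
        # Stand-in for reality: each direction has some unknown true payoff.
        true_payoff = {"language models": 0.8, "robotics": 0.2, "games": 0.5}
        return true_payoff[name] + random.gauss(0, 0.1)

    epsilon = 0.1  # keep exploring, since the technology keeps surprising everyone
    for step in range(1000):
        if random.random() < epsilon:
            choice = random.choice(list(projects))    # explore
        else:
            choice = max(projects, key=projects.get)  # exploit: more of what works
        reward = observe_progress(choice)
        counts[choice] += 1
        projects[choice] += (reward - projects[choice]) / counts[choice]  # running average

    print(projects, counts)  # effort concentrates on the direction that pays off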
The Biggest Challenge for OpenAI Is Uncertainty Summary: And I am like, wistfully envious of the Helion world because it’s still super important and super great, but it’s so much clearer. There’s no uncertainty about what to go after, what’s going to happen, what the company needs to do, and there’s none of the harder issues to contend with. At OpenAI, we just don’t have any of that certainty.
Speaker 2
Right. And that’s a question, right? Because if you are building a product, if you’re building an autonomous vehicle, you know what you’re building towards. But you’re really not building towards anything in particular. You’re building towards something that we humans can’t even imagine, right? And that’s the thing. I guess it’s a matter of just figuring out how to develop technology that can think like a human, beyond how humans can process information, in highly complex ways. Yeah.
Speaker 1
You bring up a great point. The other company I’m involved with is this nuclear fusion company called Helion. And there are a lot of similarities between the two companies in terms of like a very hard scientific and engineering problem. But the biggest difference is like what to do, what success looks like, what’s going to happen on the other side of this, what problem to go solve next.
Speaker 3
You can look at it backwards or forwards. It’s very clear. It’s all hard, but it’s very clear.
Speaker 1
There’s no uncertainty about what to go after, what’s going to happen, what the company needs to do. And another nice thing is like, it’s basically all upside. There’s none of the harder issues to contend with. At OpenAI, we just don’t have any of that certainty. And one of the things that has been like a little bit surprising to me is how difficult it is to get advice about how to run a project or a company in the face of such uncertainty. It hasn’t happened a lot of times. And I am like, wistfully envious of the Helion world because it’s, you know, it’s still super important and super great, but it’s so much clearer.

GPT-3 and Artificial Intelligence Summary: GPT-3 is a powerful language model that people develop on top of. It’s not open source, but anyone can use it. When we deploy it via the API, we can take it back if we find out there’s a real safety issue. We can make changes if unsafe things are happening.
Speaker 1
Oh, it’s called GPT-3. It’s like a powerful language model that people develop on top of.
Speaker 2
Right. And that’s available. That’s open source. Anybody can use that.
Speaker 1
It’s not open source, but anyone can use it.
Speaker 2
I got you. Okay.
Speaker 1
We could open source it someday in the future. It’s certainly something we’ll consider. But there is a thing about the API access which we like, which is that when we know less than we’d like, when we’re in the fog of war, if we just publish the weights of a model on the internet and then we realize, like, there’s actually a safety issue here, we can’t take that back. It’s done. It’s like a one-way door. We also cannot put any usage restrictions on it after we open source it. When we deploy it via the API, we can take it back if we find out there’s a real safety issue. But more importantly than that, we can enforce usage restrictions. We can make changes if unsafe things are happening.
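(To make the one-way-door point concrete, here is a hedged Python sketch of the difference between a hosted API, where the provider can still enforce policy and revoke access, and published weights, where it cannot. The endpoint, key, and response fields below are made up for the example and are not OpenAI’s actual API.)

    import requests

    API_URL = "https://api.example.com/v1/complete"  # hypothetical hosted endpoint
    API_KEY = "sk-..."                               # placeholder credential

    def hosted_completion(prompt: str) -> str:
        # Every request passes through the provider's servers, so access can be
        # revoked and usage restrictions enforced even after deployment.
        resp = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            json={"prompt": prompt, "max_tokens": 64},
            timeout=30,
        )
        resp.raise_for_status()  # the provider can reject disallowed uses here
        return resp.json()["text"]

    # By contrast, once model weights are published openly, anyone can copy and
    # run them locally; no server-side check or takedown reaches those copies.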
Speaker 2
So let’s kind of break down GPT-3 for people who don’t know this, because obviously, as you know, I built this as a generalist show. We talk about chocolate chip cookies and artificial intelligence on the same show. And from what I understand, GPT-3 can do a lot of things, but there’s a famous example from 2020 in the Guardian newspaper. They essentially asked GPT-3 to write an op-ed called Why We Shouldn’t Be Afraid of AI and published it.

Do We Accept That Some People Will Use Technology for Good? Summary: There are a lot of intelligent people on planet Earth. And some of those intelligent people are bad actors. There will be really intelligent people who figure out how to get an AI to create a new virus and deploy it very effectively. You just cannot prevent that. I mean, you look at how bad actors have deployed social media or digital technology to steal money from people’s bank accounts.
Speaker 3
You know, we deploy our technology slowly and cautiously, like we’re willing to piss users off to go slowly.
Speaker 1
But as the systems get much more powerful, the challenges become more and more unprecedented.
Speaker 2
Yeah. There are a lot of intelligent people on planet Earth. And some of those intelligent people are bad actors. I mean, you look at how bad actors have deployed social media or digital technology to steal money from people’s bank accounts or crypto accounts or, you know, just, these are not dummies. These are intelligent criminal networks, right, that have figured out how to use a variety of tools that are now available to them to do real harm. And that’s going to happen with this technology, right? There will be really intelligent people who figure out how to get an AI to create a new virus and deploy it very effectively, like that could happen, or how to, you know, build your own very powerful explosive. Now, that being said, do we just accept that that is part and parcel of being human, that there are humans with bad intentions and there are humans with good intentions, and that it doesn’t matter what we do with technology, because some people will use it for good and some people will use it for evil? And you just cannot prevent that.

Predictions About Technology Roadmaps Are Hard Summary: The consensus five, 10 years ago was that first of all, the AI was going to come for the blue collar jobs. And if we look at it now, it appears relatively clear that it’s going in exactly the opposite direction. So my observation is just that like, this was hard to predict. It seems fairly clear now. But everybody was confident and wrong in the other direction.
Speaker 1
Absolutely. You know, the strong consensus five, 10 years ago was that first of all, the AI was going to come for the blue collar jobs.
Speaker 3
Second of all, would be the less sophisticated white collar jobs.
Speaker 1
Third of all, would be the very high cognitive load white collar jobs like a computer programmer or a mathematician or whatever. And then last of all, and maybe never, because maybe this was like special and human only, would be the creative jobs. And if we look at it now, it appears relatively clear that it’s going to go in exactly the opposite direction. And there’s a lot of things one can take away from that. But one of the most important ones is predictions about technology roadmaps are hard and a lot of very confident experts get them wrong.
Speaker 3
So my observation is just that like, this was hard to predict.
Speaker 1
It seems fairly clear now. With the benefit of hindsight, it also seems fairly obvious. But everybody was confident and wrong in the other direction. Robotics is really hard, as we talked about earlier.
Speaker 2
Yeah. So I mean, what does that mean in practical terms? So right now we’re talking here in 2022, there is a labor shortage in the United States, right? There just are not enough humans to fill all the jobs available. And there’s a crisis at many companies across the board, not just in the service sector and the blue collar jobs, but also in technology companies.

Is There a Labor Shortage? Summary: A lot of people would choose to spend their time in a very different way if they didn’t have to. I don’t know what the jobs of the future will look like, but I am confident that human creativity is not going to go anywhere. The economy and society will look super different. That’s for sure. But we have always worried about this. We have always found something new to do.
Speaker 1
And you should be able to do less of it. I really love work. It’s my hobby. It’s my passion. I think it’s great. But I know a lot of people don’t, and a lot of people would choose to spend their time in a very different way if they didn’t have to. I also think a lot of people who like work would clearly like to work less. We’re seeing this in this post-pandemic world, where a lot of people are like, I go into the office two or three days, I’d really rather only work two or three days too, I have a lot of other things I like, no, I really don’t want to go back. And so I think the labor shortage is actually much bigger than it seems. I also think a lot of things that we do were understaffed even before the pandemic. But certainly, like, going through an airport now or going to get medical care now, you really feel, you know, this is not what full employment would look like, where I just walk in and everything’s ready to go all of the time. Or where, like, every student has a one-to-one teacher-to-student ratio, that could be amazing too. Who knows what? So we’re hugely understaffed now also. We’ve seen this with every other technological revolution too. I don’t know what the jobs of the future will look like, but I am confident that human creativity, desire for status, desire to, like, do new things and to accomplish things, that’s not going to go anywhere. The economy and society will look super different. That’s for sure. But we have always worried about this. We have always found something new to do.
