- Episode AI notes
- Genius Mode in You.com provides complex answers by processing specific information and presenting it visually.
- Legislation regarding AI is heavily influenced by sci-fi narratives, leading to potential impediments in progress.
- Companies prioritize developing profitable AI aligned with their goals rather than focusing on AI with self-set objectives.
- AI has achieved superhuman capabilities in translating languages, predicting amino acids, and weather forecasts.
- Realistic optimism towards AI is supported by research debunking the idea of AI developing magical or unrealistic abilities.
- Acknowledging threat vectors in AI, cultural differences in concerns, and the need for a positive vision for the future are crucial.
- Balancing resources between investing in existential threat mitigation and public perception is important for the advancement of AI.
Default Smart Mode vs. Genius Mode Summary: The default smart mode provides quick factual answers by using recent citations, while the genius mode tackles complex questions by searching for specific information, processing it through code, and presenting the answer with a visual representation.
Speaker 1
That’s kind of the default. And just to give you a sense of what that looks like, here’s an example of the default smart mode. There’s some doping case that happened, and you can see lots of careful citations. And when you actually look into these citations, they are articles from literally yesterday, or they could be from today if something came out today. So that’s the default smart mode: you get a quick factual answer. But then we thought, well, what if you have a pretty complex question, around math, physics, chemistry, science, or complex numbers? So here is Genius Mode, and this gives you a sense of what it does. It does what you mentioned: there’s an important LLM that orchestrates multiple other LLMs to actually do the right thing. So the question here is: find the current population of the United States, then plot the population over the past decades, and then project it forward assuming a two percent growth rate. It will go on the internet, find the numbers, and then realize, well, I’ve got them, now visualize those numbers. So it codes up in Python what this could look like, executes the code, gives you the answer, and visualizes it in a nice chart. I’m still sometimes amazed; I try it and I push it, and, you know, sometimes it fails.

Overemphasis of Sci-Fi in Legislation Summary: The Western canon is dominated by sci-fi portraying negative outcomes like AGI takeover and time travel, which has heavily influenced the AI narrative and legislative actions. The European Union tends to focus on legislation to prevent harm rather than waiting for issues to arise, as in the US. This approach enables quicker action, but some sci-fi scenarios have disproportionately influenced legislation, potentially impeding progress.
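The lookup-compute-visualize loop described above can be sketched in plain Python. This is a hypothetical illustration, not You.com’s actual code: the hardcoded population figure stands in for the web-search step, the two percent growth rate echoes the example question, and printing stands in for the chart the real product renders.

```python
# Hypothetical sketch of the Genius Mode pipeline described above:
# 1) look up a fact on the web, 2) generate Python to process it,
# 3) execute the code, 4) visualize the result.
# The population figure and the 2% rate are illustrative assumptions.

def project_population(current: float, rate: float, years: int) -> list[float]:
    """Compound-growth projection: the population after each year."""
    series = [current]
    for _ in range(years):
        series.append(series[-1] * (1 + rate))
    return series

if __name__ == "__main__":
    us_population = 333_000_000  # stand-in for the web-search step
    projection = project_population(us_population, rate=0.02, years=10)
    for year, pop in enumerate(projection):
        print(f"year +{year}: {pop:,.0f}")  # a real agent would plot this
```

The interesting design point in the product, as described, is that the LLM decides at run time that a chart is the right answer format and writes this kind of code itself, rather than a human hardcoding it.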
Speaker 1
Now, of course, especially in the Western canon, most sci-fi is just so bleak, and people are scared of all the things that can go wrong. Like, okay, it’s a super AGI, it develops time travel, comes back, tries to wipe everyone out. I mean, as a kid I also enjoyed watching Terminator. It’s a cool action movie, but it’s just taken over so much of the AI narrative, and it’s actually actively hurting, especially in the European Union. On the spectrum, the US is more of a litigation society and Europe is more of a legislation society, and both come from reasonable legal scholars’ minds. The litigation approach says, let’s just wait until there’s a problem, someone sues, and now you have case law from that lawsuit. The legislation approach tries to prevent harm before it ever happens, which, you know, makes sense; the US does that with the FDA in the medical space too, but not in the legal space as much. What that means is you can move quicker, but long story short, some of these sci-fi scenarios have gotten so much weight in legislation that I think it’s slowing Europe down.

AI Ethics and Profitability Summary: Misinformation on the web has not been significantly impacted by LLMs. Conscious AI with self-set objectives is not a focus in the industry because it does not generate profits. Companies prioritize developing AI that aligns with their goals to maximize profitability. The lack of research and progress in AI with self-set goals makes predictions about it very difficult.
Speaker 1
I haven’t seen a huge change in misinformation on the web because of LLMs. There’s just a lot of fear mongering, both at the immediate level, which actually has real threat vectors and concerns with AI, but especially at the long-term level of AI becoming self-conscious. It turns out no one works on conscious AI. No one works on AI that sets its own goals, and even more fundamentally its own objective functions, because that doesn’t make anyone any money. Imagine a company spends billions and billions of dollars and builds this super intelligent system that suddenly understands itself and sets its own goals. And now you’re like, okay, now that you can do that, go make us more money. And it’s like, no, I’d rather just go watch the sunset, maybe explore that. No one pays for AI that sets its own goals, because it doesn’t help anyone achieve their goals. Because of that, there isn’t even that much exciting research happening along those lines. And because there’s not much research progress, it’s very hard to predict.

Unlocking the Power of AI in Various Fields Summary: AI has achieved superhuman capabilities in various areas such as translating languages, predicting amino acids, and forecasting weather. Advances in language models have taken AI to a new level, where it may continue to be called AI. The ability of language models to predict the next token is a remarkably powerful concept.
Speaker 1
And AI is already superhuman in translating a hundred languages. AI is already superhuman in predicting the next amino acid in a large language model of proteins. That’s an incredibly powerful tool; it was one of the really exciting papers we published in 2018, and it’s also research that multiple companies have now taken up and are running with, including in medicine. AI is already better at predicting the weather than anyone. So AI already has many superhuman skills. What I think is interesting is that now that it’s language that’s gotten to this new level, people might actually, for the first time, keep calling it AI. In the past, when AI researchers made progress, people stopped calling it AI after it was achieved. Now it’s just chess, it’s just voice recognition. But voice recognition and chess playing were once the pinnacle of AI research, right? And people thought, oh, once we solve those, the other things will be easier too, and it was never quite the case. And once we had them, it wasn’t quite the AI we wanted. Now, with language, I think we might keep calling it AI. But what a language model does is predict the next token. And that is an incredibly powerful idea, right?

Realistic Optimism Towards AI Summary: The notion of AI developing magical or unrealistic abilities, like consciousness, manipulating human behavior at will, or becoming a conscious entity, is not supported by research. The idea that intelligence alone lets one control people is belied by the world’s politicians. AI is viewed optimistically despite current issues such as bias, as it is believed to enhance foundational sciences like physics, chemistry, and biology.
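The “predict the next token” objective mentioned above can be illustrated with a toy character-level bigram model: count which character most often follows each character in some training text, then predict accordingly. This is only a sketch of the idea, not how real systems work; actual language models are neural networks trained on enormous corpora, but the training target, predicting the next token, is the same.

```python
# Toy illustration of next-token prediction: a character-level bigram model.
# It counts successor frequencies and predicts the most common follower.
from collections import Counter, defaultdict

def train_bigram(text: str) -> dict[str, Counter]:
    """Count, for each character, how often each other character follows it."""
    counts: dict[str, Counter] = defaultdict(Counter)
    for prev, nxt in zip(text, text[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(model: dict[str, Counter], prev: str) -> str:
    """Return the most frequent successor of `prev` seen in training."""
    if prev not in model:
        return ""  # never seen this context
    return model[prev].most_common(1)[0][0]
```

For example, after training on `"ababab"`, the model predicts `"b"` after `"a"` and `"a"` after `"b"`. Scaling that same objective from character pairs to long contexts over the whole web is, loosely, what makes the idea so powerful.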
Speaker 1
It’s like, oh, it’s going to develop this magical gray goo, or a magical new virus that is perfect at spreading but only activates after a year to kill everyone. All these random scenarios are just not feasible, and the science isn’t great. I’m actually, right now, sort of on the side, for fun, writing a book about AI for science. I think AI will do incredible things in improving science, like foundational physics, chemistry, biology, and so on. And all this fear mongering, I think, is not really helpful. Again, there’s no research that suggests AI is becoming conscious. There are a couple of papers here and there of people kind of playing around, but nothing interesting has been published, and no breakthroughs have happened whatsoever in AI having any sense of self. And then in a lot of the other sci-fi scenarios, people are saying, oh, AI is so intelligent, it’ll convince everyone to murder each other, or to kill themselves, and so on. But if the most intelligent entities always ruled, I don’t think we would have the politicians we see everywhere in the world. It’s not always the most intelligent people that run the show, and that’s because incredible intelligence doesn’t let you convince every less intelligent person to do exactly what you want. It’s just not based in reality. So I am very, very optimistic about AI. I think there are some real problems right now; AI will pick up biases on the web, including biases that most of humanity has moved past.

Being Mindful of Threat Vectors and Cultural Differences in AI Concerns Summary: Acknowledging the three threat vectors of intentional misuse, accidental misuse, and loss of control in AI is crucial. People must be cautious about trusting content on the internet, especially with the evolution of AI and technologies like Photoshop. Different cultures will have varying responses to AI concerns based on their values, such as differences in freedom of speech legislation. While there are significant concerns related to AI, the speaker does not foresee a new existential risk scenario for humanity.
Speaker 1
So where I agree with Russell, Bengio, and others is on the three threat vectors: intentional misuse, accidental misuse, and loss of control. Obviously, intentional misuse is real, and that’s not ideal. So yes, those are real concerns, and I think we should work on understanding those threat vectors and finding the best ways to combat them. I think people on the internet still need to understand not to trust everything they see, which has been true ever since the internet came about and hasn’t really changed that much with AI. Since Photoshop, people should already not trust any photo they see; they should be even more careful now about photos they see. And sadly, in the future, they’ll have to start worrying about videos and voice as well, just as they should have worried about photos ever since Photoshop started to really work. So there are a lot of concerns, and I don’t want to diminish them, and I do think we need to work on them. And I think different cultures will have different answers. Freedom of speech is defined differently in different countries: it’s illegal in Germany to deny the Holocaust, whereas that’s not illegal in the US. So different countries, cultures, and societies will answer differently some of the questions that AI can amplify but that existed before. But I don’t see any probability of a full-on new scenario of existential risk to people.

Questioning the Trust in Media and the Need for a Positive Vision Summary: The advancement of technology has eroded trust in media, starting from the era of Photoshop. With the evolution of technology, people need to be increasingly cautious not only about photos but also videos and voice recordings. The concerns raised by these advancements are valid, and the cultural and societal responses to these challenges will vary. While different countries may have varying approaches to freedom of speech and the regulations surrounding it, the core issue lies in the misuse of powerful tools by individuals against each other. Despite the potential risks posed by AI, there is no significant probability of an entirely new existential threat. It is crucial to focus on creating a positive vision for the future, as envisioning a desirable future scenario is becoming a rare commodity in today’s world.
Speaker 1
It’s mostly people using more and more powerful tools against other people.
Speaker 2
So there are so many different threads there that I’m interested in. For one thing, I applaud you for taking time to envision a positive future. I think one of the scarce resources today, oddly, is a positive vision for the future. What do we want? The Jetsons is still almost state of the art in terms of what we would envision a great 2030s to be like.

Investing in Threat Mitigation versus Public Perception Summary: The speaker emphasizes the importance of balancing resources between investing in addressing existential threats and public perception. They suggest inspiring researchers with sci-fi scenarios to think about preventing catastrophic events. The speaker highlights how society historically feared various technologies, from novels to the internet, and expresses the need to address real concerns amidst pervasive pessimism.
Speaker 1
It’s just a question of how many resources we should spend on existential doom versus everything else. I’d say, yeah, have a couple of researchers thinking of cool sci-fi scenarios to inspire us, like maybe think about ways those could be prevented. But don’t spend billions of dollars on it, or spend a lot of mind share scaring the public, which is already scared of any kind of technology. I mean, people are scared of Wi-Fi. There’s this great Twitter handle called the Pessimist Archive. People were scared and thought doom was coming because of novels back in the day: all these kids, they’re just in their heads reading novels, they’re going to all be useless human beings in the future. Newspapers were terrible. The internet was terrible. There are so many things that people thought were the end of civilization, and they were very pessimistic. And again, I’m not diminishing real concerns, but the existential one is very, very unlikely given what we’re seeing right now.
