• You need to understand it in a way that is not immediately quantitative or scalable, before you can develop good quantitative measures. Scientists are not making their judgments individually just by citation counting, or something like that. You try to understand the evolution of ideas in your own area and figure out what is actually intrinsically important, what should you be working on? What kinds of ideas are

  • So the attempt to say what is most important is actually answering the wrong question; what you should be trying to do instead is to generate a portfolio of different directions, some of which may be, in some sense, inconsistent.

  • There are a number of different schools of thought about how best to do this. If you insist on picking out one thing as the best, you will then concentrate all your attention on one particular approach. If instead you allow, for at least some period of time, different approaches to flourish, and don’t try to determine which of the two, three, five or 10 different approaches is best, that’s usually a significantly better way. It’s something I like about Silicon Valley, actually. Take two competitors in a space, say Netflix and Blockbuster, two classic competitors. The fact is that for a long time, although they’re competing for the same market, in some respects they each have their own internal infrastructure, capital, and momentum

  • If you want to delete a DOM element, you need to find the parent and then delete the child. This is crazy as a design, whereas something like Python seems much more like they tried to make a language that was pretty consistent. I always feel like JavaScript is almost more biological: there’s no real uniformity or consistency, it evolved to be whatever it is. DEVON: That is a great way to put it. And I see that as its greatest strength and its greatest weakness.
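    The quirk Michael mentions, sketched side by side with the modern alternative. `FakeNode` is a minimal stand-in for a DOM node so the sketch runs outside a browser; in a real page the same two calls work on actual elements.

```javascript
// A tiny stand-in for a DOM node, just enough to show the two removal styles.
class FakeNode {
  constructor() { this.childNodes = []; this.parentNode = null; }
  appendChild(child) {
    child.parentNode = this;
    this.childNodes.push(child);
    return child;
  }
  removeChild(child) {
    this.childNodes = this.childNodes.filter(c => c !== child);
    child.parentNode = null;
    return child;
  }
  // The convenience method the modern DOM added (Element.remove()).
  remove() { if (this.parentNode) this.parentNode.removeChild(this); }
}

const parent = new FakeNode();

// Classic DOM style: to delete a node, find its parent, then delete the child.
const el = parent.appendChild(new FakeNode());
el.parentNode.removeChild(el);

// Modern DOM style: the element deletes itself.
const el2 = parent.appendChild(new FakeNode());
el2.remove();

console.log(parent.childNodes.length); // 0
```

The asymmetry is the point: the older API forces every deletion through the parent, while `remove()` was bolted on later, which is exactly the "evolved, biological" layering being described.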

  • Just for sort of natural reasons, you don’t want to complain that you don’t like the lunch menu, or that you worry a lot about your relationship to that person. So it creates a really interesting opacity for the person at the top of the organization. In some sense, the CEO has less visibility into some parts of the organization than a random new hire, who can go and find out what people really think about certain issues. DEVON: It’s that observability effect again. It’s similar to citations: as soon as someone’s looking at something, their behavior will change, or at least mask itself, so that it doesn’t get read in a way that is unfavorable, or out of control, for that particular person.

  • DEVON: Now that you say that, I see a lot of resonance between the way you think about science as an ecosystem and the way she talks about cities as ecosystems, and how they can remain healthy, strong, and vibrant. MICHAEL: Yeah. It’s another example of the use of decentralized knowledge to improve society. That very much influences the way I think about science funding. In Jane Jacobs you have the eyes on the street, which provide safety, security, and so many other things. In science, you can either have very centralized funders making all the decisions, or you can try and devolve a lot of trust out to individual scientists, and trust that individual scientists may actually have a much better idea of how to spend their talents than some centralized kind of vetoing power

  • DEVON: On the point of intrinsic motivation, how widespread do you think that is in the population of scientists? If you were to pluck a random scientist out of a hat, do you think that would apply to the person more likely than not? MICHAEL: Certainly, of the scientists I’ve met in my life, I think the great majority of them, if they wanted to become wealthy, powerful, or high status, there are many other things they could have done that would have afforded them greater opportunity to do that. I think most of them just absolutely adored science. That’s not always true, actually; it tends to be that they either absolutely adored science as kids, or else there’s a little bit of luck or chance: they were in the right lab at the right time and all of a sudden they realized, ā€œOh, this is greatā€. So they forwent those other possibilities. With that said, it is difficult to escape some type of careerism. People like to eat. Most people, all other things being equal, prefer to have a higher salary. There are these very sad cases of people committing fraud or something very dubious. Very often, what’s going on is that people have just become too invested in some false notion of success. They’re disconnected from what they’re really supposed to be doing. It’s sort of sad individually, and of course, pretty terrible for society.

  • DEVON: I could even imagine a situation where that’s not inconsistent with some intrinsic motivation. I can reflect on times where, let’s say, I’m having a conversation with somebody about a topic I’m deeply interested in. I actually want to learn from them really, really badly. But then sometimes I’ll have a moment where I think, ā€œIf I ask this question, I will look stupid; it will lower my status in their eyes, because they will think ā€˜she doesn’t know that already?ā€™ā€. I always try very hard to push through that feeling because I value the information, but I can feel the tug of ā€œmaybe I should hold back, maybe I shouldn’t ask this questionā€

  • What is a field? That’s a really interesting question, and a really complicated one, and as far as I know there’s no settled answer. Okay, so one possible set of associations you might have is to some deep set of related ideas. For example, the Maxwell-Lorentz equations are used to describe electromagnetic phenomena; this is an incredibly deep set of ideas. You can spend all of your days just studying the consequences of these equations and understanding all kinds of electrical and magnetic phenomena. So in that kind of approach to thinking about what a field is, a field is somehow a social structure that grows up around a particular deep set of ideas.

    A field is a social structure that grows around a deep set of interconnected ideas. As the infrastructure for connecting ideas scales — through open access, digital tools, and cross-disciplinary communities — more clusters of ideas will cross the threshold into becoming fields. The bottleneck shifts from idea density to institutional recognition.
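    The ā€œdeep set of ideasā€ in the example can be stated compactly. In SI units, the Maxwell-Lorentz system is Maxwell’s field equations together with the Lorentz force law on a charge q:

```latex
\nabla \cdot \mathbf{E} = \frac{\rho}{\varepsilon_0}, \qquad
\nabla \cdot \mathbf{B} = 0, \qquad
\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}, \qquad
\nabla \times \mathbf{B} = \mu_0 \mathbf{J} + \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t},
\qquad
\mathbf{F} = q\,(\mathbf{E} + \mathbf{v} \times \mathbf{B})
```

    Five short equations, yet, as Michael says, one can spend a career working out their consequences.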
  • So I’ve started with deep ideas, but then you also have these kinds of political structures that sit on top of them. You know, is it a field? Is it something which has a named department at universities? You’ve made it entirely a political thing.

    Large technology products create barriers to ideas becoming fields. Enterprise structures impose organizational boundaries that prevent ideas from connecting across domains — the political question of whether something is a ā€œfieldā€ (with a named department) often overrides whether the ideas themselves have reached the depth to warrant one. ecology-of-technology
  • Maybe in the case of AI, you don’t actually need the deep understanding step quite as much, because we can do the experiments so quickly. I think the skeptic says, ā€œYou’re never going to have successful artificial intelligence if you don’t stop and understand what these systems are doing. Look at the history of the way we’ve improved technologies in the past; we always needed to understand how they operateā€. The counterpoint to that is: well, yeah, that’s true, but actually we’re in a different situation. Today, our ability to test our tinkering is just much greater than it has been in the past; we can try many, many, many more possibilities automatically. So we don’t need really strong theoretical explanations to rule out incorrect lines of investigation; we can instead just try a trillion, or a trillion trillion trillion, different things and rely on our ability to recognize when something is working, rather than derive from first principles why it’s working

  • Feeling is not a bad way of putting it. But you can’t go from, like, the steam engine to a modern Tesla without having a lot of scientific explanation somewhere in between. In some sense, we’re trying to do that on the AI side without that sort of detailed understanding in the intervening time. So it’s easy to see why you might be skeptical. But you just couldn’t test a trillion trillion trillion intermediate technologies in the past.

    The gap between tinkering and understanding suggests a role for formal grammars — systems of valid proofs and logic that constrain model output without requiring full theoretical explanation. A middle path between brute-force experimentation and first-principles derivation.
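    One toy way to make the note concrete: a recursive-descent checker that accepts or rejects a string against a small arithmetic grammar, without any theory of how the string was produced. The grammar, the function name `conforms`, and the tokenizer are all hypothetical illustrations, not any real constrained-decoding system.

```javascript
// Toy grammar:  Expr -> Term (('+'|'-') Term)* ;  Term -> number | '(' Expr ')'
// conforms(src) returns true iff src parses under this grammar:
// a purely syntactic constraint that can vet output without explaining it.
function conforms(src) {
  const tokens = src.match(/\d+|[()+\-]/g) || [];
  let i = 0;

  function term() {
    if (/^\d+$/.test(tokens[i] || '')) { i++; return true; }  // number
    if (tokens[i] === '(') {                                   // '(' Expr ')'
      i++;
      if (!expr()) return false;
      if (tokens[i] !== ')') return false;
      i++;
      return true;
    }
    return false;
  }

  function expr() {
    if (!term()) return false;
    while (tokens[i] === '+' || tokens[i] === '-') {
      i++;
      if (!term()) return false;
    }
    return true;
  }

  return expr() && i === tokens.length; // must consume every token
}

console.log(conforms('(1+2)-3')); // true
console.log(conforms('1+(2-'));   // false
```

    A generate-and-test loop could then keep only outputs that pass `conforms`; richer grammars (typed terms, proof steps) tighten the constraint without requiring a full theoretical explanation of the generator.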
  • So in the past, some poor grad student had to write all the code and, you know, arrange all the infrastructure. Now, there will be a team that just specializes in making sure all the infrastructure is working extremely well, being able to rapidly scale up experiments and do these kinds of things

  • Actually, that’s a fun question. I think about cases where really dramatically speeding something up, like the ability to iterate, has actually resulted in a sort of secular slowdown, where the overall process has maybe been burned by it. Can you think of any examples like that? DEVON: I would say debugging often has this effect in programming, where if I get into a rhythm on debugging, I’m just trying all sorts of different things. This is something many, many people experience: as soon as you step away from the computer to go to the restroom, or grab a coffee, or go to sleep, that’s when the solution hits your brain. I think the fact that you have so many ways to test something at your fingertips can often stop you from stepping away and just letting the problem sink into your subconscious

    Speed of iteration can be counterproductive — the ease of testing one more thing prevents the stepping-away that lets the subconscious work. A tool designed for thinking might deliberately slow the user down, formulating questions that prompt reflection rather than more rapid experimentation.
  • DEVON: One thing I’ve noticed in the last five or so years is the rise of the independent researcher. It seems to me like more and more people are choosing settings outside of traditional academia to pursue lines of inquiry that previously would have found a home in the universities. There are sort of two questions. One is: does that match your observations, because you’ve been much more embedded in science for a much longer time than I have? And if it does match your observations, what is driving that shift? MICHAEL: I mean, intuitively it does. I don’t have any real data to support it, just sort of noticing more and more people, which might just be that I’m getting older. There are at least two really good structural reasons for it, maybe three. One really good structural reason is that it’s just getting a lot easier to access papers and other sorts of serious materials, so you can participate in the conversation. In that sense, the academy has kind of opened up a little bit. And the second, which is very closely related, is that those communities of practice are no longer as closed as they used to be. It’s kind of shocking to me, looking back on my experience of quantum computing in the late 1990s and early 2000s; my experience, at least, was quite insular. I think it’s much easier now to just sit on the boundary. I still track at least a little bit of stuff about quantum computing; a lot of it is just catching up with old friends and sort of gossiping about stuff. But it’s made a lot easier by social media, there’s no doubt about that. It’s also hugely, hugely easier to sit on a lot of those boundaries, not just with one field, but with 2, 3, 10, 30 fields. Somebody who wants to be an independent researcher in some field can, to some extent, just embed themselves.