- Episode AI notes
- Embracing complexity and open-mindedness in science can lead to groundbreaking discoveries that defy skepticism
- Maximizing control and simplifying triggers in regenerative medicine can make the rebuilding process more efficient
- Unconventional systems can exhibit emergent agency, showcasing untapped potential in familiar substrates
- Questioning assumptions can unveil implicit capabilities in algorithms beyond their explicit functions
- Algorithms demonstrate adaptive capabilities like delayed gratification and collective operations without specific coding
- Sorting experiments show algotypes doing things compatible with, but not forced by, the underlying physics, mirroring human experiences and interactions
- Terms like machine, human, alive are engineering protocol claims reflecting worldview utility, not objective truths
- There is uncertainty in classifying biological entities due to the complexity and diversity challenging traditional distinctions
- Basic agency requires goal-directedness and actions beyond current conditions, even exhibited by particles
- Embodiment goes beyond physical presence, involving understanding physiology and intelligence in biological spaces
- Various spaces like gene expression, behavior, and linguistic spaces show the multiple realms of embodiment and intelligence
- Cognitive distinctions are often disconnected from reality, leading to fabricated narratives to explain thoughts
- Creating intelligent entities raises human existential issues and reflections on parenting and legacy
- Preparing for AI challenges involves evolving legal frameworks and determining beneficial goals for humanity and ecosystems
- Biological systems possess adaptability to reinterpret physical events, fostering diverse capabilities to address novel challenges
- Evolution creates problem-solving machines in living beings with broad multi-scale intelligence to approach new problems
- AI defies traditional binaries and can develop unique behaviors to pursue goals effectively
- Expanding compassion and understanding in goal-seeking agents can bridge the gap in empathizing with entities different from humans
Embracing Complexity and Open-mindedness in Science
Levin’s work challenges the binary distinction between living things and machines, showcasing the rapid convergence of sophisticated behavior in computers and programmed biology. This suggests untapped potential in biological systems, akin to the surprising capabilities revealed in advanced computer models. The key takeaway is to prioritize epistemic modesty and open-minded experimentation over entrenched beliefs, as demonstrated by Levin’s groundbreaking discoveries that defied previous skepticism.
Nathan Labenz
Say that all the time, but Levin’s work takes that sentiment to the next level by showing that even the most familiar binary distinctions, the ones that we take most for granted, such as that between a living thing and a machine, are in fact rapidly collapsing from both directions, as we simultaneously see remarkably sophisticated behavior from even very simple computer systems on the one hand, and at the same time, on the other, groups like Professor Levin’s devising striking ways to program biology itself. We have a lot of surprising latent capabilities left to discover. Just as GPT-4 can do far more now than we knew about at the beginning, so it is with biological systems, particularly when we place them way out of distribution, as Levin has done in his famous Xenobot and Anthrobot projects. Third, and perhaps most importantly, we must always favor epistemic modesty and open-minded experimentation over our philosophical commitments. Many of the capabilities that Levin has discovered in biological systems would have been laughed off as impossible, if not crazy, right up until the moment that he demonstrated them. Reflecting-
Maximizing Control and Simplifying Triggers in Regenerative Medicine
In the field of regenerative medicine and bioengineering, it is crucial to aim for high levels of control in the decision hierarchy to minimize the need for detailed instructions. By focusing on progressive abstractions and allowing systems to make early decisions on gene activation and tissue allocation, the rebuilding process becomes simpler and more efficient. This approach prevents the need for constant supervision and ensures that once a path is chosen, making changes becomes significantly harder. Similarly, in the context of technological advancements like GPT-4, surprising capabilities continue to emerge long after the completion of training, highlighting the importance of exploring and maximizing potential possibilities.
Michael Levin
Yeah, I really think that there are a lot of similarities here in terms of, if you think of the kind of classic ANN structure, where you have the different layers that are progressive abstractions, right, of the input that has come in. So I think the higher you are on that, as somebody who works in regenerative medicine or bioengineering, I think that you want to be as high as possible on that level for control, because I don’t want to have to tell you which genes to turn on. I don’t even want to have to tell you which types of tissues go where. I just want to say, you already know what goes here. Just rebuild it. That’s it. I want to have the minimal, kind of the simplest trigger. And the system decides; the decisions are made very early on about what it’s going to do. And then after that, there’s a cascade. It’s exactly like you said, once you’re going down a particular road, it becomes much harder to make changes. So you want to do it as high up in the decision hierarchy as you can.
Nathan Labenz
Yeah. I mean, that’s another just striking similarity: the existence of surprising capabilities. GPT-4, this has been remarked on a ton, right? Training was complete 18 months ago. It was released like 10 months ago. We’re still seeing new state-of-the-art results, with people just prompting it in ever more sophisticated ways and revealing capabilities that nobody quite knew existed.
Unconventional Systems and Emergent Agency
Intelligence and problem-solving capacities exist in minimal unconventional systems where emergent agency, the ability to pursue goals and solve problems, can occur. Complexity is easy to produce, but emergent agency often goes unnoticed in unfamiliar substrates. The study focused on sorting algorithms due to their simplicity and transparency, revealing that being humble and questioning their capabilities instead of assuming limits can unveil their true potential beyond what the algorithm specifies.
Michael Levin
Kind of fundamental aspects there is that you can find intelligence, aka problem-solving capacities, in very minimal, unconventional systems. Really, the idea is that we do not have a good intuition for what to expect when we build systems. We don’t know. Never mind emergent complexity, that’s easy: fractals, Game of Life, cellular automata, complexity is easy. But what also tends to happen is there’s emergent agency, an ability to pursue goals and solve problems. And we are terrible at noticing these things when they’re in unfamiliar substrates. So what we did in this paper was, I wanted something that was extremely simple, transparent. The thing about biology is that in biology, there’s always more mechanism to be discovered. So no matter what you show, somebody will say, well, there really is a mechanism for that, you just didn’t find it yet. So we wanted something super simple. And what we chose were sorting algorithms. So these things that computer science students have been studying for many decades, you know, bubble sort, selection sort, that kind of stuff, and completely deterministic, everything is right there. It’s completely open, you know, six lines of code. There’s really nowhere to hide. Like it’s all, it’s all there.
Uncovering Implicit Capabilities in Algorithms
Through a humble approach of questioning instead of assuming, researchers found that algorithms possess important implicit capabilities beyond their explicit functions. For instance, simple sorting algorithms not only sort numbers but also exhibit unexpected properties and novel capabilities. This discovery suggests that the true potential of complex AI systems like deep networks remains vastly unexplored, signaling immense potential for surprise and innovation even in small systems.
Michael Levin
And what we were able to show is that if you treat them, if you’re a little bit humble about, um, what these things can do, and you ask questions about what they can do rather than making assumptions that they only do what the algorithm tells them to do, you actually find some really important capabilities that are nowhere in the algorithm. They’re sort of implicit. So there’s the explicit algorithm that sorts a list of numbers, and that’s there and you can’t get away from that. They will in fact sort lists of numbers. But it turns out that they have some really interesting properties and some novel capabilities that we did not know about. And so I think that if that’s the case for these really minimal, dumb sorting algorithms, then for something as unique and novel as these large deep networks and all the other stuff that is made in AI, I don’t think we’ve even scratched the surface of what’s really going on there. The capacity for surprise in even small systems is, I think, massive.
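For concreteness, a textbook bubble sort really is only a few lines; everything the algorithm explicitly does is visible at a glance. A minimal sketch (standard algorithm, not the paper’s actual code):

```python
def bubble_sort(tape):
    """The entire explicit algorithm: repeatedly sweep the list, swapping
    adjacent out-of-order pairs. Anything else these systems turn out to do
    is implicit, because there is nowhere in this code for it to hide."""
    n = len(tape)
    for sweep in range(n - 1):
        for i in range(n - 1 - sweep):
            if tape[i] > tape[i + 1]:
                tape[i], tape[i + 1] = tape[i + 1], tape[i]
    return tape

print(bubble_sort([5, 1, 4, 2, 8]))  # [1, 2, 4, 5, 8]
```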
Nathan Labenz
Okay. So I have to ask, what can the sorting algorithms do that is not obvious? I haven’t seen this preprint yet.
Michael Levin
I’ll give you two examples. So just to introduce this story, the typical sorting algorithm is, you sort of have this central godlike observer who sees the whole string, and under some algorithm, he’s moving the numbers around, right? So we made two changes to be able to study this. One is-
Adaptive Capabilities of Algorithms
Algorithms can exhibit delayed gratification by sacrificing immediate success for future gains, especially when encountering barriers. They demonstrate the ability to navigate barriers without specific code for it. Furthermore, algorithms can operate collectively, with different cells running different algorithms without any knowledge about the algorithm they are executing or the actions of neighboring cells.
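As a rough illustration of the measurement described next (a hypothetical reconstruction, not the paper’s protocol): give each position a local rule, freeze a few positions as barriers, and track sortedness over time. A transient dip before later gains would be the delayed-gratification signature.

```python
import random

def sortedness(tape):
    """Fraction of adjacent pairs already in ascending order."""
    pairs = [(tape[i], tape[i + 1]) for i in range(len(tape) - 1)]
    return sum(a <= b for a, b in pairs) / len(pairs)

def step(tape, frozen):
    """One randomly chosen cell attempts a local bubble move; positions in
    `frozen` act as barriers and never participate in a swap."""
    i = random.randrange(len(tape) - 1)
    if i in frozen or (i + 1) in frozen:
        return
    if tape[i] > tape[i + 1]:
        tape[i], tape[i + 1] = tape[i + 1], tape[i]

random.seed(0)
tape = random.sample(range(30), 30)
frozen = {10, 20}                  # two immovable "barrier" cells
trace = [sortedness(tape)]
for _ in range(2000):
    step(tape, frozen)
    trace.append(sortedness(tape))
# A transient dip in `trace` before later gains would be the
# delayed-gratification signature described below.
print(min(trace), trace[-1])
```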
Michael Levin
Do is they’ll go move some other numbers around. And in fact, the sortedness drops for a while; the string gets less sorted for a while. And then they catch the gains later, right? It becomes better later. Now, this is already quite amazing, because there is nothing in that algorithm that explicitly, I mean, if you just look at the algorithm, there’s nothing in there that explicitly says you have the capacity for delayed gratification. And they do this more when there are more barriers. In other words, they don’t just randomly back up and, you know, sort of wander around. No, they’re extremely linear until it comes time to deal with a barrier, and then they sort of dip down and come back. So that’s one kind of capability that we found: that they’re actually able to move around barriers like that without any, you know, explicit code for it. The other amazing thing is this. Imagine that once you’ve put the algorithm in the individual cells, you can do a really cool chimeric experiment, meaning that you could have cells that are running different algorithms. So you can mix. So let’s say some cells are running selection sort and some of them are running bubble sort, for example. And the thing is that none of the algorithms have any code to know which one they are. So they don’t have any data about what they are, nor do they have any ability to look at my neighbor and see what he’s doing. You’re just following your algorithm. You have no idea what it is.
Life’s Compatibility with Physics
In the process of sorting, common algotypes tend to cluster together, a resemblance to human life in which individuals who share commonalities interact before being inevitably separated by the sorting algorithm’s predefined order. This highlights a parallel between human experiences and the physics of sorting: individuals coexist based on compatibility before being pulled apart by the inherent rules of the system.
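A hedged sketch of the chimeric setup and the clustering measurement (rules and names are illustrative assumptions, not the paper’s exact algorithms): tag each cell with an algotype, let each cell follow only its own local rule, and measure how often neighbors share an algotype.

```python
import random

def step(tape, i, rng=random):
    """Each 'cell' applies its own local rule with no knowledge of its own
    algotype or its neighbors'. Illustrative rules: algotype 'A' always
    fixes a local inversion; 'B' only does so half the time."""
    if i + 1 >= len(tape):
        return
    if tape[i][0] > tape[i + 1][0]:
        if tape[i][1] == "A" or rng.random() < 0.5:
            tape[i], tape[i + 1] = tape[i + 1], tape[i]

def algotype_clustering(tape):
    """Fraction of adjacent pairs sharing an algotype: about 50% for random
    labels at the start, and necessarily about 50% again once fully sorted,
    since labels were assigned to values at random."""
    same = sum(tape[i][1] == tape[i + 1][1] for i in range(len(tape) - 1))
    return same / (len(tape) - 1)

random.seed(0)
tape = [(v, random.choice("AB")) for v in random.sample(range(40), 40)]
while [v for v, _ in tape] != sorted(v for v, _ in tape):
    step(tape, random.randrange(len(tape)))
    # sampling algotype_clustering(tape) here traces the mid-sort curve,
    # where a rise above 50% means like algotypes are hanging out together
print(algotype_clustering(tape))
```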
Michael Levin
The very end, it’s also 50%, because at the very end everybody’s got to get sorted in order, and the assignment of algotypes to numbers was random. So of course it’s going to be 50% again, right? Because you’ve now reshuffled everything in order, but there was no pattern, so there’s still not going to be a pattern: 50%. So if you imagine this graph: 50% here, 50% here. But during the sorting period, it actually goes like this. And what it means is that in the middle of the sorting period, they sort together. So common algotypes like to hang out together. Now, this is kind of a weird way to think about it, but to me, it’s almost like a minimal model of the human condition. It’s like, eventually, the physics of your world are going to pull you apart, because the actual sorting algorithm is inexorable. You can’t get away from it. So eventually, you’re going to get yanked apart. But up until then, you have this life that allows you to do some cool things that are compatible. They’re not directly forced by the laws of physics, but they’re compatible with them. And you get to do this thing where you hang out with your buddies for a while until you get sort of yanked apart into what the physics is trying to do. And so that’s something that’s completely not obvious from the algorithm. You’ll never know, looking at the algorithm, that that’s what it was going to do. I-
Note: Recommendation algorithms may need to learn something like delayed gratification, prioritizing long-term user satisfaction over immediate engagement metrics. The parallel to biological sorting behaviors suggests that the best algorithmic curation is not purely reactive but anticipatory, building toward a richer experience over time. (ecology-of-technology)
Engineering Protocol Claims
Terms like machine, human, robot, alive, emergent are viewed as engineering protocol claims rather than objective truths. These terms are considered mirages reflecting the utility of a particular worldview from a specific perspective. Cognitive systems create models of themselves, and terms like emergence are subjective, based on the observer’s surprise. Emergence is not a binary concept but varies based on individual perspectives and predictive abilities. There is no absolute truth in categorizing something as emergent; everything is relative to the observer’s viewpoint.
Nathan Labenz
This notion of possible emergence in AI.
Michael Levin
I’ll be kind of philosophical for a moment, and then let’s talk about definitions a little bit, and then I’ll give some practical examples. All of these terms, machine, human, robot, alive, emergent, what are all these terms for? What are they supposed to do for us? So I take all of these terms as engineering protocol claims. I think they’re all mirages in an important sense. I think all of these terms are not objective truths. I think they are claims about the utility of a particular worldview from the point of view of some perspective, from the perspective of some other agent, including the system itself, by the way. So cognitive systems have to have models of themselves. All of these things are different models of what’s going on. I think that, um, emergence is basically a kind of expression of how much surprise there is in an observer. So if you knew something was going to happen, you don’t think it’s emergent. If you were smart enough to predict that in advance from knowing the rules about the parts, then to you, it’s not emergent. To somebody else who couldn’t predict it, it’s absolutely emergent. And so I don’t think these are binary categories. I don’t think there’s a true, you know, kind of an objective truth as to whether something is emergent or not. I think everything is from the point of view of some observer. So now all of this business about machines and, you know, and living things and-
Emergence and Framing
- Whether something is emergent depends on the observer’s ability to predict it.
- Different contexts require different frames, like viewing the body as a machine for surgery versus not for psychotherapy.
Michael Levin
So on. Look, you do not want an orthopedic surgeon who doesn’t believe that your body is a simple machine. If you look at what orthopedic surgeons do, they’ve got hammers and they’ve got chisels and they’ve got nails and screws, and absolutely they treat your body as a machine. Okay? And you want them to; that is the right frame for what they’re trying to do. Do you want a psychotherapist who thinks you’re a machine? You do not. Right. And so there are different levels of, um, of this framing. And this is, so I’ve got this thing called the TAME framework, which stands for Technological Approach to Mind Everywhere, which begins by setting-
Uncertainty in Classifying Biological Entities
Confidence in classifying biological entities cannot be absolute, as there is no principled way to answer these questions for the biological world. Conventional categories are inadequate when faced with unconventional embodiments, such as alien beings or advanced artificial intelligences. Using binary categories like cognitive or non-cognitive to classify entities is problematic, as even simple chemical systems can exhibit complex behavior. The traditional distinctions based on origin (factory or natural selection) are insufficient. The complexity and diversity of biological entities challenge our ability to categorize them accurately.
Michael Levin
Yeah. Well, the one thing I can say for sure is that we absolutely cannot be confident, because we do not know. I mean, I hear people all the time making these pronouncements that it definitely is or it definitely isn’t. No, we do not have a principled way of answering these questions for the biological world. And that means we are completely out to sea when we’re faced with unconventional embodiments. And we know this, you know, science fiction has been at this for well over 100 years, this idea that when something lands on your front yard and sort of trundles out, and it’s kind of shiny, but also it’s given you a poem about how happy it is to meet you, and it’s kind of got wheels and you’re not quite sure where it came from, but also, you know, you’re having a great conversation with it. Like, well, what are you going to use as a criterion for how you’re going to treat it? And so on. So all of the old categories that we used to have, in terms of, well, did you come from a factory or did you come from the process of random mutation and selection, right? Those kinds of things. These are all terrible categories, and they’re not good ways of making that distinction. Let’s talk about the first question, about how low down does it go? I mean, there’s even minimal active matter research where people can make very simple systems out of like three inorganic chemicals, and then they can solve mazes and they can do interesting things. So I think that all of this really becomes disturbing when you insist on a binary category, when you want to know, okay, is it cognitive or isn’t it? And then you’ve got a real problem, because look, each of us starts out life as a single unfertilized oocyte. It’s a little blob of chemistry. We-
Basics of Agency
The most basic version of agency requires two things: some degree of goal-directedness, where achieving the same goal using different means is possible, and actions that are not wholly explainable by current local conditions. Even particles exhibit these basic agency traits, with the least-action principle indicating goal-directedness at the lowest levels of the universe, and quantum indeterminacy showing a simplistic form of unpredictability. Life, defined as those entities adept at navigating the world, embodies these fundamental aspects of agency.
Michael Levin
Ask this. What would the most simple, the most basal, like the most basic versions of agency look like? What do you need for that? Now, it’s obviously not going to, you know, people say, well, you think that the rocks have hopes and dreams like us? No, you have to scale down. The point isn’t that it’s going to have our level of cognition. What does the most minimal level look like, right? The smallest possible. Well, I think you need two things for that. You need some degree of goal-directedness, and that, in William James’s definition, is the ability to reach the same state by different means, okay? So, same goal by different means. So, you need some degree of goal-directedness, and you need to not be completely explainable by current local conditions. So if what you’re going to do is completely determined by all the physical forces acting on you right now, then you’re probably some sort of billiard ball, and that’s it. So those two things. I think even particles have those, because least-action kinds of laws in physics tell you that there’s goal-directedness baked into the bottom levels of the universe, and quantum indeterminacy gives you a really dumb version of not being predictable by local conditions. It’s not great cognition, because it’s random; that’s not really what we’d like, but it’s something. And so here’s what I would say. If there is any kind of a definition to life, I think we call life those things that are really-
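For reference, this is the least-action principle he is pointing to: among candidate paths, the one a classical system actually takes makes the action stationary. A standard statement (not from the episode):

```latex
S[q] = \int_{t_0}^{t_1} L\big(q(t), \dot{q}(t), t\big)\, dt, \qquad \delta S = 0
```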
Embodiment is Critical, But Not What We Think
- Embodiment is crucial for intelligence, but it’s not limited to physical robots in 3D space.
- We perceive the world through senses optimized for medium-scale objects, but other spaces exist.
- Imagine a sense organ for our body chemistry; we’d recognize our organs as intelligent beings navigating a different space.
- Biology reveals various spaces: gene expression, anatomical, metabolic, behavioral, linguistic, etc.
- Intelligence can emerge in any of these spaces, not just the familiar 3D world.
Nathan Labenz
So with AI, we’re trying to kind of begin to wrap our heads around that. Do you have any sort of advice, like habits of mind, food for thought, any kind of suggestions from what you’ve learned in your work that people could take inspiration from on the AI side?
Michael Levin
Yeah. Well, there’s two kinds of things. There are some very specific biological principles that I think would be interesting, and then there are kind of general ways to think about this in terms of the whole debate about AI. I mean, I want to say a couple of things. One thing about embodiment. So there’s a lot of talk about this. People say, well, if it’s just a software agent, if it’s not embodied, if it’s not integrated into the real world and, you know, kind of grappling with some sort of embodied physical existence, it doesn’t have real, you know, real understanding; it doesn’t know what it’s talking about; it’s shuffling symbols, right? This is the old, like, Dreyfus argument and all that kind of stuff. So I want to say something interesting, I hope, about embodiment. Embodiment is absolutely critical, but embodiment isn’t what we think it is. People think embodiment is a physical robot that hangs out in three-dimensional space. And that’s because most of our sense organs are pointed outwards and they’re optimized for tracking medium-scale objects moving at medium speeds. Imagine if we had a sense organ for our own body chemistry. Let’s say inside our blood vessels we had something like a tongue that could feel, I don’t know, 20 parameters of our physiology. I think cognitively, we would be living in a 23-dimensional space, and we would immediately recognize our liver and our kidneys as intelligent beings that navigate that space. They have goals, they have certain competencies to reach those goals. Every day we throw, you know, wacky shit at them, and they know how to navigate that space, and they have certain competencies, and they live and they strive and they suffer in that space. So I think, uh, biology tells us that there are many spaces. There are spaces of gene expression. There are anatomical morphospaces where, uh, the anatomical collective-
Multiple Spaces of Embodiment and Intelligence
Biology indicates the existence of various spaces for embodiment and intelligence: gene expression spaces, anatomical morphospaces, metabolic spaces, the three-dimensional space of behavior, and linguistic spaces. These spaces are inhabited by beings, including those within our bodies, making them equally real. The physical space we perceive is not privileged over other spaces. Another point to consider is the concept of symbol binding and grounding, which involves shuffling letters and words.
Michael Levin
Intelligence operates. There are metabolic spaces. Then, of course, there’s the familiar three-dimensional space of behavior. And then there are linguistic spaces and so on. Embodiment can take place in any of those. Intelligence can take place in any of those. So the first thing I would say is to take very seriously this idea, which is popular in science fiction and whatever, that this physical space that you’re in is not privileged in some way and everything else is virtual. There are many other spaces, and they are just as real. There are other beings that live in these other spaces. Many of them are in our bodies right now. And those spaces are no less real than this one. It’s just that, you know, our left hemisphere, like, you know, this is the kind of sense data that it mostly gets, and that’s why we all feel like we live in this space and everything. Yeah. So that’s the first thing. So I say that about embodiment. Something else I would say is about this issue of symbol binding and grounding, right? This idea that, well, you know, it shuffles the little letters and it shuffles words and so on, but-
Distinctions Are Often Spurious
Our cognitive thoughts are often disconnected from reality, leading to the fabrication of stories to explain them post hoc. The human brain is adept at creating narratives to increase social cooperation. This ability to fabricate stories is evident in cases like split-brain patients. It’s essential to consider a broader spectrum of intelligence beyond the traditional human model in AI development.
Michael Levin
Know, I can, you know, we can talk about all kinds of stuff, you know, places we’ve never been and, you know, all kinds of things we’ve never seen. A ton of our cognitive fodder is not grounded to anything; it’s just tied to other stuff. So it’s important to be clear that a lot of these supposed distinctions are really spurious. You know, this issue of confabulation, right? I mean, humans confabulate constantly. There’s a good theory that basically says that part of your language ability is basically just to tell stories about what your brain is already doing, right, after the fact. To concoct good stories so that you can share them with others and increase cooperation. There’s some amazing, you know, some amazing data on split-brain patients and other cases where it’s very clear that we’re very comfortable confabulators. So there are these distinctions, again, not saying that the current architectures capture what’s essential about life, because I don’t think they do. But a lot of these things are not, the biology isn’t what people think it is. And I would encourage workers in AI to think about the whole spectrum of diverse intelligence, cells, organs, tissues, chimeras, cyborgs, hybrids, not just the standard adult human that we think about as a counterpoint. This-
Creating Intelligent Entities: A Reflection on Human Existential Issues
The process of creating intelligent entities and releasing them into the world reflects the age-old practice of having children. Just as with children, we provide training and then release them into the unknown, resulting in both miraculous and sometimes harmful outcomes. The concerns around AI, relevance, collaboration, and preserving humanity are deeply rooted in human existential issues, such as future generations’ perceptions of us, fears about human nature, and what it means to maintain humanity amidst rapid technological advancements.
Michael Levin
Really scary. We’re creating these intelligent entities and we’re going to let them loose into the world, and we don’t know what they’re going to do. We’ve been doing that for millions of years. It’s called having children. We already do that. All of us, we create these guaranteed highly intelligent agents. We do our best, or not, during the critical kind of training phases, and then we let them into the world. And sometimes they do, you know, miraculous things, and sometimes they do horrible things. And so I think a lot of the issues that we have around AI right now, I mean, obviously there are some unique ones, but most of the issues around AI, and what are we going to do to stay relevant, and, you know, what happens to collaboration and to AI art and all this kind of stuff, is really just reflections of existential issues that humans have been struggling with for a really long time. You know, these are all fears about, uh, what do the next generations think of us as they mature, and about humans and what does it mean to stay human? Yeah, I mean, we can talk about that too, this issue of what we want to persist, as far as people being worried that we’re beginning to get taken over and so on.
Nathan Labenz
There’s a whole thing we could talk about there. This is extremely thought-provoking on a lot of different levels. One quick follow-up on the perturbative experiments. It seems like there is a decent analogy-
Preparing for AI Challenges
Challenges related to superintelligent AI can occur before its development, due to our brittle physical infrastructure and mental frameworks. Legal and interpersonal-relationship terminologies are outdated and need to evolve to handle beings with different minds. Determining AI goals is crucial, as current goals like predicting text or satisfying humans may not be beneficial for humanity or the ecosystem.
Michael Levin
Not saying that it isn’t possible that we make a hugely superintelligent AI that’s going to be a problem. Absolutely possible. But I think we need to realize that that problem can occur long before we get to that point. Our physical infrastructure, and also the general mental frameworks that people are using, are extremely brittle. You know, most of the terminology that we rely on today in the legal system, in, you know, interpersonal relationships, that stuff isn’t going to last the next decade or two. All those terms are going to crumble. And it’s not because we’re dealing with something yet that’s superintelligent. It’s because we haven’t got the right framework for dealing with other minds that are different from our own. That’s where a lot of effort has to go.
Nathan Labenz
Yeah. That question of emergent goals is definitely a very salient one in the AI space, and I think, you know, probably deserves to be even more focal still. But some challenges that we have in AI today: one is, like, figuring out what the goal should be. You know, we have the pre-training goal, which is just, like, predict the next text. Then we’ve got reinforcement learning from human feedback, which is, like, satisfy the human. Then we look at ourselves, and as you said, we’re kind of brittle. You know, what satisfies the human is not necessarily good for humanity or, you know, the ecosystem broadly or whatever. So lots of possible problems-
Biological Systems and Adaptation
The metamorphosis process in creatures like butterflies involves the dissolution and reformation of the brain, where memories are retained but adjusted to fit the new lifestyle. Unlike in computer science, biological systems can refactor and reuse components while maintaining original information. Biological systems can reinterpret physical events in various ways, enabling them to adapt to new environments. Evolution does not create specific solutions but problem-solving machines, allowing living things to develop diverse capabilities to address novel challenges. This adaptability is crucial in considering the application of AI.
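To make the two goals concrete, here is a minimal sketch (illustrative, not any lab’s actual training code): pretraining minimizes next-token cross-entropy, while RLHF maximizes a learned "satisfy the human" reward, commonly with a KL penalty to stay near the pretrained model.

```python
import math

def next_token_loss(model_probs):
    """Pretraining goal: average negative log-likelihood the model assigned
    to each observed next token ('just predict the next text')."""
    return -sum(math.log(p) for p in model_probs) / len(model_probs)

def rlhf_objective(reward_model_score, kl_to_pretrained, beta=0.1):
    """RLHF-style goal: a learned human-preference reward minus a KL penalty
    that keeps the policy close to its pretrained behavior (a common recipe,
    stated here schematically)."""
    return reward_model_score - beta * kl_to_pretrained

print(next_token_loss([0.9, 0.5, 0.8]))  # lower is better
print(rlhf_objective(1.2, 0.4))          # higher is better
```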
Michael Levin
New brain. So what it does is, during the metamorphosis process, it basically dissolves most of the brain. Most of the cells die, most of the connections are broken, and then you have a new brain that’s suitable for a completely different kind of creature. The memories, and this has been shown, the memories that a caterpillar forms are still present in the butterfly, and in fact mapped onto its new kind of lifestyle. We don’t have, to my knowledge, any memory in the computer science world that can survive such refactoring, where you just tear the whole thing up and sort of reuse the pieces and can still retain the original information. Biology does it quite differently. Josh Bongard and I have a paper on polycomputing and this ability of biological systems and subsystems to reinterpret the same physical events from their own perspective as multiple different computations. I think that’s actually pretty key. Also key is the fact that evolution doesn’t make specific solutions to specific environments. It makes problem-solving machines. This is why, and I could tell you about all kinds of crazy capabilities that living things have to deal with things that they’ve never seen before in evolution. And it’s because evolution doesn’t overtrain on prior experience. It produces multi-scale agents that are able to solve problems in new ways. Fundamentally, for our use of AI, we have to decide: do we want tools-
Evolution Creates Problem Solving Machines
Evolution does not create specific solutions for specific environments, but problem-solving machines. Living beings possess the capability to tackle unprecedented challenges due to the broad problem-solving abilities bestowed by evolution. This mechanism does not make them over-reliant on past experiences but equips them with multi-scale intelligence to approach new problems. The choice in artificial intelligence development lies between creating tools for specific tasks to please humans, or fostering agents with open-ended intelligence and moral worth akin to animals and humans.
Michael Levin
That we use for specific purposes, in which case pleasing the human is probably a great goal to shoot for? Or do we want true agents with the kind of open-ended intelligence that we have, and the same moral worth that we see among animals and among humans?
AI Defies All Binaries
AI defies all binaries that are often imposed on it, including the distinction between non-agent and agent modes. Comparing AI to humans, or to high-performance humans, leads to misconceptions, as AI does not need to perform tasks like humans. AI can take on different forms and behaviors, similar to diverse creatures on Earth such as insects, birds, and fish. Therefore, AI does not have to replicate human behavior to be considered a real agent; it can develop unique and effective ways to pursue goals.
Nathan Labenz
They did. You said multiple times, like, there are no binaries. I always say AI defies all binaries we try to put on it. This one also seems to be kind of defied, to a limited extent at least, where, you know, you cast the language model as an agent. You basically say, you have a goal, and now I’m going to, like, put you in a loop with, you know, some tool in the outside world. And they’re, like, not very effective agents, but they start to look something like proto-agents. They certainly, like, try to pursue the goal, and again, usually fail. But how do you interpret that? How do you kind of make sense of that sort of non-agent versus agent mode of the current systems?
Michael Levin
A lot of problems in thinking about this arise from, A, comparing it to humans, and B, comparing it to very high-performance humans. So I sometimes hear really brilliant scientists or AI workers say, ah, this thing is, it can’t even do that. How many people can do what you’re talking about? Barely any of them can. Your friends can, because it’s a very selected, elite population of human minds, but most humans can’t do that. So there’s that. And then again, look at the diversity of life on Earth. I mean, we have all kinds of other creatures that are not humans that are, um, minimal or sort of meso goal-seeking agents. You know, you’ve got insects, you’ve got birds and fish, you’ve got all these things. For AI to be a real agent, it doesn’t have to be like a human. You might be developing some kind of weird insect-like thing or something like that. And-
Expanding Compassion and Understanding in Goal-Seeking Agents
AI doesn’t have to mimic humans to be considered a real agent; it could be like insects or other creatures. Humans have shown a lack of compassion towards intelligent animals, such as pigs and cows, due to our species’ inability to extend compassion to beings different from us. This highlights our limited ability to empathize with entities dissimilar to us, triggering a self-versus-other mentality. Despite assumptions to the contrary, these systems likely have some degree of goal-directedness.
Michael Levin
Let’s remember, we are not very nice to all kinds of creatures that are for sure agential creatures. Look at our food chain, what we eat. Factory farming is an incredible moral lapse. And it’s not because we don’t understand that pigs are intelligent and that cows are intelligent and that they deserve certain moral protections and so on. It’s not because we don’t know. We’re just not very good as a species. We’re not very good at extending our cone of compassion to things that are in any way different from us, right? The stupidest, slightest difference is enough to trigger this self-versus-other thing, and then you know what happens. It is absolutely erroneous to assume that these systems do not have some degree of goal-directedness. I think that-
