‘Doing’ gets a lot of attention. In the passive corner, you have chat-style assistants: you ask, they answer. In the proactive corner, you have agents, the holy grail of personal AI today (or enterprise AI; any AI, really). Book your flight. Order your groceries. Schedule your meetings. The space is crowded for good reasons: these systems are closer to tangible outcomes, easier to measure, and more visible. But getting them to stable, high performance is no joke.
This is where the system takes the relationship-sabotage pattern and combines it with others to build a deeper model of why you do it. The result is something approximating a theory of you, grounded in years of data, that enables predictive simulation: given a new situation, how might you respond, based on your established operating principles?
The distinction between simulation and self-understanding matters for personal AI: do people want a predictive model of their behavior, or do they want to understand why they behave the way they do? Simulation may be easier to pitch, but the deeper hunger is probably for understanding, and those are different products.

Think about the context needed to answer “Why did my ex and I end up hating each other in our relationship, and why did it take us so long to break up?” The LLM first needs to know who your ex is, which, without abstracted knowledge, would take quite a few resources to figure out. Then it needs the rough timeline of the relationship, its nature, the key events, the emotional states at various points, and so on, from which it can establish causality and hypothesize why the relationship developed the way it did. The LLM should know what it needs, what’s available, and where to look. The ideal result? Every question, yours or the system’s, pulls only the most relevant tokens, saving compute and sharpening output.
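A minimal sketch of what that staged lookup could look like, assuming abstracted summaries, alias resolution, and a condensed event timeline already exist. Every name here (KnowledgeStore, build_context, “Alex”) is illustrative, not a description of a real implementation:

```python
from dataclasses import dataclass, field


@dataclass
class KnowledgeStore:
    """Illustrative store: abstracted summaries, alias resolution, and key events."""
    abstracts: dict[str, str] = field(default_factory=dict)                  # entity -> summary
    aliases: dict[str, str] = field(default_factory=dict)                    # "my ex" -> entity
    events: dict[str, list[tuple[str, str]]] = field(default_factory=dict)   # entity -> [(date, event)]

    def resolve(self, mention: str) -> str | None:
        # Cheap entity resolution via precomputed aliases instead of scanning raw data.
        return self.aliases.get(mention.lower())


def build_context(store: KnowledgeStore, question: str, mention: str) -> str:
    """Assemble only the tokens the question needs, from abstract to concrete."""
    entity = store.resolve(mention)
    if entity is None:
        return f"No abstracted knowledge about '{mention}'; fall back to a broader search."
    parts = [f"Question: {question}", f"Summary of {entity}: {store.abstracts[entity]}"]
    parts += [f"{date}: {event}" for date, event in store.events.get(entity, [])]
    return "\n".join(parts)  # this compact context is what gets handed to the LLM


store = KnowledgeStore(
    abstracts={"Alex": "Partner from 2019 to 2023; the relationship grew strained after 2021."},
    aliases={"my ex": "Alex"},
    events={"Alex": [("2021-06", "Argument about moving cities"), ("2023-02", "Breakup")]},
)
print(build_context(store, "Why did my ex and I end up hating each other?", "my ex"))
```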
That selection has to be generalizable and decided by the LLM. What we can do is provide the right tooling and architecture so it can make those decisions intelligently.
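One way to hand that decision to the model is to expose retrieval as tools it can call. The tool names and JSON-schema-style parameter specs below are assumptions for illustration; the point is the shape of the surface, not a specific API:

```python
# Hypothetical tool surface exposed to the model so it decides what to look up,
# rather than us hard-coding retrieval. Names and schemas are placeholders.
TOOLS = [
    {
        "name": "search_abstracts",
        "description": "Search abstracted knowledge (people, relationships, recurring patterns).",
        "parameters": {"type": "object",
                       "properties": {"query": {"type": "string"}},
                       "required": ["query"]},
    },
    {
        "name": "fetch_timeline",
        "description": "Fetch dated key events for a resolved entity.",
        "parameters": {"type": "object",
                       "properties": {"entity_id": {"type": "string"},
                                      "start": {"type": "string", "format": "date"},
                                      "end": {"type": "string", "format": "date"}},
                       "required": ["entity_id"]},
    },
    {
        "name": "fetch_raw_sources",
        "description": "Pull raw excerpts (messages, notes) only when abstracts are insufficient.",
        "parameters": {"type": "object",
                       "properties": {"entity_id": {"type": "string"},
                                      "topic": {"type": "string"}},
                       "required": ["entity_id", "topic"]},
    },
]


def dispatch(name: str, args: dict) -> str:
    """Route a tool call from the model to the backing store (stubbed here)."""
    handlers = {
        "search_abstracts": lambda a: f"abstracts matching {a['query']!r}",
        "fetch_timeline": lambda a: f"events for {a['entity_id']}",
        "fetch_raw_sources": lambda a: f"raw excerpts for {a['entity_id']} on {a['topic']}",
    }
    return handlers[name](args)
```

The ordering of the tools mirrors the staged lookup above: abstracts first, timelines next, raw sources only as a last resort.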
Or don’t ask, and let it automatically scan your data to surface questions like “What happened with Mark”, “Career evolution 2024”, or “Forgotten goals check”. These are things you would not think to ask but actually care about; they counteract human negligence and forgetfulness.
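A rough sketch of what such a proactive scan might look like as a scheduled job; the rules and thresholds are invented placeholders:

```python
# Illustrative proactive scan: a periodic job looks over dormant goals and
# people who stopped appearing, and emits prompts the user never asked for.
from datetime import date, timedelta


def surface_prompts(goals: dict[str, date], recent_mentions: dict[str, int], today: date) -> list[str]:
    prompts = []
    # Goals untouched for roughly six months become a "forgotten goals check".
    for goal, last_touched in goals.items():
        if today - last_touched > timedelta(days=180):
            prompts.append(f"Forgotten goals check: you haven't mentioned '{goal}' since {last_touched}.")
    # People who suddenly stop appearing become a "what happened with X" prompt.
    for person, count in recent_mentions.items():
        if count == 0:
            prompts.append(f"What happened with {person}? They stopped appearing in your notes.")
    return prompts


print(surface_prompts(
    goals={"learn piano": date(2024, 1, 10)},
    recent_mentions={"Mark": 0},
    today=date(2024, 9, 1),
))
```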
Proactive question-surfacing is valuable, but cross-source tagging could preempt the need for it entirely: if knowledge is tagged consistently across inputs at the point of capture, the system already has the structure to surface connections without needing to infer them later.

I’d say this is wrong. I hope. Pattern detection has real potential to help people break vicious cycles: “Your relationship with Emma becomes strained after late-night drinking, which has been happening more frequently.” Or find closure: “A year ago on this day, you made a promise to give yourself a year to move on. How has that promise aged?” But it can just as easily fuel self-fulfilling prophecies, unwittingly crystallizing negative behaviors and identities by defining them. It should be a top priority of any personal intelligence not to fabricate patterns from thin evidence. This tension runs through the entire system: we want it to surface insights you didn’t see yourself, which requires interpretive leaps. But those leaps need to be grounded in actual evidence, and two mentions in a group chat don’t count. We’re experimenting with things like confidence scoring to guard against this.
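For illustration, one possible shape of that confidence scoring, requiring both repeated occurrences and independent sources before a pattern can be asserted; the weights and threshold here are placeholders, not the actual implementation:

```python
# A pattern only graduates from hypothesis to statement once its evidence
# spans enough distinct occurrences and independent sources.
from dataclasses import dataclass


@dataclass
class Evidence:
    source: str      # e.g. "journal", "group_chat", "calendar"
    timestamp: str   # ISO date of the supporting observation


def pattern_confidence(evidence: list[Evidence]) -> float:
    """Score in [0, 1]; more occurrences and more independent sources raise it."""
    if not evidence:
        return 0.0
    occurrences = len(evidence)
    sources = len({e.source for e in evidence})
    score = min(occurrences / 5, 1.0) * 0.6 + min(sources / 3, 1.0) * 0.4
    return round(score, 2)


ASSERTION_THRESHOLD = 0.7  # below this, the pattern stays a hypothesis, never a claim

thin = [Evidence("group_chat", "2024-05-01"), Evidence("group_chat", "2024-05-03")]
print(pattern_confidence(thin))  # two mentions in one chat score ~0.37: not asserted
```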
The vicious-cycle risk in pattern detection makes upstream knowledge curation critical. If the inputs are poorly curated, the system will fabricate patterns from noise; the quality of what gets captured determines whether pattern detection clarifies or distorts.
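A sketch of what tagging at the point of capture could look like, assuming a shared tag vocabulary across sources; the vocabulary and keyword matching are deliberately simplistic placeholders:

```python
# Tag items consistently at ingestion so later pattern detection works over
# curated structure instead of raw noise across sources.
TAG_VOCABULARY = {
    "emma": "person/emma",
    "drinking": "habit/late-night-drinking",
    "promotion": "career/progression",
}


def capture(source: str, text: str) -> dict:
    """Store an item with consistent tags, whatever source it came from."""
    tags = sorted({tag for keyword, tag in TAG_VOCABULARY.items() if keyword in text.lower()})
    return {"source": source, "text": text, "tags": tags}


print(capture("group_chat", "Late night drinking with Emma again"))
print(capture("journal", "Feeling rough, too much drinking this week"))
# Both items share habit/late-night-drinking, so a later query can connect them
# without re-inferring the link from raw text.
```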
