• Use Polymarket as an Analyst

  • Consider Polymarket and similar environments as training grounds for AI analyst alerts.

  • Broaden your scope of data consumption by going outside traditional pipelines to improve analysis.

    Rob Johnston
    I suspect that places like Polymarket and some of those environments are an interesting training ground for agentic AI, or an interesting training ground for an analyst alert. If you see the betting market changing for some reason, it might not be a bad idea to look at Polymarket. It might not be a bad idea to check that out as an analyst, right?
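    The alerting idea above can be sketched as a simple threshold monitor. This is a minimal sketch, assuming you already have implied probabilities from two points in time; the market names, numbers, and threshold are invented for illustration, not real Polymarket data or its API.

```python
# Sketch of an analyst alert on prediction-market moves.
# Market names and probabilities are illustrative, not live data.

def market_alerts(previous, current, threshold=0.10):
    """Flag markets whose implied probability moved more than `threshold`."""
    alerts = []
    for market, prob_now in current.items():
        prob_before = previous.get(market)
        if prob_before is None:
            continue  # new market, nothing to compare against
        move = prob_now - prob_before
        if abs(move) >= threshold:
            alerts.append((market, prob_before, prob_now, move))
    return alerts

previous = {"ceasefire-by-june": 0.32, "election-upset": 0.18}
current = {"ceasefire-by-june": 0.55, "election-upset": 0.19}

for market, before, now, move in market_alerts(previous, current):
    print(f"{market}: {before:.2f} -> {now:.2f} ({move:+.2f})")
```

    In practice the threshold and lookback window would need tuning per market; the point is only that a large move is a cheap, automatable cue for an analyst to take a closer look.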
  • Open Source Value

  • Open source utilization has changed positively, addressing previous doubts about its value compared to expensive technical collection.

  • The first public reporting about the bin Laden raid came from Twitter, marking a significant shift in the media landscape.

    Rob Johnston
    There has always been a problem, a difficulty doing that because of the environment itself. But I think that one thing has changed in the most positive way, and that’s the utilization of open source. Using open source was a tough task 20 years ago. People doubted its value when compared to expensive technical collection. They assumed that billions of dollars on a satellite would solve a problem that it could not solve, when in fact open source was clearly a better avenue into knowledge. One of the things that strikes me in that change over time, over 20 years, is that the very first public reporting about the bin Laden raid was Twitter. And it was people in Abbottabad who were witnessing this and tweeting live that this was happening.
  • Intelligence Community Talent Issue

  • Rob Johnston suggests the intelligence community has a talent issue due to security requirements and reluctance to engage with dissimilar people.

  • Cognitive diversity is crucial, so manage conflict and value different perspectives to work through problems effectively.

    Rob Johnston
    Tend to agree with the notion of talent. We have a talent issue, and the talent issue is a confluence of security and secrecy requirements, suitability for the community work, and our ability to clear people to come in to work in the community. And it’s also a reluctance to engage folks that seem dissimilar from us. This is Rob the anthropologist. That’s just sort of a normal human thing, right? The other is always almost scary at first. So you’ve got to get comfortable with that. But if you can set aside the language of whatever it is that makes you uncomfortable about diversity for a minute, the real thesis is that you really want different cognitive perspectives all operating together to try to work through a problem. And if there’s conflict, that’s okay as long as you manage the conflict, right? It’s okay to have agreement, disagreement. It’s okay to have heartfelt discussions about whatever it is that you’re working on. That’s okay. You just have to manage for that. That should be expected. The problem is if you never get there, right? So if the entire recruitment process and selection process and security process weeds out anybody that doesn’t look like me, or doesn’t have an education, doesn’t have a doctorate, or doesn’t have, you know, something like that, that’s a problem. So we need to address that. I wouldn’t be surprised if we could find clever ways to air gap the most secret information and the most secret work that we’re doing from the less secret work that we’re doing, so that those two can come together in some way. I think that’s reasonable.
  • Knowledge Graphs as Presence for Ethnography

  • Rob Johnston suggests using technology to shadow analysts and create digital twins, addressing the challenge of articulating expertise in intelligence work.

  • As an anthropologist, Johnston notes that traditional ethnography, while valuable, lacks the coverage a digital shadow could provide.

  • He envisions an algorithm tuned to understand an analyst’s thinking, allowing others to tap into their decision criteria even when they are not present.

  • Johnston emphasizes the potential of knowledge graphs, agentic AI, and large language models to impact knowledge work by creating digital shadows.

    Rob Johnston
    That’s still a fundamental issue with human cognition. As an anthropologist, I might be able to do ethnography. I mean, I can go and interview three, four hundred people. I can live in the community for a couple of years and get immersed. Now I’m bilingual. I speak their language, blah, blah, blah. But that doesn’t give me the kind of coverage that a digital shadow would give me. Digital shadow, and I’m making that term up, I don’t know what it means. But that algorithm that becomes Rob algorithm one is really tuned to trying to figure out how I think about the world, so that when I’m not there, either I’ve been run over by a bus, I’ve retired, I’m sick, whatever it happens to be. I’m just not present. And somebody wants to tap into whatever elements of my decision criteria they’re interested in, they should be able to. I mean, that’s kind of the ideal. But knowledge management had a long way to go 20 years ago to get here. Just the idea of knowledge graphs and agentic AI and large language models, all of that stuff has the potential to have a huge impact on knowledge work generally.
    Collaborative knowledge organization changes how analysis scales. Restructuring where information lives (arranging knowledge for specific tasks) redraws the boundaries people use to understand the domain. This shifts the work of analysis from individual sense-making to discovering relevant groupings that can be shared and emulated across the organization.
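    The knowledge-graph idea behind the "digital shadow" can be illustrated with a toy structure. This is a minimal sketch, assuming an analyst’s judgments can be stored as (subject, relation, object) triples; the triples and queries here are invented for illustration, not a real analytic system.

```python
# Toy knowledge graph: an analyst's judgments stored as
# (subject, relation, object) triples that colleagues can
# query when the analyst is not present.
from collections import defaultdict

class KnowledgeGraph:
    def __init__(self):
        self.triples = set()
        self.by_subject = defaultdict(set)

    def add(self, subject, relation, obj):
        self.triples.add((subject, relation, obj))
        self.by_subject[subject].add((relation, obj))

    def query(self, subject, relation=None):
        """Return everything recorded about a subject, optionally filtered by relation."""
        facts = self.by_subject.get(subject, set())
        if relation is None:
            return facts
        return {(r, o) for r, o in facts if r == relation}

# Hypothetical judgments, standing in for one analyst's decision criteria.
shadow = KnowledgeGraph()
shadow.add("open source", "valued_for", "timeliness")
shadow.add("open source", "weighed_against", "technical collection")
shadow.add("sourcing", "checked_by", "provenance review")

print(shadow.query("open source", "valued_for"))
```

    The design choice worth noting is that queries work without the analyst in the loop: the graph is the shareable residue of how they weigh evidence, which is the "presence" the section title refers to.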
  • Augment Analysts with Digital Tools

  • Use large language models to create digital twins for analysts to aid learning.

  • Leverage knowledge graphs, agentic AI, and LLMs to impact knowledge work.

  • Focus on being more consistent and rigorous in data entry and mapping.

  • Don’t look at AI as a replacement technology for complex cognitive tasks; view it as an augmentation tool.

    Saty Ruiz
    I’m really interested in this question. And something like a year ago, I talked to a guy who worked at the State Department on this. We had him on the podcast, Dan Spokajny. And he had a similar perspective on the way State does its own learning. They’re not working so much with classified information, but they have a lot of these same questions. You’re trying to figure out what’s going to happen in a country. And their information, their memo system is really kind of hopelessly arcane. There’s no codification of how information comes in or out. There’s not really any systems for assessing. Was this desk chief especially good at predicting, especially bad? But it’s interesting because I had some of the same questions for him as I think I do for you, which are about what are the limits of building information management tools, and how much can you improve efficacy by treating it like a science? In both that conversation and in this one, what we ended up saying is you can definitely be more consistent and more rigorous about data entry and trying to map and pull from it, without having to go so far as to say, like, if we built a digital version of it, it would be much better than our human analysts.
    Rob Johnston
    I’m not sure that a digital version would be better
    Analysis as lossy compression with high recall: the work is understanding the crux and surfacing relevant datapoints efficiently, not perfecting the encoding. Making information visible and accessible (placing it in shared space) reduces coordination overhead — less time preparing formal reports, less bureaucratic phone tag.
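    The lossy-compression-with-high-recall framing can be made concrete with a toy retrieval scorer: keep only the few datapoints most relevant to the question, rather than perfectly encoding everything. The datapoints and keyword-overlap scoring below are invented for illustration, not how any real analytic system works.

```python
# Toy "lossy compression with high recall": surface the datapoints
# most relevant to a question instead of encoding everything.

def surface(datapoints, query_terms, k=2):
    """Score each datapoint by query-term overlap and keep the top k."""
    def score(text):
        words = set(text.lower().split())
        return len(words & query_terms)
    ranked = sorted(datapoints, key=score, reverse=True)
    return ranked[:k]

datapoints = [
    "satellite imagery shows new construction at the port",
    "local press reports grain shipments delayed at the port",
    "unrelated ministry reshuffle announced",
]
query = {"port", "shipments"}
print(surface(datapoints, query))
```

    The compression is lossy (the third datapoint is dropped entirely) but recall on the question at hand is preserved, which is the trade the commentary above describes.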
  • Offload Transactional Work

  • Offload as much transactional work as possible into automation.

  • This frees up more time for analysts to focus on thinking.

    Rob Johnston
    Right now I’m teaching one on AI in the intelligence community. And in that class, we talk about what the perils are with AI, that if you understand what the real guardrails are for our use case, it will help you figure out the good ways to implement it. But ideally, if I had any authority whatsoever, I would argue that we should probably offload as much transactional work as we can into automation and
  • Selling the Value of Thinking Time

  • Rob Johnston highlights the difficulty in convincing stakeholders to invest in providing analysts more time for deep thinking and learning.

  • Stakeholders often prioritize tangible outputs like more reports or innovative presentations, rather than the intangible benefits of enhanced cognitive processing.

  • Congressional committees want to see demonstrable improvements in crisis management, forecasting, or negotiation skills as a result of this extra thinking time.

  • Easily measurable KPIs are needed to justify the value of increased thinking time, despite them not being particularly flashy.

    Rob Johnston
    And the problem is that it’s hard to sell that to anybody. I mean, we say, okay, well, what if we can cut two hours out of every analyst day just by getting rid of the transactional work? That’s two more hours they could be thinking deeply about this problem, or they could be reading. They could be learning. They could be whatever. They could be learning the language. They could be in countries, you know, getting firsthand experience. Lots of things they could be doing instead of filling out forms. The problem is when you go to get your budget and you tell them we’re going to liberate two hours for people to think, you get blank stares. That’s swell, but does that mean I’m going to get more reports? You know, no, not necessarily. Does it mean my reports will come in new colors? No. Why would it come in new colors? You know, it’s just sort of, where is the thing? And so you’d go to the SSCI, the Senate Select Committee on Intelligence, or the House version, the HPSCI, the Permanent Select Committee on Intelligence, and you say, we’re going to free up time for analysts to spend more time thinking. And in their heads, they’re like, oh, great, what do we have to show for that? You know, how can we demonstrate that two hours more of thinking is going to change our ability to deal with crises or our ability to make forecasts or our ability to negotiate? You know, how do we know that there’s payoff there? Well, principally, because there are ways to measure, there are KPIs, but they’re not flashy. They’re not sexy. They’re not, you know, you’re not going to roll into DARPA and say, hey, give me a gazillion dollars because I’ve got this great idea to
    The intelligence community and recommendation systems share the same structural problem: the most valuable insights come from deep, marginal connections that are hard to measure and hard to budget for. Institutions reward flashy, demonstrable outcomes — whether in national security or consumer products — at the expense of the slower, less visible work of genuine discovery.
  • Intelligence Resources

  • Constrained budgets and workforce prevent playing the ā€˜Black Swan game’ in intelligence.

  • The community overestimates signals intelligence value relative to human intelligence.

    Rob Johnston
    That’s very true. If we took the notion of shorting the market every day, eventually we’re going to make a lot of money, right? Someday we will make a lot of money. Up until that day, we’re going to lose a lot of money. We could do analysis like that. But the problem is what happens for the other 364 days, right? You have to have a lot of money to play that game. Or in the case of the agency or the community, you have to have a lot of people, a lot of resources, to play that game. And if you have a constrained budget and a constrained workforce and a constrained pipeline between the people who are onboarding and the people who are retiring, you don’t have the capital to play the Black Swan game. You don’t. If we want the community to be more attuned to that, it will take considerably more resources than the community has or has ever had. And generally speaking, those resources are driven towards collection platforms, not towards humans. Tell me about that.
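    The shorting-the-market arithmetic here can be written down as a toy expected-value calculation. The daily cost, rare payoff, and event probability below are invented numbers chosen only to show why the strategy demands deep capital.

```python
# Toy arithmetic for the "Black Swan game": small losses most days,
# a rare large payoff. All numbers are illustrative only.

def expected_value(daily_cost, payoff, p_event, days=365):
    """Expected net outcome over `days` if a rare event pays off
    with probability `p_event` on any given day."""
    expected_payoff = days * p_event * payoff
    expected_losses = days * (1 - p_event) * daily_cost
    return expected_payoff - expected_losses

# Losing 1 unit on roughly 364 of 365 days means the rare win must
# exceed ~364 units just to break even -- the "capital to play that game".
print(expected_value(daily_cost=1, payoff=400, p_event=1 / 365))
```

    Even with a positive expected value, the strategy requires surviving a long run of losing days first, which is Johnston’s point about constrained budgets and workforces.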
    Saty Ruiz
    My impression has been the trend over time in the intelligence community has been human intelligence is hard. It’s costly. Sometimes China gets all our spies and kills them. Signals intelligence is expensive, but it’s relatively risk-free compared to putting people in foreign countries. And as a result, the trend has been more signals intelligence, more exquisite satellites, fewer people on the ground. Language acquisition is less of a priority. Is that kind of mental model that I have in my head roughly right? And if it’s not, why not?
    Rob Johnston
    No, I think it is. I think you’re correct. I think the problem is that it’s a bit of a false choice and not on your part, but on the community’s part.
  • Technical Collection vs. Human Intel

  • Technical collection provides access, not necessarily truth, and may lead to being bamboozled, as in the Saddam case.

  • Prioritize hiring analysts over building satellites to get to the truth, but political incentives often favor the latter.

    Rob Johnston
    And I fear that when we think about, oh, technical collection gets us truth. I mean, technical collection gets us access. Truth, I don’t know. I have seen pretty mixed findings. I think the budget impetus, if I’m going to build a satellite and I can engage satellite subcontractors in 43 states versus hiring a thousand more analysts, what do you think I’m going to do? I’m a politician, right? The thousand analysts aren’t going to come from my district, you know, so I’m going to go for the satellite MacGuffin, whatever it happens to be.
    Saty Ruiz
    In a former job a couple of years ago, I talked to quite a few NASA contractors for reporting stories. And it was always remarkable how explicit they were that the reason you should work with them is they hire in all 50 states and they hire in your state and they hire
  • Learn from Librarians

  • To improve critical thinking, focus on recognizing and refuting misinformation.

  • Steal skills from librarians and data journalists to improve data quality understanding.

    Rob Johnston
    People who understand sourcing and how important sourcing is for critical thinking. I see this in students and I’ve seen this all over the place. The broader question about education and preparation for the career, right? The education itself should be focused on helping people recognize and refute bullshit. That’s sort of step one. Is the critical thinking necessary to say, well, this makes no sense or this is just fluff? The people who are really good at understanding the quality of a source, understanding the history of that source, understanding the source’s access to information or lack of. And the people who are professionally trained to do that are librarians and have been for years. And librarians have a certain way of thinking in the context of the library. But those skills about data quality are really important skills. And we should probably steal shamelessly from librarians when it comes to thinking about this. Data journalism, same thing. There are lots of parallel professions where we could be learning more to improve our own performance. The folks that I’ve seen who crush it, folks who are really good, are the people that they’re like a dog with a bone. They will not let
  • Humility in Intelligence

  • Assume you’re not the smartest person in the room to avoid cognitive disservice.

  • Overinflated ego leads to rejecting information that challenges one’s mental model.

    Rob Johnston
    It’s better to walk into a room and assume that you’re not remotely the smartest person there. Always go in like that. You’re doing yourself a cognitive disservice if you think you’re cleverer than everybody else. It’s a rookie mistake, but you see it over and over. And if it works for you and you keep getting promoted, eventually you start to believe it.
    Saty Ruiz
    Doesn’t seem like a rookie mistake to me. It seems like a seasoned professional mistake. You know, it’s a mistake that you’re more prone to the further on you get. That’s true.
    Rob Johnston
    Yes, you’re right. You’re right.
    Saty Ruiz
    It is a longevity error. Rob, last round of rapid fire questions for you. But tomorrow, the president comes to you and says, Rob, I don’t