The Anthropic-Pentagon dispute signals something bigger than a single contract disagreement. When AI companies start declining government surveillance projects, boards should ask: what capabilities are we building, and who controls them once they exist?
The structural problem isn’t hypothetical. AI systems that can analyse patterns across vast datasets already exist in corporate environments. Customer behaviour tracking, employee monitoring, supply chain surveillance—these same technologies become mass surveillance tools when scaled and redirected.
Indian boards face a particular challenge. The Digital Personal Data Protection Act creates compliance obligations, but government exemptions remain broad. A system built for legitimate business intelligence can become a state surveillance asset without the company’s consent or knowledge.
I have seen enough technology contracts to know how quickly “defensive” capabilities become “offensive” ones. What starts as fraud detection becomes behavioural prediction. What begins as customer insights becomes citizen tracking. The technical infrastructure is identical; only the dataset changes.
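To make this concrete: a minimal sketch, using synthetic placeholder data and assuming scikit-learn, of how little separates the two uses. The pipeline is a function of whatever feature matrix it receives; nothing in the code records what those features describe.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

def build_detector(records: np.ndarray) -> IsolationForest:
    """Fit an anomaly detector on whatever feature matrix it is given."""
    model = IsolationForest(n_estimators=100, random_state=0)
    return model.fit(records)

# "Defensive" use: per-transaction features, flagging fraud.
# (Random matrices stand in for real data in this sketch.)
fraud_detector = build_detector(np.random.rand(1000, 8))

# The identical function, fed per-citizen movement features,
# is now a population-scoring tool. No code changed; only the data did.
movement_scorer = build_detector(np.random.rand(1000, 8))
```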
The real question boards must grapple with: can you build surveillance-capable AI systems and simultaneously prevent their misuse? Anthropic’s answer appears to be no. It is walking away from lucrative contracts rather than risk its technology enabling mass surveillance.
This creates a boardroom dilemma. Shareholders expect AI investments to generate returns. Regulators demand compliance with data protection laws. Citizens increasingly expect privacy protection. Government agencies want access to powerful analytical tools. These demands point in different directions.
The missing piece in most board discussions is capability drift. AI systems trained for one purpose often prove capable of others. A model trained to detect financial fraud might excel at identifying political dissent. A customer recommendation engine might prove just as effective at predicting individual behaviour patterns. The boundaries between commercial intelligence and surveillance blur quickly.
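A sketch of why drift is the default (synthetic data; the citizen-feature interpretation in the comments is hypothetical): a fitted model will score any input of the right shape, because purpose lives in the deployment, not in the weights.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Train a "fraud" classifier on synthetic stand-in transaction features.
X_transactions = np.random.rand(500, 6)
y_fraud_labels = np.random.randint(0, 2, 500)
clf = LogisticRegression().fit(X_transactions, y_fraud_labels)

# Six columns describing citizens (posting frequency, locations,
# associations) satisfy the same interface. The model happily produces
# "risk" scores; it has no idea the purpose has changed.
X_citizens = np.random.rand(500, 6)
risk_scores = clf.predict_proba(X_citizens)[:, 1]
```

The only place a purpose check can live is outside the model, which is precisely where governance has to put it.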
What’s not being discussed: liability. If your AI system enables mass surveillance, who bears responsibility? The technology company? The government agency? The individual engineers? Current legal frameworks provide limited clarity on corporate accountability for dual-use AI technologies.
Success metrics compound the problem. AI systems improve through scale and data access. The most effective models require extensive information collection. But the same characteristics that make AI commercially valuable make it surveillance-ready. There’s no clean separation between beneficial AI and potential surveillance tools.
The Anthropic precedent matters because it demonstrates that profitable contracts can be declined on governance grounds. But it also reveals the problem’s scope. If Anthropic finds these technologies too dangerous to deploy, what does that say about companies proceeding without similar restrictions?
My Boardroom Takeaway
Directors overseeing AI development should establish clear use-case boundaries before building capabilities, not after. Consider implementing “dual-use restrictions” that prevent AI systems from being repurposed for surveillance applications, even under government pressure. Review data collection practices so that data is gathered for the intended use cases and nothing more. The question isn’t whether your AI could enable mass surveillance; it’s whether your governance framework prevents it from doing so.
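One way to make such restrictions enforceable by engineers, not just by policy documents, is a purpose-binding layer. A minimal sketch, with all names hypothetical: every inference request must declare a board-approved use case, and anything else is refused and logged.

```python
import logging

# Use cases the board has explicitly approved (hypothetical examples).
APPROVED_USE_CASES = {"fraud_detection", "supply_chain_quality"}

class UseCaseViolation(Exception):
    """Raised when inference is requested for an unapproved purpose."""

def guarded_predict(model, features, declared_use_case: str):
    """Run inference only if the caller declares an approved use case."""
    if declared_use_case not in APPROVED_USE_CASES:
        # Refuse and leave an audit trail rather than silently complying.
        logging.warning("Blocked inference for use case %r", declared_use_case)
        raise UseCaseViolation(f"{declared_use_case!r} is not board-approved")
    return model.predict(features)
```

A declared use case is only as honest as its caller, so a check like this supplements contractual limits and data minimisation at collection time; it does not replace them.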