Microsoft has blocked a set of its cloud and artificial intelligence (AI) services from being used by an Israeli defense unit, after an internal review found evidence supporting media reports that the unit had used those services for large-scale surveillance of Palestinians.
The decision, announced on September 25, comes after months of scrutiny following The Guardian’s investigation that revealed Israel’s Unit 8200 allegedly stored millions of intercepted Palestinian phone calls on Microsoft’s Azure cloud servers located in Europe.
Microsoft’s Decision
In a statement, Microsoft said it had “ceased and disabled a set of services to a unit within the Israel Ministry of Defense (IMOD)”, clarifying that the move does not affect other cybersecurity or contractual services provided to Israel.
Brad Smith, Microsoft’s Vice Chair and President, told employees in an internal memo:
“We do not provide technology to facilitate mass surveillance of civilians.”
The company emphasized that the action was a targeted suspension based on policy violations, rather than a complete severance of ties with Israel’s defense establishment.
Allegations of Surveillance
Investigative reports earlier this year alleged that Israel’s Unit 8200 used Microsoft’s cloud and AI infrastructure to manage mass surveillance of Palestinians.

According to The Guardian and partner outlets, the data stored included millions of hours of phone recordings from Gaza and the West Bank, with files hosted in Microsoft’s data centers in the Netherlands and Ireland. The reports claimed Microsoft’s AI services enabled advanced processing of this data.
These revelations triggered concerns that the company’s technology was being used in ways that violated both international human rights standards and Microsoft’s own terms of service.
Background of Microsoft’s Involvement
Microsoft previously confirmed that it provided software, Azure cloud, and Azure AI services to the Israeli Ministry of Defense. In May 2025, it said internal and external reviews had found no evidence at that time that its technology was used to harm civilians, while admitting it could not fully monitor usage on private servers.
However, leaked documents suggested that Microsoft saw its relationship with Israel’s intelligence services as a strategic priority, referring to the partnership with Unit 8200 as a potential “brand moment” for Azure.
Reactions
From Israel
Israeli officials acknowledged the suspension but downplayed its impact. Local media reported that the unit had already backed up critical data and that Microsoft’s decision would “not have an operational effect.”
From Advocacy Groups
Human rights groups, including the Council on American-Islamic Relations (CAIR), welcomed Microsoft’s action. A CAIR spokesperson said it was a “necessary step to prevent technology from being used in the surveillance of civilians in Gaza.”
At the same time, activists argued the move was limited, as Microsoft continues to provide other services to Israel’s military and government.
Inside Microsoft
The decision followed months of pressure from within the company. Employee groups such as “No Azure for Apartheid” protested Microsoft’s contracts with Israel, accusing the firm of complicity in human rights abuses.
Wider Implications
Microsoft’s move marks one of the most high-profile cases of a major U.S. tech company restricting services to a military client on ethical grounds during an ongoing conflict.
It raises pressing questions for the industry:
- How far should technology firms go in enforcing their own ethical standards?
- Can commercial suppliers effectively prevent misuse of their cloud and AI tools?
- Will other firms such as Amazon or Google face similar scrutiny?
Limitations
Despite the suspension, Microsoft admitted it cannot fully monitor or restrict how customers use its software outside its cloud. Moreover, only a specific unit was affected, while wider contracts with Israel remain intact.
Critics argue this selective suspension may have more to do with reputation management than a comprehensive human rights stance.
Conclusion
Microsoft’s suspension of cloud and AI services to an Israeli defense unit highlights the growing tension between business, ethics, and geopolitics in the tech sector. While the move sets a precedent for accountability, it also underscores the limitations of corporate oversight when advanced technologies are deployed in conflict zones.