
Microsoft lets generative AI loose on cybersecurity

As a part of its continued quest to inject generative AI into all its products, Microsoft today introduced Security Copilot, a new tool that aims to “summarize” and “make sense” of threat intelligence.

In a light-on-the-details announcement, Microsoft pitched Security Copilot as a way to correlate data on attacks while prioritizing security incidents. Countless tools already do this. But Microsoft argues that Security Copilot, which integrates with its existing security product portfolio, is made better by generative AI models from OpenAI — specifically the recently launched text-generating GPT-4.

“Advancing the state of security requires both people and technology — human ingenuity paired with the most advanced tools that help apply human expertise at speed and scale,” Microsoft Security executive vice president Charlie Bell said in a canned statement. “With Security Copilot we are building a future where every defender is empowered with the tools and technologies necessary to make the world a safer place.”

Oddly enough, Microsoft didn’t divulge exactly how Security Copilot incorporates GPT-4. Instead, it highlighted a custom trained model — perhaps GPT-4-based — powering Security Copilot that “incorporates a growing set of security-specific skills” and “deploys skills and queries” germane to cybersecurity.

Microsoft stressed that the model isn’t trained on customer data, addressing a common criticism of language model-driven services.

This custom model helps “catch what other approaches might miss,” Microsoft claims, by answering security-related questions, advising on the best course of action and summarizing events and processes. But given text-generating models’ untruthful tendencies, it’s unclear how effective such a model might be in production.

Microsoft itself admits that the custom Security Copilot model doesn’t always get everything right. “AI-generated content can contain mistakes,” the company writes. “As we continue to learn from these interactions, we are adjusting its responses to create more coherent, relevant and useful answers.”


Hopefully, those mistakes don’t end up making a bad security problem worse.
