Recently, the AI landscape was shaken by the sudden rise—and subsequent controversy—surrounding DeepSeek's AI. While the company has made waves with its open-source models, its data security practices and questionable terms and conditions have raised important concerns. For businesses, this is a wake-up call: questionable AI tools will keep emerging, and the key is understanding how to mitigate the risks they bring.

Corporate Risk: The Unvetted AI Problem

When an AI tool with uncertain policies becomes the #1 app on the market, you can be sure your employees are already using it. The excitement of cutting-edge technology often blinds users to security risks, leading to widespread adoption before companies can properly assess its implications.

Corporate Data

Even if employees believe they’re using AI tools for personal inquiries, many will inadvertently include sensitive business information in their prompts—customer details, financials, internal documents, or strategic plans. This is how data leakage risks escalate, putting companies at risk of breaches, regulatory violations, and reputational harm.

The Path to Secure, Generative AI for Businesses

Rather than reacting to each new AI tool that gains popularity, organizations must take a proactive approach to secure AI adoption. Business leaders—CEOs, CISOs, CIOs, Chief Legal Officers, and CHROs—can take these critical steps today:

  1. Provide an Alternative: Offer a secure, enterprise-approved AI solution that employees can use. Even if you aren't licensing the paid Microsoft 365 Copilot, tools like Microsoft Copilot Chat, available at no cost, provide a safe and compliant way to leverage AI. We believe the combination of M365 Copilot and Copilot Chat makes an effective formula for AI adoption.

  2. Protect Your Data: Define holistic data security controls that follow the data wherever it lives. Even if users access corporate data on their personal devices, effective use of data protection controls can prevent corporate data from being ingested into consumer-grade applications.

  3. Monitor and Govern AI App Usage: A Cloud Access Security Broker (CASB) is a security tool that can monitor and govern the applications your users access. For example, using Microsoft Defender for Cloud Apps, a business can automatically block access to newly discovered or high-volume generative AI applications without manual intervention for each app.

  4. Train Your Team: AI security training is essential. Just as employees are trained to secure their devices, lock facilities, and follow cybersecurity protocols, they need guidance on responsible AI use. Businesses must educate their workforce on which AI tools are safe to use and how to use them correctly.

Conclusion

The rise of AI tools with uncertain security practices is inevitable, but businesses cannot afford a wait-and-see approach. Those that fail to act now expose themselves to unnecessary risk. By providing secure AI alternatives, training employees, and implementing clear policies, organizations can embrace AI with confidence and control.

Microsoft Copilot exists for this very reason—helping businesses leverage AI securely while protecting their data and mitigating risk. The future of AI belongs to businesses that prioritize security, governance, and responsible innovation.

Unlock the full potential of Microsoft 365 Copilot for your business with the Vision and Value Workshop:

  • Understand AI reinvention and its potential for your business

  • Assess your business’ technical readiness

  • Build a custom business case and implementation roadmap

Andrew Reade