Recently, the AI landscape was shaken by the sudden rise—and subsequent controversy—surrounding DeepSeek’s AI. While the company has made waves with its open-source models, its data security practices and questionable terms and conditions have raised serious concerns. For businesses, this is a wake-up call: questionable AI tools are here to stay, and more will keep emerging. The key is understanding how to mitigate the risks they bring.
1. Provide an Alternative: Offer a secure, enterprise-approved AI solution that employees can use. Even without the paid Microsoft 365 Copilot, tools like Microsoft Copilot Chat, available at no cost, provide a safe and compliant way to leverage AI. We believe the combination of M365 Copilot and Copilot Chat makes for an effective formula for AI adoption.
2. Protect Your Data: Define holistic data security controls that follow the data wherever it lives. Even when users access corporate data on personal devices, effective use of these controls can prevent corporate data from being ingested into consumer-grade applications.
3. Monitor and Govern AI App Usage: A Cloud Access Security Broker (CASB) is a security tool that can monitor and govern the applications your users access. For example, using Microsoft Defender for Cloud Apps, a business can automatically block access to newly discovered or high-volume generative AI applications without manual intervention for each app.
4. Train Your Team: AI security training is essential. Just as employees are trained to secure their devices, lock facilities, and follow cybersecurity protocols, they need guidance on responsible AI use. Businesses must educate their workforce on which AI tools are safe to use and how to use them correctly.
