Understanding Shadow AI and How IT Can Address It

The unsanctioned use of generative AI within organizations is on the rise. Here’s how IT departments can support their workforce while ensuring business security.

You might already know about shadow IT—systems or solutions created outside the official IT domain—but the concept of shadow AI might be new to you. With generative AI now accessible through any employee’s browser, the practice is quickly gaining traction.

Shadow AI refers to the unauthorized or ad-hoc use of generative AI within an organization, outside of IT governance. According to Salesforce, about 49% of people have used generative AI, with over one-third using it daily. In workplaces, this translates to employees leveraging tools like ChatGPT for tasks such as drafting content, creating images, or writing code. For IT departments, this creates a governance dilemma: deciding which AI uses to permit or restrict in order to balance workforce support with business safety.

Moreover, the use of generative AI is accelerating. The same Salesforce survey reveals that 52% of respondents reported increased usage of generative AI compared to when they first started. This underscores the growing threat of shadow AI for IT.

Addressing Shadow AI: Steps for IT Leaders

IT leaders can take several steps to manage shadow AI effectively:

  1. Establish Policies for Generative AI Access
    • Developing clear policies on how employees can access public generative AI tools is crucial. Given that many employees are already using these tools, determining access guidelines is the first step in shaping your organization’s generative AI strategy. Measures can range from written guidelines to setting up firewalls or using VPNs to control access. The approach should balance the benefits of these tools in terms of productivity and innovation with potential security risks.
  2. Communicate Generative AI Policies Clearly
    • Once internal policies are set, it’s essential to communicate them widely. Inform employees about which generative AI tools they can use, for what purposes, and with which data. Be specific about safe versus unsafe AI usage. Tailor guidance to different roles within the organization and prepare to communicate policies frequently. Research from Asana shows a gap between executives’ perceptions and employees’ awareness of AI policies, highlighting the need for repeated communication.
  3. Implement Education and Training Programs
    • Beyond setting policies, educating employees on responsible AI usage is vital. Provide hands-on training and accessible learning resources to help users maximize generative AI while protecting company data. This can include workshops, webinars, and self-paced e-learning modules. The goal is to ensure users understand and practice safe and responsible AI usage.
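The access controls described in step 1 often come down to classifying outbound requests to AI services as approved, blocked, or pending review. A minimal sketch of that decision logic, using hypothetical domain lists (the tool names and the internal domain below are illustrative assumptions, not an official registry):

```python
# Sketch of a generative AI access-policy check. The domain lists are
# illustrative placeholders; a real deployment would source them from
# the organization's governance policy.

APPROVED_AI_DOMAINS = {"copilot.internal.example.com"}  # hypothetical sanctioned tool
BLOCKED_AI_DOMAINS = {"chat.openai.com", "gemini.google.com"}  # examples only


def classify_ai_request(domain: str) -> str:
    """Return how an outbound request to an AI service should be handled."""
    if domain in APPROVED_AI_DOMAINS:
        return "allow"
    if domain in BLOCKED_AI_DOMAINS:
        return "block"
    # Unknown AI endpoints are flagged for review rather than silently allowed,
    # which surfaces new shadow AI usage instead of letting it slip through.
    return "review"
```

The same allow/block/review split maps naturally onto firewall rules or VPN egress policies; the key design choice is defaulting unknown endpoints to review so new tools become visible to IT rather than becoming shadow AI.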

Taking Action on Shadow AI

Organizations can take immediate steps to address shadow AI by managing access, communicating policies effectively, and offering comprehensive training. As generative AI becomes more prevalent, a broader strategy may be needed, including developing or customizing tools to better meet business needs. According to a recent Dell survey, 76% of IT leaders believe generative AI will be significant, if not transformative, for their organizations, with 65% expecting meaningful results within the next year.

No matter where you are on your generative AI journey, these steps can help. If you need additional guidance, partnering with experts can accelerate your progress. At Dell, we collaborate with organizations to identify use cases, implement solutions, increase adoption, and train users to drive innovation.
