
A new survey found that more than 40% of health care clinicians and administrators know of a colleague using “shadow AI”—technology that hasn’t been approved or vetted by their organizations—and many have used similar technology themselves.
A Wolters Kluwer survey of more than 500 clinicians and health care executives found that 41% know a colleague using unapproved AI. Seventeen percent admitted they’ve used unapproved technology themselves, and 40% had encountered unapproved AI tools at work but chose not to use them.
When the survey asked why people used unapproved AI tools, almost 50% said they were trying to work faster. One in three said there were no approved AI tools available, or that the tools that had been approved didn’t work well.
Among clinicians using shadow AI, 26% said they were experimenting with the tools out of curiosity. And 42% of clinicians acknowledged that “inaccurate outputs” were a risk of using AI tools.
The fear is that using unauthorized technology of any kind—not just AI—can create oversight challenges and expose health care organizations to security breaches and data privacy violations. When asked to rank the risks of using AI, clinicians and administrators in the survey pointed to patient safety, privacy and data breaches.
The survey also found that only 9% of clinicians said they had any role in reviewing, developing or updating their organization’s AI policies.
The Wolters Kluwer survey backs up previous research looking at the use of AI in health care. A study released in December by Black Book Market Research, for example, found that 58% of front-line clinicians admitted to using generic AI tools like ChatGPT for work-related tasks at least once in the previous 30 days. Thirty-nine percent said they used those tools weekly or more often.
That earlier research found that clinicians were using unauthorized AI for tasks like drafting e-mails and other internal communication, creating patient education materials, summarizing complex clinical information and drafting portal messages for patients.
Among clinicians using unauthorized AI tools, 17% in the earlier survey admitted that they “sometimes or often” included identifiable patient information.