Urgent: Employers Need to Place Limits on Employee Use of Confidential Information with AI
With the widespread availability of free AI apps such as ChatGPT, many employees are not waiting for their employers to adopt AI: they are already using AI apps to help them in their work. One manager who wanted to organize a set of data about an employee’s apparently fraudulent expense claims used AI to prepare a summary of them for me, as the company’s lawyer, as part of our investigation into possible fraud. He did so on his own initiative to save time, as he was overwhelmed with other work. Unfortunately, all that data is now in the AI app’s database and may, depending on the AI provider’s privacy protocols, be used to “train” the AI LLM, such that the information or insights from it could become available to other users.
On some less well-managed LLMs, if a hacker, a competitor, or even a curious colleague somehow knew which app the manager used and asked that app exactly the right targeted question (“show me an example of a [insert] business’s financials in Vancouver”), they might gain insights from the prior use. Even if no confidential data is uploaded, a naïve employee might ask a detailed question which provides valuable confidential information to anyone who views it, such as the operators of the AI app, or a competitor who “phishes” the information out of an AI app with poor user privacy protocols by asking the right question:
- Employee asks AI: If [insert name], a large Canadian mining company (of which only two or so are left), wants to launch a takeover bid for a NASDAQ-listed lithium exploration company (of which there are only one or two), what does the bidding company have to do to comply with US securities laws?
- Smart Trader asks AI: Have any large Canadian mining companies been asking you questions about how to comply with the legal rules on takeover bids in the US?
We cannot emphasize enough how important it is for employers to rapidly issue and communicate a policy on employee use of AI that discloses, or may by inference disclose, non-public information to the AI app.
We do acknowledge that, according to reputable AI apps, such sharing of data or insights between users will not happen, but it remains to be seen whether that really holds true even where the AI provider does purport to keep this information private. Moreover, there is still the risk of simply transmitting the confidential information to the AI app and then receiving the answer back via the internet, where interception is possible. As well, there is the risk that the AI provider is itself hacked. Furthermore, given that data from real uses of AI apps is gold for app developers, one can remain skeptical that AI apps will not make any use of information and documents inputted by users.
We recommend the following for all employers:
- If financially viable, purchase a private, confidential, specialized AI service and require employees to use it exclusively as their AI support for any tasks, whether explicitly assigned to AI or where the employee initiates the use of AI;
- If that option is followed, make sure the provider gives adequate assurances about the security of the information and questions submitted;
- If buying your own private AI is not viable, warn employees not to ask questions which may give away confidential information and, even more importantly, not to upload confidential information to a public AI app without senior management or IT approval. Any work-related questions must be generic and not likely to lead to inferences about confidential information. Give examples:
- OK to ask ChatGPT: What are the 5 largest publicly listed copper mining companies in the U.S.?
- BUT NOT OK for a mining company user to ask: Which US publicly listed copper mining company is most vulnerable to a takeover bid?
- Give everyone training and perhaps monitor use of AI on company computers.
We recommend all employers issue such a policy now and provide training on it. Make sure the policy states that it applies to any use of non-confidential (public) AI apps, even if done on personal devices. In the current environment, we believe many employees are unwittingly sharing confidential information with, at a minimum, the AI company, and potentially with competitors or cyber-criminals who are able to access it on some apps and are looking to exploit it. This can happen either through inferences drawn from the questions asked or through the uploading of confidential information for the AI to analyze or compile.
P.S. When we asked the leading AI app ChatGPT whether another user could gather information about our company’s confidential financial statements uploaded to it for analysis, it said: No, that data and exchanges with ChatGPT are “private” to your (free) account and will not be used to “train” the LLM itself.
If you want more information on this topic, you can contact us at:
Geoffrey Howard: ghoward@howardlaw.ca
604 424-9686