Alphabet Inc, the parent company of Google, has warned its employees about how they use chatbots, including its own Bard program.
This move comes as the company markets Bard globally, aiming to tap into the lucrative market of generative artificial intelligence chatbots. Four individuals familiar with the matter told Reuters that Alphabet is concerned about the potential risk of leaking confidential information.
Citing its long-standing policy on safeguarding sensitive data, the company has advised employees not to enter any confidential materials into AI chatbots. Human-sounding programs like Bard and ChatGPT employ generative AI to hold realistic conversations with users. However, researchers have discovered that such AI models can reproduce the data they absorbed during training, posing a risk of leaks. Consequently, Alphabet has also alerted its engineers to avoid direct use of computer code that chatbots can generate.
When questioned about the matter, Alphabet confirmed that while Bard may make undesired code suggestions, it still assists programmers. The company aims to maintain transparency regarding the limitations of its technology. The concerns reflect Google's desire to avoid business harm from software it launched in competition with ChatGPT, which is backed by OpenAI and Microsoft Corp.
This cautionary approach by Alphabet aligns with an emerging security standard among corporations: warning personnel about the use of publicly available chat programs. Other major companies such as Samsung, Amazon.com, and Deutsche Bank have also established guidelines and protocols for the use of AI chatbots.
According to a survey conducted by networking site Fishbowl, approximately 43 per cent of professionals were already utilising ChatGPT or similar AI tools as of January, often without informing their superiors. Google itself instructed its staff testing Bard before its launch in February not to disclose internal information to the chatbot.
Google has revealed that it has engaged in extensive discussions with Ireland’s Data Protection Commission, addressing regulators’ inquiries about privacy concerns. This follows a Politico report suggesting that Bard’s launch in the European Union was postponed pending further information on its impact on privacy.
As businesses increasingly adopt AI chatbot technology for various tasks, concerns have risen regarding the inclusion of sensitive or copyrighted information in conversations. Some companies, like Cloudflare, have developed software to address these concerns, allowing businesses to tag and restrict data from flowing externally.
In response to the growing privacy concerns, Google and Microsoft are offering conversational tools to business customers that prioritise data protection. The default setting in Bard and ChatGPT is to save conversation history, but users have the option to delete it.
(Inputs from Reuters)