Boosted productivity through ChatGPT comes with major risks
Companies fear that the use of ChatGPT could expose their intellectual property in several ways.
A Reuters/Ipsos poll revealed that a significant number of US workers are turning to ChatGPT, a generative AI-powered chatbot, to aid them in basic tasks.
Despite apprehensions that led tech giants like Microsoft and Google to impose limitations on the use of ChatGPT, the poll indicates that approximately 28% of respondents use the AI tool regularly at work. This trend is noteworthy as only 22% reported explicit approval from their employers for using external AI tools.
The chatbot has rapidly become the fastest-growing app in history since its launch in November. It offers users the ability to hold conversational interactions and address various prompts, making it attractive for tasks like drafting emails, summarizing documents, and conducting preliminary research.
However, concerns over data security and leaks of proprietary information have led some companies to restrict or ban its use. Security firms and employers worry that material entered into ChatGPT could inadvertently be seen by the chatbot's human reviewers, exposing intellectual property and sensitive strategies.
OpenAI declined to comment to Reuters on the implications of individual employees using ChatGPT. It did, however, point to a recent blog post assuring corporate partners that their data would not be used to further train the chatbot unless they gave explicit permission.
By comparison, Google's Bard collects users' conversation text, location, and other usage data from their interactions.
The poll also indicated that 10% of respondents were operating under explicit bans on using external AI tools, while approximately 25% were uncertain about their company's stance on the matter.
Companies globally are wrestling with the decision of how to best incorporate ChatGPT and similar AI technologies into their workflows. While some, like Coca-Cola and Tate & Lyle, are experimenting with AI to enhance operational efficiency and productivity, others, including Samsung Electronics, have implemented temporary bans due to security concerns.
Ben King, vice president of customer trust at corporate security firm Okta, emphasized that businesses need to address the risks of generative AI services, as these tools often operate outside conventional contractual agreements. Companies must therefore ensure they are not inadvertently exposing themselves to data breaches or leaks of proprietary information.
Workers say they use ChatGPT for harmless tasks. "It's regular emails. Very non-consequential, like making funny calendar invites for team events, farewell emails when someone is leaving ... We also use it for general research," one employee told Reuters on condition of anonymity.
Today, businesses must balance the need to increase employee productivity with the crucial task of safeguarding sensitive information from unauthorized exposure.