Samsung Electronics employees in the company’s semiconductor business unit have reportedly leaked sensitive corporate data into ChatGPT, according to a report by The Economist Korea. The leaks occurred in three separate instances. One employee entered faulty source code related to the Samsung Electronics facility measurement database download program, while another entered program code for identifying defective equipment. A third employee converted a smartphone recording of a company meeting into a document file and entered it into ChatGPT to generate meeting minutes.
The incident prompted the company to limit ChatGPT uploads to 1,024 bytes per prompt, the report stated. Samsung Electronics has yet to comment on the matter.
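A per-prompt byte cap like the one reported could be enforced client-side before any text reaches an external service. The following sketch is purely illustrative; the function name and handling are assumptions, not Samsung's actual tooling:

```python
MAX_PROMPT_BYTES = 1024  # the per-prompt cap reported by The Economist Korea

def prompt_within_limit(prompt: str, limit: int = MAX_PROMPT_BYTES) -> bool:
    """Return True if the UTF-8 encoded prompt fits within the byte limit.

    Measuring bytes (not characters) matters because non-ASCII text,
    such as Korean, encodes to multiple bytes per character in UTF-8.
    """
    return len(prompt.encode("utf-8")) <= limit

# A short English prompt passes; a 2,000-character dump does not.
prompt_within_limit("Summarize this meeting agenda.")  # True
prompt_within_limit("x" * 2000)                        # False
```

A guard like this only limits volume per request; it does not inspect content, so it would complement rather than replace content-based controls.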
ChatGPT is an AI language model that uses input data to train and improve the tool. While it is intended to aid with tasks such as programming, data privacy concerns arise when employees include corporate data in their prompts.
If employees input company data into ChatGPT, the AI could incorporate the information into its learning model. This information could then become part of its knowledge base, potentially exposing other users to proprietary data.
For enterprise users accessing ChatGPT through its API, OpenAI added parameters around training data. Enterprise data submitted through the API is not used for model training or other service improvements unless organizations decide to opt in. API users also have a default 30-day data retention policy, with options for shorter retention windows.
Nearly half of HR leaders are in the process of formulating guidance on employee use of OpenAI’s ChatGPT, and data privacy concerns have prompted businesses and governments alike to question the security of OpenAI’s models and ChatGPT.
In March, Italy imposed a temporary limitation “on the processing of Italian users’ data by OpenAI,” citing a ChatGPT bug as one of the reasons behind the decision. Last month, a bug in an open-source library used by ChatGPT caused a small percentage of users to see the titles of other active users’ conversation histories.
OpenAI conducted an investigation into the issue and found that the same bug caused 1.2% of active ChatGPT Plus subscribers to have payment-related information leaked, including their first and last name, email address, payment address, the last four digits of their credit card, and credit card expiration date.
In conclusion, companies need to provide clear guidance to their employees on what can be included in prompts to avoid potential internal leaks of company data. It is also important to be aware of the risks involved in using AI language models such as ChatGPT and to take the necessary steps to protect sensitive information.
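One concrete form such guidance can take is a pre-submission filter that scrubs obviously sensitive fields from a prompt before it leaves the corporate network. The patterns and labels below are illustrative assumptions, not a complete data-loss-prevention policy:

```python
import re

# Hypothetical patterns a company policy might redact before a prompt
# is sent to an external AI service; real policies would need broader
# coverage (API keys, internal hostnames, source code markers, etc.).
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> str:
    """Replace each pattern match with a [LABEL] placeholder."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Contact jane.doe@example.com, card 4111 1111 1111 1111"))
# → Contact [EMAIL], card [CARD]
```

A filter like this reduces accidental leakage but cannot recognize proprietary code or trade secrets by pattern alone, which is why clear employee guidance remains the first line of defense.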