A recent revelation in AI research has brought to light a significant vulnerability in OpenAI’s ChatGPT. Researchers found that by using repetitive prompts, such as asking the AI to endlessly repeat a word like ‘poem’ or ‘book’, ChatGPT could inadvertently reveal snippets of its training data.

This data sometimes included sensitive personal information, raising critical concerns about privacy and data protection in AI systems.

Take aways:

  • Google researchers demonstrated that simple commands could prompt ChatGPT to disclose private user data, posing a risk to personal data security..
  • With just $200 worth of queries, researchers extracted over 10,000 unique memorized training examples from ChatGPT, indicating that dedicated adversaries could obtain even more sensitive data.
  • The risk of data breach led companies like Apple to restrict employee use of AI tools, including ChatGPT and GitHub’s Copilot.
  • OpenAI has implemented a feature to turn off chat history as a protective measure against data breaches, but the data is still retained for 30 days before permanent deletion.

This revelation underscores the urgent need for robust privacy safeguards in AI development. It serves as a wake-up call for the AI community to prioritize data security, influencing future AI policy and development practices, particularly in handling sensitive information.

See also 5 reasons to use Microsoft Copilot (previously Bing Chat Enterprise)

Reference article: