Anyscale Resolves Critical Vulnerability in Ray Framework, Yet Numerous Users Remain at Risk

The discovery of a critical vulnerability in the popular open-source Ray framework has left numerous organizations and their sensitive data at risk. Known as the “ShadowRay” vulnerability, it allowed attackers to gain unauthorized access to companies’ AI production workloads, computing power, credentials, and other sensitive information for a period of seven months. While the framework’s maintainer, Anyscale, initially disputed the vulnerability, it has now released new tooling to help users determine whether their ports are exposed.

The vulnerability, identified as CVE-2023-48022, exposes the Ray Jobs API to remote code execution attacks: anyone with network access to the Ray dashboard can submit arbitrary jobs without authentication. Oligo Security, the company that first revealed the vulnerability in a research report, explains that ShadowRay could expose AI production workloads; access to cloud environments such as AWS, GCP, Azure, and Lambda Labs; Kubernetes API access; passwords and credentials for services like OpenAI, Stripe, and Slack; and production database credentials and tokens.
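Ray's documented job submission client makes it clear why an exposed dashboard amounts to code execution. The sketch below is illustrative only: the head-node address is hypothetical, and the entrypoint is a harmless placeholder, but on an unprotected cluster nothing prevents it from being any shell command.

```python
# Illustrative sketch: submitting a job to a Ray cluster through the Jobs API.
# The address "example-ray-head" is hypothetical; 8265 is Ray's default
# dashboard port. CVE-2023-48022 stems from this API accepting submissions
# from anyone who can reach that port, with no authentication step.
from ray.job_submission import JobSubmissionClient

client = JobSubmissionClient("http://example-ray-head:8265")

# The entrypoint is an arbitrary shell command executed on the cluster;
# a benign echo is used here, but an attacker is not so constrained.
job_id = client.submit_job(entrypoint="echo hello from an unauthenticated client")
print(f"Submitted job: {job_id}")
```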

Initially, Anyscale disputed the vulnerability, considering it “an expected behavior and a product feature.” However, in response to reports of malicious activity, Anyscale has now released the Open Ports Checker tool, which simplifies the process of determining whether ports are unexpectedly open or exposed. The client-side script’s defaults are pre-configured to reach out to a server Anyscale has set up, and the tool returns either an “OK” message or a “WARNING” report regarding open ports. Anyscale emphasizes, however, that a warning does not necessarily mean a port is open to unauthenticated traffic, and that an “OK” response does not guarantee that no ports are open.
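For readers who want a rough idea of what such a check involves, here is a generic connectivity probe, not Anyscale's Open Ports Checker, written against an assumed host name. As with Anyscale's tool, a successful connection does not prove unauthenticated exposure, and a failed one does not prove safety.

```python
# Generic sketch of a reachability check for Ray's default dashboard port.
# This is NOT Anyscale's Open Ports Checker; the host below is hypothetical.
import socket


def port_appears_open(host: str, port: int = 8265, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


if __name__ == "__main__":
    host = "example-ray-head"  # hypothetical address
    if port_appears_open(host):
        print(f"WARNING: {host}:8265 is reachable from this network vantage point")
    else:
        print(f"OK: no connection to {host}:8265 from here")
```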

According to Censys, an attack surface management and threat-hunting company, there were 315 affected hosts globally as of March 28. The majority of these hosts have an exposed login page, while a few have exposed file directories. The vulnerability is particularly dangerous because it targets behind-the-scenes infrastructure: attackers can reach valuable data by exploiting the infrastructure behind large language models (LLMs), which is often assumed to sit in secure environments.

The discovery of the ShadowRay vulnerability raises broader concerns about secure development principles, data awareness, and data hygiene. With the rapid progress in AI and the widespread adoption of LLMs, companies must practice data hygiene and validate their datasets. Understanding where data comes from and which regulatory requirements apply is critical, especially when building on-premises LLMs. Organizations must also address the people, process, and technology issues involved in securing their infrastructure, and avoid overreliance on LLMs.
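On the technology side, one concrete hardening step (a sketch under assumptions, not an exhaustive fix) is to avoid exposing the Ray dashboard beyond trusted networks. The snippet below assumes a simple single-node ray.init() setup and uses Ray's dashboard_host parameter to keep the dashboard on the loopback interface; clusters that need remote access would instead rely on firewalls, VPNs, or an authenticating proxy.

```python
# Hedged sketch of a minimal hardening step for a local Ray setup:
# keep the dashboard (and with it the Jobs API) bound to loopback so it is
# not reachable from other machines. Broader network controls are assumed
# to protect multi-node clusters.
import ray

ray.init(
    include_dashboard=True,      # keep the dashboard for local debugging
    dashboard_host="127.0.0.1",  # avoid binding to 0.0.0.0 or public interfaces
)
```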

Looking ahead, experts predict that as generative AI continues to advance, there will be more attacks on AI infrastructure than explicit use of AI to bolster attacks. If the data that powers AI models is easily accessible and exploits are readily available, attackers may choose to steal the data rather than use the tools themselves.

In conclusion, while Anyscale has responded to the critical vulnerability in the Ray framework by releasing new tooling, numerous users remain at risk. The ShadowRay vulnerability exposed sensitive information and highlighted the need for secure development practices and data hygiene. As the AI industry advances, organizations must prioritize securing their infrastructure and ensuring the responsible use of AI models.