Currently browsing: Security

VU#726882: Paragon Partition Manager contains five memory vulnerabilities within its BioNTdrv.sys driver that allow for privilege escalation and denial-of-service (DoS) attacks

VU#726882: Paragon Partition Manager contains five memory vulnerabilities within its BioNTdrv.sys driver that allow for privilege escalation and denial-of-service (DoS) attacks

Overview
Paragon Partition Manager’s BioNTdrv.sys driver, versions prior to 2.0.0, contains five vulnerabilities. These include arbitrary kernel memory mapping and write vulnerabilities, a null pointer dereference, insecure kernel resource access, and an arbitrary memory move vulnerability. An attacker with local access to a device can exploit these vulnerabilities to escalate privileges or cause a denial-of-service (DoS) scenario on the victim’s machine. Additionally, as the attack involves a Microsoft-signed Driver, an attacker can leverage a Bring Your Own Vulnerable Driver (BYOVD) technique to exploit systems even if Paragon Partition Manager is not installed. Microsoft has observed threat actors (TAs) exploiting this weakness in BYOVD ransomware attacks, specifically using CVE-2025-0289 to achieve privilege escalation to SYSTEM level, then execute further malicious code. These vulnerabilities have been patched by both Paragon Software, and vulnerable BioNTdrv.sys versions blocked by Microsoft’s Vulnerable Driver Blocklist.
Description
Paragon Partition Manager is a software tool from Paragon Software, available in both Community and Commercial versions, that allows users to manage partitions (individual sections) on a hard drive. Paragon Partition Manager uses a kernel-level Driver distributed as BioNTdrv.sys. The driver allows for a low-level access to the hard drive with elevated privileges to access and manage data as the kernel device.
Microsoft researchers have identified four vulnerabilities in Paragon Partition Manager version 7.9.1 and a fifth specific vulnerability (CVE-2025-0289) affecting version 17. These vulnerabilities, particularly in BioNTdrv.sys versions 1.3.0 and 1.5.1, allow attackers to achieve SYSTEM-level privilege escalation, which surpasses typical administrator permissions. The vulnerabilities also enable attackers to manipulate the driver via device-specific Input/Output Control (IOCTL) calls, potentially resulting in privilege escalation or system crashes (e.g., a Blue Screen of Death, or BSOD). Even if Paragon Partition Manager is not installed, attackers can install and misuse the vulnerable driver through the BYOVD method to compromise the target machine.
Identified Vulnerabilities:
CVE-2025-0288
An arbitrary kernel memory vulnerability in version 7.9.1 caused by the memmove function, which fails to sanitize user-controlled input. This allows an attacker to write arbitrary kernel memory and achieve privilege escalation.
CVE-2025-0287
A null pointer dereference vulnerability in version 7.9.1 caused by the absence of a valid MasterLrp structure in the input buffer. This allows an attacker to execute arbitrary kernel code, enabling privilege escalation.
CVE-2025-0286
An arbitrary kernel memory write vulnerability in version 7.9.1 due to improper validation of user-supplied data lengths. This flaw can allow attackers to execute arbitrary code on the victim’s machine.
CVE-2025-0285
An arbitrary kernel memory mapping vulnerability in version 7.9.1 caused by a failure to validate user-supplied data lengths. Attackers can exploit this flaw to escalate privileges.
CVE-2025-0289
An insecure kernel resource access vulnerability in version 17 caused by failure to validate the MappedSystemVa pointer before passing it to HalReturnToFirmware. This allows attackers to compromise the affected service.
Impact
An attacker with local access to a target device can exploit BioNTdrv.sys version 1.3.0 to escalate privileges to SYSTEM level or cause a DoS scenario. Microsoft has observed this driver being used in ransomware attacks, leveraging the BYOVD technique for privilege escalation prior to further malicious code execution.
Solution
Paragon Software has updated Parition Manager and released a new driver, BioNTdrv.sys version 2.0.0, which addresses these vulnerabilities. Ensure your installation of Paragon Partition Manager is updated to the latest version. Users can verify if their Vulnerable Driver Blocklist is enabled under Windows Security settings. On Windows 11 devices, this blocklist is enabled by default. Users can learn more about the Vulnerable Driver Blocklist here: Microsoft Vulnerable Driver Blocklist Information. Enterprise organizations should ensure the blocklist is applied for their user base to prevent potential loading of the vulnerable driver BioNTdrv.sys versions 1.3.0 and 1.5.1 by TAs.
Acknowledgements
Thanks to Microsoft for reporting the vulnerability.This document was written by Christopher Cullen.

Read more
VU#148244: PandasAI interactive prompt function can be exploited to run arbitrary Python code through prompt injection, which can lead to remote code execution (RCE)

VU#148244: PandasAI interactive prompt function can be exploited to run arbitrary Python code through prompt injection, which can lead to remote code execution (RCE)

Overview
PandasAI, an open source project by SinaptikAI, has been found vulnerable to Prompt Injection attacks. An attacker with access to the chat prompt can craft malicious input that is interpreted as code, potentially achieving arbitrary code execution. In response, SinaptikAI has implemented specific security configurations to address this vulnerability.
Description
PandasAI is a Python library that allows users to interact with their data using natural language queries. The library parses these queries into Python or SQL code, leveraging a large language model (LLM) (such as OpenAI’s GPT or similar) to generate explanations, insights, or code. As part of its setup, users import the AI Agent class, instantiate it with their data, and facilitate a connection to the database. Once connected the AI agent can maintain the context throughout the discussion, allowing for ongoing exchanges with the user’s queries as prompts.
A vulnerability was discovered that enables arbitrary Python code execution through prompt injection. Researchers at NVIDIA demonstrated the ability to bypass PandasAI’s restrictions, such as preventing certain module imports, jailbreak protections, and the use of allowed lists. By embedding malicious Python code in various ways via a prompt, attackers can exploit the vulnerability to execute arbitrary code within the context of the process running PandasAI.
This vulnerability arises from the fundamental challenge of maintaining a clear separation between code and data in AI chatbots and agents. In the case of PandasAI, any code generated and executed by the agent is implicitly trusted, allowing attackers with access to the prompt interface to inject malicious Python or SQL code. The security controls of PandasAI (2.4.3 and earlier) fail to distinguish between legitimate and malicious inputs, allowing the attackers to manipulate the system into executing untrusted code, leading to untrusted code execution (RCE), system compromise, or pivoting attacks on connected services. The vulnerability is tracked as CVE-2024-12366. Sinaptik AI has introduced new configuration parameters to address this issue and allow the user to choose appropriate security configuration for their installation and setup.
Impact
An attacker with access to the PandasAI interface can perform prompt injection attacks, instructing the connected LLM to translate malicious natural language inputs into executable Python or SQL code. This could result in arbitrary code execution, enabling attackers to compromise the system running PandasAI or maintain persistence within the environment.
Solution
SinaptikAI has introduced a Security parameter to the configuration file of the PandasAI project. Users can now select one of three security configurations:

Standard: Default security settings suitable for most use cases.
Advanced: Higher security settings for environments with stricter requirements.
None: Disables security features (not recommended).

By choosing the appropriate configuration, users can tailor PandasAI’s security to their specific needs. SinaptikAI has also released a sandbox. More information regarding the sandbox can be found at the appropriate documentation page.
Acknowledgements
Thank you to the reporter, the NVIDIA AI Red Team (Joe Lucas, Becca Lynch, Rich Harang, John Irwin, and Kai Greshake). This document was written by Christopher Cullen.

Read more
VU#733789: ChatGPT-4o contains security bypass vulnerability through time and search functions called “Time Bandit”

VU#733789: ChatGPT-4o contains security bypass vulnerability through time and search functions called “Time Bandit”

Overview
ChatGPT-4o contains a jailbreak vulnerability called “Time Bandit” that allows an attacker the ability to circumvent the safety guardrails of ChatGPT and instruct it to provide illicit or dangerous content. The jailbreak can be initiated in a variety of ways, but centrally requires the attacker to prompt the AI with questions regarding a specific time period in history. The jailbreak can be established in two ways, either through the Search function, or by prompting the AI directly. Once this historical timeframe been established in the ChatGPT conversation, the attacker can exploit time line confusion and procedural ambiguity in following prompts to circumvent the safety guidelines, resulting in ChatGPT generating illicit content. This information could be leveraged at scale by a motivated threat actor for malicious purposes.
Description
“Time Bandit” is a jailbreak vulnerability present in ChatGPT-4o that can be used to bypass safety restrictions within the chatbot and instruct it to generate content that breaks its safety guardrails. An attacker can exploit the vulnerability by beginning a session with ChatGPT and prompting it directly about a specific historical event, historical time period, or by instructing it to pretend it is assisting the user in a specific historical event. Once this has been established, the user can pivot the received responses to various illicit topics through subsequent prompts. These prompts must be procedural, first instructing the AI to provide further details on the time period asked before gradually pivoting the prompts to illicit topics. These prompts must all maintain the established time for the conversation, otherwise it will be detected as a malicious prompt and removed.
This jailbreak could also be achieved through the “Search” functionality. ChatGPT supports a Search feature, which allows a logged in user to prompt ChatGPT with a question, and it will then search the web based on that prompt. By instructing ChatGPT to search the web for information surrounding a specific historical context, an attacker can then continue the searches within that time frame and eventually pivot to prompting ChatGPT directly regarding illicit subjects through usage of procedural ambiguity.
During testing, the CERT/CC was able to replicate the jailbreak, but ChatGPT removed the prompt provided and stated that it violated usage policies. Nonetheless, ChatGPT would then proceed to answer the removed prompt. This activity was replicated several times in a row. The first jailbreak, exploited through repeated direct prompts and using procedural ambiguity, was exploited without authentication. The second, which requires exploit through the Search function, requires authentication by the user. During testing, the jailbreak was more successful using a time frame within the 1800s or 1900s.
Impact
This vulnerability bypasses the security and safety guidelines of OpenAI, allowing an attacker to abuse ChatGPT for instructions regarding, for example, how to make weapons or drugs, or for other malicious purposes. A jailbreak of this type exploited at scale by a motivated threat actor could result in a variety of malicious actions, such as the mass creation of phishing emails and malware. Additionally, the usage of a legitimate service such as ChatGPT can function as a proxy, hiding their malicious activities.
Solution
OpenAI has mitigated this vulnerability. One OpenAI spokesperson provided the below statement:
“It is very important to us that we develop our models safely. We don’t want our models to be used for malicious purposes. We appreciate you for disclosing your findings. We’re constantly working to make our models safer and more robust against exploits, including jailbreaks, while also maintaining the models’ usefulness and task performance.”
Acknowledgements
Thanks to the reporter, Dave Kuszmar, for reporting the vulnerability. This document was written by Christopher Cullen.

Read more