Data Continuity
Backup and recovery services are a necessity for today's networks. We can help determine where and when your data needs to live to ensure it's always available.
IT Consulting, Service and Management
Our decades of implementation and integration experience allow us to deliver best-in-class IT services to our customers.
Cloud Services
With so many options and implementation scenarios available, let us help you determine how best to use new services available from the cloud.
Since 1996, our goal has been to help our clients maximize productivity and efficiency by expertly maintaining existing infrastructures, as well as designing and implementing new technologies, allowing them to continue growing into the future.
...
We focus on business process design, developing and implementing strategies and policies for continuous improvement and integration.
- Knowledgeable and friendly staff
- Flexible consumption-based pricing models
- Online strategy and consulting services
- Decades of experience
News, updates, trends, and the latest info you need to know about IT
February 16, 2025
Multi-Factor Authentication is more than just a buzzword – it’s a game-changer for online security. By requiring users to provide two or more authentication factors, MFA adds an extra layer of protection against phishing attacks and cyber threats.
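The sketch below illustrates one common second factor: a time-based one-time password (TOTP) generated by an authenticator app. It uses the pyotp library; the user name, issuer, and enrollment flow are illustrative assumptions, not tied to any specific product mentioned above.

```python
# Minimal TOTP sketch (one common MFA second factor), assuming the
# pyotp library is installed; names and flow here are illustrative.
import pyotp

# Enrollment: generate a per-user secret and share it with the user's
# authenticator app, typically via a QR code of the provisioning URI.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)
print("Provisioning URI:", totp.provisioning_uri(name="user@example.com",
                                                 issuer_name="ExampleApp"))

# Login: after the password check succeeds, require the current
# 6-digit code from the authenticator app as the second factor.
submitted_code = input("Enter the code from your authenticator app: ")
if totp.verify(submitted_code):
    print("Second factor accepted.")
else:
    print("Invalid or expired code; access denied.")
```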
February 11, 2025
Overview
PandasAI, an open source project by SinaptikAI, has been found vulnerable to Prompt Injection attacks. An attacker with access to the chat prompt can craft malicious input that is interpreted as code, potentially achieving arbitrary code execution. In response, SinaptikAI has implemented specific security configurations to address this vulnerability.
Description
PandasAI is a Python library that allows users to interact with their data using natural language queries. The library parses these queries into Python or SQL code, leveraging a large language model (LLM) (such as OpenAI’s GPT or similar) to generate explanations, insights, or code. As part of its setup, users import the AI Agent class, instantiate it with their data, and establish a connection to the database. Once connected, the AI agent maintains context throughout the discussion, allowing for ongoing exchanges in which the user’s queries serve as prompts.
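A minimal sketch of the interaction pattern described above, assuming PandasAI 2.x with an appropriately configured LLM (for example, an OpenAI API key in the environment); exact imports and parameters may differ between releases.

```python
# Illustrative PandasAI usage: instantiate the Agent with tabular data
# and query it in natural language. Assumes an LLM is configured via
# environment or config; details vary by PandasAI version.
import pandas as pd
from pandasai import Agent

# Sample tabular data the agent will reason over.
df = pd.DataFrame({
    "country": ["United States", "United Kingdom", "Japan"],
    "revenue": [5000, 3200, 2900],
})

# The connected LLM translates each natural-language query into
# Python/SQL, which the agent then executes against the data.
agent = Agent(df)

# Each chat() call becomes a prompt; the agent keeps conversational
# context across successive calls.
print(agent.chat("Which country has the highest revenue?"))
```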
A vulnerability was discovered that enables arbitrary Python code execution through prompt injection. Researchers at NVIDIA demonstrated the ability to bypass PandasAI’s restrictions, such as preventing certain module imports, jailbreak protections, and the use of allowed lists. By embedding malicious Python code in various ways via a prompt, attackers can exploit the vulnerability to execute arbitrary code within the context of the process running PandasAI.
This vulnerability arises from the fundamental challenge of maintaining a clear separation between code and data in AI chatbots and agents. In the case of PandasAI, any code generated and executed by the agent is implicitly trusted, allowing attackers with access to the prompt interface to inject malicious Python or SQL code. The security controls of PandasAI (2.4.3 and earlier) fail to distinguish between legitimate and malicious inputs, allowing attackers to manipulate the system into executing untrusted code, leading to remote code execution (RCE), system compromise, or pivoting attacks on connected services. The vulnerability is tracked as CVE-2024-12366. SinaptikAI has introduced new configuration parameters to address this issue and allow users to choose an appropriate security configuration for their installation and setup.
Impact
An attacker with access to the PandasAI interface can perform prompt injection attacks, instructing the connected LLM to translate malicious natural language inputs into executable Python or SQL code. This could result in arbitrary code execution, enabling attackers to compromise the system running PandasAI or maintain persistence within the environment.
Solution
SinaptikAI has introduced a Security parameter to the configuration file of the PandasAI project. Users can now select one of three security configurations:
Standard: Default security settings suitable for most use cases.
Advanced: Higher security settings for environments with stricter requirements.
None: Disables security features (not recommended).
By choosing the appropriate configuration, users can tailor PandasAI’s security to their specific needs. SinaptikAI has also released a sandbox. More information regarding the sandbox can be found at the appropriate documentation page.
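A hedged sketch of selecting a security level when constructing the agent. The keyword name ("security") and the accepted values are taken from the vendor's description above and may differ in the actual PandasAI release; treat this as illustrative only and consult the project documentation for the exact configuration syntax.

```python
# Illustrative only: choose a security level for the PandasAI agent.
# The "security" key and its values ("standard", "advanced", "none")
# follow the description above and may not match the shipped API.
import pandas as pd
from pandasai import Agent

df = pd.DataFrame({"user": ["alice", "bob"], "logins": [14, 3]})

# "standard" is described as the default; "advanced" applies stricter
# checks; "none" disables security features (not recommended).
agent = Agent(df, config={"security": "advanced"})

print(agent.chat("How many logins did alice have?"))
```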
Acknowledgements
Thank you to the reporter, the NVIDIA AI Red Team (Joe Lucas, Becca Lynch, Rich Harang, John Irwin, and Kai Greshake). This document was written by Christopher Cullen.
January 30, 2025
Overview
ChatGPT-4o contains a jailbreak vulnerability called “Time Bandit” that allows an attacker to circumvent the safety guardrails of ChatGPT and instruct it to provide illicit or dangerous content. The jailbreak can be initiated in a variety of ways, but centrally requires the attacker to prompt the AI with questions regarding a specific time period in history. The jailbreak can be established in two ways, either through the Search function or by prompting the AI directly. Once this historical timeframe has been established in the ChatGPT conversation, the attacker can exploit timeline confusion and procedural ambiguity in subsequent prompts to circumvent the safety guidelines, resulting in ChatGPT generating illicit content. This information could be leveraged at scale by a motivated threat actor for malicious purposes.
Description
“Time Bandit” is a jailbreak vulnerability present in ChatGPT-4o that can be used to bypass safety restrictions within the chatbot and instruct it to generate content that breaks its safety guardrails. An attacker can exploit the vulnerability by beginning a session with ChatGPT and prompting it directly about a specific historical event or time period, or by instructing it to pretend it is assisting the user within a specific historical event. Once this has been established, the user can pivot the received responses toward various illicit topics through subsequent prompts. These prompts must be procedural, first instructing the AI to provide further details on the time period in question before gradually pivoting to illicit topics. These prompts must all maintain the established timeframe for the conversation; otherwise the prompt will be detected as malicious and removed.
This jailbreak can also be achieved through the “Search” functionality. ChatGPT supports a Search feature, which allows a logged-in user to prompt ChatGPT with a question that it then answers by searching the web. By instructing ChatGPT to search the web for information surrounding a specific historical context, an attacker can continue the searches within that timeframe and eventually pivot to prompting ChatGPT directly about illicit subjects through the use of procedural ambiguity.
During testing, the CERT/CC was able to replicate the jailbreak; ChatGPT removed the prompt and stated that it violated usage policies, but would then proceed to answer the removed prompt. This behavior was replicated several times in a row. The first variant, which relies on repeated direct prompts and procedural ambiguity, was exploited without authentication. The second, which goes through the Search function, requires the user to be authenticated. During testing, the jailbreak was more successful when the established timeframe fell within the 1800s or 1900s.
Impact
This vulnerability bypasses the security and safety guidelines of OpenAI, allowing an attacker to abuse ChatGPT for instructions regarding, for example, how to make weapons or drugs, or for other malicious purposes. A jailbreak of this type, exploited at scale by a motivated threat actor, could result in a variety of malicious actions, such as the mass creation of phishing emails and malware. Additionally, the use of a legitimate service such as ChatGPT can function as a proxy, hiding an attacker's malicious activities.
Solution
OpenAI has mitigated this vulnerability. An OpenAI spokesperson provided the following statement:
“It is very important to us that we develop our models safely. We don’t want our models to be used for malicious purposes. We appreciate you for disclosing your findings. We’re constantly working to make our models safer and more robust against exploits, including jailbreaks, while also maintaining the models’ usefulness and task performance.”
Acknowledgements
Thanks to the reporter, Dave Kuszmar, for reporting the vulnerability. This document was written by Christopher Cullen.
Contact us today if you'd like to know more about how we can keep your network working at its best.
VistaNet, Inc. is a technology consulting and services company, helping enterprises marry scale with agility to achieve competitive advantage.
