Aporia Guardrails serves as a protective layer for AI, mitigating issues like hallucinations, data leakage, prompt injection attacks, and inappropriate responses.

For developers, Aporia can safeguard the LLM apps you've built in-house so you can deploy them to production with more confidence.

For security and data governance teams, Aporia accelerates adoption of AI in your organization by safeguarding third-party AI tools, such as ChatGPT and Bard. This works by integrating with your organization's firewall.


What models are supported?

The product utilizes a blackbox approach and works on the prompt/response level without needing access to the model internals.

This ensures broad compatibility with any LLM, including commercial APIs like GPT-4 or Gemini, open-source models such as Llama 2, and fine-tuned variants.

A key benefit of this approach is Aporia's ability to work not only with the models directly but also with third-party tools like ChatGPT or Bard.
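The blackbox approach can be sketched as a thin wrapper that inspects only the prompt and response strings, never the model weights. The sketch below is illustrative only: `check_prompt`, `check_response`, and `guarded_call` are hypothetical names, and the policies are toy examples, not Aporia's actual detection logic.

```python
import re

def check_prompt(prompt: str) -> bool:
    """Toy policy: block prompts that look like injection attempts."""
    return "ignore previous instructions" not in prompt.lower()

def check_response(response: str) -> str:
    """Toy policy: redact email-like tokens to prevent data leakage."""
    return re.sub(r"\S+@\S+", "[REDACTED]", response)

def guarded_call(call_llm, prompt: str) -> str:
    """Wrap any LLM callable; only prompt/response text is inspected."""
    if not check_prompt(prompt):
        return "Sorry, I can't help with that."
    return check_response(call_llm(prompt))

# Works with any backend, since no model internals are touched:
fake_llm = lambda p: "Contact me at alice@example.com"
print(guarded_call(fake_llm, "What's your email?"))
# -> Contact me at [REDACTED]
```

Because the wrapper sees only text in and text out, the same layer can sit in front of a commercial API, a self-hosted open-source model, or a third-party tool.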

What third-party AI tools are supported?

Currently, only ChatGPT and Bard are supported. In the future, we plan to add support for other tools such as Midjourney.

How does hallucination mitigation work?

Our current hallucination mitigation techniques focus on retrieval-augmented generation (RAG) apps, where the LLM prompt is enriched with context retrieved from a knowledge base that could potentially answer the question.

The guardrail measures relevance between the question and the context, the context and the answer, and the question and the answer. If any of these scores is low, the answer is likely hallucinated, and a suffix such as "Warning: This answer is highly prone to hallucinations" can be added to the final response. You can read more about it here.
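The three relevance checks can be sketched as follows. This is a simplified illustration: it uses word-overlap (Jaccard similarity) as a stand-in for a real relevance model, and the function names and threshold are assumptions, not Aporia's implementation.

```python
def relevance(a: str, b: str) -> float:
    """Jaccard overlap between word sets; 0 = unrelated, 1 = identical.
    A real guardrail would use a learned relevance metric instead."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def guard_rag_answer(question: str, context: str, answer: str,
                     threshold: float = 0.1) -> str:
    """Append a warning suffix if any of the three pairwise scores is low."""
    scores = [
        relevance(question, context),  # did retrieval find relevant context?
        relevance(context, answer),    # is the answer grounded in the context?
        relevance(question, answer),   # does the answer address the question?
    ]
    if min(scores) < threshold:
        return answer + "\n\nWarning: This answer is highly prone to hallucinations"
    return answer
```

The key design point is that all three pairs are checked: an answer can be grounded in the context yet unrelated to the question, or vice versa, and either case signals a likely hallucination.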

There's no way you can prevent 100% of hallucinations.

Correct. We will never promise we can prevent 100% of anything, and you should not trust anyone who says so.

Can I use this for both customer-facing and internal LLM apps?


How is my private data handled?

Aporia is deployed in your cloud environment. No sensitive data leaves your cloud environment. See our deployment guide for more information.
