FAQ from Aporia
What is Aporia?
Aporia is an AI Control Platform that provides real-time guardrails and security for AI applications.
How to use Aporia?
Use Aporia to mitigate AI risks and ensure the reliability and security of your AI apps.
How does Aporia work?
Aporia works by integrating with your AI models and providing real-time monitoring and control. It uses advanced techniques to detect and mitigate AI risks such as hallucinations, prompt injection attacks, and off-topic conversations.
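As an illustration only, the sketch below shows where such a guardrail layer typically sits in an application's request flow: the prompt is screened before it reaches the model, and the response is screened before it reaches the user. The check_guardrails function and its policy names are hypothetical placeholders for this sketch, not Aporia's actual API.

```
# Hypothetical sketch of a guardrail layer between the user and the LLM.
# check_guardrails() and the policy names are placeholders, not Aporia's API;
# in a real deployment these checks would be handled by the guardrail platform.

def check_guardrails(text: str, policies: list[str]) -> tuple[bool, str]:
    """Toy stand-in for a guardrail check (illustration only)."""
    # Example heuristic: flag obvious prompt-injection phrasing.
    if "ignore previous instructions" in text.lower():
        return False, "prompt_injection"
    return True, ""

def answer(user_prompt: str, call_llm) -> str:
    # 1. Screen the incoming prompt before it reaches the model.
    ok, reason = check_guardrails(user_prompt, ["prompt_injection", "off_topic"])
    if not ok:
        return f"Request blocked by guardrail: {reason}"

    # 2. Call the underlying LLM (GPT-X, Claude, LLaMA, etc.).
    response = call_llm(user_prompt)

    # 3. Screen the response before returning it to the user.
    ok, reason = check_guardrails(response, ["hallucination", "data_leakage", "profanity"])
    if not ok:
        return "Response withheld by guardrail policy."
    return response
```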
What are some key features of Aporia?
Key features of Aporia include real-time guardrails and security, hallucination mitigation, prompt injection prevention, off-topic detection, data leakage prevention, SQL security enforcement, profanity prevention, a session explorer, cost tracking, dashboards and monitoring, direct data connectors, root cause analysis, explainability, customization, integrations, and tabular model support.
What are the use cases of Aporia?
Aporia can be used to mitigate AI risks, ensure reliable and secure AI responses, protect intellectual property, detect and avoid off-topic conversations, keep interactions tasteful, prevent data leakage, secure SQL queries, explore LLM interactions, track costs and control budgets, monitor key metrics, detect model drift, gain actionable insights, monitor and troubleshoot models to enhance efficiency, understand and communicate model predictions, and tailor Aporia Observe to your models.
How can I integrate Aporia with my AI models?
Integrating Aporia with your AI models is easy. You can either call Aporia's REST API or point your base URL at an OpenAI-compatible proxy. Aporia is designed to fit seamlessly into your current setup, whether you're using GPT-X, Claude, Bard, LLaMA, or your own LLM.
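As a rough sketch of the proxy approach, the snippet below points the standard OpenAI Python client at a guardrailed base URL. The proxy URL, environment variable names, and model name are placeholders to be replaced with the values from your own Aporia project; they are not confirmed endpoints.

```
import os
from openai import OpenAI

# Point the standard OpenAI client at an OpenAI-compatible guardrail proxy.
# The base_url below is a placeholder; use the proxy URL from your Aporia project.
client = OpenAI(
    base_url=os.environ.get("GUARDRAIL_PROXY_URL", "https://<your-aporia-proxy>/v1"),
    api_key=os.environ["OPENAI_API_KEY"],
)

# Requests flow through the proxy, where guardrail policies are applied,
# before reaching the underlying model.
completion = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Summarize our refund policy."}],
)
print(completion.choices[0].message.content)
```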
Does Aporia support different types of AI models?
Yes, Aporia supports various types of AI models, including LLMs, tabular models, computer vision models, and NLP models.