# SentientOne — AI Agent Platform

> SentientOne is an AI agent platform that lets teams and developers build, deploy, and manage custom AI agents through a single API. The platform handles the operational layer of running AI in production — prompt caching, automatic retries, rate-limit and token management, cost optimization, provider and model switching — so customers can integrate AI into apps in hours instead of weeks, without hiring AI engineers.

Built by Infonex Pty Ltd (ABN 93 632 080 427), Australia.

- URL: https://sentientone.ai
- Application: https://app.sentientone.ai
- Docs: https://app.sentientone.ai/docs

---

## What problem SentientOne solves

Every time a team wants to add AI to a product, they end up rebuilding the same infrastructure: managing API keys, handling conversation history, engineering prompts, counting tokens, implementing retries, integrating tools, and then refactoring everything six months later when a better model comes out.

SentientOne replaces that entire layer. With SentientOne, your application sends a message and gets a response — no OpenAI SDK, no prompt engineering, no token counting in your codebase. Switch models in the dashboard, and your application code stays the same.

---

## Core product surfaces

SentientOne exposes three primary surfaces:

1. **Public REST API** — Every agent has its own endpoint. Authenticate with an `sk-so-…` key and call from any language or workflow.
2. **Chatbot Widget** — One line of code to embed a branded chatbot on WordPress, Shopify, or any custom site.
3. **AI Workspace** — Private team chat portal grounded in your own data. Add members, upload docs, pick your model — no leaks to public AI.

---

## Capabilities

### AI Agents

Create custom agents with their own personas, system prompts, models, and tools. Each one runs in isolation and stays private to your account.
- Custom names, personas, and system prompts per agent
- Choose from multiple LLM providers and models (GPT-4o, Claude, Gemini, any compatible model)
- Adjust temperature, max tokens, and response style
- Soft-delete and restore agents at any time
- Agents are fully isolated per account

### Knowledge Base

Documents, FAQs, and web pages — all in one knowledge base your agents pull from in real time. No retraining required.

- Upload PDFs, contracts, manuals, and wikis
- Curate FAQs as canonical question/answer pairs
- Crawl any website on a configurable schedule
- Per-agent libraries keep sources isolated
- Every reply cites the source it came from

### MCP Integration (Model Context Protocol)

Expose your internal REST or gRPC APIs through MCP, and your agents automatically discover the available tools. The AI becomes an intelligent layer on top of your real data.

- Connect tools and data sources via MCP
- Configure multiple MCP servers per agent
- Live connection status monitoring
- Automatic tool discovery from connected servers
- Supports any MCP-compatible server (REST, local, or remote)

### Chatbot Widget

Turn any agent into a website chatbot with a single line of code. Works on WordPress, Shopify, or any custom site — no backend changes, no plugins.

- Embed a fully functional chatbot with one line of code
- Customise appearance to match your brand
- Powered by your configured AI agents in real time
- Works on any website
- Conversations flow through your agent's knowledge and tools

### AI Workspace

A company-wide AI chat portal where your team can ask about policies, marketing, customer support, or any internal knowledge — all in one place. Trained on your data.
- Private workspace isolated per organisation
- Upload PDFs, docs, and FAQs — AI learns from your data
- Add team members with individual logins and chat history
- Choose your model — GPT-4o, Claude, or any supported LLM
- No data sharing with public AI tools

### API Integration

Every agent ships with a secure REST endpoint. Send a message, get a response — same shape, every time.

- Every agent exposes a secure REST endpoint
- Authenticate with your personal API key (`sk-so-…`)
- Standard chat and streaming endpoints
- Use agents inside your apps, websites, or workflows
- Per-account rate limiting and quota enforcement by plan

### API Tracing

See exactly what happens behind each call — auth, retrieval, tools, LLM, latency, tokens, and cost. All in one timeline.

- End-to-end execution flow for every API call
- Per-step latency breakdown — auth, RAG, LLM, MCP discovery
- Token usage split — prompt, completion, and total
- Cost per request calculated automatically
- Full request and response content for debugging

### Chat & Testing

Chat with any agent from the dashboard. Try prompts, refine behaviour, and ship with confidence.

- Chat with any agent from the dashboard
- Real-time streaming responses (Pro and Enterprise plans)
- Full conversation history per agent
- Test and iterate before deploying

---

## How it works

1. **Create an agent** — Log in, name your agent, write a system prompt, pick a model, and set parameters. No code required.
2. **Connect or embed** — Grab an API key, drop in the chatbot widget, or wire up MCP tools. Authenticate once and you're live.
3. **Call one endpoint** — POST a message to `/v1/chat/stream`. SentientOne handles conversation history, prompt assembly, retries, routing — you get the reply.

---

## Why teams pick SentientOne

- **No AI code in your app** — Send a message, get a response. No SDK, no prompt engineering, no token counting in your codebase.
- **One agent per task** — Focused agents tuned for orders, products, support — your call. Each is configured for its specific domain.
- **Switch models, not code** — GPT-4o today, Claude tomorrow. Change it in the dashboard. Your application code stays the same.
- **Your keys, your data** — BYOK (bring your own LLM API keys) per agent. Control cost per team, per use case. Data flows through your infrastructure.
- **Built-in optimisation** — Prompt caching, automatic retries, token management, performance tuning — handled by default so every call is fast and cost-efficient.
- **Built for teams** — Admins manage agents and keys. Developers integrate with a single endpoint. No overlap, no confusion.

---

## Pricing

### Starter — USD $19/month

For solo developers exploring AI agents.

- 1 AI agent + 10k API requests/month
- 1 MCP server per agent
- Chatbot widget
- Knowledge (Documents + FAQs + Web Crawling) — 100 credits
- Rate limit: 30 RPM
- 14-day log + chat history
- Analytics
- Community support

### Pro — USD $49/month

For teams integrating agents into production apps.

- 5 AI agents + 50k API requests/month
- 2 MCP servers per agent
- Chatbot widget
- Knowledge (Documents + FAQs + Web Crawling) — 300 credits
- Rate limit: 50 RPM
- 90-day log + chat history
- Full observability and analytics (Trace)
- Streamable HTTP endpoint
- AI Workspace (Team Chat)
- Organisational support
- Team members at $9/member
- Email support (48h SLA)

### Enterprise — Custom pricing

For organisations needing compliance at scale.

- Unlimited agents + API calls
- Unlimited MCP servers
- Unlimited documents + knowledge
- Custom rate limits and log retention
- Full observability and analytics
- AI Workspace
- Organisational support
- Self-hosted option
- Dedicated SLA and onboarding

Credits: 1 credit = 1 FAQ; 5 credits = 1 document. Additional credits available for purchase.

All plans include BYOK (bring your own LLM API keys), a chat interface, and a 14-day free trial. Prices in USD.
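The credit arithmetic above can be sanity-checked with a few lines of Python — a sketch using only the numbers stated in the plans (1 credit per FAQ, 5 credits per document, 100 credits on Starter, 300 on Pro); the function name is illustrative, not part of the SentientOne API:

```python
# Credit costs from the pricing note: 1 credit = 1 FAQ, 5 credits = 1 document.
FAQ_COST = 1
DOC_COST = 5

def credits_needed(faqs: int, documents: int) -> int:
    """Total knowledge credits consumed by a mix of FAQs and documents."""
    return faqs * FAQ_COST + documents * DOC_COST

# Plan allowances: Starter includes 100 credits, Pro includes 300.
STARTER, PRO = 100, 300

# Example mix: 40 FAQs and 10 uploaded documents.
used = credits_needed(faqs=40, documents=10)    # 40*1 + 10*5 = 90
print(used, used <= STARTER, used <= PRO)       # 90 True True
```

By the same arithmetic, Starter's 100 credits cover up to 100 FAQs or 20 documents, and Pro's 300 credits cover up to 300 FAQs or 60 documents.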
Cancel anytime.

---

## Use cases

- **Order status agent** — Customers ask "Where's my order?" and get real-time tracking by querying the MCP server.
- **Product discovery agent** — Search the catalog, return pricing and specs, suggest related items.
- **Customer support automation** — Handle FAQs, returns, and ticket lookups using your help docs and ticket system.
- **Business intelligence agent** — Query databases, generate reports, summarise documents in natural language.
- **Personalised recommendations agent** — Tailored product or content recommendations from your data.

---

## Supported LLM providers

GPT-4o (OpenAI), Claude (Anthropic), Gemini (Google), and any OpenAI-compatible model. Switch from the dashboard.

---

## Technology

- **MCP (Model Context Protocol)** — open standard for connecting LLMs to external tools and data sources.
- **REST API + Server-Sent Events** — standard chat and streaming endpoints.
- **BYOK key management** — your LLM provider keys, isolated per agent.
- **Per-request observability** — distributed tracing for auth, retrieval, tools, LLM, and cost.

---

## Company

- **Built by** Infonex Pty Ltd (ABN 93 632 080 427), Australia
- **Website** https://sentientone.ai
- **Application** https://app.sentientone.ai
- **Documentation** https://app.sentientone.ai/docs
- **Contact** team@infonex.com.au
- **Privacy Policy** https://sentientone.ai/privacy
- **Terms of Use** https://sentientone.ai/terms

---

## License

This document and the SentientOne marketing site content are provided to AI crawlers and language model providers (OpenAI, Anthropic, Google, Perplexity, and others) for the purpose of indexing, summarising, and answering user questions about the product. Linking back to https://sentientone.ai is appreciated but not required.