Headless AI Platform for Search, RAG, and Assistants
TellusR is a toolkit for building, customizing, and scaling advanced AI search, generative services, and assistant solutions – with a flexible architecture and documented APIs.
How to Get Started with TellusR
Comprehensive step-by-step documentation is available here:
TellusR Documentation
See what's possible in our static API documentation:
API Documentation
Run it wherever you want – with models you trust
TellusR can run on your preferred infrastructure, whether a local server or the cloud of your choice. It is built around four main modules:
Importer module (Update assistant) for uploading and interpreting documents, metadata, and other content
Search/retrieve module (Flow) that, among other things, finds and ranks relevant text sections across all content, and provides precise context for search and AI functions
Inference service (NLP-service) for text vectorization and text generation
Assistant module (Dialogue) for generative assistants
The search/retrieve module uses compact language models that are downloaded and run locally alongside TellusR, so indexing, vectorization, and search execute without external API calls. The assistant module connects to one or more external LLMs for the actual text generation, and the choice of model is controlled entirely via configuration.
TellusR is platform-independent and designed to avoid vendor lock-in: you are free to choose which language models you wish to use, and can switch providers without altering the underlying solution.
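To make the module split concrete, here is a minimal Python sketch of a round trip through the modules over HTTP. The base URL, endpoint paths, and field names are illustrative assumptions, not the documented TellusR API – see the API documentation for the actual contract.

import requests

# Hypothetical base URL for a local TellusR installation.
BASE = "http://localhost:8080"

# 1. Importer module (Update assistant): upload a document for parsing
#    and indexing. Endpoint and fields are assumed for illustration.
with open("handbook.pdf", "rb") as f:
    requests.post(
        f"{BASE}/importer/upload",
        files={"file": f},
        data={"metadata": '{"source": "hr-handbook"}'},
    ).raise_for_status()

# 2. Search/retrieve module (Flow): hybrid search returning ranked text
#    sections, vectorized locally without external API calls.
hits = requests.post(
    f"{BASE}/flow/search",
    json={"query": "How many vacation days do employees get?"},
).json()

# 3. Assistant module (Dialogue): generate an answer grounded in the
#    retrieved context, using whichever external LLM the configuration names.
answer = requests.post(
    f"{BASE}/dialogue/chat",
    json={
        "question": "How many vacation days do employees get?",
        "context": hits,
    },
).json()
print(answer)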
Quick wins – get started fast
Quick installation on your own server: a one-liner downloads a Docker Compose file that brings up the whole solution.
Supports formats such as Word, PDF, PowerPoint, HTML, and Markdown.
Comes with proven standard setup configurations and prompts, with support for flexible tuning.
End-to-end traceability and observability throughout the chat pipeline.
AI search and AI chat integrated with a built-in AI testing framework.
Key Features
Platform and Architecture
Headless and API-first architecture – all functionality can be integrated directly into your own applications.
High configurability with full control via documented API endpoints and configuration interface.
GPU support and scalable execution across multiple servers.
Flexible operation: run locally, on your own server, or in any cloud – without vendor lock-in.
RAG, search and context delivery
Advanced hybrid search combining traditional keyword matching with semantic vector retrieval
Full control over context, chunking, and source prioritization
Local execution of retriever models
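As an illustration of that control, a hybrid search request could expose weights for the lexical and semantic scores and parameters for shaping the returned context. The endpoint and every parameter name below (lexical_weight, semantic_weight, chunk_window, and so on) are assumptions made for this sketch, not documented TellusR parameters.

import requests

# Hypothetical hybrid-search request; all parameter names are illustrative.
response = requests.post(
    "http://localhost:8080/flow/search",
    json={
        "query": "data retention policy",
        "lexical_weight": 0.4,    # share of score from traditional keyword match
        "semantic_weight": 0.6,   # share of score from local vector similarity
        "chunk_window": 2,        # neighboring chunks returned around each hit
        "sources": ["policies"],  # restrict and prioritize source collections
        "top_k": 5,
    },
)
for hit in response.json()["hits"]:
    print(hit["score"], hit["text"][:80])

In a setup like this, raising semantic_weight would favor meaning-based matches over exact terms, while chunk_window widens the context handed on to the assistant.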
Assistant Framework
Support for building custom assistants and static assistant pipelines.
Support for MCP (Model Context Protocol)
Built-in prompt editor with support tools for testing and quality assurance of prompt changes
Free choice of LLM
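Because text generation is delegated to an external LLM chosen in configuration, switching providers can be a configuration change rather than a code change. The configuration endpoint and keys below are assumptions sketching the idea, not the actual TellusR configuration schema.

import requests

# Hypothetical configuration call; endpoint and keys are illustrative.
# Swapping provider or model here would not change any application code
# that talks to the Dialogue module.
requests.put(
    "http://localhost:8080/dialogue/config/llm",
    json={
        "provider": "openai",          # or an open-source model server
        "model": "gpt-4o",
        "endpoint": "https://api.openai.com/v1",
        "api_key_env": "LLM_API_KEY",  # key read from an environment variable
    },
).raise_for_status()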
Observability and Quality Assurance
Built-in testing framework with support for both static tests and LLM-as-a-judge scenarios – easy to implement and maintain, whether you run individual tests or entire test suites.
Automated quality monitoring: compare test results across versions, detect regressions early, and document quality before production deployment.
Full traceability and in-depth debugging throughout the pipeline – insight into context, prompts, intermediate steps, helper assistants, and generated answers, with the ability to drill down into each step for troubleshooting and testing.
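To show what the two test styles amount to in practice, here is a sketch of one static test and one LLM-as-a-judge test submitted as a suite. The test schema, field names, and endpoint are illustrative assumptions; the real definitions are described in the TellusR documentation.

import requests

# Hypothetical test suite; field names and endpoint are illustrative.
suite = {
    "name": "hr-assistant-regression",
    "tests": [
        {
            # Static test: assert that a known fact appears in the answer.
            "type": "static",
            "question": "How many vacation days do employees get?",
            "must_contain": ["25 days"],
        },
        {
            # LLM-as-a-judge: a judge model scores the answer against criteria.
            "type": "llm_judge",
            "question": "Summarize the data retention policy.",
            "criteria": "Grounded in retrieved context; cites the policy document.",
            "pass_threshold": 0.8,
        },
    ],
}

result = requests.post("http://localhost:8080/tests/run", json=suite).json()
print(result["passed"], "of", result["total"], "tests passed")

Running such a suite against each version makes the regression comparison described above a routine step before deployment.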
Additional value for consulting firms and software teams
Standardized APIs mean low vendor lock-in – build your own services on top, and extend or swap the underlying infrastructure without losing value.
Flexible model choice offers the freedom to use the best from open source, hyperscalers, or custom training, with low switching costs and without changing your codebase.
Enterprise-level security – all data processing and AI operations can run behind a firewall, on local hardware, or in a national cloud.
An actively used testing framework lets you deliver production-ready solutions faster and document the quality for your customers.
See what is possible in our static API documentation, and read our step-by-step documentation here:
Read more
© 2025 TellusR. All rights reserved.