v1.0.0 — Now Available for macOS & Windows

AI Agent
Powerhouse

Self-hosted AI workspace with parallel multi-agent orchestration, secure sandboxing, and a skill marketplace. Your data never leaves your machine. No Docker required.

TigrimOS v1.0.0
Screenshots: Task Manager (running agent tasks with live status, call counts, and Minecraft-style visualization), AI Chat (tool-calling and React/Recharts visualizations), and Visual Agent Editor (drag-and-drop multi-agent design).
16 Built-in Tools
7 Agent Topologies
0 Cloud Dependencies
MIT Licensed
Built for enterprise developers, indie & solo devs, engineering teams, open source enthusiasts, AI researchers, and regulated industries.

The Problems You Face Every Day

Cloud AI tools are powerful — until they hit the wall of privacy, cost, and control.

Compliance blocks cloud AI

Regulated industries can't send proprietary code, patient data, or financial records to third-party servers. No audit trail, no deal.

TigrimOS runs 100% on-premise inside an isolated sandbox. Zero data leaves your machine.

API bills that spike without warning

Multiple AI subscriptions stack up. A heavy coding session can blow through your budget overnight — and you only see it after the fact.

Run free on local models, or mix cheap local workers with cloud orchestrators. You control every dollar.

Locked into a single AI vendor

Team-wide Copilot or ChatGPT subscriptions add up fast. Switching providers means retraining workflows and losing integrations.

Assign different providers per agent — OpenAI, Claude, Ollama, Codex — swap anytime, no lock-in.

No offline or air-gapped option

Traveling, remote locations, secure facilities — cloud AI simply doesn't work when you can't connect.

Full functionality offline with local LLMs. Works air-gapped — no internet required.

Can't customize or extend your AI tools

Closed-source tools give you what they give you. No way to add domain-specific skills, connect internal APIs, or adapt to your workflow.

MIT licensed. Build skills, connect MCP servers, export YAML workflows — own everything.

Hard to prove ROI to leadership

Per-seat AI licenses for the whole team are expensive. When budgets tighten, AI tools are the first to get cut.

Free and open source. No per-seat fees. Deploy to the entire team at zero licensing cost.

Flexible AI — Your Models, Your Cost

Choose the AI provider that fits your budget. Use premium cloud APIs when you need top performance, or run completely free on local models.

$0 / month

Run Local LLMs

Use open-source models on your own hardware. Zero API costs, zero data sharing, full privacy.

  • Ollama — run Llama, Mistral, Gemma locally
  • llama.cpp — lightweight C++ inference
  • LM Studio — GUI model manager
  • Air-gapped — works offline
Mix & match

Hybrid — Best of Both

Assign different providers per agent. Use a powerful cloud model for planning and a free local model for execution.

  • Orchestrator on GPT-4 / Claude Opus
  • Worker agents on local Llama / Mistral
  • Claude Code CLI + Codex CLI as coders
  • Pay only for what you use
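A hybrid team like this could be captured in an exported workflow along the following lines. This is a hypothetical sketch: the keys (`topology`, `agents`, `provider`, `model`) are illustrative, not TigrimOS's actual YAML schema, and the model names are just examples.

```yaml
# Hypothetical workflow export: paid cloud planner, free local workers
topology: star
agents:
  - name: orchestrator
    role: planning
    provider: openai          # cloud model for high-level planning
    model: gpt-4o
  - name: coder-1
    role: execution
    provider: ollama          # free local model for the heavy lifting
    model: llama3
  - name: researcher-1
    role: research
    provider: ollama
    model: mistral
```

Because each agent names its own provider, swapping the orchestrator to Claude Opus or moving a worker to LM Studio would be a one-line change.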
API pay-as-you-go

Cloud APIs

Connect any OpenAI-compatible API for maximum capability. Use your own keys — no markup, no middleman.

  • OpenAI (GPT-4o, o3, o4-mini)
  • Anthropic (Claude Sonnet, Opus)
  • Any OpenAI-compatible endpoint
  • Your keys — direct billing, no markup

Everything You Need to Run AI In-House

A complete workspace where your agent teams build, reason, and execute — without ever leaving your infrastructure.

Parallel Multi-Agent Orchestration

7 orchestration topologies and 4 communication protocols. Agents work simultaneously on different parts of your problem — mesh networking, P2P swarm governance, and YAML-exportable workflows.

16 Built-in Tools

Web search, Python execution, React rendering, shell commands, file operations, skills, sub-agents — all sandboxed and ready to use out of the box.

Mix Any AI Provider

Assign different models per agent — OpenAI-compatible APIs, Claude Code CLI, Codex CLI, or local LLMs via Ollama, llama.cpp, and LM Studio. One team, many brains.

MCP Integration

Connect any Model Context Protocol server — Stdio, SSE, or StreamableHTTP. Extend your agents' toolbox with external services, databases, and APIs.
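As a sketch, registering two servers might look like this. The field names are hypothetical (TigrimOS's actual config format may differ), but Stdio and SSE are real MCP transports, and `@modelcontextprotocol/server-filesystem` is a published reference server.

```yaml
# Hypothetical MCP configuration: one stdio server, one SSE server
mcp_servers:
  - name: fs-tools
    transport: stdio
    command: npx
    args: ["-y", "@modelcontextprotocol/server-filesystem", "/workspace"]
  - name: internal-search
    transport: sse
    url: https://mcp.internal.example/sse
```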

Built-in Terminal

Full xterm.js terminal with root access to the Ubuntu sandbox. Install packages, manage services, run CLI tools — all from the browser with color, tab completion, and cursor support.

Skills & ClawHub Marketplace

Install pre-built AI skills from the marketplace or create your own. Package domain-specific capabilities and share them across your organization.

Long-Running Stability

Smart context compression, sliding window management, and checkpoint recovery. Your agents keep working through complex, multi-hour tasks without losing context.
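Sliding-window management generally means keeping the system prompt plus the most recent messages that still fit a budget. A minimal illustrative sketch (not TigrimOS's actual implementation; characters stand in for tokens):

```python
def trim_context(messages, max_chars):
    """Keep the system prompt plus the newest messages that fit
    within a rough character budget (a stand-in for tokens)."""
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    kept, used = [], sum(len(m["content"]) for m in system)
    for m in reversed(rest):                # walk newest-first
        if used + len(m["content"]) > max_chars:
            break                           # budget exhausted: drop older turns
        kept.append(m)
        used += len(m["content"])
    return system + list(reversed(kept))    # restore chronological order

history = (
    [{"role": "system", "content": "You are a coder."}]
    + [{"role": "user", "content": f"step {i}: " + "x" * 50} for i in range(100)]
)
window = trim_context(history, max_chars=500)
```

The real system pairs this with compression and checkpoints; the sketch only shows why the newest turns survive while older ones age out.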

7 Ways to Orchestrate Your Agents

Choose the right topology for every task. Click each pattern to see how agents connect and collaborate.

[Interactive topology diagrams: Mesh (every agent can request any other for help), Pipeline (one route: design → code → check → ship), Star (central hub delegates to all workers), Auction (agents bid on tasks; best fit wins the job), Broadcast (one agent sends the same task to all), Hierarchical (tree layers; managers delegate down), and Hybrid Swarm (combines star, mesh, and pipeline in one swarm).]

Mesh Network

Every agent can talk to every other agent directly. Any node can request help from any peer — no bottleneck, full redundancy. Best for collaborative problem-solving where context is shared.
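A toy sketch of the mesh idea (illustrative only, not TigrimOS code): every agent holds a direct reference to every peer, so any node can request any other without a hub.

```python
class Agent:
    def __init__(self, name, skill):
        self.name, self.skill, self.peers = name, skill, {}

    def connect(self, others):
        # Mesh: every agent keeps a direct reference to every peer.
        self.peers = {a.name: a for a in others if a is not self}

    def request(self, peer_name, task):
        # Direct peer-to-peer call, no central hub in between.
        return self.peers[peer_name].handle(task)

    def handle(self, task):
        return f"{self.name} ({self.skill}) handled: {task}"

agents = [Agent("coder", "code"), Agent("reviewer", "review"),
          Agent("researcher", "search")]
for a in agents:
    a.connect(agents)

reply = agents[0].request("reviewer", "check this diff")
```

Losing any single agent leaves the rest fully connected, which is the redundancy the pattern is chosen for.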

Deploy Agentic AI to Your Organization

Set up TigrimOS as a self-hosted web application. Give every team autonomous AI agents — from business ops to R&D — all on your own infrastructure.

Software Development

Agent teams that code, review, test, and deploy in parallel. Claude Code + Codex CLI as autonomous coders inside your swarm.

Research & Data Science

Run experiments, analyze data, and synthesize literature with full Python/numpy/pandas. Sensitive data never leaves your servers.

Business & Management

Automate reports, process documents, and run strategic analysis. AI agents handle routine work so your team focuses on decisions.

Tech Company & Enterprise

Full data sovereignty, air-gapped deployments, and org-wide skill sharing. MIT licensed — no vendor lock-in, no per-seat fees.

1. Download & launch — sandbox provisions automatically
2. Connect AI models — cloud APIs, local LLMs, or both
3. Design agent teams — visual editor, YAML export, ship faster

Secure by Design

Every AI operation runs inside an isolated Ubuntu sandbox. Your host system is protected by default.

Full Sandbox Isolation

macOS: Virtualization.framework VM. Windows: WSL2. AI-generated code cannot reach your host system.

Invisible by Default

Host files hidden from AI. Share only what you choose — read-only by default.

NAT Networking

Sandbox isolated from host network. No unexpected outbound connections.

No Docker Required

Native OS-level virtualization. Lighter, faster, zero Docker dependency.

Download TigrimOS

Free and open source. MIT licensed. No account required.

macOS

Apple Silicon & Intel

  • macOS 13.0+ (Ventura)
  • 4 GB RAM, ~5 GB disk
  • Virtualization.framework sandbox

Windows

Windows 10/11

  • Windows 10 v2004+
  • 4 GB RAM, ~5 GB disk
  • WSL2 sandbox
Download for Windows

Build from Source

MIT Licensed

  • Full source code (Swift + Node.js)
  • Community contributions welcome
View on GitHub
Build from source
# Clone the repo
git clone https://github.com/Sompote/Tigrimos.git
cd Tigrimos

# Install qemu (macOS)
brew install qemu

# Build
swift build -c release
./Scripts/build.sh silicon  # or: intel, all

Stop Sending Your Data to the Cloud

Deploy autonomous AI agent teams that run entirely on your infrastructure. Free, open source, and secure by design.