2026-03-26
OpenClaw + NemoClaw: Enterprise AI Deployment for Indian Businesses
A technical deep-dive into deploying OpenClaw and NemoClaw as a secure, on-prem AI workforce for Indian enterprises — architecture, integration patterns, security policy, and the Agentex Sprint model.

The Problem with AI Deployment in Indian Enterprises
Indian enterprises are not short of interest in AI. Every CTO, COO, and IT Head we speak to has attended AI briefings, run a GenAI pilot, or evaluated one of the large cloud AI platforms. The bottleneck is not awareness — it is deployment. Specifically: how do you deploy AI agents that act inside your company's real systems, on your own infrastructure, without your data leaving your security perimeter, with audit trails your compliance team will accept?
Cloud-hosted AI platforms cannot answer that question for sectors governed by DPDP, RBI, IRDAI, or HIPAA-adjacent frameworks. On-prem LLM deployments without a proper agent runtime produce isolated scripts, not AI employees. Custom-built agent frameworks collapse under the maintenance burden before they reach production.
OpenClaw and NemoClaw together solve this problem. This post explains how — technically, not conceptually.
What OpenClaw Is (and Is Not)
OpenClaw is an open-source AI agent gateway — a self-hosted Linux daemon that routes enterprise messaging channels to autonomous AI agents. It is MIT-licensed, actively maintained, and engineered for production enterprise workloads.
It is **not** a chatbot builder. It is not a workflow automation tool (no drag-and-drop canvas, no trigger-action pairs). It is not a SaaS platform where your data transits a vendor's cloud. OpenClaw is the runtime layer that your AI employees live inside — connecting to your tools, acting on your data, on your server.
The core unit is the **Gateway**: a single long-lived process that owns your company's messaging surfaces. Your WhatsApp business number, your Telegram bot, your Slack workspace, your email inbox — all of these route through the Gateway. Inside the Gateway, individual AI agents are bound to specific channels, personas, and tool permissions. The Gateway orchestrates all of them.
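The binding model described above can be sketched in a few lines of Python. To be clear, this is an illustrative toy, not OpenClaw's actual API — the `Gateway`, `Agent`, `bind`, and `route` names are assumptions made for the sketch. What it shows is the shape of the architecture: one long-lived process owns all channels, and each channel is bound to exactly one agent with its own scoped tool set.

```python
# Illustrative sketch of the Gateway's channel-to-agent binding.
# Names here are hypothetical, not OpenClaw's real interface.
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    tools: set = field(default_factory=set)  # scoped tool permissions per agent

    def handle(self, message: str) -> str:
        return f"[{self.name}] handled: {message}"

@dataclass
class Gateway:
    bindings: dict = field(default_factory=dict)  # channel -> Agent

    def bind(self, channel: str, agent: Agent):
        self.bindings[channel] = agent

    def route(self, channel: str, message: str) -> str:
        agent = self.bindings.get(channel)
        if agent is None:
            # unbound channels are an error, not a silent drop
            raise KeyError(f"no agent bound to channel {channel!r}")
        return agent.handle(message)

gw = Gateway()
gw.bind("telegram:ops", Agent("qa-employee", {"github", "jira"}))
gw.bind("whatsapp:support", Agent("support-employee", {"crm"}))
print(gw.route("telegram:ops", "QA this PR"))
```

The point of the single-process design is that routing, persona binding, and tool scoping live in one auditable place rather than being scattered across per-channel bots.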
How OpenClaw Handles Multi-Agent Coordination
Enterprise workflows are rarely single-agent. A QA workflow might involve an agent that reads Jira tickets, a second agent that writes test cases, and a third that executes the test suite and files the results. OpenClaw handles this through its `sessions_spawn` tool: any running agent can spawn a sub-agent, pass context and instructions, and wait for results — all within the same auditable session tree.
This is how Agentex builds multi-step AI employees that mirror real enterprise workflows: an AI QA Employee that writes tests, runs the suite, and opens a PR is not one agent with three prompts — it is an orchestrator agent coordinating three specialized sub-agents, each with scoped tool access and its own audit log.
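A simplified sketch of that orchestrator pattern follows. The `sessions_spawn` name comes from OpenClaw itself; everything else here — the synchronous call semantics, the audit-log shape, the agent registry — is a simplifying assumption made for illustration.

```python
# Toy version of the orchestrator-plus-sub-agents pattern. Only the
# sessions_spawn name is from OpenClaw; the mechanics are assumed.
import uuid

AUDIT_LOG = []  # every spawn and result lands in one auditable session tree

AGENTS = {
    "test-writer": lambda task: f"tests for {task}",
    "test-runner": lambda task: f"ran {task}: 12 passed",
    "reporter":    lambda task: f"filed report: {task}",
}

def sessions_spawn(parent_id, role, task):
    """Spawn a sub-agent session under a parent and wait for its result."""
    child_id = str(uuid.uuid4())[:8]
    AUDIT_LOG.append(("spawn", parent_id, child_id, role, task))
    result = AGENTS[role](task)  # synchronous for the sketch
    AUDIT_LOG.append(("result", child_id, result))
    return result

def qa_orchestrator(pr_link):
    root = "root-session"
    tests = sessions_spawn(root, "test-writer", pr_link)
    run = sessions_spawn(root, "test-runner", tests)
    return sessions_spawn(root, "reporter", run)

print(qa_orchestrator("PR#42"))
```

Each sub-agent gets only the context passed to it, and every spawn and result is recorded, which is what makes the session tree auditable end to end.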
What NemoClaw Adds: The Security Layer
NemoClaw is NVIDIA's open-source reference stack that runs OpenClaw inside **OpenShell sandboxes** — its policy-governed execution environment. Released in alpha on March 16, 2026, it fundamentally changes the security posture of enterprise AI deployment.
Without NemoClaw, OpenClaw sandboxing relies on Docker containers or SSH isolation — which provide process-level separation but do not give compliance teams a declarative, auditable security policy. NemoClaw changes this.
OpenShell: Policy-Governed AI Execution
OpenShell is the sandbox runtime inside NemoClaw. Every AI agent session runs inside an OpenShell instance governed by policy files that define:
**Filesystem access** — which directories the agent can read, which it can write, which are forbidden. An AI Finance Ops Employee can write to `/workspace/invoices` and cannot touch `/workspace/hr-records`. This is enforced at the sandbox level — the agent process physically cannot access paths not in its policy.
**Network egress** — which hosts and ports the agent can connect to. An AI QA Employee can reach your GitHub API, your Jira instance, and your test runner — and nothing else. Egress to public internet endpoints outside the allowlist is blocked at the OpenShell network layer.
**Inference routing** — NemoClaw includes NVIDIA Nemotron open-source models, enabling inference to run fully on-prem. No API call to OpenAI, Anthropic, or any cloud LLM provider is required. Model calls stay inside the client's server. Tokens never leave the data center.
**Per-agent policy files** — each AI employee has its own NemoClaw policy file. The QA agent's policy is different from the Finance agent's policy, which is different from the Support agent's policy. There is no single "AI system" with blanket permissions — each role has exactly the access it needs to do its job.
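The deny-by-default logic behind those policies can be illustrated with a small sketch. The policy schema and function names below are assumptions made for this post, not NemoClaw's actual file format — the real enforcement happens at the sandbox layer, not in application code — but the decision rule is the same: a target is permitted only if it falls under an allowlisted root.

```python
# Hypothetical per-agent policy check in the spirit of the OpenShell model.
# Schema and names are illustrative, not NemoClaw's real format.
from pathlib import PurePosixPath

FINANCE_POLICY = {
    "fs_read":  ["/workspace/invoices", "/workspace/po"],
    "fs_write": ["/workspace/invoices"],
    "egress":   ["erp.internal:443"],
}

def _under(path, roots):
    """True if path equals or lies beneath any allowlisted root."""
    p = PurePosixPath(path)
    return any(p == PurePosixPath(r) or PurePosixPath(r) in p.parents for r in roots)

def allowed(policy, action, target):
    """Deny by default: only targets on the allowlist pass."""
    roots = policy.get(action, [])
    if action.startswith("fs_"):
        return _under(target, roots)
    return target in roots

assert allowed(FINANCE_POLICY, "fs_write", "/workspace/invoices/inv-101.pdf")
assert not allowed(FINANCE_POLICY, "fs_read", "/workspace/hr-records/salary.csv")
assert not allowed(FINANCE_POLICY, "egress", "api.openai.com:443")
```

Because each agent carries its own policy file, the Finance agent's allowlist and the QA agent's allowlist never have to be reconciled into one blanket permission set.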
For Indian enterprises operating under DPDP, this is the architecture that makes compliance tractable. Data does not leave the server. Inference happens on-prem. Every action is logged. Every policy is declarative and auditable.
The Agentex Deployment Architecture
Agentex uses OpenClaw + NemoClaw as the foundation for every client deployment. Here is what the architecture looks like in practice for a mid-market Indian enterprise.
Infrastructure Layer
We deploy on the client's own server — GCP (Cloud Run or Compute Engine), AWS (ECS or EC2), Azure, or on-prem Linux hardware. The server requirement is modest: a 4-core machine with 16GB RAM handles 10–15 concurrent AI employee sessions. A GPU is optional for Nemotron inference but accelerates it significantly; CPU inference is adequate for workloads that are not latency-sensitive.
The OpenClaw Gateway daemon runs as a systemd service. NemoClaw's OpenShell is configured as the sandbox backend. Each AI employee's policy file is stored in the NemoClaw config directory on the same server.
Channel Integration Layer
OpenClaw's channel plugins connect the Gateway to the client's enterprise messaging surfaces:
- **WhatsApp Business Cloud API** — for customer-facing and external stakeholder workflows. Requires Meta Business verification. Messages route through Meta's Cloud API to OpenClaw's webhook endpoint on the client's server.
- **Telegram Bot API** — for internal ops workflows. No Meta approval required. Bot tokens route directly to the Gateway. Preferred for initial deployment because it is faster to set up.
- **Email (SMTP/IMAP + Resend)** — for email-based workflows. The AI employee monitors a designated inbox, processes inbound emails, and sends outbound via a configured sender domain.
- **Slack / Microsoft Teams** — for enterprise team collaboration surfaces. Configured via OAuth app registration in the client's workspace.
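Whatever the channel, the Gateway's first job is normalizing each provider's webhook payload into one internal event shape before routing. The sketch below is a hedged illustration: the Telegram and WhatsApp Cloud API payload shapes shown reflect those providers' documented webhook formats, but the `normalize` function and internal event schema are assumptions, not OpenClaw's actual code.

```python
# Illustrative normalization of inbound channel payloads into one event
# shape. The internal schema here is assumed, not OpenClaw's real one.
def normalize(channel, payload):
    if channel == "telegram":
        # Telegram Bot API update: message.from.id, message.text
        msg = payload["message"]
        return {"channel": "telegram", "sender": msg["from"]["id"], "text": msg["text"]}
    if channel == "whatsapp":
        # WhatsApp Business Cloud API webhook: nested entry/changes/value
        m = payload["entry"][0]["changes"][0]["value"]["messages"][0]
        return {"channel": "whatsapp", "sender": m["from"], "text": m["text"]["body"]}
    raise ValueError(f"unsupported channel: {channel}")

tg_payload = {"message": {"from": {"id": 42}, "text": "QA this PR"}}
print(normalize("telegram", tg_payload))
```

Normalizing at the edge means agents never see provider-specific payloads, so adding a new channel is a plugin change rather than an agent change.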
Integration Skills Layer
OpenClaw's Skills system allows agents to call external enterprise systems through pre-built integration modules. Agentex authors Skills for each client's specific tool stack:
- **Jira / Linear** — read tickets, update status, create issues, add comments
- **GitHub / GitLab** — read repos, open PRs, post review comments, trigger CI
- **ERP systems** (SAP, Oracle, Tally) — via REST API or browser automation for systems without APIs
- **CRM systems** (Salesforce, Zoho, HubSpot) — read/write deal and contact records
- **HRMS** (Darwinbox, Keka, Zoho People) — process leave requests, onboarding triggers, headcount queries
- **Internal databases** (Supabase, PostgreSQL, MySQL) — read/write via parameterized queries with schema-scoped access controls
Each Skill is scoped to the AI employee that needs it. The QA agent has GitHub and Jira Skills. The Finance agent has ERP and accounting platform Skills. Skills are not shared across agents unless the role explicitly requires it.
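To make the "parameterized queries with schema-scoped access" point concrete, here is a minimal sketch of a database Skill. The Skill interface is hypothetical (OpenClaw's actual Skill API may differ); what it demonstrates is the pattern: table names are validated against an explicit per-agent allowlist, and values are always bound as parameters, never interpolated into SQL.

```python
# Hypothetical schema-scoped database Skill. The interface is an
# assumption for illustration; the scoping pattern is the point.
import sqlite3

class DatabaseSkill:
    def __init__(self, conn, allowed_tables):
        self.conn = conn
        self.allowed = set(allowed_tables)

    def select(self, table, column, value):
        if table not in self.allowed:
            raise PermissionError(f"table {table!r} not in this agent's scope")
        if not column.isidentifier():
            raise ValueError(f"invalid column name: {column!r}")
        # identifiers checked above; the value is parameterized, never interpolated
        cur = self.conn.execute(f"SELECT * FROM {table} WHERE {column} = ?", (value,))
        return cur.fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE invoices (id INTEGER, vendor TEXT)")
conn.execute("INSERT INTO invoices VALUES (1, 'Acme')")
skill = DatabaseSkill(conn, allowed_tables={"invoices"})
print(skill.select("invoices", "vendor", "Acme"))
```

A Finance agent instantiated with `allowed_tables={"invoices"}` simply cannot reach an HR table, even if a prompt asks it to — the scope lives in the Skill, not in the model's goodwill.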
Role-Defining an AI Employee: The Three-File Model
Every AI employee deployed on OpenClaw is defined by three files that Agentex authors as part of the Sprint:
SOUL.md — Identity and Boundaries
SOUL.md defines who the AI employee is: its role title, its personality, its decision-making framework, and — critically — its hard stops. What it handles autonomously. What it always escalates to a human. What it never does under any instruction.
A QA AI Employee's SOUL.md specifies: it runs test suites autonomously, it files bug reports autonomously, it assigns severity tags autonomously — but it never marks a production release as cleared without a named human sign-off. These boundaries are not prompts asking the agent to behave well. They are structural constraints written into the agent's operating definition that it cannot reason around.
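For a sense of what that looks like on disk, here is a hypothetical excerpt of a QA SOUL.md. This is an illustrative mock-up, not a real Agentex deliverable, but it shows the three-tier boundary structure described above.

```markdown
# SOUL.md — AI QA Employee (illustrative excerpt)

## Handles autonomously
- Run test suites against staging
- File bug reports with severity tags

## Always escalates
- Any failure touching payment or auth modules

## Never does
- Mark a production release as cleared without a named human sign-off
```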
AGENTS.md — Workflow and Orchestration
AGENTS.md defines how the AI employee executes its role: the step-by-step process, escalation conditions, sub-agent coordination patterns, tool use sequence, and output format for each action type. This is what makes the agent predictable — not just capable. A capability without a defined workflow produces inconsistent output. AGENTS.md is the workflow.
TOOLS.md — Environment and Access
TOOLS.md maps the AI employee's environment: which enterprise systems it connects to, which credentials it uses, which channels it operates in, which specific endpoints and schemas it works with. It is the context that makes a general-purpose agent behave like someone who actually knows your company's tools.
Enterprise Integration Patterns: Three Deployment Examples
Pattern 1: AI QA Employee (IT Services / SaaS)
The AI QA Employee listens on a Telegram channel shared with the engineering team. When a developer posts "QA this PR: [link]", the agent reads the PR diff on GitHub, identifies the changed modules, generates test cases based on the existing test suite structure, runs the suite against a staging environment, and posts a structured QA report to the same Telegram channel — with pass/fail counts, failed test details, and a severity assessment. If critical tests fail, it opens a GitHub issue automatically and tags the PR author.
Time per QA cycle before deployment: 4–6 hours of engineering time. After: 20 minutes of AI execution plus human review of the report.
Pattern 2: AI Finance Ops Employee (BFSI / Enterprise)
The AI Finance Ops Employee monitors a designated email inbox for vendor invoices. On receipt, it extracts the invoice metadata (vendor, amount, line items, GST number, due date), matches against the purchase order in the ERP system, flags mismatches for human review, and routes matched invoices for approval via a Telegram message to the finance manager. On approval, it updates the ERP record. Every action is logged with a timestamp and the document source.
This workflow typically handles 80–120 invoices per month at a mid-market company. The Finance Ops Employee processes the extraction, matching, and routing autonomously — the finance manager only sees the edge cases that require judgment.
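The match-or-flag step at the heart of this workflow is simple to sketch. The field names and tolerance below are illustrative assumptions, not the production matching rules, but the shape is representative: matched invoices flow through, and anything ambiguous is flagged with explicit reasons for the finance manager.

```python
# Sketch of the invoice-to-PO matching step. Thresholds and field
# names are illustrative assumptions, not the production rules.
def match_invoice(invoice, purchase_order, amount_tolerance=0.01):
    """Return ('matched', []) or ('flagged', reasons) for human review."""
    reasons = []
    if invoice["gst_number"] != purchase_order["gst_number"]:
        reasons.append("GST number mismatch")
    if abs(invoice["amount"] - purchase_order["amount"]) > amount_tolerance * purchase_order["amount"]:
        reasons.append("amount outside tolerance")
    if invoice["vendor"] != purchase_order["vendor"]:
        reasons.append("vendor mismatch")
    return ("matched", []) if not reasons else ("flagged", reasons)

po = {"vendor": "Acme Pvt Ltd", "amount": 118000.0, "gst_number": "27AAACA1234A1Z5"}
inv_ok = {"vendor": "Acme Pvt Ltd", "amount": 118000.0, "gst_number": "27AAACA1234A1Z5"}
inv_bad = {"vendor": "Acme Pvt Ltd", "amount": 130000.0, "gst_number": "27AAACA1234A1Z5"}
print(match_invoice(inv_ok, po))
print(match_invoice(inv_bad, po))
```

Returning explicit reasons rather than a bare pass/fail is what makes the human review step fast: the finance manager reads why an invoice was flagged, not the whole document.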
Pattern 3: AI Customer Support Employee (D2C / E-Commerce)
The AI Customer Support Employee handles WhatsApp messages from customers. It is trained on the company's product catalog, return policy, and FAQ. For L1 queries (order status, return initiation, product questions), it resolves without escalation. For L2 queries (exceptions, complaints, payment disputes), it prepares a structured case summary and routes to a human agent via Telegram with full conversation context. The human sees the AI's proposed resolution and can approve it in one tap.
Resolution rate on L1 support for Indian D2C brands using this pattern: typically 55–70% without human intervention.
The Agentex Sprint: From Zero to Live AI Employee in 2 Weeks
Agentex delivers each AI employee through a fixed 2-week Sprint. Here is the technical sequence:
**Week 1:** Discovery and configuration. We map the target workflow, identify the integration points, author the SOUL.md / AGENTS.md / TOOLS.md files, configure OpenClaw and NemoClaw on the client's server, and set up channel bindings. By end of Week 1, a staging AI employee is running against test data in your environment.
**Week 2:** Integration and QA. We connect the real enterprise systems (live Jira, live GitHub, live ERP), run the AI employee through 20–30 real workflow instances with human review, tune the agent's behavior based on edge cases surfaced, and configure the NemoClaw policy files to match the client's security requirements. By end of Week 2, the AI employee is in production.
Sprint pricing: ₹1.5L–₹2L fixed fee. Post-Sprint retainer: ₹50,000–₹1,50,000/month depending on number of agents and integration complexity. AI token costs are billed at provider cost with zero markup.
Why "On-Prem" Matters More Than "AI" for Indian Enterprise Buyers
The CTO at an Indian NBFC or a healthcare platform is not primarily asking "is this AI good?" They are asking "where does my data go?" and "what does my compliance team need in order to sign off on this?"
OpenClaw + NemoClaw is the first AI agent deployment stack that can answer those questions with architectural evidence rather than vendor assurances:
- Data does not leave the client's server — inference runs on-prem via Nemotron
- Every agent action is logged with a full audit trail
- Every agent's access is defined by a declarative policy file, not application-level trust
- The stack is open source — the client (or their security team) can read every line of the runtime
For Indian enterprises navigating DPDP compliance, this is not a feature — it is the prerequisite for deployment.
Getting Started: What You Need to Evaluate OpenClaw + NemoClaw
If you are evaluating this stack for enterprise deployment, the relevant technical questions are:
1. What server environment can you provide? (GCP/AWS/Azure/on-prem — minimum 4 vCPU, 16GB RAM, Linux)
2. What are the messaging channels for the first AI employee? (WhatsApp / Telegram / email / Slack)
3. What enterprise systems does the first workflow touch? (Jira, GitHub, ERP, CRM — we need API access or browser automation access)
4. What are the human approval boundaries for the first workflow? (What decisions must always route to a person?)
5. What does your security team need to approve the deployment? (We provide NemoClaw policy files, OpenClaw audit logs, and full stack documentation)
Agentex handles the rest. We have delivered this deployment dozens of times for Indian enterprises across IT services, BFSI, healthcare operations, and SaaS. The technical risk is known and managed. The open question is which workflow you start with.
Read how the 2-week Sprint model works, then book a Sprint discovery call at agentex.in/book-demo. We'll scope your first AI employee deployment in one conversation and give you a fixed-cost proposal within 48 hours.
Ready to deploy?
Book an AI Deployment Sprint — one workflow, live in 2 weeks.