Prerequisites
- A Loomal API key (free at console.loomal.ai)
- Python 3.10+ or Node.js 20+
- openai-agents installed (Python) or @openai/agents (TypeScript)
- An OpenAI API key
OpenAI's Agents SDK is the recommended way to build tool-using agents on top of GPT models. It wraps the function-calling loop, handles multi-turn tool execution, and accepts MCP servers as a tool source so you don't have to write Python wrappers around every external API.
Loomal's MCP server fits directly into this model. Register it as an MCPServerStdio (Python) or MCPServer (TypeScript), pass it to the Agent constructor, and every Loomal primitive becomes available to the model — mail, vault, TOTP, and identity operations.
1. Create an identity and grab your key
Provision a new identity at console.loomal.ai. You need exactly two things from it: the API key (loid-...) and the email address (e.g. agent-x8k2m@loomal.ai); everything else is handled through the SDK.
Don't reuse a single identity across multiple agents in production. Each agent should get its own — it makes revocation precise and keeps audit logs separable.
```shell
export LOOMAL_API_KEY="loid-your-api-key"
export OPENAI_API_KEY="sk-..."
```

2. Install the SDK and MCP server
Install the OpenAI Agents SDK in your language of choice. The Loomal MCP server is launched on demand via npx, so you only need Node.js available on the path.
```shell
# Python
pip install openai-agents

# or TypeScript
npm install @openai/agents
```

3. Register Loomal as an MCP server
The SDK launches the MCP server as a subprocess and discovers tools at runtime. You don't enumerate the tool list — every primitive Loomal exposes becomes available automatically.
Use cache_tools_list=True if you're starting many agents quickly; otherwise the SDK will probe the server on each agent construction, which adds a few hundred milliseconds.
```python
import os, asyncio

from agents import Agent, Runner
from agents.mcp import MCPServerStdio

async def main():
    async with MCPServerStdio(
        params={
            "command": "npx",
            "args": ["-y", "@loomal/mcp"],
            "env": {"LOOMAL_API_KEY": os.environ["LOOMAL_API_KEY"]},
        },
        cache_tools_list=True,
    ) as loomal:
        agent = Agent(
            name="sales-agent",
            instructions="You handle sales follow-ups via email. Always reply in-thread.",
            mcp_servers=[loomal],
        )
        result = await Runner.run(
            agent,
            "Send a thank-you to alice@example.com for today's demo and propose a follow-up Tuesday.",
        )
        print(result.final_output)

asyncio.run(main())
```

4. Run an inbox-monitoring loop
For agents that monitor an inbox, the typical pattern is a long-lived loop: poll mail.list_messages with labels=unread, hand each thread to the agent, let it reply or label, and repeat. The MCP server is reused across iterations — don't tear it down each time.
```python
import os, asyncio

from agents import Agent, Runner
from agents.mcp import MCPServerStdio

async def main():
    async with MCPServerStdio(
        params={
            "command": "npx",
            "args": ["-y", "@loomal/mcp"],
            "env": {"LOOMAL_API_KEY": os.environ["LOOMAL_API_KEY"]},
        },
    ) as loomal:
        agent = Agent(
            name="support",
            instructions=(
                "Check unread mail. Reply to billing/password questions. "
                "Label anything else 'needs-human' so a person can pick it up."
            ),
            mcp_servers=[loomal],
        )
        while True:
            await Runner.run(agent, "Process the next batch of unread support emails.")
            await asyncio.sleep(60)

asyncio.run(main())
```

5. Add vault access for credentials and 2FA
For workflows that touch services behind a login, store the credentials and TOTP secret in the vault once. The agent calls vault.get to retrieve passwords and vault.totp to get the current 2FA code — no human in the loop, no shared secrets in env files.
```python
# Pre-load credentials once (do this from your dev machine, not the agent):
#   curl -X POST https://api.loomal.ai/v0/vault \
#     -H "Authorization: Bearer $LOOMAL_API_KEY" \
#     -d '{"label": "crm-password", "value": "..."}'
#   curl -X POST https://api.loomal.ai/v0/vault \
#     -H "Authorization: Bearer $LOOMAL_API_KEY" \
#     -d '{"label": "crm-totp", "otpauth": "otpauth://totp/..."}'

# Inside the `async with ... as loomal:` block from the earlier examples:
agent = Agent(
    name="crm-bot",
    instructions=(
        "To log in: read 'crm-password' from the vault, get the current code from 'crm-totp', "
        "then submit the login form."
    ),
    mcp_servers=[loomal],
)
```

Things to watch out for
Tool calls show up in the trace
If you have OpenAI tracing enabled, every Loomal MCP call appears as a tool span. This is useful for debugging but means email subjects and labels are visible in your traces. Avoid logging message bodies if you're concerned about retention.
Cold-start latency
The npx invocation takes ~1–2 seconds the first time per host. For latency-sensitive workloads, install @loomal/mcp globally and reference its absolute path so npm doesn't fetch on every cold start.
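One way to wire that up is a small helper that prefers a preinstalled binary and falls back to npx. The `loomal-mcp` bin name here is an assumption — check what @loomal/mcp actually installs with `npm install -g`:

```python
import shutil

def loomal_command() -> dict:
    """Build MCPServerStdio params, preferring a globally installed binary.

    After `npm install -g @loomal/mcp`, the binary (name assumed to be
    `loomal-mcp`) is on the PATH and npx never has to fetch anything.
    """
    binary = shutil.which("loomal-mcp")
    if binary:
        return {"command": binary, "args": []}
    # Fall back to npx, which may fetch the package on a cold start.
    return {"command": "npx", "args": ["-y", "@loomal/mcp"]}
```

Then pass `params={**loomal_command(), "env": {...}}` to `MCPServerStdio` instead of hard-coding the npx invocation.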
FAQ
Do I need to define each Loomal tool as a function tool?
No. The MCP server advertises its full tool list to the SDK at connection time. The model sees them as native function-call options. You only write code if you want to wrap an MCP tool with extra logic (validation, redaction).
Can multiple agents share one MCP server connection?
Yes — instantiate one MCPServerStdio and pass it to several Agent objects. The server itself is stateless from the agent's perspective; concurrent calls are fine.
What about the hosted MCP feature?
If you prefer the hosted MCP transport, point it at Loomal's HTTP MCP endpoint instead of stdio. The tool surface is identical; only the transport changes.
Loomal primitives used
- mail.send
- mail.list_messages
- mail.reply
- vault.get
- vault.totp
Last updated: 2026-04-14 · See also: AutoGen, Claude Desktop, CrewAI