Prerequisites
- Loomal API key (free at console.loomal.ai)
- Python 3.10+
- pydantic-ai installed
- An LLM provider key
Pydantic AI was built to bring Pydantic's validation guarantees to LLM tool calling. Tool definitions become typed Python functions; arguments are validated before the function runs; results are validated on the way back.
MCP slots into this model neatly. Pydantic AI consumes MCP servers via MCPServerStdio, automatically generating typed wrappers from the server's advertised schemas. Loomal's MCP server publishes complete JSON Schemas for every primitive — recipients arrays, label lists, message bodies — so the agent can't make a malformed call by accident.
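As a sketch of what that schema-backed validation buys you, here is the kind of check Pydantic performs before a tool call ever leaves the process. The `SendArgs` field names below are an assumption about `mail.send`'s argument shape for illustration; the real schema is whatever the MCP server advertises.

```python
from pydantic import BaseModel, ValidationError

class SendArgs(BaseModel):
    # Assumed shape of mail.send's arguments; in practice Pydantic AI
    # generates the equivalent model from the server's JSON Schema.
    recipients: list[str]
    subject: str
    body: str

# A bare string where a list is required fails validation before any
# network call happens; the malformed call never reaches the server.
try:
    SendArgs.model_validate(
        {"recipients": "alice@example.com", "subject": "Hi", "body": "Thanks!"}
    )
    reached_server = True
except ValidationError:
    reached_server = False
```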
1. Provision an identity and install dependencies
Set up a Loomal identity. Install pydantic-ai with the MCP extra so MCPServerStdio is available.
export LOOMAL_API_KEY="loid-your-api-key"
export OPENAI_API_KEY="sk-..."
pip install "pydantic-ai-slim[openai,mcp]"

2. Register Loomal as an MCP server
Pass the MCP server in the Agent's mcp_servers list. Pydantic AI launches the subprocess on the first run and reuses it across calls within the same agent context.
Use async with agent.run_mcp_servers(): around your invocations to ensure the server is properly started and torn down.
import asyncio
import os

from pydantic_ai import Agent
from pydantic_ai.mcp import MCPServerStdio

loomal = MCPServerStdio(
    command="npx",
    args=["-y", "@loomal/mcp"],
    env={"LOOMAL_API_KEY": os.environ["LOOMAL_API_KEY"]},
)

agent = Agent(
    "openai:gpt-4o-mini",
    mcp_servers=[loomal],
    system_prompt="You handle email replies. Stay in-thread by default.",
)

async def main():
    async with agent.run_mcp_servers():
        result = await agent.run(
            "Send a thank-you to alice@example.com for the demo today."
        )
        print(result.output)

asyncio.run(main())

3. Read inbox and reply with typed results
Tool results come back as Python dicts conforming to the MCP server's response schema. If you want even stronger typing, define Pydantic models that mirror the server's response shape and parse on the way out.
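A minimal sketch of that parse-on-the-way-out pattern. The field names in `MessageSummary` are assumptions about `mail.list_messages`' response shape; mirror the server's actual schema in real code.

```python
from pydantic import BaseModel, Field

class MessageSummary(BaseModel):
    # Assumed response fields; check the MCP server's advertised
    # schema and adjust these to match.
    id: str
    thread_id: str
    sender: str = Field(alias="from")  # "from" is a Python keyword, so alias it
    subject: str

# A dict as it might come back from the tool call:
raw = {
    "id": "m_123",
    "thread_id": "t_456",
    "from": "alice@example.com",
    "subject": "Demo follow-up",
}
msg = MessageSummary.model_validate(raw)
```

Validation failures raise `pydantic.ValidationError` at the boundary, so a schema drift on the server side surfaces immediately instead of propagating bad dicts downstream.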
For a simple inbox loop, the agent can chain mail.list_messages → mail.get_thread → mail.reply on its own based on the prompt.
agent = Agent(
    "openai:gpt-4o-mini",
    mcp_servers=[loomal],
    system_prompt=(
        "You triage support email. Reply directly to billing or password questions. "
        "Label anything else 'needs-human' and move on."
    ),
)

async def loop():
    async with agent.run_mcp_servers():
        while True:
            await agent.run("Process unread support email.")
            await asyncio.sleep(60)

4. Add output validation with Pydantic
Pydantic AI's signature feature is structured output. Define an output_type and the agent will validate its final answer against your schema. Combine this with email tools and you get an agent that, for example, sends an email and returns a typed receipt your downstream code can rely on.
from pydantic import BaseModel

class SendResult(BaseModel):
    recipient: str
    subject: str
    message_id: str
    thread_id: str

agent = Agent(
    "openai:gpt-4o-mini",
    mcp_servers=[loomal],
    output_type=SendResult,
    system_prompt="Send the email, then return the receipt fields from mail.send.",
)

async def send_typed():
    async with agent.run_mcp_servers():
        result = await agent.run("Email bob@example.com 'Welcome aboard'.")
        # result.output is a SendResult: validated, typed, ready to use
        return result.output

5. Vault and TOTP usage
Storing credentials in the vault means the agent never sees them in source. Pre-load via the REST API; have the agent call vault.get and vault.totp by label.
If you want to constrain the agent to specific labels, validate inside a wrapper tool and forward only valid label names — this gives you allow-listing without changing the MCP server.
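A minimal sketch of that allow-listing check, in plain Python with hypothetical label names. In practice you would call it inside your wrapper tool before forwarding the label to vault.get or vault.totp.

```python
# Hypothetical allow-list; replace with the labels your workflow needs.
ALLOWED_VAULT_LABELS = frozenset({"crm-key", "crm-totp"})

def validate_vault_label(label: str) -> str:
    """Return the label unchanged if allowed; raise otherwise.

    Call this in a wrapper tool before forwarding to the vault
    primitives, so the agent can only touch labels you approve.
    """
    if label not in ALLOWED_VAULT_LABELS:
        raise ValueError(f"vault label {label!r} is not on the allow-list")
    return label
```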
# Pre-load secrets via REST (one-time, from your dev box):
# POST https://api.loomal.ai/v0/vault
# { "label": "crm-key", "value": "sk_live_..." }
# POST https://api.loomal.ai/v0/vault
# { "label": "crm-totp", "otpauth": "otpauth://totp/..." }
agent = Agent(
    "openai:gpt-4o-mini",
    mcp_servers=[loomal],
    system_prompt=(
        "Use vault label 'crm-key' for the API and 'crm-totp' for the second factor. "
        "Never echo secret values back to the user."
    ),
)

Things to watch out for
Output validation can re-prompt
If the agent's final answer fails validation against output_type, Pydantic AI re-prompts with the validation error. This is great for correctness but can cost extra tokens. Keep your output schema tight but achievable.
Async-only for MCP
MCPServerStdio is async. If the rest of your codebase is sync, run agent.run inside an asyncio.run wrapper at the boundary.
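A sketch of that boundary, with a stub coroutine standing in for the real agent call:

```python
import asyncio

async def handle_email(prompt: str) -> str:
    # Stand-in for:
    #   async with agent.run_mcp_servers():
    #       result = await agent.run(prompt)
    await asyncio.sleep(0)
    return f"handled: {prompt}"

def handle_email_sync(prompt: str) -> str:
    # The only async-aware line in an otherwise sync codebase.
    return asyncio.run(handle_email(prompt))
```

Note that asyncio.run creates and closes a fresh event loop per call, so keep this wrapper at the outermost boundary rather than inside a hot loop.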
FAQ
Can I mix Loomal MCP tools with my own @agent.tool functions?
Yes. MCP tools and @agent.tool-decorated functions live in the same tool namespace from the model's perspective. Use this to combine Loomal's email primitives with your own domain tools (CRM lookups, internal APIs).
Does this work with Anthropic or Gemini models?
Yes. Replace 'openai:gpt-4o-mini' with 'anthropic:claude-opus-4-5' or 'google-gla:gemini-2.5-pro'. Tool calling is a model-agnostic capability in Pydantic AI.
How do I pass message-level dependencies into the tools?
Pydantic AI's deps_type covers your custom @agent.tool functions but not MCP tools (which run out-of-process). For request-scoped values, set environment variables or use the vault — the MCP server reads from its own env, which inherits from Python's process env.
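The inheritance chain is plain OS behavior, which a stdlib sketch makes concrete: a child process reads a value set in the parent's environment, the same way the `env=` dict on MCPServerStdio reaches the Loomal subprocess. `LOOMAL_REQUEST_TAG` is a hypothetical variable name.

```python
import os
import subprocess
import sys

# Hypothetical request-scoped value, passed the same way the env= dict
# on MCPServerStdio reaches the MCP server subprocess.
env = {**os.environ, "LOOMAL_REQUEST_TAG": "req-123"}

child = subprocess.run(
    [sys.executable, "-c", "import os; print(os.environ['LOOMAL_REQUEST_TAG'])"],
    env=env,
    capture_output=True,
    text=True,
)
tag_seen_by_child = child.stdout.strip()
```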
Loomal primitives used
- mail.send
- mail.list_messages
- mail.reply
- vault.get
- vault.totp
Last updated: 2026-04-14 · See also: AutoGen, Claude Desktop, CrewAI