Prerequisites
- Loomal API key (free at console.loomal.ai)
- Node.js 20+
- ai SDK 4.2 or later (MCP support)
- An LLM provider key (OpenAI, Anthropic, Google, etc.)
The Vercel AI SDK is the go-to choice for TypeScript apps that need an LLM with tool calling, especially in Next.js. Recent versions added MCP support, which means you can hand the model an MCP server and it discovers tools automatically: no manual zod schemas, no boilerplate.
Loomal's MCP server fits this model. One createMCPClient call gives you the full email + vault + TOTP surface, ready to pass to streamText or generateText. It works in the Node runtime; for edge, run the MCP server in a Node sidecar and connect over HTTP.
1. Create an identity and set environment
Provision an identity at console.loomal.ai. Set the API key in your environment alongside your model provider key.
For Next.js, put both in .env.local for dev and your hosting platform's secret store for prod.
LOOMAL_API_KEY=loid-your-api-key
OPENAI_API_KEY=sk-...
# or ANTHROPIC_API_KEY=sk-ant-...

2. Install dependencies
The MCP client lives in the ai package. Install your model provider package separately.
npm install ai @ai-sdk/openai
# or @ai-sdk/anthropic, @ai-sdk/google

3. Connect Loomal as an MCP client
experimental_createMCPClient launches the MCP server (here, the Loomal server via npx) and returns a client whose .tools() method gives you a typed tool record, exactly the shape streamText and generateText expect.
Always close the client once the request is fully done. With a streamed response, close it in streamText's onFinish callback rather than a finally block: the route handler returns before the stream is consumed, so a finally would kill the subprocess while tool calls are still running. Add a catch that closes and rethrows so an error before streaming starts doesn't leak the subprocess either.
import { experimental_createMCPClient, streamText } from "ai";
import { Experimental_StdioMCPTransport } from "ai/mcp-stdio";
import { openai } from "@ai-sdk/openai";
export async function POST(req: Request) {
const { prompt } = await req.json();
const client = await experimental_createMCPClient({
transport: new Experimental_StdioMCPTransport({
command: "npx",
args: ["-y", "@loomal/mcp"],
env: { LOOMAL_API_KEY: process.env.LOOMAL_API_KEY! },
}),
});
  try {
    const tools = await client.tools();
    const result = streamText({
      model: openai("gpt-4o-mini"),
      tools,
      maxSteps: 8,
      prompt,
      // Close once the stream, including any tool calls, has finished.
      onFinish: async () => { await client.close(); },
    });
    return result.toDataStreamResponse();
  } catch (e) {
    // Nothing has streamed yet, so it is safe to close here.
    await client.close();
    throw e;
  }
}

4. Build a chat interface that can send email
The same MCP setup works in a chat route. The user types a message; the model decides whether to call mail.send, mail.list_messages, vault.totp, etc.; the SDK handles the tool execution loop.
maxSteps caps the number of tool-call rounds in one user turn — set it to whatever the longest reasonable workflow needs (8–10 covers most email tasks).
import { experimental_createMCPClient, streamText, convertToCoreMessages } from "ai";
import { Experimental_StdioMCPTransport } from "ai/mcp-stdio";
import { openai } from "@ai-sdk/openai";
export async function POST(req: Request) {
const { messages } = await req.json();
const client = await experimental_createMCPClient({
transport: new Experimental_StdioMCPTransport({
command: "npx",
args: ["-y", "@loomal/mcp"],
env: { LOOMAL_API_KEY: process.env.LOOMAL_API_KEY! },
}),
});
try {
const result = streamText({
model: openai("gpt-4o-mini"),
system: "You are an email assistant. Reply in-thread by default.",
messages: convertToCoreMessages(messages),
tools: await client.tools(),
maxSteps: 10,
onFinish: async () => { await client.close(); },
});
return result.toDataStreamResponse();
} catch (e) {
await client.close();
throw e;
}
}

5. Edge runtime: run MCP as an HTTP sidecar
Edge runtimes don't allow spawning child processes, so the stdio transport won't work there. The pattern is to run the Loomal MCP server as a small Node service (Cloud Run, Fly machine, container) and connect over HTTP from the edge route. The tool surface is identical.
import { experimental_createMCPClient, streamText } from "ai";
import { openai } from "@ai-sdk/openai";
export const runtime = "edge";
export async function POST(req: Request) {
const client = await experimental_createMCPClient({
transport: { type: "sse", url: process.env.LOOMAL_MCP_URL! },
});
  try {
    const { messages } = await req.json();
    const result = streamText({
      model: openai("gpt-4o-mini"),
      tools: await client.tools(),
      messages,
      // Close after the stream and its tool calls complete.
      onFinish: async () => { await client.close(); },
    });
    return result.toDataStreamResponse();
  } catch (e) {
    await client.close();
    throw e;
  }
}

Things to watch out for
Don't forget client.close()
Each request spawns a subprocess. Failing to close the client leaks Node processes; under load the host runs out of file descriptors within minutes. Close in streamText's onFinish for the success path and in a catch block for errors; a finally that runs before the stream is consumed shuts the client down too early.
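The close-on-every-path discipline can be factored into a small wrapper. Below is a minimal sketch of a generic helper (withClosable is a name invented here, not an SDK export), suited to non-streaming calls such as generateText where closing in finally is safe; streamed responses should still close in onFinish as above.

```typescript
// Hypothetical helper: runs `run` with a freshly opened resource and
// guarantees close() on success and on error. Safe for non-streaming
// use; a streamed response outlives this scope, so don't wrap it here.
type Closable = { close(): Promise<void> | void };

export async function withClosable<C extends Closable, T>(
  open: () => Promise<C>,
  run: (resource: C) => Promise<T>,
): Promise<T> {
  const resource = await open();
  try {
    return await run(resource);
  } finally {
    await resource.close();
  }
}
```

With generateText you would open the Loomal MCP client in open, call the model in run, and let the finally reap the subprocess.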
Stdio doesn't work on edge
Vercel Edge, Cloudflare Workers, and similar runtimes can't spawn child processes. Run the MCP server as an HTTP sidecar and connect over SSE.
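One way to stand up the sidecar, sketched here with the community supergateway bridge (an assumption: any stdio-to-SSE bridge, or a native HTTP mode of the server if one exists, works equally well):

```shell
# Hypothetical sidecar: expose the stdio MCP server over SSE on port 8000.
# Point LOOMAL_MCP_URL at http://<sidecar-host>:8000/sse from the edge route.
LOOMAL_API_KEY=loid-your-api-key \
  npx -y supergateway --stdio "npx -y @loomal/mcp" --port 8000
```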
FAQ
Does this work with the AI SDK's useChat hook on the client?
Yes — the tool execution happens in the route handler, not the client. The hook receives streamed text and tool-call events the same as it would for any tool source.
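As a sketch, assuming the chat route above lives at app/api/chat/route.ts (useChat POSTs to /api/chat by default):

```typescript
"use client";
import { useChat } from "ai/react";

// Minimal chat component; tool execution stays server-side in the route.
export default function Chat() {
  const { messages, input, handleInputChange, handleSubmit } = useChat();
  return (
    <div>
      {messages.map((m) => (
        <div key={m.id}>
          {m.role}: {m.content}
        </div>
      ))}
      <form onSubmit={handleSubmit}>
        <input
          value={input}
          onChange={handleInputChange}
          placeholder="Ask about your inbox"
        />
      </form>
    </div>
  );
}
```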
Can I use Anthropic or Google models?
Yes. Replace openai('gpt-4o-mini') with anthropic('claude-opus-4-5') or google('gemini-2.5-pro'). The MCP tool surface is model-agnostic; any provider that supports tool calling works.
What about streaming tool results back to the client?
streamText emits tool-call and tool-result events as part of the data stream. The frontend (useChat or a custom consumer) sees them in order — useful for showing 'sending email...' UI affordances during the tool call.
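For example, a small helper (invented here, not part of the SDK) can map those events to status text; the shape below mirrors the AI SDK 4.x toolInvocations entries on assistant messages, where each invocation carries a toolName and a state that moves from "call" to "result":

```typescript
// Hypothetical UI helper: derive a status label from a tool invocation.
// Field names follow the AI SDK 4.x `toolInvocations` message format;
// adjust if your SDK version differs.
type ToolInvocation = {
  toolName: string;
  state: "partial-call" | "call" | "result";
};

export function toolStatusLabel(t: ToolInvocation): string {
  if (t.state === "result") return `${t.toolName} done`;
  if (t.toolName === "mail.send") return "sending email...";
  return `running ${t.toolName}...`;
}
```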
Loomal primitives used
- mail.send
- mail.list_messages
- mail.reply
- vault.get
- vault.totp
Last updated: 2026-04-14 · See also: AutoGen, Claude Desktop, CrewAI