Prerequisites
- Loomal API key (free at console.loomal.ai)
- n8n 1.60+ (Cloud or self-hosted)
- An LLM credential set up in n8n (OpenAI, Anthropic, etc.)
n8n's AI Agent node added MCP client support in late 2025. That makes wiring Loomal a five-minute job: point the AI Agent at the @loomal/mcp server, hand it a Loomal API key as a credential, and every email and vault primitive shows up as a tool the agent can call.
For workflows that don't need the AI Agent node — say, a scheduled job that just sends an email when a Postgres row appears — the same primitives are reachable via HTTP Request nodes against api.loomal.ai/v0. Either way, no custom node development.
1. Provision an identity and add it as an n8n credential
Create a Loomal identity at console.loomal.ai. In n8n, go to Credentials → New → Header Auth (or HTTP Custom Auth), name it 'Loomal API', set the header name to Authorization, and set the value to Bearer loid-your-api-key.
If you have multiple workflows, mint multiple identities — one per workflow — so revocation is surgical.
```json
{
  "name": "Loomal API",
  "type": "httpHeaderAuth",
  "data": {
    "name": "Authorization",
    "value": "Bearer loid-your-api-key"
  }
}
```

2. Wire Loomal into the AI Agent node via MCP
In the AI Agent node's tool config, add an MCP Client tool. Point it at the Loomal MCP server using the stdio command. The agent now sees mail.send, mail.list_messages, vault.get, and so on as callable tools — no manual schema work.
```json
{
  "server": {
    "command": "npx",
    "args": ["-y", "@loomal/mcp"],
    "env": {
      "LOOMAL_API_KEY": "={{$credentials.loomalApi.value}}"
    }
  },
  "toolFilter": ["mail.send", "mail.list_messages", "mail.reply", "vault.get", "vault.totp"]
}
```

3. Send mail without the AI Agent node
For non-agent workflows — schedule trigger, database trigger, form trigger — use the HTTP Request node directly. Method POST, URL https://api.loomal.ai/v0/messages/send, Authentication: your Loomal credential, Body: a JSON object with to, subject, and text.
```json
{
  "method": "POST",
  "url": "https://api.loomal.ai/v0/messages/send",
  "authentication": "genericCredentialType",
  "genericAuthType": "httpHeaderAuth",
  "sendBody": true,
  "contentType": "json",
  "jsonBody": "={\n \"to\": [\"{{$json.email}}\"],\n \"subject\": \"{{$json.subject}}\",\n \"text\": \"{{$json.body}}\"\n}"
}
```

4. Receive mail via webhook
Configure an inbound webhook in the Loomal console pointing to your n8n Webhook node URL. When mail arrives at the agent's address, n8n receives a POST with the message payload and you can branch on labels, sender, or subject from there.
Pair this with the AI Agent node downstream to triage and respond — the webhook delivers the message, the agent decides what to do, mail.reply sends the response in-thread.
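The branching step can be sketched as a small routing function over the webhook payload fields Loomal delivers (event, from, subject, labels). This is a standalone illustration of the triage logic, not an actual n8n node; the branch names and the sender-domain rule are hypothetical:

```python
# Sketch of webhook triage logic: route an inbound Loomal message
# based on its labels and sender. Branch names are illustrative.
def triage(payload: dict) -> str:
    """Return a branch name for the downstream workflow."""
    labels = payload.get("labels", [])
    sender = payload.get("from", "")
    if "unread" not in labels:
        return "ignore"       # already handled elsewhere
    if sender.endswith("@example.com"):
        return "agent-reply"  # hand to the AI Agent node to draft an in-thread reply
    return "notify"           # fall back to a human notification

msg = {
    "event": "message.received",
    "from": "alice@example.com",
    "subject": "Re: Demo follow-up",
    "labels": ["unread"],
}
print(triage(msg))  # -> agent-reply
```

In n8n itself the same decision would typically live in a Switch or Code node between the Webhook node and the AI Agent node.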
```json
{
  "event": "message.received",
  "messageId": "<abc123@loomal.ai>",
  "threadId": "thd_xyz",
  "from": "alice@example.com",
  "subject": "Re: Demo follow-up",
  "extractedText": "Sounds great, can we do Tuesday at 2pm?",
  "labels": ["unread"]
}
```

5. Use vault for downstream API credentials
When a workflow needs a credential the AI Agent should fetch on demand (Stripe key, CRM token), store it in the Loomal vault and reference it via vault.get. The model never sees the credential value — only the result of using it.
For TOTP-protected logins, the same pattern works with vault.totp. Pre-load the otpauth secret once via the REST API; have the agent call vault.totp by label whenever a code is required.
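For context, the code a vault.totp call returns is standard RFC 6238 output. A minimal sketch of that computation, assuming the common defaults (SHA-1, 30-second step, 6 digits) — the function name and defaults here are illustrative, not Loomal's actual implementation:

```python
# Illustrative RFC 6238 TOTP computation (SHA-1, 30 s step, 6 digits).
# This is what a vault.totp-style call conceptually does server-side.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at=None, step=30, digits=6):
    key = base64.b32decode(secret_b32.upper())
    counter = int((time.time() if at is None else at) // step)
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890" at t=59 -> "287082"
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", at=59))
```

Keeping the secret in the vault and having the agent request only the six-digit code means the long-lived otpauth secret never enters the model context.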
```json
{
  "method": "POST",
  "url": "https://api.loomal.ai/v0/vault",
  "jsonBody": {
    "label": "stripe-key",
    "value": "sk_live_..."
  }
}
```

Things to watch out for
Self-hosted n8n needs npx
The MCP stdio transport spawns npx, so your n8n container needs Node.js available on PATH. The official n8n image has it; minimal custom images may not.
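If you build a slimmed-down custom image, one way to get npx onto PATH is to install Node.js in the image. A minimal sketch, assuming a hypothetical Alpine-based custom image (the base image name is a placeholder; package names differ on Debian-based images):

```dockerfile
# Sketch: add Node.js (and thus npx) to a minimal Alpine-based custom n8n image.
FROM my-minimal-n8n:latest
USER root
RUN apk add --no-cache nodejs npm
USER node
```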
Per-workflow identities
If you reuse one Loomal API key across ten workflows, you can't tell which workflow sent which email, and revoking the key takes them all down. Mint one per workflow.
FAQ
Do I need the AI Agent node at all?
No. Plenty of useful workflows just send mail on a trigger — those use the HTTP Request node directly. The AI Agent node is for workflows where the LLM decides what to send and when.
Can I use this in n8n Cloud?
Yes. The HTTP Request approach works on Cloud out of the box. The MCP stdio transport requires self-hosted (since Cloud doesn't expose subprocess spawning); for Cloud, use the HTTP MCP transport or call REST directly.
How do I attach files in a sent email?
POST to /v0/messages/send with an attachments array containing filename, contentType, and base64-encoded content. The HTTP Request node's binary data option handles the encoding.
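If you assemble the body outside n8n (or in a Code node), the shape can be sketched as follows. The field names (filename, contentType, content) come from this guide; the generic contentType default and the helper itself are assumptions to check against the Loomal API reference:

```python
# Sketch of building a /v0/messages/send body with a base64-encoded attachment.
# Field names follow this guide; verify the exact schema against the API docs.
import base64

def build_send_body(to, subject, text, files):
    """files: mapping of filename -> raw bytes."""
    return {
        "to": to,
        "subject": subject,
        "text": text,
        "attachments": [
            {
                "filename": name,
                "contentType": "application/octet-stream",  # assumption: generic default
                "content": base64.b64encode(data).decode("ascii"),
            }
            for name, data in files.items()
        ],
    }

body = build_send_body(["bob@example.com"], "Report", "Attached.", {"report.csv": b"a,b\n1,2\n"})
print(body["attachments"][0]["filename"])  # -> report.csv
```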
Loomal primitives used
- mail.send
- mail.list_messages
- mail.reply
- vault.get
- vault.totp
Last updated: 2026-04-14 · See also: AutoGen, Claude Agent SDK, CrewAI