LOOMAL
LangChain · Python / TypeScript

Email and credentials
for LangChain agents.

LangChain agents need tools, and email is the tool every agent eventually wants. Wire Loomal in through the langchain-mcp-adapters package and your agent gets a real address, an encrypted vault, and TOTP — as standard LangChain tools the executor can route to.

BaseTool integration · Email send / receive · Vault & TOTP · Threaded replies · Works with create_react_agent

Prerequisites

  • A Loomal API key (free at console.loomal.ai)
  • Python 3.10+ or Node.js 20+
  • langchain and langchain-mcp-adapters installed
  • An LLM provider key (OpenAI, Anthropic, etc.)

LangChain's tool system is the connective tissue between an LLM and the outside world. For email specifically, most teams end up writing a custom Tool that wraps SMTP and an IMAP poller — and then maintaining the credential storage, retry logic, and threading by hand.

Loomal collapses that work into an MCP server. Using langchain-mcp-adapters (Python) or @langchain/mcp-adapters (TypeScript), every Loomal primitive becomes a LangChain Tool you can hand to create_react_agent or any other executor.

1. Provision an identity

Create an identity in the Loomal console. The API key (loid-...) authenticates your agent; the email address (e.g. agent-x8k2m@loomal.ai) is what the world sees as the sender.

Set both LOOMAL_API_KEY and your LLM provider key in the environment before running anything.

shell
export LOOMAL_API_KEY="loid-your-api-key"
export OPENAI_API_KEY="sk-..."

2. Install LangChain and the MCP adapter

The MCP adapter package converts MCP tool advertisements into LangChain BaseTool instances at runtime. You don't write any wrapper code; one call returns a list of tools you can pass straight to the executor.

shell
pip install langchain langgraph langchain-openai langchain-mcp-adapters

3. Load Loomal as LangChain tools

MultiServerMCPClient connects to one or more MCP servers and exposes their tools as LangChain Tool objects. The Loomal server runs as a stdio subprocess; pass LOOMAL_API_KEY in env so the subprocess inherits it.

client.get_tools() returns the full list. Filter it if you only want a subset — for example, drop mail.delete_message if your agent should never delete mail.
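One way to express that subset is an allowlist filter. A minimal sketch — the tool names here are assumed to match the Loomal primitives named in this guide, and `ALLOWED` is a hypothetical set you'd tailor per agent:

```python
# Hypothetical allowlist: only the tools this agent is permitted to call.
ALLOWED = {"mail.send", "mail.list_messages", "mail.reply", "mail.update_labels"}

def filter_tools(tools, allowed=ALLOWED):
    """Keep only tools whose .name is on the allowlist.

    Anything not listed (e.g. mail.delete_message) is simply never
    shown to the model.
    """
    return [t for t in tools if t.name in allowed]
```

Apply it right after loading: `tools = filter_tools(await client.get_tools())`.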

agent.py
import os, asyncio
from langchain_mcp_adapters.client import MultiServerMCPClient
from langgraph.prebuilt import create_react_agent
from langchain_openai import ChatOpenAI

async def main():
    client = MultiServerMCPClient({
        "loomal": {
            "command": "npx",
            "args": ["-y", "@loomal/mcp"],
            "env": {"LOOMAL_API_KEY": os.environ["LOOMAL_API_KEY"]},
            "transport": "stdio",
        }
    })
    tools = await client.get_tools()

    agent = create_react_agent(
        ChatOpenAI(model="gpt-4o-mini"),
        tools,
    )
    result = await agent.ainvoke({
        "messages": [("user", "Email alice@example.com a thank-you for today's demo.")]
    })
    print(result["messages"][-1].content)

asyncio.run(main())

4. Build an inbox-watcher agent

For an agent that processes inbound mail, the loop is: list unread messages, pull full thread context, decide on a reply, send it, label the thread. mail.list_messages, mail.reply, and mail.update_labels cover the whole pattern.

watcher.py
agent = create_react_agent(
    ChatOpenAI(model="gpt-4o-mini"),
    tools,
    state_modifier=(
        "You triage support email. Read unread threads. Reply directly to questions about "
        "billing or password resets. Label other messages 'needs-human' and stop."
    ),
)

# Runs inside the async main() from step 3, so `await` is valid here:
while True:
    await agent.ainvoke({"messages": [("user", "Process unread mail.")]})
    await asyncio.sleep(60)
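A fixed 60-second sleep is fine for a demo, but a production watcher usually backs off when a turn fails. One sketch, with the interval logic split into its own function — `base` and `cap` values here are illustrative, not Loomal defaults:

```python
import asyncio

def next_delay(delay, ok, base=60, cap=900):
    """Reset to the base interval on success; double (capped) on failure."""
    return base if ok else min(delay * 2, cap)

async def watch(agent, base=60, cap=900):
    """Poll for unread mail forever, backing off exponentially on errors."""
    delay = base
    while True:
        try:
            await agent.ainvoke({"messages": [("user", "Process unread mail.")]})
            ok = True
        except Exception:
            ok = False  # transient LLM/MCP failure: slow down, don't crash
        delay = next_delay(delay, ok, base, cap)
        await asyncio.sleep(delay)
```

Keeping `next_delay` pure makes the backoff schedule easy to unit-test without running the loop.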

5. Use the vault for downstream credentials

When the agent needs to authenticate to another service, store the credential in the vault and have the agent fetch it on demand. vault.get returns the secret value; vault.totp returns the current 2FA code. The model never sees the underlying TOTP secret, only the six digits.

vault_usage.py
# Pre-load secrets via the REST API, not via the agent:
#   POST https://api.loomal.ai/v0/vault
#     { "label": "stripe-key", "value": "sk_live_..." }
#   POST https://api.loomal.ai/v0/vault
#     { "label": "crm-totp", "otpauth": "otpauth://totp/..." }

# Then in the prompt, tell the agent which labels to use:
result = await agent.ainvoke({
    "messages": [("user",
        "Issue a refund for order #4421 using the Stripe key labeled 'stripe-key'. "
        "Email the customer a confirmation."
    )]
})
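The REST pre-loading step above can be scripted with the standard library. A sketch against the endpoint and payload shapes shown in the comments — the Bearer-token Authorization header is an assumption, not confirmed by this guide:

```python
import json
import urllib.request

LOOMAL_VAULT_URL = "https://api.loomal.ai/v0/vault"

def vault_payload(label, **fields):
    """Build the JSON body for a vault POST: a label plus either
    a 'value' (plain secret) or an 'otpauth' URI (TOTP seed)."""
    return {"label": label, **fields}

def store_secret(api_key, label, **fields):
    """POST a secret to the vault. Auth header format is an assumption."""
    req = urllib.request.Request(
        LOOMAL_VAULT_URL,
        data=json.dumps(vault_payload(label, **fields)).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

Run this once from a trusted environment; the agent itself only ever references secrets by label.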

Things to watch out for

Tool count affects model context

Loading every Loomal tool gives the model 25+ options, which can degrade tool selection for smaller models. Filter client.get_tools() down to the subset this agent actually needs — 4–6 tools is usually enough.

Async only for MCP adapter

langchain-mcp-adapters is async-first. If your existing LangChain code is synchronous, switch to ainvoke and async setup code, or bridge at the call site with asyncio.run.
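If a full async rewrite isn't practical, one option is to run each whole agent turn inside a single event loop. A sketch of that bridge — `run_agent_sync` is a hypothetical helper, not part of any of these packages:

```python
import asyncio

def run_agent_sync(agent, prompt):
    """Bridge a synchronous call site into the async MCP tool stack.

    MCP-backed tools only implement async execution, so the entire
    agent turn must run inside one event loop via asyncio.run.
    """
    async def _turn():
        result = await agent.ainvoke({"messages": [("user", prompt)]})
        return result["messages"][-1].content
    return asyncio.run(_turn())
```

Note this starts a fresh event loop per call, so it can't be invoked from code that is already running inside a loop.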

FAQ

Does this work with the older AgentExecutor?

Yes. client.get_tools() returns standard BaseTool instances, which AgentExecutor accepts. We recommend create_react_agent from LangGraph because it's the maintained path forward, but the tools themselves are framework-agnostic.

Can I swap out the LLM?

Yes. The Loomal tools have nothing to do with the model. Replace ChatOpenAI with ChatAnthropic, ChatVertexAI, or any other LangChain chat model. Tool calling is what matters; any tool-calling model works.

How do I let the agent send email but not delete it?

After client.get_tools(), filter the list: tools = [t for t in tools if 'delete' not in t.name]. Pass the filtered list to create_react_agent. The deletion tools are simply not visible to the model.

Loomal primitives used

mail.send · mail.list_messages · mail.reply · mail.update_labels · vault.get

Ship it.

Free tier, no card. 30 seconds to first email.

Last updated: 2026-04-14 · See also: AutoGen, Claude Agent SDK, Claude Desktop