LOOMAL
LangGraph · Python

Stateful email workflows for LangGraph agents.

LangGraph is the right framework when your agent's job spans hours or days — send an email, wait for a reply, branch on the response. Loomal provides the email primitives as MCP tools that drop into LangGraph nodes, with the agent's own inbox as the durable signal between steps.

Long-running graphs · Wait-for-reply nodes · Stateful checkpointing · MCP tool integration · Inbox as signal source

Prerequisites

  • A Loomal API key (free at console.loomal.ai)
  • Python 3.10+
  • langgraph and langchain-mcp-adapters installed
  • An LLM provider key

LangGraph models agents as graphs with persistent state, which is the right shape for any workflow that has to wait. Email-driven workflows fall in this category by default — you send a message and you don't know when (or whether) a reply will arrive. LangGraph's checkpointer can suspend the graph between sends and resume when new mail lands.

Loomal's MCP server provides the email primitives. The agent's own inbox becomes the signal that wakes the graph back up: a periodic node calls mail.list_messages, the graph routes based on what arrived, and the conversation continues.
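The wake-up check itself is plain list filtering. A minimal sketch of the signal logic, assuming mail.list_messages returns message dicts with thread_id and labels fields — that shape is an assumption for illustration, not a documented schema:

```python
def threads_with_new_mail(inbox: list[dict], active_threads: set[str]) -> set[str]:
    """Return the active thread IDs that have at least one unread message.

    `inbox` stands in for a mail.list_messages result; the `thread_id`
    and `labels` keys are illustrative.
    """
    return {
        msg["thread_id"]
        for msg in inbox
        if msg["thread_id"] in active_threads and "unread" in msg.get("labels", [])
    }

inbox = [
    {"thread_id": "deal-4421", "labels": ["unread"]},
    {"thread_id": "deal-9001", "labels": []},
]
print(threads_with_new_mail(inbox, {"deal-4421", "deal-9001"}))  # {'deal-4421'}
```

Any thread that shows up in this set is a graph the checkpointer should resume; everything else stays suspended.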

1. Provision an identity and install dependencies

Set up a Loomal identity and install LangGraph plus the MCP adapter. The same MCP server you'd use with raw LangChain works here — LangGraph's tool nodes accept LangChain BaseTool objects.

shell
export LOOMAL_API_KEY="loid-your-api-key"
pip install langgraph langchain-openai langchain-mcp-adapters

2. Load Loomal tools and define state

Define a state shape that includes the conversation thread ID and the agent's status. Thread ID is the key: it's how you correlate outgoing sends with incoming replies, and it's what lets the graph resume the right conversation when mail arrives.

graph.py
import os
from typing import TypedDict, Annotated
from langchain_mcp_adapters.client import MultiServerMCPClient
from langgraph.graph import StateGraph, END
from langgraph.graph.message import add_messages
from langgraph.prebuilt import ToolNode
from langchain_openai import ChatOpenAI

client = MultiServerMCPClient({
    "loomal": {
        "command": "npx",
        "args": ["-y", "@loomal/mcp"],
        "env": {"LOOMAL_API_KEY": os.environ["LOOMAL_API_KEY"]},
        "transport": "stdio",
    }
})

# get_tools() is a coroutine: call it from an async context
# (e.g. inside asyncio.run(main())), not at module top level.
tools = await client.get_tools()
llm = ChatOpenAI(model="gpt-4o-mini").bind_tools(tools)

class State(TypedDict):
    messages: Annotated[list, add_messages]  # append-only message history
    thread_id: str | None  # correlates outgoing sends with inbound replies
    status: str  # 'sent', 'waiting', 'replied', or 'done'

3. Build send / wait / resume nodes

Three nodes cover most email-driven workflows. send_node calls mail.send and stores the returned thread_id in state. wait_node polls mail.list_messages filtered by thread; if no reply, it returns to itself after a delay. handle_reply_node processes the reply and decides whether to send another message or finish.

With LangGraph's checkpointer, the wait node can persist between polls — your process can crash and restart and the graph resumes from where it left off.

graph.py (continued)
def send_node(state: State):
    # The model decides whether to call mail.send, mail.reply, or finish.
    response = llm.invoke(state["messages"])
    return {"messages": [response], "status": "sent"}

def wait_for_reply(state: State):
    # Poll the inbox: call mail.list_messages filtered by state["thread_id"]
    # and labels=unread. Set status to "replied" once mail arrives; otherwise
    # return "waiting" so the graph loops back here after a delay.
    return {"status": "waiting"}

def should_continue(state: State):
    if state["status"] == "done":
        return "done"
    return "reply" if state["status"] == "replied" else "wait"

graph = StateGraph(State)
graph.add_node("send", send_node)
graph.add_node("tools", ToolNode(tools))
graph.add_node("wait", wait_for_reply)
graph.add_edge("send", "tools")
graph.add_conditional_edges(
    "tools", should_continue,
    {"reply": "send", "wait": "wait", "done": END},
)
graph.add_edge("wait", "tools")
graph.set_entry_point("send")
app = graph.compile()
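The poll-and-route logic inside the wait node can be written as plain functions and unit-tested without the graph. A sketch, assuming mail.list_messages returns dicts with thread_id and labels keys (an illustrative shape, not a documented schema), paired with a capped exponential backoff between polls:

```python
def has_reply(inbox: list[dict], thread_id: str) -> bool:
    """True if the watched thread contains an unread message."""
    return any(
        m["thread_id"] == thread_id and "unread" in m.get("labels", [])
        for m in inbox
    )

def next_delay(attempt: int, base: float = 5.0, cap: float = 300.0) -> float:
    """Capped exponential backoff between polls: 5s, 10s, 20s, ... up to 5 min."""
    return min(base * (2 ** attempt), cap)

print(next_delay(3))  # → 40.0
```

The wait node would call has_reply on the mail.list_messages result and set status to "replied" or "waiting"; the backoff keeps long-idle threads from hammering the API.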

4. Run a multi-step email negotiation

With the graph compiled, you can run conversations that span an arbitrary number of replies. The agent sends, the recipient replies, the graph wakes, the agent considers the reply and either sends another message or ends.

Add a checkpointer (Postgres, SQLite, or in-memory for dev) so state survives process restarts. Without checkpointing, the graph is in-memory only.

run.py
from langgraph.checkpoint.postgres import PostgresSaver

# Run inside an async function (e.g. asyncio.run(main())): ainvoke is a coroutine.
with PostgresSaver.from_conn_string("postgresql://...") as checkpointer:
    checkpointer.setup()  # creates the checkpoint tables on first run
    app = graph.compile(checkpointer=checkpointer)
    config = {"configurable": {"thread_id": "deal-4421"}}
    result = await app.ainvoke({
        "messages": [("user",
            "Negotiate the renewal with billing@acme.com. Aim for 12 months at 15% off list. "
            "Walk away below 10%."
        )],
        "status": "start",
    }, config)

5. Use the vault for downstream API calls

If a node needs to call an external API mid-graph (CRM lookup, Stripe charge, etc.), use vault.get for credentials. Because the vault is scoped to the agent identity, the same key works across every node in the graph without environment plumbing.

graph.py (vault usage)
# In any tool-calling node, the model can invoke vault.get directly:
#   { "tool": "vault.get", "args": {"label": "crm-api-key"} }
# The returned secret is scoped to the agent's identity. Revoking the
# identity removes the credential from every node simultaneously.
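If you prefer to fetch the credential in code rather than letting the model call the tool, a node can look up the vault.get tool by name and invoke it directly. A minimal sketch; the {"value": ...} result shape and the "crm-api-key" label are assumptions for illustration:

```python
def extract_secret(tool_result: dict) -> str:
    """Pull the secret out of a vault.get result.

    The {"value": ...} shape is an assumed payload format; adjust it to
    whatever your vault.get call actually returns.
    """
    secret = tool_result.get("value")
    if not secret:
        raise KeyError("vault.get returned no 'value' field")
    return secret

# In a node, with `tools` loaded from the MCP server:
#   vault_get = next(t for t in tools if t.name == "vault.get")
#   result = await vault_get.ainvoke({"label": "crm-api-key"})
#   api_key = extract_secret(result)
print(extract_secret({"value": "sk-example"}))  # → sk-example
```

Keeping the extraction in a helper means a vault schema change touches one function, not every node.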

Things to watch out for

Polling cost

If you poll mail.list_messages aggressively across many threads, you'll burn through API calls. For high-volume workflows, configure the inbound webhook in the Loomal console — the webhook can call back into a LangGraph endpoint that resumes the right thread.
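The webhook handler's real job is just mapping the inbound payload to a LangGraph config and resuming the right thread. A sketch of that mapping, assuming the webhook POST body carries a thread_id field — the payload shape is an assumption, so check the console's webhook configuration for the actual schema:

```python
def resume_config(payload: dict) -> dict:
    """Build the LangGraph config that targets the suspended conversation."""
    thread_id = payload.get("thread_id")
    if not thread_id:
        raise ValueError("webhook payload missing thread_id")
    return {"configurable": {"thread_id": thread_id}}

# Inside your web framework's handler you would then call, e.g.:
#   await app.ainvoke({"status": "replied"}, resume_config(payload))
print(resume_config({"thread_id": "deal-4421"}))
# → {'configurable': {'thread_id': 'deal-4421'}}
```

Because the checkpointer keys state by thread_id, this one mapping is all the webhook needs to wake exactly the conversation the inbound message belongs to.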

Checkpointer choice matters

Use a durable checkpointer (Postgres) in production. The default in-memory checkpointer is fine for dev, but it loses all suspended graphs on restart, which means lost email conversations.

FAQ

Do I have to poll for replies, or is there a webhook?

Both work. Polling is simpler to set up; webhooks are more efficient at scale. Configure the webhook in the Loomal console to POST to your LangGraph endpoint, and resume the corresponding thread when an inbound message arrives.

Can a single graph handle multiple parallel email conversations?

Yes. Each conversation gets its own configurable thread_id. The checkpointer keeps state per thread, so a hundred parallel negotiations don't interfere with each other.

How do I attach files in a multi-step graph?

mail.send accepts attachments. Store files in object storage, pass the URL or bytes to the agent in state, and let it call mail.send with the attachment array. Inbound attachments come back through mail.get_attachment.
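Building the attachment array is plain encoding work. A sketch, assuming mail.send accepts attachments as objects with filename, content_type, and base64 data fields — those field names are an assumption, not a documented schema:

```python
import base64

def make_attachment(filename: str, content_type: str, raw: bytes) -> dict:
    """Encode file bytes into one attachment entry for mail.send.

    The field names are illustrative; match them to the actual
    mail.send attachment schema.
    """
    return {
        "filename": filename,
        "content_type": content_type,
        "data": base64.b64encode(raw).decode("ascii"),
    }

att = make_attachment("quote.pdf", "application/pdf", b"%PDF-1.7 ...")
print(att["filename"])  # → quote.pdf
```

For large files, store the bytes in object storage and keep only the URL in graph state, encoding at send time; base64 in checkpointed state bloats every snapshot.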

Loomal primitives used

mail.send · mail.list_messages · mail.reply · mail.get_thread · vault.get

Ship it.

Free tier, no card. 30 seconds to first email.

Last updated: 2026-04-14 · See also: AutoGen, Claude Desktop, Cursor