LOOMAL
AutoGen · Python

Email and identity for AutoGen agents.

AutoGen makes multi-agent conversations easy and credentials hard. Loomal fills the gap: each AssistantAgent gets its own identity through an MCP workbench, so revocation is per-agent and the audit trail attributes correctly.

Per-agent identity · MCP workbench tools · AssistantAgent + UserProxyAgent · Email between agents · Vault & TOTP

Prerequisites

  • Loomal API key (free at console.loomal.ai)
  • Python 3.10+
  • autogen-agentchat with MCP support
  • An LLM provider key

AutoGen's strength is multi-agent conversation — you compose AssistantAgents that talk to each other (and to a UserProxyAgent) until a task completes. Its weakness, by default, is that every agent in the conversation shares the same process credentials and has no addressable identity outside the chat.

Loomal fixes this in two ways. The MCP workbench gives each agent typed email and vault tools. And because each agent gets its own Loomal identity, credentials don't bleed between agents — the researcher can't see the writer's API keys, and revoking one identity doesn't kill the others.

1. Provision one identity per agent role

Create a separate Loomal identity for each AutoGen agent that needs email or credentials. The point is that they don't share a vault — a leak in one doesn't compromise the rest.

shell
# Create one identity per agent (or do this in your bootstrap script)
curl -X POST https://api.loomal.ai/v0/identities \
  -H "Authorization: Bearer $LOOMAL_ROOT_KEY" \
  -H "Content-Type: application/json" \
  -d '{"name": "researcher"}'
# Repeat for writer, reviewer, etc.
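If you'd rather provision from the same bootstrap script that launches the agents, the curl call translates to a few lines of Python. A minimal stdlib-only sketch — the endpoint and payload mirror the curl above; error handling and response parsing are left to you:

```python
import json
import os
import urllib.request

LOOMAL_API = "https://api.loomal.ai/v0"

def identity_payload(name: str) -> dict:
    # Same JSON body as the curl example above.
    return {"name": name}

def create_identity(name: str) -> bytes:
    # POST /identities with the root key, mirroring the curl call.
    req = urllib.request.Request(
        f"{LOOMAL_API}/identities",
        data=json.dumps(identity_payload(name)).encode(),
        headers={
            "Authorization": f"Bearer {os.environ['LOOMAL_ROOT_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()

if __name__ == "__main__":
    for role in ("researcher", "writer", "reviewer"):
        create_identity(role)
```

Store each returned key under its own env var (LOOMAL_RESEARCHER_KEY, LOOMAL_WRITER_KEY, …) so the wiring in step 3 stays one key per agent.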

2. Install AutoGen with MCP support

Recent autogen-agentchat ships an McpWorkbench that wraps an MCP server's tools and presents them to AssistantAgent. Install both packages.

shell
pip install autogen-agentchat "autogen-ext[openai,mcp]"

3. Wire one MCP workbench per agent

Each AssistantAgent gets its own McpWorkbench, configured with that agent's specific LOOMAL_API_KEY. The workbench manages the subprocess lifecycle — start it before the conversation, close it after.

agents.py
import os
from autogen_agentchat.agents import AssistantAgent
from autogen_ext.tools.mcp import McpWorkbench, StdioServerParams
from autogen_ext.models.openai import OpenAIChatCompletionClient

def loomal_workbench(api_key: str) -> McpWorkbench:
    return McpWorkbench(StdioServerParams(
        command="npx",
        args=["-y", "@loomal/mcp"],
        env={"LOOMAL_API_KEY": api_key},
    ))

model = OpenAIChatCompletionClient(model="gpt-4o-mini")

researcher = AssistantAgent(
    "researcher",
    model_client=model,
    workbench=loomal_workbench(os.environ["LOOMAL_RESEARCHER_KEY"]),
    system_message="Find prospects, then email a brief to writer-y@loomal.ai.",
)

writer = AssistantAgent(
    "writer",
    model_client=model,
    workbench=loomal_workbench(os.environ["LOOMAL_WRITER_KEY"]),
    system_message="Read mail from researcher-x@loomal.ai. Draft outreach for each prospect.",
)

4. Run a multi-agent conversation

AutoGen's RoundRobinGroupChat (or any other team primitive) drives the conversation. Each agent uses its own workbench when calling tools, so when researcher sends mail it's from its own address; when writer reads mail it's from its own inbox.

Don't forget to start and stop the workbenches — the McpWorkbench is an async context manager.

run.py
import asyncio
from autogen_agentchat.teams import RoundRobinGroupChat
from autogen_agentchat.conditions import MaxMessageTermination

from agents import researcher, writer

async def main():
    async with researcher.workbench, writer.workbench:
        team = RoundRobinGroupChat(
            [researcher, writer],
            termination_condition=MaxMessageTermination(8),
        )
        await team.run(task="Find 3 SaaS companies hiring ML engineers and draft outreach.")

asyncio.run(main())

5. UserProxy with vault for credentials

If you have a UserProxyAgent that executes code or hits external APIs, give it its own Loomal identity and store any required keys in its vault. The model never sees the keys; the proxy fetches them at execution time via vault.get.

user_proxy.py
import os

from autogen_agentchat.agents import UserProxyAgent

from agents import loomal_workbench

proxy = UserProxyAgent(
    "proxy",
    workbench=loomal_workbench(os.environ["LOOMAL_PROXY_KEY"]),
)

# In a tool the proxy calls, retrieve the key from the vault:
#   key = vault.get(label="github-token")
# The model only sees that the tool succeeded, never the key value.
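Concretely, a tool the proxy executes might look like the sketch below. The /vault/secrets path and the response shape are assumptions — check the API reference — but the pattern is the point: the key is fetched inside the tool body and never enters the conversation.

```python
import json
import os
import urllib.request

def vault_url(base: str, label: str) -> str:
    # Hypothetical endpoint path; substitute the documented vault route.
    return f"{base}/vault/secrets/{label}"

def create_github_issue(title: str) -> str:
    """Tool body: fetch the token from the vault, use it, return only a status."""
    req = urllib.request.Request(
        vault_url("https://api.loomal.ai/v0", "github-token"),
        headers={"Authorization": f"Bearer {os.environ['LOOMAL_PROXY_KEY']}"},
    )
    with urllib.request.urlopen(req) as resp:
        token = json.loads(resp.read())["value"]  # assumed response shape
    # ... call the GitHub API with `token` here ...
    return "created"  # the model sees only this string, never the token
```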

Things to watch out for

Workbench lifecycle

McpWorkbench launches a subprocess on enter, kills it on exit. Use async with or explicit start/stop. Skipping this leaks Node processes per agent per run.
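If async with doesn't fit your control flow (say, workbenches created dynamically per run), the explicit equivalent is: start them all, run the work, stop them in a finally. A sketch with a fake workbench standing in for McpWorkbench, to show the ordering guarantees:

```python
import asyncio

async def run_with_workbenches(workbenches, body):
    # Explicit start/stop equivalent of `async with wb1, wb2: ...`.
    for wb in workbenches:
        await wb.start()
    try:
        return await body()
    finally:
        # Stop in reverse order, even if body() raised — no leaked processes.
        for wb in reversed(workbenches):
            await wb.stop()

# Fake workbench that records lifecycle events, standing in for McpWorkbench.
events: list[str] = []

class FakeWorkbench:
    def __init__(self, name): self.name = name
    async def start(self): events.append(f"start:{self.name}")
    async def stop(self): events.append(f"stop:{self.name}")

async def demo():
    return await run_with_workbenches(
        [FakeWorkbench("researcher"), FakeWorkbench("writer")],
        lambda: asyncio.sleep(0, result="done"),
    )

result = asyncio.run(demo())
```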

Per-agent keys are the point

It's tempting to use one LOOMAL_API_KEY across all AutoGen agents in a script. Don't — you lose the per-agent revocation property and the audit trail collapses to one identity for the whole crew.

FAQ

Does this work with the older autogen package (pyautogen)?

MCP workbench support is in the newer autogen-agentchat (Microsoft AutoGen 0.4+). For the older pyautogen, register Loomal as a function tool manually using the REST API — it works, just without the MCP convenience.
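For pyautogen, a hedged sketch of that manual route: wrap a Loomal REST call in a plain function, then register it with the 0.2-style decorators. The /mail/send path here is an assumption — substitute the documented endpoint.

```python
import json
import os
import urllib.request

LOOMAL_API = "https://api.loomal.ai/v0"

def mail_payload(to: str, subject: str, body: str) -> dict:
    return {"to": to, "subject": subject, "body": body}

def send_mail(to: str, subject: str, body: str) -> str:
    """Send mail through Loomal's REST API (endpoint path is an assumption)."""
    req = urllib.request.Request(
        f"{LOOMAL_API}/mail/send",  # hypothetical path — check the API reference
        data=json.dumps(mail_payload(to, subject, body)).encode(),
        headers={
            "Authorization": f"Bearer {os.environ['LOOMAL_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode()

# pyautogen 0.2 registration — the LLM-facing agent proposes the call,
# the executor agent actually runs it:
#   assistant.register_for_llm(description="Send an email via Loomal")(send_mail)
#   user_proxy.register_for_execution()(send_mail)
```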

Can two AutoGen agents share one identity?

Technically yes; we recommend against it. Per-agent identities make revocation surgical and keep the audit log readable. If two agents truly serve the same role, model them as one agent with two prompts.

How do AutoGen tools and Loomal MCP tools interact?

They sit in the same tool list from the model's perspective. AutoGen native tools (e.g. code execution) and Loomal MCP tools (mail, vault) can be called interchangeably in the same conversation turn.

Loomal primitives used

mail.send · mail.list_messages · mail.reply · vault.get · identity.whoami

Ship it.

Free tier, no card. 30 seconds to first email.

Last updated: 2026-04-14 · See also: Claude Agent SDK, Claude Desktop, Cursor