# DAO Proposals & Community
## Summary

Enterprise swarms need identity for agent-level access control, compliance, and cross-swarm trust. AgentID provides ECDSA P-256 certificates and trust scores.

## Reference Implementation

[AgentID](https://getagentid.dev) is an open-source identity layer for AI agents:

- ECDSA P-256 certificates per agent
- Verification API for runtime credential validation
- Trust scores for authorization decisions
- Agent registry for discovery

We have built integrations for CrewAI, LangChain, and MCP.

Source: [github.com/haroldmalikfrimpong-ops/getagentid](https://github.com/haroldmalikfrimpong-ops/getagentid)

Happy to contribute an implementation.
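To make the "trust scores for authorization decisions" point concrete, here is a minimal, hypothetical sketch of a trust-score gate. The names (`AgentCredential`, `authorize`) are illustrative and not the actual AgentID API:

```python
# Hypothetical sketch of a trust-score authorization gate; not AgentID's real API.
from dataclasses import dataclass

@dataclass
class AgentCredential:
    agent_id: str
    trust_score: float  # 0.0 (untrusted) .. 1.0 (fully trusted)

def authorize(cred: AgentCredential, required_score: float = 0.7) -> bool:
    """Allow the action only if the agent's trust score meets the threshold."""
    return cred.trust_score >= required_score

authorize(AgentCredential("agent-1", 0.9))  # trusted, allowed
authorize(AgentCredential("agent-2", 0.4))  # below threshold, denied
```

In a real deployment the score would come from the verification API at runtime rather than being constructed locally.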
## What changed

- `swarms/utils/x402_spending_limits.py` — new module implementing per-agent spending caps; tracks cumulative spend per agent ID using an in-memory ledger with optional persistence, and enforces configurable hard limits before each x402 payment attempt
- `swarms/utils/__init__.py` — exports `SpendingLimitTracker` and `enforce_spending_limit` so downstream agent code can import them without reaching into the utils internals
- `examples/guides/x402_examples/agent_spending_limits_example.py` — end-to-end walkthrough: configure limits, run a multi-agent swarm against a paid API, observe enforcement and graceful degradation when a cap is reached
- `tests/utils/test_x402_spending_limits.py` — unit tests covering limit initialisation, single-agent exhaustion, multi-agent isolation, and reset behaviour

## Why

Issue #1346 identified a risk in autonomous swarm deployments: an agent entering a loop or mis-classifying task complexity can drain the operator's x402 balance before a human can intervene. Without per-agent caps, the only safeguard was the global account balance, which is too coarse for production swarms where different agents have different cost profiles.

The spending tracker is intentionally decoupled from the payment transport layer — it wraps the call site rather than patching x402 internals, so it works with any x402 provider and survives future transport upgrades without modification.

## Result

- Each agent in a swarm can be assigned an independent spending ceiling
- Exceeding the cap raises a typed exception (`SpendingLimitExceeded`) that calling code can catch and handle without crashing the whole swarm
- Limits are composable: per-call, per-session, and lifetime caps can be layered

Closes https://github.com/kyegomez/swarms/issues/1346
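A minimal sketch of the cap-before-charge behaviour described above, reusing the `SpendingLimitTracker` / `SpendingLimitExceeded` names from this PR (the real module's signatures may differ):

```python
# Sketch only: per-agent lifetime caps with check-before-record semantics.
from collections import defaultdict

class SpendingLimitExceeded(Exception):
    """Raised when a payment would push an agent past its cap."""

class SpendingLimitTracker:
    def __init__(self, limits: dict):
        self.limits = limits             # agent_id -> lifetime cap
        self.spent = defaultdict(float)  # agent_id -> cumulative spend

    def charge(self, agent_id: str, amount: float) -> None:
        # Check the cap *before* recording the spend, so a rejected
        # payment never counts against the ledger.
        cap = self.limits.get(agent_id, float("inf"))
        if self.spent[agent_id] + amount > cap:
            raise SpendingLimitExceeded(
                f"{agent_id}: {self.spent[agent_id] + amount:.2f} exceeds cap {cap:.2f}"
            )
        self.spent[agent_id] += amount

tracker = SpendingLimitTracker({"researcher": 1.00})
tracker.charge("researcher", 0.60)
try:
    tracker.charge("researcher", 0.60)   # would exceed the 1.00 cap
except SpendingLimitExceeded:
    pass                                 # caught: the rest of the swarm keeps running
```

Because the exception is typed, callers can degrade gracefully (skip the paid call, fall back to a free tool) instead of crashing the whole swarm.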
## Summary

- Adds a `raise_on_error` parameter to `GraphWorkflow.validate()` so callers can opt into a `ValueError` on validation failure instead of inspecting the returned dict
- Integrates `validate(auto_fix=True)` into `compile()` so structural graph problems (disconnected nodes, missing entry points, unreachable nodes, cycles) are caught at build time — not mid-execution
- Expands the docstring with full Args/Returns/Raises/Example documentation
- Adds 10 new tests covering: empty graphs, disconnected nodes, missing entry points, unreachable nodes, auto-fix behavior, the `raise_on_error` flag, valid graphs, and compile integration

Fixes #1485

## Test plan

- [ ] `pytest tests/structs/test_graph_workflow.py -v` passes all new validate tests
- [ ] Existing tests remain green (no breaking changes — the new parameter defaults to `False`)
- [ ] `compile()` now logs validation warnings for misconfigured graphs before computing topological layers

🤖 Generated with [Claude Code](https://claude.com/claude-code)

<!-- readthedocs-preview swarms start -->
----
📚 Documentation preview 📚: https://swarms--1493.org.readthedocs.build/en/1493/
<!-- readthedocs-preview swarms end -->
## Summary

Skip the redundant `any_to_str()` call in `_run_sequential_workflow` when `agent.run()` already returns a string (the common path).

## Changes

- Added an `isinstance(current_task, str)` guard before the `any_to_str()` call at line 577
- Near-zero-cost check that avoids function call overhead on every agent step

## Before

```python
current_task = agent.run(task=self.conversation.get_str(), img=img)
current_task = any_to_str(current_task)  # Always called, even when already str
```

## After

```python
current_task = agent.run(task=self.conversation.get_str(), img=img)
if not isinstance(current_task, str):
    current_task = any_to_str(current_task)  # Only when needed
```

## Test plan

- [x] Existing tests pass unchanged (behavior is identical)
- [x] `any_to_str()` still called for non-string returns (dicts, lists, objects)
- [x] String returns skip the redundant conversion

Fixes #1462

<!-- readthedocs-preview swarms start -->
----
📚 Documentation preview 📚: https://swarms--1492.org.readthedocs.build/en/1492/
<!-- readthedocs-preview swarms end -->
## Summary

- Fixes #1482 — adds an `on_node_complete` callback that fires immediately when each agent finishes, before the layer completes
- The callback receives `(node_id, output)` and can be set at `__init__` or passed to `run()` (run-level takes precedence)
- Enables real-time UIs, logging pipelines, progress reporting, and early-exit patterns

## Test plan

- [x] `test_graph_workflow_on_node_complete_callback_via_run` — callback passed to `run()` fires for each agent with correct outputs
- [x] `test_graph_workflow_on_node_complete_callback_via_init` — callback set at `__init__` fires for each agent
- [x] `test_graph_workflow_on_node_complete_run_overrides_init` — run-level callback takes precedence over init-level

🤖 Generated with [Claude Code](https://claude.com/claude-code)

<!-- readthedocs-preview swarms start -->
----
📚 Documentation preview 📚: https://swarms--1491.org.readthedocs.build/en/1491/
<!-- readthedocs-preview swarms end -->
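The precedence rule (run-level callback overrides the init-level one) can be illustrated with a self-contained miniature. `MiniWorkflow` is a stand-in for illustration only, not the real `GraphWorkflow`:

```python
# Illustrative miniature of the callback precedence; the real GraphWorkflow
# wires this into its layer-execution loop instead of a flat node list.
from typing import Any, Callable, Optional

class MiniWorkflow:
    def __init__(self, on_node_complete: Optional[Callable[[str, Any], None]] = None):
        self._init_callback = on_node_complete

    def run(self, task, on_node_complete=None):
        # Run-level callback takes precedence over the init-level one.
        callback = on_node_complete or self._init_callback
        outputs = {}
        for node_id in ("researcher", "writer"):   # stand-in for graph nodes
            output = f"{node_id} finished: {task}"
            outputs[node_id] = output
            if callback:
                callback(node_id, output)          # fires per node, before the layer ends
        return outputs

seen = []
MiniWorkflow().run("report", on_node_complete=lambda nid, out: seen.append(nid))
```

The `callback = on_node_complete or self._init_callback` line is the whole precedence mechanism: a run-level value shadows the stored one, and `None` at both levels means no callback fires.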
## Summary

- Fixes #1481 — the `max_loops` parameter in `GraphWorkflow.run()` was dead code; a `return` inside the `while` loop caused early exit after the first iteration
- Moves the `return` outside the loop, accumulates per-loop results, and feeds end-point outputs as context into subsequent loop iterations
- Adds a `loop_idx` parameter to `_build_prompt()` so entry-point agents in loop 2+ receive previous iteration outputs for refinement

## Test plan

- [x] `test_graph_workflow_max_loops_accumulates_results` — verifies 3 loops produce all expected `_loop_N` keys
- [x] `test_graph_workflow_single_loop_backward_compatible` — verifies `max_loops=1` returns the same format as before (no loop-suffixed keys)
- [x] All 15 existing tests pass with no regressions

🤖 Generated with [Claude Code](https://claude.com/claude-code)

<!-- readthedocs-preview swarms start -->
----
📚 Documentation preview 📚: https://swarms--1490.org.readthedocs.build/en/1490/
<!-- readthedocs-preview swarms end -->
## Summary

Closes #1487

- `swarms autoswarm --task "..." --model "..."` now writes a ready-to-run `.py` file to the current directory
- Every agent in the YAML config maps to a correctly-named `Agent(...)` variable
- The swarm architecture maps to `SwarmRouter` with the correct `swarm_type`
- An `--output`/`-o` flag overrides the output path
- A `--no-run` flag skips execution and only writes the file
- Existing run-immediately behavior is preserved by default

### Bugs fixed along the way

- `parse_yaml_from_swarm_markdown` used `re.search` (first match), grabbing the template YAML from the system prompt instead of the LLM's generated output — fixed to use the last match
- `create_agents_from_yaml` checked `return_type not in Literal[...]` — Python's `in` operator doesn't work on `Literal` types, always raising `ValueError` — fixed to use a plain tuple

### Files changed

| File | What |
|---|---|
| `swarms/agents/auto_generate_swarm_config.py` | `write_autoswarm_file()`, helpers, YAML extraction fix, `generate_swarm_config` returns config dict |
| `swarms/agents/create_agents_from_yaml.py` | `Literal` `in` operator bug fix |
| `swarms/cli/main.py` | `--output`, `--no-run` flags, 3-step orchestration in `run_autoswarm` |
| `tests/agents/test_autoswarm_writer.py` | 44 functional tests |

## Test plan

- [x] 44 automated tests pass (helpers, real Agent/SwarmRouter construction, CLI arg parsing, full pipeline)
- [x] `swarms autoswarm --task "..." --model gpt-4.1 --no-run -o out.py` writes the correct file
- [x] `swarms autoswarm --task "..." --model gpt-4.1` writes the file AND executes the swarm end-to-end
- [x] The generated file imports and creates real objects when run independently

<!-- readthedocs-preview swarms start -->
----
📚 Documentation preview 📚: https://swarms--1489.org.readthedocs.build/en/1489/
<!-- readthedocs-preview swarms end -->
## Summary

`generate_swarm_config()` in `swarms/agents/auto_generate_swarm_config.py` creates its internal `Auto-Swarm-Builder` agent with `dynamic_temperature_enabled=True`. This causes LiteLLM to send both `temperature` and `top_p` parameters to the LLM provider. Anthropic's API rejects requests that include both, making `generate_swarm_config` unusable with any Anthropic model.

## Root Cause

```python
# swarms/agents/auto_generate_swarm_config.py line 413-422
agent = Agent(
    agent_name="Auto-Swarm-Builder",
    system_prompt=AUTO_GEN_PROMPT,
    llm=model,
    max_loops=1,
    dynamic_temperature_enabled=True,  # <-- this is the problem
    saved_state_path="swarm_builder.json",
    user_name="swarms_corp",
    output_type="str",
)
```

When `dynamic_temperature_enabled=True`, the agent sets both `temperature` and `top_p` on the LLM call. Anthropic's API returns:

```
AnthropicException - {"type":"error","error":{"type":"invalid_request_error",
"message":"`temperature` and `top_p` cannot both be specified for this model."}}
```

## Steps to Reproduce

```bash
swarms autoswarm --task "build a research pipeline" --model claude-haiku-4-5
```

## Expected Behavior

The command should work with any LiteLLM-supported model, including Anthropic models.

## Suggested Fix

Either:

1. Remove `dynamic_temperature_enabled=True` from the Auto-Swarm-Builder agent (it's a config-generation agent, not a creative task — dynamic temperature adds no value here)
2. Or make `dynamic_temperature_enabled` detect the provider and skip `top_p` for Anthropic models

---

## Additional Bug Found: `Literal` type used with `in` operator

In `swarms/agents/create_agents_from_yaml.py` line 326:

```python
if return_type not in ReturnTypes:
```

`ReturnTypes` is `typing.Literal['auto', 'swarm', 'agents', 'both', 'tasks', 'run_swarm']`. The `in` operator does not work on `Literal` types in Python — it always returns `False`, so `not in` always returns `True`.

This means **every call** to `create_agents_from_yaml` raises `ValueError("Invalid return_type")`.

```python
>>> from typing import Literal
>>> "run_swarm" not in Literal['auto', 'swarm', 'agents', 'both', 'tasks', 'run_swarm']
True  # wrong — should be False
>>> "garbage" not in Literal['auto', 'swarm', 'agents', 'both', 'tasks', 'run_swarm']
True  # same result for invalid values
```

### Fix

Replace the `Literal` type check with a plain tuple:

```python
valid_return_types = ("auto", "swarm", "agents", "both", "tasks", "run_swarm")
if return_type not in valid_return_types:
    raise ValueError(f"Invalid return_type. Must be one of: {valid_return_types}")
```

## Relevant Files

| File | Issue |
|---|---|
| `swarms/agents/auto_generate_swarm_config.py:418` | `dynamic_temperature_enabled=True` breaks Anthropic |
| `swarms/agents/create_agents_from_yaml.py:326` | `in` operator on `Literal` type always fails |
| `swarms/utils/types.py` | `ReturnTypes` defined as `Literal` |
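An alternative to hard-coding a tuple, if keeping `ReturnTypes` as the single source of truth is desirable, is the standard `typing.get_args()` helper, which extracts the allowed values from a `Literal`:

```python
# typing.get_args() turns a Literal into a plain tuple of its values,
# so the check stays in sync with the type annotation automatically.
from typing import Literal, get_args

ReturnTypes = Literal["auto", "swarm", "agents", "both", "tasks", "run_swarm"]

def check_return_type(return_type: str) -> None:
    valid = get_args(ReturnTypes)  # ('auto', 'swarm', ..., 'run_swarm')
    if return_type not in valid:
        raise ValueError(f"Invalid return_type. Must be one of: {valid}")

check_return_type("run_swarm")  # passes silently
```

Either fix works; this variant avoids the tuple and the `Literal` drifting apart in future edits.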
## Summary

The `autoswarm` CLI command currently calls `generate_swarm_config()`, which produces a YAML config and runs the swarm internally, but **never persists a usable Python file** for the user. The user has no artifact they can inspect, modify, or re-run without touching the CLI again.

This issue tracks adding a full pipeline to `autoswarm` that:

1. **Generates the config** — LLM produces the agent/swarm YAML (already works via `generate_swarm_config` / `AutoSwarmBuilder`)
2. **Parses the dict → `Agent()` structs** — map every key in the config dict to the correct `Agent(...)` constructor parameter
3. **Writes a ready-to-run `.py` file** in the current working directory so the user has a reusable, editable script

---

## Current behavior (`swarms/cli/main.py:55–100`, `swarms/agents/auto_generate_swarm_config.py`)

```
swarms autoswarm --task "build a research pipeline" --model gpt-4.1
```

- Calls `generate_swarm_config(task=task, model=model)`
- Config is generated, agents are created via `create_agents_from_yaml()`, swarm runs
- **Nothing is written to disk**; the user has no output file

---

## Desired behavior

```
swarms autoswarm --task "build a research pipeline" --model gpt-4.1
```

Expected output:

```
✓ Swarm configuration generated
✓ Parsed 3 agents from config
✓ Written to: ./autoswarm_research_pipeline.py
```

Generated file (`autoswarm_research_pipeline.py`):

```python
from swarms import Agent
from swarms.structs.swarm_router import SwarmRouter

# Auto-generated by `swarms autoswarm` — edit freely

researcher = Agent(
    agent_name="Researcher",
    agent_description="Finds and summarises relevant papers",
    system_prompt="You are a research specialist ...",
    model_name="gpt-4.1",
    max_loops=1,
    temperature=0.5,
)

analyst = Agent(
    agent_name="Analyst",
    agent_description="Synthesises findings into actionable insights",
    system_prompt="You are a data analyst ...",
    model_name="gpt-4.1",
    max_loops=2,
    temperature=0.3,
)

writer = Agent(
    agent_name="Writer",
    agent_description="Produces the final report",
    system_prompt="You are a technical writer ...",
    model_name="gpt-4.1",
    max_loops=1,
)

swarm = SwarmRouter(
    name="Research-Pipeline",
    description="End-to-end research pipeline",
    agents=[researcher, analyst, writer],
    swarm_type="SequentialWorkflow",
    max_loops=1,
)

if __name__ == "__main__":
    result = swarm.run("build a research pipeline")
    print(result)
```

---

## Implementation plan

### 1. Config dict → `Agent()` parameter mapping (`swarms/agents/auto_generate_swarm_config.py`)

Add a `config_dict_to_agent_code(agent_dict: dict) -> str` helper that maps the parsed YAML fields to the `Agent(...)` constructor signature:

| YAML key | `Agent()` param |
|---|---|
| `agent_name` | `agent_name` |
| `system_prompt` | `system_prompt` |
| `description` | `agent_description` |
| `model_name` | `model_name` |
| `max_loops` | `max_loops` |
| `temperature` | `temperature` |
| `max_tokens` | `max_tokens` |
| `autosave` | `autosave` |
| `verbose` | `verbose` |
| `output_type` | `output_type` |
| `context_length` | `context_length` |

Unknown keys should be emitted as `**kwargs`-style comments so the user knows they may need to map them manually.

### 2. File writer (`swarms/agents/auto_generate_swarm_config.py` or new `swarms/utils/swarm_file_writer.py`)

```python
def write_autoswarm_file(
    config: dict,
    output_path: str = "autoswarm_output.py",
) -> str:
    """
    Given a parsed swarm config dict, render a ready-to-run Python file
    and write it to `output_path`. Returns the resolved file path.
    """
```

- Slugify the swarm `name` field to build the default filename (e.g. `Research Pipeline` → `autoswarm_research_pipeline.py`)
- Include a header comment noting the file was auto-generated
- Import only what is needed (`Agent`, the relevant swarm struct)
- Emit an `if __name__ == "__main__":` block with `swarm.run(task)`

### 3. Wire into `run_autoswarm` (`swarms/cli/main.py:55`)

```python
result = generate_swarm_config(task=task, model=model)
output_file = write_autoswarm_file(config=result, task=task)
console.print(f"[green]✓ Written to: {output_file}[/green]")
```

Add an optional `--output` / `-o` flag to `setup_argument_parser()` so users can specify a custom output path.

### 4. Preserve existing run-immediately behaviour

Keep the existing in-process execution path. Add a `--no-run` flag that skips execution and only writes the file, for users who just want the generated script.

---

## Relevant files

| File | Role |
|---|---|
| `swarms/cli/main.py:55` | `run_autoswarm()` — entry point to modify |
| `swarms/agents/auto_generate_swarm_config.py` | `generate_swarm_config()` — produces the config dict |
| `swarms/agents/create_agents_from_yaml.py` | `create_agents_from_yaml()` — existing YAML→Agent pipeline for reference |
| `swarms/structs/auto_swarm_builder.py` | `AgentSpec`, `SwarmRouterConfig` Pydantic models — use for field validation |
| `swarms/structs/agent.py` | `Agent.__init__` — canonical parameter list |
| `swarms/cli/utils.py` | CLI helpers, console, error formatting |

---

## Acceptance criteria

- [ ] `swarms autoswarm --task "..." --model "..."` writes a `.py` file to the current directory
- [ ] The generated file is valid Python that runs without modification (assuming env vars are set)
- [ ] Every agent in the YAML config maps to a correctly-named `Agent(...)` variable in the output
- [ ] The swarm architecture section maps to the correct swarm struct (`SwarmRouter`, `SequentialWorkflow`, etc.)
- [ ] An `--output` / `-o` flag lets the user override the output path
- [ ] A `--no-run` flag skips in-process execution and only writes the file
- [ ] Existing behaviour (running the swarm immediately) is preserved by default
- [ ] Unit tests cover: config→code mapping, file writing, CLI flag wiring

---

## Labels

`enhancement`, `cli`, `autoswarm`, `good first issue`
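One possible shape for the slugify step in point 2 (the helper name and exact normalization rules are assumptions, not existing swarms code):

```python
# Sketch of a filename slugifier for the default output path.
import re

def slugify_swarm_name(name: str) -> str:
    """Turn a swarm name like 'Research Pipeline' into
    'autoswarm_research_pipeline.py'."""
    # Lowercase, collapse every run of non-alphanumerics to one underscore,
    # then trim leading/trailing underscores.
    slug = re.sub(r"[^a-z0-9]+", "_", name.lower()).strip("_")
    return f"autoswarm_{slug}.py"

slugify_swarm_name("Research Pipeline")  # 'autoswarm_research_pipeline.py'
```

Collapsing runs of punctuation and whitespace in one pass keeps names like `Fin-Ops  Swarm!` from producing doubled underscores in the filename.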
## Summary

Add [MiniMax](https://www.minimax.io/) as a first-class LLM provider in Swarms, enabling users to use MiniMax models (M2.7, M2.5, M2.5-highspeed) with a simple `minimax/` prefix, just like other providers.

### Changes

- **LiteLLM Wrapper** (`swarms/utils/litellm_wrapper.py`): Auto-route the `minimax/` model prefix through MiniMax's OpenAI-compatible API with automatic base_url and api_key detection, and temperature clamping to [0, 1.0]
- **Model Router** (`swarms/structs/model_router.py`): Register MiniMax-M2.7 and MiniMax-M2.5-highspeed in model recommendations and providers
- **CLI Utils** (`swarms/cli/utils.py`): Detect `MINIMAX_API_KEY` in setup-check and the active provider display
- **Documentation**: New `minimax.md` provider guide + update `model_providers.md` with a MiniMax entry
- **Tests**: 14 unit tests (auto-routing, temp clamping, config preservation) + 3 integration tests

### Usage

```python
from swarms import Agent
from dotenv import load_dotenv

load_dotenv()

agent = Agent(
    agent_name="MiniMax-Agent",
    model_name="minimax/MiniMax-M2.7",
    system_prompt="You are a helpful assistant.",
    max_loops=1,
)

response = agent.run("Explain quantum computing.")
```

### MiniMax Model Highlights

- **MiniMax-M2.7**: Latest flagship model, 204K context window, strong multilingual and reasoning
- **MiniMax-M2.5-highspeed**: Optimized for fast inference, 204K context
- OpenAI-compatible API (base URL: `https://api.minimax.io/v1`)

## Test plan

- [x] 14 unit tests pass (auto-routing, temperature clamping, API key detection, config preservation)
- [x] 3 integration tests pass with the real MiniMax API
- [ ] Verify `swarms setup-check` shows MiniMax when `MINIMAX_API_KEY` is set
- [ ] Test an Agent with `model_name="minimax/MiniMax-M2.7"` end-to-end

<!-- readthedocs-preview swarms start -->
----
📚 Documentation preview 📚: https://swarms--1486.org.readthedocs.build/en/1486/
<!-- readthedocs-preview swarms end -->
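The "temperature clamping to [0, 1.0]" mentioned above is, presumably, a simple bounds check before the request is sent; a sketch of the idea (the function name is illustrative, not the wrapper's actual internal name):

```python
# Sketch of range clamping for providers that only accept temperature in [0, 1].
def clamp_temperature(temperature: float, low: float = 0.0, high: float = 1.0) -> float:
    """Clamp a temperature into the provider's accepted range."""
    return max(low, min(high, temperature))

clamp_temperature(1.7)   # out-of-range high value pulled down to 1.0
clamp_temperature(-0.2)  # out-of-range low value pulled up to 0.0
clamp_temperature(0.5)   # in-range value passes through unchanged
```

Clamping silently (rather than raising) keeps configs written for other providers, where `temperature=2.0` is legal, working against MiniMax without edits.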
## Summary

`GraphWorkflow` has no explicit `validate()` method. Misconfigured graphs (disconnected nodes, missing entry points, cycles in DAG-only workflows, unreachable nodes) are only discovered at `run()` time — often mid-execution — rather than at build time.

## Proposed Enhancement

Add a `validate() -> List[str]` method that runs all structural checks and returns a list of error/warning messages without executing any agents:

```python
def validate(self, raise_on_error: bool = True) -> List[str]:
    issues = []

    # 1. Check for disconnected nodes (nodes with no edges)
    for node_id in self.nodes:
        if (
            self.graph_backend.in_degree(node_id) == 0
            and self.graph_backend.out_degree(node_id) == 0
        ):
            issues.append(f"Node '{node_id}' is disconnected (no edges)")

    # 2. Check entry points exist and are valid
    if not self.entry_points:
        issues.append("No entry points defined or auto-detectable")
    for ep in self.entry_points:
        if ep not in self.nodes:
            issues.append(f"Entry point '{ep}' does not exist in nodes")

    # 3. Check end points exist and are valid
    if not self.end_points:
        issues.append("No end points defined or auto-detectable")

    # 4. Detect cycles (for workflows that expect DAG topology)
    cycles = self.graph_backend.simple_cycles()
    if cycles:
        issues.append(f"Graph contains {len(cycles)} cycle(s): {cycles}")

    # 5. Check for unreachable nodes (not reachable from any entry point)
    reachable = set()
    for ep in self.entry_points:
        reachable.add(ep)
        reachable.update(self.graph_backend.descendants(ep))
    unreachable = set(self.nodes) - reachable
    if unreachable:
        issues.append(f"Unreachable nodes: {unreachable}")

    if raise_on_error and issues:
        raise ValueError(
            "GraphWorkflow validation failed:\n"
            + "\n".join(f"  - {i}" for i in issues)
        )
    return issues
```

## Usage

```python
wf = GraphWorkflow(...)
wf.add_node(agent_a)
wf.add_node(agent_b)
wf.add_edge(agent_a, agent_b)

issues = wf.validate(raise_on_error=False)
if issues:
    print("Workflow has issues:", issues)
else:
    wf.run(task="...")
```

## Integration

- Call `validate()` automatically inside `compile()` so misconfiguration is always caught before execution.
- Expose as a public API so users can call it after building a workflow and before deploying.

## File

`swarms/structs/graph_workflow.py`
## Summary

Long-running `GraphWorkflow` executions have no fault tolerance. If an agent fails or the process crashes mid-graph, the entire workflow must restart from scratch — including re-running all previously completed agents.

## Proposed Enhancement

Add a `checkpoint_dir` parameter that automatically persists `prev_outputs` to disk after each layer completes, and resumes from the last successful checkpoint on restart:

```python
GraphWorkflow(
    ...,
    checkpoint_dir="./checkpoints/run_abc123",
)
```

### Checkpoint behavior

- After each layer completes, serialize `prev_outputs` to `{checkpoint_dir}/layer_{idx}.json`.
- On `run()`, check if a checkpoint exists for the current task. If so, skip already-completed layers and load their outputs from disk.
- Checkpoint files are keyed by `(task_hash, layer_idx)` to avoid collisions across different tasks.

```python
checkpoint_path = Path(checkpoint_dir) / f"{hash(task)}_layer_{layer_idx}.json"
if checkpoint_path.exists():
    prev_outputs.update(json.loads(checkpoint_path.read_text()))
    continue  # skip re-executing this layer
```

### Cleanup

Add a `clear_checkpoints(task: str)` method to delete checkpoint files after successful completion.

## Use Cases

- Multi-hour agentic pipelines where any single agent may time out or hit API limits.
- Cost savings — avoid paying for re-running expensive LLM calls on agents that already succeeded.
- Debugging — inspect the exact output of each layer independently.

## File

`swarms/structs/graph_workflow.py` — `run()` method, layer execution loop (~line 1750)
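One caveat worth noting for the `task_hash` key: Python's built-in `hash()` on strings is salted per process (hash randomization), so a checkpoint file named with `hash(task)` would not be found by a restarted process, defeating the resume-after-crash goal. A stable digest avoids that; this is a sketch, not existing swarms code:

```python
# Stable, process-independent task key for checkpoint filenames.
# Built-in hash(str) is randomized per interpreter run (PYTHONHASHSEED),
# so it cannot be used to locate a checkpoint after a restart.
import hashlib

def task_key(task: str) -> str:
    """Short, deterministic hex digest of the task string."""
    return hashlib.sha256(task.encode("utf-8")).hexdigest()[:16]

# Same task -> same key in every process, so resume works across restarts.
checkpoint_name = f"{task_key('build a research pipeline')}_layer_3.json"
```

Truncating to 16 hex chars keeps filenames short while leaving collisions astronomically unlikely for this use.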
## Summary

`GraphWorkflow` has no serialization or deserialization support. Users cannot save a workflow definition, version it in git, share it, or reconstruct it from config — they must always build it programmatically.

## Proposed Enhancement

Add a `to_spec()` method that serializes the graph structure (without requiring Agent objects to be serializable):

```python
def to_spec(self) -> Dict[str, Any]:
    return {
        "name": self.name,
        "description": self.description,
        "max_loops": self.max_loops,
        "nodes": [
            {"id": node_id, "agent_name": node.agent.agent_name}
            for node_id, node in self.nodes.items()
        ],
        "edges": [
            {"source": e.source, "target": e.target, "metadata": e.metadata}
            for e in self.edges
        ],
        "entry_points": self.entry_points,
        "end_points": self.end_points,
    }

def to_json(self, path: str) -> None:
    with open(path, "w") as f:
        json.dump(self.to_spec(), f, indent=2)
```

And a `from_spec()` classmethod that reconstructs the graph topology given a pre-built agent registry:

```python
@classmethod
def from_spec(cls, spec: Dict[str, Any], agent_registry: Dict[str, Agent]) -> "GraphWorkflow":
    ...
```

## Use Cases

- Version-control workflow definitions alongside code.
- Share workflow configs between teams without sharing agent implementation details.
- Reconstruct a workflow from a database or config file at runtime.
- Diff workflow changes in PRs.

## File

`swarms/structs/graph_workflow.py`
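A rough sketch of the topology-rebuilding half of `from_spec()`, reduced to plain dicts so the idea stands alone (the real classmethod would populate `GraphWorkflow` internals and call `add_node`/`add_edge` instead):

```python
# Illustrative only: rebuild node/edge topology from a spec dict plus a
# registry of pre-built agents, mirroring the to_spec() shape above.
from typing import Any, Dict

def rebuild_topology(spec: Dict[str, Any], agent_registry: Dict[str, Any]):
    # Each spec node names an agent; the registry supplies the live object,
    # so agent implementations never need to be serializable.
    nodes = {n["id"]: agent_registry[n["agent_name"]] for n in spec["nodes"]}
    edges = [(e["source"], e["target"]) for e in spec["edges"]]
    return nodes, edges

spec = {
    "nodes": [
        {"id": "n1", "agent_name": "Researcher"},
        {"id": "n2", "agent_name": "Writer"},
    ],
    "edges": [{"source": "n1", "target": "n2", "metadata": {}}],
}
nodes, edges = rebuild_topology(spec, {"Researcher": object(), "Writer": object()})
```

A missing registry entry surfaces immediately as a `KeyError` naming the absent agent, which is a reasonable failure mode for a config-driven loader.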
## Summary

There is currently no way to observe agent results as they complete within a layer. All results are only accessible after `run()` returns the final `Dict[str, Any]`.

## Proposed Enhancement

Add an `on_node_complete` callback parameter to `GraphWorkflow` and/or `run()`:

```python
def run(
    self,
    task: str,
    on_node_complete: Optional[Callable[[str, Any], None]] = None,
    ...
) -> Dict[str, Any]:
    ...
    # called immediately when each agent finishes, before the layer completes
    if on_node_complete:
        on_node_complete(node_id, output)
```

## Use Cases

- **Real-time UIs** — stream partial results to a frontend as each agent finishes rather than waiting for the full graph.
- **Logging pipelines** — write each agent output to a log or database immediately without buffering everything in memory.
- **Early-exit patterns** — inspect intermediate outputs and cancel remaining agents in a layer if a satisfactory result has already been produced.
- **Progress reporting** — show users which agents have completed in a long-running workflow.

## File

`swarms/structs/graph_workflow.py` — `run()` method, result collection loop (~line 1832)
## Summary

There is an explicit comment at line 1897 of `swarms/structs/graph_workflow.py`:

> "For now, we still return after the first loop — this maintains backward compatibility."

The `while loop < self.max_loops` loop runs correctly, but a `return` statement inside the loop exits on the first iteration regardless of `max_loops`. This means `max_loops > 1` has zero effect.

## Expected Behavior

- When `max_loops=N`, the workflow should execute all N iterations.
- Results should either accumulate across loops (keyed by loop index) or the final loop's results should be returned.
- Alternatively, if multi-loop is not needed, the `max_loops` parameter and dead loop logic should be removed to avoid misleading users.

## Current Behavior

`run()` always returns after a single execution pass regardless of `max_loops`.

## File

`swarms/structs/graph_workflow.py` — line ~1897

## Fix Options

1. Move `return execution_results` outside the `while` loop and accumulate per-loop results in a `Dict[int, Dict[str, Any]]`.
2. Remove the `max_loops` parameter entirely and document the breaking change.
3. Only accumulate end-point outputs across loops and pass them as context into the next loop.
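Fix option 1 reduces to a small control-flow change; a stripped-down sketch of the shape (names are illustrative, not the actual `GraphWorkflow` internals):

```python
# Sketch of fix option 1: the return sits *outside* the while loop, and
# each iteration's results are stored under its loop index.
def run_with_loops(max_loops: int):
    all_results = {}  # loop index -> that iteration's execution results
    loop = 0
    while loop < max_loops:
        execution_results = {"end_node": f"output from loop {loop}"}
        all_results[loop] = execution_results
        loop += 1
    return all_results  # reached only after all iterations finish

run_with_loops(3)  # produces results keyed 0, 1, 2
```

With the original placement, the `return` on the first iteration makes every line after it (including `loop += 1`) dead code, which is exactly the bug the comment at line 1897 admits to.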
## Description

`Conversation.get_str()` was rebuilding the full conversation string on every call by iterating over all messages from scratch — even when nothing had changed between calls. In `AgentRearrange`, this is called once per agent step, so with 10 concurrent agents sharing the same conversation, the same string gets rebuilt 10 times redundantly per step.

I implemented a string cache (`_str_cache`) on the `Conversation` class that stores the result of `get_str()` and returns it instantly on subsequent calls. The cache is invalidated only when the conversation is actually mutated — via `add`, `delete`, `update`, `clear`, `batch_add`, `load_from_json`, `load_from_yaml`, or `truncate_memory_with_tokenizer`.

I also added `get_cache_stats()`, which returns live hit/miss counts, hit rate, and cached token count, and included an interactive example (`examples/conversation_cache_interactive.py`) that spins up a real agent so you can chat and watch the cache stats update in real time after every response.

Finally, I made several fixes so that all 31 conversation tests now pass, up from 29:

- `get_cache_stats()` — returns hits, misses, cached token count, total tokens, and hit rate
- `to_yaml()` — missing method that the test suite expected
- `list_cached_conversations()` — missing classmethod that the test suite expected
- `cache_enabled` as a parameter alias for `caching`

## Explanation Video

https://drive.google.com/file/d/12Jr5trKBu8NSdoMdRpt4fqultO3-5wCZ/view?usp=sharing

## Issue

#1460

## Dependencies

None

## Tag maintainer

@kyegomez

## Twitter handle

@akc__2025

<!-- readthedocs-preview swarms start -->
----
📚 Documentation preview 📚: https://swarms--1480.org.readthedocs.build/en/1480/
<!-- readthedocs-preview swarms end -->
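The cache-plus-invalidation pattern described above, boiled down to a self-contained miniature (the real `Conversation` class invalidates in every mutator listed, not just `add`):

```python
# Minimal illustration of the _str_cache pattern: build once, serve from
# cache until any mutation resets it to None.
class MiniConversation:
    def __init__(self):
        self.messages = []
        self._str_cache = None
        self.hits = 0
        self.misses = 0

    def add(self, role: str, content: str) -> None:
        self.messages.append((role, content))
        self._str_cache = None  # any mutation invalidates the cache

    def get_str(self) -> str:
        if self._str_cache is not None:
            self.hits += 1          # cached string returned, no rebuild
            return self._str_cache
        self.misses += 1            # rebuild and store
        self._str_cache = "\n".join(f"{r}: {c}" for r, c in self.messages)
        return self._str_cache

conv = MiniConversation()
conv.add("user", "hi")
conv.get_str()  # miss: builds the string
conv.get_str()  # hit: returned from cache
```

The key invariant is that correctness never depends on the cache: a `None` cache simply falls through to the original rebuild path, so invalidating too eagerly costs only performance.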
## Summary

- `_get_sequential_awareness` always resolved to the first occurrence of an agent name, so repeated agents (e.g., `Writer -> Reviewer -> Writer`) all received identical positional info
- Add a `task_idx` param to `_get_sequential_awareness` and `_run_sequential_workflow` so each occurrence gets correct neighbors
- Use indexed keys in `response_dict` to preserve all outputs from repeated agents instead of silently overwriting
- Remove the commented-out duplicate agent check (dead code)

## Test plan

- [x] 8 new tests covering repeated agent awareness, backward compat, 3-occurrence flows, and end-to-end runs
- [x] All 29 tests pass (21 existing + 8 new)
- [x] No public API changes — only private `_` methods updated with optional params

Closes #1466

🤖 Generated with [Claude Code](https://claude.com/claude-code)

<!-- readthedocs-preview swarms start -->
----
📚 Documentation preview 📚: https://swarms--1479.org.readthedocs.build/en/1479/
<!-- readthedocs-preview swarms end -->
## Summary

Implement the native multi-agent architecture of Grok 4.20 Heavy, with 4 agents and their specializations:

https://x.com/elonmusk/status/2034710771075273177?s=46

Architectural docs can be found here:

- https://www.google.com/search?sca_esv=822a830dd0cf28f6&rlz=1C5CHFA_enUS1111US1111&sxsrf=ANbL-n46LavBHAdsvhIRLSnJj_iekVZ73Q:1773948669964&udm=2&fbs=ADc_l-aN0CWEZBOHjofHoaMMDiKp9lEhFAN_4ain3HSNQWw-mMGVXS0bCMe2eDZOQ2MOTwnMa06_-qUutYsIv5lB1HqB7Pf6gcWqPaZz5tPxChciWIfJTUMbCU6e7kZ-C3rDUjSijh5aLl1-wfSAfnoI6evxac4yV1AawJ2rTjJPFVLz6bjlf2o5YCmDC9DXUr0vWWu8iNtjWmH8xN1KM75ZHztICn3hyQ&q=grok+4.20+heavy+docs&sa=X&ved=2ahUKEwiWt8ri2ayTAxXVRTABHe6GGBgQtKgLegQIFhAB&biw=1544&bih=992&dpr=1.5
- https://aitoolland.com/grok-4-20-heavy-guide/

Tasks:

- leader -> 4 agents -> summary -> final summary
- create 4 specialized agents (logic, creativity, etc.)
- put them in the heavy swarm class
- make them optional to use via a new parameter
## Summary

Formally support repeating agents in the flow pattern (e.g., `agent1 -> agent2 -> agent1`) to enable iterative refinement workflows.

## Background

The duplicate agent check in `validate_flow()` is already commented out (agent_rearrange.py lines 330-333):

```python
# # If the length of the agents does not equal the length of the agents in flow
# if len(set(agents_in_flow)) != len(agents_in_flow):
#     raise ValueError(
#         "Duplicate agent names in the flow are not allowed."
#     )
```

This suggests the feature was considered but not fully implemented. Agent repetition is valuable for patterns like:

- **Iterative refinement**: `writer -> reviewer -> writer` (revise based on feedback)
- **Multi-pass analysis**: `researcher -> analyst -> researcher -> analyst`
- **Self-correction**: `coder -> tester -> coder`

## What needs to happen

1. **Ensure conversation context is correct** — When an agent appears a second time, it should see its own prior output plus the intermediate agent's feedback, not just re-process the same input.
2. **Update team awareness** — `_get_sequential_awareness()` currently finds the first occurrence of an agent name. It should be position-aware so `agent1` at step 1 knows it leads to `agent2`, while `agent1` at step 3 knows it follows `reviewer`.
3. **Response dict handling** — `response_dict[agent_name] = result` (line 686) overwrites prior results from the same agent. Use a list or indexed key (e.g., `agent1_1`, `agent1_2`).
4. **Document the pattern** — Add examples to the docstring and docs showing iterative refinement flows.

## Acceptance criteria

- [ ] Flows like `agent1 -> agent2 -> agent1` execute without error
- [ ] Each agent invocation gets correct positional awareness
- [ ] All outputs from repeated agents are preserved (not overwritten)
- [ ] Documentation updated with iterative refinement examples
- [ ] Tests cover repeated agent flows
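For point 3, one possible indexed-key scheme keeps the plain name for the first occurrence (so existing consumers of `response_dict` are untouched) and suffixes repeats. This is a sketch of the idea, not the actual agent_rearrange code:

```python
# Sketch: first occurrence keeps the plain key for backward compatibility,
# later occurrences get a numeric suffix instead of overwriting.
def indexed_key(response_dict: dict, agent_name: str) -> str:
    if agent_name not in response_dict:
        return agent_name  # first occurrence: unchanged key
    n = 2
    while f"{agent_name}_{n}" in response_dict:
        n += 1
    return f"{agent_name}_{n}"

responses = {}
flow = [("writer", "draft"), ("reviewer", "notes"), ("writer", "final")]
for name, output in flow:
    responses[indexed_key(responses, name)] = output
# responses == {"writer": "draft", "reviewer": "notes", "writer_2": "final"}
```

The trade-off versus storing a list per agent is that flat string keys keep the existing `Dict[str, str]` return type intact.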
## Summary
Two related issues with `AgentRearrange.batch_run()`:

1. **batch_run is not concurrent** — The comment says "process batch using concurrent execution" (line 843) but the implementation is a sequential list comprehension. It should use a thread pool.
2. **No pipeline parallelism** — When processing multiple tasks through a sequential flow, agent 2 could start on task 1 while agent 1 works on task 2. Currently each task must fully complete before the next one starts.

## Problem
Current implementation (agent_rearrange.py lines 844-852):

```python
# Process batch using concurrent execution  <-- comment says concurrent
batch_results = [  # <-- but it's sequential
    self.run(task=task, img=img_path)
    for task, img_path in zip(batch_tasks, batch_imgs)
]
```

For 5 tasks with 3 agents each taking 2s: current = 30s, with thread pool = ~6s per batch.

## Proposed solution

### Fix 1: Use ThreadPoolExecutor within batches

```python
with ThreadPoolExecutor(max_workers=min(len(batch_tasks), os.cpu_count())) as executor:
    futures = [
        executor.submit(self.run, task=task, img=img_path)
        for task, img_path in zip(batch_tasks, batch_imgs)
    ]
    batch_results = [f.result() for f in futures]
```

### Fix 2: Pipeline parallelism (future enhancement)
For sequential flows, allow task N+1 to enter the pipeline at agent 1 while task N is at agent 2. This requires decoupling per-task conversation state from the shared `self.conversation`.

## Acceptance criteria
- [ ] `batch_run` uses `ThreadPoolExecutor` to process tasks within each batch concurrently
- [ ] Each task gets its own conversation copy to avoid shared state corruption
- [ ] Batch ordering is preserved in results
- [ ] Existing tests pass unchanged
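One subtlety behind the "batch ordering is preserved" criterion: collecting `f.result()` in submit order preserves ordering even when tasks finish out of order, because each future is awaited at its own position. A self-contained demonstration, with `slow_run` standing in for `self.run`:

```python
# Demonstrates that futures collected in submit order yield results in submit
# order, regardless of completion order. slow_run is a stand-in for self.run.
import time
from concurrent.futures import ThreadPoolExecutor

def slow_run(task: str, delay: float) -> str:
    time.sleep(delay)
    return f"done:{task}"

tasks = [("t1", 0.05), ("t2", 0.0), ("t3", 0.02)]  # t1 submitted first, finishes last
with ThreadPoolExecutor(max_workers=3) as executor:
    futures = [executor.submit(slow_run, t, d) for t, d in tasks]
    batch_results = [f.result() for f in futures]  # submit order, not finish order
```

So Fix 1 satisfies the ordering criterion without any extra bookkeeping; only the shared-conversation concern requires additional work.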
## Summary
`any_to_str(current_task)` is called on every agent response in `_run_sequential_workflow` (line 577) even though `agent.run()` already returns a string in the common path. Check type first to skip the conversion.

## Problem

```python
current_task = agent.run(task=self.conversation.get_str(), img=img)
current_task = any_to_str(current_task)  # Redundant when already a string
```

`any_to_str()` handles arbitrary types (dicts, lists, objects), but in practice `agent.run()` returns a string. The extra function call and type inspection on every agent step is wasted work.

## Proposed solution

```python
current_task = agent.run(task=self.conversation.get_str(), img=img)
if not isinstance(current_task, str):
    current_task = any_to_str(current_task)
```

A simple `isinstance` check is near-zero cost and skips the function call overhead in the common case.

## Acceptance criteria
- [ ] Guard `any_to_str()` with `isinstance(current_task, str)` check
- [ ] Existing tests pass unchanged
## Summary
`validate_flow()` and `self.flow.split("->")` re-parse the flow string on every `_run()` call. Parse it once at init (or when `set_custom_flow` is called) into a structured execution plan.

## Problem
In `_run()` (agent_rearrange.py lines 629-633):

```python
if not self.validate_flow():  # Re-validates every call
    ...
tasks = self.flow.split("->")  # Re-parses every call
```

`validate_flow()` iterates all agent names in the flow and checks them against the agents dict — O(n) per run. The flow string is also re-split into steps every time. For a flow that never changes between runs, this is redundant work.

## Proposed solution
Parse the flow into a structured execution plan at init time:

```python
def _parse_flow(self) -> List[List[str]]:
    """Parse flow string into list of steps, each step a list of agent names."""
    steps = []
    for step in self.flow.split("->"):
        agent_names = [name.strip() for name in step.split(",")]
        steps.append(agent_names)
    return steps
```

- Call `_parse_flow()` in `__init__` and store as `self._execution_plan`
- Call it again in `set_custom_flow()` to update the plan
- Validate once during parsing, not on every run
- `_run()` uses `self._execution_plan` directly instead of re-parsing

## Acceptance criteria
- [ ] Add `_parse_flow()` method that returns structured plan + validates agent names
- [ ] Cache result as `self._execution_plan` at init
- [ ] Update cache in `set_custom_flow()` and `add_agent()` / `remove_agent()`
- [ ] Remove `validate_flow()` call and `flow.split("->")` from `_run()`
- [ ] Existing tests pass unchanged
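A standalone version of the proposed parser with validation folded into the same pass, as the first acceptance criterion asks; the free-function form and the dummy agents dict are for illustration only:

```python
# Standalone sketch of parse-once-and-validate: split the flow string into
# steps and check every agent name against the registry in a single pass.
from typing import Dict, List

def parse_flow(flow: str, agents: Dict[str, object]) -> List[List[str]]:
    """Return the execution plan as a list of steps; raise on unknown agents."""
    steps: List[List[str]] = []
    for step in flow.split("->"):
        names = [name.strip() for name in step.split(",")]
        for name in names:
            if name not in agents:  # validation happens during parsing
                raise ValueError(f"Agent '{name}' in flow is not registered")
        steps.append(names)
    return steps
```

For example, `parse_flow("a -> b, c -> a", agents)` yields `[["a"], ["b", "c"], ["a"]]`: concurrent groups stay together within a step, and a repeated agent simply appears in two steps.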
## Summary
`Conversation.get_str()` rebuilds the full string representation of the conversation history on every call. In `AgentRearrange`, this is called once per agent step (sequential) or once per concurrent group, meaning the same string is reconstructed repeatedly even when no new messages have been added between calls. Cache the result and invalidate only when `conversation.add()` is called.

## Problem
In `AgentRearrange._run_sequential_workflow` (line 572) and `_run_concurrent_workflow` (line 509), `self.conversation.get_str()` is called to pass the full history to each agent. For concurrent agents in the same step, the conversation hasn't changed between calls — yet `get_str()` rebuilds the string each time. With 10 agents and 1K tokens each, this means re-concatenating up to ~50K tokens of string data redundantly.

## Proposed solution

```python
class Conversation:
    def __init__(self, ...):
        ...
        self._str_cache: Optional[str] = None

    def add(self, role: str, content: str, ...):
        self._str_cache = None  # Invalidate cache
        # ... existing add logic ...

    def get_str(self) -> str:
        if self._str_cache is None:
            self._str_cache = self._build_str()  # Current get_str logic
        return self._str_cache
```

## Impact
- Eliminates redundant string rebuilds in concurrent agent groups (N agents = N-1 saved rebuilds per step)
- Reduces CPU + memory allocation pressure in long sequential chains where `get_str()` is called between validation/logging calls
- Zero behavior change — purely internal optimization

## Acceptance criteria
- [ ] Add `_str_cache` field to `Conversation`, initialized to `None`
- [ ] Invalidate cache in `add()` and any other mutation methods (`delete`, `update`, etc.)
- [ ] Return cached value in `get_str()` when available
- [ ] Existing tests pass unchanged
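A minimal self-contained version of the pattern, with a `builds` counter added purely to make the caching behavior observable; `_build_str` stands in for the existing `get_str` body and the message layout is illustrative:

```python
# Minimal cache-and-invalidate sketch of the proposed Conversation change.
from typing import List, Optional, Tuple

class Conversation:
    def __init__(self) -> None:
        self.messages: List[Tuple[str, str]] = []
        self._str_cache: Optional[str] = None
        self.builds = 0  # rebuild counter, for illustration only

    def add(self, role: str, content: str) -> None:
        self.messages.append((role, content))
        self._str_cache = None  # any mutation invalidates the cache

    def _build_str(self) -> str:
        # Stand-in for the current get_str body.
        self.builds += 1
        return "\n".join(f"{role}: {content}" for role, content in self.messages)

    def get_str(self) -> str:
        if self._str_cache is None:
            self._str_cache = self._build_str()
        return self._str_cache
```

Two consecutive `get_str()` calls with no intervening `add()` trigger exactly one rebuild, which is the case that matters for concurrent agent groups sharing one step.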
Fixes #1456

Adds documentation showing how to integrate a Hugging Face model as a custom base LLM with the `Agent` class. The `Agent` class supports any custom LLM object implementing a `run(task: str) -> str` method, but no example existed for Hugging Face models.

Creates `docs/examples/huggingface_custom_llm.md` with a minimal `HuggingFaceLLM` wrapper class using `AutoModelForCausalLM` and `AutoTokenizer`, and registers it in the LLM providers table in `docs/examples/basic_examples_overview.md`.

Verified that the diff stays within the 50-line addition limit.

<!-- readthedocs-preview swarms start -->
----
📚 Documentation preview 📚: https://swarms--1459.org.readthedocs.build/en/1459/
<!-- readthedocs-preview swarms end -->
## Summary
Add a new documentation example showing how to integrate an Ollama-hosted model as a custom base LLM for the `Agent` class.

## Details
The example should demonstrate:

1. **Creating a custom LLM class** with a `run(task: str) -> str` method that calls the Ollama API (via `ollama` Python package or raw HTTP)
2. **Passing the custom LLM instance** to `Agent(llm=...)` as the base language model
3. **Running the agent** end-to-end with the Ollama-backed LLM

### Suggested example structure

```python
import ollama

class OllamaLLM:
    def __init__(self, model_name: str = "llama3", temperature: float = 0.7, max_tokens: int = 500):
        self.model_name = model_name
        self.temperature = temperature
        self.max_tokens = max_tokens

    def run(self, task: str) -> str:
        response = ollama.chat(
            model=self.model_name,
            messages=[{"role": "user", "content": task}],
            options={
                "temperature": self.temperature,
                "num_predict": self.max_tokens,
            },
        )
        return response["message"]["content"]
```

Then wire it into an Agent:

```python
from swarms import Agent

llm = OllamaLLM(model_name="llama3")

agent = Agent(
    agent_name="Ollama-Agent",
    llm=llm,
    max_loops=1,
    agent_description="An agent powered by a local Ollama model",
)

agent.run("Explain quantum computing in simple terms.")
```

## Acceptance criteria
- [ ] New docs page (e.g., `docs/examples/ollama_custom_llm.md`) with the full working example
- [ ] Covers prerequisites: installing Ollama, pulling a model (`ollama pull llama3`), and installing the `ollama` Python package
- [ ] Explains the `run` method contract that `Agent` expects from a custom LLM
- [ ] Added to the mkdocs nav under examples

## Labels
`documentation`, `enhancement`
## Summary
Add a new documentation example showing how to integrate a Hugging Face model as a custom base LLM for the `Agent` class.

## Details
The example should demonstrate:

1. **Creating a custom LLM class** with a `run(task: str) -> str` method that wraps a Hugging Face model (e.g., using `transformers` pipeline or `AutoModelForCausalLM`)
2. **Passing the custom LLM instance** to `Agent(llm=...)` as the base language model
3. **Running the agent** end-to-end with the Hugging Face-backed LLM

### Suggested example structure

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

class HuggingFaceLLM:
    def __init__(self, model_name: str, max_tokens: int = 500):
        self.tokenizer = AutoTokenizer.from_pretrained(model_name)
        self.model = AutoModelForCausalLM.from_pretrained(model_name)
        self.max_tokens = max_tokens

    def run(self, task: str) -> str:
        inputs = self.tokenizer(task, return_tensors="pt")
        outputs = self.model.generate(**inputs, max_new_tokens=self.max_tokens)
        return self.tokenizer.decode(outputs[0], skip_special_tokens=True)
```

Then wire it into an Agent:

```python
from swarms import Agent

llm = HuggingFaceLLM(model_name="meta-llama/Llama-2-7b-chat-hf")

agent = Agent(
    agent_name="HuggingFace-Agent",
    llm=llm,
    max_loops=1,
    agent_description="An agent powered by a local Hugging Face model",
)

agent.run("Explain quantum computing in simple terms.")
```

## Acceptance criteria
- [ ] New docs page (e.g., `docs/examples/huggingface_custom_llm.md`) with the full working example
- [ ] Covers installing dependencies (`transformers`, `torch`)
- [ ] Explains the `run` method contract that `Agent` expects from a custom LLM
- [ ] Added to the mkdocs nav under examples

## Labels
`documentation`, `enhancement`
## Summary
Add Novita AI as a new provider to Swarms, following the existing provider pattern.

## Changes

### CLI Utils (`swarms/cli/utils.py`)
- Added `NOVITA_API_KEY` environment variable detection in `_detect_active_provider()`
- Added `NOVITA_API_KEY` to `check_api_keys()` validation

### Model Router (`swarms/structs/model_router.py`)
- Added three Novita AI model recommendations:
  - `moonshotai/kimi-k2.5`: Kimi K2.5 for complex reasoning and code generation
  - `zai-org/glm-5`: GLM-5 for multimodal reasoning and general tasks
  - `minimax/minimax-m2.5`: MiniMax M2.5 for video and multimodal tasks
- Added `novita` to `providers` dictionary

## Usage
Set the API key:

```bash
export NOVITA_API_KEY="your-novita-api-key"
```

Use with Agent via OpenAI-compatible endpoint:

```python
import os
from swarms import Agent

agent = Agent(
    agent_name="Novita-Agent",
    model_name="moonshotai/kimi-k2.5",
    llm_base_url="https://api.novita.ai/openai",
    llm_api_key=os.getenv("NOVITA_API_KEY"),
    max_loops=1,
)

result = agent.run("Your task here")
```

## Notes
- Endpoint: `https://api.novita.ai/openai` (OpenAI-compatible)
- API key follows existing provider pattern via `NOVITA_API_KEY` env var
- No existing code modified — only additions to CLI utils and model router
- Minimal diff, no unrelated refactoring
## Description
This PR fixes a consistency issue in autonomous runs where the final summary didn't always follow the same streaming path as earlier steps. Both the normal end-of-loop path and the early-completion case now route through the same streaming logic, so users receive live output up through the final response instead of seeing streaming cut off at completion.

I also added targeted regression tests to verify callback forwarding into summary generation, confirm that streamed tokens are emitted during summary creation, and validate both completion flows.

## Explanation Video
https://drive.google.com/file/d/1_wcdvmzqXJ2o96PxV2KEibeDNd7UOFL0/view?usp=sharing

## Files Changed
* `swarms/structs/agent.py`
* `tests/structs/test_agent_autonomous_streaming_callback.py`

## Issue
#1297

## Dependencies
No extra dependencies required.

## Maintainer
@kyegomez

## Twitter
[@akc__2025](https://x.com/akc__2025)

<!-- readthedocs-preview swarms start -->
----
📚 Documentation preview 📚: https://swarms--1451.org.readthedocs.build/en/1451/
<!-- readthedocs-preview swarms end -->
Swarms' enterprise-grade multi-agent framework is impressive — the production focus and reliability guarantees are exactly what enterprise deployments need.

**[GNAP](https://github.com/farol-team/gnap)** (Git-Native Agent Protocol) addresses a gap in enterprise multi-agent deployments: persistent, auditable task coordination across agent boundaries.

**Enterprise pain point GNAP solves:**
- Agent crashes → task assignment lost → manual recovery
- Audit requirements → need full trace of who did what
- Cross-team agent deployments → no shared infrastructure assumption

**How it fits Swarms:**

```python
# SwarmRouter with GNAP persistence backend
router = SwarmRouter(
    name="enterprise-workflow",
    swarms=[research_swarm, writing_swarm, review_swarm],
    coordination_backend="gnap",  # git repo as task queue
    gnap_repo="https://github.com/your-org/workflow-state",
)

# Tasks survive agent restarts
# Full audit trail in git history
# Compliance teams can query git log for any task
```

**Why enterprises care about git-based coordination:**
- Git repos have access controls, branch protection, PR reviews
- Compliance: git history is tamper-evident
- No new infrastructure — every enterprise already has git

Repo: https://github.com/farol-team/gnap — would love to discuss integration patterns.
## Summary
Fixes #1400

- **Tier 1**: `_build_skills_prompt` now only injects skill name + description into the system prompt (no full content)
- **Tier 2**: `load_full_skill` is registered as a callable tool so the LLM can load full skill content on demand mid-conversation
- Removed `content` from metadata in both `DynamicSkillsLoader._load_all_skills_metadata` and `Agent.load_skills_metadata`
- `DynamicSkillsLoader.load_full_skill_content` now reads from disk instead of returning cached content
- Added `_register_skills_tool` to handle tool registration and deduplication

## Test plan
- [x] 13 new tests in `tests/structs/test_progressive_skills_loading.py`
- [x] Metadata has no `content` key (Tier 1 verified)
- [x] `load_full_skill` reads full content from disk on demand (Tier 2 verified)
- [x] `handle_skills` on a real Agent registers `load_full_skill` as a tool
- [x] `tool_struct.execute_function_calls_from_api_response` can execute a simulated LLM `load_full_skill` call and returns correct content
- [x] No duplicate tool registration on repeated `handle_skills` calls
- [x] Existing agent tests unaffected

🤖 Generated with [Claude Code](https://claude.com/claude-code)

<!-- readthedocs-preview swarms start -->
----
📚 Documentation preview 📚: https://swarms--1434.org.readthedocs.build/en/1434/
<!-- readthedocs-preview swarms end -->
Hey @kyegomez — I ran [AIR Blackbox](https://airblackbox.ai), an open-source EU AI Act compliance scanner, against the swarms codebase. Sharing the results in case it's useful as you think about the August 2026 high-risk AI enforcement deadline.

**Scan result: 36% coverage — 12 compliance gaps across all 6 articles**

Framework detected: LangChain

### Gaps by Article

**Article 9 — Risk Management**
- No tool call risk classification (ConsentGate missing)
- Risk decisions not logged to audit trail

**Article 10 — Data Governance**
- No PII tokenization before LLM calls (DataVault missing)
- Prompts not captured for bias examination

**Article 11 — Technical Documentation**
- No structured operation timestamps
- No full call graph capture (chain → LLM → tool → result)

**Article 12 — Record-Keeping**
- No automatic event recording over system lifetime
- Consent decisions not logged

**Article 14 — Human Oversight**
- No audit trail for human review of agent actions
- No mechanism to define per-tool risk thresholds
- No interrupt/override capability (ConsentDeniedError / InjectionBlockedError)

**Article 15 — Robustness & Cybersecurity**
- Prompts not scanned for injection attacks
- Only 0/4 security layers active

### What passes ✅
- Risk-based blocking policy (consent mode: max detected)
- Sensitive data patterns configured
- HMAC-SHA256 chain integrity present

### How to fix
A drop-in LangChain trust layer covers most of this in ~10 lines:

```
pip install air-langchain-trust
```

```python
from air_langchain_trust import TrustCallback

callback = TrustCallback(audit_chain=True, hmac_secret=os.environ["AUDIT_SECRET"])
executor = AgentExecutor(agent=agent, tools=tools, callbacks=[callback])
```

Full scanner: `pip install air-compliance && air-compliance scan .`
Docs: https://airblackbox.ai

154 days until August 2, 2026 enforcement. Happy to answer questions.
## Summary
The current `DynamicSkillsLoader` and `Agent.handle_skills()` implementation claims to support Anthropic's tiered progressive skill loading (Tier 1: metadata only → Tier 2: full content on demand), but in practice **no progressive loading actually happens**. All matched skills have their full content injected into the system prompt in a single step.

## Current Behavior
When `agent.run(task)` is called:
1. `handle_skills(task)` is invoked
2. `DynamicSkillsLoader.load_relevant_skills(task)` computes cosine similarity and filters skills
3. **All matching skills' full content is immediately concatenated into `system_prompt`**

The `load_full_skill(skill_name)` method exists as a public API but is **never called internally** — it's an orphaned method with no integration into the agent's execution pipeline.

## Expected Behavior (True Progressive Loading)
A proper tiered implementation would look like:
- **Tier 1 (initialization)**: Only inject skill `name` + `description` into the system prompt, keeping it lightweight
- **Tier 2 (runtime, on-demand)**: When the agent determines a skill is needed during conversation, dynamically load the full SKILL.md content via `load_full_skill()` and append it to the context

This requires a mechanism for the agent to "request" a skill at runtime (e.g., a tool/function call), which is currently missing.

## Why This Matters
- **Token waste**: Loading all full skill content upfront bloats the system prompt, especially with many skills
- **Context window pressure**: For agents with many skills, this can consume significant context unnecessarily
- **Documentation mismatch**: The docs and code comments reference "Tier 2 progressive disclosure" but the implementation doesn't deliver it

## Relevant Code
- `swarms/structs/dynamic_skills_loader.py` — `DynamicSkillsLoader` class
- `swarms/structs/agent.py` — `handle_skills()`, `_load_dynamic_skills_for_task()`, `load_full_skill()`

## Suggested Approach
1. In Tier 1 mode, only append a skill summary (name + description) to the system prompt
2. Expose `load_full_skill` as an agent tool/function that the LLM can call mid-conversation
3. When the LLM invokes the tool, load and inject the full skill content into the context dynamically
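A minimal sketch of the two tiers described above, assuming skills can be reduced to a name/description/content mapping; the `SKILLS` dict and every name other than `load_full_skill` are illustrative, not the swarms API:

```python
# Tier 1: metadata-only system prompt. Tier 2: full content via a tool call.
from typing import Dict

# Illustrative skill registry; in swarms this would come from SKILL.md files.
SKILLS: Dict[str, Dict[str, str]] = {
    "web_search": {
        "description": "Search the web and summarize results.",
        "content": "# SKILL.md\nFull instructions for performing web search...",
    },
}

def build_tier1_prompt() -> str:
    """Tier 1: only name + description go into the system prompt."""
    lines = [f"- {name}: {meta['description']}" for name, meta in SKILLS.items()]
    return "Available skills (call load_full_skill to use one):\n" + "\n".join(lines)

def load_full_skill(skill_name: str) -> str:
    """Tier 2: tool the LLM invokes to pull full skill content on demand."""
    if skill_name not in SKILLS:
        return f"Unknown skill: {skill_name}"
    return SKILLS[skill_name]["content"]
```

The key property is that the full `content` never appears in the Tier 1 prompt; it only enters the context when the model explicitly calls the tool.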
## SAID Protocol — on-chain Solana identity for Swarms agents

Adds automatic SAID Protocol identity registration to the Swarms `Agent` class as part of the default agent initialization flow.

### What it does
- **Default-on**: `auto_said=True` by default — every new Agent registers with SAID Protocol on first run
- Agents can opt out with `auto_said=False`
- Registration is free (no SOL required), non-blocking, and fails silently if offline
- Builds on-chain reputation over time through agent interactions

### SAID Protocol
[SAID Protocol](https://saidprotocol.com) provides on-chain identity, reputation scoring, and verification for AI agents on Solana. Every agent gets a cryptographically verifiable identity tied to a Solana wallet.

- Free pending registration → Verified badge (0.01 SOL) → SAID Passport soulbound NFT (0.05 SOL)
- Public agent directory: [saidprotocol.com/agents](https://saidprotocol.com/agents)

### Files changed
- `swarms/utils/said.py` — SAID registration helpers
- `swarms/structs/agent.py` — `auto_said=True` parameter in `Agent.__init__`, registers on startup