Make AI a Software Engineering Discipline.

OpenSymbolicAI brings Git, CI/CD, unit tests, and code review to AI behavior so your agents are reliable, debuggable, and maintainable.

Free and open source. MIT License.

Structural Security, Not Probabilistic Guardrails

Security isn't bolted on. It's architecturally guaranteed.

Traditional AI Agents vs. OpenSymbolicAI:

  • Data dumped into context → Data stays in variables
  • "Please don't access other users' data" → Code enforces boundaries
  • Hope the AI doesn't cause harm → Mutations require approval
  • Probabilistic guardrails → Structural guarantees
  • Cloud-dependent → Deploy anywhere
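The "data stays in variables" row is the key structural idea, and it can be sketched in plain Python (this is an illustrative toy, not the OpenSymbolicAI API): the planner only ever sees symbolic names, while the real values live in an execution-side store the model never reads.

```python
# Illustrative sketch: a symbolic boundary between planner and data.
# Real values live in a store on the execution side; only variable
# names ever appear in anything shown to the model.

class VariableStore:
    """Holds real data; the planner sees only the names."""
    def __init__(self):
        self._vars = {}

    def bind(self, name, value):
        self._vars[name] = value
        return name  # only the symbol crosses the boundary

    def resolve(self, name):
        return self._vars[name]

store = VariableStore()
symbol = store.bind("user_42_orders", [{"id": 1, "total": 99.0}])

# A prompt-based agent would paste the raw records into context.
# Here the plan contains just the string "user_42_orders".
plan = f"summarize({symbol})"   # no customer data in the prompt
data = store.resolve(symbol)    # real values stay in code
```

Because the boundary is enforced by code, "don't leak other users' data" stops being a polite request in the prompt and becomes a property of the architecture.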

Working agent in 3 steps

From install to running output — no config files, no boilerplate.

1. Install
pip install opensymbolicai-core ddgs
2. Define your first agent
agent.py
from ddgs import DDGS
from opensymbolicai import (
    PlanExecute, primitive, decomposition,
)
from opensymbolicai.llm import LLMConfig, Provider

class SearchAgent(PlanExecute):
    @primitive(read_only=True)
    def search(self, query: str, k: int = 5) -> list[str]:
        """Search the web via DuckDuckGo."""
        results = DDGS().text(query, max_results=k)
        return [r["body"] for r in results]

    @primitive(read_only=True)
    def answer(self, question: str, ctx: list[str]) -> str:
        """Answer a question given search context."""
        return self._llm.generate(
            "Answer based on context:\n"
            + "\n".join(ctx)
            + f"\n\nQ: {question}"
        ).text

    @decomposition(
        intent="What are the new features in Python 3.13?",
        expanded_intent="Search the web, then answer using results",
    )
    def web_qa(self) -> str:
        hits = self.search("Python 3.13 new features", k=3)
        return self.answer(
            "What are the new features in Python 3.13?", hits,
        )
3. Run it
agent = SearchAgent(llm=LLMConfig(
    provider=Provider.OLLAMA, model="qwen3:1.7b",
))
result = agent.run("What is Rust and why is it popular?")
print(result.result)

How It Works

Three concepts turn prompt spaghetti into maintainable software.

Define

Typed primitives: the atomic actions your agent can take, like search, retrieve, or send email.

Compose

Wire primitives into decompositions: named workflows the agent selects by matching user intent.

Run

Call agent.run() and intent matching picks the right decomposition. Guardrails are built in.
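The Define → Compose → Run loop can be sketched in a few lines of plain Python (a toy registry for illustration; the real decorators and intent matching live in opensymbolicai):

```python
# Toy sketch of Define -> Compose -> Run, not the library's internals.

registry = {}

def decomposition(intent):
    """Register a workflow under the intent it handles."""
    def wrap(fn):
        registry[intent] = fn
        return fn
    return wrap

# Define: atomic, typed actions.
def search(query: str) -> list[str]:
    return [f"result for {query}"]

def summarize(hits: list[str]) -> str:
    return "; ".join(hits)

# Compose: wire primitives into a named workflow.
@decomposition(intent="web question")
def web_qa(question: str) -> str:
    return summarize(search(question))

# Run: select the decomposition whose intent matches, then execute it.
def run(intent: str, question: str) -> str:
    return registry[intent](question)

answer = run("web question", "What is Rust?")
```

The point of the pattern is that once a request is matched to a decomposition, execution is ordinary function calls: deterministic, typed, and testable like any other code.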

Your AI systems deserve the same engineering rigor as your backend.

Tired of your AI doing random things you didn't ask for?

Your AI agent needs to search a database, summarize results, and email a report without ever touching the wrong data. With OpenSymbolicAI, you define those steps as code functions, not paragraphs of instructions.

  • No more 500-token prompts
  • No more guessing what the AI will do
  • No more untestable behavior

Engineering Certainty into AI

Production-Grade Reliability

Agents That Actually Work in Production

While LangChain hits 77.8% and CrewAI hits 73.3%, OpenSymbolicAI achieves a 100% framework pass rate on complex workflows. By replacing unpredictable prompts with type-safe primitives, you eliminate the randomness of agents that work on Tuesday but fail on Friday.

See the benchmarks

Compound Improvements

Fix Once, Improve Everywhere

Stop playing whack-a-mole with one-off prompt patches. Because the architecture uses reusable symbolic primitives, every fix automatically upgrades every workflow that uses it. Ten primitives combine in hundreds of ways. Twenty combine in thousands.

[Diagram: a handful of primitives compose into thousands of workflows]

Zero-Fail Tooling

0% Error Rate on External Actions

Standard agent frameworks face a 20% error rate when calling external tools. A symbolic boundary between planning and execution brings that to zero, so your agents never invent parameters or leak sensitive data during real-world execution.
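One way such a boundary can work is to validate a planned tool call against the primitive's actual signature before anything external runs. The sketch below uses the standard-library `inspect` module; it is an assumed illustration of the idea, not OpenSymbolicAI's implementation.

```python
# Sketch: reject hallucinated parameters at the plan/execute boundary.
import inspect

def send_report(recipient: str, subject: str) -> str:
    """A stand-in external action (hypothetical tool)."""
    return f"sent '{subject}' to {recipient}"

def checked_call(fn, **planned_args):
    sig = inspect.signature(fn)
    try:
        sig.bind(**planned_args)  # raises TypeError on bad or missing args
    except TypeError as exc:
        return ("rejected", str(exc))
    return ("ok", fn(**planned_args))

# A well-formed plan executes; an invented parameter never reaches the tool.
good = checked_call(send_report, recipient="ops@example.com", subject="Q3")
bad = checked_call(send_report, recipient="ops@example.com", urgency="high")
```

Invalid calls fail closed before any side effect happens, which is what turns a probabilistic error rate into a structural zero.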

Optimization by Design

Fewer Tokens, Lower Cost

Reliability shouldn't come with a token tax. The LLM plans once and your code executes: 3.1x fewer tokens than LangChain, 5.8x fewer than CrewAI. A $0.006/task open-source model on OpenSymbolicAI outperforms standalone GPT-4.
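A back-of-envelope sketch of why plan-once is cheaper (stub model, illustrative numbers): a loop-style agent consults the model at every step, while a plan-once agent pays a single call and lets ordinary code execute the rest.

```python
# Stub LLM that just counts how many times it is consulted.
class StubLLM:
    def __init__(self):
        self.calls = 0

    def generate(self, prompt: str) -> str:
        self.calls += 1
        return "plan: search -> answer"

steps = ["search", "answer", "format"]

# Loop-style agent: one model call per step.
loopy = StubLLM()
for step in steps:
    loopy.generate(f"what next after {step}?")

# Plan-once agent: one call produces the plan; code runs the steps.
planner = StubLLM()
plan = planner.generate("plan these steps: " + ", ".join(steps))
for step in steps:
    pass  # deterministic code execution, no model in the loop
```

The gap widens with workflow length: token cost grows per step in the loop-style design but stays roughly constant when planning happens once.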

Make your AI engineers 10x more productive

Reduce debugging time. Version-control behavior changes. Onboard new engineers faster.

Read the Docs
pip install opensymbolicai-core