The Future of Robyn: The AI Spiritual Successor to Flask & Django

Four years ago I set out to build a fast Flask. A really fucking fast Flask called Robyn. The same tiny decorator-based API that everyone loves, but with a Rust engine that can outrun anything in Python-land. I was obsessed with performance, async correctness, and keeping the core lean. We went from a toy to a tool used by thousands.
But with every step, one thought kept returning:
What if we went further than fast?
Robyn was the first modern Python framework to ship a built-in web server. Then authentication. Then scaffolding. A batteries-included approach, but done the right way. Clean, composable, and actually fun.
And yet, the world didn’t stay still.
AI broke the web stack. And we’ve all been coping.
In the last two years, AI has gone from “optional” to foundational. APIs don’t just CRUD anymore - they converse, plan, reason, and remember.
But the tools haven’t kept up.
We’re duct-taping LangChain to Flask. Wrapping FastAPI in RAG hacks. Running agent loops in notebooks. It’s messy. Painful. And deeply unscalable.
There’s no real successor to Django for this new era.
And so, we cope. We glue. We patch.
But maybe… we don’t have to anymore.
What if Robyn could be the spiritual successor to Flask and Django?
Still fast. Still async-native.
But now with batteries included for AI-native workflows.
Imagine a framework that speaks the language of modern apps: memory, context, agents, and tools. Not just views and models.
- Memory-first (powered by Mem0 or any backend)
- Agentic primitives that just work (LangGraph, Autogen, or your runner)
- Type-safe APIs with real-time validation and tracing
- Unified admin - permissions for both humans and agents
- AI-aware authentication
- FastAPI-style params, Pydantic-style validation (see the sketch just below this list)
- RSGI/ASGI cross compatibility for existing middlewares
- And yes, MCP and Web routers in the same framework
In short: batteries included, but opinionated and extensible. Like a brainchild of Django and Flask, but for apps that think.
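To make the typed-params bullet concrete, here is a minimal sketch of what FastAPI-style params with Pydantic-style validation could look like. The Pydantic part is real today; the handler wiring and the query_params access pattern below follow the snippets in this post and are assumptions about the eventual API, not current behaviour.
from pydantic import BaseModel, ValidationError
from robyn import Robyn

app = Robyn(__file__)

class AskQuery(BaseModel):
    q: str
    limit: int = 5  # "limit=abc" in the URL fails validation with a clear error

@app.get("/ask")
async def ask(request):
    # Hypothetical glue: validate raw query params by hand until Robyn does it natively.
    try:
        params = AskQuery(
            q=request.query_params.get("q", [""])[0],
            limit=request.query_params.get("limit", ["5"])[0],
        )
    except ValidationError as e:
        return {"errors": e.errors()}
    return {"q": params.q, "limit": params.limit}
The end goal is for Robyn to lift this straight out of the handler signature, FastAPI-style, so the try/except boilerplate disappears.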
Why should agents be part of your web stack?
Today, agents are an afterthought. You spin up a LangChain runner, plug in a memory backend, cross your fingers, and ship duct-taped code.
Robyn aims to change that. Agents should be the new primitive, like routes, auth or forms.
Define them once. Give them memory. Expose them as simple endpoints. Done.
from robyn import Robyn
from robyn.ai import agent, memory

app = Robyn(__file__)

# One agent, one memory backend, exposed as a plain route.
mem = memory(provider="mem0", user_id="guest")
chat = agent(runner="simple", memory=mem)

@app.get("/chat")
async def ask(request):
    q = request.query_params.get("q", [""])[0]
    return await chat.run(q)
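Assuming the app above is started with app.start(port=8080), exercising the agent is just an HTTP call from anywhere; there is no separate agent service to stand up:
# Quick smoke test against the locally running app (port 8080 assumed above).
from urllib.parse import quote
from urllib.request import urlopen

print(urlopen("http://localhost:8080/chat?q=" + quote("what did we talk about yesterday?")).read().decode())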
No need to manage various microservices. Just manage your agents like you manage your HTTP routes.
And I want to make them pluggable: use our built-in runner, or swap in LangGraph, CrewAI, or your own. Like an ORM for reasoning, but async, composable, and Pythonic.
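To be concrete about "pluggable": none of the code below is shipped API. It is a sketch of the kind of minimal async interface a runner could satisfy, so that the built-in runner, a LangGraph or CrewAI wrapper, and your own class become interchangeable. The Runner name, and the idea of passing an object instead of a string like "simple", are both assumptions.
from typing import Protocol

class Runner(Protocol):
    # Anything with this coroutine shape could, hypothetically, be passed to agent(runner=...).
    async def run(self, prompt: str) -> str: ...

class EchoRunner:
    """A trivial stand-in runner, handy for tests and local development."""
    async def run(self, prompt: str) -> str:
        return f"echo: {prompt}"
Whether runners end up referenced by name ("simple", "openai") or passed as objects is exactly the kind of design detail I want feedback on.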
What if MCPs were just routes?
Right now, Model Context Protocol (MCP) servers feel like academic experiments, with scattered implementations and separate frameworks. But what if they were just routes?
What if you could expose agents like APIs, composably and natively?
With Robyn, you can do exactly that:
from robyn import Robyn, auth

app = Robyn(__file__)

@app.before_request()
@auth(required=True)
async def check_user(request):
    return request.user

@app.mcp.tool()
def simple_chat(message: str) -> str:
    return f"Processed: {message}"

@app.mcp.resource("notes://{note_id}")
def get_note(note_id: str) -> str:
    """Get a note by ID"""
    return f"Note {note_id}: This is a sample note"
No extra servers. No sidecars.
What Robyn Apps Will Feel Like
@app.get("/agent")
async def agent_endpoint(request):
q = request.query_params.get("q", [""])[0]
user_id = request.query_params.get("user_id", ["guest"])[0]
# Initialize agent with memory for this user
mem = memory(provider="mem0", user_id=user_id)
agent = chat_agent(runner="openai", memory=mem)
return await agent.run(q)
@app.get("/memory")
async def memory_endpoint(request):
user_id = request.query_params.get("user_id", ["guest"])[0]
action = request.query_params.get("action", ["get"])[0]
mem = memory(provider="mem0", user_id=user_id)
if action == "get":
return {"memories": await mem.get_all()}
elif action == "clear":
await mem.clear()
return {"message": "Memory cleared"}
@app.post("/users")
async def create_user(request):
data = request.json()
# ORM-style database operations
user = User.create(
name=data.get("name"),
email=data.get("email"),
preferences=data.get("preferences", {})
)
return {"user_id": user.id, "created": True}
Start small. Grow fast without any boilerplate.
Performance is still at the core
Python should feel fast, not patched-together fast.
Robyn keeps its Rust runtime where it counts: low-level async I/O, concurrency, and routing. The rest? Stays Pythonic.
We’re working with Granian for native RSGI/ASGI support—outsourcing server internals to proven, production-grade implementations.
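As a concrete sketch of where that lands: the granian invocation below is the real CLI, but the as_asgi() hook on the Robyn side is imagined for illustration; today Robyn still runs its own built-in Rust server via app.start().
# app.py
from robyn import Robyn

app = Robyn(__file__)

@app.get("/")
async def index(request):
    return "hello from Robyn"

# Hypothetical future hook: expose an ASGI-compatible callable so existing
# ASGI middleware and servers can wrap the app. Not available today.
asgi_app = app.as_asgi()

# The process itself would then be handed to Granian:
#   granian --interface asgi app:asgi_app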
Why All of This Matters
Right now, you’re building apps with agents, memory, and LLMs - but your framework doesn’t understand any of those things.
Robyn will.
Because it’s not just adding AI “support” - it’s being redesigned around the workflows AI apps actually need.
What’s Next? (Non-AI-native features)
Robyn today is already fast. But here’s what’s coming:
- Stronger typing and validation
- More Pythonic, minimal APIs
- Full RSGI/ASGI integration (via Granian)
- First-class form support
- Typed query params that just work
- Better docs, and more content
- and more ….
And most importantly: Robyn v1.0 - the stable launch of a full-stack framework for cognitive software.
Where we're going
Robyn starts with the same minimal charm you expect from a microframework, just one route and a return.
But behind that single decorator?
A full stack, waiting to be unlocked.
ORMs if you want them.
Memory, agents, typing, auth, admin, context -- when you need them.
Clean surface on day one. Rich features on day thirty.
That’s what a modern framework should feel like.
Thank you for being part of Robyn’s journey so far. You took it from a side project to a serious platform.
The next step is the build toward Robyn v1.0. 🦇 This is an ambitious project, but I believe it is truly worth a shot. I imagine it will take a few months. Please get involved: I would love feedback, pushback, and of course your support. ❤️
And please spread the word! The more people know about Robyn, the more we can build together.
Check out Robyn 0.70.0 to try these features in action. It is an early implementation and not yet recommended for production.