1. What is Mastra Framework
Overview and Purpose
Mastra is an open-source TypeScript framework designed to help developers build AI applications and features quickly. Created by the team behind Gatsby.js (Abhi Aiyer, Sam Bhagwat, and Shane Thomas), Mastra provides a comprehensive "batteries-included" solution that eliminates the need to glue together multiple third-party libraries for building GenAI and agentic workflows.
Core Philosophy: "One stack, one set of primitives, no glue code" - allowing developers to focus on product logic rather than infrastructure complexity.
Key Features
- LLM Model Routing: Uses Vercel AI SDK for unified interface across providers (OpenAI, Anthropic, Google Gemini)
- Agents: Autonomous AI entities that choose sequences of actions with persistent memory
- Workflows: Durable graph-based state machines built on XState with branching, loops, and human-in-the-loop support
- Tools: Typed functions with built-in integration access and parameter validation
- RAG: Complete ETL pipeline for knowledge bases with vector search capabilities
- Integrations: Auto-generated, type-safe API clients for third-party services
- Evals: Automated testing for LLM outputs using model-graded, rule-based, and statistical methods
How Mastra Differs from Other Frameworks
vs. LangChain: Mastra offers a more integrated, opinionated approach with built-in workflow capabilities and unified memory systems, while LangChain focuses on modular flexibility.
vs. CrewAI: Mastra provides broader AI primitives beyond multi-agent orchestration, with better TypeScript support and comprehensive tooling.
vs. AutoGPT: Mastra emphasizes developer experience with TypeScript-first approach and production-ready deployment options.
vs. LangGraph.js: Mastra offers a broader feature set with local dev environment, built-in tracing, and better developer tooling.
2. Prerequisites and Setup
Prerequisites
System Requirements:
- Node.js v20.0 or higher
- TypeScript knowledge
- Command line access
LLM Provider Access:
- OpenAI (recommended for beginners)
- Anthropic (requires credit card)
- Google Gemini (generous free tier)
- Groq, Cerebras, or local LLMs via Ollama
Installation Process
Quick Start (Recommended)
npx create-mastra@latest
Interactive prompts will guide you through:
- Project name
- Components to install (agents, tools, workflows)
- Default LLM provider
- Example code inclusion
- IDE integration (Cursor/Windsurf MCP server)
Non-Interactive Installation
npx create-mastra@latest my-app \
--components agents,tools,workflows \
--llm openai \
--example
Manual Installation
# Create project
mkdir my-mastra-app && cd my-mastra-app
# Initialize and install dependencies
npm init -y
npm install @mastra/core@latest zod@^3 @ai-sdk/openai
npm install -D typescript tsx @types/node mastra@latest
Development Environment Setup
Project Structure
my-mastra-app/
├── src/
│   └── mastra/
│       ├── agents/       # AI agent definitions
│       ├── tools/        # Custom tools
│       ├── workflows/    # Multi-step workflows
│       └── index.ts      # Main Mastra config
├── .env                  # Environment variables
├── tsconfig.json
└── package.json
Environment Variables
# .env file
OPENAI_API_KEY=your_openai_api_key_here
# Optional providers
ANTHROPIC_API_KEY=your_anthropic_key_here
GOOGLE_GENERATIVE_AI_API_KEY=your_gemini_key_here
TypeScript Configuration
{
"compilerOptions": {
"target": "ES2022",
"module": "ES2022",
"moduleResolution": "bundler",
"esModuleInterop": true,
"strict": true,
"skipLibCheck": true,
"outDir": "dist"
},
"include": ["src/**/*"],
"exclude": ["node_modules", "dist", ".mastra"]
}
3. Core Concepts of Mastra
Agents
Agents are autonomous AI entities in which a language model chooses the sequence of actions to take:
import { Agent } from '@mastra/core/agent';
import { Memory } from '@mastra/memory';
import { openai } from '@ai-sdk/openai';

const agent = new Agent({
  name: "Weather Assistant",
  instructions: "You are a helpful weather assistant...",
  model: openai('gpt-4o-mini'),
  tools: { weatherTool },
  memory: new Memory(), // persistent memory; see "Adding Memory" in section 4 for options
});
Tools
Typed functions that agents can execute:
import { createTool } from '@mastra/core/tools';
import { z } from 'zod';
export const weatherTool = createTool({
id: 'get-weather',
description: 'Get current weather for a location',
inputSchema: z.object({
location: z.string().describe('City name'),
}),
outputSchema: z.object({
temperature: z.number(),
conditions: z.string(),
humidity: z.number()
}),
execute: async ({ context }) => {
// Implementation
return await fetchWeatherData(context.location);
}
});
Workflows
Graph-based state machines for complex operations:
import { createWorkflow, createStep } from '@mastra/core/workflows';
import { z } from 'zod';

// researchStep, analyzeStep, finalizeStep, and reviewStep are created
// with createStep(), as shown in section 4
const researchWorkflow = createWorkflow({
  id: 'research-workflow',
  inputSchema: z.object({ topic: z.string() }),
  outputSchema: z.object({ report: z.string() }),
})
  .then(researchStep)
  .then(analyzeStep)
  // Route on the previous step's output: high confidence finalizes, low goes to review
  .branch([
    [async ({ inputData }) => inputData.confidence > 0.8, finalizeStep],
    [async ({ inputData }) => inputData.confidence <= 0.8, reviewStep],
  ])
  .commit();
Integrations
Auto-generated, type-safe API clients that can be used as tools or workflow steps.
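A minimal sketch of the pattern; the @mastra/github package name and getStaticTools() call follow Mastra's documented integration style but should be verified against the current release:
import { Agent } from '@mastra/core/agent';
import { openai } from '@ai-sdk/openai';
// Assumed integration package; confirm the name for your Mastra version
import { GithubIntegration } from '@mastra/github';

const github = new GithubIntegration({
  config: { PERSONAL_ACCESS_TOKEN: process.env.GITHUB_PAT! },
});

// Integration-provided tools plug into an agent like any custom tool
export const repoAgent = new Agent({
  name: 'Repo Assistant',
  instructions: 'Answer questions about our GitHub repositories.',
  model: openai('gpt-4o-mini'),
  tools: { ...(await github.getStaticTools()) },
});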
RAG (Retrieval-Augmented Generation)
Complete ETL pipeline for knowledge bases (see the sketch after this list):
- Document processing (text, HTML, Markdown, JSON)
- Embedding generation
- Vector storage (Pinecone, pgvector, Qdrant)
- Semantic search and retrieval
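A condensed sketch of the ingestion side, assuming @mastra/rag's MDocument API and the AI SDK's embedMany helper:
import { MDocument } from '@mastra/rag';
import { openai } from '@ai-sdk/openai';
import { embedMany } from 'ai';

const markdownSource = '# Product FAQ\n...'; // your raw document text

// 1. Parse and chunk a source document
const doc = MDocument.fromMarkdown(markdownSource);
const chunks = await doc.chunk({ strategy: 'recursive', size: 512, overlap: 50 });

// 2. Generate one embedding per chunk
const { embeddings } = await embedMany({
  model: openai.embedding('text-embedding-3-small'),
  values: chunks.map((chunk) => chunk.text),
});

// 3. Upsert the embeddings into your vector store (Pinecone, pgvector, Qdrant)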
Evals
Automated testing for LLM outputs:
- Model-graded evaluation
- Rule-based checks
- Statistical methods
- Built-in metrics for toxicity, bias, relevance, and accuracy
4. Step-by-Step Tutorial: Building Your First Agent
Step 1: Create a Weather Tool
// src/mastra/tools/weather-tool.ts
import { createTool } from "@mastra/core/tools";
import { z } from "zod";
export const weatherTool = createTool({
id: "get-weather",
description: "Get current weather for a location",
inputSchema: z.object({
location: z.string().describe("City name"),
}),
outputSchema: z.object({
temperature: z.number(),
conditions: z.string(),
humidity: z.number(),
}),
execute: async ({ context }) => {
// Using Open-Meteo API (no key required)
const geocodingUrl = `https://geocoding-api.open-meteo.com/v1/search?name=${encodeURIComponent(context.location)}`;
const geoResponse = await fetch(geocodingUrl);
const geoData = await geoResponse.json();
if (!geoData.results?.[0]) {
throw new Error(`Location '${context.location}' not found`);
}
const { latitude, longitude } = geoData.results[0];
const weatherUrl = `https://api.open-meteo.com/v1/forecast?latitude=${latitude}&longitude=${longitude}&current=temperature_2m,relative_humidity_2m`;
const weatherResponse = await fetch(weatherUrl);
const weatherData = await weatherResponse.json();
return {
temperature: weatherData.current.temperature_2m,
conditions: "Clear", // Simplified for example
humidity: weatherData.current.relative_humidity_2m,
};
},
});
Step 2: Create Your Agent
// src/mastra/agents/weather.ts
import { Agent } from "@mastra/core/agent";
import { openai } from "@ai-sdk/openai";
import { weatherTool } from "../tools/weather-tool";
export const weatherAgent = new Agent({
name: "Weather Assistant",
instructions: `You are a helpful weather assistant.
- Always ask for a location if none is provided
- Provide temperature in both Celsius and Fahrenheit
- Include relevant advice based on conditions`,
model: openai("gpt-4o-mini"),
tools: { weatherTool },
});
Step 3: Register with Mastra
// src/mastra/index.ts
import { Mastra } from "@mastra/core";
import { weatherAgent } from "./agents/weather";
export const mastra = new Mastra({
agents: { weatherAgent },
});
Step 4: Start Development Server
npm run dev
# or
mastra dev
Visit:
- Playground: http://localhost:4111/
- API: http://localhost:4111/api
- Swagger UI: http://localhost:4111/swagger-ui
Progressive Examples
Adding Memory
import { Memory } from '@mastra/memory';
export const weatherAgent = new Agent({
name: "Weather Assistant",
// ... other config
memory: new Memory({
options: {
lastMessages: 20,
semanticRecall: {
topK: 3,
messageRange: { before: 2, after: 1 },
},
},
}),
});
Creating Workflows
import { createWorkflow, createStep } from "@mastra/core/workflows";
import { z } from "zod";

const fetchWeatherStep = createStep({
  id: "fetch-weather",
  inputSchema: z.object({ city: z.string() }),
  outputSchema: z.object({ temperature: z.number(), conditions: z.string() }),
  execute: async ({ inputData }) => {
    return await getWeather(inputData.city);
  },
});

const suggestActivitiesStep = createStep({
  id: "suggest-activities",
  inputSchema: z.object({ temperature: z.number(), conditions: z.string() }),
  outputSchema: z.object({ activities: z.array(z.string()) }),
  execute: async ({ inputData }) => {
    // Use AI to suggest activities based on weather
    return await generateActivities(inputData);
  },
});

export const weatherWorkflow = createWorkflow({
  id: "weather-activity-workflow",
  inputSchema: z.object({ city: z.string() }),
  outputSchema: z.object({ activities: z.array(z.string()) }),
})
  .then(fetchWeatherStep)
  .then(suggestActivitiesStep)
  .commit();
5. Best Practices for Designing Autonomous Agents
Agent Design Patterns
1. Declarative Agent Definition
const agent = new Agent({
name: 'Assistant',
instructions: 'Clear, specific instructions...',
model: openai('gpt-4o-mini'),
tools: { /* tools */ }
});
2. Hybrid Agent-Workflow Pattern
Combine deterministic workflows with autonomous decision-making for optimal reliability and flexibility.
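For example, a deterministic workflow step can delegate a single open-ended decision to an agent; a minimal sketch, reusing the weatherAgent from section 4:
import { createStep } from '@mastra/core/workflows';
import { z } from 'zod';
import { weatherAgent } from '../agents/weather'; // path follows the tutorial layout

// Deterministic step boundaries, autonomous reasoning inside the step
const adviseStep = createStep({
  id: 'advise',
  inputSchema: z.object({ city: z.string() }),
  outputSchema: z.object({ advice: z.string() }),
  execute: async ({ inputData }) => {
    const result = await weatherAgent.generate(
      `What should I pack for a trip to ${inputData.city}?`
    );
    return { advice: result.text };
  },
});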
3. Multi-Agent Orchestration
- Hierarchical: Supervisor agents coordinating specialist agents (see the sketch after this list)
- Sequential: Workflow-based agent coordination
- Parallel: Multiple agents working on independent tasks
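A sketch of the hierarchical pattern, wrapping a specialist agent as a tool the supervisor can call (researchAgent is an assumed specialist defined elsewhere):
import { Agent } from '@mastra/core/agent';
import { createTool } from '@mastra/core/tools';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';
import { researchAgent } from './agents/research'; // assumed specialist agent

// The supervisor sees the specialist only as a tool
const researchTool = createTool({
  id: 'delegate-research',
  description: 'Delegate a research question to the research specialist',
  inputSchema: z.object({ question: z.string() }),
  outputSchema: z.object({ answer: z.string() }),
  execute: async ({ context }) => {
    const result = await researchAgent.generate(context.question);
    return { answer: result.text };
  },
});

export const supervisorAgent = new Agent({
  name: 'Supervisor',
  instructions: 'Coordinate specialists; route research questions through delegate-research.',
  model: openai('gpt-4o'),
  tools: { researchTool },
});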
Instruction Writing Best Practices
- Be Specific and Clear: Use concrete language, avoid ambiguity
- Define Role and Context: Establish agent's identity and expertise
- Set Behavioral Guidelines: Include response style and constraints
- Include Tool Usage Instructions: Explain when and how to use tools
- Add Safety Guardrails: Include boundaries and fallback behaviors
Example:
const instructions = `You are a customer support agent for our SaaS platform.
ROLE: Provide helpful, accurate technical support
GUIDELINES:
- Always be polite and professional
- Ask clarifying questions when needed
- Use searchKnowledge for product information
- Use createTicket for complex issues
TOOLS:
- searchKnowledge: For product documentation
- createTicket: For escalation
- escalateToHuman: For billing issues`;
Tool Design Principles
- Single Responsibility: One clear purpose per tool
- Clear Descriptions: Unambiguous tool descriptions
- Strong Typing: Use Zod schemas for validation
- Error Handling: Comprehensive error messages
Memory Management
import { Memory } from '@mastra/memory';
import { LibSQLStore } from '@mastra/libsql';

const optimizedMemory = new Memory({
storage: new LibSQLStore({ url: 'file:agent-memory.db' }),
options: {
lastMessages: 100,
semanticRecall: {
topK: 5,
messageRange: { before: 2, after: 1 }
},
workingMemory: { enabled: true }
}
});
6. Testing and Debugging Mastra Agents
Using Mastra's Eval System
import { openai } from '@ai-sdk/openai';
import { SummarizationMetric } from '@mastra/evals/llm';
import { ContentSimilarityMetric } from '@mastra/evals/nlp';

const model = openai('gpt-4o-mini'); // judge model for the model-graded metric
const agent = new Agent({
name: 'Content Writer',
// ... config
evals: {
summarization: new SummarizationMetric(model),
contentSimilarity: new ContentSimilarityMetric(),
}
});
Local Development with "mastra dev"
Features available at http://localhost:4111/:
- Interactive playground for testing agents
- Agent execution traces
- Tool testing in isolation
- Workflow visualization
- REST API testing
Writing Tests
import { describe, it, expect } from 'vitest';
import { weatherTool } from '../src/mastra/tools/weather-tool'; // adjust the path to your layout
describe('Weather Tool', () => {
it('should return weather data', async () => {
const result = await weatherTool.execute({
context: { location: 'London' }
});
expect(result).toHaveProperty('temperature');
expect(typeof result.temperature).toBe('number');
});
});
Debugging Strategies
- Use Playground Traces: Step-by-step execution visualization
- Enable Debug Mode: Add logging to tools and agents (see the logger sketch after this list)
- Memory Inspection: Check agent memory state
- Tool Isolation: Test tools independently
- Structured Output: Use schemas for predictable outputs
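For the debug-mode item above, a minimal sketch using the @mastra/loggers package:
import { Mastra } from '@mastra/core';
import { PinoLogger } from '@mastra/loggers';
import { weatherAgent } from './agents/weather';

export const mastra = new Mastra({
  agents: { weatherAgent },
  // Verbose structured logs during development; raise the level in production
  logger: new PinoLogger({ name: 'mastra-dev', level: 'debug' }),
});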
7. Cloud Deployment with Serverless Functions
Vercel Deployment (Recommended)
import { VercelDeployer } from "@mastra/deployer-vercel";
export const mastra = new Mastra({
agents: { /* agents */ },
deployer: new VercelDeployer({
teamSlug: "your-team",
projectName: "your-project",
token: process.env.VERCEL_TOKEN
})
});
Deploy:
npx mastra build
vercel --prod .mastra/output
AWS Lambda Deployment
Docker-based approach:
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
FROM public.ecr.aws/lambda/nodejs:20
COPY --from=builder /app/node_modules ${LAMBDA_TASK_ROOT}/node_modules
COPY . ${LAMBDA_TASK_ROOT}
CMD ["index.handler"]
Netlify Functions
# netlify.toml
[build]
functions = ".mastra/output/functions"
[functions]
node_bundler = "nft"
Cloudflare Workers
npx mastra build
wrangler deploy .mastra/output
Cold Start Optimization
- Minimize bundle size
- Optimize memory allocation
- Initialize SDK clients globally (see the sketch after this list)
- Use provisioned concurrency for critical paths
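Initializing clients globally means hoisting construction to module scope so warm invocations reuse the instance; a generic sketch using the official openai package:
import OpenAI from 'openai';

// Module scope: constructed once per container, reused across warm invocations
const openaiClient = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

export async function handler(_event: unknown) {
  // Per-request work only; no client construction here
  const completion = await openaiClient.chat.completions.create({
    model: 'gpt-4o-mini',
    messages: [{ role: 'user', content: 'ping' }],
  });
  return { statusCode: 200, body: completion.choices[0].message.content ?? '' };
}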
8. Browserbase Integration for Web Automation
What is Browserbase?
Browserbase is a managed headless browser infrastructure platform designed for AI agents and web automation at scale. It provides:
- Cloud-based browser instances
- Anti-detection features (captcha solving, residential proxies)
- Enterprise-grade security (SOC-2, HIPAA compliant)
- Instant scaling to thousands of browsers
Setup and Configuration
npm install @browserbasehq/stagehand
Environment variables:
BROWSERBASE_API_KEY=your_api_key
BROWSERBASE_PROJECT_ID=your_project_id
Creating Web Automation Tools
import { createTool } from '@mastra/core/tools';
import { z } from 'zod';
import { Stagehand } from '@browserbasehq/stagehand';
export const webActTool = createTool({
id: 'web-act',
description: 'Take actions on webpages',
inputSchema: z.object({
url: z.string().optional(),
action: z.string().describe('Action to perform'),
}),
execute: async ({ context }) => {
const stagehand = new Stagehand({
env: "BROWSERBASE",
apiKey: process.env.BROWSERBASE_API_KEY,
projectId: process.env.BROWSERBASE_PROJECT_ID,
});
await stagehand.init();
const page = stagehand.page;
if (context.url) {
await page.goto(context.url);
}
await page.act(context.action);
await stagehand.close(); // release the remote browser session
return { success: true };
},
});
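The webObserveTool and webExtractTool referenced in the next example follow the same shape. As an illustration, here is a sketch of an extraction tool built on Stagehand's page.extract(); the tool id and output schema are assumptions for this example:
export const webExtractTool = createTool({
  id: 'web-extract',
  description: 'Extract structured data from a webpage',
  inputSchema: z.object({
    url: z.string().optional(),
    instruction: z.string().describe('What to extract'),
  }),
  execute: async ({ context }) => {
    const stagehand = new Stagehand({
      env: 'BROWSERBASE',
      apiKey: process.env.BROWSERBASE_API_KEY,
      projectId: process.env.BROWSERBASE_PROJECT_ID,
    });
    await stagehand.init();
    if (context.url) await stagehand.page.goto(context.url);
    // Stagehand extracts data matching a Zod schema from the live page
    const data = await stagehand.page.extract({
      instruction: context.instruction,
      schema: z.object({ items: z.array(z.string()) }),
    });
    await stagehand.close();
    return data;
  },
});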
Web Agent Example
export const webAgent = new Agent({
name: 'Web Assistant',
instructions: `You can navigate websites and extract information.
Use webActTool for actions, webObserveTool for finding elements,
and webExtractTool for data extraction.`,
model: openai('gpt-4o'),
tools: { webActTool, webObserveTool, webExtractTool },
});
9. Production Deployment Considerations
Security Best Practices
Authentication & Authorization
export const mastra = new Mastra({
server: {
middleware: [{
handler: async (c, next) => {
const authHeader = c.req.header("Authorization");
if (!authHeader?.startsWith("Bearer ")) {
return new Response("Unauthorized", { status: 401 });
}
const token = authHeader.substring(7);
const isValid = await validateJWT(token);
if (!isValid) {
return new Response("Forbidden", { status: 403 });
}
await next();
},
path: "/api/*"
}]
}
});
Rate Limiting
Implement request throttling to prevent abuse and control costs.
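A minimal sketch using the same middleware hook shown above: a naive in-memory fixed-window limiter keyed by client IP (swap in Redis for multi-instance deployments):
const WINDOW_MS = 60_000;
const MAX_REQUESTS = 60;
const hits = new Map<string, { count: number; windowStart: number }>();

export const rateLimitMiddleware = {
  path: '/api/*',
  handler: async (c: any, next: () => Promise<void>) => {
    const ip = c.req.header('x-forwarded-for') ?? 'unknown';
    const now = Date.now();
    const entry = hits.get(ip);
    if (!entry || now - entry.windowStart > WINDOW_MS) {
      // New window for this client
      hits.set(ip, { count: 1, windowStart: now });
    } else if (++entry.count > MAX_REQUESTS) {
      return new Response('Too Many Requests', { status: 429 });
    }
    await next();
  },
};
Register it alongside the auth middleware in the server.middleware array.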
Monitoring and Observability
export const mastra = new Mastra({
telemetry: {
serviceName: "mastra-production",
enabled: true,
sampling: { type: "always_on" },
export: {
type: "otlp",
endpoint: "https://api.honeycomb.io",
headers: {
"x-honeycomb-team": process.env.HONEYCOMB_API_KEY
}
}
}
});
Supported providers:
- SigNoz (open-source APM)
- Langfuse (LLM-specific)
- New Relic, Datadog, Honeycomb
Scaling Strategies
- Auto-scaling: Leverage platform-specific auto-scaling
- Database optimization: Use connection pooling
- Caching: Implement multi-layer caching
- Session management: Use Redis for distributed state
Cost Optimization
- Set appropriate function timeouts
- Balance memory allocation vs performance
- Implement connection pooling
- Use caching strategies
- Monitor and optimize LLM token usage
10. Real-World Examples and Use Cases
Production Deployments
WorkOS Integration
- Building AI agents for workplace automation
- Meme generator workshop demonstrating agent orchestration
Artifact - "Cursor for Hardware"
- Y Combinator S25 company
- AI-native IDE for hardware engineers
- Automating electrical system design
NotebookLM Clone
- Multi-agent orchestration for content creation
- RAG implementation with PgVector
- Audio generation with Play.ai
Deep Research Assistant
- Nested workflows with suspend/resume
- Integration with Exa API
- Human-in-the-loop refinement
Example Repositories
- weather-agent - Simple weather assistant
- mastra-auth-examples - Authentication patterns
- template-deep-research - Research assistant
11. Common Pitfalls and Troubleshooting
Common Issues
SQLite Telemetry Error
- Problem: SQLITE_ERROR: no such table: mastra_traces
- Solution: Disable telemetry temporarily or ensure proper table creation
Memory Type Compatibility
- Problem: Type mismatches between packages
- Solution: Ensure all Mastra packages are on compatible versions
Workflow Errors
- Use built-in retry mechanisms
- Implement proper error boundaries
- Monitor runs with the .watch() method (see the sketch below)
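A minimal sketch of watching a run, assuming the createRun()/watch() run API and the weatherWorkflow from section 4:
const run = weatherWorkflow.createRun();
run.watch((event) => {
  // Inspect step transitions and statuses as they stream in
  console.log('workflow event:', event);
});
const result = await run.start({ inputData: { city: 'London' } });
console.log(result.status);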
Performance Issues
- Agent response time: Optimize model selection and prompts
- Memory retrieval: Adjust topK and message limits
- Tool execution: Implement caching for expensive operations
Debugging Tips
- Use playground traces for execution flow
- Enable verbose logging during development
- Test tools in isolation before integration
- Use structured outputs for predictability
12. Resources for Further Learning
Official Documentation
- Main docs: mastra.ai/docs
- API reference: mastra.ai/reference
- Examples: mastra.ai/examples
Community
- Discord: 1000+ members, active support
- GitHub: github.com/mastra-ai/mastra (13k+ stars)
- Contributing: Open issues for discussion before PRs
Learning Materials
- Workshop repositories for workflows and RAG
IDE Integration
Install MCP server for enhanced development:
npm install -g @mastra/mcp-docs-server
Configure for your IDE:
- Cursor: .cursor/mcp.json
- Windsurf: ~/.codeium/windsurf/mcp_config.json
- VSCode: ~/.vscode/mcp.json
Additional Resources
- Video tutorials: Check Discord and YouTube
- Template repositories: Multiple starter projects on GitHub
- Blog posts: Case studies and deep dives on mastra.ai/blog
- Y Combinator network: Connect with other YC companies using Mastra
Conclusion
Mastra provides a comprehensive, production-ready framework for building AI agents with TypeScript. Its batteries-included approach, strong typing, and cloud-first design make it an excellent choice for developers building modern AI applications. Start with the quick tutorials, leverage the active community, and progressively build more complex agents and workflows as you gain expertise.