MCP Tools
The AI agent in Teta is more than a language model — it is an agent with access to a rich set of tools that let it interact with your project, the web, and external services. These tools are provided through the Model Context Protocol (MCP).
What is MCP?
The Model Context Protocol (MCP) is an open standard for connecting AI models to external tools and data sources. Instead of the AI being limited to generating text, MCP lets it take real actions: read files, run commands, search the web, generate images, and more.
In Teta, MCP servers run as background processes inside your sandbox. The AI agent discovers available tools through configuration files and can call any tool as part of its reasoning process. Each tool call is executed in the sandbox, and the result is fed back to the agent so it can continue working.
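Under the hood, MCP messages are JSON-RPC 2.0, and a tool invocation uses the protocol's `tools/call` method. As a rough sketch (the tool name and argument key below are illustrative, not Teta's exact schema):

```python
import json

def make_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    """Build a JSON-RPC 2.0 request for an MCP tools/call invocation."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

# Illustrative call: ask a Read-style tool for a file's contents.
request = make_tool_call(1, "Read", {"file_path": "src/routes/+page.svelte"})
```

The server replies with a matching JSON-RPC response whose result is handed back to the agent as the tool's output.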
Available tool categories
File tools
The agent has full read and write access to your project's file system:
| Tool | Description |
|---|---|
| Read | Read the contents of any file by path. Supports text files, images, PDFs, and Jupyter notebooks. |
| Write | Create new files or overwrite existing ones at any path in the project. |
| Edit | Make targeted edits to existing files by specifying the exact text to find and replace. More precise than rewriting an entire file. |
| Glob | Search for files by name pattern (e.g., `**/*.svelte`, `src/lib/**/*.ts`). Useful for discovering project structure. |
| Grep | Search file contents using regular expressions. Find usages, locate definitions, or search for patterns across the codebase. |
These tools give the agent the same file access a developer would have in a terminal. It can explore your project structure, read existing code to understand patterns, and make precise edits to implement your requests.
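The Edit tool's "exact text to find and replace" behavior is worth illustrating: the target text must exist and must be unambiguous before the edit applies, which is what makes it more precise than rewriting a whole file. A minimal sketch of those semantics (a hypothetical helper, not Teta's implementation):

```python
def apply_edit(source: str, old_text: str, new_text: str) -> str:
    """Replace exactly one occurrence of old_text, refusing ambiguous edits.

    The match must exist and be unique, so the edit cannot silently
    land in the wrong place.
    """
    count = source.count(old_text)
    if count == 0:
        raise ValueError("old_text not found in file")
    if count > 1:
        raise ValueError(f"old_text matches {count} locations; be more specific")
    return source.replace(old_text, new_text, 1)
```

If the snippet to replace appears more than once, the agent must supply more surrounding context to disambiguate it.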
Terminal tools
| Tool | Description |
|---|---|
| Bash | Execute any shell command in the sandbox. Install packages, run builds, start processes, run tests, or interact with any CLI tool. |
The Bash tool is extremely versatile. The agent uses it to:
- Run `npm install` to add dependencies.
- Execute `npx` commands for one-off tools.
- Run build commands and inspect output for errors.
- Start or restart the dev server.
- Run test suites and analyze results.
- Use git commands for version control operations.
Commands run with the same permissions as the sandbox user, and their output (stdout and stderr) is captured and returned to the agent.
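A minimal sketch of that capture step, using Python's `subprocess` module as a stand-in for the sandbox runner:

```python
import subprocess

def run_bash(command: str, timeout: float = 120.0) -> dict:
    """Run a shell command and capture what the agent would see:
    the exit code plus everything written to stdout and stderr."""
    result = subprocess.run(
        command, shell=True, capture_output=True, text=True, timeout=timeout
    )
    return {
        "exit_code": result.returncode,
        "stdout": result.stdout,
        "stderr": result.stderr,
    }
```

A non-zero exit code is not a dead end: it is returned to the agent just like successful output, so the agent can read the error and adjust.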
Web tools
| Tool | Description |
|---|---|
| WebSearch | Perform web searches and return relevant results. Useful for finding documentation, solutions to errors, and library information. |
| WebFetch | Fetch the content of a specific URL and process it. Used to retrieve API documentation, read web pages, and analyze online resources. |
When the agent encounters an unfamiliar library, an obscure error message, or needs to look up API documentation, it can search the web and read the results. This keeps the agent's knowledge current and helps it find solutions to problems beyond its training data.
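As a sketch of the retrieval half of this (the real WebFetch tool also processes the page content before returning it), a hypothetical helper might fetch a URL and truncate the result so it fits in the agent's context:

```python
from urllib.request import urlopen

def web_fetch(url: str, max_bytes: int = 100_000) -> str:
    """Fetch a URL and return its text, truncated so the tool
    result stays small enough to feed back to the agent."""
    with urlopen(url, timeout=10) as response:
        data = response.read(max_bytes)
    return data.decode("utf-8", errors="replace")
```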
Image generation
| Tool | Description |
|---|---|
| generate_image | Create images using AI and save them to your project. |
The image generation tool uses Gemini 3.1 Flash to create images from text descriptions. Generated images are automatically optimized and converted to WebP format for efficient web delivery. The agent can:
- Generate hero images, icons, and illustrations for your app.
- Create placeholder images during prototyping.
- Produce visual assets based on your descriptions.
Generated images are saved directly to your project's file system and can be referenced immediately in your SvelteKit components.
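To illustrate how a generated asset ends up referenceable in code, here is a hypothetical sketch of the flow, with the model call and WebP conversion injected as stand-ins; the `static/images` location and the slug-based filename are assumptions for the example, not Teta's actual paths:

```python
from pathlib import PurePosixPath

def save_generated_image(prompt: str, generate, to_webp, project_dir: str) -> str:
    """Sketch of the generate_image flow: produce image bytes, convert
    them to WebP, and place them under the project so components can
    reference the returned path.

    `generate` and `to_webp` are injected stand-ins for the model call
    and the optimization step.
    """
    raw = generate(prompt)    # model returns raw image bytes
    webp = to_webp(raw)       # optimize for web delivery
    name = "-".join(prompt.lower().split())[:40] or "image"
    path = PurePosixPath(project_dir) / "static" / "images" / f"{name}.webp"
    # the real flow also uploads the bytes to storage and writes them
    # to the sandbox file system at this path
    return str(path)
```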
Browser tools
| Tool | Description |
|---|---|
| take_screenshot | Capture a screenshot of your running app in the sandbox preview. |
| get_console_logs | Retrieve browser console output (logs, warnings, errors) from the preview. |
Browser tools let the agent "see" your running application. After making visual changes, the agent can take a screenshot to verify the result looks correct. If there are runtime errors, it can read the console logs to diagnose and fix them.
This feedback loop — edit code, check the result, fix issues — mirrors how a developer works and enables the agent to catch and correct its own mistakes.
Skills
| Tool | Description |
|---|---|
| Skill loader | Load workflow-specific skills that provide specialized knowledge and procedures for common tasks. |
Skills are pre-defined workflows that give the agent domain-specific expertise. For example, a skill might contain best practices for setting up authentication, creating API routes, or configuring deployment. Skills help the agent follow established patterns rather than reinventing solutions for common problems.
How tools are used
You do not need to tell the agent which tools to use. When you send a message, the agent analyzes your request and autonomously decides the best approach:
- Understanding the request — Claude reads your message and any relevant conversation history.
- Planning — The agent determines what information it needs and what actions to take.
- Tool selection — For each step, the agent chooses the most appropriate tool. It might use Glob to find relevant files, Read to understand existing code, then Edit to make changes.
- Execution — The tool is called and its result is returned to the agent.
- Iteration — The agent reviews the result and decides whether to make more tool calls or respond to you.
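The steps above can be sketched as a minimal tool-use loop, where `model` and the tool functions are illustrative stand-ins rather than real APIs:

```python
def agent_loop(model, tools: dict, user_message: str) -> str:
    """Minimal tool-use loop: the model either answers or requests a
    tool call, and each tool result is fed back in for the next turn."""
    history = [{"role": "user", "content": user_message}]
    while True:
        reply = model(history)                      # planning
        if reply["type"] == "answer":               # done: respond to the user
            return reply["text"]
        tool_fn = tools[reply["tool"]]              # tool selection
        try:
            result = tool_fn(**reply["arguments"])  # execution
        except Exception as exc:                    # failures also feed back
            result = f"error: {exc}"
        history.append({"role": "tool", "tool": reply["tool"], "content": result})
```

Note that a failed tool call does not stop the loop: the error text becomes the tool result, which is how the agent diagnoses and recovers from its own mistakes.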
For example, if you ask the agent to "add a dark mode toggle to the navbar," it might:
- Use Glob to find `**/Navbar.svelte` or similar files.
- Use Read to understand the current navbar implementation.
- Use Grep to find how styling is handled across the project.
- Use Edit to add the toggle component to the navbar.
- Use Write to create a theme store if one does not exist.
- Use Bash to verify the build succeeds.
- Use take_screenshot to confirm the toggle appears correctly.
This entire sequence happens automatically in response to a single message.
Tool approval in supervised mode
When you enable supervised mode in the chat panel, the agent must request your approval before executing certain tools. Specifically, any tool that modifies your project requires approval:
- Write — Creating or overwriting files.
- Edit — Modifying existing file contents.
- Bash — Running terminal commands.
- NotebookEdit — Editing Jupyter notebook cells.
Read-only tools (Read, Glob, Grep, WebSearch, WebFetch, take_screenshot, get_console_logs) execute without approval since they do not modify your project.
When approval is required, the agent pauses and shows you exactly what it plans to do: the file path, the content it will write, or the command it will run. You can approve (Enter) or reject (Escape) each action individually.
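The gating rule is simple enough to sketch directly from the two lists above (a sketch of the policy, not Teta's implementation):

```python
# Read-only tools run immediately; tools that mutate the project
# pause for approval when supervised mode is on.
READ_ONLY = {"Read", "Glob", "Grep", "WebSearch", "WebFetch",
             "take_screenshot", "get_console_logs"}
REQUIRES_APPROVAL = {"Write", "Edit", "Bash", "NotebookEdit"}

def needs_approval(tool: str, supervised: bool) -> bool:
    """Return True if this tool call must wait for the user."""
    return supervised and tool in REQUIRES_APPROVAL
```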
Extending with custom MCP servers
The MCP architecture is designed to be extensible. The tools available to the agent are determined by MCP server configurations deployed to the sandbox. This modular design means new capabilities can be added by deploying additional MCP servers without changing the core agent.
Current MCP servers running in Teta sandboxes include:
- Image generation server — Handles the `generate_image` tool, communicating with the backend for Gemini API access.
- Browser tools server — Provides screenshot and console log capabilities.
- Skill loader server — Manages workflow skills available to the agent.
As the platform evolves, new MCP servers can be added to give the agent additional capabilities — for example, database interaction, API testing, or design system tools.
FAQ
Can the agent use tools I have not heard of?
The agent can only use tools provided by the MCP servers configured in your sandbox. The full list of available tools is described on this page. If a tool is not listed here, the agent does not have access to it. New tools are added through platform updates.
How does image generation work technically?
When the agent calls the generate_image tool, the MCP server in your sandbox sends a request to the Teta backend, which forwards it to the Gemini 3.1 Flash model with image generation capabilities. The generated image is processed with sharp (converted to WebP for size optimization), uploaded to storage, and written to your sandbox's file system. The agent receives the file path and can immediately reference the image in your code.
Are web search results filtered or restricted?
Web search results are provided as-is from the search engine. There is no additional filtering or restriction on what the agent can search for or retrieve. The agent uses its judgment to determine which results are relevant to your request and may fetch specific URLs to read their content in detail.
What happens if a tool call fails?
If a tool call fails — for example, a Bash command exits with an error, or a file path does not exist — the error is returned to the agent as part of the tool result. The agent then reads the error, diagnoses the issue, and tries a different approach. Tool failures are a normal part of the development process and the agent is designed to handle them gracefully.
Can I see which tools the agent used?
Yes. In the chat panel, the agent's responses show each tool call it made, including the tool name, input parameters, and result. This transparency lets you understand exactly what the agent did and how it arrived at the final result. In supervised mode, you see each tool call before it executes.