1. Introduction
In a previous post, we explored the concepts behind the Model Context Protocol (MCP) – a way for AI assistants to securely interact with your local environment and external tools. MCP bridges the gap between isolated AI models and the rich context of your development workflow.
But how do you actually create one of these servers to extend an AI’s capabilities? This post provides a hands-on tutorial for building a simple, functional MCP server from scratch using Python.
Our goal is to create a server that exposes a single, straightforward tool: get_cwd. When called by an MCP client (like an integrated AI assistant), this tool will simply return the current working directory where the server script is running. This example, while basic, covers the fundamental communication patterns and structure of an MCP server.
2. Prerequisites
Before we start, make sure you have the following:
- Python 3.x: Ensure Python 3 is installed on your system. You can check with python --version or python3 --version.
- Basic Python & Command Line Knowledge: Familiarity with Python syntax, standard libraries, and navigating directories in your terminal.
- Code Editor: Any text editor will work, but VS Code, PyCharm, etc., are recommended.
- MCP Client: An AI assistant or development tool capable of discovering and interacting with MCP servers (like the environment you might be using right now!). This is needed for configuration and testing.
3. Understanding MCP Communication (Python Context)
At its core, MCP communication relies on exchanging structured JSON messages over standard input (stdin) and standard output (stdout).
- Client -> Server: The MCP client (your AI tool) sends a JSON request object to the server’s stdin.
- Server -> Client: The MCP server processes the request and sends a JSON response object (or an error object) back to the client via its stdout.
While official SDKs exist for languages like Node.js (@modelcontextprotocol/sdk) that abstract some of this, we’ll implement the protocol directly in Python for this tutorial. This gives us a clearer understanding of the underlying mechanism and requires only Python’s built-in libraries (json, sys, os).
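As a concrete illustration, here is what one round trip for our tool could look like with the simplified, newline-delimited framing used later in this tutorial (real clients may wrap these messages differently, so treat the field names as illustrative):

Client -> Server (stdin):

```json
{"jsonrpc": "2.0", "id": 1, "method": "CallTool", "params": {"name": "get_cwd"}}
```

Server -> Client (stdout):

```json
{"jsonrpc": "2.0", "id": 1, "result": {"content": [{"type": "text", "text": "/home/user/python_cwd_server"}]}}
```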
4. Project Setup
Let’s set up our project directory and environment.
Create Project Directory:
```bash
mkdir python_cwd_server
cd python_cwd_server
```
Set up Virtual Environment: Using a virtual environment is crucial to manage dependencies.
```bash
python3 -m venv .venv
```
Activate it:
- Linux/macOS: source .venv/bin/activate
- Windows (Command Prompt): .\.venv\Scripts\activate.bat
- Windows (PowerShell): .\.venv\Scripts\Activate.ps1

Your terminal prompt should now indicate you’re inside the .venv environment.
Create Server Script:
```bash
touch main.py
```

(On Windows, you might use type nul > main.py or create it via your editor.)
5. Implementing the Server Logic (main.py)
Open main.py in your editor. We’ll build the core server logic step-by-step.
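The listing below is a minimal sketch of what main.py can look like. It reads newline-delimited JSON-RPC 2.0 messages from stdin, writes responses to stdout, and uses the ListTools / CallTool method names described in the next section. The exact method names, framing, and result shapes your client expects may differ (the official MCP spec, for instance, uses tools/list and tools/call), so treat this as a starting point rather than a definitive implementation:

```python
#!/usr/bin/env python
"""A minimal MCP-style server exposing a single get_cwd tool over stdio."""

import json
import os
import sys


def handle_list_tools(request_id):
    """Respond to ListTools: advertise the get_cwd tool with an empty input schema."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "result": {
            "tools": [
                {
                    "name": "get_cwd",
                    "description": "Return the server's current working directory.",
                    "inputSchema": {"type": "object", "properties": {}},
                }
            ]
        },
    }


def handle_call_tool(request_id, params):
    """Respond to CallTool: run get_cwd, or return a MethodNotFound-style error."""
    if params.get("name") == "get_cwd":
        return {
            "jsonrpc": "2.0",
            "id": request_id,
            "result": {"content": [{"type": "text", "text": os.getcwd()}]},
        }
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "error": {"code": -32601, "message": f"Unknown tool: {params.get('name')}"},
    }


def main():
    # Read one JSON request per line from stdin; write one JSON response per line to stdout.
    for line in sys.stdin:
        line = line.strip()
        if not line:
            continue
        try:
            request = json.loads(line)
        except json.JSONDecodeError:
            response = {
                "jsonrpc": "2.0",
                "id": None,
                "error": {"code": -32700, "message": "Parse error"},
            }
        else:
            request_id = request.get("id")
            method = request.get("method")
            if method == "ListTools":
                response = handle_list_tools(request_id)
            elif method == "CallTool":
                response = handle_call_tool(request_id, request.get("params", {}))
            else:
                response = {
                    "jsonrpc": "2.0",
                    "id": request_id,
                    "error": {"code": -32601, "message": f"Method not found: {method}"},
                }
        sys.stdout.write(json.dumps(response) + "\n")
        sys.stdout.flush()


if __name__ == "__main__":
    main()
```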
6. Defining and Implementing the get_cwd Tool
The code above already includes the logic:
- handle_list_tools: This function responds to the ListTools request. It returns a list containing the definition of our get_cwd tool, including its name, description, and an empty input schema (since it takes no arguments).
- handle_call_tool: This function responds to CallTool requests. It checks if the requested tool name is get_cwd. If it is, it calls os.getcwd() to get the directory and sends it back in a structured success response. If the tool name doesn’t match, it sends a MethodNotFound error.
7. Making the Script Executable
On Linux and macOS, make the script directly executable:
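From inside the project directory:

```bash
chmod +x main.py
```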
This allows the system to run the script using the interpreter specified in the shebang (#!/usr/bin/env python). Windows doesn’t use the shebang or executable bits in the same way; you’ll rely on the MCP client configuration to call the Python interpreter directly.
8. Configuring the MCP Client
Now, you need to tell your MCP client (e.g., your AI assistant’s configuration) how to run this server. You’ll typically edit a JSON configuration file. The exact file depends on the client (e.g., cline_mcp_settings.json, claude_desktop_config.json). Add an entry like this:
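A representative entry, assuming your client uses the common mcpServers layout (as claude_desktop_config.json does); the top-level key and server name may differ for your client:

```json
{
  "mcpServers": {
    "python_cwd_server": {
      "command": "/full/path/to/your/project/python_cwd_server/.venv/bin/python",
      "args": ["/full/path/to/your/project/python_cwd_server/main.py"]
    }
  }
}
```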
CRITICAL:
- Replace /full/path/to/your/project/python_cwd_server/ with the absolute path to your project directory.
- The command must point to the Python interpreter inside your virtual environment (.venv/bin/python). Using the system Python might cause import errors if you add dependencies later.
- The args should contain the absolute path to your main.py script.
After saving the configuration, the MCP client should automatically detect and start your server when needed.
9. Running and Testing
With the server configured, you can test it using your MCP client. Try a prompt like:
“Use the python_cwd_server’s get_cwd tool.”
Or, more specifically if the client allows direct tool calls:
“Call the get_cwd tool from the python_cwd_server.”
If everything is set up correctly, the client should communicate with your script, and you should receive a response containing the working directory where main.py is running (which should be your python_cwd_server project directory).
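You can also sanity-check the server without any client by piping a request straight into it (this assumes the newline-delimited framing from the sketch in section 5):

```bash
echo '{"jsonrpc": "2.0", "id": 1, "method": "CallTool", "params": {"name": "get_cwd"}}' | .venv/bin/python main.py
```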
Troubleshooting:
- Path Errors: Double-check the absolute paths in your MCP configuration file. Typos are common!
- Permissions: Ensure main.py is executable (chmod +x) on Linux/macOS.
- Python Errors: Check the logs or terminal output where the MCP client might report errors from your script. Add more logging statements in main.py if needed (see the snippet below).
- Configuration Reload: Some MCP clients might require a restart or a specific action to reload their configuration after you edit the settings file.
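Because stdout is reserved for protocol messages, any diagnostics should go to stderr. One way to wire that up near the top of main.py, using only the standard library (a sketch, not part of the protocol itself):

```python
import logging
import os
import sys

# Send diagnostics to stderr so they never corrupt the JSON protocol on stdout.
logging.basicConfig(
    stream=sys.stderr,
    level=logging.DEBUG,
    format="%(asctime)s %(levelname)s %(message)s",
)

logging.debug("Server starting in %s", os.getcwd())
```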
10. Conclusion
Congratulations! You’ve built a basic MCP server in Python from scratch. While simple, this example demonstrates the core principles:
- Using stdio for communication.
- Exchanging JSON-RPC messages.
- Handling ListTools to advertise capabilities.
- Handling CallTool to execute actions.
- Configuring the client to launch and manage the server.
You can use this structure as a foundation for building more sophisticated MCP servers in Python, perhaps integrating external APIs, interacting with local databases, or wrapping complex command-line tools. The Model Context Protocol opens up exciting possibilities for making AI assistants more powerful and context-aware development partners.