Building with MCP: A Practical Guide to Model Context Protocol


A hands-on exploration of MCP through a real project


Table of Contents

1. The Problem: AI is Isolated
2. What is MCP?
3. Project Overview
4. Part 1: Without MCP (The Hard Way)
5. Part 2: With MCP (The Better Way)
6. Comparing the Two Approaches
7. Key MCP Concepts
8. When to Use MCP
9. Limitations and Considerations
10. Running the Demo
11. Conclusion


The Problem: AI is Isolated

You’ve built a powerful AI assistant. It can answer questions, write code, analyze data. But ask it “What’s my portfolio worth?” or “Read my project files” and it hits a wall. Your AI lives in a box. It can’t access your files, your databases, your APIs — unless you hard-code each integration.

This is the isolation problem. Every AI developer faces it. The solutions look like:

# Every integration is custom work
def get_stock_price(symbol):
    return yfinance_download(symbol)  # Custom code
    
def read_file(path):
    return open(path).read()  # Custom code

def query_database(sql):
    return db.execute(sql)  # Custom code

Every new data source = new integration code. Every API change = broken code. Every developer rebuilding the same wheel.


What is MCP?

Model Context Protocol (MCP) is a standardized way for AI applications to connect to external tools and data sources. Think of it as USB, but for AI.

Just as USB standardized how devices connect to computers, MCP standardizes how AI clients connect to servers.

Without MCP:     Custom integration for every data source
                App ↔ Stock API (custom code)
                App ↔ File system (custom code)
                App ↔ Database (custom code)

With MCP:        One protocol, any data source
                App ↔ MCP Client ↔ MCP Server (stock data)
                            ↔ MCP Server (file system)
                            ↔ MCP Server (database)

MCP defines:

– Tools: Operations the server exposes (like functions)
– Resources: Data the server makes available (like GET endpoints)
– Prompts: Reusable prompt templates


Project Overview

We built two MCP servers and a client to demonstrate MCP in action:

mcp-showcase/
├── stock-mcp-server/      # Exposes stock price queries as MCP tools
├── filesystem-mcp-server/ # Exposes file operations as MCP tools
├── ai-client/             # Connects to both servers
└── blog/                  # This documentation

The goal: Show how a single AI client can access multiple data sources through MCP, without custom integration code for each.


Part 1: Without MCP (The Hard Way)

The Traditional Approach

Here’s how you’d build a stock-aware AI assistant without MCP:

# app.py - Traditional hard-coded integration
from flask import Flask, jsonify
import yfinance as yf

app = Flask(__name__)

@app.route('/api/stock/<symbol>')
def get_stock_price(symbol):
    ticker = yf.Ticker(symbol)
    price = ticker.fast_info.current_price
    return jsonify({'symbol': symbol, 'price': price})

# Problem: This only works for stocks.
# To add file access, you'd need ANOTHER integration:
# @app.route('/api/file/<path>')
# def read_file(path):
#     return open(path).read()

Problems with This Approach

1. No standardization: Every integration is custom
2. Breaking changes: API updates require code changes
3. Limited reuse: Can't share integrations between apps
4. Security complexity: Each integration handles auth differently


Part 2: With MCP (The Better Way)

The MCP Approach

Here’s the same functionality with MCP:

# server.py - MCP server for stock data
import json

import yfinance as yf
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("stock-server")

@mcp.tool()
def get_stock_price(symbol: str) -> str:
    """Get current stock price for a symbol."""
    ticker = yf.Ticker(symbol)
    price = ticker.fast_info.current_price
    return json.dumps({'symbol': symbol, 'price': price})

The MCP client that connects to this is generic — it works with ANY MCP server:

# client.py - MCP client
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def use_mcp_server():
    server_params = StdioServerParameters(
        command="python",
        args=["stock-mcp-server/server.py"]
    )

    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # Call the tool - same interface for ANY MCP server
            result = await session.call_tool("get_stock_price", {"symbol": "AAPL"})
            print(result)

asyncio.run(use_mcp_server())

Benefits of This Approach

1. Standardized: Same code pattern for any data source
2. Maintainable: Server changes don't break clients
3. Composable: Connect to multiple servers simultaneously
4. Shareable: One server can serve many clients


Comparing the Two Approaches

| Aspect | Without MCP | With MCP |
|--------|-------------|----------|
| Integration | Custom per data source | One pattern fits all |
| Maintenance | Update every client when API changes | Update server only |
| Discovery | Read API docs for each integration | `list_tools()` tells you what's available |
| Security | Custom auth per integration | MCP handles transport security |
| Scale | N integrations for N data sources | N servers, one client |
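
The discovery row is worth making concrete. Below is a toy sketch of the pattern, not the MCP SDK: a decorator registers functions as named tools whose docstrings become descriptions, so a client can enumerate what's available before calling anything. The tool name and the stub price are invented for illustration.

```python
# Toy registry sketch (NOT the MCP SDK) showing tool
# registration and discovery.
TOOLS = {}

def tool(fn):
    # Register under the function name; the docstring doubles
    # as the tool description, mirroring how FastMCP works.
    TOOLS[fn.__name__] = {"fn": fn, "description": fn.__doc__}
    return fn

@tool
def get_stock_price(symbol: str) -> float:
    """Get current stock price for a symbol (stub data)."""
    return {"AAPL": 190.0}.get(symbol, 0.0)

def list_tools():
    # Discovery: what can this server do?
    return {name: meta["description"] for name, meta in TOOLS.items()}

def call_tool(name: str, args: dict):
    # Uniform invocation path, regardless of which tool it is.
    return TOOLS[name]["fn"](**args)

print(list_tools())
print(call_tool("get_stock_price", {"symbol": "AAPL"}))
```

The point is that the client never needs to know what a tool does internally; it only needs the registry's uniform interface.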


Key MCP Concepts

Tools

Tools are functions exposed by MCP servers. Clients call them like remote procedure calls:

# Server defines a tool
@mcp.tool()
def get_stock_price(symbol: str) -> str:
    """Docstrings become tool descriptions."""
    ...

# Client calls it
result = await session.call_tool("get_stock_price", {"symbol": "AAPL"})

Resources

Resources are data made available by servers (read-only):

# Server exposes a resource
@mcp.resource("stock://{symbol}")
def get_stock_resource(symbol: str):
    return json.dumps({'symbol': symbol, 'price': get_price(symbol)})

# Client reads it
resource = await session.read_resource("stock://AAPL")
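
The third primitive from the concepts list, prompts, follows the same pattern: servers expose named templates that clients fetch and fill in. A toy stdlib sketch (not the MCP SDK; the template name and wording are invented):

```python
# Toy sketch (NOT the MCP SDK) of the "prompts" primitive:
# named, reusable templates a server exposes to clients.
PROMPTS = {
    "analyze_stock": (
        "You are a financial analyst. Summarize the recent "
        "performance of {symbol} in three bullet points."
    ),
}

def get_prompt(name: str, **params) -> str:
    # Clients fetch a template by name and fill its parameters.
    return PROMPTS[name].format(**params)

print(get_prompt("analyze_stock", symbol="AAPL"))
```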

Server Types

MCP servers can run over different transports:

– stdio: Local processes, great for development
– SSE: Server-Sent Events, good for web clients
– Streamable HTTP: the newer HTTP-based transport, flexible enough for remote deployments
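
To make the stdio option concrete, here is a minimal sketch of the idea behind it (illustrative only; the real MCP stdio transport uses the SDK's JSON-RPC framing): the server is a child process that reads one JSON request per line on stdin and answers on stdout. The method name and the 190.0 stub price are invented for the demo.

```python
import json
import subprocess
import sys

# The "server": a child process that reads JSON requests line by
# line on stdin and writes JSON responses on stdout.
SERVER = r'''
import json, sys
for line in sys.stdin:
    req = json.loads(line)
    if req["method"] == "get_stock_price":
        result = {"symbol": req["params"]["symbol"], "price": 190.0}
    else:
        result = {"error": "unknown method"}
    print(json.dumps({"id": req["id"], "result": result}), flush=True)
'''

# The "client": spawn the server, send a request, read the reply.
proc = subprocess.Popen(
    [sys.executable, "-c", SERVER],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True,
)
request = {"id": 1, "method": "get_stock_price", "params": {"symbol": "AAPL"}}
proc.stdin.write(json.dumps(request) + "\n")
proc.stdin.flush()
response = json.loads(proc.stdout.readline())
proc.stdin.close()
proc.wait()
print(response["result"])
```

This is exactly why stdio is local-only: the transport is a parent-child pipe, so client and server must share a machine.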


When to Use MCP

MCP is great when:

✅ You want to expose tools to AI clients
✅ You need multiple consumers to access the same data
✅ You want to maintain server-side logic separately from clients
✅ You're building a data platform that AI agents will query

MCP is not ideal when:

❌ You have a simple script with one data source (just code it directly)
❌ You need bidirectional real-time communication (use WebSockets instead)
❌ Your use case is purely synchronous request/response REST


Limitations and Considerations

1. Ecosystem maturity: MCP is still evolving. Check compatibility with your AI framework.

2. Transport trade-offs: stdio is simple but local-only. HTTP allows remote servers but adds complexity.

3. Security: MCP doesn’t automatically secure your data. You still need authentication and authorization on servers.

4. Error handling: Network issues, server downtime, and tool failures need proper handling in client code.
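
Point 4 can be addressed with a small retry wrapper on the client side. A hypothetical sketch: `call_tool` here is a plain-function stand-in for `session.call_tool` in the SDK, and the retry count and backoff values are arbitrary.

```python
import time

def call_tool_with_retry(call_tool, name, args, retries=3, backoff=0.01):
    # Retry transient transport failures with exponential backoff.
    last_err = None
    for attempt in range(retries):
        try:
            return call_tool(name, args)
        except ConnectionError as err:  # e.g. server restarting
            last_err = err
            time.sleep(backoff * (2 ** attempt))
    raise RuntimeError(
        f"tool {name!r} failed after {retries} attempts"
    ) from last_err

# Demo: a fake tool that fails once, then succeeds.
calls = {"n": 0}
def flaky(name, args):
    calls["n"] += 1
    if calls["n"] < 2:
        raise ConnectionError("server unavailable")
    return {"symbol": args["symbol"], "price": 190.0}

result = call_tool_with_retry(flaky, "get_stock_price", {"symbol": "AAPL"})
print(result)
```

In production you would also want a timeout around each attempt and distinct handling for "tool returned an error" versus "transport failed".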


Running the Demo

# Install MCP
pip install mcp

# Start the stock MCP server
cd stock-mcp-server
python server.py &
cd ..

# Start the filesystem MCP server
cd filesystem-mcp-server
python server.py &
cd ..

# Run the client demo
cd ai-client
python client.py

Conclusion

MCP won’t solve every integration problem, but for the specific challenge of “how do I give AI clients access to tools and data,” it’s a well-designed solution. The protocol is clean, the Python SDK is straightforward, and the pattern of “many clients, one server per data source” scales nicely.

The demo project is at `https://github.com/editingdestiny/mcp-showcase` — clone it, play with the servers, and see MCP in action.


Further Reading

– MCP Specification
– Python SDK Documentation
– Official MCP Servers


This post is part of a series on building practical AI applications. Next: “Connecting MCP to your home automation system.”
