What is an MCP server?

MCP server definition

MCP stands for Model Context Protocol, which is an open standard that defines how AI models communicate with external services. An MCP server is the component that implements this protocol on the service side: it exposes a standardised set of tools and data endpoints that any compatible AI model can call.

Think of it like a USB standard for AI. Just as USB means you don't need a different cable for every device, MCP means an AI model doesn't need bespoke integration code to connect to every new tool. Any MCP-compatible AI can connect to any MCP server and immediately understand what it can do.

In plain terms

An MCP server is a go-between that lets an AI model safely ask external systems – like your calendar, database, or file storage – for information or actions, without the AI having direct, uncontrolled access.

At its core, an MCP server does two things: it defines what an AI model is allowed to access, and it provides a standardised way for any compatible AI model to access it. The protocol handles the language; the server handles the rules.


How does an MCP server work?

When an AI model needs information or wants to take action, it sends a structured request to an MCP server. The server validates the request, queries the relevant system, and returns a formatted response the AI can act on. The AI never touches the underlying system directly. 
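The request/response cycle above can be sketched in a few lines. MCP messages follow the JSON-RPC 2.0 format; the tool name (`calendar/get_events`) and the handler's behaviour below are hypothetical stand-ins for what a real server would expose, not an actual product API.

```python
import json

def build_tool_call(request_id: int, tool_name: str, arguments: dict) -> dict:
    """The AI client asks the server to run a named tool with arguments."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }

def handle_tool_call(request: dict) -> dict:
    """The server validates the request, runs the tool, and returns a result.
    The AI never touches the underlying system; it only sees this response."""
    params = request["params"]
    if params["name"] != "calendar/get_events":  # simple allow-list check
        return {"jsonrpc": "2.0", "id": request["id"],
                "error": {"code": -32601, "message": "Unknown tool"}}
    # In a real server this line would query the calendar system.
    events = [{"title": "Team stand-up", "time": "09:30"}]
    return {"jsonrpc": "2.0", "id": request["id"],
            "result": {"content": [{"type": "text", "text": json.dumps(events)}]}}

request = build_tool_call(1, "calendar/get_events", {"day": "today"})
response = handle_tool_call(request)
```

Note that the response is structured data, not raw database rows: the server decides exactly what the model gets back.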

This is what makes MCP servers so valuable: they standardise the connection layer so developers only need to build the integration once, and any AI model that speaks MCP can use it.


MCP server example

To understand what an MCP server is used for in practice, consider an AI assistant integrated into a company's Learning Management System (LMS).

An employee wants to know what training they still need to complete before their performance review. Rather than logging into the LMS and navigating to their profile, they ask the AI assistant directly.

  1. The employee types "what learning do I still need to complete this quarter?" into the AI assistant. 
  2. The AI model recognises it needs learner progress data, so it sends a structured MCP request to the company's LMS MCP server. 
  3. The server authenticates the request, queries the LMS for that employee's assigned courses and completion records, and returns only the relevant data. 
  4. The AI responds naturally: "You have two courses outstanding this quarter — Leadership Essentials (due 28 June) and Data Privacy Refresher (due 15 July)."
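The steps above can be sketched in miniature. Every name here – the employee ID, the course data, the `handle_request` function – is illustrative, not a real Thrive or LMS API; the point is the pattern of authenticate, query, and filter before anything reaches the AI.

```python
# Stands in for the LMS database, which the AI never sees directly.
FAKE_LMS = {
    "emp-42": [
        {"course": "Leadership Essentials", "due": "28 June", "complete": False},
        {"course": "Data Privacy Refresher", "due": "15 July", "complete": False},
        {"course": "Onboarding Basics", "due": "1 March", "complete": True},
    ]
}

def handle_request(employee_id: str, authenticated: bool) -> list[dict]:
    """Server side: authenticate, query the LMS, return only relevant data."""
    if not authenticated:
        raise PermissionError("request rejected before touching the LMS")
    records = FAKE_LMS.get(employee_id, [])
    return [r for r in records if not r["complete"]]  # outstanding courses only

outstanding = handle_request("emp-42", authenticated=True)
summary = " and ".join(f"{r['course']} (due {r['due']})" for r in outstanding)
# The AI would phrase `summary` naturally for the employee.
```

The completed course never leaves the server, and an unauthenticated request fails before the LMS is queried at all.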

At no point did the AI model have direct access to the LMS database. The MCP server acted as the controlled intermediary throughout — validating what could be requested and by whom.

The same pattern applies across other LMS actions: an MCP server could allow an AI to recommend content based on a learner's role, flag overdue compliance training to a manager, or surface the most relevant resource when an employee asks a question. The purpose of an MCP server is always the same — safe, standardised access to the right data, for the right person, at the right time.

MCP servers and AI

To understand what an MCP server is in an AI context, start with the core limitation it solves. Large language models, by themselves, are isolated — they know what they were trained on, and nothing else. They cannot look things up, take actions, or interact with live systems without a mechanism to do so.

MCP servers are that mechanism. They extend the reach of an AI model beyond its training data, giving it grounded, real-time access to the information and tools it needs to be genuinely useful in a workplace setting.

Why this matters for L&D

Learning platforms are increasingly integrating AI assistants. MCP servers are what allow those assistants to surface relevant content, track learner progress, and trigger actions inside an LMS — all without requiring custom integration work for each AI model.

The Model Context Protocol was introduced by Anthropic in late 2024 and has been rapidly adopted by major developer tools, productivity platforms, and enterprise software vendors. Understanding what an MCP server is in this broader AI context helps L&D teams evaluate AI-powered platform features more critically — and ask the right questions about how their data is being accessed and governed.

Frequently asked questions

MCP server FAQ

What is an MCP server used for?

MCP servers are used to connect AI models to real-world tools and data sources. Common use cases include: giving an AI access to a calendar, task manager, or CRM; letting an AI read from and write to documents or databases; enabling an AI to query live web data or internal knowledge bases; and allowing an AI to trigger actions in external platforms like issuing a support ticket or updating a record. In an L&D context, MCP servers power AI assistants that can surface learning content, check completion records, and personalise recommendations.

What is an MCP server in simple terms?

In simple terms, an MCP server is a translator and gatekeeper between an AI and the outside world. It lets the AI ask questions like "what's on my calendar?" or "what are the open support tickets?" — and it fetches the answers safely, without giving the AI direct access to those systems. Think of it like a librarian: you ask for a book, the librarian retrieves it for you. You never go into the restricted stacks yourself.

What is the purpose of an MCP server?

The purpose of an MCP server is twofold. First, it gives AI models the ability to act on real, live information rather than being limited to their training data. Second, it does so in a standardised, secure way — so organisations don't need to build custom integrations for each AI tool they adopt. The Model Context Protocol creates a shared language for AI-to-system communication, making integrations reusable and auditable.

What exactly is an MCP server?

Technically, an MCP server is a process (usually a small application or microservice) that exposes a set of tools and resources via the Model Context Protocol specification. It listens for requests from MCP-compatible AI clients, handles authentication and authorisation, executes the requested operations against an underlying system, and returns structured responses. Servers can be local (running on the same machine as the AI) or remote (hosted as a web service). Major platforms including Notion, Atlassian, GitHub, Figma, and Slack have published official MCP servers for their products.
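As a toy illustration of that server loop, here is an in-process dispatcher for two core MCP operations, tool discovery (`tools/list`) and tool execution (`tools/call`). Real servers speak JSON-RPC over stdio or HTTP, usually via an SDK; the single `echo` tool and its schema are assumptions for the sketch.

```python
# Registry of tools this server exposes. Each entry carries a description
# and an input schema so clients can discover how to call it.
TOOLS = {
    "echo": {
        "description": "Return the text it is given",
        "inputSchema": {"type": "object",
                        "properties": {"text": {"type": "string"}}},
        "handler": lambda args: args["text"],
    }
}

def dispatch(message: dict) -> dict:
    """Route one incoming JSON-RPC message to the right operation."""
    reply = {"jsonrpc": "2.0", "id": message.get("id")}
    if message["method"] == "tools/list":
        # Advertise available tools so any MCP client can discover them.
        reply["result"] = {"tools": [
            {"name": name, "description": t["description"],
             "inputSchema": t["inputSchema"]}
            for name, t in TOOLS.items()]}
    elif message["method"] == "tools/call":
        tool = TOOLS[message["params"]["name"]]
        text = tool["handler"](message["params"]["arguments"])
        reply["result"] = {"content": [{"type": "text", "text": text}]}
    else:
        reply["error"] = {"code": -32601, "message": "Method not found"}
    return reply

listing = dispatch({"jsonrpc": "2.0", "id": 1, "method": "tools/list"})
call = dispatch({"jsonrpc": "2.0", "id": 2, "method": "tools/call",
                 "params": {"name": "echo", "arguments": {"text": "hello"}}})
```

The discovery step is what makes MCP integrations reusable: a client that has never seen this server can list its tools, read their schemas, and call them without bespoke code.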
