Why your learning platform is the most disconnected tool in your AI stack
Enterprise AI is connecting everything. Almost everything.

Something significant is happening across enterprise technology right now. AI is knitting your business systems together. Your CRM is feeding your AI assistant context about customer relationships. Your calendar is syncing with your project tools. Your finance platforms are talking to your analytics stack.
Slowly but steadily, organisations building real AI capability are turning a collection of disconnected tools into something that actually functions like a system.
Except for one. Almost universally, the learning platform gets left out.
The irony at the centre of your tech stack
Here's the thing worth sitting with: the platform your organisation relies on to develop its people is usually the one that can't talk to any other system.
Your LMS holds some of the most strategically important data in your business. It knows which skills your people have developed. It knows who completed what, when, and how well. It knows which teams are investing in development and which aren't.
That data is directly relevant to hiring decisions, succession planning, performance conversations, and capability strategy. And yet, when your organisation's AI tools go looking for context about your people, the LMS is almost never part of what they can see.
"The irony of the modern L&D tech stack: the platform responsible for developing people is usually the one that can't talk to any other system."
How did we get here?
The LMS was built for a different era. Its job was to store courses and log completions — and it did that job in isolation because containment was the point. Integration wasn't even a consideration.
The problem is that the rest of enterprise technology moved on. SaaS platforms started building open APIs as standard. Data started flowing between systems. The expectation shifted from "does this tool do its job?" to "does this tool play well with everything else?" The LMS, largely, missed that shift. It kept doing what it always did — holding learning data inside its own walls — while the tools around it became increasingly connected.
The result is a strange architectural reality. Organisations that have invested seriously in AI readiness have a gap sitting right in the middle of their people strategy. Their AI assistant knows about deals, meetings, projects, and deadlines. It has no idea what their people are learning or what their manager has flagged as a development priority.
What this actually costs you
This isn't just a technical inconvenience. The disconnection has real consequences for how learning lands in your organisation.
When your learning platform can't communicate with the systems your people actually use, learning becomes a separate destination. Something you go to, rather than something that comes to you. Your people have to remember to open it. Your L&D team has to manually assign content. Your managers have to chase completions in a separate system and cross-reference that data somewhere else entirely.
More significantly: your AI tools can't help. An AI assistant that doesn't know what your people have already learned can't suggest what they should do next. It can't flag a skill gap when it spots one in a project brief. It can't connect a development need to available content in the moment that need becomes visible. The intelligence is there, but the connection isn't.
And the cost of that disconnection compounds as AI matures. Recent independent research by CData tested MCP providers across enterprise platforms and found that accuracy problems cascade fast across multi-step workflows: at 75% per-step accuracy, a five-step process completes successfully less than 24% of the time, because errors multiply (0.75⁵ ≈ 0.24). When your learning platform is outside the loop entirely, you're not starting at 75%. You're starting at zero.
As AI moves from experiment to infrastructure — which, across enterprise technology, it is doing fast — organisations are building AI agents that handle everything from onboarding workflows to performance support. Every one of those agents needs context about the people it's working with. A learning platform that can't provide that context becomes a structural problem, not just a technical one.
The AI interoperability question, explained plainly
This is where a term you may have started hearing becomes relevant: AI interoperability.
AI interoperability means your organisation's AI tools can reach into your platforms and use the data and capabilities they hold. An AI-interoperable learning platform can be used by AI tools that live elsewhere — your Microsoft Copilot, your company's AI assistant, the agents your IT or HR teams are building.
The distinction matters more than it might sound. An AI-enabled LMS gives your learners a chatbot or a recommendation engine inside the platform. An AI-interoperable LMS becomes part of your organisation's wider AI infrastructure. It's the difference between a platform that has AI, and a platform that your AI can use.
Most learning platforms are the former. Very few are the latter.
And there's a further layer worth understanding. Connecting your LMS to an AI agent via MCP is only half the equation — the quality of what the AI can actually do with that connection depends on what's on the other side of it. An HRIS can tell an AI agent that someone completed a course on a given date. A purpose-built learning platform can tell it what skills that course developed, how those skills map to the employee's current role, what the recommended next step in their development journey looks like, and how their progress compares to peers.
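To make that contrast concrete, here is a minimal sketch of the two kinds of record. All field names and values are illustrative assumptions, not any vendor's real schema:

```python
# Hypothetical payloads illustrating the difference described above.
# Field names are illustrative, not any vendor's actual API.

# What a bare HRIS record can tell an AI agent:
hris_record = {
    "employee_id": "emp-1042",
    "course": "Stakeholder Management 101",
    "completed_on": "2025-11-03",
}

# What a purpose-built learning platform could expose instead:
learning_record = {
    "employee_id": "emp-1042",
    "course": "Stakeholder Management 101",
    "completed_on": "2025-11-03",
    "skills_developed": ["stakeholder mapping", "influencing"],
    "role_relevance": "core skill for the Project Lead role",
    "recommended_next": "Negotiation Fundamentals",
    "peer_percentile": 68,  # progress relative to peers in the same role
}

# The extra fields are what let an agent reason, not just report.
extra_context = sorted(set(learning_record) - set(hris_record))
print(extra_context)
```

The richer record is the difference between an agent that can only confirm a completion and one that can suggest a next step.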
That difference determines whether AI-surfaced learning is genuinely useful or just technically possible.
What a connected learning platform actually looks like
When your LMS supports open standards like MCP — the Model Context Protocol that's become the backbone of how AI tools connect to enterprise software — the architecture changes.
A new joiner gets added to your HRIS. That event is visible to your AI tools. The right onboarding pathway appears in their learning platform automatically. Their manager gets a nudge to check in at the right moment. If they ask an AI assistant a question about their role, it can factor in what they've already learned and what's coming next in their development plan.
A skills gap emerges in a team. Your AI tools can spot it — because they can see learning data alongside performance data and project data — and suggest a response before anyone has had to manually run a report.
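Under the hood, "part of the conversation" means the learning platform answers standard MCP requests. MCP is built on JSON-RPC 2.0, and a tool invocation uses the protocol's "tools/call" method. A minimal sketch of the message an agent might send, where the tool name and argument schema are hypothetical rather than any platform's real API:

```python
import json

# An AI agent invokes a tool exposed by a learning platform's MCP
# server using the standard JSON-RPC 2.0 "tools/call" request.
# The tool name and arguments below are hypothetical examples.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_learner_progress",           # hypothetical tool name
        "arguments": {"learner_id": "emp-1042"},  # hypothetical schema
    },
}

print(json.dumps(request, indent=2))
```

Because the request shape is defined by an open standard rather than a proprietary integration, any MCP-capable agent can make this call without custom connector work.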
None of this requires your L&D team to work harder. It requires your learning platform to be part of the conversation.
Where this leaves L&D leaders right now
If you're evaluating your platform strategy in 2026, AI interoperability is the question that separates platforms built for now from the ones still catching up.
The right questions to ask your LMS vendor: Does your platform support MCP? Can external AI tools access learning data and trigger actions inside the platform? When an AI agent in another part of your stack needs to know something about a learner's development, can it find out?
If the answer is vague, that's informative.
Thrive is built to be AI-interoperable, which means it's part of the infrastructure your organisation's AI runs on. Thrive's MCP Server is already in beta with 500+ customers, connecting the platform to AI agents from Claude, ChatGPT, and more, with capabilities spanning content authoring, managing social spaces, and tracking user insights on assignments and completions.
The learning platform that couldn't talk to anything else had its time. That time is ending.
See what an AI-interoperable learning platform actually looks like. We'll show you how Thrive connects to the AI tools your organisation already uses — and what becomes possible when it does.
This is the second blog in Thrive's MCP series for L&D and HR leaders. If you missed Blog 1, "What is MCP?", you can read it here.
