
Image credit: Kanshi Tanaike
The “USB-C for AI” has officially arrived at Google.
If you have been following the rapid rise of Anthropic’s Model Context Protocol (MCP), you know it promises a standardised way for AI models to connect to data and tools. Until now, connecting an LLM to your Google Drive or BigQuery data meant building custom connectors or relying on fragmented community solutions.
That changed this week with Google’s announcement of fully managed, remote MCP servers for Google and Google Cloud services. But, true to form for the Google Workspace developer community, we aren’t just waiting for official endpoints; we are building our own local solutions too.
Here is an overview of the different approaches emerging for MCP tools and what the future looks like for Workspace developers.
The Official Path: Managed MCP Servers
Google’s announcement represents a significant shift in how the company approaches AI agents. Rather than asking developers to wrap every granular API endpoint (like compute.instances.insert) into a tool, Google is releasing Managed MCP Servers.
Think of these as “Client Libraries for Agents.” Just as Google provides SDKs for Python or Node.js to handle the heavy lifting of authentication and typing, these managed servers provide a simplified, reasoned interface for agents.
The announcement lists a massive rollout over the “next few months”, covering everything from Cloud Run and Storage to the Android Management API. The stated goal is to make MCP the standard way agents connect to the entire Google ecosystem. To start, Google announced MCP servers for:
- Google Maps: Instead of just raw data, the agent gets “Grounding Lite” to accurately answer location queries without hallucinating.
- BigQuery: The agent can interpret schemas and execute queries in place, meaning massive datasets don’t need to be moved into the context window.
- Google Kubernetes Engine (GKE): Agents can diagnose issues and optimise costs without needing to string together brittle CLI commands.
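On the client side, connecting to one of these remote servers is just standard MCP. Here is a minimal sketch using the MCP TypeScript SDK; the endpoint URL is a placeholder (Google has not published final addresses yet) and authentication is left out:

```javascript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

// Placeholder endpoint: swap in the real managed-server URL once
// Google publishes it, plus whatever auth headers it requires.
const transport = new StreamableHTTPClientTransport(
  new URL("https://example.googleapis.com/mcp")
);

const client = new Client({ name: "demo-agent", version: "1.0.0" });
await client.connect(transport);

// Discover what the managed server exposes before calling anything.
const { tools } = await client.listTools();
console.log(tools.map((tool) => tool.name));

// A call would then look like this (tool name hypothetical):
// await client.callTool({ name: "execute_sql", arguments: { query: "SELECT 1" } });
```

The point of the managed approach is that the tool list you discover above is curated and task-shaped, rather than a one-to-one mirror of every REST endpoint.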
The Community Path: Privacy-First and Scriptable
While managed servers are excellent for scale and standardisation, there are some other notable approaches using local execution.
1. The Gemini CLI Workspace Extension
As detailed in a recent tutorial by Romin Irani, the official google-workspace extension for the Gemini CLI takes a privacy-first approach. Instead of running in the cloud, the MCP server runs locally on your machine.
It uses your own local OAuth token, meaning your data flow is strictly Google Cloud <-> User's Local Machine <-> Gemini CLI. This is ideal for tasks like searching Drive, checking your Calendar, or drafting Gmail replies where you want to ensure no third-party intermediary is processing your data.
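If you want to try it, the Gemini CLI’s extension manager makes the install a one-liner. (I’m assuming the repo lives in the gemini-cli-extensions GitHub org; check Romin’s tutorial for the canonical URL before running this.)

```bash
gemini extensions install https://github.com/gemini-cli-extensions/workspace
```

After restarting the CLI, the extension’s tools are available and you can ask things like “find my notes from last week’s planning meeting” in natural language.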
2. The Power of Apps Script (gas-fakes)
For true customisability, Kanshi Tanaike has been pushing Google Apps Script. A recent proof-of-concept using the gas-fakes CLI demonstrates how to build MCP tools directly in Apps Script.
This is a big step for Apps Script developers. It allows you to write standard GAS code (e.g., DriveApp.getFilesByName()) and expose it as a tool to an MCP client like Gemini CLI or Google Antigravity. Because it uses the gas-fakes sandbox, you can execute these scripts locally, bridging the gap between your local AI agent and your cloud-based Workspace automation.
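To make that concrete, here is a minimal sketch of the kind of function you would write. The function name and return shape are my own illustration; the actual exposure as an MCP tool follows Tanaike’s gas-fakes setup, not anything shown here:

```javascript
/**
 * Illustrative Apps Script function: search Drive by file name and
 * return lightweight metadata. Hypothetical example, not part of
 * the gas-fakes API itself.
 */
function searchDriveFiles(fileName) {
  const results = [];
  const files = DriveApp.getFilesByName(fileName); // standard GAS service
  while (files.hasNext()) {
    const file = files.next();
    results.push({
      name: file.getName(),
      url: file.getUrl(),
      lastUpdated: file.getLastUpdated().toISOString(),
    });
  }
  return results;
}
```

Because gas-fakes emulates the GAS runtime locally, the same function can run on your machine during development or be pasted straight into a regular Apps Script project.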
The Future: A Universal MCP Strategy?
We are likely moving toward a world where every Google product launch includes three things: API documentation, Client SDKs, and an Official MCP Server Endpoint.
For developers, the choice will come down to granularity and control:
- Use Managed Servers when you need reliable, standard access to robust services like BigQuery or Maps with zero infrastructure overhead.
- Use Community/Local Tools (like the Workspace extension or gas-fakes) when you need rapid prototyping, deep customisation, or strict data privacy on your local machine.
The “Agentic” future is here, and whether you are using Google’s managed infrastructure or writing your own tools, the ecosystem is ready for you to build.