
The conversation around Enterprise AI is shifting. In 2024, we were preoccupied with the novelty of the prompt box, marveling at the ability of large language models to draft emails or summarise long documents. By 2026, that novelty has faded. The real frontier is no longer about asking an AI to “write this”; it is about instructing it to “execute this process.” We are witnessing the birth of a managed Action Layer within our digital environments.
The DORA Reality Check: AI as an Amplifier
To understand why this shift is happening, we must look at the data. The 2025 DORA Report provides a sobering reality check for technology leaders. Its central finding is that AI is an amplifier of your existing organisational systems. If your workflows are streamlined and your data is clean, AI accelerates excellence. However, if your processes are fragmented, AI simply creates “unproductive productivity,” generating isolated pockets of speed that eventually lead to downstream chaos.
DORA’s insights suggest that the bottleneck for AI value isn’t the model’s intelligence; it is the “Context Constraint.” AI cannot be effective if it is trapped in a silo, unable to see the broader business environment or act upon the tools that employees use every day. To move from hype to value, we must stop treating AI as a separate chatbot and start treating it as a governed co-worker.
Demystifying MCP
At the heart of this transition sits an emerging standard: the Model Context Protocol (MCP). The protocol acts as a universal bridge that allows AI models to connect directly with data sources and local tools, without custom code for every new integration. It works by standardising how a model requests information from a database, a file system, or a specific software application. Because this creates a common language between the AI and the external world, developers can swap models or data sources in and out of their workflows with far less friction.
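In practice, most MCP-compatible clients share a similar configuration shape: the client is pointed at one or more MCP servers, each launched as a local process or reached over a URL, and the model discovers the tools each server exposes. The sketch below is illustrative only; the server names and package identifiers (such as “@example/mcp-server-postgres”) are hypothetical placeholders, not real published packages:

```json
{
  "mcpServers": {
    "sales-database": {
      "command": "npx",
      "args": ["-y", "@example/mcp-server-postgres", "postgresql://localhost/sales"]
    },
    "local-reports": {
      "command": "mcp-filesystem-server",
      "args": ["--root", "/home/me/reports"]
    }
  }
}
```

The important design point is that the AI client never needs bespoke code for “Postgres” or “the file system”; it simply speaks MCP to whatever servers the configuration declares.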
We are moving away from the era of closed silos. Instead, we see a future where an AI assistant can securely reach into your local environment to read a specific document or query a live database using a single, unified interface. It is a fundamental shift in how we think about connectivity.
Two Philosophies of Action: Managed Features vs. Configurable Platforms
Given the scale and pace of change, it is not surprising that Google’s AI efforts are not singular. The reality is that Gemini Enterprise (the platform formerly known as Agentspace) and Gemini for Workspace are two distinct products developed by separate teams with very different philosophies of control.
Gemini for Workspace primarily operates as a “managed service.” We see a steady expansion of features, but these remain “black boxes” entirely defined and controlled by Google. The user receives the feature, but the administrator has very little say in how that tool is connected, calibrated, or exposed to the model’s reasoning engine. Gemini Enterprise, by contrast, is a more configurable platform. It allows organisations to define their own data connectors, tailor system prompts and even deploy enterprise-owned agents.
From Black Boxes to Federated Actions
Whilst Google continue to expand the actions Gemini for Workspace can perform in products like Google Sheets, these tools remain non-configurable for the domain administrator. The January 23, 2026, Gemini Enterprise release reveals a more radical direction: the expansion of Configurable Actions.
Whilst the internal implementation details of every Gemini feature remain proprietary, Google’s recent announcement of official MCP support suggests that the protocol is becoming the standard connective tissue for their entire AI stack. The Enterprise team is simply the first to expose this plumbing for organisational governance and configuration. To understand the practical power of this protocol when it is stripped of its corporate packaging, we only need look at how MCP is being used in other generative AI tooling.
Lessons from the Gemini CLI
While most Workspace users interact with AI through polished side panels and the Gemini App, some power users are turning to the Gemini CLI. This command-line interface has become the playground of choice for exploring the automation of AI-driven workflows.
Unlike the “black box” nature of Gemini Enterprise and Gemini for Workspace, the Gemini CLI is easy to extend and customise. Users can install or create Extensions, which bundle custom commands, system prompts and MCP tools. The Google Workspace Extension is a wonderful demonstration of the power of MCP. With a few lines of configuration, a user grants the model permission to query and manage core productivity data: it can search files in Drive, draft emails in Gmail, dispatch Chat alerts, or organise Calendar appointments directly from the command line.
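The extension format itself is compact. At the time of writing, a Gemini CLI extension is described by a gemini-extension.json file that names the extension and declares the MCP servers it wraps; the server name and command below are illustrative, not a real published extension:

```json
{
  "name": "workspace-tools",
  "version": "1.0.0",
  "mcpServers": {
    "workspace": {
      "command": "node",
      "args": ["dist/server.js"]
    }
  },
  "contextFileName": "GEMINI.md"
}
```

A handful of lines like these is all it takes to graft an entirely new set of actions onto the model, which is precisely the flexibility that the managed Workspace products currently lack.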
The value of the CLI lies in its demonstration that generative AI is capable of much more than just summarisation and analysis. By extending its capabilities through MCP, a user can hand-pick the actions they need based on their specific persona and the systems they inhabit. This shift toward action-oriented tools allows the model to move from passive reasoning to active execution without the user ever leaving their primary environment.
The Aspiration: A Managed Action Marketplace
The challenge now is to take this raw, unfiltered power seen in the CLI and translate it into a governed enterprise experience. Individual productivity is a start, but true organisational scale requires a bridge between these flexible protocols and the managed safety of the Workspace environment. I believe the defining moment for enterprise productivity will arrive when Google closes this gap by utilising its existing, battle-tested infrastructure.
For Gemini for Workspace to truly align with the DORA findings on systemic stability, it must move away from the current “one-size-fits-all” managed feature-set and toward a framework of granular administrative control. This is the aspiration for a truly professional Action Layer.
The Google Workspace Add-on Manifest already gives developers a standard way to extend Workspace. Today, a manifest defines which products an add-on supports, such as Gmail or Docs. To enable the next wave of productivity, Google could add the Gemini App (gemini.google.com) as a first-class, configurable product within this manifest. This would effectively turn the “Add-on” from a static sidebar tool into a dynamic Action that Gemini can call upon.
In this vision, a developer defines the location of their MCP server directly within the manifest. This creates a standardised way to deliver agentic tools through the existing Marketplace. This is not about building another static side panel. Rather, it is about injecting new capabilities directly into the core reasoning engine of the Gemini App and the integrated Gemini side panels across Workspace.
Imagine a “Tableau for Workspace” add-on. Instead of a user switching tabs to query a dashboard, the add-on uses the Tableau MCP toolset to allow Gemini to “see” live visualisations. The user can simply ask their Gemini side panel in Google Docs to “Insert a summary of our regional performance from Tableau,” and the agent executes the query, interprets the data, and drafts the text in situ.
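To make the vision concrete, a manifest for such an add-on might look like the sketch below. This is aspirational, not documentation: today’s add-on manifests declare sections like “common” and “gmail”, but the “gemini” section shown here does not exist and is purely a hypothetical illustration of the proposal, as are the URLs:

```json
{
  "addOns": {
    "common": {
      "name": "Tableau for Workspace",
      "logoUrl": "https://example.com/tableau-logo.png"
    },
    "gemini": {
      "mcpServer": {
        "url": "https://mcp.example.com/tableau",
        "authorization": "OAUTH2"
      }
    }
  }
}
```

Distribution through the existing Marketplace would then give administrators the same install, scope-approval and audit controls for agentic tools that they already have for conventional add-ons.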
This transformation would provide the necessary governance layer to ensure that AI-accelerated workflows remain under human control. By allowing admins to curate and configure these agentic tools, Google can turn Workspace into a high-quality internal platform. It would magnify the strengths of its users without introducing the chaos of ungoverned automation. The path to enterprise productivity is paved by protocol, not just better models. The winners of the next decade will not be the companies that buy the most AI licenses, but those that have the best platforms for those AI agents to inhabit.