With the announcement of Google Workspace Flows at Google Cloud Next 2025, Workspace developers and power users have been eager to see how this new platform integrates with the ecosystem we know and love. Currently available in the Gemini for Workspace Alpha programme, Workspace Flows promises to bridge the gap between simple tasks and complex, AI-driven processes.
In a fantastic new three-part series, Google Workspace expert Stéphane Giron takes us on a complete tour of the platform. From the initial interface to complex logic handling, this series is the perfect primer for anyone looking to automate their organisation’s daily grind.
Here is a breakdown of the series:
Part 1: A First Look
The introductory article sets the stage, explaining how to enable Workspace Flows within the Admin console and providing a tour of the dashboard. Stéphane highlights how Flows is built with Gemini at its foundation, allowing users to describe workflows in natural language to generate automations instantly.
Part 2: The Starters (Triggers)
In the second instalment, the focus shifts to the engine of every Agent: the Starter. Stéphane provides a deep dive into Application-based starters (reacting to events in Gmail, Drive, Chat, Sheets, Calendar, and Forms) versus Schedule-based starters. He details how granular these triggers can be—such as filtering emails by specific criteria before a flow even begins.
Part 3: Steps and Logic
The final piece explores the “Steps”—the actions that happen after the trigger. The article covers:
Core App Steps: Standard actions like sending emails or creating Calendar events.
AI Integration: Using ‘Ask Gemini’ to reason, summarise, or transform content.
Logic: Implementing ‘Check if’ and ‘Decide’ steps to create dynamic paths.
Apps Script Integration
Of particular interest to the AppsScriptPulse community is the mention in Part 3 regarding Custom Steps. Stéphane notes that for ultimate power users, Workspace Flows will offer the ability to create custom steps using Google Apps Script. This feature will unlock the full potential of the ecosystem, allowing developers to build highly specific integrations that aren’t available out of the box.
For developers seasoned in the Google Workspace ecosystem, the promise of agentic AI is not just about automating tasks, but about creating intelligent, nuanced interactions with the services we know so well. Google’s Agent Development Kit (ADK) provides the foundation, but integrating the familiar, user-centric OAuth 2.0 flow into this new world requires a deliberate architectural approach.
This article explores the patterns for building a sophisticated, multi-capability agent. It explains why you might choose a custom tool implementation (ADK’s “Journey 2”) over auto-generation, and presents a reusable authentication pattern that you can apply to any Workspace API.
The full source code for this project is available in the companion GitHub repository. It’s designed as a high-fidelity local workbench, perfect for development, debugging, and rapid iteration.
The Architectural Crossroads: Journey 1 vs. Journey 2
Journey 1: The “Auto-Toolify” Approach. This involves using components like OpenAPIToolset or the pre-built Google API tool sets (GoogleApiToolSet). You provide an API specification, and the ADK instantly generates tools for the entire API surface. This is incredibly fast for prototyping but can lead to an agent that is “noisy” (too many tools and scopes) and lacks the robustness to handle API-specific complexities like pagination.
Journey 2: The “Crafted Tool” Approach. This involves using FunctionTool to wrap your own Python functions. This is the path taken in this project. It requires more initial effort but yields a far superior agent for several reasons:
Control: This approach exposes only the high-level capabilities needed (e.g., search_all_chat_spaces), not every raw API endpoint.
Robustness: Logic can be built directly into the tools to handle real-world challenges like pagination, abstracting this complexity away from the LLM.
Efficiency: The tool can pre-process data from the API, returning a concise summary to the LLM and preserving its valuable context window.
Journey 2 was chosen because the goal is not just to call an API, but to provide the agent with a reliable, intelligent capability.
The Core Implementation: A Reusable Pattern
The cornerstone of this integration is a single, centralised function: get_credentials. This function lives in agent.py and is called by every tool that needs to access a protected resource.
It elegantly manages the entire lifecycle of an OAuth token within the ADK session by following a clear, four-step process:
Check Cache: It first looks in the session state (tool_context.state) for a valid, cached token.
Refresh: If a token exists but is expired, it uses the refresh token to get a new one and updates the cache.
Check for Auth Response: If no token is found, it checks if the user has just completed the OAuth consent flow using tool_context.get_auth_response().
Request Credentials: If all else fails, it formally requests credentials via tool_context.request_credential(), which signals the ADK runtime to pause and trigger the interactive user flow.
This pattern is completely generic. You can see in the agent.py file how, by simply changing the SCOPES constant, you could reuse this exact function for Google Drive, Calendar, Sheets, or any other Workspace API.
Defining the Agent and Tools
With the authentication pattern defined, building the agent becomes a matter of composition. The agent.py file also defines the “Orchestrator/Worker” structure—a best practice that uses a fast, cheap model (gemini-2.5-flash) for routing and a more powerful, specialised model (gemini-2.5-pro) for analysis.
The Other Half: The Client-Side Auth Flow
The get_credentials function is only half the story. When it calls tool_context.request_credential(), it signals to the runtime that it needs user authentication. The cli.py script acts as that runtime, or “Agent Client.”
The cli.py script is responsible for the user-facing interaction. As you can see in the handle_agent_run function within cli.py, the script’s main loop does three key things:
It iterates through agent events and uses a helper to check for the specific adk_request_credential function call.
When it detects this request, it pauses, prints the authorisation URL for the user, and waits to receive the callback URL.
It then bundles this callback URL into a FunctionResponse and sends it back to the agent, which can then resume the tool call, this time successfully.
This client-side loop is the essential counterpart to the get_credentials pattern.
Production Considerations: Moving Beyond the Workbench
This entire setup is designed as a high-fidelity local workbench. Before moving to a production environment, several changes are necessary:
Persistent Sessions: The InMemorySessionService is ephemeral. For production, you must switch to a persistent service like DatabaseSessionService (backed by a database like PostgreSQL) or VertexAiSessionService. This is critical for remembering conversation history and, most importantly, for securely storing the user’s OAuth tokens.
Secure Credential Storage: While caching tokens in the session state is acceptable for local testing, production systems should encrypt the token data in the database or use a dedicated secret manager.
Runtime Environment: The cli.py script would be replaced by a web server framework (such as FastAPI or Flask) if you are deploying this as a web application, or adapted to the specific requirements of a platform like Vertex AI Agent Engine.
By building with this modular, Journey 2 approach, you create agents that are not only more capable and robust but are also well-architected for the transition from local development to a production-grade deployment.
Explore the Code
Explore the full source code in the GitHub repository to see these concepts in action. The README.md file provides all the necessary steps to get the agent up and running on your local machine. By combining the conceptual overview in this article with the hands-on code, you’ll be well-equipped to build your own robust, production-ready agents with the Google ADK.
Thanks to Google’s Agent Development Kit (ADK) and Vertex AI’s Agent Engine, building and deploying your own AI agents is no longer just an idea. These frameworks let you build intelligent agents, deploy them at scale, and integrate them seamlessly into your Google Workspace tools, enabling a new era of agentic productivity.
In this three-part tutorial series, we’ll explore exactly how to do that.
The ability to build and integrate custom AI agents directly into Google Workspace is rapidly moving from a far-off idea to a practical reality. For developers looking for a clear, end-to-end example, Google Developer Expert Aryan Irani has published an excellent three-part tutorial series.
This comprehensive guide, which includes video tutorials for each part, offers a complete end-to-end journey for building agentic workflows. It starts by building a local ‘AI Auditor’ agent with the new Agent Development Kit (ADK)—programming it to verify claims with the Google Search tool. The series then guides developers through the entire cloud deployment process to Vertex AI Agent Engine, covering project setup and permissions. Finally, it ties everything together with Google Apps Script, providing the complete code (using UrlFetchApp and the OAuth2 for Apps Script library) to integrate the agent as a powerful, fact-checking tool inside Google Docs.
Why This Series Matters
This series is a fantastic, practical guide for any Google Workspace developer looking to bridge the gap between classic Apps Script solutions and Google’s powerful new generative AI tools. It provides a complete, end-to-end blueprint for building truly ‘agentic’ workflows inside the tools your organisation already uses.
A big thank you to Aryan Irani for creating and sharing this detailed resource with the developer community!
Automate research with this script. It turns a Google Sheet into an AI that plans, researches, and writes detailed reports on any topic.
The Gemini App (gemini.google.com) has a sophisticated Deep Research agent. Its capabilities are incredibly powerful, but it is designed to tackle one complex topic at a time, and the manual process of running each query and collating the results can be very slow.
What if you could deconstruct the capabilities of Deep Research and build a similar agent which is instead orchestrated from a Google Sheet? Thankfully, Google Workspace Developer Expert, Stéphane Giron, has designed and shared an elegant solution that transforms Google Sheets and Apps Script into a powerful, automated, bulk research assistant.
Stéphane’s approach is built on a “Divide and Conquer” strategy, which he outlines in his detailed article on the project.
Instead of just asking a single AI to answer a complex question, this script acts as a manager for a specialised AI team… The process is broken down into three distinct phases:
Phase 1: The Strategist (Plan): A powerful AI model analyses your main query and breaks it down into a logical plan of smaller, essential sub-questions.
Phase 2: The Researcher (Execute): A fast, efficient AI model takes each sub-question and uses targeted Google searches to find factual, concise answers.
Phase 3: The Editor (Synthesise): The strategist AI returns to act as an editor, weaving all the collected research and data into a single, cohesive, and well-written final report.
How the AI Research Team Works
The solution, which is available in full on GitHub, uses a clever combination of different Gemini models and tools, all orchestrated by Google Apps Script:
Input: The user simply lists all their research topics in a Google Sheet (e.g., in column A).
Phase 1 (Plan): The script loops through each topic. For each one, it calls the Gemini 2.5 Pro model, treating it as the “Strategist.” It uses Gemini’s function-calling capability to force the model to output a structured plan of sub-questions needed to answer the main query.
Phase 2 (Execute): The script then takes this list of sub-questions and calls the Gemini 2.5 Flash model for each one, treating it as the fast “Researcher.” This call uses the built-in Google Search tool (grounding) to find up-to-date, factual answers for each specific sub-question.
Phase 3 (Synthesise): Finally, with all the collected research, the script calls Gemini 2.5 Pro one last time. In this “Editor” role, the model receives the original query and all the question/answer pairs, synthesising them into a single, comprehensive report.
Output: The script creates a new Google Doc for this final report and places a link to it directly in the Google Sheet, next to the original topic.
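To make the pattern more concrete, here is a minimal, hedged Apps Script sketch of the kind of call the Phase 2 “Researcher” makes: a Gemini API generateContent request with the Google Search tool enabled. The endpoint, model name and payload shape follow the public Gemini API documentation rather than Stéphane’s actual script, and GEMINI_API_KEY is a placeholder Script Property.

```javascript
// Illustrative sketch only: a grounded "Researcher" call for one sub-question.
// Assumes a Gemini API key is stored in Script Properties as GEMINI_API_KEY.
function researchSubQuestion(subQuestion) {
  const apiKey = PropertiesService.getScriptProperties().getProperty('GEMINI_API_KEY');
  const url = 'https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash:generateContent';

  const payload = {
    contents: [{ role: 'user', parts: [{ text: subQuestion }] }],
    // Enable Google Search grounding so the answer is based on fresh results.
    tools: [{ google_search: {} }]
  };

  const response = UrlFetchApp.fetch(url, {
    method: 'post',
    contentType: 'application/json',
    headers: { 'x-goog-api-key': apiKey },
    payload: JSON.stringify(payload),
    muteHttpExceptions: true
  });

  const data = JSON.parse(response.getContentText());
  // Return the first candidate's text (error handling omitted for brevity).
  return data.candidates[0].content.parts.map(function (p) { return p.text || ''; }).join('');
}
```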
This “AI team” approach is a fascinating pattern we’re beginning to see emerge in the community. It strongly echoes the “AI Scrum Master” concept shared by Jasper Duizendstra at the recent Google Workspace Developer Summit in Paris. Both solutions smartly move away from basic prompting and instead orchestrate a team of specialised AIs, leading to a far more robust and scalable workflow.
Stéphane’s script is highly customisable, allowing you to set the number of sub-questions to generate, define the output language, and pass the current date to the models to ensure the research is timely.
This is a fantastic example of how to build sophisticated, autonomous AI agents inside Google Workspace. A big thank you to Stéphane Giron for sharing this project with the community.
Get Started
You can find the complete code, setup instructions, and a deeper dive into the architecture at the links below:
At the Devoteam UK GDC event on 16 October 2025 I debuted ‘Mystic Marty’ and made three AppSheet and Apps Script predictions for H2 2026.
The Predictions
These predictions were:
Deep Integration of AppSheet into Google Workspace Flows: By allowing AppSheet applications to act as both triggers and actions within Workspace Flows, AppSheet transitions from an isolated tool into a managed step within a larger, automated business value stream, preventing the creation of new data silos and process bottlenecks.
AppSheet Actions within Gemini Enterprise (aka Agentspace): Integrating AppSheet into Gemini Enterprise positions it as a mechanism for structuring and exposing proprietary business data, enabling the creation of custom, data-aware agents at enterprise scale.
Gemini App as a Google Apps Script Service: By making it simple for AppSheet automations to call Apps Script functions that interact directly with Vertex AI and Gemini models, this feature addresses complex, domain-specific enterprise needs that go beyond the scope of no-code solutions.
The Validation
Roll forward one week to the Google Workspace Developer Summit in Paris, and we had confirmation in a presentation by Luke Camery (Product Lead, Enterprise Collaboration at Google) that Google is in the early planning stages of AI Advanced Services in Apps Script, which would be launched in 2026.
Not only that, but a free quota was being considered, which would avoid the need for users to create an associated Google Cloud Project (GCP). The details are still being worked out and this offer is subject to change, but the fact it is being considered answers a lot of the questions I hear on this topic from the community.
The Method Behind the Mysticism
These predictions weren’t just guesswork. They stemmed from historic product knowledge and, more significantly, from an analysis with Gemini of the ‘DORA: The 2025 State of AI-assisted Software Development’ report, in particular exploring how high-quality internal platforms can unlock and amplify the value of AI.
The report’s emphasis on internal platforms as a key enabler for AI was a clear indicator. It suggested that Google would inevitably need to provide more powerful, integrated tools—like AI services in Apps Script—to allow enterprise developers to build these exact kinds of high-value platforms.
The hint of a free quota is more than just interesting news: it signals a significant potential shift for the Apps Script community. The removal of the GCP barrier, in particular, would give Google Workspace users seamless access to powerful AI tools and an opportunity to experiment with generative AI in their automations.
While Google’s plans are “subject to change”, the direction seems clear. The integration of AI Advanced Services directly into Apps Script could represent the next major evolution of the platform, moving it from a tool for automation to an integrated high-value platform for intelligent application development. As always, we’ll be watching this space closely.
In a helpful post on “The Gift of Script,” Phil Bainbridge tackles the common need to publish a Google Form so it can only be completed by users within your organisation.
He shares the complete Apps Script snippet, highlighting the crucial (and often missed) step of first setting the form as published (.setPublished(true)) before using the Drive API to apply domain-wide “reader” permissions. It’s a quick, effective solution for securing internal-only forms.
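As a rough sketch of the approach (the exact snippet is in Phil’s post), the flow looks something like the code below, assuming the Drive advanced service (v3) is enabled and with FORM_ID and example.com standing in for your own values:

```javascript
// Minimal sketch: publish a form, then restrict it to users in your domain.
// Requires the Drive advanced service (v3) to be enabled in the Apps Script project.
function publishFormForDomain() {
  const form = FormApp.openById('FORM_ID'); // placeholder form ID
  form.setPublished(true);                  // the crucial, often-missed first step

  // Grant domain-wide "reader" access so only signed-in users in the
  // organisation can open (and therefore respond to) the form.
  Drive.Permissions.create(
    { type: 'domain', role: 'reader', domain: 'example.com' },
    form.getId()
  );
}
```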
A fact-check custom function for Google Sheets, implemented as a bound Apps Script project and powered by a Vertex AI agent and a Gemini model.
This sample demonstrates how you can bring two powerful types of AI resources directly into your Google Sheets spreadsheets.
A new tutorial published by Google’s Pierrick Voulet, while ostensibly about creating a fact-checking tool in Google Sheets, offers something more valuable to developers: a clear framework for integrating powerful, multi-step Vertex AI Agents into Google Workspace solutions.
While the example is a FACT_CHECK custom function, the underlying architecture caught my eye as it provides a blueprint for connecting Google Workspace applications with the sophisticated reasoning capabilities of custom AI agents.
The Core Architectural Pattern
The solution uses Google Apps Script as a bridge between the user-facing application (Google Sheets) and the backend AI services on Google Cloud.
When a user calls the custom function, the script authenticates with a service account and makes a call to a deployed Vertex AI Agent. This agent then performs its multi-step reasoning. The result is then optionally passed to a Gemini model for final formatting before being returned to the Sheet.
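As an illustration of that bridge (not Pierrick’s actual code, which lives in Code.gs and AiVertex.js), the server-to-server call might take roughly the shape below. The getServiceAccountToken_() helper is hypothetical, standing in for the service-account OAuth flow the tutorial sets up, and the endpoint path and payload shape are assumptions based on the Vertex AI Agent Engine REST API.

```javascript
// Illustrative only: query a deployed Vertex AI Agent Engine instance from Apps Script.
// getServiceAccountToken_() is a hypothetical helper that returns an access token
// minted from the service account credentials (for example via the OAuth2 library).
function askAgent(claim) {
  const projectId = 'PROJECT_ID';      // placeholders
  const region = 'us-central1';
  const engineId = 'AGENT_ENGINE_ID';

  const url = 'https://' + region + '-aiplatform.googleapis.com/v1/projects/' +
      projectId + '/locations/' + region + '/reasoningEngines/' + engineId + ':query';

  const response = UrlFetchApp.fetch(url, {
    method: 'post',
    contentType: 'application/json',
    headers: { Authorization: 'Bearer ' + getServiceAccountToken_() },
    payload: JSON.stringify({ input: { message: claim } }), // payload shape is an assumption
    muteHttpExceptions: true
  });

  return JSON.parse(response.getContentText());
}
```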
This pattern is highly adaptable for various use cases, allowing developers to bring advanced agentic AI into their own solutions.
Key Components for Developers
For developers looking to adapt this framework, the tutorial outlines the essential components:
A configured Google Cloud Project with the Vertex AI API enabled.
A deployed ADK Agent on the Vertex AI Agent Engine.
An Apps Script project using a service account for secure, server-to-server authentication with Google Cloud.
The provided Apps Script code (Code.gs and AiVertex.js) serves as a robust starting point, handling the API calls to both the reasoning agent and a Gemini model for final output formatting.
Ultimately, the fact-checking tool serves as an excellent proof of concept. The true value for the developer community, however, lies in the architectural blueprint it provides. This tutorial offers a clear model for integrating multi-step AI reasoning into Google Workspace add-ons, opening the door to a new class of intelligent applications.
Learn how to use the Append method in the Google Sheets Advanced Service in Google Apps Script and see how it differs from the SpreadsheetApp option.
While most Google Apps Script developers are familiar with the standard appendRow() method, it does come with its limitations, namely being restricted to a single row and always appending after the very last row with content on a sheet. For those looking for more control and flexibility, community expert Scott Donald (aka Yagisanatode) has published an excellent tutorial on using the spreadsheets.values.append() method available through the Google Sheets API Advanced Service.
Scott’s guide provides a deep dive into the powerful features this method unlocks, offering a more versatile approach to handling data in your Google Sheets.
The Power of the Append Method
In his tutorial, Scott highlights several key advantages over the traditional SpreadsheetApp service:
Intelligently append to a table: The API automatically finds the first empty row after a data table within a specified range. This allows you to append new rows directly to the correct location in a single API call, without first needing to find the last row of content.
Choose Your Insert Option: The API allows you to decide whether you want to INSERT_ROWS, which pushes existing data down, or OVERWRITE the empty cells below your target table.
Control Value Input: You can specify USER_ENTERED to have Sheets parse the data as if it were typed in by a user (which processes formulas and formats dates), or RAW to insert the values exactly as they are in your array.
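To see these options in context, here is a hedged Apps Script sketch using the Sheets Advanced Service; the spreadsheet ID, range, and values are placeholders, and Scott’s tutorial covers each parameter in far more depth:

```javascript
// Minimal sketch: append rows to a table using the Sheets Advanced Service.
// Requires the Google Sheets API advanced service to be enabled.
function appendToTable() {
  const spreadsheetId = 'SPREADSHEET_ID'; // placeholder
  const valueRange = {
    values: [
      ['2025-11-01', 'Widget', 42],
      ['2025-11-02', 'Gadget', 17]
    ]
  };

  Sheets.Spreadsheets.Values.append(valueRange, spreadsheetId, 'Sheet1!A1:C', {
    valueInputOption: 'USER_ENTERED', // parse dates/formulas as if typed by a user
    insertDataOption: 'INSERT_ROWS'   // push existing data down rather than overwrite
  });
}
```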
Scott’s tutorial provides a detailed breakdown of each parameter and includes a helpful video walkthrough and a starter sheet for you to get hands-on with the concepts. He also notes some current documentation errors and bugs, which makes this a valuable resource for anyone looking to implement this feature.
A big thank you to Scott for sharing this insightful guide with the community!
The Gemini CLI is a powerful tool for developers, but its true potential lies in its extensibility. I’m excited to share a project that showcases this flexibility: the gas-fakes Gemini CLI Extension. This extension is a case study in tailoring the Gemini CLI to your specific needs, particularly when it comes to secure software development in Google Apps Script.
At its core, this project addresses a critical security concern in the age of AI-driven development: how can you safely test and execute AI-generated code that requires broad access to your Google Workspace data? The answer lies in the pioneering work of Bruce Mcpherson on his gas-fakes library, which this extension integrates into a seamless and secure workflow, thanks to the invaluable contributions of Kanshi Tanaike. I’m looking forward to discussing this project in more detail with Bruce at the upcoming Google Workspace Developer Summit in Paris.
The Power of Gemini CLI Extensions
The ability to create extensions for the Gemini CLI opens up a world of possibilities for developers. By packaging together a collection of tools and resources, you can create a customised experience that is perfectly suited to your workflow. An extension can include three key components:
System Prompts (GEMINI.md): A GEMINI.md file allows you to provide the model with custom instructions and context, guiding its behaviour to ensure it generates code that aligns with your specific requirements.
Custom Commands: You can create custom commands to automate common tasks and streamline your development process.
MCP Tools: The Model Context Protocol (MCP) allows you to integrate external tools and services with the Gemini CLI, enabling powerful, interactive experiences.
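As a rough illustration of how these pieces are packaged, a Gemini CLI extension is described by a manifest file (gemini-extension.json). The sketch below is a hypothetical, minimal manifest; the field names reflect my understanding of the format and the values are placeholders, so refer to the official documentation and the gas-fakes repository for the authoritative versions.

```json
{
  "name": "my-extension",
  "version": "0.1.0",
  "contextFileName": "GEMINI.md",
  "mcpServers": {
    "my-tools": {
      "command": "node",
      "args": ["mcp-server.js"]
    }
  }
}
```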
The gas-fakes Extension: A Case Study in Secure Development
The gas-fakes Gemini CLI Extension is a practical example of how these components can be combined to create a powerful and secure development environment for Google Apps Script.
The extension tackles the challenge of safely executing AI-generated code by creating a sandboxed environment where scripts can be tested without granting them access to your Google account. Here’s how it works:
GEMINI.md: The GEMINI.md file provides the model with detailed instructions on how to use the gas-fakes library, ensuring that the generated code is compatible with the sandboxed environment.
Custom Commands: The extension includes custom commands like /gas:init and /gas:new that automate the process of setting up a new project and generating code.
MCP Tool: The packaged MCP tool allows the Gemini CLI to interact with the gas-fakes sandbox, enabling it to execute code and receive feedback in a secure environment. This extension also includes the new official Model Context Protocol (MCP) for Google Workspace Development to interact directly with Google Workspace APIs.
Getting Started
To get started with the gas-fakes extension, you’ll first need to have the Google Gemini CLI installed. Once that’s set up, you can install the extension using the install command provided in the project’s README.
For more information on managing extensions, including uninstallation and updates, please see the official documentation.
Usage
Once the extension is installed, you can start a new sandboxed Apps Script project directly from your terminal.
First, create a new directory for your project, navigate into it, and start the Gemini CLI. From there, you can use the /gas:init command to scaffold a complete project structure, which includes all the necessary files for local development and testing.
With the project initialised, you can then use the /gas:new command to generate code from a natural language prompt. For example:
/gas:new "create a new Google Doc and write 'Hello, World!' to it"
This command generates the Apps Script code in src/Code.js and a corresponding runner script in run.js. From here, you can continue the conversation with the Gemini CLI to refine and build upon your code, testing each iteration locally in the secure, sandboxed environment.
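For that prompt, the generated src/Code.js would contain something along these lines (the actual output will vary from run to run):

```javascript
// A plausible shape of the generated code for the prompt above; actual output will differ.
function createHelloWorldDoc() {
  const doc = DocumentApp.create('Hello World');   // create a new Google Doc
  doc.getBody().appendParagraph('Hello, World!');  // write the requested text
  Logger.log('Created document: ' + doc.getUrl());
}
```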
This project structure is deliberate: the run.js file is your sandbox for testing, while the src folder contains the clean, production-ready Apps Script code. This separation makes it easy to use other command-line tools like clasp to push only the code in the /src directory to your online Apps Script project when you are ready to deploy.
Fine-Grained Security with Conversational Controls
Beyond creating new files, a common requirement is to have a script interact with existing documents in a user’s Google Drive. The gas-fakes extension provides a robust solution for this, and because it’s integrated into the Gemini CLI, you can configure these advanced security settings using natural language.
This conversational control is powered by the extension’s MCP tool, run-gas-fakes-test. When you ask Gemini to “whitelist this Google Doc for read access”, the model doesn’t write the configuration code itself. Instead, it calls this tool, translating your request into a set of structured parameters that the tool understands. The MCP tool then dynamically assembles and executes the run.js script with the precise security settings you requested. This abstraction is what makes the natural language interface so powerful.
For example, instead of requesting broad access to all of a user’s files, you can create a specific whitelist of file IDs that your script is allowed to interact with, specifying read-only or write access on a per-file basis. This granular approach ensures your script only ever touches the files it is supposed to.
For even tighter security, you can ask Gemini to:
Disable entire services: If your script only needs to work with Sheets, you can completely disable DriveApp and DocumentApp.
Whitelist specific methods: Lock down a service to only allow certain actions, for example, permitting DriveApp.getFiles() but blocking DriveApp.createFile().
Manage test artefacts: For debugging, you can disable the automatic cleanup feature to inspect any files created during a test run.
These advanced features provide developers with the confidence to build and test powerful automations, knowing that the execution is contained within a secure, predictable environment.
Conclusion
The Gemini CLI is more than just a command-line interface; it’s a powerful platform for creating customised and intelligent experiences. The gas-fakes Gemini CLI Extension is just one example of what is possible. I encourage you to explore the world of Gemini CLI Extensions and see what you can create.
Acknowledgements
This extension stands on the shoulders of giants. It directly builds upon the pioneering work of Bruce Mcpherson and his gas-fakes library. I’d also like to express my sincere gratitude to Kanshi Tanaike, whose work on the gas-fakes sandbox and MCP server has been instrumental in the development of this extension.
When and how to use Application Default Credentials for Google API auth, with example scripts to set them up and example code to consume them.
In my recent post, ‘The Eject Button’, I explored the idea of writing Apps Script code that can be easily moved to other environments like Google Cloud. This was inspired by the seamless local development workflows available in other languages, which raises the question: how can Apps Script developers replicate that experience?
A big piece of that puzzle is authentication. How do you securely test code on your local machine that needs to talk to Google APIs?
Google Workspace Developer Expert Bruce Mcpherson provides the answer in his excellent guide, ‘Application Default Credentials with Google Cloud and Workspace APIs’. His post is the perfect starting point for any developer looking to streamline their workflow. The key takeaway is that by using the Google Cloud CLI, your local code can securely use your own credentials to access Google APIs.
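As a small illustration of what that looks like in practice, the sketch below shows a Node.js script picking up Application Default Credentials created with gcloud auth application-default login. It assumes the google-auth-library package and uses a read-only Drive scope as an example; Bruce’s guide covers the set-up, including passing the right scopes to the gcloud command.

```javascript
// Minimal Node.js sketch: consume Application Default Credentials locally.
// One-off prerequisite: gcloud auth application-default login
// (you may need to pass the relevant scopes to that command; see Bruce's guide).
const { GoogleAuth } = require('google-auth-library');

async function listDriveFiles() {
  const auth = new GoogleAuth({
    scopes: ['https://www.googleapis.com/auth/drive.readonly']
  });
  const client = await auth.getClient();

  // The client signs the request with your ADC credentials.
  const res = await client.request({
    url: 'https://www.googleapis.com/drive/v3/files?pageSize=5'
  });
  res.data.files.forEach(f => console.log(f.name));
}

listDriveFiles().catch(console.error);
```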
When you combine ADC for authenticating real API calls with his gas-fakes mocking library for simulating Apps Script services, you have a powerful toolkit that brings a professional development cycle to the Apps Script world.
If you’re looking to level up your skills and knowledge, I highly recommend diving into Bruce’s article.