AppsScriptPulse

Bring Your AI Agents to Life Across Google Workspace with the New Travel Concierge Sample

This tutorial shows you how to publish AI agents to Google Workspace as Google Workspace add-ons, using Apps Script or HTTP endpoints. After you publish your add-on, your users can interact with the AI agents within their workflows.

Google has released a comprehensive new tutorial that demonstrates how to bridge the gap between complex AI agents and the daily workflows of Google Workspace users. The Travel Concierge sample uses the Agent Development Kit (ADK) to deploy a conversational, multi-agent AI directly into the Google Workspace platform.

While many AI demos focus on standalone chat interfaces, this solution stands out by implementing the agent as a Google Workspace Add-on. This means the AI isn’t just a browser tab; it is a helpful sidebar accessible directly within Gmail, Calendar, Drive, Docs, Sheets, and Slides, as well as a fully functioning Google Chat app.

Key Features for Developers

The sample provides a robust blueprint for developers looking to build “agentic” workflows. Key capabilities include:

  • Multi-Agent Architecture: The solution uses Vertex AI to manage a “Travel Concierge” that orchestrates sub-agents for specific tasks, utilising tools like the Google Maps Platform Places API and Google Search Grounding.
  • Context Awareness: In applications like Gmail, the agent can read the context of the currently selected email to help plan trips based on actual correspondence.
  • Rich User Interfaces: The add-on moves beyond simple text, utilising the Card framework to display rich widgets and interactive elements.
  • Cross-Platform Persistence: User sessions are managed by Vertex AI, ensuring that a conversation started in Google Chat can be seamlessly continued in a Google Doc sidebar.

Flexible Implementation

The Apps Script implementation demonstrates how to connect a standard Workspace Add-on to the powerful Vertex AI backend using UrlFetchApp. It serves as an excellent reference for handling synchronous execution limits and rendering complex agent responses within the constraints of the Add-on sidebar.
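To make the UrlFetchApp bridge concrete, here is a minimal, hypothetical sketch of how an add-on might call an Agent Engine endpoint. The endpoint URL, the `class_method`/`input` payload shape, and the function names are illustrative assumptions, not the sample's actual contract — consult the tutorial source for the real values.

```javascript
// Hypothetical sketch: calling a deployed Vertex AI agent from Apps Script.
// AGENT_ENDPOINT and the payload shape are illustrative assumptions.
const AGENT_ENDPOINT = 'https://REGION-aiplatform.googleapis.com/v1/PROJECT_RESOURCE:query';

// Pure helper: build the request body for one user message in a session.
function buildAgentPayload(message, sessionId) {
  return {
    class_method: 'stream_query',              // assumption: Agent Engine query method
    input: { user_id: sessionId, message: message }
  };
}

function askAgent(message, sessionId) {
  const response = UrlFetchApp.fetch(AGENT_ENDPOINT, {
    method: 'post',
    contentType: 'application/json',
    headers: { Authorization: 'Bearer ' + ScriptApp.getOAuthToken() },
    payload: JSON.stringify(buildAgentPayload(message, sessionId)),
    muteHttpExceptions: true                   // inspect API errors rather than throwing
  });
  return JSON.parse(response.getContentText());
}
```

Keeping the payload builder separate from the fetch call makes the request shape easy to unit-test and adapt when the backend contract changes.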

Whether you are looking to build a travel planner or a complex enterprise assistant, this sample provides the architectural patterns needed to bring your AI agents to where your users actually work.

Source: Plan travels with an AI agent accessible across Google Workspace  |  Google Workspace add-ons  |  Google for Developers

Creating a ‘Knowledge Doc’ for Gemini: Consolidating Google Workspace Flows Documentation

How to build a server-less crawler using Google Apps Script to create a unified knowledge base for AI-assisted development.

The Goal: A Unified Context for Gemini

When building Google Workspace add-ons, particularly those using newer features like Workspace Flows, the documentation is often spread across dozens of nested pages. While excellent for browsing, this structure presents a challenge when working with Large Language Models (LLMs) like Gemini.

The primary motivation for this project was to create a single, consolidated “Knowledge Doc” containing the complete technical specification for Workspace Flows. By feeding this unified document into Gemini, I can:

  • Control the Context: Ensure the model answers based on the specific, official documentation rather than outdated training data or hallucinated APIs.
  • Accelerate Development: Ask complex architectural questions (“How do I build a flow that integrates X and Y?”) and get answers grounded in the full API reference without the latency of multiple web searches.
  • Bridge the Knowledge Gap: Work effectively with brand-new products and services that haven’t yet been absorbed into the LLM’s core training set.

This article details how I built the tool to create this document: a server-less crawler that runs entirely within Google Apps Script. If you are less concerned about the how and more interested in the end result, here is the [Shared] Google Workspace Flows Guide

The Technical Challenge: Apps Script vs. The DOM

My initial plan was straightforward:

  1. Fetch the HTML of each documentation page.
  2. Parse it into Markdown (using a library like Turndown or Showdown).
  3. Convert that Markdown into a Google Doc.

However, I hit a major roadblock immediately. Most JavaScript parsing libraries rely on the DOM (window, document, DOMParser). Google Apps Script runs in a server-side environment (similar to a Node.js runtime but without standard packages) where no DOM exists. Libraries like Turndown crashed instantly with ReferenceError: document is not defined.

The Solution: A Hybrid HTML Pipeline

Instead of fighting to make a browser library work on the server, I pivoted to a hybrid approach:

  1. Cheerio for Parsing: I used the Cheerio library (which parses HTML using pure JavaScript strings, no DOM required) to “clean” the content.
  2. Direct HTML Injection: Instead of converting to Markdown (which lost formatting for tables and images), I switched to the Google Drive API which can natively convert HTML files into Google Docs.

This pipeline proved superior because it preserved:

  • Code Blocks: <pre> tags with specific formatting.
  • Tables: Complex data tables rendered perfectly.
  • Images: <img> tags with absolute URLs were automatically fetched and embedded by Google Drive.
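The Drive-based conversion in step 2 of the pipeline might look something like the sketch below, assuming the advanced Drive service (v3) is enabled for the script project; the function names and the document shell are illustrative, not the article's actual code.

```javascript
// Pure helper: wrap cleaned body HTML in a minimal document shell.
function wrapHtml(title, bodyHtml) {
  return '<html><head><meta charset="utf-8"><title>' + title +
         '</title></head><body>' + bodyHtml + '</body></html>';
}

// Sketch: upload the HTML and ask Drive to convert it to a Google Doc.
// Requesting the Docs MIME type on create triggers the native conversion.
function htmlToGoogleDoc(title, bodyHtml) {
  const blob = Utilities.newBlob(wrapHtml(title, bodyHtml), 'text/html', title + '.html');
  const file = Drive.Files.create(
    { name: title, mimeType: 'application/vnd.google-apps.document' },
    blob
  );
  return file.id;
}
```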

Important Note: This script was developed and tested specifically for Google Workspace Flows documentation. Web scraping inherently carries the risk that the target website’s HTML structure may change over time. Other documentation sites will use different CSS classes and HTML structures. While the core logic (crawling + HTML cleaning + Docs conversion) remains valid, you will need to inspect the source of your target site and update the Cheerio selectors (like .devsite-article-body or .devsite-book-nav) to match its specific layout.

Step 1: The Crawler

First, we needed to find all the relevant pages. Traversing the DOM tree proved fragile—class names change, and nesting levels vary.

The robust solution was Path Filtering. We fetch the main navigation bar and simply filter for any link that starts with our target root URL.

function extractLinksByPath(html, rootUrl) {
  const $ = Cheerio.load(html);
  const links = new Set();

  $('.devsite-book-nav a').each(function() {
    let href = $(this).attr('href');
    if (href) {
      if (href.startsWith('/')) href = CONFIG.BASE_URL + href;
      if (href.startsWith(rootUrl) && !href.includes('#')) links.add(href);
    }
  });
  return Array.from(links);
}

Step 2: The HTML Cleaner

We couldn’t just dump the raw page HTML into a Doc; it would include navigation bars, “Feedback” buttons, and footer links. We used Cheerio to strip these out.

We also encountered a specific challenge with Definition Lists (<dl><dt><dd>). Google Docs doesn’t have a native “Definition List” element, so they often flattened into messy text.

The fix was to transform them into semantic HTML paragraphs with inline styling to force the visual layout we wanted:

// Transform <dt> (Terms) into bold/italic paragraphs
$body.find('dl').each((i, dl) => {
  let newContent = '';
  $(dl).children().each((j, child) => {
    const $child = $(child);
    if ($child.is('dt')) {
      newContent += `<p style="font-weight:bold; font-style:italic; margin-top:12pt; margin-bottom:4pt; font-size:11pt; color:#202124;">${$child.html()}</p>`;
    } else if ($child.is('dd')) {
      newContent += `<p style="margin-left:24pt; margin-bottom:8pt; font-size:11pt; line-height:1.15;">${$child.html()}</p>`;
    }
  });
  $(dl).replaceWith(`<div>${newContent}</div>`);
});

Step 3: Post-Processing with the Docs API

Even with clean HTML, the Google Docs converter can be stubborn. It sometimes ignores semantic headings (<h1>) or allows images to overflow the page width.

To fix this, we added a post-processing step. After the HTML file is converted to a Google Doc, we open it with DocumentApp to programmatically enforce styles and constraints.

Fixing Headings: We scan for our specific “Title” styling (24pt Bold) and force the Heading 1 style.

function applyHeadingsToDoc(docId) {
  const doc = DocumentApp.openById(docId);
  const body = doc.getBody();
  const paragraphs = body.getParagraphs();

  paragraphs.forEach(p => {
    const text = p.getText();
    if (text.length === 0) return;

    const fontSize = p.getAttributes().FONT_SIZE;
    const isBold = p.getAttributes().BOLD;
    // If it looks like a H1 (24pt + Bold), make it a H1!
    if (fontSize >= 24 && isBold) {
      p.setHeading(DocumentApp.ParagraphHeading.HEADING1);
    }
  });

  doc.saveAndClose();
}

Resizing Images: We also scan for images wider than the standard page width (e.g., 600px) and scale them down to fit, ensuring the document layout remains printable and clean.

function resizeImagesInDoc(docId) {
  const doc = DocumentApp.openById(docId);
  const body = doc.getBody();
  const MAX_WIDTH = 600;

  body.getImages().forEach(image => {
    const width = image.getWidth();
    const height = image.getHeight();
    if (width > MAX_WIDTH) {
      image.setWidth(MAX_WIDTH);
      image.setHeight((height / width) * MAX_WIDTH);
    }
  });
  doc.saveAndClose();
}

The Workflow in Action

Once generated, this document becomes the cornerstone of my development workflow:

  1. Consolidate: I run the script to pull the latest documentation into a fresh Google Doc.
  2. Enrich: I use tools like Gemini Deep Research to find community patterns or architectural best practices, and paste them directly into the same doc.
  3. Contextualise: This single document is then uploaded to Gemini (or another LLM workspace), providing a focused, hallucination-resistant context for answering questions and generating code.

By combining UrlFetchApp for crawling, Cheerio for parsing, and the Drive API for document generation, we built a custom scraping pipeline in under 200 lines of code without spinning up a single server. If you’d like to make your own, feel free to copy this script project, which comes with a compiled, Apps Script-friendly version of Cheerio.

Extending Google Workspace Flows with Custom Steps using Google Apps Script

Google Workspace Flows is currently available as part of the Gemini Alpha program for Google Workspace customers. If you don’t have access to Workspace Flows you will need to request access via your Google Workspace Administrator.

As developers, we often find ourselves building complex automations to glue different services together. But with the introduction of Google Workspace Flows, Google is handing some of that power directly to business users, allowing them to string together tasks—like summarising emails or posting to Chat—without writing a single line of code.

But where does that leave us? Right in the driver’s seat.

While Flows provides a “no-code” interface for users, it offers a robust “code” backend for us. Developers can extend the platform by building custom steps in Google Apps Script. These steps act as reusable building blocks that users can drop into their flows to perform specific, business-critical tasks that aren’t available out of the box.

If you have an existing Google Workspace Add-on, you can even extend its functionality to include these Flows steps. Since Google Workspace Flows is currently part of the Gemini Alpha program, custom steps are intended for internal use and testing within your own Google Workspace organization rather than public distribution. You can deploy them as test deployments or private add-ons for your domain, but a public listing is not yet supported.

Here is a high-level overview of how it works and where to find the documentation to get started.

The Anatomy of a Step

Building a custom step in Apps Script isn’t too dissimilar from building an Editor Add-on, but there are specific architectural differences to be aware of. A step essentially consists of three parts:

  1. The Definition (Manifest): You define the step’s identity, required inputs, and expected outputs in the appsscript.json file.
  2. The Configuration (UI): You build a card interface (using CardService) that allows the user to configure the step when they add it to their flow.
  3. The Execution (Logic): You write the actual function that processes the inputs and returns the outputs.

Key Capabilities

The documentation provided by Google is comprehensive. Here are the key areas you should explore to understand what is possible:

1. Configuration Cards & Variables
Unlike a static settings page, configuration cards in Flows are dynamic. You can build cards that accept input variables from previous steps. For example, if a user has a “New Email” trigger, your step can ingest the email body as a variable.

2. Passing Data
Your step doesn’t just run in isolation; it talks to the rest of the flow. By defining output variables, your script can return data (like a calculated value or a file ID) that subsequent steps can use.

3. Robust Validation
To ensure users don’t break your script with bad data, Flows supports two types of validation. You can use Client-side validation (using Common Expression Language, or CEL) for instant feedback on the UI, or Server-side validation for complex checks against external databases.

4. Handling Complexity
For more advanced use cases, simple strings and integers might not be enough. The platform supports Custom Resources for grouping complex data structures (like a CRM lead object) and Dynamic Variables for inputs that change based on context (like selecting a specific question from a Google Form).

Getting Started

If you are part of the Gemini Alpha program and want to get your hands dirty, the best place to start is the Quickstart guide. It walks you through building a simple calculator step that takes two numbers and an operator to output a result.
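To give a flavour of part 3 of the anatomy (the execution logic), here is a minimal sketch of what the calculator step's core function might look like. The function name and the input/output keys are illustrative assumptions; in a real step, the inputs arrive via the manifest definition and the CardService configuration card, and the outputs are returned as flow variables.

```javascript
// Hypothetical execution logic for a calculator step: two numbers and an
// operator in, one result out. Input/output keys are illustrative.
function runCalculatorStep(inputs) {
  const a = Number(inputs.firstNumber);
  const b = Number(inputs.secondNumber);
  switch (inputs.operator) {
    case '+': return { result: a + b };
    case '-': return { result: a - b };
    case '*': return { result: a * b };
    case '/':
      if (b === 0) throw new Error('Division by zero');
      return { result: a / b };
    default:
      throw new Error('Unsupported operator: ' + inputs.operator);
  }
}
```

Because the logic is a plain function of its inputs, it is easy to test in isolation before wiring it into the flow's configuration card and manifest.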

It is exciting to see Google offering a bridge between the ease of no-code and the power of Apps Script.

Mastering Google Workspace Flows: A Comprehensive Guide to the New AI-Powered Automation Tool from Stéphane Giron

With the announcement of Google Workspace Flows at Google Cloud Next 2025, Workspace developers and power users have been eager to see how this new platform integrates with the ecosystem we know and love. Currently available in the Gemini for Workspace Alpha programme, Workspace Flows promises to bridge the gap between simple tasks and complex, AI-driven processes.

In a fantastic new three-part series, Google Workspace expert Stéphane Giron takes us on a complete tour of the platform. From the initial interface to complex logic handling, this series is the perfect primer for anyone looking to automate their organisation’s daily grind.

Here is a breakdown of the series:

Part 1: A First Look The introductory article sets the stage, explaining how to enable Workspace Flows within the Admin console and providing a tour of the dashboard. Stéphane highlights how Flows is built with Gemini at its foundation, allowing users to describe workflows in natural language to generate automations instantly.

Part 2: The Starters (Triggers) In the second instalment, the focus shifts to the engine of every Agent: the Starter. Stéphane provides a deep dive into Application-based starters (reacting to events in Gmail, Drive, Chat, Sheets, Calendar, and Forms) versus Schedule-based starters. He details how granular these triggers can be—such as filtering emails by specific criteria before a flow even begins.

Part 3: Steps and Logic The final piece explores the “Steps”—the actions that happen after the trigger. The article covers:

  • Core App Steps: Standard actions like sending emails or creating Calendar events.
  • AI Integration: Using ‘Ask Gemini’ to reason, summarise, or transform content.
  • Logic: Implementing ‘Check if’ and ‘Decide’ steps to create dynamic paths.

Apps Script Integration

Of particular interest to the AppsScriptPulse community is the mention in Part 3 regarding Custom Steps. Stéphane notes that for ultimate power users, Workspace Flows will offer the ability to create custom steps using Google Apps Script. This feature will unlock the full potential of the ecosystem, allowing developers to build highly specific integrations that aren’t available out of the box.

Click the source links to find out more!


Mastering Workspace API Authentication in ADK Agents with a Reusable Pattern

For developers seasoned in the Google Workspace ecosystem, the promise of agentic AI is not just about automating tasks, but about creating intelligent, nuanced interactions with the services we know so well. Google’s Agent Development Kit (ADK) provides the foundation, but integrating the familiar, user-centric OAuth 2.0 flow into this new world requires a deliberate architectural approach.

This article explores the patterns for building a sophisticated, multi-capability agent. It explores why you might choose a custom tool implementation (ADK’s “Journey 2”) over auto-generation, and presents a reusable authentication pattern that you can apply to any Workspace API.

The full source code for this project is available in the companion GitHub repository. It’s designed as a high-fidelity local workbench, perfect for development, debugging, and rapid iteration.

The Architectural Crossroads: Journey 1 vs. Journey 2

When integrating a REST API with the ADK, you face a choice, as detailed in the official ADK authentication documentation:

  • Journey 1: The “Auto-Toolify” Approach. This involves using components like OpenAPIToolset or the pre-built GoogleApiToolSet. You provide an API specification, and the ADK instantly generates tools for the entire API surface. This is incredibly fast for prototyping but can lead to an agent that is “noisy” (has too many tools and scopes) and lacks the robustness to handle API-specific complexities like pagination.
  • Journey 2: The “Crafted Tool” Approach. This involves using FunctionTool to wrap your own Python functions. This is the path taken in this project. It requires more initial effort but yields a far superior agent for several reasons:
    • Control: This approach exposes only the high-level capabilities needed (e.g., search_all_chat_spaces), not every raw API endpoint.
    • Robustness: Logic can be built directly into the tools to handle real-world challenges like pagination, abstracting this complexity away from the LLM.
    • Efficiency: The tool can pre-process data from the API, returning a concise summary to the LLM and preserving its valuable context window.

Journey 2 was chosen because the goal is not just to call an API, but to provide the agent with a reliable, intelligent capability.

The Core Implementation: A Reusable Pattern

The cornerstone of this integration is a single, centralised function: get_credentials. This function lives in agent.py and is called by every tool that needs to access a protected resource.

It elegantly manages the entire lifecycle of an OAuth token within the ADK session by following a clear, four-step process:

  1. Check Cache: It first looks in the session state (tool_context.state) for a valid, cached token.
  2. Refresh: If a token exists but is expired, it uses the refresh token to get a new one and updates the cache.
  3. Check for Auth Response: If no token is found, it checks if the user has just completed the OAuth consent flow using tool_context.get_auth_response().
  4. Request Credentials: If all else fails, it formally requests credentials via tool_context.request_credential(), which signals the ADK runtime to pause and trigger the interactive user flow.

This pattern is completely generic. You can see in the agent.py file how, by simply changing the SCOPES constant, you could reuse this exact function for Google Drive, Calendar, Sheets, or any other Workspace API.

Defining the Agent and Tools

With the authentication pattern defined, building the agent becomes a matter of composition. The agent.py file also defines the “Orchestrator/Worker” structure—a best practice that uses a fast, cheap model (gemini-2.5-flash) for routing and a more powerful, specialised model (gemini-2.5-pro) for analysis.

The Other Half: The Client-Side Auth Flow

The get_credentials function is only half the story. When it calls tool_context.request_credential(), it signals to the runtime that it needs user authentication. The cli.py script acts as that runtime, or “Agent Client.”

The cli.py script is responsible for the user-facing interaction. As you can see in the handle_agent_run function within cli.py, the script’s main loop does three key things:

  1. It iterates through agent events and uses a helper to check for the specific adk_request_credential function call.
  2. When it detects this request, it pauses, prints the authorisation URL for the user, and waits to receive the callback URL.
  3. It then bundles this callback URL into a FunctionResponse and sends it back to the agent, which can then resume the tool call, this time successfully.

This client-side loop is the essential counterpart to the get_credentials pattern.

Production Considerations: Moving Beyond the Workbench

This entire setup is designed as a high-fidelity local workbench. Before moving to a production environment, several changes are necessary:

  • Persistent Sessions: The InMemorySessionService is ephemeral. For production, you must switch to a persistent service like DatabaseSessionService (backed by a database like PostgreSQL) or VertexAiSessionService. This is critical for remembering conversation history and, most importantly, for securely storing the user’s OAuth tokens.
  • Secure Credential Storage: While caching tokens in the session state is acceptable for local testing, production systems should encrypt the token data in the database or use a dedicated secret manager.
  • Runtime Environment: The cli.py would be replaced by a web server framework (like FastAPI or Flask) if you are deploying this as a web application, or adapted to the specific requirements of a platform like Vertex AI Agent Engine.

By building with this modular, Journey 2 approach, you create agents that are not only more capable and robust but are also well-architected for the transition from local development to a production-grade deployment.

Explore the Code

Explore the full source code in the GitHub repository to see these concepts in action. The README.md file provides all the necessary steps to get the agent up and running on your local machine. By combining the conceptual overview in this article with the hands-on code, you’ll be well-equipped to build your own robust, production-ready agents with the Google ADK.

Source: GitHub – mhawksey/ADK-Workspace-User-oAuth-Example

Building Agentic Workflows: A 3-Part Series from Aryan Irani on ADK, Vertex AI, and Google Apps Script

Thanks to Google’s Agent Development Kit (ADK) and Vertex AI’s Agent Engine, building your own intelligent agents is no longer just an idea. These frameworks let you build intelligent agents, deploy them at scale, and integrate them seamlessly into your Google Workspace tools, enabling a new era of agentic productivity.

The ability to build and integrate custom AI agents directly into Google Workspace is rapidly moving from a far-off idea to a practical reality. For developers looking for a clear, end-to-end example, Google Developer Expert Aryan Irani has published an excellent three-part tutorial series.

This comprehensive guide, which includes video tutorials for each part, offers a complete end-to-end journey for building agentic workflows. It starts by building a local ‘AI Auditor’ agent with the new Agent Development Kit (ADK)—programming it to verify claims with the Google Search tool. The series then guides developers through the entire cloud deployment process to Vertex AI Agent Engine, covering project setup and permissions. Finally, it ties everything together with Google Apps Script, providing the complete code (using UrlFetchApp and the OAuth2 for Apps Script library) to integrate the agent as a powerful, fact-checking tool inside Google Docs.

Why This Series Matters

This series is a fantastic, practical guide for any Google Workspace developer looking to bridge the gap between classic Apps Script solutions and Google’s powerful new generative AI tools. It provides a complete, end-to-end blueprint for building truly ‘agentic’ workflows inside the tools your organisation already uses.

A big thank you to Aryan Irani for creating and sharing this detailed resource with the developer community!


Automate In-Depth Research with Apps Script and a Gemini AI ‘Research Team’

Automate research with this script. It turns a Google Sheet into an AI that plans, researches, and writes detailed reports on any topic.

The Gemini app (gemini.google.com) includes a sophisticated Deep Research agent. Its capabilities are incredibly powerful, but it is designed to tackle one complex topic at a time, and the manual process of running each query and collating the results can be very slow.

What if you could deconstruct the capabilities of Deep Research and build a similar agent which is instead orchestrated from a Google Sheet? Thankfully, Google Workspace Developer Expert, Stéphane Giron, has designed and shared an elegant solution that transforms Google Sheets and Apps Script into a powerful, automated, bulk research assistant.

Stéphane’s approach is built on a “Divide and Conquer” strategy, which he outlines in his detailed article on the project.

Instead of just asking a single AI to answer a complex question, this script acts as a manager for a specialised AI team… The process is broken down into three distinct phases:

  1. Phase 1: The Strategist (Plan): A powerful AI model analyses your main query and breaks it down into a logical plan of smaller, essential sub-questions.
  2. Phase 2: The Researcher (Execute): A fast, efficient AI model takes each sub-question and uses targeted Google searches to find factual, concise answers.
  3. Phase 3: The Editor (Synthesise): The strategist AI returns to act as an editor, weaving all the collected research and data into a single, cohesive, and well-written final report.

How the AI Research Team Works

The solution, which is available in full on GitHub, uses a clever combination of different Gemini models and tools, all orchestrated by Google Apps Script:

  • Input: The user simply lists all their research topics in a Google Sheet (e.g., in column A).
  • Phase 1 (Plan): The script loops through each topic. For each one, it calls the Gemini 2.5 Pro model, treating it as the “Strategist.” It uses Gemini’s function-calling capability to force the model to output a structured plan of sub-questions needed to answer the main query.
  • Phase 2 (Execute): The script then takes this list of sub-questions and calls the Gemini 2.5 Flash model for each one, treating it as the fast “Researcher.” This call uses the built-in Google Search tool (grounding) to find up-to-date, factual answers for each specific sub-question.
  • Phase 3 (Synthesise): Finally, with all the collected research, the script calls Gemini 2.5 Pro one last time. In this “Editor” role, the model receives the original query and all the question/answer pairs, synthesising them into a single, comprehensive report.
  • Output: The script creates a new Google Doc for this final report and places a link to it directly in the Google Sheet, next to the original topic.
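The Phase 2 "Researcher" call can be pictured with a short sketch. This is an illustrative reconstruction, not Stéphane's actual code: it assumes the Gemini API's generateContent endpoint with the built-in Google Search tool enabled for grounding, and the function names are invented for this example.

```javascript
// Illustrative sketch of a grounded Phase 2 research call (not the
// project's actual code). Endpoint and payload follow the public Gemini API.
const GEMINI_URL = 'https://generativelanguage.googleapis.com/v1beta/models/' +
                   'gemini-2.5-flash:generateContent';

// Pure helper: build a grounded request body for one sub-question.
function buildResearchRequest(subQuestion) {
  return {
    contents: [{ role: 'user', parts: [{ text: subQuestion }] }],
    tools: [{ google_search: {} }]   // enable Google Search grounding
  };
}

function researchSubQuestion(subQuestion, apiKey) {
  const response = UrlFetchApp.fetch(GEMINI_URL + '?key=' + apiKey, {
    method: 'post',
    contentType: 'application/json',
    payload: JSON.stringify(buildResearchRequest(subQuestion))
  });
  return JSON.parse(response.getContentText())
    .candidates[0].content.parts[0].text;
}
```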

This “AI team” approach is a fascinating pattern we’re beginning to see emerge in the community. It strongly echoes the “AI Scrum Master” concept shared by Jasper Duizendstra at the recent Google Workspace Developer Summit in Paris. Both solutions smartly move away from basic prompting and instead orchestrate a team of specialised AIs, leading to a far more robust and scalable workflow.

Stéphane’s script is highly customisable, allowing you to set the number of sub-questions to generate, define the output language, and pass the current date to the models to ensure the research is timely.

This is a fantastic example of how to build sophisticated, autonomous AI agents inside Google Workspace. A big thank you to Stéphane Giron for sharing this project with the community.

Get Started

You can find the complete code, setup instructions, and a deeper dive into the architecture at the links below:

Source: Bulk Deep Research with Gemini and Google Apps Script + Code Repository: Bulk-Deep-Research on GitHub

Google Hints at AI Advanced Services in Apps Script: Validation for a Mystic Marty Prediction

At the Devoteam UK GDC event on 16 October 2025, I debuted ‘Mystic Marty’ and made three AppSheet and Apps Script predictions for H2 2026.

The Predictions

These predictions were:

  1. Deep Integration of AppSheet into Google Workspace Flows: By allowing AppSheet applications to act as both triggers and actions within Workspace Flows, AppSheet transitions from an isolated tool into a managed step within a larger, automated business value stream, preventing the creation of new data silos and process bottlenecks.
  2. AppSheet Actions within Gemini Enterprise (aka Agentspace): Integrating AppSheet into Gemini Enterprise positions it as a mechanism for structuring and exposing proprietary business data, enabling the creation of custom, data-aware agents at enterprise scale.
  3. Gemini App as a Google Apps Script Service: By making it simple for AppSheet automations to call Apps Script functions that interact directly with Vertex AI and Gemini models, this feature addresses complex, domain-specific enterprise needs that go beyond the scope of no-code solutions.

The Validation

Roll forward one week to the Google Workspace Developer Summit in Paris, and we had confirmation in a presentation by Luke Camery (Product Lead, Enterprise Collaboration at Google) that Google is in the early planning stages of AI Advanced Services in Apps Script, which would launch in 2026.

Not only that, but a free quota was being considered, which would avoid the need for users to create an associated Google Cloud project. The details are still being worked out and the offer is subject to change, but the fact it is being considered answers a lot of the questions I hear on this topic from the community.

The Method Behind the Mysticism

These predictions weren’t just guesswork. They stemmed from both historic product knowledge and, more significantly, from an analysis with Gemini of the ‘DORA: The 2025 State of AI-assisted Software Development’ report, in particular exploring how high-quality internal platforms can unlock and amplify the value of AI.

The report’s emphasis on internal platforms as a key enabler for AI was a clear indicator. It suggested that Google would inevitably need to provide more powerful, integrated tools—like AI services in Apps Script—to allow enterprise developers to build these exact kinds of high-value platforms.

For a deeper dive, you can read my full analysis here: DORA 2025 Insights: Is Gemini Enterprise the Answer to the AI Challenges?

What This Means for Apps Scripters

The hint of a free quota is more than just interesting news: it signals a significant potential shift for the Apps Script community. The removal of the Google Cloud project barrier, in particular, would allow seamless access to powerful AI tools, giving Google Workspace users an opportunity to experiment with generative AI in their automations.

While Google’s plans are “subject to change”, the direction seems clear. The integration of AI Advanced Services directly into Apps Script could represent the next major evolution of the platform, moving it from a tool for automation to an integrated high-value platform for intelligent application development. As always, we’ll be watching this space closely.

Community Tip: Restrict a Google Form to Your Domain with Apps Script

How to publish a Google Form so that it can only be completed by people in your domain.

Screenshot of code for publishing a Google Form

This was developed in anticipation of the Forms API changes coming into effect on 31 March 2026.

In a helpful post on “The Gift of Script,” Phil Bainbridge tackles the common need to publish a Google Form so it can only be completed by users within your organisation.

He shares the complete Apps Script snippet, highlighting the crucial (and often missed) step of first setting the form as published (.setPublished(true)) before using the Drive API to apply domain-wide “reader” permissions. It’s a quick, effective solution for securing internal-only forms.
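Based on the approach the post describes, the shape of the script might look like the sketch below. It assumes the advanced Drive service (v3) is enabled, the function names are illustrative, the domain is a placeholder, and the exact permission fields (particularly the published-view targeting) should be verified against Phil's original post.

```javascript
// Sketch of the approach described in the post (names illustrative).
function publishFormToDomain(formId, domain) {
  // Crucial (and often missed) first step: mark the form itself as published.
  FormApp.openById(formId).setPublished(true);

  // Then apply a domain-wide "reader" permission via the Drive API.
  Drive.Permissions.create(buildDomainResponderPermission(domain), formId);
}

// Pure helper: domain-wide reader permission resource. The 'view' field
// targeting the published form is an assumption to check against the source.
function buildDomainResponderPermission(domain) {
  return { type: 'domain', role: 'reader', domain: domain, view: 'published' };
}
```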

Source: The Gift of Script: Publish a Google Form to your domain

A Framework for Integrating Agentic AI into Google Workspace Add-ons

A fact-check custom function for Google Sheets to be used as a bound Apps Script project powered by a Vertex AI agent and Gemini model.

This sample demonstrates how you can integrate two powerful types of AI resources directly into your Google Sheets spreadsheets.

A new tutorial published by Google’s Pierrick Voulet, while ostensibly about creating a fact-checking tool in Google Sheets, offers something more valuable to developers: a clear framework for integrating powerful, multi-step Vertex AI Agents into Google Workspace solutions.

While the example is a FACT_CHECK custom function, the underlying architecture caught my eye as it provides a blueprint for connecting Google Workspace applications with the sophisticated reasoning capabilities of custom AI agents.

The Core Architectural Pattern

The solution uses Google Apps Script as a bridge between the user-facing application (Google Sheets) and the backend AI services on Google Cloud.

When a user calls the custom function, the script authenticates with a service account and makes a call to a deployed Vertex AI Agent. This agent then performs its multi-step reasoning. The result is then optionally passed to a Gemini model for final formatting before being returned to the Sheet.

This pattern is highly adaptable for various use cases, allowing developers to bring advanced agentic AI into their own solutions.

Key Components for Developers

For developers looking to adapt this framework, the tutorial outlines the essential components:

  • A configured Google Cloud Project with the Vertex AI API enabled.
  • A deployed ADK Agent on the Vertex AI Agent Engine.
  • An Apps Script project using a service account for secure, server-to-server authentication with Google Cloud.

The provided Apps Script code (Code.gs and AiVertex.js) serves as a robust starting point, handling the API calls to both the reasoning agent and a Gemini model for final output formatting.

Ultimately, the fact-checking tool serves as an excellent proof of concept. The true value for the developer community, however, lies in the architectural blueprint it provides. This tutorial offers a clear model for integrating multi-step AI reasoning into Google Workspace add-ons, opening the door to a new class of intelligent applications.

Source: Fact-check statements with an ADK AI agent and Gemini model  |  Apps Script  |  Google for Developers