AppsScriptPulse

Vertex AI Advanced Service in Apps Script: A Step Forward or a Missed Opportunity?

As Google Apps Script developers, we are used to waiting. We wait for new runtime features, we wait for quotas to reset, and recently, we have been waiting for a first-class way to integrate Gemini into our projects.

With the recent release of the Vertex AI Advanced Service, the wait is technically over. But as detailed in Justin Poehnelt’s recent post, Using Gemini in Apps Script, you might find yourself asking if this was the solution we were actually looking for.

While the new service undoubtedly reduces the boilerplate code required to call Google’s AI models, it brings its own set of frustrations that leave me, and others in the community, feeling somewhat underwhelmed.

The “Wrapper” Trap

On the surface, the new VertexAI service looks like a win. As Justin highlights, replacing complex UrlFetchApp headers with a single VertexAI.Endpoints.generateContent() call is a significant cleanup.

However, this convenience comes with an administrative price tag. The Vertex AI Advanced Service requires a standard Google Cloud project, which is understandable for billing, but it also requires the creation of an OAuth consent screen. For the majority of internal enterprise applications, I would imagine either a service account or the https://www.googleapis.com/auth/cloud-platform scope with associated IAM roles will be the preferred approach. Both remove the need for a consent screen, and the service account route rules out the Vertex AI Advanced Service entirely.

This raises the question: why didn’t Google take the approach of the Google Gen AI SDK?

In the Node.js and JavaScript world, the new Google Gen AI SDK offers a unified interface. You can start with a simple API key (using Google AI Studio) for prototyping, and switch to Vertex AI (via OAuth) for production, all without changing your core code logic. The Apps Script service, by contrast, locks us strictly into the “Enterprise” Vertex path. We seem to have traded boilerplate code for boilerplate configuration.
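For context, the Gen AI SDK’s unified setup looks roughly like this. The helper function below is mine, not part of the SDK; the option and environment variable names follow the SDK’s documented conventions, so treat this as an illustrative sketch:

```javascript
// Sketch: one config builder, two auth paths (API key vs Vertex AI).
// buildGenAIOptions is a hypothetical helper; the option/env names
// mirror the Google Gen AI SDK's conventions.
function buildGenAIOptions(env) {
  if (env.GOOGLE_GENAI_USE_VERTEXAI === 'true') {
    // Production: Vertex AI via OAuth / Application Default Credentials
    return {
      vertexai: true,
      project: env.GOOGLE_CLOUD_PROJECT,
      location: env.GOOGLE_CLOUD_LOCATION || 'us-central1'
    };
  }
  // Prototyping: a Google AI Studio API key
  return { apiKey: env.GEMINI_API_KEY };
}

// Either way, the calling code stays the same, e.g.:
//   const ai = new GoogleGenAI(buildGenAIOptions(process.env));
//   const result = await ai.models.generateContent({ model, contents });
```

The Apps Script Advanced Service offers no equivalent switch: it is Vertex-only from the first line of code.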

A Third Way: The Community Approach

If you are looking for that Unified SDK experience I mentioned earlier, where you can use the standard Google AI Studio code patterns within Apps Script, there is a third way.

I have published a library, GeminiApp, which wraps UrlFetchApp but mimics the official Google Gen AI SDK for Node.js. This allows you to write code that looks and feels like the modern JavaScript SDK, handling the complex UrlFetchApp configuration under the hood.

As you can see in the comparison above, the Advanced Service (left) abstracts away the request complexity, the UrlFetchApp method (middle) gives you the transparency and control you often need in production, and the GeminiApp library (right) offers a balance of both.

Disclaimer: As the creator of this library, I admit some bias, but it was built specifically to address the gap.

It is important to note a distinction in scope. Both the Google Gen AI SDK and GeminiApp are focused strictly on generative AI features. The Vertex AI Advanced Service, much like the platform it wraps, offers a broader range of methods beyond just content generation.

If your needs extend into those wider Vertex AI capabilities, but you still require the authentication flexibility of UrlFetchApp (such as using Service Accounts), I have a solution for that as well. My Google API Client Library Generator for Apps Script includes a build for the full Vertex AI (AI Platform) API. This gives you the comprehensive coverage of the Advanced Service with the architectural flexibility of an open-source library.

Here is how you can use the generated client library to authenticate with a Service Account, something impossible with the official Advanced Service:

/**
 * Example using the generated Aiplatform library with a Service Account.
 * Library: https://github.com/mhawksey/Google-API-Client-Library-Generator-for-Apps-Script/tree/main/build/Aiplatform
 */
function callGemini(prompt) {
  const projectId = 'GOOGLE_CLOUD_PROJECT_ID';
  const region = 'us-central1';
  const modelName = 'gemini-2.5-flash';

  const modelResourceName = `projects/${projectId}/locations/${region}/publishers/google/models/${modelName}`;

  const serviceAccountToken = getServiceAccountToken_(); 

  const vertexai = new Aiplatform({
    token: serviceAccountToken
  });

  const payload = {
    contents: [{
      role: 'user',
      parts: [{
        text: prompt
      }]
    }],
    generationConfig: {
      temperature: 0.1,
      maxOutputTokens: 2048
    }
  };

  const result = vertexai.projects.locations.publishers.models.generateContent({
    model: modelResourceName,
    requestBody: payload
  });

  return result.data.candidates?.[0]?.content?.parts?.[0]?.text || 'No response generated.';
}

When “Advanced” Means “Behind”

There is another catch that Justin uncovered during his testing: the service struggles with the bleeding edge.

If you are trying to access the latest “Preview” models to prototype, such as the highly anticipated gemini-3-pro-preview, the advanced service may fail you. It appears the wrapper doesn’t yet support the auto-discovery needed for these newer endpoints.

In his companion post, UrlFetchApp: The Unofficial Documentation, Justin reminds us why UrlFetchApp is still the backbone of Apps Script development. When the “official” wrapper doesn’t support a specific header or a beta model, UrlFetchApp is the only way to bypass the limitations.
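To make that escape hatch concrete, here is a minimal sketch of calling a preview model directly over REST. The endpoint path follows the standard Vertex AI generateContent pattern (the same resource path used in the Aiplatform example earlier); the helper names and placeholder IDs are mine:

```javascript
// Sketch: calling a Vertex AI model directly with UrlFetchApp,
// bypassing the Advanced Service. Adjust project/region/model to taste.
function buildVertexRequest(projectId, region, model, prompt, token) {
  const url = `https://${region}-aiplatform.googleapis.com/v1/projects/${projectId}` +
    `/locations/${region}/publishers/google/models/${model}:generateContent`;
  return {
    url: url,
    options: {
      method: 'post',
      contentType: 'application/json',
      headers: { Authorization: `Bearer ${token}` },
      payload: JSON.stringify({
        contents: [{ role: 'user', parts: [{ text: prompt }] }]
      }),
      muteHttpExceptions: true
    }
  };
}

function callPreviewModel(prompt) {
  const req = buildVertexRequest('YOUR_PROJECT_ID', 'us-central1',
    'gemini-3-pro-preview', prompt, ScriptApp.getOAuthToken());
  const response = UrlFetchApp.fetch(req.url, req.options);
  const data = JSON.parse(response.getContentText());
  return data.candidates?.[0]?.content?.parts?.[0]?.text;
}
```

Because you control the URL and headers, a new preview model is just a string change away, with no wait for the wrapper to catch up.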

The Verdict

The Vertex AI service is a welcome addition for stable, enterprise-focused applications. But for developers, particularly those who want to test the latest Gemini 3 capabilities, it feels rigid compared to the flexibility seen in other Google developer ecosystems.

It serves as a good reminder that in Apps Script, convenience services are great, but understanding the underlying HTTP requests via UrlFetchApp extends what you can achieve.

Breaking the Grid: 4 Ways to Generate Excel Files from Google Workspace

We have all been there: You build a beautiful, automated dashboard in Google Sheets. It’s dynamic, it’s collaborative, and it’s perfectly integrated with your data sources. Then, the stakeholder request comes in: “Can you email this to me as an Excel file every Monday morning?”

For Google Workspace developers, the “Excel Requirement” is often a stumbling block. While Google Sheets is powerful, the bridge to Excel can be shaky. In this post, I’m going to explore four architectural approaches to solving this problem. If you would like to play along here is the Google Sheet Invoice Template and the Excel Invoice Template.

1. The Low-Code Agent (AppSheet Automation)

Before you write a single line of code, you should consider if the platform can do the heavy lifting for you. Since AppSheet is now a core part of Google Workspace, it offers a native “No-Code” way to generate Excel files that is robust, scheduled, and template-based.

The Strategy

Instead of writing a script to fetch data and build a file, you connect your Google Sheet to an AppSheet app. You then configure an Automation that triggers on a schedule or a data change.

The Workflow:

  1. Trigger: A new row is added to your “Invoices” sheet (or a weekly schedule fires).
  2. Task: AppSheet runs a “Create a new file” task.
  3. Template: It reads a simplified Excel template stored in Drive, replaces tags like <<[Qty]>> with actual data, and generates the file.
  4. Delivery: The file is saved to Drive or emailed directly to the client.

Why This Wins

  • Zero Maintenance: No scripts to debug, no OAuth tokens to manage, and no breaking changes when APIs update.
  • Native Templates: It supports complex Excel formatting out of the box.

The Limitation

  • Syntax: You must use AppSheet’s specific templating syntax (<< >>) inside your Excel file.
  • Scaling: Generating reports with thousands of line items can hit automation timeout limits.

2. The Native Export (Apps Script)

If you need more granular control or don’t want to spin up an AppSheet app, the most common starting point is attempting to do everything within Apps Script. You create a temporary Google Sheet, fill it with data, and then export it using the Drive API.

The Code

In this scenario, we assume you have a Google Sheet version of your Invoice template. We first write the specific line items to the Sheet, then export it.

function generateInvoiceExport() {
  const ss = SpreadsheetApp.getActiveSpreadsheet();
  const sheet = ss.getSheetByName('InvoiceTemplate');
  
  // 1. Prepare Invoice Data
  const invoiceData = [
    { desc: "Web Development - Phase 1", qty: 40, unit: 100 },
    { desc: "Server Hosting (Annual)", qty: 1, unit: 250 },
    { desc: "Domain Registration", qty: 2, unit: 15 },
    { desc: "Maintenance Retainer", qty: 10, unit: 85 }
  ];
  
  // 2. Map data to the Sheet structure (Cols B, C, D, E, F)
  // B=Desc, E=Qty, F=Unit. (C and D are skipped/empty)
  const rows = invoiceData.map(item => [item.desc, "", "", item.qty, item.unit]);

  // 3. Write data to the Google Sheet
  // Matches Template: Start at Row 19, Column B (2)
  if (rows.length > 0) {
    // Corrected: ensure the number of columns (5) matches your data structure
    sheet.getRange(19, 2, rows.length, 5).setValues(rows);
  }
  
  // 4. Construct the export URL for the populated sheet
  const fileName = `Invoice_${new Date().toISOString().slice(0,10)}.xlsx`;
  const url = `https://docs.google.com/spreadsheets/d/${ss.getId()}/export?format=xlsx&gid=${sheet.getSheetId()}`;
  
  // 5. Fetch and Save
  const params = {
    method: "GET",
    headers: { "Authorization": `Bearer ${ScriptApp.getOAuthToken()}` },
    muteHttpExceptions: true
  };
  
  const response = UrlFetchApp.fetch(url, params);
  
  if (response.getResponseCode() === 200) {
    const blob = response.getBlob().setName(fileName);
    const file = DriveApp.createFile(blob);
    console.log(`Success: Invoice saved to Drive (ID: ${file.getId()})`);
  } else {
    console.log(`Error: ${response.getContentText()}`);
  }
}

The Limitations

  • Speed & Scale: The overhead of creating a temporary sheet and exporting it adds up. Generating 50 invoices in a loop will take a long time.
  • The “Source” Gap: Your source material must be a Google Sheet, meaning you lose Excel-specific features like Power Query or VBA macros from your template.

3. The Browser Round-Trip (ExcelJS)

If you need to maintain stricter fidelity to an existing Excel template (preserving fonts, colors, and layouts) and are interested in speed, you can use the “Round Trip” technique.

This method uses the user’s browser as a processing engine. You fetch the Invoice.xlsx template from Drive, process it client-side with ExcelJS, and send the result back.

View the Code Project

The Strategy

  1. Parallel Fetching (Client): The browser fires concurrent requests to Apps Script to fetch the template (Base64) and invoice data.
  2. Client-Side Processing: ExcelJS loads the template, injects data into specific cells (preserving layout), and recompiles the file.
  3. Upload: The new file is encoded and sent back to Drive.

The Code

This snippet focuses on the core orchestration logic. For the full UI setup and helper functions, see the linked project.

/* --- CLIENT SIDE (Core Logic) --- */
async function startGeneration() {
  
  // 1. Parallel Fetching
  // Fire both requests at once to speed up the wait time
  const [templateResponse, invoiceData] = await Promise.all([
    runGoogleScript('getTemplateBase64', null), 
    runGoogleScript('getInvoiceData')
  ]);

  // 2. Load into ExcelJS
  const workbook = new ExcelJS.Workbook();
  const binaryString = atob(templateResponse.base64);
  const bytes = new Uint8Array(binaryString.length);
  for (let i = 0; i < binaryString.length; i++) {
    bytes[i] = binaryString.charCodeAt(i);
  }
  await workbook.xlsx.load(bytes.buffer);

  // 3. Modify the Sheet
  const sheet = workbook.worksheets[0];
  let currentRow = 19; // Starting at Row 19 based on template

  invoiceData.forEach((item, index) => {
    // Ensure we do not write past the visual template limit (e.g. Row 22)
    if (currentRow + index > 22) return;

    const row = sheet.getRow(currentRow + index);
    
    // Inject data into specific cells (B, E, F)
    row.getCell(2).value = item.desc;  
    row.getCell(5).value = item.qty;   
    row.getCell(6).value = item.unit;  
    
    // ExcelJS preserves the existing styles of these cells!
    row.commit();
  });

  // 4. Export & Save
  const outBuffer = await workbook.xlsx.writeBuffer();
  
  // Convert buffer to base64 (simplified for brevity)
  let binary = '';
  const outBytes = new Uint8Array(outBuffer);
  for (let i = 0; i < outBytes.length; i++) {
    binary += String.fromCharCode(outBytes[i]);
  }
  const outBase64 = btoa(binary);

  // Upload back to Drive
  await runGoogleScript('saveProcessedFile', outBase64, `Invoice_${Date.now()}.xlsx`);
}

/* --- SERVER SIDE (Apps Script) --- */
function getTemplateBase64(templateId) {
  const file = DriveApp.getFileById(templateId);
  return {
    base64: Utilities.base64Encode(file.getBlob().getBytes())
  };
}

function saveProcessedFile(base64Content, fileName) {
  const blob = Utilities.newBlob(
    Utilities.base64Decode(base64Content), 
    MimeType.MICROSOFT_EXCEL, 
    fileName
  );
  return { url: DriveApp.createFile(blob).getUrl() };
}

The Trade-off

  • Payload Limits: google.script.run calls can become unstable when the payload exceeds 50MB.
  • Base64 Inflation: Remember that Base64 encoding increases file size by ~33%.
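That inflation figure follows directly from how Base64 works: every 3 input bytes become 4 output characters, with padding rounding up. A quick sanity check (plain Node.js here, not Apps Script):

```javascript
// Base64 maps every 3 bytes to 4 characters (padding rounds up),
// so the encoded payload is ~4/3 the size of the raw bytes.
function base64Length(byteLength) {
  return Math.ceil(byteLength / 3) * 4;
}

const raw = 10 * 1024 * 1024; // e.g. a 10MB template
const encoded = base64Length(raw);
console.log(`${raw} bytes -> ${encoded} chars (+${((encoded / raw - 1) * 100).toFixed(1)}%)`);
```

Worth keeping in mind when sizing templates: a file that looks comfortably under the payload limit on disk may not be once encoded.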

4. The Enterprise Engine (Cloud Functions + Python)

For robust, high-fidelity reporting, we need to leave the Google Sheets conversion pipeline behind. By using Google Cloud Functions with Python, we can manipulate native Excel files directly using libraries like openpyxl.

The “User Identity” Pattern

Instead of managing complex Service Account permissions, this solution uses the User’s Identity to perform Drive operations.

  1. Apps Script: Generates an OIDC ID Token (for identity) using getIdentityToken(), and an OAuth Access Token (for Drive permissions) with getOAuthToken().
  2. Cloud Function: Verifies the ID Token and uses the Access Token to initialise the Drive API client as the user.

Setup Instructions

1. Google Cloud Function (Python)

  • Deploy the Function:
    • Source: cloud-fn.py
    • Runtime: Python 3.x
    • Entry point: generate_excel_report
    • Requirements: functions-framework, google-auth, google-api-python-client, openpyxl
  • Configure Environment:
    • Open cloud-fn.py and update APPS_SCRIPT_CLIENT_ID with your Apps Script’s OAuth Client ID.
      Note: You can find this ID by running the logClientId() helper function in Code.js.

2. Google Apps Script (Code.js)

  • Update Manifest (appsscript.json): Add the https://www.googleapis.com/auth/userinfo.email scope.
  • Configuration: Update YOUR_CLOUD_FUNCTION_URL in callInvoiceEngine() with your deployed function URL.

The Code

Google Apps Script (Code.js)

function callInvoiceEngine() {
    const url = "YOUR_CLOUD_FUNCTION_URL";

    // 1. Get Tokens
    const accessToken = ScriptApp.getOAuthToken();
    const idToken = ScriptApp.getIdentityToken(); // Requires 'openid' scope in manifest

    // --- LOGGING FOR LOCAL TESTING ---
    console.log("--- CREDENTIALS FOR LOCAL TESTING ---");
    console.log("ID Token:", idToken);
    console.log("Access Token:", accessToken);
    console.log("-------------------------------------");

    const payload = {
        templateId: "ID_OF_TEMPLATE_FILE",
        folderId: "ID_OF_DESTINATION_FOLDER",
        // Matches the same dataset used in our other examples
        data: [
            { desc: "Web Development - Phase 1", qty: 40, unit: 100 },
            { desc: "Server Hosting (Annual)", qty: 1, unit: 250},
            { desc: "Domain Registration", qty: 2, unit: 15},
            { desc: "Maintenance Retainer", qty: 10, unit: 85}
        ],
        // The Magic Key: Passing the active user's Access Token for Drive operations
        userToken: accessToken
    };

    const options = {
        method: "post",
        contentType: "application/json",
        headers: {
            "Authorization": `Bearer ${idToken}`
        },
        payload: JSON.stringify(payload)
    };

    UrlFetchApp.fetch(url, options);
}

function logClientId() {
    // Helper to find the Project's OAuth Client ID
    var token = ScriptApp.getOAuthToken();
    var url = 'https://www.googleapis.com/oauth2/v1/tokeninfo?access_token=' + token;
    var response = UrlFetchApp.fetch(url);
    var data = JSON.parse(response.getContentText());
    
    console.log('Client ID:', data.issued_to);
}

Pro Tip: Batching Requests

If you need to generate 50 invoices at once, don’t loop through them one by one. Use UrlFetchApp.fetchAll(requests) to fire off parallel requests to your Cloud Function. Since the heavy lifting happens in the cloud, Apps Script just acts as the orchestrator.
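A rough sketch of that fan-out, reusing the payload shape from callInvoiceEngine above (the batch-builder helper and placeholder URL are mine):

```javascript
// Sketch: batching invoice generation with UrlFetchApp.fetchAll.
// Each request targets the same Cloud Function; only the payload differs.
function buildBatchRequests(url, idToken, accessToken, invoices) {
  return invoices.map(invoice => ({
    url: url,
    method: 'post',
    contentType: 'application/json',
    headers: { Authorization: `Bearer ${idToken}` },
    payload: JSON.stringify({
      templateId: invoice.templateId,
      folderId: invoice.folderId,
      data: invoice.lineItems,
      userToken: accessToken
    }),
    muteHttpExceptions: true
  }));
}

function generateInvoiceBatch(invoices) {
  const requests = buildBatchRequests('YOUR_CLOUD_FUNCTION_URL',
    ScriptApp.getIdentityToken(), ScriptApp.getOAuthToken(), invoices);
  // fetchAll dispatches the requests in parallel; responses return in order.
  return UrlFetchApp.fetchAll(requests);
}
```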

Google Apps Script (appsscript.json)

{
    "oauthScopes": [
        "https://www.googleapis.com/auth/drive",
        "https://www.googleapis.com/auth/script.external_request",
        "https://www.googleapis.com/auth/userinfo.email"
    ]
}


Local Test Script (test_request.py)

Before deploying, you can test the entire flow locally to ensure your tokens and permissions are working correctly.

  1. Start the Function: functions-framework --target=generate_excel_report --source=cloud-fn.py --port=8080
  2. Run the Tester: python3 test_request.py (Paste tokens from Apps Script logs when prompted).
import requests
import json

def run_test():
    print("--- Local Cloud Function Tester ---")
    print("Ensure you are running: functions-framework --target=generate_excel_report --source=cloud-fn.py --port=8080\n")

    # 1. Gather Tokens & IDs
    print("Paste your Apps Script Tokens (use Logger.log in GAS):")
    id_token = input("1. ID Token (ScriptApp.getIdentityToken()): ").strip()
    if not id_token: return
    
    access_token = input("2. Access Token (ScriptApp.getOAuthToken()): ").strip()
    if not access_token: return

    template_id = input("3. Google Sheet/Excel Template ID: ").strip()
    folder_id = input("4. Destination Folder ID: ").strip()

    # 2. Prepare Payload
    payload = {
        "templateId": template_id,
        "folderId": folder_id,
        "data": [
            { "desc": "Local Test Item 1", "qty": 1, "unit": 10 },
            { "desc": "Local Test Item 2", "qty": 5, "unit": 20 }
        ],
        "userToken": access_token
    }

    headers = {
        "Authorization": f"Bearer {id_token}",
        "Content-Type": "application/json"
    }

    # 3. Send Request
    url = "http://localhost:8080"
    print(f"\nSending POST request to {url}...")
    
    try:
        response = requests.post(url, json=payload, headers=headers)
        print(f"\nStatus Code: {response.status_code}")
        print(f"Response Body: {response.text}")
            
    except requests.exceptions.ConnectionError:
        print(f"\n[!] Could not connect to {url}.")
        print("Is the functions-framework running?")

if __name__ == "__main__":
    run_test()

Why This Wins

  • True Excel Support: You are editing the actual binary file, not a conversion.
  • Separation of Concerns: The heavy lifting is done in the cloud, not in the browser or the Apps Script runtime.

Summary: Which Tool for Which Job?

| Feature | AppSheet | Apps Script (Native) | Client-Side (ExcelJS) | Cloud Functions (Python) |
|---|---|---|---|---|
| Complexity | Very Low | Low | High | High |
| Cost | Free | Free | Free | GCP Usage (Low) |
| Best For | Standard Workflows | Simple Reports | Interactive Template Fills | Enterprise Reports |

Final Thoughts

There is no “one size fits all” here. The “Excel Requirement” is a spectrum:

  • For quick internal tools: Stick to Method 1 (AppSheet) or Method 2 (Native). Both are fast to build and free.
  • For polished client reports: Method 3 (ExcelJS) is your best friend. It keeps the heavy lifting in the browser and preserves your branding.
  • For enterprise scale: If you are generating hundreds of invoices a week, Method 4 (Cloud Functions) is the robust choice. It decouples the generation logic from the spreadsheet UI.

Choose the architecture that fits your volume and your “pixel-perfect” tolerance. I’m also keen to hear from you: have you got another ‘Excel Generator’ approach? Happy scripting!

The Age of the Agent: Google’s Managed MCP Servers and Community Contributions

Image credit: Kanshi Tanaike

The “USB-C for AI” has officially arrived at Google.

If you have been following the rapid rise of Anthropic’s Model Context Protocol (MCP), you know it promises a standardised way for AI models to connect to data and tools. Until now, connecting an LLM to your Google Drive or BigQuery data meant building custom connectors or relying on fragmented community solutions.

That changed this week with Google’s announcement of fully managed, remote MCP servers for Google and Google Cloud services. But typically for the Google Workspace developer community, we aren’t just waiting for official endpoints—we are building our own local solutions too.

Here is an overview of the different approaches emerging for MCP tools and what the future looks like for Workspace developers.

The Official Path: Managed MCP Servers

Google’s announcement represents a massive shift in how they view AI agents. Rather than asking developers to wrap every granular API endpoint (like compute.instances.insert) into a tool, Google is releasing Managed MCP Servers.

Think of these as “Client Libraries for Agents.” Just as Google provides SDKs for Python or Node.js to handle the heavy lifting of authentication and typing, these managed servers provide a simplified, reasoned interface for agents.

The announcement lists a massive rollout over the “next few months” covering everything from Cloud Run and Storage to the Android Management API. The stated goal is to make MCP the standard way agents connect to the entire Google ecosystem. To start with, Google announced MCP servers for:

  • Google Maps: Instead of just raw data, the agent gets “Grounding Lite” to accurately answer location queries without hallucinating.
  • BigQuery: The agent can interpret schemas and execute queries in place, meaning massive datasets don’t need to be moved into the context window.
  • Google Kubernetes Engine (GKE): Agents can diagnose issues and optimize costs without needing to string together brittle CLI commands.

The Community Path: Privacy-First and Scriptable

While managed servers are excellent for scale and standardisation, there are some other notable approaches using local execution.

1. The Gemini CLI Workspace Extension

As detailed in a recent tutorial by Romin Irani, the official google-workspace extension for the Gemini CLI takes a privacy-first approach. Instead of running in the cloud, the MCP server runs locally on your machine.

It uses your own local OAuth token, meaning your data flow is strictly Google Cloud <-> User's Local Machine <-> Gemini CLI. This is ideal for tasks like searching Drive, checking your Calendar, or drafting Gmail replies where you want to ensure no third-party intermediary is processing your data.

2. The Power of Apps Script (gas-fakes)

For true customisability, Kanshi Tanaike has been pushing Google Apps Script. A recent proof-of-concept using the gas-fakes CLI demonstrates how to build MCP tools directly using Apps Script.

This is a big step for Apps Script developers. It allows you to write standard GAS code (e.g., DriveApp.getFilesByName()) and expose it as a tool to an MCP client like Gemini CLI or Google Antigravity. Because it uses the gas-fakes sandbox, you can execute these scripts locally, bridging the gap between your local AI agent and your cloud-based Workspace automation.

The Future: A Universal MCP Strategy?

We are likely moving toward a world where every Google product launch includes three things: API documentation, Client SDKs, and an Official MCP Server Endpoint.

For developers, the choice will come down to granularity and control:

  • Use Managed Servers when you need reliable, standard access to robust services like BigQuery or Maps with zero infrastructure overhead.
  • Use Community/Local Tools (like the Workspace extension or gas-fakes) when you need rapid prototyping, deep customization, or strict data privacy on your local machine.

The “Agentic” future is here, and whether you are using Google’s managed infrastructure or writing your own tools, the ecosystem is ready for you to build.

Bring Your AI Agents to Life Across Google Workspace with the New Travel Concierge Sample

This tutorial shows you how to publish AI agents to Google Workspace as Google Workspace add-ons, using Apps Script or HTTP endpoints. After you publish your add-on, your users can interact with the AI agents within their workflows.

Google has released a comprehensive new tutorial that demonstrates how to bridge the gap between complex AI agents and the daily workflows of Google Workspace users. The Travel Concierge sample uses the Agent Development Kit (ADK) to deploy a conversational, multi-agent AI directly into the Google Workspace platform.

While many AI demos focus on standalone chat interfaces, this solution stands out by implementing the agent as a Google Workspace Add-on. This means the AI isn’t just a browser tab; it is a helpful sidebar accessible directly within Gmail, Calendar, Drive, Docs, Sheets, and Slides, as well as a fully functioning Google Chat app.

Key Features for Developers

The sample provides a robust blueprint for developers looking to build “agentic” workflows. Key capabilities include:

  • Multi-Agent Architecture: The solution uses Vertex AI to manage a “Travel Concierge” that orchestrates sub-agents for specific tasks, utilising tools like the Google Maps Platform Places API and Google Search Grounding.
  • Context Awareness: In applications like Gmail, the agent can read the context of the currently selected email to help plan trips based on actual correspondence.
  • Rich User Interfaces: The add-on moves beyond simple text, utilising the Card framework to display rich widgets and interactive elements.
  • Cross-Platform Persistence: User sessions are managed by Vertex AI, ensuring that a conversation started in Google Chat can be seamlessly continued in a Google Doc sidebar.

Flexible Implementation

The Apps Script implementation demonstrates how to connect a standard Workspace Add-on to the powerful Vertex AI backend using UrlFetchApp. It serves as an excellent reference for handling synchronous execution limits and rendering complex agent responses within the constraints of the Add-on sidebar.

Whether you are looking to build a travel planner or a complex enterprise assistant, this sample provides the architectural patterns needed to bring your AI agents to where your users actually work.

Source: Plan travels with an AI agent accessible across Google Workspace | Google Workspace add-ons | Google for Developers

Mastering Workspace API Authentication in ADK Agents with a Reusable Pattern

For developers seasoned in the Google Workspace ecosystem, the promise of agentic AI is not just about automating tasks, but about creating intelligent, nuanced interactions with the services we know so well. Google’s Agent Development Kit (ADK) provides the foundation, but integrating the familiar, user-centric OAuth 2.0 flow into this new world requires a deliberate architectural approach.

This article explores the patterns for building a sophisticated, multi-capability agent. It explores why you might choose a custom tool implementation (ADK’s “Journey 2”) over auto-generation, and presents a reusable authentication pattern that you can apply to any Workspace API.

The full source code for this project is available in the companion GitHub repository. It’s designed as a high-fidelity local workbench, perfect for development, debugging, and rapid iteration.

The Architectural Crossroads: Journey 1 vs. Journey 2

When integrating a REST API with the ADK, you face a choice, as detailed in the official ADK authentication documentation:

  • Journey 1: The “Auto-Toolify” Approach. This involves using components like OpenAPIToolset or the pre-built GoogleApiToolSet tool sets. You provide an API specification, and the ADK instantly generates tools for the entire API surface. This is incredibly fast for prototyping but can lead to an agent that is “noisy” (exposing too many tools and scopes) and lacks the robustness to handle API-specific complexities like pagination.
  • Journey 2: The “Crafted Tool” Approach. This involves using FunctionTool to wrap your own Python functions. This is the path taken in this project. It requires more initial effort but yields a far superior agent for several reasons:
    • Control: This approach exposes only the high-level capabilities needed (e.g., search_all_chat_spaces), not every raw API endpoint.
    • Robustness: Logic can be built directly into the tools to handle real-world challenges like pagination, abstracting this complexity away from the LLM.
    • Efficiency: The tool can pre-process data from the API, returning a concise summary to the LLM and preserving its valuable context window.

Journey 2 was chosen because the goal is not just to call an API, but to provide the agent with a reliable, intelligent capability.

The Core Implementation: A Reusable Pattern

The cornerstone of this integration is a single, centralised function: get_credentials. This function lives in agent.py and is called by every tool that needs to access a protected resource.

It elegantly manages the entire lifecycle of an OAuth token within the ADK session by following a clear, four-step process:

  1. Check Cache: It first looks in the session state (tool_context.state) for a valid, cached token.
  2. Refresh: If a token exists but is expired, it uses the refresh token to get a new one and updates the cache.
  3. Check for Auth Response: If no token is found, it checks if the user has just completed the OAuth consent flow using tool_context.get_auth_response().
  4. Request Credentials: If all else fails, it formally requests credentials via tool_context.request_credential(), which signals the ADK runtime to pause and trigger the interactive user flow.
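The four steps above can be sketched as follows. This is a simplified, illustrative rendition, not the repository’s actual code: the tool_context names (state, get_auth_response(), request_credential()) come from the article, while the token helpers and state key are stand-in assumptions.

```python
# Sketch of the cached-token lifecycle described above.
TOKEN_KEY = "workspace_oauth_token"  # assumed session-state key

def refresh_token(token):
    """Illustrative stub: a real implementation calls the OAuth token endpoint."""
    return {**token, "expired": False}

def exchange_auth_response(auth_response, scopes):
    """Illustrative stub: a real implementation exchanges the callback URL."""
    return {"access_token": auth_response, "scopes": scopes, "expired": False}

def get_credentials(tool_context, scopes):
    # 1. Check cache: reuse a valid token stored in session state.
    cached = tool_context.state.get(TOKEN_KEY)
    if cached and not cached.get("expired"):
        return cached

    # 2. Refresh: renew an expired token that carries a refresh token.
    if cached and cached.get("refresh_token"):
        refreshed = refresh_token(cached)
        tool_context.state[TOKEN_KEY] = refreshed
        return refreshed

    # 3. Check for auth response: the user may have just finished consent.
    auth_response = tool_context.get_auth_response()
    if auth_response:
        creds = exchange_auth_response(auth_response, scopes)
        tool_context.state[TOKEN_KEY] = creds
        return creds

    # 4. Request credentials: pause the run and trigger the interactive flow.
    tool_context.request_credential(scopes)
    return None
```

Because every Workspace-specific detail is confined to the scopes argument, the same skeleton serves Drive, Calendar, or Sheets unchanged.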

This pattern is completely generic. You can see in the agent.py file how, by simply changing the SCOPES constant, you could reuse this exact function for Google Drive, Calendar, Sheets, or any other Workspace API.

Defining the Agent and Tools

With the authentication pattern defined, building the agent becomes a matter of composition. The agent.py file also defines the “Orchestrator/Worker” structure—a best practice that uses a fast, cheap model (gemini-2.5-flash) for routing and a more powerful, specialised model (gemini-2.5-pro) for analysis.

The Other Half: The Client-Side Auth Flow

The get_credentials function is only half the story. When it calls tool_context.request_credential(), it signals to the runtime that it needs user authentication. The cli.py script acts as that runtime, or “Agent Client.”

The cli.py script is responsible for the user-facing interaction. As you can see in the handle_agent_run function within cli.py, the script’s main loop does three key things:

  1. It iterates through agent events and uses a helper to check for the specific adk_request_credential function call.
  2. When it detects this request, it pauses, prints the authorisation URL for the user, and waits to receive the callback URL.
  3. It then bundles this callback URL into a FunctionResponse and sends it back to the agent, which can then resume the tool call, this time successfully.

This client-side loop is the essential counterpart to the get_credentials pattern.
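The client-side half can also be sketched. The following is a hedged JavaScript illustration that mirrors the Python cli.py logic described above; the event shape, field names, and `promptUser` callback are assumptions for illustration, not the ADK wire format:

```javascript
// Hedged sketch: scan agent events for the special credential-request call,
// show the auth URL to the user, and package the callback into a reply.
function handleAuthEvents(events, promptUser) {
  for (const event of events) {
    const call = (event.functionCalls || []).find(
      (c) => c.name === 'adk_request_credential'
    );
    if (!call) continue;
    // Pause: print the authorisation URL, wait for the pasted callback URL.
    const callbackUrl = promptUser(call.args.authUri);
    // Resume: bundle the callback into a FunctionResponse for the agent.
    return {
      functionResponse: { name: call.name, response: { callbackUrl } },
    };
  }
  return null; // no auth needed; normal event handling continues
}
```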

Production Considerations: Moving Beyond the Workbench

This entire setup is designed as a high-fidelity local workbench. Before moving to a production environment, several changes are necessary:

  • Persistent Sessions: The InMemorySessionService is ephemeral. For production, you must switch to a persistent service like DatabaseSessionService (backed by a database like PostgreSQL) or VertexAiSessionService. This is critical for remembering conversation history and, most importantly, for securely storing the user’s OAuth tokens.
  • Secure Credential Storage: While caching tokens in the session state is acceptable for local testing, production systems should encrypt the token data in the database or use a dedicated secret manager.
  • Runtime Environment: The cli.py script would be replaced by a web server framework (like FastAPI or Flask) if you are deploying this as a web application, or adapted to the specific requirements of a platform like Vertex AI Agent Engine.

By building with this modular, Journey 2 approach, you create agents that are not only more capable and robust but are also well-architected for the transition from local development to a production-grade deployment.

Explore the Code

Explore the full source code in the GitHub repository to see these concepts in action. The README.md file provides all the necessary steps to get the agent up and running on your local machine. By combining the conceptual overview in this article with the hands-on code, you’ll be well-equipped to build your own robust, production-ready agents with the Google ADK.

Source: GitHub – mhawksey/ADK-Workspace-User-oAuth-Example

Building Agentic Workflows: A 3-Part Series from Aryan Irani on ADK, Vertex AI, and Google Apps Script

 

Thanks to Google’s Agent Development Kit (ADK) and Vertex AI’s Agent Engine, building custom AI agents for Google Workspace is no longer just an idea. These frameworks let you build intelligent agents, deploy them at scale, and integrate them seamlessly into your Google Workspace tools - enabling a new era of agentic productivity.

In this three-part tutorial series, we’ll explore exactly how to do that.

The ability to build and integrate custom AI agents directly into Google Workspace is rapidly moving from a far-off idea to a practical reality. For developers looking for a clear, end-to-end example, Google Developer Expert Aryan Irani has published an excellent three-part tutorial series.

This comprehensive guide, which includes video tutorials for each part, offers a complete end-to-end journey for building agentic workflows. It starts by building a local ‘AI Auditor’ agent with the new Agent Development Kit (ADK)—programming it to verify claims with the Google Search tool. The series then guides developers through the entire cloud deployment process to Vertex AI Agent Engine, covering project setup and permissions. Finally, it ties everything together with Google Apps Script, providing the complete code (using UrlFetchApp and the OAuth2 for Apps Script library) to integrate the agent as a powerful, fact-checking tool inside Google Docs.

Why This Series Matters

This series is a fantastic, practical guide for any Google Workspace developer looking to bridge the gap between classic Apps Script solutions and Google’s powerful new generative AI tools. It provides a complete, end-to-end blueprint for building truly ‘agentic’ workflows inside the tools your organisation already uses.

A big thank you to Aryan Irani for creating and sharing this detailed resource with the developer community!


A Framework for Integrating Agentic AI into Google Workspace Add-ons

A fact-check custom function for Google Sheets to be used as a bound Apps Script project powered by a Vertex AI agent and Gemini model.

This sample demonstrates how you can use two powerful types of AI resources directly in your Google Sheets spreadsheets.

A new tutorial published by Google’s Pierrick Voulet, while ostensibly about creating a fact-checking tool in Google Sheets, offers something more valuable to developers: a clear framework for integrating powerful, multi-step Vertex AI Agents into Google Workspace solutions.

While the example is a FACT_CHECK custom function, the underlying architecture caught my eye as it provides a blueprint for connecting Google Workspace applications with the sophisticated reasoning capabilities of custom AI agents.

The Core Architectural Pattern

The solution uses Google Apps Script as a bridge between the user-facing application (Google Sheets) and the backend AI services on Google Cloud.

When a user calls the custom function, the script authenticates with a service account and makes a call to a deployed Vertex AI Agent. This agent then performs its multi-step reasoning. The result is then optionally passed to a Gemini model for final formatting before being returned to the Sheet.
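The flow described above can be sketched as a request builder. To be clear, this is not code from the tutorial: the `:query` endpoint path and `input` payload shape are assumptions based on the public Vertex AI reasoningEngines REST surface, and in Apps Script the request would be sent with UrlFetchApp carrying the service account’s access token.

```javascript
// Hedged sketch: build the REST request an Apps Script bridge might send to a
// deployed Vertex AI Agent Engine instance. Endpoint shape is an assumption.
function buildAgentQueryRequest(projectId, location, engineId, message) {
  const url =
    `https://${location}-aiplatform.googleapis.com/v1/projects/${projectId}` +
    `/locations/${location}/reasoningEngines/${engineId}:query`;
  return {
    url,
    options: {
      method: 'post',
      contentType: 'application/json',
      // The Authorization header (service account token) is added separately.
      payload: JSON.stringify({ input: { message } }),
    },
  };
}
```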

This pattern is highly adaptable for various use cases, allowing developers to bring advanced Agentic AI into their own solutions.

Key Components for Developers

For developers looking to adapt this framework, the tutorial outlines the essential components:

  • A configured Google Cloud Project with the Vertex AI API enabled.
  • A deployed ADK Agent on the Vertex AI Agent Engine.
  • An Apps Script project using a service account for secure, server-to-server authentication with Google Cloud.

The provided Apps Script code (Code.gs and AiVertex.js) serves as a robust starting point, handling the API calls to both the reasoning agent and a Gemini model for final output formatting.

Ultimately, the fact-checking tool serves as an excellent proof of concept. The true value for the developer community, however, lies in the architectural blueprint it provides. This tutorial offers a clear model for integrating multi-step AI reasoning into Google Workspace add-ons, opening the door to a new class of intelligent applications.

Source: Fact-check statements with an ADK AI agent and Gemini model  |  Apps Script  |  Google for Developers

Customising the Gemini CLI with Extensions: A gas-fakes Case Study

The Gemini CLI is a powerful tool for developers, but its true potential lies in its extensibility. I’m excited to share a project that showcases this flexibility: the gas-fakes Gemini CLI Extension. This extension is a case study in tailoring the Gemini CLI to your specific needs, particularly when it comes to secure software development in Google Apps Script.

At its core, this project addresses a critical security concern in the age of AI-driven development: how can you safely test and execute AI-generated code that requires broad access to your Google Workspace data? The answer lies in the pioneering work of Bruce Mcpherson on his gas-fakes library, which this extension integrates into a seamless and secure workflow, thanks to the invaluable contributions of Kanshi Tanaike. I’m looking forward to discussing this project in more detail with Bruce at the upcoming Google Workspace Developer Summit in Paris.

The Power of Gemini CLI Extensions

The ability to create extensions for the Gemini CLI opens up a world of possibilities for developers. By packaging together a collection of tools and resources, you can create a customised experience that is perfectly suited to your workflow. An extension can include three key components:

  • System Prompts (GEMINI.md): A GEMINI.md file allows you to provide the model with custom instructions and context, guiding its behaviour to ensure it generates code that aligns with your specific requirements.
  • Custom Commands: You can create custom commands to automate common tasks and streamline your development process.
  • MCP Tools: The Model Context Protocol (MCP) allows you to integrate external tools and services with the Gemini CLI, enabling powerful, interactive experiences.

The gas-fakes Extension: A Case Study in Secure Development

The gas-fakes Gemini CLI Extension is a practical example of how these components can be combined to create a powerful and secure development environment for Google Apps Script.

The extension tackles the challenge of safely executing AI-generated code by creating a sandboxed environment where scripts can be tested without granting them access to your Google account. Here’s how it works:

  • GEMINI.md: The GEMINI.md file provides the model with detailed instructions on how to use the gas-fakes library, ensuring that the generated code is compatible with the sandboxed environment.
  • Custom Commands: The extension includes custom commands like /gas:init and /gas:new that automate the process of setting up a new project and generating code.
  • MCP Tool: The packaged MCP tool allows the Gemini CLI to interact with the gas-fakes sandbox, enabling it to execute code and receive feedback in a secure environment. This extension also includes the new official Model Context Protocol (MCP) for Google Workspace Development to interact directly with Google Workspace APIs.

Getting Started

To get started with the gas-fakes extension, you’ll first need to have the Google Gemini CLI installed. Once that’s set up, you can install the extension with the following command:

gemini extensions install https://github.com/mhawksey/gas-fakes-ext

For more information on managing extensions, including uninstallation and updates, please see the official documentation.

Usage

Once the extension is installed, you can start a new sandboxed Apps Script project directly from your terminal.

First, create a new directory for your project, navigate into it, and start the Gemini CLI. From there, you can use the /gas:init command to scaffold a complete project structure, which includes all the necessary files for local development and testing.

With the project initialised, you can then use the /gas:new command to generate code from a natural language prompt. For example:

/gas:new "create a new Google Doc and write 'Hello, World!' to it"

This command generates the Apps Script code in src/Code.js and a corresponding runner script in run.js. From here, you can continue the conversation with the Gemini CLI to refine and build upon your code, testing each iteration locally in the secure, sandboxed environment.

This project structure is deliberate: the run.js file is your sandbox for testing, while the src folder contains the clean, production-ready Apps Script code. This separation makes it easy to use other command-line tools like clasp to push only the code in the /src directory to your online Apps Script project when you are ready to deploy.

Fine-Grained Security with Conversational Controls

Beyond creating new files, a common requirement is to have a script interact with existing documents in a user’s Google Drive. The gas-fakes extension provides a robust solution for this, and because it’s integrated into the Gemini CLI, you can configure these advanced security settings using natural language.

This conversational control is powered by the extension’s MCP tool, run-gas-fakes-test. When you ask Gemini to “whitelist this Google Doc for read access”, the model doesn’t write the configuration code itself. Instead, it calls this tool, translating your request into a set of structured parameters that the tool understands. The MCP tool then dynamically assembles and executes the run.js script with the precise security settings you requested. This abstraction is what makes the natural language interface so powerful.

For example, instead of requesting broad access to all of a user’s files, you can create a specific whitelist of file IDs that your script is allowed to interact with, specifying read-only or write access on a per-file basis. This granular approach ensures your script only ever touches the files it is supposed to.

For even tighter security, you can ask Gemini to:

  • Disable entire services: If your script only needs to work with Sheets, you can completely disable DriveApp and DocumentApp.
  • Whitelist specific methods: Lock down a service to only allow certain actions, for example, permitting DriveApp.getFiles() but blocking DriveApp.createFile().
  • Manage test artefacts: For debugging, you can disable the automatic cleanup feature to inspect any files created during a test run.

These advanced features provide developers with the confidence to build and test powerful automations, knowing that the execution is contained within a secure, predictable environment.

Conclusion

The Gemini CLI is more than just a command-line interface; it’s a powerful platform for creating customised and intelligent experiences. The gas-fakes Gemini CLI Extension is just one example of what is possible. I encourage you to explore the world of Gemini CLI Extensions and see what you can create.

Acknowledgements

This extension stands on the shoulders of giants. It directly builds upon the pioneering work of Bruce Mcpherson and his gas-fakes library. I’d also like to express my sincere gratitude to Kanshi Tanaike, whose work on the gas-fakes sandbox and MCP server has been instrumental in the development of this extension.

Source: GitHub – mhawksey/gas-fakes-ext

The Eject Button: How to Write Portable Apps Script Code for the Google Cloud

As Google Apps Script developers, we live in a world of incredible convenience. Services like DriveApp, SpreadsheetApp, and GmailApp are powerful, intuitive, and deeply integrated into Google Workspace. They allow us to build solutions with remarkable speed.

But this convenience comes with a trade-off: our code becomes tightly coupled to the Apps Script runtime. When you write DriveApp.getFiles(), you’re writing code that can only ever run inside Apps Script.

What happens when you hit a wall? Your script might need more than the 30-minute execution limit, require more memory than the shared environment provides, or need to handle a level of concurrency that Apps Script isn’t designed for. In these moments, you need an “eject button”—a way to lift your code out of Apps Script and into a more scalable environment like Google Cloud Run, without starting from scratch.

This article is a guide to a professional workflow that makes this possible. It’s about writing portable Apps Script code from day one, so you’re always ready for the cloud.

The Portability Problem: Built-in Services vs. Raw APIs

The core of the issue lies in how we interact with Google’s services.

  • Built-in Services (DriveApp, etc.): These are high-level abstractions. They provide incredibly useful helper functions (like getBlob()) that handle complex operations behind the scenes. However, they are a “black box” and exist only in Apps Script. For a fascinating look at the work involved in unpacking what these services do, check out Bruce Mcpherson’s gas-fakes project, which emulates these services for local testing.
  • Direct API Calls: The alternative is to interact directly with the underlying Google Drive API (or Sheets API, etc.). This gives you granular control but requires you to handle raw HTTP requests with UrlFetchApp, manage authentication tokens, and implement your own error handling like exponential backoff.
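The exponential backoff mentioned above is worth seeing in miniature. Here is a hedged, generic wrapper: the same idea works whether `fn` wraps a UrlFetchApp call in Apps Script or a fetch in Node. The injectable `sleep` parameter is a testing convenience, not part of any Google API:

```javascript
// Hedged sketch: retry a failing call with exponentially growing delays.
async function withBackoff(fn, { retries = 4, baseMs = 500, sleep } = {}) {
  const wait = sleep || ((ms) => new Promise((resolve) => setTimeout(resolve, ms)));
  let lastError;
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      return await fn(); // success: return immediately
    } catch (err) {
      lastError = err;
      if (attempt === retries) break;
      // Double the delay each time: 500ms, 1s, 2s, 4s...
      await wait(baseMs * 2 ** attempt);
    }
  }
  throw lastError; // all attempts exhausted
}
```

A production version would also inspect the error and only retry transient statuses such as 429 and 5xx.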

Let’s be clear: this isn’t a call to abandon the built-in services. Far from it. The built-in services are indispensable for anything that interacts directly with the user’s live session. Examples include creating custom menus, displaying dialogs with HtmlService, acting on an onEdit(e) event object, or manipulating the user’s cursor position. For these tasks, the built-in services are not just the best choice; they are often the only choice. The “portable philosophy” is about being strategic. It’s about identifying the core, data-processing engine of your script—the part you suspect might one day need more power or time—and deliberately building that piece for portability.

Writing portable code means finding a middle ground: the convenience of a high-level interface with the cross-platform nature of direct API calls.

The Solution: An Abstraction Layer with Client Libraries

The secret to portability is to abstract away the platform-specific parts of your code. Instead of using the built-in services, you can use client libraries that work in both Apps Script and a standard Node.js environment.

  1. In Apps Script: You can use the Apps Script libraries produced by the Google API Client Library Generator for Apps Script. This repository includes over 400 generated Apps Script client libraries for Google APIs that provide a familiar, object-oriented interface (e.g., drive.files.list()) but handle the UrlFetchApp and token logic for you.
  2. In Node.js (for Cloud Run): You use the official Google API Node.js Client Libraries (e.g., googleapis). These libraries are the industry standard and provide the exact same structural interface (e.g., drive.files.list()).

A Real-World Example: Bulk PDF Generation

Let’s imagine a common business need: a script that takes data from a Google Sheet to generate hundreds of personalized offer letters as PDFs from a Google Doc template. Doing this one-by-one in Apps Script would be slow and might time out. This is a perfect candidate for the “eject button.”

Here is how we can write a single, portable function for the core logic.

The Portable Core Logic

This function contains the business logic. It’s agnostic to its environment; it only needs an initialized driveClient and docsClient to do its work. To see what this looks like, view this Apps Script project, which already includes the Drive and Docs client libraries from the client library generator’s build directory.

/**
 * PORTABLE CORE LOGIC
 * This function is almost perfectly portable between Apps Script and Node.js.
 * @param {object} job The job details (templateId, replacements, etc.).
 * @param {object} driveClient An initialized Drive API client.
 * @param {object} docsClient An initialized Docs API client.
 * @param {object} logger A logger object (like `console`).
 */
async function _portablePdfGeneratorLogic(job, driveClient, docsClient, logger) {
  const { templateId, destinationFolderId, replacements, newFileName } = job;
  let newFileId = null;

  try {
    // 1. Copy the template document
    logger.log(`Copying template: ${templateId}`);
    // NOTE for Node.js: The response is often nested, e.g., `response.data.id`
    const copyResponse = await driveClient.files.copy({
      fileId: templateId,
      requestBody: { name: newFileName },
    });
    newFileId = copyResponse.id || copyResponse.data.id;

    // 2. Perform the text replacements in the new document
    logger.log(`Replacing text in new file: ${newFileId}`);
    await docsClient.documents.batchUpdate({
      documentId: newFileId,
      requestBody: { requests: replacements },
    });

    // 3. Export the document as a PDF
    logger.log(`Exporting file as PDF...`);
    // NOTE for Node.js: The response is a stream, not a blob.
    const pdfResponse = await driveClient.files.export({
      fileId: newFileId,
      mimeType: 'application/pdf',
    }, { responseType: 'stream' });

    // 4. Create the final PDF file in the destination folder
    logger.log(`Creating PDF file in folder: ${destinationFolderId}`);
    await driveClient.files.create({
      requestBody: {
        name: `${newFileName}.pdf`,
        parents: [destinationFolderId],
      },
      media: {
        mimeType: 'application/pdf',
        body: pdfResponse.body || pdfResponse.data, // Accommodate both runtimes
      },
      fields: 'id',
    });

    logger.log(`Successfully created PDF for ${newFileName}`);
    return { status: 'success' };

  } catch (error) {
    const errorMessage = error.response ? JSON.stringify(error.response.data) : error.message;
    logger.error(`[Core Logic] Failed: ${errorMessage}`);
    return { status: 'error', reason: errorMessage };
  } finally {
    // 5. Clean up the temporary Google Doc
    if (newFileId) {
      logger.log(`Cleaning up temporary file: ${newFileId}`);
      await driveClient.files.delete({ fileId: newFileId });
    }
  }
}
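A pleasant side effect of this dependency-injection style is testability: because the core logic only ever touches the clients it is handed, it can be unit-tested with simple fakes in either runtime. This hedged sketch demonstrates the idea with a trimmed-down stand-in for the full function (the fake client and `copyThenCleanUp` are illustrative, not part of the article’s code):

```javascript
// Hedged sketch: a fake Drive client records calls so tests can inspect them.
function makeFakeDriveClient(log) {
  return {
    files: {
      copy: async ({ fileId, requestBody }) => {
        log.push(['copy', fileId, requestBody.name]);
        return { id: 'copy-123' }; // Apps Script shape; Node nests under .data
      },
      delete: async ({ fileId }) => {
        log.push(['delete', fileId]);
      },
    },
  };
}

// Trimmed stand-in for the portable logic: copy a template, then clean up.
async function copyThenCleanUp(job, driveClient) {
  const res = await driveClient.files.copy({
    fileId: job.templateId,
    requestBody: { name: job.newFileName },
  });
  const newFileId = res.id || res.data.id; // accommodate both runtimes
  await driveClient.files.delete({ fileId: newFileId });
  return newFileId;
}
```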

The Platform-Specific “Bootstrap” Code

The only part that needs to change is the code that initializes the clients and calls the core logic.

In Apps Script (Code.gs for testing and delegation):

/**
 * This function is for testing the portable logic *within* Apps Script.
 */
function runPdfJobInAppsScript(job) {
  // 1. Initialize the Apps Script community client libraries
  // These libraries automatically use the user's token via ScriptApp.getOAuthToken()
  const driveClient = new Drive(); // Assumes Drive.gs is in the project
  const docsClient = new Docs();   // Assumes Docs.gs is in the project

  // 2. Call the portable logic
  _portablePdfGeneratorLogic(job, driveClient, docsClient, Logger);
}
/**
 * This function delegates a job to a Cloud Run service, passing the user's
 * OAuth token to extend the "run as user" paradigm.
 */
function delegateJobToCloudRun(job) {
  const CLOUD_RUN_URL = 'https://your-service-url.a.run.app';
  
  const options = {
    method: 'post',
    contentType: 'application/json',
    // Get the user's OAuth token and pass it in the Authorization header.
    headers: {
      Authorization: 'Bearer ' + ScriptApp.getOAuthToken(),
    },
    payload: JSON.stringify(job),
    muteHttpExceptions: true,
  };
  
  const response = UrlFetchApp.fetch(CLOUD_RUN_URL, options);
  Logger.log(response.getContentText());
}

In Node.js (index.js for Cloud Run):

In our Node.js server, we’ll use the OAuth2Client from the google-auth-library. While many Cloud Run examples use GoogleAuth for service accounts, OAuth2Client is specifically designed to work with user-provided OAuth tokens, allowing us to seamlessly continue acting on behalf of the original user.

// Import OAuth2Client instead of GoogleAuth for this purpose
const { google } = require('googleapis');
const { OAuth2Client } = require('google-auth-library'); // 👈 Change this line
const express = require('express');

const app = express();
app.use(express.json());
const PORT = process.env.PORT || 8080;

// --- Main Express Route Handler ---
app.post('/', async (req, res) => {
  // 1. Extract the token from the Authorization header
  const authHeader = req.headers.authorization;
  if (!authHeader || !authHeader.startsWith('Bearer ')) {
    return res.status(401).send('Unauthorized: Missing or invalid Authorization header.');
  }
  const userToken = authHeader.split(' ')[1];

  const { job } = req.body;
  if (!job) {
    return res.status(400).send('Invalid payload. Expected JSON with a "job" object.');
  }

  try {
    // 2. Create an OAuth2Client instance
    const auth = new OAuth2Client();

    // 3. Set the credentials using the token from Apps Script (or swap out for a service account config)
    auth.setCredentials({
      access_token: userToken,
    });

    // 4. Initialize the Google clients with the user-authed client
    const driveClient = google.drive({ version: 'v3', auth });
    const docsClient = google.docs({ version: 'v1', auth });

    // Now, any calls made with driveClient or docsClient will act on
    // behalf of the user who ran the Apps Script. 🚀
    const result = await _portablePdfGeneratorLogic(job, driveClient, docsClient, console);

    if (result.status === 'success') {
      res.status(200).send('PDF generated successfully.');
    } else {
      res.status(500).send(`Failed to generate PDF: ${result.reason}`);
    }
  } catch (err) {
    console.error(err);
    // Check for authentication errors specifically
    if (err.code === 401 || err.code === 403) {
      res.status(401).send('Authentication error with Google APIs. The token may be expired or invalid.');
    } else {
      res.status(500).send('An unexpected error occurred.');
    }
  }
});

app.listen(PORT, () => {
  console.log(`PDF generator service listening on port ${PORT}`);
  console.log('Ready to receive a test request...');
});

// (Paste the _portablePdfGeneratorLogic function here)

Conclusion: Start Portable, Scale with Confidence

The “eject button” is your strategic advantage, transforming Apps Script from just a scripting environment into the first stage of a professional development lifecycle. By separating your core logic from the convenient but platform-specific built-in services, you give your applications a future-proof foundation.

The key takeaway is simple: write your business logic for the Google APIs, not for the Apps Script runtime.

Your Next Step: The next time you start a project in Apps Script, challenge yourself to write just one core function in this portable style. You’ll not only prepare your code for future scaling but also gain a deeper understanding of the Google APIs themselves.

Apps Script Engine v2.0 is Here! Your Workflow Just Got a TypeScript Power-Up 🚀

 

You heard it right. This is the update you’ve been waiting for. Apps Script Engine now has full-blown TypeScript support! 🤯

We’re talking server-side .gs files and client-side HtmlService code. The whole shebang. While clasp has been on-again, off-again with its TypeScript support, we decided to go all in and deliver the robust, seamless experience you deserve.

Getting started is ridiculously simple. Just tack on the --ts flag during installation:

npx apps-script-engine [directory] --ts

Why TypeScript is a Game-Changer for Apps Script

For larger or more complex Apps Script projects, the introduction of TypeScript offers substantial benefits that can improve code quality and developer productivity:

  • Static Typing: Helps catch common errors during development rather than at runtime, preventing entire classes of bugs related to data types.
  • Enhanced Autocompletion: Provides intelligent code suggestions and autocompletion in the IDE, speeding up the development process.
  • Improved Code Readability: Makes code easier to understand and maintain, as types explicitly define the data structures being used.

What’s New in Version 2.0?

This release is packed with features that modernise the Apps Script development experience:

  • Full TypeScript Support: Write modern, type-safe JavaScript for both frontend and backend code.
  • ES6+ Modules: Organise your code into clean, reusable modules.
  • Frontend Frameworks: Comes with out-of-the-box support for Alpine.js and Tailwind CSS (including Daisy UI).
  • NPM Modules: Leverage the vast ecosystem of NPM modules in your server-side and client-side code.
  • Robust Mocking: The mocking system for google.script.run has been re-engineered to allow for easy simulation of both success and failure handlers during local development.
  • Migration to Vitest: The testing framework has been switched from Jest to Vitest, offering faster performance and a simpler configuration.
  • Gemini CLI Integration: A new GEMINI.md file is included in the project template, providing the Gemini CLI with the necessary context to assist with coding and debugging tasks.
  • CI/CD and Environment Management: The engine includes built-in support for GitHub Actions workflows and managing multiple deployment environments (DEV, UAT, PROD).
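To give a feel for the re-engineered google.script.run mocking mentioned above, here is a hedged sketch of how such a mock can simulate both success and failure handlers locally. This is an illustration of the technique, not Apps Script Engine’s actual implementation:

```javascript
// Hedged sketch: a chainable mock of google.script.run for local testing.
// Server functions run synchronously; handlers fire like the real API's would.
function createScriptRunMock(serverFns) {
  function makeRunner(onSuccess, onFailure) {
    return new Proxy({}, {
      get(_, prop) {
        if (prop === 'withSuccessHandler') return (fn) => makeRunner(fn, onFailure);
        if (prop === 'withFailureHandler') return (fn) => makeRunner(onSuccess, fn);
        // Any other property is treated as a server-side function name.
        return (...args) => {
          try {
            const result = serverFns[prop](...args);
            if (onSuccess) onSuccess(result);
          } catch (err) {
            if (onFailure) onFailure(err);
          }
        };
      },
    });
  }
  return makeRunner(null, null);
}
```

In the browser during local development, such a mock can simply be assigned to `google.script.run` so client code runs unchanged.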

This update positions Apps Script Engine as a powerful tool for developers who want to apply modern web development practices to the Google Workspace platform.

Source: Apps Script Engine v2.0 is Here! Your Workflow Just Got a TypeScript Power-Up 🚀