A fact-check custom function for Google Sheets, built as a bound Apps Script project powered by a Vertex AI agent and a Gemini model.
This sample demonstrates how you can bring two powerful types of AI resources directly into your Google Sheets spreadsheets.
A new tutorial published by Google’s Pierrick Voulet, while ostensibly about creating a fact-checking tool in Google Sheets, offers something more valuable to developers: a clear framework for integrating powerful, multi-step Vertex AI Agents into Google Workspace solutions.
While the example is a FACT_CHECK custom function, the underlying architecture caught my eye as it provides a blueprint for connecting Google Workspace applications with the sophisticated reasoning capabilities of custom AI agents.
The Core Architectural Pattern
The solution uses Google Apps Script as a bridge between the user-facing application (Google Sheets) and the backend AI services on Google Cloud.
When a user calls the custom function, the script authenticates with a service account and makes a call to a deployed Vertex AI Agent. This agent then performs its multi-step reasoning. The result is then optionally passed to a Gemini model for final formatting before being returned to the Sheet.
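As a rough sketch of this flow (not the tutorial’s actual code: the endpoint constant and token helper below are hypothetical placeholders), the custom function could look something like this:
function FACT_CHECK(claim) {
  // AGENT_ENDPOINT and getServiceAccountToken_() are hypothetical placeholders
  // for the deployed Agent Engine URL and the tutorial's service-account auth.
  const response = UrlFetchApp.fetch(AGENT_ENDPOINT, {
    method: 'post',
    contentType: 'application/json',
    headers: { Authorization: 'Bearer ' + getServiceAccountToken_() },
    payload: JSON.stringify({ input: claim }),
    muteHttpExceptions: true,
  });
  // The agent's answer can then be passed to a Gemini model for final formatting.
  return JSON.parse(response.getContentText()).output;
}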
This pattern is highly adaptable for various use cases, allowing developers to bring advanced agentic AI into their own solutions.
Key Components for Developers
For developers looking to adapt this framework, the tutorial outlines the essential components:
A configured Google Cloud Project with the Vertex AI API enabled.
A deployed ADK Agent on the Vertex AI Agent Engine.
An Apps Script project using a service account for secure, server-to-server authentication with Google Cloud.
The provided Apps Script code (Code.gs and AiVertex.js) serves as a robust starting point, handling the API calls to both the reasoning agent and a Gemini model for final output formatting.
Ultimately, the fact-checking tool serves as an excellent proof of concept. The true value for the developer community, however, lies in the architectural blueprint it provides. This tutorial offers a clear model for integrating multi-step AI reasoning into Google Workspace add-ons, opening the door to a new class of intelligent applications.
Learn how to use the Append method in the Google Sheets Advanced Service in Google Apps Script and see how it differs from the SpreadsheetApp option.
While most Google Apps Script developers are familiar with the standard appendRow() method, it does come with its limitations, namely being restricted to a single row and always appending after the very last row with content on a sheet. For those looking for more control and flexibility, community expert Scott Donald (aka Yagisanatode) has published an excellent tutorial on using the spreadsheets.values.append() method available through the Google Sheets API Advanced Service.
Scott’s guide provides a deep dive into the powerful features this method unlocks, offering a more versatile approach to handling data in your Google Sheets.
The Power of the Append Method
In his tutorial, Scott highlights several key advantages over the traditional SpreadsheetApp service:
Intelligently append to a table: The API automatically finds the first empty row after a data table within a specified range. This allows you to append new rows directly to the correct location in a single API call, without first needing to find the last row of content.
Choose Your Insert Option: The API allows you to decide whether you want to INSERT_ROWS, which pushes existing data down, or OVERWRITE the empty cells below your target table.
Control Value Input: You can specify USER_ENTERED to have Sheets parse the data as if it were typed in by a user (which processes formulas and formats dates), or RAW to insert the values exactly as they are in your array.
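To make these options concrete, here is a minimal sketch of the call via the Advanced Service (the sheet name, range, and values are illustrative):
function appendToTable() {
  const ss = SpreadsheetApp.getActiveSpreadsheet();
  const valueRange = { values: [['Ada', 'Lovelace', 1815]] };
  // 'Sheet1!A1:C' is the range to search for a data table; the API appends
  // to the first empty row after it.
  Sheets.Spreadsheets.Values.append(valueRange, ss.getId(), 'Sheet1!A1:C', {
    valueInputOption: 'USER_ENTERED', // parse values as if typed by a user
    insertDataOption: 'INSERT_ROWS'   // push existing data down rather than overwrite
  });
}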
Scott’s tutorial provides a detailed breakdown of each parameter and includes a helpful video walkthrough and a starter sheet for you to get hands-on with the concepts. He also notes some current documentation errors and bugs, which makes this a valuable resource for anyone looking to implement this feature.
A big thank you to Scott for sharing this insightful guide with the community!
The Gemini CLI is a powerful tool for developers, but its true potential lies in its extensibility. I’m excited to share a project that showcases this flexibility: the gas-fakes Gemini CLI Extension. This extension is a case study in tailoring the Gemini CLI to your specific needs, particularly when it comes to secure software development in Google Apps Script.
At its core, this project addresses a critical security concern in the age of AI-driven development: how can you safely test and execute AI-generated code that requires broad access to your Google Workspace data? The answer lies in the pioneering work of Bruce Mcpherson on his gas-fakes library, which this extension integrates into a seamless and secure workflow, thanks to the invaluable contributions of Kanshi Tanaike. I’m looking forward to discussing this project in more detail with Bruce at the upcoming Google Workspace Developer Summit in Paris.
The Power of Gemini CLI Extensions
The ability to create extensions for the Gemini CLI opens up a world of possibilities for developers. By packaging together a collection of tools and resources, you can create a customised experience that is perfectly suited to your workflow. An extension can include three key components:
System Prompts (GEMINI.md): A GEMINI.md file allows you to provide the model with custom instructions and context, guiding its behaviour to ensure it generates code that aligns with your specific requirements.
Custom Commands: You can create custom commands to automate common tasks and streamline your development process.
MCP Tools: The Model Context Protocol (MCP) allows you to integrate external tools and services with the Gemini CLI, enabling powerful, interactive experiences.
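For context, a Gemini CLI extension is described by a gemini-extension.json manifest that ties these components together; a minimal sketch (all names illustrative) looks roughly like this:
{
  "name": "my-extension",
  "version": "1.0.0",
  "contextFileName": "GEMINI.md",
  "mcpServers": {
    "my-tool": {
      "command": "node",
      "args": ["mcp-server.js"]
    }
  }
}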
The gas-fakes Extension: A Case Study in Secure Development
The gas-fakes Gemini CLI Extension is a practical example of how these components can be combined to create a powerful and secure development environment for Google Apps Script.
The extension tackles the challenge of safely executing AI-generated code by creating a sandboxed environment where scripts can be tested without granting them access to your Google account. Here’s how it works:
GEMINI.md: The GEMINI.md file provides the model with detailed instructions on how to use the gas-fakes library, ensuring that the generated code is compatible with the sandboxed environment.
Custom Commands: The extension includes custom commands like /gas:init and /gas:new that automate the process of setting up a new project and generating code.
MCP Tool: The packaged MCP tool allows the Gemini CLI to interact with the gas-fakes sandbox, enabling it to execute code and receive feedback in a secure environment. This extension also includes the new official Model Context Protocol (MCP) for Google Workspace Development to interact directly with Google Workspace APIs.
Getting Started
To get started with the gas-fakes extension, you’ll first need to have the Google Gemini CLI installed. Once that’s set up, you can install the extension with the following command:
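gemini extensions install <extension-repo-url>
Here, <extension-repo-url> is a placeholder for the gas-fakes extension’s repository URL, which you’ll find in the project’s README.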
For more information on managing extensions, including uninstallation and updates, please see the official documentation.
Usage
Once the extension is installed, you can start a new sandboxed Apps Script project directly from your terminal.
First, create a new directory for your project, navigate into it, and start the Gemini CLI. From there, you can use the /gas:init command to scaffold a complete project structure, which includes all the necessary files for local development and testing.
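For example (the directory name is illustrative):
mkdir my-gas-project && cd my-gas-project
gemini
Then, inside the Gemini CLI session:
/gas:init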
With the project initialised, you can then use the /gas:new command to generate code from a natural language prompt. For example:
/gas:new "create a new Google Doc and write 'Hello, World!' to it"
This command generates the Apps Script code in src/Code.js and a corresponding runner script in run.js. From here, you can continue the conversation with the Gemini CLI to refine and build upon your code, testing each iteration locally in the secure, sandboxed environment.
This project structure is deliberate: the run.js file is your sandbox for testing, while the src folder contains the clean, production-ready Apps Script code. This separation makes it easy to use other command-line tools like clasp to push only the code in the /src directory to your online Apps Script project when you are ready to deploy.
Fine-Grained Security with Conversational Controls
Beyond creating new files, a common requirement is to have a script interact with existing documents in a user’s Google Drive. The gas-fakes extension provides a robust solution for this, and because it’s integrated into the Gemini CLI, you can configure these advanced security settings using natural language.
This conversational control is powered by the extension’s MCP tool, run-gas-fakes-test. When you ask Gemini to “whitelist this Google Doc for read access”, the model doesn’t write the configuration code itself. Instead, it calls this tool, translating your request into a set of structured parameters that the tool understands. The MCP tool then dynamically assembles and executes the run.js script with the precise security settings you requested. This abstraction is what makes the natural language interface so powerful.
For example, instead of requesting broad access to all of a user’s files, you can create a specific whitelist of file IDs that your script is allowed to interact with, specifying read-only or write access on a per-file basis. This granular approach ensures your script only ever touches the files it is supposed to.
For even tighter security, you can ask Gemini to:
Disable entire services: If your script only needs to work with Sheets, you can completely disable DriveApp and DocumentApp.
Whitelist specific methods: Lock down a service to only allow certain actions, for example, permitting DriveApp.getFiles() but blocking DriveApp.createFile().
Manage test artefacts: For debugging, you can disable the automatic cleanup feature to inspect any files created during a test run.
These advanced features provide developers with the confidence to build and test powerful automations, knowing that the execution is contained within a secure, predictable environment.
Conclusion
The Gemini CLI is more than just a command-line interface; it’s a powerful platform for creating customised and intelligent experiences. The gas-fakes Gemini CLI Extension is just one example of what is possible. I encourage you to explore the world of Gemini CLI Extensions and see what you can create.
Acknowledgements
This extension stands on the shoulders of giants. It directly builds upon the pioneering work of Bruce Mcpherson and his gas-fakes library. I’d also like to express my sincere gratitude to Kanshi Tanaike, whose work on the gas-fakes sandbox and MCP server has been instrumental in the development of this extension.
When and how to use Application Default Credentials for Google API auth, with example scripts to set them up and example code to consume them.
In my recent post, ‘The Eject Button’, I explored the idea of writing Apps Script code that can be easily moved to other environments like Google Cloud. This was inspired by the seamless local development workflows available in other languages, which raises the question: how can Apps Script developers replicate that experience?
A big piece of that puzzle is authentication. How do you securely test code on your local machine that needs to talk to Google APIs?
Google Workspace Developer Expert Bruce Mcpherson provides the answer in his excellent guide, ‘Application Default Credentials with Google Cloud and Workspace APIs’. His post is the perfect starting point for any developer looking to streamline their workflow. The key takeaway is that by using the Google Cloud CLI, your local code can securely use your own credentials to access Google APIs.
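As a minimal sketch of that flow, assuming the Google Cloud CLI is installed and the scopes are adjusted to your own needs, setup and consumption can look like this:
gcloud auth application-default login \
  --scopes=https://www.googleapis.com/auth/cloud-platform,https://www.googleapis.com/auth/spreadsheets

// Node.js: google-auth-library discovers the ADC credentials automatically.
const { google } = require('googleapis');
const auth = new google.auth.GoogleAuth({
  scopes: ['https://www.googleapis.com/auth/spreadsheets'],
});
const sheets = google.sheets({ version: 'v4', auth });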
When you combine ADC for authenticating real API calls with his gas-fakes mocking library for simulating Apps Script services, you have a powerful toolkit that brings a professional development cycle to the Apps Script world.
If you’re looking to level up your skills and knowledge, I highly recommend diving into Bruce’s article.
The third quarter of 2025 has been a busy one, with a host of updates across various APIs. This quarter, there has been a strong focus on enhancing app authentication, improving API capabilities, and providing more granular control for Google Drive. Here’s a roundup of the key updates from July, August, and September that you need to know about.
Chat API
The Chat API has received a number of significant updates, with a strong emphasis on expanding the capabilities of Chat apps through app authentication. In a new developer preview, several methods, including those for getting and listing space events, now support app authentication with administrator approval. This allows apps to act on behalf of a user who has granted consent, opening up new possibilities for automation and integration.
In addition to the new authentication features, the Chat API now allows for the creation of carousels in card messages, providing a richer and more interactive user experience. We’ve also seen an increase in per-space quota limits for writing messages and reactions, which will be a welcome change for high-traffic apps.
Admin SDK
The Admin SDK has seen several important updates in Q3, particularly within the Reports API. Developers can now get activity events for Gmail and Gemini in Workspace Apps, providing greater insight into how these tools are being used within an organisation.
A key feature for Drive management is the introduction of activity events for pre-fetched content, allowing administrators to better monitor and understand data access patterns.
Gmail API
The Gmail API has introduced Deal Cards in the Promotions tab, allowing developers to create more engaging and actionable promotional content. For organisations using client-side encryption, the API now supports smart cards, enhancing security for sensitive communications.
Google Drive API
A significant change has been made to how download, print, and copy restrictions are applied, giving owners and organisers more granular control over content. Previously, these restrictions could only be applied to users with reader roles, but now owner and organiser roles can extend these restrictions to users with writer permissions as well.
For individual files, developers should now use the itemDownloadRestriction field, with the corresponding capabilities.canChangeItemDownloadRestriction field available to check modification permissions. Similar controls have been introduced for shared drives, using the downloadRestriction field within the Restrictions object. While the previous copyRequiresWriterPermission field remains, its functionality has changed, and Google recommends using the new system to manage content restrictions.
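As a hedged sketch of what the file-level change could look like with the Drive Advanced Service (the nesting of the new fields follows the current API reference; verify against the documentation before relying on it):
// Restrict download, print, and copy for writers as well as readers.
Drive.Files.update({
  downloadRestrictions: {
    itemDownloadRestriction: {
      restrictedForReaders: true,
      restrictedForWriters: true
    }
  }
}, fileId);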
For developers working with events, the Drive API, in conjunction with the Google Workspace Events API, now supports subscriptions to a variety of Drive events, including file additions, moves, edits, and access proposals. This is a powerful feature for building reactive and real-time applications.
Google Workspace Events API
The Events API has been a major focus this quarter, with new features for both Chat and Drive. As mentioned above, the ability to subscribe to Drive events is a significant step forward, enabling developers to build applications that respond in real time to changes in a user’s Drive.
Google Workspace Add-ons
The big news for add-on developers is the ability to build Google Chat apps as Google Workspace Add-ons, which is now generally available. This provides a unified framework for extending the functionality of Google Workspace applications.
Key Deprecations and Breaking Changes
As with any quarter, there have been some deprecations and breaking changes to be aware of:
Google Apps Script: The advanced service for the Google Analytics Management API and Reporting API is deprecated and has been replaced by the Google Analytics Data API Advanced Service.
As Google Apps Script developers, we live in a world of incredible convenience. Services like DriveApp, SpreadsheetApp, and GmailApp are powerful, intuitive, and deeply integrated into Google Workspace. They allow us to build solutions with remarkable speed.
But this convenience comes with a trade-off: our code becomes tightly coupled to the Apps Script runtime. When you write DriveApp.getFiles(), you’re writing code that can only ever run inside Apps Script.
What happens when you hit a wall? Your script might need more than the 30-minute execution limit, require more memory than the shared environment provides, or need to handle a level of concurrency that Apps Script isn’t designed for. In these moments, you need an “eject button”—a way to lift your code out of Apps Script and into a more scalable environment like Google Cloud Run, without starting from scratch.
This article is a guide to a professional workflow that makes this possible. It’s about writing portable Apps Script code from day one, so you’re always ready for the cloud.
The Portability Problem: Built-in Services vs. Raw APIs
The core of the issue lies in how we interact with Google’s services.
Built-in Services (DriveApp, etc.): These are high-level abstractions. They provide incredibly useful helper functions (like getBlob()) that handle complex operations behind the scenes. However, they are a “black box” and exist only in Apps Script. For a fascinating look at the work involved in unpacking what these services do, check out Bruce Mcpherson’s gas-fakes project, which emulates these services for local testing.
Direct API Calls: The alternative is to interact directly with the underlying Google Drive API (or Sheets API, etc.). This gives you granular control but requires you to handle raw HTTP requests with UrlFetchApp, manage authentication tokens, and implement your own error handling like exponential backoff.
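For illustration, here is what a raw call to list Drive files looks like without a client library (retry and paging logic omitted):
function listFilesRaw() {
  const response = UrlFetchApp.fetch('https://www.googleapis.com/drive/v3/files?pageSize=10', {
    headers: { Authorization: 'Bearer ' + ScriptApp.getOAuthToken() },
    muteHttpExceptions: true,
  });
  // You own all the plumbing: status checks, retries, and pagination.
  return JSON.parse(response.getContentText());
}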
Let’s be clear: this isn’t a call to abandon the built-in services. Far from it. The built-in services are indispensable for anything that interacts directly with the user’s live session. Examples include creating custom menus, displaying dialogs with HtmlService, acting on an onEdit(e) event object, or manipulating the user’s cursor position. For these tasks, the built-in services are not just the best choice; they are often the only choice. The “portable philosophy” is about being strategic. It’s about identifying the core, data-processing engine of your script—the part you suspect might one day need more power or time—and deliberately building that piece for portability.
Writing portable code means finding a middle ground: the convenience of a high-level interface with the cross-platform nature of direct API calls.
The Solution: An Abstraction Layer with Client Libraries
The secret to portability is to abstract away the platform-specific parts of your code. Instead of using the built-in services, you can use client libraries that work in both Apps Script and a standard Node.js environment.
In Apps Script: You can use libraries from the Google API Client Library Generator for Apps Script. This repository includes over 400 generated Apps Script client libraries for Google APIs that provide a familiar, object-oriented interface (e.g., drive.files.list()) while handling the UrlFetchApp and token logic for you.
In Node.js (for Cloud Run): You use the official Google API Node.js Client Libraries (e.g., googleapis). These libraries are the industry standard and provide the exact same structural interface (e.g., drive.files.list()).
A Real-World Example: Bulk PDF Generation
Let’s imagine a common business need: a script that takes data from a Google Sheet to generate hundreds of personalized offer letters as PDFs from a Google Doc template. Doing this one-by-one in Apps Script would be slow and might time out. This is a perfect candidate for the “eject button.”
Here is how we can write a single, portable function for the core logic.
The Portable Core Logic
This function contains the business logic. It’s agnostic to its environment; it only needs an initialized driveClient and docsClient to do its work. To ‘see’ what this looks like, view this Apps Script project, which already includes the Drive and Docs client libraries from the client library generator’s build directory.
/**
* PORTABLE CORE LOGIC
* This function is almost perfectly portable between Apps Script and Node.js.
* @param {object} job The job details (templateId, replacements, etc.).
* @param {object} driveClient An initialized Drive API client.
* @param {object} docsClient An initialized Docs API client.
* @param {object} logger A logger object (like `console`).
*/
async function _portablePdfGeneratorLogic(job, driveClient, docsClient, logger) {
const { templateId, destinationFolderId, replacements, newFileName } = job;
let newFileId = null;
try {
// 1. Copy the template document
logger.log(`Copying template: ${templateId}`);
// NOTE for Node.js: The response is often nested, e.g., `response.data.id`
const copyResponse = await driveClient.files.copy({
fileId: templateId,
requestBody: { name: newFileName },
});
newFileId = copyResponse.id || copyResponse.data.id;
// 2. Perform the text replacements in the new document
logger.log(`Replacing text in new file: ${newFileId}`);
await docsClient.documents.batchUpdate({
documentId: newFileId,
requestBody: { requests: replacements },
});
// 3. Export the document as a PDF
logger.log(`Exporting file as PDF...`);
// NOTE for Node.js: The response is a stream, not a blob.
const pdfResponse = await driveClient.files.export({
fileId: newFileId,
mimeType: 'application/pdf',
}, { responseType: 'stream' });
// 4. Create the final PDF file in the destination folder
logger.log(`Creating PDF file in folder: ${destinationFolderId}`);
await driveClient.files.create({
requestBody: {
name: `${newFileName}.pdf`,
parents: [destinationFolderId],
},
media: {
mimeType: 'application/pdf',
body: pdfResponse.body || pdfResponse.data, // Accommodate both runtimes
},
fields: 'id',
});
logger.log(`Successfully created PDF for ${newFileName}`);
return { status: 'success' };
} catch (error) {
const errorMessage = error.response ? JSON.stringify(error.response.data) : error.message;
logger.error(`[Core Logic] Failed: ${errorMessage}`);
return { status: 'error', reason: errorMessage };
} finally {
// 5. Clean up the temporary Google Doc
if (newFileId) {
logger.log(`Cleaning up temporary file: ${newFileId}`);
await driveClient.files.delete({ fileId: newFileId });
}
}
}
The Platform-Specific “Bootstrap” Code
The only part that needs to change is the code that initializes the clients and calls the core logic.
In Apps Script (Code.gs for testing and delegation):
/**
* This function is for testing the portable logic *within* Apps Script.
*/
function runPdfJobInAppsScript(job) {
// 1. Initialize the Apps Script community client libraries
// These libraries automatically use the user's token via ScriptApp.getOAuthToken()
const driveClient = new Drive(); // Assumes Drive.gs is in the project
const docsClient = new Docs(); // Assumes Docs.gs is in the project
// 2. Call the portable logic
_portablePdfGeneratorLogic(job, driveClient, docsClient, Logger);
}
/**
* This function delegates a job to a Cloud Run service, passing the user's
* OAuth token to extend the "run as user" paradigm.
*/
function delegateJobToCloudRun(job) {
const CLOUD_RUN_URL = 'https://your-service-url.a.run.app';
const options = {
method: 'post',
contentType: 'application/json',
// Get the user's OAuth token and pass it in the Authorization header.
headers: {
Authorization: 'Bearer ' + ScriptApp.getOAuthToken(),
},
payload: JSON.stringify(job),
muteHttpExceptions: true,
};
const response = UrlFetchApp.fetch(CLOUD_RUN_URL, options);
Logger.log(response.getContentText());
}
In Node.js (index.js for Cloud Run):
In our Node.js server, we’ll use the OAuth2Client from the google-auth-library. While many Cloud Run examples use GoogleAuth for service accounts, OAuth2Client is specifically designed to work with user-provided OAuth tokens, allowing us to seamlessly continue acting on behalf of the original user.
// Import OAuth2Client instead of GoogleAuth for this purpose
const { google } = require('googleapis');
const { OAuth2Client } = require('google-auth-library'); // 👈 Change this line
const express = require('express');
const app = express();
app.use(express.json());
const PORT = process.env.PORT || 8080;
// --- Main Express Route Handler ---
app.post('/', async (req, res) => {
// 1. Extract the token from the Authorization header
const authHeader = req.headers.authorization;
if (!authHeader || !authHeader.startsWith('Bearer ')) {
return res.status(401).send('Unauthorized: Missing or invalid Authorization header.');
}
const userToken = authHeader.split(' ')[1];
const { job } = req.body;
if (!job) {
return res.status(400).send('Invalid payload. Expected JSON with a "job" object.');
}
try {
// 2. Create an OAuth2Client instance
const auth = new OAuth2Client();
// 3. Set the credentials using the token from Apps Script (or swap out for a service account config)
auth.setCredentials({
access_token: userToken,
});
// 4. Initialize the Google clients with the user-authed client
const driveClient = google.drive({ version: 'v3', auth });
const docsClient = google.docs({ version: 'v1', auth });
// Now, any calls made with driveClient or docsClient will act on
// behalf of the user who ran the Apps Script. 🚀
const result = await _portablePdfGeneratorLogic(job, driveClient, docsClient, console);
if (result.status === 'success') {
res.status(200).send('PDF generated successfully.');
} else {
res.status(500).send(`Failed to generate PDF: ${result.reason}`);
}
} catch (err) {
console.error(err);
// Check for authentication errors specifically
if (err.code === 401 || err.code === 403) {
res.status(401).send('Authentication error with Google APIs. The token may be expired or invalid.');
} else {
res.status(500).send('An unexpected error occurred.');
}
}
});
app.listen(PORT, () => {
console.log(`PDF generator service listening on port ${PORT}`);
console.log('Ready to receive a test request...');
});
// (Paste the _portablePdfGeneratorLogic function here)
Conclusion: Start Portable, Scale with Confidence
The “eject button” is your strategic advantage, transforming Apps Script from just a scripting environment into the first stage of a professional development lifecycle. By separating your core logic from the convenient but platform-specific built-in services, you give your applications a future-proof foundation.
The key takeaway is simple: write your business logic for the Google APIs, not for the Apps Script runtime.
Your Next Step: The next time you start a project in Apps Script, challenge yourself to write just one core function in this portable style. You’ll not only prepare your code for future scaling but also gain a deeper understanding of the Google APIs themselves.
You heard it right. This is the update you’ve been waiting for. Apps Script Engine now has full-blown TypeScript support! 🤯
We’re talking server-side .gs files and client-side HtmlService code. The whole shebang. While clasp has been on-again, off-again with its TypeScript support, we decided to go all in and deliver the robust, seamless experience you deserve.
Getting started is ridiculously simple. Just tack on the --ts flag during installation:
npx apps-script-engine [directory] --ts
Why TypeScript is a Game-Changer for Apps Script
For larger or more complex Apps Script projects, the introduction of TypeScript offers substantial benefits that can improve code quality and developer productivity:
Static Typing: Helps catch common errors during development rather than at runtime, preventing entire classes of bugs related to data types.
Enhanced Autocompletion: Provides intelligent code suggestions and autocompletion in the IDE, speeding up the development process.
Improved Code Readability: Makes code easier to understand and maintain, as types explicitly define the data structures being used.
What’s New in Version 2.0?
This release is packed with features that modernise the Apps Script development experience:
Full TypeScript Support: Write modern, type-safe JavaScript for both frontend and backend code.
ES6+ Modules: Organise your code into clean, reusable modules.
Frontend Frameworks: Comes with out-of-the-box support for Alpine.js and Tailwind CSS (including Daisy UI).
NPM Modules: Leverage the vast ecosystem of NPM modules in your server-side and client-side code.
Robust Mocking: The mocking system for google.script.run has been re-engineered to allow for easy simulation of both success and failure handlers during local development.
Migration to Vitest: The testing framework has been switched from Jest to Vitest, offering faster performance and a simpler configuration.
Gemini CLI Integration: A new GEMINI.md file is included in the project template, providing the Gemini CLI with the necessary context to assist with coding and debugging tasks.
CI/CD and Environment Management: The engine includes built-in support for GitHub Actions workflows and managing multiple deployment environments (DEV, UAT, PROD).
This update positions Apps Script Engine as a powerful tool for developers who want to apply modern web development practices to the Google Workspace platform.
A step-by-step guide to dynamically import, filter, and format your data.
How many times have you found yourself copying data from one sheet to another, applying filters, and then manually formatting everything as a table to make it more readable and functional? It’s often a tedious and error-prone process.
What if we could automate all of this with a single click? Welcome to the world of Google Apps Script!
In this article, we will analyze a script that not only imports and filters data between two spreadsheets but also uses the power of the Sheets API to dynamically create a formatted table. Ready? Let’s start! 🚀
What does the script do?
In summary, our script performs these steps in sequence:
Connects to a source spreadsheet (even one different from where the script is located).
Imports all the data from the specified sheet.
Filters the rows based on a specific condition (in our case, a numeric value in a column).
Clears the destination sheet and writes the filtered data to it.
Creates (or recreates) a formatted table that perfectly adapts to the size of the new data.
The Sample File
To follow this tutorial, you can create a copy of the sample spreadsheet:
The file contains the source sheet DataSet with the starting data. In your destination sheet, you can, for example, use a sheet named Filtered where the filtered data will be inserted. Nothing prevents you from using this script to filter and create a table in the same file. In any case, for my example, you will need the file I linked and a second file where you import this data.
In the script we will analyze, the filter is applied to the “Billing” column, selecting only rows with a value greater than 2000.
Before starting, there is a fundamental step. Our script uses advanced methods that are not part of the standard SpreadsheetApp service, but of the advanced Sheets API service.
To enable it:
Open the Apps Script editor.
In the left menu, click on Services (+).
Find Google Sheets API in the list, select it, and click Add.
Perfect! Now your script has the “superpowers” to manipulate tables, conditional formatting, and much more.
Script Analysis
Here is the complete script that you will need to copy and paste into your file:
function importFilterCreateTable() {
const idOrigin = "ID_FILE"; // Source File ID
const nameOrigin = "DataSet"; // source sheet name
const nameDest = "Filtered"; // destination sheet name
const indexCol = "Billing"; // name of the column to filter
const nameTable = "TBL_Filtered"; // name of the table
const ss = SpreadsheetApp.getActive();
const sheetOrigin = SpreadsheetApp.openById(idOrigin).getSheetByName(nameOrigin);
const sheetDest = ss.getSheetByName(nameDest);
const destID = sheetDest.getSheetId();
// Import & Filter the data
const values = sheetOrigin.getDataRange().getValues();
const col = values[0].indexOf(indexCol); // Find the column number to filter
const filterValues = values.filter((rw, h) => h === 0 || rw[col] > 2000); // Filter by condition
if (!filterValues.length) return;
// Retrieve the table data via the Sheets API
const ssObj = Sheets.Spreadsheets.get(ss.getId(), { includeGridData: false });
const sheetObj = ssObj.sheets.find(s => s.properties.sheetId === destID);
const requests = [];
// Checks if the table exists, if so it deletes it
if (sheetObj.tables) {
const table = sheetObj.tables[0];
requests.push({ deleteTable: { tableId: table.tableId } });
}
// Adds the new data
sheetDest.getRange(1, 1, filterValues.length, filterValues[0].length).setValues(filterValues);
// Adds a new table
requests.push({
addTable: {
table: {
name: nameTable,
range: {
sheetId: destID,
startRowIndex: 0,
endRowIndex: filterValues.length,
startColumnIndex: 0,
endColumnIndex: filterValues[0].length
}
}
}
});
if (requests.length) {
Sheets.Spreadsheets.batchUpdate({ requests }, ss.getId());
}
}
💪🏻 Now let’s analyze the most interesting part: the table.
The Creation of the Table
This is the most interesting and powerful part of the script. The standard SpreadsheetApp service does not have a direct method like sheet.createTable(). To do this, we need to speak directly with the Google Sheets API.
How does it work?
Sheets.Spreadsheets.get(ss.getId(), { includeGridData: false }); With this line, we ask the API to provide us with all the properties of our spreadsheet (like the sheets within it, their IDs, and of course, the existing tables), but without downloading the cell data (includeGridData: false), making the call much faster.
ssObj.sheets.find(s => s.properties.sheetId === destID); From the object we have received, we look for the specific sheet we are working on by comparing its ID (destID).
Preventive deletion: if (sheetObj.tables) Our script is designed to be run multiple times. For simplicity, we delete the old table before creating a new one. How? This block of code checks if a table already exists in the sheetObj sheet (sheetObj.tables). If so, it adds a deletion request (deleteTable) to an array of “requests” (requests).
Creation: requests.push({ addTable: { ... } }); We add a second request to our array: the creation of the new table (addTable). Here we define all its properties:
name: The unique name of the table.
range: The cell range it will occupy. It is crucial to note that the API uses “zero-based” indexes (the first row/column is 0) and the final index (endRowIndex) is excluded. This is why filterValues.length works perfectly to define the last row.
📌 In this example, the table will be placed in cell A1. If you want the table to appear in a different row, for example in row 3 of a column, you must set startRowIndex: 2 and endRowIndex: filterValues.length + 2. If you want it in a different column instead, for example in column D, you will have to put startColumnIndex: 3 and endColumnIndex: filterValues[0].length + 3.
Sheets.Spreadsheets.batchUpdate({ requests }, ss.getId()); This is the final call. Instead of making one API call to delete and another to create, we group them into a single batchUpdate. It is more efficient and ensures that the operations are performed in sequence.
This “delete and recreate” approach is extremely robust and ensures that the table always has the exact dimensions of the filtered data. What about the previous data? It is removed when we delete the table; the new data is only inserted after the old table has been deleted.
Updating an Existing Table
What if we just wanted to update the dimensions of an existing table instead of deleting and recreating it? In this case, the import and filter logic remains the same, but the table manipulation part changes.
This method becomes useful when your table already has a structure that you want to preserve (special formatting, data validation, drop-down menus, etc.) and you just want to update its data and dimensions.
Here is what the final part of the script would look like in this case:
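Below is a sketch of that request. It assumes the same variables as the first script (ss, sheetObj, destID, filterValues) and that the table to update is the first one on the sheet:
const table = sheetObj.tables[0];
const requests = [{
  updateTable: {
    table: {
      tableId: table.tableId,
      range: {
        sheetId: destID,
        startRowIndex: 0,
        endRowIndex: filterValues.length,
        startColumnIndex: 0,
        endColumnIndex: filterValues[0].length
      }
    },
    fields: "range"
  }
}];
Sheets.Spreadsheets.batchUpdate({ requests }, ss.getId());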
Instead of deleting and creating, we use an updateTable request.
We must specify the ID of the table to be modified (tableId).
Setting fields: "range" is crucial: it tells the API to update only and exclusively the range property of the table, leaving everything else (like the name) unchanged.
Conclusion
Which method should you choose? It depends on your needs: the “delete and recreate” method of the first script is generally best when you always want a fresh, clean table, while the update approach is useful when your table already has a structure you want to preserve.
Automating tasks in Google Sheets with Apps Script can save you hours of work. Learning to use the Sheets API opens up a world of possibilities that go far beyond simple data manipulation, allowing you to programmatically control almost every aspect of your spreadsheets.
Gartner has predicted that by 2025, 70% of new applications developed by enterprises will utilise low-code or no-code technologies. Furthermore, the global no-code AI platforms market is expected to grow to $24.8 billion by 2029. These numbers are no surprise, knowing that no-code solutions bring significant cost reductions and speed up development processes. The best news? If you are using Google Workspace, you probably already have access to a powerful no-code tool – Google AppSheet.
I recently co-authored a guide at Devoteam on building no-code apps with Google AppSheet and Gemini.
The guide hopefully provides a clear overview of AppSheet’s capabilities, highlighting its ability to connect to existing data sources like Google Sheets and SQL databases. It also delves into how the new Gemini integration allows anyone to generate an application’s structure simply by describing their idea in plain English. The guide includes some real-world use cases—from inventory management to streamlining hotel audits—demonstrating the tangible impact of empowering “citizen developers” within an organisation.
For the Google Workspace Developer community, hopefully this guide offers a useful lens through which to view the no-code landscape. Understanding these platforms—their capabilities, their audience, and their limitations—can be useful when deciding whether to build a full custom solution or to empower a team with a no-code tool.
gas-fakes, an emulation of Apps Script on Node, includes a sandbox feature that allows you to control file access and manage test artifacts.
The rise of generative AI tools like the Gemini CLI has unlocked incredible potential for automating Google Workspace, allowing developers to generate Google Apps Script code from simple, natural language prompts. However, this power comes with a significant security challenge: how can you safely execute code generated by an AI without granting it broad, potentially risky permissions to your Google Drive?
In a brilliant example of community collaboration, two distinguished developers, Bruce Mcpherson and Kanshi Tanaike, have provided a powerful solution to this very problem. Their combined work showcases an elegant architecture for creating a secure “Fake-Sandbox,” allowing developers to test and run AI-generated code with confidence.
The Problem: The All-or-Nothing Permission Model
At the heart of the issue is that Google Apps Script’s core services often require broad permissions. For example, a script designed to organise files typically needs the https://www.googleapis.com/auth/drive scope, which grants it sweeping access to all of a user’s files and folders in Google Drive. This presents a clear risk when running code from a source you don’t fully control, as an unintended command could have far-reaching consequences.
Bruce Mcpherson’s Foundation: gas-fakes
The technical cornerstone of the solution is gas-fakes, a powerful Node.js library created by Bruce Mcpherson. This library ingeniously emulates a Google Apps Script environment locally. It works by translating familiar Apps Script calls (like SpreadsheetApp.create()) into their equivalent, direct Google API requests. The library already provides extensive support for the most-used services, including SpreadsheetApp, DocumentApp, DriveApp, and SlidesApp, along with their advanced service counterparts.
This abstraction is the key. By funnelling actions through the underlying APIs, gas-fakes makes it possible to work with more granular, file-specific permissions—a level of control not available with the standard Apps Script services.
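In practice, running familiar Apps Script code locally through gas-fakes can look roughly like this (the import path and setup are assumptions based on the library’s npm packaging; check the gas-fakes README for the exact details):
// Node.js sketch: gas-fakes exposes Apps Script globals locally.
import '@mcpher/gas-fakes';

const ss = SpreadsheetApp.create('local sandbox test');
console.log(ss.getId());
// Clean up anything created during the session (see the sandbox behaviour below).
ScriptApp.__behavior.trash();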
Kanshi Tanaike’s Vision: The “Fake-Sandbox”
Seeing the potential of this library, Google Workspace Developer Kanshi Tanaike authored a detailed feasibility study proposing its use as a “Fake-Sandbox.” Tanaike demonstrated how gas-fakes could be integrated directly into the Gemini CLI workflow.
His approach is both simple and effective:
Generate Code: A user provides a prompt to the Gemini CLI (e.g., “Create a new Google Sheet and add the value ‘Hello, World’ to cell A1”).
Execute Locally: The generated Apps Script code is then executed locally on Node.js.
Translate and Secure: The gas-fakes library intercepts the Apps Script commands, translates them into permission-controlled Google API calls, and executes them securely.
Crucially, any files created during this process are contained within the sandbox session. At the end of the run, a simple command (ScriptApp.__behavior.trash()) automatically cleans up by moving all created files to the Google Drive trash, leaving the user’s Drive pristine.
Taking it Further: Granular Controls
Building on this concept, the gas-fakes library now includes even more sophisticated sandbox controls, as detailed in a follow-up post. Developers can now implement:
Whitelists to grant read-only or write access to specific, pre-existing files.
Per-Service Rules to disable entire services (like DocumentApp) or restrict a service to a specific list of approved methods.
Ultimately, this work provides a vital safety net for innovation. It gives developers the confidence to experiment with AI-generated code, knowing that their core data is protected by a robust and highly configurable sandbox. This practical solution removes a major barrier to entry, establishing a clear blueprint for safely embracing the next wave of Google Workspace automation.
To dive deeper into this solution, check out the source links below: