AppsScriptPulse

Automate creating or updating tables in Google Sheets with Apps Script 🚀

A step-by-step guide to dynamically import, filter, and format your data.

How many times have you found yourself copying data from one sheet to another, applying filters, and then manually formatting everything as a table to make it more readable and functional? It can be a tedious, error-prone process.

What if we could automate all of this with a single click? Welcome to the world of Google Apps Script!

In this article, we will analyze a script that not only imports and filters data between two spreadsheets but also uses the power of the Sheets API to dynamically create a formatted table. Ready? Let’s start! 🚀


What does the script do?

In summary, our script performs these steps in sequence:

  • Connects to a source spreadsheet (even one different from where the script is located).
  • Imports all the data from the specified sheet.
  • Filters the rows based on a specific condition (in our case, a numeric value in a column).
  • Clears the destination sheet and writes the filtered data to it.
  • Creates (or recreates) a formatted table that perfectly adapts to the size of the new data.

The Sample File

To follow this tutorial, you can create a copy of the sample spreadsheet:

➡️ Click here to create a copy of the sample file

The file contains the source sheet DataSet with the starting data. In your destination spreadsheet, you can, for example, use a sheet named Filtered where the filtered data will be inserted. Nothing prevents you from using this script to filter and create a table within the same file; for this example, however, you will need the linked file plus a second file into which to import its data.

In the script we will analyze, the filter is applied to the “Billing” column, selecting only rows with a value greater than 2000.




Preparation: Enable the Sheets API

Before starting, there is a fundamental step. Our script uses advanced methods that are not part of the standard SpreadsheetApp service, but of the advanced Sheets API service.

To enable it:

  • Open the Apps Script editor.
  • In the left menu, click on Services (+).
  • Find Google Sheets API in the list, select it, and click Add.

Perfect! Now your script has the “superpowers” to manipulate tables, conditional formatting, and much more.


Script Analysis

Here is the complete script that you will need to copy and paste into your file:

function importFilterCreateTable() {
    const idOrigin = "ID_FILE"; // Source file ID
    const nameOrigin = "DataSet"; // Source sheet name
    const nameDest = "Filtered"; // Destination sheet name
    const indexCol = "Billing"; // Name of the column to filter on
    const nameTable = "TBL_Filtered"; // Name of the table

    const ss = SpreadsheetApp.getActive();
    const sheetOrigin = SpreadsheetApp.openById(idOrigin).getSheetByName(nameOrigin);
    const sheetDest = ss.getSheetByName(nameDest);
    const destID = sheetDest.getSheetId();

    // Import & Filter the data
    const values = sheetOrigin.getDataRange().getValues();

    const col = values[0].indexOf(indexCol); // Find the index of the column to filter on
    if (col === -1) return; // Stop if the column header is not found

    const filterValues = values.filter((rw, h) => h === 0 || rw[col] > 2000); // Keep the header row plus rows matching the condition

    if (!filterValues.length) return;

    // Retrieve the table data via the Sheets API
    const ssObj = Sheets.Spreadsheets.get(ss.getId(), { includeGridData: false });
    const sheetObj = ssObj.sheets.find(s => s.properties.sheetId === destID);

    const requests = [];

    // If a table already exists in the destination sheet, queue a request to delete it
    if (sheetObj.tables) {
        const table = sheetObj.tables[0];
        requests.push({ deleteTable: { tableId: table.tableId } });
    }

    // Clear any previous values, then write the new data
    sheetDest.clearContents();
    sheetDest.getRange(1, 1, filterValues.length, filterValues[0].length).setValues(filterValues);

    // Adds a new table
    requests.push({
        addTable: {
            table: {
                name: nameTable,
                range: {
                    sheetId: destID,
                    startRowIndex: 0,
                    endRowIndex: filterValues.length,
                    startColumnIndex: 0,
                    endColumnIndex: filterValues[0].length
                }
            }
        }
    });

    if (requests.length) {
        Sheets.Spreadsheets.batchUpdate({ requests }, ss.getId());
    }
}

💪🏻 Let’s analyze the most interesting part: the table.

The Creation of the Table

This is the most interesting and powerful part of the script. The standard SpreadsheetApp service does not have a direct method like sheet.createTable(). To do this, we need to speak directly with the Google Sheets API.

How does it work?

  • Sheets.Spreadsheets.get(ss.getId(), { includeGridData: false }); With this line, we ask the API to provide us with all the properties of our spreadsheet (like the sheets within it, their IDs, and of course, the existing tables), but without downloading the cell data (includeGridData: false), making the call much faster.
  • ssObj.sheets.find(s => s.properties.sheetId === destID); From the object we have received, we look for the specific sheet we are working on by comparing its ID (destID).
  • Preventive deletion: if (sheetObj.tables) Our script is designed to be run multiple times. For simplicity, we delete the old table before creating a new one. How? This block of code checks if a table already exists in the sheetObj sheet (sheetObj.tables). If so, it adds a deletion request (deleteTable) to an array of “requests” (requests).
  • Creation: requests.push({ addTable: { ... } }); We add a second request to our array: the creation of the new table (addTable). Here we define all its properties:
    • name: The unique name of the table.
    • range: The cell range it will occupy. It is crucial to note that the API uses “zero-based” indexes (the first row/column is 0) and the final index (endRowIndex) is excluded. This is why filterValues.length works perfectly to define the last row.

📌 In this example, the table will be placed in cell A1. If you want the table to appear in a different row, for example in row 3 of a column, you must set startRowIndex: 2 and endRowIndex: filterValues.length + 2. If you want it in a different column instead, for example in column D, you will have to put startColumnIndex: 3 and endColumnIndex: filterValues[0].length + 3.

  • Sheets.Spreadsheets.batchUpdate({ requests }, ss.getId()); This is the final call. Instead of making one API call to delete and another to create, we group them into a single batchUpdate. It is more efficient and ensures that the operations are performed in sequence.

This “delete and recreate” approach is extremely robust and ensures that the table always has the exact dimensions of the filtered data. Note the order of operations: the delete and create requests are only queued in the requests array, so the destination sheet is rewritten first, and both table operations are then executed together by the final batchUpdate.
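To make the index arithmetic concrete, here is a small helper (the function name is mine, purely illustrative) that converts the familiar 1-based row and column numbers into the zero-based, end-exclusive range object the API expects:

```javascript
// Hypothetical helper: converts a 1-based anchor cell (as you see it in
// the Sheets UI) plus the data's dimensions into the zero-based,
// end-exclusive range object used by addTable.
function buildTableRange(sheetId, startRow, startCol, numRows, numCols) {
  return {
    sheetId: sheetId,
    startRowIndex: startRow - 1,          // zero-based
    endRowIndex: startRow - 1 + numRows,  // end index is excluded
    startColumnIndex: startCol - 1,
    endColumnIndex: startCol - 1 + numCols
  };
}

// A table starting in D3 with a header row plus 10 data rows, 4 columns wide:
const range = buildTableRange(123456, 3, 4, 11, 4);
// startRowIndex: 2, endRowIndex: 13, startColumnIndex: 3, endColumnIndex: 7
console.log(range);
```

This matches the note above: row 3 becomes startRowIndex: 2, and column D becomes startColumnIndex: 3.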


Updating an Existing Table

What if we just wanted to update the dimensions of an existing table instead of deleting and recreating it? In this case, the import and filter logic remains the same, but the table manipulation part changes.

This method becomes useful when your table already has a structure that you want to preserve (special formatting, data validation, drop-down menus, etc.) and you just want to update its data and dimensions.

Here is what the final part of the script would look like in this case:

    // Clear the old values, keeping the table's header row in row 1
    sheetDest.getRange(2, 1, sheetDest.getMaxRows() - 1, sheetDest.getMaxColumns()).clearContent();

    // Insert the new values below the header (the table already has one)
    const dataRows = filterValues.slice(1);
    if (dataRows.length) {
        sheetDest.getRange(2, 1, dataRows.length, dataRows[0].length).setValues(dataRows);
    }

    // Retrieve the table metadata via the Sheets API
    const obj = Sheets.Spreadsheets.get(ss.getId(), { includeGridData: false });
    const sheetObj = obj.sheets.find(s => s.properties.sheetId === destID);

    const tableID = sheetObj.tables[0].tableId;

    const requests = [{
        updateTable: {
            table: {
                tableId: tableID,
                range: {
                    sheetId: destID,
                    startRowIndex: 0,
                    endRowIndex: sheetDest.getLastRow(),
                    startColumnIndex: 0,
                    endColumnIndex: sheetDest.getLastColumn()
                }
            },
            fields: "range"
        }
    }];

    Sheets.Spreadsheets.batchUpdate({ requests }, ss.getId());

What changes?

  • Instead of deleting and creating, we use an updateTable request.
  • We must specify the ID of the table to be modified (tableId).
  • The fields: "range" field is crucial: it tells the API to update only and exclusively the range property of the table, leaving everything else (like the name) unchanged.
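To make the field mask concrete, here is an illustrative request builder (the helper name is mine; the property names follow the addTable request shown earlier). A comma-separated mask lets a single updateTable request change several properties at once, while everything left out of the mask is preserved:

```javascript
// Sketch: build an updateTable request whose field mask lists every
// property we want the API to overwrite. Properties not named in the
// mask are left untouched on the existing table.
function buildUpdateTableRequest(tableId, updates) {
  return {
    updateTable: {
      table: Object.assign({ tableId: tableId }, updates),
      // e.g. "range" or "name,range" — only the listed fields are changed
      fields: Object.keys(updates).join(',')
    }
  };
}

const req = buildUpdateTableRequest('tbl_1', {
  name: 'TBL_Filtered',
  range: { sheetId: 0, startRowIndex: 0, endRowIndex: 5,
           startColumnIndex: 0, endColumnIndex: 3 }
});
console.log(req.updateTable.fields); // "name,range"
```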

Conclusion

Which method should you choose? It depends on your needs. The “delete and recreate” approach of the first script is best when you always want a brand-new, clean table, while the second approach is useful when you have already structured your table and want to preserve it.

Automating tasks in Google Sheets with Apps Script can save you hours of work. Learning to use the Sheets API opens up a world of possibilities that go far beyond simple data manipulation, allowing you to programmatically control almost every aspect of your spreadsheets.

Build Apps Without Coding: Guide to Google AppSheet With Gemini AI

Gartner has predicted that by 2025, 70% of new applications developed by enterprises will utilise low-code or no-code technologies. Furthermore, the global no-code AI platforms market is expected to grow to $24.8 billion by 2029. These numbers are no surprise, knowing that no-code solutions bring significant cost reductions and speed up development processes. The best news? If you are using Google Workspace, you probably already have access to a powerful no-code tool – Google AppSheet.

I recently co-authored a guide at Devoteam on building no-code apps with Google AppSheet and Gemini.

The guide hopefully provides a clear overview of AppSheet’s capabilities, highlighting its ability to connect to existing data sources like Google Sheets and SQL databases. It also delves into how the new Gemini integration allows anyone to generate an application’s structure simply by describing their idea in plain English. The guide includes some real-world use cases—from inventory management to streamlining hotel audits—demonstrating the tangible impact of empowering “citizen developers” within an organisation.

For the Google Workspace Developer community, hopefully this guide offers a useful lens through which to view the no-code landscape. Understanding these platforms—their capabilities, their audience, and their limitations—can be useful when deciding whether to build a full custom solution or to empower a team with a no-code tool.

Source: Build Apps Without Coding: Guide to Google AppSheet With Gemini AI

A Secure Sandbox for AI-Generated Apps Script: How to Experiment with Confidence


gas-fakes includes a sandbox feature in an emulation of Apps Script on Node that allows you to control file access and manage test artifacts

The rise of generative AI tools like the Gemini CLI has unlocked incredible potential for automating Google Workspace, allowing developers to generate Google Apps Script code from simple, natural language prompts. However, this power comes with a significant security challenge: how can you safely execute code generated by an AI without granting it broad, potentially risky permissions to your Google Drive?

In a brilliant example of community collaboration, two distinguished developers, Bruce Mcpherson and Kanshi Tanaike, have provided a powerful solution to this very problem. Their combined work showcases an elegant architecture for creating a secure “Fake-Sandbox,” allowing developers to test and run AI-generated code with confidence.

The Problem: The All-or-Nothing Permission Model

At the heart of the issue is that Google Apps Script’s core services often require sweeping permissions. For example, a script designed to organise files often needs the https://www.googleapis.com/auth/drive scope, which grants it sweeping access to all of a user’s files and folders in Google Drive. This presents a clear risk when running code from a source you don’t fully control, as an unintended command could have far-reaching consequences.
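For context, one partial mitigation in ordinary Apps Script projects (a sketch, not something from the gas-fakes articles) is to list narrower scopes explicitly in the project's appsscript.json manifest. For instance, the drive.file scope only grants access to files the script itself creates or that the user explicitly opens with it:

```json
{
  "timeZone": "Etc/UTC",
  "exceptionLogging": "STACKDRIVER",
  "oauthScopes": [
    "https://www.googleapis.com/auth/drive.file"
  ]
}
```

Declaring oauthScopes overrides automatic scope detection, but it only helps when the script's calls are compatible with the narrower scope, which is precisely the gap the gas-fakes sandbox addresses for arbitrary generated code.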

Bruce Mcpherson’s Foundation: gas-fakes

The technical cornerstone of the solution is gas-fakes, a powerful Node.js library created by Bruce Mcpherson. This library ingeniously emulates a Google Apps Script environment locally. It works by translating familiar Apps Script calls (like SpreadsheetApp.create()) into their equivalent, direct Google API requests. The library already provides extensive support for the most-used services, including SpreadsheetApp, DocumentApp, DriveApp, and SlidesApp, along with their advanced service counterparts.

This abstraction is the key. By funnelling actions through the underlying APIs, gas-fakes makes it possible to work with more granular, file-specific permissions—a level of control not available with the standard Apps Script services.

Kanshi Tanaike’s Vision: The “Fake-Sandbox”

Seeing the potential of this library, Google Workspace Developer Kanshi Tanaike authored a detailed feasibility study proposing its use as a “Fake-Sandbox.” Tanaike demonstrated how gas-fakes could be integrated directly into the Gemini CLI workflow.

His approach is both simple and effective:

  1. Generate Code: A user provides a prompt to the Gemini CLI (e.g., “Create a new Google Sheet and add the value ‘Hello, World’ to cell A1”).
  2. Execute Locally: The generated Apps Script code is then executed locally on Node.js.
  3. Translate and Secure: The gas-fakes library intercepts the Apps Script commands, translates them into permission-controlled Google API calls, and executes them securely.

Crucially, any files created during this process are contained within the sandbox session. At the end of the run, a simple command (ScriptApp.__behavior.trash()) automatically cleans up by moving all created files to the Google Drive trash, leaving the user’s Drive pristine.

Taking it Further: Granular Controls

Building on this concept, the gas-fakes library now includes even more sophisticated sandbox controls, as detailed in a follow-up post. Developers can now implement:

  • Whitelists to grant read-only or write access to specific, pre-existing files.
  • Per-Service Rules to disable entire services (like DocumentApp) or restrict a service to a specific list of approved methods.

Ultimately, this work provides a vital safety net for innovation. It gives developers the confidence to experiment with AI-generated code, knowing that their core data is protected by a robust and highly configurable sandbox. This practical solution removes a major barrier to entry, establishing a clear blueprint for safely embracing the next wave of Google Workspace automation.

To dive deeper into this solution check out the source links below:

Source: A Fake-Sandbox for Google Apps Script: A Feasibility Study On Securely Executing Code Generated by Gemini CLI by Kanshi Tanaike
Source: Fake-Sandbox for Google Apps Script: Granular controls by Bruce Mcpherson
Code: gas-fakes on GitHub by Bruce Mcpherson

Taming the Timeout: A Robust Webhook Strategy for Apps Script from Max Makhrov

Webhooks are a powerful way for developers to connect different applications, allowing them to automatically send data to a Google Apps Script project whenever an event occurs. However, this powerful technique comes with a significant challenge: if your script takes too long to process an incoming request, the sending service will time out, leading to lost data and unreliable automations.

In a recent article, Google Workspace Developer Max Makhrov shares a clever and effective architecture designed to make your webhooks both fast and reliable.

The Problem: When a Few Seconds is Too Long

The core issue is that services sending webhooks—like GitHub, Shopify, or Stripe—expect a nearly instant 200 OK response. These services have strict timeout policies, often between 5 and 10 seconds. If your doPost(e) function is busy with complex logic, writing to a spreadsheet, or making other API calls, it can easily fail to respond in time, causing the service to report a delivery failure.

The Solution: Cache Now, Process Later

Max’s approach decouples the initial reception of data from the heavy processing. The strategy is elegantly simple:

  1. Immediate Caching: The doPost(e) function is kept lean. Its only job is to immediately write the raw, incoming webhook data to Apps Script’s CacheService.
  2. Instant Response: With the data safely stored, the function quickly returns a success message, satisfying the sending service well within its required timeout window.
  3. Asynchronous Processing: A separate, time-driven trigger runs another function on a regular schedule (e.g., every minute). This function retrieves one or more tasks from the cache and performs the long-running processes.

To manage the tasks in the cache, Max introduces the concept of using a lightweight collection, which allows multiple webhook payloads to be queued and processed in an orderly fashion. Crucially, this pattern also uses LockService to prevent multiple processing functions from running simultaneously and trying to handle the same data.

While CacheService is perfect for this, it’s important to remember its limits: the 100KB size limit per item can be a challenge for webhooks with large payloads, and items expire after 10 minutes, so your processing trigger must run frequently enough to handle the workload.
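If a payload risks exceeding that 100KB ceiling, one possible workaround (my sketch, not part of the original article) is to split it across several cache keys and record the chunk count:

```javascript
// Sketch: split a large payload string into cache-sized chunks.
// CACHE_MAX sits slightly under CacheService's 100KB per-key limit
// to leave headroom.
const CACHE_MAX = 90 * 1024;

function splitPayload(payload, maxLen = CACHE_MAX) {
  const chunks = [];
  for (let i = 0; i < payload.length; i += maxLen) {
    chunks.push(payload.slice(i, i + maxLen));
  }
  return chunks;
}

// In doPost you might then store each piece under an indexed key:
//   chunks.forEach((c, i) => cache.put(`hook_${id}_${i}`, c, 600));
//   cache.put(`hook_${id}_count`, String(chunks.length), 600);
```

The processing trigger would read the count key first, reassemble the chunks, and delete them once handled.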

It’s worth noting that this “fire and forget” pattern is ideal for notification-style webhooks, where the sending service only needs a 200 OK status to confirm receipt. For interactive webhooks that require a specific JSON response to be sent back immediately (like a Slack command), this asynchronous approach would not be suitable.

Here is a look at the lean doPost(e) function that captures the incoming data:

function doPost(e) {
  try {
    // Immediately write the raw payload to CacheService
    writeHook2Cache_(e);
  } catch (err) {
    // If caching fails, log the error
    set2Memory_(webhookKeys.error, err);
  }
  // Instantly return a 200 OK success response
  return ContentService
    .createTextOutput(JSON.stringify({status: 'success'}))
    .setMimeType(ContentService.MimeType.JSON);
}
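The excerpt above only shows the capture side. Here is a sketch of what the time-driven processing side might look like (my illustration, not Max's code; plain objects stand in for CacheService and LockService so the shape is clear):

```javascript
// Sketch of the time-driven half of the pattern: take a lock, drain
// queued payloads from the cache, process each one, then release.
// `cache` and `lock` are stand-ins for CacheService / LockService.
function processQueuedHooks(cache, lock, keys, handler) {
  if (!lock.tryLock(5000)) return 0; // another run is already working
  let processed = 0;
  try {
    for (const key of keys) {
      const raw = cache.get(key);
      if (raw === null) continue; // expired or already handled
      handler(JSON.parse(raw));   // the long-running work happens here
      cache.remove(key);
      processed++;
    }
  } finally {
    lock.releaseLock();
  }
  return processed;
}
```

In a real project the same body would use CacheService.getScriptCache() and LockService.getScriptLock(), with the function attached to a minute-based time-driven trigger.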

This asynchronous architecture is a powerful pattern for any developer building robust integrations.

A big thank you to Max Makhrov for sharing this valuable technique with the community! You can find a more detailed explanation and links to the complete code on his Medium page.

Source: Using Collections for Handling Web-Hooks with App Script

Handling Granular OAuth Consent in Editor Add-ons with a Ready-Made Solution from Dave Abouav

Image credit: Dave Abouav

A collection of helpful functions for Apps Script powered scripts, Add-ons and Chat Apps for checking if all of a script’s OAuth scopes (as listed in the project’s manifest file) have been authorized/granted by the end-user, and if not, then to prompt the user to re-authorize.

As we’ve covered previously, Google continues to roll out granular OAuth consent for published Editor Add-ons. This change gives users more control but requires developers to adapt their code to prevent a broken experience for new users.

While Google has provided new Apps Script methods for developers, distinguished Google Workspace Developer Dave Abouav has gone a step further by creating a simple, drop-in solution to help developers address this change quickly.

The Challenge: Handling Partial Permissions

With the new consent screen, users can grant only a subset of the OAuth scopes your add-on requests. If a new user deselects a scope that is essential for a particular feature, your add-on could fail, leaving the user without a clear path forward.

A Simple, Ready-Made Solution

Dave’s solution addresses this by providing a function that checks for incomplete authorization. When the function is triggered and finds that necessary permissions are missing, it automatically displays a user-friendly dialog. This prompt explains the situation and provides a direct link for the user to grant the required access.

The implementation is straightforward. After adding the granularAuth.gs file to your project, you’ll need to configure two constants at the top of the file:

  1. Set AUTH_APP_NAME to your add-on’s name to ensure the re-authorization prompt is correctly branded.
  2. Crucially, update the ADD_ON_CONTAINERS array to list only the host applications your add-on is designed for. This prevents your add-on from requesting unnecessary permissions (e.g., asking for Google Docs access when it’s a Sheets-only add-on).

// Set your Add-on's name here.
const AUTH_APP_NAME = 'My Awesome Add-on';

// To avoid requesting unnecessary scopes, list ONLY the hosts this add-on is for.
// For a Sheets-only add-on, for example, you would use:
const ADD_ON_CONTAINERS = [SpreadsheetApp];

Once configured, you can call the main function at the entry points of your add-on, such as from a menu item:

function menuItemStartAction() {
  if (authHandleMissingScopeGrants()) {
    // The prompt has been shown, so abort normal execution.
    return;
  }

 // Continue with normal execution
 handleStartAction();
}

This simple check ensures your add-on has the permissions it needs to run, and if not, it provides the user with an immediate path to resolution. Crucially, deploying this update doesn’t trigger the need for a new review from the OAuth or Marketplace teams.

It’s worth noting that this solution is specifically designed for Editor Add-ons (Forms, Sheets, Docs, and Slides). Adapting it for Workspace Add-ons would require modifications to use the Card Service for the user interface.

A big thank you to Dave Abouav for creating and sharing this valuable resource with the community! You can find the complete code, setup instructions, and more details on his GitHub repository.

Source: Dave Abouav’s granularOAuth GitHub Repository

New Feature for the AppSheetApp Library: Run API Calls in Parallel

A Google Apps Script wrapper for the Google AppSheet API – bienlim/AppSheetApp

Recently I had a nice ping into my LinkedIn feed from App Developer Bien Lim. Bien has enhanced my original AppSheetApp library for Apps Script with a powerful new feature: fetchAll(). This addition lets you run multiple AppSheet API calls at the same time, significantly boosting the performance of your Apps Script integrations.

Building on the library’s existing methods like Add, Delete, Edit, and Find, this new function allows developers to execute these calls in parallel. This can drastically reduce execution time, which is a big improvement for applications that rely on multiple API operations.

You can find the updated library on the code repository to start using this new feature today.

Example Code

Here is a snippet showing how you can use the new fetchAll() method:

/**
 * Executes multiple AppSheet API requests in parallel.
 * This example shows how to perform Add, Delete, Edit, and Find operations simultaneously.
 */
function parallelRequest(){
  // Replace with your actual App ID and Access Key
  const AppSheet = new AppSheetApp('YOUR_APP_ID', 'YOUR_ACCESS_KEY');

  // Placeholder data for demonstration
  const properties = { "Locale": "en-US" };

  // Sample data for adding a new record. The key field is usually omitted if it's auto-generated.
  const dataToAdd = [{"Name": "John Doe", "Age": 30}];

  // Sample data for editing an existing record. The key field is required to identify the row.
  const dataToEdit = [{"ID": "unique-id-123", "Age": 31}];

  // Sample data for deleting an existing record. The key field is required.
  const dataToDelete = [{"ID": "unique-id-456"}];

  // The FetchAll method takes multiple API calls as arguments.
  // The 'true' argument tells each method to return a parameter object instead of
  // making an immediate API call. These parameter objects are then passed to fetchAll().
  const responses = AppSheet.fetchAll(
    AppSheet.Add('People', dataToAdd, properties, true),
    AppSheet.Delete('People', dataToDelete, properties, true),
    AppSheet.Edit('People', dataToEdit, properties, true),
    AppSheet.Find('People', [], properties, true)
  );

  // The responses are returned in an array, in the same order as the requests.
  const [ respFromAdd, respFromDelete, respFromEdit, respFromFind ] = responses;

  // You can now handle each response individually
  console.log('Add Response:', respFromAdd);
  console.log('Delete Response:', respFromDelete);
  console.log('Edit Response:', respFromEdit);
  console.log('Find Response:', respFromFind);
}

Source: GitHub – bienlim/AppSheetApp: A Google Apps Script wrapper for the Google AppSheet API

Granular Consent for Published Google Workspace Editor Add-ons: What Developers Need to Know

This screenshot shows the new OAuth consent screen, which lets the user provide consent for a subset of the requested OAuth scopes. (Image credit: Google)

Earlier this year, we launched an improved version of the OAuth consent screen to the Apps Script IDE and unpublished Editor Add-ons that allows users to specify which individual scopes they would like to authorize for that script. For example, if a script requests access to a user’s Sheets and Forms files, and the user only intends to use the script with Sheets files, they can decide to only allow access to their spreadsheets and not their forms.

Google is continuing to enhance user privacy and control by extending the granular OAuth consent screen to published Editor Add-ons. This latest move builds upon the changes first introduced to the Apps Script IDE back in January, which we covered in detail at the time.

For those who missed it, this feature allows users to choose exactly which permissions (scopes) an application can access, rather than being forced into an “all-or-nothing” decision.

What’s Changing Now?

This more granular consent experience will now be presented to users when they install or re-authorize published Editor Add-ons from the Google Workspace Marketplace. When an add-on requests access, for instance to both a user’s Docs and Sheets, the user can now choose to approve one, both, or neither of those permissions.

While Google notes that “Pre-existing scope grants will not be affected”, it’s crucial for publishers of existing add-ons to understand that this new consent screen will appear if a user needs to re-consent. This can happen for several reasons, such as a user manually revoking the add-on’s access from their Google Account, or if a refresh token expires (e.g., after six months of inactivity). In these cases, existing users will be presented with the new granular options, and developers must be prepared for the possibility of partial authorisation.

A Reminder for Developers

As this functionality rolls out, starting on August 19, 2025, it’s crucial for developers to ensure their add-ons can handle partial consent. If a user denies a specific scope that is essential for a feature, the add-on should degrade gracefully rather than break.

As we detailed in our January post, the ScriptApp and AuthorizationInfo classes are your key tools for managing this. These allow you to programmatically check the granted scopes and build safeguards into your code. For a deeper dive into the methods and a guide on handling these scenarios, we recommend reviewing our original article.

Source: Granular OAuth consent in Google Apps Editor Add-ons

Supercharge Your Apps Script Code Reviews with Gemini for Workspace and a Simple Exporter Tool

As developers, we are constantly looking for ways to improve our workflow and the quality of our code. The recent rise of powerful AI assistants, like Google’s Gemini, has opened up new frontiers for productivity, particularly in the realm of code reviews. These tools can offer instant feedback, suggest optimisations, and spot potential bugs we might have missed.

However, there’s often a practical hurdle: providing the AI with the complete context of our project. For Google Apps Script developers, this can mean the tedious process of copying and pasting code from multiple .gs and .html files, not to mention the crucial appsscript.json manifest.

To solve this, I’ve developed a simple tool built, fittingly, with Apps Script itself. It’s a Google Sheet that lets you export an entire Apps Script project into a single, neatly formatted JSON file, ready to be added directly to the Gemini App or a Gemini Gem.

Why This Approach?

  • Holistic Reviews: By packaging every file—including the manifest with its scopes and dependencies—you give the AI the bigger picture leading to more accurate analysis.
  • Boosted Efficiency: Forget manual copy-pasting. A couple of clicks are all it takes to get a complete project export, saving you time and preventing errors.

How It Works & How to Prompt Gemini

To get the most out of your exported project file, it helps to understand its structure and how to tell your AI assistant what it’s looking at.

The JSON Structure

The tool uses the official Google Apps Script API to fetch your project’s content. The resulting .json file contains a list (an array) of File objects. Based on the official documentation, each object in the list represents one file from your project and looks something like this:

{
  "name": "Code",
  "type": "SERVER_JS",
  "source": "function myFunction() {\n  Logger.log('Hello, world!');\n}"
}
  • name: The name of the file (e.g., “Code”, “Index”, “appsscript”).
  • type: The kind of file it is. This will be SERVER_JS for .gs files, HTML for .html files, and JSON for the appsscript.json manifest.
  • source: The actual code or content of the file as a string.
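To see how these pieces fit together, here is a small illustrative snippet (not part of the exporter tool itself) that flattens such an export back into one annotated listing, roughly what Gemini effectively reads:

```javascript
// Sketch: turn an exported array of File objects into a single readable
// listing, labelling each file with its name and inferred extension.
const EXTENSIONS = { SERVER_JS: '.gs', HTML: '.html', JSON: '.json' };

function projectToListing(files) {
  return files
    .map(f => `// File: ${f.name}${EXTENSIONS[f.type] || ''}\n${f.source}`)
    .join('\n\n');
}

const listing = projectToListing([
  { name: 'Code', type: 'SERVER_JS', source: 'function hi() {}' },
  { name: 'appsscript', type: 'JSON', source: '{}' }
]);
console.log(listing);
```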

Prompting Gemini for the Best Results

When you upload this file, you can give Gemini a simple instruction to ensure it understands the context. Here is a sample prompt you can adapt:

“Please review the attached Google Apps Script project. The JSON file contains an array of file objects, where each object has a ‘name’, ‘type’, and ‘source’. Please analyse all the files together as a single project to provide a comprehensive review.”

This prompt tells Gemini exactly how to interpret the file, ensuring it sees the appsscript.json manifest for its scopes, the server-side logic in the .gs files, and any client-side .html files as a complete, interconnected project.

Tip: For scripts bound to a Google Sheets, Docs, or Slides file, add that document to your Gemini conversation alongside the exported JSON file; this gives Gemini the complete context and the most accurate review.

Take It to the Next Level: Create a Custom Gem

To save even more time, you can embed these instructions into a custom Gem in the Gemini App. This creates a reusable ‘persona’ for your code reviews. When creating your Gem, you can provide it with more detailed instructions to focus its analysis.

Here is a more advanced set of instructions you could use for your “Apps Script Code Reviewer” Gem:

You are an expert Google Apps Script developer who specialises in writing clean, efficient, and secure code.

When I provide a JSON file, it will represent a complete Apps Script project. The file contains an array of file objects, each with a 'name', 'type', and 'source'.

Your task is to perform a thorough code review of the entire project. Please analyse all files together.

Specifically, focus on:

1. **Best Practices:** Check if the code follows the official Google Apps Script style guide and modern JavaScript (ES6+) conventions.  
2. **Performance:** Identify any potential bottlenecks, inefficient loops, or excessive API calls.  
3. **Security:** Look for any potential security vulnerabilities, especially concerning data handling and web app permissions.  
4. **Clarity and Readability:** Assess the code for clarity, and check for adequate comments and meaningful variable names.

Please provide your feedback in a structured format, starting with a high-level summary and then a detailed list of suggestions, grouped by file.

If you are suggesting code revisions, provide the fully revised file.

Getting Started

Here’s how to set up your own Apps Script project exporter.

Step 1: Prerequisites – Enabling the Apps Script API

The tool relies on the Apps Script API to fetch project files. To use it, you first need to link your script project to a Google Cloud Platform (GCP) project where the API is enabled.

  1. Create a GCP Project: If you don’t already have one, create a standard GCP project by visiting the Google Cloud Console.
  2. Enable the Apps Script API: Within your GCP project, navigate to the “APIs & Services” dashboard, click “+ ENABLE APIS AND SERVICES”, search for “Google Apps Script API”, and enable it.
  3. Configure the OAuth Consent Screen: Before the script can be authorised, you must configure the consent screen.
  • In the GCP Console, navigate to “APIs & Services” > “OAuth consent screen”.
  • Choose Internal for the User Type and click Create.
  • Fill in the required fields (App name, User support email, and Developer contact information) and click Save and Continue. No other configuration is needed. As this is for internal use, you do not need to submit the app for verification.
  4. Link to Your Apps Script Project:
  • Open your Apps Script project (the one containing the exporter code).
  • Go to “Project Settings” (the cog icon ⚙️).
  • Under “Google Cloud Platform (GCP) Project”, click “Change project” and enter your GCP project number.

Step 2: Setting Up the Exporter Sheet

  1. Make a Copy: Click this link to make your own copy of the pre-configured Google Sheet Template.
  2. Configure the Destination Folder:
  • In your new sheet, go to Extensions > Apps Script.
  • In the Code.gs file, find the line const DEST_FOLDER_ID = '...';
  • Replace the placeholder ID with the ID of the Google Drive folder where you want your JSON exports to be saved. You can get this from the folder’s URL (it’s the string of characters after folders/). Tip: Your copy of this template can be shared with your colleagues. Also share the folder with them so that the script can write the export file.
  3. Save and Authorise: Save the script project. Return to your sheet and reload the page. A new “Script Exporter” menu will appear. The first time you run it, you will be prompted to authorise the script’s required permissions.
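As a quick illustration of the folder-ID step, the ID is simply the path segment after `folders/` in the folder’s URL. This small sketch (plain JavaScript, not part of the template, with an invented URL) shows the extraction:

```javascript
// Pull the Drive folder ID out of a folder URL: it is the path
// segment immediately after 'folders/'.
function getFolderIdFromUrl(url) {
  const match = url.match(/folders\/([^/?#]+)/);
  return match ? match[1] : null;
}

// Hypothetical folder URL for illustration only.
const url = 'https://drive.google.com/drive/folders/1AbC_dEf23GhIJkLm';
console.log(getFolderIdFromUrl(url)); // '1AbC_dEf23GhIJkLm'
```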

Step 3: Exporting Your First Project

Using the tool is the easiest part!

  1. Find the Script ID: First, you need the ID of the project you wish to review. The easiest way to find this is by copying it from the script editor’s URL.
  • For standalone scripts: The URL will look like https://script.google.com/d/THIS_IS_THE_ID/edit. The ID is the long string of characters between /d/ and /edit.
  • For container-bound scripts (in a Sheet, Doc, etc.): From your Google Sheet or Doc, open the script editor via Extensions > Apps Script. The URL in this new browser tab will have the same format. Copy the ID from there.
  2. Run the Exporter: In your exporter sheet, click Script Exporter > Export Script Project.
  3. Paste the ID: When the dialog box appears, paste in the Script ID you just copied and click OK.
  4. Check the Output: The script will fetch the project files, create a JSON file in your designated Drive folder, and confirm with a success message. A log of the export, including a link to the file, will be automatically added to the “Execution Log” sheet.
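If you find yourself doing this often, the Script ID can also be pulled out of the editor URL programmatically. This is a sketch with a hypothetical helper name, not part of the exporter itself:

```javascript
// Extract a Script ID from an Apps Script editor URL of the form
// https://script.google.com/d/THIS_IS_THE_ID/edit
function getScriptIdFromUrl(editorUrl) {
  const match = editorUrl.match(/\/d\/([^/]+)\/edit/);
  return match ? match[1] : null;
}

const sample = 'https://script.google.com/d/THIS_IS_THE_ID/edit';
console.log(getScriptIdFromUrl(sample)); // 'THIS_IS_THE_ID'
```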

You can now head over to the Gemini App (gemini.google.com), add the JSON file from Google Drive, and start your comprehensive code review!

I hope you find this tool useful. It’s a small optimisation, but one that hopefully makes integrating Gemini for Workspace into our daily Apps Script development practices that much smoother. I’d love to hear if you use this or similar approaches, and any other tips you have for Gemini-assisted Google Workspace development. Happy scripting!

Automating Personalised Gmail Drafts with the AI Formula and Apps Script

The AI formula in Google Sheets is a powerful and easy way to use Gemini AI directly inside your Google Sheets. In this example, we use the AI formula to generate draft emails giving feedback on student exam scores. We then use AI to write Apps Script that moves those drafts from the Sheet into Gmail, ready to send.
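As a rough illustration of the first step, a feedback prompt built with the AI formula might look like the following. The column references are assumptions (a name in column A, a score in column B), and the exact arguments available may vary as the feature rolls out:

```
=AI("Write a short, encouraging feedback email for a student named " & A2 & " who scored " & B2 & " out of 100 on their exam.")
```

Dragging the formula down the column generates a personalised draft for every row, which the Apps Script step then turns into Gmail drafts.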

For many, the process of drafting personalised emails from a list of contacts in Google Sheets is a tedious, manual task. It involves switching between tabs, copying and pasting data, and carefully composing each message. But what if you could automate the entire workflow, generating perfectly tailored draft emails in Gmail with the click of a button?

In a recent video tutorial, Google Developer Expert Ben Collins demonstrates a powerful and efficient method to achieve exactly this. He showcases how to combine the intelligence of the new AI() formula in Google Sheets with the automation capabilities of Google Apps Script to turn structured data into ready-to-review email drafts.

What makes this solution so compelling is that it’s more than just a clever trick; it’s a glimpse into the future of work in Google Workspace. As Google continues to embed AI and end-to-end automations like Workspace Flows, the ability to effectively guide these systems becomes paramount. Ben’s solution provides a perfect, practical sandbox for honing the crucial new skill of prompt engineering and mastering the art of communicating with Large Language Models.

To see the full demonstration and get started on your own prompting journey, be sure to watch Ben’s complete video.

Source: Turn Sheets Data into Gmail Drafts with the AI Formula and Apps Script 

Unlocking Google’s Full API Universe and Authentication Flows in Google Apps Script

A Google Apps Script project that dynamically generates modern, feature-rich client libraries for any public Google API directly from the Google API Discovery Service.

For many developers, Google Apps Script is the go-to platform for rapidly automating and extending Google Workspace. Its simplicity and deep integration are powerful. But as projects grow in ambition, developers often encounter two significant walls: needing to connect to a Google API that isn’t built-in, or requiring an authentication flow, like a service account, that the standard services don’t support.

What if you could break through those walls? What if you could use a robust client library for almost any Google API—from Firebase to Cloud Billing—in seconds, right from Apps Script itself?

Today, I’m excited to introduce the Google API Client Library Generator for Apps Script, a project and code repository designed to solve these very challenges. It comes with a full suite of pre-built libraries, enabling you to significantly expand what’s possible on the platform.

Why This Matters: Beyond the Built-in Services

While Apps Script’s built-in and advanced services are excellent, they represent just a fraction of Google’s vast API ecosystem. This collection of generated libraries unlocks the rest, offering two critical advantages:

  1. Complete API Coverage: The generator uses the Google APIs Discovery Service to create clients for over 400 APIs. If an API is in the Discovery Service, there is a library for it in the repository.
  2. Flexible Authentication Flows: Generated libraries are not tied to the standard user-permission model. This means you can use service accounts for server-to-server authentication, or implement other OAuth2 flows as needed.

Getting Started

Getting started is as simple as finding the library you need in the build/ directory of the GitHub repository and copying the code into your Apps Script project. For detailed setup instructions and other authentication options, please refer to the main README.md in the repository.

Unlocking Professional Workflows with Service Accounts

The real power of this approach comes from flexible authentication. For Google Workspace developers, this also unlocks the ability to use service accounts with domain-wide delegation to make API calls on behalf of other users in your domain.

To handle the authentication flow, you’ll need an OAuth2 library. The following example uses the OAuth2 for Apps Script library, but other options are also available. Once you have your chosen library set up, you can use the following pattern:

// The email of the user to impersonate (for domain-wide delegation).
const USER_EMAIL = '[email protected]'; 

function getService_() {
  // Credentials from the service account's JSON key file.
  const serviceAccountCreds = {
    "private_key": "-----BEGIN PRIVATE KEY-----\n...",
    "client_email": "[email protected]",
  };

  // Use a unique name for the service, like 'Drive' or 'BigQuery', to avoid
  // token collisions between different services.
  return OAuth2.createService('Drive:' + USER_EMAIL)
      .setTokenUrl('https://oauth2.googleapis.com/token')
      .setPrivateKey(serviceAccountCreds.private_key)
      .setIssuer(serviceAccountCreds.client_email)
      .setSubject(USER_EMAIL)
      .setCache(CacheService.getUserCache())
      .setScope('https://www.googleapis.com/auth/drive');
}

/**
 * Lists files using the configured service account.
 */
function listFilesWithServiceAccount() {
  const service = getService_();
  if (!service.hasAccess()) {
    console.log('Service Account authentication failed: ' + service.getLastError());
    return;
  }
  
  const token = service.getAccessToken();

  // This is where the generated library is used with the custom token.
  const drive = new Drive({
    token: token 
  });

  const files = drive.files.list({
    supportsAllDrives: true,
    pageSize: 10
  });
  
  console.log('Files found via service account:', JSON.stringify(files, null, 2));
}

Under the Bonnet: The Generator

For those who want to tweak the generation logic or integrate it into their own workflows, the generator itself is included in the repository. It’s a self-contained Apps Script project that fetches API metadata and programmatically constructs modern, robust ES6 classes with features like automatic exponential backoff for handling API errors.
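For readers unfamiliar with the pattern, this is a simplified sketch of what exponential backoff means in general: retry a failed request with a doubling delay between attempts. It is an illustration of the concept, not the repository’s actual implementation:

```javascript
// Retry a request with exponentially increasing delays. In Apps Script
// the pause would be Utilities.sleep(delayMs); here we only track the
// doubling delay so the sketch stays self-contained.
function withBackoff(request, maxRetries = 5) {
  let delayMs = 1000;
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return request(); // success: return the response immediately
    } catch (err) {
      if (attempt === maxRetries) throw err; // out of retries
      delayMs *= 2; // wait twice as long before the next attempt
    }
  }
}

// Demo: a request that fails twice (e.g. HTTP 429) then succeeds.
let calls = 0;
const flakyRequest = () => {
  calls += 1;
  if (calls < 3) throw new Error('HTTP 429: rate limited');
  return 'ok';
};

const result = withBackoff(flakyRequest);
console.log(result, calls); // 'ok' after 3 calls
```

Handling transient rate-limit errors this way is what lets the generated clients survive bursts of API calls without manual retry logic in your own code.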

Join the Community and Contribute

This project was heavily inspired by the work of Spencer Easton and aims to build upon that foundation. It’s a community effort.

While libraries for all APIs are generated, not all have been extensively tested. If you use a library and find a bug, or have an idea for an improvement, please open a GitHub Issue. Pull requests are, of course, always welcome. Happy scripting!

Source: GitHub – mhawksey/Google-API-Client-Library-Generator-for-Apps-Script