Have you ever used Google Workspace Studio? If so, you might have seen its “Google Drive Folder” trigger that instantly reacts to new files. It is an excellent feature for no-code automation; however, it is limited in what you can do in subsequent Studio steps.
If you are interested in this capability but would like to do more, I’ve just published an example repo exploring Google Drive events, which are currently in Developer Preview as part of the new Workspace Events API. With this you can subscribe to real-time changes, including file uploads to specific folders, which opens up opportunities for programmatically interacting with Google Drive and other services.
A Practical Application: Enterprise Data Pipelines
For me, one of the most valuable applications of this capability lies within enterprise data pipelines. In many corporate environments, business users lack direct access to the Google Cloud Storage buckets often used in data pipelines, and instead rely entirely on Google Drive for file sharing and collaboration.
By subscribing to folder events, developers can automatically trigger the movement or ingestion of data the moment a user drops a new file into a designated Drive folder. This creates a seamless, automated bridge between user-facing storage and backend processing systems.
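To give a flavour of what a folder subscription looks like, here is a minimal hedged sketch that calls the Workspace Events API REST endpoint from Apps Script. The folder ID, Pub/Sub topic, and Drive event type string are illustrative assumptions; check the Developer Preview documentation and the example repo for the exact event names, resource formats, and OAuth scopes.

```javascript
/**
 * Minimal sketch: create a Workspace Events API subscription for a Drive folder.
 * FOLDER_ID, PUBSUB_TOPIC, and the event type string are placeholders/assumptions —
 * verify them against the Developer Preview documentation.
 */
function createDriveFolderSubscription() {
  const FOLDER_ID = 'YOUR_DRIVE_FOLDER_ID';                          // placeholder
  const PUBSUB_TOPIC = 'projects/YOUR_PROJECT/topics/drive-events';  // placeholder

  const subscription = {
    // Assumed resource name format for a Drive folder target
    targetResource: `//drive.googleapis.com/drive/v3/files/${FOLDER_ID}`,
    // Assumed event type for a newly created file – confirm in the preview docs
    eventTypes: ['google.workspace.drive.file.v3.created'],
    notificationEndpoint: { pubsubTopic: PUBSUB_TOPIC },
    payloadOptions: { includeResource: false },
  };

  const response = UrlFetchApp.fetch('https://workspaceevents.googleapis.com/v1/subscriptions', {
    method: 'post',
    contentType: 'application/json',
    headers: { Authorization: `Bearer ${ScriptApp.getOAuthToken()}` },
    payload: JSON.stringify(subscription),
    muteHttpExceptions: true,
  });
  console.log(response.getContentText());
}
```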
Architectural Highlights from the Demo
To demonstrate this, I’ve published a complete proof of concept in the Google Drive to Google Sheets Notification Add-on Tutorial. This is a basic Google Workspace Add-on which lets users ‘subscribe’ to a Google Drive folder and get real-time updates in a Google Sheet. The example replicates what is already possible with Workspace Studio, but the idea is that once you know how to set up a subscription you can do anything you want with that file. Unusually for me, the example contains zero Apps Script, meaning for chunkier data problems you can easily scale on Google infrastructure. If you look at the codebase, there are hopefully several design patterns that are useful for Workspace developers:
Infrastructure as Code (IaC): Rather than configuring the Google Cloud project services manually via the console, the repository includes Terraform configuration files to automatically provision the Pub/Sub topics, service accounts, and IAM bindings required for secure message delivery. Using an Infrastructure as Code approach makes it easy to ‘apply’ updates to your project.
Built-in Resilience: The Terraform configuration explicitly sets up a dead-letter topic. If your Event Processor fails to handle a file notification, the message routes to this secondary queue, ensuring no data events are lost during temporary outages or bugs.
Decoupled Execution: The example also separates user interaction from backend processing. An Add-on Handler manages user authorisation and API subscription creation, while a dedicated Node Cloud Function acts as a secure Event Processor, picking up messages asynchronously via Pub/Sub to prevent timeout issues.
The 3 Gotchas We Are Solving
The Drive API already has a watch endpoint, but there are three common obstacles that the new Workspace Events API gracefully solves:
The “Blind Ping”: When watching a folder with the legacy files.watch, the notification only tells you that the folder’s contents changed; it does not explicitly tell you which file was added. You are therefore forced to perform a secondary query to find the new file. The new Workspace Events API solves this by handing you the exact ID of the newly added file directly in the payload.
The Public URL Requirement: Standard webhooks require a public HTTPS URL and domain verification. By routing our events through Google Cloud Pub/Sub, this approach allows you to receive notifications privately and securely.
The Empty Payload: Google typically sends legacy notification data hidden within HTTP headers, which can be difficult to parse in certain environments, such as Apps Script. This is less of a concern in the shared example as we are using only Cloud Run Functions.
Summary
Hopefully, the demo is helpful in showcasing what is possible using the Google Workspace Events API and Infrastructure as Code architectures. If there are any hints or tips I have missed, feel free to drop them in the comments below.
Understanding and spotting trends in organic YouTube traffic based on specific queries can often be a manual and time-consuming process. To solve this, the Google Marketing Solutions team has published an open-source repository for a tool called YouTube Topic Insights. Developed by Google Customer Solutions Engineer Francesco Landolina, the project transforms raw video data into actionable summaries by combining the data-gathering capabilities of the YouTube Data API with the video and language understanding of Gemini models.
For Google Apps Script developers interested in building scalable tools, this repository is a fantastic example of using Vertex AI at scale. It orchestrates complex, long-running processes directly from a Google Sheet and pairs them with Vertex AI batch prediction. In this post we will highlight some interesting infrastructure setup and the pattern used to bypass common Apps Script execution limits.
Project architecture and setup
The solution is divided into two main components that make it easier to set up and process the data:
GCP Infrastructure as Code: The necessary cloud environment is defined using Terraform. This makes deployment easier by enabling required APIs like Vertex AI and the YouTube Data API. While the Terraform script attempts to set up the OAuth Consent Screen using the google_iap_brand resource, you will need to configure the OAuth Consent Screen manually within the Google Cloud Console, as the official Terraform documentation notes that the underlying IAP OAuth Admin APIs were permanently shut down on March 19, 2026.
Google Apps Script: The brain of the operation lives within a Google Sheet. It calls the YouTube API to find videos, prepares and submits analysis jobs to Vertex AI batch prediction, fetches the completed results, and writes the human-readable insights back into the spreadsheet interface.
For developers setting this up, the repository includes a Quick Start template. The script automatically handles the initialisation of the spreadsheet structure when first opened, creating the necessary configuration and logging sheets in a safe, non-destructive way.
Overcoming the execution limit with Vertex AI batch prediction
As you can imagine, analysing hundreds of videos using an LLM is going to be a compute-heavy task. If you attempt to process these synchronously in a simple loop, you will quickly hit the Google Apps Script execution limit.
To get around this, the solution uses Vertex AI batch prediction (see Batch inference with Gemini). Instead of waiting for Gemini to process each video, the script initiates an asynchronous batch job and uses a chained-trigger system to poll for completion. Once the batch job completes, the script retrieves the output, parses the JSON response from Gemini, and writes the summarised data back to the Google Sheet.
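To illustrate the chained-trigger idea, here is a minimal hedged sketch of how an Apps Script project can poll a long-running job without hitting the execution limit; the checkBatchJobStatus_() and writeResultsToSheet_() helpers are hypothetical stand-ins for the repository’s own functions.

```javascript
/**
 * Minimal sketch of a chained-trigger polling pattern.
 * checkBatchJobStatus_() and writeResultsToSheet_() are hypothetical helpers:
 * the first would call the Vertex AI batch prediction API and return true when
 * the job has finished, the second would parse and write the output.
 */
function pollBatchJob() {
  // Remove the trigger that invoked this run so triggers don't accumulate.
  ScriptApp.getProjectTriggers()
    .filter(t => t.getHandlerFunction() === 'pollBatchJob')
    .forEach(t => ScriptApp.deleteTrigger(t));

  if (checkBatchJobStatus_()) {   // hypothetical: is the batch job done?
    writeResultsToSheet_();       // hypothetical: fetch, parse, and write results
    return;
  }

  // Not finished yet: schedule the next check in 10 minutes and exit quickly.
  ScriptApp.newTrigger('pollBatchJob')
    .timeBased()
    .after(10 * 60 * 1000)
    .create();
}
```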
Summary
The YouTube Topic Insights solution is an excellent example of how to build complex applications using Vertex AI Batch Prediction. It provides practical solutions for managing asynchronous tasks and interacting with advanced Vertex AI features.
If you are interested in exploring the code further or setting up the tool for your own research, you can find the complete source code and deployment instructions in the YouTube Topic Insights GitHub repository.
Looker Studio has supported custom community connectors for many years, allowing developers to bridge the gap between various APIs and their dashboards. While the capability exists, the technical barrier often leads users toward paid, third-party middleware. In a recent post, Toufik shares how they bypassed these monthly fees by building a free, self-hosted Airtable connector using Google Apps Script and AI coding tools.
TL;DR: Toufik demonstrates how to build a free, open-source Airtable connector for Looker Studio. By using Apps Script as the engine and an AI assistant to handle the implementation, the project was completed in about three hours. The result is a direct connection that avoids middleware and monthly subscriptions.
The Technical Barrier and the Apps Script Solution
As many developers know, building a community connector requires a specific understanding of the Apps Script environment: the UrlFetchApp service for HTTP requests, the DataStudioApp service, and the semantics of field types. For those more comfortable with data analysis than backend services, this often feels like a steep climb.
Toufik highlights that the knowledge required to build these bridges hasn’t changed, but who holds that knowledge during the build has. By using Apps Script, they were able to create a direct connection where Airtable data appears as a native source. This approach ensures that data flows directly from the source to the dashboard through the user’s own account, removing the need for external servers or data collection.
Orchestrating AI with Domain Knowledge
A significant takeaway from this project is the method used to guide the AI assistant. Rather than asking the tool to write code immediately, Toufik created a “skill file.” This structured reference document provided the AI with domain-specific knowledge: the required function signatures for connectors (getAuthType(), getConfig(), getSchema(), and getData()), authentication types, and the specific manifest format rules required in appsscript.json.
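For readers less familiar with those signatures, here is a minimal, hedged skeleton of a Looker Studio community connector in Apps Script. It is a generic illustration rather than Toufik’s Airtable implementation; the config parameter, field IDs, and sample record are placeholders.

```javascript
// Generic community connector skeleton (not Toufik's Airtable code).
const cc = DataStudioApp.createCommunityConnector();

function getAuthType() {
  // An Airtable connector would typically use a key/token; NONE keeps the sketch simple.
  return cc.newAuthTypeResponse().setAuthType(cc.AuthType.NONE).build();
}

function getConfig() {
  const config = cc.getConfig();
  config.newTextInput().setId('baseId').setName('Base ID'); // placeholder parameter
  return config.build();
}

function getSchema(request) {
  const fields = cc.getFields();
  fields.newDimension().setId('name').setType(cc.FieldType.TEXT);
  fields.newMetric().setId('amount').setType(cc.FieldType.NUMBER);
  return { schema: fields.build() };
}

function getData(request) {
  const fields = cc.getFields();
  fields.newDimension().setId('name').setType(cc.FieldType.TEXT);
  fields.newMetric().setId('amount').setType(cc.FieldType.NUMBER);
  const requested = fields.forIds(request.fields.map(f => f.name));

  // A real connector would fetch Airtable records via UrlFetchApp here.
  const sampleRecord = { name: 'Example record', amount: 42 };
  const rows = [{ values: request.fields.map(f => sampleRecord[f.name]) }];

  return { schema: requested.build(), rows: rows };
}
```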
As Toufik notes:
“The main cost was knowledge. And AI changed the price of knowledge. If your data lives behind an API and you can describe what you need, you’re closer to building it than you think.”
This shift in methodology suggests that for many niche connectors, the hurdle is no longer the typing, but the ability to describe the requirements and iterate through testing cycles. The build process involved testing the connector in Looker Studio, describing errors to the AI, and reviewing the proposed fixes. This iterative “test-describe-fix” loop allowed for the correct mapping of over 25 Airtable field types and the implementation of advanced features like pagination and rate-limit handling with exponential backoff.
New Avenues and the Risk of “Cheap” Code
This project serves as a reminder that the cost of building custom tools has dropped significantly. When developers can orchestrate an AI code assistant with a fundamental understanding of a platform like Apps Script, solutions that were previously behind paywalls become accessible. The repository includes not only the source code for authentication and schema detection but also the Claude Skill file used to prime the AI on the Looker Studio domain.
However, this ease of production brings its own set of challenges. As discussed in The New Stack, there is a growing concern that AI could flood the marketplace with low-quality, unmaintained “knockoffs” that lack the depth of human-led projects. The differentiator in Toufik’s approach is the rigor of the “skill file” and the domain expertise used to validate the output. For the Apps Script community, the value likely remains in the quality of orchestration and the commitment to maintaining the bridge once it is built, and the challenge for Google is maintaining the quality of its third party marketplaces.
For those interested in exploring the implementation or deploying their own version, Toufik has provided the full source code and the skill files on GitHub.
The Google Admin Console is a powerful engine, but it often becomes a bottleneck for delegated tasks. IT teams frequently find themselves trapped in “Admin Debt,” repeating manual steps because granting full administrative access to others is a security risk.
At the Google for Education IT Admin Summit in London, I shared a session on how to move from a static interface to a dynamic, automated engine. The goal is to build an “Admin Toolbox” that showcases how some Admin SDK and other Workspace API capabilities can be integrated into secure, self-service applications using AppSheet and Google Apps Script.
For those who couldn’t attend, or for those who want to dig into the code, I’ve made the session guide available below.
“We are building a data bridge that turns raw directory records into a functional database that understands who people are and, more importantly, who they report to.” — Session Guide: The Automated IT Toolbox
Practical Builds in the Toolbox
The session guide covers four distinct areas where these technologies intersect to solve common IT headaches:
Hierarchical Directory Apps: Building a connection to the Admin SDK to create a searchable directory with security filters based on reporting lines.
Automated Shared Drive Provisioning: A workflow where AppSheet handles the request and Apps Script acts as the administrator to create and name drives automatically upon approval.
ChromeOS Management: Using the Reports API to create a live activity feed of login events and issuing remote commands like “Lock Device” (see the sketch after this list).
AI-Powered Damage Reports: Utilising the Gemini API to analyse photos of damaged hardware. Users can snap a photo, and the AI provides a structured analysis of the severity and required parts.
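To give a flavour of the ChromeOS management piece mentioned above, here is a minimal hedged sketch that uses the Admin SDK Reports advanced service in Apps Script to pull recent login events; the session guide’s actual build feeds this data into AppSheet, which isn’t shown here.

```javascript
/**
 * Minimal sketch: list recent login events using the Admin SDK Reports
 * advanced service (AdminReports must be enabled for the script project).
 */
function listRecentLogins() {
  const response = AdminReports.Activities.list('all', 'login', { maxResults: 10 });
  (response.items || []).forEach(activity => {
    console.log(`${activity.id.time} - ${activity.actor.email}`);
  });
}
```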
Gemini as a Development Partner
A key takeaway from the session was that I didn’t write any of this code from scratch. Instead, I used the Gemini App as a pair programmer. While Gemini was excellent for standard data tasks, it reached its limits when handling more obscure or less documented API calls. In these areas, my existing knowledge of the Workspace platform was essential. I had to refine my prompts and provide specific technical context to guide the model toward a reliable solution. It highlights that while AI is a powerful assistant, it still needs a knowledgeable pilot to navigate the complexities of advanced APIs.
The full session guide includes code snippets as well as examples of the advanced ‘watch’ calls and Gemini API structured output.
Connect Google Apps Script to PostgreSQL via JDBC. Covers connection strings, JSONB/UUID workarounds, parameterized queries, transactions, and PostGIS.
Google recently added support for PostgreSQL to the Apps Script JDBC service, providing a way to connect your spreadsheets and automations directly to one of the most popular relational databases. Justin Poehnelt, Senior Developer Relations Engineer at Google, has recently published a guide which provides a detailed look at how to get this connection working and the specific hurdles you might encounter.
It is always great to see contributions that bridge the gap between enterprise data and Workspace productivity. Justin’s walkthrough is a great example of how a few lines of configuration can open up massive storage possibilities beyond the traditional limits of Google Sheets.
Beyond just making the connection, Justin provides a series of examples and techniques, and for those managing high-traffic projects, includes advice on using connection poolers like PgBouncer. If you are ready to connect your Workspace projects to a PostgreSQL instance, you can find the full guide, configuration tips, and complete code samples on Justin’s blog.
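To give a sense of the basic shape, here is a minimal hedged sketch of connecting to PostgreSQL with the Apps Script JDBC service and running a parameterized query. The host, database, table, and credentials are placeholders, and Justin’s guide covers the details (connection string options, JSONB/UUID workarounds, transactions) that this sketch glosses over.

```javascript
/**
 * Minimal sketch: query PostgreSQL from Apps Script via the JDBC service.
 * Host, database, table, and credentials are placeholders.
 */
function queryPostgres() {
  const url = 'jdbc:postgresql://YOUR_HOST:5432/YOUR_DATABASE';
  const conn = Jdbc.getConnection(url, 'YOUR_USER', 'YOUR_PASSWORD');
  try {
    // Parameterized query avoids string concatenation and injection issues.
    const stmt = conn.prepareStatement('SELECT id, name FROM customers WHERE country = ?');
    stmt.setString(1, 'GB');
    const results = stmt.executeQuery();
    while (results.next()) {
      console.log(`${results.getInt(1)}: ${results.getString(2)}`);
    }
    results.close();
    stmt.close();
  } finally {
    conn.close();
  }
}
```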
We know that to get the most out of generative AI tools like Gemini (gemini.google.com), you need to provide them with high-quality context. For AppSheet creators, there is a “hidden gem” within the platform that generates automatic documentation for your app. By navigating to Settings > Information > App documentation, you can access a comprehensive PDF that details your app’s definition; this is an excellent resource for human review and archiving.
However, when working with tools like Gemini to generate code or API calls, a flattened PDF isn’t always the most efficient format for the model to parse. For creators looking for a more machine-friendly alternative, QREW Apps recently suggested a clever method to access the OpenAPI specification of your app directly.
App owners can retrieve this structured data by appending their app’s UUID to the AppSheet API v2 URL: https://www.appsheet.com/api/v2/apps/{app-guid}/openapi.json
This OpenAPI JSON export provides a structured blueprint of your app’s API capabilities. Unlike the PDF documentation, this JSON format is highly digestible for an AI. For creators beginning to experiment with the AppSheet API, uploading this JSON file into gemini.google.com allows the model to understand the exact schema and capabilities of your specific application.
With this context loaded, Gemini can assist in constructing accurate API calls. For example, if you are looking to Call a webhook from an automation, Gemini can generate valid payloads, enable batch updates, or script complex data interactions that would typically require significant manual trial and error.
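For example, here is a hedged sketch of the kind of AppSheet API v2 request Gemini might help you construct once it has your OpenAPI spec; the app ID, table name, access key, and row fields are placeholders to swap for your own.

```javascript
/**
 * Minimal sketch: add a row via the AppSheet API v2 from Apps Script.
 * App ID, table name, access key, and field names are placeholders.
 */
function addAppSheetRow() {
  const appId = 'YOUR_APP_ID';
  const tableName = 'YOUR_TABLE';
  const payload = {
    Action: 'Add',
    Properties: { Locale: 'en-GB' },
    Rows: [{ Name: 'Example row', Status: 'New' }], // placeholder columns
  };

  const response = UrlFetchApp.fetch(
    `https://api.appsheet.com/api/v2/apps/${appId}/tables/${tableName}/Action`,
    {
      method: 'post',
      contentType: 'application/json',
      headers: { ApplicationAccessKey: 'YOUR_ACCESS_KEY' },
      payload: JSON.stringify(payload),
      muteHttpExceptions: true,
    }
  );
  console.log(response.getContentText());
}
```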
For those of you working with Google Apps Script, you can pair this technique with my AppSheetApp library. With the OpenAPI spec providing the schema and the library code handling the API communication, you can prompt Gemini to write a script for your AppSheet automation.
I am keen to hear how you get on with this workflow. If you discover other interesting ways to combine OpenAPI specs with Gemini to accelerate your Google Workspace development, feel free to share them in the comments. As always, happy scripting!
If you’ve built a Google Chat bot, you’ve likely hit this wall: the API sends you a membership event with a User ID… but omits the email field entirely.
For Google Workspace developers, this limitation in the Chat API is a frequent stumbling block. As detailed in the User resource documentation, while you can use an email address as an alias in requests (e.g., referencing users/user@example.com), the API insists on returning only the canonical resource name (e.g., users/123456789). This ID corresponds to the user’s profile in the People API or Admin SDK, but the email itself is stripped from the response, forcing developers to perform a secondary lookup.
/**
* Fetches user details from the People API.
* @private
* @param {string} userResourceName The "users/{user}" string from the Chat API.
* @return {{name: string, displayName: string, email: string|null}} An object with user details.
*/
function getUserDetails_(userResourceName) {
const defaultUserDetails = {
name: userResourceName,
displayName: userResourceName.replace('users/', 'Unknown User '),
email: null,
};
// Fail fast for app users or invalid formats
if (!userResourceName.startsWith('users/') || userResourceName === 'users/app') {
return defaultUserDetails;
}
try {
const peopleApiResourceName = userResourceName.replace(/^users\//, 'people/');
const person = People.People.get(peopleApiResourceName, {
personFields: 'names,emailAddresses',
});
const displayName = person.names?.[0]?.displayName ?? defaultUserDetails.displayName;
const email = person.emailAddresses?.[0]?.value ?? null;
return {
name: userResourceName,
displayName: displayName,
email: email,
};
} catch (e) {
console.warn(`Could not fetch details for ${userResourceName} from People API: ${e.message}`);
return defaultUserDetails;
}
}
However, bridging this gap for a Service Account leads to a security dilemma. The People API often returns empty fields because the Service Account lacks a contact list. You might find yourself reaching for Domain-Wide Delegation to impersonate an admin—effectively using a sledgehammer to crack a nut.
In a recent guide, Justin Poehnelt outlines a more secure strategy that avoids granting blanket domain access. By assigning a custom “Users > Read” Admin Role directly to a Service Account, developers can resolve emails securely without the risks associated with full impersonation.
The Strategy at a Glance
Custom Roles: Create a strictly read-only role in the Admin Console.
Direct Assignment: Assign this role specifically to the Service Account’s email.
No Long-Lived Keys: Use Application Default Credentials (ADC) in production and Service Account Impersonation for local development.
This approach ensures your bot has just enough permission to identify users, keeping your security team happy and your audit logs clean. For more detail on the implementation, including the specific configuration steps, I encourage you to read Justin’s full post linked below.
We have all been there: You build a beautiful, automated dashboard in Google Sheets. It’s dynamic, it’s collaborative, and it’s perfectly integrated with your data sources. Then, the stakeholder request comes in: “Can you email this to me as an Excel file every Monday morning?”
For Google Workspace developers, the “Excel Requirement” is often a stumbling block. While Google Sheets is powerful, the bridge to Excel can be shaky. In this post, I’m going to explore four architectural approaches to solving this problem. If you would like to play along here is the Google Sheet Invoice Template and the Excel Invoice Template.
1. The Low-Code Agent (AppSheet Automation)
Before you write a single line of code, you should consider if the platform can do the heavy lifting for you. Since AppSheet is now a core part of Google Workspace, it offers a native “No-Code” way to generate Excel files that is robust, scheduled, and template-based.
The Strategy
Instead of writing a script to fetch data and build a file, you connect your Google Sheet to an AppSheet app. You then configure an Automation that triggers on a schedule or a data change.
The Workflow:
Trigger: A new row is added to your “Invoices” sheet (or a weekly schedule fires).
Task: AppSheet runs a “Create a new file” task.
Template: It reads a simplified Excel template stored in Drive, replaces tags like <<[Qty]>> with actual data, and generates the file.
Delivery: The file is saved to Drive or emailed directly to the client.
Why This Wins
Zero Maintenance: No scripts to debug, no OAuth tokens to manage, and no breaking changes when APIs update.
Native Templates: It supports complex Excel formatting out of the box.
The Limitation
Syntax: You must use AppSheet’s specific templating syntax (<< >>) inside your Excel file.
Scaling: Generating reports with thousands of line items can hit automation timeout limits.
2. The Native Export (Apps Script)
If you need more granular control or don’t want to spin up an AppSheet app, the most common starting point is attempting to do everything within Apps Script. You create a temporary Google Sheet, fill it with data, and then export it using the Drive API.
The Code
In this scenario, we assume you have a Google Sheet version of your Invoice template. We first write the specific line items to the Sheet, then export it.
function generateInvoiceExport() {
const ss = SpreadsheetApp.getActiveSpreadsheet();
const sheet = ss.getSheetByName('InvoiceTemplate');
// 1. Prepare Invoice Data
const invoiceData = [
{ desc: "Web Development - Phase 1", qty: 40, unit: 100 },
{ desc: "Server Hosting (Annual)", qty: 1, unit: 250 },
{ desc: "Domain Registration", qty: 2, unit: 15 },
{ desc: "Maintenance Retainer", qty: 10, unit: 85 }
];
// 2. Map data to the Sheet structure (Cols B, C, D, E, F)
// B=Desc, E=Qty, F=Unit. (C and D are skipped/empty)
const rows = invoiceData.map(item => [item.desc, "", "", item.qty, item.unit]);
// 3. Write data to the Google Sheet
// Matches Template: Start at Row 19, Column B (2)
if (rows.length > 0) {
// Note: the number of columns (5) must match the mapped data structure
sheet.getRange(19, 2, rows.length, 5).setValues(rows);
}
// 4. Construct the export URL for the populated sheet
const fileName = `Invoice_${new Date().toISOString().slice(0,10)}.xlsx`;
const url = `https://docs.google.com/spreadsheets/d/${ss.getId()}/export?format=xlsx&gid=${sheet.getSheetId()}`;
// 5. Fetch and Save
const params = {
method: "GET",
headers: { "Authorization": `Bearer ${ScriptApp.getOAuthToken()}` },
muteHttpExceptions: true
};
const response = UrlFetchApp.fetch(url, params);
if (response.getResponseCode() === 200) {
const blob = response.getBlob().setName(fileName);
const file = DriveApp.createFile(blob);
console.log(`Success: Invoice saved to Drive (ID: ${file.getId()})`);
} else {
console.log(`Error: ${response.getContentText()}`);
}
}
The Limitations
Speed & Scale: The overhead of creating a temporary sheet and exporting it adds up. Generating 50 invoices in a loop will take a long time.
The “Source” Gap: Your source material must be a Google Sheet, meaning you lose Excel-specific features like Power Query or VBA macros from your template.
3. The Browser Round-Trip (ExcelJS)
If you need to maintain stricter fidelity to an existing Excel template (preserving fonts, colors, and layouts) and are interested in speed, you can use the “Round Trip” technique.
This method uses the user’s browser as a processing engine. You fetch the Invoice.xlsx template from Drive, process it client-side with ExcelJS, and send the result back.
Parallel Fetching (Client): The browser fires concurrent requests to Apps Script to fetch the template (Base64) and invoice data.
Client-Side Processing: ExcelJS loads the template, injects data into specific cells (preserving layout), and recompiles the file.
Upload: The new file is encoded and sent back to Drive.
The Code
This snippet focuses on the core orchestration logic. For the full UI setup and helper functions, see the linked project.
/* --- CLIENT SIDE (Core Logic) --- */
async function startGeneration() {
// 1. Parallel Fetching
// Fire both requests at once to speed up the wait time
const [templateResponse, invoiceData] = await Promise.all([
runGoogleScript('getTemplateBase64', null),
runGoogleScript('getInvoiceData')
]);
// 2. Load into ExcelJS
const workbook = new ExcelJS.Workbook();
const binaryString = atob(templateResponse.base64);
const bytes = new Uint8Array(binaryString.length);
for (let i = 0; i < binaryString.length; i++) {
  bytes[i] = binaryString.charCodeAt(i);
}
await workbook.xlsx.load(bytes.buffer);
// 3. Modify the Sheet
const sheet = workbook.worksheets[0];
let currentRow = 19; // Starting at Row 19 based on template
invoiceData.forEach((item, index) => {
// Ensure we do not write past the visual template limit (e.g. Row 22)
if (currentRow + index > 22) return;
const row = sheet.getRow(currentRow + index);
// Inject data into specific cells (B, E, F)
row.getCell(2).value = item.desc;
row.getCell(5).value = item.qty;
row.getCell(6).value = item.unit;
// ExcelJS preserves the existing styles of these cells!
row.commit();
});
// 4. Export & Save
const outBuffer = await workbook.xlsx.writeBuffer();
// Convert buffer to base64 (simplified for brevity)
let binary = '';
const outBytes = new Uint8Array(outBuffer);
for (let i = 0; i < outBytes.length; i++) {
binary += String.fromCharCode(outBytes[i]);
}
const outBase64 = btoa(binary);
// Upload back to Drive
await runGoogleScript('saveProcessedFile', outBase64, `Invoice_${Date.now()}.xlsx`);
}
/* --- SERVER SIDE (Apps Script) --- */
function getTemplateBase64(templateId) {
const file = DriveApp.getFileById(templateId);
return {
base64: Utilities.base64Encode(file.getBlob().getBytes())
};
}
function saveProcessedFile(base64Content, fileName) {
const blob = Utilities.newBlob(
Utilities.base64Decode(base64Content),
MimeType.MICROSOFT_EXCEL,
fileName
);
return { url: DriveApp.createFile(blob).getUrl() };
}
The Trade-off
Payload Limits: google.script.run calls can become unstable when the payload exceeds 50MB.
Base64 Inflation: Remember that Base64 encoding increases file size by ~33%.
4. The Enterprise Engine (Cloud Functions + Python)
For robust, high-fidelity reporting, we need to leave the Google Sheets conversion pipeline behind. By using Google Cloud Functions with Python, we can manipulate native Excel files directly using libraries like openpyxl.
The “User Identity” Pattern
Instead of managing complex Service Account permissions, this solution uses the User’s Identity to perform Drive operations.
Apps Script: Generates an OIDC ID Token (for identity) using getIdentityToken() and an OAuth Access Token (for Drive permissions) with getOAuthToken().
Cloud Function: Verifies the ID Token and uses the Access Token to initialise the Drive API client as the user.
1. Cloud Function (cloud-fn.py)
Open cloud-fn.py and update APPS_SCRIPT_CLIENT_ID with your Apps Script project’s OAuth Client ID. Note: You can find this ID by running the logClientId() helper function in Code.js.
2. Google Apps Script (Code.js)
Update Manifest (appsscript.json): Add the https://www.googleapis.com/auth/userinfo.email scope.
Configuration: Update YOUR_CLOUD_FUNCTION_URL in callInvoiceEngine() with your deployed function URL.
The Code
Google Apps Script (Code.js)
function callInvoiceEngine() {
const url = "YOUR_CLOUD_FUNCTION_URL";
// 1. Get Tokens
const accessToken = ScriptApp.getOAuthToken();
const idToken = ScriptApp.getIdentityToken(); // Requires 'openid' scope in manifest
// --- LOGGING FOR LOCAL TESTING ---
console.log("--- CREDENTIALS FOR LOCAL TESTING ---");
console.log("ID Token:", idToken);
console.log("Access Token:", accessToken);
console.log("-------------------------------------");
const payload = {
templateId: "ID_OF_TEMPLATE_FILE",
folderId: "ID_OF_DESTINATION_FOLDER",
// Matches the same dataset used in our other examples
data: [
{ desc: "Web Development - Phase 1", qty: 40, unit: 100 },
{ desc: "Server Hosting (Annual)", qty: 1, unit: 250},
{ desc: "Domain Registration", qty: 2, unit: 15},
{ desc: "Maintenance Retainer", qty: 10, unit: 85}
],
// The Magic Key: Passing the active user's Access Token for Drive operations
userToken: accessToken
};
const options = {
method: "post",
contentType: "application/json",
headers: {
"Authorization": `Bearer ${idToken}`
},
payload: JSON.stringify(payload)
};
UrlFetchApp.fetch(url, options);
}
function logClientId() {
// Helper to find the Project's OAuth Client ID
var token = ScriptApp.getOAuthToken();
var url = 'https://www.googleapis.com/oauth2/v1/tokeninfo?access_token=' + token;
var response = UrlFetchApp.fetch(url);
var data = JSON.parse(response.getContentText());
console.log('Client ID:', data.issued_to);
}
Pro Tip: Batching Requests
If you need to generate 50 invoices at once, don’t loop through them one by one. Use UrlFetchApp.fetchAll(requests) to fire off parallel requests to your Cloud Function. Since the heavy lifting happens in the cloud, Apps Script just acts as the orchestrator.
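As a rough sketch of that batching pattern (the invoices array, template/folder IDs, and Cloud Function URL are placeholders), the requests can be assembled and dispatched in parallel like this:

```javascript
/**
 * Sketch: fire multiple invoice generation requests in parallel.
 * `invoices`, the template/folder IDs, and YOUR_CLOUD_FUNCTION_URL are placeholders.
 */
function generateInvoicesInParallel(invoices) {
  const url = 'YOUR_CLOUD_FUNCTION_URL';
  const idToken = ScriptApp.getIdentityToken();
  const accessToken = ScriptApp.getOAuthToken();

  const requests = invoices.map(invoice => ({
    url: url,
    method: 'post',
    contentType: 'application/json',
    headers: { Authorization: `Bearer ${idToken}` },
    payload: JSON.stringify({
      templateId: 'ID_OF_TEMPLATE_FILE',
      folderId: 'ID_OF_DESTINATION_FOLDER',
      data: invoice.lineItems,
      userToken: accessToken,
    }),
    muteHttpExceptions: true,
  }));

  // fetchAll dispatches the requests concurrently and returns responses in order.
  const responses = UrlFetchApp.fetchAll(requests);
  responses.forEach((res, i) => console.log(`Invoice ${i}: HTTP ${res.getResponseCode()}`));
}
```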
Local Test Script (test_request.py)
Before deploying, you can test the entire flow locally to ensure your tokens and permissions are working correctly.
Start the Function: functions-framework --target=generate_excel_report --source=cloud-fn.py --port=8080
Run the Tester: python3 test_request.py (paste tokens from Apps Script logs when prompted).
import requests
import json
def run_test():
print("--- Local Cloud Function Tester ---")
print("Ensure you are running: functions-framework --target=generate_excel_report --source=cloud-fn.py --port=8080\n")
# 1. Gather Tokens & IDs
print("Paste your Apps Script Tokens (use Logger.log in GAS):")
id_token = input("1. ID Token (ScriptApp.getIdentityToken()): ").strip()
if not id_token: return
access_token = input("2. Access Token (ScriptApp.getOAuthToken()): ").strip()
if not access_token: return
template_id = input("3. Google Sheet/Excel Template ID: ").strip()
folder_id = input("4. Destination Folder ID: ").strip()
# 2. Prepare Payload
payload = {
"templateId": template_id,
"folderId": folder_id,
"data": [
{ "desc": "Local Test Item 1", "qty": 1, "unit": 10 },
{ "desc": "Local Test Item 2", "qty": 5, "unit": 20 }
],
"userToken": access_token
}
headers = {
"Authorization": f"Bearer {id_token}",
"Content-Type": "application/json"
}
# 3. Send Request
url = "http://localhost:8080"
print(f"\nSending POST request to {url}...")
try:
response = requests.post(url, json=payload, headers=headers)
print(f"\nStatus Code: {response.status_code}")
print(f"Response Body: {response.text}")
except requests.exceptions.ConnectionError:
print(f"\n[!] Could not connect to {url}.")
print("Is the functions-framework running?")
if __name__ == "__main__":
run_test()
Why This Wins
True Excel Support: You are editing the actual binary file, not a conversion.
Separation of Concerns: The heavy lifting is done in the cloud, not in the browser or the Apps Script runtime.
Summary: Which Tool for Which Job?
| Feature    | AppSheet           | Apps Script (Native) | Client-Side (ExcelJS)      | Cloud Functions (Python) |
|------------|--------------------|----------------------|----------------------------|--------------------------|
| Complexity | Very Low           | Low                  | High                       | High                     |
| Cost       | Free               | Free                 | Free                       | GCP Usage (Low)          |
| Best For   | Standard Workflows | Simple Reports       | Interactive Template Fills | Enterprise Reports       |
Final Thoughts
There is no “one size fits all” here. The “Excel Requirement” is a spectrum:
For quick internal tools: Stick to Method 1 (AppSheet) and Method 2 (Native). They’re fast to build and free.
For polished client reports: Method 3 (ExcelJS) is your best friend. It keeps the heavy lifting in the browser and preserves your branding.
For enterprise scale: If you are generating hundreds of invoices a week, Method 4 (Cloud Functions) is the robust choice. It decouples the generation logic from the spreadsheet UI.
Choose the architecture that fits your volume and your “pixel-perfect” tolerance. I’m also keen to hear if you have another ‘Excel Generator’ approach. Happy scripting!
“Clicking through the Google [Workspace] Admin Console to verify hundreds of settings is manual labour. How to create an agent with Gemini 3 (Flash) that will check the settings continuously?”
In a recent post, Ivan Kutil explores how to move beyond manual checks for the Google Workspace Admin console by building a specialised “Auditing Agent” using the Agent Development Kit (ADK) and the Policy API.
The Architecture: Policy API meets ADK
At the heart of Ivan’s solution is a very repeatable pattern, bridging the gap between Google Workspace APIs, in this case the Policy API, and a natural language interface. By using the ADK, developers can create tools that allow an LLM to “handshake” with Workspace data without writing a custom function for every single API endpoint:
ADK simplifies interacting with external REST APIs by automatically generating callable tools directly from an OpenAPI Specification (v3.x). This eliminates the need to manually define individual function tools for each API endpoint [ref].
In Ivan’s example, he uses AI Studio to generate the OpenAPI 3 YAML file required by the OpenAPIToolset in ADK. Google currently defines many of its APIs through the Google Discovery service in JSON syntax, and Ivan highlights the need for transformation. Gemini is very good at this, but there are a number of community tools and repositories to transform the Discovery Document into an OpenAPI file, such as the StackQL Provider Google project and its associated YAML repository (the Policy API is found in the Cloud Identity service).
Note: While the Policy API provides a powerful programmatic window into your Workspace domain, it is worth noting that it does not yet provide 100% coverage of all settings available in the Admin Console. Developers should verify the specific policy types supported when designing their auditing logic.
Agentic Auditing in Action
The resulting agent allows an administrator to simply ask questions about their Workspace environment; the agent then fetches live data, helps identify risks (such as missing MFA or risky API access), and provides a structured summary with actionable steps.
This project serves as a powerful blueprint for developers looking to build “agentic” workflows, grounding them with data from Workspace APIs and combining that with intelligent, conversational automation.
Google Workspace Flows is currently available as part of the Gemini Alpha program for Google Workspace customers. If you don’t have access to Workspace Flows you will need to request access via your Google Workspace Administrator.
As developers, we often find ourselves building complex automations to glue different services together. But with the introduction of Google Workspace Flows, Google is handing some of that power directly to business users, allowing them to string together tasks—like summarising emails or posting to Chat—without writing a single line of code.
But where does that leave us? Right in the driver’s seat.
While Flows provides a “no-code” interface for users, it offers a robust “code” backend for us. Developers can extend the platform by building custom steps in Google Apps Script. These steps act as reusable building blocks that users can drop into their flows to perform specific, business-critical tasks that aren’t available out of the box.
If you have an existing Google Workspace Add-on, you can even extend its functionality to include these Flows steps. Since Google Workspace Flows is currently part of the Gemini Alpha program, custom steps are intended for internal use and testing within your own Google Workspace organization rather than public distribution. You can deploy them as test deployments or private add-ons for your domain, but a public listing is not yet supported.
Here is a high-level overview of how it works and where to find the documentation to get started.
The Anatomy of a Step
Building a custom step in Apps Script isn’t too dissimilar from building an Editor Add-on, but there are specific architectural differences to be aware of. A step essentially consists of three parts:
The Definition (Manifest): You define the step’s identity, required inputs, and expected outputs in the appsscript.json file.
The Configuration (UI): You build a card interface (using CardService) that allows the user to configure the step when they add it to their flow.
The Execution (Logic): You write the actual function that processes the inputs and returns the outputs.
Key Capabilities
The documentation provided by Google is comprehensive. Here are the key areas you should explore to understand what is possible:
1. Configuration Cards & Variables
Unlike a static settings page, configuration cards in Flows are dynamic. You can build cards that accept input variables from previous steps. For example, if a user has a “New Email” trigger, your step can ingest the email body as a variable.
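As a purely illustrative sketch, a configuration card is built with CardService much like any other add-on card; how the card is registered against the step (and how variables from previous steps are surfaced) is covered in the quickstart and step documentation rather than shown here, and the field name below is a placeholder.

```javascript
/**
 * Illustrative sketch: a simple configuration card built with CardService.
 * The field name is a placeholder; wiring the card to a Flows step is
 * defined per the custom step documentation and not shown here.
 */
function buildConfigurationCard() {
  const section = CardService.newCardSection().addWidget(
    CardService.newTextInput()
      .setFieldName('summaryPrompt') // placeholder input field
      .setTitle('Summary prompt')
      .setHint('Can reference variables from previous steps')
  );

  return CardService.newCardBuilder()
    .setHeader(CardService.newCardHeader().setTitle('Configure step'))
    .addSection(section)
    .build();
}
```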
2. Passing Data
Your step doesn’t just run in isolation; it talks to the rest of the flow. By defining output variables, your script can return data (like a calculated value or a file ID) that subsequent steps can use.
3. Robust Validation
To ensure users don’t break your script with bad data, Flows supports two types of validation. You can use Client-side validation (using Common Expression Language, or CEL) for instant feedback on the UI, or Server-side validation for complex checks against external databases.
4. Handling Complexity
For more advanced use cases, simple strings and integers might not be enough. The platform supports Custom Resources for grouping complex data structures (like a CRM lead object) and Dynamic Variables for inputs that change based on context (like selecting a specific question from a Google Form).
Getting Started
If you are part of the Gemini Alpha program and want to get your hands dirty, the best place to start is the Quickstart guide. It walks you through building a simple calculator step that takes two numbers and an operator to output a result.