AppsScriptPulse

From Prompt to Pull Request: Automating Development with Jules and Apps Script

Late last year, Google introduced Jules, an experimental coding agent designed to handle the heavy lifting of software development asynchronously. While you may have explored Jules through its web interface, the recent release of a dedicated Apps Script SDK by David East opens up interesting possibilities for the Google Workspace developer community.

What is Jules?

For those who missed the initial announcement, Jules is an autonomous agent that integrates with GitHub to fix bugs, refactor code, and build new features. Unlike a standard code completion tool, Jules works in the background within a secure virtual machine. As Google describes it:

“Jules is an asynchronous, agentic coding assistant that integrates directly with your existing repositories”

As of the latest updates, Jules can now use Gemini 3.1 Pro. This upgrade provides a smarter baseline for complex problem-solving, allowing the agent to create multi-step plans with clearer reasoning and stronger intent alignment.

Google also released an alpha API that allows you to manage your Jules agent programmatically. This means developers can move beyond manual prompting. One potential use case is building a triage system that listens for bug reports in a support channel and automatically invokes Jules to analyse the codebase and prepare a fix.

Integrating with Google Apps Script

The new jules-apps-script-sdk simplifies the process of making these API calls from within the Apps Script environment. The library handles the authentication and request structures required to interact with the Jules service, allowing you to trigger agentic tasks directly from a spreadsheet, a form submission, or a time-based trigger.

For example, you could set up a workflow where a specific comment in a Google Doc triggers an Apps Script function to send a prompt to Jules, requesting a unit test for a new function in your repository. You can find out more by following the source link on GitHub.
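To make the idea concrete, here is a minimal sketch of what triggering a Jules task from Apps Script might look like. This does not use the jules-apps-script-sdk itself; the endpoint, payload shape, and source-name format are assumptions based on the alpha Jules API, so treat every identifier here as hypothetical and check the SDK's README for the real calls.

```javascript
// Assumed alpha API base URL — verify against the current Jules API docs.
var JULES_API_BASE = 'https://jules.googleapis.com/v1alpha';

// Pure helper: build the request body for a new Jules session.
// The 'sources/...' format is an assumption for illustration.
function buildJulesSessionPayload(prompt, source, startingBranch) {
  return {
    prompt: prompt,
    sourceContext: {
      source: source, // e.g. 'sources/github/owner/repo' (assumed format)
      githubRepoContext: { startingBranch: startingBranch }
    }
  };
}

// Apps Script entry point: send the prompt to Jules via UrlFetchApp.
function createJulesSession(prompt, source, branch, apiKey) {
  var response = UrlFetchApp.fetch(JULES_API_BASE + '/sessions', {
    method: 'post',
    contentType: 'application/json',
    headers: { 'X-Goog-Api-Key': apiKey },
    payload: JSON.stringify(buildJulesSessionPayload(prompt, source, branch))
  });
  return JSON.parse(response.getContentText());
}
```

A function like `createJulesSession` could then be wired to an `onEdit` trigger or a Doc comment handler, which is exactly the kind of orchestration the SDK is designed to simplify.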

Source: JulesApp SDK for Google Apps Script

Orchestrating AI to Build a Free Looker Studio Connector for Airtable

Looker Studio has supported custom community connectors for many years, allowing developers to bridge the gap between various APIs and their dashboards. While the capability exists, the technical barrier often leads users toward paid, third-party middleware. In a recent post, Toufik shares how they bypassed these monthly fees by building a free, self-hosted Airtable connector using Google Apps Script and AI coding tools.

TL;DR: Toufik demonstrates how to build a free, open-source Airtable connector for Looker Studio. By using Apps Script as the engine and an AI assistant to handle the implementation, the project was completed in about three hours. The result is a direct connection that avoids middleware and monthly subscriptions.

The Technical Barrier and the Apps Script Solution

As many developers know, building a community connector requires a specific understanding of the Apps Script environment: the UrlFetchApp service for HTTP requests, the DataStudioApp service, and the semantics of field types. For those more comfortable with data analysis than backend services, this often feels like a steep climb.

Toufik highlights that the knowledge required to build these bridges hasn’t changed, but who holds that knowledge during the build has. By using Apps Script, they were able to create a direct connection where Airtable data appears as a native source. This approach ensures that data flows directly from the source to the dashboard through the user’s own account, removing the need for external servers or data collection.

Orchestrating AI with Domain Knowledge

A significant takeaway from this project is the method used to guide the AI assistant. Rather than asking the tool to write code immediately, Toufik created a “skill file.” This structured reference document provided the AI with domain-specific knowledge: the required function signatures for connectors (getAuthType(), getConfig(), getSchema(), and getData()), authentication types, and the specific manifest format rules required in appsscript.json.

As Toufik notes:

“The main cost was knowledge. And AI changed the price of knowledge. If your data lives behind an API and you can describe what you need, you’re closer to building it than you think.”

This shift in methodology suggests that for many niche connectors, the hurdle is no longer the typing, but the ability to describe the requirements and iterate through testing cycles. The build process involved testing the connector in Looker Studio, describing errors to the AI, and reviewing the proposed fixes. This iterative “test-describe-fix” loop allowed for the correct mapping of over 25 Airtable field types and the implementation of advanced features like pagination and rate-limit handling with exponential backoff.
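As an illustration of the kind of logic that loop produces, here is a small sketch of two pieces the post describes: mapping Airtable field types to connector types, and computing an exponential backoff delay for rate-limited requests. This is not Toufik's actual code — the names and the (deliberately partial) type map are illustrative only.

```javascript
// Illustrative sketch, not the project's actual source. A partial map of
// Airtable field types to Looker Studio connector data types — the real
// connector covers over 25 field types.
var FIELD_TYPE_MAP = {
  singleLineText: 'TEXT',
  multilineText: 'TEXT',
  email: 'TEXT',
  number: 'NUMBER',
  currency: 'NUMBER',
  checkbox: 'BOOLEAN',
  date: 'YEAR_MONTH_DAY'
};

function mapAirtableType(airtableType) {
  // Fall back to TEXT for any type not in the map.
  return FIELD_TYPE_MAP[airtableType] || 'TEXT';
}

// Exponential backoff: delay in ms before retry attempt n, doubling each
// time and capped at 30 seconds — the pattern used when Airtable returns
// a 429 rate-limit response.
function backoffDelayMs(attempt) {
  return Math.min(1000 * Math.pow(2, attempt), 30000);
}
```

In a real connector, `backoffDelayMs` would feed `Utilities.sleep()` between retried `UrlFetchApp.fetch()` calls.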

New Avenues and the Risk of “Cheap” Code

This project serves as a reminder that the cost of building custom tools has dropped significantly. When developers can orchestrate an AI code assistant with a fundamental understanding of a platform like Apps Script, solutions that were previously behind paywalls become accessible. The repository includes not only the source code for authentication and schema detection but also the Claude Skill file used to prime the AI on the Looker Studio domain.

However, this ease of production brings its own set of challenges. As discussed in The New Stack, there is a growing concern that AI could flood the marketplace with low-quality, unmaintained “knockoffs” that lack the depth of human-led projects. The differentiator in Toufik’s approach is the rigor of the “skill file” and the domain expertise used to validate the output. For the Apps Script community, the value likely remains in the quality of orchestration and the commitment to maintaining the bridge once it is built, and the challenge for Google is maintaining the quality of its third-party marketplaces.

For those interested in exploring the implementation or deploying their own version, Toufik has provided the full source code and the skill files on GitHub.

Source: How I Built a Free Airtable Connector for Looker Studio (AI-Assisted) | GitHub Repository

Cross-cloud automation: Deploying Google Apps Script logic to Azure

Bruce Mcpherson has been working on a series of articles exploring how to liberate native Apps Script logic from its browser-based sandbox. If you’ve been following along, you’ll know he’s previously covered AWS Lambda, Google Cloud Run, and Kubernetes. Bruce has now added Azure Container Apps (ACA) to the list of supported environments.

TL;DR: By using the @mcpher/gas-fakes library and Workload Identity Federation, you can run native Apps Script logic on Azure Container Apps Jobs. This approach offers a massive 24-hour execution window, making it suitable for high-intensity tasks like large data migrations or security scans that would otherwise time out in the standard Google environment.

Moving logic to Azure introduces some complexity, particularly around authentication, which Bruce covers in his post. To bridge this gap, the project includes a custom Identity Bridge to handle the handshake between Azure and Google.

You can read the full technical breakdown and access the comparison matrix on Bruce’s blog.

Source: Yes – you can run native apps script code on Azure ACA as well! – Desktop liberation

Delegated Power: Showcasing a Google Workspace Admin Toolbox Concept in AppSheet and Apps Script

The Google Admin Console is a powerful engine, but it often becomes a bottleneck for delegated tasks. IT teams frequently find themselves trapped in “Admin Debt,” repeating manual steps because granting full administrative access to others is a security risk.

At the Google for Education IT Admin Summit in London, I shared a session on how to move from a static interface to a dynamic, automated engine. The goal is to build an “Admin Toolbox” that showcases how some Admin SDK and other Workspace API capabilities can be integrated into secure, self-service applications using AppSheet and Google Apps Script.

For those who couldn’t attend, or for those who want to dig into the code, I’ve made the session guide available below.

“We are building a data bridge that turns raw directory records into a functional database that understands who people are and, more importantly, who they report to.” — Session Guide: The Automated IT Toolbox

Practical Builds in the Toolbox

The session guide covers four distinct areas where these technologies intersect to solve common IT headaches:

  1. Hierarchical Directory Apps: Building a connection to the Admin SDK to create a searchable directory with security filters based on reporting lines.
  2. Automated Shared Drive Provisioning: A workflow where AppSheet handles the request and Apps Script acts as the administrator to create and name drives automatically upon approval.
  3. ChromeOS Management: Using the Reports API to create a live activity feed of login events and issuing remote commands like “Lock Device.”
  4. AI-Powered Damage Reports: Utilising the Gemini API to analyse photos of damaged hardware. Users can snap a photo, and the AI provides a structured analysis of the severity and required parts.
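For the first of these, the core of the “data bridge” is turning flat directory records into a structure that understands reporting lines. Here is a minimal sketch of that idea; the record shape (`primaryEmail`, `managerEmail`) is a simplified assumption for illustration, not the raw Admin SDK Directory response, and the function name is mine rather than the session guide's.

```javascript
// Illustrative sketch: build a manager → direct-reports map from flat
// directory records, so an AppSheet security filter can answer
// "who reports to this person?". Record shape is a simplifying assumption.
function buildReportingMap(users) {
  var reports = {};
  users.forEach(function (user) {
    if (!user.managerEmail) return; // top of the tree, no manager
    if (!reports[user.managerEmail]) {
      reports[user.managerEmail] = [];
    }
    reports[user.managerEmail].push(user.primaryEmail);
  });
  return reports;
}
```

In the full build, the input would come from the Admin SDK Directory users list and the output would be written to a sheet or table that AppSheet reads.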

Gemini as a Development Partner

A key takeaway from the session was that I didn’t write any of this code from scratch. Instead, I used the Gemini App as a pair programmer. While Gemini was excellent for standard data tasks, it reached its limits when handling more obscure or less documented API calls. In these areas, my existing knowledge of the Workspace platform was essential. I had to refine my prompts and provide specific technical context to guide the model toward a reliable solution. It highlights that while AI is a powerful assistant, it still needs a knowledgeable pilot to navigate the complexities of advanced APIs.

The full session guide includes code snippets as well as examples of the advanced ‘watch’ techniques and Gemini API structured output.

Source: Google Doc | GitHub Repo

How to Connect PostgreSQL to Google Apps Script (JDBC Guide)

Connect Google Apps Script to PostgreSQL via JDBC. Covers connection strings, JSONB/UUID workarounds, parameterized queries, transactions, and PostGIS.

Google recently added support for PostgreSQL to the Apps Script JDBC service, providing a way to connect your spreadsheets and automations directly to one of the most popular relational databases. Justin Poehnelt, Senior Developer Relations Engineer at Google, has recently published a guide which provides a detailed look at how to get this connection working and the specific hurdles you might encounter.

It is always great to see contributions that bridge the gap between enterprise data and Workspace productivity. Justin’s walkthrough is a great example of how a few lines of configuration can open up massive storage possibilities beyond the traditional limits of Google Sheets.

Beyond just making the connection, Justin provides a series of examples and techniques, and for those managing high-traffic projects, includes advice on using connection poolers like PgBouncer. If you are ready to connect your Workspace projects to a PostgreSQL instance, you can find the full guide, configuration tips, and complete code samples on Justin’s blog.
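To give a flavour of the pattern, here is a minimal sketch of querying PostgreSQL from Apps Script with the Jdbc service, using a parameterized statement. The host, database, credentials, and table are placeholders — see Justin's guide for the real connection string details and the JSONB/UUID workarounds.

```javascript
// Pure helper: assemble a PostgreSQL JDBC connection string.
function buildPostgresUrl(host, port, database) {
  return 'jdbc:postgresql://' + host + ':' + port + '/' + database;
}

// Sketch only — host, credentials, and schema are placeholder values.
function queryUsers(minAge) {
  var conn = Jdbc.getConnection(
    buildPostgresUrl('db.example.com', 5432, 'appdb'),
    'dbuser',
    'dbpassword'
  );
  // Parameterized query: the value is bound, never concatenated into SQL.
  var stmt = conn.prepareStatement('SELECT name, age FROM users WHERE age > ?');
  stmt.setInt(1, minAge);
  var rs = stmt.executeQuery();
  var rows = [];
  while (rs.next()) {
    rows.push({ name: rs.getString(1), age: rs.getInt(2) });
  }
  rs.close();
  stmt.close();
  conn.close();
  return rows;
}
```

Binding values this way is also what keeps spreadsheet-driven queries safe from SQL injection when cell contents end up in the WHERE clause.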

Source: How to Connect PostgreSQL to Google Apps Script (JDBC Guide)

Supercharging AppSheet API development with Gemini and OpenAPI specs

We know that to get the most out of generative AI tools like Gemini (gemini.google.com), you need to provide them with high-quality context. For AppSheet creators, there is a “hidden gem” within the platform that generates automatic documentation for your app. By navigating to Settings > Information > App documentation, you can access a comprehensive PDF that details your app’s definition; this is an excellent resource for human review and archiving.

However, when working with tools like Gemini to generate code or API calls, a flattened PDF isn’t always the most efficient format for the model to parse. For creators looking for a more machine-friendly alternative, QREW Apps recently suggested a clever method to access the OpenAPI specification of your app directly.

App owners can retrieve this structured data by appending their app’s UUID to the AppSheet API v2 URL: https://www.appsheet.com/api/v2/apps/{app-guid}/openapi.json

This OpenAPI JSON export provides a structured blueprint of your app’s API capabilities. Unlike the PDF documentation, this JSON format is highly digestible for an AI. For creators beginning to experiment with the AppSheet API, uploading this JSON file into gemini.google.com allows the model to understand the exact schema and capabilities of your specific application.
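If you want to pull the spec programmatically rather than pasting the URL into a browser, a small Apps Script helper will do it. The URL pattern comes from QREW Apps' tip; note that you need to be an app owner for the request to succeed, and any additional authentication requirements are an assumption here.

```javascript
// Pure helper: build the OpenAPI spec URL for a given AppSheet app GUID.
function buildOpenApiUrl(appGuid) {
  return 'https://www.appsheet.com/api/v2/apps/' + appGuid + '/openapi.json';
}

// Fetch and parse the spec — suitable for saving to Drive or feeding
// straight into a Gemini prompt as context.
function fetchOpenApiSpec(appGuid) {
  var response = UrlFetchApp.fetch(buildOpenApiUrl(appGuid));
  return JSON.parse(response.getContentText());
}
```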

With this context loaded, Gemini can assist in constructing accurate API calls. For example, if you are looking to use the ‘Call a webhook’ task from an automation, Gemini can generate valid payloads, enable batch updates, or script complex data interactions that would typically require significant manual trial and error.

For those of you working with Google Apps Script, you can pair this technique with my AppSheetApp library. With the OpenAPI spec providing the schema and the library handling the API communication, you can prompt Gemini to write a script for your AppSheet automation.

I am keen to hear how you get on with this workflow. If you discover other interesting ways to combine OpenAPI specs with Gemini to accelerate your Google Workspace development, feel free to share them in the comments. As always, happy scripting!

Major update to the website framework for Apps Script Web Apps

Editor: We’ve previously featured Zig Mandel’s comprehensive framework that integrates Google Apps Script web apps into a standard website. Zig has been busy with an update which we are reposting from Reddit with permission:

I’ve shipped a major update to my Apps Script Website Integration Framework. The framework now allows running an HTMLService frontend entirely outside the GAS iframe, directly on your website.

Why use this?
HTMLService is convenient, but the iframe environment blocks a lot of modern web-dev capabilities: slow load, limited browser APIs, no TypeScript, no React, no Vite/live-reload, no custom domains, etc.

This update removes all of those constraints. You can develop, debug, and deploy a GAS webapp like a normal website – using any tooling, libraries, or build process you want.

How this compares to the previous method
The original method already bypassed several HTMLService limitations. The new approach goes further by running completely outside the iframe (faster, full capabilities), with one trade-off: it doesn’t support HTML templates. If you rely on templates, you can start with the original method and later migrate to this new method once templates are no longer needed.

The monorepo includes live working examples. Star if you like it!

Repo here: https://github.com/zmandel/demosite_appscript

Beyond external libraries: Google Workspace document and data processing with Gemini Code Execution

Why use a dedicated app when you can simply ask Gemini to write and run the Python code for you? A look at the power of Google Apps Script and GenAI

For many Google Workspace developers, handling complex file formats or performing advanced data analysis has traditionally meant navigating the limitations of Apps Script’s built-in services. We have previously featured solutions for these challenges on Pulse, such as merging PDFs and converting pages to images or using the PDFApp library to “cook” documents. While effective, these methods often rely on loading external JavaScript libraries like pdf-lib, which can be complex to manage and subject to the script’s memory and execution limits.

While users of Gemini for Google Workspace may already be familiar with its ability to summarise documents or analyse data in the side panel, those features are actually powered by the same “Code Execution” technology under the hood. The real opportunity for developers lies in using this same engine within Apps Script to build custom, programmatic workflows that go far beyond standard chat interactions.

A recent project by Stéphane Giron highlights this path. By leveraging the Code Execution capability of the Gemini API, it is possible to offload intricate document and data manipulation to a secure Python sandbox, returning the results directly to Google Drive.

Moving beyond static logic

The traditional approach to automation involves writing specific code for every anticipated action. The shift here is that Gemini Code Execution does not rely on a pre-defined set of functions. Instead, when provided with a file and a natural language instruction, the model generates the necessary Python logic on the fly. Because the execution environment includes robust libraries for binary file handling and data analysis, the model can perform varied tasks without the developer needing to hardcode each individual routine. Notably, the model can learn iteratively, which means that if the generated code fails, it can refine and retry the script up to five times until it reaches a successful output.

While basic data analysis is now a standard part of Gemini for Workspace, having direct access to the library list in the Gemini sandbox opens up additional specialised, developer-focused avenues:

  • Dynamic Document Generation: Using python-docx and python-pptx, you can programmatically generate high-fidelity Office documents or presentations based on data from Google Workspace, bridging the gap between ecosystems without manual copy-pasting. [Here is a container bound script based on Stéphane code for Google Docs that generates a summary PowerPoint file]
  • Programmatic Image Inspection: Using Gemini 3 Flash, you can build tools that inspect images at a granular level. For example, a script could process a batch of site inspection photos, using Python code to “zoom and inspect” specific equipment labels or gauges, and then log those values directly into a database.

The mechanics and constraints

The bridge between Google Drive and this dynamic execution environment follows a straightforward pattern:

  1. File Preparation: The script retrieves the target file from Drive and converts the blob into a format compatible with the Gemini API.
  2. Instruction & Execution: Along with the file, a prompt is sent describing the desired outcome.
  3. The Sandbox: Gemini writes and runs the Python code required to fulfil the request.
  4. Completion: Apps Script receives the modified file or data and saves it back to the user’s Drive.
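The steps above can be sketched as a single Apps Script call to the Gemini API with the code execution tool enabled. This is an illustrative shape rather than Stéphane's actual code: the model name is a placeholder, and the snake_case field names follow the Gemini REST API conventions, so check the current API reference before relying on them.

```javascript
// Pure helper: build the generateContent request body with the file
// inlined as base64 and the code execution tool switched on.
function buildCodeExecutionRequest(prompt, base64File, mimeType) {
  return {
    contents: [{
      role: 'user',
      parts: [
        { text: prompt },
        { inline_data: { mime_type: mimeType, data: base64File } }
      ]
    }],
    tools: [{ code_execution: {} }]
  };
}

// Sketch: post the request from Apps Script. Model name is a placeholder.
function runCodeExecution(prompt, blob, apiKey) {
  var body = buildCodeExecutionRequest(
    prompt,
    Utilities.base64Encode(blob.getBytes()),
    blob.getContentType()
  );
  var url = 'https://generativelanguage.googleapis.com/v1beta/models/' +
    'gemini-flash-latest:generateContent?key=' + apiKey;
  var response = UrlFetchApp.fetch(url, {
    method: 'post',
    contentType: 'application/json',
    payload: JSON.stringify(body)
  });
  return JSON.parse(response.getContentText());
}
```

The response parts would then include the generated Python, its execution output, and any produced file data to save back to Drive.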

However, there are important boundaries to consider. The execution environment is strictly Python-based and has a maximum runtime of 30 seconds. Furthermore, developers should be mindful of the billing model: you are billed for “intermediate tokens,” which include the generated code and the results of the execution, before the final summary is produced.

Get started

For those interested in the implementation details, Stéphane has shared a repository containing the Apps Script logic and the specific configuration needed to enable the code_execution tool.

Source: Stéphane Giron on Medium | Code: GitHub Repository

Breaking boundaries: Running native Apps Script on Google Cloud Run

Containerize Apps Script code and run it on cloud platforms such as Cloud Run, outside the context limitations of the Apps Script IDE.

Bruce Mcpherson continues to expand the capabilities of his gas-fakes library, a tool that has already proven valuable for running Google Apps Script locally on Node.js. In his latest update, Bruce demonstrates how to take this a step further by containerising Apps Script code to run on Google Cloud Run.

For developers familiar with the constraints of the Apps Script IDE, particularly the execution time limits, moving logic to a serverless container environment offers a powerful alternative. With the release of version 2.0.2, gas-fakes now includes a managed configuration for Workspace Domain-Wide Delegation (DWD), enabling secure, keyless authentication.

Essentially, this allows developers to package their Apps Script logic into containers, enabling execution on scalable platforms like Cloud Run, free from the constraints of the standard IDE.

Authentication and Service Accounts

One of the friction points in moving from a bound script to a cloud environment is authentication. Bruce highlights that while Application Default Credentials (ADC) work well for local development, a more secure method is required for Cloud Run. The updated gas-fakes CLI simplifies this by handling the service account configuration automatically.

The library supports two primary authentication types:

  • Domain-Wide Delegation (DWD): Recommended for production environments and cross-platform scenarios, such as Cloud Run or Kubernetes.
  • Application Default Credentials (ADC): A fallback method primarily for local development.

Containerisation Workflow

For those looking to deploy their own scripts, Bruce’s guide walks through the essentials required to containerise a project.

It is important to note that while the core logic can be written in JavaScript, you will need to manually set up the infrastructure configuration. The guide provides the specific code required for:

  1. The Dockerfile: To mimic the Apps Script runtime environment.
  2. Cloud Build Configuration: A cloudbuild.yaml file to manage the build steps.
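To give a rough idea of the shape of that configuration, a Dockerfile for a Node-based gas-fakes project might look something like the sketch below. This is illustrative only — it is not Bruce's actual Dockerfile, and the Node version and entry-point filename are assumptions, so follow the article for the real configuration.

```dockerfile
# Illustrative sketch only — see Bruce's article for the actual Dockerfile.
FROM node:20-slim
WORKDIR /app
# Install dependencies (including @mcpher/gas-fakes) first for layer caching.
COPY package*.json ./
RUN npm install
# Copy the Apps Script logic and supporting files.
COPY . .
# Entry point name assumed; Bruce's example uses example.js.
CMD ["node", "example.js"]
```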

To tie it all together, the article includes a deployment script (referred to as deploy-cloudrun.sh in the text) which automates the pipeline. It handles everything from creating the Artifact Registry repository and submitting the build, to monitoring the deployment.

Code Example

To help developers get started, Bruce has provided a dedicated repository containing a working example.

Note on usage: The repository contains the core Node.js logic (example.js) and a deployment helper script (named exgcp.sh in the repo). However, to deploy this successfully, you will need to combine these files with the Dockerfile and cloudbuild.yaml configurations detailed in the main article. Bruce notes there is more to come.

The example.js file illustrates how standard Apps Script services are imported and used within the Node.js environment. By requiring gas-fakes, you can access services like DriveApp or SpreadsheetApp using the exact same syntax you use in the Apps Script editor, bridging the gap between local Node development and the Google Cloud runtime.

Summary

This development opens up interesting possibilities for hybrid workflows where lightweight tasks remain in Apps Script, while heavier processing is offloaded to Cloud Run without needing to rewrite the core logic in a different language. If you are interested in trying this out, Bruce has provided example deployment scripts and a troubleshooting guide to help you get started.

Source: Yes – you can now run native Apps Script on Cloud Run! – Desktop liberation

Identifying the “human in the machine”: Advanced email reply detection in Gmail with Google Apps Script

Learn how to check if an email has been replied to in Gmail using Google Apps Script.

“Gmail doesn’t have a built-in feature to check for replies, but you can use Google Apps Script to create a custom script that will automatically check for replies to your emails.”

Amit Agarwal has shared another practical tutorial for anyone looking to add more intelligence to their automated Gmail workflows. While Gmail organises messages into threads, it does not provide a simple out-of-the-box method to programmatically determine if a specific email has received a human response.

In his latest post, Amit explores the technical side of email headers: specifically how the Message-Id of an original email is referenced in the In-Reply-To and References headers of subsequent replies. This logic forms the basis for his Apps Script solution, which helps developers identify valid replies while ignoring automated clutter.

One aspect I found particularly interesting is the filtering logic used to try and separate human interaction from the rest. The script accounts for several common hurdles as it ignores out-of-office auto-responses, delivery failure notifications, and messages sent from the user’s own account. By checking the auto-submitted header and using regex to filter common bot addresses like no-reply or mailer-daemon, the solution ensures you only track meaningful engagement.
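As a rough illustration of that filtering approach (not Amit's actual code — the function and regex here are my own sketch of the checks his post describes), the logic boils down to three tests against the reply's headers:

```javascript
// Common bot sender addresses to ignore.
var BOT_ADDRESS_REGEX = /(no-?reply|mailer-daemon|postmaster|donotreply)@/i;

// Illustrative sketch: treat a message as a human reply only if it
// references the original Message-Id and is not an auto-response or bot.
// `headers` is a plain object of header name → value.
function isHumanReply(headers, originalMessageId) {
  var inReplyTo = headers['In-Reply-To'] || '';
  var references = headers['References'] || '';
  // 1. Must reference the original message's Message-Id.
  if (inReplyTo.indexOf(originalMessageId) === -1 &&
      references.indexOf(originalMessageId) === -1) {
    return false;
  }
  // 2. Ignore auto-responders (out-of-office, delivery failures).
  if (headers['Auto-Submitted'] && headers['Auto-Submitted'] !== 'no') {
    return false;
  }
  // 3. Ignore common bot sender addresses.
  if (BOT_ADDRESS_REGEX.test(headers['From'] || '')) {
    return false;
  }
  return true;
}
```

In Apps Script, the header values would come from `GmailMessage.getHeader()` on each message in the thread.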

Crucially, this logic is not confined to the Google ecosystem. Because these headers follow standard email protocols, the detection works seamlessly even when recipients reply from external clients like Microsoft Outlook or Apple Mail.

As Amit notes, if you want to use this logic effectively, you should store the rfcMessageId and threadId in a Google Sheet or database immediately after sending. This approach allows you to run your reply-checking scripts at any time without losing the original context.

You can find the full code and detailed header explanations in Amit’s blog post.

Source: How to Check for Email Replies in Gmail