The Google Admin Console is a powerful engine, but it often becomes a bottleneck for delegated tasks. IT teams frequently find themselves trapped in “Admin Debt,” repeating manual steps because granting full administrative access to others is a security risk.
At the Google for Education IT Admin Summit in London, I shared a session on how to move from a static interface to a dynamic, automated engine. The goal is to build an “Admin Toolbox” that showcases how some Admin SDK and other Workspace API capabilities can be integrated into secure, self-service applications using AppSheet and Google Apps Script.
For those who couldn’t attend, or for those who want to dig into the code, I’ve made the session guide available below.
“We are building a data bridge that turns raw directory records into a functional database that understands who people are and, more importantly, who they report to.” — Session Guide: The Automated IT Toolbox
Practical Builds in the Toolbox
The session guide covers four distinct areas where these technologies intersect to solve common IT headaches:
Hierarchical Directory Apps: Building a connection to the Admin SDK to create a searchable directory with security filters based on reporting lines.
Automated Shared Drive Provisioning: A workflow where AppSheet handles the request and Apps Script acts as the administrator to create and name drives automatically upon approval.
ChromeOS Management: Using the Reports API to create a live activity feed of login events and issuing remote commands like “Lock Device.”
AI-Powered Damage Reports: Utilising the Gemini API to analyse photos of damaged hardware. Users can snap a photo, and the AI provides a structured analysis of the severity and required parts.
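To give a flavour of the directory build, here is a minimal sketch (not the session code) of pulling a manager's reports with the Admin SDK. It assumes the AdminDirectory advanced service is enabled, the executing account has directory read access, and that the `manager` search field behaves as documented:

```javascript
/**
 * Hypothetical sketch: list the reports of a manager using the Admin SDK
 * Directory advanced service. Assumes the service is enabled and the
 * account running the script can read the directory.
 */
function getDirectReports(managerEmail) {
  const response = AdminDirectory.Users.list({
    customer: 'my_customer',
    query: `manager='${managerEmail}'`, // Directory API search field
    maxResults: 100
  });
  return (response.users || []).map(u => ({
    email: u.primaryEmail,
    name: u.name && u.name.fullName
  }));
}
```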
Gemini as a Development Partner
A key takeaway from the session was that I didn’t write any of this code from scratch. Instead, I used the Gemini App as a pair programmer. While Gemini was excellent for standard data tasks, it reached its limits when handling more obscure or less documented API calls. In these areas, my existing knowledge of the Workspace platform was essential. I had to refine my prompts and provide specific technical context to guide the model toward a reliable solution. It highlights that while AI is a powerful assistant, it still needs a knowledgeable pilot to navigate the complexities of advanced APIs.
The full session guide includes code snippets, as well as examples of the more advanced ‘watch’ notifications and Gemini API structured output.
Connect Google Apps Script to PostgreSQL via JDBC. Covers connection strings, JSONB/UUID workarounds, parameterized queries, transactions, and PostGIS.
Google recently added support for PostgreSQL to the Apps Script JDBC service, providing a way to connect your spreadsheets and automations directly to one of the most popular relational databases. Justin Poehnelt, Senior Developer Relations Engineer at Google, has recently published a guide which provides a detailed look at how to get this connection working and the specific hurdles you might encounter.
It is always great to see contributions that bridge the gap between enterprise data and Workspace productivity. Justin’s walkthrough is a great example of how a few lines of configuration can open up massive storage possibilities beyond the traditional limits of Google Sheets.
Beyond just making the connection, Justin provides a series of examples and techniques, and for those managing high-traffic projects, he includes advice on using connection poolers like PgBouncer. If you are ready to connect your Workspace projects to a PostgreSQL instance, you can find the full guide, configuration tips, and complete code samples on Justin’s blog.
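To give a sense of the pattern Justin describes, here is a minimal sketch of a parameterised PostgreSQL query via the Apps Script JDBC service. The host, database name, and credentials below are placeholders:

```javascript
/**
 * Minimal sketch of a parameterised PostgreSQL query using the Apps Script
 * JDBC service. Connection details are placeholders.
 */
function queryPostgres() {
  const url = 'jdbc:postgresql://db.example.com:5432/mydb';
  const conn = Jdbc.getConnection(url, 'db_user', 'db_password');
  try {
    // Parameterised statement avoids manual string escaping / SQL injection
    const stmt = conn.prepareStatement('SELECT id, name FROM customers WHERE region = ?');
    stmt.setString(1, 'EMEA');
    const results = stmt.executeQuery();
    const rows = [];
    while (results.next()) {
      rows.push([results.getInt(1), results.getString(2)]);
    }
    results.close();
    stmt.close();
    return rows;
  } finally {
    conn.close();
  }
}
```

Justin's guide covers the extra wrinkles (JSONB/UUID handling, transactions, PostGIS) that go beyond this basic shape.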
We know that to get the most out of generative AI tools like Gemini (gemini.google.com), you need to provide them with high-quality context. For AppSheet creators, there is a “hidden gem” within the platform that generates automatic documentation for your app. By navigating to Settings > Information > App documentation, you can access a comprehensive PDF that details your app’s definition; this is an excellent resource for human review and archiving.
However, when working with tools like Gemini to generate code or API calls, a flattened PDF isn’t always the most efficient format for the model to parse. For creators looking for a more machine-friendly alternative, QREW Apps recently suggested a clever method to access the OpenAPI specification of your app directly.
App owners can retrieve this structured data by appending their app’s UUID to the AppSheet API v2 URL: https://www.appsheet.com/api/v2/apps/{app-guid}/openapi.json
This OpenAPI JSON export provides a structured blueprint of your app’s API capabilities. Unlike the PDF documentation, this JSON format is highly digestible for an AI. For creators beginning to experiment with the AppSheet API, uploading this JSON file into gemini.google.com allows the model to understand the exact schema and capabilities of your specific application.
With this context loaded, Gemini can assist in constructing accurate API calls. For example, if you are looking to use the ‘Call a webhook’ task in an automation, Gemini can generate valid payloads, enable batch updates, or script complex data interactions that would typically require significant manual trial and error.
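For context, the kind of call Gemini can help construct follows the AppSheet API v2 pattern. In this sketch the app ID, access key, table, and column names are all placeholders:

```javascript
/**
 * Hypothetical sketch of an AppSheet API v2 call from Apps Script.
 * APP_ID, ACCESS_KEY, the table name and the column names are placeholders.
 */
function addAppSheetRow() {
  const url = 'https://www.appsheet.com/api/v2/apps/APP_ID/tables/Orders/Action';
  const payload = {
    Action: 'Add',
    Properties: { Locale: 'en-GB' },
    Rows: [{ 'Order Id': 'ORD-001', 'Status': 'New' }]
  };
  const response = UrlFetchApp.fetch(url, {
    method: 'post',
    contentType: 'application/json',
    headers: { ApplicationAccessKey: 'ACCESS_KEY' },
    payload: JSON.stringify(payload)
  });
  return JSON.parse(response.getContentText());
}
```

With the OpenAPI spec loaded, Gemini can fill in the real table names and row schemas for your specific app.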
For those of you working with Google Apps Script, you can pair this technique with my AppSheetApp library. With the OpenAPI spec providing the schema and the library code handling the API communication, you can prompt Gemini to write a script for your AppSheet automation.
I am keen to hear how you get on with this workflow. If you discover other interesting ways to combine OpenAPI specs with Gemini to accelerate your Google Workspace development, feel free to share them in the comments. As always, happy scripting!
Editor: We’ve previously featured Zig Mandel’s comprehensive framework that integrates Google Apps Script web apps into a standard website. Zig has been busy with an update which we are reposting from Reddit with permission:
I’ve shipped a major update to my Apps Script Website Integration Framework. The framework now allows running an HTMLService frontend entirely outside the GAS iframe, directly on your website.
Why use this?
HTMLService is convenient, but the iframe environment blocks a lot of modern web-dev capabilities: slow load, limited browser APIs, no TypeScript, no React, no Vite/live-reload, no custom domains, etc.
This update removes all of those constraints. You can develop, debug, and deploy a GAS webapp like a normal website – using any tooling, libraries, or build process you want.
How this compares to the previous method
The original method already bypassed several HTMLService limitations. The new approach goes further by running completely outside the iframe (faster, full capabilities), with one trade-off: it doesn’t support HTML templates. If you rely on templates, you can start with the original method and later migrate to this new method once templates are no longer needed.
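For readers new to the general idea (this is a generic sketch, not Zig's framework code), the server side of such an integration is typically a Web App endpoint returning JSON, which an external site can call in place of google.script.run:

```javascript
/**
 * Generic sketch (not Zig's implementation): a Web App endpoint that
 * returns JSON for an external frontend.
 */
function doPost(e) {
  const request = JSON.parse(e.postData.contents);
  const result = { echo: request.message, at: new Date().toISOString() };
  return ContentService
    .createTextOutput(JSON.stringify(result))
    .setMimeType(ContentService.MimeType.JSON);
}

// On the external site, the client calls the deployed /exec URL. Sending the
// body without a JSON content type keeps it a "simple" request, which avoids
// the CORS preflight that Apps Script web apps do not handle:
//
//   fetch(WEB_APP_URL, { method: 'POST', body: JSON.stringify({ message: 'hi' }) })
//     .then(r => r.json())
//     .then(console.log);
```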
The monorepo includes live working examples. Star if you like it!
Why use a dedicated app when you can simply ask Gemini to write and run the Python code for you? A look at the power of Google Apps Script and GenAI
For many Google Workspace developers, handling complex file formats or performing advanced data analysis has traditionally meant navigating the limitations of Apps Script’s built-in services. We have previously featured solutions for these challenges on Pulse, such as merging PDFs and converting pages to images or using the PDFApp library to “cook” documents. While effective, these methods often rely on loading external JavaScript libraries like pdf-lib, which can be complex to manage and subject to the script’s memory and execution limits.
While users of Gemini for Google Workspace may already be familiar with its ability to summarise documents or analyse data in the side panel, those features are actually powered by the same “Code Execution” technology under the hood. The real opportunity for developers lies in using this same engine within Apps Script to build custom, programmatic workflows that go far beyond standard chat interactions.
A recent project by Stéphane Giron highlights this path. By leveraging the Code Execution capability of the Gemini API, it is possible to offload intricate document and data manipulation to a secure Python sandbox, returning the results directly to Google Drive.
Moving beyond static logic
The traditional approach to automation involves writing specific code for every anticipated action. The shift here is that Gemini Code Execution does not rely on a pre-defined set of functions. Instead, when provided with a file and a natural language instruction, the model generates the necessary Python logic on the fly. Because the execution environment includes robust libraries for binary file handling and data analysis, the model can perform varied tasks without the developer needing to hardcode each individual routine. Notably, the model can learn iteratively which means if the generated code fails, it can refine and retry the script up to five times until it reaches a successful output.
While basic data analysis is now a standard part of Gemini for Workspace, having direct access to the library list in the Gemini sandbox opens up additional specialised, developer-focused avenues:
Dynamic Document Generation: Using python-docx and python-pptx, you can programmatically generate high-fidelity Office documents or presentations based on data from Google Workspace, bridging the gap between ecosystems without manual copy-pasting. [Here is a container-bound script based on Stéphane’s code for Google Docs that generates a summary PowerPoint file]
Programmatic Image Inspection: Using Gemini 3 Flash, you can build tools that inspect images at a granular level. For example, a script could process a batch of site inspection photos, using Python code to “zoom and inspect” specific equipment labels or gauges, and then log those values directly into a database.
The mechanics and constraints
The bridge between Google Drive and this dynamic execution environment follows a straightforward pattern:
File Preparation: The script retrieves the target file from Drive and converts the blob into a format compatible with the Gemini API.
Instruction & Execution: Along with the file, a prompt is sent describing the desired outcome.
The Sandbox: Gemini writes and runs the Python code required to fulfil the request.
Completion: Apps Script receives the modified file or data and saves it back to the user’s Drive.
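The pattern above can be sketched in a single UrlFetchApp call. This is an illustration rather than Stéphane's exact code; the model name and the script property holding the API key are assumptions:

```javascript
/**
 * Sketch of calling the Gemini API with the code execution tool enabled.
 * Model name and GEMINI_API_KEY script property are assumptions.
 */
function runWithCodeExecution(fileBlob, instruction) {
  const apiKey = PropertiesService.getScriptProperties().getProperty('GEMINI_API_KEY');
  const url = 'https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash:generateContent?key=' + apiKey;
  const payload = {
    contents: [{
      role: 'user',
      parts: [
        // File Preparation: the blob is base64-encoded as inline data
        { inlineData: { mimeType: fileBlob.getContentType(), data: Utilities.base64Encode(fileBlob.getBytes()) } },
        // Instruction: the natural language description of the outcome
        { text: instruction }
      ]
    }],
    // The Sandbox: enables the model to write and run Python
    tools: [{ codeExecution: {} }]
  };
  const response = UrlFetchApp.fetch(url, {
    method: 'post',
    contentType: 'application/json',
    payload: JSON.stringify(payload)
  });
  // Completion: the response parts include the generated code, its output,
  // and any produced file data to save back to Drive
  return JSON.parse(response.getContentText());
}
```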
However, there are important boundaries to consider. The execution environment is strictly Python-based and has a maximum runtime of 30 seconds. Furthermore, developers should be mindful of the billing model: you are billed for “intermediate tokens,” which include the generated code and the results of the execution, before the final summary is produced.
Get started
For those interested in the implementation details, Stéphane has shared a repository containing the Apps Script logic and the specific configuration needed to enable the code_execution tool.
Containerise Apps Script code and run it on cloud platforms such as Cloud Run, outside the context limitations of the Apps Script IDE.
Bruce Mcpherson continues to expand the capabilities of his gas-fakes library, a tool that has already proven valuable for running Google Apps Script locally on Node.js. In his latest update, Bruce demonstrates how to take this a step further by containerising Apps Script code to run on Google Cloud Run.
For developers familiar with the constraints of the Apps Script IDE, particularly the execution time limits, moving logic to a serverless container environment offers a powerful alternative. With the release of version 2.0.2, gas-fakes now includes a managed configuration for Workspace Domain-Wide Delegation (DWD), enabling secure keyless authentication.
Essentially, this allows developers to package their Apps Script logic into containers, enabling execution on scalable platforms like Cloud Run, free from the constraints of the standard IDE.
Authentication and Service Accounts
One of the friction points in moving from a bound script to a cloud environment is authentication. Bruce highlights that while Application Default Credentials (ADC) work well for local development, a more secure method is required for Cloud Run. The updated gas-fakes CLI simplifies this by handling the service account configuration automatically.
The library supports two primary authentication types:
Domain-Wide Delegation (DWD): Recommended for production environments and cross-platform scenarios, such as Cloud Run or Kubernetes.
Application Default Credentials (ADC): A fallback method primarily for local development.
Containerisation Workflow
For those looking to deploy their own scripts, Bruce’s guide walks through the essentials required to containerise a project.
It is important to note that while the core logic can be written in JavaScript, you will need to manually set up the infrastructure configuration. The guide provides the specific code required for:
The Dockerfile: To mimic the Apps Script runtime environment.
Cloud Build Configuration: A cloudbuild.yaml file to manage the build steps.
To tie it all together, the article includes a deployment script (referred to as deploy-cloudrun.sh in the text) which automates the pipeline. It handles everything from creating the Artifact Registry repository and submitting the build, to monitoring the deployment.
Code Example
To help developers get started, Bruce has provided a dedicated repository containing a working example.
Note on usage: The repository contains the core Node.js logic (example.js) and a deployment helper script (named exgcp.sh in the repo). However, to deploy this successfully, you will need to combine these files with the Dockerfile and cloudbuild.yaml configurations detailed in the main article. Bruce notes there is more to come.
The example.js file illustrates how standard Apps Script services are imported and used within the Node.js environment. By requiring gas-fakes, you can access services like DriveApp or SpreadsheetApp using the exact same syntax you use in the Apps Script editor, bridging the gap between local Node development and the Google Cloud runtime.
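As a flavour of what that looks like (a sketch, assuming the package name and import pattern; check Bruce's repo for the exact entry point):

```javascript
/**
 * Sketch of using gas-fakes in Node. The package name is an assumption;
 * importing it registers DriveApp, SpreadsheetApp, etc. as globals.
 */
function listDriveFiles() {
  require('@mcpher/gas-fakes');
  const files = DriveApp.getFiles(); // same syntax as the Apps Script editor
  const names = [];
  while (files.hasNext() && names.length < 10) {
    names.push(files.next().getName());
  }
  return names;
}
```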
Summary
This development opens up interesting possibilities for hybrid workflows where lightweight tasks remain in Apps Script, while heavier processing is offloaded to Cloud Run without needing to rewrite the core logic in a different language. If you are interested in trying this out, Bruce has provided example deployment scripts and a troubleshooting guide to help you get started.
Learn how to check if an email has been replied to in Gmail using Google Apps Script.
“Gmail doesn’t have a built-in feature to check for replies, but you can use Google Apps Script to create a custom script that will automatically check for replies to your emails.”
Amit Agarwal has shared another practical tutorial for anyone looking to add more intelligence to their automated Gmail workflows. While Gmail organises messages into threads, it does not provide a simple out-of-the-box method to programmatically determine if a specific email has received a human response.
In his latest post, Amit explores the technical side of email headers: specifically how the Message-Id of an original email is referenced in the In-Reply-To and References headers of subsequent replies. This logic forms the basis for his Apps Script solution, which helps developers identify valid replies while ignoring automated clutter.
One aspect I found particularly interesting is the filtering logic used to try and separate human interaction from the rest. The script accounts for several common hurdles as it ignores out-of-office auto-responses, delivery failure notifications, and messages sent from the user’s own account. By checking the auto-submitted header and using regex to filter common bot addresses like no-reply or mailer-daemon, the solution ensures you only track meaningful engagement.
Crucially, this logic is not confined to the Google ecosystem. Because these headers follow standard email protocols, the detection works seamlessly even when recipients reply from external clients like Microsoft Outlook or Apple Mail.
As Amit notes, if you want to use this logic effectively, you should store the rfcMessageId and threadId in a Google Sheet or database immediately after sending. This approach allows you to run your reply-checking scripts at any time without losing the original context.
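The detection idea can be sketched as follows. This is not Amit's exact code, just an illustration of the header-matching and bot-filtering approach he describes:

```javascript
/**
 * Sketch of the reply-detection idea: filter out automated senders,
 * then match a stored Message-Id against reply headers in the thread.
 */
function isLikelyAutomated(fromAddress) {
  return /no-?reply|mailer-daemon|postmaster|do-not-reply/i.test(fromAddress);
}

function hasHumanReply(threadId, rfcMessageId) {
  const thread = GmailApp.getThreadById(threadId);
  return thread.getMessages().some(message => {
    const inReplyTo = message.getHeader('In-Reply-To');
    const autoSubmitted = message.getHeader('Auto-Submitted');
    return inReplyTo && inReplyTo.indexOf(rfcMessageId) !== -1 &&
      (!autoSubmitted || autoSubmitted === 'no') && // RFC 3834 auto-responses
      !isLikelyAutomated(message.getFrom());
  });
}
```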
You can find the full code and detailed header explanations in Amit’s blog post.
The conversation around Enterprise AI is shifting. In 2024, we were preoccupied with the novelty of the prompt box, marvelling at the ability of large language models to draft emails or summarise long documents. By 2026, that novelty has faded. The real frontier is no longer about asking an AI to “write this”; it is about instructing it to “execute this process.” We are witnessing the birth of a managed Action Layer within our digital environments.
The DORA Reality Check: AI as an Amplifier
To understand why this shift is happening, we must look at the data. The 2025 DORA Report provides a sobering reality check for technology leaders. Its central finding is that AI is an amplifier of your existing organisational systems. If your workflows are streamlined and your data is clean, AI accelerates excellence. However, if your processes are fragmented, AI simply creates “unproductive productivity,” generating isolated pockets of speed that eventually lead to downstream chaos.
DORA’s insights suggest that the bottleneck for AI value isn’t the model’s intelligence; it is the “Context Constraint.” AI cannot be effective if it is trapped in a silo, unable to see the broader business environment or act upon the tools that employees use every day. To move from hype to value, we must stop treating AI as a separate chatbot and start treating it as a governed co-worker.
Demystifying MCP
An emerging key component at the heart of this transition will be the Model Context Protocol (MCP). This protocol acts as a universal bridge that allows AI models to connect directly with data sources and local tools without the need for custom code for every new integration. It works by standardising how a model requests information from a database, a file system, or a specific software application. Because this protocol creates a common language between the AI and the external world, developers can swap different models or data sets in and out of their workflows with much less friction.
We are moving away from the era of closed silos. Instead, we see a future where an AI assistant can securely reach into your local environment to read a specific document or query a live database using a single, unified interface. It is a fundamental shift in how we think about connectivity.
Two Philosophies of Action: Managed Features vs. Configurable Platforms
Given the scale and pace of change, it is not surprising that Google’s AI efforts are not singular. The reality is that Gemini Enterprise (the platform formerly known as Agentspace) and Gemini for Workspace are two distinct products developed by separate teams with very different philosophies of control.
Gemini for Workspace primarily operates as a “managed service.” We see a steady expansion of features, but these remain “black boxes” entirely defined and controlled by Google. The user receives the feature, but the administrator has very little say in how that tool is connected, calibrated, or exposed to the model’s reasoning engine. Gemini Enterprise, by contrast, is a more configurable platform. It allows organisations to define their own data connectors, tailor system prompts and even deploy enterprise-owned agents.
Whilst the internal implementation details of every Gemini feature remain proprietary, Google’s recent announcement of official MCP support suggests that the protocol is becoming the standard connective tissue for their entire AI stack. The Enterprise team is simply the first to expose this plumbing for organisational governance and configuration. To understand the practical power of this protocol when it is stripped of its corporate packaging, we need only look at how MCP is being used in other generative AI tooling.
Lessons from the Gemini CLI
While most Workspace users interact with AI through polished side panels and the Gemini App, some power users are turning to the Gemini CLI. This command-line interface is the playground often being used to explore the automation of AI-driven workflows.
Unlike the “black box” nature of Gemini Enterprise and Gemini for Workspace, the Gemini CLI is easy to extend and customise. Users can easily install and create Extensions which can wrap commands, system prompts and MCP tools. The Google Workspace Extension is a wonderful demonstration of the power of MCP. By providing a few lines of configuration, a user grants the model permission to query and manage core productivity data: it can search files in Drive, draft emails in Gmail, dispatch Chat alerts, or organise Calendar appointments directly from the command line.
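For context, wiring an MCP server into the Gemini CLI really is just a few lines in its settings.json; the server name and command below are illustrative:

```json
{
  "mcpServers": {
    "my-tools": {
      "command": "npx",
      "args": ["-y", "my-mcp-server"]
    }
  }
}
```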
The value of the CLI lies in its demonstration that generative AI is capable of much more than just summarisation and analysis. By extending its capabilities through MCP, a user can hand-pick the actions they need based on their specific persona and the systems they inhabit. This shift toward action-oriented tools allows the model to move from passive reasoning to active execution without the user ever leaving their primary environment.
The Aspiration: A Managed Action Marketplace
The challenge now is to take this raw, unfiltered power seen in the CLI and translate it into a governed enterprise experience. Individual productivity is a start, but true organisational scale requires a bridge between these flexible protocols and the managed safety of the Workspace environment. I believe the defining moment for enterprise productivity will arrive when Google closes this gap by utilising its existing, battle-tested infrastructure.
For Gemini for Workspace to truly align with the DORA findings on systemic stability, it must move away from the current “one-size-fits-all” managed feature-set and toward a framework of granular administrative control. This is the aspiration for a truly professional Action Layer.
The Google Workspace Add-on Manifest is an existing way for enterprises to customise Google Workspace. Today, developers use a manifest to define which products an add-on supports, such as Gmail or Docs. To enable the next wave of productivity, Google could add the Gemini App (gemini.google.com) as a first-class, configurable product within this manifest. This would effectively turn the “Add-on” from a static sidebar tool into a dynamic Action that Gemini can call upon.
In this vision, a developer defines the location of their MCP server directly within the manifest. This creates a standardised way to deliver agentic tools through the existing Marketplace. This is not about building another static side panel. Rather, it is about injecting new capabilities directly into the core reasoning engine of the Gemini App and the integrated Gemini side panels across Workspace.
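To make the idea concrete, a purely hypothetical manifest extension might look something like this. None of these fields exist today; they illustrate the shape such a declaration could take:

```json
{
  "addOns": {
    "common": { "name": "Example Analytics Add-on" },
    "gemini": {
      "mcpServer": {
        "url": "https://mcp.example.com/analytics",
        "authorization": "OAUTH2"
      }
    }
  }
}
```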
Imagine a “Tableau for Workspace” add-on. Instead of a user switching tabs to query a dashboard, the add-on uses the Tableau MCP toolset to allow Gemini to “see” live visualisations. The user can simply ask their Gemini side panel in Google Docs to “Insert a summary of our regional performance from Tableau,” and the agent executes the query, interprets the data, and drafts the text in situ.
This transformation would provide the necessary governance layer to ensure that AI-accelerated workflows remain under human control. By allowing admins to curate and configure these agentic tools, Google can turn Workspace into a high-quality internal platform. It would magnify the strengths of its users without introducing the chaos of ungoverned automation. The path to enterprise productivity is paved by protocol, not just better models. The winners of the next decade will not be the companies that buy the most AI licenses, but those that have the best platforms for those AI agents to inhabit.
As Google Apps Script developers, we are used to waiting. We wait for new runtime features, we wait for quotas to reset, and recently, we have been waiting for a first-class way to integrate Gemini into our projects.
With the recent release of the Vertex AI Advanced Service, the wait is technically over. But as detailed in Justin Poehnelt’s recent post, Using Gemini in Apps Script, you might find yourself asking if this was the solution we were actually looking for.
While the new service undoubtedly reduces the boilerplate code required to call Google’s AI models, it brings its own set of frustrations that leave me, and others in the community, feeling somewhat underwhelmed.
The “Wrapper” Trap
On the surface, the new VertexAI service looks like a win. As Justin highlights, replacing complex UrlFetchApp headers with a single VertexAI.Endpoints.generateContent() call is a significant cleanup.
However, this convenience comes with an administrative price tag. The Vertex AI Advanced Service requires a standard Google Cloud project (understandable for billing) but also the creation of an OAuth consent screen. For the majority of internal enterprise applications, I would imagine either a service account or a https://www.googleapis.com/auth/cloud-platform scope and associated IAM will be the preferred approach. This removes the need for a consent screen and, in the case of Service Accounts, rules out the Vertex AI Advanced Service.
It begs the question: Why didn’t Google take the approach of the Google Gen AI SDK?
In the Node.js and JavaScript world, the new Google Gen AI SDK offers a unified interface. You can start with a simple API key (using Google AI Studio) for prototyping, and switch to Vertex AI (via OAuth) for production, all without changing your core code logic. The Apps Script service, by contrast, locks us strictly into the “Enterprise” Vertex path. We seem to have traded boilerplate code for boilerplate configuration.
A Third Way: The Community Approach
If you are looking for that Unified SDK experience I mentioned earlier, where you can use the standard Google AI Studio code patterns within Apps Script, there is a third way.
I have published a library, GeminiApp, which wraps UrlFetchApp but mimics the official Google Gen AI SDK for Node.js. This allows you to write code that looks and feels like the modern JavaScript SDK, handling the complex UrlFetchApp configuration under the hood.
In Justin’s side-by-side comparison, the Advanced Service abstracts away the request complexity, the UrlFetchApp method gives you the transparency and control you often need in production, and the GeminiApp library offers a balance of both.
Disclaimer: As the creator of this library, I admit some bias, but it was built specifically to address the gap.
It is important to note a distinction in scope. Both the Google Gen AI SDK and GeminiApp are focused strictly on generative AI features. The Vertex AI Advanced Service, much like the platform it wraps, offers a broader range of methods beyond just content generation.
If your needs extend into those wider Vertex AI capabilities, but you still require the authentication flexibility of UrlFetchApp (such as using Service Accounts), I have a solution for that as well. My Google API Client Library Generator for Apps Script includes a build for the full Vertex AI (AI Platform) API. This gives you the comprehensive coverage of the Advanced Service with the architectural flexibility of an open-source library.
Here is how you can use the generated client library to authenticate with a Service Account, something impossible with the official Advanced Service:
/**
 * Example using the generated Aiplatform library with a Service Account.
 * Library: https://github.com/mhawksey/Google-API-Client-Library-Generator-for-Apps-Script/tree/main/build/Aiplatform
 */
function callGemini(prompt) {
  const projectId = 'GOOGLE_CLOUD_PROJECT_ID';
  const region = 'us-central1';
  const modelName = 'gemini-2.5-flash';
  const modelResourceName = `projects/${projectId}/locations/${region}/publishers/google/models/${modelName}`;
  const serviceAccountToken = getServiceAccountToken_();

  const vertexai = new Aiplatform({
    token: serviceAccountToken
  });

  const payload = {
    contents: [{
      role: 'user',
      parts: [{
        text: prompt
      }]
    }],
    generationConfig: {
      temperature: 0.1,
      maxOutputTokens: 2048
    }
  };

  const result = vertexai.projects.locations.publishers.models.generateContent({
    model: modelResourceName,
    requestBody: payload
  });

  return result.data.candidates?.[0]?.content?.parts?.[0]?.text || 'No response generated.';
}
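The getServiceAccountToken_() helper above is left to the reader; a minimal sketch of the standard signed-JWT exchange looks like this (it assumes the service account email and private key are stored in Script Properties as SA_EMAIL and SA_PRIVATE_KEY):

```javascript
/**
 * Sketch: mint an access token for a service account via a signed JWT.
 * Assumes SA_EMAIL and SA_PRIVATE_KEY are stored in Script Properties.
 */
function getServiceAccountToken_() {
  const props = PropertiesService.getScriptProperties();
  const saEmail = props.getProperty('SA_EMAIL');
  // Keys pasted into properties often have escaped newlines
  const privateKey = props.getProperty('SA_PRIVATE_KEY').replace(/\\n/g, '\n');
  const now = Math.floor(Date.now() / 1000);
  const encode = (obj) => Utilities.base64EncodeWebSafe(JSON.stringify(obj)).replace(/=+$/, '');
  const unsigned = encode({ alg: 'RS256', typ: 'JWT' }) + '.' + encode({
    iss: saEmail,
    scope: 'https://www.googleapis.com/auth/cloud-platform',
    aud: 'https://oauth2.googleapis.com/token',
    iat: now,
    exp: now + 3600
  });
  const signature = Utilities.computeRsaSha256Signature(unsigned, privateKey);
  const jwt = unsigned + '.' + Utilities.base64EncodeWebSafe(signature).replace(/=+$/, '');
  const response = UrlFetchApp.fetch('https://oauth2.googleapis.com/token', {
    method: 'post',
    payload: {
      grant_type: 'urn:ietf:params:oauth:grant-type:jwt-bearer',
      assertion: jwt
    }
  });
  return JSON.parse(response.getContentText()).access_token;
}
```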
When “Advanced” Means “Behind”
There is another catch that Justin uncovered during his testing: the service struggles with the bleeding edge.
If you are trying to access the latest “Preview” models to prototype, such as the highly anticipated gemini-3-pro-preview, the advanced service may fail you. It appears the wrapper doesn’t yet support the auto-discovery needed for these newer endpoints.
In his companion post, UrlFetchApp: The Unofficial Documentation, Justin reminds us why UrlFetchApp is still the backbone of Apps Script development. When the “official” wrapper doesn’t support a specific header or a beta model, UrlFetchApp is the only way to bypass the limitations.
The Verdict
The Vertex AI service is a welcome addition for stable, enterprise-focused applications. But for developers, particularly those who want to test the latest Gemini 3 capabilities, it feels rigid compared to the flexibility seen in other Google developer ecosystems.
It serves as a good reminder that in Apps Script, convenience services are great, but understanding the underlying HTTP requests via UrlFetchApp extends what you can achieve.
If you’ve built a Google Chat bot, you’ve likely hit this wall: the API sends you a membership event with a User ID… but omits the email field entirely.
For Google Workspace developers, this limitation in the Chat API is a frequent stumbling block. As detailed in the User resource documentation, while you can use an email address as an alias in requests (e.g., referencing users/user@example.com), the API insists on returning only the canonical resource name (e.g., users/123456789). This ID corresponds to the user’s profile in the People API or Admin SDK, but the email itself is stripped from the response, forcing developers to perform a secondary lookup.
/**
 * Fetches user details from the People API.
 * @private
 * @param {string} userResourceName The "users/{user}" string from the Chat API.
 * @return {{name: string, displayName: string, email: string|null}} An object with user details.
 */
function getUserDetails_(userResourceName) {
  const defaultUserDetails = {
    name: userResourceName,
    displayName: userResourceName.replace('users/', 'Unknown User '),
    email: null,
  };

  // Fail fast for app users or invalid formats
  if (!userResourceName.startsWith('users/') || userResourceName === 'users/app') {
    return defaultUserDetails;
  }

  try {
    const peopleApiResourceName = userResourceName.replace(/^users\//, 'people/');
    const person = People.People.get(peopleApiResourceName, {
      personFields: 'names,emailAddresses',
    });
    const displayName = person.names?.[0]?.displayName ?? defaultUserDetails.displayName;
    const email = person.emailAddresses?.[0]?.value ?? null;
    return {
      name: userResourceName,
      displayName: displayName,
      email: email,
    };
  } catch (e) {
    console.warn(`Could not fetch details for ${userResourceName} from People API: ${e.message}`);
    return defaultUserDetails;
  }
}
However, bridging this gap for a Service Account leads to a security dilemma. The People API often returns empty fields because the Service Account lacks a contact list. You might find yourself reaching for Domain-Wide Delegation to impersonate an admin—effectively using a sledgehammer to crack a nut.
In a recent guide, Justin Poehnelt outlines a more secure strategy that avoids granting blanket domain access. By assigning a custom “Users > Read” Admin Role directly to a Service Account, developers can resolve emails securely without the risks associated with full impersonation.
The Strategy at a Glance
Custom Roles: Create a strictly read-only role in the Admin Console.
Direct Assignment: Assign this role specifically to the Service Account’s email.
No Long-Lived Keys: Use Application Default Credentials (ADC) in production and Service Account Impersonation for local development.
This approach ensures your bot has just enough permission to identify users, keeping your security team happy and your audit logs clean. For more detail on the implementation, including the specific configuration steps, I encourage you to read Justin’s full post linked below.
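If you are running the same pattern from Apps Script rather than a deployed Service Account, the lookup itself becomes a single Admin SDK call. This sketch assumes the AdminDirectory advanced service is enabled and the executing identity has the "Users > Read" role Justin describes:

```javascript
/**
 * Sketch: resolve a Chat user resource name to an email via the Admin SDK.
 * Assumes the AdminDirectory advanced service and a read-only admin role.
 */
function resolveEmail_(userResourceName) {
  const userId = userResourceName.replace(/^users\//, '');
  try {
    const user = AdminDirectory.Users.get(userId);
    return user.primaryEmail || null;
  } catch (e) {
    console.warn(`Admin SDK lookup failed for ${userId}: ${e.message}`);
    return null;
  }
}
```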