Amit is a Google Workspace developer, the founder of Digital Inspiration, and a Google Developer Expert. He has published multiple Google Workspace Add-ons to the Workspace Marketplace, with a combined 46 million+ installs.
Looking to create successful Google Workspace Add-ons? Chanel Greco (Google Workspace DevRel) recently interviewed Amit Agarwal (Founder, Digital Inspiration), who shared valuable insights for aspiring developers.
Amit Agarwal highlighted several critical areas for add-on development success:
Solve Real Problems with Great UX: Focus on addressing specific user needs with a clean, intuitive interface. A compelling first-run experience is key.
Build Trust with Documentation: Provide clear, comprehensive documentation and tutorials. A detailed privacy policy is crucial for user trust and smoother Google reviews. Prompt communication with Google during review is also important.
Smart Business & Future Focus: A freemium model can drive adoption. Keep an eye on future trends like AI integration to enhance your add-on.
Engage the Community: Participate in the Google Developer Community and programs like Google Developer Experts to learn and grow.
To find out more about these areas and more, check out the full interview.
Following the improvements made to tables in Google Sheets in March and April, we’re excited to introduce API support for tables. Now, users will be able to take basic actions to create and modify tables via the Sheets API.
Following enhancements to Google Sheets tables earlier this year, Google recently highlighted an update for developers: the Sheets API now supports tables. This is great news for the Google Workspace developer community, as it allows direct programmatic creation and modification of tables, moving beyond the previous reliance on workarounds.
For a while, developers have found inventive ways to interact with table-like structures, such as Kanshi Tanaike’s notable 2024 solution using a mix of Apps Script and Sheet formulas. While these workarounds were effective, the new, direct API support offers a more robust and straightforward way to interact with tables in Google Sheets. For Google Apps Script users, this will for now require the Google Sheets Advanced Service to call the new table methods, as direct integration into the SpreadsheetApp service hasn’t been announced at this time.
Key API Capabilities for Tables:
The Sheets API now lets developers:
Add Tables: Create tables with defined names, ranges, and specific column properties (like ‘PERCENT’ or ‘DROPDOWN’ with validation).
Update Tables: Modify table size (add/remove rows/columns) and toggle table footers. The API also provides methods like InsertRangeRequest and DeleteRangeRequest for more granular control.
Append Values: Easily add new rows to the end of a table using AppendCellsRequest, which intelligently handles existing data and footers.
Delete Tables: Remove entire tables and their content (DeleteTableRequest) or just the formatting while retaining data (DeleteBandingRequest).
Utilise Column Types: Work with various column types including numeric, date, dropdown, smart chip, and checkbox.
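To give a flavour of what this looks like from Apps Script, here is a minimal sketch that creates a table via the Sheets Advanced Service (enable ‘Sheets’ under Services in the script editor). The addTable request shape and field names are assumptions based on the announcement, so verify them against the Sheets API reference:

```javascript
/**
 * Minimal sketch: create a table with the Sheets API batchUpdate method
 * using the Advanced Sheets Service. The addTable request shape below is
 * an assumption based on the announcement — check the API reference.
 */
function addSampleTable() {
  const spreadsheetId = 'YOUR_SPREADSHEET_ID'; // placeholder
  const requests = [{
    addTable: {
      table: {
        name: 'SalesTable',
        range: { sheetId: 0, startRowIndex: 0, endRowIndex: 6, startColumnIndex: 0, endColumnIndex: 2 },
        columnProperties: [
          { columnIndex: 0, columnName: 'Product', columnType: 'TEXT' },
          { columnIndex: 1, columnName: 'Growth', columnType: 'PERCENT' }
        ]
      }
    }
  }];
  Sheets.Spreadsheets.batchUpdate({ requests }, spreadsheetId);
}
```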
This is a sample Google Apps Script designed to replace all instances of specific text within a Google Slides presentation, while simultaneously applying a desired text style. The built-in Presentation.replaceAllText() method within the Google Slides service is limited; it efficiently replaces text strings but lacks the functionality to modify text formatting during the replacement process. This limitation poses a challenge when aiming for styled text replacements. This report presents a detailed script solution that overcomes this constraint.
Have you ever needed to replace text across your Google Slides presentation but also wanted to apply specific formatting to the new text at the same time? The standard replaceAllText() method in Apps Script is handy for bulk text replacement, but it falls short when you need to control the styling – like font, size, or colour – during the replacement process.
Community contributor Kanshi Tanaike has developed a clever solution to overcome this limitation. Tanaike has shared a Google Apps Script function that not only finds and replaces text throughout all elements in your slides (including shapes, tables, and grouped objects) but also applies your desired text styles simultaneously.
The script works by iterating through the elements on each slide. When it finds the text you want to replace, it uses TextRange methods to perform the replacement and apply the specified formatting attributes, such as font family, size, colour, bold, italics, underline, and more.
This approach provides significantly more control than the built-in method, allowing you to ensure that automatically replaced text matches the exact styling you need for visually consistent and polished presentations. Tanaike’s post includes the full script, configuration details for specifying the text and styles, and sample slides showing the results.
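For a feel of the general technique, here is a simplified sketch (not Tanaike’s full script, which also covers tables and grouped objects) showing how TextRange methods can combine replacement and styling:

```javascript
/**
 * Simplified sketch: replace matching text in every shape and restyle it.
 * The search text, replacement, and style values are illustrative.
 */
function replaceTextWithStyle() {
  const searchText = 'Hello';
  const replacement = 'Goodbye';
  SlidesApp.getActivePresentation().getSlides().forEach(slide => {
    slide.getShapes().forEach(shape => {
      shape.getText().find(searchText).forEach(match => {
        const replaced = match.setText(replacement); // setText returns the new TextRange
        replaced.getTextStyle()
          .setFontFamily('Roboto')
          .setFontSize(18)
          .setForegroundColor('#ff0000')
          .setBold(true);
      });
    });
  });
}
```

One subtlety a sketch like this glosses over: replacing a match with text of a different length can shift the positions of later matches, which is among the details handled in Tanaike’s complete script.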
I was thrilled to be invited by Chanel Greco (Google Workspace Developer Advocate) to join Vinay Vyas (Software Engineer, Google) and Steve Bazyl (Developer Program Engineer, Google) for a Developer Spotlight session recorded at Google Cloud Next ‘25.
We explored the exciting evolution of Google Workspace development, focusing on the tools and technologies shaping the future of automation and AI integration.
What We Covered:
The Rise of Flows for Workspace Automation: We kicked things off discussing Flows, a powerful new tool, just announced by Google, that is designed to simplify task automation in Google Workspace, and explored how developers can tap into its potential by building custom actions, including integrating with Vertex AI [01:17].
Extending Capabilities with Apps Script: Your existing Apps Script skills remain crucial! We discussed how they empower developers to extend Flows capabilities as the product evolves [02:08], offering significant opportunities within this user-centric automation tool [01:52].
Workspace as an Integrated Platform: Learn about the ongoing efforts to enhance connectivity with third-party services [02:35] and how add-ons (including Chat apps deployed as add-ons [03:32]) are key to keeping users productive within the Workspace context [02:52].
The Gemini Effect on Development: Hear about our experiences with Gemini 2.5 and its significant impact on code development, particularly for Apps Script [05:29], including its impressive ability to generate substantial, high-quality code for complex tasks [06:35].
AI Agents & The Future: We explored the concept of AI Agents powered by Gemini within Google Apps Script for automating tasks using natural language and discussed the exciting prospects for deeper Gemini integration directly within Apps Script [14:41]. I checky asked Steve if there was going to be a Gemini Advanced Service for Apps Script (you’ll have to watch the video to find out the answer :).
Why Watch?
If you are interested in understanding the future direction of Google Workspace development, this discussion hopefully provides some practical perspectives and explores exciting possibilities.
We’re thrilled to announce that Gemini in AppSheet Solutions is now available in Public Preview for Enterprise users! As announced during the AppSheet breakout session at Google Cloud Next 2025 just a few weeks ago, this powerful new capability allows AppSheet Enterprise Plus users to integrate Google’s Gemini models directly into their automation workflows.
Google recently announced some exciting developments for AppSheet, with the news that creators can now directly integrate Gemini AI capabilities into their AppSheet apps. While developers have previously found ways to connect AppSheet to Gemini, including using Google Apps Script (like the invoice processing example using the GeminiApp library previously discussed on Pulse), this new update promises to make AI-powered automation accessible to everyone, no coding required.
What This Means for AppSheet Creators
This move significantly lowers the barrier for integrating AI into AppSheet workflows. Instead of setting up API keys, writing Apps Script functions, and managing libraries like GeminiApp to call the Gemini API for tasks like data extraction, creators can now use a native AI Task step within AppSheet automations.
As highlighted in Google’s announcement, this new approach offers several advantages:
Simplified AI Integration: The native AI Task aims to handle jobs like extracting specific data from uploaded photos or PDFs and categorising text – directly within AppSheet. This replaces the need for external scripts for many common AI tasks.
Build with Confidence: The generally available AI Task Step Testing feature allows creators to test and refine AI prompts and configurations directly in the editor using sample data before deployment, a crucial step for reliable AI.
No-Code Accessibility: By embedding Gemini capabilities directly, AppSheet makes powerful AI accessible to creators who may not have coding expertise.
Controlled Deployment & Oversight: Admins still control feature access, and workflows can still incorporate human review steps alongside the AI Task.
Potential Use Cases
Google suggests the AI task can be used for:
Information Extraction: Getting details from images (serial numbers, meter readings) or documents (PO numbers, tracking info, report details).
Record Categorisation: Classifying expenses, maintenance requests, or customer feedback automatically.
These are similar goals to what was achievable via Apps Script previously, but now potentially much simpler to implement directly in AppSheet.
A Big Step for No-Code AI
This native integration represents a significant simplification compared to earlier methods requiring Apps Script. By embedding Gemini directly into AppSheet automations, Google is making advanced AI capabilities much more accessible for business users of all levels.
For full details and setup instructions, refer to the official Google announcement and linked resources.
Storing JSON files in Google Drive offers a practical approach for Google Workspace developers, especially when combined with Google Apps Script. While Drive lacks a built-in JSON editor, AI tools like Gemini can rapidly prototype solutions, such as web-based editors. However, creating a robust application requires more than just a quick fix. It involves understanding syntax highlighting, managing dependencies, and navigating the constraints of the Apps Script platform, highlighting the importance of both rapid prototyping and robust engineering skills.
I recently explored how to effectively manage JSON configuration files within Google Drive using Google Apps Script, and the journey from quick AI-assisted prototypes to robust solutions.
As part of this, I delve into the benefits of storing JSON in Drive, the challenges of editing it, and how AI tools like Gemini can provide a great starting point. However, as I discovered, building a truly polished tool requires deeper technical knowledge and problem-solving.
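For anyone new to the underlying pattern, the basic read–modify–write loop for a JSON file in Drive is short. Here is a minimal sketch, with a placeholder file ID and an illustrative ‘retries’ key:

```javascript
/**
 * Minimal sketch: read a JSON config file from Drive, update a value,
 * and write it back pretty-printed. The file ID and key are placeholders.
 */
function updateJsonConfig() {
  const fileId = 'YOUR_FILE_ID';
  const file = DriveApp.getFileById(fileId);
  const config = JSON.parse(file.getBlob().getDataAsString());
  config.retries = (config.retries || 0) + 1; // modify a value
  file.setContent(JSON.stringify(config, null, 2)); // pretty-print back to Drive
}
```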
Check out the article for insights on navigating the development process and, if you want one, an Apps Script-powered JSON Editor for Google Drive.
Use Gemini and Imagen 3 on Vertex AI to create the images you envision. Generate tailored images from reference sources with Apps Script.
Have you ever generated an image using Gemini in Google Workspace and wished you could easily tweak or iterate on it? While Gemini for Workspace is great for initial creation, iterating on those images directly isn’t currently straightforward. A recent post by Stéphane Giron highlights a programmatic approach using Google Apps Script, Vertex AI (with Imagen 3 and Gemini models) to achieve the goal of generating image variations based on a source image.
Stéphane’s method takes a source image (which could be one previously generated or found elsewhere) and provides it to the Vertex AI API for Gemini (e.g., gemini-2.0-pro) along with text instructions for the required changes. The Gemini model analyses the image and the request to generate a new prompt. This new prompt is then used with the Imagen 3 model (e.g., imagen-3.0-generate-001) via Vertex AI to generate the final image variation.
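Here is a rough sketch of that two-step pipeline, not Stéphane’s actual code: the endpoints follow the standard Vertex AI REST patterns, but the project ID, region, and prompt wording are placeholders, and the script needs the cloud-platform OAuth scope in its manifest:

```javascript
/**
 * Rough sketch of the two-step pipeline: Gemini turns a source image plus
 * change instructions into a new prompt, then Imagen 3 generates from it.
 * Project, region, and payload shapes are assumptions — check the Vertex
 * AI docs. Requires the https://www.googleapis.com/auth/cloud-platform scope.
 */
function generateImageVariation(sourceImageBlob, instructions) {
  const project = 'YOUR_PROJECT_ID', region = 'us-central1';
  const base = `https://${region}-aiplatform.googleapis.com/v1/projects/${project}` +
    `/locations/${region}/publishers/google/models`;
  const headers = { Authorization: 'Bearer ' + ScriptApp.getOAuthToken() };

  // Step 1: ask Gemini to rewrite the source image + instructions as a prompt.
  const geminiRes = UrlFetchApp.fetch(`${base}/gemini-2.0-pro:generateContent`, {
    method: 'post', contentType: 'application/json', headers,
    payload: JSON.stringify({
      contents: [{ role: 'user', parts: [
        { inlineData: { mimeType: 'image/png',
            data: Utilities.base64Encode(sourceImageBlob.getBytes()) } },
        { text: 'Write an image-generation prompt that recreates this image ' +
            'with these changes: ' + instructions }
      ] }]
    })
  });
  const newPrompt = JSON.parse(geminiRes.getContentText())
    .candidates[0].content.parts[0].text;

  // Step 2: hand the generated prompt to Imagen 3 for the final variation.
  const imagenRes = UrlFetchApp.fetch(`${base}/imagen-3.0-generate-001:predict`, {
    method: 'post', contentType: 'application/json', headers,
    payload: JSON.stringify({ instances: [{ prompt: newPrompt }],
      parameters: { sampleCount: 1 } })
  });
  const b64 = JSON.parse(imagenRes.getContentText()).predictions[0].bytesBase64Encoded;
  return Utilities.newBlob(Utilities.base64Decode(b64), 'image/png', 'variation.png');
}
```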
It’s interesting to contrast this with another solution we’ve previously featured from Kanshi Tanaike on ‘Iterative Image Generation with the Gemini API and Google Apps Script’. While Tanaike’s method uses the Gemini API’s chat history to iteratively build an image from sequential text prompts, Stéphane’s focuses on reinterpreting a source image with specific modifications via a newly generated prompt.
You can check out Stéphane Giron’s source post for the complete code and setup instructions.
Last week at Google Cloud Next ’25 was packed with announcements, but one that particularly grabbed my attention was the unveiling of Google Workspace Flows. As Google Apps Script developers, many of us are familiar with automating simple tasks. However, the Flows demo hinted at a more accessible approach for tackling complex business processes.
Think about those workflows that go beyond basic if-this-then-that and into a world where you can easily configure Gemini to be your virtual assistant: updating specific spreadsheet entries based on nuanced analysis, or finding and summarising information scattered across different files before replying to a customer. Traditional automation often hits a wall here because these tasks require context, reasoning, and sometimes even creative generation – capabilities standard automation tools lack.
Workspace Flows: AI Agents Joining the Workflow
What Google presented with Flows is a new solution for Google Workspace designed to automate these kinds of multi-step processes with AI providing assistance. Instead of just triggering actions, Flows uses AI models, including Gemini, as agents within the loop. This means the AI isn’t just kicking off a process; it’s actively participating – researching, analysing, creating, and reasoning to help get work done more efficiently and intelligently.
Having used tools like IFTTT, it feels like a conceptual shift from simple automation to building intelligent, agentic workflows directly within the Workspace environment, without writing a single line of code.
Why Flows Looks Interesting for Developers: Current Capabilities and Future Vision
Beyond the core AI capabilities shown in the demo, Google outlined a vision for Flows that is particularly relevant for developers, indicating where the platform is heading:
No-Code/Low-Code Interface (Current): The initial preview allows configuring triggers (like new emails, form responses, etc.) and actions across core Workspace apps and integrating Gemini and Gems for AI-driven steps, removing the need to turn to alternatives (custom code or third-party tools).
Apps Script Extensibility (Future Vision): Google announced plans to allow developers to build their own custom triggers and actions using Apps Script. This creates the opportunity to integrate your own systems or add specific logic to get more out of Flows. The presentation briefly showed an example appsscript.json manifest snippet for declaring these elements (around 13:08).
Workspace Connectors Platform (Future Vision): A dedicated platform for third-party integrations was also announced in the presentation as part of the roadmap. The plan is to enable connections to tools like Jira, Asana, Salesforce, HubSpot, etc., allowing them to be used as triggers or actions. The stated goal is to include the ability to build end-to-end workflows spanning beyond Google Workspace, with connectors built once potentially working in both Flows and Gemini Extensions.
Bring Your Own Models via Vertex AI (Future Vision): For advanced AI needs, Google shared the vision for integrating your own custom or fine-tuned models hosted on Google Cloud’s Vertex AI. The concept shown involved an ‘Ask an LLM’ step where you could select a ‘Custom Model’ directly within the flow builder, pointing towards future capabilities for incorporating highly specialized AI into Workspace automations.
Looking Ahead
Google Workspace Flows is definitely a platform I’ll be watching closely. The initial preview, focusing on AI agents-in-the-loop and core Workspace automation, is already compelling. But the announced roadmap for developer extensibility – adding Apps Script support, a robust connector platform, and the ability to call custom Vertex AI models – is what makes Flows truly exciting from a development perspective.
Flows is currently in alpha with its initial feature set. If this vision sounds as interesting to you as it did to me, you can sign up for the early access waitlist.
Getting started with generative AI in your Google Apps Script projects just got a whole lot easier! Google AI Studio has introduced a handy new feature allowing you to directly export your AI interactions as ready-to-use Apps Script code. If you’re new to Apps Script or integrating AI, this is a fantastic way to quickly add powerful features to your automations. Here’s how you can grab the code:
Click the Get code icon (</>) above the chat prompt.
In the ‘Get code’ window, click the language dropdown (this might initially show ‘Python’).
Select Apps Script from the dropdown list.
Click the Copy button to copy the generated Apps Script code to your clipboard.
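The exported code is essentially a plain UrlFetchApp call to the Gemini API. The snippet below is illustrative of the kind of code you get back, not the exact export, which will vary with your prompt and model choice:

```javascript
/**
 * Illustrative example of an AI Studio-style Apps Script export — the
 * actual generated code may differ. Store the API key in Script
 * Properties rather than hard-coding it.
 */
function callGemini() {
  const apiKey = PropertiesService.getScriptProperties().getProperty('GEMINI_API_KEY');
  const url = 'https://generativelanguage.googleapis.com/v1beta/models/' +
    'gemini-2.0-flash:generateContent?key=' + apiKey;
  const payload = {
    contents: [{ parts: [{ text: 'Summarise the benefits of Apps Script in one sentence.' }] }]
  };
  const response = UrlFetchApp.fetch(url, {
    method: 'post',
    contentType: 'application/json',
    payload: JSON.stringify(payload)
  });
  const result = JSON.parse(response.getContentText());
  Logger.log(result.candidates[0].content.parts[0].text);
}
```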
If you are using AI Studio with your enterprise data, make sure you’re using a billable account so that your data is protected. This addition is perfect for rapid prototyping and understanding the basic API interaction. However, for applications needing features essential for production environments, such as robust error handling with exponential back-off, you might want to look at my open source GeminiApp library.
Gemini API now generates images via Flash Experimental and Imagen 3. This report introduces image evolution within conversations using Gemini API with Google Apps Script.
The Gemini API recently gained the ability to generate images. Taking this a step further, Kanshi Tanaike has explored how to create evolving images within a conversation using Google Apps Script.
Often, you might want to generate an image and then iteratively add or modify elements in subsequent steps. Kanshi’s approach cleverly uses the chat functionality of the Gemini API (gemini-2.0-flash-exp model). By sending prompts sequentially within a chat, the API uses the conversation history, allowing each new image to build upon the previous one. This enables the generation of images that evolve step-by-step based on your prompts, as demonstrated in the original post with examples like drawing successive items on a whiteboard.
This technique is particularly useful because, as noted in the post, using chat history provides better results for this kind of sequential image generation compared to generating images from isolated prompts.
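To make the chat-history idea concrete, here is a minimal sketch (not Kanshi’s full solution): each request resends the accumulated conversation so the model can build on its previous image. The responseModalities payload shape is based on the Gemini API docs for gemini-2.0-flash-exp, so verify it against the current reference:

```javascript
/**
 * Minimal sketch: sequential image generation using conversation history.
 * Prompts and file names are illustrative; the API key is read from
 * Script Properties.
 */
function evolveImage() {
  const apiKey = PropertiesService.getScriptProperties().getProperty('GEMINI_API_KEY');
  const url = 'https://generativelanguage.googleapis.com/v1beta/models/' +
    'gemini-2.0-flash-exp:generateContent?key=' + apiKey;
  const history = [];
  ['Draw a whiteboard.', 'Add a red circle to the whiteboard.'].forEach((prompt, i) => {
    history.push({ role: 'user', parts: [{ text: prompt }] });
    const res = UrlFetchApp.fetch(url, {
      method: 'post', contentType: 'application/json',
      payload: JSON.stringify({
        contents: history, // resend the whole conversation each turn
        generationConfig: { responseModalities: ['TEXT', 'IMAGE'] }
      })
    });
    const parts = JSON.parse(res.getContentText()).candidates[0].content.parts;
    history.push({ role: 'model', parts }); // keep the model turn (incl. image) in history
    const imgPart = parts.find(p => p.inlineData);
    if (imgPart) {
      DriveApp.createFile(Utilities.newBlob(
        Utilities.base64Decode(imgPart.inlineData.data),
        imgPart.inlineData.mimeType, 'step' + (i + 1) + '.png'));
    }
  });
}
```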
Kanshi Tanaike’s original post includes a detailed explanation, setup instructions (including API key usage and library installation), and complete sample code snippets that you can adapt for your own Google Workspace projects.