Why use a dedicated app when you can simply ask Gemini to write and run the Python code for you? A look at the power of Google Apps Script and GenAI
For many Google Workspace developers, handling complex file formats or performing advanced data analysis has traditionally meant navigating the limitations of Apps Script’s built-in services. We have previously featured solutions for these challenges on Pulse, such as merging PDFs and converting pages to images or using the PDFApp library to “cook” documents. While effective, these methods often rely on loading external JavaScript libraries like pdf-lib, which can be complex to manage and subject to the script’s memory and execution limits.
While users of Gemini for Google Workspace may already be familiar with its ability to summarise documents or analyse data in the side panel, those features are actually powered by the same “Code Execution” technology under the hood. The real opportunity for developers lies in using this same engine within Apps Script to build custom, programmatic workflows that go far beyond standard chat interactions.
A recent project by Stéphane Giron highlights this path. By leveraging the Code Execution capability of the Gemini API, it is possible to offload intricate document and data manipulation to a secure Python sandbox, returning the results directly to Google Drive.
Moving beyond static logic
The traditional approach to automation involves writing specific code for every anticipated action. The shift here is that Gemini Code Execution does not rely on a pre-defined set of functions. Instead, when provided with a file and a natural language instruction, the model generates the necessary Python logic on the fly. Because the execution environment includes robust libraries for binary file handling and data analysis, the model can perform varied tasks without the developer needing to hardcode each individual routine. Notably, the model can learn iteratively, which means that if the generated code fails, it can refine and retry the script up to five times until it reaches a successful output.
While basic data analysis is now a standard part of Gemini for Workspace, having direct access to the library list in the Gemini sandbox opens up additional specialised, developer-focused avenues:
Dynamic Document Generation: Using python-docx and python-pptx, you can programmatically generate high-fidelity Office documents or presentations based on data from Google Workspace, bridging the gap between ecosystems without manual copy-pasting. [Here is a container-bound script for Google Docs, based on Stéphane's code, that generates a summary PowerPoint file]
Programmatic Image Inspection: Using Gemini 3 Flash, you can build tools that inspect images at a granular level. For example, a script could process a batch of site inspection photos, using Python code to “zoom and inspect” specific equipment labels or gauges, and then log those values directly into a database.
The mechanics and constraints
The bridge between Google Drive and this dynamic execution environment follows a straightforward pattern:
File Preparation: The script retrieves the target file from Drive and converts the blob into a format compatible with the Gemini API.
Instruction & Execution: Along with the file, a prompt is sent describing the desired outcome.
The Sandbox: Gemini writes and runs the Python code required to fulfil the request.
Completion: Apps Script receives the modified file or data and saves it back to the user’s Drive.
However, there are important boundaries to consider. The execution environment is strictly Python-based and has a maximum runtime of 30 seconds. Furthermore, developers should be mindful of the billing model: you are billed for “intermediate tokens,” which include the generated code and the results of the execution, before the final summary is produced.
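The pattern above can be sketched as a request-body builder. The snippet below constructs the JSON payload for a Gemini `generateContent` call with the code execution tool enabled and a file attached inline; the model name, prompt, and file data are placeholders, and in Apps Script the body would be posted with `UrlFetchApp.fetch()`.

```javascript
// Sketch: build the JSON body for a Gemini generateContent call that
// enables the code execution tool and attaches a file inline.
// Field names follow the public Gemini API REST schema (camelCase JSON);
// the prompt and base64 data here are illustrative placeholders.
function buildCodeExecutionRequest(prompt, base64Data, mimeType) {
  return {
    contents: [{
      role: 'user',
      parts: [
        { text: prompt },
        { inlineData: { mimeType: mimeType, data: base64Data } }
      ]
    }],
    tools: [{ codeExecution: {} }]
  };
}

// In Apps Script this body would be sent with something like:
// UrlFetchApp.fetch(url, { method: 'post', contentType: 'application/json',
//                          payload: JSON.stringify(body) });
const body = buildCodeExecutionRequest(
  'Merge the attached PDF pages into one file',
  'JVBERi0...', // base64 of the PDF blob (truncated placeholder)
  'application/pdf'
);
console.log(body.tools[0]);
```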
Get started
For those interested in the implementation details, Stéphane has shared a repository containing the Apps Script logic and the specific configuration needed to enable the code_execution tool.
We have all been there: You build a beautiful, automated dashboard in Google Sheets. It’s dynamic, it’s collaborative, and it’s perfectly integrated with your data sources. Then, the stakeholder request comes in: “Can you email this to me as an Excel file every Monday morning?”
For Google Workspace developers, the “Excel Requirement” is often a stumbling block. While Google Sheets is powerful, the bridge to Excel can be shaky. In this post, I’m going to explore four architectural approaches to solving this problem. If you would like to play along, here are the Google Sheet Invoice Template and the Excel Invoice Template.
1. The Low-Code Agent (AppSheet Automation)
Before you write a single line of code, you should consider if the platform can do the heavy lifting for you. Since AppSheet is now a core part of Google Workspace, it offers a native “No-Code” way to generate Excel files that is robust, scheduled, and template-based.
The Strategy
Instead of writing a script to fetch data and build a file, you connect your Google Sheet to an AppSheet app. You then configure an Automation that triggers on a schedule or a data change.
The Workflow:
Trigger: A new row is added to your “Invoices” sheet (or a weekly schedule fires).
Task: AppSheet runs a “Create a new file” task.
Template: It reads a simplified Excel template stored in Drive, replaces tags like <<[Qty]>> with actual data, and generates the file.
Delivery: The file is saved to Drive or emailed directly to the client.
Why This Wins
Zero Maintenance: No scripts to debug, no OAuth tokens to manage, and no breaking changes when APIs update.
Native Templates: It supports complex Excel formatting out of the box.
The Limitation
Syntax: You must use AppSheet’s specific templating syntax (<< >>) inside your Excel file.
Scaling: Generating reports with thousands of line items can hit automation timeout limits.
2. The Native Export (Apps Script)
If you need more granular control or don’t want to spin up an AppSheet app, the most common starting point is attempting to do everything within Apps Script. You create a temporary Google Sheet, fill it with data, and then export it using the Drive API.
The Code
In this scenario, we assume you have a Google Sheet version of your Invoice template. We first write the specific line items to the Sheet, then export it.
function generateInvoiceExport() {
const ss = SpreadsheetApp.getActiveSpreadsheet();
const sheet = ss.getSheetByName('InvoiceTemplate');
// 1. Prepare Invoice Data
const invoiceData = [
{ desc: "Web Development - Phase 1", qty: 40, unit: 100 },
{ desc: "Server Hosting (Annual)", qty: 1, unit: 250 },
{ desc: "Domain Registration", qty: 2, unit: 15 },
{ desc: "Maintenance Retainer", qty: 10, unit: 85 }
];
// 2. Map data to the Sheet structure (Cols B, C, D, E, F)
// B=Desc, E=Qty, F=Unit. (C and D are skipped/empty)
const rows = invoiceData.map(item => [item.desc, "", "", item.qty, item.unit]);
// 3. Write data to the Google Sheet
// Matches Template: Start at Row 19, Column B (2)
if (rows.length > 0) {
// Corrected: ensure the number of columns (5) matches your data structure
sheet.getRange(19, 2, rows.length, 5).setValues(rows);
}
// 4. Construct the export URL for the populated sheet
const fileName = `Invoice_${new Date().toISOString().slice(0,10)}.xlsx`;
const url = `https://docs.google.com/spreadsheets/d/${ss.getId()}/export?format=xlsx&gid=${sheet.getSheetId()}`;
// 5. Fetch and Save
const params = {
method: "GET",
headers: { "Authorization": `Bearer ${ScriptApp.getOAuthToken()}` },
muteHttpExceptions: true
};
const response = UrlFetchApp.fetch(url, params);
if (response.getResponseCode() === 200) {
const blob = response.getBlob().setName(fileName);
const file = DriveApp.createFile(blob);
console.log(`Success: Invoice saved to Drive (ID: ${file.getId()})`);
} else {
console.log(`Error: ${response.getContentText()}`);
}
}
The Limitations
Speed & Scale: The overhead of creating a temporary sheet and exporting it adds up. Generating 50 invoices in a loop will take a long time.
The “Source” Gap: Your source material must be a Google Sheet, meaning you lose Excel-specific features like Power Query or VBA macros from your template.
3. The Browser Round-Trip (ExcelJS)
If you need to maintain stricter fidelity to an existing Excel template (preserving fonts, colors, and layouts) and are interested in speed, you can use the “Round Trip” technique.
This method uses the user’s browser as a processing engine. You fetch the Invoice.xlsx template from Drive, process it client-side with ExcelJS, and send the result back.
Parallel Fetching (Client): The browser fires concurrent requests to Apps Script to fetch the template (Base64) and invoice data.
Client-Side Processing: ExcelJS loads the template, injects data into specific cells (preserving layout), and recompiles the file.
Upload: The new file is encoded and sent back to Drive.
The Code
This snippet focuses on the core orchestration logic. For the full UI setup and helper functions, see the linked project.
/* --- CLIENT SIDE (Core Logic) --- */
async function startGeneration() {
// 1. Parallel Fetching
// Fire both requests at once to speed up the wait time
const [templateResponse, invoiceData] = await Promise.all([
runGoogleScript('getTemplateBase64', null),
runGoogleScript('getInvoiceData')
]);
// 2. Load into ExcelJS
const workbook = new ExcelJS.Workbook();
const binaryString = atob(templateResponse.base64);
const bytes = new Uint8Array(binaryString.length);
for (let i = 0; i < binaryString.length; i++) { bytes[i] = binaryString.charCodeAt(i); }
await workbook.xlsx.load(bytes.buffer);
// 3. Modify the Sheet
const sheet = workbook.worksheets[0];
let currentRow = 19; // Starting at Row 19 based on template
invoiceData.forEach((item, index) => {
// Ensure we do not write past the visual template limit (e.g. Row 22)
if (currentRow + index > 22) return;
const row = sheet.getRow(currentRow + index);
// Inject data into specific cells (B, E, F)
row.getCell(2).value = item.desc;
row.getCell(5).value = item.qty;
row.getCell(6).value = item.unit;
// ExcelJS preserves the existing styles of these cells!
row.commit();
});
// 4. Export & Save
const outBuffer = await workbook.xlsx.writeBuffer();
// Convert buffer to base64 (simplified for brevity)
let binary = '';
const outBytes = new Uint8Array(outBuffer);
for (let i = 0; i < outBytes.length; i++) {
binary += String.fromCharCode(outBytes[i]);
}
const outBase64 = btoa(binary);
// Upload back to Drive
await runGoogleScript('saveProcessedFile', outBase64, `Invoice_${Date.now()}.xlsx`);
}
/* --- SERVER SIDE (Apps Script) --- */
function getTemplateBase64(templateId) {
const file = DriveApp.getFileById(templateId);
return {
base64: Utilities.base64Encode(file.getBlob().getBytes())
};
}
function saveProcessedFile(base64Content, fileName) {
const blob = Utilities.newBlob(
Utilities.base64Decode(base64Content),
MimeType.MICROSOFT_EXCEL,
fileName
);
return { url: DriveApp.createFile(blob).getUrl() };
}
The Trade-off
Payload Limits: google.script.run calls can become unstable when the payload exceeds 50MB.
Base64 Inflation: Remember that Base64 encoding increases file size by ~33%.
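The ~33% figure follows directly from how Base64 works: every 3 input bytes become 4 output characters. The sketch below checks the arithmetic and also shows a chunked byte-to-Base64 conversion, which avoids the slow character-by-character string building used in the snippet above for large buffers.

```javascript
// Base64 encodes every 3 input bytes as 4 output characters, so the
// payload grows by roughly a third (plus padding).
function base64Length(byteCount) {
  return Math.ceil(byteCount / 3) * 4; // padded output length
}
console.log(base64Length(3 * 1024 * 1024)); // prints 4194304 (3 MiB in -> 4 MiB out)

// Building the string one character at a time gets slow for large buffers;
// converting in chunks keeps the call overhead manageable.
function bytesToBase64(bytes, chunkSize = 0x8000) {
  let binary = '';
  for (let i = 0; i < bytes.length; i += chunkSize) {
    binary += String.fromCharCode.apply(null, bytes.subarray(i, i + chunkSize));
  }
  return btoa(binary);
}
```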
4. The Enterprise Engine (Cloud Functions + Python)
For robust, high-fidelity reporting, we need to leave the Google Sheets conversion pipeline behind. By using Google Cloud Functions with Python, we can manipulate native Excel files directly using libraries like openpyxl.
The “User Identity” Pattern
Instead of managing complex Service Account permissions, this solution uses the User’s Identity to perform Drive operations.
Apps Script: Generates an OIDC ID Token (for identity) using getIdentityToken() and an OAuth Access Token (for Drive permissions) with getOAuthToken().
Cloud Function: Verifies the ID Token and uses the Access Token to initialise the Drive API client as the user.
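To see what the Cloud Function is actually checking, it can help to peek inside the ID token. The sketch below only decodes the payload segment of a JWT — it does not verify anything (real verification must validate the signature, which is what Google's auth libraries do server-side) — and uses a fabricated token for demonstration.

```javascript
// Illustration only: decode (NOT verify!) the payload segment of an OIDC
// ID token to inspect its claims, e.g. the 'aud' (audience) claim the
// Cloud Function compares against APPS_SCRIPT_CLIENT_ID.
function decodeJwtPayload(idToken) {
  const payloadSegment = idToken.split('.')[1];
  // JWTs use base64url; convert to standard base64 before decoding
  const b64 = payloadSegment.replace(/-/g, '+').replace(/_/g, '/');
  return JSON.parse(atob(b64));
}

// A fabricated token (header.payload.signature) for demonstration:
const fakePayload = btoa(JSON.stringify({
  aud: 'APPS_SCRIPT_CLIENT_ID',
  email: 'user@example.com'
}));
const claims = decodeJwtPayload(`eyJhbGciOiJSUzI1NiJ9.${fakePayload}.sig`);
console.log(claims.aud); // the audience the function checks against its client ID
```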
1. Cloud Function (cloud-fn.py)
Open cloud-fn.py and update APPS_SCRIPT_CLIENT_ID with your Apps Script project’s OAuth Client ID. Note: You can find this ID by running the logClientId() helper function in Code.js.
2. Google Apps Script (Code.js)
Update Manifest (appsscript.json): Add the https://www.googleapis.com/auth/userinfo.email scope.
Configuration: Update YOUR_CLOUD_FUNCTION_URL in callInvoiceEngine() with your deployed function URL.
The Code
Google Apps Script (Code.js)
function callInvoiceEngine() {
const url = "YOUR_CLOUD_FUNCTION_URL";
// 1. Get Tokens
const accessToken = ScriptApp.getOAuthToken();
const idToken = ScriptApp.getIdentityToken(); // Requires 'openid' scope in manifest
// --- LOGGING FOR LOCAL TESTING ---
console.log("--- CREDENTIALS FOR LOCAL TESTING ---");
console.log("ID Token:", idToken);
console.log("Access Token:", accessToken);
console.log("-------------------------------------");
const payload = {
templateId: "ID_OF_TEMPLATE_FILE",
folderId: "ID_OF_DESTINATION_FOLDER",
// Matches the same dataset used in our other examples
data: [
{ desc: "Web Development - Phase 1", qty: 40, unit: 100 },
{ desc: "Server Hosting (Annual)", qty: 1, unit: 250},
{ desc: "Domain Registration", qty: 2, unit: 15},
{ desc: "Maintenance Retainer", qty: 10, unit: 85}
],
// The Magic Key: Passing the active user's Access Token for Drive operations
userToken: accessToken
};
const options = {
method: "post",
contentType: "application/json",
headers: {
"Authorization": `Bearer ${idToken}`
},
payload: JSON.stringify(payload)
};
UrlFetchApp.fetch(url, options);
}
function logClientId() {
// Helper to find the Project's OAuth Client ID
var token = ScriptApp.getOAuthToken();
var url = 'https://www.googleapis.com/oauth2/v1/tokeninfo?access_token=' + token;
var response = UrlFetchApp.fetch(url);
var data = JSON.parse(response.getContentText());
console.log('Client ID:', data.issued_to);
}
Pro Tip: Batching Requests
If you need to generate 50 invoices at once, don’t loop through them one by one. Use UrlFetchApp.fetchAll(requests) to fire off parallel requests to your Cloud Function. Since the heavy lifting happens in the cloud, Apps Script just acts as the orchestrator.
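The batching idea can be sketched as a pure request builder: turn a list of invoice payloads into the array of request objects that UrlFetchApp.fetchAll() expects. The URL and token below are placeholders, and the function itself is plain data construction, so it works outside Apps Script too.

```javascript
// Sketch: build the request objects UrlFetchApp.fetchAll() expects,
// one per invoice payload. The URL and ID token are placeholders.
function buildBatchRequests(url, idToken, payloads) {
  return payloads.map(payload => ({
    url: url,
    method: 'post',
    contentType: 'application/json',
    headers: { Authorization: `Bearer ${idToken}` },
    payload: JSON.stringify(payload),
    muteHttpExceptions: true
  }));
}

// In Apps Script: const responses = UrlFetchApp.fetchAll(requests);
const requests = buildBatchRequests('https://example.com/fn', 'ID_TOKEN', [
  { templateId: 'T1', data: [] },
  { templateId: 'T2', data: [] }
]);
console.log(requests.length); // 2
```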
Local Test Script (test_request.py)
Before deploying, you can test the entire flow locally to ensure your tokens and permissions are working correctly.
Start the Function: functions-framework --target=generate_excel_report --source=cloud-fn.py --port=8080
Run the Tester: python3 test_request.py (paste tokens from the Apps Script logs when prompted).
import requests
import json
def run_test():
print("--- Local Cloud Function Tester ---")
print("Ensure you are running: functions-framework --target=generate_excel_report --source=cloud-fn.py --port=8080\n")
# 1. Gather Tokens & IDs
print("Paste your Apps Script Tokens (use Logger.log in GAS):")
id_token = input("1. ID Token (ScriptApp.getIdentityToken()): ").strip()
if not id_token: return
access_token = input("2. Access Token (ScriptApp.getOAuthToken()): ").strip()
if not access_token: return
template_id = input("3. Google Sheet/Excel Template ID: ").strip()
folder_id = input("4. Destination Folder ID: ").strip()
# 2. Prepare Payload
payload = {
"templateId": template_id,
"folderId": folder_id,
"data": [
{ "desc": "Local Test Item 1", "qty": 1, "unit": 10 },
{ "desc": "Local Test Item 2", "qty": 5, "unit": 20 }
],
"userToken": access_token
}
headers = {
"Authorization": f"Bearer {id_token}",
"Content-Type": "application/json"
}
# 3. Send Request
url = "http://localhost:8080"
print(f"\nSending POST request to {url}...")
try:
response = requests.post(url, json=payload, headers=headers)
print(f"\nStatus Code: {response.status_code}")
print(f"Response Body: {response.text}")
except requests.exceptions.ConnectionError:
print(f"\n[!] Could not connect to {url}.")
print("Is the functions-framework running?")
if __name__ == "__main__":
run_test()
Why This Wins
True Excel Support: You are editing the actual binary file, not a conversion.
Separation of Concerns: The heavy lifting is done in the cloud, not in the browser or the Apps Script runtime.
Summary: Which Tool for Which Job?
| Feature | AppSheet | Apps Script (Native) | Client-Side (ExcelJS) | Cloud Functions (Python) |
| --- | --- | --- | --- | --- |
| Complexity | Very Low | Low | High | High |
| Cost | Free | Free | Free | GCP Usage (Low) |
| Best For | Standard Workflows | Simple Reports | Interactive Template Fills | Enterprise Reports |
Final Thoughts
There is no “one size fits all” here. The “Excel Requirement” is a spectrum:
For quick internal tools: Stick to Method 1 (AppSheet) and Method 2 (Native). They’re fast to build and free.
For polished client reports: Method 3 (ExcelJS) is your best friend. It keeps the heavy lifting in the browser and preserves your branding.
For enterprise scale: If you are generating hundreds of invoices a week, Method 4 (Cloud Functions) is the robust choice. It decouples the generation logic from the spreadsheet UI.
Choose the architecture that fits your volume and your “pixel-perfect” tolerance. I’m also keen to hear if you have another ‘Excel Generator’ approach. Happy scripting!
This tutorial shows you how to publish AI agents to Google Workspace as Google Workspace add-ons, using Apps Script or HTTP endpoints. After you publish your add-on, your users can interact with the AI agents within their workflows.
Google has released a comprehensive new tutorial that demonstrates how to bridge the gap between complex AI agents and the daily workflows of Google Workspace users. The Travel Concierge sample uses the Agent Development Kit (ADK) to deploy a conversational, multi-agent AI directly into the Google Workspace platform.
While many AI demos focus on standalone chat interfaces, this solution stands out by implementing the agent as a Google Workspace Add-on. This means the AI isn’t just a browser tab; it is a helpful sidebar accessible directly within Gmail, Calendar, Drive, Docs, Sheets, and Slides, as well as a fully functioning Google Chat app.
Key Features for Developers
The sample provides a robust blueprint for developers looking to build “agentic” workflows. Key capabilities include:
Multi-Agent Architecture: The solution uses Vertex AI to manage a “Travel Concierge” that orchestrates sub-agents for specific tasks, utilising tools like the Google Maps Platform Places API and Google Search Grounding.
Context Awareness: In applications like Gmail, the agent can read the context of the currently selected email to help plan trips based on actual correspondence.
Rich User Interfaces: The add-on moves beyond simple text, utilising the Card framework to display rich widgets and interactive elements.
Cross-Platform Persistence: User sessions are managed by Vertex AI, ensuring that a conversation started in Google Chat can be seamlessly continued in a Google Doc sidebar.
Flexible Implementation
The Apps Script implementation demonstrates how to connect a standard Workspace Add-on to the powerful Vertex AI backend using UrlFetchApp. It serves as an excellent reference for handling synchronous execution limits and rendering complex agent responses within the constraints of the Add-on sidebar.
Whether you are looking to build a travel planner or a complex enterprise assistant, this sample provides the architectural patterns needed to bring your AI agents to where your users actually work.
In this video we build a Chat app as a Google Workspace add-on with Apps Script and extend it to other Workspace applications (Calendar, Gmail, Drive, Docs, Sheets, and Slides).
To see this new framework in action, Pierrick Voulet from the Google Developer Relations team has released a fantastic video tutorial, ‘Build a Chat app as a Google Workspace add-on with Apps Script’.
The ‘OOO Assistant’: A Practical Demonstration
The video centres on building an ‘Out-of-Office Assistant’ that can block out the user’s Calendar for the day and set a Gmail out-of-office auto-reply. The real magic is how this Chat-based logic extends into a full Workspace Add-on, available in the sidebars of Gmail, Calendar, Drive, Docs, Sheets and Slides. This is achieved with minimal code changes, primarily by configuring the appsscript.json manifest to include:
A homepageTrigger to display a UI card in the add-on sidebar.
universalActions that reuse the exact same functions (blockDayOut(), setAutoReply()) originally built for the Chat quick commands.
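A trimmed appsscript.json sketch illustrates the shape of this configuration; the function names follow the video, while the add-on name and logo URL are placeholders (see the shared code for the full manifest).

```json
{
  "addOns": {
    "common": {
      "name": "OOO Assistant",
      "logoUrl": "https://example.com/icon.png",
      "homepageTrigger": { "runFunction": "onHomepage" },
      "universalActions": [
        { "label": "Block my day out", "runFunction": "blockDayOut" },
        { "label": "Set auto-reply", "runFunction": "setAutoReply" }
      ]
    },
    "gmail": {},
    "calendar": {}
  }
}
```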
The video provides a complete overview of the process, and you can get hands-on with the shared code yourself. For more technical details, check out the official Extend Google Chat documentation.
Learn how to add, update and search or query for custom Google Drive properties in file metadata with Google Apps Script
Managing and searching for project files in Google Drive can be a challenge, especially when naming conventions are not followed. While Google’s Drive Labels API offers a solution, its use is often restricted to administrators, leaving many developers looking for a more accessible alternative.
Over on his blog, Yagisanatode, developer Scott Donald provides an in-depth tutorial on using custom file properties with Apps Script. This technique allows you to programmatically attach your own searchable metadata—like a project ID, version number, or status—to any file in Drive.
The tutorial walks you through the entire process, including:
Enabling the Drive API v3 advanced service.
Using Drive.Files.update() to add properties to a file.
Crafting queries with Drive.Files.list() to find files based on your custom properties.
For developers working at scale, the tutorial also covers how to avoid slow, iterative updates. Scott introduces a powerful batch update method that uses UrlFetchApp to efficiently modify thousands of files in a single request.
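The query side of this can be sketched as a small string builder. The `properties has { key='…' and value='…' }` syntax is part of the documented Drive v3 search grammar; the escaping helper and property names below are illustrative.

```javascript
// Sketch: build a Drive v3 search query for a custom file property,
// escaping single quotes as the Drive query syntax requires (\').
function propertyQuery(key, value) {
  const esc = s => String(s).replace(/'/g, "\\'");
  return `properties has { key='${esc(key)}' and value='${esc(value)}' }`;
}

// In Apps Script, with the Drive advanced service (v3) enabled:
// const res = Drive.Files.list({ q: propertyQuery('projectId', 'P-123') });
console.log(propertyQuery('projectId', 'P-123'));
```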
This is an excellent, practical guide for any intermediate developer looking to build more organised and robust file management systems in Google Drive.
Google Drive is now integrated with the Workspace Events API, which allows third-party developers to create subscriptions on Drive items and receive notifications via Cloud Pub/Sub when those resources change. This offers developers a more reliable, featureful way of receiving events over the current files.watch and changes.watch methods that exist today.
Good news for developers working with Google Drive! Google has just announced that the Google Drive Events API is now available in a Developer Public Preview. This is a significant update for anyone who needs to track changes to files in Google Drive, offering a more robust and feature-rich way to receive notifications.
For a long time, developers have relied on the files.watch and changes.watch methods to monitor file changes. Whilst these methods have been useful, the new Drive Events API, integrated with the Google Workspace Events API, promises a more reliable and scalable solution.
What can you do with the new API?
The new API allows you to subscribe to events on Drive items and receive notifications through Cloud Pub/Sub. In this initial public preview, the following events are supported:
A file has been added to a folder or shared drive.
A file has been moved to a folder or shared drive.
A file has been edited, or a new revision is uploaded.
A file has been trashed or restored from the trash.
An access proposal has been created or resolved for a file.
This opens up a whole range of possibilities for building powerful applications that react in real-time to changes in Google Drive. Imagine creating automated workflows that trigger when a new file is added to a specific folder, or a security tool that alerts you when a sensitive document is moved or trashed.
How to get started
To start using the new Google Drive Events API, you’ll need to be enrolled in the Workspace Developer Public Preview program. Once you’re in, you can find the relevant documentation and getting started guides on the Google Developers site.
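Subscriptions are created through the Workspace Events API, so the request is ordinary JSON that Apps Script can build and POST. The sketch below shows the general shape of a subscription body; the Drive target-resource format and the event-type string are illustrative placeholders — confirm the exact values in the preview documentation before using them.

```javascript
// Sketch: request body for a Workspace Events API subscription on a Drive
// file. targetResource format and the event-type name below are
// ILLUSTRATIVE -- check the Developer Preview docs for the exact strings.
function buildDriveSubscription(fileId, topicName) {
  return {
    targetResource: `//drive.googleapis.com/files/${fileId}`, // placeholder format
    eventTypes: ['google.workspace.drive.file.v3.updated'],   // placeholder name
    notificationEndpoint: { pubsubTopic: topicName }
  };
}

// Remember: the Pub/Sub topic must live in the same Cloud project that
// creates the subscription.
const sub = buildDriveSubscription('FILE_ID', 'projects/my-project/topics/drive-events');
console.log(sub.notificationEndpoint.pubsubTopic);
```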
A Note for Google Apps Script Developers
You can absolutely use Google Apps Script to make the API calls to create and manage your event subscriptions. However, it is important to be aware that receiving the actual notifications requires a different approach to traditional Apps Script triggers.
The Drive Events API delivers all notifications to a Cloud Pub/Sub topic. This means you will need a Google Cloud Platform (GCP) project to handle these incoming messages. It’s also important to note that, according to the documentation, the Pub/Sub topic you use for notifications must be in the same Google Cloud project that you use to create the event subscription itself.
These new capabilities hopefully unlock some more efficient ways for developers and third parties to monitor Google Drive content.
Storing JSON files in Google Drive offers a practical approach for Google Workspace developers, especially when combined with Google Apps Script. While Drive lacks a built-in JSON editor, AI tools like Gemini can rapidly prototype solutions, such as web-based editors. However, creating a robust application requires more than just a quick fix. It involves understanding syntax highlighting, managing dependencies, and navigating the constraints of the Apps Script platform, highlighting the importance of both rapid prototyping and robust engineering skills.
I recently explored how to effectively manage JSON configuration files within Google Drive using Google Apps Script, and the journey from quick AI-assisted prototypes to robust solutions.
As part of this I delve into the benefits of storing JSON in Drive, the challenges of editing, and how AI tools like Gemini can provide a great starting point. However, as I discovered, building a truly polished tool requires deeper technical knowledge and problem-solving.
Check out the article for insights on navigating the development process and, if you want one, an Apps Script powered JSON Editor for Google Drive.
When working with Google Sheets, you often use formulas like IMPORTRANGE to transfer data between files. However, IMPORTRANGE is not always reliable, especially when files become too large and you see the message “The file is too large.” In such cases, switching to a Google Apps Script-based solution that uses CSV files for data export and import is more efficient and flexible.
Here’s how you can create a system to export data from a Google Sheet to a CSV file and then import it into another sheet.
Export Data Script
This script exports data from a sheet named YOUR_TAB_NAME to a CSV file saved in Google Drive.
function exportToCSV() {
const SheetName = "YOUR_TAB_NAME";
const NameFile = "YOUR_FILE_NAME";
const ss = SpreadsheetApp.getActive();
const sourceSheet = ss.getSheetByName(SheetName);
// Get the values as displayed in the sheet
const dataValues = sourceSheet.getDataRange().getDisplayValues();
// Creating CSV content
const csvContent = dataValues.map(row => row.join(";")).join("\n");
// Check if there is data
if(csvContent == "") return;
try {
const fileId = DriveApp.getFilesByName(NameFile + ".csv").next().getId();
DriveApp.getFileById(fileId).setContent(csvContent);
} catch {
DriveApp.createFile(NameFile + ".csv", csvContent, MimeType.CSV);
}
}
How It Works
Fetches data from the specified sheet (YOUR_TAB_NAME).
Creates CSV content, joining data with the ; separator.
Updates the CSV file if it already exists or creates a new one.
Import Data Script
This script imports data from a CSV file into a Google Sheet named YOUR_TAB_NAME.
function importFromCSV() {
const SheetName = "YOUR_TAB_NAME";
const NameFile = "YOUR_FILE_NAME";
const Separator = ";"; // Change your separator if needed
const destinationSheet = SpreadsheetApp.getActive().getSheetByName(SheetName);
let fileId;
try {
// Search for the file and get its ID
fileId = DriveApp.getFilesByName(NameFile + ".csv").next().getId();
} catch {
Logger.log("The file '" + NameFile + ".csv' was not found in Google Drive.");
return;
}
const file = DriveApp.getFileById(fileId);
const property = PropertiesService.getDocumentProperties();
// Check if the last imported data has already been loaded
const lastModified = property.getProperty("lastModified");
const timeStamp = file.getLastUpdated().toUTCString();
property.setProperty("lastModified", timeStamp);
if (lastModified == timeStamp) return;
// Retrieve the CSV content
const csvContent = file.getBlob().getDataAsString();
// Split the content into rows and then into columns using the specified separator
const data = csvContent
.split("\n") // Split into rows
.map(row => row.split(Separator)); // Split each row into columns using the specified separator
destinationSheet.clearContents();
destinationSheet.getRange(1, 1, data.length, data[0].length).setValues(data);
}
How It Works
Finds the CSV file in Google Drive.
Checks for changes by comparing the file’s last modification time with a stored property value.
Reads data from the CSV file and imports it into the sheet, clearing existing data first.
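One caveat with the import step: setValues() requires a rectangular 2D array, but splitting raw CSV can produce ragged rows (and the trailing "\n" written by the export leaves an empty last line). A small normalising helper, as a sketch, guards against that before writing:

```javascript
// Sketch: drop the empty trailing line left by the final "\n" and pad
// ragged rows so the 2D array is rectangular before setValues().
function normalizeRows(rows) {
  const trimmed = rows.filter(
    (r, i) => !(i === rows.length - 1 && r.length === 1 && r[0] === '')
  );
  const width = Math.max(...trimmed.map(r => r.length));
  return trimmed.map(r => r.concat(Array(width - r.length).fill('')));
}

const data = normalizeRows('a;b\nc\n'.split('\n').map(r => r.split(';')));
console.log(data); // -> [['a','b'], ['c','']]
```

In the import script above, `normalizeRows(...)` would wrap the result of the split/map chain before the setValues() call.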
Automation with Triggers
To automate the import process, you can use a time-based trigger to run the script at regular intervals (e.g., every minute, hour, etc.).
Setting Up a Trigger
Go to Apps Script Editor (Extensions > Apps Script).
Click on Triggers (Clock icon) or Tools > Triggers in the editor.
Create a new trigger:
Choose the importFromCSV function.
Select “Time-driven trigger.”
Specify the frequency (e.g., every minute).
With this trigger, your script will regularly check for updates in the CSV file and automatically import new data.
Explanation of the Separator Usage
Why the semicolon (;) is used: Descriptions or text fields may already use commas (,), and using them as a separator could lead to incorrect data splits. Using semicolons avoids this issue.
Alternative separator: If semicolons (;) are also present in the data, it’s recommended to use a unique symbol, such as §.
To update the separator, replace “;” with “§” in the following line:
.map(row => row.split("§")); // Update the separator here
This makes the script adaptable to various data scenarios.
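An alternative to hunting for an unused separator is standard CSV quoting: wrap any field that contains the separator, a quote, or a newline in double quotes, doubling embedded quotes (RFC 4180 style). This is a sketch of the export side only; the import script above would then also need a quote-aware parser rather than a plain split.

```javascript
// Sketch: RFC 4180-style field quoting, so separators inside values
// no longer break the row structure.
function escapeField(value, separator) {
  const s = String(value);
  if (s.includes(separator) || s.includes('"') || s.includes('\n')) {
    return '"' + s.replace(/"/g, '""') + '"';
  }
  return s;
}

function toCsv(rows, separator = ';') {
  return rows
    .map(r => r.map(f => escapeField(f, separator)).join(separator))
    .join('\n');
}

console.log(toCsv([['desc; with separator', '10']])); // "desc; with separator";10
```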
Why Use This Method?
Advantages
Avoid IMPORTRANGE Limits: No errors related to overly large files.
Efficiency: Data is transferred as CSV, reducing connectivity issues between files.
Automation: Imports happen without manual intervention.
Limitations
Maintenance: Scripts need to be managed and updated manually if changes occur.
Security: Ensure file and script access is secure.
With these scripts and a configured trigger, you can reliably and efficiently handle large volumes of data between Google Sheets. If you need further customizations or help setting it up, feel free to ask!
Quickly copy or move existing files into folders within Google Drive via a Google Sheet.
Quickly copy or move existing files into folders via a Google Sheet
The following Google Apps Script tool allows you to quickly copy or move existing files into existing folders within Google/Shared drive. This is quite a niche tool so it is anticipated you would already have a Google Sheet of file IDs/URLs, though there is the option to select an existing Drive folder of files and automatically bring them into this tool.
You can continue to append further rows to the Google Sheet as existing ones will be skipped once the ‘Status’ column has automatically been marked as ‘Completed’. Alternatively you can clear the Sheet each time for a fresh start.
I use Google Apps Script to support staff and students in my job. I enjoy dabbling with creating tools to help with automation and I freely share my learning experiences on my blog, where I also have a number of useful Google Add-ons: www.pbainbridge.co.uk
Quickly append new permissions to existing files within Google Drive. Insert email addresses using a comma-space format with optional notifications.
Quickly append new file permissions via a Google Sheet
The following Google Apps Script tool allows you to quickly append new permissions to existing files within Google/Shared drive. This is quite a niche tool so it is anticipated you would already have a Google Sheet of file IDs/URLs, though there is the option to select an existing Drive folder of files and automatically bring them into this tool.
By inserting email addresses using a comma-and-space format, you can optionally choose to send a notification email to the new user(s).
You can continue to append further rows to the Google Sheet as existing ones will be skipped once the ‘Status’ column has automatically been marked as ‘Completed’. Alternatively you can clear the Sheet each time for a fresh start.
I use Google Apps Script to support staff and students in my job. I enjoy dabbling with creating tools to help with automation and I freely share my learning experiences on my blog, where I also have a number of useful Google Add-ons: www.pbainbridge.co.uk