Google Cloud Tasks


Job Queue implementation using Google Cloud Tasks to push jobs to your worker instance

Version: 2.1.0
Compatibility: >=2.2.0
Category: Infrastructure
Downloads: 606 monthly
Last updated: Feb 5, 2026

Official documentation here

Plugin for using the Vendure worker with Google Cloud Tasks. This plugin shows pending, successful and failed jobs in the Admin UI under System > Jobs, but not running jobs. Only jobs from the past 7 days are kept in the database.

Getting started

Plugin setup

  1. Remove DefaultJobQueuePlugin from your vendure-config.ts and add this plugin instead.
  2. Run a database migration to add the JobRecordBuffer table.
  3. Start the Vendure server, log in to the Admin UI and trigger a reindex job via Products > (cog icon) > Reindex to test the Cloud Tasks plugin.
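The plugin registration from step 1 might look like the following sketch. The package name and the option names (taskHandlerHost, projectId, location, authSecret, queueSuffix) are assumptions; verify them against the official documentation:

```ts
import { VendureConfig } from '@vendure/core';
// Package name is an assumption; check the official documentation
import { CloudTasksPlugin } from '@pinelab/vendure-plugin-google-cloud-tasks';

export const config: VendureConfig = {
  // ... your existing config
  plugins: [
    // DefaultJobQueuePlugin removed; CloudTasksPlugin handles the job queue
    CloudTasksPlugin.init({
      // Public URL of the worker instance that receives pushed tasks
      taskHandlerHost: 'https://your-worker.example.com',
      projectId: 'your-gcloud-project-id',
      // Region where your Cloud Tasks queues live
      location: 'europe-west1',
      // Secret used to authorize incoming task requests
      authSecret: 'some-secret',
      // Suffix used when creating queues in Google Cloud
      queueSuffix: 'production',
    }),
  ],
};
```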

This plugin installs the SQLJobBufferStrategy from Vendure's default JobQueue plugin to buffer jobs in the database, because most projects that use Google Cloud Tasks also run multiple instances of the Vendure server.

You can call the endpoint /cloud-tasks/clear-jobs/X with the secret in the Authorization header to clear jobs older than X days. For example:

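A sketch with curl; the host is a placeholder, and the exact header format (shown here as a Bearer token) should be verified against the official documentation:

```shell
curl -H "Authorization: Bearer some-secret" \
  "https://your-worker.example.com/cloud-tasks/clear-jobs/1"
```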

Will clear all jobs older than 1 day.

DEADLINE_EXCEEDED errors when pushing tasks to queue

When pushing many tasks to a queue concurrently in serverless environments, you might see DEADLINE_EXCEEDED errors. If that happens, you can instantiate the plugin with fallback: true to make the Google Cloud Tasks client fall back to HTTP instead of gRPC. For more details see https://github.com/googleapis/nodejs-tasks/issues/397#issuecomment-618580649

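A sketch of enabling the fallback; all other init options are elided here:

```ts
CloudTasksPlugin.init({
  // ... your existing options
  // Use HTTP instead of gRPC to avoid DEADLINE_EXCEEDED in serverless environments
  fallback: true,
}),
```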

Request entity too large

This means the job data is larger than NestJS's configured request body limit. You can set a larger limit in your vendure-config.ts:

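A sketch using Vendure's apiOptions.middleware to raise the JSON body limit; the 5mb value and the route are illustrative:

```ts
import { json } from 'body-parser';
import { VendureConfig } from '@vendure/core';

export const config: VendureConfig = {
  // ... your existing config
  apiOptions: {
    middleware: [
      {
        // Only raise the limit for the Cloud Tasks handler route
        route: '/cloud-tasks',
        handler: json({ limit: '5mb' }),
      },
    ],
  },
};
```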

We don't include this in the plugin because it affects the entire NestJS instance.

ER_OUT_OF_SORTMEMORY: Out of sort memory, consider increasing server sort buffer size on MySQL

If you get this error, you should create an index on the createdAt column of the job_record table:

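A sketch of the index, assuming the default job_record table name; the index name is arbitrary:

```sql
CREATE INDEX `IDX_job_record_createdAt` ON `job_record` (`createdAt`);
```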

The error is caused by the fact that the job_record.data column is a json column and can contain a lot of data. More information can be found here: https://stackoverflow.com/questions/29575835/error-1038-out-of-sort-memory-consider-increasing-sort-buffer-size

Changelog
  • Upgraded to Vendure 3.5.3
  • Documentation update
  • Updated official documentation URL
  • IMPORTANT: Better truncate of job.data, and retry saving without data if it fails. V2.0.0 crashes the Vendure server when job.data is too large.
  • BREAKING: Included SQLJobBufferStrategy, to buffer jobs in DB instead of in memory, to support multiple instances. A database migration is required.
  • Added documentation to fix ER_OUT_OF_SORTMEMORY: Out of sort memory, consider increasing server sort buffer size on MySQL
  • Constrain data size to 64kb to prevent ER_DATA_TOO_LONG errors on MySQL
  • Allow configuring deletion of jobs older than X days
  • Save original queue names in database. Only use suffix for creating queues in Google Cloud. This fixes admin UI job filtering
  • Add stack trace to error message when job fails
  • Upgraded to Vendure 3.3.2
  • Update Vendure to 3.1.1
  • Update compatibility range (#480)
  • Updated @google-cloud/tasks to "^5.4.0"
  • Generates a unique task name; this reduces the latency during creation of the task
  • Updated Vendure to 2.2.6
  • Don't store job.data when it's too big for MySQL text column
  • Remove jobs older than 30 days on application startup
  • Only log error when job is not added to queue after configured retries
  • Apply exponential backoff when adding to queue doesn't work.
  • Updated vendure to 2.1.1
  • Added onJobFailure option to inspect errors from failed jobs (#262)