
pipeline_jobs

Creates, deletes, gets, lists, or cancels a pipeline_jobs resource.

Overview

Name: pipeline_jobs
Type: Resource
Id: google.aiplatform.pipeline_jobs

Fields

The following fields are returned by SELECT queries:

Successful response

| Name | Datatype | Description |
|------|----------|-------------|
| name | string | Output only. The resource name of the PipelineJob. |
| createTime | string (google-datetime) | Output only. Pipeline creation time. |
| displayName | string | The display name of the Pipeline. The name can be up to 128 characters long and can consist of any UTF-8 characters. |
| encryptionSpec | object | Customer-managed encryption key spec for a pipelineJob. If set, this PipelineJob and all of its sub-resources will be secured by this key. (id: GoogleCloudAiplatformV1EncryptionSpec) |
| endTime | string (google-datetime) | Output only. Pipeline end time. |
| error | object | The Status type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by gRPC. Each Status message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the API Design Guide. (id: GoogleRpcStatus) |
| jobDetail | object | Output only. The details of the pipeline run. Not available in the list view. (id: GoogleCloudAiplatformV1PipelineJobDetail) |
| labels | object | The labels with user-defined metadata to organize PipelineJob. Label keys and values can be no longer than 64 characters (Unicode code points), and can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels. Note that some label keys are reserved for Vertex AI Pipelines: for vertex-ai-pipelines-run-billing-id, any user-set value will be overridden. |
| network | string | The full name of the Compute Engine network to which the Pipeline Job's workload should be peered. For example, projects/12345/global/networks/myVPC. The format is projects/{project}/global/networks/{network}, where {project} is a project number, as in 12345, and {network} is a network name. Private services access must already be configured for the network. The pipeline job applies the network configuration to the Google Cloud resources it launches, where applicable, such as Vertex AI Training or Dataflow jobs. If left unspecified, the workload is not peered with any network. |
| pipelineSpec | object | The spec of the pipeline. |
| preflightValidations | boolean | Optional. Whether to do component-level validations before job creation. |
| pscInterfaceConfig | object | Optional. Configuration for PSC-I for PipelineJob. (id: GoogleCloudAiplatformV1PscInterfaceConfig) |
| reservedIpRanges | array | A list of names for the reserved IP ranges under the VPC network that can be used for this Pipeline Job's workload. If set, the Pipeline Job's workload will be deployed within the provided IP ranges. Otherwise, the job will be deployed to any IP ranges under the provided VPC network. Example: ['vertex-ai-ip-range']. |
| runtimeConfig | object | Runtime config of the pipeline. (id: GoogleCloudAiplatformV1PipelineJobRuntimeConfig) |
| scheduleName | string | Output only. The schedule resource name. Only returned if the Pipeline is created by the Schedule API. |
| serviceAccount | string | The service account that the pipeline workload runs as. If not specified, the Compute Engine default service account in the project is used. See https://cloud.google.com/compute/docs/access/service-accounts#default_service_account. Users starting the pipeline must have the iam.serviceAccounts.actAs permission on this service account. |
| startTime | string (google-datetime) | Output only. Pipeline start time. |
| state | string | Output only. The detailed state of the job. |
| templateMetadata | object | Output only. Pipeline template metadata. Fields are populated if PipelineJob.template_uri is from a supported template registry. (id: GoogleCloudAiplatformV1PipelineTemplateMetadata) |
| templateUri | string | A template URI from which PipelineJob.pipeline_spec will be downloaded if pipeline_spec is empty. Currently, only URIs from the Vertex Template Registry & Gallery are supported. See https://cloud.google.com/vertex-ai/docs/pipelines/create-pipeline-template. |
| updateTime | string (google-datetime) | Output only. Timestamp when this PipelineJob was most recently updated. |

Methods

The following methods are available for this resource:

| Name | Accessible by | Required Params | Optional Params | Description |
|------|---------------|-----------------|-----------------|-------------|
| get | select | projectsId, locationsId, pipelineJobsId | | Gets a PipelineJob. |
| list | select | projectsId, locationsId | filter, pageSize, pageToken, orderBy, readMask | Lists PipelineJobs in a Location. |
| create | insert | projectsId, locationsId | pipelineJobId | Creates a PipelineJob. A PipelineJob will run immediately when created. |
| delete | delete | projectsId, locationsId, pipelineJobsId | | Deletes a PipelineJob. |
| batch_delete | delete | projectsId, locationsId | | Batch deletes PipelineJobs. The operation is atomic: if it fails, none of the PipelineJobs are deleted; if it succeeds, all of the PipelineJobs are deleted. |
| cancel | exec | projectsId, locationsId, pipelineJobsId | | Cancels a PipelineJob. Starts asynchronous cancellation on the PipelineJob. The server makes a best effort to cancel the pipeline, but success is not guaranteed. Clients can use PipelineService.GetPipelineJob or other methods to check whether the cancellation succeeded or whether the pipeline completed despite cancellation. On successful cancellation, the PipelineJob is not deleted; instead it becomes a pipeline with a PipelineJob.error value with a google.rpc.Status.code of 1, corresponding to Code.CANCELLED, and PipelineJob.state is set to CANCELLED. |
| batch_cancel | exec | projectsId, locationsId | | Batch cancels PipelineJobs. The server first checks whether all the jobs are in non-terminal states and skips any jobs that have already terminated. If the operation fails, none of the pipeline jobs are cancelled. The server polls the states of all the pipeline jobs periodically to check the cancellation status. This operation returns an LRO. |

Parameters

Parameters can be passed in the WHERE clause of a query. Check the Methods section to see which parameters are required or optional for each operation.

| Name | Datatype | Description |
|------|----------|-------------|
| locationsId | string | |
| pipelineJobsId | string | |
| projectsId | string | |
| filter | string | |
| orderBy | string | |
| pageSize | integer (int32) | |
| pageToken | string | |
| pipelineJobId | string | |
| readMask | string (google-fieldmask) | |

SELECT examples

Gets a PipelineJob.

```sql
SELECT
name,
createTime,
displayName,
encryptionSpec,
endTime,
error,
jobDetail,
labels,
network,
pipelineSpec,
preflightValidations,
pscInterfaceConfig,
reservedIpRanges,
runtimeConfig,
scheduleName,
serviceAccount,
startTime,
state,
templateMetadata,
templateUri,
updateTime
FROM google.aiplatform.pipeline_jobs
WHERE projectsId = '{{ projectsId }}' -- required
AND locationsId = '{{ locationsId }}' -- required
AND pipelineJobsId = '{{ pipelineJobsId }}'; -- required
```
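The list method shares the same field set but takes only projectsId and locationsId as required parameters. A minimal sketch of a list query follows; the filter expression is an illustrative assumption (see the Vertex AI documentation for the supported filter syntax), and the other optional parameters (pageSize, pageToken, orderBy, readMask) can be supplied the same way:

```sql
SELECT
name,
displayName,
state,
createTime
FROM google.aiplatform.pipeline_jobs
WHERE projectsId = '{{ projectsId }}' -- required
AND locationsId = '{{ locationsId }}' -- required
AND filter = 'display_name="{{ displayName }}"'; -- optional, illustrative filter expression
```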

INSERT examples

Creates a PipelineJob. A PipelineJob will run immediately when created.

```sql
INSERT INTO google.aiplatform.pipeline_jobs (
data__displayName,
data__pipelineSpec,
data__labels,
data__runtimeConfig,
data__encryptionSpec,
data__serviceAccount,
data__network,
data__reservedIpRanges,
data__pscInterfaceConfig,
data__templateUri,
data__preflightValidations,
projectsId,
locationsId,
pipelineJobId
)
SELECT
'{{ displayName }}',
'{{ pipelineSpec }}',
'{{ labels }}',
'{{ runtimeConfig }}',
'{{ encryptionSpec }}',
'{{ serviceAccount }}',
'{{ network }}',
'{{ reservedIpRanges }}',
'{{ pscInterfaceConfig }}',
'{{ templateUri }}',
{{ preflightValidations }},
'{{ projectsId }}',
'{{ locationsId }}',
'{{ pipelineJobId }}'
RETURNING
name,
createTime,
displayName,
encryptionSpec,
endTime,
error,
jobDetail,
labels,
network,
pipelineSpec,
preflightValidations,
pscInterfaceConfig,
reservedIpRanges,
runtimeConfig,
scheduleName,
serviceAccount,
startTime,
state,
templateMetadata,
templateUri,
updateTime
;
```
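Only the columns that are needed have to be supplied. As a rough sketch, assuming a job is created from a registered pipeline template, the statement below passes templateUri and a runtimeConfig JSON string; the bucket, parameter name, and template URI placeholders are illustrative assumptions rather than values from this page, and the runtimeConfig shape follows GoogleCloudAiplatformV1PipelineJobRuntimeConfig (gcsOutputDirectory plus parameterValues):

```sql
INSERT INTO google.aiplatform.pipeline_jobs (
data__displayName,
data__templateUri,
data__runtimeConfig,
projectsId,
locationsId,
pipelineJobId
)
SELECT
'{{ displayName }}',
'{{ templateUri }}', -- e.g. a Vertex AI template registry URI (illustrative placeholder)
'{"gcsOutputDirectory": "gs://{{ bucket }}/pipeline-output", "parameterValues": {"{{ parameterName }}": "{{ parameterValue }}"}}',
'{{ projectsId }}',
'{{ locationsId }}',
'{{ pipelineJobId }}'
RETURNING
name,
state,
createTime
;
```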

DELETE examples

Deletes a PipelineJob.

```sql
DELETE FROM google.aiplatform.pipeline_jobs
WHERE projectsId = '{{ projectsId }}' -- required
AND locationsId = '{{ locationsId }}' -- required
AND pipelineJobsId = '{{ pipelineJobsId }}'; -- required
```

Lifecycle Methods

Cancels a PipelineJob. Starts asynchronous cancellation on the PipelineJob. The server makes a best effort to cancel the pipeline, but success is not guaranteed. Clients can use PipelineService.GetPipelineJob or other methods to check whether the cancellation succeeded or whether the pipeline completed despite cancellation. On successful cancellation, the PipelineJob is not deleted; instead it becomes a pipeline with a PipelineJob.error value with a google.rpc.Status.code of 1, corresponding to Code.CANCELLED, and PipelineJob.state is set to CANCELLED.

```sql
EXEC google.aiplatform.pipeline_jobs.cancel
@projectsId='{{ projectsId }}', -- required
@locationsId='{{ locationsId }}', -- required
@pipelineJobsId='{{ pipelineJobsId }}'; -- required
```
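batch_cancel also runs as an exec method but takes only projectsId and locationsId as path parameters; the jobs to cancel are identified in the request body. The sketch below assumes the body follows the BatchCancelPipelineJobsRequest shape (a names list of full PipelineJob resource names) and that the payload is passed via the @@json parameter used by StackQL exec methods that accept a request body:

```sql
EXEC google.aiplatform.pipeline_jobs.batch_cancel
@projectsId='{{ projectsId }}', -- required
@locationsId='{{ locationsId }}' -- required
@@json=
'{
  "names": [
    "projects/{{ projectsId }}/locations/{{ locationsId }}/pipelineJobs/{{ pipelineJobsId1 }}",
    "projects/{{ projectsId }}/locations/{{ locationsId }}/pipelineJobs/{{ pipelineJobsId2 }}"
  ]
}';
```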