# pipeline_jobs

Creates, updates, deletes, gets or lists a pipeline_jobs resource.
## Overview

| | |
|---|---|
| Name | pipeline_jobs |
| Type | Resource |
| Id | google.aiplatform.pipeline_jobs |
## Fields

The following fields are returned by SELECT queries. The get and list methods share the same successful-response schema:
Name | Datatype | Description |
---|---|---|
name | string | Output only. The resource name of the PipelineJob. |
createTime | string (google-datetime) | Output only. Pipeline creation time. |
displayName | string | The display name of the Pipeline. The name can be up to 128 characters long and can consist of any UTF-8 characters. |
encryptionSpec | object | Customer-managed encryption key spec for a pipelineJob. If set, this PipelineJob and all of its sub-resources will be secured by this key. (id: GoogleCloudAiplatformV1EncryptionSpec) |
endTime | string (google-datetime) | Output only. Pipeline end time. |
error | object | The Status type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by gRPC. Each Status message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the API Design Guide. (id: GoogleRpcStatus) |
jobDetail | object | Output only. The details of pipeline run. Not available in the list view. (id: GoogleCloudAiplatformV1PipelineJobDetail) |
labels | object | The labels with user-defined metadata to organize PipelineJob. Label keys and values can be no longer than 64 characters (Unicode codepoints), and can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels. Note that some label keys are reserved for Vertex AI Pipelines: a user-set value for `vertex-ai-pipelines-run-billing-id` will be overridden. |
network | string | The full name of the Compute Engine network to which the Pipeline Job's workload should be peered. For example, `projects/12345/global/networks/myVPC`. The format is `projects/{project}/global/networks/{network}`, where `{project}` is a project number, as in `12345`, and `{network}` is a network name. Private services access must already be configured for the network. The Pipeline Job will apply the network configuration to the Google Cloud resources it launches, such as Vertex AI Training or Dataflow jobs. If left unspecified, the workload is not peered with any network. |
pipelineSpec | object | The spec of the pipeline. |
preflightValidations | boolean | Optional. Whether to do component level validations before job creation. |
pscInterfaceConfig | object | Optional. Configuration for PSC-I for PipelineJob. (id: GoogleCloudAiplatformV1PscInterfaceConfig) |
reservedIpRanges | array | A list of names for the reserved ip ranges under the VPC network that can be used for this Pipeline Job's workload. If set, we will deploy the Pipeline Job's workload within the provided ip ranges. Otherwise, the job will be deployed to any ip ranges under the provided VPC network. Example: ['vertex-ai-ip-range']. |
runtimeConfig | object | Runtime config of the pipeline. (id: GoogleCloudAiplatformV1PipelineJobRuntimeConfig) |
scheduleName | string | Output only. The schedule resource name. Only returned if the Pipeline is created by Schedule API. |
serviceAccount | string | The service account that the pipeline workload runs as. If not specified, the Compute Engine default service account in the project will be used (see https://cloud.google.com/compute/docs/access/service-accounts#default_service_account). Users starting the pipeline must have the `iam.serviceAccounts.actAs` permission on this service account. |
startTime | string (google-datetime) | Output only. Pipeline start time. |
state | string | Output only. The detailed state of the job. |
templateMetadata | object | Output only. Pipeline template metadata. Will fill up fields if PipelineJob.template_uri is from supported template registry. (id: GoogleCloudAiplatformV1PipelineTemplateMetadata) |
templateUri | string | A template URI from which PipelineJob.pipeline_spec, if empty, will be downloaded. Currently, only URIs from the Vertex Template Registry & Gallery are supported; see https://cloud.google.com/vertex-ai/docs/pipelines/create-pipeline-template. |
updateTime | string (google-datetime) | Output only. Timestamp when this PipelineJob was most recently updated. |
## Methods

The following methods are available for this resource:

| Name | Accessible by | Required Params | Optional Params | Description |
|---|---|---|---|---|
| get | select | projectsId, locationsId, pipelineJobsId | | Gets a PipelineJob. |
| list | select | projectsId, locationsId | filter, pageSize, pageToken, orderBy, readMask | Lists PipelineJobs in a Location. |
| create | insert | projectsId, locationsId | pipelineJobId | Creates a PipelineJob. A PipelineJob will run immediately when created. |
| delete | delete | projectsId, locationsId, pipelineJobsId | | Deletes a PipelineJob. |
| batch_delete | delete | projectsId, locationsId | | Batch deletes PipelineJobs. The Operation is atomic: if it fails, none of the PipelineJobs are deleted; if it succeeds, all of them are deleted. |
| cancel | exec | projectsId, locationsId, pipelineJobsId | | Cancels a PipelineJob. Starts asynchronous cancellation on the PipelineJob. The server makes a best effort to cancel the pipeline, but success is not guaranteed. Clients can use PipelineService.GetPipelineJob or other methods to check whether the cancellation succeeded or whether the pipeline completed despite cancellation. On successful cancellation, the PipelineJob is not deleted; instead it becomes a pipeline with a PipelineJob.error value with a google.rpc.Status.code of 1, corresponding to Code.CANCELLED, and PipelineJob.state is set to CANCELLED. |
| batch_cancel | exec | projectsId, locationsId | | Batch cancels PipelineJobs. The server first checks that all the jobs are in non-terminal states and skips any jobs that have already terminated. If the operation fails, none of the pipeline jobs are cancelled. The server polls the states of all the pipeline jobs periodically to check the cancellation status. This operation returns an LRO. |
## Parameters

Parameters can be passed in the WHERE clause of a query. Check the Methods section to see which parameters are required or optional for each operation.
Name | Datatype | Description |
---|---|---|
locationsId | string | |
pipelineJobsId | string | |
projectsId | string | |
filter | string | |
orderBy | string | |
pageSize | integer (int32) | |
pageToken | string | |
pipelineJobId | string | |
readMask | string (google-fieldmask) |
## SELECT examples
- get
- list
Gets a PipelineJob.
SELECT
name,
createTime,
displayName,
encryptionSpec,
endTime,
error,
jobDetail,
labels,
network,
pipelineSpec,
preflightValidations,
pscInterfaceConfig,
reservedIpRanges,
runtimeConfig,
scheduleName,
serviceAccount,
startTime,
state,
templateMetadata,
templateUri,
updateTime
FROM google.aiplatform.pipeline_jobs
WHERE projectsId = '{{ projectsId }}' -- required
AND locationsId = '{{ locationsId }}' -- required
AND pipelineJobsId = '{{ pipelineJobsId }}'; -- required
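When you only need to poll run status, a narrower projection of the same query is enough; this variant keeps the required parameters and selects just the identifying and state fields:

SELECT
name,
displayName,
state,
error
FROM google.aiplatform.pipeline_jobs
WHERE projectsId = '{{ projectsId }}' -- required
AND locationsId = '{{ locationsId }}' -- required
AND pipelineJobsId = '{{ pipelineJobsId }}'; -- required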
Lists PipelineJobs in a Location.
SELECT
name,
createTime,
displayName,
encryptionSpec,
endTime,
error,
jobDetail,
labels,
network,
pipelineSpec,
preflightValidations,
pscInterfaceConfig,
reservedIpRanges,
runtimeConfig,
scheduleName,
serviceAccount,
startTime,
state,
templateMetadata,
templateUri,
updateTime
FROM google.aiplatform.pipeline_jobs
WHERE projectsId = '{{ projectsId }}' -- required
AND locationsId = '{{ locationsId }}' -- required
AND filter = '{{ filter }}'
AND pageSize = '{{ pageSize }}'
AND pageToken = '{{ pageToken }}'
AND orderBy = '{{ orderBy }}'
AND readMask = '{{ readMask }}';
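As a sketch of the optional list parameters, a recency-ordered listing of recent jobs might look like the following. The filter and orderBy values are illustrative; the supported grammar is documented in the Vertex AI ListPipelineJobs reference, not on this page:

SELECT
name,
displayName,
state,
createTime
FROM google.aiplatform.pipeline_jobs
WHERE projectsId = '{{ projectsId }}' -- required
AND locationsId = '{{ locationsId }}' -- required
AND filter = 'create_time>"2024-01-01T00:00:00Z"' -- illustrative RFC 3339 timestamp filter
AND orderBy = 'create_time desc'
AND pageSize = '50';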
## INSERT examples
- create
- Manifest
Creates a PipelineJob. A PipelineJob will run immediately when created.
INSERT INTO google.aiplatform.pipeline_jobs (
data__displayName,
data__pipelineSpec,
data__labels,
data__runtimeConfig,
data__encryptionSpec,
data__serviceAccount,
data__network,
data__reservedIpRanges,
data__pscInterfaceConfig,
data__templateUri,
data__preflightValidations,
projectsId,
locationsId,
pipelineJobId
)
SELECT
'{{ displayName }}',
'{{ pipelineSpec }}',
'{{ labels }}',
'{{ runtimeConfig }}',
'{{ encryptionSpec }}',
'{{ serviceAccount }}',
'{{ network }}',
'{{ reservedIpRanges }}',
'{{ pscInterfaceConfig }}',
'{{ templateUri }}',
{{ preflightValidations }},
'{{ projectsId }}',
'{{ locationsId }}',
'{{ pipelineJobId }}'
RETURNING
name,
createTime,
displayName,
encryptionSpec,
endTime,
error,
jobDetail,
labels,
network,
pipelineSpec,
preflightValidations,
pscInterfaceConfig,
reservedIpRanges,
runtimeConfig,
scheduleName,
serviceAccount,
startTime,
state,
templateMetadata,
templateUri,
updateTime
;
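For orientation, here is a minimal sketch of a create call that supplies only a template URI and a runtime config. All concrete values (display name, registry path, bucket, job id) are hypothetical; the writable columns come from the manifest that follows:

INSERT INTO google.aiplatform.pipeline_jobs (
data__displayName,
data__templateUri,
data__runtimeConfig,
projectsId,
locationsId,
pipelineJobId
)
SELECT
'nightly-training-run', -- hypothetical display name
'https://us-central1-kfp.pkg.dev/my-project/my-repo/my-template/v1', -- hypothetical template registry URI
'{"gcsOutputDirectory": "gs://my-bucket/pipeline-output"}', -- hypothetical runtime config
'{{ projectsId }}',
'{{ locationsId }}',
'nightly-training-run-001' -- hypothetical pipelineJobId
RETURNING
name,
state;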
The manifest below documents the writable properties of the pipeline_jobs resource:

# Description fields are for documentation purposes
- name: pipeline_jobs
props:
- name: projectsId
value: string
description: Required parameter for the pipeline_jobs resource.
- name: locationsId
value: string
description: Required parameter for the pipeline_jobs resource.
- name: displayName
value: string
description: >
The display name of the Pipeline. The name can be up to 128 characters long and can consist of any UTF-8 characters.
- name: pipelineSpec
value: object
description: >
The spec of the pipeline.
- name: labels
value: object
description: >
The labels with user-defined metadata to organize PipelineJob. Label keys and values can be no longer than 64 characters (Unicode codepoints), and can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels. Note that some label keys are reserved for Vertex AI Pipelines: a user-set value for `vertex-ai-pipelines-run-billing-id` will be overridden.
- name: runtimeConfig
value: object
description: >
Runtime config of the pipeline.
- name: encryptionSpec
value: object
description: >
Customer-managed encryption key spec for a pipelineJob. If set, this PipelineJob and all of its sub-resources will be secured by this key.
- name: serviceAccount
value: string
description: >
The service account that the pipeline workload runs as. If not specified, the Compute Engine default service account in the project will be used (see https://cloud.google.com/compute/docs/access/service-accounts#default_service_account). Users starting the pipeline must have the `iam.serviceAccounts.actAs` permission on this service account.
- name: network
value: string
description: >
The full name of the Compute Engine [network](https://cloud.google.com/compute/docs/networks-and-firewalls#networks) to which the Pipeline Job's workload should be peered. For example, `projects/12345/global/networks/myVPC`. The [format](https://cloud.google.com/compute/docs/reference/rest/v1/networks/insert) is `projects/{project}/global/networks/{network}`, where `{project}` is a project number, as in `12345`, and `{network}` is a network name. Private services access must already be configured for the network. The Pipeline Job will apply the network configuration to the Google Cloud resources it launches, such as Vertex AI Training or Dataflow jobs. If left unspecified, the workload is not peered with any network.
- name: reservedIpRanges
value: array
description: >
A list of names for the reserved ip ranges under the VPC network that can be used for this Pipeline Job's workload. If set, we will deploy the Pipeline Job's workload within the provided ip ranges. Otherwise, the job will be deployed to any ip ranges under the provided VPC network. Example: ['vertex-ai-ip-range'].
- name: pscInterfaceConfig
value: object
description: >
Optional. Configuration for PSC-I for PipelineJob.
- name: templateUri
value: string
description: >
A template URI from which PipelineJob.pipeline_spec, if empty, will be downloaded. Currently, only URIs from the Vertex Template Registry & Gallery are supported; see https://cloud.google.com/vertex-ai/docs/pipelines/create-pipeline-template.
- name: preflightValidations
value: boolean
description: >
Optional. Whether to do component level validations before job creation.
- name: pipelineJobId
value: string
## DELETE examples
- delete
- batch_delete
Deletes a PipelineJob.
DELETE FROM google.aiplatform.pipeline_jobs
WHERE projectsId = '{{ projectsId }}' --required
AND locationsId = '{{ locationsId }}' --required
AND pipelineJobsId = '{{ pipelineJobsId }}'; --required
Batch deletes PipelineJobs The Operation is atomic. If it fails, none of the PipelineJobs are deleted. If it succeeds, all of the PipelineJobs are deleted.
DELETE FROM google.aiplatform.pipeline_jobs
WHERE projectsId = '{{ projectsId }}' --required
AND locationsId = '{{ locationsId }}'; --required
## Lifecycle Methods
- cancel
- batch_cancel
Cancels a PipelineJob. Starts asynchronous cancellation on the PipelineJob. The server makes a best effort to cancel the pipeline, but success is not guaranteed. Clients can use PipelineService.GetPipelineJob or other methods to check whether the cancellation succeeded or whether the pipeline completed despite cancellation. On successful cancellation, the PipelineJob is not deleted; instead it becomes a pipeline with a PipelineJob.error value with a google.rpc.Status.code of 1, corresponding to Code.CANCELLED, and PipelineJob.state is set to CANCELLED.
EXEC google.aiplatform.pipeline_jobs.cancel
@projectsId='{{ projectsId }}', -- required
@locationsId='{{ locationsId }}', -- required
@pipelineJobsId='{{ pipelineJobsId }}'; -- required
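Because cancellation is asynchronous and best-effort, a follow-up SELECT (as described above) can confirm the outcome:

SELECT
name,
state,
error
FROM google.aiplatform.pipeline_jobs
WHERE projectsId = '{{ projectsId }}' -- required
AND locationsId = '{{ locationsId }}' -- required
AND pipelineJobsId = '{{ pipelineJobsId }}'; -- required
-- on successful cancellation, error.code is 1 (Code.CANCELLED) and state reports CANCELLED;
-- otherwise the pipeline may have completed despite the cancellation request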
Batch cancels PipelineJobs. The server first checks that all the jobs are in non-terminal states and skips any jobs that have already terminated. If the operation fails, none of the pipeline jobs are cancelled. The server polls the states of all the pipeline jobs periodically to check the cancellation status. This operation returns an LRO.
EXEC google.aiplatform.pipeline_jobs.batch_cancel
@projectsId='{{ projectsId }}', -- required
@locationsId='{{ locationsId }}' -- required
@@json=
'{
  "names": "{{ names }}"
}';
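In the underlying BatchCancelPipelineJobs request, names is a list of fully qualified PipelineJob resource names. A concrete-shaped sketch (the project, location, and job ids below are hypothetical):

EXEC google.aiplatform.pipeline_jobs.batch_cancel
@projectsId='my-project', -- hypothetical project
@locationsId='us-central1' -- hypothetical location
@@json=
'{
  "names": [
    "projects/my-project/locations/us-central1/pipelineJobs/run-001",
    "projects/my-project/locations/us-central1/pipelineJobs/run-002"
  ]
}';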