model_deployment_monitoring_jobs
Creates, updates, deletes, gets or lists a model_deployment_monitoring_jobs resource.
Overview
Name | model_deployment_monitoring_jobs |
Type | Resource |
Id | google.aiplatform.model_deployment_monitoring_jobs |
Fields
The following fields are returned by SELECT queries:
- get
- list
Successful response (both get and list return the same fields)
Name | Datatype | Description |
---|---|---|
name | string | Output only. Resource name of a ModelDeploymentMonitoringJob. |
analysisInstanceSchemaUri | string | YAML schema file URI describing the format of a single instance that you want Tensorflow Data Validation (TFDV) to analyze. If this field is empty, all the feature data types are inferred from predict_instance_schema_uri, meaning that TFDV will use the data in the exact format (data type) as the prediction request/response. If there are any data type differences between the predict instance and the TFDV instance, this field can be used to override the schema. For models trained with Vertex AI, this field must be set, with all the fields in the predict instance formatted as string. |
bigqueryTables | array | Output only. The BigQuery tables created for the job under the customer project. Customers can run their own queries and analysis. There can be at most 4 log tables: 1. Training data logging predict request/response 2. Serving data logging predict request/response |
createTime | string (google-datetime) | Output only. Timestamp when this ModelDeploymentMonitoringJob was created. |
displayName | string | Required. The user-defined name of the ModelDeploymentMonitoringJob. The name can be up to 128 characters long and can consist of any UTF-8 characters. |
enableMonitoringPipelineLogs | boolean | If true, the scheduled monitoring pipeline logs are sent to Google Cloud Logging, including pipeline status and anomalies detected. Please note the logs incur cost, which are subject to Cloud Logging pricing. |
encryptionSpec | object | Customer-managed encryption key spec for a ModelDeploymentMonitoringJob. If set, this ModelDeploymentMonitoringJob and all sub-resources of this ModelDeploymentMonitoringJob will be secured by this key. (id: GoogleCloudAiplatformV1EncryptionSpec) |
endpoint | string | Required. Endpoint resource name. Format: projects/{project}/locations/{location}/endpoints/{endpoint} |
error | object | The Status type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by gRPC. Each Status message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the API Design Guide. (id: GoogleRpcStatus) |
labels | object | The labels with user-defined metadata to organize your ModelDeploymentMonitoringJob. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels. |
latestMonitoringPipelineMetadata | object | Output only. Latest triggered monitoring pipeline metadata. (id: GoogleCloudAiplatformV1ModelDeploymentMonitoringJobLatestMonitoringPipelineMetadata) |
logTtl | string (google-duration) | The TTL of BigQuery tables in user projects which stores logs. A day is the basic unit of the TTL and we take the ceil of TTL/86400(a day). e.g. { second: 3600} indicates ttl = 1 day. |
loggingSamplingStrategy | object | Required. Sample Strategy for logging. (id: GoogleCloudAiplatformV1SamplingStrategy) |
modelDeploymentMonitoringObjectiveConfigs | array | Required. The config for monitoring objectives. This is a per DeployedModel config. Each DeployedModel needs to be configured separately. |
modelDeploymentMonitoringScheduleConfig | object | Required. Schedule config for running the monitoring job. (id: GoogleCloudAiplatformV1ModelDeploymentMonitoringScheduleConfig) |
modelMonitoringAlertConfig | object | Alert config for model monitoring. (id: GoogleCloudAiplatformV1ModelMonitoringAlertConfig) |
nextScheduleTime | string (google-datetime) | Output only. Timestamp when this monitoring pipeline will be scheduled to run for the next round. |
predictInstanceSchemaUri | string | YAML schema file URI describing the format of a single instance, which is given to format this Endpoint's prediction (and explanation). If not set, the predict schema will be generated from collected predict requests. |
samplePredictInstance | any | Sample Predict instance, same format as PredictRequest.instances, this can be set as a replacement of ModelDeploymentMonitoringJob.predict_instance_schema_uri. If not set, we will generate predict schema from collected predict requests. |
satisfiesPzi | boolean | Output only. Reserved for future use. |
satisfiesPzs | boolean | Output only. Reserved for future use. |
scheduleState | string | Output only. Schedule state when the monitoring job is in Running state. |
state | string | Output only. The detailed state of the monitoring job. While the job is still being created, the state is 'PENDING'. Once the job is successfully created, the state becomes 'RUNNING'. Pausing the job sets the state to 'PAUSED'; resuming returns it to 'RUNNING'. |
statsAnomaliesBaseDirectory | object | Stats anomalies base folder path. (id: GoogleCloudAiplatformV1GcsDestination) |
updateTime | string (google-datetime) | Output only. Timestamp when this ModelDeploymentMonitoringJob was updated most recently. |
Methods
The following methods are available for this resource:
Name | Accessible by | Required Params | Optional Params | Description |
---|---|---|---|---|
get | select | projectsId, locationsId, modelDeploymentMonitoringJobsId | | Gets a ModelDeploymentMonitoringJob. |
list | select | projectsId, locationsId | filter, pageSize, pageToken, readMask | Lists ModelDeploymentMonitoringJobs in a Location. |
create | insert | projectsId, locationsId | | Creates a ModelDeploymentMonitoringJob. It will run periodically on a configured interval. |
patch | update | projectsId, locationsId, modelDeploymentMonitoringJobsId | updateMask | Updates a ModelDeploymentMonitoringJob. |
delete | delete | projectsId, locationsId, modelDeploymentMonitoringJobsId | | Deletes a ModelDeploymentMonitoringJob. |
search_model_deployment_monitoring_stats_anomalies | exec | projectsId, locationsId, modelDeploymentMonitoringJobsId | | Searches Model Monitoring Statistics generated within a given time window. |
pause | exec | projectsId, locationsId, modelDeploymentMonitoringJobsId | | Pauses a ModelDeploymentMonitoringJob. If the job is running, the server makes a best effort to cancel the job. Will mark ModelDeploymentMonitoringJob.state to 'PAUSED'. |
resume | exec | projectsId, locationsId, modelDeploymentMonitoringJobsId | | Resumes a paused ModelDeploymentMonitoringJob. It will start to run from next scheduled time. A deleted ModelDeploymentMonitoringJob can't be resumed. |
Parameters
Parameters can be passed in the WHERE clause of a query. Check the Methods section to see which parameters are required or optional for each operation.
Name | Datatype | Description |
---|---|---|
locationsId | string | The Google Cloud location (region) of the resource (path parameter). |
modelDeploymentMonitoringJobsId | string | The ID of the ModelDeploymentMonitoringJob (path parameter). |
projectsId | string | The Google Cloud project ID (path parameter). |
filter | string | Filter expression applied to list results. |
pageSize | integer (int32) | Maximum number of results to return per page. |
pageToken | string | Page token from a previous list response, used to retrieve the next page. |
readMask | string (google-fieldmask) | Mask specifying which fields to read. |
updateMask | string (google-fieldmask) | Mask specifying which fields of the resource to update. |
SELECT examples
- get
- list
Gets a ModelDeploymentMonitoringJob.
SELECT
name,
analysisInstanceSchemaUri,
bigqueryTables,
createTime,
displayName,
enableMonitoringPipelineLogs,
encryptionSpec,
endpoint,
error,
labels,
latestMonitoringPipelineMetadata,
logTtl,
loggingSamplingStrategy,
modelDeploymentMonitoringObjectiveConfigs,
modelDeploymentMonitoringScheduleConfig,
modelMonitoringAlertConfig,
nextScheduleTime,
predictInstanceSchemaUri,
samplePredictInstance,
satisfiesPzi,
satisfiesPzs,
scheduleState,
state,
statsAnomaliesBaseDirectory,
updateTime
FROM google.aiplatform.model_deployment_monitoring_jobs
WHERE projectsId = '{{ projectsId }}' -- required
AND locationsId = '{{ locationsId }}' -- required
AND modelDeploymentMonitoringJobsId = '{{ modelDeploymentMonitoringJobsId }}'; -- required
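For a quick status check you do not need the full column list; the same get call can project just the status-related fields, all of which appear in the response table above:
SELECT
name,
state,
scheduleState,
nextScheduleTime
FROM google.aiplatform.model_deployment_monitoring_jobs
WHERE projectsId = '{{ projectsId }}' -- required
AND locationsId = '{{ locationsId }}' -- required
AND modelDeploymentMonitoringJobsId = '{{ modelDeploymentMonitoringJobsId }}'; -- required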
Lists ModelDeploymentMonitoringJobs in a Location.
SELECT
name,
analysisInstanceSchemaUri,
bigqueryTables,
createTime,
displayName,
enableMonitoringPipelineLogs,
encryptionSpec,
endpoint,
error,
labels,
latestMonitoringPipelineMetadata,
logTtl,
loggingSamplingStrategy,
modelDeploymentMonitoringObjectiveConfigs,
modelDeploymentMonitoringScheduleConfig,
modelMonitoringAlertConfig,
nextScheduleTime,
predictInstanceSchemaUri,
samplePredictInstance,
satisfiesPzi,
satisfiesPzs,
scheduleState,
state,
statsAnomaliesBaseDirectory,
updateTime
FROM google.aiplatform.model_deployment_monitoring_jobs
WHERE projectsId = '{{ projectsId }}' -- required
AND locationsId = '{{ locationsId }}' -- required
AND filter = '{{ filter }}'
AND pageSize = '{{ pageSize }}'
AND pageToken = '{{ pageToken }}'
AND readMask = '{{ readMask }}';
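The filter value is passed through to the underlying ListModelDeploymentMonitoringJobs call, so the filter grammar is defined by the Vertex AI API rather than by this resource. As a hedged sketch, assuming the usual Vertex AI job filters on display_name and state are accepted, a query for running jobs with a specific display name could look like the following (the job name and page size are illustrative values):
SELECT
name,
displayName,
state,
createTime
FROM google.aiplatform.model_deployment_monitoring_jobs
WHERE projectsId = '{{ projectsId }}' -- required
AND locationsId = '{{ locationsId }}' -- required
AND filter = 'display_name="churn-model-monitoring" AND state="JOB_STATE_RUNNING"'
AND pageSize = '50';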
INSERT examples
- create
- Manifest
Creates a ModelDeploymentMonitoringJob. It will run periodically on a configured interval.
INSERT INTO google.aiplatform.model_deployment_monitoring_jobs (
data__displayName,
data__endpoint,
data__modelDeploymentMonitoringObjectiveConfigs,
data__modelDeploymentMonitoringScheduleConfig,
data__loggingSamplingStrategy,
data__modelMonitoringAlertConfig,
data__predictInstanceSchemaUri,
data__samplePredictInstance,
data__analysisInstanceSchemaUri,
data__logTtl,
data__labels,
data__statsAnomaliesBaseDirectory,
data__encryptionSpec,
data__enableMonitoringPipelineLogs,
projectsId,
locationsId
)
SELECT
'{{ displayName }}',
'{{ endpoint }}',
'{{ modelDeploymentMonitoringObjectiveConfigs }}',
'{{ modelDeploymentMonitoringScheduleConfig }}',
'{{ loggingSamplingStrategy }}',
'{{ modelMonitoringAlertConfig }}',
'{{ predictInstanceSchemaUri }}',
'{{ samplePredictInstance }}',
'{{ analysisInstanceSchemaUri }}',
'{{ logTtl }}',
'{{ labels }}',
'{{ statsAnomaliesBaseDirectory }}',
'{{ encryptionSpec }}',
{{ enableMonitoringPipelineLogs }},
'{{ projectsId }}',
'{{ locationsId }}'
RETURNING
name,
analysisInstanceSchemaUri,
bigqueryTables,
createTime,
displayName,
enableMonitoringPipelineLogs,
encryptionSpec,
endpoint,
error,
labels,
latestMonitoringPipelineMetadata,
logTtl,
loggingSamplingStrategy,
modelDeploymentMonitoringObjectiveConfigs,
modelDeploymentMonitoringScheduleConfig,
modelMonitoringAlertConfig,
nextScheduleTime,
predictInstanceSchemaUri,
samplePredictInstance,
satisfiesPzi,
satisfiesPzs,
scheduleState,
state,
statsAnomaliesBaseDirectory,
updateTime
;
# Description fields are for documentation purposes
- name: model_deployment_monitoring_jobs
props:
- name: projectsId
value: string
description: Required parameter for the model_deployment_monitoring_jobs resource.
- name: locationsId
value: string
description: Required parameter for the model_deployment_monitoring_jobs resource.
- name: displayName
value: string
description: >
Required. The user-defined name of the ModelDeploymentMonitoringJob. The name can be up to 128 characters long and can consist of any UTF-8 characters.
- name: endpoint
value: string
description: >
Required. Endpoint resource name. Format: `projects/{project}/locations/{location}/endpoints/{endpoint}`
- name: modelDeploymentMonitoringObjectiveConfigs
value: array
description: >
Required. The config for monitoring objectives. This is a per DeployedModel config. Each DeployedModel needs to be configured separately.
- name: modelDeploymentMonitoringScheduleConfig
value: object
description: >
Required. Schedule config for running the monitoring job.
- name: loggingSamplingStrategy
value: object
description: >
Required. Sample Strategy for logging.
- name: modelMonitoringAlertConfig
value: object
description: >
Alert config for model monitoring.
- name: predictInstanceSchemaUri
value: string
description: >
YAML schema file URI describing the format of a single instance, which is given to format this Endpoint's prediction (and explanation). If not set, the predict schema will be generated from collected predict requests.
- name: samplePredictInstance
value: any
description: >
Sample Predict instance, same format as PredictRequest.instances, this can be set as a replacement of ModelDeploymentMonitoringJob.predict_instance_schema_uri. If not set, we will generate predict schema from collected predict requests.
- name: analysisInstanceSchemaUri
value: string
description: >
YAML schema file URI describing the format of a single instance that you want Tensorflow Data Validation (TFDV) to analyze. If this field is empty, all the feature data types are inferred from predict_instance_schema_uri, meaning that TFDV will use the data in the exact format (data type) as the prediction request/response. If there are any data type differences between the predict instance and the TFDV instance, this field can be used to override the schema. For models trained with Vertex AI, this field must be set, with all the fields in the predict instance formatted as string.
- name: logTtl
value: string
description: >
The TTL of BigQuery tables in user projects which stores logs. A day is the basic unit of the TTL and we take the ceil of TTL/86400(a day). e.g. { second: 3600} indicates ttl = 1 day.
- name: labels
value: object
description: >
The labels with user-defined metadata to organize your ModelDeploymentMonitoringJob. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels.
- name: statsAnomaliesBaseDirectory
value: object
description: >
Stats anomalies base folder path.
- name: encryptionSpec
value: object
description: >
Customer-managed encryption key spec for a ModelDeploymentMonitoringJob. If set, this ModelDeploymentMonitoringJob and all sub-resources of this ModelDeploymentMonitoringJob will be secured by this key.
- name: enableMonitoringPipelineLogs
value: boolean
description: >
If true, the scheduled monitoring pipeline logs are sent to Google Cloud Logging, including pipeline status and anomalies detected. Please note the logs incur cost, which are subject to [Cloud Logging pricing](https://cloud.google.com/logging#pricing).
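The object- and array-valued fields above are supplied as JSON strings in the INSERT. The nested field names below follow the GoogleCloudAiplatformV1 message types referenced in the Fields table (SamplingStrategy, ModelDeploymentMonitoringScheduleConfig, ModelDeploymentMonitoringObjectiveConfig, ModelMonitoringAlertConfig); treat this as a hedged sketch and verify the shapes against the current API reference. The endpoint ID, deployed model ID, feature name and email address are illustrative placeholders:
INSERT INTO google.aiplatform.model_deployment_monitoring_jobs (
data__displayName,
data__endpoint,
data__modelDeploymentMonitoringObjectiveConfigs,
data__modelDeploymentMonitoringScheduleConfig,
data__loggingSamplingStrategy,
data__modelMonitoringAlertConfig,
projectsId,
locationsId
)
SELECT
'churn-model-monitoring',
'projects/{{ projectsId }}/locations/{{ locationsId }}/endpoints/{{ endpointsId }}',
'[{"deployedModelId": "{{ deployedModelId }}", "objectiveConfig": {"predictionDriftDetectionConfig": {"driftThresholds": {"age": {"value": 0.003}}}}}]',
'{"monitorInterval": "3600s"}',
'{"randomSampleConfig": {"sampleRate": 0.8}}',
'{"emailAlertConfig": {"userEmails": ["alerts@example.com"]}}',
'{{ projectsId }}',
'{{ locationsId }}'
RETURNING
name,
state;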
UPDATE examples
- patch
Updates a ModelDeploymentMonitoringJob.
UPDATE google.aiplatform.model_deployment_monitoring_jobs
SET
data__displayName = '{{ displayName }}',
data__endpoint = '{{ endpoint }}',
data__modelDeploymentMonitoringObjectiveConfigs = '{{ modelDeploymentMonitoringObjectiveConfigs }}',
data__modelDeploymentMonitoringScheduleConfig = '{{ modelDeploymentMonitoringScheduleConfig }}',
data__loggingSamplingStrategy = '{{ loggingSamplingStrategy }}',
data__modelMonitoringAlertConfig = '{{ modelMonitoringAlertConfig }}',
data__predictInstanceSchemaUri = '{{ predictInstanceSchemaUri }}',
data__samplePredictInstance = '{{ samplePredictInstance }}',
data__analysisInstanceSchemaUri = '{{ analysisInstanceSchemaUri }}',
data__logTtl = '{{ logTtl }}',
data__labels = '{{ labels }}',
data__statsAnomaliesBaseDirectory = '{{ statsAnomaliesBaseDirectory }}',
data__encryptionSpec = '{{ encryptionSpec }}',
data__enableMonitoringPipelineLogs = {{ enableMonitoringPipelineLogs }}
WHERE
projectsId = '{{ projectsId }}' --required
AND locationsId = '{{ locationsId }}' --required
AND modelDeploymentMonitoringJobsId = '{{ modelDeploymentMonitoringJobsId }}' --required
AND updateMask = '{{ updateMask }}'
RETURNING
name,
done,
error,
metadata,
response;
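In practice a patch usually touches only a couple of fields, and updateMask should list exactly those fields. A hedged sketch follows: the field-mask paths are written in camelCase, which Google APIs generally accept, and the display name and sample rate are illustrative values:
UPDATE google.aiplatform.model_deployment_monitoring_jobs
SET
data__displayName = 'churn-model-monitoring-v2',
data__loggingSamplingStrategy = '{"randomSampleConfig": {"sampleRate": 0.5}}'
WHERE
projectsId = '{{ projectsId }}' --required
AND locationsId = '{{ locationsId }}' --required
AND modelDeploymentMonitoringJobsId = '{{ modelDeploymentMonitoringJobsId }}' --required
AND updateMask = 'displayName,loggingSamplingStrategy'
RETURNING
name,
done;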
DELETE examples
- delete
Deletes a ModelDeploymentMonitoringJob.
DELETE FROM google.aiplatform.model_deployment_monitoring_jobs
WHERE projectsId = '{{ projectsId }}' --required
AND locationsId = '{{ locationsId }}' --required
AND modelDeploymentMonitoringJobsId = '{{ modelDeploymentMonitoringJobsId }}'; --required
Lifecycle Methods
- search_model_deployment_monitoring_stats_anomalies
- pause
- resume
Searches Model Monitoring Statistics generated within a given time window.
EXEC google.aiplatform.model_deployment_monitoring_jobs.search_model_deployment_monitoring_stats_anomalies
@projectsId='{{ projectsId }}', --required
@locationsId='{{ locationsId }}', --required
@modelDeploymentMonitoringJobsId='{{ modelDeploymentMonitoringJobsId }}' --required
@@json=
'{
"deployedModelId": "{{ deployedModelId }}",
"featureDisplayName": "{{ featureDisplayName }}",
"objectives": "{{ objectives }}",
"pageSize": {{ pageSize }},
"pageToken": "{{ pageToken }}",
"startTime": "{{ startTime }}",
"endTime": "{{ endTime }}"
}';
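objectives is an array of objective descriptors rather than a plain string. As a hedged sketch, assuming the ModelDeploymentMonitoringObjectiveType enum values such as RAW_FEATURE_SKEW and the topFeatureCount field (verify both against the API reference), a call scoped to feature-skew anomalies could look like this; the deployed model ID and time window are illustrative:
EXEC google.aiplatform.model_deployment_monitoring_jobs.search_model_deployment_monitoring_stats_anomalies
@projectsId='{{ projectsId }}', --required
@locationsId='{{ locationsId }}', --required
@modelDeploymentMonitoringJobsId='{{ modelDeploymentMonitoringJobsId }}' --required
@@json=
'{
"deployedModelId": "{{ deployedModelId }}",
"objectives": [{"type": "RAW_FEATURE_SKEW", "topFeatureCount": 10}],
"pageSize": 20,
"startTime": "2024-01-01T00:00:00Z",
"endTime": "2024-02-01T00:00:00Z"
}';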
Pauses a ModelDeploymentMonitoringJob. If the job is running, the server makes a best effort to cancel the job. Will mark ModelDeploymentMonitoringJob.state to 'PAUSED'.
EXEC google.aiplatform.model_deployment_monitoring_jobs.pause
@projectsId='{{ projectsId }}', --required
@locationsId='{{ locationsId }}', --required
@modelDeploymentMonitoringJobsId='{{ modelDeploymentMonitoringJobsId }}'; --required
Resumes a paused ModelDeploymentMonitoringJob. It will start to run from next scheduled time. A deleted ModelDeploymentMonitoringJob can't be resumed.
EXEC google.aiplatform.model_deployment_monitoring_jobs.resume
@projectsId='{{ projectsId }}', --required
@locationsId='{{ locationsId }}', --required
@modelDeploymentMonitoringJobsId='{{ modelDeploymentMonitoringJobsId }}'; --required