# evaluation_runs

Creates, updates, deletes, gets, or lists an `evaluation_runs` resource.
## Overview

| Property | Value |
|---|---|
| Name | evaluation_runs |
| Type | Resource |
| Id | google.aiplatform.evaluation_runs |
## Fields

The following fields are returned by `SELECT` queries (the `get` and `list` methods both return the same fields):
| Name | Datatype | Description |
|---|---|---|
| name | string | Identifier. The resource name of the EvaluationRun. This is a unique identifier. Format: `projects/{project}/locations/{location}/evaluationRuns/{evaluation_run}` |
| completionTime | string (google-datetime) | Output only. Time when the evaluation run was completed. |
| createTime | string (google-datetime) | Output only. Time when the evaluation run was created. |
| dataSource | object | Required. The data source for the evaluation run. (id: GoogleCloudAiplatformV1EvaluationRunDataSource) |
| displayName | string | Required. The display name of the Evaluation Run. |
| error | object | The Status type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by gRPC. Each Status message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the API Design Guide. (id: GoogleRpcStatus) |
| evaluationConfig | object | Required. The configuration used for the evaluation. (id: GoogleCloudAiplatformV1EvaluationRunEvaluationConfig) |
| evaluationResults | object | Output only. The results of the evaluation run. Only populated when the evaluation run's state is SUCCEEDED. (id: GoogleCloudAiplatformV1EvaluationResults) |
| evaluationSetSnapshot | string | Output only. The specific evaluation set of the evaluation run. For runs with an evaluation set input, this will be that same set. For runs with BigQuery input, it's the sampled BigQuery dataset. |
| inferenceConfigs | object | Optional. The candidate to inference config map for the evaluation run. The candidate can be up to 128 characters long and can consist of any UTF-8 characters. |
| labels | object | Optional. Labels for the evaluation run. |
| metadata | any | Optional. Metadata about the evaluation run, can be used by the caller to store additional tracking information about the evaluation run. |
| state | string | Output only. The state of the evaluation run. |
## Methods

The following methods are available for this resource:
| Name | Accessible by | Required Params | Optional Params | Description |
|---|---|---|---|---|
| get | select | projectsId, locationsId, evaluationRunsId |  | Gets an Evaluation Run. |
| list | select | projectsId, locationsId | orderBy, pageToken, filter, pageSize | Lists Evaluation Runs. |
| create | insert | projectsId, locationsId |  | Creates an Evaluation Run. |
| delete | delete | projectsId, locationsId, evaluationRunsId |  | Deletes an Evaluation Run. |
| cancel | exec | projectsId, locationsId, evaluationRunsId |  | Cancels an Evaluation Run. Attempts to cancel a running Evaluation Run asynchronously. The status of the run can be checked via GetEvaluationRun. |
## Parameters

Parameters can be passed in the `WHERE` clause of a query. Check the Methods section to see which parameters are required or optional for each operation.
| Name | Datatype | Description |
|---|---|---|
| evaluationRunsId | string | The ID of the evaluation run. |
| locationsId | string | The ID of the location. |
| projectsId | string | The ID of the project. |
| filter | string | An expression for filtering the results of the request. |
| orderBy | string | A comma-separated list of fields to order the results by. |
| pageSize | integer (int32) | The maximum number of results to return per page. |
| pageToken | string | A page token, received from a previous list call, used to retrieve the subsequent page. |
## SELECT examples

Gets an Evaluation Run.

```sql
SELECT
  name,
  completionTime,
  createTime,
  dataSource,
  displayName,
  error,
  evaluationConfig,
  evaluationResults,
  evaluationSetSnapshot,
  inferenceConfigs,
  labels,
  metadata,
  state
FROM google.aiplatform.evaluation_runs
WHERE projectsId = '{{ projectsId }}' -- required
AND locationsId = '{{ locationsId }}' -- required
AND evaluationRunsId = '{{ evaluationRunsId }}' -- required
;
```
Lists Evaluation Runs.

```sql
SELECT
  name,
  completionTime,
  createTime,
  dataSource,
  displayName,
  error,
  evaluationConfig,
  evaluationResults,
  evaluationSetSnapshot,
  inferenceConfigs,
  labels,
  metadata,
  state
FROM google.aiplatform.evaluation_runs
WHERE projectsId = '{{ projectsId }}' -- required
AND locationsId = '{{ locationsId }}' -- required
AND orderBy = '{{ orderBy }}'
AND pageToken = '{{ pageToken }}'
AND filter = '{{ filter }}'
AND pageSize = '{{ pageSize }}'
;
```
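The optional parameters can be combined to narrow and page the results. The sketch below assumes a hypothetical filter expression on `displayName`; the exact filter grammar is defined by the Vertex AI API, so treat the expression shown here as illustrative only.

```sql
-- Hypothetical example: return up to 10 runs whose display name
-- matches a prefix, newest first. The filter expression syntax is
-- an assumption; consult the Vertex AI filtering documentation
-- for the exact grammar.
SELECT
  name,
  displayName,
  state,
  createTime
FROM google.aiplatform.evaluation_runs
WHERE projectsId = '{{ projectsId }}' -- required
AND locationsId = '{{ locationsId }}' -- required
AND filter = 'displayName="nightly-*"'
AND orderBy = 'createTime desc'
AND pageSize = '10'
;
```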
## INSERT examples

Creates an Evaluation Run.

```sql
INSERT INTO google.aiplatform.evaluation_runs (
  data__inferenceConfigs,
  data__evaluationConfig,
  data__labels,
  data__displayName,
  data__name,
  data__dataSource,
  data__metadata,
  projectsId,
  locationsId
)
SELECT
  '{{ inferenceConfigs }}',
  '{{ evaluationConfig }}',
  '{{ labels }}',
  '{{ displayName }}',
  '{{ name }}',
  '{{ dataSource }}',
  '{{ metadata }}',
  '{{ projectsId }}',
  '{{ locationsId }}'
RETURNING
  name,
  completionTime,
  createTime,
  dataSource,
  displayName,
  error,
  evaluationConfig,
  evaluationResults,
  evaluationSetSnapshot,
  inferenceConfigs,
  labels,
  metadata,
  state
;
```
The following manifest describes the insertable properties of the resource:

```yaml
# Description fields are for documentation purposes
- name: evaluation_runs
  props:
    - name: projectsId
      value: string
      description: Required parameter for the evaluation_runs resource.
    - name: locationsId
      value: string
      description: Required parameter for the evaluation_runs resource.
    - name: inferenceConfigs
      value: object
      description: >
        Optional. The candidate to inference config map for the evaluation run. The candidate can be up to 128 characters long and can consist of any UTF-8 characters.
    - name: evaluationConfig
      value: object
      description: >
        Required. The configuration used for the evaluation.
    - name: labels
      value: object
      description: >
        Optional. Labels for the evaluation run.
    - name: displayName
      value: string
      description: >
        Required. The display name of the Evaluation Run.
    - name: name
      value: string
      description: >
        Identifier. The resource name of the EvaluationRun. This is a unique identifier. Format: `projects/{project}/locations/{location}/evaluationRuns/{evaluation_run}`
    - name: dataSource
      value: object
      description: >
        Required. The data source for the evaluation run.
    - name: metadata
      value: any
      description: >
        Optional. Metadata about the evaluation run, can be used by the caller to store additional tracking information about the evaluation run.
```
## DELETE examples

Deletes an Evaluation Run.

```sql
DELETE FROM google.aiplatform.evaluation_runs
WHERE projectsId = '{{ projectsId }}' -- required
AND locationsId = '{{ locationsId }}' -- required
AND evaluationRunsId = '{{ evaluationRunsId }}' -- required
;
```
## Lifecycle Methods

Cancels an Evaluation Run. Attempts to cancel a running Evaluation Run asynchronously. The status of the run can be checked via GetEvaluationRun.

```sql
EXEC google.aiplatform.evaluation_runs.cancel
@projectsId='{{ projectsId }}', -- required
@locationsId='{{ locationsId }}', -- required
@evaluationRunsId='{{ evaluationRunsId }}' -- required
;
```
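Because cancellation is asynchronous, the run may not stop immediately. One way to observe the outcome is to re-query the `state` field after issuing the cancel. A sketch, assuming the documented `state` field reflects cancellation once it completes (specific state value names are not listed in this reference):

```sql
-- Cancel the run, then check its state with a follow-up SELECT.
-- Repeat the SELECT until the state indicates the run is no
-- longer in progress; on failure or cancellation, the error
-- field may carry details.
EXEC google.aiplatform.evaluation_runs.cancel
@projectsId='{{ projectsId }}',
@locationsId='{{ locationsId }}',
@evaluationRunsId='{{ evaluationRunsId }}'
;

SELECT
  name,
  state,
  error
FROM google.aiplatform.evaluation_runs
WHERE projectsId = '{{ projectsId }}'
AND locationsId = '{{ locationsId }}'
AND evaluationRunsId = '{{ evaluationRunsId }}'
;
```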