evaluations
Creates, updates, deletes, gets or lists an evaluations resource.
Overview
Name | evaluations |
---|---|
Type | Resource |
Id | google.datalabeling.evaluations |
Fields
The following fields are returned by SELECT queries:
- projects_datasets_evaluations_get
Successful response
Name | Datatype | Description |
---|---|---|
name | string | Output only. Resource name of an evaluation. The name has the following format: "projects/{project_id}/datasets/{dataset_id}/evaluations/{evaluation_id}" |
annotationType | string | Output only. Type of task that the model version being evaluated performs, as defined in the evaluationJobConfig.inputConfig.annotationType field of the evaluation job that created this evaluation. |
config | object | Output only. Options used in the evaluation job that created this evaluation. (id: GoogleCloudDatalabelingV1beta1EvaluationConfig) |
createTime | string (google-datetime) | Output only. Timestamp for when this evaluation was created. |
evaluatedItemCount | string (int64) | Output only. The number of items in the ground truth dataset that were used for this evaluation. Only populated when the evaluation is for certain AnnotationTypes. |
evaluationJobRunTime | string (google-datetime) | Output only. Timestamp for when the evaluation job that created this evaluation ran. |
evaluationMetrics | object | Output only. Metrics comparing predictions to ground truth labels. (id: GoogleCloudDatalabelingV1beta1EvaluationMetrics) |
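The config and evaluationMetrics fields are returned as JSON objects. A minimal sketch of reaching into one of them, assuming the default SQLite-backed StackQL session where json_extract is available; the $.classificationMetrics path is illustrative only and depends on the annotation type of the evaluation:
SELECT
name,
evaluatedItemCount,
json_extract(evaluationMetrics, '$.classificationMetrics') AS classification_metrics -- path is an assumption
FROM google.datalabeling.evaluations
WHERE projectsId = '{{ projectsId }}' -- required
AND datasetsId = '{{ datasetsId }}' -- required
AND evaluationsId = '{{ evaluationsId }}'; -- required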
Methods
The following methods are available for this resource:
Name | Accessible by | Required Params | Optional Params | Description |
---|---|---|---|---|
projects_datasets_evaluations_get | select | projectsId , datasetsId , evaluationsId | | Gets an evaluation by resource name (to search, use projects.evaluations.search). |
projects_datasets_evaluations_example_comparisons_search | exec | projectsId , datasetsId , evaluationsId | | Searches example comparisons from an evaluation. The return format is a list of example comparisons that show ground truth and prediction(s) for a single input. Search by providing an evaluation ID. |
projects_evaluations_search | exec | projectsId | filter , pageSize , pageToken | Searches evaluations within a project. |
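Provider versions can differ, so it can be worth confirming what the installed google provider actually exposes before writing queries. A minimal sketch, assuming a StackQL release that supports resource introspection via SHOW METHODS and DESCRIBE:
-- list the operations exposed by this resource and their required parameters
SHOW METHODS IN google.datalabeling.evaluations;
-- list the fields returned by SELECT queries against this resource
DESCRIBE google.datalabeling.evaluations;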
Parameters
Parameters can be passed in the WHERE clause of a query. Check the Methods section to see which parameters are required or optional for each operation.
Name | Datatype | Description |
---|---|---|
datasetsId | string | |
evaluationsId | string | |
projectsId | string | |
filter | string | |
pageSize | integer (int32) | |
pageToken | string | |
SELECT examples
- projects_datasets_evaluations_get
Gets an evaluation by resource name (to search, use projects.evaluations.search).
SELECT
name,
annotationType,
config,
createTime,
evaluatedItemCount,
evaluationJobRunTime,
evaluationMetrics
FROM google.datalabeling.evaluations
WHERE projectsId = '{{ projectsId }}' -- required
AND datasetsId = '{{ datasetsId }}' -- required
AND evaluationsId = '{{ evaluationsId }}'; -- required
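The projection does not have to include every field; a narrower variant of the same get call, useful for checking when an evaluation was created and when its job ran:
SELECT
name,
createTime,
evaluationJobRunTime
FROM google.datalabeling.evaluations
WHERE projectsId = '{{ projectsId }}' -- required
AND datasetsId = '{{ datasetsId }}' -- required
AND evaluationsId = '{{ evaluationsId }}'; -- required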
Lifecycle Methods
- projects_datasets_evaluations_example_comparisons_search
Searches example comparisons from an evaluation. The return format is a list of example comparisons that show ground truth and prediction(s) for a single input. Search by providing an evaluation ID.
EXEC google.datalabeling.evaluations.projects_datasets_evaluations_example_comparisons_search
@projectsId='{{ projectsId }}', --required
@datasetsId='{{ datasetsId }}', --required
@evaluationsId='{{ evaluationsId }}' --required
@@json=
'{
"pageSize": {{ pageSize }},
"pageToken": "{{ pageToken }}"
}';
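When an evaluation holds more example comparisons than pageSize, the API returns a continuation token with the results (a nextPageToken field, as in other Google search/list responses; the field name is assumed here rather than taken from this page). A sketch of requesting the next page by passing that token back in; {{ nextPageTokenFromPreviousResponse }} is a placeholder introduced for illustration:
EXEC google.datalabeling.evaluations.projects_datasets_evaluations_example_comparisons_search
@projectsId='{{ projectsId }}', --required
@datasetsId='{{ datasetsId }}', --required
@evaluationsId='{{ evaluationsId }}' --required
@@json=
'{
"pageSize": {{ pageSize }},
"pageToken": "{{ nextPageTokenFromPreviousResponse }}"
}';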
- projects_evaluations_search
Searches evaluations within a project.
EXEC google.datalabeling.evaluations.projects_evaluations_search
@projectsId='{{ projectsId }}', --required
@filter='{{ filter }}',
@pageSize='{{ pageSize }}',
@pageToken='{{ pageToken }}';
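The filter string is interpreted by the Data Labeling API rather than by StackQL. A hedged sketch of narrowing the search to evaluations produced by a single evaluation job, assuming the API's documented evaluation_job.evaluation_job_id filter field; {{ evaluationJobId }} is a placeholder introduced for illustration:
EXEC google.datalabeling.evaluations.projects_evaluations_search
@projectsId='{{ projectsId }}', --required
@filter='evaluation_job.evaluation_job_id = {{ evaluationJobId }}',
@pageSize='{{ pageSize }}';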