evaluation_jobs
Creates, updates, deletes, gets or lists an evaluation_jobs resource.
Overview
Name | evaluation_jobs |
Type | Resource |
Id | google.datalabeling.evaluation_jobs |
Fields
The following fields are returned by SELECT queries:
- projects_evaluation_jobs_get
- projects_evaluation_jobs_list
Successful response (both methods return the same schema)
Name | Datatype | Description |
---|---|---|
name | string | Output only. After you create a job, Data Labeling Service assigns a name to the job with the following format: "projects/{project_id}/evaluationJobs/{evaluation_job_id}" |
annotationSpecSet | string | Required. Name of the AnnotationSpecSet describing all the labels that your machine learning model outputs. You must create this resource before you create an evaluation job and provide its name in the following format: "projects/{project_id}/annotationSpecSets/{annotation_spec_set_id}" |
attempts | array | Output only. Every time the evaluation job runs and an error occurs, the failed attempt is appended to this array. |
createTime | string (google-datetime) | Output only. Timestamp of when this evaluation job was created. |
description | string | Required. Description of the job. The description can be up to 25,000 characters long. |
evaluationJobConfig | object | Required. Configuration details for the evaluation job. (id: GoogleCloudDatalabelingV1beta1EvaluationJobConfig) |
labelMissingGroundTruth | boolean | Required. Whether you want Data Labeling Service to provide ground truth labels for prediction input. If you want the service to assign human labelers to annotate your data, set this to true. If you want to provide your own ground truth labels in the evaluation job's BigQuery table, set this to false. |
modelVersion | string | Required. The AI Platform Prediction model version to be evaluated. Prediction input and output is sampled from this model version. When creating an evaluation job, specify the model version in the following format: "projects/{project_id}/models/{model_name}/versions/{version_name}" There can only be one evaluation job per model version. |
schedule | string | Required. Describes the interval at which the job runs. This interval must be at least 1 day, and it is rounded to the nearest day. For example, if you specify a 50-hour interval, the job runs every 2 days. You can provide the schedule in crontab format or in an English-like format. Regardless of what you specify, the job will run at 10:00 AM UTC. Only the interval from this schedule is used, not the specific time of day. |
state | string | Output only. Describes the current state of the job. |
Methods
The following methods are available for this resource:
Name | Accessible by | Required Params | Optional Params | Description |
---|---|---|---|---|
projects_evaluation_jobs_get | select | projectsId, evaluationJobsId | | Gets an evaluation job by resource name. |
projects_evaluation_jobs_list | select | projectsId | filter, pageSize, pageToken | Lists all evaluation jobs within a project with possible filters. Pagination is supported. |
projects_evaluation_jobs_create | insert | projectsId | | Creates an evaluation job. |
projects_evaluation_jobs_patch | update | projectsId, evaluationJobsId | updateMask | Updates an evaluation job. You can only update certain fields of the job's EvaluationJobConfig: humanAnnotationConfig.instruction, exampleCount, and exampleSamplePercentage. If you want to change any other aspect of the evaluation job, you must delete the job and create a new one. |
projects_evaluation_jobs_delete | delete | projectsId, evaluationJobsId | | Stops and deletes an evaluation job. |
projects_evaluation_jobs_pause | exec | projectsId, evaluationJobsId | | Pauses an evaluation job. Pausing an evaluation job that is already in a PAUSED state is a no-op. |
projects_evaluation_jobs_resume | exec | projectsId, evaluationJobsId | | Resumes a paused evaluation job. A deleted evaluation job can't be resumed. Resuming a running or scheduled evaluation job is a no-op. |
Parameters
Parameters can be passed in the WHERE clause of a query. Check the Methods section to see which parameters are required or optional for each operation.
Name | Datatype | Description |
---|---|---|
evaluationJobsId | string | |
projectsId | string | |
filter | string | |
pageSize | integer (int32) | |
pageToken | string | |
updateMask | string (google-fieldmask) |
SELECT examples
- projects_evaluation_jobs_get
- projects_evaluation_jobs_list
Gets an evaluation job by resource name.
SELECT
name,
annotationSpecSet,
attempts,
createTime,
description,
evaluationJobConfig,
labelMissingGroundTruth,
modelVersion,
schedule,
state
FROM google.datalabeling.evaluation_jobs
WHERE projectsId = '{{ projectsId }}' -- required
AND evaluationJobsId = '{{ evaluationJobsId }}'; -- required
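As a usage sketch with hypothetical identifiers (a project named my-project and a job id 1234567890), the same query can be narrowed to just the fields needed to check on a job:
SELECT
name,
state,
attempts
FROM google.datalabeling.evaluation_jobs
WHERE projectsId = 'my-project' -- hypothetical project id
AND evaluationJobsId = '1234567890'; -- hypothetical evaluation job id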
Lists all evaluation jobs within a project with possible filters. Pagination is supported.
SELECT
name,
annotationSpecSet,
attempts,
createTime,
description,
evaluationJobConfig,
labelMissingGroundTruth,
modelVersion,
schedule,
state
FROM google.datalabeling.evaluation_jobs
WHERE projectsId = '{{ projectsId }}' -- required
AND filter = '{{ filter }}'
AND pageSize = '{{ pageSize }}'
AND pageToken = '{{ pageToken }}';
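The filter parameter accepts the expression syntax the Data Labeling list API documents for evaluation jobs (filtering by model id and/or job state). As a sketch under that assumption, listing running jobs in pages of 50 might look like this; verify the exact filter grammar against the API reference:
SELECT
name,
modelVersion,
state
FROM google.datalabeling.evaluation_jobs
WHERE projectsId = 'my-project' -- hypothetical project id
AND filter = 'evaluation_job.state = RUNNING' -- assumed filter expression
AND pageSize = '50';
To fetch the next page, re-run the query with pageToken set to the page token returned by the previous response.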
INSERT examples
- projects_evaluation_jobs_create
- Manifest
Creates an evaluation job.
INSERT INTO google.datalabeling.evaluation_jobs (
data__job,
projectsId
)
SELECT
'{{ job }}',
'{{ projectsId }}'
RETURNING
name,
annotationSpecSet,
attempts,
createTime,
description,
evaluationJobConfig,
labelMissingGroundTruth,
modelVersion,
schedule,
state
;
# Description fields are for documentation purposes
- name: evaluation_jobs
  props:
    - name: projectsId
      value: string
      description: Required parameter for the evaluation_jobs resource.
    - name: job
      value: object
      description: >
        Required. The evaluation job to create.
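A more concrete sketch of the create call, assuming a hypothetical project my-project and placeholder resource names. The job fields shown come from the response schema above; the schedule is given in crontab form, and evaluationJobConfig is left as a template placeholder because its shape is defined by GoogleCloudDatalabelingV1beta1EvaluationJobConfig:
INSERT INTO google.datalabeling.evaluation_jobs (
data__job,
projectsId
)
SELECT
'{
  "description": "Nightly accuracy evaluation",
  "schedule": "0 10 * * *",
  "modelVersion": "projects/my-project/models/my_model/versions/v1",
  "annotationSpecSet": "projects/my-project/annotationSpecSets/my_spec_set",
  "labelMissingGroundTruth": false,
  "evaluationJobConfig": {{ evaluationJobConfig }}
}',
'my-project'
RETURNING
name,
state
;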
UPDATE examples
- projects_evaluation_jobs_patch
Updates an evaluation job. You can only update certain fields of the job's EvaluationJobConfig: humanAnnotationConfig.instruction, exampleCount, and exampleSamplePercentage. If you want to change any other aspect of the evaluation job, you must delete the job and create a new one.
UPDATE google.datalabeling.evaluation_jobs
SET
data__name = '{{ name }}',
data__description = '{{ description }}',
data__state = '{{ state }}',
data__schedule = '{{ schedule }}',
data__modelVersion = '{{ modelVersion }}',
data__evaluationJobConfig = '{{ evaluationJobConfig }}',
data__annotationSpecSet = '{{ annotationSpecSet }}',
data__labelMissingGroundTruth = {{ labelMissingGroundTruth }},
data__attempts = '{{ attempts }}',
data__createTime = '{{ createTime }}'
WHERE
projectsId = '{{ projectsId }}' -- required
AND evaluationJobsId = '{{ evaluationJobsId }}' -- required
AND updateMask = '{{ updateMask }}'
RETURNING
name,
annotationSpecSet,
attempts,
createTime,
description,
evaluationJobConfig,
labelMissingGroundTruth,
modelVersion,
schedule,
state;
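Because only the three EvaluationJobConfig fields above are mutable, a focused patch can set just evaluationJobConfig and restrict the update mask to those paths. A minimal sketch, assuming the field-mask paths mirror the EvaluationJobConfig field names (verify them against the API reference):
UPDATE google.datalabeling.evaluation_jobs
SET
data__evaluationJobConfig = '{{ evaluationJobConfig }}'
WHERE
projectsId = '{{ projectsId }}' -- required
AND evaluationJobsId = '{{ evaluationJobsId }}' -- required
AND updateMask = 'evaluationJobConfig.exampleCount,evaluationJobConfig.exampleSamplePercentage' -- assumed field-mask paths
RETURNING
name,
evaluationJobConfig;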
DELETE examples
- projects_evaluation_jobs_delete
Stops and deletes an evaluation job.
DELETE FROM google.datalabeling.evaluation_jobs
WHERE projectsId = '{{ projectsId }}' -- required
AND evaluationJobsId = '{{ evaluationJobsId }}'; -- required
Lifecycle Methods
- projects_evaluation_jobs_pause
- projects_evaluation_jobs_resume
Pauses an evaluation job. Pausing an evaluation job that is already in a PAUSED state is a no-op.
EXEC google.datalabeling.evaluation_jobs.projects_evaluation_jobs_pause
@projectsId='{{ projectsId }}', -- required
@evaluationJobsId='{{ evaluationJobsId }}'; -- required
Resumes a paused evaluation job. A deleted evaluation job can't be resumed. Resuming a running or scheduled evaluation job is a no-op.
EXEC google.datalabeling.evaluation_jobs.projects_evaluation_jobs_resume
@projectsId='{{ projectsId }}', -- required
@evaluationJobsId='{{ evaluationJobsId }}'; -- required
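As a combined usage sketch with hypothetical identifiers, a job can be paused while its configuration or upstream data is being fixed and resumed afterwards:
EXEC google.datalabeling.evaluation_jobs.projects_evaluation_jobs_pause
@projectsId='my-project', -- hypothetical project id
@evaluationJobsId='1234567890'; -- hypothetical evaluation job id
-- later, once the job should run again
EXEC google.datalabeling.evaluation_jobs.projects_evaluation_jobs_resume
@projectsId='my-project',
@evaluationJobsId='1234567890';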