sessions
Creates, updates, deletes, gets, or lists a sessions resource.
Overview
Property | Value |
---|---|
Name | sessions |
Type | Resource |
Id | google.spanner.sessions |
Fields
The following fields are returned by SELECT queries:
- projects_instances_databases_sessions_get
- projects_instances_databases_sessions_list
Successful response
Name | Datatype | Description |
---|---|---|
name | string | Output only. The name of the session. This is always system-assigned. |
approximateLastUseTime | string (google-datetime) | Output only. The approximate timestamp when the session is last used. It's typically earlier than the actual last use time. |
createTime | string (google-datetime) | Output only. The timestamp when the session is created. |
creatorRole | string | The database role which created this session. |
labels | object | The labels for the session. * Label keys must be between 1 and 63 characters long and must conform to the following regular expression: [a-z]([-a-z0-9]*[a-z0-9])?. * Label values must be between 0 and 63 characters long and must conform to the regular expression ([a-z]([-a-z0-9]*[a-z0-9])?)?. * No more than 64 labels can be associated with a given session. See https://goo.gl/xmQnxf for more information on and examples of labels. |
multiplexed | boolean | Optional. If true, specifies a multiplexed session. Use a multiplexed session for multiple, concurrent read-only operations. Don't use them for read-write transactions, partitioned reads, or partitioned queries. Use sessions.create to create multiplexed sessions. Don't use BatchCreateSessions to create a multiplexed session. You can't delete or list multiplexed sessions. |
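For example, a create request that sets both labels and multiplexed could look like the following sketch (the project, instance, database, and label values are hypothetical placeholders; see the INSERT examples below for the general template):
INSERT INTO google.spanner.sessions (
data__session,
projectsId,
instancesId,
databasesId
)
SELECT
'{"labels": {"env": "dev", "team": "billing"}, "multiplexed": true}',
'my-project',
'my-instance',
'my-database'
RETURNING
name,
labels,
multiplexed
;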
Methods
The following methods are available for this resource:
Name | Accessible by | Required Params | Optional Params | Description |
---|---|---|---|---|
projects_instances_databases_sessions_get | select | projectsId, instancesId, databasesId, sessionsId | | Gets a session. Returns NOT_FOUND if the session doesn't exist. This is mainly useful for determining whether a session is still alive. |
projects_instances_databases_sessions_list | select | projectsId, instancesId, databasesId | pageSize, pageToken, filter | Lists all sessions in a given database. |
projects_instances_databases_sessions_create | insert | projectsId, instancesId, databasesId | | Creates a new session. A session can be used to perform transactions that read and/or modify data in a Cloud Spanner database. Sessions are meant to be reused for many consecutive transactions. Sessions can only execute one transaction at a time. To execute multiple concurrent read-write/write-only transactions, create multiple sessions. Note that standalone reads and queries use a transaction internally, and count toward the one transaction limit. Active sessions use additional server resources, so it's a good idea to delete idle and unneeded sessions. Aside from explicit deletes, Cloud Spanner can delete sessions when no operations are sent for more than an hour. If a session is deleted, requests to it return NOT_FOUND. Idle sessions can be kept alive by sending a trivial SQL query periodically, for example, "SELECT 1". |
projects_instances_databases_sessions_batch_create | insert | projectsId, instancesId, databasesId | | Creates multiple new sessions. This API can be used to initialize a session cache on the clients. See https://goo.gl/TgSFN2 for best practices on session cache management. |
projects_instances_databases_sessions_delete | delete | projectsId, instancesId, databasesId, sessionsId | | Ends a session, releasing server resources associated with it. This asynchronously triggers the cancellation of any operations that are running with this session. |
projects_instances_databases_sessions_adapter | exec | projectsId, instancesId, databasesId | | Creates a new session to be used for requests made by the adapter. A session identifies a specific incarnation of a database resource and is meant to be reused across many AdaptMessage calls. |
projects_instances_databases_sessions_adapt_message | exec | projectsId, instancesId, databasesId, sessionsId | | Handles a single message from the client and returns the result as a stream. The server will interpret the message frame and respond with message frames to the client. |
projects_instances_databases_sessions_execute_sql | exec | projectsId, instancesId, databasesId, sessionsId | | Executes an SQL statement, returning all results in a single reply. This method can't be used to return a result set larger than 10 MiB; if the query yields more data than that, the query fails with a FAILED_PRECONDITION error. Operations inside read-write transactions might return ABORTED. If this occurs, the application should restart the transaction from the beginning. See Transaction for more details. Larger result sets can be fetched in streaming fashion by calling ExecuteStreamingSql instead. The query string can be SQL or Graph Query Language (GQL). |
projects_instances_databases_sessions_execute_streaming_sql | exec | projectsId, instancesId, databasesId, sessionsId | | Like ExecuteSql, except returns the result set as a stream. Unlike ExecuteSql, there is no limit on the size of the returned result set. However, no individual row in the result set can exceed 100 MiB, and no column value can exceed 10 MiB. The query string can be SQL or Graph Query Language (GQL). |
projects_instances_databases_sessions_execute_batch_dml | exec | projectsId, instancesId, databasesId, sessionsId | | Executes a batch of SQL DML statements. This method allows many statements to be run with lower latency than submitting them sequentially with ExecuteSql. Statements are executed in sequential order. A request can succeed even if a statement fails. The ExecuteBatchDmlResponse.status field in the response provides information about the statement that failed. Clients must inspect this field to determine whether an error occurred. Execution stops after the first failed statement; the remaining statements are not executed. |
projects_instances_databases_sessions_read | exec | projectsId, instancesId, databasesId, sessionsId | | Reads rows from the database using key lookups and scans, as a simple key/value style alternative to ExecuteSql. This method can't be used to return a result set larger than 10 MiB; if the read matches more data than that, the read fails with a FAILED_PRECONDITION error. Reads inside read-write transactions might return ABORTED. If this occurs, the application should restart the transaction from the beginning. See Transaction for more details. Larger result sets can be yielded in streaming fashion by calling StreamingRead instead. |
projects_instances_databases_sessions_streaming_read | exec | projectsId, instancesId, databasesId, sessionsId | | Like Read, except returns the result set as a stream. Unlike Read, there is no limit on the size of the returned result set. However, no individual row in the result set can exceed 100 MiB, and no column value can exceed 10 MiB. |
projects_instances_databases_sessions_begin_transaction | exec | projectsId, instancesId, databasesId, sessionsId | | Begins a new transaction. This step can often be skipped: Read, ExecuteSql and Commit can begin a new transaction as a side-effect. |
projects_instances_databases_sessions_commit | exec | projectsId, instancesId, databasesId, sessionsId | | Commits a transaction. The request includes the mutations to be applied to rows in the database. Commit might return an ABORTED error. This can occur at any time; commonly, the cause is conflicts with concurrent transactions. However, it can also happen for a variety of other reasons. If Commit returns ABORTED, the caller should retry the transaction from the beginning, reusing the same session. On very rare occasions, Commit might return UNKNOWN. This can happen, for example, if the client job experiences a 1+ hour networking failure. At that point, Cloud Spanner has lost track of the transaction outcome and we recommend that you perform another read from the database to see the state of things as they are now. |
projects_instances_databases_sessions_rollback | exec | projectsId, instancesId, databasesId, sessionsId | | Rolls back a transaction, releasing any locks it holds. It's a good idea to call this for any transaction that includes one or more Read or ExecuteSql requests and ultimately decides not to commit. Rollback returns OK if it successfully aborts the transaction, the transaction was already aborted, or the transaction isn't found. Rollback never returns ABORTED. |
projects_instances_databases_sessions_partition_query | exec | projectsId, instancesId, databasesId, sessionsId | | Creates a set of partition tokens that can be used to execute a query operation in parallel. Each of the returned partition tokens can be used by ExecuteStreamingSql to specify a subset of the query result to read. The same session and read-only transaction must be used by the PartitionQueryRequest used to create the partition tokens and the ExecuteSqlRequests that use the partition tokens. Partition tokens become invalid when the session used to create them is deleted, is idle for too long, begins a new transaction, or becomes too old. When any of these happen, it isn't possible to resume the query, and the whole operation must be restarted from the beginning. |
projects_instances_databases_sessions_partition_read | exec | projectsId, instancesId, databasesId, sessionsId | | Creates a set of partition tokens that can be used to execute a read operation in parallel. Each of the returned partition tokens can be used by StreamingRead to specify a subset of the read result to read. The same session and read-only transaction must be used by the PartitionReadRequest used to create the partition tokens and the ReadRequests that use the partition tokens. There are no ordering guarantees on rows returned among the returned partition tokens, or even within each individual StreamingRead call issued with a partition_token. Partition tokens become invalid when the session used to create them is deleted, is idle for too long, begins a new transaction, or becomes too old. When any of these happen, it isn't possible to resume the read, and the whole operation must be restarted from the beginning. |
projects_instances_databases_sessions_batch_write | exec | projectsId, instancesId, databasesId, sessionsId | | Batches the supplied mutation groups in a collection of efficient transactions. All mutations in a group are committed atomically. However, mutations across groups can be committed non-atomically in an unspecified order and thus, they must be independent of each other. Partial failure is possible, that is, some groups might have been committed successfully, while some might have failed. The results of individual batches are streamed into the response as the batches are applied. BatchWrite requests are not replay protected, meaning that each mutation group can be applied more than once. Replays of non-idempotent mutations can have undesirable effects. For example, replays of an insert mutation can produce an already exists error or, if you use generated or commit timestamp-based keys, it can result in additional rows being added to the mutation's table. We recommend structuring your mutation groups to be idempotent to avoid this issue. |
Parameters
Parameters can be passed in the WHERE clause of a query. Check the Methods section to see which parameters are required or optional for each operation.
Name | Datatype | Description |
---|---|---|
databasesId | string | Database ID (path component of the database resource name). |
instancesId | string | Instance ID (path component of the instance resource name). |
projectsId | string | Google Cloud project ID. |
sessionsId | string | Session ID (path component of the session resource name). |
filter | string | An expression for filtering list results by label, for example labels.env:* (the session has the label env) or labels.env:dev (the env label contains the string dev). |
pageSize | integer (int32) | Number of sessions to be returned in the response. If 0 or less, defaults to the server's maximum allowed page size. |
pageToken | string | If non-empty, should contain a next_page_token from a previous ListSessionsResponse. |
SELECT examples
- projects_instances_databases_sessions_get
- projects_instances_databases_sessions_list
Gets a session. Returns NOT_FOUND if the session doesn't exist. This is mainly useful for determining whether a session is still alive.
SELECT
name,
approximateLastUseTime,
createTime,
creatorRole,
labels,
multiplexed
FROM google.spanner.sessions
WHERE projectsId = '{{ projectsId }}' -- required
AND instancesId = '{{ instancesId }}' -- required
AND databasesId = '{{ databasesId }}' -- required
AND sessionsId = '{{ sessionsId }}'; -- required
Lists all sessions in a given database.
SELECT
name,
approximateLastUseTime,
createTime,
creatorRole,
labels,
multiplexed
FROM google.spanner.sessions
WHERE projectsId = '{{ projectsId }}' -- required
AND instancesId = '{{ instancesId }}' -- required
AND databasesId = '{{ databasesId }}' -- required
AND pageSize = '{{ pageSize }}'
AND pageToken = '{{ pageToken }}'
AND filter = '{{ filter }}';
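The filter parameter matches sessions by label. For example, assuming sessions were created with an env label (a hypothetical label name), the following lists only sessions whose env label contains dev:
SELECT
name,
labels
FROM google.spanner.sessions
WHERE projectsId = '{{ projectsId }}' -- required
AND instancesId = '{{ instancesId }}' -- required
AND databasesId = '{{ databasesId }}' -- required
AND filter = 'labels.env:dev';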
INSERT examples
- projects_instances_databases_sessions_create
- projects_instances_databases_sessions_batch_create
- Manifest
Creates a new session. A session can be used to perform transactions that read and/or modify data in a Cloud Spanner database. Sessions are meant to be reused for many consecutive transactions. Sessions can only execute one transaction at a time. To execute multiple concurrent read-write/write-only transactions, create multiple sessions. Note that standalone reads and queries use a transaction internally, and count toward the one transaction limit. Active sessions use additional server resources, so it's a good idea to delete idle and unneeded sessions. Aside from explicit deletes, Cloud Spanner can delete sessions when no operations are sent for more than an hour. If a session is deleted, requests to it return NOT_FOUND. Idle sessions can be kept alive by sending a trivial SQL query periodically, for example, "SELECT 1".
INSERT INTO google.spanner.sessions (
data__session,
projectsId,
instancesId,
databasesId
)
SELECT
'{{ session }}',
'{{ projectsId }}',
'{{ instancesId }}',
'{{ databasesId }}'
RETURNING
name,
approximateLastUseTime,
createTime,
creatorRole,
labels,
multiplexed
;
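As noted above, an idle session can be kept alive by periodically running a trivial query against it. A minimal sketch of such a keep-alive, using the projects_instances_databases_sessions_execute_sql lifecycle method documented below:
EXEC google.spanner.sessions.projects_instances_databases_sessions_execute_sql
@projectsId='{{ projectsId }}', -- required
@instancesId='{{ instancesId }}', -- required
@databasesId='{{ databasesId }}', -- required
@sessionsId='{{ sessionsId }}' -- required
@@json=
'{
"sql": "SELECT 1"
}';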
Creates multiple new sessions. This API can be used to initialize a session cache on the clients. See https://goo.gl/TgSFN2 for best practices on session cache management.
INSERT INTO google.spanner.sessions (
data__sessionTemplate,
data__sessionCount,
projectsId,
instancesId,
databasesId
)
SELECT
'{{ sessionTemplate }}',
{{ sessionCount }},
'{{ projectsId }}',
'{{ instancesId }}',
'{{ databasesId }}'
RETURNING
session
;
# Description fields are for documentation purposes
- name: sessions
props:
- name: projectsId
value: string
description: Required parameter for the sessions resource.
- name: instancesId
value: string
description: Required parameter for the sessions resource.
- name: databasesId
value: string
description: Required parameter for the sessions resource.
- name: session
value: object
description: >
Required. The session to create.
- name: sessionTemplate
value: object
description: >
Parameters to apply to each created session.
- name: sessionCount
value: integer
description: >
Required. The number of sessions to be created in this batch call. The API can return fewer than the requested number of sessions. If a specific number of sessions are desired, the client can make additional calls to `BatchCreateSessions` (adjusting session_count as necessary).
DELETE examples
- projects_instances_databases_sessions_delete
Ends a session, releasing server resources associated with it. This asynchronously triggers the cancellation of any operations that are running with this session.
DELETE FROM google.spanner.sessions
WHERE projectsId = '{{ projectsId }}' -- required
AND instancesId = '{{ instancesId }}' -- required
AND databasesId = '{{ databasesId }}' -- required
AND sessionsId = '{{ sessionsId }}'; -- required
Lifecycle Methods
- projects_instances_databases_sessions_adapter
- projects_instances_databases_sessions_adapt_message
- projects_instances_databases_sessions_execute_sql
- projects_instances_databases_sessions_execute_streaming_sql
- projects_instances_databases_sessions_execute_batch_dml
- projects_instances_databases_sessions_read
- projects_instances_databases_sessions_streaming_read
- projects_instances_databases_sessions_begin_transaction
- projects_instances_databases_sessions_commit
- projects_instances_databases_sessions_rollback
- projects_instances_databases_sessions_partition_query
- projects_instances_databases_sessions_partition_read
- projects_instances_databases_sessions_batch_write
Creates a new session to be used for requests made by the adapter. A session identifies a specific incarnation of a database resource and is meant to be reused across many AdaptMessage calls.
EXEC google.spanner.sessions.projects_instances_databases_sessions_adapter
@projectsId='{{ projectsId }}', -- required
@instancesId='{{ instancesId }}', -- required
@databasesId='{{ databasesId }}' -- required
@@json=
'{
"name": "{{ name }}"
}';
Handles a single message from the client and returns the result as a stream. The server will interpret the message frame and respond with message frames to the client.
EXEC google.spanner.sessions.projects_instances_databases_sessions_adapt_message
@projectsId='{{ projectsId }}', -- required
@instancesId='{{ instancesId }}', -- required
@databasesId='{{ databasesId }}', -- required
@sessionsId='{{ sessionsId }}' -- required
@@json=
'{
"protocol": "{{ protocol }}",
"payload": "{{ payload }}",
"attachments": "{{ attachments }}"
}';
Executes an SQL statement, returning all results in a single reply. This method can't be used to return a result set larger than 10 MiB; if the query yields more data than that, the query fails with a FAILED_PRECONDITION error. Operations inside read-write transactions might return ABORTED. If this occurs, the application should restart the transaction from the beginning. See Transaction for more details. Larger result sets can be fetched in streaming fashion by calling ExecuteStreamingSql instead. The query string can be SQL or Graph Query Language (GQL).
EXEC google.spanner.sessions.projects_instances_databases_sessions_execute_sql
@projectsId='{{ projectsId }}', -- required
@instancesId='{{ instancesId }}', -- required
@databasesId='{{ databasesId }}', -- required
@sessionsId='{{ sessionsId }}' -- required
@@json=
'{
"transaction": "{{ transaction }}",
"sql": "{{ sql }}",
"params": "{{ params }}",
"paramTypes": "{{ paramTypes }}",
"resumeToken": "{{ resumeToken }}",
"queryMode": "{{ queryMode }}",
"partitionToken": "{{ partitionToken }}",
"seqno": "{{ seqno }}",
"queryOptions": "{{ queryOptions }}",
"requestOptions": "{{ requestOptions }}",
"directedReadOptions": "{{ directedReadOptions }}",
"dataBoostEnabled": {{ dataBoostEnabled }},
"lastStatement": {{ lastStatement }}
}';
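As a concrete (hypothetical) illustration, the following executes a parameterized query against a table named Singers. params maps each parameter name to a value and paramTypes maps the same names to Spanner Type objects; per Spanner's JSON encoding, INT64 values are passed as strings:
EXEC google.spanner.sessions.projects_instances_databases_sessions_execute_sql
@projectsId='{{ projectsId }}', -- required
@instancesId='{{ instancesId }}', -- required
@databasesId='{{ databasesId }}', -- required
@sessionsId='{{ sessionsId }}' -- required
@@json=
'{
"transaction": {"singleUse": {"readOnly": {"strong": true}}},
"sql": "SELECT SingerId, FirstName FROM Singers WHERE LastName = @lastName",
"params": {"lastName": "Smith"},
"paramTypes": {"lastName": {"code": "STRING"}}
}';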
Like ExecuteSql, except returns the result set as a stream. Unlike ExecuteSql, there is no limit on the size of the returned result set. However, no individual row in the result set can exceed 100 MiB, and no column value can exceed 10 MiB. The query string can be SQL or Graph Query Language (GQL).
EXEC google.spanner.sessions.projects_instances_databases_sessions_execute_streaming_sql
@projectsId='{{ projectsId }}', -- required
@instancesId='{{ instancesId }}', -- required
@databasesId='{{ databasesId }}', -- required
@sessionsId='{{ sessionsId }}' -- required
@@json=
'{
"transaction": "{{ transaction }}",
"sql": "{{ sql }}",
"params": "{{ params }}",
"paramTypes": "{{ paramTypes }}",
"resumeToken": "{{ resumeToken }}",
"queryMode": "{{ queryMode }}",
"partitionToken": "{{ partitionToken }}",
"seqno": "{{ seqno }}",
"queryOptions": "{{ queryOptions }}",
"requestOptions": "{{ requestOptions }}",
"directedReadOptions": "{{ directedReadOptions }}",
"dataBoostEnabled": {{ dataBoostEnabled }},
"lastStatement": {{ lastStatement }}
}';
Executes a batch of SQL DML statements. This method allows many statements to be run with lower latency than submitting them sequentially with ExecuteSql. Statements are executed in sequential order. A request can succeed even if a statement fails. The ExecuteBatchDmlResponse.status field in the response provides information about the statement that failed. Clients must inspect this field to determine whether an error occurred. Execution stops after the first failed statement; the remaining statements are not executed.
EXEC google.spanner.sessions.projects_instances_databases_sessions_execute_batch_dml
@projectsId='{{ projectsId }}', -- required
@instancesId='{{ instancesId }}', -- required
@databasesId='{{ databasesId }}', -- required
@sessionsId='{{ sessionsId }}' -- required
@@json=
'{
"transaction": "{{ transaction }}",
"statements": "{{ statements }}",
"seqno": "{{ seqno }}",
"requestOptions": "{{ requestOptions }}",
"lastStatements": {{ lastStatements }}
}';
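A concrete (hypothetical) sketch: two DML statements run in order against hypothetical tables, inside a read-write transaction begun by this request. seqno is an INT64 and is therefore encoded as a string in the JSON body:
EXEC google.spanner.sessions.projects_instances_databases_sessions_execute_batch_dml
@projectsId='{{ projectsId }}', -- required
@instancesId='{{ instancesId }}', -- required
@databasesId='{{ databasesId }}', -- required
@sessionsId='{{ sessionsId }}' -- required
@@json=
'{
"transaction": {"begin": {"readWrite": {}}},
"statements": [
{"sql": "UPDATE Albums SET MarketingBudget = 100000 WHERE AlbumId = 1"},
{"sql": "DELETE FROM Concerts WHERE VenueId = 42"}
],
"seqno": "1"
}';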
Reads rows from the database using key lookups and scans, as a simple key/value style alternative to ExecuteSql. This method can't be used to return a result set larger than 10 MiB; if the read matches more data than that, the read fails with a FAILED_PRECONDITION error. Reads inside read-write transactions might return ABORTED. If this occurs, the application should restart the transaction from the beginning. See Transaction for more details. Larger result sets can be yielded in streaming fashion by calling StreamingRead instead.
EXEC google.spanner.sessions.projects_instances_databases_sessions_read
@projectsId='{{ projectsId }}', -- required
@instancesId='{{ instancesId }}', -- required
@databasesId='{{ databasesId }}', -- required
@sessionsId='{{ sessionsId }}' -- required
@@json=
'{
"transaction": "{{ transaction }}",
"table": "{{ table }}",
"index": "{{ index }}",
"columns": "{{ columns }}",
"keySet": "{{ keySet }}",
"limit": "{{ limit }}",
"resumeToken": "{{ resumeToken }}",
"partitionToken": "{{ partitionToken }}",
"requestOptions": "{{ requestOptions }}",
"directedReadOptions": "{{ directedReadOptions }}",
"dataBoostEnabled": {{ dataBoostEnabled }},
"orderBy": "{{ orderBy }}",
"lockHint": "{{ lockHint }}"
}';
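For example (using a hypothetical Albums table whose primary key is AlbumId), the following reads two rows by key. Each entry in keySet.keys is an array supplying a value for every primary-key column, and INT64 key values are encoded as strings:
EXEC google.spanner.sessions.projects_instances_databases_sessions_read
@projectsId='{{ projectsId }}', -- required
@instancesId='{{ instancesId }}', -- required
@databasesId='{{ databasesId }}', -- required
@sessionsId='{{ sessionsId }}' -- required
@@json=
'{
"transaction": {"singleUse": {"readOnly": {"strong": true}}},
"table": "Albums",
"columns": ["AlbumId", "AlbumTitle"],
"keySet": {"keys": [["1"], ["2"]]}
}';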
Like Read, except returns the result set as a stream. Unlike Read, there is no limit on the size of the returned result set. However, no individual row in the result set can exceed 100 MiB, and no column value can exceed 10 MiB.
EXEC google.spanner.sessions.projects_instances_databases_sessions_streaming_read
@projectsId='{{ projectsId }}', -- required
@instancesId='{{ instancesId }}', -- required
@databasesId='{{ databasesId }}', -- required
@sessionsId='{{ sessionsId }}' -- required
@@json=
'{
"transaction": "{{ transaction }}",
"table": "{{ table }}",
"index": "{{ index }}",
"columns": "{{ columns }}",
"keySet": "{{ keySet }}",
"limit": "{{ limit }}",
"resumeToken": "{{ resumeToken }}",
"partitionToken": "{{ partitionToken }}",
"requestOptions": "{{ requestOptions }}",
"directedReadOptions": "{{ directedReadOptions }}",
"dataBoostEnabled": {{ dataBoostEnabled }},
"orderBy": "{{ orderBy }}",
"lockHint": "{{ lockHint }}"
}';
Begins a new transaction. This step can often be skipped: Read, ExecuteSql and Commit can begin a new transaction as a side-effect.
EXEC google.spanner.sessions.projects_instances_databases_sessions_begin_transaction
@projectsId='{{ projectsId }}', -- required
@instancesId='{{ instancesId }}', -- required
@databasesId='{{ databasesId }}', -- required
@sessionsId='{{ sessionsId }}' -- required
@@json=
'{
"options": "{{ options }}",
"requestOptions": "{{ requestOptions }}",
"mutationKey": "{{ mutationKey }}"
}';
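The options field is a TransactionOptions object, with exactly one of readWrite, readOnly, or partitionedDml set. For example, to begin a read-write transaction:
EXEC google.spanner.sessions.projects_instances_databases_sessions_begin_transaction
@projectsId='{{ projectsId }}', -- required
@instancesId='{{ instancesId }}', -- required
@databasesId='{{ databasesId }}', -- required
@sessionsId='{{ sessionsId }}' -- required
@@json=
'{
"options": {"readWrite": {}}
}';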
Commits a transaction. The request includes the mutations to be applied to rows in the database. Commit might return an ABORTED error. This can occur at any time; commonly, the cause is conflicts with concurrent transactions. However, it can also happen for a variety of other reasons. If Commit returns ABORTED, the caller should retry the transaction from the beginning, reusing the same session. On very rare occasions, Commit might return UNKNOWN. This can happen, for example, if the client job experiences a 1+ hour networking failure. At that point, Cloud Spanner has lost track of the transaction outcome and we recommend that you perform another read from the database to see the state of things as they are now.
EXEC google.spanner.sessions.projects_instances_databases_sessions_commit
@projectsId='{{ projectsId }}', -- required
@instancesId='{{ instancesId }}', -- required
@databasesId='{{ databasesId }}', -- required
@sessionsId='{{ sessionsId }}' -- required
@@json=
'{
"transactionId": "{{ transactionId }}",
"singleUseTransaction": "{{ singleUseTransaction }}",
"mutations": "{{ mutations }}",
"returnCommitStats": {{ returnCommitStats }},
"maxCommitDelay": "{{ maxCommitDelay }}",
"requestOptions": "{{ requestOptions }}",
"precommitToken": "{{ precommitToken }}"
}';
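A concrete (hypothetical) sketch committing a previously begun transaction with a single insert mutation; the table and values are placeholders, and INT64 values are encoded as strings per Spanner's JSON encoding:
EXEC google.spanner.sessions.projects_instances_databases_sessions_commit
@projectsId='{{ projectsId }}', -- required
@instancesId='{{ instancesId }}', -- required
@databasesId='{{ databasesId }}', -- required
@sessionsId='{{ sessionsId }}' -- required
@@json=
'{
"transactionId": "{{ transactionId }}",
"mutations": [
{"insert": {
"table": "Singers",
"columns": ["SingerId", "FirstName", "LastName"],
"values": [["1", "Marc", "Richards"]]
}}
],
"returnCommitStats": true
}';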
Rolls back a transaction, releasing any locks it holds. It's a good idea to call this for any transaction that includes one or more Read or ExecuteSql requests and ultimately decides not to commit. Rollback returns OK if it successfully aborts the transaction, the transaction was already aborted, or the transaction isn't found. Rollback never returns ABORTED.
EXEC google.spanner.sessions.projects_instances_databases_sessions_rollback
@projectsId='{{ projectsId }}', -- required
@instancesId='{{ instancesId }}', -- required
@databasesId='{{ databasesId }}', -- required
@sessionsId='{{ sessionsId }}' -- required
@@json=
'{
"transactionId": "{{ transactionId }}"
}';
Creates a set of partition tokens that can be used to execute a query operation in parallel. Each of the returned partition tokens can be used by ExecuteStreamingSql to specify a subset of the query result to read. The same session and read-only transaction must be used by the PartitionQueryRequest used to create the partition tokens and the ExecuteSqlRequests that use the partition tokens. Partition tokens become invalid when the session used to create them is deleted, is idle for too long, begins a new transaction, or becomes too old. When any of these happen, it isn't possible to resume the query, and the whole operation must be restarted from the beginning.
EXEC google.spanner.sessions.projects_instances_databases_sessions_partition_query
@projectsId='{{ projectsId }}', -- required
@instancesId='{{ instancesId }}', -- required
@databasesId='{{ databasesId }}', -- required
@sessionsId='{{ sessionsId }}' -- required
@@json=
'{
"transaction": "{{ transaction }}",
"sql": "{{ sql }}",
"params": "{{ params }}",
"paramTypes": "{{ paramTypes }}",
"partitionOptions": "{{ partitionOptions }}"
}';
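For example (hypothetical query), the following requests at most 10 partition tokens for a root-partitionable query inside a read-only transaction; maxPartitions is an INT64 hint and is encoded as a string:
EXEC google.spanner.sessions.projects_instances_databases_sessions_partition_query
@projectsId='{{ projectsId }}', -- required
@instancesId='{{ instancesId }}', -- required
@databasesId='{{ databasesId }}', -- required
@sessionsId='{{ sessionsId }}' -- required
@@json=
'{
"transaction": {"begin": {"readOnly": {"strong": true}}},
"sql": "SELECT SingerId, FirstName FROM Singers",
"partitionOptions": {"maxPartitions": "10"}
}';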
Creates a set of partition tokens that can be used to execute a read operation in parallel. Each of the returned partition tokens can be used by StreamingRead to specify a subset of the read result to read. The same session and read-only transaction must be used by the PartitionReadRequest used to create the partition tokens and the ReadRequests that use the partition tokens. There are no ordering guarantees on rows returned among the returned partition tokens, or even within each individual StreamingRead call issued with a partition_token. Partition tokens become invalid when the session used to create them is deleted, is idle for too long, begins a new transaction, or becomes too old. When any of these happen, it isn't possible to resume the read, and the whole operation must be restarted from the beginning.
EXEC google.spanner.sessions.projects_instances_databases_sessions_partition_read
@projectsId='{{ projectsId }}', -- required
@instancesId='{{ instancesId }}', -- required
@databasesId='{{ databasesId }}', -- required
@sessionsId='{{ sessionsId }}' -- required
@@json=
'{
"transaction": "{{ transaction }}",
"table": "{{ table }}",
"index": "{{ index }}",
"columns": "{{ columns }}",
"keySet": "{{ keySet }}",
"partitionOptions": "{{ partitionOptions }}"
}';
Batches the supplied mutation groups in a collection of efficient transactions. All mutations in a group are committed atomically. However, mutations across groups can be committed non-atomically in an unspecified order and thus, they must be independent of each other. Partial failure is possible, that is, some groups might have been committed successfully, while some might have failed. The results of individual batches are streamed into the response as the batches are applied. BatchWrite requests are not replay protected, meaning that each mutation group can be applied more than once. Replays of non-idempotent mutations can have undesirable effects. For example, replays of an insert mutation can produce an already exists error or, if you use generated or commit timestamp-based keys, it can result in additional rows being added to the mutation's table. We recommend structuring your mutation groups to be idempotent to avoid this issue.
EXEC google.spanner.sessions.projects_instances_databases_sessions_batch_write
@projectsId='{{ projectsId }}', -- required
@instancesId='{{ instancesId }}', -- required
@databasesId='{{ databasesId }}', -- required
@sessionsId='{{ sessionsId }}' -- required
@@json=
'{
"requestOptions": "{{ requestOptions }}",
"mutationGroups": "{{ mutationGroups }}",
"excludeTxnFromChangeStreams": {{ excludeTxnFromChangeStreams }}
}';
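A concrete (hypothetical) sketch with one mutation group against a placeholder table. Using insertOrUpdate rather than insert keeps the group idempotent under replay, as the description above recommends:
EXEC google.spanner.sessions.projects_instances_databases_sessions_batch_write
@projectsId='{{ projectsId }}', -- required
@instancesId='{{ instancesId }}', -- required
@databasesId='{{ databasesId }}', -- required
@sessionsId='{{ sessionsId }}' -- required
@@json=
'{
"mutationGroups": [
{"mutations": [
{"insertOrUpdate": {
"table": "Singers",
"columns": ["SingerId", "FirstName"],
"values": [["1", "Marc"]]
}}
]}
]
}';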