Write API
This page covers how you can use a self-managed data source by adding documents to your Rockset collections manually using the Write API.
What is the Write API?
The Write API refers to the subset of APIs in the Rockset API which are used to insert, update, upsert, or delete documents in a Rockset collection.
You should use this option if Rockset does not support a managed integration with your desired data source, or if you prefer to manage data syncing yourself. With a managed integration such as S3 or DynamoDB, Rockset syncs your data automatically; with the Write API, keeping data in sync is entirely up to you.
Write API Limits
The peak write requests per second (WPS) allowed through the Write API depend on the Virtual Instance size, as listed below. These limits apply collectively to the Add/Patch/Delete Documents endpoints and to orgs using Kafka Connect as a source. These limits, along with the peak streaming ingest rate limit, determine how fast Rockset can receive data.
| Virtual Instance | Write Requests Per Second | Peak Streaming Ingest Rate Limit (MiB/s) |
| --- | --- | --- |
| FREE | 1 | 0.05 |
| NANO | 5 | 0.1 |
| MICRO | 10 | 1 |
| SMALL | 25 | 2 |
| MEDIUM | 50 | 6 |
| LARGE | 100 | 12 |
| XLARGE | 200 | 24 |
| 2XLARGE | 400 | 48 |
| 4XLARGE | 800 | 96 |
| 8XLARGE | 1600 | 192 |
| 16XLARGE | 2400 | 384 |
Response Error Codes
Invalid Input (400) and Payload Too Large (413)
Write API and Kafka Connect requests are capped at 4 MiB and 5000 documents per request. If you see an error indicating "Payload size exceeds limit of 4194304 bytes" or "The number of documents specified in this request exceeds the maximum allowed limit of 5000 documents", please try again with a smaller payload size, fewer documents per request, or use one of our managed sources.
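As a sketch of how a client might respect these caps, the following Python snippet splits an arbitrary list of documents into request-sized batches before sending them to the Write API. The 4 MiB and 5,000-document figures come from the limits above; the batching helper and its size estimate are illustrative and not part of the Rockset API.

```python
import json

MAX_DOCS_PER_REQUEST = 5000          # documented per-request document cap
MAX_PAYLOAD_BYTES = 4 * 1024 * 1024  # documented 4 MiB per-request payload cap

def split_into_requests(docs):
    """Yield lists of documents that respect both Write API per-request caps.

    The byte count below simply measures the JSON-serialized documents; it is
    an approximation of the request payload size, not an official sizing rule.
    """
    batch, batch_bytes = [], 0
    for doc in docs:
        doc_bytes = len(json.dumps(doc).encode("utf-8"))
        if batch and (len(batch) >= MAX_DOCS_PER_REQUEST
                      or batch_bytes + doc_bytes > MAX_PAYLOAD_BYTES):
            yield batch
            batch, batch_bytes = [], 0
        batch.append(doc)
        batch_bytes += doc_bytes
    if batch:
        yield batch
```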
Too Many Requests (429)
To make sure your Virtual Instance is sized appropriately for your ingest needs, monitor for the 429 Too Many Requests status code. The client can receive the 429 error code in two cases:
- The client is sending data faster than the Virtual Instance peak throughput limit.
  - The error message returned by the server is: "Your account is configured with a maximum write rate limit and you have reached this limit."
  - Use appropriate retry, backoff, and jitter strategies if the client hits this error (a sketch is shown after this list). Here is a good guide on how to implement this on the client side.
  - If the application encounters 429 for a large retry count (10 or more), check the streaming ingest metrics. If the application requires high ingest throughput, consider increasing your Virtual Instance size to avoid throttling.
- The client is sending more writes per second than the Virtual Instance limit.
  - The error message returned by the server is: "Your account is configured with a maximum write requests per second limit and you have reached this limit."
  - Use appropriate retry, backoff, and jitter strategies if the client hits this error. Here is a good guide on how to implement this on the client side. If the application encounters 429 for a large retry count (10 or more), reach out to Rockset support.
  - If the client requires sending more requests, consider buffering records on the client and then sending a batch of records (>100KB in size) per Write API request.
  - If the workload still requires a higher write rate, consider forwarding the documents to Amazon Kinesis or a managed Kafka service like Confluent or Amazon MSK, and then use that integration to sync data with Rockset. Since a managed integration like Kinesis is pull-based, the limitations on how fast Rockset can pull data are based only on the source.
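As one way to apply the retry guidance above, the sketch below retries a write request on 429 responses using exponential backoff with full jitter, giving up after the retry count of 10 mentioned above. The `send_write_request` callable is a hypothetical helper that performs the actual HTTP call and returns a response object with a `status_code`; the base delay and cap are assumptions to tune for your workload.

```python
import random
import time

MAX_RETRIES = 10       # threshold mentioned above before escalating
BASE_DELAY_SECS = 0.5  # assumed starting backoff; tune for your workload
MAX_DELAY_SECS = 30.0  # assumed ceiling on a single backoff sleep

def write_with_backoff(send_write_request):
    """Call send_write_request() (hypothetical helper returning an HTTP
    response) and retry on 429 with exponential backoff and full jitter."""
    for attempt in range(MAX_RETRIES + 1):
        response = send_write_request()
        if response.status_code != 429:
            return response
        # Full jitter: sleep a random amount between 0 and
        # min(cap, base * 2^attempt).
        delay = random.uniform(
            0, min(MAX_DELAY_SECS, BASE_DELAY_SECS * (2 ** attempt))
        )
        time.sleep(delay)
    # Still throttled after many retries: check the streaming ingest metrics,
    # consider a larger Virtual Instance, or contact Rockset support.
    raise RuntimeError(f"Write request still throttled after {MAX_RETRIES} retries")
```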
Create an Empty Collection
While you can directly add documents to any existing collection, you will need to first create an empty collection if you intend to use the Rockset API to add documents to a new collection.
You can create an empty collection by navigating to Collections > Create Collection > Write API in the Rockset Console.
The Rockset API also exposes a Create Collection endpoint enabling you to create an empty collection from your application code.
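For example, a minimal sketch of creating an empty collection through the REST API with Python's `requests` library might look like the following. The regional API server hostname, API key, workspace, and collection name are placeholders, and the request shape reflects our reading of the Create Collection endpoint; consult the endpoint documentation for the full set of options.

```python
import requests

API_SERVER = "https://api.usw2a1.rockset.com"  # replace with your region's API server
API_KEY = "YOUR_API_KEY"                       # placeholder credential
WORKSPACE = "commons"                          # placeholder workspace

# Create an empty collection (no managed source attached).
resp = requests.post(
    f"{API_SERVER}/v1/orgs/self/ws/{WORKSPACE}/collections",
    headers={"Authorization": f"ApiKey {API_KEY}"},
    json={"name": "my_write_api_collection"},
)
resp.raise_for_status()
print(resp.json())
```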
Add Documents
The Rockset API exposes an Add Documents endpoint so that you can insert data directly into your collections from your application code.
For your convenience, Rockset also maintains SDKs for Node.js, Python, Java, and Go. Each SDK has its own set of methods for using the REST API to add documents which you can find in its documentation.
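As an illustration, a bare-bones Add Documents call with Python's `requests` library might look like the sketch below. The hostname, API key, workspace, collection, and sample documents are placeholders; refer to the Add Documents endpoint documentation for the complete request and response schema.

```python
import requests

API_SERVER = "https://api.usw2a1.rockset.com"  # replace with your region's API server
API_KEY = "YOUR_API_KEY"
WORKSPACE = "commons"
COLLECTION = "my_write_api_collection"

docs = [
    {"_id": "order-1", "status": "shipped"},
    {"_id": "order-2", "status": "pending"},
]

# Documents are sent under the "data" key of the request body.
resp = requests.post(
    f"{API_SERVER}/v1/orgs/self/ws/{WORKSPACE}/collections/{COLLECTION}/docs",
    headers={"Authorization": f"ApiKey {API_KEY}"},
    json={"data": docs},
)
resp.raise_for_status()
# The response includes per-document statuses and the last_offset used later
# for verification (see "Verify Collection is Updated" below).
print(resp.json())
```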
Delete Documents
To delete existing documents from your collections, simply specify the `_id` fields of the documents you wish to remove and make a request to the Delete Documents endpoint.
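For instance, a sketch of a Delete Documents request in Python follows; only the `_id` of each document to remove is supplied. The hostname, API key, workspace, and collection are placeholders.

```python
import requests

API_SERVER = "https://api.usw2a1.rockset.com"  # replace with your region's API server
API_KEY = "YOUR_API_KEY"
WORKSPACE = "commons"
COLLECTION = "my_write_api_collection"

# Only the _id of each document to delete is required.
resp = requests.delete(
    f"{API_SERVER}/v1/orgs/self/ws/{WORKSPACE}/collections/{COLLECTION}/docs",
    headers={"Authorization": f"ApiKey {API_KEY}"},
    json={"data": [{"_id": "order-1"}, {"_id": "order-2"}]},
)
resp.raise_for_status()
print(resp.json())
```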
Patch Documents
To update existing documents in a collection using the Rockset API, you can make requests to the Patch Documents endpoint. For each existing document you wish to update, you will need to specify the following two parameters:
- `_id` (string), holding the `_id` field (primary key) of the document which is being patched
- `patch`, holding a list of patch operations to be applied to that document, following the JSON Patch standard.
Each patch operation is a dictionary with a key `op` (string) indicating the patch operation, and additional keys `path` (string), `value` (object), and `from` (string) which are used as required arguments for this patch operation. The required arguments differ from one operation type to another. The JSON Patch standard defines several types of patch operations, their arguments, and their behavior. Refer to the JSON Patch documentation for more details.
If a patch operation's argument is a field path, then it is specified using the JSON Pointer standard defined by the IETF. In essence, field paths are represented as a string of tokens separated by `/` characters. These tokens either specify keys in objects or indexes into arrays, and arrays are 0-based.
For example, in this document:

```json
{
  "biscuits": [{ "name": "Digestive" }, { "name": "Choco Leibniz" }]
}
```
The path "/biscuits"
would point to the biscuits
array, while the path "/biscuits/1/name"
would point to "Choco Leibniz"
.
There are six supported JSON Patch operations:
- `add`, which adds a value (specified by the `value` parameter) to an object or inserts it into an array (specified by the `path` parameter). In the case of an array, the value is inserted before the given index. The `-` character can be used instead of an index to insert at the end of an array. The parameters `path` (string) and `value` (object) are required for this operation.
- `remove`, which removes the first instance of an object or element of an array (specified by the `path` parameter). The parameter `path` (string) is required for this operation.
- `replace`, which replaces the first instance of an object or element of an array (specified by the `path` parameter) with a value (specified by the `value` parameter). This operation is equivalent to a `remove` operation immediately followed by an `add` operation. The parameters `path` (string) and `value` (object) are required for this operation.
- `copy`, which copies a value from one location (specified by the `from` parameter) to another location (specified by the `path` parameter) within the JSON document. The parameters `path` (string) and `from` (string) are required for this operation.
- `move`, which moves a value from one location (specified by the `from` parameter) to another location (specified by the `path` parameter) within the JSON document. The parameters `path` (string) and `from` (string) are required for this operation.
- `test`, which runs a test to check if a value (specified by the `path` parameter) is set in the document. If the test fails, then the patch as a whole will not apply.
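Putting these pieces together, a Patch Documents request might look like the sketch below, which applies a `replace` and an `add` operation to a single document. The hostname, API key, workspace, collection, document `_id`, and field names are illustrative placeholders.

```python
import requests

API_SERVER = "https://api.usw2a1.rockset.com"  # replace with your region's API server
API_KEY = "YOUR_API_KEY"
WORKSPACE = "commons"
COLLECTION = "my_write_api_collection"

patch_request = {
    "data": [
        {
            "_id": "order-1",  # primary key of the document being patched
            "patch": [
                # Replace an existing field's value.
                {"op": "replace", "path": "/status", "value": "delivered"},
                # Append an entry to the end of an array using "-".
                {"op": "add", "path": "/tags/-", "value": "expedited"},
            ],
        }
    ]
}

resp = requests.patch(
    f"{API_SERVER}/v1/orgs/self/ws/{WORKSPACE}/collections/{COLLECTION}/docs",
    headers={"Authorization": f"ApiKey {API_KEY}"},
    json=patch_request,
)
resp.raise_for_status()
print(resp.json())
```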
Use the SQL INSERT INTO Statement
You can add or patch documents using the `INSERT INTO` statement in a SQL query, which allows you to insert the result of a query into a collection. If you `SELECT` the `_id` field of an existing document in that query, it will update the existing document rather than add a new document. Learn more about the syntax and usage of the SQL `INSERT INTO` statement here.
Note that this is not recommended and should only be used to perform one-off fixes, as this will inefficiently occupy query execution resources not optimized for data ingest. Instead, we generally recommend that you use the Rockset API to regularly update data in your collections.
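If you do need a one-off fix, a sketch of issuing an `INSERT INTO` statement through the query endpoint from Python is shown below. The hostname, API key, workspace, collection, and SELECT query are placeholders, and the query endpoint path and request shape reflect our reading of the Query API; check them against the current documentation.

```python
import requests

API_SERVER = "https://api.usw2a1.rockset.com"  # replace with your region's API server
API_KEY = "YOUR_API_KEY"

# One-off fix: copy rows from a staging collection into the target collection.
# Selecting _id in the query would patch existing documents instead of adding new ones.
sql = """
INSERT INTO commons.my_write_api_collection
SELECT name, status FROM commons.staging_orders
WHERE status = 'pending'
"""

resp = requests.post(
    f"{API_SERVER}/v1/orgs/self/queries",
    headers={"Authorization": f"ApiKey {API_KEY}"},
    json={"sql": {"query": sql}},
)
resp.raise_for_status()
print(resp.json())
```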
Upload a File
To manually create a collection using a file as your data source, you can do so from the Rockset Console by navigating to Collections > Create Collection > File Upload. You can also upload files to any existing collections (or to this one after it has been created). The file formats currently supported include JSON, CSV, XML, Parquet, XLS and PDF.
Verify Collection is Updated
Before querying a collection, you can verify that specific documents have been added, deleted, or patched by using the Write API together with the Get Collection Commit API.
The Write API returns written offsets as `last_offset`, which follows the encoding format below:

```
f<version>:<timestamp>:<min-offset>:<max-delta>:<deltas-base32-varints>
```

- `<version>` is the version number of the offset.
- `<timestamp>` is the unix timestamp.
- `<min-offset>` is the smallest offset value across all partitions.
- `<max-delta>` is the largest offset minus the smallest offset value.
- `<deltas-base32-varints>` is a sequence of 5-bit varints representing Crockford's base 32 (https://www.crockford.com/base32.html) encoding of all offsets.
You can verify that the data in the returned offset can be queried by making requests to the Get Collection Commit API endpoint. Simply pass the `last_offset` in the `name` field and poll this endpoint until the `passed` field in the response returns `true`. This signifies that the collection has been updated with the data from the associated write request, so any subsequent queries are guaranteed to include the data associated with the request to the Write API.
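A sketch of that polling loop in Python is shown below. The hostname, API key, workspace, and collection are placeholders, and the commit endpoint path and response shape (an `offsets/commit` path, a `name` list in the request, and a `data.passed` flag in the response) reflect our reading of the Get Collection Commit API; verify them against the endpoint documentation.

```python
import time
import requests

API_SERVER = "https://api.usw2a1.rockset.com"  # replace with your region's API server
API_KEY = "YOUR_API_KEY"
WORKSPACE = "commons"
COLLECTION = "my_write_api_collection"

def wait_for_commit(last_offset, poll_interval_secs=1.0, timeout_secs=60.0):
    """Poll the Get Collection Commit endpoint until the write is queryable."""
    deadline = time.monotonic() + timeout_secs
    while time.monotonic() < deadline:
        resp = requests.post(
            f"{API_SERVER}/v1/orgs/self/ws/{WORKSPACE}/collections/{COLLECTION}/offsets/commit",
            headers={"Authorization": f"ApiKey {API_KEY}"},
            json={"name": [last_offset]},  # the last_offset returned by the write request
        )
        resp.raise_for_status()
        if resp.json()["data"]["passed"]:
            return True  # the write is now visible to queries
        time.sleep(poll_interval_secs)
    return False
```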