Configuring Ingest Transformations
Ingest transformations enable you to run SQL queries over data from your sources and only persist the output of those queries to your collection. This gives you the power of SQL to drop, rename, or combine fields, filter out incoming rows, and even aggregate incoming documents in real-time with rollups.
Rockset's ingestion platform applies these transformations both during the initial load of a new collection's data and on an ongoing basis to new documents coming from your source, effectively giving you a real-time materialized view of your data.
We'll cover the following topics:
- Configuring Ingest Transformations
- Use Cases for Ingest Transformations
- Updating Ingest Transformations
Ingest Transformations
You configure ingest transformations by specifying the transformation at collection creation time, either in the Rockset Console or through the REST API's create collection endpoint, using the field_mapping_query field. To update an ingest transformation after collection creation, see Updating Ingest Transformations.
Ingest transformations have the following structure:
SELECT
  expression [, ...]
FROM
  _input
[ WHERE condition ]
[ GROUP BY field [, ...] ]
[ HAVING condition ]
[ CLUSTER BY field [, ...] ]
Each component above is similar to its normal query counterpart. Here's a brief overview of each clause and how/why you might use them.
SELECT Clause
The SELECT clause is where you define:
- Which input fields to include or exclude (e.g. SELECT * or SELECT * EXCEPT (a, b) or SELECT a, b).
- Field projections and renaming (e.g. SELECT a AS b).
- New expressions to evaluate and add to the final document (e.g. SELECT a + 1 AS b).
- Whether object subfields are extracted to top-level fields (e.g. SELECT a, b.*). As with SELECT *, you can use the EXCEPT clause to exclude some subfields (e.g. SELECT a, b.* EXCEPT (c)).
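Putting several of these together, here's a minimal sketch (the field names raw_payload, user_id, price, and quantity are hypothetical):
SELECT
  * EXCEPT (raw_payload),     -- drop an unused field
  user_id AS customer_id,     -- rename a field
  price * quantity AS total   -- add a computed field
FROM
  _input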
Note: Explicit projections take precedence over fields from the input document of the same name. For example, input document {"a": 1, "b": 2} with ingest transformation SELECT *, a AS b FROM _input will result in document {"a": 1, "b": 1} and the original value of b will be lost.
FROM Clause
Unlike regular queries, ingest transformations may only reference the pseudo-collection _input, which is the stream of input documents coming from your source. You may not have any other collection, view, or alias in your FROM clause. However, you may use the WITH clause to construct a CTE that can be referenced in the FROM clause.
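For instance, a minimal sketch of a CTE over _input (the email field name is hypothetical):
WITH cleaned AS (
  SELECT
    * EXCEPT (email),
    LOWER(email) AS email   -- normalize casing before ingestion
  FROM
    _input
)
SELECT
  *
FROM
  cleaned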
WHERE Clause
The WHERE clause filters out input documents from your source based on some condition, similar to a normal query.
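For example, a sketch that drops test records at ingest time (the status field is hypothetical):
SELECT
  *
FROM
  _input
WHERE
  status <> 'test'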
GROUP BY Clause
The GROUP BY clause is used to construct a rollup collection, in which source documents are aggregated at ingestion time and only the resulting summarized aggregates are ingested into your collection. Much like with the GROUP BY clause of a regular SQL query, all input rows with the same values for the fields in the grouping set will be aggregated together to generate one output row.
With rollups you will often want to aggregate along a time dimension, for example saving hourly summaries of sensor readings. In those cases, it's best to create a mapping for _event_time and use that as one of the fields in your GROUP BY clause. See the rollups examples for sample usage.
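For instance, a sketch of an hourly sensor rollup (sensor_id, reading_ts, and temperature are hypothetical source fields; DATE_TRUNC buckets readings by hour):
SELECT
  sensor_id,
  DATE_TRUNC('HOUR', TIMESTAMP_MILLIS(reading_ts)) AS _event_time,
  AVG(temperature) AS avg_temperature,
  COUNT(*) AS reading_count
FROM
  _input
GROUP BY
  sensor_id,
  _event_time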
There are a few key differences, however, from the GROUP BY clause of a regular query:
- Fields referenced in the GROUP BY clause of an ingest transformation must be explicit fields from the SELECT clause, such as SELECT a FROM _input GROUP BY a, or in ordinal form, SELECT a FROM _input GROUP BY 1. You cannot do SELECT * FROM _input GROUP BY a, even though you may do this in a regular query.
- A rollup query is only allowed to contain one grouping set. This means that you can have GROUP BY a, b, c but not GROUP BY GROUPING SETS ((a), (a, b)) as you might do in a regular query.
HAVING Clause
The HAVING clause is used in conjunction with the GROUP BY clause in a rollup to filter on the aggregated rows generated by the GROUP BY clause and aggregations. Unlike the WHERE clause, which filters out input rows, the HAVING clause only filters out rows after the GROUP BY clause and aggregations have been applied.
As with a normal SQL query, the HAVING clause can refer to the fields in the GROUP BY clause or aggregate functions. For example, if your rollup query groups by fields a and b, then including HAVING a IS NOT NULL in your rollup query will drop the aggregated rows where a is null. If you change the rollup query to contain HAVING COUNT(*) > 1 instead, this keeps the groups where at least two rows were used to form the aggregated row.
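Putting that together, a sketch that keeps only groups built from at least two source rows (customer_id and amount are hypothetical fields):
SELECT
  customer_id,
  SUM(amount) AS total_spend,
  COUNT(*) AS order_count
FROM
  _input
GROUP BY
  customer_id
HAVING
  COUNT(*) > 1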
CLUSTER BY Clause
The CLUSTER BY clause is used to configure data clustering on a collection. Unlike the other clauses covered above, CLUSTER BY is an ingest transformation-only concept and has no analog in regular query execution.
The fields referenced in the CLUSTER BY clause must be explicit fields in the SELECT clause and can be referenced by their name, e.g. SELECT x FROM _input CLUSTER BY x, or by their ordinal, e.g. SELECT x FROM _input CLUSTER BY 1.
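For example, a sketch that clusters a collection on a frequently filtered field (tenant_id is hypothetical):
SELECT
  *,
  tenant_id
FROM
  _input
CLUSTER BY
  tenant_id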
Ingest Transformation Use Cases
Type Coercion
You can cast fields to enforce type consistency, detect bad records, or extract advanced types from string representations.
Example:
SELECT
TRY_CAST(last_updated_at AS DATETIME) AS updated_at
FROM
_input
PII/PHI Masking
If your input dataset has PII (personally identifiable information) or PHI (protected health information) fields, you can use a one-way crypto hash function so that Rockset only stores the hashed value and not the original PII/PHI field.
Example:
SELECT
TO_HEX(SHA256(email_address)) AS email_hash
FROM
_input
Precomputing Fields
If your application queries involve complex SQL functions which result in slow query processing, you can pre-compute the result of the expensive SQL expression at ingestion time. Queries on these computed output fields are much faster than executing the SQL expression at query time.
Example:
-- extract the domain from the input email address
SELECT
REGEXP_EXTRACT_ALL(
email_address,
'^([a-zA-Z0-9_.+-]+)@([a-zA-Z0-9-]+\.[a-zA-Z0-9-.]+$)',
2
) AS domain
FROM
_input
Dropping Fields
If you have unused fields in your input dataset, you can drop them upon data ingestion to save storage space.
Example:
SELECT
  * EXCEPT (large_field_1, large_field_2)
FROM
  _input
Name Standardization
Sometimes the data coming into Rockset can be messy, and you can use the ingest transformation to clean it up.
Example:
SELECT
mnfct AS manufacturer,
ser_new_x AS serial_number
FROM
_input
WHERE
ser_new_x IS NOT NULL
Special Fields
You can use the ingest transformation to specify the values of the special fields _id, _event_time, and/or _op.
_id
The _id field is the unique identifier of a document in Rockset.
If not specified then Rockset will create a new, random id for the document at ingestion time. But
if your source data already has a unique identifier, it's best to map it to _id
so future updates
to the document from the source are properly reflected in Rockset.
Example:
SELECT
user_id AS _id,
*
FROM
_input
Note: If you map a custom value for _id, the value must be non-null and of string type, otherwise the ingestion of the corresponding document will fail.
_event_time
The _event_time field is the document timestamp in Rockset and is used to determine the cutoff for retention.
Example:
SELECT
TIMESTAMP_MILLIS(created_at_ts) AS _event_time,
*
FROM
_input
Note: If you map a custom value for _event_time, the value must be non-null and of either int (microseconds since epoch) or timestamp type, otherwise the ingestion of the corresponding document will fail.
_op
The _op field is used to properly handle CDC operations from your source. The supported operation types (case insensitive) are: INSERT, UPDATE, UPSERT (default), DELETE, REPLACE, and REPSERT. For more on their exact semantics, refer to the documentation for _op.
Example:
SELECT
CASE
WHEN _delete = true THEN 'DELETE'
ELSE 'UPSERT'
END as _op,
*
FROM
_input
Search Tokenization
Text fields in the input can be tokenized at data ingestion time so you can more efficiently run text search queries later.
Example:
SELECT
TOKENIZE(tweet_text, 'en_US') AS tweet_tokens
FROM
_input
Vector Search
You can enable storage and performance optimizations for vector embeddings and ensure compatibility between vectors during query execution. For more implementation details, refer to the documentation for VECTOR_ENFORCE.
Example:
SELECT
VECTOR_ENFORCE(embeddings, 1536, 'float') as embeddings
FROM
_input
Importing Geographic Data
You can convert latitude/longitude points or well-known text (WKT) strings to Rockset's native geography type.
Example:
SELECT
ST_GEOGPOINT(longitude, latitude) AS geo
FROM
_input
To validate your geographic data at ingest time, the ST_VALIDATE_GEOGRAPHY function can be useful.
Example:
SELECT
*
FROM
_input
WHERE
ST_VALIDATE_GEOGRAPHY(_input.geo_polygon) IS NULL
CSV File Ingestion
When ingesting data from a CSV or TSV file, you can use ingest transformations to fully control how the data is ingested into your collection, just like with any other data source. If you don't need to ingest every column from your CSV file, you can specify only the columns you need inside your SELECT clause. To specify the typing of the fields ingested from your file, use type casting expressions.
As an example, assume you have a CSV file with three fields ('name', 'sign_up_date', and 'age'). An ingest transformation that drops the age field and converts sign_up_date to a datetime type, while also renaming the column, could be written in the following way:
SELECT
name,
TRY_CAST(sign_up_date AS DATETIME) AS signUpDate
FROM
_input
For more examples covering further use cases of ingest transformations, review the rest of this document.
Limitations
There are some limitations related to ingest transformation queries that should be kept in mind.
- JOINs are not allowed in the ingest transformation query. When bringing in data from multiple sources into the same collection, the sources are automatically unioned and the result made available via the pseudo-collection _input.
- Some SQL clauses, including LIMIT, OFFSET, and ORDER BY, are unsupported in the ingest transformation.
Source-specific limitations:
- MongoDB – Rockset's MongoDB connector only receives deltas from your MongoDB collection instead of the full document on each update, in order to reduce load on your MongoDB cluster. For this reason, with MongoDB any expression in the ingest transformation query can only involve one field. So SELECT a + 1 AS b is allowed, but not SELECT a + b AS c.
- DynamoDB – For collections sourced from DynamoDB, you cannot specify a custom mapping for _id, so SELECT user_id AS _id is invalid. This is so we can properly map updates coming from DynamoDB to the corresponding Rockset document to keep your collection in sync.