Replicate your data in minutes, through our UI or programmatically with our API.
The Katonic Redshift destination connector allows you to sync data to Redshift, without the headache of writing and maintaining ETL scripts.
This Redshift destination connector is built on top of the destination-jdbc code base and is configured to rely on the JDBC 4.2 standard drivers provided by Amazon via Mulesoft, as described in the Redshift documentation.
Check our detailed documentation on how to start syncing data to Redshift.
| Feature | Supported |
| :--- | :--- |
| Full Refresh Sync | Yes |
Each stream will be output into its own raw table in Redshift. Each table will contain 3 columns:
- ab_id: a UUID assigned by Katonic to each event that is processed. The column type in Redshift is VARCHAR.
- emitted_at: a timestamp representing when the event was pulled from the data source. The column type in Redshift is TIMESTAMP WITH TIME ZONE.
- data: a JSON blob containing the event data. The column type in Redshift is VARCHAR, but it can be parsed with Redshift's JSON functions.
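As a rough sketch of what this raw-table layout means in practice, the snippet below shows a hypothetical raw row (column values are illustrative, not real Katonic output) and how the VARCHAR `data` column can be parsed client-side with Python's standard `json` module. The equivalent in-warehouse parse would use a Redshift JSON function such as `JSON_EXTRACT_PATH_TEXT`.

```python
import json

# A hypothetical raw row as it might be returned from the Redshift raw table.
# The ab_id and emitted_at values here are made up for illustration.
row = {
    "ab_id": "b7f1c2d0-0000-0000-0000-000000000000",   # VARCHAR (UUID)
    "emitted_at": "2024-01-01T00:00:00+00:00",          # TIMESTAMP WITH TIME ZONE
    "data": '{"id": 42, "name": "example"}',            # JSON stored as VARCHAR
}

# Because the data column is plain VARCHAR, parse it after fetching:
record = json.loads(row["data"])
print(record["id"], record["name"])

# The same field could be extracted inside Redshift with, e.g.:
#   SELECT JSON_EXTRACT_PATH_TEXT(data, 'name') FROM <raw_table>;
```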
You will need to choose an existing database or create a new database that will be used to store synced data from Katonic.
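If you opt to create a new database, the setup amounts to a few SQL statements run against your cluster. The sketch below only assembles those statements; the database and user names are placeholders, not Katonic requirements, and the statements would be executed through your usual Redshift client or driver.

```python
# Hypothetical setup for a dedicated sync database; the names below are
# placeholders chosen for illustration.
database = "katonic_sync"
user = "katonic_user"

statements = [
    f"CREATE DATABASE {database};",
    f"CREATE USER {user} WITH PASSWORD '<password>';",  # placeholder password
    f"GRANT CREATE ON DATABASE {database} TO {user};",
]

# In practice these would be run via a Redshift connection;
# here we just print them for review.
for stmt in statements:
    print(stmt)
```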