Incremental Update¶
Documentation Conventions¶
Commands and keywords are in this font.

$OFFLOAD_HOME is set when the environment file (offload.env) is sourced, unless already set, and refers to the directory named offload that is created when the software is unpacked. This is also referred to as <OFFLOAD_HOME> in sections of this guide where the environment file has not been created/sourced.

Third party vendor product names might be aliased or shortened for simplicity. See Third Party Vendor Products for cross-references to full product names and trademarks.
Introduction¶
Incremental Update is a data synchronization feature of Gluent Offload Engine that enables offloaded data to be kept up-to-date with any data modifications (DML) that occur in the RDBMS after the initial offload. It is primarily designed to enable users to fully-offload tables such as dimensions - typically these tables continue to be modified after offload and therefore need to be frequently synchronized. Incremental Update can also be used to make updates to offloaded data that no longer exists in the RDBMS (for example, to modify data in partitions that were offloaded and then dropped from the RDBMS). The use-cases and options for Incremental Update are described in this guide.
Note
Incremental Update is currently supported for use with Cloudera Data Hub, Cloudera Data Platform Private Cloud, Google BigQuery, Snowflake and Azure Synapse Analytics. This feature is not currently supported for Cloudera Data Platform Public Cloud. Where possible, this guide describes the Incremental Update feature in relation to all supported backends, but any features that are limited to a specific backend or subset of supported backends are noted as such.
When to Use Incremental Update¶
Incremental Update is designed to support two primary use-cases:
Full offloading of dimensions that are subject to regular updates after offload
Correcting (or “patching”) offloaded data at any time after offload, even if the corresponding data is no longer present in the RDBMS
There are many other ways that Incremental Update can be used, but it is important to note that it is intended to be used for relatively low-volume changes. It is not designed to be a high-volume, low-latency replication tool for tables with significant rates of change. For example, it is not designed to replace Partition-Based Offload or Subpartition-Based Offload (also known as Incremental Partition Append), or to replace Predicate-Based Offload (also known as Incremental Data Append), although it can be used to change data in previously offloaded (sub)partitions or predicates. The use-cases and supporting options will become clearer throughout this document.
How Incremental Update Works¶
Incremental Update is managed by Gluent Offload Engine and can be enabled for tables that have already been offloaded or as part of an initial offload. When Incremental Update is enabled, additional database objects are created (in both RDBMS and backend) to track and store changes to the offloaded data. Changes are periodically synchronized to the backend and, depending on the backend, are either merged with the base data immediately (BigQuery, Snowflake or Synapse) or stored in a delta table and queried with the base offloaded data via a dedicated view, until they are eventually merged with the original offloaded data (Hadoop).
Incremental Update Phases¶
There are three phases to Incremental Update:

Phase 1: Capture
Phase 2: Extraction
Phase 3: Compaction (Hadoop backends only)
Phase 1: Capture¶
The first phase of Incremental Update is capture: that is, the identification and recording of DML changes to an offloaded table. When enabling Incremental Update, an extraction method must be specified. This tells Gluent Offload Engine which RDBMS objects need to be created to track and capture future DML changes to the application table that is to be (or has already been) offloaded. There are six extraction methods: some apply to all offloaded table types and some are more specialized and apply only to certain backends and scenarios.
Incremental Update also requires a primary key to ensure that changes are applied to the correct records in the backend. If the offloaded table does not have a primary key, a set of columns that can be used as a logical key must be provided when enabling the feature. The primary or notional key is used in the setup of some of the additional RDBMS and backend objects (if applicable) that are created when a table is Incremental Update-enabled.
The majority of the extraction methods require the creation of additional RDBMS objects to track and store changes. Most methods create a changelog table, to store change records in the RDBMS prior to transfer to the backend (Phase 2), and a changelog sequence, to journal the DML changes and ensure they are applied in the correct order later on. Depending on the method, additional objects (RDBMS triggers, RDBMS view) are also created.
See also
The Choosing an Extraction Method section assists with selecting a suitable extraction method.
Phase 2: Extraction¶
The second phase of Incremental Update is extraction: that is, extracting changes from the RDBMS, copying them to the backend and applying them to the existing offloaded data. Extractions are run using Offload and can be executed as frequently or infrequently as required, depending on factors such as the workload of the RDBMS application and the latency demands (that is, the acceptable delay between DML changes occurring in the RDBMS and then appearing in the backend).
When enabling Incremental Update for Hadoop backends, all extraction methods create the same Hadoop objects: a delta table and two views known as delta views (one internal, one user-facing). The original offloaded table is renamed to an internal format and is known as a base table and the user-facing delta view is named as the original table. This view is the object that users always query to get the latest state of the data (base + changes) once Incremental Update is in use. During extraction, change records are stored in the delta table and removed from the RDBMS changelog table (where applicable). The delta table contents are physically merged into the offloaded base table data periodically by a process known as compaction (Phase 3).
When enabling Incremental Update for Google BigQuery, Snowflake or Azure Synapse Analytics, no additional objects are created in the backend. During extraction, change records are offloaded to a temporary staging area in the same load dataset that is used by Offload. Once extracted, changes are merged with the base table and the temporary staging objects are removed.
Phase 3: Compaction¶
The final phase of Incremental Update when using Hadoop backends is compaction: a process by which the change records being staged in the delta table are physically merged with the data in the base table. Until a compaction takes place, the delta view (used for all queries) has to join the base table with the delta table to merge the change records with the base data. Extractions continually load data into the delta table and as the volume increases, so does the work that the delta view has to perform. Compaction provides a physical reset, whereby a new base table (or new partitions in the base table, depending on the chosen method) are created out of the existing base and delta tables and the delta table is cleared down.
See also
More details on compaction, together with examples, are provided later in Merging Changes (Running a Compaction).
Note
The delta table, delta view and compaction process are needed because Hadoop does not support updates to existing data. The database objects that Gluent Offload Engine creates provide the latest-state view of the data (i.e. the same results that would be available if the change records were actually applied as DML). The compaction process optimizes data access by clearing down the delta table and therefore reducing the work that the delta view needs to do to join the base and delta tables.
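To illustrate the latest-state concept, the following is a minimal sketch of what a merging delta view could look like for the SH.PRODUCTS example used later in this guide. The object names, column names (PROD_ID, PROD_NAME, PROD_LIST_PRICE) and change-tracking columns (GL_CHANGE_SEQ, GL_CHANGE_TYPE) are illustrative assumptions only; the view that Gluent Offload Engine actually generates will differ.

-- Illustrative sketch only: overlay the most recent change per key onto the base table
CREATE VIEW sh.products AS
SELECT prod_id, prod_name, prod_list_price
FROM   sh.products_data b                                  -- base table
WHERE  NOT EXISTS (SELECT 1
                   FROM   sh_load.products_changes d
                   WHERE  d.prod_id = b.prod_id)           -- keep rows with no change records
UNION ALL
SELECT prod_id, prod_name, prod_list_price
FROM   (SELECT d.*,
               ROW_NUMBER() OVER (PARTITION BY d.prod_id
                                  ORDER BY d.gl_change_seq DESC) AS rn
        FROM   sh_load.products_changes d) latest
WHERE  latest.rn = 1
AND    latest.gl_change_type != 'D';                       -- most recent change wins, unless it was a delete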
Enabling Incremental Update¶
Incremental Update is enabled per table with the offload command. It can be enabled for a table at the time of an initial offload or at a later stage if required (the command is the same). It is recommended that Incremental Update is enabled for a table only when it is necessary, and not used as a default option for all offloaded tables.
The following examples demonstrate how to enable Incremental Update for a table.
Example 1: Enable Incremental Update¶
$ . $OFFLOAD_HOME/conf/offload.env
$ $OFFLOAD_HOME/bin/offload -t SH.PRODUCTS -x --incremental-updates-enabled --incremental-extraction-method=CHANGELOG
This example uses the CHANGELOG extraction method to demonstrate the --incremental-extraction-method option. To choose an alternative extraction method, simply substitute CHANGELOG with the relevant value. Information on all extraction methods directly follows the examples.
Note
This example includes the sourcing of the environment file for an Offload and Incremental Update session. All other examples in this guide will assume this step has occurred.
Example 2: Enable Incremental Update (No Primary Key)¶
As described earlier, if the table being offloaded does not have a primary key, the notional key to use must be provided, as the following example demonstrates.
$ $OFFLOAD_HOME/bin/offload -t SH.PRODUCTS -x \
--incremental-updates-enabled \
--incremental-extraction-method=CHANGELOG \
--incremental-key-columns=PROD_ID
Failure to provide an Incremental Update key will result in an exception and the offload will terminate before any work is done.
Choosing an Extraction Method¶
There are six extraction methods: three basic (common) and three specialist. Choosing the correct extraction method for the offload use-case is important, as it can ensure the best performance and minimize the impact on the system. It is also important to recognize that the specialist extraction methods require an understanding of the implications of their use and care when using them. This is covered below.
The following tables summarize the six methods, when to use them, which DML operations they support and which offload type they apply to.
Table 1: Incremental Update Extraction Methods¶
| Extraction Method | Basic | DML: INSERT | DML: UPDATE | DML: DELETE | DML: MERGE | Offload Type 1: Full (100/0) | Offload Type 1: Full (100/10) | Offload Type 1: Incremental (90/10) | When To Use |
|---|---|---|---|---|---|---|---|---|---|
| ORA_ROWSCN 2 | Yes | Yes | Yes | No | No | Yes | Yes | No | Changes are INSERT and UPDATE only |
| CHANGELOG | Yes | Yes | Yes | Yes | Yes | Yes | Yes | No | The RDBMS table undergoes changes of all DML types |
| UPDATABLE | Yes | Yes | Yes | Yes | No | Yes | Yes | Yes | Changes need to be made to offloaded data in the backend 3, for example if the offloaded data is no longer in the RDBMS or a trigger on the application table (i.e. the CHANGELOG method) cannot be used |
| UPDATABLE_CHANGELOG | No | Yes | Yes | Yes | Yes | Yes | Yes | No | Changes need to be made to offloaded data that both does and does not still exist in the RDBMS (combines the CHANGELOG and UPDATABLE use-cases) |
| CHANGELOG_INSERT | No | Yes | No | No | No | Yes | Yes | No | The RDBMS table is modified with INSERT operations only. This method is only available for Hadoop backends 4 |
| UPDATABLE_INSERT | No | Yes | No | No | No | Yes | Yes | Yes | Data sometimes needs to be added to partitions that have been offloaded but no longer exist in the RDBMS and the data cannot be inserted into the RDBMS table. This method is only available for Hadoop backends 4 |
- 1 Offload type refers to the percentage of a table's data that is offloaded. There are two offload types (INCREMENTAL and FULL) that can be used to support three offload use-cases, known as 90/10 (INCREMENTAL), 100/0 (FULL) and 100/10 (FULL). See the Glossary and Offload Type for more details.
- 2 The ORA_ROWSCN extraction method can only be used for Oracle Database tables that have the ROWDEPENDENCIES property set. This enables DML changes to be identified at row-level, rather than block-level, and prevents the transfer of too many unchanged records to Hadoop.
- 3 Incremental Update operations using any of the UPDATABLE* extraction methods are known as data patching. Data patching is generally intended to enable corrections and updates to offloaded data that may or may not exist in the RDBMS. It is also important to note that data patching is performed using hybrid objects in the RDBMS, despite it affecting data in Hadoop.
- 4 The *INSERT extraction methods are lightweight and cater purely for insert-only workloads. Using one of these methods asserts that no other DML operations are expected; such operations will not be captured. These methods also have an implication for In-Place Compaction that is described later. The *INSERT extraction methods can only be enabled when using Gluent Offload Engine with supported Hadoop backends.
Table 2: Incremental Update Extraction Methods and Database Objects¶
The following table describes the six extraction methods and the additional RDBMS and backend objects they create (where applicable; no backend objects are created for BigQuery, Snowflake or Synapse backends).
| Extraction Method | RDBMS Object Type (Hybrid Schema) | RDBMS Naming Format | Backend Object Type (Hadoop-only) 5 | Backend Naming Format 6 |
|---|---|---|---|---|
| ORA_ROWSCN | None | None | Base Table 7 | <SCHEMA>.<TABLE>_DATA |
| | | | Delta Table | <SCHEMA>_LOAD.<TABLE>_CHANGES |
| | | | Delta View (User) | <SCHEMA>.<TABLE> 7 |
| | | | Delta View (Internal) | <SCHEMA>.<TABLE>_INTERNAL_USER_VIEW_<N> |
| CHANGELOG | Changelog Table | <SCHEMA>_H.<TABLE>_LOG | As ORA_ROWSCN | |
| | Changelog Sequence | <SCHEMA>_H.<TABLE>_SEQ | | |
| | Changelog Trigger 8 | <SCHEMA>_H.<TABLE> | | |
| UPDATABLE | Changelog Table | <SCHEMA>_H.<TABLE>_LOG | As ORA_ROWSCN | |
| | Changelog Sequence | <SCHEMA>_H.<TABLE>_SEQ | | |
| | Updatable View | <SCHEMA>_H.<TABLE>_UV | | |
| | Updatable Trigger 9 | <SCHEMA>_H.<TABLE>_UV | | |
| UPDATABLE_CHANGELOG | As CHANGELOG and UPDATABLE | | As ORA_ROWSCN | |
| CHANGELOG_INSERT | As CHANGELOG | | As ORA_ROWSCN | |
| UPDATABLE_INSERT | As UPDATABLE | | As ORA_ROWSCN | |
- 5 As previously noted, additional backend objects are only created when using Incremental Update with Hadoop-based backends. No additional objects are created for Google BigQuery, Snowflake or Azure Synapse Analytics backends.
- 6 Objects created to support Incremental Update are named after the application SCHEMA.TABLE, using the suffixes shown in the above table. If a suffix extends an object name beyond the supported character limits of the database platform, a hash is used instead to keep the derived name within that limit. The --incremental-changelog-table option can be used to override the name given to the changelog table.
- 7 When Incremental Update is enabled for Hadoop backends, the base table (i.e. the offloaded version of the RDBMS table stored in Hadoop) is renamed and the user-facing delta view takes on the original offloaded table name. This ensures that existing queries do not need to be modified and that they continue to see the latest state of the data once Incremental Update is enabled. For BigQuery, Snowflake and Synapse, no changes occur to the base table, because updated data is simply merged into the base table during extraction.
- 8 The CHANGELOG, UPDATABLE_CHANGELOG and CHANGELOG_INSERT extraction methods create a trigger on the application table. The trigger is created in the hybrid schema and fires before insert/update/delete for each row modification to the application table.
- 9 The UPDATABLE, UPDATABLE_CHANGELOG and UPDATABLE_INSERT extraction methods create a trigger on a specialist updatable view in the hybrid schema. Any modifications to the application table are not captured by this trigger.
Capturing Changes with Different Extraction Methods¶
The extraction methods capture data modifications in different ways, based on their design and the objects that are created for change tracking.
Capturing Changes with ORA_ROWSCN¶
With the ORA_ROWSCN extraction method, INSERT and UPDATE changes are automatically captured by Oracle Database's multiversion read consistency model, so there are no specific user-actions required to ensure that data modifications can be extracted. As noted previously, to ensure the accuracy of this data capture mechanism, the RDBMS table must have been created with ROWDEPENDENCIES. This ensures that changes are tracked at row-level, rather than block-level, for the table. This method does not support DELETE operations.
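To check whether an application table qualifies for this method, the ROWDEPENDENCIES setting can be verified in the Oracle data dictionary, for example:

SQL> SELECT dependencies
       FROM all_tables
      WHERE owner = 'SH'
        AND table_name = 'PRODUCTS';

A value of ENABLED indicates that the table was created with ROWDEPENDENCIES. Note that this property can only be specified when the table is created; it cannot be added to an existing table with ALTER TABLE, so a table without it would need to be rebuilt before this extraction method could be used.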
Capturing Changes with CHANGELOG¶
With the CHANGELOG extraction method, changes are automatically tracked by a trigger (changelog trigger) that is created on the application table when Incremental Update is enabled. The trigger is row-level and fires before any DML operations on the target table, recording all modifications in a changelog table in the hybrid schema. No specific user-actions are required to capture changes with this method.
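For example, ordinary application DML such as the following is captured by the changelog trigger without any change to the application (the column names assume the standard SH.PRODUCTS sample table):

SQL> UPDATE sh.products
        SET prod_list_price = prod_list_price * 1.05
      WHERE prod_category = 'Electronics';
SQL> COMMIT;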
Capturing Changes with UPDATABLE¶
With the UPDATABLE extraction method, changes are tracked by a specialist trigger and view (updatable trigger and updatable view) that are created in the hybrid schema when Incremental Update is enabled. The primary use-case is to modify data that exists in the backend but no longer exists in the RDBMS. This is known as data patching. Data patching is executed on the RDBMS, using the hybrid objects as described, and UPDATABLE is one of only two extraction methods that are supported for tables offloaded with Partition-Based Offload, Subpartition-Based Offload and Predicate-Based Offload.

To capture changes for eventual extraction to the backend, DML statements are issued against the updatable view in the hybrid schema. This view is usually suffixed with _UV. When a DML statement is executed against the updatable view, a trigger of the same name fires, capturing the modifications and storing them in the changelog table.
Example 3: Capturing Changes with UPDATABLE¶
In the following example, suppose that most of the partitioned SH.SALES table has been offloaded and the offloaded partitions have been dropped from the RDBMS. It is later discovered that one of the offloaded partitions has incorrect sales amounts and they need to be corrected. The partition is no longer in the RDBMS and therefore needs to be corrected in the backend. The updatable view is used as the target for this data:
SQL> UPDATE sh_h.sales_uv
SET amount_sold = amount_sold * 1.1
WHERE time_id BETWEEN DATE '2017-01-01' AND DATE '2017-01-31';
SQL> COMMIT;
The updatable view will retrieve the required data from the backend and the updatable trigger will fire on the update and modify the values as required. The updated rows will be stored in the changelog table in the RDBMS. The changes will not be visible in the backend until the next extraction takes place.
Important
Care must be taken when using the UPDATABLE extraction method on tables that are offloaded periodically with Partition-Based Offload, Subpartition-Based Offload or Predicate-Based Offload. In such cases, users must not use the updatable view to modify data in partitions, subpartitions or predicates that have not yet been offloaded. Doing so will prevent the remainder of the partition, subpartition or predicate data from being offloaded at the next offload operation.
Capturing Changes with UPDATABLE_CHANGELOG¶
As its name suggests, the UPDATABLE_CHANGELOG extraction method covers both the CHANGELOG and UPDATABLE use-cases (remembering that CHANGELOG is primarily used for propagating regular RDBMS changes and UPDATABLE is primarily used for infrequently patching offloaded data). Changes to the original RDBMS application table are automatically tracked by the CHANGELOG objects and require no user actions, while changes made to the updatable view are tracked by the updatable trigger (see Capturing Changes with UPDATABLE for an example). Both capture mechanisms store the change records in the same changelog table.
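As an illustration (assuming SH.PRODUCTS has been offloaded with the UPDATABLE_CHANGELOG method; the column and key values are hypothetical), regular changes are made to the application table as normal, while data patching targets the updatable view:

SQL> -- Regular application change: captured automatically by the changelog trigger
SQL> UPDATE sh.products
        SET prod_list_price = 29.99
      WHERE prod_id = 40;
SQL> COMMIT;

SQL> -- Data patching of offloaded data: captured by the updatable trigger via the updatable view
SQL> UPDATE sh_h.products_uv
        SET prod_list_price = 29.99
      WHERE prod_id = 4040;
SQL> COMMIT;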
Important
When using the UPDATABLE_CHANGELOG extraction method, the target data for DMLs must not overlap. The UPDATABLE part of this extraction method is purely for patching data that exists in the backend and must never be used to repeat DMLs that have already been captured by the changelog trigger. Doing so would risk incorrect change records being propagated to the offloaded data. Any changes to the RDBMS application table will automatically be captured by the changelog trigger so must not be repeated on the updatable view.
Capturing Changes with CHANGELOG_INSERT¶
The CHANGELOG_INSERT extraction method is an optimization of the CHANGELOG method for use with insert-only workloads and Hadoop backends. With this method, the changelog trigger captures inserts into the RDBMS application table and stores them in the changelog table ready for extraction to Hadoop. No specific user-actions are required to capture the changes, but note that UPDATE, DELETE and MERGE operations will not be captured by this method.

The optimization with this extraction method is realized in the delta view. In this case, the delta view performs a lightweight UNION ALL operation between the delta table and the base table, unlike the more general extraction methods, which set up a more complex delta view that merges the delta table with the base table. Because the delta view does not merge the change records with the base data when using CHANGELOG_INSERT, the application table must have a primary key to prevent duplicate keys. Users can override this with the --incremental-no-verify-primary-key option, but in doing so must guarantee that the application does not insert duplicate data. It is recommended that this option is not used.
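As a minimal illustrative sketch (the object and column names are assumptions, and the generated view will differ), the non-merging delta view created for the insert-only methods is simply an append of the delta table to the base table:

-- Illustrative sketch only: insert-only (non-merging) delta view
CREATE VIEW sh.sales AS
SELECT time_id, prod_id, amount_sold FROM sh.sales_data            -- base table
UNION ALL
SELECT time_id, prod_id, amount_sold FROM sh_load.sales_changes;   -- delta table (insert-only change records)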
The CHANGELOG_INSERT optimization also has implications for the downtime needed for in-place compaction. See In-Place Compaction for details.
Capturing Changes with UPDATABLE_INSERT¶
The UPDATABLE_INSERT extraction method is an optimization of the UPDATABLE method for use with insert-only data patching requirements on Hadoop backends. With this method, data to be added to the offloaded table in Hadoop is inserted into the updatable view in the hybrid schema, and the updatable trigger captures and stores the new records in the changelog table, ready for extraction to Hadoop.

The optimization with this extraction method is realized in the delta view. In this case, the delta view performs a lightweight UNION ALL operation between the delta table and the base table, unlike the more general extraction methods, which set up a more complex delta view that merges the delta table with the base table. Because the delta view does not merge the change records with the base data when using UPDATABLE_INSERT, the application table must have a primary key to prevent duplicate keys. Users can override this with the --incremental-no-verify-primary-key option, but in doing so must guarantee that the application does not insert duplicate data into the updatable view. It is recommended that this option is not used.

The UPDATABLE_INSERT optimization also has implications for the downtime needed for in-place compaction. See In-Place Compaction for details.
Example 4: Capturing Changes with UPDATABLE_INSERT¶
In the following example, suppose that most of the partitioned SH.SALES table has been offloaded to Hadoop and the offloaded partitions have been dropped from the RDBMS. It is later discovered that one of the offloaded partitions has missing sales information that needs to be added. The partition is no longer in the RDBMS and therefore needs to be added to Hadoop. The updatable view is used as the target for this data:
SQL> -- Add individual rows
SQL> INSERT INTO sh_h.sales_uv (...columns...) VALUES (...values...);
SQL> -- Add results of a query
SQL> INSERT INTO sh_h.sales_uv (...columns...) SELECT (...expressions...) FROM ...;
SQL> COMMIT;
The updatable trigger will fire on the inserts and the new rows will be stored in the changelog table in the RDBMS. The changes will not be visible in Hadoop until the next extraction takes place.
Important
Care must be taken when using the UPDATABLE extraction method on tables that are offloaded periodically with Partition-Based Offload, Subpartition-Based Offload or Predicate-Based Offload. In such cases, users must not use the updatable view to add data to partitions, subpartitions or predicates that have not yet been offloaded. Doing so will prevent the remainder of the partition, subpartition or predicate data from being offloaded at the next offload operation.
Note
The UPDATABLE_INSERT optimization has implications for the downtime needed for in-place compaction. See In-Place Compaction for details.
Extracting Changes to the Backend (Running an Extraction)¶
Changes that are captured in the RDBMS are not visible to queries until they have been extracted and transferred to the backend. While in most cases the capture of changes is automatic, the extraction must either be scheduled or run manually. It is recommended that extractions are scheduled to run frequently to prevent a large backlog of changes. Ultimately, this will depend on the rate of change in the RDBMS and the latency requirements of those parts of the application that use the offloaded data in the backend. Changes continue to be captured while an extraction process is running.
Example 5: Running an Extraction¶
Extractions are executed with an option to offload, as follows:
$ $OFFLOAD_HOME/bin/offload -t SH.PRODUCTS -x --incremental-run-extraction
This command will transfer the outstanding change records to the backend and delete them from the changelog table where applicable. For Hadoop backends, the extracted records are saved in a delta table. For a Google BigQuery, Snowflake or Azure Synapse Analytics backend, the extracted records are staged and merged directly to the base table.
Extraction and Concurrency¶
Extractions for more than one table can be run at the same time, but each table must only have one extraction process running at any time. A lightweight locking mechanism is used to ensure this, but the lock is node-dependent. This means that to prevent more than one extraction from being executed against the same table, all extractions should be executed from the same node of the Hadoop cluster, Gluent Node or RDBMS server (i.e. wherever Offload is installed and executed). This lock prevents a second extraction for a table from starting when one is already in progress (i.e. to protect against scheduling errors or aggressive scheduling).
Extractions can be executed while DML changes are happening. Any DML changes that are committed after an extraction process begins to extract change records will not be extracted at this time; they will be picked up by the next extraction process.
Merging Changes (Running a Compaction)¶
Changes that are extracted to Hadoop backends and stored in the delta table need to be periodically merged with the base table data in a process known as compaction. Until a compaction occurs, the delta table and base table data is merged “on-the-fly” by the delta view; the performance of which degrades over time (i.e. as changes from multiple extractions start to build up). During compaction, a new base table or set of partitions in the base table (depending on the chosen compaction method) are created from the delta view and the previous state is discarded. Compaction is the most complex part of the Incremental Update process, in terms of scheduling (see Scheduling a Compaction) and compaction methods (see Choosing a Compaction Method).
Compaction is required to reduce the impact of having to merge the delta table and base table in the delta view to present the latest state of the data to hybrid queries. As this merge is essentially a join between the two tables, it follows that the higher the volume of changes stored in the delta table, the more CPU and I/O resources are needed to merge the data and this is a cost borne by every query that uses the delta view. When the delta volume is relatively low, it is more likely that the join will involve broadcasting the delta set (an efficient mechanism when joining small tables with large tables in Hadoop). As the delta volume increases, it can cross the Hadoop system’s particular threshold for turning broadcast joins into less-efficient alternatives and the performance of the delta view can degrade.
Scheduling a Compaction¶
Compactions do not need to be scheduled as frequently as extractions. Extractions can run as frequently as required, but compactions should be scheduled to run infrequently (depending on the rate of change) and during quieter system periods or maintenance windows. In this case “infrequently” can be taken to mean a short while before the delta table grows to a size that demonstrates a degradation in performance of the delta view.
By way of an example, consider a dimension table in an RDBMS data warehouse that is fully offloaded to Hadoop with Incremental Update enabled. The dimension is updated throughout each online day, changing approximately 1% of the table's rows. The Hadoop latency requirements are to have the changes replicated no more than 5 minutes later than the RDBMS. In this example, a suggested approach would be to use the CHANGELOG extraction method for capturing changes; to schedule an extraction to run every 1-2 minutes with an appropriate delta threshold; and to schedule a compaction to run once per week during a quiet system period or maintenance window.
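A hypothetical cron schedule for this scenario might look like the following. The paths, schedule and table name are illustrative only; note that the frequent extraction jobs also provide the change ingestion diversion acknowledgment needed by a multi-session compaction (described below).

# Extract changes every 2 minutes
*/2 * * * * . /u01/app/gluent/offload/conf/offload.env && $OFFLOAD_HOME/bin/offload -t SH.PRODUCTS -x --incremental-run-extraction
# Compact once per week during a quiet period (Sunday, 02:00)
0 2 * * 0 . /u01/app/gluent/offload/conf/offload.env && $OFFLOAD_HOME/bin/offload -t SH.PRODUCTS -x --incremental-run-compaction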
There are three compaction methods available, and the choice of method determines the potential system and application impact of the compaction process (see Choosing a Compaction Method).
Acknowledging Change Ingestion Diversion¶
Each compaction has many steps to perform and many metadata operations to record. Most of these steps are non-invasive but there is a point beyond which compaction will not proceed until it knows it is safe to do so. This tipping point is the acknowledgment of change ingestion diversion. To understand why this is necessary requires a short explanation of the structure of the delta table and the interaction between extractions and compactions.
The delta table is partitioned into “control partitions”, also known as “change ingestion locations”. Whenever an extraction runs, it checks to see if a compaction is running and if not, the change records continue to be written to the active change ingestion location (i.e. the active control partition). Multiple extractions can write to the same change ingestion location over time. However, if a compaction is underway when an extraction runs, the extraction process creates a new control partition in the delta table and makes this the active change ingestion location for the change records. The previous control partition will not receive any more change records.
When a compaction runs, it performs several tasks before reaching the step where it requires an acknowledgment that change ingestion diversion (i.e. the creation of a new control partition by an extraction process) has taken place. Depending on how compaction is executed (examples follow directly), it will wait on this step until another extraction is executed to perform the change ingestion diversion as described. Once change ingestion diversion is acknowledged, compaction can continue with the remainder of the steps required to compact the data in the static control partition with the base table. This process ensures that no new changes are lost due to extractions running while compaction takes place.
There are two ways to schedule a compaction to manage this acknowledgment step:

Single-Session Compaction
Multi-Session Compaction
Single-Session Compaction¶
To schedule a single-session compaction process, a sequence of offload commands is used to:
Run a compaction with an option to terminate when it reaches the change ingestion diversion acknowledgment step
Run an extraction to create a new change ingestion location (i.e. new control partition)
Run a second compaction to pick up from where the first compaction terminated and complete the process
Example 6 below demonstrates the Incremental Update options required to make this happen.
Example 6: Single-Session Compaction¶
# Start a compaction
$ $OFFLOAD_HOME/bin/offload -t SH.SALES -x --incremental-run-compaction --incremental-terminate-compaction
# Run an extraction
$ $OFFLOAD_HOME/bin/offload -t SH.SALES -x --incremental-run-extraction
# Run a second compaction
$ $OFFLOAD_HOME/bin/offload -t SH.SALES -x --incremental-run-compaction
Note
The --incremental-terminate-compaction option tells the compaction process to terminate at the acknowledgment stage. It will exit cleanly with a success status code of zero.
Multi-Session Compaction¶
A multi-session compaction assumes that extractions are already scheduled to run frequently. In this case, only a single offload command is required, as shown in Example 7 below. The compaction will reach the change ingestion diversion acknowledgment step and wait until the next extraction executes from another session. The offload log file for the compaction (and stdout when using the -v option) will show that the process is waiting. Once the extraction is done and change diversion is acknowledged, the compaction process continues to completion.
Example 7: Multi-Session Compaction¶
$ $OFFLOAD_HOME/bin/offload -t SH.SALES -x --incremental-run-compaction
Compaction and Idempotency¶
Despite having many component steps, each step in the compaction process is idempotent. A compaction process can therefore be restarted after any point of termination, irrespective of the step at which the termination occurred and whether it was due to failure or a controlled termination by a single-session compaction.
Waiting for Queries to Complete¶
To minimize its impact on users and applications, the compaction process is designed to be as online an activity as possible. In most cases, the disruptive part of a compaction involves switching the delta views to a new target. These switches are quick and lightweight metadata operations (see In-Place Compaction and Insert Extraction Methods for an important exception). A compaction will wait for a specified period of time for user queries to complete before performing any disruptive operations. By default, this is 15 minutes, and it can be overridden with the --incremental-wait-time option as shown in Example 8 below.
Example 8: Waiting for User Queries During Compaction¶
In the following example, the query wait time is set to 30 minutes.
$ $OFFLOAD_HOME/bin/offload -t SH.SALES -x --incremental-run-compaction --incremental-wait-time=30
The compaction process itself does not detect any running queries, so the query wait time should be set to a large enough value to comfortably allow application queries to complete. However, it might not always be possible to prevent queries from running when the disruptive compaction steps occur. Setting --incremental-wait-time to 0 will prevent the compaction process from waiting on user queries.
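For example, to start the compaction without waiting for queries (only appropriate when it is known that no queries will be running):

$ $OFFLOAD_HOME/bin/offload -t SH.SALES -x --incremental-run-compaction --incremental-wait-time=0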
Delta Thresholds¶
The compaction process is governed by a minimum threshold that determines how many change records must exist before a compaction will begin. If the number of change records in the delta table is below the threshold, the compaction process will terminate immediately (with success, but without doing any work). Specifying a suitable threshold prevents unnecessary compactions; the default can be overridden with the --incremental-delta-threshold option. The following example demonstrates how to use this option to run a compaction only if at least 500 changes exist in the delta table.
Example 9: Running a Compaction with Non-Default Delta Threshold¶
$ $OFFLOAD_HOME/bin/offload -t SH.SALES -x --incremental-run-compaction --incremental-delta-threshold=500
Compaction and Concurrency¶
More than one table can be compacted at the same time, but each table must only have one compaction process running at any time. A lightweight locking mechanism is used to ensure this, but the lock is node-dependent. This means that to prevent more than one compaction from accidentally being executed against the same table, all compactions should be executed from the same node of the Hadoop cluster, Gluent Node or RDBMS server (i.e. wherever Offload is installed and executed). This lock prevents a second compaction for a table from starting when one is already in progress (i.e. to protect against scheduling errors or aggressive scheduling).
Choosing a Compaction Method¶
Examples 7, 8 and 9 each demonstrate a command to run a compaction process for a table using the default compaction method. There are three compaction methods to choose from:

In-Place Compaction
Full Compaction
Partition-Wise Full Compaction
Choosing a compaction method is based on a number of factors, such as how busy the system is, SLAs, whether compaction needs to be run online or can be an offline operation and so on.
In-Place Compaction¶
In-place compaction is the default compaction method (used by examples 7, 8 and 9 above). An in-place compaction processes only those partitions that have records with changes in the delta table. Each partition with change records is processed sequentially and the original partition in the base table is replaced with a new compacted set of data. This is a two-step process and for each affected partition:
A compacted copy of the changed partition data is created in the load database
The compacted partition data is written to the base table with an INSERT OVERWRITE statement as an atomic operation
When all partitions have been compacted, new delta views are created to reference the new control partition in the delta table and the previous control partition is removed.
This method is recommended when the number of changed partitions is low or when the base table is too large to create a full copy in a reasonable amount of time or within the resource constraints of the system. In most cases, the in-place compaction method will be quicker than the alternatives and will use fewer system resources to complete.
In-Place Compaction and Insert Extraction Methods¶
Special attention must be given when using in-place compaction for a table that has Incremental Update enabled with either of the CHANGELOG_INSERT or UPDATABLE_INSERT extraction methods. With these methods, the delta view is non-merging and simply appends the delta table change records to the base table with UNION ALL. This means that for the duration of an in-place compaction, the delta view results will be inconsistent (as each partition is processed, the change records will be duplicated until compaction completes). For this reason, in-place compaction with either of the *_INSERT extraction methods should be considered an offline activity and queries of the delta view should be avoided while compaction is running.
Full Compaction¶
With the full compaction method, a new base table is created as a full copy of the delta view results (i.e. using Create Table As Select) and a new internal delta view is created to reference the new base table and the active control partition in the delta table. The final steps are to divert the user-facing delta view to the new objects and remove the previous control partition and base table, minimizing the disruption to the application. A full compaction is enabled by appending the --incremental-full-compaction option to the offload compaction command.

A full compaction is recommended when a significant number or proportion of partitions in the base table have corresponding changes in the delta table and the base table itself can be replicated comfortably within the CPU and storage resources of the Hadoop cluster in a single CTAS operation. It is also recommended as an alternative to an in-place compaction when using either of the *_INSERT extraction methods and a minimal change window is available (see In-Place Compaction and Insert Extraction Methods for details).
This method is not recommended when creating a full copy of the base table would impact other users of the system or risk over-stretching the available resources. If this is an issue, a Partition-Wise Full Compaction would be a better method to use.
Partition-Wise Full Compaction¶
Like a full compaction, a partition-wise full compaction creates a new base table from the delta view, but does so one partition at a time. Being a full-table compaction, this method requires sufficient free storage to temporarily maintain two base table copies, but it has less impact on system resources such as CPU and hence less impact on other users of the cluster. However, a partition-wise full compaction will take longer than a simple full compaction. A partition-wise full compaction is selected by appending the --incremental-partition-wise-full-compaction option to the offload compaction command.
A partition-wise full compaction is recommended when a full compaction is a better option than in-place compaction, but would be too disruptive to other users of the system during the CTAS phase (i.e. when creating a new base table from the delta view).
Saving Pre-Compaction State¶
Compactions are destructive operations. With either of the full compaction methods, the previous version of the base table is dropped, the previous control partition in the delta table is dropped, the previous internal delta view is dropped and the user-facing delta view is redirected to the new compacted objects. With an in-place compaction, previous versions of partitions are overwritten with new compacted data and the previous control partition in the delta table is dropped.
There are several mechanisms available to save the pre-compaction state of a table:

Backups
HDFS snapshots
HDFS trash
Compaction options
Saving Pre-Compaction State with Backups¶
All production systems should have automated backup solutions in place. Pre-compaction state should automatically be captured by whatever backup solution is in use for the Hadoop cluster.
Note
If a table is offloaded to cloud storage (a storage option provided by offload), rather than HDFS, it is essential that timely backups are available prior to running compactions if the previous state of the table is to be retained for any reason. The HDFS mechanisms described below do not apply to cloud storage.
Saving Pre-Compaction State with HDFS Snapshots¶
HDFS snapshots can be integrated with offload to automatically create a snapshot at the start of a compaction process. To enable HDFS snapshots in this way, they must be configured in the offload.env configuration file by setting the HDFS_SNAPSHOT_PATH variable (and HDFS_SNAPSHOT_SUDO_COMMAND if applicable). The HDFS_SNAPSHOT_PATH location must be a snapshottable directory and snapshots will need to be enabled by an HDFS administrator. When changing the value of HDFS_SNAPSHOT_PATH from its default, the new value must remain a parent directory of HDFS_DATA.
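For example, the relevant offload.env settings might look like the following. The paths and sudo command are illustrative assumptions only; the key point is that HDFS_SNAPSHOT_PATH remains a (snapshottable) parent directory of HDFS_DATA.

# Illustrative values only
export HDFS_DATA=/user/gluent/offload
export HDFS_SNAPSHOT_PATH=/user/gluent
# export HDFS_SNAPSHOT_SUDO_COMMAND="sudo -u hdfs"   # only if snapshots must be created via sudo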
If for any reason HDFS snapshots are configured but unable to be taken, the compaction process will fail with an exception. To continue with the compaction without attempting to take an HDFS snapshot, the --incremental-run-compaction-without-snapshot option can be appended to the offload command being used.
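For example:

$ $OFFLOAD_HOME/bin/offload -t SH.SALES -x --incremental-run-compaction --incremental-run-compaction-without-snapshot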
When the HDFS_SNAPSHOT_PATH variable is unset, HDFS snapshots are disabled and will not be taken.
Note
This option does not apply to tables that have been offloaded to cloud storage.
Saving Pre-Compaction State with HDFS Trash¶
When HDFS trash is enabled for a cluster, the pre-compaction version of files for an offloaded table will be saved to trash by each destructive operation performed by a compaction. This includes the dropping of the previous base table by full or partition-wise full compactions, the dropped control partition in the delta table and even the previous versions of partitions replaced by INSERT OVERWRITE operations during an in-place compaction. Previous versions of views will not, however, be put into HDFS trash.
Note
This mechanism does not apply to tables that have been offloaded to cloud storage.
Saving Pre-Compaction State with Compaction Options¶
When running a compaction, all prior versions of database objects can be retained by appending the --incremental-retain-obsolete-objects option to the offload command. For both of the full compaction methods, this means that the previous base table is retained, and for in-place compactions, previous partitions are copied to the load database before being overwritten with new compacted data. Regardless of compaction method, the previous control partition in the delta table is preserved and prior versions of views are retained.

None of these retained objects are used or referenced by Gluent Query Engine when querying the new version of the compacted offloaded table, but it does mean that a previous state of the data is available for ad hoc queries by Hadoop database users, or that the objects can be used as a temporary backup for a table undergoing compaction.
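For example, to run a compaction while retaining the pre-compaction objects:

$ $OFFLOAD_HOME/bin/offload -t SH.SALES -x --incremental-run-compaction --incremental-retain-obsolete-objects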
Warning
When using the --incremental-retain-obsolete-objects option, Gluent Offload Engine no longer manages the previous artifacts or their associated storage, and any links between the offloaded table and the prior database artifacts are removed. This means that users must manage the removal of these obsolete objects when required. In some circumstances, retaining obsolete objects can cause the re-offloading of entire tables to fail (i.e. when an offloaded table is completely re-offloaded with the --reset-backend-table option), although resetting a table in this way should be a rare occurrence.
Disabling Incremental Update¶
Incremental Update can be disabled for a table with the offload command, as shown in Example 10 below.
Example 10: Disable Incremental Update¶
$ $OFFLOAD_HOME/bin/offload -t SH.PRODUCTS -x --incremental-updates-disabled
Incremental Update can only be disabled if there are no changes to be applied (i.e. no change records in the delta table or changelog table, where applicable). If there are outstanding changes to be applied, they must either be extracted (on BigQuery, Snowflake or Synapse backends) or extracted and compacted (on Hadoop backends) or manually discarded (if they are not required).
When Incremental Update is disabled for Hadoop backends, all supporting RDBMS and Hadoop objects are dropped and the base table is renamed to match the original offloaded table, enabling queries to continue to work transparently.
When Incremental Update is disabled for a Google BigQuery, Snowflake or Azure Synapse Analytics backend, all supporting RDBMS objects are dropped and hybrid queries continue to work transparently.
Incremental Update Options¶
All Incremental Update options for the offload command are available in the Incremental Update Parameters section of the Reference documentation.
Using Incremental Update¶
This section provides some guidance on how to integrate Incremental Update with existing ETL and offload operations.
Using Incremental Update for Fully-Offloaded Tables¶
For any of the following types of fully-offloaded table, Incremental Update can be used when the RDBMS table is subject to further DML changes:
Heap table
HASH/REFERENCE/SYSTEM partitioned table
LIST/RANGE/INTERVAL partitioned table where no new partitions are to be added (the partition structure is static)
Refer to Table 1: Incremental Update Extraction Methods for details of each of the extraction methods; in most cases, CHANGELOG will suffice.
Using Incremental Update with Partially-Offloaded Tables¶
Using Incremental Update for tables offloaded with Partition-Based Offload, Subpartition-Based Offload or Predicate-Based Offload is more restrictive and requires greater consideration than for fully-offloaded tables. For example, the following considerations help to determine how, when or whether Incremental Update can be used:
How new (sub)partitions are created (e.g. added, split, intervals)
How tables or (sub)partitions are populated (e.g. inserts, bulk loaders, (sub)partition exchange loading)
Volume of data being loaded into each new (sub)partition or table (low/high relative to system capabilities)
How and when data is modified (e.g. before offloading, after offloading)
For performance reasons, it is important to retain Partition-Based Offload, Subpartition-Based Offload or Predicate-Based Offload wherever possible, as these are the high-volume, high-throughput offloading methods that best support large (sub)partitioned or heap tables. However, there are certain scenarios that require offloaded data to be updated and Incremental Update can be used in most cases. Table 3 below summarizes the various loading scenarios for such tables and suggests when Incremental Update might be useful.
Table 3: Partially-Offloaded Table Scenarios¶
| ETL Method | Data Updates | Offload Type 10 | Offload Strategy (Main) | Offload Strategy (Updates) | Extraction Methods 11 |
|---|---|---|---|---|---|
| Insert / Bulk Load | Before offload | Any | Partition-Based Offload | N/A | N/A |
| Insert / Bulk Load | After offload | Any 12 | Partition-Based Offload | Incremental Update | |
| Exchange Partition | Before offload | Any | Partition-Based Offload | N/A | N/A |
| Exchange Partition | After offload | | Partition-Based Offload | Incremental Update | |
| Exchange Partition | After offload | | Partition-Based Offload | Incremental Update | Any |
- 10 Offload type refers to the percentage of a table's data that is offloaded. There are two offload types (INCREMENTAL and FULL) that can be used to support three offload use-cases, known as 90/10 (INCREMENTAL), 100/0 (FULL) and 100/10 (FULL). See the Glossary and Offload Type for more details.
- 11 See Table 1: Incremental Update Extraction Methods for details of each of the extraction methods and which ones are suited to the range of DML operations that are expected.
- 12 When a (sub)partitioned table is offloaded with the FULL offload type, it is possible to use extraction methods such as CHANGELOG and ORA_ROWSCN, but it is not recommended. Using either of these extraction methods will track all DML into the (sub)partitioned table, including the initial bulk loads. Incremental Update is not designed with large (sub)partitioned tables in mind and does not provide the level of performance required to offload large (sub)partitions of data. Instead, the initial offload should be performed with Partition-Based Offload, Subpartition-Based Offload or Predicate-Based Offload (as applicable) and any later DML replayed using a suitable UPDATABLE* data patching method.
Important
When adopting an offload strategy that combines partial offloading methods with Incremental Update, it is essential that (sub)partitions or predicate data are offloaded before any changes are captured and extracted by Incremental Update. If DML changes are captured and extracted before the (sub)partitions or predicate data are offloaded, this will prevent Partition-Based Offload, Subpartition-Based Offload or Predicate-Based Offload from offloading those (sub)partitions or predicates. The application must guarantee not to capture updates to (sub)partitions or predicate data until after their initial offload. For this reason, it is almost always safest to use an UPDATABLE* extraction method to provide greater control over the timing of extractions in relation to partially-offloaded tables.
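As an illustrative sequence only (the --older-than-date option and boundary value are assumptions for a range-partitioned SH.SALES example), the safe ordering is to offload new partitions first and only then capture and extract any patches through the updatable view:

# 1. Offload the latest partitions with Partition-Based Offload
$ $OFFLOAD_HOME/bin/offload -t SH.SALES -x --older-than-date=2021-07-01

# 2. Patch already-offloaded data with SQL against the updatable view (e.g. SH_H.SALES_UV), then...

# 3. Extract the captured changes to the backend
$ $OFFLOAD_HOME/bin/offload -t SH.SALES -x --incremental-run-extraction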
Incremental Update Status¶
Incremental Update creates several database objects and has several phases of operations, making it more complex than a regular offload. Gluent Data Platform provides two additional utilities for displaying the Incremental Update status of an offloaded table and listing the supporting database objects:
display_incremental_status (see Displaying Incremental Update Status)
offload_status_report (see Offload Status Reporting)
Displaying Incremental Update Status¶
The display_incremental_status utility describes the latest Incremental Update status for a table, including compaction state, change ingestion location (control partition) and supporting objects. Example 11 below demonstrates how to use this utility and provides a sample of its output.
Example 11: Displaying Incremental Update Status¶
$ $OFFLOAD_HOME/bin/display_incremental_status -t SH.PRODUCTS
Connecting to Oracle as gluent_adm (localhost:1521/ORA11204)
Connecting to HiveServer2 (localhost:21050) - Kerberos service impala
HDFS operations will use WebHDFS (localhost:50070) using Kerberos
=============================
| CHANGE EXTRACTION STATUS: |
=============================
Max change SCN extracted: 50196699
Most recent extraction time: 2021-06-24 12:01:23
Extraction method: CHANGELOG
============================
| CHANGE INGESTION STATUS: |
============================
Current requested change ingestion location: /user/gluent/offload/sh_load.db/products_changes/gl_control_partition=0
As of: 2021-06-24 11:21:01
Current acknowledged change ingestion location: /user/gluent/offload/sh_load.db/products_changes/gl_control_partition=0
As of: 2021-06-24 11:21:01
======================
| COMPACTION STATUS: |
======================
(There is currently no ongoing compaction)
Last full compaction run completed at: (No compactions have been completed so far)
=====================
| MISC INFORMATION: |
=====================
Base table: sh.products_data
Id columns: PROD_ID
Merge view: sh.products_internal_user_view_0
User view: sh.products
Delta table: sh_load.products_changes
========================
| OFFLOAD INFORMATION: |
========================
Hybrid view: SH_H.PRODUCTS
RDBMS table: SH.PRODUCTS
Impala target: sh.products
Offload type: FULL
This example shows the latest extraction time and ingestion location (control partition in the delta table) for changes, but no compactions have taken place for this table yet.
Offload Status Reporting¶
In addition to display_incremental_status, Gluent Data Platform provides the more general Offload Status Report. This reports a wide range of information on offloaded tables at database, schema or table level, including the RDBMS and backend objects (if applicable) that are created for Incremental Update. Example 12 below demonstrates how to use this report and provides a relevant sample of its output for a Hadoop backend.
Example 12: Running an Offload Status Report¶
$ $OFFLOAD_HOME/bin/offload_status_report --schema=SH --table=PRODUCTS --output-level=detail
Offload Status Report v4.2.0 (6b90d09)
Log file: /u01/app/gluent/offload/log/offload_status_report_20210624_151329_075674.log
Find Tables For Report
Done
Fetch Rdbms And Offload Data
Done
Process Report Data
Done
Generate Report
Report saved to /u01/app/gluent/offload/log/Gluent_Offload_Status_Report_ORA11204_2021-06-24_15-13-29.txt
Done
The Offload Status Report saves a text report by default, as reported by the utility. A snippet of the report, relevant to Incremental Update, is as follows:
Incremental Update Objects (RDBMS)
----------------------------------
Incremental Update Object Type Object Type Object Name
------------------------------ ------------------------ ------------------------------------------------------------
Changelog trigger TRIGGER SH_H.PRODUCTS
Changelog table TABLE SH_H.PRODUCTS_LOG
Changelog sequence SEQUENCE SH_H.PRODUCTS_LOG_SEQ
Incremental Update Objects (Offloaded)
--------------------------------------
Incremental Update Object Type Object Type Object Name
------------------------------ ------------------------ ------------------------------------------------------------
Delta view VIEW SH.PRODUCTS
Delta table TABLE SH_LOAD.PRODUCTS_CHANGES
Base table TABLE SH.PRODUCTS_DATA
Internal delta view VIEW SH.PRODUCTS_INTERNAL_USER_VIEW_0
See also
The Offload Status Report can also produce rich HTML reports, in which the Incremental Update objects are displayed in a dedicated tab of a pop-up describe window for each table in the report. See Offload Status Report for more information.