Incremental Strategy#

class onetl.strategy.incremental_strategy.IncrementalStrategy(*, hwm: HWM | None = None, offset: Any = None)#

Incremental strategy for DB Reader/File Downloader.

Used for fetching only new rows/files from a source by filtering items not covered by the previous HWM value.

For DB Reader:

First incremental run is just the same as SnapshotStrategy:

SELECT id, data FROM mydata;

Then the max value of id column (e.g. 1000) will be saved as HWM to HWM Store.

Next incremental run will read only new data from the source:

SELECT id, data FROM mydata WHERE id > 1000; -- hwm value

Note that the resulting dataframe does not include the row with id=1000, because it was already read before.
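The exclusive comparison described above can be sketched in plain Python (an illustration only, not onetl internals):

```python
# First run: read everything, then remember the max id as the HWM.
rows = [{"id": i, "data": f"row-{i}"} for i in (998, 999, 1000)]
hwm = max(row["id"] for row in rows)  # 1000, saved to the HWM Store

# A few new rows appear in the source afterwards.
rows += [{"id": 1001, "data": "row-1001"}, {"id": 1002, "data": "row-1002"}]

# Next run: strictly greater-than, so the id=1000 row is not read again.
new_rows = [row for row in rows if row["id"] > hwm]
assert [row["id"] for row in new_rows] == [1001, 1002]
```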

Warning

If the code inside the context manager raises an exception, like:

with IncrementalStrategy():
    df = reader.run()  # something went wrong here
    writer.run(df)  # or here
    # or here...

then DBReader will NOT update the HWM in the HWM Store. This allows resuming the reading process from the last successful run.
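This resume behavior can be illustrated with a tiny model of an HWM Store (plain Python with hypothetical names, not the onetl API):

```python
# The HWM is persisted only if the whole block succeeds.
hwm_store = {"some_hwm_name": 1000}

def run_pipeline(batch, fail=False):
    hwm = hwm_store["some_hwm_name"]
    new_rows = [i for i in batch if i > hwm]
    if fail:
        raise RuntimeError("writer failed")  # HWM stays untouched
    hwm_store["some_hwm_name"] = max(new_rows, default=hwm)

try:
    run_pipeline([1001, 1002], fail=True)
except RuntimeError:
    pass
assert hwm_store["some_hwm_name"] == 1000  # unchanged: next run re-reads 1001, 1002

run_pipeline([1001, 1002])  # successful retry picks the same rows up again
assert hwm_store["some_hwm_name"] == 1002
```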

For File Downloader:

Behavior depends on hwm type.

hwm=FileListHWM(...):

First incremental run is just the same as SnapshotStrategy - all files are downloaded:

$ hdfs dfs -ls /path

/path/my/file1
/path/my/file2

# all files are downloaded
assert download_result == DownloadResult(
    successful=[
        "/path/my/file1",
        "/path/my/file2",
    ]
)

Then the downloaded files list is saved as FileListHWM object into HWM Store:

[
    "/path/my/file1",
    "/path/my/file2",
]

Next incremental run will download only new files from the source:

$ hdfs dfs -ls /path

/path/my/file1
/path/my/file2
/path/my/file3
# only files which are not in FileListHWM

assert download_result == DownloadResult(
    successful=[
        "/path/my/file3",
    ]
)

New files will be added to the FileListHWM and saved to HWM Store:

[
    "/path/my/file1",
    "/path/my/file2",
    "/path/my/file3",
]
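The file filtering is essentially a set difference; a plain-Python sketch (an illustration, not onetl internals):

```python
# FileListHWM contents after the first run.
hwm_files = {"/path/my/file1", "/path/my/file2"}

# Source listing on the next run.
remote_files = {"/path/my/file1", "/path/my/file2", "/path/my/file3"}

# Only files absent from the HWM are downloaded...
to_download = sorted(remote_files - hwm_files)
assert to_download == ["/path/my/file3"]

# ...and then appended to the HWM.
hwm_files |= set(to_download)
assert hwm_files == remote_files
```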

Warning

FileDownloader updates the HWM in the HWM Store at the end of the .run() call, NOT when exiting the strategy context. This is because:

  • FileDownloader does not raise exceptions if some file cannot be downloaded.

  • FileDownloader creates files on local filesystem, and file content may differ for different modes.

  • It can remove files from the source if delete_source is set to True.

Parameters:
offset : Any, default: None

If passed, the offset value will be used to read rows which appeared in the source after the previous read.

For example, previous incremental run returned rows:

898
899
900
1000

Current HWM value is 1000.

But since then a few more rows have appeared in the source:

898
899
900
901 # new
902 # new
...
999 # new
1000

and you need to read them too.

So you can set offset=100, and the next incremental run will generate a SQL query like:

SELECT id, data FROM public.mydata WHERE id > 900;
-- 900 = 1000 - 100 = hwm - offset

and return rows starting from 901 (not 900), including 1000 which was already captured by the HWM.

Warning

This can lead to reading duplicated values from the table. You probably need an additional deduplication step to handle them.
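The offset arithmetic and the deduplication step it may require can be sketched in plain Python (illustration only):

```python
hwm, offset = 1000, 100
already_read = {898, 899, 900, 1000}  # rows returned by the previous run

# 901..999 appeared in the source after the previous run.
source_ids = list(range(898, 1001))

threshold = hwm - offset  # 900
fetched = [i for i in source_ids if i > threshold]  # 901..1000, incl. duplicate 1000

# Deduplication step: drop rows already captured earlier.
deduplicated = [i for i in fetched if i not in already_read]
assert 1000 in fetched and 1000 not in deduplicated
assert deduplicated == list(range(901, 1000))  # 901..999
```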

Warning

Cannot be used with the File Downloader and hwm=FileListHWM(...).

Note

The offset value will be subtracted from the HWM, so it should have a matching type.

For example, for a TIMESTAMP column the offset type should be datetime.timedelta, not int.
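For example, subtracting a timedelta from a date-typed HWM value (plain Python):

```python
from datetime import date, timedelta

hwm = date(2021, 1, 10)     # stored HWM value
offset = timedelta(days=1)  # must be a timedelta, not an int

threshold = hwm - offset
assert threshold == date(2021, 1, 9)
# the generated query then compares against CAST('2021-01-09' AS DATE)
```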

Examples

Incremental run with DB Reader:

from onetl.connection import Hive, Postgres
from onetl.db import DBReader, DBWriter
from onetl.strategy import IncrementalStrategy

from pyspark.sql import SparkSession

maven_packages = Postgres.get_packages()
spark = (
    SparkSession.builder.appName("spark-app-name")
    .config("spark.jars.packages", ",".join(maven_packages))
    .getOrCreate()
)

postgres = Postgres(
    host="postgres.domain.com",
    user="myuser",
    password="*****",
    database="target_database",
    spark=spark,
)

reader = DBReader(
    connection=postgres,
    source="public.mydata",
    columns=["id", "data"],
    hwm=DBReader.AutoDetectHWM(name="some_hwm_name", expression="id"),
)

hive = Hive(cluster="my-cluster", spark=spark)
writer = DBWriter(connection=hive, target="newtable")

with IncrementalStrategy():
    df = reader.run()
    writer.run(df)
-- previous HWM value was 1000
-- DBReader will generate query like:

SELECT id, data
FROM public.mydata
WHERE id > 1000; -- from HWM (EXCLUDING first row)

Incremental run with DB Reader and offset:

...

with IncrementalStrategy(offset=100):
    df = reader.run()
    writer.run(df)
-- previous HWM value was 1000
-- DBReader will generate query like:

SELECT id, data
FROM public.mydata
WHERE id > 900; -- from HWM-offset (EXCLUDING first row)

hwm.expression can be a date or datetime, not only integer:

from datetime import timedelta

reader = DBReader(
    connection=postgres,
    source="public.mydata",
    columns=["business_dt", "data"],
    hwm=DBReader.AutoDetectHWM(name="some_hwm_name", expression="business_dt"),
)

with IncrementalStrategy(offset=timedelta(days=1)):
    df = reader.run()
    writer.run(df)
-- previous HWM value was '2021-01-10'
-- DBReader will generate query like:

SELECT business_dt, data
FROM public.mydata
WHERE business_dt > CAST('2021-01-09' AS DATE); -- from HWM-offset (EXCLUDING first row)

Incremental run with DB Reader and Kafka connection (by offset in topic - KeyValueHWM):

from onetl.connection import Kafka
from onetl.db import DBReader
from onetl.strategy import IncrementalStrategy

from pyspark.sql import SparkSession

maven_packages = Kafka.get_packages()
spark = (
    SparkSession.builder.appName("spark-app-name")
    .config("spark.jars.packages", ",".join(maven_packages))
    .getOrCreate()
)

kafka = Kafka(
    addresses=["mybroker:9092", "anotherbroker:9092"],
    cluster="my-cluster",
    spark=spark,
)

reader = DBReader(
    connection=kafka,
    source="topic_name",
    hwm=DBReader.AutoDetectHWM(name="some_hwm_name", expression="offset"),
)

with IncrementalStrategy():
    df = reader.run()

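Conceptually, a key-value HWM stores the last read offset per partition. A plain-Python sketch of that structure (the real KeyValueHWM class lives in etl_entities; this is only an illustration):

```python
# partition -> last read offset, as remembered after the previous run
hwm = {0: 100, 1: 250}

messages = [
    {"partition": 0, "offset": 100, "value": b"old"},  # already read
    {"partition": 0, "offset": 101, "value": b"new"},
    {"partition": 1, "offset": 251, "value": b"new"},
]

# Only offsets strictly above the stored value for that partition are consumed.
new_messages = [m for m in messages if m["offset"] > hwm.get(m["partition"], -1)]
assert [m["offset"] for m in new_messages] == [101, 251]
```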
Incremental run with File Downloader and hwm=FileListHWM(...):

from onetl.connection import SFTP
from onetl.file import FileDownloader
from onetl.strategy import IncrementalStrategy
from etl_entities.hwm import FileListHWM

sftp = SFTP(
    host="sftp.domain.com",
    user="user",
    password="*****",
)

downloader = FileDownloader(
    connection=sftp,
    source_path="/remote",
    local_path="/local",
    hwm=FileListHWM(name="some_hwm_name"),
)

with IncrementalStrategy():
    download_result = downloader.run()

# current run will download only files which were not downloaded in previous runs
__init__(**kwargs)#

Create a new model by parsing and validating input data from keyword arguments.

Raises ValidationError if the input data cannot be parsed to form a valid model.