Hi,
The gz files for all tables except requests change daily, so we sync and unpack them all. The ODI mappings are all truncate/insert. I'm not aware of any way to segregate incremental data; I believe the whole data set for these tables is repackaged into new gz files each day. Having said that, the data set is not large, so we haven't needed to look into it. A number of fields contain data larger than the 4,000-byte limit of the VARCHAR2 datatype; these need to be converted to CLOBs. We have also found that the data contains multibyte characters, so we use VARCHAR2(4000 CHAR) for most columns and a Unicode character set for the database.
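If it helps, the daily full-refresh cycle can be sketched roughly like this. The directory names are placeholders, not our actual paths, and the sync step itself is omitted:

```shell
#!/bin/sh
# Hypothetical sketch of the daily full-refresh cycle.
# Directory names are placeholders, not our real setup.
DOWNLOAD_DIR=/tmp/gr_downloads   # where the synced .gz files land
STAGE_DIR=/tmp/gr_stage          # unpacked files for the ODI loads

mkdir -p "$DOWNLOAD_DIR" "$STAGE_DIR"

# Unpack every table's gz file; the ODI truncate/insert mappings
# then reload each target table in full from the unpacked file.
for gz in "$DOWNLOAD_DIR"/*.gz; do
    [ -e "$gz" ] || continue                  # skip if no files yet
    gunzip -c "$gz" > "$STAGE_DIR/$(basename "$gz" .gz)"
done
```

Because the mappings are truncate/insert, there is no state to carry between runs: each day's unpack simply replaces the previous day's staged files.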
The requests gz files are different. New files are added to the data set each day, and the previous files remain unchanged, so the timestamps on the files in the download target directory tell you when each file was downloaded. By moving the files that have already been uploaded to the database to a temporary location, we can unpack just the most recent files to create a requests.txt file containing only incremental data. The temporary files then have to be moved back so that the next sync operation doesn't find them missing and download them again. Every couple of months there is a historical dump of requests data, which replaces the gz files since the previous historical dump with a smaller number of larger files. This is tricky to manage, but let me know when you are ready to use the requests data and I can explain how we handle it.
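In shell, that park/unpack/restore shuffle might look something like the following. The directory names and the marker-file bookkeeping are illustrative assumptions, not our exact mechanism:

```shell
#!/bin/sh
# Hypothetical sketch of the incremental requests handling.
# Paths and the marker-file approach are assumptions.
REQ_DIR=/tmp/gr_requests        # synced requests .gz files
HOLD_DIR=/tmp/gr_requests_hold  # temporary parking for loaded files
OUT=/tmp/gr_stage_req/requests.txt  # incremental extract for the ODI load
MARK="$REQ_DIR/.last_load"      # touched after each successful load

mkdir -p "$REQ_DIR" "$HOLD_DIR" "$(dirname "$OUT")"

# 1. Park the files already loaded on previous runs (anything not
#    newer than the marker) in a temporary location.
if [ -f "$MARK" ]; then
    find "$REQ_DIR" -name '*.gz' ! -newer "$MARK" \
        -exec mv {} "$HOLD_DIR"/ \;
fi

# 2. Unpack only the remaining (new) files into one incremental file.
: > "$OUT"
for gz in "$REQ_DIR"/*.gz; do
    [ -e "$gz" ] || continue
    gunzip -c "$gz" >> "$OUT"
done

# 3. Record this load, then move the parked files back so the next
#    sync doesn't see them as missing and re-download them.
touch "$MARK"
mv "$HOLD_DIR"/*.gz "$REQ_DIR"/ 2>/dev/null || true
```

The restore in step 3 is the important part: the sync compares the local directory against the source, so any file left parked would be downloaded again on the next run.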
Regards,
Stuart.