@kyle_cole - to be clear, though, they would need to load all of the submissions data from CD2 first; it sounds to me like they are trying to reduce the amount of data they need to download and ingest from CD2.
@pray4 - there's not really a way to filter the data from CD2 before you download it. (You can run incremental queries to get changes since a point in time, but those incremental updates need to be applied to a full snapshot.)
However, I just loaded our submissions table (using the Python dap tool), which coincidentally also contains just over 36M records. The process took just under an hour, but it only needs to be run once. After you've fetched and ingested the initial snapshot, you can then fetch (much smaller) incremental updates on a regular basis.
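In case it helps, here's roughly what that snapshot-then-incremental workflow looks like from the command line with the dap tool. The table name, output directory, and timestamp below are just examples, and the exact flags may differ slightly by version, so check `dap --help` on your install:

    # assumes DAP_CLIENT_ID and DAP_CLIENT_SECRET are already set in your environment

    # one-time full snapshot of the submissions table
    dap snapshot --namespace canvas --table submissions --format jsonl --output-directory downloads

    # after that, pull only the changes since a given point in time (run on a schedule)
    dap incremental --namespace canvas --table submissions --since 2024-01-01T00:00:00+00:00 --format jsonl --output-directory downloads

If you're loading straight into a database rather than working with files, the tool's initdb/syncdb commands wrap this same pattern (initial snapshot, then periodic incremental syncs) for you.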
Hope this helps!
--Colin