If you are syncing, one option is to delete the file after you import it and then create an empty file with the same name. The sync won't re-download that file, and at the very least your storage for the gz/csv files won't keep growing. On Linux, that means: unzip the gz, import the csv, delete both, and then "touch" a file with the same name as the gz. This tricks the sync into thinking you already have the file.
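Roughly, a sketch of that loop as a shell script (the file name and the import step are placeholders; swap in whatever load process you actually use):

    #!/bin/bash
    # Hypothetical example: process one synced Canvas Data file, then leave an empty stub behind
    GZ_FILE="requests-00001.gz"            # placeholder file name
    CSV_FILE="${GZ_FILE%.gz}.csv"

    gunzip -c "$GZ_FILE" > "$CSV_FILE"     # unzip the gz to a csv
    ./import_csv.sh "$CSV_FILE"            # placeholder for your own import/COPY step
    rm "$GZ_FILE" "$CSV_FILE"              # delete both files so storage doesn't grow
    touch "$GZ_FILE"                       # empty file with the original name, so the
                                           # sync thinks it already has this file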
Since this is your own Redshift instance, you can clean out data you don't want without worrying about it repopulating, so you won't have to store the stuff you don't need.
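For example, something like this (just a sketch; the cluster endpoint, user, database, and date cutoff are placeholders, and the requests table/timestamp column are my understanding of the Canvas Data schema) would prune old rows once and they stay gone unless you reload them:

    psql -h my-cluster.xxxxx.us-east-1.redshift.amazonaws.com -p 5439 -U myuser -d canvas \
         -c "DELETE FROM requests WHERE timestamp < '2016-01-01';"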
Those are the only things I've been able to do locally (before Redshift) to deal with the sync process. I eventually stopped using sync and only download what I want. I have never had the resources to manage the requests data myself, so that stays hosted by Instructure/Redshift. It took a long time to come up with a process that worked for me. You may have to think outside the box and write your own scripts if you want to use requests but don't want to store all the data.