When you submit a snapshot or incremental query, the DAP API assigns it a job ID. The job ID remains valid for the entire lifetime of the job. When the query has produced a result set, the result set is saved to an S3 bucket directory (prefix) specific to the job ID; this is the location that the pre-signed URLs you retrieve point to.
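As a minimal illustration of capturing the job ID when you submit a query yourself, the sketch below parses a query-submission response and pulls out the ID. The response shape and the `extract_job_id` helper are assumptions for illustration, not the official client's API:

```python
import json

def extract_job_id(response_body: str) -> str:
    """Pull the job ID out of a query-submission response.

    Assumes the server returns JSON with an "id" field, e.g.
    {"id": "a1b2c3", "status": "waiting"} (hypothetical shape).
    """
    payload = json.loads(response_body)
    return payload["id"]

# Hypothetical response body from a snapshot query submission.
body = '{"id": "a1b2c3", "status": "waiting"}'
job_id = extract_job_id(body)
print(f"job ID: {job_id}")  # record this for later support requests
```

Persisting the returned ID immediately (rather than only keeping it in memory) means you still have it if the download step fails later.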
If you share the job ID with the support team, the engineer on duty can inspect the files that were generated and saved to S3 and check whether they are intact. If the files are already malformed in S3, it is a server-side error, and the engineering team has work to do. If the files are intact in S3 but got corrupted during the download process, the problem is likely in the download script you use: it's a client-side error.
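You can narrow down the client-side case yourself before contacting support. Assuming the result files are gzip-compressed (a common output format; verify against your actual files), decompressing a downloaded file end-to-end is a quick integrity check, because gzip verifies a CRC over the whole stream. The `is_intact_gzip` helper here is an illustrative sketch, not part of any official tooling:

```python
import gzip
import tempfile
from pathlib import Path

def is_intact_gzip(path: Path) -> bool:
    """Return True if the file decompresses cleanly (CRC check passes)."""
    try:
        with gzip.open(path, "rb") as f:
            while f.read(1 << 20):  # stream in 1 MiB chunks
                pass
        return True
    except (OSError, EOFError):
        return False

# Demo: a valid gzip file passes, a truncated copy fails.
with tempfile.TemporaryDirectory() as tmp:
    good = Path(tmp) / "part-0.jsonl.gz"
    good.write_bytes(gzip.compress(b'{"id": 1}\n' * 1000))
    bad = Path(tmp) / "truncated.jsonl.gz"
    bad.write_bytes(good.read_bytes()[:-10])  # simulate a broken download
    ok_good = is_intact_gzip(good)
    ok_bad = is_intact_gzip(bad)
    print(ok_good, ok_bad)
```

If the downloaded file fails this check, re-download it before concluding anything; if a fresh download passes, the corruption happened on your side of the transfer.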
It's always a good idea to save the job ID. For example, the official DAP client library always logs the job ID (to stdout or to a file, depending on your configuration).
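If you use your own script rather than the official client, a similar habit is easy to reproduce with Python's standard `logging` module. This is a generic sketch, not the official client's logging setup; the logger name, format, and job ID are all illustrative:

```python
import logging
import sys
import tempfile
from pathlib import Path

def make_job_logger(log_file: Path) -> logging.Logger:
    """Create a logger that writes job IDs to both stdout and a file."""
    logger = logging.getLogger("dap.jobs")
    logger.setLevel(logging.INFO)
    logger.handlers.clear()  # avoid duplicate handlers on re-creation
    fmt = logging.Formatter("%(asctime)s %(message)s")
    for handler in (logging.StreamHandler(sys.stdout),
                    logging.FileHandler(log_file)):
        handler.setFormatter(fmt)
        logger.addHandler(handler)
    return logger

with tempfile.TemporaryDirectory() as tmp:
    log_path = Path(tmp) / "dap.log"
    logger = make_job_logger(log_path)
    logger.info("submitted query, job ID: %s", "a1b2c3")  # hypothetical ID
    logged = log_path.read_text()
```

With this in place, the job ID survives in the log file even if the terminal session is gone by the time you need to open a support request.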