Thank you for the quick response! I tried that yesterday, and the issue seems to be with what I am seeing in "information_schema.pipeline_files."
When I write my data to S3, I am overwriting the existing files, with file names similar to:
and so forth.
When I truncate the table, those files are still shown as "loaded."
Is there a way to get the pipeline to grab the files and overwrite the table?
Do I need to delete the rows in information_schema.pipeline_files for this pipeline, or can I somehow remove those files from that table so the pipeline will accept the same file names again?
I think it is important for me to state my goal: I want to overwrite the same S3 files with updated data every week. When that S3 bucket receives the new data, I want the pipeline to grab the updated files and overwrite the data in the table.
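For context, the weekly workflow I am imagining looks something like this. The pipeline, table, and file names here are placeholders, and I am not sure whether dropping the file from the pipeline's metadata is the right way to reset the "loaded" state:

```sql
-- Placeholder names: my_pipeline, my_table, and 'weekly_export.csv'
-- are illustrative, not my real objects.

-- Step 1: clear out last week's rows.
TRUNCATE TABLE my_table;

-- Step 2 (my guess): make the pipeline forget it already loaded this file,
-- so the same file name can be ingested again after I overwrite it in S3.
ALTER PIPELINE my_pipeline DROP FILE 'weekly_export.csv';

-- Step 3: start the pipeline so it picks up the overwritten file.
START PIPELINE my_pipeline;
```

Is this roughly the intended approach, or is there a cleaner way to make a pipeline re-ingest files with unchanged names?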