How to unload data from MemSQL to S3

We use MemSQL to store our analytics data, and all of our processed data will live in the MemSQL database. Could you guide us on how to unload MemSQL table data into Amazon S3 in CSV or Parquet file format?

Please check out: https://docs.memsql.com/v7.0/reference/sql-reference/data-manipulation-language-dml/select/#select-into-s3

Thank you very much for your prompt response. How can we unload data in Parquet format from MemSQL to S3?

For now I recommend either outputting to Kafka and running the conversion there, or running a periodic batch job that picks up the CSV files from the S3 bucket and converts them to Parquet using something like Spark.
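The periodic-batch route could look roughly like the PySpark sketch below. The bucket name, prefixes, and the assumption that the files have no header row are all placeholders, not anything MemSQL prescribes; adapt them to your environment.

```python
# Sketch of a periodic batch job: read the CSV files that MemSQL's
# SELECT ... INTO S3 produced and rewrite them as Parquet with Spark.
# All names here (bucket, prefixes) are hypothetical examples.

def parquet_key(csv_key: str) -> str:
    """Map a CSV object key to the corresponding Parquet output key."""
    base = csv_key.rsplit(".", 1)[0] if "." in csv_key else csv_key
    return base + ".parquet"

def convert_csv_to_parquet(bucket: str, csv_prefix: str, parquet_prefix: str) -> None:
    # Imported lazily so the path helper above works without Spark installed.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("memsql-csv-to-parquet").getOrCreate()
    df = (spark.read
          .option("header", "false")  # assuming the unloaded CSV has no header row
          .csv(f"s3a://{bucket}/{csv_prefix}"))
    df.write.mode("overwrite").parquet(f"s3a://{bucket}/{parquet_prefix}")
    spark.stop()

# Example (hypothetical names):
#   convert_csv_to_parquet("my-analytics-bucket", "testing/output", "testing/parquet/")
```

You would schedule this with whatever you already use for batch jobs (cron, Airflow, etc.) and submit it via `spark-submit` on a cluster configured with S3 credentials.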

We are looking into supporting Parquet out of the box.

How can we unload data in CSV/text format from MemSQL to S3?

Search for "S3" on that page.

Hi Nikita, thanks for the response. Yes, I am referring to the syntax from the page you mentioned:
SELECT *
FROM t1
INTO S3 'testing/output'
CONFIG '{"region":"us-east-1"}'
CREDENTIALS '{"aws_access_key_id":"your_access_key_id","aws_secret_access_key":"your_secret_access_key"}'

In place of 'output' I am giving the file name with a .csv extension, but it still gets stored as binary/octet-stream in S3. Any comments on this?

Hi @tgarg971
Currently we always upload data to S3 using the content-type binary/octet-stream. Is there a specific use case for which you need the content type to be accurate to the file contents? Thanks!

Hi Carl,
There is no specific use case. I just want the data to be in a readable, downloadable format, hence I was wondering whether we can store it as .csv.

Totally understand. On the plus side, we are uploading the data in CSV format already. You can download and consume the data using any tool which supports reading CSV. Most tools will happily open the file if you download it with the extension .csv or .tsv.
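To illustrate the point above: the bytes in the object are already plain CSV, so once downloaded they parse with any CSV reader regardless of the content type S3 reports. A minimal Python sketch, with made-up sample data standing in for the downloaded object body:

```python
import csv
import io

# Hypothetical object body fetched from S3; the real data would come from
# your SELECT ... INTO S3 output. The content type on the object does not
# matter once you have the bytes -- they are ordinary CSV.
raw = b"1,alice,42\n2,bob,17\n"

rows = list(csv.reader(io.StringIO(raw.decode("utf-8"))))
print(rows)  # [['1', 'alice', '42'], ['2', 'bob', '17']]
```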