Pipeline with S3: how do you find out why batches are failing?
I created the pipeline without any problems, and a test run of the pipeline succeeds as well.
But when I actually start the pipeline, all I get is failed batches and no data is inserted.
The only error message I have been able to find came from running PROFILE PIPELINE, which returned:
ERROR 1712 ER_MEMSQL_OOM: Leaf Error (xxx.xxx.xxx.xxx:3306): Memory used by MemSQL (6989.25 Mb) has reached the 'maximum_memory' setting (7189 Mb) on this node. Possible causes include (1) available query execution memory has been used up for table memory (in use table memory: 4208.50 Mb) and (2) the query is large and complex and requires more query execution memory than is available (in use query execution memory 0.00 Mb). See https://docs.memsql.com/troubleshooting/latest/memory-errors for additional information.
Does this mean I need more RAM on the servers? I know I am working with some large files, but this is just for testing, so I followed the setup guide: it called for four servers with 8 GB of RAM each, and that is what I deployed. If I need to spin up a bigger server for testing I can, but I want to confirm that memory really is the problem, and what size of server I would need, before I do.
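Also, is something like this the right way to pull per-batch error details? I'm going from memory on the information_schema.PIPELINES_ERRORS table name here, so treat this as a sketch rather than something I know is correct:

```sql
-- Sketch: list recent errors for one pipeline
-- (PIPELINES_ERRORS and the PIPELINE_NAME column are my best guess;
--  'my_pipeline' is a placeholder for the actual pipeline name)
SELECT *
FROM information_schema.PIPELINES_ERRORS
WHERE PIPELINE_NAME = 'my_pipeline';
```

So far PROFILE PIPELINE is the only thing that has given me any message at all, which is why I'm asking where else to look.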