Too many connections error

I launched an EC2 instance on AWS and set up a single-node MemSQL server using Docker on the instance, following this guide.

I created some tables and a query. The query worked correctly when run alone.
Next, to measure query performance, I wrote the shell script below.

for i in {1..20} ; do
(time mysql -h <host> -u root -P 3306 -D db1 --prompt="memsql> " < query.sql) 2>&1 &
done

It ran normally with up to 10 iterations, but at 20 an error came out.

ERROR 1040 (08004) at line 1: Leaf Error ( Too many connections

Would you tell me how to resolve this error?

show variables;

| max_connection_threads | 192 |
| max_connections | 1000 |
| max_pooled_connections | 1024 |

If you connect to the leaf node directly (mysql -u root --port 3307 -h <host>) and run select @@max_connections, what do you see? It should be 100000. If I had to guess, the ulimits MemSQL is running with are too low, so the leaf node automatically lowered max_connections to avoid running out of file descriptors.

In your script above, you are running the mysql command in the background. This means that you are opening a new connection to the Master Aggregator 20 times in parallel. Each connection to the master aggregator will open as many connections to the leaf as you have partitions. Combine this knowledge with @adam’s response - if you have too low of a NOFILE ulimit, this could cause the issue you described.
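To make that fan-out concrete, here is a back-of-the-envelope sketch. The partition count of 16 is an assumption for illustration (check yours with SHOW PARTITIONS); the client count comes from the loop in the original script.

```shell
# Assumed numbers: 20 parallel clients (the loop above) and a
# hypothetical 16 partitions; each aggregator connection opens
# roughly one leaf connection per partition.
clients=20
partitions=16
leaf_connections=$((clients * partitions))
echo "$leaf_connections"   # 320, well above max_connection_threads=192
```

With numbers like these, 20 parallel clients can easily exhaust a low per-process file-descriptor limit on the leaf.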

As a general note - do you actually want to benchmark parallel query performance - or do you just want to run your benchmark 20 times? If you want to do the latter I suggest removing the trailing & so that each instance of the mysql command runs serially rather than in parallel.
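For illustration, here is a sketch of the serial vs. parallel shapes of the loop; run_query is a hypothetical stand-in for the real mysql invocation so the structure is visible on its own.

```shell
# run_query is a hypothetical stand-in for something like:
#   mysql -h <host> -u root -P 3306 -D db1 < query.sql
run_query() {
  echo "query done"
}

# Serial: no trailing &, so each run finishes before the next starts
for i in {1..3}; do
  run_query
done

# Parallel: background each run, then wait for all of them to finish
for i in {1..3}; do
  run_query &
done
wait
```

In the serial form only one connection to the aggregator is open at a time; in the parallel form all of them are open at once.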

As you pointed out, the problem was too low of a NOFILE ulimit.
Setting a NOFILE ulimit in the docker run command as below resolved the issue.

docker run -i --init \
  --name memsql-ciab \
  -p 3306:3306 -p 8080:8080 \
  --ulimit nofile=100000

Since I assume 200 clients will execute queries at the same time, I do want to benchmark parallel query performance. In my environment, up to 60 parallel connections were available.

Thank you so much.