MemSQL slower than MySQL for small datasets

Describe the problem you’re experiencing?
Benchmarking a query of 10k random rows on MemSQL vs MySQL:

memsql 6.7.16:    3.45s    172556 ns/op    1472 B/op    40 allocs/op
memsql 7.0.9:     3.74s    186956 ns/op    1472 B/op    40 allocs/op
mysql 5.7.28:     1.70s     84844 ns/op    1472 B/op    40 allocs/op

Benchmark code: GitHub - kokizzu/orm-benchmark: All golang orm benchmark

What is your ideal solution? What are you looking for?

At least equal to MySQL ‘__’)

What version(s) of MemSQL or related tools is this affecting?

7.0.9

Why did no one reply to this benchmark? Did you ever figure it out? I am still at the first step and have also noticed that it is slower than MySQL for small data (up to ~40k rows). Is there any configuration I am missing?

There probably aren’t any responses here because this was filed as a feature request instead of a question.

To answer though, MemSQL is a distributed database (it has no single-node version) and is designed for tables with hundreds of millions to trillions of rows. You likely won’t see much difference in performance between databases when using a small number of rows (and most distributed databases have extra overhead to coordinate multiple nodes to run queries, which is wasted overhead when the data fits on one node).

Wow, huge article but very useful. Thanks.

Did you try it against columnstore or memstore?

memstore