250% CPU used when querying empty columnstore tables

This is very simple: running about 200 queries per second against an empty columnstore table uses about 250% CPU on the leaves.

Does this make sense?

What’s your configuration and where did you see it showing 250%? If you were using a SQL statement to gather that information, please share it.

The configuration is 64 partitions across 16 leaves with 4 CPUs each.
I used the MemSQL Studio profiling feature.
I recorded for 50 seconds, capturing 14,000 queries that were hitting an empty table.
The total CPU usage was about 200%.

It is a simple select column1, column2, column3 from table order by column1;

The table is empty.
If I drop the order by column1, CPU usage drops from 200% to 100% (1 core).

The CPU usage data comes from mv_activities_extended.
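For reference, a per-activity CPU breakdown can be pulled from that view directly. This is a sketch only; the exact column names (`activity_name`, `cpu_time_ms`) are assumptions and may differ between versions, so check `DESCRIBE information_schema.mv_activities_extended` on your cluster first.

```sql
-- Hypothetical sketch: top activities by CPU time since last counter reset.
-- Column names are assumed and may vary by MemSQL version.
SELECT activity_name,
       SUM(cpu_time_ms) AS total_cpu_ms
FROM information_schema.mv_activities_extended
GROUP BY activity_name
ORDER BY total_cpu_ms DESC
LIMIT 10;
```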

I think it’s fine to see CPU usage up to number of cores * 100%. I’ll have one of our developers weigh in on this.

It’s in general normal to see up to num_cores * recording period seconds of CPU time. Studio reports percentages relative to a single core’s worth of CPU time, as per the tooltip on that column. But it sounds like you’re aware of this.

2.5 CPUs’ worth of time on a 64-core cluster (4% total utilization), with about 9ms of CPU time per query, is higher than it should be but not shocking, especially if you’re running in a VM where syscalls can be more expensive. Given that you see leaf CPU use, we likely don’t optimize for empty tables in particular and are still broadcasting the query across partitions. Likewise, when there’s an order by, I expect we also collect the (empty) result sets on the aggregator rather than streaming them directly from the leaves to the client. The profiling queries themselves are currently O(plancache size) and can also contribute non-trivial CPU use; they profile themselves, so you should be able to find them as information_schema queries in the profile output.
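The arithmetic above can be checked quickly from the numbers in this thread (16 leaves x 4 CPUs, a 50-second recording, 250% of one core, 14,000 queries):

```python
# Back-of-the-envelope check of the figures discussed in this thread.
cores = 16 * 4          # 16 leaves with 4 CPUs each
recording_s = 50        # Studio recording window, seconds
cpu_pct = 250           # reported CPU, relative to a single core
queries = 14_000        # queries captured during the window

cpu_core_seconds = (cpu_pct / 100) * recording_s    # 125 core-seconds consumed
per_query_ms = cpu_core_seconds / queries * 1000    # CPU time per query, ms
cluster_util_pct = (cpu_pct / 100) / cores * 100    # share of the whole cluster

print(round(per_query_ms, 1), round(cluster_util_pct, 1))  # prints: 8.9 3.9
```

So roughly 9ms of CPU per query, and about 4% utilization of the 64-core cluster, matching the figures above.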

We do in fact have work planned to reduce known per-query overhead in a 7.0 dot release. A flamegraph from a leaf and/or agg would be useful, but we’ll be repeating your experiment on our end in any case.