Write performance as database size increases

Hi -

We have several columnstore tables in our database. I’ve observed some odd behavior in CPU usage when writing to these tables.

When there’s no data in our database:

  • Idle CPU usage in the cluster is 6%
  • CPU usage when writing to columnstore tables is ~14%

When we’ve imported 400GB of columnstore data into our database:

  • Idle CPU usage in the cluster is 10%
  • CPU usage when writing to columnstore tables is ~27%

Note that the columnstore tables we’re writing to are different tables from the ones the 400GB of data was imported into.

It seems that write performance to any columnstore table is affected as the overall size of the database increases. I’d perhaps expect this if the table being written to were itself very large, but that isn’t the case here.

I also thought that perhaps the idle CPU usage was significantly higher because of background columnstore merging, but that didn’t seem to be the case either.

There’s nothing else happening in the cluster during this. Is there an explanation for why CPU usage is higher in the second scenario, or some additional debugging we could do to identify where the CPU time is going?

Thanks!

I don’t have a cookbook answer that gets you straight to the result you want, but if you want to dig in and find where the time is being spent, you can use workload profiling. To read up on it, see here:

A good view to start with is mv_activities.
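As a starting point, a query along these lines can surface the busiest activities. This is a minimal sketch: the cpu_time_ms column name is an assumption and may differ by version, so check the view's schema first.

    -- Sketch: find the activities consuming the most CPU on this cluster.
    -- Assumes mv_activities exposes a cpu_time_ms column; verify with
    -- DESCRIBE information_schema.mv_activities on your version.
    SELECT activity_type, activity_name, database_name, cpu_time_ms
    FROM information_schema.mv_activities
    ORDER BY cpu_time_ms DESC
    LIMIT 20;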


Thanks for the feedback! I enabled the advanced counters, and it looks like this might be related to the background merger process for the columnstore tables. I monitored the CPU usage of each activity running in MemSQL over a period of time and identified the activities that consumed the most CPU time.
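For anyone reproducing this kind of check, a minimal sketch is below. It assumes the read_advanced_counters global and a cpu_time_ms column in mv_activities; exact names may vary by version.

    -- Turn on the per-activity advanced counters (assumed variable name).
    SET GLOBAL read_advanced_counters = ON;

    -- Sample this periodically and watch whether merger-related activities
    -- stay near the top of the CPU consumers.
    SELECT activity_name, SUM(cpu_time_ms) AS total_cpu_ms
    FROM information_schema.mv_activities
    GROUP BY activity_name
    ORDER BY total_cpu_ms DESC
    LIMIT 10;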