The snapshot directory is using a lot of disk space

I have a problem: the snapshot directory keeps growing. Is there a way to limit its size? I would like to stop it from increasing.

By default MemSQL keeps 2 snapshot files per partition database (controlled by the snapshots_to_keep system variable). These snapshot files contain all the rowstore data stored on that partition, so if you're seeing more disk use in the snapshot directory it is mainly because you are storing more data in rowstore tables.
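To see what a node is currently configured to retain, you can check the variable with standard MySQL-compatible syntax (a sketch; the variable name matches the one above, but how it is exposed can vary by MemSQL version):

```sql
-- Show the current snapshot retention setting on this node
SHOW VARIABLES LIKE 'snapshots_to_keep';
```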

-Adam


Thank you for the answer. Currently we have 128 GB of memory and a 100 GB root directory. I am running one master aggregator and one leaf node, and the snapshot files occupy more than 70 GB, so I am worried about running out of disk. I would like to know how to clean up the snapshot files, or how to keep the partition databases on the leaf (such as _0, _1) from continuing to grow as they are updated.

You likely need more disk space. As a bare minimum, if you're only using rowstore tables, you should have 3 times as much disk as memory (and in most cases you should really use more than that). With 128 GB of memory, that means at least 384 GB of disk.

Don’t manually delete any files from the datadir; that is where MemSQL stores your data, and you can end up with a corrupt database. If you want to reduce disk use you can set the snapshots_to_keep system variable to 1 on your leaf node. This isn’t a recommended configuration, but it will cut the disk use in the snapshot directory roughly in half (at the cost of less history stored for replication, if you’re running with redundancy 2).
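As a sketch, the change on the leaf could look like the following. This assumes SET GLOBAL is permitted for this variable on your version; on some versions engine variables must instead be changed through the cluster management tools and a node restart:

```sql
-- Reduce snapshot retention from the default of 2 to 1 on the leaf.
-- Hedge: depending on your MemSQL version, this variable may need to be
-- set in the node's config via the management tools rather than SET GLOBAL.
SET GLOBAL snapshots_to_keep = 1;
```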


I set snapshots_to_keep to 1. Will the existing snapshots be cleaned up over time?
Is there a way to delete them and take a fresh snapshot?
I am worried about running out of space, and I’d appreciate any advice on how to improve this.

On MemSQL 6.x the older snapshots will be cleaned up gradually over time (when a new snapshot is taken, it cleans up older ones). On MemSQL 7.x, older snapshots are cleaned up almost immediately after snapshots_to_keep is changed.
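On 6.x you can nudge that cleanup along by taking a new snapshot yourself, since that is what triggers removal of older ones. A sketch, where mydb is a placeholder for one of your database names (run it per database, from an aggregator):

```sql
-- Taking a fresh snapshot causes snapshots beyond the retention limit
-- to be cleaned up; 'mydb' is a hypothetical database name.
SNAPSHOT DATABASE mydb;
```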

Outside of running at redundancy 1 and not having slave partitions, setting snapshots_to_keep to 1 is about the best you can do to save disk space.