Did I understand correctly (about transaction logs & columnstore files)?

Hi,

I understand the technical details as described below, and I would appreciate it if anybody could confirm whether they are correct.

transaction logs

  1. Directory of the transaction log files: …/data/logs
  2. The size of a transaction log file is set by snapshot-trigger-size (see the sketch below for how I check this).
  3. Regardless of the setting, two 256 MB files per partition are pre-generated.
  4. Therefore, a minimum of 512 MB per partition is required for transaction logging.
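
For reference, this is roughly how I am checking that setting on my cluster (I am assuming the SQL variable name snapshot_trigger_size corresponds to snapshot-trigger-size; it may differ by version):

    -- Check the setting that (as I understand it) sizes the transaction log files
    SHOW VARIABLES LIKE 'snapshot_trigger_size';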

columnstore files

  1. The columnstore is stored as files under the “data/blobs” directory.
  2. The row segment of a columnstore table is saved as a single file.
  3. The data in this file is sorted and stored by the columnstore key column (as in the table sketch after this list).
  4. During INSERT/LOAD DATA/UPDATE, the columnstore file where the data will be stored is newly created (by the background merger).
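
For context, a sketch of the kind of table I have in mind (the table name and columns are just an example; I believe the key declared USING CLUSTERED COLUMNSTORE is what the sorting in item 3 refers to):

    CREATE TABLE events (
        event_time DATETIME,
        user_id BIGINT,
        payload VARCHAR(255),
        KEY (event_time) USING CLUSTERED COLUMNSTORE  -- on-disk data sorted by event_time
    );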

Thanks in advance.

Mostly correct, yep.

For transaction logs:
3) applies to version 7.0 and higher, and for reference databases we create smaller log files by default (64 MB). Both of those are controllable by the global variables log_file_size_partitions and log_file_size_ref_dbs.
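
A minimal sketch of how those variables can be inspected and changed (the values below are examples only, in bytes; pick what fits your workload):

    SHOW VARIABLES LIKE 'log_file_size%';
    -- Example only: use 64 MB log files for partition and reference databases
    SET GLOBAL log_file_size_partitions = 67108864;
    SET GLOBAL log_file_size_ref_dbs = 67108864;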

For columnstore files:
2) The rowstore segment is stored like a normal rowstore table (its data is logged to the transaction log and the rows are held in memory). Once enough rows are stored in the in-memory rowstore segment, the data is converted to columnstore format, written to disk, and removed from the rowstore segment by the background flusher.
4) The INSERT/LOAD/UPDATE query will create the files in columnstore format directly itself if it is writing enough data (per partition). If it is writing small amounts of data, it will go through the process described in 2) (accumulate in memory until we have enough rows to write out to disk in columnstore format). The background merger doesn’t play a role in the initial writes of new rows. It’s mainly responsible for keeping the data sorted and for cleaning up deleted rows. Some more details here: SingleStoreDB Cloud · SingleStore Documentation
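
If it helps, merger activity can be inspected, and a full merge triggered manually, with something like the following (the table name is just an example; exact output varies by version):

    -- See what the background/manual merger is doing for a columnstore table
    SHOW COLUMNAR MERGE STATUS FOR events;
    -- Kick off a manual merge: sorts segments and cleans up deleted rows
    OPTIMIZE TABLE events;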

Thank you for the technical explanation.

Then I have a question.

In columnstore,

Q1. How does “columnstore_disk_insert_threshold=0” work?

  1. The data is stored in memory as usual, then immediately written to disk by the background flusher.
  2. Memory is skipped and the data is stored directly on disk.
  3. If neither of the above, please explain.

Q2. If one row is too big to be stored in memory, what happens?

  1. The failure is immediately returned to the client.
  2. Memory is flushed and this row is stored directly on disk.
  3. Memory is left as is and only this row is stored on disk.
  4. If none of the above, please explain.

Q3. Which setting affects the size of the row segment in memory: “maximum_table_memory” or “columnstore_flush_bytes”?

Q4. I saw a post that says BLOB data is stored directly on disk. Is that right?

Thank you in advance for your help!!

Q1. If columnstore_disk_insert_threshold=0, all writes will skip the in-memory segment (so even a one-row write will be converted to columnstore format and written to disk). This can cause issues with lots of small files on disk (the background merger will be running and building bigger files out of the smaller ones, but it may not keep up).
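
A sketch of how that threshold can be checked and changed (the 0 below is only to illustrate the behaviour you asked about; the default is usually what you want):

    SHOW VARIABLES LIKE 'columnstore_disk_insert_threshold';
    -- Illustration only: 0 makes every write go straight to disk in columnstore format
    SET GLOBAL columnstore_disk_insert_threshold = 0;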

Q2. It’s pretty hard to get a row that won’t fit into memory (you will run into max_allowed_packet first and the query will be rejected before it runs). If you do create one, though (say via a bunch of string concats), the query will fail with an “out of memory” error. The row needs to fit in memory to be compressed.
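
For completeness, the packet limit mentioned above can be checked, and raised if large single-row writes are expected (the value below is just an example):

    SHOW VARIABLES LIKE 'max_allowed_packet';
    -- Example only: allow packets up to 100 MB
    SET GLOBAL max_allowed_packet = 104857600;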

Q3. Both affect it. All rowstore segments for all columnstore tables (summed up) can’t exceed maximum_table_memory. Queries will start to throttle if memory use by in-memory segments gets too close to maximum_table_memory (to allow the flushers to catch up). columnstore_flush_bytes is how many bytes we want before doing any flushing to disk. It’s the most direct knob for controlling the size of the in-memory segment.
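
A quick way to look at both limits side by side (the SET below is an example only; 16 MB is not a recommendation):

    SHOW VARIABLES LIKE 'maximum_table_memory';
    SHOW VARIABLES LIKE 'columnstore_flush_bytes';
    -- Example only: flush an in-memory segment once it reaches roughly 16 MB
    SET GLOBAL columnstore_flush_bytes = 16777216;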

Q4. “Blob” in MemSQL columnstore terminology means a compressed file on disk stored in columnstore format (not a BLOB datatype in a table).
