Release Notes
A "Rows inserted during bulk load must not overlap existing rows" error may be reported when massive amounts of data are imported. In this case, adjust the rocksdb_merge_buf_size and rocksdb_merge_combine_read_size parameters based on the instance specification and the data volume. rocksdb_merge_buf_size indicates the data volume of each way in the k-way merge during index creation, and rocksdb_merge_combine_read_size indicates the total memory used in the k-way merge. rocksdb_block_cache_size indicates the size of the RocksDB block cache; we recommend you decrease its value temporarily during the k-way merge. Then use bulk load to import the data:

SET session rocksdb_bulk_load_allow_unsorted=1;
SET session rocksdb_bulk_load=1;
...Import the data...
SET session rocksdb_bulk_load=0;
SET session rocksdb_bulk_load_allow_unsorted=0;
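The enable/disable order above matters: rocksdb_bulk_load_allow_unsorted is switched on before rocksdb_bulk_load and switched off after it. A minimal sketch of driving that sequence from a client, assuming a generic DB-API cursor (for example from PyMySQL or mysql-connector) and a hypothetical run_import loader supplied by the caller:

```python
def bulk_load_import(cursor, run_import, allow_unsorted=True):
    """Run an import under RocksDB bulk load, restoring session state after.

    `cursor` is assumed to be a DB-API cursor; `run_import` is a hypothetical
    callable that performs the actual data loading. Illustrative sketch only.
    """
    if allow_unsorted:
        # Must be enabled before rocksdb_bulk_load when the data is unsorted.
        cursor.execute("SET session rocksdb_bulk_load_allow_unsorted=1;")
    cursor.execute("SET session rocksdb_bulk_load=1;")
    try:
        run_import(cursor)
    finally:
        # Disable in reverse order, even if the import raised an error.
        cursor.execute("SET session rocksdb_bulk_load=0;")
        if allow_unsorted:
            cursor.execute("SET session rocksdb_bulk_load_allow_unsorted=0;")
```

The try/finally guarantees the session variables are reset even when the import fails partway through, so the connection is not left in bulk-load mode.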
If the data to be imported is unsorted, enable rocksdb_bulk_load_allow_unsorted first. We recommend you set rocksdb_merge_buf_size to 64 MB or higher and rocksdb_merge_combine_read_size to 1 GB or higher to avoid OOM. After all the data is imported, restore these parameters to their original values. You can also disable unique_checks during data import to improve the import performance:

SET unique_checks=OFF;
...Import the data...
SET unique_checks=ON;
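As a rough sanity check on those recommended values (an illustrative calculation, not a product tool): since rocksdb_merge_buf_size is the per-way buffer and rocksdb_merge_combine_read_size is the total merge memory, their ratio approximates the fan-in k of the k-way merge.

```python
# Illustrative sketch: estimate the merge fan-in from the two parameters.
# The values below are the recommended minimums from this guide, not
# defaults read from a running server.
MB = 1024 * 1024
GB = 1024 * MB

rocksdb_merge_buf_size = 64 * MB          # data volume of each merge way
rocksdb_merge_combine_read_size = 1 * GB  # total memory for the k-way merge

k = rocksdb_merge_combine_read_size // rocksdb_merge_buf_size
print(k)  # 16 ways fit within the total memory budget
```

Raising rocksdb_merge_buf_size without also raising rocksdb_merge_combine_read_size shrinks k, so the two parameters should be tuned together.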
Note: After the import completes, you must set unique_checks back to ON; otherwise, the uniqueness of INSERT operations in subsequent normal transaction writes will not be checked.
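To make sure unique_checks is always restored, the import can be wrapped so that re-enabling it happens even on failure. A minimal sketch, again assuming a generic DB-API cursor and a hypothetical run_import callable:

```python
def import_without_unique_checks(cursor, run_import):
    """Disable unique_checks for the duration of an import, then restore it.

    `cursor` is assumed to be a DB-API cursor; `run_import` is a hypothetical
    loader callable. Illustrative sketch only.
    """
    cursor.execute("SET unique_checks=OFF;")
    try:
        run_import(cursor)
    finally:
        # Always restore: otherwise later INSERTs in normal transactions
        # would skip uniqueness checking.
        cursor.execute("SET unique_checks=ON;")
```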