Log: WARNING: INTERNAL: out-of-range in ThinMMapReader_c

Hi, in my searchd.log I get a lot of this:

WARNING: INTERNAL: out-of-range in ThinMMapReader_c: trying to read '/var/lib/manticore/la8_items_upcaft/la8_items_upcaft.24.tmp.spd' at 831772134, from mmap of 27180644, query most probably would FAIL; report the fact to dev!

la8_items_upcaft is an RT index created via SQL with CREATE TABLE.
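For context, it was created with something along these lines (just an illustrative sketch; the real schema has more fields and different names and options):

-- hypothetical schema, for illustration only
CREATE TABLE la8_items_upcaft (
    title text,
    description text,
    price float,
    category_id int,
    created_at timestamp
) morphology='stem_en';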

The full log files are here, in case they help show what led to this:
https://www.lot-art.com/sitemaps/mantilog.zip

Manticore version:

Server version: 6.2.12 dc5144d35@230822 (columnar 2.2.4 5aec342@230822) (secondary 2.2.4 5aec342@230822) git branch manticore-6.2.12...origin/manticore-6.2.12

The server runs Debian.
RAM usage is usually around 4% and CPU around 5%.

Conf:

searchd {
    listen = 0.0.0.0:9312
    listen = 0.0.0.0:9306:mysql
    listen = 0.0.0.0:9308:http
    log = /var/log/manticore/searchd.log
    #query_log = /var/log/manticore/query.log
    pid_file = /var/run/manticore/searchd.pid
    data_dir = /var/lib/manticore
    #net_workers = 3
    #preopen=1
    access_plain_attrs=mlock
    access_blob_attrs=mlock
    access_doclists=mlock
    access_hitlists=mlock
}
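
(Side note: if it helps, I believe the running daemon's loaded configuration can be double-checked over the 9306 MySQL listener, which should also confirm the access_* = mlock options are in effect:)

-- lists the settings searchd is actually running with
SHOW SETTINGS;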

All files in those directories are chmod 777.

Indextool check (I ran FLUSH TABLE first; the statement is shown just below, followed by the indextool output):
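
The flush itself was just the plain statement over the MySQL protocol:

FLUSH TABLE la8_items_upcaft;
-- (If I read the docs right, FLUSH RAMCHUNK la8_items_upcaft; would go further and
-- save the RAM chunk as a new disk chunk, so indextool could check that part too.)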

$ sudo indextool -c /etc/manticoresearch/manticore.conf --check la8_items_upcaft > out.log
Manticore 6.2.12 dc5144d35@230822 (columnar 2.2.4 5aec342@230822) (secondary 2.2.4 5aec342@230822)
Copyright (c) 2001-2016, Andrew Aksyonoff
Copyright (c) 2008-2016, Sphinx Technologies Inc (http://sphinxsearch.com)
Copyright (c) 2017-2023, Manticore Software LTD (https://manticoresearch.com)

using config file '/etc/manticoresearch/manticore.conf'...
checking table 'la8_items_upcaft'...
WARNING: failed to load RAM chunks, checking only 8 disk chunks
checking schema...
checking RT segment 0(25)...
checking rows...
checking dead row map...
checking RT segment 1(25)...
checking rows...
checking dead row map...
checking RT segment 2(25)...
checking rows...
checking dead row map...
checking RT segment 3(25)...
checking rows...
checking dead row map...
checking RT segment 4(25)...
checking rows...
checking dead row map...
checking RT segment 5(25)...
checking rows...
checking dead row map...
checking RT segment 6(25)...
checking rows...
checking dead row map...
checking RT segment 7(25)...
checking rows...
checking dead row map...
checking RT segment 8(25)...
checking rows...
checking dead row map...
checking RT segment 9(25)...
checking rows...
checking dead row map...
checking RT segment 10(25)...
checking rows...
checking dead row map...
checking RT segment 11(25)...
checking rows...
checking dead row map...
checking RT segment 12(25)...
checking rows...
checking dead row map...
checking RT segment 13(25)...
checking rows...
checking dead row map...
checking RT segment 14(25)...
checking rows...
checking dead row map...
checking RT segment 15(25)...
checking rows...
checking dead row map...
checking RT segment 16(25)...
checking rows...
checking dead row map...
checking RT segment 17(25)...
checking rows...
checking dead row map...
checking RT segment 18(25)...
checking rows...
checking dead row map...
checking RT segment 19(25)...
checking rows...
checking dead row map...
checking RT segment 20(25)...
checking rows...
checking dead row map...
checking RT segment 21(25)...
checking rows...
checking dead row map...
checking RT segment 22(25)...
checking rows...
checking dead row map...
checking RT segment 23(25)...
checking rows...
checking dead row map...
checking RT segment 24(25)...
checking rows...
checking dead row map...
checking disk chunk, extension 27, 0(8)...
checking schema...
checking dictionary...
checking data...
checking rows...
checking attribute blocks index...
checking kill-list...
checking dead row map...
checking doc-id lookup...
check passed, 1.5 sec elapsed
checking disk chunk, extension 28, 1(8)...
checking schema...
checking dictionary...
checking data...
checking rows...
checking attribute blocks index...
checking kill-list...
checking dead row map...
checking doc-id lookup...
check passed, 3.2 sec elapsed
checking disk chunk, extension 29, 2(8)...
checking schema...
checking dictionary...
checking data...
checking rows...
checking attribute blocks index...
checking kill-list...
checking dead row map...
checking doc-id lookup...
check passed, 4.8 sec elapsed
checking disk chunk, extension 26, 3(8)...
checking schema...
checking dictionary...
checking data...
checking rows...
checking attribute blocks index...
checking kill-list...
checking dead row map...
checking doc-id lookup...
check passed, 6.1 sec elapsed
checking disk chunk, extension 30, 4(8)...
checking schema...
checking dictionary...
checking data...
checking rows...
checking attribute blocks index...
checking kill-list...
checking dead row map...
checking doc-id lookup...
check passed, 6.9 sec elapsed
checking disk chunk, extension 31, 5(8)...
checking schema...
checking dictionary...
checking data...
checking rows...
checking attribute blocks index...
checking kill-list...
checking dead row map...
checking doc-id lookup...
check passed, 9.3 sec elapsed
checking disk chunk, extension 32, 6(8)...
checking schema...
checking dictionary...
checking data...
checking rows...
checking attribute blocks index...
checking kill-list...
checking dead row map...
checking doc-id lookup...
check passed, 11.4 sec elapsed
checking disk chunk, extension 25, 7(8)...
checking schema...
checking dictionary...
checking data...
checking rows...
checking attribute blocks index...
checking kill-list...
checking dead row map...
checking doc-id lookup...
check passed, 13.0 sec elapsed
check passed, 13.0 sec elapsed

I rebuilt this entire index just two days ago (deleted all the files, restarted Manticore and added all the data again).
I have had this problem in the past with other indexes as well.

What can I do to make it stable?

If:

  • you can easily reproduce this issue
  • indextool doesn’t report the table is corrupted
  • you can reproduce it even with the latest dev version

feel free to reopen issue #1780 (manticoresoftware/manticoresearch on GitHub) and share the files with us, so we can reproduce it on our side and fix it.
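
When testing a dev build, it's worth double-checking which daemon you're actually connected to; if I recall correctly, 6.2 added a statement for that (the startup banner in searchd.log and searchd --version show the same information):

-- should print the versions of the daemon and its components (columnar, secondary, etc.)
SHOW VERSION;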