Hi, I’m a new user, unsure if this is a bug or not.
Basically we have a problem where search is really slow and running OPTIMIZE does nothing. It just reports:
rt: index indexname_0: optimized (progressive) chunk(s) 0 ( of 1 ) in 0.000 sec
We first inserted a few million rows; when that finished, RAM usage was up at ~21 GB, and after a restart searchd used the same amount of RAM.
Then we ran FLUSH RAMCHUNK. I presume that might have been a mistake, because RAM usage is now down to 0.2 GB.
We've tried enabling mlock = 1 and restarting searchd with --force-preread, but we get errors saying:
[Fri May 3 11:21:01.494 2019] [1706] WARNING: index '[...]/idx.0': mlock() failed: Cannot allocate memory for strings
[Fri May 3 11:21:01.498 2019] [1706] WARNING: index '[...]/idx.0': mlock() failed: Cannot allocate memory for dictionary
[Fri May 3 11:21:01.527 2019] [1706] WARNING: index '[...]/idx.0': mlock() failed: Cannot allocate memory for attributes
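For completeness, this is roughly how mlock was enabled; "indexname" and the path are placeholders, and the rest of the index definition is omitted:

```
index indexname
{
    type  = rt
    path  = /path/to/indexname   # placeholder path
    mlock = 1                    # lock index data into RAM
}
```

After that change, searchd was stopped with `searchd --stopwait` and started again with `searchd --force-preread` so the index files would be read before the daemon starts serving queries.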
Any ideas?