Distributed RT index doesn't get preloaded/preread into RAM?


#1

Hi, I’m a new user, unsure if this is a bug or not.

Basically we have a problem where search is really slow and OPTIMIZE does nothing. In fact, OPTIMIZE only says this:

rt: index indexname_0: optimized (progressive) chunk(s) 0 ( of 1 ) in 0.000 sec

We first inserted a few million rows; when that finished, RAM usage was up at ~21 GB, and after a restart it used the same amount of RAM.

Then a FLUSH RAMCHUNK was done. I presume this might have been a mistake, because now RAM usage is down to 0.2 GB.
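
For reference, we checked the RAM chunk size roughly like this (a sketch; it assumes the SphinxQL listener is on the default port 9306):

mysql -h 127.0.0.1 -P 9306 -e "SHOW INDEX indexname_0 STATUS;"

Values like ram_bytes and disk_chunks there (field names may vary by version) show roughly how much of the index sits in the RAM chunk versus in disk chunks.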

We've tried enabling mlock=1 and restarting searchd with --force-preread, but we get errors saying:

[Fri May  3 11:21:01.494 2019] [1706] WARNING: index '[...]/idx.0': mlock() failed: Cannot allocate memory for strings
[Fri May  3 11:21:01.498 2019] [1706] WARNING: index '[...]/idx.0': mlock() failed: Cannot allocate memory for dictionary
[Fri May  3 11:21:01.527 2019] [1706] WARNING: index '[...]/idx.0': mlock() failed: Cannot allocate memory for attributes
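
For reference, this is roughly what we changed and how we restarted (a sketch; the config path is a placeholder for our real one):

index indexname_0
{
    # ... existing RT index settings unchanged ...
    mlock = 1
}

searchd --config /etc/sphinxsearch/sphinx.conf --stopwait
searchd --config /etc/sphinxsearch/sphinx.conf --force-preread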

Any ideas?


#2

It is not an error but a warning that searchd cannot lock that amount of memory in RAM.
Could you check the output of the following for the user that started searchd:

ulimit -l
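
Also, if the daemon is already running, you can see the limit the process actually got (the shell's ulimit can differ from what the service was started with), e.g.:

grep 'Max locked memory' /proc/$(pidof searchd)/limits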

#3

ulimit -l gives 16384


#4

mlock() requires a privileged user.
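
16384 from ulimit -l is in KB, i.e. only ~16 MB may be locked, which is nowhere near a ~21 GB index. Either run searchd as root, or raise the locked-memory limit for its user, e.g. in /etc/security/limits.conf (a sketch; 'sphinx' is a placeholder for whatever user runs searchd):

sphinx  soft  memlock  unlimited
sphinx  hard  memlock  unlimited

Note this only applies to new login sessions; for a systemd service the limit has to be set in the unit itself.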


#5

I think we also tried changing the systemd service user to root, but it seemed to get ignored. I'll see if we can try again.
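
For reference, a drop-in override along these lines should do it (a sketch; the unit name searchd.service is a guess, and User=root may not be needed at all if the memlock limit is raised):

systemctl edit searchd.service

# override contents:
[Service]
User=root
LimitMEMLOCK=infinity

systemctl daemon-reload
systemctl restart searchd.service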

Edit: I couldn't get the service to start as root for some reason, not sure why, but we managed to start searchd as root directly. It now uses ~5 GB of RAM, which is still ~3 GB less than Sphinx 2 used, so the queries are still slower than with Sphinx.
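
For anyone checking whether mlock() actually takes effect after this: the locked amount shows up as VmLck in the process status, e.g.:

grep VmLck /proc/$(pidof searchd)/status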