Hello, the response time of my index in Manticore Search has increased a lot recently (some requests take several seconds). There are about 40 million documents currently indexed. How can I increase the performance of Manticore Search? Is there any configuration change that would help? I have put my current configuration below.
There may be many reasons why your response time has increased. To help you figure out the root cause, I'd recommend you first of all look into the searchd log.
Then, if you don't see anything suspicious there, enable the query log, let it run for some time, and inspect it to find the slowest queries. Then run those queries manually with SHOW META and SHOW PROFILE.
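For example, a minimal inspection session could look like this (my_index and the MATCH clause are just placeholders; use your own table and one of the slow queries from the query log):

SET profiling=1;
SELECT id FROM my_index WHERE MATCH('example keywords') LIMIT 20;
SHOW META;
SHOW PROFILE;

SHOW META gives you the total matches and per-keyword statistics, while SHOW PROFILE breaks the query time down by stage, which usually points at the bottleneck.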
Another thing which often helps is looking at SHOW THREADS at the moment when you see the system is overloaded.
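For instance (the exact columns vary between versions, and the second form, if your version supports it, widens the output so long queries aren't truncated):

SHOW THREADS;
SHOW THREADS OPTION columns=500;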
You didn’t say what version you are running. Perhaps upgrading to a newer version will help.
If you are running an old version without automatic RT table compaction, you may have too many disk chunks, which may be another reason for the slowdown. Or, vice versa, you may need to run FLUSH RAMCHUNK my_index to flush the RAM chunk as a disk chunk; it sometimes helps a lot.
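For reference, a rough way to check and then compact looks like this (my_index is your table name; in 6.x it's SHOW TABLE ... STATUS, older versions use SHOW INDEX ... STATUS):

SHOW TABLE my_index STATUS;
FLUSH RAMCHUNK my_index;
OPTIMIZE TABLE my_index;

The status output includes the number of disk chunks and the RAM chunk size, so you can see whether flushing or compaction is actually needed before running the other two commands.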
First of all, thank you for responding so quickly. I have several questions.
Regarding the version: I am using Manticore Search 6.0.4.
I am currently using Manticore Search on a Basic Droplet, Premium AMD, 8 vCPUs and 16 GiB of RAM from DigitalOcean (the vCPUs are shared). Is this hardware configuration supposed to be sufficient to run Manticore Search?
Should the value of rt_mem_limit be large (2G, 4G) or, on the contrary, small like the default value of 128M?
Is optimize_cutoff supposed to set the number of disk chunks, or is this value automatic? I tried forcing optimize_cutoff to 16 in the configuration, then restarted Manticore Search and executed the following commands:
ALTER TABLE my_index RECONFIGURE;
FLUSH RAMCHUNK my_index;
OPTIMIZE TABLE my_index;
but the number of disk_chunks remained at 17.
Should I specify the cutoff option myself in the OPTIMIZE request?
The documentation says that the number of disk chunks in a table should be # of CPU cores * 2. Can setting cutoff to the number of CPUs improve performance? Should the optimize_cutoff configuration be set globally so that the automatic OPTIMIZE uses this value?
When using SHOW THREADS, I observed that most requests use all the available workers; is this normal behavior?
Regarding the analysis of the queries: all queries sort their results by attr1 (ORDER BY attr1 DESC; attr1 is an attribute of type rt_attr_timestamp). Without the sorting the server responds faster, but still not as fast as it should.
Is there a configuration to make Manticore Search load more stuff in RAM?
Should I specify the cutoff option myself in the OPTIMIZE request?
No, you shouldn’t.
Should the optimize_cutoff configuration be set globally so that the automatic OPTIMIZE uses this value?
Yes, it can be set globally, as I mentioned earlier.
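For example, in the searchd section of the config (just a sketch; adjust the number to your case):

searchd
{
    # keep your existing searchd settings; this only adds a server-wide default
    optimize_cutoff = 16
}

With that in place, the automatic OPTIMIZE compacts down to this value, so you don't need to pass cutoff manually.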
When using SHOW THREADS, I observed that most requests use all the available workers; is this normal behavior?
Yes, as long as pseudo-sharding is not disabled. The idea is that you have paid for a powerful multi-core CPU, and Manticore tries to make the most of it to reduce response time.
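If you want to check how much of your response time comes from that parallelism, you can limit a single test query to one thread (a sketch; assuming your version supports the per-query threads option, and the query itself is just a placeholder):

SELECT id FROM my_index WHERE MATCH('example keywords') ORDER BY attr1 DESC LIMIT 20 OPTION threads=1;

Comparing that against the normal run shows whether pseudo-sharding is what keeps all the workers busy.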
Is there a configuration to make Manticore Search load more stuff in RAM?