Client Speed - Which is best?

Hi all,

Out of interest, which is best for speed - the PHP client, the MySQL client, or the native client?

I’m testing it ahead of a switch from Sphinx. So far Manticore is a drop-in replacement, but I’m intrigued as to which client would be faster for adding 700k+ rows (1.5 GB of data) at once when we reset the index (which we currently do weekly).

The MySQL client takes just over 25 minutes at the moment. I see the PHP client uses the REST API, so I’m not sure that’s going to be any quicker. As for the native client, I have no clue, as I can’t seem to find any information about it.

FWIW, it’s quicker at creating the index than Sphinx was, and I’ve basically been able to copy the RT table configs from the Sphinx config over to Manticore, fire it up and reindex all the data straight into Manticore without a single line of code being changed - that’s impressive!

And that’s good, as I was stuck on Sphinx 3.03 due to a bug introduced in subsequent versions which broke Sphinx for me and highlighted that I needed to get away from a system looked after by a single dev.

Regards

Steve.

Out of interest, which is best for speed - the PHP client, the MySQL client, or the native client?

The native client should be only insignificantly faster. If you are looking for the fastest option, use real-time indexes with the binary log disabled and high insert concurrency to utilize all your CPU cores. You can take the scripts mentioned in “Manticore: a faster alternative to Elasticsearch in C++ with a 21-year history” as examples.
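
To make that concrete, here is a minimal sketch of the concurrent-insert approach, not taken from the thread: it assumes Manticore listening on the MySQL protocol at 127.0.0.1:9306, a hypothetical RT table rt_docs (id, title, content), pymysql as the driver (any MySQL-protocol client should behave similarly), and placeholder batch/worker sizes. Disabling the binary log itself is a manticore.conf change (an empty binlog_path in the searchd section), not something the client does.

```python
# Hedged sketch: high-concurrency batched INSERTs into a Manticore RT table
# over the MySQL protocol. Table name, columns, port, batch size and worker
# count are illustrative assumptions, not taken from this thread.
from concurrent.futures import ThreadPoolExecutor

import pymysql  # any MySQL-protocol driver should behave similarly


def insert_batch(rows):
    """Insert one batch of (id, title, content) tuples; pymysql's
    executemany() sends them as a single multi-row INSERT."""
    conn = pymysql.connect(host="127.0.0.1", port=9306, user="", password="")
    try:
        with conn.cursor() as cur:
            cur.executemany(
                "INSERT INTO rt_docs (id, title, content) VALUES (%s, %s, %s)",
                rows,
            )
        conn.commit()
    finally:
        conn.close()


def reload_all(all_rows, batch_size=1000, workers=8):
    """Split the data into batches and insert them from several threads so
    searchd can keep more CPU cores busy."""
    batches = [all_rows[i:i + batch_size]
               for i in range(0, len(all_rows), batch_size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        list(pool.map(insert_batch, batches))
```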

Cheers. I’m running into a random crash now when rebuilding the index, which is annoying. I’ll look into how to report the bug, but given it’s a) intermittent and b) inserting commercial data, I’m not sure how I can report it in a way that lets you replicate it and fix it :confused:

The index is rebuilt via MySQL INSERT statements, usually inserting 300 or so rows at once into the Manticore database. Sometimes it completes without issue; other times it crashes during the insert. It also happens on different tables, so it doesn’t appear to be data related. It’s currently running in a VM; I’ll try running it on bare metal next, but as it’s my main dev machine, I don’t want to screw up the Sphinx install by installing Manticore alongside it.
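
For reference, the rebuild loop described above looks roughly like this: read rows from the source database and push them to Manticore in multi-row INSERTs of a few hundred rows each. Everything in the sketch is an illustrative assumption rather than the actual code from this post - the source query and documents table, the rt_docs target table, the ports, the credentials and the batch size of 300.

```python
# Hedged sketch of the rebuild described above: read rows from the source
# MySQL database and insert them into Manticore roughly 300 at a time using
# one multi-row INSERT per batch. All names, ports and credentials are
# placeholder assumptions.
import pymysql

BATCH = 300

src = pymysql.connect(host="127.0.0.1", port=3306, user="app",
                      password="secret", database="app")
dst = pymysql.connect(host="127.0.0.1", port=9306, user="", password="")


def flush(batch):
    """Send the accumulated rows as a single multi-row INSERT."""
    if not batch:
        return
    placeholders = ", ".join(["(%s, %s, %s)"] * len(batch))
    sql = "INSERT INTO rt_docs (id, title, content) VALUES " + placeholders
    with dst.cursor() as cur:
        cur.execute(sql, [value for row in batch for value in row])
    dst.commit()


with src.cursor() as cur:
    cur.execute("SELECT id, title, content FROM documents")  # hypothetical source table
    batch = []
    for row in cur:
        batch.append(row)
        if len(batch) >= BATCH:
            flush(batch)
            batch = []
    flush(batch)  # any remaining rows
```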

Sphinx, for all its issues, never had this particular problem, and it’s a bit of a showstopper for us at the moment in moving over to Manticore. Yes, ideally we’ll refactor the reindex operation to generate CSV files and import them that way, but we’re not there yet.

Nothing’s ever simple, is it? lol

(Forgot to add: the version I’m using is the one installed via APT on Ubuntu 20.04 - 5.0.2 348514c86@220530 dev.)

You could install debug symbols on your box first, then post the crash log from searchd.log, along with a description of the crash, to a ticket on GitHub.

Yeah, I’ll look into that. I just ran it on my dev machine and it completed a reindex of 14 million docs without issue. I’m wondering if it’s an issue with running it inside VirtualBox.

There was one issue, now fixed, which may be related to yours: searchd hangs/crashes under intensive loading · Issue #827 · manticoresoftware/manticoresearch · GitHub

I suggest you try the latest dev package before you create a bug report.