For indexer: Is there any way to send selects to the mysql slave and updates/inserts to the mysql master?

Hello,

For indexer: as we use multiple mysql servers, and to prevent overloading the master mysql server, we wish to send selects to the mysql slave and updates/inserts to the mysql master when configuring manticore.conf.

Is that possible?

Thank you.

indexer is not supposed to insert new data, it’s supposed to only SELECT, so just point it to your slave and insert data into the master from your application. The data flow will then be:
Your app → mysql master → mysql slave → indexer → Manticore
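For example, a minimal source pointed at the replica could look like this (hostnames, credentials and table names below are just placeholders):

source products_src {
	type = mysql
	# indexer only reads, so the read replica is enough here
	sql_host = mysql-replica.internal
	sql_user = indexer
	sql_pass = secret
	sql_db = shop
	sql_query = SELECT id, title, body FROM products
}

Your application keeps writing to the master as usual, and replication delivers the rows to the replica for indexer to read.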

indexer is not supposed to insert new data, it’s supposed to only SELECT

Actually, I think it is super common with sphinx/manticore to have indexer write the last indexed position back to mysql via sql_query_pre, to mark where the delta index should start.
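Something like the classic main+delta counter pattern, roughly (sph_counter and the column names are just for illustration, connection settings omitted):

source main_src {
	# remember the highest id we are about to index
	sql_query_pre = REPLACE INTO sph_counter SELECT 1, MAX(id) FROM documents
	sql_query = SELECT id, title, body FROM documents WHERE id <= (SELECT max_doc_id FROM sph_counter WHERE counter_id = 1)
}

source delta_src : main_src {
	# override the inherited pre-query so the counter is not bumped again
	sql_query_pre = SET NAMES utf8
	sql_query = SELECT id, title, body FROM documents WHERE id > (SELECT max_doc_id FROM sph_counter WHERE counter_id = 1)
}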

In theory I could probably set up a mysql/mariadb proxy and send insert/update/replace queries to the master and select queries to the slave, but I was hoping there would be an easier way.

Another method I know of is to break replication on only one table used to mark where the delta index is, give sphinx/manticore write access to that table on the slave server, and not touch it on the master.
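If I went that route, it would presumably be something like this on the slave (db, table and user names made up):

# slave my.cnf: keep this one table out of replication
[mysqld]
replicate-ignore-table = shop.search_counter

-- and on the slave, let the indexing user write to just that table
GRANT SELECT, INSERT, UPDATE, DELETE ON shop.search_counter TO 'indexer'@'%';

The master would simply never write to search_counter, so the local writes on the slave can't conflict with replication.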

OK, I get what you mean now. Unfortunately, plain tables can only be configured to use one and the same connection for all the queries indexer runs while building them.

In theory I could probably set up a mysql/mariadb proxy and send insert/update/replace queries to the master and select queries to the slave, but I was hoping there would be an easier way.

Right, I’d do the same. Perhaps ProxySQL can do that.
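I haven’t tried it with indexer specifically, but in ProxySQL a read/write split is usually just a couple of query rules on the admin interface, e.g. (assuming hostgroup 10 is the master and hostgroup 20 the replica):

-- on the ProxySQL admin interface (port 6032 by default)
INSERT INTO mysql_query_rules (rule_id, active, match_pattern, destination_hostgroup, apply)
VALUES (1, 1, '^SELECT.*FOR UPDATE', 10, 1);
INSERT INTO mysql_query_rules (rule_id, active, match_pattern, destination_hostgroup, apply)
VALUES (2, 1, '^SELECT', 20, 1);
LOAD MYSQL QUERY RULES TO RUNTIME;
SAVE MYSQL QUERY RULES TO DISK;

Plain SELECTs then go to the replica, while SELECT ... FOR UPDATE and anything that matches no rule (INSERT/UPDATE/REPLACE) falls back to the user’s default hostgroup, which you would point at the master. indexer would just connect to ProxySQL instead of mysql directly.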

Alternatively, you can use the xmlpipe2 source type instead of mysql. Then you’d need to write a script which returns XML, but you would have maximum flexibility in terms of indexation state control etc.
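For reference, an xmlpipe2 source is just a command whose stdout is XML (the script path here is made up):

source xml_src {
	type = xmlpipe2
	xmlpipe_command = /usr/local/bin/export_docs.sh
}

and the script prints something like:

<?xml version="1.0" encoding="utf-8"?>
<sphinx:docset>
<sphinx:schema>
	<sphinx:field name="title"/>
	<sphinx:attr name="published" type="timestamp"/>
</sphinx:schema>
<sphinx:document id="1">
	<title>hello world</title>
	<published>1609459200</published>
</sphinx:document>
</sphinx:docset>

The script itself can read from the replica, write its bookmark to the master, and keep whatever state it needs.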

Each ‘source’ can only connect to one server, but you can have multiple sources per index, so you can arrange for both servers to be used.

Something like this…

source p1 {
	sql_host = primary.server.local
	# runs on the master: record the current high-water mark
	sql_query_pre = REPLACE INTO search_counter SELECT max(id) FROM table
	# must return at least one document, see the note below
	sql_query = SELECT id, title FROM table WHERE id = 1
}

source p2 {
	sql_host = replica.server.local
	# crude wait for the counter update above to replicate
	sql_query_pre = SELECT SLEEP(3)
	sql_query_pre = SELECT @max_id := doc_id FROM search_counter
	# the main indexing loop runs against the replica
	sql_query = SELECT id, title FROM table WHERE id BETWEEN 2 AND @max_id
}

index idx {
	source = p1
	source = p2
}

Of course I’ve omitted lots of required stuff; it’s just enough to demonstrate that you run the real ‘sql_query_pre’ in one source, but the main indexing loop in the second.

Note the sql_query in the first source needs to return a document; in my experience indexer will abort if it gets no results. The SLEEP is a bit of a hack, but it’s there to help make sure the update to the counter table has replicated (although of course 3 seconds is still not long enough to guarantee it happened!).

Although to be honest, I have gone with a slightly different setup: a separate index for running the first source. I found that more reliable. You just always rebuild the ‘primary’ index as well as the ‘main’ one, which executes on the replica server. The primary index is a dummy and never actually gets queried.
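A rough sketch of that layout, reusing the made-up names from the example above (the rest of the required settings omitted again, paths invented):

source counter_src {
	sql_host = primary.server.local
	# bump the counter on the master
	sql_query_pre = REPLACE INTO search_counter SELECT max(id) FROM table
	# return a single dummy row so indexer doesn't abort
	sql_query = SELECT 1 AS id, 'dummy' AS title
}

source main_src {
	sql_host = replica.server.local
	sql_query_pre = SELECT @max_id := doc_id FROM search_counter
	sql_query = SELECT id, title FROM table WHERE id <= @max_id
}

index primary_idx {
	source = counter_src
	path = /var/lib/manticore/primary_idx
}

index main_idx {
	source = main_src
	path = /var/lib/manticore/main_idx
}

Then every rebuild does both, e.g. indexer --rotate primary_idx main_idx, so the counter on the master is bumped each run even though only main_idx is ever searched (and you still have to allow for replication lag before main_idx reads the counter).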

In fact, you can see an example of it here!

(We are just making the config file executable, so we can use the hostname as a column in the counter table.)
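i.e. roughly like this, assuming a two-column search_counter(host, doc_id) table (untested, just to show the idea):

#!/bin/sh
# indexer/searchd execute the config because of the shebang and read the generated config from stdout
HOST=$(hostname)
cat <<EOF
source counter_src {
	sql_host = primary.server.local
	sql_query_pre = REPLACE INTO search_counter SELECT '$HOST', max(id) FROM table
	sql_query = SELECT 1 AS id, 'dummy' AS title
}
EOF

That way each host records its own position in the counter table.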