Remote index queries fail with a null error

I use a remote index, but the following error occurs when accessing it.

select count(*) from oqsindex;
ERROR (42000) : 1064 (null)

Sometimes queries report that a column does not exist (it does exist):

select id, pref, cref from oqsindex;
Internal error: column 'pref/pref' not found in result set schema

Not every query triggers this problem, and everything works fine if I bypass the distributed index and query the index node directly.

search node
index oqsindex
{
type = distributed
agent_persistent = manticore-index-0:9312:oqsindex
}

index node
index oqsindex
{
path = /var/lib/manticore/data/oqsindex
html_strip = 1
index_zones = F*
infix_fields = fullfields
min_infix_len = 3
ngram_chars = U+3000..U+2FA1F
ngram_len = 1
rt_attr_bigint = entity
rt_attr_bigint = pref
rt_attr_bigint = cref
rt_attr_json = jsonfields
rt_field = fullfields
rt_mem_limit = 1m
type = rt
}

Both have similar searchd configurations.
searchd
{
listen = 9312
listen = 9308:http
listen = 9306:mysql41
log = /var/log/manticore/searchd.log
query_log = /var/lib/manticore/data/query.log
binlog_path = /var/lib/manticore/data
pid_file = /var/run/manticore/searchd.pid
data_dir = /var/lib/manticore/data
access_blob_attrs = mmap_preread
access_doclists = file
access_hitlists = file
access_plain_attrs = mmap_preread
agent_connect_timeout = 3000
agent_query_timeout = 3000
agent_retry_count = 3000
agent_retry_delay = 1000
binlog_flush = 2
binlog_max_log_size = 268435456
client_timeout = 1h
dist_threads = 0
docstore_cache_size = 500m
expansion_limit = 0
grouping_in_utc = 0
listen_backlog = 5
max_batch_queries = 256
max_children = 0
max_filter_values = 16384
max_filters = 1024
max_open_files = max
max_packet_size = 32M
mysql_version_string = 5.0.37
net_throttle_accept = 0
net_throttle_action = 0
net_wait_tm = -1
net_workers = 3
persistent_connections_limit = 30
preopen_indexes = 1
qcache_max_bytes = 16777216
qcache_thresh_msec = 3000
qcache_ttl_sec = 1
query_log_format = sphinxql
query_log_min_msec = 3000
query_log_mode = 666
queue_max_length = 0
read_buffer_docs = 128k
read_buffer_hits = 128M
read_timeout = 5
rt_flush_period = 3600
rt_merge_iops = 30
rt_merge_maxiosize = 1m
seamless_rotate = 1
shutdown_timeout = 30m
sphinxql_timeout = 15m
subtree_docs_cache = 8M
subtree_hits_cache = 16M
thread_stack = 256K
unlink_old = 1
watchdog = 0
workers = thread_pool
}
Both are deployed in Kubernetes.

Am I missing something in the configuration? The error message confused me about where the problem was.

Which Manticore version are you using?

The docker image is manticoresearch/manticore:3.4.0

As a temporary workaround, the problem does not occur when I use a non-persistent connection:

agent = manticore-index-0:9312:oqsindex
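Spelled out, the workaround only swaps the agent directive on the search node; the rest of the distributed index is assumed unchanged from the original config (a sketch, same host and port as before):

```ini
index oqsindex
{
    type = distributed
    # plain agent instead of agent_persistent:
    # a fresh TCP connection is made per query
    agent = manticore-index-0:9312:oqsindex
}
```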

Seems like a bug. Can you open a GitHub ticket with this data?

I created an issue.

show agent status;

ag_0_ping is no

Is there no heartbeat when using a persistent connection?

Ping is only for HA agents / mirrors.
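So ag_0_ping = no is expected for a single agent: the ping/heartbeat machinery only kicks in when an agent line lists mirrors. A sketch of such a setup (the second hostname and the ha_strategy value are hypothetical, not from this deployment):

```ini
index oqsindex_ha
{
    type = distributed
    # two mirrors of the same index, separated by '|';
    # searchd pings mirrors to decide which one is alive
    agent = manticore-index-0:9312|manticore-index-1:9312:oqsindex
    ha_strategy = roundrobin
}
```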

workers = thread_pool
max_children = 0

Is there a problem with this configuration?
max_children should be greater than 0, right?

max_children = 0 means the default value, which causes the number of children to be set to 1.5 * cores.

workers = thread_pool
max_children = 1

I thought so too; I changed the configuration as above and now it magically seems to work.

The default is 1.5 * cores, so what happens when the container is assigned less than 1 core?
My resource limitations are as follows.

limits:
  cpu: 200m
  memory: 256Mi
requests:
  cpu: 100m
  memory: 128Mi

With max_children = 0, will 1.5 times 200m end up being set to 1?

What if the Docker container is assigned less than 1 core?
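The arithmetic behind that worry, as a sketch. Two assumptions here, neither confirmed behavior: that searchd would see the cgroup quota as its core count at all (tools like nproc inside a container usually report the host's cores, not the 200m limit), and that the 1.5 * cores result is truncated to an integer:

```shell
# 200m in Kubernetes = 0.2 CPU cores; the max_children = 0 default is
# 1.5 * cores. Truncated to an integer, that would be zero workers:
awk 'BEGIN { printf "1.5 * 0.2 cores = %d worker(s)\n", 1.5 * 0.2 }'
# prints: 1.5 * 0.2 cores = 0 worker(s)
```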

You can explicitly set max_children to any positive value; we recommend a value slightly greater than your box's core count.
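Following that advice, the fix is one line in the searchd section; the value below is illustrative (pick something slightly above the cores you actually give the pod):

```ini
searchd
{
    workers = thread_pool
    # explicit worker count; sidesteps the 1.5 * cores default, which is
    # fragile when the container gets a fractional CPU limit
    max_children = 2
}
```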

I did this, and after setting max_children = 1 the problem seems to be gone.