I am using a distributed index with a remote agent, but the following error sometimes occurs when querying it:
select count(*) from oqsindex;
ERROR (42000) : 1064 (null)
Sometimes queries report that a field does not exist, even though it does:
select id, pref, cref from oqsindex;
Internal error: column 'pref/pref' not found in result set schema
Not every query triggers this problem, and everything works fine if I bypass the distributed index and query the local index directly.
Search node:
index oqsindex
{
type = distributed
agent_persistent = manticore-index-0:9312:oqsindex
}
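To help rule out the agent link itself, Manticore's SphinxQL layer exposes per-agent counters for distributed indexes. If I have the syntax right, running something like this on the search node should show connect/query error counts for the manticore-index-0 agent (a schema-level problem would typically not show up here):

```sql
-- On the search node (MySQL protocol, port 9306):
-- per-agent status for the distributed index; connect/query errors
-- here would point at the agent connection rather than the schema.
SHOW AGENT oqsindex STATUS;
```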
Index node:
index oqsindex
{
path = /var/lib/manticore/data/oqsindex
html_strip = 1
index_zones = F*
infix_fields = fullfields
min_infix_len = 3
ngram_chars = U+3000..U+2FA1F
ngram_len = 1
rt_attr_bigint = entity
rt_attr_bigint = pref
rt_attr_bigint = cref
rt_attr_json = jsonfields
rt_field = fullfields
rt_mem_limit = 1m
type = rt
}
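As far as I understand, the "column ... not found in result set schema" error typically means the master and the agent disagree about the index schema (attribute names, types, or order). Assuming MySQL-protocol access on both nodes, one quick sanity check is to compare the schema each node reports:

```sql
-- Run on both the search node (against the distributed index) and the
-- index node (against the RT index) and diff the output; any difference
-- in attribute names, types, or order is a candidate cause.
DESCRIBE oqsindex;
```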
Both nodes use a similar searchd configuration:
searchd
{
listen = 9312
listen = 9308:http
listen = 9306:mysql41
log = /var/log/manticore/searchd.log
query_log = /var/lib/manticore/data/query.log
binlog_path = /var/lib/manticore/data
pid_file = /var/run/manticore/searchd.pid
data_dir = /var/lib/manticore/data
access_blob_attrs = mmap_preread
access_doclists = file
access_hitlists = file
access_plain_attrs = mmap_preread
agent_connect_timeout = 3000
agent_query_timeout = 3000
agent_retry_count = 3000
agent_retry_delay = 1000
binlog_flush = 2
binlog_max_log_size = 268435456
client_timeout = 1h
dist_threads = 0
docstore_cache_size = 500m
expansion_limit = 0
grouping_in_utc = 0
listen_backlog = 5
max_batch_queries = 256
max_children = 0
max_filter_values = 16384
max_filters = 1024
max_open_files = max
max_packet_size = 32M
mysql_version_string = 5.0.37
net_throttle_accept = 0
net_throttle_action = 0
net_wait_tm = -1
net_workers = 3
persistent_connections_limit = 30
preopen_indexes = 1
qcache_max_bytes = 16777216
qcache_thresh_msec = 3000
qcache_ttl_sec = 1
query_log_format = sphinxql
query_log_min_msec = 3000
query_log_mode = 666
queue_max_length = 0
read_buffer_docs = 128k
read_buffer_hits = 128M
read_timeout = 5
rt_flush_period = 3600
rt_merge_iops = 30
rt_merge_maxiosize = 1m
seamless_rotate = 1
shutdown_timeout = 30m
sphinxql_timeout = 15m
subtree_docs_cache = 8M
subtree_hits_cache = 16M
thread_stack = 256K
unlink_old = 1
watchdog = 0
workers = thread_pool
}
Both are deployed in Kubernetes.
Am I missing something in the configuration? The error messages leave me unsure where the problem actually is.