java – Primary shard is not active or isn't assigned to a known node?

I am running Elasticsearch version 4.1 on Windows 8. I am trying to index a document through Java. When I run a JUnit test, the error shown below appears.
org.elasticsearch.action.UnavailableShardsException: [wms][3] Primary shard is not active or isn't assigned is a known node. Timeout: [1m], request: index {[wms][AUpdb-bMQ3rfSDgdctGY], source[{
    "fleetNumber": "45", "timestamp": "1245657888", "geoTag": "73.0012312,-123.00909", "videoName": "timestamp.mjpeg", "content": "ASD123124NMMM"
}]}
    at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction.retryBecauseUnavailable(TransportShardReplicationOperationAction.java:784)
    at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction.doStart(TransportShardReplicationOperationAction.java:402)
    at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction$3.onTimeout(TransportShardReplicationOperationAction.java:500)
    at org.elasticsearch.cluster.ClusterStateObserver$ObserverClusterStateListener.onTimeout(ClusterStateObserver.java:239)
    at org.elasticsearch.cluster.service.InternalClusterService$NotifyTimeout.run(InternalClusterService.java:497)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:722)

I cannot figure out why this error occurs. When I delete data or an index, it works fine.
What could be the possible cause?
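
Before changing any settings, it can help to confirm whether the primary shard really was never allocated, and whether the allocator is blocked on disk space. A minimal diagnostic sketch, assuming a default single-node cluster listening on `localhost:9200` (host and port are assumptions, adjust to your setup):

```shell
# Overall cluster health: a "red" status with unassigned_shards > 0
# matches the UnavailableShardsException above
curl -XGET 'http://localhost:9200/_cluster/health?pretty'

# Per-node disk usage as seen by the shard allocator (_cat API, available since ES 1.0)
curl -XGET 'http://localhost:9200/_cat/allocation?v'
```

If `_cat/allocation` shows the node above the disk watermarks discussed in the answer below, that would explain why the primary for `[wms]` stays unassigned.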

Solution

You should have a look at this link:
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/index-modules-allocation.html

In particular, this part:

cluster.routing.allocation.disk.watermark.low controls the low watermark for disk usage. It defaults to 85%, meaning ES will not allocate new shards to nodes once they have more than 85% disk used. It can also be set to an absolute byte value (like 500mb) to prevent ES from allocating shards if less than the configured amount of space is available.

cluster.routing.allocation.disk.watermark.high controls the high watermark. It defaults to 90%, meaning ES will attempt to relocate shards to another node if the node disk usage rises above 90%. It can also be set to an absolute byte value (similar to the low watermark) to relocate shards once less than the configured amount of space is available on the node.
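
If low disk space is indeed what is blocking allocation, these thresholds can be adjusted in `elasticsearch.yml`. A minimal config sketch; the 90%/95% values here are illustrative assumptions, not recommendations:

```yaml
# elasticsearch.yml – disk-based shard allocation thresholds (illustrative values)
cluster.routing.allocation.disk.threshold_enabled: true
cluster.routing.allocation.disk.watermark.low: 90%
cluster.routing.allocation.disk.watermark.high: 95%
```

These settings are also dynamic, so they can alternatively be changed at runtime through the cluster settings update API instead of restarting the node. Freeing disk space on the node is of course the more durable fix than raising the watermarks.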
