field expansion matches too many fields

The relevant Elasticsearch settings in the minion pillar look like this:

elasticsearch:
  true_cluster: False
  replicas: 0
  discovery_nodes: 1
  hot_warm_enabled: False
  cluster_routing_allocation_disk_watermark_low: '95%'
  cluster_routing_allocation_disk_watermark_high: '98%'
  cluster_routing_allocation_disk_watermark_flood_stage: '98%'
  enabled: true
  index_settings:
    so-beats:
      index_template:
        template:
          settings:
            index:
              number_of_shards: 1
      warm: 7
      close: 30
      delete: 365
    so-endgame:
      index_template:
        template:
          settings:
            index:
              number_of_shards: 1
      warm: 7
      close: 30
      delete: 365
    so-firewall:
      index_template:
        template:
          settings:
            index:
              number_of_shards: 1
      warm: 7
      close: 30
      delete: 365
    so-flow:
      index_template:
        template:
          settings:
            index:
              number_of_shards: 1
      close: 45
      delete: 365
    so-ids:
      index_template:
        template:
          settings:
            index:
              number_of_shards: 1
      warm: 7
      close: 30
      delete: 365
    so-import:
      index_template:
        template:
          settings:
            index:
              number_of_shards: 1
      warm: 7
      close: 73000
      delete: 73001
    so-osquery:
      index_template:
        template:
          settings:
            index:
              number_of_shards: 1
      warm: 7
      close: 30
      delete: 365
    so-ossec:
      index_template:
        template:
          settings:
            index:
              number_of_shards: 1
      warm: 7
      close: 30
      delete: 365
    so-strelka:
      index_template:
        template:
          settings:
            index:
              number_of_shards: 1
      warm: 7
      close: 30
      delete: 365
    so-syslog:
      index_template:
        template:
          settings:
            index:
              number_of_shards: 1
      warm: 7
      close: 30
      delete: 365
    so-zeek:
      index_template:
        template:
          settings:
            index:
              number_of_shards: 2
  threshold_enabled: true

TIP: Avoid having very large shards, as this can negatively affect the cluster's ability to recover from failure. There is no fixed limit on how large shards can be, but a shard size of 50GB is often quoted as a limit that has been seen to work for a variety of use-cases.

TIP: Small shards result in small segments, which increases overhead. Aim to keep the average shard size between a few GB and a few tens of GB. For use-cases with time-based data, it is common to see shards between 20GB and 40GB in size.

TIP: The number of shards you can hold on a node will be proportional to the amount of heap you have available, but there is no fixed limit enforced by Elasticsearch. A good rule-of-thumb is to keep the number of shards per node below 20 to 25 per GB of heap it has configured. A node with a 30GB heap should therefore have a maximum of 600-750 shards, but the further below this limit you can keep it, the better. This will generally help the cluster stay in good health.

To see your existing shards, run the following command and the number of shards will be shown in the fifth column:

This section describes how Elasticsearch indices are deleted in standalone deployments and in distributed deployments using our default deployment method of cross cluster search. Index deletion is different for deployments using Elastic clustering; that is described in the Elastic clustering section later.

For standalone deployments and distributed deployments using cross cluster search, Elasticsearch indices are deleted based on the log_size_limit value in the minion pillar. If your open indices are using more than log_size_limit gigabytes, then Curator will delete old open indices until disk space is back under log_size_limit. If your total Elastic disk usage (both open and closed indices) is above log_size_limit, then so-curator-closed-delete will delete old closed indices until disk space is back under log_size_limit. so-curator-closed-delete does not use Curator, because Curator cannot calculate the disk space used by closed indices.

Curator and so-curator-closed-delete run on the same schedule. This might seem like there is a potential to delete open indices before deleting closed indices. However, keep in mind that Curator's delete.yml only sees disk space used by open indices, not closed indices. So if we have both open and closed indices, we may be at log_size_limit, but Curator's delete.yml will see disk usage at a value lower than log_size_limit and so it shouldn't delete any open indices. For example, suppose our log_size_limit is 1TB and we have 30 days of open indices and 300 days of closed indices.
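The shard-listing output mentioned above is columnar, so the per-index shard counts can be pulled out with a small script. This is a minimal sketch, assuming the command wraps Elasticsearch's _cat/indices API, whose default columns are health, status, index, uuid, pri (primary shards), rep, and so on; the sample lines below are illustrative, not real cluster output:

```python
# Sum primary shard counts from _cat/indices-style output.
# The sample data is hypothetical; in practice you would feed in
# the real output of the shard-listing command.

sample = """\
green open so-zeek-2021.01.01  AbC123 2 0 1000 0 1.2gb 1.2gb
green open so-beats-2021.01.01 DeF456 1 0  500 0 600mb 600mb
"""

total_primaries = 0
for line in sample.splitlines():
    cols = line.split()
    # Columns: health, status, index, uuid, pri, rep, ...
    total_primaries += int(cols[4])  # fifth column = primary shard count

print(total_primaries)  # -> 3
```

Comparing this total against the rule-of-thumb limit (20 to 25 shards per GB of configured heap) is a quick way to check whether a node is oversharded.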
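The interaction between Curator and so-curator-closed-delete described above can be modeled as two passes over the indices: Curator only sees open indices, while so-curator-closed-delete compares total usage (open plus closed) against log_size_limit. This is an illustrative sketch of that logic with hypothetical index names and sizes, not the actual implementation:

```python
# Illustrative model of the two deletion passes (sizes in GB).

def curator_pass(open_indices, log_size_limit):
    """Curator's delete.yml only sees open indices: delete oldest until under the limit."""
    open_indices = list(open_indices)  # oldest first: (name, size_gb)
    while sum(size for _, size in open_indices) > log_size_limit:
        open_indices.pop(0)  # drop the oldest open index
    return open_indices

def closed_delete_pass(open_indices, closed_indices, log_size_limit):
    """so-curator-closed-delete sees total usage (open + closed):
    delete oldest closed indices until total is back under the limit."""
    closed_indices = list(closed_indices)  # oldest first
    open_total = sum(size for _, size in open_indices)
    while closed_indices and open_total + sum(s for _, s in closed_indices) > log_size_limit:
        closed_indices.pop(0)  # drop the oldest closed index
    return closed_indices

# 30 days of open indices and 300 days of closed indices,
# with hypothetical daily sizes of 10GB open and 5GB closed.
open_idx = [(f"so-zeek-open-{d}", 10) for d in range(30)]      # 300GB open
closed_idx = [(f"so-zeek-closed-{d}", 5) for d in range(300)]  # 1500GB closed
limit = 1000  # log_size_limit of 1TB

# Curator sees only 300GB of open indices -- under the limit, so nothing is deleted.
assert curator_pass(open_idx, limit) == open_idx

# Total usage is 1800GB, so old closed indices are pruned until total <= 1000GB.
remaining_closed = closed_delete_pass(open_idx, closed_idx, limit)
print(len(remaining_closed))  # -> 140
```

This mirrors the example in the text: even at log_size_limit overall, Curator's view of disk usage stays below the limit, so open indices survive while old closed indices are removed first.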