Solved Issue with Elasticsearch
-
Use telnet from the remote machine to see if the port is open properly.
Also verify that it is listening with:
netstat -tulpn
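Something like this, substituting the real host (es-host is a placeholder; 9200 is the Elasticsearch default HTTP port):

```
# From the remote machine: does the port answer at all?
telnet es-host 9200

# On the Elasticsearch box: is it listening, and on which address?
netstat -tulpn | grep 9200
```

If netstat shows it bound to 127.0.0.1 only, remote machines will never reach it no matter what the firewall says.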
-
Figured that out. In the elasticsearch.yml file, I needed to change network.host from localhost to an IP accessible from the other servers (public or private).
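Roughly, the change looked like this (the IP is just an example, and the file path assumes a standard package install):

```
# /etc/elasticsearch/elasticsearch.yml
# was: network.host: localhost
network.host: 10.0.0.5    # an address the other servers can reach
```

Then restart the service so it rebinds (e.g. sudo systemctl restart elasticsearch, depending on the init system).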
-
@scottalanmiller said:
Use telnet from the remote machine to see if it is open properly.
Also verify that it is listening with
netstat -tulpn
Sorry, didn't see that message. It was not the firewall; it was a config setting in Elasticsearch. That's solved now. I need to watch it for a day or two to make sure that this doesn't fail.
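For anyone following along, a quick spot check while monitoring (the hostname is a placeholder; 9200 is the ES default HTTP port):

```
# Shows green/yellow/red status plus node and shard counts
curl 'http://es-host:9200/_cluster/health?pretty'
```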
-
So after 24+ hours of monitoring, Elasticsearch works fine and didn't fail! Concluding that for Elasticsearch to function correctly, use a server with a minimum of 16GB RAM and keep it dedicated only to ES.
Hardware recommendation from the ES site:
A machine with 64 GB of RAM is the ideal sweet spot, but 32 GB and 16 GB machines are also common. Less than 8 GB tends to be counterproductive.
https://www.elastic.co/guide/en/elasticsearch/guide/current/hardware.html
Closing this thread and marking it as solved. Thanks guys
-
@Ambarishrh said:
So after 24+ hours of monitoring, Elasticsearch works fine and didn't fail! Concluding that for Elasticsearch to function correctly, use a server with a minimum of 16GB RAM and keep it dedicated only to ES.
Hardware recommendation from the ES site:
A machine with 64 GB of RAM is the ideal sweet spot, but 32 GB and 16 GB machines are also common. Less than 8 GB tends to be counterproductive.
https://www.elastic.co/guide/en/elasticsearch/guide/current/hardware.html
Closing this thread and marking it as solved. Thanks guys
IMO, that is an insane amount of RAM to be required.
-
It is a lot, but in-memory, large-scale databases often do similar. We had similar numbers with things like Cassandra.
-
Ha, my ELK server has 3GB, but it's for a small number of VMs.
-
We have run ELK on 2GB pretty well. But I think that our new one is going to be more like 8GB.
-
@johnhooks said:
Ha, my ELK server has 3GB, but it's for a small number of VMs.
An ELK server is the reason I am concerned about this value. I don't have 16GB of RAM to just throw at a VM without a damned good reason.
I really want to get an ELK server set up at a couple of clients, but none of their servers have that kind of RAM unallocated.
-
@JaredBusch said:
I really want to get an ELK server set up at a couple of clients, but none of their servers have that kind of RAM unallocated.
How many machines will they monitor? We've done ~20 normal servers to a 2GB ELK server, and it worked fine. It might have been more responsive with more, but it was just fine.
The 64GB recommendation is when using Elastic as a clustered NoSQL database for other purposes where you are dealing with datasets larger than 64GB. No need for numbers like that on a normal SMB ELK install at all. You might want to look for more than 2GB, but you can do pretty well without much.
If you get to the point where the log set that you are reporting on can't fit in memory, you'll feel the lag in the interface for sure. But most SMBs aren't looking at ten-year-old logs in real time, either.
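As a rough sketch of how you'd size the memory on a small box like that (the 2g figure and the file path are assumptions; this is the mechanism the ES versions current at the time of this thread used):

```
# Packaged installs read this from /etc/default/elasticsearch (Debian/Ubuntu)
# or /etc/sysconfig/elasticsearch (RHEL/CentOS). Rule of thumb from the ES docs:
# give the heap about half the machine's RAM, leaving the rest for the OS cache.
ES_HEAP_SIZE=2g
```

On ES 5.x and later, the same setting moved to -Xms/-Xmx in jvm.options.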
-
I think, unless you have some crazy log traffic, that if you can get 4GB for ELK in an SMB, you are nearly always good. I'd expect hundreds of servers to be able to log to that, as long as you have fast disks (it still has to get to disk fast enough no matter how much memory there is).
We've had massive Splunk databases with 32GB - 64GB, but those are taking data from thousands and thousands of servers and doing so as a high availability failover cluster, so they have to ingest, index and replicate in real time.