OSSEC Log Management with Elasticsearch Vic Hargrave | vichargrave@gmail.com | @vichargrave
$ whoami • Software Architect for Trend Micro Data Analytics Group • Blogger for Trend Micro Security Intelligence and Simply Security • Email: vichargrave@gmail.com • Website: vichargrave.com • Twitter: @vichargrave • LinkedIn: www.linkedin.com/in/vichargrave
OSSEC does SIEMs [Diagram: OSSEC servers forward alerts over syslog to a commercial or open source SIEM]
Commercial SIEMs are great, but…
Now there’s a whole new (open-source) ballgame: Elasticsearch, Logstash and Kibana
Elasticsearch • Open source, distributed, full text search engine • Based on Apache Lucene • Stores data as structured JSON documents • Supports single system or multi-node clusters • Easy to set up and scale – just add more nodes • Provides a RESTful API • Installs with RPM or DEB packages and is controlled with a service script.
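For example, here is a minimal sketch of the RESTful API at work, assuming Elasticsearch is listening on localhost:9200 (the index, type and field names are illustrative, not from the deck):

# index a JSON document into a hypothetical "ossec" index
curl -XPUT 'http://localhost:9200/ossec/alert/1' -d '{"Alert_Level": 3, "Rule": 5402, "Description": "Successful sudo to ROOT executed"}'
# fetch it back by ID, or search the whole index
curl -XGET 'http://localhost:9200/ossec/alert/1?pretty'
curl -XGET 'http://localhost:9200/ossec/_search?q=Rule:5402&pretty'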
Elasticsearch Elements • Index – contains documents, ≅ table • Document – contains fields, ≅ row • Field – contains string, integer, JSON object, etc. • Shard – smaller divisions of data that can be stored across nodes • Replica – copy of the primary shard (a sketch of shard/replica settings follows)
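As an illustration of shards and replicas, an index can be created with an explicit layout; this is a sketch with example values, not a setting taken from the deck:

# create an index with 5 primary shards and 1 replica of each shard
curl -XPUT 'http://localhost:9200/ossec-alerts' -d '{
  "settings": { "number_of_shards": 5, "number_of_replicas": 1 }
}'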
Elasticsearch Multi-node Configuration
# default configuration file - /etc/elasticsearch/elasticsearch.yml

######################### Cluster #########################
# Cluster name identifies your cluster for auto-discovery
cluster.name: ossec-mgmt-cluster

########################## Node ###########################
# Node names are generated dynamically on startup, so you're relieved
# from configuring them manually. You can tie this node to a specific name:
node.name: "es-node-1"        # e.g. Elasticsearch nodes numbered 1 – N

########################## Paths ##########################
# Path to directory where to store index data allocated for this node.
path.data: /data/0, /data/1
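Once each node is started, a quick way to confirm the nodes found each other is the cluster health endpoint; a sketch assuming the default HTTP port 9200:

curl -XGET 'http://localhost:9200/_cluster/health?pretty'
# "cluster_name" should read ossec-mgmt-cluster and
# "number_of_nodes" should match the number of nodes you started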
Logstash • Log aggregator and parser • Supports transferring parsed data directly to Elasticsearch • Controlled by a configuration file that specifies input, filtering (parsing) and output • Key to adapting Elasticsearch to other log formats • Run logstash in the logstash home directory as follows: bin/logstash -f <logstash config file>
OSSEC – logstash.conf input { # stdin{} udp{ port => 9000 type => "syslog" } } filter { if [type] == "syslog" { grok { # SEE NEXT SLIDE } mutate { remove_field => [ "syslog_hostname", "syslog_message", "syslog_pid", "message", "@version", "type", "host" ] } } } output { # stdout{ # codec => rubydebug # } elasticsearch_http{ host => "10.0.0.1" } }
OSSEC Alert Parsing
• OSSEC syslog alert:
Jan 7 11:44:30 ossec ossec: Alert Level: 3; Rule: 5402 - Successful sudo to ROOT executed; Location: localhost->/var/log/secure; user: user; Jan 7 11:44:29 localhost sudo: user : TTY=pts/0 ; PWD=/home/user ; USER=root ; COMMAND=/bin/su
• grok { }:
match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_host} %{DATA:syslog_program}: Alert Level: %{NONNEGINT:Alert_Level}; Rule: %{NONNEGINT:Rule} - %{DATA:Description}; Location: %{DATA:Location}; (srcip: %{IP:Src_IP};%{SPACE})? (dstip: %{IP:Dst_IP};%{SPACE})? (src_port: %{NONNEGINT:Src_Port};%{SPACE})? (dst_port: %{NONNEGINT:Dst_Port};%{SPACE})? (user: %{USER:User};%{SPACE})?%{GREEDYDATA:Details}" }
add_field => [ "ossec_server", "%{host}" ]
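A possible smoke test for this pipeline, assuming Logstash is listening on UDP port 9000 on the local host and Elasticsearch is at 10.0.0.1 as in the sample config (Logstash writes to daily logstash-* indices by default); the message text is the sample alert above:

# send a sample OSSEC alert line to the Logstash UDP input
echo 'Jan 7 11:44:30 ossec ossec: Alert Level: 3; Rule: 5402 - Successful sudo to ROOT executed; Location: localhost->/var/log/secure; user: user;' | nc -u -w1 localhost 9000
# then search the logstash indices for the parsed event
curl -XGET 'http://10.0.0.1:9200/logstash-*/_search?q=Rule:5402&pretty'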
Kibana • General purpose query UI • JavaScript implementation • Query Elasticsearch without coding • Includes many widgets • Run Kibana in a browser as follows: http://<web server ip>:<port>/<kibana path>
Kibana – config.js
/** @scratch /configuration/config.js/5
 * ==== elasticsearch
 *
 * The URL to your elasticsearch server. You almost certainly don't
 * want +http://localhost:9200+ here. Even if Kibana and Elasticsearch
 * are on the same host. By default this will attempt to reach ES at the
 * same host you have kibana installed on. You probably want to set it to
 * the FQDN of your elasticsearch host
 */
elasticsearch: "http://<elasticsearch node IP>:9200",
Elasticsearch Cluster Management • ElasticHQ • Elasticsearch plug-in • Install from Elasticsearch home directory: bin/plugin -install royrusso/elasticsearch-HQ • Provides cluster and node management metrics and controls
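The numbers ElasticHQ charts come from Elasticsearch's own REST endpoints, so they can also be pulled directly; a sketch assuming the default port (the _plugin/HQ path is the usual location for site plugins, verify it for your install):

# browse to the plugin UI, typically http://<node>:9200/_plugin/HQ/
# or query the underlying stats endpoints yourself
curl -XGET 'http://localhost:9200/_cluster/stats?pretty'
curl -XGET 'http://localhost:9200/_nodes/stats?pretty'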
And now for something completely different. The OSSEC virtual appliance
Back to Reality: Free
Elasticsearch Security Caveats • Designed to work in a trusted environment • No built-in security • Easy to erase all the data: curl -XDELETE http://<server>:9200/_all • Use with a proxy that provides authentication and request filtering, such as Nginx (see the sketch below) • http://wiki.nginx.org/Main
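A minimal sketch of the suggested Nginx front end, assuming Elasticsearch runs on localhost:9200 and an htpasswd file already exists; this is illustrative, not a configuration from the deck:

# /etc/nginx/conf.d/elasticsearch.conf - authenticating proxy in front of Elasticsearch
server {
    listen 8080;

    location / {
        auth_basic           "Elasticsearch";
        auth_basic_user_file /etc/nginx/htpasswd;   # create with the htpasswd utility

        # refuse destructive requests such as the _all DELETE shown above
        if ($request_method = DELETE) {
            return 403;
        }

        proxy_pass http://localhost:9200;
    }
}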
Further Information • Elasticsearch • http://www.elasticsearch.org • Logstash • http://logstash.net • Kibana • http://www.elasticsearch.org/overview/kibana/ • ElasticHQ • http://elastichq.org • Elasticsearch for Logging • http://vichargrave.com/ossec-log-management-with-elasticsearch/ • http://edgeofsanity.net/article/2012/12/26/elasticsearch-for-logging.html
Thanks for attending! Any questions?