First I needed a firewall, then I needed Monitoring!
Setting up the syslog-ng server:
sudo apt install syslog-ng
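On Debian/Ubuntu the package usually starts the service automatically, but it's worth confirming the version and that the service is enabled and running:
# Check the installed version and service state
syslog-ng --version
sudo systemctl enable --now syslog-ng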
Configuration
I kept all the boilerplate configuration, including the default s_src source, but made a few tweaks and added the following:
# Kept the same
source s_src {
    system();
    internal();
};
# Added this source for remote servers/appliances
source s_net { tcp(ip(0.0.0.0) port(514) max-connections(5000)); udp(); };
# Added this destination for the server
destination d_syslog { file("/var/log/remotelogs/$HOST/syslog" owner("logstash") group("logstash") perm(0600) create_dirs(yes) dir_perm(0770)); };
# Added this destination for Elasticsearch
destination d_elasticsearch_http {
    elasticsearch-http(
        index("syslog-ng")
        type("")
        url("http://localhost:9200/_bulk")
        template("$(format-json --scope rfc5424 --scope dot-nv-pairs
            --rekey .* --shift 1 --scope nv-pairs
            --exclude DATE --key ISODATE @timestamp=${ISODATE})")
    );
};
# Added these to capture the logs
log { source(s_net); destination(d_syslog); };
log { source(s_net); destination(d_elasticsearch_http); };
With this in place, syslog-ng writes the remote logs to disk and ships them to Elasticsearch.
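After making changes like these, it helps to validate the config and restart the service; a minimal check looks like:
# Parse the configuration without starting the daemon
sudo syslog-ng --syntax-only
# Apply the changes
sudo systemctl restart syslog-ng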
Logrotate (Incorrect and Correct Versions!)
Straight from the manpage, logrotate is designed to ease administration of systems that generate large numbers of log files. It allows automatic rotation, compression, removal, and mailing of log files. Each log file may be handled daily, weekly, monthly, or when it grows too large.
Insecure (and, as it turned out, incorrect) log rotation, set up via:
vim /etc/logrotate.d/remote
/var/log/remotelogs/*/*
{
    create 0755 root adm
    rotate 90
    daily
    missingok
    compress
    su root adm
}
This will rotate the logs daily and keep 90 days' worth. To test the config, use the following (the -d flag runs logrotate in debug mode, so nothing is actually rotated):
logrotate -d --force /etc/logrotate.d/remote
… so as a bit of an update, the above configuration was incorrect. It was exhausting all the inodes on my logging/monitoring server and causing massive issues with the disk: once the inodes ran out, every application would grind to a halt and I'd have to log on and delete the excess files by hand.
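For what it's worth, this kind of inode exhaustion is easy to confirm with a couple of quick commands:
# Show inode usage per filesystem
df -i
# Count how many files the remote log directory has accumulated
find /var/log/remotelogs -type f | wc -l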
The fix was simple. The new logrotate config, located at /etc/logrotate.d/remote, looked like the following:
/var/log/remotelogs/*.log
{
    create 0755 root adm
    rotate 3
    daily
    dateformat -%d%m%Y
    # note: with 'size' set, rotation is triggered by file size rather than the daily schedule
    size 500M
    missingok
    compress
    su root adm
}
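To sanity-check the new config, one option is to force a rotation and look at what actually ends up on disk:
# Force an immediate rotation against the new config
sudo logrotate --force /etc/logrotate.d/remote
ls -lh /var/log/remotelogs/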
I also updated /etc/syslog-ng/syslog-ng.conf to look like:
# Remote Destination
#
#destination d_syslog { file("/var/log/remotelogs/${HOST}/${YEAR}_${MONTH}_${DAY}.log" owner("logstash") group("logstash") perm(0600) create_dirs(yes) dir_perm(0770)); };
destination d_syslog { file("/var/log/remotelogs/${HOST}.log" owner("logstash") group("logstash") perm(0770) create_dirs(yes) dir_perm(0770)); };
Some of the common commands I used to check folder sizes and to look for failures are the following:
# Get size of files
# Modify --block-size with 1/1M/1G for bytes/megabytes/gigabytes
du -csh --block-size=1 ./*
# Get failures, except for a known IP/host
sudo egrep "Failed|Failure" ./* | egrep -v "10.0.1.45|hlvmg1"
Permissions
The one thing I had to allow was read access to the log files under /var/log/remotelogs/remote.ip.addr.here:
chmod 644 -R /var/log/remotelogs/*/*
This needs to be set to allow logstash to read the file on the host.
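To double-check that every directory in the path is readable and traversable by a non-root reader, namei can walk the whole path:
# Show the permissions of each path component
namei -l /var/log/remotelogs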
I also modified the docker-compose.yml file for logstash to add a bind mount:
- type: bind
  source: /var/log/remotelogs
  target: /var/log/remotelogs
  read_only: true
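To confirm the container actually sees the mounted logs, something like this works (assuming the service is named logstash in the compose file):
# List the mounted log files from inside the logstash container
docker compose exec logstash ls -l /var/log/remotelogs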
Setting up Clients
For most clients this is done in /etc/rsyslog.conf by adding the following at the end, which sends logs over to the remote server on port 514:
*.* @10.0.0.X:514
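A single @ forwards over UDP; @@ would use TCP, which the s_net source above also listens for. After editing the file, restart rsyslog on the client and send a quick test message:
# Pick up the new forwarding rule
sudo systemctl restart rsyslog
# Generate a test entry that should show up on the remote server
logger "syslog forwarding test"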
Setting up Docker to go to syslog
Create the daemon.json file.
sudo touch /etc/docker/daemon.json
And in the file put:
{
"log-driver": "syslog"
}
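Changes to daemon.json only take effect once the Docker daemon is restarted, and only for containers created afterwards:
sudo systemctl restart docker
# Confirm the active logging driver
docker info --format '{{.LoggingDriver}}'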