Centralized-Logging

What is Centralized Logging?

Centralized logging is the practice of gathering the system logs from a group of computers into a single, central location. All network logs are stored on a centralized server, which simplifies backup and retrieval and lets administrators review the logs of every system on a regular basis. It enables efficient monitoring of system logs at the frequency required to detect security violations and unusual activity.

Centralized logging offers the following:

  • A trail can still be reviewed even if a client machine is compromised
  • It provides a central place to run log-checking scripts
  • It is highly secure, with no other services running on the log server
  • Packets are filtered/firewalled so that only approved machines can send logs
  • Logs can be sent to any email address for daily analysis
  • It has suitable backup and restoration capability
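The "central place to run log checking scripts" benefit can be illustrated with a minimal sketch; the patterns and sample lines below are hypothetical, standing in for whatever the administrator actually scans for:

```python
import re

# Hypothetical patterns a central log-checking script might scan for.
SUSPICIOUS = [
    re.compile(r"Failed password"),      # repeated SSH login failures
    re.compile(r"segfault"),             # crashing services
    re.compile(r"POSSIBLE BREAK-IN"),    # sshd reverse-DNS warning
]

def scan_lines(lines):
    """Return every log line that matches a suspicious pattern."""
    return [ln for ln in lines if any(p.search(ln) for p in SUSPICIOUS)]

sample = [
    "Oct  1 10:00:01 web01 sshd[412]: Failed password for root from 10.0.0.9",
    "Oct  1 10:00:05 web01 CRON[500]: session opened for user backup",
]
hits = scan_lines(sample)
```

Because every system's logs land in one place, a single script like this covers the whole network instead of being copied to each machine.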

Security event management (SEM) systems provide solutions for collecting, processing, and storing security events. Dedicated SEM servers centralize these functions so that security events are managed from one place.

Using a centralized system has many benefits, such as centralized backups, and it also aids in detecting and analyzing events.


Servers are maintained at two levels:
  1. Local SEM server
  2. Master SEM server

The local SEM server collects, processes, and queues all the events and forwards further tasks to the master SEM.

The master SEM server performs the subsequent functions of processing and storing the security events for analysis, reporting, and display. The master SEM usually requires a highly configured system because it needs a large storage capacity.

Depending on the storage space available on the central SEM server, security events are stored for periods ranging from a few weeks to months.

Syslog

Syslog is a de facto standard for logging system events. It is a client/server protocol used for forwarding log messages across an IP network to the syslog receiver. This syslog receiver is also called the syslog server, syslog daemon, or syslogd. The term "syslog" refers to both the syslog protocol and the application or library that sends syslog messages. In general, syslog is used to manage and monitor computer systems and for security auditing.

Syslog uses either TCP or UDP to transfer messages, and log messages are sent in clear text. Syslog is supported by a wide range of devices and receivers across multiple platforms.
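The clear-text, UDP-based flow can be demonstrated end to end with Python's standard library. This is a self-contained sketch: a plain UDP socket on the loopback interface stands in for the central syslog server (real deployments listen on UDP/514):

```python
import logging
import logging.handlers
import socket

# Stand-in for a central syslog server: a plain UDP socket on loopback.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))          # 0 = let the OS pick a free port
port = receiver.getsockname()[1]

# Client side: route Python logging through the syslog protocol over UDP.
handler = logging.handlers.SysLogHandler(address=("127.0.0.1", port))
log = logging.getLogger("demo")
log.addHandler(handler)
log.warning("disk usage above threshold")

# The message arrives as clear text, prefixed with a <PRI> priority value.
datagram, _ = receiver.recvfrom(1024)
receiver.close()
handler.close()
```

Inspecting `datagram` shows the message readable on the wire, which is exactly why production syslog traffic should be confined to a protected network segment.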

Syslog in Unix-like Systems

Source: http://www.cs.umsl.edu

Syslog is a comprehensive logging system that manages information generated by the kernel and system utilities; on UNIX-like systems it is the heart of Linux logging. The syslog function sends messages to the system logger, which is controlled through the configuration file /etc/syslog.conf. It sorts messages according to their sources and routes them to various destinations, thereby performing several functions, such as the following:

  • Sends a message to syslogd
  • Logs it in a suitable system log
  • Writes it to the system console
  • Forwards it to a list of users
The syslog facility is based on two key elements:
  • /etc/syslogd (the daemon)
  • the /etc/syslog.conf configuration file

There are three parts of syslog:

1. syslogd

  • Logging daemon, along with its config file /etc/syslog.conf
  • Starts at boot time and runs continuously
  • Reads and forwards system messages to appropriate log files and/or users
  • Programs write entries to /dev/log or /var/run/log, which can be a socket, a named pipe, or a STREAMS module
  • On Solaris, the STREAMS log driver is /dev/log
  • Syslogd reads messages from the log device, consults its configuration file, and dispatches each message to the appropriate destination
  • Logs a mark (timestamp) message every 20 minutes at priority LOG_INFO to the facility identified as "mark" in the syslog.conf file
  • On some systems, syslogd may also read kernel messages from the device /dev/klog
  • Writes its process ID to the file /etc/syslog.pid:

    Makes it easy to send signals to syslogd from a script
    Restart syslogd with:
    kill -HUP `cat /etc/syslog.pid`
    Compressing or rotating a log file opened by syslogd has unpredictable results

  • Controlled by the file /etc/syslog.conf
    – Uses the format:
    • selector<TAB>action
    • Example: user.err<TAB>/var/adm/messages
    • Syslogd produces timestamp messages, which are logged if the facility "mark" appears in syslog.conf with a destination specified for it
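An illustrative /etc/syslog.conf fragment in the selector<TAB>action format (facility.level selectors on the left, destinations on the right; entries are examples, not a recommended policy):

```
# selector        action
user.err          /var/adm/messages     # user-facility errors to a file
auth.notice       root                  # auth notices to a logged-in user
mail.*            /var/log/maillog      # all mail messages to a file
*.emerg           *                     # emergencies to every logged-in user
```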

2. openlog:

Initializes logging using the specified facility name

3. logger:

Adds entries to system log
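The facility.level selectors used throughout syslog map onto a single numeric priority value: per RFC 3164, the `<PRI>` value at the start of each message is facility × 8 + severity. A small illustration (the code tables are a subset of the standard numeric codes):

```python
# RFC 3164 numeric codes (a subset, for illustration).
FACILITY = {"kern": 0, "user": 1, "mail": 2, "auth": 4, "syslog": 5}
SEVERITY = {"emerg": 0, "alert": 1, "crit": 2, "err": 3,
            "warning": 4, "notice": 5, "info": 6, "debug": 7}

def pri(facility, severity):
    """Compute the <PRI> value sent at the start of a syslog message."""
    return FACILITY[facility] * 8 + SEVERITY[severity]

# user.err -> 1 * 8 + 3 = 11; auth.crit -> 4 * 8 + 2 = 34
```

A receiver reverses the calculation (facility = pri // 8, severity = pri % 8) to decide which syslog.conf rule applies.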


Advantages of Centralized Syslog Server:

In a centralized syslog setup, a common server receives the syslog messages and logs from all computer systems connected to the network: UNIX servers, Windows servers, and network devices such as routers, switches, hubs, and firewalls.

There are many advantages of centralized syslogging, as follows:
  • The central syslog server is kept on a separate network segment, which secures the stored logs.
  • A hacker will find it difficult to delete the logs.
  • Log messages allow correlation of attacks across different platforms.
  • It has an easy backup policy.
  • Tools such as Swatch generate real-time alerts, which help to continuously monitor the log files.
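The kind of real-time alerting Swatch performs can be sketched in a few lines. This is not Swatch itself, just the core idea: pattern rules applied to incoming log lines, firing a callback on a match (a real tool would tail the file continuously and email or page an administrator):

```python
import re

def watch(lines, rules, alert):
    """Apply Swatch-style pattern rules to incoming log lines.

    rules: list of (compiled_pattern, label) pairs.
    alert: callback invoked as alert(label, line) on each match.
    """
    for line in lines:
        for pattern, label in rules:
            if pattern.search(line):
                alert(label, line)

alerts = []
rules = [(re.compile(r"Failed password"), "auth-failure")]
watch(["sshd[99]: Failed password for admin from 203.0.113.7"],
      rules, lambda label, line: alerts.append((label, line)))
```

Running this kind of watcher on the central server means one rule set monitors every machine's logs at once.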

IIS Centralized Binary Logging

IIS centralized binary logging is a process in which all websites on a server write binary, unformatted log data to a single log file. In contrast, other IIS logging processes generate a separate log file for each website.

When IIS hosts multiple websites, the process of creating many formatted log files and writing the log data to a hard disk consumes CPU and memory resources and thus creates a performance problem. IIS centralized binary logging reduces the system resources used for logging while still providing complete log data for organizations that need it.

It is a server property; therefore, all websites present on the server send log data to the central log file, which has an Internet binary log (.ibl) file name extension. This logging mode is useful when multiple websites are hosted on the same server. By configuring it, the network administrator can monitor network activity and also focus on increasing the number of websites the server can host. It reduces the administrative burden for Internet Service Providers (ISPs) and facilitates the gathering and securing of the logged data. For example, if an ISP has four servers with 5,000 websites per server, configuring IIS centralized binary logging reduces the logging load from 20,000 per-site log files per day to just four, one per server running IIS.

Ensure System’s Integrity

System integrity means the integrity of the data, applications, and software on the system, and ensuring it means preserving and protecting the integrity of the entire system. This is very important because the system holds all the logs pertaining to an intrusion; if integrity is not maintained, the court will not accept the logs as evidence. To achieve system integrity, one has to follow the guidelines described in the sections that follow.

Control Access to Logs

Access control means granting privileges to a user, member of personnel, or investigator. After a log file is created, it is very important to control access to and auditing of the file for both authorized and unauthorized users. If a log file is properly audited and secured using NTFS permissions, it can serve as documented evidence whose credibility can be established. A log file requires certain permissions so that the web server can write to it while it is open; once the log file is closed, no resource should have permission to modify its contents.
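The "writable while open, immutable once closed" requirement can be approximated on a POSIX system by dropping write permission as soon as the log is closed (NTFS ACLs are the Windows equivalent; the temporary file here is only illustrative):

```python
import os
import stat
import tempfile

# Write the log, then close it.
fd, path = tempfile.mkstemp(suffix=".log")
with os.fdopen(fd, "w") as f:
    f.write("2024-01-01 00:00:00 GET /index.html 200\n")

# Once closed, remove write permission so ordinary processes cannot modify it.
os.chmod(path, stat.S_IRUSR | stat.S_IRGRP | stat.S_IROTH)  # 0o444, read-only
mode = stat.S_IMODE(os.stat(path).st_mode)
```

Note that file permissions alone do not stop a root-level attacker; they complement, rather than replace, moving logs off the host.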

Ensure Log File Authenticity

A log file authenticity check is necessary for court proceedings:
  • Log files are authentic only if the investigators can prove that their integrity has been preserved from the moment collection began.
  • Log files are generally easy to alter, as they are simple text files; even the file date and time stamps are easy to modify.
In this default state, log files can be proved authentic by following a few tips:
  • Move the logs: you must consider the log files compromised if the server itself has been compromised.
  • Move the logs to a master server and then to secondary storage media, such as a DVD or disk.

Use Signatures, Encryption, and Checksums

A signature is the sender’s identity sent along with the data. The signature can be integrated with any message, file, or other digitally encoded information, or transmitted separately depending on how a given application functions. A signature is used in public key environments and provides authentication and integrity services.

Encryption is the process of transforming information using an algorithm to make it unreadable to third parties, whereas authorized users can decrypt and read the information using the key. To ensure that log file integrity is maintained, the log can be encrypted using any public-key encryption scheme. A checksum is an error-detection scheme in which every transmitted message carries a numerical hash value computed from the bits of the message.

Signatures, encryption, and checksums help to prove log authenticity:
  • Use a file signature to make the log file more secure
  • To generate MD5 hashes for the files, use the FSUM tool
  • Store the signature and hashes along with the log
  • Store a secure copy in a separate, safe location
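Where FSUM is unavailable, the hash-and-store step can be sketched with Python's standard hashlib; the sample file name and contents below are illustrative:

```python
import hashlib

def file_digest(path, algorithm="md5"):
    """Return the hex digest of a file, read in chunks to bound memory use."""
    h = hashlib.new(algorithm)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Create a small sample "log" and fingerprint it.
with open("sample.log", "wb") as f:
    f.write(b"Oct  1 10:00:01 web01 sshd[412]: session opened\n")

digest = file_digest("sample.log")
# Store `digest` alongside the log, plus a copy in a separate, safe location.
```

Recomputing the digest later and comparing it with the stored value reveals any change to the file, however small.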

FSUM

Source: http://www.slavasoft.com

FSUM is a command-line utility for file integrity verification. It offers a choice of 13 hash and checksum functions for calculating file message digests and checksums.

Work With Copies

Original log files are necessary for court proceedings; therefore, an investigator must make copies of the original log files in order to safeguard the originals and produce them in court.

Guidelines for an investigator while performing a log analysis include the following:
  • Never perform log file analysis on the original files; use only copies for this purpose.
  • Make copies before performing any log file analysis or post-processing.
  • Untouched, original log files are necessary to establish the authenticity of the logs in a security incident.
  • If log files are used as court evidence, the originals must be presented in their original form.
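The copy-then-verify discipline behind these guidelines can be sketched as follows: hash the original, copy it, and confirm the copy's hash matches before analyzing only the copy (file names and contents are illustrative):

```python
import hashlib
import shutil

def md5_of(path):
    """Hex MD5 digest of a file's contents."""
    with open(path, "rb") as f:
        return hashlib.md5(f.read()).hexdigest()

# Create an "original" evidence log.
with open("original.log", "wb") as f:
    f.write(b"Oct  1 10:00:01 web01 su: 'su root' succeeded for admin\n")

before = md5_of("original.log")
shutil.copy2("original.log", "working_copy.log")   # copy2 preserves timestamps

# Verify the copy is bit-identical, then analyze only the copy.
assert md5_of("working_copy.log") == before
```

Recording `before` in the case notes lets the investigator later demonstrate that the untouched original still matches the fingerprint taken at collection time.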

Questions related to this topic

  1. How do I check file logs on a server?
  2. How do I view file audit logs?
  3. What logs should be sent to a SIEM?
  4. What is a Web server log file?
  5. What is Centralized Logging?

This Blog Article is posted by

Infosavvy, 2nd Floor, Sai Niketan, Chandavalkar Road Opp. Gora Gandhi Hotel, Above Jumbo King, beside Speakwell Institute, Borivali West, Mumbai, Maharashtra 400092

Contact us – www.info-savvy.com

