
Understand Log File Accuracy

During a forensic investigation, log files provide a valuable source of evidence. Because these files may be presented as evidence in court, investigators must ensure that they are accurate.

If investigators do not follow certain guidelines while collecting and preserving log files, the files will not be acceptable as valid evidence in court. Therefore, investigators should follow the steps described below to maintain log file accuracy.

Log Everything

Configure the web server to log all available fields. This helps investigations, because every field reveals something about the activity taking place on the system, and you cannot predict in advance which field will provide important information or become evidence.

Logging every possible server activity is a wise decision. For instance, a victim could claim that an intruder accessed his or her computer, installed a backdoor proxy server, and then attacked other systems through that proxy. In this case, a complete record of server activity may help investigators identify the origin of the traffic and the perpetrator of the crime.
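To make this concrete, the following Python sketch parses one entry from an IIS W3C extended log. The field order and sample values here are illustrative assumptions; a real log declares its enabled fields in a #Fields directive, so a production parser should read that directive rather than hard-coding the list.

    # Minimal sketch: parsing one W3C extended log entry (illustrative field order).
    FIELDS = ("date time c-ip cs-username s-ip s-port cs-method "
              "cs-uri-stem cs-uri-query sc-status sc-bytes cs-bytes "
              "time-taken cs(User-Agent)").split()

    def parse_w3c_line(line):
        """Map each whitespace-separated value onto its field name."""
        return dict(zip(FIELDS, line.split()))

    entry = parse_w3c_line(
        "2023-04-01 10:15:32 203.0.113.7 - 198.51.100.2 80 GET "
        "/default.htm - 200 1325 412 15 Mozilla/5.0")
    print(entry["c-ip"], entry["cs-method"], entry["sc-status"])

Because every field is captured, even seemingly minor values such as cs(User-Agent) or time-taken remain available if they later turn out to matter.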


Keeping Time

The time factor is important in a log presented as evidence; therefore, proper and accurate timekeeping is a prerequisite for forensic investigation. It is advisable to synchronize IIS servers with an external time source using the Windows Time service. Within a domain, the Windows Time service synchronizes member machines with the domain controller automatically. A standalone server can synchronize with an external source through registry entries set in the following manner:

Key: HKLM\SYSTEM\CurrentControlSet\Services\W32Time\Parameters\

Value name: Type

Type: REG_SZ

Value data: NTP

Key: HKLM\SYSTEM\CurrentControlSet\Services\W32Time\Parameters\

Value name: NtpServer

Type: REG_SZ

Value data: tock.usno.navy.mil

Key: HKLM\SYSTEM\CurrentControlSet\Services\W32Time\Parameters\

Value name: Period

Type: REG_SZ

Value data: 24
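As an illustration, the same three values can be set programmatically with Python's built-in winreg module (Windows only; administrative privileges required). This is only a sketch of the registry entries listed above, not a replacement for the Windows Time service's own configuration tools.

    # Sketch: writing the W32Time registry values listed above via winreg.
    import winreg

    KEY_PATH = r"SYSTEM\CurrentControlSet\Services\W32Time\Parameters"

    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                        winreg.KEY_SET_VALUE) as key:
        winreg.SetValueEx(key, "Type", 0, winreg.REG_SZ, "NTP")
        winreg.SetValueEx(key, "NtpServer", 0, winreg.REG_SZ,
                          "tock.usno.navy.mil")
        winreg.SetValueEx(key, "Period", 0, winreg.REG_SZ, "24")

After changing these values, the Windows Time service typically needs to be restarted for them to take effect.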

Why Synchronize Computer Times?

The most important function of a computer security system is to regularly identify, examine, and analyze the primary system log files, as well as the log files generated by intrusion detection systems and firewalls.

Problems faced by users and organizations when computer times are not synchronized include the following:

If computers display different times, it is difficult to match actions correctly across them. For example, consider a chat conversation over any messenger between two systems whose clocks differ: because the clocks disagree, the two logs record different times for the same messages, and an observer comparing the log files of both systems would have difficulty reconstructing the conversation.

If the computers in the internal (organization) network are synchronized with one another but set to the wrong time, a user or investigator may have difficulty correlating logged activities with outside events, such as when tracing intrusions.

Even on a single system, applications can leave the user puzzled when the time jumps backward or forward. For example, an investigator cannot reliably establish the timing of events in database systems that support services such as e-commerce transactions or crash recovery.
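As a simple illustration of the problem, the Python sketch below shifts timestamps from one system onto another system's timeline using a known clock offset. The 127-second skew is a hypothetical value; in practice it would be measured against a common reference such as NTP.

    from datetime import datetime, timedelta

    # Hypothetical: system B's clock runs 127 seconds ahead of system A's,
    # so timestamps from B must be shifted back to land on A's timeline.
    SKEW = timedelta(seconds=-127)

    def normalize(ts_b):
        """Shift a timestamp recorded on system B onto system A's timeline."""
        return ts_b + SKEW

    event_on_b = datetime(2023, 4, 1, 10, 17, 59)
    print("Equivalent time on system A:", normalize(event_on_b))

Without such a correction, events from the two systems cannot be placed on a single, reliable timeline.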


What is Network Time Protocol (NTP)?

Network Time Protocol (NTP) is a protocol for synchronizing the clocks of computers connected to a network. NTP is a standard Internet protocol (part of the TCP/IP suite) that can keep the clocks in a computer network synchronized to within milliseconds. It runs over packet-switched, variable-latency data networks and uses UDP port 123 for transport.

NTP synchronizes not only the clock of a particular system but also the clocks of client workstations. It runs as a continuous background process on a computer, sending periodic time requests to a server, receiving the server's timestamps, and adjusting the client computer's clock accordingly.
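The sketch below shows the essence of this exchange: a minimal SNTP-style client (SNTP is the simplified form of NTP) that sends a 48-byte request over UDP port 123 and decodes the server's transmit timestamp. The server name is a placeholder, and a production client would use a full NTP implementation rather than this bare query.

    import socket
    import struct
    import time

    NTP_SERVER = "pool.ntp.org"   # placeholder; any reachable NTP server works
    NTP_EPOCH_OFFSET = 2208988800  # seconds between 1900 (NTP) and 1970 (Unix)

    # Minimal client request: LI=0, VN=3, Mode=3 (client) -> first byte 0x1B.
    request = b"\x1b" + 47 * b"\0"

    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(5)
        s.sendto(request, (NTP_SERVER, 123))
        data, _ = s.recvfrom(48)

    # The transmit timestamp's seconds field occupies bytes 40-43 of the reply.
    transmit_secs = struct.unpack("!I", data[40:44])[0]
    print("Server time:", time.ctime(transmit_secs - NTP_EPOCH_OFFSET))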

The features of this protocol are listed below:
  • It uses Coordinated Universal Time (UTC) as its reference time.
  • It chooses the most appropriate time when there are many time sources.
  • It avoids errors by ignoring time sources that appear inaccurate.
  • It is highly accessible.
  • It represents time with a resolution of 2^-32 seconds (a fraction of a nanosecond), which helps it choose the most accurate reference time.
  • It uses measurements from earlier exchanges to estimate the current time and error, even if the network is unavailable.
  • When the network is unavailable, it can estimate the reference time by extrapolating from previous readings.

Implement Log Management

Logs are records of the events taking place in an organization's systems and network. A log is a collection of entries, and each entry contains detailed information about an event that occurred within the network. With the growth in workstations, network servers, and other computing services, the volume and variety of logs have also increased. Log management is necessary to handle this growth.

A log management infrastructure consists of the hardware, software, networks, and media used to generate, transmit, store, analyze, and dispose of log data.

A log management infrastructure typically comprises the following three tiers:

Log Generation: The first tier of the log management infrastructure includes the hosts that generate log data. Log data can be made available to the log servers in the second tier in two ways. First, a host can run a log client service or application that pushes its log data to the log server over the network. Second, a host can allow the log servers to authenticate to it and retrieve copies of its log files.

Log Analysis and Storage: The second tier consists of log servers that receive log data or copies of log data from the hosts. Log data are transferred from the hosts to the servers in real time or near real time, or in batches based on a schedule or on the amount of log data waiting. The data are stored on the log servers themselves or on separate database servers.

Log Monitoring: The third tier of the log management infrastructure contains consoles that monitor and review log data. These consoles also review the results of the automated analysis and generate reports.
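As a small example of the first tier handing data to the second, the following Python sketch forwards a host's log entries to a central log server using the standard library's syslog handler. The server name is a placeholder for whatever second-tier collector the organization runs.

    import logging
    import logging.handlers

    # First tier: this host generates entries and ships them to the log server.
    logger = logging.getLogger("webserver")
    logger.setLevel(logging.INFO)
    handler = logging.handlers.SysLogHandler(
        address=("logserver.example.com", 514))  # placeholder second-tier host
    logger.addHandler(handler)

    logger.info("GET /index.html 200 203.0.113.7")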

Functions of Log Management Infrastructure

A log management system performs the following functions:
  • Log parsing: Log parsing refers to extracting data from a log so that the parsed values can be used as input for another logging process.
  • Event filtering: Event filtering is the suppression of log entries from analysis, reporting, or long-term storage because their characteristics indicate that they are unlikely to contain information of interest.
  • Event aggregation: Event aggregation is the process where similar entries are consolidated into a single entry containing a count of the number of occurrences of the event.
  • Log rotation: Log rotation closes a log file and opens a new log file on completion of the first file. It is performed according to a schedule (e.g., hourly, daily, weekly) or when a log file reaches a certain size.
  • Log archival and retention: Log archival refers to retaining logs for an extended time period, typically on removable media, a storage area network (SAN), or a specialized log archival appliance or server. Investigators need to preserve logs to meet legal and/or regulatory requirements. Log retention is archiving logs on a regular basis as part of standard operational activities.
  • Log compression: Log compression is the process of storing a log file in a way that reduces the amount of storage space needed for the file without altering the meaning of its contents. It is often performed when logs are rotated or archived.
  • Log reduction: Log reduction is removing unneeded entries from a log to create a new log that is smaller. A similar process is event reduction, which removes unneeded data fields from all log entries.
  • Log conversion: Log conversion is parsing a log in one format and storing its entries in a second format. For example, conversion could take data from a log stored in a database and save it in an XML format in a text file.
  • Log normalization: In log normalization, each log data field is converted to a particular data representation and categorized consistently. One of the most common uses of normalization is storing dates and times in a single format.
  • Log file integrity checking: Log file integrity checking involves calculating a message digest for each file and storing the digest securely, so that changes to archived logs can be detected (see the sketch after this list).
  • Event correlation: Event correlation is determining relationships between two or more log entries. The most common form of event correlation is rule-based correlation, which matches multiple log entries from a single source or multiple sources based on the logged values, such as timestamps, IP addresses, and event types.
  • Log viewing: Log viewing displays log entries in a human-readable format. Most log generators offer some sort of log viewing capability; third-party log viewing utilities are also available. Some log viewers provide filtering and aggregation capabilities.
  • Log reporting: Log reporting is displaying the results of log analysis. It is often performed to summarize significant activity over a particular period of time or to record the detailed information related to a particular event or series of events.
  • Log clearing: Log clearing removes all entries from a log that precede a certain date and time. It is often performed to remove old log data that is no longer needed on a system, either because it is not of importance or because it has been archived.
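To illustrate the integrity-checking function mentioned above, the following Python sketch computes a message digest for a log file. Storing the digest securely (for example, on write-once media) lets an investigator detect any later modification by recomputing and comparing it; the file name is a placeholder.

    import hashlib

    def log_file_digest(path, algorithm="sha256"):
        """Compute a message digest for a log file in fixed-size chunks."""
        digest = hashlib.new(algorithm)
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                digest.update(chunk)
        return digest.hexdigest()

    print(log_file_digest("access.log"))  # "access.log" is a placeholder path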

Challenges in Log Management

There are three major concerns regarding log management: the first is the creation and storage of logs, the second is protecting the logs, and the third is analyzing the logs. These are the most important log management challenges that affect a forensic investigation.

  • Log creation and storage: Systems generate huge volumes of logs from many resources, and storing logs of that size is challenging. Log format is another complication, since different devices and log-monitoring systems produce logs in different formats, making them difficult to manage.
  • Log protection: Log protection and availability are the foremost considerations in an investigation, since the evidence logs hold is very valuable. If investigators do not handle logs properly during forensic examination, the files lose their integrity and become invalid as evidence.
  • Log analysis: Logs are very important in an investigation; therefore, proper log analysis is necessary. However, administrators often treat log analysis as a low-priority task and perform it only after an incident occurs. As a result, there is a shortage of tools and skilled professionals for log analysis, which is a major drawback.

Questions related to this topic

  1. How do I read Web server logs?
  2. What is a Web server log file?
  3. What are the 3 types of logs available through the event viewer?
  4. How do I view Windows log files?
  5. How to Understand Log File Accuracy?
