
Calculate the checksums of your assets and store them in a safe location to be continually checked against. Since the checksum is the fingerprint of the actual contents of a file, any unauthorized deviation from the fingerprint's stored value means the file has been tampered with.

Define appropriate writing permissions for the files and directories in your asset list. This way, you can detect if an unauthorized user modifies these files. Especially critical is the fact that in containers, when you have writing permissions on a volume exposed from the host, you can create a symlink to another directory or file on the host, a first step toward escaping the container.

At this point, it is important to differentiate a regular file write from file destruction, since many of the files in your inventory will need to be routinely appended to, but are not expected to be truncated or deleted. Examples of these files would be system, shell, or application log files. Each file or directory needs to be associated with a modification policy. For example, files under /bin or /sbin should not be modified, new files should not be created under these directories at all, and mysql.log is expected to grow but never decrease in size.

Using the information from your asset inventory (the list of files and directories, along with their permissions and checksum information), monitor your system for any deviations. You must also take into consideration assets living in your cloud infrastructure, like log files stored in an S3 bucket if you are using AWS.
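As a minimal sketch of the baseline-and-verify workflow described above, standard tooling is enough to record checksums and re-check them later. The file paths and baseline location here are only examples, not a prescribed layout:

```
# Record a SHA-256 baseline for a few critical binaries and store it in a
# safe location, ideally read-only or off-host (example path shown).
sha256sum /bin/ls /usr/sbin/sshd /usr/local/bin/kubectl > /var/lib/fim/baseline.sha256

# Later, compare the current files against the stored baseline; any line
# reported as FAILED means the file no longer matches its recorded checksum.
sha256sum --check /var/lib/fim/baseline.sha256
```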

In an ideal scenario, any change made to a sensitive file by an unauthorized actor should be detected and immediately reported. Following these best practices for the tools you use in your infrastructure will help you detect this kind of attack.

This article dives into a curated list of FIM best practices, focused on host and container security:

- Scope which files and directories need to be monitored.
- Implement an automated alert and response mechanism.
- Gather forensics data for further investigation.

This selected set of file integrity monitoring best practices is grouped by topic, but please remember that FIM is just one piece of the whole security process.

Maintaining an asset inventory is the first step in securing your system. You cannot secure what you can't see, and thus, having a list of the files and directories that are important to you is the very first best practice you must implement in your infrastructure. A word of caution here: too big a scope will cause too many alerts, which can lead to false positives and alert fatigue, rendering your whole FIM setup untrustworthy and worthless.

#1.1 Scope which files and directories need to be monitored

Create and maintain a comprehensive list of the files and directories that need monitoring. These assets include system and configuration files, as well as files containing sensitive information. While many of these files will depend on the actual business use case, there is a common set of assets that you certainly want to be monitored:

- Csh config files and dirs: /etc/csh.cshrc
- Kubernetes binaries: /usr/local/bin/kubectl
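As a starting point, here is a minimal sketch of how such an inventory could be recorded; the list file, output file, and monitored paths are assumptions for illustration only:

```
# monitored-paths.txt holds one regular file path per line, for example
# /etc/csh.cshrc and /usr/local/bin/kubectl; the output file is the
# inventory you should keep in a safe location.
while read -r path; do
  stat --format '%U %G %a %n' "$path"   # owner, group, octal permissions, path
  sha256sum "$path"                     # content checksum
done < monitored-paths.txt > fim-inventory.txt
```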
Discover how applying a quick set of file integrity monitoring best practices will help you detect the tampering of critical file systems in your cloud environment.

An attacker can gain access to your system and escape the container by modifying certain files, like the runc binary (CVE-2019-5736). Knowing if any of these files have been tampered with is critical to keeping your infrastructure secure, helping you detect attacks at an early stage and investigate them afterwards.

File integrity monitoring (FIM) is an ongoing process that gives you visibility into all of your sensitive files, detecting changes and tampering of critical system files or directories that would indicate an ongoing attack. By monitoring these files, you can detect unauthorized modifications and react immediately to these compromise attempts.

FIM is a core requirement in many compliance standards, like PCI DSS, NIST SP 800-53, ISO 27001, GDPR, and HIPAA, as well as in security best practice frameworks like the CIS Distribution Independent Linux Benchmark.
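To make the "ongoing process" part concrete, here is a minimal sketch of continuously watching a couple of sensitive paths for changes. It assumes a Linux host with the inotify-tools package installed, and the watched paths are examples only:

```
# Print an event line whenever a watched file is modified, created, deleted,
# or has its permissions/attributes changed. A real FIM tool layers baselines,
# modification policies, and alerting on top of this kind of change detection.
inotifywait --monitor --recursive \
  --event modify,create,delete,attrib \
  /etc /usr/local/bin
```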
