I understand that log file rotation involves switching to a new log file either when (1) the current log file reaches a certain size or (2) at the end of the day (EOD). However, I’m uncertain about the reasoning behind (1).
I have never experienced issues with large log files and cannot think of reasons for setting an arbitrary size limit, such as the 10 MiB limit we were given.
“I have never had any issues with large files” does not equate to “there are never problems with large files.” Your lack of experience with such issues only indicates that you haven’t encountered them, not that they don’t exist.
For example, I have had to work with text-based log files exceeding 2 GB back in the 32-bit x86 era, when addressable RAM was capped at 4 GiB (shared with the OS). Handling such large files was nearly impossible, rendering them useless for troubleshooting purposes.
The simple answer is that performance is impacted by file size once it reaches a certain point. As a basic illustration, how would you efficiently look through a file larger than your machine’s RAM?
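One answer is to scan the file in a streaming fashion rather than loading it whole. A minimal sketch in Python (the file path and search term in the usage comment are hypothetical):

```python
def grep_lines(path, needle):
    """Yield (line_number, line) pairs whose line contains `needle`.

    Reads the file line by line, so memory use is bounded by the
    longest line rather than the total file size.
    """
    with open(path, "r", errors="replace") as f:
        for lineno, line in enumerate(f, start=1):
            if needle in line:
                yield lineno, line.rstrip("\n")

# Usage: scan a multi-gigabyte log without loading it into RAM.
# for lineno, line in grep_lines("/var/log/app.log", "ERROR"):
#     print(lineno, line)
```

This is essentially what tools like `grep` do, which is why they cope with files far larger than RAM while a naive "read the whole file" approach does not.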
While a 10 MiB cutoff may seem low, it is likely chosen to keep the file manageable for manual review and scrolling. We can debate what a more appropriate limit would be, but some size-based cutoff is clearly necessary to avoid the problems that come with excessively large files.
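Size-based rotation is common enough that standard libraries ship it; in Python, for instance, `logging.handlers.RotatingFileHandler` implements exactly this cutoff. A sketch using the 10 MiB limit from the question (the logger name and file layout are illustrative):

```python
import logging
import logging.handlers

def make_rotating_logger(path, max_bytes=10 * 1024 * 1024, backups=5):
    """Return a logger that switches to a new file once `path` would
    exceed `max_bytes`, keeping `backups` old files
    (app.log.1, app.log.2, ...)."""
    handler = logging.handlers.RotatingFileHandler(
        path, maxBytes=max_bytes, backupCount=backups)
    handler.setFormatter(
        logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
    logger = logging.getLogger("app")
    logger.setLevel(logging.INFO)
    logger.addHandler(handler)
    return logger
```

With `backupCount` set, old segments are deleted automatically, so total disk use stays bounded at roughly `max_bytes * (backups + 1)`.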
Log file rotation is not solely about technical issues with the files themselves. It also addresses operational considerations such as:
1. Facilitating incremental backups of logs, which is crucial for transaction logs.
2. Minimizing the risk of losing logged events, especially during a hard system crash when open files are vulnerable.
3. Ensuring secure archival of log files to prevent tampering, or passing them to a SIEM system for threat detection.
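For point 3, one lightweight measure is to record a cryptographic digest of each file at rotation time, so later tampering is detectable when the archive is retrieved. A sketch (the archival workflow around it is assumed):

```python
import hashlib

def digest_file(path, algo="sha256", chunk_size=1 << 20):
    """Compute a hex digest of `path`, reading in chunks so even
    large rotated logs never have to fit in memory."""
    h = hashlib.new(algo)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# At rotation time, store digest_file("app.log.1") alongside the
# archived copy (or forward it to the SIEM); re-hash on retrieval
# to verify the file was not altered in the meantime.
```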
Although handling large files is less problematic today, frequently reopened large files can still negatively impact these operational needs.
If you are only logging debugging information, these constraints might not apply. However, for a long-lived, large-scale system, especially one using binary logs, the question is not if you will encounter byte corruption, but when.