It is important to monitor the size of transaction log files for the following reasons:
- The transaction log file might fill up, which can cause the following problems:
  1) Transactions may fail and begin rolling back.
  2) Transactions may take a long time to complete.
  3) Performance issues may occur.
  4) Blocking may occur.
- The transaction log file might consume all of your free disk space and halt your system.
- An oversized transaction log often indicates that log backups have not been taken for too long on a database in the full recovery model.
If you are witnessing rapid growth of a transaction log file, the most common causes are the following:
- A long-running transaction (index maintenance? a big batch delete or update?)
- The database is in the full recovery model and you aren't taking log backups frequently enough.
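A quick way to see which of these causes applies right now is the log_reuse_wait_desc column in sys.databases: values such as LOG_BACKUP or ACTIVE_TRANSACTION point directly at the two causes above, while NOTHING means the log can be truncated. A minimal sketch:

```sql
-- Show, per database, why the transaction log currently
-- cannot be truncated (NOTHING means it can be).
SELECT name,
       log_reuse_wait_desc
FROM sys.databases;
```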
The scenarios usually follow two patterns:
1.) Not Understanding Recovery Models
Being in the full recovery model and not taking log backups is by far the most common reason; the vast majority of people experiencing this issue fall into this category.
In SQL Server, there are three recovery models: Full, Bulk-Logged and Simple. We'll ignore Bulk-Logged for now; it is something of a hybrid model, and most people who use it are there for a reason and already understand recovery models.
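You can check which recovery model each of your databases is using, and change it if needed. A minimal sketch (the database name "MyDb" is just an example):

```sql
-- Check the recovery model of every database on the instance.
SELECT name, recovery_model_desc
FROM sys.databases;

-- Switch a hypothetical database "MyDb" between models.
ALTER DATABASE MyDb SET RECOVERY FULL;
-- ALTER DATABASE MyDb SET RECOVERY SIMPLE;
```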
A few important things to know about transaction log files are:
- The transaction log file is there for crash/restart recovery: the rolling forward of work that was completed before a crash or restart (redo) and the rolling back of work that was started but not finished (undo). If a transaction started but never finished, it is the log's job to say "this never really completed, let's roll it back" during recovery. If a transaction did finish, and your client application was told it finished, but the change had not yet been hardened to the data file, it is the log's job to say "this really happened, let's roll it forward and make the database look the way the applications believe it does" after a restart. There is more to it, but that is the main purpose.
- The other purpose of a transaction log file is to give us the ability to recover to a point in time after an "oops" in a database, or to guarantee a recovery point in the event of a hardware failure involving the data and/or log files of a database. Since the transaction log contains the records of transactions that have been started and finished, SQL Server can use this information to bring a database back to where it was before an issue happened. That isn't always an available option, though: for it to work, the database must be in the right recovery model and you must take log backups.
Even with your database in the full recovery model, you can get into trouble by not taking log backups frequently enough. Taking one log backup a day may sound fine (it means a restore requires fewer restore commands), but, keeping in mind the discussion above, that log file will continue to grow and grow until you take log backups. You need to choose your log backup frequency with two things in mind:
- Recovery needs – If the drive housing your transaction log goes bad, or you hit serious corruption that affects your log backups, how much data can you afford to lose? If the answer is no more than 10-15 minutes, then you need to take a log backup every 10-15 minutes, end of discussion.
- Log growth – If your organization can afford to lose more data because it can easily recreate that day's work, you may be fine taking log backups much less frequently than every 15 minutes; maybe every 4 hours is acceptable. But look at how many transactions you generate in 4 hours: will letting the log grow for that long make the log file too large? Will your log backups then take too long? You must take that into account.
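Taking regular log backups is what allows the inactive portion of the log to be truncated and reused, so the file stops growing. A minimal sketch (the database name and backup path are assumptions for illustration):

```sql
-- Back up the transaction log of a hypothetical database "MyDb".
-- After the backup completes, the inactive portion of the log
-- becomes eligible for reuse.
BACKUP LOG MyDb
TO DISK = N'X:\Backups\MyDb_log.trn';
```

In practice you would schedule this (for example with SQL Server Agent) at whatever interval your recovery needs dictate.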
2.) Long Running Transactions
Long-running transactions can also cause uncontrolled, unrestrained log growth, no matter the recovery model. This is what often comes up as "But I'm in the Simple recovery model – why is my log still growing?!"
The reason is simple: if SQL Server is using the transaction log for recovery purposes as described above, it has to retain log records back to the start of the oldest active transaction. If you have a transaction that takes a long time or makes a lot of changes, the log cannot truncate at a checkpoint any of the changes that are still part of open transactions, or that have occurred since that transaction started.
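You can look for the transaction that is currently holding the log back. A minimal sketch using DBCC OPENTRAN and the active-transactions DMV:

```sql
-- Report the oldest active transaction in the current database.
DBCC OPENTRAN;

-- Or list all active transactions with their start times,
-- oldest first.
SELECT transaction_id,
       name,
       transaction_begin_time
FROM sys.dm_tran_active_transactions
ORDER BY transaction_begin_time;
```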
This means that a big delete – deleting millions of rows in one DELETE statement – is one transaction, and the log cannot truncate anything until that whole delete is done. In the full recovery model, every one of those deletes is logged, and that can be a lot of log records. The same applies to index optimization work during maintenance windows. It also means that poor transaction management – not watching for and closing open transactions – can really hurt you and your log file.
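One common mitigation is to break a huge delete into smaller batches, so that each batch is its own short transaction and the log can be reused between batches. A minimal sketch (the table name and filter are hypothetical):

```sql
-- Delete in batches of 5,000 rows instead of one giant transaction.
-- Each DELETE commits on its own, so log records for finished
-- batches become eligible for truncation (after a log backup,
-- if the database is in the full recovery model).
WHILE 1 = 1
BEGIN
    DELETE TOP (5000)
    FROM dbo.BigTable               -- hypothetical table
    WHERE CreatedAt < '2020-01-01'; -- hypothetical filter

    IF @@ROWCOUNT = 0
        BREAK;
END;
```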
This is how you can tell the size of your transaction log files:

-- size and max_size are stored in 8 KB pages, so multiply by 8
-- and divide by 1024 to get megabytes.
SELECT (size * 8.0) / 1024.0 AS size_in_mb,
       CASE
           WHEN max_size = -1
               THEN 9999999 -- unlimited growth
           ELSE (max_size * 8.0) / 1024.0
       END AS max_size_in_mb
FROM sys.database_files -- run in the context of the database in question
WHERE data_space_id = 0; -- 0 = log file(s)
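For ongoing monitoring, how full the log is matters as much as how big it is. A minimal sketch using the sys.dm_db_log_space_usage DMV (available in SQL Server 2012 and later):

```sql
-- Current log size and percentage used for the current database.
SELECT total_log_size_in_bytes / 1024.0 / 1024.0 AS total_log_mb,
       used_log_space_in_bytes / 1024.0 / 1024.0 AS used_log_mb,
       used_log_space_in_percent
FROM sys.dm_db_log_space_usage;
```

On older versions, DBCC SQLPERF(LOGSPACE) returns similar information for all databases at once.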
Monitoring your transaction logs and setting a proper backup policy for your databases are key to the durability of your systems.