Consequences of wrong database autogrowth settings

Forrards
March 31, 2023

In this article we are going to address the question of database autogrowth settings. To be more precise, we are going to talk about database file settings and their autogrowth options. At first glance this seems like an easy question, but in reality the wrong settings may cause serious issues in mission-critical data systems, and even downtime.

The goal of this article is to decide whether file growth should be set as a percentage or in megabytes, and how large the growth increment should be.

Some theory...

When a database is created, its settings are copied from the model database. By default they look like the following:

  • Data file size: 5 MB
  • Data file growth: 5 MB
  • Transaction log file size: 1 MB
  • Transaction log file growth: 10%
  • Maximum file size: unlimited
  • Database recovery model: Full

These parameters can be changed when you create the database.

SQL Server will then increase the size of the files automatically according to these settings.

By default, database file sizes are not limited, which in some cases may lead to filling all the available space on the storage.

So let's get to our questions about database file settings.

Should we set file growth as a percentage or in megabytes?

It depends on the size of the database. If the database file is 200 MB, 10% growth is 20 MB. For a 30 GB file it is 3 GB (3,000 MB). Do you feel the difference? And what if the database file is larger than 100 GB? How long will it take to add that much space to the file? Obviously, it will take some time.

Let's consider an example with a 500 GB database file. If file growth is set to 10%, then whenever more space is needed, 50 GB will be added to the file. It will definitely take the OS some time to find that space on the disk and add it to the file. And what if there is only 40 GB of free space on the disk? In that case the growth will fail, which may lead to further issues and system errors. If we set the file growth to 500 MB instead, there would most likely be no issues: free space would be added to the file faster, and we would still have space left on the disk.

This is the disadvantage of setting the file growth property as a percentage.
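To see how your files are currently configured, you can query sys.database_files; size and growth are stored in 8 KB pages. A minimal sketch:

  -- Current size and autogrowth settings of the files of the current database.
  -- growth is a percentage when is_percent_growth = 1, otherwise 8 KB pages;
  -- max_size = -1 means the file is unlimited.
  SELECT name,
         type_desc,
         size * 8 / 1024 AS size_mb,
         CASE WHEN is_percent_growth = 1
              THEN CAST(growth AS varchar(10)) + ' %'
              ELSE CAST(growth * 8 / 1024 AS varchar(10)) + ' MB'
         END AS growth_setting,
         max_size
  FROM sys.database_files;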

Database file growth in megabytes

As said above, file growth in megabytes also depends on the database size and the database file size.

Ideally the growth setting should ensure that file growth is not performed often and doesn't cause query timeouts and long waits.

For example, for a rather small 10 GB database a file growth of 100-300 MB would be enough. If the database is large and write-intensive, the file growth setting should be around 500-1,000 MB, but not more than 1 GB. Why not more than 1 GB? As mentioned above, a large file growth value may cause long waits. The growth increment can be changed with a single statement, as shown below.
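A minimal sketch of switching a file from percentage growth to a fixed increment (the database and logical file names here are placeholders you would replace with your own):

  -- Switch the data file to a fixed 512 MB growth increment.
  ALTER DATABASE YourDatabase
      MODIFY FILE (NAME = YourDatabase_Data, FILEGROWTH = 512MB);

  -- And the transaction log file as well.
  ALTER DATABASE YourDatabase
      MODIFY FILE (NAME = YourDatabase_Log, FILEGROWTH = 512MB);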

Here is an example from our experience. One of our clients had a database server with a 3 TB database and a 500 GB transaction log. Transaction log file growth was set to 5 GB. Here is the message we found in the server log:

Autogrow of file 'database file name' in database 'database name' took 239356 milliseconds. Consider using ALTER DATABASE to set a smaller FILEGROWTH for this file.

What does it mean?

It took almost 4 minutes (239 seconds) to grow the file by 5 GB. In other words, no transactions were performed for those 4 minutes, which in turn means no business operations for almost 4 minutes!

In our experience of dealing with Microsoft support, they also say that the SQL Server database file growth setting should not be more than 1 GB.

What should we do if we need more space, for example when we perform a large data load?

Here are the options:

  • Monitor file growth yourself and grow the file manually, in small increments, before the load. In the case of the transaction log file it is also a good idea to take a transaction log backup beforehand. You can use SQL Server Agent jobs for both tasks (a sketch of these steps follows this list).
  • Another option is to keep the file pre-sized to the space your loads need at all times, since that space is required by them anyway.
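A minimal sketch of the pre-load steps from the first option (the database name, logical file name, backup path, and target size are placeholders):

  -- Take a transaction log backup before the load so existing log space can be reused.
  BACKUP LOG YourDatabase
      TO DISK = N'X:\Backups\YourDatabase_log.trn';

  -- Grow the data file to the size the load will need in one deliberate step,
  -- instead of relying on many autogrow events during the load.
  -- SIZE must be larger than the current file size.
  ALTER DATABASE YourDatabase
      MODIFY FILE (NAME = YourDatabase_Data, SIZE = 120GB);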

SQL Server manages the transaction log through virtual log files (VLFs). Their number and size are determined automatically based on the transaction log file size and the file growth setting. SQL Server tries to keep the number of VLFs low, but with a small growth increment and frequent grow operations it ends up with a great many of them. The result of such poor sizing can be seen when you restore a large database from backup:

Database db_name has more than 1000 virtual log files which is excessive. Too many virtual log files can cause long startup and backup times. Consider shrinking the log and using a different growth increment to reduce the number of virtual log files.

At first glance the database looks 100% restored, but some processes are still running and the database stays in the recovery state. This phase may take as long as the restore itself. The root cause is the high number of virtual log files in the transaction log, so it can significantly increase (disaster) recovery time.
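You can check how many VLFs a log currently has. A minimal sketch, assuming SQL Server 2016 SP2 or later where sys.dm_db_log_info is available (older versions can use DBCC LOGINFO, which returns one row per VLF):

  -- Number of virtual log files in the current database's transaction log.
  SELECT COUNT(*) AS vlf_count
  FROM sys.dm_db_log_info(DB_ID());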

If you are interested, you can easily check this yourself (a T-SQL sketch follows the list):

  • Create a database with the full recovery model
  • Set the transaction log file growth to 1 MB
  • Take a full backup of the database
  • Create a table and run insert statements against it until the transaction log file reaches 5 GB
  • Take another full backup
  • Restore the database from that full backup
  • You will see the message mentioned above in the server log, and you will also see that the database remains in recovery for quite a while after the restore.
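A minimal sketch of this experiment (names, paths, and sizes are placeholders; GO is the SSMS/sqlcmd batch separator; the restore is done under a new name so the original database is kept):

  -- 1. Create a test database with the full recovery model and a 1 MB log growth.
  CREATE DATABASE VlfTest;
  GO
  ALTER DATABASE VlfTest SET RECOVERY FULL;
  ALTER DATABASE VlfTest
      MODIFY FILE (NAME = VlfTest_log, FILEGROWTH = 1MB);
  GO

  -- 2. An initial full backup so the log stops being auto-truncated.
  BACKUP DATABASE VlfTest TO DISK = N'X:\Backups\VlfTest_full.bak';
  GO

  -- 3. Insert rows until the log file has grown to roughly 5 GB (5120 MB),
  --    forcing thousands of tiny autogrow events along the way.
  USE VlfTest;
  GO
  CREATE TABLE dbo.Filler
  (
      id      int IDENTITY PRIMARY KEY,
      payload char(4000) NOT NULL DEFAULT ('x')
  );
  GO
  WHILE (SELECT size * 8 / 1024 FROM sys.database_files WHERE type_desc = 'LOG') < 5120
      INSERT INTO dbo.Filler DEFAULT VALUES;
  GO

  -- 4. Back the database up again and restore it under a new name;
  --    watch how long it stays "in recovery" after the restore reaches 100%.
  BACKUP DATABASE VlfTest TO DISK = N'X:\Backups\VlfTest_full2.bak';
  GO
  USE master;
  RESTORE DATABASE VlfTest_Copy
      FROM DISK = N'X:\Backups\VlfTest_full2.bak'
      WITH MOVE 'VlfTest'     TO N'X:\Data\VlfTest_Copy.mdf',
           MOVE 'VlfTest_log' TO N'X:\Data\VlfTest_Copy_log.ldf';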

On a work laptop the test database restore took 1 min 45 sec, of which the restore itself (up to 100%) took only 20 seconds. For the rest of the time the database was unavailable.

In a real production environment such an operation on a 100 GB database will take significant time, which may be critical for the business.

Another reason not to set the file growth value above 1 GB is security. By default the OS zero-initializes all the space it is about to add to a database file, and if the growth increment is large this may take quite a long time. You can find more details in the Microsoft documentation on instant file initialization and the 'Perform volume maintenance tasks' permission. If you are confident in the security of the data on your disks, you can skip this zeroing for data files by granting the account SQL Server runs under the 'Perform volume maintenance tasks' right in the Local Security Policy. After doing this, data file growth becomes much faster (transaction log files are always zero-initialized regardless).
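To check whether instant file initialization is already in effect for your instance, a minimal sketch, assuming SQL Server 2016 SP1 or later where sys.dm_server_services exposes the instant_file_initialization_enabled column:

  -- 'Y' means the SQL Server service account holds the
  -- 'Perform volume maintenance tasks' right, so data files grow
  -- without zero-initialization.
  SELECT servicename,
         instant_file_initialization_enabled
  FROM sys.dm_server_services;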

Here are a few more questions to consider:

  • Why do your data files and transaction log files need to grow at all?
  • Are you doing unnecessary shrink operations? If yes, why? Perhaps you should instead increase the file size so that it is sufficient for all operations.
  • Which recovery model do you use? If Full, do you really need it?

By default the model database uses the full recovery model, and users do not keep this in mind when creating databases, which results in very large transaction logs, sometimes even larger than the data files. It also happens that nobody takes transaction log backups, which likewise leads to transaction log growth.
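A minimal sketch for spotting this situation: it lists databases in full recovery whose last log backup is missing or older than 24 hours (the threshold is an arbitrary assumption):

  -- Databases in FULL recovery with no log backup in the last 24 hours
  -- are candidates for runaway transaction log growth.
  SELECT d.name,
         d.recovery_model_desc,
         MAX(b.backup_finish_date) AS last_log_backup
  FROM sys.databases AS d
  LEFT JOIN msdb.dbo.backupset AS b
         ON b.database_name = d.name
        AND b.type = 'L'                        -- 'L' = transaction log backup
  WHERE d.recovery_model_desc = 'FULL'
    AND d.database_id > 4                       -- skip system databases
  GROUP BY d.name, d.recovery_model_desc
  HAVING MAX(b.backup_finish_date) IS NULL
      OR MAX(b.backup_finish_date) < DATEADD(HOUR, -24, GETDATE());

  -- If point-in-time recovery is not required, switching to SIMPLE
  -- stops the log from growing for lack of log backups:
  -- ALTER DATABASE YourDatabase SET RECOVERY SIMPLE;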

Conclusion

The database file growth setting is very important: it may affect performance, reliability, and disaster recovery time.

Check your database settings using the checklist below:

  • Set the proper recovery model
  • Set the necessary transaction log file size and data file size when you create the database, to avoid growth operations in the future
  • Set file growth in megabytes, not more than 1,000 MB, but not too small either, to avoid a high number of virtual log files in the transaction log. If the data files grow often, increase their size manually or schedule a job for a time when there is little load on the system.
  • Set up transaction log backups
  • Set proper defaults on the model database (a sketch follows this list)
  • Avoid unnecessary shrink operations
  • Consider the 'Perform volume maintenance tasks' right for the SQL Server service account
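A minimal sketch of adjusting the model database so new databases inherit sane defaults (the sizes and the simple recovery model here are illustrative assumptions, not recommendations for every system; modeldev and modellog are the default logical file names of model):

  -- New databases copy these settings from model.
  -- Use SIMPLE only if new databases usually do not need point-in-time recovery.
  ALTER DATABASE model SET RECOVERY SIMPLE;

  ALTER DATABASE model
      MODIFY FILE (NAME = modeldev, SIZE = 256MB, FILEGROWTH = 256MB);

  ALTER DATABASE model
      MODIFY FILE (NAME = modellog, SIZE = 128MB, FILEGROWTH = 128MB);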

Good luck with your databases, and feel free to comment and ask questions below.
