Performance Audit Checklist
SQL Server
Configuration Settings
Setting                          Advanced?  Requires Restart?  Default Value  Current Value
affinity mask                    Yes        Yes                0
awe enabled                      Yes        Yes                0
cost threshold for parallelism   Yes        No                 5
cursor threshold                 Yes        No                 -1
fill factor (%)                  Yes        Yes                0
index create memory (KB)         Yes        No                 0
lightweight pooling              Yes        Yes                0
locks                            Yes        Yes                0
max degree of parallelism        Yes        No                 0
max server memory (MB)           Yes        No                 2147483647
max text repl size (B)           No         No                 65536
max worker threads               Yes        Yes                255
min memory per query (KB)        Yes        No                 1024
min server memory (MB)           Yes        No                 0
nested triggers                  No         No                 1
network packet size (B)          Yes        No                 4096
open objects                     Yes        Yes                0
priority boost                   Yes        Yes                0
query governor cost limit        Yes        No                 0
query wait (s)                   Yes        No                 -1
recovery interval (min)          Yes        No                 0
scan for startup procs           Yes        No                 0
set working set size             Yes        Yes                0
user connections                 Yes        Yes                0

Enter your results in the table above.
 
Most SQL Server Configuration Settings Should Not Be Changed
In this section, we are going to take a look at some of the performance-related SQL Server configuration settings. These are SQL Server-specific settings that can be modified using either Enterprise Manager or SP_CONFIGURE.
As the title of this section says, in most cases you should not modify the default SQL Server configuration settings. This is because the defaults provide optimum performance for most SQL Servers. More importantly, if you are not sure of the implications of changing a setting, it is possible to hurt your server's performance instead of boosting it.
If this is the first time you have dealt with a particular SQL Server, one of your first steps should be to review its configuration settings and compare them to the defaults to see which ones, if any, have been changed. Once you have identified the changed settings, your next goal is to find out why they were changed. If you can't find out why, or if the reasoning behind the change is flimsy, change the settings back to their default values. Once you have done this, your next step is to review all of the other settings (those that were still at their defaults) and evaluate each one to see whether a different value might benefit this particular server.
The focus of this article will be SQL Server 2000, although most of the advice applies equally to SQL Server 7.0. Before trying any of these suggestions under SQL Server 7.0, you will want to review the configuration setting section in the SQL Server 7.0 Books Online just to be sure.
There are a total of 36 different SQL Server configuration settings in SQL Server 2000. We will focus here on the 24 key performance-related settings listed in the checklist above.
 
Getting Started
The easiest way to begin your audit of a SQL Server's configuration settings is to run the following command, for each of your servers, in Query Analyzer:
SP_CONFIGURE
This will produce a table similar to this one:
name                                minimum     maximum     config_value run_value 
----------------------------------- ----------- ----------- ------------ ---------- 
affinity mask                       -2147483648 2147483647  0            0
allow updates                       0           1           0            0
awe enabled                         0           1           0            0
c2 audit mode                       0           1           0            0
cost threshold for parallelism      0           32767       5            5
cursor threshold                    -1          2147483647  -1           -1
default full-text language          0           2147483647  1033         1033
default language                    0           9999        0            0
fill factor (%)                     0           100         0            0
index create memory (KB)            704         2147483647  0            0
lightweight pooling                 0           1           0            0
locks                               5000        2147483647  0            0
max degree of parallelism           0           32          1            1
max server memory (MB)              4           2147483647  2147483647   2147483647
max text repl size (B)              0           2147483647  65536        65536
max worker threads                  32          32767       255          255
media retention                     0           365         0            0
min memory per query (KB)           512         2147483647  1024         1024
min server memory (MB)              0           2147483647  0            0
nested triggers                     0           1           1            1
network packet size (B)             512         65536       4096         4096
open objects                        0           2147483647  0            0
priority boost                      0           1           0            0
query governor cost limit           0           2147483647  0            0
query wait (s)                      -1          2147483647  -1           -1
recovery interval (min)             0           32767       0            0
remote access                       0           1           1            1
remote login timeout (s)            0           2147483647  5            5
remote proc trans                   0           1           0            0
remote query timeout (s)            0           2147483647  600          600
scan for startup procs              0           1           0            0
set working set size                0           1           0            0
show advanced options               0           1           1            1
two digit year cutoff               1753        9999        2049         2049
user connections                    0           32767       0            0
user options                        0           32767       0            0

The first column, "name," is the name of the SQL Server configuration setting. The second column, "minimum," is the smallest legal value for the setting. The third column, "maximum," is the largest legal value for the setting. The fourth column, "config_value," is the value the setting has been changed to, which may or may not be the value SQL Server is currently running with: some settings don't go into effect until SQL Server has been restarted, or until the RECONFIGURE (or RECONFIGURE WITH OVERRIDE) command has been run, as appropriate. And the last column, "run_value," is the value of the setting currently in effect. If you have not changed any of these values since the last time you restarted SQL Server, the values in the last two columns will always be the same.
Unfortunately, the default values for these settings are not listed when you run SP_CONFIGURE. For your convenience, this article lists the default values of those configuration settings we discuss here (see chart above).

How to Change SQL Server Configuration Settings
Most, but not all, of the SQL Server configuration settings can be changed using Enterprise Manager. But one of the easiest ways to change any of these settings is to use the SP_CONFIGURE command, like this:
SP_CONFIGURE ['configuration name'], [configuration setting value]
GO
RECONFIGURE WITH OVERRIDE
GO
where:
configuration name = The name of the configuration setting (see the name in the table above). Note that the name must be enclosed in single quote marks (or double quote marks, depending on Query Analyzer's configuration).
configuration setting value = The numeric value of the setting (with no quote marks).
Once SP_CONFIGURE has run, you must perform one additional step. You must either run the RECONFIGURE option (normal settings) or the RECONFIGURE WITH OVERRIDE option (used for settings that can get you into trouble if you make a mistake), otherwise your setting change will not go into effect. Rather than trying to remember when to use each different version of the RECONFIGURE command, it is easier to just use RECONFIGURE WITH OVERRIDE all the time, as it works with all configuration settings. If you use Enterprise Manager to change a setting, it will execute RECONFIGURE WITH OVERRIDE automatically, so you don't have to.
Once you do this, most, but not all, settings go into effect immediately. For those that don't go into effect after RECONFIGURE, the SQL Server service has to be stopped and restarted.
Before we are finished with this topic, there is one more thing you need to know. Some of the configuration settings are considered "advanced" settings. Before you can change these options using the SP_CONFIGURE command, you must first change one of the SQL Server configuration settings to allow you to change them. The command to do this is:
SP_CONFIGURE 'show advanced options', 1
GO
RECONFIGURE
GO
Only after you have run the above code may you now run SP_CONFIGURE to change an advanced SQL Server configuration setting.
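Putting it all together, a complete change of an advanced setting might look like the following sketch. The setting and value chosen here are illustrative only; substitute the option and value appropriate to your situation:
SP_CONFIGURE 'show advanced options', 1
GO
RECONFIGURE
GO
SP_CONFIGURE 'network packet size', 8192 -- illustrative value only
GO
RECONFIGURE WITH OVERRIDE
GO
SP_CONFIGURE 'network packet size' -- with one argument, displays the current values
GO
Running SP_CONFIGURE with just the setting name, as in the last statement, is a handy way to confirm that the config_value and run_value columns now show what you expect.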
Now that you know how to change the SQL Server configuration options, let's take a look at those that are related to performance.
 
Affinity Mask
When SQL Server is run under Windows Server, a SQL Server thread can move from one CPU to another. This feature allows SQL Server to run multiple threads at the same time, generally resulting in better load balancing among the CPUs in the server. The only downside to this process is that each time a thread moves from one CPU to another, the processor cache has to be reloaded, which can hurt performance in some cases.
In cases of heavily loaded servers with more than 4 CPUs, performance can be boosted by specifying (to a limited degree) which processor(s) should run a specific thread. This reduces the number of times that the processor cache has to be reloaded, helping to eke out a little more performance from the server. For example, you can specify that SQL Server use only some of the CPUs available to it in a server.
The default value for the "affinity mask" setting, which is "0," tells SQL Server to allow the Windows scheduling algorithm to set a thread's affinity. In other words, the operating system, not SQL Server, determines which threads run on which CPU, and when to move a thread from one CPU to another. In any server with 4 or fewer CPUs, the default value is the best overall setting. And for servers with more than 4 CPUs that are not overly busy, the default value is also the best setting for optimum performance.
But for servers with more than 4 CPUs that are heavily loaded because one or more non-SQL Server applications are running alongside SQL Server, you might want to consider changing the default value of the "affinity mask" option to a more appropriate value. Please note that if SQL Server is the only application running on the server, then using the "affinity mask" to limit CPU use could hurt performance, not help it.
For example, let's say you have a server that is running SQL Server, multiple COM+ objects, and IIS. Let's also assume that the server has 8 CPUs and is very busy. By reducing the number of CPUs that can run SQL Server from 8 to 4, SQL Server threads will now run on only 4 CPUs, not 8. This reduces the number of times that a SQL Server thread can jump CPUs, reducing how often the processor cache has to be reloaded, helping to reduce CPU overhead and potentially boosting performance somewhat. The remaining 4 CPUs will be used by the operating system to run the non-SQL Server applications, helping them also to reduce thread movement and boosting their performance.
For example, if you have an 8-CPU system, the values you would use in the SP_CONFIGURE command to specify which CPUs SQL Server should run on are listed below:
 
Decimal Value   Allow SQL Server Threads on These Processors
1               0
3               0 and 1
7               0, 1, and 2
15              0, 1, 2, and 3
31              0, 1, 2, 3, and 4
63              0, 1, 2, 3, 4, and 5
127             0, 1, 2, 3, 4, 5, and 6
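The decimal values above are simply the sums of the bit values assigned to each CPU (CPU 0 = 1, CPU 1 = 2, CPU 2 = 4, CPU 3 = 8, and so on). For example, to restrict SQL Server threads to CPUs 0 through 3, the commands would look something like this sketch (assuming "show advanced options" has already been turned on, as described earlier):
SP_CONFIGURE 'affinity mask', 15 -- 1 + 2 + 4 + 8 = CPUs 0, 1, 2, and 3
GO
RECONFIGURE WITH OVERRIDE
GO
-- per the checklist above, this change requires a restart of the SQL Server service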
Specifying the appropriate affinity mask is not an easy job, and you should consult the SQL Server Books Online for additional information before doing so. Also, you should test your SQL Server's performance before and after you make any changes to see if the value you have selected hurts or helps performance. Other than trial and error, there is no easy way to determine the optimum affinity mask value for your particular server.
As part of your audit, if you find that an affinity mask is being used, try to find out why. If there are no good answers, remove it, and return to the default value.
 
Awe Enabled
If you are using SQL Server 2000 Standard Edition under Windows 2000 or 2003 (any version), or are running SQL Server 2000 Enterprise Edition under Windows 2000 or 2003 Server (as opposed to Advanced Server or Datacenter Server), or if your server has 4GB of RAM or less, the "awe enabled" option should always be left at the default value of 0, which means that AWE memory is not used.
The AWE (Advanced Windowing Extensions) API allows applications (that are written to use the AWE API) to run under Windows 2000 or 2003 Advanced Server, or Windows 2000 or 2003 Datacenter Server, to access more than 4GB of RAM. SQL Server 2000 Enterprise Edition (not SQL Server 2000 Standard Edition) is AWE-enabled and can take advantage of RAM in a server over 4GB. If the operating system is Windows 2000 or 2003 Advanced Server, SQL Server 2000 Enterprise Edition can use up to 8GB of RAM. If the operating system is Windows 2000 or 2003 Datacenter Server, SQL Server 2000 Enterprise can use up to 64GB of RAM.
By default, if a physical server has more than 4GB of RAM, Windows 2000 and 2003 (Advanced and Datacenter), along with SQL Server 2000 Enterprise Edition, cannot access any RAM greater than 4GB. In order for the operating system and SQL Server 2000 Enterprise Edition to take advantage of the additional RAM, two steps have to be completed.
Exactly how you configure AWE memory support depends on how much RAM your server has. Essentially, to configure Windows 2000 or 2003 (Advanced or Datacenter), you must enter one of the following switches in the boot line of the boot.ini file, and reboot the server:
  • 4GB RAM:  /3GB (AWE support is not used)
  • 8GB RAM:  /3GB /PAE
  • 16GB RAM:  /3GB /PAE
  • 16GB + RAM:  /PAE
The /3GB switch is used to tell the OS to allow SQL Server to take advantage of 3GB out of the base 4GB of RAM that Windows 2000 and 2003 supports natively. If you don't specify this option, then SQL Server will only take advantage of 2GB of the first 4GB of RAM in the server, essentially wasting 1GB of RAM.
AWE memory technology is used only for the RAM that exceeds the base 4GB, which is why the /3GB switch is needed to use as much of the RAM in your server as possible. If your server has 16GB or less of RAM, then using the /3GB switch is important. But if your server has more than 16GB of RAM, you must not use the /3GB switch, because the 1GB of additional RAM the /3GB switch would give SQL Server is needed by the operating system in order to take advantage of all of the extra AWE memory. In other words, the operating system needs 2GB of RAM for itself to manage the AWE memory if your server has more than 16GB of RAM. With 16GB or less of RAM in a server, the operating system only needs 1GB of RAM, leaving the other 1GB of RAM for use by SQL Server.
Once this step is done, the next step is to set the "awe enabled" option to 1, and then restart the SQL Server service. Only at this point will SQL Server be able to use the additional RAM in the server.
One caution about using the "awe enabled" setting is that after turning it on, SQL Server no longer dynamically manages memory. Instead, it takes all of the available RAM (except about 128MB which is left for the operating system). If you want to prevent SQL Server from taking all of the RAM, you must set the "max server memory" option (described in more detail later in this article) to a figure that limits SQL Server to the amount of RAM you specify.
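Putting these two steps together, the SQL Server side of an AWE configuration for, say, an 8GB server (after the /3GB /PAE switches have been added to boot.ini) might look something like the sketch below. The 6144 MB cap is purely illustrative; pick a figure that leaves enough RAM for the operating system:
SP_CONFIGURE 'awe enabled', 1
GO
SP_CONFIGURE 'max server memory', 6144 -- illustrative cap, in MB
GO
RECONFIGURE WITH OVERRIDE
GO
-- the "awe enabled" change only takes effect after the SQL Server service is restarted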
As part of your audit process, you will want to check what this setting is and then determine if the setting matches your server's hardware and software configuration. If not, then change the setting appropriately.

Cost Threshold for Parallelism
Using parallelism to execute a SQL Server query has its costs, because it takes some additional overhead to run a query in parallel rather than serially. But if the benefits of running a query in parallel outweigh the costs, then using parallelism is a good thing.
As a rule of thumb, if a query can run serially very fast, there is no point in even considering parallelism for the query, as the extra time required to evaluate it for possible parallelism might be longer than the time it takes to run the query serially.
By default, if the Query Optimizer determines that a query will take less than 5 seconds to execute, parallelism is not considered by SQL Server. This 5 second figure can be modified using the "cost threshold for parallelism" SQL Server option. You can change this value anywhere from 0 to 32767 seconds. So if you set this value to 10, this means that the Query Optimizer won't consider parallelism for any query that it thinks will take less than 10 seconds to run.
In most cases, you should not change this setting. But if you find that your SQL Server runs many queries with parallelism, and if CPU utilization is very high, raising this setting from 5 to a higher figure (you will have to experiment to find the ideal figure for your situation) will reduce the number of queries using parallelism, which in turn reduces the overall load on your server's CPUs and may help the overall performance of your server.
Another option to consider is to reduce the value from 5 seconds to a smaller number, although this could hurt, rather than help performance in many cases. One area where a smaller value might be useful is in cases where SQL Server is acting as a data warehouse and many very complex queries are being run. A lower value will allow the Query Optimizer to use parallelism more often, which can help in some situations.
You will want to test changes to the default value thoroughly before implementing it on your production servers.
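As an illustration, raising the threshold from 5 to 10 would look like the following sketch (assuming "show advanced options" is already on); the value of 10 is purely illustrative, and only testing will reveal the right figure for your server:
SP_CONFIGURE 'cost threshold for parallelism', 10
GO
RECONFIGURE WITH OVERRIDE
GO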
If SQL Server only has access to a single CPU (either because there is only one CPU in the server, or because of an "affinity mask" setting), parallelism is not considered for a query.
If you find in your audit that the cost threshold for parallelism is being used, find out why. If you can't get an answer, move it back to the default value.
 
Cursor Threshold
If your SQL Server does not use cursors, or uses them very little, then this setting should never be changed from its default value of "-1".
A "cursor threshold" of "-1" tells SQL Server to execute all cursors synchronously, which is the ideal setting if the result sets of cursors executed on your server are not large. But if many, or all of the cursors running on your SQL Server produce very large result sets, then executing cursors synchronously is not the most efficient way to execute a cursor.
The "cursor threshold" setting has two other options (besides the default) for running large cursors. A setting of "0" tells SQL Server to run all cursors asynchronously, which is more efficient if most or all of the cursor's result sets are large.
What if some cursor result sets are small and some are large? In this case, you can decide what counts as large and small, and then use that number as the cutoff point for SQL Server. For example, let's say we consider any cursor result set under 1,000 rows small, and any result set over 1,000 rows large. If this is the case, we can set the "cursor threshold" to 1000.
When the "cursor threshold" is set to 1000, what happens is that if the Query Optimizer predicts that the result set will be less than 1000, then the cursor will be run synchronously. And if the Query Optimizer predicts that the result set will be more than 1000, then the cursor will be run asynchronously.
In many ways, this option provides the best of both worlds. The only problem is determining the ideal "cursor threshold", which you will need to test to find. But as you might expect, the default value is often the best, and you should only change this option if you know for sure that your application uses very large cursors and your testing shows that changing it helps, not hurts, performance.
As a part of your audit, you may also want to investigate how often cursors are used, and how large the result sets are. Only by knowing this will you know what the best setting is for your server. Of course, you could always try to eliminate the use of cursors on the server. This way, the setting can remain at the default value, and you don't have to worry about the overhead of cursors.
 
Fill Factor (%)
This option allows you to change the default fill factor for indexes when they are built. By default, the fill factor setting is set to "0". A setting of "0" is somewhat confusing, as what it means is that leaf index pages are filled 100% (not 0%), but that intermediate index pages (non-leaf pages) have some space left in them (they are not filled up 100%). Legal settings for the fill factor setting range from 0 through 100.
The default fill factor only comes into play when you build indexes without specifying a specific fill factor. If you do specify a fill factor when you create a new index, that value is used, not the default fill factor.
In most cases, it is best to leave the default fill factor alone, and if you want a value other than the default fill factor, then specify it when you create an index.
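As an illustration, here is how you would specify a fill factor for an individual index at creation time, rather than changing the server-wide default. The table, column, and index names are hypothetical:
CREATE INDEX IX_Orders_CustomerID
ON Orders (CustomerID)
WITH FILLFACTOR = 90 -- leave 10% free space in the leaf pages for future inserts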
As a part of your audit, note if the fill factor is some figure other than the default value of "0". If it is, try to find out why. And if you can't find out why the default value was changed, or there is not a good reason, switch it back to the default value. Also, if the value has been changed, keep in mind that any indexes created after the default value was changed may be using this default fill factor value. If so, you may need to reevaluate these indexes to see if the fill factor used for creating them is appropriate.


Index Create Memory (KB)
The index create memory setting determines how much memory can be used by SQL Server for index creating sorts. The default value of "0" tells SQL Server to automatically determine the ideal value. In almost all cases, SQL Server will configure the amount of memory optimally.
But in some unusual cases, especially with very large tables, it is possible for SQL Server to make a mistake, causing large indexes to be created very slowly, or not at all. If you run into this situation, you may want to consider setting the "index create memory" value yourself, although you will have to find the optimum setting for your situation through trial and error. Legal settings for this option run from 704 to 2147483647. This number refers to the amount of RAM, in KB, that SQL Server can devote to creating the index.
Keep in mind that if you do change the setting, that this memory will then be allocated for index creation and will not be available for other use. If your server has more than enough RAM, then this will be no problem. But if your server is short on RAM, changing this setting could negatively affect the performance of other aspects of SQL Server. You might consider making this change only when you are creating or rebuilding large indexes, and return the setting to the default all other times.
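If you do go down this road, one approach is to raise the setting just before a large index build and return it to the default immediately afterward. A sketch, with a purely illustrative value (and assuming "show advanced options" is already on):
SP_CONFIGURE 'index create memory', 8192 -- illustrative: 8MB, in KB, for index-creation sorts
GO
RECONFIGURE WITH OVERRIDE
GO
-- create or rebuild the large index here (CREATE INDEX, DBCC DBREINDEX, etc.)
SP_CONFIGURE 'index create memory', 0 -- return to the self-tuning default
GO
RECONFIGURE WITH OVERRIDE
GO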
As with the other settings, if you find in your audit that this setting is some value other than the default, try to find out why. If you can't find out why, or if there is not a good reason, change it back to the default value.
 
Lightweight Pooling
SQL Server 7.0 and 2000, by default, run in what is called "thread mode." What this means is that SQL Server uses what are called UMS (User Mode Schedulers) threads to run user processes. SQL Server will create one UMS thread per processor, with each one taking turns running the many user processes found on a busy SQL Server. For optimum efficiency, the UMS attempts to balance the number of user processes run by each thread, which in effect tries to evenly balance all of the user processes over all the CPUs in the server.

SQL Server also has an optional mode it can run in, called fiber mode. In this case, SQL Server uses one thread per processor (like thread mode), but the difference is that multiple fibers are run within each thread. Fibers are used to assume the identity of the thread they are executing and are non-preemptive to other SQL Server threads running on the server. Think of a fiber as a "lightweight thread," which, under certain circumstances, takes less overhead than standard UMS threads to manage. Fiber mode is turned on and off using the "lightweight pooling" SQL Server configuration option. The default value is "0", which means that fiber mode is turned off.

So what does all this mean? Like everything, there are pros and cons to running in one mode over another. Generally speaking, fiber mode is only beneficial when all of the following conditions exist:
  • Two or more CPUs are found on the server (the more CPUs, the larger the benefit).

  • All of the CPUs are running near maximum (90-100%) most of the time.

  • There is a lot of context switching occurring on the server (as reported by the Performance Monitor counter System Object: Context Switches/sec). Generally speaking, more than 5,000 context switches per second is considered high.

  • The server is making little or no use of distributed queries or extended stored procedures.
If all of the above are true, then turning on the "lightweight pooling" option in SQL Server may yield a 5% or greater boost in performance.

But if these four circumstances are not all true, then turning on "lightweight pooling" could actually degrade performance. For example, if your server makes use of many distributed queries or extended stored procedures, then turning on "lightweight pooling" will definitely cause a problem, because they cannot make use of fibers, which means that SQL Server will have to switch back and forth from fiber mode to thread mode as needed, which hurts SQL Server's performance.
As with the other settings, if you find in your audit that this setting is some value other than the default, try to find out why. In addition, check to see if the four conditions above exist. If they do, then turning "lightweight pooling" on may be beneficial. If these four conditions do not exist, then use the default value of "0".
 
Locks
Each time SQL Server locks a record, the lock must be stored in memory. By default, the value for the "locks" option is "0", which means that lock memory is dynamically managed by SQL Server. Internally, SQL Server can reserve from 2% to 40% of available memory for locks. In addition, if SQL Server determines that allocating additional memory for locking could cause paging at the operating system level, it will not allocate the memory to locks, instead giving it up to the operating system in order to prevent paging.
In almost all cases, you should allow SQL Server to dynamically manage locks, leaving the default value as is. If you enter your own value for lock memory (legal values are from 5000 to 2147483647 KB), then SQL Server cannot dynamically manage this portion of memory, which could cause some other areas of SQL Server to experience poor performance.
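To get a feel for how much locking actually occurs on your server at busy times, one quick, hedged approach is to count the rows in master.dbo.syslockinfo (each row represents a currently held lock request), or simply run sp_lock. This is only a point-in-time sample, not a substitute for watching the Performance Monitor SQL Server:Locks object:
SELECT COUNT(*) AS current_lock_count
FROM master.dbo.syslockinfo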
If you get an error message that says you have exceeded the maximum number of locks available, you have these options:
  • Closely examine your queries to see if they are causing excessive locking. If they are, it is possible that performance is also being hurt because of a lack of concurrency in your application. It is better to fix bad queries than it is to allocate too much memory to tracking locks.

  • Reduce the number of applications running on the server.

  • Add more RAM to your server.

  • Boost the number of locks to a high value (based on trial and error). This is the least desirable option as giving memory to locks prevents it from being used by SQL Server for other purposes, as needed.
Do your best to resist using this option. If you find in your audit that this setting is some value other than the default, find out why. If you can't find out why, or if the reason is poor, change it back to the default value.
 
Max Degree of Parallelism
This option allows you to specify whether parallelism is turned on, turned off, or limited to only some of the CPUs in your server. Parallelism refers to the ability of the Query Optimizer to use more than a single CPU to execute a query. By default, parallelism is turned on and can use as many CPUs as there are in the server (unless this has been reduced by the "affinity mask" option). If your server has only one CPU, the "max degree of parallelism" value is ignored.
The default for this option is "0", which means that parallelism is turned on for all available CPUs. If you change this setting to "1", then parallelism is turned off for all CPUs. This option also allows you to specify how many CPUs can be used for parallelism. For example, if your server has 8 CPUs and you only want parallelism to run on 4 of them, you can specify a value of 4 for this option, although it is doubtful whether doing so would really provide any performance benefit.
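Note that you can also control parallelism for a single query with the MAXDOP query hint, which is often a better tool than changing the server-wide setting. Both approaches are sketched below; the value of 4 and the table name are illustrative only:
-- server-wide: limit parallel plans to 4 CPUs (assumes "show advanced options" is on)
SP_CONFIGURE 'max degree of parallelism', 4
GO
RECONFIGURE WITH OVERRIDE
GO
-- per-query: run this one query serially, leaving the server-wide setting alone
SELECT CustomerID, COUNT(*)
FROM Orders
GROUP BY CustomerID
OPTION (MAXDOP 1)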
If parallelism is turned on, as it is by default if you have multiple CPUs, then the query optimizer will evaluate each query for the possibility of using parallelism, which takes a little overhead. On many OLTP servers, the nature of the queries being run often doesn't lend itself to parallelism; examples include standard SELECT, INSERT, UPDATE, and DELETE statements. Because of this, the query optimizer wastes its time evaluating each query to see if it can take advantage of parallelism. If you know that your queries will probably never benefit from parallelism, you can save a little overhead by turning this feature off, so queries aren't evaluated for it.
Of course, if the nature of the queries that are run on your SQL Server can take advantage of parallelism, you will not want to turn parallelism off. For example, if your OLTP server runs many correlated subqueries, or other complex queries, then you will probably want to leave parallelism on. You will want to test this setting to see if making this particular change will help, or hurt, your SQL Server's performance in your unique operating environment.
In most cases, because most servers run both OLTP and OLAP queries, parallelism should be kept on. As part of your performance audit, if you find parallelism turned off, or if it is restricted, find out why. You will also want to determine if the server is virtually all OLTP-oriented. If so, then turning off parallelism might be justified, although you will want to thoroughly test this to see if turning it off helps or hurts overall SQL Server performance. But if the server runs mixed OLTP and OLAP, or mostly OLAP queries, then parallelism should be on for best overall performance.

Max Server Memory (MB) & Min Server Memory (MB)
For best SQL Server performance, you want to dedicate your SQL Servers to only running SQL Server, not other applications. And in most cases, the settings for "max server memory" and "min server memory" should be left at their default values. This is because the default values allow SQL Server to dynamically allocate memory in the server for the best overall performance. If you "hard code" a minimum or maximum memory setting, you risk hurting SQL Server's performance.
On the other hand, if SQL Server cannot be dedicated to its own physical server (other applications run on the same physical server along with SQL Server) you might want to consider changing either the minimum or maximum memory values, although this is generally not required.
Let's take a closer look at each of these two settings.
The "maximum server memory" setting, when set to the default value of 2147483647 (in MB), tells SQL Server to manage the use of memory dynamically, and if it needs it, to use as much RAM as is available (while leaving some memory for the operating system).
If you want SQL Server to not use all of the available RAM in the server, you can manually set the maximum amount of memory SQL Server can use by specifying a specific number that is between 4 (the lowest number you can enter) to the maximum amount of RAM in your server (but don't allocate all the RAM in your server, as the operating system needs some RAM too).
Only in cases when SQL Server has to share memory with other applications on the same server, or when you want to artificially keep SQL Server from using all of the RAM available to it, would you want to change the default value. For example, if your "other" application(s) are more important than SQL Server's performance, then you can restrain SQL Server's performance if you want.
There are also two potential performance issues you can create if you do attempt to set the "max server memory" setting manually. First, if you allocate too much memory to SQL Server, and not enough for other applications or the operating system, then the operating system may have no choice but to begin excessive paging, which will slow the performance of your server. Second, if you are using the Full-Text Search service, you must also leave plenty of memory for its use. Its memory is not dynamically allocated like the rest of SQL Server's memory, and there must be enough available memory for it to run properly.
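As an illustration, on a shared server where you want to cap SQL Server at, say, 1024 MB (a purely illustrative figure), the change would look like this:
SP_CONFIGURE 'max server memory', 1024 -- illustrative cap, in MB
GO
RECONFIGURE WITH OVERRIDE
GO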
The "min server memory" setting, when set to the default value of 0 (in MB), tells SQL Server to manage the use of memory dynamically. This means that SQL Server will start allocating memory as is needed, and the minimum amount of RAM used can vary as SQL Server's needs vary.
If you change the "min server memory" setting to a value other than the default of 0, this does not mean that SQL Server will immediately begin using that amount of memory, as many people assume. It means that once SQL Server's memory usage reaches the specified minimum (because the memory is needed), the amount allocated will never drop below that minimum.
For example, if you specify a minimum value of 100 MB and then restart SQL Server, SQL Server will not immediately reserve 100 MB of RAM for its use. Instead, SQL Server will only take as much as it needs. If it never needs 100 MB, then 100 MB will never be reserved. But once SQL Server's memory usage does exceed 100 MB, that 100 MB becomes the bottom limit below which SQL Server's memory allocation will not fall, even if some of that memory is no longer needed. Because of this behavior, there is little reason to change the "min server memory" setting to any value other than its default.
If your SQL Server is dedicated, there is no reason to use the "min server memory" setting at all. If you are running other applications on the same server as SQL Server, there might be a very small benefit of changing this setting to a minimum figure, but it would be hard to determine what this value should be, and the overall performance benefit would be negligible.
If you find in your audit that these settings are some value other than the default, find out why. If you can't find out why, or if the reason is poor, change them back to their default values.
 
Max Text Repl Size
The "max text repl size" setting is used to specify the maximum size of text or image data that can be inserted into a replicated column in a single physical INSERT, UPDATE, WRITETEXT, or UPDATETEXT transaction. If you don't use replication, or if you don't replicate text or image data, then this setting should not be changed.
The default value is 65536, the minimum value is 0, and the maximum value is 2147483647 (in bytes). If you do heavy replication of text or image data, you might want to consider increasing this value, but only if the size of this data exceeds 64K. As with most of these settings, you will have to experiment with various values to see what works best for your particular circumstances.
As part of your audit, if you don't use replication, the only correct value here is the default value. If the default value has been changed, you need to investigate whether text or image data is being replicated. If not, or if the data is less than 64K, then change it back to the default value.
 
Max Worker Threads
The "max worker threads" SQL Server configuration setting is used to determine how many worker threads are made available to the sqlservr.exe process from the operating system. The default value is 255 worker threads for this setting. SQL Server itself uses some threads, but they will be ignored for this discussion. The focus here is on threads created for the benefit of users.
If there are more than 255 user connections, then SQL Server will use thread pooling, where more than one user connection shares a single worker thread. Although thread pooling reduces the amount of system resources used by SQL Server, it can also increase contention among the user connections for access to SQL Server, hurting performance.
To find out how many worker threads your SQL Server is using, check the number of connections that are currently made to your server using Enterprise Manager. For each SQL Server connection, one worker thread is created, up to the total number of worker threads specified in the "max worker threads" setting. For example, if there are 100 connections, then 100 worker threads would be employed. But if there are 500 connections and only 255 worker threads are available, then only 255 worker threads are used, with all the open connections sharing them.
Assuming there is enough RAM in your server, for best performance, you will want to set the "max worker threads" setting to a value equal to the maximum number of user connections your server ever experiences, plus 5. But there are some limitations to this general recommendation, as we will soon see.
As has already been mentioned, the default value for the "max worker threads" is 255. If your server will never experience over 255 connections, then this setting should not be changed from its default value. This is because worker threads are only created when needed. If there are only 50 connections to the server, there will only be that many worker threads, not 255 (the default value).
If you generally have over 255 connections to your server, and "max worker threads" is set to the default value of 255, then SQL Server will begin thread pooling. Now comes the dilemma. If you increase "max worker threads" so that there is one thread for each connection, SQL Server will take up additional resources (mostly memory). If you have plenty of RAM in your server that is not being used by SQL Server or any other application, then boosting "max worker threads" can help boost the performance of SQL Server.
But if you don't have any extra RAM available, then adding more worker threads can hurt SQL Server's performance. In this case, allowing SQL Server to use thread pooling offers better performance, because thread pooling uses fewer resources. On the downside, thread pooling can introduce resource contention between connections. For example, two connections sharing a thread can conflict when both want to perform some task at the exact same time (which can't be done, because a single thread can only service a single connection at a time).
So what do you do? In brief, if your server normally has fewer than 255 connections, leave this setting at its default value. If your server has more than 255 connections, and if you have extra RAM, then consider bumping up the "max worker threads" setting to the number of connections plus 5. But if you don't have any extra RAM, then leave the setting at its default value. For SQL Servers with thousands of connections, you will have to experiment to find the fine line between the extra resources used by additional worker threads and the contention between connections all fighting for the same worker threads.
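To apply the "connections plus 5" rule of thumb, you first need a feel for your peak connection count. One hedged way to sample it is to count the rows in master.dbo.sysprocesses during busy periods (the count includes a handful of system processes, so treat it as approximate); the figure of 305 below assumes an observed peak of around 300 connections and is illustrative only:
SELECT COUNT(*) AS current_connections
FROM master.dbo.sysprocesses
GO
SP_CONFIGURE 'max worker threads', 305 -- observed peak connections + 5
GO
RECONFIGURE WITH OVERRIDE
GO
-- per the checklist above, this change requires a restart of the SQL Server service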
As you might expect, before using this setting in production, you will want to test your server's performance before and after the change to see if SQL Server benefited, or was hurt, from the change.
As part of your audit, follow the advice just given above for how to set this setting.
 
Min Memory Per Query
When a query runs, SQL Server does its best to allocate the optimum amount of memory for it to run efficiently and quickly. By default, the "minimum memory per query" setting allocates 1024 KB, as a minimum, for each query to run. The "minimum memory per query" setting can be set from 0 to 2147483647 KB.
If a query needs more memory to run efficiently, and it is available, then SQL Server automatically assigns more memory to the query. Because of this, changing the "min memory per query" setting from its default is generally not advised.
In some cases, if your SQL Server has more RAM than it needs to run efficiently, the performance of some queries can be boosted if you increase the "minimum memory per query" setting to a higher value, such as 2048 KB, or perhaps a little higher. As long as there is "excess" memory available in the server (essentially, RAM that is not being used by SQL Server), then boosting this setting can help overall SQL Server performance. But if there is no excess memory available, increasing the amount of memory for this setting is more likely to hurt overall performance, not help it.

Nested Triggers
This configuration option does affect performance, but not in the conventional way. By default, the "nested triggers" option is set to "1". This means that nested triggers (triggers that fire other triggers, up to a maximum nesting depth of 32) are allowed to run. If you change this setting to "0", then nested triggers are not permitted. Obviously, by not allowing nested triggers, overall performance can be improved, but at the cost of application flexibility.
This setting should be left to its default value, unless you want to prevent developers from using nested triggers. Also, some third-party applications could fail if you turn off nested triggers, assuming they depend on them.
 
Network Packet Size (B)
"Network packet size" determines the size of the packet size SQL Server uses when it talks to clients over a network. The default value is 4096 bytes, with a legal range from a minimum of 512 bytes, to a maximum value which is based on the maximum amount of data that the network protocol you are using supports.
In theory, by changing this value, performance can be boosted if the size of the packet more or less matches the size of the data in the packet. For example, if the data is small, less than 512 bytes on average, changing the default value of 4096 bytes to 512 bytes can boost performance. Or, if you are doing a lot of data movement, such as with bulk loads, of if you deal with a lot of TEXT or IMAGE data, then by increasing the default packet size to a number larger than 4096 bytes, then it will take fewer packets to send the data, resulting in less overhead and better performance.
In theory, this sounds great. In reality, you will see little, if any, performance boost. This is because there is no such think as an average data size. In some cases data is small, and in other cases, data is very large. Because of this, changing the default value of the "network packet size" is generally not very useful.
As a part of your audit, carefully question any value for this setting other than the default. If you can't get a good answer, change it back.
 
Open Objects
"Open objects" refers to the total number of objects (such as tables, views, rules, defaults, triggers, and stored procedures) that can be open at the same time in SQL Server. The default setting for this option, which is "0", tells SQL Server to dynamically increase or decrease this number in order to obtain the best overall performance of the server.
In rare cases, generally when server memory is fully used, it is possible to get a message telling you that you have exceeded the number of open objects available. The best solution to this is to increase the server's memory, or to reduce the load on the server, such as reducing the number of databases maintained on the server.
If neither of the above options are practical, you can manually configure the maximum number of available open objects by setting the "open objects" value to an appropriately high enough setting. The problem with this is twofold. First, determining the proper value will take much trial and error. Second, any memory allocated to open objects will be taken away from other SQL Server needs, potentially hurting the server's overall performance. Sure, now your application will run when you change this setting, but it will run slower. Avoid changing this setting.
As you are performing your audit, if you find any setting other than "0", then either someone made a mistake and it needs to be corrected, or the server's hardware is too small and more RAM needs to be added, or some of this server's work needs to be moved to another, less busy, server.
 
Priority Boost
By default, the SQL Server processes run at the same priority as any other applications on a server. In other words, no single application process has a higher priority than another when it comes to getting and receiving CPU cycles.
The "priority boost" configuration option allows you to change this. The default value for this option is "0", means that the priority of SQL Server processes is the same as all other application processes. If you change it to "1", then SQL Server now has a higher priority than other application processes. In essence, this means that SQL Server has first priority to CPU cycles over other application processes running on the same server. But does this really boost performance of SQL Server?
Let's look at a couple of scenarios. First, let's assume a server runs not only SQL Server, but other apps (not recommended for best performance, but a real-world possibility), and that there is plenty of CPU power available. If this is the case, and if you give SQL Server a priority boost, what happens? No much. If there is plenty of CPU power available, a priority boost doesn't mean much. Sure, SQL Server might gain a few milliseconds here and there as compared to the other applications, but I doubt if you would be able to notice the difference.
Now let's look at a similar scenario as above, but let's assume that CPU power is virtually all exhausted. If this is the case, and SQL Server is given a priority boost, sure, SQL Server will now get its work done faster, but only at the cost of slowing down the other applications. If this is what you want, OK. But a better solution would be to boost CPU power on the server, or reduce the server's load.
But what if SQL Server is running on a dedicated server with no other applications and if there is plenty of excess CPU power available? In this case, boosting the priority will not gain a thing, as there is nothing competing (other than part of the operating system) for CPU cycles, and besides, there are plenty of extra cycles to go around.
And last of all, if SQL Server is on a dedicated server, and the CPU is maxed out, giving it a priority boost is a zero sum game as parts of the operating system could potentially be negatively affected if you do. And the gain, if any, will be very little for SQL Server.
As you can see, this option is not worth the effort. In fact, Microsoft has documented several problems related to using this option, which makes this option even less desirable to try.
If you find this option turned on in your audit, question its purpose. If you currently are not having any problems with it on, you can probably leave it on without issues. But I would recommend setting it back to its default.
 
Query Governor Cost Limit
The "query governor cost limit" option allows you to limit the maximum length of time a query can run, and is one of the few SQL Server configuration options that I endorse. For example, let's say that some of the users of your server like to run very long-running queries that really hurt the performance of your server. By setting this option, you could prevent them from running any queries that exceeded, say 300 seconds (or whatever number you pick). The default value for this setting is "0", which means that there are no limits to how long a query can run.
The value you set for this option is approximate, and is based on how long the Query Optimizer estimates the query will run. If the estimate is more than the time you have specified, the query won't run at all, producing an error instead. This can save a lot of valuable server resources.
On the other hand, users can get really unhappy with you if they can't run the queries they need to run in order to do their jobs. What you might consider doing is helping those users write more efficient queries. That way, everyone will be happy.
Unlike most of my other suggestions, if your audit turns up a value here other than "0", great. As long as users aren't complaining, this is a good deal. In fact, if this setting is set to "0", consider adding a value here and see what happens. Just don't make it too small. You might consider starting with a value of about 600 seconds and see what happens. If that is OK, then try 500 seconds, and so on, until users start complaining, at which point you can back off.
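Also note that the same limit can be applied per connection with the SET QUERY_GOVERNOR_COST_LIMIT statement, which is a convenient way to trial a value before imposing it server-wide. A sketch using the 600 figure from above:
SP_CONFIGURE 'query governor cost limit', 600 -- server-wide (assumes advanced options are on)
GO
RECONFIGURE WITH OVERRIDE
GO
SET QUERY_GOVERNOR_COST_LIMIT 600 -- or: for the current connection only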
 
Query Wait (s)
If SQL Server is very busy and is hurting for more memory resources, it will queue what it considers memory-intensive queries (those that use sorting or hashing) until there is enough memory available to run them. In some cases, there just isn't enough memory to run them and they eventually time out, producing an error message. By default, a query will time out after a period of time equal to 25 times the estimated amount of time the Query Optimizer thinks it will take for the query to run.
The best solution for such a problem is to add more memory to the server, or to reduce its load. But if that can't be done, one option, although fraught with problems of its own, is to use the "query wait" configuration option. The default setting for this option is "-1", which waits the time period described above, and then causes the query to time out. If you want the time out period to be greater so that queries won't time out, you can set the "query wait" time to a large enough number. As you might guess, you will have to determine this time out number yourself through trial and error.
The problem with using this option is that a transaction with an intensive query may be holding locks while it waits, which can cause deadlocks or other locking contention problems, which in the end may be a bigger problem than the query timing out. Because of this, changing this option is not recommended.
If you find a non-default value in your audit, find out why. If there is no good reason to keep it, change it back to the default value. But, if someone has thought this out thoroughly, and if you cannot detect any locking issues, then consider leaving this option as is.

Recovery Interval (min)
If you have a very active OLTP server application with many INSERTS, UPDATES, and DELETES, it is possible that the default "recovery interval" of 0 (which means that SQL Server automatically determines the appropriate recovery interval) may not be appropriate. If you are watching the performance of your server with the Performance Monitor and notice that you have regular periods of 100% disk-write activity (occurring during the checkpoint process), you may want to set the "recovery interval" to a higher number, such as 5 or 10. This figure refers to the maximum number of minutes it will take SQL Server to perform a recovery after it is restarted. The default figure of 0, in effect, works out to be about a maximum recovery period of 1 minute.
Another potential reason to use this "recovery interval" option is if the server is devoted to OLAP or a data warehouse. In these instances, these mostly read-only databases don't generally benefit from a short recovery interval.
If your server does not match any of the above suggestions, then leaving the default value is generally the best choice.
By extending the checkpoint time, you reduce the number of times SQL Server performs a checkpoint, and in effect, reduce some of SQL Server's overhead. You may need to experiment with this figure in order to find the ideal compromise between performance and the time it takes for SQL Server to perform a recovery.
Ideally, you want to keep this number as small as possible in order to reduce the amount of time it takes to restart the mssqlserver service. This is because each time the mssqlserver service starts, it goes through an automatic recovery process, and the larger the "recovery interval" is set, the longer the recovery process can take. You must decide on the compromise between performance and recovery time that best fits your needs.
As a part of your audit, you will want to evaluate the current setting for "recovery interval" in regards to its potential use. For busy OLTP servers, you will want to do a lot of research before you decide to increase the "recovery interval", to see if it will help or not. Testing is important. But if your server is a dedicated OLAP or data warehouse server, increasing the "recovery interval" is an easy decision to make.
 
Scan for Startup Procs
SQL Server has the ability, if properly configured, to look for stored procedures to run automatically when the mssqlserver service starts. This can be handy if you want a particular action to occur on startup, such as the loading of a specific stored procedure into cache so that it is already there when users begin accessing the server.
By default, the "scan for startup procs" is set to "0", which means that a scan for stored procedures is not done at startup. If you don't have any startup stored procedures, then this is the obvious setting. There is no point spending resources looking for stored procedures that don't exist.
But if you do have one or more stored procedures you want to execute on server startup, then this option has to be set to "1", which turns on the startup scan.
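For example, you mark a stored procedure for execution at startup with sp_procoption; the procedure name below is hypothetical, and the procedure must live in the master database and be owned by dbo. (sp_procoption is generally expected to manage the startup scan setting for you, but it is worth verifying with SP_CONFIGURE afterward.)
EXEC sp_procoption 'usp_WarmCache', 'startup', 'true'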
If you find in your audit that this is set to "1", check to see if there are any start-up stored procedures. If not, then return this option back to the default setting.
 
Set Working Set Size
The "set working set size" option is used when you want to fix the minimum and maximum sizes of the amount of memory that is to be used by SQL Server when it starts. This option also helps prevents any page swapping.
By default, this setting is set to "0", which means that this option is not used. To turn on this option, it must be set to "1", plus, the minimum server memory size and the maximum memory sizes must be set to the same value. This is the value used to reserve the working set size.
As with most options, this one should not generally be necessary. The only time you might want to consider it is if the server is dedicated to SQL Server, has a very heavy load, and has sufficient memory available. Even then, any performance boost gained will be minimal, and you risk the potential of not leaving enough memory to the operating system. Testing is key to the successful use of this option.
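If, after testing, you do decide to use it, all three related settings must be changed together, something like the sketch below. The 2048 MB figure is purely illustrative, and a restart is required:
SP_CONFIGURE 'min server memory', 2048 -- illustrative; must equal max server memory
GO
SP_CONFIGURE 'max server memory', 2048
GO
SP_CONFIGURE 'set working set size', 1
GO
RECONFIGURE WITH OVERRIDE
GO
-- restart the SQL Server service for the change to take effect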
If this option is set to a value other than the default, check also to see if the "min server memory" and "max server memory" settings are set to the same value; otherwise this option will not work correctly. If the conditions above exist, and if thorough testing has been done, then consider leaving this setting. Otherwise, change it back to the default (don't forget to change back all three related settings).
 
User Connections
By default, SQL Server only allocates as many user connections as it needs. This allows those who need to connect to connect, while at the same time minimizing the amount of memory used. When the "user connections" setting is set to its default value of "0", user connections are dynamically set. Under virtually all circumstances, this is the ideal setting.
If you change the default value for "user connections," what you are telling SQL Server to do is to allocate only the number of user connections you have specified, no more or no less. Also, it will allocate memory for every user connection specified, whether or not it is being used. Because of these problems, and because SQL Server can perform this task automatically and efficiently, there is no reason to change this setting from the default.
If your audit shows a value other than "0", change it back to zero. Don't even bother asking why.
 
Now What?
Your goal should be to perform this part of the performance audit, described on this page, for each of your SQL Servers, and then use this information to make changes as appropriate, assuming you can.
Once you have completed this part of the performance audit, you are now ready to audit your SQL Server database configurations.