Whether you’re a consultant or an in-house DBA, you need to have a reliable inventory of the servers you manage. This inventory can take any number of forms but, ideally, will allow some aggregation of information. For example, your database data and log files will grow in size over time. By aggregating the space used by your databases, and comparing it to the available space on the physical drives, you can predict when you will need to add storage. When server hardware or software changes take place, it’s easy to forget that the inventory has to be updated as well. For this reason, I like to have a script that creates the inventory, rather than having to keep a manual log of changes as they occur. PowerShell allows me to do just that.

PowerShell makes it easy to issue WMI (Windows Management Instrumentation) queries, which we can use to collect important non-SQL Server information about our servers. For example, we can keep track of the Name, Model, Manufacturer and Domain names, the operating system name, version, free physical memory, service pack information, physical memory configuration, the sizes of the locally attached disks, and the free space on each disk.

We can also, via a Server Management Objects (SMO) connection, use PowerShell to gather SQL Server information such as edition, version, logins, databases, configuration and errors. As such, PowerShell offers a simple way of gathering a complete inventory of your physical servers, your SQL Server instances, and the databases they house.

Win32_PingStatus: is the Server online?

The first WMI command we’ll use is Win32_PingStatus, which ‘pings’ each of the designated servers and returns a set of results, including a StatusCode for each server. If the value of this property is zero, then the server responded successfully, and we know that we can communicate with it.

The first thing to do is create a file, called servers.txt, containing the names of the servers for which you want to produce an inventory. This is a simple text file, with one server name per line. Here’s an example:

SQLTBWS
SQLPR01
SQLDEV02

As a quick demonstration of how the Win32_PingStatus query works, here’s the basic PowerShell script:

$servers = Get-Content 'servers.txt'

foreach ($server in $servers) {
    # Ping the machine to see if it's on the network
    $results = Get-WMIObject -query "select StatusCode from Win32_PingStatus where Address = '$server'"
    $responds = $false
    foreach ($result in $results) {
        # If the machine responds, break out of the result loop and indicate success
        if ($result.statuscode -eq 0) {
            $responds = $true
            break
        }
    }

    if ($responds) {
        # Gather info from the server because it responds
        Write-Output "$server responds"
    } else {
        # Let the user know we couldn't connect to the server
        Write-Output "$server does not respond"
    }
}

The first line of the script reads the contents of the servers.txt file into a variable called $servers. This is a collection of the lines in the text file, and we can iterate through the collection using a foreach loop. Once for each server, we issue the following WMI query, using the Get-WMIObject cmdlet:

   $results = Get-WMIObject -query "select StatusCode from Win32_PingStatus where Address = '$server'"

The result set is sent to a variable called $results. After that, we set the status of a variable called $responds to the built-in value of $false. We then iterate through the result set and set the $responds variable to $true for each server where the StatusCode property is set to 0.

Once we’ve broken out of the foreach loop, we test the value of the $responds variable. If it is set to $true, the script prints out the name of the server and indicates that it is responsive. Otherwise, it names the server and indicates that it isn’t responding. The results of this script (given that the SQLTBWS server is the only one on the network) are:

   SQLTBWS responds
SQLPR01 does not respond
SQLDEV02 does not respond

GetWMIInfo: Gathering System Information

Having established a server’s responsiveness, the next thing we want to do is gather some system information from each server, for our inventory. PowerShell functions are a powerful way to control the work your script is doing so, rather than use in-line commands in the script, we’ll create a function called GetWMIInfo. This function will query the following WMI classes to gather various bits of system information:

  • Win32_ComputerSystem – provides hardware information, such as computer name and model, number of processors, and so on
  • Win32_OperatingSystem – provides OS information, such as the OS type, service pack installed etc.
  • Win32_PhysicalMemory – provides details of the physical memory devices, such as capacity etc.
  • Win32_LogicalDisk – provides details of local storage devices, including size, amount of free space etc.

First, in order to lay the groundwork, we create a directory with the name of the server, under our current directory. This is where we’ll store the results we get from our inventory queries. Then, we simply call the GetWMIInfo function, passing it the name of the server to which we are connected:

   # Check to see if a directory exists for this machine, if not create one
   if (!(Test-Path -path .\$server)) {
       New-Item .\$server\ -type directory
   }

   # Get the server info
   getwmiinfo $server

Next, here is the definition of the GetWMIInfo function (PowerShell is an interpreted language, so any functions must be defined before they’re called in your script):

function getwmiinfo ($svr) {
    # Get ComputerSystem info and write it to a CSV file
    gwmi -query "select * from Win32_ComputerSystem" -computername $svr |
        select Name, Model, Manufacturer, Description, DNSHostName,
            Domain, DomainRole, PartOfDomain, NumberOfProcessors,
            SystemType, TotalPhysicalMemory, UserName, Workgroup |
        export-csv -path .\$svr\BOX_ComputerSystem.csv -noType

    # Get OperatingSystem info and write it to a CSV file
    gwmi -query "select * from Win32_OperatingSystem" -computername $svr |
        select Name, Version, FreePhysicalMemory, OSLanguage, OSProductSuite,
            OSType, ServicePackMajorVersion, ServicePackMinorVersion |
        export-csv -path .\$svr\BOX_OperatingSystem.csv -noType

    # Get PhysicalMemory info and write it to a CSV file
    gwmi -query "select * from Win32_PhysicalMemory" -computername $svr |
        select Name, Capacity, DeviceLocator, Tag |
        export-csv -path .\$svr\BOX_PhysicalMemory.csv -noType

    # Get LogicalDisk info and write it to a CSV file
    gwmi -query "select * from Win32_LogicalDisk where DriveType=3" -computername $svr |
        select Name, FreeSpace, Size |
        export-csv -path .\$svr\BOX_LogicalDisk.csv -noType
}

Each call to Get-WMIObject (aliased here as gwmi) selects all properties from each WMI class, and then the results are piped to a select (technically the Select-Object cmdlet, here aliased as "select"), listing the properties I want.

Finally, each result set is piped to the Export-CSV cmdlet, which sends the results to a comma-separated file, the name and location of which is specified by the value after the –path parameter. I use CSV files because it means that I can run this script at a client site without needing to create a database somewhere on their system to store this information. I can use Excel to look at the individual CSV files, or I can build an Integration Services package to build a database on my own system, for later aggregation.

Note that for the last class, Win32_LogicalDisk, I only select the results where DriveType=3, which means that I only collect information for locally attached disk drives. This way I don’t get network drives or CD/DVD drives. However, it does return information on SAN-attached drives, which is important.
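Once the CSV files exist, the disk figures can be rolled up across all servers without loading a database at all. Here’s a quick sketch of that kind of aggregation; it assumes the script has already been run from the current directory, so that each server’s sub-directory contains a BOX_LogicalDisk.csv file:

   # Roll up the logical disk information collected for every server,
   # converting bytes to gigabytes for readability
   Get-ChildItem -Recurse -Filter BOX_LogicalDisk.csv | ForEach-Object {
       $server = $_.Directory.Name
       Import-Csv $_.FullName |
           select @{n='Server';e={$server}}, Name,
               @{n='SizeGB';e={[math]::Round($_.Size/1GB,1)}},
               @{n='FreeGB';e={[math]::Round($_.FreeSpace/1GB,1)}}
   } | Sort-Object Server, Name | Format-Table -AutoSize

Run periodically, the output gives you the per-drive trend data you need to predict when a server will run short of space.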

SMO and PowerShell: Gathering SQL Server Information

Now we’re ready to start gathering SQL Server information. We’ll do this by creating a PowerShell function called GetSQLInfo, which will gather the inventory information about our SQL Server instances, such as version, edition, build number, configuration, session and lock information, and logins, as well as information about each database, including the physical file names and locations, and error log contents.

The first thing we need to do, at the very top of our script, is to load the SMO (Server Management Object) assembly.

[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.SMO") | Out-Null

For SQL Server 2005 and later, we can use the ManagedComputer object in SMO to return the set of instances on the server. However, if you have any SQL Server 2000 instances in your environment, as I do, this will not work. Instead we’ll modify the servers.txt file to list the server and instance names, like this:

SQLTBWS,SQLTBWS\INST01
SQLTBWS,SQLTBWS\INST02
SQLTBWS,SQLTBWS

The first value (before the comma) is the computer name, and the value after that is the name of the SQL Server instance from which we want to gather information. The third line indicates we’re requesting information from the default instance of SQL Server. As we iterate through each server, using ForEach-Object, we separate out the server and instance names as follows:

foreach ($sv in $servers) {
    # Separate the server and instance names
    $srvr = $sv.Split(",")
    $server = $srvr[0]
    $instance = $srvr[1]

Then, we call our GetSQLInfo function, passing in the server and instance details:

   getsqlinfo $server $instance
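As an aside, if your environment contains no SQL Server 2000 instances, the ManagedComputer object mentioned earlier can enumerate the instances on a machine for you, which removes the need to list them in servers.txt. A minimal sketch, assuming the SqlWmiManagement assembly that ships with SQL Server 2005 and later is installed:

   # Enumerate the SQL Server instances on a machine via SMO's WMI provider
   # (SQLTBWS is the example machine name used throughout this article)
   [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.SqlWmiManagement") | Out-Null
   $mc = new-object ('Microsoft.SqlServer.Management.Smo.Wmi.ManagedComputer') 'SQLTBWS'
   $mc.ServerInstances | select Name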

Let’s take a look at the definition of the GetSQLInfo function. Because we have two parameters, the server name and the instance name, we use the param block method of defining our parameters, as follows:

function getsqlinfo {
    param (
        [string]$svr,
        [string]$inst
    )

We need an SMO connection to the instance to gather management information from the instance, but we also need an ADO.Net connection to run queries to get configuration and session information. We make those connections as follows:

    # Create an ADO.Net connection to the instance
    $cn = new-object system.data.SqlClient.SqlConnection(
        "Data Source=$inst;Integrated Security=SSPI;Initial Catalog=master");

    # Create an SMO connection to the instance
    $s = new-object ('Microsoft.SqlServer.Management.Smo.Server') $inst

Next, we need to grab the instance name, without including the machine name, so that we can assign it to the name of the output CSV files. If the instance is the default instance we’ll set it, as SQL Server does, to MSSQLSERVER:

    # Extract the specific instance name, and set it to MSSQLSERVER if it's the default instance
    $nm = $inst.Split("\")
    if ($nm.Length -eq 1) {
        $instnm = "MSSQLSERVER"
    } else {
        $instnm = $nm[1]
    }

We’re now ready to start gathering information from each server. SMO provides an Information class, which contains valuable information about a given SQL Server instance such as the edition, the SQL Server version, as well as the path to the error log and the master database and log files. Therefore, the first thing we do is extract this data from the Information class, and pipe it directly to its own CSV file:

    # Set the CSV output file name and pipe the instance's Information collection to it
    $outnm = ".\" + $svr + "\" + $instnm + "_GEN_Information.csv"
    $s.Information | export-csv -path $outnm -noType

Next, we want to get the configuration settings, the current session information, and any locks present at this time. The easiest way to do this is to execute the sp_configure, sp_who and sp_lock system procedures, via ADO.Net. Remember, we’re running the same script whether we’re gathering information from SQL Server 2000, SQL Server 2005 or SQL Server 2008.  We can use SMO methods to gather this information, but it’s quite verbose, and each new version of SQL Server adds additional configuration options. By using sp_configure to return a result set, we can use the same query no matter which version of SQL Server we’re collecting data from.

First, we need to enable the ShowAdvancedOptions option of the SMO Configuration object so we get all the configuration options, not just the minimal set.  We do this as follows:

# Set ShowAdvancedOptions ON for the query

$s.Configuration.ShowAdvancedOptions.ConfigValue = 1

$s.Configuration.Alter()

We use an ADO.Net DataSet object because it allows us to execute all three system procedures in one round trip. So, first we create the DataSet object, then build the three queries into a text string, then use a SqlDataAdapter object to fill the DataSet:

    # Create a DataSet for our configuration information
    $ds = new-object "System.Data.DataSet" "dsConfigData"

    # Build our query to get configuration, session and lock info, and execute it
    $q = "exec sp_configure; "
    $q = $q + "exec sp_who; "
    $q = $q + "exec sp_lock; "
    $da = new-object "System.Data.SqlClient.SqlDataAdapter" ($q, $cn)
    $da.Fill($ds)

The query contains three result sets, so the DataSet has three DataTables within it. By defining three DataTable objects, and setting each of those objects to each of the three result sets, we can then export the results of each set of inventory queries to the appropriate CSV files:

    # Build DataTables for the config data, load them from the query results, and write them to CSV files
    $dtConfig = new-object "System.Data.DataTable" "dtConfigData"
    $dtWho = new-object "System.Data.DataTable" "dtWhoData"
    $dtLock = new-object "System.Data.DataTable" "dtLockData"
    $dtConfig = $ds.Tables[0]
    $dtWho = $ds.Tables[1]
    $dtLock = $ds.Tables[2]

    $outnm = ".\" + $svr + "\" + $instnm + "_GEN_Configure.csv"
    $dtConfig | select name, minimum, maximum, config_value, run_value | export-csv -path $outnm -noType

    $outnm = ".\" + $svr + "\" + $instnm + "_GEN_Who.csv"
    $dtWho | select spid, ecid, status, loginame, hostname, blk, dbname, cmd, request_id | export-csv -path $outnm -noType

    $outnm = ".\" + $svr + "\" + $instnm + "_GEN_Lock.csv"
    $dtLock | select spid, dbid, ObjId, IndId, Type, Resource, Mode, Status | export-csv -path $outnm -noType

Being good citizens, when we’re done gathering the configuration information we turn Show Advanced Options back off.

    # Set ShowAdvancedOptions OFF now that we're done with Config
    $s.Configuration.ShowAdvancedOptions.ConfigValue = 0
    $s.Configuration.Alter()

We also want to know the logins on the server. We can retrieve this information from the Logins collection of the SMO Server object:

    # Write the login name and default database for Logins to a CSV file
    $outnm = ".\" + $svr + "\" + $instnm + "_GEN_Logins.csv"
    $s.Logins | select Name, DefaultDatabase | export-csv -path $outnm -noType

Finally, we can gather information about each database on the instance. First we create a CSV file in which to store the general information for all the databases on the instance:

    # Write information about the databases to a CSV file
    $outnm = ".\" + $svr + "\" + $instnm + "_GEN_Databases.csv"
    $dbs = $s.Databases
    $dbs | select Name, Collation, CompatibilityLevel, AutoShrink,
        RecoveryModel, Size, SpaceAvailable | export-csv -path $outnm -noType

Then, we iterate through each database, gathering the inventory data. If the IsSystemObject property is true (which it will be for master, model, msdb and tempdb) we indicate that the database is a system database, using SDB in the file name, and if not we indicate it’s a user database, using UDB in the name:

    foreach ($db in $dbs) {
        # Write the information about the physical files used by the database to CSV files for each database
        $dbname = $db.Name
        if ($db.IsSystemObject) {
            $dbtype = "_SDB"
        } else {
            $dbtype = "_UDB"
        }

Now to the specific inventory data. We first get the list of users in this database:

        # Write the user information to a CSV file
        $users = $db.Users
        $outnm = ".\" + $svr + "\" + $instnm + $dbtype + "_" + $dbname + "_Users.csv"
        $users | select $dbname, Name, Login, LoginType, UserType, CreateDate |
            export-csv -path $outnm -noType

We need to iterate through the filegroups to get the physical data file information, and then use the Logs collection for the database to get the physical log file information:

        $fgs = $db.FileGroups
        foreach ($fg in $fgs) {
            $files = $fg.Files
            $outnm = ".\" + $svr + "\" + $instnm + $dbtype + "_" + $dbname + "_DataFiles.csv"
            $files | select $db.Name, Name, FileName, Size, UsedSpace |
                export-csv -path $outnm -noType
        }

        $logs = $db.LogFiles
        $outnm = ".\" + $svr + "\" + $instnm + $dbtype + "_" + $dbname + "_LogFiles.csv"
        $logs | select $db.Name, Name, FileName, Size, UsedSpace |
            export-csv -path $outnm -noType

}

Finally, we export the current error log and the last six error logs out to CSV files so we can check to see if anything is amiss:

    # Create CSV files for each ErrorLog file
    $outnm = ".\" + $svr + "\" + $instnm + "_ERL_ErrorLog.csv"
    $s.ReadErrorLog() | export-csv -path $outnm -noType
    $outnm = ".\" + $svr + "\" + $instnm + "_ERL_ErrorLog_1.csv"
    $s.ReadErrorLog(1) | export-csv -path $outnm -noType
    $outnm = ".\" + $svr + "\" + $instnm + "_ERL_ErrorLog_2.csv"
    $s.ReadErrorLog(2) | export-csv -path $outnm -noType
    $outnm = ".\" + $svr + "\" + $instnm + "_ERL_ErrorLog_3.csv"
    $s.ReadErrorLog(3) | export-csv -path $outnm -noType
    $outnm = ".\" + $svr + "\" + $instnm + "_ERL_ErrorLog_4.csv"
    $s.ReadErrorLog(4) | export-csv -path $outnm -noType
    $outnm = ".\" + $svr + "\" + $instnm + "_ERL_ErrorLog_5.csv"
    $s.ReadErrorLog(5) | export-csv -path $outnm -noType
    $outnm = ".\" + $svr + "\" + $instnm + "_ERL_ErrorLog_6.csv"
    $s.ReadErrorLog(6) | export-csv -path $outnm -noType
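Incidentally, since ReadErrorLog(0) returns the current log, these seven repetitive calls could be collapsed into a loop that produces exactly the same set of files:

    # Export the current error log (0) and the six archived logs
    foreach ($i in 0..6) {
        $suffix = if ($i -eq 0) { "" } else { "_" + $i }
        $outnm = ".\" + $svr + "\" + $instnm + "_ERL_ErrorLog" + $suffix + ".csv"
        $s.ReadErrorLog($i) | export-csv -path $outnm -noType
    }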

This collection of CSV files can then be imported into a database for analysis and reporting. It also serves as a basis for starting the process of gathering performance baseline information. In any event, it’s useful information to have, whether you’re an in-house DBA or a consultant.

Summary

Microsoft provides a tool called SQLDiag to gather all this information, but it needs to be run once for each server for which you want the detailed information. It places its results in a text file in the instance’s LOG directory (along with the ERRORLOG files).

The PowerShell script provided in this article only needs to be run once in order to gather inventory information from all of your servers. Furthermore, it not only gathers most of the same information in that one execution, but it puts the information in a single directory (well, one sub-directory for each physical server), so that loading the information into a database for aggregation and analysis is much easier.

From a management perspective, it’s important to have a comprehensive inventory of the servers you manage, so that if problems or questions come up from users or management you can answer them quickly and with confidence. In addition, it helps to be able to track trends, such as disk space usage, so that problems can be anticipated. This script will help you do that.