Category: Storage

For maintenance reasons you may want to disable DFS targets or a DFS namespace. To do that you can:

A) Create a Backup of DFS target folders:

You can use: dfscmd /view \\\dfsroot /batch >> backup_mydomainlocal_dfsroot.cmd

B) Test your restore by running backup_mydomainlocal_dfsroot.cmd

C) Then disable your DFS root:

c.1) The method below (which also works for 2003/2008-based DFS and later) hides a DFS root, for maintenance or before the real removal phase:

Rename the DFS root from ADUC in advanced mode: go to the System container, then the DFS-Configuration container, select the DFS root to rename, right-click, Rename, and add a "$" at the end of the name. On the client, when you then try to access the root status using dfsutil /root:\\mydomain.local\dfsroot_newname$ /view, you get a "system error - element not found".

c.2) A newer method, introduced in Windows Server 2012 (this does not work on 2003/2008-based DFS):

To enable or disable referrals by using Windows PowerShell, use the Set-DfsnRootTarget cmdlet with its -State parameter, or the Set-DfsnServerConfiguration cmdlet.


Web resources:




You get a dialog box like "The file name is too long" or "The source file name(s) are larger than is supported by the file system". Or "Cannot delete folder: it is being used by another person or program", or "Cannot delete file: Access is denied / There has been a sharing violation / The source or destination file may be in use".


Basically, there is a character limit on naming or renaming files in your Windows operating system, and it varies from one OS to another, mostly between 256 and 260 characters. Thus, when you transfer files with long names from one destination to another, you will hit a "path too long" error on Windows or Linux systems.


Maximum Path Length Limitation: in the Windows API (with some exceptions discussed in the following paragraphs), the maximum length for a path is MAX_PATH, which is defined as 260 characters. A local path is structured in the following order: drive letter, colon, backslash, name components separated by backslashes, and a terminating null character. That is 1+2+256+1, or [drive][:\][path][null] = 260.

One could assume that 256 is a reasonable fixed string length from the DOS days. Going back to the DOS APIs, we realize that the system tracked the current path per drive, and we have 26 (32 with symbols) maximum drives (and current directories). INT 0x21 AH=0x47 says: "This function returns the path description without the drive letter and the initial backslash." So the system stores the CWD as a pair (drive, path), and you ask for the path by specifying the drive (1=A, 2=B, ...); if you specify 0, it assumes the drive returned by the get-current-drive call. So now we know why it is 260 and not 256: those 4 bytes are not stored in the path string. Why a 256-byte path string? Because 640K is enough RAM.
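The 1+2+256+1 breakdown can be spelled out in a tiny sketch (an illustration only, not Windows API code):

```python
# One reading of the 1+2+256+1 = 260 breakdown described above:
drive = "C"        # [drive] -> 1 character: the drive letter
sep = ":\\"        # colon + initial backslash -> 2 characters
name = "a" * 256   # [path]  -> the 256-character DOS-era buffer
null = "\0"        # terminating null -> 1 character

max_path = drive + sep + name + null
assert len(max_path) == 260  # MAX_PATH
```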


But, but, but: the NTFS filesystem supports paths up to 32k characters. You can use the Win32 API and prefix the path with "\\?\" to use more than 260 characters.

The Windows OS since Vista supports path lengths up to 32k, but unfortunately most applications are limited to 255 characters!



a) Try moving to a location which has a shorter name, or try renaming to shorter name(s) before attempting this operation.

b) To get the list of files with long paths, you can use this PowerShell script:

Write-Host "Please wait, searching..."
# $srcdir must be set beforehand to the folder to audit, e.g. $srcdir = "D:\data"
robocopy.exe $srcdir c:\doesnotexist /l /e /b /np /fp /njh /njs /ndl | Where-Object {$_.Length -ge 255} | ForEach-Object {$_.Substring(26,$_.Length-26)}

after this audit phase, rename the files to be shorter!
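As a cross-platform alternative to the robocopy audit above, a similar scan can be sketched in Python (the D:\data path in the commented example is hypothetical):

```python
import os

def find_long_paths(root, limit=255):
    """Yield every file or directory under root whose full path
    is at least `limit` characters long."""
    for dirpath, dirnames, filenames in os.walk(root):
        for name in dirnames + filenames:
            full = os.path.join(dirpath, name)
            if len(full) >= limit:
                yield full

# Example (hypothetical folder): print the offenders with their lengths
# for p in find_long_paths(r"D:\data"):
#     print(len(p), p)
```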

There is also a free command-line tool to detect long paths, the "too long path detector":

On Linux:

With GNU find (on Linux or Cygwin), you can look for files whose relative path is more than 255 characters long:

find -regextype posix-extended -regex '.{257,}'           (257 accounts for the initial ./.)

c) Use "Unlocker" in the case where you cannot delete a folder ("It is being used by another person or program") or a file ("Access is denied" / "There has been a sharing violation"). Unlocker can help! Simply right-click the folder or file and select Unlocker. If the folder or file is locked, a window listing the lockers will appear.

d) But to delete a file whose name is more than 255 characters:

  1. Open a command prompt by running “CMD.EXE”
  2. Navigate to the folder holding the file
  3. Use the command “dir /x” which will display the short names of files.
  4. Delete using the short name.

i.e. if the file is named “verylongfilename.doc”, the shortname will display as something like “verylo~1.doc” and you can delete using that name.
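The mangling can be approximated with a toy function. This is a simplified sketch of the 8.3 scheme, not the real NTFS algorithm (which strips more characters and switches to hash-based names after several collisions); dir /x shows the names NTFS actually generated.

```python
def short_name(long_name, seq=1):
    """Approximate a DOS 8.3 short name: first 6 characters of the
    base name, a ~N suffix, and the extension truncated to 3 chars."""
    base, dot, ext = long_name.rpartition(".")
    if not dot:                     # no extension at all
        base, ext = long_name, ""
    stem = base.replace(" ", "").replace(".", "")[:6].upper() + "~" + str(seq)
    return stem + ("." + ext[:3].upper() if ext else "")

print(short_name("verylongfilename.doc"))  # VERYLO~1.DOC
```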

e) In the case where you have a too-long directory path, you can use the subst command:

  1. Start a command prompt (no admin privileges needed)
  2. Use cd to navigate to the folder you want to go (you can use tab to autocomplete names)
  3. type subst x: . to create the drive letter association. (instead of the . you can also type the entire path)
  4. Now in explorer you have a new letter. Go to it and do whatever you need to do to copy or delete files.
  5. Go back to your cmd window and type subst /d x: to remove the drive or alternatively, restart your pc

f) Another way to cope with the path limit is to shorten path entries with symbolic links.

  1. create a c:\folder directory to keep short links to long paths
  2. mklink /J C:\folder\foo c:\Some\Crazy\Long\Path\foo
  3. add c:\folder\foo to your path instead of the long path


When using dynamic volumes, the following considerations apply:

  • Installing Windows Server 2003 operating systems. You can perform a fresh installation of Windows Server 2003 operating systems on a dynamic volume only if that volume was converted from a basic boot volume or basic system volume. If the dynamic volume was created from unallocated space on a dynamic disk, you cannot install Windows Server 2003 operating systems on that volume. This setup limitation occurs because Setup for Windows Server 2003 recognizes only dynamic volumes that have an entry in the partition table. You can, however, extend the volume (if it is a simple or spanned volume). Do not convert basic disks to dynamic disks if they contain multiple installations of Windows 2000, Windows XP Professional, or Windows Server 2003 operating systems. After the conversion, it is unlikely that you will be able to start the computer using that operating system. For information about basic volumes, see Basic disks and volumes.
  • Portable computers and removable media. Dynamic disks are not supported on portable computers, removable disks, detachable disks that use Universal Serial Bus (USB) or IEEE 1394 (also called FireWire) interfaces, or on disks connected to shared SCSI buses. If you are using a portable computer and right-click a disk in the graphical or list view in Disk Management, you will not see the option to convert the disk to dynamic.
  • Boot and system partitions. You can convert a basic disk containing the system or boot partitions (the boot partition contains the operating system) to a dynamic disk. After the disk is converted and the computer is restarted, these partitions become simple system or boot volumes. You cannot mark an existing dynamic volume as active.
  • Mirroring the boot and system volumes. If you convert the disk containing the boot and system partitions to a dynamic disk, you can mirror the boot and system volumes onto another dynamic disk. Then, if the disk containing the boot and system volumes fails, you can start the computer from the disk containing the mirrors of these volumes. For more information, see Create and test a mirrored system or boot volume.
  • To copy disk to disk you can use robocopy (run robocopy /? for the full option list):

    Robocopy e:\ f:\ *.* /mir /sec /copyall /z /r:3 /w:3 /eta /log:output.txt

  • Shadow copies storage area. If you are using a basic disk as a storage area for shadow copies and you intend to convert the disk into a dynamic disk, it is important to take the following precaution to avoid data loss. If the disk is a non-boot volume and is a different volume from where the original files reside, you must first dismount and take offline the volume containing the original files before you convert the disk containing shadow copies to a dynamic disk. You must bring the volume containing the original files back online within 20 minutes; otherwise, you will lose the data stored in the existing shadow copies. If the shadow copies are located on a boot volume, you can convert the disk to dynamic without losing shadow copies. You can use the mountvol command with the /p option to dismount the volume and take it offline. You can mount the volume and bring it online using the mountvol command or the Disk Management snap-in.

The AD DS domain/forest recovery is a very complex procedure that requires regular hands-on practice and a proper isolated recovery environment (a Hyper-V or VMware isolated LAN).

AD DS forest recovery guidelines and procedures:

Some best practices for backing up and recovering AD DS:

  • Backup DNS integrated zone data:
    • dnscmd /enumzones > C:\Script\AllZones.txt
      for /f %%a in (C:\Script\AllZones.txt) do dnscmd /ZoneExport %%a Export\%%a.dns
      (the %%a syntax is for use inside a batch file; use %a when typing the loop directly at the command prompt)
  • Backup all Group policies and links
  • Backup all distinguished name of objects in the domain:
    • dsquery * domainroot -scope subtree -attr modifytimestamp distinguishedname -limit 0 > DNlist.txt
  • Store operating system files, the Active Directory database (Ntds.dit), and SYSVOL on separate volumes that do not contain other user, operating system, or application data.
  • For domain controllers, perform regular backups of system state data by using the wbadmin start systemstatebackup command, or prefer a BMR (bare metal restore) backup using wbadmin. For more information, see the Wbadmin start systemstatebackup documentation.
  • For domain controllers, you can also use the wbadmin start backup variant of the command to include other drives or folders. For more information, see the Wbadmin start backup documentation.
  • Create a backup volume on a dedicated internal or external hard drive. On Vista or Win 2008, you cannot use a network shared folder as a backup target for a system state backup; to store a system state backup on a network shared folder, you must use a local volume as the backup target and then copy the backup to the network shared folder. But since Win 2008 R2, you can use a network share!

example, for AD DS 2008 R2: wbadmin start systemstatebackup -backupTarget:\\fileserver\adbackup -quiet

example, for AD DS 2008 R2: wbadmin start backup -backupTarget:\\fileserver\adbackups -include:d: -systemstate -vssfull -quiet

  • It turns out that Microsoft disabled the ability to save System State backups to the system volume (termed a "critical" volume here). There is a fix for this in the form of a registry change: create a new key under HKLM\System\CurrentControlSet\Services\wbengine and add the entry AllowSSBToAnyVolume as a DWORD value set to 1.
  • To avoid having to use the operating system media during recovery, use the Windows Automated Installation Kit (Windows AIK) to install Windows RE on a separate partition, and use that partition to access the Windows Recovery options.

How to install the Windows failover clustering from the command line ?

First, make sure that the nodes running Windows Server 2012 R2 that you intend to add to the cluster are part of the same domain, then proceed to install the Failover Clustering feature on them. This is very similar to conventional cluster installs on Windows Server. To install the feature, you can use Server Manager to complete the installation.

Server Manager can be used to install the Failover Clustering feature:
Introducing Server Manager in Windows Server 2012

We can alternatively use PowerShell (Admin) to install the Failover Clustering feature on the nodes.
Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools

An important point to note is that the PowerShell cmdlet 'Add-WindowsFeature' is replaced by 'Install-WindowsFeature' in Windows Server 2012 R2. PowerShell does not install the management tools for the requested feature unless you specify '-IncludeManagementTools' as part of your command.

The Cluster Command line tool (CLUSTER.EXE) has been deprecated; but, if you still want to install it, it is available under:
Remote Server Administration Tools –> Feature Administration Tools –> Failover Clustering Tools –> Failover Cluster Command Interface in the Server Manager

The PowerShell (Admin) equivalent to install it:
Install-WindowsFeature -Name RSAT-Clustering-CmdInterface

Now that we have the Failover Clustering feature installed on our nodes, ensure that all hardware connected to the nodes passes the Cluster Validation tests, and then go on to create the cluster. You cannot create an AD-detached cluster from Cluster Administrator; the only way to create an AD-detached cluster is by using PowerShell.
New-Cluster MyCluster -Node My2012R2-N1,My2012R2-N2 -StaticAddress -NoStorage -AdministrativeAccessPoint DNS

In my example above, I am using static IP addresses, so one would need to be specified after the -StaticAddress switch. If you are using DHCP for addresses, the "-StaticAddress" switch would be excluded from the command.

Once we have executed the command, we have a new cluster named "MyCluster" with two nodes, "My2012R2-N1" and "My2012R2-N2". When you look in Active Directory, there will not be a computer object created for the cluster "MyCluster"; however, you will see the record as the Access Point in DNS.

From Event Viewer (eventvwr, GUI) you can export events to a log file; EventCombMT works as well.

You can also use EventWatchNT or EventSentry (GUI).

How to store events on SQL table:

How to export forwarded events using get-winevent:

write-host "Dump Quest ARS Forwarded Events (only the last hour)"
$date = Get-Date -Format ddMMyyyy
$log = ".\logs\Dump-QARS-ForwardedEvents-" + $date + ".txt"

$xml = [xml]'<QueryList>
  <Query Id="0" Path="ForwardedEvents">
    <Select Path="ForwardedEvents">*[System[(Level=1 or Level=2 or Level=3 or Level=4 or Level=0 or Level=5) and TimeCreated[timediff(@SystemTime) &lt;= 3600000]]]</Select>
  </Query>
</QueryList>'

$events = Get-WinEvent -FilterXml $xml | Select-Object ID, LevelDisplayName, LogName, MachineName, Message, ProviderName, RecordID, TaskDisplayName, TimeCreated

write-output $events >> $log

Write-Host ""


To dump events from the command line you can use:

1) psloglist from

ex: psloglist -a 01/12/15 application -n 5    ; in this example I export the last 5 events since 12 Jan 2015 from the application event log.

ex: psloglist -a 01/12/15 -w -x security        ; in this example I wait for new security events (-w) and export them with extended data (-x).

ex: psloglist -a 01/12/15 application -n 5 -s -t "\t" > c:\temp\output.txt  ; in this example I export the last 5 application events, one per line, tab-separated, redirected to an output file. After that I can open output.txt in Excel.

same example but using a specific event ID: psloglist -i 851 security -s -t "\t" > c:\temp\output.txt

other example:

@echo off

for /f "tokens=1,2,3,4* delims=/ " %%i in ('date /t') do set TDDAY=%%i&set TDMM=%%j&set TDDD=%%k&set TDYY=%%l
for /f "tokens=1* delims=:" %%i in ('time /t') do set HH=%%i&set MM=%%j
echo Starting EDM server log dump (please wait, it takes time)...
psloglist -accepteula \\server01,server02 -a %1 "EDM Server" -x -s -t "\t" >.\logs\Dump-Log_%TDDD%%TDMM%%TDYY%.txt


2) using wevtutil:

3) Using powershell:

4) using logparser:


How to reset NTFS permissions on System drive on Windows 7 or Windows 2008 R2 ?

After Win 2008 R2 was installed, some files on drive C: were not accessible anymore and I was getting "Access Denied".

I tried right-click/Properties on the folders that were not accessible, changed their owner and changed permissions, but some folders were still inaccessible no matter what I did. After some research, it turned out that the tool "cacls", which displays or changes ACLs (access control lists), can help reset ACLs.

In Windows 7 or 2008 R2 it is called "icacls". To reset file permissions:

1. Run “cmd” as Administrator

2. Go to the drive or folder in question, for example:

cd /d c:

cd /d d:

3. To reset all the files permissions, type:

icacls * /t /q /c /reset

The Microsoft TechNet site has documentation for the icacls command

4. And that’s it!

After that, the files permissions were reset and I could access them back again.


It is possible that "icacls" might fail. In that case, try to take ownership of the files first.

Just before Step (3), please type the following command:

takeown /r /f *

Useful tools and techniques to monitor system performance on a Windows computer:
1- configure perfmon to capture data in .blg file format (using the logman utility and Task Scheduler)
2- use the perfmon flowchart (VSBS document)
3- create a report using the VSBS PowerPoint template
4- alternatively, also use the Sysinternals tools or Server Performance Advisor
Which tools?
Xperf, Xperfview (Win7 and greater): available from the Windows ADK
Perfmon (NT up to 2003): produces performance monitor output files in .blg format; you can load such an output log file in the newer Win7 perfmon


Sample rate: 5 min. Use a perfmon alert to notify IT admins, e.g. when Available MBytes reaches 10 MB or below.

Performance and Reliability Monitor (evolution of perfmon) for Vista or greater (also called WRPM):
 perfmon /sys ; starts only the performance monitor, formerly system monitor
 perfmon /res ; starts only the resource monitor
 perfmon /report ; starts only the diagnostic report, which runs for 60 sec and displays the results
 perfmon /rel  ;  starts only the reliability monitor

Reliability Monitor helps with historical tracking of software installation and un-installation, and miscellaneous failures over time.

For more information on how to use Reliability Monitor to track multiple systems, refer to the following links:

– Using Reliability Monitor:

– Start Reliability Monitor:

– View Reliability Monitor on a Remote Computer:

– Enable Data Collection for Reliability Monitor:

– Understanding the System Stability Index:

How to rebuild perfmon counters?

– for XP up to 2003: use the first method or a 'new' dedicated tool called the Performance Counter Rebuild Wizard (PCRW)


For more information, see Microsoft KB 300956.

– for Vista or above: use the lodctr command-line tool only

Sysinternals tools: procexp and procmon are the two most important tools
typeperf : extracts perfmon counters to a txt file (used in conjunction with logman)
logman : command-line companion utility for perfmon
tracerpt : exports .etl files to CSV

  Example: logman create counter BlackBox -v mmddhhmm -cf counters.txt -si 05:00 -f bincirc -o "c:\Perflogs\Blackbox_%computername%" -max 250

relog : command-line tool to re-sample or extract a portion of a perfmon file (.blg, ...)

 Example: relog SQL:<DSN-name>!<LogSetName> -f bin -o <output.blg>

Performance analysis of logs: PAL v2 (how to analyze perfmon log files).
PAL requires:
   .NET Framework 2.x

Server Performance Advisor (SPA) (W2k3 SP2 or R2); complementary to perfmon. Example problem areas it covers:
• Response times
• Failing requests
• Hung applications
Resource usage
• Rogue clients
• Bad scripts
• Out of resources
Tuning and configuration
• Incorrect cache size
• Password expiration policy
• Not enough dynamic ports


SPA is built into Windows Vista and Windows Server 2008 and does not need to be installed (create a new data collector set in the Performance and Reliability Monitor). Microsoft Windows 2000 and Microsoft Windows XP are not supported.

taskman (Task manager)
debugging tools and symbols configuration (windbg) or procexp:


How to script perfmon?

The following links provide additional information about script deployment methods for perfmon:




How to use the Microsoft Symbol Server?

1. Make sure you have installed the latest version of Debugging Tools for Windows.
2. Start a debugging session.
3. Decide where to store the downloaded symbols (the “downstream store”). This can be a local drive or a UNC path.
4. Set the debugger symbol path to SRV*DownstreamStore*<symbol server URL>, substituting your downstream store path for DownstreamStore.

For example, to download symbols to c:\websymbols from the Microsoft public symbol server, you would add the following to your symbol path: SRV*c:\websymbols*https://msdl.microsoft.com/download/symbols

Key OS Performance counters?

1- Exchange servers that exceed kernel paged pool memory due to token use and are slow or even hang at the OS level.

2- SQL servers with slow performance due to disk response times in the 50 ms or higher range (.050) during the reported problem interval.

3- File/print servers that perform unacceptably slowly, usually due to kernel nonpaged pool being exhausted by the Server service, or disk response times worse than 25 ms (.025).

Key OS Performance Metrics Counter Guidelines (sustained, or during the captured problem interval)

Logical Disk / Physical Disk: both are to be captured and monitored due to today's virtualized disk environments. %idle is a reasonable indicator of disk interface pressure. Note: a very large SAN offering a LUN backed by hundreds of drives can have 0% idle and still be okay, but if we see normal levels and then 0% at the precise time of reported problems, then treat it as a valid issue.

%Idle Time
- 100% to 50% idle = Healthy
- 49% to 20% idle = Warning or Monitor
- 19% to 0% idle = Critical or Out of Spec

Avg. Disk sec/Read or sec/Write
- 1 ms to 15 ms = Healthy (10 ms for Exch/AD, 15 ms for SQL)
- 15 ms to 25 ms = Warning or Monitor (25 ms in general)
- 26 ms or greater = Critical or Out of Spec

Avg. or Current Disk Queue Length
- 2 or under = Healthy
- 3-31 = Warning or Monitor; a possible issue, check read or write latency to confirm
- 32 or higher = a likely issue, check read or write latency to confirm

Memory

* Windows does not have the ability to report maximum pool values in the OS without a debugger attached. Please see the appendix chart document to get an approximate maximum pool size given the amount of physical memory and the boot.ini switches used. Note: hot-plug memory, the Special Pool debug flag, or having 6 GB or more but using the /Maxmem boot.ini switch to force 4 GB of recognized memory (usually on Exchange servers) can reduce Pool Paged Bytes by as much as 100 MB; so the appendix chart document is a good estimation, but these maximums can vary a little depending on the server's hardware configuration and the boot.ini switches chosen.

Free System Page Table Entries
- Greater than 10,000 free = Healthy
- 9,999 to 5,000 free = Monitor
- 4,999 or below = Critical or Out of Spec

Pool Nonpaged Bytes*
- Less than 60% of pool consumed = Healthy
- 61% to 80% of pool consumed = Warning or Monitor
- Greater than 80% of pool consumed = Critical or Out of Spec

Pool Paged Bytes*
- Less than 60% of pool consumed = Healthy
- 61% to 80% of pool consumed = Warning or Monitor
- Greater than 80% of pool consumed = Critical or Out of Spec

Available Megabytes
- 50% of free memory available or more = Healthy
- 25% of free memory available = Monitor
- 10% of free memory available = Warning
- Less than 100 MB or 5% of free memory available = Critical or Out of Spec

Pages per Second (4k per page, so 1000 pages/sec = 4 MB/sec)
- Less than 1000 pages/sec sustained = Healthy
- 1000-2500 pages/sec sustained = Caution or Monitor
- Greater than 2500 pages/sec (10.24 MB/sec) sustained = Warning
- Greater than 5000 pages/sec peak = Warning to Critical and should be investigated
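The page arithmetic behind these numbers (4 KB pages, decimal megabytes) can be checked with a quick sketch:

```python
PAGE_SIZE = 4096  # bytes per page on x86/x64 Windows

def pages_per_sec_to_mb(pages_per_sec):
    """Convert a Pages/sec counter value to MB/sec (decimal MB)."""
    return pages_per_sec * PAGE_SIZE / 1_000_000

print(pages_per_sec_to_mb(1000))  # ~4.1 MB/sec (the "1000pps = 4MB/sec" rule of thumb)
print(pages_per_sec_to_mb(2500))  # 10.24 MB/sec, the figure quoted above
```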

Note regarding Hyper-V and performance: refer to the "Measuring Performance on Hyper-V" article.

Processor: at this point it becomes important to identify the process, service, or driver causing the workload.

%Processor Time (all instances)
- Less than 60% consumed = Healthy
- 51% to 90% consumed = Monitor or Caution
- 91% to 100% consumed = Critical or Out of Spec

%processor time + %idle time = 100%

%processor time = (%user time + %privileged time); %user time used by applications, %privileged time used by the kernel/system part


Sum of %processor time per process object = %user time

Network Interface: due to electrical signaling limitations we do not expect to exceed 80% throughput on any bus. So if we see 80% of the interface consumed on either receive or send, we expect the link to be saturated. Using the rule of thumb that we do not want to operate at more than 80% of planned capacity, a maximum threshold of ~64% (80% * 80%) is the guideline for receive and send, evaluating each independently.

Current Bandwidth (per instance)
- Note the bandwidth for the calculation (100 Mb, 10 Mb, 1000 Mb, etc.)
- Remember that Ethernet is approximately 80% usable due to collisions, etc., so the usable ceiling is up to 80% of the interface as a typical guideline, since we cannot guarantee a pure switched environment for all customers in all scenarios, all of the time. See note 1.

Bytes Total/sec
- Less than 40% of the interface consumed = Healthy
- 41% to 64% of the interface consumed = Monitor or Caution
- 65% to 100% of the interface consumed = Critical or Out of Spec

Output Queue Length
- 0 = Healthy
- 1-2 = Monitor or Caution
- Greater than 2 = Critical or Out of Spec

Process: this is to detect possible leaks by applications.

<process>\Handle Count
- If a process instance has greater than 500 handles, it should be examined over time to see if it is legitimate allocation and de-allocation, or a leak pattern over time.

Private Bytes guideline rationale: the goal is to catch many of the smaller memory-footprint services (WinMgmt, SVCHost, or 3rd-party applications) before they consume too many resources and begin to cause serious performance or stability issues. LSASS on an Active Directory DC, Exchange Server's Store.exe, and SQL Server's Sqlservr.exe are expected to have 1+ GB values and will need their own rule. For servers with BackOffice or large-memory applications it may be better to create two Perfmon alert, MOM, or NetIQ rules for Private Bytes: the first rule should examine all processes except the very large memory applications, and the second rule is adjusted to the observed levels for the applications greater than 250 MB on average.

<process>\Thread Count
- If a process instance has greater than 500 threads, it should be examined over time to see if it is legitimate allocation and de-allocation, or a leak pattern over time.

<process>\Private Bytes
- If a process instance has greater than 250 MB of use, it should be examined over time to see if it is legitimate allocation and de-allocation, or a leak pattern over time.

Note: Private bytes are not related to pool bytes in any way but very commonly code paths within an application that leak private bytes may leak pool bytes as well. This counter is a key in looking for the source of pool leaks.

Private Bytes is used instead of Working Set since a Private Bytes leak can be difficult to detect using the Working Set object because Private Bytes leaks can be paged out, etc.

<process>\Working Set
- If a process instance has greater than 250 MB of use, it should be examined over time to see if it is legitimate allocation and de-allocation, or a leak pattern over time.


Threshold for switched networks and latency tolerant
• < 30 percent: Low utilization
• 30 to 60 percent: Significant utilization
• > 60 percent: High utilization
Threshold for shared networks and latency sensitive
• < 30 percent: Normal utilization
• > 30 percent: High utilization

The performance counters available to measure the Network Interface object are expressed in a mix of bits and bytes. To convert this value into a utilization percentage, you can use the following formula:

((Bytes Total/sec * 8) / Current Bandwidth) * 100
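A minimal Python sketch of the same calculation (the workshop's Get-NICUtilPercent is a PowerShell function; this is only an illustration, with the ~64% caution band from the Network Interface guideline above):

```python
def nic_util_percent(bytes_total_per_sec, current_bandwidth_bits):
    """Utilization % = ((Bytes Total/sec * 8) / Current Bandwidth) * 100.
    The Current Bandwidth counter reports bits per second."""
    return bytes_total_per_sec * 8 / current_bandwidth_bits * 100

def classify(percent):
    """Bucket a utilization percentage per the guideline above."""
    if percent <= 40:
        return "Healthy"
    if percent <= 64:
        return "Monitor or Caution"
    return "Critical or Out of Spec"

util = nic_util_percent(1_250_000, 100_000_000)  # 1.25 MB/s on a 100 Mb/s link
print(util, classify(util))  # 10.0 Healthy
```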

For the purposes of this workshop, a Windows PowerShell function called Get-NICUtilPercent has been prepared to aid in this calculation. To use this function, copy and paste it (from the appendix of this module) into a PowerShell command prompt. Then run it by typing Get-NICUtilPercent followed by the value for Bytes Total/sec and the value for Current Bandwidth, as shown below:

Get-NICUtilPercent -bytesTotal 6250000 -bandwidth 11000000

You can also shorten the command shown above as follows:

Get-NICUtilPercent 6250000 11000000

In either case, the command will return a percentage string, such as the one shown below:

45 percent

Because many NICs run at common speeds, the Get-NICUtilPercent function can accept

On Linux:

glsof filemonitor

On Windows:

On the server with the file share:
- Go to Start > Run > type compmgmt.msc
- Under 'System Tools', expand 'Shared Folders'
- Go to 'Open Files'
This lists all the files in use by network users; right-click one and click 'Close Open File' to close it!

also from the command line: net files

Using procmon.exe, procexp.exe, handle.exe or psfile.exe from Sysinternals:

Enable NTFS audit


Considering the volume of information it gathers, it’s no surprise that the openfiles command is a performance hog. Thus, the accounting associated with openfiles is off by default, meaning users can’t pull any data from this command until it is turned on. This function can be activated by running:
C:\> openfiles /local on

Users will need to reboot, and when the system comes back, they will be able to run the openfiles command as follows:
C:\> openfiles /query /v

This command will show verbose output, which includes the user account that each process with an open file is running under. To get an idea of what malware has been installed, or what an attacker may be doing on a machine, users should look for unusual or unexpected files, especially those associated with unexpected local users on the machine.

When finished with the openfiles command, its accounting functionality can be shut off and the system returned to normal performance by running the following command and rebooting:
C:\> openfiles /local off

Netstat: Show me the network
The Windows netstat command shows network activity, focusing on TCP and UDP by default. Because malware often communicates across the network, users can look for unusual and unexpected connections in the output of netstat, run as follows:
C:\> netstat -nao

The -n option tells netstat to display numbers in its output, not the names of machines and protocols, showing IP addresses and TCP or UDP port numbers instead. The -a option indicates to display all connections and listening ports. The -o option tells netstat to show the process ID number of each program interacting with a TCP or UDP port. If, instead of TCP and UDP, you are interested in ICMP, netstat can be run as follows:
C:\> netstat -s -p icmp

This indicates that the command will return statistics (-s) of the ICMP protocol. Although not as detailed as the TCP and UDP output, users can see if a machine is sending frequent and unexpected ICMP traffic on the network. Some backdoors and other malware communicate using the payload of ICMP Echo messages, the familiar and innocuous-looking ping packets seen on most networks periodically.

Like WMIC, the netstat command also lets us run it every N seconds. But, instead of using the WMIC syntax of “/every:[N]”, users simply follow their netstat invocation with a space and an integer. Thus, to list the TCP and UDP ports in use on a machine every 2 seconds, users can run:
C:\> netstat -na 2

Using wmic:

For example, to learn more about the processes running on a machine, a user could run:
C:\> wmic process 

Output of that command will likely look pretty ugly because an output format wasn’t specified. With WMIC, output can be formatted in several different ways, but two of the most useful for analyzing a system for compromise are the “list full” option, which shows a huge amount of detail for each area of the machine a user is interested in, and the “list brief” output, which provides one line of output per report item in the list of entities, such as running processes, autostart programs and available shares.

For example, we can look at a summary of every running process on a machine by running:
C:\> wmic process list brief

That command will show the name, process ID and priority of each running process, as well as other less-interesting attributes. To get even more detail, run:
C:\> wmic process list full

This command shows all kinds of details, including the full path of the executable associated with the process and its command-line invocation. When investigating a machine for infection, an administrator should look at each process to determine whether it has a legitimate use on the machine, researching unexpected or unknown processes using a search engine.

Beyond the process alias, users could substitute startup to get a list of all auto-start programs on a machine, including programs that start when the system boots up or a user logs on, which could be defined by an auto-start registry key or folder:
C:\> wmic startup list full

A lot of malware automatically runs on a machine by adding an auto-start entry alongside the legitimate ones which may belong to antivirus tools and various system tray programs. Users can look at other settings on a machine with WMIC by replacing “startup” with “QFE” (an abbreviation which stands for Quick Fix Engineering) to see the patch level of a system, with “share” to see a list of Windows file shares made available on the machine and with “useraccount” to see detailed user account settings.

A handy option within WMIC is the ability to run an information-gathering command on a repeated basis by using the syntax “/every:[N]” after the rest of the WMIC command. The [N] here is an integer, indicating that WMIC should run the given command every [N] seconds. That way, users can look for changes in the settings of the system over time, allowing careful scrutiny of the output. Using this function to pull a process summary every 5 seconds, users could run:
C:\> wmic process list brief /every:5

Hitting CTRL+C will stop the cycle.

Now, with the find command, users can look through the output of each of the commands I’ve discussed so far to find interesting tidbits. For example, to look at information every second about cmd.exe processes running on a machine, type:
C:\> wmic process list brief /every:1 | find "cmd.exe"

Or, to see which autostart programs are associated with the registry hive HKLM, run:
C:\> wmic startup list brief | find /i "hklm"

To count the number of files open on a machine on which openfiles accounting is activated, type:
C:\> openfiles /query /v | find /c /v ""

Whenever counting items in this way, remember to subtract the number of lines associated with column headers. And, as a final example, to see with one-second accuracy when TCP port 2222 starts being used on a machine, along with the process ID using the port, run:
C:\> netstat -nao 1 | find "2222"
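The subtract-the-headers bookkeeping mentioned above can be sketched as follows (the three-line header count and the sample output are assumptions; adjust them to whatever the command actually prints):

```python
def count_items(command_output, header_lines=3):
    """Count report items in command output: drop blank lines, then
    subtract the column-header lines (like `find /c /v ""` followed
    by a manual subtraction of the header count)."""
    lines = [ln for ln in command_output.splitlines() if ln.strip()]
    return max(0, len(lines) - header_lines)

# Hypothetical sample resembling `openfiles /query` output
sample = """Files opened remotely:
----------------------
ID  Accessed By  Open File
1   alice        C:\\share\\a.txt
2   bob          C:\\share\\b.txt
"""
print(count_items(sample))  # 2
```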

Third-party tool: the Unlocker tool.