Category: Storage


Microsoft’s file systems organize storage devices based on cluster size. Also known as the allocation unit size, cluster size represents the smallest amount of disk space that can be allocated to hold a file. Because ReFS and NTFS don’t reference files at a byte granularity, the cluster size is the smallest unit of size that each file system can reference when accessing storage. Both ReFS and NTFS support multiple cluster sizes, as different sized clusters can offer different performance benefits, depending on the deployment.

Full article from MS: https://blogs.technet.microsoft.com/filecab/2017/01/13/cluster-size-recommendations-for-refs-and-ntfs/

Summary:

ReFS cluster sizes

ReFS offers both 4K and 64K clusters. 4K is the default cluster size for ReFS, and we recommend using 4K cluster sizes for most ReFS deployments because it helps reduce costly IO amplification:

  • In general, if the cluster size exceeds the size of the IO, certain workflows can trigger unintended IOs to occur. Consider the following scenarios where a ReFS volume is formatted with 64K clusters:
    • Consider a tiered volume. If a 4K write is made to a range currently in the capacity tier, ReFS must read the entire cluster from the capacity tier into the performance tier before making the write. Because the cluster size is the smallest granularity that the file system can use, ReFS must read the entire cluster, which includes an unmodified 60K region, to be able to complete the 4K write.
    • If a cluster is shared by multiple regions after a block cloning operation occurs, ReFS must copy the entire cluster to maintain isolation between the two regions. So if a 4K write is made to this shared cluster, ReFS must copy the unmodified 60K cluster before making the write.
    • Consider a deployment that enables integrity streams. A sub-cluster granularity write will cause the entire cluster to be re-allocated and re-written, and the new checksum must be computed. This represents additional IO that ReFS must perform before completing the new write, which introduces a larger latency factor to the IO operation.
  • By choosing 4K clusters instead of 64K clusters, one can reduce the number of IOs that occur that are smaller than the cluster size, preventing costly IO amplifications from occurring as frequently.

Additionally, 4K cluster sizes offer greater compatibility with Hyper-V IO granularity, so we strongly recommend using 4K cluster sizes with Hyper-V on ReFS.  64K clusters are applicable when working with large, sequential IO, but otherwise, 4K should be the default cluster size.
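For reference, a minimal PowerShell sketch of formatting a volume with an explicit ReFS cluster size (the drive letter and label are placeholders, not part of the MS article):

# Format an existing volume as ReFS with the recommended 4K cluster size.
# Drive letter E: and the "VMs" label are assumptions for this example.
Format-Volume -DriveLetter E -FileSystem ReFS -AllocationUnitSize 4096 -NewFileSystemLabel "VMs"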

NTFS cluster sizes

NTFS offers cluster sizes from 512 to 64K, but in general, we recommend a 4K cluster size on NTFS, as 4K clusters help minimize wasted space when storing small files. We also strongly discourage the usage of cluster sizes smaller than 4K. There are two cases, however, where 64K clusters could be appropriate:

  • 4K clusters limit the maximum volume and file size to 16TB
    • 64K cluster sizes can offer increased volume and file capacity, which is relevant if you’re hosting a large deployment on your NTFS volume, such as hosting VHDs or a SQL deployment.
  • NTFS has a fragmentation limit, and larger cluster sizes can help reduce the likelihood of reaching this limit
    • Because NTFS is backward compatible, it must use internal structures that weren’t optimized for modern storage demands. Thus, the metadata in NTFS prevents any file from having more than ~1.5 million extents.
      • One can, however, use the “format /L” option to increase the fragmentation limit to ~6 million.
    • 64K cluster deployments are less susceptible to this fragmentation limit, so 64K clusters are a better option if the NTFS fragmentation limit is an issue. (Data deduplication, sparse files, and SQL deployments can cause a high degree of fragmentation.)
      • Unfortunately, NTFS compression only works with 4K clusters, so using 64K clusters isn’t suitable when using NTFS compression. Consider increasing the fragmentation limit instead, as described in the previous bullets.

While a 4K cluster size is the default setting for NTFS, there are many scenarios where 64K cluster sizes make sense, such as: Hyper-V, SQL, deduplication, or when most of the files on a volume are large.
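As an illustration, a hedged PowerShell sketch of both NTFS options (the drive letter is a placeholder; -UseLargeFRS is the Format-Volume counterpart of “format /L”):

# 64K clusters for large, mostly sequential workloads (Hyper-V, SQL, dedup):
Format-Volume -DriveLetter F -FileSystem NTFS -AllocationUnitSize 65536
# Or keep 4K clusters but raise the ~1.5M extent limit to ~6M (format /L):
Format-Volume -DriveLetter F -FileSystem NTFS -AllocationUnitSize 4096 -UseLargeFRS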

Resources:

SQL Server performance: http://wp.me/p15Zft-8h

SQL Server Video archive: https://technet.microsoft.com/en-us/dn912438

Database tasks: https://technet.microsoft.com/en-us/library/ms165730(v=sql.105).aspx

T-SQL reference: https://technet.microsoft.com/en-us/library/ms189826(v=sql.90).aspx

SQL performance and troubleshooting: http://sqlnexus.codeplex.com/

Microsoft companion (MOC): http://www.microsoft.com/en-us/learning/companion-moc.aspx

Tips and tricks:

PowerShell: Import-Module SQLPS

Place tempdb on a dedicated disk (RAID 1); likewise for log files (RAID 1 or 10) and database files (RAID 5). Also use a dedicated disk for the OS and another for the SQL Server binaries.

Run DBCC CHECKDB before each database backup.
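For example, once the SQLPS module is loaded, a minimal sketch (the instance and database names are placeholders):

# Run an integrity check before taking the backup; names are examples only.
Import-Module SQLPS -DisableNameChecking
Invoke-Sqlcmd -ServerInstance "MYSERVER\MYINSTANCE" -Query "DBCC CHECKDB (N'MyDatabase') WITH NO_INFOMSGS"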

Use Buffer pool extension

Enable security: create logins and server roles at the server level; then, per database, create users, database roles, and database permissions.
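A minimal sketch of that sequence via Invoke-Sqlcmd (the login, database, and role names are invented for the example; ALTER ROLE … ADD MEMBER needs SQL Server 2012+):

$q = @"
CREATE LOGIN [MYDOMAIN\AppUser] FROM WINDOWS;
USE MyDatabase;
CREATE USER [AppUser] FOR LOGIN [MYDOMAIN\AppUser];
ALTER ROLE db_datareader ADD MEMBER [AppUser];
"@
Invoke-Sqlcmd -ServerInstance "MYSERVER" -Query $q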

Prefer Microsoft managed service accounts (MSAs) to run the SQL Server services.

Enable SQL audit

Use DML triggers where needed (and consider logon triggers).

SQL Profiler is available but heavy in terms of performance; prefer a (T-SQL) server-side SQL Trace instead, which has a light footprint if well designed.

Design a backup and restore strategy:

  • To back up: full backup + differential backup + transaction log backups + tail-log backup
  • To restore: restore the full backup first (WITH NORECOVERY), then the last differential (WITH NORECOVERY), then the transaction log backups in sequence, and finally the latest tail-log backup if available (WITH RECOVERY). A sketch of the sequence follows this list.
  • don’t forget to back up the tail log before starting a restore sequence
  • preferably use a “backup device” which contains the full, differential, and log backups. Then you can back up the “backup device” itself using the OS backup software (Windows Backup, Tivoli SM, Veritas NetBackup…)
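A hedged sketch of that sequence with the SQLPS cmdlets (server, database, and file paths are placeholders; restore the logs in order, the last one WITH RECOVERY):

# Backup side: full + differential + transaction log.
Backup-SqlDatabase -ServerInstance "MYSERVER" -Database "MyDb" -BackupFile "E:\bak\MyDb_full.bak"
Backup-SqlDatabase -ServerInstance "MYSERVER" -Database "MyDb" -BackupFile "E:\bak\MyDb_diff.bak" -Incremental
Backup-SqlDatabase -ServerInstance "MYSERVER" -Database "MyDb" -BackupFile "E:\bak\MyDb_log.trn" -BackupAction Log
# Restore side: full and differential WITH NORECOVERY, then the log chain.
Restore-SqlDatabase -ServerInstance "MYSERVER" -Database "MyDb" -BackupFile "E:\bak\MyDb_full.bak" -NoRecovery
Restore-SqlDatabase -ServerInstance "MYSERVER" -Database "MyDb" -BackupFile "E:\bak\MyDb_diff.bak" -NoRecovery
Restore-SqlDatabase -ServerInstance "MYSERVER" -Database "MyDb" -BackupFile "E:\bak\MyDb_log.trn" -RestoreAction Log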

Define maintenance plans:

– keep the maintenance plan that backs up the system databases separate from the plan for the other databases (and include the check database integrity step, “dbcc checkdb”, before each backup sequence)

– separate the maintenance plan that backs up an application database from a maintenance plan that only checks database health: check database integrity, reorganize indexes, update statistics.

http://blogs.technet.com/b/askds/archive/2008/08/12/event-logging-policy-settings-in-windows-server-2008-and-vista.aspx

DFS dirty-shutdown stopping DFS replication: DFSR event ID 2213 in Windows Server 2008 R2 or Windows Server 2012:

https://support.microsoft.com/fr-fr/kb/2846759

How to disable the Stop Replication functionality in AutoRecovery

To have DFSR perform AutoRecovery when a dirty database shutdown is detected, edit the following registry value after hotfix 2780453 is installed in Windows Server 2008 R2. You can deploy this change on all versions of Windows Server 2012. If the value does not exist, you must create it.

Key: HKLM\System\CurrentControlSet\Services\DFSR\Parameters
Value: StopReplicationOnAutoRecovery
Type: DWORD
Data: 0
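The same change as a PowerShell sketch (identical key and value; restarting the DFSR service afterwards is a common precaution, not a documented requirement):

# Let DFSR auto-recover after a dirty shutdown instead of stopping replication.
$key = "HKLM:\System\CurrentControlSet\Services\DFSR\Parameters"
New-ItemProperty -Path $key -Name "StopReplicationOnAutoRecovery" -PropertyType DWord -Value 0 -Force
Restart-Service DFSR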

A reparse point is roughly what Linux calls a symbolic link (http://en.wikipedia.org/wiki/NTFS_reparse_point).

Reparse points cover NTFS symbolic links, directory junction points, and volume mount points.

https://technet.microsoft.com/en-us/library/cc754077%28v=ws.10%29.aspx

  • “Remote-to-remote describes a computer accessing a remote symbolic link that points to a remote UNC path using SMB.“

https://technet.microsoft.com/en-us/library/cc785435.aspx

In general:

hard link: a link to the file itself (its MFT entry). The data remain accessible as long as at least one link pointing to them still exists.

  • NTFS hard link: hard links require an NTFS partition. Create them with mklink /H, or use FindLinks from http://www.microsoft.com/sysinternals
  • Volume mount points are similar to Unix mount points, where the root of another file system is attached to a directory. In NTFS, this allows additional file systems to be mounted without requiring a separate drive letter (such as C: or D:) for each.

soft link: a link to a name (file path);

  • NTFS symbolic link (SYMLINK): unlike a junction point, a symbolic link can also point to a file or a remote SMB network path(*). Create with mklink (for files) or mklink /D (for directories). Relative symbolic links are restricted to a single volume.
  • Junction point/directory junction: directory junctions are similar to volume mount points, but reference other directories in the file system instead of other volumes. Used in the default Windows Server 2008 configuration for Users folder redirections. Create with mklink /J. Procmon.exe from Sysinternals (with an appropriate filter) will display the JUNCTION operations. junction.exe -s -q c:\ from http://www.microsoft.com/sysinternals lists, creates, and deletes junction points.

Example of a junction to move the content of WinSxS in another drive: 

mklink /J "C:\Windows\winsxs" "d:\winsxs"           ; careful: C:\Windows\winsxs must not exist when you create the junction (move its content to d:\winsxs first), else you will get the error message "Cannot create a file when that file already exists"

to remove the junction you can use junction from Sysinternals: junction -d C:\Windows\winsxs (this removes the link, not the target content)

The mklink syntax is mklink LINK TARGET: in the above example, C:\Windows\winsxs acts as the LINK location where you want to trick Windows into thinking the folder still exists, and the TARGET is of course d:\winsxs on the other drive.
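On PowerShell 5.0 and later, New-Item can create and remove the same junction (a sketch using the paths from the example above):

# Assumes the content was already moved to d:\winsxs and C:\Windows\winsxs was removed.
New-Item -ItemType Junction -Path "C:\Windows\winsxs" -Target "d:\winsxs"
# Deleting the junction object removes the link only, not the target content:
(Get-Item "C:\Windows\winsxs").Delete()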

========================================================

(*)Symlink Evaluation Modes

The default symbolic link evaluation for Windows Vista, Windows 7, Windows Server 2008 and Windows Server 2008 R2 is Local-to-local enabled, Local-to-remote enabled, Remote-to-local disabled, Remote-to-remote disabled.

Symbolic link evaluation settings can be viewed and altered by the following commands respectively:
fsutil behavior query SymlinkEvaluation
fsutil behavior set SymlinkEvaluation [L2L:{0|1}] | [L2R:{0|1}] | [R2R:{0|1}] | [R2L:{0|1}]
0 disables the specified evaluation mode, while 1 enables it.
Enabling Remote-to-local and Remote-to-remote will overcome the “The symbolic link cannot be followed because its type is disabled” error when trying to access a symlink on a UNC share.
The symlink evaluation settings can also be controlled via Group Policy. Go to Computer Configuration > Administrative Templates > System > Filesystem and configure “Selectively allow the evaluation of a symbolic link”.

========================================================

  • Use RMDIR to remove a symbolic link <SYMLINK> or <SYMLINKD>
  • Using the DIR command prompt to list the symbolic link:

D:\>mklink /d dir3 d:\dir2
symbolic link created for dir3 <<===>> d:\dir2

D:\>dir d:\ /AD /S | find "SYMLINK"
03/29/2014  10:07 PM    <SYMLINKD>     dir3 [d:\dir2]

 

  • Using the DIR command prompt to list the junction:

C:\>dir c:\ /AD /S | find "<JUNCTION>"
07/26/2012  08:14 AM    <JUNCTION>     Documents and Settings [C:\Users]
07/26/2012  08:14 AM    <JUNCTION>     Application Data [C:\ProgramData]
07/26/2012  08:14 AM    <JUNCTION>     Desktop [C:\Users\Public\Desktop]
07/26/2012  08:14 AM    <JUNCTION>     Documents [C:\Users\Public\Documents]
07/26/2012  08:14 AM    <JUNCTION>     Start Menu [C:\ProgramData\Microsoft\Windows\Start Menu]
07/26/2012  08:14 AM    <JUNCTION>     Templates [C:\ProgramData\Microsoft\Windows\Templates]
07/26/2012  08:14 AM    <JUNCTION>     Default User [C:\Users\Default]
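The same inventory in PowerShell (v5+ exposes LinkType/Target properties on reparse points; a sketch):

# List junctions and symbolic links under C:\, ignoring access-denied folders.
Get-ChildItem C:\ -Recurse -Directory -Force -ErrorAction SilentlyContinue |
    Where-Object { $_.LinkType -in 'Junction','SymbolicLink' } |
    Select-Object FullName, LinkType, Target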

http://support.microsoft.com/kb/2958414

For maintenance reason you want to disable DFS target(s) or DFS namespace, to do that you can:

A) Create a Backup of DFS target folders:

you can use dfscmd: dfscmd /view \\mydomain.local\dfsroot /batch >> backup_mydomainlocal_dfsroot.cmd

B) Test your restore by replaying backup_mydomainlocal_dfsroot.cmd

C) Then you can disable your dfsroot:

c.1) the method below also works for 2003/2008-based DFS and later, to hide a DFS root (for maintenance reasons or before the real removal phase):

you can rename the dfsroot from ADUC: in advanced mode, go to System, then the DFS-Configuration container, select the dfsroot to rename, right-click, Rename (add a "$" at the end of the name). On the client, when you try to access the root status using dfsutil /root:\\mydomain.local\dfsroot_newname$ /view, you get a "system error – element not found".

c.2) a new method introduced in Windows Server 2012 (this does not work on 2003/2008-based DFS):

To enable or disable referrals by using Windows PowerShell, use the Set-DfsnRootTarget -State or Set-DfsnServerConfiguration cmdlets.
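For example, a sketch with placeholder namespace and target paths (DFSN module, Windows Server 2012 and later):

# Take one root target offline for maintenance, then bring it back online.
Set-DfsnRootTarget -Path "\\mydomain.local\dfsroot" -TargetPath "\\myserver\dfsroot" -State Offline
Set-DfsnRootTarget -Path "\\mydomain.local\dfsroot" -TargetPath "\\myserver\dfsroot" -State Online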

 

Web resources:

https://msdn.microsoft.com/en-us/library/cc771266.aspx

https://technet.microsoft.com/en-us/library/cc771266%28v=ws.10%29.aspx

 

Hi folks, here are web resources to implement and  troubleshoot MS DFS and MS DFS-R:

DFS Replication in Windows Server 2012 R2 : http://blogs.technet.com/b/filecab/archive/2013/08/20/dfs-replication-in-windows-server-2012-r2-if-you-only-knew-the-power-of-the-dark-shell.aspx

DFS Replication Initial Sync in Windows Server 2012 R2: http://blogs.technet.com/b/filecab/archive/2013/08/21/dfs-replication-initial-sync-in-windows-server-2012-r2-attack-of-the-clones.aspx

DFS Replication in Windows Server 2012 R2: Restoring Conflicted, Deleted and PreExisting files with Windows PowerShell: http://blogs.technet.com/b/filecab/archive/2013/08/23/dfs-replication-in-windows-server-2012-r2-restoring-conflicted-deleted-and-preexisting-files-with-windows-powershell.aspx

Understanding DFS (how it works): http://technet.microsoft.com/en-us/library/cc782417(v=WS.10).aspx

=> Several mechanisms are used: routing, DNS, AD sites and subnets topology, WINS; FW ports and rules should be open (RPC, SMB…):

NetBIOS Name Service:  Domain controllers; root servers that are not domain controllers; servers acting as link targets; client computers acting as link targets: TCP/UDP 137

NetBIOS Datagram Service: Domain controllers; root servers that are not domain controllers; servers acting as link targets; client computers acting as link targets: UDP/138

NetBIOS Session Service: Domain controllers; root servers that are not domain controllers; servers acting as link targets; client computers acting as link targets: TCP/139

LDAP Server: Domain controllers TCP/UDP 389

Remote Procedure Call (RPC) endpoint mapper: Domain controllers TCP/135

Server Message Block (SMB): Domain controllers; root servers that are not domain controllers; servers acting as link targets; client computers acting as link targets: TCP/UDP 445

Extract from the MS technet: “When a client requests a referral from a domain controller, the DFS service on the domain controller uses the site information defined in Active Directory (through the DSAddressToSiteNames API) to determine the site of the client, based on the client’s IP address. DFS stores this information in the client site cache.”
“DFS clients store root referrals and link referrals in the referral cache (also called the PKT cache). These referrals allow clients to access the root and links within a namespace. You can view the contents of the referral cache by using Dfsutil.exe with the /pktinfo parameter.”
“You can view the domain cache on a client computer by using the Dfsutil.exe command-line tool with the /spcinfo parameter.”

Implementing DFS-R: http://technet.microsoft.com/en-us/library/cc770925.aspx AND DFS-R FAQ: http://technet.microsoft.com/en-us/library/cc773238.aspx, delegate DFS-R permissions: http://technet.microsoft.com/en-us/library/cc771465.aspx

DFS-R limits:

The following list provides a set of scalability guidelines that have been tested by Microsoft on Windows Server 2012 R2:
• Size of all replicated files on a server: 100 terabytes.
• Number of replicated files on a volume: 70 million.
• Maximum file size: 250 gigabytes.

The following list provides a set of scalability guidelines that have been tested by Microsoft on Windows Server 2012, Windows Server 2008 R2, and Windows Server 2008:
• Size of all replicated files on a server: 10 terabytes.
• Number of replicated files on a volume: 11 million.
• Maximum file size: 64 gigabytes.

Implementing DFS Namespace: http://technet.microsoft.com/en-us/library/cc730736.aspx AND DFS-N FAQ: http://technet.microsoft.com/fr-fr/library/ee404780(v=ws.10).aspx

Consolidation of multiple DFS namespaces into a single one: http://blogs.technet.com/b/askds/archive/2013/02/06/distributed-file-system-consolidation-of-a-standalone-namespace-to-a-domain-based-namespace.aspx

Netmon trace digest: http://blogs.technet.com/b/josebda/archive/2009/04/15/understanding-windows-server-2008-dfs-n-by-analyzing-network-traces.aspx

DFS 2008 step by step: http://technet.microsoft.com/en-us/library/cc732863(WS.10).aspx

DFS tuning and troubleshooting:

DFS-N and DFS-R from the command line (in French): http://blogcastrepository.com/blogs/benoits/archive/2009/08/22/dfs-n-et-dfs-r-en-ligne-de-commande.aspx

DFSR: the most useful commands (in French): http://www.monbloginfo.com/2011/03/02/dfsr-les-commandes-les-plus-utiles/

and http://blogs.technet.com/b/filecab/archive/2009/05/28/dfsrdiag-exe-replicationstate-what-s-dfsr-up-to.aspx

Tuning DFS: http://technet.microsoft.com/en-us/library/cc771083.aspx and Tuning DFS Replication performance : http://blogs.technet.com/b/askds/archive/2010/03/31/tuning-replication-performance-in-dfsr-especially-on-win2008-r2.aspx

DFSutil command line:  http://technet.microsoft.com/fr-fr/library/cc776211(v=ws.10).aspx AND http://technet.microsoft.com/en-us/library/cc779494(v=ws.10).aspx and https://technet.microsoft.com/en-us/library/cc776211%28WS.10%29.aspx

Performance tuning guidelines for Windows 2008 R2: http://msdn.microsoft.com/en-us/windows/hardware/gg463392.aspx

Monitoring:

DFSRMon utility: http://blogs.technet.com/b/domaineetsecurite/archive/2010/04/14/surveillez-en-temps-r-el-la-r-plication-dfsr-gr-ce-dfsrmon.aspx

or  DfsrAdmin.exe in conjunction with Scheduled Tasks to regularly generate health reports: http://go.microsoft.com/fwlink/?LinkId=74010

Server side:

DFS: some notions: A referral is an ordered list of targets that a client computer receives from a domain controller or namespace server when the user accesses a namespace root or folder with targets. After the client receives the referral, the client attempts to access the first target in the list. If the target is not available, the client attempts to access the next target.

tip1) dfsutil domain: displays all namespaces in the domain; ex: dfsutil /domain:mydomain.local /view

tip2) You can check the size of an existing DFS namespace by using the following syntax in Dfsutil.exe:

dfsutil /root:\\mydomain.local\rootname /view (for domain-based DFS)
dfsutil /root:\\dfsserver\rootname /view (for stand-alone DFS)

tip3) Enabling the insite setting of a DFS root is useful when you don’t want DFS clients to connect outside their own site (and hence avoid using expensive WAN links):
dfsutil /insite:\\mydomain.local\dfsroot /enable

tip4) You want DFS clients to be able to connect outside the internal site, but you want clients to connect to the closest site first, saving the expensive network bandwidth:

ex: dfsutil /root:\\mydomain.local\sales /sitecosting /view or /enable or /disable

If you do not know if a root is site costing aware, you can check its status by substituting the /display parameter for the /sitecosting parameter.

tip5) Enable root scalability mode: you enable root scalability mode by using the /RootScalability parameter in Dfsutil.exe, which you can install from the \Support\Tools folder on the Windows Server 2003 operating system CD. When root scalability mode is enabled, DFS root servers get updates from the closest domain controller instead of the server acting as the PDC emulator master.
As a result, root scalability mode reduces network traffic to the PDC emulator master at the expense of faster updates to all root servers. (When you make changes to the namespace, the changes are still made on the PDC emulator master, but the root servers no longer poll the PDC emulator master hourly for those changes; instead, they poll the closest domain controller.)
With this mode enabled, you can have as many root targets as you need, as long as the size of the DFS Active Directory object (for each root) is less than 5 MB. Do not use root scalability mode if any of the following conditions exist in your organization:
  • Your namespace changes frequently, and users cannot tolerate having inconsistent views of the namespace.
  • Domain controller replication is slow. This increases the amount of time it takes for the PDC emulator master to replicate DFS changes to other domain controllers, which, in turn, replicate changes to the root servers. Until this replication completes, the namespace will be inconsistent on all root servers.

ex: dfsutil /root:\\mydomain.local\sales /rootscalability /view or /enable or /disable

tip6) Dfsdiag utility: http://blogs.technet.com/b/filecab/archive/2008/10/24/what-does-dfsdiag-do.aspx

/testdcs: With this you can check the configuration of the domain controllers. It performs the following tests:

  • Verifies that the DFS Namespace service is running on all the DCs and its Startup Type is set to Automatic.
  • Checks for the support of site-costed referrals for NETLOGON and SYSVOL.
  • Verifies the consistency of site association by hostname and IP address on each DC.

To run this command against your domain mydomain.local just type:

DFSDiag /testdcs /domain:mydomain.local

DFSDiag /testdcs > dfsdiag_testdcs.txt

/testsites: Used to check the configuration of Active Directory Domain Services (AD DS) sites by verifying that servers that act as namespace servers or folder (link) targets have the same site associations on all domain controllers.

So for a machine you will be running something like: DFSDiag /testsites /machine:MyDFSServer

For a folder (link): DFSDiag /testsites /dfspath:\\mydomain.local\MyNamespace\MyLink /full

For a root: DFSDiag /testsites /dfspath:\\mydomain.local\MyNamespace /recurse /full

/testdfsconfig: With this you can check the DFS namespace configuration. The tests that it performs are:

  • Verifies that the DFS Namespace service is running and that its Startup Type is set to Automatic on all namespace servers.
  • Verifies that the DFS registry configuration is consistent among namespace servers.
  • Validates the following dependencies on clustered namespace servers that are running Windows 2008 (not supported for W2K3 clusters):
    • Namespace root resource dependency on network name resource.
    • Network name resource dependency on IP address resource.
    • Namespace root resource dependency on physical disk resource.

To run this you just need to type:  DFSDiag /testdfsconfig /dfsroot:\\mydomain.local\MyNamespace

/testdfsintegrity: Used to check the namespace integrity. The tests performed are:

  • Checks for DFS metadata corruption or inconsistencies between domain controllers.
  • In Windows Server 2008, validates that the Access Based Enumeration state is consistent between the DFS metadata and the namespace server share.
  • Detects overlapping DFS folders (links), duplicate folders, and folders with overlapping folder targets (link targets).

To check the integrity of your domain mydomain.local:

DFSDiag /testdfsintegrity /dfsroot:\\mydomain.local\MyNamespace

DFSDiag.exe /testdfsintegrity /dfsroot:\\mydomain.local\MyNamespace /recurse /full > dfsdiag_testdfsintegrity.txt

Additionally you can specify /full and /recurse. In this case, /full verifies the consistency of share and NTFS ACLs in all the folder targets and also verifies that the Online property is set in all the folder targets; /recurse performs the testing including the namespace interlinks.

/testreferral: performs specific tests, depending on the type of referral being used.

  • For Trusted Domain referrals, validates that the referral list includes all trusted domains.
  • For Domain referrals, performs a DC health check as in /testdcs.
  • For Sysvol and Netlogon referrals, performs the validation for Domain referrals and checks that the TTL has the default value (900s).
  • For namespace root referrals, performs the validation for Domain referrals, a DFS configuration check (as in /testdfsconfig) and a namespace integrity check (as in /testdfsintegrity).
  • For DFS folder referrals, in addition to performing the same health checks as when you specify a namespace root, this command validates the site configuration for the folder target (DFSDiag /testsites) and validates the site association of the local host.

Again for your namespace mydomain.local:

DFSDiag /testreferral /dfspath:\\mydomain.local\MyNamespace

DFSDiag.exe /testreferral /dfspath:\\mydomain.local\MyNamespace /full > dfsdiag_testreferral.txt

There is also the option to use /full as an optional parameter, but this only applies to Domain and Root referrals. In these cases /full verifies the consistency of site association information between the registry and Active Directory.

Domain controllers:

Evaluate domain controller health, site configurations, FSMO ownerships, and connectivity:

Use Dcdiag.exe to check if domain controllers are functional. Review this for comprehensive details about dcdiag:

    Dcdiag /v /f:Dcdiag_verbose_output.txt

    Dcdiag /v /test:dns /f:DCDiag_DNS_output.txt

    Dcdiag /v /test:topology /f:DCDiag_Topology_output.txt

Active Directory replication

If DCDiag finds any replication failures and you need additional details about them, Ned wrote an excellent article a while back that covers how to use the Repadmin.exe utility to validate the replication health of domain controllers:

    Repadmin /replsummary * > repadmin_replsummary.txt

    Repadmin /showrepl * > repadmin_showrepl.txt

Always validate the health of the environment prior to utilizing a namespace.

Clients:

  • dfsutil /root:\\mydomain.local\myroot /view /verbose    ; display the content of root dfs (links…)
  • dfsutil /pktinfo     ;to display the client cache
  • dfsutil /spcinfo     ; the domain cache on a client computer
  • dfsutil /purgemupcache ; the MUP cache stores information about which redirector, such as DFS, SMB, or WebDAV, is required for each UNC path
  • dfsutil /pktflush   ; a special problem-repair command that should only be executed on the client
  • dfsutil cache referral flush   ; to flush the client cache
  • dfsdiag /testdfsintegrity /dfsroot:\\mydomain.local\dfsroot /recurse /full > dfsdiag_testdfsintegrity.txt   ; to test root dfs from local client
  • dfsutil client siteinfo <ip> ; to display the remote client AD client site
  • dfsutil /sitename:<ip address>  or nltest /dsgetsite ; to display the local AD client site
  • To display a target dfs (primary/active) from cmd line: dfsutil client property state \\mydomain.local\dfsroot\dfsfolder1
  • To change a target dfs (primary/active) from cmd line: dfsutil client property state active \\mydomain.local\dfsroot\dfsfolder1 \\server.mydomain2.net\share  ; but you need a special hotfix for win7/2008r2 clients: http://support.microsoft.com/kb/2783031/en-us
  • Understanding: DFS override referral ordering: http://blogs.technet.com/b/askds/archive/2011/10/27/dfs-override-referral-ordering-messing-with-the-natural-order.aspx

Dfsutil examples: https://technet.microsoft.com/en-us/library/cc776211%28WS.10%29.aspx

 

http://blogs.msdn.com/b/b8/archive/2012/01/16/building-the-next-generation-file-system-for-windows-refs.aspx

Symptom:

You get a dialog box like “The file name is too long” or “The source file name(s) are larger than is supported by the file system”, or “cannot delete folder: it is being used by another person or program”, or “cannot delete file: Access is denied / There has been a sharing violation”. The source or destination file may be in use.

Description:

Basically, there is a character limit for naming or renaming files in your Windows operating system, and it varies from one OS to another, mostly between 256 and 260 characters. Thus, when you transfer files with long names from one destination to another, you will experience a “path too long” error on Windows or Linux systems.

History:

Maximum Path Length Limitation In the Windows API (with some exceptions discussed in the following paragraphs), the maximum length for a path is MAX_PATH, which is defined as 260 characters. A local path is structured in the following order: drive letter, colon, backslash, name components separated by backslashes, and a terminating null character. 1+2+256+1 or [drive][:][path][null] = 260. One could assume that 256 is a reasonable fixed string length from the DOS days. And going back to the DOS APIs we realize that the system tracked the current path per drive, and we have 26 (32 with symbols) maximum drives (and current directories). The INT 0x21 AH=0x47 says “This function returns the path description without the drive letter and the initial backslash.” So we see that the system stores the CWD as a pair (drive, path) and you ask for the path by specifying the drive (1=A, 2=B, …), if you specify a 0 then it assumes the path for the drive returned by INT 0x21 AH=0x15 AL=0x19. So now we know why it is 260 and not 256, because those 4 bytes are not stored in the path string. Why a 256 byte path string, because 640K is enough RAM.

reference: http://msdn.microsoft.com/en-us/library/aa365247%28VS.85%29.aspx

but, but, but: the NTFS filesystem supports paths up to 32K characters. You can use the Win32 API and prefix the path with “\\?\” to use more than 260 characters. http://blogs.msdn.com/b/bclteam/archive/2007/02/13/long-paths-in-net-part-1-of-3-kim-hamilton.aspx

Windows has supported 32K path lengths since Vista, but unfortunately most applications are still limited to ~255 characters!
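A hedged PowerShell illustration of the “\\?\” prefix (the path is a placeholder; this needs roughly PowerShell 5.1/.NET 4.6.2 or later to work reliably):

# The extended-length prefix bypasses the MAX_PATH check in Win32-aware tools.
Get-ChildItem -LiteralPath "\\?\D:\some\very\deep\folder"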

 

Workarounds:

a) Try moving to a location which has a shorter name, or try renaming to shorter name(s) before attempting this operation.

b) To get the list of files with long paths, you can use this PowerShell script:

$srcdir = "D:\data"    # assumption: root folder to audit; set this to your own path
Write-Host "Please wait, searching..."
robocopy.exe $srcdir c:\doesnotexist /l /e /b /np /fp /njh /njs /ndl | Where-Object {$_.Length -ge 255} | ForEach-Object {$_.Substring(26,$_.Length-26)}

after this audit phase, rename the files to be shorter!

There is also a free command line tool to detect long paths, the “too long path detector”: http://sourceforge.net/projects/tlpd/

On Linux:

With GNU find (on Linux or Cygwin), you can look for files whose relative path is more than 255 characters long:

find -regextype posix-extended -regex '.{257,}'           (257 accounts for the initial ./.)

c) Use “Unlocker” (http://www.filehippo.com/download_unlocker/) in the case where you cannot delete a folder (“it is being used by another person or program”) or a file (“Access is denied / There has been a sharing violation”). Unlocker can help! Simply right-click the folder or file and select Unlocker. If the folder or file is locked, a window listing the lockers will appear.

d) But to delete a file whose name is more than 255 characters:

  1. Open a command prompt by running “CMD.EXE”
  2. Navigate to the folder holding the file
  3. Use the command "dir /x", which will display the short names of files.
  4. Delete using the short name.

i.e. if the file is named "verylongfilename.doc", the short name will display as something like "verylo~1.doc" and you can delete using that name.

e) In the case where you have a too-long directory path, you can use the subst command:

  1. Start a command prompt (no admin privileges needed)
  2. Use cd to navigate to the folder you want to go (you can use tab to autocomplete names)
  3. type subst x: . to create the drive letter association. (instead of the . you can also type the entire path)
  4. Now in explorer you have a new letter. Go to it and do whatever you need to do to copy or delete files.
  5. Go back to your cmd window and type subst /d x: to remove the drive letter, or alternatively restart your PC

f) Another way to cope with the path limit is to shorten path entries with junctions (symbolic links).

  1. create a c:\folder directory to keep short links to long paths
  2. mklink /J C:\folder\foo c:\Some\Crazy\Long\Path\foo
  3. add c:\folder\foo to your path instead of the long path