Category: Web server

What’s new in ADFS 2016?

  • Eliminate Passwords from the Extranet
  • Sign in with Azure Multi-factor Authentication
  • Password-less Access from Compliant Devices
  • Sign in with Microsoft Passport
  • Secure Access to Applications
  • Better Sign in experience
  • Manageability and Operational Enhancements


You can upgrade an AD FS 2012 R2 farm to AD FS 2016 using the “mixed farm” process described here. It works for both WID and SQL farms, though the document shows only the WID scenario. Another upgrade procedure is covered in this three-part series:


ADFS v 3.0 (2012 R2) Migration to ADFS 4.0 (2016) – Part 1

ADFS v 3.0 (2012 R2) Migration to ADFS 4.0 (2016) – Part 2

ADFS v 3.0 (2012 R2) Migration to ADFS 4.0 (2016) – Part 3 – Azure MFA Integration
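
Once every node in the farm runs AD FS 2016, the farm behavior level can be raised from PowerShell. A minimal sketch (run on the primary node; raising the level is one-way):

# Check the current farm behavior level and the version of each node
Get-AdfsFarmInformation

# Raise the farm behavior level once all nodes run AD FS 2016
Invoke-AdfsFarmBehaviorLevelRaise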


ADFS 2016 operations

ADFS 2016 deployment

ADFS 2016 design


Microsoft FastTrack use cases: productivity library

Office 365 deployment advisors


Microsoft’s file systems organize storage devices based on cluster size. Also known as the allocation unit size, cluster size represents the smallest amount of disk space that can be allocated to hold a file. Because ReFS and NTFS don’t reference files at a byte granularity, the cluster size is the smallest unit of size that each file system can reference when accessing storage. Both ReFS and NTFS support multiple cluster sizes, as different sized clusters can offer different performance benefits, depending on the deployment.

Full article from MS:


ReFS cluster sizes

ReFS offers both 4K and 64K clusters. 4K is the default cluster size for ReFS, and we recommend using 4K cluster sizes for most ReFS deployments because it helps reduce costly IO amplification:

  • In general, if the cluster size exceeds the size of the IO, certain workflows can trigger unintended IOs to occur. Consider the following scenarios where a ReFS volume is formatted with 64K clusters:
    • Consider a tiered volume. If a 4K write is made to a range currently in the capacity tier, ReFS must read the entire cluster from the capacity tier into the performance tier before making the write. Because the cluster size is the smallest granularity that the file system can use, ReFS must read the entire cluster, which includes an unmodified 60K region, to be able to complete the 4K write.
    • If a cluster is shared by multiple regions after a block cloning operation occurs, ReFS must copy the entire cluster to maintain isolation between the two regions. So if a 4K write is made to this shared cluster, ReFS must copy the unmodified 60K cluster before making the write.
    • Consider a deployment that enables integrity streams. A sub-cluster granularity write will cause the entire cluster to be re-allocated and re-written, and the new checksum must be computed. This represents additional IO that ReFS must perform before completing the new write, which introduces a larger latency factor to the IO operation.
  • By choosing 4K clusters instead of 64K clusters, one can reduce the number of IOs that occur that are smaller than the cluster size, preventing costly IO amplifications from occurring as frequently.

Additionally, 4K cluster sizes offer greater compatibility with Hyper-V IO granularity, so we strongly recommend using 4K cluster sizes with Hyper-V on ReFS.  64K clusters are applicable when working with large, sequential IO, but otherwise, 4K should be the default cluster size.
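
The cluster size is chosen when the volume is formatted; a quick sketch with the Format-Volume cmdlet (the drive letter R is illustrative):

# Default 4K clusters, recommended for most ReFS deployments
Format-Volume -DriveLetter R -FileSystem ReFS -AllocationUnitSize 4096

# 64K clusters for large, sequential IO workloads
Format-Volume -DriveLetter R -FileSystem ReFS -AllocationUnitSize 65536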

NTFS cluster sizes

NTFS offers cluster sizes from 512 to 64K, but in general, we recommend a 4K cluster size on NTFS, as 4K clusters help minimize wasted space when storing small files. We also strongly discourage the usage of cluster sizes smaller than 4K. There are two cases, however, where 64K clusters could be appropriate:

  • 4K clusters limit the maximum volume and file size to 16TB
    • 64K cluster sizes can offer increased volume and file capacity, which is relevant if you’re hosting a large deployment on your NTFS volume, such as hosting VHDs or a SQL deployment.
  • NTFS has a fragmentation limit, and larger cluster sizes can help reduce the likelihood of reaching this limit
    • Because NTFS is backward compatible, it must use internal structures that weren’t optimized for modern storage demands. Thus, the metadata in NTFS prevents any file from having more than ~1.5 million extents.
      • One can, however, use the “format /L” option to increase the fragmentation limit to ~6 million. Read more here.
    • 64K cluster deployments are less susceptible to this fragmentation limit, so 64K clusters are a better option if the NTFS fragmentation limit is an issue. (Data deduplication, sparse files, and SQL deployments can cause a high degree of fragmentation.)
      • Unfortunately, NTFS compression only works with 4K clusters, so using 64K clusters isn’t suitable when using NTFS compression. Consider increasing the fragmentation limit instead, as described in the previous bullets.

While a 4K cluster size is the default setting for NTFS, there are many scenarios where 64K cluster sizes make sense, such as: Hyper-V, SQL, deduplication, or when most of the files on a volume are large.
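
To check the cluster size of an existing volume, or to format with 64K clusters or the /L option mentioned above, something like this (drive letters are illustrative):

# Show "Bytes Per Cluster" for an existing NTFS volume
fsutil fsinfo ntfsinfo C:

# Format with 64K clusters
Format-Volume -DriveLetter D -FileSystem NTFS -AllocationUnitSize 65536

# Format with 4K clusters plus /L to raise the fragmentation limit
format D: /FS:NTFS /A:4096 /L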

Download sysmon:

NEW: Sysmon 6.02 is available! And how to use it:

Installation and usage:
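
A minimal sketch of a typical deployment (assuming sysmon64.exe and a sysmonconfig.xml in the current directory):

# Install Sysmon as a service with a configuration file
sysmon64.exe -accepteula -i sysmonconfig.xml

# Update the configuration of an already-installed Sysmon
sysmon64.exe -c sysmonconfig.xml

# Uninstall
sysmon64.exe -u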

List of web resources concerning Sysmon:

Mark Russinovich’s RSA Conference presentation:

Sysmon config files explained:


Other install guides:

Sysinternals Sysmon unleashed


Detecting APT with Sysmon:

Sysmon with Splunk:

Sysmon log analyzer/parsing sysmon event log:
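
Sysmon events can also be read straight from the event log with PowerShell; a quick sketch:

# Latest 10 process-creation events (Sysmon Event ID 1)
Get-WinEvent -FilterHashtable @{
    LogName = 'Microsoft-Windows-Sysmon/Operational'
    Id      = 1
} -MaxEvents 10 | Format-List TimeCreated, Message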



logparser GUI:

Web article:


How to test SSL/TLS:

You can easily see which SSL/TLS protocols a server supports (and even grab the certificate from there); example below with OpenSSL:

openssl s_client -connect myserver.mydomain.local:636 -ssl3
openssl s_client -connect myserver.mydomain.local:636 -tls1
openssl s_client -connect myserver.mydomain.local:636 -tls1_1
openssl s_client -connect myserver.mydomain.local:636 -tls1_2

All of these report a successful SSL handshake and present the proper server certificate.
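
To pull the certificate out of the handshake for inspection, a small sketch (piping empty input makes s_client exit once the handshake completes):

echo "" | openssl s_client -connect myserver.mydomain.local:636 -tls1_2 | openssl x509 -noout -subject -dates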

In any case, it is easy for a client to discover which SSL protocols a remote server supports; that is how the client <==> server handshake selects a protocol supported on both sides.

I suggest you check on the application side …

# nmap --script ssl-enum-ciphers -p 636 myserver.mydomain.local

Starting Nmap 6.46 at 2017-02-16 18:22 CET
Nmap scan report for myserver.mydomain.local
Host is up (0.025s latency).
636/tcp open ldapssl
| ssl-enum-ciphers:
|   SSLv3:
|     ciphers:
|       TLS_RSA_WITH_RC4_128_MD5 - strong
|       TLS_RSA_WITH_RC4_128_SHA - strong
|     compressors:
|   TLSv1.0:
|     ciphers:
|       TLS_RSA_WITH_AES_128_CBC_SHA - strong


Certificate expiration monitoring: the administrator receives email notifications that identify which certificates are set to expire on the specified day.


SSL certificate checker:


Third-party: true-Xtender certificate expiration service


When trusted sites are managed by GPO, we can’t even view which sites are trusted using the IE menu Tools > Internet Options > Security > Sites, because the UI is disabled and won’t let you scroll down:

To solve this problem, run the following from an elevated PowerShell prompt:

$(Get-Item "HKLM:\SOFTWARE\Policies\Microsoft\Windows\CurrentVersion\Internet Settings\ZoneMapKey").Property

$(Get-Item "HKCU:\SOFTWARE\Policies\Microsoft\Windows\CurrentVersion\Internet Settings\ZoneMapKey").Property
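
The property names are the site URLs and the values are the zone numbers (1 = Intranet, 2 = Trusted sites, 3 = Internet, 4 = Restricted). A small sketch to list both together:

# List each GPO-assigned site with its zone
$key = "HKLM:\SOFTWARE\Policies\Microsoft\Windows\CurrentVersion\Internet Settings\ZoneMapKey"
$values = Get-ItemProperty $key
(Get-Item $key).Property | ForEach-Object { "{0} -> zone {1}" -f $_, $values.$_ }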

Troubleshooting HTTP.SYS – how to delete an old SSL cert?

Symptom: on an internal ADFS server, the SSL certificate has been replaced using the Set-AdfsSslCertificate cmdlet, but Get-AdfsSslCertificate still displays the old thumbprint. This causes errors in Azure AD Connect Health.


There is no dedicated ADFS cmdlet to remove this old thumbprint; the solution is to use netsh http to manage the HTTP.SYS web server.

Reference: netsh commands for HTTP:

Netsh http show sslcert                               ; to list all SSL bindings

In our case, we want to remove all bindings associated with the old certificate thumbprint.


In general, the command is: Netsh http delete sslcert ipport=w.x.y.z:443

In our case :

Netsh http delete sslcert

Netsh http delete sslcert
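
As an illustrative sketch: AD FS registers hostname-based SSL bindings for the federation service name on ports 443 and 49443, so the deletes typically look like this (fs.contoso.com is a placeholder for your federation service name):

netsh http show sslcert

netsh http delete sslcert hostnameport=fs.contoso.com:443
netsh http delete sslcert hostnameport=fs.contoso.com:49443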

Check with the Get-AdfsSslCertificate cmdlet.



With the release of each version of Windows Server, the supported maximums for the number of sockets and logical processors have increased.

Whatever the numbers for the older OS releases, the most scalable platform for virtualization is Windows Server 2012 (and of course Hyper-V Server 2012), far exceeding VMware 5.1 and others.
Let’s start by defining common terminology.


Logical Processor: A thread of execution on a physical processing unit, which can be a core or a thread on a simultaneous multi-threading (SMT) system.

Virtual Processor: A virtualized instance of a logical processor exposed to virtual machines.

Socket: What a physical processor plugs into.
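
To see how these terms map onto a given host, a quick sketch with CIM:

# Sockets and total logical processors on this host
Get-CimInstance Win32_ComputerSystem | Select-Object NumberOfProcessors, NumberOfLogicalProcessors

# Cores and logical processors per physical processor (socket)
Get-CimInstance Win32_Processor | Select-Object DeviceID, NumberOfCores, NumberOfLogicalProcessors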