Category: Active Directory


As a general best practice, you should simplify your site topology and site link costs as much as possible if you enable the Try Next Closest Site setting. In enterprises with many hub sites, this can simplify any plans that you make for handling situations in which clients in one site need to fail over to a domain controller in another site.

By default, the Try Next Closest Site setting is not enabled. When the setting is not enabled, DC Locator uses the following algorithm to locate a domain controller:

  • Try to find a domain controller in the same site.
  • If no domain controller is available in the same site, try to find any domain controller in the domain.
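The fallback logic above can be sketched in a few lines of Python (the data structure and function name are hypothetical, for illustration only):

```python
def locate_dc(client_site, dcs_by_site):
    """Sketch of the default DC Locator behavior (Try Next Closest Site disabled).

    dcs_by_site is a hypothetical map of site name -> list of available DCs.
    """
    # 1. Try to find a domain controller in the client's own site.
    if dcs_by_site.get(client_site):
        return dcs_by_site[client_site][0]
    # 2. Otherwise, fall back to any available domain controller in the domain.
    for dcs in dcs_by_site.values():
        if dcs:
            return dcs[0]
    return None  # no DC available at all

# A client in Paris whose site has no DC falls back to any DC in the domain.
print(locate_dc("Paris", {"Paris": [], "London": ["LON-DC1"]}))  # LON-DC1
```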

How to:



This post will try to explain some relevant parameters on the ADFS side. I’m not saying the defaults aren’t good; that’s something you have to decide for yourself.


WS-Fed/SAML protocol requirements: All time is UTC. ADFS will ignore system time and will use UTC.

Dates in SAML

A Security Assertion Markup Language (SAML) assertion might contain many attributes that contain dates. For example, the following Conditions element of a SAML assertion has two date attributes:
<saml:Conditions NotOnOrAfter="2013-07-29T23:49:40.051Z" NotBefore="2013-07-29T23:39:40.051Z">

Time in SAML elements is expressed in xs:dateTime format (xs is the XML Schema namespace prefix). The Z at the end indicates that the time zone is Coordinated Universal Time (UTC), or Zulu time. SAMLCore explicitly references the “W3C XML Schema Datatypes specification [Schema2]”, which in turn references ISO 8601.
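As a sketch, timestamps in this format can be produced with a few lines of Python (the helper name is made up for illustration):

```python
from datetime import datetime, timedelta, timezone

def saml_instant(dt):
    """Format an aware datetime as an xs:dateTime UTC string ('Z' suffix,
    millisecond precision, as in the Conditions element above)."""
    dt = dt.astimezone(timezone.utc)
    return dt.strftime("%Y-%m-%dT%H:%M:%S.%f")[:-3] + "Z"

not_before = datetime(2013, 7, 29, 23, 39, 40, 51000, tzinfo=timezone.utc)
not_on_or_after = not_before + timedelta(minutes=10)
print(saml_instant(not_before))       # 2013-07-29T23:39:40.051Z
print(saml_instant(not_on_or_after))  # 2013-07-29T23:49:40.051Z
```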

Dates in OpenTokens

An OpenToken contains multiple dates. For example the following OpenToken contains three dates:


The OpenToken format also uses ISO 8601, which likewise uses UTC.
See also: Microsoft KB 884804 – How to convert UTC time to local time

WebSSOLifetime (Default 480 = 8 hours)

This parameter is server-wide, meaning that if you configure it, it is active for all of the ADFS relying parties (RPs). Whenever a user requests a token for a given RP, he first has to authenticate to the ADFS service. Upon communicating with the ADFS service he receives two tokens: a token that proves who he is (let’s call that the ADFS token) and a token for the RP (the RP token). All in all, this is very much like the TGT and TGS tickets of Kerberos.

Now the WebSSOLifetime timeout determines how long the ADFS token can be used to request new RP tokens without having to re-authenticate. In other words, a user can request new tokens for this RP, or for other RPs, and he will not have to prove who he is until WebSSOLifetime causes the ADFS token to expire.

TokenLifetime (Default 0 (which is 10 hours))

The TokenLifetime is now easy to explain. This parameter is configurable per RP. Whenever a user receives an RP token, it expires at some point. At that time the user has to go to the ADFS server again and request a new RP token. Depending on whether the ADFS token is still valid, he may not have to re-authenticate.

One argument for lowering the TokenLifetime is that you may want claims to be updated faster. With the default, whenever some of the Attribute Store information is modified, it might take up to 10 hours before this change reaches the user in his claims.
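To make the interplay of the two lifetimes concrete, here is a toy Python model (the function names and the whole model are illustrative, not ADFS internals):

```python
from datetime import datetime, timedelta, timezone

WEB_SSO_LIFETIME = timedelta(minutes=480)  # server-wide default: 8 hours
TOKEN_LIFETIME = timedelta(minutes=600)    # per-RP default of 0 means 10 hours

def must_reauthenticate(adfs_token_issued_at, now):
    """The ADFS token (TGT-like) can mint new RP tokens until WebSSOLifetime is up."""
    return now - adfs_token_issued_at >= WEB_SSO_LIFETIME

def rp_token_expired(rp_token_issued_at, now):
    """The RP token (TGS-like) expires after TokenLifetime; the user then asks
    ADFS for a new one, re-authenticating only if the ADFS token also expired."""
    return now - rp_token_issued_at >= TOKEN_LIFETIME

t0 = datetime(2013, 7, 29, 8, 0, tzinfo=timezone.utc)
nine_hours_later = t0 + timedelta(hours=9)
print(rp_token_expired(t0, nine_hours_later))     # False: RP token lives 10 h
print(must_reauthenticate(t0, nine_hours_later))  # True: SSO window was 8 h
```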

NotBefore and NotOnOrAfter, NotBeforeSkew

Setting up federated trusts with third-party vendors to provide users with single sign-on (SSO) is very common. SAML 2.0 is the preferred method for SSO authentication. One issue with this method is ensuring the SAML tokens have a valid lifespan: basically, when does a token become valid and when is it no longer valid? Built into the SAML specification there is a <saml:Conditions> element, which contains two attributes: NotBefore and NotOnOrAfter. The NotBefore attribute contains the date and time value that specifies when the assertion becomes valid. The NotOnOrAfter attribute contains the date and time value that specifies when the SAML assertion is no longer valid. Both must be UTC datetimes, without a time zone offset. As long as the SAML token is used between the NotBefore and NotOnOrAfter times, the assertion is valid.

But what happens when the IdP server time and the third-party server time are off by a few seconds, or even a couple of minutes? Simple: authentication may fail, because the third-party server may see the SAML token as not yet valid.

Luckily, ADFS 3.0 (Windows Server 2012 R2) offers a simple solution: a time skew value can be added to the relying party on the ADFS server. This property is called NotBeforeSkew. It contains the number of minutes by which to adjust the NotBefore value. Setting NotBeforeSkew to 5 shifts NotBefore back by 5 minutes.

The following PowerShell command can be used to set the NotBeforeSkew value.

Set-ADFSRelyingPartyTrust -TargetIdentifier "<relying party identifier>" -NotBeforeSkew 5
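The arithmetic the skew performs can be sketched in Python (a hypothetical helper, not the ADFS implementation):

```python
from datetime import datetime, timedelta, timezone

def apply_not_before_skew(issue_instant, skew_minutes):
    """Shift NotBefore back by NotBeforeSkew minutes, so a relying party
    whose clock runs slightly behind still sees the assertion as valid."""
    return issue_instant - timedelta(minutes=skew_minutes)

issued = datetime(2013, 7, 29, 23, 39, 40, tzinfo=timezone.utc)
print(apply_not_before_skew(issued, 5).isoformat())  # 2013-07-29T23:34:40+00:00
```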


ADFS Backup Restore tool

ADFS Rapid restore tool:

– download it from Microsoft Connect. 

With the ADFS Rapid Restore Tool, you can back up and restore your ADFS farm easily, in seconds…


To back up your ADFS farm, use the command listed below with the following switches:

  • BackupDKM – Backs up the Active Directory DKM container that holds the AD FS keys in the default configuration (automatically generated token-signing and token-decrypting certificates).
  • StorageType – The type of storage: “FileSystem” stores the backup in a folder, locally or on the network; “Azure” stores the backup in an Azure storage container (Azure storage credentials must be passed to the cmdlet). The storage credentials contain the account name and key; a container name must also be passed in, and if the container doesn’t exist, it is created during the backup.
  • EncryptionPassword – The password that will be used to encrypt all the backed-up files before storing them
  • AzureConnectionCredentials – The account name and key for the Azure storage account
  • AzureStorageContainer – The storage container in Azure where the backup will be stored
  • StoragePath – The location where the backups will be stored
  • ServiceAccountCredential – Specifies the service account currently running the ADFS service. This parameter is only needed if you want to back up the DKM and are not a domain admin.
  • BackupComment <string[]> – An informational string about the backup that is displayed during the restore, similar to Hyper-V checkpoint naming. The default is an empty string.


import-module 'C:\Program Files (x86)\ADFS Rapid Recreation Tool\ADFSRapidRecreationTool.dll'

Backup-ADFS -StorageType "FileSystem" -StoragePath "d:\Scripts\ADFS Backups\" -EncryptionPassword "mypwd" -BackupComment "ADFS" -BackupDKM

In the same way, the restore process is also very easy to achieve with the following switches:

StorageType – same as for backup (“FileSystem” and “Azure”)

  • DecryptionPassword – The password that was used to encrypt all the backed-up files
  • AzureConnectionCredentials – The account name and key for the Azure storage account
  • AzureStorageContainer – The storage container in Azure where the backup is stored
  • StoragePath – The location where the backups are stored
  • ADFSName <string> – The name of the federation service that was backed up and is going to be restored
  • ServiceAccountCredential <pscredential> – Specifies the service account that will be used for the new ADFS service being restored
  • GroupServiceAccountIdentifier – The gMSA that the user wants to use for the new ADFS service being restored. If neither account parameter is provided, the backed-up account name is used if it was a gMSA; otherwise the user is prompted for a service account
  • DBConnectionString – To use a different database for the restore, pass the SQL connection string, or type WID for WID
  • Force – Skip the prompts that the tool might show once the backup is chosen
  • RestoreDKM – Restore the DKM container to AD; should be set if going to a new AD and the DKM was backed up initially

Note that you must specify the database engine type used by the ADFS farm with the -DBConnectionString parameter, as follows.

To restore your ADFS farm when using a WID database or SQL Server, use respectively the following parameters:

import-module 'C:\Program Files (x86)\ADFS Rapid Recreation Tool\ADFSRapidRecreationTool.dll'

Restore-ADFS -StorageType "FileSystem" -StoragePath "d:\Scripts\ADFS Backups" -DecryptionPassword "mypwd" -RestoreDKM -DBConnectionString "WID"

Restore-ADFS -StorageType "FileSystem" -StoragePath "d:\Scripts\ADFS Backups" -DecryptionPassword "mypwd" -RestoreDKM -DBConnectionString "Data Source=SQLServer\SQLINSTANCE; Integrated Security=True"

During the restore process, note that the ADFS Rapid Restore Tool prompts the administrator to choose which backup to restore, based on date and time.

As you can see, at this point it is not quite done: after the restore operation, the ADFS service is not yet operational and running, so you still need to start it.

What’s new in ADFS 2016?

  • Eliminate Passwords from the Extranet
  • Sign in with Azure Multi-factor Authentication
  • Password-less Access from Compliant Devices
  • Sign in with Microsoft Passport
  • Secure Access to Applications
  • Better Sign in experience
  • Manageability and Operational Enhancements

You can upgrade an AD FS 2012 R2 farm using the “mixed farm” process described here. It works for WID or SQL farms, though the document shows only the WID scenario. Another upgrade procedure:

  1. Active Directory schema update using ‘ADPrep’ with the Windows Server 2016 additions
  2. Build Windows Server 2016 servers with ADFS and install into the existing farm and add the servers to the Azure load balancer
  3. Promote one of the ADFS 2016 servers as “primary” of the farm, and point all other secondary servers to the new “primary”
  4. Build Windows Server 2016 servers with WAP and add the servers to the Azure load balancer
  5. Remove the WAP 2012 servers from the Azure load balancer
  6. Remove the ADFSv3 servers from the Azure load balancer
  7. Raise the Farm Behavior Level (FBL) to ‘2016’
  8. Remove the WAP servers from the cluster
  9. Upgrade the WebApplicationProxyConfiguration version to ‘2016’
  10. Configure ADFS 2016 to support Azure MFA and complete remaining configuration

Other links:

ADFS 2016 operations

ADFS 2016 deployment

ADFS 2016 design

Reference article:



On the current WAP server wapserver1, the WAP Remote Access Management console displays a server called server2. How do you remove this server from the cluster list?


Connect to wapserver1 and open a PowerShell prompt (swpc and gwpc below are shorthand for Set-WebApplicationProxyConfiguration and Get-WebApplicationProxyConfiguration):

Set-WebApplicationProxyConfiguration -ConnectedServersName ((Get-WebApplicationProxyConfiguration).ConnectedServersName -ne 'server2.domain.local')

Run Get-WebApplicationProxyConfiguration to display the list of WAP servers.

How to bind a MAC to a Windows domain:

Third-party Tools:



Procedures and white papers:

Apple support articles:



Event forwarding (also called subscriptions) is a means of sending Windows event log entries from source computers to a collector. The same computer can be both a collector and a source.

There are two methods available to complete this challenge – collector initiated and source initiated:

Parameter                              Collector Initiated (PULL)   Source Initiated (PUSH)
Socket direction (for firewall rules)  Collector –> Source          Source –> Collector
Initiating machine                     Collector                    Source
Authentication type                    Kerberos                     Kerberos / Certificates

This technology uses WinRM (the HTTP protocol on port TCP 5985 with WinRM 2.0). Be careful with the Windows Firewall: configure it to allow incoming WinRM requests.

WinRM is the ‘server’ component and WinRS is the ‘client’ that can remotely manage the machine with WinRM configured.

Differences you should be aware of:

WinRM 1.1 (obsolete)
Vista and Server 2008
Port 80 for HTTP and Port 443 for HTTPS

WinRM 2.0
Windows 7 and Server 2008 R2, 2012 R2 …
Port 5985 for HTTP and Port 5986 for HTTPS
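The version-to-port mapping above can be kept as a small Python lookup when scripting checks (the structure and function name are my own, not part of WinRM):

```python
# Default WinRM listener ports, per the version notes above.
WINRM_PORTS = {
    "1.1": {"http": 80, "https": 443},     # Vista / Server 2008 (obsolete)
    "2.0": {"http": 5985, "https": 5986},  # Windows 7 / Server 2008 R2 and later
}

def winrm_port(version, scheme="http"):
    """Return the default listener port for a WinRM version and scheme."""
    return WINRM_PORTS[version][scheme]

print(winrm_port("2.0"))           # 5985
print(winrm_port("1.1", "https"))  # 443
```

Keeping the mapping explicit matters for troubleshooting: a subscription created on a 2.0 collector defaults to 5985 and will fail against a 1.1 source listening on port 80, as described later in this post.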

Reference for WEF and event forwarding:

Deploying WinRM using Group Policy:

Microsoft official document well documented:

Fresh how-to from an intrusion detection perspective:

How-to, easy to follow, from an intrusion detection perspective: the same as the previous one but with more appendices

From an intrusion detection perspective: helps to manage errors in WEF deployment

Basic configuration:

On the source computers and the collector computer: run winrm quickconfig, and add the collector computer account to the local Administrators group.

To verify that a listener has been created, type winrm enumerate winrm/config/listener.

WinRM Client Setup

Just to round off this quick introduction to WinRM: to delete a listener, use winrm delete winrm/config/listener?address=*+Transport=HTTP

On the collector computer: run wecutil qc, and add the computer account of the collector computer to the Event Log Readers group on each of the source computers.

On the collector computer: create a new subscription from Event Viewer (follow the wizard).

WinRS: WinRS (Windows Remote Shell) is the client that connects to a WinRM-configured machine (as seen in the first part of this post). WinRS is pretty handy; you’ve probably used PSTools or SC for similar things in the past. Here are a few examples of what you can do.

Connecting to a remote shell
winrs -r:http://hostnameofclient "cmd"
Stop / Starting remote service
winrs -r:http://hostnameofclient "net start/stop spooler"
Do a Dir on the C drive
winrs -r:http://hostnameofclient "dir c:\"


Forwarded Event Logs:

This is configured using ‘subscriptions’, which connect to WinRM-enabled machines.

To configure these subscriptions, head over to Event Viewer, right-click Forwarded Events and select Properties. Select the second tab, Subscriptions, and press Create.

This is where you select the WinRM-enabled machine and choose which events you would like forwarded.


Right click the subscription and select show runtime status.

Error 0x80338126

Now it took me a minute or two to figure this one out. Was it a firewall issue (which gives the same error code)? Did I miss some configuration steps? Well, no, it was something a lot more basic than that. Remember earlier on we were talking about the port changes from WinRM 1.1 to 2.0?

That’s right: I was using Server 2008 R2 to set up the subscriptions, which automatically sets the port to 5985. The client I configured initially was Server 2008, so it uses version 1.1. If you right-click the subscription and click Properties -> Advanced you’ll be able to see this. I changed this to port 80 and checked the runtime status again:

[DC2.domain.local] – Error – Last retry time: 03/02/2011 20:20:30. Code (0x5): Access is denied. Next retry time: 03/02/2011 20:25:30.

Head back to the advanced settings and change the user account from machine account to a user with administrative rights. After making these changes the forwarded events started to flow.

Subscriptions Advanced

Additional considerations:

In a workgroup environment, you can follow the same basic procedure described above to configure computers to forward and collect events. However, there are some additional steps and considerations for workgroups:

  • You can only use Normal mode (Pull) subscriptions
  • You must add a Windows Firewall exception for Remote Event Log Management on each source computer.
  • You must add an account with administrator privileges to the Event Log Readers group on each source computer. You must specify this account in the Configure Advanced Subscription Settings dialog when creating a subscription on the collector computer.
  • Type winrm set winrm/config/client @{TrustedHosts="<sources>"} at a command prompt on the collector computer to allow all of the source computers to use NTLM authentication when communicating with WinRM on the collector computer. Run this command only once. Where <sources> appears in the command, substitute a list of the names of all of the participating source computers in the workgroup. Separate the names by commas. Alternatively, you can use wildcards to match the names of all the source computers. For example, if you want to configure a set of source computers, each with a name that begins with “msft”, you could type this command winrm set winrm/config/client @{TrustedHosts="msft*"} on the collector computer. To learn more about this command, type winrm help config.
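A small Python helper can assemble that command line from a list of source names (the function is illustrative only; the winrm syntax is as quoted above):

```python
def trusted_hosts_command(sources):
    """Build the winrm command that adds the given source computers (or
    wildcard patterns) to the collector's TrustedHosts list."""
    value = ",".join(sources)  # names are comma-separated, per the docs
    return 'winrm set winrm/config/client @{TrustedHosts="%s"}' % value

print(trusted_hosts_command(["msft1", "msft2"]))
# winrm set winrm/config/client @{TrustedHosts="msft1,msft2"}
print(trusted_hosts_command(["msft*"]))
# winrm set winrm/config/client @{TrustedHosts="msft*"}
```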

If you configure a subscription to use the HTTPS protocol by using the HTTPS option in Advanced Subscription Settings , you must also set corresponding Windows Firewall exceptions for port 443. For a subscription that uses Normal (PULL mode) delivery optimization, you must set the exception only on the source computers. For a subscription that uses either Minimize Bandwidth or Minimize Latency (PUSH mode) delivery optimizations, you must set the exception on both the source and collector computers.

If you intend to specify a user account by using the Specific User option in Advanced Subscription Settings when creating the subscription, you must ensure that account is a member of the local Administrators group on each of the source computers in step 4 instead of adding the machine account of the collector computer. Alternatively, you can use the Windows Event Log command-line utility to grant an account access to individual logs. To learn more about this command-line utility, type wevtutil sl -? at a command prompt.




1st: Event forwarding between computers in a Domain—How-to-Configure-Event-Forwarding-in-AD-DS-Domains.aspx

2nd: Event forwarding between computers in workgroup—How-to-Troubleshoot-Event-Forwarding—How-to-Configure-Event-Forwarding-in-Workgroup-Environments.aspx

Additional article talking about Event forwarding too:


Behind this catchy title is a real need. As a system administrator, it may be worthwhile to audit all of your organization’s Active Directory accounts to assess the level of security for user accounts. Let’s see how we do it!

Web resources and Methods:

Latest version: 1.8 update 1 – Azure ATP and ATA v1.9 planned for Q1 2019

News from Ignite event 2017:   

Azure ATP:

Technet resource:

ATA 1.8 simulation playbook:

ATA powershell module:

(copied under \\\microsoft\microsoft ATA\)

News from pentesters:


What’s new in ATA version 1.8

Suspicious activity guide:

New & updated detections

  • NEW! Abnormal modification of sensitive groups – As part of the privilege escalation phase, attackers modify groups with high privileges to gain access to sensitive resources. ATA now detects when there’s an abnormal change in an elevated group.
  • NEW! Suspicious authentication failures (Behavioral brute force) – Attackers attempt to brute force credentials to compromise accounts. ATA now raises an alert when an abnormal failed authentication behavior is detected.
  • NEW! Remote execution attempt – WMI exec – Attackers can attempt to control your network by running code remotely on your domain controller. ATA added detection for remote execution leveraging WMI methods to run code remotely.
  • Reconnaissance using directory services queries – In ATA 1.8, a learning algorithm was added to this detection, allowing ATA to detect reconnaissance attempts against a single sensitive entity and improving the results for generic reconnaissance.
  • Kerberos Golden Ticket activity – ATA 1.8 includes an additional technique to detect golden ticket attacks, detecting time anomalies for Kerberos tickets.
  • Enhancements to some detections, to remove known false positives:
    • Privilege escalation detection (forged PAC)
    • Encryption downgrade activity (Skeleton Key)
    • Unusual protocol implementation
    • Broken trust


  • NEW! More actions can be made to suspicious activities during the triage process.
    • Exclude some entities from raising future suspicious activities. Prevent ATA from alerting when it detects benign true positives (e.g. an admin running remote code or using nslookup) or known false positives (don’t open a Pass-the-Ticket alert on a specific IP).
    • Suppress a recurring suspicious activity from alerting.
    • Delete suspicious activities from the timeline.
  • A more efficient triage – The suspicious activities timeline has gone through a major redesign. In 1.8, many more suspicious activities are visible at the same time, with better information for triage and investigation purposes.


  • NEW! Summary report. An option to see all the summarized data from ATA, including suspicious activities, health issues and more. It’s possible to define a recurring report.
  • NEW! Modifications to sensitive groups report, to see all the changes made to sensitive groups during a certain period.


  • Lightweight Gateways can now read events locally, without configuring event forwarding
  • Feature flags were added for all detections, periodic tasks and monitoring alerts
  • Accessibility ramp up – ATA now stands with Microsoft in providing an accessible product, for everyone.
  • E-mail configuration for monitoring alerts and for suspicious activities are separated


  • NEW! Single sign on for ATA management.
    • Gateway and Lightweight gateway silent installation scripts will use the logged on user’s context, without the need to provide credentials.
  • Local System privileges removed from Gateway process
    • You can now use virtual accounts (available on stand-alone GWs only), managed service accounts and group managed service accounts to run the ATA Gateway process.
  • Auditing logs for the ATA Center and Gateways were added, and all actions are now logged in the Event Viewer.
  • Added support for KSP certificates


Version: 1.7

Reference articles:

ATA on Technet:

ATA events:

ATA deployment demo:


Additional resources:

Powershell windows forensics:



As an administrator, renaming a domain controller is normally not the right way to go, but in some cases it is required to fix a previously wrong name.

Current Host name of the Domain Controller



Since the name was wrongly assigned, we cannot simply rename the machine as we do with workstations, because this is a domain controller. Hence we need to do it using the NETDOM command.

Step 1: Add a second name


netdom computername <CurrentComputerName FQDN> /add:<NewComputerName FQDN>

Step 2: Make the second name the primary name

netdom computername <CurrentComputerName FQDN> /makeprimary:<NewComputerName FQDN>

After the reboot, you can see that the domain controller has been renamed.



Step 3: Remove the old name

netdom computername <NewComputerName> /remove:<OldComputerName FQDN>



Now run DCDIAG or REPADMIN to verify the replication status.
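The three NETDOM steps above can be modeled as a toy Python class (purely illustrative; NETDOM itself updates the computer object and its service principal names in AD):

```python
class ComputerNames:
    """Toy model of the name list that NETDOM COMPUTERNAME manages on a DC."""

    def __init__(self, primary):
        self.primary = primary
        self.alternates = set()

    def add(self, fqdn):
        # netdom computername <current> /add:<new> - register an alternate name
        self.alternates.add(fqdn)

    def makeprimary(self, fqdn):
        # netdom computername <current> /makeprimary:<new> - swap primary/alternate
        if fqdn not in self.alternates:
            raise ValueError("name must be added before it can become primary")
        self.alternates.discard(fqdn)
        self.alternates.add(self.primary)
        self.primary = fqdn

    def remove(self, fqdn):
        # netdom computername <new> /remove:<old> - drop the old name
        self.alternates.discard(fqdn)

dc = ComputerNames("olddc.domain.local")
dc.add("newdc.domain.local")
dc.makeprimary("newdc.domain.local")  # the reboot happens here in real life
dc.remove("olddc.domain.local")
print(dc.primary, dc.alternates)      # newdc.domain.local set()
```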