Category: System and Network Admins


To detect lateral movement on Windows infrastructure, I recommend collecting the following events:

The approach is based on events (4648 and 4672 from member servers, 8004 from DCs) plus network traffic (Kerberos AS/TGS exchanges).

Regarding event 4648 (A logon was attempted using explicit credentials) and event 4672 (Special privileges assigned to new logon):
=> Collect the events and send them to a SIEM (Splunk, LogRhythm, …) or at least to a Windows Event Collector via Windows Event Forwarding (WEF); a minimal collection sketch follows below.
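
As a quick sanity check before the SIEM pipeline is in place, the same events can be pulled directly with PowerShell (the server names and the 24-hour window below are only illustrative):

# Collect recent 4648/4672 events from a couple of member servers (placeholder names)
$servers = 'MEMBER01', 'MEMBER02'
foreach ($srv in $servers) {
    Get-WinEvent -ComputerName $srv -FilterHashtable @{
        LogName   = 'Security'
        Id        = 4648, 4672
        StartTime = (Get-Date).AddDays(-1)
    } | Select-Object TimeCreated, Id, MachineName, Message
}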

 


Microsoft’s file systems organize storage devices based on cluster size. Also known as the allocation unit size, cluster size represents the smallest amount of disk space that can be allocated to hold a file. Because ReFS and NTFS don’t reference files at a byte granularity, the cluster size is the smallest unit of size that each file system can reference when accessing storage. Both ReFS and NTFS support multiple cluster sizes, as different sized clusters can offer different performance benefits, depending on the deployment.

Full article from MS: https://blogs.technet.microsoft.com/filecab/2017/01/13/cluster-size-recommendations-for-refs-and-ntfs/

Summary:

ReFS cluster sizes

ReFS offers both 4K and 64K clusters. 4K is the default cluster size for ReFS, and we recommend using 4K cluster sizes for most ReFS deployments because it helps reduce costly IO amplification:

  • In general, if the cluster size exceeds the size of the IO, certain workflows can trigger unintended IOs to occur. Consider the following scenarios where a ReFS volume is formatted with 64K clusters:
    • Consider a tiered volume. If a 4K write is made to a range currently in the capacity tier, ReFS must read the entire cluster from the capacity tier into the performance tier before making the write. Because the cluster size is the smallest granularity that the file system can use, ReFS must read the entire cluster, which includes an unmodified 60K region, to be able to complete the 4K write.
    • If a cluster is shared by multiple regions after a block cloning operation occurs, ReFS must copy the entire cluster to maintain isolation between the two regions. So if a 4K write is made to this shared cluster, ReFS must copy the unmodified 60K cluster before making the write.
    • Consider a deployment that enables integrity streams. A sub-cluster granularity write will cause the entire cluster to be re-allocated and re-written, and the new checksum must be computed. This represents additional IO that ReFS must perform before completing the new write, which introduces a larger latency factor to the IO operation.
  • By choosing 4K clusters instead of 64K clusters, one can reduce the number of IOs that occur that are smaller than the cluster size, preventing costly IO amplifications from occurring as frequently.

Additionally, 4K cluster sizes offer greater compatibility with Hyper-V IO granularity, so we strongly recommend using 4K cluster sizes with Hyper-V on ReFS.  64K clusters are applicable when working with large, sequential IO, but otherwise, 4K should be the default cluster size.
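
For illustration, formatting a volume with the recommended 4K ReFS cluster size is a one-liner with the Storage module (the drive letter and label are assumptions):

# Format a data volume as ReFS with 4K (4096-byte) clusters, the recommended default
Format-Volume -DriveLetter D -FileSystem ReFS -AllocationUnitSize 4096 -NewFileSystemLabel "Data"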

NTFS cluster sizes

NTFS offers cluster sizes from 512 to 64K, but in general, we recommend a 4K cluster size on NTFS, as 4K clusters help minimize wasted space when storing small files. We also strongly discourage the usage of cluster sizes smaller than 4K. There are two cases, however, where 64K clusters could be appropriate:

  • 4K clusters limit the maximum volume and file size to be 16TB
    • 64K cluster sizes can offer increased volume and file capacity, which is relevant if you’re hosting a large deployment on your NTFS volume, such as VHDs or a SQL deployment.
  • NTFS has a fragmentation limit, and larger cluster sizes can help reduce the likelihood of reaching this limit
    • Because NTFS is backward compatible, it must use internal structures that weren’t optimized for modern storage demands. Thus, the metadata in NTFS prevents any file from having more than ~1.5 million extents.
      • One can, however, use the “format /L” option to increase the fragmentation limit to ~6 million.
    • 64K cluster deployments are less susceptible to this fragmentation limit, so 64K clusters are a better option if the NTFS fragmentation limit is an issue. (Data deduplication, sparse files, and SQL deployments can cause a high degree of fragmentation.)
      • Unfortunately, NTFS compression only works with 4K clusters, so using 64K clusters isn’t suitable when using NTFS compression. Consider increasing the fragmentation limit instead, as described in the previous bullets.

While a 4K cluster size is the default setting for NTFS, there are many scenarios where 64K cluster sizes make sense, such as: Hyper-V, SQL, deduplication, or when most of the files on a volume are large.
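
To check the cluster size of an existing NTFS volume, or to format a new volume with 64K clusters for one of the scenarios above, the following commands can be used (drive letters and the label are assumptions):

# Show the current cluster size ("Bytes Per Cluster") of an NTFS volume
fsutil fsinfo ntfsinfo C:

# Format a dedicated volume with 64K (65536-byte) clusters, e.g. for large VHDs or SQL data files
Format-Volume -DriveLetter E -FileSystem NTFS -AllocationUnitSize 65536 -NewFileSystemLabel "VHDStore"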

Some interesting sites:

 

Reference articles to secure a Windows domain:

https://github.com/PaulSec/awesome-windows-domain-hardening

Sysinternals sysmon:

https://onedrive.live.com/view.aspx?resid=D026B4699190F1E6!2843&ithint=file%2cpptx&app=PowerPoint&authkey=!AMvCRTKB_V1J5ow

On ADsecurity.org:

Securing Domain Controllers to Improve Active Directory Security

 

Download sysmon:

NEW: Sysmon 6.02 is available: https://technet.microsoft.com/en-us/sysinternals/sysmon. How to use it:

Installation and usage:

List of web resources concerning Sysmon: https://github.com/MHaggis/sysmon-dfir

Mark Russinovich’s RSA Conference presentation: https://onedrive.live.com/view.aspx?resid=D026B4699190F1E6!2843&ithint=file%2cpptx&app=PowerPoint&authkey=!AMvCRTKB_V1J5ow

Sysmon config files explained:

https://github.com/SwiftOnSecurity/sysmon-config

https://github.com/ion-storm/sysmon-config/blob/master/sysmonconfig-export.xml

https://www.bsk-consulting.de/2015/02/04/sysmon-example-config-xml/


Other install guides:

Sysinternals Sysmon unleashed

http://www.darkoperator.com/blog/2014/8/8/sysinternals-sysmon
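
In practice, installing or updating Sysmon with one of the configuration files above comes down to a few commands from an elevated prompt (the config file name is the one from the SwiftOnSecurity repository; adjust to your own file):

# Install Sysmon (64-bit) with a configuration file, accepting the EULA silently
Sysmon64.exe -accepteula -i sysmonconfig-export.xml

# Update the configuration of an already-installed Sysmon service
Sysmon64.exe -c sysmonconfig-export.xml

# Uninstall if needed
Sysmon64.exe -u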

 

Detecting APT with Sysmon:

https://www.rsaconference.com/writable/presentations/file_upload/hta-w05-tracking_hackers_on_your_network_with_sysinternals_sysmon.pdf

 

https://www.root9b.com/sites/default/files/whitepapers/R9B_blog_005_whitepaper_01.pdf

Sysmon with Splunk:

http://blogs.splunk.com/2014/11/24/monitoring-network-traffic-with-sysmon-and-splunk/

https://securitylogs.org/tag/sysmon/

Sysmon log analyzer/parsing sysmon event log:

https://github.com/CrowdStrike/Forensics/blob/master/sysmon_parse.cmd

https://digital-forensics.sans.org/blog/2014/08/12/sysmon-in-malware-analysis-lab

https://github.com/JamesHabben/sysmon-queries

http://blog.crowdstrike.com/sysmon-2/

logparser: http://www.microsoft.com/en-us/download/confirmation.aspx?id=24659

logparser GUI: http://lizard-labs.com/log_parser_lizard.aspx
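
Besides the parsers above, the Sysmon operational channel can also be queried directly with PowerShell; for example (event ID 3 covers network connections, and the event count is arbitrary):

# Pull the latest Sysmon network-connection events (ID 3) from the local machine
Get-WinEvent -FilterHashtable @{
    LogName = 'Microsoft-Windows-Sysmon/Operational'
    Id      = 3
} -MaxEvents 50 | Select-Object TimeCreated, Id, Message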

Web article:

https://technet.microsoft.com/en-us/library/cc784450(v=ws.10).aspx

 

How to test SSL/TLS:

You can easily see which SSL/TLS protocols a server supports (and even grab the certificate from there); example below with OpenSSL:

openssl s_client -connect myserver.mydomain.local:636 -ssl3
openssl s_client -connect myserver.mydomain.local:636 -tls1
openssl s_client -connect myserver.mydomain.local:636 -tls1_1
openssl s_client -connect myserver.mydomain.local:636 -tls1_2

All of these report a successful SSL handshake and present the proper server certificate.

In any case, it is very easy for a client to discover the SSL/TLS protocols supported by a remote server; that is how the client <==> server handshake works to select a protocol supported on both sides.

I suggest you check on the application side …

# nmap --script ssl-enum-ciphers -p 636 myserver.mydomain.local

Starting Nmap 6.46 ( http://nmap.org ) at 2017-02-16 18:22 CET
Nmap scan report for myserver.mydomain.local (172.19.133.64)
Host is up (0.025s latency).
PORT STATE SERVICE
636/tcp open ldapssl
| ssl-enum-ciphers:
|   SSLv3:
|     ciphers:
|       TLS_RSA_WITH_3DES_EDE_CBC_SHA - strong
|       TLS_RSA_WITH_RC4_128_MD5 - strong
|       TLS_RSA_WITH_RC4_128_SHA - strong
|     compressors:
|       NULL
|   TLSv1.0:
|     ciphers:
|       TLS_DHE_RSA_WITH_AES_128_CBC_SHA - strong
|       TLS_DHE_RSA_WITH_AES_256_CBC_SHA - strong
|       TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA - strong
|       TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA - strong
|       TLS_RSA_WITH_3DES_EDE_CBC_SHA - strong
|       TLS_RSA_WITH_AES_128_CBC_SHA - strong

 

Reference article: https://technet.microsoft.com/en-us/library/cc759300(v=ws.10).aspx

Kerberos uses SPNs extensively. When a Kerberos client uses its TGT to request a service ticket for a specific service, the service is actually identified by its SPN. The KDC grants the client a service ticket that is encrypted in part with the shared secret of the service account, i.e. the AD account that matches the SPN (basically the account password).

In the case of a duplicate SPN, the KDC may generate a service ticket based on the shared secret of the wrong account. When the client presents that ticket to the service during authentication, the service cannot decrypt it and the authentication fails. The server will typically log an “AP Modified” (KRB_AP_ERR_MODIFIED) error and the client will see a “wrong principal” error code.

So, duplicate SPNs are very bad, much in the same way that duplicate UPNs are bad. Both can cause Kerberos authentication to break, and Windows uses Kerberos for authentication everywhere it can.

 

To identify duplicate SPNs:

you can use setspn -q <service/server:port> to find duplicate SPNs in the forest,
or setspn -L <computername> to list the SPNs associated with a computer AD object,

or look for Event ID 11, source KDC, in the Domain Controllers’ event log.
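
A convenient complement is setspn -X, which reports duplicate SPNs directly (adding -F widens the search from the domain to the whole forest); the SPN and computer name below are placeholders:

# Report duplicate SPNs (the -F switch searches the whole forest instead of the domain)
setspn -X -F

# Query for a specific SPN
setspn -Q HTTP/webserver01.mydomain.local

# List the SPNs registered on a computer account
setspn -L MEMBER01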

Technet reference:

https://technet.microsoft.com/en-us/library/jj590751.aspx

Web links:

https://blogs.technet.microsoft.com/heyscriptingguy/2013/01/15/use-powershell-to-create-multiple-dhcp-scopes-on-dhcp-servers/

https://gallery.technet.microsoft.com/Create-a-DHCP-Scope-based-3a219b3c

Example:

Remove-DhcpServerv4Scope -ScopeId 10.65.11.0 -Force

Add-DhcpServerv4Scope -Name "SERVICE-LAN-VIRTUAL" -Description "Virtual Wks Office VLAN" -StartRange 10.65.11.66 -EndRange 10.65.11.126 -SubnetMask 255.255.255.0 -LeaseDuration 7.0:0:0 -State Active -Type Dhcp
Set-DhcpServerv4OptionValue -ScopeId 10.65.11.0 -OptionId 3 -Value 10.65.11.1
Set-DhcpServerv4OptionValue -ScopeId 10.65.11.0 -OptionId 6 -Value 10.65.11.6 -Force
Set-DhcpServerv4OptionValue -ScopeId 10.65.11.0 -OptionId 44 -Value 10.65.11.7 -Force
Set-DhcpServerv4OptionValue -ScopeId 10.65.11.0 -OptionId 46 -Value "0x8" -Force

Add-DhcpServerv4Failover -ScopeId 10.65.11.0 -PartnerServer 10.65.12.12 -Name TVLAN-$3rdOctet
Get-DhcpServerv4Failover -ScopeId 10.65.11.0

Behind this catchy title is a real need. As a system administrator, it may be worthwhile to audit all of your organization’s Active Directory accounts to assess the level of security for user accounts. Let’s see how we do it!
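
As a starting point (this is only a sketch, not the full method referenced below), the Active Directory PowerShell module can flag the most obviously weak account settings:

# Quick pass over user accounts: flag settings that weaken password security
Import-Module ActiveDirectory
Get-ADUser -Filter * -Properties PasswordNeverExpires, PasswordNotRequired, LastLogonDate |
    Where-Object { $_.PasswordNeverExpires -or $_.PasswordNotRequired } |
    Select-Object SamAccountName, Enabled, PasswordNeverExpires, PasswordNotRequired, LastLogonDate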

Web resources and Methods:

How to improve Windows DNS logging (audit and analytics):

Resources:

DNS logging (audit and analytics): https://technet.microsoft.com/en-us/library/dn800669(v=ws.11).aspx

Free tool:

http://www.networkstr.com/dnscentric/dns_log_converter

Scripts to parse DNS debug logs:

https://gallery.technet.microsoft.com/scriptcenter/Get-DNSDebugLog-Easy-ef048bdf

http://www.adamtheautomator.com/microsoft-dns-debug-log-script/

http://poshcode.org/4509

http://xianclasen.blogspot.fr/2016/01/dns-logging-in-windows-with-powershell.html

Introduction

Anyone who has worked with Security Information and Event Management (SIEM) or Intrusion Detection/Prevention System (IDS/IPS) alerts knows it can be very, very difficult to track down the actual source of the network traffic. Tracking the source becomes even more difficult when it comes to finding the machine that attempted resolution of a known bad or suspicious domain.

The Cause: DNS architecture

The Domain Name System (DNS) architecture is the primary reason why it can be so hard to find the machine attempting to resolve a bad or suspicious domain. Most organizations are Microsoft based and rely on their domain controllers to perform DNS resolution. These domain controllers are configured to act as recursive resolvers, meaning they perform resolution on behalf of the client. Because of this, when you get a SIEM or IDS/IPS alert, the source IP address will generally belong to the domain controller. This causes problems, as it usually is not the domain controller that is infected, but some client behind it. Even if you are not a Microsoft based organization, there is usually some form of a recursive resolver in place, which is at a lower level in the network than the edge IDS/IPS that detects the activity.

Why do most organizations use a recursive resolver? The answer is simple: so the internet does not see all the internal client addresses as they resolve domains. Letting clients resolve directly through NAT on your firewall would also limit what the world is able to see, but it makes managing the firewall rules for DNS more difficult and causes more network traffic, since the recursive servers are generally caching as well. In addition, a recursive resolver puts more control around the name resolution service, which helps spot anomalies and control behavior.

The Solution: Implement DNS logging or a network architecture approach

Performance considerations

DNS server performance can be affected when additional logging is enabled; however, the enhanced DNS logging and diagnostics feature in Windows Server 2012 R2 and Windows Server 2016 Technical Preview is designed to have a very low impact on performance. The following sections discuss DNS server performance considerations when additional logging is enabled.

 

Prior to the introduction of DNS analytic logs, DNS debug logging was an available method to monitor DNS transactions. DNS debug logging is not the same as the enhanced DNS logging and diagnostics feature discussed in this topic. Debug logging is discussed here because it is also a tool that is available for DNS logging and diagnostics. See Using server debugging logging options for more information about DNS debug logging. The DNS debug log provides extremely detailed data about all DNS information that is sent and received by the DNS server, similar to the data that can be gathered using packet capture tools such as network monitor. Debug logging can affect overall server performance and also consumes disk space, therefore it is recommended to enable debug logging only temporarily when detailed DNS transaction information is needed.

 

Enhanced DNS logging and diagnostics in Windows Server 2012 R2 and later includes DNS Audit events and DNS Analytic events. DNS audit logs are enabled by default, and do not significantly affect DNS server performance. DNS analytical logs are not enabled by default, and typically will only affect DNS server performance at very high DNS query rates. For example, a DNS server running on modern hardware that is receiving 100,000 queries per second (QPS) can experience a performance degradation of 5% when analytic logs are enabled. There is no apparent performance impact for query rates of 50,000 QPS and lower. However, it is always advisable to monitor DNS server performance whenever additional logging is enabled.
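
The analytic channel can be enabled from Event Viewer (Applications and Services Logs > Microsoft > Windows > DNS-Server) or, assuming the default channel name, from an elevated command prompt:

# Enable the DNS analytic event channel (it is disabled by default)
wevtutil sl "Microsoft-Windows-DNSServer/Analytical" /e:true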

 

DNS Server logging must be enabled to record events from all DNS server functions

Log on to the DNS server using the Domain Admin or Enterprise Admin account.

Press Windows Key + R, execute dnsmgmt.msc.

Right-click the DNS server, select Properties.

Click on the Event Logging tab. By default, all events are logged.

Verify “Errors and warnings” or “All events” is selected.

If any option other than “Errors and warnings” or “All events” is selected, this is a finding.

Log on to the DNS server using the Domain Admin or Enterprise Admin account.

Open an elevated Windows PowerShell prompt on the DNS server to which event logging needs to be enabled.

Use the Get-DnsServerDiagnostics cmdlet to view the status of individual diagnostic events.

All diagnostic events should be set to “True”.

If all diagnostic events are not set to “True”, this is a finding.

Use the Set-DnsServerDiagnostics cmdlet to enable all diagnostic events at once.

Set-DnsServerDiagnostics -All $true

Also enable debug log rollover:

Set-DnsServerDiagnostics -EnableLogFileRollover $true
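
Afterwards, it is worth confirming that everything is actually enabled:

# Display the resulting diagnostics configuration for review
Get-DnsServerDiagnostics | Format-List *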