Tag Archive: ETL

Netsh command reference:




Using Netsh to redirect a port to another computer:


How to create a wifi hotspot with netsh:


To check SSL cert:

netsh http show sslcert


Using netsh with DHCP:


Using netsh to capture traffic:


Capture a NETSH network trace

a) Open an elevated command prompt and run: "netsh trace start persistent=yes capture=yes tracefile=c:\temp\nettrace-boot.etl" (make sure you have a \temp directory or choose another location). Because persistent=yes, the trace survives a reboot, which is what makes it useful for boot-time issues.

b) Reproduce the issue (rebooting if necessary), log on, and stop the trace from an elevated prompt with: "netsh trace stop".

c) Open the .etl file with Network Monitor or Message Analyzer (both can open .etl files directly) and save it as .cap if you prefer to analyze it in detail with Wireshark: https://blogs.msdn.microsoft.com/benjaminperkins/2018/03/09/analyze-netsh-traces-with-wireshark-or-network-monitor/
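The steps above can be condensed into the following command sequence; this is a sketch for an elevated Windows prompt, and the tracefile path is just an example:

```shell
:: Start a persistent packet capture (persistent=yes means it survives a reboot)
netsh trace start persistent=yes capture=yes tracefile=c:\temp\nettrace-boot.etl

:: ...reproduce the issue here, rebooting if necessary...

:: Stop the capture and flush the .etl file (this can take a minute)
netsh trace stop

:: If you are unsure whether a trace is still running, check the status
netsh trace show status
```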




How to improve Windows DNS logging (audit and analytics):


DNS logging (audit and analytics): https://technet.microsoft.com/en-us/library/dn800669(v=ws.11).aspx

Free tool:


Scripts to parse DNS debug logs:
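As a starting point before reaching for a full script, a short PowerShell sketch can pull the queried names out of a DNS debug log. The log path and the match pattern below are assumptions based on a common debug log location and the default line layout; adjust both for your server:

```shell
# Hypothetical sketch: list unique queried names from a DNS debug log.
# Assumes the default debug log path and line format; the queried name is
# typically the last whitespace-separated field on a received-query line.
Select-String -Path C:\Windows\System32\dns\dns.log -Pattern 'Rcv .* Q \[' |
    ForEach-Object { ($_.Line -split '\s+')[-1] } |
    Sort-Object -Unique
```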






Anyone who has worked with Security Incident and Event Management (SIEM) or Intrusion Detection/Prevention System (IDS/IPS) alerts knows it can be very, very difficult to track down the actual source of the network traffic. Tracking the source becomes even more difficult when it comes to finding the machine that attempted resolution of a known bad or suspicious domain.

The Cause: DNS architecture

The Domain Name System (DNS) architecture is the primary reason why it can be so hard to find the machine attempting to resolve a bad or suspicious domain. Most organizations are Microsoft-based and rely on their domain controllers to perform DNS resolution. These domain controllers are configured to act as recursive resolvers, meaning they perform resolution on behalf of the client. Because of this, when you get a SIEM or IDS/IPS alert, the source IP address will generally belong to the domain controller. This causes problems, as it is usually not the domain controller that is infected, but some client behind it. Even if you are not a Microsoft-based organization, there is usually some form of recursive resolver in place, sitting at a lower level in the network than the edge IDS/IPS that detects the activity.

Why do most organizations use a recursive resolver? The answer is simple: so the internet does not see all the internal client addresses as they resolve domains. If you use NAT on your firewall, it limits what the world is able to see, but it makes managing the firewall rules for DNS more difficult. A recursive resolver also reduces network traffic, since these servers generally cache responses. In addition, it puts more control around the name resolution service, which helps spot anomalies and control behavior.

The Solution: Implement DNS logging or change the network architecture

Performance considerations

DNS server performance can be affected when additional logging is enabled; however, the enhanced DNS logging and diagnostics feature in Windows Server 2012 R2 and Windows Server 2016 Technical Preview is designed to have a very low impact on performance. The following sections discuss DNS server performance considerations when additional logging is enabled.


Prior to the introduction of DNS analytic logs, DNS debug logging was the available method for monitoring DNS transactions. DNS debug logging is not the same as the enhanced DNS logging and diagnostics feature discussed in this topic, but it is covered here because it is also a tool available for DNS logging and diagnostics; see Using server debugging logging options for more information. The DNS debug log provides extremely detailed data about all DNS information sent and received by the DNS server, similar to what can be gathered with packet capture tools such as Network Monitor. Debug logging can affect overall server performance and also consumes disk space, so it is recommended to enable it only temporarily, when detailed DNS transaction information is needed.


Enhanced DNS logging and diagnostics in Windows Server 2012 R2 and later includes DNS audit events and DNS analytic events. DNS audit logs are enabled by default and do not significantly affect DNS server performance. DNS analytic logs are not enabled by default and typically only affect DNS server performance at very high query rates. For example, a DNS server running on modern hardware that is receiving 100,000 queries per second (QPS) can experience a performance degradation of 5% when analytic logs are enabled. There is no apparent performance impact for query rates of 50,000 QPS and lower. Even so, it is always advisable to monitor DNS server performance whenever additional logging is enabled.
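Because the analytic log is off by default, it has to be enabled explicitly. As a sketch, this can be done from an elevated prompt with wevtutil; the channel name below is the standard one for the DNS Server role, but verify it on your own server first:

```shell
:: Confirm the exact channel name on this server
wevtutil el | findstr DNSServer

:: Enable the DNS analytic log
wevtutil sl "Microsoft-Windows-DNSServer/Analytical" /e:true

:: Disable it again later to remove the logging overhead
wevtutil sl "Microsoft-Windows-DNSServer/Analytical" /e:false
```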


DNS Server logging must be enabled to record events from all DNS server functions

Log on to the DNS server using the Domain Admin or Enterprise Admin account.

Press Windows Key + R and run dnsmgmt.msc.

Right-click the DNS server, select Properties.

Click on the Event Logging tab. By default, all events are logged.

Verify “Errors and warnings” or “All events” is selected.

If any option other than “Errors and warnings” or “All events” is selected, this is a finding.

Log on to the DNS server using the Domain Admin or Enterprise Admin account.

Open an elevated Windows PowerShell prompt on the DNS server on which event logging needs to be enabled.

Use the Get-DnsServerDiagnostics cmdlet to view the status of individual diagnostic events.

All diagnostic events should be set to “True”.

If all diagnostic events are not set to “True”, this is a finding.

Use the Set-DnsServerDiagnostics cmdlet to enable all diagnostic events at once.

Set-DnsServerDiagnostics -All $true

Also enable debug log rollover.

Set-DnsServerDiagnostics -EnableLogFileRollover $true
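Putting the check and the fix together, a short PowerShell sketch (run elevated on the DNS server):

```shell
# Review which diagnostic events are currently enabled
Get-DnsServerDiagnostics

# Enable all diagnostic events at once
Set-DnsServerDiagnostics -All $true

# Enable debug log rollover so the log does not fill the disk
Set-DnsServerDiagnostics -EnableLogFileRollover $true

# Verify everything now reports True
Get-DnsServerDiagnostics
```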

Web resources: