SRV record for NTP in Active Directory: https://www.myotherpcisacloud.com/post/SRV-Record-for-NTP-In-MY-Active-Directory

 


Microsoft FastTrack use cases: productivity library

Office 365 deployment advisors

 

What’s new in ADFS 2016?

https://technet.microsoft.com/en-us/windows-server-docs/identity/ad-fs/overview/whats-new-active-directory-federation-services-windows-server-2016?f=255&MSPPError=-2147217396

You can upgrade an AD FS 2012 R2 farm using the “mixed farm” process described here. It works for WID or SQL farms, though the document shows only the WID scenario.
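
In rough outline, the mixed-farm upgrade works like this (a sketch of the documented process, not a replacement for it): join Windows Server 2016 nodes to the existing 2012 R2 farm, promote a 2016 node to primary (for WID farms), remove the 2012 R2 nodes, then raise the farm behavior level:

Set-AdfsSyncProperties -Role PrimaryComputer     (run on the 2016 node to promote it, WID farms)
Get-AdfsFarmInformation                          (shows the current farm behavior level)
Invoke-AdfsFarmBehaviorLevelRaise                (raises the FBL once every node runs Server 2016)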

 

For reference, here is the link to Mozilla’s official best practices for enterprise deployment:

https://developer.mozilla.org/en-US/Firefox/Enterprise_deployment

GPO settings for Firefox:

https://sourceforge.net/projects/firefoxadmx/

https://support.olfeo.com/kb/article/2121

https://www.websense.com/content/support/library/web/hosted/getting_started/apply_policy.aspx

 

 

Technet article: https://technet.microsoft.com/en-us/library/cc978014.aspx

Explanation:

“ When a requested object exists in the directory but is not present on the contacted domain controller, name resolution depends on that domain controller’s knowledge of how the directory is partitioned. In a partitioned directory, by definition, the entire directory is not always available on any one domain controller.

An LDAP referral is a domain controller’s way of indicating to a client application that it does not have a copy of a requested object (or, more precisely, that it does not hold the section of the directory tree where that object would be, if in fact it exists) and giving the client a location that is more likely to hold the object, which the client uses as the basis for a DNS search for a domain controller. Ideally, referrals always reference a domain controller that indeed holds the object. However, it is possible for the referred-to domain controller to generate yet another referral, although it usually does not take long to discover that the object does not exist and to inform the client. Active Directory returns referrals in accordance with RFC 2251. ”

Atlassian KB article: https://confluence.atlassian.com/confkb/user-lookups-fail-with-partialresultexceptions-due-to-active-directory-follow-referrals-configuration-612959323.html
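
The Atlassian failures above come down to how the client handles these referrals. As an illustration, here is a minimal sketch using the third-party ldap3 Python library (server name, credentials, and base DN are placeholders); auto_referrals=False tells the client to hand back the referral URIs instead of chasing them, and binding to the Global Catalog port (3268) avoids cross-domain referrals altogether:

from ldap3 import Server, Connection, SUBTREE

server = Server('dc01.mydomain.local', port=389)   # 3268 = Global Catalog
conn = Connection(server,
                  user='MYDOMAIN\\svc_ldap', password='...',
                  auto_referrals=False)            # do not chase referrals
conn.bind()
conn.search('dc=mydomain,dc=local', '(sAMAccountName=jdoe)',
            search_scope=SUBTREE, attributes=['cn'])
print(conn.entries)   # objects this DC actually holds a copy of
print(conn.result)    # search status; referral URIs, if any, show up here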

 

 

Microsoft’s file systems organize storage devices based on cluster size. Also known as the allocation unit size, cluster size represents the smallest amount of disk space that can be allocated to hold a file. Because ReFS and NTFS don’t reference files at a byte granularity, the cluster size is the smallest unit of size that each file system can reference when accessing storage. Both ReFS and NTFS support multiple cluster sizes, as different sized clusters can offer different performance benefits, depending on the deployment.

Full article from MS: https://blogs.technet.microsoft.com/filecab/2017/01/13/cluster-size-recommendations-for-refs-and-ntfs/

Summary:

ReFS cluster sizes

ReFS offers both 4K and 64K clusters. 4K is the default cluster size for ReFS, and we recommend using 4K cluster sizes for most ReFS deployments because it helps reduce costly IO amplification:

  • In general, if the cluster size exceeds the size of the IO, certain workflows can trigger unintended IOs to occur. Consider the following scenarios where a ReFS volume is formatted with 64K clusters:
    • Consider a tiered volume. If a 4K write is made to a range currently in the capacity tier, ReFS must read the entire cluster from the capacity tier into the performance tier before making the write. Because the cluster size is the smallest granularity that the file system can use, ReFS must read the entire cluster, which includes an unmodified 60K region, to be able to complete the 4K write.
    • If a cluster is shared by multiple regions after a block cloning operation occurs, ReFS must copy the entire cluster to maintain isolation between the two regions. So if a 4K write is made to this shared cluster, ReFS must copy the unmodified 60K cluster before making the write.
    • Consider a deployment that enables integrity streams. A sub-cluster granularity write will cause the entire cluster to be re-allocated and re-written, and the new checksum must be computed. This represents additional IO that ReFS must perform before completing the new write, which introduces a larger latency factor to the IO operation.
  • By choosing 4K clusters instead of 64K clusters, one can reduce the number of IOs that occur that are smaller than the cluster size, preventing costly IO amplifications from occurring as frequently.

Additionally, 4K cluster sizes offer greater compatibility with Hyper-V IO granularity, so we strongly recommend using 4K cluster sizes with Hyper-V on ReFS.  64K clusters are applicable when working with large, sequential IO, but otherwise, 4K should be the default cluster size.
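
To make the IO amplification concrete, here is some back-of-the-envelope arithmetic in Python (illustrative only, not a file-system simulation): a write smaller than the cluster forces a read-modify-write of the whole cluster.

def write_amplification(io_kb, cluster_kb):
    # the file system cannot touch less than one cluster
    touched_kb = max(io_kb, cluster_kb)
    return touched_kb / io_kb

for cluster_kb in (4, 64):
    factor = write_amplification(4, cluster_kb)
    print(f'4K write on {cluster_kb}K clusters -> {factor:.0f}x amplification')

# 4K write on 4K clusters  -> 1x amplification
# 4K write on 64K clusters -> 16x amplification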

NTFS cluster sizes

NTFS offers cluster sizes from 512 to 64K, but in general, we recommend a 4K cluster size on NTFS, as 4K clusters help minimize wasted space when storing small files. We also strongly discourage the usage of cluster sizes smaller than 4K. There are two cases, however, where 64K clusters could be appropriate:

  • 4K clusters limit the maximum volume and file size to 16TB.
    • 64K cluster sizes offer increased volume and file capacity, which is relevant if you’re hosting a large deployment on your NTFS volume, such as VHDs or a SQL deployment.
  • NTFS has a fragmentation limit, and larger cluster sizes can help reduce the likelihood of reaching this limit
    • Because NTFS is backward compatible, it must use internal structures that weren’t optimized for modern storage demands. Thus, the metadata in NTFS prevents any file from having more than ~1.5 million extents.
      • One can, however, use the “format /L” option to increase the fragmentation limit to ~6 million.
    • 64K cluster deployments are less susceptible to this fragmentation limit, so 64K clusters are a better option if the NTFS fragmentation limit is an issue. (Data deduplication, sparse files, and SQL deployments can cause a high degree of fragmentation.)
      • Unfortunately, NTFS compression only works with 4K clusters, so using 64K clusters isn’t suitable when using NTFS compression. Consider increasing the fragmentation limit instead, as described in the previous bullets.

While a 4K cluster size is the default setting for NTFS, there are many scenarios where 64K cluster sizes make sense, such as: Hyper-V, SQL, deduplication, or when most of the files on a volume are large.
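
The 16TB ceiling quoted above falls out of NTFS’s 32-bit cluster addressing: a volume can hold at most about 2^32 clusters, so the maximum volume size scales directly with cluster size. A quick sanity check in Python:

MAX_CLUSTERS = 2**32   # NTFS addresses clusters with 32 bits

for cluster_bytes in (4 * 1024, 64 * 1024):
    max_tib = MAX_CLUSTERS * cluster_bytes / 2**40
    print(f'{cluster_bytes // 1024}K clusters -> ~{max_tib:.0f} TiB max volume')

# 4K clusters  -> ~16 TiB max volume
# 64K clusters -> ~256 TiB max volume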

Some interesting sites:

 

Reference articles to secure a Windows domain:

https://github.com/PaulSec/awesome-windows-domain-hardening


On ADsecurity.org:

Securing Domain Controllers to Improve Active Directory Security

 

Download sysmon:

NEW: Sysmon 6.0 is available: https://technet.microsoft.com/en-us/sysinternals/sysmon

Installation and usage:
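
Typical command lines, for reference (the config file name is a placeholder; the SwiftOnSecurity template linked below is a common starting point):

sysmon -accepteula -i sysmonconfig-export.xml     (install with a config file)
sysmon -c sysmonconfig-export.xml                 (update the config on an existing install)
sysmon -u                                         (uninstall)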

List of web resources concerning Sysmon: https://github.com/MHaggis/sysmon-dfir

Mark Russinovich’s RSA Conference presentation: https://onedrive.live.com/view.aspx?resid=D026B4699190F1E6!2843&ithint=file%2cpptx&app=PowerPoint&authkey=!AMvCRTKB_V1J5ow

Sysmon config files explained:

https://github.com/SwiftOnSecurity/sysmon-config

https://github.com/ion-storm/sysmon-config/blob/master/sysmonconfig-export.xml

https://www.bsk-consulting.de/2015/02/04/sysmon-example-config-xml/


Other install guides:

Sysinternals Sysmon unleashed

http://www.darkoperator.com/blog/2014/8/8/sysinternals-sysmon

 

Detecting APT with Sysmon:

https://www.rsaconference.com/writable/presentations/file_upload/hta-w05-tracking_hackers_on_your_network_with_sysinternals_sysmon.pdf

 

https://www.root9b.com/sites/default/files/whitepapers/R9B_blog_005_whitepaper_01.pdf

Sysmon with Splunk:

http://blogs.splunk.com/2014/11/24/monitoring-network-traffic-with-sysmon-and-splunk/

https://securitylogs.org/tag/sysmon/

Sysmon log analyzer/parsing sysmon event log:

https://github.com/CrowdStrike/Forensics/blob/master/sysmon_parse.cmd

https://digital-forensics.sans.org/blog/2014/08/12/sysmon-in-malware-analysis-lab

https://github.com/JamesHabben/sysmon-queries

http://blog.crowdstrike.com/sysmon-2/

Log Parser: http://www.microsoft.com/en-us/download/confirmation.aspx?id=24659

Log Parser GUI (Log Parser Lizard): http://lizard-labs.com/log_parser_lizard.aspx

Web article:

https://technet.microsoft.com/en-us/library/cc784450(v=ws.10).aspx

 

How to test SSL/TLS:

You can easily see which SSL/TLS protocol versions a server supports (and even grab the certificate in the process), for example with OpenSSL:

openssl s_client -connect myserver.mydomain.local:636 -ssl3
openssl s_client -connect myserver.mydomain.local:636 -tls1
openssl s_client -connect myserver.mydomain.local:636 -tls1_1
openssl s_client -connect myserver.mydomain.local:636 -tls1_2

All of these report a successful SSL handshake and present the proper server certificate.

In any case, it is easy for a client to discover the SSL protocols a remote server offers; that is exactly how the client <==> server handshake agrees on a protocol supported on both sides.

I suggest you check on the application side …
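
If you would rather script the check than eyeball openssl output, here is a minimal sketch using Python’s standard ssl module (3.7+). The host and port are the same placeholders as above; certificate validation is deliberately disabled because we are probing protocol support, not trust, and very old protocols (SSLv3) may be compiled out of your local OpenSSL entirely:

import socket, ssl

HOST, PORT = 'myserver.mydomain.local', 636

for version in (ssl.TLSVersion.TLSv1, ssl.TLSVersion.TLSv1_1,
                ssl.TLSVersion.TLSv1_2):
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE                       # probing, not validating
    ctx.minimum_version = ctx.maximum_version = version   # pin one version
    try:
        with socket.create_connection((HOST, PORT), timeout=5) as sock:
            with ctx.wrap_socket(sock) as tls:
                print(version.name, 'OK,', tls.version())
    except (ssl.SSLError, OSError) as err:
        print(version.name, 'failed:', err)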

# nmap --script ssl-enum-ciphers -p 636 myserver.mydomain.local

Starting Nmap 6.46 ( http://nmap.org ) at 2017-02-16 18:22 CET
Nmap scan report for myserver.mydomain.local (172.19.133.64)
Host is up (0.025s latency).
PORT STATE SERVICE
636/tcp open ldapssl
| ssl-enum-ciphers:
| SSLv3:
| ciphers:
| TLS_RSA_WITH_3DES_EDE_CBC_SHA – strong
| TLS_RSA_WITH_RC4_128_MD5 – strong
| TLS_RSA_WITH_RC4_128_SHA – strong
| compressors:
| NULL
| TLSv1.0:
| ciphers:
| TLS_DHE_RSA_WITH_AES_128_CBC_SHA – strong
| TLS_DHE_RSA_WITH_AES_256_CBC_SHA – strong
| TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA – strong
| TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA – strong
| TLS_RSA_WITH_3DES_EDE_CBC_SHA – strong
| TLS_RSA_WITH_AES_128_CBC_SHA – strong