Archive for February, 2017


Microsoft’s file systems organize storage devices based on cluster size. Also known as the allocation unit size, cluster size represents the smallest amount of disk space that can be allocated to hold a file. Because ReFS and NTFS don’t reference files at a byte granularity, the cluster size is the smallest unit of size that each file system can reference when accessing storage. Both ReFS and NTFS support multiple cluster sizes, as different sized clusters can offer different performance benefits, depending on the deployment.

Full article from MS: https://blogs.technet.microsoft.com/filecab/2017/01/13/cluster-size-recommendations-for-refs-and-ntfs/

Summary:

ReFS cluster sizes

ReFS offers both 4K and 64K clusters. 4K is the default cluster size for ReFS, and we recommend using 4K cluster sizes for most ReFS deployments because it helps reduce costly IO amplification:

  • In general, if the cluster size exceeds the size of the IO, certain workflows can trigger unintended IOs to occur. Consider the following scenarios where a ReFS volume is formatted with 64K clusters:
    • Consider a tiered volume. If a 4K write is made to a range currently in the capacity tier, ReFS must read the entire cluster from the capacity tier into the performance tier before making the write. Because the cluster size is the smallest granularity that the file system can use, ReFS must read the entire cluster, which includes an unmodified 60K region, to be able to complete the 4K write.
    • If a cluster is shared by multiple regions after a block cloning operation, ReFS must copy the entire cluster to maintain isolation between those regions. So if a 4K write is made to a shared cluster, ReFS must copy the entire cluster, including the unmodified 60K, before making the write.
    • Consider a deployment that enables integrity streams. A sub-cluster granularity write will cause the entire cluster to be re-allocated and re-written, and the new checksum must be computed. This represents additional IO that ReFS must perform before completing the new write, which introduces a larger latency factor to the IO operation.
  • By choosing 4K clusters instead of 64K clusters, one reduces the number of IOs that are smaller than the cluster size, so these costly IO amplifications occur less frequently.

Additionally, 4K cluster sizes offer greater compatibility with Hyper-V IO granularity, so we strongly recommend using 4K cluster sizes with Hyper-V on ReFS.  64K clusters are applicable when working with large, sequential IO, but otherwise, 4K should be the default cluster size.
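Cluster size is chosen when a volume is formatted. As a minimal PowerShell sketch (the drive letters are placeholders, not from the article):

# Format a volume as ReFS with the default 4K cluster size
Format-Volume -DriveLetter E -FileSystem ReFS -AllocationUnitSize 4096

# For large, sequential IO workloads, 64K clusters can be used instead
Format-Volume -DriveLetter F -FileSystem ReFS -AllocationUnitSize 65536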

NTFS cluster sizes

NTFS offers cluster sizes from 512 bytes to 64K, but in general, we recommend a 4K cluster size on NTFS, as 4K clusters help minimize wasted space when storing small files. We also strongly discourage the use of cluster sizes smaller than 4K. There are two cases, however, where 64K clusters could be appropriate:

  • 4K clusters limit the maximum volume and file size to 16TB
    • 64K cluster sizes can offer increased volume and file capacity, which is relevant if you’re hosting a large deployment on your NTFS volume, such as VHDs or a SQL deployment.
  • NTFS has a fragmentation limit, and larger cluster sizes can help reduce the likelihood of reaching this limit
    • Because NTFS is backward compatible, it must use internal structures that weren’t optimized for modern storage demands. Thus, the metadata in NTFS prevents any file from having more than ~1.5 million extents.
      • One can, however, use the “format /L” option to increase the fragmentation limit to ~6 million.
    • 64K cluster deployments are less susceptible to this fragmentation limit, so 64K clusters are a better option if the NTFS fragmentation limit is an issue. (Data deduplication, sparse files, and SQL deployments can cause a high degree of fragmentation.)
      • Unfortunately, NTFS compression only works with 4K clusters, so 64K clusters aren’t suitable when NTFS compression is in use. Consider increasing the fragmentation limit instead, as described in the previous bullet.

While a 4K cluster size is the default setting for NTFS, there are many scenarios where 64K cluster sizes make sense, such as: Hyper-V, SQL, deduplication, or when most of the files on a volume are large.
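The same cmdlet applies to NTFS. A sketch for a 64K volume follows (drive letter assumed); as far as I know, the -UseLargeFRS switch corresponds to the “format /L” option mentioned above:

# 64K clusters with large file record segments, raising the fragmentation limit
Format-Volume -DriveLetter G -FileSystem NTFS -AllocationUnitSize 65536 -UseLargeFRS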


Some interesting sites:

Reference articles to secure a Windows domain:

https://github.com/PaulSec/awesome-windows-domain-hardening

Microsoft audit policy settings and recommendations:

https://docs.microsoft.com/en-us/windows-server/identity/ad-ds/plan/security-best-practices/audit-policy-recommendations

Sysinternals Sysmon:

https://onedrive.live.com/view.aspx?resid=D026B4699190F1E6!2843&ithint=file%2cpptx&app=PowerPoint&authkey=!AMvCRTKB_V1J5ow

On ADsecurity.org:

Beyond domain admins: https://adsecurity.org/?p=3700

Gathering AD data with PowerShell: https://adsecurity.org/?p=3719

Hardening Windows computers, security baseline checklist: https://adsecurity.org/?p=3299

Hardening Windows domains, security baseline checklist:

Securing Domain Controllers to Improve Active Directory Security

 

Download Sysmon:

NEW: Sysmon 6.10 is available: https://technet.microsoft.com/en-us/sysinternals/sysmon and how to use it:

NEW: WMI detections: https://rawsec.lu/blog/posts/2017/Sep/19/sysmon-v610-vs-wmi-persistence/

Installation and usage:

List of web resources concerning Sysmon: https://github.com/MHaggis/sysmon-dfir

Sysmon events table: https://rawsec.lu/blog/posts/2017/Sep/19/sysmon-events-table/

Mark Russinovich’s RSA Conference presentation: https://onedrive.live.com/view.aspx?resid=D026B4699190F1E6!2843&ithint=file%2cpptx&app=PowerPoint&authkey=!AMvCRTKB_V1J5ow

Sysmon config files explained:

https://github.com/SwiftOnSecurity/sysmon-config

https://github.com/ion-storm/sysmon-config/blob/master/sysmonconfig-export.xml

https://www.bsk-consulting.de/2015/02/04/sysmon-example-config-xml/


Other installation guides:

Sysinternals Sysmon unleashed

http://www.darkoperator.com/blog/2014/8/8/sysinternals-sysmon

 

Detecting APT with Sysmon:

https://www.rsaconference.com/writable/presentations/file_upload/hta-w05-tracking_hackers_on_your_network_with_sysinternals_sysmon.pdf

https://www.jpcert.or.jp/english/pub/sr/ir_research.html

https://www.root9b.com/sites/default/files/whitepapers/R9B_blog_005_whitepaper_01.pdf

Sysmon with Splunk:

http://blogs.splunk.com/2014/11/24/monitoring-network-traffic-with-sysmon-and-splunk/

https://securitylogs.org/tag/sysmon/

Sysmon log analyzers / parsing the Sysmon event log:

https://github.com/CrowdStrike/Forensics/blob/master/sysmon_parse.cmd

https://digital-forensics.sans.org/blog/2014/08/12/sysmon-in-malware-analysis-lab

https://github.com/JamesHabben/sysmon-queries

http://blog.crowdstrike.com/sysmon-2/

WEF (Windows Event Forwarding): https://docs.microsoft.com/en-us/windows/threat-protection/use-windows-event-forwarding-to-assist-in-instrusion-detection

logparser: http://www.microsoft.com/en-us/download/confirmation.aspx?id=24659

logparser GUI: http://lizard-labs.com/log_parser_lizard.aspx

Web article:

https://technet.microsoft.com/en-us/library/cc784450(v=ws.10).aspx

 

How to test SSL/TLS:

You can easily see which SSL/TLS protocols a server supports (and even grab the certificate from there); example below with OpenSSL:

openssl s_client -connect myserver.mydomain.local:636 -ssl3
openssl s_client -connect myserver.mydomain.local:636 -tls1
openssl s_client -connect myserver.mydomain.local:636 -tls1_1
openssl s_client -connect myserver.mydomain.local:636 -tls1_2

All of these report a successful SSL handshake and present the proper server certificate.

In any case, it is easy for a client to get the supported SSL protocols of a remote server; that is how the client <==> server handshake works to select an agreed protocol supported on both sides.

I suggest you also check on the application side …

# nmap --script ssl-enum-ciphers -p 636 myserver.mydomain.local

Starting Nmap 6.46 ( http://nmap.org ) at 2017-02-16 18:22 CET
Nmap scan report for myserver.mydomain.local (172.19.133.64)
Host is up (0.025s latency).
PORT    STATE SERVICE
636/tcp open  ldapssl
| ssl-enum-ciphers:
|   SSLv3:
|     ciphers:
|       TLS_RSA_WITH_3DES_EDE_CBC_SHA - strong
|       TLS_RSA_WITH_RC4_128_MD5 - strong
|       TLS_RSA_WITH_RC4_128_SHA - strong
|     compressors:
|       NULL
|   TLSv1.0:
|     ciphers:
|       TLS_DHE_RSA_WITH_AES_128_CBC_SHA - strong
|       TLS_DHE_RSA_WITH_AES_256_CBC_SHA - strong
|       TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA - strong
|       TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA - strong
|       TLS_RSA_WITH_3DES_EDE_CBC_SHA - strong
|       TLS_RSA_WITH_AES_128_CBC_SHA - strong

 

Reference article: https://technet.microsoft.com/en-us/library/cc759300(v=ws.10).aspx

Kerberos uses SPNs extensively. When a Kerberos client uses its TGT to request a service ticket for a specific service, the service is actually identified by its SPN. The KDC grants the client a service ticket that is encrypted in part with a shared secret held by the service account, i.e. the AD account that matches the SPN (basically the account password).

In the case of a duplicate SPN, the KDC may generate a service ticket based on the shared secret of the wrong account. Then, when the client presents that ticket to the service during authentication, the service cannot decrypt it and the authentication fails. The server will typically log an “AP Modified” (KRB_AP_ERR_MODIFIED) error, and the client will see a “wrong principal” error code.

So, duplicate SPNs are very bad, much in the same way that duplicate UPNs are bad. Both can break Kerberos authentication, and Windows uses Kerberos for authentication everywhere it can.

 

To identify duplicate SPNs:

setspn -q <service/server:port>    # find a duplicate of a given SPN in the forest
setspn -L <computername>           # list the SPNs associated with a computer AD object

or look for Event ID 11, source KDC, in the Domain Controllers’ event log.
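On Windows Server 2008 and later, setspn can also report duplicates directly; a quick sketch using its -X switch (with -F to widen the search):

setspn -X        # report duplicate SPNs in the current domain
setspn -X -F     # extend the duplicate search to the entire forest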

How to move Office 365 data to another Office 365 tenant?

This will include:

  • Exchange Online mailboxes
  • SharePoint Online data
  • OneDrive data

Read these articles:

https://docs.microsoft.com/en-us/azure/active-directory/connect/active-directory-aadconnect-topologies

https://gooroo.io/GoorooThink/Article/17097/Migration-between-two-Office-365-tenants/26024#.WJziLfkrKM8

Microsoft does not currently provide the possibility of transferring mailbox, SharePoint, and OneDrive data in an automated manner between Office 365 tenants.

It is not possible to connect the Office 365 servers to each other and send data from one to another. However, with the help of third-party tools like MigrationWiz from BitTitan, Quest Migration Manager from Dell, Sharegate (created especially for SharePoint), and others, you can prepare and go through the migration process.

Third-party tools can only copy the data, not synchronize it!

You need to prepare customers for the fact that when third-party tools are used to migrate Office 365 between tenants, the contents of mailboxes and SharePoint/OneDrive data are copied from one location to another. For example, if a mail message has been copied to the destination mailbox and is later removed from the source mailbox, it will still exist in the destination.

• OneDrive for Business (or OD4B)

In Office 365’s OneDrive, each user gets their own separate data space on SharePoint Online; at the time of writing this article it is 1 TB. The administrator’s problem is that, by default, this is designed as private user storage, so until permissions are changed they cannot access it and therefore cannot migrate the data. To copy it across Office 365 tenants, administrator permissions need to be added to the OneDrive sites. You can achieve this via a PowerShell script for all OneDrive sites:

$Creds = Get-Credential
Connect-SPOService -Url https://DOMAINNAME-admin.sharepoint.com -Credential $Creds
# Users of the MySite host whose login name contains the source domain
$Users = Get-SPOUser -Site https://DOMAINNAME-my.sharepoint.com/ -Limit All | Where-Object { $_.LoginName -like '*DOMAINNAME.DOMAIN*' }
# Strip the '@DOMAINNAME.DOMAIN' suffix (-replace avoids TrimEnd's character-set trimming)
$Users = $Users.LoginName | ForEach-Object { $_ -replace '@DOMAINNAME\.DOMAIN$', '' }
# Grant the admin account site collection admin rights on every personal site
$Users | ForEach-Object { Set-SPOUser -Site "https://DOMAINNAME-my.sharepoint.com/personal/$($_)_DOMAINNAME_DOMAIN/" -LoginName ADMINNAME@DOMAINNAME.onmicrosoft.com -IsSiteCollectionAdmin $true }

or only for selected OneDrives, if you prepare a list beforehand (here it is named O4bUsers.csv):

$Creds = Get-Credential
Connect-SPOService -Url https://DOMAINNAME-admin.sharepoint.com -Credential $Creds
# The CSV is expected to contain a 'url' column with the personal-site URLs
$Users = Import-Csv ./O4bUsers.csv
$Users | ForEach-Object { Set-SPOUser -Site $_.url -LoginName ADMINNAME@DOMAINNAME.onmicrosoft.com -IsSiteCollectionAdmin $true }

The second issue related to OneDrive migration is that when you move its data to a new tenant, you need to pre-provision the O4B sites first; they are not created automatically when you assign a license to an Office 365 user.

PowerShell comes to the rescue again, although a more complex script is required and you need to prepare a list of the accounts to be created beforehand; see the sketch after the link below.

You can get relevant information here:

https://technet.microsoft.com/en-us/library/dn800987.aspx
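As an illustration of the pre-provisioning step, a minimal sketch with the SharePoint Online Management Shell (the UPNs are placeholders; Request-SPOPersonalSite accepts up to 200 addresses per call, so batch larger lists):

# Connect to the destination tenant admin site first (Connect-SPOService)
$NewUsers = @('user1@newtenant.onmicrosoft.com', 'user2@newtenant.onmicrosoft.com')
# Queue creation of the personal (OneDrive) sites for these users
Request-SPOPersonalSite -UserEmails $NewUsers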

During preparation of the migration batches, be careful when entering account parameters.

OneDrive and SharePoint links change according to the domain connected to the user’s UPN; that is, when the UPN (login name) changes, the OneDrive URL will change too. For example:

https://xcompany-my.sharepoint.com/personal/jan_kowalski_xcompany_onmicrosoft_com

can change to:

https://xcompany-my.sharepoint.com/personal/jan_kowalski_xcompany_com

Due to this process, you must set the correct source and destination addresses in the migration tools. Be careful not to migrate data in the wrong direction! A helper for deriving these URLs is sketched below.
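For illustration, a small hypothetical helper (not part of any module) that derives the personal-site URL from a UPN, using the '.'/'@' to '_' encoding visible in the examples above:

# Hypothetical helper: derive a OneDrive personal-site URL from a UPN
function Get-OneDriveUrl([string]$Upn, [string]$TenantName) {
    $encoded = $Upn -replace '[.@]', '_'
    "https://$TenantName-my.sharepoint.com/personal/$encoded"
}

Get-OneDriveUrl 'jan.kowalski@xcompany.com' 'xcompany'
# -> https://xcompany-my.sharepoint.com/personal/jan_kowalski_xcompany_com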

• SharePoint world

Although it is integrated with other Office 365 services, SharePoint is a separate environment governed by its own laws. It can have its own set of users, permissions, and services that you need to keep in mind during the migration.

Microsoft has no out-of-the-box solution to transfer data, configuration, and structure between Office 365 tenants; however, there are third-party tools which can help you with the migration.

The complexity of all the SharePoint features means that there is no application that can mirror every environment in its new place. Every tool on the market has some limitations. You need to check the documentation for the list of functions and properties that can be included in the migration, and for the exceptions that simply cannot be migrated.

Even if you choose the best tool, it is useful to have a SharePoint specialist on board during the planning phase, the migration itself, and just after it, to help solve emerging problems.

• Switching domains: downtime in mail delivery

To move organization data from one Office 365 tenancy to the other, one of the steps you need to perform is a domain migration. You cannot assign the same domain to two Office 365 tenants: you need to remove it from the existing one first, then add and verify it in the second. This is only possible once you remove any domain aliases assigned to Office 365 objects (mailboxes, groups, contacts). This step is critical, because removing the domain will stop mail flow directed to it.
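Before removing the domain, it is worth confirming that no object still references it; a minimal sketch with the MSOnline module (the domain name is a placeholder):

# List users whose UPN still uses the domain about to be removed
Connect-MsolService
Get-MsolUser -All | Where-Object { $_.UserPrincipalName -like '*@xcompany.com' } | Select-Object UserPrincipalName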

If you do not use additional servers that can take over the mail traffic for the duration of the domain switch (switching consists of removing, adding, and verifying the domain, changing MX records, and waiting for DNS replication), for example a hybrid on-premises Exchange server or a Linux server, there will be downtime in the mail service for the selected domain.

You have to take this into account during the planning stage when preparing the migration steps for the customer.

It is even more important when there are many, sometimes several hundred, SMTP (mail) domains to migrate between the Office 365 tenants, while you can only get verification codes for up to 50 domains at a time.

You may also encounter unplanned obstacles, for example in the form of damaged objects; in my case, I could not remove aliases and needed to remove the object completely.

Some migration solutions:

For SharePoint:

Read the article: https://collab365.community/forum/topics/office365-content-copy/

https://en.share-gate.com/

http://dms-shuttle.com/downloads/

For OneDrive: https://documents.software.dell.com/migration-suite-for-sharepoint/4.8/user-guide/migrating-to-one-drive-for-business/migrating-one-drive-for-business-to-onedrive-for-business

All-in-one suites:

https://www.bittitan.com/products/migrationwiz/overview

https://www.cloudiway.com/solutions/migration-between-office-365-tenants/

https://www.avepoint.com/products/office-365-services/office-365-management/

 

Tips:

Office 365 objects:

How do you recognize objects created directly in Office 365? You can check the date of their synchronization with AD. Cloud users, groups, and contacts which have never been synchronized with the directory service will have the LastDirSyncTime parameter set to null ($null).

Powershell commands to check cloud identities:

Get-MsolUser -All | Where-Object { $_.LastDirSyncTime -eq $null }

Get-MsolGroup -All | Where-Object { $_.LastDirSyncTime -eq $null }

Get-MsolContact -All | Where-Object { $_.LastDirSyncTime -eq $null }

 

Certificate expiration alerting: the administrator receives email notifications that identify which certificates are set to expire on the specified day.

https://blogs.technet.microsoft.com/nexthop/2011/11/17/certificate-expiration-alerting/

https://www.corelan.be/index.php/2009/04/10/free-tool-windows-2008-certificate-authority-certificate-list-utility-for-pending-requests-and-about-to-expire-certificates/

https://www.shellandco.net/monitor-certificate-expiration/

https://gallery.technet.microsoft.com/scriptcenter/Monitor-certificate-9d7a2141
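As a quick local complement to the scripts above, a minimal PowerShell sketch listing machine certificates that expire soon (the 30-day window is an arbitrary choice):

# Machine certificates that expire within the next 30 days
Get-ChildItem Cert:\LocalMachine\My | Where-Object { $_.NotAfter -le (Get-Date).AddDays(30) } | Select-Object Subject, Thumbprint, NotAfter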

 

SSL certificate checker:

https://www.quora.com/What-is-the-best-tool-to-automatically-inspect-expiration-dates-for-SSL-certificates-and-alert-you-before-they-expire

 

Third-party:

https://www.keyon.ch/en/Produkte-Loesungen/Microsoft-PKI/index.php (true-Xtender certificate expiration service)

http://www.venafi.com

 

On the Office 365 landing page:

How to Disable OneDrive for Business in Office 365

On the Office 2016 package, by GPO:

https://support.microsoft.com/en-us/help/3117548/how-to-block-onedrive-use-from-within-office-365-proplus-and-office-2016-applications