Channel: Active Directory – Jacques Dalbera’s IT world

Event Logging policy settings in Windows Server/Computer


Choosing a Hash and Encryption Algorithm for a new PKI?


Reference: http://blogs.technet.com/b/askpfeplat/archive/2013/04/22/choosing-a-hash-and-encryption-algorithm-for-a-new-pki.aspx

“If you absolutely must support legacy applications that don’t understand CNG algorithms, and are building out a new public key infrastructure, my advice today is to build two hierarchies. The first hierarchy – a legacy hierarchy if you will – would have a lower key lifetime aimed at a documented point at which legacy applications and devices MUST support CNG algorithms. You could issue certificates based on this “lower assurance” hierarchy for a limited time only to legacy clients, perhaps with limited EKUs and a specific Certificate Policy attached to it. The second PKI would be erected with more current algorithms and key lengths to support more current clients and with much longer expiry periods. When building that PKI, you could follow the stronger guidance put forth in the Federal CP and choose SHA-256 or SHA-384 along with RSA keys of 4096 bits or ECC keys of 256 or 384 bits. I agree that this adds complexity, but I find in the IT industry that we’re constantly dragging older applications and devices into a new security world – often, kicking and screaming the entire way.”


Troubleshooting Slow Logons via PowerShell


Reference and script:

http://blogs.citrix.com/2015/08/05/troubleshooting-slow-logons-via-powershell/

Analyze GPO extensions load time: http://www.controlup.com/script-library/Analyze-GPO-Extensions-Load-Time/ee682d01-81c4-4495-85a7-4c03c88d7263/

Other reference on the Windows logon process: http://fr.slideshare.net/ControlUp/understanding-troubleshooting-the-windows-logon-process

Logon Phases

The following table summarizes the logon phases the script covers and the Windows events used for calculating the start and end time for each phase:

Network Providers
Description: A Network Provider is a DLL that is responsible for a certain type of connection protocol. On each logon, Winlogon notifies these Network Providers so they can collect credentials and authenticate the user for their network. Citrix PnSson is a common network provider found on XenApp and XenDesktop VMs.
Start event: Security log, Event ID 4688 (mpnotify.exe start)
End event: Security log, Event ID 4689 (mpnotify.exe end)

Citrix Profile Management
Description: During logon, Citrix UPM copies the user’s registry entries and files from the user store to the local profile folder. If a local profile cache exists, the two sets are synchronized.
Start event: Application log, Event ID 10 (“User X path to the user store is…”)
End event: User Profile Service log, Event ID 1 (“Received user logon notification on session X.”)

User Profile
Description: During logon, the system loads the user’s profile, and then other system components configure the user’s environment according to the information in the profile.
Start event: User Profile Service log, Event ID 1 (“Received user logon notification on session X.”)
End event: User Profile Service log, Event ID 2 (“Finished processing user logon notification on session X.”)

Group Policy (see also a detailed Group Policy load time script)
Description: Enforces the domain policy and settings for the user session; in the case of synchronous processing, the user will not see their desktop at logon until user GP processing is completed.
Start event: GroupPolicy log, Event ID 4001 (“Starting user logon Policy processing for X.”)
End event: GroupPolicy log, Event ID 8001 (“Completed user logon policy processing for X.”)

GP Scripts
Description: Runs the logon scripts configured in the relevant GPOs; in the case of synchronous logon scripts, Windows Explorer does not start until the logon scripts have finished running.
Start event: GroupPolicy log, Event ID 4018 (“Starting Logon script for X.”)
End event: GroupPolicy log, Event ID 5018 (“Completed Logon script for X.”)

Pre-Shell (Userinit)
Description: Winlogon runs Userinit.exe, which runs logon scripts, reestablishes network connections, and then starts Explorer.exe, the Windows user interface. On RDSH sessions, Userinit.exe also executes the AppSetup entries such as cmstart.exe, which in turn calls wfshell.exe.
Start event: Security log, Event ID 4688 (userinit.exe start)
End event: Security log, Event ID 4688 (explorer.exe start for desktop sessions; icast.exe start for published apps)

Shell (only available when running the script via ControlUp)
Description: The interval between the beginning of desktop initialization and the time the desktop became available to the user; also includes the Active Setup phase.
Start event: Security log, Event ID 4688 (explorer.exe start)
End event: ControlUp argument “Desktop Load Time”
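The start/end event pairing above lends itself to a simple duration calculation. Below is a minimal, illustrative Python sketch (not the referenced PowerShell script); the event records, field names, and timestamps are hypothetical, standing in for what a real collector would pull from the Windows event logs:

```python
from datetime import datetime

# Hypothetical event records; a real implementation would read these
# from the Security log (e.g. via Get-WinEvent in the referenced script).
events = [
    {"log": "Security", "id": 4688, "process": "userinit.exe",
     "time": datetime(2015, 8, 5, 9, 0, 12)},
    {"log": "Security", "id": 4688, "process": "explorer.exe",
     "time": datetime(2015, 8, 5, 9, 0, 19)},
]

def phase_duration(events, start, end):
    """Seconds between the first event matching `start` and the first
    event matching `end`; each matcher is a dict of field -> value."""
    def first(match):
        return next(e["time"] for e in events
                    if all(e.get(k) == v for k, v in match.items()))
    return (first(end) - first(start)).total_seconds()

# Pre-Shell (Userinit) phase: userinit.exe start -> explorer.exe start.
pre_shell = phase_duration(
    events,
    start={"id": 4688, "process": "userinit.exe"},
    end={"id": 4688, "process": "explorer.exe"},
)
print(pre_shell)  # 7.0
```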

Multiple PKI (AD CS) on a same forest?


Can an old PKI hierarchy and a new PKI coexist in the same forest?

“Yes, you can have multiple root CAs and even multiple PKIs in a single Active Directory forest. Because of the way the objects representing those CAs are named and stored, you couldn’t possibly experience a conflict unless you tried to give more than one CA the same CA name.”

http://blogs.technet.com/b/askds/archive/2010/08/23/moving-your-organization-from-a-single-microsoft-ca-to-a-microsoft-recommended-pki.aspx

Tasks to do before removing the old CA:

“The first thing you’ll want to do is prevent the old CA from issuing any new certificates. You could just uninstall it, of course, but that could cause considerable problems. What do you think would happen if that CA’s published CRL expired and it wasn’t around to publish a new one? Depending on the application using those certificates, they’d all fail to validate and become useless. Wireless clients would fail to connect, smart card users would fail to authenticate, and all sorts of other bad things would occur. The goal is to prevent any career limiting outages so you shouldn’t just uninstall that CA.”

“No, you should instead remove all the templates from the Certificate Templates folder using the Certification Authority MMC snap-in on the old CA. If an Enterprise CA isn’t configured with any templates it can’t issue any new certificates. On the other hand, it is still quite capable of refreshing its CRL, and this is exactly the behavior you want. Conversely, you’ll want to add those same templates you removed from the Old And Busted CA into the Certificate Templates folder on the New Hotness Issuing CA.

If you modify the contents of the Certificate Templates folder for a particular CA, that CA’s pKIEnrollmentService object must be updated in Active Directory. That means that you will have some latency as the changes replicate amongst your domain controllers. It is possible that some user in an outlying site will attempt to enroll for a certificate against the Old And Busted CA and that request will fail because the Old And Busted CA knows immediately that it should not issue any certificates. Given time, though, that error condition will fade as all domain controllers get the new changes. If you’re extremely sensitive to that kind of failure, however, then just add your templates to the New Hotness Issuing CA first, wait a day (or whatever your end-to-end replication latency is) and then remove those templates from the Old And Busted CA. In the long run, it won’t matter if the Old And Busted CA issues a few last minute certificates.

At this point all certificate requests within your organization will be processed by the New Hotness Issuing CA, but what about all those certificates issued by the Old And Busted CA that are still in use? Do you have to manually go to each user and computer and request new certificates? Well…it depends on how the certificates were originally requested”.

Manually Requested

If a certificate has been manually requested then, yes, in all likelihood you’ll need to manually update those certificates. I’m referring here to those certificates requested using the Certificates MMC snap-in, or through the Web Enrollment Pages. Unfortunately, there’s no automatic management for certificates requested manually. In reality, though, refreshing these certificates probably means changing some application or service so it knows to use the new certificate. I refer here specifically to Server Authentication certificates in IIS, OCS, SCCM, etc. Not only do you need to change the certificate, but you also need to reconfigure the application so it will use the new certificate. Given this situation, it makes sense to make your necessary changes gradually. Presumably, there is already a procedure in place for updating the certificates used by these applications I mentioned, among others I didn’t, as the current certificates expire. As time passes and each of these older, expiring certificates are replaced by new certificates issued by the new CA, you will gradually wean your organization off of the Old And Busted CA and onto the New Hotness Issuing CA. Once that is complete you can safely decommission the old CA.

And it isn’t as though you don’t have a deadline. As soon as the Old And Busted CA certificate itself has expired you’ll know that any certificate ever issued by that CA has also expired. The Microsoft CA enforces such validity period nesting of certificates. Hopefully, though, that means that all those certificates have already been replaced, and you can finally decommission the old CA.

Automatically Enrolled

Certificate Autoenrollment was introduced in Windows XP, and it allows the administrator to assign certificates based on a particular template to any number of forest users or computers. Triggered by the application of Group Policy, this component can enroll for certificates and renew them when they get old. Using Autoenrollment, one can easily deploy thousands of certificates very, very quickly. Surely, then, there must be an automated way to replace all those certificates issued by the previous CA?

As a matter of fact, there is.

As described above, the new PKI is up and ready to start issuing digital certificates. The old CA is still up and running, but all the templates have been removed from the Certificate Templates folder so it is no longer issuing any certificates. But you still have literally thousands of automatically enrolled certificates outstanding that need to be replaced. What do you do?

In the Certificates Templates MMC snap-in, you’ll see a list of all the templates available in your enterprise. To force all holders of a particular certificate to automatically enroll for a replacement, all you need to do is right-click on the template and select Reenroll All Certificate Holders from the context menu.


What this actually does is increment the major version number of the certificate template in question. This change is detected by the Autoenrollment component on each Windows workstation and server, prompting them to enroll for the updated template and replacing any certificate they may already have. Automatically enrolled user certificates are updated in exactly the same fashion.
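The version-bump mechanism can be modeled in a few lines. This is an illustrative Python sketch of the comparison (template name and version numbers are hypothetical), not how Windows actually implements the check:

```python
# Published template versions as seen in AD (name/value are hypothetical).
templates = {"WorkstationAuth": 100}

def needs_reenrollment(cert, templates):
    """Simplified client-side check: re-enroll when the published template's
    major version exceeds the version recorded in the issued certificate."""
    current = templates.get(cert["template"])
    return current is not None and current > cert["major_version"]

cert = {"template": "WorkstationAuth", "major_version": 100}
before = needs_reenrollment(cert, templates)   # False: versions match

templates["WorkstationAuth"] += 1              # "Reenroll All Certificate Holders"
after = needs_reenrollment(cert, templates)    # True: major version was bumped
print(before, after)
```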

Now, how long it takes for each certificate holder to actually finish enrolling will depend on how many there are and how they connect to the network. For workstations that are connected directly to the network, user and computer certificates will be updated at the next Autoenrollment pulse.

Note: For computers, the autoenrollment pulse fires at computer startup and every eight hours thereafter. For users, the autoenrollment pulse fires at user logon and every eight hours thereafter. You can manually trigger an autoenrollment pulse by running certutil -pulse from the command line. Certutil.exe is installed with the Windows Server 2003 Administrative Tools Pack on Windows XP, but it is installed by default on the other currently supported versions of Windows.
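As a sketch of that schedule, here is a small Python example (the logon time is hypothetical) listing the pulses expected in the 24 hours following a logon:

```python
from datetime import datetime, timedelta

def pulse_times(start, horizon=timedelta(hours=24), interval=timedelta(hours=8)):
    """Autoenrollment pulses: one at logon/startup, then every 8 hours."""
    out, t = [], start
    while t <= start + horizon:
        out.append(t)
        t += interval
    return out

logon = datetime(2015, 8, 5, 9, 0)  # hypothetical logon time
pulses = pulse_times(logon)
print([p.strftime("%H:%M") for p in pulses])  # ['09:00', '17:00', '01:00', '09:00']
```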

For computers that only connect by VPN it may take longer for certificates to be updated. Unfortunately, there is no blinking light that says all the certificate holders have been reenrolled, so monitoring progress can be difficult. There are ways it could be done — monitoring the certificates issued by the CA, using a script to check workstations and servers and verify that the certificates are issued from the new CA, etc. — but they require some brain and brow work from the Administrator.

There is one requirement for this reenrollment strategy to work. In the group policy setting where you enable Autoenrollment, you must have the following option selected: Update certificates that use certificate templates.


If this policy option is not enabled then your autoenrolled certificates will not be automatically refreshed.

Remember, there are two autoenrollment policies — one for the User Configuration and one for the Computer Configuration. This option must be selected in both locations in order to allow the Administrator to force both computers and users to reenroll for an updated template.

But I Have to Get Rid of the Old CA!

As I’ve said earlier, once you’ve configured the Old And Busted CA so that it will no longer issue certificates, you shouldn’t need to touch it again until all the certificates issued by that CA have expired. As long as the CA continues to publish a revocation list, all the certificates issued by that CA will remain valid until they can be replaced. But what if you want to decommission the Old And Busted CA immediately? How could you make sure that your outstanding certificates would remain viable until you can replace them with new certificates? Well, there is a way.

All X.509 digital certificates have a validity period, a defined time interval with fixed start and end dates between which the certificate is considered valid unless it has been revoked. Once the certificate is expired there is no need to check with a certificate revocation list (CRL) — the certificate is invalid regardless of its revocation status. Revocation lists also have a validity period during which time it is considered an authoritative list of revoked certificates. Once the CRL has expired it can no longer be used to check for revocation status; a client must retrieve a new CRL.

You can use this to your advantage by extending the validity period of the Old And Busted CA’s CRL in the CA configuration to match (or exceed) the remaining lifetime of the CA certificate. For example, if the Old And Busted CA’s certificate will be valid for the next 4 years, 3 months, and 10 days, then you can set the publication interval for the CA’s CRL to 5 years and immediately publish it. The newly published CRL will remain valid for the next five years, and as long as you leave that CRL published in the defined CRL distribution points — Active Directory and/or HTTP — clients will continue to use it for checking revocation status. You no longer need the actual CA itself so you can uninstall it.
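A quick way to sanity-check the arithmetic in that example. The dates below are hypothetical, chosen to match the “4 years, 3 months, and 10 days” figure:

```python
from datetime import date

# Hypothetical dates: the old CA certificate is still valid for
# roughly 4 years, 3 months, and 10 days from "today".
today = date(2015, 8, 5)
ca_cert_expires = date(2019, 11, 15)

remaining_days = (ca_cert_expires - today).days
crl_validity_days = 5 * 365        # publish the final CRL with a 5-year validity

# The 5-year CRL outlives every certificate the old CA can have issued,
# since certificate lifetimes are nested inside the CA certificate's.
print(remaining_days, crl_validity_days > remaining_days)  # 1563 True
```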

One drawback to this, however, is that you won’t be able to easily add any certificates to the revocation list. If you need to revoke a certificate after you’ve decommissioned the CA, then you’ll need to use the command line utility certutil.exe.

Certutil.exe -resign “Old And Busted CA.crl” +<serialNumber>

Of course, this requires that you keep the private keys associated with the CA, so you’d better back up the CA’s keys before you uninstall the role.”


Pkiview – unable to download (http/ldap)

LDAP Queries with port 389 or port 3268?


Reference Article: https://technet.microsoft.com/en-us/library/cc978012.aspx

Port 3268. This port is used for queries specifically targeted for the global catalog. LDAP requests sent to port 3268 can be used to search for objects in the entire forest. However, only the attributes marked for replication to the global catalog can be returned. For example, a user’s department could not be returned using port 3268 since this attribute is not replicated to the global catalog.

Port 389. This port is used for requesting information from the local domain controller. LDAP requests sent to port 389 can be used to search for objects only within the global catalog’s home domain. However, the requesting application can obtain all of the attributes for those objects. For example, a request to port 389 could be used to obtain a user’s department.
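The difference between the two ports can be thought of as a filter over the attributes replicated to the global catalog. A minimal Python sketch, assuming an illustrative (not complete) partial attribute set and a made-up user object:

```python
# Illustrative subset of the partial attribute set (PAS): the attributes
# replicated to every global catalog. The real PAS is defined in the schema.
GC_ATTRIBUTES = {"cn", "sAMAccountName", "mail"}

# Hypothetical user object with all its attributes.
user = {"cn": "Alice", "sAMAccountName": "alice",
        "mail": "alice@contoso.com", "department": "Finance"}

def search(obj, requested, port):
    """Port 3268 (global catalog) can only return PAS attributes;
    port 389 (domain) can return every attribute of the object."""
    visible = GC_ATTRIBUTES if port == 3268 else set(obj)
    return {a: obj[a] for a in requested if a in visible and a in obj}

domain_result = search(user, ["cn", "department"], port=389)
gc_result = search(user, ["cn", "department"], port=3268)
print(domain_result)  # {'cn': 'Alice', 'department': 'Finance'}
print(gc_result)      # {'cn': 'Alice'} - department is not in the GC
```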

The Schema Manager is used to specify additional attributes (e.g., thumbnailPhoto, department…) that should be replicated to each global catalog server. The attributes included in the global catalog are consistent across all domains in the forest.

Effect of Global Catalog When Searching Back Links and Forward Links

Some Active Directory attributes cannot be located specifically by finding a row in the directory database. A back link is an attribute that can be computed only by referencing another attribute, called a forward link. An example of a back-link attribute is the memberOf attribute on a user object, which relies on the group attribute members to derive its values. For example, if you request the groups of which a specific user is a member, the forward link members, an attribute of the group object, is searched to find values that match the user name that you specified.

Because of the way that groups are enumerated by the Global Catalog, the results of a back-link search can vary, depending on whether you search the Global Catalog (port 3268) or the domain (port 389), the kind of groups the user belongs to (global groups vs. domain local groups), and whether the user belongs to groups outside the local domain. Connecting to the local domain does not locate the user’s group membership in groups outside the domain. Connecting to the Global Catalog locates the user’s membership in global groups but not in domain local groups, because local groups are not replicated to the Global Catalog.


The version store has reached its maximum size because of unresponsive transaction


This alert occurs on Windows Server 2008 R2 servers.

 

Alert: Active Directory cannot update object due to insufficient memory
Last modified by: System
Last modified time: 7/18/2013 1:02:10 PM
Alert description: Active Directory Domain Services could not update the following object in the local Active Directory Domain Services database with changes received from the following source directory service. Active Directory Domain Services does not have enough database version store to apply the changes.

User Action

Restart this directory service. If this does not solve the problem, increase the size of the database version store. If you are populating the objects with a large number of values, or the size of the values is especially large, decrease the size of future changes.

 

Additional Data

A reboot will clear the version store, but it does nothing to identify or resolve the core issue.

The version store has reached its maximum size because of an unresponsive transaction. Updates to the database are rejected until the long-running transaction is committed or rolled back. TechNet suggested looking for event IDs 1022, 1069, and 623, but none of these event IDs could be found in Event Viewer.

Resolution:

Below is the solution, but changing registry settings is at your own risk.

Backup the Registry before Proceeding

1. Update the Version Store Size (the Ops Mgr agent queue/cache DB) by using Regedit to change “Persistence Version Store Maximum” under HKLM\SYSTEM\CurrentControlSet\Services\HealthService\Parameters. The value should be 5120 (decimal), which equates to 80 MB.
2. Update the value of MaximumQueueSizeKb under HKLM\SYSTEM\CurrentControlSet\Services\HealthService\Parameters\Management Groups\<ManagementGroupName>. The value should be 102400 (decimal).
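A note on the arithmetic: 5120 (decimal) equates to 80 MB if one assumes the version store setting is expressed in 16 KB buckets (an assumption worth verifying against the product documentation for your version):

```python
# Assumption: the version store value is counted in 16 KB buckets,
# which is how 5120 (decimal) would equate to 80 MB.
buckets = 5120
bucket_kb = 16
version_store_mb = buckets * bucket_kb // 1024
print(version_store_mb)  # 80
```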

Then reboot the server.

Check Event Viewer for Event ID 1394: “All problems preventing updates to the Active Directory Domain Services database have been cleared. New updates to the Active Directory Domain Services database are succeeding. The Net Logon service has restarted.”

You can find this event in “Directory Services” Log of the Domain Controller.


AD best practices


AD design and placement best practices:

 

[Forest – General] Forest count:

·         A single forest is ideal when possible

[Forest – General] Forest trusts:

·         When your forest contains domain trees with many child domains and you observe noticeable user authentication delays between the child domains, you can optimize the user authentication process between the child domains by creating shortcut trusts to mid-level domains in the domain tree hierarchy.

[Forest – General] Forest functional level for Windows 2003 forests:

·         If all of your DCs are Windows 2003 or higher OS versions then ensure that you raise the forest functional level to 2003 (or higher). This enables the following benefits:

o   Ability to use forest trusts

o   Ability to rename domains

o   The ability to deploy a read-only domain controller (RODC)

o   Improved Knowledge Consistency Checker (KCC) algorithms and scalability

[Forest – General] Forest functional level for Windows 2008 forests:

·         If all of your DCs are Windows 2008 or higher OS versions then ensure that you raise the forest functional level to 2008 (or higher). This enables the following benefits:

o   Active Directory Recycle Bin, which provides the ability to restore deleted objects in their entirety while AD DS is running.

[Forest – FSMO] Schema Master placement:

·         Place the schema master on the PDC of the forest root domain.

[Forest – FSMO] Domain Naming Master placement:

·         Place the domain naming master on the forest root PDC.

[Domain – General] Domain count:

·         To reap the maximum benefits from Active Directory, try to minimize the number of domains in the finished forest. For most organizations an ideal design is a forest root domain and one global domain for a total of 2 domains.

[Domain – General] Domain root:

·         The best practices approach to domain design dictates that the forest root domain be dedicated exclusively to administering the forest infrastructure and be mostly empty.

[Domain – General] Domain functional level:

·         If all of your DCs are Windows 2003 or higher OS versions then ensure that you raise the domain functional level to 2003 (or higher). This enables the following benefits:

o      Renaming domain controllers

o      LastLogonTimeStamp attribute

o      Replicating group change deltas

o      Renaming domains

o      Cross forest trusts

o      Improved KCC scalability

[Domain – General] Old DC metadata:

·         In the event that a DC has to be forcibly removed (dcpromo /forceremoval), such as when it has not replicated within the tombstone lifetime (TSL), you will need to clean up the DC metadata on the central DCs. Metadata includes elements such as the computer object, NTDS Settings, FRS member object and DNS records. Use ntdsutil to perform this cleanup.

[Domain – FSMO] PDC FSMO placement:

·         Place the PDC on your best hardware in a reliable hub site that contains replica domain controllers in the same Active Directory site and domain.

[Domain – FSMO] PDC FSMO colocation:

·         PDC and RID FSMO roles should be held by the same DC.

[Domain – FSMO] RID FSMO placement:

·         Place the RID master on the domain PDC in the same domain.

[Domain – FSMO] RID FSMO in a Windows 2008 environment:

·         On 2008 R2 DCs ensure that hotfix 2618669 is applied

[Domain – FSMO] RID FSMO colocation:

·         PDC and RID FSMO roles should be held by the same DC

[Domain – FSMO] RID pool size:

·         Ensure that the RID pool is large enough to avoid possible RID depletion.

[Domain – FSMO] Infrastructure Master in a single-domain forest:

·         In a forest that contains a single Active Directory domain, there are no phantoms. Therefore, the infrastructure master has no work to do. The infrastructure master may be placed on any domain controller in the domain, regardless of whether that domain controller hosts the global catalog or not.

[Domain – FSMO] Infrastructure Master in a multiple-domain forest:

·         If every domain controller in a domain that is part of a multidomain forest also hosts the global catalog, there are no phantoms or work for the infrastructure master to do. The infrastructure master may be put on any domain controller in that domain. In practical terms, most administrators host the global catalog on every domain controller in the forest.

[Domain – FSMO] Infrastructure Master in a multiple-domain forest where not all DCs host a global catalog:

·         If every domain controller in a given domain that is located in a multidomain forest does not host the global catalog, the infrastructure master must be placed on a domain controller that does not host the global catalog.

[DC – General] DC organizational unit:

·         DCs should not be moved from the Domain Controllers OU or the Default Domain Controllers GPO won’t apply to them.

[DC – Network Configuration] DNS NIC configuration in a single-DC domain:

·         If the server is the first and only domain controller that you install in the domain, and the server runs DNS, configure the DNS client settings to point to that first server’s IP address; that is, the server points to itself. Do not list any other DNS servers until you have another domain controller hosting DNS in that domain.

[DC – Network Configuration] DNS NIC configuration in a multiple-DC domain where all DCs are also DNS servers:

·         In a domain with multiple domain controllers, DNS servers should include their own IP addresses on their interface lists of DNS servers. We recommend that the DC local IP address be the primary DNS server, another DC be the secondary DNS server (first local and then remote site), and that the localhost address act as a tertiary DNS resolver on the network cards for all DCs.
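The recommended resolver ordering can be expressed as a simple check. This Python sketch is illustrative only (the addresses are hypothetical), not an AD tool:

```python
def resolver_order_ok(local_ip, resolvers):
    """Recommended DNS client ordering on a DC that also runs DNS:
    own LAN IP first, another DC next, loopback last."""
    return (len(resolvers) >= 3
            and resolvers[0] == local_ip
            and resolvers[-1] in ("127.0.0.1", "::1")
            and local_ip not in resolvers[1:-1])

# Hypothetical addresses for two DCs in the same domain.
ok = resolver_order_ok("10.0.0.10", ["10.0.0.10", "10.0.0.11", "127.0.0.1"])
bad = resolver_order_ok("10.0.0.10", ["127.0.0.1", "10.0.0.10", "10.0.0.11"])
print(ok, bad)  # True False
```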

[DC – Network Configuration] DNS configuration in a multiple-DC domain where not all DCs are DNS servers:

·         If you do not use Active Directory-integrated DNS, and you have domain controllers that do not have DNS installed, Microsoft recommends that you configure the DNS client settings according to these specifications:

o    Configure the DNS client settings on the domain controller to point to a DNS server that is authoritative for the zone that corresponds to the domain where the computer is a member. A local primary and secondary DNS server is preferred because of Wide Area Network (WAN) traffic considerations.

o    If there is no local DNS server available, point to a DNS server that is reachable by a reliable WAN link. (Up-time and bandwidth determine reliability.)

o    Do not configure the DNS client settings on the domain controllers to point to your ISP’s DNS servers. Instead, the internal DNS server should forward to the ISP’s DNS servers to resolve external names.

[DC – Network Configuration] Multi-homed DC NIC configuration:

·         It is recommended not to run a domain controller on a multi-homed server. If server management adapters are present or multi-homing is required then the extra adapters should not be configured to register within DNS. If these interfaces are enabled and allowed to register in DNS, computers could try to contact the domain controller using this IP address and fail. This could potentially exhibit itself as sporadic failures where clients seemingly authenticate against remote domain controllers even though the local domain controller is online and reachable.

[DC – Network Configuration] WINS NIC configuration on DCs where the WINS service is hosted:

·         Unlike other systems, WINS servers should only point to themselves for WINS in their client configuration. This is necessary to prevent possible split registrations where a WINS server might try to register records both on itself and on another WINS server.

[DC – DNS Server Configuration] External name resolution:

·         It is recommended to configure DNS forwarders first for internet traffic. This will result in faster and more reliable name resolution. If that is not an option then utilize root hints.

[DC – DNS Server Configuration] DNS zone types:

·         Use directory-integrated storage for your DNS zones for increased security, fault tolerance, simplified deployment and management.

[DC – DNS Server Configuration] DNS scavenging:

·         DNS scavenging is recommended to clean up stale and orphaned DNS records that were dynamically created. This process keeps the database from growing unnecessarily. It also reduces name resolution issues where multiple records could unintentionally point to the same IP address. This is often seen in workstations that use DHCP, because the same IP address can be assigned to different workstations over time.

[DC – DNS Server Configuration] _msdcs.forestdomain zone authority:

·         It is recommended that every DNS server be authoritative for the _msdcs.forestdomain zone. A freshly created Windows 2003 forest places _msdcs in its own zone, and this zone is replicated forest-wide. If the domain began as a Windows 2000 forest, the _msdcs zone is a subzone of the forest root zone. The _msdcs zone can be placed in its own zone, or the forest root zone can be replicated forest-wide. Having _msdcs on every domain controller running DNS allows the domain controllers to look for other domain controllers without having to forward the query.

The dnslint.exe tool can be used to validate that each DNS server it queries is authoritative for the _msdcs.forestdomain zone.

[DC – DNS Server Configuration] DNS on servers with multiple NICs:

·         If multiple NICs exist on the DNS server, make sure that the DNS services are only listening on the LAN interface.

[DC – DNS Server Configuration] DNS services in multiple-DC environments:

·         Configure all DNS Servers to have either local copies of all DNS Zones or to appropriately forward to other DNS servers.

 

Replicating DNS zones across domain lines allows all domains in the forest to share DNS information more easily and ultimately makes DNS administration easier. Simply secure each DNS zone as needed if decentralized administration and security is a concern. Replicate to “all domains in the forest” even if you have only one domain; this will save you time in the future should a second domain be added.

 

·         Use Active Directory (AD) Integrated DNS Forwarders instead of normal standalone DNS Forwarders when possible

 

Example:

dnscmd /ZoneAdd domain.com /DsForwarder 10.10.10.10 [/DP /forest]

 

Using AD integrated forwarders will replicate the information to all the DNS servers in the domain or the forest (/DP /Forest). This will simplify DNS administration. Replicating to the forest (/DP /Forest) is preferred.

 

·         Use AD Integrated Stub Zones instead of standalone DNS Domain Forwards.

 

Stub zones can automatically be replicated to all DNS servers when AD-integrated zones are used, and they work similarly to DNS forwarders. Using DNS stub zones decreases administration as DNS servers are replaced over time. Using standalone, server-based DNS domain forwarders can require configuration of every DNS server, increasing DNS administration.

 

·         Configure Zone Transfers by using the Name Servers tab, and configuring the Zone Transfers tab to transfer to and notify the Name Servers of changes. Do not use Zone Transfers to IP Addresses.

 

Using the Name Servers tab to configure the zone transfer creates a better documented DNS server. An Active Directory-integrated DNS server will replicate the name server information to each DNS server, so this information is kept as DNS servers are added or replaced. Using only the Zone Transfers tab and transferring by IP address can result in lost information when a server is replaced.

[DC – DNS Server Configuration] DNS services in environments that integrate with other companies:

·         Use AD Integrated DNS forwarders to resolve DNS Zones across independent companies/forests, or replicate DNS Zones onto all DNS servers if the companies are owned by the same parent company and in the same forest.

DC – DNS Server Configuration DNS record caching:

·         Configure all DNS Servers to be a Caching DNS Server in addition to hosting DNS Zones.

 

This is the default configuration for Windows 2003 DNS servers. Leaving this enabled simplifies DNS administration and speeds DNS queries.

DC – DNS Server Configuration DNS Dynamic Updates:

·         Configure DNS Zones that are used by Active Directory domains to accept Dynamic Updates

 

Allowing Dynamic DNS (DDNS) updates on DNS zone used by the Active Directory domain is the default/recommended configuration.   This configuration is fundamental to having good communication between all devices in the AD domain.

DC – DNS Server Configuration DNS manually created records in dynamic zones:

·         Do NOT manually create Host (A) records in the same zone where dynamic Host records are created via DDNS. Instead, create a SubZone (or new Zone) and create the Host records there, then create an Alias (CNAME) record in the appropriate zone for user-friendly DNS searches.

 

The SubZone (or new zone) can be used to document the device type as server, router, appliance, etc., which provides a better-documented DNS environment. This also allows manual DNS host records to be easily monitored and maintained, and makes DNS maintenance easier as equipment is replaced over time.

 

a.       Bad Practice Example: domain.com is used for DDNS registration, do not manually create a Host (A) record in this Zone.

 

b.     Best Practice Example: domain.com is used for DDNS registration, serv.domain.com is used for manual Host records, then place an Alias record in domain.com to allow easy client configuration.

DC – Time Configuration DC NT5DS configuration for servers not hosting the PDC FSMO role:

·         Configure NTP on all domain controllers to point to the domain controller hosting the PDC FSMO role.

DC – Time Configuration DC NT5DS configuration for the domain PDC FSMO role:

·         Configure the Windows Time service (on the PDC FSMO role holder) to synchronize with an external time server.

DC – Time Configuration External NTP server definition:

·         When specifying specific NTP servers it is possible to define one or more servers. It is important to follow the correct syntax when defining multiple NTP servers. Failure to do so may invalidate the list and cause time synchronization failures. The main point to focus on is the delimiter between values: the correct delimiter is a space. Commas, semicolons and anything other than a space are invalid.
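As an illustrative sketch (the NTP host names below are placeholders), the external time source on the PDC FSMO holder can be set with w32tm, with the peer list quoted and space-delimited:

```bat
rem Configure the PDC emulator to sync from external NTP servers.
rem Note the space-delimited, quoted peer list.
w32tm /config /manualpeerlist:"0.pool.ntp.org 1.pool.ntp.org 2.pool.ntp.org" /syncfromflags:manual /reliable:yes /update
w32tm /resync
```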

Sites and Services Sites:

·         Do not disable the Knowledge Consistency Checker (KCC).

·         Do not specify bridgehead servers.

·         Keep the replication schedule open as long as is practical.

·         Remove empty domains and consolidate any IP subnets associated therein to sites which have domain controllers.

·         Do not enable Universal Group Membership Caching in sites where a global catalog resides. Universal Group Membership Caching is set at the site level and affects all DCs in the site. If one of the DCs is a GC, the remaining DCs will continue to cache Universal Group membership, resulting in unpredictable authentication failures (dependent on which DC is chosen for authentication by the DC Locator service).

·         All sites should contain at least one global catalog server. In order to logon, a user account needs to be evaluated against Universal Group Membership which is stored on GCs. A site without GCs can cause logon failure as a result. A new option is to enable Universal Group Membership Caching in order not to require a GC in each site.

Sites and Services Connection objects:

·         Do not manually create connection objects, and do not manually modify default connection objects. If you leave the KCC to do its job, it will automatically create the necessary connection objects. However, any manually created connection object (INCLUDING an automatically created object that has been modified) will remain static. “Admin made it, so admin must know something I don’t know” is the general logic behind this. Only create manual connection objects if you know something the KCC doesn’t know. Don’t confuse a connection object with a site link.

·         Connection objects should maintain default schedules. By default, connection objects will inherit their schedule based on the site link.  However, they can be changed directly.  Once you make a change to a connection object, it will no longer be managed by the KCC and will be treated as a manual connection object.

·         If you are cleaning up the connection objects, don’t delete more than 10 connections at a time or a Version Vector Join (vv join) might be required to re-join the DC.

·         Do not disable connection objects.

Sites and Services Site links:

·         Do not manually create site-links, let the ISTG create links based on KCC results.

·         All sites need to be contained in at least one site link in order to replicate to other sites. Automatic site coverage and DFS costing might be affected if sites are not within site links.

·         There must be 2 or more sites associated with a site link.  The deletion of a site may require the manual clean-up of the respective site link.

·         If two site links contain the same two remote sites, a suboptimal replication topology may result.

·         Do not disable site link transitivity.

Sites and Services Site subnets:

·         All infrastructure IP subnet ranges that servers or workstations log on from should be defined within AD Sites and Services. Sites consist of one or more subnets and allow clients to log on to a local domain controller quickly through the DC Locator process. If the subnet definition is missing from AD, the client will log on to any generic DC, which may be on the other side of the world. You can easily find subnets not defined in AD by reviewing the Netlogon.log file in the %systemroot%\debug folder. You can look for all DCs with event 5778 using EventComb and then selectively gather the various Netlogon.log files.
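For example (the path assumes the default debug log location), missing subnet definitions show up as NO_CLIENT_SITE entries in Netlogon.log, which can be extracted on each DC with findstr:

```bat
rem List clients that authenticated without a matching AD site/subnet
findstr /i "NO_CLIENT_SITE" %systemroot%\debug\netlogon.log
```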

Sites and Services Inter-Site Change Notification:

·         Replication of AD is always pulled and not pushed. Within a site, when a change occurs, a DC will notify other DCs of the change so that they can pull the change. Between sites, this is not used and rather a schedule is used with the lowest time being 15 minutes.

 

This can be changed to work with Change Notification making inter-site replication much faster (but using more bandwidth as a consequence). It is recommended to only enable change notification on a link if it is a high speed link or a dedicated Exchange site.

 

To enable Change Notification, use adsiedit.msc and update the attribute called “Options” on the site link to a value of 1. You can find this object in the Configuration NC.
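As an alternative to adsiedit.msc, here is a sketch using the ActiveDirectory PowerShell module; the site link name HUB-BRANCH is a placeholder for your own link:

```powershell
# Enable inter-site change notification by setting bit 1 of the
# "options" attribute on the IP site link (link name is hypothetical).
$configNC = (Get-ADRootDSE).configurationNamingContext
$link = Get-ADObject -Identity "CN=HUB-BRANCH,CN=IP,CN=Inter-Site Transports,CN=Sites,$configNC" -Properties options
Set-ADObject -Identity $link -Replace @{options = ($link.options -bor 1)}  # -bor preserves any existing option bits
```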

Replication Morphed folders:

·         A morphed folder refers to a folder that has been renamed by FRS to resolve a conflict. FRS identifies the conflict during replication, and the receiving member protects the original copy of the folder and renames (morphs) the later inbound copy of the folder. The morphed folder names have a suffix of “_NTFRS_xxxxxxxx,” where “xxxxxxxx” represents eight random hexadecimal digits.

 

If morphed folders are found within SYSVOL they should be fixed or they may not be linked to other AD components properly. Fixing a morphed folder involves removing or renaming the folder and its morphed pair and waiting for replication to complete. Then the correct folder is identified and renamed to its correct name or copied to its correct location.

Replication Lingering objects:

·         Lingering objects are objects that exist on one or more DCs but not on others. Lingering objects can occur if a domain controller does not replicate for an interval of time that is longer than the tombstone lifetime (TSL) and then reconnects to the replication topology. Objects that are deleted from the Active Directory directory service while the domain controller is offline can remain on that domain controller as lingering objects. This can be caused by recovering a DC from a virtual snapshot, or by reviving a domain controller which has been off the network, or not replicating with the domain, for longer than the tombstone lifetime.

Replication GPT and GPC linkages:

·         Group Policy Objects have two parts consisting of the Group Policy Template (GPT) residing in the SYSVOL and the Group Policy Container (GPC) in Active Directory. When problems occur with SYSVOL replication or in the AD itself, the two halves can become unsynchronized. When this happens, Group Policy can cease to function or start behaving strangely.

 

To validate synchronization of GPTs and GPCs, use the Resource Kit tool gpotool.exe. In a healthy domain, all policies should return a “Policy OK” result. When a policy fails to do so, some troubleshooting of SYSVOL replication and GPO version numbers is in order.

Replication Topology clean-up setting:

·         This should be enabled. This option controls the automatic clean-up of unnecessary connection objects and replication links. To enable it, run:

 

repadmin /siteoptions HubServer1 -IS_TOPL_CLEANUP_DISABLED

Replication Detect stale topology setting:

·         This site option invokes KCC branch-office mode, which tells the KCC to ignore failed replication and not try to route around it. To set the option on a branch DC:

 

repadmin /siteoptions BranchServer1 +IS_TOPL_DETECT_STALE_DISABLED

 

This should not be enabled on Central or Hub Sites or replication failures can result. To undo this:

 

repadmin /siteoptions HubServer1 -IS_TOPL_DETECT_STALE_DISABLED

Replication KCC Intra-site topology setting:

·         If KCC intra-site topology generation is disabled, all replication connections must be manually maintained, which carries a high administrative burden. This is not recommended; rather, allow the KCC to dynamically build the topology every 15 minutes. The option is set (disabling the KCC) with ‘+’ and cleared (re-enabling it) with ‘-’:

 

repadmin /siteoptions HubServer1 +IS_AUTO_TOPOLOGY_DISABLED

 

For inter-site, you may choose to disable the KCC and create manual connection objects as follows:

 

repadmin /siteoptions HubServer1 +IS_INTER_SITE_AUTO_TOPOLOGY_DISABLED

Replication Inbound replication setting:

·         Disabling inbound replication should only be used for testing and should be removed once complete. Leaving inbound replication disabled will eventually orphan the DC once the TSL has expired. To re-enable inbound replication, run the following (Note the + and – switches on the Repadmin options to confirm or negate the option):

 

repadmin /options site:Branch -DISABLE_INBOUND_REPL

Replication Outbound Replication setting:

·         Outbound replication is disabled automatically when a DC has not replicated within its tombstone lifetime (180 days). If it has been disabled manually, you need to re-enable it as follows:

 

repadmin /options site:Branch -DISABLE_OUTBOUND_REPL

Replication Ignore schedules setting:

·         If you’ve configured replication on a schedule on a site link, this schedule will be ignored if the “Ignore IP Schedules” option is set on the IP Container.

 

This is NOT the GUI for “Options = 1” which enables inter-site change notification.

Replication Topology Minimum Hops setting:

·         By default, the KCC will create the intra-site replication topology so that no replication partner is more than 3 hops away. This 3-hop limit can be disabled as follows:

 

repadmin /siteoptions server1 +IS_TOPL_MIN_HOPS_DISABLED

 

To undo this, negate the option (-) as follows:

 

repadmin /siteoptions server1 -IS_TOPL_MIN_HOPS_DISABLED

Replication Non-default dSHeuristics setting:

·         The dSHeuristics attribute modifies the behaviour of certain aspects of domain controllers. Examples of behavioural changes include enabling anonymous LDAP operations. The dSHeuristics attribute is located at CN=Directory Service,CN=Windows NT,CN=Services,CN=Configuration,DC=<forest root domain>

 

The data is a Unicode string where each value may represent a different possible setting.

 

The default value is <not set>

 

For more information on dSHeuristics:

http://msdn.microsoft.com/en-us/library/cc223560(PROT.10).aspx

Replication Recycle bin deleted object lifetime setting:

·         Without knowing the Recycle Bin Deleted Object Lifetime, it’s not possible to know if a deleted object will be recoverable. By default, the value is set to Null and it uses the value of the TombStone Lifetime instead. The TSL is also set to Null by default and if it remains null, it uses the hard coded value of 60 (or 180 if the forest was deployed on 2003 SP1 or above). If the value is changed, ensure it is longer than your backup schedule to avoid having to do authoritative restores on deleted objects.

 

The location of the TombStone Lifetime and the Deleted Object Lifetime are both at CN=Directory Service,CN=Windows NT,CN=Services,CN=Configuration,DC=<forest root domain> with the following Attribute Names:

 

TombStone Lifetime (TSL): tombstoneLifetime

Deleted Object Lifetime: msDS-DeletedObjectLifetime
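Both values can be read with the ActiveDirectory module; a null result means the defaults described above apply:

```powershell
# Query the tombstone and deleted-object lifetimes from the config partition
$dsPath = "CN=Directory Service,CN=Windows NT,CN=Services,$((Get-ADRootDSE).configurationNamingContext)"
Get-ADObject -Identity $dsPath -Properties tombstoneLifetime, 'msDS-DeletedObjectLifetime' |
    Select-Object tombstoneLifetime, 'msDS-DeletedObjectLifetime'
```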

Replication Inbound Replication Connections:

·         Do not manually create inbound replication connections on an RODC. A manually created inbound replication connection from an RODC will result in failed replication as an RODC will never replicate outbound.

Read Only DCs Site links to RODC sites:

·         In a mixed environment of both 2003 and 2008 DCs, ensure the lowest-cost site link for an RODC site is to a site with more than one writeable 2008 domain controller. The Filtered Attribute Set (FAS) is the definition of what an RODC may replicate (some attributes being filtered), and it is only honoured when the RODC replicates from a 2008 RWDC. If the single RWDC at the next hop fails, the RODC may replicate with a 2003 DC, including all attributes. It’s important to validate the site links, site link bridges and costs to ensure that there are at least 2 RWDCs each RODC can replicate from.

Read Only DCs RODCs per site:

·         Ideally each RODC site contains only a single RODC. RODCs cache users’ passwords; in the event of a disconnection from a RWDC, users can log on using the password cached on the RODC.

 

In the event that there are multiple RODCs in the Site for the same domain, it is unpredictable which RODC will respond to an Authentication Request. Therefore, user logon experience will be equally unpredictable.

Read Only DCs RODCs and RWDCs in the same site:

·         Typically, RODCs are placed in remote branch sites by themselves. In the event that there are both RWDCs and RODCs, there will be a noticeable and unpredictable user experience in the event of the RWDC being unavailable. This is especially true during WAN outages where passwords are not cached.

Read Only DCs Number of non-RODCs per domain.

·         It is recommended to always have more than a single read/write domain controller per domain. Although a single RWDC and many RODCs can exist in a domain, this is not recommended. RODCs can’t replicate outbound and in the event of failure of the RWDC an undesirable AD Restore would be required.

  AutoSiteCoverage:

·         AutoSiteCoverage enables a DC to cover a site where no DCs exist by registering the relevant SRV records for the site in question. Windows 2003 DCs don’t recognise RODCs and if AutoSiteCoverage is enabled on these DCs, they will register their SRV records in this site. This will result in users authenticating to the 2003 DC even though an RODC exists in the site.

 

To resolve this, either disable AutoSiteCoverage on the 2003 DC or install the RODC Compatibility Pack on the 2003 DCs.

 

HKLM\SYSTEM\CurrentControlSet\Services\Netlogon\Parameters

 

REG_DWORD called AutoSiteCoverage, value = 1 or 0
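For example, to disable AutoSiteCoverage from the command line (a sketch; back up the registry first):

```bat
rem 0 = disable AutoSiteCoverage on this DC; restart Netlogon afterwards
reg add HKLM\SYSTEM\CurrentControlSet\Services\Netlogon\Parameters /v AutoSiteCoverage /t REG_DWORD /d 0 /f
net stop netlogon && net start netlogon
```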



Event ID 1479 – This source server failed to generate the changes


Alert: This source server failed to generate the changes

Description: This directory service failed to retrieve the changes requested for the following directory partition. As a result, it was unable to send change requests to the directory service at the following network address.

Event ID: 1479

Active Directory Domain Services could not update the following object in the local Active Directory Domain Services database with changes received from the following source directory service. Active Directory Domain Services does not have enough database version store to apply the changes.

User Action

Restart this directory service. If this does not solve the problem, increase the size of the database version store. If you are populating the objects with a large number of values, or the size of the values is especially large, decrease the size of future changes.

Additional Data

Error value:

8573 The database is out of version store.

Resolution:

{MS has provided the resolution in this Link}

Note: Take Backup of Registry before changing

Registry Location:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\NTDS\Parameters

You need to add the registry value “EDB max ver pages” as a 32-bit DWORD with a decimal value chosen from the reference below:

9600 = 152 MB
12800 = 202 MB
16000 = 252 MB
19200 = 302 MB
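The same change can be scripted; for example, for the 152 MB setting (take a registry backup first):

```bat
rem "EDB max ver pages" = 9600 decimal (~152 MB of version store)
reg add "HKLM\SYSTEM\CurrentControlSet\Services\NTDS\Parameters" /v "EDB max ver pages" /t REG_DWORD /d 9600 /f
```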

Reboot the Server once the changes have been done.

Check the Event Viewer after the restart; you should see Event ID 1394 in the Directory Services log.



FIM and MIM resources


Dump AD Users objects with ‘Password never expires’, ‘Store password using reversible encryption’ and ‘Use Kerberos DES encryption types for this account’.


How do you check for these accounts?

Get-ADUser -Filter {UserAccountControl -band 0x200000}

That was easy!

User Accounts have different options that can be set to control security settings. In Active Directory Users and Computers most of these options can be found in the ‘Account’ tab of the user object dialogue box, under ‘Account options’:

 

 

With the ‘Use Kerberos DES encryption types for this account’ option selected, as in the example above, the setting is stored as part of a binary mask in the ‘UserAccountControl’ attribute of the user object. In the binary mask, each positional bit represents a different user account option that can be switched on or off. Like a light switch: when switched on, the option is active. These settings can be queried using PowerShell’s ‘binary and’ (-band) operator. The hexadecimal value for DES encryption is 0x200000, and we use -band to check that it is present (switched on) in the binary mask.

 

Here are other values you could check for with the aid of a filter and Get-ADUser:

Property Flag Value in Hexadecimal Value in Decimal
SCRIPT 0x0001 1
ACCOUNTDISABLE 0x0002 2
HOMEDIR_REQUIRED 0x0008 8
LOCKOUT 0x0010 16
PASSWD_NOTREQD 0x0020 32
PASSWD_CANT_CHANGE 0x0040 64
ENCRYPTED_TEXT_PWD_ALLOWED 0x0080 128
TEMP_DUPLICATE_ACCOUNT 0x0100 256
NORMAL_ACCOUNT 0x0200 512
INTERDOMAIN_TRUST_ACCOUNT 0x0800 2048
WORKSTATION_TRUST_ACCOUNT 0x1000 4096
SERVER_TRUST_ACCOUNT 0x2000 8192
DONT_EXPIRE_PASSWORD 0x10000 65536
MNS_LOGON_ACCOUNT 0x20000 131072
SMARTCARD_REQUIRED 0x40000 262144
TRUSTED_FOR_DELEGATION 0x80000 524288
NOT_DELEGATED 0x100000 1048576
USE_DES_KEY_ONLY 0x200000 2097152
DONT_REQ_PREAUTH 0x400000 4194304
PASSWORD_EXPIRED 0x800000 8388608
TRUSTED_TO_AUTH_FOR_DELEGATION 0x1000000 16777216
PARTIAL_SECRETS_ACCOUNT 0x04000000 67108864

 

You’re quite at liberty to combine them. This one tests for users who have the following set: ‘Password never expires’, ‘Store password using reversible encryption’ and ‘Use Kerberos DES encryption types for this account’.

$COMBINED_VALUE = 0x10000 + 0x0080 + 0x200000

Get-ADUser -Filter {UserAccountControl -band $COMBINED_VALUE}
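The same combined test can also be expressed as a raw LDAP filter using the bitwise-AND matching rule OID 1.2.840.113556.1.4.803 (2162816 is the decimal form of 0x210080, i.e. 0x10000 + 0x0080 + 0x200000):

```powershell
# All three bits must be set for a user to match (BIT_AND semantics)
Get-ADUser -LDAPFilter '(userAccountControl:1.2.840.113556.1.4.803:=2162816)'
```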


WAP 2012 R2 highly available?


How do I configure WAP in Windows Server 2012 R2 highly available?

Web Application Proxy (WAP) in Windows Server 2012 R2 provides a reverse proxy service enabling services hosted internally on-premises to be published to the Internet. It does this while also integrating with Active Directory Federation Services (ADFS) to enable pre-authentication, single sign-on and more. If you need to use WAP in a production scenario, it’s important that the WAP service is highly available. This is achieved by deploying multiple WAP instances that use the same certificate and connect to the same ADFS instance to ensure consistent policy. Network load balancing is used to provide a virtual IP that joins the multiple WAP instances into a single highly available service. You can use either Windows NLB or a separate load-balancing solution.

Working with WAP: https://technet.microsoft.com/en-us/library/Dn584113.aspx

A step-by-step guide is available which walks through configuring two WAP servers using Windows NLB at http://blogs.technet.com/b/platformspfe/archive/2015/02/16/part-6-windows-server-2012-r2-ad-fs-federated-web-sso.aspx. As part of the same series it also walks through deploying a highly available ADFS implementation which is important as all parts of the solution need to be highly available to provide a highly available complete solution.

Do I need multiple NICs for Web Application Proxy?

No. Web Application Proxy has no requirements or preferences around the number of network adapters. The decision to have multiple NICs depends only on your network topology and whether you need multiple network adapters to provide the required connectivity.

Best practice analyzer: https://technet.microsoft.com/en-us/library/Dn383651.aspx

Example of implementation: http://blogs.technet.com/b/platformspfe/archive/2015/02/16/part-6-windows-server-2012-r2-ad-fs-federated-web-sso.aspx

 

 


PKI – Certificates – CSR – Certificate Signing Request creation

CSR Generation: Using certreq (Windows)

This article is for administrators who prefer the command shell!

Save the following file as request.inf on your server, editing the subject according to the comment:

;—————– request.inf —————–

[Version]
Signature="$Windows NT$"

[NewRequest]
;Change to your country code, company name and common name
Subject = "C=US, O=Example Co, CN=something.example.com"

KeySpec = 1
KeyLength = 2048
Exportable = TRUE
MachineKeySet = TRUE
SMIME = False
PrivateKeyArchive = FALSE
UserProtected = FALSE
UseExistingKeySet = FALSE
ProviderName = "Microsoft RSA SChannel Cryptographic Provider"
ProviderType = 12
RequestType = PKCS10
KeyUsage = 0xa0
HashAlgorithm = SHA256

[EnhancedKeyUsageExtension]
OID=1.3.6.1.5.5.7.3.1 ; this is for Server Authentication / Token Signing
;———————————————–
then run

C:\>certreq -new request.inf request.csr
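Once the CSR has been signed by your CA, the remaining certreq steps look like this (file names are placeholders):

```bat
rem Submit the request to a CA, then bind the issued cert to the private key
certreq -submit request.csr certificate.cer
certreq -accept certificate.cer
```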



LDAP Queries with port 389 or port 3268 ?


Reference Article: https://technet.microsoft.com/en-us/library/cc978012.aspx

Port 3268. This port is used for queries specifically targeted for the global catalog. LDAP requests sent to port 3268 can be used to search for objects in the entire forest. However, only the attributes marked for replication to the global catalog can be returned. For example, a user’s department could not be returned using port 3268 since this attribute is not replicated to the global catalog.

Port 389. This port is used for requesting information from the local domain controller. LDAP requests sent to port 389 can be used to search for objects only within the global catalog’s home domain. However, the requesting application can obtain all of the attributes for those objects. For example, a request to port 389 could be used to obtain a user’s department.

The Schema Manager is used to specify additional attributes (e.g. thumbnailPhoto, department) that should be replicated to each global catalog server. The attributes included in the global catalog are consistent across all domains in the forest.

Effect of Global Catalog When Searching Back Links and Forward Links

Some Active Directory attributes cannot be located simply by finding a row in the directory database. A back link is an attribute that can be computed only by referencing another attribute, called a forward link. An example of a back-link attribute is the memberOf attribute on a user object, which relies on the group attribute member to derive its values. For example, if you request the groups of which a specific user is a member, the forward link member, an attribute of the group object, is searched to find values that match the user name that you specified.

Because of the way that groups are enumerated by the Global Catalog, the results of a back-link search can vary, depending on whether you search the Global Catalog (port 3268) or the domain (port 389), the kind of groups the user belongs to (global groups vs. domain local groups), and whether the user belongs to groups outside the local domain. Connecting to the local domain does not locate the user’s group membership in groups outside the domain. Connecting to the Global Catalog locates the user’s membership in global groups but not in domain local groups because local groups are not replicated to the Global Catalog
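The difference can be observed with the ActiveDirectory module, which accepts a port in -Server (the user and DC names below are hypothetical):

```powershell
# GC query: forest-wide scope, partial attribute set only
Get-ADUser jsmith -Server 'dc01.domain.com:3268' -Properties memberOf
# Domain query: local domain only, but all attributes available
Get-ADUser jsmith -Server 'dc01.domain.com:389' -Properties memberOf
```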


The version store has reached its maximum size because of unresponsive transaction


This alert occurs on Windows Server 2008 R2 servers.

 

Alert: Active Directory cannot update object due to insufficient memory
Last modified by: System
Last modified time: 7/18/2013 1:02:10 PM
Alert description: Active Directory Domain Services could not update the following object in the local Active Directory Domain Services database with changes received from the following source directory service. Active Directory Domain Services does not have enough database version store to apply the changes.

User Action

Restart this directory service. If this does not solve the problem, increase the size of the database version store. If you are populating the objects with a large number of values, or the size of the values is especially large, decrease the size of future changes.

 

Additional Data

A reboot will clear the version store, but it does nothing to identify or resolve the core issue.

The version store has reached its maximum size because of an unresponsive transaction. Updates to the database are rejected until the long-running transaction is committed or rolled back. TechNet suggests looking for event IDs 1022, 1069 and 623, but none of these event IDs could be found in Event Viewer.

Resolution:

Below is the solution, but changing registry settings is at your own risk.

Backup the Registry before Proceeding

  1. Update ‘Version Store Size’ (the Ops Mgr Agent queue/cache Db) by using Regedit to change “HKLM\System\CurrentControlSet\Services\HealthService\Parameters\”Persistence Version Store Maximum”.
    Value should be 5120 (decimal) (equates to 80MB).
  2. Update value for ‘MaximumQueueSizeKb’ in HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\HealthService\Parameters\Management Groups\<ManagementGroupName> Value should be 102400 (decimal)

“Please reboot the server”

Check in the Event Viewer for Event ID 1394 “All Problems preventing updates to the Active Directory Domain Services database have been cleared. New Updates to the Active Directory Domain Services database are succeeding. The Net Logon service has restarted”

You can find this event in “Directory Services” Log of the Domain Controller.



AD best practices


AD design and placement best practices:

 

Category Best Practice
Forest – General Forest count:

·         A single forest is ideal when possible

Forest – General Forest trusts:

·         When your forest contains domain trees with many child domains and you observe noticeable user authentication delays between the child domains, you can optimize the user authentication process between the child domains by creating shortcut trusts to mid-level domains in the domain tree hierarchy.

Forest – General Forest functional level for Windows 2003 forests:

·         If all of your DCs are Windows 2003 or higher OS versions then ensure that you raise the forest functional level to 2003 (or higher). This enables the following benefits:

o   Ability to use forest trusts

o   Ability to rename domains

o   The ability to deploy a read-only domain controller (RODC)

o   Improved Knowledge Consistency Checker (KCC) algorithms and scalability

Forest – General Forest functional level for Windows 2008 forests:

·         If all of your DCs are Windows 2008 or higher OS versions then ensure that you raise the forest functional level to 2008 (or higher). This enables the following benefits:

o   Active Directory Recycle Bin, which provides the ability to restore deleted objects in their entirety while AD DS is running.

Forest – FSMO Schema Master placement:

·         Place the schema master on the PDC of the forest root domain.

Forest – FSMO Domain Naming Master placement:

·         Place the domain naming master on the forest root PDC.

Domain – General Domain count:

·         To reap the maximum benefits from Active Directory, try to minimize the number of domains in the finished forest. For most organizations an ideal design is a forest root domain and one global domain for a total of 2 domains.

Domain – General Domain root:

·         The best practices approach to domain design dictates that the forest root domain be dedicated exclusively to administering the forest infrastructure and be mostly empty.

Domain – General Domain functional level:

·         If all of your DCs are Windows 2003 or higher OS versions then ensure that you raise the domain functional level to 2003 (or higher). This enables the following benefits:

o      Renaming domain controllers

o      LastLogonTimeStamp attribute

o      Replicating group change deltas

o      Renaming domains

o      Cross forest trusts

o      Improved KCC scalability

Domain – General Old DC Metadata:

·         In the event that a DC has to be forcibly removed (dcpromo /forceremoval), such as when it has not replicated beyond the TSL, you will need to clean up the DC metadata on the central DCs. Metadata includes elements such as the computer object, NTDS Settings, FRS member object and DNS records. Use ntdsutil to perform this.
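On Windows Server 2008 and later, the whole clean-up can be expressed on one ntdsutil command line; the DN below is a placeholder for the failed DC’s server object:

```bat
rem Remove all metadata for a forcibly demoted DC (DN is hypothetical)
ntdsutil "metadata cleanup" "remove selected server CN=OLDDC01,CN=Servers,CN=Site1,CN=Sites,CN=Configuration,DC=domain,DC=com" quit quit
```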

Domain – FSMO PDC FSMO placement:

·         Place the PDC on your best hardware in a reliable hub site that contains replica domain controllers in the same Active Directory site and domain.

Domain – FSMO PDC FSMO colocation:

·         PDC and RID FSMO roles should be held by the same DC.

Domain – FSMO RID FSMO placement:

·         Place the RID master on the domain PDC in the same domain.

Domain – FSMO RID FSMO in windows 2008 environment:

·         On 2008 R2 DCs ensure that hotfix 2618669 is applied

Domain – FSMO RID FSMO colocation:

·         PDC and RID FSMO roles should be held by the same DC

Domain – FSMO RID pool size:

·         Ensure that the RID pool is large enough to avoid possible RID depletion.
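RID consumption can be checked periodically on the RID master; for example:

```bat
rem Verbose RID manager test shows the state of the global RID pool
dcdiag /test:ridmanager /v
```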

Domain – FSMO Infrastructure Master in a single domain forest:

·         In a forest that contains a single Active Directory domain, there are no phantoms. Therefore, the infrastructure master has no work to do. The infrastructure master may be placed on any domain controller in the domain, regardless of whether that domain controller hosts the global catalog or not.

Domain – FSMO Infrastructure Master in a multiple domain forest:

·         If every domain controller in a domain that is part of a multidomain forest also hosts the global catalog, there are no phantoms or work for the infrastructure master to do. The infrastructure master may be put on any domain controller in that domain. In practical terms, most administrators host the global catalog on every domain controller in the forest.

Domain – FSMO Infrastructure Master in a multiple domain forest where not all DCs are hosting a global catalog:

·         If every domain controller in a given domain that is located in a multidomain forest does not host the global catalog, the infrastructure master must be placed on a domain controller that does not host the global catalog.

DC – General DC organizational unit:

·         DCs should not be moved from the Domain Controllers OU or the Default Domain Controllers GPO won’t apply to them.

DC – Network Configuration DNS NIC configuration in a single DC domain:

·         If the server is the first and only domain controller that you install in the domain, and the server runs DNS, configure the DNS client settings to point to that first server’s IP address; that is, the server points to itself. Do not list any other DNS servers until you have another domain controller hosting DNS in that domain.

DC – Network Configuration DNS NIC configuration in a multiple DC domain where all DCs are also DNS servers:

·         In a domain with multiple domain controllers, DNS servers should include their own IP addresses on their interface lists of DNS servers. We recommend that the DC local IP address be the primary DNS server, another DC be the secondary DNS server (first local and then remote site), and that the localhost address act as a tertiary DNS resolver on the network cards for all DCs.

DC – Network Configuration DNS Configuration in a multiple DC domain where not all DCs are DNS servers:

·         If you do not use Active Directory-integrated DNS, and you have domain controllers that do not have DNS installed, Microsoft recommends that you configure the DNS client settings according to these specifications:

o    Configure the DNS client settings on the domain controller to point to a DNS server that is authoritative for the zone that corresponds to the domain where the computer is a member. A local primary and secondary DNS server is preferred because of Wide Area Network (WAN) traffic considerations.

o    If there is no local DNS server available, point to a DNS server that is reachable by a reliable WAN link. (Up-time and bandwidth determine reliability.)

o    Do not configure the DNS client settings on the domain controllers to point to your ISP’s DNS servers. Instead, the internal DNS server should forward to the ISP’s DNS servers to resolve external names.

DC – Network Configuration Multi-homed DC NIC configuration:

·         It is recommended not to run a domain controller on a multi-homed server. If server management adapters are present or multi-homing is required then the extra adapters should not be configured to register within DNS. If these interfaces are enabled and allowed to register in DNS, computers could try to contact the domain controller using this IP address and fail. This could potentially exhibit itself as sporadic failures where clients seemingly authenticate against remote domain controllers even though the local domain controller is online and reachable.

DC – Network Configuration WINS NIC configuration on DCs where the WINS service is hosted:

·         Unlike other systems, WINS servers should only point to themselves for WINS in their client configuration. This is necessary to prevent possible split registrations where a WINS server might try to register records both on itself and on another WINS server.

DC – DNS Server Configuration External name resolution:

·         It is recommended to configure DNS forwarders for Internet name resolution. This will result in faster and more reliable name resolution. If that is not an option, then utilize root hints.

DC – DNS Server Configuration DNS Zone Types:

·         Use directory-integrated storage for your DNS zones for increased security, fault tolerance, simplified deployment and management.

DC – DNS Server Configuration DNS Scavenging:

·         DNS scavenging is recommended to clean up stale and orphaned DNS records that were dynamically created. This process keeps the database from growing unnecessarily. It also reduces name resolution issues where multiple records could unintentionally point to the same IP address. This is often seen in workstations that use DHCP, because the same IP address can be assigned to different workstations over time.

DC – DNS Server Configuration DNS _msdcs.forestdomain zone authority:

·         It is recommended that every DNS server be authoritative for the _msdcs.forestdomain zone. A freshly created Windows 2003 forest places _msdcs in its own zone and this zone is replicated forest wide. If the domain began as a Windows 2000 forest, the _msdcs zone is a subzone of the forest root zone. The _msdcs zone could be placed in its own zone, or the forest root zone could be replicated forest wide. Having _msdcs on every domain controller running DNS allows the domain controllers to look for other domain controllers without having to forward the query.

The dnslint.exe tool can be used to validate that each DNS server it queries is authoritative for the _msdcs.forestdomain zone.

DC – DNS Server Configuration DNS on servers with multiple NICs:

·         If multiple NICs exist on the DNS server, make sure that the DNS services are only listening on the LAN interface.

DC – DNS Server Configuration DNS services in multiple DC environments:

·         Configure all DNS Servers to have either local copies of all DNS Zones or to appropriately forward to other DNS servers.

 

Replicating DNS zones across domain lines will allow all domains in the forest to share DNS information more easily and ultimately make DNS administration easier. Simply secure each DNS zone as needed if decentralized administration and security is a concern. Replicate to “all domains in the forest” even if you have only one domain; this will save you time in the future should a second domain be added.

 

·         Use Active Directory (AD) Integrated DNS Forwarders instead of normal standalone DNS Forwarders when possible

 

1.     Example:

dnscmd /ZoneAdd domain.com /DsForwarder 10.10.10.10 [/DP /forest]

 

Using AD integrated forwarders will replicate the information to all the DNS servers in the domain or the forest (/DP /Forest). This will simplify DNS administration. Replicating to the forest (/DP /Forest) is preferred.

 

·         Use AD Integrated Stub Zones instead of standalone DNS Domain Forwards.

 

Stub Zones can automatically be replicated to all DNS servers when AD integrated zones are used, and they work similarly to DNS forwarders. Using DNS stubs will decrease administration as DNS servers are replaced over time. Using standalone server-based DNS domain forwarders can require configuration of every DNS server, increasing DNS administration.

 

·         Configure Zone Transfers by using the Name Servers tab, and configuring the Zone Transfers tab to transfer to and notify the Name Servers of changes. Do not use Zone Transfers to IP Addresses.

 

Using the Name Servers tab to configure the Zone Transfer creates a better documented DNS server. An Active Directory integrated DNS Server will replicate the Name Server information to each DNS server. As DNS servers are added or replaced this information is kept, using only the Zone Transfers tab and transferring by IP Address can result in lost information when a server is replaced.

DC – DNS Server Configuration DNS services in environments which integrate with other companies:

·         Use AD Integrated DNS forwarders to resolve DNS Zones across independent companies/forests, or replicate DNS Zones onto all DNS servers if the companies are owned by the same parent company and in the same forest.

DC – DNS Server Configuration DNS record caching:

·         Configure all DNS Servers to be a Caching DNS Server in addition to hosting DNS Zones.

 

This is the default configuration for Windows 2003 DNS servers. Leaving this enabled simplifies DNS administration and speeds DNS queries.

DC – DNS Server Configuration DNS Dynamic Updates:

·         Configure DNS Zones that are used by Active Directory domains to accept Dynamic Updates

 

Allowing Dynamic DNS (DDNS) updates on DNS zone used by the Active Directory domain is the default/recommended configuration.   This configuration is fundamental to having good communication between all devices in the AD domain.

DC – DNS Server Configuration DNS manually created records in dynamic zones:

·         Do NOT manually create Host (A) records in the same domain with records where dynamic Host records are created via DDNS. Instead create a SubZone (or new Zone) and create the Host records there, then create an Alias (CNAME) record in the appropriate zone for user friendly DNS searches.

 

The SubZone (or new Zone) can be used to document the device type as server, router, appliance, etc.; this provides a better documented DNS environment. This also allows manual DNS host records to be easily monitored and maintained. As equipment is replaced over time, easier DNS maintenance is achieved.

 

a.       Bad Practice Example: domain.com is used for DDNS registration, do not manually create a Host (A) record in this Zone.

 

b.     Best Practice Example: domain.com is used for DDNS registration, serv.domain.com is used for manual Host records, then place an Alias record in domain.com to allow easy client configuration.

DC – Time Configuration DC NT5DS configuration for servers not hosting the PDC FSMO role:

·         Configure NTP on all domain controllers to point to the domain controller hosting the PDC FSMO role.

DC – Time Configuration DC NT5DS configuration for the domain PDC FSMO role:

·         Configure the Windows Time service (on the PDC FSMO role holder) to synchronize with an external time server.

DC – Time Configuration External NTP server definition:

·         When specifying specific NTP servers it is possible to define one or more servers. It is important to follow the correct syntax when defining multiple NTP servers; failure to do so may invalidate the list and cause time synchronization failures. The main point to focus on is the delimiter between each value. The correct delimiter is a space. Commas, semicolons and anything other than a space are invalid.
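The delimiter rule can be checked mechanically. The helper below is a hypothetical sketch (not a Microsoft tool) that accepts a candidate peer list only when entries are separated by spaces:

```python
# Hypothetical validator for a manual NTP peer list string.
def valid_ntp_peer_list(peer_list: str) -> bool:
    """True only if the peer list uses space delimiters (no commas or semicolons)."""
    if any(bad in peer_list for bad in (",", ";")):
        return False
    return len(peer_list.split()) > 0

print(valid_ntp_peer_list("time1.example.com time2.example.com"))  # True
print(valid_ntp_peer_list("time1.example.com,time2.example.com"))  # False
```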

Sites and Services Sites:

·         Do not disable the Knowledge Consistency Checker (KCC).

·         Do not specify bridgehead servers.

·         Keep the replication schedule open as long as is practical.

·         Remove empty domains and consolidate any IP subnets associated therein to sites which have domain controllers.

·         Do not enable Universal Group Membership Caching in sites where a global catalog resides. Universal Group Membership Caching is set at the site level and affects all DCs in the site.  If one of the DCs is a GC, the remaining DCs will continue to cache Universal Group Membership, resulting in unpredictable authentication failures (dependent on which DC is chosen for authentication by the DC Locator service).

·         All sites should contain at least one global catalog server. In order to logon, a user account needs to be evaluated against Universal Group Membership which is stored on GCs. A site without GCs can cause logon failure as a result. A new option is to enable Universal Group Membership Caching in order not to require a GC in each site.

Sites and Services Connection objects:

·         Do not manually create connection objects. Do not manually modify default connection objects. If you leave the KCC to do its job, it will automatically create the necessary connection objects. However, any manually created connection object (INCLUDING an automatically created object that has been modified) will remain static. “Admin made it, so admin must know something I don’t know” is the general logic behind this. Only create manual connection objects if you know something the KCC doesn’t know.   Don’t confuse a connection object with a site link.

·         Connection objects should maintain default schedules. By default, connection objects will inherit their schedule based on the site link.  However, they can be changed directly.  Once you make a change to a connection object, it will no longer be managed by the KCC and will be treated as a manual connection object.

·         If you are cleaning up the connection objects, don’t delete more than 10 connections at a time or a Version Vector Join (vv join) might be required to re-join the DC.

·         Do not disable connection objects.

Sites and Services Site links:

·         Do not manually create site-links, let the ISTG create links based on KCC results.

·         All sites need to be contained in at least one Site Link in order to replicate to other sites.  Automatic Site Covering and DFS costing might be affected if sites are not within site links.

·         There must be 2 or more sites associated with a site link.  The deletion of a site may require the manual clean-up of the respective site link.

·         If two site links contain the same two remote sites, a suboptimal replication topology may result.

·         Do not disable site link transitivity.

Sites and Services Site subnets:

·         All infrastructure IP subnet ranges where servers or workstations log on from should be defined within AD Sites and Services. Sites consist of one or more subnets and allow clients to log on to a local domain controller quickly through the DC Locator process. If the subnet definition is missing from AD, the client will log on to any generic DC, which may be on the other side of the world. You can easily find subnets not defined in AD by reviewing the Netlogon.log file in the %systemroot%\debug folder. You can look for all DCs with event 5778 using EventComb and then selectively gather the various netlogon.log files.
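The Netlogon.log entries mentioned above can be summarized with a short script to list which subnets are missing from AD. This is a rough sketch: the exact log format varies between Windows versions, and the pattern below assumes lines of the form “… NO_CLIENT_SITE: &lt;computer&gt; &lt;ip&gt;”, grouped naively as /24 subnets:

```python
import re
from collections import Counter

# Matches the NO_CLIENT_SITE entries Netlogon writes for clients with no AD site
NO_SITE = re.compile(r"NO_CLIENT_SITE:\s+(\S+)\s+(\d+\.\d+\.\d+\.\d+)")

def missing_subnets(log_text: str, prefix_octets: int = 3) -> Counter:
    """Count siteless client IPs, grouped by their first N octets (/24 assumed)."""
    counts = Counter()
    for match in NO_SITE.finditer(log_text):
        ip = match.group(2)
        subnet = ".".join(ip.split(".")[:prefix_octets]) + ".0"
        counts[subnet] += 1
    return counts

sample = ("05/12 09:01:02 NO_CLIENT_SITE: PC0042 10.20.30.15\n"
          "05/12 09:01:09 NO_CLIENT_SITE: PC0077 10.20.30.99\n")
print(missing_subnets(sample))  # Counter({'10.20.30.0': 2})
```

The resulting counts indicate which subnet definitions are most urgently missing from AD Sites and Services.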

Sites and Services Inter-Site Change Notification:

·         Replication of AD is always pulled and not pushed. Within a site, when a change occurs, a DC will notify other DCs of the change so that they can pull the change. Between sites, this is not used and rather a schedule is used with the lowest time being 15 minutes.

 

This can be changed to work with Change Notification making inter-site replication much faster (but using more bandwidth as a consequence). It is recommended to only enable change notification on a link if it is a high speed link or a dedicated Exchange site.

 

To enable Change Notification, use adsiedit.msc and update the attribute called “Options” on the site link to a value of 1. You can find this object in the Configuration NC.

Replication Morphed folders:

·         A morphed folder refers to a folder that has been renamed by FRS to resolve a conflict. FRS identifies the conflict during replication, and the receiving member protects the original copy of the folder and renames (morphs) the later inbound copy of the folder. The morphed folder names have a suffix of “_NTFRS_xxxxxxxx,” where “xxxxxxxx” represents eight random hexadecimal digits.

 

If morphed folders are found within SYSVOL they should be fixed or they may not be linked to other AD components properly. Fixing a morphed folder involves removing or renaming the folder and its morphed pair and waiting for replication to complete. Then the correct folder is identified and renamed to its correct name or copied to its correct location.

Replication Lingering objects:

·         Lingering objects are objects that exist on 1 or more DCs but not on others. Lingering objects can occur if a domain controller does not replicate for an interval of time that is longer than the tombstone lifetime (TSL). The domain controller then reconnects to the replication topology. Objects that are deleted from the Active Directory directory service when the domain controller is offline can remain on the domain controller as lingering objects. This can be caused from recovering a DC from a virtual snapshot or from reviving a domain controller which has been off the network or not replicating with the domain for longer than the tombstone lifetime.

Replication GPT and GPC linkages:

·         Group Policy Objects have two parts consisting of the Group Policy Template (GPT) residing in the SYSVOL and the Group Policy Container (GPC) in Active Directory. When problems occur with SYSVOL replication or in the AD itself, the two halves can become unsynchronized. When this happens, Group Policy can cease to function or start behaving strangely.

 

To validate synchronization of GPTs and GPCs, use the Resource Kit tool gpotool.exe. In a healthy domain all policies should return a “Policy OK” result. When a policy fails to do so, some troubleshooting of SYSVOL replication and GPO version numbers is in order.

Replication Topology clean-up setting:

·         This should be enabled. This option controls the automatic clean-up of unnecessary connection objects and replication links. To enable it, run:

 

repadmin /siteoptions HubServer1 -IS_TOPL_CLEANUP_DISABLED

Replication Detect stale topology setting:

·         This site option is used by the KCC Branch Office Mode, which tells the KCC to ignore failed replication and not to try to find a path around it. To set it on a branch site:

 

repadmin /siteoptions BranchServer1 +IS_TOPL_DETECT_STALE_DISABLED

 

This should not be enabled on central or hub sites or replication failures can result. To undo it:

 

repadmin /siteoptions HubServer1 -IS_TOPL_DETECT_STALE_DISABLED

Replication KCC Intra-site topology setting:

·         If KCC intra-site topology generation is disabled, all replication connections must be maintained manually, which carries a high administrative burden. This is not recommended; instead, allow the KCC to dynamically build the topology every 15 minutes. To re-enable automatic intra-site topology generation, run:

 

repadmin /siteoptions HubServer1 -IS_AUTO_TOPOLOGY_DISABLED

 

For inter-site, you may choose to disable the KCC and create manual connection objects as follows:

 

repadmin /siteoptions HubServer1 +IS_INTER_SITE_AUTO_TOPOLOGY_DISABLED

Replication Inbound replication setting:

·         Disabling inbound replication should only be used for testing and should be reverted once complete. Leaving inbound replication disabled will eventually orphan the DC once the TSL has expired. To re-enable inbound replication, run the following (note the + and - switches on the repadmin options, which set or clear the option respectively):

 

repadmin /options site:Branch -DISABLE_INBOUND_REPL

Replication Outbound Replication setting:

·         Outbound replication is disabled automatically when a DC has not replicated within its tombstone lifetime (180 days). If it has been disabled manually you need to re-enable it as follows:

 

repadmin /options site:Branch -DISABLE_OUTBOUND_REPL

Replication Ignore schedules setting:

·         If you’ve configured replication on a schedule on a site link, this schedule will be ignored if the “Ignore IP Schedules” option is set on the IP Container.

 

This is NOT the GUI for “Options = 1” which enables inter-site change notification.

Replication Topology Minimum Hops setting:

·         By default, the KCC will create the intra-site replication topology so that no replication partner is more than 3 hops away. This 3-hop limit can be disabled as follows:

 

repadmin /siteoptions server1 +IS_TOPL_MIN_HOPS_DISABLED

 

To undo this, negate the option (-) as follows:

 

repadmin /siteoptions server1 -IS_TOPL_MIN_HOPS_DISABLED

Replication Non-default dSHeuristics setting:

·         The dSHeuristics attribute modifies the behaviour of certain aspects of the domain controllers. Examples of behavioral changes include enabling anonymous LDAP operations.   The dSHeuristics attribute is located at CN=Directory Service,CN=Windows NT,CN=Services,CN=Configuration,DC=<forest root domain>

 

The data is a Unicode string where each value may represent a different possible setting.

 

The default value is <not set>

 

For more information on dSHeuristics:

http://msdn.microsoft.com/en-us/library/cc223560(PROT.10).aspx

Replication Recycle bin deleted object lifetime setting:

·         Without knowing the Recycle Bin Deleted Object Lifetime, it’s not possible to know if a deleted object will be recoverable. By default, the value is set to Null and it uses the value of the TombStone Lifetime instead. The TSL is also set to Null by default and if it remains null, it uses the hard coded value of 60 (or 180 if the forest was deployed on 2003 SP1 or above). If the value is changed, ensure it is longer than your backup schedule to avoid having to do authoritative restores on deleted objects.
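The fallback chain described above can be sketched as follows (illustrative only; the 180-day default assumes a forest first created on Windows Server 2003 SP1 or later):

```python
def effective_deleted_object_lifetime(ms_ds_dol=None, tombstone_lifetime=None,
                                      forest_2003sp1_or_later=True):
    """Resolve the effective deleted-object lifetime in days.

    Both attributes default to None (<not set>) in the directory, in which
    case the hard-coded tombstone lifetime default applies."""
    if ms_ds_dol is not None:          # msDS-DeletedObjectLifetime wins if set
        return ms_ds_dol
    if tombstone_lifetime is not None:  # otherwise fall back to tombstoneLifetime
        return tombstone_lifetime
    return 180 if forest_2003sp1_or_later else 60  # hard-coded default

print(effective_deleted_object_lifetime())          # 180
print(effective_deleted_object_lifetime(None, 90))  # 90
```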

 

The location of the TombStone Lifetime and the Deleted Object Lifetime are both at CN=Directory Service,CN=Windows NT,CN=Services,CN=Configuration,DC=<forest root domain> with the following Attribute Names:

 

TombStone Lifetime (TSL): tombstoneLifetime

Deleted Object Lifetime: msDS-DeletedObjectLifetime

Replication Inbound Replication Connections:

·         Do not manually create inbound replication connections on an RODC. A manually created inbound replication connection from an RODC will result in failed replication as an RODC will never replicate outbound.

Read Only DCs Site links to RODC sites:

·         In a mixed environment of both 2003 and 2008 DCs, ensure the lowest cost site link for an RODC site is to a site with more than 1 writeable 2008 domain controller. The Filtered Attribute Set (FAS) is the definition of what an RODC may replicate (some attributes being filtered). It only recognises the FAS when replicating to a 2008 RWDC. If there is only 1 RWDC at the next hop which fails, the RODC may replicate with a 2003 DC including all attributes. It’s important to validate the site links, site link bridges and costs to ensure that there are at least 2 RWDCs each RODC can replicate from.

Read Only DCs RODCs per site:

·         Ideally each RODC site contains only a single RODC. RODCs cache users’ passwords. In the event of a disconnection from a RWDC, users can log on using the password cached on the RODC.

 

In the event that there are multiple RODCs in the Site for the same domain, it is unpredictable which RODC will respond to an Authentication Request. Therefore, user logon experience will be equally unpredictable.

Read Only DCs RODCs and RWDCs in the same site:

·         Typically, RODCs are placed in remote branch sites by themselves. In the event that there are both RWDCs and RODCs, there will be a noticeable and unpredictable user experience in the event of the RWDC being unavailable. This is especially true during WAN outages where passwords are not cached.

Read Only DCs Number of non-RODCs per domain.

·         It is recommended to always have more than a single read/write domain controller per domain. Although a single RWDC and many RODCs can exist in a domain, this is not recommended. RODCs can’t replicate outbound and in the event of failure of the RWDC an undesirable AD Restore would be required.

Read Only DCs AutoSiteCoverage:

·         AutoSiteCoverage enables a DC to cover a site where no DCs exist by registering the relevant SRV records for the site in question. Windows 2003 DCs don’t recognise RODCs and if AutoSiteCoverage is enabled on these DCs, they will register their SRV records in this site. This will result in users authenticating to the 2003 DC even though an RODC exists in the site.

 

To resolve this, either disable AutoSiteCoverage on the 2003 DC or install the RODC Compatibility Pack on the 2003 DCs.

 

HKLM\SYSTEM\CurrentControlSet\Services\Netlogon\Parameters

 

REG_DWORD called AutoSiteCoverage, value = 1 or 0


Event ID 1479 – This source server failed to generate the changes


Alert: This source server failed to generate the changes

Description: This directory service failed to retrieve the changes requested for the following directory partition. As a result, it was unable to send change requests to the directory service at the following network address.


Event ID: 1479

Active Directory Domain Services could not update the following object in the local Active Directory Domain Services database with changes received from the following source directory service. Active Directory Domain Services does not have enough database version store to apply the changes.

User Action

Restart this directory service. If this does not solve the problem, increase the size of the database version store. If you are populating the objects with a large number of values, or the size of the values is especially large, decrease the size of future changes.

Additional Data

Error value:

8573 The database is out of version store.

Resolution:

{MS has provided the resolution in this Link}

Note: Take Backup of Registry before changing

Registry Location:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\NTDS\Parameters

You need to add the registry value “EDB max ver pages” as a 32-bit DWORD, choosing a decimal value from the reference below:

9600 = 152 MB
12800 = 202 MB
16000 = 252 MB
19200 = 302 MB
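The mapping above can be kept as a small lookup table (values copied verbatim from the reference; the variable name is illustrative):

```python
# "EDB max ver pages" decimal value -> approximate version store size
EDB_MAX_VER_PAGES = {9600: "152 MB", 12800: "202 MB", 16000: "252 MB", 19200: "302 MB"}
print(EDB_MAX_VER_PAGES[12800])  # 202 MB
```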

Reboot the Server once the changes have been done.

Check the Event Viewer after the restart; you should see event 1394 in the Directory Service log.



FIM and MIM resources


Dump AD user objects with ‘Password never expires’, ‘Store password using reversible encryption’ and ‘Use Kerberos DES encryption types for this account’.


How do you check for these accounts?

Get-ADUser -Filter {UserAccountControl -band 0x200000}

That was easy!

User Accounts have different options that can be set to control security settings. In Active Directory Users and Computers most of these options can be found in the ‘Account’ tab of the user object dialogue box, under ‘Account options’:

 

 

In the above window, the user is set to use DES encryption. This setting is stored as part of a binary mask in the ‘UserAccountControl’ attribute of the user object. In the binary mask, each positional bit represents a different possible user account option that can be switched on or switched off. Like a light switch – when switched on, the option is active. These settings can be queried using PowerShell’s ‘binary And’ (-band) operator. The hexadecimal setting for DES encryption is 0x200000 and we use -band to check that it is present (switched on) in the binary mask.
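The bit test itself is plain integer arithmetic. Here is the same check sketched in Python (the sample userAccountControl values are hypothetical, not taken from a real directory):

```python
# UserAccountControl bit for 'Use Kerberos DES encryption types' (USE_DES_KEY_ONLY)
USE_DES_KEY_ONLY = 0x200000

def uses_des_only(user_account_control: int) -> bool:
    """True if the USE_DES_KEY_ONLY bit is switched on in the mask."""
    return bool(user_account_control & USE_DES_KEY_ONLY)

# 0x200200 = NORMAL_ACCOUNT (0x200) with USE_DES_KEY_ONLY (0x200000) switched on
print(uses_des_only(0x200200))  # True
print(uses_des_only(0x200))     # False - plain enabled account
```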

 

Here are other values you could check for with the aid of a filter and Get-ADUser:

Property Flag Value in Hexadecimal Value in Decimal
SCRIPT 0x0001 1
ACCOUNTDISABLE 0x0002 2
HOMEDIR_REQUIRED 0x0008 8
LOCKOUT 0x0010 16
PASSWD_NOTREQD 0x0020 32
PASSWD_CANT_CHANGE 0x0040 64
ENCRYPTED_TEXT_PWD_ALLOWED 0x0080 128
TEMP_DUPLICATE_ACCOUNT 0x0100 256
NORMAL_ACCOUNT 0x0200 512
INTERDOMAIN_TRUST_ACCOUNT 0x0800 2048
WORKSTATION_TRUST_ACCOUNT 0x1000 4096
SERVER_TRUST_ACCOUNT 0x2000 8192
DONT_EXPIRE_PASSWORD 0x10000 65536
MNS_LOGON_ACCOUNT 0x20000 131072
SMARTCARD_REQUIRED 0x40000 262144
TRUSTED_FOR_DELEGATION 0x80000 524288
NOT_DELEGATED 0x100000 1048576
USE_DES_KEY_ONLY 0x200000 2097152
DONT_REQ_PREAUTH 0x400000 4194304
PASSWORD_EXPIRED 0x800000 8388608
TRUSTED_TO_AUTH_FOR_DELEGATION 0x1000000 16777216
PARTIAL_SECRETS_ACCOUNT 0x04000000 67108864

 

You’re quite at liberty to combine them. This one tests for users who have the following set: ‘Password never expires’, ‘Store password using reversible encryption’ and ‘Use Kerberos DES encryption types for this account’.

$COMBINED_VALUE = 0x10000 + 0x0080 + 0x200000

Get-ADUser -Filter {UserAccountControl -band $COMBINED_VALUE}
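The combined-mask arithmetic can be verified outside PowerShell. A brief Python sketch follows (flag values taken from the table above); note that when -Filter runs against AD, -band is translated to the LDAP bitwise-AND matching rule (1.2.840.113556.1.4.803), which matches only accounts that have all the specified bits set:

```python
DONT_EXPIRE_PASSWORD = 0x10000        # 'Password never expires'
ENCRYPTED_TEXT_PWD_ALLOWED = 0x0080   # 'Store password using reversible encryption'
USE_DES_KEY_ONLY = 0x200000           # 'Use Kerberos DES encryption types'

combined = DONT_EXPIRE_PASSWORD | ENCRYPTED_TEXT_PWD_ALLOWED | USE_DES_KEY_ONLY
print(hex(combined))  # 0x210080 (2162816 decimal)

def matches_all(uac: int, mask: int) -> bool:
    """True only when every bit of the mask is set in the account's UAC value."""
    return (uac & mask) == mask

print(matches_all(0x210080, combined))          # True
print(matches_all(USE_DES_KEY_ONLY, combined))  # False - only one of the three bits
```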


WAP 2012 R2 highly available?


How do I configure WAP in Windows Server 2012 R2 highly available?

Web Application Proxy (WAP) in Windows Server 2012 R2 provides a reverse proxy service enabling services hosted internally on-premises to be published to the Internet. It does this while also integrating with Active Directory Federation Services (ADFS) to enable pre-authentication, single sign-on and more. If you need to use WAP in a production scenario, it’s important that the WAP service is highly available. This is achieved by deploying multiple WAP instances that use the same certificate and connect to the same ADFS instance to ensure consistent policy. Network load balancing is used to provide a virtual IP that joins the multiple WAP instances into a single highly available service. You can use either Windows NLB or a separate load-balancing solution.

Working with WAP: https://technet.microsoft.com/en-us/library/Dn584113.aspx

A step-by-step guide is available which walks through configuring two WAP servers using Windows NLB at http://blogs.technet.com/b/platformspfe/archive/2015/02/16/part-6-windows-server-2012-r2-ad-fs-federated-web-sso.aspx. As part of the same series it also walks through deploying a highly available ADFS implementation which is important as all parts of the solution need to be highly available to provide a highly available complete solution.

Do I need multiple NICs for Web Application Proxy?

No. Web Application Proxy has no requirements or preference around the number of network adapters. The decision to have multiple NICs depends only on your network topology and whether you need multiple network adapters to enable the required connectivity.

Best practice analyzer: https://technet.microsoft.com/en-us/library/Dn383651.aspx

Example of implementation: http://blogs.technet.com/b/platformspfe/archive/2015/02/16/part-6-windows-server-2012-r2-ad-fs-federated-web-sso.aspx

 

 

