XenDesktop 7 Site Configuration Database


XenDesktop 7, and some of the earlier XD editions as well, is based on the FlexCast Management Architecture, or FMA in short. Simply put, the FMA is primarily made up of Delivery Controllers and Agents; of course there’s more to it, but for now let’s leave it at that. Have a look here for a complete overview of the FMA. Delivery Agents are installed on all virtual and/or physical machines provisioned by XenDesktop 7. They communicate with (and register themselves to) the Delivery Controller(s), which in turn contact the license server and communicate with the central Site configuration database. Let’s have a closer look.

Site Configuration Database?

I’ll try and keep it simple… XenApp has its IMA (data) store; XenDesktop has its Site configuration (SQL) database as part of the FMA. I’m not going into too much detail here; just think of the IMA data store and the Site configuration database as one central location where all configuration information regarding the Farm or Site gets stored and pulled from when needed. In the case of IMA (XenApp), all servers have a Local Host Cache (LHC) in which a copy of the IMA data store information gets cached. XenDesktop doesn’t make use of an LHC, more on this in a bit. XenApp Data Collectors also cache live runtime data used for load balancing; with XenDesktop all of this is stored in the central Site configuration database. Read on.

A bit more on Virtual Agents

Unlike XenApp servers, Delivery Agents only communicate with the Site’s Delivery Controller(s) and do not need to access the Site configuration database or license server directly. Having said that, XenApp workers (session-host-only servers) offer the same sort of benefit: since they only host user sessions and will (or can) never be ‘elected’ as the Data Collector for their zone, they won’t get all the IMA store (database) information pushed into their LHC, which enhances overall performance. However, these workers still consist of the same bits and bytes as installed on a Data Collector (the full XenApp installation), whereas a Delivery Agent is ‘lighter weight’, as Citrix puts it, providing you with multiple performance benefits.

Some differences

Note that although Delivery Controllers are comparable to XenApp Data Collectors, there are some distinct differences. Sure, they both handle user authentication and load balancing, for example, but in very different ways. Data Collectors are part of zones, with each zone having its own Data Collector (there can be only one per zone). Multiple Data Collectors in your Farm means having multiple zones, and these Data Collectors (zones) need to be able to communicate with each other. With XenDesktop / Delivery Controllers this works differently: there are no more zones, just one big Site. Controllers are part of your Site and you can have multiple of them spreading the load, no problem.

If you have two or more Controllers as part of your Site infrastructure, they only communicate with the central Site configuration database and license server, not with each other, as opposed to the XenApp Data Collectors mentioned above. If you need or want to separate them like we did with zones, for geographical purposes perhaps, you will need to create two separate Sites and apply load balance policies at Site level instead of zone preference or load balance policies at zone level; not a necessity, but often preferable.

Local Host Cache

In a XenApp Farm the Data Collectors, and all other XenApp servers as well, with the exception of workers a.k.a. session-host-only servers, have something called a Local Host Cache (LHC) in which they cache a copy of the central IMA configuration database. This helps speed up the user authentication and application enumeration process. Data Collectors also hold and collect dynamic live runtime data used for making load balance decisions.

In an FMA, or XenDesktop, architecture Delivery Controllers don’t have an LHC, so if they need to authenticate a user or enumerate applications, for example, they will (always) need to contact the central Site configuration database for that information. The same goes for load balancing information; it doesn’t get stored locally. So if you have multiple Controllers configured within your Site, but on different physical (geographically separated) locations, keep in mind that they will all need to communicate with the same central database when a user logs in, when an application gets started, when load balance information is needed, etc…

Also… In a XenApp environment all servers, including the Data Collectors, will contact the IMA database (often referred to as the IMA store) every 30 minutes to update their LHC (except for so-called workers a.k.a. session-host-only servers, which need to be configured explicitly). In XenDesktop there’s no need for this, since the Delivery Controllers don’t have an LHC and get all their information directly from the central Site configuration database, live runtime data included. Although the above differences do raise some questions as far as Farm vs Site designs go, XenDesktop has been doing it this way for a few years now, so we should be OK.

The database: what’s in it (for me)?

All static configuration information, like Site policies, Machine Catalogs, Delivery Groups and published applications and/or (hosted) desktops, is stored in the central database, and so is all live dynamic runtime data, like who is connected to which resource and on which server, server load, and connection statuses used for load balance decision making. That makes this database very important. As of XenDesktop 7 the database can only be Microsoft SQL Server; no other types are supported. If for whatever reason this database becomes unreachable, running sessions will keep working, but new sessions cannot be established and configuration changes aren’t possible.
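Since everything hinges on that one database being reachable, it can be worth quickly verifying connectivity from each of your Controllers. Below is a minimal sketch, nothing Citrix specific, assuming Python is available on the Controller; the server name “SQL01” and port 1433 are placeholders for your own SQL Server and instance port.

```python
# Simple reachability check: does the SQL Server hosting the Site database
# answer on its TCP port? Host name and port below are placeholders.
import socket

def site_db_reachable(host: str = "SQL01", port: int = 1433, timeout: float = 5.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    if site_db_reachable():
        print("Site database reachable")
    else:
        print("Site database NOT reachable -- new sessions and configuration changes will fail")
```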

High Availability

It’s recommended to back up your database on a regular basis so it can be restored if necessary when the database server (or the database itself) fails. In addition there are several high availability solutions to consider. I got these from the E-Docs website:

SQL Mirroring… This is the recommended solution. Mirroring the database ensures that, should you lose the active database server, the automatic failover process happens in a matter of seconds, so that users are generally unaffected. This method, however, is more expensive than other solutions because full SQL Server licenses are required on each database server; you cannot use SQL Server Express edition for a mirrored environment.

Using the hypervisor’s high availability features… With this method, you deploy the database as a virtual machine and use your hypervisor’s high availability features. This solution is less expensive than mirroring as it uses your existing host software, and you can also use SQL Express. However, the automatic failover process is slower, as it can take time for a new machine to start for the database, which may interrupt the service to users.

SQL Clustering… Microsoft’s SQL clustering technology can be used to automatically allow one server to take over the tasks and responsibilities of another server that has failed. However, setting up this solution is more complicated, and the automatic failover process is typically slower than with alternatives such as SQL Mirroring.

AlwaysOn Availability Groups… A high availability and disaster recovery solution introduced in SQL Server 2012 that enables you to maximize availability for one or more user databases. AlwaysOn Availability Groups require that the SQL Server instances reside on Windows Server Failover Clustering (WSFC) nodes. A cool feature I still need to look into myself.
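To give you an idea of what the mirroring option (the first one above) looks like from the client side, here’s a small sketch. It assumes the pyodbc package and the SQL Server Native Client ODBC driver are installed; the server names SQL01/SQL02 and the database name are made up, and the exact failover-partner keyword depends on the driver you use, so double check it against your driver’s documentation.

```python
# Hedged sketch of connecting to a mirrored Site database: SQL01 is the
# principal, SQL02 the mirror. With the failover partner specified, the driver
# can fall back to SQL02 if SQL01 is unavailable at connect time.
import pyodbc

conn_str = (
    "Driver={SQL Server Native Client 11.0};"
    "Server=SQL01;"                 # principal (active) database server -- placeholder
    "Failover_Partner=SQL02;"       # mirror server -- placeholder
    "Database=CitrixXD7Site;"       # Site configuration database name -- placeholder
    "Trusted_Connection=yes;"
)

conn = pyodbc.connect(conn_str, timeout=15)
# Which server actually answered? Handy when testing a manual failover.
print(conn.execute("SELECT @@SERVERNAME").fetchone()[0])
conn.close()
```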


Using one of the above methods, in combination with regular (daily) backups, will ensure that your central Site configuration database will always (well…) be online, or will at least narrow the chances of running into any issues. Have a closer look at the above solutions and decide for yourself which one works best for you.
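As for those regular backups, something as simple as a scheduled sqlcmd call will do the trick. A hedged sketch, assuming sqlcmd is installed and using Windows authentication; the server name, database name and backup path are all placeholders:

```python
# Hedged sketch of a scheduled full backup via sqlcmd (run it from Task Scheduler).
# Server, database and backup path are placeholders; -E uses Windows authentication.
import subprocess

backup_sql = (
    "BACKUP DATABASE [CitrixXD7Site] "
    r"TO DISK = N'D:\Backups\CitrixXD7Site.bak' WITH INIT"
)
subprocess.run(["sqlcmd", "-S", "SQL01", "-E", "-Q", backup_sql], check=True)
```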

VDA in High Availability mode

Normally all connections run from your installed Agents through your Delivery Controllers. But what if your Controllers aren’t reachable? So your database is fine, but your Controller(s) aren’t, hmm… You can configure your Virtual Agents (VA) to operate in high availability mode; this way users can continue to use their desktops and installed applications. In high availability mode the VA will accept direct ICA connections from users instead of connections brokered by a Delivery Controller.

Although it’s hard to imagine this ever happening, it’s good to know what your options are :-) When enabled, if communication with all Delivery Controllers fails, high availability mode is initiated after a preset, configurable period of time. By default it kicks in after 300 seconds. High availability mode will be enabled for a maximum of 30 days in total. During this time the VA will attempt to register itself with the, or one of the, Controllers while your users continue to use their desktops and/or installed applications.

As soon as a Controller becomes available again, the VA will try to register itself, without any interruption to the user. From then on all further connections will be ‘brokered’ as normal. If during these 30 days the VA isn’t able to register itself with one of the Controllers, the desktop(s) will stop listening for connections and will no longer be available.
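Put differently, the VDA follows a simple loop: keep trying to register, fall back to accepting direct connections after the timeout, and give up entirely after 30 days. The sketch below is just my interpretation of that timing behaviour, not Citrix code; check_controllers() is a made-up placeholder for the real registration attempt.

```python
# Rough sketch of the VDA high availability timing behaviour -- not Citrix code.
import time

HA_REGISTRAR_TIMEOUT = 300        # seconds before HA mode kicks in (default)
HA_MAX_DURATION = 30 * 24 * 3600  # HA mode lasts 30 days at most

def check_controllers() -> bool:
    """Stub: return True once the VDA manages to register with a Controller."""
    return False

def vda_registration_loop():
    lost_contact_at = None        # when contact with all Controllers was lost
    ha_started_at = None          # when HA mode (direct ICA connections) began
    while True:
        if check_controllers():
            lost_contact_at = ha_started_at = None  # registered again: back to brokered connections
        else:
            now = time.time()
            lost_contact_at = lost_contact_at or now
            if ha_started_at is None and now - lost_contact_at >= HA_REGISTRAR_TIMEOUT:
                ha_started_at = now                 # start accepting direct ICA connections
            if ha_started_at is not None and now - ha_started_at >= HA_MAX_DURATION:
                break                               # 30 days without registering: stop listening
        time.sleep(60)                              # keep retrying registration
```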

As per Citrix: High availability mode is suitable only for use with dedicated desktops, where the mapping between the user and the DA is known. You cannot configure high availability mode for use with pooled desktops.

You need to

To enable high availability mode you need to do two things:

First, set the HighAvailability and HaRegistrarTimeout registry keys. These keys need to be created manually after the Virtual Agent is installed. With the HighAvailability key you enable or disable high availability mode for the VA: set it to 1 to enable or 0 to disable. The HaRegistrarTimeout key lets you configure the amount of time the VA will keep trying to register itself with a Delivery Controller, after it loses its connection, before initiating high availability mode; see the registry sketch after these two steps.

Second, you need to provide your users with an ICA launch file that will enable them to make direct ICA connections. You have to create an ICA file for each user who requires this feature; Citrix does not create or distribute ICA files for this purpose.
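For the first step, here’s a minimal sketch using Python’s standard winreg module; run it elevated on the VDA after the Agent has been installed. The registry path shown is an assumption on my part, so verify it, and the exact value names, against the E-Docs article linked further down before using it.

```python
# Hedged sketch: create the two HA-related registry values on a VDA.
# The key path below is an assumption -- confirm it against the E-Docs article.
import winreg

VDA_KEY = r"SOFTWARE\Citrix\VirtualDesktopAgent"   # assumed location of the VDA settings

with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, VDA_KEY, 0,
                        winreg.KEY_SET_VALUE) as key:
    # 1 = enable high availability mode on this VDA, 0 = disable it
    winreg.SetValueEx(key, "HighAvailability", 0, winreg.REG_DWORD, 1)
    # Seconds the VDA keeps trying to register before HA mode kicks in (default 300)
    winreg.SetValueEx(key, "HaRegistrarTimeout", 0, winreg.REG_DWORD, 300)
```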


Have a look here; it will lead you to the E-Docs article explaining how to create and set the appropriate registry keys and how to create an ICA launch file. There are, however, a few limitations to using the VA high availability feature; these include:

User roaming. If a user device is already connected to the desktop, users are unable to connect from a different user device.

Delivery Controller-originated policies. Policies originating on the Controller, such as those governing client drive mapping and access to the clipboard, will not function as there is no connection to the Controller. Policies originating from the Domain Controller and Local Group Policy are unaffected. Note that policies from a previous registration persist and are applied, so outdated policies might take effect.

Power management. When the desktop powers up, it attempts to register, fails and, after the timeout, enters high availability mode.

NetScaler Gateway and Remote Access.

Conclusion

Be aware that, unless proper action is taken, your central SQL configuration database might be a single point of failure; it seems obvious, but still… Be sure to spend some time understanding how this works and get to know the FMA architecture; it can save you a lot of trouble. Browse through some of the E-Docs articles available, it’s all explained in detail.

