Independent Management Architecture: Terms and Concepts Barry Flanagan, Southeast Systems Engineer
New Terms • IMA: Independent Management Architecture • Data Store: Central configuration database • LHC: Local Host Cache (Persistent data cache that exists on each server) • Data Collector: Manages dynamic data and client enumeration/resolution (replaces ICA Master Browser) • Zone: Deliberate grouping of MetaFrame XP servers, each with its own Data Collector • CMC: Citrix Management Console (replaces MetaFrame 1.8 administration tools)
What is IMA? Why is it important? • IMA… • Is a TCP-based, event driven messaging bus, used by MetaFrame XP servers. • Is a modular and easily extensible subsystem capable of supporting current and future MetaFrame products and tools. • Overcomes the scalability constraints of the MetaFrame 1.8 platform, allowing MetaFrame XP to scale environments to new levels. • Provides capability to administer any farm from a central tool (CMC) that doesn’t have to run on a MetaFrame server.
Independent Management Architecture [Diagram: Citrix Management Console administering MetaFrame XP servers (on NT 4.0 TSE and Windows 2000) through IMA; subsystems for Load Management, Application Packaging & Delivery, and System Monitoring & Analysis; Central Data Store on SQL, Oracle, or Access]
MetaFrame Server Farms • MetaFrame 1.8: • Server Farms in MetaFrame 1.8 are a collection of servers on a given broadcast segment that are managed as a single unit. • Server Farms in MetaFrame 1.8 may also be defined by sharing a common ‘Application Set’. • MetaFrame XP: • The Server Farm in MetaFrame XP defines the scope of management as well as the ‘Application Set’. • Server Farms in MetaFrame XP are designed to operate across segments and are managed through the Citrix Management Console.
MetaFrame 1.8/ICA Browser • MetaFrame 1.8/ICA Browser Attributes • Server Farms cannot span segments. • Each segment has ONE ICA Master Browser. • The ICA Master Browser stores dynamic data for the segment and handles Enumeration/Resolution for ICA clients. • Persistent data is stored in the registry (farm membership, licenses, published applications, etc.). [Diagram: Segment 1 (10.1.1.x) hosting Farm 1 (2, 3) and Segment 2 (10.1.2.x) hosting Farm 4 (5, 6); each segment has its own ICA Master Browser, administered with MFAdmin, PAM, etc.]
MetaFrame 1.8/ICA Browser • MetaFrame 1.8/ICA Browser Attributes • Persistent data is read by the ICA browser/PN Service at startup. • Cross-server configuration tools read/write to the registry on all servers. • Servers communicate via UDP broadcasts, remote registry calls, RPCs, etc.
MetaFrame XP/IMA • MetaFrame XP/IMA Attributes • Server farms can span segments and can contain multiple zones. • Each zone has ONE Data Collector. • Data Collectors store dynamic data and handle Enumeration/Resolution for ICA clients. • Persistent farm data is stored in a shared, persistent Data Store. [Diagram: one Server Farm spanning Zone 1 and Zone 2; each zone has its own Data Collector (DC), every server keeps a Local Host Cache (LHC), all zones share one Data Store (DS), and the CMC connects to the farm]
MetaFrame XP/IMA • MetaFrame XP/IMA Attributes • Persistent data is read from the DS at startup and cached in the Local Host Cache. • The management tool communicates via IMA with the Data Store and member servers. • Servers communicate via IMA (TCP).
Data Store • Attributes of the MetaFrame XP Data Store (DS) • The DS is a repository (database) which contains persistent, farm-wide data, such as member servers, licenses in farm, zone configs, printers/drivers, published apps, load evaluators, trust relationships, etc. • Each MetaFrame XP farm shares one Data Store. • All information in the DS is stored in an encrypted binary format (except indexes). • A farm can operate for 48 hours if DS is unavailable, then licenses time out and no new users can connect. • A DS can be an Access, MS SQL, or Oracle database. • A DS can be configured for either ‘Direct’ or ‘Indirect’ access.
Data Store in ‘Direct’ Mode • Attributes of Direct Mode • Uses a Microsoft SQL 7/2000 or Oracle 7.3.4/8.0.6/8.1.6 database. • Servers initialize directly from the DS via ODBC. • Servers maintain an open connection to the database for consistency checks.
Data Store in ‘Indirect’ Mode • Attributes of Indirect Mode • Uses a JET 4.x, Microsoft SQL 7/2000, or Oracle 7.3.4/8.0.6/8.1.6 database. • Member servers communicate through the ‘IMA host’ server to read/write to the data store. • If using a JET database, MF20.MDB lives on the ‘IMA host’ server.
Local Host Cache (LHC) • Attributes of the Local Host Cache • A subset of the Data Store, stored on each individual server (IMALHC.MDB). • Contains basic info about servers in the farm, published apps and their properties, trust relationships, and server-specific configs (product code, SNMP settings, load evaluators, etc.). • Used for initialization if the DS is down. • Used for ICA client application Enumeration.
Data Collectors • Attributes of Data Collectors • A DC stores dynamic information about a farm, such as servers up/down, logons/logoffs, disconnects/reconnects, licenses in use/released, server/application load, etc. • There is one DC for each Zone.
Data Collectors • Attributes of Data Collectors • DCs handle all ICA client Resolution activity and should handle all Enumeration activity. ANY DC can Resolve ANY app for ANY client (DCs are peers in a multi-zone implementation). • DCs distribute most persistent data changes to member servers for LHC updates.
Zones • Attributes of Zones • A logical, centrally configurable grouping of MetaFrame XP servers. • Each Zone has one Data Collector (DC). • Zones can span IP networks (LAN, WAN). • Zones aren’t necessarily tied to an IP segment (they are only by default).
Zones • Attributes of Zones • Zones are useful for partitioning/controlling persistent data update traffic and for distributing ICA client Enumeration/Resolution traffic. • A Zone can contain up to 256 hosts without a registry modification. • In most cases, fewer zones are better!
Citrix Management Console (CMC) • Attributes of the CMC • The central management tool where 98% of farm configuration/maintenance occurs. • An extensible framework that allows different tools to ‘snap in’. • Doesn’t need to run on a MetaFrame server.
Citrix Management Console (CMC) • Attributes of the CMC • Works through the IMA service (destination port 2513) to access the DS, DCs, and member servers. • Should be run through a DC that has local access to the DS. • Is the most read/write-intensive user of the DS.
Hardware and Software Configuration • Load up on processors and memory. • Keep home directories on a separate server. • Use roaming profiles in multi-server environments (see Q161334, Guide to Windows NT 4.0 Profiles and Policies). • NTFS partitions only (at least 4096-byte clusters). • Install only required network components and protocols. • Change drive letters at installation time only.
Hardware and Software Configuration • For 4- and 8-processor systems, use one controller for the OS and one for applications and temporary files. • Dedicate a drive to the page file for best performance. • Increase the Maximum Registry Size to 100 MB. • See MF Install and Tuning Tips for more info.
Selecting a Data Store • Direct Mode • IMA queries the database directly. • Microsoft SQL Server 7 or 2000 • Oracle 7.3.4, 8.0.6, or 8.1.6 • Indirect Mode • IMA asks another server to query the database on its behalf, gathering its DS information indirectly from a server that accesses the DS directly.
Data Store Info Indirect Mode • On the first server installed, select Use a local database as the data store to enable Indirect mode with Access (Direct mode is not available for Access). All subsequent servers joining the farm must be installed with the Connect to a data store set up locally on another server option. • The first server will be the Zone DC by default. • The server hosting the Access DS is the only server that writes to the Access database. • The server hosting the DS in Access acts as a proxy for all other servers. • This overcomes the file locking and corruption problems common with Access.
Data Store Info Using Access • Approximately 20 MB of disk space should be available for every 100 servers in the farm. • 32 MB of additional RAM is recommended if the MetaFrame XP server will also host connections. • MDAC 2.5 SP1 must be installed on TSE; stop the TS Licensing service before installing MDAC, then reboot. • The database is %ProgramFiles%\Citrix\Independent Management Architecture\MF20.MDB (System must have read/write access). • The default user name/password is citrix/citrix. To change the password on the database, use the dsmaint config /pwd:newpassword command with the IMA service running.
Data Store Info Using Access • Each time the IMA service is stopped gracefully, the existing mf20.mdb file is backed up, compacted, and copied as mf20.unk. Each time the IMA service starts successfully, it deletes any existing instance of mf20.bak and then renames the mf20.unk file to mf20.bak. This file is used when the dsmaint recover command is executed. • If the server runs out of disk space on the drive where the mf20.mdb file is stored, the automatic backup stops functioning. Always ensure there is enough disk space to hold 3 times the size of the mf20.mdb. • Perform backup of DS with DSMAINT BACKUP
Data Store Info Using SQL • Approximately 20 MB of disk space for every 100 servers in the farm. The disk space used may increase if there are a large number of published applications in the farm. • The temp database should be set to Auto Grow on a partition with at least 1 GB of free space (4 GB is recommended if it is a large farm with multiple print drivers). • Verify that enough disk space exists on the server to support growth of both the temp database and the farm database. • Use MDAC 2.5 SP1 on TSE. Do not use MDAC 2.6 with SQL 2000. Known bug.
Data Store Info Using SQL • When using Microsoft SQL Server in a replicated environment, be sure to use the same user account on each Microsoft SQL Server for the DS. • Each MetaFrame XP farm requires a dedicated database. However, multiple databases may be running on a single Microsoft SQL Server. • The MetaFrame XP farm should not be installed in a database that is shared with any other client-server applications. • Databases should have the Truncate log on Checkpoint option set to keep log space controlled. • Ensure DS is backed up whenever a change is made via CMC.
Data Store Info Using SQL • For high security environments, Citrix recommends using NT Authentication only. • The account used for the DS connection should have db_owner (DBO) rights on the database that is being used for the DS. • If tighter security is required, after the initial installation of the database as DBO, the user permissions may be modified to be read/write only. • If installing more than 256 servers in a farm, increase number of worker threads available for database.
Data Store Info Using Oracle • Approximately 20 MB of disk space for every 100 servers in the farm. The space used may increase if there are a large number of published applications in the farm. • The Oracle Client (version 8.1.55 or 8.1.6) must be installed on the terminal server prior to the installation of MetaFrame XP. The 8.1.5 and 8.1.7 clients are not supported with MetaFrame XP. • The server should be rebooted after installation of the Oracle Client, or the MetaFrame XP installation fails to connect to the DS.
Data Store Info Using Oracle • Oracle8i version 8.1.6 or later is recommended. However, Oracle7 (7.3.4) and Oracle8 (8.0.6) are supported for the MetaFrame XP platform. • Creating a separate tablespace for the DS simplifies backup and restoration operations. • A small amount of data is written to the system tablespace. If experiencing installation problems, verify that the system tablespace is not full. • Using Shared/MTS (Multi-Threaded Server) mode may reduce the number of processes in farms over 200 servers. Consult the Oracle documentation on configuring the database to run in MTS mode.
Data Store Info Using Oracle • Oracle for Solaris supports Oracle authentication only. • The Oracle user account must be the same for every server in the farm because all servers share a common schema. • This account needs the Connect and Resource permissions.
Dedicating a server for Indirect Mode • May be necessary when the following occurs: • Delays in using the CMC. • Increased IMA service start times due to high CPU utilization on the server hosting the DS. • Cut the maximum user load to one-half to two-thirds of a full load to improve performance.
Bandwidth Requirements • In a single server configuration, a single server reads approximately 275 KB of data from the DS. The amount of data read is a function of the number of published applications in the farm, the number of servers in the farm, and the number of printers in the farm. The number of kilobytes read from the DS during startup can be approximated by the following formula: KB Read = 275 + 5*Srvs + 0.5*Apps + 92*PrintD Where: Srvs = Number of servers in the farm Apps = Number of published applications in the farm PrintD = Number of print drivers installed on the member server
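The startup-read formula above is easy to turn into a quick estimator (a minimal sketch; the function and parameter names are ours, not Citrix's):

```python
def ds_startup_read_kb(servers: int, apps: int, print_drivers: int) -> float:
    """Approximate KB read from the Data Store at IMA service startup,
    per the formula KB = 275 + 5*Srvs + 0.5*Apps + 92*PrintD."""
    return 275 + 5 * servers + 0.5 * apps + 92 * print_drivers

# Example: a 50-server farm with 100 published apps and 10 print drivers
print(ds_startup_read_kb(50, 100, 10))  # 1495.0
```

So a member server in that farm would read roughly 1.5 MB from the DS on every IMA service start, which is why print-driver-heavy farms are steered toward SQL or Oracle later in this deck.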
Data Store Info • High-latency WAN concerns • Without replicated databases, maintenance may create situations where the DS is locked for extensive periods of time. • In a high-latency situation, reads should not adversely affect local connections, but the remote site may experience slow performance. • Replicated databases • Improve performance if there are enough MetaFrame servers to justify the cost. • Database replication consumes bandwidth, but this is controlled through the database chosen, not MetaFrame.
Data Store Info • Access is best used for centralized farms. • Access supports only indirect mode for other servers and, as such, will have slower performance than a direct-mode DS on large farms. • Database replication is not supported with Access. • Databases supporting replication should be used when deploying large farms across a WAN. • Server farms with over 100 servers should use SQL or Oracle to maintain acceptable performance.
Data Store Info • Farms using many printer drivers and scheduled replication should use SQL or Oracle. • Farms that cycle-boot large groups of servers simultaneously should use SQL or Oracle in direct mode to minimize IMA service start time. • Microsoft SQL Server and Oracle are very similar in performance; in the Citrix Test eLabs both database servers performed similarly with large farms. The choice between the two should be based on database feature sets, in-house expertise, management tools, and licensing costs rather than performance numbers. • Use Microsoft Cluster Service with SQL Server, or Oracle Parallel Server with Oracle, for fault tolerance.
Data Store Info • DS query interval • Key: HKLM\Software\Citrix\IMA\DCNChangePollingInterval • Default value: 600000 milliseconds REG_DWORD: 0x927C0 • If a member server is unable to contact the data store for 48 hours, licensing stops functioning on that member server. • The CMC always connects directly to the DC. • For any change greater than 10 KB in size, all member servers in the farm are sent a change notification and query the DS for the change.
Data Distribution with Data Collectors • Server 1 writes information to the DS • Server 1 sends change notification to its zone DC • Zone DC distributes change notification to all member servers in its zone • Other zone DC’s receive notification and distribute it to all member servers within their respective zones • All member servers receive the notification and update their LHC as requested
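The five distribution steps above can be sketched as a toy model (class and function names are illustrative only, not Citrix APIs; step 1, the write to the Data Store itself, is omitted):

```python
# Toy model of IMA change-notification flow across zones.
class Server:
    def __init__(self, name):
        self.name = name
        self.lhc = {}                    # stand-in for the Local Host Cache

    def apply(self, change):
        self.lhc.update(change)          # step 5: member updates its LHC


class Zone:
    def __init__(self, dc, members):
        self.dc = dc
        self.members = members

    def distribute(self, change):        # steps 3/5: a DC fans out to its zone
        for server in [self.dc] + self.members:
            server.apply(change)


def publish_change(origin_zone, peer_zones, change):
    origin_zone.distribute(change)       # steps 2-3: origin DC notifies its zone
    for zone in peer_zones:              # step 4: peer DCs relay to their zones
        zone.distribute(change)


zone1 = Zone(Server("dc1"), [Server("s1"), Server("s2")])
zone2 = Zone(Server("dc2"), [Server("s3")])
publish_change(zone1, [zone2], {"app:Notepad": "published"})
print(zone2.members[0].lhc)  # {'app:Notepad': 'published'}
```

The point of the fan-out is that member servers never talk to foreign DCs directly; only the zone DCs exchange inter-zone traffic.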
Data Distribution with Data Collectors • Inter-zone connection formula: N * (N-1)/2, where N is the number of zones in the farm • IMA ping configuration parameter • Key: HKLM\Software\Citrix\IMA\Runtime\KeepAliveInterval • Default value: 60000 milliseconds REG_DWORD: 0xEA60 • Zone DC synchronization parameter • Key: HKLM\Software\Citrix\IMA\Runtime\Gateway\ValidationInterval • Default value: 300000 milliseconds REG_DWORD: 0x493E0 • Host address cache size parameter • Key: HKLM\Software\Citrix\IMA\Runtime\MaxHostAddressCacheEntries • Default value: 256 entries REG_DWORD: 0x100
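The inter-zone connection formula is a one-liner, and the decimal/hex REG_DWORD defaults quoted in this section can be cross-checked the same way (a sketch; the dictionary labels are ours, taken from the registry key names on the slides):

```python
def interzone_connections(zones: int) -> int:
    """DC-to-DC connections in a farm with N zones: N * (N-1) / 2."""
    return zones * (zones - 1) // 2

# Decimal value / hex REG_DWORD pairs for the IMA defaults in this section.
defaults = {
    "DCNChangePollingInterval": (600_000, 0x927C0),   # ms
    "KeepAliveInterval": (60_000, 0xEA60),            # ms
    "ValidationInterval": (300_000, 0x493E0),         # ms
    "MaxHostAddressCacheEntries": (256, 0x100),       # entries
}
for name, (decimal, hexval) in defaults.items():
    assert decimal == hexval, name                    # the slides' pairs agree

print(interzone_connections(4))  # 6
```

A four-zone farm therefore maintains six DC-to-DC links, which is one reason fewer zones are generally better.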
Data Distribution with Data Collectors • Bandwidth requirements between zones • Connect: ~3 Kb • Disconnect: ~2.25 Kb • Reconnect: ~2.91 Kb • Logoff: ~1.50 Kb • CMC: ~2.23 Kb • Application publishing: ~9.07 Kb
Data Collector Elections • Each zone is responsible for electing its own data collector (DC). By default, the first server in the farm becomes the DC and is set to Most Preferred. If the setting is changed from Most Preferred, another election will take place. DC elections are won based on the following criteria: 1. Highest Master Version Number (1 for all MetaFrame XP 1.0 servers) 2. Lowest Master Ranking (1=Most Preferred – 4=Not Preferred) 3. Highest Host ID (0-65536 randomly assigned at installation)
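The three election criteria above amount to a lexicographic sort key, sketched below (field names are ours; the actual IMA election protocol is more involved than a local sort):

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    version: int   # Master Version Number (higher wins)
    ranking: int   # 1 = Most Preferred ... 4 = Not Preferred (lower wins)
    host_id: int   # randomly assigned at installation (higher wins)

def elect(candidates):
    # Highest version first, then lowest ranking, then highest Host ID.
    return max(candidates, key=lambda c: (c.version, -c.ranking, c.host_id))

servers = [Candidate("A", 1, 2, 40000),
           Candidate("B", 1, 1, 100),
           Candidate("C", 1, 1, 60000)]
print(elect(servers).name)  # C: same version, B and C both rank 1, C has higher Host ID
```

Since all MetaFrame XP 1.0 servers share version 1, elections in practice come down to the configured ranking, with the random Host ID as the final tie-breaker.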
Data Collector Elections • To view a server’s ranking, use Queryhr (copy it from support\debug\i386 on the CD). • DC elections are triggered in the following situations: • A member server loses contact with the DC. • The DC goes offline. • A farm server is brought online. • The querydc -e command is executed to force an election. • Zone configurations are changed (e.g., zone name, election preference, adding or removing servers).
Data Collector Elections • When a new DC is elected, all servers in the zone send a complete update to the new DC. The following formula can be used to approximate the amount of data in bytes sent by all servers in the zone to the new zone DC: • Bytes = (11000 + (1000 * Con) + (600 * Discon) + (350 * Apps)) * (Srvs - 1) • Where: • Con = Average number of connected sessions per server • Discon = Average number of disconnected sessions per server • Apps = Number of published applications in the farm • Srvs = Number of servers in the zone
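The election-update formula above as a quick estimator (a sketch; function and parameter names are ours):

```python
def election_update_bytes(avg_connected: int, avg_disconnected: int,
                          apps: int, zone_servers: int) -> int:
    """Approximate bytes sent by all servers in a zone to a newly elected DC:
    (11000 + 1000*Con + 600*Discon + 350*Apps) * (Srvs - 1)."""
    per_server = (11_000 + 1_000 * avg_connected
                  + 600 * avg_disconnected + 350 * apps)
    return per_server * (zone_servers - 1)

# Example: 10 connected and 2 disconnected sessions per server,
# 50 published apps, 20 servers in the zone
print(election_update_bytes(10, 2, 50, 20))  # 754300
```

Roughly 750 KB hits the new DC in this example, which is why frequent elections on a slow link are worth avoiding.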
Local Host Cache • Attributes of the Local Host Cache • A subset of the Data Store, stored on each individual server (IMALHC.MDB). • Contains basic info about servers in farm, pub. apps and properties, trust relationships, server specific configs (product code, SNMP settings, load evaluators, etc.). • Used for initialization if DS is down. • Used for ICA client application Enumeration.
Local Host Cache • On the first startup of the member server, the LHC is populated with a subset of information from the DS. From then on, the IMA service is responsible for keeping the LHC synchronized with the DS. The IMA service performs this task through change notifications and periodic polling of the DS. • In the event the DS is unreachable, the LHC contains enough information about the farm to allow normal operations for up to 48 hours. • During this “grace” period, the server continues to service requests while the IMA service attempts to connect to the DS periodically (based on the DS query interval as described in the Data Store Activity section of the IMA Architecture chapter of this document). If the DS is unreachable for 48 hours, the licensing subsystem fails to verify licensing and the server stops taking incoming connections.
Local Host Cache • Because the LHC holds a copy of the published applications and NT trust relationships, ICA Client application enumeration requests can be resolved locally by the LHC. The member server must still contact the zone DC for LM resolutions. • If the IMA service is currently running, but information in the CMC appears to be incorrect, a refresh of the LHC can be manually forced by executing dsmaint refreshlhc from the command prompt of the affected server.
Local Host Cache • If the IMA service does not start, the cause may be a corrupt LHC. To recreate it: 1. Verify the DS is available before continuing, because this procedure causes the LHC to be reloaded directly from the DS. 2. Stop the IMA service on the MetaFrame server. 3. Launch the ODBC Data Source Administrator. On Windows 2000, choose Control Panel | Administrative Tools | Data Sources (ODBC). On TSE, choose Control Panel | ODBC Data Sources. 4. Select the File DSN tab.
Local Host Cache 5. Open the imalhc.dsn file, located in %ProgramFiles%\Citrix\Independent Management Architecture by default. 6. Once that file is selected, click Create on the ODBC Setup screen. 7. Enter any name besides imalhc for the new LHC database. Optionally, rename the old imalhc and reuse the name. 8. Exit the ODBC Data Source Administrator.
Local Host Cache 9. Modify the following registry value: Key: HKEY_LOCAL_MACHINE\SOFTWARE\Citrix\IMA\RUNTIME Value: PSRequired REG_DWORD: 0x1 10. Restart the IMA service. Note: The DS server must be available for this procedure to work. If the DS is not available, the IMA service fails to start until the DS is available.