XenServer Storage Integration Deep Dive
Agenda
• XenServer 5.5 Storage Architecture
• Multipathing
• Vendor Integration
• StorageLink
Citrix XenServer & Essentials 5.5 Family
• Platinum Edition: Stage Management, Lab Management, Provisioning Services (physical + virtual), Workflow Studio Orchestration
• Enterprise Edition: Provisioning Services (virtual), Workload Balancing (NEW), StorageLink™ (NEW), High Availability, Performance Monitoring, Live Migration (XenMotion), Active Directory Integration (NEW), Generic Storage Snapshotting
• Free Edition: XenServer, XenCenter Management, 64-bit Windows and Linux workloads, Shared Storage (iSCSI, FC, NFS), no socket restriction
Storage Technologies (diagram)
• XenServer 5.5 iSCSI / FC: Storage Repository = LUN → LVM Volume Group → LVM Logical Volumes, each holding a VHD header plus the VM virtual disk (LVHD)
• XenServer 5.0 iSCSI / FC: Storage Repository = LUN → LVM Volume Group → raw LVM Logical Volumes used directly as VM virtual disks
• XenServer 5.0 / 5.5 NFS / EXT3: Storage Repository = filesystem containing .VHD files
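For orientation, a minimal sketch of how an LVM-over-iSCSI SR of this kind is typically created from the CLI; the target address, IQN and SCSI ID below are placeholders, not values from this deck:

# Hedged sketch: create an LVM-over-iSCSI Storage Repository (placeholder values)
xe sr-create host-uuid=<host_uuid> shared=true content-type=user \
   type=lvmoiscsi name-label="iSCSI SR" \
   device-config:target=192.168.1.201 \
   device-config:targetIQN=iqn.example:target1 \
   device-config:SCSIid=<scsi_id>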
LVHD – XenServer 5.5
• Replaces LVM for SRs
• Hosts VHD files directly on LVM volumes
• Best of both worlds: features of VHD, performance of LVM
• Adds advanced storage features: fast cloning, snapshots
• Fast and simple upgrade, backwards compatible
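Because LVHD keeps the VHD format on LVM, fast clones and snapshots are driven through the normal xe CLI; a minimal sketch with placeholder VM names:

# Hedged sketch: LVHD fast clone and snapshot via the xe CLI (VM names are placeholders)
xe vm-snapshot vm=my-template new-name-label=my-template-snap
xe vm-clone vm=my-template new-name-label=my-clone-1   # source VM should be halted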
Why use Multipathing?
• Path redundancy to storage
• Performance increase through load-sharing algorithms
• Many Fibre Channel environments have multiple paths by default
(Diagram: XenServer with FC HBA 1 and FC HBA 2, connected through redundant FC switches to storage controllers 1 and 2 of the storage subsystem, each path presenting LUN 1)
Enabling Multipathing
• xe host-param-set other-config:multipathing=true uuid=host_uuid
• xe host-param-set other-config:multipathhandle=dmp uuid=host_uuid
• Note: Do not enable multipathing by any other means (e.g. by configuring the Dom-0 Linux multipath tools directly)!
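A minimal sketch of the usual enablement sequence, assuming the host is taken out of service first; the host UUID is a placeholder:

# Hedged sketch: enable DMP multipathing on one host (placeholder UUID)
xe host-disable uuid=<host_uuid>                                    # take the host out of service first
xe host-param-set other-config:multipathing=true uuid=<host_uuid>
xe host-param-set other-config:multipathhandle=dmp uuid=<host_uuid>
xe host-enable uuid=<host_uuid>                                     # return the host to service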
XenServer supports two multipathing technologies: DMP (device-mapper multipath) and MPP RDAC. See details: http://support.citrix.com/article/ctx118791
DMP vs. RDAC MPP
• Check if RDAC MPP is running: lsmod | grep mppVhba
• "multipath -ll" shows the MD device in its output (if DMP is active)
• Use only one technology:
• When RDAC MPP is running, use it
• Otherwise use DMP
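A minimal sketch that consolidates the check described above; interpretation of the output is hedged and varies by version:

# Hedged sketch: determine which multipathing stack is active
lsmod | grep mppVhba      # any output means the RDAC MPP driver is loaded -> use MPP RDAC
multipath -ll             # lists the device-mapper multipath topology when DMP is active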
MPP RDAC: Path check
• mpputil
Lun #0 - WWN: 600a0b80001fdf0800001d9c49b0caa1
----------------
LunObject: present          DevState: OPTIMAL
Controller 'A' Path
--------------------
Path #1: LunPathDevice: present   DevState: OPTIMAL
Path #2: LunPathDevice: present   DevState: OPTIMAL
Controller 'B' Path
--------------------
Path #1: LunPathDevice: present   DevState: OPTIMAL
Path #2: LunPathDevice: present   DevState: OPTIMAL
DMP: Path check
• Monitoring using XenCenter
• Monitoring using the CLI; command: multipath -ll
iSCSI with Software Initiator
• IP addressing is done by XenServer Dom-0
• Multipathing is also done by XenServer Dom-0
• Dom-0 IP configuration is therefore essential
(Diagram: XenServer Dom-0 NICs connect through the storage-LAN switches to both storage controllers of the storage subsystem, each path presenting LUN 1)
Best practice configuration: iSCSI storage with multipathing
• Separation of subnets, also at the IP level (subnet mask 255.255.255.0)
(Diagram: XenServer NIC 1 (192.168.1.10) on subnet 1 reaches storage LAN adapter 1 port 1 (192.168.1.201) and adapter 2 port 1 (192.168.1.202); NIC 2 (192.168.2.10) on subnet 2 reaches adapter 1 port 2 (192.168.2.201) and adapter 2 port 2 (192.168.2.202); both controllers present LUN 1)
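A minimal sketch of how the two Dom-0 storage interfaces in this layout could be given static IPs from the CLI; the PIF UUIDs are placeholders, and on 5.5 the same can be done from XenCenter:

# Hedged sketch: static IPs on the two Dom-0 storage NICs (placeholder UUIDs)
xe pif-reconfigure-ip uuid=<pif_uuid_nic1> mode=static IP=192.168.1.10 netmask=255.255.255.0
xe pif-reconfigure-ip uuid=<pif_uuid_nic2> mode=static IP=192.168.2.10 netmask=255.255.255.0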
Not recommended configurations for multipathing and iSCSI:
• Both server NICs in the same subnet
• Mixing of NIC teaming and multipathing
(Diagrams: left, NIC 1 (192.168.1.10) and NIC 2 (192.168.1.11) both in subnet 1; right, NIC 1 and NIC 2 teamed into one bond with IP 192.168.1.10)
Multipathing with Software Initiator: XenServer 5
• XenServer 5 supports multipathing with the iSCSI software initiator
• Prerequisites are:
• iSCSI target uses the same IQN on all ports
• iSCSI target ports operate in portal mode
• Multipathing reliability has been enhanced massively in XenServer 5.5
How to check if an iSCSI target operates in portal mode
• Execute: iscsiadm -m discovery --type sendtargets --portal <ip address of one target>
• The output must show all IPs of the target ports with an identical IQN
• Example:
192.168.0.161:3260,1 iqn.strawberry:litchie
192.168.0.204:3260,2 iqn.strawberry:litchie
• When connecting to the iSCSI target using the XenCenter Storage Repository wizard, all target IPs should also show up after discovery
NetApp Storage
• NetApp storage supports multipathing
• For configuring NetApp storage and modifying multipath.conf, see the whitepaper http://support.citrix.com/article/CTX118842 (a sketch of the kind of stanza involved follows below)
• NetApp typically supports portal mode, so iSCSI multipathing with the iSCSI software initiator is supported
• Especially for low-end NetApp storage (e.g. FAS2020) with a limited number of LAN adapters, special considerations apply
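As a rough illustration only, a device stanza of the kind such whitepapers add to /etc/multipath.conf might look as follows; the exact options must be taken from the referenced whitepaper and the NetApp documentation for the array and ONTAP version in use:

# Hedged sketch of a NetApp device stanza in /etc/multipath.conf (illustrative values only)
device {
        vendor                  "NETAPP"
        product                 "LUN"
        path_grouping_policy    group_by_prio
        features                "1 queue_if_no_path"
        failback                immediate
}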
NetApp low-end storage (iSCSI)
• Often limited by the NIC configuration
• Example: 2 NICs per head
• 1 aggregate / LUN is served by 1 head at a time (the other head provides fault tolerance)
• Thus: effectively 2 NICs can be used for the storage connection
• Typically the filer also delivers non-block-based protocols (e.g. CIFS), which require redundancy just like the block-based protocols (e.g. iSCSI)
Example FAS2020, Scenario 1: no network redundancy for iSCSI and CIFS, separation of networks
(Diagram: Controller 0 (active) and Controller 1 (fault tolerance) each connect NIC 0 to the CIFS network and NIC 1 to the iSCSI network)
Example FAS2020, Scenario 2: network redundancy for iSCSI and CIFS, no separation of networks
(Diagram: on each controller, NIC 0 and NIC 1 are combined into a vif / bond serving the shared CIFS & iSCSI network; Controller 0 active, Controller 1 fault tolerance)
Example FAS2020, Scenario 3: network redundancy for iSCSI (multipathing) and CIFS, separation of networks
(Diagram: on the active controller, NIC 0 and NIC 1 form a vif / bond carrying a CIFS VLAN and an iSCSI VLAN; the XenServer side uses a NIC bond for the CIFS VLAN and multipathing for the iSCSI VLAN; Controller 1 (fault tolerance) uses the same configuration on its NICs 2 and 3)
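For scenarios 2 and 3, the vif / bond on the NetApp side might be created roughly as follows, assuming Data ONTAP 7-mode syntax; interface names, the vif name, VLAN IDs and addresses are placeholders:

# Hedged sketch: NetApp 7-mode vif with tagged VLAN separation (placeholder names/IDs)
vif create multi vif0 e0a e0b          # bond the two onboard NICs of one head
vlan create vif0 10 20                 # tagged VLANs, e.g. 10 = CIFS, 20 = iSCSI
ifconfig vif0-10 192.168.1.201 netmask 255.255.255.0
ifconfig vif0-20 192.168.2.201 netmask 255.255.255.0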
Dell EqualLogic Support
• XenServer 5.5 includes an EqualLogic adapter (firmware 4.0.1 or later required)
• Redundant path configuration does not depend on whether the adapter is used or not
• All PS series arrays are supported, as they run the same OS
• StorageLink Gateway support is planned
Dell / EqualLogic
• See the whitepaper for Dell / EqualLogic storage: http://support.citrix.com/article/CTX118841
• Each EqualLogic array has two controllers
• Only 1 controller is active
• Uses a "group IP" address on the storage side (similar to bonding / teaming on the server side)
• Connections are made only via the group IP; no direct connection to the individual iSCSI ports is possible
• Therefore multipathing cannot be used; use NIC bonding on the XenServer side instead (see the sketch below)
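A minimal sketch of creating such a bond on the XenServer side; the network and PIF UUIDs and the IP address are placeholders:

# Hedged sketch: bond two XenServer NICs for EqualLogic group-IP traffic (placeholder UUIDs)
xe network-create name-label="EQL storage bond"                      # note the returned network UUID
xe bond-create network-uuid=<network_uuid> pif-uuids=<pif_uuid_1>,<pif_uuid_2>
xe pif-reconfigure-ip uuid=<bond_pif_uuid> mode=static IP=192.168.1.10 netmask=255.255.255.0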
Multipathing architecture with DataCore
• Different IQNs for the targets, so no portal mode is possible!
(Diagram: XenServer NIC 1 (192.168.1.10) on subnet 1 connects to storage controller 1, port 1 (192.168.1.201, IQN 1); NIC 2 (192.168.2.10) on subnet 2 connects to storage controller 2, port 2 (192.168.2.202, IQN 2); both controllers present LUN 1; subnet mask 255.255.255.0)
DataCore hints
• Special attention is needed for software iSCSI
• Follow the DataCore technical bulletin TB15: ftp://support.datacore.com/psp/tech_bulletins/TechBulletinsAll/TB15b_Citrix%20XenServer_config_501.pdf
• DataCore in a VM:
• OK when not using HA
• Configuration is possible, but take care when booting the whole environment
• Take care when updating XenServer
Logical advancement of the XenServer-integrated storage adapters (NetApp & EqualLogic storage adapter)
Citrix StorageLink Overview (XenServer)
(Diagram: the guest's data path runs from XenServer over iSCSI / FC directly to the SAN storage; the control path runs from the StorageLink snap-in for XenServer to the storage cloud)
StorageLink Overview
(Diagram: the Virtual Storage Manager (VSM) provides the control path to the SAN/NAS; on XenServer it connects through a VSM bridge in DOM0, on Hyper-V through VDS in the parent partition; the data path from XenServer and Hyper-V goes directly to the NetApp / EqualLogic SAN/NAS)
StorageLink Gateway Overview
• SMI-S is the preferred method of integration, as it requires no custom development work
• Vendor-specific VSM storage adapters (e.g. for Dell) run in separate processes
Storage Technologies with StorageLink (diagram)
• XenServer 5.5 iSCSI / FC: Storage Repository = LUN → LVM Volume Group → LVM Logical Volumes with VHD header per VM virtual disk
• XenServer 5.5 iSCSI / FC + StorageLink: Storage Repository maps each VM virtual disk directly to its own LUN
Essentials: Example usage scenario: effective creation of VMs from a template
• Manually, every VM clone (1x, 3x, ...) requires the same steps on the storage side: 1. copy of the LUN, 2. modification of zoning, 3. creation of the VM, 4. assignment of the LUN to the VM (a huge manual effort)
• With Essentials / StorageLink, the same result takes a few mouse clicks
• Effectiveness:
• Fast cloning using storage snapshots
• Fully automated storage and SAN configuration
• For FC and iSCSI
StorageLink: Supported Storages
• StorageLink HCL: http://hcl.vmd.citrix.com/SLG-HCLHome.aspx