Design and Implementation of a Server Cluster Backend for Thin Client Computing MTP final stage presentation By Khurange Ashish Govind Under the guidance of Prof. Om P. Damani
Talk outline • Introduction • Working of Thin Clients • Design issues • High availability of various services • Software components of the system • Working of the system • Filesystem for the cluster • Future work • Conclusion
1.1 Thin Client System • Consists of multiple heterogeneous workstations connected to a single server (diagram: diskless workstations, old PCs and new PCs all connected to the server)
1.1 Thin Client System • The hardware requirements of a Thin Client are: a keyboard, monitor, mouse and some computation power
1.1 Thin Client System • The server performs all application processing and stores all of the user's data
1.1 Thin Client System • The Thin Client and Server communicate using a display protocol such as X or RDP (diagram: keyboard and mouse input flows from the Thin Client to the Server; screen updates flow back)
1.2 Advantages of Thin Client System • Terminals are less expensive and maintenance free • Reduced cost of security • Reduced cost of data backup • Reduced cost of software installation
1.3 Limitations of Thin Client System • Server is a single point of failure • System is not scalable • Adding more independent servers helps with scalability but not with high availability
1.4 Windows Solution • Terminals cost $500 (without monitor) • Needs third party load balancing solution • Needs a separate file server
Talk outline • Introduction • Working of Thin Clients • Design issues • High availability of various services • Software components of the system • Working of the system • Filesystem for the cluster • Future work • Conclusion
2.1 LTSP • Linux Terminal Server Project • Thin Clients need the following services to run: • DHCP • TFTP • NFS • X protocol and a display manager (XDM, GDM, KDM)
Boot sequence (diagram): the Thin Client sends a DHCP request and receives a DHCP reply, downloads its OS over TFTP, mounts its root filesystem over NFS, then sends keyboard and mouse input to the XDM server and receives graphical output back
Talk outline • Introduction • Working of Thin Clients • Design issues • High availability of various services • Software components of the system • Working of the system • Filesystem for the cluster • Future work • Conclusion
3. Design Issues • Provide highly available DHCP, TFTP, NFS and XDM services • The Thin Client's binding to an XDM server must be dynamic • Software for finding the cluster's status, load balancing and managing the cluster • A highly available filesystem
Talk outline • Introduction • Working of Thin Clients • Design issues • High availability of various services • Software components of the system • Working of the system • Filesystem for the cluster • Future work • Conclusion
4 High Availability of Various Services • In this section we discuss HA of the following services: • DHCP • TFTP • NFS • XDM
4.1 DHCP • More than one DHCP server can exist on the same subnet • IP addresses are offered to clients on a 'lease' basis • The DHCP protocol has phases: • INIT • RENEW • REBIND
4.1.1 HA of DHCP • Multiple independent DHCP servers cannot provide HA • For HA, DHCP servers need to be arranged using either • Failover protocol • Static binding
4.1.2 Failover Protocol • A pair of DHCP servers (primary & secondary) • The lease database is synchronized between them • In normal mode only the primary server works, updating the secondary with its lease information • When the primary goes down, the secondary takes over • Assigning IP addresses to new Thin Clients • Renewing existing leases
4.1.3 Static Binding • The DHCP server's static IP allocation method is used • All servers have the same binding of each Thin Client's MAC address to its IP address • A Thin Client can continue as long as at least one DHCP server is running
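A minimal sketch of one way the static binding could be kept identical across DHCP nodes, assuming the bindings live in a shared table from which every node generates the same ISC dhcpd `host` declarations; the names, MAC addresses and IPs below are hypothetical:

```python
# Hypothetical shared MAC-to-IP table; every DHCP node uses the same copy,
# so any surviving server hands out the same address to a given Thin Client.
STATIC_BINDINGS = {
    "tc-01": ("00:11:22:33:44:01", "10.129.22.101"),
    "tc-02": ("00:11:22:33:44:02", "10.129.22.102"),
}

def dhcpd_host_stanzas(bindings):
    """Render ISC dhcpd 'host' declarations for the static bindings."""
    stanzas = []
    for name, (mac, ip) in sorted(bindings.items()):
        stanzas.append(
            "host %s {\n"
            "  hardware ethernet %s;\n"
            "  fixed-address %s;\n"
            "}" % (name, mac, ip)
        )
    return "\n".join(stanzas)

if __name__ == "__main__":
    # Each DHCP node would append this identical output to its dhcpd.conf.
    print(dhcpd_host_stanzas(STATIC_BINDINGS))
```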
4.2 TFTP • The TFTP service does not have states and bindings like DHCP • Running multiple independent TFTP servers provides HA • A TFTP server runs on every node on which a DHCP server runs • Each DHCP server advertises its own address as the TFTP server
4.3 XDM • An XDM server runs on all nodes in the cluster • Each node provides XDM service to users who have their home directory on that node • Static binding of a Thin Client to an XDM server will not work • Dynamic binding is based on the username
4.4 NFS • Running multiple independent NFS servers provides HA • To support this, an NFS server runs on all nodes in the cluster (diagram: a combined NFS + log-in server leaves the Thin Client with a single dependency, whereas separate NFS and log-in servers create two separate dependencies)
Talk outline • Introduction • Working of Thin Clients • Design issues • High availability of various services • Software components of the system • Working of the system • Filesystem for the cluster • Future work • Conclusion
5 Software Components of The System • Main software components of the system • Health Status Service • Load Balancer • Cluster Manager
5.1 Health Status Service • Consists of two parts: • Health Status Server • Health Status Client • The Health Status Server runs on all servers and provides information about server health • The Health Status Client is used by the Load Balancer to find the load on the servers
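A minimal sketch of the Health Status Service idea, assuming a plain TCP request/response and the 1-minute load average as the health metric; the port number and message format are assumptions, not the project's actual protocol:

```python
import os
import socket

PORT = 9099  # assumed port for the Health Status Server

def serve_health_status(port=PORT):
    """Health Status Server: reply to each connection with this node's load."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("", port))
    srv.listen(5)
    while True:
        conn, _addr = srv.accept()
        load1, _, _ = os.getloadavg()            # 1-minute load average
        conn.sendall(("%.2f\n" % load1).encode())
        conn.close()

def query_health_status(host, port=PORT, timeout=2.0):
    """Health Status Client: return the reported load, or None if unreachable."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as conn:
            return float(conn.recv(64).decode().strip())
    except (OSError, ValueError):
        return None   # a dead node simply drops out of load-balancing decisions
```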
5.2 Load Balancer • Accepts a username from the Thin Client • Maintains a database mapping each username to a group of servers • Replies with the IP of the least loaded server hosting that user's home directory • The cluster has two independent dependencies: DHCP and the Load Balancer
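A minimal sketch of the Load Balancer's server selection, with an illustrative username-to-server-group table; `get_load` stands in for the Health Status Client from the previous sketch:

```python
# Hypothetical table mapping each username to the servers that hold that
# user's home directory (replicated on a pair of servers).
USER_GROUPS = {
    "alice": ["10.129.22.12", "10.129.22.13"],
    "bob":   ["10.129.22.15", "10.129.22.16"],
}

def pick_server(username, get_load):
    """Return the IP of the least loaded reachable server for this user.

    get_load(ip) should return the server's current load, or None if the
    server is unreachable (e.g. query_health_status from the earlier sketch).
    """
    candidates = []
    for ip in USER_GROUPS.get(username, []):
        load = get_load(ip)
        if load is not None:                  # skip servers that are down
            candidates.append((load, ip))
    if not candidates:
        raise RuntimeError("no server hosting %r's home directory is up" % username)
    return min(candidates)[1]                 # least loaded server's IP
```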
5.3 Cluster Manager • Tool provided to the system administrator to manage the cluster • Makes sure that all the nodes in the cluster have the latest information about the system • Provides services to • Add / remove a node • Add / remove a user
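A minimal sketch of the "add user" operation, under the assumption that cluster-wide consistency is achieved by pushing the updated user-to-server mapping to every node; the push mechanism is an assumption, not the project's actual implementation:

```python
def add_user(username, server_pair, user_groups, cluster_nodes, push_table):
    """Record the new user's server pair and propagate the table to all nodes.

    push_table(node, table) stands in for whatever mechanism copies the
    updated mapping to a node (e.g. copying a file); it is an assumption here.
    """
    user_groups[username] = list(server_pair)
    for node in cluster_nodes:
        push_table(node, user_groups)
    return user_groups
```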
Talk outline • Introduction • Working of Thin Clients • Design issues • High availability of various services • Software components of the system • Working of the system • Filesystem for the cluster • Future work • Conclusion
6.1 Arrangement of Components in The Cluster (diagram: every node runs an XDM Server, NFS Server and Health Status Server; two of the nodes additionally run DHCP, TFTP and the Load Balancer)
6.2 Working of The System (diagram, six servers, steps 1-5: user 'A''s Thin Client contacts a node running the Load Balancer; the Load Balancer checks the load on the nodes holding user 'A''s home directory and replies with the least loaded one; the Thin Client then logs in to that node. Legend: nodes which hold the home directory for user 'A'; nodes where the Load Balancer is running.)
Talk outline • Introduction • Working of Thin Clients • Design issues • High availability of various services • Software components of the system • Working of the system • Filesystem for the cluster • Future work • Conclusion
7.1 Filesystem Issues • Keep hardware cost as low as possible • Use the servers' storage to store users' data • Replicate each user's filesystem on multiple servers • Use open source tools to build the filesystem
7.2 DRBD • A kernel module that maintains a real-time mirror of a block device on a remote machine • A pair of nodes work together: one acts as primary and the other as secondary • Read/write access to the replicated block device is allowed on the primary only
7.3 Heartbeat • Detects the liveness of a node • Runs between a pair of nodes • The primary node runs all services and holds the virtual IP of the cluster • When the primary goes down, the secondary takes over the virtual IP and starts all services
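A minimal sketch of the failover behaviour described above, with an illustrative liveness probe and takeover commands; Heartbeat itself drives this from its own configuration and resource scripts rather than from a script like this:

```python
import socket
import subprocess
import time

PEER = ("10.129.22.12", 9099)   # primary node and an assumed probe port
VIRTUAL_IP = "10.129.22.14/24"  # virtual NFS server address from the diagram
MISSES_BEFORE_TAKEOVER = 3

def peer_alive():
    """Rough liveness probe: can we open a TCP connection to the peer?"""
    try:
        socket.create_connection(PEER, timeout=2.0).close()
        return True
    except OSError:
        return False

def take_over():
    """Acquire the virtual IP and start the services the primary was running."""
    # Illustrative commands; Heartbeat's resource scripts do this in practice.
    subprocess.run(["ip", "addr", "add", VIRTUAL_IP, "dev", "eth0"], check=True)
    subprocess.run(["/etc/init.d/nfs", "start"], check=True)

def monitor():
    misses = 0
    while True:
        misses = 0 if peer_alive() else misses + 1
        if misses >= MISSES_BEFORE_TAKEOVER:
            take_over()
            return
        time.sleep(1)
```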
7.4 HA-NFS (diagram: host-a 10.129.22.12 and host-b 10.129.22.13 exchange DRBD and Heartbeat messages; the DRBD primary node serves the virtual NFS server address 10.129.22.14; after failover the surviving host becomes DRBD primary and takes over 10.129.22.14)
7.5 Cluster Filesystem • Users are divided into mutually exclusive groups • Each user group's files are replicated between a separate pair of servers • Each user group can tolerate the failure of one server
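A minimal sketch of the grouping rule with illustrative group and server names: each mutually exclusive user group maps to exactly one server pair, so a group's files survive the loss of either server in its pair but not both:

```python
# Illustrative assignment of user groups to replicated server pairs.
GROUP_PAIRS = {
    "group1": ("server1", "server2"),
    "group2": ("server3", "server4"),
}
USER_GROUP = {"alice": "group1", "bob": "group2"}

def surviving_replica(username, failed_servers):
    """Return a live server that still holds this user's files, or None."""
    pair = GROUP_PAIRS[USER_GROUP[username]]
    alive = [server for server in pair if server not in failed_servers]
    return alive[0] if alive else None
```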
Talk outline • Introduction • Working of Thin Clients • Design issues • High availability of various services • Software components of the system • Working of the system • Filesystem for the cluster • Future work • Conclusion
8 Future Work • Scalable filesystem • DRBD solution • Heartbeat Version 2 • Changes to DRBD • Coda filesystem • Filesystem performance measurement • Server sizing
Talk outline • Introduction • Working of Thin Clients • Design issues • High availability of various services • Software components of the system • Working of the system • Filesystem for the cluster • Future work • Conclusion
9.1 Contributions • A scalable and highly available cluster design for Thin Client computing • HA solutions for the DHCP, TFTP and NFS services • A filesystem built using open source tools: DRBD and Heartbeat • Health Status Service, Load Balancer and Cluster Manager developed from scratch
9.2 Cluster Characteristics • Generic design • No special hardware required • DHCP can tolerate 'n-1' failures • The Load Balancer also tolerates 'n-1' failures • Each user group tolerates one server failure
References • DHCP Failover Protocol, IETF Internet Draft • Linux Terminal Server Project, http://www.ltsp.org • DRBD homepage, http://www.drbd.org • Heartbeat project, http://linuxha.org/Heartbeat