Emulab Node Lifecycle
Overview: Node Lifecycle State Machines
State Machines and stated • Emulab uses a centralized service to track what each node does: stated • Booting, self-configuration, reloading images, shutdown, etc. • One server for all nodes; each node tracked individually • Tracking based on state machines that describe what nodes should be doing
Example: Normal Node Boot (state machine diagrams for a local PC and a local vnode)
Example: PC w/ partial OS support (state machine diagram for a local PC running an OS with minimal Emulab support)
State Overview • Each OS has a state machine that describes what is valid • Many OSes can follow the same state machine • The node (or boss) sends an event for each change • stated records the event and checks the state machine (in the DB) to see whether the transition from state A to B is valid • Takes action if it is not (mail, reboot, retry, etc.) • Also takes action when we “time out” in a state
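To make the transition check concrete, here is a minimal Python sketch; the transition table, the default state, and the recovery action are illustrative assumptions, not the real stated code or its DB schema.

```python
# Illustrative sketch of stated-style transition checking; the transition table,
# default state, and recovery action are assumptions, not the real Emulab code.
VALID_TRANSITIONS = {
    "SHUTDOWN": {"BOOTING"},
    "BOOTING":  {"TBSETUP", "SHUTDOWN"},
    "TBSETUP":  {"ISUP", "TBFAILED", "SHUTDOWN"},
    "ISUP":     {"SHUTDOWN"},
}

node_state = {}   # in the real system this is tracked in the database

def handle_state_event(node: str, new_state: str) -> None:
    """Record a node's reported state after checking it against the state machine."""
    current = node_state.get(node, "SHUTDOWN")
    if new_state in VALID_TRANSITIONS.get(current, set()):
        node_state[node] = new_state
    else:
        # Invalid transition: the real stated sends mail, reboots, retries, etc.
        print(f"WARNING {node}: invalid transition {current} -> {new_state}")

handle_state_event("pc42", "BOOTING")   # valid under this toy table
handle_state_event("pc42", "ISUP")      # flagged: skips TBSETUP in this toy table
```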
States for Communication • Programs on boss depend on state transitions to find out about important events • Reboot nodes, wait for ISUP state • Reloading: wait for RELOADDONE, then ISUP • States can also have actions associated with them (“state triggers”) • E.g. when reloading finishes, we check if it is a node being cleaned up before going free, and release it if necessary
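A small sketch of the state-trigger idea, assuming a hypothetical trigger table keyed by state; the real triggers and the cleanup check live in the Emulab database and are only stubbed here.

```python
# Sketch of a "state trigger": an action attached to entering a state. The trigger
# table and helper functions are illustrative stand-ins for DB-driven logic.
def is_being_cleaned(node: str) -> bool:
    # Stand-in for a DB lookup of whether this reload is part of freeing the node.
    return True

def release_node(node: str) -> None:
    print(f"{node}: reload finished, releasing back to the free pool")

def on_reloaddone(node: str) -> None:
    if is_being_cleaned(node):
        release_node(node)

STATE_TRIGGERS = {"RELOADDONE": [on_reloaddone]}

def fire_triggers(node: str, state: str) -> None:
    for trigger in STATE_TRIGGERS.get(state, []):
        trigger(node)

fire_triggers("pc42", "RELOADDONE")
```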
Now, more depth on some of the node lifecycle pieces • Node bootstrapping • Node self-configuration • TMCC/CD: Testbed Master Control Client & Daemon
Requirements • Ability to take control of a node regardless of current state • Ability to restore node to a known state • All with no manual intervention • Provide this capability to users
Taking Control: Rebooting a Node • Multi-step approach • “ssh reboot” • ICMP “Ping of Death” (IPoD) • Power cycle • Encapsulated in node_reboot • Available to users
node_reboot • Authenticates the caller • Sends an event to stated • stated knows what the node is doing • Can prevent reboots at a bad time • stated reruns node_reboot in “really do it” mode
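A rough Python sketch of the escalating reboot sequence from the two slides above; the command names (ipod, power), the timeouts, and the down-check are placeholders for the real tools, not Emulab's node_reboot.

```python
# Rough sketch of the escalating reboot strategy: ssh reboot, then IPoD, then power
# cycle. Command names (ipod, power), timeouts, and the down-check are placeholders.
import subprocess, time

def wait_for_down(node: str, timeout: int = 60) -> bool:
    """Poll with ping until the node stops answering, or give up after timeout."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        if subprocess.call(["ping", "-c", "1", node],
                           stdout=subprocess.DEVNULL) != 0:
            return True
        time.sleep(5)
    return False

def reboot_node(node: str) -> None:
    # 1. Polite: ask the OS to reboot itself over ssh.
    subprocess.call(["ssh", node, "reboot"])
    if wait_for_down(node):
        return
    # 2. Firmer: send the ICMP "Ping of Death" (hypothetical ipod helper command).
    subprocess.call(["ipod", node])
    if wait_for_down(node):
        return
    # 3. Last resort: power cycle the node via the remote power controller.
    subprocess.call(["power", "cycle", node])
```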
Taking Control: Catching the Boot • PXE-enabled NICs, first boot choice • PXE downloads a boot loader via TFTP • Emulab boot loaders may then • Boot from a particular disk partition • Download a standalone kernel • Download an MFS-based FreeBSD
Taking Control: Catching the Boot (wide-area nodes) • Use bootable CDROM • CDROM contains an MFS-based FreeBSD system • Contact Emulab (using secure tmcc) for instructions: • Apply patches • Reload disk • Just boot from disk
Restoring a Node: Disk reloading • Frisbee: the multi-threaded, multi-filesystem, multicast marvel! • Images are intelligently compressed using filesystem-specific knowledge • Image distribution is client driven: • Each client independently requests data • Clients “snoop” each other’s requests • Client has network, unzip, disk threads • Server takes requests from multiple clients and multicasts data
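To make the client-driven pipeline concrete, here is a toy Python sketch of the three-thread structure (network, unzip, disk) with "snooping" of whatever chunks arrive; the data source is simulated, and none of the real Frisbee protocol, chunk format, or multicast details are shown.

```python
# Toy model of a Frisbee-style client: a network thread collects chunks (its own
# requests or snooped ones), an unzip thread decompresses, a disk thread writes.
import queue, random, threading, zlib

TOTAL_CHUNKS = 8
wanted = set(range(TOTAL_CHUNKS))       # chunks this client still needs
compressed_q = queue.Queue()            # network thread -> unzip thread
decompressed_q = queue.Queue()          # unzip thread -> disk thread

def fake_receive_chunk():
    """Stand-in for reading a multicast chunk (possibly requested by another client)."""
    cid = random.randrange(TOTAL_CHUNKS)
    return cid, zlib.compress(bytes([cid]) * 1024)

def network_thread():
    while wanted:
        cid, data = fake_receive_chunk()
        if cid in wanted:               # "snoop": keep any chunk we still need
            wanted.discard(cid)
            compressed_q.put((cid, data))
    compressed_q.put(None)              # signal end of image

def unzip_thread():
    while (item := compressed_q.get()) is not None:
        cid, data = item
        decompressed_q.put((cid, zlib.decompress(data)))
    decompressed_q.put(None)

def disk_thread():
    image = {}
    while (item := decompressed_q.get()) is not None:
        cid, blocks = item
        image[cid] = blocks             # the real client writes to the raw disk here
    print(f"wrote {len(image)} chunks")

threads = [threading.Thread(target=t) for t in (network_thread, unzip_thread, disk_thread)]
for t in threads: t.start()
for t in threads: t.join()
```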
Disk reloading II • The Frisbee server (frisbeed) is started to serve the appropriate image • The client node boots into the FreeBSD MFS • Obtains info about what image to load • Runs the frisbee client (frisbee) • Performs post-frisbee customizations • Users can reload their own disks at any time (os_load)
Disk reloading (wide-area) • Initiated by the CDROM system • Copies or streams the image from Emulab via ssh • Feeds into imageunzip • Distribution in this manner means: • TCP for flow control and reliability • ssh for privacy
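A hedged sketch of that streaming path in Python: pull the image over ssh and pipe it into imageunzip. The hostname, image path, target disk, and the use of "-" to mean "read the image from stdin" are assumptions for illustration, not confirmed imageunzip usage.

```python
# Sketch of the wide-area reload path: stream the image from Emulab over ssh (TCP for
# flow control/reliability, ssh for privacy) and feed it into imageunzip. The hostname,
# image path, target disk, and "-" (read image from stdin) are illustrative assumptions.
import subprocess

ssh = subprocess.Popen(["ssh", "boss.emulab.net", "cat", "/path/to/image.ndz"],
                       stdout=subprocess.PIPE)
subprocess.run(["imageunzip", "-", "/dev/ad0"], stdin=ssh.stdout)
ssh.stdout.close()
ssh.wait()
```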
Summary: A typical bootstrap scenario • Experiment creation requests nodes with FreeBSD or Linux • DB state is set up, nodes are rebooted • On each, PXE loads pxeboot, which boots from the appropriate disk partition • Nodes come up and self-configure • When freed, nodes are reloaded with the default image
Bootstrap Issues • PXE-based boot does not scale well • PXE (DHCP) requires MAC broadcast • Alternative: CD/floppy/flash-based • Speed issues: • Biggest time sink: the BIOS (2 min) • DHCP can be slow (10-15 sec) • Disk reload not a problem (30-90 sec)
What is “Self-configuration”? • Nodes run “stock” OS install and customize themselves at boot time • Alternative: pre-customized disk images • Issues: speed and space • Alternative: post-disk load customization • Issue: compatibility • Disadvantage of self-configuration: • Portability: must be adapted to every OS
Emulab self-configuration • Configuration features (compared across three columns in the original table: Traditional, Emulab Local, Emulab Remote): Network identity • Shared filesystems • User accounts and keys • Hosts file • Network interfaces • IP tunnels • Link shaping • Routing • Tar and RPM installation • Daemon and agent startup • Custom user script execution
Features of the Implementation • Non-intrusive • Single hook on the host (rc scripts) • Adds two directories of scripts (mostly perl) • Some changes/replacements of standard files • Mostly “just works” on Unix-like systems • Linux, FreeBSD, OpenBSD to date • Should be easy to port to others • Windows XP support partially done
Where is my Control Net?? • Must locate the control net interface • Bus search order differs between BSD and Linux • Cannot rely on the DB since we can’t reach it yet! • Current: • Hacked scripts identify it based on kernel boot output • Lame: must be customized, tied to node type • Future: • DHCP on all interfaces, use the IF that replies?
The process • rc.testbed run as last step of node boot • Tell boss we are configuring (TBSETUP) • If node status is “free,” we are done • Get all TMCD-provided info in one transfer • Set up FS mounts, accounts • Construct rc.foo scripts for initializing the rest • interfaces, routes, tarballs... • Scripts generated depend on target environment
The process (we're not done yet!) • Network setup: run scripts for setup of interfaces, tunnels, link shaping, routes • User files: run scripts for RPMs and tarballs • Run daemons: healthd, idled, watchdog • Run agents: program, link, trafgen • Tell boss we are up (ISUP) • Configure virtual nodes
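Putting the two slides together, here is a stubbed-out Python sketch of the boot-time ordering; every step just prints what would happen, the rc script names are placeholders for the generated rc.foo scripts, and only the sequencing follows the slides.

```python
# Stubbed sketch of the rc.testbed self-configuration sequence; helper names and the
# rc.* script names are placeholders, the ordering follows the slides above.
def tmcc(cmd, arg=None):
    # Stand-in for the real tmcc client; pretend the node is allocated.
    print(f"tmcc {cmd} {arg or ''}".strip())
    return "ALLOCATED" if cmd == "status" else {}

def step(name):
    print(f"  {name}")

def rc_testbed():
    tmcc("state", "TBSETUP")                  # tell boss we are configuring
    if tmcc("status") == "FREE":              # a free node has nothing to set up
        return
    tmcc("fullconfig")                        # one bulk transfer of boot-time info
    step("mount NFS filesystems")
    step("create accounts and install keys")
    for rc in ("rc.interfaces", "rc.tunnels", "rc.delays",
               "rc.routes", "rc.tarballs", "rc.rpms"):
        step(f"generate and run {rc}")        # network setup, then user files
    step("start daemons: healthd, idled, watchdog")
    step("start agents: program, link, trafgen")
    tmcc("state", "ISUP")                     # tell boss we are up

rc_testbed()
```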
Configuring non-PC nodes • IXP network processor • Parasitic relationship with the host PC • Much of the configuration is done from the host PC • Multiplexed (“virtual”) nodes • Like IXPs, they have a “sub-node” relationship with the host • The many-to-one relationship makes it advantageous to perform setup from the physical node • Still, many aspects are performed by the node itself
Configuring non-PC nodes (cont.) • Future: Cisco routers • First cut: build config file (from template) and have router reconfigure • More advanced: allow custom router OS and config file
Executive Summary • Simple protocol for transferring state between nodes and “boss” (essentially a database proxy) • Primarily used for node self-configuration • Flexible authentication and transport protocol • The “Swiss-army knife” (or “kitchen sink”) of Emulab protocols
TMCC/TMCD • Testbed Master Control Client and Daemon • Used to request and return: • Configuration info for a node • State transitions • Uses a simple ASCII message format, suitable for perl parsing • Can use UDP, TCP, SSL on TCP • Client has “caching” and proxy modes
TMCC API • Usage: tmcc command argument … • Returns NAME=VALUE pairs (usually) • Commands:
nodeid: Returns Emulab node name
status: Returns project and experiment ID
ifconfig: Returns IP info for network interfaces
accounts: Returns user names and public keys to install
rpms: Returns list of RPMs to install
mounts: Returns list of NFS directories to mount
routing: Returns list of static routes to install
state: Sets current node state
vnodelist: Returns list of virtual nodes for this physical node
fullconfig: Bulk return of all info needed at boot time
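A small Python sketch of a tmcc consumer that parses the NAME=VALUE output; the sample ifconfig fields shown in the comment are illustrative, not the exact wire format.

```python
# Sketch of a tmcc consumer: run a command and parse the NAME=VALUE pairs it returns.
import shlex, subprocess

def tmcc(command, *args):
    """Run tmcc and return one dict of NAME=VALUE pairs per line of output."""
    out = subprocess.run(["tmcc", command, *args],
                         capture_output=True, text=True, check=True).stdout
    records = []
    for line in out.splitlines():
        pairs = (tok.split("=", 1) for tok in shlex.split(line) if "=" in tok)
        records.append(dict(pairs))
    return records

# For example, tmcc("ifconfig") might yield lines like
#   INTERFACE=eth1 INET=10.1.1.2 MASK=255.255.255.0
# which parse to {"INTERFACE": "eth1", "INET": "10.1.1.2", "MASK": "255.255.255.0"}.
```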
TMCC Security (local node) • Nodes identify their server via a config file, a compiled-in name, or DNS • Authentication at the server is based solely on IP address • Vulnerabilities: • Malicious impersonation of the server on the control net • Malicious impersonation of another node
TMCC Security (wide-area nodes) • SSL: a single node private key used by all nodes • Nodes can ensure they are talking to the server • Server can ensure it is talking to some node • Vulnerability: the node private key is in the filesystem of every node; crack one node and you have them all
Issues • Scaling • Every client made 20+ calls at boot time • Mitigated with “bulk transfer” and proxies • Still a hot spot (e.g., ISALIVE messages) • Security • Highly DoS-able • Mitigate with caching, per-experiment proxy?