
Veritas Application Director



Presentation Transcript


  1. Veritas Application Director Jim Senicka Director, Technical Product Management

  2. Agenda • VAD Overview • Typical Use Cases • Roadmap

  3. VAD Policy Master Architecture • A Server Farm that consists of: • Servers – machines running any of the major OSes may co-exist in a single server farm • Applications – potentially multi-tiered • Basically, your entire data center, or a subset thereof • A VAD Policy Master that monitors your Server Farm in real time • VAD Agents that run on all managed servers

  4. VAD Policy Master A real-time control center for your data center • Monitor application health and load in real time • Plan how to react to events • Define policies for configuration changes based on: • Failure – describe if/how to execute a fail-over • Load – describe how to react to high or low load • Schedule (future) – describe actions to be taken based on a known schedule • Execute • Manually start or stop an application, move an application from one server to another, change application priorities, etc. • Automatically execute policies in reaction to failure, load or schedule events
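
The define-then-execute model above can be pictured as a small event-to-policy dispatcher. This is an illustrative Python sketch, not the VAD API; the event fields, policy shapes, and names are all assumptions.

```python
# Illustrative sketch (not the VAD API) of the Policy Master's model:
# policies are registered per event kind, and incoming events are
# dispatched to the matching policy, which prescribes an action.

def react(event, policies):
    """Return the action the matching policy prescribes, or None."""
    policy = policies.get(event["kind"])     # "failure", "load", ...
    if policy is None:
        return None                          # no policy: left to the operator
    return policy(event)

# Hypothetical policies: always fail over on failure; migrate only
# when CPU load crosses a threshold.
policies = {
    "failure": lambda e: ("failover", e["app"], e["spare"]),
    "load":    lambda e: ("migrate", e["app"]) if e["cpu"] > 0.9 else None,
}

print(react({"kind": "failure", "app": "db1", "spare": "node7"}, policies))
# -> ('failover', 'db1', 'node7')
```

The same dispatcher covers both the manual path (an operator-generated event) and the automatic path (a monitoring-generated event), which matches the "Execute" split on the slide.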

  5. So, what can I do with it? • Centralized Application Management: increase operator/admin efficiency • Multi-Tier Application Management: manage complex N-tier apps as a single unit • Manage Server Consolidation: increase server utilization without increasing admin workload • Increase Availability Without Additional Hardware: utilize lower-priority application servers as spares

  6. Centralized Application Management • Today: While mission-critical applications are typically controlled by a clustering product, such as VCS, non-mission-critical applications are left to their own devices • Every application has its own scripts for start/stop • Application health and load is not closely monitored • With Veritas Application Director: Put all non-clustered applications in a VAD Server Farm: • Monitor the health and load of all applications from a central console • Start and stop applications in a standard, uniform way • Migrate applications between servers, as your environment evolves over time, using simple drag & drop

  7. Centralized Application Management: Automation improves manageability • Control the start/stop/restart and monitoring of standalone applications without local login or root privileges • Centralized automation and monitoring of all applications: hundreds from a single screen • VAD service groups provide component dependency, and agents remove the need for scripts

  8. Multi-Tier Application Management Customer pain points • Multi-tier applications require specific knowledge to handle restarts • Difficult to automate startup/shutdown or migrate • DR recovery takes manual intervention • No centralized status of the entire application stack

  9. Multi-Tier Application Management: Dependencies and priorities manageable like a single app (diagram: Web, App and DBMS tiers) • Create dependencies between tiers • Configure behavior on fault of any tier • Entire stack can be moved, or stopped to clear space for a higher priority • Establish business priorities of the entire application stack

  10. Manage Server Consolidation Customer pain points • Pre-consolidation utilization is often only 15-40% • Server consolidation → more eggs in one basket • Need to manage the OS resource manager • Need to manage application suitability for co-existence on a server

  11. Managing Server Consolidation: Intelligent Workload Management (diagram: a service group queue and servers of varying capacity, e.g. CPU=6/Mem=6g, running apps of priority 1-4) • Server definitions include relative CPU power and physical memory • The Policy Master starts applications on a valid server based on capacity requirements • Higher-priority applications can eject lower-priority ones to free needed capacity • Application Director interacts with the OS-based resource manager to allocate appropriate CPU “shares” and amount of physical memory • Applications move based on: failover, capacity, operator action
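
The capacity-and-priority placement behavior described here can be illustrated with a toy scheduler. Everything below (the data layout, the eviction rule, lower number meaning higher priority) is an assumption for illustration, not how VAD is actually implemented.

```python
# Toy sketch of priority-based placement (lower number = higher priority):
# place an app on any server with enough free CPU/memory; if none fits,
# evict a strictly lower-priority app that frees enough capacity.

def place(app, servers):
    # servers: {name: {"cpu": free_cpu, "mem": free_mem, "apps": [...]}}
    for name, s in servers.items():
        if s["cpu"] >= app["cpu"] and s["mem"] >= app["mem"]:
            s["cpu"] -= app["cpu"]; s["mem"] -= app["mem"]
            s["apps"].append(app)
            return name, None                      # placed, nothing evicted
    for name, s in servers.items():
        # Try the lowest-priority resident first (largest priority number).
        for victim in sorted(s["apps"], key=lambda a: -a["priority"]):
            if (victim["priority"] > app["priority"]
                    and s["cpu"] + victim["cpu"] >= app["cpu"]
                    and s["mem"] + victim["mem"] >= app["mem"]):
                s["apps"].remove(victim)
                s["cpu"] += victim["cpu"] - app["cpu"]
                s["mem"] += victim["mem"] - app["mem"]
                s["apps"].append(app)
                return name, victim["name"]        # placed after eviction
    return None, None                              # queued: no capacity
```

An evicted app would, in the real product, go back through the same placement logic (and may land on a spare or wait in the service group queue); this sketch only shows the single placement decision.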

  12. Increase Availability Without Additional Hardware Customer pain points • More and more applications need high availability • Adequate spares represent a major cost

  13. Increase Availability Without Additional Hardware: Priority-based availability (diagram: apps of priority 1-4 distributed across servers) • Systems used by low-priority apps become possible spares • HA is achievable by leveraging underutilized systems • Applications collectively run at higher capacity while still realizing HA

  14. Automating DR operations Customer pain points • Multi-tiered applications complicate DR operations • Would like to utilize the DR site and still handle P1 applications

  15. Global Data Center Availability: Priority-based management across DCs • Policy Masters split between sites • Replication or mirroring between sites • During major outages, Application Director selectively controls app movement to available resources by priority • Critical apps use a data mobility solution to keep data in sync between sites

  16. Veritas Application Director – Now & Future

  17. Release Plan • Version 0.8 beta available now to select customers • Version 0.9 beta scheduled for Dec 2005 • Version 1.0 scheduled for a controlled release in March 2006 • First year after 1.0 – work with a small number of early adopter customers

  18. Contents of VAD – Scale and Scope • VAD 1.0 • Support up to 256 nodes in a server farm • Platforms: Solaris, AIX, Red Hat, SUSE • Limited role-based access control model • Authentication pass-through to Active Directory or LDAP • Full CLI, Web-based GUI, basic reporting, SNMP and SMTP alerts • A future release • Support HP-UX, Windows, VMware, Xen • Scale up to 1000s of servers • Full role-based access control and GUI model • Security zones

  19. Contents of VAD – Application Agents and Notifiers • VAD 1.0 • Agents for Oracle, WebLogic, WebSphere, Apache, NFS, Samba, Solaris Zones • Pre-installed notifiers for SNMP and SMTP • A future release • Agents for more applications, driven by customer requirements • Custom notifiers

  20. Veritas Application Director vs. VCS

  21. Thank you

  22. Additional Slides

  23. Security • Log on using enterprise credentials • Integrates with enterprise Active Directory or LDAP • Application Nodes and Policy Master mutually authenticate • Encrypted communication between App Nodes and Policy Master • Role-based access control model • Define your own roles, some basic templates provided • Security zones (future) – segregate your environment into protected zones
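
The role-based access control model on this slide can be pictured as a simple role-to-permission map. The role names and operations below are assumptions for illustration (the slide only says basic templates are provided), not VAD's actual model.

```python
# Toy sketch of a role-based access check: roles map to sets of
# permitted operations; a request is allowed only if some role held
# by the user grants the requested operation.

ROLES = {
    "operator": {"start", "stop", "view"},   # assumed template roles
    "viewer":   {"view"},
}

def allowed(user_roles, operation):
    """True if any of the user's roles permits the operation."""
    return any(operation in ROLES.get(r, set()) for r in user_roles)

print(allowed(["viewer"], "stop"))    # -> False
print(allowed(["operator"], "stop"))  # -> True
```

Defining your own roles, as the slide describes, amounts to adding entries to the role-to-permission map; the check itself does not change.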

  24. Interfaces • Web-based GUI • High level view of the Server Farm • Drag & Drop application placement • Reporting • CLI • All low-level operations are available from the CLI • CLI can be used to interface with 3rd party config management and SLA enforcement tools • Alerts and notifications • SNMP, SMTP, and custom (user defined) notifiers

  25. Advanced usage: Use VAD as the infrastructure for a dynamic data center • Today: Applications are “glued” to their servers • Very time consuming to move an application to another server • Result: vast upfront over-provisioning of resources, to accommodate anticipated future increase in needs, in a static environment • With Veritas Application Director: Decouple applications from servers • Migrate applications between servers at will: in case of failure, in reaction to an increase or decrease in load, or on a scheduled basis • Spend your time defining policies, rather than manually reacting to events • Visibility into the health and load of the entire application environment at all times

  26. Advanced usage, continued: Application Mobility – Implementation • Define all application resources as part of the Service Group: • Virtual IP address, virtual hostname, etc. • Decide how to deal with application binaries: • Option 1: Pre-install on all potential target nodes • Option 2: NFS-mount from shared storage on-demand • Decide how to deal with application data: • Option 1: Store on NFS shared storage • Option 2: Create (e.g. using Command Central Storage) islands of target nodes sharing SAN using Veritas Volume Manager or SAN-VM • Option 3: Replicate (e.g. using Veritas Volume Replicator) – for disaster recovery purposes

  27. Contents of VAD – Metaphors and Operations • VAD 1.0 • Service Groups • Service Group placement as reaction to failures or manual migration • Recognized data center objects: servers and applications • Full CLI, Web-based GUI, basic reporting • A future release • Application Groups • Other operations: GetLog, reboot etc

  28. How does it work? Understanding “managed applications” (diagram: Application, Storage and Network ID components) • All applications have 3 basic components that must be managed: storage (volumes and file systems), application components, and network identity • These components all have specific sequencing, or “dependency relationships”, on startup and shutdown • Application Director controls 3 core functions on a per-application basis: startup sequence of components, shutdown sequence of components, and monitoring of ongoing health
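
The start/stop sequencing idea amounts to ordering resources by their dependency relationships: dependencies come online first, and shutdown is the reverse. A minimal sketch with a topological sort; the resource names and edges are illustrative assumptions, not a real VAD configuration.

```python
# Sketch of dependency-ordered start/stop: each resource lists the
# resources it depends on; startup brings dependencies online first,
# and shutdown is simply the reverse order.

from graphlib import TopologicalSorter  # Python 3.9+

# resource -> resources it depends on (must be online first)
deps = {
    "application": ["filesystem", "ip"],
    "filesystem":  ["volume"],
    "volume":      ["diskgroup"],
    "ip":          ["nic"],
}

startup = list(TopologicalSorter(deps).static_order())
shutdown = list(reversed(startup))

print(startup)    # dependencies first, "application" last
print(shutdown)   # "application" first
```

Because the ordering is derived from the declared dependencies rather than from scripts, adding a resource only requires adding its edges, which is the point the slide makes about per-application sequencing.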

  29. How does it work? Understanding “Service Groups” (diagram: resource tree of Listener, Database, IP, NIC, file systems, volumes and disk group) • A VAD “Service Group” encapsulates all component requirements for an application and its resource dependencies • Rolled-up status of the entire group is displayed • Service groups can be started and stopped with a single command • With all dependency relations mapped, root-cause analysis of application failures becomes simpler • Different types of resources use specific “agents” to control all logic of starting, stopping and monitoring • Properly configured service groups can be moved for planned outages, or in response to failures

  30. How does it work? Understanding “group dependencies” (diagram: Web → App → DB tiers) • Application Director supports creating dependencies between service groups, where each group represents an application tier • Different tiers may: run on different operating systems; run as a single instance, or use a “scale-out” architecture • Dependencies between tiers define: startup/shutdown sequence; error-handling behavior when any group faults
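
One way to picture the error-handling side of group dependencies: when a tier faults, every tier that depends on it, directly or transitively, must also be taken offline and restarted once the faulted tier recovers. A toy sketch, using the Web/App/DB tiers as an assumed example (not actual VAD fault logic):

```python
# Toy sketch of fault propagation across tier dependencies:
# `requires` maps each tier to the tiers it depends on; a fault in a
# tier affects everything that reaches it through the dependency graph.

requires = {"web": {"app"}, "app": {"db"}, "db": set()}

def dependents_of(tier):
    """Tiers that must restart when `tier` faults (transitive closure)."""
    out = set()
    changed = True
    while changed:
        changed = False
        for t, needs in requires.items():
            if t not in out and (tier in needs or needs & out):
                out.add(t)
                changed = True
    return out

print(sorted(dependents_of("db")))   # -> ['app', 'web']
```

A DB fault therefore implies restarting both App and Web tiers, while a Web fault affects nothing else, which is the asymmetry the tier diagram encodes.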

  31. View: Placement of a Service Group • Migrate a Service Group? Simply drag & drop • Want a Service Group to fail over in case of fault? Simply define a fail-over policy • Stop a Service Group? Right-click → Offline

  32. View: Availability Summary • Load averages in the server farm • Summary of any issues within the server farm • Server farm-wide distribution of objects

  33. How does it work? PM architecture (diagram: Policy Master components – Comm, ASA, Database Loader, status/config DB, XML – with CLI and GUI clients and multiple app nodes)
