
DevOps Maturity Model



Presentation Transcript


  1. DevOps Maturity Model IBM Rational 03/10/2014

  2. What Are Our Goals?
  • The ultimate goal is to continuously improve tools and processes to produce a quality product with faster delivery to our customers.
  • Measuring that continuous improvement is important to show we are focusing on the areas that provide the best ROI.
  • How do we plan to measure?
    • DevOps Maturity Model Self Assessment
    • DevOps outcome-based self measurement
  • How often do we plan on doing the self assessments?
    • The current plan is once per quarter.
  • What will be done with the measurements?
    • Identify tools and processes that can help improve DevOps maturity.
    • Measure continuous improvement through the metrics collected.
    • Share results with the executive teams.
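
  As a concrete illustration of the quarterly measurement loop described above, the following minimal Python sketch tracks self-assessment scores per adoption path so the trend can be shared with the executive teams. The 1-4 scoring scale, the path names, and all values are illustrative assumptions, not data from this deck.

    # Hypothetical quarterly self-assessment tracking; the 1-4 scale and
    # all scores below are illustrative only.
    from statistics import mean

    quarterly_scores = {
        "2013-Q4": {"plan_measure": 1, "develop_test": 2, "release_deploy": 1, "monitor_optimize": 1},
        "2014-Q1": {"plan_measure": 2, "develop_test": 2, "release_deploy": 2, "monitor_optimize": 1},
    }

    # Print overall and per-path scores so improvement is visible per quarter.
    for quarter, scores in sorted(quarterly_scores.items()):
        print(f"{quarter}: overall {mean(scores.values()):.2f}  detail {scores}")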

  3. DevOps Principles and Values
  • Develop and test business strategy requirements against a production-like system
  • Iterative and frequent deployments using repeatable and reliable processes
  • Continuously monitor and validate operational quality characteristics
  • Amplify feedback loops
  The full DevOps Maturity Model Self Assessment covers "Plan & Measure" questions 1-63 (the question table was not captured in this transcript); the outcome-based metrics questions repeated on this slide appear in full on slides 5 and 6.

  4. Where do you start? DevOps improvements adoption: assess and define outcomes and supporting practices to drive strategy and roll-out.
  Step 1: Business Goal Determination. Objective: What am I trying to achieve?
  • Think through business-level drivers for improvement
  • Define measurable goals for your organizational investment
  • Look across silos and include key Dev and Ops stakeholders
  Step 2: Current Practice Assessment. Objective: Where am I currently?
  • What do you measure and currently achieve?
  • What don't you measure, but should, in order to improve?
  • Which practices are difficult, incubating, or well scaled?
  • How far do your team members agree with these findings?
  Step 3: Objective & Prioritized Capabilities. Objective: What are my priorities?
  • Start from where you are today and from your improvement goals
  • Consider changes to people, practices, and technology
  • Prioritize change using goals, complexities, and dependencies
  Step 4: Roadmap. Objective: How should my practices improve?
  • Understand your appetite for cross-functional change
  • Target improvements that get the best bang for the buck
  • Build a roadmap and agree on an actionable plan
  • Use measurable milestones that include early wins

  5. Outcome Based Metrics
  1) Which of the following best describes the state of your code at the end of each iteration?
  • Ready for testing
  • Partially tested, ready for additional integration, performance, security, and/or other testing
  • Fully tested, documented, and ready for production delivery or GA release (modulo translation work or legal approvals)
  2) How quickly can your team pivot (complete a feature in progress and start working on a newly-arrived, high-priority feature)?
  • 3 months or longer
  • Less than 3 months
  • Less than one month
  • Less than one week
  3) How quickly are you able to change a line of code and deliver it to customers as part of a fully-tested, non-fixpack release?
  • 12 months or longer
  • Less than 6 months
  • Less than 3 months
  • Less than one month
  • Less than one week
  • Less than one day
  4) What is the cost (in person-hours) of executing a full functional regression test?
  • <enter value>
  5) How long does it take for developers to find out that they have committed a source code change that breaks a critical function?
  • One week or longer
  • 1 to 7 days
  • 12 to 24 hours
  • 3 to 12 hours
  • 1 to 3 hours
  • Less than one hour
  6) Which of the following best describes the state of your deployment automation for the environments used for testing?
  • We have no deployment automation. Setting up our test environments is entirely manual.
  • We have some deployment automation, but manual intervention is typically required (for example, to provision machines, set up dependencies, or to complete the process).
  • We create fully-configured test environments from scratch and we reliably deploy into those environments without manual intervention.
  • We create fully-configured production-congruent environments from scratch and we reliably deploy into those environments without manual intervention.
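
  One way to make these multiple-choice answers trendable is to encode each answer as its position in a worst-to-best bucket list. A minimal Python sketch, using the bucket order shown in question 3 above; the two sample answers are hypothetical.

    # Encode question 3's answer as an ordinal so quarters can be compared.
    LEAD_TIME_BUCKETS = [  # worst to best, copied from question 3
        "12 months or longer", "less than 6 months", "less than 3 months",
        "less than one month", "less than one week", "less than one day",
    ]

    def ordinal(buckets, answer):
        """Return the answer's index; higher means a better outcome."""
        return buckets.index(answer)

    q1 = ordinal(LEAD_TIME_BUCKETS, "less than 6 months")   # hypothetical Q1 answer
    q2 = ordinal(LEAD_TIME_BUCKETS, "less than 3 months")   # hypothetical Q2 answer
    print(f"code-change lead time improved by {q2 - q1} bucket(s)")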

  6. Outcome Based Metrics
  7) (If your product is SaaS) Which of the following best describes the state of your deployment automation for your staging and production environments?
  • We have no deployment automation. Setting up our staging and production environments is entirely manual.
  • We have some deployment automation, but manual intervention is typically required (for example, to provision machines, set up dependencies, or to complete the process).
  • We create fully-configured staging and production environments from scratch and we reliably deploy into those environments without manual intervention.
  8) (If your product is SaaS) Are you able to make business decisions based on data provided by infrastructure, application, and customer experience monitoring?
  • Yes
  • No
  9) (If your product is SaaS) How much downtime is generally required to deploy a new version into production?
  • 4 hours or longer
  • 1-4 hours
  • Less than 1 hour
  • No downtime is needed
  10) (If your product is SaaS) How often do problems occur when deploying a new version into production?
  • Problems always occur
  • Problems occur about 50% of the time
  • Problems occur about 25% of the time
  • Problems occur about 10% of the time
  • Problems are rare
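
  Question 10's answer is easy to derive from data rather than recollection. A minimal sketch, assuming a deployment log that records whether each production deployment hit problems; the records are illustrative.

    # Compute the production deployment problem rate from a (hypothetical) log.
    deployments = [
        {"version": "4.0.1", "problems": False},
        {"version": "4.0.2", "problems": True},
        {"version": "4.0.3", "problems": False},
        {"version": "4.0.4", "problems": False},
    ]

    failure_rate = sum(d["problems"] for d in deployments) / len(deployments)
    print(f"problems occurred in {failure_rate:.0%} of production deployments")  # -> 25%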

  7. Initial Rollout

  8. Sample Results
  • DevOps Maturity Model Self Assessment results
  • Outcome Based Metrics

  9. Sample Details: Plan & Measure (Reliable)

  10. Sample Details: Develop & Test (Practiced)

  11. Sample Details: Release & Deploy (Practiced)

  12. Sample Details: Monitor & Optimize (Practiced)

  13. DevOps Outcome Metrics

  14. Sample Maturity Model Assessment (levels run from Scaled, the most mature, down to Practiced; each practice is marked Fully Achieved, Partially Achieved, or Goal on the original chart)
  Scaled:
  • Plan & Measure: Define release with business objectives; Measure to customer value; Improve continuously with development intelligence
  • Develop & Test: Test continuously; Manage environments through automation
  • Release & Deploy: Provide self-service build, provision and deploy
  • Monitor & Optimize: Automate problem isolation and issue resolution; Optimize to customer KPIs continuously
  Reliable:
  • Plan & Measure: Plan and source strategically; Dashboard portfolio measures
  • Develop & Test: Manage data and virtualize services for test; Deliver and integrate continuously
  • Release & Deploy: Standardize and automate cross-enterprise; Automate patterns-based provision and deploy
  • Monitor & Optimize: Optimize applications; Use enterprise issue resolution procedures
  Consistent:
  • Plan & Measure: Link objectives to releases; Centralize requirements management; Measure to project metrics
  • Develop & Test: Automated test environment deployment; Run unattended test automation / regression
  • Release & Deploy: Plan departmental releases and automate status; Automated deployment with standard topologies
  • Monitor & Optimize: Monitor using business and end user context; Centralize event notification and incident resolution
  Practiced:
  • Plan & Measure: Document objectives locally; Manage department resources
  • Develop & Test: Schedule SCM integrations and automated builds; Test following construction
  • Release & Deploy: Plan and manage releases; Standardize deployments
  • Monitor & Optimize: Monitor resources consistently; Collaborate Dev/Ops informally
  Legend: Fully Achieved / Partially Achieved / Goal
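
  To show how such a grid could be scored, here is a minimal Python sketch for one adoption path. The three statuses come from the slide legend; the numeric weights and the per-practice statuses are assumptions for illustration.

    # Score the Plan & Measure path from the assessment grid.
    WEIGHT = {"fully": 1.0, "partially": 0.5, "goal": 0.0}  # weights are assumed

    plan_and_measure = {  # illustrative statuses, one per practice in the column
        "Document objectives locally": "fully",
        "Link objectives to releases": "partially",
        "Plan and source strategically": "goal",
        "Define release with business objectives": "goal",
    }

    score = sum(WEIGHT[s] for s in plan_and_measure.values()) / len(plan_and_measure)
    print(f"Plan & Measure achievement: {score:.0%}")  # -> 38%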

  15. Sample Maturity Model Assessment, goal-setting view. GOALS: Where is the best result? The practice grid repeats slide 14, marked Fully Achieved / Partially Achieved / Goal, with two added annotations: "Focus up" between the Reliable and Consistent rows, and "Focus across" between the Consistent and Practiced rows.

  16. Goal Discussion: Planning for Initiative #1

  17. DevOps Proof of Concept Investigation
  Tooling under investigation:
  • Rational Focal Point
  • Rational Team Concert (Task and Change Record work items)
  • Jazz SCM
  • Jazz Build Engine (JBE)
  • Security scanning with AppScan
  • Rational UrbanCode (UrbanCode Deploy under consideration)
  The slide's diagram shows a hosted environment connecting Agile Development, Continuous Build, Continuous Integration, Continuous Test, Continuous Deployment, and Continuous Feedback. Components include an image catalog, driver images and driver VMs, a builder VM with compilers and AppScan, compile pool resources, test and debug environments, RTC Eclipse, Web, and Build Engine clients, a Focal Point client, and customer interaction through Service Management Connect, content download, and RFE feedback.
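
  The flow in this PoC can be pictured as a staged pipeline. Below is a minimal orchestration sketch; the shell script names are placeholders, not real Jazz Build Engine, AppScan, or UrbanCode invocations, and would be replaced by the team's actual tool commands.

    import subprocess

    # Placeholder stage commands; every script name here is hypothetical.
    PIPELINE = [
        ("continuous build",      ["./run-jbe-build.sh"]),
        ("security scan",         ["./run-appscan.sh"]),
        ("continuous test",       ["./run-regression.sh"]),
        ("continuous deployment", ["./deploy-to-test-env.sh"]),
    ]

    DRY_RUN = True  # flip to False once real commands are wired in

    for stage, cmd in PIPELINE:
        print(f"stage: {stage} -> {' '.join(cmd)}")
        if not DRY_RUN:
            subprocess.run(cmd, check=True)  # fail fast on the first broken stage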

  18. Introduction to Practice Based Maturity Model
  Scaled:
  • Plan & Measure: Define release with business objectives; Measure to customer value; Improve continuously with development intelligence
  • Develop & Test: Test continuously; Manage environments through automation
  • Release & Deploy: Provide self-service build, provision and deploy
  • Monitor & Optimize: Automate problem isolation and issue resolution; Optimize to customer KPIs continuously
  Reliable:
  • Plan & Measure: Plan and source strategically; Dashboard portfolio measures
  • Develop & Test: Manage data and virtualize services for test; Deliver and integrate continuously
  • Release & Deploy: Standardize and automate cross-enterprise; Automate patterns-based provision and deploy
  • Monitor & Optimize: Optimize applications; Use enterprise issue resolution procedures
  Consistent:
  • Plan & Measure: Link objectives to releases; Centralize requirements management; Measure to project metrics
  • Develop & Test: Link lifecycle information; Deliver and build with test; Centralize management and automate test
  • Release & Deploy: Plan departmental releases and automate status; Automated deployment with standard topologies
  • Monitor & Optimize: Monitor using business and end user context; Centralize event notification and incident resolution
  Practiced:
  • Plan & Measure: Document objectives locally; Manage department resources
  • Develop & Test: Manage lifecycle artifacts; Schedule SCM integrations and automated builds; Test following construction
  • Release & Deploy: Plan and manage releases; Standardize deployments
  • Monitor & Optimize: Monitor resources consistently; Collaborate Dev/Ops informally

  19. Maturity Levels Defined. Specific maturity levels are defined by how well an organization performs its practices. The levels look at consistency, standardization, usage models, defined practices, a mentor team or center of excellence, automation, continuous improvement, and organizational or technical change management.

  20. Plan/Measure. At the practiced level, organizations capture business cases or goals in documents for each project to define scope within the strategy, but resourcing for projects is managed at the department level. Once projects are executed, change decisions and scope are managed within the context of the project or program to achieve goals within budget and time. As organizations mature, business needs are documented within the context of the enterprise and measured against customer value metrics. Those needs are then prioritized, aligned to releases, and linked to program or project requirements. Project change decisions and scope are managed at the portfolio level.

  21. Development/Test. At the practiced level, project and program teams produce multiple software development lifecycle products in the form of documents and spreadsheets to explain their requirements, designs, and test plans. Code changes and application-level builds are performed on a formal, periodic schedule to ensure sufficient resources are available to overcome challenges. Testing, except at the unit level, is performed following a formal delivery of the application build to the QA team after most if not all construction is completed. As organizations mature, software development lifecycle information is linked at the object level to improve collaboration within the context of specific tasks and information. This provides the basis for development intelligence used to continuously assess the impact of process or technology improvements. A centralized testing organization and service provides support across applications and projects, continuously running regression and higher-level automated tests, provided the infrastructure and application deployment can support them. Software delivery, integration, and builds with code scans and unit testing are performed routinely and on a continuous basis by individual developers, teams, applications, and products.
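
  The fast feedback this section describes (and that question 5 measures) can be computed directly from timestamps. A minimal sketch with illustrative values; in practice the timestamps would come from the SCM system and the build engine.

    from datetime import datetime

    # Hypothetical timestamps: when the breaking change was committed and
    # when the continuous build first reported the failure.
    commit_time = datetime(2014, 3, 10, 9, 15)
    failure_time = datetime(2014, 3, 10, 10, 5)

    latency = failure_time - commit_time
    print(f"developer feedback latency: {latency}")  # -> 0:50:00, under one hour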

  22. Release/Deploy. At the practiced level, releases are planned annually for new feature and maintenance teams. Critical repairs and off-cycle releases emerge as needed. All are managed in a spreadsheet updated through face-to-face meetings. Impact analysis of change is performed manually as events occur. Application deployments and middleware configurations are performed consistently across departments using manual, or manually staged and initiated, scripts. Infrastructure and middleware are provisioned similarly. As organizations mature, releases are managed centrally in a collaborative environment that leverages automation to maintain the status of individual applications. Deployments and middleware configurations are automated, then move to a self-service capability that gives individual developers, teams, testers, and deployment managers the ability to build, provision, deploy, test, and promote continuously. Infrastructure and middleware provisioning evolves to an automated, then self-service, capability similar to application deployment. Operations engineers move to changing automation code and re-deploying rather than making manual or scripted changes to existing environments.
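
  The closing point, changing the automation code and re-deploying instead of editing live environments, is the essence of desired-state deployment. A minimal sketch under that assumption; the state fields and versions are illustrative.

    # Environments are rebuilt from versioned automation inputs, never
    # hand-edited; all names and versions below are illustrative.
    DESIRED_STATE = {"app_version": "4.0.5", "middleware": "was-8.5", "topology": "standard"}

    def deploy(desired: dict) -> dict:
        """Re-provision an environment so it mirrors the versioned definition."""
        print(f"re-provisioning from automation inputs: {desired}")
        return dict(desired)

    test_env = deploy(DESIRED_STATE)
    assert test_env == DESIRED_STATE  # the environment matches the definition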

  23. Monitor/Optimize. At the practiced level, deployed resources are monitored, and events or issues are addressed as they occur without the context of the affected business application. Dev and Ops coordination is usually informal and event-driven. Feedback on user experience with business applications is gathered through formalized defect programs. As organizations mature, monitoring is performed within the context of business applications, and optimization begins in QA environments to improve stability, availability, and overall performance. Customer experience is monitored to optimize experiences within business applications. Optimization to customer KPIs becomes part of the continuous improvement program.
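
  Monitoring "within the context of business applications" amounts to enriching raw resource events with the application they affect. A minimal sketch, assuming a host-to-application catalog; all field names and values are illustrative.

    def enrich(event: dict, app_catalog: dict) -> dict:
        """Attach the affected business application to a raw resource event."""
        app = app_catalog.get(event["host"], "unknown application")
        return {**event, "business_app": app}

    # Hypothetical catalog and event.
    catalog = {"web01": "Customer Portal", "db02": "Order Management"}
    raw = {"host": "db02", "metric": "disk_free", "value": "3%"}
    print(enrich(raw, catalog))  # operators see Order Management is at risk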

  24. Sample: Practice Based Maturity Model: Maturity Goals for an Initiative
  The grid repeats the practice-based maturity model from slide 18, with each practice marked as Fully Achieved, Partially Achieved, or a Goal for the initiative.
