
KS Technical Architecture Review




Presentation Transcript


  1. KS Technical Architecture Review – John A. Lewis | Unicon Inc.

  2. Architecture Review Process
     - Planned two phases:
       - Short first phase with a high-level review, to ensure the review was adding value
       - Longer second phase with more detail on areas surfaced earlier
     - Various review activities:
       - Presentations / walk-throughs from various teams
       - Interviews with various team members
       - Independent documentation and code review
       - Participation in some specific team activities
     - Involvement in performance improvement used more time than planned in phase 1, which meant phase 2 was shorter
     - Got to go deeper than planned, and made more detailed observations
     - Valuable to follow one issue through all the layers of the architecture

  3. Overall Assessment
     - The Kuali Student platform has a solid foundation
     - Will be adoptable as production enterprise software
     - Will run with appropriate availability/scalability
     - There are no "red flag" issues
     - Some areas for concern/improvement: 13 areas of general findings and 19 specific recommendations

  4. Major Findings

  5. Performance
     - Overall, the system exhibits a reasonably performant architecture
     - Supports several hundred concurrent users in a single-node-per-layer configuration
     - Shows there are no fundamental issues in the architecture stopping it from scaling
     - More bottlenecks remain to be remediated; ongoing need for performance and scalability testing
     - Current indications are positive

  6. Complexity
     - "Gas Factory" anti-pattern: the overall system suffers from significant unnecessary/accidental complexity
     - Has improved over time and now has a more viable architecture, but still needs ongoing focus
     - Three different causes:
       - Original "Pure SOA" design concept
       - Use of Kuali Rice
       - "Prefactoring"

  7. Services Orientation
     - General SOA approach is wise; diversifying UI platforms requires it
     - Balance use of SOAP and REST; avoid a full WS-* implementation
     - Continue with contract-first design
     - Continue avoiding AOP
     - Need complete mock implementations
     - Be careful about Class 1 services referencing each other
     - Ensure services call each other locally
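The "ensure services call each other locally" point can be sketched as a lookup that prefers an in-JVM implementation over a remoting proxy. This is a minimal illustration only; `ServiceLocator` and its method names are hypothetical, not the actual Rice/KS service registry API.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// Hypothetical registry sketch: colocated services resolve to the in-process
// bean; a remoting proxy is created only when no local implementation exists.
class ServiceLocator {
    private final Map<String, Object> local = new HashMap<>();
    private final Map<String, Supplier<Object>> remote = new HashMap<>();

    void registerLocal(String name, Object impl) { local.put(name, impl); }
    void registerRemote(String name, Supplier<Object> proxyFactory) { remote.put(name, proxyFactory); }

    Object lookup(String name) {
        Object impl = local.get(name);          // prefer the in-JVM implementation
        if (impl != null) return impl;
        Supplier<Object> f = remote.get(name);  // fall back to a remoting proxy
        return f == null ? null : f.get();
    }
}
```

The design point is that callers never choose between local and remote themselves; the registry decides, so a colocated deployment never pays serialization and network costs for in-process calls.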

  8. Client-side Development
     - GWT was an unfortunate choice – good to see the project moving away from it
     - Also glad to see KNS being retired
     - Need to make sure KRAD doesn't become another NIH ("Not Invented Here") project
     - Leverage jQuery and Fluid – just add on what is needed
     - Use JSON-based REST APIs for client-side communication
     - Watch out for AJAX getting "chatty"

  9. Kuali Rice
     - General use of Rice was appropriate
     - Does contribute to complexity and NIH ("Not Invented Here") issues
     - Should look at more general-purpose modules that are widely adopted
     - Rice 2.0 roadmap fixes many NIH concerns
     - ImpEx should be replaced with Liquibase
     - KEW persistence has been a big part of the performance problems and complexity

  10. Persistence Layers
     - Object-Relational Mapping (ORM) technologies have a mixed history: automated ease comes at a cost
     - Fine for small/departmental apps; complex apps need to manage persistence more directly
     - Project is already running into Hibernate issues; two actions are needed to address this:
       - Add greater Hibernate expertise
       - Start transitioning to a native DAO approach
     - Rice uses a completely different persistence layer: OJB & OSCache, which are dead projects
     - Consider use of NoSQL databases for non-relational datasets
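The recommended transition to native DAOs can be sketched as a plain DAO contract with an in-memory implementation, which also doubles as the kind of complete mock implementation the review calls for. `CourseDao` and its methods are illustrative names, not actual Kuali Student interfaces; a production implementation would sit on JDBC (or Spring's JdbcTemplate) instead of a map.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Hypothetical DAO contract: the services layer codes against this interface,
// so the JDBC implementation can replace the ORM without touching callers.
interface CourseDao {
    Optional<String> findTitle(String courseId);
    void save(String courseId, String title);
}

// In-memory implementation: useful as a complete mock for unit tests, and as
// the shape a native JDBC DAO would follow (one method per query/update).
class InMemoryCourseDao implements CourseDao {
    private final Map<String, String> rows = new HashMap<>();
    public Optional<String> findTitle(String id) { return Optional.ofNullable(rows.get(id)); }
    public void save(String id, String title) { rows.put(id, title); }
}
```

The trade-off named on the slide shows up here: each query is written by hand, but the SQL (or map access) is explicit and tunable rather than generated by the ORM.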

  11. Caching
     - KS 1.1 had some manual caching; KS 1.2 has none
     - Keeps the architecture simple, but likely won't scale as needed
     - Should only cache datasets that are small, read frequently, and changed rarely
     - Distributed/clustered caching is not simple; invest wisely
     - Use Hibernate's own Level 2 caching where the ORM is used; use the same caching service in native DAOs
     - Investigate caching Class 1 service results; watch for nested objects
     - Start with Ehcache, but investigate others
     - Use JMX to provide monitoring and administration of caches
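The caching guidance above (cache only small, read-often, rarely-changed datasets, and invalidate when they change) can be illustrated with a minimal read-through cache. A real deployment would use Ehcache as the review recommends; this sketch only shows the access pattern, and `ReadThroughCache` is a hypothetical name.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Read-through cache sketch: on a miss, the loader (e.g. a Class 1 service
// call or DAO query) is invoked once and the result is kept for later reads.
class ReadThroughCache<K, V> {
    private final Map<K, V> store = new ConcurrentHashMap<>();
    private final Function<K, V> loader;

    ReadThroughCache(Function<K, V> loader) { this.loader = loader; }

    V get(K key) { return store.computeIfAbsent(key, loader); }

    // Must be called whenever the underlying dataset changes; this is why the
    // pattern only suits rarely-changing data.
    void invalidate(K key) { store.remove(key); }
}
```

Ehcache adds what this sketch deliberately omits: bounded size, expiry, optional clustering, and the JMX monitoring hooks the slide mentions.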

  12. Distributed Transactions
     - Infrastructure for JTA and XA-2PC is present throughout Rice and Student
     - XA distributed transactions are resource intensive, hurt scalability, and are temperamental
     - Should be avoided and eliminated wherever possible
     - Sometimes can adjust to other patterns, like "Best Efforts 1PC"
     - Redesign transaction boundaries to avoid them
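A rough sketch of the "Best Efforts 1PC" alternative to XA-2PC, assuming a hypothetical `TxResource` handle standing in for, e.g., a JDBC connection with autocommit off: do all the work first, then commit the resources one at a time, ordering the most failure-prone resource first. Resources committed before a failure stay committed; that small risk window is the trade-off accepted in exchange for avoiding two-phase commit.

```java
import java.util.List;

// Stand-in for a transactional resource (e.g. a JDBC Connection).
interface TxResource {
    void commit() throws Exception;
    void rollback();
}

class BestEffortsOnePC {
    // Commits resources in the given order (most likely to fail first).
    // On a commit failure, the failed and remaining resources are rolled
    // back; resources already committed cannot be undone (the risk window).
    static boolean commitAll(List<TxResource> ordered) {
        int i = 0;
        try {
            for (; i < ordered.size(); i++) ordered.get(i).commit();
            return true;
        } catch (Exception e) {
            for (int j = i; j < ordered.size(); j++) ordered.get(j).rollback();
            return false;
        }
    }
}
```

Redesigning transaction boundaries, as the slide suggests, shrinks that risk window further by keeping most operations inside a single resource's local transaction.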

  13. Software Configuration Management
     - General SCM environment is in good shape
     - Should move version control from Subversion to a distributed version control system (Git, Bazaar, Mercurial)
       - More flexible for institutional branching; also more generally reliable/flexible
       - Definitely a big change, so schedule it in advance and train people
     - Jenkins CI server should be used for more than builds/tests – use more static code analysis and enforce the results
     - Developer setup process is reasonable and well documented
     - Support for multiple developer IDEs (Eclipse, NetBeans, IntelliJ) is good
     - Code seems well organized; Spring application contexts could use some cleanup

  14. Testing
     - Excellent that the project is performance testing at this phase
       - Would be good to perf test via the UI once off GWT
     - Need automated end-user functional testing to watch for regressions; hard to do with GWT
     - Some unit testing exists, but more is needed; enforce code coverage metrics via the CI server
     - Adopt Test-Driven Development (TDD) practices, which yield:
       - Better code design
       - Smaller, cleaner codebase
       - Higher quality code
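The TDD recommendation can be illustrated with the red-green cycle on a deliberately tiny, hypothetical helper: the assertion is written first and fails (red), then the minimal implementation below is added to make it pass (green), then the code is refactored with the test as a safety net. `CreditCalc` is not actual KS code.

```java
// Hypothetical helper driven out by a failing test. In TDD the assert below
// exists before this class does; this is the minimal code that makes it pass.
class CreditCalc {
    static int totalCredits(int[] courseCredits) {
        int sum = 0;
        for (int c : courseCredits) sum += c;
        return sum;
    }
}
```

In practice these assertions would live in JUnit tests run by Jenkins, tying this recommendation to the CI-enforced coverage metrics mentioned above.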

  15. Horizontal Scalability
     - System is designed to run on multiple app server nodes – essential to scale to the concurrency needs of the system
     - Has not been tested, proven, and documented yet
     - Need to set up a multi-node test environment ASAP to ensure it works, document it, and measure it (scale-up)
     - Should also test greater node counts (4, 6, 8, +) with a deploying institution
     - Need to determine if clustered user sessions are required or if load-balanced session affinity is sufficient

  16. Project Teams
     - Services Team
       - Working well – great leadership
       - Keep grounded with real-world scenarios
       - Good constructive confrontation – avoiding "group think"
     - Development Team
       - Working well – good dynamic, quality output
       - Some challenges: KRAD was bleeding edge / had to share resources; "Core Slice" goals were overly aggressive
       - Needs a dedicated PM to coordinate activities
       - Needs deeper commitment to / training on Scrum and Agile

  17. Multi-Tenancy
     - Ability to host multiple independent institutions in a single physical instance
       - Security, data segregation, branding, delegated admin, etc.
     - No explicit documentation on the KS stance
     - Might be an appropriate requirement for this project
     - Has major design and data modeling implications; difficult and expensive to change later
     - Should make an explicit decision as early as possible

  18. Specific Recommendations

  19. Specific Recommendations
     - Pursue specific development process improvements
     - Add database/persistence expertise to the team
     - Push hard for the Kuali Rice 2.0 roadmap and adopt it
     - Use Liquibase for managing static/test datasets
     - Establish a pattern for exposing REST+JSON services
     - Move from GWT toward jQuery+Fluid+KRAD
     - Add Selenium automated end-user unit/regression tests
     - Transition from Hibernate ORM to native DAOs
     - Add a persistence caching layer using Ehcache
     - Investigate NoSQL databases for non-relational datasets
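The "REST+JSON services" recommendation implies payloads like the one built below. The field names are illustrative, and a real endpoint would use a JSON library (e.g. Jackson behind JAX-RS) rather than string formatting, which does not escape special characters.

```java
// Sketch of the kind of JSON payload a Class 1 service might expose over
// REST for client-side (jQuery) consumption. Illustrative only: field names
// are assumptions, and production code should serialize via a JSON library.
class CourseJson {
    static String toJson(String id, String title, int credits) {
        return String.format(
            "{\"id\":\"%s\",\"title\":\"%s\",\"credits\":%d}",
            id, title, credits);
    }
}
```

Compact payloads like this, fetched in batches rather than per-field, also address the "chatty AJAX" concern raised in the client-side findings.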

  20. Specific Recommendations (continued)
     - Eliminate XA-2PC distributed transactions
     - Move version control from Subversion to Git
     - Implement full TDD practices on the dev team
     - Instrument the CI server with tools and enforce the results
     - Ongoing performance testing with end-user loads
     - Perf test in a load-balanced multi-node configuration
     - Determine if user session clustering is required
     - Determine if multi-tenancy is required
     - Ensure that services call each other locally

  21. Q & A
