Next Generation Digital Library
• Distributed, object-oriented, 3-layered architecture using CORBA and software agents
• Messaging architecture
• Agent architecture
• Database architecture
Modifying Content Review
• Existing content review system
• Proposed content review system
• Expected gains / performance enhancements
  • Fully automated
  • Daily logs
  • Ease of changing parameters such as the frequency of content review
Proposed Architecture: The Software Agents
• JADE provides built-in agents to facilitate application development and management:
  • AMS – Agent Management System
  • DF – Directory Facilitator
• Other application-specific agents are added to complete the application:
  • AgentStarter
  • AgentContent
  • AgentReviewer
  • AgentUser
  • AgentMonitor
  • AgentEmail
  • AgentErrorlog
Proposed Architecture (agent interaction diagram)
• AgentStarter starts the daily review process.
• AgentContent looks for new content to be reviewed and sends a message to find reviewers for it.
• AgentReviewer checks the database for any available reviewers.
• If no available reviewers are found, AgentUser is asked to check whether any guest users with the same interests are available.
• AgentEmail sends the appropriate email messages to the selected reviewers.
• AgentErrorlog records database and other errors raised by the agents.
• AgentMonitor oversees the whole process.
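The reviewer-selection flow above can be sketched as a minimal Python simulation. The real system uses JADE agents exchanging messages; the function and data names here are hypothetical stand-ins for the AgentReviewer/AgentUser hand-off, not the project's actual code.

```python
# Simulated reviewer selection: AgentReviewer checks for available reviewers;
# if none match, AgentUser looks for guest users with the same interests.
# All names and records are illustrative, not from the DLNET system.

def find_reviewers(content_topic, reviewers, guest_users):
    """Return (selected people, kind) for a piece of content."""
    available = [r for r in reviewers
                 if r["available"] and content_topic in r["interests"]]
    if available:
        return available, "reviewer"
    # No regular reviewers: fall back to guest users with matching interests.
    guests = [g for g in guest_users if content_topic in g["interests"]]
    return guests, "guest"

reviewers = [{"name": "Alice", "available": False, "interests": {"networking"}}]
guests = [{"name": "Bob", "interests": {"networking"}}]

selected, kind = find_reviewers("networking", reviewers, guests)
# AgentEmail would now send messages to each selected reviewer or guest user.
```

In the deployed system this logic is distributed across agents communicating via JADE messages rather than direct function calls.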
Performance Comparison
• The hypothesis
  • The software-agent-based content review process has a better response time and is more maintainable, and therefore performs better, than the existing J2EE-based system.
• Experimental variables
  • The type of system is the independent variable.
  • The different performance measures are the dependent variables.
• Subject assignment
  • Within-subject assignment
• Sample size
  • Carefully chosen based on observations and discussions with the team.
Dependent Variable: Response Time
• What is it?
  • The time taken by the application to complete the review process.
• How do we measure it?
  • Under different load conditions
  • With separate modules that measure response time
• Importance
  • As the user base grows and advanced features are incorporated, response time becomes critical.
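A separate response-time module like the one described can be sketched as a timing wrapper around the review step. This is a minimal illustration; `review_batch` is a hypothetical stand-in for the actual review process.

```python
import time

def timed_review(review_fn, *args):
    """Run the review step and return (result, elapsed seconds),
    mirroring a separate response-time measurement module."""
    start = time.perf_counter()
    result = review_fn(*args)
    elapsed = time.perf_counter() - start
    return result, elapsed

def review_batch(items):
    # Stand-in for the real review step; illustrative only.
    return [item.upper() for item in items]

result, elapsed = timed_review(review_batch, ["res1", "res2", "res3"])
```

Running the same wrapper under different load conditions (varying the batch size) gives the response-time series to compare across the two systems.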
Dependent Variable: Maintainability
• What is it?
  • The ease of maintaining the application, i.e., of adding to or updating it.
• How do we measure it?
  • Three people make the same change.
  • The time taken to add a feature or piece of business logic is recorded.
  • The time taken to change a business rule is recorded.
  • The average time is calculated.
• Importance
  • Constant updates are required to sustain the application.
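The maintainability measure reduces to averaging the three recorded times per change. A one-line sketch, with illustrative (not measured) numbers:

```python
def maintainability_score(times_minutes):
    """Average the time taken by the three people who made the same change."""
    return sum(times_minutes) / len(times_minutes)

# Hypothetical timings in minutes for the two kinds of change:
add_feature = maintainability_score([45, 50, 55])  # adding a feature / business logic
change_rule = maintainability_score([20, 25, 30])  # changing a business rule
```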
Scalability
• What is it?
  • The effectiveness of the application under increased load.
• How do we measure it?
  • We test the two applications under heavy load conditions and observe the results.
• Importance
  • Over time, DLNET will see an increased number of users, resources, and reviewers.
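The heavy-load test can be sketched as running the same processing step at increasing load levels and recording the outcome at each level. The process function here is a hypothetical placeholder for the review pipeline.

```python
def run_load_test(process_fn, load_levels):
    """Run the same process at increasing load levels and record
    how many items were handled at each level."""
    results = {}
    for n in load_levels:
        items = [f"resource-{i}" for i in range(n)]
        results[n] = len(process_fn(items))
    return results

def process(items):
    # Stand-in for the review pipeline; real runs would also record timings.
    return list(items)

results = run_load_test(process, [10, 100, 1000])
```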
Correctness
• What is it?
  • Whether the application remains correct under varying load conditions.
• How do we measure it?
  • The reviewers selected by both applications are checked manually for any mistakes.
• Importance
  • The applications need to remain effective under increased load.
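The manual check compares each application's selected reviewers against the correct set. A small helper can tabulate the mistakes; the reviewer names are illustrative.

```python
def correctness_errors(selected_by_app, manually_verified):
    """Compare an application's selected reviewers against the manually
    verified set; return (wrongly selected, missed) reviewer lists."""
    app, truth = set(selected_by_app), set(manually_verified)
    return sorted(app - truth), sorted(truth - app)

# Hypothetical example: the app picked carol instead of bob.
wrong, missed = correctness_errors(["alice", "carol"], ["alice", "bob"])
```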
Reliability
• What is it?
  • The extent to which an application yields consistent, stable, and uniform results over repeated observations or measurements under the same conditions.
• How do we measure it?
  • The applications are left running for a long time and failures are recorded.
• Importance
  • The applications should give error-free, valid results when left running continuously.
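Leaving the application running and recording failures can be sketched as a loop that logs every exception with its iteration index. The task function is a hypothetical stand-in that fails on bad input.

```python
def reliability_run(task_fn, inputs):
    """Run the task over many iterations, recording (iteration, error)
    for every failure, as in a long-running reliability test."""
    failures = []
    for i, x in enumerate(inputs):
        try:
            task_fn(x)
        except Exception as exc:
            failures.append((i, repr(exc)))
    return failures

def task(x):
    # Illustrative task that fails on negative input.
    if x < 0:
        raise ValueError("bad input")

failures = reliability_run(task, [1, 2, -1, 3])
```

The failure count per unit of running time is the reliability figure compared between the two systems.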
Reusability
• What is it?
  • The ease of reusing a piece of code elsewhere in the application.
• How do we measure it?
  • We select three pieces of code in the two applications and reuse each for another function.
  • We note the time taken to develop the change and to successfully integration-test the applications again.
• Importance
  • Better reusability saves considerable time, as the units being reused are already unit-tested.
Test Plan
• Measurement instruments
  • A monitor agent for response time
  • Similar code added to the current application
  • Logs generated for measuring correctness
• Test data
  • We observe the current application and study the database design carefully to understand the various scenarios of different load conditions.
Test Plan
• Test procedure
  • Response time, correctness, scalability – one common procedure is followed:
    • Replicate the database.
    • Disassociate the existing application from the live database and link it to the test database.
    • Run the existing application – logs are generated.
    • Run the proposed application against another copy of the test database – logs are generated.
  • Reliability – the applications are run in parallel.
  • Maintainability, reusability – the change is made in each application, and the time to understand the requirement, write or change the code, unit-test, and then integration-test the application is noted.
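Once both runs have produced logs, comparing them per content item shows where the two applications disagree. A minimal sketch, assuming each log maps a content ID to the selected reviewer (the format and values are hypothetical):

```python
def compare_logs(existing_log, proposed_log):
    """Report content items where the reviewers selected by the
    existing and proposed applications differ."""
    diffs = []
    for item, reviewer in existing_log.items():
        if proposed_log.get(item) != reviewer:
            diffs.append(item)
    return diffs

# Illustrative per-item logs from the two runs:
existing = {"res-1": "alice", "res-2": "bob"}
proposed = {"res-1": "alice", "res-2": "carol"}
diffs = compare_logs(existing, proposed)
```

Items in `diffs` are then checked manually, per the correctness procedure, to decide which application made the mistake.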
Conclusions
• A software-agent-based architecture will make the current content review process more efficient, maintainable, and reusable.
• The measurement methods designed for this research can be further used for other modules of DLNET.