Quality of System requirements 1 Performance The performance of a Web service, and therefore of Solution 2, involves the speed at which a request can be processed and serviced. This requirement can be determined by measurements such as throughput and latency. How? Throughput is the measure of the number of requests serviced in a given time. Latency is the delay experienced from when a request is submitted to when a response is initiated. The response time and the throughput depend on the workload that the Web server is experiencing at the time. Both latency and throughput can be measured by using timestamps taken at the request and response times.
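As a sketch of the timestamp approach above (the `measure` helper and the dummy handler are illustrative stand-ins, not part of Solution 2), per-request latency comes from the request/response timestamp pair, and throughput from the overall elapsed time:

```python
import time

def measure(handler, requests):
    """Record per-request latency and overall throughput via timestamps."""
    latencies = []
    start = time.monotonic()
    for req in requests:
        t0 = time.monotonic()                    # timestamp at request time
        handler(req)                             # service the request
        latencies.append(time.monotonic() - t0)  # delay until response
    elapsed = time.monotonic() - start
    throughput = len(latencies) / elapsed        # requests serviced per second
    return latencies, throughput

# hypothetical handler standing in for the Web service
lat, tput = measure(lambda r: time.sleep(0.001), range(50))
print(f"avg latency: {sum(lat) / len(lat):.4f}s, throughput: {tput:.1f} req/s")
```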
Quality of System requirements 2 Reliability Reliability measures the quality of Solution 2 in terms of performance, given an amount of time and the network conditions, while maintaining a high service quality. It can also be defined by the number of failures per day and the medium of delivery. Reliability shows the percentage of the time that a request is completed by Solution 2 with success, or the percentage of requests that have failed. The count of failures can be based on the number of dropped deliveries, duplicate deliveries, faulty message deliveries, and out-of-order deliveries. An event may either succeed or fail and therefore takes the value 1 or 0. How? Web Services Reliability (WS-Reliability) is a specification for open, reliable Web service messaging. WS-Reliability can be embedded into SOAP as an additional extension rather than being tied to a transport-level protocol. This specification provides reliability in addition to interoperability, thus allowing communication in a platform- and vendor-independent manner [2].
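Since each delivery event takes the value 1 (success) or 0 (failure), the reliability percentage can be computed directly; a minimal sketch (the delivery log is made up for illustration):

```python
def reliability(events):
    """Each event is 1 (success) or 0 (failure); reliability is the success rate."""
    return sum(events) / len(events)

# hypothetical delivery log: dropped, duplicate, faulty, and
# out-of-order deliveries all count as failures (0)
log = [1, 1, 0, 1, 1, 1, 0, 1, 1, 1]
print(f"reliability: {reliability(log):.0%}")   # 8 of 10 requests succeeded
```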
Quality of System requirements 3 Scalability Scalability defines how expandable Solution 2 can be. Solution 2 can be introduced to new interfaces and techniques, and this makes keeping the service up to date a necessity. Solution 2 should be able to handle heavy load while making sure that the performance, in terms of the response time experienced by its clients, is not objectionable. How? The Performance Non-Scalability Likelihood (PNL) metric is used to predict whether the system will be able to withstand higher traffic loads without affecting its performance levels. This metric is used to calculate the intensity of the loads at which the system cannot perform without degrading the response time and throughput. The calculation of PNL involves generating potential workloads and studying the behaviour of the system, which will be similar to how the system would react given such varying workloads. If the system crashes, that shows that it is not scalable enough to accommodate potential future workloads.
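The source does not give the PNL formula, but the workload-sweep idea behind it can be sketched: generate increasingly heavy synthetic workloads and report the first load level at which latency degrades past an acceptable threshold. Everything here (the `find_knee` helper, the `latency_model` stand-in, and the numbers) is hypothetical:

```python
def find_knee(run_at_load, loads, max_latency):
    """Sweep synthetic workloads in increasing order and return the first
    load level at which measured latency exceeds the acceptable threshold."""
    for load in loads:
        if run_at_load(load) > max_latency:
            return load          # the system stops scaling here
    return None                  # scaled through every tested workload

# toy model standing in for the real system: latency grows with load
latency_model = lambda load: 0.01 * load
knee = find_knee(latency_model, [10, 50, 100, 200, 400], max_latency=2.0)
print("performance degrades beyond load:", knee)
```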
Quality of System requirements 4 Accuracy Accuracy is defined as the level to which Solution 2 gives accurate results for the received requests. How? An experiment can be conducted to measure the accuracy of the system by calculating the standard deviation of the reliability. The number of errors generated by Solution 2, the number of fatal errors, and the frequency of these situations determine the accuracy of the system. The closer the value is to zero, the more accurate the measurement is considered.
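The standard-deviation check above can be sketched with Python's standard `statistics` module (the daily reliability figures are invented for illustration):

```python
from statistics import pstdev

# hypothetical daily reliability measurements for Solution 2 (success rates)
daily_reliability = [0.98, 0.97, 0.99, 0.96, 0.98]

# the closer the standard deviation is to zero, the more
# consistent, and hence the more accurate, the system is considered
spread = pstdev(daily_reliability)
print(f"standard deviation of reliability: {spread:.4f}")
```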
Quality of System requirements 5 Integrity Integrity guarantees that any modifications to the system based on Semantic Web Services are performed in an authorized manner. Data integrity assures that the data is not corrupted during transfer, and if it is corrupted, it assures that there are enough mechanisms in the design to detect such modifications. How? Data integrity is the measure of a Web service's accurate transactional and data delivery abilities. The data messages that are received are verified to confirm that they have not been modified in transit. There are a number of tools on the market, like SIFT, that can collect and monitor the data being sent and received between the communicating parties. These tools can be used to monitor the number of faulty transactions that go unidentified and the data messages that are received but whose checksum or hash cannot be tallied. Data integrity can only take the values 0 or 1, meaning that data either has integrity or does not; there is no middle ground.
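Independent of any particular monitoring tool, the hash-tallying check itself can be sketched with Python's standard `hashlib` (the payloads and helper names are illustrative):

```python
import hashlib

def send(payload: bytes):
    """Sender attaches a SHA-256 digest so the receiver can detect corruption."""
    return payload, hashlib.sha256(payload).hexdigest()

def verify(payload: bytes, digest: str) -> int:
    """Integrity is binary: 1 if the hash tallies, 0 otherwise."""
    return 1 if hashlib.sha256(payload).hexdigest() == digest else 0

msg, tag = send(b"transfer:100")
print(verify(msg, tag))                 # 1: message arrived intact
print(verify(b"transfer:900", tag))     # 0: message was modified in transit
```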
Quality of System requirements 6 Availability Availability is the probability that Solution 2 is up and in a ready-to-use state. High availability assures that there is the least amount of system or server failures, even during peak times when there is heavy traffic to and from the server, and that the given service is available at all times. How? As the system is either available or unavailable, the time remaining after subtracting the down time can be termed the "up time", the time that the system is available. Since measuring down time is easier (because down time is smaller than up time), calculating the down time helps us measure the availability of the system. Keeping track of all the events that failed during operation can reveal the down time.
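The down-time calculation above amounts to a simple ratio; a minimal sketch (the monitoring window and down-time figure are hypothetical):

```python
def availability(total_hours, downtime_hours):
    """Up time = total time minus down time; availability is their ratio."""
    return (total_hours - downtime_hours) / total_hours

# hypothetical month of monitoring: 720 hours, 3.6 hours of recorded failures
print(f"availability: {availability(720, 3.6):.1%}")   # 99.5%
```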
Quality of System requirements 7 Accessibility Accessibility is a measure of the success rate of a service instantiation at a given time. Solution 2, for example, might not be accessible even though it is still available. It may be up and running but might not be able to process a request due to high workload. Accessibility in turn depends on how scalable the Solution 2 system design is, because a highly scalable system serves requests irrespective of their volume. How? Accessibility is the ratio of the number of successful responses received from the server to the number of request messages sent by the clients.
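That ratio can be computed directly from request logs; a minimal sketch (the counts are invented for illustration):

```python
def accessibility(successful_responses, requests_sent):
    """Ratio of successful responses to request messages sent by clients."""
    return successful_responses / requests_sent

# hypothetical peak-hour sample: the service was available but overloaded,
# so some requests went unserviced even though the server was up
print(f"accessibility: {accessibility(940, 1000):.1%}")   # 94.0%
```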
Quality of System requirements 8 Interoperability Web services are accessed by thousands of clients around the world using different system architectures and different operating systems. Therefore, interoperability within Solution 2 means that the solution can be used by any system, irrespective of operating system or system architecture, and that accurate and identical results are rendered in any environment. How? Interoperability can be calculated as the ratio of the total number of environments in which the Web service runs to the total number of possible environments that could be used. This interoperability value measures the successful execution of Solution 2 in different environments, such as operating systems, programming languages, and hardware types.
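The environment ratio can be sketched from a compatibility matrix (the environment names below are hypothetical examples, not a claim about Solution 2):

```python
# hypothetical compatibility matrix: environments the service actually
# runs in, out of all environments that could be used
supported = {"Linux/x86", "Windows/x86", "macOS/arm", "Linux/arm"}
possible  = {"Linux/x86", "Windows/x86", "macOS/arm", "Linux/arm",
             "Solaris/sparc", "FreeBSD/x86"}

interoperability = len(supported) / len(possible)
print(f"interoperability: {interoperability:.0%}")   # 4 of 6 environments
```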
Quality of System requirements 9 Unit testing The primary goal of unit testing is to take the smallest piece of testable software in the application, isolate it from the remainder of the code, and determine whether it behaves exactly as system administrators expect. How? Each unit is tested separately before being integrated into modules, so that the interfaces between modules can then be tested. Unit testing is effective because a large percentage of defects are identified during its use.
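A minimal sketch of testing a unit in isolation, using Python's standard `unittest` (the `checksum` function is a hypothetical stand-in for the smallest testable piece of the application):

```python
import unittest

def checksum(payload: str) -> int:
    """Hypothetical smallest testable unit: a toy checksum over a payload."""
    return sum(ord(c) for c in payload) % 256

class ChecksumTest(unittest.TestCase):
    # the unit is exercised in isolation, before any module integration
    def test_known_value(self):
        self.assertEqual(checksum("ab"), (97 + 98) % 256)

    def test_empty_payload(self):
        self.assertEqual(checksum(""), 0)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(ChecksumTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```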
Quality of System requirements 10 Integration/Interaction testing Integration testing is the phase in software testing in which individual software modules are combined and tested as a group. It occurs after unit testing and before system testing. How? Integration testing takes as its input modules that have been unit tested, groups them in larger aggregates, applies tests defined in an integration test plan to those aggregates, and delivers as its output the integrated system ready for system testing.
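Continuing the sketch above, an integration test combines units that were tested separately and checks the interface between them (the `parse` and `store` modules are hypothetical):

```python
import unittest

# units assumed already unit-tested in isolation (hypothetical modules)
def parse(raw: str) -> dict:
    key, value = raw.split("=", 1)
    return {key: value}

def store(db: dict, record: dict) -> None:
    db.update(record)

class ParseStoreIntegrationTest(unittest.TestCase):
    # the two modules are grouped into a larger aggregate and
    # tested together, per the integration test plan
    def test_parsed_record_is_stored(self):
        db = {}
        store(db, parse("user=alice"))
        self.assertEqual(db, {"user": "alice"})

suite = unittest.defaultTestLoader.loadTestsFromTestCase(ParseStoreIntegrationTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```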
Quality of System requirements 11 Usability and accessibility testing The user will be involved in the evaluation process beginning with the early stages of development. How? The usability and accessibility of the application will be evaluated through the following methods: - Heuristic evaluation – a theoretical stage, based on the heuristics developed by Jakob Nielsen, needed to ensure that most of the usability problems have been taken into account.
Quality of System requirements 11 Usability and accessibility testing - Formative evaluation – implemented along the entire development process, from the early stages until the final solution. Part of this evaluation procedure is already taking place through the Metamorphosis platform, which is considered a testbed application for mEducator developments (e.g. the metadata scheme implementation). - Summative evaluation – will take place at the end of the development process, using the final version. At this stage, users outside the consortium will also be involved in the evaluation process.
Quality of System requirements Thank You