Introduction • The chapter will address the following questions: • What is an information system’s architecture in terms of DATA, PROCESSES, INTERFACES, and NETWORKS — the building blocks of all information systems? • What are both centralized and distributed computing alternatives for information system design, including various client/server and Internet/intranet options? • What are the database and data distribution alternatives for information system design? • What are the make versus buy alternatives and variations for information system design? • What are the user and system interface alternatives for information system design? Prepared by Kevin C. Dittman for Systems Analysis & Design Methods 4ed by J. L. Whitten & L. D. Bentley
Introduction • The chapter will address the following questions: • What are the various networking topologies and their importance in information system design? • What are the methods for general application architecture and design? • What are the differences between logical and physical data flow diagrams, and how are physical data flow diagrams used to model application architecture and guide process design? • How do you draw physical data flow diagrams for a system/application? Prepared by Kevin C. Dittman for Systems Analysis & Design Methods 4ed by J. L. Whitten & L. D. Bentley
General System Design • During general systems design the basic technical decisions are made. These decisions include: • Will the system use centralized or distributed computing? • Will the system’s data stores be centralized or distributed? If distributed, how so? What data storage technology(s) will be used? • Will software be purchased, built in-house, or both? For programs to be written, what technology(s) will be used? • How will users interface with the system? How will data be input? How will outputs be generated? • How will the system interface to other existing systems? Prepared by Kevin C. Dittman for Systems Analysis & Design Methods 4ed by J. L. Whitten & L. D. Bentley
General System Design • The decisions made during general systems design constitute the application architecture of the system. • An application architecture defines the technologies to be used by (and to build) one, more, or all information systems in terms of its data, process, interface, and network components. It serves as a framework for general design. Prepared by Kevin C. Dittman for Systems Analysis & Design Methods 4ed by J. L. Whitten & L. D. Bentley
Information Technology Architecture • Network Architectures for Client/Server Computing • What is client/server computing? • A client is a single-user computer that provides (1) user interface services and appropriate database and processing services; and (2) connectivity services to servers (and possibly other clients). • A server is a multiple-user computer that provides (1) shared database, processing, and interface services; and (2) connectivity to clients and other servers. • In client/server computing an information system’s database, software, and interfaces are distributed across a network of clients and servers which communicate and cooperate to achieve system objectives. Despite the distribution of computing resources, each system user perceives that a single computer (their own client PC) is doing all the work. Prepared by Kevin C. Dittman for Systems Analysis & Design Methods 4ed by J. L. Whitten & L. D. Bentley
Information Technology Architecture • Network Architectures for Client/Server Computing • Client/server computing is an alternative to traditional centralized computing. • In centralized computing, a multi-user computer (usually a mainframe or minicomputer) hosts all of the information system components including (1) the data storage (files and databases), (2) the business logic (software and programs), (3) the user interfaces (input and output), and (4) any system interfaces (networking to other computers and systems). The user may interact with this host computer via a terminal (or, today, a PC emulating a terminal), but all of the work is actually done on the host computer. Prepared by Kevin C. Dittman for Systems Analysis & Design Methods 4ed by J. L. Whitten & L. D. Bentley
Information Technology Architecture • Network Architectures for Client/Server Computing • Centralized Computing: • Centralized process architectures were once dominant because the cost of placing computers closer to the end-user was prohibitive. • Many (if not most) legacy applications remain centralized on large mainframe computers (such as IBM’s S/370 and 3090 families of computers) or smaller minicomputers (such as IBM’s AS/400). Prepared by Kevin C. Dittman for Systems Analysis & Design Methods 4ed by J. L. Whitten & L. D. Bentley
Information Technology Architecture • Network Architectures for Client/Server Computing • Distributed Presentation: • This alternative builds upon and enhances centralized computing applications. • The old character user interfaces are stripped from the centralized applications and regenerated as graphical user interfaces that will run on the PC. • The user interface (or presentation) is distributed off the server and onto the client. • All other elements of the centralized application remain on the server, but the system users get a friendlier graphical user interface to the system. Prepared by Kevin C. Dittman for Systems Analysis & Design Methods 4ed by J. L. Whitten & L. D. Bentley
Information Technology Architecture • Network Architectures for Client/Server Computing • Distributed Presentation: • Distributed presentation computing advantages: • It can be implemented relatively quickly since most aspects of the legacy application remain unchanged. • Users get a friendly and familiar interface to existing systems. • The useful lifetime of legacy applications can be extended until such a time as resources warrant a wholesale redevelopment of the application. • Distributed presentation computing disadvantages: • The application’s functionality cannot be significantly improved, and because only the user interface moves to the client, the solution does not exploit the full potential of the client’s desktop computer. Prepared by Kevin C. Dittman for Systems Analysis & Design Methods 4ed by J. L. Whitten & L. D. Bentley
Information Technology Architecture • Network Architectures for Client/Server Computing • Distributed Data: • Sometimes called two-tiered client/server. • This architecture places the information system’s stored data on a server, and the business logic and user interfaces on the clients. • A local or wide area network usually connects the clients to the server. • A local area network (or LAN) is a set of client computers (usually PCs) connected to one or more server computers (usually microprocessor-based, but could also include mainframes or minicomputers) through cable over relatively short distances. • A wide area network (or WAN) is an interconnected set of LANs, or the connection of PCs over a longer distance. Prepared by Kevin C. Dittman for Systems Analysis & Design Methods 4ed by J. L. Whitten & L. D. Bentley
Information Technology Architecture • Network Architectures for Client/Server Computing • Distributed Data: • The database server is fundamental to this architecture, and its technology is different from that of a file server. • File servers store the database, but the client computers must execute all database instructions. This means that entire databases and tables may have to be transported to and from the client across the network. • Database servers also store the database, but the database commands are also executed on those servers. The clients merely send their database commands to the server. The server only returns the result of the database command processing — not entire databases or tables. Thus, database servers generate much less network traffic. Prepared by Kevin C. Dittman for Systems Analysis & Design Methods 4ed by J. L. Whitten & L. D. Bentley
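The contrast can be seen in code. The sketch below is a minimal, illustrative client for a database server (the connection URL, credentials, and the ORDERS table are invented for the example, not taken from the text): the client sends a single SQL command, the engine on the server executes it, and only the matching rows travel back across the network.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class DatabaseServerClient {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection string; the database engine runs on the server "dbserver".
        String url = "jdbc:postgresql://dbserver:5432/sales";

        try (Connection conn = DriverManager.getConnection(url, "user", "password");
             PreparedStatement stmt = conn.prepareStatement(
                     "SELECT order_id, order_total FROM orders WHERE customer_id = ?")) {
            // The SQL command itself is shipped to the server for execution.
            stmt.setInt(1, 1234);

            // Only the qualifying rows cross the network -- not the entire ORDERS table.
            try (ResultSet rows = stmt.executeQuery()) {
                while (rows.next()) {
                    System.out.println(rows.getInt("order_id") + "  " + rows.getDouble("order_total"));
                }
            }
        }
    }
}
```

With a file server, by contrast, the equivalent query logic would run on the client, forcing the table's contents to be shipped over the network first.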
Information Technology Architecture • Network Architectures for Client/Server Computing • Distributed Data: • The clients in the distributed database solution typically run the business logic of the information system application. • Distributed data computing advantages: • Separates data and business logic to (1) isolate each from changes to the other, (2) make the data more available to users, and (3) retain the data integrity of centralized computing through centrally managed servers. • Distributed data computing disadvantages: • The application logic must be maintained on all of the clients. Prepared by Kevin C. Dittman for Systems Analysis & Design Methods 4ed by J. L. Whitten & L. D. Bentley
Information Technology Architecture • Network Architectures for Client/Server Computing • Distributed Data and Logic: • Referred to as three-tiered or n-tiered client/server computing. • This approach distributes databases and business logic to separate servers. • Uses the same database server(s) as in the two-tiered approach. • Uses an application server. • The application server hosts a transaction monitor to manage transactions. • Some or all of the business logic of the application can be moved from the client to the application server. • Only the user interface and some relatively stable or personal business logic need be executed on the clients. Prepared by Kevin C. Dittman for Systems Analysis & Design Methods 4ed by J. L. Whitten & L. D. Bentley
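As a rough sketch of how the tiers divide the work (the request format, host names, and "credit check" rule below are invented for illustration, not from the text), the client handles only presentation and connectivity, while the shared business logic runs once on the application server; in a full implementation the server method would in turn send database commands to a separate database server.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

public class ThreeTierSketch {

    /** Tier 1: the client handles only the user interface and connectivity. */
    static String clientRequest(String host, int port, String customerId) throws Exception {
        try (Socket socket = new Socket(host, port);
             PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
             BufferedReader in = new BufferedReader(new InputStreamReader(socket.getInputStream()))) {
            out.println("CREDIT-CHECK " + customerId);   // send the request to the application server
            return in.readLine();                        // display whatever the server decides
        }
    }

    /** Tier 2: the application server runs the shared business logic.
        (Tier 3, the database server, would be queried from here in a real system.) */
    static void applicationServer(int port) throws Exception {
        try (ServerSocket listener = new ServerSocket(port)) {
            while (true) {
                try (Socket client = listener.accept();
                     BufferedReader in = new BufferedReader(new InputStreamReader(client.getInputStream()));
                     PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
                    String request = in.readLine();
                    // The business rule lives on the server, not on every client PC.
                    out.println(request != null && request.startsWith("CREDIT-CHECK") ? "APPROVED" : "REJECTED");
                }
            }
        }
    }
}
```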
Information Technology Architecture • Network Architectures for Client/Server Computing • Distributed Data and Logic: • Distributed data and logic computing disadvantages: • Very complex to design and develop. • The most difficult aspect of three-tier client/server application design is partitioning. • Partitioning is the act of determining how to best distribute or duplicate application components (data, process, and interfaces) across the network. Prepared by Kevin C. Dittman for Systems Analysis & Design Methods 4ed by J. L. Whitten & L. D. Bentley
Information Technology Architecture • Network Architectures for Client/Server Computing • The Internet and Intranets: • The Internet is an (but not necessarily ‘the’) information superhighway that permits computers of all types and sizes, all over the world, to exchange data and information using standard languages and protocols. • An intranet is a secure network, usually corporate, that uses Internet technology to integrate desktop, workgroup, and enterprise computing into a single cohesive framework. • The intranet provides management and users with a common interface to applications and information. Prepared by Kevin C. Dittman for Systems Analysis & Design Methods 4ed by J. L. Whitten & L. D. Bentley
Information Technology Architecture • Network Architectures for Client/Server Computing • The Internet and Intranets: • Java is a cross-platform programming language designed specifically to exploit Internet standards. • Java applets (modular software components) are stored on an Internet or intranet server and downloaded to the client when the client accesses the application. • Java applets can execute on any client computing platform. • A network computer (or NC) is designed to run only Internet-based applications (such as web browsers and Java applets). • The NC (also called a thin client) is simpler and much cheaper than a personal computer (increasingly called a fat client). Prepared by Kevin C. Dittman for Systems Analysis & Design Methods 4ed by J. L. Whitten & L. D. Bentley
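A minimal applet looks like the sketch below (the class name and message are illustrative): the same compiled class file is stored on the web server and downloaded and executed by any client with a Java-capable browser or applet viewer. Applets have since been deprecated in modern Java, but they illustrate the thin-client idea described here.

```java
import java.applet.Applet;
import java.awt.Graphics;

// Compiled once, stored on the Internet/intranet server, and downloaded
// to whatever client platform requests the page that embeds it.
public class HelloApplet extends Applet {
    @Override
    public void paint(Graphics g) {
        g.drawString("Hello from a downloaded applet", 20, 20);
    }
}
```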
Information Technology Architecture • Network Architectures for Client/Server Computing • The Role of Network Technologies: • A well-designed network provides connectivity and interoperability. • Connectivity defines how computers are connected to “talk” to one another. • Interoperability is an ideal state in which connected computers cooperate with one another in a manner that is transparent to their users (the clients). • Network topology describes how a network provides connectivity between the computers on that network. Prepared by Kevin C. Dittman for Systems Analysis & Design Methods 4ed by J. L. Whitten & L. D. Bentley
Information Technology Architecture • Network Architectures for Client/Server Computing • The Role of Network Technologies: • The Bus network topology: • The simplest network topology: every computer attaches to a single shared communication line (the bus), giving a direct path between any two computer systems. • The network can contain mainframes, minicomputers (or mid-range computers), personal computers, and dumb and intelligent terminals. • By contrast, completely connecting n computers with dedicated point-to-point links would require n × (n − 1)/2 direct paths (for example, 10 links for just 5 computers); the shared bus avoids this. • Only one computer can send data through the bus at any given time. Prepared by Kevin C. Dittman for Systems Analysis & Design Methods 4ed by J. L. Whitten & L. D. Bentley
Information Technology Architecture • Network Architectures for Client/Server Computing • The Role of Network Technologies: • The Ring network topology: • Connects multiple computers and some peripherals into a ring-like structure. • Each computer can transmit messages, instructions, and data (called packets) to only one other computer (or node on the network). • Every transmission includes an address. • When a computer receives a packet, it checks the address; if the packet’s address is different from the computer’s own address, it passes the packet on to the next computer or node. • Ring networks generally transmit packets in one direction; therefore, many computers can transmit at the same time to increase network throughput. Prepared by Kevin C. Dittman for Systems Analysis & Design Methods 4ed by J. L. Whitten & L. D. Bentley
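The address check described above can be sketched in a few lines (node names, addresses, and the class itself are made up for illustration): each node either consumes a packet addressed to it or forwards it to its single downstream neighbour.

```java
public class RingNode {
    private final String address;
    private RingNode next;   // the single downstream neighbour on the ring

    public RingNode(String address) {
        this.address = address;
    }

    public void setNext(RingNode next) {
        this.next = next;
    }

    /** Receive a packet: keep it if it is addressed to this node, otherwise pass it on.
        (A real ring would also discard packets that travel all the way back to the sender.) */
    public void receive(String destinationAddress, String payload) {
        if (address.equals(destinationAddress)) {
            System.out.println(address + " accepted packet: " + payload);
        } else {
            next.receive(destinationAddress, payload);   // forward to the next node on the ring
        }
    }
}
```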
Information Technology Architecture • Network Architectures for Client/Server Computing • The Role of Network Technologies: • The Star network topology: • Links multiple computer systems through a central computer. • The central computer does not have to be a mainframe or minicomputer. • The central computer could be an application server that manages the transmission of data and messages between the other clients and servers (as in the n-tier model). Prepared by Kevin C. Dittman for Systems Analysis & Design Methods 4ed by J. L. Whitten & L. D. Bentley
Information Technology Architecture • Network Architectures for Client/Server Computing • The Role of Network Technologies: • The Hierarchical network topology: • Can be thought of as a multiple star network, where the communications processors are arranged in a hierarchy. • The top computer system (usually a mainframe) controls the entire network. • All network topologies operate according to established network protocols that permit different types of computers to communicate and interoperate. Prepared by Kevin C. Dittman for Systems Analysis & Design Methods 4ed by J. L. Whitten & L. D. Bentley
Information Technology Architecture • Data Architectures for Distributed Relational Databases • The underlying technology of client/server computing has made it possible to distribute data without loss of centralized control. • This control is being accomplished through distributed relational databases. • A relational database stores data in a tabular form. Each file is implemented as a table. Each field is a column in the table. Each record in the file is a row in the table. Related records between two tables are implemented by intentionally duplicating columns in the two tables. • A distributed relational database distributes or duplicates tables to multiple database servers (and in rare cases, clients). Prepared by Kevin C. Dittman for Systems Analysis & Design Methods 4ed by J. L. Whitten & L. D. Bentley
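The "intentionally duplicated column" that relates two tables is a foreign key. The sketch below (the connection URL, table names, and columns are invented for illustration) creates a CUSTOMER table and an ORDERS table related by repeating CUSTOMER_NUMBER in both; in a distributed RDBMS these two tables could reside on different database servers.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class CreateRelatedTables {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection; the DDL below is executed by the server's database engine.
        try (Connection conn = DriverManager.getConnection(
                     "jdbc:postgresql://dbserver/sales", "user", "password");
             Statement stmt = conn.createStatement()) {

            // Each file becomes a table; each field becomes a column; each record becomes a row.
            stmt.execute("CREATE TABLE customer ("
                    + " customer_number INTEGER PRIMARY KEY,"
                    + " name            VARCHAR(60))");

            // Related records are implemented by intentionally duplicating
            // CUSTOMER_NUMBER in the ORDERS table (a foreign key).
            stmt.execute("CREATE TABLE orders ("
                    + " order_number    INTEGER PRIMARY KEY,"
                    + " customer_number INTEGER REFERENCES customer(customer_number),"
                    + " order_total     NUMERIC(10,2))");
        }
    }
}
```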
Information Technology Architecture • Data Architectures for Distributed Relational Databases • The software required to implement distributed relational databases is called a distributed relational database management system. • A distributed relational database management system (or distributed RDBMS) is a software program that controls access to, and maintenance of, the stored data. It also provides for backup, recovery, and security. It is sometimes called a client/server database management system. Prepared by Kevin C. Dittman for Systems Analysis & Design Methods 4ed by J. L. Whitten & L. D. Bentley
Information Technology Architecture • Data Architectures for Distributed Relational Databases • What sets a distributed RDBMS apart from a PC RDBMS is the database engine. • The database engine is that part of the DBMS that executes database commands to create, read, update, and delete records (rows) in the tables. • In a PC RDBMS, the database engine that processes all database commands must execute on the client PC, even if the data is actually stored on the server. • In a distributed RDBMS, the database engine that processes all database commands executes on the database server. Prepared by Kevin C. Dittman for Systems Analysis & Design Methods 4ed by J. L. Whitten & L. D. Bentley
Information Technology Architecture • Data Architectures for Distributed Relational Databases • True data distribution partitions data to one or more database servers. • Entire tables can be allocated to different servers, or subsets of rows in a table can be allocated to different servers. • An RDBMS controls access to and manages each server. • Data replication duplicates data on one or more database servers. • Entire tables can be duplicated on different servers, or subsets of rows in a table can be duplicated on different servers. • The RDBMS not only controls access to, and management of, each server database — it also ensures that updates made on one server are propagated to any server where the data is duplicated. Prepared by Kevin C. Dittman for Systems Analysis & Design Methods 4ed by J. L. Whitten & L. D. Bentley
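A simplified way to picture the difference (the server URLs and the sales-region rule are invented for illustration, and a real distributed RDBMS performs this routing and propagation automatically): with true distribution each row lives on exactly one server chosen by a partitioning rule, while with replication the same update is applied to every server that holds a copy.

```java
import java.util.List;
import java.util.Map;

public class DistributionSketch {

    // Hypothetical partitioning rule: rows are allocated by sales region.
    static final Map<String, String> PARTITIONS = Map.of(
            "EAST", "jdbc:postgresql://dbserver-east/sales",
            "WEST", "jdbc:postgresql://dbserver-west/sales");

    // Hypothetical replication: every server keeps a full copy of the table.
    static final List<String> REPLICAS = List.of(
            "jdbc:postgresql://dbserver-east/sales",
            "jdbc:postgresql://dbserver-west/sales");

    /** True data distribution: each row is stored on exactly one server. */
    static String serverForRow(String region) {
        return PARTITIONS.get(region);
    }

    /** Data replication: an update must be applied to every copy of the data. */
    static void replicateUpdate(String sql) {
        for (String serverUrl : REPLICAS) {
            System.out.println("apply to " + serverUrl + ": " + sql);
        }
    }
}
```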
Information Technology Architecture • Interface Architectures - Inputs, Outputs, & Middleware • Batch Input/Output: • In batch processing, transactions are accumulated into batches for periodic processing. • The batch inputs are processed against master files or databases. • Transaction files or databases may also be created or updated by the transactions. • Most outputs tend to be generated to paper or microfiche on a scheduled basis. Prepared by Kevin C. Dittman for Systems Analysis & Design Methods 4ed by J. L. Whitten & L. D. Bentley
Information Technology Architecture • Interface Architectures - Inputs, Outputs, & Middleware • On-line Processing: • The majority of systems have slowly evolved from batch processing to on-line processing. • On-line systems provide for a conversational dialogue between user and computer. • Business transactions and inquiries are often best processed when they occur. • Errors are identified and corrected more quickly. • Transactions tend to be processed earlier since on-line systems eliminate the need for batch data file preparation. • On-line methods permit greater human interaction in decision making, even if the data arrives in natural batches. Prepared by Kevin C. Dittman for Systems Analysis & Design Methods 4ed by J. L. Whitten & L. D. Bentley
Information Technology Architecture • Interface Architectures - Inputs, Outputs, & Middleware • Remote Batch: • Remote batch combines the best aspects of batch and on-line I/O. • Distributed on-line computers handle data input and editing. • Edited transactions are collected into a batch file for later transmission to host computers that process the file as a batch. • Results are usually transmitted as a batch back to the original computers. Prepared by Kevin C. Dittman for Systems Analysis & Design Methods 4ed by J. L. Whitten & L. D. Bentley
Information Technology Architecture • Interface Architectures - Inputs, Outputs, & Middleware • Keyless Data Entry: • Keying errors have always been a major source of errors in computer inputs (and inquiries). • In batch systems, keying errors can be eliminated through optical character reading (OCR) and optical mark reading (OMR) technology. • The real advances in keyless data entry are coming for on-line systems in the form of auto-identification systems. • Bar coding systems (similar to universal product code systems that are commonplace in the grocery and retail industries) are widely available for many modern applications. Prepared by Kevin C. Dittman for Systems Analysis & Design Methods 4ed by J. L. Whitten & L. D. Bentley
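Part of what makes bar-code input reliable is built-in validation. As a small illustration (not from the text), the sketch below computes the UPC-A check digit that a scanner verifies before accepting a scanned code, which is one reason keyless entry eliminates so many input errors.

```java
public class UpcCheckDigit {

    /** Compute the 12th (check) digit for the first 11 digits of a UPC-A code. */
    static int checkDigit(String first11Digits) {
        int sum = 0;
        for (int i = 0; i < 11; i++) {
            int digit = first11Digits.charAt(i) - '0';
            // Odd positions (1st, 3rd, ...) are weighted by 3; even positions by 1.
            sum += (i % 2 == 0) ? digit * 3 : digit;
        }
        return (10 - (sum % 10)) % 10;
    }

    public static void main(String[] args) {
        // A scanner rejects any code whose computed check digit does not match the printed one.
        System.out.println(checkDigit("03600029145"));   // prints 2
    }
}
```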
Information Technology Architecture • Interface Architectures - Inputs, Outputs, & Middleware • Pen Input: • Some businesses use this technology for remote data collection. • UPS, for example, uses pen-based devices to collect delivery data. • A promising technology is emerging in the form of handheld PCs (HPCs). • Similar to personal organizers and personal digital assistants, these HPCs offer greater compatibility with desktop and laptop PCs. • Based on Microsoft’s Windows CE operating system, they can be programmed to become disconnected clients in a client/server application. Prepared by Kevin C. Dittman for Systems Analysis & Design Methods 4ed by J. L. Whitten & L. D. Bentley
Information Technology Architecture • Interface Architectures - Inputs, Outputs, & Middleware • Graphical User Interfaces: • GUI technology has become the user interface of choice for client/server applications. • GUIs do not automatically make an application better. • Poorly designed GUIs can negate the alleged advantages of consistent user interfaces. Prepared by Kevin C. Dittman for Systems Analysis & Design Methods 4ed by J. L. Whitten & L. D. Bentley
Information Technology Architecture • Interface Architectures - Inputs, Outputs, & Middleware • Graphical User Interfaces: • Most users interface with the Internet via a client software tool called a browser. • The browser paradigm is based on hypertext and hyperlinks. • Hypertext consists of keywords that are clearly highlighted as links to a new page of information. • Hyperlinks are links from graphics, buttons, and areas that lead to a different page of information. • These links make it easy to navigate from page-to-page and application-to-application. Prepared by Kevin C. Dittman for Systems Analysis & Design Methods 4ed by J. L. Whitten & L. D. Bentley
Information Technology Architecture • Interface Architectures - Inputs, Outputs, & Middleware • Electronic Messaging and Work Group Technology: • Information systems are being designed to directly incorporate electronic mail. • For example, Microsoft Outlook and Exchange Server and IBM/Lotus Notes allow for the construction of intelligent electronic forms that can be integrated into an application. Prepared by Kevin C. Dittman for Systems Analysis & Design Methods 4ed by J. L. Whitten & L. D. Bentley
Information Technology Architecture • Interface Architectures - Inputs, Outputs, & Middleware • Electronic Data Interchange: • Businesses that operate in many locations and businesses that seek more efficient exchange of transactions with their suppliers and/or customers often utilize electronic data interchange. • Electronic data interchange (EDI) is the electronic flow of business transactions between customers and suppliers. • With EDI, a business can eliminate its dependence on paper documents and mail, plus dramatically reduce response time. • Various EDI standards exist for the standardized exchange of data between organizations within the same industry. Prepared by Kevin C. Dittman for Systems Analysis & Design Methods 4ed by J. L. Whitten & L. D. Bentley
Information Technology Architecture • Interface Architectures - Inputs, Outputs, & Middleware • Imaging and Document Interchange: • Similar to EDI except that the actual images of forms and data are transmitted and received. • It is particularly useful in applications in which the form images or graphics are required (e.g., the insurance industry). Prepared by Kevin C. Dittman for Systems Analysis & Design Methods 4ed by J. L. Whitten & L. D. Bentley
Information Technology Architecture • Interface Architectures - Inputs, Outputs, & Middleware • Middleware: • Information systems must also interface to other information systems. • System integration is the process of making heterogeneous information systems (and computer systems) interoperate. • A key technology used to interface and integrate systems is middleware. • Middleware is utility software that serves to interface systems built with incompatible technologies. Middleware serves as a consistent bridge between two or more technologies. It may be built into operating systems, but it is also frequently sold as a separate product. Prepared by Kevin C. Dittman for Systems Analysis & Design Methods 4ed by J. L. Whitten & L. D. Bentley
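One familiar example of middleware is database connectivity software such as ODBC or JDBC: application code is written against one consistent interface, and only a driver and connection string change when the underlying database technology changes. The sketch below is illustrative (the URLs, credentials, and CUSTOMER table are invented), not a description of any particular product.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class MiddlewareSketch {

    /** The same application code works against different back ends;
        the JDBC middleware layer hides each vendor's specific protocol. */
    static void printCustomers(String jdbcUrl, String user, String password) throws Exception {
        try (Connection conn = DriverManager.getConnection(jdbcUrl, user, password);
             Statement stmt = conn.createStatement();
             ResultSet rows = stmt.executeQuery("SELECT name FROM customer")) {
            while (rows.next()) {
                System.out.println(rows.getString("name"));
            }
        }
    }

    public static void main(String[] args) throws Exception {
        // Swapping the connection string (and driver) is all it takes to target another system.
        printCustomers("jdbc:postgresql://dbserver/sales", "user", "password");
        // printCustomers("jdbc:oracle:thin:@legacyhost:1521:sales", "user", "password");
    }
}
```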
Information Technology Architecture • Interface Architectures - Inputs, Outputs, & Middleware • Selecting User and System Interface Technologies: • The preferred or approved user and system interface technologies may be specified as part of the INTERFACE architecture. • An organization may leave interface technologies as a decision to be made on a project-by-project basis. • An organization may establish macro guidelines for interfaces and leave the micro decisions to individual projects. Prepared by Kevin C. Dittman for Systems Analysis & Design Methods 4ed by J. L. Whitten & L. D. Bentley
Information Technology Architecture • Process Architecture - The Software Development Environment and System Management • The PROCESS architecture of an application is defined in terms of the software languages and tools that will be used to develop the business logic and application programs. • This is expressed as a menu of choices since different software development environments (SDEs) are suited to different applications. • A software development environment is a language and tool kit for constructing information system applications. It is usually built around one or more programming languages such as COBOL, Basic, C or C++, Pascal, Smalltalk, or Java. Prepared by Kevin C. Dittman for Systems Analysis & Design Methods 4ed by J. L. Whitten & L. D. Bentley
Information Technology Architecture • Process Architecture - The Software Development Environment and System Management • SDEs for Centralized Computing & Distributed Presentation: • The software development environment for centralized computing consists of: • An editor and compiler, usually for COBOL, to write programs. • A transaction monitor, usually CICS, to manage on-line transactions and terminal screens. • A file management system, such as VSAM, or a database management system, such as DB2. Prepared by Kevin C. Dittman for Systems Analysis & Design Methods 4ed by J. L. Whitten & L. D. Bentley
Information Technology Architecture • Process Architecture - The Software Development Environment and System Management • SDEs for Centralized Computing & Distributed Presentation: • The personal computer brought many new COBOL development tools down from the mainframe to the desktop. • A PC-based COBOL SDE provided the programmer with more powerful editors, and testing and debugging tools at the workstation level. • A programmer could do much of the development work at the PC level, and then upload the code to the central computer for system testing, performance tuning, and production. • The SDE could be interfaced with a CASE tool and code generator to take advantage of process models developed during systems analysis. Prepared by Kevin C. Dittman for Systems Analysis & Design Methods 4ed by J. L. Whitten & L. D. Bentley
Information Technology Architecture • Process Architecture - The Software Development Environment and System Management • SDEs for Centralized Computing & Distributed Presentation: • SDEs also provide tools to develop distributed presentation client/server applications. • The Micro Focus Dialog Manager, for example, provided COBOL Workbench users with tools to build Windows-based user interfaces that could cooperate with the CICS transaction monitors and the mainframe COBOL programs. Prepared by Kevin C. Dittman for Systems Analysis & Design Methods 4ed by J. L. Whitten & L. D. Bentley
Information Technology Architecture • Process Architecture - The Software Development Environment and System Management • SDEs for Two-Tier Client/Server: • The SDE for two-tiered client/server applications (also called distributed data) consists of a client-based programming language with built-in SQL connectivity to one or more server database engines. • SDEs provide the following: • Rapid application development (RAD) for quickly building the graphical user interface that will be replicated and executed on all of the client PCs. • Automatic generation of the template code for the above GUI and associated system events (such as mouse-clicks, keystrokes, etc.) that use the GUI. The programmer only has to add the code for the business logic. Prepared by Kevin C. Dittman for Systems Analysis & Design Methods 4ed by J. L. Whitten & L. D. Bentley
Information Technology Architecture • Process Architecture - The Software Development Environment and System Management • SDEs for Two-Tier Client/Server: • SDEs provide the following: (continued) • A programming language that is compiled for replication and execution on the client PCs. • Connectivity (in the above language) for various relational database engines, and interoperability with those engines. Interoperability is achieved by including SQL database commands (to, for example, create, read, update, delete, and sort records) that will be sent to the database engine for execution on the server. • A sophisticated code testing and debugging environment for the client. Prepared by Kevin C. Dittman for Systems Analysis & Design Methods 4ed by J. L. Whitten & L. D. Bentley
Information Technology Architecture • Process Architecture - The Software Development Environment and System Management • SDEs for Two-Tier Client/Server: • SDEs provide the following: (continued) • A system testing environment that helps the programmer develop, maintain, and run a reusable test script of user data, actions, and events against the compiled programs to ensure that code changes do not introduce new or unforeseen problems. • A report writing environment to simplify the creation of new end-user reports from a remote database. • A help authoring system for the client PCs. Prepared by Kevin C. Dittman for Systems Analysis & Design Methods 4ed by J. L. Whitten & L. D. Bentley
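In practice, the developer's hand-written code in such a two-tier SDE is mostly a GUI event handler containing embedded SQL that is shipped to the database server. The sketch below is a minimal illustration of that pattern (a plain Swing button and an invented ORDERS query and connection URL stand in for whatever the SDE would actually generate).

```java
import javax.swing.JButton;
import javax.swing.JFrame;
import javax.swing.JOptionPane;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class OrderLookupForm {
    public static void main(String[] args) {
        JFrame frame = new JFrame("Order Lookup");          // GUI replicated and executed on every client PC
        JButton lookup = new JButton("Total orders for customer 1234");

        // The business logic the programmer adds is attached to a GUI event.
        lookup.addActionListener(event -> {
            String url = "jdbc:postgresql://dbserver/sales";  // the database engine runs on the server
            try (Connection conn = DriverManager.getConnection(url, "user", "password");
                 PreparedStatement stmt = conn.prepareStatement(
                         "SELECT SUM(order_total) FROM orders WHERE customer_id = ?")) {
                stmt.setInt(1, 1234);
                try (ResultSet rs = stmt.executeQuery()) {
                    rs.next();
                    JOptionPane.showMessageDialog(frame, "Total: " + rs.getDouble(1));
                }
            } catch (Exception e) {
                JOptionPane.showMessageDialog(frame, "Lookup failed: " + e.getMessage());
            }
        });

        frame.add(lookup);
        frame.pack();
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        frame.setVisible(true);
    }
}
```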
Information Technology Architecture • Process Architecture - The Software Development Environment and System Management • SDEs for MultiTier Client/Server: • Unlike two-tiered applications, n-tiered applications must support more than 100 users with mainframe-like transaction response time and throughput, and with 100-gigabyte or larger databases. • The SDEs in this class must provide all of the capabilities typically associated with two-tiered SDEs plus the following: • Support for heterogeneous computing platforms, both client and server, including Windows, OS/2, UNIX, Macintosh, and legacy mainframes and minicomputers. • Code generation and programming for both clients and servers. Most tools in this genre support object-oriented languages such as C++ and Smalltalk. Prepared by Kevin C. Dittman for Systems Analysis & Design Methods 4ed by J. L. Whitten & L. D. Bentley
Information Technology Architecture • Process Architecture - The Software Development Environment and System Management • SDEs for MultiTier Client/Server: • The SDEs in this class must provide all of the capabilities typically associated with two-tiered SDEs plus the following: (continued) • A strong emphasis on reusability using software application frameworks, templates, components, and objects. • Bundled mini-CASE tools for analysis and design that interoperate with code generators and editors. • Tools to help analysts and programmers partition application components between the clients and servers. • Tools to help developers deploy and manage the finished application to clients and servers. This generally includes security management tools. Prepared by Kevin C. Dittman for Systems Analysis & Design Methods 4ed by J. L. Whitten & L. D. Bentley
Information Technology Architecture • Process Architecture - The Software Development Environment and System Management • SDEs for MultiTier Client/Server: • The SDEs in this class must provide all of the capabilities typically associated with two-tiered SDEs plus the following: (continued) • Ability to automatically ‘scale’ the application to larger and different platforms, client and server. This issue of scalability was always assumed in the mainframe computing era, but is relatively new to the client/server computing era. • Sophisticated software version control and application management. Prepared by Kevin C. Dittman for Systems Analysis & Design Methods 4ed by J. L. Whitten & L. D. Bentley