This presentation examines the challenges in validating the functionality of grid agents so that service matching is reliable in distributed environments, and describes a methodical approach to functional validation based on testing sequences and a commitment phase.
Matching Conflicts: Functional Validation of Agents
George Cybenko and Guofei Jiang
Thayer School of Engineering, Dartmouth College
18th July 1999, AAAI'99, Orlando, Florida
Acknowledgements: This work was partially supported by AFOSR grant F49620-97-1-0382, NSF grant CCR-9813744 and DARPA contract F30602-98-2-0107.
Multiagent Grids
• Several efforts are underway to build multiagent grids such as Globus, Infospheres, Sciagents and DARPA CoABS.
• Basic structure (diagram): a client with a user interface, servers offering services, and a matchmaker. Servers advertise their services to the matchmaker; the client sends a service-location request to the matchmaker and receives a reply listing matching servers; the client then sends its request to a server directly. (Minimal interface sketch below.)
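A minimal sketch of these roles as Java interfaces may help fix the vocabulary; the names below are illustrative only and are not drawn from Globus, Infospheres, Sciagents or CoABS:

    import java.util.List;

    // Illustrative roles only: servers advertise service descriptions to a matchmaker,
    // and clients ask the matchmaker to locate candidate services by keyword.
    interface ServiceDescription {
        String keywords();   // keywords/ontology terms describing the service
        String endpoint();   // where the client can reach the service
    }

    interface Matchmaker {
        void advertise(ServiceDescription service);            // server -> matchmaker
        List<ServiceDescription> locate(String keywordQuery);  // client -> matchmaker -> reply
    }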
Matching Conflicts
• Brokers and matchmakers use keywords and domain ontologies to specify services.
• Keywords and ontologies cannot be defined and interpreted precisely enough to make brokering or matchmaking between grid agents' services robust in a truly distributed, heterogeneous computing environment.
• Matching conflicts arise between a client's requested functionality and a service provider's actual functionality.
An Example
• A client requires a three-dimensional FFT. A request is made to a broker or matchmaker for an FFT service based on keywords and possibly parameter lists.
• The broker or matchmaker uses the keywords to search its catalog of services and returns several candidate remote services.
• The client must validate the actual functionality of these remote services before committing to use one of them.
• Question: Do all parties involved in the computation agree on the actual agent functionality?
Functional Validation
• Functional validation means that a client presents to a prospective service agent a sequence of challenges. The service agent replies to these challenges with corresponding answers. Only after the client is satisfied that the service agent's answers are consistent with the client's expectations is an actual commitment made to using the service.
• Three steps:
• Service agent identification and location.
• Service agent functional validation.
• Commitment to the service agent.
Our approach
• Challenge the service agent with test cases x1, x2, ..., xk. The remote service agent R provides the corresponding answers fR(x1), fR(x2), ..., fR(xk). The client C may or may not have independent access to the answers fC(x1), fC(x2), ..., fC(xk). (A sketch of this loop follows this slide.)
• Some basic questions:
• How large should k be?
• What if fC(x) is not known by the client agent?
• What if the service agent is fee based, so the answers fR(x) are not given away freely before commitment?
• How can the validation process be implemented online?
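A minimal sketch of this challenge-response loop, assuming a hypothetical RemoteService stub for R and a locally computable reference function for fC; the names and the numeric tolerance are illustrative, not part of the deck's testbed:

    import java.util.List;
    import java.util.function.Function;

    // Challenge R with test cases x1..xk and compare its answers with the client's own.
    public class ChallengeValidator {
        public interface RemoteService {                 // stand-in for the remote agent R
            double[] apply(double[] x) throws Exception;
        }

        // Returns true only if R agrees with the client's fC on every challenge.
        public static boolean validate(RemoteService r,
                                       Function<double[], double[]> fC,
                                       List<double[]> challenges,
                                       double tol) {
            try {
                for (double[] x : challenges) {
                    double[] expected = fC.apply(x);     // fC(x), known to the client
                    double[] actual = r.apply(x);        // fR(x), returned by the service agent
                    for (int i = 0; i < expected.length; i++) {
                        if (Math.abs(expected[i] - actual[i]) > tol) return false;
                    }
                }
            } catch (Exception e) {
                return false;                            // treat failures as non-validation
            }
            return true;                                 // commit only after this succeeds
        }
    }

Only after validate(...) returns true would the client move on to the commitment step.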
Possible situations
• C = client, R = remote service.
• 1. C "knows" fC(x) and R provides fR(x).
• E.g. simple test cases (easy for C to compute itself), such as a matrix inverse operation. C compares fC(x) and fR(x).
• PAC learning theory.
• 2. C "knows" fC(x) and R does not provide fR(x).
• E.g. a fee-based service. R offers hashed results g(fR(x)), where g is a secure hash function (e.g. applied to the result of a test such as whether y is a singular matrix). C compares g(fC(x)) and g(fR(x)). (A hashed-comparison sketch follows.)
• Zero-knowledge proofs.
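For case 2, a sketch of comparing hashed answers, assuming both sides agree on a canonical string encoding of results; SHA-256 via java.security.MessageDigest stands in for the secure hash g:

    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;
    import java.util.Arrays;

    // Case 2 sketch: R reveals only g(fR(x)); C hashes its own answer and compares.
    public class HashedComparison {
        // g: a secure hash of a canonical encoding of a result.
        static byte[] g(String encodedResult) throws Exception {
            return MessageDigest.getInstance("SHA-256")
                                .digest(encodedResult.getBytes(StandardCharsets.UTF_8));
        }

        // True if the hash R sent matches the hash of the client's own answer fC(x).
        static boolean matches(String clientResult, byte[] hashFromR) throws Exception {
            return Arrays.equals(g(clientResult), hashFromR);
        }
    }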
Possible situations
• 3. C does not "know" fC(x) and R provides fR(x).
• E.g. weather forecasts; some control systems.
• E.g. C does not know the answer in advance but can verify R's service with an independent check on fR(x) (an illustrative check follows this slide).
• Simulation-based learning, reinforcement learning, etc.
• 4. C does not "know" fC(x) and R does not provide fR(x).
• Basically this situation can be reduced to a combination of the two above.
• E.g. R offers a hashed result g(fR(x)), and C verifies R's service by combining the hashing of case 2 with the verification of case 3.
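One concrete way C can verify an answer it cannot produce itself (case 3) is a cheap consistency check on fR(x); the matrix-inverse check below is an illustrative assumption, not an example taken verbatim from the slides:

    // Illustrative case 3 check: C cannot (or does not want to) compute the inverse itself,
    // but can verify a returned inverse cheaply by multiplying it back toward the identity.
    public class InverseCheck {
        static boolean verifiesInverse(double[][] a, double[][] inv, double tol) {
            int n = a.length;
            for (int i = 0; i < n; i++) {
                for (int j = 0; j < n; j++) {
                    double s = 0.0;
                    for (int k = 0; k < n; k++) s += a[i][k] * inv[k][j];
                    double target = (i == j) ? 1.0 : 0.0;   // entries of the identity matrix
                    if (Math.abs(s - target) > tol) return false;
                }
            }
            return true;
        }
    }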
Mathematical framework
• The goal of PAC learning is to use as few examples as possible, and as little computation as possible, to pick a hypothesis concept that is a close approximation to the target concept.
• Define a concept to be a Boolean mapping c: X → {0,1}, where X is the input space. c(x) = 1 indicates that x is a positive example, i.e. the service agent can offer the "correct" service for challenge x.
• Define an indicator function Ih(x) = 1 if h(x) ≠ c(x) and Ih(x) = 0 otherwise.
• Now define the error between the target concept c and the hypothesis h as error(h) = P(h(x) ≠ c(x)), the probability under the sampling distribution P that h and c disagree.
Mathematical framework
• The client agent can randomly pick m samples to PAC-learn a hypothesis h about whether the service agent can offer the "correct" service.
• Theorem (Blumer et al.). Let H be any hypothesis space of finite VC dimension d contained in 2^X, let P be any probability distribution on X, and let the target concept c be any Borel set contained in X. Then for any 0 < ε, δ < 1, given
m ≥ max( (4/ε) log₂(2/δ), (8d/ε) log₂(13/ε) )
independent random examples of c drawn according to P, with probability at least 1 − δ, every hypothesis in H that is consistent with all of these examples has error at most ε. (A sample-size helper follows.)
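The bound translates directly into a number of challenges. The helper below evaluates it for given ε, δ and VC dimension d; it assumes the standard base-2 statement of the Blumer et al. bound as reconstructed above:

    // Number of random challenges m suggested by the Blumer et al. bound:
    // m >= max( (4/eps) * log2(2/delta), (8d/eps) * log2(13/eps) ).
    public class SampleSize {
        static double log2(double x) { return Math.log(x) / Math.log(2.0); }

        static long samples(double eps, double delta, int vcDim) {
            double m1 = (4.0 / eps) * log2(2.0 / delta);
            double m2 = (8.0 * vcDim / eps) * log2(13.0 / eps);
            return (long) Math.ceil(Math.max(m1, m2));
        }

        public static void main(String[] args) {
            // e.g. eps = delta = 0.1 with VC dimension 10 already requires thousands of
            // challenges, which is why the final slide calls this framework pessimistic.
            System.out.println(samples(0.1, 0.1, 10));
        }
    }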
Protocol Testbeds • public Client() { try {index=object.validation("actcomm.dartmouth.edu","1997","Plus","plus.data"); } catch(Exception e) {String err=e.toString(); System.out.println(err); } if(index==true) {System.out.println("Correct service"); } else {System.out.println("Incorrect service"); } • public class Validation { public boolean validation(String sname, String pname, String fname, String dname) { DataInputStream instream=null; String inline; Socket socket; PrintStream out; String Outline; ……}}
Mobile Functional Validation Agent
• [Diagram] The user agent, through its user interface, creates and sends a mobile validation agent to Computing Server A on Machine A. Via A's interface agent, the mobile agent tests A's service: if the service is correct, it reports back; if incorrect, it jumps to Computing Server B on Machine B, and then on to Machines C, D, E, ..., repeating the test at each server. (An itinerary sketch follows.)
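An itinerary sketch of the mobile agent's control flow, under the assumption that each server exposes the challenge-based check sketched earlier; the agent-migration mechanics themselves are not shown:

    import java.util.List;

    // Visit computing servers in turn and stop at the first one whose service validates.
    public class MobileValidationItinerary {
        interface Validator { boolean validatesOn(String host); }  // e.g. the challenge loop above

        static String firstCorrectService(List<String> hosts, Validator v) {
            for (String host : hosts) {                 // "jump" from machine A to B to C ...
                if (v.validatesOn(host)) return host;   // correct service found; report back
            }
            return null;                                // no candidate passed validation
        }
    }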
Future work and open questions
• The PAC learning framework is very pessimistic; a different approach is needed.
• Make agent functional validation a standard agent service.
• Thanks!!