Agent Teams in Grid Resource Brokering and Management (preliminary considerations)
Introduction • The Grid • what? • why? • Local Grid – in a laboratory / company • Global Grid – the P2P nightmare • nodes appear and disappear • node load can change radically • no problem for SETI@HOME • a problem when you need • QoS • SLA
Agents in grids today • B. Di Martino and O. Rana • static and mobile agents in the system (MAGDA) • agents visit sites to find resources (services) • visits are based on exchanges of messages with nodes • an agent executes a task or a part of it • Aglets-based / no economic model • S. Manvi et al. • attempt at adding an economic model • a single agent moves, negotiates, executes • heavily based on mobility
Comments / reminders • Mobility can be costly • there is no free lunch • A single resource provider makes it difficult to assure QoS / SLA • An economic model is “necessary” (Buyya, 2000) • Proposed solution: agent teams • “one for all and all for one”
Assumptions • Agents work in teams • Each team has a team leader (local master – Lmaster) • Incoming Workers can join any team based on their own criteria of joining • Teams can accept Workers based on their own criteria of acceptance • Each Worker can (if needed) play the role of Lmaster • Decisions about joining and accepting will utilize the contract net protocol and multi-criteria analysis (see the sketch below) • Yellow-page matchmaking (provided by the CIC agent); other approaches possible
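The negotiation between an incoming agent and candidate Lmasters could rely on JADE's built-in FIPA Contract Net support. Below is a minimal, illustrative sketch of the initiator side; the class name JoinNegotiator, the CFP content, and the receiver names are assumptions, not the authors' implementation, and the multi-criteria comparison of offers is only indicated by a comment.

import jade.core.AID;
import jade.core.Agent;
import jade.domain.FIPANames;
import jade.lang.acl.ACLMessage;
import jade.proto.ContractNetInitiator;
import java.util.Date;
import java.util.Enumeration;
import java.util.Vector;

// Hypothetical worker-side agent that asks candidate Lmasters for joining conditions.
public class JoinNegotiator extends Agent {
    @Override
    protected void setup() {
        ACLMessage cfp = new ACLMessage(ACLMessage.CFP);
        cfp.setProtocol(FIPANames.InteractionProtocol.FIPA_CONTRACT_NET);
        cfp.setContent("join-request: CPU >= 3.2 GHz, quota >= 350 MB");   // illustrative content
        cfp.setReplyByDate(new Date(System.currentTimeMillis() + 5000));
        // AIDs of candidate team leaders, e.g. obtained earlier from the CIC
        cfp.addReceiver(new AID("LMaster1", AID.ISLOCALNAME));
        cfp.addReceiver(new AID("LMaster2", AID.ISLOCALNAME));

        addBehaviour(new ContractNetInitiator(this, cfp) {
            @Override
            protected void handleAllResponses(Vector responses, Vector acceptances) {
                ACLMessage best = null;
                for (Enumeration e = responses.elements(); e.hasMoreElements();) {
                    ACLMessage msg = (ACLMessage) e.nextElement();
                    if (msg.getPerformative() == ACLMessage.PROPOSE) {
                        // here the multi-criteria comparison of offers would take place
                        if (best == null) best = msg;
                    }
                }
                if (best != null) {
                    ACLMessage accept = best.createReply();
                    accept.setPerformative(ACLMessage.ACCEPT_PROPOSAL);
                    acceptances.addElement(accept);
                }
                // the remaining proposals should be rejected (omitted for brevity)
            }

            @Override
            protected void handleInform(ACLMessage inform) {
                System.out.println("Joined the team of " + inform.getSender().getName());
            }
        });
    }
}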
Structure of a work-team • Each team has an Lmaster and a mirror Lmaster (LMirror) • if there is only one agent, it is the Lmaster • the next incoming agent becomes the LMirror • the LMirror becomes Lmaster if the Lmaster fails • the Lmaster keeps its role as long as it can handle the workload • if the LMirror “disappears”, the Lmaster appoints one of the slaves to be a mirror • Lmaster and LMirror check each other’s existence at regular intervals (sketched below) • Each subsequent agent becomes a Worker
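One simple way to realize the mutual liveness check in JADE is a TickerBehaviour that pings the peer every period and treats a missed reply as a failure. The sketch below is an assumption about how this could look; the HeartbeatBehaviour class and the ping/reply convention are hypothetical, not the authors' design.

import jade.core.AID;
import jade.core.Agent;
import jade.core.behaviours.TickerBehaviour;
import jade.lang.acl.ACLMessage;

// Illustrative sketch of the Lmaster/LMirror mutual liveness check.
public class HeartbeatBehaviour extends TickerBehaviour {
    private final AID peer;          // the LMirror (or Lmaster) being watched
    private boolean replySeen = true;

    public HeartbeatBehaviour(Agent a, AID peer, long periodMs) {
        super(a, periodMs);
        this.peer = peer;
    }

    @Override
    protected void onTick() {
        if (!replySeen) {
            // Peer missed a whole period: LMirror takes over, or Lmaster appoints a new mirror.
            System.out.println(peer.getLocalName() + " seems to have failed");
        }
        replySeen = false;
        ACLMessage ping = new ACLMessage(ACLMessage.QUERY_IF);
        ping.setConversationId("liveness-check");
        ping.addReceiver(peer);
        myAgent.send(ping);
        // A companion CyclicBehaviour (not shown) would call markAlive() when the reply arrives.
    }

    void markAlive() { replySeen = true; }
}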
Finding a team to ... • The Lagent checks with the CIC which teams • it can join • do the work it needs • The Lagent sends representatives to negotiate • The Lagent makes the decision • which team to join • which team will do the job • The Lagent collects data to be used (in the future) in MCDM (a toy scoring sketch follows below)
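The slides leave the MCDM machinery for future work; as a placeholder, the toy sketch below ranks collected team offers with a simple weighted sum over normalized criteria. The criteria, weights, and the weighted-sum method itself are illustrative assumptions, not the intended multi-criteria model.

import java.util.List;
import java.util.Map;

// Toy multi-criteria scoring for choosing among team offers.
public class OfferRanker {
    // offer: normalized criterion values in [0,1], e.g. price, trust, past performance
    public static double score(Map<String, Double> offer, Map<String, Double> weights) {
        double s = 0.0;
        for (Map.Entry<String, Double> w : weights.entrySet()) {
            s += w.getValue() * offer.getOrDefault(w.getKey(), 0.0);
        }
        return s;
    }

    public static Map<String, Double> best(List<Map<String, Double>> offers,
                                           Map<String, Double> weights) {
        Map<String, Double> bestOffer = null;
        double bestScore = Double.NEGATIVE_INFINITY;
        for (Map<String, Double> o : offers) {
            double s = score(o, weights);
            if (s > bestScore) { bestScore = s; bestOffer = o; }
        }
        return bestOffer;
    }
}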
CIC architecture #1 • task-per-thread paradigm • the CICAgent picks requests from the JADE-provided message queue and enqueues them into the internal request queue, from which a pool of worker threads pulls them (sketched below)
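A minimal sketch of the task-per-thread (pull) variant, assuming the CICAgent is a JADE agent whose behaviour only moves ACL messages into an internal BlockingQueue that a fixed pool of plain Java threads drains. The pool size, the answer() placeholder, and replying from a helper thread are simplifying assumptions.

import jade.core.Agent;
import jade.core.behaviours.CyclicBehaviour;
import jade.lang.acl.ACLMessage;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class CICAgent extends Agent {
    private final BlockingQueue<ACLMessage> requests = new LinkedBlockingQueue<>();

    @Override
    protected void setup() {
        // worker threads pull requests from the internal queue ("pull" variant)
        for (int i = 0; i < 8; i++) {
            new Thread(() -> {
                while (true) {
                    try {
                        ACLMessage req = requests.take();
                        ACLMessage reply = req.createReply();
                        reply.setPerformative(ACLMessage.INFORM);
                        reply.setContent(answer(req.getContent()));
                        send(reply);                       // simplified: reply sent from a helper thread
                    } catch (InterruptedException e) { return; }
                }
            }).start();
        }
        // the agent thread only moves messages from the JADE queue to the internal queue
        addBehaviour(new CyclicBehaviour(this) {
            @Override
            public void action() {
                ACLMessage msg = myAgent.receive();
                if (msg != null) requests.offer(msg); else block();
            }
        });
    }

    private String answer(String sparqlQuery) { return "..."; }  // query evaluation, not shown
}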
CIC architecture #2 • local CICDbAgents • the CIC agent picks requests from the JADE message queue and enqueues them into the internal request queue • each CICDbAgent completes one task (request) at a time • upon completion, results are sent back to the CICAgent (sketched below)
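A corresponding sketch of a CICDbAgent in the push variant: it serves one forwarded request at a time and returns the result to the CICAgent. The class name and the evaluate() placeholder are illustrative.

import jade.core.Agent;
import jade.core.behaviours.CyclicBehaviour;
import jade.lang.acl.ACLMessage;

public class CICDbAgent extends Agent {
    @Override
    protected void setup() {
        addBehaviour(new CyclicBehaviour(this) {
            @Override
            public void action() {
                ACLMessage req = myAgent.receive();
                if (req == null) { block(); return; }
                String result = evaluate(req.getContent());    // run the SPARQL query
                ACLMessage answer = req.createReply();
                answer.setPerformative(ACLMessage.INFORM);
                answer.setContent(result);
                myAgent.send(answer);                           // back to the CICAgent
            }
        });
    }

    private String evaluate(String sparqlQuery) { return "..."; }  // placeholder
}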
CIC architecture #3 • database agents are located on remote machines, contributing additional computational power and allowing the CICDbAgents to work without stealing resources from the CICAgent (a container-launching sketch follows below)
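Placing CICDbAgents on remote machines can be done with JADE's in-process interface by starting a peripheral container on each remote host and creating the agent there. The host name, agent nickname, and agent class name below are assumptions, not the deployment actually used in the experiments.

import jade.core.Profile;
import jade.core.ProfileImpl;
import jade.core.Runtime;
import jade.wrapper.AgentContainer;
import jade.wrapper.AgentController;

// Run on a remote machine: joins the platform of the main container and hosts one CICDbAgent.
public class RemoteDbLauncher {
    public static void main(String[] args) throws Exception {
        Runtime rt = Runtime.instance();
        Profile p = new ProfileImpl();
        p.setParameter(Profile.MAIN_HOST, "cic-main.example.org");   // host of the main container (assumed)
        AgentContainer container = rt.createAgentContainer(p);       // peripheral container
        AgentController db = container.createNewAgent("db1", "grid.cic.CICDbAgent", null);
        db.start();
    }
}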
About the experiments • 4 Querying Agents (QA), requesting the CIC to perform SPARQL resource queries • Each QA ran concurrently on a separate machine, sending 2,500 requests and receiving query results • All experimental runs were coordinated by the Test Coordinator Agent (TCA); before each test, remote JADE agent containers were restarted to provide equal environment conditions • 11 AMD Athlon 2500+ machines with 512 MB RAM, running Gentoo Linux and JVM 1.4.2, interconnected by a 100 Mbit LAN
Experimental results – different CIC architectures • multi-threaded (pull) • multi-agent with local CICDbAgents (push) • multi-agent with distributed CICDbAgents (push) • 10,000 queries in total (4 QAs × 2,500 requests each)
Final comparisons • Left panel – remote agents with and without CICIA – throughput • Right panel – remote agents vs. threads – processing time
Resource ontology

:Computer a owl:Class.
:hasCPU a owl:ObjectProperty; rdfs:range :CPU; rdfs:domain :Computer.
:CPU a owl:Class.
:hasCPUFrequency a owl:DatatypeProperty; rdfs:comment "in GHz"; rdfs:range xsd:float; rdfs:domain :CPU.
:hasCPUType a owl:ObjectProperty; rdfs:range :CPUType; rdfs:domain :CPU.
:CPUType a owl:Class.
:Intel a :CPUType.
:AMDAthlon a :CPUType.
:hasMemory a owl:DatatypeProperty; rdfs:comment "in MB"; rdfs:range xsd:float; rdfs:domain :Computer.
:hasUserDiskQuota a owl:DatatypeProperty; rdfs:comment "in MB"; rdfs:range xsd:float; rdfs:domain :Computer.
:LMaster a owl:Class.
:hasContactAID a owl:DatatypeProperty; rdfs:range xsd:string; rdfs:domain :LMaster.
Sample resource description

:LMaster3 :hasContactAID "monster@e-plant:1099/JADE";
          :hasWorker :PC2929.
:PC2929 a :Computer;
        :hasCPU [ a :CPU;
                  :hasCPUType :Intel;
                  :hasCPUFrequency "3.7" ];
        :hasUserDiskQuota "400";
        :hasMemory "512".
SPARQL query

PREFIX : <http://www.ibspan.waw.pl/mgrid#>
SELECT ?contact
WHERE {
  ?lmaster :hasContactAID ?contact;
           a :LMaster;
           :hasWorker [ a :Computer;
                        :hasCPU [ a :CPU;
                                  :hasCPUType :Intel;
                                  :hasCPUFrequency ?freq ];
                        :hasUserDiskQuota ?quota;
                        :hasMemory ?mem ].
  FILTER (?freq >= 3.2)
  FILTER (?quota >= 350)
  FILTER (?mem >= 256)
}
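The slides do not name the SPARQL engine used by the CIC; assuming Apache Jena (ARQ) over a Turtle file holding the resource descriptions, the query could be evaluated as sketched below. The file name and class name are illustrative, and the numeric FILTERs assume the frequency/quota/memory literals are typed (e.g. as xsd:float).

import org.apache.jena.query.Query;
import org.apache.jena.query.QueryExecution;
import org.apache.jena.query.QueryExecutionFactory;
import org.apache.jena.query.QueryFactory;
import org.apache.jena.query.QuerySolution;
import org.apache.jena.query.ResultSet;
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;

public class ContactLookup {
    public static void main(String[] args) {
        Model model = ModelFactory.createDefaultModel();
        model.read("resources.ttl", "TURTLE");            // the LMaster / Computer descriptions

        String queryString =
            "PREFIX : <http://www.ibspan.waw.pl/mgrid#>\n" +
            "SELECT ?contact WHERE {\n" +
            "  ?lmaster a :LMaster ;\n" +
            "           :hasContactAID ?contact ;\n" +
            "           :hasWorker [ :hasCPU [ :hasCPUType :Intel ; :hasCPUFrequency ?freq ] ;\n" +
            "                        :hasUserDiskQuota ?quota ; :hasMemory ?mem ] .\n" +
            "  FILTER (?freq >= 3.2) FILTER (?quota >= 350) FILTER (?mem >= 256)\n" +
            "}";

        Query query = QueryFactory.create(queryString);
        try (QueryExecution qe = QueryExecutionFactory.create(query, model)) {
            ResultSet results = qe.execSelect();
            while (results.hasNext()) {
                QuerySolution row = results.next();
                System.out.println(row.get("contact"));   // prints the matching Lmaster contact AIDs
            }
        }
    }
}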
End of the Agents in Grid part • QUESTIONS? • Looking for collaborators • Papers available at: http://agentlab.swps.edu.pl