
Enhancing Neighborship Consistency for Peer-to-Peer Distributed Virtual Environments

Jehn-Ruey Jiang, Jiun-Shiang Chiou and Shun-Yun Hu
Department of Computer Science and Information Engineering, National Central University


  1. Enhancing Neighborship Consistency for Peer-to-Peer Distributed Virtual Environments Jehn-Ruey Jiang, Jiun-Shiang Chiou and Shun-Yun Hu Department of Computer Science and Information Engineering National Central University

  2. Outline • Introduction • Background • P2P DVEs • Factors affecting Neighborship Consistency • Proposed Solutions • Simulation Results • Conclusion

  3. Outline • Introduction • Background • P2P DVEs • Factors affecting Neighborship Consistency • Proposed Solutions • Simulation Results • Conclusion

  4. DVE (1) • Distributed Virtual Environments (DVEs) are computer-generated virtual worlds in which multiple geographically distributed users can assume virtual representatives (avatars) and interact with each other concurrently • A.K.A. Networked Virtual Environments (NVEs)

  5. DVE (2) • Examples of DVEs include the early DARPA SIMNET and DIS systems, as well as the currently booming Massively Multiplayer Online Games (MMOGs).

  6. Massively Multiplayer Online Games • MMOGs are growing quickly • 8 million registered users for World of Warcraft • Over 100,000 concurrent players • A billion-dollar business

  7.–9. [Image-only slides]

  10. DVE (3) • 3D virtual world with • People (avatars) • Objects • Terrain • Agents • … • Each avatar can perform many operations • Move • Chat • Use items • …

  11. Issues for DVEs • Scalability • To accommodate as many participants as possible • Consistency • All participants have the same view of object states • Persistency • All contents (object states) in the DVE need to persist • Reliability • Need to tolerate hardware and software failures • Security • To prevent cheating and to keep user information and game state confidential

  12. The Scalability Problem (1) • Client-server: has an inherent resource limit [Funkhouser95]

  13. The Scalability Problem (2) • Peer-to-Peer: uses the clients’ resources, so the resource limit grows with the number of peers [Keller & Simon 2003]

  14. You only need to know some participants • ★: self • ▲: neighbors • Area of Interest (AOI)
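
For concreteness, a minimal sketch (not from the slides) of how a peer might compute its AOI neighbor set, assuming a circular AOI; the names Avatar and aoi_neighbors are illustrative:

```python
# Hypothetical helper: collect all avatars within a circular AOI.
import math
from dataclasses import dataclass

@dataclass
class Avatar:
    id: int
    x: float
    y: float

def aoi_neighbors(me: Avatar, avatars: list, aoi_radius: float) -> list:
    """Return every other avatar within distance aoi_radius of me."""
    return [a for a in avatars
            if a.id != me.id
            and math.hypot(a.x - me.x, a.y - me.y) <= aoi_radius]
```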

  15. Neighborship Consistency (1) • Definition: Neighborship Consistency = (# of current AOI neighbors observed) / (# of current AOI neighbors)

  16. Neighborship Consistency (2) • An example: of 5 actual AOI neighbors, 4 are observed • Neighborship Consistency = 4 / 5 = 80%
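
The metric of slides 15–16 translates directly into code; here is a minimal sketch over sets of neighbor IDs:

```python
def neighborship_consistency(observed: set, actual: set) -> float:
    """Fraction of the actual AOI neighbors that are currently observed."""
    if not actual:
        return 1.0  # nothing to observe, trivially consistent
    return len(observed & actual) / len(actual)

# Slide 16's example: 4 of the 5 actual neighbors are observed -> 80%.
assert neighborship_consistency({1, 2, 3, 4}, {1, 2, 3, 4, 5}) == 0.8
```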

  17. Outline • Introduction • Background • P2P DVEs • Factors affecting Neighborship Consistency • Proposed Solutions • Simulation Results • Conclusion

  18. Related Work (1): DHT-based: SimMUD • B. Knutsson, H. Lu, W. Xu and B. Hopkins, “Peer-to-peer Support for Massively Multiplayer Games,” in Proceedings of INFOCOM 2004. • Authors are from the Department of Computer and Information Science, University of Pennsylvania

  19. Related Work (1): DHT-based: SimMUD [Knutsson et al. 2004] (UPenn) • Pastry (DHT mapping) + Scribe (multicast) • Fixed-size regions • Coordinators

  20. SimMUD -- Introduction • Proposes the use of P2P overlays to support massively multiplayer games (MMGs) • Primary contributions of the paper: • Architectural (P2P for MMGs) • Evaluative

  21. SimMUD -- Introduction • Layered architecture: MMG (game) on top of SCRIBE (multicast support) on top of PASTRY (P2P overlay)

  22. SimMUD -- Introduction • Players contribute memory, CPU cycles and bandwidth for shared game state • Persistent user state is centralized • Example: payment information, characters • Allows a central server to delegate the dissemination and processing of resource-intensive game state to peers

  23. Distributed Game Design • GAME STATES • Game world divided into connected regions • Regions are controlled by different coordinators

  24. Distributed Game Design [figure-only slide]

  25. Distributed Game Design • Game design based on the facts that: • Players have limited movement speed • Players have limited sensing capability • Hence data shows temporal and spatial locality • Use Interest Management • Limits the amount of state a player has access to

  26. Distributed Game Design • Players in same region form interest group • State updates relevant to group disseminated only within group • Player changes group when going from region to region
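
A minimal sketch of that region-crossing logic, assuming fixed-size square regions and a multicast object whose join/leave calls stand in for Scribe's subscribe/unsubscribe (REGION_SIZE, on_move and the multicast API are assumptions, not SimMUD's actual interface):

```python
REGION_SIZE = 100  # world units per region side (assumed value)

def region_of(x: float, y: float) -> tuple:
    """Map a world position to its region's grid coordinates."""
    return (int(x // REGION_SIZE), int(y // REGION_SIZE))

def on_move(player, x: float, y: float, multicast) -> None:
    """Switch interest groups when the player crosses a region border."""
    new_region = region_of(x, y)
    if new_region != player.region:
        multicast.leave(player.region)  # stop receiving the old group's updates
        multicast.join(new_region)      # subscribe to the new interest group
        player.region = new_region
    player.x, player.y = x, y
```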

  27. Distributed Game Design • GAME STATE CONSISTENCY • Must be consistent among players in a region • Basic approach: employ coordinators to resolve update conflicts • Split game state management into three classes to handle update conflicts: • Player state • Object state • The Map

  28. Distributed Game Design • Player state • Single writer, multiple readers • Position change is the most common event • Use best-effort multicast to players in the same region • Use dead reckoning to handle loss or delay
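
Dead reckoning itself is simple: between updates, each peer extrapolates a neighbor's position from its last known position and velocity. A minimal sketch (the field layout is illustrative):

```python
def dead_reckon(last_update: tuple, now: float) -> tuple:
    """last_update: (t, x, y, vx, vy) from the most recent position multicast.
    Returns the extrapolated (x, y) at time `now`."""
    t, x, y, vx, vy = last_update
    dt = now - t
    return (x + vx * dt, y + vy * dt)
```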

  29. Distributed Game Design • Object state • Use coordinator-based mechanism for shared objects • Each object assigned a coordinator • Coordinator resolves conflicting updates and keeps current value
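
A minimal sketch of such a coordinator, assuming a version-check scheme for detecting conflicting updates (the paper does not spell out the mechanism; this is one plausible realization):

```python
class ObjectCoordinator:
    """Serializes updates to one shared object and keeps its current value."""

    def __init__(self, initial_value):
        self.value = initial_value
        self.version = 0

    def apply_update(self, proposed_value, base_version: int):
        """Accept the update only if it was based on the current version."""
        if base_version != self.version:
            return False, self.value, self.version  # conflict: client must retry
        self.version += 1
        self.value = proposed_value
        return True, self.value, self.version
```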

  30. Distributed Game Design • Map • Maps are considered read-only because they remain unchanged during the game play. • They can be created offline and inserted into the system dynamically. • Dynamic map elements are handled as objects.

  31. Game on P2P overlay • Map game states to players • Group players & objects by region • Map regions to peers using Pastry keys • Each region is assigned an ID • The live node with the closest ID becomes coordinator • Random mapping reduces the chance of a coordinator being a member of its own region (reduces cheating) • Currently all objects in a region are coordinated by one node • Could assign a coordinator per object
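
A minimal sketch of that mapping: hash the region ID into the key space and pick the live node whose ID is numerically closest (real Pastry routes through a circular ID space; the linear scan and 32-bit key size here are simplifications):

```python
import hashlib

def key_of(region_id) -> int:
    """Hash a region ID into a 32-bit key space (size is an assumption)."""
    digest = hashlib.sha1(str(region_id).encode()).digest()
    return int.from_bytes(digest[:4], "big")

def coordinator_for(region_id, live_node_ids: list) -> int:
    """The live node with the ID closest to the region's key coordinates it."""
    key = key_of(region_id)
    return min(live_node_ids, key=lambda node_id: abs(node_id - key))
```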

  32. Game on P2P overlay • Shared state replication • Lightweight primary-backup to handle failures • Failures detected using regular game events • Dynamically replicate coordinator state when a failure is detected • Keep at least one replica at all times • Uses a property of P2P routing (a message with key K is routed to the node with the ID, say N, closest to K)

  33. Game on P2P overlay • Shared state replication (cont’d) • The replica is kept at node M, the next closest to key K • If a new node T joins that is closer to K than coordinator N • T forwards messages to coordinator N until all state for K is transferred from N to T • T then takes over as coordinator and N becomes a replica
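
A minimal sketch of that handover, with state transfer modeled as a dict copy (the node objects with id/state fields and the distance function are illustrative):

```python
def maybe_handover(key: int, coordinator, replica, new_node, distance):
    """If new_node is closer to key than the coordinator, it takes over
    and the old coordinator is demoted to replica."""
    if distance(new_node.id, key) < distance(coordinator.id, key):
        new_node.state = dict(coordinator.state)  # transfer all state for key
        return new_node, coordinator              # (new coordinator, new replica)
    return coordinator, replica
```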

  34. Game on P2P overlay • Catastrophic failure • Both coordinator and replica are dead • Recovered using cached information from nodes interested in the region

  35. Experimental Results • Prototype implementation of “SimMud” • Used FreePastry (open source) • Maximum simulation size constrained by memory to 4000 virtual nodes • Players eat and fight every 20 seconds • Remain in a region for 40 seconds • Position updates every 150 ms by multicast

  36. Experimental Results • Base results • No players join or leave • 300 seconds of game play • Average of 10 players per region • Links between nodes have a random delay of 3–100 ms to simulate network delay

  37.–39. Experimental Results (base results) [figure-only slides]

  40. Experimental Results (base results) • 1000 to 4000 players with 100 to 400 regions • Each node receives 50–120 messages • 70 update messages per second • 10 players × 7 position updates • Unicast and multicast messages take around 6 hops (but 50 hops in the worst case)

  41. Experimental Results • Breakdown of message types • 99% of messages are position updates • Region changes take the most bandwidth • Message rate of object updates is higher than that of player-player updates • Object updates are multicast to the region • Object updates are also sent to the replica • Player-player interaction affects only the players involved

  42. Experimental Results • Effect of population growth • As long as average density remains the same, population growth makes no difference • Effect of population density • Ran with 1000 players, 25 regions • Position updates increase linearly per node • Non-uniform player distribution hurts performance

  43. Experimental Results • Three ways to deal with the population density problem • Cap the number of players in a region • Give different regions different sizes • Dynamically repartition regions as players increase

  44. Experimental Results • Effect of message aggregation • Since updates are multicast, aggregate them at the root • Position updates are aggregated from all players before transmission • Cuts bandwidth requirement by half • Nodes receive fewer messages
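
A minimal sketch of aggregation at the multicast root: updates collected during one tick go out as a single batched message instead of one message per player (the multicast object and dict layout are assumptions):

```python
def flush_aggregated_updates(pending: dict, multicast, region) -> None:
    """pending: {player_id: (x, y)} gathered over one update interval."""
    if pending:
        multicast.send(region, list(pending.items()))  # one message for all updates
        pending.clear()
```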

  45.–46. Experimental Results [figure-only slides]

  47. Experimental Results • Effect of network dynamics • Nodes join and depart at regular intervals • Simulate one random node join and one departure per second • Per-node failure rate of 0.06 per minute • Average session length of 16.7 minutes (close to the 18 minutes measured for an FPS game, Half-Life) • Average message rate increased from 24.12 to 24.52
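
A minimal sketch of that churn model (one join and one departure per simulated second); note that a failure rate of 0.06 per minute corresponds to an average session of 1 / 0.06 ≈ 16.7 minutes:

```python
import random

def churn_step(nodes: set, next_id: int) -> int:
    """Apply one second of churn: one random departure, one fresh join."""
    nodes.remove(random.choice(list(nodes)))  # random node departs
    nodes.add(next_id)                        # new node joins
    return next_id + 1
```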

  48. Related Work (2): Neighbor-list Exchange [Kawahara et al. 2004] (Univ. of Tokyo) • Fully distributed • Nearest neighbors • Neighbor-list exchange • High transmission overhead • Risk of overlay partition
