
Use of Markov Chains to Design an Agent Bidding Strategy for Continuous Double Auctions


Presentation Transcript


  1. Use of Markov Chains to Design an Agent Bidding Strategy for Continuous Double Auctions • Sunju Park, Management Science and Information Systems Department, Rutgers Business School, Rutgers University • Edmund H. Durfee, Artificial Intelligence Laboratory, University of Michigan • William P. Birmingham, Math & Computer Science Department, Grove City College • Presenter: TinTin Yu {tiyu@mtu.edu}

  2. Introduction • Unlike traditional auctions with a single seller and multiple buyers (e.g., eBay) • Continuous Double Auctions (CDA) • Buyers place bids, and sellers place offers, on the same items. • We have a match whenever a buyer’s bid is at least as high as a seller’s offer. • (e.g., “name your price” on hotel.com?) • Goal • To determine the optimal offer price for a seller in order to gain the maximum profit.

  3. Definitions • Notation: bbssp • b: a buyer’s bid; s: a seller’s offer • sp: the seller’s offer that was just submitted • bbssp: the queue of standing bids and offers in ascending order of price • Clearing Price (CP) • bspbs: the state in which an offer sits below a bid (a match) • sp <= CP <= b (the rightmost b) • This paper uses CP = sp.
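The matching rule above can be sketched in a few lines of Python; the function and variable names here are illustrative, not from the paper:

```python
def match(bids, offers):
    """Return (bid, offer) for the best match, or None.

    A match occurs when the highest standing bid is at least the
    lowest standing offer; the clearing price then lies between the
    offer and the bid (the paper takes CP = sp, the seller's
    just-submitted offer).
    """
    if bids and offers and max(bids) >= min(offers):
        return max(bids), min(offers)
    return None

bids = [4, 7]             # buyers' standing bids b
offers = [9]              # sellers' standing offers s
print(match(bids, offers))    # None: 7 < 9, the queue is (b b s)

offers.append(6)          # the p-seller submits sp = 6
print(match(bids, offers))    # (7, 6): a match, and CP = sp = 6
```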

  4. Definitions • Markov Chains (Markov state machines) • Probabilistic finite state machines • Input is ignored • We use first-order Markov chains only • First-order means the probability of the present state depends only on its direct predecessor state.
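As a toy illustration of the first-order (memoryless) property, here is a minimal chain simulator; the states and probabilities are made up for illustration, not taken from the paper:

```python
import random

# Toy first-order Markov chain over auction-like states: the next
# state depends only on the current state, never on earlier history.
transitions = {
    "bbs":  {"bbss": 0.6, "bbbs": 0.4},
    "bbss": {"SUCCESS": 0.5, "FAIL": 0.5},
    "bbbs": {"SUCCESS": 0.7, "FAIL": 0.3},
}

def run(start, rng):
    """Walk the chain from `start` until an absorbing state."""
    state = start
    while state in transitions:        # SUCCESS/FAIL are absorbing
        nxt = list(transitions[state])
        state = rng.choices(nxt, [transitions[state][s] for s in nxt])[0]
    return state

print(run("bbs", random.Random(0)))    # either "SUCCESS" or "FAIL"
```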

  5. p-strategy Algorithm (1/2)

  6. p-strategy Algorithm (2/2) • Information used by p-strategy

  7. Step 1: Building Markov Chains (1/3) • Given a current state (bbs). • When the p-seller (a seller using the p-strategy) submits its offer sp, there are four possible next auction states. • We make these states the initial states of the Markov chain.

  8. Step 1: Building Markov Chains (2/3) • From the initial states, we keep populating the (bbss) queue by submitting either a new buyer bid or a new seller offer. • If we have a match, the chain goes to the SUCCESS state. • If the queue goes out of bound (the maximum number of standing offers), the chain goes to the FAIL state.
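Step 1 can be sketched as an enumeration of queue states. Below, a state is a string of b's and s's in ascending price order (so every non-terminal state looks like b…bs…s), a new bid or offer either grows the queue or triggers a match, and exceeding the bound leads to FAIL. The encoding and the exact event model are simplifying assumptions, not the paper's:

```python
MAX_STANDING = 5   # bound on standing bids/offers before FAIL

def successors(state):
    """Possible next states from a b...bs...s queue state."""
    nb, ns = state.count("b"), state.count("s")
    nxt = set()
    if ns > 0:        # a new bid may land above the lowest offer
        nxt.add("SUCCESS")
    if nb > 0:        # a new offer may land below the highest bid
        nxt.add("SUCCESS")
    # otherwise the queue just grows by one bid or one offer
    nxt.add("FAIL" if nb + 1 > MAX_STANDING else "b" * (nb + 1) + "s" * ns)
    nxt.add("FAIL" if ns + 1 > MAX_STANDING else "b" * nb + "s" * (ns + 1))
    return nxt

def build_chain(start):
    """Enumerate every state reachable from `start`."""
    states, frontier = {start}, [start]
    while frontier:
        for t in successors(frontier.pop()):
            if t not in states:
                states.add(t)
                if t not in ("SUCCESS", "FAIL"):
                    frontier.append(t)
    return states

chain = build_chain("bbs")
print(sorted(chain))   # SUCCESS, FAIL, and every bounded queue state
```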

  9. Step 1: Building Markov Chains (3/3) • The MC model of the CDA with starting state (bbs), where the numbers of bids and offers are limited to 5 each.

  10. Step 2: Compute Utilities (1/5) • Step 2.1: The utility function • Ps(p): probability of success at price p • U(Payoffs(p)): utility of the payoff if the offer receives a match • CP: clearing price • C: cost • TD(s/f): delay overhead on success/failure
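A hedged sketch of how these quantities combine into an expected utility: on success the seller gets the payoff (clearing price minus cost) less the success-delay overhead, on failure only the failure-delay overhead. The linear utility U and the choice CP = p are simplifying assumptions; the paper's exact functional form may differ:

```python
def expected_utility(p, Ps, C, TD_s, TD_f, U=lambda x: x):
    """EU(p) = Ps(p) * U(payoff - TD_s) + (1 - Ps(p)) * U(-TD_f),
    where payoff = CP - C and we take CP = sp = p for simplicity."""
    payoff = p - C
    return Ps(p) * U(payoff - TD_s) + (1 - Ps(p)) * U(-TD_f)

# illustrative success probability that falls as the offer rises
Ps = lambda p: max(0.0, 1 - p / 10)
print(expected_utility(5, Ps, C=2, TD_s=1, TD_f=1))   # 0.5*2 + 0.5*(-1) = 0.5
```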

  11. Step2: Compute Utilities (2/5) • Things we need to compute for each p

  12. Step 2: Compute Utilities (3/5) • Step 2.2.1: Transition probabilities • Going from state (bbs) to (bbssp) at time step n • That is, P(bbssp | bbs) • Apply Bayes’ rule • Evaluate using the probability density function (PDF) f(s)
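For instance, if the next event is a seller offer drawn from an assumed price density f(s), the probability that it lands above the current best bid b is the integral of f from b upward. A minimal numerical sketch (the uniform density and the integration bounds are assumptions for illustration):

```python
def prob_offer_above(b, f, lo, hi, n=10_000):
    """P(next offer s > b) = integral of f(s) from b to hi,
    approximated with a midpoint Riemann sum over [max(b, lo), hi]."""
    a = max(b, lo)
    if a >= hi:
        return 0.0
    w = (hi - a) / n
    return sum(f(a + (i + 0.5) * w) for i in range(n)) * w

uniform = lambda s: 1 / 10    # assumed uniform price density on [0, 10]
print(round(prob_offer_above(4, uniform, 0, 10), 3))   # 0.6
```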

  13. Step 2: Compute Utilities (4/5) • Step 2.2.2: TD(s/f): delay overhead • Too complex to cover in detail • It involves building a transition probability matrix P from the states of the Markov chain built in Step 1. • The equations involve: • w: reward = c (a constant), except for the initial states and the absorbing states • the number of visits to state (…) until the chain reaches S.
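The expected number of visits to each transient state before absorption is standard absorbing-Markov-chain machinery: with Q the transient-to-transient block of the transition matrix, the fundamental matrix N = (I - Q)^-1 gives expected visit counts, and a constant per-visit reward c turns row sums into an expected delay. A small self-contained sketch (the Q values are made up; the paper's matrices come from the Step-1 chain):

```python
def mat_solve(A, B):
    """Solve A X = B by Gauss-Jordan elimination (A: n x n, B: n x m)."""
    n = len(A)
    M = [row_a[:] + row_b[:] for row_a, row_b in zip(A, B)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        d = M[col][col]
        M[col] = [x / d for x in M[col]]
        for r in range(n):
            if r != col:
                f = M[r][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    return [row[n:] for row in M]

# Q: transition probabilities among the transient states only
Q = [[0.0, 0.5],
     [0.2, 0.0]]
I = [[1.0, 0.0], [0.0, 1.0]]
ImQ = [[I[i][j] - Q[i][j] for j in range(2)] for i in range(2)]
# Fundamental matrix N = (I - Q)^-1:
# N[i][j] = expected visits to state j starting from state i
N = mat_solve(ImQ, I)
c = 1.0                                      # constant per-visit reward w = c
expected_delay = [c * sum(row) for row in N] # expected steps before absorption
print(expected_delay)
```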

  14. Step 2: Compute Utilities (5/5) • Plug in the numbers and we get an expected utility value associated with price p. • The algorithm finds the optimal price p by looping through all p in the possible range. • The time complexity of the algorithm is O(ℓn^3), where ℓ is the number of possible prices and n is the number of MC states.
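The outer loop is then just an argmax over candidate prices, with each expected-utility evaluation paying the O(n^3) matrix cost over the n MC states. A sketch with a toy utility function (illustrative only):

```python
def optimal_offer(prices, expected_utility):
    """Evaluate EU at every candidate price and keep the best one."""
    return max(prices, key=expected_utility)

# toy expected-utility curve peaking at p = 6 (not from the paper)
eu = lambda p: -(p - 6) ** 2
print(optimal_offer(range(1, 11), eu))   # 6
```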

  15. Benchmark (1/6) • Agents used for comparison • FM: Fixed-Markup • bids its cost plus some predefined markup • RM: Random-Markup • bids its cost plus some random markup • CP: Clearing-Price • obtains a clearing-price quote (similar to the FM agent) • OPT: Post-facto Optimal • our benchmark strategy. Given that it “knows” exactly everything about the future (no uncertainty at all), it returns the maximum profit an agent could have achieved.

  16. Benchmark (2/6)

  17. Benchmark (3/6): p-strategy vs. others • Results: • Arrival rate: 0.4 = high, 0.1 = low • Negotiation zone: narrow (= 5)

  18. Benchmark (4/6): p-strategy vs. others • Results: • Arrival rate: 0.4 = high, 0.1 = low • Negotiation zone: narrow (= 25)

  19. Benchmark (5/6): p-strategy vs. itself • Results • The profit of each individual p-agent decreases as the number of p-agents increases. • However, when there are more buyers, p-agents are able to gain similar profit at the expense of the buyers.

  20. Benchmark (6/6): CP vs. multiple p and CP • Results • CP-strategy agents are able to raise their profit as the number of mixed p-agents and CP-agents increases.

  21. Conclusion • Summary: • The p-strategy is based on stochastic modeling of the auction process. • It works without needing to consider much about the other individual agents: its time complexity depends only on the number of MC states, not the number of agents. • It outperforms the other agents (FM / RM / CP). • Future Work • A similar strategy can be applied to buyers. • Analysis shows an average 20% gap between the p-strategy and the optimal one. • Ongoing work: a hybrid strategy. This adaptive approach allows the agent to figure out when to use the stochastic model and when to use simpler strategies.

  22. Question to think about • Humans can think very differently: • e.g., selling a 50” plasma HDTV • Place a very low selling price like $1.00 with no hidden limit. • Shipping cost = $3000.00?! • Can artificially intelligent agents think outside the box?

  23. Your Questions

  24. Bibliography • Park, S., Durfee, E.H. and Birmingham, W.P. (2004) “Use of Markov Chains to Design an Agent Bidding Strategy for Continuous Double Auctions”, Journal of Artificial Intelligence Research, Volume 22, pages 175-214.
