Learn about differential games in wireless communication networks, including control models, strategies, and optimization techniques. Explore non-cooperative differential games and the use of dynamic programming and Pontryagin's maximum principle. Understand the concept of Nash equilibrium and closed-loop information in game theory.
Game Theory in Wireless and Communication Networks: Theory, Models, and Applications. Lecture 3: Differential Game. Zhu Han, Dusit Niyato, Walid Saad, Tamer Basar, and Are Hjorungnes
Overview of Lecture Notes • Introduction to Game Theory: Lecture 1, book 1 • Non-cooperative Games: Lecture 1, Chapter 3, book 1 • Bayesian Games: Lecture 2, Chapter 4, book 1 • Differential Games: Lecture 3, Chapter 5, book 1 • Evolutionary Games: Lecture 4, Chapter 6, book 1 • Cooperative Games: Lecture 5, Chapter 7, book 1 • Auction Theory: Lecture 6, Chapter 8, book 1 • Matching Game: Lecture 7, book 2 • Contract Theory: Lecture 8, book 2 • Stochastic Game: Lecture 9, book 2 • Learning in Game: Lecture 10, book 2 • Equilibrium Programming with Equilibrium Constraint: Lecture 11, book 2 • Mean Field Game: Lecture 12, book 2 • Zero Determinant Strategy: Lecture 13, book 2 • Network Economy: Lecture 14, book 2 • Game in Machine Learning: Lecture 15, book 2
Introduction • Basics • Controllability • Linear ODE: Bang-bang control • Linear time optimal control • Pontryagin's maximum principle • Dynamic programming • Dynamic game • Note: Some parts are not from the book; for more background, see a dynamic control textbook and Basar's dynamic game book.
Basic Problem • ODE: x: state, f: system dynamics, α: control • Payoff: r: running payoff, g: terminal payoff • (Both are sketched below)
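A minimal sketch of the corresponding formulas, in standard optimal-control notation; the symbol α for the control, the control set A, and the horizon T are assumed rather than taken from the slides:

\dot{x}(t) = f(x(t), \alpha(t)), \quad x(0) = x^0, \quad \alpha(t) \in A, \quad 0 \le t \le T

P[\alpha(\cdot)] = \int_0^T r(x(t), \alpha(t)) \, dt + g(x(T)) \;\rightarrow\; \max_{\alpha(\cdot)}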
Example • Moon lander: Newton's law • ODE • Objective: minimize fuel consumption, i.e., maximize the fuel remaining at touchdown • Constraints
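A hedged sketch of the moon-lander model in its common textbook form; the symbols h (height), v (velocity), m (mass), α (thrust), gravity g, and burn-rate constant k are assumptions, not taken from the slides:

\dot{h}(t) = v(t), \qquad \dot{v}(t) = -g + \frac{\alpha(t)}{m(t)}, \qquad \dot{m}(t) = -k\,\alpha(t)

\text{maximize } m(\tau) \quad \text{subject to } 0 \le \alpha(t) \le 1, \; h(t) \ge 0, \; h(\tau) = v(\tau) = 0

where τ is the landing time.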
Observability • Observation
Bang-bang Control • For the linear time-optimal problem, bang-bang control is optimal: there is an optimal control that switches between extreme points of the control set
Existence of Time-Optimal Controls • Minimize the time from any point to the origin
Hamiltonian • Definition
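Presumably the definition intended here is the control-theory Hamiltonian; in the notation of the basic problem above, with p the co-state (adjoint) variable:

H(x, p, a) = f(x, a) \cdot p + r(x, a)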
Example: Rocket Railroad Car • x(t) = (q(t), v(t))
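With state x(t) = (q(t), v(t)) (position and velocity of the car), the rocket railroad car is the classical double-integrator, minimum-time problem; the following standard form is assumed:

\dot{q}(t) = v(t), \qquad \dot{v}(t) = \alpha(t), \qquad |\alpha(t)| \le 1

with the objective of reaching (q, v) = (0, 0) in minimum time.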
Example: Rocket Railroad Car • Satellite example
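Returning to the rocket railroad car, below is a minimal Python sketch (not from the slides) of the classical bang-bang feedback law based on the switching curve q = -v|v|/2; the integration step, tolerance, and initial state are illustrative assumptions.

```python
# Time-optimal bang-bang control for the rocket railroad car
# (double integrator): q' = v, v' = a, with |a| <= 1.
# The switching-curve feedback is the classical textbook solution;
# numerical parameters below are illustrative assumptions.

def bang_bang_control(q, v):
    """Return the time-optimal thrust a in {-1, +1} from state (q, v)."""
    switch = -0.5 * v * abs(v)          # switching curve: q = -v|v|/2
    if q > switch:
        return -1.0                     # above the curve: full thrust one way
    if q < switch:
        return +1.0                     # below the curve: full thrust the other way
    return 1.0 if v < 0 else -1.0       # on the curve: ride it to the origin

def time_to_origin(q0, v0, dt=1e-3, t_max=20.0, tol=1e-2):
    """Integrate the closed loop with explicit Euler until (q, v) is near (0, 0)."""
    q, v, t = q0, v0, 0.0
    while t < t_max:
        if abs(q) < tol and abs(v) < tol:
            return t                    # approximate minimum time
        a = bang_bang_control(q, v)
        q += v * dt
        v += a * dt
        t += dt
    return None                         # no convergence within t_max

if __name__ == "__main__":
    # Starting from rest at q = 2, the exact minimum time is 2*sqrt(2) ~ 2.83.
    print(time_to_origin(q0=2.0, v0=0.0))
```

The simulated time should be close to the analytical value, which provides a quick sanity check of the switching-curve law.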
Pontryagin Maximum Principle • “The maximum principle was, in fact, the culmination of a long search in the calculus of variations for a comprehensive multiplier rule, which is the correct way to view it: p(t) is a “Lagrange multiplier” . . . It makes optimal control a design tool, whereas the calculus of variations was a way to study nature.”
Pontryagin Maximum Principle • Adjoint equations (ADJ) • Maximization principle (M) • Transversality condition (T)
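For the basic problem sketched earlier, these conditions take the following standard form along an optimal pair (x*, α*) with co-state p*; the labels and notation are assumed:

(ADJ) \quad \dot{p}^*(t) = -\nabla_x H(x^*(t), p^*(t), \alpha^*(t))

(M) \quad H(x^*(t), p^*(t), \alpha^*(t)) = \max_{a \in A} H(x^*(t), p^*(t), a)

(T) \quad p^*(T) = \nabla g(x^*(T))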
Solving the Riccati Equation • Convert (R) into a second-order, linear ODE
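A worked sketch of this conversion for the scalar case; the generic coefficients a, b, c are placeholders, not taken from the slides. Given the Riccati equation

\dot{k}(t) = a\,k^2(t) + b\,k(t) + c,

the substitution k(t) = -\dot{w}(t) / (a\,w(t)) turns it into the second-order linear ODE

\ddot{w}(t) - b\,\dot{w}(t) + a c\, w(t) = 0,

which can be solved in closed form; k is then recovered from w.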
Dynamic Programming • “it is sometimes easier to solve a problem by embedding it in a larger class of problems and then solving the larger class all at once.”
Hamilton-Jacobi-Bellman Equation • "It is better to be smart from the beginning than to be stupid for a time and then become smart" (a choice of life) • Backward induction: convert the problem into a sequence of constrained optimizations
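In the notation of the basic problem, the HJB equation for the value function v(x, t) takes the standard form (notation assumed):

v_t(x, t) + \max_{a \in A} \left\{ f(x, a) \cdot \nabla_x v(x, t) + r(x, a) \right\} = 0, \qquad v(x, T) = g(x)

Solving it backward in time from the terminal condition at T is the backward-induction step mentioned above.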
Relation between DP & Maximum Principle • The maximum principle treats the problem on the whole horizon from 0 to T • DP embeds it in a family of problems starting at any time t and running to T • The co-state p at time t is the gradient of the value function with respect to the state: p(t) = ∇_x v(x*(t), t)
Introduction • Basics • Controllability • Linear ODE: Bang-bang control • Linear time optimal control • Pontryagin’s maximum principle • Dynamic programming • Dynamic game
Two-person, Zero-sum Differential Game • Basic idea: Two players control the dynamics of some evolving system, where one tries to maximize, and the other tries to minimize, a payoff functional that depends upon the trajectory
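A minimal sketch of this setup in standard notation (α is the maximizer's control, β the minimizer's; symbols assumed):

\dot{x}(t) = f(x(t), \alpha(t), \beta(t)), \qquad x(0) = x^0

P[\alpha(\cdot), \beta(\cdot)] = \int_0^T r(x(t), \alpha(t), \beta(t))\, dt + g(x(T))

Player 1 chooses α(·) to maximize P while player 2 chooses β(·) to minimize it.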
Strategies • Idea: One player will select in advance, not his control, but rather his responses to all possible controls that could be selected by his opponent
Non-cooperative Differential Game • The optimization problem of each player can be formulated as an optimal control problem • The dynamics of the state variable and the payoff of each player are defined over the game horizon (sketched after the next slide) • For the players to play the game, an information structure is required • Three cases of available information: 1. Open-loop information • Players know only the initial value of the state variable, i.e., η_i(t) = {x_0}
Non-cooperative Differential Game 2. Feedback information • At time t, players are assumed to know the values of the state variables at time t − ε, where ε is positive and arbitrarily small • The feedback information is defined as η_i(t) = {x(t − ε)} 3. Closed-loop information • Players know the values of the state variables from the beginning of the game up to the current time t, i.e., η_i(t) = {x(s), 0 ≤ s ≤ t} • The Nash equilibrium is defined as a set of action paths such that each player maximizes its payoff, given the other players' behavior
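A sketch of the underlying formulation with assumed (standard) notation: x is the state, u_i is player i's control, F_i and S_i are player i's instantaneous and terminal payoffs, and η_i(t) is player i's information set at time t:

\dot{x}(t) = f(t, x(t), u_1(t), \ldots, u_N(t)), \qquad x(0) = x_0

J_i = \int_0^T F_i(t, x(t), u_1(t), \ldots, u_N(t))\, dt + S_i(x(T))

\text{open-loop: } \eta_i(t) = \{x_0\}, \quad \text{feedback: } \eta_i(t) = \{x(t - \varepsilon)\}, \quad \text{closed-loop: } \eta_i(t) = \{x(s), \, 0 \le s \le t\}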
Non-cooperative Differential Game • To obtain the Nash equilibrium, each player must solve a dynamic optimization problem • The Hamiltonian function of player i (in the notation above) is H_i = F_i(t, x, u_1, …, u_N) + λ_i(t) f(t, x, u_1, …, u_N) • where λ_i is the co-state variable; the co-state variable can be interpreted as the shadow price of a variation of the state variable
Non-cooperative Differential Game • The first-order conditions for the open-loop solution are sketched below • For the closed-loop solution, the conditions are slightly different • Further reading: Basar's dynamic game book
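A sketch of the open-loop first-order conditions with the Hamiltonian H_i defined above (notation assumed):

\frac{\partial H_i}{\partial u_i} = 0, \qquad \dot{\lambda}_i(t) = -\frac{\partial H_i}{\partial x}, \qquad \lambda_i(T) = \frac{\partial S_i(x(T))}{\partial x}

For the closed-loop (feedback) solution, the co-state equation additionally contains the terms \sum_{j \ne i} (\partial H_i / \partial u_j)(\partial u_j^* / \partial x), since each player accounts for the dependence of the other players' strategies on the current state.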
Summary of Dynamic Control • Dynamic problem formulation • ODE and payoff function • Conditions for controllability • Rank of G and eigenvalues of M • Bang-bang control • Maximum principle • ODE, ADJ, M, and P • Dynamic programming • Divide a complicated problem into a sequence of sub-problems • HJB equations • Dynamic game: Multiuser case • Future reading: Stochastic game
Applications in Wireless Networks: Packet Routing • For routing in a mobile ad hoc network (MANET), the forwarding nodes, as the players, receive an incentive from the destination, in terms of a price, to allocate transmission rate for forwarding packets from the source • A differential game for duopoly competition is applied to model this competitive situation. L. Lin, X. Zhou, L. Du, and X. Miao, "Differential game model with coupling constraint for routing in ad hoc networks," in Proc. of the 5th International Conference on Wireless Communications, Networking and Mobile Computing (WiCOM 2009), pages 3042-3045, September 2009.
Applications in Wireless Networks: Packet Routing • There are two forwarding nodes, which are considered to be the players in this game • The destination pays a price to the forwarding nodes according to the amount of forwarded data • The forwarding nodes compete with each other by adjusting the forwarding rate (i.e., the action, denoted by a_i(t) for player i at time t) to maximize their utilities over the time horizon [0, ∞)
Applications in Wireless Networks: Packet Routing • The payment from the destination at time t is denoted by P(t) • The payoff function of player i integrates revenue minus cost over time: P(t)a_i(t) is the revenue, and g(a) is a quadratic cost function of the vector a of the players' actions • For the payment, an evolution of the price over time (i.e., the sticky-price differential equation of Tsutsui and Mino) is considered (a sketch follows below)
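A hedged sketch of this formulation, following the Tsutsui and Mino sticky-price duopoly on which the model is based; the discount rate ρ, price-adjustment speed s, demand intercept ā, cost coefficient c, and the exact quadratic form are assumed placeholders rather than values from the slides:

\dot{P}(t) = s\left[\bar{a} - a_1(t) - a_2(t) - P(t)\right], \qquad P(0) = P_0

J_i = \int_0^\infty e^{-\rho t} \left[ P(t)\, a_i(t) - c\, a_i(t) - \tfrac{1}{2} a_i^2(t) \right] dt, \qquad i = 1, 2

Here P(t)a_i(t) is player i's revenue and c a_i(t) + a_i²(t)/2 is the quadratic forwarding cost; the price falls as the total forwarding rate rises and adjusts toward its demand-driven level at speed s.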