Abstract
We will talk about some recent work on the risk-sensitive control problem for unconstrained discounted continuous-time Markov decision processes (CTMDPs) taking values in a discrete state space. The controlled model is governed by deterministic history-dependent policies. Under conditions imposed on the primitives that allow unbounded transition rates and unbounded cost rates, we develop the dynamic programming approach and derive the corresponding Hamilton-Jacobi-Bellman (HJB) equation. Furthermore, compactness-continuity conditions are introduced to ensure the existence of a solution to the HJB equation. Finally, we show the existence of an optimal Markov policy and verify that the value function of the risk-sensitive control problem solves the HJB equation.
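For orientation, a typical setup in this literature uses an exponential-utility (risk-sensitive) discounted cost criterion, and the associated HJB equation is time-inhomogeneous because of the discount factor. The sketch below is illustrative only: the notation (risk parameter \(\theta\), discount rate \(\alpha\), cost rate \(c\), transition rates \(q(y\mid x,a)\)) and the exact form of the equation are assumptions about the standard framework, not details taken from the talk itself.

```latex
% Illustrative sketch, not the speaker's exact formulation.
% Risk-sensitive discounted criterion under a policy \pi, with
% risk parameter \theta > 0 and discount rate \alpha > 0:
\[
  J^{\pi}(x) \;=\;
  \mathbb{E}^{\pi}_{x}\!\left[
    \exp\!\Big( \theta \int_{0}^{\infty} e^{-\alpha t}\, c(x_t, a_t)\, dt \Big)
  \right].
\]
% A typical HJB equation in this setting (q(y|x,a) are the
% transition rates of the CTMDP, A(x) the admissible actions at x):
\[
  \frac{\partial \varphi}{\partial t}(t,x)
  \;+\; \inf_{a \in A(x)}
  \Big\{ \theta\, e^{-\alpha t}\, c(x,a)\, \varphi(t,x)
         \;+\; \sum_{y} q(y \mid x, a)\, \varphi(t,y) \Big\}
  \;=\; 0.
\]
```

The value function is then expected to solve this equation, with an optimal Markov policy obtained by selecting minimizing actions in the infimum; making this rigorous under unbounded rates is precisely what the compactness-continuity conditions in the talk are for.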