Document Type

Doctor of Philosophy

Industrial Engineering

First Adviser

Boris Defourny

The primary focus of this dissertation is the design, analysis, and implementation of stochastic optimal control for grid-level storage. It provides stochastic, quantitative models that equip decision-makers with rigorous analytical tools capturing the high uncertainty inherent in storage control problems.

The first part of the dissertation presents a $p$-periodic Markov Decision Process (MDP) model, which is suitable for mitigating end-of-horizon effects. This is an extension of the basic MDP in which the process follows the same pattern every $p$ time periods. We establish improved near-optimality bounds for a class of greedy policies and derive a corresponding value-iteration algorithm suited to periodic problems. A parallel implementation of the algorithm is demonstrated on a grid-level storage control problem involving stochastic electricity prices that follow a daily cycle. Additional analysis shows that the optimal policy is a threshold policy.

The second part of the dissertation is concerned with grid-level battery storage operations, taking the battery aging phenomenon (battery degradation) into consideration. We again model the storage control problem as an MDP, with an extra state variable indicating the aging status of the battery. An algorithm that exploits the problem structure and works directly on the continuous state space is developed to maximize the expected cumulative discounted reward over the life of the battery. The algorithm determines an optimal policy by solving a sequence of quasiconvex problems indexed by a battery-life state. Computational results compare the proposed approach to a standard dynamic programming method and evaluate the impact of refinements in the battery model. Error bounds for the proposed algorithm are established to demonstrate its accuracy. A generalization of the price model to a class of Markovian regime-switching processes is also provided.
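To illustrate the flavor of a periodic value iteration, the sketch below solves a toy $p$-periodic storage MDP. The instance (state space, action set, price cycle, discount factor) is entirely hypothetical and far simpler than the dissertation's model: prices cycle deterministically with period $p$ rather than stochastically, and the value function carries one table per phase of the cycle.

```python
import numpy as np

# Hypothetical toy instance (not the dissertation's model):
# state = storage level in {0, ..., S}; action = -1 (sell), 0 (hold), +1 (buy);
# the price repeats every p periods, mimicking a daily cycle.
p, S, gamma = 4, 5, 0.95
prices = [10.0, 30.0, 50.0, 30.0]   # price at each phase of the cycle
actions = [-1, 0, 1]

V = np.zeros((p, S + 1))            # one value function per phase t mod p
for _ in range(500):                # value-iteration sweeps
    V_new = np.zeros_like(V)
    for t in range(p):
        for s in range(S + 1):
            best = -np.inf
            for a in actions:
                s2 = s + a
                if not 0 <= s2 <= S:
                    continue
                # selling (a = -1) earns the price, buying (a = +1) pays it
                reward = -a * prices[t]
                best = max(best, reward + gamma * V[(t + 1) % p, s2])
            V_new[t, s] = best
    V = V_new

# Greedy policy at phase 0 (lowest price): exhibits a threshold structure,
# buying whenever storage is below capacity.
policy = [max(actions,
              key=lambda a: (-a * prices[0] + gamma * V[1, s + a])
              if 0 <= s + a <= S else -np.inf)
          for s in range(S + 1)]
```

On this toy cycle the greedy phase-0 policy buys at every storage level below capacity, a simple instance of the threshold structure the abstract refers to.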
The last part of this dissertation is concerned with how the ownership of energy storage affects prices. Instead of the single player considered in most storage control problems, we consider two players (a consumer and a supplier) in this market. Energy storage operations are modeled as an infinite-horizon Markov game with random demand, in which each player maximizes its expected discounted cumulative welfare. A value-iteration framework with an embedded bimatrix game is provided to find equilibrium policies for the players. Computational results show that the gap between the optimal policies and the obtained policies is negligible. The assumption that storage levels are common knowledge is made without much loss of generality, because a learning algorithm is proposed that allows a player to ultimately identify the storage level of the other player. The expected value improvement from keeping storage information private at the beginning of the game is then shown to be insignificant.
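Inside such a value-iteration framework, each state induces a stage bimatrix game whose payoffs fold in the continuation values. As a minimal sketch of that stage problem, the snippet below enumerates pure-strategy Nash equilibria of one hypothetical bimatrix game by best-response checks; the payoff matrices and action labels are illustrative only, and a full treatment would also handle mixed equilibria.

```python
import numpy as np

def pure_nash(A, B):
    """Return all pure-strategy Nash equilibria of the bimatrix game
    (A = row player's payoffs, B = column player's payoffs) as (row, col)
    index pairs: cell (i, j) is an equilibrium iff each entry is a best
    response to the other player's choice."""
    eq = []
    for i in range(A.shape[0]):
        for j in range(A.shape[1]):
            if A[i, j] >= A[:, j].max() and B[i, j] >= B[i, :].max():
                eq.append((i, j))
    return eq

# Hypothetical consumer/supplier stage payoffs
# (rows: charge/hold/discharge; columns: low/high price):
A = np.array([[3.0, 1.0], [2.0, 2.0], [0.0, 4.0]])   # consumer
B = np.array([[2.0, 1.0], [1.0, 3.0], [4.0, 2.0]])   # supplier
print(pure_nash(A, B))   # → [(0, 0)]
```

When no pure equilibrium exists (e.g. a matching-pennies stage game), the enumeration returns an empty list, which is why an embedded bimatrix solver must in general compute mixed-strategy equilibria.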

Available for download on Tuesday, February 05, 2019