Abstract
We consider an optimal control problem with a deterministic finite horizon and state
variable dynamics given by a Markov-switching jump–diffusion stochastic differential
equation. Our main results extend the dynamic programming technique to this larger
family of stochastic optimal control problems. More specifically, we provide a detailed
proof of Bellman’s optimality principle (or dynamic programming principle) and obtain
the corresponding Hamilton–Jacobi–Bellman equation, which turns out to be a partial
integro-differential equation due to the extra terms arising from the Lévy process and the
Markov process. As an application of our results, we study a finite horizon consumption–
investment problem for a jump–diffusion financial market consisting of one risk-free asset
and one risky asset whose coefficients are assumed to depend on the state of a continuous-time, finite-state Markov process. We provide a detailed study of the optimal strategies for this problem for the economically relevant families of power and logarithmic utilities.
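For orientation, the sketch below gives a generic form of the Hamilton–Jacobi–Bellman partial integro-differential equation for this class of problems. The notation (drift b, volatility σ, jump coefficient γ, Lévy measure ν, chain generator rates q_ij, running cost f, terminal cost g) is a standard placeholder choice and is not taken from the paper itself, whose symbols may differ.

```latex
% Generic HJB PIDE for a controlled Markov-switching jump-diffusion.
% State: dX_t = b dt + sigma dW_t + \int gamma(.,z) \tilde N(dt,dz);
% regime: alpha_t is a continuous-time Markov chain with generator (q_{ij}).
\begin{equation*}
\begin{aligned}
0 = {} & \partial_t V(t,x,i)
  + \sup_{u \in U} \Big\{ f(t,x,i,u)
  + b(t,x,i,u)\,\partial_x V(t,x,i)
  + \tfrac{1}{2}\,\sigma^2(t,x,i,u)\,\partial_{xx} V(t,x,i) \\
& \quad + \int_{\mathbb{R}\setminus\{0\}}
    \big[ V\big(t,\,x+\gamma(t,x,i,u,z),\,i\big) - V(t,x,i)
          - \gamma(t,x,i,u,z)\,\partial_x V(t,x,i) \big]\,\nu(dz) \Big\} \\
& + \sum_{j \neq i} q_{ij}\,\big[ V(t,x,j) - V(t,x,i) \big],
\qquad V(T,x,i) = g(x,i).
\end{aligned}
\end{equation*}
```

The integral term is what makes the equation integro-differential (the Lévy-process contribution), and the sum over regimes $j \neq i$ is the extra coupling term contributed by the Markov chain. As a benchmark only: in the pure-diffusion Merton case with power utility $U(c)=c^{\delta}/\delta$ and regime-dependent coefficients $(\mu_i,\sigma_i,r_i)$, the first-order condition gives the familiar myopic fraction $\pi_i^* = (\mu_i - r_i)/\big((1-\delta)\sigma_i^2\big)$ invested in the risky asset; with jumps, the analogous condition involves an integral against $\nu$ and is generally not available in closed form.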
Keywords
Stochastic optimal control; Jump–diffusion; Markov-switching; Optimal consumption–investment
Citation
In "Journal of Computational and Applied Mathematics". ISSN 0377-0427. 267 (2014) 1-19