Relevant states and memory in Markov chain bootstrapping and simulation

Roy Cerqueti

Research output: Contribution to journal › Article › peer-review

9 Citations (Scopus)

Abstract

Markov chain theory is proving to be a powerful approach to bootstrap and simulate highly nonlinear time series. In this work, we provide a method to estimate the memory of a Markov chain (i.e. its order) and to identify its relevant states. In particular, the choice of memory lags and the aggregation of irrelevant states are obtained by looking for regularities in the transition probabilities. Our approach is based on an optimization model. More specifically, we consider two competing objectives that a researcher will in general pursue when dealing with bootstrapping and simulation: preserving the “structural” similarity between the original and the resampled series, and assuring a controlled diversification of the latter. A discussion based on information theory is developed to define the desirable properties for such optimal criteria. Two numerical tests are developed to verify the effectiveness of the proposed method.
Original language: English
Pages (from-to): 163-177
Journal: European Journal of Operational Research
DOIs
Publication status: Published - Jan 2017
Externally published: Yes
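The abstract describes an optimization-based choice of memory lags and an aggregation of irrelevant states driven by regularities in the transition probabilities. As a point of reference only, the following is a minimal sketch of a plain order-k Markov chain bootstrap in Python: transition probabilities are estimated by frequency counts over length-k contexts and a resampled series is drawn from them. The function name `markov_bootstrap` and its parameters are illustrative assumptions, not taken from the paper, and the sketch omits the paper's relevant-state aggregation and its similarity/diversification criteria.

```python
import numpy as np
from collections import defaultdict

def markov_bootstrap(series, order=1, length=None, seed=0):
    """Resample a discrete-valued series with an order-k Markov chain.

    Transition probabilities are estimated by simple counting; this is a
    generic Markov bootstrap, not the paper's optimization-based selection
    of memory lags and aggregation of states.
    """
    rng = np.random.default_rng(seed)
    length = length or len(series)

    # Count transitions from each length-`order` context to the next state.
    counts = defaultdict(lambda: defaultdict(int))
    for t in range(order, len(series)):
        context = tuple(series[t - order:t])
        counts[context][series[t]] += 1

    # Start from a randomly chosen observed context and sample forward.
    contexts = list(counts.keys())
    out = list(contexts[rng.integers(len(contexts))])
    while len(out) < length:
        context = tuple(out[-order:])
        if context not in counts:           # unseen context: restart from an observed one
            context = contexts[rng.integers(len(contexts))]
        nxt, freq = zip(*counts[context].items())
        probs = np.array(freq, dtype=float)
        out.append(nxt[rng.choice(len(nxt), p=probs / probs.sum())])
    return out[:length]

# Example: bootstrap a binary series assuming memory of two lags.
original = [0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 0]
replicate = markov_bootstrap(original, order=2)
```

In this simple sketch the order is fixed by the user; the paper's contribution is precisely to select the memory and the relevant (aggregated) states by balancing structural similarity against controlled diversification of the resampled series.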
