Approximating Markov Chains for Bootstrapping and Simulation

Roy Cerqueti

Research output: Chapter in Book/Report/Conference proceeding › Chapter › peer-review

Abstract

In this work we develop a bootstrap method based on the theory of Markov chains. The method stems from the two competing objectives that a researcher pursues when performing a bootstrap procedure: (i) to preserve the structural similarity, in a statistical sense, between the original and the bootstrapped sample; (ii) to ensure a diversification of the latter with respect to the former. The original sample is assumed to be driven by a Markov chain. The approach we follow is to solve an optimization problem that estimates the memory of the Markov chain (i.e. its order) and identifies its relevant states. The basic ingredients of the model are the transition probabilities, whose distance is measured through a suitably defined functional. We apply the method to the series of electricity prices in Spain. A comparison with the Variable Length Markov Chain bootstrap, a well-established bootstrap method, shows the superiority of our proposal in reproducing the dependence among the data.
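The abstract does not spell out the chapter's optimization over the chain's order and state aggregation, so the following is only a minimal, illustrative sketch of a plain fixed-order Markov chain bootstrap, not the authors' procedure. The function name `markov_bootstrap`, the fixed `order` argument, and the quantile-based discretization of the price series are assumptions made purely for the example.

```python
# Illustrative sketch: a plain order-k Markov chain bootstrap on a
# discretized series. The chapter's optimization of the order and of the
# relevant states is NOT reproduced here; the order is taken as given.
import numpy as np
from collections import defaultdict

def markov_bootstrap(series, order=1, size=None, rng=None):
    """Resample `series` (a sequence of integer state labels) by estimating
    empirical transition probabilities of an order-`order` Markov chain and
    simulating a new path from them."""
    rng = np.random.default_rng(rng)
    series = np.asarray(series)
    size = len(series) if size is None else size

    # Empirical transitions: history (tuple of `order` states) -> observed next states.
    transitions = defaultdict(list)
    for t in range(order, len(series)):
        transitions[tuple(series[t - order:t])].append(series[t])

    # Start from a randomly chosen observed history, then sample forward.
    start = rng.integers(0, len(series) - order + 1)
    path = list(series[start:start + order])
    while len(path) < size:
        history = tuple(path[-order:])
        candidates = transitions.get(history)
        if not candidates:  # unseen history: restart from a random observed one
            start = rng.integers(0, len(series) - order + 1)
            path.extend(series[start:start + order])
            continue
        path.append(rng.choice(candidates))
    return np.array(path[:size])

# Example usage: discretize a synthetic price series into quartile-based states,
# then draw a bootstrap path of the same length.
prices = np.cumsum(np.random.default_rng(0).normal(size=500))
states = np.digitize(prices, np.quantile(prices, [0.25, 0.5, 0.75]))
resampled_states = markov_bootstrap(states, order=2)
```

In this simplified setting, the trade-off described in the abstract appears as the choice of order and of the state partition: a finer partition or higher order reproduces the original dependence more faithfully, while a coarser one yields more diversified resamples.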
Original language: English
Title of host publication: Stochastic Models, Statistics and Their Applications, Springer Proceedings in Mathematics & Statistics
Place of publication: Switzerland
Publisher: Springer
Publication status: Published - 5 Feb 2015
Externally published: Yes
