An iterative decision-making scheme for Markov decision processes and its application to self-adaptive systems

Citation data:

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), ISSN: 1611-3349, Vol: 9633, Page: 269-286

Publication Year:
2016
Usage: 2
Abstract Views: 2
Captures: 281
Readers: 281
Citations: 4
Citation Indexes: 4
Repository URL:
https://ro.uow.edu.au/eispapers1/159
DOI:
10.1007/978-3-662-49665-7_16
Author(s):
Guoxin Su; David S. Rosenblum; Taolue Chen; Yuan Feng; P. S. Thiagarajan
Publisher(s):
Springer Nature
Tags:
Mathematics; Computer Science; Engineering; Science and Technology Studies; Markov decision processes; self-adaptive systems; iterative decision-making
Conference paper description:
Software is often governed by, and thus adapts to, phenomena that occur at runtime. Unlike traditional decision problems, in which the decision-making model is fixed before reasoning begins, the adaptation logic of such software must work with empirical data and is subject to practical constraints. We present an Iterative Decision-Making Scheme (IDMS) that infers both point and interval estimates for the undetermined transition probabilities in a Markov Decision Process (MDP) from sampled data, and iteratively computes a confidently optimal scheduler from a given finite subset of schedulers. The most important feature of IDMS is its flexibility in adjusting the criterion of confident optimality and the sample size within the iteration, yielding a tradeoff among accuracy, data usage, and computational overhead. We apply IDMS to Rainbow, an existing self-adaptation framework, and conduct a case study using a Rainbow system to demonstrate the flexibility of IDMS.
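The iterative loop described in the abstract can be illustrated with a minimal sketch. This is not the paper's algorithm, only an assumed toy version: a single undetermined Bernoulli transition probability is estimated from samples with a point estimate and a normal-approximation (Wald) confidence interval, and the loop stops once one scheduler's value interval dominates all others, i.e. it is "confidently optimal" under that criterion. The names `idms_sketch`, `interval_estimate`, the `batch` parameter, and the monotone value functions are all illustrative assumptions.

```python
import random
from math import sqrt

def interval_estimate(successes, n, z=1.96):
    # Wald interval for a Bernoulli parameter, clamped to [0, 1].
    p = successes / n
    half = z * sqrt(p * (1 - p) / n)
    return p, max(0.0, p - half), min(1.0, p + half)

def idms_sketch(sample, values, batch=100, max_rounds=50, z=1.96):
    """Toy iteration: draw a batch of sampled transitions, re-estimate the
    unknown probability p with a confidence interval, and stop when one
    scheduler's value interval dominates the rest.

    sample : callable returning one Bernoulli outcome (0/1 or bool).
    values : dict mapping scheduler name -> function p -> expected value,
             assumed monotone in p so interval endpoints bound the value.
    """
    successes, n = 0, 0
    p = lo = hi = None
    for _ in range(max_rounds):
        successes += sum(sample() for _ in range(batch))
        n += batch
        p, lo, hi = interval_estimate(successes, n, z)
        # Value interval for each scheduler from the interval on p.
        ivals = {s: tuple(sorted((v(lo), v(hi)))) for s, v in values.items()}
        best = max(ivals, key=lambda s: values[s](p))
        # Confident optimality: best's lower bound beats every other upper bound.
        if all(ivals[best][0] >= ivals[s][1] for s in ivals if s != best):
            return best, n, p
    return None, n, p  # no confidently optimal scheduler within the budget

# Hypothetical usage: scheduler A earns 10 on a success (value 10*p),
# scheduler B earns a fixed 6; the true but unknown p is 0.7.
random.seed(0)
sched, n_used, p_hat = idms_sketch(
    lambda: random.random() < 0.7,
    {"A": lambda p: 10.0 * p, "B": lambda p: 6.0},
)
```

Tightening `z` or shrinking `batch` trades confidence and data usage against the number of iterations, which is the kind of accuracy/data/overhead tradeoff the abstract refers to.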