Symbolic State Partitioning for Reinforcement Learning

Abstract

Tabular reinforcement learning methods cannot operate directly on continuous state spaces. One solution to this problem is to partition the state space. A good partitioning enables generalization during learning and more efficient exploitation of prior experiences. Consequently, the learning process becomes faster and produces more reliable policies. However, partitioning introduces approximation, which is particularly harmful in the presence of nonlinear relations between state components. An ideal partition should be as coarse as possible, while capturing the key structure of the state space for the given problem. This work extracts partitions from the environment dynamics by symbolic execution. We show that symbolic partitioning improves state space coverage with respect to environmental behavior and allows reinforcement learning to perform better for sparse rewards. We evaluate symbolic state space partitioning with respect to precision, scalability, learning agent performance and state space coverage for the learned policies.
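To illustrate the core idea, the sketch below shows tabular Q-learning run over partition indices rather than raw continuous states. This is a minimal illustration, not the paper's method: the one-dimensional state, the action set, and the threshold predicates standing in for the path conditions a symbolic executor would extract from the environment dynamics are all hypothetical.

import random
from collections import defaultdict

# Hypothetical partition of a 1-D continuous state space. In the paper's
# setting, cells like these would be derived by symbolic execution of the
# environment dynamics; here they are hand-picked thresholds.
PREDICATES = [
    lambda s: s < -0.5,
    lambda s: -0.5 <= s < 0.0,
    lambda s: 0.0 <= s < 0.5,
    lambda s: s >= 0.5,
]

def partition_index(state):
    """Map a continuous state to the cell whose predicate it satisfies."""
    for i, pred in enumerate(PREDICATES):
        if pred(state):
            return i
    raise ValueError(f"state {state} not covered by any partition cell")

# Tabular Q-learning keyed on (cell, action) pairs instead of raw states,
# so experience generalizes across all states in the same cell.
ACTIONS = [-1, +1]
Q = defaultdict(float)

def q_update(state, action, reward, next_state, alpha=0.1, gamma=0.99):
    s, s_next = partition_index(state), partition_index(next_state)
    best_next = max(Q[(s_next, a)] for a in ACTIONS)
    Q[(s, action)] += alpha * (reward + gamma * best_next - Q[(s, action)])

def epsilon_greedy(state, epsilon=0.1):
    s = partition_index(state)
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(s, a)])

A coarse partition keeps the Q-table small and speeds up learning, but, as the abstract notes, cells that cut across nonlinear structure in the dynamics introduce harmful approximation; the partition should therefore be as coarse as the problem's structure allows.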

Publication
Proc. 25th Intl. Conf. on Fundamental Approaches to Software Engineering (FASE 2025). © Springer 2025. To appear.