Testing configurable systems, which are increasingly prevalent, is expensive due to the large number of configurations and test cases. Existing approaches reduce this expense by selecting or prioritizing configurations. However, these approaches still redundantly run the full test suite on each selected configuration. To address this redundancy, we propose a test case selection approach that analyzes the impact of configuration changes using static program slicing. Given an existing test suite T used for testing a system S under a configuration C, our approach decides, for each t in T, whether t must be re-run when testing S under a different configuration C'. We have evaluated our approach on a large industrial system at ABB with promising results.