Software engineering is undergoing a paradigm shift to accommodate new CPU architectures with many cores, in which concurrency plays a more fundamental role in programming languages and libraries. New models and specialized software frameworks are needed to help LHC scientists develop algorithms and applications that allow for maximally parallel execution. In this paper we present our current ideas for evolving the frameworks used by the LHC experiments to support decomposing the data processing of each event into smaller tasks that can be executed simultaneously on different CPUs, together with the ability to process several events at the same time. We also describe results from the prototype used to exercise the key aspects of the new frameworks.
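The two levels of parallelism described above (independent tasks within one event, and several events in flight at once) can be sketched as follows. This is a minimal illustration using only the C++ standard library; the function names (`run_algorithm`, `process_event`, `process_events`) and the placeholder computation are hypothetical stand-ins, not the actual framework API, which in practice schedules tasks with a dedicated runtime rather than raw `std::async`.

```cpp
#include <future>
#include <vector>

// Hypothetical stand-in for one reconstruction algorithm: it works on
// one piece of an event's data, independently of the other algorithms.
int run_algorithm(int event_id, int algo_id) {
    return event_id * 100 + algo_id;  // placeholder computation
}

// Intra-event parallelism: the independent algorithms of a single event
// are launched as separate tasks that may run on different CPUs.
int process_event(int event_id, int n_algorithms) {
    std::vector<std::future<int>> tasks;
    for (int a = 0; a < n_algorithms; ++a)
        tasks.push_back(std::async(std::launch::async, run_algorithm, event_id, a));
    int sum = 0;
    for (auto& t : tasks) sum += t.get();  // combine the partial results
    return sum;
}

// Inter-event parallelism: several events are processed concurrently,
// each one itself spawning its own algorithm tasks.
std::vector<int> process_events(int n_events, int n_algorithms) {
    std::vector<std::future<int>> events;
    for (int e = 0; e < n_events; ++e)
        events.push_back(std::async(std::launch::async, process_event, e, n_algorithms));
    std::vector<int> results;
    for (auto& ev : events) results.push_back(ev.get());
    return results;
}
```

Because the per-algorithm results are combined deterministically, the output is independent of the order in which tasks happen to finish; a real framework additionally has to schedule around data dependencies between algorithms, which this sketch omits.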