In this paper, a strategy to improve the performance of particle swarm optimization is proposed. The idea is to alter the content of the worst of the personal best particles after each iteration. The worst personal best particle is thereby forced off its regular trajectory, which in turn influences the behavior of the other particles. This approach prevents the particles from getting stuck in local minima. To alter the worst personal best particle, some of its elements are replaced by their opposition values, inspired by the concept of opposition-based learning, while other elements are taken from the global best found so far or from another personal best particle. Depending on which personal best particles are used, two variants are developed. The strategy enhances both the exploration and exploitation capability of particle swarm optimization: as demonstrated in the paper, both variants achieve better solution quality and faster convergence on a suite of benchmark functions, especially multimodal ones.
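The core alteration step can be illustrated with a minimal sketch. This is not the paper's exact procedure; the function name, the fraction of dimensions replaced, and the use of the global best for the remaining elements are illustrative assumptions. The opposition value of an element x in the interval [lb, ub] is taken as lb + ub - x, following the standard opposition-based learning definition.

```python
import numpy as np

def alter_worst_pbest(pbest, pbest_fitness, gbest, lb, ub,
                      frac_opposite=0.5, rng=None):
    """Illustrative sketch: alter the worst personal best particle.

    Randomly chosen dimensions are replaced by their opposition values
    (lb + ub - x); the remaining dimensions are copied from the global
    best (one of the two variants could instead use another personal
    best particle). All names and parameters here are assumptions, not
    the paper's exact algorithm.
    """
    rng = np.random.default_rng() if rng is None else rng
    worst = int(np.argmax(pbest_fitness))   # minimization: largest fitness is worst
    new = pbest[worst].copy()
    mask = rng.random(new.size) < frac_opposite
    new[mask] = lb[mask] + ub[mask] - new[mask]   # opposition values
    new[~mask] = gbest[~mask]                     # elements from the global best
    return worst, new
```

Because the opposition mapping keeps each element inside its original bounds, the altered particle remains feasible while being pushed to a distant region of the search space.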