Research on sentence compression has been conducted for many years in other languages, especially English, but research on Chinese sentence compression remains scarce. In this paper, we describe an efficient probabilistic and syntactic approach to Chinese sentence compression. We introduce the classical noisy-channel approach into Chinese sentence compression and improve it in several ways. Since no parallel training corpus exists for Chinese, we adopt an unsupervised learning method. This paper also presents a novel bottom-up optimizing algorithm that considers both bigram and syntactic probabilities when generating candidate compressed sentences. We evaluate the results against manual compressions and a simple baseline; the experiments demonstrate the effectiveness of the proposed approach.
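To make the candidate-scoring idea concrete, the sketch below shows a minimal noisy-channel-style ranker that combines a bigram language-model score with a syntactic probability for each candidate compression. All probabilities, words, and helper names here are hypothetical illustrations, not the paper's actual model or data.

```python
import math

# Toy bigram log-probabilities (hypothetical values for illustration;
# a real system would estimate these from a large monolingual corpus).
BIGRAM_LOGP = {
    ("<s>", "the"): -0.5, ("the", "cat"): -1.0,
    ("cat", "sleeps"): -1.2, ("sleeps", "</s>"): -0.3,
    ("the", "black"): -2.0, ("black", "cat"): -1.5,
}
UNSEEN_LOGP = -6.0  # crude back-off score for unseen bigrams

def bigram_score(words):
    """Sum of bigram log-probabilities over the padded word sequence."""
    padded = ["<s>"] + words + ["</s>"]
    return sum(BIGRAM_LOGP.get(pair, UNSEEN_LOGP)
               for pair in zip(padded, padded[1:]))

def score(candidate, syntax_logp):
    """Noisy-channel-style score: the language-model probability of the
    compression plus a (here assumed, pre-computed) log-probability
    from a syntax model of the deletions that produced it."""
    return bigram_score(candidate) + syntax_logp

# Two hypothetical candidate compressions of one source sentence,
# each paired with an assumed syntax-model log-probability.
candidates = [
    (["the", "cat", "sleeps"], -0.7),
    (["the", "black", "cat", "sleeps"], -1.1),
]
best = max(candidates, key=lambda c: score(*c))
print(" ".join(best[0]))
```

A full system would generate candidates bottom-up over a parse tree and use real estimated probabilities; this fragment only illustrates how the two probability sources can be combined into a single ranking score.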