Extractive summarization systems automatically select representative sentences from a source text or spoken document and concatenate them into a concise summary, helping people grasp salient information effectively and efficiently. Recent advances in applying nonnegative matrix factorization (NMF) to various tasks, including summarization, motivate us to extend this line of research with the following contributions. First, we propose to employ graph-regularized nonnegative matrix factorization (GNMF), in which an affinity graph, whose similarity measure is tailored to the evaluation metric of summarization, is constructed and in turn serves as a neighborhood-preserving constraint on NMF, so as to better represent the semantic space of the sentences in the document to be summarized. Second, we further impose sparsity and orthogonality constraints on NMF and GNMF for better selection of representative sentences to form a summary. Extensive experiments conducted on a Mandarin broadcast news speech dataset demonstrate the effectiveness of the proposed unsupervised summarization models relative to several widely used state-of-the-art methods compared in the paper.
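To make the GNMF idea concrete, the following is a minimal sketch, not the paper's implementation: a term-by-sentence matrix X is factorized as X ≈ WH with standard multiplicative updates, where the graph-regularization term Tr(H L Hᵀ) (L = D − A, the Laplacian of a sentence affinity graph A) encourages similar sentences to have similar latent representations. The cosine-similarity kNN graph, the regularization weight `lam`, and the sentence-scoring rule at the end are all generic stand-ins; the paper tailors the similarity measure to the summarization evaluation metric.

```python
import numpy as np

def cosine_affinity(X, knn=3):
    # kNN cosine-similarity graph over sentences (columns of X).
    # Generic stand-in for the paper's metric-tailored similarity.
    Xn = X / (np.linalg.norm(X, axis=0, keepdims=True) + 1e-9)
    S = Xn.T @ Xn
    np.fill_diagonal(S, 0.0)
    A = np.zeros_like(S)
    for i in range(S.shape[0]):
        idx = np.argsort(S[i])[-knn:]  # keep the knn strongest neighbors
        A[i, idx] = S[i, idx]
    return np.maximum(A, A.T)  # symmetrize

def gnmf(X, A, k, lam=0.1, n_iter=200, seed=0):
    # Minimize ||X - WH||_F^2 + lam * Tr(H (D - A) H^T) with
    # multiplicative updates (Cai et al.-style GNMF); W, H stay nonnegative.
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, k)) + 1e-3
    H = rng.random((k, n)) + 1e-3
    D = np.diag(A.sum(axis=1))  # degree matrix of the affinity graph
    eps = 1e-9
    for _ in range(n_iter):
        W *= (X @ H.T) / (W @ H @ H.T + eps)
        H *= (W.T @ X + lam * H @ A) / (W.T @ W @ H + lam * H @ D + eps)
    return W, H

# Toy term-by-sentence matrix: 6 terms x 5 sentences (illustrative only).
X = np.array([[2, 1, 0, 0, 1],
              [1, 2, 0, 0, 0],
              [0, 0, 2, 1, 0],
              [0, 0, 1, 2, 1],
              [1, 0, 0, 1, 2],
              [0, 1, 1, 0, 1]], dtype=float)

A = cosine_affinity(X)
W, H = gnmf(X, A, k=2)

# One simple selection rule: rank sentences by their total loading
# in the latent topic space and pick from the top.
scores = H.sum(axis=0)
summary_order = np.argsort(scores)[::-1]
```

The sparsity and orthogonality constraints mentioned in the abstract would add further penalty terms to the same objective; they are omitted here to keep the sketch focused on the graph-regularization idea.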