Inspired by the success of the deep neural network-hidden Markov model (DNN-HMM) in acoustic modeling for automatic speech recognition, a number of researchers from various fields have independently proposed combining DNNs with conditional random fields (CRFs). Despite their subtle differences, this class of models is collectively referred to as “NeuroCRF” in this paper. We focus on applying a linear-chain NeuroCRF, with distributed word representations as input, to the fundamental and ubiquitous problem of sequence labeling in natural language processing. We question the necessity of the low-rank parameterization used in previous work, in which the neural network learns an emission feature matrix that is added to a separate transition feature matrix. By directly modeling a full-rank feature matrix, we show that statistically significant gains can be achieved on the CoNLL-2000 syntactic chunking task, without harming performance on tasks with weak dependencies between consecutive labels, such as the CoNLL-2003 named entity recognition task.
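To make the contrast concrete, the following is a minimal NumPy sketch of the two parameterizations, under assumed shapes and names (`H`, `W_emit`, `trans`, `W_full` are all hypothetical, not from the paper): the low-rank variant scores label pairs as per-position emission scores plus a shared transition matrix, while the full-rank variant lets the network score every (previous label, current label) pair at each position. Viterbi decoding operates on either score tensor identically.

```python
import numpy as np

def viterbi(scores):
    # scores: (T, K, K) array; scores[t, i, j] = score of label j at step t
    # given label i at step t-1 (row 0 at t=0 serves as a fixed start state).
    T, K, _ = scores.shape
    delta = scores[0, 0].copy()              # best score ending in each label
    back = np.zeros((T, K), dtype=int)       # backpointers for decoding
    for t in range(1, T):
        cand = delta[:, None] + scores[t]    # (K, K): extend every path
        back[t] = cand.argmax(axis=0)
        delta = cand.max(axis=0)
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

K, D, T_len = 3, 4, 5                        # labels, feature dim, length
rng = np.random.default_rng(0)
H = rng.normal(size=(T_len, D))              # stand-in for DNN hidden features

# Low-rank: per-position emission scores plus one shared transition matrix.
W_emit = rng.normal(size=(D, K))
trans = rng.normal(size=(K, K))
emissions = H @ W_emit                       # (T, K)
scores_lr = emissions[:, None, :] + trans    # (T, K, K)

# Full-rank: the network directly scores every (previous, current) label pair.
W_full = rng.normal(size=(D, K * K))
scores_fr = (H @ W_full).reshape(T_len, K, K)

print(viterbi(scores_lr), viterbi(scores_fr))
```

The full-rank version has K times as many output parameters per position, which is what lets it capture label-pair interactions that a single shared transition matrix cannot; the abstract's claim is that this pays off when consecutive labels are strongly dependent, as in chunking.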