Named entity recognition (NER) in open-domain conversation is challenging due to the informality of spoken language. Rather than increasing the amount of labeled data, which is expensive and time-consuming, NER models have used word embeddings learned from unlabeled data to mitigate data sparsity. We propose a novel method for training word embeddings specifically for the NER task. We show that these task-specific word embeddings outperform task-independent word embeddings when used as features in an NER model.