Facial expression is significant for face-to-face communication, since it is a form of body language that conveys additional information during communication. In recent surveys, some existing methods extract features from facial images using regions of interest such as the eyes and nose, the eyes with eyebrows, or the mouth, and then extract global features from those regions. Such feature extraction can perform better when irrelevant features are eliminated; moreover, it reduces the time consumed in normalization and recognition. The method proposed in this paper has two main parts: locating points in the face region to form graph-based features, and training neural networks to recognize emotion from the corresponding feature vector. In the first phase, fourteen points are manually located to create a graph whose edges connect these points. The Euclidean distances along those edges are then computed and used as features for training in the next phase. The second phase uses multilayer perceptrons (MLPs), a class of artificial neural networks (ANNs), trained with the back-propagation learning algorithm to recognize six basic emotions. To evaluate its performance, the proposed system is applied to the Cohn-Kanade AU-Coded facial expression database, achieving 95.24% accuracy, which is higher than the existing method.
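As a rough illustration of the feature-extraction step described above, the sketch below computes Euclidean distances along the edges of a landmark graph. The coordinates and edge list here are hypothetical placeholders, not the paper's actual fourteen manually located points or its graph topology.

```python
import math

# Hypothetical (x, y) landmark coordinates -- illustrative only;
# the paper manually locates fourteen specific facial points.
landmarks = [
    (30, 45), (45, 40), (60, 45),    # left eyebrow / eye region
    (80, 45), (95, 40), (110, 45),   # right eyebrow / eye region
    (55, 60), (85, 60),              # eye corners
    (70, 75),                        # nose tip
    (55, 95), (70, 90), (85, 95),    # upper mouth
    (70, 105),                       # lower lip
    (70, 120),                       # chin
]

# Hypothetical edges of the face graph, as index pairs into `landmarks`.
edges = [(0, 1), (1, 2), (3, 4), (4, 5), (6, 8), (7, 8),
         (8, 10), (9, 10), (10, 11), (10, 12), (12, 13)]

def edge_distances(points, graph_edges):
    """Euclidean length of each graph edge; the resulting vector
    would serve as the feature vector fed to the MLP classifier."""
    return [math.dist(points[i], points[j]) for i, j in graph_edges]

features = edge_distances(landmarks, edges)
```

Each edge contributes one distance, so the feature vector length equals the number of edges; in the paper these distances are then passed to a back-propagation-trained MLP for six-class emotion recognition.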