Incremental learning incorporates new data into a classifier model without full retraining, improving computational efficiency. In this paper, we present two ways of performing incremental learning on Grassmann manifolds. In a Grassmann kernel learning framework, data are embedded as subspaces, and kernels are constructed to map data subspaces to a projection space for classification. As new data samples...
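The abstract is truncated, so the exact kernel is not stated, but a standard choice in Grassmann kernel learning is the projection kernel between orthonormal subspace bases. A minimal numpy sketch (function names and the SVD embedding step are my assumptions, not the paper's method):

```python
import numpy as np

def orthonormal_basis(samples, dim):
    """Columns of the returned matrix span the dominant dim-D subspace
    of the row-wise samples (an assumed SVD-based embedding step)."""
    u, _, _ = np.linalg.svd(samples.T, full_matrices=False)
    return u[:, :dim]

def projection_kernel(basis_a, basis_b):
    """Projection kernel between two subspaces given orthonormal bases:
    the squared Frobenius norm of the product of the two bases."""
    return float(np.linalg.norm(basis_a.T @ basis_b, ord="fro") ** 2)

rng = np.random.default_rng(0)
a = orthonormal_basis(rng.normal(size=(20, 10)), dim=3)
b = orthonormal_basis(rng.normal(size=(20, 10)), dim=3)
k_aa = projection_kernel(a, a)  # equals the subspace dimension, 3
k_ab = projection_kernel(a, b)  # lies between 0 and the subspace dimension
```

The kernel of a subspace with itself equals its dimension, which gives a quick sanity check when building the Gram matrix for classification.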
Command extraction from human beings becomes easier for a machine if it can analyze non-verbal forms of communication such as emotions. This paper focuses on improving the efficiency of extracting emotion from images of human facial expressions. The features used in this experiment were extracted from the JAFFE (Japanese Female Facial Expression) database, which includes 213 images of different...
Previous models based on Deep Convolutional Neural Networks (DCNNs) for face verification focused on learning face representations. The face features extracted from these models are fed to additional metric learning to improve verification accuracy. The models extract high-dimensional face features to solve a multi-class classification problem. This results in a dependency of a model on specific training...
In this paper, we address the problem of gender classification based on facial images. Speeded-Up Robust Features (SURF) descriptors are used as features to build dictionaries, and a multi-task Sparse Representation Classification (SRC) is used as the classifier to determine the gender of an individual face. Our approach uses smaller, more compact dictionaries by removing the redundant atoms...
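The core SRC decision rule classifies a probe by which class dictionary reconstructs it with the smallest residual. The sketch below uses a per-class least-squares residual as a simplified stand-in for SRC's l1-regularized solver over the concatenated dictionary; the function names and toy data are mine, not the paper's:

```python
import numpy as np

def src_predict(dictionaries, sample):
    """Classify by smallest reconstruction residual over per-class dictionaries.

    dictionaries: {label: (d, n_atoms) array of column atoms}.
    Least-squares residuals are a simplification of SRC, which normally
    solves an l1-regularized coding problem over all atoms at once.
    """
    residuals = {}
    for label, atoms in dictionaries.items():
        coef, *_ = np.linalg.lstsq(atoms, sample, rcond=None)
        residuals[label] = np.linalg.norm(sample - atoms @ coef)
    return min(residuals, key=residuals.get)

rng = np.random.default_rng(1)
# Two hypothetical gender classes, each spanned by a few random atom vectors.
dicts = {"female": rng.normal(size=(64, 5)), "male": rng.normal(size=(64, 5))}
probe = dicts["female"] @ rng.normal(size=5)  # lies exactly in the "female" span
predicted = src_predict(dicts, probe)
```

Removing redundant atoms, as the abstract describes, shrinks each `atoms` matrix and therefore the cost of every residual computation.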
Recent work in the recognition of naturalistic expressions, also known as spontaneous facial expression recognition, has attracted researchers' attention due to its importance in different behavioural and clinical applications. The main design challenges in the area of emotion computing for automatic recognition of spontaneous facial expressions are face pose, capture distance, illumination...
Video capturing using Unmanned Aerial Vehicles provides cinematographers with impressive shots but requires very adept handling of both the drone and the camera. Deep Learning techniques can facilitate video shooting by allowing the drone to analyze its input and make intelligent decisions regarding its flight path. Fast and accurate on-board face detection...
In video surveillance, face recognition (FR) systems seek to detect individuals of interest appearing over a distributed network of cameras. Still-to-video FR systems match faces captured in videos under challenging conditions against facial models, often designed using one reference still per individual. Although CNNs can achieve among the highest levels of accuracy in many real-world FR applications,...
Video-based face recognition (FR) is a challenging task in real-world applications. In still-to-video FR, probe facial regions of interest (ROIs) are typically captured with lower-quality video cameras under unconstrained conditions, where facial appearances vary according to pose, illumination, scale, expression, etc. These video ROIs are typically compared against facial models designed with high-quality...
This paper presents a method to analyse the various identities of a user and thus determine whether any synthetic identity theft has been committed. Three types of data are used: an input dataset (X), a normal dataset (Y), and a target dataset (Z). The identities analysed may be text or string data such as Candidate's Name, Date of Birth, Time of Birth, Place of Birth, Home Address, Father's...
This study presents an age and gender estimation system that considers ethnic differences in face images using a Convolutional Neural Network (CNN) and a Support Vector Machine (SVM). Most age and gender estimation systems using face images are trained on ethnicity-biased databases. Therefore, these systems show limited performance on face images of ethnic groups occupying a small proportion of the training...
This study analyzes the effectiveness of the global (the whole face) and local (regions of eyes, nose, and mouth) features for face recognition. Features describing human faces are encoded in local ternary patterns. The two-class support vector machine is used as the supervised learning algorithm for training recognition models. In the recognition process, recognition modes based on the global features...
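A local ternary pattern compares each neighbour of a pixel to the centre with a tolerance t (above, below, or within the band) and is conventionally split into two binary codes. A plain-Python sketch of that encoding for one 3x3 patch (the neighbour ordering and threshold value are my assumptions; the paper's exact parameters are not given in the truncated abstract):

```python
import numpy as np

def ltp_codes(patch, t=5):
    """Local ternary pattern of a 3x3 patch centre, split into the usual
    upper/lower binary codes: a neighbour sets an upper bit if it exceeds
    centre + t, a lower bit if it falls below centre - t."""
    c = patch[1, 1]
    # 8 neighbours in clockwise order starting at the top-left corner.
    neigh = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
             patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    upper = sum((1 << i) for i, v in enumerate(neigh) if v >= c + t)
    lower = sum((1 << i) for i, v in enumerate(neigh) if v <= c - t)
    return upper, lower

patch = np.array([[120, 130, 110],
                  [100, 115, 140],
                  [ 90, 115, 125]])
codes = ltp_codes(patch)  # -> (27, 196)
```

Histograms of these codes over face regions (eyes, nose, mouth) or the whole face then serve as the local and global feature vectors fed to the SVM.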
In order to better learn the distributions of 2D and 3D faces and the mapping between them with limited training samples, a new 3D face reconstruction method based on progressive cascade regression is proposed. Firstly, it learns the mapping between 2D and 3D facial landmarks to estimate the initial 3D facial landmarks with a coupled space learning method. Secondly, a deformed space is constructed...
Surveillance cameras today often capture NIR (near infrared) images in low-light environments. However, most face datasets accessible for training and verification are only collected in the VIS (visible light) spectrum. It remains a challenging problem to match NIR to VIS face images due to the different light spectrum. Recently, breakthroughs have been made for VIS face recognition by applying deep...
Convolutional neural network (CNN) based face detectors are inefficient in handling faces of diverse scales. They rely on either fitting a large single model to faces across a large scale range or multi-scale testing. Both are computationally expensive. We propose Scale-aware Face Detection (SAFD) to handle scale explicitly using CNN, and achieve better performance with less computation cost. Prior...
This paper targets the problem of set-to-set recognition, which learns the metric between two image sets. Images in each set belong to the same identity. Since images in a set can be complementary, they can yield higher accuracy in practical applications. However, the quality of each sample cannot be guaranteed, and samples with poor quality will hurt the metric. In this paper, the quality...
If I provide you a face image of mine (without telling you the actual age when I took the picture) and a large amount of face images that I crawled (containing labeled faces of different ages but not necessarily paired), can you show me what I would look like when I am 80 or what I was like when I was 5? The answer is probably a No. Most existing face aging works attempt to learn the transformation...
Face recognition has the perception of a solved problem; however, when tested at the million scale, algorithms exhibit dramatic variation in accuracy [11]. Are the algorithms very different? Is access to good/big training data their secret weapon? Where should face recognition improve? To address those questions, we created a benchmark, MF2, that requires all algorithms to be...
Most of the conventional face hallucination methods assume the input image is sufficiently large and aligned, and all require the input image to be noise-free. Their performance degrades drastically if the input image is tiny, unaligned, and contaminated by noise. In this paper, we introduce a novel transformative discriminative autoencoder to 8× super-resolve unaligned, noisy, and tiny (16×16) low-resolution...
In this work we pursue a data-driven approach to the problem of estimating surface normals from a single intensity image, focusing in particular on human faces. We introduce new methods to exploit the currently available facial databases for dataset construction and tailor a deep convolutional neural network to the task of estimating facial surface normals in-the-wild. We train a fully convolutional...
The goal of this work is to recognise phrases and sentences being spoken by a talking face, with or without the audio. Unlike previous works that have focussed on recognising a limited number of words or phrases, we tackle lip reading as an open-world problem – unconstrained natural language sentences, and videos in the wild. Our key contributions are: (1) a Watch, Listen, Attend and Spell...