To address the poor detection accuracy and weak real-time performance of existing methods, a fast and robust face detection and tracking algorithm is proposed. It first detects the face region with an improved AdaBoost method, and then tracks it with the Mean Shift algorithm combined with the motion history image (MHI). The experimental results demonstrate that the proposed algorithm can robustly detect...
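The tracking step described above can be illustrated with a minimal NumPy sketch of a single mean-shift search: the window is repeatedly shifted toward the centroid of a per-pixel likelihood map (e.g. a colour back-projection of the face model). This is a generic sketch with names of our choosing, not the paper's implementation, and it omits the MHI motion cue:

```python
import numpy as np

def mean_shift(weights, window, max_iter=20, eps=1.0):
    """Shift a tracking window toward the centroid of pixel weights.

    weights -- 2-D array of per-pixel likelihoods for the target
    window  -- (row, col, height, width) of the current window
    """
    r, c, h, w = window
    for _ in range(max_iter):
        patch = weights[r:r + h, c:c + w]
        total = patch.sum()
        if total == 0:           # no target evidence under the window
            break
        ys, xs = np.mgrid[0:h, 0:w]
        # offset of the weighted centroid from the window centre
        dy = (ys * patch).sum() / total - (h - 1) / 2
        dx = (xs * patch).sum() / total - (w - 1) / 2
        if dy ** 2 + dx ** 2 < eps ** 2:   # converged
            break
        r = int(round(min(max(r + dy, 0), weights.shape[0] - h)))
        c = int(round(min(max(c + dx, 0), weights.shape[1] - w)))
    return r, c, h, w
```

Starting from a window that partially overlaps a bright blob, successive iterations pull the window onto the blob's centre of mass.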
Image super-resolution (SR) has become an active research field in image processing in recent years. SR methods based on sparse representation code an image patch as a linear combination of a few atoms chosen from an over-complete dictionary. However, a single universal dictionary can be unstable when representing diverse image structures. We therefore adopt PCA sub-dictionaries and exploit the low-resolution...
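The sparse coding step this abstract refers to — approximating a patch with a few dictionary atoms — is commonly solved greedily. Below is a minimal sketch of Orthogonal Matching Pursuit in NumPy, under our own naming; it stands in for whichever sparse solver the cited work actually uses:

```python
import numpy as np

def omp(D, y, k):
    """Approximate y as a combination of at most k atoms (columns) of
    the dictionary D via Orthogonal Matching Pursuit."""
    residual = y.astype(float).copy()
    support = []
    coeffs = np.zeros(D.shape[1])
    for _ in range(k):
        # pick the atom most correlated with the current residual
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # re-fit coefficients on the chosen support by least squares
        x, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ x
    coeffs[support] = x
    return coeffs
```

With an orthonormal dictionary and a truly k-sparse signal, the greedy selection recovers the exact coefficients.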
With the pervasiveness of monitoring cameras installed in public places, schools, hospitals and homes, video analytics technologies for interpreting the generated video content are becoming increasingly relevant to people's lives. In this context, we develop a human-centric video surveillance system that identifies and tracks people in a given scene. In this paper, a parallel processing pipeline...
In this paper, a software system is presented that supports finding, tagging, identifying and tracking multiple people in videos captured under uncontrolled conditions. This work focuses on two aspects. One is to build a parallel video processing pipeline that efficiently and smoothly integrates image analysis modules such as face detection, recognition and tracking, so that multiple people can...
With the ever-growing amount of video on the internet, searching for desired videos in an effective and efficient way remains a challenge. In addition, repurposing videos of interest into new, attractive photo/video products has been an open issue. In this paper, we propose a framework for video retrieval and repurposing that leverages the face information in videos. Since a text query cannot express...
This paper presents methods to explore and fuse various clothing features for person clustering in family photos. Our approach automatically detects the clothing regions, extracts the global and localized features based on color and texture, and then computes multiple partitions of people based on different features, which are combined into the final consolidated person clustering with a cluster ensemble...
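The final consolidation step mentioned above — combining partitions computed from different features — is often done through a co-association (evidence accumulation) matrix. A minimal sketch under our own naming, not necessarily the ensemble method the paper uses:

```python
import numpy as np

def coassociation(partitions):
    """Build a co-association matrix from several base partitions.

    partitions -- array of shape (n_partitions, n_samples), where each
    row assigns a cluster label to every sample.  Entry (i, j) of the
    result is the fraction of partitions in which samples i and j
    share a cluster.
    """
    partitions = np.asarray(partitions)
    m, n = partitions.shape
    co = np.zeros((n, n))
    for labels in partitions:
        co += (labels[:, None] == labels[None, :]).astype(float)
    return co / m
```

The consolidated clustering can then be obtained by thresholding this matrix (or feeding it to an agglomerative method): pairs that co-occur in most base partitions end up in the same final cluster.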
Automatic person clustering, which groups photos based on the individuals appearing in a photo collection, is a key component in facilitating photo management and sharing. Traditionally, person clusters are built by detecting faces and matching facial features. But these facial clusters can perform poorly when there are large pose variations and occlusions, which are not uncommon in consumer...
In this paper, we propose an advanced face analysis platform for large-scale consumer photos, named PFAP. Leveraging a client/server architecture, the platform provides users with high-performance face clustering and a near-real-time image retrieval service. An advanced face analysis schema, a two-level parallel computing architecture, and analysis-as-a-service are the three key innovations in PFAP. In face analysis...