The number of 3D models is growing every day, and segmentation of such models has recently attracted a lot of attention. In this paper we propose a two-phase approach to the segmentation of 3D models. We leverage a well-known fact from electrical physics for both initial segment specification and boundary detection. The first phase tries to locate the initial segments having higher charge density, while the...
In this paper, we propose an efficient method for reconstructing 3D models of a human face from a single 2D face image, robust under a variety of facial expressions, using the Deformable Generic Elastic Model (D-GEM). We extend the Generic Elastic Model (GEM) approach, combining it with statistical information about the human face and deforming generic depth models by computing the distance around...
In this paper we describe the design and implementation of a powerful, fast, and compact simple 3D modeler (SM3D). In addition to saving cost and time (due to its high processing speed), the application allows 3D objects to be created with minimal system resources. Ease of learning and use is another strength of this application. Modularity, achieved through classification and the use of Dynamic-Link Library files...
Accurately locating a desired object boundary using active contours and deformable models plays an important role in computer vision, particularly in medical imaging applications. Powerful segmentation methods have been introduced to address limitations associated with initialization and poor convergence to boundary concavities. This paper proposes a method to improve one of the strongest and most recent...
Nowadays, steganographic methods use more sophisticated image models to increase security; consequently, steganalysis algorithms should build more accurate image models to detect them, so the number of extracted features is increasing. Most modern steganalysis algorithms train a supervised classifier on the feature vectors. The most popular and accurate one is the SVM, but its high training...
In this paper, a robust image watermarking method based on geometric modeling is presented. Eight samples of wavelet approximation coefficients in each image block are used to construct two line segments in 2-D space. We change the angle formed between these line segments for data embedding. Geometrical tools are used to resolve the trade-off between the transparency and robustness of the watermark...
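The snippet above does not specify exactly how the angle carries the payload, but the idea of embedding a bit in the angle between two segments can be sketched with a quantization-index-style scheme: snap the inter-segment angle to a lattice whose index parity encodes the bit, then rotate the second segment to realize the new angle. The function names, the lattice step `delta`, and the QIM choice are all illustrative assumptions, not the paper's actual construction.

```python
import math

def embed_bit(p0, p1, q0, q1, bit, delta=math.pi / 16):
    """Hypothetical QIM-style embedding: quantize the angle between segments
    (p0,p1) and (q0,q1) so the parity of its lattice index encodes `bit`."""
    a1 = math.atan2(p1[1] - p0[1], p1[0] - p0[0])
    a2 = math.atan2(q1[1] - q0[1], q1[0] - q0[0])
    angle = a2 - a1
    # Nearest lattice point of the sublattice whose parity matches the bit.
    k = round((angle - bit * delta) / (2 * delta))
    new_angle = 2 * k * delta + bit * delta
    # Rotate the second segment about q0 to realize the quantized angle.
    length = math.hypot(q1[0] - q0[0], q1[1] - q0[1])
    na = a1 + new_angle
    return (q0[0] + length * math.cos(na), q0[1] + length * math.sin(na))

def extract_bit(p0, p1, q0, q1, delta=math.pi / 16):
    """Recover the bit as the parity of the quantized inter-segment angle."""
    a1 = math.atan2(p1[1] - p0[1], p1[0] - p0[0])
    a2 = math.atan2(q1[1] - q0[1], q1[0] - q0[0])
    return round((a2 - a1) / delta) % 2
```

In such a scheme, a larger `delta` increases robustness (the angle must be perturbed further to flip the bit) at the cost of a larger, more visible change to the coefficients — the transparency/robustness trade-off the abstract mentions.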
The explosive growth in the use of digital media has created a need for techniques to protect the copyrights of digital content. One approach to copyright protection is to embed an invisible signal, known as a digital watermark, in the image. One of the most important features of an effective watermarking scheme is transparency: a good watermarking method should be invisible, such that...
Applying lossless compression methods to hide text is considered a novel trend in research. Evaluations of methods proposed in the field of steganography reflect a variety of approaches to creating covert communication via text files. The breadth of steganographic issues and the huge variety of approaches make it difficult to precisely compare and evaluate...
Error concealment is a useful method for improving damaged video quality at the decoder side. In this paper, a dynamic method with low computational complexity is presented to improve the visual quality of videos when up to 50% of the frames are damaged. In the proposed method, temporal replacement and an improved outer boundary matching algorithm are used for dynamic error concealment in inter-frames...
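The core of outer boundary matching can be sketched as follows: for a damaged block, search the previous frame for the candidate block whose one-pixel outer ring best matches the intact ring around the damaged region, and copy that candidate in. This is a generic sketch of the classic algorithm, not the paper's improved variant; the function name, block size, and search range are assumptions.

```python
import numpy as np

def boundary_matching_conceal(prev, cur, bx, by, bs=8, search=4):
    """Conceal the damaged bs x bs block at (by, bx) in `cur` by copying the
    block from `prev` whose outer boundary ring best matches the intact ring
    around the damaged region (sum of absolute differences)."""
    best, best_cost = None, float('inf')
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = by + dy, bx + dx
            if y - 1 < 0 or x - 1 < 0 or y + bs >= prev.shape[0] or x + bs >= prev.shape[1]:
                continue  # candidate ring falls outside the reference frame
            cost = (np.abs(cur[by - 1, bx:bx + bs].astype(int) - prev[y - 1, x:x + bs].astype(int)).sum()
                    + np.abs(cur[by + bs, bx:bx + bs].astype(int) - prev[y + bs, x:x + bs].astype(int)).sum()
                    + np.abs(cur[by:by + bs, bx - 1].astype(int) - prev[y:y + bs, x - 1].astype(int)).sum()
                    + np.abs(cur[by:by + bs, bx + bs].astype(int) - prev[y:y + bs, x + bs].astype(int)).sum())
            if cost < best_cost:
                best_cost, best = cost, (dy, dx)
    dy, dx = best
    return prev[by + dy:by + dy + bs, bx + dx:bx + dx + bs]
```

Because only the boundary ring is compared, the cost of each candidate is O(block perimeter) rather than O(block area), which is what keeps the computational complexity low.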
Motion estimation is a vital task in video compression, and many algorithms have been proposed to reduce its computational complexity. In the conventional Full Search (FS) algorithm, all blocks in the search window are examined for a match, resulting in a very good PSNR compared to other methods. However, it suffers from heavy computational overhead. The Three Step Search (TSS) algorithm, which limits...
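The Three Step Search mentioned above can be sketched in a few lines: starting from the co-located block, evaluate the nine candidates on a grid whose spacing halves each round, keeping the best match by sum of absolute differences (SAD). The helper names and the 8x8 block size are illustrative, and this is the textbook TSS rather than any particular paper's variant.

```python
import numpy as np

def sad(block, ref):
    """Sum of absolute differences between two equal-size blocks."""
    return np.abs(block.astype(int) - ref.astype(int)).sum()

def three_step_search(cur, ref, bx, by, bs=8, step=4):
    """Find the motion vector (dy, dx) for the bs x bs block at (by, bx) in
    `cur` by the Three Step Search: 9 candidates per round, halving the step."""
    best = (0, 0)
    while step >= 1:
        cy, cx = best
        candidates = [(cy + dy, cx + dx) for dy in (-step, 0, step)
                                          for dx in (-step, 0, step)]
        def cost(mv):
            dy, dx = mv
            y, x = by + dy, bx + dx
            if y < 0 or x < 0 or y + bs > ref.shape[0] or x + bs > ref.shape[1]:
                return float('inf')  # candidate outside the reference frame
            return sad(cur[by:by + bs, bx:bx + bs], ref[y:y + bs, x:x + bs])
        best = min(candidates, key=cost)
        step //= 2
    return best
```

With an initial step of 4, TSS evaluates at most 9 + 8 + 8 = 25 candidates per block, versus 81 for a Full Search over the same ±4-pixel window — which is exactly the complexity/PSNR trade-off the abstract contrasts.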
Compressive sensing (CS) is an efficient method for reconstructing sparse images from under-sampled data. In this method, the sensing and coding steps are integrated into a one-step, low-complexity measurement acquisition system. In this paper, we use a Non-linear Conjugate Gradient (NLCG) algorithm to significantly increase the quality of reconstructed frames of video sequences. Our proposed framework divides the sequence...
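The NLCG optimizer itself can be sketched independently of the CS objective: a Fletcher-Reeves update with a backtracking line search, here applied to a toy least-squares problem. The paper's actual reconstruction objective would also include a sparsity-promoting term, which the snippet does not specify; everything below is a generic sketch, not the paper's method.

```python
import numpy as np

def nlcg(f, grad, x0, iters=200, tol=1e-10):
    """Fletcher-Reeves non-linear conjugate gradient with backtracking line search."""
    x = np.asarray(x0, dtype=float).copy()
    g = grad(x)
    d = -g
    for _ in range(iters):
        if d @ g >= 0:
            d = -g  # safeguard: restart if d is not a descent direction
        t, fx = 1.0, f(x)
        # Armijo backtracking: shrink the step until sufficient decrease.
        while f(x + t * d) > fx + 1e-4 * t * (g @ d) and t > 1e-12:
            t *= 0.5
        x_new = x + t * d
        g_new = grad(x_new)
        if np.linalg.norm(g_new) < tol:
            return x_new
        beta = (g_new @ g_new) / (g @ g)  # Fletcher-Reeves coefficient
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x

# Toy objective: f(x) = 0.5 * ||A x - b||^2 (a CS objective would add a sparsity term).
A = np.array([[2.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
b = A @ np.array([1.0, -2.0])
f = lambda x: 0.5 * np.sum((A @ x - b) ** 2)
grad = lambda x: A.T @ (A @ x - b)
x = nlcg(f, grad, np.zeros(2))
```

Unlike linear CG, NLCG needs only function and gradient evaluations, which is why it suits the non-quadratic objectives that arise in CS reconstruction.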
In this paper we propose a new method for pedestrian detection in images and videos. Our method uses a sliding window to search through images. To extract features, each window is divided into overlapping cells and features are extracted from them. The feature we extract to describe each window is based on an analysis of the gradient distribution of each cell. After the gradient distribution...
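A per-cell gradient-distribution feature of the kind described above can be sketched as an orientation histogram weighted by gradient magnitude, concatenated over overlapping cells (stride smaller than the cell size). The cell size, stride, bin count, and function names are assumptions for illustration; the snippet does not give the paper's exact parameters.

```python
import numpy as np

def cell_gradient_histogram(cell, n_bins=9):
    """Magnitude-weighted histogram of unsigned gradient orientations in one cell."""
    gy, gx = np.gradient(cell.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)  # unsigned orientation in [0, pi)
    bins = np.minimum((ang / np.pi * n_bins).astype(int), n_bins - 1)
    hist = np.zeros(n_bins)
    np.add.at(hist, bins.ravel(), mag.ravel())
    s = hist.sum()
    return hist / s if s > 0 else hist  # L1-normalize for illumination invariance

def window_descriptor(window, cell=8, stride=4, n_bins=9):
    """Concatenate cell histograms over overlapping cells (stride < cell size)."""
    h, w = window.shape
    feats = [cell_gradient_histogram(window[y:y + cell, x:x + cell], n_bins)
             for y in range(0, h - cell + 1, stride)
             for x in range(0, w - cell + 1, stride)]
    return np.concatenate(feats)
```

The resulting fixed-length vector per window is what a classifier would then score as pedestrian vs. non-pedestrian at each sliding-window position.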
The saliency map is a central part of many visual attention systems, particularly during learning and control of bottom-up attention. In this research we developed a hardware tool to extract a saliency map from a video sequence. The saliency map is obtained by aggregating primary features of each frame, such as intensity, color, and line orientation, along with temporal differences. The system is designed to...
Textline segmentation is an important preprocessing step before word recognition. Handwritten texts include complex lines, such as connected/overlapped, multi-skewed, and curved textlines. In the proposed approach, to overcome these problems, reliable text regions are locally extracted for each block of a handwritten text. The text image is first filtered by a set of directional 2D filters, and the filtered...
In this paper we address the problem of recognizing Farsi handwritten words. We extract two types of features from vertical stripes of word images: the chain code of the word boundary and the distribution of foreground density across the word image. The extracted feature vectors are coded using self-organizing vector quantization, and the resulting codes are used to train the model of each word in the database...
In this paper, we present a method for removing ruling lines from handwritten documents without damaging the existing characters. It is argued that ruling lines have a predictable position on the page, but their thickness and the distance between them may differ from one document to another; these are estimated with a simple algorithm. Another important challenge in this regard is detecting the edge...