For remote sensing image understanding, target detection is one of the most important tasks. In this paper, we propose an object detection method that combines region proposal via an active contour model with detection based on one-class classification. The large-scale remote sensing image is first split into several connected components, and the proposed algorithm then detects the object from...
This paper presents a general framework for live detection of broilers in poultry houses. The challenges for image recognition of broilers are posed by crowded scenes, poor image quality, and the difficulty of acquiring a benchmark of labeled samples. The proposed framework consists of image thresholding, morphological transformations, and feature engineering, in addition to supervised and unsupervised...
In most convolutional neural networks (CNNs), the output is a single classification result by combining all the neuron activations in the last layer. As we know, local connectivity is an important characteristic of CNNs. Each neuron in the network corresponds to a local region in the original image. Hence, it is possible to simultaneously obtain local visibility of a target object by analyzing neuron...
Real-time scene understanding has become crucial in many applications such as autonomous driving. In this paper, we propose a deep architecture, called BlitzNet, that jointly performs object detection and semantic segmentation in one forward pass, allowing real-time computations. Besides the computational gain of having a single network to perform several tasks, we show that object detection and semantic...
We present a conceptually simple, flexible, and general framework for object instance segmentation. Our approach efficiently detects objects in an image while simultaneously generating a high-quality segmentation mask for each instance. The method, called Mask R-CNN, extends Faster R-CNN by adding a branch for predicting an object mask in parallel with the existing branch for bounding box recognition...
Pedestrian detection is a critical problem in computer vision with significant impact on safety in urban autonomous driving. In this work, we explore how semantic segmentation can be used to boost pedestrian detection accuracy while having little to no impact on network efficiency. We propose a segmentation infusion network to enable joint supervision on semantic segmentation and pedestrian detection...
A first-person camera placed on a person's head captures which objects are important to the camera wearer. Most prior methods for this task learn to detect such important objects from manually labeled first-person data in a supervised fashion. However, important objects are strongly related to the camera wearer's internal state, such as the wearer's intentions and attention, and thus only the person...
It has been well demonstrated that adversarial examples, i.e., natural images with visually imperceptible perturbations added, cause deep networks to fail on image classification. In this paper, we extend adversarial examples to semantic segmentation and object detection which are much more difficult. Our observation is that both segmentation and detection are based on classifying multiple targets...
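As background for this abstract, the canonical way to craft such visually imperceptible perturbations for a classifier is the Fast Gradient Sign Method (FGSM). The toy linear "classifier" below is purely illustrative and is not the attack proposed in the paper; it only shows how a tiny, bounded perturbation can flip a prediction.

```python
import numpy as np

# Toy linear "classifier": score = w @ x; positive score -> class 1.
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.3, 0.1, 0.2])      # clean input; w @ x = 0.2 > 0 (class 1)

# For a linear score the gradient w.r.t. the input is just w.
grad = w
eps = 0.2                          # per-coordinate perturbation budget

# FGSM step in the direction that *decreases* the score.
x_adv = x - eps * np.sign(grad)
```

Each coordinate of `x_adv` differs from `x` by at most `eps`, yet the sign of the score changes, so the prediction flips; detection and segmentation make this harder because many such scores (one per target) must be flipped at once.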
Object detection and classification are two very important tasks for quality control in industrial processes. The first step of a quality-control algorithm is the detection of the moving object. Afterwards, the detected object is classified according to its size- and color-related properties. In this study, an interactive image segmentation method is proposed to detect a moving object. The segmentation method...
An approach for object detection in depth images based on local and global convexity is presented. The approach consists of three steps: image segmentation into planar patches, greedy planar patch aggregation based on local convexity and segment grouping based on global convexity. The proposed approach improves upon existing similar methods, which use convexity as a cue for object detection, by detecting...
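As an illustration of local convexity as a grouping cue, the check below tests whether two planar patches (each given by a centroid and a unit normal) bend away from each other like the faces of a box, rather than toward each other like a room corner. This simple criterion and the greedy loop are an assumed stand-in for the paper's exact formulation.

```python
import numpy as np

def locally_convex(c1, n1, c2, n2, tol=1e-6):
    """Return True if two planar patches form a locally convex pair:
    each patch's centroid lies on or below the other patch's plane,
    so the joined surface bends outward rather than inward."""
    d = c2 - c1
    return bool(n1 @ d <= tol and n2 @ -d <= tol)

def greedy_aggregate(patches):
    """Greedily merge patches into groups: a patch joins the first group
    containing at least one patch it is locally convex with."""
    groups = []
    for c, n in patches:
        for g in groups:
            if any(locally_convex(c, n, c2, n2) for c2, n2 in g):
                g.append((c, n))
                break
        else:
            groups.append([(c, n)])
    return groups
```

Two faces of a convex object pass the test and get merged; a concave junction fails it, so the patches stay in separate groups, which is exactly the behavior a segmentation step needs.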
Studies of object detection have recently attracted increased interest. One of the applications of object detection is robotics. This paper presents real-time object detection integrated into humanoid robot soccer. In order to enhance the vision system's ability to detect the ball and the goal, the You Only Look Once (YOLO) method is used as the deep-learning object detector. The real-time experiments have been carried...
This paper proposes an image processing technique for tracking, image segmentation, and extraction that can capture the movement of objects when applied to a CCTV security system. The experiments are performed on “.avi” files received from surveillance cameras installed in 2 well-lit areas, with 10 intrusions in different areas. The intrusion...
Common visual recognition tasks such as classification, object detection, and semantic segmentation are rapidly reaching maturity, and given the recent rate of progress, it is not unreasonable to conjecture that techniques for many of these problems will approach human levels of performance in the next few years. In this paper we look to the future: what is the next frontier in visual recognition?...
Deep learning has been applied to saliency detection in recent years. The superior performance has proved that deep networks can model the semantic properties of salient objects. Yet it is difficult for a deep network to discriminate pixels belonging to similar receptive fields around the object boundaries, thus deep networks may output maps with blurred saliency and inaccurate boundaries. To tackle...
Most state-of-the-art text detection methods are specific to horizontal Latin text and are not fast enough for real-time applications. We introduce Segment Linking (SegLink), an oriented text detection method. The main idea is to decompose text into two locally detectable elements, namely segments and links. A segment is an oriented box covering a part of a word or text line. A link connects two adjacent...
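The linking step described in this abstract can be pictured as a connectivity problem: segments joined by links are merged into groups, one per word or text line. The union-find sketch below is an illustrative reconstruction; the real SegLink additionally fits one oriented box per resulting group.

```python
def combine_segments(num_segments, links):
    """Group segment indices connected by links, using union-find
    with path halving. Returns the groups sorted by their members."""
    parent = list(range(num_segments))

    def find(x):
        # Walk to the root, halving the path as we go.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    for a, b in links:
        union(a, b)

    groups = {}
    for i in range(num_segments):
        groups.setdefault(find(i), []).append(i)
    return sorted(groups.values())
```

For example, with five segments and links (0,1), (1,2), (3,4), segments 0–2 form one text line and 3–4 another; an unlinked segment would form a singleton group.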
This paper proposes a deep learning architecture based on Residual Network that dynamically adjusts the number of executed layers for the regions of the image. This architecture is end-to-end trainable, deterministic and problem-agnostic. It is therefore applicable without any modifications to a wide range of computer vision problems such as image classification, object detection and image segmentation...
We introduce a new large-scale data set of video URLs with densely-sampled object bounding box annotations called YouTube-BoundingBoxes (YT-BB). The data set consists of approximately 380,000 video segments about 19s long, automatically selected to feature objects in natural settings without editing or post-processing, with a recording quality often akin to that of a hand-held cell phone camera. The...
Image saliency detection has recently witnessed rapid progress due to deep convolutional neural networks. However, none of the existing methods is able to identify object instances in the detected salient regions. In this paper, we present a salient instance segmentation method that produces a saliency mask with distinct object instance labels for an input image. Our method consists of three steps,...
This paper introduces WILDCAT, a deep learning method which jointly aims at aligning image regions for gaining spatial invariance and learning strongly localized features. Our model is trained using only global image labels and is devoted to three main visual recognition tasks: image classification, weakly supervised object localization and semantic segmentation. WILDCAT extends state-of-the-art Convolutional...
This paper presents a novel yet intuitive approach to unsupervised feature learning. Inspired by the human visual system, we explore whether low-level motion-based grouping cues can be used to learn an effective visual representation. Specifically, we use unsupervised motion-based segmentation on videos to obtain segments, which we use as pseudo ground truth to train a convolutional network to segment...