This paper aims at task-oriented action prediction, i.e., predicting a sequence of actions towards accomplishing a specific task in a given scene, which is a new problem in computer vision research. The main challenges lie in how to model task-specific knowledge and integrate it into the learning procedure. In this work, we propose to train a recurrent long short-term memory (LSTM) network for handling...
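To make the recurrent unit this abstract builds on concrete, here is a minimal single-step LSTM cell in NumPy. The weight shapes, random initialisation, and the five-step unroll are illustrative assumptions, not the paper's actual model.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """One LSTM step: gates computed from input x and previous state h."""
    z = W @ x + U @ h + b                 # stacked gate pre-activations
    d = h.size
    i = sigmoid(z[0:d])                   # input gate
    f = sigmoid(z[d:2*d])                 # forget gate
    o = sigmoid(z[2*d:3*d])               # output gate
    g = np.tanh(z[3*d:4*d])               # candidate cell state
    c_new = f * c + i * g                 # blend old and new cell memory
    h_new = o * np.tanh(c_new)            # expose gated hidden state
    return h_new, c_new

rng = np.random.default_rng(0)
d_in, d_h = 8, 4                          # toy feature and state sizes
W = rng.normal(size=(4 * d_h, d_in)) * 0.1
U = rng.normal(size=(4 * d_h, d_h)) * 0.1
b = np.zeros(4 * d_h)

# Unroll over a short "action sequence" of 5 random feature vectors.
h, c = np.zeros(d_h), np.zeros(d_h)
for t in range(5):
    h, c = lstm_step(rng.normal(size=d_in), h, c, W, U, b)
```

In the task-prediction setting, the hidden state `h` would be fed to a classifier over the action vocabulary at each step.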
We propose to use action, scene and object concepts as semantic attributes for the classification of video events in in-the-wild content, such as YouTube videos. We model events using a variety of complementary semantic attribute features developed in a semantic concept space. Our contribution is to systematically demonstrate the advantages of this concept-based event representation (CBER) in applications...
In this study, video super-resolution using an artificial neural network (ANN) is proposed to enlarge low-resolution (LR) frames. The proposed super-resolution method consists of three main modules, i.e., motion-trace volume collection, ANN training, and ANN prediction. In the proposed method, the LR frames are super-resolved to HR frames through the ANN. Traditional motion estimation is used to catch...
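A toy, single-frame version of the patch-based ANN mapping described above: an MLP learns to map low-resolution patches to their high-resolution counterparts. The motion-trace volume collection is omitted, and the patch sizes, network width, and stand-in image are assumptions for illustration only.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
hr = rng.random((64, 64))                    # stand-in HR frame
lr = hr[::2, ::2]                            # 2x downsampled LR frame

# Build (LR patch -> HR patch) training pairs on a coarse grid.
X, Y = [], []
for i in range(0, 28, 2):
    for j in range(0, 28, 2):
        X.append(lr[i:i + 4, j:j + 4].ravel())               # 4x4 LR patch
        Y.append(hr[2 * i:2 * i + 8, 2 * j:2 * j + 8].ravel())  # 8x8 HR patch

# The ANN learns the LR->HR patch mapping; at prediction time, overlapping
# predicted HR patches would be stitched back into a full frame.
net = MLPRegressor(hidden_layer_sizes=(64,), max_iter=2000,
                   random_state=0).fit(np.array(X), np.array(Y))
pred = net.predict(np.array(X[:1]))          # one predicted 8x8 HR patch
```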
Low-level appearance as well as spatio-temporal features, appropriately quantized and aggregated into Bag-of-Words (BoW) descriptors, have been shown to be effective in many detection and recognition tasks. However, their efficacy for complex event recognition in unconstrained videos has not been systematically evaluated. In this paper, we use the NIST TRECVID Multimedia Event Detection (MED11 [1])...
An improved motion-search method based on pattern classification is proposed in this paper. A new feature representing the maximum motion around the current block is introduced, allowing a more precise description of the motion characteristics of the block. AdaBoost classifiers can correctly classify harder-to-classify samples by increasing the weights of misclassified samples, and are therefore adopted to...
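A sketch of the idea in this abstract: classify blocks by their surrounding motion with AdaBoost, then pick a search pattern accordingly. The synthetic "maximum neighbouring motion" feature values and the two-class setup (small vs. large motion) are illustrative assumptions, not the paper's data.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

rng = np.random.default_rng(0)
# Feature: maximum motion-vector magnitude among neighbouring blocks.
small = rng.uniform(0.0, 1.0, (200, 1))   # low surrounding motion
large = rng.uniform(2.0, 5.0, (200, 1))   # high surrounding motion
X = np.vstack([small, large])
y = np.r_[np.zeros(200), np.ones(200)]    # 0: small-motion, 1: large-motion

# AdaBoost reweights misclassified samples each round, focusing later
# weak learners on the harder blocks near the decision boundary.
clf = AdaBoostClassifier(n_estimators=50, random_state=0).fit(X, y)

# A block predicted as large-motion gets a wider search pattern.
pattern = "large-diamond" if clf.predict([[3.5]])[0] == 1 else "small-diamond"
```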
In this paper, we propose a max modular support vector machine (M2-SVM) and its two variations for pattern classification. The basic idea behind these methods is to decompose training samples of one class into several parts and learn each part by one modular classifier independently. To implement these methods, a "part-against-others" training strategy and a max modular combination principle...
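A hedged sketch of the max-modular idea: the positive class is split into parts, one SVM is trained per part against all other samples, and decision values are combined by taking the maximum. The class name, the choice of k-means for splitting, and the RBF kernel are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

class MaxModularSVM:
    def __init__(self, n_parts=2):
        self.n_parts = n_parts
        self.parts_ = []

    def fit(self, X, y):
        # Decompose the positive class into n_parts (here via k-means).
        pos, neg = X[y == 1], X[y == 0]
        labels = KMeans(n_clusters=self.n_parts, n_init=10,
                        random_state=0).fit_predict(pos)
        for k in range(self.n_parts):
            part = pos[labels == k]
            Xk = np.vstack([part, neg])          # "part-against-others" set
            yk = np.r_[np.ones(len(part)), np.zeros(len(neg))]
            self.parts_.append(SVC(kernel="rbf").fit(Xk, yk))
        return self

    def decision_function(self, X):
        # Max-modular combination: take the maximum decision value
        # over all modular classifiers.
        return np.max([m.decision_function(X) for m in self.parts_], axis=0)

    def predict(self, X):
        return (self.decision_function(X) > 0).astype(int)
```

Each modular SVM only has to separate one compact sub-cluster of the positive class from the rest, which is what makes the decomposition attractive for multi-modal classes.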
Traditional anti-spam techniques such as black and white lists can no longer meet the needs of modern spam filtering. Machine learning techniques have become very popular in spam-filter research, and the support vector machine is one of the most effective classification methods. However, these techniques are usually applied to spam identification based only on the textual content of the mail body, seldom considering...
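A minimal sketch of the SVM-on-mail-body baseline this abstract refers to. The tiny corpus and the TF-IDF plus linear-SVM pipeline are illustrative assumptions, not the paper's setup.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

train_mails = [
    "win a free prize now, click here",
    "cheap pills, limited offer, buy now",
    "meeting moved to 3pm, see agenda attached",
    "please review the quarterly report draft",
]
labels = ["spam", "spam", "ham", "ham"]

# TF-IDF turns each mail body into a sparse term-weight vector;
# the linear SVM then learns a separating hyperplane over those vectors.
clf = make_pipeline(TfidfVectorizer(), LinearSVC())
clf.fit(train_mails, labels)

result = clf.predict(["free prize, buy cheap now"])  # "spam" on this toy corpus
```

Content-only filters like this ignore header, routing, and sender features, which is exactly the gap the abstract points at.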
In this paper, we present a new feature to model a class of events that consist of complex interactions among multiple entities captured by tracks and inter-object relationships over space and time. Existing approaches represent these events using features that measure only pairwise relationships between entities at a time, such as relative distance and relative speed. Due to the limitations of the...