Reduced basis methods for approximating parameter-dependent partial differential equations are now well developed and are starting to be used in industrial applications. The classical implementation of the reduced basis method goes through two stages: in the first, offline and time-consuming stage, a reduced basis is constructed from standard approximation methods; then in a second stage, online and...
The Kullback-Leibler divergence is a widespread dissimilarity measure between probability density functions, based on the Shannon entropy. Unfortunately, there is no analytic formula available to compute this divergence between mixture models, imposing the use of costly approximation algorithms. In order to reduce the computational burden when a lot of divergence evaluations are needed, we introduce...
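One of the standard costly approximations mentioned above is Monte Carlo estimation: since KL(p || q) = E_p[log p(X) − log q(X)], one can sample from the first mixture and average the log-density ratio. The sketch below is a generic illustration for 1-D Gaussian mixtures (all function names are ours, not the paper's); it is not necessarily the estimator the authors target.

```python
import numpy as np

rng = np.random.default_rng(0)

def gmm_pdf(x, weights, means, stds):
    """Density of a 1-D Gaussian mixture evaluated at points x."""
    x = np.atleast_1d(np.asarray(x, dtype=float))[:, None]
    comp = np.exp(-0.5 * ((x - means) / stds) ** 2) / (stds * np.sqrt(2 * np.pi))
    return comp @ weights

def gmm_sample(n, weights, means, stds):
    """Draw n samples: pick a component, then sample from it."""
    idx = rng.choice(len(weights), size=n, p=weights)
    return rng.normal(means[idx], stds[idx])

def kl_monte_carlo(p, q, n=100_000):
    """Estimate KL(p || q) = E_p[log p(X) - log q(X)] by sampling from p."""
    xs = gmm_sample(n, *p)
    return np.mean(np.log(gmm_pdf(xs, *p)) - np.log(gmm_pdf(xs, *q)))

# A bimodal mixture p versus a single Gaussian q.
p = (np.array([0.5, 0.5]), np.array([-1.0, 1.0]), np.array([0.5, 0.5]))
q = (np.array([1.0]), np.array([0.0]), np.array([1.0]))
print(kl_monte_carlo(p, q))
```

The cost scales with the number of samples per evaluation, which is exactly why repeated divergence evaluations become a computational burden.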
Gaussian mixture models are a widespread tool for modeling varied and complex probability density functions. They can be estimated using Expectation-Maximization or Kernel Density Estimation. Expectation-Maximization leads to compact models but may be expensive to compute, whereas Kernel Density Estimation yields large models that are cheap to build. In this paper we present new methods to get...
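The trade-off described in the abstract can be made concrete with a minimal 1-D sketch (our own illustrative code, not the paper's method): KDE places one equal-weight Gaussian kernel per sample, giving a large model in one pass, while a small hand-rolled EM loop fits a compact two-component mixture at the price of iteration.

```python
import numpy as np

rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(-2, 0.5, 200), rng.normal(2, 0.5, 200)])

# Kernel Density Estimation: one component per sample -> large but cheap model.
h = 1.06 * data.std() * len(data) ** (-1 / 5)  # Silverman's rule-of-thumb bandwidth
kde_weights = np.full(len(data), 1 / len(data))
kde_means, kde_stds = data, np.full(len(data), h)

# A tiny EM loop for a 2-component Gaussian mixture -> compact but costlier model.
w = np.array([0.5, 0.5]); mu = np.array([-1.0, 1.0]); sd = np.array([1.0, 1.0])
for _ in range(50):
    # E-step: responsibilities of each component for each sample.
    dens = w * np.exp(-0.5 * ((data[:, None] - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
    r = dens / dens.sum(axis=1, keepdims=True)
    # M-step: re-estimate weights, means, and standard deviations.
    n_k = r.sum(axis=0)
    w = n_k / len(data)
    mu = (r * data[:, None]).sum(axis=0) / n_k
    sd = np.sqrt((r * (data[:, None] - mu) ** 2).sum(axis=0) / n_k)

print(len(kde_means), "KDE components vs", len(mu), "EM components")
```

Here the KDE model carries 400 components against the EM model's 2, which is the size gap the simplification methods in such papers aim to close.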
We introduce an extension of the k-MLE algorithm, a fast algorithm for learning statistical mixture models based on maximum likelihood estimators, which makes it possible to build mixtures of generalized Gaussian distributions without a fixed shape parameter. This allows us to finely model probability density functions made of highly non-Gaussian components. We theoretically prove the local convergence...
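The components in question follow the standard generalized Gaussian density p(x) = β / (2α Γ(1/β)) · exp(−(|x − μ|/α)^β), where the shape parameter β controls how non-Gaussian the component is (β = 2 recovers the Gaussian, β = 1 the Laplacian). The helper below is our own illustrative sketch of that density, not code from the paper.

```python
import numpy as np
from math import gamma

def gen_gaussian_pdf(x, mu, alpha, beta):
    """Generalized Gaussian density with location mu, scale alpha > 0,
    and shape beta > 0; beta=2 is Gaussian, beta=1 is Laplacian."""
    x = np.asarray(x, dtype=float)
    norm = beta / (2 * alpha * gamma(1 / beta))
    return norm * np.exp(-(np.abs(x - mu) / alpha) ** beta)

# Same location and scale, increasingly heavy-shouldered shapes.
xs = np.linspace(-4, 4, 9)
for beta in (1.0, 2.0, 8.0):
    print(beta, gen_gaussian_pdf(xs, 0.0, 1.0, beta))
```

Letting β vary per component, rather than fixing it in advance, is precisely what the extension of k-MLE described above enables.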
Modeling data is often a critical step in many challenging applications in computer vision, bioinformatics, or machine learning. Gaussian Mixture Models are a popular choice in many applications. Although these mixtures are powerful enough to approximate complex distributions, they may not be the best choice for some applications. Usual mixture-modeling software libraries are often limited to a particular...
The scope of the well-known k-means algorithm has been broadly extended with some recent results: first, the k-means++ initialization method gives some approximation guarantees; second, the Bregman k-means algorithm generalizes the classical algorithm to the large family of Bregman divergences. The Bregman seeding framework combines approximation guarantees with Bregman divergences. We present here...
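The combination described above can be sketched generically: k-means++ seeding draws each new center with probability proportional to the divergence from the nearest center already chosen, and the Bregman generalization simply swaps the squared Euclidean distance (the Bregman divergence of F(x) = ||x||²) for another member of the family. This is an illustrative sketch with our own names, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(2)

def kmeanspp_seed(points, k, divergence):
    """k-means++-style seeding: each new center is drawn with probability
    proportional to the divergence to the nearest center chosen so far."""
    centers = [points[rng.integers(len(points))]]
    for _ in range(k - 1):
        d = np.min([divergence(points, c) for c in centers], axis=0)
        centers.append(points[rng.choice(len(points), p=d / d.sum())])
    return np.array(centers)

# Squared Euclidean distance, the Bregman divergence generated by F(x) = ||x||^2.
def sq_euclid(xs, c):
    return ((xs - c) ** 2).sum(axis=1)

# Three well-separated 2-D blobs; seed three centers among them.
points = np.concatenate([rng.normal(m, 0.2, (50, 2)) for m in (-3.0, 0.0, 3.0)])
centers = kmeanspp_seed(points, 3, sq_euclid)
print(centers)
```

Replacing `sq_euclid` with another Bregman divergence (e.g. the KL-type divergence on the probability simplex) leaves the seeding loop unchanged, which is the appeal of the framework.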
The Bhattacharyya distance (BD) is a widely used distance in statistics to compare probability density functions (PDFs). It has strong statistical properties (in terms of Bayes error) and relates to Fisher information. It also has practical advantages, since it essentially measures the overlap of the supports of the PDFs. Unfortunately, even with common parametric models of PDFs, few...
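One of the few cases where the BD does admit a closed form is a pair of univariate Gaussians: D_B = (μ₁ − μ₂)² / (4(σ₁² + σ₂²)) + ½ ln((σ₁² + σ₂²) / (2σ₁σ₂)). A small sketch of that known formula (illustrative helper name):

```python
import numpy as np

def bhattacharyya_gaussians(mu1, s1, mu2, s2):
    """Closed-form Bhattacharyya distance between two univariate
    Gaussians N(mu1, s1^2) and N(mu2, s2^2)."""
    v1, v2 = s1 ** 2, s2 ** 2
    mean_term = 0.25 * (mu1 - mu2) ** 2 / (v1 + v2)
    var_term = 0.5 * np.log((v1 + v2) / (2 * s1 * s2))
    return mean_term + var_term

print(bhattacharyya_gaussians(0.0, 1.0, 0.0, 1.0))  # 0.0 for identical distributions
print(bhattacharyya_gaussians(0.0, 1.0, 2.0, 1.0))
```

The distance vanishes for identical distributions and is symmetric in its arguments; for mixtures of such Gaussians, however, no analogous closed form is available, which motivates the approximations the abstract alludes to.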