# Search results for: John Shawe-Taylor

Implementation Science > 2017 > 12 > 1 > 1-12

Information Fusion > 2017 > 35 > C > 117-131

Machine Learning > 2017 > 106 > 6 > 863-886

Journal of Neuroscience Methods > 2016 > 271 > C > 182-194

Lecture Notes in Computer Science > Deterministic and Statistical Methods in Machine Learning > 242-255

Lecture Notes in Computer Science > Advanced Data Mining and Applications > Multimedia Mining > 681-692

…*η*_{eff}, defined as the ratio of the learning rate to the length of the weight vector, remains constant. We prove that for *η*_{eff} sufficiently small the new algorithms converge in a finite number of steps and show that there exists a limit of the parameters involved in which convergence...
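The snippet above describes perceptron-like updates in which the effective learning rate *η*_{eff} = *η* / ‖w‖ is held constant. A minimal sketch of that idea, assuming a standard mistake-driven perceptron loop (the function name, starting weights, and stopping rule are illustrative assumptions, not the paper's algorithm):

```python
import numpy as np

def perceptron_eta_eff(X, y, eta_eff=0.1, max_epochs=1000):
    """Perceptron-style training with a fixed *effective* learning rate.

    On each mistake the raw step size eta is rescaled so that
    eta / ||w|| stays equal to eta_eff, as in the snippet above.
    Hypothetical sketch, not the paper's exact procedure.
    """
    n, d = X.shape
    w = np.ones(d)  # nonzero start so ||w|| > 0 from the first update
    for _ in range(max_epochs):
        mistakes = 0
        for i in range(n):
            if y[i] * (w @ X[i]) <= 0:  # misclassified point
                eta = eta_eff * np.linalg.norm(w)  # keep eta/||w|| constant
                w += eta * y[i] * X[i]
                mistakes += 1
        if mistakes == 0:  # converged: all points correctly classified
            break
    return w
```

On linearly separable data with a small enough `eta_eff`, the loop stops after finitely many updates, matching the convergence claim quoted above.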

Lecture Notes in Computer Science > Neural Information Processing > Supervised/Unsupervised/Reinforcement Learning > 477-486

Lecture Notes in Computer Science > Machine Learning and Knowledge Discovery in Databases > Regular Papers > 554-569

Lecture Notes in Computer Science > Learning Theory and Kernel Machines > Poster Session 1 > 288-302

…*kernel* or Gram matrix between data points. These square, symmetric, positive semi-definite matrices can informally be regarded as encoding pairwise similarity between all of the objects in a dataset. In this paper we propose an algorithm for manipulating the diagonal entries of a kernel matrix using semi-definite programming. Kernel matrix...
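The square, symmetric, positive semi-definite structure mentioned in this snippet is easy to verify numerically. A small sketch, assuming an RBF kernel on random data (the kernel choice and `gamma` value are assumptions for illustration):

```python
import numpy as np

def rbf_gram(X, gamma=1.0):
    """Gram matrix K[i, j] = exp(-gamma * ||x_i - x_j||^2) for all pairs in X."""
    sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq_dists)

X = np.random.default_rng(0).normal(size=(5, 3))
K = rbf_gram(X)
# symmetric: k(x_i, x_j) = k(x_j, x_i)
assert np.allclose(K, K.T)
# positive semi-definite: all eigenvalues nonnegative (up to rounding)
assert np.linalg.eigvalsh(K).min() > -1e-10
```

Entry `K[i, j]` can be read as the similarity between objects `i` and `j`, which is the informal interpretation given in the abstract.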

…*m × m* Gram matrix *K* for a kernel *k(·, ·)* corresponding to a sample x_{1}, …, x_{m} drawn from a density *p*(x) and the eigenvalues of the corresponding continuous eigenproblem. We bound the differences between the two spectra and provide a performance bound on kernel PCA.

*performance worm*. From such worms, general performance alphabets can be derived, and pianists’...