Linear programming relaxations have been used extensively in designing approximation algorithms for optimization problems. For vertex cover, linear programming combined with a thresholding technique gives a 2-approximate solution, matching the best known performance ratio. For a generalization of vertex cover we call vc_t, in which we seek to cover t edges, this technique may not yield a feasible solution at...
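The classic LP-relaxation-plus-thresholding argument the abstract refers to can be sketched as follows. This is a minimal illustration of the standard 2-approximation for ordinary vertex cover (not the paper's vc_t variant), using SciPy's LP solver; the graph and tolerance are illustrative choices.

```python
# LP relaxation + thresholding 2-approximation for vertex cover:
#   min sum_v x_v  s.t.  x_u + x_v >= 1 for each edge (u,v),  0 <= x_v <= 1,
# then keep every vertex with x_v >= 1/2.
from scipy.optimize import linprog

def lp_vertex_cover(n, edges):
    """Return a vertex cover of size at most twice the optimum."""
    c = [1.0] * n                      # minimize total fractional cover weight
    A_ub, b_ub = [], []
    for u, v in edges:                 # x_u + x_v >= 1  <=>  -x_u - x_v <= -1
        row = [0.0] * n
        row[u] = row[v] = -1.0
        A_ub.append(row)
        b_ub.append(-1.0)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * n, method="highs")
    # Each edge constraint forces max(x_u, x_v) >= 1/2, so thresholding at 1/2
    # always yields a feasible cover of size at most 2 * sum(x) <= 2 * OPT.
    return [v for v in range(n) if res.x[v] >= 0.5 - 1e-9]

triangle = [(0, 1), (1, 2), (0, 2)]
cover = lp_vertex_cover(3, triangle)
print(cover)   # the LP puts 1/2 on every vertex, so all three are taken
```

On a triangle the LP optimum is fractional (1/2 on each vertex), which is exactly the situation where thresholding pays the factor of 2; for vc_t the abstract notes this rounding step may not even produce a feasible solution.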
We prove that if a quadratic system satisfies the direct sum conjecture strongly in the quadratic algorithm model, then it satisfies the direct sum conjecture strongly in the straight-line algorithm model. Therefore, if the strong direct sum conjecture is true for the quadratic algorithm model, then it is also true for the straight-line algorithm model.
Minimum disagreement problems and their connections to agnostic learning and learning with malicious errors have been studied extensively in the literature. We further study these problems and some of their extensions. The classes studied in the literature are monomials, monotone monomials, antimonotone monomials, decision lists, halfspaces, neural networks, and balls. For some of these classes...
We consider a few particular exact learning models based on a random walk stochastic process, and thus more restricted than the well known general exact learning models. We give positive and negative results as to whether learning in these particular models is easier than in the general learning models.
In this paper we study the complexity of rational functions and multirational functions. These include 1) functions containing the absolute value, max, and min functions, 2) data structure functions such as sort, insert, and merge, 3) integer functions such as the gcd (greatest common divisor), modulo, and bitwise ‘and’, and 4) polynomial functions such as the gcd and modulo of two polynomials. We prove...
We study polynomial time learning algorithms for Multiplicity Automata (MA) and Multiplicity Automata Functions (MAF) that minimize the access to one or more of the following resources: Equivalence queries, Membership queries, or Arithmetic operations in the field. This is particularly interesting when access to one or more of the above resources is significantly more expensive than the...
The Kushilevitz-Mansour (KM) algorithm finds all the “heavy” Fourier coefficients of a Boolean function. It is the main tool for learning decision trees and DNF expressions in the PAC model with respect to the uniform distribution. The algorithm requires access to a membership query (MQ) oracle. We weaken this requirement by producing an analogue of the KM algorithm...
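What "heavy" Fourier coefficients means can be made concrete by brute force. The sketch below enumerates all 2^n coefficients of a small Boolean function; the point of the KM algorithm is precisely that it finds the heavy ones from membership queries alone, in time polynomial in n and 1/theta, without this enumeration. The example function (a parity) is an illustrative choice.

```python
# Brute-force Fourier coefficients of f: {0,1}^n -> {-1,+1}.
# For S a subset of [n]:  f_hat(S) = E_x[ f(x) * chi_S(x) ],
# where chi_S(x) = (-1)^(sum of x_i over i in S).
# KM finds all S with |f_hat(S)| >= theta WITHOUT enumerating all 2^n subsets.
from itertools import product

def fourier_coefficients(f, n):
    coeffs = {}
    inputs = list(product([0, 1], repeat=n))
    for bits in product([0, 1], repeat=n):        # bits encodes the subset S
        S = tuple(i for i in range(n) if bits[i])
        total = sum(f(x) * (-1) ** sum(x[i] for i in S) for x in inputs)
        coeffs[S] = total / len(inputs)
    return coeffs

def heavy(coeffs, theta):
    return {S for S, c in coeffs.items() if abs(c) >= theta}

# Parity on the first two variables: all Fourier weight sits on S = {0, 1}.
f = lambda x: (-1) ** (x[0] + x[1])
coeffs = fourier_coefficients(f, 3)
print(heavy(coeffs, 0.5))                         # {(0, 1)}
```

A decision tree or small DNF has its Fourier weight concentrated on few coefficients, which is why locating the heavy ones suffices for learning under the uniform distribution.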
We study the learnability of branching programs and small-depth circuits with modular and threshold gates in both the exact and PAC learning models with and without membership queries. Our results extend earlier works [11, 18, 15] and exhibit further applications of multiplicity automata [7] in learning theory.
Many lower bounds are known in the literature for the complexity of computing (or approximating) functions on random access machines (RAMs) that use arithmetic operations, comparisons, and indirect addressing. However, no nontrivial lower bound is known when the RAM model also allows bitwise Boolean operations or bit shift operations. This paper develops a new technique that finds lower bounds...
We study the proper learnability of axis parallel concept classes in the PAC learning model and in the exact learning model with membership and equivalence queries. These classes include union of boxes, DNF, decision trees and multivariate polynomials. For the constant dimensional axis parallel concepts C we show that the following problems have the same time complexity: 1. C is α-properly...
We study a model of Probably Exactly Correct (PExact) learning that can be viewed either as the Exact model (learning from Equivalence Queries only) relaxed so that counterexamples to equivalence queries are distributionally drawn rather than adversarially chosen or as the Probably Approximately Correct (PAC) model strengthened to require a perfect hypothesis. We also introduce a model of Probably...
We present results concerning the learning of Monotone DNF (MDNF) from Incomplete Membership Queries and Equivalence Queries. Our main result is a new algorithm that allows efficient learning of MDNF using Equivalence Queries and Incomplete Membership Queries whose probability of failing is $p = 1 - 1/\mathrm{poly}(n,t)$. Our algorithm is expected to make $O((tn/$...
The Composition Lemma is one of the strongest tools for learning complex classes. It shows that if a class is learnable, then composing it with a class containing a polynomial number of concepts yields a learnable class. In this paper we extend the Composition Lemma as follows: we show that composing an attribute-efficient learnable class with a learnable class of polynomial shatter coefficient gives...
In this paper, we study two combinatorial search problems: the coin weighing problem with a spring scale (also known as the problem of reconstructing a vector using additive queries) and the problem of reconstructing weighted graphs using additive queries. Suppose we are given n identical-looking coins, of which m are counterfeit and the rest authentic. Assume that we are allowed...
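The additive-query model itself is easy to demonstrate. The toy sketch below handles only the simplest case — a single counterfeit coin assumed lighter than an authentic one, found adaptively by binary search with about log2(n) weighings; the abstract's actual problem (m counterfeits, near-optimal and non-adaptive query sets) is far harder. The weights 10 and 9 are illustrative assumptions.

```python
# Additive-query (spring scale) model: one query weighs any chosen subset of
# coins and returns its total weight.  With a single lighter counterfeit,
# adaptive binary search locates it in about log2(n) queries.
AUTHENTIC, FAKE = 10, 9                 # assumed weights for this sketch

def spring_scale(coins, subset):
    """One additive query: the total weight of the chosen subset."""
    return sum(coins[i] for i in subset)

def find_counterfeit(coins):
    lo, hi = 0, len(coins)              # invariant: the fake is in coins[lo:hi]
    queries = 0
    while hi - lo > 1:
        mid = (lo + hi) // 2
        left = list(range(lo, mid))
        queries += 1
        if spring_scale(coins, left) < AUTHENTIC * len(left):
            hi = mid                    # the lighter half contains the fake
        else:
            lo = mid
    return lo, queries

coins = [AUTHENTIC] * 16
coins[11] = FAKE
print(find_counterfeit(coins))          # (11, 4): found with log2(16) queries
```

For m counterfeits the information-theoretic bound drops below m·log(n), and matching it — especially non-adaptively — is the subject of the abstract.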
Let R be a commutative Artinian ring with identity and let X be a finite subset of R. We present an exact learning algorithm with polynomial query complexity for the class of functions representable as $f(x) = \prod_{i=1}^{n} A_i(x_i)$, where for each $1 \le i \le n$, $A_i$ is a matrix-valued mapping $A_i : X \to R^{m_i \times m_{i+1}}$ and $m_1 = m$...
We give the first polynomial time prediction strategy for any PAC-learnable class C that probabilistically predicts the target with mistake probability $\frac{\mathrm{poly}(\log t)}{t} = \widetilde{O}\left(\frac{1}{t}\right)$, where t is the number of trials. The known lower bound for the mistake probability is $\Omega(1/t)$ [HLW94], so our algorithm is almost optimal.
In this paper, we construct a framework that allows one to polynomially bound the distributions produced by certain boosting algorithms, without significant performance loss. Further, we study the case of Freund and Schapire’s AdaBoost algorithm, bounding its distributions to be near-polynomial w.r.t. the example oracle’s distribution. An advantage of AdaBoost over other boosting techniques is that...
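The distributions in question are the example weights AdaBoost maintains. The sketch below is one round of the standard Freund-Schapire update (not the paper's bounded variant), showing how weight concentrates on misclassified examples — the growth that the framework keeps polynomially bounded. The tiny four-example dataset is an illustrative assumption.

```python
# One round of standard AdaBoost reweighting: given distribution D_t and a
# weak hypothesis h with weighted error eps_t,
#   alpha_t   = 1/2 * ln((1 - eps_t) / eps_t)
#   D_{t+1}(i) = D_t(i) * exp(-alpha_t * y_i * h(x_i)) / Z_t .
# Repeated rounds can concentrate D_t on a few hard examples, far from the
# oracle's distribution — the blow-up the paper's framework bounds.
import math

def adaboost_update(D, y, h_pred):
    eps = sum(d for d, yi, hi in zip(D, y, h_pred) if yi != hi)
    alpha = 0.5 * math.log((1 - eps) / eps)
    new_D = [d * math.exp(-alpha * yi * hi) for d, yi, hi in zip(D, y, h_pred)]
    Z = sum(new_D)                       # normalization constant Z_t
    return [d / Z for d in new_D], alpha

y      = [+1, +1, -1, -1]
h_pred = [+1, -1, -1, -1]                # weak hypothesis: wrong on example 1
D, alpha = adaboost_update([0.25] * 4, y, h_pred)
print([round(d, 3) for d in D])          # [0.167, 0.5, 0.167, 0.167]
```

After one update the misclassified examples always carry exactly half the total weight, which is why weights on persistently hard examples can grow exponentially across rounds.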
We study exact learning of halfspaces from equivalence queries. The algorithm uses an oracle RCH that returns a random consistent hypothesis to the counterexamples received from the equivalence query oracle. We use the RCH oracle to give a new polynomial time algorithm for exact learning halfspaces from majority of halfspaces and show that its query complexity is less (by some constant factor) than...
We analyze classification problems in which data is generated by a two-tiered random process. The class is generated first, then a layer of conditionally independent hidden variables, and finally the observed variables. For sources like this, the Bayes-optimal rule for predicting the class given the values of the observed variables is a two-layer neural network. We show that, if the hidden variables...