Given a p-concept class C, we define two important functions d_C(γ) and d′_C(γ) (related to the notion of γ-shattering). We prove a lower bound of Ω((d_C(γ) − 1)/(εγ²)) on the number of examples required for learning C with an (ε, γ)-good model of probability. We prove similar lower bounds for other learning models, such as learning with γ-bounded absolute (or quadratic) difference and learning with a γ-good decision rule. For the class ND of nondecreasing p-concepts on the real domain, d_ND(γ) = Ω(1/γ). It can be shown that the resulting lower bounds for learning ND (within the models under consideration) are tight to within a logarithmic factor. In order to obtain the “almost-matching” upper bounds, we introduce a new method for designing learning algorithms: dynamic partitioning of the domain by means of splitting trees. The paper also discusses the gaps between the general lower bounds and the corresponding general upper bounds. It can be shown that, under very mild conditions, these gaps are quite narrow.