We solve a manifold learning problem by searching for hypersurfaces fitted to the data. The method, called support vector manifold learning (SVML), transforms the data to a kernel-induced feature space, duplicates each point, shifts the two copies in opposite directions, and solves the resulting classification problem with a support vector machine (SVM). We then cluster the data by mapping the found hypersurfaces to clusters; this method is called support vector manifold learning clustering (SVMLC). We analyze how the choice of the shift direction influences the fitting error. Moreover, we derive a generalization bound for SVML based on the Vapnik-Chervonenkis (VC) dimension. Experiments on synthetic and real-world data sets show that SVML fits the data better than one-class support vector machines (OCSVM) and kernel principal component analysis (KPCA), with statistical significance against OCSVM. In clustering, SVMLC performs comparably to OCSVM and KPCA; however, SVMLC allows improved grouping of points that lie on manifolds.
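The duplicate-and-shift idea can be illustrated with a minimal sketch. Note the details here are assumptions for illustration only: the shift direction (a fixed axis), the shift magnitude `eps`, and the RBF-kernel hyperparameters are simple choices, not the paper's actual procedure for selecting directions.

```python
# Illustrative sketch of the duplicate-and-shift idea: duplicate each
# data point, shift the copies in two opposite directions, train a
# kernel SVM, and read off the decision boundary as the fitted
# hypersurface. Direction and magnitude are assumed choices.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic data lying near a 1-D manifold (a sine curve) in R^2.
t = rng.uniform(0, 2 * np.pi, 200)
X = np.column_stack([t, np.sin(t)]) + rng.normal(scale=0.05, size=(200, 2))

# Duplicate every point and shift the copies in two opposite
# directions (here: along the second axis, a hypothetical choice).
eps = 0.3
direction = np.array([0.0, 1.0])
X_pos = X + eps * direction   # copies labeled +1
X_neg = X - eps * direction   # copies labeled -1
X_train = np.vstack([X_pos, X_neg])
y_train = np.hstack([np.ones(len(X_pos)), -np.ones(len(X_neg))])

# A kernel SVM separates the two shifted copies; its decision
# boundary f(x) = 0 approximates a hypersurface fitted to X.
clf = SVC(kernel="rbf", gamma=1.0, C=10.0).fit(X_train, y_train)

# Points on the original manifold should lie near the boundary,
# i.e. have small |decision_function| values.
scores = clf.decision_function(X)
print(np.abs(scores).mean())
```

The sketch shows why the choice of shift direction matters: shifting along a direction nearly tangent to the manifold would make the two copies overlap and the classification problem ill-posed, which is the kind of effect the direction analysis in the paper addresses.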