Support vector machines (SVMs) are a popular supervised learning method, providing state-of-the-art accuracy in various classification tasks. However, SVM training is a time-consuming task for large-scale problems. This paper proposes a scalable FPGA architecture which targets a geometric approach to SVM training based on Gilbert's algorithm using kernel functions. The architecture is partitioned into floating-point and fixed-point domains in order to efficiently exploit the FPGA's available resources for the acceleration of non-linear SVM training. Implementation results show a speed-up of up to three orders of magnitude for the most computationally expensive part of the algorithm compared to the algorithm's software implementation.
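For orientation, Gilbert's algorithm geometrically reduces to finding the minimum-norm point of a convex hull (for SVM training it is applied to a set derived from the two classes, with dot products replaced by kernel evaluations). The sketch below is a minimal linear-kernel illustration of that core iteration, not the paper's FPGA design; the function name and iteration count are hypothetical.

```python
import numpy as np

def gilbert_min_norm(points, iters=100):
    """Approximate the minimum-norm point of the convex hull of
    `points` (rows are vertices) via Gilbert's algorithm.
    Illustrative sketch only; a kernelized SVM variant would replace
    the dot products below with kernel evaluations."""
    w = points[0].astype(float)
    for _ in range(iters):
        # Support step: pick the vertex most opposed to w.
        v = points[np.argmin(points @ w)]
        d = v - w
        denom = d @ d
        if denom == 0.0:
            break
        # Line search: nearest point to the origin on segment [w, v].
        t = np.clip(-(w @ d) / denom, 0.0, 1.0)
        w = w + t * d
    return w
```

Each iteration costs one pass of dot products over the vertex set, which is the part a hardware accelerator would parallelize.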