Applications of complex-valued neural networks employed as neural adaptive filters are emerging; however, the associated learning algorithms are typically computationally expensive, slow to converge, and sensitive. To help circumvent some of these problems, we introduce the a posteriori data-reusing (DR) approach into the class of first-order (sign) algorithms for complex-valued feedforward neural adaptive filters. This is achieved by progressing from the data-reusing complex-valued nonlinear gradient descent (DRCNGD) algorithm to low-complexity, fast-converging data-reusing sign algorithms. The analysis proves faster convergence, lower sensitivity, and reduced computational complexity when the DR approach is applied in this framework. Simulation results and statistical analysis support the analysis.
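To illustrate the data-reusing idea described above, the following is a minimal NumPy sketch of one DR step for a single complex-valued nonlinear neuron trained by gradient descent: the same input/target pair is reused several times, each reuse computing the a posteriori error from the freshly updated weights. The fully complex tanh nonlinearity, the step size, and the number of reuses are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def drcngd_step(w, x, d, eta=0.1, reuses=3):
    """Data-reusing step for a single complex nonlinear neuron (sketch).

    The pair (x, d) is reused `reuses` times; after the first pass the
    error is an a posteriori error, computed with the updated weights.
    """
    for _ in range(reuses):
        net = np.vdot(w, x)   # filter output before the nonlinearity (w^H x)
        y = np.tanh(net)      # fully complex tanh activation
        e = d - y             # (a posteriori after first pass) output error
        # Gradient of |e|^2 w.r.t. conj(w) gives the CNGD-style update;
        # tanh'(net) = 1 - tanh(net)^2 for the fully complex tanh.
        w = w + eta * np.conj(e) * (1 - y**2) * x
    return w
```

Each extra reuse refines the weights on the current data pair before the next sample arrives, which is the mechanism behind the faster convergence claimed for the DR algorithms.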