The article proposes a new robot programming-by-demonstration framework that integrates visual servoing-based tracking control to robustly follow a trajectory generated from observed demonstrations. The constraints that arise from using a visual servoing controller are incorporated into the trajectory-learning phase, guaranteeing that the generated plan is feasible for task execution. Observational learning is cast as a constrained optimization problem whose objective is to generalize over a set of trajectories of salient features in the image space of a vision camera. The proposed approach is evaluated experimentally on trajectories acquired from kinesthetic demonstrations.
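The abstract does not give the exact formulation, but the general idea of learning an image-space trajectory from several demonstrations under visibility constraints can be sketched as follows. This is a minimal illustration, not the paper's method: the synthetic demonstrations, the quadratic fit-plus-smoothness cost, and the assumed 640x480 image bounds are all placeholders.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical setup: each demonstration is a T x 2 array of pixel
# coordinates of one salient image feature (assumption, not the
# paper's formulation).
rng = np.random.default_rng(0)
T = 50
base = np.stack([np.linspace(100, 500, T),
                 200 + 80 * np.sin(np.linspace(0, np.pi, T))], axis=1)
demos = [base + rng.normal(scale=5.0, size=base.shape) for _ in range(3)]

W, H = 640, 480  # assumed image size; staying in view is the constraint

def cost(x):
    traj = x.reshape(T, 2)
    # fit term: stay close to the demonstrated feature trajectories
    fit = sum(np.sum((traj - d) ** 2) for d in demos) / len(demos)
    # smoothness term: penalize acceleration in image space
    acc = np.diff(traj, n=2, axis=0)
    return fit + 10.0 * np.sum(acc ** 2)

# Visibility constraint expressed as box bounds on pixel coordinates
bounds = [(0, W), (0, H)] * T
x0 = np.mean(demos, axis=0).ravel()
res = minimize(cost, x0, bounds=bounds, method="L-BFGS-B")
traj = res.x.reshape(T, 2)  # generalized trajectory, feasible in-image
```

In the actual framework the constraints would come from the visual servoing controller (e.g. feature visibility and actuation limits), rather than the simple box bounds used here for illustration.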