This paper presents a method for generating vision-based humanoid behaviors by reinforcement learning with rhythmic walking parameters. The walking is stabilized by a rhythmic motion controller such as a CPG (central pattern generator) or neural oscillator. The learning process consists of two stages: the first builds an action space with two parameters (a forward step length and a turning angle) so that infeasible combinations of them are inhibited; the second is reinforcement learning with the constructed action space and a state space consisting of visual features and posture parameters to find feasible actions. The method is applied to a task from the RoboCupSoccer Humanoid league, that is, to reach the ball and to shoot it into the goal. Human instruction is given only to start up the learning process; the rest is completely self-learning in real situations.
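The two-stage scheme described above can be sketched as follows. This is a minimal illustrative sketch only: the discretization values, the feasibility rule, and the tabular Q-learning update are placeholder assumptions, not the paper's actual parameters or algorithm details.

```python
import random

# Hypothetical discretization of the two walking parameters
# (values are illustrative, not from the paper).
STEP_LENGTHS = [0.0, 0.02, 0.04]   # forward step length (m)
TURN_ANGLES = [-10, 0, 10]         # turning angle (deg)

def feasible(step, turn):
    """Toy feasibility rule: long steps with sharp turns are inhibited."""
    return not (step >= 0.04 and abs(turn) >= 10)

# Stage 1: construct the action space, excluding infeasible combinations.
ACTIONS = [(s, t) for s in STEP_LENGTHS for t in TURN_ANGLES if feasible(s, t)]

# Stage 2: tabular Q-learning over (state, action) pairs, where a state
# would encode visual features and posture parameters.
Q = {}

def q_update(state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """Standard one-step Q-learning update."""
    best_next = max(Q.get((next_state, a), 0.0) for a in ACTIONS)
    old = Q.get((state, action), 0.0)
    Q[(state, action)] = old + alpha * (reward + gamma * best_next - old)

def select_action(state, epsilon=0.1):
    """Epsilon-greedy selection restricted to the feasible action space."""
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q.get((state, a), 0.0))
```

Because infeasible parameter combinations never enter `ACTIONS`, the learner cannot select them, which is the purpose of the first stage.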