Path planning for multiple agents is considerably harder than for a single agent. Reinforcement learning (RL) is a popular approach, but it cannot solve the path planning problem directly in an unknown environment. In this paper, a neural network (NN) is applied to estimate actions for the unvisited space, so that traditional multi-agent reinforcement learning is modified by a neural approximation. The path planning in this paper proceeds in two stages: RL is first used to generate training samples for the NN; the trained NN then supplies approximate actions to the agents. The advantage of this method is that RL does not need to be repeated for unvisited states. Experimental results show that the proposed algorithm generates suboptimal paths for multiple agents in an unknown environment.
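The two-stage pipeline described above can be sketched in miniature. The following is a hypothetical single-agent illustration, not the paper's implementation: a toy 5x5 grid world, tabular Q-learning as the sample-generating RL stage, and a small two-layer softmax network as the approximating NN. The grid size, reward values, learning rates, and network shape are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 5x5 grid world (assumed for illustration): goal at (4, 4).
SIZE, GOAL = 5, (4, 4)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

def step(s, a):
    """Apply action a at state s; clip to the grid; reward +1 at the goal."""
    s2 = (max(0, min(SIZE - 1, s[0] + ACTIONS[a][0])),
          max(0, min(SIZE - 1, s[1] + ACTIONS[a][1])))
    return s2, (1.0 if s2 == GOAL else -0.01)

# Stage 1: tabular Q-learning produces (state, greedy action) training samples.
Q = np.zeros((SIZE, SIZE, len(ACTIONS)))
for _ in range(5000):
    s = (int(rng.integers(SIZE)), int(rng.integers(SIZE)))  # random restart
    a = int(rng.integers(len(ACTIONS))) if rng.random() < 0.2 else int(np.argmax(Q[s]))
    s2, r = step(s, a)
    target = r + 0.9 * np.max(Q[s2]) * (s2 != GOAL)  # no bootstrap past goal
    Q[s][a] += 0.5 * (target - Q[s][a])

# Training set: every visited state paired with its greedy action.
X = np.array([(i, j) for i in range(SIZE) for j in range(SIZE)], float) / (SIZE - 1)
y = np.array([int(np.argmax(Q[i, j])) for i in range(SIZE) for j in range(SIZE)])

# Stage 2: a small NN (two layers, softmax output) fits the RL policy, so an
# agent can query it for states never revisited by RL.
W1 = rng.normal(0, 0.5, (2, 32)); b1 = np.zeros(32)
W2 = rng.normal(0, 0.5, (32, 4)); b2 = np.zeros(4)
for _ in range(3000):
    H = np.tanh(X @ W1 + b1)                         # hidden activations
    Z = H @ W2 + b2
    P = np.exp(Z - Z.max(1, keepdims=True)); P /= P.sum(1, keepdims=True)
    G = P.copy(); G[np.arange(len(y)), y] -= 1; G /= len(y)  # softmax CE grad
    dH = (G @ W2.T) * (1 - H ** 2)                   # backprop through tanh
    W2 -= 0.3 * (H.T @ G); b2 -= 0.3 * G.sum(0)
    W1 -= 0.3 * (X.T @ dH); b1 -= 0.3 * dH.sum(0)

def nn_action(s):
    """Approximate action for any state, including ones RL never refined."""
    h = np.tanh(np.array(s, float) / (SIZE - 1) @ W1 + b1)
    return int(np.argmax(h @ W2 + b2))
```

In this sketch the NN simply memorizes the 25-state policy; the point of the approximation in the paper is that the same query works for states the RL stage did not visit, without rerunning RL.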