The core concept of cloud computing is the resource pool. Hadoop MapReduce is a software framework for easily writing applications that process vast amounts of data in parallel on large clusters of commodity hardware in a reliable, fault-tolerant manner. A MapReduce job usually splits the input data set into independent chunks, which are processed by the map tasks in a completely parallel manner. We distribute the total slots according to P_i, the percentage of a job's unfulfilled tasks among the total unfulfilled tasks in the system; that is, P_i = u_i / \sum_j u_j, where u_i denotes the number of unfulfilled tasks of job i. Because a large job has a higher P_i, it is allocated more slots, which improves its response time. This new scheduling algorithm can thereby improve system performance metrics such as throughput and response time.
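As an illustration, the following Java sketch shows one way such a proportional slot allocation could be computed. It is a minimal sketch under the stated definition of P_i, not the paper's actual implementation: the class ProportionalSlotAllocator, its allocate method, and the job names are hypothetical and are not part of the Hadoop API.

```java
import java.util.LinkedHashMap;
import java.util.Map;

/**
 * Hypothetical sketch of proportional slot allocation: each job
 * receives a share of the total slots equal to P_i, its fraction
 * of all unfulfilled tasks in the system.
 */
public class ProportionalSlotAllocator {

    /**
     * @param unfulfilled map from job ID to its number of unfulfilled tasks
     * @param totalSlots  total slots available in the cluster
     * @return            slots assigned to each job, proportional to P_i
     */
    public static Map<String, Integer> allocate(Map<String, Integer> unfulfilled,
                                                int totalSlots) {
        int totalUnfulfilled = unfulfilled.values().stream()
                                          .mapToInt(Integer::intValue)
                                          .sum();
        Map<String, Integer> allocation = new LinkedHashMap<>();
        if (totalUnfulfilled == 0) {
            return allocation; // no pending work, nothing to allocate
        }
        for (Map.Entry<String, Integer> e : unfulfilled.entrySet()) {
            // P_i = job's unfulfilled tasks / total unfulfilled tasks;
            // a large job (more pending tasks) thus gets a larger share.
            double pI = (double) e.getValue() / totalUnfulfilled;
            allocation.put(e.getKey(), (int) Math.floor(pI * totalSlots));
        }
        return allocation;
    }

    public static void main(String[] args) {
        Map<String, Integer> pending = new LinkedHashMap<>();
        pending.put("smallJob", 20); // 20 unfulfilled tasks -> P_i = 0.2
        pending.put("largeJob", 80); // 80 unfulfilled tasks -> P_i = 0.8
        // With 100 slots: smallJob receives 20 slots, largeJob receives 80.
        System.out.println(allocate(pending, 100));
    }
}
```

Because the per-job shares are rounded down, a few slots may remain unassigned; a real scheduler would redistribute this remainder, for example to the jobs with the largest fractional shares.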