In today's world, data has become more precious than money, and we are overwhelmed by a flood of it. Irrespective of the size of the enterprise, large or small, data remains a valuable and irreplaceable asset. Data is generated from both homogeneous and heterogeneous sources. Earlier, most data was stored in relational database management systems (RDBMS) as rows (records or tuples) and columns (fields or attributes). Over time, the RDBMS became more robust, cost-effective, and efficient, greatly simplifying data management. The problem now is that an RDBMS generally holds structured data, whereas much of the data generated today, both offline and online, is unstructured. In fact, according to a Gartner estimate, almost 80% of the data generated in enterprises today is unstructured; roughly 20% falls into the structured and semi-structured categories. Dealing with such big data raises new challenges in capture, storage, search, analysis, transfer, visualization, and privacy. In this paper, a modified data-processing pipeline is discussed, in which each phase introduces new challenges; the paper briefly examines these issues and challenges.