With the explosive growth of data volume in science, engineering, information services, and other fields, data-intensive computing has gained significant interest in recent years. Challenges ranging from efficient peta-scale data management to the adoption of highly scalable cloud computing have become the norm for data center administrators. Highly scalable architectures such as Hadoop, BlobSeer, and MapR are used in large data centers for efficient data management and employ 3-way replication for fault tolerance and data availability. Erasure coding is one means of reducing the storage overhead of replication in data centers. However, HDFS-RAID (erasure-coded Hadoop) uses large block sizes and does not support update operations; changing any file-block content therefore requires recreating the whole file, which degrades the overall write and update performance of the system. We propose the FINe Grained ERasure coding scheme (FINGER) for the erasure-coded Hadoop FileSystem, which improves both write and update performance without sacrificing read performance. The main idea is to divide the large block (64 or 128 MB) into smaller chunks; the chunk layout is designed to mitigate the extra reads incurred when erasure coding a large block update, while maintaining the same metadata size as HDFS-RAID. We implement the update operation in Hadoop and conduct testbed experiments demonstrating that FINGER improves write and update performance by 38.20% and 8.6% w.r.t. 3-way replication, and by 8.08% and up to 5.68× w.r.t. HDFS-RAID, respectively.
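To make the chunking idea concrete, the following is a minimal Python sketch of the update-cost argument; the 1 MB chunk size and the helper function are illustrative assumptions, not FINGER's actual layout or implementation:

```python
# Illustrative sketch (assumptions, not the paper's code): under HDFS-RAID,
# an in-place update forces re-reading and re-encoding the entire block,
# whereas a fine-grained chunk layout confines that work to the chunks
# the update actually touches.

BLOCK_SIZE = 64 * 1024 * 1024   # a 64 MB HDFS block
CHUNK_SIZE = 1 * 1024 * 1024    # assumed 1 MB chunk size, for illustration only

def chunks_touched(offset, length, chunk_size=CHUNK_SIZE):
    """Return the chunk indices overlapped by an update to [offset, offset+length)."""
    first = offset // chunk_size
    last = (offset + length - 1) // chunk_size
    return list(range(first, last + 1))

# Example: a 4 MB update starting 10 MB into the block.
touched = chunks_touched(offset=10 * 1024 * 1024, length=4 * 1024 * 1024)

# Data that must be read back for re-encoding under each scheme:
whole_block_read = BLOCK_SIZE              # coarse-grained: re-read the full block
chunked_read = len(touched) * CHUNK_SIZE   # fine-grained: only the touched chunks
```

In this toy scenario the fine-grained layout re-reads 4 MB instead of 64 MB, which is the kind of read amplification the chunk layout is meant to mitigate.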