As applications are moved from physical servers to virtual machines that share storage resources, they experience large variations in I/O latency. While maintaining average performance in such virtualized environments is important for conforming to service-level agreements (SLAs), cloud users also expect minimal variation in tail latencies, such as the 90th percentile latency, for predictable performance. This is a challenging problem because, under storage resource sharing (VM consolidation), the deviation of an application's 90th percentile I/O latency from its average latency can be very high. We show through experiments under VM consolidation that, during peak loads, this latency variation can be as much as 5 times higher than when the application has exclusive access to the storage devices. This variation in performance exists for both hard disk drives (HDDs) and solid-state drives (SSDs). To minimize this large latency variation, we propose a dynamic I/O redirection and caching mechanism called Virt Cache. Virt Cache proactively detects storage device contention at the storage server and temporarily redirects the peaking virtual disk workload to a dynamically instantiated distributed read-write cache. We have implemented our system in Gluster FS, a widely used distributed file system deployed as a backing store in the cloud. Compared to previous work, our system reduces the deviation of the 90th percentile latency from the average by 50% to 83% as conditions move from low load to peak non-uniform consolidated VM workloads. With Virt Cache, cloud providers can offer users predictable performance, as if their applications had exclusive access to the storage resources.