Modern mass data processing applications need long, continuous, and uninterrupted data access. Parallel/distributed file systems often use multiple metadata servers to manage the global namespace and to provide a reliability guarantee. With the rapid growth of data volume and system scale, the probability of hardware or software failures keeps increasing, which easily leads to multiple...
Hadoop is a popular open-source framework for distributed analysis of large datasets using the MapReduce programming model. Its distributed file system, HDFS, provides high-throughput access to datasets. HDFS can achieve a high-performance metadata service but has two disadvantages. First, when the metadata server stores metadata on persistent devices, it is restricted to read and...
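The abstract above alludes to how an HDFS-style metadata server keeps its namespace in memory while persisting mutations to an edit log that is replayed on restart. A minimal sketch of that general pattern follows; the class and field names are hypothetical illustrations, not HDFS internals.

```python
class MetadataServer:
    """Toy namespace server: metadata lives in memory, and every
    mutation is appended to an edit log so the in-memory state can
    be rebuilt by replaying the log after a restart.
    (Illustrative sketch only; names are hypothetical.)"""

    def __init__(self):
        self.namespace = {}   # path -> attributes, held in memory
        self.log = []         # append-only edit log (would be on disk)

    def create(self, path, attrs):
        self.log.append({"op": "create", "path": path, "attrs": attrs})
        self.namespace[path] = attrs

    def delete(self, path):
        self.log.append({"op": "delete", "path": path})
        self.namespace.pop(path, None)

    @classmethod
    def replay(cls, log):
        """Rebuild server state from a persisted edit log."""
        srv = cls()
        for entry in log:
            if entry["op"] == "create":
                srv.namespace[entry["path"]] = entry["attrs"]
            elif entry["op"] == "delete":
                srv.namespace.pop(entry["path"], None)
        srv.log = list(log)
        return srv
```

The sketch also hints at the trade-off the abstract mentions: keeping the namespace in memory makes metadata operations fast, but recovery time grows with the length of the log to replay.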
As applications grow in number and popularity, big data storage is becoming an important technology that data centers and Internet companies depend on. Cluster file systems with centralized metadata management often encounter planned or unplanned downtime, which demands higher reliability from the metadata service. Current paradigms use a backup server to take over as the primary...
Internet applications are diverse, ranging from online web services to offline data analysis jobs. To make these different types of applications work effectively, many programming models and computing frameworks have been designed, such as MapReduce, Dryad, and Hadoop. Supporting emerging applications and computing frameworks with high resource utilization is a serious challenge. In this paper, we...
Storing and managing data efficiently is a critical issue confronting modern information infrastructures. To accommodate the massive scale of data in the Internet environment, most common solutions utilize distributed file systems. However, these systems still have disadvantages that prevent them from delivering satisfactory performance. In this paper, we present a Name Node cluster file system...
As PC clusters grow in popularity and size, reliable message-passing between nodes becomes an important issue because of the high failure rate of the network. A file access in a cluster file system often consists of several sub-operations, each involving one or more network transmissions, so any network failure can make the file system service unavailable. In this paper, we describe a highly reliable message-passing...
In cluster file systems, metadata management is critical to the whole system. Past research has mainly focused on journaling, which alone is not enough to provide a highly available metadata service. Other work uses replication, but the extra latency it introduces is a major problem. To guarantee both availability and efficiency, we propose a mechanism for building highly available metadata servers...
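The latency cost of replication that this abstract mentions comes from the primary waiting for backup acknowledgements before replying to the client. A minimal primary-backup sketch makes the synchronous round trip explicit; the class names and the `("set", key, value)` operation format are hypothetical, chosen only for illustration.

```python
class Backup:
    """Backup metadata server: applies operations replicated from the primary."""

    def __init__(self):
        self.state = {}

    def apply(self, op):
        kind, key, value = op
        if kind == "set":
            self.state[key] = value
        elif kind == "del":
            self.state.pop(key, None)
        return "ack"


class Primary:
    """Primary metadata server with synchronous replication.

    The client reply is delayed until every backup has acknowledged
    the operation; that extra round trip is the replication latency
    the abstract identifies as the main problem."""

    def __init__(self, backups):
        self.state = {}
        self.backups = backups

    def set(self, key, value):
        op = ("set", key, value)
        for b in self.backups:              # replicate before replying
            assert b.apply(op) == "ack"
        self.state[key] = value             # apply locally only after acks
        return "ok"
```

Asynchronous replication would hide this latency but risks losing acknowledged updates on a primary crash, which is the availability/efficiency tension the proposed mechanism targets.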
Checkpoint/restart has been widely used in computing systems for fault tolerance, job scheduling, and system maintenance purposes. However, a lack of transparency has hindered the adoption of many implementations. In this paper, we present a fully transparent parallel checkpoint/restart framework, DCR, which combines the advantages of kernel-level checkpointing with TCP session preservation...
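As an illustration of the general checkpoint/restart idea (not DCR's kernel-level mechanism, which the abstract only names), the sketch below periodically serializes a computation's state so the job can resume after a simulated crash. The function name and state layout are hypothetical.

```python
import pickle


def compute(n, start_blob=None, stop_at=None):
    """Sum 0..n-1, optionally resuming from a checkpoint blob and/or
    "crashing" after `stop_at` steps (returning only the checkpoint).

    Illustrative sketch of application-level checkpoint/restart; a
    transparent framework would capture this state without the
    application cooperating."""
    state = pickle.loads(start_blob) if start_blob else {"i": 0, "total": 0}
    steps = 0
    while state["i"] < n:
        state["total"] += state["i"]
        state["i"] += 1
        steps += 1
        if stop_at is not None and steps >= stop_at:
            # Simulated failure: hand back a checkpoint instead of a result.
            return None, pickle.dumps(state)
    return state["total"], pickle.dumps(state)
```

The point of a *transparent* framework is precisely that applications need none of this cooperation: the checkpoint is taken at the kernel level, and network state (here absent) is preserved alongside process state.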
In large-scale cluster systems, the failure rate of network connections is non-negligibly high. A cluster file system must be able to handle network failures in order to provide a highly available data access service. Traditionally, network failure handling is guaranteed only by the network protocol, or is implemented within the file system's semantic layer. We present a highly available message-passing...
The Network File System (NFS) protocol, the de facto standard for sharing files in a distributed environment, has adopted InfiniBand as the underlying transport for SunRPC, namely NFS over RDMA. In the current read-write design of NFS over RDMA, NFS write performance is limited because it does not fully utilize the features of InfiniBand. In this paper, we take on the challenge of enhancing the write performance...
Daemon-based MPI launchers are now the mainstream because they can start up processes rapidly. However, effective task management and fault tolerance become more important as supercomputers grow in scale. A new fast-starting, fault-tolerant launcher, called SFLauncher, has been used to start MPICH tasks on Dawning supercomputers. This paper details its features and implementation,...