Growing core counts have highlighted the need for scalable on-chip coherence mechanisms. The increase in the number of on-chip cores exposes the energy and area costs of scaling the directories. Duplicate-tag-based directories require highly associative structures that grow with core count, precluding scalability due to prohibitive power consumption. Sparse directories overcome the power barrier by...
The traditional Distributed Hash Table (DHT) abstraction distributes data items among peer nodes on a structured overlay network in storage-intensive applications. The question is whether DHT-based systems can also provide reliable and scalable storage services in stock-oriented applications, where logistics, traceability, and fault tolerance are the main requirements. In this paper a novel approach...
Secure management of the group key is essential to secure multicast. This paper analyzes the advantages and disadvantages of a key management scheme for secure multicast and proposes a new rekeying scheme based on it. By adding a one-way hash function, some users can compute the keys along the path from their own node to the root by themselves, and during a departure procedure, one user is selected...
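The path-key derivation described in this abstract can be sketched as a hash chain: given its own key, a member repeatedly applies a public one-way function to obtain each ancestor key up to the root, so the key server need not transmit those keys individually. This is a minimal illustration, not the paper's actual scheme; the choice of SHA-256 and the function names are assumptions.

```python
import hashlib

def one_way(key: bytes) -> bytes:
    """Public one-way function; SHA-256 is an illustrative choice."""
    return hashlib.sha256(key).digest()

def derive_path_keys(leaf_key: bytes, depth: int) -> list[bytes]:
    """Derive the keys on the path from a member's leaf node up to the
    root of the key tree by iterating the one-way function. Returns the
    leaf key followed by `depth` ancestor keys."""
    keys = [leaf_key]
    for _ in range(depth):
        keys.append(one_way(keys[-1]))
    return keys
```

Because the function is one-way, a member can compute its ancestors' keys but a departed member cannot invert the chain to recover keys after a rekey.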
Group file operations are a new, intuitive idiom for tools and middleware - including parallel debuggers and runtimes, performance measurement and steering, and distributed resource management - that require scalable operations on large groups of distributed files. The idiom provides new semantics for using file groups in standard file operations to eliminate costly iteration. A file-based idiom promotes...
Resource discovery is an important aspect of many modern large-scale distributed systems. In the past, this problem has been solved using many different approaches, such as a central registry server, flooding-based protocols, and distributed hash tables. In this paper, these three widely used architectures are compared, using measurement results obtained from real implementations run on an Emulab...
In this paper, we present a novel file-searching protocol that structures a DHT ring consisting only of ultrapeers, rather than all nodes. The DHT ring in this protocol is much less sensitive to churn because ultrapeers have much longer uptimes than leaf nodes. This feature makes the protocol more scalable and efficient than previous DHT-based protocols in terms of the costs of file search, node...
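The core idea of restricting the ring to ultrapeers can be illustrated with consistent hashing: only ultrapeer IDs are placed on the ring, and a file key is resolved to the first ultrapeer clockwise from its hash. This is a hedged sketch under assumed names; join/leave handling and leaf-node attachment are omitted.

```python
import hashlib
from bisect import bisect_right

class UltrapeerRing:
    """Minimal consistent-hashing ring built from ultrapeer IDs only;
    leaf nodes never appear on the ring, so churn among them does not
    perturb key ownership."""

    def __init__(self, ultrapeers):
        self._ring = sorted((self._h(p), p) for p in ultrapeers)
        self._hashes = [hv for hv, _ in self._ring]

    @staticmethod
    def _h(key: str) -> int:
        return int.from_bytes(hashlib.sha1(key.encode()).digest(), "big")

    def lookup(self, filename: str) -> str:
        """Return the ultrapeer responsible for a file key: the first
        ultrapeer clockwise from the key's hash (wrapping around)."""
        idx = bisect_right(self._hashes, self._h(filename))
        return self._ring[idx % len(self._ring)][1]
```

A lookup is deterministic for a fixed ultrapeer set, so any node can route a query without global state.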
LakeFS[1] is a cluster file system designed for highly scalable and reliable storage service. It provides excellent scalability and availability for data I/O operations. However, it lacks the ability to scale up metadata operations, which makes the metadata server (MDS) a single bottleneck, especially when a full scan of the metadata is needed. In this paper, we present a way to cluster MDSs...
High-performance storage systems are evolving towards decentralized commodity clusters that can scale in capacity, processing power, and network throughput. Building such systems requires: (a) sharing physical resources among applications; (b) sharing data among applications; and (c) allowing customized views of data for applications. Current solutions typically satisfy the first two requirements through...
NAS and SAN are the two primary network storage systems today, each with its own advantages. However, they also have their own limitations and cannot meet the demands of rapidly growing network applications. This paper presents a new network storage architecture that integrates NAS and SAN over IP: the high-performance storage network (HPSN). Firstly, with the help of the Global...
The trend in parallel computing toward clusters running thousands of cooperating processes per application has led to an I/O bottleneck that has only gotten more severe as the CPU density of clusters has increased. Current parallel file systems provide large amounts of aggregate I/O bandwidth; however, they do not achieve the high degrees of metadata scalability required to manage files distributed...
Efficient data management techniques are needed in wireless sensor networks (WSNs) to counteract issues related to limited resources, e.g. energy, memory, bandwidth, as well as limited connectivity. Self-organizing and cooperative algorithms are thought to be the optimal solution to overcome these limitations. On an abstract level, structured peer-to-peer protocols provide O(1) complexity for storing...
Collective I/O orchestrates I/O from parallel processes by aggregating fine-grained requests into large ones. However, its performance is typically a fraction of the potential I/O bandwidth on large scale platforms such as Cray XT. Based on our analysis, the time spent in global process synchronization dominates the actual time in file reads/writes, which imposes a 'collective wall' on the performance...
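The aggregation step that collective I/O performs can be sketched as coalescing fine-grained (offset, length) requests into large contiguous ones before touching the file system. This is an assumed, simplified illustration of the aggregation idea only; it does not model the process synchronization whose cost the abstract analyzes.

```python
def aggregate(requests):
    """Coalesce fine-grained (offset, length) I/O requests into the
    smallest set of large contiguous requests, as collective I/O does
    before issuing the actual file reads/writes."""
    merged = []
    for off, length in sorted(requests):
        if merged and off <= merged[-1][0] + merged[-1][1]:
            # Overlapping or adjacent: extend the previous request.
            end = max(merged[-1][0] + merged[-1][1], off + length)
            merged[-1] = (merged[-1][0], end - merged[-1][0])
        else:
            merged.append((off, length))
    return merged
```

Fewer, larger requests amortize per-operation overhead, which is why the synchronization cost, rather than the I/O itself, can become the dominant term at scale.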