Advances in artificial intelligence have raised a basic question about human intelligence: Is human reasoning best emulated by applying task‐specific knowledge acquired from a wealth of prior experience, or is it based on the domain‐general manipulation and comparison of mental representations? We address this question for the case of visual analogical reasoning. Using realistic images of familiar...
Scientific simulations on high performance computing (HPC) platforms generate large quantities of data. To bridge the widening gap between compute and I/O, and enable data to be more efficiently stored and analyzed, simulation outputs need to be refactored, reduced, and appropriately mapped to storage tiers. However, a systematic solution to support these steps has been lacking on the current HPC...
Die-stacked DRAM (a.k.a., on-chip DRAM) provides much higher bandwidth and lower latency than off-chip DRAM. It is a promising technology to break the "memory wall". Die-stacked DRAM can be used either as a cache (i.e., DRAM cache) or as a part of memory (PoM). A DRAM cache design would suffer from more page faults than a PoM design as the DRAM cache cannot contribute towards capacity of...
As we continue toward exascale, scientific data volumes continue to grow and are becoming more burdensome to manage. In this paper, we lay out opportunities to enhance state-of-the-art data management techniques. We emphasize well-principled data compression, and using it to achieve progressive refinement. This can both accelerate I/O and afford the user increased flexibility when she interacts with...
This paper explores the causality and responsibility problem (CRP) for the non-answers to probabilistic reverse skyline queries (PRSQ). Towards this, we propose an efficient algorithm called CP to compute the causality and responsibility for the non-answers to PRSQ. CP first finds candidate causes, and then, it performs verification to obtain actual causes with their responsibilities, during which...
Community detection is one of the most important ways to reveal the structure and mechanisms underlying a social network. Overlapping communities better reflect the reality of social networks: in society, the phenomenon of some members holding membership in several different communities manifests as overlapping communities in the network. Facing big-data networks, it is a challenging and computationally...
Improving read performance is one of the major challenges in speeding up scientific data analytic applications. Utilizing the memory hierarchy is one major line of research addressing the read-performance bottleneck. Related methods usually combine solid-state drives (SSDs) with dynamic random-access memory (DRAM) and/or a parallel file system (PFS) to mitigate the speed and space gap between DRAM...
The Cumulus Pricing Scheme (CPS) could become an important management functionality of commercial networks in the future, and it is the only approach known so far that defines a clear relation between the different time scales of accounting periods, measurement periods, and charging periods. Prices in this scheme are based on flat fees and are hence predictable and transparent. CPS is backed by the design of a generic...
As computing power increases exponentially, vast amounts of data are created by many scientific research activities. However, the bandwidth for writing data to disk and reading it back has been improving at a much slower pace. These two trends produce an ever-widening data-access gap. Our work brings together two distinct technologies to address this data access issue: indexing and...
Dynamic Difficulty Adjustment (DDA) adjusts a game's difficulty level dynamically, generating a tailor-made experience for each gamer. If a game is too easy, the gamer will feel bored; if it is too hard, the gamer will become frustrated. DDA is a mechanism to overcome this dilemma and augment the entertainment of a game by dynamically adjusting the parameters, scenarios, and behaviors in the game...
With the development of the web, there are an increasing number of incomplete information systems. Approaches to building ontologies from this kind of data source have become much more important. Granular computing (GrC) is a natural way of human problem-solving, intended to deal with imprecision, uncertainty, and partial truth. In this paper, by applying the principles of granular computing and concept...
Peta-scale scientific applications running on High End Computing (HEC) platforms can generate large volumes of data. For high performance storage and in order to be useful to science end users, such data must be organized in its layout, indexed, sorted, and otherwise manipulated for subsequent data presentation, visualization, and detailed analysis. In addition, scientists desire to gain insights...
The modeling methods available cannot describe the impact of a logistics system's adjustment ability on the system's payoff and cost. In view of this fact, a theorem on the adjustment ability of a logistics system in a supply-chain setting, on the condition that the payoff of the supply chain is a unique increasing process, is proposed. A sequence of MGFs (moment generating functions) of the out-of-goods risk process...