The ability to allocate tasks on the fly is considered one of the most desirable features of a swarm-intelligent system. This paper presents a computational model in which swarms of autonomous agents (robots) carry out the task of cleaning the environment by collecting boxes and dumping them in a dump area. As agents work, they lose energy, and when the energy is too low they...
We consider scheduling for a single-user energy harvesting channel in which the transmitter incurs a processing cost per unit time it is on. The presence of processing costs forces the transmitter to operate in a bursty mode. We consider online transmission scheduling, where the transmitter knows the energy harvests only causally as they arrive and needs to determine the optimum transmit power and the...
On a cluster system running behind a cloud computing service, most applications spawn multiple processes that are then executed on multiple computing nodes. These processes communicate with each other during execution, and the communication performance among them plays an important role in the total execution performance of an application. The SDN-enhanced JMS, which we have developed...
Data analytics frameworks are shifting towards larger degrees of parallelism. Efficient scheduling of data-parallel jobs (tasks) is critical for improving job performance, such as response time and resource utilization. This is an important challenge for large-scale data analytics frameworks, in which jobs are more complex and have diverse characteristics (e.g., diverse resource requirements). Prior work on...
In Volunteer Computing, resources are provided by the users themselves rather than by a single institution. One of its drawbacks is the unreliability of the provided resources, so their selection becomes a central concern. In this paper we address the suitable selection of resources in this kind of Volunteer Computing system. As the choice of resources may need to be made in a short amount of time, we cannot...
To support various services quickly and flexibly, intelligence and automation are the key characteristics of future network resource management. An important current question is how to build a big-data-based Telco cloud resource management framework that can leverage the power of Network Function Virtualization (NFV) and enhance telecom operators' service availability, resource efficiency, and...
Kelly betting is a prescription for optimal resource allocation among a set of gambles which are typically repeated in an independent and identically distributed manner. In this setting, there is a large body of literature which includes arguments that the theory often leads to bets which are “too aggressive” with respect to various risk metrics. To remedy this problem, many papers include prescriptions...
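For context on the abstract above: the classical binary-outcome Kelly fraction, and the fractional-Kelly scaling that is the most common "less aggressive" prescription in this literature, can be sketched as follows (function names and the clamping choice are illustrative, not taken from the paper):

```python
def kelly_fraction(p: float, b: float) -> float:
    """Classical Kelly fraction for a repeated binary bet.

    p: probability of winning the bet
    b: net odds (profit per unit staked on a win)
    Returns the fraction of bankroll to stake, clamped at 0
    so that no bet is placed when the edge is negative.
    """
    q = 1.0 - p
    return max(0.0, (b * p - q) / b)


def fractional_kelly(p: float, b: float, alpha: float = 0.5) -> float:
    """Scale the Kelly stake by alpha in (0, 1] to reduce
    drawdown risk at the cost of slower bankroll growth."""
    return alpha * kelly_fraction(p, b)


# A 60%-win bet at even odds: full Kelly stakes roughly 20% of
# the bankroll; half Kelly stakes roughly 10%.
full = kelly_fraction(0.6, 1.0)
half = fractional_kelly(0.6, 1.0)
```

Fractional Kelly trades expected logarithmic growth for lower variance, which is precisely the risk-metric tension the abstract describes.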
Various techniques of portfolio selection are applied to interpret the status of the market and predict its future trend, but they are not beneficial to small investors because these techniques must be administered by an expert. In addition, these techniques require the accumulation of data about the market and complicated calculations, which is too much effort for individual small investors...
This paper proposes a task allocation method in which agents attempt to maximize social utility while also giving weight to individual preferences based on their own specifications and capabilities. Due to recent advances in computer and network technologies, many services can be provided by appropriately combining multiple types of information and different computational capabilities...
The performance of a ROS application is a function of the individual performance of its constituent nodes. Since ROS nodes are typically configurable (parameterised), the specific parameter values adopted will determine the level of performance generated. In addition, ROS applications may be distributed across multiple computation devices, thus providing different options for node allocation. We address...
In this paper, we consider the problem of ensuring fairness in systems serving a mixture of fully backlogged applications, which continuously demand resources, and non-fully backlogged applications. We introduce a fairness metric, called interference fairness, whose basic idea is that the interference caused by application A for another application B should be equal to that caused by...
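The abstract is truncated before the metric is defined, but one plausible instantiation of the idea is to measure interference as the relative slowdown an application suffers when co-scheduled versus running alone, and to call a schedule fair when that slowdown is symmetric between each pair of applications. The following sketch is an assumption-laden illustration, not the paper's definition:

```python
def interference(t_shared: float, t_alone: float) -> float:
    """Interference suffered by an application: relative slowdown
    of its completion time when co-scheduled versus running alone.
    (One plausible definition; the paper's exact metric is not
    given in this excerpt.)"""
    return t_shared / t_alone - 1.0


def pairwise_unfairness(a_shared: float, a_alone: float,
                        b_shared: float, b_alone: float) -> float:
    """|I(A->B) - I(B->A)|: zero when the slowdown that A imposes
    on B equals the slowdown that B imposes on A."""
    i_on_b = interference(b_shared, b_alone)  # interference A causes B
    i_on_a = interference(a_shared, a_alone)  # interference B causes A
    return abs(i_on_b - i_on_a)


# A backlogged app that doubles its neighbour's runtime while
# itself running unimpeded is maximally unfair under this reading.
u = pairwise_unfairness(10.0, 10.0, 20.0, 10.0)
```

Under this reading, a scheduler enforces interference fairness by throttling whichever application currently imposes the larger slowdown on its peer.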
Network-on-chip (NoC) systems play an important role in improving the performance of chip multiprocessor systems. As the complexity of the network increases, congestion has become the major performance bottleneck and seriously influences the performance of NoCs. Prior works have focused on designing effective routing algorithms based on collecting contention and congestion information to load balance...
The continuance of Moore's law and the failure of Dennard scaling force future chip multiprocessors (CMPs) to have considerable dark regions. How to exploit available dark resources is an important concern for computer architects. In harmony with these changes, we must revise processor allocation schemes, which severely affect the performance of a parallel on-chip system. A suitable allocation algorithm...
As the deceleration of Moore's-law processor scaling accelerates research into new types of computing structures, the need arises to rethink operating system paradigms. Traditionally, an operating system is a layer between hardware and applications, and its primary function is managing hardware resources and providing a common abstraction to applications. How does this function apply,...
An idea or a concept can be classified as transformational, operational, or tactical. In the recent past, there have been more rational developments in software testing techniques, outperforming earlier ones that weighed more in favor of empirical rather than logical aspects of software testing. This paper presents one-of-a-kind developments in software testing methodologies that were...
Analyzing behavioral patterns to manage workload in cloud computing is very important, and a number of cloud data centers have facilities for doing so. This paper provides an approach to resource utilization based on two metrics: energy and a cost function. The approach works within the workload, which helps to understand the relationship between users and servers,...
Advancements in computing technologies and the increasing demand for computing resources (such as network, storage, servers, applications, etc.) have made cloud computing the main computing basis for small and large IT enterprises. Enterprises that are not able to establish their own infrastructure and resources are taking the help of different Infrastructure as a Service (IaaS) providers on the basis...
Grid technology provides a technological structure for highly efficient performance in grid environments. The design of an efficient and reliable task scheduling algorithm is one of the challenging issues in grid computing. A novel improvised scheduling algorithm (IDSA) with a deadline limitation for efficient job execution is proposed in this paper. This algorithm is compared with renowned task...
In order to improve cloud computing utilization, an Improved Ant Colony Optimization (IACO) is proposed. The proposed IACO algorithm innovatively improves the pheromone and heuristic factors of existing algorithms. Simulation tests are conducted in CloudSim, and the results indicate that IACO is superior to the conventional ACO and the latest IABC in task execution efficiency.
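For context, the conventional ACO baseline that IACO modifies assigns each task to a VM with a probability weighted by pheromone and heuristic factors, then evaporates and reinforces the trails. The sketch below shows that standard baseline rule only; parameter names are illustrative, and the paper's improved factors are not reproduced here:

```python
import random


def select_vm(pheromone, heuristic, alpha=1.0, beta=2.0):
    """Standard ACO transition rule: choose VM j with probability
    proportional to tau_j**alpha * eta_j**beta, where tau is the
    pheromone trail and eta a heuristic (e.g., inverse of the
    task's expected execution time on that VM)."""
    weights = [(t ** alpha) * (h ** beta)
               for t, h in zip(pheromone, heuristic)]
    total = sum(weights)
    r = random.random() * total
    acc = 0.0
    for j, w in enumerate(weights):
        acc += w
        if r <= acc:
            return j
    return len(weights) - 1


def evaporate_and_deposit(pheromone, chosen, rho=0.1, q=1.0, cost=1.0):
    """Evaporate every trail by factor (1 - rho), then reinforce
    the chosen VM in inverse proportion to the assignment's cost."""
    for j in range(len(pheromone)):
        pheromone[j] *= (1.0 - rho)
    pheromone[chosen] += q / cost
    return pheromone
```

IACO-style variants typically alter how the pheromone term is initialized and updated; the abstract does not specify the exact modification, so only the conventional rule is shown.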
The future of Moore's Law is in jeopardy. The number of cores of many-core systems is steadily increasing for every technology node generation. Voltage scaling does not keep pace with the unabated decrease of transistor size. Higher leakage power and manufacturing variabilities are the consequences and lead to extremely critical power as well as thermal issues. These phenomena can downgrade the performance...