The limited bandwidth of wireless networks has driven a large body of work on resource scheduling to better utilize the available bandwidth. Most of this work aims to maximize throughput by assuming a uniform user distribution across the cellular network. In reality, users may be distributed non-uniformly. This work studies the effects of non-uniform user distribution on throughput...
We study approximation algorithms for scheduling problems with the objective of minimizing total weighted completion time, under identical and related machine models with job precedence constraints. We give algorithms that improve upon many previous state-of-the-art results that had stood for 15 to 20 years. A major theme in these results is the use of time-indexed linear programming relaxations. These are natural relaxations...
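The objective above can be illustrated on the simplest case: on a single machine with no precedence constraints, Smith's rule (WSPT, weighted shortest processing time first) minimizes total weighted completion time. The sketch below is only an illustration of the objective, not the paper's approximation algorithms; job data and function names are invented.

```python
# Minimal sketch of the min-sum weighted completion time objective.
# WSPT (sort by p_j / w_j ascending) is optimal for the single-machine
# case without precedence constraints.

def total_weighted_completion_time(jobs):
    """jobs: list of (processing_time, weight), in execution order."""
    t, total = 0, 0
    for p, w in jobs:
        t += p          # completion time of this job
        total += w * t  # accumulate weighted completion time
    return total

jobs = [(3, 1), (1, 4), (2, 2)]  # (p_j, w_j), made-up values

# Smith's rule: schedule jobs with the best weight-per-time first.
wspt = sorted(jobs, key=lambda j: j[0] / j[1])

# WSPT order gives 16; the original order gives 31.
```

With precedence constraints or multiple machines the problem becomes NP-hard, which is where the LP relaxations the abstract mentions come in.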
This paper presents ongoing work on the formalization of Cyber-Physical System (CPS) simulations. These systems are distributed real-time systems, and their simulations may or may not be distributed. In this paper, we propose a model to describe the modular components forming a simulation of a CPS. The main goal is to introduce a generic model of distributed simulation architecture, on which we are able...
Cloud computing is attracting a growing amount of research on delivering modeling and simulation capabilities as a service. Among these, simulation execution as a service (EaaS) is a hot topic. It aims to free users from complex run configurations while guaranteeing QoS requirements. With this motivation, focusing on EaaS for parallel and distributed simulation (PADS) applications,...
Quality of service is one of the challenges posed by Cloud Computing. This issue plays an important role in making Cloud services acceptable to customers, and denotes the levels of performance, reliability, and availability offered by Cloud services. The literature reports many implementations for measuring and ensuring QoS in Cloud Computing systems to achieve better results and meet the needs...
This work presents a dead-time compensation technique for robust predictive current control of a grid-connected voltage source inverter. Among current controls, dead-beat predictive controllers are among the fastest, but they are extremely sensitive to inconsistencies between the model and the actual plant. Dead time modifies the plant model and is one of the main sources of distortion in high dynamic range...
The advent of Cloud computing has provided a promising methodology for using distributed resources for complex scientific workflow applications. Due to the unique features of cloud technology, such as the pay-as-you-go pricing model and scaling, efficient workflow scheduling is a critical research topic. While most workflow scheduling algorithms are proposed to minimize the overall execution time,...
In recent years, the number of cores in digital signal and general-purpose processors has increased spectacularly. Concurrently, significant research has been conducted to benefit from this high degree of parallelism. Indeed, this research focuses on providing efficient scheduling of hardware/software systems onto multicore architectures. The scheduling process consists of statically choosing...
Embedded software systems are first designed and validated with high-level models such as MATLAB/Simulink functional models. However, implementing a Simulink functional model on a multicore architecture is not trivial. Designers may first need to select an adequate multicore architecture that provides higher performance for a given Simulink model. Hence, it is important to have a set of performance...
Recommender systems that utilize pertinent and available contextual information are applicable to and useful in a broad range of domains. This paper utilizes context-aware recommendation to facilitate personalized education and assist students in selecting courses (or in non-traditional curricula, learning artifacts) that meet curricular requirements, leverage their skills and background, and are...
Cloud computing is a platform serving millions of users simultaneously. This platform employs various task scheduling algorithms that play a significant role in determining cloud computing performance (waiting time, response time, execution time, and total finish time for all tasks). Tasks usually differ in their nature, importance, length, and requirements. Therefore, we will be using...
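The metrics listed in that abstract can be made concrete with a toy first-come-first-served (FCFS) schedule on a single resource. This is only a generic illustration of the metrics, not the abstract's algorithm; task lengths and function names are invented, and all tasks are assumed to arrive at time 0.

```python
# Hedged sketch: waiting time, turnaround (response-to-completion)
# time, and total finish time (makespan) under FCFS on one resource.

def fcfs_metrics(lengths):
    """Return (avg_waiting, avg_turnaround, makespan) for FCFS,
    assuming every task arrives at t = 0."""
    t = 0
    waits, turnarounds = [], []
    for length in lengths:
        waits.append(t)        # time spent queued before starting
        t += length            # task runs to completion
        turnarounds.append(t)  # arrival (t = 0) to completion
    return (sum(waits) / len(waits),
            sum(turnarounds) / len(turnarounds),
            t)                 # makespan: total finish time

# e.g. fcfs_metrics([5, 2, 8]) -> (4.0, 9.0, 15)
```

Reordering the same tasks (e.g. shortest-job-first) changes the averages but not the makespan, which is why different scheduling algorithms trade these metrics off differently.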
In the highly competitive business environment of Software as a Service (SaaS) clouds, Quality of Service (QoS) and fair pricing are of paramount importance for differentiating between similar cloud providers. In such platforms, the workload computational demand variability may have a significant impact on the system performance and thus on the provider's Service Level Agreement (SLA) commitments...
Fog computing preserves the benefits of cloud computing and is strategically positioned to effectively address many locality and performance issues, because its resources and specific services are virtualized and located at the edge of the customer premises. Resource management is a critical issue that significantly affects system performance. Due to the complex distribution and high mobility of fog devices,...
Reducing power and energy consumption in large-scale networks remains a challenging problem. The increasing need to support multimedia applications in future Internets further compounds this problem. This paper addresses a key issue: how to efficiently assign per-router flow delays and set per-processor execution speeds along a routing path to jointly minimize energy consumption and meet end-to-end...
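The energy/delay trade-off behind that problem can be sketched under a standard (but here assumed) cubic power model: power grows as s^3, so energy per unit of work grows as s^2, while delay shrinks as 1/s. Under that assumption, minimizing total energy subject to an end-to-end delay budget yields a single common speed for all hops. This is a generic textbook-style derivation, not the paper's assignment scheme; the workloads and function name are invented.

```python
# Hedged sketch: speed scaling along a path under a cubic power model.
# Minimizing sum(w_i * s_i^2) subject to sum(w_i / s_i) <= D gives the
# same speed at every hop (by symmetry of the KKT conditions), namely
# s = sum(works) / deadline.

def min_energy_speed(works, deadline):
    """works: per-router processing demands; deadline: end-to-end
    delay budget D. Returns (speed, total_energy, total_delay)."""
    s = sum(works) / deadline          # common optimal speed
    energy = sum(w * s**2 for w in works)
    delay = sum(w / s for w in works)  # meets the budget exactly
    return s, energy, delay
```

Running slower at one hop and faster at another, with the same total delay, only increases energy under this convex model, which is the intuition for the uniform-speed optimum.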
Reproducibility of the execution of scientific applications on parallel and distributed systems is of growing interest, as it underlies the trustworthiness of experiments and the conclusions derived from them. Dynamic loop scheduling (DLS) techniques are an effective approach to improving the performance of scientific applications via load balancing. These techniques address algorithmic and...
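One classic DLS technique that abstracts like this refer to is guided self-scheduling (GSS), where each idle worker grabs a chunk of ceil(R/P) of the R remaining iterations, so chunks shrink over time to balance load. The sketch below shows only the chunk-size sequence, not the paper's reproducibility framework; the function name and example sizes are invented.

```python
# Hedged sketch: guided self-scheduling (GSS) chunk sizes for a loop
# of N iterations shared by P workers. Chunks decrease geometrically,
# trading scheduling overhead against load imbalance.
import math

def gss_chunks(total_iters, workers):
    """Return the sequence of chunk sizes GSS would hand out."""
    chunks, remaining = [], total_iters
    while remaining > 0:
        c = math.ceil(remaining / workers)  # ceil(R / P)
        chunks.append(c)
        remaining -= c
    return chunks

# e.g. gss_chunks(100, 4) starts 25, 19, 14, 11, ... and sums to 100.
```

Other DLS techniques (factoring, adaptive weighted factoring, etc.) differ mainly in how this chunk-size sequence is computed.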
Modern high performance computing (HPC) systems exhibit a rapid growth in size, both “horizontally” in the number of nodes, as well as “vertically” in the number of cores per node. As such, they offer additional levels of hardware parallelism. Each level requires and employs algorithms for appropriately scheduling the computational work at the respective level. The present work explores the relation...
Independent applications co-scheduled on the same hardware will interfere with one another, affecting performance in complicated ways. Predicting this interference is key to efficiently scheduling applications on shared hardware, but forming accurate predictions is difficult because there are many shared hardware features that could lead to the interference. In this paper we investigate machine learning...
In utility computing models, users consume services based on their Quality of Service (QoS) requirements. QoS provides a basis for task scheduling, but it also makes task scheduling problems more complex. In this paper, we present a heuristic scheduling algorithm, named Budget-Deadline Constrained Workflow Scheduling (BDCWS). The algorithm calculates the task priority by a new method to balance the...
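Task-priority computation in workflow (DAG) schedulers is commonly based on an upward rank, as in the well-known HEFT heuristic: a task's rank is its cost plus the largest rank among its successors, and higher-ranked tasks are scheduled first. The sketch below shows that generic scheme only; it is explicitly not the new priority method BDCWS proposes, and the DAG, costs, and names are invented (edge communication costs are omitted for brevity).

```python
# Hedged sketch: HEFT-style upward rank as a task priority for a DAG.
# rank_u(t) = cost(t) + max(rank_u(s) for successors s), else cost(t).

def upward_rank(dag, cost):
    """dag: task -> list of successor tasks; cost: task -> mean cost.
    Returns a dict of ranks; higher rank means schedule earlier."""
    memo = {}
    def rank(t):
        if t not in memo:
            succs = dag.get(t, [])
            memo[t] = cost[t] + (max(rank(s) for s in succs) if succs else 0)
        return memo[t]
    return {t: rank(t) for t in cost}

# Diamond-shaped example workflow: A -> {B, C} -> D.
dag = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D']}
cost = {'A': 2, 'B': 3, 'C': 1, 'D': 4}
ranks = upward_rank(dag, cost)  # A gets the highest rank
```

Budget- and deadline-constrained schedulers like BDCWS typically start from such a ranking and then add budget/deadline distribution on top of it.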
In our previous works (Liu & Su, 2016a [1], 2016b [2]), we studied the scheduling of medical resource ordering and distribution based on an influenza diffusion model. In paper [1], the order size at distribution centers (DCs) was set to a constant; in paper [2], it was improved to be a decision variable. A core decision variable, the number of beds assigned for epidemic patients...
Cloud computing and its pay-as-you-go model continue to provide significant cost benefits and a seamless service delivery model for cloud consumers. The evolution of small-scale and large-scale geo-distributed datacenters operated and managed by individual cloud service providers raises new challenges in terms of effective global resource sharing and management of autonomously-controlled individual...