Cyber-physical systems, such as mobile robots, must respond adaptively to dynamic operating conditions. Effective operation of these systems requires that sensing and actuation tasks are performed in a timely manner. Additionally, execution of mission-specific tasks, such as imaging a room, must be balanced against the need to perform more general tasks, such as obstacle avoidance. This problem has been addressed by maintaining the relative utilization of shared resources among tasks near a user-specified target level. Producing optimal scheduling strategies requires complete prior knowledge of task behavior, which is unlikely to be available in practice. Instead, suitable scheduling strategies must be learned online through interaction with the system. We consider the sample complexity of reinforcement learning in this domain and demonstrate that, although the problem's state space is countably infinite, we can leverage the problem's structure to guarantee efficient learning.
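As a minimal sketch of the kind of learning problem described above (the state discretization, reward shape, and constants are illustrative assumptions, not taken from the paper), a tabular Q-learner can be trained to keep each task's share of a shared processor near a target utilization; discretizing the utilization error collapses the countably infinite count space to a finite one:

```python
import random
from collections import defaultdict

random.seed(0)

TARGETS = [0.7, 0.3]          # desired utilization share per task (assumed)
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

def state(counts, total):
    # Discretize each task's utilization error into coarse buckets,
    # collapsing the countably infinite count space to a finite one.
    if total == 0:
        return (0, 0)
    return tuple(round((c / total - t) * 10) for c, t in zip(counts, TARGETS))

Q = defaultdict(float)

def step(counts, total):
    s = state(counts, total)
    if random.random() < EPS:                       # epsilon-greedy exploration
        a = random.randrange(len(TARGETS))
    else:
        a = max(range(len(TARGETS)), key=lambda i: Q[(s, i)])
    counts[a] += 1
    total += 1
    # Reward: negative total deviation from the target shares.
    r = -sum(abs(c / total - t) for c, t in zip(counts, TARGETS))
    s2 = state(counts, total)
    best = max(Q[(s2, i)] for i in range(len(TARGETS)))
    Q[(s, a)] += ALPHA * (r + GAMMA * best - Q[(s, a)])
    return total

counts, total = [0, 0], 0
for _ in range(5000):
    total = step(counts, total)
print([c / total for c in counts])  # learned dispatch shares
```

This is only a toy analogue; the paper's contribution concerns sample-complexity guarantees, which a plain tabular learner does not provide.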
Computational Grids are a new trend in distributed computing systems. They allow the sharing of geographically distributed resources in an efficient way, extending the boundaries of what we perceive as distributed computing. Many sciences can benefit from grids to solve CPU-intensive problems, creating potential benefits for society as a whole. Job scheduling is an integral part of parallel and distributed computing: it selects the correct match of resource for a particular job, thereby increasing job throughput and resource utilization. Jobs should be scheduled automatically to make the system more reliable, more accessible, and less sensitive to subsystem failures. This paper provides a survey of various heuristic algorithms used for scheduling in grids.
Zayas, Cilia E. (University of Florida) | He, Zhe (Florida State University) | Yuan, Jiawei (Embry-Riddle Aeronautical University) | Maldonado-Molina, Mildred (University of Florida) | Hogan, William (University of Florida) | Modave, François (University of Florida) | Guo, Yi (University of Florida) | Bian, Jiang (University of Florida)
Elderly patients, aged 65 or older, make up 13.5% of the U.S. population but represent 45.2% of the top 10% of healthcare utilizers in terms of expenditures. Middle-aged Americans, aged 45 to 64, make up another 37.0% of that category. Given the high demand for healthcare services by these populations, it is important to identify high-cost users of healthcare systems and, more importantly, ineffective utilization patterns, to highlight where targeted interventions could be placed to improve care delivery. In this work, we present a novel multi-level framework applying machine learning (ML) methods (i.e., random forest regression and hierarchical clustering) to group patients with similar utilization profiles into clusters. We use a vector space model to characterize a patient's utilization profile as the number of visits to different care providers and the number of prescribed medications. We applied the proposed methods to the 2013 Medical Expenditure Panel Survey (MEPS) dataset. We identified clusters of healthcare utilization patterns of elderly and middle-aged adults in the United States, and assessed the general and clinical characteristics associated with these utilization patterns. Our results demonstrate the effectiveness of the proposed framework in modeling healthcare utilization patterns. Understanding these patterns can guide healthcare policy-making and practice.
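As an illustration of the clustering step only (toy profiles and a simplified single-linkage implementation; the patient names, provider categories, and distance metric are assumptions, and the framework's random forest regression stage is omitted), utilization profiles can be represented as count vectors and grouped hierarchically:

```python
from math import sqrt

# Toy utilization profiles: each patient is a vector of counts, e.g.
# [office_visits, er_visits, inpatient_stays, prescriptions].
profiles = {
    "p1": [12, 0, 0, 24],   # heavy office-visit / medication user
    "p2": [10, 1, 0, 20],
    "p3": [1, 6, 3, 5],     # heavy ER / inpatient user
    "p4": [0, 7, 2, 4],
}

def dist(a, b):
    # Euclidean distance between two utilization vectors.
    return sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def single_linkage(items, k):
    # Agglomerative clustering: repeatedly merge the two closest
    # clusters (closest pair of members) until k clusters remain.
    clusters = [[name] for name in items]
    while len(clusters) > k:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(dist(items[a], items[b])
                        for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] += clusters.pop(j)
    return [sorted(c) for c in clusters]

print(single_linkage(profiles, 2))  # -> [['p1', 'p2'], ['p3', 'p4']]
```

On these toy vectors the two utilization patterns separate cleanly; real MEPS profiles are far higher-dimensional and would use a library implementation.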
The part agent prioritizes conflicting machine requests for the same parts, selects the best machine, and accepts that machine's request. If the part agent accepts a machine agent's request for parts, the machine orders the parts (a machine agent commitment) and the part agent then commits to supplying them. Allocating agents to both parts and machines is an approach that has been around for over a decade (Duffle, Piper, Humphrey, and Hartwick 1986) and has been studied more recently in the context of autonomous agents (Lin and Solberg 1992). This work takes the same approach to agent allocation and uses a similar negotiation scheme, but for different purposes.
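A minimal sketch of this negotiation step, with hypothetical names and bid scoring (the source does not specify how machines are ranked): the part agent receives competing machine bids, accepts the best one, and records the mutual commitment:

```python
# Hypothetical sketch: a part agent resolving conflicting machine
# requests by accepting the best bid. Scoring is an assumption.

def select_machine(requests):
    # requests: list of (machine_id, bid_score); higher score = better match.
    return max(requests, key=lambda r: r[1])[0]

def negotiate(part, requests):
    winner = select_machine(requests)
    # Mutual commitment: the winning machine orders the part and the
    # part agent commits to supplying it; the other requests are declined.
    return {"part": part,
            "committed_to": winner,
            "declined": [m for m, _ in requests if m != winner]}

print(negotiate("gear-7", [("M1", 0.4), ("M2", 0.9), ("M3", 0.6)]))
```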
Service discovery request messages play a vital role in sharing and locating resources in many service discovery protocols. Sending more messages than a link can handle may cause congestion and loss of messages, which dramatically degrades the performance of these protocols. Resending lost messages results in latency and inefficiency in performing the tasks that users require from the connected nodes. This issue becomes a serious problem in two cases: first, when the number of clients performing service discovery requests increases, since this increases the number of discovery messages sent; second, when network resources such as bandwidth are consumed by other applications. Both cases lead to network congestion and loss of messages. This paper proposes an algorithm to improve the performance of service discovery protocols by separating consecutive bursts of messages with a period of time calculated from the available network resources. The algorithm was tested with routers connected in two configurations, decentralised and centralised. In addition, this paper examines the impact of increasing the number of clients and of the consumed network resources on the proposed algorithm.
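A minimal sketch of the pacing idea, assuming a simple drain-time formula for the inter-burst gap (the function names and the formula are illustrative assumptions, not the paper's algorithm):

```python
# Space consecutive bursts of discovery messages by a gap computed
# from the bandwidth left over after other applications' traffic.

def burst_gap(burst_bytes, available_bps):
    # Seconds the link needs to drain one burst; sending faster than
    # this risks queue overflow and message loss.
    return burst_bytes * 8 / available_bps

def schedule_bursts(n_bursts, burst_bytes, link_bps, consumed_bps):
    available = max(link_bps - consumed_bps, 1)
    gap = burst_gap(burst_bytes, available)
    # Send times for each burst, spaced by the computed gap.
    return [i * gap for i in range(n_bursts)]

# 5 bursts of 1500-byte discovery messages on a 1 Mb/s link with
# 600 kb/s already consumed by other applications:
times = schedule_bursts(5, 1500, 1_000_000, 600_000)
print(times)  # gaps widen as available bandwidth shrinks
```

As the consumed bandwidth grows, the available capacity falls and the computed gap widens, which matches the paper's motivation for adapting the spacing to network conditions.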