Alumni

Major research topic

Reinforcement Learning-based computing continuum management system for dynamic resource allocation

Abstract

The computing continuum scenario is highly complex, involving heterogeneous resources. Applications can be executed on various elements within the continuum (from edge devices to cloud servers), but choosing the best computational resource on which to map each task is crucial to speed up computation and save energy. My focus is the Resource Selection and application Component Placement (RS-CP) problem. This has traditionally been tackled at design time, but there is a need for a system that can respond to workload fluctuations: these are present in real systems yet difficult to predict at design time, and can therefore invalidate design-time solutions. Reinforcement Learning can solve such an optimization task effectively, providing agents that are capable of evolving with the system. In particular, Deep Reinforcement Learning has proven suitable for high-dimensional state spaces such as those at the focus of my research.
I work on a framework that dynamically adjusts resource allocation to incoming load variations and service times, leveraging design-time knowledge to substantially reduce training time and to minimize violations of response-time constraints.
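To make the idea concrete, the sketch below shows a deliberately minimal toy version of this kind of adaptive placement: a single component, three candidate resources, and a fluctuating workload, learned with simple epsilon-greedy value updates. All resource names, response times, and the warm-start prior are illustrative assumptions, not the actual framework, and a tabular learner stands in for the deep RL agent the abstract describes.

```python
import random

# Toy RS-CP model (all names and numbers are illustrative assumptions):
# one application component, three candidate resources, and a workload
# level that fluctuates between low and high.
RESOURCES = ["edge", "fog", "cloud"]   # candidate placements
LOADS = ["low", "high"]                # discretized workload state

# Hypothetical mean response times (ms) per (load, resource); under high
# load the edge device saturates, so a remote resource becomes preferable.
RESPONSE_MS = {
    ("low", "edge"): 5,   ("low", "fog"): 12,  ("low", "cloud"): 30,
    ("high", "edge"): 80, ("high", "fog"): 40, ("high", "cloud"): 35,
}
DEADLINE_MS = 50  # response-time constraint

def reward(load, resource):
    """Negative response time, with an extra penalty for deadline violations."""
    t = RESPONSE_MS[(load, resource)]
    return -t - (100 if t > DEADLINE_MS else 0)

# Design-time knowledge as a warm start: bias the value table toward the
# design-time placement ("edge", optimal at low load) instead of zeros,
# so fewer training interactions are wasted rediscovering it.
q = {(l, r): 0.0 for l in LOADS for r in RESOURCES}
for l in LOADS:
    q[(l, "edge")] = 10.0  # design-time prior

alpha, eps, episodes = 0.1, 0.2, 2000
rng = random.Random(0)
for _ in range(episodes):
    load = rng.choice(LOADS)          # workload fluctuation
    if rng.random() < eps:            # epsilon-greedy exploration
        r = rng.choice(RESOURCES)
    else:
        r = max(RESOURCES, key=lambda x: q[(load, x)])
    # Bandit-style value update toward the observed reward.
    q[(load, r)] += alpha * (reward(load, r) - q[(load, r)])

policy = {l: max(RESOURCES, key=lambda x: q[(l, x)]) for l in LOADS}
print(policy)  # the agent keeps the edge at low load and moves off it at high load
```

The warm start illustrates the reduced-training-time point: the agent starts from the design-time solution and only has to learn where workload fluctuations invalidate it, rather than exploring the whole placement space from scratch.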

