For this project, Vrije Universiteit Amsterdam is looking for two graduate researchers to fill PhD-candidate positions:
Position-1
An edge computing environment combines extreme distribution of computing, networking, and storage resources, all of which must be procured and scheduled in real time. There are currently no standard services for computation and resource management at the edge. Furthermore, by the nature of the environment, edge computation is extremely distributed and must be agile to support mobility and offloading (between endpoints and between cloud and edge), so that the right computation is done at the right place, in time. However, such scheduling and placement decisions are challenging due to the lack of suitable abstractions and services, and due to restrictions such as privacy, performance, and energy requirements. Hence, the research challenge is: how to efficiently process data in an extremely distributed environment? To address this, first a rich, generalized offloading model will be built to understand what to offload, and where. Second, based on the insights from the offloading model, a companion resource management and scheduling architecture will be designed and implemented. Finally, we will explore dynamic and automatic offloading and scheduling techniques at the edge. The resulting system will be tested on an in-house edge testbed together with our large distributed cloud+edge system, DAS-6 (https://www.cs.vu.nl/das/).
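To make the offloading question concrete, here is a minimal sketch of the kind of decision an offloading model must support: choosing where to run a task given each site's compute throughput and the cost of moving the input data there. All names and figures below are hypothetical illustrations, not part of the project's actual design.

```python
from dataclasses import dataclass

@dataclass
class Site:
    name: str
    flops_per_s: float         # sustained compute throughput at this site
    uplink_bytes_per_s: float  # bandwidth from the data source to this site

@dataclass
class Task:
    flop_count: float   # work, in floating-point operations
    input_bytes: float  # data that must be moved to the execution site

def completion_time(task: Task, site: Site) -> float:
    """Estimated time: data transfer plus computation at the site."""
    return (task.input_bytes / site.uplink_bytes_per_s
            + task.flop_count / site.flops_per_s)

def place(task: Task, sites: list[Site]) -> Site:
    """Pick the site minimising the estimated completion time."""
    return min(sites, key=lambda s: completion_time(task, s))

# Illustrative sites: the data is already local at the endpoint,
# the edge is nearby but modest, the cloud is powerful but far away.
endpoint = Site("endpoint", 1e9, float("inf"))
edge = Site("edge", 50e9, 100e6)
cloud = Site("cloud", 1e12, 10e6)

task = Task(flop_count=5e10, input_bytes=50e6)
best = place(task, [endpoint, edge, cloud])
print(best.name)  # for this task, the edge wins: moderate transfer, enough compute
```

A real offloading model would extend this with privacy constraints, energy budgets, and contention, but the core trade-off (data movement versus compute capacity) stays the same.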
Position-2
This PhD project will focus on exploring computing abstractions, and performance modeling and engineering, for edge computing. While in the cloud environment various computing frameworks such as Spark and TensorFlow have been proposed to simplify application development, there are currently no counterparts for the edge. One goal is thus to design and build such frameworks to ease the adoption of edge computing. On the other hand, edge environments are highly heterogeneous, involving diverse device types ranging from microcontrollers and ARM-based processing units (e.g., NVIDIA Jetson boards and Raspberry Pis) to GPU-based accelerators. Understanding the performance of edge applications running on these devices is important for making informed resource scheduling decisions. The second goal is therefore to study programming models and performance modeling of applications running on edge devices. Towards these goals, we will first design an expressive computing abstraction (e.g., domain-specific languages) for edge workloads, then build a compiler that can generate target-dependent code for different edge hardware platforms, and finally build a performance model for edge applications running on these platforms.
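As a flavour of what a performance model for heterogeneous edge devices might look like, the sketch below uses a simple roofline-style estimate: a kernel is bound by either its compute demand or its memory traffic, whichever takes longer on a given device. The device parameters and workload numbers are illustrative assumptions, not measurements, and the project's actual model would be far richer.

```python
def roofline_time(flop_count: float, bytes_moved: float,
                  peak_flops: float, mem_bandwidth: float) -> float:
    """Roofline-style prediction: the kernel runs at the slower of its
    compute bound and its memory-traffic bound."""
    return max(flop_count / peak_flops, bytes_moved / mem_bandwidth)

# Hypothetical device parameters: (peak FLOP/s, memory bytes/s).
devices = {
    "microcontroller": (1e8, 1e8),
    "jetson_board": (1e12, 60e9),
    "gpu_accelerator": (10e12, 900e9),
}

# A hypothetical kernel: 2 GFLOP of work, 100 MB of memory traffic.
for name, (flops, bw) in devices.items():
    t = roofline_time(2e9, 100e6, flops, bw)
    print(f"{name}: {t:.4f} s")
```

Such a model lets a scheduler compare devices without running the kernel everywhere; the same kernel that is compute-bound on a microcontroller becomes memory-bound on a GPU-class accelerator, which is exactly the kind of insight the planned performance model would feed back into resource scheduling.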