PhD Position F/M Optimization of resources placement in Fog-based IoT systems based on latency analysis

Inria
March 30, 2023

2023-05744 - PhD Position F/M Optimization of resources placement in Fog-based IoT systems based on latency analysis

Contract type : Fixed-term contract

Level of qualifications required : Graduate degree or equivalent

Function : PhD Position

About the research centre or Inria department

The Inria Rennes - Bretagne Atlantique Centre is one of Inria's eight centres and has more than thirty research teams. The Inria Centre is a major and recognized player in the field of digital sciences. It is at the heart of a rich R&D and innovation ecosystem: highly innovative SMEs, large industrial groups, competitiveness clusters, research and higher-education players, laboratories of excellence, a technological research institute, etc.

Assignment

Cloud computing and its three facets (IaaS, PaaS, and SaaS) have become essential to today's Internet applications, offering many advantages such as scalability, elasticity, and flexibility. With its different service models, the cloud still faces many issues that can impact the end user (QoS), the provider (cost), and the environment (sustainability).

Fog computing is a recent paradigm that addresses such issues by provisioning resources outside the cloud and closer to the end devices, at the edge of the network. This reduces latency and minimizes the traffic between the end user and the cloud platform [3]. Several studies have shown that fog systems can indeed reduce latency compared to cloud systems, but this reduction is not guaranteed and depends heavily on component placement, sometimes leading to worse performance [8]. It has also been demonstrated that less traffic is sent to the cloud when using fog systems. However, fog systems still lack proper monitoring and reconfiguration mechanisms, especially for IoT applications [7], for which the cloud infrastructure is known not to be a viable solution.

However, evaluating realistic large-scale fog infrastructures is a complex task, given the cost of deployment and the absence of a realistic view of real-world deployments. In an IoT context, geo-distributed fog infrastructures mostly rely on SDN approaches [5], which tend to conceal networking aspects such as the topology or the routing decisions. As a consequence, the impact of the elasticity of a fog solution is mainly evaluated on the data plane side [4].

In the context of IoT applications (i.e., environments with critical response-time requirements such as smart-city sensing or vehicular networks), latency is at the center of a large number of studies on optimizing the placement of resources in distributed architectures. To guarantee the quality of service, several solutions exist to reconfigure component placement (migration) and can reduce the overall latency by changing components and routes. However, identifying precisely which component is the source of the problematic latency remains scarcely addressed. Before triggering a reconfiguration or a migration because of a latency issue, it can be beneficial to check whether the source of the latency can be resolved without instantiating a migration or a full reconfiguration. Some studies compare the response times of the major cloud providers under varying load [1, 6]. Proper measurement protocols exist, but they target specific case studies [2] and cannot readily be integrated into fog systems.
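
To make the idea concrete, here is a minimal, purely illustrative Python sketch (not part of the project; the segment names and the dominance threshold are assumptions made for the example) of how an end-to-end latency sample could be decomposed into per-segment contributions in order to point at the likely culprit before deciding on a migration:

    # Illustrative only: decompose an end-to-end latency sample into
    # per-segment contributions to identify a likely culprit. Segment
    # names and the dominance threshold are assumptions for this example.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class LatencySample:
        device_to_fog_ms: float   # network latency from the IoT device to the fog node
        fog_processing_ms: float  # processing time on the fog node
        fog_to_cloud_ms: float    # network latency from the fog node to the cloud

    def dominant_segment(sample: LatencySample, share_threshold: float = 0.5) -> Optional[str]:
        """Return the segment responsible for most of the latency, if any.

        A segment is reported only if it accounts for more than
        `share_threshold` of the end-to-end latency; otherwise no single
        culprit is reported and a migration is unlikely to help.
        """
        segments = {
            "device_to_fog": sample.device_to_fog_ms,
            "fog_processing": sample.fog_processing_ms,
            "fog_to_cloud": sample.fog_to_cloud_ms,
        }
        total = sum(segments.values())
        if total == 0:
            return None
        name, value = max(segments.items(), key=lambda kv: kv[1])
        return name if value / total > share_threshold else None

    # Example: the fog-to-cloud link dominates (42 of 50 ms), so acting on
    # that link (e.g., caching at the fog node) is more promising than
    # migrating the component to another fog node.
    print(dominant_segment(LatencySample(5.0, 3.0, 42.0)))  # -> "fog_to_cloud"

In this toy view, a reconfiguration would only be considered when one segment clearly dominates; otherwise a migration is unlikely to fix the observed latency.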

The objective of this thesis is to study the optimization of resource placement in Fog-based IoT systems based on latency measurement, by evaluating the control plane cost of a change in the architecture. It will particularly address the problem of how to identify the origin of a latency issue and, based on this finding, propose an optimization that takes into account the cost and elasticity of the control plane.
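
As a purely illustrative companion to this objective, the hypothetical Python sketch below scores candidate placements by weighing an estimated end-to-end latency against the control-plane cost (e.g., number of flow-rule updates or migrations) of switching to that placement; the names, weights, and cost model are assumptions made for the example only:

    # Illustrative only: score candidate placements by trading off estimated
    # latency against the control-plane cost of reconfiguring to them.
    # Names, weights, and the cost model are assumptions for this example.
    from typing import Dict

    def placement_score(estimated_latency_ms: float,
                        reconfiguration_cost: float,
                        latency_weight: float = 1.0,
                        cost_weight: float = 0.2) -> float:
        """Lower is better: weighted sum of latency and reconfiguration cost."""
        return latency_weight * estimated_latency_ms + cost_weight * reconfiguration_cost

    def choose_placement(candidates: Dict[str, Dict[str, float]], current: str) -> str:
        """Pick the placement with the lowest score.

        `candidates` maps a placement name to its estimated latency (ms) and the
        number of control-plane operations (flow-rule updates, migrations, ...)
        needed to reach it from `current`. Staying put costs nothing to reconfigure.
        """
        best_name = current
        best_score = placement_score(candidates[current]["latency_ms"], 0.0)
        for name, c in candidates.items():
            cost = 0.0 if name == current else c["control_plane_ops"]
            score = placement_score(c["latency_ms"], cost)
            if score < best_score:
                best_name, best_score = name, score
        return best_name

    # Example: the nearest fog node has the lowest latency, but the cheaper
    # reconfiguration towards "fog_far" wins once control-plane cost is counted.
    candidates = {
        "cloud":    {"latency_ms": 80.0, "control_plane_ops": 0.0},
        "fog_near": {"latency_ms": 20.0, "control_plane_ops": 150.0},
        "fog_far":  {"latency_ms": 35.0, "control_plane_ops": 40.0},
    }
    print(choose_placement(candidates, current="cloud"))  # -> "fog_far"

The point of the sketch is the trade-off it encodes: the placement with the lowest latency is not chosen if reaching it requires a reconfiguration whose control-plane cost outweighs the latency gain.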

References :

[1] Dániel Géhberger, Dávid Balla, Markosz Maliosz, and Csaba Simon. Performance evaluation of low latency communication alternatives in a containerized cloud environment. In 2018 IEEE 11th International Conference on Cloud Computing (CLOUD), pages 9–16, 2018.

[2] Devasena Inupakutika, Gerson Rodriguez, David Akopian, Palden Lama, Patricia Chalela, and Amelie G. Ramirez. On the performance of cloud-based mHealth applications: A methodology on measuring service response time and a case study. IEEE Access, 10:53208–53224, 2022.

[3] Zheng Li and Francisco Millar-Bilbao. Characterizing the cloud's outbound network latency: An experimental and modeling study. In 2020 IEEE Cloud Summit, pages 172–173, 2020.

[4] Carla Mouradian, Diala Naboulsi, Sami Yangui, Roch H. Glitho, Monique J. Morrow, and Paul A. Polakos. A comprehensive survey on fog computing: State-of-the-art and research challenges. IEEE Communications Surveys and Tutorials, 20(1):416–464, 2018.

[5] Feyza Yildirim Okay and Suat Ozdemir. Routing in fog-enabled IoT platforms: A survey and an SDN-based solution. IEEE Internet of Things Journal, 5(6):4871–4889, 2018.

[6] István Pelle, János Czentye, János Dóka, and Balázs Sonkoly. Towards latency sensitive cloud native applications: A performance study on AWS. In 2019 IEEE 12th International Conference on Cloud Computing (CLOUD), pages 272–280, 2019.

[7] U. Tomer and P. Gandhi. An enhanced software framework for improving QoS in IoT. Engineering, Technology and Applied Science Research, 12(5):9172–9177, Oct. 2022.

[8] Sami Yangui, Pradeep Ravindran, Ons Bibani, Roch H. Glitho, Nejib Ben Hadj-Alouane, Monique J. Morrow, and Paul A. Polakos. A platform as-a-service for hybrid cloud/fog environments. In 2016 IEEE International Symposium on Local and Metropolitan Area Networks (LANMAN), pages 1–7, 2016.

Main activities
  • Explore the state of the art of IoT/Fog emulation/simulation platforms
  • Integrate an IoT solution in a Fog architecture platform
  • Propose a profile and a classification of latency issues
  • Propose an innovative way to optimize resource placement, taking into account latency metrics and control plane capabilities

Skills
  • A master's degree in distributed systems, cloud computing, and/or networking
  • Good knowledge of distributed systems
  • Good programming skills (e.g., C++ and Python)
  • Basic knowledge of simulation
  • Excellent communication and writing skills in English (Note that knowledge of French is appreciated but not required for this position)

  • Knowledge of the following technologies is not mandatory but will be considered a plus:

  • Cloud resource scheduling
  • Software-defined networking (SDN), OpenFlow
  • Revision control systems: git, svn
  • Linux distributions: Debian, Ubuntu

Benefits package
  • Subsidized meals
  • Partial reimbursement of public transport costs
  • Possibility of teleworking (90 days per year) and flexible organization of working hours
  • Partial payment of insurance costs

Remuneration

    Monthly gross salary of 2,051 euros for the first and second years and 2,158 euros for the third year.

    General Information
  • Theme/Domain : Distributed Systems and middleware / System & Networks (BAP E)

  • Town/city : Rennes

  • Inria Center : Centre Inria de l'Université de Rennes
  • Starting date : 2023-09-01
  • Duration of contract : 3 years
  • Deadline to apply : 2023-03-30

Contacts
  • Inria Team : MYRIADS
  • PhD Supervisor : Lemercier François / [email protected]

The keys to success

    Doing a PhD is a job unlike any other. Please read this document carefully to decide whether a PhD is the right career move for you:

    https://medium.com/great-research/do-you-need-a-ph-d-f78d2fb0f286

    About Inria

    Inria is the French national research institute dedicated to digital science and technology. It employs 2,600 people. Its 200 agile project teams, generally run jointly with academic partners, include more than 3,500 scientists and engineers working to meet the challenges of digital technology, often at the interface with other disciplines. The Institute also employs numerous talents in over forty different professions. 900 research support staff contribute to the preparation and development of scientific and entrepreneurial projects that have a worldwide impact.

    Instructions to apply

    Please submit online: your resume, a cover letter, and letters of recommendation, if applicable.

    For more information, please contact [email protected]

    Defence Security : This position is likely to be situated in a restricted area (ZRR), as defined in Decree No. 2011-1425 relating to the protection of national scientific and technical potential (PPST). Authorisation to enter an area is granted by the director of the unit, following a favourable Ministerial decision, as defined in the decree of 3 July 2012 relating to the PPST. An unfavourable Ministerial decision in respect of a position situated in a ZRR would result in the cancellation of the appointment.

    Recruitment Policy : As part of its diversity policy, all Inria positions are accessible to people with disabilities.

    Warning : you must enter your e-mail address in order to save your application to Inria. Applications must be submitted online on the Inria website. Processing of applications sent from other channels is not guaranteed.
