Assistant Professor Rehmat Ullah Khan - Cardiff Metropolitan University, UK.
Rehmat Ullah is an Assistant Professor at the School of Technologies, Cardiff Metropolitan University, UK. He earned his B.S. and M.S. degrees in computer science from COMSATS University Islamabad, Pakistan, and his PhD in Electronics and Computer Engineering from Hongik University, South Korea. Previously, he worked as an Assistant Professor at Gachon University, South Korea, and as a postdoctoral researcher at the University of St Andrews and Queen’s University Belfast, UK. He has served as a TPC member, keynote speaker, and session chair for several flagship conferences, including ACM ICN 2022, ACM IMC 2018, and ICC 2023.
His work has been published in leading venues such as IEEE Communications Magazine, IEEE Internet of Things Journal, IEEE Transactions on Network Science and Engineering, IEEE Wireless Communications Magazine, IEEE Network Magazine, Journal of Network and Computer Applications, and Future Generation Computer Systems. He currently holds six patents. In 2022, Dr. Rehmat was recognized as a Global Talent by the Royal Academy of Engineering, UK.
- Distributed Systems
- Edge Computing
- Edge Intelligence/EdgeAI
- Federated Learning
- Internet of Things
- Information Centric Networking
Title: Edge Machine Learning: Challenges and Opportunities
Machine learning (ML) techniques make use of data to provide valuable services in many areas such as retail, finance, healthcare, media, and travel. Standard ML techniques require centralising the training data in cloud data centres, where abundant computing resources (e.g., GPUs) allow large amounts of data to be analysed to obtain useful information for the detection, classification, and prediction of future events with high accuracy. However, the proliferation of Internet of Things (IoT) devices, ranging from smartphones, wearable sensors, and surveillance cameras to drones and autonomous vehicles, has resulted in a vast amount of data being generated; it is anticipated that 180 zettabytes of data will be generated by 2025. Traditionally, data from all of these devices is collected and sent to a central server, where it is used to train a powerful ML model. Due to network bandwidth, latency, and data privacy concerns, however, sending all of this data to a remote cloud is impractical and often unnecessary. Furthermore, in many applications user data contains sensitive personal information, making privacy another reason to avoid offloading data to a centralised server.
Federated Learning (FL) provides privacy by design: an ML technique that learns collaboratively across multiple distributed devices, processing data locally so that raw data is never sent to a central server. However, given the limited resources available on many devices, performing FL on them can be impractical due to increased training times. Moreover, when training an ML model such as a Deep Neural Network (DNN), massive numbers of parameter updates need to be synchronised across distributed devices, creating potential congestion and slowing down the entire training process, particularly in wireless scenarios. Efficient computation mechanisms and reduced communication time during FL training are therefore of the utmost importance for fast convergence of ML models. In this talk, I will discuss distributed ML, with a focus on FL for edge computing systems. I will begin with a brief overview of cloud computing, edge computing, and FL, explain how FL solves the data-island problem in IoT, and survey state-of-the-art advances in FL. I will then discuss edge federated learning applications with open-source platforms, current trends, and recent developments, particularly computation- and communication-efficient solutions for latency-critical applications such as mobile robots and autonomous cars. Finally, open research challenges and potential solutions will be presented.
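The FL workflow described above (local training on each device, followed by server-side aggregation of model parameters rather than raw data) can be illustrated with a minimal federated averaging (FedAvg) sketch. The linear-regression task, client data, and all function names below are illustrative assumptions, not material from the talk:

```python
import numpy as np

# Minimal FedAvg sketch on a synthetic linear-regression task.
# Assumption: 5 clients, each with private data that never leaves the "device".
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])  # hypothetical ground-truth model

def make_client_data(n=50):
    X = rng.normal(size=(n, 2))
    y = X @ true_w + 0.1 * rng.normal(size=n)
    return X, y

clients = [make_client_data() for _ in range(5)]

def local_update(w, X, y, lr=0.1, epochs=5):
    # Each client runs a few epochs of gradient descent locally;
    # only the updated weights (not X, y) are returned to the server.
    w = w.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

w_global = np.zeros(2)
for _ in range(20):
    # Server broadcasts the global model; clients train and reply.
    local_ws = [local_update(w_global, X, y) for X, y in clients]
    # Aggregation step: equally weighted average of client models.
    w_global = np.mean(local_ws, axis=0)

print(w_global)  # converges near true_w without pooling any client data
```

The repeated broadcast and synchronisation of weights in each round is exactly the communication cost the talk highlights as a bottleneck in wireless edge settings.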