Senior Assoc. Prof. Kim Phuc Tran

ENSAIT, GEMTEX, University of Lille, France & UDA & IAD, Vietnam

Speaker

Kim Phuc Tran is currently a Senior Associate Professor (Maître de Conférences HDR, equivalent to a UK Reader) of Artificial Intelligence and Data Science at ENSAIT and the GEMTEX laboratory, University of Lille, France. He received an Engineer's degree and a Master of Engineering degree in Automated Manufacturing. He obtained a Ph.D. in Automation and Applied Informatics from the University of Nantes, and an HDR (Doctor of Science, or Dr. Habil.) in Computer Science and Automation from the University of Lille, France. He has published more than 72 papers in peer-reviewed international journals and the proceedings of international conferences, and has edited 3 books with Springer Nature and Taylor & Francis. He serves as an Associate Editor, Editorial Board Member, and Guest Editor for several international journals, including IEEE Transactions on Intelligent Transportation Systems and Engineering Applications of Artificial Intelligence.

Kim Phuc Tran has supervised 12 Ph.D. students and 3 Postdocs. As project coordinator (PI), he led a national project on Healthcare Systems with Federated Learning, and he has been or is involved (as PI, co-PI, or member) in 13 national and European projects. He is an expert and evaluator for the Public Service of Wallonia (SPW-EER), Belgium; the Natural Sciences and Engineering Research Council of Canada; ANRT (Association nationale de la recherche et de la technologie); and CY Cergy Paris University, France. In recognition of his outstanding scientific achievements, he received the Award for Scientific Excellence (Prime d’Encadrement Doctoral et de Recherche) from the French Ministry of Higher Education, Research and Innovation for the four-year period from 2021 to 2025.

Since 2017, he has been the Senior Scientific Advisor at Dong A University and the International Research Institute for Artificial Intelligence and Data Science (IAD), Danang, Vietnam, where he holds the International Chair in Data Science and Explainable Artificial Intelligence. His research interests include Explainable, Trustworthy, and Transparent Artificial Intelligence; Ethical and Human-centered Artificial Intelligence; Safety and Reliability of Artificial Intelligence; Statistical Computing; Intelligent Decision Support Systems; Digital Twins; and Applications of AI, Edge Computing, and Data Science in Industry 5.0.

Research Interests:

  • Explainable, Trustworthy, and Transparent Artificial Intelligence: Self-Supervised Learning, Anomaly Detection, Federated Learning, Federated Reinforcement Learning, Quantum Machine Learning, Inverse Reinforcement Learning
  • Ethical and Human-centered Artificial Intelligence: Embedded AI, Wearable AI Devices, Human-Centered Design to Address Biases in AI, Augmented Intelligence, Human-Robot Relations and Collaborations, Human Impact, Augmenting Human Capabilities and Intelligence
  • Safety and Reliability of Artificial Intelligence: Adversarial Machine Learning, Detecting Poisoning Attacks, Cybersecurity for AI Systems, Blockchain Empowered Federated Learning, Consensus Protocol, Evolutionary Computing, Swarm Algorithms
  • Trustworthy AI for Robotics and Autonomous Systems: Human-centered AI for Trustworthy Robots and Autonomous Systems; Ethical Robotics, Autonomous Systems, and AI; Responsible AI for Long-term Trustworthy Autonomous Systems
  • Statistical Computing: Statistical Process Monitoring, Advanced Control Charts, Quality Control, Interpreting Out-of-control Signals using Machine Learning, Control Chart Pattern Recognition (CCPR) with Machine Learning, Screening, Early Detection, and Monitoring of Infectious Diseases
  • Intelligent Decision Support Systems: Embedding Domain Knowledge in Machine Learning, Clinical Decision Support Systems, Supply Chain Optimization, Production Optimization, Demand Forecasting, Cybersecurity for Industrial Control Systems, Fault Detection and Diagnostics, Predictive Maintenance, Natural Language Processing for Fashion Trend Detection
  • Digital Twins: Digital Twins in Healthcare, Digital Twin Applications for Production Optimization, Digital Twin-driven Smart Manufacturing
  • Applications of AI, Edge Computing, and Data Science in Industry 5.0: Digital Transition and AI, Twin Green and Digital Transition, AI for Health and Wellbeing, Smart Healthcare, Smart Manufacturing, Workplace Safety Wearables, Reliability Engineering, AI-aided Knowledge Discovery, Sustainable Fashion.

 

Title: Explainable and Trustworthy Artificial Intelligence for Smart Healthcare System: A Federated Learning Approach

Abstract: 

The growing global population has a significant impact on many sectors, including the labor force, healthcare, and the global economy. Healthcare is among the sectors most affected, owing to the increasing demand for resources such as doctors, nurses, equipment, and healthcare facilities. To tackle these issues and offer better care to patients, intelligent systems have been adopted to enhance decision-making, management, prognosis, and diagnosis. Among such systems, those based on deep learning (DL), a subclass of machine learning (ML), have outperformed many traditional statistical and ML systems thanks to their robustness and their ability to automatically discover and learn the features relevant to a given task. The use of DL has therefore seen a steady increase across many applications. Nevertheless, training DL models usually relies on a single centralized server, which brings several challenges: (1) apart from a few large enterprises, most small enterprises have limited high-quality data, insufficient to train data-hungry DL models; (2) access to data, which is vital for these systems, often raises privacy concerns, since the collection and analysis of sensitive patient information must be done in a secure and ethical manner to protect individual privacy rights; (3) high communication costs and substantial computational resources are required; and (4) the large number of trainable parameters makes DL outcomes hard to explain, yet explainability is required in applications such as healthcare. Compared to centralized ML, federated learning (FL) improves both privacy and communication costs: clients collaboratively train a joint model without directly sharing their raw data. By keeping data distributed locally, FL minimizes privacy breaches and safeguards sensitive information, enabling collaborative model training while reducing the risk of unauthorized access and data breaches.
Additionally, FL promotes data diversity and scalability by involving multiple sources in joint model training, and it decreases communication costs by sharing only model updates instead of entire datasets. However, FL brings its own challenges. Heterogeneous local data across clients makes it difficult to train a high-performing, robust global model. Sharing updates (hundreds of thousands of parameters) still incurs substantial communication costs. Moreover, the distributed nature and local access control of data in FL make it more vulnerable to malicious attacks, and explaining DL results remains difficult: methods need to be developed to bring trust, accountability, and transparency to sensitive applications such as healthcare. The aim of this project is therefore to create robust frameworks that are secure, high-performing, and privacy-friendly in federated settings. These frameworks are specifically designed for end-to-end healthcare applications and account for non-identically distributed data among FL clients to achieve robustness, thereby enhancing the overall system's resilience and effectiveness. We also propose a methodology for detecting anomalies in federated settings, particularly in applications with little available data for the abnormal class. Furthermore, because FL clients are usually resource-constrained, with limited computation and communication resources, we propose a lightweight framework (in terms of the number of trainable parameters) to support efficient computation and communication in a federated setting. Additionally, to explain the outcomes of DL models, which are usually hard to interpret because of their large number of parameters, we propose model-agnostic explainable AI modules.
Moreover, to protect the proposed frameworks against cyberattacks such as poisoning attacks, we propose a framework in federated settings that makes the proposed healthcare frameworks more secure and trustworthy. Finally, through experimental analysis on baseline datasets for one of the most common health conditions, i.e., cardiovascular disease (arrhythmia detection, ECG anomaly detection), and for human activity recognition (used to supplement cardiovascular disease detection), we show the superiority of the proposed frameworks over state-of-the-art approaches.
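The collaborative training loop described in the abstract can be illustrated with a minimal federated averaging sketch. This is a generic, simplified illustration (a linear model with squared loss and simulated clients), not the project's actual implementation; all function names and parameters below are hypothetical.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training: a few epochs of gradient descent
    on a linear model (squared loss), starting from the current global
    weights. The raw data (X, y) never leaves the client."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fed_avg(client_data, rounds=20, dim=2):
    """Federated averaging: each round, every client trains locally and
    the server averages the returned weights (weighted by sample count).
    Only model parameters are communicated, never raw data."""
    global_w = np.zeros(dim)
    for _ in range(rounds):
        updates, sizes = [], []
        for X, y in client_data:
            updates.append(local_update(global_w, X, y))
            sizes.append(len(y))
        # Server-side aggregation step: weighted average of client models.
        global_w = np.average(updates, axis=0, weights=np.array(sizes, float))
    return global_w

# Simulated example: two clients whose local data come from the same
# underlying model, so the federated model should recover its weights.
rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0])
clients = [(X := rng.normal(size=(n, 2)), X @ true_w) for n in (50, 80)]
learned_w = fed_avg(clients)
```

In a real deployment, the updates would be deep-network weights or gradients, and the server-side aggregation step is precisely where the secure aggregation and poisoning defenses discussed in the abstract would intervene.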

Keywords: Federated Learning, Edge Computing, Healthcare, Privacy, Security, Explainable Artificial Intelligence, Explainable Anomaly Detection, Embedded Artificial Intelligence, Clinical Decision Support Systems, Safety and Reliability of Artificial Intelligence, Poisoning Attacks, Data Poisoning, Model Poisoning, Byzantine Attacks.

 

Google Scholar: https://scholar.google.fr/citations?hl=en&user=uGv7zzQAAAAJ

Researchgate: https://www.researchgate.net/profile/Kim-Phuc-Tran

Personal webpage: https://www.gemtex.fr/gemtex-members/phuc-tran/