ENSAIT, GEMTEX, University of Lille, France & UDA & IAD, Vietnam
Speaker
Dr. habil. Kim Phuc Tran, PhD, Senior Member of IEEE, is a French–Vietnamese senior expert in Explainable Artificial Intelligence (XAI), scientific advisor, and venture builder, with over 17 years of experience supporting industrial innovation, research evaluation, and decision-support systems across Europe, Asia, and broader international ecosystems.
He is currently a Senior Associate Professor (HDR) in Artificial Intelligence and Data Science at ENSAIT – University of Lille (France) and a senior researcher at the GEMTEX laboratory. Since 2017, he has also served as Senior Scientific Advisor at Dong A University and the International Research Institute for Artificial Intelligence and Data Science (IAD), Vietnam, where he contributes to long-term AI research, innovation, and technology-transfer strategies at the interface between Europe and Southeast Asia.
In 2018, Dr. Tran was elected President of the Permanent External Scientific Advisory Board of Dong A University, positioning him at the highest strategic level of research and innovation governance. In this role, he oversees research quality, international partnerships, and strategic alignment with European and Asian industrial, regulatory, and societal priorities.
Dr. Tran is widely recognized as an independent expert and evaluator for major national and international research and innovation programs in France, Belgium, Canada, Israel, and other advanced research ecosystems. These activities span public research agencies, national funding bodies, doctoral–industrial collaboration schemes, and strategic university initiatives. In this capacity, he is regularly entrusted with the evaluation of complex and high-risk AI projects, assessing whether ambitious scientific and technical visions can realistically translate into deployable, trustworthy, and economically viable technologies under real-world industrial, regulatory, and market constraints.
Through these expert roles, he has developed a rare panoramic understanding of what differentiates investable and scalable AI technologies from purely academic prototypes, across multiple national innovation systems and funding cultures.
In parallel, Dr. Tran has accumulated extensive hands-on industrial experience as a long-term AI expert and strategic advisor for multiple European industrial groups, technology-oriented SMEs, and innovation-driven organizations operating in sectors such as advanced manufacturing, smart textiles, industrial analytics, and software-intensive systems. His contributions have supported the deployment of AI-driven analytics, explainable decision-support systems, quality monitoring solutions, and data-centric industrial platforms in real operational environments.
Alongside his research, evaluation, and industrial advisory activities, Dr. Tran has built a strong and sustained record in doctoral and postdoctoral supervision. To date, he has supervised and co-supervised more than 20 PhD candidates and postdoctoral researchers across Europe and Asia, covering topics such as Explainable and Trustworthy AI, federated and edge intelligence, industrial analytics, hybrid (physics–data) modeling, and AI-driven decision-support systems. Among them, 12 researchers have successfully completed their training and have since assumed key academic and strategic industrial positions, including appointments as Associate Professors, Directors of AI and Data Science, and senior technical leaders within large international industrial groups and technology-driven organizations. This track record reflects his ability to mentor researchers not only toward scientific excellence, but also toward leadership roles at the interface of research, industry, and strategic decision-making.
Beyond expert evaluation and industrial advisory, Dr. Tran is the founder and scientific director of the International Chair in Data Science & Explainable AI (XAI Chair) at Dong A University. Conceived as a startup-oriented innovation platform inspired by European Industry 5.0 models and adapted to the fast-growing Asian innovation landscape, the XAI Chair leverages his dense European industrial network and strong Asian academic–industrial connections to generate applied XAI technologies, intellectual property, spin-offs, and cross-regional innovation pipelines connecting Europe and Asia with investors.
With over 100 peer-reviewed publications, multiple edited volumes with leading international publishers, and leadership roles in multi-million-euro collaborative research and innovation projects, Dr. Tran combines deep technical authority in Explainable and Trustworthy AI with proven experience in expert evaluation, cross-regional industrial collaboration, doctoral mentorship, and venture-oriented innovation.
His core mission is to transform explainable and trustworthy AI into deployable, credible, and investable technologies, aligned with the regulatory, industrial, and human constraints of both European and Asian markets.
Research Interests:
- Human-centered Artificial Intelligence: Human-Centered Edge AI, Embedded AI, Human-Centered Design to Address Biases in AI, Augmented Intelligence, Explainable Ambient Intelligence, Human-Robot Relations and Collaboration, Human Impact, Augmenting Human Capabilities and Intelligence, Trustworthy and Transparent Artificial Intelligence, Self-Supervised Learning, Explainable-by-Design Anomaly Detection, Generative AI, Federated Learning, Federated Reinforcement Learning, 1-bit Machine Learning Models, Multimodal Deep Learning, Quantum Machine Learning, Physics-Informed Machine Learning, Inverse Reinforcement Learning, Analog AI
- Safety and Reliability of Artificial Intelligence Systems: Adversarial Attack Detection, Detecting Poisoning Attacks, Cybersecurity for AI Systems, Blockchain-Empowered Federated Learning, Consensus Protocols, Hardware Bit-Flipping Attacks, Ethical Artificial Intelligence, Trustworthy AI for Robotics and Autonomous Systems, Human-Centered AI for Trustworthy Robots and Autonomous Systems, Ethical Robotics, Autonomous Systems, and AI, Responsible AI for Long-Term Trustworthy Autonomous Systems
- Smart Healthcare Technologies: Digital Twins in Healthcare, Clinical Decision Support Systems, AI for Health and Wellbeing, Smart Healthcare, Stroke Prediction
- Intelligent Decision Support Systems: Embedding Domain Knowledge into Machine Learning, Supply Chain Optimization, Production Optimization, Demand Forecasting, Cybersecurity for Industrial Control Systems, Fault Detection and Diagnostics, Predictive Maintenance, Natural Language Processing for Fashion Trend Detection
- Applications of AI, Edge Computing, Digital Twins, and Data Science in Industry 5.0: Digital Transition and AI, Twin Green and Digital Transition, Digital Twin-Driven Smart Manufacturing, Digital Twin Applications for Production Optimization, Smart Manufacturing, Wearable AI Devices, Workplace Health and Safety, Reliability Engineering, AI-Aided Knowledge Discovery, Sustainable Fashion, Energy-Harvesting-Based IoT Wearables, Evolutionary Computing, Swarm Algorithms
- Statistical Computing: Statistical Process Monitoring, Advanced Control Charts, Quality Control, Interpreting Out-of-Control Signals Using Machine Learning, Control Chart Pattern Recognition (CCPR) with Machine Learning, Screening, Early Detection, and Monitoring of Infectious Diseases
- Quantum Consciousness and Its Applications: Quantum Mind Theory, Quantum Psychology, Quantifying Consciousness in Artificial Intelligence, Quantum Artificial Intelligence
---------------------------
Title: Explainable and Trustworthy Artificial Intelligence for Smart Healthcare Systems: A Federated Learning Approach
Abstract:
The growing global population has a significant impact on many sectors, including the labor force, healthcare, and the global economy. Healthcare is among the most affected, given the rising demand for resources such as doctors, nurses, equipment, and facilities. Intelligent systems have been incorporated into decision-making, management, prognosis, and diagnosis to address these issues and offer better care to patients. Among such systems, those based on deep learning (DL), a subclass of machine learning (ML), have outperformed many traditional statistical and ML systems thanks to their robustness and their ability to automatically discover and learn the features relevant to a given task. The use of DL has therefore increased steadily across many applications.

Nevertheless, the training of DL models usually relies on a single centralized server, which brings several challenges: (1) apart from some big enterprises, most small enterprises have limited high-quality data, insufficient to train data-hungry DL models; (2) access to data, which is vital for these systems, often raises privacy concerns, as sensitive patient information must be collected and analyzed securely and ethically to protect individual privacy rights; (3) centralized training requires high communication costs and substantial computation resources; and (4) the large number of trainable parameters makes the outcomes of DL hard to explain, which is required in applications such as healthcare.

Compared to centralized ML, federated learning (FL) improves both privacy and communication costs: clients collaboratively train a joint model without directly sharing their raw data. By keeping data distributed locally, FL minimizes privacy breaches and safeguards sensitive information, reducing the risk of unauthorized access and data leakage. It also promotes data diversity and scalability by involving multiple sources in joint model training, and it lowers communication costs by sharing only model updates instead of entire datasets.
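For illustration only, the following minimal Python/NumPy sketch shows the FedAvg-style loop described above: each client updates the model on its own data, and the server averages the returned weights. The logistic-regression client and the names local_update and fed_avg are assumptions made for this sketch, not the project's actual implementation.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=1):
    # One client's local training: a few gradient steps on its private
    # data (here, plain logistic regression). Raw data never leaves the client.
    w = weights.copy()
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))   # sigmoid predictions
        w -= lr * X.T @ (p - y) / len(y)   # gradient of the log-loss
    return w

def fed_avg(global_w, client_data, rounds=10):
    # Server loop: broadcast the global weights, collect each client's
    # locally updated weights, and average them, weighted by data size.
    for _ in range(rounds):
        updates = [local_update(global_w, X, y) for X, y in client_data]
        sizes = [len(y) for _, y in client_data]
        global_w = np.average(updates, axis=0, weights=sizes)
    return global_w
```

Note that only weight vectors cross the network; the raw patient data stays on each client, which is the core privacy argument for FL. As discussed next, however, these updates are themselves sizeable and attackable.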
However, FL brings its own challenges. Heterogeneous local data across clients makes it difficult to train a high-performing and robust global model. Sharing updates (hundreds of thousands of parameters) still incurs significant communication costs. The distributed nature and access control of local data also make FL more vulnerable to malicious attacks. Moreover, explaining the results of DL remains an open problem, and methods must be developed to bring trust, accountability, and transparency to sensitive applications such as healthcare.

The aim of this project is therefore to create robust frameworks that are secure, high-performing, and privacy-friendly in federated settings. These frameworks are specifically designed for end-to-end healthcare applications and account for non-identically distributed data among FL clients in order to achieve robustness. We also propose a methodology for detecting anomalies in federated settings, particularly in applications with limited data for the abnormal class. Furthermore, since FL clients are usually resource-constrained, with limited computation and communication capacity, we propose a lightweight framework (in terms of the number of trainable parameters) to support efficient computation and communication in a federated setting. To explain the outcomes of DL models, which are usually opaque because of their large number of parameters, we propose model-agnostic explainable AI modules. In addition, to protect the proposed frameworks against cyber attacks such as poisoning attacks, we propose a defense framework for federated settings that makes the proposed healthcare frameworks more secure and trustworthy. Finally, through experimental analysis on baseline datasets for one of the most common health conditions, cardiovascular disease (arrhythmia detection, ECG anomaly detection), and for human activity recognition (used to supplement cardiovascular disease detection), we show the superiority of the proposed frameworks over state-of-the-art work.
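As one generic example of what a model-agnostic explanation module can look like, the sketch below implements permutation importance: it treats the trained model purely as a black-box prediction function and scores each input feature by how much a performance metric drops when that feature is shuffled. This is a standard post-hoc technique shown for illustration, not the specific XAI module proposed in the project; predict_fn and metric are assumed callables (with higher metric values meaning better performance).

```python
import numpy as np

def permutation_importance(predict_fn, X, y, metric, n_repeats=5, seed=0):
    # Model-agnostic: only predict_fn(X) is needed, never the model internals.
    # A feature is important if shuffling it (breaking its link to the label)
    # degrades the metric.
    rng = np.random.default_rng(seed)
    baseline = metric(y, predict_fn(X))
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            rng.shuffle(X_perm[:, j])   # shuffle one feature column in place
            drops.append(baseline - metric(y, predict_fn(X_perm)))
        importances[j] = np.mean(drops)  # mean score drop per feature
    return importances
```

Because the module only needs predict_fn, it can be applied unchanged to any centrally or federatedly trained model, for example to check which input features an arrhythmia detector actually relies on.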
Keywords: Federated Learning, Edge Computing, Healthcare, Privacy, Security, Explainable Artificial Intelligence, Explainable Anomaly Detection, Embedded Artificial Intelligence, Clinical Decision Support Systems, Safety and Reliability of Artificial Intelligence, Poisoning Attacks, Data Poisoning, Model Poisoning, Byzantine Attacks.
Google Scholar: https://scholar.google.fr/citations?hl=en&user=uGv7zzQAAAAJ
ResearchGate: https://www.researchgate.net/profile/Kim-Phuc-Tran
Personal webpage: https://www.gemtex.fr/gemtex-members/phuc-tran/