by Nguyen Hong Tan, Tran Manh Tuan, Pham Minh Chuan, Nguyen Duc Hoang, Le Quang Thanh, Le Hoang Son
Artificial Intelligence (AI) has been widely applied in healthcare to support clinicians in disease diagnosis and prognosis. It is well known that an accurate diagnosis must draw on multiple sources of evidence, such as clinical records, X-ray images, and IoT data, collectively referred to as multi-modal data. Despite the existence of various approaches for multi-modal medical data fusion, the development of comprehensive systems capable of integrating data from multiple sources and modalities remains a considerable challenge. Moreover, many machine learning models face difficulties in representation and computation due to the uncertainty and diversity of medical data. This study proposes a novel multi-modal fuzzy knowledge graph framework, called FKG-MM, which integrates multi-modal medical data from multiple sources and offers enhanced computational performance compared to unimodal approaches. The FKG-MM framework is built on the fuzzy knowledge graph model, which effectively represents and reasons over medical data in tabular form. Experiments on the well-known BRSET multi-modal diabetic retinopathy dataset validate that the feature selection method, when combining image features with tabular medical features, achieves the most reliable results among five fusion methods: Feature Selection, Tensor Product, Hadamard Product, Filter Selection, and Wrapper Selection. The experiments also confirm that combining image data with tabular medical data increases the accuracy of FKG-MM by 12–14% over related methods that diagnose from tabular data alone.
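To make the compared fusion operators concrete, the following minimal sketch illustrates how image features and tabular features can be combined by concatenation plus feature selection, by Hadamard product, and by tensor product. This is an illustrative assumption, not the paper's implementation: the feature dimensions, the random data, and the ANOVA-based SelectKBest scorer are all placeholders chosen for demonstration.

```python
# Illustrative sketch of three multi-modal fusion operators (assumed setup,
# not the FKG-MM code). Dimensions and the scoring function are hypothetical.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif

rng = np.random.default_rng(0)
n_samples = 200
img_feats = rng.normal(size=(n_samples, 64))   # e.g., embeddings of fundus images
tab_feats = rng.normal(size=(n_samples, 16))   # e.g., tabular clinical records
labels = rng.integers(0, 2, size=n_samples)    # placeholder binary diagnosis label

# 1) Feature-selection fusion: concatenate both modalities, then keep the
#    k most class-discriminative columns (ANOVA F-score here).
concat = np.hstack([img_feats, tab_feats])               # shape (200, 80)
fused_fs = SelectKBest(f_classif, k=32).fit_transform(concat, labels)

# 2) Hadamard-product fusion: element-wise product, which requires both
#    modalities in a common dimension (tabular features tiled here).
tab_proj = np.tile(tab_feats, 4)                         # shape (200, 64)
fused_hadamard = img_feats * tab_proj

# 3) Tensor-product fusion: per-sample outer product, flattened.
fused_tensor = np.einsum("ni,nj->nij", img_feats, tab_feats).reshape(n_samples, -1)

print(fused_fs.shape, fused_hadamard.shape, fused_tensor.shape)
# (200, 32) (200, 64) (200, 1024)
```

The shapes highlight the trade-off the abstract alludes to: the tensor product grows quadratically in the feature dimensions, whereas feature selection keeps only a compact, discriminative subset of the concatenated modalities.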