Email: shunliu@buffalo.edu
I am in my final year in the Department of Computer Science at Shanghai University of Finance and Economics. During my undergraduate studies, I had the privilege of working at SUNY Buffalo, Dartmouth College, Zhejiang Lab, and Cardinal Operations. My research focuses on automating Large Language Models (LLMs) for feature discovery in tabular data, and on vision foundation models with an emphasis on generalization and explainability.
Currently Working On:
Leveraging multimodal data complementarities for vision-language
task adaptation, e.g., VQA, visual grounding, etc.
Exploring real-time representation & rendering algorithms for
high-fidelity generation of, e.g., faces, heads, etc.
(* first author(s), ‡ corresponding author(s))
Nguyen Minh Thao Phan*, Cong-Tinh Dao*, Chenwei Wu, Jian-Zhe Wang, Shun Liu, Jun-En Ding, David Restrepo, Feng Liu, Fang-Ming Huang, Wen-Chih Peng‡
Accepted, CIKM'24 (Short Research Paper Track)
We propose MEDFuse, a Multimodal EHR Data Fusion framework that incorporates masked lab-test modeling and large language models (LLMs) to effectively integrate structured and unstructured medical data. MEDFuse leverages multimodal embeddings extracted from two sources: LLMs fine-tuned on free clinical text and masked tabular transformers trained on structured lab test results.
[arxiv]
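The fusion idea behind MEDFuse can be illustrated with a minimal late-fusion sketch. All names, dimensions, and the simple concatenate-and-project step below are illustrative assumptions for exposition, not the paper's actual architecture:

```python
import numpy as np

# Hypothetical late-fusion step: combine embeddings from two sources
# (LLM embeddings of clinical text, masked-transformer embeddings of
# structured lab tests) and project to label logits.
def fuse_embeddings(text_emb, lab_emb, w):
    # Concatenate the two modality embeddings along the feature axis.
    fused = np.concatenate([text_emb, lab_emb], axis=-1)
    # Linear projection to per-label logits.
    return fused @ w

rng = np.random.default_rng(0)
text_emb = rng.normal(size=(4, 8))  # stand-in for clinical-text embeddings
lab_emb = rng.normal(size=(4, 6))   # stand-in for lab-test embeddings
w = rng.normal(size=(8 + 6, 3))     # projection to 3 hypothetical labels
logits = fuse_embeddings(text_emb, lab_emb, w)
print(logits.shape)  # (4, 3)
```

In practice the projection would be a learned layer and each encoder would be trained on its own modality; the sketch only shows the shape of the fusion step.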
Shun Liu
arXiv, 2024
We propose a framework designed to reconcile the trade-off between model performance and interpretability. Our approach centers on modular operations over high-dimensional statistics, which enable end-to-end processing while preserving interpretability. By fusing diverse interpretability techniques with modularized data processing, the framework sheds light on the decision-making processes of complex models without compromising their performance.
[arxiv]
Shun Liu, Jianan Zhang, Ruocheng Song, Teik Toe Teoh‡
arXiv, 2024
We propose ADA-YOLO, a deep detector that leverages dynamic feature localization and parallel regression for computer vision tasks through an adaptive head module. Empirical experiments on the Blood Cell Count and Detection (BCCD) dataset show that ADA-YOLO outperforms YOLOv8 in mean average precision (mAP) while using less than one third of YOLOv8's memory.
[arxiv]
Email: kevinliuleo@gmail.com
WeChat: 18017622619