Juntao Jiang
PhD Student
Institute of Cyber-Systems and Control, Zhejiang University, China
Biography
I am pursuing my Ph.D. degree in the College of Control Science and Engineering, Zhejiang University, Hangzhou, China. My major research interests include computer vision, medical image and video analysis, and AIGC (AI-generated content).
Research and Interests
- Computer Vision
- Medical Image and Video Analysis
- AIGC
Publications
- Muxuan Gao, Juntao Jiang, Shuangming Lei, Huifeng Wu, Jun Chen, and Yong Liu. OnSort: An O(n) Comparison-Free Sorter for Large-Scale Dataset with Parallel Prefetching and Sparse-Aware Mechanism. IEEE Transactions on Circuits and Systems II: Express Briefs, 72:933-937, 2025.
[BibTeX] [Abstract] [DOI] [PDF] This brief proposes OnSort, a parallel comparison-free sorting architecture with O(n) time complexity, utilizing the SRAM structure to support large-scale datasets efficiently. The performance of existing comparison-free sorters is limited by uneven value distribution and variable element numbers. To address these issues, we introduce a parallel prefetching strategy to accelerate the indexing process and a sparse-aware mechanism to narrow the indexing search range. Furthermore, OnSort implements streaming execution through a pipelined design, thereby optimizing the previously overlooked latency of the counting phase. Experimental results show that, under the configuration of sorting 65,536 16-bit data elements, OnSort achieves a 1.97× speedup and a 22.6× throughput-to-area ratio compared to the existing design. The source code is available at https://github.com/gmx-hub/OnSort.
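OnSort itself is a hardware architecture; as a purely illustrative software analogue (not the paper's implementation), the comparison-free O(n) idea corresponds to counting sort, where a counting phase tallies each possible value and an indexing phase emits values in order:

```python
def counting_sort(data, bits=16):
    """O(n) comparison-free sort: count occurrences of each possible
    value, then emit each value as many times as it was counted.
    Illustrative software analogue only; bucket count is 2**bits."""
    counts = [0] * (1 << bits)          # one bucket per possible value
    for v in data:                      # counting phase: O(n)
        counts[v] += 1
    out = []
    for value, c in enumerate(counts):  # indexing phase: scan buckets in order
        out.extend([value] * c)
    return out
```

The paper's contributions (parallel prefetching, the sparse-aware mechanism, pipelined streaming) address the cost of the bucket scan in hardware, which this sequential sketch does not model.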
@article{gao2025onsort, title = {OnSort: An O(n) Comparison-Free Sorter for Large-Scale Dataset with Parallel Prefetching and Sparse-Aware Mechanism}, author = {Muxuan Gao and Juntao Jiang and Shuangming Lei and Huifeng Wu and Jun Chen and Yong Liu}, year = 2025, journal = {IEEE Transactions on Circuits and Systems II: Express Briefs}, volume = 72, pages = {933-937}, doi = {10.1109/TCSII.2025.3570797}, abstract = {This brief proposes OnSort, a parallel comparison-free sorting architecture with O(n) time complexity, utilizing the SRAM structure to support large-scale datasets efficiently. The performance of existing comparison-free sorters is limited by uneven value distribution and variable element numbers. To address these issues, we introduce a parallel prefetching strategy to accelerate the indexing process and a sparse-aware mechanism to narrow the indexing search range. Furthermore, OnSort implements streaming execution through a pipelined design, thereby optimizing the previously overlooked latency of the counting phase. Experimental results show that, under the configuration of sorting 65,536 16-bit data elements, OnSort achieves a 1.97× speedup and a 22.6× throughput-to-area ratio compared to the existing design. The source code is available at https://github.com/gmx-hub/OnSort.} }
- Juntao Jiang, Yali Bi, Chunlin Zhou, Yong Liu, and Jiangning Zhang. Semi-CervixSeg: A Multi-Stage Training Strategy for Semi-Supervised Cervical Segmentation. In 2025 IEEE International Symposium on Biomedical Imaging (ISBI), 2025.
[BibTeX] [Abstract] [DOI] [PDF] Image segmentation plays a critical role in computer-aided diagnosis and treatment planning for cervical cancer. Obtaining a large number of labeled images for supervised cervical segmentation is often labor-intensive and time-consuming. In this paper, we propose a multi-stage semi-supervised learning framework (Semi-CervixSeg) to address the cervical segmentation task in ultrasound images for the Fetal Ultrasound Grand Challenge: Semi-Supervised Cervical Segmentation in ISBI 2025. Specifically, in the initial stage, we utilize unlabeled data through a multi-view random augmentation strategy, using consistency constraints and a contrastive learning method. Subsequently, a progressive multi-stage training strategy is adopted to generate and optimize pseudo-labels, further improving segmentation results. Experimental results demonstrate that the proposed method significantly improves segmentation performance compared with supervised methods. As a technical report for the challenge, this paper elaborates on our methodology and experimental findings in detail. The code can be accessed at https://github.com/juntaoJianggavin/Semi-CervixSeg.
@inproceedings{jiang2025sca, title = {Semi-CervixSeg: A Multi-Stage Training Strategy for Semi-Supervised Cervical Segmentation}, author = {Juntao Jiang and Yali Bi and Chunlin Zhou and Yong Liu and Jiangning Zhang}, year = 2025, booktitle = {2025 IEEE International Symposium on Biomedical Imaging (ISBI)}, doi = {10.1109/ISBI60581.2025.10981295}, abstract = {Image segmentation plays a critical role in computer-aided diagnosis and treatment planning for cervical cancer. Obtaining a large number of labeled images for supervised cervical segmentation is often labor-intensive and time-consuming. In this paper, we propose a multi-stage semi-supervised learning framework (Semi-CervixSeg) to address the cervical segmentation task in ultrasound images for the Fetal Ultrasound Grand Challenge: Semi-Supervised Cervical Segmentation in ISBI 2025. Specifically, in the initial stage, we utilize unlabeled data through a multi-view random augmentation strategy, using consistency constraints and a contrastive learning method. Subsequently, a progressive multi-stage training strategy is adopted to generate and optimize pseudo-labels, further improving segmentation results. Experimental results demonstrate that the proposed method significantly improves segmentation performance compared with supervised methods. As a technical report for the challenge, this paper elaborates on our methodology and experimental findings in detail. The code can be accessed at https://github.com/juntaoJianggavin/Semi-CervixSeg.} }
- Weixuan Liu, Bairui Zhang, Tao Liu, Juntao Jiang, and Yong Liu. Artificial Intelligence in Pancreatic Image Analysis: A Review. Sensors, 24:4749, 2024.
[BibTeX] [Abstract] [DOI] [PDF] Pancreatic cancer is a highly lethal disease with a poor prognosis. Its early diagnosis and accurate treatment mainly rely on medical imaging, so accurate medical image analysis is especially vital for pancreatic cancer patients. However, medical image analysis of pancreatic cancer is facing challenges due to ambiguous symptoms, high misdiagnosis rates, and significant financial costs. Artificial intelligence (AI) offers a promising solution by relieving medical personnel's workload, improving clinical decision-making, and reducing patient costs. This study focuses on AI applications such as segmentation, classification, object detection, and prognosis prediction across five types of medical imaging: CT, MRI, EUS, PET, and pathological images, as well as integrating these imaging modalities to boost diagnostic accuracy and treatment efficiency. In addition, this study discusses current hot topics and future directions aimed at overcoming the challenges in AI-enabled automated pancreatic cancer diagnosis algorithms.
@article{jiang2024aii, title = {Artificial Intelligence in Pancreatic Image Analysis: A Review}, author = {Weixuan Liu and Bairui Zhang and Tao Liu and Juntao Jiang and Yong Liu}, year = 2024, journal = {Sensors}, volume = 24, pages = {4749}, doi = {10.3390/s24144749}, abstract = {Pancreatic cancer is a highly lethal disease with a poor prognosis. Its early diagnosis and accurate treatment mainly rely on medical imaging, so accurate medical image analysis is especially vital for pancreatic cancer patients. However, medical image analysis of pancreatic cancer is facing challenges due to ambiguous symptoms, high misdiagnosis rates, and significant financial costs. Artificial intelligence (AI) offers a promising solution by relieving medical personnel's workload, improving clinical decision-making, and reducing patient costs. This study focuses on AI applications such as segmentation, classification, object detection, and prognosis prediction across five types of medical imaging: CT, MRI, EUS, PET, and pathological images, as well as integrating these imaging modalities to boost diagnostic accuracy and treatment efficiency. In addition, this study discusses current hot topics and future directions aimed at overcoming the challenges in AI-enabled automated pancreatic cancer diagnosis algorithms.} }
- Juntao Jiang, Mengmeng Wang, Huizhong Tian, Linbo Cheng, and Yong Liu. LV-UNet: A Lightweight and Vanilla Model for Medical Image Segmentation. In 2024 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), pages 4240-4246, 2024.
[BibTeX] [Abstract] [DOI] [PDF] While large models have achieved significant progress in computer vision, challenges such as optimization complexity, the intricacy of transformer architectures, computational constraints, and practical application demands highlight the importance of simpler model designs in medical image segmentation. This need is particularly pronounced in mobile medical devices, which require lightweight, deployable models with real-time performance. However, existing lightweight models often suffer from poor robustness across datasets, limiting their widespread adoption. To address these challenges, this paper introduces LV-UNet, a lightweight and vanilla model that leverages pre-trained MobileNetv3-Large backbones and incorporates fusible modules. LV-UNet employs an enhanced deep training strategy and switches to a deployment mode during inference by re-parametrization, significantly reducing parameter count and computational overhead. Experimental results on ISIC 2016, BUSI, CVC-ClinicDB, CVC-ColonDB, and Kvasir-SEG datasets demonstrate a better trade-off between performance and the computational load. The code will be released at https://github.com/juntaoJianggavin/LV-UNet.
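The "fusible modules" and deployment-mode switch rely on re-parametrization. A scalar sketch of the standard convolution + BatchNorm folding behind such techniques (the numbers and function below are illustrative, not LV-UNet's actual modules):

```python
import math

def fuse_conv_bn(w, b, gamma, beta, mean, var, eps=1e-5):
    """Fold a BatchNorm layer into the preceding convolution so that
    inference runs a single fused conv. Scalar toy version of the
    standard re-parametrization:
      train:  y = gamma * (w*x + b - mean) / sqrt(var + eps) + beta
      deploy: y = w_fused * x + b_fused   (identical outputs)"""
    scale = gamma / math.sqrt(var + eps)
    return w * scale, (b - mean) * scale + beta

# Illustrative parameters (not from the paper):
w_f, b_f = fuse_conv_bn(w=2.0, b=0.5, gamma=1.5, beta=0.1, mean=0.2, var=0.04)
```

Because the fused form is a single affine map, the BatchNorm layer disappears at deployment time, which is how re-parametrized models cut parameters and latency without changing outputs.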
@inproceedings{jiang2024lvu, title = {LV-UNet: A Lightweight and Vanilla Model for Medical Image Segmentation}, author = {Juntao Jiang and Mengmeng Wang and Huizhong Tian and Linbo Cheng and Yong Liu}, year = 2024, booktitle = {2024 IEEE International Conference on Bioinformatics and Biomedicine (BIBM)}, pages = {4240-4246}, doi = {10.1109/BIBM62325.2024.10822465}, abstract = {While large models have achieved significant progress in computer vision, challenges such as optimization complexity, the intricacy of transformer architectures, computational constraints, and practical application demands highlight the importance of simpler model designs in medical image segmentation. This need is particularly pronounced in mobile medical devices, which require lightweight, deployable models with real-time performance. However, existing lightweight models often suffer from poor robustness across datasets, limiting their widespread adoption. To address these challenges, this paper introduces LV-UNet, a lightweight and vanilla model that leverages pre-trained MobileNetv3-Large backbones and incorporates fusible modules. LV-UNet employs an enhanced deep training strategy and switches to a deployment mode during inference by re-parametrization, significantly reducing parameter count and computational overhead. Experimental results on ISIC 2016, BUSI, CVC-ClinicDB, CVC-ColonDB, and Kvasir-SEG datasets demonstrate a better trade-off between performance and the computational load. The code will be released at https://github.com/juntaoJianggavin/LV-UNet.} }
- Juntao Jiang, Xiyu Chen, Guanzhong Tian, and Yong Liu. VIG-UNET: Vision Graph Neural Networks for Medical Image Segmentation. In IEEE 20th International Symposium on Biomedical Imaging (ISBI), 2023.
[BibTeX] [Abstract] [DOI] [PDF] Deep neural networks have been widely used in medical image analysis and medical image segmentation is one of the most important tasks. U-shaped neural networks with encoder-decoder are prevailing and have succeeded greatly in various segmentation tasks. While CNNs treat an image as a grid of pixels in Euclidean space and Transformers recognize an image as a sequence of patches, graph-based representation is more generalized and can construct connections for each part of an image. In this paper, we propose a novel ViG-UNet, a graph neural network-based U-shaped architecture with the encoder, the decoder, the bottleneck, and skip connections. The downsampling and upsampling modules are also carefully designed. The experimental results on ISIC 2016, ISIC 2017 and Kvasir-SEG datasets demonstrate that our proposed architecture outperforms most existing classic and state-of-the-art U-shaped networks.
@inproceedings{jiang2023vig, title = {VIG-UNET: Vision Graph Neural Networks for Medical Image Segmentation}, author = {Juntao Jiang and Xiyu Chen and Guanzhong Tian and Yong Liu}, year = 2023, booktitle = {IEEE 20th International Symposium on Biomedical Imaging (ISBI)}, doi = {10.1109/ISBI53787.2023.10230496}, abstract = {Deep neural networks have been widely used in medical image analysis and medical image segmentation is one of the most important tasks. U-shaped neural networks with encoder-decoder are prevailing and have succeeded greatly in various segmentation tasks. While CNNs treat an image as a grid of pixels in Euclidean space and Transformers recognize an image as a sequence of patches, graph-based representation is more generalized and can construct connections for each part of an image. In this paper, we propose a novel ViG-UNet, a graph neural network-based U-shaped architecture with the encoder, the decoder, the bottleneck, and skip connections. The downsampling and upsampling modules are also carefully designed. The experimental results on ISIC 2016, ISIC 2017 and Kvasir-SEG datasets demonstrate that our proposed architecture outperforms most existing classic and state-of-the-art U-shaped networks.} }
