{"id":1063,"date":"2026-05-16T08:35:21","date_gmt":"2026-05-16T00:35:21","guid":{"rendered":"https:\/\/www.eutaboo.com\/index.php\/2026\/05\/16\/2026-05-16-%e5%8c%bb%e5%ad%a6%e5%9b%be%e5%83%8f%e5%88%86%e5%89%b2%e8%ae%ba%e6%96%87%e7%b2%be%e8%af%bb%ef%bc%9amed-disseg-%e4%b8%8e-spectraflow\/"},"modified":"2026-05-16T08:35:21","modified_gmt":"2026-05-16T00:35:21","slug":"2026-05-16-%e5%8c%bb%e5%ad%a6%e5%9b%be%e5%83%8f%e5%88%86%e5%89%b2%e8%ae%ba%e6%96%87%e7%b2%be%e8%af%bb%ef%bc%9amed-disseg-%e4%b8%8e-spectraflow","status":"publish","type":"post","link":"https:\/\/www.eutaboo.com\/index.php\/2026\/05\/16\/2026-05-16-%e5%8c%bb%e5%ad%a6%e5%9b%be%e5%83%8f%e5%88%86%e5%89%b2%e8%ae%ba%e6%96%87%e7%b2%be%e8%af%bb%ef%bc%9amed-disseg-%e4%b8%8e-spectraflow\/","title":{"rendered":"2026-05-16 \u533b\u5b66\u56fe\u50cf\u5206\u5272\u8bba\u6587\u7cbe\u8bfb\uff1aMed-DisSeg \u4e0e SpectraFlow"},"content":{"rendered":"<h1>\u4eca\u65e5\u533b\u5b66\u56fe\u50cf\u5206\u5272\u6700\u65b0\u8bba\u6587\u7cbe\u8bfb\u8ffd\u8e2a<\/h1>\n<h2>\u4eca\u65e5\u7ed3\u8bba<\/h2>\n<p>\u4eca\u5929\u5728 arXiv 2026-05-14 \u65b0\u589e\u8bba\u6587\u4e2d\uff0c\u7b5b\u51fa 2 \u7bc7\u76f4\u63a5\u9762\u5411\u533b\u5b66\u56fe\u50cf\u5206\u5272\u3001\u4e14\u4e0e\u7528\u6237\u5173\u6ce8\u7684 polyp segmentation \/ boundary-aware segmentation \/ U-Net \u7c7b\u6846\u67b6\u6539\u9020\u76f8\u5173\u7684 preprint\uff1a<strong>Med-DisSeg<\/strong> \u4e0e <strong>SpectraFlow<\/strong>\u3002\u4e24\u7bc7\u90fd\u6765\u81ea\u540c\u4e00\u4f5c\u8005\u56e2\u961f\u3001\u90fd\u5f3a\u8c03\u201c\u7ed3\u6784\/\u8fb9\u754c\/\u8868\u793a\u5206\u6563\u201d\uff0c\u8bf4\u660e\u8fd1\u671f\u8d8b\u52bf\u4ecd\u5728\u4ece\u5355\u7eaf\u5806\u53e0 backbone \u8f6c\u5411 <strong>representation regularization + boundary\/frequency-aware 
decoder<\/strong> designs. But precisely because the two papers' method narratives are so similar, today's conclusions are more cautious, focusing on which modules are reusable and where the evidence is still insufficient.<\/p>\n<h2>Search Notes<\/h2>\n<p>Today's search covered arXiv 2026-05-14 through 2026-05-16 under keywords including <code>medical image segmentation<\/code>, <code>polyp segmentation<\/code>, <code>foundation medical segmentation<\/code>, <code>U-Net medical image segmentation<\/code>, <code>Mamba medical image segmentation<\/code>, and <code>3D medical image segmentation<\/code>, and also reviewed this scheduled task's historical output files from 2026-05-13, 2026-05-14, and 2026-05-15. No newly added medical image segmentation papers officially marked as accepted at venues such as MICCAI \/ CVPR \/ ICCV \/ ECCV \/ NeurIPS \/ ICLR \/ ISBI \/ MedIA \/ TMI were found today, so all selected papers are arXiv preprints. All selected papers are from 2025 or later.<\/p>\n<p>Historical recommendation records were checked and duplicate papers excluded; previously skipped recommendation candidates include <strong>MedCore: Boundary-Preserving Medical Core Pruning for MedSAM<\/strong>, <strong>FEFormer<\/strong>, <strong>USEMA<\/strong>, <strong>XTinyU-Net<\/strong>, and <strong>Geometry-aware Prototype Learning for Cross-domain Few-shot Medical Image Segmentation<\/strong>.<\/p>\n<h2>WordPress Publication<\/h2>\n<ul>\n<li>WordPress post link: to be filled in after publication<\/li>\n<li>WordPress Post ID: to be filled in after publication<\/li>\n<\/ul>\n<hr 
\/>\n<h2>Paper 1: Med-DisSeg: Dispersion-Driven Representation Learning for Fine-Grained Medical Image Segmentation<\/h2>\n<h3>Basic Information<\/h3>\n<ul>\n<li>Title: Med-DisSeg: Dispersion-Driven Representation Learning for Fine-Grained Medical Image Segmentation<\/li>\n<li>Authors \/ first author: Zhiquan Chen, Haitao Wang, Guowei Zou, Hejun Wu \/ first author Zhiquan Chen<\/li>\n<li>Date: 2026-05-14<\/li>\n<li>Source: arXiv preprint, arXiv:2605.14579v1<\/li>\n<li>Paper page: https:\/\/arxiv.org\/abs\/2605.14579<\/li>\n<li>PDF file \/ PDF link: https:\/\/arxiv.org\/pdf\/2605.14579v1 (downloaded: MEDIA:\/tmp\/medseg_daily_20260516\/med_disseg_2605.14579.pdf)<\/li>\n<li>Code: the paper states \u201csource code and pretrained models will be released upon acceptance\u201d; no public repository was found today<\/li>\n<li>Tasks: fine-grained medical image segmentation; polyp segmentation, sessile polyp segmentation, gland segmentation, skin lesion segmentation, plus a Synapse multi-organ CT generalization experiment<\/li>\n<li>Datasets: Kvasir-SEG, Kvasir-Sessile, GlaS, ISIC-2016, ISIC-2017; plus Synapse multi-organ CT<\/li>\n<li>Method type: two-stage encoder-decoder; Dispersive Loss; adaptive attention; multi-scale decoder calibration; CNN \/ U-Net-like segmentation framework<\/li>\n<\/ul>\n<h3>paper-deep-reader Findings<\/h3>\n<h4>1. 
One-Sentence Verdict<\/h4>\n<p>Med-DisSeg's most noteworthy contribution is explicitly treating \u201crepresentation collapse causes boundary confusion\u201d as a medical segmentation problem: it constrains encoder representations with a Dispersive Loss, then pairs encoder attention with multi-scale decoder calibration, achieving strong results on polyp \/ gland \/ skin lesion datasets. However, the architecture has many components and the code is not yet public, so reproduction and attribution still require caution.<\/p>\n<h4>2. Background and Core Problem<\/h4>\n<p>The paper targets fine-grained medical image segmentation: lesions or anatomical structures often resemble background tissue in intensity and texture, with low-contrast boundaries and large shape variation, which easily produces blurred activations, boundary leakage, and missed small structures. The authors attribute the problem to two stages:<\/p>\n<ol>\n<li><strong>Representation collapse during encoding<\/strong>: heterogeneous structures are mapped to overly similar embedding regions, making foreground\/background or lesion\/normal tissue hard to distinguish.<\/li>\n<li><strong>Insufficient fine-grained multi-scale reconstruction during decoding<\/strong>: local texture, boundary detail, and global shape are not recovered in a balanced way.<\/li>\n<\/ol>\n<p>The paper map 
can be summarized as follows: the paper studies fine-grained segmentation of low-contrast, small, morphologically variable medical targets; the main action is the two-stage Med-DisSeg, which learns more dispersed representations in Stage I with a ResNet-50 encoder + ELAT + Dispersive Loss and refines masks in Stage II with a CBAT multi-scale decoder while keeping the Dispersive Loss; the authors claim the combination beats a range of U-Net, polyp-specific, Transformer \/ hybrid, and ConDSeg baselines on five 2D datasets and the Synapse 3D multi-organ benchmark; the evidence comes mainly from SOTA tables, a Kvasir-SEG ablation, loss variant \/ hyperparameter analyses, and the Synapse generalization table; the main risk is that gains in a multi-component system may come from engineering accumulation, and without public code the training details, splits, and fairness of baseline reproduction are hard to verify.<\/p>\n<h4>3. 
Limitations of Existing Methods<\/h4>\n<p>The authors see the main shortcomings of existing methods as follows:<\/p>\n<ol>\n<li><strong>U-Net \/ nnU-Net-style encoder-decoders<\/strong>: strong engineering and good local modeling, but when targets and background look alike, encoder features may fail to pull different structures apart.<\/li>\n<li><strong>Attention \/ Transformer \/ hybrid methods<\/strong>: introduce global context, but can be costly and do not necessarily solve the boundary-sensitive representation-separation problem.<\/li>\n<li><strong>Contrastive segmentation methods<\/strong>: works such as ConDSeg have begun treating medical segmentation as representation learning, but usually need foreground\/background\/uncertainty-specific designs, sampling strategies, or extra heads; the authors instead want a more generic \u201call-negative\u201d dispersion regularizer that directly mitigates collapse.<\/li>\n<li><strong>Plain decoder fusion<\/strong>: a single scale or simple skip fusion struggles to recover local texture, fine boundaries, and global structure at the same time.<\/li>\n<\/ol>\n<h4>4. 
Method Overview<\/h4>\n<p>Route record: Primary adapter = method-algorithm; Secondary adapter = benchmark-evaluation (used lightly, since the experiment tables and ablations are the core of credibility); Evidence packs = general, experimental-eval, ablation-and-mechanism-isolation, reproducibility-and-compute; Route confidence = medium-high. This route was chosen because the paper's main contribution is a new segmentation framework plus a representation regularizer, whose ultimate value depends heavily on multi-dataset comparison, ablation, and reproducibility.<\/p>\n<p>The overall Med-DisSeg pipeline:<\/p>\n<ol>\n<li>\n<p><strong>Stage I: robust encoder pre-training<\/strong><br \/>\n   ResNet-50 serves as the default encoder, trained under strong photometric perturbation; <strong>ELAT<\/strong> modules are attached to the encoder blocks; optimization uses a segmentation loss plus the <strong>Dispersive Loss<\/strong>. The prediction head here mainly supervises the encoder rather than acting as the final decoder.<\/p>\n<\/li>\n<li>\n<p><strong>Dispersive Loss<\/strong><br \/>\n   Starting from the negative-sample repulsion term of InfoNCE and dropping positive alignment, it treats all hidden representations within a batch as negative pairs, pushing the representations of different samples\/structures away from each other. The paper gives four instantiations: InfoNCE-L2, InfoNCE-Cosine, Hinge, and a covariance off-diagonal penalty; InfoNCE-L2 performs best in the main experiments.<\/p>\n<\/li>\n<li>\n<p><strong>Stage II: multi-scale feature decoding<\/strong><br \/>\n   The Stage I 
pretrained encoder is plugged into the full Med-DisSeg and fine-tuned at a smaller learning rate, with the Dispersive Loss kept as an auxiliary term. The encoder outputs multi-level features <code>{f1,\u2026,f4}<\/code>; deep features pass through CBT blocks and are then reconstructed at fine granularity via CDFA and three CBAT decoder paths at different scales.<\/p>\n<\/li>\n<li>\n<p><strong>Decoder-side multi-scale calibration<\/strong><br \/>\n   The three decoding paths correspond to small \/ medium \/ large receptive fields: the small path favors fine boundaries and texture, the medium path preserves regional semantics, and the large path preserves overall structure. The three outputs are summed to produce the final mask.<\/p>\n<\/li>\n<\/ol>\n<h4>5. Core Module Breakdown<\/h4>\n<ul>\n<li>\n<p><strong>Dispersive Loss (DL)<\/strong>: takes batch hidden representations as input and outputs a repulsive regularization loss. The core is minimizing <code>log E_{i\u2260j}[exp(-D(h_i,h_j)\/\u03c4)]<\/code>, making in-batch representations more dispersed. It addresses representation collapse and in principle transfers to any supervised segmentation training pipeline, e.g. U-Net, DAMamba, TransUNet, or MedNeXt. The real novelty is not the formula itself but the systematic integration of this all-negative regularizer into two-stage medical segmentation training.<\/p>\n<\/li>\n<li>\n<p><strong>InfoNCE-L2 variant<\/strong>: the paper compares four dispersion variants on Kvasir-SEG; InfoNCE-L2 achieves mIoU 
85.6 and mDSC 91.2, beating Hinge, InfoNCE-Cosine, and Covariance. The authors argue that the L2 distance pulls representations apart geometrically more strongly, whereas cosine only constrains direction and may ignore boundary-sensitive magnitude cues.<\/p>\n<\/li>\n<li>\n<p><strong>ELAT encoder-side adaptive attention<\/strong>: takes an encoder feature map and outputs reweighted features. The design combines a channel-aware branch and a multi-scale spatial branch, aiming to preserve both semantic importance and spatially salient regions inside the encoder. It can be read as an attention block oriented toward weak-boundary \/ low-contrast structures, suitable for transfer into a U-Net encoder or around DAMamba blocks, but controlled ablations are needed to avoid overlap with existing attention.<\/p>\n<\/li>\n<li>\n<p><strong>CBAT decoder \/ multi-scale calibration<\/strong>: the decoder recovers detail, regional semantics, and global shape through small \/ medium \/ large scale paths, each using CBT blocks and CBAT attention. This module is directly relevant to polyp segmentation, since sessile and small polyps depend heavily on consistency between local boundaries and the global contour.<\/p>\n<\/li>\n<li>\n<p><strong>Two-stage training strategy<\/strong>: Stage I emphasizes encoder representation dispersion, Stage II emphasizes mask reconstruction while maintaining dispersion. The mechanism is clear; the drawback is a complex training pipeline, and fair comparison with single-stage 
U-Net \/ nnU-Net baselines requires confirming details such as training epochs, augmentation, early stopping, and pretraining.<\/p>\n<\/li>\n<li>\n<p><strong>Fit for polyp \/ 3D segmentation<\/strong>: very direct for polyp segmentation, since the main tables include Kvasir-SEG and Kvasir-Sessile, the latter closer to flat, boundary-hard polyps. For 3D segmentation the paper only offers table-level Synapse generalization, which is not equivalent to a full 3D nnU-Net \/ volumetric pipeline; DL can transfer to 3D, but the decoder structure and compute budget would need redesign.<\/p>\n<\/li>\n<\/ul>\n<h4>6. Experimental Design and Results<\/h4>\n<p>Setup: a single NVIDIA RTX 4090, input resolution 256\u00d7256, batch size 4, Adam optimizer; the default encoder is ResNet-50. The Stage I learning rate is <code>1e-4<\/code>; in Stage II the encoder lr drops to <code>1e-5<\/code> while the remaining parts stay at <code>1e-4<\/code>. The paper says baselines are reproduced when an official implementation exists, otherwise the original papers' numbers are cited.<\/p>\n<p>Main datasets:<br \/>\n- <strong>Kvasir-SEG<\/strong>: polyp segmentation, 880\/120 train\/validation split.<br \/>\n- <strong>Kvasir-Sessile<\/strong>: sessile polyp subset, 156\/20\/20 train\/val\/test split.<br \/>\n- <strong>GlaS<\/strong>: histopathology gland segmentation, 85\/80 official split.<br \/>\n- <strong>ISIC-2016 \/ ISIC-2017<\/strong>: skin lesion segmentation official splits.<br \/>\n- <strong>Synapse<\/strong>: multi-organ 
CT, used as an additional generalization experiment.<\/p>\n<p>Key results:<\/p>\n<ul>\n<li><strong>Kvasir-Sessile \/ Kvasir-SEG \/ GlaS, Table 1<\/strong>: Med-DisSeg reaches mIoU <strong>84.6<\/strong> and mDSC <strong>91.3<\/strong> on Kvasir-Sessile; mIoU <strong>85.9<\/strong> and mDSC <strong>91.6<\/strong> on Kvasir-SEG; mIoU <strong>85.7<\/strong> and mDSC <strong>92.2<\/strong> on GlaS. Gains over ConDSeg (AAAI 2025) are small to moderate, e.g. Kvasir-SEG mIoU 84.6\u219285.9 and mDSC 90.5\u219291.6.<\/li>\n<li><strong>ISIC, Table 2<\/strong>: ISIC-2016 mIoU <strong>87.4<\/strong> and mDSC <strong>93.1<\/strong>; ISIC-2017 mIoU <strong>81.4<\/strong> and mDSC <strong>89.7<\/strong>, above U-Net, CE-Net, FAT-Net, EIU-Net, ConDSeg, and others in the table.<\/li>\n<li><strong>Ablation, Table 3 (Kvasir-SEG)<\/strong>: baseline mIoU <strong>84.3<\/strong> and mDSC <strong>89.7<\/strong>; adding ELAT gives 85.0\/90.8; adding CBAT gives 84.6\/90.2; ELAT+CBAT reaches 85.2\/91.1; adding the Stage I\/II Dispersive Loss brings the full model to <strong>85.9\/91.6<\/strong>. All three component types contribute, but no single gain is large.<\/li>\n<li><strong>Synapse, Table 4<\/strong>: Med-DisSeg's mean DSC is <strong>83.4<\/strong>, below WMREN's 84.4 but above ConDSeg 80.2, SwinUNet 79.1, and TransUNet 77.5. Note this is more consistent with the abstract's \u201ccompetitive\u201d claim than with absolute SOTA.<\/li>\n<li><strong>Complexity \/ parameters<\/strong>: the main text refers to Fig. 
5(d) for a comparison of parameters and compute cost, but text extraction could not reliably recover concrete Params\/FLOPs numbers; we do not fabricate them today.<\/li>\n<\/ul>\n<h4>7. Credibility Assessment<\/h4>\n<p>Points in its favor:<\/p>\n<ul>\n<li>The datasets cover polyp, sessile polyp, gland, and skin lesion segmentation, plus Synapse, giving good diversity.<\/li>\n<li>The main tables include many relevant baselines such as ConDSeg (AAAI 2025), DoubleAANet (2025), DTAN, XBFormer, and PraNet.<\/li>\n<li>The ablation does not just delete a single module: it separates ELAT, CBAT, Stage I DL, and Stage II DL, and compares the four Dispersive Loss variants along with temperature \/ weight \/ layer placement.<\/li>\n<li>Kvasir-Sessile is highly relevant for the user's polyp segmentation work, since it emphasizes flat, boundary-hard polyps more than plain Kvasir-SEG.<\/li>\n<\/ul>\n<p>Points requiring caution:<\/p>\n<ul>\n<li>The code is not public, so splits, training schedules, augmentation, early stopping, and baseline reproduction experiments cannot currently be verified.<\/li>\n<li>Most results are single-point percentages, with no multi-run mean\/variance or statistical significance tests.<\/li>\n<li>Gains over ConDSeg on Kvasir-SEG \/ ISIC are not overwhelming, often on the order of 0.5\u20131.5 mIoU \/ Dice, so over-interpretation should be avoided.<\/li>\n<li>The paper combines 
DL, ELAT, CBAT, CDFA, CBT, and a multi-scale decoder, on top of two-stage training, so system complexity is high; anyone wanting to transfer a single module must run local ablations.<\/li>\n<li>In the Synapse generalization table Med-DisSeg is not the best (mean DSC 83.4 vs WMREN 84.4), so it cannot be promoted as a strong 3D segmentation framework.<\/li>\n<\/ul>\n<h4>8. Relation to Mainstream Medical Image Segmentation Frameworks<\/h4>\n<ul>\n<li><strong>U-Net \/ nnU-Net<\/strong>: Med-DisSeg belongs to the U-shaped encoder-decoder family but is not an nnU-Net recipe improvement; it is closer to a hand-built \u201cResNet encoder + attention + multi-scale decoder + representation regularization\u201d framework.<\/li>\n<li><strong>MedNeXt \/ CNN segmentation<\/strong>: DL can transfer directly to MedNeXt as a training regularizer; CBAT\/ELAT are lightweight attention components that need ablation against the inductive biases large-kernel CNNs already provide.<\/li>\n<li><strong>UNETR \/ Swin-UNet \/ TransUNet \/ TransFuse<\/strong>: the paper treats these as contextual related work while itself remaining CNN\/attention-based; DL and decoder calibration could serve as training\/decoder enhancements for Transformer segmentation.<\/li>\n<li><strong>Mamba \/ VMamba \/ SegMamba \/ DAMamba<\/strong>: the paper does not use Mamba, but DL would be useful for DAMamba: one can check whether the Mamba branch really enlarges foreground\/background or boundary\/interior feature 
margins, rather than merely adding long-range dependencies. CBAT's multi-scale calibration could also serve as a comparison module for the DAMamba decoder.<\/li>\n<li><strong>Foundation model segmentation<\/strong>: no direct relation to SAM\/MedSAM; this is a dedicated supervised training framework.<\/li>\n<\/ul>\n<h4>9. Value for My Project<\/h4>\n<p>For the user's polyp segmentation and DAMamba directions, Med-DisSeg is quite valuable but should be used piecemeal:<\/p>\n<ul>\n<li><strong>polyp segmentation<\/strong>: the Kvasir-SEG + Kvasir-Sessile results are directly relevant; Kvasir-Sessile in particular can serve as a reference for boundary-hard scenarios.<\/li>\n<li><strong>DAMamba modification<\/strong>: the most transfer-worthy piece is the Dispersive Loss, not the full ELAT\/CBAT suite. Add DL to DAMamba training and observe whether t-SNE \/ class margin \/ boundary Dice \/ HD95 improve.<\/li>\n<li><strong>baseline \/ related work<\/strong>: citable as a 2026 representation regularization + fine-grained decoder direction.<\/li>\n<li><strong>Reproduction advice<\/strong>: do not start by reproducing the whole Med-DisSeg. First add DL to an existing U-Net or DAMamba and verify stable gains on Kvasir-SEG \/ ClinicDB \/ ColonDB; only then consider the CBAT multi-scale decoder.<\/li>\n<\/ul>\n<h4>10. 
Reading Recommendation<\/h4>\n<p><strong>Recommended for close reading, but not for blind full-model reproduction.<\/strong> Prioritize the Dispersive Loss formula, the Table 3 ablation, and the Kvasir-Sessile \/ Kvasir-SEG results. If the user's goal is writing a paper or modifying DAMamba, the most useful piece is the \u201crepresentation collapse \u2192 all-negative dispersion regularization \u2192 boundary-aware improvement\u201d narrative; ELAT\/CBAT are optional modules that require strictly controlled experimental variables.<\/p>\n<hr \/>\n<h2>Paper 2: SpectraFlow: Unifying Structural Pretraining and Frequency Adaptation for Medical Image Segmentation<\/h2>\n<h3>Basic Information<\/h3>\n<ul>\n<li>Title: SpectraFlow: Unifying Structural Pretraining and Frequency Adaptation for Medical Image Segmentation<\/li>\n<li>Authors \/ first author: Zhiquan Chen, Haitao Wang, Guowei Zou, Hejun Wu \/ first author Zhiquan Chen<\/li>\n<li>Date: 2026-05-14<\/li>\n<li>Source: arXiv preprint, arXiv:2605.14566v1<\/li>\n<li>Paper page: https:\/\/arxiv.org\/abs\/2605.14566<\/li>\n<li>PDF file \/ PDF link: https:\/\/arxiv.org\/pdf\/2605.14566v1 (downloaded: MEDIA:\/tmp\/medseg_daily_20260516\/spectraflow_2605.14566.pdf)<\/li>\n<li>Code: the abstract states \u201cThe code is in the appendix materials\u201d; no public GitHub \/ project page link was found today, and the arXiv page does not confirm directly accessible code<\/li>\n<li>Tasks: low-data medical image segmentation; polyp segmentation, gland segmentation, skin lesion segmentation, plus 3D Synapse 
generalization<\/li>\n<li>Datasets: Kvasir-SEG, GlaS, ISIC-2016; low-annotation-ratio experiments; appearance corruption robustness; plus Synapse multi-organ CT<\/li>\n<li>Method type: structure-aware pretraining; MeanFlow latent transport; Dispersive Loss; frequency-directional dynamic convolution; DINOv2 encoder adaptation; boundary-aware decoder<\/li>\n<\/ul>\n<h3>paper-deep-reader Findings<\/h3>\n<h4>1. One-Sentence Verdict<\/h4>\n<p>SpectraFlow's main value is splitting the \u201ctexture bias\u201d problem of low-annotation medical segmentation into two steps: first an image+mask mixed-domain MeanFlow pretraining pushes encoder representations toward geometric structure, then a DAF + FDConv decoder repairs high-frequency boundaries. It is inspiring for low-data polyp segmentation, but the method relies on mask-guided pretraining, and its theme overlaps heavily with the same-day Med-DisSeg, so its independent contribution should be viewed with caution.<\/p>\n<h4>2. 
Background and Core Problem<\/h4>\n<p>The paper studies low-data medical image segmentation. The authors argue that with few annotations, both CNN and Transformer encoders tend to learn texture cues tied to scanner \/ protocol \/ patient appearance rather than stable anatomical geometry, leading to blurred or broken boundaries and lost small structures. Generic self-supervised pretraining (e.g. MIM, token prediction, reconstruction) improves transferability but may still favor appearance reconstruction over the topology, shape continuity, and boundary preservation that segmentation actually needs.<\/p>\n<p>The paper map can be summarized as follows: the paper studies texture bias and high-frequency boundary errors in low-annotation medical segmentation; the main action is SpectraFlow, a two-stage combination of Mixed-Domain MeanFlow Pretraining, Dispersive Loss, Direct Attentional Fusion, and Frequency-Directional Dynamic Convolution; the authors claim it outperforms U-Net, PraNet, DCSAU-Net, ConDSeg, and others on ISIC-2016, Kvasir-SEG, and GlaS, and is more robust under low annotation and perturbation tests; the evidence comes mainly from SOTA tables, Stage-1\/Stage-2 ablations, 10%\/20%\/50% annotation curves, appearance shift robustness, and the Synapse generalization table; the main risk is that Stage-1 uses masks as structural input, and although the authors say they are not a prediction target, it still depends on annotated 
masks, so the \u201clow-data \/ pretraining\u201d setting must carefully distinguish unsupervised, self-supervised, and mask-conditioned structural pretraining.<\/p>\n<h4>3. Limitations of Existing Methods<\/h4>\n<p>The authors' criticisms include:<\/p>\n<ol>\n<li><strong>Traditional U-Net \/ CNN<\/strong>: relies on local texture and more easily learns appearance shortcuts across scanners or protocols, or in low-data regimes.<\/li>\n<li><strong>Transformer encoders<\/strong>: weaken the local bias, but under low data may still pick the easier-to-learn appearance cues over geometric structure.<\/li>\n<li><strong>Generic self-supervised pretraining<\/strong>: masked image modeling or reconstruction focuses on pixel\/texture recovery and does not explicitly model the topology, boundary layout, and shape continuity that segmentation requires.<\/li>\n<li><strong>Generic frequency \/ boundary modules<\/strong>: many frequency-domain modules are global or isotropic fixed operations, lacking adaptation to local boundary orientation and contextual variation.<\/li>\n<\/ol>\n<h4>4. 
Method Overview<\/h4>\n<p>Routing record: Primary adapter = method-algorithm; Secondary adapter = benchmark-evaluation (used lightly); Evidence packs = general, experimental-eval, robustness-and-ood, ablation-and-mechanism-isolation; Route confidence = medium. This route was chosen because the paper's main contribution is a two-stage algorithm\/training framework, while its credibility depends on whether the low-data, perturbation-robustness, and ablation results support the mechanistic narrative.<\/p>\n<p>SpectraFlow's two-stage pipeline:<\/p>\n<ol>\n<li>\n<p><strong>Stage-1: Mixed-Domain MeanFlow Pretraining<\/strong><br \/>\n   A DINOv2 encoder produces the latent feature <code>z0=E(x)<\/code>. Both the image and the binary mask are fed into the encoder latent space, with the mask serving as a structure-only input via channel repetition; the authors stress that the mask is not a prediction target and there is no segmentation loss, only structural guidance.<\/p>\n<\/li>\n<li>\n<p><strong>Latent perturbation and the MeanFlow objective<\/strong><br \/>\n   Construct <code>z_t = \u03b1_t z0 + \u03c3_t \u03b5<\/code> with <code>\u03b1_t=1-t<\/code> and <code>\u03c3_t=t<\/code>. The MeanFlow head learns the time-averaged velocity \/ transport direction between two time points, organizing the representation space via latent transport regression.<\/p>\n<\/li>\n<li>\n<p><strong>Dispersive Loss<\/strong><br \/>\n   Stage-1 adds an all-negative Dispersive Loss to prevent latent collapse caused by the highly similar appearance of medical images. The PDF mainly uses the squared L2 repulsion variant.
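With the linear schedule above, the instantaneous velocity dz_t\/dt = eps - z0 is constant, so the time-averaged velocity between any two time points is also eps - z0. A minimal numpy sketch of the perturbation step (toy shapes and helper names are illustrative, not from the paper):

```python
import numpy as np

def perturb(z0, eps, t):
    """Linear path z_t = (1 - t) * z0 + t * eps (alpha_t = 1 - t, sigma_t = t)."""
    return (1.0 - t) * z0 + t * eps

def mean_velocity(z0, eps, t, r):
    """Time-averaged velocity between times t and r along the path.
    With this schedule dz_t/dt = eps - z0 is constant, so the average
    (z_r - z_t) / (r - t) equals eps - z0 for any t != r."""
    return (perturb(z0, eps, r) - perturb(z0, eps, t)) / (r - t)

rng = np.random.default_rng(0)
z0 = rng.normal(size=(4, 16))   # toy latent features z0 = E(x)
eps = rng.normal(size=(4, 16))  # Gaussian noise
v = mean_velocity(z0, eps, t=0.2, r=0.7)
# A MeanFlow-style head would regress this transport direction.
assert np.allclose(v, eps - z0)
```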
<\/p>\n<\/li>\n<li>\n<p><strong>Stage-2: Segmentation finetuning<\/strong><br \/>\n   The MeanFlow head is discarded and a lightweight decoder is attached; only the last encoder block is unfrozen, to keep full fine-tuning from destroying the structural representation under low data. The loss is a Dice+BCE segmentation loss plus a boundary-aware loss.<\/p>\n<\/li>\n<li>\n<p><strong>DAF + FDConv decoder<\/strong><br \/>\n   DAF (Direct Attentional Fusion) uses local\/global context to produce a gate map that suppresses noisy skip features; FDConv (Frequency-Directional Dynamic Convolution) replaces the plain 3\u00d73 conv in the refinement block, using direction-aware, high-frequency responses to sharpen boundaries.<\/p>\n<\/li>\n<\/ol>\n<h4>5. Core Module Breakdown<\/h4>\n<ul>\n<li>\n<p><strong>Mixed-Domain MeanFlow Pretraining<\/strong>: input is the image and its binary mask, output is a more geometry-consistent encoder representation. The key point is that the mask is treated as a conditional structural input rather than a direct segmentation supervision target. The setup is clever, but strictly speaking it still uses annotated mask information and is not purely self-supervised; it is workable in low-annotation settings, provided the number of masks used in Stage-1 matches the downstream annotation ratio.<\/p>\n<\/li>\n<li>\n<p><strong>Latent transport regression<\/strong>: MeanFlow does no pixel reconstruction; it learns transport directions in latent space. It tries to make representations move smoothly along structural variation, thereby reducing appearance texture bias.
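The gated skip fusion that DAF performs can be caricatured in a few lines of numpy; the pooling choices and the sigmoid gate below are illustrative assumptions, not the paper's exact attention design:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_skip_fusion(decoder_feat, skip_feat):
    """Toy DAF-style fusion: build a gate map from local and global
    context of the decoder feature, then use it to suppress noisy
    skip activations before merging. Shapes: (C, H, W)."""
    # Global context: per-channel mean (a stand-in for global attention).
    global_ctx = decoder_feat.mean(axis=(1, 2), keepdims=True)
    # Local context: the decoder feature itself (stand-in for local attention).
    gate = sigmoid(decoder_feat + global_ctx)   # values in (0, 1)
    return decoder_feat + gate * skip_feat      # gated fusion, not plain concat

rng = np.random.default_rng(1)
d = rng.normal(size=(8, 16, 16))
s = rng.normal(size=(8, 16, 16))
fused = gated_skip_fusion(d, s)
assert fused.shape == (8, 16, 16)
```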
This module scores high on novelty, but the implementation is complex and depends on DINOv2\/MeanFlow details; its short-term reproduction cost is higher than a plain U-Net module.<\/p>\n<\/li>\n<li>\n<p><strong>Dispersive Loss<\/strong>: as in Med-DisSeg, in-batch repulsion is used to prevent representation collapse. SpectraFlow's Table 2 shows: no Stage-1 DINOv2 Dice 80.12; MeanFlow image-only 82.15; Mixed-domain MAE 85.50; Mixed-domain MeanFlow 87.10; adding the Dispersive Loss reaches 88.62. This is one of the clearest pieces of mechanistic evidence in the paper.<\/p>\n<\/li>\n<li>\n<p><strong>Direct Attentional Fusion (DAF)<\/strong>: input is the deep decoder feature and the mid-level skip feature, output is a gated fusion feature. Local and global attention jointly generate the gate map, aiming to reduce background noise and the semantic gap in the skip connection. It transfers well to U-Net \/ DAMamba decoders and can replace simple concat skips.<\/p>\n<\/li>\n<li>\n<p><strong>Frequency-Directional Dynamic Convolution (FDConv)<\/strong>: used for boundary refinement, emphasizing direction-aware high-frequency boundary responses. The ablation shows FDConv alone significantly lowers HD95 (17.24\u219212.15), suggesting it fits boundary refinement better than a plain CBAM.<\/p>\n<\/li>\n<li>\n<p><strong>Partial fine-tuning strategy<\/strong>: only the last encoder block is fine-tuned. Appendix Table 5 reports Frozen Encoder DSC 91.28 \/ HD95 12.52, Full fine-tuning 89.72 \/ 16.68, and Last Block fine-tuning 92.98 \/ 10.86.
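For reference, the all-negative squared-L2 repulsion described for the Dispersive Loss can be sketched as below; the log-mean-exp form and the temperature tau are assumptions borrowed from common dispersive-loss formulations, not confirmed details of either paper:

```python
import numpy as np

def dispersive_loss(z, tau=1.0):
    """All-negative repulsion over a batch of features z: (N, D).
    Penalizes pairs of representations that sit close together,
    using squared L2 distance; minimizing it spreads the batch out."""
    n = z.shape[0]
    d2 = ((z[:, None, :] - z[None, :, :]) ** 2).sum(-1)  # (N, N) squared distances
    mask = ~np.eye(n, dtype=bool)                        # exclude self-pairs
    # log-mean-exp of negative distances: small when points are far apart
    return np.log(np.mean(np.exp(-d2[mask] / tau)))

rng = np.random.default_rng(2)
tight = rng.normal(size=(16, 8)) * 0.01   # nearly collapsed batch
spread = rng.normal(size=(16, 8)) * 3.0   # well-dispersed batch
assert dispersive_loss(tight) > dispersive_loss(spread)
```

A term like this, added to an existing segmentation objective, is the kind of minimal experiment such a loss invites before reproducing anything heavier.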
This matters for low-data medical segmentation: with foundation\/self-supervised encoders, more fine-tuning is not necessarily better.<\/p>\n<\/li>\n<li>\n<p><strong>Suitability for polyp \/ 3D segmentation<\/strong>: highly relevant to polyp segmentation, since Kvasir-SEG is included and appearance shift robustness is tested; for 3D segmentation there is only the Synapse table so far, and the method itself is a 2D 224\u00d7224 pipeline, so it cannot be treated as a 3D framework directly.<\/p>\n<\/li>\n<\/ul>\n<h4>6. Experimental Design and Results<\/h4>\n<p>Datasets and metrics:<br \/>\n- Kvasir-SEG: 880\/120 train\/validation, polyp segmentation.<br \/>\n- GlaS: 85\/80 official split, gland segmentation.<br \/>\n- ISIC-2016: 900\/379 official split, skin lesion segmentation.<br \/>\n- Metrics: mIoU, mDSC, Recall, Precision, HD95; inputs resized to 224\u00d7224.<br \/>\n- Stage-2 trains for at most 100 epochs with early stopping by validation Dice; lr plateau reduce factor 0.5, minimum lr <code>1e-6<\/code>.<\/p>\n<p>Key results:<\/p>\n<ul>\n<li><strong>Table 1 SOTA comparison<\/strong>: SpectraFlow reaches mIoU <strong>86.88<\/strong> \/ mDSC <strong>92.98<\/strong> on ISIC-2016; mIoU <strong>85.90<\/strong> \/ mDSC <strong>91.60<\/strong> on Kvasir-SEG; mIoU <strong>85.63<\/strong> \/ mDSC <strong>92.12<\/strong> on GlaS. Against ConDSeg (AAAI 2025): ISIC-2016 86.28\/92.24, Kvasir-SEG 84.62\/90.45, GlaS 84.96\/91.38, so SpectraFlow leads by small margins throughout.<\/li>\n<li><strong>Table 2 Stage-1 pretraining ablation (ISIC-2016)<\/strong>: official DINOv2 without Stage-1 reaches Dice <strong>80.12<\/strong>, mIoU 70.14, HD95 34.18; 
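The Stage-2 training recipe above (plateau-halving the lr down to 1e-6, early stopping by validation Dice) can be sketched as generic training-loop logic; the patience values below are assumptions, and this is not the authors' code:

```python
def stage2_schedule(val_dice_history, lr0=1e-4, factor=0.5, min_lr=1e-6,
                    lr_patience=3, stop_patience=10):
    """Replay a validation-Dice history and return (final_lr, stop_epoch).
    Mirrors the reported recipe: halve the lr on plateau down to 1e-6,
    early-stop when validation Dice stops improving."""
    lr, best, since_best = lr0, float('-inf'), 0
    for epoch, dice in enumerate(val_dice_history):
        if dice > best:
            best, since_best = dice, 0
        else:
            since_best += 1
        if since_best and since_best % lr_patience == 0:
            lr = max(lr * factor, min_lr)      # plateau: reduce lr by 0.5
        if since_best >= stop_patience:
            return lr, epoch                   # early stopping
    return lr, len(val_dice_history) - 1

# A run that improves for 5 epochs, then plateaus.
history = [0.80, 0.84, 0.86, 0.88, 0.90] + [0.90] * 12
lr, stop = stage2_schedule(history)
assert lr < 1e-4 and stop < len(history) - 1
```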
MeanFlow image-only Dice 82.15; Mixed-domain MAE 85.50; Mixed-domain MeanFlow 87.10; with the Dispersive Loss added, Dice <strong>88.62<\/strong>, mIoU <strong>80.55<\/strong>, HD95 <strong>17.24<\/strong>.<\/li>\n<li><strong>Table 3 Stage-2 decoder ablation<\/strong>: baseline Dice 88.62, HD95 17.24; + FDConv gives Dice <strong>91.65<\/strong>, HD95 <strong>12.15<\/strong>; + DAF gives Dice <strong>91.50<\/strong>, HD95 <strong>12.34<\/strong>; full DAF+FDConv gives Dice <strong>92.98<\/strong>, mIoU <strong>86.88<\/strong>, HD95 <strong>10.86<\/strong>. CBAM alone or stacked with DAF is actually worse, showing that not just any attention helps.<\/li>\n<li><strong>Low-annotation-ratio experiments<\/strong>: the paper reports beating the baselines at 10%, 20%, 50%, and 100% annotation, with the advantage most pronounced at 10% labeled data. The text extracted from the figure does not give all exact values, so none are fabricated here.<\/li>\n<li><strong>Appearance shift robustness (Kvasir-SEG)<\/strong>: U-Net++ drops from a clean Dice of 77.2 to 56.5 under contrast corruption (down 20.7 points); SpectraFlow drops from a clean Dice of 91.6 to 84.9 under contrast (down 6.7 points), and is also more stable under brightness \/ blur \/ noise.<\/li>\n<li><strong>Synapse 3D generalization, Table 4<\/strong>: mean DSC <strong>85.2<\/strong>, above WMREN 84.4, ConDSeg 80.2, and SwinUNet 79.1; pancreas 72.6 and gallbladder 76.5 are the highlights.<\/li>\n<\/ul>\n<h4>7. 
Credibility of the Experiments<\/h4>\n<p>Credible aspects:<\/p>\n<ul>\n<li>Clean Stage-1 and Stage-2 ablations that separately support the 'structural pretraining' and 'frequency-directional boundary refinement' claims.<\/li>\n<li>HD95 is reported, and the FDConv \/ DAF ablations show the boundary metric really does improve.<\/li>\n<li>The low-annotation-ratio and appearance corruption analyses match the low-data \/ texture-bias problem the paper claims to address.<\/li>\n<li>Kvasir-SEG is directly relevant to polyp segmentation, and corruption robustness is meaningful for cross-center endoscopy.<\/li>\n<\/ul>\n<p>Points of caution:<\/p>\n<ul>\n<li>Stage-1 uses binary masks as structural input; even though they are not prediction targets, annotated masks are still required, so in genuinely low-annotation settings it cannot substitute for label-free self-supervision.<\/li>\n<li>No public code link could be confirmed; the abstract says the appendix has code, but no accessible repository was found today.<\/li>\n<li>Same authors and same-day release as Med-DisSeg, sharing the Dispersive Loss, the Kvasir\/GlaS\/ISIC datasets, the ConDSeg baseline, and much of the narrative; readers must judge the independence and differentiation themselves.<\/li>\n<li>Most main tables are still single-run results, without multi-seed mean\/variance or significance tests.<\/li>\n<li>The clinical meaning of HD95 under 224\u00d7224 resizing 
is limited; real polyp boundary evaluation should be done at native resolution or on a unified physical scale.<\/li>\n<li>The 3D Synapse result is strong, but the method as described is a 2D pipeline; claiming 3D suitability would require more volumetric detail.<\/li>\n<\/ul>\n<h4>8. Relation to Mainstream Medical Image Segmentation Frameworks<\/h4>\n<ul>\n<li><strong>U-Net \/ nnU-Net<\/strong>: SpectraFlow is not an nnU-Net recipe but a two-stage DINOv2 encoder + lightweight decoder framework. DAF can directly replace U-Net skip fusion; FDConv can serve as a decoder refinement block.<\/li>\n<li><strong>MedNeXt \/ CNN segmentation<\/strong>: FDConv is complementary to MedNeXt's large-kernel \/ convolutional inductive bias and could act as a boundary refinement head, though compute cost needs comparison.<\/li>\n<li><strong>UNETR \/ Swin-UNet \/ TransUNet \/ TransFuse<\/strong>: the DINOv2 encoder + decoder design is closer to foundation\/self-supervised encoder adaptation than to a classic UNETR; DAF\/FDConv can nevertheless transfer to Transformer decoders.<\/li>\n<li><strong>Mamba \/ VMamba \/ SegMamba \/ DAMamba<\/strong>: SpectraFlow does not use Mamba, but it is a reminder that DAMamba research should not stress long-range scanning alone; texture bias, structural pretraining, and high-frequency boundary error need explicit treatment. DAF+FDConv is a candidate for DAMamba decoder upgrades.<\/li>\n<li><strong>Foundation model segmentation<\/strong>: not SAM\/MedSAM promptable segmentation, but DINOv2-style pretrained visual encoder adaptation. What it shares with the MedSAM line is the use of generic visual representations; the difference is that it performs structural pretraining via mask-guided MeanFlow.
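The direction-aware high-frequency idea behind FDConv, mentioned above as a candidate refinement head, can be illustrated with fixed oriented difference kernels; the kernels and the energy-based direction selection below are assumptions for illustration, not the paper's dynamic operator:

```python
import numpy as np

# Oriented 3x3 high-pass kernels (horizontal- and vertical-edge detectors).
K_H = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], float)  # horizontal edges
K_V = K_H.T                                                   # vertical edges

def conv3x3(img, k):
    """3x3 cross-correlation (valid mode), plain numpy."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(3):
        for j in range(3):
            out += k[i, j] * img[i:i + h - 2, j:j + w - 2]
    return out

def directional_response(img):
    """Toy direction-aware refinement: pick, per image, the oriented
    high-pass kernel with the larger response energy, mimicking a
    dynamic choice of local boundary direction."""
    rh = conv3x3(img, K_H)
    rv = conv3x3(img, K_V)
    return rh if (rh ** 2).sum() >= (rv ** 2).sum() else rv

# An image with a single horizontal step edge: the horizontal kernel wins.
img = np.zeros((8, 8)); img[4:, :] = 1.0
resp = directional_response(img)
assert np.abs(resp).max() > 0
```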
<\/li>\n<\/ul>\n<h4>9. Value for My Project<\/h4>\n<p>For the user's polyp segmentation \/ DAMamba direction, SpectraFlow offers strong methodological inspiration at a high reproduction cost:<\/p>\n<ul>\n<li><strong>Polyp segmentation<\/strong>: Kvasir-SEG + corruption robustness is directly relevant, and especially well suited for related-work writing on cross-center \/ appearance-shift-induced polyp boundary errors.<\/li>\n<li><strong>DAMamba upgrades<\/strong>: DAF + FDConv can be borrowed as a decoder-side boundary module; the partial fine-tuning strategy is also worth borrowing if the user later uses a DINOv2 \/ SAM encoder for polyp segmentation.<\/li>\n<li><strong>Low-annotation experiments<\/strong>: for few-label polyp segmentation, the 10\/20\/50\/100% split design can be reused, but the Stage-1 mask usage must be stated explicitly to avoid calling it purely self-supervised.<\/li>\n<li><strong>Reproduction priority<\/strong>: reproducing the full MeanFlow pipeline first is not advisable; implementing DAF+FDConv or the partial fine-tuning ablation is more realistic.<\/li>\n<\/ul>\n<h4>10. 
Reading Advice<\/h4>\n<p><strong>Read the method and ablations closely, but rank reproduction below the pluggable modules.<\/strong> If the user is currently writing a polyp\/DAMamba paper, focus on Table 2, Table 3, the appearance corruption figure, and the DAF\/FDConv design; MeanFlow pretraining is better treated as a mid-to-long-term direction than a short-term mainline, since it is complex to implement and depends on mask-guided pretraining.<\/p>\n<hr \/>\n<h2>Today's Recommendation Priority<\/h2>\n<ol>\n<li>\n<p><strong>Med-DisSeg: Dispersion-Driven Representation Learning for Fine-Grained Medical Image Segmentation<\/strong><br \/>\n   Better suited as a direct reference for the user's near-term polyp segmentation \/ DAMamba work: its Dispersive Loss transfers more easily into an existing training pipeline, and it covers Kvasir-Sessile, a boundary-hard polyp scenario.<\/p>\n<\/li>\n<li>\n<p><strong>SpectraFlow: Unifying Structural Pretraining and Frequency Adaptation for Medical Image Segmentation<\/strong><br \/>\n   Better suited as methodological inspiration for low-data \/ boundary-aware \/ pretrained encoder adaptation. DAF+FDConv is well worth extracting and reproducing, while the MeanFlow mixed-domain pretraining is costly in the short term.<\/p>\n<\/li>\n<\/ol>\n<h2>Today's PDF Status<\/h2>\n<ul>\n<li>Paper 1: PDF attached \/ PDF link provided: MEDIA:\/tmp\/medseg_daily_20260516\/med_disseg_2605.14579.pdf; https:\/\/arxiv.org\/pdf\/2605.14579v1<\/li>\n<li>Paper 2: PDF attached \/ PDF 
link provided: MEDIA:\/tmp\/medseg_daily_20260516\/spectraflow_2605.14566.pdf; https:\/\/arxiv.org\/pdf\/2605.14566v1<\/li>\n<\/ul>\n<h2>Today's Actionable Suggestions<\/h2>\n<ol>\n<li><strong>First add the Dispersive Loss to the existing polyp \/ DAMamba training as a minimal experiment.<\/strong> Report Dice, mIoU, and HD95 \/ Boundary F1 on Kvasir-SEG, CVC-ClinicDB, CVC-ColonDB, and ETIS, and plot feature distributions or boundary error maps to verify whether it really reduces foreground\/background representation confusion.<\/li>\n<li><strong>Extract SpectraFlow's DAF+FDConv as a decoder plug-in instead of reproducing the whole MeanFlow pipeline.<\/strong> Add DAF or FDConv after the skip fusion of a U-Net \/ TransFuse \/ DAMamba decoder and ablate module by module, avoiding the trap of adding several modules at once without knowing which one works.<\/li>\n<li><strong>Related work can gain a new category: representation dispersion and boundary-frequency refinement.<\/strong> Med-DisSeg covers representation collapse \/ dispersive regularization, and SpectraFlow covers structure-aware pretraining \/ frequency-directional boundary refinement; note that both are 2026 arXiv preprints with no confirmed public code.<\/li>\n<\/ol>\n","protected":false},"excerpt":{"rendered":"<p>Daily tracking of the latest medical image segmentation papers. Today's conclusion: among the new arXiv 2026-05-14 papers, 2 preprints directly targeting medical image segmentation 
&#8230;<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"emotion":"","emotion_color":"","title_style":"","license":"","footnotes":""},"categories":[85],"tags":[],"class_list":["post-1063","post","type-post","status-publish","format-standard","hentry","category-85"],"views":6,"_links":{"self":[{"href":"https:\/\/www.eutaboo.com\/index.php\/wp-json\/wp\/v2\/posts\/1063","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.eutaboo.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.eutaboo.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.eutaboo.com\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.eutaboo.com\/index.php\/wp-json\/wp\/v2\/comments?post=1063"}],"version-history":[{"count":0,"href":"https:\/\/www.eutaboo.com\/index.php\/wp-json\/wp\/v2\/posts\/1063\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.eutaboo.com\/index.php\/wp-json\/wp\/v2\/media?parent=1063"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.eutaboo.com\/index.php\/wp-json\/wp\/v2\/categories?post=1063"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.eutaboo.com\/index.php\/wp-json\/wp\/v2\/tags?post=1063"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}