{"id":1048,"date":"2026-05-14T08:39:07","date_gmt":"2026-05-14T00:39:07","guid":{"rendered":"https:\/\/www.eutaboo.com\/index.php\/2026\/05\/14\/2026-05-14-%e5%8c%bb%e5%ad%a6%e5%9b%be%e5%83%8f%e5%88%86%e5%89%b2%e8%ae%ba%e6%96%87%e7%b2%be%e8%af%bb%ef%bc%9afeformer-%e4%b8%8e-usema\/"},"modified":"2026-05-14T08:39:07","modified_gmt":"2026-05-14T00:39:07","slug":"2026-05-14-%e5%8c%bb%e5%ad%a6%e5%9b%be%e5%83%8f%e5%88%86%e5%89%b2%e8%ae%ba%e6%96%87%e7%b2%be%e8%af%bb%ef%bc%9afeformer-%e4%b8%8e-usema","status":"publish","type":"post","link":"https:\/\/www.eutaboo.com\/index.php\/2026\/05\/14\/2026-05-14-%e5%8c%bb%e5%ad%a6%e5%9b%be%e5%83%8f%e5%88%86%e5%89%b2%e8%ae%ba%e6%96%87%e7%b2%be%e8%af%bb%ef%bc%9afeformer-%e4%b8%8e-usema\/","title":{"rendered":"2026-05-14 \u533b\u5b66\u56fe\u50cf\u5206\u5272\u8bba\u6587\u7cbe\u8bfb\uff1aFEFormer \u4e0e USEMA"},"content":{"rendered":"<h1>\u4eca\u65e5\u533b\u5b66\u56fe\u50cf\u5206\u5272\u6700\u65b0\u8bba\u6587\u7cbe\u8bfb\u8ffd\u8e2a<\/h1>\n<h2>\u4eca\u65e5\u7ed3\u8bba<\/h2>\n<p>\u4eca\u5929\u68c0\u7d22\u5230 2026-05-11 \u81f3 2026-05-12 arXiv \u4e0a\u591a\u7bc7\u533b\u5b66\u56fe\u50cf\u5206\u5272\u65b0\u7a3f\uff0c\u5176\u4e2d\u6700\u503c\u5f97\u5173\u6ce8\u7684\u4e24\u7bc7\u5206\u522b\u4ee3\u8868\u4e24\u4e2a\u8d8b\u52bf\uff1a\u4e00\u662f\u628a\u9891\u57df\u5efa\u6a21\u7cfb\u7edf\u6027\u5d4c\u5165 3D\/volumetric Transformer \u5206\u5272\u6846\u67b6\uff0c\u4e8c\u662f\u628a Mamba-like\/linear attention \u7684\u5c40\u90e8-\u5168\u5c40\u673a\u5236\u5d4c\u5165 U-Net\u3002\u4e24\u7bc7\u5747\u4e3a 2026 \u5e74 arXiv preprint\uff0c\u5c1a\u672a\u786e\u8ba4\u9876\u4f1a\/\u9876\u520a\u63a5\u6536\uff1b\u4f46\u90fd\u76f4\u63a5\u9762\u5411\u533b\u5b66\u56fe\u50cf\u5206\u5272\u4e3b\u5e72\u8bbe\u8ba1\uff0c\u4e14\u6709\u5b8c\u6574 PDF \u4e0e\u5b9e\u9a8c\u8868\u683c\uff0c\u56e0\u6b64\u9002\u5408\u4eca\u65e5\u7cbe\u8bfb\u3002<\/p>\n<h2>\u68c0\u7d22\u8bf4\u660e<\/h2>\n<p>\u68c0\u7d22\u8303\u56f4\u8986\u76d6 arXiv 2026-05-01 \u81f3 2026-05-14 \u7684 
<code>medical image segmentation<\/code>\u3001<code>Mamba medical image segmentation<\/code>\u3001<code>U-Net medical image segmentation<\/code>\u3001<code>volumetric medical image segmentation<\/code> \u7b49\u5173\u952e\u8bcd\uff0c\u5e76\u68c0\u67e5\u4e86\u5386\u53f2\u5b9a\u65f6\u4efb\u52a1\u8f93\u51fa\u3002\u6240\u6709\u5165\u9009\u8bba\u6587\u5747\u4e3a 2025 \u5e74\u53ca\u4ee5\u540e\uff1b\u4eca\u5929\u672a\u53d1\u73b0\u5df2\u6b63\u5f0f\u6807\u6ce8\u4e3a MICCAI\/CVPR\/MedIA\/TMI \u7b49\u9876\u4f1a\u9876\u520a\u63a5\u6536\u7684\u5168\u65b0\u8bba\u6587\uff0c\u56e0\u6b64\u4f18\u5148\u9009\u62e9\u6700\u65b0\u4e14\u65b9\u6cd5\u8d21\u732e\u8f83\u660e\u786e\u7684 arXiv preprint\u3002\u5df2\u68c0\u67e5\u5386\u53f2\u63a8\u8350\u8bb0\u5f55\u5e76\u6392\u9664\u4e86\u91cd\u590d\u8bba\u6587\uff1b\u5386\u53f2\u5df2\u63a8\u8350\u5e76\u8df3\u8fc7\u7684\u91cd\u590d\u5019\u9009\u5305\u62ec <strong>Geometry-aware Prototype Learning for Cross-domain Few-shot Medical Image Segmentation<\/strong> \u4e0e <strong>XTinyU-Net: Training-Free U-Net Scaling via Initialization-Time Sensitivity<\/strong>\u3002<\/p>\n<h2>WordPress \u53d1\u5e03<\/h2>\n<ul>\n<li>WordPress \u6587\u7ae0\u94fe\u63a5\uff1a&lt;\u5f85\u53d1\u5e03\u540e\u586b\u5199&gt;<\/li>\n<li>WordPress Post ID\uff1a&lt;\u5f85\u53d1\u5e03\u540e\u586b\u5199&gt;<\/li>\n<\/ul>\n<hr \/>\n<h2>\u8bba\u6587 1\uff1aFEFormer: Frequency-enhanced Vision Transformer for Generic Knowledge Extraction and Adaptive Feature Fusion in Volumetric Medical Image Segmentation<\/h2>\n<h3>\u57fa\u672c\u4fe1\u606f<\/h3>\n<ul>\n<li>\u6807\u9898\uff1aFEFormer: Frequency-enhanced Vision Transformer for Generic Knowledge Extraction and Adaptive Feature Fusion in Volumetric Medical Image Segmentation<\/li>\n<li>\u4f5c\u8005 \/ \u7b2c\u4e00\u4f5c\u8005\uff1aJin Yang, Xiaobing Yu, Peijie Qiu \/ \u7b2c\u4e00\u4f5c\u8005 Jin Yang<\/li>\n<li>\u65f6\u95f4\uff1a2026-05-12<\/li>\n<li>\u6765\u6e90\uff1aarXiv preprint, 
arXiv:2605.11434v1<\/li>\n<li>Paper page: https:\/\/arxiv.org\/abs\/2605.11434<\/li>\n<li>PDF file \/ PDF link: https:\/\/arxiv.org\/pdf\/2605.11434v1 (downloaded: MEDIA:\/tmp\/medseg_daily_20260514\/feformer.pdf)<\/li>\n<li>Code link: not obtained \/ no official code confirmed on the arXiv page or in the PDF body<\/li>\n<li>Task: 3D \/ volumetric medical image segmentation, covering multi-organ, hepatic vessel\/tumor, brain tumor, and abdominal organ segmentation<\/li>\n<li>Datasets: MICCAI 2022 AMOS CT (300 cases, 15 organ classes), MSD Hepatic Vessel Tumor CT (303 cases, vessel\/tumor), MSD Brain Tumor multimodal MRI (484 cases, ED\/ET\/NET), FLARE CT (361 cases, 4 abdominal organ classes)<\/li>\n<li>Method type: Transformer-based segmentation; frequency-domain attention; wavelet-based feature fusion; 3D medical image segmentation framework<\/li>\n<\/ul>\n<h3>paper-deep-reader Deep-Read Results<\/h3>\n<h4>1. One-sentence takeaway<\/h4>\n<p>FEFormer's main value is that it takes the requirement of \u201chigh-frequency boundaries\/details + low-frequency global structure\u201d, often invoked only verbally in medical segmentation, and grounds it concretely in four places: the 3D Transformer block, the MLP, skip fusion, and the stem bridge, backing this with fairly systematic accuracy, HD95, complexity, and ablation evidence on four volumetric segmentation datasets.<\/p>\n<h4>2. 
Research Background and Core Problem<\/h4>\n<p>The paper studies volumetric medical image segmentation: segmenting organs, vessels, tumors, and brain-tumor subregions in CT\/MRI volumes. The core problem: CNNs\/U-Nets excel at local boundaries but lack long-range context; ViT\/UNETR\/nnFormer\/Swin-UNETR capture global dependencies but tend to weaken high-frequency details, and the plain skip concatenation of a standard encoder-decoder struggles with the frequency and semantic mismatch between shallow detail features and deep semantic features.<\/p>\n<p>The paper map can be summarized as follows: the paper studies global-local and frequency-domain information integration in 3D medical image segmentation; the main action is building FEFormer, inserting FDSA, FGMLP, WAFF, and FCSB into the attention, MLP, decoder fusion, and stem bridge respectively; the authors claim it outperforms VNet, nnU-Net, nnFormer, UNETR, Swin UNETR, MedNeXt, VSmTrans, MixUNETR, and others on AMOS, Hepatic Vessel Tumor, Brain Tumor, and FLARE; the evidence mainly comes from 5-fold cross-validation, DSC\/HD95, Wilcoxon tests, a complexity table, and module ablations; the key failure risk is that with so many frequency-domain modules, if the baseline implementations or training recipes are not fully aligned, the gains may partly come from engineering configuration rather than the frequency-domain mechanisms themselves.<\/p>\n<h4>3. 
Shortcomings of Existing Methods<\/h4>\n<p>The authors identify four classes of shortcomings in existing methods:<\/p>\n<ol>\n<li><strong>CNN \/ U-Net \/ nnU-Net<\/strong>: convolution is strongly local and captures edges and texture well, but models long-range dependencies across organs, across slices, and over global anatomical relations poorly.<\/li>\n<li><strong>ViT \/ UNETR \/ nnFormer-style models<\/strong>: global aggregation in self-attention is biased toward low-frequency semantics and may weaken high-frequency information such as boundaries, small structures, and thin tubular structures.<\/li>\n<li><strong>Standard MLP blocks<\/strong>: they lack an explicit spatial-structure-preserving mechanism and cannot actively distinguish low-frequency semantics from high-frequency boundaries.<\/li>\n<li><strong>Plain skip connection \/ concatenation<\/strong>: directly concatenating shallow encoder features with deep decoder features ignores their semantic gap and differing frequency content, which can make fusion unstable.<\/li>\n<\/ol>\n<p>This problem framing fits medical image segmentation well, especially for small, weakly bounded, highly variable structures such as the pancreas, adrenal glands, hepatic vessels, and enhancing brain-tumor regions.<\/p>\n<h4>4. 
Method Overview<\/h4>\n<p>Route record: Primary adapter = method-algorithm; Secondary adapter = none; Evidence packs = general, experimental-eval, ablation-and-mechanism-isolation, reproducibility-and-compute; Route confidence = high. This route was chosen because the paper's main contribution is a new network architecture, with the evidential load concentrated in cross-dataset experiments, ablations, and complexity comparisons.<\/p>\n<p>FEFormer's overall pipeline is as follows:<\/p>\n<ol>\n<li>The input is a 3D patch; the paper uniformly uses <code>96 \u00d7 96 \u00d7 96<\/code> patches.<\/li>\n<li>The encoder uses a hierarchical ViT-style structure, but replaces standard self-attention with <strong>Frequency-enhanced Dynamic Self-Attention (FDSA)<\/strong>.<\/li>\n<li>The plain MLP in each Transformer block is replaced with a <strong>Frequency-decomposed Gating MLP (FGMLP)<\/strong>.<\/li>\n<li>Skip fusion in the decoder does not use a simple concat; it uses <strong>Wavelet-guided Adaptive Feature Fusion (WAFF)<\/strong>, aligning and fusing encoder\/decoder features in the frequency domain via DWT\/wavelet subbands.<\/li>\n<li>A <strong>Frequency-enabled Cross-scale Stem Bridge (FCSB)<\/strong> is added between the encoder stem and the decoder so that shallow low-level details propagate across scales to the decoding side.<\/li>\n<li>The training loss is cross-entropy loss + Dice loss, with AdamW, 1000 epochs, 5-fold cross-validation, and DSC and HD95 as metrics.<\/li>\n<\/ol>\n<h4>5. 
Core Module Breakdown<\/h4>\n<ul>\n<li>\n<p><strong>FDSA (Frequency-enhanced Dynamic Self-Attention)<\/strong>: input is a token\/feature map; output is a feature fusing a local convolutional bias with frequency-domain attention. It first injects local structure with a large-kernel depthwise convolution, then models long-range dependencies with frequency-domain attention after an FFT, and uses a multi-frequency dynamic mechanism to weight the importance of different frequency bands. It addresses standard attention's insensitivity to details and its weak modeling of channel\/frequency relations. The novelty is fairly clear, but the engineering complexity is high.<\/p>\n<\/li>\n<li>\n<p><strong>FGMLP (Frequency-decomposed Gating MLP)<\/strong>: input is the feature inside a Transformer block; output is the feature after frequency decomposition and gating. Its role is to modulate low-frequency global semantics and high-frequency local details separately, rather than leaving the MLP as pure channel mixing. It is a good candidate for transfer into the feed-forward part of Swin-UNETR, UNETR, or even Mamba blocks.<\/p>\n<\/li>\n<li>\n<p><strong>WAFF (Wavelet-guided Adaptive Feature Fusion)<\/strong>: input is an encoder skip feature and a decoder upsampled feature; output is the fused feature after frequency-domain alignment. A DWT splits features into low- and high-frequency subbands, adaptive fusion is applied per subband, and the result is transformed back to the spatial domain. This module transfers best to U-Net-style architectures, since it directly replaces skip concatenation and is worth trying as a feature fusion block in U-Net, nnU-Net, TransUNet, UNETR, and DAMamba decoders.<\/p>\n<\/li>\n<li>\n<p><strong>FCSB (Frequency-enabled Cross-scale Stem Bridge)<\/strong>: input is shallow stem features plus deeper\/decoder features; output is strengthened propagation of low-level details. It targets the detail loss caused by downsampling in volumetric segmentation, especially for small organs, vessels, and boundaries. It matters more for 3D segmentation; for 2D polyp segmentation it can also inspire a shallow feature bridge.<\/p>\n<\/li>\n<li>\n<p><strong>Fit for polyp segmentation \/ 3D segmentation<\/strong>: FEFormer itself is a 3D volumetric segmentation framework and applies most directly to 3D CT\/MRI; for polyp segmentation, the most transferable pieces are WAFF and FGMLP rather than the whole 3D ViT. Polyps have weak boundaries and color\/texture similar to their surroundings, so high-frequency subbands and wavelet skip fusion may help, but model complexity must be controlled to avoid overfitting on small datasets.<\/p>\n<\/li>\n<\/ul>\n<h4>6. 
Experimental Design and Results<\/h4>\n<p>Experiments cover four datasets and many baselines: VNet, Attention U-Net, nnU-Net, nnFormer, SegFormer, TransBTS, UNETR, Swin UNETR, UX-Net, MedNeXt, TransHRNet, VSmTrans, MixUNETR.<\/p>\n<p>Key results:<\/p>\n<ul>\n<li><strong>AMOS 2022 multi-organ CT<\/strong>: FEFormer mean DSC <strong>90.11\u00b110.60<\/strong>, mean HD95 <strong>1.78\u00b12.04 mm<\/strong>, above nnU-Net's 88.21\u00b114.31 DSC and 2.02\u00b12.69 HD95; FEFormer is reported best on all 15 organs in the table, with Wilcoxon <code>p&lt;0.01<\/code>.<\/li>\n<li><strong>MSD Hepatic Vessel Tumor<\/strong>: FEFormer mean DSC <strong>67.97\u00b120.08<\/strong>, mean HD95 <strong>9.94\u00b18.98<\/strong>; above nnU-Net's 65.96\u00b120.94 and nnFormer's 66.26\u00b120.55. At class level, vessel DSC is 64.96 and tumor DSC 70.98.<\/li>\n<li><strong>FLARE Abdomen Organ<\/strong>: FEFormer mean DSC <strong>95.02\u00b15.96<\/strong>, mean HD95 <strong>1.40\u00b11.05<\/strong>; the four organs liver\/kidney\/spleen\/pancreas score 98.65, 97.25, 98.42, and 85.74 respectively.<\/li>\n<li><strong>Brain Tumor<\/strong>: the paper reports mean DSC <strong>74.97%<\/strong> and mean HD95 <strong>5.01 mm<\/strong>, states that all three subregions ET, ED, and NET achieve the highest DSC, and reports a failure rate (HD95 &gt; 100 mm) of 0.<\/li>\n<li><strong>Complexity<\/strong>: FEFormer has <strong>18.54M<\/strong> parameters and <strong>39.13G<\/strong> FLOPs, below nnU-Net's 68.38M\/357.13G, nnFormer's 149.33M\/284.28G, and VSmTrans's 50.39M\/358.21G, but above the ultra-lightweight SegFormer at 4.50M\/5.02G.<\/li>\n<li><strong>Ablation<\/strong>: the plain ViT baseline scores mean DSC 84.08 and HD95 2.86 on AMOS; adding FDSA gives 86.32; adding FGMLP gives 86.21; FDSA+FGMLP gives 87.56; adding WAFF gives 88.98; the full FEFormer reaches 90.11 with HD95 1.78. This stepwise ablation supports a contribution from each of the four modules.<\/p>\n<\/li>\n<\/ul>\n<h4>7. Judging Experimental Credibility<\/h4>\n<p>Credible aspects: the datasets cover CT, MRI, multi-organ, tumor, and tubular structures; the baselines are fairly complete, including CNN, ViT, hybrid, and modern MedNeXt\/MixUNETR; DSC, HD95, parameters, FLOPs, 5-fold cross-validation, and Wilcoxon tests are reported; the ablation does not just remove one module but validates FDSA, FGMLP, WAFF, and FCSB step by step.<\/p>\n<p>Reasons for caution: first, the paper is an arXiv preprint with no confirmed code, so the reproduction barrier is high; second, with frequency-domain attention, a frequency MLP, wavelet fusion, and a stem bridge all appearing at once, the result may reflect \u201ccombinatorial engineering\u201d rather than the clarity of a single mechanism; third, despite the statistical tests, no external test set or cross-center generalization experiment is shown; fourth, whether the comparisons fully reproduce the same training recipe needs code-level confirmation; fifth, fairness relative to nnU-Net still warrants caution, because nnU-Net 
typically relies on automated configuration and strong engineering details, and the paper's unified patch\/epoch settings are not necessarily equivalent to the best nnU-Net pipeline.<\/p>\n<h4>8. Relation to Mainstream Medical Image Segmentation Frameworks<\/h4>\n<ul>\n<li><strong>U-Net \/ nnU-Net<\/strong>: FEFormer is not an nnU-Net recipe improvement but a more complex Transformer-style encoder-decoder. The most reusable pieces for U-Net are the WAFF skip fusion and the FCSB shallow bridge.<\/li>\n<li><strong>MedNeXt \/ CNN segmentation<\/strong>: the paper uses MedNeXt as a strong CNN-like baseline; FEFormer suggests frequency-domain global modeling can beat large-kernel CNNs on complex organs, though compute cost and training stability need further verification.<\/li>\n<li><strong>UNETR \/ Swin-UNETR \/ TransUNet \/ TransFuse<\/strong>: FEFormer is a frequency-enhanced variant within this lineage, mainly changing the attention, MLP, and feature fusion. In related work on Transformer-based segmentation, cite it chiefly for its \u201cfrequency-aware Transformer for 3D segmentation\u201d positioning.<\/li>\n<li><strong>Mamba \/ VMamba \/ SegMamba \/ DAMamba<\/strong>: FEFormer does not use SSM\/Mamba, but its frequency-domain modules are orthogonal to Mamba. For DAMamba, WAFF or FGMLP could be borrowed to combine Mamba long-range modeling with frequency-aware skip fusion.<\/li>\n<li><strong>Foundation model segmentation<\/strong>: the paper does not follow the SAM\/MedSAM route and does not evaluate promptable\/foundation-model scenarios; it is more of a task-specific 3D segmentation backbone.<\/li>\n<\/ul>\n<h4>9. Value for My Project<\/h4>\n<p>For polyp segmentation, FEFormer is not the most direct baseline: it is primarily a 3D volumetric framework whose parameter count and training cost exceed typical 2D polyp models. But its <strong>WAFF wavelet skip fusion<\/strong> is well worth extracting for lightweight experiments: replace the concat\/add skip in U-Net, PraNet, TransFuse, or DAMamba decoders and check whether boundary Dice, HD95, mIoU, and S-measure improve. For modifying DAMamba, FEFormer is a reminder that emphasizing only Mamba's long-range dependencies may neglect high-frequency boundaries; a \u201cDAMamba encoder + wavelet\/frequency-aware decoder fusion\u201d is worth trying.<\/p>\n<h4>10. Reading Recommendation<\/h4>\n<p><strong>Recommended for deep reading, prioritizing the method figure, FDSA\/WAFF, and the ablation table.<\/strong> If the current goal is 3D CT\/MRI multi-organ or tumor segmentation, a full reproduction is worthwhile; if the goal is 2D polyp segmentation, reproducing the whole model is not recommended; instead extract WAFF\/FCSB as controllable modules for an existing U-Net\/DAMamba framework.<\/p>\n<hr \/>\n<h2>Paper 2: USEMA: a Scalable Efficient Mamba Like Attention for Medical Image Segmentation<\/h2>\n<h3>Basic Information<\/h3>\n<ul>\n<li>Title: USEMA: a Scalable Efficient Mamba Like Attention for Medical Image Segmentation<\/li>\n<li>Authors \/ first author: Elisha Dayag, Nhat Thanh Tran, Jack Xin \/ first author Elisha 
Dayag<\/li>\n<li>Date: 2026-05-11<\/li>\n<li>Source: arXiv preprint, arXiv:2605.11131v1<\/li>\n<li>Paper page: https:\/\/arxiv.org\/abs\/2605.11131<\/li>\n<li>PDF file \/ PDF link: https:\/\/arxiv.org\/pdf\/2605.11131v1 (downloaded: MEDIA:\/tmp\/medseg_daily_20260514\/usema.pdf)<\/li>\n<li>Code link: not obtained \/ no clearly official USEMA repository found via the GitHub API<\/li>\n<li>Task: 2D medical image segmentation; abdominal MRI multi-organ, endoscopic surgical instrument, and microscopy cell instance segmentation<\/li>\n<li>Datasets: MICCAI 2022 AMOS Abdomen MRI (60 scans\/5615 slices train, 50 scans\/3357 slices test, 13 organs), MICCAI 2017 Endovis (1800 train, 1200 test, 7 instrument classes), NeurIPS 2022 Cell Segmentation Challenge (1000 train, 101 test)<\/li>\n<li>Method type: U-Net hybrid architecture; Mamba-like efficient attention; local window attention + global averaging approximation; 2D segmentation backbone<\/li>\n<\/ul>\n<h3>paper-deep-reader Deep-Read Results<\/h3>\n<h4>1. One-sentence takeaway<\/h4>\n<p>USEMA's value is that it offers a clearer local-global attention story than \u201cjust drop a Mamba block into a U-Net\u201d: window attention preserves local selectivity, a global value average approximates the homogenizing tendency of long-sequence self-attention, and a Mamba-like gating folds the result into the U-Net encoder.<\/p>\n<h4>2. 
Research Background and Core Problem<\/h4>\n<p>The paper studies how to capture local detail and global context simultaneously in 2D medical image segmentation. Transformer full self-attention has a global receptive field but <code>O(n^2)<\/code> complexity, which is expensive on large endoscopic, microscopy, or high-resolution medical images; Mamba\/SSMs have linear complexity, but whether their sequential, recurrent mechanism best suits medical segmentation remains to be verified. The paper's core question: can attention's local selectivity and global information be retained while avoiding full attention's quadratic complexity and long-sequence attention dispersion?<\/p>\n<p>The paper map can be summarized as follows: the paper studies efficient global modeling in 2D medical image segmentation; the main action is embedding SEMA attention into a U-Net encoder, forming USEMA; the authors claim USEMA outperforms UNETR\/SwinUNETR\/nnFormer and U-Mamba\/Mamba UNet\/Swin-UMamba\/MLLA-UNet on Abdomen MRI, Endovis, and Microscopy; the evidence mainly comes from DSC\/NSD\/F1 comparisons on 3 public datasets plus an ablation removing global averaging; the key failure risk is the small experimental and ablation scale, plus the missing code, FLOPs\/speed numbers, statistical significance, and stronger nnU-Net\/CNN baselines.<\/p>\n<h4>3. 
Shortcomings of Existing Methods<\/h4>\n<p>The authors point out two shortcomings:<\/p>\n<ol>\n<li><strong>Transformer full self-attention<\/strong>: complexity grows quadratically with the token count; when the sequence is very long, the entries of the softmax attention matrix approach <code>1\/n<\/code>, attention scores become nearly uniform, and the ability to select key tokens degrades. The paper visualizes the attention matrix for UNETR on large-resolution Endovis patches, showing scores concentrated near <code>1\/seq_len<\/code>.<\/li>\n<li><strong>Mamba \/ Mamba-like segmentation<\/strong>: Mamba offers linear complexity and dynamic weights, but its causal recurrence can be read as unnormalized attention with exponential forgetting; medical image segmentation still needs local spatial focus combined with global context, not just a long-sequence scan.<\/li>\n<\/ol>\n<p>USEMA therefore takes a middle route: the local part uses window attention to avoid dispersion and keep focus; the global part uses an arithmetic average as a low-cost global term standing in for long-sequence full attention when it is nearly uniform.<\/p>\n<h4>4. 
Method Overview<\/h4>\n<p>Route record: Primary adapter = method-algorithm; Secondary adapter = none; Evidence packs = general, experimental-eval, ablation-and-mechanism-isolation, reproducibility-and-compute; Route confidence = medium-high. This route was chosen because the paper's contribution is a network architecture and an attention approximation, though its experimental evidence is thinner than FEFormer's.<\/p>\n<p>USEMA's method steps are as follows:<\/p>\n<ol>\n<li>Start from a plain U-Net, keeping the symmetric encoder-decoder and skip connections.<\/li>\n<li>Each encoder building block contains two residual convolution blocks followed by a SEMA block.<\/li>\n<li>A residual block is convolution + instance normalization + LeakyReLU.<\/li>\n<li>Features are reshaped from <code>(B, C, H, W)<\/code> to <code>(B, C, HW)<\/code> before entering the SEMA block.<\/li>\n<li>The SEMA block first applies a conditional positional embedding and layer normalization.<\/li>\n<li>Features split into two branches: one branch, linear + SiLU, acts as Mamba-like gating; the other, linear + depthwise convolution, feeds the SEMA attention.<\/li>\n<li>SEMA attention is defined as <code>SEMA(Q,K,V)=A_w(Q,K,V)+broadcast(1\/n \u03a3_j v_j)<\/code>: the first term is window attention, the second is global arithmetic averaging.<\/li>\n<li>The two branches are merged with a Hadamard product, followed by positional encoding and a feed-forward network.<\/li>\n<li>At the bottleneck, where spatial dimensions are already compressed, the authors use full self-attention; the decoder contains only residual blocks and transposed convolutions, restoring resolution via concat skip connections.<\/li>\n<li>Preprocessing uses the nnUNet framework; training runs 1000 epochs with AdamW, Dice + CE loss, and deep supervision.<\/li>\n<\/ol>\n<h4>5. Core Module Breakdown<\/h4>\n<ul>\n<li>\n<p><strong>Attention dispersion argument<\/strong>: the authors cite and restate the result that, under certain conditions, every entry of long-sequence softmax attention falls between <code>C1\/n<\/code> and <code>C2\/n<\/code>, concluding that long-sequence full attention tends toward uniformity. This is the theoretical motivation for USEMA's \u201cwindow attention + global average\u201d choice, and it is more mechanistic than simply saying full attention is too expensive.<\/p>\n<\/li>\n<li>\n<p><strong>Window attention <code>A_w<\/code><\/strong>: input Q\/K\/V; output the value aggregated within each token's local window. It addresses local selectivity, preventing global softmax from diluting attention over very long sequences. This matters for boundaries, instruments, and local cell structure in medical segmentation.<\/p>\n<\/li>\n<li>\n<p><strong>Global arithmetic averaging<\/strong>: input all value tokens; output <code>1\/n \u03a3 v_j<\/code> broadcast to all tokens. It is a low-cost approximation of the tendency of long-sequence full attention 
toward uniformity. Its advantages are simplicity, linearity, and stability; its drawback is that the global term is too coarse to express structured relations between organs or specific long-range dependencies.<\/p>\n<\/li>\n<li>\n<p><strong>Mamba-like gating branch<\/strong>: linear + SiLU, then a Hadamard product with the attention branch, resembling Mamba's selective gating. It lets the model dynamically filter the mixed local-global information rather than merely fuse it additively.<\/p>\n<\/li>\n<li>\n<p><strong>U-Net integration<\/strong>: SEMA sits only after the encoder blocks; the decoder keeps residual + transposed conv. This makes USEMA lighter than a full Transformer decoder and closer to a transferable U-Net plug-in.<\/p>\n<\/li>\n<li>\n<p><strong>Fit for polyp segmentation \/ 3D segmentation<\/strong>: it is directly worth trying for polyp segmentation, since it is a 2D U-Net-style model and the Endovis experiments show it handles high-resolution endoscopic scenes; but Endovis targets surgical instruments, not polyps, so differences in boundaries\/color\/shape still need validation. For 3D segmentation, SEMA could extend to 3D window attention + global average, but memory, window partitioning, and 3D positional encoding would need redesign.<\/p>\n<\/li>\n<\/ul>\n<h4>6. 
Experimental Design and Results<\/h4>\n<p>The paper compares Transformer and Mamba family baselines on three datasets: UNETR, SwinUNETR, nnFormer, U-Mamba Enc, Mamba UNet, Swin-UMamba, and MLLA-UNet.<\/p>\n<p>Key results:<\/p>\n<ul>\n<li><strong>AMOS Abdomen MRI 2D slice setting<\/strong>: USEMA reaches DSC <strong>0.7704<\/strong>, NSD <strong>0.8345<\/strong> with <strong>52M<\/strong> parameters; slightly above U-Mamba Enc at 0.7625\/0.8327 (67M), and above Mamba UNet 0.7496\/0.8178, Swin-UMamba 0.7054\/0.7647, and nnFormer 0.7279\/0.7963.<\/li>\n<li><strong>Endovis 2017 instrument segmentation<\/strong>: USEMA reaches DSC <strong>0.6463<\/strong>, NSD <strong>0.6621<\/strong> with <strong>52M<\/strong> parameters; above Swin-UMamba 0.6402\/0.6547, U-Mamba Enc 0.6303\/0.6451, Mamba UNet 0.6256\/0.6370, and nnFormer 0.6135\/0.6228.<\/li>\n<li><strong>NeurIPS 2022 Cell Segmentation<\/strong>: USEMA reaches F1 <strong>0.5791<\/strong> with <strong>52M<\/strong> parameters; above U-Mamba Enc 0.5607 (92M), nnFormer 0.5332, Mamba UNet 0.5215, and MLLA-UNet 0.4857.<\/li>\n<li><strong>Ablation of global averaging<\/strong>: removing the attention approximation drops Abdomen MRI from 0.7704\/0.8345 to 0.7574\/0.8214, Endovis from 0.6463\/0.6621 to 0.6218\/0.6367, and Microscopy F1 from 0.5791 to 0.5443. This ablation shows the global average term is not decorative; it genuinely contributes to performance.<\/li>\n<\/ul>\n<h4>7. 
Assessment of Experimental Credibility<\/h4>\n<p>Credible aspects: the motivation is clear and backed by the attention dispersion theory and visualizations; the datasets span MRI, endoscopy, and microscopy, covering diverse resolutions and task types; multiple Transformer\/Mamba baselines are compared; and the global average ablation directly validates the core design.<\/p>\n<p>Weaknesses: first, official code was not obtained, so reproducibility is limited for now; second, no FLOPs, throughput, memory, or training time is reported, so the \u201cefficient\u201d claim rests mainly on inferred structural complexity and the evidence is incomplete; third, there is no statistical significance testing, no mean and variance over multiple runs, and no external generalization study; fourth, the baselines omit strong CNNs such as plain nnU-Net\/UNet++\/PraNet and polyp-specific models, so superiority over a strong U-Net recipe is not established; fifth, the Microscopy task is instance segmentation yet only F1 is reported, leaving the task adaptation underspecified; sixth, the gain over U-Mamba Enc on Abdomen MRI is small (0.7704 vs 0.7625) and should not be oversold.<\/p>\n<h4>8. 
Relation to Mainstream Medical Image Segmentation Frameworks<\/h4>\n<ul>\n<li><strong>U-Net \/ nnU-Net<\/strong>: USEMA is a plug-in style rework of the U-Net encoder; preprocessing uses the nnUNet framework, but it is not the full nnU-Net recipe. Its SEMA block can serve as a drop-in replacement for U-Net bottleneck\/encoder blocks.<\/li>\n<li><strong>MedNeXt \/ CNN segmentation<\/strong>: the paper does not compare against MedNeXt directly. Serious experiments would need MedNeXt or a strong CNN baseline added, to avoid only proving superiority over a subset of Mamba\/Transformer models.<\/li>\n<li><strong>UNETR \/ Swin-UNet \/ TransUNet \/ TransFuse<\/strong>: USEMA belongs to the same hybrid CNN-attention segmentation family as these methods, but it replaces full attention with window attention + global averaging, focusing on long-sequence complexity and attention dispersion.<\/li>\n<li><strong>Mamba \/ VMamba \/ SegMamba \/ DAMamba<\/strong>: USEMA is not a standard SSM scan but Mamba-like attention: it borrows Mamba's gating and exponential-forgetting intuition while remaining an attention approximation at its core. It is directly relevant to DAMamba: the global average term or the local window term could complement the DAMamba scan, especially to reduce sequence scan direction bias.<\/li>\n<li><strong>Foundation model segmentation<\/strong>: no direct relation to SAM\/MedSAM; it follows the specialized small-to-medium network route.<\/li>\n<\/ul>\n<h4>9. 
Value for My Research<\/h4>\n<p>For polyp segmentation and DAMamba, USEMA is valuable but should be reproduced with care. It offers a clear direction for reworking DAMamba: instead of only comparing CNN vs Mamba, one can design hybrid modules of <strong>local window selection + cheap global context + gating<\/strong>. For polyp segmentation, it can be tested on CVC-ClinicDB, Kvasir-SEG, CVC-ColonDB, ETIS, and EndoScene: place the SEMA block in the upper encoder layers or the bottleneck, compare against U-Net, TransFuse, VM-UNet, and DAMamba, and additionally report FPS\/FLOPs\/Params. Since USEMA has 52M parameters it is not lightweight; if the goal is real-time polyp segmentation, apply channel scaling or use it only at low-resolution stages.<\/p>\n<h4>10. 
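A Throughput Reporting Sketch<\/h4>
<p>Since the plan above stresses reporting FPS\/Params alongside Dice, here is a tiny harness one could use; <code>model<\/code> is any callable standing in for a USEMA or DAMamba variant, and all names are hypothetical.<\/p>

```python
import time
import numpy as np

def measure_fps(model, batch, warmup=3, iters=10):
    # model: callable mapping a batch to predictions (stand-in network);
    # batch: array shaped (B, C, H, W); returns images per second
    for _ in range(warmup):
        model(batch)          # warm-up runs, excluded from timing
    t0 = time.perf_counter()
    for _ in range(iters):
        model(batch)
    dt = time.perf_counter() - t0
    return iters * batch.shape[0] / dt

def count_params(weights):
    # weights: iterable of parameter arrays; returns the total count,
    # the number usually quoted as Params (e.g. 52M for USEMA)
    return sum(int(w.size) for w in weights)
```

<p>Reporting these next to Dice keeps any efficiency claim measured rather than inferred from structure alone.<\/p>
<h4>10. 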
Reading Recommendation<\/h4>\n<p><strong>Read the methods section closely; read the experiments with skepticism.<\/strong> Its theoretical motivation and module design are inspiring for DAMamba\/efficient attention work; but given the missing code, speed metrics, statistical significance, and strong CNN\/polyp baselines, it should not be treated as a fully validated SOTA, only as a source of adaptable modules and a related-work candidate.<\/p>\n<hr \/>\n<h2>Today's Recommendation Priority<\/h2>\n<ol>\n<li><strong>FEFormer<\/strong>: better suited to 3D medical image segmentation, Transformer-based segmentation, and frequency-aware feature fusion; its experiments are more systematic and its ablations more complete, so it merits a full close reading with a breakdown of WAFF\/FDSA.<\/li>\n<li><strong>USEMA<\/strong>: better suited to DAMamba \/ Mamba-like efficient attention \/ U-Net plug-in work; the method is clearly conceived, but the experimental evidence is thin, so treat it as module inspiration rather than a strong SOTA baseline.<\/li>\n<\/ol>\n<h2>Today's PDF Availability<\/h2>\n<ul>\n<li>Paper 1: PDF attached \/ PDF link provided: MEDIA:\/tmp\/medseg_daily_20260514\/feformer.pdf; https:\/\/arxiv.org\/pdf\/2605.11434v1<\/li>\n<li>Paper 2: PDF attached \/ PDF link provided: MEDIA:\/tmp\/medseg_daily_20260514\/usema.pdf; https:\/\/arxiv.org\/pdf\/2605.11131v1<\/li>\n<\/ul>\n<h2>Today's Actionable Suggestions<\/h2>\n<ol>\n<li>If you are currently working on <strong>DAMamba or polyp 
segmentation<\/strong>, first extract USEMA's \u201cwindow attention + global average + gating\u201d idea and run a lightweight replacement experiment in the DAMamba bottleneck or upper encoder layers, while also reporting FLOPs\/FPS rather than comparing Dice alone.<\/li>\n<li>If you care about <strong>boundary quality and skip fusion<\/strong>, reproduce a simplified WAFF from FEFormer: replace only the skip concat in U-Net\/DAMamba with a fusion of wavelet low\/high-frequency subbands, and check whether HD95, boundary F-score, and mIoU improve.<\/li>\n<li>In related-work writing, place FEFormer under frequency-aware Transformer \/ 3D volumetric segmentation and USEMA under Mamba-like efficient attention \/ hybrid U-Net; label both as arXiv preprints and avoid describing them as accepted top-venue papers.<\/li>\n<\/ol>\n","protected":false,"excerpt":{"rendered":"<p>Daily close reading of the latest medical image segmentation papers Today's conclusion Today's search of arXiv for 2026-05-11 to 2026-05-12 found multiple medical image 
&#8230;<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"emotion":"","emotion_color":"","title_style":"","license":"","footnotes":""},"categories":[85],"tags":[],"class_list":["post-1048","post","type-post","status-publish","format-standard","hentry","category-85"],"views":10,"_links":{"self":[{"href":"https:\/\/www.eutaboo.com\/index.php\/wp-json\/wp\/v2\/posts\/1048","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.eutaboo.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.eutaboo.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.eutaboo.com\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.eutaboo.com\/index.php\/wp-json\/wp\/v2\/comments?post=1048"}],"version-history":[{"count":0,"href":"https:\/\/www.eutaboo.com\/index.php\/wp-json\/wp\/v2\/posts\/1048\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.eutaboo.com\/index.php\/wp-json\/wp\/v2\/media?parent=1048"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.eutaboo.com\/index.php\/wp-json\/wp\/v2\/categories?post=1048"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.eutaboo.com\/index.php\/wp-json\/wp\/v2\/tags?post=1048"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}