feat: complete Chinese translation of maths-cs-ai-compendium (Maths · CS · AI Compendium)
Translated from the English original maths-cs-ai-compendium; all 20 chapters complete. Ch. 01 Vectors | Ch. 02 Matrices | Ch. 03 Calculus; Ch. 04 Statistics | Ch. 05 Probability | Ch. 06 Machine Learning; Ch. 07 Computational Linguistics | Ch. 08 Computer Vision | Ch. 09 Audio & Speech; Ch. 10 Multimodal Learning | Ch. 11 Autonomous Systems | Ch. 12 Graph Neural Networks; Ch. 13 Computing & Operating Systems | Ch. 14 Data Structures & Algorithms; Ch. 15 Production-Grade Software Engineering | Ch. 16 SIMD & GPU Programming; Ch. 17 AI Inference | Ch. 18 ML System Design; Ch. 19 Applied AI | Ch. 20 Frontier AI. Translation notes: all maths formulas ($...$ / $$...$$), code blocks, and image references are preserved verbatim; mkdocs.yml is configured with Chinese navigation plus language: zh; README.md is translated into Chinese (and doubles as docs/index.md); the docs/ directory contains symlinks to the chapter files; roughly 29,000 lines of Chinese content, excluding the .cache/ build cache.
@@ -0,0 +1,66 @@
<svg width="720" height="280" xmlns="http://www.w3.org/2000/svg">
<defs>
<marker id="rlhf-arrow" markerWidth="7" markerHeight="5" refX="7" refY="2.5" orient="auto">
<polygon points="0 0, 7 2.5, 0 5" fill="#555"/>
</marker>
</defs>
<text x="360" y="22" fill="#333" font-size="14" font-weight="bold" text-anchor="middle">RLHF: Reinforcement Learning from Human Feedback</text>

<!-- Stage 1: SFT -->
<rect x="20" y="45" width="200" height="115" rx="10" fill="#3498db" opacity="0.06" stroke="#3498db" stroke-width="2"/>
<text x="120" y="68" fill="#3498db" font-size="12" font-weight="bold" text-anchor="middle">Stage 1: SFT</text>

<rect x="40" y="78" width="160" height="24" rx="4" fill="#f5f5f5" stroke="#ccc" stroke-width="1"/>
<text x="120" y="95" fill="#555" font-size="9" text-anchor="middle">Human-written responses</text>

<line x1="120" y1="102" x2="120" y2="118" stroke="#555" stroke-width="1" marker-end="url(#rlhf-arrow)"/>

<rect x="55" y="120" width="130" height="28" rx="5" fill="#3498db" opacity="0.15" stroke="#3498db" stroke-width="1.5"/>
<text x="120" y="139" fill="#3498db" font-size="10" text-anchor="middle" font-weight="bold">SFT Model</text>

<!-- Arrow 1→2 -->
<line x1="220" y1="100" x2="255" y2="100" stroke="#555" stroke-width="2" marker-end="url(#rlhf-arrow)"/>

<!-- Stage 2: Reward Model -->
<rect x="260" y="45" width="200" height="115" rx="10" fill="#e74c3c" opacity="0.06" stroke="#e74c3c" stroke-width="2"/>
<text x="360" y="68" fill="#e74c3c" font-size="12" font-weight="bold" text-anchor="middle">Stage 2: Reward Model</text>

<rect x="278" y="78" width="70" height="24" rx="4" fill="#27ae60" opacity="0.15" stroke="#27ae60" stroke-width="1"/>
<text x="313" y="94" fill="#27ae60" font-size="9" text-anchor="middle">Response A</text>
<rect x="355" y="78" width="70" height="24" rx="4" fill="#e74c3c" opacity="0.15" stroke="#e74c3c" stroke-width="1"/>
<text x="390" y="94" fill="#e74c3c" font-size="9" text-anchor="middle">Response B</text>

<text x="432" y="94" fill="#333" font-size="9">Human</text>
<text x="432" y="106" fill="#333" font-size="9">ranks</text>

<line x1="360" y1="102" x2="360" y2="118" stroke="#555" stroke-width="1" marker-end="url(#rlhf-arrow)"/>

<rect x="295" y="120" width="130" height="28" rx="5" fill="#e74c3c" opacity="0.15" stroke="#e74c3c" stroke-width="1.5"/>
<text x="360" y="139" fill="#e74c3c" font-size="10" text-anchor="middle" font-weight="bold">Reward r(x, y)</text>

<!-- Arrow 2→3 -->
<line x1="460" y1="100" x2="495" y2="100" stroke="#555" stroke-width="2" marker-end="url(#rlhf-arrow)"/>

<!-- Stage 3: RL Fine-tuning -->
<rect x="500" y="45" width="205" height="115" rx="10" fill="#27ae60" opacity="0.06" stroke="#27ae60" stroke-width="2"/>
<text x="602" y="68" fill="#27ae60" font-size="12" font-weight="bold" text-anchor="middle">Stage 3: RL (PPO)</text>

<rect x="518" y="78" width="80" height="24" rx="4" fill="#3498db" opacity="0.15" stroke="#3498db" stroke-width="1"/>
<text x="558" y="95" fill="#3498db" font-size="9" text-anchor="middle">LM generates</text>

<rect x="608" y="78" width="80" height="24" rx="4" fill="#e74c3c" opacity="0.15" stroke="#e74c3c" stroke-width="1"/>
<text x="648" y="95" fill="#e74c3c" font-size="9" text-anchor="middle">RM scores</text>

<line x1="602" y1="102" x2="602" y2="118" stroke="#555" stroke-width="1" marker-end="url(#rlhf-arrow)"/>

<rect x="537" y="120" width="130" height="28" rx="5" fill="#27ae60" opacity="0.15" stroke="#27ae60" stroke-width="1.5"/>
<text x="602" y="139" fill="#27ae60" font-size="10" text-anchor="middle" font-weight="bold">RLHF Model</text>

<!-- Bottom: DPO comparison -->
<rect x="120" y="185" width="480" height="80" rx="10" fill="#9b59b6" opacity="0.05" stroke="#9b59b6" stroke-width="1.5"/>
<text x="360" y="208" fill="#9b59b6" font-size="12" font-weight="bold" text-anchor="middle">DPO: Direct Preference Optimisation (simpler alternative)</text>

<text x="360" y="228" fill="#555" font-size="10" text-anchor="middle">Skips the reward model entirely. Trains directly on preference pairs using a classification loss.</text>
<text x="360" y="245" fill="#555" font-size="10" text-anchor="middle">Optimises the same objective as RLHF, but with no PPO and no reward model, just supervised training.</text>
<text x="360" y="260" fill="#9b59b6" font-size="9" text-anchor="middle">Stages 2 + 3 collapse into a single step: compare log-probs of preferred vs dispreferred completions.</text>
</svg>
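The DPO shortcut in the diagram, collapsing the reward model and PPO stages into one supervised step over preference pairs, can be sketched as a minimal loss for a single pair. This is a hedged illustration in plain Python, not code from the compendium: the function name and the scalar summed log-probabilities are hypothetical placeholders.

```python
import math

def dpo_loss(logp_chosen, logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """DPO loss for one preference pair (illustrative sketch).

    logp_* are summed token log-probs of a completion under the policy
    being trained; ref_logp_* are the same quantities under the frozen
    SFT reference model. beta controls the implicit KL penalty strength.
    """
    # Implicit rewards: beta-scaled log-ratio of policy to reference.
    r_chosen = beta * (logp_chosen - ref_logp_chosen)
    r_rejected = beta * (logp_rejected - ref_logp_rejected)
    # Bradley-Terry classification loss: -log sigmoid(margin),
    # which pushes the margin r_chosen - r_rejected upward.
    margin = r_chosen - r_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# When the policy favours the chosen completion more strongly than the
# reference does, the margin is positive and the loss drops below log(2).
loss = dpo_loss(-12.0, -15.0, -13.0, -14.0)
```

With identical policy and reference log-probs the margin is zero and the loss sits at log(2), which is why training only moves the policy where it disagrees with human preferences relative to the reference.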