<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">

  <title><![CDATA[Yi's Blog]]></title>
  <link href="https://wangyi.ai/atom.xml" rel="self"/>
  <link href="https://wangyi.ai/"/>
  <updated>2026-02-16T12:30:03-08:00</updated>
  <id>https://wangyi.ai/</id>
  <author>
    <name><![CDATA[Yi]]></name>
    
  </author>
  <generator uri="http://octopress.org/">Octopress</generator>

  
  <entry>
    <title type="html"><![CDATA[Solving Jane Street's 'Dropped a Neural Net' Puzzle]]></title>
    <link href="https://wangyi.ai/blog/2026/02/16/solving-jane-street-dropped-neural-net/"/>
    <updated>2026-02-16T12:00:00-08:00</updated>
    <id>https://wangyi.ai/blog/2026/02/16/solving-jane-street-dropped-neural-net</id>
    <content type="html"><![CDATA[<p>Jane Street’s January 2026 puzzle<sup id="fnref:1"><a href="#fn:1" class="footnote" rel="footnote" role="doc-noteref">1</a></sup>, <a href="https://huggingface.co/spaces/jane-street/droppedaneuralnet">“Dropped a Neural Net”</a>, presents a deceptively simple premise: a neural network was “dropped” and its 97 pieces scattered. Your job is to put them back together. Behind this simple framing lies a deep combinatorial optimization problem that I solved two different ways — first with gradient-based permutation learning and combined swaps, then again with a simpler approach that revealed a key insight: <strong>pairing corrections unlock cascading improvements in ordering</strong>.</p>

<!-- more -->

<h2 id="the-problem">The Problem</h2>

<p>You’re given 97 weight/bias files (<code class="language-plaintext highlighter-rouge">piece_0.pth</code> through <code class="language-plaintext highlighter-rouge">piece_96.pth</code>) and a dataset (<code class="language-plaintext highlighter-rouge">historical_data.csv</code> with 10,000 rows of 48 input features, plus <code class="language-plaintext highlighter-rouge">pred</code> and <code class="language-plaintext highlighter-rouge">true</code> columns). The neural network architecture is:</p>

<ul>
  <li><strong>48 residual blocks</strong>, each consisting of:
    <ul>
      <li>An “inp” layer: <code class="language-plaintext highlighter-rouge">Linear(48 → 96)</code> followed by ReLU</li>
      <li>An “out” layer: <code class="language-plaintext highlighter-rouge">Linear(96 → 48)</code></li>
      <li>A residual connection: <code class="language-plaintext highlighter-rouge">x = x + out(relu(inp(x)))</code></li>
    </ul>
  </li>
  <li><strong>1 final layer</strong>: <code class="language-plaintext highlighter-rouge">Linear(48 → 1)</code> producing the prediction</li>
</ul>
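<p>Under this description, a forward pass can be sketched directly in NumPy. The random weights below are stand-ins for the actual pieces, purely to show the wiring:</p>

```python
import numpy as np

rng = np.random.default_rng(0)

# Random stand-ins for the 97 pieces (not the actual puzzle weights).
inp_W = rng.normal(size=(48, 96, 48)) * 0.05   # 48 "inp" pieces, shape (96, 48)
inp_b = rng.normal(size=(48, 96)) * 0.05
out_W = rng.normal(size=(48, 48, 96)) * 0.05   # 48 "out" pieces, shape (48, 96)
out_b = rng.normal(size=(48, 48)) * 0.05
fin_W = rng.normal(size=(1, 48)) * 0.05        # the single (1, 48) final piece
fin_b = rng.normal(size=(1,)) * 0.05

def forward(x, order, pairing):
    """order: sequence of inp indices; pairing[i]: out index paired with inp i."""
    for i in order:
        h = np.maximum(x @ inp_W[i].T + inp_b[i], 0.0)       # Linear(48->96) + ReLU
        x = x + h @ out_W[pairing[i]].T + out_b[pairing[i]]  # residual connection
    return x @ fin_W.T + fin_b                               # Linear(48->1)

x = rng.normal(size=(5, 48))
pred = forward(x, list(range(48)), list(range(48)))
print(pred.shape)  # (5, 1)
```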

<p>The 97 pieces split into three groups by weight shape:</p>
<ul>
  <li>48 pieces with shape <code class="language-plaintext highlighter-rouge">(96, 48)</code> — the inp layers</li>
  <li>48 pieces with shape <code class="language-plaintext highlighter-rouge">(48, 96)</code> — the out layers</li>
  <li>1 piece with shape <code class="language-plaintext highlighter-rouge">(1, 48)</code> — the final layer</li>
</ul>

<p>The solution is a permutation of indices 0–96 specifying which piece goes where. Positions 0,2,4,…,94 hold inp layers, positions 1,3,5,…,95 hold out layers, and position 96 holds the final layer. The solution is verified by <strong>SHA-256 hash</strong> — there’s exactly one correct answer, no MSE threshold to meet.</p>

<p>This means you need to solve two sub-problems simultaneously:</p>
<ol>
  <li><strong>Pairing</strong>: Which inp layer goes with which out layer in each block?</li>
  <li><strong>Ordering</strong>: In what sequence do the 48 blocks execute?</li>
</ol>

<p>The search space is enormous: 48! × 48! ≈ 10<sup>122</sup> possible configurations.</p>
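<p>The size of this space can be checked exactly with the standard library:</p>

```python
import math

# 48! block orderings times 48! inp-to-out pairings
configs = math.factorial(48) ** 2
print(len(str(configs)))  # 123 decimal digits, i.e. roughly 1.5 * 10^122
```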

<h2 id="phase-1-first-order-approximations-mse-07">Phase 1: First-Order Approximations (MSE ~0.7)</h2>

<p>My first instinct was to exploit the linear structure. If all 48 blocks see roughly the same input <code class="language-plaintext highlighter-rouge">X</code> (a first-order approximation), then each block’s contribution is independent, and we can use the <strong>Hungarian algorithm</strong> to find the optimal pairing.</p>

<p>For each candidate pair <code class="language-plaintext highlighter-rouge">(i, j)</code>, I computed the block’s effect on the prediction:</p>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">h</span> <span class="o">=</span> <span class="n">F</span><span class="p">.</span><span class="nf">relu</span><span class="p">(</span><span class="n">F</span><span class="p">.</span><span class="nf">linear</span><span class="p">(</span><span class="n">X</span><span class="p">,</span> <span class="n">L1_W</span><span class="p">[</span><span class="n">i</span><span class="p">],</span> <span class="n">L1_B</span><span class="p">[</span><span class="n">i</span><span class="p">]))</span>
<span class="n">delta</span> <span class="o">=</span> <span class="n">F</span><span class="p">.</span><span class="nf">linear</span><span class="p">(</span><span class="n">h</span><span class="p">,</span> <span class="n">L2_W</span><span class="p">[</span><span class="n">j</span><span class="p">],</span> <span class="n">L2_B</span><span class="p">[</span><span class="n">j</span><span class="p">])</span>  <span class="c1"># (N, 48)
</span><span class="n">pred_delta</span> <span class="o">=</span> <span class="p">(</span><span class="n">delta</span> <span class="o">*</span> <span class="n">l3_dir</span><span class="p">).</span><span class="nf">sum</span><span class="p">(</span><span class="n">dim</span><span class="o">=</span><span class="mi">1</span><span class="p">)</span> <span class="o">*</span> <span class="n">l3_w</span><span class="p">.</span><span class="nf">norm</span><span class="p">()</span>
</code></pre></div></div>

<p>Then built a cost matrix and ran <code class="language-plaintext highlighter-rouge">linear_sum_assignment</code>. This got MSE down to ~0.7 — a starting point, but far from correct. The first-order approximation breaks down because blocks modify <code class="language-plaintext highlighter-rouge">x</code> sequentially, and the cumulative change is large (~6× the input norm).</p>
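<p>For reference, here is the shape of that assignment step with <code class="language-plaintext highlighter-rouge">scipy.optimize.linear_sum_assignment</code>, on a planted toy cost matrix rather than the real block costs:</p>

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)

# Toy cost matrix: pairing (i, j) is cheap exactly when j is i's hidden partner.
hidden = rng.permutation(8)                 # the "true" pairing we hope to recover
cost = rng.uniform(1.0, 2.0, size=(8, 8))   # noise costs for all wrong pairs
cost[np.arange(8), hidden] = 0.0            # correct pairs cost nothing

rows, cols = linear_sum_assignment(cost)    # minimizes total assignment cost
print((cols == hidden).all())               # the optimal assignment recovers the pairing
```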

<h2 id="phase-2-gumbel-sinkhorn--differentiable-permutation-learning-mse-003">Phase 2: Gumbel-Sinkhorn — Differentiable Permutation Learning (MSE ~0.03)</h2>

<p>The breakthrough came from treating permutations as differentiable objects using the <strong>Gumbel-Sinkhorn</strong> framework.</p>

<h3 id="the-key-idea">The Key Idea</h3>

<p>Instead of searching over discrete permutations, parameterize a continuous relaxation. A 48×48 matrix of learnable logits <code class="language-plaintext highlighter-rouge">log_alpha</code> is transformed into a doubly-stochastic matrix (a “soft permutation”) via iterated row/column normalization (Sinkhorn’s algorithm):</p>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">def</span> <span class="nf">sinkhorn</span><span class="p">(</span><span class="n">log_alpha</span><span class="p">,</span> <span class="n">n_iters</span><span class="o">=</span><span class="mi">25</span><span class="p">,</span> <span class="n">tau</span><span class="o">=</span><span class="mf">1.0</span><span class="p">):</span>
    <span class="n">log_alpha</span> <span class="o">=</span> <span class="n">log_alpha</span> <span class="o">/</span> <span class="n">tau</span>
    <span class="k">for</span> <span class="n">_</span> <span class="ow">in</span> <span class="nf">range</span><span class="p">(</span><span class="n">n_iters</span><span class="p">):</span>
        <span class="n">log_alpha</span> <span class="o">=</span> <span class="n">log_alpha</span> <span class="o">-</span> <span class="n">torch</span><span class="p">.</span><span class="nf">logsumexp</span><span class="p">(</span><span class="n">log_alpha</span><span class="p">,</span> <span class="n">dim</span><span class="o">=</span><span class="mi">1</span><span class="p">,</span> <span class="n">keepdim</span><span class="o">=</span><span class="bp">True</span><span class="p">)</span>
        <span class="n">log_alpha</span> <span class="o">=</span> <span class="n">log_alpha</span> <span class="o">-</span> <span class="n">torch</span><span class="p">.</span><span class="nf">logsumexp</span><span class="p">(</span><span class="n">log_alpha</span><span class="p">,</span> <span class="n">dim</span><span class="o">=</span><span class="mi">0</span><span class="p">,</span> <span class="n">keepdim</span><span class="o">=</span><span class="bp">True</span><span class="p">)</span>
    <span class="k">return</span> <span class="n">log_alpha</span><span class="p">.</span><span class="nf">exp</span><span class="p">()</span>
</code></pre></div></div>

<p>Adding Gumbel noise before normalization enables exploration, and annealing the temperature <code class="language-plaintext highlighter-rouge">tau</code> from high to low gradually sharpens the soft permutation toward a hard one. The MSE loss is fully differentiable through this soft permutation, so we can use Adam to optimize the logits.</p>
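<p>A NumPy analogue makes the two ingredients concrete: Gumbel noise perturbs the logits, and lowering <code class="language-plaintext highlighter-rouge">tau</code> sharpens the doubly-stochastic output toward a hard permutation (the 6×6 size and seed are arbitrary):</p>

```python
import numpy as np
from scipy.special import logsumexp

def sinkhorn_np(log_alpha, n_iters=50, tau=1.0):
    # Same row/column log-normalization as the PyTorch version above
    log_alpha = log_alpha / tau
    for _ in range(n_iters):
        log_alpha = log_alpha - logsumexp(log_alpha, axis=1, keepdims=True)
        log_alpha = log_alpha - logsumexp(log_alpha, axis=0, keepdims=True)
    return np.exp(log_alpha)

rng = np.random.default_rng(0)
logits = rng.normal(size=(6, 6))
gumbel = -np.log(-np.log(rng.uniform(size=(6, 6))))  # Gumbel(0, 1) noise

P_warm = sinkhorn_np(logits + gumbel, tau=1.0)   # soft: mass spread across columns
P_cold = sinkhorn_np(logits + gumbel, tau=0.05)  # sharp: close to a hard permutation

print(np.allclose(P_cold.sum(axis=0), 1.0))                   # columns sum to 1
print(P_cold.max(axis=1).mean() > P_warm.max(axis=1).mean())  # annealing sharpens rows
```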

<h3 id="alternating-optimization">Alternating Optimization</h3>

<p>Jointly optimizing both the ordering permutation and the pairing permutation is expensive — the forward pass with two soft permutations involves <code class="language-plaintext highlighter-rouge">O(48³)</code> operations per position. The key insight was to <strong>alternate</strong>:</p>

<ol>
  <li><strong>Fix pairing, optimize ordering</strong>: The soft forward pass weights different block orderings:
    <div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">def</span> <span class="nf">forward_soft_order</span><span class="p">(</span><span class="n">x</span><span class="p">,</span> <span class="n">pairing</span><span class="p">,</span> <span class="n">order_weights</span><span class="p">):</span>
 <span class="k">for</span> <span class="n">pos</span> <span class="ow">in</span> <span class="nf">range</span><span class="p">(</span><span class="mi">48</span><span class="p">):</span>
     <span class="c1"># Precompute all block deltas with fixed pairing
</span>     <span class="n">all_deltas</span> <span class="o">=</span> <span class="p">[</span><span class="nf">block_i_j</span><span class="p">(</span><span class="n">x</span><span class="p">)</span> <span class="k">for</span> <span class="n">i</span><span class="p">,</span><span class="n">j</span> <span class="ow">in</span> <span class="n">pairing</span><span class="p">]</span>
     <span class="c1"># Weighted combination based on soft ordering
</span>     <span class="n">delta</span> <span class="o">=</span> <span class="nf">einsum</span><span class="p">(</span><span class="sh">'</span><span class="s">i,bid-&gt;bd</span><span class="sh">'</span><span class="p">,</span> <span class="n">order_weights</span><span class="p">[</span><span class="n">pos</span><span class="p">],</span> <span class="n">all_deltas</span><span class="p">)</span>
     <span class="n">x</span> <span class="o">=</span> <span class="n">x</span> <span class="o">+</span> <span class="n">delta</span>
</code></pre></div>    </div>
  </li>
  <li><strong>Fix ordering, optimize pairing</strong>: Each block position softly selects among all possible out layers:
    <div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">def</span> <span class="nf">forward_soft_pair</span><span class="p">(</span><span class="n">x</span><span class="p">,</span> <span class="n">order</span><span class="p">,</span> <span class="n">pair_weights</span><span class="p">):</span>
 <span class="k">for</span> <span class="n">inp_idx</span> <span class="ow">in</span> <span class="n">order</span><span class="p">:</span>
     <span class="n">h</span> <span class="o">=</span> <span class="nf">relu</span><span class="p">(</span><span class="nf">linear</span><span class="p">(</span><span class="n">x</span><span class="p">,</span> <span class="n">L1_W</span><span class="p">[</span><span class="n">inp_idx</span><span class="p">],</span> <span class="n">L1_B</span><span class="p">[</span><span class="n">inp_idx</span><span class="p">]))</span>
     <span class="c1"># Soft-select out layer
</span>     <span class="n">weighted_w</span> <span class="o">=</span> <span class="nf">einsum</span><span class="p">(</span><span class="sh">'</span><span class="s">j,jdo-&gt;do</span><span class="sh">'</span><span class="p">,</span> <span class="n">pair_weights</span><span class="p">[</span><span class="n">inp_idx</span><span class="p">],</span> <span class="n">L2_W</span><span class="p">)</span>
     <span class="n">delta</span> <span class="o">=</span> <span class="nf">linear</span><span class="p">(</span><span class="n">h</span><span class="p">,</span> <span class="n">weighted_w</span><span class="p">,</span> <span class="n">weighted_b</span><span class="p">)</span>
     <span class="n">x</span> <span class="o">=</span> <span class="n">x</span> <span class="o">+</span> <span class="n">delta</span>
</code></pre></div>    </div>
  </li>
</ol>

<p>Each sub-problem only involves one 48×48 permutation matrix, making it much faster. After optimization, I extract hard permutations using the Hungarian algorithm on the negative logits.</p>
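<p>Rounding a converged soft permutation to a hard one is itself a tiny assignment problem: pick the hard permutation that captures the most mass. Shown here on the soft matrix directly; running it on the negative logits is equivalent in spirit:</p>

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# A 3x3 soft (doubly stochastic) matrix, e.g. the output of Sinkhorn iteration.
soft = np.array([[0.7, 0.2, 0.1],
                 [0.2, 0.1, 0.7],
                 [0.1, 0.7, 0.2]])

rows, cols = linear_sum_assignment(-soft)  # negate to maximize selected mass
print(cols)  # [0 2 1]: row 0 -> col 0, row 1 -> col 2, row 2 -> col 1
```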

<p>With 5-6 alternations of 500-800 gradient steps each, MSE dropped from ~0.7 to <strong>~0.03</strong> — an order of magnitude better than first-order methods.</p>

<h3 id="why-alternating-works">Why Alternating Works</h3>

<p>Alternating optimization works here because the ordering and pairing sub-problems are partially decoupled. Fixing one makes the other a “standard” assignment problem with a smooth loss landscape. The Gumbel noise acts as a form of stochastic exploration, and the temperature annealing provides a natural curriculum from exploration to exploitation.</p>

<h2 id="phase-3-local-search--getting-stuck-mse-003">Phase 3: Local Search — Getting Stuck (MSE ~0.03)</h2>

<p>With a good Gumbel-Sinkhorn solution in hand, I tried various local search strategies:</p>

<ul>
  <li><strong>2-opt</strong>: Swap pairs of positions in the ordering, or pairs of pairings</li>
  <li><strong>3-opt</strong>: Try all triples of positions with all 6 permutations</li>
  <li><strong>Insertion moves</strong>: Remove a block and reinsert at every other position</li>
  <li><strong>Coordinate descent</strong>: For each position, try all 48×48 possible replacements</li>
</ul>

<p>None of these could escape the MSE ~0.03 basin. The solution was at a strict local minimum for all single-element and pair-element moves. Multiple random restarts with the Gumbel approach also converged to similar MSE values.</p>

<h2 id="phase-4-two-paths-to-the-solution-mse-0008--00">Phase 4: Two Paths to the Solution (MSE 0.008 → 0.0)</h2>

<p>From MSE ~0.008, I found two different approaches that both reach MSE = 0. Each reveals something different about the problem structure.</p>

<h3 id="approach-a-combined-2-opt">Approach A: Combined 2-opt</h3>

<p>The first insight was that standard 2-opt treats order swaps and pairing swaps as <strong>independent moves</strong>. But the correct solution might require simultaneously changing both the order AND the pairing of two positions.</p>

<p><strong>Combined 2-opt</strong> tests all three modifications for each pair of positions <code class="language-plaintext highlighter-rouge">(p1, p2)</code>:</p>
<ol>
  <li>Swap their order positions only</li>
  <li>Swap their pairings only</li>
  <li>Swap both order AND pairing simultaneously</li>
</ol>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">for</span> <span class="n">p1</span> <span class="ow">in</span> <span class="nf">range</span><span class="p">(</span><span class="mi">48</span><span class="p">):</span>
    <span class="k">for</span> <span class="n">p2</span> <span class="ow">in</span> <span class="nf">range</span><span class="p">(</span><span class="n">p1</span><span class="o">+</span><span class="mi">1</span><span class="p">,</span> <span class="mi">48</span><span class="p">):</span>
        <span class="n">i1</span><span class="p">,</span> <span class="n">i2</span> <span class="o">=</span> <span class="n">order</span><span class="p">[</span><span class="n">p1</span><span class="p">],</span> <span class="n">order</span><span class="p">[</span><span class="n">p2</span><span class="p">]</span>
        <span class="n">j1</span><span class="p">,</span> <span class="n">j2</span> <span class="o">=</span> <span class="n">pairing</span><span class="p">[</span><span class="n">i1</span><span class="p">],</span> <span class="n">pairing</span><span class="p">[</span><span class="n">i2</span><span class="p">]</span>

        <span class="k">for</span> <span class="n">swap_order</span><span class="p">,</span> <span class="n">swap_pair</span> <span class="ow">in</span> <span class="p">[(</span><span class="bp">True</span><span class="p">,</span><span class="bp">False</span><span class="p">),</span> <span class="p">(</span><span class="bp">False</span><span class="p">,</span><span class="bp">True</span><span class="p">),</span> <span class="p">(</span><span class="bp">True</span><span class="p">,</span><span class="bp">True</span><span class="p">)]:</span>
            <span class="k">if</span> <span class="n">swap_order</span><span class="p">:</span> <span class="n">order</span><span class="p">[</span><span class="n">p1</span><span class="p">],</span> <span class="n">order</span><span class="p">[</span><span class="n">p2</span><span class="p">]</span> <span class="o">=</span> <span class="n">i2</span><span class="p">,</span> <span class="n">i1</span>
            <span class="k">if</span> <span class="n">swap_pair</span><span class="p">:</span> <span class="n">pairing</span><span class="p">[</span><span class="n">i1</span><span class="p">],</span> <span class="n">pairing</span><span class="p">[</span><span class="n">i2</span><span class="p">]</span> <span class="o">=</span> <span class="n">j2</span><span class="p">,</span> <span class="n">j1</span>
            <span class="n">mse</span> <span class="o">=</span> <span class="nf">full_eval</span><span class="p">(</span><span class="n">order</span><span class="p">,</span> <span class="n">pairing</span><span class="p">)</span>
            <span class="k">if</span> <span class="n">mse</span> <span class="o">&lt;</span> <span class="n">best_mse</span><span class="p">:</span>
                <span class="c1"># Accept improvement
</span>                <span class="bp">...</span>
            <span class="k">else</span><span class="p">:</span>
                <span class="c1"># Revert before testing the next variant
</span>                <span class="n">order</span><span class="p">[</span><span class="n">p1</span><span class="p">],</span> <span class="n">order</span><span class="p">[</span><span class="n">p2</span><span class="p">]</span> <span class="o">=</span> <span class="n">i1</span><span class="p">,</span> <span class="n">i2</span>
                <span class="n">pairing</span><span class="p">[</span><span class="n">i1</span><span class="p">],</span> <span class="n">pairing</span><span class="p">[</span><span class="n">i2</span><span class="p">]</span> <span class="o">=</span> <span class="n">j1</span><span class="p">,</span> <span class="n">j2</span>
</code></pre></div></div>

<p>This is <code class="language-plaintext highlighter-rouge">3 × C(48,2)</code> = 3,384 evaluations per sweep. Starting from MSE 0.0085, it made 86 consecutive improving swaps in a single pass down to MSE = 0.</p>

<p>The intuition: when two blocks have tangled errors, swapping just their order or just their pairing each makes things worse, but swapping <strong>both simultaneously</strong> moves between consistent configurations. In optimization terms, the individual moves each increase the loss, but their composition decreases it — a “valley” that requires moving diagonally.</p>

<h3 id="approach-b-alternating-cycles-with-insertions-simpler-same-result">Approach B: Alternating Cycles with Insertions (Simpler, Same Result)</h3>

<p>The second approach is simpler but equally effective: <strong>cycle through three move types</strong> and keep going long after apparent convergence.</p>

<p>The three moves:</p>
<ol>
  <li><strong>Pairing swaps</strong>: Try all <code class="language-plaintext highlighter-rouge">C(48,2)</code> = 1,128 L2 partner exchanges</li>
  <li><strong>Order swaps</strong>: Try all 1,128 position exchanges</li>
  <li><strong>Block insertions</strong>: For each of 48 blocks, remove it and try all 48 positions (2,304 evals)</li>
</ol>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">for</span> <span class="nb">round</span> <span class="ow">in</span> <span class="nf">range</span><span class="p">(</span><span class="n">many</span><span class="p">):</span>
    <span class="c1"># Pairing swaps
</span>    <span class="k">for</span> <span class="n">i</span><span class="p">,</span> <span class="n">j</span> <span class="ow">in</span> <span class="nf">combinations</span><span class="p">(</span><span class="nf">range</span><span class="p">(</span><span class="mi">48</span><span class="p">),</span> <span class="mi">2</span><span class="p">):</span>
        <span class="n">swap</span> <span class="n">pairing</span><span class="p">[</span><span class="n">i</span><span class="p">],</span> <span class="n">pairing</span><span class="p">[</span><span class="n">j</span><span class="p">];</span> <span class="n">accept</span> <span class="k">if</span> <span class="n">improved</span>

    <span class="c1"># Order swaps
</span>    <span class="k">for</span> <span class="n">i</span><span class="p">,</span> <span class="n">j</span> <span class="ow">in</span> <span class="nf">combinations</span><span class="p">(</span><span class="nf">range</span><span class="p">(</span><span class="mi">48</span><span class="p">),</span> <span class="mi">2</span><span class="p">):</span>
        <span class="n">swap</span> <span class="n">order</span><span class="p">[</span><span class="n">i</span><span class="p">],</span> <span class="n">order</span><span class="p">[</span><span class="n">j</span><span class="p">];</span> <span class="n">accept</span> <span class="k">if</span> <span class="n">improved</span>

    <span class="c1"># Block insertions
</span>    <span class="k">for</span> <span class="n">i</span> <span class="ow">in</span> <span class="nf">range</span><span class="p">(</span><span class="mi">48</span><span class="p">):</span>
        <span class="n">block</span> <span class="o">=</span> <span class="n">order</span><span class="p">.</span><span class="nf">pop</span><span class="p">(</span><span class="n">i</span><span class="p">)</span>
        <span class="k">try</span> <span class="nb">all</span> <span class="mi">48</span> <span class="n">insert</span> <span class="n">positions</span><span class="p">;</span> <span class="n">keep</span> <span class="n">best</span>
</code></pre></div></div>
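<p>Filled out, the cycle looks like the sketch below. The <code class="language-plaintext highlighter-rouge">evaluate</code> function stands in for the expensive full forward-pass MSE; here it is a toy Hamming distance to a planted target so the loop is runnable:</p>

```python
from itertools import combinations

def cycle_moves(order, pairing, evaluate, rounds=10):
    """Greedy cycling of pairing swaps, order swaps, and block insertions."""
    best = evaluate(order, pairing)
    for _ in range(rounds):
        for i, j in combinations(range(len(pairing)), 2):  # pairing swaps
            pairing[i], pairing[j] = pairing[j], pairing[i]
            m = evaluate(order, pairing)
            if m < best:
                best = m
            else:
                pairing[i], pairing[j] = pairing[j], pairing[i]  # revert
        for i, j in combinations(range(len(order)), 2):    # order swaps
            order[i], order[j] = order[j], order[i]
            m = evaluate(order, pairing)
            if m < best:
                best = m
            else:
                order[i], order[j] = order[j], order[i]    # revert
        for i in range(len(order)):                        # block insertions
            block = order.pop(i)
            m, p = min((evaluate(order[:p] + [block] + order[p:], pairing), p)
                       for p in range(len(order) + 1))
            order.insert(p, block)                         # keep the best position
            best = min(best, m)
    return order, pairing, best

# Toy objective: Hamming distance to a planted target configuration.
target_order, target_pairing = [2, 0, 3, 1], [1, 3, 0, 2]
def toy_eval(o, p):
    return (sum(a != b for a, b in zip(o, target_order))
            + sum(a != b for a, b in zip(p, target_pairing)))

order, pairing, best = cycle_moves([0, 1, 2, 3], [0, 1, 2, 3], toy_eval)
print(best)  # 0: the planted configuration is recovered
```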

<p>What makes this work is <strong>patience</strong> — continuing to cycle when each individual move type appears converged. The key discovery: <strong>pairing corrections trigger cascading order improvements</strong>.</p>

<p>Starting from MSE 0.0098 (where standard 2-opt appeared stuck), the trajectory looked like this:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Cycle  5: Pairing fix:    0.008274  ← corrected one L1/L2 pair
          ...18 order swaps...
          Order swap:      0.006588  ← cascade!
          ...7 insertions...
          Block insertion:  0.003861

Cycle  6: Pairing fix:    0.002379  ← biggest single improvement
          ...16 order swaps...
          Order swap:      0.000177  ← nearly there
          Block insertion:  0.000064
          Block insertion:  0.000000  ← EXACT!
</code></pre></div></div>

<p>Each pairing correction fixed a block that had been paired with the wrong L2 layer. With the wrong partner, no ordering could make that block work correctly — so the optimizer was forced into a compromise. Once the pairing was fixed, a flood of previously-blocked order improvements became available.</p>

<h3 id="why-insertions-matter">Why Insertions Matter</h3>

<p>Insert moves find improvements that swaps cannot. A swap exchanges two elements; an insert slides one element to a new position, shifting everything in between. The final three moves to MSE = 0 were all insertions — they refined block positions with a precision that pairwise swaps couldn’t match.</p>
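<p>The difference is easy to see on a plain Python list: an insertion shifts every element between source and destination, while a swap touches exactly two positions:</p>

```python
order = [0, 1, 2, 3, 4, 5]

block = order.pop(1)     # remove block 1
order.insert(4, block)   # reinsert it later in the sequence
print(order)             # [0, 2, 3, 4, 1, 5]: blocks 2..4 all shifted left

# By contrast, swapping positions 1 and 4 of the original list gives
# [0, 4, 2, 3, 1, 5], moving only two elements.
```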

<h3 id="error-analysis-the-tail-tells-the-story">Error Analysis: The Tail Tells the Story</h3>

<p>At MSE ~0.01, analyzing the per-row error distribution was revealing:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Percentiles of |error|:
  50th: 0.026    (median row is nearly correct)
  95th: 0.210
  99th: 0.415
  100th: 1.496   (worst row is way off)

Top 100 rows: MSE 0.348  (35x more error per row)
Bottom 9900:  MSE 0.006
</code></pre></div></div>

<p>The error was concentrated in ~45 extreme rows. This pattern — a mostly-correct solution with a few outliers — is the signature of a few specific misconfigurations rather than a globally wrong solution. It motivated continued cycling over restart.</p>
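<p>This kind of tail analysis is cheap to run. A sketch with synthetic errors standing in for the real per-row residuals (the numbers are illustrative, not the puzzle's):</p>

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in: mostly-small residuals plus a handful of extreme rows.
err = np.abs(rng.normal(0.0, 0.03, size=10_000))
err[:45] += rng.uniform(0.5, 1.5, size=45)    # ~45 outlier rows

idx = np.argsort(-err)                        # rows sorted by |error|, descending
top, rest = err[idx[:100]], err[idx[100:]]

print(np.percentile(err, [50, 95, 99, 100]).round(3))
print((top ** 2).mean() / (rest ** 2).mean())  # error concentrated in the tail
```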

<h2 id="the-full-pipeline">The Full Pipeline</h2>

<p>Both paths share the same initialization and diverge at Phase 4:</p>

<ol>
  <li><strong>First-order pairing</strong> (200 random restarts + swap optimization) → MSE ~0.7</li>
  <li><strong>Gumbel-Sinkhorn alternating optimization</strong> → MSE ~0.03</li>
  <li><strong>Standard 2-opt + insertion moves</strong> → MSE ~0.008</li>
  <li><strong>Either</strong>:
    <ul>
      <li><strong>(A) Combined 2-opt</strong> → MSE = 0.0 ✓ (single pass, ~3.4K evals)</li>
      <li><strong>(B) Alternating pair/order/insert cycles</strong> → MSE = 0.0 ✓ (~10 cycles, ~45 min)</li>
    </ul>
  </li>
</ol>

<p>Approach A is faster per pass but requires the insight to try simultaneous swaps. Approach B is slower but conceptually simpler — just keep cycling basic moves and let pairing corrections cascade into order improvements.</p>

<p>Total computation: under an hour on a MacBook Pro (M-series, CPU only).</p>

<h2 id="lessons-learned">Lessons Learned</h2>

<p><strong>Differentiable relaxations are powerful initialization.</strong> Gumbel-Sinkhorn took us from a random permutation to within ~1% of the correct answer. Without it, local search would have no hope in a space of 10<sup>122</sup> configurations.</p>

<p><strong>Pairing corrections unlock order improvements.</strong> A wrong L1/L2 pairing poisons the ordering — no arrangement of blocks can compensate for a block producing the wrong intermediate values. Each pairing fix unblocked 15-20 order improvements that had been invisible before.</p>

<p><strong>Insert moves find what swaps miss.</strong> The final three moves to MSE = 0 were all block insertions. Insertions shift an entire segment of the ordering, exploring a richer neighborhood than pairwise swaps.</p>

<p><strong>Cycle, don’t stop.</strong> After apparent convergence, continuing to cycle through move types found improvements for 5+ more rounds. Each round took ~90 seconds, so patience was cheap.</p>

<p><strong>The right neighborhood matters more than the right algorithm.</strong> Standard 2-opt, 3-opt, simulated annealing, and coordinate descent all failed at MSE ~0.01. Both solutions came from expanding the move set — either by combining swap types (Approach A) or by adding insertions and being patient (Approach B).</p>

<p><strong>Save incrementally.</strong> I learned this the hard way — a script that only saves at the end can lose hours of progress if killed. Every improving move should write to disk immediately.</p>
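<p>A minimal pattern for that, assuming a JSON checkpoint (filename and format are illustrative):</p>

```python
import json, os, tempfile

def save_checkpoint(path, order, pairing, mse):
    """Write the current best solution atomically so a kill can't corrupt it."""
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    with os.fdopen(fd, "w") as f:
        json.dump({"order": order, "pairing": pairing, "mse": mse}, f)
    os.replace(tmp, path)  # atomic: readers see the old file or the new, never half

save_checkpoint("best.json", list(range(48)), list(range(48)), 0.0085)
print(json.load(open("best.json"))["mse"])  # 0.0085
```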

<p><strong>Exact verification changes the game.</strong> The SHA-256 hash means only MSE = 0 is correct. This motivated exhaustive local search: even a tiny MSE improvement matters because there’s no “good enough.”</p>
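<p>The check itself is one line of <code class="language-plaintext highlighter-rouge">hashlib</code>. The serialization below (comma-separated indices) is my assumption for illustration, not necessarily what the puzzle hashes:</p>

```python
import hashlib

def solution_hash(perm):
    # Hash a candidate permutation of the 97 piece indices.
    return hashlib.sha256(",".join(map(str, perm)).encode()).hexdigest()

perm = list(range(97))
print(len(solution_hash(perm)))  # 64 hex characters
```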

<h2 id="dead-ends-and-abandoned-approaches">Dead Ends and Abandoned Approaches</h2>

<p>Before finding the two approaches that worked, I tried several others that didn’t pan out:</p>

<p><strong>Simulated annealing.</strong> The natural response to getting stuck at a local minimum. I implemented SA with multiple move types (order swaps, pairing swaps, block insertions, segment reversals) and ran it for hundreds of thousands of steps. The problem: each evaluation requires a full sequential forward pass through 48 blocks on thousands of samples (~7ms per eval). At 500K steps, that’s nearly an hour per run — and SA needs many restarts to be effective. Worse, the high-dimensional discrete landscape (two interleaved 48-element permutations) makes it hard to set a temperature schedule that explores enough without wasting time in bad regions. The occasional improvements SA found were always things that deterministic local search could have found faster by just cycling more.</p>

<p><strong>Greedy sequential construction.</strong> Rather than optimizing the ordering, build it greedily: at each step, try all remaining blocks and pick the one that minimizes the partial prediction error. This was fast (~1 second per full construction) but gave MSE ~1.8 — worse than the starting point. The problem is myopia: the block that looks best at step k might be terrible for what’s needed at steps k+1 through 47. The residual structure means early blocks fundamentally reshape the input for later blocks, so local greedy choices cascade into globally poor orderings.</p>
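<p>For completeness, greedy construction amounts to the following, with <code class="language-plaintext highlighter-rouge">evaluate</code> standing in for the partial prediction error (a trivial scorer here, just to make it runnable):</p>

```python
def greedy_order(n, evaluate):
    """Myopically build an ordering: at each step append whichever remaining
    block minimizes the error of the partial prefix."""
    remaining, prefix = set(range(n)), []
    while remaining:
        best = min(remaining, key=lambda b: evaluate(prefix + [b]))
        prefix.append(best)
        remaining.remove(best)
    return prefix

# Trivial scorer: prefers low-numbered blocks first.
print(greedy_order(5, lambda prefix: prefix[-1]))  # [0, 1, 2, 3, 4]
```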

<p><strong>3-opt (triple rotations).</strong> If 2-opt is stuck, try 3-opt — cyclic rotations of three elements. The cost is O(n³) = 17,296 triples, each tested in two rotation directions, times ~7ms per eval = ~4 minutes per sweep. I ran this on both ordering and pairing. It was too slow to iterate and never found improvements that the simpler approach (cycling 2-opt with insertions) couldn’t find faster. The 3-element moves that matter are better discovered by doing 2-opt after an insertion changes the landscape.</p>

<p><strong>SiLU activation.</strong> The puzzle description says ReLU, but in first-order (non-residual) models, SiLU gives much lower MSE (~0.9 vs ~11.0). This was a red herring — SiLU only wins when you ignore the residual connections. In the full sequential model, ReLU gives MSE 0.12 while SiLU gives 4.37. The lesson: test with the full architecture, not a simplified proxy.</p>

<p><strong>Group swaps.</strong> Instead of swapping individual blocks, try swapping contiguous groups of 2, 3, 4, or 8 blocks. This occasionally found tiny improvements (~0.001) but was never transformative. The blocks that need to move aren’t in contiguous groups — they’re scattered, and the real bottleneck is fixing pairings, not rearranging chunks.</p>

<p><strong>Lasso/sparse selection.</strong> Precompute all 48×48 = 2,304 possible block outputs and use Lasso regression to select a sparse subset of 48. Elegant in theory, but Lasso doesn’t enforce the constraint that each L1 and L2 layer is used exactly once. Post-hoc matching from the Lasso solution didn’t produce better pairings than direct swap optimization.</p>

<p><strong>Training a surrogate model, then matching layers.</strong> I trained a fresh neural network with the same architecture on the 10K dataset, hoping to match its learned layers against the puzzle pieces. The results were poor — I suspect 10K samples simply aren’t enough to recover a model similar enough to the target for layer-wise matching to work. The trained model converges to a different local minimum with different internal representations, making piece-to-layer correspondence unreliable.</p>

<p><strong>Training a transformer to predict swaps.</strong> The most ambitious attempt: train a transformer model to learn which swaps improve the objective, then let it predict a sequence of moves to solve the puzzle. This ran into a bootstrapping problem — generating training data (pairs of configurations and their MSE changes) required the same expensive forward passes we were trying to avoid, and I couldn’t produce enough samples to train on. The model would need to generalize from a tiny fraction of the 10<sup>122</sup> search space, with no clear inductive bias for this specific combinatorial structure. In hindsight, domain-specific search (exploiting the residual network structure directly) was always going to beat a general-purpose learned search policy for a one-off puzzle like this.</p>

<p>The common thread: <strong>the bottleneck was always pairing, not ordering.</strong> Approaches that focused on finding better orderings (SA, greedy construction, 3-opt, group swaps) couldn’t overcome wrong pairings. The approaches that worked were the ones that could fix pairings and then let order improvements cascade.</p>

<hr />

<p>Good luck if you’re attempting this one — it’s a satisfying puzzle to crack.</p>

<div class="footnotes" role="doc-endnotes">
  <ol>
    <li id="fn:1">
      <p>Jane Street publishes monthly puzzles at <a href="https://www.janestreet.com/puzzles/">janestreet.com/puzzles</a>. <a href="#fnref:1" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
    </li>
  </ol>
</div>
]]></content>
  </entry>
  
  <entry>
    <title type="html"><![CDATA[HRM Explained: A 27M Parameter Model That Reasons Without Chain-of-Thought]]></title>
    <link href="https://wangyi.ai/blog/2026/02/12/hierarchical-reasoning-model-explained/"/>
    <updated>2026-02-12T10:00:00-08:00</updated>
    <id>https://wangyi.ai/blog/2026/02/12/hierarchical-reasoning-model-explained</id>
    <content type="html"><![CDATA[<p>What if you could build a model that solves complex Sudoku puzzles, navigates mazes, and tackles abstract reasoning — all with just 27 million parameters and 1,000 training examples? No pre-training on massive datasets, no Chain-of-Thought prompting, no language at all. That’s the claim behind the <strong>Hierarchical Reasoning Model (HRM)</strong> from Sapient Intelligence.</p>

<p>In this post, I’ll walk through how HRM actually works by tracing the code and architecture step by step. I’ll also cover the important follow-up critiques that question some of these claims.</p>

<!-- more -->

<h2 id="the-big-idea">The Big Idea</h2>

<p>Current LLMs reason by writing out their thinking step by step (Chain-of-Thought). This works, but it’s slow, requires huge models, and needs lots of training data. HRM takes a completely different approach: it reasons <strong>in latent space</strong> — inside the model’s hidden states — through iterative refinement.</p>

<p>The core insight is borrowed from neuroscience: the human brain processes information hierarchically, with slow abstract planning and fast detailed computation happening at different timescales. HRM mimics this with two transformer modules that talk to each other.</p>

<h2 id="the-two-level-architecture">The Two-Level Architecture</h2>

<p>HRM has two recurrent transformer modules:</p>

<p><strong>H-level (High-level planner)</strong> — 4 transformer layers, responsible for slow, abstract reasoning. Think of it as the part that asks: <em>“What strategy should I use?”</em></p>

<p><strong>L-level (Low-level executor)</strong> — 4 transformer layers, responsible for fast, detailed computation. This handles: <em>“What goes in this specific cell?”</em></p>

<p>They interact in a nested loop:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>For each H-cycle (2x):
    For each L-cycle (2x):
        z_L = L_level(z_L, z_H + input_embeddings)
    z_H = H_level(z_H, z_L)
</code></pre></div></div>

<p>The L-level refines its understanding using the H-level’s guidance <strong>plus</strong> the raw input. Then the H-level updates its plan based on what L found. Both use <strong>non-causal attention</strong> — every position can see every other position simultaneously.</p>

<p>One important detail: both modules are <code class="language-plaintext highlighter-rouge">ReasoningModule</code> wrappers that <strong>add</strong> the injection to the hidden state before running through their transformer layers:</p>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">def</span> <span class="nf">forward</span><span class="p">(</span><span class="n">self</span><span class="p">,</span> <span class="n">hidden_states</span><span class="p">,</span> <span class="n">input_injection</span><span class="p">,</span> <span class="o">**</span><span class="n">kwargs</span><span class="p">):</span>
    <span class="n">hidden_states</span> <span class="o">=</span> <span class="n">hidden_states</span> <span class="o">+</span> <span class="n">input_injection</span>   <span class="c1"># inject
</span>    <span class="k">for</span> <span class="n">layer</span> <span class="ow">in</span> <span class="n">self</span><span class="p">.</span><span class="n">layers</span><span class="p">:</span>
        <span class="n">hidden_states</span> <span class="o">=</span> <span class="nf">layer</span><span class="p">(</span><span class="n">hidden_states</span><span class="o">=</span><span class="n">hidden_states</span><span class="p">,</span> <span class="o">**</span><span class="n">kwargs</span><span class="p">)</span>
    <span class="k">return</span> <span class="n">hidden_states</span>
</code></pre></div></div>

<p>So L doesn’t replace its state — it adds <code class="language-plaintext highlighter-rouge">z_H + input</code> to its existing state, then processes. Same for H adding <code class="language-plaintext highlighter-rouge">z_L</code>.</p>
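<p>A toy numeric version of the whole nested loop, with a made-up scalar update standing in for each transformer stack, shows how the two states interleave (the names and the 0.5 damping are purely illustrative):</p>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code>def reasoning_module(state, injection):
    # stand-in for a transformer stack: add the injection, then "process"
    return (state + injection) * 0.5

def hrm_step(z_H, z_L, x, H_cycles=2, L_cycles=2):
    for _ in range(H_cycles):
        for _ in range(L_cycles):
            z_L = reasoning_module(z_L, z_H + x)  # L sees H's plan plus the input
        z_H = reasoning_module(z_H, z_L)          # H revises its plan from L's work
    return z_H, z_L

z_H, z_L = hrm_step(0.0, 0.0, x=1.0)
</code></pre></div></div>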

<h2 id="adaptive-computation-time-act-the-outer-loop">Adaptive Computation Time (ACT): The Outer Loop</h2>

<p>The H/L cycles above describe what happens <strong>within a single step</strong>. But HRM can take <strong>multiple steps</strong>, deciding dynamically how long to think. This is the Adaptive Computation Time (ACT) wrapper.</p>

<p>Each call to <code class="language-plaintext highlighter-rouge">model.forward(carry, batch)</code> is one ACT step. The training/evaluation loop calls it repeatedly:</p>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c1"># Evaluation loop
</span><span class="k">while</span> <span class="bp">True</span><span class="p">:</span>
    <span class="n">carry</span><span class="p">,</span> <span class="n">_</span><span class="p">,</span> <span class="n">metrics</span><span class="p">,</span> <span class="n">preds</span><span class="p">,</span> <span class="n">all_finish</span> <span class="o">=</span> <span class="nf">model</span><span class="p">(</span><span class="n">carry</span><span class="p">,</span> <span class="n">batch</span><span class="p">)</span>
    <span class="k">if</span> <span class="n">all_finish</span><span class="p">:</span>
        <span class="k">break</span>
</code></pre></div></div>

<p>The model can take up to 16 ACT steps (configurable). At each step, it decides: <strong>halt or continue?</strong></p>

<p>Here’s how the two levels of looping connect:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>ACT Step 1  ──→  H/L cycles (2x2) inside  ──→  logits + Q-values
                                                   │
                                            Q says "continue"
                                                   ↓
ACT Step 2  ──→  H/L cycles (2x2) inside  ──→  logits + Q-values
                 (carry from step 1                │
                  flows in)                  Q says "continue"
                                                   ↓
ACT Step 3  ──→  H/L cycles (2x2) inside  ──→  logits + Q-values
                                                   │
                                            Q says "HALT"
                                                   ↓
                                            Final answer used
</code></pre></div></div>

<p>With 16 ACT steps, each containing 2 H-cycles x 2 L-cycles, the model can perform up to <strong>64 L-passes + 32 H-passes</strong> — massive computational depth from a tiny model, because the same weights are reused every time.</p>
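<p>The depth arithmetic is easy to check:</p>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code>max_act_steps, h_cycles, l_cycles = 16, 2, 2
l_passes = max_act_steps * h_cycles * l_cycles   # 16 * 2 * 2 = 64 L-level passes
h_passes = max_act_steps * h_cycles              # 16 * 2     = 32 H-level passes
</code></pre></div></div>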

<h2 id="z_h-and-z_l-the-models-working-memory">z_H and z_L: The Model’s Working Memory</h2>

<p>So what exactly are <code class="language-plaintext highlighter-rouge">z_H</code> and <code class="language-plaintext highlighter-rouge">z_L</code>? They’re <strong>hidden state tensors</strong> — the model’s evolving “thoughts” at each level.</p>

<p>Let’s make this concrete with a Sudoku example. A 9x9 puzzle gets flattened into 81 integers:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>inputs = [5, 3, 0, 0, 7, 0, 0, 0, 0, 6, 0, 0, ...]
          cell1  cell2  cell3  ...              cell81
</code></pre></div></div>

<p>Each integer gets embedded into a 512-dimensional vector. Then a <strong>puzzle embedding</strong> (more on this later) is prepended as position 0. So the final sequence has 82 positions:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>position 0:  puzzle embedding    ← 512-dim vector
position 1:  cell 1 embedding   ← 512-dim vector
position 2:  cell 2 embedding   ← 512-dim vector
...
position 81: cell 81 embedding  ← 512-dim vector
</code></pre></div></div>

<p>Both <code class="language-plaintext highlighter-rouge">z_H</code> and <code class="language-plaintext highlighter-rouge">z_L</code> have this same shape: <code class="language-plaintext highlighter-rouge">(batch_size, 82, 512)</code>. Each position holds a 512-dimensional vector representing the model’s current “thoughts” about that cell.</p>

<p>When a sequence starts fresh, both are initialized to <strong>learned vectors</strong> — <code class="language-plaintext highlighter-rouge">H_init</code> and <code class="language-plaintext highlighter-rouge">L_init</code> — broadcast across all positions. The model starts with the same state everywhere and must differentiate through the input injection and attention.</p>

<p>After each ACT step, both are <strong>detached</strong> (gradients cut) and stored in a <code class="language-plaintext highlighter-rouge">carry</code> dataclass. The next step picks up where the last left off — but no gradients flow backward between steps. This is what makes the whole thing memory-feasible.</p>

<p>Position 0 is special. Since it holds the puzzle embedding (not a cell value), it acts as a <strong>global summary token</strong>. Through non-causal attention, it sees all 81 cells. The Q-head reads <code class="language-plaintext highlighter-rouge">z_H[:, 0]</code> specifically to make the halt/continue decision:</p>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">q_logits</span> <span class="o">=</span> <span class="n">self</span><span class="p">.</span><span class="nf">q_head</span><span class="p">(</span><span class="n">z_H</span><span class="p">[:,</span> <span class="mi">0</span><span class="p">])</span>   <span class="c1"># position 0 → halt decision
</span></code></pre></div></div>

<p>And the final answer is read from the remaining positions:</p>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">output</span> <span class="o">=</span> <span class="n">self</span><span class="p">.</span><span class="nf">lm_head</span><span class="p">(</span><span class="n">z_H</span><span class="p">)[:,</span> <span class="n">puzzle_emb_len</span><span class="p">:]</span>   <span class="c1"># positions 1-81 → predictions
</span></code></pre></div></div>

<h2 id="puzzle-embeddings-per-puzzle-identity">Puzzle Embeddings: Per-Puzzle Identity</h2>

<p>Not all puzzle types need this, and the difference is revealing.</p>

<p><strong>Sudoku</strong>: every puzzle follows the same rule (fill digits 1-9, no repeats in row/column/box). So <code class="language-plaintext highlighter-rouge">puzzle_identifiers = 0</code> for every example. One universal algorithm.</p>

<p><strong>ARC</strong>: every puzzle has a <strong>different rule</strong>. Puzzle 42 might be “rotate the shape 90°”, puzzle 137 might be “fill enclosed regions with blue”. The model needs to know <em>which</em> puzzle it’s solving.</p>

<p>For ARC, the dataset builder assigns each puzzle a unique integer ID (1 through ~960). The model has a learnable embedding table:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>puzzle_emb: shape (961, 512)

Row 0:   [0, 0, ..., 0]            ← blank (unused)
Row 1:   [0.12, -0.34, ..., 0.56]  ← learned embedding for puzzle 1
Row 2:   [-0.78, 0.91, ..., 0.23]  ← learned embedding for puzzle 2
...
</code></pre></div></div>

<p>Each embedding starts at zero and is trained via <strong>SignSGD</strong> — a simple optimizer that only uses the <strong>sign</strong> of the gradient:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>w = w * (1 - lr * weight_decay) - lr * sign(gradient)
</code></pre></div></div>

<p>Every weight goes up by <code class="language-plaintext highlighter-rouge">lr</code> or down by <code class="language-plaintext highlighter-rouge">lr</code>, regardless of gradient magnitude. Why not Adam? Because puzzle embeddings are <strong>extremely sparse</strong> — with ~960 puzzles and a batch of 768, most rows get no gradient on any given step. Adam would approximate SignSGD anyway for such sparse updates, but SignSGD is simpler and needs zero optimizer state (no momentum, no second moment to track).</p>
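<p>The update rule above is one line of numpy (a sketch of the rule, not the repo's optimizer class):</p>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code>import numpy as np

def signsgd_step(w, grad, lr=0.01, weight_decay=1.0):
    # decoupled weight decay, then move every weight by exactly lr
    # against the gradient's sign; magnitudes are ignored
    return w * (1 - lr * weight_decay) - lr * np.sign(grad)

w = np.array([0.5, -0.5, 0.0])
grad = np.array([3.7, -0.001, 0.0])   # wildly different magnitudes...
print(signsgd_step(w, grad))          # ...but both nonzero entries move by 0.01
</code></pre></div></div>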

<p>The puzzle embedding is trained with a separate optimizer at 100x the learning rate of the main model (0.01 vs 0.0001) and 10x the weight decay (1.0 vs 0.1). It updates rarely, so it needs to move fast when it does.</p>

<h2 id="the-q-learning-halting-mechanism">The Q-Learning Halting Mechanism</h2>

<p>How does the model decide when to stop thinking? Through two Q-values produced by a tiny linear head:</p>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">self</span><span class="p">.</span><span class="n">q_head</span> <span class="o">=</span> <span class="nc">CastedLinear</span><span class="p">(</span><span class="n">hidden_size</span><span class="p">,</span> <span class="mi">2</span><span class="p">,</span> <span class="n">bias</span><span class="o">=</span><span class="bp">True</span><span class="p">)</span>   <span class="c1"># 512 → 2 numbers
</span></code></pre></div></div>

<p>It reads <code class="language-plaintext highlighter-rouge">z_H[:, 0]</code> (the summary token) and outputs:</p>
<ul>
  <li><strong>q_halt</strong>: “how confident am I that my current answer is correct?”</li>
  <li><strong>q_continue</strong>: “how confident am I that continuing will lead to a correct answer?”</li>
</ul>

<p>If <code class="language-plaintext highlighter-rouge">q_halt &gt; q_continue</code>, the model halts.</p>

<h3 id="training-q_halt-supervised-from-ground-truth">Training q_halt: supervised from ground truth</h3>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">seq_is_correct</span> <span class="o">=</span> <span class="p">(</span><span class="n">number_of_correct_cells</span> <span class="o">==</span> <span class="n">total_cells</span><span class="p">)</span>   <span class="c1"># True or False
</span><span class="n">q_halt_loss</span> <span class="o">=</span> <span class="nf">binary_cross_entropy</span><span class="p">(</span><span class="n">q_halt_logits</span><span class="p">,</span> <span class="n">seq_is_correct</span><span class="p">)</span>
</code></pre></div></div>

<p>Simple. Did you get every cell right? Push <code class="language-plaintext highlighter-rouge">q_halt</code> toward 1. Wrong? Push toward 0.</p>
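<p>In raw-logit form this is the standard numerically stable binary cross-entropy; a stand-in sketch of the loss being computed:</p>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code>import math

def bce_with_logits(logit, label):
    # label is 1.0 when every cell is correct, else 0.0
    # stable form: max(x, 0) - x*y + log(1 + exp(-|x|))
    return max(logit, 0) - logit * label + math.log1p(math.exp(-abs(logit)))

print(round(bce_with_logits(0.0, 1.0), 4))   # 0.6931, i.e. log(2): maximal uncertainty
</code></pre></div></div>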

<h3 id="training-q_continue-bootstrapping-from-the-future">Training q_continue: bootstrapping from the future</h3>

<p>This is the trickier part. There’s no ground truth for “will continuing help?” So the model <strong>peeks ahead</strong> — it runs one more forward pass from the current carry state:</p>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">next_q_halt</span><span class="p">,</span> <span class="n">next_q_continue</span> <span class="o">=</span> <span class="n">self</span><span class="p">.</span><span class="nf">inner</span><span class="p">(</span><span class="n">new_inner_carry</span><span class="p">,</span> <span class="n">new_current_data</span><span class="p">)[</span><span class="o">-</span><span class="mi">1</span><span class="p">]</span>
</code></pre></div></div>

<p>The target for <code class="language-plaintext highlighter-rouge">q_continue</code> at step <code class="language-plaintext highlighter-rouge">t</code> is: <strong>the best outcome achievable from step <code class="language-plaintext highlighter-rouge">t+1</code> onward</strong>.</p>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">target</span> <span class="o">=</span> <span class="nf">sigmoid</span><span class="p">(</span>
    <span class="nf">where</span><span class="p">(</span><span class="n">is_last_step</span><span class="p">,</span>
        <span class="n">next_q_halt</span><span class="p">,</span>                            <span class="c1"># forced to halt next step
</span>        <span class="nf">max</span><span class="p">(</span><span class="n">next_q_halt</span><span class="p">,</span> <span class="n">next_q_continue</span><span class="p">)</span>        <span class="c1"># best option at next step
</span>    <span class="p">)</span>
<span class="p">)</span>
</code></pre></div></div>

<p>This is the Bellman equation from reinforcement learning. If at the next step, halting gives 82% confidence and continuing gives 69%, then the value of continuing now is 82% (you’d halt next step). The target follows whichever future path leads to the best outcome.</p>
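<p>The numeric example works out like this (a sketch; the real implementation operates on batched logit tensors):</p>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code>import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def q_continue_target(next_q_halt, next_q_continue, is_last_step):
    # value of continuing now = best achievable outcome at the next step
    best_next = next_q_halt if is_last_step else max(next_q_halt, next_q_continue)
    return sigmoid(best_next)

halt_logit = math.log(0.82 / 0.18)   # logit whose sigmoid is 0.82
cont_logit = math.log(0.69 / 0.31)   # logit whose sigmoid is 0.69
print(round(q_continue_target(halt_logit, cont_logit, False), 2))   # 0.82
</code></pre></div></div>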

<h3 id="the-bootstrapping-cold-start">The bootstrapping cold start</h3>

<p>At the beginning of training, both Q-values are meaningless. The Q-head is initialized with bias = -5, so <code class="language-plaintext highlighter-rouge">sigmoid(-5) ≈ 0.007</code> — the model believes there’s a 0.7% chance of being correct for everything. Since <code class="language-plaintext highlighter-rouge">q_halt ≈ q_continue</code>, nobody halts early; everything runs to the maximum 16 steps.</p>

<p>The chain reaction goes:</p>

<ol>
  <li><code class="language-plaintext highlighter-rouge">lm_loss</code> gradually teaches the model to produce correct answers</li>
  <li><code class="language-plaintext highlighter-rouge">q_halt</code> starts learning which answers are correct (grounded in truth)</li>
  <li>Once <code class="language-plaintext highlighter-rouge">q_halt</code> is meaningful at step 16, <code class="language-plaintext highlighter-rouge">q_continue</code> at step 15 gets a real target</li>
  <li>That propagates backward: step 14, 13, 12…</li>
  <li>Eventually the model learns to halt early for easy puzzles, run longer for hard ones</li>
</ol>

<h3 id="exploration">Exploration</h3>

<p>Without exploration, the Q-head can get stuck — if it always halts at step 3, it never discovers that step 8 would give the right answer. So 10% of the time, each batch item gets a random minimum number of steps it must run before halting is allowed:</p>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">min_halt_steps</span> <span class="o">=</span> <span class="p">(</span><span class="nf">rand</span><span class="p">()</span> <span class="o">&lt;</span> <span class="mf">0.1</span><span class="p">)</span> <span class="o">*</span> <span class="nf">randint</span><span class="p">(</span><span class="mi">2</span><span class="p">,</span> <span class="n">max_steps</span> <span class="o">+</span> <span class="mi">1</span><span class="p">)</span>
<span class="n">halted</span> <span class="o">=</span> <span class="n">halted</span> <span class="o">&amp;</span> <span class="p">(</span><span class="n">steps</span> <span class="o">&gt;=</span> <span class="n">min_halt_steps</span><span class="p">)</span>
</code></pre></div></div>

<p>This ensures the model occasionally sees deeper computation and can update its estimates.</p>

<h2 id="training-two-optimizers-one-loss">Training: Two Optimizers, One Loss</h2>

<p>Each training step:</p>

<ol>
  <li><strong>Forward pass</strong> — puzzle embeddings copied to local buffer, flow through L/H cycles, produce logits + Q-values</li>
  <li><strong>Single backward pass</strong> — gradients flow through everything</li>
  <li><strong>Two optimizers step</strong>:
    <ul>
      <li><strong>SignSGD</strong> for puzzle embeddings (lr=0.01, weight_decay=1.0)</li>
      <li><strong>Adam</strong> for all transformer weights (lr=0.0001, weight_decay=0.1)</li>
    </ul>
  </li>
</ol>

<p>The total loss combines three terms:</p>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">total_loss</span> <span class="o">=</span> <span class="n">lm_loss</span> <span class="o">+</span> <span class="mf">0.5</span> <span class="o">*</span> <span class="p">(</span><span class="n">q_halt_loss</span> <span class="o">+</span> <span class="n">q_continue_loss</span><span class="p">)</span>
</code></pre></div></div>

<p>All three losses backpropagate through the entire model. The Q-losses aren’t just training the Q-head — they shape the representations in <code class="language-plaintext highlighter-rouge">z_H</code> and <code class="language-plaintext highlighter-rouge">z_L</code> throughout, forcing the model to develop internal representations of “how solved is this puzzle.”</p>

<h3 id="the-gradient-efficiency-trick">The gradient efficiency trick</h3>

<p>Within each ACT step, only the <strong>final</strong> H/L cycle computes gradients. All earlier cycles run in <code class="language-plaintext highlighter-rouge">torch.no_grad()</code>:</p>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code>with torch.no_grad():
    # Warm-up: run all but the last of the H_cycles * L_cycles iterations
    for H_step in range(H_cycles):
        for L_step in range(L_cycles):
            is_last = (H_step == H_cycles - 1) and (L_step == L_cycles - 1)
            if not is_last:
                z_L = L_level(z_L, z_H + input_embeddings)
        if H_step != H_cycles - 1:
            z_H = H_level(z_H, z_L)

# Only this final refinement has gradients:
z_L = L_level(z_L, z_H + input_embeddings)
z_H = H_level(z_H, z_L)
</code></pre></div></div>

<p>The hidden states carry forward information from the no-grad iterations, but only the final refinement contributes to the loss. This dramatically reduces memory usage.</p>

<h2 id="limitations-no-branching-no-backtracking">Limitations: No Branching, No Backtracking</h2>

<p>HRM’s computation is a <strong>single linear path</strong>:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>carry → step 1 → step 2 → step 3 → ... → answer
</code></pre></div></div>

<p>When humans solve puzzles, we do something different:</p>

<ul>
  <li><em>“What if this cell is 5?”</em> → follow implications → contradiction → <strong>backtrack</strong></li>
  <li><em>“OK, what if it’s 7?”</em> → follow implications → works → keep going</li>
</ul>

<p>That’s tree search — branching, evaluating, backtracking. HRM can’t do this. If step 2 goes down a wrong path, step 3 builds on that wrong foundation.</p>

<p>The non-causal attention can partially compensate by processing all positions simultaneously (like parallel constraint propagation rather than sequential hypothesis testing). But for tasks that fundamentally require exploring multiple hypotheses — like playing Go, where you need to simulate opponent responses many moves ahead — HRM’s single-path architecture won’t work.</p>

<table>
  <thead>
    <tr>
      <th>Task type</th>
      <th>What’s needed</th>
      <th>HRM works?</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td>Sudoku</td>
      <td>Constraint propagation</td>
      <td>Yes</td>
    </tr>
    <tr>
      <td>Maze</td>
      <td>Path finding</td>
      <td>Yes</td>
    </tr>
    <tr>
      <td>ARC</td>
      <td>Pattern recognition + rule inference</td>
      <td>Partially</td>
    </tr>
    <tr>
      <td>Go / Chess</td>
      <td>Multi-step adversarial tree search</td>
      <td>No</td>
    </tr>
    <tr>
      <td>Theorem proving</td>
      <td>Hypothesis testing + backtracking</td>
      <td>No</td>
    </tr>
  </tbody>
</table>

<h2 id="the-follow-up-critiques">The Follow-Up Critiques</h2>

<p>Two important independent analyses appeared after HRM’s release, and they paint a different picture than the original paper.</p>

<h3 id="arc-prize-team-analysis">ARC Prize Team Analysis</h3>

<p>The <a href="https://arcprize.org/blog/hrm-analysis">ARC Prize team</a> verified HRM’s results and ran ablation studies. Their key findings:</p>

<p><strong>The hierarchy barely matters.</strong> A regular transformer with the same parameter count came within ~5 percentage points of HRM without any hyperparameter tuning. The H/L architectural split isn’t the secret sauce.</p>

<p><strong>The refinement loop is the real driver.</strong> Performance jumped +13 percentage points from zero to one refinement iteration. This is the ACT outer loop — but any recurrent architecture could benefit from iterative refinement.</p>

<p><strong>Puzzle embeddings limit generalization.</strong> Since each puzzle gets a learned embedding by ID, the model can only work on puzzles it has seen during training. This makes HRM closer to “test-time training” (memorizing each puzzle’s pattern) than genuine reasoning that generalizes to novel puzzles.</p>

<h3 id="ge-liao--poggio-analysis-arxiv-251000355">Ge, Liao &amp; Poggio Analysis (arXiv 2510.00355)</h3>

<p>Researchers from MIT published <a href="https://arxiv.org/abs/2510.00355">“Hierarchical Reasoning Models: Perspectives and Misconceptions”</a> with further findings:</p>

<p><strong>A flat model works equally well.</strong> An 8-layer L-only model (no H module at all) achieved similar performance and trained faster (1h 48m vs 4h 21m).</p>

<p><strong>The one-step gradient trick isn’t novel.</strong> The no-grad warmup + 1-step gradient pattern is mathematically equivalent to how diffusion models and Latent Consistency Models train. It’s a known technique.</p>

<p><strong>ACT doesn’t help at inference.</strong> Running for the maximum number of steps always gives the best results. The learned halting policy is never actually useful — the code itself always runs to <code class="language-plaintext highlighter-rouge">halt_max_steps</code> during evaluation.</p>

<p><strong>Is it even recurrent?</strong> Since only the last cycle has gradients and the carry is detached between ACT steps, the paper questions whether HRM is truly recurrent or just a very deep feedforward model.</p>

<h2 id="whats-genuinely-interesting">What’s Genuinely Interesting</h2>

<p>Despite the critiques, HRM points toward ideas worth taking seriously:</p>

<p><strong>Latent-space reasoning works.</strong> Instead of generating tokens to “think” (Chain-of-Thought), you can reason inside hidden states. This is fundamentally faster — no autoregressive token generation — and the ARC results show it’s viable even at 27M parameters.</p>

<p><strong>Iterative refinement is powerful.</strong> Running the same model multiple times with carried state is a simple idea with outsized impact. The +13pp jump from zero to one refinement iteration shows this clearly.</p>

<p><strong>Small models can do complex reasoning.</strong> With the right architecture and training setup, you don’t need billions of parameters for tasks like Sudoku and maze solving. The computational depth comes from recurrence, not model size.</p>

<p>The specific hierarchical architecture may not be essential, and the puzzle embeddings are a significant limitation. But the broader research direction — compact models that reason through iterative latent computation — is one worth watching.</p>
]]></content>
  </entry>
  
  <entry>
    <title type="html"><![CDATA[BrushNet & BrushEdit Explained: From Inpainting Architecture to Intelligent Editing]]></title>
    <link href="https://wangyi.ai/blog/2026/02/07/brushnet-explained-inpainting/"/>
    <updated>2026-02-07T10:00:00-08:00</updated>
    <id>https://wangyi.ai/blog/2026/02/07/brushnet-explained-inpainting</id>
    <content type="html"><![CDATA[<p>You’ve probably seen AI tools that can erase objects from photos and fill in the gap seamlessly. But how does the model know what to put there — and how does it figure out <em>where</em> to edit when you just say “remove the dog”? In this post, I’ll break down two papers: <strong>BrushNet</strong>, a clever architecture that adds inpainting ability to any diffusion model, and <strong>BrushEdit</strong>, an agent pipeline that wraps BrushNet with language understanding to turn natural instructions into image edits.</p>

<!-- more -->

<h2 id="part-1-brushnet--the-inpainting-engine">Part 1: BrushNet — The Inpainting Engine</h2>

<h2 id="the-problem-teaching-a-model-to-fill-holes">The Problem: Teaching a Model to Fill Holes</h2>

<p>Imagine you have a photo of a dog on a beach. You want to replace the dog with a sandcastle. You need a model that:</p>
<ol>
  <li>Understands what’s around the hole (beach, sky, waves)</li>
  <li>Generates something new that matches (a sandcastle)</li>
  <li>Blends it seamlessly at the edges</li>
</ol>

<p>The simplest approach? Fine-tune the entire diffusion model for inpainting. But this has a big downside — you break the original model. It can’t do normal image generation anymore, and you can’t swap in a better base model later.</p>

<p><strong>BrushNet’s solution:</strong> keep the original model frozen, and add a separate trainable branch alongside it.</p>

<h2 id="the-two-branch-architecture">The Two-Branch Architecture</h2>

<p>BrushNet runs <strong>two U-Nets in parallel</strong>:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>                  ┌──────────────────────────┐
  Text prompt ───→│  Base U-Net (FROZEN)     │──→ Predicted noise
                  │  Has cross-attention     │
                  │  to understand text      │
                  └────────────▲─────────────┘
                               │
                          + (add features)
                               │
                  ┌────────────┴─────────────┐
  Masked image ──→│  BrushNet (TRAINABLE)    │
  + mask ────────→│  NO cross-attention      │
  + noisy latent →│  Processes spatial info  │
                  └──────────────────────────┘
</code></pre></div></div>

<p>The Base U-Net does what it always does — denoise an image guided by a text prompt. BrushNet runs alongside it, processing the mask and surrounding context, then <strong>injects hints</strong> into the Base U-Net at every layer.</p>

<h2 id="what-goes-into-brushnet">What Goes Into BrushNet?</h2>

<p>BrushNet takes 3 things, concatenated into a 9-channel input:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>┌──────────────────┐  ┌──────────────────┐  ┌──────────────────┐
│  Noisy latent    │  │  Masked image    │  │  Binary mask     │
│  (4 channels)    │  │  (4 channels)    │  │  (1 channel)     │
│                  │  │                  │  │                  │
│  Current state   │  │  What's around   │  │  Where the       │
│  of denoising    │  │  the hole        │  │  hole is         │
└──────────────────┘  └──────────────────┘  └──────────────────┘
         │                     │                     │
         └─────────────────────┴─────────────────────┘
                               │
                     Concatenate → 9 channels
                               │
                         ┌─────▼─────┐
                         │ BrushNet  │
                         └───────────┘
</code></pre></div></div>

<h3 id="why-these-3-inputs-what-does-each-one-do">Why these 3 inputs? What does each one do?</h3>

<p>Each input answers a different question:</p>

<p><strong>1. Noisy latent <code class="language-plaintext highlighter-rouge">z_t</code> (4 channels) — “What step are we at?”</strong></p>

<p>This is the current state of the image being denoised. At each timestep during the denoising loop, the image goes from pure noise to clean image. BrushNet needs to see this so it knows how much noise is left and can produce appropriate injection features for the current step.</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>t=T (start):   z_t = pure noise          → BrushNet: "everything is noisy, give strong guidance"
t=T/2 (mid):   z_t = half noise/half image → BrushNet: "refine the details"
t=0 (end):     z_t = nearly clean         → BrushNet: "just fix edges"
</code></pre></div></div>

<p><strong>2. Masked image latent <code class="language-plaintext highlighter-rouge">z_masked</code> (4 channels) — “What’s around the hole?”</strong></p>

<p>This is the original image with the masked region zeroed out, then VAE-encoded. It tells BrushNet what the surrounding context looks like — colors, textures, edges near the mask boundary.</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Original:     [beach][dog][beach]
Mask applied: [beach][ 0 ][beach]    ← dog region zeroed out
VAE encode:   [4-channel latent]     ← this goes to BrushNet
</code></pre></div></div>

<p>Why 4 channels instead of 3 (RGB)? Because the U-Net operates in VAE latent space, not pixel space. Raw pixels would be mismatched — like feeding English text into a Chinese language model. The VAE encoder translates the image into the same “language” the U-Net understands.</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Original image (512×512×3)
        │
   Apply mask (zero out hole region)
        │
   VAE Encoder
        │
Masked image latent (64×64×4)   ← This goes to BrushNet
</code></pre></div></div>

<p><strong>3. Mask (1 channel) — “Where is the hole?”</strong></p>

<p>A simple binary map: 1 = inpaint here, 0 = keep original. You might think BrushNet could figure this out from the masked image alone (just look for the zeros), but zeroed-out pixels are ambiguous:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Without mask channel:
  z_masked has zeros at (2,3) → Is this black pixels or a hole? 🤷

With mask channel:
  z_masked has zeros at (2,3) + mask=1 at (2,3) → Definitely a hole! ✓
</code></pre></div></div>

<h3 id="why-all-3-are-necessary">Why all 3 are necessary</h3>

<table>
  <thead>
    <tr>
      <th>Without…</th>
      <th>Problem</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td>Noisy latent</td>
      <td>BrushNet doesn’t know which denoising step → wrong features</td>
    </tr>
    <tr>
      <td>Masked image</td>
      <td>BrushNet can’t see surrounding context → can’t blend</td>
    </tr>
    <tr>
      <td>Mask</td>
      <td>BrushNet can’t distinguish “black pixel” from “hole”</td>
    </tr>
  </tbody>
</table>

<p>Each input answers a different question: <strong>when</strong> (timestep), <strong>what’s around</strong> (context), and <strong>where</strong> (hole location).</p>
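<p>The channel bookkeeping is easy to sketch. Here’s a minimal NumPy illustration of the 9-channel concatenation — shapes follow the post (64×64 latent space), but the variable names are mine, not the paper’s code:</p>

```python
import numpy as np

# Illustrative names; shapes follow the post (latent space is 64x64).
z_t      = np.random.randn(4, 64, 64)   # noisy latent: current denoising state
z_masked = np.random.randn(4, 64, 64)   # VAE-encoded masked image
mask     = np.zeros((1, 64, 64))        # 1 = inpaint here, 0 = keep original
mask[0, 20:40, 20:40] = 1.0             # a square "hole"

# BrushNet's input: concatenate along the channel axis -> 9 channels
brushnet_input = np.concatenate([z_t, z_masked, mask], axis=0)
print(brushnet_input.shape)  # (9, 64, 64)
```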

<h2 id="the-key-innovation-zero-convolutions">The Key Innovation: Zero Convolutions</h2>

<p>Here’s the clever part. BrushNet’s features are injected into the Base U-Net through <strong>zero convolutions</strong> — 1×1 convolutions where all weights start at zero.</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>At training start:

BrushNet feature ──→ ZeroConv ──→ 0.0 ──→ + Base U-Net feature
                     (all zeros)           (unchanged!)
</code></pre></div></div>

<p>Why? Because the Base U-Net is a carefully trained model. If you inject random noise into it on day one, you’d destroy its ability to generate images. Starting from zero means:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Training step 0:     BrushNet contributes nothing     (U-Net works normally)
Training step 100:   BrushNet whispers tiny hints      (weights: 0.001)
Training step 10K:   BrushNet provides real guidance   (weights: 0.1)
</code></pre></div></div>

<h3 id="concrete-example">Concrete Example</h3>

<p>Say BrushNet produces a feature value of <code class="language-plaintext highlighter-rouge">0.8</code> at some position. Here’s what the zero convolution does with it over training:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Step 0:     weight = 0.0    →  0.0 × 0.8 = 0.0    (silent)
Step 1000:  weight = 0.02   →  0.02 × 0.8 = 0.016  (whispering)
Step 10000: weight = 0.25   →  0.25 × 0.8 = 0.2    (contributing)
</code></pre></div></div>

<p>It’s like slowly turning up the volume from mute. The Base U-Net is never shocked by sudden changes.</p>
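<p>A zero convolution is simple enough to demo directly. The sketch below treats a 1×1 convolution as a per-pixel channel-mixing matrix in NumPy, with a made-up diagonal weight of 0.25 standing in for “learned” weights, and reproduces the 0.25 × 0.8 = 0.2 arithmetic above:</p>

```python
import numpy as np

# A 1x1 "zero convolution" over C channels is just a CxC matrix
# applied independently at every pixel. Initialized to zero, it
# passes nothing through to the Base U-Net.
C, H, W = 8, 4, 4
weight = np.zeros((C, C))                    # all-zero initialization
feat = np.full((C, H, W), 0.8)               # BrushNet feature map

def zero_conv(w, x):
    # einsum over the channel axis == a 1x1 convolution
    return np.einsum('oc,chw->ohw', w, x)

out = zero_conv(weight, feat)
assert np.all(out == 0.0)                    # step 0: silent

# As training nudges the weights away from zero, the contribution grows:
weight[np.diag_indices(C)] = 0.25            # pretend-learned weights
out = zero_conv(weight, feat)
assert np.allclose(out, 0.2)                 # 0.25 * 0.8 = 0.2
```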

<h2 id="where-are-features-injected">Where Are Features Injected?</h2>

<p>Unlike ControlNet (which only injects into the decoder), BrushNet injects at <strong>every single layer</strong> — all encoder blocks, the mid block, and all decoder blocks:</p>

<p><img src="/images/blog/brushnet_architecture.png" alt="BrushNet Dual-Branch Architecture" /></p>

<p>The left column (green) is the trainable BrushNet branch — no cross-attention to text. The right column (blue) is the frozen Base U-Net with text cross-attention. The red arrows are zero-conv injection points where BrushNet features are added element-wise to the Base U-Net.</p>

<p>Each arrow actually represents multiple injection points (one per sub-layer), about <strong>25 injection points</strong> in total. This dense injection gives BrushNet pixel-level control, which is crucial for inpainting — you need precise boundaries where the generated content meets the original image.</p>

<h2 id="why-no-cross-attention-in-brushnet">Why No Cross-Attention in BrushNet?</h2>

<p>The Base U-Net has cross-attention layers that let it understand text prompts:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Base U-Net block:    ResBlock → CrossAttention("a sunflower") → output
BrushNet block:      ResBlock →                                output
                                   ↑
                             (removed!)
</code></pre></div></div>

<p>This is by design. BrushNet’s job is purely spatial — “here’s a hole, here’s what’s around it.” The text understanding stays in the Base U-Net. This separation means:</p>

<ul>
  <li>BrushNet is <strong>smaller</strong> (~480M vs ~520M params) because it skips attention layers</li>
  <li>It focuses entirely on <strong>where</strong> to inpaint, not <strong>what</strong> to generate</li>
  <li><strong>What</strong> to generate is handled by the Base U-Net via the text prompt</li>
</ul>

<h2 id="how-training-works">How Training Works</h2>

<p>The training loop is surprisingly simple — it uses the standard diffusion denoising loss:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>For each training step:

1. Take a clean image                    "cat on a couch"
2. Generate a RANDOM mask                (random shape, random position)
3. Apply mask to image                   (hole in it)
4. VAE-encode both                       z₀ (clean latent), z_masked (masked latent)
5. Add random noise to clean latent      z_t = mix(z₀, noise, t)
6. Run through both branches:
     BrushNet(z_t, z_masked, mask)       → injection features
     Base_UNet(z_t, text) + features     → predicted noise
7. Loss = ‖ predicted_noise - actual_noise ‖²       (MSE)
</code></pre></div></div>

<h3 id="wait--the-loss-compares-noise-not-images">Wait — the loss compares noise, not images?</h3>

<p>Yes! The model predicts <strong>what noise was added</strong>, not what the clean image looks like. We know the actual noise because we added it ourselves in step 5. If the model can perfectly predict the noise, we can subtract it to recover the clean image.</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>We added noise ε to get z_t.
Model predicts ε_θ.
If ε_θ ≈ ε, then z₀ ≈ (z_t - ε_θ) / scale   ← clean image recovered!
</code></pre></div></div>
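<p>Concretely, the mixing in step 5 is usually the standard diffusion forward formula <code>z_t = √ᾱ·z₀ + √(1−ᾱ)·ε</code>. A small NumPy sketch (assuming that formula and a made-up ᾱ value) shows that a perfect noise prediction lets you invert it exactly:</p>

```python
import numpy as np

rng = np.random.default_rng(0)
z0 = rng.standard_normal((4, 8, 8))        # clean latent
eps = rng.standard_normal((4, 8, 8))       # noise we add ourselves

alpha_bar = 0.3                            # noise-schedule value at step t
z_t = np.sqrt(alpha_bar) * z0 + np.sqrt(1 - alpha_bar) * eps

# If the model's prediction eps_theta matched eps exactly,
# we could invert the mixing and recover the clean latent:
eps_theta = eps                            # pretend a perfect prediction
z0_recovered = (z_t - np.sqrt(1 - alpha_bar) * eps_theta) / np.sqrt(alpha_bar)
assert np.allclose(z0_recovered, z0)       # clean image recovered
```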

<h3 id="no-special-mask-weighted-loss">No special mask-weighted loss?</h3>

<p>Nope. The loss is computed over the <strong>entire</strong> image, not just the masked region. But the model naturally focuses on the mask because:</p>

<ul>
  <li><strong>Outside the mask</strong>: the frozen Base U-Net already handles this well. BrushNet’s zero-convs learn to stay quiet here (contributing nothing keeps the loss low).</li>
  <li><strong>Inside the mask</strong>: the Base U-Net struggles without context. BrushNet’s features are the only thing that helps here, so gradients push the zero-convs to output useful values.</li>
</ul>

<p>The mask guides learning <strong>implicitly through gradients</strong>, not explicitly through loss weighting.</p>

<h3 id="training-data-just-clean-images">Training data: just clean images</h3>

<p>BrushNet doesn’t need paired before/after examples. It’s self-supervised:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Dataset: clean images + text descriptions (same data as Stable Diffusion)
Masks:   generated randomly during training
</code></pre></div></div>

<p>The model learns to reconstruct whatever was behind a random mask, using the surrounding context and text prompt. At inference, you provide a real mask of what you want to replace.</p>

<h2 id="brushnet-vs-controlnet-vs-standard-inpainting">BrushNet vs. ControlNet vs. Standard Inpainting</h2>

<table>
  <thead>
    <tr>
      <th>Feature</th>
      <th>SD Inpainting</th>
      <th>ControlNet</th>
      <th>BrushNet</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td>Base model</td>
      <td>Modified (retrained)</td>
      <td>Frozen</td>
      <td>Frozen</td>
    </tr>
    <tr>
      <td>Branch coverage</td>
      <td>N/A (single model)</td>
      <td>Encoder only</td>
      <td>Full U-Net</td>
    </tr>
    <tr>
      <td>Injection points</td>
      <td>N/A</td>
      <td>~12 (decoder only)</td>
      <td>~25 (everywhere)</td>
    </tr>
    <tr>
      <td>Swap base models?</td>
      <td>No</td>
      <td>Yes</td>
      <td>Yes</td>
    </tr>
    <tr>
      <td>Extra params</td>
      <td>0</td>
      <td>~360M</td>
      <td>~480M</td>
    </tr>
    <tr>
      <td>Text handling</td>
      <td>Single model</td>
      <td>Branch has cross-attn</td>
      <td>Branch has NO cross-attn</td>
    </tr>
    <tr>
      <td>Best for</td>
      <td>General inpainting</td>
      <td>Structural control</td>
      <td>Precise inpainting</td>
    </tr>
  </tbody>
</table>

<h3 id="why-full-u-net-matters-for-inpainting">Why full U-Net matters for inpainting</h3>

<p>ControlNet copies only the encoder half — it injects features into the decoder via the skip connections. This works well for structural guidance (edges, poses) but not for inpainting, where you need fine-grained control at every spatial resolution.</p>

<p>The BrushNet paper showed this clearly:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Full U-Net (BrushNet):  PSNR 19.86  ← best quality
Half U-Net:             PSNR 19.01
ControlNet-style:       PSNR 18.28  ← worst quality
</code></pre></div></div>

<p>Inpainting needs dense per-pixel control, especially at mask boundaries where generated content must blend seamlessly with the original image.</p>

<h2 id="inference-putting-it-all-together">Inference: Putting It All Together</h2>

<p>At inference time, the full pipeline looks like this:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>1. User provides: image + mask + text prompt ("a sunflower")

2. Encode:
   masked_image = apply_mask(image, mask)
   z_masked = VAE_encode(masked_image)         [4, 64, 64]
   mask_small = downsample(mask)                [1, 64, 64]

3. Start from pure noise:
   z_T ~ N(0, I)                                [4, 64, 64]

4. Denoise loop (T steps, e.g. 25-50):
   for t in T → 0:
       brushnet_feats = BrushNet(z_t, z_masked, mask_small, t)
       noise_pred = BaseUNet(z_t, t, "a sunflower") + brushnet_feats
       z_{t-1} = scheduler_step(z_t, noise_pred)

5. Decode final latent:
   result = VAE_decode(z_0)                     [3, 512, 512]

6. Blend:
   output = blur_blend(result, original_image, mask)
</code></pre></div></div>

<p>The final blending step uses a Gaussian-blurred mask to smooth the transition between generated and original pixels, avoiding hard edges.</p>
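<p>The blend in step 6 can be sketched in a few lines. This toy NumPy version uses a naive box blur in place of a real Gaussian and single-channel images instead of RGB, purely to show how the softened mask removes the hard edge:</p>

```python
import numpy as np

def box_blur(m, k=5):
    # Naive box blur of a 2D mask; a real pipeline would use a Gaussian.
    pad = k // 2
    p = np.pad(m, pad, mode='edge')
    out = np.zeros_like(m, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + m.shape[0], dx:dx + m.shape[1]]
    return out / (k * k)

H, W = 32, 32
original = np.zeros((H, W))          # grayscale stand-ins for real images
result = np.ones((H, W))             # generated content
mask = np.zeros((H, W))
mask[8:24, 8:24] = 1.0               # inpainted square

soft = box_blur(mask)                # soft edges instead of a hard 0/1 step
output = soft * result + (1 - soft) * original

assert output[16, 16] == 1.0         # deep inside the mask: generated
assert output[0, 0] == 0.0           # far outside: original
assert 0.0 < output[8, 7] < 1.0      # boundary: smooth transition
```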

<h2 id="the-plug-and-play-promise">The Plug-and-Play Promise</h2>

<p>Because the Base U-Net is never modified, you can:</p>

<ul>
  <li>Train one BrushNet and use it with <strong>any</strong> compatible base model</li>
  <li>Swap in a photorealistic model, an anime model, or a custom fine-tune</li>
  <li>The base model keeps all its original capabilities (text-to-image still works)</li>
  <li>Adjust the <code class="language-plaintext highlighter-rouge">conditioning_scale</code> (0.0 to 1.0) to control how much BrushNet influences the output</li>
</ul>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>scale = 0.0  →  Base U-Net only (no inpainting guidance)
scale = 0.5  →  Gentle inpainting hints
scale = 1.0  →  Full BrushNet influence (default)
</code></pre></div></div>
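<p>In code, the scale is just a multiplier on the injected features before the addition. This is a simplification — the real injection happens per-layer through the zero-convs — but the knob behaves the same way:</p>

```python
import numpy as np

base_pred      = np.random.randn(4, 64, 64)  # Base U-Net noise prediction
brushnet_feats = np.random.randn(4, 64, 64)  # BrushNet injection features

def combine(base, feats, scale):
    # scale = 0.0 -> base model only; scale = 1.0 -> full BrushNet influence
    return base + scale * feats

no_guidance = combine(base_pred, brushnet_feats, 0.0)
full        = combine(base_pred, brushnet_feats, 1.0)
assert np.allclose(no_guidance, base_pred)
assert np.allclose(full - base_pred, brushnet_feats)
```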

<h2 id="model-size">Model Size</h2>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Base U-Net (frozen):     ~520M params
BrushNet (trainable):    ~480M params
  └─ Zero-conv layers:    25 layers, ~20M params
Total at inference:      ~1,000M params (1B)
</code></pre></div></div>

<p>BrushNet is nearly the same size as the Base U-Net — the only difference is removing cross-attention layers (~40M params saved). The trade-off is clear: <strong>2x memory for plug-and-play flexibility</strong>.</p>

<h2 id="brushnet-summary">BrushNet Summary</h2>

<p>BrushNet gives us a powerful inpainting engine. But using it requires you to provide two things manually: a <strong>mask</strong> (where to edit) and a <strong>text prompt</strong> (what to generate). For simple cases that’s fine — draw a circle around the dog, type “a sunflower.”</p>

<p>But what if you just want to say <strong>“remove the dog”</strong> and have the system figure out the rest?</p>

<p>That’s exactly what BrushEdit does. It wraps BrushNet in an intelligent agent pipeline that automates the mask and prompt generation.</p>

<hr />

<h2 id="part-2-brushedit--from-remove-the-dog-to-edited-image">Part 2: BrushEdit — From “Remove the Dog” to Edited Image</h2>

<p>BrushEdit (arXiv 2412.10316) doesn’t change BrushNet’s architecture at all. Instead, it asks: <strong>how do you go from a natural language instruction to a BrushNet-ready mask and prompt?</strong></p>

<p>The answer is an assembly line of 4 AI models:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>User: "Remove the dog from the garden"
                │
                ▼
  ┌───────────────────────────┐
  │ 1. MLLM (Qwen2-VL)        │  "What kind of edit? What object?"
  │    Classify + Identify    │  → edit_type = "remove"
  │    + Generate caption     │  → target = "dog"
  └────────────┬──────────────┘  → caption = "garden with flowers"
               ▼
  ┌───────────────────────────┐
  │ 2. GroundingDINO          │  "Where is the dog?"
  │    Text → bounding box    │  → bbox around the dog
  └────────────┬──────────────┘
               ▼
  ┌───────────────────────────┐
  │ 3. SAM                    │  "What's the exact shape?"
  │    Bbox → pixel mask      │  → silhouette of the dog
  └────────────┬──────────────┘
               ▼
  ┌───────────────────────────┐
  │ 4. BrushNet + SD 1.5      │  "Fill the hole"
  │    Mask + caption → image │  → dog replaced with garden
  └───────────────────────────┘
</code></pre></div></div>

<p>Each model does one thing well. Let’s walk through each step.</p>
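<p>The control flow of that assembly line fits in a few lines of Python. Every function below is a hypothetical stub returning canned values; the point is the orchestration, not the models:</p>

```python
# Hypothetical stand-ins for the four models; each stub returns a
# canned value so the control flow is runnable end to end.

def mllm_classify(instruction):
    return "remove"                               # edit type

def mllm_identify(instruction):
    return "dog"                                  # target phrase

def mllm_caption(image, instruction):
    return "a peaceful garden path with flowers"  # post-edit description

def grounding_dino(image, phrase):
    return (128, 128, 384, 384)                   # bounding box

def sam(image, bbox):
    return f"mask-for-{bbox}"                     # pixel-precise mask

def brushnet_inpaint(image, mask, caption):
    return f"edited({image}, {mask}, {caption!r})"

def edit(image, instruction):
    edit_type = mllm_classify(instruction)
    target = mllm_identify(instruction)
    caption = mllm_caption(image, instruction)
    if edit_type == "addition":
        bbox = None  # the MLLM would predict a box; no detection needed
    else:
        bbox = grounding_dino(image, target)
    mask = sam(image, bbox)
    return brushnet_inpaint(image, mask, caption)

print(edit("photo.png", "Remove the dog from the garden"))
```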

<h2 id="step-1-the-mllm-understands-your-instruction">Step 1: The MLLM Understands Your Instruction</h2>

<p>The MLLM (a vision-language model like Qwen2-VL or GPT-4o) is called <strong>three separate times</strong>, each with a different question. No fine-tuning — it’s used purely through prompt engineering.</p>

<h3 id="call-1-what-kind-of-edit">Call 1: “What kind of edit?”</h3>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>System: "Classify this editing instruction into one of:
         addition, remove, local, global, background.
         Reply with a single word."
User:   "Remove the dog from the garden"

→ "remove"
</code></pre></div></div>

<p>This classification matters because each edit type needs a <strong>different mask strategy</strong>:</p>

<table>
  <thead>
    <tr>
      <th>Edit Type</th>
      <th>What Happens to the Mask</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td><strong>Remove</strong> “Remove the dog”</td>
      <td>Detect dog → segment it → dilate mask edges</td>
    </tr>
    <tr>
      <td><strong>Addition</strong> “Add a cat on the sofa”</td>
      <td>No detection needed — MLLM predicts a bounding box</td>
    </tr>
    <tr>
      <td><strong>Local</strong> “Make the car blue”</td>
      <td>Detect car → segment it → use mask as-is</td>
    </tr>
    <tr>
      <td><strong>Background</strong> “Change to a beach”</td>
      <td>Detect foreground → segment → <strong>invert</strong> the mask</td>
    </tr>
    <tr>
      <td><strong>Global</strong> “Make it nighttime”</td>
      <td>Mask the entire image</td>
    </tr>
  </tbody>
</table>

<h3 id="call-2-what-object">Call 2: “What object?”</h3>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>System: "Identify the main object being edited.
         Reply with no more than 5 words, a single noun phrase."
User:   "Remove the dog from the garden"

→ "dog"
</code></pre></div></div>

<p>This short phrase will be fed to GroundingDINO as a search query. It needs to be concise — just enough to find the right thing in the image.</p>

<h3 id="call-3-what-should-the-result-look-like">Call 3: “What should the result look like?”</h3>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>System: "Describe what the image should look like AFTER the edit.
         Do NOT include elements that are removed or changed."
User:   [source image] + "Remove the dog from the garden"

→ "A peaceful garden path with green grass and flowers"
</code></pre></div></div>

<p>This becomes the text prompt for BrushNet’s inpainting. Notice: it describes the scene <strong>without</strong> the dog — because we’re removing it. The MLLM has to understand the instruction well enough to describe the <em>result</em>, not just parrot the input.</p>

<h3 id="why-training-free-works-here">Why training-free works here</h3>

<p>All three calls use the MLLM <strong>off-the-shelf</strong>. No fine-tuning. This means you can swap backends freely:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>GPT-4o  →  Best quality, requires API key, costs money
Qwen2-VL →  Best open-source, runs locally, ~16 GB VRAM
LLaVA   →  Lighter alternative, ~17 GB VRAM
</code></pre></div></div>

<p>The paper doesn’t fine-tune any of these models. It just writes good prompts. This is a deliberate design choice — it keeps the system modular and easy to upgrade as better VLMs come out.</p>

<h2 id="step-2-groundingdino-finds-the-object">Step 2: GroundingDINO Finds the Object</h2>

<p>Now we know we’re looking for “dog.” But where in the image is it?</p>

<p>GroundingDINO is an open-vocabulary object detector. Unlike traditional detectors that only recognize a fixed set of classes (like COCO’s 80 categories), it takes <strong>any text query</strong> and finds matching objects:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Input:  image + "dog"
Output: bounding box (128, 128, 384, 384), confidence 0.89
</code></pre></div></div>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>┌────────────────────────┐
│                        │
│    ┌──────────┐        │
│    │          │        │
│    │   dog    │        │
│    │          │        │
│    └──────────┘        │
│         ↑              │
│    bounding box        │
│    from DINO           │
└────────────────────────┘
</code></pre></div></div>

<p>This works for any object you can describe in words. “Red car,” “wooden table,” “person in blue shirt” — GroundingDINO handles them all.</p>

<p><strong>Exception: addition edits.</strong> If the instruction is “add a cat on the sofa,” there’s no cat to detect yet. In this case, GroundingDINO is skipped entirely. Instead, the MLLM predicts where the new object should go by outputting a bounding box: “given this 512×512 image, the cat should go at [256, 170, 128, 170].”</p>

<h2 id="step-3-sam-cuts-the-exact-shape">Step 3: SAM Cuts the Exact Shape</h2>

<p>A bounding box is too rough. The box around the dog also includes chunks of grass, maybe a bit of fence. We need the exact silhouette.</p>

<p>SAM (Segment Anything Model) takes the bounding box and produces a pixel-precise mask:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Before (bounding box):          After (SAM mask):

┌────────────────────────┐      ┌────────────────────────┐
│                        │      │                        │
│    ┌──────────┐        │      │      ████████          │
│    │ grass    │        │      │    ████████████        │
│    │   dog    │        │      │    ██████████          │
│    │ grass    │        │      │      ██████            │
│    └──────────┘        │      │        ██              │
│                        │      │                        │
└────────────────────────┘      └────────────────────────┘

Box includes background         Mask follows the dog's
around the dog                   exact silhouette
</code></pre></div></div>

<h3 id="edit-type-specific-mask-adjustments">Edit-type-specific mask adjustments</h3>

<p>After SAM produces the mask, BrushEdit adjusts it based on the edit type:</p>

<ul>
  <li><strong>Remove:</strong> <strong>Dilate</strong> the mask by a few pixels. Fur, hair, and shadows often extend slightly beyond the segmentation boundary. Expanding the mask catches these fuzzy edges.</li>
  <li><strong>Background:</strong> <strong>Invert</strong> the mask. Instead of masking the dog, mask everything <em>except</em> the dog. Now BrushNet will regenerate the entire background while keeping the dog untouched.</li>
  <li><strong>Local:</strong> Use the mask as-is. The object is being modified, so we need to cover exactly that region.</li>
</ul>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Remove (dilated):            Background (inverted):

┌────────────────────────┐   ┌────────────────────────┐
│                        │   │████████████████████████│
│     ██████████         │   │████            ████████│
│   ██████████████       │   │██                ██████│
│   ████████████         │   │████            ████████│
│     ████████           │   │██████        ██████████│
│       ████             │   │████████████████████████│
│                        │   │████████████████████████│
└────────────────────────┘   └────────────────────────┘
Expanded to catch fur/shadow  Everything EXCEPT the dog
</code></pre></div></div>
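<p>These adjustments are one-liners on a binary mask. The sketch below uses a crude square dilation in place of a proper morphological kernel (an assumption for brevity):</p>

```python
import numpy as np

def dilate(mask, r=1):
    # Grow a binary mask by r pixels: max over a (2r+1)x(2r+1) neighborhood.
    out = mask.copy()
    p = np.pad(mask, r)
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            out = np.maximum(out, p[dy:dy + mask.shape[0], dx:dx + mask.shape[1]])
    return out

mask = np.zeros((8, 8), dtype=np.uint8)
mask[3:5, 3:5] = 1                    # SAM's silhouette (a 2x2 blob)

removed = dilate(mask)                # "remove": expand to catch fur/shadows
assert removed.sum() == 16            # 2x2 blob grown to 4x4

background = 1 - mask                 # "background": everything EXCEPT object
assert background[0, 0] == 1 and background[3, 3] == 0

local = mask                          # "local": use the mask as-is
```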

<h2 id="step-4-brushnet-fills-the-hole">Step 4: BrushNet Fills the Hole</h2>

<p>Now we have everything BrushNet needs:</p>

<table>
  <thead>
    <tr>
      <th>Input</th>
      <th>Value</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td><strong>Mask</strong></td>
      <td>Pixel-precise segmentation from SAM (dilated for removal)</td>
    </tr>
    <tr>
      <td><strong>Caption</strong></td>
      <td>“A peaceful garden path with green grass and flowers”</td>
    </tr>
    <tr>
      <td><strong>Original image</strong></td>
      <td>The source photo</td>
    </tr>
  </tbody>
</table>

<p>This is the exact same BrushNet pipeline we covered in Part 1:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>1. masked_image = original × (1 - mask)          ← zero out the dog region
2. z_masked = VAE.encode(masked_image)            ← encode to latent space
3. conditioning = concat(z_masked, mask)          ← 5-channel conditioning
4. Denoising loop (50 steps):
     BrushNet features = BrushNet(z_t, conditioning)
     noise_pred = Base_UNet(z_t, "garden with flowers") + BrushNet features
     z_{t-1} = scheduler.step(z_t, noise_pred)
5. result = VAE.decode(z_0)                       ← back to pixel space
6. output = blur(mask) × result + (1-blur(mask)) × original  ← blend
</code></pre></div></div>

<p>The blurred mask blending at the end creates a smooth transition at the boundary. Without it, you’d see a hard edge where the generated content meets the original image. This single step accounts for a +10 PSNR improvement in ablation studies.</p>

<h2 id="the-full-pipeline-end-to-end">The Full Pipeline, End to End</h2>

<p>Let’s trace through one more example to make sure it’s clear. Instruction: <strong>“Change the background to a tropical beach.”</strong></p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Step 1: MLLM classifies → "background"
        MLLM identifies  → "person" (the foreground object to keep)
        MLLM captions    → "A person standing on a tropical beach with
                            palm trees and turquoise water"

Step 2: GroundingDINO("person") → bounding box around the person

Step 3: SAM(bbox) → pixel mask of the person
        Mask is INVERTED → now covers everything EXCEPT the person
        Coverage: ~75% of the image

Step 4: BrushNet inpaints the masked region (the background)
        using caption "tropical beach with palm trees"
        Person is preserved in the unmasked region
        Blended at edges for seamless transition
</code></pre></div></div>

<p>The key insight for background edits: GroundingDINO detects the <strong>foreground</strong> object (the person), SAM segments it, then the mask is <strong>inverted</strong>. BrushNet never touches the person — it only regenerates the background.</p>

<h2 id="why-decompose-instead-of-end-to-end">Why Decompose Instead of End-to-End?</h2>

<p>You might wonder: why not train one big model that takes “remove the dog” and directly outputs an edited image? That’s what InstructPix2Pix does. BrushEdit’s decomposed approach has three advantages:</p>

<p><strong>1. Transparency.</strong> Every intermediate result is visible. You can see the edit classification (“remove”), the detected object (“dog”), the mask, and the caption. If something goes wrong, you know exactly where.</p>

<p><strong>2. User control.</strong> You can override any step. Don’t like the auto-generated mask? Draw your own. Want a different caption? Type one. The pipeline doesn’t force you into a black box.</p>

<p><strong>3. No paired training data.</strong> InstructPix2Pix needs millions of (instruction, before, after) triples — expensive to create. BrushEdit needs none. The MLLM is used off-the-shelf, GroundingDINO and SAM are pre-trained, and BrushNet trains on standard images with random masks.</p>

<p>The trade-off is complexity. BrushEdit orchestrates 4 separate models totaling ~66 GB of weights. But each model is best-in-class at its job, and you can upgrade any component independently.</p>

<h2 id="how-does-it-compare">How Does It Compare?</h2>

<h3 id="vs-inversion-based-methods-ddimp2p-null-text">vs. Inversion-based methods (DDIM+P2P, Null-Text)</h3>

<p>These methods invert the image to noise, then re-denoise with edits. BrushEdit skips inversion entirely — it generates directly in the masked region.</p>

<table>
  <thead>
    <tr>
      <th>Method</th>
      <th>PSNR (quality)</th>
      <th>Time</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td>DDIM + P2P</td>
      <td>22.67</td>
      <td>11s</td>
    </tr>
    <tr>
      <td>Null-Text + P2P</td>
      <td>26.52</td>
      <td>148s</td>
    </tr>
    <tr>
      <td><strong>BrushEdit</strong></td>
      <td><strong>32.16</strong></td>
      <td><strong>3.6s</strong></td>
    </tr>
  </tbody>
</table>

<p>Over 5 PSNR better than the strongest baseline, and 3–40× faster.</p>

<h3 id="vs-original-brushnet">vs. Original BrushNet</h3>

<p>BrushEdit uses BrushNet internally, but improves on it:</p>

<table>
  <thead>
    <tr>
      <th> </th>
      <th>BrushNet</th>
      <th>BrushEdit</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td>Mask generation</td>
      <td>Manual</td>
      <td>Automatic (MLLM + DINO + SAM)</td>
    </tr>
    <tr>
      <td>Caption</td>
      <td>Manual</td>
      <td>Automatic (MLLM)</td>
    </tr>
    <tr>
      <td>Model checkpoints</td>
      <td>2 separate (seg masks, random masks)</td>
      <td>1 unified model</td>
    </tr>
    <tr>
      <td>Object removal</td>
      <td>Limited</td>
      <td>Trained explicitly with removal data</td>
    </tr>
    <tr>
      <td>Multi-round editing</td>
      <td>No</td>
      <td>Yes (output becomes next input)</td>
    </tr>
  </tbody>
</table>

<p>The unified model comes from training on <strong>BrushData-v2</strong> — a merged dataset that combines segmentation masks and random masks, plus new removal training pairs where clean-background images are paired with random masks.</p>

<h2 id="brushedits-limitations">BrushEdit’s Limitations</h2>

<p>No system is perfect. BrushEdit struggles with:</p>

<p><strong>Irregular masks.</strong> Very thin, fragmented, or oddly shaped masks can produce artifacts. The model was trained mostly on blob-like masks and object silhouettes.</p>

<p><strong>Text-mask misalignment.</strong> If the caption says “a large elephant” but the mask is tiny, the model can’t fit an elephant in there. The MLLM doesn’t always reason well about spatial constraints.</p>

<p><strong>Base model ceiling.</strong> BrushEdit uses Stable Diffusion 1.5 as its backbone. Output quality is bounded by what SD 1.5 can generate. It can’t produce FLUX-quality images because the underlying diffusion model isn’t that capable.</p>

<p><strong>VLM errors cascade.</strong> If the MLLM misclassifies the edit type (calling a “remove” a “local edit”), the entire downstream pipeline produces wrong results. There’s no error recovery between steps.</p>

<h2 id="key-takeaways">Key Takeaways</h2>

<p><strong>BrushNet</strong> (Part 1):</p>

<ol>
  <li><strong>Dual-branch design</strong>: Frozen base model + trainable BrushNet branch. Plug-and-play.</li>
  <li><strong>9-channel input</strong>: Noisy latent (4) + masked image latent (4) + mask (1).</li>
  <li><strong>Zero convolutions</strong>: Start silent, gradually learn. Stable training.</li>
  <li><strong>Full U-Net coverage</strong>: Encoder + mid + decoder injection. Not just the encoder (ControlNet-style).</li>
  <li><strong>No cross-attention in BrushNet</strong>: Text stays in the Base U-Net. BrushNet handles spatial information only.</li>
</ol>

<p><strong>BrushEdit</strong> (Part 2):</p>

<ol>
  <li><strong>4-model assembly line</strong>: MLLM → GroundingDINO → SAM → BrushNet. Each model does one job well.</li>
  <li><strong>Training-free VLM</strong>: The MLLM is used off-the-shelf through prompt engineering. No fine-tuning. Swap backends freely.</li>
  <li><strong>Edit-type-aware masks</strong>: Different edit types get different mask treatments (dilated for removal, inverted for background, bbox for addition).</li>
  <li><strong>Transparent pipeline</strong>: Every intermediate result is visible and overridable by the user.</li>
  <li><strong>Unified inpainting model</strong>: One BrushNet checkpoint handles all mask types, trained on BrushData-v2.</li>
</ol>

<p>The two papers together tell a clean story: BrushNet solves <strong>how to inpaint</strong> (the architecture), and BrushEdit solves <strong>what to inpaint</strong> (the intelligence layer that turns natural language into masks and captions).</p>

<hr />

<p><em>This post covers BrushNet (ECCV 2024) and BrushEdit (arXiv 2412.10316). The architecture diagrams come from hands-on experimentation and code analysis of the <a href="https://github.com/TencentARC/BrushEdit">TencentARC/BrushEdit</a> repository.</em></p>
]]></content>
  </entry>
  
  <entry>
    <title type="html"><![CDATA[U-Net Explained: A Visual Guide for Beginners]]></title>
    <link href="https://wangyi.ai/blog/2026/02/03/unet-explained-visual-guide/"/>
    <updated>2026-02-03T10:00:00-08:00</updated>
    <id>https://wangyi.ai/blog/2026/02/03/unet-explained-visual-guide</id>
    <content type="html"><![CDATA[<p>If you’ve explored image generation, segmentation, or diffusion models, you’ve probably heard of U-Net. But what exactly is it, and why is it so widely used? In this post, I’ll break down U-Net step by step with concrete examples and visual diagrams.</p>

<!-- more -->

<h2 id="what-is-u-net">What is U-Net?</h2>

<p>U-Net is a neural network architecture designed for tasks where you need an <strong>image in</strong> and an <strong>image out</strong> of the same size. It was originally created for medical image segmentation in 2015, but has since become the backbone of many modern AI systems, including Stable Diffusion.</p>

<p>The name comes from its shape—when you draw the architecture, it looks like the letter “U”:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Input Image
    │
    ▼
┌─────────────────────────────────────────┐
│  ENCODER (Downsampling)                 │
│  ┌─────┐    ┌─────┐    ┌─────┐         │
│  │64ch │ →  │128ch│ →  │256ch│ → ...   │
│  │128² │    │64²  │    │32²  │         │
│  └──┬──┘    └──┬──┘    └──┬──┘         │
│     │ skip     │ skip     │ skip       │
│     ▼          ▼          ▼            │
│  ┌──┴──┐    ┌──┴──┐    ┌──┴──┐         │
│  │64ch │ ←  │128ch│ ←  │256ch│ ← ...   │
│  │128² │    │64²  │    │32²  │         │
│  └─────┘    └─────┘    └─────┘         │
│  DECODER (Upsampling)                   │
└─────────────────────────────────────────┘
    │
    ▼
Output Image
</code></pre></div></div>

<h2 id="the-three-key-parts">The Three Key Parts</h2>

<h3 id="1-encoder-the-down-path">1. Encoder (The Down Path)</h3>

<p>The encoder compresses the image, making it spatially smaller but with more channels:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>128×128×3  →  64×64×64  →  32×32×128  →  16×16×256  →  8×8×512
   │              │             │             │            │
   └──────────────┴─────────────┴─────────────┴────────────┘
                    Shrinking spatially
                    Growing in channels
</code></pre></div></div>

<p>At each step:</p>
<ul>
  <li><strong>Spatial size halves</strong> (128 → 64 → 32 → 16 → 8)</li>
  <li><strong>Channels increase</strong> (3 → 64 → 128 → 256 → 512)</li>
</ul>

<p>This is like summarizing a book—you lose details but capture the main ideas.</p>
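
<p>The shape bookkeeping above can be checked with a few lines of framework-free Python (a minimal sketch using the channel widths from this post):</p>

```python
# Trace the encoder shapes: each step halves the spatial size (pooling)
# and sets a larger channel count (the conv block).
def encoder_shapes(size=128, channels=3, widths=(64, 128, 256, 512)):
    shapes = [(channels, size, size)]
    for w in widths:
        size //= 2                      # pooling halves height and width
        shapes.append((w, size, size))  # conv block grows the channels
    return shapes

for shape in encoder_shapes():
    print(shape)
# prints (3, 128, 128), (64, 64, 64), (128, 32, 32), (256, 16, 16), (512, 8, 8)
```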

<h3 id="2-bottleneck">2. Bottleneck</h3>

<p>The bottleneck is the smallest point in the network:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>┌─────────────────────────────────┐
│          8×8×512                │
│                                 │
│  Only 64 spatial positions      │
│  but 512 features each          │
│                                 │
│  "Compressed understanding"     │
└─────────────────────────────────┘
</code></pre></div></div>

<p>At this point, the network has maximum semantic understanding but minimum spatial detail. It knows “what” is in the image but has lost “where” things are precisely.</p>

<h3 id="3-decoder-the-up-path">3. Decoder (The Up Path)</h3>

<p>The decoder expands the image back to full resolution:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>8×8×512  →  16×16×256  →  32×32×128  →  64×64×64  →  128×128×3
</code></pre></div></div>

<p>But here’s the problem: how do you recover the spatial details that were lost?</p>

<h2 id="the-secret-sauce-skip-connections">The Secret Sauce: Skip Connections</h2>

<p>This is what makes U-Net special. Skip connections pass information directly from the encoder to the decoder, bypassing the bottleneck:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>ENCODER                              DECODER
───────                              ───────
128×128 ─────── skip1 ─────────────→ 128×128
   │                                    ▲
64×64 ───────── skip2 ───────────→ 64×64
   │                                    ▲
32×32 ───────── skip3 ─────────→ 32×32
   │                                    ▲
16×16 ───────── skip4 ───────→ 16×16
   │                                    ▲
   └──→ 8×8 BOTTLENECK ──────────────────┘
</code></pre></div></div>

<h3 id="why-are-skip-connections-needed">Why Are Skip Connections Needed?</h3>

<p>Think of it this way:</p>

<table>
  <thead>
    <tr>
      <th>Source</th>
      <th>Knows</th>
      <th>Problem</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td>Bottleneck</td>
      <td>“What” is in image</td>
      <td>Lost “where” exactly</td>
    </tr>
    <tr>
      <td>Skip</td>
      <td>“Where” things are</td>
      <td>Doesn’t know context</td>
    </tr>
    <tr>
      <td><strong>Combined</strong></td>
      <td><strong>Both!</strong></td>
      <td><strong>Sharp + accurate output</strong></td>
    </tr>
  </tbody>
</table>

<h3 id="visual-example">Visual Example</h3>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>WITHOUT skip connections:        WITH skip connections:
┌────────────────────┐          ┌────────────────────┐
│                    │          │  ●                 │
│      ◯             │          │   ╲                │
│   (blurry,         │          │    ╲               │
│    wrong spot)     │          │     ●  (sharp,     │
│                    │          │      ╲  correct!)  │
│                    │          │       ●            │
└────────────────────┘          └────────────────────┘
</code></pre></div></div>

<p>The bottleneck knows “there’s a line somewhere” but lost the exact position. The skip connection says “the line edge is at these exact pixels.” Combined, you get a sharp, accurate output.</p>

<h2 id="the-building-blocks">The Building Blocks</h2>

<h3 id="convblock-the-basic-unit">ConvBlock: The Basic Unit</h3>

<p>Every level of the U-Net uses convolutional blocks:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Input
  ↓
Conv 3×3 → BatchNorm → ReLU
  ↓
Conv 3×3 → BatchNorm → ReLU
  ↓
Output
</code></pre></div></div>

<p>A 3×3 convolution looks at a pixel and its 8 neighbors to compute each output pixel.</p>
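
<p>Here is one way to write that block in PyTorch (a sketch of the ConvBlock described above, assuming padding of 1 so the spatial size is preserved):</p>

```python
import torch
import torch.nn as nn

class ConvBlock(nn.Module):
    """Two 3x3 convs, each followed by BatchNorm and ReLU."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1),  # padding=1 keeps H and W
            nn.BatchNorm2d(out_ch),
            nn.ReLU(),
            nn.Conv2d(out_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(),
        )

    def forward(self, x):
        return self.block(x)

out = ConvBlock(3, 64)(torch.randn(1, 3, 128, 128))
print(out.shape)  # torch.Size([1, 64, 128, 128])
```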

<h3 id="understanding-conv2d">Understanding Conv2d</h3>

<p>Let’s make this concrete with <code class="language-plaintext highlighter-rouge">Conv2d(2, 3, 3)</code> — 2 input channels, 3 output channels, 3×3 kernel.</p>

<p><strong>Key insight:</strong> Each output channel has its own filter, and each filter looks at ALL input channels.</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>INPUT (2 channels)              OUTPUT (3 channels)

┌─────────┐                    ┌─────────┐
│ Ch 0    │──┬─ Filter 0 ─────→│ Ch 0    │
│         │  │                 └─────────┘
└─────────┘  │
             ├─ Filter 1 ─────→┌─────────┐
┌─────────┐  │                 │ Ch 1    │
│ Ch 1    │──┤                 └─────────┘
│         │  │
└─────────┘  └─ Filter 2 ─────→┌─────────┐
                               │ Ch 2    │
                               └─────────┘
</code></pre></div></div>

<p>Each filter reads ALL input channels to produce ONE output channel.</p>
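
<p>You can see this in the weight tensor PyTorch allocates: its shape is (out_channels, in_channels, kernel_h, kernel_w), so every output channel owns a kernel slice for every input channel.</p>

```python
import torch.nn as nn

conv = nn.Conv2d(in_channels=2, out_channels=3, kernel_size=3)
print(conv.weight.shape)  # torch.Size([3, 2, 3, 3]): 3 filters, each 2x3x3
print(conv.bias.shape)    # torch.Size([3]): one bias per output channel
```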

<h3 id="concrete-conv2d-example">Concrete Conv2d Example</h3>

<p>Input (2 channels, 4×4 each):</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Channel 0:              Channel 1:
┌────┬────┬────┬────┐   ┌────┬────┬────┬────┐
│ 10 │ 10 │  0 │  0 │   │  5 │  5 │  5 │  5 │
├────┼────┼────┼────┤   ├────┼────┼────┼────┤
│ 10 │ 10 │  0 │  0 │   │  5 │  5 │  5 │  5 │
├────┼────┼────┼────┤   ├────┼────┼────┼────┤
│ 10 │ 10 │  0 │  0 │   │  5 │  5 │  5 │  5 │
├────┼────┼────┼────┤   ├────┼────┼────┼────┤
│ 10 │ 10 │  0 │  0 │   │  5 │  5 │  5 │  5 │
└────┴────┴────┴────┘   └────┴────┴────┴────┘
</code></pre></div></div>

<p>Filter 0 (one 3×3 kernel per input channel):</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>For input ch0:          For input ch1:
┌────┬────┬────┐        ┌────┬────┬────┐
│  1 │  0 │ -1 │        │  0 │  0 │  0 │
├────┼────┼────┤        ├────┼────┼────┤
│  1 │  0 │ -1 │        │  0 │  1 │  0 │
├────┼────┼────┤        ├────┼────┼────┤
│  1 │  0 │ -1 │        │  0 │  0 │  0 │
└────┴────┴────┘        └────┴────┴────┘
</code></pre></div></div>

<p>To compute output pixel at (row=1, col=1):</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>From ch0: 10×1 + 10×0 + 0×(-1) + 10×1 + 10×0 + 0×(-1) + 10×1 + 10×0 + 0×(-1) = 30
From ch1: 5×0 + 5×0 + 5×0 + 5×0 + 5×1 + 5×0 + 5×0 + 5×0 + 5×0 = 5
Total: 30 + 5 + bias (0 here) = 35
</code></pre></div></div>
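
<p>You can reproduce that arithmetic without any framework: one 3×3 correlation per input channel, summed across channels (bias assumed 0, as in the example):</p>

```python
# The worked example, checked with a hand-rolled correlation at (row=1, col=1).
ch0 = [[10, 10, 0, 0] for _ in range(4)]  # vertical-edge pattern
ch1 = [[5, 5, 5, 5] for _ in range(4)]    # constant channel
k0 = [[1, 0, -1]] * 3                     # edge-detector kernel for ch0
k1 = [[0, 0, 0], [0, 1, 0], [0, 0, 0]]    # center-tap kernel for ch1

def correlate_at(img, kernel, r, c):
    """Dot product of a 3x3 kernel with the neighborhood centered at (r, c)."""
    return sum(img[r + i - 1][c + j - 1] * kernel[i][j]
               for i in range(3) for j in range(3))

bias = 0
out = correlate_at(ch0, k0, 1, 1) + correlate_at(ch1, k1, 1, 1) + bias
print(out)  # 35
```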

<h3 id="downblock-encoder-step">DownBlock (Encoder Step)</h3>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">def</span> <span class="nf">forward</span><span class="p">(</span><span class="n">self</span><span class="p">,</span> <span class="n">x</span><span class="p">):</span>
    <span class="n">features</span> <span class="o">=</span> <span class="n">self</span><span class="p">.</span><span class="nf">conv</span><span class="p">(</span><span class="n">x</span><span class="p">)</span>     <span class="c1"># Process with ConvBlock
</span>    <span class="n">pooled</span> <span class="o">=</span> <span class="n">self</span><span class="p">.</span><span class="nf">pool</span><span class="p">(</span><span class="n">features</span><span class="p">)</span> <span class="c1"># Shrink by half
</span>    <span class="k">return</span> <span class="n">pooled</span><span class="p">,</span> <span class="n">features</span>      <span class="c1"># Return BOTH!
</span></code></pre></div></div>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Input: (1, 64, 64, 64)
         │
    ConvBlock
         │
     (1, 128, 64, 64) ──→ SAVED as skip connection
         │
    MaxPool2d (shrink)
         │
Output: (1, 128, 32, 32)
</code></pre></div></div>

<p>The key: it returns TWO things — the pooled result for the next layer AND the features for the skip connection.</p>

<h3 id="upblock-decoder-step">UpBlock (Decoder Step)</h3>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">def</span> <span class="nf">forward</span><span class="p">(</span><span class="n">self</span><span class="p">,</span> <span class="n">x</span><span class="p">,</span> <span class="n">skip</span><span class="p">):</span>
    <span class="n">x</span> <span class="o">=</span> <span class="n">self</span><span class="p">.</span><span class="nf">up</span><span class="p">(</span><span class="n">x</span><span class="p">)</span>              <span class="c1"># Grow spatially (ConvTranspose2d)
</span>    <span class="n">x</span> <span class="o">=</span> <span class="n">torch</span><span class="p">.</span><span class="nf">cat</span><span class="p">([</span><span class="n">x</span><span class="p">,</span> <span class="n">skip</span><span class="p">],</span> <span class="n">dim</span><span class="o">=</span><span class="mi">1</span><span class="p">)</span>  <span class="c1"># Concatenate with skip
</span>    <span class="n">x</span> <span class="o">=</span> <span class="n">self</span><span class="p">.</span><span class="nf">conv</span><span class="p">(</span><span class="n">x</span><span class="p">)</span>            <span class="c1"># Process combined features
</span>    <span class="k">return</span> <span class="n">x</span>
</code></pre></div></div>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Input: (1, 512, 8, 8)    Skip: (1, 512, 16, 16)
         │
  ConvTranspose2d (grow 2×)
         │
     (1, 512, 16, 16)
         │
  Concat with skip (channels add)
         │
     (1, 1024, 16, 16)
         │
  ConvBlock (reduce channels)
         │
Output: (1, 256, 16, 16)
</code></pre></div></div>

<h3 id="convtranspose2d-growing-images">ConvTranspose2d: Growing Images</h3>

<p>ConvTranspose2d is the opposite of Conv2d — it makes images bigger:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Conv2d (stride=2):          ConvTranspose2d (stride=2):
  4×4  →  2×2                 2×2  →  4×4
  (shrink)                    (grow)
</code></pre></div></div>

<p>Each input pixel becomes a 2×2 region:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Input (2×2):          Output (4×4):
┌───┬───┐             ┌───┬───┬───┬───┐
│ 1 │ 2 │             │ 1 │ 1 │ 2 │ 2 │
├───┼───┤      →      ├───┼───┼───┼───┤
│ 3 │ 4 │             │ 1 │ 1 │ 2 │ 2 │
└───┴───┘             ├───┼───┼───┼───┤
                      │ 3 │ 3 │ 4 │ 4 │
                      ├───┼───┼───┼───┤
                      │ 3 │ 3 │ 4 │ 4 │
                      └───┴───┴───┴───┘
</code></pre></div></div>
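
<p>This replication behavior is easy to verify by setting a 2×2 transposed-conv kernel to all ones (a hand-picked kernel for illustration; a trained layer would learn its own values):</p>

```python
import torch
import torch.nn as nn

up = nn.ConvTranspose2d(1, 1, kernel_size=2, stride=2, bias=False)
with torch.no_grad():
    up.weight.fill_(1.0)  # all-ones kernel: each pixel stamps a 2x2 block

x = torch.tensor([[[[1., 2.],
                    [3., 4.]]]])  # shape (1, 1, 2, 2)
print(up(x).squeeze())
# tensor([[1., 1., 2., 2.],
#         [1., 1., 2., 2.],
#         [3., 3., 4., 4.],
#         [3., 3., 4., 4.]])
```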

<h2 id="complete-data-flow">Complete Data Flow</h2>

<p>Let’s trace through an entire U-Net forward pass:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>INPUT:    (1,   3, 128, 128)   "RGB image"

ENCODER:
  enc1:   (1,  64,  64,  64)   → skip1 saved
  enc2:   (1, 128,  32,  32)   → skip2 saved
  enc3:   (1, 256,  16,  16)   → skip3 saved
  enc4:   (1, 512,   8,   8)   → skip4 saved

BOTTLENECK:
          (1, 512,   8,   8)   "Compressed understanding"

DECODER:
  dec4:   (1, 256,  16,  16)   ← uses skip4
  dec3:   (1, 128,  32,  32)   ← uses skip3
  dec2:   (1,  64,  64,  64)   ← uses skip2
  dec1:   (1,  64, 128, 128)   ← uses skip1

OUTPUT:   (1,   3, 128, 128)   "Processed image"
</code></pre></div></div>
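
<p>Putting the blocks together, a compact U-Net that reproduces this trace fits in about forty lines (a sketch using this post’s widths, not the exact configuration of the 2015 paper):</p>

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    """Two 3x3 convs with BatchNorm + ReLU; spatial size preserved."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU())

class TinyUNet(nn.Module):
    def __init__(self):
        super().__init__()
        widths = [64, 128, 256, 512]
        self.pool = nn.MaxPool2d(2)
        self.enc = nn.ModuleList(
            conv_block(i, o) for i, o in zip([3] + widths[:-1], widths))
        self.bottleneck = conv_block(512, 512)
        self.up = nn.ModuleList(
            nn.ConvTranspose2d(c, c, 2, stride=2) for c in [512, 256, 128, 64])
        # after concat the channels double, then the conv block reduces them
        self.dec = nn.ModuleList([
            conv_block(1024, 256), conv_block(512, 128),
            conv_block(256, 64), conv_block(128, 64)])
        self.head = nn.Conv2d(64, 3, 1)  # 1x1 conv back to 3 channels

    def forward(self, x):
        skips = []
        for enc in self.enc:
            x = enc(x)
            skips.append(x)   # save pre-pool features for the skip
            x = self.pool(x)  # then shrink by half
        x = self.bottleneck(x)
        for up, dec, skip in zip(self.up, self.dec, reversed(skips)):
            x = dec(torch.cat([up(x), skip], dim=1))  # grow, concat, reduce
        return self.head(x)

y = TinyUNet()(torch.randn(1, 3, 128, 128))
print(y.shape)  # torch.Size([1, 3, 128, 128])
```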

<h2 id="what-can-u-net-do">What Can U-Net Do?</h2>

<p>U-Net is used for any task requiring pixel-level output:</p>

<table>
  <thead>
    <tr>
      <th>Task</th>
      <th>Input</th>
      <th>Output</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td><strong>Medical segmentation</strong></td>
      <td>CT scan</td>
      <td>Tumor mask</td>
    </tr>
    <tr>
      <td><strong>Semantic segmentation</strong></td>
      <td>Photo</td>
      <td>Labels per pixel</td>
    </tr>
    <tr>
      <td><strong>Image denoising</strong></td>
      <td>Noisy image</td>
      <td>Clean image</td>
    </tr>
    <tr>
      <td><strong>Inpainting</strong></td>
      <td>Image with hole</td>
      <td>Filled image</td>
    </tr>
    <tr>
      <td><strong>Super resolution</strong></td>
      <td>Low-res</td>
      <td>High-res</td>
    </tr>
    <tr>
      <td><strong>Style transfer</strong></td>
      <td>Photo</td>
      <td>Stylized image</td>
    </tr>
    <tr>
      <td><strong>Diffusion models</strong></td>
      <td>Noisy latent</td>
      <td>Denoised latent</td>
    </tr>
  </tbody>
</table>

<h2 id="when-not-to-use-decoder">When NOT to Use a Decoder</h2>

<p>Not all tasks need a decoder:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Classification (no decoder):
  Image → [shrink, shrink, shrink] → "This is a cat"

U-Net (full decoder):
  Image → [shrink] → [expand] → Processed image
</code></pre></div></div>

<p>If you only need a label, not a pixel-by-pixel output, skip the decoder.</p>
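
<p>In PyTorch terms, a classifier is just the shrinking half plus a linear head (a minimal sketch; the 10-way output is an arbitrary example):</p>

```python
import torch
import torch.nn as nn

classifier = nn.Sequential(
    nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),    # 128 -> 64
    nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 64 -> 32
    nn.AdaptiveAvgPool2d(1),  # collapse all spatial detail: (B, 128, 1, 1)
    nn.Flatten(),
    nn.Linear(128, 10),       # 10 class scores: no decoder, no skips
)
print(classifier(torch.randn(1, 3, 128, 128)).shape)  # torch.Size([1, 10])
```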

<h2 id="summary">Summary</h2>

<p>U-Net’s power comes from three key ideas:</p>

<ol>
  <li><strong>Encoder</strong>: Compress spatially, extract “what” is in the image</li>
  <li><strong>Decoder</strong>: Expand back to full resolution</li>
  <li><strong>Skip connections</strong>: Pass “where” information directly from encoder to decoder</li>
</ol>

<p>This combination allows U-Net to understand both the big picture (global context from bottleneck) and fine details (local information from skips), producing sharp, accurate outputs.</p>

<p>Whether you’re segmenting medical images, generating art with Stable Diffusion, or building your own image editing model, U-Net’s elegant architecture is likely at the core.</p>

<hr />

<p><em>This post was created while building a text-conditioned image editing model. The examples and diagrams come from hands-on experimentation with PyTorch.</em></p>
]]></content>
  </entry>
  
  <entry>
    <title type="html"><![CDATA[Building an Image Captioning Transformer from Scratch]]></title>
    <link href="https://wangyi.ai/blog/2026/01/30/image-captioning-transformer-from-scratch/"/>
    <updated>2026-01-30T10:00:00-08:00</updated>
    <id>https://wangyi.ai/blog/2026/01/30/image-captioning-transformer-from-scratch</id>
    <content type="html"><![CDATA[<p>After building a text-only transformer for name generation, I wanted to tackle something more ambitious: teaching a model to describe images. This post documents my journey building a minimal image captioning transformer that learns to generate captions like “a dog runs through the snow” from raw pixels.</p>

<p><strong><a href="/demos/image-captioning/">Try the live demo!</a></strong> - The model runs entirely in your browser using ONNX Runtime Web.</p>

<!-- more -->

<h2 id="the-architecture-encoder-decoder-with-cross-attention">The Architecture: Encoder-Decoder with Cross-Attention</h2>

<p>Unlike the decoder-only transformer from my previous experiment, image captioning requires an <strong>encoder-decoder</strong> architecture. The key insight is that we need to process two different modalities (images and text) and connect them through <strong>cross-attention</strong>.</p>

<p><img src="/images/image_caption_architecture.png" alt="Image Captioning Architecture" /></p>

<p>The architecture has two parallel paths:</p>

<p><strong>Image Path (Blue):</strong> The image goes through patch embedding, then encoder self-attention layers. This produces “image features” — a sequence of patch embeddings that understand spatial relationships.</p>

<p><strong>Text Path (Green):</strong> The caption tokens go through token embedding, then decoder layers with both self-attention (causal) and cross-attention to the image features.</p>

<p><strong>The Bridge (Purple):</strong> Cross-attention is where the magic happens. It allows each text token to “look at” all image patches and gather relevant visual information.</p>

<h2 id="from-pixels-to-patches-the-vision-encoder">From Pixels to Patches: The Vision Encoder</h2>

<p>The first challenge is converting an image into something a transformer can process. Transformers work on sequences, but images are 2D grids. The solution: <strong>split the image into patches</strong>.</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>128x128 image → 16x16 grid of 8x8 patches → 256 patch embeddings
</code></pre></div></div>

<p>Each 8x8 patch contains 64 pixels × 3 colors = 192 values. A linear layer projects this to 128 dimensions:</p>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">class</span> <span class="nc">PatchEmbedding</span><span class="p">(</span><span class="n">nn</span><span class="p">.</span><span class="n">Module</span><span class="p">):</span>
    <span class="k">def</span> <span class="nf">__init__</span><span class="p">(</span><span class="n">self</span><span class="p">,</span> <span class="n">image_size</span><span class="p">,</span> <span class="n">patch_size</span><span class="p">,</span> <span class="n">n_embd</span><span class="p">):</span>
        <span class="nf">super</span><span class="p">().</span><span class="nf">__init__</span><span class="p">()</span>
        <span class="n">n_patches</span> <span class="o">=</span> <span class="p">(</span><span class="n">image_size</span> <span class="o">//</span> <span class="n">patch_size</span><span class="p">)</span> <span class="o">**</span> <span class="mi">2</span>  <span class="c1"># 256</span>
        <span class="n">patch_dim</span> <span class="o">=</span> <span class="mi">3</span> <span class="o">*</span> <span class="n">patch_size</span> <span class="o">*</span> <span class="n">patch_size</span>  <span class="c1"># 192
</span>        <span class="n">self</span><span class="p">.</span><span class="n">proj</span> <span class="o">=</span> <span class="n">nn</span><span class="p">.</span><span class="nc">Linear</span><span class="p">(</span><span class="n">patch_dim</span><span class="p">,</span> <span class="n">n_embd</span><span class="p">)</span>  <span class="c1"># 192 → 128
</span>        <span class="n">self</span><span class="p">.</span><span class="n">pos_embd</span> <span class="o">=</span> <span class="n">nn</span><span class="p">.</span><span class="nc">Parameter</span><span class="p">(</span><span class="n">torch</span><span class="p">.</span><span class="nf">randn</span><span class="p">(</span><span class="mi">1</span><span class="p">,</span> <span class="n">n_patches</span><span class="p">,</span> <span class="n">n_embd</span><span class="p">))</span>

    <span class="k">def</span> <span class="nf">forward</span><span class="p">(</span><span class="n">self</span><span class="p">,</span> <span class="n">x</span><span class="p">):</span>
        <span class="c1"># Split image into patches, flatten, project
</span>        <span class="n">patches</span> <span class="o">=</span> <span class="nf">extract_patches</span><span class="p">(</span><span class="n">x</span><span class="p">)</span>  <span class="c1"># (B, 256, 192)
</span>        <span class="k">return</span> <span class="n">self</span><span class="p">.</span><span class="nf">proj</span><span class="p">(</span><span class="n">patches</span><span class="p">)</span> <span class="o">+</span> <span class="n">self</span><span class="p">.</span><span class="n">pos_embd</span>  <span class="c1"># (B, 256, 128)
</span></code></pre></div></div>

<p>Now we have 256 “patch tokens” that can go through self-attention, just like text tokens. The encoder self-attention lets patches learn about each other — a patch showing a dog’s head can attend to patches showing its body and legs, building a coherent understanding of “dog”.</p>
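
<p>The <code class="language-plaintext highlighter-rouge">extract_patches</code> step assumed in the snippet above can be implemented with two <code class="language-plaintext highlighter-rouge">unfold</code> calls and a reshape (a hypothetical helper matching the shapes in this post, not the exact code from my repo):</p>

```python
import torch

def extract_patches(x, patch_size=8):
    """Split (B, C, H, W) into non-overlapping patches: (B, n_patches, C*p*p)."""
    B, C, H, W = x.shape
    x = x.unfold(2, patch_size, patch_size)  # cut rows: (B, C, 16, W, 8)
    x = x.unfold(3, patch_size, patch_size)  # cut cols: (B, C, 16, 16, 8, 8)
    x = x.permute(0, 2, 3, 1, 4, 5)          # group each patch's pixels together
    return x.reshape(B, -1, C * patch_size * patch_size)

patches = extract_patches(torch.randn(2, 3, 128, 128))
print(patches.shape)  # torch.Size([2, 256, 192])
```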

<h2 id="cross-attention-the-bridge-between-vision-and-language">Cross-Attention: The Bridge Between Vision and Language</h2>

<p>This is the key difference from text-only transformers. In self-attention, Q, K, and V all come from the same source. In cross-attention:</p>

<ul>
  <li><strong>Q (Query)</strong> comes from the text decoder: “What visual information do I need?”</li>
  <li><strong>K, V (Key, Value)</strong> come from the image encoder: “Here’s what each patch contains”</li>
</ul>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">class</span> <span class="nc">CrossAttention</span><span class="p">:</span>
    <span class="k">def</span> <span class="nf">forward</span><span class="p">(</span><span class="n">self</span><span class="p">,</span> <span class="n">text_embeddings</span><span class="p">,</span> <span class="n">image_features</span><span class="p">):</span>
        <span class="n">Q</span> <span class="o">=</span> <span class="n">text_embeddings</span> <span class="o">@</span> <span class="n">W_q</span>   <span class="c1"># What am I looking for?
</span>        <span class="n">K</span> <span class="o">=</span> <span class="n">image_features</span> <span class="o">@</span> <span class="n">W_k</span>    <span class="c1"># What does each patch contain?
</span>        <span class="n">V</span> <span class="o">=</span> <span class="n">image_features</span> <span class="o">@</span> <span class="n">W_v</span>    <span class="c1"># What info to retrieve?
</span>
        <span class="n">scores</span> <span class="o">=</span> <span class="n">Q</span> <span class="o">@</span> <span class="n">K</span><span class="p">.</span><span class="n">T</span> <span class="o">/</span> <span class="nf">sqrt</span><span class="p">(</span><span class="n">d_k</span><span class="p">)</span>  <span class="c1"># (text_len, num_patches), scaled by key dim
</span>        <span class="n">weights</span> <span class="o">=</span> <span class="nf">softmax</span><span class="p">(</span><span class="n">scores</span><span class="p">)</span>
        <span class="k">return</span> <span class="n">weights</span> <span class="o">@</span> <span class="n">V</span>  <span class="c1"># Weighted sum of patch info
</span></code></pre></div></div>

<p>When generating the word “running”, the model learns to attend heavily to patches showing legs in motion. When generating “snow”, it attends to the white ground patches.</p>
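
<p>A runnable single-head version of that pseudocode looks like this (a minimal sketch; the real decoder uses multiple heads, dropout, and a residual connection):</p>

```python
import torch
import torch.nn as nn

class CrossAttention(nn.Module):
    def __init__(self, n_embd=128):
        super().__init__()
        self.q = nn.Linear(n_embd, n_embd)  # queries come from the text
        self.k = nn.Linear(n_embd, n_embd)  # keys come from the image
        self.v = nn.Linear(n_embd, n_embd)  # values come from the image
        self.scale = n_embd ** -0.5

    def forward(self, text, image):
        Q, K, V = self.q(text), self.k(image), self.v(image)
        weights = torch.softmax(Q @ K.transpose(-2, -1) * self.scale, dim=-1)
        return weights @ V  # each text token gathers a mix of patch info

text = torch.randn(1, 10, 128)    # (B, text_len, n_embd)
image = torch.randn(1, 256, 128)  # (B, num_patches, n_embd)
print(CrossAttention()(text, image).shape)  # torch.Size([1, 10, 128])
```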

<h2 id="training-on-flickr8k">Training on Flickr8k</h2>

<p>I used the Flickr8k dataset: 8,000 images with 5 human-written captions each. A key insight was using <strong>random caption sampling</strong> — each epoch, randomly select one of the 5 captions per image. This acts as data augmentation and dramatically reduces overfitting.</p>

<table>
  <thead>
    <tr>
      <th>Configuration</th>
      <th>Train Loss</th>
      <th>Val Loss</th>
      <th>Notes</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td>64x64, fixed caption</td>
      <td>0.78</td>
      <td>1.10</td>
      <td>Baseline</td>
    </tr>
    <tr>
      <td>128x128, fixed caption</td>
      <td>0.58</td>
      <td>1.38</td>
      <td>More detail, more overfitting</td>
    </tr>
    <tr>
      <td>128x128, random caption</td>
      <td>0.90</td>
      <td>0.99</td>
      <td>Much better generalization!</td>
    </tr>
  </tbody>
</table>

<p>The random caption sampling closed the train-val gap from 0.80 to just 0.09.</p>

<h2 id="results-what-the-model-learned">Results: What the Model Learned</h2>

<p>After 30 epochs of training (~17 minutes on M4 Mac), the model generates reasonable captions:</p>

<p><strong>Success case:</strong></p>

<p><img src="/images/flickr8k_dog_running.jpg" alt="Dog running on grass" /></p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Generated: "a black dog is running through the grass ."
Actual:    "A black dog running across green grass ."
</code></pre></div></div>

<p><strong>Failure case:</strong></p>

<p><img src="/images/flickr8k_ski_lodge.jpg" alt="Ski lodge scene" /></p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Generated: "a man in a blue shirt is standing in the stree"
Actual:    "A crowd of people are enjoying a meal with a view of a mountaintop ."
</code></pre></div></div>

<p>The model handles simple scenes well (dogs, people, basic actions) but struggles with complex scenes (crowds, multiple objects, subtle context).</p>

<h2 id="model-statistics">Model Statistics</h2>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Total parameters: ~980,000 (about 1M)

Breakdown:
- Patch embedding:     32,896 (3%)
- Encoder blocks (2):  395,776 (40%)
- Token embedding:     8,960 (1%)
- Position embedding:  6,144 (1%)
- Decoder blocks (2):  527,616 (54%)
- Output layer:        9,286 (1%)
</code></pre></div></div>

<p>The decoder is larger than the encoder because each decoder block has both self-attention AND cross-attention.</p>

<h2 id="key-learnings">Key Learnings</h2>

<h3 id="1-patches-are-the-tokenizer-for-images">1. Patches are the “tokenizer” for images</h3>
<p>Just as we split text into tokens, we split images into patches. This converts the 2D spatial structure into a sequence that transformers can process. The same weight matrix processes every patch, learning a universal “patch reader”.</p>

<h3 id="2-cross-attention-is-the-bridge">2. Cross-attention is the bridge</h3>
<p>The key architectural difference from text-only transformers. It lets the text generation process “see” the image at every step, attending to relevant patches for each word being generated.</p>

<h3 id="3-data-augmentation-matters-enormously">3. Data augmentation matters enormously</h3>
<p>Using all 5 captions with random sampling was more impactful than doubling the image resolution. The model learns semantic concepts rather than memorizing specific strings.</p>

<h3 id="4-resolution-limits-understanding">4. Resolution limits understanding</h3>
<p>At 128x128, a tricycle looks like a blob. The model can distinguish dogs from people, but struggles with fine details. Real vision models use 224x224 or higher.</p>

<h3 id="5-this-is-still-a-toy-model">5. This is still a toy model</h3>
<p>Production image captioning models use:</p>
<ul>
  <li>Pretrained vision encoders (CLIP, ViT trained on millions of images)</li>
  <li>Word-level tokenization (shorter sequences)</li>
  <li>Much larger datasets (COCO has 330k images)</li>
  <li>Billions of parameters</li>
</ul>

<h2 id="improvement-using-pretrained-clip-encoder">Improvement: Using Pretrained CLIP Encoder</h2>

<p>After training the from-scratch model, I wanted to see how much a pretrained vision encoder could help. I created a second version that uses <strong>CLIP ViT-B/32</strong> as a frozen image encoder, training only the decoder and a projection layer.</p>

<h3 id="architecture-changes">Architecture Changes</h3>

<p>Instead of learning patch embeddings from scratch:</p>
<ul>
  <li>CLIP’s pretrained ViT processes the image (224x224 input)</li>
  <li>50 patch embeddings (768-dim) are projected to the decoder dimension</li>
  <li>Only the decoder (~3.8M params) is trained; CLIP (~87M params) is frozen</li>
</ul>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">class</span> <span class="nc">CLIPCaptioningModel</span><span class="p">(</span><span class="n">nn</span><span class="p">.</span><span class="n">Module</span><span class="p">):</span>
    <span class="k">def</span> <span class="nf">encode_image</span><span class="p">(</span><span class="n">self</span><span class="p">,</span> <span class="n">img</span><span class="p">):</span>
        <span class="c1"># Use CLIP's visual transformer (frozen)
</span>        <span class="k">with</span> <span class="n">torch</span><span class="p">.</span><span class="nf">no_grad</span><span class="p">():</span>
            <span class="n">x</span> <span class="o">=</span> <span class="n">clip_model</span><span class="p">.</span><span class="nf">visual</span><span class="p">(</span><span class="n">img</span><span class="p">)</span>  <span class="c1"># (B, 50, 768)
</span>        <span class="k">return</span> <span class="n">self</span><span class="p">.</span><span class="nf">visual_proj</span><span class="p">(</span><span class="n">x</span><span class="p">)</span>  <span class="c1"># Project to decoder dim
</span></code></pre></div></div>

<h3 id="results-comparison">Results Comparison</h3>

<table>
  <thead>
    <tr>
      <th>Metric</th>
      <th>From-Scratch</th>
      <th>CLIP-based</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td>Val Loss</td>
      <td>1.29</td>
      <td><strong>0.86</strong></td>
    </tr>
    <tr>
      <td>Train Loss</td>
      <td>1.23</td>
      <td>0.75</td>
    </tr>
    <tr>
      <td>Epochs</td>
      <td>30</td>
      <td>20</td>
    </tr>
    <tr>
      <td>Training Time</td>
      <td>~17 min</td>
      <td>~17 min</td>
    </tr>
    <tr>
      <td>Model Size</td>
      <td>4 MB</td>
      <td>363 MB</td>
    </tr>
  </tbody>
</table>

<p>The CLIP-based model achieves <strong>33% lower validation loss</strong> with fewer epochs!</p>

<h3 id="sample-captions">Sample Captions</h3>

<p>For the same test image (two dogs in snow):</p>

<table>
  <thead>
    <tr>
      <th>Model</th>
      <th>Caption</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td>From-scratch</td>
      <td>“a black dog and a white dog are in the snow .”</td>
    </tr>
    <tr>
      <td>CLIP-based</td>
      <td>“two dogs playing in the snow .”</td>
    </tr>
    <tr>
      <td>Ground truth</td>
      <td>“a black dog is running after a white dog in the snow .”</td>
    </tr>
  </tbody>
</table>

<p>The CLIP-based model produces more natural, concise captions. It benefits from CLIP having been trained on 400 million image-text pairs — it already understands visual concepts like “dogs” and “playing” without needing to learn them from our small 8k image dataset.</p>

<h3 id="testing-on-complex-scenes">Testing on Complex Scenes</h3>

<p>I tested both models on the validation set, focusing on complex scenes that the from-scratch model struggled with:</p>

<table>
  <thead>
    <tr>
      <th>Scene</th>
      <th>From-Scratch</th>
      <th>CLIP-based</th>
      <th>Ground Truth</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td>Ice skating rink</td>
      <td>“a man in a blue shirt…”</td>
      <td>“a group of people standing in the snow .”</td>
      <td>“A group of people are ice skating in a big city .”</td>
    </tr>
    <tr>
      <td>Rock climbing</td>
      <td>“a woman is standing…”</td>
      <td>“a woman in a red shirt is climbing a rock .”</td>
      <td>“A kid rock climbing against the backdrop of a green valley”</td>
    </tr>
    <tr>
      <td>People at boats</td>
      <td>“a man is…”</td>
      <td>“a group of people standing in a rowd of a boat”</td>
      <td>“A group of people waiting to ride boats .”</td>
    </tr>
    <tr>
      <td>Mountain hikers</td>
      <td>“a man in…”</td>
      <td>“two people stand on the side of a mountain .”</td>
      <td>“Three people facing the mountains .”</td>
    </tr>
  </tbody>
</table>

<p><strong>Key observations:</strong></p>

<ol>
  <li><strong>Better at groups/crowds</strong> — CLIP recognizes “group of people” much better than the from-scratch model which defaults to “a man”</li>
  <li><strong>Better semantic understanding</strong> — Recognizes concepts like “rock climbing”, “mountain”, “boat” that the small model misses entirely</li>
  <li><strong>Still struggles with fine details</strong> — Exact counts (two vs three people), specific activities (ice skating vs standing)</li>
  <li><strong>More robust to complex scenes</strong> — Doesn’t collapse to generic “man in blue shirt” for difficult images</li>
</ol>

<p>The pretrained visual features give CLIP a huge advantage on scenes requiring real-world knowledge.</p>

<h3 id="tradeoff-accuracy-vs-size">Tradeoff: Accuracy vs Size</h3>

<p>The improved model is 363MB (vs 4MB), making it impractical for browser deployment. This is the classic accuracy-size tradeoff:</p>
<ul>
  <li><strong>From-scratch model</strong>: Smaller, deployable, but less accurate</li>
  <li><strong>CLIP-based model</strong>: More accurate, but requires a large pretrained encoder</li>
</ul>

<p>For production, you’d typically use the large model on a server, or apply techniques like knowledge distillation to compress it.</p>
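<p>Distillation, for example, trains the small model to match the large model’s softened output distribution rather than just the hard labels. A minimal sketch of the standard distillation loss (the temperature value here is an arbitrary choice for illustration):</p>

```python
import torch
import torch.nn.functional as F

def distill_loss(student_logits, teacher_logits, T=2.0):
    # Soften both distributions with temperature T, then match them with
    # KL divergence; the T*T factor keeps gradients on a stable scale.
    s = F.log_softmax(student_logits / T, dim=-1)
    t = F.softmax(teacher_logits / T, dim=-1)
    return F.kl_div(s, t, reduction="batchmean") * (T * T)
```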

<h2 id="improvement-word-level-tokenization">Improvement: Word-Level Tokenization</h2>

<p>The character-level model processes “a black dog” as 11 tokens (including spaces). Word-level tokenization reduces this to just 3 tokens, making sequences shorter and potentially easier to learn.</p>
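<p>The difference is visible with plain string operations:</p>

```python
caption = "a black dog"

char_tokens = list(caption)      # character-level: each char (incl. spaces) is a token
word_tokens = caption.split()    # word-level: split on whitespace

print(len(char_tokens), len(word_tokens))  # 11 3
```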

<h3 id="parameter-count-changes">Parameter Count Changes</h3>

<p>Switching from character-level to word-level tokenization dramatically changes where the parameters live:</p>

<table>
  <thead>
    <tr>
      <th>Component</th>
      <th>Character-Level</th>
      <th>Word-Level</th>
      <th>Change</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td>Token embedding</td>
      <td>8,960 (70 × 128)</td>
      <td>569,984 (4,453 × 128)</td>
      <td>+561K</td>
    </tr>
    <tr>
      <td>Position embedding</td>
      <td>6,144 (48 × 128)</td>
      <td>2,560 (20 × 128)</td>
      <td>-3.6K</td>
    </tr>
    <tr>
      <td>Output layer</td>
      <td>8,960</td>
      <td>569,984</td>
      <td>+561K</td>
    </tr>
    <tr>
      <td><strong>Total model</strong></td>
      <td>~980K</td>
      <td>~2.1M</td>
      <td><strong>+1.1M (2.2×)</strong></td>
    </tr>
  </tbody>
</table>

<p>The vocabulary explodes from ~70 characters to ~4500 words, but sequences shrink from 48 characters to 20 words. The net effect: <strong>2.2× more parameters</strong>, almost entirely in the embedding layers.</p>

<h3 id="results-comparison-1">Results Comparison</h3>

<table>
  <thead>
    <tr>
      <th>Metric</th>
      <th>Character-Level</th>
      <th>Word-Level</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td>Val Loss</td>
      <td>0.99</td>
      <td><strong>2.98</strong></td>
    </tr>
    <tr>
      <td>Train Loss</td>
      <td>0.90</td>
      <td>2.42</td>
    </tr>
    <tr>
      <td>Vocab Size</td>
      <td>70</td>
      <td>4,453</td>
    </tr>
    <tr>
      <td>Max Seq Length</td>
      <td>48</td>
      <td>20</td>
    </tr>
    <tr>
      <td>Model Size</td>
      <td>4 MB</td>
      <td>8.2 MB</td>
    </tr>
  </tbody>
</table>

<p>Wait — the word-level loss is <strong>higher</strong>? This is actually expected:</p>

<ol>
  <li><strong>Loss is per-token</strong>: Character-level predicts from 70 options; word-level predicts from 4,453 options</li>
  <li><strong>Different scales</strong>: A word-level loss of 2.98 means perplexity ~20 (choosing from 4453 words), while character loss 0.99 means perplexity ~2.7 (choosing from 70 chars)</li>
  <li><strong>The captions are similar quality</strong> despite the different loss values</li>
</ol>
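<p>Perplexity (e raised to the loss) puts the two scales on common ground; it is roughly the number of equally likely options the model is choosing between at each step:</p>

```python
import math

def perplexity(loss):
    # Cross-entropy here is in nats, so perplexity is e^loss
    return math.exp(loss)

print(round(perplexity(2.98), 1))  # ~19.7 effective choices out of 4,453 words
print(round(perplexity(0.99), 1))  # ~2.7 effective choices out of 70 characters
```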

<h3 id="sample-caption">Sample Caption</h3>

<p>For the same test image (two dogs in snow):</p>

<table>
  <thead>
    <tr>
      <th>Model</th>
      <th>Caption</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td>Character-level</td>
      <td>“a black dog and a white dog are in the snow .”</td>
    </tr>
    <tr>
      <td>Word-level</td>
      <td>“a dog is running through the snow .”</td>
    </tr>
    <tr>
      <td>Ground truth</td>
      <td>“a black dog is running after a white dog in the snow .”</td>
    </tr>
  </tbody>
</table>

<p>The word-level model produces fluent captions but with a smaller effective vocabulary (it saw each word fewer times during training than character-level saw each character).</p>

<h3 id="key-insight-vocabulary-size-vs-training-data">Key Insight: Vocabulary Size vs Training Data</h3>

<p>Word-level tokenization works better when you have <strong>lots of training data</strong>. With only 8k images:</p>
<ul>
  <li>Character-level sees each character thousands of times → learns robust patterns</li>
  <li>Word-level sees many words only a few times → harder to learn good embeddings</li>
</ul>

<p>This is why production models use:</p>
<ul>
  <li><strong>Subword tokenization</strong> (BPE, WordPiece): Best of both worlds</li>
  <li><strong>Much larger datasets</strong>: COCO (330k), Conceptual Captions (3M+)</li>
  <li><strong>Pretrained word embeddings</strong>: GloVe, Word2Vec, etc.</li>
</ul>

<h2 id="improvement-clip--glove-pretrained-embeddings">Improvement: CLIP + GloVe Pretrained Embeddings</h2>

<p>Since the word-level model struggled with limited training data, I tried combining the best of both worlds: <strong>CLIP’s pretrained vision encoder</strong> with <strong>GloVe pretrained word embeddings</strong>.</p>

<h3 id="the-idea">The Idea</h3>

<p>Instead of learning word embeddings from scratch with only 8k images, why not use GloVe embeddings trained on 6 billion words? This gives the model a head start on understanding word relationships.</p>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">class</span> <span class="nc">CLIPGloVeCaptioningModel</span><span class="p">(</span><span class="n">nn</span><span class="p">.</span><span class="n">Module</span><span class="p">):</span>
    <span class="k">def</span> <span class="nf">__init__</span><span class="p">(</span><span class="n">self</span><span class="p">,</span> <span class="n">vocab_size</span><span class="p">,</span> <span class="n">clip_model</span><span class="p">,</span> <span class="n">glove_embeddings</span><span class="p">,</span> <span class="p">...):</span>
        <span class="c1"># Use CLIP for vision (frozen)
</span>        <span class="n">self</span><span class="p">.</span><span class="n">clip_model</span> <span class="o">=</span> <span class="n">clip_model</span>

        <span class="c1"># Use GloVe for word embeddings (fine-tuned)
</span>        <span class="n">self</span><span class="p">.</span><span class="n">token_embed</span> <span class="o">=</span> <span class="n">nn</span><span class="p">.</span><span class="nc">Embedding</span><span class="p">(</span><span class="n">vocab_size</span><span class="p">,</span> <span class="n">glove_dim</span><span class="p">)</span>
        <span class="n">self</span><span class="p">.</span><span class="n">token_embed</span><span class="p">.</span><span class="n">weight</span><span class="p">.</span><span class="n">data</span><span class="p">.</span><span class="nf">copy_</span><span class="p">(</span><span class="n">glove_embeddings</span><span class="p">)</span>

        <span class="c1"># Project GloVe dim (100) to decoder dim (256)
</span>        <span class="n">self</span><span class="p">.</span><span class="n">glove_proj</span> <span class="o">=</span> <span class="n">nn</span><span class="p">.</span><span class="nc">Linear</span><span class="p">(</span><span class="n">glove_dim</span><span class="p">,</span> <span class="n">n_embd</span><span class="p">)</span>
</code></pre></div></div>

<h3 id="glove-coverage">GloVe Coverage</h3>

<p>Using GloVe 6B 100d (100-dimensional embeddings trained on 6 billion tokens):</p>
<ul>
  <li><strong>4441 out of 4517 words</strong> (98.3%) found in GloVe</li>
  <li>Only 76 words missing (mostly rare or domain-specific terms)</li>
  <li>Missing words initialized with small random values</li>
</ul>
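<p>The loading step is simple; here is a simplified sketch of it (the helper name is mine, and it expects lines in the standard GloVe text format, one word followed by 100 floats per line). The hit count is what produced the coverage figure above.</p>

```python
import torch

def build_embedding_matrix(vocab, glove_lines, dim=100):
    # Parse "word v1 v2 ... v100" lines into a lookup table
    glove = {}
    for line in glove_lines:
        parts = line.rstrip().split(" ")
        glove[parts[0]] = torch.tensor([float(v) for v in parts[1:]])

    # Small random init, so words missing from GloVe still get usable rows
    matrix = torch.randn(len(vocab), dim) * 0.01
    hits = 0
    for i, word in enumerate(vocab):
        if word in glove:
            matrix[i] = glove[word]
            hits += 1
    return matrix, hits
```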

<h3 id="results">Results</h3>

<table>
  <thead>
    <tr>
      <th>Metric</th>
      <th>Word-Level (random)</th>
      <th>CLIP + GloVe</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td>Val Loss</td>
      <td>2.98</td>
      <td><strong>2.55</strong></td>
    </tr>
    <tr>
      <td>Train Loss</td>
      <td>2.42</td>
      <td>1.78</td>
    </tr>
    <tr>
      <td>Epochs</td>
      <td>30</td>
      <td>30</td>
    </tr>
    <tr>
      <td>GloVe Coverage</td>
      <td>N/A</td>
      <td>98.3%</td>
    </tr>
  </tbody>
</table>

<p>The GloVe embeddings give a <strong>14% improvement</strong> in validation loss!</p>

<h3 id="sample-caption-1">Sample Caption</h3>

<p>For the same test image (two dogs in snow):</p>

<table>
  <thead>
    <tr>
      <th>Model</th>
      <th>Caption</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td>Word-level (random init)</td>
      <td>“a dog is running through the snow .”</td>
    </tr>
    <tr>
      <td><strong>CLIP + GloVe</strong></td>
      <td>“two dogs are playing in the snow .”</td>
    </tr>
    <tr>
      <td>Ground truth</td>
      <td>“a black dog is running after a white dog in the snow .”</td>
    </tr>
  </tbody>
</table>

<p>The GloVe model correctly identifies “two dogs” rather than “a dog”, suggesting the pretrained embeddings help with understanding quantities and relationships.</p>

<h3 id="key-insight-transfer-learning-stacks">Key Insight: Transfer Learning Stacks</h3>

<p>This experiment shows that <strong>transfer learning compounds</strong>:</p>
<ol>
  <li>CLIP brings pretrained visual understanding (400M image-text pairs)</li>
  <li>GloVe brings pretrained word relationships (6B tokens)</li>
  <li>Only the decoder and projection layers need to learn task-specific mappings</li>
</ol>

<p>Even with just 8k training images, combining two pretrained components achieves significantly better results than training from scratch.</p>

<h2 id="whats-next">What’s Next</h2>

<p>Remaining improvements to explore:</p>

<ol>
  <li><del><strong>Pretrained vision encoder</strong>: Use CLIP or ViT instead of learning from scratch</del> ✅ Done!</li>
  <li><del><strong>Word-level tokenization</strong>: “a black dog” as 3 tokens instead of 11 characters</del> ✅ Done!</li>
  <li><del><strong>Pretrained word embeddings</strong>: Use GloVe for better word representations</del> ✅ Done!</li>
  <li><strong>Subword tokenization</strong>: Use BPE for better vocab coverage</li>
  <li><strong>More data</strong>: COCO dataset (330k images) instead of Flickr8k (8k)</li>
  <li><strong>Knowledge distillation</strong>: Train a small model to mimic the CLIP-based one</li>
</ol>

<p>But even the minimal from-scratch implementation demonstrates the core concepts: patch embeddings, encoder-decoder architecture, and cross-attention as the bridge between vision and language.</p>

<h2 id="code">Code</h2>

<p>The complete training script is available in my <a href="https://github.com/Jeswang/learn-llm">learn-llm</a> repository as <code class="language-plaintext highlighter-rouge">train-image-caption.py</code>.</p>
]]></content>
  </entry>
  
  <entry>
    <title type="html"><![CDATA[Building a Language Transformer Step by Step]]></title>
    <link href="https://wangyi.ai/blog/2026/01/28/building-transformer-step-by-step/"/>
    <updated>2026-01-28T10:00:00-08:00</updated>
    <id>https://wangyi.ai/blog/2026/01/28/building-transformer-step-by-step</id>
    <content type="html"><![CDATA[<p>After months of reading about transformers and LLMs, I finally decided to build one from scratch. Not by copy-pasting code, but by incrementally adding each architectural component and measuring its impact. The result was a character-level name generator trained on 32,033 names, and the journey taught me more than any paper or tutorial could.</p>

<!-- more -->

<h2 id="preparation-standing-on-the-shoulders-of-giants">Preparation: Standing on the Shoulders of Giants</h2>

<p>Before diving into code, I spent time building intuition through two excellent resources:</p>

<p><strong>“Build a Large Language Model (From Scratch)” by Sebastian Raschka</strong> was my theoretical foundation. The book walks through every component of a transformer with clear explanations and diagrams. Reading it gave me a mental model of how attention, embeddings, and layer normalization fit together — knowledge that proved essential when debugging my own implementation.</p>

<p><strong>Andrej Karpathy’s YouTube series</strong> (<a href="https://www.youtube.com/playlist?list=PLAqhIrjkxbuWI23v9cThsA9GvCAUhRvKZ">Neural Networks: Zero to Hero</a>) was equally valuable. His “Let’s build GPT” video demystified the architecture by building it live on screen. Watching someone think through the design decisions — why we use residual connections, how attention matrices work, what LayerNorm actually does — made the concepts stick in a way that reading alone couldn’t. His <a href="https://github.com/karpathy/makemore">makemore</a> repository became the dataset and benchmark for my experiments.</p>

<p>With this foundation, I was ready to build.</p>

<h2 id="the-experiment">The Experiment</h2>

<p>I incrementally built a character-level transformer for name generation. Each step adds one architectural improvement. All models were trained with batch size 32, AdamW optimizer, and per-name padding with masked loss.</p>

<h2 id="results---architecture-comparison-5000-steps">Results - Architecture Comparison (5,000 steps)</h2>

<table>
  <thead>
    <tr>
      <th>Config</th>
      <th>N_EMBD</th>
      <th>Heads</th>
      <th>Layers</th>
      <th>Params</th>
      <th>Train</th>
      <th>Test</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td>baseline</td>
      <td>32</td>
      <td>1</td>
      <td>1</td>
      <td>2,908</td>
      <td>2.35</td>
      <td>2.35</td>
    </tr>
    <tr>
      <td>double embd</td>
      <td>64</td>
      <td>1</td>
      <td>1</td>
      <td>8,860</td>
      <td>2.34</td>
      <td>2.34</td>
    </tr>
    <tr>
      <td>2 heads</td>
      <td>32</td>
      <td>2</td>
      <td>1</td>
      <td>5,948</td>
      <td>2.25</td>
      <td>2.23</td>
    </tr>
    <tr>
      <td>4 layers</td>
      <td>32</td>
      <td>2</td>
      <td>4</td>
      <td>18,332</td>
      <td>2.00</td>
      <td>2.04</td>
    </tr>
    <tr>
      <td>+ MLP</td>
      <td>32</td>
      <td>2</td>
      <td>4</td>
      <td>51,740</td>
      <td>1.97</td>
      <td>2.02</td>
    </tr>
    <tr>
      <td>+ LayerNorm</td>
      <td>32</td>
      <td>2</td>
      <td>4</td>
      <td>52,252</td>
      <td>1.96</td>
      <td>1.99</td>
    </tr>
    <tr>
      <td>+ RoPE</td>
      <td>32</td>
      <td>2</td>
      <td>4</td>
      <td>52,252</td>
      <td>1.94</td>
      <td>1.98</td>
    </tr>
    <tr>
      <td>+ GELU</td>
      <td>32</td>
      <td>2</td>
      <td>4</td>
      <td>52,252</td>
      <td>1.94</td>
      <td>1.94</td>
    </tr>
  </tbody>
</table>

<h2 id="results---scaling-up">Results - Scaling Up</h2>

<table>
  <thead>
    <tr>
      <th>Config</th>
      <th>Steps</th>
      <th>Train</th>
      <th>Test</th>
      <th>Notes</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td>N_EMBD=32, 2 heads</td>
      <td>5,000</td>
      <td>1.94</td>
      <td>1.94</td>
      <td>Baseline final model</td>
    </tr>
    <tr>
      <td>N_EMBD=64, 4 heads</td>
      <td>5,000</td>
      <td>1.84</td>
      <td>1.92</td>
      <td>Matches makemore architecture</td>
    </tr>
    <tr>
      <td>N_EMBD=64, 4 heads + dropout</td>
      <td>5,000</td>
      <td>1.95</td>
      <td>2.00</td>
      <td>Dropout slows convergence</td>
    </tr>
    <tr>
      <td>N_EMBD=64, 4 heads + dropout</td>
      <td>20,000</td>
      <td>1.75</td>
      <td>1.85</td>
      <td>Longer training helps</td>
    </tr>
    <tr>
      <td>+ LR schedule, weight decay, grad clip</td>
      <td>20,000</td>
      <td>1.72</td>
      <td>1.86</td>
      <td>Training improvements</td>
    </tr>
  </tbody>
</table>

<p>Makemore’s default transformer achieves ~1.92 test loss with N_EMBD=64, 4 heads, 4 layers.</p>

<h2 id="generated-names">Generated Names</h2>

<p>Sample outputs from the final model (N_EMBD=64, 4 heads, 20k steps with all training improvements):</p>

<blockquote>
  <p>kaelynn, aileigh, elyce, yadi, ovani, derella, nyailee, ranyah, niaa, sett</p>
</blockquote>

<h2 id="key-findings">Key Findings</h2>

<h3 id="depth-beats-width">Depth beats width</h3>

<p>Doubling embedding size from 32 to 64 (3x params) gave almost no improvement (2.35 -&gt; 2.34). Adding a second attention head with fewer total params (5,948 vs 8,860) dropped loss by 0.12. Stacking 4 layers was the single biggest improvement, dropping test loss from 2.23 to 2.04. The model benefits far more from multiple layers of processing than from wider representations at a single layer.</p>

<h3 id="data-handling-matters-most">Data handling matters most</h3>

<p>Before adding per-name padding, our best model achieved 2.36 test loss. After switching to per-name padding with masked loss (same architecture), it dropped to 1.94. This was a larger improvement than all architectural changes combined. The reason: without padding, the model wasted capacity trying to predict across name boundaries — an impossible task that added noise to every gradient update.</p>
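<p>The masking itself is a one-liner once targets are padded: cross-entropy’s <code class="language-plaintext highlighter-rouge">ignore_index</code> drops PAD positions from the average entirely (the PAD id and sizes below are illustrative):</p>

```python
import torch
import torch.nn.functional as F

PAD = 0  # hypothetical padding token id
logits = torch.randn(2, 5, 28)                # (batch, seq, vocab)
targets = torch.tensor([[3, 7, 1, PAD, PAD],  # short name, padded out
                        [4, 2, 9, 5, 1]])     # full-length name

# PAD positions contribute nothing, so the model is never penalized
# for "predicting" past the end of a name
loss = F.cross_entropy(logits.reshape(-1, 28), targets.reshape(-1),
                       ignore_index=PAD)
```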

<h3 id="mlp-adds-capacity-but-needs-regularization">MLP adds capacity but needs regularization</h3>

<p>Adding the feed-forward network (MLP) to each layer tripled the parameter count (18k -&gt; 52k) but only modestly improved results. It also widened the train-test gap (2.00/2.04 -&gt; 1.97/2.02), suggesting mild overfitting. The MLP lets the model transform representations nonlinearly after attention gathers information, but at this small scale the effect is limited.</p>

<h3 id="layernorm-and-rope-help-incrementally">LayerNorm and RoPE help incrementally</h3>

<p>LayerNorm stabilized training and closed the train-test gap slightly. RoPE (Rotary Position Embeddings) gave the model awareness of character positions without adding any parameters. Neither was dramatic at this scale, but both are essential for larger models — LayerNorm enables training deep networks, and RoPE enables generalization to longer sequences.</p>
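<p>RoPE is worth seeing in code: each pair of channels is rotated by an angle proportional to the token’s position, so relative offsets become angle differences, with zero added parameters. This is a generic sketch, not my exact implementation:</p>

```python
import torch

def rope(x):
    # x: (batch, seq, dim), dim even; rotate channel pairs by
    # position-dependent angles using the standard 10000-base frequencies
    B, T, D = x.shape
    pos = torch.arange(T, dtype=torch.float32)[:, None]                   # (T, 1)
    freqs = 10000.0 ** (-torch.arange(0, D, 2, dtype=torch.float32) / D)  # (D/2,)
    ang = pos * freqs                                                     # (T, D/2)
    cos, sin = torch.cos(ang), torch.sin(ang)
    x1, x2 = x[..., 0::2], x[..., 1::2]
    out = torch.empty_like(x)
    out[..., 0::2] = x1 * cos - x2 * sin
    out[..., 1::2] = x1 * sin + x2 * cos
    return out
```

<p>Because it only rotates, it preserves vector norms and leaves position 0 untouched.</p>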

<h3 id="gelu-vs-relu-is-negligible-at-small-scale">GELU vs ReLU is negligible at small scale</h3>

<p>Switching from ReLU to GELU activation in the MLP had no measurable effect. The smoother gradient flow matters more when networks are deeper and wider.</p>

<h3 id="scaling-up-helps-significantly">Scaling up helps significantly</h3>

<p>Doubling N_EMBD to 64 and using 4 heads (matching makemore’s architecture) dropped test loss from 1.94 to 1.92 at 5k steps. With longer training (20k steps), the model reached 1.85 test loss — surpassing makemore’s default.</p>

<h3 id="dropout-trades-speed-for-generalization">Dropout trades speed for generalization</h3>

<p>Adding 20% dropout increased the train-test gap initially and slowed convergence. At 5k steps, it actually hurt test loss (1.92 -&gt; 2.00). But it prevents overfitting during longer training runs, allowing the model to keep improving past where it would otherwise plateau.</p>

<h3 id="training-improvements-compound">Training improvements compound</h3>

<p>Learning rate scheduling (warmup + cosine decay), weight decay (0.01), and gradient clipping (max_norm=1.0) together produced smoother training curves. The cosine decay prevents the learning rate from being too high in later steps when fine-tuning. Weight decay acts as regularization. Gradient clipping prevents instability from occasional large gradients.</p>
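<p>The whole schedule fits in a few lines (the constants below mirror the training config: warmup to 1e-3 over 200 steps, cosine decay to 1e-4 over 20,000 steps):</p>

```python
import math

def lr_at(step, max_lr=1e-3, min_lr=1e-4, warmup=200, total=20_000):
    if step < warmup:                        # linear warmup
        return max_lr * (step + 1) / warmup
    progress = (step - warmup) / (total - warmup)  # 0 -> 1 over the decay phase
    return min_lr + 0.5 * (max_lr - min_lr) * (1 + math.cos(math.pi * progress))
```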

<h2 id="architecture-summary">Architecture Summary</h2>

<p>The final model is a proper transformer decoder:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Input tokens
    -&gt; Token Embedding (28 vocab -&gt; 64 dim)
    -&gt; 4x Transformer Blocks:
        -&gt; LayerNorm -&gt; Multi-Head Attention (4 heads, RoPE, dropout) -&gt; Residual
        -&gt; LayerNorm -&gt; MLP (64 -&gt; 256 -&gt; 64, GELU, dropout) -&gt; Residual
    -&gt; Linear (64 -&gt; 28 vocab)
    -&gt; Cross-entropy loss (masked on PAD tokens)
</code></pre></div></div>

<p>Training config:</p>
<ul>
  <li>20,000 steps</li>
  <li>Batch size 32</li>
  <li>AdamW optimizer with weight decay 0.01</li>
  <li>Learning rate: warmup to 1e-3 over 200 steps, cosine decay to 1e-4</li>
  <li>Gradient clipping: max_norm=1.0</li>
  <li>Dropout: 0.2</li>
</ul>

<h2 id="what-the-loss-means">What the Loss Means</h2>

<p><img src="/images/cross_entropy.png" alt="Cross Entropy Loss" /></p>

<p>A loss of 1.86 means the model assigns ~15.6% probability on average to the correct next character (<code class="language-plaintext highlighter-rouge">e^(-1.86)</code>). Random guessing over 27 characters would give ~3.7% (loss = 3.30). Perfect prediction is impossible because many positions are genuinely ambiguous — after “ma”, the next character could be r, d, k, x, t, and many others.</p>

<p>Progress through this project:</p>
<ul>
  <li>Start: 2.35 test loss (~9.5% confidence)</li>
  <li>Final: 1.86 test loss (~15.6% confidence)</li>
  <li>Improvement: ~1.6x more confident on the correct character</li>
</ul>
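<p>Those confidence numbers are just the loss exponentiated:</p>

```python
import math

for loss in (3.30, 2.35, 1.86):
    print(f"loss {loss:.2f} -> avg prob of correct char {math.exp(-loss):.1%}")
# 3.7%, 9.5%, 15.6%
```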

<h2 id="conclusion">Conclusion</h2>

<p>Building a transformer incrementally taught me that the magic isn’t in any single component — it’s in how they work together. Data preprocessing had the biggest impact. Depth mattered more than width. And the “modern” improvements (LayerNorm, RoPE, GELU) are less about dramatic gains and more about enabling scale.</p>
]]></content>
  </entry>
  
  <entry>
    <title type="html"><![CDATA[Reverse Engineering Guitar Pro 8's Locked Files]]></title>
    <link href="https://wangyi.ai/blog/2026/01/16/unlocking-guitar-pro-8/"/>
    <updated>2026-01-16T16:58:33-08:00</updated>
    <id>https://wangyi.ai/blog/2026/01/16/unlocking-guitar-pro-8</id>
    <content type="html"><![CDATA[<p>Have you ever worked on a Guitar Pro tab, saved it, and then realized you couldn’t edit it anymore because it was “locked”? Or perhaps you downloaded a tab that was perfect but needed just one small tweak, and the author had locked it?</p>

<p>I recently went down a rabbit hole reverse-engineering this “protection” mechanism in Guitar Pro 8. What I found was a classic case of “security through obscurity” — and not very deep obscurity at that.</p>

<!-- more -->

<h2 id="the-problem">The Problem</h2>

<p>Guitar Pro has a feature to “lock” a file. When locked, the file can be opened and played, but the editing features are disabled. If you peek inside the <code class="language-plaintext highlighter-rouge">.gp</code> file (which is just a ZIP archive), you’ll see a few interesting things:</p>

<ol>
  <li>A file named <code class="language-plaintext highlighter-rouge">editLocked</code>.</li>
  <li>The main content <code class="language-plaintext highlighter-rouge">Content/score.gpif</code> is encrypted (it doesn’t have the standard XML header).</li>
</ol>

<p>Removing <code class="language-plaintext highlighter-rouge">editLocked</code> isn’t enough. The app sees it’s missing, but the content remains encrypted and unreadable.</p>

<h2 id="the-breakthrough">The Breakthrough</h2>

<p>As Guitar Pro can open and play the file without ever prompting for a password, it was clear that the key to decrypt the content must be available to the application without user input. This realization led me to investigate how the application handles these files internally.</p>

<p>I analyzed the <code class="language-plaintext highlighter-rouge">GuitarPro</code> binary and its libraries, specifically <code class="language-plaintext highlighter-rouge">libGPIO.dylib</code>.</p>

<h3 id="1-the-salt">1. The Salt</h3>
<p>Deep in the binary, I found a reference to a static salt used in the encryption routine.
<code class="language-plaintext highlighter-rouge">da40cc64900b617a0f72ad4e6ef42f9c</code></p>

<h3 id="2-the-password">2. The Password</h3>
<p>Tracing the assembly code for <code class="language-plaintext highlighter-rouge">Score::setLockPwd</code>, I found something surprising. The application reads the <strong>entire content</strong> of the <code class="language-plaintext highlighter-rouge">editLocked</code> file (which contains a salt and a hash of the user’s original password) and sets <em>that string</em> as the internal password for decryption.</p>

<p>So, the “password” to decrypt audio and score data isn’t what you typed. It’s the metadata file itself.</p>

<h2 id="the-solution">The Solution</h2>

<p>Putting it all together, the encryption scheme is:</p>
<ul>
  <li><strong>Algorithm</strong>: AES-256-CBC</li>
  <li><strong>Key Derivation</strong>: PBKDF2-HMAC-SHA1 (4096 iterations)</li>
  <li><strong>Password</strong>: The content of <code class="language-plaintext highlighter-rouge">editLocked</code> (e.g., <code class="language-plaintext highlighter-rouge">salt$hash</code>)</li>
  <li><strong>Salt</strong>: The static binary salt (<code class="language-plaintext highlighter-rouge">da40cc...</code>)</li>
</ul>

<p>With this information, I wrote a Python script <code class="language-plaintext highlighter-rouge">unlock_score.py</code> that fully unlocks these files.</p>

<h3 id="the-script">The Script</h3>

<p>Here is the core logic of the unlocker:</p>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">STATIC_SALT_HEX</span> <span class="o">=</span> <span class="sh">"</span><span class="s">da40cc64900b617a0f72ad4e6ef42f9c</span><span class="sh">"</span>

<span class="k">def</span> <span class="nf">decrypt_gpif</span><span class="p">(</span><span class="n">encrypted_data</span><span class="p">,</span> <span class="n">password</span><span class="p">):</span>
    <span class="n">salt</span> <span class="o">=</span> <span class="n">binascii</span><span class="p">.</span><span class="nf">unhexlify</span><span class="p">(</span><span class="n">STATIC_SALT_HEX</span><span class="p">)</span>
    <span class="c1"># PBKDF2 with 4096 iterations
</span>    <span class="n">key</span> <span class="o">=</span> <span class="n">hashlib</span><span class="p">.</span><span class="nf">pbkdf2_hmac</span><span class="p">(</span><span class="sh">"</span><span class="s">sha1</span><span class="sh">"</span><span class="p">,</span> <span class="n">password</span><span class="p">.</span><span class="nf">encode</span><span class="p">(),</span> <span class="n">salt</span><span class="p">,</span> <span class="mi">4096</span><span class="p">,</span> <span class="mi">32</span><span class="p">)</span>
    
    <span class="n">iv</span> <span class="o">=</span> <span class="n">encrypted_data</span><span class="p">[:</span><span class="mi">16</span><span class="p">]</span>
    <span class="n">ciphertext</span> <span class="o">=</span> <span class="n">encrypted_data</span><span class="p">[</span><span class="mi">16</span><span class="p">:]</span>
    
    <span class="n">cipher</span> <span class="o">=</span> <span class="nc">Cipher</span><span class="p">(</span><span class="n">algorithms</span><span class="p">.</span><span class="nc">AES</span><span class="p">(</span><span class="n">key</span><span class="p">),</span> <span class="n">modes</span><span class="p">.</span><span class="nc">CBC</span><span class="p">(</span><span class="n">iv</span><span class="p">),</span> <span class="n">backend</span><span class="o">=</span><span class="nf">default_backend</span><span class="p">())</span>
    <span class="n">decryptor</span> <span class="o">=</span> <span class="n">cipher</span><span class="p">.</span><span class="nf">decryptor</span><span class="p">()</span>
    <span class="n">decrypted</span> <span class="o">=</span> <span class="n">decryptor</span><span class="p">.</span><span class="nf">update</span><span class="p">(</span><span class="n">ciphertext</span><span class="p">)</span> <span class="o">+</span> <span class="n">decryptor</span><span class="p">.</span><span class="nf">finalize</span><span class="p">()</span>
    
    <span class="c1"># Decompress zlib payload
</span>    <span class="k">return</span> <span class="n">zlib</span><span class="p">.</span><span class="nf">decompress</span><span class="p">(</span><span class="n">decrypted</span><span class="p">)</span>
</code></pre></div></div>
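<p>Because the salt is hard-coded, the AES key depends only on the password. A quick stdlib-only check of the key-derivation step (the password below is just a placeholder):</p>

```python
import binascii
import hashlib

STATIC_SALT_HEX = "da40cc64900b617a0f72ad4e6ef42f9c"

def derive_key(password):
    # Same KDF as decrypt_gpif: PBKDF2-HMAC-SHA1, 4096 iterations, 32-byte key.
    salt = binascii.unhexlify(STATIC_SALT_HEX)
    return hashlib.pbkdf2_hmac("sha1", password.encode(), salt, 4096, 32)

key = derive_key("example-password")  # placeholder password
```

<p>The derivation is fully deterministic: the same password always yields the same key, and the only per-file randomness in the scheme is the IV.</p>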

<p>You can find the full tool on <a href="https://gist.github.com/Jeswang/eeac3eb0977dee490814926e74538c9a">GitHub Gist</a>.</p>

<h2 id="the-role-of-llms-in-reverse-engineering">The Role of LLMs in Reverse Engineering</h2>

<p>A fascinating part of this project was using an LLM to accelerate the reverse engineering process. While tools like <code class="language-plaintext highlighter-rouge">otool</code> and <code class="language-plaintext highlighter-rouge">grep</code> provided the raw data, the AI acted as a “force multiplier”:</p>

<ul>
  <li><strong>Reading Code at Scale</strong>: The most daunting part of reverse engineering is the sheer volume of information. A binary dump can contain millions of lines of assembly instructions. For a human, “reading” this to build a mental model of the software’s behavior is a task that takes days or weeks. The LLM, however, could digest these massive text dumps instantly.</li>
  <li><strong>Semantic Understanding</strong>: It didn’t just match patterns; it understood the <em>intent</em> of the low-level code. By analyzing the context around function calls (like <code class="language-plaintext highlighter-rouge">AES_encrypt</code> or <code class="language-plaintext highlighter-rouge">setLockPwd</code>), the AI could infer high-level logic—such as identifying that the password was being sourced from file metadata—without us having to manually trace every register.</li>
  <li><strong>Time Compression</strong>: This ability to essentially “read” the binary allowed us to bypass the tedious manual tracing phase entirely. We could ask high-level questions about the software’s behavior and get answers derived from the raw assembly, compressing what would be a “forever” task for a human into a quick conversation.</li>
</ul>

<p>This collaboration turned what could have been a multi-day debugging session into a targeted, systematic investigation.</p>

<h2 id="conclusion">Conclusion</h2>

<p>This exercise showed that the “lock” feature in Guitar Pro is effectively just a UI flag backed by a fixed-key obfuscation. It prevents casual editing but offers no real security against someone determined to access the data.</p>

<p><em>Disclaimer: This information is for educational purposes only. Always respect copyright and the wishes of content creators.</em></p>
]]></content>
  </entry>
  
  <entry>
    <title type="html"><![CDATA[Vibe Coding - Extracting Pet Sprites from Cross Gate]]></title>
    <link href="https://wangyi.ai/blog/2026/01/16/cross-gate-pet-extractor/"/>
    <updated>2026-01-16T14:35:00-08:00</updated>
    <id>https://wangyi.ai/blog/2026/01/16/cross-gate-pet-extractor</id>
    <content type="html"><![CDATA[<p><img src="/images/cross-gate-pet-viewer.png" alt="Cross Gate Pet Viewer" /></p>

<p>Cross Gate (魔力宝贝) was one of the most influential MMORPGs in Taiwan and China during the early 2000s. As someone who spent countless hours collecting pets in this game during my childhood, I recently embarked on a nostalgia-driven project: extracting all the pet sprites from the game files and building a modern web viewer to browse them.</p>

<!-- more -->

<h2 id="the-challenge">The Challenge</h2>

<p>Game resources from the early 2000s are notoriously difficult to work with. Cross Gate uses proprietary binary formats for its graphics and animation data:</p>

<ul>
  <li><strong>GraphicInfo_*.bin</strong> (40 bytes per entry) - Metadata for each graphic including dimensions, offsets, and addresses</li>
  <li><strong>Graphic_*.bin</strong> - RLE-compressed 8-bit indexed images with transparency</li>
  <li><strong>AnimeInfo_*.bin</strong> (12 bytes per entry) - Animation metadata linking pet IDs to frame sequences</li>
  <li><strong>Anime_*.bin</strong> - Animation frame data with actions and directions</li>
  <li><strong>Palette files (.cgp)</strong> - 224-color palettes mapping indices 16-239</li>
</ul>
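<p>Fixed-size records like these can be walked with Python's <code>struct</code> module. The field layout below is a simplified stand-in for illustration only, not the actual GraphicInfo format (which the cgg-viewer project documents):</p>

```python
import struct

# Illustrative only: a 40-byte record read as little-endian fields.
# Field names and order are assumptions, not the real GraphicInfo layout.
RECORD = struct.Struct("<IIIHH24s")  # id, address, length, width, height, rest
assert RECORD.size == 40

def parse_graphic_info(blob):
    # Walk the file 40 bytes at a time, yielding one metadata dict per entry.
    for off in range(0, len(blob) - len(blob) % 40, 40):
        gid, addr, length, w, h, _ = RECORD.unpack_from(blob, off)
        yield {"id": gid, "addr": addr, "len": length, "w": w, "h": h}
```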

<p>The compression format is a custom RLE implementation with multiple encoding modes (literal, repeat, transparent) and variable-length counters.</p>
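<p>A minimal sketch of what such a decoder looks like. The opcode layout here (high two bits select the mode, low six bits carry the run count) is invented for illustration; Cross Gate's real encoding uses different opcodes and variable-length counters:</p>

```python
def rle_decode(data, transparent=0):
    """Toy three-mode RLE decoder: literal, repeat, transparent.

    Illustrative format only: high 2 bits of each opcode select the
    mode, low 6 bits are the run count. Not Cross Gate's actual encoding.
    """
    out, i = bytearray(), 0
    while i < len(data):
        op = data[i]
        mode, count = op >> 6, op & 0x3F
        i += 1
        if mode == 0:        # literal: copy the next `count` bytes verbatim
            out += data[i:i + count]
            i += count
        elif mode == 1:      # repeat: next byte repeated `count` times
            out += bytes([data[i]]) * count
            i += 1
        else:                # transparent: emit `count` transparent pixels
            out += bytes([transparent]) * count
    return bytes(out)
```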

<h2 id="the-solution">The Solution</h2>

<p>Using AI-assisted development (Claude Code and Antigravity), I built a Python extraction pipeline:</p>

<ol>
  <li><strong>Parse the binary formats</strong> - Read the structured binary files, extracting metadata and addresses</li>
  <li><strong>Decompress RLE graphics</strong> - Implement the full RLE decompression algorithm with all encoding modes</li>
  <li><strong>Apply palettes</strong> - Map 8-bit indexed pixels to RGB colors using the game’s palette files</li>
  <li><strong>Generate animated GIFs</strong> - Combine frames into animated GIFs for each pet’s actions and directions</li>
</ol>

<p>Each pet has up to 10 actions (Idle, Walk, Attack, Defend, Cast, etc.) and 8 directions, resulting in potentially 80 GIF animations per pet.</p>

<h2 id="the-frontend">The Frontend</h2>

<p>I built a Next.js web application to browse the extracted pets:</p>

<ul>
  <li><strong>Grid view</strong> displaying all available pets</li>
  <li><strong>Detail view</strong> with interactive controls for actions and directions</li>
  <li><strong>Drag-to-rotate</strong> functionality for intuitive direction changes</li>
  <li><strong>Pixel-perfect rendering</strong> with <code class="language-plaintext highlighter-rouge">image-rendering: pixelated</code> to preserve the retro aesthetic</li>
</ul>

<h2 id="lessons-learned">Lessons Learned</h2>

<ol>
  <li><strong>Binary format reverse engineering is time-consuming</strong> - Even with AI assistance, understanding undocumented binary formats requires careful experimentation and validation</li>
  <li><strong>Progress persistence is essential</strong> - With 1000+ pets to process, the batch generator needed to skip already-processed pets and handle timeouts gracefully</li>
  <li><strong>Test with edge cases early</strong> - Some pets had unusual frame counts or missing animations that caused the initial implementation to fail</li>
</ol>

<h2 id="references">References</h2>

<p>This project was made possible by the <a href="https://github.com/x2048/cgg-viewer">cgg-viewer</a> project, which provided the foundational understanding of Cross Gate’s binary file formats and RLE decompression algorithm. The original Python implementation by the cgg-viewer author was invaluable for understanding how to correctly parse GraphicInfo, AnimeInfo, and palette files.</p>

<h2 id="whats-next">What’s Next</h2>

<ul class="task-list">
  <li class="task-list-item"><input type="checkbox" class="task-list-item-checkbox" disabled="disabled" />Try <a href="https://3d.hunyuan.tencent.com/">Tencent Hunyuan 3D</a> to convert 2D sprites into 3D models</li>
</ul>

<p>You can try it out at <a href="https://1203906e.cross-gate-pets.pages.dev/">https://1203906e.cross-gate-pets.pages.dev/</a>.</p>

]]></content>
  </entry>
  
  <entry>
    <title type="html"><![CDATA[Breaking Up with Evernote: Building a Custom Migration Tool for Apple Notes]]></title>
    <link href="https://wangyi.ai/blog/2026/01/16/evernote-to-apple-notes-migration/"/>
    <updated>2026-01-16T14:00:00-08:00</updated>
    <id>https://wangyi.ai/blog/2026/01/16/evernote-to-apple-notes-migration</id>
    <content type="html"><![CDATA[<p>After 15+ years of note-taking, I finally said goodbye to Evernote. Here’s the technical journey of migrating 4,330 notes—with all their attachments, tables, and formatting—to Apple Notes.</p>

<!-- more -->

<h2 id="the-problem">The Problem</h2>

<p>Evernote had been my digital brain since the late 2000s. But with each passing version, the app became slower, more bloated, and increasingly expensive. Apple Notes, meanwhile, has quietly evolved into a capable, fast, and free alternative that syncs seamlessly across my devices.</p>

<p>The catch? <strong>There’s no official migration path.</strong> Evernote’s export format (ENEX) doesn’t preserve everything, and Apple Notes doesn’t have any bulk import feature. Manual copy-paste wasn’t an option.</p>

<p>So I built my own migration tool.</p>

<h2 id="what-made-this-hard">What Made This Hard</h2>

<p>This wasn’t a simple file conversion:</p>

<ul>
  <li><strong>Rich text formatting</strong> including tables, checklists, and styled text</li>
  <li><strong>Embedded attachments</strong> (images, PDFs, documents) referenced by MD5 hashes in Evernote’s proprietary ENML format</li>
  <li><strong>Creation and modification dates</strong> that needed to be preserved</li>
  <li><strong>Duplicate detection</strong> to allow resumable, interruptible migrations</li>
  <li><strong>Apple Notes’ limitations</strong>—no public API, only AppleScript access</li>
</ul>

<p>Evernote v10 made things even more complicated. Unlike older versions that stored everything in a straightforward SQLite database, v10 uses a hybrid system with:</p>
<ul>
  <li>A SQLite database for metadata</li>
  <li>Separate <code class="language-plaintext highlighter-rouge">.dat</code> files containing rich text content (tables/formatting)</li>
  <li>Protobuf-encoded binary structures</li>
  <li>Server-side attachment storage requiring authenticated downloads</li>
</ul>

<h2 id="the-solution-a-two-phase-migration-system">The Solution: A Two-Phase Migration System</h2>

<p>I built a Python-based migration pipeline that handles all of this complexity.</p>

<h3 id="phase-1-parallel-preparation">Phase 1: Parallel Preparation</h3>

<p>The first phase downloads attachments and generates PDFs in parallel using 10 worker threads. For notes with embedded images or files, I render the complete content (HTML + attachments) into a PDF using headless Chrome. This preserves formatting perfectly.</p>
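<p>The fan-out can be sketched with <code>concurrent.futures</code>; the <code>prepare_note</code> worker here is a hypothetical stand-in for the real download-and-render step:</p>

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def prepare_note(note):
    # Placeholder for the real work: download attachments, then render
    # the note's HTML plus attachments to PDF via headless Chrome.
    return note["id"], f"{note['id']}.pdf"

notes = [{"id": i} for i in range(25)]  # stand-in for real note records

results = {}
with ThreadPoolExecutor(max_workers=10) as pool:
    futures = [pool.submit(prepare_note, n) for n in notes]
    for fut in as_completed(futures):
        note_id, pdf_path = fut.result()
        results[note_id] = pdf_path
```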

<h3 id="phase-2-sequential-import">Phase 2: Sequential Import</h3>

<p>The second phase imports to Apple Notes via AppleScript—sequentially, because Apple Notes doesn’t handle concurrent modifications well.</p>

<h3 id="solving-the-attachment-problem">Solving the Attachment Problem</h3>

<p>Evernote embeds attachments using <code class="language-plaintext highlighter-rouge">&lt;en-media&gt;</code> tags with MD5 hashes. To resolve these to actual files, I:</p>

<ol>
  <li>Query Evernote’s local database for attachment metadata</li>
  <li>Download from Evernote’s servers using captured auth tokens</li>
  <li>Embed them as base64 in generated PDFs</li>
  <li>Attach the PDF to the Apple Notes entry</li>
</ol>
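<p>Step 1 hinges on first pulling the MD5 hashes out of the note body. A regex sketch over a fabricated ENML snippet (real note bodies come from Evernote's database):</p>

```python
import re

# Fabricated ENML snippet for illustration.
ENML = """<en-note>
  <div><en-media type="image/png" hash="3f2a9b0c5d6e7f8091a2b3c4d5e6f708"/></div>
  <div><en-media type="application/pdf" hash="0123456789abcdef0123456789abcdef"/></div>
</en-note>"""

# Each <en-media> tag references its attachment by a 32-char MD5 hash.
hashes = re.findall(r'<en-media[^>]*\bhash="([0-9a-f]{32})"', ENML)
```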

<h3 id="deduplication-done-right">Deduplication Done Right</h3>

<p>My initial attempt at duplicate detection was fragile—comparing dates via AppleScript often failed. The fix was simple: track Evernote note IDs in a log file. This makes the migration <strong>fully resumable</strong>.</p>
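<p>The mechanism is just an append-only log of processed IDs, reloaded at startup. A sketch (the file name and note IDs are illustrative):</p>

```python
import os

LOG_PATH = "migrated_ids.log"  # one Evernote note ID per line

def load_done(path=LOG_PATH):
    # Reload previously migrated IDs so an interrupted run can resume.
    if not os.path.exists(path):
        return set()
    with open(path) as f:
        return {line.strip() for line in f if line.strip()}

def mark_done(note_id, path=LOG_PATH):
    with open(path, "a") as f:  # append-only: each note is logged once imported
        f.write(note_id + "\n")

done = load_done()
for note_id in ["note-001", "note-002", "note-003"]:
    if note_id in done:
        continue  # already migrated in a previous run
    # ... import the note into Apple Notes via AppleScript here ...
    mark_done(note_id)
```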

<h2 id="bonus-ai-powered-organization">Bonus: AI-Powered Organization</h2>

<p>Once notes were in Apple Notes, I used Gemini AI to automatically categorize them into folders based on content.</p>

<h2 id="lessons-learned">Lessons Learned</h2>

<ol>
  <li>
    <p><strong>AppleScript is slow but reliable</strong> — Building a cache at startup dropped duplicate checks from 0.5s to 0.001s per note.</p>
  </li>
  <li>
    <p><strong>Parallelism for I/O, sequential for mutations</strong> — Downloading attachments scales linearly with workers. Writing to Apple Notes must be sequential.</p>
  </li>
  <li>
    <p><strong>Auth tokens expire</strong> — Evernote’s tokens last about an hour. I kept Proxyman ready to capture fresh tokens.</p>
  </li>
  <li>
    <p><strong>PDF is a universal container</strong> — When your target doesn’t support rich formatting or attachments, bundle everything into a PDF.</p>
  </li>
</ol>

<h2 id="the-code">The Code</h2>

<p>The entire migration toolkit is available on GitHub: <a href="https://github.com/Jeswang/apple-notes-toolkit">apple-notes-toolkit</a></p>

<p>⚠️ Note: This repo is fully vibe coded. Use with caution.</p>

<h2 id="final-thoughts">Final Thoughts</h2>

<p>What started as a weekend project turned into a deep dive into Evernote’s internals, Apple’s Scripting Bridge, and the art of data migration. But the result is worth it: my 15 years of notes are now in Apple Notes, fully searchable, syncing across devices, and—most importantly—mine to keep.</p>

<p>If you’re considering leaving Evernote, know that it’s possible. It just takes a bit of engineering.</p>
]]></content>
  </entry>
  
  <entry>
    <title type="html"><![CDATA[Reading Notes on 《世上为什么要有图书馆》 (Why Should There Be Libraries?)]]></title>
    <link href="https://wangyi.ai/blog/2025/09/28/library-book-reading-notes/"/>
    <updated>2025-09-28T10:00:00-07:00</updated>
    <id>https://wangyi.ai/blog/2025/09/28/library-book-reading-notes</id>
    <content type="html"><![CDATA[<p>A small book I read recently, fluently written and refreshingly clear. It recounts how 杨素秋 (Yang Suqiu), a university professor, spent a year on secondment at the Culture and Tourism Bureau of Xi'an's Beilin District, setting up the district library. The work was messy and concrete, and at times it even meant challenging authority:</p>
<ul>
  <li>The site the district provided was an underground space. Within a tight budget, she had to find a suitable renovation contractor to turn it into a comfortable reading environment.</li>
  <li>In book procurement, vendors habitually pad orders with shoddy filler titles and pay kickbacks to purchasers. The author scorned the kickbacks and, acting purely in the public interest, wanted the library stocked with good books that have stood the test of time.</li>
  <li>Choosing books for an entire library is an enormous undertaking that no one person can finish alone. The author tapped her network and enlisted many friends to help select titles. She unhurriedly relates why each book was recommended and what it meant to its recommender, and it is a joy to read.</li>
</ul>

<p>Despite the many difficulties, the author knew where she was headed, pushed upstream undeterred, and in the end got her wish. Alongside the main storyline, she sketches the assortment of people she met during her year of secondment: some make you grit your teeth, others leave you sighing, and together they lay bare the warmth and chill of human relations. Xi'an's food, officials who stay true to themselves, the life stories of the book-selecting friends, her care for the vulnerable: the flavors mingle and crackle, and what the reader tastes is a bright, tangy meal.</p>

<!-- more -->

<h1 id="附录里的书单">The Book List in the Appendix</h1>

<h3 id="童书含漫画">Children's Books (incl. Comics)</h3>

<table>
  <thead>
    <tr>
      <th style="text-align: left">Title</th>
      <th style="text-align: left">Author</th>
      <th style="text-align: left">Year Published</th>
      <th style="text-align: left">Douban Rating</th>
      <th style="text-align: left">Douban Link</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td style="text-align: left">《安徒生童话》</td>
      <td style="text-align: left">[丹麦] 汉斯·克里斯蒂安·安徒生</td>
      <td style="text-align: left">1835年</td>
      <td style="text-align: left">9.2</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E5%AE%89%E5%BE%92%E7%94%9F%E7%AB%A5%E8%AF%9D">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《镖人》</td>
      <td style="text-align: left">许先哲</td>
      <td style="text-align: left">2015年</td>
      <td style="text-align: left">9.0</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E9%95%96%E4%BA%BA">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《冰菓》</td>
      <td style="text-align: left">[日] 米澤穂信</td>
      <td style="text-align: left">2001年</td>
      <td style="text-align: left">8.6</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E5%86%B0%E8%8F%93">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《查理和巧克力工厂》</td>
      <td style="text-align: left">[英] 罗尔德·达尔</td>
      <td style="text-align: left">1964年</td>
      <td style="text-align: left">8.9</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E6%9F%A5%E7%90%86%E5%92%8C%E5%B7%A7%E5%85%8B%E5%8A%9B%E5%B7%A5%E5%8E%82">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《虫师》</td>
      <td style="text-align: left">[日] 漆原友纪</td>
      <td style="text-align: left">1999年</td>
      <td style="text-align: left">9.4</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E8%99%AB%E5%B8%88">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《宝可梦（宠物小精灵）》</td>
      <td style="text-align: left">[日] 日下秀宪 / 真斗</td>
      <td style="text-align: left">1997年</td>
      <td style="text-align: left">9.0</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E5%AE%9D%E5%8F%AF%E6%A2%A6%EF%BC%88%E5%AE%A0%E7%89%A9%E5%B0%8F%E7%B2%BE%E7%81%B5%EF%BC%89">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《窗边的小豆豆》</td>
      <td style="text-align: left">[日] 黑柳彻子</td>
      <td style="text-align: left">1981年</td>
      <td style="text-align: left">8.8</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E7%AA%97%E8%BE%B9%E7%9A%84%E5%B0%8F%E8%B1%86%E8%B1%86">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《吹小号的天鹅》</td>
      <td style="text-align: left">[美] E.B. 怀特</td>
      <td style="text-align: left">1970年</td>
      <td style="text-align: left">8.9</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E5%90%B9%E5%B0%8F%E5%8F%B7%E7%9A%84%E5%A4%A9%E9%B9%85">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《丁丁历险记》</td>
      <td style="text-align: left">[比利时] 埃尔热</td>
      <td style="text-align: left">1929年</td>
      <td style="text-align: left">9.4</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E4%B8%81%E4%B8%81%E5%8E%86%E9%99%A9%E8%AE%B0">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《机动战士高达》</td>
      <td style="text-align: left">[日] 富野由悠季 / 矢立肇</td>
      <td style="text-align: left">1979年</td>
      <td style="text-align: left">9.2</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E6%9C%BA%E5%8A%A8%E6%88%98%E5%A3%AB%E9%AB%98%E8%BE%BE">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《给孩子的故事》</td>
      <td style="text-align: left">黄永玉</td>
      <td style="text-align: left">2015年</td>
      <td style="text-align: left">8.2</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E7%BB%99%E5%AD%A9%E5%AD%90%E7%9A%84%E6%95%85%E4%BA%8B">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《灌篮高手》</td>
      <td style="text-align: left">[日] 井上雄彦</td>
      <td style="text-align: left">1990年</td>
      <td style="text-align: left">9.7</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E7%81%8C%E7%AF%AE%E9%AB%98%E6%89%8B">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《哈利·波特》</td>
      <td style="text-align: left">[英] J.K. 罗琳</td>
      <td style="text-align: left">1997年</td>
      <td style="text-align: left">9.2</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E5%93%88%E5%88%A9%C2%B7%E6%B3%A2%E7%89%B9">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《海贼王》</td>
      <td style="text-align: left">[日] 尾田荣一郎</td>
      <td style="text-align: left">1997年</td>
      <td style="text-align: left">9.6</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E6%B5%B7%E8%B4%BC%E7%8E%8B">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《汉声中国童话》</td>
      <td style="text-align: left">汉声杂志社</td>
      <td style="text-align: left">1982年</td>
      <td style="text-align: left">9.5</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E6%B1%89%E5%A3%B0%E4%B8%AD%E5%9B%BD%E7%AB%A5%E8%AF%9D">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《荷花镇的早市》</td>
      <td style="text-align: left">周翔</td>
      <td style="text-align: left">2014年</td>
      <td style="text-align: left">8.8</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E8%8D%B7%E8%8A%B1%E9%95%87%E7%9A%84%E6%97%A9%E5%B8%82">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《黑子的篮球》</td>
      <td style="text-align: left">[日] 藤卷忠俊</td>
      <td style="text-align: left">2008年</td>
      <td style="text-align: left">8.1</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E9%BB%91%E5%AD%90%E7%9A%84%E7%AF%AE%E7%90%83">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《护生画集》</td>
      <td style="text-align: left">丰子恺 / 弘一法师</td>
      <td style="text-align: left">1929年</td>
      <td style="text-align: left">9.4</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E6%8A%A4%E7%94%9F%E7%94%BB%E9%9B%86">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《火影忍者》</td>
      <td style="text-align: left">[日] 岸本齐史</td>
      <td style="text-align: left">1999年</td>
      <td style="text-align: left">9.3</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E7%81%AB%E5%BD%B1%E5%BF%8D%E8%80%85">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《精灵鼠小弟》</td>
      <td style="text-align: left">[美] E.B. 怀特</td>
      <td style="text-align: left">1945年</td>
      <td style="text-align: left">8.6</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E7%B2%BE%E7%81%B5%E9%BC%A0%E5%B0%8F%E5%BC%9F">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《可怕的科学》</td>
      <td style="text-align: left">[英] 尼克·阿诺德</td>
      <td style="text-align: left">1996年</td>
      <td style="text-align: left">9.3</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E5%8F%AF%E6%80%95%E7%9A%84%E7%A7%91%E5%AD%A6">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《拉比的猫》</td>
      <td style="text-align: left">[法] 尤安·斯法</td>
      <td style="text-align: left">2002年</td>
      <td style="text-align: left">8.8</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E6%8B%89%E6%AF%94%E7%9A%84%E7%8C%AB">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《了不起的狐狸爸爸》</td>
      <td style="text-align: left">[英] 罗尔德·达尔</td>
      <td style="text-align: left">1970年</td>
      <td style="text-align: left">8.8</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E4%BA%86%E4%B8%8D%E8%B5%B7%E7%9A%84%E7%8B%90%E7%8B%B8%E7%88%B8%E7%88%B8">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《龙珠Z》 (漫画原作)</td>
      <td style="text-align: left">[日] 鸟山明</td>
      <td style="text-align: left">1984年</td>
      <td style="text-align: left">9.7</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E9%BE%99%E7%8F%A0Z">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《玛蒂尔达》</td>
      <td style="text-align: left">[英] 罗尔德·达尔</td>
      <td style="text-align: left">1988年</td>
      <td style="text-align: left">9.1</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E7%8E%9B%E8%92%82%E5%B0%94%E8%BE%BE">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《玛法达》</td>
      <td style="text-align: left">[阿根廷] 季诺</td>
      <td style="text-align: left">1964年</td>
      <td style="text-align: left">9.4</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E7%8E%9B%E6%B3%95%E8%BE%BE">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《名侦探柯南》</td>
      <td style="text-align: left">[日] 青山刚昌</td>
      <td style="text-align: left">1994年</td>
      <td style="text-align: left">9.3</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E5%90%8D%E4%BE%A6%E6%8E%A2%E6%9F%AF%E5%8D%97">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《排球少年》</td>
      <td style="text-align: left">[日] 古馆春一</td>
      <td style="text-align: left">2012年</td>
      <td style="text-align: left">9.7</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E6%8E%92%E7%90%83%E5%B0%91%E5%B9%B4">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《七龙珠》</td>
      <td style="text-align: left">[日] 鸟山明</td>
      <td style="text-align: left">1984年</td>
      <td style="text-align: left">9.7</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E4%B8%83%E9%BE%99%E7%8F%A0">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《棋魂》</td>
      <td style="text-align: left">[日] 堀田由美 / 小畑健</td>
      <td style="text-align: left">1999年</td>
      <td style="text-align: left">9.5</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E6%A3%8B%E9%AD%82">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《犬夜叉》</td>
      <td style="text-align: left">[日] 高桥留美子</td>
      <td style="text-align: left">1996年</td>
      <td style="text-align: left">9.1</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E7%8A%AC%E5%A4%9C%E5%8F%89">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《三毛流浪记》</td>
      <td style="text-align: left">张乐平</td>
      <td style="text-align: left">1947年</td>
      <td style="text-align: left">9.1</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E4%B8%89%E6%AF%9B%E6%B5%81%E6%B5%AA%E8%AE%B0">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《圣斗士星矢》</td>
      <td style="text-align: left">[日] 车田正美</td>
      <td style="text-align: left">1986年</td>
      <td style="text-align: left">9.2</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E5%9C%A3%E6%96%97%E5%A3%AB%E6%98%9F%E7%9F%A2">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《死神》 (BLEACH)</td>
      <td style="text-align: left">[日] 久保带人</td>
      <td style="text-align: left">2001年</td>
      <td style="text-align: left">9.0</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E6%AD%BB%E7%A5%9E">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《死亡笔记》</td>
      <td style="text-align: left">[日] 大场鸫 / 小畑健</td>
      <td style="text-align: left">2003年</td>
      <td style="text-align: left">9.2</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E6%AD%BB%E4%BA%A1%E7%AC%94%E8%AE%B0">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《四月是你的谎言》</td>
      <td style="text-align: left">[日] 新川直司</td>
      <td style="text-align: left">2011年</td>
      <td style="text-align: left">8.7</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E5%9B%9B%E6%9C%88%E6%98%AF%E4%BD%A0%E7%9A%84%E8%B0%8E%E8%A8%80">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《太空》</td>
      <td style="text-align: left">[美] H.A. 雷</td>
      <td style="text-align: left">1957年</td>
      <td style="text-align: left">9.1</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E5%A4%AA%E7%A9%BA">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《网球王子》</td>
      <td style="text-align: left">[日] 许斐刚</td>
      <td style="text-align: left">1999年</td>
      <td style="text-align: left">8.8</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E7%BD%91%E7%90%83%E7%8E%8B%E5%AD%90">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《文豪野犬》</td>
      <td style="text-align: left">[日] 朝雾卡夫卡 / 春河35</td>
      <td style="text-align: left">2012年</td>
      <td style="text-align: left">8.4</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E6%96%87%E8%B1%AA%E9%87%8E%E7%8A%AC">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《希利尔讲艺术史》</td>
      <td style="text-align: left">[美] V.M. 希利尔</td>
      <td style="text-align: left">1924年</td>
      <td style="text-align: left">8.8</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E5%B8%8C%E5%88%A9%E5%B0%94%E8%AE%B2%E8%89%BA%E6%9C%AF%E5%8F%B2">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《夏洛的网》</td>
      <td style="text-align: left">[美] E.B. 怀特</td>
      <td style="text-align: left">1952年</td>
      <td style="text-align: left">8.6</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E5%A4%8F%E6%B4%9B%E7%9A%84%E7%BD%91">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《夏目友人帐》</td>
      <td style="text-align: left">[日] 绿川幸</td>
      <td style="text-align: left">2005年</td>
      <td style="text-align: left">9.4</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E5%A4%8F%E7%9B%AE%E5%8F%8B%E4%BA%BA%E5%B8%90">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《写给孩子的哲学启蒙书》</td>
      <td style="text-align: left">[法] 布里吉特·拉贝 等</td>
      <td style="text-align: left">2001年</td>
      <td style="text-align: left">8.8</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E5%86%99%E7%BB%99%E5%AD%A9%E5%AD%90%E7%9A%84%E5%93%B2%E5%AD%A6%E5%90%AF%E8%92%99%E4%B9%A6">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《银魂》</td>
      <td style="text-align: left">[日] 空知英秋</td>
      <td style="text-align: left">2003年</td>
      <td style="text-align: left">9.5</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E9%93%B6%E9%AD%82">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《幽游白书》</td>
      <td style="text-align: left">[日] 冨㭴义博</td>
      <td style="text-align: left">1990年</td>
      <td style="text-align: left">9.5</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E5%B9%BD%E6%B8%B8%E7%99%BD%E4%B9%A6">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《月刊少女野崎君》</td>
      <td style="text-align: left">[日] 椿泉</td>
      <td style="text-align: left">2011年</td>
      <td style="text-align: left">9.2</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E6%9C%88%E5%88%8A%E5%B1%91%E5%A5%B3%E9%87%8E%E5%B4%8E%E5%90%9B">链接</a></td>
    </tr>
  </tbody>
</table>

<h3 id="文学类">Literature</h3>

<table>
  <thead>
    <tr>
      <th style="text-align: left">Title</th>
      <th style="text-align: left">Author</th>
      <th style="text-align: left">Year Published</th>
      <th style="text-align: left">Douban Rating</th>
      <th style="text-align: left">Douban Link</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td style="text-align: left">《奥德赛》</td>
      <td style="text-align: left">[古希腊] 荷马</td>
      <td style="text-align: left">公元前8世纪</td>
      <td style="text-align: left">8.7</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E5%A5%A5%E5%BE%B7%E8%B5%9B">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《白鹿原》</td>
      <td style="text-align: left">陈忠实</td>
      <td style="text-align: left">1993年</td>
      <td style="text-align: left">9.3</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E7%99%BD%E9%B9%BF%E5%8E%9F">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《冰与火之歌》</td>
      <td style="text-align: left">[美] 乔治·R.R. 马丁</td>
      <td style="text-align: left">1996年</td>
      <td style="text-align: left">9.4</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E5%86%B0%E4%B8%8E%E7%81%AB%E4%B9%8B%E6%AD%8C">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《查令十字街84号》</td>
      <td style="text-align: left">[USA] 海莲·汉芙</td>
      <td style="text-align: left">1970</td>
      <td style="text-align: left">8.5</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E6%9F%A5%E4%BB%A4%E5%8D%81%E5%AD%97%E8%A1%9784%E5%8F%B7">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《传习录》</td>
      <td style="text-align: left">王阳明</td>
      <td style="text-align: left">c. 1518</td>
      <td style="text-align: left">9.1</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E4%BC%A0%E4%B9%A0%E5%BD%95">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《东周列国志》</td>
      <td style="text-align: left">[Ming] 冯梦龙</td>
      <td style="text-align: left">c. 1620s</td>
      <td style="text-align: left">9.3</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E4%B8%9C%E5%91%A8%E5%88%97%E5%9B%BD%E5%BF%97">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《读库》</td>
      <td style="text-align: left">张立宪 (ed.)</td>
      <td style="text-align: left">2006</td>
      <td style="text-align: left">9.3</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E8%AF%BB%E5%BA%93">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《儿女英雄传》</td>
      <td style="text-align: left">[Qing] 文康</td>
      <td style="text-align: left">c. 1878</td>
      <td style="text-align: left">7.6</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E5%84%BF%E5%A5%B3%E8%8B%B1%E9%9B%84%E4%BC%A0">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《反骨仔》</td>
      <td style="text-align: left">王朔</td>
      <td style="text-align: left">2007</td>
      <td style="text-align: left">7.0</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E5%8F%8D%E9%AA%A8%E4%BB%94">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《废都》</td>
      <td style="text-align: left">贾平凹</td>
      <td style="text-align: left">1993</td>
      <td style="text-align: left">8.2</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E5%BA%9F%E9%83%BD">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《古文观止》</td>
      <td style="text-align: left">[Qing] 吴楚材 / 吴调侯</td>
      <td style="text-align: left">1695</td>
      <td style="text-align: left">9.4</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E5%8F%A4%E6%96%87%E8%A7%82%E6%AD%A2">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《哈克贝利·费恩历险记》</td>
      <td style="text-align: left">[USA] 马克·吐温</td>
      <td style="text-align: left">1884</td>
      <td style="text-align: left">8.7</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E5%93%88%E5%85%8B%E8%B4%9D%E5%88%A9%C2%B7%E8%B4%B9%E6%81%A9%E5%8E%86%E9%99%A9%E8%AE%B0">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《海边的卡夫卡》</td>
      <td style="text-align: left">[JP] 村上春树</td>
      <td style="text-align: left">2002</td>
      <td style="text-align: left">8.2</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E6%B5%B7%E8%BE%B9%E7%9A%84%E5%8D%A1%E5%A4%AB%E5%8D%A1">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《海底两万里》</td>
      <td style="text-align: left">[France] 儒勒·凡尔纳</td>
      <td style="text-align: left">1870</td>
      <td style="text-align: left">8.6</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E6%B5%B7%E5%BA%95%E4%B8%A4%E4%B8%87%E9%87%8C">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《汉字王国》</td>
      <td style="text-align: left">[Sweden] 林西莉</td>
      <td style="text-align: left">1989</td>
      <td style="text-align: left">9.0</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E6%B1%89%E5%AD%97%E7%8E%8B%E5%9B%BD">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《红楼梦》</td>
      <td style="text-align: left">[Qing] 曹雪芹</td>
      <td style="text-align: left">c. 1791</td>
      <td style="text-align: left">9.6</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E7%BA%A2%E6%A5%BC%E6%A2%A6">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《活着》</td>
      <td style="text-align: left">余华</td>
      <td style="text-align: left">1993</td>
      <td style="text-align: left">9.4</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E6%B4%BB%E7%9D%80">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《基督山伯爵》</td>
      <td style="text-align: left">[France] 大仲马</td>
      <td style="text-align: left">1844</td>
      <td style="text-align: left">9.2</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E5%9F%BA%E7%9D%A3%E5%B1%B1%E4%BC%AF%E7%88%B5">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《卡拉马佐夫兄弟》</td>
      <td style="text-align: left">[Russia] 陀思妥耶夫斯基</td>
      <td style="text-align: left">1880</td>
      <td style="text-align: left">9.7</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E5%8D%A1%E6%8B%89%E9%A9%AC%E4%BD%90%E5%A4%AB%E5%85%84%E5%BC%9F">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《克林索尔的最后夏天》</td>
      <td style="text-align: left">[Germany] 赫尔曼·黑塞</td>
      <td style="text-align: left">1920</td>
      <td style="text-align: left">8.8</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E5%85%8B%E6%9E%97%E7%B4%A2%E5%B0%94%E7%9A%84%E6%9C%80%E5%90%8E%E5%A4%8F%E5%A4%A9">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《老人与海》</td>
      <td style="text-align: left">[USA] 欧内斯特·海明威</td>
      <td style="text-align: left">1952</td>
      <td style="text-align: left">8.5</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E8%80%81%E4%BA%BA%E4%B8%8E%E6%B5%B7">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《礼物》</td>
      <td style="text-align: left">[USA] 弗拉基米尔·纳博科夫</td>
      <td style="text-align: left">1938</td>
      <td style="text-align: left">8.8</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E7%A4%BC%E7%89%A9">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《裂缝》</td>
      <td style="text-align: left">[UK] 多丽丝·莱辛</td>
      <td style="text-align: left">2007</td>
      <td style="text-align: left">7.9</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E8%A3%82%E7%BC%9D">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《流言》</td>
      <td style="text-align: left">张爱玲</td>
      <td style="text-align: left">1944</td>
      <td style="text-align: left">8.8</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E6%B5%81%E8%A8%80">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《鲁滨孙漂流记》</td>
      <td style="text-align: left">[UK] 丹尼尔·笛福</td>
      <td style="text-align: left">1719</td>
      <td style="text-align: left">8.4</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E9%B2%81%E6%BB%A8%E5%AD%99%E6%BC%82%E6%B5%81%E8%AE%B0">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《鲁迅全集》</td>
      <td style="text-align: left">鲁迅</td>
      <td style="text-align: left">1938</td>
      <td style="text-align: left">9.7</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E9%B2%81%E8%BF%85%E5%85%A8%E9%9B%86">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《论语》</td>
      <td style="text-align: left">Disciples of Confucius and their followers</td>
      <td style="text-align: left">Warring States period</td>
      <td style="text-align: left">9.4</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E8%AE%BA%E8%AF%AD">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《罗生门》</td>
      <td style="text-align: left">[JP] 芥川龙之介</td>
      <td style="text-align: left">1915</td>
      <td style="text-align: left">8.7</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E7%BD%97%E7%94%9F%E9%97%A8">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《麦田里的守望者》</td>
      <td style="text-align: left">[USA] J.D. 塞林格</td>
      <td style="text-align: left">1951</td>
      <td style="text-align: left">8.2</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E9%BA%A6%E7%94%B0%E9%87%8C%E7%9A%84%E5%AE%88%E6%9C%9B%E8%80%85">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《魔戒》</td>
      <td style="text-align: left">[UK] J.R.R. 托尔金</td>
      <td style="text-align: left">1954</td>
      <td style="text-align: left">9.4</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E9%AD%94%E6%88%92">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《墓法墓天》</td>
      <td style="text-align: left">不带剑</td>
      <td style="text-align: left">2017</td>
      <td style="text-align: left">7.9</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E5%A2%93%E6%B3%95%E5%A2%93%E5%A4%A9">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《那不勒斯四部曲》</td>
      <td style="text-align: left">[Italy] 埃莱娜·费兰特</td>
      <td style="text-align: left">2011</td>
      <td style="text-align: left">8.8</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E9%82%A3%E4%B8%8D%E5%8B%92%E6%96%AF%E5%9B%9B%E9%83%A8%E6%9B%B2">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《挪威的森林》</td>
      <td style="text-align: left">[JP] 村上春树</td>
      <td style="text-align: left">1987</td>
      <td style="text-align: left">8.1</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E6%8C%AA%E5%A8%81%E7%9A%84%E6%A3%AE%E6%9E%97">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《胚胎奇谭》</td>
      <td style="text-align: left">[UK] 朱利安·巴恩斯</td>
      <td style="text-align: left">1984</td>
      <td style="text-align: left">8.5</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E8%83%9A%E8%83%8E%E5%A5%87%E8%B0%AD">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《契诃夫文集》</td>
      <td style="text-align: left">[Russia] 安东·巴甫洛维奇·契诃夫</td>
      <td style="text-align: left">Late 19th century</td>
      <td style="text-align: left">9.6</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E5%A5%91%E8%AF%83%E5%A4%AB%E6%96%87%E9%9B%86">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《人间词话》</td>
      <td style="text-align: left">王国维</td>
      <td style="text-align: left">1910</td>
      <td style="text-align: left">9.0</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E4%BA%BA%E9%97%B4%E8%AF%8D%E8%AF%9D">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《人间喜剧》</td>
      <td style="text-align: left">[France] 奥诺雷·德·巴尔扎克</td>
      <td style="text-align: left">1829-1848</td>
      <td style="text-align: left">9.2</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E4%BA%BA%E9%97%B4%E5%96%9C%E5%89%A7">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《三国演义》</td>
      <td style="text-align: left">[Ming] 罗贯中</td>
      <td style="text-align: left">14th century</td>
      <td style="text-align: left">9.2</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E4%B8%89%E5%9B%BD%E6%BC%94%E4%B9%89">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《三体》</td>
      <td style="text-align: left">刘慈欣</td>
      <td style="text-align: left">2006</td>
      <td style="text-align: left">8.9</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E4%B8%89%E4%BD%93">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《诗的八堂课》</td>
      <td style="text-align: left">张晓风</td>
      <td style="text-align: left">2011</td>
      <td style="text-align: left">8.3</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E8%AF%97%E7%9A%84%E5%85%AB%E5%A0%82%E8%AF%BE">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《诗歌手册》</td>
      <td style="text-align: left">[France] 保尔·瓦雷里</td>
      <td style="text-align: left">1942</td>
      <td style="text-align: left">8.7</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E8%AF%97%E6%AD%8C%E6%89%8B%E5%86%8C">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《诗经》</td>
      <td style="text-align: left">Anonymous</td>
      <td style="text-align: left">11th-7th centuries BC</td>
      <td style="text-align: left">9.0</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E8%AF%97%E7%BB%8F">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《史记》</td>
      <td style="text-align: left">[Han] 司马迁</td>
      <td style="text-align: left">c. 94 BC</td>
      <td style="text-align: left">9.6</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E5%8F%B2%E8%AE%B0">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《世说新语》</td>
      <td style="text-align: left">[Liu Song] 刘义庆</td>
      <td style="text-align: left">c. 430</td>
      <td style="text-align: left">9.1</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E4%B8%96%E8%AF%B4%E6%96%B0%E8%AF%AD">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《鼠疫》</td>
      <td style="text-align: left">[France] 阿尔贝·加缪</td>
      <td style="text-align: left">1947</td>
      <td style="text-align: left">9.1</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E9%BC%A0%E7%96%AB">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《太平广记》</td>
      <td style="text-align: left">[Song] 李昉 et al.</td>
      <td style="text-align: left">978</td>
      <td style="text-align: left">9.5</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E5%A4%AA%E5%B9%B3%E5%B9%BF%E8%AE%B0">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《汤姆·索亚历险记》</td>
      <td style="text-align: left">[USA] 马克·吐温</td>
      <td style="text-align: left">1876</td>
      <td style="text-align: left">8.5</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E6%B1%A4%E5%A7%86%C2%B7%E7%B4%A2%E4%BA%9A%E5%8E%86%E9%99%A9%E8%AE%B0">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《唐诗别裁集》</td>
      <td style="text-align: left">[Qing] 沈德潜</td>
      <td style="text-align: left">1717</td>
      <td style="text-align: left">9.0</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E5%94%90%E8%AF%97%E5%88%AB%E8%A3%81%E9%9B%86">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《唐诗三百首》</td>
      <td style="text-align: left">[Qing] 蘅塘退士</td>
      <td style="text-align: left">c. 1763</td>
      <td style="text-align: left">9.2</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E5%94%90%E8%AF%97%E4%B8%89%E7%99%BE%E9%A6%96">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《天龙八部》</td>
      <td style="text-align: left">金庸</td>
      <td style="text-align: left">1963</td>
      <td style="text-align: left">9.2</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E5%A4%A9%E9%BE%99%E5%85%AB%E9%83%A8">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《推拿》</td>
      <td style="text-align: left">毕飞宇</td>
      <td style="text-align: left">2008</td>
      <td style="text-align: left">8.7</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E6%8E%A8%E6%8B%BF">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《文苑英华》</td>
      <td style="text-align: left">[Song] 李昉 et al.</td>
      <td style="text-align: left">987</td>
      <td style="text-align: left">9.7</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E6%96%87%E8%8B%91%E8%8B%B1%E5%8D%8E">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《我弥留之际》</td>
      <td style="text-align: left">[USA] 威廉·福克纳</td>
      <td style="text-align: left">1930</td>
      <td style="text-align: left">8.8</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E6%88%91%E5%BC%A5%E7%95%99%E4%B9%8B%E9%99%85">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《西南联大国文课》</td>
      <td style="text-align: left">闻一多 / 朱自清 et al.</td>
      <td style="text-align: left">-</td>
      <td style="text-align: left">8.4</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E8%A5%BF%E5%8D%97%E8%81%94%E5%A4%A7%E5%9B%BD%E6%96%87%E8%AF%BE">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《献给阿尔吉侬的花束》</td>
      <td style="text-align: left">[USA] 丹尼尔·凯斯</td>
      <td style="text-align: left">1966</td>
      <td style="text-align: left">9.1</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E7%8C%AE%E7%BB%99%E9%98%BF%E5%B0%94%E5%90%89%E4%BE%AC%E7%9A%84%E8%8A%B1%E6%9D%9F">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《小城之恋》</td>
      <td style="text-align: left">[UK] L.P. 哈特利</td>
      <td style="text-align: left">1953</td>
      <td style="text-align: left">8.1</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E5%B0%8F%E5%9F%8E%E4%B9%8B%E6%81%8B">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《小说课》</td>
      <td style="text-align: left">毕飞宇</td>
      <td style="text-align: left">2017</td>
      <td style="text-align: left">8.6</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E5%B0%8F%E8%AF%B4%E8%AF%BE">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《写作法宝》</td>
      <td style="text-align: left">[USA] 斯蒂芬·金</td>
      <td style="text-align: left">2000</td>
      <td style="text-align: left">8.9</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E5%86%99%E4%BD%9C%E6%B3%95%E5%AE%9D">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《伊利亚特》</td>
      <td style="text-align: left">[Ancient Greece] 荷马</td>
      <td style="text-align: left">8th century BC</td>
      <td style="text-align: left">8.8</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E4%BC%8A%E5%88%A9%E4%BA%9A%E7%89%B9">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《阴阳师》</td>
      <td style="text-align: left">[JP] 梦枕貘</td>
      <td style="text-align: left">1986</td>
      <td style="text-align: left">8.6</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E9%98%B4%E9%98%B3%E5%B8%88">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《银河帝国》</td>
      <td style="text-align: left">[USA] 艾萨克·阿西莫夫</td>
      <td style="text-align: left">1951</td>
      <td style="text-align: left">9.4</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E9%93%B6%E6%B2%B3%E5%B8%9D%E5%9B%BD">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《酉阳杂俎》</td>
      <td style="text-align: left">[Tang] 段成式</td>
      <td style="text-align: left">9th century</td>
      <td style="text-align: left">9.2</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E9%85%89%E9%98%B3%E6%9D%82%E9%98%BB">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《战国争鸣记》</td>
      <td style="text-align: left">[JP] 宫崎市定</td>
      <td style="text-align: left">1947</td>
      <td style="text-align: left">8.5</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E6%88%98%E5%9B%BD%E4%BA%89%E9%B8%A3%E8%AE%B0">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《朝花夕拾》</td>
      <td style="text-align: left">鲁迅</td>
      <td style="text-align: left">1928</td>
      <td style="text-align: left">8.8</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E6%9C%9D%E8%8A%B1%E5%A4%95%E6%8B%BE">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《正常人》</td>
      <td style="text-align: left">[Ireland] 萨莉·鲁尼</td>
      <td style="text-align: left">2018</td>
      <td style="text-align: left">8.0</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E6%AD%A3%E5%B8%B8%E4%BA%BA">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《纸牌屋》</td>
      <td style="text-align: left">[UK] 迈克尔·多布斯</td>
      <td style="text-align: left">1989</td>
      <td style="text-align: left">8.6</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E7%BA%B8%E7%89%8C%E5%B1%8B">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《最后一个匈奴》</td>
      <td style="text-align: left">高建群</td>
      <td style="text-align: left">1993</td>
      <td style="text-align: left">8.1</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E6%9C%80%E5%90%8E%E4%B8%80%E4%B8%AA%E5%8C%88%E5%A5%B4">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《左传》</td>
      <td style="text-align: left">[Spring and Autumn] 左丘明 (attrib.)</td>
      <td style="text-align: left">Warring States period</td>
      <td style="text-align: left">9.4</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E5%B7%A6%E4%BC%A0">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《作文七巧》</td>
      <td style="text-align: left">夏丏尊 / 叶圣陶</td>
      <td style="text-align: left">1980</td>
      <td style="text-align: left">8.0</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E4%BD%9C%E6%96%87%E4%B8%83%E5%B7%A7">链接</a></td>
    </tr>
  </tbody>
</table>

<h3 id="人文社科">Humanities and Social Sciences</h3>

<table>
  <thead>
    <tr>
      <th style="text-align: left">Title</th>
      <th style="text-align: left">Author</th>
      <th style="text-align: left">Year Published</th>
      <th style="text-align: left">Douban Rating</th>
      <th style="text-align: left">Douban Link</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td style="text-align: left">《1844年经济学哲学手稿》</td>
      <td style="text-align: left">[Germany] 卡尔·马克思</td>
      <td style="text-align: left">1932</td>
      <td style="text-align: left">9.2</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+1844%E5%B9%B4%E7%BB%8F%E6%B5%8E%E5%AD%A6%E5%93%B2%E5%AD%A6%E6%89%8B%E7%A8%BF">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《奥斯威辛：一部历史》</td>
      <td style="text-align: left">[UK] 劳伦斯·里斯</td>
      <td style="text-align: left">2005</td>
      <td style="text-align: left">9.3</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E5%A5%A5%E6%96%AF%E5%A8%81%E8%BE%9B%EF%BC%9A%E4%B8%80%E9%83%A8%E5%8E%86%E5%8F%B2">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《奥义书》</td>
      <td style="text-align: left">Anonymous</td>
      <td style="text-align: left">800-500 BC</td>
      <td style="text-align: left">9.1</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E5%A5%A5%E4%B9%89%E4%B9%A6">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《巴尔扎克传》</td>
      <td style="text-align: left">[Austria] 斯蒂芬·茨威格</td>
      <td style="text-align: left">1946</td>
      <td style="text-align: left">9.1</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E5%B7%B4%E5%B0%94%E6%89%8E%E5%85%8B%E4%BC%A0">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《保卫马克思》</td>
      <td style="text-align: left">[France] 路易·阿尔都塞</td>
      <td style="text-align: left">1965</td>
      <td style="text-align: left">8.8</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E4%BF%9D%E5%8D%AB%E9%A9%AC%E5%85%8B%E6%80%9D">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《藏在碑林里的国宝》</td>
      <td style="text-align: left">郭志呈 / 郭强</td>
      <td style="text-align: left">2019</td>
      <td style="text-align: left">8.5</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E8%97%8F%E5%9C%A8%E7%A2%91%E6%9E%97%E9%87%8C%E7%9A%84%E5%9B%BD%E5%AE%9D">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《册府元龟》</td>
      <td style="text-align: left">[Song] 王钦若 / 杨亿</td>
      <td style="text-align: left">1013</td>
      <td style="text-align: left">9.8</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E5%86%8C%E5%BA%9C%E5%85%83%E9%BE%9F">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《纯粹理性批判》</td>
      <td style="text-align: left">[Germany] 伊曼努尔·康德</td>
      <td style="text-align: left">1781</td>
      <td style="text-align: left">9.2</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E7%BA%AF%E7%B2%B9%E7%90%86%E6%80%A7%E6%89%B9%E5%88%A4">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《丛书集成》</td>
      <td style="text-align: left">王云五 (ed.)</td>
      <td style="text-align: left">1935</td>
      <td style="text-align: left">9.7</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E4%B8%9B%E4%B9%A6%E9%9B%86%E6%88%90">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《大藏经》</td>
      <td style="text-align: left">Eminent monks across the dynasties</td>
      <td style="text-align: left">Various dynasties</td>
      <td style="text-align: left">9.8</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E5%A4%A7%E8%97%8F%E7%BB%8F">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《抵抗的群体》</td>
      <td style="text-align: left">[USA] 王人英</td>
      <td style="text-align: left">2011</td>
      <td style="text-align: left">8.8</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E6%8A%B5%E6%8A%97%E7%9A%84%E7%BE%A4%E4%BD%93">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《第二性》</td>
      <td style="text-align: left">[France] 西蒙娜·德·波伏娃</td>
      <td style="text-align: left">1949</td>
      <td style="text-align: left">8.8</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E7%AC%AC%E4%BA%8C%E6%80%A7">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《洞穴奇案》</td>
      <td style="text-align: left">[USA] 彼得·萨伯</td>
      <td style="text-align: left">1998</td>
      <td style="text-align: left">9.4</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E6%B4%9E%E7%A9%B4%E5%A5%87%E6%A1%88">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《对影胡说》</td>
      <td style="text-align: left">胡兰成</td>
      <td style="text-align: left">1980</td>
      <td style="text-align: left">7.2</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E5%AF%B9%E5%BD%B1%E8%83%A1%E8%AF%B4">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《二十四史》</td>
      <td style="text-align: left">Historians across the dynasties</td>
      <td style="text-align: left">Various dynasties</td>
      <td style="text-align: left">9.7</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E4%BA%8C%E5%8D%81%E5%9B%9B%E5%8F%B2">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《二手时间》</td>
      <td style="text-align: left">[Belarus] S.A. 阿列克谢耶维奇</td>
      <td style="text-align: left">2013</td>
      <td style="text-align: left">9.2</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E4%BA%8C%E6%89%8B%E6%97%B6%E9%97%B4">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《佛家名相通释》</td>
      <td style="text-align: left">熊十力</td>
      <td style="text-align: left">1937</td>
      <td style="text-align: left">9.1</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E4%BD%9B%E5%AE%B6%E5%90%8D%E7%9B%B8%E9%80%9A%E9%87%8A">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《傅山的世界》</td>
      <td style="text-align: left">[USA] 白谦慎</td>
      <td style="text-align: left">2006</td>
      <td style="text-align: left">9.1</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E5%82%85%E5%B1%B1%E7%9A%84%E4%B8%96%E7%95%8C">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《伽利略传》</td>
      <td style="text-align: left">[Germany] 贝托尔特·布莱希特</td>
      <td style="text-align: left">1943</td>
      <td style="text-align: left">8.9</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E4%BC%BD%E5%88%A9%E7%95%A5%E4%BC%A0">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《关于他人的痛苦》</td>
      <td style="text-align: left">[USA] 苏珊·桑塔格</td>
      <td style="text-align: left">2003</td>
      <td style="text-align: left">8.5</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E5%85%B3%E4%BA%8E%E4%BB%96%E4%BA%BA%E7%9A%84%E7%97%9B%E8%8B%A6">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《观看之道》</td>
      <td style="text-align: left">[UK] 约翰·伯格</td>
      <td style="text-align: left">1972</td>
      <td style="text-align: left">8.5</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E8%A7%82%E7%9C%8B%E4%B9%8B%E9%81%93">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《汉字书法之美》</td>
      <td style="text-align: left">蒋勋</td>
      <td style="text-align: left">2009</td>
      <td style="text-align: left">8.5</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E6%B1%89%E5%AD%97%E4%B9%A6%E6%B3%95%E4%B9%8B%E7%BE%8E">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《汉字与文物的故事》</td>
      <td style="text-align: left">孙机</td>
      <td style="text-align: left">2021</td>
      <td style="text-align: left">9.2</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E6%B1%89%E5%AD%97%E4%B8%8E%E6%96%87%E7%89%A9%E7%9A%84%E6%95%85%E4%BA%8B">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《黑镜头》</td>
      <td style="text-align: left">[USA] 罗伯特·普雷基</td>
      <td style="text-align: left">2002</td>
      <td style="text-align: left">8.8</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E9%BB%91%E9%95%9C%E5%A4%B4">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《黄泉下的美术》</td>
      <td style="text-align: left">巫鸿</td>
      <td style="text-align: left">2005年</td>
      <td style="text-align: left">8.6</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E9%BB%84%E6%B3%89%E4%B8%8B%E7%9A%84%E7%BE%8E%E6%9C%AF">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《火车上的中国人》</td>
      <td style="text-align: left">王福春</td>
      <td style="text-align: left">2001年</td>
      <td style="text-align: left">8.8</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E7%81%AB%E8%BD%A6%E4%B8%8A%E7%9A%84%E4%B8%AD%E5%9B%BD%E4%BA%BA">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《基督教神学原理》</td>
      <td style="text-align: left">[美] 奥尔森</td>
      <td style="text-align: left">1992年</td>
      <td style="text-align: left">8.9</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E5%9F%BA%E7%9D%A3%E6%95%99%E7%A5%9E%E5%AD%A6%E5%8E%9F%E7%90%86">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《基督教要义》</td>
      <td style="text-align: left">[法] 约翰·加尔文</td>
      <td style="text-align: left">1536年</td>
      <td style="text-align: left">9.5</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E5%9F%BA%E7%9D%A3%E6%95%99%E8%A6%81%E4%B9%89">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《加德纳艺术通史》</td>
      <td style="text-align: left">[美] 弗雷德·S. 克莱纳</td>
      <td style="text-align: left">1926年</td>
      <td style="text-align: left">9.4</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E5%8A%A0%E5%BE%B7%E7%BA%B3%E8%89%BA%E6%9C%AF%E9%80%9A%E5%8F%B2">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《剑桥中国史》</td>
      <td style="text-align: left">[英] 费正清 等</td>
      <td style="text-align: left">1978年</td>
      <td style="text-align: left">9.4</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E5%89%91%E6%A1%A5%E4%B8%AD%E5%9B%BD%E5%8F%B2">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《咖啡厅、餐馆内景实例》</td>
      <td style="text-align: left">-</td>
      <td style="text-align: left">-</td>
      <td style="text-align: left">6.7</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E5%92%96%E5%95%A1%E5%8E%85%E3%80%81%E9%A4%90%E9%A6%86%E5%86%85%E6%99%AF%E5%AE%9E%E4%BE%8B">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《康德传》</td>
      <td style="text-align: left">[德] 曼弗雷德·库恩</td>
      <td style="text-align: left">2001年</td>
      <td style="text-align: left">9.1</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E5%BA%B7%E5%BE%B7%E4%BC%A0">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《旷野呼告》</td>
      <td style="text-align: left">[俄] 列夫·舍斯托夫</td>
      <td style="text-align: left">1936年</td>
      <td style="text-align: left">8.8</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E6%97%B7%E9%87%8E%E5%91%BC%E5%91%8A">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《拉丁美洲被切开的血管》</td>
      <td style="text-align: left">[乌拉圭] 爱德华多·加莱亚诺</td>
      <td style="text-align: left">1971年</td>
      <td style="text-align: left">9.3</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E6%8B%89%E4%B8%81%E7%BE%8E%E6%B4%B2%E8%A2%AB%E5%88%87%E5%BC%80%E7%9A%84%E8%A1%80%E7%AE%A1">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《蓝色血脉》</td>
      <td style="text-align: left">朱大可</td>
      <td style="text-align: left">1991年</td>
      <td style="text-align: left">8.1</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E8%93%9D%E8%89%B2%E8%A1%80%E8%84%89">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《劳特利奇哲学史》</td>
      <td style="text-align: left">G.H.R.帕金森 (主编)</td>
      <td style="text-align: left">1993年</td>
      <td style="text-align: left">9.3</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E5%8A%B3%E7%89%B9%E5%88%A9%E5%A5%87%E5%93%B2%E5%AD%A6%E5%8F%B2">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《理解一张照片》</td>
      <td style="text-align: left">[英] 约翰·伯格</td>
      <td style="text-align: left">2013年</td>
      <td style="text-align: left">8.3</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E7%90%86%E8%A7%A3%E4%B8%80%E5%BC%A0%E7%85%A7%E7%89%87">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《理想城市》</td>
      <td style="text-align: left">[美] 简·雅各布斯</td>
      <td style="text-align: left">1961年</td>
      <td style="text-align: left">9.4</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E7%90%86%E6%83%B3%E5%9F%8E%E5%B8%82">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《另一种讲述的方式》</td>
      <td style="text-align: left">[英] 约翰·伯格</td>
      <td style="text-align: left">1982年</td>
      <td style="text-align: left">8.8</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E5%8F%A6%E4%B8%80%E7%A7%8D%E8%AE%B2%E8%BF%B0%E7%9A%84%E6%96%B9%E5%BC%8F">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《伦理学》</td>
      <td style="text-align: left">[荷] 巴鲁赫·斯宾诺莎</td>
      <td style="text-align: left">1677年</td>
      <td style="text-align: left">9.2</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E4%BC%A6%E7%90%86%E5%AD%A6">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《论摄影》</td>
      <td style="text-align: left">[美] 苏珊·桑塔格</td>
      <td style="text-align: left">1977年</td>
      <td style="text-align: left">8.7</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E8%AE%BA%E6%91%84%E5%BD%B1">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《毛以后的中国》</td>
      <td style="text-align: left">[美] 罗德里克·麦克法夸尔</td>
      <td style="text-align: left">2008年</td>
      <td style="text-align: left">9.3</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E6%AF%9B%E4%BB%A5%E5%90%8E%E7%9A%84%E4%B8%AD%E5%9B%BD">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《美术、神话与祭祀》</td>
      <td style="text-align: left">张光直</td>
      <td style="text-align: left">1988年</td>
      <td style="text-align: left">9.0</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E7%BE%8E%E6%9C%AF%E3%80%81%E7%A5%9E%E8%AF%9D%E4%B8%8E%E7%A5%AD%E7%A5%80">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《明朝那些事儿》</td>
      <td style="text-align: left">当年明月</td>
      <td style="text-align: left">2006年</td>
      <td style="text-align: left">9.2</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E6%98%8E%E6%9C%9D%E9%82%A3%E4%BA%9B%E4%BA%8B%E5%84%BF">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《墨庄漫录》</td>
      <td style="text-align: left">[宋] 张邦基</td>
      <td style="text-align: left">南宋</td>
      <td style="text-align: left">8.6</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E5%A2%A8%E5%BA%84%E6%BC%AB%E5%BD%95">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《纽约摄影学院摄影教材》</td>
      <td style="text-align: left">[美] Don Sheff</td>
      <td style="text-align: left">1970年</td>
      <td style="text-align: left">8.7</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E7%BA%BD%E7%BA%A6%E6%91%84%E5%BD%B1%E5%AD%A6%E9%99%A2%E6%91%84%E5%BD%B1%E6%95%99%E6%9D%90">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《欧洲大学史》</td>
      <td style="text-align: left">[法] 克里斯托夫·夏尔勒</td>
      <td style="text-align: left">2002年</td>
      <td style="text-align: left">8.3</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E6%AC%A7%E6%B4%B2%E5%A4%A7%E5%AD%A6%E5%8F%B2">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《破〈破新唯识论〉》</td>
      <td style="text-align: left">熊十力</td>
      <td style="text-align: left">1923年</td>
      <td style="text-align: left">8.6</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E7%A0%B4%E3%80%88%E7%A0%B4%E6%96%B0%E5%94%AF%E8%AF%86%E8%AE%BA%E3%80%89">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《囚徒的困境》</td>
      <td style="text-align: left">[美] 威廉·庞德斯通</td>
      <td style="text-align: left">1992年</td>
      <td style="text-align: left">8.4</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E5%9B%9A%E5%BE%92%E7%9A%84%E5%9B%B0%E5%A2%83">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《让房子与你的灵魂契合》</td>
      <td style="text-align: left">[美] 克莱尔·库珀·马库斯</td>
      <td style="text-align: left">1995年</td>
      <td style="text-align: left">8.0</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E8%AE%A9%E6%88%BF%E5%AD%90%E4%B8%8E%E4%BD%A0%E7%9A%84%E7%81%B5%E9%AD%82%E5%A5%91%E5%90%88">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《人类简史》</td>
      <td style="text-align: left">[以色列] 尤瓦尔·赫拉利</td>
      <td style="text-align: left">2011年</td>
      <td style="text-align: left">9.1</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E4%BA%BA%E7%B1%BB%E7%AE%80%E5%8F%B2">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《如何建造美好家园》</td>
      <td style="text-align: left">[英] 约翰·布鲁克斯</td>
      <td style="text-align: left">1984年</td>
      <td style="text-align: left">8.6</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E5%A6%82%E4%BD%95%E5%BB%BA%E9%80%A0%E7%BE%8E%E5%A5%BD%E5%AE%B6%E5%9B%AD">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《撒马尔罕的金桃》</td>
      <td style="text-align: left">[美] 薛爱华</td>
      <td style="text-align: left">1963年</td>
      <td style="text-align: left">9.2</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E6%92%92%E9%A9%AC%E5%B0%94%E7%BD%95%E7%9A%84%E9%87%91%E6%A1%83">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《僧侣与哲学家》</td>
      <td style="text-align: left">[法] 让-弗朗索瓦·勒维尔</td>
      <td style="text-align: left">1997年</td>
      <td style="text-align: left">8.5</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E5%83%A7%E4%BE%A3%E4%B8%8E%E5%93%B2%E5%AD%A6%E5%AE%B6">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《送法下乡》</td>
      <td style="text-align: left">苏力</td>
      <td style="text-align: left">2000年</td>
      <td style="text-align: left">8.7</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E9%80%81%E6%B3%95%E4%B8%8B%E4%B9%A1">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《山川悠远》</td>
      <td style="text-align: left">方闻</td>
      <td style="text-align: left">2004年</td>
      <td style="text-align: left">8.5</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E5%B1%B1%E5%B7%9D%E6%82%A0%E8%BF%9C">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《设计中的设计》</td>
      <td style="text-align: left">[日] 原研哉</td>
      <td style="text-align: left">2003年</td>
      <td style="text-align: left">8.5</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E8%AE%BE%E8%AE%A1%E4%B8%AD%E7%9A%84%E8%AE%BE%E8%AE%A1">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《摄影哲学的思考》</td>
      <td style="text-align: left">[捷] 维兰·傅拉瑟</td>
      <td style="text-align: left">1983年</td>
      <td style="text-align: left">8.5</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E6%91%84%E5%BD%B1%E5%93%B2%E5%AD%A6%E7%9A%84%E6%80%9D%E8%80%83">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《身体·性别·摄影》</td>
      <td style="text-align: left">[日] 笠原美智子</td>
      <td style="text-align: left">2003年</td>
      <td style="text-align: left">8.0</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E8%BA%AB%E4%BD%93%C2%B7%E6%80%A7%E5%88%AB%C2%B7%E6%91%84%E5%BD%B1">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《神话学》</td>
      <td style="text-align: left">[法] 罗兰·巴特</td>
      <td style="text-align: left">1957年</td>
      <td style="text-align: left">8.4</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E7%A5%9E%E8%AF%9D%E5%AD%A6">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《生活与命运》</td>
      <td style="text-align: left">[苏] 瓦西里·格罗斯曼</td>
      <td style="text-align: left">1980年</td>
      <td style="text-align: left">9.6</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E7%94%9F%E6%B4%BB%E4%B8%8E%E5%91%BD%E8%BF%90">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《圣经·旧约》</td>
      <td style="text-align: left">摩西 等</td>
      <td style="text-align: left">公元前13世纪-前2世纪</td>
      <td style="text-align: left">9.2</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E5%9C%A3%E7%BB%8F%C2%B7%E6%97%A7%E7%BA%A6">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《圣经·新约》</td>
      <td style="text-align: left">马太 / 马可 / 路加 等</td>
      <td style="text-align: left">公元1世纪</td>
      <td style="text-align: left">9.2</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E5%9C%A3%E7%BB%8F%C2%B7%E6%96%B0%E7%BA%A6">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《世界摄影史》</td>
      <td style="text-align: left">[美] 内奥米·罗森布拉姆</td>
      <td style="text-align: left">1984年</td>
      <td style="text-align: left">8.8</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E4%B8%96%E7%95%8C%E6%91%84%E5%BD%B1%E5%8F%B2">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《世界摄影艺术史》</td>
      <td style="text-align: left">[法] 安德烈·胡耶</td>
      <td style="text-align: left">2005年</td>
      <td style="text-align: left">8.3</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E4%B8%96%E7%95%8C%E6%91%84%E5%BD%B1%E8%89%BA%E6%9C%AF%E5%8F%B2">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《世界通史》</td>
      <td style="text-align: left">[美] 斯塔夫里阿诺斯</td>
      <td style="text-align: left">1970年</td>
      <td style="text-align: left">9.1</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E4%B8%96%E7%95%8C%E9%80%9A%E5%8F%B2">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《市井西仓》</td>
      <td style="text-align: left">胡武功</td>
      <td style="text-align: left">2006年</td>
      <td style="text-align: left">8.1</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E5%B8%82%E4%BA%95%E8%A5%BF%E4%BB%93">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《私人生活史》</td>
      <td style="text-align: left">[法] 菲利普·阿里埃斯 等</td>
      <td style="text-align: left">1985年</td>
      <td style="text-align: left">8.7</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E7%A7%81%E4%BA%BA%E7%94%9F%E6%B4%BB%E5%8F%B2">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《斯宾诺莎导读》</td>
      <td style="text-align: left">[美] 史蒂文·纳德勒</td>
      <td style="text-align: left">2006年</td>
      <td style="text-align: left">8.7</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E6%96%AF%E5%AE%BE%E8%AF%BA%E8%8E%8E%E5%AF%BC%E8%AF%BB">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《四库全书》</td>
      <td style="text-align: left">[清] 纪昀 等</td>
      <td style="text-align: left">1782年</td>
      <td style="text-align: left">9.9</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E5%9B%9B%E5%BA%93%E5%85%A8%E4%B9%A6">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《俗世威尔》</td>
      <td style="text-align: left">[英] 特里·伊格尔顿</td>
      <td style="text-align: left">2008年</td>
      <td style="text-align: left">8.5</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E4%BF%97%E4%B8%96%E5%A8%81%E5%B0%94">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《涑水记闻》</td>
      <td style="text-align: left">[宋] 司马光</td>
      <td style="text-align: left">北宋</td>
      <td style="text-align: left">8.7</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E6%BA%A3%E6%B0%B4%E8%AE%B0%E9%97%BB">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《太平御览》</td>
      <td style="text-align: left">[宋] 李昉 等</td>
      <td style="text-align: left">983年</td>
      <td style="text-align: left">9.8</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E5%A4%AA%E5%B9%B3%E5%BE%A1%E8%A7%88">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《天真的人类学家》</td>
      <td style="text-align: left">[英] 奈吉尔·巴利</td>
      <td style="text-align: left">1983年</td>
      <td style="text-align: left">8.4</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E5%A4%A9%E7%9C%9F%E7%9A%84%E4%BA%BA%E7%B1%BB%E5%AD%A6%E5%AE%B6">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《同性恋亚文化》</td>
      <td style="text-align: left">李银河 / 王小波</td>
      <td style="text-align: left">1998年</td>
      <td style="text-align: left">8.5</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E5%90%8C%E6%80%A7%E6%81%8B%E4%BA%9A%E6%96%87%E5%8C%96">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《图书馆入门》</td>
      <td style="text-align: left">[日] 若松英辅</td>
      <td style="text-align: left">2013年</td>
      <td style="text-align: left">8.1</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E5%9B%BE%E4%B9%A6%E9%A6%86%E5%85%A5%E9%97%A8">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《完美店铺设计指南》</td>
      <td style="text-align: left">-</td>
      <td style="text-align: left">-</td>
      <td style="text-align: left">7.0</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E5%AE%8C%E7%BE%8E%E5%BA%97%E9%93%BA%E8%AE%BE%E8%AE%A1%E6%8C%87%E5%8D%97">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《唯识二十论》</td>
      <td style="text-align: left">[古印度] 世亲</td>
      <td style="text-align: left">约4世纪</td>
      <td style="text-align: left">9.2</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E5%94%AF%E8%AF%86%E4%BA%8C%E5%8D%81%E8%AE%BA">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《为什么我不是基督教徒》</td>
      <td style="text-align: left">[英] 伯特兰·罗素</td>
      <td style="text-align: left">1927年</td>
      <td style="text-align: left">8.7</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E4%B8%BA%E4%BB%80%E4%B9%88%E6%88%91%E4%B8%8D%E6%98%AF%E5%9F%BA%E7%9D%A3%E6%95%99%E5%BE%92">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《未来简史》</td>
      <td style="text-align: left">[以色列] 尤瓦尔·赫拉利</td>
      <td style="text-align: left">2015年</td>
      <td style="text-align: left">8.4</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E6%9C%AA%E6%9D%A5%E7%AE%80%E5%8F%B2">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《文字的力与美》</td>
      <td style="text-align: left">[日] 杉浦康平</td>
      <td style="text-align: left">2002年</td>
      <td style="text-align: left">8.7</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E6%96%87%E5%AD%97%E7%9A%84%E5%8A%9B%E4%B8%8E%E7%BE%8E">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《无知的教师》</td>
      <td style="text-align: left">[法] 雅克·朗西埃</td>
      <td style="text-align: left">1987年</td>
      <td style="text-align: left">8.5</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E6%97%A0%E7%9F%A5%E7%9A%84%E6%95%99%E5%B8%88">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《乡土中国》</td>
      <td style="text-align: left">费孝通</td>
      <td style="text-align: left">1947年</td>
      <td style="text-align: left">9.3</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E4%B9%A1%E5%9C%9F%E4%B8%AD%E5%9B%BD">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《湘山野录》</td>
      <td style="text-align: left">[宋] 释文莹</td>
      <td style="text-align: left">北宋</td>
      <td style="text-align: left">8.2</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E6%B9%98%E5%B1%B1%E9%87%8E%E5%BD%95">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《新教伦理与资本主义精神》</td>
      <td style="text-align: left">[德] 马克斯·韦伯</td>
      <td style="text-align: left">1905年</td>
      <td style="text-align: left">8.9</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E6%96%B0%E6%95%99%E4%BC%A6%E7%90%86%E4%B8%8E%E8%B5%84%E6%9C%AC%E4%B8%BB%E4%B9%89%E7%B2%BE%E7%A5%9E">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《新唯识论》</td>
      <td style="text-align: left">熊十力</td>
      <td style="text-align: left">1932年</td>
      <td style="text-align: left">9.1</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E6%96%B0%E5%94%AF%E8%AF%86%E8%AE%BA">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《新游牧民》</td>
      <td style="text-align: left">[日] 四方田犬彦</td>
      <td style="text-align: left">2002年</td>
      <td style="text-align: left">7.9</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E6%96%B0%E6%B8%B8%E7%89%A7%E6%B0%91">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《幸运者》</td>
      <td style="text-align: left">[英] 约翰·伯格</td>
      <td style="text-align: left">1967年</td>
      <td style="text-align: left">8.8</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E5%B9%B8%E8%BF%90%E8%80%85">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《修剪菩提树》</td>
      <td style="text-align: left">[美] 唐纳德·S.洛佩兹</td>
      <td style="text-align: left">1995年</td>
      <td style="text-align: left">8.7</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E4%BF%AE%E5%89%AA%E8%8F%A9%E6%8F%90%E6%A0%91">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《雅典与耶路撒冷》</td>
      <td style="text-align: left">[俄] 列夫·舍斯托夫</td>
      <td style="text-align: left">1938年</td>
      <td style="text-align: left">9.1</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E9%9B%85%E5%85%B8%E4%B8%8E%E8%80%B6%E8%B7%AF%E6%92%92%E5%86%B7">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《艺术哲学》</td>
      <td style="text-align: left">[法] 丹纳</td>
      <td style="text-align: left">1865年</td>
      <td style="text-align: left">9.1</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E8%89%BA%E6%9C%AF%E5%93%B2%E5%AD%A6">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《隐士建筑》</td>
      <td style="text-align: left">[日] 中村好文</td>
      <td style="text-align: left">2011年</td>
      <td style="text-align: left">8.6</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E9%9A%90%E5%A3%AB%E5%BB%BA%E7%AD%91">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《永字八法》</td>
      <td style="text-align: left">佚名</td>
      <td style="text-align: left">唐代</td>
      <td style="text-align: left">8.3</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E6%B0%B8%E5%AD%97%E5%85%AB%E6%B3%95">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《犹太教》</td>
      <td style="text-align: left">[英] 诺曼·所罗门</td>
      <td style="text-align: left">1996年</td>
      <td style="text-align: left">8.3</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E7%8A%B9%E5%A4%AA%E6%95%99">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《与古为徒和娟娟发屋》</td>
      <td style="text-align: left">巫鸿</td>
      <td style="text-align: left">2005年</td>
      <td style="text-align: left">9.0</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E4%B8%8E%E5%8F%A4%E4%B8%BA%E5%BE%92%E5%92%8C%E5%A8%9F%E5%A8%9F%E5%8F%91%E5%B1%8B">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《与小泽征尔共度的午后音乐时光》</td>
      <td style="text-align: left">[日] 村上春树 / 小泽征尔</td>
      <td style="text-align: left">2011年</td>
      <td style="text-align: left">8.7</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E4%B8%8E%E5%B0%8F%E6%B3%BD%E5%BE%81%E5%B0%94%E5%85%B1%E5%BA%A6%E7%9A%84%E5%8D%88%E5%90%8E%E9%9F%B3%E4%B9%90%E6%97%B6%E5%85%89">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《造型的诞生》</td>
      <td style="text-align: left">[日] 杉浦康平</td>
      <td style="text-align: left">1999年</td>
      <td style="text-align: left">9.1</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E9%80%A0%E5%9E%8B%E7%9A%84%E8%AF%9E%E7%94%9F">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《怎样阅读照片》</td>
      <td style="text-align: left">[英] 伊安·杰夫里</td>
      <td style="text-align: left">1981年</td>
      <td style="text-align: left">8.4</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E6%80%8E%E6%A0%B7%E9%98%85%E8%AF%BB%E7%85%A7%E7%89%87">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《詹森艺术史》</td>
      <td style="text-align: left">[美] H.W. 詹森</td>
      <td style="text-align: left">1962年</td>
      <td style="text-align: left">9.4</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E8%A9%B9%E6%A3%AE%E8%89%BA%E6%9C%AF%E5%8F%B2">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《正面管教》</td>
      <td style="text-align: left">[美] 简·尼尔森</td>
      <td style="text-align: left">1981年</td>
      <td style="text-align: left">8.4</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E6%AD%A3%E9%9D%A2%E7%AE%A1%E6%95%99">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《知日》</td>
      <td style="text-align: left">苏静 (主编)</td>
      <td style="text-align: left">2011年</td>
      <td style="text-align: left">7.5</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E7%9F%A5%E6%97%A5">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《直角之诗》</td>
      <td style="text-align: left">[法] 勒·柯布西耶</td>
      <td style="text-align: left">1955年</td>
      <td style="text-align: left">8.9</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E7%9B%B4%E8%A7%92%E4%B9%8B%E8%AF%97">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《纸上纪录片》</td>
      <td style="text-align: left">崔永元 (主编)</td>
      <td style="text-align: left">2002年</td>
      <td style="text-align: left">8.7</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E7%BA%B8%E4%B8%8A%E7%BA%AA%E5%BD%95%E7%89%87">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《中国碑帖名品》</td>
      <td style="text-align: left">-</td>
      <td style="text-align: left">-</td>
      <td style="text-align: left">9.2</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E4%B8%AD%E5%9B%BD%E7%A2%91%E5%B8%96%E5%90%8D%E5%93%81">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《中国摄影史》</td>
      <td style="text-align: left">陈申 / 徐希景</td>
      <td style="text-align: left">1987年</td>
      <td style="text-align: left">8.4</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E4%B8%AD%E5%9B%BD%E6%91%84%E5%BD%B1%E5%8F%B2">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《中国照相馆史》</td>
      <td style="text-align: left">[美] 泰瑞·贝内特</td>
      <td style="text-align: left">2013年</td>
      <td style="text-align: left">8.9</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E4%B8%AD%E5%9B%BD%E7%85%A7%E7%9B%B8%E9%A6%86%E5%8F%B2">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《宗教生活的基本形式》</td>
      <td style="text-align: left">[法] 埃米尔·涂尔干</td>
      <td style="text-align: left">1912年</td>
      <td style="text-align: left">9.0</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E5%AE%97%E6%95%99%E7%94%9F%E6%B4%BB%E7%9A%84%E5%9F%BA%E6%9C%AC%E5%BD%A2%E5%BC%8F">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《走向新建筑》</td>
      <td style="text-align: left">[法] 勒·柯布西耶</td>
      <td style="text-align: left">1923年</td>
      <td style="text-align: left">8.6</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E8%B5%B0%E5%90%91%E6%96%B0%E5%BB%BA%E7%AD%91">链接</a></td>
    </tr>
  </tbody>
</table>

<h3 id="自然科学">自然科学</h3>

<table>
  <thead>
    <tr>
      <th style="text-align: left">书名</th>
      <th style="text-align: left">作者</th>
      <th style="text-align: left">出版年份</th>
      <th style="text-align: left">豆瓣评分</th>
      <th style="text-align: left">豆瓣链接</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td style="text-align: left">《别闹了，费曼先生》</td>
      <td style="text-align: left">[美] 理查德·费曼</td>
      <td style="text-align: left">1985年</td>
      <td style="text-align: left">9.3</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E5%88%AB%E9%97%B9%E4%BA%86%EF%BC%8C%E8%B4%B9%E6%9B%BC%E5%85%88%E7%94%9F">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《城市自然故事》</td>
      <td style="text-align: left">张瑜</td>
      <td style="text-align: left">2021年</td>
      <td style="text-align: left">8.9</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E5%9F%8E%E5%B8%82%E8%87%AA%E7%84%B6%E6%95%85%E4%BA%8B">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《从一到无穷大》</td>
      <td style="text-align: left">[美] G. 伽莫夫</td>
      <td style="text-align: left">1947年</td>
      <td style="text-align: left">9.2</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E4%BB%8E%E4%B8%80%E5%88%B0%E6%97%A0%E7%A9%B7%E5%A4%A7">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《地球编年史》</td>
      <td style="text-align: left">[美] 撒迦利亚·西琴</td>
      <td style="text-align: left">1976年</td>
      <td style="text-align: left">8.1</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E5%9C%B0%E7%90%83%E7%BC%96%E5%B9%B4%E5%8F%B2">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《第三种黑猩猩》</td>
      <td style="text-align: left">[美] 贾雷德·戴蒙德</td>
      <td style="text-align: left">1991年</td>
      <td style="text-align: left">8.5</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E7%AC%AC%E4%B8%89%E7%A7%8D%E9%BB%91%E7%8C%A9%E7%8C%A9">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《哥德尔、艾舍尔、巴赫》</td>
      <td style="text-align: left">[美] 侯世达</td>
      <td style="text-align: left">1979年</td>
      <td style="text-align: left">9.4</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E5%93%A5%E5%BE%B7%E5%B0%94%E3%80%81%E8%89%BE%E8%88%8D%E5%B0%94%E3%80%81%E5%B7%B4%E8%B5%AB">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《给忙碌者的天体物理学》</td>
      <td style="text-align: left">[美] 奈尔·德葛拉司·泰森</td>
      <td style="text-align: left">2017年</td>
      <td style="text-align: left">8.6</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E7%BB%99%E5%BF%99%E7%A2%8C%E8%80%85%E7%9A%84%E5%A4%A9%E4%BD%93%E7%89%A9%E7%90%86%E5%AD%A6">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《给青年科学家的信》</td>
      <td style="text-align: left">[美] 爱德华·威尔逊</td>
      <td style="text-align: left">2013年</td>
      <td style="text-align: left">8.4</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E7%BB%99%E9%9D%92%E5%B9%B4%E7%A7%91%E5%AD%A6%E5%AE%B6%E7%9A%84%E4%BF%A1">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《果壳中的宇宙》</td>
      <td style="text-align: left">[英] 斯蒂芬·霍金</td>
      <td style="text-align: left">2001年</td>
      <td style="text-align: left">9.0</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E6%9E%9C%E5%A3%B3%E4%B8%AD%E7%9A%84%E5%AE%87%E5%AE%99">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《剑桥科学史》</td>
      <td style="text-align: left">[英] 科林·A.罗南</td>
      <td style="text-align: left">1983年</td>
      <td style="text-align: left">8.9</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E5%89%91%E6%A1%A5%E7%A7%91%E5%AD%A6%E5%8F%B2">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《科学的历程》</td>
      <td style="text-align: left">吴国盛</td>
      <td style="text-align: left">1995年</td>
      <td style="text-align: left">9.1</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E7%A7%91%E5%AD%A6%E7%9A%84%E5%8E%86%E7%A8%8B">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《盲眼钟表匠》</td>
      <td style="text-align: left">[英] 理查德·道金斯</td>
      <td style="text-align: left">1986年</td>
      <td style="text-align: left">9.0</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E7%9B%B2%E7%9C%BC%E9%92%9F%E8%A1%A8%E5%8C%A0">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《上帝掷骰子吗？》</td>
      <td style="text-align: left">曹天元</td>
      <td style="text-align: left">2006年</td>
      <td style="text-align: left">9.3</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E4%B8%8A%E5%B8%9D%E6%8E%B7%E9%AA%B0%E5%AD%90%E5%90%97%EF%BC%9F">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《什么是科学》</td>
      <td style="text-align: left">吴国盛</td>
      <td style="text-align: left">2016年</td>
      <td style="text-align: left">8.6</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E4%BB%80%E4%B9%88%E6%98%AF%E7%A7%91%E5%AD%A6">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《实验室女孩》</td>
      <td style="text-align: left">[美] 霍普·洁伦</td>
      <td style="text-align: left">2016年</td>
      <td style="text-align: left">8.6</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E5%AE%9E%E9%AA%8C%E5%AE%A4%E5%A5%B3%E5%AD%A9">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《贪婪的多巴胺》</td>
      <td style="text-align: left">[美] 丹尼尔·利伯曼 等</td>
      <td style="text-align: left">2018年</td>
      <td style="text-align: left">7.9</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E8%B4%AA%E5%A9%AA%E7%9A%84%E5%A4%9A%E5%B7%B4%E8%83%BA">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《物理世界奇遇记》</td>
      <td style="text-align: left">[美] G. 伽莫夫</td>
      <td style="text-align: left">1940年</td>
      <td style="text-align: left">9.1</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E7%89%A9%E7%90%86%E4%B8%96%E7%95%8C%E5%A5%87%E9%81%87%E8%AE%B0">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《现实不似你所见》</td>
      <td style="text-align: left">[意] 卡洛·罗韦利</td>
      <td style="text-align: left">2014年</td>
      <td style="text-align: left">8.9</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E7%8E%B0%E5%AE%9E%E4%B8%8D%E4%BC%BC%E4%BD%A0%E6%89%80%E8%A7%81">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《园丁的一年》</td>
      <td style="text-align: left">[捷克] 卡雷尔·恰佩克</td>
      <td style="text-align: left">1929年</td>
      <td style="text-align: left">8.7</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E5%9B%AD%E4%B8%81%E7%9A%84%E4%B8%80%E5%B9%B4">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《云彩收集者手册》</td>
      <td style="text-align: left">[英] 加文·弗雷特-平尼</td>
      <td style="text-align: left">2006年</td>
      <td style="text-align: left">8.0</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E4%BA%91%E5%BD%A9%E6%94%B6%E9%9B%86%E8%80%85%E6%89%8B%E5%86%8C">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《杂草的故事》</td>
      <td style="text-align: left">[英] 理查德·梅比</td>
      <td style="text-align: left">2012年</td>
      <td style="text-align: left">8.8</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E6%9D%82%E8%8D%89%E7%9A%84%E6%95%85%E4%BA%8B">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《怎样观察一棵树》</td>
      <td style="text-align: left">[美] 南希·罗斯·哈格</td>
      <td style="text-align: left">2005年</td>
      <td style="text-align: left">8.5</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E6%80%8E%E6%A0%B7%E8%A7%82%E5%AF%9F%E4%B8%80%E6%A3%B5%E6%A0%91">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《这里是中国》</td>
      <td style="text-align: left">星球研究所 / 中国青藏高原研究会</td>
      <td style="text-align: left">2018年</td>
      <td style="text-align: left">9.3</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E8%BF%99%E9%87%8C%E6%98%AF%E4%B8%AD%E5%9B%BD">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《自私的基因》</td>
      <td style="text-align: left">[英] 理查德·道金斯</td>
      <td style="text-align: left">1976年</td>
      <td style="text-align: left">8.9</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E8%87%AA%E7%A7%81%E7%9A%84%E5%9F%BA%E5%9B%A0">链接</a></td>
    </tr>
  </tbody>
</table>

<h3 id="其他系列书">其他系列书</h3>

<table>
  <thead>
    <tr>
      <th style="text-align: left">书名</th>
      <th style="text-align: left">作者</th>
      <th style="text-align: left">出版年份</th>
      <th style="text-align: left">豆瓣评分</th>
      <th style="text-align: left">豆瓣链接</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td style="text-align: left">《中国在梁庄》(“梁庄”系列)</td>
      <td style="text-align: left">梁鸿</td>
      <td style="text-align: left">2010年</td>
      <td style="text-align: left">8.9</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E4%B8%AD%E5%9B%BD%E5%9C%A8%E6%A2%81%E5%BA%84">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《玛格南世纪》(“玛格南”系列)</td>
      <td style="text-align: left">玛格南图片社</td>
      <td style="text-align: left">1999年</td>
      <td style="text-align: left">9.4</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E7%8E%9B%E6%A0%BC%E5%8D%97%E4%B8%96%E7%BA%AA">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">“牛津树”系列</td>
      <td style="text-align: left">[英] Roderick Hunt 等</td>
      <td style="text-align: left">1986年</td>
      <td style="text-align: left">9.7</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E7%89%9B%E6%B4%A5%E6%A0%91">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">“培生”系列</td>
      <td style="text-align: left">培生教育集团</td>
      <td style="text-align: left">-</td>
      <td style="text-align: left">9.1</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E5%9F%B9%E7%94%9F">链接</a></td>
    </tr>
    <tr>
      <td style="text-align: left">《失落的一代》(“中国纪实三部曲”)</td>
      <td style="text-align: left">[法] 潘鸣啸</td>
      <td style="text-align: left">1994年</td>
      <td style="text-align: left">9.2</td>
      <td style="text-align: left"><a href="https://www.google.com/search?q=site%3Adouban.com+%E5%A4%B1%E8%90%BD%E7%9A%84%E4%B8%80%E4%BB%A3">链接</a></td>
    </tr>
  </tbody>
</table>
]]></content>
  </entry>
  
  <entry>
    <title type="html"><![CDATA[《纳瓦尔宝典》推荐阅读]]></title>
    <link href="https://wangyi.ai/blog/2025/07/04/na-wa-er-bao-dian-tui-jian-yue-du/"/>
    <updated>2025-07-04T15:00:00-07:00</updated>
    <id>https://wangyi.ai/blog/2025/07/04/na-wa-er-bao-dian-tui-jian-yue-du</id>
    <content type="html"><![CDATA[<p>纳瓦尔·拉维坎特（Naval Ravikant）在《纳瓦尔宝典》中不仅分享了他关于财富和幸福的智慧，还推荐了大量影响他思维的优质书籍和博客。这些推荐读物构成了一个完整的知识体系，涵盖科学、哲学、商业、灵修等多个领域。</p>

<!-- more -->

<h1 id="纳瓦尔宝典提及书籍与博客索引含博客链接">《纳瓦尔宝典》提及书籍与博客索引（含博客链接）</h1>

<p>以下列表依照在《The Almanack of Naval Ravikant》中首次出现顺序整理，并补充中文译名及 Naval 的一句话点评。博客及博文已附可点击链接。</p>

<table>
  <thead>
    <tr>
      <th style="text-align: right">序</th>
      <th style="text-align: left">英文原名（含链接）</th>
      <th style="text-align: left">中文译名</th>
      <th style="text-align: left">类型</th>
      <th style="text-align: left">Naval 一句点评</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td style="text-align: right">1</td>
      <td style="text-align: left">The Beginning of Infinity</td>
      <td style="text-align: left">无穷的开始：世界进步的本源</td>
      <td style="text-align: left">书籍</td>
      <td style="text-align: left">不算易读，却真正把我读聪明了。</td>
    </tr>
    <tr>
      <td style="text-align: right">2</td>
      <td style="text-align: left">Sapiens: A Brief History of Humankind</td>
      <td style="text-align: left">人类简史：从动物到上帝</td>
      <td style="text-align: left">书籍</td>
      <td style="text-align: left">近十年读过的最佳著作，洞见满页。</td>
    </tr>
    <tr>
      <td style="text-align: right">3</td>
      <td style="text-align: left">The Rational Optimist</td>
      <td style="text-align: left">理性乐观派：人类经济进步史</td>
      <td style="text-align: left">书籍</td>
      <td style="text-align: left">多年里最睿智、最启发我的一本书。</td>
    </tr>
    <tr>
      <td style="text-align: right">4</td>
      <td style="text-align: left">Genome</td>
      <td style="text-align: left">基因组：人类自传23章</td>
      <td style="text-align: left">书籍</td>
      <td style="text-align: left">Ridley 的其他作品，我全读且反复读。</td>
    </tr>
    <tr>
      <td style="text-align: right">5</td>
      <td style="text-align: left">The Red Queen</td>
      <td style="text-align: left">红皇后：性与人类进化</td>
      <td style="text-align: left">书籍</td>
      <td style="text-align: left">Ridley 必读之作之一。</td>
    </tr>
    <tr>
      <td style="text-align: right">6</td>
      <td style="text-align: left">The Origins of Virtue</td>
      <td style="text-align: left">美德的起源</td>
      <td style="text-align: left">书籍</td>
      <td style="text-align: left">Ridley 探讨合作本能的佳作。</td>
    </tr>
    <tr>
      <td style="text-align: right">7</td>
      <td style="text-align: left">The Evolution of Everything</td>
      <td style="text-align: left">万物演化</td>
      <td style="text-align: left">书籍</td>
      <td style="text-align: left">解释新思想如何诞生的前瞻之书。</td>
    </tr>
    <tr>
      <td style="text-align: right">8</td>
      <td style="text-align: left">Skin in the Game</td>
      <td style="text-align: left">非对称风险</td>
      <td style="text-align: left">书籍</td>
      <td style="text-align: left">2018 年最佳读物之一，商业模型极佳。</td>
    </tr>
    <tr>
      <td style="text-align: right">9</td>
      <td style="text-align: left">The Bed of Procrustes</td>
      <td style="text-align: left">暂无中文版</td>
      <td style="text-align: left">书籍</td>
      <td style="text-align: left">Taleb 的古典智慧箴言集。</td>
    </tr>
    <tr>
      <td style="text-align: right">10</td>
      <td style="text-align: left">The Black Swan</td>
      <td style="text-align: left">黑天鹅</td>
      <td style="text-align: left">书籍</td>
      <td style="text-align: left">Taleb 另一部必读之作。</td>
    </tr>
    <tr>
      <td style="text-align: right">11</td>
      <td style="text-align: left">Antifragile</td>
      <td style="text-align: left">反脆弱</td>
      <td style="text-align: left">书籍</td>
      <td style="text-align: left">Taleb 另一部必读之作。</td>
    </tr>
    <tr>
      <td style="text-align: right">12</td>
      <td style="text-align: left">Fooled by Randomness</td>
      <td style="text-align: left">随机漫步的傻瓜</td>
      <td style="text-align: left">书籍</td>
      <td style="text-align: left">Taleb 另一部必读之作。</td>
    </tr>
    <tr>
      <td style="text-align: right">13</td>
      <td style="text-align: left">Six Easy Pieces</td>
      <td style="text-align: left">费曼物理学讲义·六篇轻松小品</td>
      <td style="text-align: left">书籍</td>
      <td style="text-align: left">我会送给孩子的物理入门书。</td>
    </tr>
    <tr>
      <td style="text-align: right">14</td>
      <td style="text-align: left">Six Not-So-Easy Pieces</td>
      <td style="text-align: left">费曼物理学讲义·六篇不太轻松小品</td>
      <td style="text-align: left">书籍</td>
      <td style="text-align: left">与上册并读收获更大。</td>
    </tr>
    <tr>
      <td style="text-align: right">15</td>
      <td style="text-align: left">Perfectly Reasonable Deviations…</td>
      <td style="text-align: left">合理的偏差：费曼书信集</td>
      <td style="text-align: left">书籍</td>
      <td style="text-align: left">展示费曼思考魅力的书信精选。</td>
    </tr>
    <tr>
      <td style="text-align: right">16</td>
      <td style="text-align: left">Genius: The Life and Science of Richard Feynman</td>
      <td style="text-align: left">天才：理查德·费曼的一生</td>
      <td style="text-align: left">书籍</td>
      <td style="text-align: left">费曼传记，值得再三回味。</td>
    </tr>
    <tr>
      <td style="text-align: right">17</td>
      <td style="text-align: left">Thing Explainer</td>
      <td style="text-align: left">万物解释者</td>
      <td style="text-align: left">书籍</td>
      <td style="text-align: left">仅用一千个常用词解释复杂世界，妙不可言。</td>
    </tr>
    <tr>
      <td style="text-align: right">18</td>
      <td style="text-align: left">Thinking Physics</td>
      <td style="text-align: left">思考物理</td>
      <td style="text-align: left">书籍</td>
      <td style="text-align: left">小学到研究生都能悟到物理真义。</td>
    </tr>
    <tr>
      <td style="text-align: right">19</td>
      <td style="text-align: left">The Lessons of History</td>
      <td style="text-align: left">历史的教训</td>
      <td style="text-align: left">书籍</td>
      <td style="text-align: left">短小却犀利，概括宏大历史主题。</td>
    </tr>
    <tr>
      <td style="text-align: right">20</td>
      <td style="text-align: left">The Sovereign Individual</td>
      <td style="text-align: left">主权个人</td>
      <td style="text-align: left">书籍</td>
      <td style="text-align: left">自《人类简史》以来最打动我的书。</td>
    </tr>
    <tr>
      <td style="text-align: right">21</td>
      <td style="text-align: left">Poor Charlie’s Almanack</td>
      <td style="text-align: left">穷查理宝典</td>
      <td style="text-align: left">书籍</td>
      <td style="text-align: left">芒格之道的最全面记录。</td>
    </tr>
    <tr>
      <td style="text-align: right">22</td>
      <td style="text-align: left">Reality Is Not What It Seems</td>
      <td style="text-align: left">现实不似你所见</td>
      <td style="text-align: left">书籍</td>
      <td style="text-align: left">现代物理的诗意科普。</td>
    </tr>
    <tr>
      <td style="text-align: right">23</td>
      <td style="text-align: left">Seven Brief Lessons on Physics</td>
      <td style="text-align: left">七堂极简物理课</td>
      <td style="text-align: left">书籍</td>
      <td style="text-align: left">物理学的极简浪漫入门。</td>
    </tr>
    <tr>
      <td style="text-align: right">24</td>
      <td style="text-align: left">The Compleat Strategyst</td>
      <td style="text-align: left">策略家的博弈</td>
      <td style="text-align: left">书籍</td>
      <td style="text-align: left">博弈论的轻松读物，受益匪浅。</td>
    </tr>
    <tr>
      <td style="text-align: right">25</td>
      <td style="text-align: left">The Evolution of Cooperation</td>
      <td style="text-align: left">合作的进化</td>
      <td style="text-align: left">书籍</td>
      <td style="text-align: left">合作的博弈论经典。</td>
    </tr>
    <tr>
      <td style="text-align: right">26</td>
      <td style="text-align: left">Theory of Everything (Dreamstate Trilogy)</td>
      <td style="text-align: left">暂无中文版</td>
      <td style="text-align: left">书籍</td>
      <td style="text-align: left">探索意识与现实边界的小说。</td>
    </tr>
    <tr>
      <td style="text-align: right">27</td>
      <td style="text-align: left">Jed McKenna’s Notebook</td>
      <td style="text-align: left">暂无中文版</td>
      <td style="text-align: left">书籍</td>
      <td style="text-align: left">对自我探寻的极端反思。</td>
    </tr>
    <tr>
      <td style="text-align: right">28</td>
      <td style="text-align: left">A Master’s Secret Whispers</td>
      <td style="text-align: left">暂无中文版</td>
      <td style="text-align: left">书籍</td>
      <td style="text-align: left">灵性启蒙手册。</td>
    </tr>
    <tr>
      <td style="text-align: right">29</td>
      <td style="text-align: left">Direct Truth</td>
      <td style="text-align: left">暂无中文版</td>
      <td style="text-align: left">书籍</td>
      <td style="text-align: left">直指真理的心灵炸弹。</td>
    </tr>
    <tr>
      <td style="text-align: right">30</td>
      <td style="text-align: left">Atmamun</td>
      <td style="text-align: left">暂无中文版</td>
      <td style="text-align: left">书籍</td>
      <td style="text-align: left">意识自由的个人记录。</td>
    </tr>
    <tr>
      <td style="text-align: right">31</td>
      <td style="text-align: left">The Book of Life</td>
      <td style="text-align: left">生命之书</td>
      <td style="text-align: left">书籍</td>
      <td style="text-align: left">克里希那穆提思想精粹。</td>
    </tr>
    <tr>
      <td style="text-align: right">32</td>
      <td style="text-align: left">Total Freedom</td>
      <td style="text-align: left">彻底的自由</td>
      <td style="text-align: left">书籍</td>
      <td style="text-align: left">通往绝对自由的途径。</td>
    </tr>
    <tr>
      <td style="text-align: right">33</td>
      <td style="text-align: left">Siddhartha</td>
      <td style="text-align: left">悉达多</td>
      <td style="text-align: left">书籍</td>
      <td style="text-align: left">每个人的精神旅程寓言。</td>
    </tr>
    <tr>
      <td style="text-align: right">34</td>
      <td style="text-align: left">The Book of Secrets</td>
      <td style="text-align: left">秘密之书</td>
      <td style="text-align: left">书籍</td>
      <td style="text-align: left">奥修对人生的112条开示。</td>
    </tr>
    <tr>
      <td style="text-align: right">35</td>
      <td style="text-align: left">The Great Challenge</td>
      <td style="text-align: left">暂无中文版</td>
      <td style="text-align: left">书籍</td>
      <td style="text-align: left">奥修晚期谈话录。</td>
    </tr>
    <tr>
      <td style="text-align: right">36</td>
      <td style="text-align: left">The Way to Love</td>
      <td style="text-align: left">爱的方式</td>
      <td style="text-align: left">书籍</td>
      <td style="text-align: left">孟德信简练的灵修指引。</td>
    </tr>
    <tr>
      <td style="text-align: right">37</td>
      <td style="text-align: left">The Untethered Soul</td>
      <td style="text-align: left">觉醒的你</td>
      <td style="text-align: left">书籍</td>
      <td style="text-align: left">如何超越自我束缚。</td>
    </tr>
    <tr>
      <td style="text-align: right">38</td>
      <td style="text-align: left">Meditations</td>
      <td style="text-align: left">沉思录</td>
      <td style="text-align: left">书籍</td>
      <td style="text-align: left">斯多葛智慧的原典读法。</td>
    </tr>
    <tr>
      <td style="text-align: right">39</td>
      <td style="text-align: left">Love Yourself Like Your Life Depends on It</td>
      <td style="text-align: left">像生命一样爱自己</td>
      <td style="text-align: left">书籍</td>
      <td style="text-align: left">简单却有效的自爱练习。</td>
    </tr>
    <tr>
      <td style="text-align: right">40</td>
      <td style="text-align: left">The Tao of Seneca</td>
      <td style="text-align: left">暂无中文版</td>
      <td style="text-align: left">书籍</td>
      <td style="text-align: left">与纳瓦尔同频的斯多葛精选。</td>
    </tr>
    <tr>
      <td style="text-align: right">41</td>
      <td style="text-align: left">How to Change Your Mind</td>
      <td style="text-align: left">如何改变你的想法</td>
      <td style="text-align: left">书籍</td>
      <td style="text-align: left">揭开迷幻药疗愈潜力。</td>
    </tr>
    <tr>
      <td style="text-align: right">42</td>
      <td style="text-align: left">Striking Thoughts</td>
      <td style="text-align: left">搏击思想</td>
      <td style="text-align: left">书籍</td>
      <td style="text-align: left">李小龙哲学火花。</td>
    </tr>
    <tr>
      <td style="text-align: right">43</td>
      <td style="text-align: left">The Prophet</td>
      <td style="text-align: left">先知</td>
      <td style="text-align: left">书籍</td>
      <td style="text-align: left">简洁而永恒的人生诗篇。</td>
    </tr>
    <tr>
      <td style="text-align: right">44</td>
      <td style="text-align: left">Ficciones</td>
      <td style="text-align: left">虚构集</td>
      <td style="text-align: left">书籍</td>
      <td style="text-align: left">每一页都折射无限宇宙。</td>
    </tr>
    <tr>
      <td style="text-align: right">45</td>
      <td style="text-align: left">Stories of Your Life and Others</td>
      <td style="text-align: left">你一生的故事</td>
      <td style="text-align: left">书籍</td>
      <td style="text-align: left">科幻与哲思的完美融合。</td>
    </tr>
    <tr>
      <td style="text-align: right">46</td>
      <td style="text-align: left">Exhalation</td>
      <td style="text-align: left">呼吸</td>
      <td style="text-align: left">书籍</td>
      <td style="text-align: left">最富想象力的当代科幻集。</td>
    </tr>
    <tr>
      <td style="text-align: right">47</td>
      <td style="text-align: left">The Lifecycle of Software Objects</td>
      <td style="text-align: left">软件体的生命周期</td>
      <td style="text-align: left">书籍</td>
      <td style="text-align: left">AI 伦理预演，深刻摄人。</td>
    </tr>
    <tr>
      <td style="text-align: right">48</td>
      <td style="text-align: left">Snow Crash</td>
      <td style="text-align: left">雪崩</td>
      <td style="text-align: left">书籍</td>
      <td style="text-align: left">网络与文化的先知小说。</td>
    </tr>
    <tr>
      <td style="text-align: right">49</td>
      <td style="text-align: left">The Diamond Age</td>
      <td style="text-align: left">钻石年代</td>
      <td style="text-align: left">书籍</td>
      <td style="text-align: left">纳瓦尔常提的教育乌托邦。</td>
    </tr>
    <tr>
      <td style="text-align: right">50</td>
      <td style="text-align: left">The Last Question</td>
      <td style="text-align: left">最后的问题</td>
      <td style="text-align: left">书籍</td>
      <td style="text-align: left">短篇里藏着宇宙终极命题。</td>
    </tr>
    <tr>
      <td style="text-align: right">51</td>
      <td style="text-align: left">Tools of Titans</td>
      <td style="text-align: left">巨人的工具</td>
      <td style="text-align: left">书籍</td>
      <td style="text-align: left">实践者的心法大全。</td>
    </tr>
    <tr>
      <td style="text-align: right">52</td>
      <td style="text-align: left">Thermoinfocomplexity</td>
      <td style="text-align: left">暂无中文版</td>
      <td style="text-align: left">书籍</td>
      <td style="text-align: left">信息热力学的深度论文。</td>
    </tr>
    <tr>
      <td style="text-align: right">53</td>
      <td style="text-align: left">Pre-Suasion</td>
      <td style="text-align: left">瞬时说服</td>
      <td style="text-align: left">书籍</td>
      <td style="text-align: left">说服术的时机艺术。</td>
    </tr>
    <tr>
      <td style="text-align: right">54</td>
      <td style="text-align: left">The Story of Philosophy</td>
      <td style="text-align: left">哲学的故事</td>
      <td style="text-align: left">书籍</td>
      <td style="text-align: left">通俗入门哲学名著。</td>
    </tr>
    <tr>
      <td style="text-align: right">55</td>
      <td style="text-align: left">God’s Debris</td>
      <td style="text-align: left">神的碎片</td>
      <td style="text-align: left">书籍</td>
      <td style="text-align: left">思辨小说的奇葩精品。</td>
    </tr>
    <tr>
      <td style="text-align: right">56</td>
      <td style="text-align: left">Tao Te Ching</td>
      <td style="text-align: left">道德经</td>
      <td style="text-align: left">书籍</td>
      <td style="text-align: left">智慧源头，日日可读。</td>
    </tr>
    <tr>
      <td style="text-align: right">57</td>
      <td style="text-align: left">The Undercover Economist</td>
      <td style="text-align: left">卧底经济学</td>
      <td style="text-align: left">书籍</td>
      <td style="text-align: left">经济学视角的日常透镜。</td>
    </tr>
    <tr>
      <td style="text-align: right">58</td>
      <td style="text-align: left">Illusions: The Adventures of a Reluctant Messiah</td>
      <td style="text-align: left">幻灭</td>
      <td style="text-align: left">书籍</td>
      <td style="text-align: left">寓言式的自由宣言。</td>
    </tr>
    <tr>
      <td style="text-align: right">59</td>
      <td style="text-align: left">The Three-Body Problem</td>
      <td style="text-align: left">三体</td>
      <td style="text-align: left">书籍</td>
      <td style="text-align: left">科幻史诗，引人沉思。</td>
    </tr>
    <tr>
      <td style="text-align: right">60</td>
      <td style="text-align: left">Man’s Search for Meaning</td>
      <td style="text-align: left">活出生命的意义</td>
      <td style="text-align: left">书籍</td>
      <td style="text-align: left">逆境中的意义之书。</td>
    </tr>
    <tr>
      <td style="text-align: right">61</td>
      <td style="text-align: left">Sex at Dawn</td>
      <td style="text-align: left">黎明前的性</td>
      <td style="text-align: left">书籍</td>
      <td style="text-align: left">重新审视人类亲密关系。</td>
    </tr>
    <tr>
      <td style="text-align: right">62</td>
      <td style="text-align: left"><a href="https://meltingasphalt.com/">Melting Asphalt (Kevin Simler)</a></td>
      <td style="text-align: left">暂无中文版</td>
      <td style="text-align: left">博客</td>
      <td style="text-align: left">洞悉人性与社会的深度博文。</td>
    </tr>
    <tr>
      <td style="text-align: right">63</td>
      <td style="text-align: left"><a href="https://fs.blog/">Farnam Street (Shane Parrish)</a></td>
      <td style="text-align: left">范南街</td>
      <td style="text-align: left">博客</td>
      <td style="text-align: left">思维模型的宝库。</td>
    </tr>
    <tr>
      <td style="text-align: right">64</td>
      <td style="text-align: left"><a href="https://stratechery.com/">Stratechery (Ben Thompson)</a></td>
      <td style="text-align: left">战略学</td>
      <td style="text-align: left">博客</td>
      <td style="text-align: left">商业与科技的清晰分析。</td>
    </tr>
    <tr>
      <td style="text-align: right">65</td>
      <td style="text-align: left"><a href="https://idlewords.com/">Idle Words (Maciej Cegłowski)</a></td>
      <td style="text-align: left">闲言碎语</td>
      <td style="text-align: left">博客</td>
      <td style="text-align: left">写作优雅，观点锐利。</td>
    </tr>
    <tr>
      <td style="text-align: right">66</td>
      <td style="text-align: left"><a href="https://fs.blog/munger-operating-system/">The Munger Operating System: How to Live a Life That Really Works</a></td>
      <td style="text-align: left">芒格操作系统：如何过一种真正有效的生活</td>
      <td style="text-align: left">博文</td>
      <td style="text-align: left">芒格智慧的浓缩指南。</td>
    </tr>
    <tr>
      <td style="text-align: right">67</td>
      <td style="text-align: left"><a href="https://dilbertblog.typepad.com/the_dilbert_blog/2007/06/the_day_you_bec.html">The Day You Became a Better Writer</a></td>
      <td style="text-align: left">你成为更好作家的那一天</td>
      <td style="text-align: left">博文</td>
      <td style="text-align: left">写作质量跃迁之道。</td>
    </tr>
    <tr>
      <td style="text-align: right">68</td>
      <td style="text-align: left"><a href="https://meltingasphalt.com/crony-beliefs/">Crony Beliefs</a></td>
      <td style="text-align: left">裙带信念</td>
      <td style="text-align: left">博文</td>
      <td style="text-align: left">自我欺骗的深刻剖析。</td>
    </tr>
    <tr>
      <td style="text-align: right">69</td>
      <td style="text-align: left"><a href="https://blog.eladgil.com/p/career-decisions">Career Decisions</a></td>
      <td style="text-align: left">职业决策</td>
      <td style="text-align: left">博文</td>
      <td style="text-align: left">择业思考框架。</td>
    </tr>
    <tr>
      <td style="text-align: right">70</td>
      <td style="text-align: left"><a href="https://www.lesswrong.com/posts/tWLFWAndSZSYN6rPB/think-like-reality">Think Like Reality</a></td>
      <td style="text-align: left">像现实一样思考</td>
      <td style="text-align: left">博文</td>
      <td style="text-align: left">量子并不怪——怪的是你。</td>
    </tr>
    <tr>
      <td style="text-align: right">71</td>
      <td style="text-align: left"><a href="https://medium.com/flow/lazy-leadership-8ba19e34f959">Lazy Leadership</a></td>
      <td style="text-align: left">懒惰的领导力</td>
      <td style="text-align: left">博文</td>
      <td style="text-align: left">以无为治有为。</td>
    </tr>
    <tr>
      <td style="text-align: right">72</td>
      <td style="text-align: left"><a href="https://edlatimore.com/">EdLatimore.com</a></td>
      <td style="text-align: left">Ed Latimore 个人网站</td>
      <td style="text-align: left">博客</td>
      <td style="text-align: left">拳击与人生哲理的结合。</td>
    </tr>
    <tr>
      <td style="text-align: right">73</td>
      <td style="text-align: left"><a href="https://www.cs.virginia.edu/~robins/YouAndYourResearch.html">You and Your Research</a></td>
      <td style="text-align: left">你和你的研究</td>
      <td style="text-align: left">博文</td>
      <td style="text-align: left">做重要工作的心法。</td>
    </tr>
  </tbody>
</table>
]]></content>
  </entry>
  
  <entry>
    <title type="html"><![CDATA[与冰山交谈]]></title>
    <link href="https://wangyi.ai/blog/2025/07/04/iceberg/"/>
    <updated>2025-07-04T12:38:00-07:00</updated>
    <id>https://wangyi.ai/blog/2025/07/04/iceberg</id>
    <content type="html"><![CDATA[<p>每个人都是一座冰山。当你与人交谈，想象你是在和冰山交谈，目之所及的只是水面之上的部分。如果你希望达成交流，你必须具备耐心，从身体和情绪感受出发，逐层递进，弄清原委。</p>

<p><img src="/images/冰山模式.png" alt="冰山模式" style="max-width: 600px; width: 100%;" /></p>
]]></content>
  </entry>
  
  <entry>
    <title type="html"><![CDATA[Claude Code Complexity: Safety, Safety, Safety]]></title>
    <link href="https://wangyi.ai/blog/2025/06/26/permission/"/>
    <updated>2025-06-26T11:24:00-07:00</updated>
    <id>https://wangyi.ai/blog/2025/06/26/permission</id>
    <content type="html"><![CDATA[<!-- more -->

<p>I tried Claude Code this week, instantly felt empowered by the tool, and was stunned by how naturally it blends into developer workflows.</p>

<p>It demonstrated how easily LLM makers can disrupt application makers (Cursor, in this case). This reminds me of the analogy Andrej Karpathy drew in his <a href="https://www.youtube.com/watch?v=LCEmiRjPEtQ">Software Is Changing (Again) presentation</a>: LLMs have strong analogies to operating systems. LLM makers can disrupt app makers just as Apple can <a href="https://en.wikipedia.org/wiki/Sherlock_(software)#:~:text=%5B2%5D-,Sherlocked%20as%20a%20term,-%5Bedit%5D">sherlock</a> other software running on top of macOS.</p>

<p>With Google's release of a similar tool, <a href="https://github.com/google-gemini/gemini-cli">Gemini CLI</a>, I began to wonder what Claude Code's main complexity really is, and whether that complexity is challenging enough to sustain companies built on agentic tools.</p>

<p>I found <a href="https://www.youtube.com/watch?v=6eBSHbLKuN0">a video</a> in which Boris Cherny (the creator of Claude Code) answers my first question:</p>

<blockquote>
  <p>Audience: I was wondering what was the hardest implementation, like part of the implementation for you of building it?</p>

  <p>Boris: I think there’s a lot of tricky parts. <strong>I think one part that is tricky is the things that we do to make bash commands safe.</strong> Bash is inherently pretty dangerous and it can change system state in unexpected ways. But at the same time, if you have to manually approve every single bash command, it’s super annoying as an engineer.</p>

  <p>Boris: … the thing we landed on is there’s commands that are read-only, there’s static analysis that we do in order to figure out which commands can be combined in safe ways, and then we have this pretty complex tiered permission system so that you can allow list and block list commands at different levels.</p>
</blockquote>
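<p>The tiered scheme Boris describes can be sketched as a toy model. To be clear, the tiers, lists, and naive command parsing below are my own invention for illustration, not Claude Code's actual implementation:</p>

```python
# Toy model of a tiered permission system for bash commands:
# read-only commands auto-run, blocklisted ones are denied, and
# everything else falls back to asking the user.
import shlex

READ_ONLY = {"ls", "cat", "grep", "head", "wc", "git status", "git log"}
ALLOWLIST = {"git add", "git commit"}   # approved ahead of time by the user
BLOCKLIST = {"rm", "sudo", "curl"}      # always require manual review

def permission(command: str) -> str:
    """Return 'auto-run', 'ask', or 'deny' for a shell command string."""
    # Naive static analysis: when commands are chained, every part
    # must be safe for the chain to auto-run.
    parts = [p.strip() for p in command.replace("&&", ";").split(";")]
    verdicts = []
    for part in parts:
        tokens = shlex.split(part)
        if not tokens:
            continue
        head1 = tokens[0]
        head2 = " ".join(tokens[:2])
        if head1 in BLOCKLIST:
            verdicts.append("deny")
        elif head1 in READ_ONLY or head2 in READ_ONLY or head2 in ALLOWLIST:
            verdicts.append("auto-run")
        else:
            verdicts.append("ask")
    # The most restrictive verdict wins.
    for level in ("deny", "ask", "auto-run"):
        if level in verdicts:
            return level
    return "ask"
```

<p>Even this sketch shows why the real thing is hard: deciding whether chained, piped, or substituted commands are jointly safe is a genuine static-analysis problem, not a lookup table.</p>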

<p>This highlights a key insight: <strong>In agentic systems, safety isn’t an afterthought—it’s the core challenge.</strong></p>

<p>How do we know if a command is safe to run? How can these tools predict the consequences of an action? Currently, the burden is shifted to the developer via permission dialogs. But eventually, developers will expect these tools to act more autonomously—without compromising safety.</p>

<p>For commands that only affect local environments, Docker might offer a partial solution. But many real-world use cases involve remote effects—like modifying a task in Linear or changing a GitHub label. These remote side effects raise thorny questions about trust, auditability, and failure handling.</p>

<p>After exploring Claude Code and Gemini CLI, I’m excited about where this space is headed. The next breakthroughs may come not just from smarter agents—but from safer ones.</p>

<p>– EOF –</p>
]]></content>
  </entry>
  
  <entry>
    <title type="html"><![CDATA[WeChat Reading: Automating the Quiz PK Game with an LLM]]></title>
    <link href="https://wangyi.ai/blog/2025/06/21/wechat-reader/"/>
    <updated>2025-06-21T20:42:00-07:00</updated>
    <id>https://wangyi.ai/blog/2025/06/21/wechat-reader</id>
    <content type="html"><![CDATA[<p><img src="/images/wensheng.png" alt="" /></p>

<p>To boost user engagement, the WeChat Reading (微信读书) team built a WeChat mini-game called Quiz PK: a head-to-head trivia ladder whose questions are mostly common knowledge, such as filling in missing characters of idioms or completing lines of classical poetry.</p>

<p>After playing for a few days, I found that my own knowledge and memory were not enough to keep climbing the ranks. The answers are one web search away, but the 10-second answer window leaves no time to search, so I decided to let DeepSeek answer for me. I vibe-coded a Python script that automates the whole answering flow and eventually reached the top rank. This post records the problems I ran into along the way and some observations.</p>

<!-- more -->

<h2 id="技术难点与观察">Technical Challenges and Observations</h2>

<h3 id="ocr-错误率导致的复杂度">Complexity from OCR Error Rates</h3>

<p>My first idea was to turn window screenshots into text, which involves an image-to-text modality conversion:</p>
<ul>
  <li>macOS's built-in OCR is not perfectly accurate for Chinese. Some characters get misrecognized as similar-looking glyphs across frames.
    <ul>
      <li>To tell whether the question had updated, the program needed fairly involved refresh-detection logic.</li>
      <li>Storing and looking up previously answered questions also became more complex as a result.</li>
    </ul>
  </li>
  <li>Later I realized I could use the macOS Accessibility API to read the text inside the mini-program window directly, which was much simpler to implement.</li>
  <li>Takeaways:
    <ul>
      <li>If you can get the text content directly, prefer it; avoid unnecessary complexity.</li>
      <li>The first approach you think of is not necessarily the best one; spend a little extra time comparing alternatives before implementing.</li>
    </ul>
  </li>
</ul>

<h3 id="反馈机制的设计">Designing the Feedback Mechanism</h3>

<p>An LLM cannot guarantee a correct answer to every question, so the system needs a feedback mechanism to handle wrong answers and improve over time:</p>
<ul>
  <li>After each question, the program records whether the actual answer matches the LLM's output.</li>
  <li>When the LLM is wrong, the question and its correct answer are saved into a local question bank for future matching.</li>
  <li>As the bank grows, the LLM gradually recedes into a supporting role: matching known questions comes first, generative answering second.</li>
  <li>In practice, this hybrid strategy markedly improved accuracy and made the system more predictable.</li>
</ul>
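<p>The hybrid strategy above fits in a few lines. A minimal sketch, where <code>ask_llm</code> is a hypothetical stand-in for a real DeepSeek API call:</p>

```python
# Hybrid answering: exact-match a local bank of previously seen
# questions first; fall back to the LLM only for unknown questions.
class QuizAnswerer:
    def __init__(self, ask_llm):
        self.bank = {}          # question -> known correct answer
        self.ask_llm = ask_llm  # callable: question -> answer string

    def answer(self, question: str) -> str:
        if question in self.bank:        # known question: exact match wins
            return self.bank[question]
        return self.ask_llm(question)    # unknown: generative fallback

    def record(self, question: str, correct_answer: str) -> None:
        # Called after the game reveals the right answer. Over time the
        # bank grows and the LLM is consulted less and less.
        self.bank[question] = correct_answer
```

<p>The key design choice is that feedback is cheap: every round yields a labeled (question, answer) pair for free, so accuracy improves monotonically without any model changes.</p>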

<h3 id="工具效率与资源消耗">Tool Efficiency and Resource Consumption</h3>

<p>Programs that rely on modality conversion and real-time feedback also face efficiency challenges, especially when one side changes state but offers no push mechanism, leaving the tool to poll for changes:</p>
<ul>
  <li>Here, to detect whether the question had refreshed, the program could only periodically grab the text inside the mini-program and compare it. Polling brought significant resource consumption; this pull-style detection is inefficient and unsuitable for long-running use.</li>
  <li>At root, the problem is the lack of a change-triggered event notification mechanism. If macOS or the target app exposed a "question changed" observer interface, system efficiency would improve dramatically. I hope Apple keeps evolving macOS over the next few years to help third-party software add more AI-driven features.</li>
  <li>The implementation used <a href="https://github.com/MacPaw/macapptree">macapptree</a>, open-sourced by <a href="https://macpaw.com/">MacPaw</a>, to grab an app's accessibility tree. I suspect the MacPaw team also relies on the Accessibility API to implement the automation actions in <a href="https://macpaw.com/eney">Eney</a>.</li>
  <li>Takeaway: in system design, prefer to choose or build components with event-driven mechanisms, and avoid the energy cost and complexity of blind polling.</li>
</ul>
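<p>The polling loop in its simplest form looks like this. The <code>grab_text</code> callable is a stand-in for a real accessibility-tree read (e.g. via macapptree); the loop fires a callback only when the visible question actually changes:</p>

```python
# Pull-style change detection: hash the grabbed text each round and
# invoke the callback only when the digest differs from the last one.
import hashlib
import time

def watch_question(grab_text, on_new_question, interval=0.5, rounds=None):
    """Poll grab_text() and call on_new_question(text) on each change."""
    last_digest = None
    n = 0
    while rounds is None or n < rounds:
        text = grab_text()
        digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
        if digest != last_digest:   # content changed -> a new question
            on_new_question(text)
            last_digest = digest
        time.sleep(interval)
        n += 1
```

<p>Hashing keeps the comparison cheap, but the loop still burns a wakeup every <code>interval</code> seconds whether or not anything changed, which is exactly the inefficiency an event-driven observer would eliminate.</p>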

<h3 id="vibe-coding">Vibe-Coding</h3>

<p>For a weekend fun project, there is no way I could have moved this fast without vibe-coding: implementing the planned features, fixing bugs, and getting the program to automate the whole answering flow within two or three days. With Cursor, there is no going back to writing code line by line. Vibe-coding is fun and the future for everyone.</p>

<p>– EOF –</p>
]]></content>
  </entry>
  
  <entry>
    <title type="html"><![CDATA[Working on Moonshot Projects]]></title>
    <link href="https://wangyi.ai/blog/2025/06/10/why-moonshot-projects/"/>
    <updated>2025-06-10T10:55:00-07:00</updated>
    <id>https://wangyi.ai/blog/2025/06/10/why-moonshot-projects</id>
    <content type="html"><![CDATA[<p><a href="https://www.youtube.com/watch?v=9V6tWC4CdFQ">Sundar Pichai: CEO of Google and Alphabet | Lex Fridman Podcast</a>:</p>

<blockquote>
  <p>Sundar Pichai views “moonshot” projects as crucial for several reasons:</p>

  <ul>
    <li><strong>Driving Innovation:</strong> He believes that aiming for audacious, seemingly impossible goals, like the original moon landing, forces radical rethinking and leads to breakthroughs that wouldn’t happen with incremental improvements. It’s about finding “10X” improvements rather than “10 percent” improvements.</li>
    <li><strong>Inspiring Talent and Passion:</strong> Big, challenging problems ignite both the hearts and minds of people. It’s easier to attract passionate and talented individuals to work on projects that could redefine humanity.</li>
    <li><strong>Societal Impact:</strong> Moonshots, even if their initial goal is not fully realized, can lead to numerous technological advancements with real-world applications and inspire future generations. For example, Google considers fighting climate change as a “moonshot” due to its profound societal importance.</li>
    <li><strong>Leveraging Constraints:</strong> Pichai has also highlighted that constraints can act as catalysts for innovation. Working within defined limits encourages teams to be more creative and focused, leading to groundbreaking ideas.</li>
  </ul>
</blockquote>
]]></content>
  </entry>
  
  <entry>
    <title type="html"><![CDATA[Vibe Coding - Baby Sleep Tracker]]></title>
    <link href="https://wangyi.ai/blog/2025/06/03/baby-sleep-tracker/"/>
    <updated>2025-06-03T09:07:00-07:00</updated>
    <id>https://wangyi.ai/blog/2025/06/03/baby-sleep-tracker</id>
    <content type="html"><![CDATA[<p><img src="/images/baby-sleep-tracker-4.jpg" alt="" /></p>

<p>To monitor our baby from other rooms, we purchased a Nanit Baby Monitor. Using image recognition, Nanit provides insights into our baby’s nighttime sleep patterns through its app. Each state transition point includes a video for review.</p>

<p>However, the display isn’t very intuitive — the chart doesn’t show the exact timestamps for each transition. For example, the start and end times of the two longer sleep sessions are not clearly marked.</p>

<!-- more -->

<p>To view this information more intuitively and display the baby's overnight sleep sessions and durations more flexibly, I used Cursor and vibe-coding to build a web app:</p>

<ul>
  <li>Fetch data from Nanit API for any given date</li>
  <li>Render sleep sessions throughout the day</li>
  <li>Plot sleeping trend of most recent dates</li>
</ul>
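<p>The rendering step boils down to interval math over (start, end) session pairs. A minimal sketch with made-up timestamps; in the real app these come from the (undocumented) Nanit API:</p>

```python
# Compute per-session sleep durations and the nightly total from
# (start, end) time strings, e.g. a late-evening and an after-midnight
# session recorded on their respective days.
from datetime import datetime

def session_minutes(sessions):
    """Return ([duration_in_minutes, ...], total_minutes)."""
    fmt = "%H:%M"
    durations = []
    for start, end in sessions:
        t0 = datetime.strptime(start, fmt)
        t1 = datetime.strptime(end, fmt)
        durations.append((t1 - t0).total_seconds() / 60)
    return durations, sum(durations)

# Example night: one session before midnight, one after.
durations, total = session_minutes([("20:30", "23:45"), ("00:15", "06:10")])
```

<p>The web app then just scales these durations onto a 24-hour timeline for drawing; the arithmetic above is the whole model.</p>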

<p>Lessons learnt:</p>

<ul>
  <li>Think through the main features and their designs before generating code with Cursor.
    <ul>
      <li>Although the LLM can generate code for you, you still need to think through which features you want and what things should look like (the design).</li>
      <li>This reminds me of how Firebase Studio tries to help you build a PRD (Product Requirements Document) before it starts generating code.</li>
      <li>It also reminds me of apps like <a href="https://stitch.withgoogle.com/">https://stitch.withgoogle.com/</a>.</li>
    </ul>
  </li>
  <li>Think about <strong>testing</strong> if you would like to have some code maintainability.
    <ul>
      <li>Fully AI-generated code without any review or tests is not maintainable.</li>
      <li>As a weekend project to meet my own requirements, I didn't put much effort into maintainability.</li>
      <li>I feel the joy of vibe coding slowly fades as I add more features, since new changes can break existing ones.
        <ul>
          <li>I should probably add some end-to-end tests to make sure new changes don't break existing features. However, I haven't figured out how to put tests into the iteration loop in Cursor yet.</li>
        </ul>
      </li>
    </ul>
  </li>
  <li>Tighter development loop and more agentic behaviors are needed.
    <ul>
      <li>Cursor stops itself frequently even with agent mode to ask for all kinds of inputs:
        <ul>
          <li>human input (confirmation, or opinion on design choices)</li>
          <li>app console output</li>
        </ul>
      </li>
      <li>For the human input, I found myself becoming the bottleneck for it to do more useful things. When it's waiting for some input, I wish it would start working on other parts that don't require human input.</li>
      <li>For the app console output, I wish it had <strong>a tighter loop</strong> so that I don't need to copy console output from Chrome DevTools back into Cursor. (Maybe Chrome could provide something to close the loop here?)</li>
    </ul>
  </li>
  <li>Analyzing images through AI generated code doesn’t work.
    <ul>
      <li>As Nanit doesn't provide a way to export data, I tried to use app screenshots to parse the sleep information (which would be challenging for me to code manually), and it turns out current AI models can't do that either, even after dozens of prompts back and forth.</li>
      <li>I ended up using <a href="http://proxyman.com/">Proxyman</a> to capture HTTPS requests and responses from the Nanit app to understand the API, and calling it directly from Python.
        <ul>
          <li>Used some Go code from <a href="https://github.com/gregory-m/nanit">https://github.com/gregory-m/nanit</a> in the prompt to help the LLM implement the authentication part.</li>
        </ul>
      </li>
    </ul>
  </li>
</ul>

<p><img src="/images/baby-sleep-tracker-1.png" alt="" /></p>

<p><img src="/images/baby-sleep-tracker-2.png" alt="" /></p>

<p><img src="/images/baby-sleep-tracker-3.png" alt="" /></p>
]]></content>
  </entry>
  
  <entry>
    <title type="html"><![CDATA[The Independent Thinker]]></title>
    <link href="https://wangyi.ai/blog/2025/04/24/critical-thinking/"/>
    <updated>2025-04-24T16:43:00-07:00</updated>
    <id>https://wangyi.ai/blog/2025/04/24/critical-thinking</id>
    <content type="html"><![CDATA[<p>The independent thinker<br />
is certain that truth exists,<br />
though perhaps not in the shape he imagines.<br /></p>

<!-- more -->

<p>Most of the world's questions remain unresolved,<br />
and the few answers we think we have<br />
keep evolving as time goes by.<br /></p>

<p>Opinions are like water flowing through the body;<br />
they belong to no one.<br /></p>

<p>Keep questioning everything,<br />
stay open,<br />
and listen to different views.<br /></p>

<p>When you hear a view,<br />
don't rush to believe or reject it;<br />
try instead to understand the facts and logic behind it,<br />
then make your own independent judgment.<br /></p>

<p>Stay ready to revise the views you hold,<br />
for your knowledge of the facts will change,<br />
and action yields still more facts.<br /></p>

<p>Debate is not about winning or losing,<br />
but about exploring together the roots of disagreement:<br />
a difference in how we rank our values,<br />
or that you and I have seen different parts of the world.<br /></p>

<p>Set down prejudice and pride,<br />
and be a rational, independent thinker.<br /></p>
]]></content>
  </entry>
  
  <entry>
    <title type="html"><![CDATA[Magic Moment]]></title>
    <link href="https://wangyi.ai/blog/2025/04/21/magic-moment/"/>
    <updated>2025-04-21T20:23:00-07:00</updated>
    <id>https://wangyi.ai/blog/2025/04/21/magic-moment</id>
    <content type="html"><![CDATA[<p>My impression after using <a href="https://goodsnooze.gumroad.com/l/macwhisper">MacWhisper</a> for a whole day:</p>

<blockquote>
  <p>Voice-to-text input is nothing new, but like the birth of the <a href="https://www.amazon.com/Creative-Selection-Inside-Apples-Process/dp/1250194466">iPhone keyboard</a>, there seems to be an invisible threshold behind it. Before the threshold is crossed, everything feels clumsy and cumbersome; once past it, users finally feel that Magic Moment, as if everything has become natural, fluid, even a little magical.</p>
</blockquote>
]]></content>
  </entry>
  
  <entry>
    <title type="html"><![CDATA[Reading Notes: 《思辨力35讲：像辩手一样思考》 (Think Like a Debater)]]></title>
    <link href="https://wangyi.ai/blog/2025/04/20/35-talks-on-critical-thinking/"/>
    <updated>2025-04-20T09:08:00-07:00</updated>
    <id>https://wangyi.ai/blog/2025/04/20/35-talks-on-critical-thinking</id>
    <content type="html"><![CDATA[<p>《思辨力35讲：像辩手一样思考》 (35 Lectures on Critical Thinking: Think Like a Debater) is one of the most substantive books I have read recently.</p>

<p>The first two chapters systematically lay out logical frameworks for analyzing problems and common logical fallacies, and they genuinely help with building critical-thinking skills. The third chapter, on debate practice, covers how to apply these ideas in actual debates; for readers who don't debate, it is less useful than the first two.</p>

<!-- more -->

<h2 id="塑造理论的整体结构第二章的内容">Building the Overall Structure of an Argument (Chapter 2)</h2>

<ul>
  <li>MECE (Mutually Exclusive, Collectively Exhaustive)
    <ul>
      <li>Definition: the points do not overlap with one another (mutually exclusive), and together they completely cover the analysis of the problem (collectively exhaustive).</li>
    </ul>
  </li>
</ul>
<p class="info">MECE was quite enlightening for me; some discussions at work lack this kind of holistic view of the problem.
</p>
<ul>
  <li>A clear definition is the start of any discussion
    <ul>
      <li>Define terms clearly, reach consensus, uncover deeper insight</li>
    </ul>
  </li>
  <li>Without a standard, there is no meaning
    <ul>
      <li>A comparison standard is a key element of building an argument
        <ul>
          <li>Making the standard public is the precondition for consensus. The example of choosing which debaters take the stage.</li>
          <li>Examining the standard is how you surface disagreements and identify what matters</li>
          <li>Rebutting a comparison standard: validity, reasonableness, and reductio ad absurdum</li>
        </ul>
      </li>
      <li>Clarifying the comparison standard: see the underlying values and steer the decision</li>
    </ul>
  </li>
  <li>需根解损 (need, root cause, solvency, cost-benefit): weighing values and interests
    <ul>
      <li>Policy debates vs. value debates</li>
      <li>需根解损 is the analytical framework for policy debates</li>
      <li>Concepts:
        <ul>
          <li>Need: can be problem-driven (air pollution), interest-driven (a better job), or goal-driven (a more civilized society).</li>
          <li>Root cause: why this need exists in the first place.</li>
          <li>Solvency: how well the policy solves the problem, covering both feasibility and effectiveness.</li>
          <li>Cost-benefit: weigh the benefits of implementing the policy against the harms it produces. Is it worth it?
            <ul>
              <li>Quantification and remedies</li>
            </ul>
          </li>
        </ul>
      </li>
    </ul>
  </li>
</ul>
<p class="info">This book was my first encounter with 需根解损. A prioritization framework commonly used at work is similar: RICE (Reach, Impact, Confidence, Effort).
</p>
<ul>
  <li>No absolute consensus, but pros and cons can be compared
    <ul>
      <li>Providing a haven for life vs. encouraging abandonment</li>
      <li>Weighing pros and cons makes thinking complete and clear, but yields no absolute truth. It usually involves ranking values, so it varies from person to person and absolute consensus is hard.
        <ul>
          <li>Can the benefit be obtained another way; can the harm be avoided</li>
          <li>Find a common yardstick and compare pros and cons side by side</li>
          <li>See the essence of the matter and judge pros and cons by a ranking of values</li>
        </ul>
      </li>
    </ul>
  </li>
  <li>No filler: start with an awareness of the deciding point
    <ul>
      <li>The deciding point in debate: right, but why more right</li>
      <li>Know your goal, then achieve it</li>
    </ul>
  </li>
  <li>Argue your claims, and examine yourself
    <ul>
      <li>How do you support a claim?</li>
      <li>The three parts of an argument: logic, facts, values</li>
    </ul>
  </li>
  <li>Argument strength and the burden of proof
    <ul>
      <li>Deductive argument: true premises plus valid logic make the conclusion necessary</li>
      <li>Inductive argument: the conclusion goes beyond the premises and lacks absolute validity
        <ul>
          <li>Methods of inductive argument: presenting facts, citing data, explaining mechanisms, giving examples, and invoking authoritative theory.</li>
        </ul>
      </li>
      <li>Argument strength: no unified standard; it depends on the stakes</li>
      <li>The proportionality principle: argument strength should be proportional to the action it supports</li>
      <li>Presumed benefit and the burden of proof</li>
    </ul>
  </li>
  <li>Making reasoning land: use examples to complete the logic, and stories as icing on the cake
    <ul>
      <li>Persuasiveness</li>
    </ul>
  </li>
</ul>

<h2 id="常见的逻辑谬误第一章的内容">Common Logical Fallacies (Chapter 1)</h2>

<ul>
  <li>Correlation does not imply causation. Mistaking correlation for causation is one of the most common logical errors in daily life and at work.
    <ul>
      <li>Reversed causality</li>
      <li>Some C brings about both A and B</li>
      <li>How to overcome it: try reversing the claim (if A supposedly causes B, does B cause A?); control the variables</li>
    </ul>
  </li>
  <li>"Is" cannot prove "ought."
    <ul>
      <li>Understanding the concepts
        <blockquote>
          <p>How should we read the saying "the backward get beaten" (落后就要挨打)? Reading one: when I fall behind, others are more likely to bully me. Reading two: if I fall behind, it is fair for others to beat and bully me; backward people and nations deserve to be bullied, even wiped out. These two readings correspond to two concepts: the descriptive and the normative. The descriptive ("is") describes reality; the normative ("ought") discusses what is right, good, and worth pursuing. This lecture distinguishes the two.</p>
        </blockquote>
      </li>
      <li>Telling "is" from "ought"
        <blockquote>
          <p>Is marrying within one's social class (门当户对) outdated? At the descriptive level, a survey is all you need. At the normative level, the question is whether modern people should still care about it.</p>
        </blockquote>
      </li>
      <li>"Is" cannot prove "ought"
        <ul>
          <li>On the misreading of "whatever exists is reasonable." In Hegel's original sense, "reasonable" means: whatever is real has causes, can be attributed, and can be traced.</li>
        </ul>
      </li>
      <li>The descriptive confirms the real world and seeks truth; the normative is a discussion at the moral level.</li>
    </ul>
  </li>
  <li>It works, but it isn't right
    <blockquote>
      <p>"Does it work" asks whether an action achieves the actor's utilitarian goal; "is it right" asks whether the action is morally just and ought to be done.</p>
    </blockquote>
    <ul>
      <li>Does rescue torture work?
        <ul>
          <li>An example used to illustrate the clash between the utilitarian and the morally absolutist views.</li>
        </ul>
      </li>
      <li>The risks and crises hidden in weighing pros and cons
        <ul>
          <li>Utilitarianism is easier to agree with.</li>
          <li>How to promote an absolute moral view? Create some kind of immersive experience; or argue why a world that holds this view is better, which is somewhat like rebutting on utilitarian grounds: killing one may not save a hundred.</li>
        </ul>
      </li>
    </ul>
  </li>
  <li>The slippery slope is a fallacy, but also a legitimate doubt
    <ul>
      <li>Slippery-slope fallacy: the chain of links does not hold
        <ul>
          <li>If A happens, B will happen; if B happens, C will happen. Is the latter half actually true?</li>
        </ul>
      </li>
      <li>The slippery-slope argument about same-sex marriage: valid consent and moral principles
        <blockquote>
          <p>The two most basic behavioral principles of a civilized society are consent and doing no harm to others.</p>
        </blockquote>
        <ul>
          <li>Root cause. What is the root cause of an event? The factors that help spread AIDS are not rooted in same-sex sex per se, but in unprotected same-sex sex.</li>
        </ul>
      </li>
    </ul>
  </li>
  <li>Between equality and justice lies fairness
    <ul>
      <li>Equity illustration: <img src="/images/Equity.jpg" alt="Equity illustration" width="600px" />
        <ul>
          <li>Equality: everyone is given the same resources or opportunities, but because of inherent differences, some still cannot benefit.</li>
          <li>Equity: resources are allocated according to individual needs, so that everyone can reach the same outcome.</li>
          <li>Justice: through systemic reform (such as removing barriers), everyone gains equal opportunity without needing extra help.</li>
        </ul>
      </li>
      <li>Affirmative action</li>
      <li>Target state, starting state, and transitional state</li>
    </ul>
  </li>
  <li>The "self-evident" inside a syllogism
    <ul>
      <li>Opening example
        <ul>
          <li>Major premise: it is good for a person to be alive.</li>
          <li>Minor premise: my partner is a person.</li>
          <li>Conclusion: reviving my partner is good.</li>
        </ul>
      </li>
      <li>What is a syllogism?
        <ul>
          <li>Major premise, minor premise, and conclusion. Relations of containment.</li>
          <li>Deductive arguments and inductive arguments</li>
        </ul>
      </li>
    </ul>
  </li>
  <li>Not every disagreement is equivocation
    <ul>
      <li>What is equivocation (偷换概念)?
        <ul>
          <li>Equivocation replaces one concept with a different one within the same line of reasoning: the same word or phrase appears twice in an argument with the same surface meaning, but actually denotes two different concepts. It violates the law of identity and so produces a logical error.</li>
        </ul>
      </li>
      <li>A faulty analogy is not equivocation
        <ul>
          <li>"A person could never hurt his own child; even a vicious tiger won't eat its own cubs!" You may object that this is a faulty analogy, because humans and tigers may not be similar enough in how cruelly they treat their young, or because the human world is far more complex than the animal world. But that is not equivocation.</li>
        </ul>
      </li>
    </ul>
  </li>
  <li>The straw-man and red-herring fallacies
    <blockquote>
      <p>Deliberately distorting the other side's view into a version that is easier to refute, refuting that version, and feeling you have won: that is the straw-man fallacy.</p>
    </blockquote>
    <ul>
      <li>The straw man is a crude simplification of a view's complexity</li>
      <li>How to counter a straw man: the principles of fidelity and charity
        <blockquote>
          <p>The fidelity principle: when the other side states a view, understand it, restate it, and rebut it as faithfully to their intent as possible, instead of fabricating something that does not match what they meant.
The charity principle: give the benefit of the doubt to the person making the argument, and make their case as persuasive as you can while staying faithful to their intent.
Only under these premises are we rebutting the view itself; otherwise we are rebutting a different concept, or merely defeating a momentary slip in how they expressed themselves.</p>
        </blockquote>
      </li>
      <li>Red herring fallacy
        <ul>
          <li>A metaphor for irrelevant points, or even false information, raised to divert attention.</li>
        </ul>
      </li>
      <li>How to counter a red herring: identify where the focus has been shifted</li>
    </ul>
  </li>
  <li>Biased samples cannot be trusted
    <ul>
      <li>Survivorship bias: a data fallacy that ignores the "casualties"
        <ul>
          <li>Survivorship bias</li>
        </ul>
      </li>
      <li>Selection bias: a skewed sample cannot represent the whole
        <ul>
          <li>Selection bias: is the sample chosen at random?</li>
        </ul>
      </li>
      <li>Self-selection bias: traits tied to subjects' own choices distort causal judgments
        <ul>
          <li>Self-selection bias</li>
          <li>A divorce lawyer analyzing divorce</li>
        </ul>
      </li>
      <li>Participation bias (non-response bias)
        <ul>
          <li>Non-response bias</li>
        </ul>
      </li>
      <li>Conditional probability: gather information, understand yourself
        <ul>
          <li>Conditional probability</li>
        </ul>
      </li>
    </ul>
  </li>
  <li>The circular fallacy that skips the argument (begging the question)
    <ul>
      <li>A good horse never grazes backward, because a horse that grazes backward is not a good horse.</li>
    </ul>
  </li>
  <li>A dilemma may be only an illusion
    <ul>
      <li>What is a false dilemma? Also called black-and-white thinking: demanding an either-or choice when other options actually exist.</li>
      <li>Spectrum thinking</li>
    </ul>
  </li>
  <li>Personal attacks prove nothing
    <ul>
      <li>The ad hominem fallacy</li>
      <li>The appeal-to-authority fallacy</li>
    </ul>
  </li>
  <li>Sufficient? Necessary?
    <ul>
      <li>Comparatives: facing a limited status quo, quantify the best option
        <ul>
          <li>I pursue nobility, but not "ever more noble."</li>
        </ul>
      </li>
      <li>Sufficient and necessary; sufficient but not necessary; necessary but not sufficient</li>
      <li>Overbearing, all-encompassing definitions: non-neutral conditions, inconsistent standards</li>
    </ul>
  </li>
</ul>

<p>- EOF -</p>
]]></content>
  </entry>
  
  <entry>
    <title type="html"><![CDATA[Daily Watched YouTube Videos]]></title>
    <link href="https://wangyi.ai/blog/2022/05/10/daily-watched-youtube-videos/"/>
    <updated>2022-05-10T23:05:00-07:00</updated>
    <id>https://wangyi.ai/blog/2022/05/10/daily-watched-youtube-videos</id>
    <content type="html"><![CDATA[<svg class="line-chart"></svg>
<script src="https://cdn.jsdelivr.net/npm/chart.xkcd@1.1/dist/chart.xkcd.min.js"></script>

<script>
  // Grab the SVG element the chart renders into.
  const svg = document.querySelector('.line-chart')

  // Note: despite the .line-chart class name, this renders a bar chart.
  const barChart = new chartXkcd.Bar(svg, {
    title: 'Daily Watched YouTube Videos', // optional
    xLabel: 'Year', // optional
    data: {
      labels: ['2014', '2015', '2016', '2017', '2018', '2019', '2020', '2021', '2022'],
      datasets: [{
        label: 'Number of Videos',
        data: [2, 2, 2, 7, 13, 15, 18, 26, 30],
      }],
    },
    options: { // optional
      yTickCount: 3,
      legendPosition: chartXkcd.config.positionType.upLeft
    }
  });
</script>

<p>- EOF -</p>
]]></content>
  </entry>
  
</feed>
