A Day in the Life of an Indie Developer: What AI Actually Changed
I want to record a real workday, not to show off productivity, but to honestly answer one question:
What has AI actually changed?
8:30 AM — The 15 Minutes Before Starting
Make coffee, open Feishu, read each agent's daily report.
This ritual didn't exist a year ago. The agents run overnight, and early morning is when I read their "work reports": which services have healthy heartbeats, what tasks got handled yesterday, whether any issues are pending a human decision.
Changed: I went from "starting work" to "taking over a shift." Something was already running for hours before I showed up.
Unchanged: I still have to make my own coffee.
9:00 AM — First Coding Block
Open the terminal; Claude Code reads CLAUDE.md and loads the project context.
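For readers unfamiliar with the convention: CLAUDE.md is a plain markdown file in the project root that Claude Code reads automatically at startup. A minimal sketch of the kind of thing mine covers (the entries below are illustrative, not my actual file):

```markdown
# Project: blog

## Stack
- Static site, TypeScript, posts live in `src/content/posts/`

## Conventions
- Run the type check before committing
- A post's frontmatter includes `series` when it belongs to one

## Current focus
- Series-based filtering on the post list page
```

The point is not the contents but the effect: context that used to live only in my head now loads in one step.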
Today's task: add a new feature to this blog, filtering articles by series.
The old workflow: read the docs → think up a design → write code → debug → repeat.
The new workflow: say "I want this feature, the existing code is here" → Claude proposes a design → I judge the design → Claude implements → I review → adjust → done.
Changed: I went from "writing code" to "reviewing code." The center of the work shifted from execution to judgment.
Unchanged: the ability to read code is still the core skill. Claude can write it, but I have to be able to read it in order to review it. If you don't understand code, you can't review it.
10:30 AM — A Stuck Problem
A TypeScript type error. Claude offered three fixes; I picked the second, then discovered it had a side effect: in one edge case it fails silently.
It took 40 minutes to track down.
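The actual bug isn't worth reproducing here, but the failure class is common enough to sketch. A hypothetical example (the names echo today's feature, not the real code) of a function that type-checks cleanly while one input silently returns the wrong result:

```typescript
interface Post {
  title: string;
  series?: string; // not every post belongs to a series
}

// Type-checks in every branch, but when `selected` is undefined
// (no series chosen yet), strict equality matches only the posts
// that ALSO lack a series, instead of returning all posts.
function filterBySeries(posts: Post[], selected?: string): Post[] {
  return posts.filter((p) => p.series === selected);
}

// The edge case made explicit: no selection means no filtering.
function filterBySeriesSafe(posts: Post[], selected?: string): Post[] {
  if (selected === undefined) return posts;
  return posts.filter((p) => p.series === selected);
}
```

Nothing here fails loudly, which is exactly why this kind of thing eats 40 minutes.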
Changed: Claude solves 90% of the routine problems, which frees up time. But the remaining 10%, the genuinely hard ones, I still have to chew through myself. And because Claude produces code so quickly, the rhythm of "generate fast, debug slow" takes some getting used to.
Unchanged: debugging skill. Intuition for edge cases. An understanding of how the system behaves. None of that has been replaced.
12:00 PM — Lunch + Information Digest
I no longer scroll X over lunch. Instead I use Perplexity for directed information digestion: search the handful of topics I'm following and see what's new.
This isn't passive scrolling; it's active querying. The difference is large.
Changed: information consumption went from passively pushed feeds to active retrieval. Perplexity made the process about 3x faster.
Unchanged: the ability to judge the value of information. Which things deserve a deep read and which are noise is still my call.
2:00 PM — Writing Time
Writing this article.
The process: first I jot down what actually happened today in Notion, a few hundred words of fragmented notes. Then I have Claude organize the fragments into an article structure and generate a first draft. Then I rewrite the draft beyond recognition: adding the details Claude couldn't possibly know, cutting the paragraphs that sound right but aren't what I actually think.
In the end, roughly 60% of the words are mine from scratch and 40% are reworked from Claude's draft.
Changed: writing now starts from a "structured draft" instead of a "blank page." The psychological barrier dropped considerably.
Unchanged: the genuine personal perspective. The specific experiences and judgments. Claude can't fabricate those and can't replace them. If I don't inject them, the article comes out correct, complete, and boring.
4:00 PM — Agent System Maintenance
A heartbeat agent failed yesterday and needs investigating.
Read the logs → locate the failure → understand the cause → fix it → update the constraint file.
AI can't help much with this kind of work. Not because it lacks the capability, but because the work depends on system background only I know: why a given config is set the way it is, what trade-offs were made at the time, whether this "anomaly" is a real anomaly or expected behavior.
Changed: AI gave me a system, and maintaining that system has itself become part of the job.
Unchanged: system design decisions are human. What the agents execute and where their boundaries sit still has to be defined and maintained by a person.
6:30 PM — Pre-Shutdown Archive
Update daily.md for every active project: what got done today, what came up, what's on for tomorrow.
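The format is deliberately plain. A sketch of the shape (the entries are illustrative, not today's actual log):

```markdown
# YYYY-MM-DD

## Done
- Shipped series filtering on the post list

## Open issues
- Heartbeat agent skipped one check; suspect config drift

## Tomorrow
- Add a test for the undefined-series edge case
```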
This habit didn't exist a year ago. Back then, writing logs felt like a waste of time; now it feels like the highest-return investment of the day, because tomorrow morning Claude will read these logs instead of asking me "where did we leave off?"
Changed: documentation went from "occasional" to "daily." Not out of discipline, but because it pays off directly.
Unchanged: I'm still the one writing it, and it still requires clear thinking. AI can't help you record what you haven't thought through yourself.
Back to the Original Question
What did AI change?
The execution layer got compressed. Writing code, finding information, generating first drafts: the time cost of all of these dropped sharply.
What didn't AI change?
The judgment layer is still human. Which design is better, whether this code has a bug, whether this article has any value, whether this system should be built this way: AI can take part in these judgments, but it can't replace them.
An unexpected side effect:
I've become more willing to try things. A new idea used to die at the mere thought of its implementation cost. Now that execution is fast, the threshold for trying is lower and the cost of failing is lower, so I experiment more often.
This may be AI's biggest gift to indie developers: not doing your work for you, but lowering the cost of "just trying it."
I’ve read a lot of “AI transformed my workflow” pieces. Most of them describe the tools, not the day. Here’s the day.
8:30 AM — Reading Agent Reports
The morning used to start with email. Now it starts with agent daily reports.
I run a few automated agents that work overnight — monitoring, summarizing, triaging. By morning there’s a digest waiting: what happened, what needs attention, what was handled automatically.
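What actually lands in the digest is short and plain. A sketch of the shape of one agent's entry (contents illustrative, not a real report):

```markdown
## heartbeat-agent — 06:00

- Status: OK (47 checks overnight, 0 failures)
- Handled automatically: archived 3 completed tasks
- Needs a decision: none
```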
It’s a different ritual than reading email. Email is reactive — someone wants something from me. The agent reports are more like taking over a shift: here’s the state of the system, here’s what changed, here’s what I need to decide.
The mental model shift is real. I’m less reactive in the morning than I used to be.
9:00 AM — First Coding Block
The role has shifted. I’m not primarily writing code anymore — I’m reviewing it.
I describe what I want, Claude Code produces a draft implementation, I read it carefully and decide what’s right. Sometimes I accept it almost verbatim. Sometimes I throw it out and try a different description. Sometimes the right move is to write it myself, because the thing I’m trying to express is subtle enough that the description would take longer than the code.
What hasn’t changed: I still need to understand the code. Reviewing generated code requires the same mental model as writing it. You can’t just run it and see if it works — well, you can, but that’s how you accumulate technical debt you don’t understand.
The output is faster. The understanding is still required.
10:30 AM — Stuck
TypeScript edge case. Forty minutes of debugging.
I ran it through Claude twice. Got reasonable-sounding explanations that didn’t actually solve the problem. Eventually figured it out by reading the TypeScript compiler output carefully and tracing the type inference chain myself.
This happens more than the productivity discourse suggests. AI is excellent at standard patterns. Edge cases — especially language-level edge cases that interact with your specific codebase — often require human debugging intuition. The kind you build by reading a lot of error messages over years.
The AI saved time elsewhere today. This forty minutes was all me.
12:00 PM — Information Consumption
I don’t browse feeds during the day. That changed about a year ago.
Instead: directed search. I have specific questions — what’s the current state of X, what did Y ship last week, is there a better approach to Z. I use Perplexity for these. Fifteen minutes, specific answers with citations, done.
Passive feed consumption never made me more informed in any durable way. Directed search does.
2:00 PM — Writing
Articles, documentation, internal notes. This is where the collaboration model is most visible.
My process: describe the structure and key arguments to Claude, get a draft, read it, rewrite heavily. The draft gives me something to react to, which is faster than starting from a blank document. But the voice, the specific opinions, the things I actually think — those aren’t in the draft.
Claude produces plausible text. It doesn’t produce my text. The editing process is where I figure out what I actually want to say, using the draft as a foil.
By the time an article is done, maybe 60% of the words are original. The structure is often Claude’s. The content is mine.
4:00 PM — Agent System Maintenance
This is the part AI can't help with much, and the reason why is instructive.
Maintaining agent systems — updating memory files, adjusting configurations, debugging why an agent made a wrong decision — requires understanding the full system context. The agents themselves don’t have that context about their own architecture. Neither does Claude when I describe the problem, because the relevant details live in a dozen files across a non-obvious file structure.
This work is slow. It requires me to hold the whole system in my head. AI tools help with isolated tasks within it, but the system-level thinking is mine.
Every complex system has this layer. AI handles well-defined subtasks. The architecture and the judgment calls are still human work.
6:30 PM — Pre-Shutdown Archive
I update daily.md files before shutting down. Log what happened, what decisions were made, what I learned.
This used to feel like overhead. Now it feels like direct payoff, because the logs feed back into the agent memory system. Tomorrow’s agents will know what happened today. The context doesn’t evaporate.
The habit formed because the benefit is immediate and visible. Good incentive design by accident.
What Actually Changed
AI compressed the execution layer. Implementation that used to take a day takes a few hours. Boilerplate is cheap. Standard patterns are cheap. The gap between “I have an idea” and “I have a working prototype” is much smaller.
The judgment layer is unchanged. What to build, why, for whom, in what priority order — none of that got easier. If anything, faster execution makes the judgment calls more consequential. You can pursue a wrong idea much faster now.
The unexpected side effect: more experiments. When execution is cheap, the threshold for “let’s try it” drops. I run experiments now that I would have skipped before because the cost wasn’t worth the information. Some of them turn out to matter. That’s new.
The day is faster. The thinking is the same. The ratio of thinking to execution shifted dramatically, and that’s the real change — not any specific tool.