
Take Back Your Thinking


At First, It Felt Like a Partner

When LLMs first arrived, I plugged one into my note-taking workflow almost immediately. It was genuinely compelling, and not just in one way.

It could take my messy, half-formed sentences and turn them into something clear and articulate. It could scan hundreds of scattered notes and surface connections I'd never spotted on my own. It could even pull up something I'd written three months ago and put it in dialogue with what I was writing today, sparking new ideas like a conversation with my past self.

It felt like having a thinking partner who never got tired, always had something to say, and remembered everything I'd ever written.

For a while, my notes looked better. Ideas got fleshed out more completely, and connections between notes became richer. I even felt like I was thinking more deeply, because the notes looked deeper. It seemed like AI could take me further than I could go alone.

That feeling lasted about six months.

The Illusion Wore Off

One day, I went back to my notes looking for a thought I'd had about a particular book. I found it. The AI-polished note was smooth, well-structured, and used expressions I wouldn't normally reach for.

But I stared at it and realized something: I couldn't remember if I'd actually thought this.

The text was too clean. It read like a smart person's standard take on that passage: reasonable, articulate, correct. But it didn't sound like me. What I remembered from that moment was a vague feeling connected to something in my own life, something I couldn't quite put into words. The AI hadn't written that uncertain part. It gave me a tidy answer and skipped right past my real confusion.

That's when I started wondering: over the past six months, how much of my "thinking" had actually been mine?

Each One Compelling. Each One at a Cost.

Looking back carefully, AI entered my thinking in more than one way. Each felt exciting. Each carried the same hidden risk.

"Help me write my thoughts more clearly." This might be the most subtle one. You have a vague feeling, you write a few clumsy sentences, AI polishes them into something elegant. You read the result and think: yes, that's what I meant. But is it? Often your original had rough edges, and those rough edges contained your most genuine confusion. AI smoothed them away. The text looked better, but the part that was most authentically yours got erased.

"Help me find connections across my scattered notes." Sounds incredible. You have hundreds of notes, AI finds cross-topic links in seconds. But discovering connections is itself the most valuable part of thinking. That moment when you're flipping through old notes and your brain suddenly goes "wait, these two things are related"—that's real understanding happening. Hand that process to AI and you get a beautiful graph of connections but skip the entire process of insight.

"Have a dialogue with my past self." This one captivated me the most. AI pulls up an idea you had three months ago, puts it next to today's thinking, new sparks fly. It feels like an intellectual conversation with a former version of yourself. But think carefully: that "collision" was manufactured by the AI, not something that arose naturally from you revisiting your own notes. AI decided which old ideas were relevant. AI constructed the framework for the dialogue. You thought you were talking to yourself. You were watching a play the AI directed for you.

Every one of these scenarios makes you feel like you're thinking, like you're growing. But if you stop and honestly ask: is this real? What's the cost?

The answer comes into focus slowly. The cost is that your own thinking is being peeled away, one layer at a time, and you barely notice.

Doing Things vs. Thinking Things

I'm not against AI. I use it every day for writing code, looking things up, doing translations, and I'll keep using it.

But I've come to realize there's a line between "using AI to help you do things" and "using AI to help you think," and that line matters more than I assumed.

Tasks can be delegated: formatting a document, organizing data, translating a paragraph from one language to another. These are perfectly fine to hand off. AI is faster, more accurate, and the time saved is real.

But thinking is different. Thinking is the process of facing an idea, not knowing how to express it, and struggling to find your own words. That process can't be shortcut. It's slow, it's clumsy, and it often produces something half-finished. But the struggle is where understanding happens.

When AI writes your reflection for you, you get a beautiful paragraph and skip all the understanding. It's like having someone else do your workout. The reps get done, but your muscles don't grow.

The Uncomfortable Truth

Your own awkward, half-formed sentence is worth more than a perfect AI-generated paragraph.

I know that's uncomfortable to hear. We've internalized the efficiency narrative: automate what you can, optimize what's left, save time for the important stuff. In most domains, that's right.

But when it comes to personal thinking, the struggle is the important stuff. That sentence you wrote and deleted and rewrote and barely squeezed out contains your real confusion, your genuine uncertainty, your understanding as it exists right now. The AI version might read better, but it doesn't have any of that.

Remove the struggle and you remove the meaning.

Two Years Later, I Started Pulling It Out

It wasn't a sudden decision. It happened slowly.

First I stopped letting AI polish my thoughts. Then I turned off the auto-connect feature that surfaced links across my notes. Eventually I removed all AI assistance from my note-taking workflow entirely.

It was uncomfortable at first. My notes went back to being short, rough, half-formed. I had to face that "I don't know what to say" moment by myself. What I wrote got shorter, clumsier, more incomplete.

But here's the strange thing: when I went back and read those notes, I recognized them. Every sentence was mine. The rough edges, the half-finished thoughts, even the slightly embarrassing phrasing, all of it felt more real than anything AI had polished for me. Because that's what I'd actually been thinking, no more, no less.

In a World of Generated Content

Now every tool has a button that generates something for you. Summarize this. Rewrite that. Polish this up. Every AI tool wants to write for you.

But if AI thinks for you, what's left that's actually yours?

This question deserves a serious answer. In a world saturated with generated content, your own thinking is the only thing that remains authentic. Not because it's better, but because it's yours. Your confusion is yours. Your uncertainty is yours. The sentence you stumbled through is yours. These things can't be generated.

Years from now, when you look back at your notes, you'll want to hear your own voice, not a model's output.

And this is only going to get harder. AI is getting stronger. It can do more every month. There will be more products, more "AI note-taking methodologies," more seamless integrations. Plugging AI into your reading, your notes, your entire knowledge system will keep getting easier, the barrier lower, the temptation greater.

Every new tool will tell you: let me help you think. Every one of them will sound reasonable.

But remember to ask yourself: in this process, what am I losing?

I hope you come to see it clearly, in your own time. Your own thinking is not something to be taken from you. That's the line.

An Honest Suggestion

Try using a little less AI in your notes.

Not zero. Not anti-technology. Just in that most personal moment, when you're facing a passage that moved you and trying to write down your own thoughts, turn the AI off. Let yourself be uncomfortable for a while. Let yourself write slowly, write badly.

That discomfort is you, thinking.

Take it back. You'll thank yourself in a few years.