
How to make AI writing sound like you

AI drafts are fast but generic. Here is the full breakdown of how to get AI output that actually carries your personal writing style.

Yourtone · 5 min read

You asked ChatGPT to write an email. It wrote one. It is grammatically perfect, well-structured, and sounds like it was written by no one in particular.

This is the default state of AI writing. The output reflects the statistical center of the model's training data. It is average by design. Making it sound like you requires more work than most people expect.

The manual method

The most common approach is prompt engineering. You tell the AI how you write, and it tries to follow your instructions.

A basic version looks like: "Write in a casual, conversational tone with short sentences." A more advanced version includes example passages, vocabulary preferences, and structural constraints.

Zapier published a widely-read guide on this. The process boils down to four steps: gather writing samples, ask ChatGPT to analyze them, paste the analysis into Custom Instructions, and iterate until the output sounds right.

It works for a few messages. Then it drifts.

The problems are structural, not effort-related. ChatGPT's Custom Instructions field has a character limit. You cannot fit a real voice description into that space. And even when the instructions are well crafted, the model's adherence weakens over long conversations. By message 15 or 20, the voice instructions are competing with accumulated context. The context usually wins.

Custom GPTs give you more room, but the drift still happens. The voice profile is a static text block sitting inside a context window that grows with every exchange. As the window fills, the model pays less attention to the instructions at the top.

Why instructions are not enough

Instructions describe your voice. They do not encode it.

"Write short punchy sentences with informal vocabulary" is a description. It tells the model a direction but not a destination. The model interprets "short" based on its training data, not based on your actual sentence lengths. Its version of "informal" might be different from yours. The instruction creates a general vibe, not a precise replication.

The difference matters because your voice is specific. It is not just "short sentences." It is your ratio of short to medium to long. It is not just "informal." It is the specific informal words you use: your filler phrases, your go-to verbs, your preferred contractions. Instructions flatten this specificity into generalizations.

And instructions do not capture frequency. Your voice profile includes words you use constantly (core register), words you use sometimes (signature phrases), and words you use rarely for emphasis (rare vocabulary). If the AI scatters your rare words throughout every paragraph, it sounds like a parody. Getting frequency right requires data, not instructions.
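To make the tier idea concrete, here is a minimal sketch of how frequency tiers could be derived from raw word counts. The tier names match the ones above; the count thresholds are arbitrary illustration, not calibrated values.

```python
from collections import Counter
import re

def vocabulary_tiers(text, core_min=5, signature_min=2):
    """Bucket words into rough frequency tiers by raw count.

    core: words used constantly; signature: words used sometimes;
    rare: words that appear only once or twice. Thresholds are
    illustrative and would need tuning against a real sample.
    """
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(words)
    tiers = {"core": [], "signature": [], "rare": []}
    for word, n in counts.items():
        if n >= core_min:
            tiers["core"].append(word)
        elif n >= signature_min:
            tiers["signature"].append(word)
        else:
            tiers["rare"].append(word)
    return tiers
```

On a real corpus you would normalize counts by total word volume rather than use absolute thresholds, but the bucketing logic is the same.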

What actually works

Making AI sound like you requires three things that instructions cannot provide:

Pattern extraction from real samples. Instead of you describing how you write, the system reads your actual writing and identifies the patterns. Sentence rhythm ratios. Vocabulary frequency tiers. Punctuation habits. Opening and closing patterns. Structural tendencies. This is closer to what forensic linguists do when identifying anonymous authors.
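As a toy version of the rhythm-ratio extraction described above: split a sample into sentences and bucket them by word count. The short/long boundaries here (8 and 20 words) are assumptions for illustration; a real system would fit them to the writer's own distribution.

```python
import re

def rhythm_ratios(text, short_max=8, long_min=20):
    """Classify sentences as short/medium/long by word count
    and return the proportion of each. Boundaries are illustrative."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    buckets = {"short": 0, "medium": 0, "long": 0}
    for s in sentences:
        n = len(s.split())
        if n <= short_max:
            buckets["short"] += 1
        elif n >= long_min:
            buckets["long"] += 1
        else:
            buckets["medium"] += 1
    total = len(sentences) or 1
    return {k: v / total for k, v in buckets.items()}
```

The same counting approach extends to punctuation habits and opening patterns: tally occurrences, divide by the total, and compare against the model's output.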

Persistence across sessions. Your voice profile needs to survive beyond a single conversation. It should not degrade as the context window fills. It should exist as a stable reference that every rewrite draws from, regardless of when or where you use it.

A feedback loop. Every time you approve output that sounds right, or flag output that sounds wrong, that feedback should refine the profile. Over time, the system learns not just your patterns but your preferences: which of its interpretations landed and which missed.

The practical approach

If you want to do this manually, here is a more thorough process than the basic prompt engineering approach:

1. Collect at least 1,000 words of your natural writing. Not your best writing. Your default writing. The stuff you produce when you are not thinking about style. Emails, messages, notes, casual posts.

2. Analyze the patterns yourself. Count sentence lengths across 30 sentences. Note your most-used words. Identify your opening habits. Look at your punctuation: do you use colons? Semicolons? Parentheticals? Dashes? Ellipses?

3. Build a structured prompt. Not "write casually" but specific: "Average sentence length: 12 words. Ratio: 55% short, 30% medium, 15% long. Core vocabulary: just, actually, look, thing, kind of. Never use: leverage, facilitate, synergize. Opening pattern: direct statement, often a fragment. Paragraph length: 2-4 sentences."

4. Include 5-10 representative excerpts. Actual passages from your writing, organized by function: default rhythm samples, opening samples, transition samples, closing samples. The AI will pattern-match against these better than against abstract descriptions.

5. Test with the same input across sessions. Paste the same paragraph and rewrite it three times. If the outputs are inconsistent, the profile needs tightening. If they converge, the profile is working.
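Step 3 is easier to keep consistent if you generate the structured prompt from your measured numbers instead of writing it by hand. A rough sketch, where the profile field names are hypothetical:

```python
def build_style_prompt(profile):
    """Assemble a specific style instruction block from measured stats.

    `profile` is a plain dict; its keys (avg_len, ratios, core_words,
    banned_words, para_range) are hypothetical names for this sketch.
    """
    lines = [
        f"Average sentence length: {profile['avg_len']} words.",
        "Ratio: {short:.0%} short, {medium:.0%} medium, {long:.0%} long."
            .format(**profile["ratios"]),
        "Core vocabulary: " + ", ".join(profile["core_words"]) + ".",
        "Never use: " + ", ".join(profile["banned_words"]) + ".",
        f"Paragraph length: {profile['para_range']} sentences.",
    ]
    return "\n".join(lines)
```

Regenerating the prompt from the same profile before every session is what makes the step-5 consistency test meaningful: if the outputs still diverge, the profile itself is what needs tightening.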

This takes time. Serious time. Most people give up after step 1.

The automated version

Yourtone runs this process automatically. You upload writing samples, and the system extracts the pattern structure: rhythm, vocabulary tiers (core, signature, rare, never-use), punctuation, openings, closings, paragraph habits. It stores the profile persistently, so it does not degrade across sessions.

When you paste text for rewriting, the system applies your profile to the output. Every rewrite you approve feeds back into the profile. The more you use it, the sharper it gets.

The difference from prompt engineering is that you are not describing your voice. The system is learning it from evidence. That distinction matters because most writers cannot accurately describe their own patterns. You know how your writing feels, but you probably cannot quantify your sentence-length distribution or list your vocabulary frequency tiers. The system does that work for you.

What "sounds like you" actually means

It does not mean the output is identical to something you would write by hand. It means the output belongs in the same family. A reader familiar with your writing should not be able to tell whether you wrote it from scratch or ran it through a tool. The rhythm, vocabulary, and structural choices should feel natural to your patterns.

A good test: read the output out loud. If it sounds like something you would say, the voice profile is working. If it sounds like something a slightly more formal version of a stranger would say, it is not.

The bar is high. But once you cross it, the value is clear. You get to produce writing at AI speed with your voice intact. That is a meaningful upgrade over fast text that sounds like everybody else.

Your voice is already there.
Let's find it.

Start with your own writing samples. Yourtone does the rest.

Start today; your trial runs until April 27. Cancel anytime.