Tips for using AI to create content at scale

Header image created with DALL-E 3

“We need to be using AI more. Make it happen.”

In 2023, over half of the in-house marketers we surveyed told us they’re under pressure from their boss to use AI to create content faster and cheaper. (Check out more results from that survey, if you’re curious.)

To truly capture the exponential efficiency gains that AI promises, though—and the saved time and money that business leaders are pressuring their teams for—one writer going back and forth with ChatGPT to draft a blog post isn’t going to cut it.

The techniques we’re going to discuss here are focused on content at scale. Think hundreds of local landing pages, thousands of product descriptions for an e-commerce website, or dozens of new service pages for agency clients.

AI is perfect for these use cases because using humans to write thousands of product descriptions, for example, is cost-prohibitive, but having those descriptions is still important for SEO. More importantly, by using AI to craft high-quality descriptions, you can actually provide value to your audience in an area where your competitors are likely just using Mad Libs-style templates—if they’re doing anything at all.

Trying to use ChatGPT out of the box for this, though, will drive you nuts. Combine the API of your favorite LLM with an automation tool like Zapier plus some advanced prompting, and you’ve got magic.

This is an example of an extremely basic flow I’ve used in Make, a Zapier alternative. What you can’t see is that the API call to Anthropic includes a sequence of 13 different prompts and responses.
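For illustration, here’s roughly what the scripted version of a flow like this looks like, stripped down to a single prompt per article. This is a minimal sketch, assuming the Anthropic Python SDK with an API key in your environment; the model name and topic list are placeholders of my own, not the actual 13-prompt sequence.

```python
import anthropic

# A minimal sketch: iterate over topics (a stand-in for rows pulled from a
# spreadsheet or CMS) and collect one outline per topic.
client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

topics = ["brute force attacks", "phishing", "ransomware"]

outlines = []
for topic in topics:
    response = client.messages.create(
        model="claude-3-5-sonnet-20241022",  # assumption: use whichever model you prefer
        max_tokens=1024,
        messages=[{
            "role": "user",
            "content": f"Write an outline for a 1500-word article about {topic}.",
        }],
    )
    outlines.append(response.content[0].text)
```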

This type of process doesn’t work for all tasks. With the right use case, though, leveraging AI in this way allows for exponential efficiency gains, rather than the marginal gains that come from an individual user working with ChatGPT directly.

The challenge of this heavy-duty usage is that it requires a deeper understanding of how to get what you want from AI. You no longer have an individual person going back and forth with an LLM in an interface, able to adjust prompts and ask for changes on the fly. You need to build prompts that work for all the different situations in which they’ll be used and that get you the results you need, even when there may not be human intervention until the final step. 

We’ve been using AI in this way for certain tasks since September 2022 and have learned a ton about what works and what doesn’t. Using these six tips can save you hundreds of hours of work and dramatically improve the quality of your content. Some of these are good strategies no matter how you’re using AI. When you’re using it at scale, though, they become that much more important and might require you to think about them in slightly different ways.

1. Use variables to customize prompts

This is the basic building block of prompting at scale. Rather than using a prompt like “Write an outline for a 1500-word article about brute force attacks,” the prompt you write will be something like this:

USER: Write an outline for this article.
Topic: {topic}
Word length: {word_length}

When running this prompt, I’ll then replace the variable {topic} with “brute force attacks” and {word_length} with “1500.” (Well, *I* won’t—the program or Zap I’ve created will do it for me. That’s the beauty of using prompts programmatically.)
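If you’re scripting the substitution yourself rather than relying on a Zap, it’s a one-liner with Python’s built-in string formatting. A minimal sketch (the OUTLINE_PROMPT and build_prompt names are mine, for illustration):

```python
# Prompt template with variables, mirroring the example above.
OUTLINE_PROMPT = (
    "Write an outline for this article.\n"
    "Topic: {topic}\n"
    "Word length: {word_length}"
)

def build_prompt(template: str, **variables) -> str:
    """Fill a template's {placeholders} with this article's values."""
    return template.format(**variables)

print(build_prompt(OUTLINE_PROMPT, topic="brute force attacks", word_length=1500))
```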

Creating prompts that use variables in this way requires a “greatest common factor” approach. For any given topic, there will no doubt be a prompt that could do better—but it wouldn’t do as well for other topics. The goal is to find a prompt template that works decently well across all the topics you’re writing for, and then make use of variables to customize it.

🏠 In-house marketers: If you’re creating content for a single brand, your prompts can likely be more specific because all the content you’re creating falls within the same industry and should follow the same style guide. 

🗂️ Agency folks: If you’re creating content for multiple clients, your prompts will have to be more general, and you’ll likely make greater use of variables for things like voice and industry.

2. Use right-sized buckets

While testing a prompt to create outlines for articles of different lengths, I might find a prompt that works well on long articles but not on shorter ones. At that point, I’d have to decide whether I want to maintain two different prompt flows and send articles one way or another based on their word count, or find a prompt that works decently well for both.

This is the constant tension when using prompts at scale: How big should your “buckets” of use cases be? The larger your bucket, the more variables you’ll need to use to customize the prompt for each use case. You may also see a decrease in quality because you’ll be using the same prompt in situations where a different prompt would perform better. You will, however, save time on testing and building different flows.
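To make that routing decision concrete, here’s a minimal sketch of the two-flow version in Python. The 1,000-word cutoff and the template wording are arbitrary choices of mine for illustration:

```python
# One template per bucket (wording abbreviated for illustration).
SHORT_OUTLINE_PROMPT = (
    "Write a tight outline for this short article.\n"
    "topic: {topic}\nword count: {word_count}"
)
LONG_OUTLINE_PROMPT = (
    "Write a detailed outline with subheadings for this article.\n"
    "topic: {topic}\nword count: {word_count}"
)

def pick_template(word_count: int) -> str:
    """Send an article down the short or long flow; the cutoff is arbitrary."""
    return SHORT_OUTLINE_PROMPT if word_count < 1000 else LONG_OUTLINE_PROMPT

prompt = pick_template(800).format(topic="brand ambassadors", word_count=800)
```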

Here’s an example at one end of the spectrum: Instead of using a dedicated prompt to create outlines, I could use a single prompt to create outlines, introductions, articles, etc. That prompt might look something like this:

USER: Write an {content_type} for this article.
topic: {topic}
word count: {word_count}

In addition to filling in the other inputs like topic and word count, I would then also replace {content_type} with “outline” or “introduction,” depending on what I needed.

In my experience, the time I might save from only having to build a single prompt flow for all those content types is not worth the drop in quality I would see across some of them. In other words, that bucket of use cases would be too big.

On the other end of the spectrum, I could use a different outline prompt for every word count increment. One of those prompts might look like this:

USER: Write an outline for this 1000-word article. There should be about six main sections, with appropriate subheadings in each of them.
topic: {topic}

By using a different prompt for each word count and designating the approximate number of sections each should have, I might get better (or at least more consistent) outlines, but would they be enough of an improvement to merit the time spent building those different flows? Probably not.

In this case, my bucket of use cases would be too small, and I’d be doing a lot of unnecessarily repetitive work. I could probably use a single prompt for every word count (by including the {word_count} variable) and still get close to the same results simply by reminding the LLM to make sure the outline is an appropriate length for the word count.

Despite the officialness of my Goldilocks diagram, it’s worth noting that the right-sized bucket isn’t universal. If you’re creating two types of articles that should each have a very different structure—for example, tutorial articles that follow a step-by-step format with a list of what you’ll need at the top vs. case studies that follow a problem/solution/results format—using a single prompt for “outlines” could be too big a bucket for you. Instead, you’d likely want to create two different prompts for those two kinds of articles, with each prompt detailing the specifics of the format you’re looking for. In this case, the improvement in results would be worth the effort of building those two flows.

The most extreme example of a small bucket is no bucket at all, i.e., using a unique prompt every single time. Again, in many cases you can get better results that way, but you lose all the efficiency gains of using AI at scale.

3. Take advantage of few-shot prompting

“Show, don’t tell” is one of the golden rules of prompting. You can get better results by providing a few examples of what you’re looking for, rather than trying to describe what you want. This is known as “few-shot” prompting, as opposed to “zero-shot” prompting, which is when you ask the model to do something without including any examples of what you want.

Let’s say I work for an agency creating content for twelve different clients, and I’m using AI to create outlines. I want every outline to have a few common elements:

  1. The first heading should be “Introduction”
  2. The second heading should be “What is ” + the primary keyword
  3. The last heading should be a call to action that references the client’s business

The best way to get AI to give me what I need consistently is to include examples of what I’m looking for in the prompt. Each example outline should meet all those requirements and show what a “good” outline looks like to me.

My final prompt might look something like this:

USER: Write an outline for this article.
topic: Understanding the Risk of Brute Force Attacks
word count: 1200
primary keyword: brute force attack
business: Hank's Digital Security Solutions

1. Introduction
2. What is a brute force attack?
3. Common types of brute force attacks
  A. Credential stuffing
  B. Password cracking
  C. Distributed brute force attacks
4. Impact of Brute Force Attacks
  A. Data breaches
  B. Financial losses
  C. Reputational damage
5. How to protect your business against brute force attacks
  A. Strong password policies
  B. Two-factor authentication
  C. Account lockout mechanisms
  D. Intrusion detection systems
6. Prevent Brute Force Attacks with Hank's Digital Security Solutions

Write an outline for this article.
topic: How a Brand Ambassador Can Boost Your Marketing
word count: 800
primary keyword: brand ambassador
business: Magic Marketing Solutions

1. Introduction
2. What is a brand ambassador?
3. The role of a brand ambassador in marketing
  a. Raising brand awareness
  b. Engaging with your audience
4. What to look for in a brand ambassador
  a. Professionalism and positivity
  b. Passion for the brand
  c. Excellent people skills
5. Find Your Next Brand Ambassador with Magic Marketing Solutions

Write an outline for this article.
topic: {topic}
word count: {word_count}
primary keyword: {keyword}
business: {business_name}

LLMs are great at following patterns. By including examples of what I’m looking for, the model will pick up on the common elements—that the first heading is always “Introduction,” the second is always “What is” followed by the keyword, and the final section always mentions the business—and incorporate those elements in the outlines it writes.
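If you’re assembling these prompts in code rather than pasting examples together by hand, the examples can live in a list that gets concatenated ahead of the live request. A minimal sketch, with the outlines truncated for space and all names mine:

```python
# Each example pairs a filled-in request with the "gold" outline to imitate.
EXAMPLES = [
    {
        "topic": "Understanding the Risk of Brute Force Attacks",
        "word_count": 1200,
        "keyword": "brute force attack",
        "business": "Hank's Digital Security Solutions",
        "outline": "1. Introduction\n2. What is a brute force attack?\n...",  # truncated
    },
    # ...second example here...
]

REQUEST = (
    "Write an outline for this article.\n"
    "topic: {topic}\nword count: {word_count}\n"
    "primary keyword: {keyword}\nbusiness: {business}"
)

def few_shot_prompt(topic: str, word_count: int, keyword: str, business: str) -> str:
    """Concatenate example request/outline pairs, then append the live request."""
    parts = []
    for ex in EXAMPLES:
        parts.append(REQUEST.format(
            topic=ex["topic"], word_count=ex["word_count"],
            keyword=ex["keyword"], business=ex["business"],
        ))
        parts.append(ex["outline"])
    parts.append(REQUEST.format(
        topic=topic, word_count=word_count, keyword=keyword, business=business,
    ))
    return "\n\n".join(parts)
```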

One important note here is to use a range of examples in your prompt, lest the model pick up on a pattern you hadn’t intended to convey. If, for example, you use three examples that are all “how to” articles with a numbered list of steps to teach the reader how to do something, it might try to follow that same pattern of creating a step-by-step guide even when asked for a different type of article. (To think of it in terms of “buckets” again: This is a situation where, depending on how many different types of articles you’re creating and how distinct they are, you may want to use separate prompts, each with their own examples, rather than a single prompt for all of them.)

Bootstrapping your way to good examples

Coming up with a few examples of “good” to use in your few-shot prompts can feel like an annoying waste of time. Depending on what I’m trying to create, I’ll often use AI to bootstrap my way there:

  1. I’ll ask ChatGPT for an outline without providing any examples. I’ll then edit that outline extensively until it aligns completely with what I’m looking for. 
  2. I’ll add that edited outline as an example in my original prompt and ask ChatGPT for another outline. The single example will help it get closer to what I want, though I will still need to spend some time editing this second outline as well.
  3. I’ll use both edited outlines as two examples in my prompt, and ask ChatGPT for a third. This time, the output should be even closer to what I’m looking for, and I can likely spend less time editing it.

I’ll repeat this process—using as many edited examples as I have in my prompt to generate the next one, and editing each new one until it’s “perfect”—until I have as many examples as I want.

It’s worth noting that I likely wouldn’t use this approach for something like introductions. I would want those examples to be fully human in order to get the best outputs from my prompt going forward—otherwise what I get back will be more likely to sound like AI. For something like outlines, though, which are less about the voice and word choice and more about the logical organization of information, AI can give me a decent starting point for those examples.

4. Ask for an analysis first

Also known as “giving the model time to think,” you can get better results by asking the model to analyze the task at hand before providing you with any deliverable. 

Here’s an example of what that could look like:

USER: You will be writing an outline for a given topic.

First, analyze the searcher intent. Consider the specific information that readers are seeking based on the topic and keyword. Think about how to optimize the headings for SEO. Provide your analysis inside <analysis></analysis> tags.

Then, create an outline that addresses the topic with specific headings and subheadings, ensuring that each section will directly answer the searcher's intent and contribute to the content goal. Provide the outline itself inside <outline></outline> tags.

topic: {topic}
keyword: {keyword}
content goal: {goal}
target audience: {audience}
word length: {word_count}

By telling the model to put its analysis inside <analysis></analysis> tags and the outline itself inside <outline></outline> tags, I can easily parse the response to get only the part I care about, i.e., the outline.
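The parsing step is a simple regular expression. A minimal sketch (extract_tag is my own helper name):

```python
import re

def extract_tag(response_text: str, tag: str) -> str:
    """Return the contents of the first <tag>...</tag> block, or "" if absent."""
    match = re.search(rf"<{tag}>(.*?)</{tag}>", response_text, re.DOTALL)
    return match.group(1).strip() if match else ""

response_text = (
    "<analysis>Searchers want a plain-English definition first...</analysis>\n"
    "<outline>1. Introduction\n2. What is a brute force attack?\n...</outline>"
)
outline = extract_tag(response_text, "outline")    # the part I keep
analysis = extract_tag(response_text, "analysis")  # useful while testing
```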

When using the prompt at scale, I can ignore the analysis and only save the outline itself. While testing prompts, however, it can often be helpful to look at the <analysis> portion of the response as a way of understanding how the model is approaching the problem. If it’s referring too much to one portion of your instructions and missing something else, for example, that could be a sign that you’re trying to have it meet too many requirements at once. In that case, you should pare the prompt down to focus on the most important requirements.

5. Use a prompt chain rather than a single prompt

A prompt chain is when you use multiple prompts in a row, threading them together to provide more context for the model.

This approach can often get you better results than using a single prompt. It can be especially helpful in breaking down different things you want the model to focus on. I’ve found it works best if my first prompt focuses on the general principles I want the model to follow for whatever the piece of content may be, and the second prompt focuses on specifics that I do or don’t want.

For example, using the prompt above as my first prompt, I would get an outline that has taken the searcher’s intent into account and contains the most valuable information a reader would be looking for. However, I might have some other requirements I want the outline to follow, too, and I could include these in a second prompt.

USER: You will be writing an outline for a given topic.

First, analyze the searcher intent. Consider the specific information that readers are seeking based on the topic and keyword. Think about how to optimize the headings for SEO. Provide your analysis inside <analysis></analysis> tags.

Then, create an outline that addresses the topic with specific headings and subheadings, ensuring that each section will directly answer the searcher's intent and contribute to the content goal. Provide the outline itself inside <outline></outline> tags.

topic: {topic}
keyword: {keyword}
content goal: {goal}
target audience: {audience}
word length: {word_count}

ASSISTANT: {model's response, containing both the analysis and the actual outline}

USER: Now review the outline. Make the following changes as necessary:
- The first section heading should be "Introduction."
- There should not be any references to case studies or testimonials.
- Make sure the outline is tailored to the topic, providing specific names where applicable. Avoid using placeholders like "Item 1" or "Service A" and instead use actual names and descriptions that are current and relevant.
- Make sure the outline is appropriate for the requested word length and not too long.

Return only the revised outline inside <outline></outline> tags.

In my second API call, I would include all three of these messages in order (user, assistant, then user).
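Here’s a minimal sketch of what those two API calls could look like with the Anthropic Python SDK. The model name and the placeholder prompt strings are mine; in practice, outline_prompt and revision_prompt would be the two prompts shown above with their variables filled in:

```python
import anthropic

client = anthropic.Anthropic()
MODEL = "claude-3-5-sonnet-20241022"  # assumption: swap in whichever model you use

outline_prompt = "..."   # the first prompt above, variables filled in
revision_prompt = "..."  # the "Now review the outline" prompt above

# First call: the model analyzes intent and drafts the outline.
first = client.messages.create(
    model=MODEL,
    max_tokens=2000,
    messages=[{"role": "user", "content": outline_prompt}],
)

# Second call: replay the conversation so far, then append the revision request.
second = client.messages.create(
    model=MODEL,
    max_tokens=2000,
    messages=[
        {"role": "user", "content": outline_prompt},
        {"role": "assistant", "content": first.content[0].text},
        {"role": "user", "content": revision_prompt},
    ],
)
final_outline = second.content[0].text
```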

By breaking up the requirements of what I want in this way, I’ve found the final outline does a much better job of meeting them than if I put them all into a single prompt. You’ll also notice I put all of the specific formatting requirements into the second prompt. That’s because if you split them between the two prompts, the model can inadvertently “undo” requirements from the first prompt when revising the outline in the second.

Using multiple prompts is also an easy way to allow your flows to work across more use cases. If I were an agency, for example, I might use the same first prompt across all my clients, and then customize only the second prompt with each one’s specific style guide requirements.

Note on the last line in the second prompt: It’s possible the original outline could already meet all my requirements. In that case, if I hadn’t included that final line about returning only the outline inside tags, the model might respond with something like “This outline is well-suited to the topic. It includes specific names and is appropriate for the word count” etc. Because I’m using these prompts at scale and will be delivering the output of the final prompt directly to the customer, I don’t want it to tell me the outline already meets my requirements—I just want it to give me the final outline. By specifying that I only want it to return the revised outline, I’m ensuring that I’ll get a consistent output I can use without having to review it.

6. Test your prompts

I’ve already written about my process for testing prompts, and I highly recommend coming up with your own system if you don’t yet have one.

LLMs have gotten good enough that the first prompt you try will likely get you an okay response. But when you’re using AI at scale, the difference between “okay” and “great” on hundreds or thousands of outputs can add up to hundreds of additional hours of human work needed to make your content publish-ready. It’s well worth the time spent testing to find the prompt that gets you “great” out of the gate.

It’s also important to remember that LLM behavior can change over time, so a prompt that works for you one month might get different results later. Test early, test often.
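A lightweight way to start: run each candidate prompt over the same handful of test topics and save the outputs side by side for review. A hedged sketch, with the prompt versions, topics, and CSV format all my own choices:

```python
import csv
import anthropic

client = anthropic.Anthropic()
MODEL = "claude-3-5-sonnet-20241022"  # assumption: use your model of choice

def call_llm(prompt: str) -> str:
    """Send a single user prompt and return the text of the response."""
    response = client.messages.create(
        model=MODEL,
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.content[0].text

TEST_TOPICS = ["brute force attacks", "brand ambassadors", "email deliverability"]
PROMPTS = {
    "v1": "Write an outline for a {word_count}-word article about {topic}.",
    "v2": "Write an outline for this article.\ntopic: {topic}\nword count: {word_count}",
}

# Save every (prompt version, topic) output for side-by-side comparison.
with open("prompt_test_results.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["prompt_version", "topic", "output"])
    for version, template in PROMPTS.items():
        for topic in TEST_TOPICS:
            output = call_llm(template.format(topic=topic, word_count=1200))
            writer.writerow([version, topic, output])
```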


Using AI to create content at scale is an entirely different ballgame than becoming a ChatGPT power user. If you’re working on your process and want to chat strategies, roadblocks, writing your own Python code to access the OpenAI API, the risk of human extinction by AI, or anything else, reach out at megan@verblio.com.

If you don’t actually want to deal with this stuff yourself but need to get your boss off your back about using AI, check out our hybrid human-AI content to get all the efficiencies of AI without having to write a single prompt.


Megan Skalbeck

Megan has been following the world of AI since the initial GPT release in 2018. As Director of AI & Strategic Initiatives at Verblio, she's responsible for figuring out the best ways to blend the capabilities of artificial intelligence with the quality of our human freelance writers. When she's not doing tech things, she's making music, writing existentialist fiction, or getting reckless on two wheels.
