
5 Ways We're Not Using Generative AI - And What We're Doing Instead

by Hannah Beatty | June 27, 2024 at 7:30 AM

While 2024 has brought excitement around Generative AI and opened up opportunities for marketers, not all Generative AI tools currently satisfy the needs of actual clients and their growth goals.

By now, we’ve all spent some time asking ChatGPT which character from The West Wing is the best (and we must admit, C.J. Cregg is the right answer), but the fun we have playing around in a tool doesn’t mean it’s going to generate meaningful content. 

That’s why we’ve determined that many Gen AI use cases trending with marketers are actually not quite ready for client-facing work, and might not be ready for your marketing program either.

When we deliver client work, we expect that work to be excellent above all else. Beyond that, it must be accurate, original, specific to the client’s goals, and free from any legal questions.

Generative AI may occasionally produce work that looks excellent, but underneath the surface, the work can be inaccurate, biased, or duplicative, falling well outside clients’ expectations.

With that in mind, this article will explain:

  • Our main concerns with Generative AI
  • 5 ways we’re not using Generative AI
  • What we are using Generative AI for instead (because we still want to have a little fun now and then)

Before we get into this post (feel free to skip down), we want to define a few terms:

  • Generative AI: “Generative AI refers to deep-learning models that can generate high-quality text, images, and other content based on the data they were trained on.” - IBM

  • Open Source AI: “Open-source artificial intelligence (AI) refers to AI technologies where the source code is freely available for anyone to use, modify and distribute.” - IBM

  • Closed Source AI: “Closed-source refers to a software development approach where an application’s source code is proprietary and not publicly available.” - Deloitte

  • Large Language Models (LLM): “Large language models (LLMs) are a category of foundation models trained on immense amounts of data making them capable of understanding and generating natural language and other types of content to perform a wide range of tasks.” - IBM

Generative AI Concerns We Can't Overlook

Of course, we’re not advocating for complete abstinence regarding Generative AI: We recognize that tools can broadly provide real value to marketers, writers, designers, developers, and organizations. 

We believe we can balance Generative AI’s immediate benefits with thoughtful skepticism and critical thinking. This balance allows us to apply new technology when it is most effective for helping our clients meet their goals.

A tall order? Maybe. But it’s worth it.  

No tool can claim to be perfect or without risks, but Generative AI tools raise 3.5 main concerns:

1. Lack of Data Privacy

We already know that open-source AI models are built on massive data scrapes of the internet, which include social media posts, websites, books, art, videos, and more. However, you may not know that feeding personal information into the tool can make that data available to anyone else who is also using the tool. 

So, seemingly innocuous uses of Generative AI tools to build presentations, social media posts, or reports can expose sensitive HR, health, and personally identifiable information (PII). By “feeding” this information (even your own code or design work) to these tools, you make it possible for that data to be used by AI outputs served to other users.

This not only crosses an ethical boundary (“Should we feed this data to an AI?”) but also a legal one (“Can we feed this data to an AI?”) depending on where you are in the United States or around the globe. 

In addition to data privacy concerns, there are real security threats: in March 2023, a ChatGPT bug allowed users to see someone else’s chat history (yes, including payment information). While this was resolved, a similar defect could occur again.

2. Inclusion of Inaccurate Information

Are you smarter than AI? In some cases, the answer is a resounding: YES. 

Generative AI tools are trained on existing web data. These tools then use predictive algorithms and probability to fill in the gaps. 
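To make the “probability fills in the gaps” idea concrete, here is a deliberately tiny sketch: a bigram word counter that predicts the next word by picking whichever word most often followed the current one in its training text. Real LLMs use neural networks over tokens, not lookup tables, but the core move is the same: no understanding, just statistics over what the model has seen.

```python
from collections import Counter, defaultdict

# Toy training text (a real model ingests billions of documents).
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word, or None if unseen."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # → "cat" (follows "the" twice; "mat" and "fish" only once)
```

Notice the model will happily emit “the cat” forever, whether or not a cat is relevant, which is a miniature version of why fluent-sounding output is not the same as accurate output.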

If you’ve been on Twitter for 20 seconds, you know that not all content on the internet is a) true, b) helpful, c) inclusive, or d) unbiased.

AI takes in just as much of the garbage information as it does the truly consequential information. And it weighs them equally. 

Because of that, AI tends to “hallucinate.”

Sometimes these hallucinations can be funny: Christian, one of our web developers, found an AI-generated image of “What Vermont looks like in March.” The image in question? Headless cows. At least they got the five fingers on each hand right this time.


But other times, the hallucinations can be more sinister and lead to devastating consequences (defamation suit, ineffective legal counsel, suggested self-harm to a patient). 

Despite pleas from their users, most Generative AI tools don’t cite their sources, making it difficult to verify information. If the tool you use to save yourself research time requires constant fact-checking, you might as well do the research yourself.

3. Intellectual Property Disputes

Who owns the internet? (answer: no one, everyone, maybe Al Gore). Who owns the output of Generative AI that scrapes the internet to find its answers? (answer: pending). 

The legal issues associated with Generative AI tools alone are enough for us to be wary. 

The first intellectual property issue concerns the exploitation of existing copyrighted work.

The internet sources used to train AI models include copyrighted works such as art, videos, books, products, and more. The producers of the original material are never cited or credited, allowing Generative AI to propose “original ideas” based on someone else’s work.

Content creators are taking legal action to protect their copyrights, and even Getty, one of the largest commercial image providers, has joined them.

OpenAI has argued that publicly available content on the internet (specifically in reference to the New York Times) can be used under the fair use doctrine. The fair use doctrine is the legal principle that existing work may be used when it is transformed through critique, repurposing, or “remixing.”

The next IP issue is our inability to copyright works created through AI. 

The U.S. Copyright Office’s stance is that “works created by AI without human intervention or involvement still cannot be copyrighted, as they fail to meet the human authorship requirement” (Ropes & Gray). 

Works can be eligible for copyright if there is human authorship that simply uses Generative AI as a tool or “assisting instrument.” You can read more about the U.S. Copyright Office’s guidelines here.

We are eager to see the outcome of the legal cases working their way through the courts.

Because the code, copy, and images we create are the property of the client, we’re not going to risk copyright infringement.

3.5 Honorable Mention: Volatile Search Engine Response

Another reason we aren’t sourcing content from Generative AI? The outputs can harm SEO strategy. 

Great content can boost your name and company to the top of the search engine results page. Google uses a concept called E-E-A-T (experience, expertise, authoritativeness, and trustworthiness) to evaluate which content should be served first. 

As we’ve previously mentioned, Generative AI’s experience, expertise, and trustworthiness can be shaky depending on the sources it was trained on.

Additionally, Google doesn’t take too kindly to plagiarized content…which is also a complaint against Generative AI. 

Because Google’s policies are constantly changing as it cracks down on the rampant Generative AI content used to “hack” the rankings, we’re hesitant to use it in our process until the guidelines are more concrete.

With our concerns out in the open, let’s get into the Generative AI applications we’re comfortable and uncomfortable with.

5 Ways We're Not Using Generative AI (And What We're Doing Instead)

1. Content Creation

Humans are still the best content creators. Their experience, unique sense of voice, and ability to discern context and truthfulness make them the ideal authors.

But that’s not to say that Generative AI can’t be a helpful tool for writers; in fact, Generative AI can be instrumental in planning, organizing, and ideating.

Here are a few ways we’re experimenting with and using Generative AI for blogs, pillar pages, and social media copy.

  • Inspiring a place to start. Generative AI can provide a variety of directions from a single topic. From there, a writer can research and choose which story path to pursue.

Perplexity is an AI research assistant that cites its sources.

  • Ideating titles or variations of titles. Balancing the total length of a sentence and the correct order of words can feel like playing Tetris. Generative AI can help jog an answer. As always, trust but verify.

We've noticed that Generative AI's titles rely heavily on subtitles. Writer beware.

  • Generating blog ideas from keywords.
    If a writer needs to work backward from a keyword list, Generative AI can be used in the brainstorming process to find the right stories to tell. (Of course, we know that competitors might be getting the same list, so there’s always room for iterating and adjusting).

  • Writing meta description variations for existing content.
    Repurposing content is a powerful marketing tactic. Sometimes writers get stuck in the same order of words they’ve already written for a meta description. Generative AI can serve as a brainstorming partner to craft something new that captures the spirit of the original. 

  • Summarizing written content.
    Summaries are a low-hanging fruit for Generative AI. Use these in social media captions, content library descriptions, or CTAs.

  • Narrating written content.
    HubSpot’s post narration module opens up another layer of accessibility for blogs. We love it! 

  • Correcting grammar and punctuation.
    We use Google’s spelling and grammar checker embedded within Google’s editor suite and Grammarly to ensure our writing is clear and correct.

2. Entire Graphic Design

We are not using Generative AI for ANY final client deliverables right now. 

For these three examples, we’re being picky about the Generative AI tools we’re using. Copyright infringement is a particular concern for images and designs. 

So, we’re only using tools that take in information ethically and legally. 

For example, Adobe’s Firefly has trained its model only “on Adobe Stock images, openly licensed content, and public domain content where the copyright has expired” (Domestic Data Streamers). 

Here’s how we’re using Generative AI:

  • Getting inspiration for concepts, colors, and patterns. Generative AI can expand a designer’s inspiration and frame of reference. But the designer always has the overall direction and strategy to choose the next steps.
  • Iterating on existing content in various colors, formats, and sizes. This use case doesn’t create designs from scratch but uses our human-generated design as a starting point and direct inspiration. 
  • Helping with complex and intricate patterns. Generative AI can help with background patterns and textures that would be time-consuming for a designer to create. These patterns are not a core feature of the overall design.

3. Entire UX Design

No surprise here - we’re also not using Generative AI for user experience design.

In the UX design field, there aren’t many great Generative AI options, per the Nielsen Norman Group.

In an April 2024 article, they shared that interviews with current designers found “zero design-specific AI tools in serious use by the professional UX designers we spoke with.” 

Right now, the designers interviewed are doing the same as we are: using Generative AI for research and inspiration.

As far as our UX designers are concerned, Generative AI is useful for the following:

  • Early stage planning
  • Ideating on user interview questions
  • Researching user personas (trusting but verifying, of course)
  • Identifying trends

Because UX design is such a uniquely human concept and practice, we’re sticking by the expertise of our designers.

4. Development and Engineering

After copywriting, coding has been the second most cited use case for Generative AI. Still, we have particular philosophical objections to Generative AI for development and engineering.

First, when you check your own code, you learn from your mistakes. Education through mistakes is a key way to evolve and grow as a developer. 

Tools that check the code automatically remove the learning opportunities in trial and error. 

Secondly, and perhaps more importantly, the code we give our clients is their proprietary information. Using Generative AI tools to create the code we give them is both an ethical and legal hornet’s nest. 

So here’s the way our developers are using Generative AI:

  • Task planning. Generative AI can help with the order of operations in a complicated process. 
  • Coaching. Developers can ask “How might you approach X problem?” and then Generative AI can help inspire a process or solution.
  • Writing the first draft of documentation. So long as the documentation doesn’t need the code itself, Generative AI can help with the first pass.

5. Replacing Humans with AI

There is no replacement for the amazing humans who create, write, code, problem-solve, and communicate on behalf of our clients. 

Not only do we think it’s a generally bad business approach, but we also think it’s prohibitively expensive and unethical.

We are in the business of providing human experiences, and we can only accomplish that with the great humans who inspire us daily. 

Generative AI should not replace the work humans can do. We are optimistic that as the tools improve, the working lives of our team will be more meaningful and valuable. 

In pursuit of that future, we’re exploring Generative AI solutions that meet our internal ethical standards and center the human experience first.

Our Approach to Our Approach

Generative AI is rapidly evolving, so how do we stay current? 

We formed an AI council composed of employees from various disciplines. The team meets biweekly to discuss trends, approaches, beta testing, and new policies.

We highly recommend an internal team to steer your company’s policies. 

We operate from a few general principles that you are free to steal for your own team.

  1. Always trust but verify. Generative AI is a reliable liar. Make sure you’re prepared to fact-check.
  2. Never treat customer data or content as if it were your own. Transparency and protection are key when dealing with someone else’s information. 
  3. Play is encouraged, but using it in your products and services without clear communication and permission from your clients is unacceptable. 

Our perspective is subject to change, and we look forward to updating this blog.
