How I make UNDETECTABLE AI Content (that Google Loves)
Summary
TL;DR: In this video, the presenter explores techniques to create AI-generated content that is undetectable by AI detection tools, ensuring it appears human-written. They test various methods, including using QuillBot for paraphrasing and instructing AI to write with high perplexity and burstiness. The video also discusses the implications of AI content for SEO, Google's stance, and the importance of creating content that is both search engine and human-friendly.
Takeaways
- 😎 The video aims to demonstrate methods for creating AI-generated content that is undetectable by AI detection tools.
- 🔍 The presenter tests various techniques to see if they can fool AI detection tools like Originality.ai, AI Cheat Sheet, and OpenAI's own classifier.
- 🎬 The video uses a retro theme, generating content about movies and activities from the '80s, to test the AI detection tools.
- 📊 Initial tests show inconsistent results across different detection tools, with some identifying AI content and others not.
- 📝 QuillBot is introduced as a tool to paraphrase and rewrite content, which can reduce the AI detection scores.
- 🤖 The presenter suggests pre-training AI with specific prompts to encourage more human-like writing, affecting perplexity and burstiness.
- 📉 Some techniques, like the emoji trick and comma trick, show promise in reducing AI detection but are not consistently effective.
- 🏆 The most effective technique tested is asking the AI to rewrite content specifically to avoid detection, yielding the best results across tools.
- 🌐 Google's stance on AI-generated content is discussed, with a focus on the quality and originality of the content rather than the method of creation.
- 🔮 The video concludes with a call to action for viewers to sign up for a free SEO training masterclass and stay tuned for further tests on AI content performance.
Q & A
What is the main goal of the video?
-The main goal of the video is to demonstrate methods to create AI-generated content that is undetectable by AI detection tools, with the aim of generating high-quality, original content for websites to increase traffic and revenue.
Why is it important to pass AI detection tools according to the video?
-Passing AI detection tools is important because it allows for the creation of a large volume of content that can be used to drive significant traffic to websites, potentially leading to increased revenue.
What are some of the techniques mentioned in the video to beat AI detection tools?
-The video mentions using tools like QuillBot for paraphrasing, as well as various pre-training prompts to instruct the AI to write with higher perplexity, burstiness, and in a more human-like, unpredictable fashion.
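The pre-training idea amounts to sending a style instruction before the actual writing request. A minimal sketch of how that two-message structure could be assembled (the instruction text is paraphrased from the video, the `build_messages` helper and the chat-message format are illustrative assumptions, and no API call is made):

```python
# Illustrative sketch of the "pre-training prompt" technique: prepend a
# style instruction to the writing request. Message dicts follow the
# common {"role": ..., "content": ...} chat convention; PRETRAIN is a
# paraphrase of the video's prompt, not the exact wording.

PRETRAIN = (
    "When generating written content, use a high degree of perplexity and "
    "burstiness: vary sentence length and structure the way a human would."
)

def build_messages(topic: str) -> list[dict]:
    """Return a chat message list with the style instruction prepended."""
    return [
        {"role": "system", "content": PRETRAIN},
        {"role": "user", "content": f"Write me an article on {topic}."},
    ]

msgs = build_messages("proper form for bench press")
print(msgs[0]["role"])  # system
print(len(msgs))        # 2
```

The point of the structure is simply that the style instruction conditions every subsequent generation, rather than being repeated inside each article prompt.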
How does the video demonstrate the effectiveness of the techniques mentioned?
-The video demonstrates the effectiveness of the techniques by generating AI content and then testing it against various AI detection tools, showing the change in detection scores before and after applying the techniques.
What is the result of using QuillBot to rewrite AI-generated content according to the video?
-Using QuillBot to rewrite AI-generated content results in a significant decrease in the AI detection scores on tools like Originality and AI Cheat Sheet, making the content appear more human-written.
What does the video suggest about the consistency of AI detection tools?
-The video suggests that AI detection tools are inconsistent, as they give varying results on the same content, and even their own detection scores can differ significantly.
What role does the 'perplexity and burstiness' prompt play in the video?
-The 'perplexity and burstiness' prompt is used to instruct the AI to generate content with higher complexity and variation in sentence structure, which is intended to mimic human writing and reduce AI detection.
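Burstiness, as described here, is essentially variation in sentence length. A rough, testable proxy for it is the standard deviation of sentence lengths; true perplexity requires a language model, so this sketch (the `burstiness` function is a hypothetical illustration, not a real detector metric) only covers the sentence-variation half of the idea:

```python
# Rough proxy for "burstiness": standard deviation of sentence lengths
# in words. Uniform sentence lengths (typical of AI text, per the video)
# score 0; mixing short and long sentences scores higher.
import re
import statistics

def burstiness(text: str) -> float:
    """Population std. dev. of sentence lengths; higher = more variation."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    return statistics.pstdev(lengths) if len(lengths) > 1 else 0.0

uniform = "The cat sat here. The dog sat here. The bird sat here."
varied = "Run. The old projector rattled through another double feature while we waited."
print(burstiness(uniform) < burstiness(varied))  # True
```

A real detector would combine something like this with a model-based perplexity score, but even this crude measure shows why uniform AI prose stands out.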
How does the video address the potential issue of AI-generated content being against webmaster guidelines?
-The video discusses the ambiguity in Google's stance on AI-generated content, suggesting that while it's not explicitly against guidelines, content created primarily for search engines rather than humans is discouraged.
What is the Surfer tool mentioned in the video and how does it perform in AI detection tests?
-The Surfer tool is a beta version of an AI content generation tool that is SEO-optimized. According to the video, it performs exceptionally well in AI detection tests, often being considered human-written by the detection tools.
What is the emoji trick mentioned in the video and how effective is it?
-The emoji trick involves inserting an emoji after each word in the AI-generated content and then asking the AI to remove them. While it shows promise in some cases, the video concludes that it is not consistent enough to be a reliable method.
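The second half of the emoji trick, stripping the emojis back out, is purely mechanical and does not need the AI at all. A small sketch of that cleanup step (the emoji codepoint ranges are a simplifying assumption and not exhaustive):

```python
# Strip emoji characters and tidy the leftover whitespace. The character
# class covers the most common emoji blocks plus the variation selector;
# it is illustrative, not a complete emoji definition.
import re

EMOJI = re.compile("[\U0001F300-\U0001FAFF\u2600-\u27BF\uFE0F]")

def strip_emojis(text: str) -> str:
    """Remove emoji characters and collapse the resulting double spaces."""
    return re.sub(r"\s{2,}", " ", EMOJI.sub("", text)).strip()

print(strip_emojis("Bench 💪 press 🏋️ with 🔥 a flat 😎 back"))
# Bench press with a flat back
```

Since the insertion step is what supposedly changes the AI's token statistics, automating only the removal keeps the experiment honest.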
What is the final recommendation from the video regarding the use of AI in SEO strategy?
-The final recommendation is to use AI in a way that makes the content appear human-written, as this is likely to be treated as higher quality by search engines and aligns with the need for original content.
Outlines
🤖 Undetectable AI Content Creation Techniques
The video introduces methods to create AI-generated content that is undetectable by AI detection tools. The host demonstrates how to generate content using OpenAI's playground and tests its originality using tools like Originality.ai, AI Cheat Sheet, and OpenAI's AI Text Classifier. The results show inconsistencies across different detection tools, with some identifying high AI likelihood and others not. The video also discusses the potential benefits of creating undetectable AI content, such as generating large volumes of content for website traffic and monetization.
🔍 Testing AI Detection Techniques and Tools
This section of the video explores various techniques to trick AI detection tools, including using QuillBot for paraphrasing, applying perplexity and burstiness in content generation, and instructing AI to write in a human-like manner. The host also tests asking the AI directly for advice on creating undetectable content and how to rewrite existing AI-generated content to pass detectors. The video presents the results of these techniques, showing improvements in passing AI detection tests, with some methods proving more effective than others. The host also mentions using a beta version of Surfer's AI tool for SEO-optimized content generation and its effectiveness in evading detection.
📊 Analyzing AI Content Detection and SEO Implications
The final paragraph delves into the broader implications of AI content detection for SEO. It discusses Google's stance on AI-generated content and the potential for it to be treated similarly to plagiarism or duplicate content. The video highlights the importance of creating content that appears human to maintain quality standards and SEO effectiveness. The host shares insights from a member of the Affiliate Lab, suggesting that AI-generated content might be considered lower quality unless it's indistinguishable from human-written content. The video concludes with a teaser for upcoming tests comparing AI and human content in search engine results and encourages viewers to subscribe for updates.
Keywords
💡AI Detection Tools
💡QuillBot
💡Perplexity and Burstiness
💡Surfer AI
💡SEO
💡Google Guidelines
💡Emoji Trick
💡Rewrite Content Technique
💡Affiliate Marketing
💡AI Content Plagiarism
Highlights
Demonstrates how to create AI-generated content that is undetectable by AI detection tools.
Claims that with the right techniques, AI-generated content can be made to appear 100% original to detection tools.
Explains the potential for generating large volumes of content to drive website traffic and revenue.
Tests various methods claimed to beat AI detection tools, including QuillBot and the emoji trick.
Uses OpenAI's playground to generate test content without experiencing lag or usage blackouts.
Baseline testing of AI-generated content using Originality.ai, AI Cheat Sheet, and OpenAI's own detection tool.
Shows inconsistency in AI detection results among different tools.
Discusses the impact of perplexity and burstiness on AI-generated content's detectability.
Provides a pre-training prompt to instruct AI on generating content with high perplexity and burstiness.
Tests the effectiveness of QuillBot in reducing AI detection scores.
Analyzes the results of using AI-generated content with specific instructions for human-like writing.
Asks the AI directly for advice on creating undetectable AI content.
Presents a technique where AI is asked to rewrite its own content to be undetectable by AI detection tools.
Introduces Surfer's AI tool, which generates SEO-optimized content that is difficult to detect as AI-generated.
Discusses Google's stance on AI-generated content and its potential impact on SEO strategies.
Concludes that making AI-generated content appear human is key to passing AI detection and adhering to SEO best practices.
Announces ongoing tests to compare the performance of AI-generated content with human-written content in search results.
Transcripts
- In this video, I'm gonna show you
how to create completely undetectable AI content.
Google will have no clue if it's written by an AI
and neither will humans.
Want proof?
An AI content piece pulled a 100% guaranteed AI score
before I worked my magic on it.
Then after I used just one of the techniques
I'll show you in this video, we now have 100% original.
Okay, cool, so I can pass AI detection tools.
So what? How does that translate into anything useful?
Well, when you can produce high quality
undetectable AI articles,
you can generate enormous amounts of content
like it's no one's business,
which is gonna sprout massive traffic for your website
so you can then start printing money like Papa Powell.
Now, there's multiple ways that people claim
they can beat AI detection tools,
such as rewriting the content with QuillBot,
the emoji trick, and quite a few more.
I'm gonna test each of them out to see which ones work.
But first, let's generate some AI content
and plug it into each of the detection tools
so we can get a baseline.
To avoid ChatGPT lag time and usage blackouts,
which have been rampant as of late,
I'm gonna use OpenAI's playground
to generate my test content.
Here's my prompt and thank me later if you're an '80s fan.
Write me an article on why "Big Trouble in Little China"
is a better movie than "Top Gun."
Bam, here's the content.
Now let's go on over to Originality.ai
to see if it busts me red-handed for using AI.
This is a paid tool so let's hope it diagnoses it correctly.
You plop in the content here, let it run, and here we go.
Originality says there's a 98% chance of being AI content.
Good for you, Originality.
Now let's toss it into aicheatsheet.com,
one of the free tools.
As expected, 100% certain the text was written by AI.
And lastly, let's test out OpenAI's own detection tool,
AI Text Classifier.
Okay, so the classifier considers the text
to be very unlikely AI generated.
Pretty sus that OpenAI's own tool misses the mark,
but noted.
So here's where we stand.
Article one is 98% likely to be AI on Originality,
100% AI on Cheat Sheet, and unlikely on OpenAI's own tool.
Let's keep going and create a few more test cases.
Keeping up with the retro theme,
OpenAI, write me an article on why rollerblading is dead.
4% chance it was AI according to Originality,
100% on AI Cheat Sheet, and unclear according to OpenAI.
So putting all these results together so far,
we're starting to see some inconsistency
at least with the first tool.
OpenAI's tool is still lost AF.
Before we start tricking these tools,
let's generate one more test case.
Write me an article on proper form for bench press.
AI, do you even lift, bro?
Apparently it does, and here's the content.
So this time, we have Originality with 99% AI,
AI Cheat Sheet with 1%, and OpenAI itself still confused.
Here's our baseline.
Ultimately, both Originality
and AI Cheat Sheet are inconsistent,
and OpenAI's tool can probably just be ignored.
Let's start to do some magic tricks
to see if I can improve these scores.
But first, make sure to sign up
for my free SEO training masterclass
by using the link in the pinned comment.
It goes over all the magic I'm doing today
to get sites to the top of Google.
Now, back to the show.
The first AI undetection technique, is that even a word?
Anyways, the first technique I'm gonna use is QuillBot
for rewriting your content.
You do this by going to QuillBot,
dropping in your content, and then hitting paraphrase.
Once I put in the '80s movie AI text,
we get this new content on the right.
Let's just copy this new text
and paste it back into Originality.
Well, well, well, Originality said it was 98% fake before,
now it's 95% real, that was easy.
Now let's drop into AI Cheat Sheet,
100% AI content before, and now it's a 59%
chance that it's human-written, or in other words,
a 41% chance that it's AI.
Then we have OpenAI itself with possibly AI generated.
That's actually an improvement for the tool
from their previous unlikely classification.
Bringing the same QuillBot process over
for the two other test cases,
we end up with the following results, which are interesting.
If the articles were previously
super detectable by the tools, then QuillBot
is for sure gonna improve it.
But if they were already passing with flying colors,
QuillBot will likely make it worse, which is expected.
If it's not broken, don't try to fix it.
QuillBot requires a paid account
if you input any significant amount of characters.
So let's move over to testing some of the free techniques.
The following few are all under the umbrella
of telling the AI not to write like an AI,
starting with perplexity and burstiness.
What?
- There's this post on Medium
where the author shared the following prompt
to pre-train your AI before it starts word vomiting.
Hey ChatGPT, regarding generating writing content,
two factors are crucial to be in the highest degree,
perplexity and burstiness.
Perplexity measures the complexity of the text.
Separately, burstiness compares the variations of sentences.
Humans tend to write with greater burstiness,
for example, with some longer or more complex sentences
alongside shorter ones.
AI sentences tend to be more uniform.
Therefore, generated text content
must have the highest degree of perplexity
and the highest degree of burstiness.
The other two factors are that writing
should be maximum contextually relevant
and maximum coherent.
Take this word block, drop it into your AI tool,
and then ask it to create your content.
Based on the above, write me an article
on why "Big Trouble in Little China"
is a better movie than "Top Gun."
Originality says it's 78% AI,
which is an improvement from the 98% before,
AI Cheat Sheet went from 100% AI to only 11% AI,
and OpenAI itself went from possibly to likely.
Adding in the results for the other two test cases,
we end up with a table like this.
I'd say that this perplexity prompt
is pretty damn inconsistent.
It did decently for the first test case
but bombed on the second two, skip.
By the way, if you're curious as to why
we even care in the first place
about passing AI detection software,
first off, yes, it's very important,
and I'll dig into why after I finish up these tests.
Next, we have another pre-training prompt
that tells the tool how to write like a human.
Try to sound like a human writer, writing for a blog,
writing in the first person giving advice.
Try to sound unique and write in an unpredictable fashion
that doesn't sound like GPT3.
Here's how it performed.
For test cases one and two, it screwed the pooch,
but for the bench press content, it did really well.
How about if we asked the AI itself
how to write undetectable AI content?
What are the key attributes of conversational content
that is undetectable as being written by an AI?
OpenAI tells you that this content
would include human-like grammar,
punctuation, and spelling that would look human,
context-based content creation, and so forth.
And then you ask it to generate your content.
Now we're talking.
Based on all the simple pre-training exercises
we've used so far,
asking the AI directly how to pass detectors
gives the most consistent result.
What if we ask the almighty AI
how to fix content that's already written?
Let it first write your content in vanilla mode.
Then you ask it to rewrite the above content
so that it is not detected as AI content
by AI content detectors, and here's the result.
Winner, winner, chicken dinner.
This is our best performing technique so far,
simply asking the AI to rewrite itself
for passability results in great scores across all the tools.
- Whoa.
- Thank you to Affiliate Lab member and Chiang Mai buddy,
Chris Manak, for that tip.
Next, if you haven't already seen my case study
where I broke 50K traffic with a pure AI site,
make sure to check it out after you finish here.
Link in the description.
In that video, I mentioned that I've been using
a beta version of Surfer's upcoming AI tool
that will generate AI content on the fly
that's actually SEO-optimized.
It looks at the top ranking articles
to determine how to create the article outline,
how long to write the content,
and what entities and critical keywords
need to be in your content, and at what frequencies.
It actually does a lot more than that
but let's see how detectable it is.
Bear in mind, the Surfer tool generates full articles
and longer content
has a much higher chance of detectability,
so here goes nothing.
The Big Trouble article hit 28% AI on Originality,
AI Cheat Sheet guarantees it was a human who wrote it,
and the OpenAI tool choked up on it
probably because it's too long.
- That's what she said.
(chuckles)
- Ladies and gentlemen, we have a new champion.
The Surfer content bamboozled these detection tools
better than any other technique so far.
Now there's something I wanna mention.
I gotta admit I was more than a bit nervous
putting the Surfer tool to the test,
not only because I'm an investor,
but because I'm putting this content
on my freaking websites.
So seeing these results was a huge relief.
And if anyone would like to verify
that I'm not making these numbers up,
I'm happy to share the content with you
so you can run the tools yourself.
If you're interested in applying
for the Surfer AI beta test,
use the link in the description.
The next technique we have is a fun one
and that's the emoji trick.
You ask it to generate your content
but insert an emoji after each word.
And then when it's done, you ask it to remove the emojis.
While the emoji trick did perform super well
in the bench press article,
it definitely fell short in the other two.
In my opinion, this isn't consistent enough,
so let's call this myth busted.
Next, the comma trick.
Write me an article on proper form for bench press,
but remove as many commas as possible.
The comma technique seemed to work on Originality
and OpenAI's detector in a test case or two,
but it's definitely not consistent enough
to add to your process.
Now we're about to get into why this all matters,
but let's take a look at the final results.
The winners are clear.
Surfer's AI performs the best,
followed by the rewrite the content technique,
and then good old,
you tell me how to write human-like content.
Google has gone back and forth on their stance
on whether or not AI content
is against their guidelines.
In April of last year, they were like,
"Nah bro, AI is fully against our guidelines.
Specifically if you're using machine learning tools
to generate your content, it's essentially the same
as if you're just shuffling words around
or looking up synonyms
or doing the translation tricks that people used to do.
And that doing that
is still automatically generated content,
that means for us, it's still
against the webmaster guidelines."
But then in February, they released an official statement
in Google Search Central and said they'd reward
high quality content however it's produced, okay,
but that's not surprising.
They have to take the stance
'cause they're clearly gonna be using AI themselves
in their search results to compete
with Bing's ChatGPT integration.
And Danny Sullivan clarified this with quote,
"Content written primarily for search engines
rather than humans is the issue.
If someone fires up 100 humans to write content
just to rank or fires up a spinner or AI, same issue."
Sure, I agree that the ultimate goal
would be to create content that humans would enjoy
and encourages them to do what you
would want them to do on your website,
namely generate you money through ads or conversions.
But ultimately, as a search engine professional,
I'm obviously producing this content
so it performs well in Google
or any other search engine for that matter.
But there seems to be an unclear line
that you're not supposed to cross.
In the Affiliate Lab, we have a private mastermind group
with the top performers who have sold businesses
for six figures or more.
One of the members, Dave Gibbons,
has a theory which I gravitate to the most.
"Google will treat it just like it does
with plagiarism or duplicate content.
AI will be allowed but it will be deemed
a lower quality article
than a unique piece with equivalent other SEO factors."
I agree 100%.
Google needs to incentivize people
for creating real original content.
Otherwise, what in God's name will the AI
be able to scrape its information from?
So if you're gonna be using AI in your SEO strategy,
the name of the game is to make it appear human.
I told Dave that this was a very good theory.
After which, he sarcastically responded with,
"If only I was in a group with someone
with an engineering background
that could build a hypothesis from it
and do a (beeps) load of tests
and then let me know the reality."
Touche, my friend, touche.
And that's what I'm up to right now.
I'm running single variable tests right now
to see how AI content performs against human content
in the actual search results.
Make sure to subscribe so you don't miss my findings.
More related videos
Humanize AI Content: Rank #1 on Google with ChatGPT in 2024
ChatGPT Prompt Tutorial to Improve Google SEO - Write Longer Keyword Optimized Articles & Long Blogs
Is AI Content Detectable? And does Google even Care?
How I "Humanize" ChatGPT AI Content...
How to Bypass AI Detection - Even Turnitin! 😱
Does Google Penalize AI Content? New SEO Case Study (2024)