“Content is King” – this motto has been around for years. But Google’s requirements have steadily increased, to the point where “just any content” simply no longer works.

How content wins over both readers and search engines

A lot has happened at Google in the past few years when it comes to evaluating content. In the past, aspects such as “keyword density” were relevant: the content of a text could be relatively weak, but as long as the search terms appeared in the right places, the page was still rewarded with good rankings.

Depending on your perspective, unfortunately or luckily, those days are over. With its “Panda” update in particular, Google shifted its focus to the quality of content. Thanks to further updates such as “Hummingbird”, the search engine is increasingly moving away from individual search terms and trying to truly understand the searcher’s intent.

Simply writing texts – or having them written – is no longer enough to be found with your own content. Rather, there are two important areas when it comes to content:

  • real user signals
  • expected user signals

Of course, many other signals also contribute to the ranking, for example external links. In other words: even perfect content will have a hard time on a poorly linked or poorly structured website. For this article, however, we will limit ourselves to the topic of content – and assume that the other signals are basically taken care of.

Real user signals

It is undisputed among SEOs that Google also uses real user signals to rate content. A look at the “Search Analytics” report in the Google Search Console shows positions and click-through rates (CTR) for organic rankings. This is clear evidence that Google counts every click in the search results.

And if the search engine counts the clicks, it can not only measure the click-through rate but also draw conclusions about dwell time. If someone clicks on a search result, takes a quick look at the page, returns to the search results and clicks on another result, this is a clear – in this case negative – signal: the user probably did not find what they were hoping for on the first page.

Basically, you want to send two signals to Google:

  1. Your own search result has an above-average click rate.
  2. The searcher stays on the page for a long time – meaning they found what they were looking for there.
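The first of these signals can be made concrete: with the click and impression data exported from the Search Console, you can flag results whose CTR falls below a rough per-position benchmark. The benchmark values in this sketch are illustrative assumptions, not official Google figures.

```python
# Sketch: flag search results whose CTR is below a rough position benchmark.
# The EXPECTED_CTR values are illustrative assumptions, not official data.

EXPECTED_CTR = {1: 0.28, 2: 0.15, 3: 0.10, 4: 0.07, 5: 0.05}

def underperforming(rows):
    """rows: (query, position, impressions, clicks), as exported from Search Console."""
    flagged = []
    for query, position, impressions, clicks in rows:
        ctr = clicks / impressions if impressions else 0.0
        benchmark = EXPECTED_CTR.get(round(position), 0.02)  # fallback for deep positions
        if ctr < benchmark:
            flagged.append((query, round(ctr, 3), benchmark))
    return flagged

rows = [
    ("marathon training plan", 2, 1000, 40),  # CTR 4% vs. assumed ~15% at position 2
    ("hearing aid costs", 3, 500, 60),        # CTR 12% vs. assumed ~10% at position 3
]
print(underperforming(rows))
```

Queries flagged this way are natural candidates for reworking the title and description.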

How can you influence that?

First of all, branded websites generally seem to have higher CTRs. After all, part of a brand’s purpose is to offer orientation and a promise of good performance. If you’re looking for a marathon training plan and the search results show a page from Nike and one from Paule’s discount sports shop, you’re more likely to click on the Nike page – and Google can measure that. Building your own brand therefore also pays off in terms of CTR.

But there are other ways to exert influence. For example, companies can create visual advantages: by optimizing page titles and meta descriptions, you can influence the headline and text snippet shown in the search result. If you compare the two search results in Figure 1, you can imagine that an above-average number of users will click on the second result, for the following reasons:

  1. Only the second result shows the complete product, so the user can be more confident of getting the “right thing”.
  2. The second result has so-called rich snippets: the additional line directly shows both attractive rating stars and the price.
  3. The text description of the second result is relatively short, but makes it clear that the product can be bought directly. From the first search result, it is impossible to tell whether Milupa has an online shop or only offers product information.
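How do rating stars and a price line end up in a snippet in the first place? Typically via structured data in the schema.org vocabulary, embedded in the page as JSON-LD. The following sketch builds such a “Product” markup block; all field values are invented placeholders.

```python
import json

# Sketch: schema.org "Product" markup of the kind that powers rating stars
# and price lines in rich snippets. All values are invented placeholders.
product = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Example baby food",
    "aggregateRating": {
        "@type": "AggregateRating",
        "ratingValue": "4.6",
        "reviewCount": "87",
    },
    "offers": {
        "@type": "Offer",
        "price": "3.99",
        "priceCurrency": "EUR",
    },
}

# This JSON would be embedded in the page inside
# <script type="application/ld+json"> ... </script>
snippet = json.dumps(product, indent=2)
print(snippet)
```

Whether Google actually shows the stars and price remains at its discretion, but without markup of this kind the enhanced snippet will not appear at all.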

In addition to such visual and communicative advantages, it is also important to align content optimally with the user. The user should stay on the website for as long as possible – which is exactly what they will do when the content answers all their questions and is easy to consume.

An example: if you want to write a text for the search term “hearing aid costs”, you have to know what questions your users have. If you have been dealing with a subject for years, it is often difficult to put yourself in the position of someone new to it. Keyword databases can help here.

Anyone who uses the Google Keyword Planner (AdWords login required), for example, and enters the phrase “hearing aid costs” will find search queries such as “hearing aids without additional payment” and “digital hearing aid prices”. From these you can already derive important content for your text:

  • Is there an additional payment? How much is it? And what kind of device do you get for it?
  • If there are digital hearing aids, are there also analog ones? How do the costs compare? Are digital ones better – and for whom?

You should also use other tools such as keywordtool.io, hypersuggest.com, answerthepublic.com or SEMrush to get even more inspiration about search behavior. These tools analyze Google’s search suggestions, e.g. by combining the search query with question words. In general, you get many more – and above all more specific (“long tail”) – search queries than with the Keyword Planner.
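The core trick of such tools can be sketched in a few lines: combine a seed phrase with question words, then send each variant to Google’s autocomplete and collect the suggestions. The sketch below shows only the query generation; the actual suggest request is omitted.

```python
# Sketch of the query-generation step used by suggest-based keyword tools:
# prefix a seed phrase with question words. Each variant would then be sent
# to an autocomplete endpoint (network call omitted here).

QUESTION_WORDS = ["how", "what", "why", "when", "where", "which"]

def question_queries(seed):
    """Build question-word variants of a seed phrase."""
    return [f"{word} {seed}" for word in QUESTION_WORDS]

print(question_queries("hearing aid costs"))
```

Even without the autocomplete step, the generated variants are a useful checklist of user questions your text should answer.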

Readability as such is also important: only a small number of users have the stamina and leisure to read a text completely from top to bottom. The larger part tries to quickly grasp the structure of a text and read only the parts they need. The reader can be supported with the following elements:

  • Gripping headline
  • Meaningful subheadings
  • Table of contents at the beginning
  • Media (YouTube videos, downloads …)
  • Further links (internal, external)

Expected user signals

In addition to the real, observed user signals, another important part of the Google algorithm comes into play: it estimates how good a text is likely to be for the user – hence “expected” user signals.

As mentioned at the beginning, the “Panda” update was particularly important here. As part of it, Google published a catalog of 23 questions that can be used to critically examine your content and your entire website, for example:

  • Would you trust the information contained in this article?
  • Was the article written by an expert or a knowledgeable layperson, or is it rather superficial?
  • Does this article contain spelling, stylistic, or factual errors?

Google did not implement the “Panda” update by explicitly formulating specific components or properties of a page. Rather, it used software that was trained on manually selected good and bad websites. It is therefore interesting to ask what properties that software has learned to recognize.

Readability of the text and relevance

At the algorithmic level, Google has further options for evaluating texts. There are various so-called readability indices that analyze how readable a text is (sentence length, number of syllables per word, etc.).
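To make this concrete, here is a sketch of one well-known readability index, the Flesch Reading Ease score, which works exactly on the features just mentioned: average sentence length and syllables per word. The syllable counter is a crude vowel-group heuristic, which is fine for a rough score; note that the index also has language-specific variants.

```python
import re

# Sketch: Flesch Reading Ease, one of the readability indices described above.
# Higher scores mean easier text. The syllable counter is a rough heuristic.

def count_syllables(word):
    # Approximate syllables as runs of vowels (at least 1 per word).
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    asl = len(words) / len(sentences)          # average sentence length
    asw = syllables / len(words)               # average syllables per word
    return 206.835 - 1.015 * asl - 84.6 * asw

short = flesch_reading_ease("The cat sat. The dog ran.")
long_ = flesch_reading_ease(
    "Notwithstanding considerable organizational complexity, "
    "comprehensive documentation facilitates understanding."
)
print(round(short, 1), round(long_, 1))
```

Short sentences with short words score far higher than a single nested sentence full of polysyllabic words – which is precisely the behavior such an index is meant to capture.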

Under the keyword “WDF*IDF”, another method has been circulating in the SEO scene for years. The idea is that a text can also be evaluated based on its “proof terms” and “relevant terms”. A fictitious example: in a text about “aspirin”, the word “headache” also occurs with a probability of 80% (“medication” with 70%, “tablets” with 60%, etc.). Anyone who writes a text about aspirin and uses none of these words sends a clear signal that the text is probably inferior.

Basically, it makes sense to meet the expectations regarding these “proof/relevant terms”. Fortunately, anyone who writes a factually well-founded text will do this automatically. Many authors nevertheless use WDF*IDF tools to determine the related terms and work them into the text – for example, terms that could be important for the search query “sofa”. In any case, you should check such lists very critically (and never use words like “ikea” or “heine”). The tools work with a very small database, so the lists often contain the wrong words.
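For illustration, the underlying calculation can be sketched in a few lines. WDF (within-document frequency) dampens a term’s raw frequency logarithmically; IDF weights it against how common the term is across a reference corpus. The exact formulas vary from tool to tool, and the “corpus” here is a tiny invented stand-in for real top-ranking pages.

```python
import math
from collections import Counter

# Simplified sketch of the WDF*IDF idea (formulas differ between tools):
# WDF dampens within-document term frequency logarithmically; IDF weights
# a term against how common it is across a reference corpus.

def wdf(term, doc_tokens):
    counts = Counter(doc_tokens)
    return math.log2(counts[term] + 1) / math.log2(len(doc_tokens))

def idf(term, corpus):
    containing = sum(1 for doc in corpus if term in doc)
    return math.log2(len(corpus) / (1 + containing)) + 1

def wdf_idf(term, doc_tokens, corpus):
    return wdf(term, doc_tokens) * idf(term, corpus)

doc = "aspirin helps against headache and headache related pain".split()
corpus = [
    doc,
    "paracetamol tablets dosage".split(),
    "sofa cushions living room".split(),
]

# "headache" appears twice in the text, "sofa" not at all
print(round(wdf_idf("headache", doc, corpus), 3))
print(round(wdf_idf("sofa", doc, corpus), 3))
```

A term that actually occurs in the text and is reasonably distinctive in the corpus scores high; a term that is missing scores zero – which is exactly the signal the tools surface.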

Above all, the following applies: a text must not be an unstructured wall of text. The goal should be well-structured, appealing and detailed texts. Someone who wants to read the entire text from the first word to the last should be served just as well as someone who can only spend 20 seconds on it.

Visibility of the content

Beyond that, other aspects are important. In 2016, Google expanded its guidelines, which now read:

“Make sure that the most important content on your website is visible by default. Google is able to crawl HTML content that is hidden behind navigation elements such as tabs or expandable areas. However, we classify this content as less accessible to users and believe that the most important information should be visible in the standard page view.”

In other words, content that is collapsed or that requires user interaction to become visible is rated lower by Google. So if product detail pages have a tab such as “Ratings” containing user reviews, the text there will only count fully if it is visible when the page loads.
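If you want to check your own pages for this, a first rough test can be automated. The sketch below flags text hidden via inline markers (the `hidden` attribute or `style="display:none"`); real pages also hide content via CSS classes and JavaScript, which this simple check cannot see.

```python
from html.parser import HTMLParser

# Sketch: flag text that is hidden on page load via inline markers.
# Real-world pages also hide content via CSS classes and JavaScript,
# which this simple inline check cannot detect.

class HiddenTextFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.depth_hidden = 0   # nesting depth inside hidden elements
        self.hidden_text = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        style = (attrs.get("style") or "").replace(" ", "")
        hidden = "hidden" in attrs or "display:none" in style
        if self.depth_hidden or hidden:
            self.depth_hidden += 1

    def handle_endtag(self, tag):
        if self.depth_hidden:
            self.depth_hidden -= 1

    def handle_data(self, data):
        if self.depth_hidden and data.strip():
            self.hidden_text.append(data.strip())

finder = HiddenTextFinder()
finder.feed(
    '<div id="specs">visible</div>'
    '<div id="ratings" style="display: none">4.6 stars</div>'
)
print(finder.hidden_text)
```

Anything reported here is content Google would classify as less accessible in the sense of the quoted guideline.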

Even more input from Google

By the way, the “Search Quality Evaluator Guidelines” (PDF), which Google updated some time ago, are also worth reading. The document is primarily aimed at Google’s internal raters, who evaluate search results so that the algorithm can be refined further and further.

But companies can also find plenty of information there about which content Google prefers. For example, the document defines the term “Helpful Supplementary Content (SC)” as follows: “SC allows users to find related and interesting content on websites with many pages.” A section such as “Related Links” at the end of an article certainly fulfills this requirement well.

Other concepts in the document, such as Expertise, Authoritativeness and Trustworthiness (EAT), are also very important for understanding what individual pages and entire websites should ideally look like. Overall, the guidelines offer a great many tips for website operators – for example, the following requirement for high-quality pages:

“Satisfying website information and/or information about who is responsible for the website or satisfying customer service information, if the page is primarily for shopping or includes financial transactions.”

In other words, a legal notice (imprint) is just as helpful as an author box at the end of an article. One can assume that Google will incorporate all of these requirements into its algorithms or has already done so. And that is exactly why you should take a very close look at the document (or at least the relevant parts of it).

In the end, only one thing counts: the user

Let’s bring all the requirements down to a simple denominator: the page must provide the user with the best answer to their question. Even if there are some analyses of the supposedly ideal length of texts, you should not be guided by such rules. The keyword density common in the past was long a sore point for editors, as it constrained their creativity. Nowadays, editors effectively have a free hand again, which is why many of them welcome the aspects mentioned in this article.

Conclusion

As this article shows, the search engines’ requirements for good content are relatively high and, overall, very diverse. At its core, however, it is about the user receiving high-quality content that satisfies their needs and leaves no need to look for information elsewhere. In SEO circles, people speak of “10x content”, meaning content that is ten times better than all other content. That is of course an ambitious goal, but it indicates a rough direction: instead of writing ten smaller articles on individual aspects, it is more common these days to write one large, comprehensive article.