Knowledge Base
Semrush Toolkits
SEO
Site Audit
What Issues Can Site Audit Identify?

What Issues Can Site Audit Identify?

With the ability to check for over 140 website issues, Semrush Site Audit makes your website audit process easy and effective. Below you can find the complete list of errors, warnings, and notices, along with a short overview of the categories of issues that Site Audit checks for. Some checks are combined to streamline your routine.

Crawlability and Architecture

First and foremost, your website has to be crawlable before you can begin to think about SEO. To foster easier navigation for search crawlers, the website’s structure should be clean and well-organized. To make your structure as clean as possible, focus on your sitemap, robots.txt file, internal links, and URL structure. Site Audit covers over 20 of the most common and important issues in this area.

On-Page SEO

Issues regarding your content, title tags, h1 tags, meta descriptions, and images all play into your website’s on-page SEO. To optimize, you’ll want to avoid any duplicate content, titles, descriptions, or h1 tags. You’ll also need to make sure that your content, titles, and descriptions are all at an optimal length. Site Audit checks for over a dozen of the most common on-page SEO issues, so you can identify the mistakes on your site that could be hurting your rankings.

Technical SEO

A great deal of SEO depends on technical aspects of a website such as proper HTML, page load speed, and mobile optimization. Site Audit will tell you if you have any pages with an oversized HTML file or slow load speeds. It will also point out the use of Flash, frames, or other outdated technology that should be avoided for SEO.

HTTPS Implementation

When moving a website from HTTP to HTTPS there are a lot of common mistakes that you can run into. From proper certificates to redirects and canonicals, encryption, and more, Site Audit checks for every possible issue involved when a website moves to HTTPS. There’s a dedicated report showing all of your HTTPS-related issues which you can get to quickly by following the widget from the Overview report.

International SEO

For larger companies and global websites, international SEO can be tricky. Correct usage of the hreflang tag is going to be critical to your success in getting search visibility across the regions and languages you desire. Site Audit checks for multiple issues like the use of the right language and country codes, conflicts with page source code, incorrect hreflang links, and more.

AMP

Accelerated Mobile Pages (AMP) are one of Google’s preferred methods of delivering mobile web pages, and Site Audit checks over 40 of the most common AMP-related issues, more than any other software on the market.

Performance

Performance-related checks in Site Audit look for issues concerning a website’s page loading speed and user experience. Page load speed has become one of the most important SEO factors: if your site is too slow to load, users will leave and look for another website to visit. To help your performance, keep the HTML, CSS, and JavaScript on your site lightweight and easy for crawlers to read.

Internal Linking

The Internal Linking report gives you a full, in-depth look at all of the internal links on an entire website. You will also see four different widgets that can help identify specific issues: Link Distribution, Incoming Internal Links, Pages to Add Outgoing Links, and Internal Links Issues. These issues range from major ones, such as broken internal links, to minor ones, such as pages having only one internal link.


Full List of Issues Site Audit Can Identify

Below is the list of all errors, warnings, and notices that Site Audit can identify on a website. All issues should be addressed to improve your Site Audit score and website health. The order of priority is: Errors (most harmful to a website), Warnings (harmful), and Notices (least harmful).

Errors

About the issue

If you're running a multilingual website, it is necessary to help users from other countries find your content in the language that is most appropriate for them. This is where the hreflang (rel="alternate" hreflang="x") attribute comes in handy. This attribute helps search engines understand which page should be shown to visitors based on their location. It is very important to properly synchronize your hreflang attributes within your page's source code, otherwise you may experience unexpected search engine behavior. For more information, see the Google article.

How to fix it

To avoid any conflicts, we recommend that you review your hreflang attributes within your page's source code and fix any of the following issues:
- Conflicting hreflang and rel=canonical URLs
- Conflicting hreflang URLs
- No self-referencing hreflang URLs
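As a sketch, a page that avoids all three conflicts keeps its canonical URL and its self-referencing hreflang entry pointing at the same address (all URLs below are placeholders):

```html
<head>
  <!-- Canonical and self-referencing hreflang point to the same URL -->
  <link rel="canonical" href="https://example.com/en/page/" />
  <link rel="alternate" hreflang="en" href="https://example.com/en/page/" />
  <link rel="alternate" hreflang="de" href="https://example.com/de/page/" />
  <link rel="alternate" hreflang="x-default" href="https://example.com/en/page/" />
</head>
```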

About the issue

5xx errors refer to problems with a server being unable to perform the request from a user or a crawler. They prevent users and search engine robots from accessing your webpages, and can negatively affect user experience and search engines' crawlability. This will in turn lead to a drop in traffic driven to your website.

How to fix it

Investigate the causes of these errors and fix them.

About the issue

A <title> tag is a key on-page SEO element. It appears in browsers and search results and helps both search engines and users understand what your page is about.
If a page is missing a title, or its <title> tag is empty, Google may consider it low quality. If you promote this page in search results, you will miss the chance to rank high and gain a higher click-through rate.

How to fix it

Ensure that every page on your website has a unique and concise title containing your most important keywords. For information on how to create effective titles, please see this Google article. You can also view the On-Page SEO Basics: Meta Descriptions article.
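For illustration, the title lives in the page's <head> section (the product and store names below are placeholders):

```html
<head>
  <!-- A unique, descriptive title containing the page's most important keywords -->
  <title>Blue Suede Running Shoes - Example Store</title>
</head>
```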

About the issue

Our crawler reports pages that have duplicate title tags only if they are exact matches. Duplicate <title> tags make it difficult for search engines to determine which of a website's pages is relevant for a specific search query, and which one should be prioritized in search results. Pages with duplicate titles have a lower chance of ranking well and are at risk of being banned. Moreover, identical <title> tags confuse users as to which webpage they should follow.

How to fix it

Provide a unique and concise title for each of your pages that contains your most important keywords. For information on how to create effective titles, please see this Google article. You can also view the On-Page SEO Basics: Meta Descriptions article.

About the issue

Webpages are considered duplicates if their content is 85% identical. Having duplicate content may significantly affect your SEO performance. First of all, Google will typically show only one duplicate page, filtering other instances out of its index and search results, and this page may not be the one you want to rank.
In some cases, search engines may consider duplicate pages as an attempt to manipulate search engine rankings, and, as a result, your website may be downgraded or even banned from search results. Moreover, duplicate pages may dilute your link profile.

How to fix it

Here are a few ways to fix duplicate content issues:
- Add a rel="canonical" link to one of your duplicate pages to inform search engines which page to show in search results
- Use a 301 redirect from a duplicate page to the original one
- Use a rel="next" and a rel="prev" link attribute to fix pagination duplicates
- Instruct GoogleBot to handle URL parameters differently using Google Search Console
- Provide some unique content on the webpage
For more information, please read these Google articles: "Duplicate content" and "Consolidate duplicate URLs".
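As an example of the first option, each duplicate page declares the preferred version in its <head> section (the URL below is a placeholder):

```html
<!-- Tells search engines which version of the page to show in search results -->
<link rel="canonical" href="https://example.com/preferred-page/" />
```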

About the issue

This issue indicates that our crawler couldn't access the webpage. There are two possible reasons:
- Your site's server response time is more than 5 seconds
- Your server refused access to your webpages

How to fix it

Please contact your web hosting technical support team and ask them to fix the issue.

About the issue

A DNS resolution error is reported when our crawler can't resolve the hostname when trying to access your webpage.

How to fix it

Please contact your web hosting technical support and ask them to investigate and fix the issue.

About the issue

This issue indicates that our crawler couldn't access the webpage. There are two possible reasons:
- Your site's server response time is more than 5 seconds
- Your server refused access to your webpages

How to fix it

Make sure your page's URL conforms to a standard scheme and doesn't have any unnecessary characters or typos.

About the issue

An internal broken image is an image that can't be displayed because it no longer exists, its URL is misspelled, or because the file path is not valid. Broken images may jeopardize your search rankings because they provide a poor user experience and signal to search engines that your page is low quality.

How to fix it

To fix a broken internal image, perform one of the following:
- If an image is no longer located in the same location, change its URL
- If an image was deleted or damaged, replace it with a new one
- If an image is no longer needed, simply remove it from your page's code

About the issue

Our crawler reports pages that have duplicate meta descriptions only if they are exact matches.
A meta description is a short summary of a webpage's content, defined in a <meta name="description"> tag, that helps search engines understand what the page is about and can be shown to users in search results.
Duplicate meta descriptions on different pages mean a lost opportunity to use more relevant keywords. Also, duplicate meta descriptions make it difficult for search engines and users to differentiate between webpages. It is better to have no meta description at all than to have a duplicate one.

How to fix it

Provide a unique, relevant meta description for each of your webpages.
For information on how to create effective meta descriptions, please see this Google article.
You can also view the On-Page SEO Basics: Meta Descriptions article.
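For illustration, the meta description also lives in the page's <head> section (the content below is placeholder text):

```html
<head>
  <!-- A unique, relevant summary of the page's content -->
  <meta name="description" content="Browse our selection of blue suede running shoes, with free shipping and easy returns." />
</head>
```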

About the issue

If your robots.txt file is poorly configured, it can cause you a lot of problems.
Webpages that you want to be promoted in search results may not be indexed by search engines, while some of your private content may be exposed to users.
So, one configuration mistake can damage your search rankings, ruining all your search engine optimization efforts.

How to fix it

Review your robots.txt file and fix all errors, if there are any.
You can check your file using Google's robots.txt Tester.
For information on how to configure your robots.txt, please see this article.

About the issue

If your sitemap.xml file has any errors, search engines will not be able to process the data it contains, and they will ignore it.

How to fix it

Review your sitemap.xml file and fix all errors.
You can check your file using the Sitemaps report in Google Search Console.
For information on how to configure your sitemap.xml, please see this article.

About the issue

A sitemap.xml file makes it easier for crawlers to discover the pages on your website. Only good pages intended for your visitors should be included in your sitemap.xml file.
This error is triggered if your sitemap.xml contains URLs that:
- lead to webpages with the same content
- redirect to a different webpage
- return a non-200 status code
Populating your file with such URLs will confuse search engines, cause unnecessary crawling, or even result in your sitemap being rejected.

How to fix it

Review your sitemap.xml for any redirected, non-canonical or non-200 URLs. Provide the final destination URLs that are canonical and return a 200 status code.

About the issue

Normally, a webpage can be accessed with or without adding www to its domain name. If you haven’t specified which version should be prioritized, search engines will crawl both versions, and the link juice will be split between them. Therefore, none of your page versions will get high positions in search results.

How to fix it

Specify which version of your webpage you want to be the main one. Use Google Search Console data to define pages that are indexed. We recommend that you redirect an alternate version of your page to the preferred version via a 301 redirect. For more information, please see the Consolidate duplicate URLs article.

About the issue

The viewport meta tag is an HTML tag that allows you to control a page's viewport size and scale on mobile devices. This tag is indispensable if you want to make your website accessible and optimized for mobile devices.
For more information about the viewport meta tag, please see the Responsive Web Design Basics article.

How to fix it

Set the viewport meta tag for each page, and then test your website on a mobile device to make sure everything works fine.
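The commonly recommended viewport declaration looks like this:

```html
<head>
  <!-- Match the layout width to the device width and set the initial zoom to 100% -->
  <meta name="viewport" content="width=device-width, initial-scale=1" />
</head>
```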

About the issue

A webpage’s HTML size is the size of all HTML code contained in it. A page size that is too large (i.e., exceeding 2 MB) leads to a slower page load time, resulting in a poor user experience and a lower search engine ranking.

How to fix it

Review your page’s HTML code and consider optimizing its structure and/or removing inline scripts and styles.

About the issue

This issue is triggered if your AMP page has no canonical tag.
When creating AMP pages, several requirements should be met:
- If you have both an AMP and a non-AMP version of the same page, you should place canonical tags on both versions to prevent duplicate content issues
- If you have only an AMP version of your webpage, it must have a self-referential canonical tag
For more information, please see these articles: AMP on Google Search guidelines and ABC of Fixing AMP Validation Errors With Semrush

How to fix it

Add a rel="canonical" tag in the <head> section of each AMP page.
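As a sketch, when a non-AMP page and its AMP version coexist, the two documents reference each other (both URLs below are placeholders):

```html
<!-- On the non-AMP page: point to the AMP version -->
<link rel="amphtml" href="https://example.com/page/amp/" />

<!-- On the AMP page: the canonical points back to the non-AMP version -->
<link rel="canonical" href="https://example.com/page/" />
```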

About the issue

This issue is triggered if:
- Your country code is not in the ISO_3166-1_alpha-2 format
- Your language code is not in the ISO 639-1 format

A hreflang (rel="alternate" hreflang="x") attribute helps search engines understand which page should be shown to visitors based on their location. Utilizing this attribute is necessary if you're running a multilingual website and would like to help users from other countries find your content in the language that is most appropriate to them.
It is very important to properly implement hreflang attributes, otherwise, search engines will not be able to show the correct language version of your page to the relevant audience.
For more information, please see these articles: Tell Google about localized versions of your page, How to Do International SEO with Semrush and Hreflang Attribute 101

How to fix it

Make sure that your hreflang attributes are used correctly. Here are a few ways to avoid hreflang implementation issues:
- Specify the correct ISO 639-1 language code. For language script variations, use the ISO 15924 standard format
- Specify the correct ISO_3166-1_alpha-2 country code
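For example, the language code comes first, optionally followed by a hyphen and the country code (the URL is a placeholder):

```html
<!-- Correct: "en" is an ISO 639-1 language code; "GB" is an ISO 3166-1 alpha-2 country code -->
<link rel="alternate" hreflang="en-GB" href="https://example.com/uk/" />

<!-- Incorrect: "UK" is not a valid ISO 3166-1 alpha-2 country code; use "GB" instead -->
<link rel="alternate" hreflang="en-UK" href="https://example.com/uk/" />
```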

About the issue

A 4xx error means that a webpage cannot be accessed. This is usually the result of broken links. These errors prevent users and search engine robots from accessing your webpages, and can negatively affect both user experience and search engine crawlability. This will in turn lead to a drop in traffic driven to your website. Please be aware that the crawler may detect a working link as broken if your website blocks our crawler from accessing it. This usually happens due to the following reasons:
- DDoS protection system
- Overloaded or misconfigured server

How to fix it

If a webpage returns an error, remove all links leading to the error page or replace it with another resource.
To identify all pages on your website that contain links to a 4xx page, click "View broken links" next to the error page.
If the links reported as 4xx do work when accessed with a browser, you can try either of the following:
- Contact your web hosting support team
- Instruct search engine robots not to crawl your website too frequently by specifying the "crawl-delay" directive in your robots.txt
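The crawl-delay directive is set per user agent in your robots.txt file; the 10-second value below is just an example (note that Googlebot ignores this directive, so it only affects crawlers that honor it):

```
User-agent: SemrushBot
Crawl-delay: 10
```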

About the issue

This issue is triggered if our crawler detects an HTTP page with an <input type="password"> field.
Using an <input type="password"> field on your HTTP page is harmful to user security, as there is a high risk that user login credentials can be stolen. To protect users' sensitive information from being compromised, Google Chrome has been warning users about the dangers of submitting their passwords on HTTP pages by labeling such pages as "not secure" since January 2017. This could have a negative impact on your bounce rate, as users will most likely feel uncomfortable and leave your page as quickly as possible.

How to fix it

Move your HTTP webpages that contain a password field to HTTPS. Please follow these Google guidelines.

About the issue

This issue is triggered if your certificate has expired or will expire soon.
If you allow your certificate to expire, users accessing your website will be presented with a warning message, which usually stops them from going further and may lead to a drop in your organic search traffic.

How to fix it

Ask your website administrator to renew the certificate and run periodic checks to avoid any future issues.

About the issue

Running the SSL protocol or an old TLS protocol (version 1.0) is a security risk, which is why it is strongly recommended that you implement the newest protocol versions.

How to fix it

Update your security protocol to the latest version.

About the issue

If the domain or subdomain name to which your SSL certificate is registered doesn't match the name displayed in the address bar, web browsers will block users from visiting your website by showing them a name mismatch error, and this will in turn negatively affect your organic search traffic.

How to fix it

Contact your website administrator and ask them to install the correct certificate.
Since subdomains also require their own certificates, you can use a wildcard or multi-domain SSL certificate that allows you to secure multiple subdomains.

About the issue

If your website contains any elements that are not secured with HTTPS, this may lead to security issues. Moreover, browsers will warn users about loading insecure content, which may negatively affect user experience and reduce their confidence in your website.

How to fix it

Only embed HTTPS content on HTTPS pages.
Replace all HTTP links with the new HTTPS versions. If there are any external links leading to a page that has no HTTPS version, remove those links.

About the issue

If you're running both HTTP and HTTPS versions of your homepage, it is very important to make sure that their coexistence doesn't impede your SEO. Search engines are not able to figure out which page to index and which one to prioritize in search results. As a result, you may experience a lot of problems, including pages competing with each other, traffic loss, and poor placement in search results. To avoid these issues, you must instruct search engines to only index the HTTPS version.

How to fix it

Do either of the following:
- Redirect your HTTP page to the HTTPS version via a 301 redirect
- Mark up your HTTPS version as the preferred one by adding a rel="canonical" to your HTTP pages
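A sketch of the first option for an Apache server, assuming .htaccess overrides and mod_rewrite are enabled (other servers have their own equivalents):

```apacheconf
# Permanently redirect all HTTP requests to the HTTPS version
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule ^(.*)$ https://%{HTTP_HOST}/$1 [R=301,L]
```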

About the issue

Redirecting one URL to another is appropriate in many situations. However, if redirects are done incorrectly, it can lead to disastrous results. Two common examples of improper redirect usage are redirect chains and loops.

Long redirect chains and infinite loops lead to a number of problems that can damage your SEO efforts. They make it difficult for search engines to crawl your site, which affects your crawl budget usage and how well your webpages are indexed, slows down your site's load speed, and, as a result, may have a negative impact on your rankings and user experience.

Please note that if you can’t spot a redirect chain with your browser, but it is reported in your Site Audit report, your website probably responds to crawlers’ and browsers’ requests differently, and you still need to fix the issue.

How to fix it

The best way to avoid any issues is to follow one general rule: do not use more than three redirects in a chain.

If you are already experiencing issues with long redirect chains or loops, we recommend that you redirect each URL in the chain to your final destination page.

We do not recommend that you simply remove redirects for intermediate pages as there can be other links pointing to your removed URLs, and, as a result, you may end up with 404 errors.
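Schematically, fixing a chain means pointing every intermediate URL straight at the final destination (the paths below are placeholders):

```
Before:  /old-page  -301->  /newer-page  -301->  /final-page
After:   /old-page    -301->  /final-page
         /newer-page  -301->  /final-page
```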

About the issue

In order for AMP pages to be served properly to mobile users, they must be compliant with AMP guidelines.
If your HTML doesn't adhere to AMP standards, your AMP page will not work correctly, and may not be indexed by search engines, and, as a result, may not appear in mobile search results.

How to fix it

Since there are multiple reasons why your page's HTML may not comply with AMP standards, we provide specific how-to-fix tips for each invalid AMP page. These tips are provided in the 'Issue Description' column on the page that lists all pages with HTML issues.
You can also check out the ABC of Fixing AMP Validation Errors With Semrush article to get more information.

About the issue

In order for AMP pages to be served properly to mobile users, they must be compliant with AMP guidelines.
If the style and layout of your AMP page do not adhere to AMP standards, the page will not work correctly, and may not be indexed by search engines, and, as a result, may not appear in mobile search results.

How to fix it

Since there are multiple reasons why your page's style and layout may not comply with AMP standards, we provide specific how-to-fix tips for each invalid AMP page. These tips are provided in the 'Issue Description' column on the page that lists all pages with style and layout issues.
You can also check out the ABC of Fixing AMP Validation Errors With Semrush article to get more information.

About the issue

In order for AMP pages to be served properly to mobile users, they must be compliant with AMP guidelines.
If your AMP page includes templating syntax, it will not work correctly and may not be indexed by search engines, and, as a result, may not appear in mobile search results.

How to fix it

Since there are different types of templating issues that your AMP page can have, we provide specific how-to-fix tips for each invalid AMP page. These tips are provided in the 'Issue Description' column on the page that lists all pages with templating issues.
You can also check out the ABC of Fixing AMP Validation Errors With Semrush article to get more information.

About the issue

By setting a rel="canonical" element on your page, you can inform search engines of which version of a page you want to show up in search results. When using canonical tags, it is important to make sure that the URL you include in your rel="canonical" element leads to a page that actually exists. Canonical links that lead to non-existent webpages complicate the process of crawling and indexing your content and, as a result, decrease crawling efficiency and lead to unnecessary crawl budget waste.

How to fix it

Review all broken canonical links. If a canonical URL applies to a non-existent webpage, remove it or replace it with another resource.

About the issue

Multiple rel="canonical" tags with different URLs specified for the same page confuse search engines and make it almost impossible for them to identify which URL is the actual canonical page. As a result, search engines will likely ignore all the canonical elements or pick the wrong one. That’s why it is recommended that you specify no more than one rel="canonical" tag per page.

How to fix it

Remove all canonical URLs except the one that you’d like to serve as the actual canonical page.

About the issue

A meta refresh tag instructs a web browser to redirect a user to a different page after a given interval. Generally, it is recommended that you avoid using a meta refresh tag as it is considered a poor, slow, and outdated technique that may lead to SEO and usability issues.

How to fix it

Review all pages with a meta refresh tag. If this tag is used to redirect an old page to a new one, replace it with a 301 redirect.
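For reference, the tag in question looks like this (the URL is a placeholder):

```html
<!-- Avoid: client-side redirect that fires after 0 seconds -->
<meta http-equiv="refresh" content="0; url=https://example.com/new-page/" />
```

A server-side 301 redirect to the same destination is the preferred replacement.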

About the issue

A broken JavaScript or CSS file is an issue that should be watched out for on your website. Any script that has stopped running on your website may jeopardize your rankings, since search engines will not be able to properly render and index your webpages. Moreover, broken JS and CSS files may cause website errors, and this will certainly spoil your user experience.

How to fix it

Review all broken JavaScript and CSS files hosted on your website and fix any issues.

About the issue

This issue is triggered when we connect to your web server and detect that it uses old or deprecated encryption algorithms. Using outdated encryption algorithms is a security risk that can have a negative impact on your user experience and search traffic. Some web browsers may warn users accessing your website about loading insecure content. This usually negatively affects their confidence in your website, thereby stopping them from going further, and as a result, you may experience a drop in your organic search traffic.

How to fix it

Contact your website administrator and ask them to update encryption algorithms.

About the issue

This issue is triggered if the size of your sitemap.xml file (uncompressed) exceeds 50 MB or it contains more than 50,000 URLs. Sitemap files that are too large will put your site at risk of being ineffectively crawled or even ignored by search engines.

How to fix it

Break up your sitemap into smaller files. You will also need to create a sitemap index file to list all your sitemaps and submit it to Google.
Don't forget to specify the location of your new sitemap.xml files in your robots.txt.
For more details, see this Google article.
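A minimal sitemap index file referencing two smaller sitemaps (the file names are placeholders) looks like this:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <sitemap>
    <loc>https://example.com/sitemap-pages.xml</loc>
  </sitemap>
  <sitemap>
    <loc>https://example.com/sitemap-posts.xml</loc>
  </sitemap>
</sitemapindex>
```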

About the issue

Page (HTML) load speed is one of the most important ranking factors. The quicker your page loads, the higher the rankings it can receive. Moreover, fast-loading pages positively affect user experience and may increase your conversion rates.
Please note that "page load speed" usually refers to the amount of time it takes for a webpage to be fully rendered by a browser. However, the crawler only measures the time it takes to load a webpage’s HTML code - load times for images, JavaScript, and CSS are not factored in.

How to fix it

The main factors that negatively affect your HTML page generation time are your server’s performance and the density of your webpage’s HTML code.
So, try to clean up your webpage’s HTML code. If the problem is with your web server, you should think about moving to a better hosting service with more resources.

About the issue

This issue is triggered if structured data items contain fields that do not meet Google's guidelines.

Implementing and maintaining your structured data correctly is important if you want to get an edge over your competitors in search results.

If your website markup has errors, crawlers will not be able to properly understand it, and you may run the risk of losing the chance of gaining rich snippets and getting more favorable rankings.

For more information on the structured data requirements, see schema.org, Google documentation, or our article.

How to fix it

Check structured data on your webpages with a validation tool. Please note that different markup testing tools may show different results.

We recommend that you use the Rich Results Test tool to review and validate your pages’ structured data against their rich snippet requirements.

About the issue

This issue is reported when SemrushBot fails to crawl a link because of an invalid link's URL.

Common mistakes include the following:
- Invalid URL syntax (e.g., a missing or invalid protocol, or backslashes (\) used instead of slashes)
- Spelling mistakes
- Unnecessary additional characters

How to fix it

Make sure the link's URL conforms to a standard scheme and doesn't have any unnecessary characters or typos.

About the issue

This issue is triggered if the viewport meta tag used on your page is missing the width or initial-scale value.
The viewport meta tag is an HTML tag that allows you to control a page’s viewport size and scale on mobile devices. This tag is indispensable if you want to make your website accessible and optimized for mobile devices.
For more information about the viewport meta tag, please see the Responsive web design basics article.

How to fix it

Specify the width and initial-scale values. We recommend you contact your developers for assistance. Once this is done, check your page for mobile-friendliness or re-audit your site.

Warnings

About the issue

Most search engines truncate titles containing more than 70 characters. Incomplete and shortened titles look unappealing to users and won't entice them to click on your page.
For more information, please see this Google article.

How to fix it

Try to rewrite your page titles to be 70 characters or less.

About the issue

Generally, using short titles on webpages is a recommended practice. However, keep in mind that titles containing 10 characters or less do not provide enough information about what your webpage is about and limit your page's potential to show up in search results for different keywords.
For more information, please see this Google article.

How to fix it

Add more descriptive text inside your page's <title> tag.

About the issue

Your text-to-HTML ratio indicates the amount of actual text you have on your webpage compared to the amount of code. This issue is triggered when your text-to-HTML ratio is 10% or less.
Search engines have begun focusing on pages that contain more content. That's why a higher text-to-HTML ratio means your page has a better chance of getting a good position in search results.
Less code increases your page's load speed and also helps your rankings. It also helps search engine robots crawl your website faster.

How to fix it

Split your webpage's text content and code into separate files and compare their size. If the size of your code file exceeds the size of the text file, review your page's HTML code and consider optimizing its structure and removing embedded scripts and styles.

About the issue

Though meta descriptions don't have a direct influence on rankings, they are used by search engines to display your page's description in search results. A good description helps users know what your page is about and encourages them to click on it. If your page's meta description tag is missing, search engines will usually display its first sentence, which may be irrelevant and unappealing to users.
For more information, please see these articles: Create good titles and snippets in Search Results and On-Page SEO Basics: Meta Descriptions.

How to fix it

In order to gain a higher click-through rate, you should ensure that all of your webpages have meta descriptions that contain relevant keywords.

About the issue

It is a bad idea to duplicate your title tag content in your first-level header. If your page's <title> and <h1> tags match, the latter may appear over-optimized to search engines.
Also, using the same content in titles and headers means a lost opportunity to incorporate other relevant keywords for your page.
For more information, please see this Google article.

How to fix it

Try to create different content for your <title> and <h1> tags.

About the issue

While less important than <title> tags, h1 headings still help define your page’s topic for search engines and users. If an <h1> tag is empty or missing, search engines may place your page lower than they would otherwise. Besides, a lack of an <h1> tag breaks your page’s heading hierarchy, which is not SEO-friendly.

How to fix it

Provide a concise, relevant h1 heading for each of your pages.

About the issue

When it comes to URL structure, using underscores as word separators is not recommended because search engines may not interpret them correctly and may treat them as part of a word. Using hyphens instead makes it easier for search engines to understand what your page is about.
Although underscores don't have a huge impact on webpage visibility, pages whose URLs use them have a lower chance of appearing in search results than pages whose URLs use hyphens.
For more information, please see this Google article.

How to fix it

Replace underscores with hyphens. However, if a page already ranks well, we do not recommend changing its URL.

About the issue

If you have both a sitemap.xml and a robots.txt file on your website, it is a good practice to place a link to your sitemap.xml in your robots.txt, which will allow search engines to better understand what content they should crawl.

How to fix it

Specify the location of your sitemap.xml in your robots.txt. To check if Googlebot can index your sitemap.xml file, use the Sitemaps report in Google Search Console.
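For example, a robots.txt file that points crawlers to a sitemap might look like this (the domain and paths below are placeholders):

```
User-agent: *
Disallow: /admin/

Sitemap: https://www.example.com/sitemap.xml
```

The Sitemap directive is independent of the User-agent groups and can be placed anywhere in the file.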

About the issue

This issue is triggered if the number of words on your webpage is less than 200.
The amount of text placed on your webpage is a quality signal to search engines.
Search engines prefer to provide as much information to users as possible, so pages with longer content tend to be placed higher in search results than those with lower word counts.
For more information, please view this video.

How to fix it

Improve your on-page content and be sure to include more than 200 meaningful words.
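If you want to screen pages for thin content yourself, a rough word counter is easy to sketch. The tokenization rule below is an assumption for illustration; Site Audit's exact counting method may differ:

```python
import re

def word_count(text: str) -> int:
    """Counts word-like tokens as a rough proxy for on-page word count."""
    return len(re.findall(r"\b\w+\b", text))

def is_thin_content(text: str, threshold: int = 200) -> bool:
    """True if the page text falls below the 200-word guideline above."""
    return word_count(text) < threshold
```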

About the issue

Temporary redirects (i.e., 302 and 307 redirects) mean that a page has been temporarily moved to a new location. Search engines will continue to index the redirected page, and no link juice or traffic is passed to the new page, which is why temporary redirects can damage your search rankings if used by mistake.

How to fix it

Review all URLs to make sure the use of 302 and 307 redirects is justified. If so, don’t forget to remove them when they are no longer needed. However, if you permanently move any page, replace a 302/307 redirect with a 301/308 one.
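As an illustration, in an nginx configuration the distinction looks like this (the paths are placeholders, and other web servers offer equivalent directives):

```nginx
# Temporary: a page redirected only for the duration of a campaign
location = /sale { return 302 /spring-sale; }

# Permanent: the page has moved for good, so use 301 instead
location = /old-page { return 301 /new-page; }
```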

About the issue

Alt attributes within <img> tags are used by search engines to understand the contents of your images. If you neglect alt attributes, you may miss the chance to get a better placement in search results because alt attributes allow you to rank in image search results.
Not using alt attributes also negatively affects the experience of visually impaired users and those who have disabled images in their browsers.
For more information, please see these articles: Using ALT attributes smartly and Google Image Publishing Guidelines.

How to fix it

Specify a relevant alt attribute inside an <img> tag for each image on your website, e.g., "<img src="mylogo.png" alt="This is my company logo">".

About the issue

A broken external image is an image that can't be displayed because it no longer exists or because its URL is misspelled. Having too many broken external images negatively affects user experience and may be a signal to search engines that your website is poorly coded or maintained.

How to fix it

To fix a broken external image, perform one of the following:
- If an image was deleted or damaged, replace it with a new one
- If an image is no longer needed, simply remove it from your page's code
- If an image moved to a different location and you know its new address, change its URL

About the issue

Using too many URL parameters is not an SEO-friendly approach. Multiple parameters make URLs less enticing for users to click and may cause search engines to fail to index some of your most important pages.

How to fix it

Try to use no more than four parameters in your URLs.
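As a sketch of how such a check could work, the snippet below counts query parameters with Python's standard library; the four-parameter limit mirrors the guideline above:

```python
from urllib.parse import urlparse, parse_qsl

def parameter_count(url: str) -> int:
    """Number of query-string parameters in a URL."""
    return len(parse_qsl(urlparse(url).query, keep_blank_values=True))

def has_too_many_parameters(url: str, limit: int = 4) -> bool:
    """True if the URL exceeds the recommended parameter limit."""
    return parameter_count(url) > limit
```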

About the issue

This issue is reported if your page has neither a lang nor an hreflang attribute.
When running a multilingual website, you should make sure that you’re doing it correctly.
First, you should use a hreflang attribute to indicate to Google which pages should be shown to visitors based on their location. That way, you can rest assured that your users will always land on the correct language version of your website.
You should also declare a language for your webpage’s content (i.e., lang attribute). Otherwise, your web text might not be recognized by search engines. It also may not appear in search results or may be displayed incorrectly.

How to fix it

Perform the following:
- Add a lang attribute to the <html> tag, e.g., "<html lang="en">"
- Add a hreflang attribute to your page's <head> tag, e.g., <link rel="alternate" href="http://example.com/" hreflang="en"/>

About the issue

Providing a character encoding tells web browsers which set of characters must be used to display a webpage’s content. If a character encoding is not specified, browsers may not render the page content properly, which may result in a negative user experience. Moreover, search engines may consider pages without a character encoding to be of little help to users and, therefore, place them lower in search results than those with a specified encoding.

How to fix it

Declare a character encoding either by specifying one in the charset parameter of the HTTP Content-Type header (Content-Type: text/html; charset=utf-8) or by using a meta charset attribute in your webpage HTML (<meta charset="utf-8"/>). For more details, please see these articles: Character Encoding - HTTP header and Character Encoding - HTML

About the issue

A webpage’s doctype instructs web browsers which version of HTML or XHTML is being used. Declaring a doctype is extremely important in order for a page’s content to load properly. If no doctype is specified, this may lead to various problems, such as messed up page content or slow page load speed, and, as a result, negatively affect user experience.

How to fix it

Specify a doctype for each of your pages by adding a doctype declaration (e.g., "<!DOCTYPE html>" for HTML5) at the very top of every webpage's source code, right before the <html> tag.

About the issue

This issue is triggered if your page has content based on Flash, JavaApplet, or Silverlight plugins. These types of plugins do not work properly on mobile devices, which frustrates users. Moreover, they cannot be crawled and indexed properly, negatively impacting your website’s mobile rankings.

How to fix it

Convert unsupported plugin content into HTML5. If you’re using Flash videos on your website, please see this article.

About the issue

<frame> tags are considered to be one of the most significant search engine optimization issues. Not only is it difficult for search engines to crawl and index content within <frame> tags, which may in turn lead to your page being excluded from search results, but using these tags also negatively affects user experience.

How to fix it

Try to avoid using <frame> tags whenever possible.

About the issue

The rel="nofollow" attribute is an element in an <a> tag that tells crawlers not to follow the link (e.g., "<a href="http://example.com/link" rel="nofollow">Nofollow link example</a>"). "Nofollow" links don’t pass any link juice to referred webpages, which is why it is not recommended that you use nofollow attributes in internal links; you should let link juice flow freely throughout your website. Moreover, unintentional use of nofollow attributes may result in your webpage being ignored by search engine crawlers even if it contains valuable content.

How to fix it

Make sure not to use nofollow attributes by mistake. Remove them from <a> tags, if necessary.

About the issue

A sitemap.xml file is used to list all URLs available for crawling. It can also include additional data about each URL.
Using a sitemap.xml file is quite beneficial. Not only does it provide easier navigation and better visibility to search engines, it also quickly informs search engines about any new or updated content on your website. Therefore, your website will be crawled faster and more intelligently.

How to fix it

Consider generating a sitemap.xml file if you don't already have one.
Then specify the location of your sitemap.xml file in your robots.txt, and check whether Googlebot can index it with the Sitemaps report in Google Search Console.

About the issue

One of the common issues you may face when using HTTPS is a web server that doesn't support Server Name Indication (SNI). SNI allows a server to host multiple certificates, and thus serve multiple hostnames, at the same IP address, which may improve security and trust.

How to fix it

Make sure that your web server supports SNI. Keep in mind that SNI is not supported by some older browsers, which is why you need to ensure that your audience uses browsers supporting SNI.

About the issue

Google considers a website's security a ranking factor. Websites that do not support HTTPS connections may be less prominent in Google's search results, while HTTPS-protected sites tend to rank higher.
For more information, see this Google article.

How to fix it

Switch your site to HTTPS. For more details, see Secure your site with HTTPS.

About the issue

Your sitemap.xml should include the links that you want search engines to find and index. Using different URL versions in your sitemap could be misleading to search engines and may result in an incomplete crawling of your website.

How to fix it

Replace all HTTP URLs in your sitemap.xml with HTTPS URLs.
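A simple way to do this in bulk is a scripted rewrite. The sketch below upgrades http:// to https:// inside <loc> elements; it assumes a well-formed sitemap, and you should verify that every HTTPS counterpart actually resolves before deploying the result:

```python
import re

def upgrade_sitemap_urls(sitemap_xml: str) -> str:
    """Rewrites http:// locations to https:// inside <loc> elements."""
    return re.sub(
        r"(<loc>\s*)http://",
        r"\1https://",
        sitemap_xml,
    )
```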

About the issue

If any link on the website points to the old HTTP version of the website, search engines can become confused as to which version of the page they should rank.

How to fix it

Replace all HTTP links with the new HTTPS versions.

About the issue

This issue is triggered if the Content-Encoding entity is not present in the response header. Page compression is essential to the process of optimizing your website. Using uncompressed pages leads to a slower page load time, resulting in a poor user experience and a lower search engine ranking.

How to fix it

Enable compression on your webpages for faster load time.
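How you enable compression depends on your server. For example, on nginx a minimal gzip configuration might look like the following (Apache's mod_deflate offers equivalents):

```nginx
gzip on;
gzip_types text/html text/css application/javascript application/json;
gzip_min_length 1024;  # skip very small responses where gzip gains little
```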

About the issue

Blocked resources are resources (e.g., CSS, JavaScript, image files, etc.) that are blocked from crawling by a "Disallow" directive in your robots.txt file. By disallowing these files, you're preventing search engines from accessing them and, as a result, properly rendering and indexing your webpages. This, in return, may lead to lower rankings. For more information, please see this article.

How to fix it

To unblock a resource, simply update your robots.txt file.

About the issue

This issue is triggered if compression is not enabled in the HTTP response.
Compressing JavaScript and CSS files significantly reduces their size as well as the overall size of your webpage, thus improving your page load time.
Uncompressed JavaScript and CSS files make your page load slower, which negatively affects user experience and may worsen your search engine rankings.
If your webpage uses uncompressed CSS and JS files that are hosted on an external site, you should make sure they do not affect your page's load time.
For more information, please see this Google article.

How to fix it

Enable compression for your JavaScript and CSS files on your server.
If your webpage uses uncompressed CSS and JS files that are hosted on an external site, contact the website owner and ask them to enable compression on their server.
If this issue doesn't affect your page load time, simply ignore it.

About the issue

This issue is triggered if browser caching is not specified in the response header.
Enabling browser caching for JavaScript and CSS files allows browsers to store and reuse these resources without having to download them again when requesting your page. That way the browser will download less data, which will decrease your page load time. And the less time it takes to load your page, the happier your visitors are.
For more information, please see this Google article.

How to fix it

If JavaScript and CSS files are hosted on your website, enable browser caching for them.
If JavaScript and CSS files are hosted on a website that you don't own, contact the website owner and ask them to enable browser caching for them.
If this issue doesn't affect your page load time, simply ignore it.

About the issue

This issue is triggered if the total transfer size of the JavaScript and CSS files used on your page exceeds 2 MB.
The size of the JavaScript and CSS files used on a webpage is one of the important factors for a page's load time. Having lots of clunky JavaScript and CSS files makes your webpage "heavier" in weight, thus increasing its load time. This in turn leads to a poor user experience and lower search engine rankings.
For more information, please see this Google article.

How to fix it

Review your pages to make sure that they only contain necessary JavaScript and CSS files. If all resources are important for your page, consider reducing their transfer size.

About the issue

This issue is triggered if a webpage uses more than a hundred JavaScript and CSS files.
Each time a visitor navigates to a webpage, their browser first starts loading supportive files, such as JavaScript and CSS. For each file used by your webpage, a browser will send a separate HTTP request. Each request increases your page load time and affects its rendering, which has a direct impact on user experience, bounce rate, and, ultimately, search engine rankings.
For more information, please see this Google article.

How to fix it

Review your pages to make sure that they only contain necessary JavaScript and CSS files.
If all resources are important for your page, we recommend that you combine them.

About the issue

Minification is the process of removing unnecessary lines, white space, and comments from the source code.
Minifying JavaScript and CSS files makes their size smaller, thereby decreasing your page load time, providing a better user experience, and improving your search engine rankings.
For more information, please see this Google article.

How to fix it

Minify your JavaScript and CSS files.
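In practice you would use a dedicated minifier such as cssnano or terser, but to illustrate what minification does, here is a deliberately naive CSS minifier; it is regex-based and not safe for every stylesheet (e.g., strings containing braces):

```python
import re

def minify_css(css: str) -> str:
    """Naive CSS minifier: strips comments and collapses whitespace.
    Illustration only; use a dedicated tool for production."""
    css = re.sub(r"/\*.*?\*/", "", css, flags=re.S)   # remove comments
    css = re.sub(r"\s+", " ", css)                    # collapse whitespace
    css = re.sub(r"\s*([{};:,])\s*", r"\1", css)      # trim around punctuation
    css = css.replace(";}", "}")                      # drop trailing semicolons
    return css.strip()
```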
If your webpage uses CSS and JS files that are hosted on an external site, contact the website owner and ask them to minify their files.
If this issue doesn't affect your page load time, simply ignore it.

About the issue

This issue is triggered if your link URL is longer than 2,000 characters. Although theoretically there is no character limit for your URLs, it is still recommended that you keep their length under 2,000 characters. This is important because some browsers cannot handle URLs exceeding this limit. Moreover, keeping URLs at a reasonable length will make their crawling much easier, while extremely long URLs may be ignored by search engines.

How to fix it

Try to keep your link URLs shorter than 2,000 characters. For more information, please see this article.

Notices

About the issue

This issue is triggered if a crawler gets a 403 code when trying to access an external webpage or resource via a link on your site. A 403 HTTP status code is returned if a user is not allowed to access the resource for some reason. In the case of crawlers, this usually means that a crawler is being blocked from accessing content at the server level.

How to fix it

Check that the page is available to browsers and search engines. To do this, follow a link in your browser and check the Google Search Console data.

- If a page or resource is not available, contact the owner of the external website to restore deleted content or change the link on your page

- If a page is available but our bot is blocked from accessing it, you can ask the external website owner to unblock the page, so we can check all resources correctly. You can also hide this issue from your list.

About the issue

This issue is triggered if a non-descriptive anchor text is used for a link (either internal or external). An anchor is considered to be non-descriptive if it doesn’t give any idea of what the linked-to page is about, for example, “click here”, “right here”, etc. This type of anchor provides little value to users and search engines as it doesn't provide any information about the target page. Also, such anchors will offer little in terms of the target page’s ability to be indexed by search engines, and as a result, rank for relevant search requests. For more information on the criteria used to trigger this check, refer to “What are unoptimized anchors and how does Site Audit identify them?”.

How to fix it

To let users and search engines understand the meaning of the linked-to page, use a succinct anchor text that describes the page’s content. For best practices on how to optimize your anchor text, refer to the “Write good link text” section in Google’s Search Engine Optimization (SEO) Starter Guide.

About the issue

A page's crawl depth is the number of clicks required for users and search engine crawlers to reach it via its corresponding homepage. From an SEO perspective, an excessive crawl depth may pose a great threat to your optimization efforts, as both crawlers and users are less likely to reach deep pages.
For this reason, pages that contain important content should be no more than 3 clicks away from your homepage.

How to fix it

Make sure that pages with important content can be reached within a few clicks.
If any of them are buried too deep in your site, consider changing your internal link architecture.

About the issue

A nofollow attribute is an element in an <a> tag that tells crawlers not to follow the link. "Nofollow" links don’t pass any link juice or anchor texts to referred webpages. The unintentional use of nofollow attributes may have a negative impact on the crawling process and your rankings.

How to fix it

Make sure you haven’t used nofollow attributes by mistake. Remove them from <a> tags, if needed. 

About the issue

HTTP Strict Transport Security (HSTS) informs web browsers that they can communicate with servers only through HTTPS connections. So, to ensure that you don't serve unsecured content to your audience, we recommend that you implement HSTS support.

How to fix it

Use a server that supports HSTS.

About the issue

According to Google, long URLs are not SEO-friendly. Excessively long URLs intimidate users and discourage them from clicking or sharing the link, thus hurting your page's click-through rate and usability.

How to fix it

Keep your URLs at a reasonable length.

About the issue

Although multiple <h1> tags are allowed in HTML5, we still do not recommend that you use more than one <h1> tag per page. Including multiple <h1> tags may confuse users.

How to fix it

Keep a single <h1> tag per page, and use <h2>-<h6> tags for any additional headings.

About the issue

A robots.txt file has an important impact on your website's overall SEO performance. This file helps search engines determine what content on your website they should crawl.
Utilizing a robots.txt file can cut the time search engine robots spend crawling and indexing your website.
For more information, please see this Google article.

How to fix it

If you don't want specific content on your website to be crawled, creating a robots.txt file is recommended. To check your robots.txt file, use Google's robots.txt Tester in Google Search Console.
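For example, a robots.txt file that keeps crawlers out of low-value sections while allowing everything else might look like this (the paths are placeholders):

```
User-agent: *
Disallow: /cart/
Disallow: /internal-search/
Allow: /
```

Note that Disallow prevents crawling, not indexing; to keep a page out of search results entirely, use a noindex meta tag instead.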

About the issue

This issue is triggered if a language value specified in a hreflang attribute doesn't match your page's language, which is determined based on semantic analysis.
Any mistakes in hreflang attributes may confuse search engines, and your hreflang attributes will most likely be interpreted incorrectly. So it's worth taking the time to make sure you don't have any issues with hreflang attributes.
For more information, see these articles: Tell Google about localized versions of your page, How to Do International SEO with Semrush, and Hreflang Attribute 101.

How to fix it

Review all pages reported to have this issue and fix all hreflang attributes.
Please note that our crawler may report your webpage to have a "hreflang language mismatch" issue even if the hreflang value shows the correct language. This usually happens if your webpage is multilingual or has too little content.

About the issue

If a page cannot be accessed by search engines, it will never appear in search results. A page can be blocked from crawling either by a robots.txt file or a noindex meta tag.

How to fix it

Make sure that pages with valuable content are not blocked from crawling by mistake.

About the issue

A webpage that is not linked to from any other page on your website is called an orphaned page. It is very important to check your website for such pages. If a page has valuable content but is not linked internally, it can miss out on the opportunity to receive enough link juice. Orphaned pages that no longer serve their purpose confuse your users and, as a result, negatively affect their experience. We identify orphaned pages on your website by comparing the pages we crawled against the pages recorded in your Google Analytics account. That's why, to check your website for orphaned pages, you need to connect your Google Analytics account.

How to fix it

Review all orphaned pages on your website and do either of the following:
- If a page is no longer needed, remove it
- If a page has valuable content and brings traffic to your website, link to it from another page on your website
- If a page serves a specific need and requires no internal linking, leave it as is.

About the issue

An orphaned page is a webpage that is not linked internally. Including orphaned pages in your sitemap.xml files is considered to be a bad practice, as these pages will be crawled by search engines. Crawling outdated orphaned pages will waste your crawl budget. If an orphaned page in your sitemap.xml file has valuable content, we recommend that you link to it internally.

How to fix it

Review all orphaned pages in your sitemap.xml files and do either of the following:
 - If a page is no longer needed, remove it
 - If a page has valuable content and brings traffic to your website, link to it from another page on your website
 - If a page serves a specific need and requires no internal linking, leave it as is.

About the issue

The x-robots-tag is an HTTP header that can be used to instruct search engines whether or not they can index or crawl a webpage. This tag supports the same directives as a regular meta robots tag and is typically used to control the crawling of non-HTML files. If a page is blocked from crawling with x-robots-tag, it will never appear in search results.

How to fix it

Make sure that pages with valuable content are not blocked from crawling by mistake.

About the issue

Blocked external resources are resources (e.g., CSS, JavaScript, image files, etc.) that are hosted on an external website and blocked from crawling by a "Disallow" directive in an external robots.txt file. Disallowing these files may prevent search engines from accessing them and, as a result, properly rendering and indexing your webpages. This, in return, may lead to lower rankings. For more information, please see this article.

How to fix it

If blocked resources that are hosted on an external website have a strong impact on your website, contact the website owner and ask them to edit their robots.txt file.
If blocked resources are not necessary for your site, simply ignore them.

About the issue

If your website uses JavaScript or CSS files that are hosted on an external site, you should be sure that they work properly. Any script that has stopped running on your website may jeopardize your rankings since search engines will not be able to properly render and index your webpages. Moreover, broken JavaScript and CSS files may cause website errors, and this will certainly spoil your user experience.

How to fix it

Contact the website owner and ask them to fix a broken file.

About the issue

Although using permanent redirects (a 301 or 308 redirect) is appropriate in many situations (for example, when you move a website to a new domain, redirect users from a deleted page to a new one, or handle duplicate content issues), we recommend that you keep them to a reasonable minimum. Every time you redirect one of your website's pages, it decreases your crawl budget, which may run out before search engines can crawl the page you want to be indexed. Moreover, too many permanent redirects can be confusing to users.

How to fix it

Review all URLs with a permanent redirect. Change permanent redirects to a target page URL where possible.
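When you clean up redirect chains, each source URL should point straight at its final destination. The sketch below flattens a chain mapping; it is a simplified model, and a real redirect audit would also need to handle cross-domain hops and relative URLs:

```python
def flatten_redirects(redirects: dict) -> dict:
    """Given a mapping {source: target}, point every source directly at the
    final destination so each URL redirects at most once."""
    flat = {}
    for src in redirects:
        seen = {src}
        dst = redirects[src]
        while dst in redirects and dst not in seen:  # follow chain, guard loops
            seen.add(dst)
            dst = redirects[dst]
        flat[src] = dst
    return flat
```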

About the issue

This issue is triggered if a link (either external or internal) on your website has an empty or naked anchor (i.e., anchor that uses a raw URL), or anchor text only contains symbols. Although a missing anchor doesn't prevent users and crawlers from following a link, it makes it difficult to understand what the page you're linking to is about. Also, Google considers anchor text when indexing a page. So, a missing anchor represents a lost opportunity to optimize the performance of the linked-to page in search results.

How to fix it

Use anchor text for your links where it is necessary. The link text must give users and search engines at least a basic idea of what the target page is about. Also, use short but descriptive text. For more information, please see the "Use links wisely" section in Google's SEO Starter Guide.
