How to Optimize Your Website for SEO

Through SEO you can optimize your site and improve its position on Google. A good way to learn it is to join an SEO community, where you can find out what to do to make your pages rank higher in the SERPs.

Let’s talk about how to optimize the main positioning factors, the main SEO problems that arise when optimizing a website, and their most promising solutions.

I have divided the topics into four blocks:

  1. Accessibility
  2. Indexability
  3. Content
  4. Meta tags

1. Accessibility

The initial step in optimizing a website is to allow search engines access to our content. You have to check whether the web page is visible to search engines and how they are seeing the page. You can also learn this by consulting a digital marketing agency for better solutions.

It may be the case that search engines cannot correctly read a website for various reasons, and being readable is a key requirement for positioning.

Aspects of optimization that you should take into account for good accessibility:

  • Robots.txt file
  • Meta robots tag
  • HTTP status codes
  • Sitemap
  • Web structure
  • JavaScript and CSS
  • Web speed

Robots.txt file

The robots.txt file prevents search engines from accessing certain parts of a site. It is very useful for preventing Google from showing pages we do not want in the search results. For example, in WordPress, to keep search engines from accessing the administrator files, the robots.txt file would look like this:

Example

User-agent: *
Disallow: /wp-admin

NOTE: Be very careful not to block search engine access to your entire website without realizing it, for instance:

User-agent: *
Disallow: /

You must verify that the robots.txt file is not blocking any key part of your website. You can do this by visiting the URL www.example.com/robots.txt, or through Google Webmaster Tools in “Crawl” > “robots.txt Tester.”
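
If you prefer to script this check, here is a minimal sketch in Python that fetches a robots.txt and warns when it blocks the whole site (the URL is a placeholder, and it assumes the requests library is installed):

Example

import requests

# Fetch the robots.txt file (placeholder URL) and scan it line by line
robots = requests.get("http://www.example.com/robots.txt").text
for line in robots.splitlines():
    # "Disallow: /" on its own blocks the entire site
    if line.strip().lower().replace(" ", "") == "disallow:/":
        print("Warning: this robots.txt blocks the entire website!")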

The robots.txt file can also indicate where our Sitemap is located by adding a Sitemap line at the end of the document.

So a complete robots.txt example for WordPress would look like this:

Example

User-agent: *
Disallow: /wp-admin
Sitemap: http://www.example.com/sitemap.xml

Want to explore this file in more detail? I recommend visiting the website with the information about the standard.

Meta robots tag

The “robots” meta tag is used to tell search engine robots whether or not they can index the page and whether they should follow the links it contains. If you want to learn more, you can contact a local SEO specialist, such as an SEO agency in the UAE if you are based there.

When analyzing a page, check whether any meta tag is mistakenly blocking robots’ access. This is an example of what these tags look like in HTML code:

Example

<meta name="robots" content="noindex, nofollow">

Meta tags are very handy for preventing Google from indexing pages that no longer interest you, such as pagination pages or filters, while still following the links so that it continues to crawl the website. In this case, the tag would be written as follows:

Example

<meta name="robots" content="noindex, follow">

We can check the meta tags by right-clicking on the page and selecting “View page source.”

If you want to go further, the Screaming Frog tool lets us see at a glance which pages on the whole website have implemented this tag. See it in the “Meta Robots 1” field and the “Directives” tab. Locate all the pages that carry these tags by mistake and delete the tags.
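
If you would rather script this check for a single page, here is a minimal sketch in Python that prints any meta robots tag it finds (the URL is a placeholder, and it assumes the requests library is installed):

Example

from html.parser import HTMLParser
import requests

class MetaRobotsParser(HTMLParser):
    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            attrs = dict(attrs)
            if attrs.get("name", "").lower() == "robots":
                # Print the directives, e.g. "noindex, follow"
                print("meta robots:", attrs.get("content"))

html = requests.get("http://www.example.com").text
MetaRobotsParser().feed(html)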

HTTP status codes

If a URL returns an error status code (404, 502, etc.), search engines and users will not be able to access that page. To identify these URLs, we recommend you use Screaming Frog, because it quickly shows the status of all the URLs on your page.

Note: whenever you do a new crawl in Screaming Frog, export the results as a CSV so that you can collect them all in the same Excel file later.
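
If you want to check a short list of URLs without Screaming Frog, here is a minimal sketch in Python that prints the status code of each URL (it assumes the requests library is installed and a hypothetical urls.txt file with one URL per line):

Example

import requests

with open("urls.txt") as f:
    urls = [line.strip() for line in f if line.strip()]

for url in urls:
    try:
        # A HEAD request is enough to read the status code without downloading the page
        response = requests.head(url, allow_redirects=False, timeout=10)
        print(response.status_code, url)
    except requests.RequestException as exc:
        print("ERROR", url, exc)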

Sitemap

The Sitemap is an XML file containing a list of the pages on the site along with some additional information, such as how often the content of a page changes, when it was last updated, etc.

An excerpt from a sitemap would look like this:

Example

<url>
  <loc>http://www.example.com</loc>
  <changefreq>daily</changefreq>
  <priority>1.0</priority>
</url>

Important points that you should check regarding the Sitemap are:

  • It follows the protocols; otherwise, Google will not process it properly
  • It is uploaded to Google Webmaster Tools
  • It is up to date; when you update your website, make sure all the new pages are in your Sitemap
  • Google is indexing all the pages in the Sitemap

If the website does not have a sitemap, we will have to create one by following four steps:

  1. Generate an Excel file with all of the pages that you want indexed. For this, use the same Excel file that we created when checking the HTTP response codes
  2. Create the Sitemap. For this, I recommend the Sitemap Generators tool (simple and very complete); if you prefer to script it, see the sketch after this list
  3. Compare the pages that are in your Excel file with those in the Sitemap, and remove the pages that you do not want indexed
  4. Upload the Sitemap through Google Webmaster Tools
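
As an alternative to step 2, here is a minimal sketch in Python that builds a sitemap.xml from a hypothetical urls.txt file with one URL per line:

Example

from datetime import date

with open("urls.txt") as f:
    urls = [line.strip() for line in f if line.strip()]

# Use today's date as a simple <lastmod> value for every page
today = date.today().isoformat()
entries = "\n".join(
    f"  <url>\n    <loc>{url}</loc>\n    <lastmod>{today}</lastmod>\n  </url>"
    for url in urls
)

with open("sitemap.xml", "w") as f:
    f.write('<?xml version="1.0" encoding="UTF-8"?>\n')
    f.write('<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n')
    f.write(entries + "\n")
    f.write("</urlset>\n")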

Web structure

If a website’s structure is too deep, Google will find it more difficult to reach all the pages. It is therefore recommended that the structure be no more than three levels deep (not counting the home page). Since the Google robot has a limited time to crawl a website, the more levels it has to go through, the less time it will have to access the deepest pages.

That is why it is always better to create a horizontal web structure and not a vertical one.
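
As a simple illustration (the URLs are hypothetical), a horizontal structure keeps every page within a few clicks of the home page, while a vertical structure buries pages many levels deep:

Example

Horizontal (3 levels): example.com/shoes/running-shoe
Vertical (5 levels): example.com/store/men/footwear/shoes/running-shoe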
