Heading tags
The best methodology for employing heading tags is as follows.
- First, your page should employ the h1 tag only once; it usually contains the title of the page.
- Your h2 tags should repeat your important keywords, but with additional terms that give context to the section each h2 tag covers. For example, if you are writing a page about IT service and repair, you might employ h2 tags with text such as "Your best choice for IT support service" and "Certified and reliable IT professionals." These usually serve as the subtitles of the page.
- The h3 tags are optional, but can come in handy for organizing longer pages. The search engine ranking power of heading tags decreases as you progress from h1 down through the lower orders of heading tags.
In short, h1 tags are mandatory, h2 tags are highly recommended, and h3 tags are necessary only in the most competitive markets. Heading tags have a complementary effect when combined with an effective title tag, body text, and meta description.
Remember also that keyword prominence applies to heading tags, so greater weight is given to the words at the beginning of the tag.
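To make this concrete, here is a minimal sketch of how the hypothetical IT services page described above might be marked up; the business name, description text, and section titles are illustrative, not prescribed:

    <title>IT Service and Repair | Acme IT</title>
    <meta name="description" content="Certified, reliable IT support and repair services.">

    <!-- One h1 per page: usually the page title, keywords first -->
    <h1>IT Service and Repair</h1>

    <h2>Your best choice for IT support service</h2>
    <p>Body text for the section...</p>

    <h2>Certified and reliable IT professionals</h2>
    <!-- Optional h3s help organize a longer page -->
    <h3>On-site repair</h3>
    <h3>Remote support</h3>

Note how the keyword phrase leads the h1 and recurs with added context in each h2, consistent with the prominence rule above.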
Keyword meta tag
Google does not make use of the keywords meta tag in determining web ranking, but some smaller search engines still employ it when ordering search results. Notably, Bing publicly acknowledges that it still makes use of the keywords meta tag, but warns that, because of historic abuse, it will not give the tag much weight.
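If you do include the tag for the sake of those smaller engines, it is a single meta element in the page head. A minimal sketch, with illustrative keyword values:

    <head>
      <!-- Comma-separated keywords: ignored by Google, weighted lightly at best by Bing -->
      <meta name="keywords" content="IT support, computer repair, managed IT services">
    </head>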
Google Search Console
You can test and edit a working copy of your robots.txt file with the robots.txt Tester: https://www.google.com/webmasters/tools/robots-testing-tool?utm_source=support.google.com/webmasters/&utm_medium=referral&utm_campaign=6155685
Test your robots.txt file
- Open the tester tool for your site, and scroll through the robots.txt code to locate the highlighted syntax warnings and logic errors. The number of syntax warnings and logic errors is shown immediately below the editor. (A sample robots.txt to experiment with appears after these steps.)
- Type the URL of a page on your site in the text box at the bottom of the page.
- Select the user-agent you want to simulate in the dropdown list to the right of the text box.
- Click the TEST button to test access.
- Check whether the TEST button now reads ACCEPTED or BLOCKED to find out if the URL you entered is blocked from Google's web crawlers.
- Edit the file on the page and retest as necessary. Note that changes made on the page are not saved to your site! See the next step.
- Copy your changes to the robots.txt file on your site. The tool does not make changes to the actual file on your site; it only tests against the copy hosted in the tool.
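If you want a file to experiment with in the tester, a minimal robots.txt along the following lines (the paths are illustrative) will produce both results: a URL under /staging/ should show BLOCKED when Googlebot is the selected user-agent, while most other URLs should show ACCEPTED.

    # Rules for Google's main crawler
    User-agent: Googlebot
    Disallow: /staging/

    # Rules for all other crawlers
    User-agent: *
    Disallow: /tmp/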
Limitations of the robots.txt Tester tool:
- Changes you make in the tool editor are not automatically saved to your web server. You need to copy and paste the content from the editor into the robots.txt file stored on your server.
- The robots.txt Tester tool only tests your robots.txt with Google user-agents or web crawlers, like Googlebot. It cannot predict how other web crawlers interpret your robots.txt file.
Importance of freshness
A search engine sees a website for what it is. If a website has static content that never changes, the search engine knows it, and over time its spiders will visit less often. Why send a search spider for content that doesn't change? The site will not be seen by search engines as high-value to its users. Publish regular content to your site, on the other hand, and the search engines will know that too. In fact, search engines are moving toward real-time search results, although this technology isn't yet fully developed. The search engines will send spiders to your site more often, looking for both new content and changes to old content, and your site will rise in the rankings. And, as if that weren't enough, new content will get a fresh content bump.
The fresh content bump refers to a supplemental boost in ranking power that search engines assign to fresh content, such as posts served up from blogging platforms like WordPress, for the first few weeks after the publication date.