Official Google answers on Technical SEO – March 2018
John Mueller, a webmaster trends analyst and developer advocate at Google, opened an AMA (Ask Me Anything) thread on Reddit to answer questions about technical SEO, along with some more general questions.
Looking back at the famous Matt Cutts YouTube videos from a couple of years ago, Google has been known to answer questions about SEO in abstract, roundabout ways, very seldom giving unambiguous statements. Nevertheless, such public discussions are always helpful for understanding the current state of SEO.
Without further ado, here are some key points from the discussion:
Login/Sign-up modals & cloaked content
- The page shouldn’t be blank behind the modal; make sure you show the underlying content, including images.
- Don’t differentiate between Googlebot and a real user (e.g., showing the modal to users but disabling it for Googlebot). It is especially important not to serve different markup (HTML).
User Interface & Design
- Don’t hide important navigation items/content on mobile
- Don’t hide important content on mobile; if users see content in the SERPs that isn’t actually shown in the mobile design when they visit your page, that’s bad
- Burger menus are OK in mobile design
- Above-the-fold content needs to include at least some real content. If it’s all ads, it will hurt you.
- Inline SVGs cannot be read by Google and are not included in Google Images search
- When a page is largely composed of SVGs/images, you NEED to put some text content there. John suggests even comments will help.
Content updates
- Updating content does not automatically increase rankings
- Content that changes all the time and content that stays the same for many years can be equally beneficial for users
* Having said that, there is a small comment I would make on this topic: if you update your content to be more beneficial for users (more content, better optimized for queries), you WILL rank better.
Knowledge graph is a ranking signal
The Knowledge Graph is essentially a database that Google maintains, with data from various places, including Wikipedia. My understanding is that it maps similar text together.
So, let’s say its database records that “Green tea” content usually contains keywords like “antioxidants”, “health”, and “caffeine”. Knowing that, it also knows that “Green tea” content almost never talks about “cloud computing”, “flying squirrels”,…
This is quite an important thing to keep in mind once you understand it. For example, I imagine the distinction between relevant and irrelevant backlinks relies heavily on this knowledge graph.
It is used to better understand your page and search queries. The graph maps what Google sees in various places around the web in order to work out how different information could be relevant to people’s queries.
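To make the co-occurrence idea concrete, here is a minimal toy sketch (not Google’s actual system, and the document names and scoring are my own assumptions): it counts how often words appear in a small set of example “green tea” documents and uses those counts to judge how topically related a candidate keyword is.

```python
# Toy illustration of topical co-occurrence (hypothetical, not Google's
# real Knowledge Graph): count word occurrences across documents about a
# topic, then score a keyword by how often it shows up with that topic.
from collections import Counter

docs_about_green_tea = [
    "green tea is rich in antioxidants and caffeine",
    "the health benefits of green tea include antioxidants",
]

def cooccurrence_counts(docs):
    """Count how often each word appears across the topic's documents."""
    counts = Counter()
    for doc in docs:
        counts.update(doc.split())
    return counts

def relatedness(keyword, counts):
    """Fraction of all topic-word occurrences accounted for by this keyword."""
    total = sum(counts.values())
    return counts[keyword] / total if total else 0.0

counts = cooccurrence_counts(docs_about_green_tea)
# "antioxidants" co-occurs with green-tea content; "squirrels" does not,
# so its relatedness score is zero for this topic.
```

On this toy data, `relatedness("antioxidants", counts)` is positive while `relatedness("squirrels", counts)` is 0.0, mirroring the green tea example above.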
Backlinks
- They are still important and always will be
- Nofollow links are not counted like normal links; Google considers nofollow a strong indication not to pass trust through that link
A/B testing
- You can do it
- Treat Googlebot the same as any other user
- It should be done for a limited time
- Pages you are A/B testing should have the same content (you shouldn’t display affiliate offers in A and dancing events in B)
- If you have different URLs for A & B, make sure you set the canonical to your primary URL
- Keep in mind, Googlebot does not store & replay cookies, so make sure you have a fallback for users without cookies
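The bucketing and cookie-fallback advice above can be sketched in a few lines. This is a minimal illustration under my own assumptions (the function names and the example URL are hypothetical, not from the AMA): a visitor with an existing variant cookie keeps their bucket, a visitor without cookies (including Googlebot) gets a consistent default, and both variants declare the same canonical URL.

```python
# Hypothetical A/B bucketing sketch: Googlebot does not store & replay
# cookies, so cookieless visitors must still get a consistent page.
PRIMARY_URL = "https://example.com/landing"  # assumed primary (canonical) URL

def choose_variant(cookies):
    """Pick an A/B variant with a fallback for users without cookies."""
    if "ab_variant" in cookies:      # returning user: honor their bucket
        return cookies["ab_variant"]
    return "A"                       # no cookie (e.g., Googlebot): default variant

def canonical_tag():
    # Both variant URLs should point their canonical at the primary URL.
    return f'<link rel="canonical" href="{PRIMARY_URL}">'
```

For example, `choose_variant({})` returns "A" (the default shown to cookieless visitors), while `choose_variant({"ab_variant": "B"})` keeps a returning user in bucket "B".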
Local search (google.co.uk, google.co.za,…)
- Search algorithms are for the most part the same
- Search results however are different because the content is different on local markets
- Same algorithm + Different content = Different search behavior & results
- Some new features launch later in country/local search: features like rich snippets/cards.
HTTPS & SSL Certificates
- Yes, HTTPS has a small boost on the rankings
- SSL certificates are not differentiated for search (if it’s a valid certificate, it’s OK)
Notable mention 1 – John Mueller’s comment on Moz’s Domain Authority
Notable mention 2 – John Mueller’s comment on SEO advice