Google Webmaster Hangouts, 29 May 2020


Written by Ryan

14/06/2020

 

Google Webmasters Hangout

The Google Webmasters Hangout is a Q&A session from Google where you can ask questions to John Mueller, a member of the team responsible for Google Search. You can submit questions in advance or ask them live during the hangout.

In this blog post I want to lay out the questions and answers in a clear, organized way.


A homepage is indexed according to Google Search Console but is not shown on the Google results pages.

Audience:
So my question is regarding a query, site:domainname.com. Usually, when we want to see whether a specific page is indexed in Google, other than in Search Console, I do a site: query directly on the web. I see the homepage is not coming up, which used to come up before.

And anytime I go back in the Search Console and resubmit it, I see it coming up for 24 hours, like for a day, and then it disappears. Surprisingly, that page is still coming up normally or working fine for one of the competitive keywords. So it’s not like it’s one of the blacklisted pages.

If it was an internal page, I would not worry, for a variety of reasons. But since this is the home page, I don’t know, should I be worried? And there are no signals in Search Console showing me an error. The AMP version and everything looks fine.

John Mueller:
Now that’s something that comes up every now and then. But it’s not a sign of anything being wrong or anything broken.

The main reason you’re seeing this is that for site queries in particular, we don’t have any defined order that we would show them.

So it’s very common that you would see the homepage first or on the first page of a site query, site: results. But it’s not always the case, and it’s not a sign that there is anything wrong with the website or anything blocked, or otherwise problematic. If it’s ranking normally, I think you’re OK.

Audience: Even though it doesn’t appear at all, on any of the pages?

John Mueller:
At all is kind of tricky because of the way that things get filtered out from a site: query, where we try to filter out things that are kind of duplicate, where we already have shown the same title and description.

And because of things like that, the site: query isn’t really meant as a comprehensive list of all the pages that we have indexed. And since it’s an artificial query, we don’t have any kind of defined order for the site: results.

Audience: So nothing to worry about at all? It’s just completely natural to have the homepage missing in it?

John Mueller:
Yeah. On the one hand, for a site query, that’s something that happens every now and then. It sometimes also happens that even within a website, some other page is seen as a primary page for the website.

So even just from the point of view of what your primary page is, sometimes that’s not necessarily the home page, for a variety of reasons. Maybe some product that you have is what your whole company is known for, and that’s some subpart of your website.

Audience: Got it. OK, thanks.

John Mueller:
Cool.

 

 

How does Google handle (adaptive) content where the mobile version of your website may differ from the desktop version?

Audience:
I had a question. A lot of people who are doing personalization now are talking about adaptive content for, for example, their mobile home page versus their desktop home page. And with mobile indexing, how does Google feel about adaptive content, where the content on the mobile version doesn’t necessarily contain all of the same content that’s on the desktop?

John Mueller:
Ultimately, that’s something that you kind of have to decide for yourself, because from an indexing point of view, we will index the mobile version of the homepage.

* Google visits your website from the US; if you filter your website visitors based on geography, that can cause problems.

And usually, we crawl and index from US-based IP addresses. So if you’re doing adaptive content by country or by language or by city or something like that, then that would also play into that.

But essentially, we would index the content that we would see. And if you show users other content for other locations or other devices, that’s kind of up to you.

* What if you actually show two completely different kinds of content?

The only problematic case where someone on our side might get involved is when it gets into more malicious content, where maybe you show comic books on the mobile version, which is the one that we index, and the desktop version is an adult website, or something crazy. That’s something that we don’t see very often, but it can happen every now and then.

But if you’re adjusting the content slightly differently, if you’re saying, well, on mobile, people don’t want to read as much, so we have a shorter version of the page, and on desktop, we have maybe more links to PDFs or something like that, that’s completely fine. Totally up to you.

The only thing to keep in mind is that we’ll just index the mobile version. So we won’t look at the desktop version at all. If there’s something that you want your site to be known for that’s only on the desktop version, we might not know about it.

* Here John confirms that Google indeed favors the mobile version of your website over the desktop version.
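To make this concrete, here is a minimal dynamic-serving sketch, assuming a Node/Express server (the render helpers are hypothetical placeholders for your own templates). The point is simply that whatever the mobile branch returns is what Google indexes under mobile-first indexing:

```ts
import express from "express";

const app = express();

app.get("/", (req, res) => {
  const ua = req.get("User-Agent") ?? "";
  const isMobile = /Mobi|Android/i.test(ua); // crude device check, illustration only

  // Vary tells caches and crawlers that the response depends on the User-Agent,
  // which is the usual requirement when serving different HTML per device.
  res.set("Vary", "User-Agent");

  // Googlebot mostly crawls with a smartphone user agent, so the mobile
  // branch is the version that ends up in the index.
  res.send(isMobile ? renderMobileHome() : renderDesktopHome());
});

// Hypothetical render helpers standing in for real templates.
function renderMobileHome(): string {
  return "<html><body><h1>Home</h1><p>Shorter mobile copy…</p></body></html>";
}
function renderDesktopHome(): string {
  return "<html><body><h1>Home</h1><p>Longer desktop copy…</p></body></html>";
}

app.listen(3000);
```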

Audience:
Makes sense, thank you.

John Mueller:
Cool, thanks.

* What if the mobile version has a “read more” button that visitors have to tap before they see the content?

Audience:
I do have a follow-up note on that. What we have been doing is replicating the same content that’s considered for ranking, but on the mobile version we’ll probably add a link called Read More. So someone on a phone or an iPad or something can otherwise just see a very small version of the same piece of content, since that is what Google is going to consider for ranking. I don’t know if that makes sense, if it’s OK or not.

John Mueller:
From our point of view, that’s OK. I don’t know if users always appreciate that, because lots of people on mobile do want to see the full content. But that’s ultimately up to you. So we’ll just index the version that you have on the mobile version. And if that’s all you want to have indexed, that’s up to you. If you prefer to have more indexed, then make sure that more content is on the mobile version as well.

Audience:
Got it.
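As a sketch of the “Read More” pattern discussed above (the class names and markup are assumptions): the full text stays in the mobile HTML, so it remains indexable, and is only visually collapsed until the user taps the link:

```ts
// Assumed markup: <div class="article-body is-collapsed">…full text…</div>
//                 <a class="read-more" href="#">Read more</a>
// The content is in the DOM from the start; CSS on .is-collapsed hides the
// overflow. Tapping the link only removes the visual collapse.
document.querySelectorAll<HTMLAnchorElement>(".read-more").forEach((link) => {
  link.addEventListener("click", (event) => {
    event.preventDefault();
    const body = link.previousElementSibling as HTMLElement | null;
    body?.classList.remove("is-collapsed");
    link.remove(); // the link has done its job
  });
});
```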

 

 

Cloaking

John Mueller:
Cool. OK. Let me jump into some of the submitted questions. And if you all have any questions along the way, feel free to jump in. Or if there is anything kind of related that comes up, feel free to bring that up. Let’s see. I think this is a similar question already, a question around cloaking in general.

Q:
In some situations, we display slightly different content for US IPs on domains in non-English languages, to optimize the way that Google indexes our site, since Googlebot mostly comes from the US.

Do we risk any cloaking penalty with this behavior?
How does Google define or detect cloaking?
In general, is “same IP, same content” a safe enough solution?

So in general, the thing you would need to watch out for is that you’re not serving Google or other search engines specifically different content.

* If you’re serving all users in the US this content, that’s perfectly fine. That’s the version that we’ll index.

* If users from other countries see slightly different content, that’s ultimately up to you.

In an ideal world, we would be able to crawl and index from all of these different countries, and see all of those versions of the content. But for practical reasons, just crawling the web once is already pretty hard. So that’s not something that we can easily expand to all other countries.

So if you’re showing content to users in the US and Google is crawling from the US, then that’s what we would index. That’s not something that we would consider cloaking.
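A minimal sketch of that “safe” setup, assuming an Express server and a hypothetical GeoIP helper (lookupCountry is not a real library call): the content variant is chosen by the visitor’s location only, and Googlebot is never special-cased, so it simply sees the same variant as any other US visitor:

```ts
import express from "express";

const app = express();

// Hypothetical GeoIP lookup standing in for a real database.
function lookupCountry(ip: string): string {
  return ip.startsWith("192.0.2.") ? "US" : "DE"; // toy logic, illustration only
}

app.get("/pricing", (req, res) => {
  // Same branch for every client from the same location, crawler or human:
  // never test for a Googlebot user agent here.
  const country = lookupCountry(req.ip ?? "");
  res.send(country === "US" ? renderUsPricing() : renderOtherPricing());
});

// Hypothetical render helpers standing in for real templates.
function renderUsPricing(): string {
  return "<html><body><p>US prices in USD…</p></body></html>";
}
function renderOtherPricing(): string {
  return "<html><body><p>Prices for other regions…</p></body></html>";
}

app.listen(3000);
```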

Cloaking and the webspam team
When it comes to cloaking: if the webspam team suspects cloaking is happening, they have lots of sophisticated means to test that. So I suspect it’s not anything you need to worry about if you’re doing this in a reasonable way, but the webspam team does sometimes take the time to figure out what’s actually happening, and to make sure that it’s not causing any problems in our search results.

 

 

301 redirects: sending visitors to a different website when they come from another country.

* This is a question for companies that want to serve customers internationally.

And then a question regarding 301 redirects. For strategic reasons, we would like to redirect all users coming from IP addresses in a certain country to a different domain. Let’s say we have abc.info, and we redirect every user coming from a Russian IP to xyz.org. xyz.org would have a canonical pointing to abc.info. The reason is that we want abc.info to rank and xyz.org to serve visitors from Russia. Do you see any risks here which might harm the rankings of abc.info?

John:
In general, this is something that you can do. I think you probably want to watch out that your users are also happy with this, because ultimately, if your users learn that your website is not what they were looking for, then they’ll go elsewhere anyway.

But in general, when it comes to internationalization, you can have a setup where you have something like a central homepage that redirects users from different countries to different versions of your site.

The thing to watch out for, again, is that we will crawl and index from the US. So if you always redirect US users away from your Russian content, we would never recognize that you actually have Russian content, which might make it hard for you to rank for all of these Russian keywords, for example. But in general, this kind of setup is something you can do.

You can use hreflang to tell us about this and kind of flag the redirecting page as an x-default page. And by that, we will generally learn which versions to show in search. It’s not always guaranteed that we will show the canonical URL, or the one that you have the rel canonical to. So if we see lots of signals telling us, actually, your Russian site is the one that people prefer, then we might index that URL as your main website.

So for canonicalization, we use a number of different signals. The redirects help us, the rel canonical helps us. But we also look at things like links, internally and externally, site map files, all of the other signals that we have there. So that’s something just to keep in mind if you’re seeing that the wrong version is being indexed.
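A rough sketch of the setup from the question, assuming an Express server, a hypothetical GeoIP helper, and the placeholder domains from the example: abc.info sends Russian visitors to xyz.org, and both domains carry hreflang annotations with the redirecting page flagged as x-default, as John suggests:

```ts
import express from "express";

const app = express();

// Hypothetical GeoIP lookup standing in for a real database.
function lookupCountry(_ip: string): string {
  return "RU"; // stub, illustration only
}

// On abc.info: send Russian visitors to the Russian domain. The question
// uses a 301; since this redirect depends on who is asking, a 302 is also
// commonly used for geo redirects.
app.use((req, res, next) => {
  if (lookupCountry(req.ip ?? "") === "RU") {
    return res.redirect(302, "https://xyz.org" + req.originalUrl);
  }
  next();
});

// hreflang annotations both domains would carry in their <head>, with the
// redirecting page (abc.info) flagged as x-default:
const hreflangTags = `
<link rel="alternate" hreflang="x-default" href="https://abc.info/" />
<link rel="alternate" hreflang="ru" href="https://xyz.org/" />
`;
```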

 

 

How long does it take before a Sitelinks Search Box is removed from the search results?

Q:
What’s a typical timeline for a large corporate site to see Sitelinks Search Box to be removed from search results?

Example of a Sitelinks Search Box:

(Image: the Sitelinks Search Box in use)

On Pinterest, you can search the site itself directly from Google.
More info: https://developers.google.com/search/docs/data-types/sitelinks-searchbox

JOHN:
So I assume this is if you add the “no sitelinks” tag. I don’t know what the exact meta tag is called.

But there is a meta tag that you can add that removes the sitelinks search box. And if you add that to your pages, then generally over time, we will drop the Sitelinks Search Box for that site. I don’t know how long that takes to be processed.

I do know that if you add the markup to provide your own Sitelinks Search Box with your own website, that can sometimes take quite a bit of time. So with some kinds of structured data, we can pick that up within a couple of days. With the sitelinks markup, I’ve seen it sometimes take a month, maybe a little bit longer. So sometimes that just takes longer to be processed.
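For reference: the opt-out tag John can’t recall the name of is documented at the URL above as nositelinkssearchbox, and the markup that requests your own Sitelinks Search Box is WebSite structured data with a SearchAction. A small sketch, with placeholder example.com URLs:

```ts
// Opt out of the Sitelinks Search Box (goes in the <head> of the homepage):
const optOutTag = '<meta name="google" content="nositelinkssearchbox" />';

// Conversely, the markup that requests one: WebSite + SearchAction JSON-LD.
const sitelinksSearchboxMarkup = `
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "WebSite",
  "url": "https://www.example.com/",
  "potentialAction": {
    "@type": "SearchAction",
    "target": "https://www.example.com/search?q={search_term_string}",
    "query-input": "required name=search_term_string"
  }
}
</script>`;
```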

 

 

A website's descriptions have changed, but Google still occasionally shows the old ones.

Q:
We changed one of our sites’ title, description, and content three weeks ago (the NFL schedule, updated for the new season). At first, Google displayed the new snippet and ranked the site quite well. However, after three days, Google displayed the old snippet and ranked us on page 2. Since then, the snippet and the ranking have been switching about every three days. Googlebot has crawled the site several times since then. What’s the matter here?

JOHN: I don’t know. It’s really hard to say without an example. This sounds like something that shouldn’t be happening like this. So if you want to send me some examples of where you’re seeing this happening, [INAUDIBLE] then feel free to do that. Feel free to ping me on Twitter if you want, or add some more details to the question here.

 

 

When a website makes the back button point to the site's homepage instead of the previous page (e.g., Google Search).

Q:
What’s Google’s policy on back button hijacking? Sites like Gear Hungry, once you visit from Google and click the Back button, they take you to the home page instead of Google search results. Is there a penalty for such action, or is it allowed?

JOHN:
It feels kind of sneaky, so this is something which I certainly wouldn’t recommend doing, because if people want to go somewhere, then it’s probably a good idea to let them go there. I don’t think they’d be happier if you just sent them to your homepage.

I am not aware of any specific policies on our side with regards to back button hijacking. So I don’t know if that’s something that the various Google teams would really worry about there, because it’s really something more where you’re annoying your users. It’s not that you’re causing problems in search.

 

 

Is it wise to add information about our authors to our news website?

Question about EAT and YMYL. So EAT is expertise, authority, trustworthiness. And YMYL is your money or your life content. These are terms from the Google rater guidelines that we put out. We’re working with news websites.

What tips can you give us about indicating content authors?
Is it really necessary to make pages for each author, with a photo, bio, and links to social networks?

Or is it that this doesn’t really matter, and that there’s lots of work to do elsewhere?

JOHN:
I think like with all kinds of content, it’s not the case that you can say this really matters and you absolutely must do it. I do think with a lot of news websites, especially if you’re providing information that you want people to trust, this certainly makes sense.

So it’s not something where I’d say it’s the same as removing a noindex meta tag on a page, because that’s really an on-and-off switch. But if you’re improving the content of your site, that works well for users, and that works well for Google. So that seems like something that could be done.

How to prioritize that versus other things on the website, that’s really hard to say. That’s where you almost need to use your experience and figure out what works well on your site.

 

 

Question: how do we get Google to see our content as more valuable? Can we hide less valuable content?

Q:
Let’s see. We’re publishing news and articles. For example, we have 100 new articles every day, and 10 of them give us 95% of the organic search traffic. The other 90 go nowhere. We’re afraid that Google might decide our website is interesting for only 10% of its content. There’s an idea to hide some boring local news under a noindex tag to make the overall quality of all published content look better. What do you think?

JOHN:
In general, we do look at the content on a per-page basis. And we also try to understand the site on an overall basis: how well is this site working, and is this something that users appreciate? Is everything essentially working the way that it should be working? So it’s not completely out of the question to think about all of your content and about what you really want to have indexed.

But especially with a news website, it seems pretty normal that you’d have a lot of articles that are interesting for a short period of time, that are perhaps more of a day-to-day snapshot of a local area. And it’s kind of normal that they don’t become big, popular stories on your website. So from that point of view, I wouldn’t necessarily call those low-quality articles.

On the other hand, if you’re publishing articles from, I don’t know, hundreds of different authors, and they’re of varying quality, and some of them are really bad (kind of hard to read, structured in a bad way, with broken English) while some of them are really high-quality pieces, almost works of art, then that kind of mix on a website makes it really hard for Google and for users to understand that you actually do have a lot of gems on your website that are really fantastic.

So that’s the situation where I would go in and say, we need to provide some kind of quality filtering, or some kind of quality bar, ahead of time, so that users and Google can recognize: this is really what I want to be known for. And these other things, maybe user-submitted content, are something we’re publishing because we’re working with these people, but they’re not what we want to be known for. That’s the situation where you might say, maybe I’ll put noindex on these, or maybe I’ll initially put noindex on these until I see that they’re actually doing really well. So for that, I could see it making sense to provide some kind of quality filtering.

But if it’s a news website where kind of by definition, you have a variety of different articles, they’re all well-written, they’re reasonable, just the topics aren’t that interesting for the long run, that’s kind of normal. That’s not something where I’d say, you need to block that from being indexed, because it’s not low quality content, it’s just less popular content.
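As a sketch of the kind of quality bar John describes for mixed, user-submitted content (the scoring and threshold are assumptions for illustration, not anything Google prescribes): pieces below the bar get a robots noindex tag until they prove themselves:

```ts
interface Article {
  title: string;
  body: string;
  qualityScore: number; // hypothetical: from editorial review or heuristics
}

// Returns the robots meta tag to emit in the page <head>, if any.
function robotsMetaFor(article: Article): string {
  const QUALITY_BAR = 0.6; // hypothetical threshold
  return article.qualityScore < QUALITY_BAR
    ? '<meta name="robots" content="noindex" />'
    : ""; // indexable by default
}
```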

 

 

Many URLs return status code 500 in Google Search Console. This is caused by the social sharing buttons. How can we fix this?

Q:
In my Search Console account, I see a lot of URLs in the coverage report that return status code 500. However, these are URLs that exist because of the buttons for sharing the page on social networks, under socialnetworks/facebook/id, for example. And those URLs are not returning a server error, but a temporary 302 redirect towards a social network.

How can I correct these errors when they’re really not from our site?

JOHN:
OK, so that sounds like these errors are things that happen when you redirect to an external site. And these are cases where essentially, on your site, everything is working well, and the external site is maybe serving a [INAUDIBLE] error, or serving something else where, for whatever reason, it’s not returning a normal 200 result.

That’s something that essentially doesn’t bother us that much. We will flag it in Search Console, because it is a case where we tried to crawl a URL on your site and ended up on an error page, which could be a sign that something weird is happening. But if you’re aware of it, then that’s generally not a problem.

Tip: block these URLs via robots.txt

What you could do is perhaps block these URLs on your site by robots.txt, because those are not things that really need to be crawled and indexed. So from that point of view, you can prevent that error from appearing in the crawl errors by blocking them via robots.txt. In general, I don’t think that would cause any problems overall. So that’s probably the direction I would go there.

What might happen is that if you do a site: query for the specific directories on your site, then maybe we would show these URLs without a title, without a description, because they’re blocked by robots.txt. But if you do normal queries for your website, then these aren’t things that would be showing up in the normal search results. So that’s not something you’d really need to worry about.
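Following the tip above, a minimal sketch of a robots.txt that blocks the share-button URLs from the question (served here from an Express route; the /socialnetworks/ path comes from the question itself):

```ts
import express from "express";

const app = express();

app.get("/robots.txt", (_req, res) => {
  res.type("text/plain").send(
    [
      "User-agent: *",
      "Disallow: /socialnetworks/", // the share-button redirect URLs
    ].join("\n")
  );
});

app.listen(3000);
```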

 

 

Does Google use a website's history for ranking?

Q:
Does Google ever use historic data when deciding how to rank a site, or do the algorithms only look at the present and most recent snapshots?
Can a website build up goodwill over a period of time, which may help it?

JOHN:
Yes. In some ways, that can happen. So in particular, if you have a website that has existed for a very long period of time, the whole ecosystem around that website will have evolved with it, and there will be links from all over the web built up over a longer period of time.

And when we look at that website, we will see the current snapshot of the website, what we recently crawled, but all of these signals that have been collected over the years. So that’s something that can definitely play a role.

There are also signals within search, specific to a website that might be collected over time. The ones that I’ve run into a few times are generally around adult content. So that’s something where if a website for a longer period of time provided adult content, then our algorithms might start learning that actually, we need to filter this using Safe Search.

* A website's content changes, e.g., through a repositioning of the site. That can cause Google not to show the website!

And if that website were to completely revamp the whole website, and was suddenly a non-adult website, then it can happen that for a period of time, those Safe Search algorithms will try to stay on the safer side and might still filter that site in Safe Search for a while. So that’s something that can sometimes take a bit of time to settle down.

Similarly, with some of the quality algorithms, it can also take a bit of time to kind of adjust from one state to another state. So if you significantly improve your website, then it’s not that from one crawl to the next crawl, we would say, oh, this is a fantastic website now.

It’s something where probably over the course of a year, maybe sometimes even longer, our algorithms have to learn that actually, this is a much better website than we thought initially. And we can treat that a little bit better in the search results over time.

So essentially, the historic data that you mention there is not historic in the sense of ten-year-old data. It’s more that the status of things a year or even two years ago can sometimes still play into how a website is ranked in search now.

 

 

If I temporarily remove a page via Search Console, does a link still pass PageRank?

Q:
Regarding one point I came across recently: does a link pass PageRank on a page that I have temporarily removed in Search Console?

JOHN:
Yes. So temporary removal in Search Console, that does not change anything about indexing of a page. It really just hides that page in the search results. So that’s something where essentially from our side, we want to provide a way to make it possible for people to remove things as quickly as possible.

And if we needed to go through the whole indexing system to do that, that would take quite a bit of time. So the temporary removal essentially just hides it in the search results. It doesn’t change anything from the indexing side.

For the indexing side, we still need to recrawl and reindex that page. If that page is now gone, if it has a noindex meta tag on it, then we’ll be able to take that into account the next time we have reprocessed that page, and when the temporary removal expires, for example.

So similarly, if you add a link to a page, and this page is something that is temporarily removed in Search Console, then when we recrawl that page, we will find that link, we will treat that as a normal link, as we would any other link in indexing. And we might crawl the page that is linked, or we might pass some signals to the page that is linked from there.

* So you can temporarily hide a page from the index via Search Console. However, the page is still crawled by Google, and any changes are picked up.

 

 

Can I still 301-redirect a domain after a year or more?

JOHN:
I’m not quite sure how you mean that. But in general, if you’re moving from one domain to another, then a 301 redirect is the right way to deal with that. And with a clean 301 redirect, where you’re redirecting every page one by one and there’s nothing left on the old website, we will try to pass on all of the signals as much as possible. And those signals include PageRank.

* John goes into more detail here on what happens if you forward only part of a website to a new website with a 301 redirect.

We use a lot of other signals, though, so it’s really not just PageRank that is being processed there. If you wait with this redirect, if you do a partial redirect of the website, or if you do a redirect in a way that makes it hard for us to recognize that this is a one-to-one site move, then that makes it a lot harder for us to process those signals and to forward them on. Essentially, what usually happens then is that it takes a lot longer, and the outcome is not so clearly defined.

So if you’re redirecting part of a website, then we essentially have to reprocess both of those, the old and the new website, and try to figure out what the new status of this website should be. So it’s not that we would just pass everything on. We kind of have to recalculate everything.
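For the clean one-to-one case John describes first, a minimal sketch (Express, placeholder domain): every old URL 301-redirects to the exact same path on the new domain, which is what makes the move easy to recognize:

```ts
import express from "express";

const NEW_ORIGIN = "https://new-domain.example"; // placeholder

const app = express();

// Catch-all on the old domain: path and query string are preserved, so the
// mapping is one-to-one and nothing is left behind on the old site.
app.use((req, res) => {
  res.redirect(301, NEW_ORIGIN + req.originalUrl);
});

app.listen(80);
```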

 

 

Can I still set up a 301 redirect after the site migration?

Q:
Similarly, maybe this question goes in a direction that’s like, what if you only start setting up the redirect a year after you actually do the site move?

JOHN:
For the most part, we will try to recognize site moves, even without a clear redirect. But it’s a lot harder.

And if you wait a year to set up this redirect, that’s kind of a long time already, and things will have settled down a little bit. What the final state will be is really hard to predict. So adding a redirect later, if you forgot it during a site move, is always a good idea.

If you wait very long, like a year, then I don’t know what effect you would effectively get from that.

 

 

I migrated a site but lost my rankings.

Q:
I’m trying to move my landing page from WordPress to a new React framework with server-side rendered pages. The new React page gets an almost perfect score in every category in the DevTools audit. It scores far better than the WordPress page. I’ve copied almost all the content word for word from the WordPress site over to the new React site. I ended up changing my name servers from AWS/GoDaddy to [? Zite ?] and ended up losing my rankings. I’d love to know how to migrate a site properly; I’m not sure what the main problem is with moving things over.

Is it a name server or DNS records?
Is the difference that WordPress adds a lot of metadata and has a lot of plugins?

JOHN:
Yeah, I don’t know. So just from a purely technical point of view, taking the site, moving from WordPress to a React framework, it sounds like you’re essentially restructuring your website, you’re revamping the whole website. And that’s probably the part where our systems got confused, or where our systems are running into issues.

So it’s less a matter of what technical infrastructure you’re moving across, but more that you essentially have a new website.

So the changing of the name servers, I would see that as totally inconsequential. Unless the new name servers are completely unreachable by Google, that’s totally irrelevant.

But what probably plays a role here is the change of the site itself. So copying the content word for word is one thing, but there’s a lot more to a website than just the words on a page.

There are things like titles and headings on a page, there are images, the way the images are embedded. There’s the design of the page itself, the internal linking, the navigation. All of that plays into us understanding what this website is about, and how the individual pages are connected to each other.

So that’s something where sometimes if you take the content and just copy it word for word to something else, then we can still understand this is the same content.

But we might have trouble understanding the emphasis of the individual pieces of content. Things, like I mentioned, like the headings, or the elements that are somehow otherwise emphasized within the pages. So that’s something where I could imagine you might be running into problems there.

The other thing is, depending on what all you changed: if you changed the URLs themselves, then you’d also need to set up redirects between the old URLs and the new ones.

Sometimes if you’re migrating from one CMS to another CMS, these URLs change. Sometimes you can keep them the same. If you can keep them the same, that’s an optimal situation. If they change, then we essentially have to reprocess the whole website again.

The internal linking is the other part that I briefly mentioned, things like the navigation, how you link the various pages and parts of your website together with categories, and kind of how you level pages, and maybe snippets and overviews, all of those things. That also plays a role with how we can index your pages properly. So that’s something where I’d also watch out for that.

The other thing that might be playing a role is the move from a static HTML version to a more JavaScript-based version. It sounds like you’re doing server-side rendering, which would mean that the new website is also static HTML when we try to crawl and index it, so it’s more or less the same situation.

But at the same time, if you’re doing this kind of server-side rendering for Google specifically, then there are lots of opportunities for things to go wrong that you might not see as a user. So especially if you’re doing something fancy for Google in that regard, you just have to check a lot more things.

So my recommendation here would be to try to figure out the differences as quickly as possible.

So if you still have a backup of the WordPress site, then maybe install that on a different server somewhere, so that you can crawl the whole website and compare it on a page-by-page basis with the new website, maybe using some third-party crawling tool, to figure out:

Is the internal linking the same?
Are the same URLs being used before and afterwards?
If not, is that something that you can still tweak on your new website to make it more the same?
How do the individual pages look?
Do they have headings, titles?
All of these things, do they match? (A minimal sketch of such a comparison follows below.)
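A minimal sketch of such a page-by-page comparison (the hosts are placeholders, the global fetch assumes Node 18+, and the regex extraction is deliberately crude; a real crawling tool does this far more reliably):

```ts
const OLD_HOST = "https://old-backup.example"; // WordPress backup
const NEW_HOST = "https://www.example.com";    // new React site

async function comparePage(path: string): Promise<void> {
  const [oldHtml, newHtml] = await Promise.all([
    fetch(OLD_HOST + path).then((r) => r.text()),
    fetch(NEW_HOST + path).then((r) => r.text()),
  ]);

  // Crude extraction helpers, illustration only.
  const title = (html: string) =>
    html.match(/<title[^>]*>([^<]*)<\/title>/i)?.[1]?.trim() ?? "";
  const h1Count = (html: string) => (html.match(/<h1[\s>]/gi) ?? []).length;
  const internalLinks = (html: string) =>
    (html.match(/href="\/[^"]*"/g) ?? []).length;

  console.log(path, {
    titleMatches: title(oldHtml) === title(newHtml),
    h1Count: [h1Count(oldHtml), h1Count(newHtml)],
    internalLinks: [internalLinks(oldHtml), internalLinks(newHtml)],
  });
}

// Run over the old site's URL list, e.g. taken from its sitemap.
for (const path of ["/", "/about", "/blog/some-post"]) {
  void comparePage(path);
}
```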

* Metadata is not important for ranking
That’s something that also plays a little bit of a role there. Let’s see. The metadata that you mentioned: a lot of the metadata on pages is something we can ignore. Sometimes it’s relevant for social media sites, for example for finding the thumbnail image that should be shown for a page. Those are probably worth adding anyway, but they’re not something that we would use for indexing and ranking.

* Structured data, on the other hand, is used
There are some other types of metadata on pages, though, that we would use, especially if it’s something that we would pick up as structured data, or something that we would show as a description in the search results page.

Both the description and the structured data are not things that we would use for ranking, though. But your search results entry might look a little bit different, and if it looks a little bit different, maybe users respond to it a little bit differently. Now, wow. So many things to watch out for.

But my general recommendation with this kind of situation is to try to understand exactly what’s happening as quickly as possible, because the longer you let it sit in this state of being different, and you’re not sure exactly why it’s behaving differently, the more likely our systems will just learn your new website, and say well, it’s a different website. We should rank it differently, but we’ve learned how to deal with it.

Whereas if you can fix these things as quickly as possible, then we can understand a little bit better, oh wait, this just tweaked slightly. We can take all of the existing signals that we have and apply them to the new website. Kind of weird.

Audience: [INAUDIBLE]

John Mueller:
Go for it.

Audience:
Sorry, go ahead, Amanda first.

 

 

Is the Highlighter tool coming back in Search Console?

 

More info about the Highlighter tool:

https://support.google.com/webmasters/answer/2692911?hl=en#:~:text=You%20simply%20use%20Data%20Highlighter,as%20the%20Google%20Knowledge%20Graph.

Audience:
Sure, a bit of a lighter question, one that hopefully has a relatively quick answer. Are there plans in Search Console to bring back the Highlighter Tool?

I’m thinking for people that either don’t have access to developers, or who are on large enterprise sites where developers are tied up on things.

John Mueller:
Yeah, so it’s still there. It’s just hidden away with the old tools at the moment. But it is something that the team is looking into, how we can migrate that to the new infrastructure. So I hope that doesn’t go away, because like you mentioned, a lot of times you don’t have the ability to just go out and get all of the code changed. And it’s sometimes really useful to be able to prove to people that actually it’s worthwhile. So that’s something where I think there are some options there.

Another thing that we noticed is with Google Tag Manager, you can sometimes also do similar things, in that you can add structured data, and let that be picked up like that. It’s a little bit more complicated, but it might also be an option.
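What the Tag Manager approach boils down to is injecting a JSON-LD script into the page at runtime; a GTM Custom HTML tag does essentially this. A minimal sketch with placeholder Article fields:

```ts
// Runs in the browser (e.g. inside a GTM Custom HTML tag, transpiled to JS).
const articleData = {
  "@context": "https://schema.org",
  "@type": "Article",
  headline: "Example headline",                    // placeholder
  author: { "@type": "Person", name: "Jane Doe" }, // placeholder
};

const script = document.createElement("script");
script.type = "application/ld+json";
script.textContent = JSON.stringify(articleData);
document.head.appendChild(script);
```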


 

 

Can Google help with site migrations?

Audience:
The question I had around the site move is, I’ve, like any other SEO guy out there, [INAUDIBLE] changing the website, right? I have yet to see a migration that goes smoothly. If you’re ranked in the top 10, you’re getting decent traction. You want to make it mobile-friendly; you always want to do the next step.

So is there any … from Google to provide some kind of assistance in the Search Console, somehow saying: hey, we are migrating to a new site, so don’t burn the signals, we’ll keep the same URLs?

Obviously, when you move from one CMS to the other, as in the example you gave, with different servers and new code, you are going to expect a complete design change. The user can make sure the UI remains the same, or the links and content, but there is so much that runs behind it. And if Google is going to reprocess it, it’s like a new marriage: everything starts all over again.

So is Google effectively discouraging users from coming out with new websites to give a better experience?
How can you help us better, like the whole community?

John Mueller:
Yeah, good question.

The general situation with a lot of these site revamps is also that you’re trying to improve things. So that’s something where I think sometimes it does make sense to rethink the signals, because these signals can also improve in a positive way.

It’s definitely not the case that every site migration that you do will always result in negative signals, and things getting lost, because a lot of times you will be able to work on a website and you recognize all of these problems.

And then you create a new structure, a new website, and it doesn’t have any of these problems. And you want that work that you worked on to be reflected well in search as well. So that’s something where it’s kind of tricky to say that we should keep the signals and not let a site be able to profit from the improvements there.

But it is always tricky, because I think the part that always makes it hard for me when talking about this is that it’s hard to judge ahead of time what the actual effects will be. And especially if you don’t have a lot of experience around SEO, then it’s really hard to judge what will actually happen when we switch over.

And the other thing that I always see as well is that, sometimes these kind of site revamps are done by people who don’t actually know much about SEO, who are maybe the developers, or maybe the design team or marketing team that says, oh, we need a new website. We will outsource a new website. It looks really nice, it’s really fancy. And they don’t realize that all of these changes could potentially affect the kind of search visibility as well.

Things like removing text and putting it into images: it looks the same in a browser, but it behaves very differently. And as an SEO, when you look at this new site, you say, well, this is a problem. This is something that we need to fix. Yeah, it’s weird.

Someone mentioned that people are having trouble getting in. That’s kind of weird. Usually I get a little pop-up saying that people want to get let in. But that doesn’t seem to be happening at the moment. I don’t know, hard for me to figure out what exactly is stuck.

Audience:
So there are no plans from Google to kind of help with this problem, right? From webmaster team, I guess.

John Mueller:
So I think the one thing that we’ve been looking at, specifically for site moves, is to have at least some kind of a progress indicator that lets you know when Google has finished reprocessing your site migration and your site move setup.

That, I think, makes it a little bit easier to understand when you’ve reached the final state, so that you’re not in this uncertainty of: do we just need to wait for Google to figure it out, or is there actually something that we need to do differently to move into a different situation?

Audience:
Yeah, I think that would be very helpful, because we’re going to panic, right? Wait, Google is still working on it. Give it six months, that’s fine. The wait is fine as long as we know it’s in transition or it’s progressing. And going back to the other thing that you discussed earlier from one of the other users about content.

So I had a site, the content wasn’t changed in five years, seven years. Suddenly the guy is free because of this pandemic. He’s like, let me revamp the content. I haven’t changed it in a while. And he didn’t see any positive result.

And that was, I kept telling him, content is key. He’s like, it didn’t change anything. I’m like, I guess we need to wait. And I don’t know if there is something like the temporary removal, a way to indicate somewhere in the Search Console: hey, this is a new page. And if Google wants to take six months to settle down, it’s fine. It would be nice to see the progress like you said, [INAUDIBLE] if you tag individual pages. I don’t know if that’s even possible, but yeah.

John Mueller:
Yeah, I think it’s always a bit tricky, because there are always some things that change on a website. And it’s hard to understand when you kind of just need to let the systems figure it out on a website, or if there’s something that you need to change yourself. Now, I don’t know. Good point. I think that’s something we can certainly think about, and see if there is something that we can do there.

One of the things that we did with the new version of Search Console is the general concept of being able to validate fixes. Maybe that’s something that we could implement there in general, where you can say, well, I’ve revamped my content. I would like a Search Console to validate the changes that I’ve made.

With the validation process in Search Console at the moment, it’s tied to issues that we recognize, which are usually technical issues, like structured data or indexing. And what happens there is we will try to recrawl and reprocess the website a little bit faster, so that we can reflect the new state in the index a little bit quicker. So maybe that’s something that we could also expand to more content or revamp situations as well.

Audience:
Yeah, that’s exactly what– so the user checks in manually, you are waiting for input from the company, for someone to say, hey, this is new. And then you say, all right, I’m going to do it again, and I’m also going to show the progress this time, or something.

John Mueller:
Yeah.

Audience:
Thanks.

What is also interesting…