English Google SEO office-hours from July 2, 2021

JOHN MUELLER: All right. Welcome, everyone, to today's
Google Search Central SEO office hours hangout. My name is John Mueller. I'm a Search Advocate on
the Search Relations team here at Google in Switzerland. And part of what we do are
these office hours hangouts, where people can join in and ask
their questions all around web search and websites. Wow, it looks like a bunch of
people are still jumping in. But maybe as
always– maybe we can get started with some first live questions, and then we'll go through some
of the submitted questions on YouTube as well. Let's see. Nathan, I think you're
on top of the list. NATHAN: Thank you, John. Hi. So we are working on a
website that uses hreflang. And I've been trying to
figure out what's going wrong, because they're
ranking in France with their Belgian
pages, they're ranking in France with
some Luxembourg pages.

And I was trying to figure
out what went wrong. So one of the things
I found out is that they didn't make a
general language page. So there is no main fr page. So that was the first
suggestion I made. But I was wondering, since
they're currently focusing on Germany alone, would they
need to have a de-DE as well, or is just de enough if they're
just focusing on Germany? JOHN MUELLER: Just the de would be enough. If that's the broader level
that they want to target, like the German
language content, that would
essentially be enough. NATHAN: OK, but they're selling in Germany alone. It's not Austria and not Switzerland. So I was wondering whether they need the de-DE as well, or if it's just de and it doesn't matter. JOHN MUELLER: I think
that's generally fine. With the hreflang it would not
hide it in the other countries. It would essentially just– when
one of these German language pages is shown, we
would use the hreflang to swap out the URL for the more
appropriate URL for that page. NATHAN: OK. JOHN MUELLER: So
it's not that it would rank higher in Germany
because of that, but more like, well, if someone is
searching for the brand name, for example, where we
might have the French page and the German page– and they
could be equally relevant, because from the
brand name alone, we might not know which
language version, then we'd be able to swap
to the German language version for users who have
their setup in German.
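
To illustrate the setup being discussed, language-only hreflang annotations might look something like this (hypothetical example.com URLs):

```html
<!-- Each version of the page lists all of its language alternates.
     Language-only codes (de, fr) are enough when there is one page
     per language; codes like de-DE are only needed when separate
     pages target individual countries. -->
<link rel="alternate" hreflang="de" href="https://example.com/de/" />
<link rel="alternate" hreflang="fr" href="https://example.com/fr/" />
<link rel="alternate" hreflang="x-default" href="https://example.com/" />
```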

NATHAN: Yeah, all right. I think I got it now. Thanks. JOHN MUELLER: Sure. Let's see, Vahan. VAHAN: Hi, Mr. Mueller. I'm [INAUDIBLE] director of IT
at Search Engine [INAUDIBLE]. So we are trying to
fight [INAUDIBLE] issue with the Twitter embeds when
optimizing for Core Web Vitals. And as you know,
it causes a lot of problems, like cumulative layout shift, first of all. Secondly, [INAUDIBLE] loading all the resources instantly on page load, whereas [INAUDIBLE] embeds are below the fold, and you would have to scroll to see them. Does Google work with Twitter
somehow to fix that issue, or does it have plans? Because
embedding [INAUDIBLE] is a very common practice
across all the web. Core Web Vitals is a very
big focus for Google.

So I do think that
it could be fixed with cooperation with Google. Does Google have
any plans on that? JOHN MUELLER: I don't
know about any plans. But we essentially don't
have any special treatment for any other
providers, essentially, of specific content. So it's not that we would
say, oh, this is a tweet. Therefore, we will let it do whatever crazy stuff it wants with Core Web Vitals.

But it is something
where, if a lot of sites are using those embeds,
it feels like it would be in Twitter's
best interest to provide a way to
enable people to embed those in a reasonable way. There was recently a
conversation on Twitter about this, where
someone mentioned that they were using– I don't know, a specific
CSS setup to make sure that these tweets don't
move around that much. I think it was something
like minimum/maximum height and width– something like that. And apparently with
that, they were able to significantly
improve things around CLS and I think FCP as well. So that might be
something to look into to see if that's
an option there. I mean, it's also
something where if you're embedding a
lot of these from them, then it's like going
to them and saying, OK, you need to get
this working better.

Otherwise, we'll swap it
out against something else. That might also be an option.
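
For reference, the CSS approach described there might look something like this (a rough sketch– the selector and values are assumptions, not official guidance):

```html
<style>
  /* Reserve space for the embed up front so it doesn't shift
     surrounding content while it loads (helps CLS). */
  .tweet-embed {
    min-height: 300px; /* assumed placeholder height */
    max-width: 550px;  /* typical rendered width of a tweet */
  }
</style>
```

VAHAN: Gotcha. Thank you. And recently, Google did publish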
its guide on 301 redirects. So I would like to also
ask what is the proper way to handle a 301 redirect that was done by mistake. What is the correct path
to revert those mistakes? JOHN MUELLER: So like,
you redirected from page a to page b, and
then afterwards you decide page a should
remain indexed? VAHAN: Yes. JOHN MUELLER: Yeah, and
page b should drop out? VAHAN: It can either drop
out or can stay alive. So what would be a correct
path to handle those scenarios? JOHN MUELLER: So if the new
page is not meant to exist, I would just redirect
back from that old page.

If they're essentially parallel
pages that are separate, then I would just leave
both of them at 200. You don't need to do anything
special to revert back. That's similar to when you
accidentally 404 a page. Then just making it work again
is essentially good enough.
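
As a sketch of that revert (hypothetical Express-style routes; the page content here is a placeholder):

```js
const express = require('express'); // assuming an Express-style server
const app = express();

// Before: the mistaken redirect sent page A to page B.
// app.get('/page-a', (req, res) => res.redirect(301, '/page-b'));

// To revert: serve page A normally again with a 200...
app.get('/page-a', (req, res) => res.send('Page A content')); // placeholder

// ...and, if page B is not meant to exist, redirect it back:
app.get('/page-b', (req, res) => res.redirect(301, '/page-a'));

app.listen(3000);
```

VAHAN: Gotcha. Thank you very much. JOHN MUELLER: Sure. All right. Darcy. DARCY: Hey, John. How's it going? JOHN MUELLER: Pretty good. How about you? DARCY: Doing well, doing well. So a question about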
the page experience update. So in order for a page to benefit from the page experience update, it needs to have all the marks, right? It needs to be mobile-friendly, use HTTPS, and then have either a needs-improvement, I guess, or a good Core Web Vitals score. Is that correct? JOHN MUELLER: Yes.

[INAUDIBLE] needs improvement,
but [INAUDIBLE] what is it? DARCY: I just envision your
little chart every time I talked about it. JOHN MUELLER: Exactly, the graph. DARCY: Yeah. So if previously, before
the page experience update, you had a URL that was
benefiting from HTTPS, whatever little benefit that is, once
the page experience update rolled out, and say that URL
had a bad Core Web Vitals status, would it then have lost
whatever little benefit it got from HTTPS, because now it
doesn't qualify for the page experience update? JOHN MUELLER: I don't think so. I mean, I haven't checked
specifically on that.

But my understanding is that the
HTTPS aspect would essentially be parallel. DARCY: Sorry. [INAUDIBLE] We still hold onto it? JOHN MUELLER: Yeah, yeah. DARCY: OK. OK, cool. Can I ask one other just
a little small question? Well, maybe small. [LAUGHS] If I'm doing a site migration,
and in Search Console, I've executed the site migration
to another Search Console, am I to believe that
it has not fully completed in Google's eyes until
those notifications in Search Console are gone? Or will I get a new
notification where it says one of your other
sites is moving to this site, and on the other one, it says
this site is currently moving? Do I have to wait
for those to tell me something else before
I know that it's done, or will it just stay
like that forever until I click the Got It button? JOHN MUELLER: Yeah, I think
there is a timeout there.

But it's a Search Console based
timeout, which is essentially something– I'm not sure what the number is. I think maybe it's 180
days or 90 days– somewhere in that range,
where essentially, that status drops out. But that's not a sign that
the migration is complete. I don't think we have any kind
of internal status like that, where we'd say like this site
has completely moved over.

Because often, there are
just, like, these lagging URLs that are left for
a pretty long time. DARCY: Fine. OK, OK, so then I'm not
waiting for a new notification in Search Console, basically. JOHN MUELLER: Exactly, yeah. DARCY: OK, wonderful. Thanks so much. JOHN MUELLER: Sure. Ashtanani. ASHTANANI: Yeah. Hi, John. JOHN MUELLER: Hi. ASHTANANI: Yeah, my question is related to language targeting. The question is, can we put different languages in a single piece of content? If my content is in the English language, and now I want to target the Portuguese language or the French language, can we do Portuguese plus English or French plus English? Can we mix these two languages [INAUDIBLE]? JOHN MUELLER: So
different language content on the same page? Is that what you mean? ASHTANANI: No, for
different directories. I have a different directory
of the Portuguese language. And I'm asking you,
can we insert, like, 30% English content and
70% Portuguese content? JOHN MUELLER: Sure, absolutely. ASHTANANI: So it doesn't
create a problem [INAUDIBLE]? JOHN MUELLER: Yeah, that's
perfectly fine for languages.

We primarily look
on a per page basis. So we want ideally to
have one clear language on a specific page. So that's something
where, if you have different pages
with different language, that's perfectly fine. But it makes it a
lot easier for us if we have one clear
language on a page, because then, we can clearly say, oh, this is in French or this is in Portuguese or this is in Spanish, and show it to
people appropriately.
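
In markup, that per-page language signal is typically just the root lang attribute– a small sketch, with a hypothetical path:

```html
<!-- Page at /pt/artigo/: one clear primary language declared,
     even if some English phrases appear within the content. -->
<html lang="pt">
  ...
</html>
```

ASHTANANI: OK, and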
again, the same question. Suppose I have English
content of [INAUDIBLE], and I've just converted all that content into Portuguese. So it is also, like, 700 words. Now I have at least [INAUDIBLE] 200 or 300 words of content in the Portuguese language. So does it reflect any changes
in the ranking position [INAUDIBLE]? Does it impact our SEO strategy? JOHN MUELLER: These pages would
essentially rank independently. So if you have some content in
English, and slightly different content in a different
language, then we would just rank those
pages independently. So that's something
where sometimes, it makes sense to have more
content in a certain language.

Sometimes a certain
language just has more words for the same
content or fewer words. That's all perfectly fine. ASHTANANI: No, just
suppose we have content in the English language, and
we have similar content which is in the Portuguese language. But additionally, I want to add more information
to add a few more. So in that case, I'm
asking you, is there any problem to add more content
in the Portuguese language? JOHN MUELLER: No,
that's perfectly fine. ASHTANANI: OK, and if I use a
passive voice, or active voice, does it also matter
in terms of ranking? JOHN MUELLER: I don't think so. I think that's more
a matter of how people understand your content. But I can't see that
mattering for us here. ASHTANANI: OK, that's all. Thank you so much. And one more question
is related to stability. Like, for a few keywords,
my ranking is not the same. Some days, it is on the first
page, first or second position.

And some days, it's in
the fifth or sixth page. So what is the issue? JOHN MUELLER: That can happen. I mean, search is very dynamic. So it can certainly happen that
sometimes, a page ranks really well. And then a few hours
or a few days later, it ranks a lot worse and then
it ranks a little bit better again. Usually, when you see these
kinds of fluctuations, it's because our algorithms
are not 100% sure yet. And over time, it will settle
down somewhere in that range. But it's not a sign that
there's anything specifically wrong or bad with that page. Essentially, sometimes the
algorithms are just unsure, and they switch between
different variations. ASHTANANI: Does that happen with the revamping of a website? JOHN MUELLER: I mean,
if you significantly change your website,
then our algorithms have to understand it again. So that sometimes takes time. ASHTANANI: OK, thank you. Thank you. JOHN MUELLER: Sure. All right, Nancy. NANCY: Hey, John. First of all, thank you
very much for doing these. I watch every week and really
get a lot of value out of them.

I work with a lot
of small companies, and we try to apply all
the SEO best practices that Google lays out for us. It's a little harder to do some
of the fancier backend code kind of things. I'm more of a content writer in
SEO than a digital code guru. How can I help these
sites to rank well when they're competing
against these big enterprise organizations that have a lot of
resources and money and people to throw behind
their SEO, and tend to rank higher because they
are enterprise corporations? Do you have any advice? JOHN MUELLER: Yeah, I
think that's super-hard. [LAUGHS] NANCY: Just give up. JOHN MUELLER: No. No, no, but I think
small companies have a lot of advantages,
especially online in that area, because it's a
lot easier for them to shift and to move quickly
compared to large companies.

So I see that, at
least within Google, where when we come up
with a really cool idea that we should implement
in Search Console, it's going to take one or
two years until we get there. Whereas external SEO
tools, when they come up with the same idea,
they'll be like, oh, I'll just do this like
next week, and it'll be done. So I think as a
small company, you have that advantage
of being able to move quickly when something
shifts in the ecosystem, when some new trends come up,
when something significantly changes, you can
move really quickly, and you can be visible
like that essentially immediately in search. Whereas with large
companies, I mean, sure they have a lot
of power, and it's like a strong brand
and all of that.

But they can't move as quickly. And that, I think, is one of
the advantages with a smaller company. The other I think
is something where, as a small company,
sometimes it makes sense to be active in areas
that are just not interesting for large companies,
where you have the ability to focus on a smaller
segment of the market and do something
really unique there– which the larger
companies will say, oh, it's so much work to
actually get that set up and to prioritize that. And then it's only, like,
a $5 million business. Like, what are we going to do
with all of that small change? And as a small company,
you're like, oh, it's a couple of million dollars.

That's fantastic. So that's something
where I also see– especially small companies have
the ability, especially online, to focus on these small
areas and say, well, we'll do something really unique here. And maybe if it picks up,
a really large company will also do something
similar in a couple of years down the line. But we have that kind
of headway at least, and can move ahead of all
of these big companies during that time. NANCY: OK, thank you. Can you give me an
example of that, like what you mean by do
something really unique? Like– [LAUGHS] JOHN MUELLER: I don't know.

[LAUGHS] I've always struggled to
find explicit examples that we can mention there. But I see it for
example with SEO tools, because I work together with
the Search Console team. And it is something where a lot
of the external or non-Google SEO tools essentially
have the ability to do things in ways
that wouldn't really be possible on Google side, be
it for policy or legal reasons or whatever. And I think that gives these
tool providers a little bit of a head start, even
in a market where– I don't know. Google is this
giant corporation, and they're even
offering it for free. But yet, all of these
small SEO tool companies– or smaller SEO tool
companies, still have a chance to grow a
significant user base which is relevant for them.

I mean, that's not
something that I imagine the average
small business would be creating SEO tools. But it's one of
those areas where I tend to see that happening. NANCY: Thank you. JOHN MUELLER: Sure. Let's see, Praveen. PRAVEEN: Hi, John. How are you? JOHN MUELLER: Hi. PRAVEEN: Hi, so we've got
this multilingual website. We recently launched
it in April this year. So apart from English, we've got versions for visitors in Taiwan and for users in China. Now, the issue is that
when someone is searching for this website on Google, like
searching for the brand name, the sitelinks that they see in Taiwan– half of the sitelinks are fully in the Taiwanese language, and half of those sitelinks are in the [INAUDIBLE] Chinese language. And Taiwanese users,
they don't appreciate that kind of behavior,
like their language mixed with the Chinese language. So what can we do to fix this? We have used hreflang
tags properly.

We are using breadcrumbs
to show the upper hierarchy of these language categories. We are interlinking
relevant pages. But what else can we do here? JOHN MUELLER: Those are
essentially the things that you would be doing. So hreflang to make sure that we
provide the most matching ones, and a really strong
internal linking to make sure that we
understand which of these pages belong together. And even in those cases,
I've seen situations where sometimes the sitelinks
are in a different language or for different countries.

And that's not something that
you can control explicitly. The only thing you
could really do is to noindex those pages that
you don't want to have shown. But then they're not
shown anywhere, which is probably not what you want. PRAVEEN: Yes. To add this, I think the
domain is a .gd domain, which is a domain
for some country, but also used by
gaming websites. We are also in
the same industry.

So could that be an issue,
like we are using [INAUDIBLE]? JOHN MUELLER: I don't think so. I think it's probably
just something where the sitelinks are suboptimal. And if you could point
me to a screenshot, I'm happy to pass
it on to the team. But these kinds of things
happen all the time. Well, maybe not all the
time, but from time to time. And it's annoying. It settles down
usually over a while. But it's not something that
you can explicitly control. PRAVEEN: But maybe the
site is new for Google. Like, it was launched in April. So maybe Google is taking
time to process all the links. JOHN MUELLER: It's possible
that it'll settle down. Yeah. PRAVEEN: OK. Thank you. Thank you for this. JOHN MUELLER: Sure. Sergio. SERGIO: Hey, John. I have a very specific question. I'm working on a
multilingual website. And we've been using
the hreflang tags in the sitemaps– not on the
page level, but sitemaps. And let's say the website
is structured in a way that the main tool
and the main site is completely multilingual. And then there is a blog
section that, at the moment, is only one language, but
it will be multilingual.

And we have this
situation in which we want to create
sitemaps per language. So let's say we have
the different blocks of the main language page
and all the alternate tags. So it would start, let's
say, for English– first the English version, then all the translations. Then the other sitemap
would say German. It would have first
the German main version of the page plus the alternate. So I wanted to ask, one,
is that a good practice? Would there be any
problem if that's OK? And then the other question
is, we have the blogs. In the blogs, there
are some blog posts that, of course, are not
relevant to other regions, other languages. So some will have a
translation and some others won't have the translation. So can we pack all of those
within one single sitemap for each language
that, for some cases, would have an hreflang tag
and the other one wouldn't? So those two questions. JOHN MUELLER: So you can
structure sitemap files however you want. Like, you can make them by
language, by site section.

Essentially, it's
totally up to you. I would try to
make them in a way that they can be stable
with regards to the URLs that they include. So don't take all of
the URLs from your site and randomly put them
into sitemap files. But have some kind of a
system to distribute them, primarily so that when we
process one sitemap file, we get kind of the same URLs
back when we look at that, more for the
situation where we– usually we don't process
all sitemap files all at the same time. So if the URLs were to
jump between sitemap files, we might process one file
and see one URL, process the other file, and then
see the same URL again. And then in the end,
we might miss some URLs because it's randomly
swapping between files. So have it persistent in any method that you think is relevant for your site.

It can also be alphabetical
or whatever you want to do. That's the important part. With regards to content, that's
only in certain languages, or that doesn't have any
hreflang annotations for it, that's also perfectly fine. Like I mentioned before,
not having the annotations doesn't mean we won't show
it for other languages. But having annotations
helps us to show the appropriate version. So if you just have one
version, then [INAUDIBLE] that's the version that we'll show. You don't need any hreflang
annotations for that.
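
A rough sketch of what an entry in one of those per-language sitemaps could look like (hypothetical URLs; the xhtml:link element is the standard way to express hreflang in sitemaps):

```xml
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
        xmlns:xhtml="http://www.w3.org/1999/xhtml">
  <url>
    <loc>https://example.com/en/post-1/</loc>
    <xhtml:link rel="alternate" hreflang="en"
                href="https://example.com/en/post-1/"/>
    <xhtml:link rel="alternate" hreflang="de"
                href="https://example.com/de/post-1/"/>
  </url>
  <!-- A post with no translation simply lists no alternates. -->
  <url>
    <loc>https://example.com/en/post-2/</loc>
  </url>
</urlset>
```

SERGIO: OK, understood. Thank you. JOHN MUELLER: Sure. Let's see, Tanoosh? Have I got your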
name right, Tanoosh? [INAUDIBLE] OK. Dave, in that case. DAVE: Hi. It's international day. Another hreflang question,
I'm afraid. I've got a client I've been talking to.

They have a problem. They've got English
in different regions– Canada, US, Australia, UK. They've got hreflang
which seems to be working to show the right
one, but rich results. They've got product rich
markup that doesn't seem to get switched out. That seems to be sticking
to just one version. Is that the way it's meant
to work, or have they got something else going on
that we're not sure about? JOHN MUELLER: How do you mean– [INTERPOSING VOICES] DAVE: So the rich
results, the search data for product seems to be coming
from the US dotcom, which is their [INAUDIBLE] default.
So you search for products in the UK, actually,
the UK just seems to sometimes pick up things. They'll search one
in Canada sometimes, and it will show the US
dollar result in rich results. So basically, it appears
to be taking the markup from the default page. The US, if you check as
well, is the canonical one. So it's showing wrong prices,
because obviously, the US and Canadian dollars are different, as are Australian dollars. So it gets a little
bit confusing. I'm not sure if that's
just the way it works, or they've got
something else going on.

JOHN MUELLER: OK, so you
mean within the normal web results, the rich
results markup in there? OK, so not like the
product search stuff that's [INAUDIBLE]. DAVE: No. JOHN MUELLER: OK. So what's probably
happening there is, we're seeing these
as duplicates. And we're picking one of those
versions as the canonical, and then using hreflang
to swap out the URL. But since we have
one as a canonical, that's the one where we pull the
rich result information from. DAVE: Yeah, [INAUDIBLE]. JOHN MUELLER: So that
seems unfortunate that we would pick one canonical
across different currencies. DAVE: I mean, literally,
it's just the price and maybe a number that's
different on the page. So it's entirely understandable
that it is de-duped. It just makes it awkward
in this situation. JOHN MUELLER: Yeah,
if you can send me some examples where
that's happening, I'm happy to pass
it on to the team. Because usually, we
do try to recognize when things, like phone
numbers or prices, are different across
different versions and say, oh, we shouldn't
de-duplicate these pages because they are unique. So if you have some examples,
I'm happy to look at that.

But sometimes, when it's really
just the currency symbol that's different across
these pages, then it can happen that
our algorithms say, well, this is almost
exactly the same page. We should just make it
easier for the site owner and treat them as one
page, which in this case makes it more confusing. DAVE: OK, I'll share with them,
see if they'll share some. Cheers. Thanks. JOHN MUELLER: Cool. OK, let me run through some
of the submitted questions as well so that we don't
lose track of any of that.

And we'll have more time
for your live questions along the way as well. Let's see, we're going to
relaunch and migrate our site. A few important pages won't
be ready for the launch, but will be added later. How should we
handle these pages? Should we leave them as
404 and redirect later? How long until they
lose their SEO signals? Should we redirect them
to a less fitting page and later to the right one? Yeah, I think this is tricky. One of the things that I would
try to do in a case like this is try to leave the old
page up as much as possible, so that we see that part
of the website is moving, but also part is still there. And that way, we can
transition the old URL to whatever new
URL ends up being there in a consistent
way– that we don't have any gaps in between. If you do need to
move everything over, and you're saying the old page
is really not relevant anymore and we need to create
something completely new, then a 404 is certainly
a possibility.

If you're sure that the new page
will be up within a day or so, a 503 might also
be an option, which is like being almost
sneaky and saying, well, the page doesn't
work at the moment. And then Google will retry
it a little bit later. And then suddenly, the
page is there and ready. So that might be an
option if you know that it's just a day or two. A 404 can also
work, but there you have a situation where the
page ends up dropping out of the index completely. And when it comes
back, we essentially have to rebuild the information
that we know about that page.

So it's not that the information
will be lost forever. But it takes a bit of
time to rebuild again. So those, I think,
are the options. My preference would really
be to try to at least keep the old page there until
you can swap it out against the new one. So keep it at a 200 if you can keep it. If you need to
redirect it and have the old version on the new
site, I think that's also OK.

If you need to really
remove the page for a longer period of time, the 404
is the right choice. But you have to assume that
it'll take a bit of time to catch up again. I'm looking at merging two brand
websites, where one brand will overtake the other. However, there will
be a 90-day window where we run both sites
announcing the merger. What's the best way to set up
a schema as an early indicator that they're essentially
the same site before we 301 redirect and ensure that
the primary brand is what's ranked as a secondary
brand resolves, like a canonical for
the entire website? So the suggestion of a
canonical is probably a good approach here.

With the rel canonical,
you're essentially telling us that this content is
equivalent to the other one, and you prefer the other one
to be the primary version. I think the downside of using a
rel canonical in this situation is that we will already
start migrating the search results over to the new
version of your site. So users would still be able
to see both of those versions. But in search, essentially,
the new version or the primary
version would start to become more and
more visible, and it might be a little bit awkward
if someone is looking explicitly for the older version. But with the rel canonical,
you can set that up ahead of time before you
start doing the move as well. Otherwise, there's no way of
using structured data markup to say, well, we're
going to merge in a certain period of time. The other thing to keep
in mind when you do end up doing the merging of
the different sites is that merging
and splitting sites tends to take longer than just
a pure redirect from one domain to another. So that's something
where I would expect a little bit of a longer
time, maybe a couple of months, for things to settle down
around search before it actually ends up working– like everything is moved
over to your preferred site.
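
A minimal sketch of that rel canonical setup (hypothetical domains), placed on each page of the brand being retired:

```html
<!-- On https://secondary-brand.example/some-page/ -->
<link rel="canonical" href="https://primary-brand.example/some-page/" />
```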

Does Google read information on a page if it's in a toggle– a small symbol that you click on, which opens up more information? Can you recommend a helpful step-by-step video for beginners to implement structured data? So if the content
is within HTML, then essentially, we'd be
able to use that for indexing. On the other hand,
if the content needs to be loaded after
someone clicks on this toggle, then we would not know that you
have to click on this toggle to get that information. So those are the two
variations there. One way to try this
out is to just open the page in a browser, and then
look at the source code and see is that information
already loaded or not. If it's loaded already,
then most likely, it'll be available for indexing. If it's already on the page,
and your website is already being shown in search,
a simple way to check is also just to search for
that piece of text in quotes, and to see if
Google can find it.
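
As an illustration of the distinction, content in a native HTML toggle is part of the initial HTML and can be indexed, whereas content fetched only after a click is not. A minimal sketch:

```html
<details>
  <summary>More information</summary>
  <!-- This text is in the page's HTML even while visually
       collapsed, so it is available for indexing. -->
  <p>Extra details that the toggle reveals.</p>
</details>
<!-- By contrast, content that JavaScript fetches only after the
     click never reaches the HTML that gets indexed. -->
```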

With regards to getting
started with structured data, I'm not aware of
any simple getting started guides for
structured data, because I think it depends
quite a bit on the way that you have your
website set up. So that's something where if
you're working with pure HTML and you're doing
everything manually, then going to the
developer documentation is probably a good idea. You can copy and paste
the examples there. If you're using an existing
CMS or hosting system, like WordPress or Wix or
something like that, then oftentimes, there
will be plug-ins that you can just enable
for the site, which make it a lot easier
for you to add structured data in a way that
doesn't require you to actually do any of the code part itself.

So that's the
direction I would head, where I'm going
to assume that you use something like
WordPress or Wix or Squarespace or
something like that. I would try to find a plug-in
or a simple way to just activate that within your website, and
then just fill out the fields and let the plug-in do the
structured data for you.
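
For the hand-coded route mentioned there, a minimal JSON-LD block along the lines of the examples in the developer documentation might look like this (all values are placeholders):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [{
    "@type": "Question",
    "name": "Example question?",
    "acceptedAnswer": { "@type": "Answer", "text": "Example answer." }
  }]
}
</script>
```

Can backlinks from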
low-quality websites make a negative impact
on a blog's SEO, and do user comments have
any effect on page ranking? So just backlinks from
low-quality websites, usually we will
just ignore those. I don't think that would
have any negative effect on a website's SEO. What would be more
problematic is if you go out and actively buy
a significant amount of links for your website, or
do something else which essentially goes directly
against our webmaster guidelines. So that's something where
I'd say that's problematic, and that's something I would
avoid doing, and clean that up if you notice that happening
from someone who previously worked on your website.

But if you're just seeing
random low-quality links to your website, I would
totally ignore those. They absolutely have
no effect on your site. And regarding user
comments and page ranking, if these are comments
on your website, then the primary effects
that I would expect from that is essentially that
sometimes, user comments do provide a lot of value
additionally to your content. And sometimes, those comments
can be used for ranking as well in search. So if someone is
searching for something that is only visible in a
comment on your website, then your pages could be visible
because of that comment that is also embedded on those pages. So essentially, it's a
way of having content that is available on your pages. It's just not content
that you wrote yourself. But at the same time,
it's also something where, if this these user
comments are really low quality and they drag your site's
overall quality down, then that is something
that we would also see as a part of your
website, where we'd say, well, we look at
your website overall and we are not really sure about
the quality of your website overall.

And our systems wouldn't
really differentiate between, well, this is content
you wrote, and this is content someone else
wrote, but happened to leave in a comment on your website. A number of our
pages can rightly be classified as an FAQ,
a how-to, and an article. Should we add all three types
of schema.org structured data, or is it better to
just choose one? So I think there are two aspects
here from our guidelines. We want to make sure that
the structure that you have on your page matches the
primary element on your page. So if you're saying
that you can add an FAQ to a random page on your
website, sure you can do that. But is this FAQ the primary
part of the page or relevant for the primary
part of the page? That's something that
you need to figure out. So that's one aspect. The other aspect is that,
in the search results, some of the rich results
types we can combine, and some of them
we can't combine. So for example, if you a
recipe and you have ratings, then we can often combine that
in the search results in one rich results type.

However, if you have an FAQ
and you have a how-to, then at least from what I recall
of what these look like, these are things that
wouldn't be combined in a single rich
result type, which means our systems would have
to pick one of them to show. And maybe we'll pick the type
that you would have chosen, or maybe you would have
a different preference on your side. And if you have a strong
preference on your side, I would just make
that super-clear to us by just providing that
kind of structured data. So if you're saying,
like, oh the FAQ results, I really like those
for these pages, they're super-relevant here,
but it's also kind of an article and kind of a how-to,
then I would just focus on your preferred one– the FAQ in that case.

Or if you're saying, like, the
how-to is really the way that I want to have this
page shown in search, then I would focus on that type. As I understand it,
internal links can pass authority, and that authority can
be divided or diluted as more internal links
are added to the page. Is this another SEO myth, or
is this roughly how it works? If so, does that mean that
having a lot of internal links on a page could do
more harm than good? So yes and no, I
think in the sense that we do use the internal
links to better understand the structure of a page.

And you can imagine
the situation where if we're
trying to understand the structure of a website,
with the different pages that are out there– if all pages are linked
to all other pages on the website,
where you essentially have a complete internal linking
across every single page, then there's no real
structure there. It's like this one giant mass
of pages for this website, and they're all interlinked,
we can't figure out which one is the most important one. We can't figure out
which ones of these are related to each other.

And in a case like that, having
all of those internal links, that's not really doing
your site that much good. So regardless of what page
rank and authority and passing things like that,
you're essentially not providing a clear
structure of the website. And that makes it harder
for search engines to understand the context
of the individual pages within your website. So that's the way that I
would look at it there. And similar to the second
question that you had there, with regards to lots
of internal links doing more harm than good– yes, if you do dilute the
value of your site structure by having so many internal
links that we don't see a structure
anymore, then that does make it harder for us
to understand what you think is important on your website. And I think providing that
relative sense of importance is sometimes really valuable,
because it gives you a little bit more
opportunity to fine-tune how you'd like to be present
in the search results. If you tell search engines
pretty clearly and directly, well, this is my primary
page, and from there you link to different categories,
and the categories link to different products,
then it's a lot easier for us to understand that.

If someone is looking for
this category of product, this is that page
that we should be showing in the search results. Whereas if everything is
cross-linked, then it's like, well, any of these
pages could be relevant. And then maybe we'll send the
user to some random product instead of to your category page
when you're actually looking for a category of products. Let's see– regarding
images on our website, how to find the perfect balance
between maintaining high image quality, as well as making sure
that images are lightweight and load fast.

It seems that both
of these things are encouraged by
Google, but they both take away from each other. Yeah, I don't know if there's
ever a perfect balance. But especially with
regards to images, there are lots of things
that you can do, especially with the responsive
images set up in HTML, where you can
essentially specify different image files for
different resolutions, and through that make it
possible that when users load a page, for example,
on a mobile phone, they'll get a page that
loads really quickly because it has optimized
images for that device size. Whereas when a user loads
that page with a really large screen, we can swap out and
show the high-resolution images as well.

So that's something where
the whole responsive images setup, I think, is
a really good idea.
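
A minimal sketch of that responsive images setup (hypothetical file names):

```html
<!-- The browser picks the smallest adequate file for the current
     viewport, so phones download far fewer bytes than desktops. -->
<img src="photo-800.jpg"
     srcset="photo-400.jpg 400w, photo-800.jpg 800w, photo-1600.jpg 1600w"
     sizes="(max-width: 600px) 100vw, 800px"
     alt="Example product photo" />
```

The other aspect here is also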
that a lot of the modern image formats that are out there– I'm thinking of
WebP and [INAUDIBLE] I don't know how to
actually pronounce that, where you essentially
have ways of providing really high-quality images at
a fairly high resolution with [INAUDIBLE] not
that byte size price that you would usually have
for really large images. So those are all different
options that you can look into.

There's also image lazy
loading that you can use, where if an image is below
the fold, you can say, well, the browser doesn't need to
load that image until the user scrolls it into view.
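
In markup, that can be as simple as the native lazy-loading attribute:

```html
<img src="chart.png" alt="Example chart" loading="lazy" />
```

So there are lots of things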
that you can do there. And I think what is interesting
specifically about images and SEO is a lot of
the things that you can do are very technical,
and things that you can measure fairly clearly. You can take Lighthouse or
one of the testing tools and just measure the
page as it loads, and see how large is
that page, how many bytes need to be transferred, and
really work with the numbers, rather than working with
that black box of SEO, where you tweak some things,
and then wait a month or so to see how
the rankings evolve. What's your vision
for the future of SEO? I don't know. Good question. I don't have that five-minute
answer on the future of SEO. I think one of the
things that people always worry about is everything
around machine learning, and that Google's algorithms will get so far as to automatically understand every website, and SEO will be obsolete–
nobody will need to do that.

I don't think that will happen. But rather, with all of
these new technologies, you'll have new
tools and new ways of looking at your website, and
making it maybe easier for you to create really good
content to create clearer structures for your website. Similar to how
things have evolved, I think, the last
10, 20 years as well, where in the beginning, you
would write your own PHP code and craft your own HTML,
and it was a lot of work.

And over time all
of these CMSs have evolved, where essentially
anyone can go off and create a website without having
to really understand any of these HTML and
server-side basics. And I think that
evolution will continue, and that there'll be more
and more tools available. And you'll be able to
do more and more things in a way that works fairly
well for search engines. And it's not that the
SEO work will go away, but rather, it'll evolve. So maybe instead of
hand-tweaking H2 tags and H1 tags, you'll
just delegate that to a CMS that makes sure that
the most important content is already included as a
heading on the page.

Since PageRank is a finite resource, is it a good idea to preserve PageRank by using the PRG pattern for unimportant pages like privacy policy pages? So the PRG pattern is
kind of a weird trick to make something
work like a link, but actually not be a link. And it uses things like
posting to the server, and then the server
does redirects and all kinds of weird things. From my point of
view, you absolutely don't need to use any of that. If you want to make sure
that a link does not pass any page rank, then
use the rel nofollow. If you want to make sure
that a link works well, then don't use the rel nofollow.

That's essentially my
perspective on that. So I would not go off and create
these fancy constructs, where you're doing things like posting
and redirecting on a server, because it just adds
so much more complexity and you get absolutely
no value out of that. So that's my primary take
on technologies like these. I think it's really cool to come
up with this kind of a setup, but it's not
something that I would implement on a website
on a day-to-day basis.

It's terrible to maintain. You can't use any of
the existing tools on it because you can't really
crawl the website. I would absolutely
discourage that setup. With regards to PageRank, essentially what is going on here is like PageRank
sculpting, where you're saying I don't want any
PageRank going to my privacy policy pages. Our systems are pretty good
at recognizing how these pages interact within a website. And from my point
of view, you don't need to do things like
blocking PageRank from going to your privacy policy pages. These are things that are
totally common on the web.

It's not something that
we would be surprised if we found a privacy policy
page for your website. But rather, it's a
normal part of the web, and we almost expect to find it on a website. So that's not something
I would consider hiding. In general, PageRank
sculpting within a website is something I would try to
avoid as much as possible, because it's possible to really
significantly break the way that your website is crawled.

And you don't notice
that when you interact with the page in a browser. So that's something where I
would try to avoid that as much as possible. We just have a few minutes left
before I pause the recording. So maybe I'll just go back
to some live questions in the meantime. Looks like there's
still a few hands up. I also have a bit more time
afterwards if any of you want to stick around
a little bit longer. Let's see, Akash, I
think you're up first. AKASH: Hi, John. So my question is regarding
international targeting. We're working for a
client, and what we saw is we have created different
pages for different countries– like, targeting those pages,
and of course all the pages are in the English language. And we have created a subject
[INAUDIBLE] kind of thing, like USA or France or Australia. And we have set up one
canonical URL in all the pages, and the hreflang tags are
also populated [INAUDIBLE]. But for different
locations, we see sometimes for a country like
France, we see the UK is getting
ranked or Australia or something else
is getting ranked.

So I just wanted to know– all our interlinking, or we have created different websites, site mapping. In particular for the UK, we have a sub or a nested sitemap for them, but [INAUDIBLE] organized interlinked [INAUDIBLE]. So what could be the possibilities [INAUDIBLE] with doing all those parameters? Still, Google picks something else for different locations. JOHN MUELLER: So I think
there are two or three things to mention. One is if we don't have the
pages all crawled and indexed already, then with the
hreflang annotations, we will miss that
connection and we might show the wrong version. The other is
similar to, I think, one of the questions before with
regards to canonicalization.

If these pages are
significantly similar, then we might pick
one as a canonical, and we might use that one
to show in the searches. So if, for example, these
are both English language pages, and the content
itself is essentially the same across
them, then that's something where maybe we'll
say, well, these are duplicates. We'll just pick
one and show those. And I think last is also that,
with any setups that you have, you need to assume that
Google won't always get it right with regards
to internationalization. And you need to provide some
kind of a backup mechanism on your site as well. So usually, that's done
with a banner on your pages, where if you can
recognize that a user is from the wrong country, then
you can show with JavaScript the banner on top saying,
like, hey, it looks like you're from, I don't know, Indonesia,
and this page is for Australia.

Here's a link to the Indonesian
version of our website. So those are the approaches
I would take there. On the one hand, if
these pages are really duplicates of each other,
then sometimes, you just have to take that into account. If we just have them and
index index those pages, then sometimes it's a
matter of just waiting until that happens. If you have a lot of
different country versions, and we're not crawling
and indexing all of them, then maybe it makes sense to
reduce the number of country versions, just so that we can
focus more on the versions that we do have. And then finally, just assume
that Google won't always get it right, and
users will sometimes go to the wrong
version of the page, and you need to be able to
catch them on your side as well.
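
A rough sketch of that banner fallback follows; detectUserCountry and showCountryBanner are hypothetical helpers (in practice backed by a geo-IP lookup and your own UI code), not real APIs:

```html
<script>
  // If the visitor's country doesn't match this page's target
  // country, offer a link to the right version instead of
  // redirecting automatically.
  var userCountry = detectUserCountry(); // hypothetical helper
  if (userCountry === 'ID' && document.documentElement.lang !== 'id') {
    showCountryBanner( // hypothetical helper
      'It looks like you are in Indonesia.',
      'https://example.com/id/');
  }
</script>
```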

AKASH: OK. Sure. Thanks, John. Thank you. JOHN MUELLER: Sure. Let's see, Chris. CHRIS: Yes. Hi, John. Thank you so much
for doing this. So I'm working on a site
that has public profiles as a component for our users. However, this isn't
a social network where users can
follow one another, or there's a directory page
where all the public profiles are listed. It's that each user has a
profile and a unique URL to their profile. And so I wanted to understand
what is the right way to expose to search crawlers,
so that when a user types in their name on Google,
their public profile link from our site can also show up? I was looking at the robots.txt
files for sites like LinkedIn and Twitter, and they don't
expose the individual user's profile links.

And it doesn't appear to
be in the sitemap either. So I was wondering if there's
a way that we can also do something similar so that
search crawlers can find it without exposing it in
the robots.txt files. JOHN MUELLER: So you
want it to be indexed or you don't want
it to be indexed? CHRIS: We do want
it to be indexed. JOHN MUELLER: OK,
essentially, you don't need to do anything
special in a case like that. So from our point of view,
these would essentially just be normal pages
within your website. And ideally, you
would cross-link them within the website normally. So if there is, I don't know,
a comment or some reference to a specific user, you would
link to their profile page. And then we could crawl
the website normally, and essentially, find our
way to those profile pages, and then just index
them like that. CHRIS: OK, and if you don't have
the commenting functionality, or any way for pages
that are supposed to be linking to the
user's individual profile, is there anything that we can
upload into Search Console, for example, like the
links of public profiles that can allow search
to identify them? JOHN MUELLER: I mean, you could
put them in a sitemap file and we could pick
them up like that.

However, if they're only
linked in a sitemap file, then it is a bit hit and miss
if we would actually go off and index them, because
we don't have any context for those individual links. It's basically, like, here are
a bunch of pages on my website, and we only give them to
you in a sitemap file. Users wouldn't be able
to find them otherwise. Then our systems would be a
little bit reluctant probably to actually go off and
index all of those pages. So it is really
something where we need to be able to find
links to those profile pages somewhere within the
normal content of the site and crawl our way through
that, which could be– I mean, I don't know how
these would be embedded. But on the profile
pages themselves, you could have things
like related links, where you say, like, other
users from this location, and then cross-link
them like that. But yeah, essentially, we
need to find normal links to those pages.
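
A small sketch of the related-links idea mentioned there (hypothetical profile URLs):

```html
<!-- On each profile page: normal links that give crawlers a
     path from one profile to others. -->
<section>
  <h2>Other users from this location</h2>
  <a href="/profile/anna">Anna</a>
  <a href="/profile/ben">Ben</a>
</section>
```

The other thing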
maybe to keep in mind as well is that user
profile pages are super-popular targets for spam.

So that's something
where if spammers realize they can create a profile
page on your website and it's indexable, then they'll
go off and do that in masses with bots, and create
millions of profile pages with names like– I don't know, buy this
pharmaceutical here. And it's something where,
especially if you're starting off to
provide profile pages, you really need to
keep in mind that it's a super-popular
target for spammers. CHRIS: OK, thank you, . JOHN MUELLER: Sure. Let's see, Zach. [INAUDIBLE] ZACH: Hi. So my question is similar
to the previous one, but also the opposite
at the same time.

So our site started
ranking recently, like, on page two for a fairly competitive keyword, or search query. But yesterday or the day
before, it plummeted to page 6, and I noticed it's because
Google re-crawled our site and indexed, like, 400
or 500 user profiles, but they were all private. So it's my assumption that
they're marking it as spam, and I guess my first instinct
was to 451 all the pages, or 401 the pages as unauthorized. But the project was bootstrapped
with create-react-app, and that's a single-page application. And there's not really a
way to return any status code, other than 200, on a
client-served application like that. So I guess my question is,
like, where do I go from there? Because I was thinking to
have the profiles redirect to a /unauthorized and then
add it to my robots.txt. But yeah, just any thoughts
would be super-helpful. JOHN MUELLER: So I
think, first of all, if these pages just
got indexed now, then I would not expect
our system to say, oh, the quality of
the website is bad.

We will not rank
that well anymore. So my assumption is
that if these pages just got indexed now, it's probably
unrelated to the change in ranking that you were seeing. One of the things
that recently happened is, we launched another core
update, I think, yesterday. So if that aligns
with yesterday, then maybe that's related
to the core update. And the core updates tend to
be more on a broader quality basis across a website. So that would also take into
account the indexable pages for the website. But if these pages were just
indexed, like, last week or so, then that probably wouldn't
be playing a role there, but rather the content
that was indexed over the last couple of months. That might be something that we
take into account for a quality update. That's the one thing. The other thing with regards
to single page apps and pages that you no longer
want to have indexed– one thing you can do is add
a noindex to these pages.

You can do that with JavaScript
as well, where when you load the page and you
recognize, oh, this is actually a page that
shouldn't be indexable, then you can use your
JavaScript to edit the DOM and add the noindex robots
meta tag to that page. That's one thing you can do. You can also, of
course, redirect to an unauthorized page
if you want to do that. On the unauthorized
page, I would also use the noindex robots meta tag instead of the robots.txt disallow, just because with the
noindex, we really know that we shouldn't
be indexing this page.

And if it's an
error page, then you don't want to have that indexed. Whereas if you block the
page with robots.txt, then we wouldn't know
what's on that page. And then we would go off
and probably index the URL without knowing its content. And we might end up showing
that in the search results. So especially for
your single-page app, I would use the robots meta tag
noindex, not robots.txt.
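
A minimal sketch of adding that noindex from client-side JavaScript in a single-page app (pageIsPrivate stands in for whatever check identifies private profiles):

```html
<script>
  if (pageIsPrivate) { // hypothetical check for private profiles
    var meta = document.createElement('meta');
    meta.name = 'robots';
    meta.content = 'noindex';
    document.head.appendChild(meta);
  }
</script>
```

So those are, I think,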
the two main things I'd mention in that regard. And I think if you suspect this
is based on the core update from a timing point of
view, I would go and look for the blog post
that we did, I think, last year about core updates.

And there are
a bunch of questions. There are things that
you can ask yourself about the website's
quality overall. And I would try to go
through that, ideally with someone who
knows your website, but is not directly
involved, so that you can get some really objective
answers and reactions based on some of the questions
that are there. ZACH: OK, that's perfect.

And I guess as a
smaller question, but if I want to move my
URL to a more concise– like a smaller and
a higher quality TLD but the new URL doesn't have
any keywords that I'm targeting or any backlinks to the
site, if I were to migrate it in Search Console, would
the rank be maintained, or is it still
starting from scratch because I'm losing a
lot of those things I had before, I guess? JOHN MUELLER: Usually,
that would be maintained.

So the keywords in the URL
are really helpful for us when we first
recognize a website or we don't know anything
about the website. But as soon as we know
something about the website, we've indexed some
of your content, then we can focus on
your actual content and not the URLs themselves. So that's something
where oftentimes, people will focus too much on having
keywords in a domain name, and in the end not realize,
well, actually, the content is an important part. It's not the keywords
in the domain name. If you were to do that
for a single-page app– I don't know if that applies in your case– the tricky part is that in Search Console, we
check to see if there are 301 redirects in place. So if you wanted to migrate
to a different domain with a single-page
app, you would need to make sure that your
server generates those 301 redirects on a per
page basis and not do that with JavaScript. ZACH: OK. Yeah, that's perfect. Great. Thank you so much. JOHN MUELLER: Cool. All right, let me take a
break here with the recording.

And I'll be around if
there are more questions. It looks like there
are more hands up, so we can go through
some of that as well. If you're watching this on
YouTube, one last thing– I almost forgot
to mention it, is we have a survey running
on the YouTube channel. There is a link. I'll put the link
in the description. There's also a link, I think,
in the thread for this video. And I would really
appreciate it if you could take the short time
to go through that survey and leave us some answers
there to help us figure out what we could do to make
the channel and the videos that we're producing
a little bit better, and a little bit more
useful for you as well. All right, and with
that, let me take a break with the recording. And maybe we'll see some of you
in one of the future hangouts.
