JavaScript SEO Office Hours April 7th, 2021

MARTIN SPLITT: Spinner? Yeah? Come on, do the thing. Wow, that takes
longer than usual. Or is that the new way
of showing me that– hello and welcome to the
JavaScript SEO Office Hours on April 7th. With this lockdown,
I can't tell. But from the weather, it's
clearly April here in Zurich. We have seen sunshine,
rain, and snow in a single day at this point. Also some wind. So, yes, we definitely– I think it's April 7th. Yes, it's April 7th. The 7th is the right date. Yeah, we do these
roughly biweekly. There's a post on our Community tab on the
YouTube.com/GoogleSearchCentral channel.

You can ask your questions
there in the thread or you can join
these live recordings as these wonderful people
around me here have done today. We do not have YouTube
questions this time around, so I'll open the floor to
questions from the audience. Feel free to either unmute
yourself, raise your hand if someone else is speaking,
or use the Chat if you don't want to talk out loud today. Do we have any questions? Silence. AUDIENCE: OK, so I'll go first. MARTIN SPLITT: Hi, Giacomo. AUDIENCE: Hi, Martin. So the first question,
I don't know, is in the last
JavaScript SEO meeting, you said that
Chrome is improving, and blah, blah, blah, all
the systems are improving, and so you are getting fewer
JavaScript errors than before in the rendering
inside the WRS. So can you give us– I
don't know if you can now, or maybe later on
Twitter– a list of the most common errors in the WRS? The first one you
said is [INAUDIBLE] is blocked by robots.txt
or not reachable. But I don't know if there
are other issues that are very common so we can
just have some things to– MARTIN SPLITT: Oh, that's
a fantastic question, Giacomo.

I should, but I can't
because the thing is we have a bunch of data– we basically have
the logs that we get from each website in the WRS. So the JavaScript console
logs, like whatever you would see in the JavaScript
Console, we also get that. We highlight that in Search
Console if you run a Live Test. We have this information. I have not had the
time or expertise yet to actually go through
it and categorize it so that I could see trends. That's a project that I have
on my long list of things that I want to do. I haven't done that
so far because so far, no one has cared so I
thought no one is interested. But now, I know at least
one person who's interested. And maybe there's
more people out there who might be
interested, and then I have a good reason to
actually go through and try these things out again. And I need to find out where
the data sits again because I had access to it at some point. But I'm not sure
if I still have.

But that's a good question. I'll see if I can get
that information out. And then, maybe,
we can even put out a blog post on the most
common JavaScript errors that we are seeing in rendering. Good question. AUDIENCE: Thank you. Thank you so much. MARTIN SPLITT: You
said "first question", so you have more than one? AUDIENCE: Yeah, I don't
know if others have questions. If not, then I– MARTIN SPLITT: Yeah, let's see. Let's see. Floor's open. AUDIENCE: Yeah I could– yeah. AUDIENCE: No, go for it.

AUDIENCE: OK, OK, OK. It's not a problem. OK, so the second question is
about the layout information. So when you render
a website, you are getting the render tree,
layout tree, blah, blah, blah. And you have all the positions
of the elements on a page. How big a problem is it
to not be able to get that information–
not because of [? tabular ?] errors, but because of problems
like not being able to get the CSS, and so on? So the first problem I can guess
is the mobile check, mobile– MARTIN SPLITT: Mobile
friendliness, yeah. AUDIENCE: Yeah,
that's the first one. But there are also problems
like understanding the page better, understanding where
the main content is, and so on.

MARTIN SPLITT: Yeah,
so generally speaking, it's not that big of a
problem because as long as we have the content,
we at least know what's hypothetically on the page. So the question is very
similar to how important it is to have semantically
correct or valid HTML, right? If you don't, that's fine. If everything is a diff on your
page, that's OK-ish for Search.

It will not necessarily be
OK for Accessibility Tools. It will not necessarily
be OK for your users. And it will definitely rob
us some semantic information. So the more information
we have, the better it is, the easier it
is for us to make– well, not educated
guesses, but to know what you think is important,
to know what looks like the centerpiece. Without that, we don't have that information. We don't know. That happens
sometimes, and it's not like we can't do
anything with that page or we will not rank
it or anything.

It's just we have less
information than otherwise. So that's always going to
be a little bit of an issue. That's also one
of the reasons why we advocate against people
trying to simplify things by excluding CSS resources. That's actually not
making things simpler. It's making things harder. It makes things less
information-rich for us, so that's also not a good idea. And last but not
least, as you said, there are signals such
as mobile-friendliness that might be impacted
when things go wrong or go sideways there. So it's not the most
important thing. It's not a big problem
that you really need to address if it's not there. If it's there, it just gives
us additional information that we can work with. AUDIENCE: Thank you very much. MARTIN SPLITT: Very happy to. Do we have more questions? We have smart people
in the audience so I expect more questions.

If not, then I'll refresh
YouTube one more time. No, still no
questions coming in. AUDIENCE: OK, so
no problem at all. MARTIN SPLITT: There you go. AUDIENCE: No problem at all. I don't remember, probably a
week ago, I asked on Twitter about the queue priority. I'm not going to
ask about that again because I can understand this
is an implementation detail. But can a page with
an error block other pages from the same
origin from rendering– let's say we have 1,000
pages from the same domain.

This is because you probably
have some sort of concurrency. Yeah, I can understand that
you are splitting all the pages across a bunch of machines. But if one or more
of these pages are not rendering correctly
and you have to render those again, are you still processing
other pages first? Are you moving the
page to a different queue, like an error queue? I don't know. I don't– MARTIN SPLITT: Yeah, so– AUDIENCE: –if you don't want
to say anything about that– MARTIN SPLITT: So
there's nothing that would block
other pages just for one page having a problem. Fundamentally, there's
just different queues with different service
level agreements, so to speak, or service
level objectives. So for instance, live
tests are usually prioritized in a separate queue
and go to separate instances, because that just
needs to be a lot faster. Whereas with indexing, where
the whole rest of the process of the signal collection might
take a few minutes or hours, the rendering time does not
necessarily become a problem so we could potentially
render it a little later.

And an error would just
mean that we have to retry. So the only
real difference for pages that have
errors preventing them from being rendered
is that we retry them. That's the only thing. They go back into the queue. They're being retried. But it's not that we
are keeping tabs on, ooh, this page from this
website has a problem, so we now have to hold
back all the other pages until we have done this page. That's not the case. AUDIENCE: OK, cool. Thank you. MARTIN SPLITT: You're welcome. All right. If you're not sure if
you're like running over someone else's
desire to ask questions, there's a Raise Hand
button at the lower part of this wonderful tool
called Meet that you can use to get my attention as well. Or you can use the Chat
in this lovely tool, too. But please, please
ask more questions.
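To make the retry behavior described above concrete: a failed render simply goes back into the queue, and other pages from the same site are not held back. A rough, purely illustrative sketch follows; nothing here reflects Google's actual implementation, and every name is hypothetical.

```javascript
// Purely illustrative sketch (not Google's actual implementation): a render
// queue where a failed page is simply re-enqueued for a retry, while other
// pages from the same site keep flowing through unblocked.
class RenderQueue {
  constructor() {
    this.items = [];
  }
  enqueue(page) {
    this.items.push(page);
  }
  // renderFn returns true on success, false on a rendering error
  process(renderFn) {
    const rendered = [];
    while (this.items.length > 0) {
      const page = this.items.shift();
      if (renderFn(page)) {
        rendered.push(page); // success: the page moves on in the pipeline
      } else {
        page.retries = (page.retries || 0) + 1;
        this.enqueue(page); // error: retry later; nothing else is held back
      }
    }
    return rendered;
  }
}

// Page /a fails once; /b is processed immediately, not blocked by /a's error.
const queue = new RenderQueue();
queue.enqueue({ url: '/a' });
queue.enqueue({ url: '/b' });
let failedOnce = false;
const rendered = queue.process((page) => {
  if (page.url === '/a' && !failedOnce) {
    failedOnce = true;
    return false;
  }
  return true;
});
// rendered order: /b first, then /a after its retry
```

The point of the sketch is only the ordering: the error on `/a` delays `/a` itself, not its siblings.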

AUDIENCE: I also have a question. MARTIN SPLITT: Awesome! Hi, Tamás. AUDIENCE: Hi. So it's regarding m-dot websites. There are still a
lot of m-dot websites. And as far as I know,
as long as the setup is correct with the
[INAUDIBLE], the website can perform as well as
with a responsive layout. What happens if the www
version has a lot of links, but the mobile version,
so the m-dot version, has not that many links
in the layout? The AMP version has, again,
all the internal links of the www version. So long story short,
the mobile version has only a few links,
so internal links. Does it have an effect
on the performance? Because if it's a
mobile-first index, and Google comes
with the mobile crawler, crawls the
m-dot version, and sees only these few internal
links, how does it work? What's the secret behind it? MARTIN SPLITT: That's not really
a JavaScript-specific question.

And with the Link Graph and
ranking, that's where I'm out. I don't know. AUDIENCE: OK. MARTIN SPLITT:
Don't know, sorry. AUDIENCE: Don't be sorry. OK. MARTIN SPLITT: I would
suggest asking that question in John's office hours. But in general,
what I can say is if the m-dot version has a very
different linking structure and we would eventually prefer
crawling the mobile version, then we would probably see that
as the actual page structure and not necessarily
the desktop one.

It's always a good
idea to try to keep them as consistent as possible
because that definitely minimizes the opportunity
for us to misunderstand what your page
structure looks like. AUDIENCE: OK, makes sense. MARTIN SPLITT: But I think
generally if it works now, it should probably continue
to work fine in the future, basically. AUDIENCE: OK. MARTIN SPLITT: All right. AUDIENCE: Cool, thanks. MARTIN SPLITT: No worries. More questions? Oh, yeah! There we go. Raise the hand. Yes, Michael. AUDIENCE: Yeah, hi. MARTIN SPLITT: Hi, Michael. AUDIENCE: Hey, Martin. I'm the guy who made the
post: does Google currently have problems with
the prerender service, such that 500 errors arise? Our technicians told us that
Google leaves connections open and it comes
to a traffic jam, so that no further
connections are possible.

And I sent you a
Bitly link so that you can see the original page. MARTIN SPLITT: Where
did you post that? Because I don't think– AUDIENCE: I posted
this on YouTube. MARTIN SPLITT: On YouTube. AUDIENCE: In the
Comment section. But can I post
the link here now? MARTIN SPLITT: Yeah, sure, sure. AUDIENCE: Yeah? OK. MARTIN SPLITT: Because
in the YouTube Comments, I'm only seeing time warning– AUDIENCE: Oh, yeah, OK.

MARTIN SPLITT: –for Europeans. AUDIENCE: OK, then you
didn't see the other thing. MARTIN SPLITT: No. AUDIENCE: Because I
couldn't get in before. I tried again and
again so I quit. MARTIN SPLITT: Yeah,
I noticed that. I accepted you, and then
you didn't come through and that worried me. AUDIENCE: Oh, OK, OK. OK, where can I post you– MARTIN SPLITT: There's a chat
in here on the lower right hand corner, there should be
like a little Chat button. AUDIENCE: Chat, Chat, Chat,
Chat, Chat, Chat, Chat. [MUMBLES] details. OK, it doesn't show me the Chat. MARTIN SPLITT: No? Interesting. AUDIENCE: No. Up here! Ah, it's on the– MARTIN SPLITT: Ah, OK, it's– AUDIENCE: It's on
the upper right. MARTIN SPLITT: Oh, OK. In that case it's in different
places for different people. I think I have some
Experiments enabled here. Right. Yes. AUDIENCE: OK. And– MARTIN SPLITT: OK. AUDIENCE: It went better now. It went better now and
the pages are returning to the Google Search results. But the Google Search Console
told us that we have 503s and the technicians
told us that it is the problem with a post
in the Comment section.

I post everything
so you can read it. Sometimes, it's easier
to read than to hear it if someone talks. MARTIN SPLITT:
I'm not seeing it. But maybe it's in
the– hmm, let's see. Maybe it's in the previous ones. I don't know. Ah, OK, I should not
open our YouTube channel because there's always
something making noise. Maybe here? No, I'm still not seeing it.

Damn it! Ah, here you go. "This Google [MUMBLES]. Our technicians told us that
Google is connected to–" AUDIENCE: I can send you
now the original page. But I didn't want to post
it on YouTube because it doesn't look like a link page. MARTIN SPLITT: Yeah,
and I understand that. I understand that. The page itself seems to be– so at least, the home page
is definitely listed. And I'm wondering
if I can see it. Ah, come on.

Don't do this to me. Our lovely, lovely
internal tools. Hold on. AUDIENCE: The technical
people told us that Google leaves
connections open. MARTIN SPLITT: That
would surprise me. AUDIENCE: And then we
have a traffic jam. And so now, we have
to find out, yeah, can we believe what we heard? And if yes, maybe you just
say, this is the point. This is what I would love most. MARTIN SPLITT: Do we– so I don't think we
leave connections open.

That would very,
very much surprise me if we were to do that because
then, other people would be complaining. AUDIENCE: Yeah. MARTIN SPLITT: That's the thing. In general, we don't have
any special treatment for prerendering
services or anything. We just– one thing I
would check with them is if we are
crawling with HTTP/2 or with
HTTP/1. That might be something. If anything, I would
be not surprised to see glitches in
HTTP/2 crawling. Let's see. Does it tell me something here? No, we seem to be crawling with
HTTP/1 as far as I can tell. Yeah. Let's see. What does it say in terms
of crawling information? "Indexing crawler is mobile"– AUDIENCE: The page
is also HTTP/2 ready. MARTIN SPLITT: That is possible. It's just we haven't rolled
it out for everyone just yet. AUDIENCE: How are you– MARTIN SPLITT: I
do see that there was a peak around March 17th. AUDIENCE: Yes. MARTIN SPLITT: With– yes. Hmm, hmm, hmm, hmm, hmm.

"Escape fragment." OK. Let's see. Let me– AUDIENCE: So everybody else
who also wants to join us, the link is in the Chat. MARTIN SPLITT: I
wish I could show you how we are debating this
in our internal tools. But unfortunately, I can't. OK, so then that website's host. I need the host, not the URL. So I'll remove this part. And I would like to see how our
crawler has done in the past. Yeah, so that is definitely–
ooh, someone wants to join. I'll jump over. Ooh, another person I know. But again, it seems to
not have worked properly. Dang! So this is strange. This is very, very strange. I am seeing a peak in
terms of connections that we had in parallel
around that time when you were
seeing issues there. I would have to check with the
team, with the Googlebot team to see why there was an increase
in connections at the time. And I could imagine that
your tech people are not wrong to say the
connections stayed open longer.

Or at least, there were
more connections than usual. But that happened beforehand
and didn't raise any problems. So it's not an unusual thing
for us to use more connections. What is unusual
is that this time, it caused 500 responses
on your server. That's interesting. What is the error threshold? Also,
from our perspective, there aren't more errors
than usual either. Huh, strange. That's an interesting one. I'll probably have to get
back to that question, probably over Twitter, once
I know more about this. But it seems to be that
it's an HTTP/1 crawl. The connections should
not have stayed longer, or should not have stayed
open longer than usual. But there were 500 responses
that I would investigate on your end, because as far
as we can see, we only know that we got a 500 back. But they are back to
normal, so this doesn't seem to be a problem anymore. So I'm guessing you
changed something or it went back to
normal on its own.

I don't know. AUDIENCE: OK, so shall
I tweet you directly? Or– MARTIN SPLITT: Yeah, you can. You can respond, like
basically tweet at me and then I'll try to track this. AUDIENCE: OK, thanks a lot. MARTIN SPLITT:
Interesting is also that the response size went up
when we were seeing 500 errors. AUDIENCE: Mm-hmm, OK. MARTIN SPLITT: That's funky. AUDIENCE: OK, yeah. And so we always get some
feedback from the agency and then our
rankings are dropping and we never know what's going
on technically behind there.

OK. MARTIN SPLITT: Yeah. AUDIENCE: OK. MARTIN SPLITT: Hmm, this might
be a hosting-related issue on your end. But yeah, interesting. Hmm. AUDIENCE: But as they also
host big running events for an energy drink, they should
have enough power regularly to solve that, yeah? MARTIN SPLITT: They should. AUDIENCE: OK, so we will see. MARTIN SPLITT: But you
would be surprised. AUDIENCE: Yeah, OK. OK. MARTIN SPLITT: Awesome. Cool. AUDIENCE: Thanks a lot. MARTIN SPLITT: I'll take
a look and see if anything from our side comes up. AUDIENCE: Mm-hmm. MARTIN SPLITT: All right, sweet. Anyone else wants
to ask a question? I don't see any
other raised hands. That was a really nice use
of the Raise Hand feature. I like that. I like that feature a lot. It avoids talking
over each other.

That's nice. Hi, Snezana. Oh, Gabe. Gabe raised. AUDIENCE: Hello. MARTIN SPLITT: Yes. AUDIENCE: Hi, Martin. MARTIN SPLITT: Hi, Snezana. AUDIENCE: Nice to– MARTIN SPLITT: Hi. Hi, Gabe. AUDIENCE: Nice to have– Hello, Martin, can you hear me? MARTIN SPLITT: Yes, sorry. Cross talk. AUDIENCE: Perfect. No, I just wanted to make sure. No worries. It's nice to officially speak
with you after stalking you via video and watching all your
wonderful tips and tutorials. MARTIN SPLITT: Oh, God. AUDIENCE: I don't have a
specific site question. I have some general
questions, just to get your thoughts. One of the things that I
think that I'm coming up against with the page
experience update coming out is identifying page load
speeds and that kind of thing. And traditionally,
I think as SEOs, when it comes to
JavaScript, there's always this advice to
externalize it and call it in, and that kind of thing. And now, there's more feedback
about inlining certain scripts so you can pick up your
speed a little bit more and that kind of thing.

Do you have any
suggestions on any tools that help identify
when to inline or which scripts to inline? Because sometimes when you're
going through that, especially when you're not a
JavaScript developer, you're going through
all the scripts trying to identify which ones are
activating which parts of code and that kind of thing. So I don't know if there's
any secretive tools, or maybe things that you keep
close to your chest that you might want to
throw out here on our call. MARTIN SPLITT: So my first
secret weapon is to not inline but remove JavaScript
whenever possible. But that's obviously
not always the case.

And you will not be able
to remove all JavaScript, you'll just be able to remove
unused bits relatively easily. What I do like, as a
JavaScript developer, is the tree shaking ability
of most of the bundlers. So I think webpack
has it built in. I'm not sure if Rollup
can do it as well. But pretty much, all the
good bundlers out there are able to remove
unused code paths. So that's definitely
a win because code that you don't have
to download is just saved bandwidth and saved
processing power to begin with. When it comes to
inlining JavaScript, you want to inline the bits
and pieces that you absolutely, essentially, really,
truly 100% need to get the users the content
and the core experience. Any kind of tracker
is not that, right? You want to load
the trackers as late as possible in the process,
any kind of analytic solutions, any kinds of, I don't
know, stuff that is not core to the experience
of the user, should be moved down.
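As a toy illustration of that split — inline only what is essential for the core content, and push trackers and analytics to the end — consider the following sketch. The script names and the helper are hypothetical, not a real API:

```javascript
// Toy sketch of the triage described above: critical scripts are inlined so
// the core content works immediately; everything else is emitted as a
// deferred external script that loads late, off the critical path.
function buildScriptTags(scripts) {
  return scripts
    .map((script) =>
      script.critical
        ? `<script>${script.source}</script>` // inlined: needed for core content
        : `<script src="${script.url}" defer></script>` // trackers etc.: load late
    )
    .join('\n');
}

const tags = buildScriptTags([
  { critical: true, source: 'bootstrapApp();' }, // hypothetical app bootstrap
  { critical: false, url: '/analytics.js' }, // hypothetical tracker
]);
```

In real markup, `defer` keeps the script off the critical rendering path while preserving execution order; the decision of what counts as "critical" is the hard, per-site part.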

I think some of the
JavaScript frameworks have a mode which is called
server-side rendering plus hydration where they inline
the core runtime of themselves plus what's needed to
get the core content to then hydrate and actually
become a fully functional JavaScript application. They should do
that automatically. So I think Next.js
has that built-in. I think Nuxt.js
has that built-in if you work with Vue.js. I'm not sure about
Angular Universal because I think
Angular Universal goes a slightly different route. There is– Oh, God,
what's it called. There's a tool that
I've never tried myself but I have people here
singing the praises for it. I think it's called "Razzle
Dazzle" or something, if I remember correctly.

I know, the name was
fantastic, but the tool is apparently good and works
across different frameworks. But yeah, the question. There is not really
much tooling around to identify which ones
to inline because that's a very subjective, very hard to
automatically answer question. It's only in sitting
down with the developers and thinking about what is the
absolute minimum JavaScript that we need to get
the content there. And then the answer 95% of the
time, if not 98% of the time is not, oh, we inline
this and this and this. It's more like we need to change
the architecture in which we are building our
application, most likely pointing towards server-side
rendering of some flavor. AUDIENCE: OK, got it.

MARTIN SPLITT: No easy answer. I'm sorry, Gabe. AUDIENCE: Yeah,
it depends, right? MARTIN SPLITT: I
mean, I tried to avoid the word or the phrase,
"It depends" but it does. It does, unfortunately
in the end depend. AUDIENCE: Certainly does. MARTIN SPLITT: Because
with most applications, you'll end up with
having more or less the runtime of your framework,
which is huge in its own. And inlining that
does not really help. It minimizes a little bit
in terms of network latency, but you could probably
get the same results by preloading or
prefetching that blob early on in the header versus
actually inlining it. So inlining is a foot gun. Be very, very careful
not to shoot yourself. AUDIENCE: That's good
and that's good to know. Thank you. I appreciate that. MARTIN SPLITT: You're welcome. With JavaScript, a lot
of things are foot guns. That's the tricky
[INAUDIBLE] of reality. AUDIENCE: It certainly
feels that way sometimes. MARTIN SPLITT: Because it is.

JavaScript is fantastic
until it isn't. All right. Anyone else? As I said, you can use the Chat. You can use the
Raise Hand feature. Or if no one speaks
up, you can just– oh, yeah, there we go. Snezana. AUDIENCE: Hi. Yeah, I have a question
where I was wondering if it's possible to get some
ups and downs in crawling if we actually implement the
cookie privacy banner, specifically for Switzerland. Because now, we
have this regulation where we need to show
users privacy settings, all of these things. And we've seen some of the
[INAUDIBLE] vitals, traffic, everything going up and down. And actually for some
time, the content was picked up as
the primary one, even though the
banner was waiting for the whole content to
load and then [INAUDIBLE]. So I was wondering if it's
something that just happens normally that we
should wait for a while or is it something
that we should do like contact the service provider
and tell them this is happening, we don't know how to do it.

Because they say from their side
it's all optimized, let's say. MARTIN SPLITT: Yeah, that's
a fantastic question. In general, I would always
talk to the tool providers, because if they're not being
held accountable for the tool, then they let things slip
that they shouldn't let slip in the first place. In general, I would
assume that we are soon going to get better at detecting
these things, because we're continuously working at
improving the metrics, especially for
LCP, FID, and CLS, CLS being probably
revamped at some point soon because I've seen
that the team is not super happy with how the
metric works right now. We'll see how that goes. But yeah, these
kind of fluctuations can certainly happen from
these kind of patterns and implementations. And I don't really have a
general answer to these things because I have tried
out a few solutions, and at least the lower-profile,
self-baked ones work well. It seems to be that I have
received a bunch of reports for consent management platforms
and consent management services where there have been
issues specifically with CLS and cookie consent
banners that I haven't– AUDIENCE: Specifically
for that, yes.

MARTIN SPLITT: Yeah. AUDIENCE: Specifically, yeah. MARTIN SPLITT: And you're
not the first to report this. I have an ongoing
program or process of looking into these things. I wish I could do
that more publicly. But in order to not accidentally
shame any providers, I'm not doing this publicly. I'm basically looking
at this both on the side of what can we do to
avoid these problems? On the other hand, also,
what do they need to do? Because I wish us
to have guidelines and guidance for these
tool providers that you and their customers as well
as their own developers can refer to in order to fix the problem.

Because in the end if it's
a problem in the metrics and not just a
measurement error, then that's a problem
for actual users and we should work
together to improve that. AUDIENCE: Exactly, yes. MARTIN SPLITT: So no
finite answer yet, but we are definitely
looking into it. AUDIENCE: And are you going
to publish something regarding that on Twitter or something? Just some kind of
conclusions maybe? MARTIN SPLITT: I don't have
any conclusions yet because I'm early on in the process. I'm still receiving– basically,
I'm at the reconnaissance phase where I'm taking in input. So if you are–
there's a tweet, I– gosh.

OK, let's see if I
can find my tweet. A long, long time
ago, I tweeted that I am asking, basically,
for people's examples of this happening. So if I find the tweet,
and if not, then you can just send me a URL
where that is happening. You can use a direct
message on Twitter. My direct messages are open
for this kind of stuff. I don't normally
answer direct messages, but I definitely receive
them and look into them. AUDIENCE: This is just one
of Switzerland's largest classifieds websites. So I think that you probably– MARTIN SPLITT: I think– AUDIENCE: –can get an idea of– MARTIN SPLITT: I think I
know which way to look then. Yes, OK, fair enough. Is it the one with
"T" in the name? AUDIENCE: Yeah, yeah, yeah. MARTIN SPLITT: Yeah, OK. In that case, I
know where to look.

Fantastic. AUDIENCE: Yeah, because we've
seen the Core Web Vitals going really, really crazy
and the overall site performance dropping
incredibly low since we have implemented this. And there has been
some confusion also with the tracking and we're
working on it in analytics, and everything just seemed
to be boiling over. So, you know. MARTIN SPLITT: Yeah. AUDIENCE: It's definitely
largely from the provider side, I would say. MARTIN SPLITT: So I am in
the reconnaissance phase where I'm basically
taking in the information and looking at it. And once I have
conclusions, I'll publish that both in the Docs
and probably also over Twitter. And who knows? Maybe even in a video. Who knows what's
going to happen? This is 2021. Everything is wild. AUDIENCE: Cool, thank you. MARTIN SPLITT: Thanks a lot. Very good question. AUDIENCE: Thanks. MARTIN SPLITT: Again,
the fantastic use of the Raise Hand feature. I know that– oh, some
people that have joined us have already left again. Sad. We do have 15 more minutes.

So any further questions? Yes, Maria raises her hand
the old-style way. AUDIENCE: I know there is a
picture right there to raise your hands but blame the cold. I'm a little bit slower. So I'm going to ask a
question from the team where I work, the team at the
new place where I work. And probably, you've had
this question many times. So here I go. So the question is,
what is the best way to clean on just JavaScript
code, for a page, obviously. What is the best method to
recommend to developers? MARTIN SPLITT: Mm-hmm.

Actually, I haven't had
that question before. So that's good. I like that. That's good and nice. Or at least I can't remember. But then again, I can't remember
what I had yesterday for lunch. I don't have a cold that
I could blame it on. I'm just dumb and
can't remember things. That's why I make
computers do all my work. So that's– Oh, God. OK, OK, don't lynch me. That depends a little bit
on how the website is built. If you are using a framework,
the likelihood is that you are also using some sort of bundler
or build tool that combines the JavaScript
together in the end– because what you have is– OK, stepping back a
little bit, because I'm pretty sure most
people will actually benefit from this explanation. So what usually happens when
you build JavaScript web apps or websites that heavily
rely on JavaScript is that you end up with some
sort of dependency tree. So for instance, you
might use, I don't know, let's use, let's say, React. React itself is a relatively
small core module; its templating system, which is called JSX, is usually
a separate module that uses other things.

And basically, you end up with
lots of small bits and pieces of JavaScript that are
interconnected and basically are combining into your
JavaScript application. And then eventually, you
write a little bit of code that makes the thing do
what you want it to do. So the way that it
works in most frameworks is that you have a huge
amount of code that comes from third parties
that is not your own and then you write
a little bit of code to glue this together roughly. And then what you end up
with is a dependency tree. So like, my thing here
uses this thing here, which uses this
thing here, which uses React, which uses
this thing and that thing and this thing,
and they all need to be compiled down to a
proper combination of things. What easily happens is that
some of these dependencies are larger than others.

Let's say, I don't know,
I am using a library that provides a bunch
of basic functions, like it provides
100 useful functions that you need every day. But I'm only using two out
of these 100 functions. Then what usually happens
is that by bundling all these things together,
it seems like, oh, yeah, so this utility here that
has these 100 functions is needed by this
other thing here so I'll drag it into the
big blob of JavaScript that we are shipping
to everyone. Even though 98% of
it is unused code. They still have to be downloaded,
they have to be parsed,
they have to be executed. But they are never actually
running themselves. It's just like these
98 functions just sit in your bundle and
then don't do anything. This can be partially
automated with tools. Like when you were using
webpack for instance, you can enable tree shaking. Basically, the fundamental
term that you're looking for with
whatever you're using is you want to– bless you. You want to be tree shaking. So you basically take
the tree and shake off all the loose apples that
you don't actually need.
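The "shake off what you don't need" idea boils down to a reachability walk over the dependency graph: start from what the app actually imports and keep only what is transitively reached. A simplified sketch follows — real bundlers analyze ES module import/export graphs; the function names here are made up to illustrate the principle:

```javascript
// Simplified illustration of tree shaking: starting from the functions the
// app actually imports, walk each function's own dependencies and keep only
// what is reachable; everything else is dropped from the bundle.
function treeShake(dependencyGraph, entryImports) {
  const kept = new Set();
  const visit = (name) => {
    if (kept.has(name)) return;
    kept.add(name);
    (dependencyGraph[name] || []).forEach(visit);
  };
  entryImports.forEach(visit);
  return kept;
}

// A utility library exporting five functions; the app imports only two.
const graph = {
  debounce: [],
  throttle: ['debounce'], // throttle internally relies on debounce
  chunk: [],
  flatten: [],
  zip: [],
};
const kept = treeShake(graph, ['throttle', 'chunk']);
// kept: throttle, debounce (pulled in transitively), chunk
// flatten and zip are "shaken off" the bundle
```

In webpack, for example, `mode: 'production'` enables this kind of dead-code elimination by default, provided the code uses ES module syntax that the bundler can analyze statically.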

So with tree shaking,
these 98 functions will probably fall down. Why do I say probably? Because it depends on how
the developers are using it. That's the tricky bit. What you can to a
certain degree do, though, is in the
Developer Tools, you can– oh, where was that? Was that Sources? Or was that under More Tools? I think it's under More Tools. Yes. Actually, hold on, let me
share my screen real quick.

Never a bad idea to share my
screen, but let's do it anyway. I like to live dangerously. Why am I doing this to myself? I think I never learned. I think I coded this– AUDIENCE: Because
you're a good person. MARTIN SPLITT: I don't know. AUDIENCE: And you
like to help us all. MARTIN SPLITT: I don't know. I don't know. I think I brought this
up a few times already but I'll bring it up again. They did a study
with earthworms. Worms? Earthworms? I don't know, this is probably
how you pronounce that. And they found out that
they need 3,000 repetitions on average to learn something. Anyway, why am I
bringing this up? Because I'm an earthworm when
it comes to learning things.

Let's zoom in a little bit
so that we can actually see what's happening here. So I'm on a random website. You might have heard of it. It's apparently a social network
that has a bird as its logo. And then in the Developer
Tools, so how did I get here? Actually, let me get
back one more time. I right-click, I say "Inspect." I'm not sure if you're actually
seeing this right now but I guess you might. I don't know. Right-click Inspect then
I get the Developer Tools. Then I go to the
three little dots, and then say More
Tools, Coverage.

And in Coverage, if I
am reloading the page and get the coverage
information, we are seeing that there is
a huge bundle of JavaScript and 60% of the bytes are
not actually being used. AUDIENCE: OK. MARTIN SPLITT: Be very, very,
very, very, very, very, very, very, very careful with
that, because the thing is, if I were to actually log
in now and do something, I could probably try that, the
number would not be accurate. Because it's very, very
likely, most of this– OK, it doesn't have my data. Most of the data that
is unused is probably used later on when I continue to
actually work with this thing. So this number might go up. But it is a good idea to have
a look where this comes from. SharedCore, vendors~main. vendors~main actually has lots
of unused stuff, apparently. At least like a third
of it is unused.

So I would start looking here
because vendors is probably like some third party code and
there might be bits and pieces that you can get rid of. And you can test
this in development. You don't have to do
this in production. So developers can give it
a try every now and then. And then, they need to
interpret these results. The 27% does not mean that 27%
of the code is actually unused. It is unused at the time
when I took this trace. But it still might
help them to find out where to look for
things where they can save bytes and JavaScript. And then, as I said, tree
shaking in whatever system you are using is a good
idea if that's supported. If it's not supported,
you might want to consider moving
to a system where you can tree shake and import
things more selectively. But again, there's
no automated tool that you just throw
your JavaScript in and all the unused code
falls out, unfortunately. That would be awesome. AUDIENCE: Oh, thank you. Thank you. It was a very detailed answer.

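The percentage the Coverage panel shows can also be reproduced programmatically from raw coverage data. A sketch, assuming entries shaped like what Puppeteer's `page.coverage.stopJSCoverage()` returns (a `text` string plus used-byte `ranges`); treat the exact field names as an assumption if you use a different tool:

```javascript
// Given one coverage entry, compute the share of bytes that were
// never executed during the recorded session.
function unusedBytesPercent(entry) {
  const total = entry.text.length;
  // Each range marks a [start, end) span of bytes that did run.
  const used = entry.ranges.reduce((sum, r) => sum + (r.end - r.start), 0);
  return Math.round(((total - used) / total) * 100);
}

// A 100-byte bundle where only the first 40 bytes ever ran:
unusedBytesPercent({ text: "x".repeat(100), ranges: [{ start: 0, end: 40 }] }); // → 60
```

As Martin cautions, the number is only a snapshot of the moment the trace was taken, not a list of safely deletable code.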
MARTIN SPLITT: I'm
trying my best. I thought there was context
that is worthwhile sharing. AUDIENCE: And
definitely, it will help when once the recording
is ready to go back and look and rewind again. MARTIN SPLITT:
Have a look, yeah. You're welcome. All right. Any other questions? We already said
we have the Chat. You can just say something. Hey! Gabe raises his hand. And that would be
the third option. AUDIENCE: I'm coming
back for a second one. I'll try to make
it quick because I know we're short on time. But just curious, just
curious random thought. Sometimes when you've identified
a potential JS conflict with the website, you might
then go to the Google, use a site operator then check
the cache, then the index, and then go to check
text-based cache.

And then if you see
your hyperlink there, you know, oh, it's rendering. If you don't, if it's
text, then you're like, oh, it's not rendering. But there's a debate that I had
the other day with another SEO that was saying, well, Google
is rendering JavaScript, that technique doesn't
necessarily work anymore. So A, is that, I guess,
entirely true, or kind of true, not true? And B, if it is true, is
there a more effective way to cross-check
your observations? MARTIN SPLITT: Yes, it's true.

Yes, there's a better way to
cross-check your observations. So the cache– oh, God. The cache is a really,
really old feature. And the only purpose that it
has is to give you something should the website
have been removed or has gone down for the
moment so that you can say, I really want this
piece of content, it's only on this one
website, this website has not been available
for the last week. You may find something
in the cache. The thing is,
the indexing is a pipeline of lots of different moving parts. And at some point
in this pipeline, the cache might
siphon off the version that it shows you.

It's an unmaintained feature. We have it because I still
think it provides value even though it is not
under development anymore. But the version of content
you get in the cache is kind of random. It might just be from
a previous crawl. It might be straight
from the crawl so we haven't actually run
any of the JavaScript yet. It might have picked
up the snapshot after rendering has happened. It's luck. Sometimes it works. Sometimes it doesn't. But that doesn't mean that there
has been a fundamental issue or that we haven't been
rendering anything. We are rendering every page. Every page gets indexed based
on the rendered version, unless there are edge cases, but those are
really, really unlikely. And then if you want
to check that and see what we would see in rendering,
with certain caveats, you can see that in the Google
Search Console live test. That's the rendered
version as well as in the mobile friendly test
or in the Rich Results Test. These things are using the same
pipeline as Googlebot uses.

The caveat being
that the indexing uses very strong caching and
can retry multiple times, which means that sometimes it
might take like half an hour for us to actually get
everything downloaded properly and get everything fresh and
everything like that, which we can't really do when you are
sitting in front of the testing tool. So the testing tool
skips the cache because we want to
give you the latest version of what we would
get if there weren't any caches involved. And you might see the
infamous "other error" in the testing tools
which usually just means this took too long. So if we are not able to
fetch the JavaScript file then it just took too long
until we actually got around fetching it and
putting it into the cache. Which, for indexing,
would not matter, because the indexing
would go render, it would fail because
one resource wasn't ready, and we basically get
notified once the resource has been fetched. Then eventually, the
resource is being fetched, put in the cache, and then
we are rendering again, and then we have the
rendered version.

So sometimes, you see false
negatives in the testing tools, as in, oh, this looks
like we have a problem. It's other error,
that usually means it's not an actual problem,
unless it says something else. Like if it's a
different error source, then that's a good sign
for an actual problem. But if it's other
error, that usually means the resource
fetch took too long because it hasn't gotten
the position in the queue that it needed to finish
within the time limit that we have set, which is
very low for the testing tools, because we don't want you to
sit in front of a loading bar for half an hour. But the testing tools
are your best bet to actually see what the
rendering has come up with, yeah. AUDIENCE: Definitely. But if you don't have
access to Search Console, say you're analyzing, whether
it be other websites you're competing against, trying to
determine how they're building their websites, is there
a quick and easy way to check off on that? If– MARTIN SPLITT: Rich Results
test, Mobile-Friendly test.

AUDIENCE: But not the– don't mess with the
cache anymore, huh? MARTIN SPLITT: I wouldn't. Because as I said, it's
pretty random what you'll get. And drawing conclusions
from pretty much random data is risky. AUDIENCE: Yeah, it's
never fun, right? It makes a good screenshot
but it doesn't tell the story. MARTIN SPLITT: It does, yeah. AUDIENCE: So just to make sure,
if I find a text, like what should be a link
but it's text-based and it's anchored in the
text-based cache, that is not a sign that there is an issue
with that link being crawled because it's blocked
by JavaScript? MARTIN SPLITT: Correct, yeah. AUDIENCE: OK, all right. MARTIN SPLITT: All right. AUDIENCE: I lost. I lost the debate then. MARTIN SPLITT: I'm sorry. I'm very, very sorry. AUDIENCE: It's just that
old school SEO stuff. That's just what we used to do.

MARTIN SPLITT: I'm so sorry. I'm so sorry. AUDIENCE: I can't keep
up with you, Martin. MARTIN SPLITT: I'm
so sorry, Gabe. I don't always win people
bets, but sometimes, I do. And unfortunately, you
are not one of the won debates in this case. I'm sorry. AUDIENCE: All right. We'll work this out offline. We got to figure this out
when we come live, you know. MARTIN SPLITT: Awesome. Sweet. Thanks a lot. And Mihai has raised
his hand as well.

AUDIENCE: Hey, Martin. MARTIN SPLITT: Hi,
how's it going? AUDIENCE: I'm good, good. Quick question regarding
JavaScript loading priorities. And this is mostly
regarding platforms where you don't really have a
lot of access to change things. So for example, I'm
working on a site that's on a popular e-commerce
commercial platform. Rhymes with "Shopify". Wait, I did that wrong. Obviously, you
cannot really change, go around and change things
and how are they loading stuff. You do have some control,
but not like it's your own server or anything. But I noticed that
for any plugins that you're using with the
platform, they do their best.

They use the JavaScript
function to load the third party JavaScript
files after the load event. So they're not using
defer or async. They're using that JavaScript
function to basically load everything after the
load event, which is very nice on their behalf. So I'm curious if there's
any difference in terms of how Google sees things
whether you're using, let's say you have a vendor,
a third party JavaScript file, whether you're
putting it in the footer and using the defer attribute
versus using this kind of trick to move everything
after the load event? MARTIN SPLITT:
No, we don't care. For us, that doesn't
make a difference. It's just a different moment
when we discover the resource, probably. But that's OK. That's fine. AUDIENCE: OK, so as long as
the actual content gets loaded the same, doesn't really matter
when the rest of the cruft comes up. MARTIN SPLITT: Yeah. AUDIENCE: OK. MARTIN SPLITT: I love how you
immediately say, that's cruft.

There we go. AUDIENCE: Well,
as you mentioned, tracking things, other
things that are chat bubbles and things like
that, things that aren't necessary towards
actually rendering the main content of the page. And, as I mentioned, not
everybody has the luxury, well, not luxury,
but not everybody has the chance to work
with their own server, their own platform, so
a lot of websites are using this third party platform. So it's, again, very nice that
they not only defer it but they actually defer it all
the way until after the load event. So I was just curious whether
it's the best practice or if there's any difference in
terms of how Google perceives those or renders those files. MARTIN SPLITT: I don't see
any reason for us to treat or perceive this
differently, no. AUDIENCE: Mainly
because the load event
six seconds away from the load time.

MARTIN SPLITT: That's true. AUDIENCE: Versus the– MARTIN SPLITT: But– AUDIENCE: –defer, you still
see the resource right away, you just defer it. MARTIN SPLITT: Yeah, so the
resource discovery, that's a thing. That's going to be different
but we don't really care as long as the
content is there. AUDIENCE: Sure. MARTIN SPLITT: We'll
probably trigger a late page, sorry,
a late resource fetch for these resources. But, it doesn't hurt.

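The load-event trick Mihai describes can be sketched roughly like this. The window object is passed in explicitly here, an artificial choice so the helper can also be exercised outside a browser, and the script URL is hypothetical:

```javascript
// Inject a third-party script only once the window "load" event has
// fired, instead of putting a <script defer> or <script async> tag
// in the markup.
function loadAfterLoad(src, win) {
  win.addEventListener("load", () => {
    const script = win.document.createElement("script");
    script.src = src;
    script.async = true; // don't block anything once injected
    win.document.head.appendChild(script);
  });
}
```

As Martin says, Googlebot doesn't treat this differently from `defer`; the resource is simply discovered at a later moment.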
Yeah. AUDIENCE: Yeah, OK. Gotcha. MARTIN SPLITT: All right? Thanks for the question, Mihai. Oh, Dave's raising his hand. Oh, God, OK. Ooh, oh my. AUDIENCE: That was a bit rough. [INAUDIBLE] Yeah,
basically, it's just a question about
if you circumvent the cache of certain things
by doing POST requests or timestamping or something. MARTIN SPLITT: Yeah? AUDIENCE: Does it really
close an issue other than just [INAUDIBLE]? MARTIN SPLITT: Um– AUDIENCE: I mean, not sure you'd
want to cache everything you could, anyway. MARTIN SPLITT: Yeah, exactly. It does impact
your crawl budget. Also, it has a potential– [AUDIO OUT] AUDIENCE: Oh. You've gone on mute, mate. MARTIN SPLITT: How did I mute? And when did I mute myself? Anyway, it doesn't, it's– ah! I think like it's one of
the top sentences 2020 and probably 2021 as well,
is like, "You're on mute." Can you hear me now? No, I think that was an ad
campaign for a mobile network at some point. Like, "Can you hear me now?" So besides the crawl budget
issue that you already mentioned where pretty
much every single time we are requesting
that resource that is cache busting in
one way or the other, you might run into
a potential pitfall.

I'm saying "potential"
because I've never seen this being an actual problem. But there is a potential for
it becoming a problem if you do it excessively, which is
if you are skipping the cache, you are reducing the– ah, what's the
English word for it? The resilience against
timeouts, right? Because if we don't have a cached
resource and have to fetch fresh for a
rendering run, then we need to wait for the resource
to actually come through live.

And if the crawler does not have
the budget to actually fulfill this within the deadline,
then you might actually not get the resource in
time for rendering and then you might see
a failing rendering run. And then we have to
retry that again. And then if the
issue persists, it might increase the
time until we actually get a successful render. Again, I have not
seen this in practice yet but I have not seen that
many people in practice using POST to fetch resources for
the website to avoid the cache.

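If a site really does need to bust caches, the predictable, daily-changing time stamp Martin goes on to suggest could look like the sketch below; the `v` parameter name is an assumption:

```javascript
// Build a URL whose cache-busting query string only changes once a
// day, so caches (including Googlebot's) stay effective within a day.
function dailyCacheBustUrl(url) {
  const day = new Date().toISOString().slice(0, 10); // e.g. "2021-04-07"
  return `${url}?v=${day}`;
}
```

This keeps the URL stable for a whole day instead of defeating the cache on every single request.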
So– AUDIENCE: Thanks. I could imagine it's
something more prevalent once GraphQL grows because
that's POST by default, isn't it? MARTIN SPLITT: Yeah, yeah. And I've seen GraphQL
being like, oh, GraphQL doesn't work with Googlebot. And I'm like, no, it does work. Why would it not work? Oh, you know, here
is an example where the content doesn't come in. I'm like, oh, yeah, but that's
because it continuously times out and, with your POSTs, we
can't actually cache anything. So, yeah. But if you were to cache
bust, I would probably do that with query strings
that have time stamps that are more or less predictable. As in, maybe every day the
time stamp changes or something like that if you really, really,
really need to cache bust. But, full disclaimer, I'm
not encouraging anyone to cache bust. Inviting whole new
scenarios of problems. You're welcome. Thank you very much
for the question. AUDIENCE: Cheers. MARTIN SPLITT: So repeating
myself for the new arrivals, you can post your questions
in the Chat, as well.

That is, as I learned from
Michael, on the right hand top corner. I think I'm running some sort
of early access internal version where it's in the lower
right hand corner. So eventually, it
might move from the right upper-hand corner to
the right lower hand corner. You can also use the Raise Hand
feature which is somewhere down there, probably. I think it's down there, right? Those who have
raised their hands, it's somewhere down here. Or you can just
speak if there is awkward silence for a
long period of time, which that never is
because I talk too much. So any further questions? We're already
seven minutes over. We can do another
three minutes over. It won't matter. Giacomo is actively
thinking about a question. I like that. It's like– AUDIENCE: I have
another 10 questions. But you know I can't just– MARTIN SPLITT: We actually have,
is it "Ah-meen" or "Ah-meen-a" raising his hand? So go ahead. Oh, I think you're on mute. At least we're not hearing you. Or I'm not hearing you.

I don't. Does everyone else hear
him and I just don't? No, OK. AUDIENCE: No, I can't
hear him either. MARTIN SPLITT: OK, good. So there's something in
your microphone settings that prevents us
from hearing you. No? I was very carefully
lip reading, but I'm not good enough
to actually read. Nope. I know that problem because I
have multiple microphones and– AUDIENCE: Hello? Hi! Hi, sorry. MARTIN SPLITT: Yes! AUDIENCE: Yeah! AUDIENCE: I've gotten
my microphone– MARTIN SPLITT: No worries. AUDIENCE: –into
the external, and I have to change the settings. MARTIN SPLITT: I have
two external microphones and an internal
one because the one that I'm speaking
to you right now is not good enough
for the podcast. So I have another
one for the podcast and then I have the internal
one, and it's always fun. AUDIENCE: Good. I've got a question with
regards to Google Evergreen and with the recent
update on Chrome, and I was wondering
whether you have got any documentation
regarding the JavaScript, and how it's
handled, and if there was any improvement on
that from Google Evergreen, in terms of how it does handle
the additional JavaScript, how it is efficient.

MARTIN SPLITT:
Basically, our web rendering service uses whatever
is the current stable Chrome. So whenever that gets
an improvement then we're getting the
same improvement. There's no difference. There's no special thing. There are small, specific
differences and they are documented in our
documentation at Developers.Google.com/Search. I think it's now in Advanced. And then JavaScript,
there's a bunch of information on how
we're handling JavaScript. But it's pretty much the
same as what we are doing– it works roughly the same way as
it works in the normal Chrome. So there's no special treatment. There are certain differences. For instance, we're not
persisting localStorage, sessionStorage, or cookies. We are behaving differently
in certain, very small but important ways. WebSockets are not
properly supported. HTTP/2 is just coming now. It's an experimental feature
that we are rolling out slowly. But the JavaScript processing
and parsing and the V8 improvements are pretty
much the exact same as we are running on– rendering.

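A practical consequence of that non-persistence for developers: rendered content shouldn't depend on previously stored state. A minimal defensive-read sketch (the function and key names are hypothetical, and the storage object is a parameter so the helper also runs outside a browser):

```javascript
// Read a stored preference, falling back gracefully when the value
// is missing or storage is unavailable – which is effectively the
// situation on every render by the web rendering service.
function readPreference(key, fallback, storage) {
  try {
    const value = storage.getItem(key);
    return value !== null ? value : fallback;
  } catch (e) {
    return fallback; // storage may be blocked or absent entirely
  }
}
```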
It's Chrome 89 at
this point, I think. AUDIENCE: All right. MARTIN SPLITT: Does that
answer the question? AUDIENCE: I think so. MARTIN SPLITT: Awesome. AUDIENCE: Yeah, it's
not really my question. It's from the dev team. MARTIN SPLITT: Ah, oh. AUDIENCE: So they're asking
in terms of documentation. MARTIN SPLITT: Yeah, hold on. I can actually share. Yeah, well, OK. Let me actually find
you the specific link to that part of
our documentation because I know that
it recently moved. I think it's in– [MUMBLES] No,
that's not the one. Managed JavaScript Content. Here we go. So this is the main– I'm going to share
it here in the Chat. This is the main guide. And from there on out, pretty
much everything else is linked. AUDIENCE: Thanks. MARTIN SPLITT: This is
where the differences are. Most of them, anyway. They are small– I'm really excited
that no one has caught one of the more
interesting and surprising differences yet.

But it doesn't really
matter in 99.9% of cases. Giacomo is smiling because
he probably found something. Cache busting stuff. Ah, yeah. I think I did not explain that. No. We do explain that
you shouldn't use the cache, the text-based
cache thing to test. So that's nice. AUDIENCE: Oh, you going
to rub it in, Martin? All right. All right. MARTIN SPLITT: I'm so sorry. I just like– [LAUGHS] I
just read that sentence because I think
it's on the colored background or something. That's why it caught my eye. Use long-lived caching. So we do talk about caching
in some ways, Giacomo. Just not the cache busting. I'm talking about the
complete opposite, to be fair. So there's that. Sweet. That has been fun. And we are 30 minutes over. Do you have further questions? Because while I'm here, I might
as well answer more questions. So no further questions. Feel free to use the
Raise Hand button. Yeah, Gabe? AUDIENCE: Quick question. MARTIN SPLITT: Mm-hmm? AUDIENCE: Why not? MARTIN SPLITT: Yeah! AUDIENCE: HTTP/2 advantages
in JavaScript-based websites maybe that we
should be thinking about when we're talking to clients? MARTIN SPLITT: Ah, HTTP/2
with its ups and downs.

So at this point, I would not. But I would watch very closely
for general availability of HTTP/2 crawling, which
has not rolled out yet. I think we are at 10%, 20%. I can't remember what's
the percentage of URLs that we are crawling
with HTTP/2. The big upside of HTTP/2
is that you can multiplex through a single connection. So what happened on HTTP/1.1
was you have a website, index.html, and
then some style.css, and then some image.jpeg,
and maybe some header.jpeg, and then some
app.js, each of these would have been transferred
over a separate connection. Connection keep open,
or keep connection open, or sorry,
connection keep-alive, that's what I was looking for. Connection keep-alive
would at least avoid the teardown of the
underlying TCP connection. Yay! Because what really happens is
your browser makes a request, so it finds a resource
like index.html, then it has to actually
go to DNS to get the address behind the URL.

Then it needs to establish
a TCP connection, and then over this
TCP connection, it actually does the
HTTP negotiation, like sends a request,
gets a response back. Yay! HTTPS makes that even
slower because then you have a TLS connection on
top of the TCP connection, on which you then actually
do the HTTP transfer. And keep-alive made at least a
TCP connection stay available, but you would still
have, basically, I think it was six
connections at some point to the same thing. So it would open
six connections. And then if you have
more than six things, like the HTML, the two
images that I spoke of, the JavaScript, the CSS,
that's already five, and then there's
like 10 more images.

Then you would
basically download them in bunches of six. I think it has at some
point been raised to 10. But that's pretty much it. With HTTP/2, you have
one single connection. And pretty much everything
can just smoothly flow into one
connection and then the browser will tear it apart
into the individual different bits and pieces, which means
that domain sharding is not necessary if you use HTTP/2. Domain sharding was exactly
avoiding this problem where to example.com,
there would only be six connections at a time.
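The "bunches of six" arithmetic can be put into a quick sketch; the asset counts, and 100 as a typical HTTP/2 concurrent-stream limit, are assumptions for illustration:

```javascript
// How many waves of downloads a page needs when at most
// maxConnections resources can be in flight to one host at a time.
function downloadRounds(resourceCount, maxConnections) {
  return Math.ceil(resourceCount / maxConnections);
}

// 15 assets over HTTP/1.1 with six parallel connections:
downloadRounds(15, 6); // → 3 waves
// HTTP/2 multiplexes streams over one connection (often 100 allowed):
downloadRounds(15, 100); // → 1 wave
```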

So you would move your images
to image.example.com, your CSS to assets.example.com,
same as your JavaScript so that you can basically get
all of these at the same time. But you don't need to do
that if you have HTTP/2. HTTP/2 also has the
push cache feature which is widely
considered a bad idea, so I would not really
invest in that. Anything else that
is really cool? I think it has pretty good
compression enabled by default. It is binary, so it
doesn't waste as much data. It pipelines, as I said. It uses one connection
to get all the assets. Yeah, that's pretty much it. HTTP/2 is nice but
I don't think it's the revolutionary change in
terms of SEO for Google Search. It's not supported for all
websites at this point. It will be in the near future. AUDIENCE: Right, and you're not
crawling yet in HTTP/2, right? Is that what you said earlier? MARTIN SPLITT: We are
crawling in HTTP/2 for a small percentage
of websites. And we are about to ramp
that up to 100% eventually, but we haven't done that yet. So we are still in the
experimental phase.

For instance, I'm
not sure if it has been resolved but Search
Console has a glitch when displaying
information for HTTP/2 crawled websites,
which is hilarious. Gary is working on getting that
resolved as far as I'm aware. AUDIENCE: I mean, is it a– [INTERPOSING VOICES] MARTIN SPLITT: Sorry? AUDIENCE: Is it a pure
coincidence that you guys are having the page speed update
around page speed load times and then you're going
to be crawling HTTP/2– MARTIN SPLITT:
That's a coincidence. That's a coincidence. The HTTP/2 crawling
started in December, if I remember correctly. Or maybe November last year. So it is already in progress. The page speed or the page
experience is soon to come.

We might get later
than May, by the way. It's unclear at this point. We'll see. And Giacomo adds a tip
to the Chat, saying, "if you want to keep domain
sharding for your HTTP/1.1 clients while using HTTP/2, I
suggest to look at connection coalescing". I haven't actually
heard that before. I need to read up on that. Cool, more work. [LAUGHS] Thanks, Giacomo,
for sharing this tip. Cool stuff. All right. AUDIENCE: One last
question or we are closing? MARTIN SPLITT: I think
one last question is good. AUDIENCE: OK. While we are talking about
HTTP/2 that is from Googlebot to the internet, but internally,
there are differences. There are Chrome instances
talking with Googlebot. And can we know what's the
type of connection between the Chrome instances and
Googlebot is HTTP/1.1, the RPC, or anything else? MARTIN SPLITT: It's
gRPC, actually. Well, it's RPC. Let's call it RPC. They are basically
microservices to each other. So it's doing an RPC
call, one that I'm very familiar with because I
built an internal tool that uses– so the infrastructure that is
the crawling infrastructure is called
Trawler.

The team is called something
different because they also have another crawler
infrastructure thing that is not what we are using. So Trawler
is a microservice that we are talking
to via RPC calls. So we basically compile a protocol
buffer with all the information for the request that we
want, throw that over to Trawler, and then a
Trawler instance picks it up and works with that.
back with a response. AUDIENCE: Thank you. MARTIN SPLITT: You're welcome. We have this office
hour every two weeks. And I announce them
on YouTube.com/GoogleSearchCentral/Community.

Sweet! Ladies and gents, it has
been a huge pleasure. Thank you so much for
joining, everybody. Thanks for all the
lovely questions. I hope that you are
staying safe and healthy. And for those who are interested
in the technical bits and bobs, Maria, Dave, Giacomo
already know this. I'm Twitch streaming
every Tuesday. I haven't done that
yesterday because it was a holiday for me. I just took a day
off after Easter. That was awesome. And we went diving and
we went in with sun and came out with snow,
which was April, I guess. Not sure if climate
change or just April. And, yeah, see you either
at the JavaScript SEO Office Hours in two weeks or on
Tuesday for Twitch stream time where there's live coding. Awesome! Thanks a lot. AUDIENCE: Thank you. MARTIN SPLITT:
Have a great time.

AUDIENCE: Yeah, bye bye. MARTIN SPLITT: Bye! AUDIENCE: Bye, Martin. Bye, guys.
