Google I/O ’23 in under 10 minutes
[AUDIO LOGO] SUNDAR PICHAI: As
you may have heard, AI is having a very
busy year, so we've got lots to talk about. Let's get started. We have been applying AI to
make our products radically more helpful for a while. With generative AI, we
are taking the next step. Using a combination of semantic
understanding and generative AI, you can do much more
with a new experience called Magic Editor. Let's have a look. This is a great photo,
but as a parent, you always want your kid
at the center of it all. And it looks like the balloons
got cut off in this one. So you can go ahead and
reposition the birthday boy. Magic Editor
automatically recreates parts of the bench and
balloons that were not captured in the original shot. As a finishing touch,
you can punch up the sky. It changes the lighting
in the rest of the photo so the edit feels consistent.
It's truly magical. Imagine if you could see
your whole trip in advance. With Immersive View for
Routes, now you can. Today, we are ready to
announce our latest PaLM model in production, PaLM 2. While PaLM 2 is highly
capable, it really shines when fine-tuned on
domain-specific knowledge. We recently released
a version of PaLM fine-tuned for
security use cases. Another example is Med-PaLM 2. In this case, it's fine-tuned
on medical knowledge. This fine-tuning
achieved a 9x reduction in inaccurate reasoning
when compared to the base model. SISSIE HSIAO: Bard will
become more visual, both in its responses
and your prompts.
So if you're looking to have
some fun with your fur babies, you might upload
an image and ask Bard to write a funny
caption about these two. We are removing the
waitlist and opening up
countries and territories. [CHEERING, APPLAUSE] You'll be able to talk to
Bard in Japanese and Korean. With PaLM 2, Bard's math,
logic, and reasoning skills made a huge leap forward,
underpinning its ability to help developers
with programming. Here, Bard created a script
to recreate this chess move in Python. And notice how it also
formatted the code nicely, making it easy to read. As you collaborate
with Bard, you'll be able to tap into
services from Google and extensions with partners to
let you do things never before possible.
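For readers who want a concrete picture of the chess example above, here is a minimal sketch of the kind of Python script Bard might produce. It assumes the open-source python-chess library and an illustrative opening line; neither detail comes from the keynote.

    # Minimal sketch of a script that replays chess moves, assuming the
    # open-source python-chess library (pip install chess). The moves are
    # illustrative; the keynote did not show the underlying code.
    import chess

    board = chess.Board()        # standard starting position
    board.push_san("e4")         # push moves in standard algebraic notation
    board.push_san("e5")
    board.push_san("Nf3")

    print(board)                 # ASCII diagram of the resulting position
    print("FEN:", board.fen())   # compact text encoding of the position
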
APARNA PAPPU: People use Slides
for storytelling all the time, whether at work or in
their personal lives. Let's pick one of the slides
and use the poem on there as a prompt for
image generation. Let's hit Create and see
what it comes up with. Starting next month,
trusted testers will be able to try this and
six more generative AI features across Workspace. As you can see, we've launched
a side panel, something the team fondly calls Sidekick. Sidekick instantly reads
and processes the document and offers some really
neat suggestions, along with an open
prompt dialog. We can see the true potential
of AI as a collaborator, and we'll be bringing
this experience to Duet AI for Workspace. CATHY EDWARDS: I am just
so excited by the potential of bringing generative
AI into search. What you see here looks
pretty different, so let me first give you a quick tour.
You'll notice the new
integrated search results page, so you can get even more
out of a single search. There's an AI-powered snapshot
that quickly gives you the lay of the land on a topic. Let's say you're
searching for a good bike for a five-mile
commute with hills. In the AI-powered
snapshot, you'll see important considerations. Right under the snapshot,
you'll see the option to ask a follow-up question or
select a suggested next step.
Tapping any of these options
will bring you into our brand new conversational mode. If you're in the US, you
can join the waitlist today by tapping the Labs icon in the
latest version of the Google app or Chrome desktop. THOMAS KURIAN: All
of the investments you've heard about today are
also coming to businesses. We're working with many
other incredible partners. You can build generative
applications using our AI platform, Vertex AI. This infrastructure makes
large-scale training workloads up to 80% faster and
up to 50% cheaper. JOSH WOODWARD: I want
to share one concept that five engineers
at Google put together over the last few weeks. The idea is called
Project Tailwind, and we think of it as
an AI-first notebook. You can simply pick the
files from Google Drive, and it effectively creates a
personalized and private AI model that has expertise in the
information that you give it. You can see, it's pulling out
key concepts and questions grounded in the materials
that I've given it.
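To make the idea of an AI-first notebook slightly more concrete: one common way to approximate this kind of source-grounded behavior is to build the model's prompt exclusively from the user's own files. The sketch below does only that; it is not Google's implementation, and generate_text is a hypothetical stand-in for a language-model call.

    # Rough sketch of grounding a model in user-supplied notes, in the spirit
    # of the Project Tailwind demo. Not Google's implementation; generate_text
    # is a hypothetical placeholder for a language-model API call.
    from pathlib import Path

    def build_grounded_prompt(note_files, question):
        # Concatenate the user's source material so the model is asked to
        # draw only on what it was given.
        sources = "\n\n".join(
            f"SOURCE: {path.name}\n{path.read_text()}" for path in note_files
        )
        return (
            "Answer using only the sources below, and list the key concepts "
            "you relied on.\n\n" + sources + "\n\nQUESTION: " + question
        )

    def generate_text(prompt):
        # Hypothetical placeholder; swap in a real model call here.
        raise NotImplementedError

    notes = [Path("lecture1.txt"), Path("lecture2.txt")]  # illustrative filenames
    prompt = build_grounded_prompt(notes, "What are the key concepts and open questions?")
    # answer = generate_text(prompt)
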
JAMES MANYIKA: And
while I feel it's important to celebrate the
incredible progress in AI and the immense potential that
it has for people in society everywhere, we must also
acknowledge that it's an emerging technology. That's why we believe
it's imperative to take a responsible approach to AI. For example, have you
come across a photo on a website with
very little context, like this one of
the moon landing, and find yourself
wondering, is this reliable? In the coming months, we're
adding two new ways for people to evaluate images.
First, with our About This
Image tool in Google Search, you'll be able to see
important information, such as when and where similar
images may have first appeared. We will ensure that every one
of our AI-generated images has metadata and markup
in the original file to give you context
if you come across it outside of our platforms. SAMEER SAMAT: With more than
3 billion Android devices, we've now seen the benefits of
using AI to improve experiences at scale. We're launching a major
update to our Find My Device experience to support a wide
range of devices in your life, including headphones,
tablets, and more. We're introducing
Unknown Tracker Alerts. Your phone will tell you
if an unrecognized tracking tag is moving with you
and help you locate it. Last week, we published a new
industry standard with Apple, outlining how Unknown
Tracker Alerts will work across all smartphones. DAVE BURKE: This
year, we're combining Android's guided customization
with Google's advances in generative AI so your phone
can feel even more personal.
With our new generative
AI wallpapers, you choose what inspires
you, and then we create a beautiful wallpaper
to fit your vision. So let's take a look. So, for example, I can pick– what am I going to
do– "city by the bay in a post-impressionist style." Cool. And I tap Create Wallpaper– nice. Now behind the
scenes, we're using Google's text-to-image diffusion
models to generate completely new and original wallpapers. RICK OSTERLOH: We're
completely upgrading everything you love
about our A-series with the gorgeous new Pixel 7a.
Introducing Google Pixel Fold. It combines Tensor G2,
Android innovation, and AI for an
incredible phone that unfolds into an
incredibly compact tablet. The unique combination of form
factor, triple rear-camera hardware, and personal
AI with Tensor G2 makes it the best
foldable camera system. DAVE BURKE: I'm looking
at Google Photos of a recent snowboarding trip. Now the scenery is
really beautiful, so I want to show you
on the big screen. I just open my phone,
and the video instantly expands into this
gorgeous full-screen view. Just swipe to bring up
the new Android taskbar and drag Google Messages
to the side to enter a split-screen mode, like so. ROSE YAO: Pixel Tablet is
the only tablet engineered by Google and
designed specifically to be helpful in your
hand and in the place tablets are used the
most: the home. RICK OSTERLOH: Across
Pixel and Android, we're making huge strides
with large-screen devices. SUNDAR PICHAI: The shift with
AI is as big as they come.
We are approaching it boldly,
with a sense of excitement. And we are doing
this responsibly, in a way that underscores
the deep commitment we feel to get it right. No one company
can do this alone. Our developer community
will be key to unlocking the enormous
opportunities ahead. We look forward to working
together and building together. [MUSIC PLAYING]