[MUSIC PLAYING] [VIDEO PLAYBACK] [MUSIC PLAYING]
– Please remember to fill out your evaluation form and leave it at the collection bin in the back of the room, OK?
– Yeah, that’s a big help for people to figure out just how bad our talk was. [LAUGHTER]
– Yes, we do have a limited supply of CDs. [CHEERING]
– It’s great to be here. I’m going to be talking to you today about HTML5.
– Nothing brings joy to my heart more than robotic Androids dancing and singing.
– 5, 4, 3, 2, 1. [CHEERING] [GONG]
– Good morning. This is Google I/O.
– I am thrilled to be here.
– At the Shoreline Amphitheater.
– This is the coolest thing.
– Excited about the future, what’s coming.
– You can build with the community.
– We want to give you the tools to create entirely new technological capabilities.
– This is the tattoo. Good, right?
– And there’s always endless discoveries.
– It’s great to have a platform. You can get to new outcomes.
– Things previously thought to be impossible–
– Finally, I’m here.
– –may, in fact, be possible.
– I hope you all find some inspiration to keep building for everyone. [CHEERING AND APPLAUSE] [MUSIC PLAYING] [END PLAYBACK]

[MUSIC PLAYING] [CHEERING AND APPLAUSE]

SUNDAR PICHAI: Good morning, everyone. It’s great to be back at I/O. We are coming to you live from our campus here in Mountain View. Of course, it’s not quite the same without our developer community sitting here in person. It’s another reminder of the times we are living in. The pandemic has brought us together in a shared experience for more than a year. But now we are seeing the common experience diverge. In some places, people are beginning to live their lives again as cases decline. Other places like Brazil and my home country of India are going through their most difficult moments yet. We are thinking of you and hoping for better days ahead. COVID-19 has deeply affected every community. It’s also inspired coordination between public and private sectors and across international borders.
At Google, we launched products and initiatives to help one another through this time, to help students and teachers continue their learning from anywhere, to help small businesses adapt and grow, and to get emergency relief and vaccines to communities in need. We work closely with many non-profit organizations around the globe. And you can go to the link behind me to support their excellent work.

At Google, the most fundamental way we help is by providing access to high-quality information, authoritative information from 170 public health organizations around the world, including the CDC, the FDA, and the WHO. We’re also focused on helping people find accurate information about vaccines, including the hours and locations for vaccine sites in many countries on Google Maps and Search. COVID-related information has been viewed hundreds of billions of times across our products and platforms. It continues to help people make decisions and keep their families safe.

I/O has always been a celebration of technology and its ability to improve lives. And I remain optimistic that technology can help us address the challenges we face together. So in that spirit, let’s get started.

At Google, the past year has given renewed purpose to our mission: to organize the world’s information and make it universally accessible and useful. We continue to approach that mission with a singular goal: building a more helpful Google for everyone. That means being helpful in moments that matter. And it means giving you the tools to increase your knowledge, success, health, and happiness.

Sometimes, it’s about helping in little moments that add up to big changes. Recently, we added 150,000 kilometers of bike lanes in Google Maps. We’re also introducing two new features. First, new eco-friendly routes. Using our understanding of road and traffic conditions, Google Maps will soon give you the option to take the most fuel-efficient route. At scale, this has potential to significantly reduce carbon emissions and fuel consumption.
Second, safer routing. Powered by AI, Maps can identify road, weather, and traffic conditions where you’re likely to have to suddenly brake. We aim to reduce up to 100 million of these events every year. [APPLAUSE]

Sometimes, it’s about helping in the big moments, like helping 150 million students and educators keep learning over the last year with Google Classroom or keeping students connected with affordable laptops. Chromebooks are now the number one device globally in K through 12 education. In Japan, 40% of local governments chose to deploy Chromebooks to every child in grades 1 through 9. And here in California, we worked with the Department of Education to provide thousands of Chromebooks to students in need.

One of the biggest ways we can build a more helpful Google for everyone is by reimagining the future of work. We have seen work transform in unprecedented ways. And it is no longer just a place. Over the last year, offices and co-workers have been replaced by kitchen countertops and pets. With so many people now working from home, access to collaboration tools has never been more critical.

In 2006, we introduced Docs and Sheets to help people collaborate in real time. A year later, we added Google Slides. All of this is now part of Google Workspace, which builds on more than 15 years of creating ways to work together. Today, we are announcing a new experience in Google Workspace to enable richer collaboration. We call this Smart Canvas. And to tell you more, here is Javier.

[MUSIC PLAYING] [APPLAUSE]

JAVIER SOLTERO: Thanks, Sundar, and good morning, everybody. With Smart Canvas, we’re bringing together the content and connections that transform collaboration into a richer, better experience. For over a decade, we’ve been pushing documents away from being just digital pieces of paper and toward collaborative, linked content inspired by the web. Smart Canvas is our next big step.

Let’s see how a distributed team uses Smart Canvas to plan an important marketing campaign.
The launch date is just two months away. So Adu starts a document and quickly adds a brainstorm table. With at mentions, he pulls in the right people and generates a checklist to assign action items. These simple actions connect the team’s plan to people, dates, and tasks, making their collaboration richer and more effective as they drive toward their launch.

Now that he shared the document, everyone starts dropping in their ideas. As they continue to brainstorm, the assisted writing feature suggests that they change the word “chairman” to “chairperson” in the document to avoid a gendered term. New assisted writing capabilities in Google Workspace offer suggestions so you can communicate more effectively.

Not only are we helping with language suggestions. We’re also making it easy to bring the voices and faces of your team directly into the collaboration experience to help them share ideas and solve problems together. Up to now, Adu and his team have been collaborating in the Doc and scheduling separate Google Meet calls to review their progress. But starting today, you can easily present the Doc, Sheet, or Slide you’re working on directly into a Google Meet call. Now Adu can join his colleagues with just one click.

And this fall, we’re excited to bring Meet directly into Docs, Sheets, and Slides for the first time. This will enable teams like Adu’s to actually see and hear each other while they’re collaborating. Now they’ll never skip a beat. And to keep that collaboration flowing in the meeting, the team used the new responsive voting table to see which ideas for the campaign are the most popular ones.

With all the progress they’ve made together, Adu’s initial document has evolved into a highly interactive, always up-to-date, actionable plan. And the team stayed connected every step of the way. That’s the power of Smart Canvas.

Two months later, it’s time to launch the new campaign.
Adu and his team are joining from offices, from home, and everywhere in between, connecting across time zones and continents. To help both office and remote teammates remain an equal part of the conversation no matter where they are, we’re launching Companion mode in Google Meet. Companion mode gives each of Adu’s teammates in the office their own video tile so they can stay connected to their remote colleagues, and everyone can participate in polls, chat, and Q&A in real time. Companion mode is coming to Google Meet later this year.

Teammates can also be heard wherever they work with noise cancellation powered by the best of Google’s AI, and machine learning in Google Meet automatically adjusts camera zoom and lighting, ensuring that everyone can be seen across all environments. We’ve also made it easier to customize views and share content so teams can focus on what matters most in the moment. This means that when Adu presents to the rest of his team, he can easily arrange people’s faces to gauge their reactions while staying focused on his content. And his colleagues across the globe can follow along with live captions, even translations into their native languages.

When Adu finishes his presentation, he doesn’t feel separated by time zones or languages or the devices his team is using. Instead, with Google Meet’s immersive experience, he feels connected and in the moment.

With Smart Canvas and these powerful enhancements to Google Meet, we’re transforming collaboration in Google Workspace to help people succeed at work, at home, and in the classroom. Previously, the fully integrated experience in Google Workspace was available only to our customers. But it will soon be available to everyone, from college students to small businesses to friends and neighbors wanting to stay connected and get more done together. Stay tuned for more details in the coming weeks. And now I’ll hand it back to Sundar.

[APPLAUSE] [BIKE BELL RINGING] [BIRDS CHIRPING]

SUNDAR PICHAI: Thanks, Javier.
Those were exciting examples of how computer science and AI can make us more helpful across our products. Google Search was built on the insight that understanding links between web pages leads to dramatically better search results. We’ve made remarkable advances over the past 22 years, and Search helps billions of people. And to improve Search even further, we need to deepen our understanding of language and context. To do this requires advances in the most challenging areas of AI. And I want to talk about a few today, starting with translation.

We learn and understand knowledge best in our native languages. So 15 years ago, we set out to translate the web, an incredibly ambitious goal at the time. Today, hundreds of millions of people use Google Translate each month across more than 100 languages. Last month alone, we translated more than 20 billion web pages in Chrome. With Google Assistant’s Interpreter mode, you can have a conversation with someone speaking a foreign language. And usage is up four times from just a year ago. While there is still work to do, we are getting closer to having a universal translator in your pocket.

At the same time, advances in machine learning have led to tremendous breakthroughs in image recognition. In 2014, we first trained a production system to automatically label images, a step change in computers’ understanding of visual information. And it allowed us to imagine and launch Google Photos. Today, we can surface and share a memory reminding you of some of the best moments in your life. Last month alone, more than 2 billion memories were viewed. Image recognition also means you can use Google Lens to take a photo of a math problem. I wish I had this when I was in school. Lens is used more than 3 billion times each month. We can now be as helpful with images as we are with text.

Machine learning has also improved how computers comprehend and communicate with human voices.
As Javier shared, that’s why we can caption conversations in Google Meet and why Live Caption on Android can automatically caption anything running on your smartphone locally. It generates 250,000 hours of captioning every day. Breakthrough technology from DeepMind called WaveNet increased the quality of computer-generated speech, leading to more natural and fluid interactions. WaveNet allowed us to create and deploy 51 voices across Google Assistant.

Together, the advances in AI I just spoke about across translation, images, and voice improve the search experience for billions of people. They also enabled us to make a huge leap forward in how computers process natural language. In 2017, we first introduced the world to transformers, a novel machine learning approach for better natural language understanding. Transformers became the foundation for many other breakthroughs like AlphaFold and BERT, which we introduced in 2019. BERT considers the full context of a word by looking at the words that come before and after, leading to one of our most significant quality improvements across Google Search and enabling us to respond to queries with more helpful answers, in this case, realizing that the question was about parking on a hill without a curb.

We have revolutionized the ability of computers to understand natural language. Yet, despite the incredible power of computers, there are still moments like this.

[VIDEO PLAYBACK]
– Sorry, I don’t understand.
[END PLAYBACK]

SUNDAR PICHAI: This is because language is endlessly complex. We use it to tell stories, crack jokes, and share ideas. We continuously pull from concepts we have learned over the course of our lives. The richness and flexibility of language make it one of humanity’s greatest tools and one of computer science’s greatest challenges. If someone asks me if the temperature in the room is OK, and I say I’m freezing, they know that I’m very cold, not literally freezing.
Or if someone says, what’s the weather like today, I don’t respond, it’s 70 degrees, sunny, and a 12% chance of rain. I’d probably say, it’s starting to feel like summer. I might eat lunch outside. Sensible responses keep conversations going and allow them to end up in a completely different place from where they started. Even if I began by talking about the weather, we may end up talking about football.

Today, I’m excited to share our latest breakthrough in natural language understanding, LaMDA. It’s a language model for dialogue applications. And it’s open domain, which means it’s designed to converse on any topic. And while it’s still in research and development, we’ve been using it internally to explore novel interactions. For example, say you wanted to learn about one of my favorite planets, Pluto. LaMDA already understands quite a lot about Pluto and millions of other topics. Let’s listen to a conversation the team had with Pluto a few days ago.

[VIDEO PLAYBACK]
– I’m so curious about you.
– I sense your excitement. Ask me anything.
– Tell me what I would see if I visited.
– You would get to see a massive canyon, some frozen icebergs, geysers, and some craters.
– It sounds beautiful.
– I assure you it is worth the trip. However, you need to bring your coat because it gets really cold.
– I’ll keep that in mind. Hey, I was wondering, have you ever had any visitors?
– Yes, I have had some. The most notable was New Horizons, the spacecraft that visited me.
– That must have been exciting.
– The team that created New Horizons was very excited to see me. I also sent them back a picture of myself since I had not been seen up close before.
– That’s so great. What else do you wish people knew about you?
– I wish people knew that I am not just a random ice ball. I am actually a beautiful planet.
– Well, I think you’re beautiful.
– I am glad to hear that. I don’t get the recognition I deserve. Sometimes, people refer to me as just a dwarf planet.
– Don’t let it bother you. You’re amazing.
– Thank you for the kind words. I appreciate it.
– Thanks for talking with me.
– Anytime, I am always open for people to chat. I hope you have a great day.
[END PLAYBACK] [APPLAUSE]

SUNDAR PICHAI: I spent some time with my son conversing with Pluto, and it’s magical. We had a lot of fun learning about space together. Let’s break down what made it feel so natural. First, learned concepts: as you saw, the model talked about the New Horizons spacecraft and the coldness of space. LaMDA synthesized these concepts from its training data. These concepts were not hand-programmed in the model. Because none of the responses were predefined, LaMDA answered with sensible responses, keeping the dialogue open ended. Natural conversations are generative, and they never take the same path twice. And LaMDA is able to carry a conversation no matter what we talk about. You can have another conversation without retraining the model. Let’s see what happened when we talked to a paper airplane.

[VIDEO PLAYBACK]
– What’s it like being thrown through the air?
– It’s quite the experience, actually. The wind blowing against you and the trees flying past are quite the sight. You never know where you’ll land either.
– What’s the worst place you’ve ever landed?
– That would have to be in a puddle. I was just barely still intact, so I was stuck there for a few minutes. It was quite annoying.
– Oh, that sounds awful. Were you able to fly again?
– Thankfully, I was. I had a few minor injuries to my wing, but you can’t really tell unless you know what to look for.
– What’s the secret to a really good paper airplane?
– Well, my good friend, I must first ask you to specify what you mean by “good.” Some may think it’s how far it goes, some may think it’s how straight it goes, and some may think it’s how flat it lands. So what do you think?
– I care the most about distance.
– Then I would have to say a nice, wide wingspan is what you want. Also, the paper needs to be really firm.
– What’s the farthest you’ve ever flown?
– I’ve flown over about 1,500 feet. The wind was perfect that day.
– Wow, that’s really far. That must have been quite a throw.
– It was. I was actually thrown from a hot-air balloon, believe it or not.
– That’s so cool. What did the world look like from out there?
– It’s actually a lot different than it looks from the ground. The world is so large with its vast oceans and mountains. It really puts things into perspective.
– Yeah, it’s important to keep perspective. Hey, I’m wondering, do you have anything you want to share with the people that are watching at home?
– I do. I just want them to know that if the wind is perfect, and you’re made of the right materials, you can go absolutely anywhere.
[END PLAYBACK] [APPLAUSE]

SUNDAR PICHAI: It’s really impressive to see how LaMDA can carry on a conversation about any topic. And it’s amazing how sensible and interesting the conversation is. Yet, it’s still early research, so it doesn’t get everything right. Sometimes, it can give nonsensical responses, imagining Pluto doing flips or playing fetch with its favorite ball, the Moon. Other times, it just doesn’t keep the conversation going.

At Google, we have been researching and developing language models for many years. We are focused on ensuring LaMDA meets our incredibly high standards on fairness, accuracy, safety, and privacy. So from concept all the way to design, we are making sure it’s developed consistent with our AI principles. We believe LaMDA’s natural conversation capabilities have the potential to make information and computing radically more accessible and easier to use. We look forward to incorporating better conversational features into products like Google Assistant, Search, and Workspace. We’re also exploring how to give these capabilities to developers and enterprise customers.

LaMDA is a huge step forward in natural conversation, but it is still trained only on text. When people communicate with each other, they do it across images, text, audio, and video.
So we need to build models that allow people to naturally ask questions across different types of information. These are called multimodal models. Let’s say we want a model to recognize all facets of a road trip. That could mean the words “road trip” written or spoken in any language, images, sounds, and videos, and concepts associated with road trips, such as weather and directions. So you can imagine one day planning a road trip and asking Google to find a route with beautiful mountain views. You can also use this to search for something within a video. For example, when you say, show me the part where the lion roars at sunset, we will get you to that exact moment in a video. [APPLAUSE]

It’s still early days, but later on in the keynote, you’ll hear from Prabhakar about the progress we are making towards more natural and intuitive ways of interacting with Search.

Translation, image recognition, voice recognition, text-to-speech, transformers: all of this work laid the foundation for complex models like LaMDA and multimodal. Our compute infrastructure is how we drive and sustain these advances. And tensor processing units are a big part of that. Today, I’m excited to announce our next generation, the TPUv4. These are powered by the v4 chip, which is more than twice as fast as the v3 chip. TPUs are connected together into supercomputers called pods. A single v4 pod contains 4,096 v4 chips. And each pod has 10x the interconnect bandwidth per chip at scale compared to any other networking technology. This makes it possible for a TPUv4 pod to deliver more than 1 exaFLOP, 10 to the 18th power floating-point operations per second, of computing power. Think about it this way: if 10 million people were on their laptops right now, then all of those laptops put together would almost match the computing power of 1 exaFLOP. This is the fastest system we’ve ever deployed at Google and a historic milestone for us. Previously, to get an exaFLOP, you needed to build a custom supercomputer.
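The arithmetic behind the laptop comparison is easy to check. A rough sketch follows; the per-laptop throughput of roughly 100 gigaFLOPs is an assumed order of magnitude for illustration, not a figure from the keynote.

```python
# Back-of-the-envelope arithmetic behind the "10 million laptops" comparison.
# Assumption (not from the keynote): one laptop sustains ~1e11 FLOP/s (~100 gigaFLOPs).
EXAFLOP = 10**18        # floating-point operations per second in 1 exaFLOP
LAPTOP_FLOPS = 10**11   # assumed per-laptop throughput

laptops_per_exaflop = EXAFLOP // LAPTOP_FLOPS
print(f"{laptops_per_exaflop:,} laptops ~ 1 exaFLOP")  # 10,000,000 laptops ~ 1 exaFLOP

# Per-chip share if a 4,096-chip v4 pod delivers 1 exaFLOP:
per_chip = EXAFLOP / 4096
print(f"~{per_chip / 1e12:.0f} teraFLOPs per chip")  # ~244 teraFLOPs per chip
```

At the assumed 100 gigaFLOPs per laptop, the 10-million-laptop figure works out exactly, and the same pod-level number implies each v4 chip contributes a few hundred teraFLOPs.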
But we already have many of these deployed today, and we’ll soon have dozens of TPUv4 pods in our data centers, many of which will be operating at or near 90% carbon-free energy. And our TPUv4 pods will be available to our Cloud customers later this year. It’s tremendously exciting to see this pace of innovation.

As we look further into the future, there are types of problems that classical computing will not be able to solve in a reasonable time. Quantum computing represents a fundamental shift because it harnesses the properties of quantum mechanics and gives us the best chance of understanding the natural world. Achieving a quantum milestone was a tremendous accomplishment, but we are still at the very beginning of a multi-year journey.

One problem we face today is that our physical qubits are very fragile. Even cosmic rays from outer space can destroy quantum information. To solve more complex problems, our next milestone is to create an error-corrected logical qubit. It’s simply a collection of physical qubits stable enough to hold quantum information for a long period of time. We start by reducing the error rate of our physical qubits, then combining a thousand physical qubits to create a single logical qubit, and then scaling that up to a thousand logical qubits, at which point we will have created an error-corrected quantum computer.

Today, we are focused on enabling scientists and developers to access beyond-classical computational resources. But we hope to one day create an error-corrected quantum computer. And success could mean everything from increasing battery efficiency to creating more sustainable energy to improving drug discovery and so much more. The roadmap begins in our new data center, which we are calling the Quantum AI Campus. Let’s step inside. Michael, are you there?

[MUSIC PLAYING]

MICHAEL PENA: Hey, Sundar. How’s it going? Yeah, I’m here, and I’m excited to learn why I’m here. And I’m guessing that’s why he’s here.
ERIK LUCERO: Hey, Michael.
MICHAEL PENA: Hey.
ERIK LUCERO: I’m Erik, Lead Engineer here. I’d like to welcome you to one of the most powerful quantum computing facilities in the world.
MICHAEL PENA: Oh, thank you. Thank you. What’s this? Can I touch it?
ERIK LUCERO: Yeah. That’s a quantum processor. And inside are these actual physical qubits.
MICHAEL PENA: Oh, hey, little guy.
ERIK LUCERO: Qubits are the fundamental building blocks of quantum computers, but they’re incredibly fragile.
MICHAEL PENA: Oh.
ERIK LUCERO: Even the tiniest particles can disrupt their operation.
MICHAEL PENA: Right.
ERIK LUCERO: Which is why we work so hard to create the optimal environment to keep them stable.
MICHAEL PENA: Right, and then I’m guessing the optimal environment doesn’t include, like, Cheeto dust. So I’m just going to put this–
ERIK LUCERO: No, it doesn’t.
MICHAEL PENA: –right back.
ERIK LUCERO: Thanks. Let me show you where the clean ones go.
MICHAEL PENA: Oh.
ERIK LUCERO: So we built this campus to inspire all of our quantum mechanics and to show the world what the future of computing looks like.
MICHAEL PENA: Good for you, dude. Look at you, dude.
ERIK LUCERO: Thanks.
MICHAEL PENA: That’s a cool lamp.
ERIK LUCERO: It’s not a lamp. This is actually a cryostat. And you’re looking at the inside of a quantum computer.
MICHAEL PENA: Wow, “cryostat.” I love that word: “cryostat.” I’m guessing people want to know what makes a cryostat a cryostat? Erik?
ERIK LUCERO: Well, everything you see here, from the wiring to the aluminum, copper, and gold metal stages, has been chosen to create a cold and quiet environment for our quantum processors to operate.
MICHAEL PENA: Right, right, right. And in English?
ERIK LUCERO: It’s a fridge for our qubits.
MICHAEL PENA: Right, right. And how cold are we talking about?
ERIK LUCERO: We approach near absolute zero, 10 millikelvin, to be precise.
MICHAEL PENA: Wow.
ERIK LUCERO: Which means that parts of our lab are some of the coldest places in the universe.
MICHAEL PENA: Wow, colder than Canada?
ERIK LUCERO: Yeah, colder than Canada.
MICHAEL PENA: (WHISPERING) Colder than Canada.
ERIK LUCERO: Well, it’s not just temperature that’s important. In fact, we want to remove all distractions from our qubits, including unwanted electrical and magnetic signals.
MICHAEL PENA: Yeah, yeah. Who wants that, right?
ERIK LUCERO: Well, let me show you what the final product looks like.
MICHAEL PENA: Is this a cryostat?
ERIK LUCERO: No, that’s not a cryostat.
MICHAEL PENA: What about this? Is this a cryostat?
ERIK LUCERO: That’s not a cryostat.
MICHAEL PENA: No?
ERIK LUCERO: This is a cryostat.
MICHAEL PENA: Nice.
ERIK LUCERO: In fact, this is a fully assembled quantum computer.
MICHAEL PENA: Yeah? So where’s the keyboard?
ERIK LUCERO: Well, there’s no keyboard, but it contains everything you’ve just seen inside and custom control electronics, all of which were designed and built by our team here at Google.
MICHAEL PENA: Wait, wait, wait, wait, wait, wait, is this a Bob Ross? Is he on the team? Tell me he’s on the team.
ERIK LUCERO: He’s not on the team.
MICHAEL PENA: OK.
ERIK LUCERO: But this mural is our homage to Mother Nature. See, quantum is a language of nature, and we’re learning to speak it here. It will enable us to run precise simulations of the natural world, unlocking answers that would otherwise remain unknown.
MICHAEL PENA: OK, so let me see if I got this right. OK, so these qubits are really smart, right? But they’re really picky about their work environments, so you’ve got to put them in a lamp, right? But even then, they’re like, no, I don’t want anybody eating any Cheetos around me. I mean, like, I’m sorry, OK? I didn’t know, right? So then you’ve got to wrap them into, like, this Bob Ross blanket of love, right? And then you keep them there until they can tell us how to think like the Earth, am I right?
ERIK LUCERO: Yeah, yeah, you’re pretty close.
MICHAEL PENA: OK, you know what this is? This is the power button. I want to start it.
ERIK LUCERO: Well, we’re not quite there yet.
I’m glad you’re on board.
MICHAEL PENA: OK.
ERIK LUCERO: To date, we’ve reached the first milestone, beyond-classical computational capabilities.
MICHAEL PENA: This is us.
ERIK LUCERO: Yeah, we’re here. Everything you’ve seen here today is what we’re using to build to our next milestone, an error-corrected logical qubit.
MICHAEL PENA: Right.
ERIK LUCERO: And from there, we’ll tile thousands of those together to reach our ultimate goal–
BOTH: An error-corrected quantum computer.
MICHAEL PENA: Right, that’s my goal too.
ERIK LUCERO: Well, you’re in luck. We’re building a team to assemble all the right ingredients right here in the Quantum AI Campus that you just helped us unveil. So thank you very much.
MICHAEL PENA: No, you know what? Thank you, and thank you to everyone that’s joining us. I want to leave you with a couple of my favorite words that I just learned, one of them being qubits– qubits– cryostat, right? And melon chillis. Sundar, it was a pleasure doing science with you.

[MUSIC PLAYING] [APPLAUSE]

SUNDAR PICHAI: It was a pleasure doing science with you too, Michael. We recognize that building an error-corrected quantum computer will be incredibly challenging. But solving hard problems and advancing the state of the art is how we build the most helpful products.

At Google, we know that our products can only be as helpful as they are safe. And advances in computer science and AI are how we continue to make them better. We keep more users safe by blocking malware, phishing attempts, spam messages, and potential cyber attacks than anyone else in the world. And our focus on data minimization pushes us to do more with less data. Two years ago at I/O, I announced auto-delete, which encourages users to have their activity data automatically and continuously deleted. We have since made auto-delete the default for all new Google accounts. Now after 18 months, we automatically delete your data unless you tell us to do it sooner. And this is now active for over 2 billion accounts.
All our products are guided by three important principles. With one of the world’s most advanced security infrastructures, our products are secure by default. We strictly uphold responsible data practices so every product we build is private by design. And we create easy-to-use privacy and security settings so you are in control. I’d like to invite Jen on stage to share some examples of how we apply these principles and make every day safer with Google.

[MUSIC PLAYING] [APPLAUSE]

JEN FITZPATRICK: Thanks, Sundar. We believe that protecting your privacy starts with the world’s most advanced security. Seems like every day, we hear about another cyber attack that puts emails and personal data at risk. To keep our users safe, everything we build is secure by default. Each of our products is protected with advanced AI-driven technologies. In fact, every day, Gmail automatically blocks more than 100 million phishing attempts. Google Photos encrypts 4 billion photos. And Google Play Protect runs security scans on 100 billion installed apps around the world.

But the single most common security vulnerability today is still bad passwords. Consumer research has shown that 2/3 of people admit to using the same password across accounts, which multiplies their risk. Ultimately, we’re on a mission to create a password-free future. That’s why no one is doing more than we are to advance phone-based authentication. And in the meantime, we’re focused on helping everyone use strong, unique passwords for every account. Our Password Manager creates, remembers, saves, and autofills passwords for you. It’s already used by over half a billion people. But we want to free everyone from password pain. That’s why today, we’re announcing four new upgrades that make our Password Manager more helpful.

First, we’re making it easier than ever to get started, with a simple tool that imports passwords saved in other password managers.
Next, we’ll have deeper integration across both Chrome and Android so your secure passwords go with you from sites to apps. Third, automatic password alerts will let you know if we detect any of your saved passwords have become compromised in a third-party breach. And lastly, one I’m especially excited about, a quick-fix feature in Chrome, where the assistant will help you navigate directly to your compromised accounts and change your passwords in seconds. Our continued investment in our Password Manager makes it just one of the many ways Google is the safer way to sign in to anything online.

Another core principle is ensuring that each of our products is private by design. This means continuously making thoughtful decisions about when, how, and why data is used in our products, including data that’s used for ads. Our principles drive us to draw a strict line between what’s in and what’s out. For example, we never sell your personal information to anyone. We never use the content you store in apps like Gmail, Photos, and Drive for ads purposes. And we never use sensitive information to personalize ads, like health, race, religion, or sexual orientation. It’s simply off limits.

And while we’ve always believed that ads play an important role in supporting a free and open web for everyone, we’re equally committed to making the web more private and secure. Through the open-source Privacy Sandbox initiative, we’re collaborating with publishers, content creators, advertisers, and industry organizations like the W3C to develop new privacy-preserving solutions that will shape the future of online advertising.

Making our products private by design also drives us to build groundbreaking computing technologies that enable personalized experiences while protecting your private information.
One technology we’ve been pioneering is differential privacy, which allows us to use large, aggregated data sets while guaranteeing that your individual data can never be identified as yours. No one has scaled the use of differential privacy more than we have. To help developers everywhere use differential privacy, we created the world’s largest open-source library of differentially private algorithms, which has advanced so many important fields, from cancer research to census analytics. Another important technology is federated learning, invented here at Google in 2016. It enables machine learning models to be trained centrally without any raw data leaving your device. And since building it into Gboard and Messages, we’ve saved people countless hours of typing with helpful suggestions. This is just one of the ways we build for privacy everywhere that computing happens, both in the cloud and on device.

And speaking of devices, to make billions of Android phones private by design, we developed Android’s Private Compute Core. It’s open source and designed to privately process and protect sensitive data. It powers features like Live Caption without sharing audio data with Google or any other apps. No one else offers this kind of technically enforced, verifiable privacy. And the Android team will be coming up in a bit to share more. These are just a few of the ways we’re building the most advanced privacy-preserving technologies into our products to keep your data private, safe, and secure.

We know that a big part of feeling safe online is having control over your data. Privacy is personal. So we build powerful privacy and security settings that let people choose what’s right for them. You can find them in your Google account. We saw over 3 billion visits last year. We also know that some controls are most helpful when they’re built right into the app, like when we added an Incognito mode in Search, Maps, and YouTube. Today, we’re announcing a few new controls that you’ll see in our most popular apps.
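For developers following along, the differential-privacy idea Jen described a moment ago, releasing an aggregate statistic with carefully calibrated noise, is often illustrated with the classic Laplace mechanism. Here is a minimal sketch in Python; this is an illustration of the general technique, not code from Google’s open-source library, and the function name, parameters, and inverse-CDF sampling are all assumptions made for the example:

```python
import math
import random

def dp_count(values, predicate, epsilon, sensitivity=1.0):
    """Differentially private count via the Laplace mechanism.

    Returns the true count plus Laplace(0, sensitivity / epsilon) noise.
    A smaller epsilon means stronger privacy but a noisier answer, so no
    individual record can be confidently inferred from the output.
    """
    true_count = sum(1 for v in values if predicate(v))
    # Sample Laplace noise by inverse CDF from a uniform in (-0.5, 0.5).
    u = random.random() - 0.5
    noise = -(sensitivity / epsilon) * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))
    return true_count + noise
```

With a large epsilon the noise is negligible; with epsilon near zero the count becomes almost pure noise, which is the privacy/utility trade-off behind the guarantee mentioned above.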
For example, people tell us they sometimes wish they could easily delete the last thing they searched. And we heard you. So now, just tap your profile picture to access your menu and immediately delete recent search history from your account. We’re also working to make privacy controls more accessible in Maps. Now when you see places you visited in your timeline, we’ll remind you that it’s because you turned on Location History, which you can easily turn off right there in your timeline. And lastly, we’re rolling out Locked Folder in Photos, first on Google Pixel and coming to more Android devices throughout the year. Photos you add to this passcode-protected space are saved separately, so they won’t show up as you scroll through Google Photos or any other apps on your device. This feature would have been helpful for me last year, when we surprised our kids with a new puppy and we needed to hide the photos before we brought Splash home.

As Sundar said, there’s nothing more important than keeping you safe online. Building products that are secure by default, private by design, and that give you control is how we ensure that, every day, you’re safer with Google. Just as we’ve engineered advanced computing solutions to protect your privacy, we continue to think about future advances in AI and their potential for making our products even more helpful. Not surprisingly, so much of what we do starts with Search. And next, you’ll hear more about this from Prabhakar.

[MUSIC PLAYING] [APPLAUSE]

PRABHAKAR RAGHAVAN: Thanks, Jen. Today, we’re excited to share our advances in AI that enable us to understand the world more deeply than ever before, opening up helpful experiences for you across Google Search, Maps, Shopping, and Photos. Let’s start with Search. 20 years ago, Google was just 10 blue links, connecting people to the information they needed from the millions of web pages out there.
Since then, we’ve continued to innovate to understand new forms of information like images, videos, places, and more. All of this is in pursuit of our mission: to make information accessible and useful. As Sundar mentioned, early research with LaMDA and multimodality is pushing the boundaries of natural language understanding. And today, I’m excited to share how we’ll be bringing some of these research advances to Google Search with a Multitask Unified Model, or MUM, as we like to call it. Like BERT, it’s built on the transformer architecture, but it turns up the dial. You see, MUM is a thousand times more powerful than BERT. But what makes this technology groundbreaking is its ability to multitask in order to unlock information in new ways. Here are a few tasks it can handle at the same time. It can acquire deep knowledge of the world. It can understand language and generate it too. It can train across 75 languages at once, unlike most AI models, which train on one language at a time. And then what makes MUM even more amazing is that it’s multimodal, which means it can simultaneously understand different forms of information like text, images, and videos.

We’ve already started some internal pilots to see the types of queries it might be able to solve and the billions of topics it might help you explore. Let me show you what I mean. Let’s say you’re an avid hiker planning your next adventure. You might ask, “I’ve hiked Mount Adams and now want to hike Mount Fuji next fall. What should I do differently to prepare?” This is a question you could casually ask a friend, but search engines today can’t answer it directly because it’s so conversational and nuanced. MUM is changing the game. With its language-understanding capabilities, it would know you’re looking to compare two mountains and also understand that “prepare” could include things like fitness training for the terrain and hiking gear for fall weather. Then it’s able to surface useful insights based on its deep knowledge of the world.
Here, it’s highlighting that Mount Fuji is roughly the same elevation as Mount Adams, but fall is the rainy season on Mount Fuji, so you might need a waterproof jacket. It would also give you pointers to go deeper on topics, like how to prepare with the right gear, with articles, videos, and images from across the web. Now, a huge limitation of accessing information is the language it’s written in. If there are insights about Mount Fuji in Japanese, you might not know they exist if you don’t search in Japanese. But MUM can transfer knowledge across languages to give you a richer, more comprehensive answer. But it doesn’t stop there. Because MUM is multimodal, it can understand different types of information simultaneously. So now imagine taking a photo of your hiking boots and asking, “Can I use these to hike Mount Fuji?” MUM would be able to understand the content of the image and the intent behind your query, let you know that your hiking boots would work just fine, and then point you to a list of recommended gear in a Mount Fuji blog.

[APPLAUSE]

While we’re in the early days of exploring this new technology, we’re excited about its potential to solve more complex questions, no matter how you ask them. But we’re already finding other ways to apply AI to bring you new information. Take Google Lens, which lets you search what you see, from your camera or your photos, right from your search bar. Around the world, people use Lens to translate over a billion words every day. This translation feature has been especially useful for students, many of whom might be learning in a language they’re less comfortable with. So now, thanks to our Lens team in Zurich, we’re rolling out a new capability that combines visual translation with educational content from the web to help people learn in more than 100 languages. For instance, you can easily snap a photo of a science problem, and Lens will provide learning resources in your preferred language.
Let’s take a look at how a student in Indonesia is using this new feature.

[VIDEO PLAYBACK] [BIRDS CHIRPING] [GRASS RUSTLING] [MUSIC PLAYING] [POUNDING] [END PLAYBACK] [APPLAUSE]

PRABHAKAR RAGHAVAN: It’s always inspiring to see stories like Mamay’s. And it brings to life the power of visual information, especially for learning. That’s why we brought augmented reality to Search two years ago at I/O, to help you explore concepts visually, up close and in your space. You might remember the shark that joined us on stage. Last year, when many of us first started sheltering in place, families around the world found joy in AR. From tigers to cars, people interacted with this feature more than 200 million times. Now, we’re bringing some of the world’s best athletes to AR so you can see how they perform some of their most impressive feats right in front of you. Beginning today, you can see how Megan Rapinoe juggles a soccer ball or how Naomi Osaka pulls off a 125-mile-per-hour serve. You can even see Simone Biles landing one of the most difficult combinations ever completed. We recently caught up with Simone to get her reaction to the AR version of herself. Let’s take a look.

[VIDEO PLAYBACK]

– So first, you’re going to go to Google Search.
– Google Search.
– And search yourself.
– OK, Simone Biles in 3D.
– And then you’re going to view in your space.
– You’ve got to scan the floor, so let’s scan the area– ooh.
– Nice.
– And she’s here.

[LAUGHTER]

– That’s you, so.
– Oh my gosh. She goes for the triple-double. This is very accurate. I see all the details that I need to get back in the gym and work on.
– [LAUGHS] Nails it.

[LAUGHTER]

– So that one, you’ve got– Simone Biles’ double-double dismount. It pops up anywhere.
– Wow, look at that.
– Wait, let’s turn her so we can see it from the front.
– It sounds just like you’re in the arena.
– Go down to 5%, little one. Aww.
– There she is.
– Itsy-bitsy Biles. That’s the smallest triple-double I’ve ever seen. [LAUGHS] We need to start competing in AR.
It’s much simpler, saves the nerves.

[LAUGHTER] [MUSIC PLAYING] [END PLAYBACK] [APPLAUSE]

PRABHAKAR RAGHAVAN: No matter how many times I see that, I still think it’s pretty incredible. Innovations like MUM, Lens, and AR are part of our quest to make information more helpful. But information is only helpful if it’s trustworthy and reliable. The world is constantly changing. Getting access to reliable information is particularly critical during times like the pandemic or breaking news. It’s in these moments, and so many others, that people turn to Google. At our foundation, we design our ranking systems to prioritize high-quality content. And for critical topics like COVID, we elevate information from expert sources. People come to Google to evaluate claims they’ve heard, whether that’s in conversations with friends or something they read about online. Over the past year, searches for “is it true that” were even higher than “how to bake bread.” And that’s saying something, given last year’s sourdough craze.

We’re building features that make it easier for you to evaluate the credibility of information right in Google Search. One of the ways we’re doing this is with “About this result,” a feature we launched earlier this year that makes it easier to check the source. Just tap the three dots next to the search result to see the details about the website, including its description, when it was first indexed, and whether your connection to the site is secure. This context is especially important if it’s a site you haven’t heard of and want to learn more about. This month, we’ll start rolling out “About this result” to all English results worldwide, with more languages to come. And later this year, we’re going to add even more detail, like how the site describes itself, what other sources are saying about it, and related articles to check out. This is part of our ongoing commitment to provide you with the highest-quality results and help you evaluate information online.
When we understand information, we can make it more helpful to you, whether that’s information on the web, from your camera, or from the billions of places in the physical world. And to hear more about how AI is powering our most helpful map ever, here’s Liz.

[MUSIC PLAYING] [APPLAUSE]

LIZ REID: Thanks, Prabhakar. We’re constantly working on new features to make Maps more helpful for the more than 1 billion of you who use it every month. Advances in AI are helping us reimagine what a map can be. This year alone, we’re on track to release more than a hundred AI-driven improvements to give people richer and more contextual information about the world around them. Let me share just a few examples. We’ve seen how helpful AR can be to see how athletes perform their most impressive feats. Three years ago, with Live View in Google Maps, we were the first to use AR at scale to help you see where to go, with signs and arrows overlaid on the real world. Today, we’re still the only company with AR navigation maps in more than a hundred countries, from big cities to rural towns. So far, though, Live View has been focused on navigation, to help you easily get from point A to point B. But now you can also use it to explore the world around you. You’ll be able to access Live View right from the map and instantly see details about the shops and the restaurants around you, including how busy they are, recent reviews, and photos of those popular dishes. This is possible because we match what your camera sees with millions of businesses sharing rich information on Google Maps. In addition, there are a host of new features coming to Live View later this year. First, we’re adding prominent virtual street signs to help you navigate those complex intersections. Second, we’ll point you toward key landmarks and places that are important for you, like the direction of your hotel.
Third, we’re bringing it indoors to help you get around some of the hardest-to-navigate buildings, like airports, transit stations, and malls. Indoor Live View will start rolling out in top train stations and airports in Zurich this week and will come to Tokyo next month.

[APPLAUSE]

But AR isn’t the only way we’re bringing a whole new level of richness to Google Maps. We’ve heard from many of you that you’d like to have more granular information about your surroundings. That’s why we’re bringing you the most detailed street maps we’ve ever made. Take this image of Columbus Circle, one of the most complicated intersections in Manhattan. You can now see where the sidewalks, the crosswalks, and the pedestrian islands are, something that might be incredibly helpful if you’re taking young children out on a walk, or absolutely essential if you’re using a wheelchair. Thanks to our application of advanced AI technology to robust Street View and aerial imagery, we’re on track to launch detailed street maps in 50 new cities by the end of the year.

Having access to rich information is useful, but it can also become overwhelming. So we’re making the map more dynamic and more tailored, highlighting the most relevant information exactly when you need it. If it’s 8:00 AM on a weekday, we’ll display the coffee shops and bakeries more prominently in the map, while at 5:00 PM, we’ll highlight the dinner restaurants that match your tastes. You can see which places you’ve been to and get more suggestions for similar spots with just a single tap. And if you’re in a new city, we’ll make it easier to find those local landmarks and tourist attractions right on the map. You’ll start seeing this more tailored map in the coming weeks. And as you’re planning your day, people have found it really useful, especially during this pandemic, to see how busy a place is before heading out. Now we’re expanding this capability from specific places, like restaurants and shops, to neighborhoods, with a feature called Area Busyness.
Say you’re in Rome and want to head over to the Spanish Steps and its nearby shops. With Area Busyness, you’ll be able to understand at a glance if it’s the right time for you to go, based on how busy that part of the city is in real time. And as you heard before, we use our industry-leading differential privacy techniques to protect anonymity in this feature. Area Busyness will roll out globally in the coming months. So that was a lot. To recap, we’re expanding our Live View capabilities, making maps more detailed and tailored, and showing you how busy certain areas are to help you make sense of the world all around you. All of this is possible because of our deep commitment, for over 16 years, to building the world’s most helpful map for people everywhere. That means mapping roads across more than 60 million kilometers, listing more than a billion buildings, creating a community of over 150 million Local Guides, and finally, applying the most advanced AI technology, all so you can have the most accurate, comprehensive, and detailed map wherever you live in the world, on any device, Android or iOS.

Access to rich information is crucial, whether you’re exploring a new neighborhood or trying to get things done. And over the past year, that’s increasingly meant turning to Google to help you shop.

[MUSIC PLAYING]

To tell you more about how we’re making it easier to shop online, from inspiration to action, here’s Bill.

[APPLAUSE]

BILL READY: Thanks, Liz. You’ve already heard how we’re innovating to understand information and make it more helpful for you. We’re doing this in a big way for shopping. More than a billion times a day, people are shopping across Google. And we’re constantly working to make that experience better, whether you’re browsing for inspiration or ready to buy. Now, let’s talk about all the ways we’re innovating in shopping. Many of you are familiar with our Knowledge Graph, which revolutionized structured information about people, places, and things.
We’re now introducing the Shopping Graph, our most comprehensive data set for billions of products and the merchants that sell them. Building on the Knowledge Graph, the Shopping Graph brings together information from websites, prices, reviews, videos, and, most importantly, the product data we receive from brands and retailers directly. Because the Shopping Graph knows about so many products, we can connect users with over 24 billion listings to buy those items from millions of merchants across the web, helping you find more of what you’re looking for from a broader range of sellers and giving you just as much or more choice in the digital world as you have in the physical world. The best part is that the Shopping Graph spans across Google, making it easier to go from inspiration to purchase no matter where you are. Let’s see how this comes to life across shopping moments, from Lens to Search, Photos, YouTube, and Chrome.

As we all know, shopping inspiration often strikes when we see something we like in the world around us. And for these moments, Google Lens is awesome. It turns the world into your own personal showroom. For example, I was eating outside at a restaurant recently and really liked their patio furniture. So I opened my Google app. And right from the Search bar, I could use Lens to find the exact set I was looking for, and similar items too. I showed the patio set to my daughter, but she didn’t love it. So it was back to the drawing board. We did a bit more browsing together, starting with the Google Images tab on Search, where we see hundreds of millions of shopping searches each month. Thanks to the Shopping Graph, we could explore options from across the web to find what we liked, see that it was in stock, and check out with a retailer. I have this habit, though: I’m constantly taking screenshots of products I like, but they usually end up buried in my photos. Here’s one I’ve saved for a pair of sneakers I saw.
But now, to solve for this, when you view any screenshot in Google Photos, there will be a suggestion to search the photo with Lens. You’ll see organic search results that can help you find that pair of shoes or browse similar styles. Then, once you have ideas, you probably want to do some research and might end up on YouTube. Earlier this year, we shared that we’re building a new experience to make it easier to shop products you learn about from your favorite YouTube creators. That experience is in pilot now, so stay tuned for updates. And since we’re talking about researching: I don’t know about you, but I often jump around from site to site when I’m comparing products. And if I get distracted or close any tabs, it can be hard to keep track of items I found. Soon, on Chrome, when you open a new tab, you’ll be able to see your open carts from the past couple of weeks. For example, I’m reminded that I’ve still got a shirt in my Tentree cart and a few things in my Lowe’s cart. It will also find you promotions and discounts for your open carts if you choose to opt in. Here, I can see Electronic Express is offering 10% off. Your personal information and what’s in your carts are never shared with anyone externally without your permission.

Now, once you’re done researching and are ready to buy, we also want to help you get the best value. Coming soon, we’ll use your favorite loyalty programs for merchants like Sephora and Target to show you the best purchase options. In this example, since you’re a Sephora Beauty Insider, you already qualify for a promotion. And if you’re not ready to buy, you can opt in for price-drop notifications. Taking a step back, these experiences are only possible because of our vibrant community of retailers on Google. We’re proud to take an open-ecosystem approach that helps any merchant, both big and small, get discovered. And that gives you more shopping choices. This has been more important than ever in what’s been a tough time for businesses.
That’s why, this past year, we accelerated our plans and made it free for merchants to sell their products across Google. Since then, we’ve seen an 80% increase in merchants on Google, with the vast majority being small- and medium-sized businesses. And today, we’re making it easier than ever for merchants of all sizes to get on Google. Together with Shopify, we’re excited to launch a seamless integration so that the more than 1.7 million merchants on Shopify can reach more consumers in a matter of minutes. With just a few clicks, these retailers can sign up to appear across Google’s 1 billion shopping journeys each day, from Search to Maps, Images to Lens, and YouTube. We believe you deserve the most choice available, and we’ll continue to innovate on shopping along every step of the way.

So far, you’ve heard many of the ways we’re using AI to make information more useful for you. AI can also help us revisit our favorite memories and moments, especially this past year, when many of us have been feeling nostalgic. To talk about new innovations in Google Photos, here’s Shimrit.

[APPLAUSE]

SHIMRIT BEN-YAIR: Thanks, Bill. It’s great to be back on campus talking with you all about Google Photos. We capture photos and videos so we can look back and remember. They help us feel connected. And today, there are more than 4 trillion photos and videos stored in Google Photos. But having so many photos of loved ones, screenshots, and selfies all stored together makes it hard to rediscover the important moments. In fact, the vast majority of photos in Google Photos are never viewed. And we’ve heard from you how powerful it is to rediscover a memory that helps you tell your story and reconnect. So today, I want to show you new features that use AI to resurface meaningful moments and bring your memories to life, all while giving you more control, so you can choose what you want to relive. Soon, we’re launching a new way to look back that we’re calling Little Patterns.
Little Patterns show the magic in everyday moments by identifying not-so-obvious moments and resurfacing them to you. I’ll show you how this works. This feature uses machine learning to translate photos into a series of numbers and then compares how visually or conceptually similar these images are. When we find a set of three or more photos with similarities, such as shape or color, we’ll surface them as a pattern. When we started testing Little Patterns, we saw some great stories come to life, like how one of our engineers traveled the world with their favorite orange backpack, or how our product manager, Christie, had a habit of capturing objects of similar shape and color. Or, for me, I received a pattern of my family hanging out on the couch over the years. We have so many fun memories there, but I didn’t realize how many pics I’d snapped until I saw this. These photos on their own wouldn’t necessarily be meaningful. But when they’re pieced together, they tell a story that’s uniquely yours. As always, these memories are privately presented to you and are only visible in your Google Photos account.

In addition to using machine learning to better curate your memories, we also want to bring these moments to life with cutting-edge effects. Last year, we launched Cinematic Photos to help you relive your memories in a more vivid way. I want to show you how we’re building on this feature with computational photography to make still photos even more immersive. When we take a photo, most of us actually take two to three photos of the same shot just to make sure we get the right one. Any parent who tries to get all their kids smiling and looking at the camera at the same time will know what I mean. Cinematic Moments will take these near-duplicate images and use neural networks to synthesize the movement between image A and image B. We interpolate the photos and fill in the gaps by creating new frames. The end result is a vivid moving picture.
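For readers curious what “filling in the gaps” between image A and image B means in code, here is a deliberately simplified sketch. The real feature uses neural networks to model motion; this stand-in just linearly cross-fades pixel values between two near-duplicate frames, and the function name and flat-list frame representation are invented for the example:

```python
def interpolate_frames(frame_a, frame_b, steps=3):
    """Create `steps` in-between frames by linear cross-fading.

    Frames are equal-length lists of (r, g, b) tuples. A learned
    interpolator would estimate per-pixel motion instead of blending,
    but the input/output shape of the problem is the same.
    """
    in_between = []
    for s in range(1, steps + 1):
        t = s / (steps + 1)  # blend weight moving from frame A toward frame B
        in_between.append([
            tuple((1 - t) * a + t * b for a, b in zip(pa, pb))
            for pa, pb in zip(frame_a, frame_b)
        ])
    return in_between
```

Playing frame A, the returned frames, then frame B in sequence gives the moving-picture effect; swapping the naive blend for a motion-aware model is what makes the production version look natural.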
And the cool thing about this effect is that it can work on any pair of images, whether they were captured on Android or iOS, or scanned from a photo album. Creating this effect from scratch would take professional animators hours. But by applying machine learning, we can automatically bring this experience right to your gallery. And we know that looking back is never a one-size-fits-all solution. It’s more meaningful when you can look back on content that’s personalized to you. So later this year, you’ll see new types of memories that are relevant to the moments you celebrate, whether that’s Diwali, Lunar New Year, or something else. For me, my family celebrates Hanukkah, so I can look back on a collection of Hanukkah moments right in my photo grid. In addition to providing personalized content to look back on, we also want to give you more control. We heard from you that controls can be helpful for anyone who has been through a tough life event, breakup, or loss. Specifically, we heard from the transgender community that resurfacing certain photos can be painful. So we are working directly with our partners at GLAAD and listening to feedback to understand how we can make reminiscing more inclusive. These insights inspired us to give you the control to hide photos of certain people or time periods from our Memories feature. And soon, you’ll be able to remove a single photo from a memory, rename the memory, or remove it entirely. We’re making all these controls easy to find so you can make changes in just a few taps. And so, this summer, you’ll be able to uncover Little Patterns, rediscover meaningful memories, or immerse yourself in a Cinematic Moment. And you can do it all on your own terms, with new controls. Looking back is an important part of the human experience, which is why Google Photos is making it easier than ever to relive your memories. Thank you.

[APPLAUSE]

BILL READY: Thanks, Shimrit. I’m really excited by the progress we’re making with AI.
As you’ve heard today, we’re using AI to advance our understanding of information and build more helpful experiences across Google Search, Maps, Shopping, and Photos. Next, you’re going to hear about innovations in our computing platforms. We’re excited to show you all of the improvements in Android 12, the newest release of our open platform, starting with a fundamental change to how you experience it. I’ll hand it off to Matias to give you a look.

[MUSIC PLAYING] [APPLAUSE] [BIRDS CHIRPING] [WATER POURING] [BIRDS CHIRPING] [APPLAUSE]

MATIAS DUARTE: From the beginning, design has made computers more helpful by making them easier to use, more personal. In 2014, we introduced Material Design to address the explosion of mobile phones. It set a new standard for Android apps. And for Google, it rationalized our products, simply and beautifully. But today, the challenge is even bigger. Now we’re at a moment where computers are showing up in places that we never imagined. It’s also a moment where people are yearning to express their individuality and demanding control from their technology. We believe this is a challenge for the whole industry: to acknowledge that emotion is essential and that beauty is personal. To face this challenge, we had to question everything. Instead of form following function, what if form followed feeling? Instead of Google Blue, we imagined Material You, a new design that includes you as a co-creator, letting you transform the look and feel of all your apps by generating personal material palettes that mix color science with a designer’s eye. And by engineering UI elements to respond in real time, we can delight in every style– a new design that can flex to every screen and fit every device. Your apps adapt comfortably to every place you go– a new design that never compromises on accessibility, granting transformative control of contrast, size, and even line width. Material can satisfy every need.
No longer defaulting to one-size-fits-all, Material You is a radical new way to think about design. We invested years into advancing UI engineering, making it possible for any app, not just Google’s, to blend in their users’ styles and stay unique and beautiful. As designers, sharing control of every pixel is terrifying. But that leap of faith is revolutionizing design across Google. For the first time, we can consider the details of devices together with the pixels on their screens. We unify everything that Google makes through common proportions, textures, and shapes. We give you tasteful choices, blending into your homes and complementing your wardrobes. More than choice, we uniquely tailor your Google products for the perfect fit– beyond light and dark, a mode for every mood. These selections can travel with your account across every app and every device. Material You comes first to Google Pixel this fall, including all of your favorite Google apps. And over the following year, we will continue our vision, bringing it to the web, Chrome OS, wearables, smart displays, and all of Google’s products. Material You is a way to design differently. We can’t wait to see what brings you joy and what you find beautiful. Next are the details of Android 12. Beyond the redesigned widgets and your material palette, Sameer will show you our most personal OS ever.

[APPLAUSE] [MUSIC PLAYING]

SAMEER SAMAT: Hi, everyone. It’s great to be back live at Google I/O. What you just saw was a peek into the biggest design change to Android in years, and we’re going to go through all of it. But first, I wanted to share some exciting news with you. Just this week, we crossed an amazing milestone: there are now 3 billion active Android devices around the world. This would never have been possible without the entire Android ecosystem. But there’s so much more to do, and Android 12 is one of our most ambitious releases ever. There are three big themes that we’re focused on. First, smartphones are deeply personal.
And we think your phone should adapt to you, not the other way around. Second, to keep your personal information safe, the OS should be secure by default and private by design. And third, we want all of your devices– TVs, cars, watches, and more– to work better together, with your phone at the center. I’m excited to show you more. So let’s start by taking a look at our new UI for Android. We’ve overhauled everything from the lock screen to system settings, revamping the way we use color, shapes, light, and motion, inspired by Material You. Let me show you what we’ve done with color. We’ve got something new planned for Google Pixel, using what we call color extraction. Think of it as one part art and one part science. Watch what happens when the wallpaper changes, like if I use this picture of my kids actually getting along for once. I set it as my background, and voila, the system creates a custom palette based on the colors in my photo.

[APPLAUSE]

We use a clustering algorithm with material color targets to determine which colors are dominant, which ones are complementary, and which ones just look great together. It then applies hues across different parts of the interface. In other words, it’s going to be beautiful. The result is a one-of-a-kind design just for you. And you’ll see it first on Google Pixel in the fall. But this new UI is more than a visual redesign. Many interactions have been simplified and system spaces purposefully reimagined. Starting from the lock screen, the design is more playful, with dynamic lighting. Pick up your phone, and it lights up from the bottom of your screen. Press the Power button to wake up the phone instead, and the light ripples out from your touch. Even the clock is in tune with you. When you don’t have any notifications, it appears larger on the lock screen so you know you’re all caught up.
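To make the clustering idea behind color extraction concrete for developers, here is a toy sketch. Android’s actual implementation works against Material color targets in a perceptual color space; this example just runs a tiny k-means over raw RGB values, and the function name and details are invented for illustration:

```python
def dominant_colors(pixels, k=3, iters=10):
    """Tiny k-means over RGB tuples, returning centers by cluster size.

    Seeds centers with the first k distinct colors, then alternates
    assigning each pixel to its nearest center (squared distance) and
    re-centering each cluster at its mean color.
    """
    centers = []
    for p in pixels:  # deterministic seeding with distinct colors
        if p not in centers:
            centers.append(p)
        if len(centers) == k:
            break
    clusters = [[] for _ in centers]
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for p in pixels:
            i = min(range(len(centers)),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[i].append(p)
        # Recompute each center as the per-channel mean of its cluster.
        centers = [tuple(sum(ch) / len(cl) for ch in zip(*cl)) if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    order = sorted(range(len(centers)), key=lambda i: -len(clusters[i]))
    return [centers[i] for i in order]
```

Feeding in a downscaled wallpaper’s pixels would yield a small palette ordered from most to least dominant, which is roughly the shape of the problem the system solves before mapping hues onto the interface.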
The notification shade is more intuitive, with a crisp at-a-glance view of your app notifications, whatever you’re currently listening to or watching, and Quick Settings that give you control over the OS with just a swipe and a tap. The Quick Settings space doesn’t just look and feel different. It’s been redesigned to include Google Pay and Home controls while still allowing for customization. So you can have everything you need right at your fingertips. And now you can invoke the Google Assistant by long-pressing the Power button, making it easier than ever to harness the power of Google.

Our engineers have done some pretty amazing work on performance in Android 12 to make all the motion and animation in the UI super smooth. We greatly reduced lock contention in key system services such as Activity, Window, and Package Manager. And the team also reduced the CPU time of Android System Server by a whopping 22%. Basically, everything’s faster. There’s a lot to explore in this new design, and I can’t wait for you all to try it out.

Now, the design isn’t the only part of the device that’s personal. Our phones hold so much important information. And it’s critical to keep it private and secure. To tell you more about that, let me hand it off to Suzanne.

[MUSIC PLAYING] [APPLAUSE]

SUZANNE FREY: Hi, everyone. From our first device to 3 billion today, we designed security and privacy for everyone, no matter how expensive their device is. We’ve built game-changing capabilities for everyone, from file-based encryption, to TLS by default, and secure DNS to prevent traffic tampering and data breaches. And since 2017, Google Pixel and Samsung Galaxy have continually received the highest security rating in Gartner’s annual mobile OS comparison report. Simply put, the most secure devices run on Android. And with Android 12, we’re going even further to keep your information safe. Let’s start with a common experience – granting an app access to sensitive information.
Turn-by-turn directions based on your precise location are really helpful. But we recognize that this access can also raise privacy questions. To give people more transparency and control, we’ve created a new privacy dashboard that shows you what type of data was accessed and when. This dashboard reports on all the apps on your phone, including all of your Google apps. And we’ve made it really easy to revoke an app’s permission directly from the dashboard. We’ve also added an indicator to make it clear when an app is using your camera or microphone. But let’s take that a step further. If you don’t want any apps to access the microphone or camera, even if you’ve granted them permission in the past, we’ve added two new toggles in Quick Settings so you can completely disable those sensors for every app.

So those are a few examples of privacy you can immediately see. We’re excited to share more on under-the-hood privacy, privacy that’s baked into the heart of Android. As machine vision, speech recognition, and AI become increasingly beneficial, there are even more opportunities for the OS to be helpful. And to make it easier for everyone to embrace these new innovations, we’re combining cutting-edge features with powerful privacy. You heard Jen talk about the ways we’re building private-by-design technology. Thanks to advances here with Android’s Private Compute Core, we’re able to introduce new features using our unique AI capabilities while still keeping your personal information safe, private, and local to your phone.

Android’s Private Compute Core enables things like Now Playing, which tells you what song is playing in the background, and Smart Reply, which suggests responses to your chats based on your personal reply patterns. And there’s more to come later this year. And by the way, all of the sensitive audio and language processing happens exclusively on your device. It’s isolated from the network to preserve your privacy. And like the rest of Android, Private Compute Core is open source.
It’s fully inspectable and verifiable by the security community. Android is the first commercial mobile operating system to enable technically enforced privacy like this. And this is just one of the ways we’ll continue to pioneer innovation while also maintaining the highest standards of privacy, security, and safety. And there’s a whole lot more for privacy and security in Android 12, which you can hear about in our What’s New in Android Privacy session later today.

[MUSIC PLAYING]

Now I’ll hand it back to Sameer to talk about how we’re building for a multi-device world.

[APPLAUSE]

SAMEER SAMAT: Thanks, Suzanne. Phones have become the center of our digital lives. And they interact with a ton of other devices we use on a day-to-day basis – laptops, TVs, cars, and more. This next chapter of Android is focused on delightful and helpful experiences across all the devices that are connected to your phone so that everything just works better together.

Let’s start by looking at how your phone works with your Chromebook. With a single tap, you can unlock and sign in to your Chromebook when your phone is nearby. Incoming chat notifications from apps on your phone are right there in Chrome OS. And soon, if you want to share a picture, one click, and you can access your phone’s most recent photos. As another simple example, let’s talk about your TV’s remote. If your home is like mine, the remote is missing, like, 50% of the time. To keep movie night on track, we’re building TV remote features directly into your phone. You can use voice search or even type with your phone’s keyboard. It’s effortless. For the more than 80 million devices using Android TV OS, this will work right out of the box.

And we want all of your smart devices to work together, not just those in your home, even your car. In fact, Android Auto is available in more than 100 million cars. And the vast majority of new vehicles from loved brands like Ford, GM, Honda, and more will support Android Auto wireless – no more cords.
We’re also really excited to introduce support for Digital Car Key. Car Key will allow you to lock, unlock, and start your car all from your phone. It works with NFC and ultra-wideband technology, making it super secure and easy to use. Just walk up to your car, step in, and away you go. And if your friend needs to borrow your car, you can remotely and securely share your digital key with them. Car Key is launching this fall with select Google Pixel and Samsung Galaxy smartphones. And we’re working with BMW and others across the industry to bring it to their upcoming cars.

OK, that was a quick look at Android 12, which will launch this fall. But you can check out many of these features in the Android 12 beta. Today, try it out on phones from 11 device makers, including Google Pixel, OnePlus, and Xiaomi. From a personalized UI to industry-leading innovation in privacy and security and better experiences across all the devices in your life, there’s so much transformative technology coming to your phone this year.

Now let’s go beyond the phone to what we believe is the next evolution of mobile computing, the smartwatch. Today, I’m excited to tell you about the biggest update to Wear OS ever. We’ve been hard at work in three key areas – first, building a unified platform jointly with Samsung, focused on battery life, performance, and making it easier for developers to build great apps for the watch. Second, a whole new consumer experience, including updates to your favorite Google apps. And third, a world-class health and fitness service created by the newest addition to the Google family, Fitbit. There’s a lot to share here.

So let’s get started by talking about our partnership with Samsung. Samsung and Google have a long history of collaborating. From the early days of Android, whenever we’ve tackled problems together, the ecosystem has grown for everyone.
And now we’re combining the best of our two operating systems, Wear OS and Tizen, into a unified platform focused on faster performance, longer battery life, and a thriving developer community. Working together, we’ve made apps start up to 30% faster, and animations and transitions are super smooth. We’re also addressing what consumers always want from a wearable – longer battery life. By taking advantage of smaller, lower-power cores, we can do things like run the heart rate sensor continuously, letting you better track your activity during the day and your sleep overnight while giving you plenty of battery to spare for the next day.

This combined platform isn’t just for Google and Samsung. It will continue to be available for all device makers, which means developers can build apps with a single set of APIs and reach millions of consumers all over the world through the Google Play store. To hear more about our partnership, it’s my privilege to welcome Patrick Chomet, who leads all product and experience at Samsung Mobile, to Google I/O.

[MUSIC PLAYING] [APPLAUSE]

PATRICK CHOMET: Thank you, Sameer. For the past 12 years, Samsung and Google have worked together and made Samsung Galaxy and Android successful. We strive to create innovative experiences for Samsung Galaxy users. Most recently, we pioneered foldable devices and delivered rich communication experiences with Google Duo and Messages. And we are very excited about the new chapter of our partnership – wearables. The Galaxy Watch is already loved by Android smartphone users, with our signature designs, cool watch face ecosystem, and innovative [INAUDIBLE] platform. We are bringing the best of these Galaxy Watch capabilities together with Google on a single platform, unifying the ecosystem for customers and developers.

We work closely to optimize the performance, meaning better responsiveness and longer battery life. You will also be able to enjoy Google apps and services like the Play Store, Google Maps, and more on the next Samsung Galaxy Watch.
I am truly excited to welcome the developer community to our new vibrant and open ecosystem. Thank you. Back over to you, Sameer.

[APPLAUSE]

SAMEER SAMAT: Thank you, Patrick. We’re very excited about our partnership. And I know many developers will be thrilled about our unified platform. On top of this new foundation, Wear is also getting a big update to the consumer experience. To tell you more, let me hand it off to Bjorn.

[MUSIC PLAYING] [APPLAUSE]

BJORN KILBURN: Thanks, Sameer. Hey, everyone. Over the last seven years, we’ve learned a lot about what people love most about their smartwatch. And we’ve built a whole new experience with your preferences in mind. First, our new navigation system makes it faster than ever to get things done on your watch. No matter what you’re doing, you can access shortcuts to important functions like instantly switching to another app. Let’s say I’m running with Strava, and I’m about to hit that long, grueling hill. I just double press to switch to my last app, Spotify, put on my most motivating song, and then switch right back without missing a beat. It’s such a simple thing for a more helpful and fluid experience.

People have also told us they love getting glanceable pieces of helpful information just a swipe away from their watch face, so we’re expanding our collection of tiles. Thanks to the new Tiles API, any developer can create one, giving people many more ways to customize their home screen carousel. Now I can go from checking my next meeting to the weather forecast to this new tile from Calm, which helps me relax before a stressful event like presenting at Google I/O.

[LAUGHTER]

We’ve also been hard at work revamping the wearables app experience with a Material Design update and expanded capabilities, starting with your favorite ones from Google.
This includes things like getting turn-by-turn navigation in Google Maps when you leave your phone behind, being able to use Google Pay in 37 countries and more than 200 public transit systems around the world, or downloading music from a catalog of more than 70 million songs for offline listening in the YouTube Music app, even without your phone nearby. We’re thrilled about all the ways you’ll be able to experience the best of Google on your watch. And speaking of the best of Google, I’m delighted to welcome the newest member of the family to Wear, Fitbit. Health and fitness is essential for wearables, and Fitbit has built a world-class service. So now, I’d love to welcome James to share more about our collaboration.

[MUSIC PLAYING] [APPLAUSE]

JAMES PARK: Thanks, Bjorn. Nearly 14 years ago, my co-founder, Eric Friedman, and I started Fitbit with a mission to make everyone in the world healthier. We’ve shipped over 130 million Fitbit devices as part of that mission. But over time, we’ve gone beyond just helping people track their fitness to supporting them in their health journey by providing a range of devices from trackers to smartwatches, along with software and services that give users amazing health and wellness content and rich insights and analytics on their data. And now that we’re part of Google, we’re working to bring the best of Fitbit to Wear. We will be making some of Fitbit’s most popular features available on Wear watches, including tracking your health progress throughout the day and on-wrist celebrations to help keep you motivated.

In the future, we’ll be building premium smartwatches based on Wear that combine the best of Fitbit’s health expertise with Google’s ambient computing capabilities. All this is just the beginning of how, together with Google, we can do even more to inspire and motivate people on their journey to better health. Back to you, Sameer.

[APPLAUSE]

SAMEER SAMAT: Thanks, James. I couldn’t be more excited for all the updates starting to roll out this fall.
Stay tuned for our developer keynote to learn more about new tools and libraries to help you build great apps for the watch. From a unified platform with Samsung to a new consumer experience and a world-class fitness service from Fitbit, this is a new era for the wearables ecosystem.

So that was a lot, but before we move on from Android and Wear, there’s something really important to me personally that I wanted to share with you. As the world’s largest OS, we have a responsibility to build for everyone. As part of our ongoing commitment to product inclusion, we’re working to make technology more accessible and equitable. One of the most important parts of any smartphone is the camera. Pictures are deeply personal and play an important role in shaping how people see you and how you see yourself. But for people of color, photography has not always seen us as we want to be seen, even in some of our own Google products. To make smartphone photography truly for everyone, we’ve been working with a group of industry experts to build a more accurate and inclusive camera. Let’s take a look.

[VIDEO PLAYBACK] [MUSIC PLAYING]

– People tend to think that cameras are objective, but a bunch of decisions go into making these tools. And historically, those decisions have not been taking people of color into account.

– It’s still reaffirming this idea that Black people aren’t worthy of being seen.

– So far, we’ve partnered with a range of different expert image-makers who’ve taken thousands of images to diversify our image data sets, helped improve the accuracy of our auto-white-balance and auto-exposure algorithms, and given aesthetic feedback to make our images of people of color more beautiful and more accurate.

– The process was to create almost like a guidebook to capture skin tones.

– I can’t help but think of my mom, and she still thinks that she’s not beautiful because of pictures that were taken of her when she was younger.
How many little girls are thinking they’re not beautiful because they were the darkest-skinned person in the photo, and they didn’t get represented?

– The work is for us to do. It’s not for people to change the way they look. It’s for us to change the way the tools work.

[END PLAYBACK] [APPLAUSE]

SAMEER SAMAT: Our engineering team is learning a tremendous amount working with these experts. And we’re making changes to our computational photography algorithms to address long-standing problems. For example, we’re making auto-white-balance adjustments to algorithmically reduce stray light to bring out natural brown tones and prevent over-brightening and desaturation of darker skin tones. We’re also able to reflect curly and wavy hair types more accurately in selfies with new algorithms that better separate a person from the background in any image. Although there’s still much to do, we’re working hard to bring all of what you’ve seen here and more to Google Pixel this fall. And we’re committed to sharing everything we learned with the entire Android ecosystem so that together, we can make cameras that work fairly for everyone. Thank you.

[APPLAUSE] [MUSIC PLAYING] [BIRDS CHIRPING] [ENGINE HUMMING]

KAREN DESALVO: As Sundar shared, we want to build a more helpful Google for everyone to increase knowledge, success, happiness, and health beyond anything previously possible. Today, I want to bring you inside to see how our recent advances in image recognition are helping to solve some of the world’s big health challenges. Let’s start with breast cancer, a diagnosis that one in eight women will face in their lifetime. Mammograms can help catch breast cancer earlier, but half of all women experience a false alarm across a decade of screening. So we’ve been working to make mammography better.
Last year, our research demonstrated AI’s potential to analyze screening mammograms with accuracy similar to clinicians. And now we’re collaborating with Northwestern Medicine on an investigative device research study to better understand how AI can apply to the breast cancer screening process. Let’s hear why this matters.

[VIDEO PLAYBACK] [MUSIC PLAYING]

– When we found out Grandma had breast cancer, it was in the late ’90s, and it wasn’t something that anyone talked about. So my first mammogram was nerve-wracking. Waiting for the results, every thought runs through your head – what if they find something? It was the worst feeling.

– One of the greatest anxieties about having mammography is the wait. It may take radiologists days, sometimes weeks, to get through the list of mammograms that need to be read. This is a national problem. We don’t have enough people doing what we need to do. With the research study that we’re doing with Google, we’re using artificial intelligence that scans the mammogram image. It helps flag patients that may need additional imaging. I get an email that says the patient has been flagged. And if I agree, we take the patient, and they take more pictures right away. We’re just at the tip of the iceberg in terms of what we can do with artificial intelligence. We would like to see that we are getting patients faster through the system. If we can show that, then we can potentially change radiologists’ operations in such a way that they can prioritize patients that need care first. So it will be very exciting to see the results of this study.

[END PLAYBACK]

KAREN DESALVO: This is a great example of how we’re learning if AI could support clinicians in their work to triage patients. At Google, we want everyone to have the highest quality care. Technology can and should help close the equity gap.
That’s why we’re working to bring this technology to bear on important global health challenges, from diabetic retinopathy to our new work to improve tuberculosis detection using image recognition on chest X-rays. We also believe AI can assist you in your daily health. People come to Google Search every day to ask questions about their health. For example, we see billions of queries each year related to dermatologic issues. This is no surprise because derm conditions affect about 2 billion people globally. There are not enough specialists to meet the need. And so we wondered, how can AI help when you’re searching and asking, what is this?

Meet our AI-powered dermatology assist tool, a class I CE-marked medical device that uses machine learning to help find answers to common derm conditions right from your smartphone or computer. From your phone, just upload three different photos taken from various angles of the skin, hair, or nail issue that you want to learn about and answer some basic questions about your symptoms. The AI model handles the rest. In a matter of seconds, you will have a list of possible matching dermatologic conditions. And then we can help you get relevant information to learn more. It seems simple, but developing an effective AI model for dermatology requires the capability to interpret millions and millions of images, inclusive of a full range of skin types and tones. When available, this tool will be accessible from your browser and cover 288 conditions, including 90% of the most commonly searched derm-related questions on Google. We’re working to make it available to consumers on Google Search in the EU as early as the end of this year.

We’ve just looked at ways we’re applying AI to support people and caregivers everywhere, but health isn’t just driven by medical care. It’s also about our social and emotional well-being. And that’s where staying connected comes in. To find out how Google is helping, let me pass it back to Sundar.
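The three-photo flow described in this segment can be sketched, under heavy assumptions, as scoring each uploaded photo with an image classifier and combining the per-photo scores into one ranked list of candidate conditions. The classifier itself is stubbed out below; `rank_conditions`, the score dictionaries, and the averaging policy are hypothetical illustrations, not the tool’s real API.

```python
# Hypothetical sketch of the multi-photo flow: average per-photo condition
# scores from some image classifier (stubbed out here) and return the top
# candidates. rank_conditions and these inputs are illustrative only.

def rank_conditions(per_photo_scores, top_k=3):
    """per_photo_scores: one {condition: score} dict per uploaded photo.
    Returns the top_k condition names by mean score across photos."""
    n = len(per_photo_scores)
    totals = {}
    for scores in per_photo_scores:
        for condition, score in scores.items():
            # Accumulate each condition's contribution to the mean score.
            totals[condition] = totals.get(condition, 0.0) + score / n
    return sorted(totals, key=totals.get, reverse=True)[:top_k]

# Scores a classifier might emit for three photos of the same skin issue.
photos = [{"eczema": 0.7, "psoriasis": 0.2},
          {"eczema": 0.6, "acne": 0.3},
          {"eczema": 0.8, "psoriasis": 0.1}]
matches = rank_conditions(photos)
```

Averaging across angles is one plausible way multiple photos could make a single noisy prediction more robust; the production system presumably does something more sophisticated.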
[MUSIC PLAYING] [APPLAUSE]

SUNDAR PICHAI: Thank you, Dr. DeSalvo. It’s exciting to see the ways in which AI and image recognition are transforming health care. There are two additional areas of research where AI will have long-term impact. The first feels incredibly timely. We were all grateful to have video conferencing over the last year. It helped us stay in touch with family and friends and kept businesses and schools going. But there is no substitute for being together in the room with someone. So several years ago, we kicked off a project to use technology to explore what’s possible. We call it Project Starline. It builds on the different areas of computer science I spoke about today and relies on custom-built hardware and highly specialized equipment. It’s early and currently available in just a few of our offices. But we thought it’d be fun to give you a look at people experiencing it for the first time. Let’s take a look.

[VIDEO PLAYBACK] [MUSIC PLAYING]

– When I walked into the room, I was a little suspicious – what is this? I couldn’t quite understand what was going to happen when that screen lit up. Eddie!

[LAUGHTER]

– So, you look beautiful.

– I could feel her and see her–

– Hi!

– –and it was this, like, 3D experience.

– I just saw my sister as if she was right in front of me. It really, really felt like she and I were in the same room.

– It was like she was here. Bye. Wow.

[END PLAYBACK] [APPLAUSE]

SUNDAR PICHAI: Some key advances have made this experience possible. First, using high-resolution cameras and custom-built depth sensors, we capture your shape and appearance from multiple perspectives and then fuse them together to create an extremely detailed, real-time 3D model. The resulting data is huge, many megabits per second. To send this 3D imagery over existing networks, we developed compression and streaming algorithms that reduce the data by a factor of more than 100.
And we have developed a breakthrough light field display that shows you the realistic representation of someone sitting right in front of you in three dimensions. As you move your head and body, our system adjusts the images to match your perspective. You can talk naturally, gesture, and make eye contact. It’s as close as we can get to the feeling of sitting across from someone. As sophisticated as the technology is, it vanishes so you can focus on what’s most important. With Project Starline, we’ve brought together a set of advanced technologies with the goal of creating the best communications experience possible. We have spent thousands of hours testing it in our own offices, and the results are promising. There’s also excitement from our lead enterprise partners. We plan to expand access to partners in health care and media. In pushing the boundaries of remote collaboration, we have made technical advances that will improve our entire suite of communications products. We look forward to sharing more ways for you to get involved in the months ahead.

The second area of research I want to discuss is our work in driving forward sustainability. Sustainability has been a core value for more than 20 years. We were the first major company to become carbon neutral in 2007. We were also the first to match our operations with 100% renewable energy. That was in 2017, and we have been doing it ever since. And last year, we eliminated our entire carbon legacy. Our next ambition is our biggest yet. By 2030, we aim to operate on carbon-free energy 24/7. This means running every data center and office on clean electricity every hour of every day. Operating 24/7 on carbon-free energy is a step change from current approaches. It means setting a higher bar to never emit carbon from our operations in the first place. It’s a moonshot, like LaMDA or quantum computing. And it presents an equally hard set of problems to solve.
First, we have to source carbon-free energy in every place we operate, a harder task in some places than in others. Today, five of our data centers are already operating at or near 90% carbon-free energy. In Denmark, we built five new solar farms to support our newest data center, complementing existing wind energy on the Danish grid. And it’s operated carbon-free 90% of the time since day one.

Another challenge of 24/7 carbon-free energy is just that – it has to run every hour of every day. So last year, we rolled out the world’s first carbon-intelligent computing platform. It automatically shifts the timing of many compute tasks to when clean power sources are most plentiful. And today, I’m excited to announce we are the first company to implement carbon-intelligent load shifting across both time and place within our data center network. By this time next year, we’ll be shifting more than a third of non-production compute to times and places with greater availability of carbon-free energy.

To reach 24/7, we also need to go beyond wind and solar and tap into sources of on-demand clean energy like geothermal. Geothermal uses the consistent heat from the Earth to generate electricity. But it’s not widely used today, and we want to change that. I’m excited to announce that we are partnering to develop a next-generation geothermal power project. This will connect to the grid, serving our Nevada data center starting next year. We believe that, with our Cloud AI combined with a partner’s expertise in fiber-optic sensing, our novel techniques can unlock flexible geothermal power in a broad range of new places. Investments like these are needed to get to 24/7 carbon-free energy.

And it’s happening right here in Mountain View too. We are building our new campus to the highest sustainability standards. When completed, these buildings will feature a first-of-its-kind dragonscale solar skin, equipped with 90,000 silver solar panels and the capacity to generate nearly 7 megawatts.
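The carbon-intelligent load shifting mentioned earlier in this segment can be illustrated with a toy greedy scheduler: given an hourly forecast of the grid’s carbon-free-energy (CFE) fraction, hand flexible, deferrable jobs the cleanest hours first. The function name, the inputs, and the greedy policy are assumptions for illustration only, not Google’s actual platform, which also shifts load across locations and respects capacity and deadline constraints.

```python
# Toy greedy version of time-based carbon-aware scheduling: hand flexible
# jobs the hours with the highest forecast carbon-free-energy (CFE)
# fraction. Names and the policy are assumptions for illustration only.

def schedule_flexible_jobs(cfe_forecast, jobs):
    """cfe_forecast: CFE fraction per hour (index = hour of day).
    jobs: list of (name, hours_needed). Returns {name: [assigned hours]}."""
    # Rank hours from cleanest to dirtiest.
    clean_hours = sorted(range(len(cfe_forecast)),
                         key=lambda h: cfe_forecast[h], reverse=True)
    schedule, cursor = {}, 0
    for name, needed in jobs:
        # Greedily give each job the cleanest hours still unclaimed.
        schedule[name] = sorted(clean_hours[cursor:cursor + needed])
        cursor += needed
    return schedule

# Midday solar makes hours 1 and 2 the cleanest in this 4-hour forecast.
forecast = [0.2, 0.9, 0.8, 0.1]
plan = schedule_flexible_jobs(forecast, [("batch_index", 2), ("ml_training", 1)])
```

Even this toy version shows the core idea: the work still gets done, but its timing bends toward the hours when clean power is most plentiful.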
They will house the largest geothermal pile system in North America, helping to heat the buildings in the winter and cool them in the summer. Sustainability is one of the defining challenges of our time. And advances in computer science and AI have a huge role to play in meeting it. So it’s a fitting way to end our I/O keynote.

I think of I/O not just as a celebration of technology but of the people who use it and build it, including the millions of developers watching today. Over the past year, we have seen how technology can be used to help billions of people through the most difficult of times. It’s made us more committed than ever to our goal of building a more helpful Google for everyone. Thank you for joining us today. Please enjoy the rest of Google I/O. And stay tuned for the developer keynote coming up next. I hope to see you in person next year. Until then, stay safe and be well.

[CHEERING AND APPLAUSE] [MUSIC PLAYING]