
The Pixel 8 Pro has just been announced, showing off the latest marvels from Google. Light applause accompanied the New York launch event, as Rick Osterloh and crew presented the latest details to assembled media. With much of the device leaked - or simply shared by Google in advance - there were fewer surprises than at the recent iPhone 15 launch event.

But Google doesn't surprise people with hardware. Early in the life of the Pixel, Google CEO Sundar Pichai said (and I paraphrase here) that hardware didn't matter. More and more, the Pixel is becoming a vessel for artificial intelligence, and there was no greater demonstration of that than at the launch of the Pixel 8 Pro.

But this being a phone launch, most of the attention is on the camera. There's new hardware there - a 48-megapixel ultrawide camera, for example - as well as Pro Controls, giving photographers more power, such as the ability to take full-resolution photos. But when it comes to the sort of gasps and wonderment you find at an Apple launch, it's AI that takes centre stage.

Google has long pushed the notion of computational photography. While others (hello Apple) were talking about lenses and sensors, Google dropped the pretence and pushed the message that AI could do it all. But now there's a real sense that AI is doing it all. The introduction of Pro Controls vanishes into the background behind the AI capabilities and the resounding feeling that every photo on the Pixel 8 Pro is now basically fake.

Video Boost (Image credit: Google)

Let's take the new Video Boost mode, for example. Google actually called out the iPhone 15 Pro Max here, showing how the Pixel 8 Pro could take that video and give it more depth and detail, with more accurate skin tones, because it pushes everything through its HDR pathway. If you come for the king, as they say, you'd better not miss - and the iPhone has long been regarded as the best phone for video.

Powering Video Boost - and Video Night Sight - is AI. The results do look spectacular, with Google saying that the Pixel 8 Pro produces the best night video of any smartphone, thanks to our friend AI.

Video Night Sight (Image credit: Google)

Now, AI involves a lot of jargon that we're all becoming more familiar with. Large Language Models aren't relevant here (although they're in Assistant with Bard, Call Screen and a lot more that the Pixel 8 Pro offers), but generative AI is. This relies on foundation models (a less common term) that can now run on device, rather than in the cloud. In fact, the Pixel 8 Pro can run 150x more computations than the Pixel 7 Pro thanks to the Google Tensor G3 - and this is the stuff that's going to fake your photos like a science fiction movie.


Magic Eraser wowed when it arrived on the Pixel 6 in 2021, blending and blurring background pixels to remove things you didn't want from a photo. Thanks to generative AI, it's no longer just blending pixels in the background, it's basically generating new content to fill the background when something is removed. Rather than just getting a bit of a blur based on contextual surroundings, AI in Google Photos is going to fake it instead. Google's example involved removing a whole car from the background of a photo, replacing the wall, ground and shadows where it had been. That's not a small edit, it's a significant change.

Best Take (Image credit: Google)

But that's not all that Google showed off at the launch of the Pixel 8 Pro. Perhaps the biggest piece of fakery comes in the form of Best Take. In the old days of photography, you'd take a bunch of photos and then pick the best one. You know, the one where everyone is smiling and everyone has their eyes open - and all the kids are looking in the right direction. With Best Take, the Pixel 8 Pro will effectively do that for you. It will take that bunch of photos and pick the best bits from each shot. It will then combine them into a photographic utopia, an idealised scene which probably never existed. You won't get Grandma looking the wrong way as she always does, because Google will AI her face into the right aspect for that hauntingly fake family photo.

While a lot of AI processing is taking place in Pixel photos (and has been for some time), the final example that I caught sight of today was the AI in zoom. We know that Google has been using its hybrid zoom system to boost the performance of telephoto images for a couple of years. This looks at the scene, compares neighbouring pixels and sharpens things up - and it's pretty good at closing the gap with optical systems. But thanks to having all that AI power in the Pixel 8 Pro on the Tensor G3, you'll be able to fake some of that zoom too.

Zoom Enhance (Image credit: Google)

This will use generative AI to look at what you're zooming into and basically make it all good for you. No more pixelated mess, no more shaky hands, no more washed-out colours. In the future, Google's AI could be painting this into your zoom photos to make them look better. Sure, Google isn't the only one doing this - anyone who has taken a photo of the moon with a Samsung Galaxy S23 will know that photographic fakery is going around - but the question is whether you care.

For a long time the mantra in smartphone cameras was about presenting the most realistic depiction of the scene in front of you. It was about using HDR to balance things out, so it was more like you'd see through your own eyes.

People have often called out Samsung for making things a little more saturated - greener grass and bluer skies - but Google's approach, after several years of the Pixel being accepted as the best point-and-shoot phone out there, seems to be to diverge from that. It's not about what you actually see, it's an idealised vision of that scene.

Magic Eraser (Image credit: Google)

The question is whether anyone will care. With people hogging global beauty spots to get that perfect selfie - all for the external validation on social media - does anyone care if the photos they are taking are no longer the actual image, but an AI representation of that image? And at what point does the camera stop actually being about the capture of light (hence photo-graphy) and just become a portal for AI data capture?