How AI scene optimization works in your phone's camera

If you have a flagship camera phone, you may wonder how it takes such striking photos. Phones brighten shadows, blur backgrounds for DSLR-like portraits, bring out colors, and generally add polish to images in ways that traditional cameras don't. A big part of this comes down to AI scene optimization, which you'll find to some extent in most smartphones today, and at its best on the best camera phones out there.

Camera phones are smart; after all, they are miniature computers. The chip that powers your phone delivers processing power we couldn't have dreamed of decades ago. Smartphone manufacturers combine this intelligence with the tiny camera module on the back of your phone to give it artificial eyes. With these eyes, your phone can understand what you're shooting, decide what to optimize, and prepare your final Instagram-ready shot without hours of retouching. Most impressive of all, in certain scenes a smartphone camera can outperform a DSLR costing many times more, thanks to all that intelligence and processing power in its chipset and clever HDR effects. In short, it's a story of how a frustration (tiny smartphone sensors) created the next evolution in computational photography.

What is AI scene optimization?

The clue is in the name: AI scene optimization is the process by which your phone optimizes a photo depending on the scene it captures. Speaking of taking a photo, when you point your camera phone at a subject, light passes through the lens and falls on the sensor. This light is processed into an image by the image signal processor (ISP), which is typically part of your smartphone's chipset, whether that's a Qualcomm Snapdragon or an Apple A-series Bionic. ISPs are not new; they have featured in digital cameras for years, doing the same basic things they now do on smartphones. Their tasks include giving you an image to preview, reducing noise, adjusting exposure, balancing whites, and so on. What's notable about smartphones these days is how smart these ISPs have become. For starters, with improved computing capabilities, camera phone ISPs can better understand what you're shooting. On an old DSLR or compact digital camera, you may have had to select a scene mode manually; now smartphones can do it on their own. That, however, only scratches the surface.
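One of the basic ISP jobs mentioned above, white balance, can be illustrated with a toy sketch. The gray-world method below is a classic textbook approach, not what any particular phone's ISP actually ships; the function name and the tiny synthetic image are illustrative only.

```python
import numpy as np

def gray_world_white_balance(img):
    """Scale each color channel so its mean matches the overall mean.

    This is the 'gray world' assumption: on average, a scene is neutral
    gray, so a channel whose mean is too high carries a color cast.
    """
    img = img.astype(np.float64)
    channel_means = img.reshape(-1, 3).mean(axis=0)  # mean of R, G, B
    gray = channel_means.mean()                      # target neutral level
    balanced = img * (gray / channel_means)          # per-channel gain
    return np.clip(balanced, 0, 255).astype(np.uint8)

# A tiny synthetic image with a warm (reddish) color cast
warm = np.full((4, 4, 3), [200, 150, 100], dtype=np.uint8)
corrected = gray_world_white_balance(warm)
print(corrected[0, 0])  # all channels pulled to the common mean: [150 150 150]
```

Real ISPs use far more sophisticated statistics (and, increasingly, learned models), but the principle of estimating and removing a global color cast is the same.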

Samsung Galaxy S21

(Image credit: Samsung) With current AI scene optimization, a camera phone can identify multiple elements of your image and adjust the processing of each in very specific ways. For example, if you take a picture of a person in front of a field of grass with a blue sky in the frame, your phone's ISP might brighten their face, since that's probably the subject; boost the greens in the frame to make them look richer; bring out the blues of the sky separately; and, depending on your phone, maybe even soften the background slightly to draw attention to the subject. Today, many smartphones go even further. Sony's high-end smartphones, for example, have animal eye tracking, while Huawei's flagships have automatic night detection that works so well it can illuminate an almost dark scene, seemingly turning night into day. Similar technology is available on iPhones, such as in Portrait mode, which allows fine control over background blur and lighting effects when snapping a photo of a person. Google's Pixel phones, meanwhile, are renowned for their astrophotography mode, which, when turned on, looks up at the night sky and intelligently judges how long the shutter should stay open so it can capture stars and even galaxies. So surely smartphones must pale in comparison to DSLRs when it comes to AI photography? In truth, they don't. Interestingly, camera manufacturers are now learning a thing or two about photo processing from camera phone engineers.
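The idea of treating the face, grass, and sky in the frame differently can be sketched as a segmentation mask plus per-class gains. Everything below is hypothetical: the class labels, the gain values, and the function name are made up for illustration, and real pipelines adjust far more than brightness.

```python
import numpy as np

# Hypothetical per-class brightness gains, keyed by segmentation label
GAINS = {
    0: 1.0,   # background: left untouched
    1: 1.25,  # face: brightened, since it is the likely subject
    2: 1.1,   # grass/sky: colors gently enriched
}

def apply_scene_gains(img, mask):
    """Apply a brightness gain to each pixel according to its class label."""
    out = img.astype(np.float64)
    for label, gain in GAINS.items():
        out[mask == label] *= gain
    return np.clip(out, 0, 255).astype(np.uint8)

# A 2x2 image of mid-gray pixels and a matching label mask
img = np.full((2, 2, 3), 100, dtype=np.uint8)
mask = np.array([[0, 1],
                 [2, 0]])
out = apply_scene_gains(img, mask)
print(out[0, 1])  # face pixel: 100 * 1.25 -> [125 125 125]
print(out[1, 0])  # grass/sky pixel: 100 * 1.1 -> [110 110 110]
```

In a real phone, the mask would come from a neural segmentation model running on the NPU, and the "gains" would be full tone, color, and sharpening curves tuned per class.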

iPhone 13 Pro Max

(Image credit: LaComparacion)

Small sensors, smart solutions

Think back ten years: Nokia released smartphones with big, powerful xenon flashes, like the Nokia 808 PureView. It had to bump up the hardware specs on its camera phones because photos taken on most phones at night looked like an unbearable, grainy mess, and even a phone like the PureView couldn't make a dent in some really tough scenes. The reason mobile cameras face such constraints is that phones need to fit in our palms and pockets, so they need to be small, and they must also pack in many other components: screens, speakers, batteries, antennas, and so on. Tiny sensors paired with tiny lenses have a hard time letting in much light. More light means a better image, and that's where the problem lies: small sensors mean limited light-gathering ability, which means poor image quality. These restrictions forced phone makers to stop trying to fix the problem with better, expensive, big, battery-draining hardware and turn to software instead. One of the first times this really made headlines was when Google released the Pixel, a phone without optical image stabilization but with electronic stabilization so good that it outperformed much of the competition. The Pixel and Pixel 2 then showcased incredible photo processing that turned shots from meh to stunning right before your eyes as the phone recognized the scene. This led to brands like Huawei introducing neural processing units into their chipsets, with the Mate 10 featuring AI scene detection, and the feature spreading to phones from other manufacturers, with Samsung phones recognizing around thirty scene types.
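One widely used software answer to tiny, light-starved sensors is stacking a burst of frames and averaging them, which suppresses sensor noise. The sketch below is a simplified illustration of that principle on synthetic data, not any vendor's actual night-mode pipeline; the noise levels and frame count are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
true_scene = np.full((8, 8), 40.0)  # a dim, uniform patch of the scene

# Simulate a burst of 16 noisy low-light frames from a small sensor
frames = [true_scene + rng.normal(0, 20, true_scene.shape) for _ in range(16)]

single = frames[0]                 # what one short exposure looks like
stacked = np.mean(frames, axis=0)  # averaging N frames cuts noise ~ sqrt(N)

noise_single = float(np.std(single - true_scene))
noise_stacked = float(np.std(stacked - true_scene))
print(f"single-frame noise:  {noise_single:.1f}")
print(f"stacked-frame noise: {noise_stacked:.1f}")  # roughly 4x lower
```

Real night modes add alignment between frames (hands shake), motion rejection, and tone mapping on top, but frame averaging is the core trick that lets a small sensor fake a long, clean exposure.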

Huawei Mate 40 Pro

(Image credit: Huawei)

What can't AI scene detection do?

And this is how the smart people who took on a photography challenge made something impossible to overcome with hardware alone (high-quality smartphone photography in any and all lighting conditions) possible using software and AI scene detection. The next frontier is AI scene detection in video. Although it is already available to some degree, the ultra-smart night photography capabilities that Apple, Google (Night Sight), and Huawei apply to stills have not yet made the leap to clean night video. Video runs at 24 frames per second or more, so that demands another level of processing power. As processors get more powerful, AI development reaches new heights, and smartphone sensors become better at catching light despite their small size, AI scene detection seems poised to keep changing the face of photography for everyone.
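To see why video is "another level of processing power", consider the time budget per frame. The arithmetic below is just that, arithmetic; the frame rate is a common example value, not a measurement of any real device.

```python
# A still photo can take its time; real-time video cannot.
fps_video = 30                    # a common video frame rate
budget_ms = 1000 / fps_video      # milliseconds available per frame

# All capture, scene detection, and enhancement must fit in this window,
# every frame, without draining the battery or overheating the chip.
print(f"{budget_ms:.1f} ms per frame at {fps_video} fps")
```

A night-mode photo can spend several seconds stacking frames; at 30 fps the same pipeline would have to finish in about 33 milliseconds, which is why night video has lagged behind night photography.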

LaComparacion created this content as part of a paid partnership with Huawei. The content of this article is completely independent and reflects only the editorial opinion of LaComparacion.