Megapixels used to be much more straightforward: a higher number meant the camera could capture more image detail, as long as the scene had enough light. A concept called pixel binning, now spreading through flagship smartphones, changes those traditional imaging rules for the better.
In brief, pixel binning gives you a camera that captures lots of detail when it's bright without being useless when it's dark. But the hardware changes involved carry some tradeoffs worth understanding, which is why we're taking a closer look.
You may have seen pixel binning on earlier smartphones like the LG G7 ThinQ in 2018 and the new LG V60 ThinQ, but this year it's finally catching on. Samsung, the largest Android phone maker, has brought pixel binning to its flagship Galaxy S20 Ultra. Other high-end phones unveiled last week, Huawei's P40 Pro and Pro+ and Xiaomi's Mi 10 and Mi 10 Pro, also offer pixel binning.
What is pixel binning technology?
Pixel binning is designed to make an image sensor more adaptable to varying conditions. Today's latest phones pair improvements to the image sensor, which gathers the light, with image processing algorithms that turn the sensor's raw data into a photo or video. Pixel binning combines data from groups of adjacent pixels on the sensor to produce a smaller number of higher-quality pixels.
When there's plenty of light, the Galaxy S20 Ultra can take images at the image sensor's actual resolution of 108 megapixels. But when it's dark, pixel binning lets the phone take a good, albeit lower-resolution, photo; for the Galaxy S20 Ultra that's 12 megapixels, the resolution that has prevailed on mobile phones for a few years now. "We wanted to give the flexibility of having a large number of pixels as well as having large pixel size," says Ian Huang, an LG product manager in the United States.
Is pixel binning a marketing ploy?
Of course, it helps phone makers tout megapixel numbers that surpass even DSLRs and other professional-quality cameras. That's a little dumb, but if you want to get the most out of your smartphone's camera, pixel binning offers some real benefits.
How does pixel binning technology work?
To fully understand pixel binning, you need to understand a digital camera's image sensor: a silicon chip with a grid of millions of pixels that capture the light coming through the camera lens. Each pixel records only one color, red, green, or blue, but the colors are arranged in a particular checkerboard structure called a Bayer pattern that lets the camera reconstruct all three color values for every pixel.
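To make the Bayer pattern concrete, here's a minimal sketch, with an illustrative function name of my own, that maps each photosite on a conventional sensor to the single color it records (the common RGGB tiling, with twice as many green sites as red or blue):

```python
# Sketch of a conventional Bayer color filter array (RGGB tiling),
# where each photosite records only one color channel.

def bayer_color(row, col):
    """Return the filter color at a given photosite in an RGGB Bayer mosaic."""
    if row % 2 == 0:
        return "R" if col % 2 == 0 else "G"
    return "G" if col % 2 == 0 else "B"

# Print the filter layout for a 4x4 corner of the sensor.
for row in range(4):
    print(" ".join(bayer_color(row, col) for col in range(4)))
```

Running it prints the familiar alternating checkerboard: `R G R G` on even rows, `G B G B` on odd rows.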
Pixel binning blends data from several tiny pixels on the image sensor into one bigger virtual pixel. That's especially helpful in low-light circumstances, where large pixels are much better at keeping image noise at bay.
The technology typically combines a 2×2 group of four pixels into one virtual "bin" pixel, but Samsung's S20 Ultra combines a 3×3 group of nine actual pixels into one virtual pixel, a method the Korean company calls "nona binning." When you don't have to worry about noise, the camera can take a picture with no binning at the full resolution of the image sensor. That's useful for printing big photos or cropping in on an area of interest.
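A toy model can show the resolution tradeoff. This sketch simply averages each n×n block of raw values into one virtual pixel; real sensors sum photoelectric charge and bin only same-color pixels, so treat this as an illustration rather than how any actual chip works:

```python
import random

def bin_pixels(sensor, n):
    """Collapse an h x w grid of pixel values into (h/n) x (w/n) virtual pixels."""
    h, w = len(sensor), len(sensor[0])
    binned = []
    for r in range(0, h, n):
        row = []
        for c in range(0, w, n):
            block = [sensor[r + i][c + j] for i in range(n) for j in range(n)]
            row.append(sum(block) / len(block))  # average the n*n noisy samples
        binned.append(row)
    return binned

# A 6x6 patch of noisy readings around a true brightness of 100:
random.seed(0)
sensor = [[100 + random.gauss(0, 10) for _ in range(6)] for _ in range(6)]

nona = bin_pixels(sensor, 3)  # 3x3 "nona" binning -> a 2x2 grid of virtual pixels
print(len(nona), "x", len(nona[0]))
```

Each virtual pixel averages nine samples, which pulls its value closer to the true brightness at the cost of a 9x drop in resolution, the same bargain the S20 Ultra strikes when it goes from 108 to 12 megapixels.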
What is the right time to use pixel binning?
Judging by reviews of the new Samsung Galaxy phones, most people will be satisfied with the lower-resolution shots, which many experts recommend as the default. The primary reason to use pixel binning is improved low-light performance, but it also avoids the giant file sizes of full-resolution photos, which can swallow your phone's storage and online services like Google Photos. For instance, a shot might be 3.6MB with pixel binning at 12 megapixels but 24MB at 108 megapixels.
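Some back-of-the-envelope math using those example file sizes shows how quickly full-resolution shots add up (real JPEG sizes vary a lot by scene, and the 50 GB of free space is just an assumption for illustration):

```python
# Storage math using the example sizes above: 3.6 MB binned vs 24 MB full-res.
binned_mb, full_mb = 3.6, 24.0

# Full-resolution shots are roughly 6.7x larger...
ratio = full_mb / binned_mb

# ...so a phone with ~50 GB free for photos holds far fewer of them.
free_mb = 50 * 1024
print(f"size ratio: {ratio:.1f}x")
print(f"binned shots per 50 GB: {free_mb / binned_mb:.0f}")
print(f"full-res shots per 50 GB: {free_mb / full_mb:.0f}")
```

That works out to roughly 14,000 binned shots versus about 2,100 full-resolution ones in the same space.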
When it’s bright, photo enthusiasts are more likely to want to use full resolution. That could help you recognize distant birds or take more photos of distant subjects of a more dramatic nature. And if you’re going to print big pictures, it’s all about megapixels.
Can Samsung's 108-megapixel S20 Ultra shoot better than Sony's professional 61-megapixel A7r IV camera?
Besides other factors like lenses and image processing, the size of each pixel on the image sensor matters, too. There's a reason the Sony A7r IV costs $3,500 and only takes pictures, while the S20 Ultra costs $1,400 and can run many other apps as well as make phone calls.
Image sensor pixels are squares whose width is measured in microns, or millionths of a meter; a human hair is about 100 microns wide. Each pixel on the S20 Ultra is 0.8 microns wide, and with Samsung's 3×3 binning, a virtual pixel is 2.4 microns wide. Each pixel on the Sony A7r IV is 3.8 microns wide. That means the Sony can capture about 2.5 times more light per pixel than the S20 Ultra in its 12-megapixel binning mode, and about 22 times more than in its full-resolution 108-megapixel mode, a significant advantage in image quality.
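Those ratios fall out of simple geometry: light gathered scales with a pixel's area, which is its width squared. A quick check of the article's numbers:

```python
# Light gathered scales with pixel area (width squared), all else being equal.
sony_px = 3.8        # microns, Sony A7r IV
s20_binned_px = 2.4  # microns, S20 Ultra with 3x3 nona binning
s20_full_px = 0.8    # microns, S20 Ultra at full 108-megapixel resolution

def area_ratio(a, b):
    """How many times more light a pixel of width a gathers than one of width b."""
    return (a / b) ** 2

print(f"vs. binned mode: {area_ratio(sony_px, s20_binned_px):.1f}x")
print(f"vs. full resolution: {area_ratio(sony_px, s20_full_px):.1f}x")
```

That prints about 2.5x and 22.6x, matching the figures above once rounded.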
However, phones are closing the image quality gap, mainly with computational photography techniques such as combining multiple frames into one shot. And smartphone image sensors are getting bigger and bigger, which also improves image quality.
Why is pixel-binning technology becoming prominent?
Miniaturization has made ever smaller pixels possible. "What has propelled binning is this new trend of submicron pixels," those less than 1 micron wide, said Devang Patel, a senior marketing manager at OmniVision, a top manufacturer of image sensors. Packing in lots of those pixels lets smartphone makers, eager to make this year's phone stand out, boast about big megapixel counts and 8K video. Binning lets them do that without losing low-light sensitivity.
Do smartphones need special sensors for pixel binning?
The underlying sensor is the same, but a change to the color filter array attached to it alters how the sensor collects red, green, and blue light. A conventional Bayer checkerboard alternates colors with each adjacent pixel, but binning sensors organize same-color pixels into 2×2, 3×3, or 4×4 groups. Pixel binning is possible without these groups, but it requires extra processing, and Patel said image quality suffers somewhat.
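One such layout is the 2×2 grouping often called quad-Bayer. As a sketch (the function name is mine, for illustration), you can derive it from the ordinary Bayer tiling by collapsing each 2×2 block to a single cell:

```python
# Sketch of a quad-Bayer color filter array: same-color pixels in 2x2 groups,
# with the groups themselves arranged in the usual RGGB checkerboard.

def quad_bayer_color(row, col):
    """Filter color at a photosite in a 2x2-grouped (quad-Bayer) mosaic."""
    gr, gc = row // 2, col // 2  # which 2x2 group this pixel belongs to
    if gr % 2 == 0:
        return "R" if gc % 2 == 0 else "G"
    return "G" if gc % 2 == 0 else "B"

# Print a 4x4 corner: note the 2x2 blocks of identical colors.
for row in range(4):
    print(" ".join(quad_bayer_color(row, col) for col in range(4)))
```

Compared with the single-pixel Bayer checkerboard, each color now covers a 2×2 patch (`R R G G` on the top two rows, `G G B B` below), which is exactly what makes cheap on-sensor binning possible.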
The Samsung Galaxy S20 Ultra bins 3×3 pixel groups. Huawei's P40 Pro models prioritize low-light performance with 4×4 pixel binning ("SedecimPixel," named for the 16:1 ratio), yielding simulated pixels an incredible 4.5 microns across. With the Mi 10, Xiaomi took a different 2×2 approach: photos are 108 megapixels at full resolution and 27 megapixels with pixel binning.
Grouping pixels into larger simulated pixels works well with image processing hardware and software optimized for ordinary Bayer pattern data. But for high-resolution photos, the camera applies another stage of image processing (called remosaicing, if you're curious) that essentially reconstructs a finer-detail Bayer pattern from the sensor's coarser pixel color groups.
Is it possible to shoot raw photos with pixel binning?
Photo enthusiasts like the flexibility and image quality of raw photos, the unprocessed camera sensor data, typically presented as a DNG file. Pixel binning coexists with raw just fine. But if you want raw data at full resolution, sorry: LG and Samsung don't offer it, and raw processing tools like Adobe Lightroom assume a conventional Bayer pattern, not 2×2 or 3×3 patches of same-color pixel groups.
What are the drawbacks of pixel binning technology?
According to Judd Heape, a senior director at smartphone chipmaker Qualcomm, a sensor with 12 actual megapixels will perform a little better than 12 binned megapixels on the same size sensor. It would probably be less expensive, too. And when you shoot at high resolution, you'll need more image processing, which cuts into battery life.
You'll also get somewhat better sharpness in high-resolution images from a conventional Bayer sensor than from a binning sensor with its 2×2 or 3×3 same-color pixel groups. But the problem isn't too bad. "With our algorithm, we're able to recover anywhere from 90% to 95% of the actual Bayer image quality," Patel said. Comparing photos taken with the two techniques, you'd be hard-pressed to see a difference outside lab test scenes with complicated details like fine lines.
If you forget to switch your camera to pixel binning mode and take high-resolution shots in the dark, image quality will suffer. LG tries to compensate in that case by blending several shots to minimize noise and increase dynamic range, Huang said.
Regular cameras could adopt pixel binning too, judging by a full-frame sensor from Sony, the top image sensor maker right now.
What is the potential future of pixel binning technology?
There are some potential innovations. Moving to 4×4 pixel binning could let phone makers push sensor resolutions well beyond 108 megapixels. Lower-end smartphones could get the binning option, too.
Another avenue to better photography is HDR, or high dynamic range, which captures a greater span of bright and dark tones. Small camera sensors struggle to record a wide range of tones, which is why companies such as Apple and Google computationally merge several shots to produce HDR photos.
Pixel binning sensors also allow more flexibility at the pixel level. In a 2×2 group, you could commit two pixels to a regular exposure, one to a darker exposure to capture highlights such as bright landscapes, and one to a brighter exposure to capture shadow details.
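Here's a toy sketch of that split-exposure idea. The exposure weights and saturation level are invented for illustration, not taken from any real sensor spec; the point is just that normalizing each pixel's reading by its exposure, and discarding clipped pixels, recovers both highlights and shadows:

```python
# Toy model of per-pixel exposure splitting in a 2x2 same-color group:
# two pixels at normal exposure, one short (protects highlights), one long
# (lifts shadows). All numbers here are illustrative assumptions.

def merge_split_exposures(scene_brightness):
    exposures = [1.0, 1.0, 0.25, 4.0]  # relative exposure time per pixel
    full_well = 100.0                  # a pixel saturates (clips) at this value
    readings = [min(scene_brightness * e, full_well) for e in exposures]
    # Normalize each reading back to a common scale, ignoring clipped pixels.
    usable = [r / e for r, e in zip(readings, exposures) if r < full_well]
    return sum(usable) / len(usable) if usable else full_well

print(merge_split_exposures(50))   # mid-tone: three unclipped pixels agree
print(merge_split_exposures(300))  # bright highlight: only the short pixel survives
```

A mid-tone of 50 comes back as 50.0, and a highlight of 300, which would clip every normal-exposure pixel, is still recovered from the short-exposure pixel.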
OmniVision plans autofocus enhancements, too. Each pixel today has its own microlens designed to gather more light, but a sensor maker could instead place a single microlens over a 2×2, 3×3, or 4×4 group. Each pixel under the shared microlens gets a marginally different view of the scene based on its position, and that disparity helps the camera judge focus distance. The result should be cameras that keep a sharper focus on photo subjects.