When you take a photo against the light, the face in the photo often comes out almost completely black, like a lump of charcoal; you couldn't recognize the person even if she were your own mother. That is the tragedy of backlit photography. Yet in the same scene our eyes see everything clearly, down to the details and wrinkles on the face. Why is what the eye sees so different from what the phone captures?
Why can't the phone capture what the human eye sees? The answer lies in exposure latitude, also called dynamic range: the camera's ability to record the range of brightness in a scene. The difference between the brightest and darkest parts of the subject can be expressed as a contrast ratio; for example, if the brightest part of the scene is 50 times brighter than the darkest, the contrast ratio is 1:50. The largest contrast a phone camera can reproduce faithfully is its exposure latitude.
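To make that figure concrete, photographers usually express contrast in stops, where each stop is a doubling of brightness; the stop framing is standard photography usage rather than something from the original article. A minimal sketch in Python:

```python
import math

def contrast_ratio_in_stops(ratio):
    """Convert a scene contrast ratio (e.g. 50 for 1:50) into stops.

    One stop is a doubling of brightness, so the stop count is log2(ratio).
    """
    return math.log2(ratio)

print(contrast_ratio_in_stops(50))      # the 1:50 scene above: ~5.6 stops
print(contrast_ratio_in_stops(50000))   # the eye's reputed 1:50000 range: ~15.6 stops
```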
The brightest light the human eye can perceive is said to be about 50,000 times the dimmest, which is why our naked eyes can still make out a backlit scene. Digital cameras, phone cameras included, cannot faithfully render a scene with such an extreme brightness range: the latitude of a phone's image sensor is low compared with the human eye. In high-contrast conditions like backlighting, a digital photo therefore ends up either with the sky correctly exposed and the face dead black, or with the face correctly exposed and the sky blown out; in other words, either the highlights clip or the shadow detail is lost.
The sky is correctly exposed, but the shadows are crushed to black and lose their detail
The shadows are better exposed, but the overexposed sky loses its detail
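The trade-off shown in the two photos above can be simulated in a few lines. This is only an illustrative sketch: the 50,000:1 scene comes from the human-eye figure above, and the sensor ceiling of 256 is an assumed 8-bit-style limit, not a real sensor spec.

```python
import numpy as np

def expose(scene_luminance, ev, sensor_max=256.0):
    """Simulate one exposure of a high-contrast scene.

    scene_luminance: linear scene brightness (arbitrary units)
    ev: exposure compensation in stops; each +1 EV doubles the captured light
    sensor_max: brightest value the sensor can record before clipping to white
    """
    captured = scene_luminance * (2.0 ** ev)
    # Anything above sensor_max blows out; anything near zero sinks into black.
    return np.clip(captured, 0.0, sensor_max)

# A backlit scene: the sky is 50000x brighter than the shadowed face.
scene = np.array([1.0, 50000.0])  # [face, sky]

print(expose(scene, ev=0))  # [  1. 256.] -> sky blown out, face nearly black
print(expose(scene, ev=7))  # [128. 256.] -> face readable, sky still blown out
```

No single EV setting keeps both values inside the sensor's range, which is exactly the one-or-the-other dilemma described next.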
When shooting digital photos in a high-contrast scene, we often have to increase exposure to preserve shadow detail, or reduce it to keep detail in the highlights, always sacrificing one for the other. Some cameras now carry sensors with higher latitude, but they still fall far short of the human eye, and backlit faces still come out black. To improve this, people invented HDR shooting to simulate the effect of the human eye.
In the past, HDR photography enthusiasts would use Photoshop or similar software to merge multiple photos taken with an SLR into a single HDR image. As phone performance has improved, we can now shoot several photos at different exposures and merge them into an HDR image directly on the phone. Take the familiar iPhone: with HDR turned on, it shoots three photos in quick succession at different exposures, namely underexposed, normally exposed, and overexposed, and then merges the three into one. After HDR merging, detail in both the dark and bright parts of the photo is improved.
In the non-HDR photo, the scenery on the ground is dark
The HDR photo retains detail in both the sky and the ground
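Apple has not published the exact merging pipeline the iPhone uses, so the sketch below stands in with one well-known open technique: Mertens exposure fusion, as implemented in OpenCV's cv2.createMergeMertens. The input file names are placeholders for the three bracketed frames described above.

```python
import cv2

# Three bracketed frames of the same scene, as described above:
# underexposed (sky detail), normal, overexposed (shadow detail).
# The file names are placeholders.
frames = [cv2.imread(f) for f in ("under.jpg", "normal.jpg", "over.jpg")]

# Mertens exposure fusion weights each pixel by contrast, saturation and
# well-exposedness, then blends the frames directly into one result.
fused = cv2.createMergeMertens().process(frames)  # float32 image, roughly in [0, 1]

cv2.imwrite("hdr_fused.jpg", (fused * 255).clip(0, 255).astype("uint8"))
```

Exposure fusion blends the bracketed frames straight into a displayable image with no separate tone-mapping step, which is one reason this family of techniques is popular on phones.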
Related articles
What's the difference in photos taken with HDR turned on?
Stacked CMOS analysis (II): a deep dive into the OPPO Find 5 camera
Explained: just how powerful is hardware-supported HDR video recording?