Mobile photography has neither the universal benchmarks of the PC-performance crowd (performance and power consumption) nor the metaphysics of the audio crowd, yet it is still a "black box": users have no view into what actually happens, so plenty of misconceptions and obscure trivia have piled up.
This time I'll answer some everyday questions and share some hot and cold trivia about mobile photography. It goes down better when paired with our earlier primer, "Simple Science Popularization: What Does Computational Photography Calculate". Once you're done, consider yourself certified: "You have exceeded 90% of users in the country."
Camera lenses
Mobile phone camera lenses are almost all made of plastic/resin.
The "P" in an 8P lens is not the pinyin abbreviation of 片 (piàn, "piece"), but the English abbreviation of Plastic.
The "G" of 1G6P lens is the abbreviation of Glass. Redmi K40 Game Plus vivo X70 Pro+、vivo X80 Pro、 Some machines such as Xiaomi Civi 2 are in use.
The core use of glass lenses for mobile phones is Reduce thickness And reduce the difficulty of assembly, followed by the ability of glass to withstand high temperatures and can be coated with more layers. The effect of 1G6P is similar to that of 8P, but the transmittance is higher and the thickness is lower. The only disadvantage is that it is expensive.
Co-branding and self-developed chips
Why do phones like the Panasonic CM1 and the Huawei, Sharp and Xiaomi models never show Leica's red-dot "coke badge"? That one is on Leica: its rule is that wherever the red-dot badge appears, no phone maker's logo may appear alongside it, so phones can only carry Leica's text logo.
Leica is quite the "player", with a long list of partners: Panasonic, Huawei, Sharp, Xiaomi, Insta360, JMGO projectors, Hisense projectors, and more.
OPPO reportedly tried to partner with ZEISS but couldn't reach a deal, and Leica was snapped up by Xiaomi first, so in the end OPPO had to "share" Hasselblad with OnePlus (this is gossip).
In 2005 the Nokia N90, the first Nseries product, shipped with a Carl Zeiss lens, 16 years before the vivo X60 series carried the ZEISS badge in 2021.
After Huawei parted ways with Leica, it launched its own imaging brand, XMAGE. Some called it the first phone maker to throw off camera co-branding, but Nokia already had its own imaging brand, PureView, back in 2012, debuting on the legendary monster that was the Nokia 808 PureView.
Dedicated silicon for computational photography is nothing new: in 2017 Google put the Pixel Visual Core, co-designed with Intel, in the Pixel 2 series, Huawei's HiSilicon chips gained an NPU, and Apple's A-series picked up the "Bionic" suffix. Years later, vivo's V1+ and OPPO's MariSilicon X likewise started out as computational-photography chips.
Self-developed ISPs used to fall under a category with a much fancier ring to it: external ISPs. And don't blame the big manufacturers for rolling their own, because the ISPs integrated into Qualcomm and MediaTek chips really weren't good enough.
Back in 2012 the HTC One X already had a dedicated ImageChip (5 shots per second, 99-shot bursts, stills while recording video), more than 8 years ahead of the self-developed ISPs/NPUs of vivo, OPPO and Xiaomi.
Sensors
Size comparison of mainstream sensors ↑
The digital camera was invented in 1975 by Kodak, the film-camera giant, which was eventually finished off by its own invention.
The mainstream 1/2-inch sensor of 2019, the IMX586, is only about 5% larger in area than 2018's mainstream 1/2.3-inch IMX363 (because an "inch" of optical format corresponds to 16mm of diagonal in the 1/2-inch class but 18mm in the 1/2.3-inch class).
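A quick sanity check of that 5% figure, as a minimal sketch using only the conventions quoted above (16mm per "inch" for the 1/2-inch class, 18mm for the 1/2.3-inch class) and assuming both sensors share the same aspect ratio:

```python
# Area comparison of the "1/2-inch" vs "1/2.3-inch" optical formats.
# Assumes 16mm per "inch" for the 1/2" class and 18mm for the 1/2.3" class,
# and identical aspect ratios, so area scales with the diagonal squared.
diag_half_inch_mm = 16 / 2     # ~8.0 mm diagonal
diag_1_2_3_mm = 18 / 2.3       # ~7.8 mm diagonal

area_ratio = (diag_half_inch_mm / diag_1_2_3_mm) ** 2
print(f"area ratio ~ {area_ratio:.3f}")   # ~1.045, i.e. only about 5% larger
```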
Two sensors both labelled "1/2.3 inch" can differ in area by 14% (IMX377 vs. IMX230); two labelled "1 inch" can differ by 12%.
Low-light noise is contributed mainly by the blue pixels: bare silicon's photoelectric conversion efficiency peaks at around 1000nm in the infrared, sits at only 20%-60% of that peak across the visible band, and drops to about 15% of the peak at the blue end.
Unit pixels cannot shrink forever: red light spans 620-750nm (0.62-0.75μm), and once a sub-pixel is smaller than the wavelength its effectiveness falls off sharply.
And now? The smallest unit pixels are already below that theoretical limit: Samsung's 200MP HP3 squeezes each pixel down to 0.56μm, and OmniVision has sensors of the same pitch, though none have shipped in mass-production phones yet. The good news is that these sensors default to 4-in-1 or 9-in-1 binned output, for everyday equivalent pixels of 1.12μm or 1.68μm, well clear of the physical bottleneck; the bad news is that the full-resolution mode is basically abandoned.
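Putting those pitch numbers next to the red-light wavelength from the previous paragraph (a trivial sketch, using only the figures already quoted):

```python
# Unit pixel pitch vs. the red wavelength band, before and after binning.
red_band_um = (0.62, 0.75)      # red light, 620-750 nm

pixel_um = 0.56                 # Samsung HP3 unit pixel
binned_4in1 = pixel_um * 2      # 2x2 binning -> 1.12 um equivalent pitch
binned_9in1 = pixel_um * 3      # 3x3 binning -> 1.68 um equivalent pitch

print(pixel_um < red_band_um[0])               # True: a single pixel is sub-wavelength
print(f"{binned_4in1:.2f} {binned_9in1:.2f}")  # 1.12 1.68 -> comfortably above it
```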
The 0.64μm parts (the 200MP HP1, the 100MP HM6 and the 50MP JN1) are already embarrassments at their respective resolution tiers, so it's hard to imagine how 0.56μm will turn out.
Sony's smallest unit pixel is 0.8μm (IMX586/582/598, IMX686/682, IMX709/787, etc.). Why doesn't Sony chase tiny pixels the way Samsung and OmniVision do? Because its own fabs' processes are too far behind to shrink the unit pixel cheaply (Samsung and Sony build their core sensors in-house, while OmniVision, and part of Sony's output, is made by outside foundries). Taking OmniVision's publicly documented PureCel Plus sensors as an example, 1μm unit pixels use a 45nm process and 0.8μm pixels use 28nm, whereas Sony's own fabs are concentrated at 40nm and above.
Sensors are color-blind: they sense only light intensity and cannot tell colors apart. In practice, red, green and blue filters (the common RGGB Bayer array; there are other schemes such as RYYB) make each pixel record just one of the three primaries, trading away two-thirds of the information for the ability to record color at all. So while a 1-million-pixel screen has 3 million sub-pixels, a 1-million-pixel CMOS has only 1 million "sub-pixels": two-thirds of the information in the photos we see every day is interpolated and "color-guessed" back by the demosaicing algorithm.
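A minimal illustration of what that "color guessing" means, assuming an RGGB layout and plain bilinear interpolation (real demosaicing algorithms are far more sophisticated; this only shows the idea):

```python
import numpy as np
from scipy.signal import convolve2d

def bayer_mosaic(rgb):
    """Sample an H x W x 3 image through an RGGB pattern: each photosite keeps one color."""
    h, w, _ = rgb.shape
    mosaic = np.zeros((h, w))
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]   # R on even rows / even cols
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]   # G
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]   # G
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]   # B
    return mosaic

def demosaic_bilinear(mosaic):
    """Guess back the two missing colors per pixel by bilinear interpolation."""
    h, w = mosaic.shape
    masks = np.zeros((h, w, 3), dtype=bool)
    masks[0::2, 0::2, 0] = True               # R sites
    masks[0::2, 1::2, 1] = True               # G sites
    masks[1::2, 0::2, 1] = True
    masks[1::2, 1::2, 2] = True               # B sites
    kernel = np.array([[0.25, 0.5, 0.25],
                       [0.50, 1.0, 0.50],
                       [0.25, 0.5, 0.25]])
    out = np.zeros((h, w, 3))
    for c in range(3):
        vals = np.where(masks[..., c], mosaic, 0.0)
        wts = masks[..., c].astype(float)
        est = convolve2d(vals, kernel, mode="same") / np.maximum(
            convolve2d(wts, kernel, mode="same"), 1e-6)
        out[..., c] = np.where(masks[..., c], mosaic, est)   # keep measured samples
    return out

rgb = np.random.rand(8, 8, 3)                 # toy "scene"
recon = demosaic_bilinear(bayer_mosaic(rgb))
print(np.abs(recon - rgb).mean())             # the part the algorithm had to guess
```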
The Bayer array, the most common filter layout in modern sensors, was invented by Bryce Bayer, a Kodak scientist.
That said, there is no single "optimal" filter arrangement. The most common is RGGB (along with its GBRG/GRBG/BGGR variants), which has decades of tuning behind it: accurate, easy-to-adjust color, but a lot of light lost. Huawei uses RYYB (Y for yellow: lets in more light, but prone to red casts and blown highlights); OPPO is fond of RGBW (W for white, i.e. no filter: more light, but washed-out color); there are also RWB (green simply swapped for unfiltered white, again with color problems) and MONO (black and white: maximum light intake, but, well, black and white).
ToF (Time of Flight) isn't as exotic as it sounds. The laser autofocus of years past was a form of ToF, available as early as the 2014 LG G3 and later used on the Huawei P9, HTC 10 and many other models (the STMicroelectronics ToF sensors of that era only reached about 2 meters); it then faded as dual-pixel autofocus took over. An even humbler ToF application is the proximity sensor (the thing that blanks the screen during calls and in your pocket), which the 2016 iPhone 7 was already using.
Algorithm/Computational Photography
Computational photography on phones is not as remote as it sounds: in essence it automates, and bakes into hardware, the manual/semi-automatic shooting techniques and post-processing of experienced photographers.
Why phones used to show a brief black frame with a "click" when taking a photo (it has mostly disappeared now): the CMOS has separate preview and capture modes, and to save power and ease the ISP's load the preview stream may be only 1080p or even 720p. The short blackout when you press the shutter is the preview stream being briefly interrupted while the CMOS switches into capture mode.
A few years ago Google Camera ports could be grafted onto all sorts of phones (famously becoming the "last missing piece" of the Xiaomi Mi 6), because the post-processing pipeline wasn't strongly tied to hardware back then; the basic version would run on anything supporting NEON. But unofficial devices lacked the matching calibration (noise models and so on), so the processing could come out too light or too heavy, colors could drift, and the results never quite reached their best.
For most automatic night-mode shots, the shutter time in the EXIF data is no longer meaningful (a night shot that took several seconds may show a 1/4-second shutter). What it records is not the total duration of the multi-frame stack but the shutter speed of the correctly exposed EV0 reference frame.
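You can check this yourself; here is a minimal Pillow sketch for reading the ExposureTime tag (the file name is a placeholder):

```python
from PIL import Image

EXIF_SUBIFD = 0x8769      # pointer to the Exif sub-IFD
EXPOSURE_TIME = 0x829A    # the ExposureTime tag

img = Image.open("night_mode_shot.jpg")        # placeholder file name
exif = img.getexif().get_ifd(EXIF_SUBIFD)
print("ExposureTime:", exif.get(EXPOSURE_TIME))
# On an automatic night-mode shot this is typically the EV0 frame's shutter
# (e.g. 1/4 s), not the several seconds the capture actually took.
```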
HDR algorithms degrade fine textures (hair, fabric weave and other dense detail), and the more frames a stack merges, the higher the chance of something going wrong. That is why, when light is good, Google and many other manufacturers merge only two frames (Apple's Smart HDR uses 9, Google's HDR+ uses 2-8).
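A toy sketch of the two-frame idea in linear space, assuming the second frame is exactly one stop under and the frames are already perfectly aligned (real pipelines align, denoise and weight far more carefully; none of this is any vendor's actual method):

```python
import numpy as np

def merge_two_frames(ev0, ev_minus1):
    """Toy two-frame HDR merge of linear-light images in [0, 1].

    ev0: the normally exposed frame; ev_minus1: the same scene one stop darker.
    Trust EV0 where it is well exposed, fall back to the darker frame in the
    highlights that EV0 has clipped.
    """
    w = np.clip((0.95 - ev0) / 0.2, 0.0, 1.0)        # weight drops to 0 near clipping
    # Bring the darker frame back to EV0 brightness (one stop = x2); the result
    # may exceed 1 in recovered highlights, which a real pipeline would tone-map.
    return w * ev0 + (1.0 - w) * (ev_minus1 * 2.0)

scene = np.random.rand(4, 4) * 1.5     # a scene with highlights above EV0's range
ev0 = np.clip(scene, 0.0, 1.0)         # the normal exposure clips them
ev_minus1 = scene / 2.0                # one stop darker keeps the highlight detail
print(merge_two_frames(ev0, ev_minus1))
```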
Although self-developed camera algorithms are all the rage, algorithm vendors are still the biggest players. Well-known ones include ArcSoft (market cap around 10 billion; 2021 revenue 570 million, R&D spending 270 million, 444 R&D staff), SenseTime (market cap around 30 billion, about 2,000 people), Megvii (valued around 20 billion, about 1,400 people in total), Corephotonics (Israel, valued around 250 million, about 50 R&D staff) and Morpho (Japan, market cap around 760 million, R&D headcount unknown).
How much is a phone camera algorithm actually worth? ArcSoft's 2019 annual report mentions that under its per-unit licensing model, the algorithm fee averages 0.55 yuan per phone.
Daily use
Why do apps in the background get killed when the camera is opened?
Because the camera app genuinely devours memory. To deliver multi-frame synthesis plus a zero-shutter-lag shutter, manufacturers keep the sensor capturing continuously from the moment the camera opens; when you press the shutter, the most recent 2-15 frames are pulled out for merging (Apple's Smart HDR and Deep Fusion typically use 9). Add a processing pipeline that now runs in the RAW domain (loosely, working on RAW data rather than JPEGs), the assorted algorithm models, and the machine-gun shutter response of some models, and it would be a wonder if memory didn't blow up.
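A toy sketch of the zero-shutter-lag buffer described above (the frame sizes, counts and the plain averaging are stand-ins for a real align-and-merge pipeline, not any vendor's implementation):

```python
from collections import deque

import numpy as np

class ZeroShutterLagBuffer:
    """Keep the most recent frames in RAM so the shutter can 'go back in time'."""

    def __init__(self, depth=15):
        self.frames = deque(maxlen=depth)     # oldest frames are dropped automatically

    def on_new_frame(self, frame):
        self.frames.append(frame)             # runs continuously while the camera is open

    def on_shutter(self, n=9):
        stack = list(self.frames)[-n:]        # grab the latest n frames (e.g. 9, Apple-style)
        return np.mean(stack, axis=0)         # stand-in for real align-and-merge

buf = ZeroShutterLagBuffer()
for _ in range(30):                           # camera open, nobody has pressed anything yet
    # small stand-in frames; a real buffer holds 12MP RAW frames, tens of MB each
    buf.on_new_frame(np.random.randint(0, 1024, (1200, 1600), dtype=np.uint16))
photo = buf.on_shutter()
print(photo.shape, len(buf.frames))           # (1200, 1600) 15
```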
Looked at the other way, if not for computational photography's huge memory appetite, Apple would not have let the iPhone grow from 3GB of RAM in 2017 to 6GB today. Telling yourself that makes it easier to swallow. (Apple stops at merging 9 frames rather than piling on more because the power and memory budgets aren't there, the marginal benefit keeps shrinking, and too many frames are harder to align and more likely to smear.)
People say Android phones shoot better photos than iPhones, so why do Android users' WeChat Moments look worse than iPhone users'? Blame Zhang Xiaolong: WeChat on Android compresses images harder and to a lower resolution. Zhang Xiaolong!!!
Why do third-party Android apps take worse photos and videos than the native camera? Because third-party apps (including the photo/video entry built into WeChat) cannot, or do not, call the native camera's algorithms, and some even record video from the preview stream, so there is no mystery about the mushy results. (Which is why improving image quality in third-party apps became one of the goals of external chips such as Google's Pixel Visual Core in 2017 and OPPO's MariSilicon X in 2021.)
Why is the field of view narrower when shooting video than when shooting photos? Because video doesn't read out every pixel. Take a common 4:3 12MP sensor: a 4K 16:9 frame (the aspect-ratio mismatch is one source of loss) uses only 8.29 million of the 12 million pixels, and an 8K frame uses only 33.18 million pixels of a 48MP mode, so the field of view naturally shrinks. In theory, binning or oversampling could avoid the crop, but the compute and power costs are too high.
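The arithmetic behind those figures, assuming a straight 1:1 centre crop from the sensor's full readout:

```python
# Why a 1:1 pixel readout for video narrows the field of view.
sensor_12mp = 4000 * 3000        # a 4:3 12MP sensor: 12,000,000 photosites
frame_4k = 3840 * 2160           # 8,294,400 pixels per 4K 16:9 frame
sensor_48mp = 8000 * 6000        # the 48MP full-resolution mode
frame_8k = 7680 * 4320           # 33,177,600 pixels per 8K frame

print(f"{frame_4k / sensor_12mp:.0%}")   # ~69% of the photosites used for 4K
print(f"{frame_8k / sensor_48mp:.0%}")   # ~69% for 8K out of 48MP
print(f"{4000 / 3840:.2f}x")             # ~1.04x horizontal crop, plus the 16:9 height loss
```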
In recent years every flagship, without exception, will occasionally spit out a few "dud" shots that the algorithms never got around to processing because they couldn't keep up with back-to-back shooting; these are colloquially known as "half-baked" frames. iPhones suffer only mildly, Android flagships noticeably. When shooting bursts or grabbing a quick snap, cherish the first few shots: if those aren't sharp, the later ones will most likely be smeared.
Even in 2022, most Android flagships cannot hold out for 30 minutes of 4K 60fps video because of heat (never mind the quality)... Incidentally, the 30-minute recording limit on dedicated cameras was never about overheating: it existed so the devices wouldn't be classified as video camcorders for import/export, which carry higher tariffs (though it's true that plenty of cameras do overheat).
What is it like to have mastered a pile of photography trivia fit for dinner-table conversation, perhaps even signing up for a 68,888-yuan masterclass for a round of "treatment", convinced it will come in handy at meals and parties?
A: It's like dreaming last night that my girlfriend broke up with me and crying bitterly, then waking up the next morning, realizing I don't have a girlfriend, and crying even harder.
Follow our Weibo @ Love Computer
Follow our WeChat official account: playphone
You can also follow our Bilibili account: Love Computer