44 Comments
melgross - Thursday, September 23, 2021 - link
Considering that the highest sensor resolution on full-frame 35mm digital is 60MP, and the highest for a full-sized medium format sensor is currently 150MP, both requiring very high quality lenses to achieve those resolutions, we can see that this is just another Samsung sales gimmick.
Around 45MP is needed for full-frame 8K recording.
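As a quick sanity check of that last figure (my own arithmetic, not the commenter's; it assumes a 3:2 full-frame sensor recording DCI-8K-wide video from its full width):

```python
# Why "around 45 MP" for full-frame 8K: a 3:2 sensor needs 8192 pixels across
# to cover a DCI 8K frame without upscaling. (Assumption: full sensor width used.)
width = 8192
height = width * 2 // 3                                   # 3:2 aspect -> 5461 rows
print(f"Stills sensor: {width * height / 1e6:.1f} MP")    # ~44.7 MP
print(f"8K DCI frame:  {8192 * 4320 / 1e6:.1f} MP")       # ~35.4 MP (16:9 crop)
```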
boozed - Friday, September 24, 2021 - link
Lenses for SLRs were always big, complicated and expensive because of the minimum back focus length required for the rear element of the lens to clear the swinging mirror. This required complicated optical formulae, especially for short focal length lenses, which in turn required even more lens elements to correct the distortions and aberrations this caused. And if you want a zoom lens, well...
However, it is a lot easier and cheaper to design and manufacture high quality fixed focal length, short back focus lenses for very small sensors.
That said, I still don't understand what Samsung is trying to achieve here.
drajitshnew - Saturday, September 25, 2021 - link
Short back focus was touted as a key advantage of mirrorless cameras. But while current SLR lenses can keep up with current sensors, they are way more expensive and complicated.
melgross - Tuesday, September 28, 2021 - link
Back focal length has little, if anything, to do with the complexity of lenses. It's a matter of lens speed and resolving power. Microscope lenses, for example, use a very thin oil to connect the lens surface to the slide, but have a very high N.A., and can have ten elements or more.
Tessar lenses have just four elements, and can have good sharpness and high contrast, but they're normally f/2.8 to f/3.5. Higher-speed designs are usually six- or seven-element Gauss designs. Lens design is a very complex subject. Trying to blame it on back focus depth is incorrect.
Byte - Tuesday, October 5, 2021 - link
Bragging rights, what else they got?
ToTTenTranz - Thursday, September 23, 2021 - link
Aren't they needlessly losing actual sensor area because of the spacing between pixels?
Surely the "50MPix mode" isn't really using the exact equivalent of a 1.28µm pixel pitch. How much sensor area is being lost here?
I can't help but feel like the GN2 sounds like the only sensible implementation here.
Andrei Frumusanu - Thursday, September 23, 2021 - link
https://www.techinsights.com/sites/default/files/2...
The isolators are around 110-150nm wide, so yes, on a 0.64µm pixel that is actually quite a percentage unless they've shrunk things down even more on newer generation sensors. I don't think anybody expects the binned FWC to be similar to a larger pixel.
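For a rough sense of what "quite a percentage" means, a back-of-the-envelope sketch (how much of a shared isolation wall gets charged to each pixel, and how much light the microlenses recover, are my assumptions rather than figures from the article):

```python
# Crude scale check: how much of a 0.64 µm pitch the isolation wall occupies
# along one axis. The real light loss depends on the wall geometry and on how
# much of it the microlenses claw back, so treat this as order-of-magnitude.
PITCH_UM = 0.64

for wall_nm in (110, 150):
    frac = (wall_nm / 1000.0) / PITCH_UM
    print(f"{wall_nm} nm wall ≈ {frac:.0%} of the pixel pitch")   # ~17-23%
```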
ToTTenTranz - Thursday, September 23, 2021 - link
So roughly 20% of the pixel is being spent on isolators, i.e. not an area that is effective for the sensor.
Wereweeb - Thursday, September 23, 2021 - link
That makes it even worse.
Wereweeb - Thursday, September 23, 2021 - link
Not to mention that both the "200MP" and "50MP" images are actually "reconstituted" by AI, by re-arranging the sub-pixels and running some AI on top of it.
I wonder what the effect would be if, instead of "muh 200 MP", they "fixed" Bayer sensors by using a sensor with 50MP that resolved down to 12.5MP by combining the individual colour pixels, instead of relying on de-mosaicing.
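For what it's worth, the "combine instead of demosaic" idea is easy to express. Here is a minimal sketch, assuming an RGGB quad layout and ignoring black level, white balance and everything else a real pipeline does:

```python
import numpy as np

def quad_to_rgb(raw):
    """Collapse each 2x2 RGGB quad into one RGB pixel, with no demosaicing.

    raw: (H, W) mosaic, H and W even, pattern  R G
                                               G B
    Returns an (H//2, W//2, 3) image: a quarter of the pixel count, but
    every output pixel has directly measured colour.
    """
    r  = raw[0::2, 0::2]
    g1 = raw[0::2, 1::2]
    g2 = raw[1::2, 0::2]
    b  = raw[1::2, 1::2]
    return np.stack([r, (g1 + g2) / 2.0, b], axis=-1)

# Toy usage: an 8x8 mosaic becomes a 4x4 full-colour image.
mosaic = np.random.rand(8, 8)
print(quad_to_rgb(mosaic).shape)   # (4, 4, 3)
```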
Spunjji - Friday, September 24, 2021 - link
I was really hopeful that this is what we'd be getting when they started talking about these modes. Big disappointment. It's easier to sell the higher spatial resolution than good colour definition.
drajitshnew - Saturday, September 25, 2021 - link
I also think that would be a good idea.
I see two camps in the comments on my phone's 64 MP sensor: those who shoot in 64 MP always criticize, but those shooting at 16 MP are at least satisfied.
GeoffreyA - Thursday, September 23, 2021 - link
It's a bit like the size metric of a certain organ, when the truth is, it's the technique that counts, not the size.
Samsung: "Look what a big megapixel I've got."
Apple: "Friend, it's the technique. Find the Sacred Pixel, use the digits slowly, and come hither to the Temple of Joy."
Wereweeb - Thursday, September 23, 2021 - link
Comparing similar technologies, it is the size that counts. Only that it's the size of the actual pixels and the area of the sensor that are important, not how many pixels there are. More light = more information, and pixel binning is usually just used to de-noise the information. 16-to-1 pixel binning makes no sense to me based on what I know about camera sensors.
GeoffreyA - Thursday, September 23, 2021 - link
Jokes aside, I suppose what I meant by technique was whatever goes into making a well-captured, excellent picture. No doubt, the sensor pitch will play a large part in that.
Prime957 - Friday, September 24, 2021 - link
Best. Comment. Ever.
drajitshnew - Saturday, September 25, 2021 - link
More power to you
shabby - Thursday, September 23, 2021 - link
Can't wait for the first gigapixel smartphone sensor 😂
porina - Thursday, September 23, 2021 - link
We're well into the era of computational photography and we need to break the association that sensor points have any close relation to output pixels. I've long thought it would be nice to have many more sensor points than output pixels (at least 10x), as it should offer better noise control and positional colour accuracy than Bayer output. The tradeoff is silicon efficiency (how much detection area is lost to the inefficiencies of implementing smaller detectors) and a higher processing load to make sense of that data.
boozed - Friday, September 24, 2021 - link
Some of us are still wedded to the idea that a photograph should at least begin as an accurate representation of what you were pointing the camera at. There's a philosophical debate in here too, but using machine learning to fill in gaps and create detail that might not have been there doesn't sit very well with me. It blurs the lines between photography and computer-generated imagery.
GeoffreyA - Friday, September 24, 2021 - link
I agree with this sentiment. A camera should capture a picture as the eye would see it directly.
Prime957 - Friday, September 24, 2021 - link
What if:
Sensor -> A.I./Computation -> Pixel
Correlates to:
Eye -> Nerves/Synapses/Unknown -> Brain
I'm pretty sure our brains are not "seeing" exactly what our eyes are "seeing". I've heard somewhere that, technically, the image at the back of our eyes is upside down and mirrored, or something like that. Isn't it crazy how nature and technology can go hand in hand at times?
mode_13h - Saturday, September 25, 2021 - link
Your brain is doing a lot more than just flipping the image. It's also doing spatio-temporal interpolation across your blind spot, and a similar sort of resampling to compensate for the vastly different densities of photoreceptors in your fovea vs. periphery. Not to mention the relatively lower density of color-sensing cones. In order to support this fancy processing, your eyes are continually performing micro-movements, not unlike how Intel's XeSS jitters the camera position to facilitate superresolution.
To put it another way, the image you *think* you see exists only in your brain. It's not the raw signal from the photoreceptors in your retina, that's for sure!
GeoffreyA - Saturday, September 25, 2021 - link
Excellent description. It's interesting how this muxing brings it all together out of bits and pieces. There are disturbances of the visual system where a person can't see motion, and even stranger effects. Also, in line with the lower density of colour cones, isn't colour really "an extra," if we may call it that? The outlines and non-colour elements seem to carry the mass of the visuals. (Cel animation, black-and-white films, anyone?)
GeoffreyA - Saturday, September 25, 2021 - link
Prime957, you're quite right. It seems technology often ends up copying nature, unwittingly or otherwise. Perhaps it's because nature has already got the best designs.
Foeketijn - Sunday, September 26, 2021 - link
You are so right about that one. We see a tree with leaves, but if you were to change those leaves into balloons or whatever, we would still see leaves.
We can only see color in the center of our view, yet you think you can see what color your whole view is.
mode_13h - Saturday, September 25, 2021 - link
LOL. As Prime957 said, you're not seeing an image in the way it actually hits your retina. The computational processing by your brain is vastly more intense than what porina is talking about.
GeoffreyA - Saturday, September 25, 2021 - link
Yes, you're right. It just didn't strike me when I made that comment. I suppose what I'm looking for is pictures coming out like they used to on film, whereas something looks off and plastic in digital today (even in movies, for that matter).
boozed - Saturday, September 25, 2021 - link
I'm guessing what you see as "plastic" is either a lack of grain or overzealous noise reduction?
There were a lot of different film stock options available to photographers of that era and each one had its own specific (and different) response to the incoming light. By contrast, the image processing pipelines in modern digital cameras are generally designed to record images that are as neutral as possible, so the raw images that come out of them will look pretty flat until adjustments are applied. It's then up to the photographer or editor to "interpret" the scene (which is another philosophical debate...)
GeoffreyA - Saturday, September 25, 2021 - link
It's not just a plastic appearance one sometimes finds; I find that tolerable, despite liking grain. Rather, something looks off in many movies shot on digital cameras (and, to a lesser extent, still pictures). This will be controversial, but "fake" is how I describe it to myself. And it's not all of them. Some, shot digitally, look fantastic. Blade Runner 2049 is one example.
drajitshnew - Saturday, September 25, 2021 - link
Well, we are already pretty good at making whole computer-generated worlds. Why bother with the cost, size and complexity of a physical camera?
Take a few portraits with something like a Sony A7R4 or A1, with a 50 mm or 85 mm prime, then your phone uses map coordinates and generates a series of super-high-quality selfies.
No need to wait for the golden hour or the blue hour, no need to scramble into awkward positions, just GLORIOUS AI.
GeoffreyA - Sunday, September 26, 2021 - link
Even better. It'll anticipate what sort of a picture you're looking for. Feeling ostentatious? Wanting to show off your holidays? No sweat. It'll pick up your mood just like that, and composite you with the Eiffel Tower at the edge of the frame. As an extra, another of you sipping coffee in Milan. All ready to upload to Facebook and the rest, hashtags included.
kupfernigk - Thursday, September 23, 2021 - link
I have to admit I do not understand the article and I would like someone to explain it to me.
There are 4 pictures showing 4 different colour filter layouts. Which one is correct? The colour filter is fixed for a given sensor.
If the colour filter has 2.56 micrometre square resolution per RGB colour, then the effective "real" pixel size is 5.12 micrometres on a side, which is what is needed to produce a single [r,g,b] colour point. This means that the real resolution is about 3Mpx, i.e. that is the number of points that can be assigned an [r,g,b] intensity. The "HP1" diagram suggests this is actually the case. On the other hand, the left-hand image shows [r,g,b] points 1.12 micrometres square, for 50Mpx resolution, though the real resolution would be a quarter of that, 12Mpx.
It would be helpful if the author could clarify. I admit my optics is long out of date - I was last involved with semiconductor sensors 35 years ago - but the laws of optics haven't changed in the meantime.
Pixel binning AIUI is a technique for minimising the effect of sensor defects by splitting subpixels into groups and then discarding outlying intensities in each group; at least that was what we were doing in the 1980s. Is this now wrong?
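That older, defect-masking style of binning is easy to sketch. The following is a hypothetical illustration of what's described above, not how the HP1 (or any modern phone sensor, which mostly sums or averages charge) actually bins:

```python
import numpy as np

def outlier_rejecting_bin(raw, k=4):
    """Bin k x k groups, discarding the lowest and highest sample in each group."""
    h, w = raw.shape
    g = raw[:h - h % k, :w - w % k].reshape(h // k, k, w // k, k)
    g = g.transpose(0, 2, 1, 3).reshape(h // k, w // k, k * k)
    g = np.sort(g, axis=-1)[..., 1:-1]      # drop the outliers in each group
    return g.mean(axis=-1)

# A single hot pixel in a 4x4 group no longer skews the binned value.
tile = np.full((4, 4), 100.0)
tile[1, 2] = 4095.0                          # stuck-high defect
print(outlier_rejecting_bin(tile))           # [[100.]]
```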
Andrei Frumusanu - Thursday, September 23, 2021 - link
No camera sensor has colour resolution equal to its advertised pixel resolution. All colour filter sites end up being actual logical pixels in the resulting image, but they take colour information from the adjacent pixels to get to your usual RGB colour; the process is called demosaicing or de-Bayering.
In the graphic, only the top image showcases the physical colour filter, which is 2.56 microns. The ones below are supposed to showcase the "fake" logical Bayer output that the sensor sends off to the SoC.
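To put numbers on that for the HP1 specifically (using the pitches quoted in the article; the remosaicing itself is Samsung's own, undisclosed processing):

```python
# HP1 pixel-count sanity check, using pitches from the article.
sensor_mp = 200            # advertised photosites, in millions
pixel_um  = 0.64           # native photosite pitch
filter_um = 2.56           # physical colour-filter cell pitch (4x4 photosites)

sites_per_colour_cell = (filter_um / pixel_um) ** 2        # 16
print(f"Distinct colour samples: {sensor_mp / sites_per_colour_cell:.1f} M")        # 12.5 M
print(f"Binned output modes: {sensor_mp / 4:.0f} MP and {sensor_mp / 16:.0f} MP")   # 50 / 12.5
```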
Frenetic Pony - Thursday, September 23, 2021 - link
I'm glad to see someone take diffraction limit and such into account. It's like some Samsung exec is sitting in their office demanding more pixels without regard to any other consideration whatsoever. So the underlings have to do what they're told.eastcoast_pete - Thursday, September 23, 2021 - link
Definitely marketing! For all the reasons listed in the article and some comments here. Reminds me of what is, in microscopy, referred to as "empty magnification". The other question Samsung doesn't seem to have an answer to is why they also make the GN2 sensor. If Samsung gets smart, they'll use that sensor in their new flagship phone. This 200 MP sensor is nonsensical at the pixel sizes it has and the loss of area to borders between pixels. If that sensor were 100 times the area and then used in a larger-format camera, 200 MP could make sense; in this format and use, it doesn't.
GC2:CS - Friday, September 24, 2021 - link
I do not understand the premise.
To my knowledge, pretty much all smartphone cameras are diffraction limited. For example, the iPhone telephoto has 1 micron pixels, which is not sufficient for 12MP resolution.
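Rough numbers behind that point (assuming green light at 550 nm and an ideal, diffraction-limited lens; real phone apertures and lens quality vary):

```python
# Airy-disk diameter vs. pixel pitch: a crude diffraction check.
WAVELENGTH_UM = 0.55                          # green light

def airy_diameter_um(f_number):
    return 2.44 * WAVELENGTH_UM * f_number

for f in (1.8, 2.2, 2.8):
    d = airy_diameter_um(f)
    print(f"f/{f}: Airy disk ≈ {d:.2f} µm, "
          f"≈ {d / 0.64:.1f}x the HP1's 0.64 µm pitch")
```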
melgross - Tuesday, September 28, 2021 - link
I believe it’s 1.7.eastcoast_pete - Friday, September 24, 2021 - link
I also wonder if that approach (many, but tiny, pixels) also makes it easier for Samsung to still use imaging chips with significant numbers of defects. The image captured by this and similar sensors that must bin to work is heavily processed anyway, so is Samsung trying to dilute defects to irrelevance by sheer pixel numbers? In contrast, defective pixels are probably more of a fatal flaw for a sensor with large individual pixels like the GN2. Just a guess, I don't have inside information, but if anyone knows how many GN2 chips are rejected due to too many defects, it may shed some light on why this otherwise nonsensical sensor with a large number of very tiny pixels exists. Could be as simple as manufacturing economics. Never mind what that does to image quality.
mode_13h - Saturday, September 25, 2021 - link
Actually, it would *not* be a waste, if they'd just put a plenoptic lens in front of that thing!
On a related note, I wonder whatever happened to Lytro's IP.
melgross - Tuesday, September 28, 2021 - link
The Lytro was a waste of time. It used about ten times as many pixels as in the final photo to get the depth. And even that was terrible. When I first heard of it I was pretty stoked. But the reality was that when I tried it, the images were just a few hundred pixels across. You could only adjust the focus to a few distances.
Computational photography is a far better solution, and continues to improve.
mode_13h - Wednesday, September 29, 2021 - link
You're awfully quick to write off lightfield cameras. To truly experience their potential, you want to view the resulting image on a lightfield display, like the ones from Looking Glass Factory. Or a lightfield HMD like the one Magic Leap was trying to build.
Anyway, I think it's a fine answer to the question of what to do with all of the megapixels in this sensor. According to your ratio of 10:1, we'd still get better than a 4K image.
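A quick check on that last sentence, taking the roughly ten-to-one figure at face value:

```python
# 200 MP of photosites at ~10 sensor points per output pixel still leaves
# well over a 4K frame's worth of output pixels.
print(f"{200e6 / 10 / 1e6:.0f} MP output vs {3840 * 2160 / 1e6:.1f} MP for 4K UHD")  # 20 vs 8.3
```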
torrewaffer - Saturday, October 23, 2021 - link
Why not just make a very large sensor at 12MP? Is there any technical limitation? Why do they keep making sensors with so many megapixels?