Sea Shadow - Friday, February 22, 2013 - link
I am still trying to digest all of the information in this article, and I love it!
It is because of articles like this that I check Anandtech multiple times per day. Thank you for continuing to provide such insightful and detailed articles. In a day and age where other "tech" sites are regurgitating the same press releases, it is nice to see anandtech continues to post detailed and informative pieces.
Thank you!
arsena1 - Friday, February 22, 2013 - link
Yep, exactly this. Thanks Brian, AT rocks.
ratte - Friday, February 22, 2013 - link
Yeah, got to echo the posts above, great article.
vol7ron - Wednesday, February 27, 2013 - link
Optics are certainly an area the average consumer knows little about, myself included.
For some reason it seems like consumers look at a camera's MP like how they used to view a processor's Hz; as if the higher number equates to a better quality, or more efficient device - that's why we can appreciate articles like these, which clarify and inform.
The more the average consumer understands, the more they can demand better products from manufacturers and make better educated decisions. In addition to being an interesting read!
tvdang7 - Friday, February 22, 2013 - link
Same here, they have THE BEST detail in every article.
Wolfpup - Wednesday, March 6, 2013 - link
Yeah, I just love in depth stuff like this! May end up beyond my capabilities but none the less I love it, and love that Brian is so passionate about it. It's so great to hear on the podcast when he's ranting about terrible cameras! And I mean that, I'm not making fun, I think it's awesome.
Guspaz - Friday, February 22, 2013 - link
Is there any feasibility (anything on the horizon) to directly measure the wavelength of light hitting a sensor element, rather than relying on filters? Or perhaps to use a layer on top of the sensor to split the light rather than filter the light? You would think that would give a substantial boost in light sensitivity, since a colour filter based system by necessity blocks most of the light that enters your optical system, much in the way that a 3LCD projector produces a substantially brighter image than a single-chip DLP projector given the same lightbulb, because one splits the white light and the other filters the white light.
HibyPrime1 - Friday, February 22, 2013 - link
I'm not an expert on the subject so take what I'm saying here with a grain of salt.
As I understand it you would have to make sure that no more than one photon is hitting the pixel at any given time, and then you can measure the energy (basically energy = wavelength) of that photon. I would imagine if multiple photons are hitting the sensor at the same time, you wouldn't be able to distinguish how much energy came from each photon.
Since we're dealing with single photons, weird quantum stuff might come into play. Even if you could manage to get a single photon to hit each pixel, there may be an effect where the photons will hit multiple pixels at the same time, so measuring the energy at one pixel will give you a number that includes the energy from some of the other photons. (I'm inferring this idea from the double-slit experiment.)
I think the only way this would be possible is if only one photon hits the entire sensor at any given time, then you would be able to work out its colour. Of course, that wouldn't be very useful as a camera.
DominicG - Saturday, February 23, 2013 - link
Hi Hiby,
Photodetection does not quite work like that. A photon hitting a photodiode junction either has enough energy to excite an electron across the junction or it does not. So one way you could make a multi-colour pixel would be to have several photodiode junctions one on top of the other, each with a different "energy gap", so that each one responds to a different wavelength. This idea is now being used in the highest efficiency solar cells to allow all the different wavelengths in sunlight to be absorbed efficiently. However for a colour-sensitive photodiode, there are some big complexities to be overcome - I have no idea if anyone has succeeded or even tried.
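To put rough numbers on the "energy gap" idea above, here is a small illustrative sketch (not from the article or the comments; the wavelengths are just nominal values for blue, green and red):

```python
# Photon energy vs. wavelength: E[eV] ~ 1239.84 / wavelength[nm].
# A stacked multi-colour pixel would have to discriminate on this energy.
wavelengths_nm = {"blue": 450, "green": 550, "red": 650}

for name, wl in wavelengths_nm.items():
    energy_ev = 1239.84 / wl  # hc expressed in eV*nm
    print(f"{name:5s} {wl} nm -> {energy_ev:.2f} eV")

# Ordinary silicon (band gap ~1.12 eV) absorbs all three, so a stacked design
# needs junctions with different effective gaps, or depth-dependent absorption
# as in the Foveon approach mentioned further down the thread.
```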
HibyPrime1 - Saturday, February 23, 2013 - link
Interesting. I've read about band-gaps/energy gaps before, but never understood what they mean in any real-world sense. Thanks for that :)
MrSpadge - Sunday, February 24, 2013 - link
You're right, conceptually one would "only" need to adapt a multi-junction solar cell for spatial resolution, i.e. small pixels. This would introduce shadowing for the bottom layers similar to front side illumination again, though. Which might be countered with vias through the chips, at the cost of making manufacturing more expensive. And the materials and their processing become way more expensive in general, as they will be CMOS-incompatible III-V composites.
And worst of all: one could only gain a 3 times higher light sensitivity at maximum, so currently it's probably not worth the effort.
mdar - Thursday, February 28, 2013 - link
I think you are talking about Foveon sensors, used by Sigma to make some of their DSLR cameras. Since photons of different colors have different energies, they use this principle to detect color. Not sure how they do it (probably by checking at which depth the electron is generated), but there is a lot of information on the web about it.
fuzzymath10 - Saturday, February 23, 2013 - link
one of the non-traditional imaging sensors around is the foveon x3 sensor. each pixel can sense all three primary colours rather than relying on bayer interpolation. it does have many limitations though.
evonitzer - Wednesday, February 27, 2013 - link
Yeah, like only being in Sigma cameras that use Sigma mounts. Who on earth buys those things? The results are stunning to see, but they need some, well, design wins, to use the parlance of cell phones.
They also need to make tiny sensors. AFAIK they only have the APS-C one, and those won't be showing up in phones anytime soon. :)
ShieTar - Tuesday, February 26, 2013 - link
The camera equivalent of a 3 LCD projector does exist, for example in so-called "Multi-Spectral Imager" instruments for space missions. The light entering the camera aperture is split into spectral bands by dichroic mirrors, and then imaged on a number of CCDs.
The problem with this approach is that it takes considerable engineering effort to make sure that all the CCDs are aligned to each other with sub-pixel accuracy. Of course the cost of multiple CCDs and the space demand for the more complex optical system make this option quite irrelevant for mobile devices.
nerd1 - Friday, February 22, 2013 - link
I wonder who actually tested the captured image using proper analysing software (DxO for example) to see how much they ACTUALLY resolve?
And I don't think we get a diffraction limit of 3um - see the chart here
http://egami.blog.so-net.ne.jp/2011-07-11
We have 1.34um at f2.0 and 1.88um at f2.8.
Typical 8MP sensors have 1.4um photosites, so 8MP looks like an ideal spot for f2 optics. (Yes, I think 13MP @ 1.1um is just a marketing gimmick.)
In comparison, the 36MP Nikon D800 has a 4.9um photosite, which is diffraction limited between f5.6 and f8.
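As a quick check of those figures, a small sketch (my own, assuming 550 nm green light; the Rayleigh separation equals the Airy first-null radius, and the Airy disk diameter is twice that):

```python
wavelength_um = 0.55  # green light

for f_number in (2.0, 2.8):
    rayleigh_um = 1.22 * wavelength_um * f_number  # first-null radius / Rayleigh separation
    airy_diameter_um = 2 * rayleigh_um             # full Airy disk diameter
    print(f"f/{f_number}: Rayleigh ~{rayleigh_um:.2f} um, Airy diameter ~{airy_diameter_um:.2f} um")

# f/2.0: Rayleigh ~1.34 um, Airy diameter ~2.68 um
# f/2.8: Rayleigh ~1.88 um, Airy diameter ~3.76 um
# This reproduces the 1.34 um / 1.88 um figures quoted above; a ~3 um "spot"
# corresponds to the full diameter (and a somewhat longer wavelength).
```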
jjj - Friday, February 22, 2013 - link
"smartphones are or are poised to begin displacing the role of a traditional point and shoot camera "That started quite a while ago so a rather disappointing "trends"section. Was waiting for some actual features , ways to get there. and more talk about video since it's becoming a lot more important.
Johnmcl7 - Friday, February 22, 2013 - link
Agreed, the presentation feels a bit out of date for current technology, particularly as you say phone cameras have been displacing compact cameras for years - I'd say right back to the N95, which offered a decent 5MP AF camera and was released before the first iPhone.
I'm also surprised to see no mention of Nokia pretty much, even though they've very much been pushing the camera limits. Their ultra high resolution Pureview camera showed you could have a very high number of pixels and high image quality (which this article seems to claim isn't possible even with lower resolution devices), and the Lumia 920 is an interesting step forward in having a physical image stabilisation system.
Also, with regards to shallow depth of field with F2, that's just not going to happen on a camera phone, because depth of field is primarily a function of the actual focal length (not the equivalent focal length). So to get a proper shallow depth of field effect (as in not shooting at very close macro distances), a camera phone would need a massive aperture many stops wider than F2 to counter the very short focal length.
John
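The point about the actual focal length can be put into rough numbers. A sketch (my own figures, assuming a typical ~4 mm phone lens and, for comparison, a 28 mm f/2 lens on full frame with a similar field of view): for the same framing, the blur of a distant background scales with the entrance pupil diameter, i.e. focal length divided by f-number.

```python
def entrance_pupil_mm(focal_length_mm, f_number):
    # Entrance pupil diameter; for equal framing, distant-background blur scales with this.
    return focal_length_mm / f_number

phone_pupil = entrance_pupil_mm(4.0, 2.0)        # ~4 mm actual focal length, f/2
full_frame_pupil = entrance_pupil_mm(28.0, 2.0)  # 28 mm f/2, similar field of view

print(f"phone: {phone_pupil:.1f} mm, full frame: {full_frame_pupil:.1f} mm")
print(f"the phone would need roughly f/{4.0 / full_frame_pupil:.2f} to match that blur")
# -> 2.0 mm vs 14.0 mm, i.e. roughly f/0.29 on the phone, several stops beyond f/2.
```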
Tarwin - Saturday, February 23, 2013 - link
Actually it makes sense he doesn't mention all that. He's talking about trends, and the Pureview did not fit into the trends, both in quality and sensor size.
Optical image stabilization doesn't fit in either, as it only affects image quality in less than ideal situations such as no tripods/shaky hands. But he did mention the need for extra parts in the module configuration should that be part of the setup.
And in his defence of the comment of displacing P&S cameras, he says "smartphones are or are poised to begin displacing the role", so he's not saying that they aren't doing it already; he gives you a choice in perspectives. Also I don't think you can say the N95 displaced P&S at the market level, only in casual use.
Manabu - Friday, February 22, 2013 - link
What about the "large" sensor 41MP Nokia 808 phone? It sure is an interesting outlier.
And point & shoot cameras still have the advantage of optical zoom, better handling, and can have bigger sensors. Just look at the S110, LX7 or RX100 cameras. But budget super-compact cameras are indeed going extinct.
gadjade - Friday, February 22, 2013 - link
How can I start reading when there is no mention of any Nokia products? The HTC One should not even be included in the article because it's not even considered a breakthrough in camera phones.
Krysto - Saturday, February 23, 2013 - link
Next time try reading more than the first paragraph.Diagrafeas - Saturday, February 23, 2013 - link
How many MP, or to be more accurate Million Sub Pixels, are these Nokia sensors? Because these numbers surely come after interpolation or pixel shift or something...
Also, do we have information about the HTC One sensor?
Is it one Bayer sensor or 2-3 stacked like Foveon ones?
Krysto - Saturday, February 23, 2013 - link
While HTC definitely deserves credit for discontinuing the trend of ever more MPs, I don't think they went far enough. There's still an entrenched status quo thinking in smartphones about sensor size. Virtually all of them have a 1/3.2" sensor.
No one has even thought about making that one much larger? Even after Nokia showed the Pureview 808 and got a lot of praise for it? Really? What is wrong with all these companies? Do they want us to spell it out for them?
I get that clean phone design is a big factor, but you can't just keep doing things the same way everyone has always done it. And then they wonder why they can't beat Samsung. Sure Samsung has a lot of marketing power, but they also play a little less safe than everyone else. They add the S-pen to devices, even though it adds quite a bit of cost and then they need that device to compete with others, so it's a risk for them that the consumers might not want to pay extra for it - but they still do it. They also started the "phablet" trend all by themselves. And while these things don't necessarily have mass market appeal, they get a lot of publicity and quite a lot of passionate fans and customers for those devices.
So why aren't the other manufacturers experimenting in the same way, not just with incrementally better cameras, but WAY better cameras that they put in phones? I'm talking putting camera capabilities in a phone that could add $100 or even $200 to the retail price of the device. That's being BOLD in the market. That's being DIFFERENT.
So I want to see them come up with devices that have 1" large sensors of 5-8 MP, with some high quality lenses, and powerful ISP's and software behind them. Create that and you get at least 3 different types of consumers coming to buy that phone (the type of consumers that yell :SHUT UP AND TAKE MY MONEY!): professional photographers (they buy thousands of dollars worth of equipment for fun, and they need phones, too), amateur photographers (people who love taking great photos with their phones), and you'd also pretty much convert the whole (in time of course) point and shoot market to your device.
So the potential for sales is right there for reaping. And these people wouldn't care about a slight bump on the back, if they can get a phone that is 5x better than anything else on the market at the time of shipping. I want to see that kind of LEAP in smartphone cameras, not just these regular "2013 camera is slightly better than the 2012 camera", and so on.
slatanek - Saturday, February 23, 2013 - link
I think you're generally right, but these days from a manufacturer's point of view it's very easy to sink the ship by being bold and not being understood well by the market/consumers. It is then very easy for their opponents to do nasty counter marketing to make things even worse.
As a photographer myself I really would like to see a no compromise smartphone camera (give me glass!!! give me a bigger sensor with less megapixels!!!), but I guess we're not at that point just yet. Only recently have manufacturers of compact cameras started to show some interest in making cameras geared toward serious photography, so I guess it's still a few years' wait to see that approach in the smartphone world.
slatanek - Saturday, February 23, 2013 - link
Besides, I guess a 1" sensor in a smartphone will never happen due to physical constraints - you need a lot of light gathering power to light such a huge sensor, not to mention the focal distance needs to be considerably bigger for the field of view not to be ultra wide.
The Pureview 808 had a 1/1.2" sensor, which is pretty close, and didn't look too bad:
http://images.fonearena.com/blog/wp-content/upload...
Tarwin - Saturday, February 23, 2013 - link
That's kinda the point. It didn't look TOO BAD, which is a far cry from looking great. I also remember reading some reviews which complained about the ergonomics. Also the Pureview was more of a niche experimental product.
It's kind of like some article that I read years ago where it said that according to research (or a survey or something) consumers, when buying a TV, were most interested in size THEN image quality and then all the other stuff. Most phone buyers are similar, they want a better camera but are not willing to give up styling for it, at least not without some re-education.
As for HTC not going far enough, I think it has more to do with the fact that they're not doing too well as is and most likely feel like they're taking enough risks at the moment.
Personally I would be willing to give up some slimness (but hopefully not styling) for a decidedly improved camera. I hope HTC does well with the One and that they are willing to experiment a bit more in future generations. But then again I am not sure yet how I feel about the loss in detail. Another site got their hands on a One and has some comparison shots between the One and the iPhone 5.
Tarwin - Saturday, February 23, 2013 - link
Oops, accidentally pressed post.
Well, in the comparison shots I prefer the images of the One, but I occasionally need the detail provided with 8MP, but then again only sometimes. When 4K TVs become mainstream and I also have one, then I'll most likely think it's the minimum pics should be taken at.
Krysto - Monday, February 25, 2013 - link
They can make it more stylish than that. I wouldn't mind if my phone looked a bit like a point and shoot, and didn't have a perfectly flat back surface.
Again, I'm not saying these phones are for everyone. Note phones are not for everyone either. In fact I don't think I can ever see myself owning a Note phone. But yet millions of people have them, and those who do love it, and wouldn't imagine going back to a smaller phone.
That's the type of market I think such a phone can target. A niche market indeed, but a big niche nonetheless. And I would be part of that niche.
ltcommanderdata - Saturday, February 23, 2013 - link
http://www.sony.net/SonyInfo/News/Press/201208/12-...
Sony's new Exmor RS sensors use a stacked structure which places the circuit section underneath the pixel section instead of beside it, which should free up more room for more pixels or larger pixels within a given sensor module area, or enable smaller modules. They also add a dedicated white channel for an RGBW coding which they claim improves low light performance. Any comment on the efficacy of these techniques?
Apple has been using Sony image sensors for both the iPhone 4S and iPhone 5 so that new Sony IMX135 with 13.13MP, 1/3.06", 1.12 μm pixel sensor looks like a prime candidate for the iPhone 5S.
slatanek - Saturday, February 23, 2013 - link
Well done, Brian. At last someone trying to explain and stress how much more there is to tiny smartphone cameras than just the megapixels. A few years ago I felt like we got over it and the race was over, but now a few years later the race just continues, only in the smartphone realm (previously compact cameras had the same issues, where ultimate picture quality was compromised just to get to a higher number of megapixels on the box). Hopefully this will change in the near future, but somehow I'm afraid that we're stuck with it just as with the horrible quality displays in notebooks. And it's not that people don't want good quality, it's just that the behemoth companies are not willing to take the risk.
Anyways, thanks for a good read.
Shftup - Saturday, February 23, 2013 - link
Brian - Well done!
This is a fantastic article. It's well written, relatively easy to understand for any layman, and most importantly keeps the reader coming back for more.
jabber - Saturday, February 23, 2013 - link
....for a smartphone to give me as good a picture quality as I got from my 3.2MP Nikon from 2004.
When I find one I'll be happy.
I think a lot of the phone companies need to start poaching the optics and software specialists from the camera companies. It's all very well looking at the spec lists for components, but it's another thing entirely to make them all work together to produce a decent picture.
In the past, for me, near decent cams have been ruined by overzealous compression settings (Palm Pre2) or what appears to be zero configuration of the imaging processors (Nexus 4).
No one yet has tweaked every part of the chain to provide a truly viable alternative to taking a $250+ camera along instead.
Never had anything to do with megapixels IMO. It's other factors that let them down.
I reckon in another 2 years we'll have it pretty much there.
MrSpadge - Sunday, February 24, 2013 - link
Same here, my Sony DSC-W5 from 2005 or so is still way better than my smartphone. If I could get at least similar quality in a new phone that would easily make it worth 50 - 100€ more for me.
Tarwin - Saturday, February 23, 2013 - link
I enjoyed the article, it cleared up some doubts I had and taught me a few new things.
In light of this article I hope you go into more detail in the One review when you eventually do it. I assume you will give special attention to the camera due to how it goes against the trend and HTC's focus on it, but I also hope you mention how it fits into all this a bit for those of us who read both articles. Thanks again.
DominicG - Saturday, February 23, 2013 - link
Hi Brian,
Much enjoyed your intro to camera optics. However you state:
"If we look at the airy disk diameter formed from a perfect diffraction limited ... we get a spot size around 3.0 microns"
You mentioned this is a back illuminated chip so that the light is focussing in a medium of refractive index ~3.5. Therefore the wavelength of red light inside the medium is ~700/3.5 = 200nm - still much smaller than the pixel. It is a bit more complicated than this since the optical resolution is of course determined not only by the wavelength but also the numerical aperture (NA) - the light refracts at the air-silicon interface so that a beam in air at 37 deg (2*omega = 75 deg, your example lens) becomes a beam in silicon of only 10 deg. 10 deg in silicon gives a theoretical resolution (Rayleigh condition) of 0.7 microns. Anyway, did you take all these factors into account?
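For anyone following along, the arithmetic in that comment can be checked with a short sketch (the 3.5 index, 700 nm wavelength and 75 degree cone angle are the commenter's example values):

```python
import math

n_si = 3.5                      # approximate refractive index of silicon
wavelength_air_um = 0.7         # red light, 700 nm
half_angle_air = math.radians(75.0 / 2.0)

wavelength_si_um = wavelength_air_um / n_si                  # wavelength inside the silicon
half_angle_si = math.asin(math.sin(half_angle_air) / n_si)   # Snell's law at the interface
numerical_aperture = n_si * math.sin(half_angle_si)          # NA is preserved across the interface

rayleigh_um = 0.61 * wavelength_air_um / numerical_aperture  # Rayleigh resolution at the image
print(f"lambda in Si ~{1000 * wavelength_si_um:.0f} nm, "
      f"angle in Si ~{math.degrees(half_angle_si):.0f} deg, "
      f"Rayleigh ~{rayleigh_um:.2f} um")
# -> lambda in Si ~200 nm, angle in Si ~10 deg, Rayleigh ~0.70 um
```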
fokka - Sunday, February 24, 2013 - link
thanks for this insightful article, brian, that's the sort of read i visit this site for regularly!
i have to say that i expected a little more (side-) content, though, like a more thorough look at the htc one initially pictured, one of the two reasons i clicked on the article. also a comparison with nokias pureview-approach would've been nice, since it's on the complete other side of the spectrum.
but this is just to nitpick, as i enjoyed the read none the less.
StormyParis - Sunday, February 24, 2013 - link
Great article
StormyParis - Sunday, February 24, 2013 - link
Seeing the interest everyone has in good pictures, and the relatively small size and cost of the lens apparatus, wouldn't it make sense to have 2 distinct cameras on a smartphone, and coordinate them to get better image quality?
Couple of comments on this and your rant in the podcast :)
First of all, you're lauding HTC for their larger pixel size and lamenting the move towards smaller pixels. But isn't it true that effective resolution, especially when your pixels are significantly smaller than the airy disk, is basically a function of integration area? The only downside to using smaller pixels is that you increase the effect of read noise and decrease fill factor. In an ideal world, a 100MP phone camera with the same sensor size as a 10MP one would make pictures that are just as good. With read noise being essentially absent nowadays, I don't see the reason to particularly bash on 13MP phone cameras compared to larger-pixel but same-integration-area sensors. They make the same pictures, just take up a little less space on the SD card.
Of course, you could make the argument that it's wrong to give in to the 'moar megapixels!' consumer side of things and try to educate people that sometimes less is more.
Next, you say that refractive index and focal length is essentially what limits the focal length for very thin cameras, but this can be alleviated by using diffractive optics (not yet now, but in the future). We may very well see 3mm-thickness 35mm focal length equivalent camera modules with large sensors someday. It's technically possible. Especially with, as you said, nanodiamonds and other very high refractive index synthetic lens materials in the making.
Next, about the resolving power. There's the airy disk and rayleigh's criterion, but this is not the end of resolving power. It does make sense to oversample beyond this point, you will get extra image information. It becomes exponentially less as you increase the megapixel count but you can still get about 150% extra image information by oversampling beyond the size of the airy disk. Again, in an ideal world without drawbacks to doing so, this does make sense.
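The "integration area is what matters" argument above can be illustrated with a toy signal-to-noise calculation (my own sketch with made-up but plausible numbers: the same patch of sensor area either as one large pixel or as four small pixels summed after readout):

```python
import math

signal_e = 400.0     # photoelectrons collected over the whole patch of sensor area
read_noise_e = 2.0   # read noise per pixel readout, electrons RMS
n_small = 4          # the same area split into four smaller pixels

snr_one_big = signal_e / math.sqrt(signal_e + read_noise_e ** 2)
snr_binned = signal_e / math.sqrt(signal_e + n_small * read_noise_e ** 2)

print(f"one big pixel:      SNR ~{snr_one_big:.1f}")
print(f"four binned pixels: SNR ~{snr_binned:.1f}")
# With low read noise the two are nearly identical; the gap only opens up at very
# low signal levels (or with reduced fill factor), which is exactly the caveat above.
```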
tuxRoller - Sunday, February 24, 2013 - link
Especially, with the use of metamaterials that make use of negative indexes of refraction to allow you to resolve detail beyond the diffraction limit?ssj3gohan - Monday, February 25, 2013 - link
Well, keep in mind that the reason you can resolve beyond the diffraction limit is the fact that the geometrical properties of the sensor and optics differ. Optics will by definition cause gaussian blur as their defect mode, while the sensor has square and offset pixels. These areas do not overlap perfectly, so in order to perfectly image that blurry optical image you need pixels that are smaller than the fundamental size of the diffraction pattern (airy disk).
These optical effects don't go away when you're using metamaterials/quantum optics/etc. Light will still be a wave that will not necessarily enter the sensor perfectly perpendicular.
UltraTech79 - Monday, February 25, 2013 - link
I have seen many many reviews of lenses and the technical details of digital imaging etc, and almost every time the article would have really shitty JPG images. I found it highly ironic. Kudos to you for using PNG throughout this quality article.
I was reading the review of Sony's Xperia Z at TechRadar, and I was astonished at how poor the 13MP Exmor RS sensor performs. Frankly, the image looks blurry and more like it's taken by a 5MP sensor scaled up, with heavy noise even in a well lit scene:
http://mos.futurenet.com/techradar/art/mobile_phon...
While I don't really care too much about smartphone cameras, and I use my budget DSLR (cheaper than a smartphone) for my photography pleasure, I was wondering if the MP race and new gen smartphones can eliminate the need for me to lug a DSLR around. If this article is correct on the physical limitations of smartphone camera technology, it looks like there is still a future for DSLRs.
danacee - Monday, February 25, 2013 - link
Traditional, aka -crap- P&S cameras clearly are at a disadvantage now, with only the still very useful optical zoom keeping them alive. However high end, 'big' sensor P&S cameras such as the not too young Sony RX100 are still many many generations ahead of smartphone cameras; even the Nokia Pureview has terrible image quality next to it.
I am surprised at the lack of mention for Carl Zeiss lenses in here. If you're going to make an article about lens quality and cameraphone technology, why wouldn't you include the best in the market for such? Or are we disputing that fact?
Also, not all cameraphones suffer as much from dramatic lens flare discoloration issues as said "very popular phone."
ShieTar - Tuesday, February 26, 2013 - link
Sure, you get a 3µm diffraction spot on your camera, and with 1.1µm pixels it gets oversampled. But that does not have to be a waste. As long as the diffraction pattern is well characterised, you can remove the diffraction effect through a deconvolution as part of your ISP. This even remains true for near-field optical effects that occur once your pixel size gets close to or below the image wavelength. As long as such corrections are implemented, and as long as your per-pixel noise is small enough for these algorithms to work, decreasing the pixel size does make a certain sense.
Once noise becomes a larger problem than resolution, the smaller pixels hurt though, by wasting light through the larger crop factor and also by increasing the overall read-out noise. When exactly that point is reached depends on the light conditions you want to use your camera in, so it would be interesting to understand for which kind of conditions smartphone cameras are being optimised.
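A minimal sketch of the kind of deconvolution step described above (my own illustration in NumPy, not an actual ISP pipeline): Wiener-style inverse filtering with a known, well-characterised PSF.

```python
import numpy as np

def _pad_psf(psf, shape):
    # Zero-pad the PSF to the image shape and shift its centre to (0, 0) for FFT use.
    padded = np.zeros(shape)
    padded[:psf.shape[0], :psf.shape[1]] = psf
    return np.roll(padded, (-(psf.shape[0] // 2), -(psf.shape[1] // 2)), axis=(0, 1))

def wiener_deconvolve(blurred, psf, noise_reg=1e-2):
    # Frequency-domain Wiener filter; noise_reg keeps the division stable where the PSF is weak.
    H = np.fft.fft2(_pad_psf(psf, blurred.shape))
    G = np.fft.fft2(blurred)
    return np.real(np.fft.ifft2(np.conj(H) * G / (np.abs(H) ** 2 + noise_reg)))

# Tiny round trip with a synthetic 3x3 box blur standing in for the diffraction PSF:
rng = np.random.default_rng(0)
scene = rng.random((64, 64))
psf = np.full((3, 3), 1.0 / 9.0)
blurred = np.real(np.fft.ifft2(np.fft.fft2(scene) * np.fft.fft2(_pad_psf(psf, scene.shape))))
restored = wiener_deconvolve(blurred, psf)
print(float(np.abs(restored - scene).mean()))  # small residual compared to the blur itself
```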
rwei - Wednesday, February 27, 2013 - link
hurr hurrtheSuede - Wednesday, February 27, 2013 - link
I don't know where your Rayleigh limit comes from, but in real world optics, Rayleigh is:
[1.22 x F# x wavelength] - giving 1.3µm for green (550nm) light in an F2.0 lens.
But maybe it's your interpretation of Rayleigh that is wrong, and that's where the error stems from. From the graphs, you show spot resolution limit as 2xRayleigh - and it isn't. Spot resolution is 1xRayleigh - giving an F2.0 lens a maximum resolution of the aforementioned 1.3µm - NOT 2.6µm.
The definition of Rayleigh:
-"Two point sources are regarded as just resolved when the principal diffraction maximum of one image coincides with the first minimum of the other.
"Just resolved" in this case means a resulting MTF of about 7% - i.e The minimum distance between two peaks where you can still resolve that they are two, not one large is equal to the RADIUS of the first null on the Airy disk. Not the diameter. This is quite a common error made by people from the "E" side of ElectrOptics.
mdar - Thursday, February 28, 2013 - link
You say "This is the standard format for giving a sensor size, but it doesn’t have anything to do with the actual size of the image circle, and rather traces its roots back to the diameter of a vidicon glass tube"The above statement, though partially true, is misleading. The dimension DOES give sensor size multiplied by factor of roughly 1.5. For example if some one says 1/1.8" sensor, the sensor diagonal is ~ 1/(1.8*1.5). The 1.5 factor probably comes from vidicon glass tube.
Infact if some one wants just one parameter for image quality, it should be sensor size. Pixel technologies do improve (like using BSI) but even now a 1/3" sensor size of iphone or samsung or lumia 920 camera can just barely match quality 1/1.8" sensor of 4-year old Nokia N8.
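That rule of thumb as plain arithmetic (a sketch; the divide-by-roughly-1.5 factor is the historical vidicon convention, and real parts deviate somewhat from it):

```python
INCH_MM = 25.4

def approx_diagonal_mm(optical_format_inches):
    # "1/1.8-inch" style optical formats overstate the diagonal by roughly 1.5x.
    return INCH_MM * optical_format_inches / 1.5

for label, fmt in [('1/3"', 1 / 3.0), ('1/1.8"', 1 / 1.8), ('1/1.2"', 1 / 1.2)]:
    print(f"{label}: diagonal ~{approx_diagonal_mm(fmt):.1f} mm")
# 1/3": ~5.6 mm, 1/1.8": ~9.4 mm, 1/1.2": ~14.1 mm
```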
frakkel - Thursday, February 28, 2013 - link
I am curious if you can elaborate a little regarding lens material.
You say that today most lens elements are made of plastic. Is this both for front and rear facing camera lenses?
I was under the impression that lens elements in phones are still made of glass, but that the industry is looking to change to plastic and this change has not happened yet. Please correct me if I am wrong, and a link or two would not hurt :)
vlad0 - Friday, March 1, 2013 - link
I suggest reading this white paper as well:
http://www.mediafire.com/view/?0o5oo43h8os4ba9
it deals with a lot of the limitations of a smartphone camera in a very elegant way, and the results are sublime.
http://sdrv.ms/VQ3eCd
Nokia solved several important issues the industry has been dealing with for a long time...
wally626 - Monday, March 4, 2013 - link
Although the term Bokeh is commonly used to refer to the effect in pictures of low depth of field techniques, it should only be used to refer to the quality of the out-of-focus regions of such photographs. It is much more an aesthetic term than a technical one. Camera phones usually have such deep depth of focus that little is out of focus in normal use. However, with the newer f/2, f/2.4 phone cameras, when doing close focus you can get the out of focus regions from low depth of field.
http://www.zeiss.com/c12567a8003b8b6f/embedtitelin...$file/cln35_bokeh_en.pdf
The link above is a very good discussion of this by Dr. Nasse of Zeiss.
wally626 - Monday, March 4, 2013 - link
Someone fixed the Zeiss link to the Nasse article for me a while back, but I forgot the exact fix. In any case a search on the terms Zeiss, Nasse and Bokeh should bring up the article.
admirable
mikeb_nz - Sunday, December 22, 2013 - link
how do i calculate or where do i find the field of view (angle of view) for smartphone and tablet cameras?
thanks
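One common way to estimate it, sketched below (my own example, not an official method): phone spec sheets usually quote a 35 mm-equivalent focal length, so the diagonal angle of view can be computed against the 43.3 mm diagonal of a full 35 mm frame; with the real focal length and real sensor dimensions the same formula gives the horizontal or vertical angle instead.

```python
import math

def angle_of_view_deg(focal_length_mm, frame_dimension_mm=43.3):
    # 43.3 mm is the diagonal of a 36 x 24 mm (35 mm-equivalent) frame.
    return math.degrees(2 * math.atan(frame_dimension_mm / (2 * focal_length_mm)))

print(f"{angle_of_view_deg(28):.0f} deg diagonal")  # ~75 deg for a 28 mm-equivalent camera
print(f"{angle_of_view_deg(35):.0f} deg diagonal")  # ~63 deg for a 35 mm-equivalent camera
```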
oanta_william - Monday, July 20, 2015 - link
Your insight into smartphone cameras is awesome! Thanks for everything!
From your experience, would it be possible to have only the camera module on a device, with a micro-controller/SoC that has sufficient power ONLY for transmitting the non-processed 'RAW' data to another device via Bluetooth - with the ISP and the rest needed for image processing situated on that other device?
I have a homework assignment regarding this. Do you know of any reference material/books that could help me?
Thanks!
solarkraft - Monday, January 9, 2017 - link
What an amazing article! Finally something serious about smartphone imaging (the processor/phone makers don't tell us ****)! Just an updated version might be cool.
albertjohn - Tuesday, November 20, 2018 - link
I like this concept. I visited your blog for the first time and became your fan. Keep posting as I am going to read it every day.