68 Comments

  • wr3zzz - Tuesday, May 19, 2020 - link

    Where is the MP ceiling for phones? At some point you'll need a bigger lens, and that just won't fit in a phone.
  • StevoLincolnite - Tuesday, May 19, 2020 - link

    Or more lenses.
  • Valantar - Tuesday, May 19, 2020 - link

    More lenses won't help past a certain point - miniaturization of lens elements brings with it a series of issues (chromatic aberration, coma, various distortions, etc., not to mention that the smaller the lens, the larger any production error looms in the final image). I'm already shocked that they fit as large a sensor+lens combo into the S20 Ultra as they did, even considering its huge hump - I doubt there's anywhere left to go without some kind of late-2000s camera bump that makes the phone massively thick at one point. And there are still both practical and image-related issues with increasing pixel counts (not that this sensor does that, though).
  • yeeeeman - Tuesday, May 19, 2020 - link

    That is what everyone believed 10, 20, 30 years ago, and look where we are today. There will always be room for improvement, and technical challenges are just unknowns, not impossibilities.
  • BedfordTim - Tuesday, May 19, 2020 - link

    Nokia tried it recently and failed dismally. They used five tiny sensors without OIS, and it all fell apart in low light. If the images were too noisy the software couldn't combine them and gave up, leaving a hopeless mess. There were other issues, such as the picture element of motion photos being out of focus, but those were entirely fixable if Nokia had bothered.
    It may be better to have a set of small field-of-view images and stitch them, as that gets around the low-light issue, but the assembly would be interesting.
  • melgross - Tuesday, May 19, 2020 - link

    No, there are theoretical limits to optical resolution. As we reach them, and we're pretty much there with the tiny sensors in smartphones, we won't be able to go any higher. We've seen that with microscopes for well over 100 years. As you approach the wavelength of light, you simply can't resolve any more. Then it's called "empty resolution".
  • BedfordTim - Wednesday, May 20, 2020 - link

    You can reach the limits of resolution but you don't have to put all of the sensing area in one camera. Think about how a panorama photo is generated by stitching lots of smaller photos together.
  • Santoval - Tuesday, May 19, 2020 - link

    Merely increasing resolution is nowhere close to an "improvement". These sensors are super noisy due to their very small pixel pitch and thus require quite aggressive noise reduction (either conventional or, as is more often done nowadays, via neural networks). There are other drawbacks as well as resolution increases. Increasing resolution merely increases the *size* of the image (*not* its quality), with everything else being equal or, usually, worse.

    There is no point to a 50MP or a ... 108MP (!) image, because no monitor or other device exists with a high enough resolution to view it at full size. Since there is no way to view it, the sole purpose of such images is bragging rights, and they just waste storage. The only value a 108MP image might have is zooming in to see minute details of scenes shot from afar (assuming its memory footprint does not cause a buffer overflow, that is) - in other words, for fields like astronomy, satellite photos of certain spots on Earth, bird and nature watching, and so on. How many people do you think bought a phone with a 108MP sensor for such purposes?
  • surt - Tuesday, May 19, 2020 - link

    <head explodes> You've never heard of zooming in? Maybe I take a 50MP picture of my kid's graduation group, and then we take a look at each kid's face as we walk down memory lane?
    That's not to mention a future in which displays evolve from 8K to 32K and become capable of showing much higher resolution images. Do you only ever look at your photos the same day you take them? Then why bother taking them at all?
  • Valantar - Wednesday, May 20, 2020 - link

    Have you ever zoomed into a smartphone photo of this type of resolution? The quality to do so simply isn't there - everything looks like blotchy garbage. There are definitely uses for resolutions above 1-2MP even if >99% of photos shot on a smartphone are never viewed elsewhere, but above 12? Nah. A good quality 12MP photo can be printed at A3 size or even larger (either by allowing for it not being viewed very close, or by using a high quality upscaling algorithm). Sensors this size have already hit the point of dramatically diminishing returns for quality, so increasing resolution further will typically only be detrimental to actual image quality. There are of course ways of alleviating this, but none that are any better than leaving the pixels reasonably large. There's also the issue of file sizes and data handling, of course, with 50MP pictures making even ridiculously compressed JPEGs huge and unwieldy.
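
A rough sanity check of the print-size claim above, assuming a 12MP image is about 4000x3000 pixels and using standard A-series paper dimensions (back-of-the-envelope numbers only, not from the article):

        # Approximate print density of a ~12MP (4000x3000) image on A4 and A3 paper.
        px_w, px_h = 4000, 3000
        papers_in = {"A4": (11.7, 8.3), "A3": (16.5, 11.7)}  # landscape width/height, inches

        for name, (w_in, h_in) in papers_in.items():
            dpi = min(px_w / w_in, px_h / h_in)
            print(f"{name}: ~{dpi:.0f} DPI")

        # A4: ~342 DPI, A3: ~242 DPI -- both at or above the 200-300 DPI range
        # usually considered fine for prints viewed at arm's length.
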
  • Psyside - Wednesday, May 20, 2020 - link

    "Have you ever zoomed into a smartphone photo of this type of resolution? The quality to do so simply isn't there - everything looks like blotchy garbage"

    You have no clue, do you? The S20's 64MP shots look amazing.
  • s.yu - Thursday, May 21, 2020 - link

    Your definition of "amazing" is severely flawed.
  • PaulHoule - Wednesday, May 20, 2020 - link

    "Tack sharp" images that can be enlarged require getting quite a few things right: a narrow enough aperture for the depth of field to cover the target, enough light to be able to run the sensor at low light sensitive (high resolution) and still be able to use a short enough exposure that you don't get blur from camera shake or subject motion.

    That's why wedding photographers might have a $3000 lens and an external flash that weighs almost as much as the camera body. And it is part of the comparative advantage that L.A. has in the movie business since L.A. has more usable light for photography than any spot on earth except the sahara desert. (e.g. Tunisa, where some of the Star Wars movies were shot)

    It takes more than just a sensor to get good results.
  • BedfordTim - Wednesday, May 20, 2020 - link

    You have missed the point of 50MP and 108MP sensors. Both output a 12MP image. The sub-pixels are there for deBayering and single-shot HDR. Even though you have 4 or 9 sub-pixels per Bayer pixel, you still get a much better result than with a single pixel.
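
For readers unfamiliar with the layout being described: in a quad-Bayer filter, each colour of the ordinary Bayer pattern covers a 2x2 group of photodiodes, which can either be binned into one output pixel or re-mosaiced for full resolution. A minimal illustration of the 2x2 variant (an assumption-laden sketch, not Samsung's actual readout code):

        import numpy as np

        # One 4x4 tile of a quad-Bayer colour filter array: each colour of the
        # plain Bayer tile (G R / B G) is repeated over a 2x2 block of photodiodes.
        quad_bayer_tile = np.array([
            ["G", "G", "R", "R"],
            ["G", "G", "R", "R"],
            ["B", "B", "G", "G"],
            ["B", "B", "G", "G"],
        ])

        # 2x2 binning collapses each same-colour block into a single pixel,
        # leaving an ordinary Bayer tile at a quarter of the resolution.
        print(quad_bayer_tile[::2, ::2])  # prints the plain Bayer pattern: G R / B G
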
  • s.yu - Thursday, May 21, 2020 - link

    Theoretically yes, but the 40-50MP sensors are still far from pixel-for-pixel 12MP a la Foveon X3 (they should be very close on paper, since 12x4 is 48). I don't even think they match a larger 12MP Bayer sensor in most ILCs, like the A7S series.
    And the 108MP that outputs 12MP in the S20U is on par with, or slightly worse than, the 40-50MP sensors after accounting for processing differences.
    Then again, it's basically pointless to try to rule software out of mobile imaging solutions entirely.
  • ChrisTexan - Wednesday, May 20, 2020 - link

    In addition to raw resolution, the article clearly states one of the benefits: this sensor is a step forward by stepping back in two regards:
    1. Larger pixel size - this means more light-gathering area, which (all else equal) means better low-light sensitivity (less noise in darker lighting) and/or faster shutter speeds (less motion blur).
    2. Aggregating pixels - as indicated in the article, they intend to offer the option of combining 2x2 arrays of pixels into one "larger" pixel. Effectively, this gives it a "native" resolution of 12.5MP, but with 4x the light-gathering area per output pixel, multiplying its usable shooting speed (ISO) and/or lowering the noise floor further.
    The resulting 12.5MP picture in binned mode can thus potentially be MUCH more vivid than output from a lower pixel count. And compared to "native" 50MP (or 108MP, etc.) output, it will likely have truer coloration and lower noise - basically a better picture "captured" by the sensor. AND the optical requirements become less critical, since you don't need razor-sharp clarity on each cell; you'll be receiving the average across four cells.
    If shooting in "full pixel" mode, the tradeoffs will be much more "detailed" pictures (for zooming in), at the expense of slower shooting (or higher motion-blur risk) and much more noise at the pixel level.
    In a cell phone I'm not sure how much of that matters; your point is valid about zooming in, although not for optical/sensor reasons. If the camera is saving the output as a lossy .jpg, you will ultimately get color blocking and averaging that, when zoomed in digitally, make "max pixel res" pointless. If you could access the raw native pixel data that would NOT be the case, but on cell phones that's really not an option (saving dozens of 250+MB images on a phone with 32GB of total space is obviously not going to be practical).
    Bottom line, the "50MP with quad-arrayed pixels" output is a GREAT application for a cell phone. The sensor itself should capture wonderful images (optics-dependent), and although you'll end up with a JPEG, it'll probably be the best 12.5MP JPEG output possible from a cell phone. You can't "digitally" improve a 50MP native image post-capture, as each sensor cell will have been processed to "optimize it individually"; at best you can average the output data and come up with a "similar" 4x4 matrix, but that won't match the clarity of the sensor-driven 4x4. So for both speed and output, this is a great solution.
    All my opinions, worth every penny that was paid for them.
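
A quick numerical illustration of why summing a 2x2 group of pixels helps in low light - a sketch under a simple shot-noise assumption, not a measurement of this particular sensor:

        import math

        # Photon shot noise scales as sqrt(signal): one small pixel collecting S
        # photons has SNR = S / sqrt(S) = sqrt(S); a 2x2 bin collects ~4S photons.
        S = 400  # photons collected by one small pixel (made-up number)

        snr_single = S / math.sqrt(S)            # sqrt(400)  = 20
        snr_binned = (4 * S) / math.sqrt(4 * S)  # sqrt(1600) = 40

        print(snr_single, snr_binned)  # 20.0 40.0 -> roughly a 2x (~6 dB) gain

        # And the resolution arithmetic used in the thread: 50MP grouped into
        # non-overlapping 2x2 (four-pixel) bins gives 50 / 4 = 12.5MP output.
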
  • ChrisTexan - Wednesday, May 20, 2020 - link

    Sorry, I misspoke - 50MP divided into 4x4 pixel arrays is not 12.5MP. However, I would assume their algorithm will divide it into overlapping arrays (so row 1, columns 1-2 plus row 2, columns 1-2 form one array, then row 1, columns 2-3 plus row 2, columns 2-3 form the next, etc.)... it just depends on the capture algorithm as to the "native" output; non-overlapping would be more like a 6.25MP output.
  • willis936 - Tuesday, May 19, 2020 - link

    Sensor fusion hasn't even been tapped into yet, despite its demonstrated benefits in academia. The hardware is there - we already have three- and four-camera smartphones. Someday soon the industry will wake up and properly implement sensor fusion. The rules based on old assumptions can be thrown out.
  • Valantar - Wednesday, May 20, 2020 - link

    Laboratory conditions rarely translate to the real world - it's much easier to develop systems like that for controlled conditions than it is for real-world applications. There's no doubt that sensor fusion has promise for the future, but it'll still be a long, long time before we see flexible applications of it in the consumer space. The same goes for other promising tech like metamaterials, which have been "five years away" for a decade or more, and whose realistic ETA for consumer use is better measured in decades than in years.
  • willis936 - Wednesday, May 20, 2020 - link

    Sensor fusion is primarily algorithms. There are no magic materials to figure out how to mass produce - only an FPGA to develop, then spin to an ASIC once they're happy with the design. It hasn't been done because demand hasn't incentivized improving cameras by a significant amount, so why waste the R&D funds in this area?
  • close - Tuesday, May 19, 2020 - link

    There's only so much light a pin-prick aperture will let in.
  • emn13 - Tuesday, May 19, 2020 - link

    We're already well beyond the reasonable maximum MP range for smartphone cameras; 50MP is still way too much. The pixel-level quality just isn't there, so little is lost by downsampling - so why bother with the extra pixels?

    It's telling that phones like the S20 by default take 12MP images, not 108 or 50 - because even at 12MP, pixel-level quality isn't necessarily brilliant, and you just don't need more. Most pictures will be shared or viewed on a smartphone, and most images aren't of hyper-crisp text, so the meaningful resolution as perceived is likely more like 1-2MP. Having a little more than that to allow the occasional crop and to see some extra detail makes sense, but more pixels may well slightly reduce light capture and quality (as there are more pixel borders). Given that even images of more than 2MP are a niche, and images of more than 10 or so are a positively tiny niche, is it worth wasting battery life, cost, and some image quality for the niche ability to have tons of poorly resolved detail?

    No, it's not worth it; it simply makes images worse and more expensive. I'd *much* rather a phone take good 8MP images than slightly worse 8MP images with the option for 64MP or whatever.
  • BedfordTim - Tuesday, May 19, 2020 - link

    It is only 50MP for marketing reasons. It is really a ~13MP sensor with sub-pixels for improved deBayering and single-shot HDR. The pixel size should roughly match the diffraction limit of the lens, so in practice it really is ~13MP. Sony's industrial division has a nice presentation on sub-pixels and why they are worth the small sensitivity loss.
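
Rough numbers behind that diffraction argument, assuming green light and a typical f/1.8 phone-camera aperture (illustrative values, not measurements of this lens):

        # Airy disk diameter: d ~= 2.44 * wavelength * f-number
        wavelength_um = 0.55  # ~550 nm green light
        f_number = 1.8        # typical main-camera aperture

        airy_um = 2.44 * wavelength_um * f_number
        print(f"Airy disk ~ {airy_um:.2f} um")  # ~2.42 um

        # The native 1.2 um pitch is about half that, but the 2.4 um pitch after
        # 2x2 binning roughly matches the diffraction spot - one way of seeing
        # why the usable resolution is closer to the ~12-13MP binned output than
        # to the 50MP headline figure.
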
  • fmcjw - Tuesday, May 19, 2020 - link

    Can you please provide a link to the presentation, and does it address video quality? Photos are a good candidate for modern processing techniques, but video usually requires a clean signal, which 1.12-micron pixels just couldn't deliver, even with 3x3 binning. The Samsung S20/S20 Ultra still uses 12MP (with larger pixels than the previous generation) for its main camera, rather than a 48MP sensor with 2x2 binning.
  • BedfordTim - Wednesday, May 20, 2020 - link

    I saw it at a Sony seminar and unfortunately we didn't get a copy of the presentations. It doesn't offer any significant benefits in terms of low light imaging, and there is a small loss of sensitivity compared to a single big pixel.
    Video will still be limited by the sensor area, but it does allow you to shoot in HDR.
  • boogerlad - Tuesday, May 19, 2020 - link

    Can you provide a link to the presentation? Would love to read it.
  • Psyside - Wednesday, May 20, 2020 - link


    "No; it's not worth; it simple makes images worse and more expensive. I'd *much* rather a phone take good 8MP images than slightly worse 8MP images with the option for 64MP or whatever"

    Again, people that haven't seen S20 64MP photos, should not comment
  • s.yu - Thursday, May 21, 2020 - link

    Are you a regular reader of this site? Because Andrei covered this back in April, during the S20+ and S20U review.
    This is the page:
    https://www.anandtech.com/show/15603/the-samsung-g...
    And this is the sample you're looking for:
    https://images.anandtech.com/galleries/7547/S20-E_...
    It looks marginally better in terms of resolution than what I expected of the CFA layout and pixel pitch, but then there's far more sharpening too.
    On my 13" 1080p screen it's unusable at 100%. It's decent at 50%; however, 50% constitutes a 4:1 bin, which leaves it at 16MP.
  • Spunjji - Friday, May 22, 2020 - link

    God, you're right. That's terrible at 100%! Absolute junk. Also exactly what I'd have expected, given the design of the sensor and the optics available.

    Of course the point of that sensor design was never really to produce 64MP images.
  • Psyside - Sunday, May 24, 2020 - link

    What are you talking about? The 64MP shots are drastically sharper than the 12MP ones, and that was on old firmware. Get an S20, shoot a 64MP daylight photo, and compare it with the 12MP crap. I can't help you more than that - test for yourself. I did, and I know the ENORMOUS difference.
  • s.yu - Monday, May 25, 2020 - link

    Your theoretical difference is irrelevant. Samsung smartphones' (lack of) detail in their output is a product of their firmware and whatever processing it forces upon the user.
    64MP quad-Bayer output retains more detail than 12MP output, that goes without saying, but first of all it's not worth outputting 64MP (like I said, it's sharp at 16MP - how about releasing a tweaked camera app that defaults to 16MP output directly resized from the 64MP pipeline?). Furthermore, the 64MP viewed at 50% is sharper at a glance than the 12MP viewed at 100%, which strongly suggests there's intentional tampering in the 12MP pipeline that crushes detail the sensor otherwise recorded. Thirdly, this doesn't exclude the possibility that a 12MP sensor of the same size and process, outputting 12MP, would outperform this 64MP sensor with its current 12MP pipeline. I'm only stressing this because Gcam works most reliably with the tried and true 12MP readout, and I've heard very little about how reliable it is with high-resolution quad-Bayer sensors of 48MP and up.
  • s.yu - Monday, May 25, 2020 - link

    Actually, that sample does make the 12MP mode look a bit worse than it is. If you look in the shadows you'll realize that either the 12MP mode has stronger HDR applied or the 64MP mode is simply shot without HDR. Samsung's ZSL form of HDR seems to go way back but has never properly improved over the years; that's why the merge always results in subpar texture and detail retention. If the scene had stronger contrast, you'd more easily perceive that the 12MP mode pushes the shadows more, which apparently is what Samsung decided most people would like. Personally, I'd definitely prefer a 16MP image supersampled from 64MP as the default, instead of this low quality HDR 12MP, which looks something like JPEG quality 3-4 in Adobe-speak.
  • Psyside - Tuesday, May 26, 2020 - link

    Again, you are using outdated SAMPLES. New updates improved the HDR on 64MP a lot, and if you still can't tell the difference, or you think it's not worth it, you should quit following photography and focus on things you understand.
  • s.yu - Tuesday, May 26, 2020 - link

    If you discredit the samples, then there's nothing to discuss. Go whine at a site with "updated" samples.
  • Psyside - Wednesday, May 27, 2020 - link

    OK, send me your email and I'll show you "how bad" the 64MP shots are.
  • BedfordTim - Tuesday, May 19, 2020 - link

    The camera bumps have gotten larger, and effectively they are now using the full depth of the case. Folding phones trade area for thickness and could potentially go up to about 20MP. Extending lenses like those in compact cameras also offer the potential to go bigger.
    Sony and Kodak used to make Wi-Fi-linked cameras with full-size sensors and optics, which are the best option for going any further.
  • brucethemoose - Tuesday, May 19, 2020 - link

    Oh man, telescopic lenses in a smartphone... back to the future, and a glorious future at that.
  • s.yu - Thursday, May 21, 2020 - link

    As long as there's a thick portion of the folding phone (like the chin of that Moto), it would make plenty of sense to use an internal zoom lens with a 1" sensor. It doesn't have to extend.
  • jamDphax - Tuesday, May 19, 2020 - link

    These new crops of 1/1.5"~1/1.33" sensors are WAY bigger than the 12MP 1/2.55" sensors of the last generation. Those by themselves are not too bad, and are comparable in size to 20MP point-and-shoots, i.e. the Canon PowerShot line. And surface area goes up with the product of the scaling factors of the sides.

    So this 1/1.33" 50MP sensor is 3.7x the size of the 1/2.55" 12MP sensor in the Pixel 4, with 4.2x the pixels - not a ridiculous scaling. The quad patterning of the Bayer filter reduces color information to a quarter, but luminance-wise it should capture nearly 4x the light/detail of the Pixel 4 sensor.

    Personally, I am excited to see sensors larger than 1/2" used in phones. The bulk and complexity of camera bulges can be managed by taking out the ridiculous telephoto/macro/lidar sensors, which are often 1/3" or even smaller and coupled with optics that are 1-3 stops slower, capturing only a few percent of the luminance of the main camera.
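
The scaling figures above can be reproduced from the nominal optical formats (keeping in mind, as noted further down the thread, that the "inch" figure overstates the true sensing diagonal):

        # Nominal formats: 1/1.33" (this sensor) vs 1/2.55" (Pixel 4 main camera).
        linear_scale = (1 / 1.33) / (1 / 2.55)  # ratio of nominal diagonals
        area_scale = linear_scale ** 2          # area scales with the square
        pixel_scale = 50 / 12                   # 50MP vs 12MP

        print(f"linear ~{linear_scale:.2f}x, area ~{area_scale:.1f}x, pixels ~{pixel_scale:.1f}x")
        # linear ~1.92x, area ~3.7x, pixels ~4.2x -- pixel count grows only a
        # little faster than area, so per-pixel light gathering stays similar.
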
  • BedfordTim - Wednesday, May 20, 2020 - link

    Think of it as a 12MP sensor with sub-pixels. The lens limits you to 12MP anyway. The 50MP bit is just marketing.
  • s.yu - Thursday, May 21, 2020 - link

    Luminance doesn't translate that directly with Bayer (much less quad Bayer) because there's still interpolation involved, which is why the Leica M Monochrome does indeed take notably sharper photos with the same sensor as the regular M - it's just limited to B&W, and another downside is that you can no longer manipulate individual color channels when post-processing in B&W.
  • Psyside - Monday, June 8, 2020 - link

    So will you admit that you were wrong, or will you not have the decency to do so? I was waiting for you to send me your email, but you didn't, so I will be generous: to avoid embarrassing you (completely) I will use OLD, inferior photos.

    Mouse over to compare - identical crops:

    https://screenshotcomparison.com/comparison/3285
  • Santoval - Tuesday, May 19, 2020 - link

    The MP ceiling for phones lies at the same level as the marketing ceiling.
  • FunBunny2 - Tuesday, May 19, 2020 - link

    I don't care a fig about the camera on my phone - I've only used one a few times in all the years I've had a smartphone - but I've used both Nikon SLRs and Leica rangefinders, and the difference in lens construction between those two designs (nothing to do with brand names, btw, just physics) raises a question. With an SLR, short-focal-length lenses require a retrofocus design - basically, the lens mount has to be farther from the focal plane to leave space for the mirror swing, and retrofocus fools the lens into thinking the focal plane isn't way off in the distance. If memory serves, even "normal" 50mm lenses are retrofocus.

    Retrofocus lenses have to do their voodoo by inserting multiple elements into the lens - in essence, focusing the main element inside the lens, then pushing the focused image past the mount out to the film.

    With smartphone cameras, I'd expect the opposite to occur: the lens is way too close to the sensor, putting the focal plane behind your left ear, and thus requiring the physical focal plane to be moved back toward the lens to find the sensor. Each added element eats up light, and thus degrades the image. At some point physics/optics says, 'No mas!!'.
  • PeachNCream - Tuesday, May 19, 2020 - link

    Phone cameras are very useful for casual photography, capturing images or video as events happen, and photographing during emergencies. Few of us carry a dedicated camera with us on a routine basis unless our occupation makes it a requirement. I do not think it's really going out on much of a limb to say that a large majority of people do not bother purchasing a dedicated camera and rely exclusively on whatever capabilities happen to be in the phone they own.
  • Dizoja86 - Wednesday, May 20, 2020 - link

    I know few people with dedicated point-and-shoot or SLR cameras at this point. The processing that smartphones do to compensate for their tiny sensors and weak optics produces images that point-and-shoot cameras from a decade ago couldn't dream of. I'm more than happy with the cameras on my Galaxy S10, and most people have been impressed by the images I've taken.

    Of course I wouldn't consider smartphones for professional photography, but even flagship smartphones from 2015 would leave most people happy with the images they were capturing.
  • watzupken - Wednesday, May 20, 2020 - link

    "I know few people with dedicated point-and-shoot or SLR cameras at this point. The processing that smartphones to compensate for their tiny sensors and weak optics leaves images that point-and-shoot cameras from a decade ago couldn't dream of. I'm more than happy with the cameras on my Galaxy s10, and most people have been impressed by the images I've taken."

    There are not that many people that have a passion for photography, and you can kind of validate this from the people that you know. So I don't disagree that most people will find photos taken from older flagship good enough. For me, I still keep a bridge camera after I sold off my DSLR which I find too bulky after sometime. The bridge camera is a good companion when I travel as the image quality is still superior and offers the ability to have a good optical zoom. While mobile phones have somewhat caught up in terms of zooming capabilities, but the quality is nowhere as good from my opinion due to the small sensor size for the zoom lens.
  • s.yu - Thursday, May 21, 2020 - link

    You're not really making sense - mirrorless lenses can also be retrofocus, like Sony's ZE 35/1.4 and Zeiss's ZM 35/1.4. The current trend is better correction at the expense of more elements, which generally means significantly increased size and weight, and retrofocal designs seem to allow correction in that manner.
    Regarding smartphone lenses, they generally use a couple of radical aspherics, but I don't think they fall into any single category as a whole; for example, the 3x telephotos are probably true telephotos, in the original sense that the physical length is shorter than the focal length.
  • GC2:CS - Tuesday, May 19, 2020 - link

    How does the dual pixel work? There are two photodiodes for phase detection under each pixel, so why doesn't it make the sensor twice the resolution? If there are two diodes they must be insulated from each other - does that mean the effective light-gathering area is less than without dual pixels?
    What about the octa pixel Huawei uses? And what is the difference between a 50MP dual pixel and a 100MP regular pixel?
  • wax2142 - Tuesday, May 19, 2020 - link

    OK, there's a bit to unpack here... maybe if you DM me I can give you a full rundown of digital photography... but pertaining to your question: no, it doesn't mean you get two pixels for the price of one. Dual Pixel actually refers to the method of autofocusing. Essentially, one of the photodiodes is dedicated to capturing light information and the other essentially detects phase differences between pixels, and that information tells the lens how much adjustment is needed to get the photo in focus. You can learn more from this video: https://www.youtube.com/watch?v=Tk-Y34nMkeY

    So in short, 50MP Dual Pixel AF is basically a 50MP sensor with dual-pixel phase detection for fast on-sensor autofocus. The alternative autofocus methods are: a dedicated phase-detection sensor as in DSLRs (so yes, DSLRs have two separate sensors in separate locations), or the less robust contrast-detection AF for image sensors that lack integrated phase detection (e.g. some cheaper point-and-shoots). Because contrast-detect AF can only tell that the image is out of focus, but can't immediately tell by how much or whether it is front- or back-focused, that system is not really favored: focus speed tends to be slow, requiring multiple iterations to finally nail the right focus, and it leads to focus hunting as the camera continually oscillates between over- and under-focusing.
  • saratoga4 - Tuesday, May 19, 2020 - link

    >Essentially 1 of the photodiode is dedicated for capturing light information and the other to essentially to detect phase differences between pixels

    No, there is not a phase and a light photodiode. Both photodiodes record intensity from a common microlens. Each is read out separately, and phase inferred from the difference in intensity between the pixels (tilt in the wavefront will cause either the left or right pixel to get more light). The final pixel value is calculated as the sum of both photodiodes and the phase as the difference.
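
A minimal sketch of the sum/difference readout described above, with made-up numbers (real PDAF aggregates many pixels and uses a calibrated lens model, so treat this purely as an illustration of the idea):

        # One "dual pixel": a left and a right photodiode under a shared microlens.
        left, right = 180, 220  # made-up intensity readings from the two halves

        image_value = left + right   # what lands in the photo: total light
        phase_signal = right - left  # sign/size hint at which way, and roughly
                                     # how far, the lens must move to focus

        print(image_value, phase_signal)  # 400 40

        # In focus the two halves see the same light, so the difference tends
        # toward zero; the AF loop drives the lens until that happens across
        # the selected focus region.
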
  • Spunjji - Friday, May 22, 2020 - link

    That's the first time I've seen this explained in a way I understood. Thanks!
  • s.yu - Thursday, May 21, 2020 - link

    CDAF is usually more accurate than pure PDAF, the reason PDAF is preferred in phones is their deep DoF, so accuracy generally doesn't matter.
  • Spunjji - Friday, May 22, 2020 - link

    It's genuinely more about the speed than the accuracy. It's the same reason mirrorless cameras moved from the more-accurate CDAF to hybrid PDAF - CDAF speed is hard-limited by how fast you can move the lens elements, how fast you can read out from the sensor and how fast you can perform the necessary calculations. That all sped up a *lot* over the past 10 years, but it's still not optimal.
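
To make the contrast-AF "hunting" concrete, here is a toy hill-climbing search (the contrast function and step sizes are invented; real implementations are considerably more sophisticated):

        def contrast(position: float) -> float:
            # Stand-in for a real sharpness metric (e.g. sum of image gradients);
            # peak focus is arbitrarily placed at lens position 0.62.
            return -(position - 0.62) ** 2

        def cdaf_search(position: float = 0.0, step: float = 0.1, min_step: float = 0.01) -> float:
            # Step until sharpness drops, then reverse and halve the step:
            # the back-and-forth is the "hunting" visible in live view.
            best = contrast(position)
            while abs(step) > min_step:
                candidate = position + step
                score = contrast(candidate)
                if score > best:
                    position, best = candidate, score  # still climbing the peak
                else:
                    step = -step / 2                   # overshot: reverse, refine
            return position

        print(round(cdaf_search(), 2))  # settles near 0.62 after several passes
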
  • saratoga4 - Tuesday, May 19, 2020 - link

    >How does the dual pixel work ? There are two photodiodes for phase detection under each pixel.

    There are two separate pixels sharing a single microlens.

    >So why doesn’t it make the sensor twice the resolution ?

    They share a microlens, so each records the same spatial information, but different angular information. You can use them to infer the shape of the wavefront hitting the pixels, but not to extend resolution (at least not easily).

    >Does that mean effective area that gathers light is less than without dual pixels ?

    Yes, which is why they don't use it with the smallest pixel sizes.
  • trivik12 - Tuesday, May 19, 2020 - link

    It's definitely the right decision to reduce the resolution from 108MP to 50MP. I am curious how the optics will work. Ultimately the sensor has good possibilities, but overall quality depends on the optics and software. I hope this is used in the Note 20 and they knock it out of the park.
  • boozed - Tuesday, May 19, 2020 - link

    "with the autofocus performance not been nearly as performant and a dual-pixel PD solution"

    Ow my brain
  • watzupken - Wednesday, May 20, 2020 - link

    We are back in the megapixel race again, just like we were with portable cameras in the past. But frankly speaking, there is only so much these Quad Bayer tricks can do. As it stands now, we are no longer seeing significant improvements in mobile phone image quality. Huawei managed to pull off improvements mainly thanks to a larger sensor in their P40 Pro, but unless one has an eye for good quality photos, or is deliberately looking for flaws, most people will not be able to discern the difference between the outputs of the flagship phones. At some point this megapixel bubble will burst, and I think we are not too far from it now.
  • Tomatotech - Wednesday, May 20, 2020 - link

    I don't understand: This sensor is 1/1.33", or converted to sensible units, 0.75" or 1.9cm wide. How on earth is that and the associated lens / imaging pathway going to fit into a smartphone? I imagine it will also require >2cm wide lenses.
  • BedfordTim - Wednesday, May 20, 2020 - link

    You have spotted a bit more marketing. The sensing area diagonal is only about 12mm.
  • Tomatotech - Friday, May 22, 2020 - link

    Thanks for explaining.
  • s.yu - Thursday, May 21, 2020 - link

    The sensor sizes in inches go by the original video tube formats, so the number (or fraction) of inches denotes the diameter of the whole tube, not the actual sensor. The only way of knowing the actual diagonal of the sensor is to look it up, and you'll get a general sense of it if you come across these formats often enough.
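
A rule of thumb that falls out of that tube-era convention: the usable image circle of a "1 inch" vidicon tube was roughly 16mm, i.e. about two thirds of the nominal figure. A small sketch using that approximation (real sensors vary, so the datasheet is still the only exact answer):

        # Approximate sensor diagonal from the nominal "optical format" in inches,
        # using the old vidicon rule of thumb: 1" of format ~ 16 mm of diagonal.
        def approx_diagonal_mm(optical_format_inches: float) -> float:
            return optical_format_inches * 16.0

        for label, fmt in (('1/1.33"', 1 / 1.33), ('1/2.55"', 1 / 2.55), ('1"', 1.0)):
            print(f"{label} type -> ~{approx_diagonal_mm(fmt):.1f} mm diagonal")

        # 1/1.33" -> ~12.0 mm (matching the ~12 mm figure quoted above),
        # 1/2.55" -> ~6.3 mm, and a full 1" type -> ~16 mm.
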
  • Tomatotech - Friday, May 22, 2020 - link

    Thanks for explaining.
  • Tomatotech - Wednesday, May 20, 2020 - link

    Another question: I notice on the picture of the sensor, in the article, several of the outlets on the chip are not connected to the backing circuit board, and likewise several of the circuit board inputs are not connected to the sensor. About 10 such on the top edge and several more on the right and left side. Can anyone explain why?
  • PaulHoule - Wednesday, May 20, 2020 - link

    I wonder if those phase-detection sensors would make for a good depth camera.

    Also, in terms of megapixels, I've read (and experienced a bit) that digital SLRs and 4K video cameras reveal weaknesses in 35mm-size lenses, never mind the tiny optical platforms on smartphones.
  • FunBunny2 - Wednesday, May 20, 2020 - link

    "weaknesses in 35mm size lenses"

    may be those rice lenses, but Leica??? can't be topped. :)
  • Janie Durham - Thursday, May 21, 2020 - link

    It's very cool! I've been waiting for this for a long time!
