74 Comments

  • ET - Thursday, March 1, 2018 - link

    Can we call the AI Processing Unit an AIPU? Would be clearer.
  • Mumrik - Thursday, March 1, 2018 - link

    Is it just me, or does "Competitor B" (SD835) look hotter in every area of the phone yet have the same temperature? What are they getting at, or is that slide just bullshit?
  • Colin1497 - Thursday, March 1, 2018 - link

    The die maintaining the same temperature doesn't mean that the surface of the phone is maintaining the same temperature. Better heat transfer can move more power out of the die while keeping it at the same temperature.
  • Santoval - Friday, March 2, 2018 - link

    You should take that thermal comparison "test" with a very large grain of salt. The bottom heat patch on the Competitor B phone is obviously due to it charging at the time, but it is unknown whether the other two phones were also drawing power at the same time. Competitor A does not appear to be charging (its bottom is barely orange at all), and the Helio P60 phone does not appear to be drawing any power either.

    I don't think the optical image of the three directly corresponds to their thermal image - I have no idea what these weird adaptors(?) are supposed to be. In any case, this "test" is not a thermal test at all, since for it to have any merit the three phones would need to be at the exact same state running the exact same thing. A mere thermal photo with zero other data is not a thermal test comparison.
  • BigDragon - Thursday, March 1, 2018 - link

    Looks promising. I just need a list of phones that will contain these new processors. After the Snapdragon 808 and 810 debacle, I'd like to avoid Qualcomm.
  • Dr. Swag - Saturday, March 3, 2018 - link

    Because it's their fault that 20nm had really bad leakage...

    The 835 is perfectly fine. Pretty good power consumption too.
  • Plumplum - Saturday, March 10, 2018 - link

    20nm's fault?
    The Helio X20 could run all 10 cores at nearly full speed on the 20nm process for 10 minutes, losing only about 6% performance, so the rumors about the CPU are false. The GPU lost about 30% (many others lost 50% on the same test, but nobody noticed!).
    It's the combination of Cortex-A57 and 20nm that doesn't work. Both Huawei and MediaTek refused to ship the A57 (X10, Kirin 930), while Qualcomm used it to score well in benchmarks and released crappy SoCs.
    Their whole first generation of 64-bit parts was crap: 410, 615, 808, 810! But they managed to sell them anyway (patent abuse?).
    Then they used those problems to make people think their custom cores are better. They aren't (true for both Krait and the Kryo in the 820).
    The new Kryo cores are nearly the same as the Cortex-A73/A53 in the 660 (and probably the A75/A55 in the 845).

    It's obvious that the Kryo in the 660 or 835 is little more than a Cortex-A73; compare the detailed scores:
    https://browser.geekbench.com/v4/cpu/7397584
    https://browser.geekbench.com/v4/cpu/7397478
    https://browser.geekbench.com/v4/cpu/7165483
    Note: 835 = Kryo 280 @ 2.45 GHz, 660 = Kryo 260 @ 2.2 GHz, P60 = Cortex-A73 @ 2.0 GHz
  • ZolaIII - Thursday, March 1, 2018 - link

    Meh, still an entry-level GPU that will reach only ~75% of the performance of an Adreno 508~509 while using much more power. As usual the QC counterpart (S635) will be much better in everyday use and for light gaming, not to mention the long-term support from CAF and the availability of kernel sources. Still, it may not be all grim for MTK; we will see how Project Treble works out for them now that they can skip the OEMs for most of the process, but I don't have high hopes.
    On the other hand it will have one advantage, and that is pricing, thanks to MTK's track record of doing so, but more importantly because TSMC's so-called 12nm FinFET process is actually 15~20% cheaper (thanks to fewer mask layers) than TSMC's 16nm or Samsung's 14nm. I am still eager to see products on GF's 22nm FD-SOI, which pushes that cost limit a further 5~10% down (compared to 12nm FinFET), can push power consumption down even more (back biasing), and most importantly will bring RF component power consumption down a lot (50~75% compared to today).
  • serendip - Thursday, March 1, 2018 - link

    Process node comparisons are almost meaningless now because each foundry has different ways of calculating feature size. Mediatek have been relegated to the bargain basement end of the market, their chips are dirt cheap but support is woefully lacking. They also haven't learned anything about GPL licensing and code release requirements either.

    How is it that other SoC vendors can integrate Mali GPUs with decent power profiles but Mediatek can't? They're way behind compared to Qualcomm's Adreno GPUs that perform better and use less power.
  • Retycint - Friday, March 2, 2018 - link

    The weak GPU bit is only true for the X series though. Sure the mali is not as efficient as the adreno but the mali-t880mp2 in the helio P20 actually performs identically to the Adreno 506 in the SD625

    Source: https://www.gsmarena.com/sony_xperia_xa1_plus-revi...
  • ZolaIII - Friday, March 2, 2018 - link

    Yup, a Mali-T880 MP2 @ 900 MHz with DDR4 almost catches an Adreno 506 @ 650 MHz with DDR3. Take a good look at what the Adreno 509 (which is the same core as the 506) achieves @ 850 MHz paired with LPDDR4X. To simplify things: the T880 MP4 @ 800 MHz in the X20 is roughly equal to the Adreno 510 @ 650 MHz in the S65x. So a G72 MP3 @ 800 MHz is around the same performance as the T880 MP4 in the X20; it will match the Adreno 509 while still being behind the 512. The difference becomes bigger in real use, when Mali throttles down while Adreno continues to pump. And G72 is the latest Mali generation while the Adreno 5xx is two-plus years old; the 6th-generation Adreno is emerging, and it only makes the gap wider.
  • Plumplum - Saturday, March 10, 2018 - link

    On modern features like tessellation, Mali is actually far better...
    For that task: Mali > Adreno > PowerVR.
    For example, GFXBench doesn't test tessellation on the iPhone!

    Giving too much credit to 3D benchmarks is dangerous. Most of the time the rendered scene doesn't represent well what a real game is.
    A benchmark is free software... so who pays the developers?
    Nowadays, mid-range SoCs add big cores.
    In my opinion, buying an 845 instead of a 660 or P60 is pointless; benchmarks are there to give people a reason to!

    Benchmark results exaggerate the importance of texturing and the difference between high end and mid range.
    It sometimes leads to strange results:
    the Mali-450 MP6 beats the Mali-T760 MP2 in many benchmarks... yet most games run far, far better on the T760.
  • ZolaIII - Sunday, March 11, 2018 - link

    I only give credit to 3DMark regarding GPU benchmarks. You don't know which features the proprietary Apple API includes support for, nor whether they are tested or not. Most really good benchmarks are neither free (for more precise use) nor open source; they actually cost a lot.
    Your opinion isn't valid. Sure, the CPU difference between similarly capable parts (same number of the same type of cores) is minimal, and midrange SoCs even win when it comes to efficiency, as they aren't clocked insanely high and are made on a high-end manufacturing process, so in real usage the difference is negligible. So if you don't play games, your opinion stands. But on the other hand the GPU is the biggest portion of the SoC, considerably bigger even on entry-level midrange SoCs that have, let's say, a Mali-T720 MP2. As it's much bigger it also accounts for most of the SoC cost; for comparison, the Adreno 512 in the S660 is only half the size of the Adreno 540 in the S835, so the S835 costs 40% more. Still, the 512 is bigger than, let's say, a Mali G72 MP4. The Mali 4xx series is very old and supports only OpenGL ES 2.0. While the Mali-450 MP6 will in fact be some 20% faster (in GLES 2.0 only) than a T760 MP2, it's also 35~40% bigger and extremely power inefficient, and when the
  • ZolaIII - Sunday, March 11, 2018 - link

    Mali-450 emulates OpenGL ES 3.0, it becomes much slower in doing so than the T760, which supports it natively, and most present games use GLES 3.0.
  • Plumplum - Sunday, March 11, 2018 - link

    It's the case even with OpenGL ES 2.0.
    The reason is that the 450 MP6 has heavy texturing capability and a big weakness in geometry...
    As many benchmarks mostly test texturing, the weakness doesn't show up...
    ...but it shows up in games.
  • ZolaIII - Monday, March 12, 2018 - link

    OK, here's a counter-example: the old San Andreas works perfectly fine on a Mali-4xx MP4 while it runs terribly on an Adreno 305/306/308. But it's a lonely one. After all, it's mostly about the polygon and texture count a GPU can pump out, and it's easier and cheaper to improve on textures. The last generations of both PowerVR and Adreno significantly improved their texture engines, and the result is a 30 to 50% increase in performance. Adreno has leading ALU performance which isn't put to much good use, thanks to (bad) drivers and the fact that it isn't actually needed all that much in actual gaming, which is limited much more by memory bandwidth and texturing capability.
  • Plumplum - Sunday, March 11, 2018 - link

    I don't, ever since I saw one 3D benchmark that said the Tegra 3's GPU was far better than the Mali-400 MP4 and another that said exactly the opposite!

    3DMark is the one I prefer too!
    It separates:
    - texturing
    - geometry
    - CPU usage for physics
    Other tests mostly measure texturing and badly represent what a game is.
    They pretend flagship GPUs are 200% better than mid-range ones; in games, that's almost never the case!

    The P60's (or SD660's) GPU is more than enough to run nearly all games over the life of the device...
    the 845 will be overkill...

    About tessellation on PowerVR, we know it's bad from the other SoCs that use it (MT8176, X30)...
    We don't know about the latest one.
  • ZolaIII - Monday, March 12, 2018 - link

    The benchmark results actually are an accurate representation of what you get. The A540 has 2x the size and 2x the memory bandwidth of the A512, so it's accurately 2x the performance. Games on Android do limit the frame rate to preserve battery, and I know only a couple of games that give you an option to control this; Xenowerk is a prime example of options done right, letting you choose the graphics level along with a 30 or 60 FPS limit. The G72 MP3 in the MTK P60 (roughly an Adreno 509 in performance, not counting throttling) will be far from enough for running all games and it will throttle (we will see how significantly). The A512, on the other hand, still gives 50% more performance and won't throttle, but even that is either the bare minimum or not sufficient for some titles (the Dolphin emulator, The Talos Principle, GOF3 to name a couple). The A510, even with the Vulkan backend in PPSSPP, has a difficult time running some of the more advanced games even at 720p. Still, only a fool makes a game that won't work properly on 92% of devices (taking the A512 as the baseline) or 80~83% (taking the A510, G72 MP3 or A509 as the baseline), but improvements on the GPU front are badly needed, especially in the midrange segment that actually drives the gaming side of the industry.
  • Plumplum - Tuesday, March 13, 2018 - link

    You said it: 2x the size, 2x the bandwidth... to apply textures and fit the benchmarks.

    Comparing the 509 and 510 isn't a good idea when talking about throttling... the 510 uses a 28nm process, the 509 uses 14nm.
    Throttling is heavier on the high end!
    You can use GFXBench and compare the Manhattan 3.1 on-screen score with the Manhattan long-term performance score.
    On the Mi Note 3's Adreno 512: 14.7/14.6, no throttling.
    On the Mi 6's Adreno 540: 35.8/28.2, throttling.
    On some other high-end phones like the Galaxy S7, throttling can reach 50%... even in Real Racing 3, which renders scenes at 1080p, the Kirin's Mali-T880 MP4 becomes faster than the Exynos's T880 MP12! A real-life test!
    GFXBench's long-term test isn't perfect, as it doesn't saturate the CPU at the same time.
    Due to its 28nm process the 650/652 has horrible CPU throttling; it impacts the whole SoC including its Adreno, and running an emulator that uses both CPU and GPU kills 650/652 performance.
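
    To express those GFXBench numbers as a throttling percentage, here is a minimal sketch (plain Python, using only the fps values quoted above; nothing else is assumed):

        # Sustained-vs-peak loss from the GFXBench Manhattan 3.1 results quoted above.
        def throttle_pct(peak_fps: float, sustained_fps: float) -> float:
            """Percentage of peak performance lost under sustained load."""
            return (1 - sustained_fps / peak_fps) * 100

        print(f"Adreno 512 (Mi Note 3): {throttle_pct(14.7, 14.6):.1f}% lost")  # ~0.7%
        print(f"Adreno 540 (Mi 6):      {throttle_pct(35.8, 28.2):.1f}% lost")  # ~21.2%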
  • ZolaIII - Friday, March 2, 2018 - link

    Well, I used TSMC's own references to its own lithography processes: 12nm uses 12 mask layers, 14~16nm uses 14~16 mask layers. It's pretty simple math, even disregarding the routing libraries used, etc. GF's statement is based on their own calculations, and even if an FD-SOI wafer costs 2x more, the mask-layer reduction to 8 more than makes up for it. Beyond that, the obvious advantages are: RF and mixed-signal usability, and the potential to integrate all analogue and digital components in the same single package (with FinFET you can only do digital, while RF remains on planar bulk in a separate package[s]); back and forward biasing, which gives you the ability to adjust voltage (0.4V, 0.8V and beyond), performance targets and naturally power consumption on the fly; routing cost, together with fewer layers and compatibility tools, is half that of FinFET; and the speed of getting the job done means time to market is 50% faster (conservatively, compared to the simplest/cheapest 12-layer 12nm TSMC FinFET). All foundries use base cell blocks for calculations and comparisons, and while that's neither wrong nor cheating in any way, it doesn't give the complete final picture: for instance, the base transistor cell block on one process can be bigger than the competition's while the base SRAM cell is smaller...
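
    If you want to sanity-check that mask-layer argument, here is a tiny sketch (an assumption-laden simplification: it treats per-wafer patterning cost as roughly proportional to mask-layer count, which is the approximation used above, and ignores wafer-cost differences):

        # Relative patterning cost if cost scales ~linearly with mask-layer count.
        def relative_cost(layers: int, ref_layers: int = 16) -> float:
            return layers / ref_layers

        for name, layers in [("16nm FinFET (ref)", 16), ("14nm FinFET", 14),
                             ("12nm FinFET", 12), ("22nm FD-SOI", 8)]:
            print(f"{name}: {relative_cost(layers):.0%} of the reference cost")
        # 12 layers vs 14~16 works out to roughly 15~25% cheaper, in line with the claim above.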

    Other vendors aren't such cheap bastards, so they put in more GPU clusters at lower clock speeds, or rather Samsung does; HiSilicon is just a tad better than MTK in this regard, and still Samsung's best effort is 2x less power efficient than Adreno. Imagination's tech did some catching up, but QC responded equally (also raising the texture unit limit [Gx8xxx - A6xx]), pretty much zeroing that out. Still, maybe by now we will see better competition on the mobile GPU front after the Chinese consortium bought Imagination's GPU business.
  • agoyal - Thursday, March 1, 2018 - link

    Can someone list phones using the P20 and P30 series... I don't recall seeing any major vendors using them.
  • leo_sk - Thursday, March 1, 2018 - link

    Basically a Snapdragon 660 competitor in CPU power but lagging in graphics and modem. They are avoiding a direct clash with the Snapdragon 635, which would be present in most midrangers, unless they are pricing it that competitively.
  • name99 - Thursday, March 1, 2018 - link

    Uh
    the super-midrange “premium” category

    WTF???
  • serendip - Thursday, March 1, 2018 - link

    Well, considering that Qualcomm just came up with the 700 series as a rebrand for its 66x chips, it's possible these companies see a segment opportunity just below flagships. Not everyone needs or can afford a Snapdragon 845. Efficiency is still a big issue with Mediatek chips though, their GPU and overall SoC designs use a lot more power than comparable Qualcomm chips.
  • name99 - Saturday, March 3, 2018 - link

    midrange I accept.
    super-midrange I'm willing to accept.

    But "super-midrange premium" is in the realm of lunacy
  • HardwareDufus - Friday, March 2, 2018 - link

    "super-midrange “premium” category"

    I couldn't agree with you more.... absolutely meaningless distinction.... do 'Super', 'MidRange' and 'Premium' even belong in the same descriptor set??

    I miss the days of Cheap, Good, Better, Best. There! We're done....
  • jjj - Thursday, March 1, 2018 - link

    This would be nice at $18 or thereabouts, with the P40 below it (vs the SD636) and above it the P70, maybe at around 25% higher perf.
    Overall, not sure if MTK did enough with the A73 on 12nm when Qualcomm seems to be going with the A75 on 10nm, with a slight delay I suppose. Not a problematic difference in the end, as long as there are no unexpected issues.

    The so-called AI accelerator is supposed to be the Cadence Vision P6.
    Do you have any clue on die size? We rarely get such data for mid-range SoCs.

    Should be a good year for mid range; too bad that DRAM prices are ruining the party.
  • ZolaIII - Friday, March 2, 2018 - link

    MTK is trying to kick into the cheap SoC market range, so neither the A75, which is 50% bigger than the A73 and therefore cries out for more advanced lithography, nor 10nm FinFET, which they already had a bad experience with and which is mostly considered a pricier half node, makes much sense.
    Can you confirm that it's the Tensilica Vision P6 DSP?
    That would actually make my day, as it's far superior to the QDSP (especially the cut-down version of it). It would also mean that most of the code can be shared between Samsung, HiSilicon and now MTK, as they would all share the same DSP, although probably in different cluster configurations. It's one of my favourite designs and scalable up to MP4, so very capable in both AI and media/vision tasks.
  • jjj - Friday, March 2, 2018 - link

    Qualcomm will likely have A75 at up to 2GHz with their 700 series, not higher to protect the SD845. A P70 with A73 at 2.5GHz on 12nm is competitive enough but won't lead in perf and efficiency. Mediatek is not being as aggressive as Qualcomm and they should know better by now as Qualcomm has been very aggressive for a couple of years. Mediatek will keep getting burned if they don't step up, they can't afford to behave like the tiny guy, they are not. Qualcomm used to get outfoxed by pretty much everybody else but they are not leaving any room for that anymore.
    Yeah, there is the SD636, and maybe the 660 stays above it (the 636 is a downclocked 660), so they got that part covered, but the 670 & friends, AKA SD7xx, should be A75 on 10nm.

    I'm pretty certain about Vision P6 and it's interesting that the X30 uses Vision P5 for 70 GMAC.
  • ZolaIII - Friday, March 2, 2018 - link

    & so did the previous Kirin. The S636 isn't a downclocked S660, as the GPU is cut by a third, or rather it's the old A506 beefed up (higher clock and an LPDDR4X memory controller). It's the old chicken-or-egg question. ARM said the A75 can give a peak performance gain over the A73 of around 25%, but sustained performance will be about the same. So why would someone like MTK go for it? Sure, there's DynamIQ, which isn't compatible (or so they say) with the older A73s. DynamIQ reduces up-migration times by 35~40%, even 55% in the best-case scenario (if cores are in an active idle state), which in real workloads adds very little to performance but will significantly raise what users call "snappiness" and we call response time. That will indeed make a difference to how most folks feel it.
    I know you are a fan of MTK, you always were, but let's face it, they simply aren't doing things smartly. They, or for that matter anyone else, can only win by playing smart and focusing on real user needs (similar to how AMD did it before, and is probably doing it again). I hope we will soon see some DynamIQ cluster products containing only two A75s and four A55s, a P6 (or its successor) MP2 and a GT8540 GPU, on TSMC 12nm or GF 22nm FD-SOI, naturally all reasonably clocked (1.8~2 GHz for the CPU, which is sustainable for two A75s, and 600~700 MHz for the GPU...). It doesn't matter that it won't be groundbreaking, and it doesn't matter that such an SoC would cost $35~40; it would still be much cheaper than the same one on 10 or 7nm (until EUV becomes a real standard). That's the way to win the war: by making it solid, mass-market and still affordable.
    I don't expect MTK to deliver that. HiSilicon has had some ups and downs but they are far from mature behaviour, so they certainly won't deliver it. Samsung could, but they won't, as they value their profits above all. QC won't, as it's been in the leader's position too long and also likes to charge a little extra. NXP really could, but they have no interest in the mobile market at all and will probably become part of QC anyway. Old Broadcom is gone, and Texas Instruments bailed out long ago with no intention of coming back. So basically nothing will change. Maybe if we got hyper-competition with the Chinese jumping in something could be different, but the US is blocking it. Sorry for the long one; anyway, it's just my perspective of things as they are now.

    Best regards.
  • jjj - Friday, March 2, 2018 - link

    I'm nobody's fan and I take any such suggestion as an insult, but that wasn't your intention so I'm not gonna get upset here. Anyway. The A75 would offer the possibility of much higher peak perf vs the A73. They could go as high as the SD845, and Qualcomm would be reluctant to follow with the SD670. An A73 at 2.5GHz should be behind an A75 at 2GHz, at least in FP, as that's where the A73 is weak, so, depending on everything else the SoC offers, they might have to price it significantly lower while also winning far fewer sockets.

    $35-40 is a lot; even the very high end would be lower if there was competition. What you want (assuming you want a tiny MP2) would be nice at $12-15 next year on 7nm, with a newer core, not the A75.
    Cost-wise, 7nm does not need EUV to be much cheaper than 14/16nm, and EUV won't do a lot on the cost side. EUV arrives when it arrives, and then folks struggle to avoid double patterning; then maybe high-NA EUV arrives on schedule in 2024, or maybe not... EUV is too late to provide a huge upside. It will help and it is needed, but there is no revolution on the cost side.
    As for a focus on reasonable solutions, that's what MTK has mostly been doing, and they got screwed by marketing because consumers are less than reasonable. What they need is to beat Qualcomm on the GPU side, consistently, at any cost. That's aside from being aggressive and executing flawlessly, but that's a given.
  • jjj - Friday, March 2, 2018 - link

    there is NO revolution*
  • ZolaIII - Saturday, March 3, 2018 - link

    & the revolution eats its children. The development of Chinese home-grown, full-complexity semiconductor manufacturing won't be a revolution but an overtaking, and the US is aware of it.
  • serendip - Friday, March 2, 2018 - link

    EUV will take a long time to reach commercial production so we could be stuck with 10-12nm for a while.

    The only way to beat Qualcomm right now is to have a SoC that's equal on the CPU side with equal or better GPU performance, along with a decent modem. HiSilicon and Samsung have come close for the high end but Mediatek are behind in the flagship, midrange and low end segments. In other words, they're way behind. A DynamIQ A75+A55 chip for the midrange priced lower than the SD670 would make Mediatek more attractive to OEMs.
  • ZolaIII - Saturday, March 3, 2018 - link

    It's not possible to beat QC in the flagship segment regarding the GPU. In the higher-end midrange segment it's probably possible to match it, and most folks would be satisfied with that. EUV is apparently here already.
  • ZolaIII - Saturday, March 3, 2018 - link

    There is not much use in clocking either the A73 or the A75 beyond the 2 GHz point, and even then only for burst tasks; above 1.6 GHz on FinFET and 1.2 GHz on bulk planar or FD-SOI they begin to leak like hell. Over 2 GHz the power consumption rises almost exponentially.

    You didn't read it right; the GT8540 is Imagination's last-generation flagship GPU with four clusters, so it's huge. That's why I said $35~40, as a PowerVR cluster is actually bigger than a Mali cluster. The MP2 was for the Tensilica P6 Vision DSP.
    Cost-wise, increased complexity along with double or triple patterning does now add significantly to the price, as the wait period between patterning steps is long, wafers are stored in a sterile environment between them, and it significantly reduces output per unit of time. EUV is there to significantly reduce that wait, so it will boost output and lower cost. From what I read recently, Samsung has already implemented it on its 7nm FinFET; it's production ready and production is starting (already, or soon), and anyway products on it are coming in Q1 next year. I know EUV is not strictly needed at 7nm, but it will be a game changer, as you will see. Other than that, 7nm isn't cheaper than previous nodes, as it also employs rare-earth conductor materials, but that doesn't actually raise the final price all that much. To simplify things: if you get 50% more cells (7 vs 10nm) on a 300mm wafer but you also need 50% more time to produce it, then the cost per cell is the same; if you need only 75% of the time (EUV), the cost is slashed in half.
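
    The per-cell arithmetic in that last sentence can be written out as a tiny sketch (plain Python, with the 1.5x cell gain and the two time factors taken straight from the comment above; everything else is just the ratio):

        # Relative cost per cell = (relative production time) / (relative cells per wafer).
        def relative_cost_per_cell(cell_gain: float, time_factor: float) -> float:
            return time_factor / cell_gain

        print(relative_cost_per_cell(1.5, 1.5))   # 1.0  -> same cost per cell
        print(relative_cost_per_cell(1.5, 0.75))  # 0.5  -> cost per cell halved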
    MTK, or even NV, can't beat QC on performance per watt or performance per square mm. What they could do is beat it in the higher midrange segment by employing a bigger GPU on more economical lithography, narrowing the current Adreno advantages. I suggested PowerVR as it's the most competitive option known, and with it they could even win. Still, MTK does charge less (or at least we assume so) per equal-size SoC than QC (although I am sceptical about that, as QC doesn't make much money out of SoCs); if they use 30~40% cheaper lithography, that gives them the opportunity to put in a 2x-sized GPU and offer it at prices competitive with QC (not cheaper). To date MTK has done quite the opposite of this. In the end $35~40 is a lot more than $15~20, but it doesn't make much of a difference on a $200~350 product.
  • ZolaIII - Saturday, March 3, 2018 - link

    MTK does charge less *
  • jjj - Saturday, March 3, 2018 - link

    "Can't beat" is not a realistic statement in semiconductors, they just need need to figure out how.

    On EUV you are not informed, at all. EUV will be inserted when, at the very least, it doesn't increase costs while shortening cycle time. In Samsung's case they might do it at any cost, even at huge losses, and it remains to be seen when, as they have been rather silent lately on timing. The insertion is being forced and the upside will be none to minimal at first. One year later, 5nm arrives and it's another struggle that only gets worse with 3nm and beyond. EUV should help mitigate the escalating costs, and that's a slightly optimistic scenario; it won't enable any kind of return to solid cost scaling though.
    7nm DUV should offer maybe 30% savings over 14/16nm on the silicon side; dev costs are another matter. Dev costs are why folks do not migrate: high ASPs and/or volumes are needed for enough total revenue, otherwise the lower cost per transistor doesn't help.
  • ZolaIII - Saturday, March 3, 2018 - link

    If someone runs twice as fast as you, you can race him but you won't win (Mali vs Adreno). If someone is 33% faster than you but you can compensate by going down a league, you can still be a champion in that league as long as your expenses are competitive (PowerVR on a cheaper manufacturing node).
    I am sorry, my bad, I meant DUV.
    As far as I can recall you were not exactly for 22nm FD-SOI. Did this change recently? I still see it as competitive with 12nm FinFET in the digital space while being a sure thing in RF and mixed A/D. As it's easier to do things on one node only, it wins overall. They are gaining a lot of traction; for instance the P6 Vision DSP was already routed on it, the RF parts are emerging at high speed, and they have had both SRAM and MRAM cells ready for quite some time now. Unfortunately wafer availability isn't good, but we see that trend now globally regarding wafer production and the estimated price jump. I see it as the best solution if you want to go the cost-effective way and you are a relatively small or medium vendor, especially if you are pressed for time to market.
  • ZolaIII - Saturday, March 3, 2018 - link

    Regarding wafer production & estimated price jump *
    Actually, a confirmed price jump.
  • jjj - Saturday, March 3, 2018 - link

    Mediatek is not small or medium, they are a giant. Excluding memory and foundry players, only a few are larger than them: Intel, Qualcomm, Broadcom, TI, NXP and Nvidia. For any phone SoC above $10 they will likely go 7nm as soon as possible; it's not like others won't. What they do at $5, I have no idea.
    On the GPU side, their options range from Mali, Imagination and Vivante to AMD and Nvidia to in-house development. It's up to them to figure out how to get it done and at what cost, but they need to if they are gonna improve their image. Carriers and OEMs might care about the modem, but users don't give much of a ... about that.
  • serendip - Saturday, March 3, 2018 - link

    Mediatek is indeed a large company but they play at the bargain basement end of the industry with $50-150 devices in emerging markets. That's a lot of volume to keep them profitable but their attempts to go after the midrange and high end have failed.

    To me, Mediatek messed up in two areas: perf/watt and UI responsiveness. Snapdragon chips that cost slightly more offer much better perf/watt while also making Android feel snappier.
  • ZolaIII - Saturday, March 3, 2018 - link

    Many enthusiast and volunteer developer eyes and efforts behind CAF made QC's SoCs actually work better. CAF is the last of its kind after QC killed much better efforts from Texas Instruments and old Broadcom; Samsung and Huawei tried a bit, but they're far behind the mature open-source infrastructure already in place from CAF. MTK has none and never even tried to have one, at least regarding Android, and on Linux it's also a sad story, although some SoCs do have support there. QC pushes the clocks beyond reason, which has been an ongoing trend across the whole industry for a long time now; only Intel and Apple differed a bit a couple of years ago, but now even they are on the same bandwagon. While everyone else pushes it beyond reason, MTK pushes it to an insane level.
  • jjj - Saturday, March 3, 2018 - link

    @serendip That's nonsense. They did not play in the high end, but they always did in the upper mid range and below. How they compare vs Qualcomm will differ from SKU to SKU, from cycle to cycle. Sometimes they are ahead, sometimes they are behind, on any relevant metric.
    Qualcomm has a larger share today only because they leverage their patents, in legal ways and less-than-legal ways, to boost chipset sales. They have also been allowed to charge absurdly high prices for their patents; the income before tax from patents almost covers their entire operating expenses on the chip side, and that's a very copious free lunch.
    If regulators had done their jobs, allowing Mediatek to compete on product, Qualcomm would be a lot smaller today.
    You are clearly impacted by marketing BS, but you should realize that the high end is some 20% of smartphone volumes and the bulk of that is Apple, while Samsung takes the bulk of what's left. That's why MTK has not played in the high end: it's risky, costly and the volumes are limited.
    MTK messed up last year by not having a 16nm competitor to SD660 and by only having CAT 6 with the octa A53 offering.
  • Plumplum - Saturday, March 10, 2018 - link

    MediaTek's problem is mainly patent fees on devices over $200.
    OEMs pay fees to Qualcomm even if they use an SoC from another vendor.

    The more the device costs, the bigger the patent fee (that's the reason for the war between Apple and Qualcomm)...
    That's why MediaTek can't sell high-end parts such as the MT6595 or X30.
    Intel failed that way too with the Z3580, despite a powerful GPU (PowerVR G6430, nearly the same one Apple used) and CPU cores that beat Krait on many tasks (JavaScript and HTML5 as used in browsers, for example).
    After that, Intel stopped making SoCs for smartphones!
    Patent fees are by far Qualcomm's biggest source of profit ($6.5 billion, 600% margin in 2016).
    Profits on SoCs are far, far lower ($1.8 billion, 12% margin).

    The KFTC (Korean Fair Trade Commission) says Qualcomm uses patents to sell its SoCs. They aren't the only ones to say so.
    In my opinion, the idea that MTK SoCs are cheaper is false. MediaTek needs to make a profit by selling SoCs. Qualcomm doesn't need to!

    Perf/watt depends on what you compare... the 20nm of the X20 against the 16nm of the 820? The P2x series is efficient... at least in some devices.
    Responsiveness? Oh! Used well, MTK chips are quite fast! Despite the 20nm process (the same one that led to the crappy 810!), the X20 is nearly on par with devices that use 14nm.
    Some SoCs like the S410 and S615 were even ridiculous compared to their competitors, the MT6732 and MT6752.
    Western countries didn't see many of them! MediaTek was forced to reduce power in the next generation (MT6735 and MT6753).

    MTK cuts costs on the GPU and modem, but the CPU part is quite powerful!

    Since MTK most of the time can't reach the major OEMs, they sell to small companies:
    - no developers, no source code, no major updates
    - correct CPU/GPU performance, but integration with other vendors' parts is bad. The ISP is badly used... the ISP in Helio was the first able to handle dual-camera effects! Despite these capabilities, some devices show horrible bokeh effects.

    MediaTek's image is linked to these OEMs. But MediaTek needs them!
  • ZolaIII - Saturday, March 3, 2018 - link

    That's not an answer to the question I asked you, and compared to Samsung and Intel they are tiny; after QC merges NXP in, it will also be in the same tier as those mentioned above, and by far the largest fabless one. With all due respect, compared to the really big ones MTK is mid-sized, although I am certain they are keen on growing beyond that. I am certain they will push to 7nm and continue to push insane clock rates, but that is exactly the wrong way to do it. Do you think people really care more about 20% better synthetic benchmark performance or about 20~30% better battery life?
    Users don't care... about the cellular modem as long as it works and the LTE bands they need are supported, but reducing its power consumption (along with every other RF part) by 50~75% is something they would welcome far more eagerly than Cat-whatever speed improvements, and besides, it's becoming necessary for the further move to 5G.

    I will pretend that you answered my questions about FD-SOI, as (obviously) you still prefer FinFET. So I will ask you: what do you think about the hybrid SOI FinFETs that IBM is going for?

    In case someone else wants to join the discussion, here is one of the recent papers comparing FinFET and FD-SOI.
  • ZolaIII - Saturday, March 3, 2018 - link

    Supported *

    The paper:
    http://file.scirp.org/Html/1-7601225_76277.htm
  • jjj - Saturday, March 3, 2018 - link

    @ZolaIII Already said this, so not sure why I must say it again: consumers are not rational, they buy on marketing, and yes, vendors need to push stupidly high perf instead of rational solutions. Rational solutions are what ruined MTK's image, creating the "they got slow GPU" rhetoric - you can even see that in comments here from a bunch of folks. Or look at how many think that Apple is ahead in CPU, just because they have a gigantic core pushed to the limit (that nobody needs) and in MT they throttle by like 30%. It's stupid, but it's how the consumer is today.

    On the manufacturing side, I support monolithic 3D and non-volatile switches for brain like devices, I don't particularly care about planar efforts to nowhere- huge efforts for minimal gains, a push that makes no sense and does not enable the solutions the world needs for future devices like glasses, robots, IoT and even future servers.
  • serendip - Saturday, March 3, 2018 - link

    @jjj Slow down on the insults, friend, I'm always a fan of civil discussions even when people disagree.

    Zola is right on what really matters: user experience and battery life. And no, I didn't swallow Qualcomm BS, I think their patent licensing is stifling the industry but their technical prowess can't be denied.

    Mediatek cannot compete on product because their product is behind comparable Qualcomm SoCs. Regulators could level the playing field on the baseband side but the computing side is an open market. Mediatek's only advantage is lower price. Their chips score highly on synthetic benchmarks but they consume 20-30% more power than equivalent Snapdragons. Third party ROM support is also huge on Snapdragons thanks to CAF: Zola is right again on this, no other SoC vendor is as open, and I've sworn never to use Mediatek again after being burned by missing updates on older phones.
  • jjj - Saturday, March 3, 2018 - link

    @serendip
    There is no insult, I stated facts; your discourse is clearly one that is heavily impacted by fanboyism and fabricated facts.

    "they play at the bargain basement" - false, they try in high end at times and cover everything bellow.
    "$50-150 devices in emerging markets|" - again false
    "their attempts to go after the midrange and high end have failed." - mega-false, they have dominated mid range for many years and they where close to surpassing Qualcomm in overall smartphone AP units share in 2016 but they had some 20nm and 28nm shortages, then they messed up in 2017.
    Then you claim that Mediatek is always inferior and if that's not ridiculous and fanboysm, what is? Qualcomm messes up a lot, they got outfoxed again and again for a decade.
    And now you repeat a bunch of the same lies. In 2015 Qualcomm messed up across the board, across the effing board, SD810 and SD615 both melting phones and in low end they lost almost all share. In 2016 they had the horrible Kryo. Late with DDR4 support in mid, last year they forgot support for 18:9. Even the 845 smells wrong, the CPU perf is too low for those clocks, there might be a slight bottleneck in the memory subsystem.
    You've seen some review here of a A53 Mediatek clocked high (like the X10) some years ago and you think Mediatek sucks.
    Stop with the lies and get your facts straight, that would be the first civil step you could take.
  • serendip - Saturday, March 3, 2018 - link

    I'll grab my coat ;)

    Anyway, one final riposte - I'm not a fanboy of anything, I just want a fast and efficient *device* with long-term software updates. I don't care who the component vendors are.

    Xiaomi + Qualcomm give me what I want despite both companies being utter ***holes in a lot of ways. Xiaomi are always late with kernel source releases and their bootloader policies are insane, whereas Qualcomm have stupid arbitrary restrictions on their chipset models, along with their predatory licensing practices. Qualcomm also screwed up the 845, but I'm happy that their midrange 660/670 designs are more efficient than the competition.

    Xiaomi + Mediatek gives me slightly cheaper phones with less battery life and almost no third party ROM support, a compromise I'm not willing to make. At the end of the day, I'm a device user, not a semiconductor market analyst. My phone could be using a single core OMAP for all I care, as long as my usage requirements are met.
  • jjj - Sunday, March 4, 2018 - link

    MTK went with 12nm and moderate enough clocks; on memory I'm not sure, here we see 2x 16-bit but maybe that's not accurate. If true, and coupled with a modem that's only Cat 7, this might be designed for a lower price point than what I was expecting. They also got the new modem architecture, so you can't bet on Qualcomm being lower power, or higher perf, or lower price, or anything.
    This space will be very crowded this year. It seems there will be the P40 at some $12, the SD636 at some $15, the P60 likely somewhere between $15 and $18, the SD660 maybe above $20, then the P70, SD640 and SD670 - this last one might be priced high; not sure, because I'm not sure about the specs, whether it's aimed at $25-30 or $40.

    The idea that Mediatek is cheaper is an urban legend. Just like with any other product, the marketplace and costs will determine pricing. If anything, prices are good now because in the last couple of years Qualcomm has applied a lot more pressure in mid and low end, and that drove prices down.
    Mediatek had towards 50% margins in 2014 and that's solid; now they are in the mid 30s because of the intense pricing pressure. Qualcomm's margins tanked too and they had to make rather large job cuts to sort out their financials; they aren't quite there yet - and all that despite having a monopoly in the high end. Every solution is priced in line with what it offers: if they have a cost advantage vs the competition, they can apply more pressure; if they don't, they take it easy. Qualcomm has a bit more leverage due to patents and scale, but other than that, neither of them is some kind of bottom feeder that sells at very low margins.
  • ZolaIII - Sunday, March 4, 2018 - link

    Well, Xiaomi isn't that bad at releasing sources (although the released sources aren't that good); I would say they are average. Besides, it's not a big deal, as they use a unified kernel infrastructure (which is an even bigger problem now regarding Treble requirements) with most parts shared between different models, so if there is an earlier device with the same or a similar SoC, that source can be used (for example the Kenzo source reworked for the Mi Max & Max Prime). What they are really bad at is updating the source and working on it, which raises security questions, as you can't apply CVE patches without updating the infrastructure to the latest subversion (actually most of the time you can, but it's a hell of a lot of work and it's much easier to go the regular way). QC is the biggest piece of shit, and the last one standing (regarding open source). I am a developer and a guy who likes to play with this a lot, especially the scheduler and HMP scheduler. Jjj is a CEO, and I won't say where.
  • ZolaIII - Sunday, March 4, 2018 - link

    It's not wrong (the S845 performance). Most benchmarks use the instruction mix Android favours, and most of those execute only two instructions per clock, so the third issue slot is mostly never used. Take a look at the web-based PCMark results and you will see it lands right on ARM's projections, as those use all three instructions per clock most of the time; you will also see a similar decline in A73 performance there compared to the A72. The RAM memory controller remains exactly the same as on the S835. ARM messed up with the shared L3 victim cache per DynamIQ cluster, which is actually rather bad, and that, in combination with QC using half the maximum L2 amount, is the result. ARM's bold claim that the newly introduced per-core unshared L2, and access to it, is a big leap forward is very debatable. Sure, as ARM puts it, if those are the only cores in use, unshared exclusive access brings 2x performance, and combined with 2x the size that's equivalent to 4x more entries in the same time frame, which actually adds 5~9% (11~12% in extreme cases) to performance.
    But let's look at reality: big cores are mostly used in short bursts (for initialising new processes and multiple workers) or for running apps (with large numbers of workers), but again in most cases, especially on the Android platform (99%), only two of them. On pre-A75 SoCs that are octa-core big.LITTLE, it's possible (and desirable) to tune with hotplug which cores remain always idle, so that only the first core from each pair that shares an L2 cache stays active and thereby gets effectively exclusive access to that L2 (besides, even in the rare cases when all four are needed, they can be put to use faster, as only the cache needs to be prefetched and the core initialised, not the cache and cores both). Set up like that, having two cores idling also brings up-migration times on two-cluster systems down to the best achievable. Now, a 2x larger L2 is costly and most if not all vendors won't go with it (QC didn't). So in the end the old A72s/A73s get the benefit of exclusive access (in octa big.LITTLE) and a 2x larger L2 (when things are set up as described), while the A75 won't, because of the L2 size cut. I am concerned about the S845, as the CPU clocks are far beyond a sustainable level, but as stated before that is becoming universal. In the end, the best A75/A55 DynamIQ design for the mobile platform is 2x A75 (hopefully with ARM's recommended 2x L2 size increase) and 4x A55, as in reality you really don't need more.
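
    For what it's worth, the hotplug trick described above (keeping the second core of each L2-sharing pair offline so the remaining big core gets the L2 to itself) boils down to a couple of sysfs writes. A minimal sketch, assuming a rooted Linux/Android device that exposes CPU hotplug via sysfs and a purely illustrative core layout in which cpu4/cpu5 and cpu6/cpu7 are the big-core pairs:

        import pathlib

        def set_cpu_online(cpu: int, online: bool) -> None:
            """Hotplug a core by writing to its sysfs 'online' node (needs root)."""
            node = pathlib.Path(f"/sys/devices/system/cpu/cpu{cpu}/online")
            node.write_text("1" if online else "0")

        # Keep only the first core of each assumed L2-sharing pair active.
        for cpu in (5, 7):
            set_cpu_online(cpu, False)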
    One more thing: as things stand, FP operations are much better executed on the small in-order cores, regardless of whether it's on the VFP unit or NEON SIMD, as the performance penalty is only 22~30% while power usage is 3 to 5 times lower when the small cores are used. FP tasks also scale well in SMP (while integer ones don't), which further enhances the benefit of using the small in-order cores for FP tasks...
  • Plumplum - Saturday, March 10, 2018 - link

    MediaTek doesn't use standard big.LITTLE but CorePilot.
    CorePilot is something between big.LITTLE and DynamIQ: coherent cache has been included in CorePilot since the Helio X20 (in addition to allowing more than 2 clusters).

    We don't know many things about the SD700 series and whether it will really use a rebranded Cortex-A75 or A73.
    Qualcomm only talks about AI improvements (which are already included in the P60).
    About the A75, I think it will be as stable as the A72... the A72 is OK even on a 20nm process.
  • Plumplum - Saturday, March 10, 2018 - link

    The P60 is a Snapdragon 660 competitor... even the name shows it.
    The Kryo 260 is mostly a rebranded Cortex-A73... nearly exactly the same performance.
  • SoC lover - Friday, March 2, 2018 - link

    In my opinion, I 100% hate Mali GPUs:
    1. they lag
    2. they run hot
    3. not worth it
  • RaduR - Saturday, March 3, 2018 - link

    Why they are not using an Imagination GPU I can't understand. Imagination is desperate for a new design win, as they almost went under after Apple ditched them.

    They had MIPS and did nothing with it; why it isn't present in these CPUs I can't understand either. The only way MTK can beat QC is with Imagination. Somehow both companies need each other.
  • jjj - Sunday, March 4, 2018 - link

    Andrei, are you 100% certain it's 2x 16-bit LPDDR4X @ 1800MHz? I know they list dual channel, but maybe it's semantics and they mean 2x 32-bit.
    If it's 2x16, then this is designed to be rather cheap, and maybe there is no P40?
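
    The difference between those two readings is easy to put in numbers. A small sketch, assuming the quoted 1800 MHz is the LPDDR4X clock (so 3600 MT/s at double data rate); the channel/width combinations are just the two interpretations above:

        # Rough peak bandwidth for the two interpretations of "dual channel".
        def peak_bw_gb_s(channels: int, bus_bits: int, mt_per_s: float) -> float:
            return channels * (bus_bits / 8) * mt_per_s / 1e9

        MT_S = 3600e6  # 1800 MHz DDR clock -> 3600 MT/s
        print(f"2 x 16-bit: {peak_bw_gb_s(2, 16, MT_S):.1f} GB/s")  # ~14.4 GB/s
        print(f"2 x 32-bit: {peak_bw_gb_s(2, 32, MT_S):.1f} GB/s")  # ~28.8 GB/s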
  • ZolaIII - Sunday, March 4, 2018 - link

    I think the memory controller remains the same as on previous P-series parts, which narrows it down to 2x 16-bit LPDDR4X up to 2.3 GHz (but hardly anyone will use top-of-the-line modules, especially in a memory shortage crisis, so 1.33 GHz modules are what we will be seeing) and 1x 32-bit LPDDR3.
  • jjj - Sunday, March 4, 2018 - link

    If true, it suggests it's a solution focused on cost. If there is a P40, they could go as low as $12 with it, then this one at $15-18, and they likely have a significant cost advantage over the SD636 and SD660, so they would win share. A73 and 12nm make sense in this context (cost focus): time to market, lower dev costs, smaller die (vs A75) and so on. But then, what is the P70, and is it a different die? The P70 might have a more difficult time vs the SD640 and SD670 - assuming this is the SD7xx series - but they would grab good share with the P40 and P60. So they address the $10-20 price bands very well and the P70 tries to get some wins above.

    Hopefully DRAM prices don't go above 10$ per GB this year, not that 10$ isn't already stupid high.
  • ZolaIII - Sunday, March 4, 2018 - link

    We can only speculate about the P70 and what the S7xx will be, which isn't exactly the most fruitful thing to do. An earlier P-series leak:
    https://www.gizmochina.com/wp-content/uploads/2017...
    & what should be a S7xx:
    https://www.gizmochina.com/wp-content/uploads/2017...
    Seems the P40 turns out to be the P60, while the S640 is actually the S670, at least for now. The S7xx isn't there to address the P series; it's there to address the most unsuccessful MTK chip, the X30, which they now sell at further discounted prices since they had a bulk of them and didn't really know what to do with them.

    Well, $10 is double what it used to cost the last time we discussed DRAM prices.
  • jjj - Sunday, March 4, 2018 - link

    DRAM was getting to 3$ in 2016.
    P70 is either this same die and then this P60 die likely has a Mali MP4 and a modem that can do better but that does not help costs, or P70 is a different die and then... we'll see.

    The SD670 is likely to be 4x A75 @ 2GHz or higher (the leaks vary, with some at 2GHz, others at 2.6GHz) while the 640 should be the same die with large chunks disabled - 2 big cores, memory controller, GPU. Some leaks also claim more SKUs, but all these leaks are from China and most are 100% fabrication.
    The SD636 is clearly just a SD660 lite so not ideal from a cost perspective as that die wasn't aimed at lowest costs, or power for that matter.
    The P40 (if it exists, as a P60 at slightly lower clocks) and the P60 seem to be aimed at replacing the octa-A53 SoCs in the $10 to $20 range, so we finally got to this point and that should be the headline here. 8x A53 is relegated to below $10 and these new SoCs give us quite a lot more.

    SD670, likely legit: https://browser.geekbench.com/v4/cpu/6133079
  • jjj - Sunday, March 4, 2018 - link

    * I suppose, SD670 could be 2xA75 and 6 small cores.
  • Wardrive86 - Sunday, March 4, 2018 - link

    While I agree with most of what you are saying, when you buy a MediaTek chip you can play fewer mainstream games... because only Chinese developers support it and few elsewhere, and you get terrible drivers, terrible battery life, and extreme throttling compared to Qualcomm, so there is no competition. You say the 845 is terrible when it literally loses in one synthetic benchmark... though it will be used in literally every flagship, not just because of the baseband but also because, more than likely, it will have little to no competition in the high end. Qualcomm equals universal app support, great efficiency, great performance at all price segments... currently... yes, there have been mistakes, but they are much more consistent than their rivals... including Samsung. In my experience, once you buy a cheap Chinese SoC, because of app incompatibility, battery life, etc., you will never buy one again.
  • serendip - Monday, March 5, 2018 - link

    Mediatek could put up real competition to Qualcomm if it can close the efficiency gap. I don't game so the lack of GPU performance doesn't bother me, but a SoC needs to be snappy (hehe) and efficient. Mediatek has to stop looking at benchmarks only.

    See the Redmi Note 3 X10 vs. SD650 and Note 4 X20 vs. SD625 comparisons. The Mediatek SoCs have comparable performance but efficiency is way behind, even when using the same components.
  • ZolaIII - Monday, March 5, 2018 - link

    Not a fair comparison... S650 vs X20 is a more realistic one, but we've been there and done that.
    I did a lot of studying, reading and finally writing for a script that gets the most in power/performance/snappiness out of the S650/652 through standard in-kernel mechanisms (the scheduler, the HMP scheduler and the built-in (now a module) core_ctl hotplug). While writing it wasn't that hard, making it work as it should gave me many headaches, as there is still a design flaw in the A53 that makes IRQs and other near-real-time tasks stall upon migration; while that is present, and without the use of hotplug, it caused a lot of problems like networking stalls and crashes. There isn't a fix for it, but we used a workaround of running those tasks as RT ones, and it's all good that way. Then there were also problems in the scheduling code, which we addressed through mainline backports. In the end the biggest problem was the users, as they tend to go to the separate
  • ZolaIII - Monday, March 5, 2018 - link

    extremes, from extreme power savings to the best possible peak performance. It was very hard to reconcile those antagonisms. That part I did with the HMP scheduler's "heavy tasks" mechanisms, which aren't used by default, smoothing migration transitions and allowing short bursts up to the max frequency... The end result was 20% better SOT while retaining a rather good user experience, though not everyone was satisfied, nor ever will be. The thing is, if you have code to work with, you can change the vendor's benchmark-driven advertising-garbage behaviour and gain a lot in user experience and usability, but it's do-it-yourself and will remain so.
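
    As a footnote for anyone curious what "run those tasks as RT ones" looks like in practice: a minimal sketch, assuming a Linux system, root privileges, and a purely illustrative PID and priority (the actual script, its task selection and its tuning are not shown here):

        import os

        # Move a latency-sensitive task (e.g. a networking thread that stalls on
        # migration) into the real-time FIFO class. PID 1234 and priority 50 are
        # illustrative placeholders, not values from the script described above.
        pid = 1234
        os.sched_setscheduler(pid, os.SCHED_FIFO, os.sched_param(50))
        print(os.sched_getscheduler(pid) == os.SCHED_FIFO)  # True if it took effect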
  • ZolaIII - Monday, March 5, 2018 - link

    "heavy tasks" *
  • ZolaIII - Monday, March 5, 2018 - link

    It's concerning that the S845 pushes CPU clock rates so much higher than the S835 while the new cores use more power, and the switch from 10nm Early to 10nm Performance certainly isn't enough to compensate, so the S845 will be almost two times less efficient than the S635 in heavy CPU tasks. On the GPU side, as you could read here, it draws almost a watt more power and will throttle significantly. If the S7xx (announced before as the S670) turns out as hoped, it will represent a much better choice, as it will be able to sustain two big cores @ 2 GHz and all four at 1.6 GHz (same as the S845); GPU and memory will be cut by 25%, but it will also be able to sustain near-maximum performance in prolonged use, while the S845 throttles down to almost the same level (remaining only a couple of percent faster thanks to 4x16 LPDDR4X vs 3x16 LPDDR4X). I don't even want to talk about SoCs with Mali graphics.
    You aren't buying a cheap Chinese SoC anyway; MTK is Taiwanese, and HiSilicon isn't cheap nor available to other vendors. Things will hopefully improve on the drivers front, as that is the main purpose of Project Treble.
  • jjj - Monday, March 5, 2018 - link

    @Wardrive86
    If you've got cost, perf and power, for each product there is a choice to be made. Pushing clocks leads to higher perf per mm2, and you get paid for that perf. Pretty much any high-end SoC is that, in both PCs and phones; you always get the highest perf and the best perf-per-cost ratio when you sacrifice efficiency. That can also be done in other segments, not only the high end. Some phone SoCs do that even below the high end, from a bunch of folks: Qualcomm, Mediatek, HiSilicon.
    I did not say that the SD845 is terrible, just that I suspect it has a minor issue. Other than that, it's pretty good vs most of Qualcomm's high-end SoCs in recent years. The price is terrible, but that's because there is no competition, and there is no competition because the market is small while costs and risks are high, so if Mediatek does not spend on that, there is nobody else to.
    Then, the claim that Qualcomm is better is false and so is the "cheap" claim. Mediatek is not cheaper and OEMs do not choose them on price. As noted before, Qualcomm leverages licensing to gain and retain share; on merit, they would have a lot lower share.
    One more little bit that you should consider, in low end, battery costs are very important, a less efficient SoC increases battery costs by too much and in that segment, low power is very important. In high end, the battery's mechanical volume is an issue and it's getting much worse with full screen, foldable displays and 5G- with 5G the antennas are a bit problematic with mmWave.
    I understand that folks around here have seen things like the Helio X10 that sacrifice efficiency for cost and perf, but you shouldn't generalize. Even the Helio X20 - that one was on 20nm, and 20nm was a nightmare (look at the SD810) - but aside from that, it was just 100mm2 and sold at some $20. Clocks were pushed to 2.1GHz because OEMs and consumers prioritize perf. The main mistake was the 20nm choice. The same year they had the MT6752 vs the phone-melting SD615, and here Qualcomm made the wrong process choice.
  • ZolaIII - Monday, March 5, 2018 - link

    I don't believe either of them cuts dies; those are still small SoCs, so on both development time and economics it would be justifiable, but we won't know that part for sure. Sure, it's a similar design (so similar that they don't re-route most of it), but I think the quartz oscillators and power rails are different.
  • serendip - Monday, March 5, 2018 - link

    Zola is right, Qualcomm is pushing clocks too high on the SD845 just to hit nice benchmark figures. That thing will burn power and throttle like a bad joke though. The beauty of the 625 and to a lesser extent, the 650, is that they don't throttle much while still getting decent performance. Hopefully Mediatek gets smart and tries the same approach instead of chasing benchmarks and adding cores just for marketing points. Then again, who am I kidding... Idiot consumers still buy phones based on core counts without bothering about real world performance.
  • ZolaIII - Monday, March 5, 2018 - link

    The S625 doesn't throttle at all; even its peak power consumption (about 2.1W) doesn't reach the 2.5W that is the maximum sustainable. The S650 does throttle, but not much: the maximum sustainable for the big A72s is 1.4 GHz (both running) and 450~533 MHz for the GPU, though in real use, including extensive gaming, the S650 rarely falls down to those values. The big cores (A72, A73) are 1.8x faster clock for clock in integer performance compared to the smaller in-order ones, and most tasks are still integer ones. The A75, in comparison to the A55, extends that gap to 2.2x.
