56 Comments


  • danielfranklin - Sunday, December 28, 2014 - link

    While the device launch couldn't be more boring (same as the SGS5 LTE-A, e.g. Korea-only, no real performance benefit), Qualcomm's ability to launch the 810 (even in small numbers) in January is a big deal.

    Puts most of those rumors to rest about the 810 not being ready.
    That said, we will never know what their intended clocks were. They might still be having issues with the 810 on the new 20nm process and have simply dropped the clocks to compensate.

    Time will tell. Interesting nonetheless.
  • CoreyWat - Sunday, December 28, 2014 - link

    I would say this all but confirms that the 810 is ready for the S6, if Samsung decides to go with a Snapdragon.
  • KPOM - Sunday, December 28, 2014 - link

    It will be interesting to see how the 810 that ships compares with the Apple A8. The 810 is based on a stock ARM design. It's been difficult to make apples-to-apples comparisons (no pun intended), since Apple has been on AArch64 for over a year now.
  • jeffkibuule - Monday, December 29, 2014 - link

    Stock ARM CPU architecture, but not the preferred CPU cluster arrangement, as even the designer of the Cortex-A53 has admitted. He would have thought that 2x A57 / 4x A53 would be more balanced, but cores, cores, cores!
  • Vegator - Monday, December 29, 2014 - link

    Indeed, Qualcomm already has such a chip, the Snapdragon 808, which is supposed to ship around the same time as the Snapdragon 810. This SoC is probably more balanced in terms of cost and power consumption than SoCs with four Cortex-A57 cores. I'd imagine the clock speed could also be higher.

    At the same time, Cortex-A53-only designs can also provide good performance in many-core configurations (such as octa-core), rivalling or surpassing (in multi-core performance) current-generation performance-oriented SoCs (such as the Snapdragon 801 or Cortex-A15-based parts) at a fraction of the price. This is happening with the Snapdragon 615 and several MediaTek SoCs such as the MT6752 and MT6795, all of which use eight Cortex-A53 cores.
  • Flunk - Monday, December 29, 2014 - link

    But marketing can't call that an octa-core, so no one will buy it. I mean that very seriously: the only reason we even have quad-core phones right now is marketing.

    Almost nothing actually uses more than two cores, because most mobile apps are primarily single-threaded with secondary worker threads used occasionally. A perfect use case for a dual-core.
  • ws3 - Monday, December 29, 2014 - link

    Of course, you don't even need two cores to run two or three threads just fine. The only time an extra core becomes "necessary" is when you are at 100% utilization on one core, something which is almost never the case in normal usage, except perhaps for some small number of demanding games.
  • maximumGPU - Monday, December 29, 2014 - link

    That's not quite right. If you have several independent tasks that need to run, then even if none of them pushes the CPU to 100% (i.e. they aren't CPU-bound), you'd still benefit greatly from multiple cores.
  • ws3 - Monday, December 29, 2014 - link

    Your tasks *might* finish earlier with more cores if they aren't I/O-bound (which almost all phone software is). But even in that case, if "earlier" means 10 milliseconds earlier, there is no benefit from the perspective of the phone user (remember, we're talking about phones here).
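
    To illustrate the I/O-bound point: the sketch below (hypothetical Python, purely illustrative) runs four simulated I/O waits on threads. The waits overlap even on a single core, so extra cores buy essentially nothing for this kind of workload.

```python
import threading
import time

results = {}

def io_task(i):
    # Simulate an I/O wait (network/disk). While one thread sleeps,
    # the others make progress, even with only a single core.
    time.sleep(0.1)
    results[i] = i * 2

threads = [threading.Thread(target=io_task, args=(i,)) for i in range(4)]
start = time.monotonic()
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.monotonic() - start
# The four 0.1 s waits overlap: wall time is close to 0.1 s, not 0.4 s.
print(round(elapsed, 1))
```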
  • azazel1024 - Wednesday, December 31, 2014 - link

    Not entirely true. There are cases where a pair of cores at modest utilization but low clocks uses less power than a single core at much higher clocks, because it allows a lower voltage for the active cores than if one core were clocked at full speed.

    So two cores at 0.8 V and 1.2 GHz are likely to consume less power than a single core running at 2.4 GHz but 1.0-1.1 V.

    It is very much a balancing act between when it makes sense to sleep the extra cores and when it makes sense to wake them and spread the workload (when possible) so that you can downclock and reduce the voltage on the active core(s).
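
    The comment above can be made concrete with the classic CMOS dynamic power model, P ~ C * V^2 * f. This is a back-of-the-envelope sketch; the capacitance unit and the 1.05 V figure are illustrative assumptions, not measured numbers.

```python
def dynamic_power(cap, volts, freq_hz):
    # Classic CMOS dynamic power approximation: P ~ C * V^2 * f
    return cap * volts ** 2 * freq_hz

CAP = 1.0  # arbitrary switched-capacitance unit

# Two cores at 0.8 V / 1.2 GHz vs. one core at ~1.05 V / 2.4 GHz
two_slow_cores = 2 * dynamic_power(CAP, 0.80, 1.2e9)
one_fast_core = dynamic_power(CAP, 1.05, 2.4e9)
print(two_slow_cores < one_fast_core)  # the V^2 term favors the slow pair
```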
  • name99 - Monday, December 29, 2014 - link

    This is way too strong a claim.
    The case for ongoing high CPU demand is a limited one, but the ongoing interest has to do with the burstiness of CPU demand. Over a period of, say, 10 seconds, the average CPU usage may be below 10% even during actual interactions (e.g. browsing the web). But there are periods during those 10 seconds where multiple CPU-intensive things can occur simultaneously. E.g. scrolling upward will require UI interaction, plus perhaps some image decompression of pictures that were downloaded earlier but not yet displayed, plus some network/OS interaction that pulls in content even further down the page.

    The value of multiple CPUs is primarily in handling such simultaneous peaks. It is unquestionable that two CPUs are valuable for this purpose under almost all circumstances --- the iPhone 4S felt a lot snappier than the iPhone 4, where the only difference in the CPUs was two vs one. Three CPUs is a trickier call. My iPad Air 2 feels crazy snappy --- but that's compared to my older retina iPad 3; I never had an iPad Air so I can't really say how much different the third core makes wrt snappiness. We have basically NO metric/benchmarks that measure snappiness, so reviews are no help.
    Apple tends to be pretty sensible about when to add CPUs, so I'm guessing they do have internal tests that show substantial value for three cores.

    What about A57+A53? The problem with saying "lotsa slow cores [2+4] is a better match to real world usage" is that, while it is TRUE, it is also not ACTIONABLE. For most purposes, you don't know until after the fact whether a given work item needed the performance of a fast core or could have been handled in time with a slow core.
    There are a few cases you do know. Work scheduled in the background is obvious. Much network work and interrupt handling can probably happen on the slower core. But tasks submitted by the main app and UI work have to be handled by a fast core to be safe.
    Configs with a few fast cores and lots of slow cores are a good match for servers, where the workloads are very predictable and your OS can be told (or can even learn automatically if it's outfitted with some machine intelligence code) which requests to send to slow vs fast cores. This is a much harder problem on phones, and you don't want your phone to get a reputation as "yeah, it usually seems fast, but every so often it seems to just take twice as long to respond to a button press or a scroll --- drives me crazy!"

    In theory WP8 would seem to be the phone OS currently best suited to taking advantage of these many-core CPUs. They have, in the latest versions of C#, a much richer and easier-to-use framework for async calls than either OS X or Android, and they have been extremely aggressive about ensuring that every call in Metro that could be performed async is given only an async API, so that sync is not even an option. [The ideas that make this work will be part of C++17, which means, among other things, that those capabilities that are not yet in LLVM will soon be there, which means in turn, IMHO, that Apple will be adding the same sort of facilities to Swift --- we may even hear about them at WWDC2015, but Apple still then has to retrofit these capabilities to their APIs. Java seems less advanced than C# in this style of programming, and I don't know that Google can move as fast as Apple given the Oracle Java involvement...]
    So in principle WP8 could fly on these octa-core CPUs (as could Windows RT), but WTF can understand MS's marketing philosophy? They seem to have given up utterly on Windows RT, so their only tablet story is x86-based, and they seem unwilling [or scared] to compete in phones at the high end, so they don't run their OS on these cores even though (theoretically at least) they're a perfect match. [Which does make you wonder if there is some horrible design flaw inside Metro which renders this theory moot and makes it scale very badly to more than two or so cores. I guess we'll see if Win10 gets shipped on more aggressive HW.]
  • Conficio - Tuesday, December 30, 2014 - link

    I like your argument about the average CPU load over (relatively) long periods of time being meaningless.

    While I have little practice in mobile programming, or Windows/iOS/OS X programming for that matter, I thought Apple took care of the multi-threading paradigm a long time ago (2010, for iOS 4 - http://www.anandtech.com/show/3779/apples-ios-4-ex... , http://arstechnica.com/apple/2009/08/mac-os-x-10-6... ) with the Grand Central Dispatch scheduling scheme. That is past tense, not C++17 future like WP8.
  • name99 - Tuesday, December 30, 2014 - link

    Parallel programming done easily requires a VERY LARGE set of primitives.
    Apple offers some of the basic elements. GCD offers thread pools (but a rather clunky syntax for specifying how to use them if you want more than the default). Blocks offer the equivalent of C++11 or C# lambdas. But that is all, right now.
    Apple does NOT (that is, does not offer in Swift or Objective-C; of course they track C++) offer, for example:
    - atomics (C++11)
    - futures (C++11)
    - aggregation of futures (any or all) (C++17)
    - .then (C++17)
    - parallel for and similar constructs (probably C++17, but I don't know for certain)

    C# has a pretty incredible resource in the async/actor-based programming they offer, not to mention, as I said, that they have this woven throughout the Metro API.

    The difference between what Apple offers today and what C# offers today is the difference between a language that offers malloc and a managed language. I'm an Apple fan, but let's not let fandom turn us stupid (which is the history of every fandom --- I remember when idiots were arguing in the 90s that co-operative multi-tasking was superior to pre-emption, and that a single address space was superior to protected VM). MS is far ahead of what Apple offers today as regards language, API, developer tools, and probably performance.

    It doesn't have to stay that way --- Apple has done an awesome job of learning the best ideas from a range of modern languages to create the Swift we have seen so far, and I expect they will do the same when it comes to adding parallelism. (For example, in addition to the actor-like programming that Metro/C# does so well, you'd like to see fine-grained heterogeneous parallel programming, ie OpenCL/CUDA type programming, added to the language in a way that fits the syntax and is very natural. MS offers C++AMP to deal with this, and AMP seems likely to be part of C++17; but AMP is as ugly and clumsy to use as everything is in C++ these days, and isn't beautifully integrated into C# syntax.)
    But dreaming about Swift next WWDC is the future, not today...
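
    For the curious, the primitives listed above (futures, any/all aggregation, .then-style continuations, parallel for) can all be sketched with Python's concurrent.futures. This is just an illustration of the concepts, not the GCD or C# APIs themselves:

```python
from concurrent.futures import ThreadPoolExecutor, FIRST_COMPLETED, wait

def work(x):
    return x * x

with ThreadPoolExecutor(max_workers=4) as pool:
    futs = [pool.submit(work, i) for i in range(4)]      # futures
    futs[0].add_done_callback(                           # ~ .then continuation
        lambda f: print("continuation saw", f.result()))
    first, _ = wait(futs, return_when=FIRST_COMPLETED)   # aggregation: "any"
    all_results = sorted(f.result() for f in futs)       # aggregation: "all"
    mapped = list(pool.map(work, range(4)))              # ~ parallel for
print(all_results, mapped)
```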
  • sonicmerlin - Thursday, January 1, 2015 - link

    Hah, I went from an iPad 3 to an Air 2 as well. The difference in speed, weight, screen quality and brightness, battery life, etc. is ridiculous. 2 GB of RAM is also a godsend.

    Also the rest of your post was very interesting. Thanks.
  • Vegator - Monday, December 29, 2014 - link

    As an aside, Cortex-A53-based SoCs such as the MT6732 and MT6752 are a real step up from current cost-sensitive SoCs like the Snapdragon 400. Not only do they have great multi-core performance, single-core integer performance has also doubled (1.2 GHz Cortex-A7 in the Snapdragon 400 vs. 1.7 GHz Cortex-A53 in the MT6752), floating point somewhat less. This of course makes a tangible difference, even without considering multi-core.

    Memory performance and efficiency are also greatly increased in chips like the MT6752 (although not yet in earlier Qualcomm Cortex-A53 SoCs like the low-end Snapdragon 410), meaning they get a lot more performance out of an economical 32-bit memory subsystem than existing solutions while keeping cost down.
  • name99 - Monday, December 29, 2014 - link

    Presumably the A57 on the 810 is an actual working A57 (unlike that bizarre pathetic little "A57" on the Exynos that seems to be incapable of actually operating 64-bit code). But who knows?

    I don't think Samsung ever gave a coherent explanation of why their Exynos A57 was so crippled --- presumably it was NOT a function of the core design, which suggests it was a function of either the peripherals or Samsung firmware. You'd like to think those issues have been corrected in the past three months, but if the selling point for this device is "look, 3x aggregation --- in the non-existent markets that offer it" rather than "look, 64-bit", that suggests Samsung still hasn't got its act together.

    So, bottom line, this thing probably is NOT going to offer any sort of realistic benchmarks of how A57 compares with A8.
    [And, if I'm right about Samsung's issues, who knows? Maybe Apple will have launched the large-screen iPad Pro with A9 in early Q2 before someone is offering an actually functioning 64-bit A57 system?]
  • mfmx - Thursday, January 1, 2015 - link

    And you know all the details about Exynos 5433 from where? Gossip and rumors on Anandtech?

    FYI, Note 4 is marketed in Asia as a 64-bit device.
  • jettto - Monday, December 29, 2014 - link

    Nothing interesting. Knowing that the K1 crushes Snapdragon with only a dual core, and that Apple's own CPU with a triple core crushes everything... Snapdragon is completely ridiculous.
  • tipoo - Monday, December 29, 2014 - link

    You also have to look at the power draw of the Denver version of the Nvidia K1. It's not suitable for phones. Maybe when they jump ship to 16nm.
  • Morawka - Monday, December 29, 2014 - link

    Isn't the 810 using a cookie-cutter A53/A57 configuration from ARM itself? Basically Qualcomm adds their modem and power-management IP and calls it a day.

    Since this is a cookie-cutter build, time to market should be greatly reduced, but it seems that is not the case. Never mind a fully custom core like Apple's A7 and A8.
  • TheBear8 - Tuesday, December 30, 2014 - link

    Performance gains will only become visible later on, as the OS keeps being upgraded.
  • shompa - Wednesday, December 31, 2014 - link

    "intended clocks"
    The problem is that Qualcomm and Samsung use turbo speeds on their SoCs. Why brand an SoC as 2.7 GHz when it can only sustain that for seconds before it hits the thermal limit?

    For years I wondered how Apple's A-class SoCs, with half the clock speed and half the cores, could beat Samsung's and Qualcomm's. When I understood that Qualcomm/Samsung quote their "turbo speeds", it all made sense.

    Imagine if Apple, Intel or anyone else did the same: "we have an 8 GHz SoC", and when people measured the true clock speed they found that 8 GHz lasted only a couple of seconds before thermal throttling.
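
    The gap between the badge clock and the sustained clock can be shown with a toy throttling model (all numbers below are hypothetical, purely for illustration):

```python
def average_clock(burst_ghz, sustained_ghz, burst_budget_s, duration_s):
    # Toy model: run at the burst clock until a thermal budget is spent,
    # then fall back to the sustained clock for the rest of the workload.
    burst_time = min(burst_budget_s, duration_s)
    steady_time = duration_s - burst_time
    total_cycles = burst_time * burst_ghz + steady_time * sustained_ghz
    return total_cycles / duration_s

# A "2.7 GHz" part that holds that clock for only 5 s of a 60 s workload
avg = average_clock(burst_ghz=2.7, sustained_ghz=1.8,
                    burst_budget_s=5, duration_s=60)
print(round(avg, 3))  # well below the marketed clock
```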

    Anyway:
    The open architecture of ARM shows how great competition is. Qualcomm/Samsung/Apple/Nvidia push each other forward. Compare that to x86/Intel, where Intel can release a 5-7% bump each year and people think it's "great".

    Intel is making the same mistake that MSFT did with Windows Mobile. Windows Mobile was the biggest smartphone platform in 2007, but MSFT wanted 20-50 dollars in license fees per phone = they lost the mobile market.

    Intel has maybe 6-18 months before it loses the mobile market. It could solve this by licensing x86 and creating competition.
  • hpglow - Sunday, December 28, 2014 - link

    I wonder if they plan on fixing the battery life on it? I get less than a day's worth on mine. If I play a couple of games, I have gotten as little as 5 hours of battery before hitting 15%.
  • danielfranklin - Monday, December 29, 2014 - link

    I'm sorry, but the battery is not broken, at least not on the SD805 version. It's at least on par with, and mostly better than, the other high-end devices on the market; I've tested them all.
    You either have runaway apps eating your CPU, have your screen up too bright, or are running 3D games for 5 hours straight at 2560x1440 and expecting it to somehow cope...
    40 hours of standby, 4 hours of screen-on at 60% while overclocked to 2.9 GHz, and I still have battery left...
  • hpglow - Tuesday, December 30, 2014 - link

    I let the screen operate in the default adaptive fashion (you know, like most people do). The best I have ever gotten is 20 h of browsing mixed with idle time. Read the damn Internet; there are many people with battery issues. The main game I play is 2D, so your assessment doesn't even apply. What I'm saying is that in real life people are getting less than what reviewers were getting. I'm glad you get 40 h doing nothing, but the rest of us use our devices. I love the phone, but some of us are realists.
  • hpglow - Tuesday, December 30, 2014 - link

    And I shouldn't have to close apps all day and reboot the phone daily (which does help a lot), because the OS should handle that.
  • sonicmerlin - Thursday, January 1, 2015 - link

    Well, that's Android for you.
  • jjj - Monday, December 29, 2014 - link

    When something arrives days before CES, you know it's a preemptive move, so let's see what the others have got.
    Oh, and they forgot to mention that this speed costs you something like $20 per minute, so thank you very much, mobile industry, lol.
    Come to think of it, I wonder what NAND they are using that can write at 450 Mb/s; the original Note 4 sure can't.
  • ZoZo - Monday, December 29, 2014 - link

    Not everything goes to storage memory. 450 Mb/s can be useful for anything that only goes into RAM, such as loading a video.
  • przemo_li - Monday, December 29, 2014 - link

    Loading video costs you 10 Mb/s, according to Netflix!

    How come you need 450 Mb/s for it (if you're not downloading it to NAND)?
  • jospoortvliet - Monday, December 29, 2014 - link

    Theoretical vs real sustained bandwidth...
  • name99 - Monday, December 29, 2014 - link

    "Theoretical vs real sustained bandwidth "
    This is not the issue. There are two points here:

    (a) The capacity of a cell is shared between lots of people. We don't improve the encoding so that a single person can now download 200Mb/s rather than 100Mb/s; we do this so that fifty people in the cell can now share 200Mb/s rather than 100Mb/s.

    (b) Carrier aggregation superficially does not change bits/MHz efficiency, so it doesn't seem to match my claim in (a). [That is, whether I run the cell on two frequencies, each 10 MHz wide, each serving 25 users, or I aggregate so each user is using 2x10 MHz and we have 50 users, there isn't any improvement in the average bandwidth available per user.]
    The problem is that the existing FDD specs (i.e. as the cell system is used pretty much everywhere except China) require equal frequency bandwidths to be made available for uplink and downlink. This, even though in a data (rather than a voice) world, most capacity is used on the downlink and the uplink is usually idle.
    Carrier aggregation is a (clumsy, but better than nothing) way of dealing with this.
    Rather than providing 20 MHz of bandwidth to be shared (which means 20 MHz up AND a DIFFERENT 20 MHz down), the carrier provides 10 MHz of bandwidth (10 up, 10 down) AND a separate (DOWNLINK ONLY) 10 MHz that can be aggregated. Now we have 20 down, 10 up for a total use of 30 rather than 40 MHz, and we have a better match to the usage statistics (10 MHz up, 20 MHz down).

    THAT is the value of carrier aggregation. Not that it gives a single individual a ridiculously high data rate, but that it provides a larger data rate to be shared amongst many individuals, in a way that starts to better match the uplink vs downlink data rates we see for modern usage patterns.
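
    The spectrum arithmetic in the example above, spelled out (same numbers as the comment):

```python
# Symmetric FDD without aggregation: a DIFFERENT 20 MHz for up and down
symmetric_total_mhz = 20 + 20        # 40 MHz consumed for 20 MHz of downlink

# With carrier aggregation: a 10 up / 10 down pair plus a 10 MHz
# downlink-only secondary carrier
aggregated_total_mhz = 10 + 10 + 10  # 30 MHz consumed

downlink_mhz = 20                    # both schemes offer 20 MHz of downlink
print(symmetric_total_mhz, aggregated_total_mhz)
```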
  • Conficio - Tuesday, December 30, 2014 - link

    Interesting thought. Question: what do the carriers do with the leftover uplink capacity? Is there any economical use they can derive from it?

    I have not seen any tech that gives the download-only frequency better throughput or latency or anything else. Is there a protocol that tells a phone to use a particular channel only for download? Otherwise any phone that receives on a channel could also start sending, which would allocate the channel for sending anyway.
  • name99 - Tuesday, December 30, 2014 - link

    There is no uplink spectrum defined for the secondary carrier.
    Read what I said. Your phone connects to the "base" spectrum, which it uses for uplink and downlink. IF your phone supports aggregation, the cell tower will then send ADDITIONAL downlink data over the aggregated (secondary) frequency.
    I'm not sure what your mental model is here. You seem to think that the cellular system (outside China) uses the SAME frequency for uplink and downlink. WiFi and BT do this; cellular does not --- cellular uses dedicated SEPARATE frequency bands for uplink and downlink.
  • Penti - Monday, December 29, 2014 - link

    The original Note 4 can write sequentially at 37 MB/s, or roughly 300 Mbit/s. Other NAND can certainly write at full speed, and RAM can handle the speed with ease.
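
    The byte-to-bit arithmetic behind that figure:

```python
seq_write_mb_per_s = 37                         # Note 4 sequential NAND write (MB/s)
seq_write_mbit_per_s = seq_write_mb_per_s * 8   # 1 byte = 8 bits
print(seq_write_mbit_per_s)  # 296 Mbit/s, short of a 450 Mbit/s radio link
```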
  • Klaster2014 - Monday, December 29, 2014 - link

    I think AnandTech only confirmed the Snapdragon's LTE as Cat. 6 so far. Huawei, EE and QTI only finished testing Cat. 9 on 23 December. But maybe I'm wrong???
  • Vegator - Monday, December 29, 2014 - link

    I don't think it is very likely that this new Note 4 LTE-A model, which seems to be already commercially available according to the Samsung release (http://global.samsungtomorrow.com/samsung-electron... ), uses a Snapdragon 810. More likely, it uses the Snapdragon 805 with an upgraded separate modem chip, or an Exynos 7 Octa in conjunction with an upgraded modem.

    I find it very unlikely that Samsung would announce immediate commercial shipment of a device with supposedly a Snapdragon 810 before any earlier announcement of the future availability of such a device or any official announcements of a Snapdragon 810-based device.

    The Snapdragon 805 SoC, as already used in many Note 4 models, does not have an integrated modem. Phones using this SoC use a separate modem chip, which I believe can be more easily upgraded to higher specifications. Cutting-edge network speeds are usually implemented earlier and more easily in a stand-alone modem chip. At the same time, the Exynos 7 Octa-based Note 4 probably also uses a separate modem chip that can be upgraded.

    According to Qualcomm's Snapdragon 810 processor specifications page, the SoC does integrate a Cat 6 World Mode LTE modem with speeds of up to 300 Mbps and support for up to 3x20 MHz carrier aggregation, but with no mention yet of Cat 9 or 450 Mbps; Qualcomm has, however, announced that Cat 9 will be supported later in 2015. Either way, I think it is too early for this SoC to be used in a commercially shipping device. Qualcomm also lists H.265 (HEVC) among the supported video formats for the SoC, which is missing from Samsung's specification sheet for the new LTE-A Galaxy Note 4 model; I believe that points to Samsung not (yet) using the Snapdragon 810.

    Qualcomm, as well as other stand-alone modem chip suppliers for Samsung such as Intel, and perhaps Samsung's internal chip division, are probably all at an advanced stage of supporting LTE Cat 9 (and certainly Cat 6 with carrier aggregation) in their modem chips, so a stand-alone modem chip implementation seems most likely.

    Finally, although I cannot find confirmation that the new model will initially be confined to Korea, Samsung has a history of releasing rare Exynos-based models of flagship smartphones in Korea (e.g. the Galaxy S4), while using Snapdragon for the rest of the world. This is partly because Korea has a unique network infrastructure with a very restricted number of different frequency bands for 3G and 4G (carrier aggregation would use several smaller carriers within a single band such as 1800 MHz). This makes the chip and RF implementation much more straightforward than for a world-mode device that needs to support all the different bands used in most countries, as well as variations between countries; for that reason Samsung has been able to test the ground for prior-generation Exynos chips as well as advanced LTE connectivity in Korean models.
  • Andrei Frumusanu - Monday, December 29, 2014 - link

    It's a Snapdragon 810 unit, there's no discussion about this.
  • Vegator - Monday, December 29, 2014 - link

    Maybe, but if so, it doesn't seem to represent a massive ramp if it is just a Korean model; maybe it's an early revision or production batch. If the benchmarks posted on Geekbench this month for a model called Samsung SM-N916S (http://browser.primatelabs.com/geekbench3/search?u... ) actually represent this new model, then the performance would be disappointing for a high-end device. Multi-core scaling and Geekbench subtests such as Dijkstra seem to be significantly lower than on the Exynos 5433, which could correspond with the rumours about heat issues requiring thermal throttling. These are, however, AArch64 results, compared to AArch32 for the Exynos 5433.
  • danielfranklin - Tuesday, December 30, 2014 - link

    Eek, those are terrible numbers.
    The SD805 can beat them easily. Doesn't seem right. The clocks are low; let's hope these are early silicon and don't mean anything. Still, not a good sign. It's possible the reports of issues with the 810 are correct and they are downclocking it...
  • londedoganet - Monday, December 29, 2014 - link

    There's no information released by Samsung in the linked source that corroborates your claims. The processor isn't even specified in the press release.
  • kyuu - Monday, December 29, 2014 - link

    While you're correct that the source doesn't specify the SoC, we know that the 810 is the only SoC that supports 3x20 aggregation. The only other possibility is that they are using a separate cellular chipset. Besides the fact that there isn't any precedent for Samsung using a separate cellular chipset that I'm aware of, it would be a really odd decision, since it would increase the power budget significantly and result in substantially reduced battery life, among other things.

    Do you have any source to indicate the 3x20 aggregation is the result of anything other than moving to the 810?
  • Vegator - Monday, December 29, 2014 - link

    It is my understanding that the Galaxy Note 4 uses a separate cellular chipset (baseband chip) in both the Snapdragon 805 model (with a separate Qualcomm modem chip) and the Exynos 5433 model (with a separate Intel or Samsung modem chip). That's also why the mentioned model would be a relatively easy upgrade. The Galaxy Alpha is another example, with an Exynos 5430 SoC coupled with a separate Intel XMM baseband/modem chip.

    So yes, separate modem chips are still used frequently in high-end devices. In fact, you can't use the Snapdragon 805 any other way (it doesn't have a modem). Apple also uses separate modem chips (Qualcomm) with its Ax SoCs.
  • kron123456789 - Monday, December 29, 2014 - link

    "Samsung has a history of releasing rare Exynos-based models of flagship smartphones in Korea" - Samsung also has a history of releasing flagship models with Qualcomm's newest chip in Korea (the Galaxy S4 LTE-A with Snapdragon 800, or the Galaxy S5 LTE-A with Snapdragon 805).
  • milan03 - Monday, December 29, 2014 - link

    There isn't anything on their site that confirms the 810 either. So officially, we really don't know whether it's the 810 or not. All they're talking about is improved baseband processing.

    It is strange, since Qualcomm recently announced integrated Cat 9 support in the 810, yet there are absolutely no Qualcomm announcements following this Samsung presser.

    It could very well be a stand-alone Samsung in-house Shannon Cat 6 (upgradable to Cat 9) baseband processor plus one of the existing, already commercially available SoCs. The lack of HEVC support on Samsung's page is interesting as well.
  • Klug4Pres - Monday, December 29, 2014 - link

    I agree with Vegator. It would be astonishing, headline-grabbing news for a Snapdragon 810 to be released this early; so much so that I think the author has jumped the gun. Hard to believe.
  • hrrmph - Monday, December 29, 2014 - link

    The real battle in the high-end market for phones and tablets right now is all about worldwide coverage. Apple just pulled way out ahead in this game by supporting more 4G LTE bands than most of the competition. I think they are now supporting 12 bands of 4G LTE coverage.

    Samsung *needs* to get more coverage on more LTE bands to be taken seriously for worldwide personal and business travel. Especially on tablets.

    6 bands of coverage just isn't enough for a high-end device when the 4G spectrum is split into 40+ bands worldwide.

    At the lower end of the market, it is forgivable to be forced back to 3G when traveling. But, at the high-end we need about 20 bands of coverage to really be able to properly cover the "Worldwide-Device" market.

    Not all 40+ bands are currently in use, and most carriers offer a choice of 2 bands. So generally speaking, 20 bands would probably provide 99.99% coverage... for now. Apple's 12 bands probably provide 95+% worldwide coverage at the moment.

    As far as aggregating bands for higher speed: meh. Properly cover the world's bands first, so there can be a reasonable expectation that basic 4G will work; then we can talk about the 1% of the Earth's surface that can provide a faster signal... like Korea.
  • GC2:CS - Monday, December 29, 2014 - link

    The Apple iPhone 5 got three versions with up to 5 LTE bands at up to 100 Mbit/s.

    The iPhone 5C and 5S support 13 LTE bands in a single version (I think), at up to 100 Mbit/s.

    The iPhone 6 and 6 Plus already support 20 LTE bands, the most of any phone, at speeds up to 150 Mbit/s.

    Considering Apple's current iPhone lineup, they are pretty serious about coverage.
  • mfmx - Thursday, January 1, 2015 - link

    The number of supported bands is actually irrelevant; several of the iPhone's bands are used only in Japan (and you probably can't roam on them with a foreign SIM card, or even a prepaid Japanese one), while it doesn't support T-Mobile on band 12... Actually, the Nexus 6 has better (i.e. more relevant to most people) LTE band support than the iPhone 6.

    Also, Apple gives the operator the ability to restrict tethering/Wi-Fi hotspot, and you can only use LTE if the operator has approved the iPhone 6. No thanks...
  • tipoo - Monday, December 29, 2014 - link

    When are we going to get those A53s and A57s in any shipping product in North America?
  • TheBear8 - Tuesday, December 30, 2014 - link

    I bet that 28mobile site will have this available, same as the S5 LTE-A (aka Prime).
  • shinpil - Wednesday, December 31, 2014 - link

    Hey, this phone does not use the Snapdragon 810. The 810 has many big issues, so Samsung was not able to use it. This phone is powered by the Exynos 5433 (same as the normal Note 4) and the new Exynos Modem 333 (the normal Note 4 uses the Exynos Modem 303).
  • MTEK - Thursday, January 1, 2015 - link

    TechRadar reports the following:

    "A Samsung spokesperson has confirmed with TechRadar that the new, more powerful Galaxy Note 4 will not in fact feature Qualcomm's Snapdragon 810 chip, as some reports claimed.

    The spokesperson could not comment on where those reports got their information, but confirmed with TechRadar that the new Note 4 will be powered by Samsung's own Exynos chips."
  • juicytuna - Thursday, January 1, 2015 - link

    Oh dear, how embarrassing. Should we blame Purch?
  • SydneyBlue120d - Friday, January 2, 2015 - link

    The entire internet relies on AnandTech's reliability, so everyone gave AT credit. And we were all wrong :(
  • ompq - Friday, January 2, 2015 - link

    A South Korean paper has confirmed that the new "S-LTE" variant of Galaxy Note 4 in fact uses an Exynos SoC with Exynos Modem 333.

    http://ebuzz.etnews.com/news/article.html?id=20141...
