
  • 29a - Thursday, October 24, 2019 - link

    Did Atom processors ever stop sucking?
  • solidsnake1298 - Thursday, October 24, 2019 - link

    That depends on your needs. For HTPC use, starting with Apollo Lake (Goldmont) the iGPU was upgraded enough to decode 4K HEVC. I haven't personally tested 4K HEVC, but I have played 1080p60 HEVC without a single dropped frame.
  • vladx - Friday, October 25, 2019 - link

    I have a Goldmont tablet; 4K HEVC works fine as long as the bitrate doesn't surpass the limits of its eMMC storage, at which point artefacts and stuttering appear. Maybe I should look into replacing it with an SSD, if that's even possible.
  • qap - Friday, October 25, 2019 - link

    Even the slowest eMMC storage can do 50 MB/s sequential read. There is no way you have a 400 Mbps+ HEVC video (and if you do, an Atom is obviously not for you). The limit must be somewhere else. Most likely it only supports hardware HEVC decoding up to a certain bitrate, and you are hitting that limit.
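
    To spell out the arithmetic, a quick sketch (Python; the 50 MB/s figure is just the assumed worst-case eMMC read speed from above):

        # 1 byte = 8 bits, so MB/s * 8 = Mbps
        emmc_read_MBps = 50                   # assumed slowest eMMC sequential read
        max_bitrate_Mbps = emmc_read_MBps * 8
        print(max_bitrate_Mbps)               # 400 -- far above typical 60-100 Mbps files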
  • vladx - Friday, October 25, 2019 - link

    400 Mbps, no, but I have some 100+ Mbps videos and most sit around 60, so it can definitely push the eMMC to its limits, especially considering it also needs to run OS processes at the same time.
  • s.yu - Friday, October 25, 2019 - link

    A common confusion between B and b...
  • eddman - Friday, October 25, 2019 - link

    Unless the storage is so crap that it can't even sustain 12.5 MB/s (a.k.a. 100 Mbps), it's probably the decoder itself that is unable to properly accelerate such high-bitrate videos.
  • nathanddrews - Sunday, October 27, 2019 - link

    Quite a few eMMC implementations run off a USB 2.0 bus, so yes, it can bottleneck a system hard. The same thing frequently happens with networking components in devices: they'll have 802.11ac/GbE, but can't reach those speeds.
  • eddman - Sunday, October 27, 2019 - link

    Even a USB 2.0 eMMC should be able to sustain a 12-13 MB/s sequential read.

    It has to be the decoder. He doesn't know the difference between bit and byte and thinks 60 Mbps is too much for 50 MB/s.
  • eek2121 - Monday, October 28, 2019 - link

    Not if the bus is shared with two network controllers, a Bluetooth controller, etc. I haven't looked at how Atom is set up, admittedly, but that is one of the major issues with SBCs: everything hangs off the USB 2.0 bus. The USB 2.0 bus also can't really maintain true USB 2.0 speeds in quite a few cases due to hitting micro-USB power limits.
  • azazel1024 - Tuesday, October 29, 2019 - link

    Sure, in some cases, but most not-super-cheap Atom implementations, even from the Cherry Trail era, weren't all on the USB 2.0 bus, at least not the eMMC. The most typical performance I saw was >100 MB/s reads and 30 or so MB/s writes on slower implementations. Some of the better eMMC implementations were hitting ~180 MB/s reads, 70 MB/s writes, and 6-7k IOPS.

    Not SSD performance, but storage performance isn't the issue with HEVC playback; HEVC support is. My Cherry Trail doesn't support H.265 decode. I can play back a 1080p HEVC file, but the processor runs between 70-90% utilization doing it. For an H.264-encoded 1080p file it typically runs about 15% utilization.

    It can't handle 4K decode.

    My biggest issue has been networking performance on the one that I have. Some are better set up, but not all of them. My first-generation Cherry was an Asus T100. Max storage performance was 110 MB/s reads, 37 MB/s writes, 5k IOPS. The microSD card slot maxed out at 20 MB/s reads and writes. The wireless was 1:1 802.11n and maxed at about 10 MB/s down and 8 MB/s up (obviously not concurrently), and it was only 40 MHz on 5 GHz, not 2.4 GHz (20 MHz only on that).

    My current one is a T100HA, after my T100 died. Some improvements, some backslides. The read/write speed is up to 170 MB/s and 48 MB/s with 7k max IOPS. The microSD card reader can hit about 80 MB/s reads and 30 MB/s writes (in a card reader in my desktop, the same microSD card can hit 80 MB/s reads and 50 MB/s writes). The wireless, though, is WAY slower. It hits 6 MB/s down and 3 MB/s up, max. Supposedly it can do the same 40 MHz on 5 GHz and 20 MHz on 2.4 GHz, but I don't see anything like real 1:1 40 MHz performance on 5 GHz (which should be in the ballpark of 10-12 MB/s, 80-100 Mbps).

    Honestly, my biggest complaint is that the wireless on it is just horrendous. I often use an 802.11ac nano dongle in the keyboard dock's USB 3 port, as that easily pushes 20 MB/s up and down. Even simple website loading is significantly faster with it than with the embedded wireless. I know it is a cheap tablet/2-in-1, but springing an extra $1-2 on the BOM for a nicer 1:1 802.11n solution would have gone a long way. Let alone that at the time it was released, 1:1 802.11ac wireless options were pretty widely available.

    I am curious whether someone like Asus (or someone else, I am NOT tied to them) will use Tremont in a small 2-in-1. Heck, an update to the Surface with one might be nice. I do like the smaller form factor of a 10-11" tablet. I almost always use my 2-in-1 as a laptop, so a hard keyboard dock is reasonably important to me (though a really nice type cover would be fine, as I almost always use it on a table, not on my lap), but I do sometimes use it as an actual tablet for reading (movies/TV/YouTube would generally be fine in laptop mode, as I am rarely just holding the tablet in front of my face for that; it's usually on a table/desk, occasionally propped on my knees/stomach, but docked). I don't need a TON of performance from one. But at the same time, if I want to grab a movie off my server for an overnight trip, it is kind of painful to download a 3 GB file at 6 MB/s and wait the better part of 10 minutes. It is usually worth my while to rummage in my desk drawer, grab my USB 3 GbE adapter, plug it into my tablet and into a spare LAN drop in one of my rooms, and grab the file at ~50 MB/s or so (the microSD card write speed), and be done in maybe 2 minutes including all those steps and the download time. Let alone if I want to grab 2 or 3 movie files at 6-10 GB.

    A nicer screen would of course be real swell too, but honestly 720p on a 10.1" screen isn't horrible. The wireless limitations are my biggest headache. A bit more CPU and GPU performance would also be nice. I wouldn't mind being able to handle slightly newer/more advanced games on it, but frankly it isn't my gaming machine, nor do I need it to be. Portable is more important to me than powerful. But at some of the basic tasks it needs to be better / it's feeling its age.

    Wireless at least 2x better, and more like 3-4x better would be nicer (which 1:1 802.11ac, if you don't mess up the implementation, IS, at ~20-25 MB/s). CPU performance maybe 15-20% better (and Tremont sounds like it is probably more like 50-100% faster than Cherry Trail), GPU maybe twice as fast (that also sounds like it would be a lot faster). Storage performance and peripheral storage are fine as they are on my T100HA, but I sure as heck wouldn't mind some improvements there also. Better battery life would be nice, but I usually manage >10 hrs if I am not doing anything super intensive. I could even live with the current screen, though I'd welcome better sRGB coverage (I think mine is about 70% sRGB), better contrast (mine is actually pretty good at around 800:1 or so; not great, but not bad) and higher resolution (900p would be nice, 1080p better).

    Maybe someone can do all that in a package less than $400. Oh and 8GB of RAM and 128GB of storage. Max $500 price tag.
  • eek2121 - Monday, October 28, 2019 - link

    eMMC isn't typically known for speed.
  • Namisecond - Friday, November 1, 2019 - link

    Most eMMC isn't optimized for performance. They tend to be optimized for cost.
  • levizx - Friday, October 25, 2019 - link

    You are confusing iGPU with QSV, they are different IP blocks.
  • solidsnake1298 - Monday, October 28, 2019 - link

    I am not confusing QSV with the iGPU. While QSV is functionally different from the EUs that generate "graphics" and physically occupies a different section of die area, QSV is LOGICALLY part of the "iGPU." I'm not sure this is an option in my particular BIOS, but humor me here: if I were to disable the iGPU in my J4205 and use an add-in Nvidia/AMD GPU, wouldn't that also mean QSV is no longer available? On the full-power desktop side, if I bought a KF-SKU Intel processor (the ones without an iGPU), doesn't that mean QSV is not available?

    Yes, I was referring to QSV specifically. But QSV is a feature of Intel's iGPUs. Just like NVENC is a feature of most of Nvidia's GPUs.
  • abufrejoval - Tuesday, November 5, 2019 - link

    If you disable the iGPU, the VPU is gone, too. But you don't need to disable the iGPU when you add a dGPU: just connect your monitor to the dGPU and leave the iGPU idle.

    Not sure it's worth it, though. I can't see that the Intel VPUs are any better than the ones from Nvidia or AMD, neither in speed nor in quality. And for encoding quality/density CPU still seems best, if most expensive in terms of energy.
  • solidsnake1298 - Tuesday, November 5, 2019 - link

    The point of my post was that I was not "confusing" QSV with the iGPU when they are logically part of the same block on the die. You can't have QSV (Quick Sync Video) without the iGPU being active. So when, in the context of video decoding, I refer to the "iGPU," I am obviously talking about the QSV block on the iGPU.
  • Namisecond - Friday, November 1, 2019 - link

    4K output was completely dependent on the vendor to implement. I have a Gemini Lake laptop that used an HDMI 1.3 or 1.4 output chip. I love it for its all-day battery and don't miss the 4K output at all.
  • hyno111 - Thursday, October 24, 2019 - link

    Atom performance actually improved a lot every generation. I would prefer a Goldmont Plus-based Pentium to the low-power dual-core Skylake++ without turbo.
  • Samus - Thursday, October 24, 2019 - link

    That's not true. Atom at various stages has actually taken a step BACKWARDS in performance.

    Most obviously, Cedarview was around 20% slower per product SKU than Pineview, though performance per watt remained nearly identical. Still, the D525 remained the top-performing Atom for years until Avoton launched in 2013.

    Atom was also plagued with x64 compatibility issues until Avoton officially supported the x86-64 extension, along with virtualization, mostly because Avoton was designed specifically as a "server" product, finding its way into everything from NAS boxes to SMB microservers, where it performed terribly compared to even rudimentary RISC CPUs.

    It's an absolute marketing failure by Intel to continue pushing the cute Atom name with the reputation they have built for it. They were moving away for a while, branding many traditional Atom-architecture products Pentium J\Celeron J, then went back on that move by shifting Pentium\Celeron back to the Core microarchitecture, and further mutilated the branding by actually calling Core-based CPUs Atoms with the x3/x5/x7.

    No wonder AMD has maintained consistent OEM support. At least their CPU product stack has made sense for the last 10 years...
  • Valantar - Friday, October 25, 2019 - link

    Intel doesn't use the Atom name any longer, at least not officially. Is it anywhere in this slide deck? As for Pentium/Celeron, P/C Silver is Atom-based, Gold is Core, and that applies even to the newest product stack (Intel even launched updated Goldmont Plus Gemini Lake Pentium/Celeron chips just a few days ago, topping out with the J5040 and N5030: https://www.techpowerup.com/260374/intel-gemini-la... ). Confusing? Absolutely. But nothing has changed since the (admittedly stupid) Gold/Silver branding was introduced.
  • levizx - Sunday, October 27, 2019 - link

    Maybe get your eyes checked? This whole article is about the Tremont uarch, and so are the slides.
    Gemini Lake is old news; it doesn't matter what "new" product Intel releases. Intel had Atom on their roadmap as late as last December.

    BUT their May investor meeting failed to disclose what brands Lakefield will be sold under. So as far as the general public knows, Atom is still alive in the ultra-low-power server SoC and embedded SoC space.
  • eddman - Sunday, October 27, 2019 - link

    There hasn't been an Atom-branded mainstream (desktop, laptop or tablet) processor starting with Goldmont; only server, embedded and automotive parts. And starting with Goldmont Plus there are none whatsoever.
  • eddman - Sunday, October 27, 2019 - link

    *No edit function*

    What I meant to say is: while the Atom branding is not dead, Intel has moved away from using it for products facing mainstream consumers. It's all Pentium and Celeron.

    I doubt they'd bring it back for Lakefield, which even uses a single "big" Sunny Cove core.
  • eek2121 - Monday, October 28, 2019 - link

    Not true, Apollo Lake showed up in an SBC form factor. That is Goldmont + a Gen9 iGPU.
  • eek2121 - Monday, October 28, 2019 - link

    Gah, AnandTech needs to join the year 2000 and add an edit button. The Atom CPU I have seen used is the Atom E3900. Yes, it is technically an embedded part, but it is also used by consumers for various projects, including home theater applications and the like.
  • eddman - Monday, October 28, 2019 - link

    Such devices are not mainstream.
  • Namisecond - Friday, November 1, 2019 - link

    You're talking about the uarch. The person you're replying to is talking about the Atom brand name.
  • rrinker - Thursday, October 24, 2019 - link

    Depends on what you use them for. I have a box I built at least 7 years ago, with whatever Atom was current then - I think it was the first one to have 2 cores. I ran Linux on it, mostly for controlling my model railroad. It was plenty fast enough, even with a slow hard drive and just 2GB RAM. I couldn't stream video to it, but music was fine, and it ran both the control program and a 2D CAD program for designing track plans.
    Not every task a computer does requires insane speed. There's a whole lot more than just gaming.
  • 1_rick - Thursday, October 24, 2019 - link

    They're good enough for things like NASes.
  • 29a - Friday, October 25, 2019 - link

    Interesting that you say that, because I had to install Plex on my desktop the other day because the Atom in my NAS couldn't decode the video. My other experience with an Atom was when I bought a netbook right before tablets became popular. That netbook couldn't stream a video from YouTube without stuttering. Both of my experiences with Atoms have been absolutely horrible.
  • eddman - Friday, October 25, 2019 - link

    You expect a J1900 from 2013 to decode every single type of video? That processor's decoder cannot hardware-accelerate VP9 or HEVC. It also cannot accelerate 10-bit H.264 (there is no hardware decoder that can).

    If it fails even for regular H.264, then the problem is somewhere else, not the processor.
  • eddman - Friday, October 25, 2019 - link

    It seems it can accelerate HEVC after all, but I don't know if it's full hardware acceleration or hybrid. If the latter, it'll struggle with high-bitrate videos.

    There is definitely no 10-bit support though.
  • eddman - Friday, October 25, 2019 - link

    Scratch that; no HEVC for Bay Trail: https://www.anandtech.com/show/9167/intel-compute-...
  • digitalgriffin - Thursday, October 24, 2019 - link

    Lol. Hey Randy, how's the layout going? Small world.
  • PeachNCream - Friday, October 25, 2019 - link

    When it comes to gaming on Windows, Bay Trail runs everything I ask it to run. Picking the right games for the system is obviously necessary. The first and second Star Wars Battlefront, Terraria, Age of Wonders, and a smattering of other fairly old, undemanding titles run fine at max settings (minus AA, of course). In Atom's single-core days, the old N270 and N450 in little netbooks were sufficient as my primary Windows gaming laptops as well. Pick your games wisely and go have fun. You still have to do that even with relatively modern hardware and recent titles, so I don't see the difference in working with a little less and lowering your overall hardware (and often software) cost, because you were still entertained for a few hours when it's all said and done.
  • mkozakewich - Sunday, October 27, 2019 - link

    I was playing Minecraft on the N270. Good times! The newer Intel chips focus more on lowering the power usage than on increasing performance, although Bay Trail was still pretty good. (I'm writing this on a Surface Book with a Core i7, and the whole system is running on less wattage than even my 2011 netbook.)
  • Jorgp2 - Thursday, October 24, 2019 - link

    Yeah, like 4 years ago.

    Get with the times
  • digitalgriffin - Thursday, October 24, 2019 - link

    Gemini Lake can transcode four 1080p streams at a time for Plex. And it will do 4K video, no problem. But it lacks HDR support. If Intel solves this problem, then you have a great little media server.
  • III-V - Thursday, October 24, 2019 - link

    Over half a decade ago... where have you been?
  • vladx - Friday, October 25, 2019 - link

    Obviously Intel waited until it was left with no choice.
  • Ratman6161 - Friday, October 25, 2019 - link

    "Did Atom processors ever stop sucking?"
    Actually, I don't believe they ever started sucking. As with many things, once OEMs decided they were for cheap systems, they built systems where everything else was cheap too. Atom was never designed for high performance, and when you put it in a laptop where everything was the cheapest the manufacturer could get... you got crap. Within its niche, I think it wasn't really half bad.

    I've got an Asus ZenPad S8 with an Atom Z3580 running Android 6, and it was actually pretty fast for an Android tablet of its day. It competed well with Samsung's tablets at the time at a fraction of the cost ($249 on Amazon back then). I still use it, and it's still more than adequate for web surfing, email, Netflix, Amazon Prime Video, etc. Keep Atom where it belongs, don't set unreasonably high expectations for it, and it doesn't suck.
  • mode_13h - Saturday, October 26, 2019 - link

    I dunno... did x86 processors ever stop sucking?

    They do deliver good perf/W - better than Intel's big cores, but still not as good as ARM.
  • Korguz - Sunday, October 27, 2019 - link

    " if they are not as good as arm " how so ??
  • Namisecond - Friday, November 1, 2019 - link

    Not as good perf/W but we are comparing ARM to x86 here.
  • olde94 - Saturday, October 26, 2019 - link

    That widely depends on your application. As far as I understand, Atoms are widely used in servers due to their power consumption, low heat output and focus on CPU performance compared to other mainstream CPUs. So yeah, for some applications Atoms are quite OK.
  • olde94 - Saturday, October 26, 2019 - link

    A system like this https://www.supermicro.com/products/system/3U/5039... is a single server rack with a total of 192 cores at less than 400 W. And while they're Atom cores, I'd like you to realize that they run at 2.1 GHz, whereas a system like a 2x Xeon Gold 6148 is 40 cores @ 2.4 GHz at above 300 W. So while the Xeon has better IPC, we are talking about a 40c vs 192c system at a comparable power budget running at a similar frequency; the Xeon needs more than 4x better IPC to win here. Once again, there are a lot of features the Atom does not support and the IPC surely is different, but Atom is not "just" bad.
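
    The rough core-GHz math behind that, as a sketch (it deliberately ignores IPC, which is exactly the open question):

        atom_cores, atom_ghz = 192, 2.1   # Supermicro Atom box, <400 W
        xeon_cores, xeon_ghz = 40, 2.4    # 2x Xeon Gold 6148, >300 W
        atom_chz = atom_cores * atom_ghz  # 403.2 core-GHz
        xeon_chz = xeon_cores * xeon_ghz  # 96.0 core-GHz
        print(round(atom_chz / xeon_chz, 1))  # ~4.2 -> Xeon needs >4x the IPC to win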
  • Calista - Sunday, October 27, 2019 - link

    Atom has come a really long way from its earliest versions. Try using something like an N270 and compare it to an x5-E8000. The former will be completely unusable, the latter not really zippy, but more than fast enough for most common workloads except gaming.
  • yeeeeman - Sunday, March 15, 2020 - link

    Silvermont-based tablets that sold at 100 bucks were amazing value for money.
    You should stop sucking whatever you are sucking, because Atom was quite good after Silvermont launched.
  • ternnence - Thursday, October 24, 2019 - link

    The two ALUs have one focused on fused additions (FADD), while the other focuses on fused multiplication and division (FMUL).

    FADD != fused add; FADD = floating-point add
  • mode_13h - Saturday, October 26, 2019 - link

    Yes, fadd is simply floating-point add. Same for fmul.

    What makes FMA "fused" is that the product isn't truncated before the accumulate, resulting in higher precision. So, what's "fused" is the multiplication and accumulation.

    Fused-add or fused-multiply makes no sense - they each only do one thing, so what would you even be fusing?
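
    A tiny demo of that precision difference (a Python sketch; math.fma needs Python 3.13+, and the constants are only chosen to make the rounding visible):

        import math

        a = 1.0 + 2.0**-30
        c = -(1.0 + 2.0**-29)        # exact negation of a*a after rounding to double

        unfused = a * a + c          # a*a is rounded first; the tiny 2**-60 term is lost
        fused = math.fma(a, a, c)    # one rounding at the end preserves it (Python 3.13+)

        print(unfused)               # 0.0
        print(fused)                 # 8.673617379884035e-19 (= 2**-60)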
  • The Hardcard - Thursday, October 24, 2019 - link

    When I squint at the power/performance graph, I don't see much of a power savings for Tremont. If that is 1.5 W for Sunny, it looks like Tremont will be more like 1 watt rather than 200 mW. Is it my eyes, or are they being loose with the graph? Also, it looks like performance drops much faster than power.

    Lakefield seems like it should be 2+4 rather than 1+4. It will be interesting to see how it compares to the 8cx in performance and battery life.
  • Santoval - Thursday, October 24, 2019 - link

    In the graph Sunny Cove goes down to 12 - 13% "relative power" while Tremont reaches around 4 - 5%. So, if we assume a lowest of 13% relative power for Sunny Cove at 1.5W and a lowest of 4% for Tremont, this would suggest that Sunny Cove at its lowest power/frequency consumes 3.25 (13/4) times more power than Tremont at its lowest power/frequency.

    If that's indeed the case, and that graph is accurate, then Tremont consumes ~0.45 watts (1.5W / 3.25) at its lowest power, not 1 watt. However if that graph is only slightly inaccurate Tremont might really go down to 200mW operation mode. For instance if Sunny Cove's relative power was meant to terminate at 20% and Tremont's at 3% then their difference in power consumption at the lowest power mode is (20/3) 6.67 times, thus Tremont would go down to ~220mW.
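
    The same back-of-envelope in code (all inputs eyeballed from the graph, so treat this as a sketch):

        sunny_floor_w = 1.5                # Sunny Cove's stated lowest power (W)
        sunny_rel, tremont_rel = 13, 4     # lowest "relative power" readings, eyeballed
        print(round(sunny_floor_w * tremont_rel / sunny_rel, 2))  # ~0.46 W

        # alternative reading of the graph (20% vs 3%) lands near the claimed 200 mW:
        print(round(sunny_floor_w * 3 / 20, 3))                   # 0.225 W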
  • The Hardcard - Thursday, October 24, 2019 - link

    Interesting, we both put Sunny at 12 percent. But my eyes put Tremont at 8 percent, which is how I got one watt. It just looks much closer to the 10 than to the 1. But I assume Microsoft got figures that made sense to them, so maybe you're right.
  • name99 - Friday, October 25, 2019 - link

    We have to look at why this product exists. To me it looks like IBM's 8-way threading, ie a product of the decadent stage of CPU design, when the primary impulse becomes to game the markets rather than to optimize engineering metrics.

    Look at the performance/energy curve. There is SO MUCH overlap with Core. That makes little sense for a big.LITTLE type system -- if the primary goal is low power, you optimize the one core for low power, the other for performance, and make little effort to extend the low power performance beyond the lowest the high power core goes. (You want a small amount of overlap for hysteresis but not much more.) If you look at Apple's cores (which I know best) this is clear; the small cores max at about 30% of the performance of the large cores, and the large cores can clock down to about 1/3 maximum frequency.

    But what if your goal is NOT primarily energy saving? The Tremont presentation talks a whole lot about performance, little (nothing that I saw) about where they saved energy and how much. What if your goal is to create a "reasonably powerful" lower-end core, one that's at least a good-enough match for current ARM AND lets you expand your provision of multi-core (for PCs) and many-core (for laptops) without having to give up those nice juicy Core profits?...

    Clearly you can think of Tremont as an A75 equivalent, to be sold to designs thinking of jumping ship at that performance level. But you can also view it as Intel's way of providing low-end laptops/desktops with 5 (or 6? who knows what the SMT situation of the large core is) threads without having to drop the prices on i5s. Likewise a way to compete with those 48 and 64-core lightweight ARMs (ThunderX, Cavium and suchlike) while again not having to drop the price of the large Xeons.
    In this light, the omission of decent AVX is not a bug, it's a feature; it's one more reason that these are low class cores meant for peasants, while decent people should continue to pay for Cores.

    On financial grounds this may make sense, and Intel's plan is presumably to add AVX-512 when SVE becomes too common to ignore (but not until then...)
    On strategic grounds does it make sense? Hmm.

    - It may just prevent even more people from ever bothering to design, compile, and optimize for AVX. Maybe likewise for persistent memory? (That support seems pretty fragmented, and I'm sure Tremont won't help.)

    - Even Intel isn't so large that they can keep creating substantially improved new designs every year (something that's become very clear over the past few years).
    This particular fork seems to be one that doesn't allow for that much learning across the two teams (and may even lead to deliberate crippling if the Tremont direction gets "too" good).

    Of course other design houses are even more opaque than Intel (I don't think we have any idea how much cross-learning there is between the ARM big and little core teams. Apple certainly APPEARS to have very good cross-learning [both the lock-step feature support and the very low performance overlap as minor pieces of evidence] but who can be sure?)
    But they seem to have a better aligned set of incentives to keep everyone happy and in sync. (Team A goes for performance at this power level, team B for performance up to X and no further and this lower power level.)
    Whereas Intel seems to be in the difficult situation (that VERY WIDE performance overlap range between the two cores) of "yeah, keep making it faster, but not too fast --- you'll know when you're too fast because we'll crush your spirit at that point..."

    Anyway, lots of rambling here, but I think the key insight is to NOT see this as an ENERGY big.LITTLE play, regardless of what Intel says, but as a way to provide more cores at the low-end without hurting Core prices. (Of course there is still that pesky damn AMD forcing high-end Xeon prices to halve... Well, one battle at a time.)
  • Namisecond - Friday, November 1, 2019 - link

    8cx will probably beat Lakefield in terms of efficiency, but Lakefield has native x86 and probably better perceived performance.
  • azfacea - Thursday, October 24, 2019 - link

    wrong title. this aint new. its pentium 4 shrink/rebrand
  • Jorgp2 - Thursday, October 24, 2019 - link

    Lol, no
  • rozquilla - Thursday, October 24, 2019 - link

    I love my J5005 (Gemini Lake) as an HTPC, and I lent it to a relative for a while after his AMD A10-7860K (Piledriver, meh...) failed; he felt it performed about the same, and faster on videos...

    Which is why I love this CPU: it is fanless, stays at around ~8 W, and plays back 10-bit 4K content on my living room TV without any issues. I also added a CNVi 802.11ac module; it performs great.

    Hopefully this Tremont core will provide something like that, but I won't upgrade until there is AV1 hardware decoding. Which GPU will it be paired with, a Gen11 something? I think AV1 is still a bit down the road on x86; ARM already has a couple of proposals.

    For day-to-day office and HTPC duties, I haven't found a better alternative (maybe the RPi4 in this segment?). I'm also waiting to see the Ryzen embedded alternatives for home use; so far there are only expensive industrial-ish options.
  • GreenReaper - Thursday, October 24, 2019 - link

    Usually the video block is shared across all segments, so if the APU form of Navi picks up AV1 support, chances are it'll be available. Might be a while until truly low-end APUs are available, though.
  • vladx - Friday, October 25, 2019 - link

    Do you really expect Navi on future Atoms?
  • GreenReaper - Saturday, October 26, 2019 - link

    No. I was replying regarding "the Ryzen embedded alternatives for home use".
  • bananaforscale - Thursday, October 24, 2019 - link

    Your Atom history is incorrect. The first ones were released Q2'08. Look up Silverthorne. (Yeah, I have one of the original ones.)
  • xenol - Thursday, October 24, 2019 - link

    I don't see where Ian said it started at Saltwell. Only that he mentioned the last few generations of Atom.
  • digitalgriffin - Thursday, October 24, 2019 - link

    Saltwell was the first true redesign of Atom, with OoO (out-of-order execution), IIRC.
  • IntelUser2000 - Friday, October 25, 2019 - link

    No it's not. Saltwell is a 32nm process shrink.

    Silvermont (the Bay Trail platform) is the out-of-order Atom.
  • xenol - Tuesday, October 29, 2019 - link

    The SoC implementation of Atom started with Saltwell. So if Ian's context was the SoC implementation, then starting at Saltwell makes sense.
  • Namisecond - Friday, November 1, 2019 - link

    If by 'SoC', you mean the tablet and phone chips, I think that was Silvermont, not Saltwell.
  • maroon1 - Thursday, October 24, 2019 - link

    Does this mean that all five cores can be used together by an application?

    I think this will show 6 threads in Task Manager (since the Sunny Cove core has two threads, plus 4 Atom cores).
  • skoo - Thursday, October 24, 2019 - link

    Stay away from it (if it ever really comes out). I got left high and dry by Intel with their previous Atom foray into tablets. They decided it was a failure and just stopped supporting the chip (no more drivers for OS upgrades), so I am stuck with a tablet on Android 6.0.1.
  • 29a - Friday, October 25, 2019 - link

    I second this: stay away from Atom.
  • PeachNCream - Friday, October 25, 2019 - link

    AMD is just as iffy about support for their low-power cores. My A4-1250 is not supported either, though that isn't a problem with it running Linux. It's just that, unlike my Bay Trail, it isn't fanless and ultra quiet. There is nothing quite like a fanless laptop with an SSD or eMMC, and getting that with a Core is a challenge. Getting it with a Core at less than $200 is not possible.
  • Jorgp2 - Friday, October 25, 2019 - link

    Lol, that's how Android works.
  • unclevagz - Thursday, October 24, 2019 - link

    Given that when Lakefield products come out they will in all likelihood be competing with ARM A77 products, I struggle to see how this architecture will be competitive.
  • vladx - Friday, October 25, 2019 - link

    If Tremont is almost Core-class, as Intel claims, it will very likely equal the Cortex-A77 if not surpass it.
  • Wilco1 - Friday, October 25, 2019 - link

    It couldn't even get anywhere near a Cortex-A76. The fastest Goldmont+ gets 464 on Geekbench 5, so with 25% gain it would be ~600 at 2.5GHz.

    However SD855+ (Cortex-A76) gets 795...
  • vladx - Friday, October 25, 2019 - link

    That's probably because Geekbench tests both CPU and GPU, and I don't think GPU compute on Atoms is anything to be impressed by.
  • Wilco1 - Friday, October 25, 2019 - link

    No, these scores are not using the GPU. Atom just has poor integer performance (and FP is even worse, being just SSE: no AVX, no FMA). You need a 60-70% IPC improvement over Goldmont+ to match Cortex-A76.
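
    Checking those numbers in a quick sketch (Geekbench 5 scores as quoted above; the 25% is Intel's claimed IPC gain):

        goldmont_plus, sd855 = 464, 795                  # GB5 scores quoted above
        print(round(goldmont_plus * 1.25))               # 580 -> the "~600" estimate
        print(round((sd855 / goldmont_plus - 1) * 100))  # 71 -> the ~60-70% gap to A76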
  • Brunnis - Friday, October 25, 2019 - link

    If IPC is approximately 25% better than Goldmont Plus, it will be on Haswell level. Not sure how it will compete with A77 performance wise, but it should be competitive with A76. From a power consumption perspective? I wouldn’t bet on it.
  • Jorgp2 - Friday, October 25, 2019 - link

    They'll be x86 and have PC IO.
  • Namisecond - Friday, November 1, 2019 - link

    Which will be far more important for devices that run Windows.
  • petr.koc - Friday, October 25, 2019 - link

    "the enterprise side has been dealing with a clock degradation issue that ultimately leaves Atom systems built on C2000 processors unable to boot,"

    This is unfortunately not precise, as all Atom Bay Trail processors (desktop, mobile, server), including 14nm successors manufactured up to approximately 2018, are affected by an LPC circuitry degradation issue that will kill them in the end:
    https://en.wikipedia.org/wiki/Silvermont#Erratum
    https://en.wikipedia.org/wiki/Goldmont#Erratum
  • 29a - Friday, October 25, 2019 - link

    Ugh, I just looked at your links, and I have a NAS box with a J1900. I wonder what can be done to replace it?
  • MASSAMKULABOX - Thursday, October 31, 2019 - link

    Yeah, I'm amazed this didn't byte Intel in the ass much harder. AFAIK Synology and Cisco were both victims, and I'm sure many others were too. So, start by making well-tested, reliable products... and there's no harm in boosting the graphics side of things (2x? 3x?). Give us desktop systems @ 10 W and lower.
  • Bigos - Friday, October 25, 2019 - link

    > (We therefore assume that a 3.0 MB L2 will be 15-way.)

    That is very unlikely. 3.0 MB (which is 3 * 1024 * 1024 bytes) is not divisible by 15. I'm sure the 3 MB L2$ will be 12-way associative.

    1.5MB = 12 * 128kB
    3.0MB = 12 * 256kB
    4.5MB = 18 * 256kB
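
    A quick sketch of the way math (assuming power-of-two way sizes):

        KB = 1024
        for total_kb, way_kb in [(1536, 128), (3072, 256), (4608, 256)]:
            print(f"{total_kb / KB:.1f} MB in {way_kb} KB ways -> {total_kb // way_kb}-way")
        # 1.5 MB in 128 KB ways -> 12-way
        # 3.0 MB in 256 KB ways -> 12-way
        # 4.5 MB in 256 KB ways -> 18-way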
  • AntonErtl - Friday, October 25, 2019 - link

    It's clear that they drop products with low $/area when they do not have enough capacity, but AFAIK that's not the case at the moment for 10nm; on the contrary, they have 10nm capacity and not much demand for Ice Lake (because they cannot get the clock rates and efficiency competitive with the 14nm Skylake derivatives). So building Tremont-based successors to Gemini Lake (where performance is not as critical) would be a way for them to get more revenue out of their 10nm production line(s?); of course they have to design that first, and they may have failed to do so, expecting Ice Lake production to be in full swing by now.

    Concerning sucking performance, here are some numbers for our LaTeX benchmark http://www.complang.tuwien.ac.at/franz/latex-bench...

    2.368 Intel Atom 330, 1.6GHz, 512K L2 Zotac ION A
    1.052 Celeron J1900 (Silvermont) 2416MHz (Shuttle XS35V4)
    0.712 Celeron J3455 (Goldmont) 2300MHz, ASRock J3455-ITX
    0.540 Celeron J4105 (Goldmont+) 2500MHz
    0.200 Core i7-6700K (Skylake), 4200MHz

    Skylake has about a factor 1.6 better IPC than Goldmont+, and allows higher clock rates (at higher power consumption), resulting in significantly better overall performance, but whether that makes the Goldmont+ suck depends on the application.
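
    For anyone wanting to reproduce the 1.6x figure from the numbers above (runtimes in seconds, clocks in MHz):

        t_gm, f_gm = 0.540, 2500    # Celeron J4105 (Goldmont+)
        t_sk, f_sk = 0.200, 4200    # Core i7-6700K (Skylake)
        speedup = t_gm / t_sk       # 2.70x overall
        clock_ratio = f_sk / f_gm   # 1.68x from frequency alone
        print(round(speedup / clock_ratio, 2))  # ~1.61x left over -> IPC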
  • 29a - Friday, October 25, 2019 - link

    Decoding video, that's what the other two Atoms I've owned sucked at.
  • PeachNCream - Friday, October 25, 2019 - link

    You keep thrashing at that, but other people with dissimilar experiences have made claims that run contrary to your statements. What model Atoms, and under what conditions, have you had this problem? This isn't an issue for everyone else, and frankly, watching video isn't the only thing a computer does, so that complaint may have no impact on the wider range of use cases beyond watching YouTube and Netflix.
  • Jorgp2 - Friday, October 25, 2019 - link

    He probably has an in-order Atom.

    Pretty much all out-of-order Atoms have hardware decode acceleration.
  • GreenReaper - Saturday, October 26, 2019 - link

    Or he's trying to decode a video that isn't supported by the hardware, like 10-bit anything until very recently. In fairness, my Bobcat cores struggle with 60 fps anything, and plain Full HD MP4 decode also bogs down if you add anything but the most minimal of shader filters. But they're from ~2011.
  • eddman - Saturday, October 26, 2019 - link

    MP4 is a container, not a codec.
  • Alien959 - Friday, October 25, 2019 - link

    I am reading this article on a Goldmont Plus-powered laptop. While it definitely is not a speedster, the hardware is perfectly usable for light tasks like internet browsing and text editing; I even did some 1080p edits in Premiere and some modeling in SketchUp Pro. It handles those tasks fine. The rest of the hardware is an SSD and 8 GB of DDR4 RAM. The main thing that makes the system usable is that the GPU is supported in both programs, which offsets the weaker CPU cores.
  • Bigos - Friday, October 25, 2019 - link

    > The two ALUs have one focused on fused additions (FADD), while the other focuses on fused multiplication and division (FMUL).

    Did you mean *float* instead of fused? The only thing that comes to my mind when you say "fused" is FMA: fused multiply-accumulate.

    Also, in the "New Instructions" section the table is titled "TITLE", which sounds amusing but is probably a left-over.
  • mode_13h - Saturday, October 26, 2019 - link

    Yeah, this was also mentioned above. You are correct, as I said in my reply to @ternnence.
  • snakyjake - Friday, October 25, 2019 - link

    If it works for HTPC, decodes HEVC efficiently, low power, low heat, fanless, then I'll buy it.
  • Namisecond - Friday, November 1, 2019 - link

    The current Goldmont+ chips already do all that.
  • ksec - Saturday, October 26, 2019 - link

    Not useful without pricing. In terms of absolute numbers for both performance and power, ARM and POWER have readily available solutions.
  • Elstar - Saturday, October 26, 2019 - link

    The dual frontend decoders seem ideal for SMT performance. I'm surprised they don't offer that option for those who want it.
  • TomWomack - Sunday, October 27, 2019 - link

    '1.5 MB will be a 12-way design, while 4.5 MB will be an 18-way design. (We therefore assume that a 3.0 MB L2 will be 15-way'

    1.5 MB 12-way would be 12 128 KB ways; 4.5 MB 18-way would be 18 256 KB ways; 3.0 MB would be either 24-way with 128 KB ways or 12-way with 256 KB ways, almost certainly the latter.
  • AshlayW - Sunday, October 27, 2019 - link

    Intel's new SoC design with "low power" and "high power" cores, akin to the big.LITTLE from ARM, is actually pretty awesome. I'll give them credit where it's due, Sunny Cove, and Tremont are shaping up to be fantastic architectures - for low power mobile, an area where I'd love to see more super tiny low power x86 devices, as I have grown quite fond of my HP envy X360, even though it has the comparatively less efficient Raven Ridge silicon in it (2500U).

    It's just a shame they won't have anything interesting on the desktop. I'll tell you what, Intel: if you want my custom, get these ULP chips into something like a One Mix Yoga 3, and I might even buy it. Now imagine playing Warframe, on the go, on a device I can slip into my pocket, with a wireless Xbox controller in the other one. Make it happen.
  • Namisecond - Friday, November 1, 2019 - link

    Full-fat Warframe on Windows? Unless you want to tinker with less-than-maxed graphics settings, asking that of any on-die GPU is unrealistic. Asking it of something small and low-powered enough to slip into your pocket? We're just not there yet. Maybe in another 3-5 years?
