Last time I looked at Eigen, IIRC, it required the vector width to be specialized at compile time... which kind of defeats the purpose. I only glanced over it briefly, so maybe I misunderstood.
It wouldn't surprise me if a compile-time-specialized width were more efficient; part of Eigen's extremely low overhead is that most decisions can be made at compile time, and are often at least partially amenable to inlining, which in turn enables better compiler optimizations in general.
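As a rough illustration of that point (a toy sketch, not Eigen's actual internals): when the width is a compile-time template parameter, the inner trip count is a constant the compiler can fully unroll and vectorize; when it is only known at run time, that bookkeeping stays as real work inside the loop.

```cpp
// Toy sketch: compile-time width vs. run-time width.
template <int Width>
void axpy_fixed(float a, const float* x, float* y, int n) {
    int i = 0;
    for (; i + Width <= n; i += Width)
        for (int k = 0; k < Width; ++k)   // constant trip count, unrolled at compile time
            y[i + k] += a * x[i + k];
    for (; i < n; ++i)                    // scalar tail
        y[i] += a * x[i];
}

void axpy_runtime(float a, const float* x, float* y, int n, int width) {
    int i = 0;
    for (; i + width <= n; i += width)
        for (int k = 0; k < width; ++k)   // trip count unknown until run time
            y[i + k] += a * x[i + k];
    for (; i < n; ++i)
        y[i] += a * x[i];
}
```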
Additionally, while it sounds great on paper that your vector size is flexible, I'm skeptical that the hardware will run larger-than-native vector lengths as efficiently as it runs its true native size. It's quite possibly more efficient to target the true vector size for whatever operation you're running and schedule the iteration in software, because sometimes the algorithms involved are amenable to interleaving with other operations and/or other (more efficient) orderings. It's pretty difficult for the hardware to just guess what you're doing - in principle at least. But maybe ARM pulled it off; I'm just speculating here.
Finally, Eigen is a pretty old project by now, with lots of in-depth optimizations for a whole bunch of algorithms and architectures. It's possible the codebase simply made common assumptions (namely fixed-size vectors) in so many places that it's hard to change (though if "huge" sizes like 2048b had no additional overhead, why wouldn't Eigen just target that?).
TL;DR: it might be a software design limitation, but it strikes me as at least as plausible that the flexible vector sizes still aren't as efficient as using the true vector size.
I know, I know, but it has to be said... what are the implications of ARM v9 in terms of other nations and companies?
In particular, the People's Republic of China, with its strong-arming of other companies and nations by using economic sanctions and mass media manipulation to get its way? This "trade war" has allowed us a glimpse of the ugly side of both super-powers. And things look very questionable when probing into their nationalised companies like Huawei and SMIC (in contrast to Cisco and Intel).
Will this (ARM v9) pave a way forward where China essentially misses out? Sort of like being forced to use a Snapdragon 805 (or Android 4.4) when your competitors are using the Snapdragon 820 (or Android 5.1); the key point in that analogy is the 64-bit support. Is that scenario a good thing? Would it lead to China allowing a proper unfiltered Internet? Or perhaps to China allowing foreign companies into its internal market? Does it matter? Or would it lead to nothing, except simply reduced competition in China and global markets?
What drugs are you on? Why would ARM get involved in this political mess? And why would ARM be able to force anything? You do realise cutting China off can only result in one thing: they either abandon Armv9 completely and turn to RISC-V, or simply implement Armv9 without a license. What are you going to do? Revoke their license?
I doubt China would abandon crucial technology. I think they would rather seek out corruption in companies and governments, and gain access to the technology through alternative means. Or, even more likely, they'd leverage their own infrastructure (or economy) as tit-for-tat bargaining and gain official access that way. Especially given the short-sightedness of many politicians.
"Armv9 designs to be unveiled soon, devices in early 2022" What exactly do you think? ARM will release Matterhorn v8 and Something v9 back to back expecting nobody to use v8 and Qualcomm and Samsung to tape out Something v9 which should be happening NOW for a Q4 production and early 2022 release?
SVE2 is a huge existential threat for x86. Even if Intel, AMD, and VIA's subsidiaries agreed to standardize variable-width SIMD instructions overnight, ARM is still going to beat them to the punch. Heck, Intel couldn't even standardize AVX512 within their own product stack.
A) VIA doesn't matter. B) Intel and AMD could standardize this overnight. C) If they standardize this overnight, the only ARM implementations that will beat Intel and AMD to the punch will be internal-only Amazon chips and Apple's. Might as well be a win.
Cores take a long time to design and produce. ARM and their licensees presumably have some SVE2 designs in the pipeline by now.
In addition, Fujitsu, Qualcomm (via Nuvia), Ampere, and Nvidia/ARM all have pretty compelling shots at competitive designs. There are probably more.
AMD and Intel could be cooperating in secret, but that would be surprising. It would also catch developers by surprise, unless they do something simple like solidify AVX512 across the board, and break up instructions on smaller cores kinda like Zen 1 does.
The SVE2 core designs might be in the pipeline but my point is that the transition from core design -> SoC release appears to be pretty slow still.
I suppose the data center SoCs might match or slightly beat an Intel/AMD implementation. I still can't see that mattering as much as making it available to developers on local hardware. Until there's a dev loop on a single affordable local device running mainline Linux or Windows with modern WDDM that supports SVE2, it's not a threat. It only affects data centers that are either priced into keeping their current architecture, or are too big to care and already switched.
If Qualcomm delivers one of those in a laptop SoC, that could change the game. But imo that won't happen before Intel/AMD deliver.
We've heard repeatedly that (X) will be the downfall of x86 for years now. ARM was prophesied in 2013 as the next big thing, and it went nowhere. SVE2 will only become a "threat" to x86 if implementations are available across the industry.
TSMC, not ARM, is currently the biggest threat to x86. After TSMC will be Samsung. Behind those two it is Apple, not ARM, that is the biggest threat to x86.
And they are all different threats. ARM is slowly displacing x86 as more and more people use Android, iOS, and Chromebooks, and, including Macs, Intel's market share has dropped a measurable amount in the last decade (assuming Apple doesn't lose customers over its ARM switch).
TSMC is a design-agnostic foundry. They build the highest performing x86 chips available. They will (presumably) continue to build x86 as long as a customer is willing to pay to have them built.
I presume you're saying Apple is the 2nd biggest threat to x86 because they are transitioning away from x86 processors in their computers. But Apple is transitioning TOWARDS the ARM architecture. So it's completely nonsensical to say "Apple (the company) is a bigger threat than ARM (an architecture)" when they are both integral to the same transition.
Furthermore: Apple computers have spent far fewer years on x86 than they have on other architectures. Apple's transition might be a *symptom* of x86 possibly approaching end-of-life, but it sure is not a CAUSE of it. Until Apple obtains a vast majority of personal & server computing market share, which would be unprecedented, it is not itself a threat to x86 remaining a highly-used computing architecture.
Threats don’t come only from companies that can reduce x86 marketshare significantly and directly. Threats also come from companies that can change people's and the market's perception about the need for x86 processors - that is actually the first step needed before the marketshare drops significantly. Apple is one such company, and Intel has already demonstrated that it is indeed a threat (not a symptom).
Apple might be transitioning, but software companies rarely will; they'll go from some old assembler code to a higher-level language where ARM/x86 code is a compile away. While I agree that there have been a lot of false starts, the M1 is causing a lot of spring cleaning in desktop-oriented companies who've managed to ignore smartphones/tablets, for example getting Adobe to make a native version of Photoshop. If you're indifferent about ARM or x86, that's a win for ARM.
Customers who decide whether to buy those 300 billion ARMv9.x devices over the next decade are the final word on its success. If those customers (generally) stay with x86/x64, the statistics would balance out differently, and on desktops or laptops the availability of grown, mature programs for technical drawing and design, analysis, device support, database access, or office work is still an advantage in that decision. Mobile devices (Android) have a wider variety of (useful) apps that fulfill smaller tasks for users (like access to IoT devices, which are probably the bigger share of those 300 billion devices through 2030, compared with the roughly 180 billion ARM SoCs shipped up to 2021). And there's no problem having an ARM device beside an x64 device, because both ARMv9.x and x86/x64 devices are cheaply available.
If "going nowhere" means moving core volume that makes x86 look like small fry, making serious inroads in servers and HPC, and getting the buy-in of the most profitable PC OEM (Apple), sure. What were you expecting, that "ARM succeeding" means "x86 drops to zero in eight years"? "x86 is going to be the downfall of RISC/UNIX" was something that was being said when the 486 was new, and RISC/UNIX was still a majority of server revenue into the 2000s and is still big money (billions of dollars a year today.)
Shifts take time, and even if x86 does enter terminal decline - and I'm not necessarily saying it will - desktop PCs will be the last part to go.
Uhh, ARM IS the next big thing TODAY. Last time I checked the mobile market is a lot larger than the PC market...
You're like the guy who insists "microprocessors never won! IBM is still selling mainframes and they still kick ass". It's true: mainframes still sell, and still kick ass. But they don't define the state of computing. Intel will be around for a long time supporting the "requires x86" market; that was never in doubt. The point is, x86 no longer defines the interesting state of computing; it's fading away to mainframe status before our eyes. Oh sure, there'll be a few more glory years -- peak IBM was 1985 -- but the pattern is laid out.
And are you incapable of understanding the article? SVE/2 WILL be available across the industry! That's a large part of the point of creating this new v9 branding and establishing a new baseline for the ARM community.
This is not true. Most developers have never used any SIMD and don't plan to. Some of them don't even know what SIMD is. You're severely overestimating its importance. Software developers are generally lazy and produce lots of underperforming and poorly optimized code.
Given that Arm introduced SVE several years ago, and no one has even implemented it in a processor that you can buy, I don't know why you think Arm's noises about SVE2 matter. It won't matter. They're so fragmented that they can't even get consistent implementation of the latest versions of v8, like v8.3/4/5.
Apple doesn't even want developers to optimize at that level, to use assembly or intrinsics, so they make it hard to even know what instructions are supported in their Arm CPUs. They want everyone to use terrible languages like Swift. On Android, there's so much fragmentation that you can't count on support for later versions of v8.x.
SVE2 would matter on servers if and when Arm servers become a thing, a real thing, like a "you can buy one from Supermicro" kind of thing. They would need to be common, with accessible hardware. Developers will need access to the chips, either on their desks or in the cloud. It would need to be reliable access: the cloud generally isn't reliable that way, as there have been cases where AWS dropped people down to instances running on pre-Haswell CPUs, which broke developers' code using AVX2 instructions...
You can't develop for SVE2 without access to hardware that supports it. Right now that hardware does not exist. Arm v9 isn't going to yield any hardware that supports SVE2 for a year or longer, and it might be four years or so before it's easily accessed, or longer, possibly never. By the time it's readily available, so many other variables will have changed in the market dynamic between Arm, AMD, and Intel that your claim doesn't work.
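For what it's worth, the usual defense against that kind of breakage is to detect CPU features at run time instead of assuming them. A minimal sketch using the GCC/Clang builtin (the two summing routines are hypothetical stand-ins for an intrinsics path and a portable fallback):

```cpp
#include <cstdio>

// Stand-in for a hand-written AVX2 path (imagine intrinsics here).
static float sum_fast(const float* x, int n) {
    float s = 0.0f;
    for (int i = 0; i < n; ++i) s += x[i];
    return s;
}

// Portable fallback that works on any CPU.
static float sum_portable(const float* x, int n) {
    float s = 0.0f;
    for (int i = 0; i < n; ++i) s += x[i];
    return s;
}

float sum(const float* x, int n) {
    // Real GCC/Clang builtin on x86: queries the running CPU, so the
    // binary keeps working if it lands on a pre-Haswell instance.
    if (__builtin_cpu_supports("avx2"))
        return sum_fast(x, n);
    return sum_portable(x, n);
}

int main() {
    float data[4] = {1.0f, 2.0f, 3.0f, 4.0f};
    std::printf("%f\n", sum(data, 4));
    return 0;
}
```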
A lot of developers might not even know what SIMD is, but I would argue that a lot of apps actually end up using SIMD simply because many APIs to the system make use of NEON.
MTE will likely end up more of a short-term solution, as all such solutions are.
If Arm was serious about actually getting rid of the majority of memory bugs, they would have announced first-class support for the Rust programming language.
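For context, the class of bug in question looks something like this (deliberately broken C++, shown only to illustrate what MTE tries to catch probabilistically at run time and what a borrow-checked language rejects at compile time):

```cpp
#include <cstdlib>
#include <cstring>

int main() {
    char* buf = static_cast<char*>(std::malloc(16));
    std::strcpy(buf, "hello");
    std::free(buf);
    buf[0] = 'H';   // use-after-free: write through a dangling pointer
    return 0;
}
```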
Many languages have claimed to solve all computing problems, but none did as well as C/C++. Why would Rust be any better than Java, C#, D, Swift, Go etc?
Also you're forgetting that compilers and runtimes will still have bugs. 100% memory safety is only achievable using ROM.
Because, of all the languages mentioned, Rust is not a GC-based language and has the highest chance of being used for systems programming. See the Rust additions to the Linux kernel, see MS's praise for Rust, etc. Generally speaking, Rust is more type-safe and memory-safe than C, and good old C is really old enough to be replaced completely.
Ditching GC is good but it doesn't solve the fundamental problem. I once worked on a new OS written in a C# variant, and it was riddled with constructs that switched off type checking and GC in order to get actual work done. So in the end it didn't gain safety while still suffering from all the extra overheads of using a "safe" language.
So I remain sceptical that yet another new language can solve anything - it's hard to remain safe while messing about with low level registers, stacks, pointers, heaps etc. Low-level programming is something some people can do really well and others can never seem to master.
We're not talking C89 but C17, and most OS codebases are already using those modern features. C2x has an awful lot of work being finalized for it.
Rust will be safer until the hacking community is interested enough to find all of the bugs and poor thinking that undoubtedly exists in Rust, as it has in every language over the decades that was declared safe.
Are you aware of formal verification? There are formally verified OSes now, like seL4.
There's also the CHERI CPU project, which Arm is involved in.
And formally verified compilers, like INRIA's CompCert.
We need to junk C and C++ and replace them with serious programming languages that are far more intuitive and sane, as well as being memory safe. Rust is terrible from a syntax and learning standpoint, and much better languages are possible. The software industry is appallingly lazy.
You're deluded. The amount of work on C/C++ in Clang should make it clear these are the foundational languages. Apple made the mistake of listening to Lattner before he bailed and developed Swift. If they're smart they'll fully modernize ObjC and turn Swift into a training language.
"The benefit of SVE and SVE2 beyond addition various modern SIMD capabilities is in their variable vector size, ranging from 128b to 2048b, allowing variable 128b granularity of vectors, irrespective of what the actual hardware is running on"
Not you too, Andrei :-( This is WRONG! The increased width is a minor benefit outside a few specialized use cases. If you want to process 512 bits of vector data per cycle, you can do that today on an A14/M1 (4-wide NEON). The primary value of SVE/2 is the introduction of new types of instructions that are a much better match for compilers, and for non-regular algorithms. Variable width matters, but NOT in the sense that I can build a 128-bit or a 512-bit implementation; it matters in that I can write a single loop (without prologue or epilogue) targeting an arbitrary-width array, without expensive overhead. Along with the adjacent variable-width functionality like predication and scatter/gather.
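To make that concrete, here's roughly what such a loop looks like with the ACLE SVE intrinsics (a sketch; assumes a compiler and target with SVE support). The predicate from svwhilelt covers the final partial vector, so there's no separate epilogue, and the same binary runs unchanged on any hardware vector length:

```cpp
#include <arm_sve.h>
#include <cstdint>

// c[i] = a[i] + b[i], written once for any SVE vector length.
void add_arrays(float* c, const float* a, const float* b, int64_t n) {
    for (int64_t i = 0; i < n; i += svcntw()) {   // svcntw(): 32-bit lanes per vector
        svbool_t pg = svwhilelt_b32(i, n);        // predicate masks off the tail
        svfloat32_t va = svld1(pg, a + i);
        svfloat32_t vb = svld1(pg, b + i);
        svst1(pg, c + i, svadd_x(pg, va, vb));
    }
}
```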
Could these properties of SVE2 make Armv9 designs more attractive for, for example, AV1 (and, down the road, AV2) video encoding? I could see some customer interest there, at least if hosted by AWS or Azure.
I don't know what's involved with AV1 and AV2 encoding. With the older codecs that I do know, most of the encoding algorithm is in fact extremely regular, so there's limited win from providing support for non-regularity. My point is that SVE/2 is primarily a win for types of code that, today, do not benefit much from vectors. It's much less of a win for code that's already strongly improved by vectors.
What is the "expensive overhead" in question? If you're writing a loop which processes 4 floats, then the tail is no longer than 3 floats. Even if you manually unroll it 4x, then it's 15 floats max to process in the epilogue. For SIMD to give any benefit, you should be processing large amounts of data so even 15 scalar operations is nothing comparing to the main loop.
If we're talking about the size of code, then it's true; the predicates in SVE2 are making the code look smaller. So the overhead is more about the maintenance costs, isn't it?
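For comparison, a fixed-width NEON version of the earlier loop (again just a sketch) shows the tail being discussed; the run-time cost is indeed small for large n, so the argument is mostly about the extra code path that has to be written and kept in sync:

```cpp
#include <arm_neon.h>
#include <cstdint>

void add_arrays_neon(float* c, const float* a, const float* b, int64_t n) {
    int64_t i = 0;
    for (; i + 4 <= n; i += 4) {              // main body: 4 floats per iteration
        float32x4_t va = vld1q_f32(a + i);
        float32x4_t vb = vld1q_f32(b + i);
        vst1q_f32(c + i, vaddq_f32(va, vb));
    }
    for (; i < n; ++i)                        // scalar tail for the last 0-3 elements
        c[i] = a[i] + b[i];
}
```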
Maybe it's just me, but did anyone else notice that Microsoft was prominently mentioned in several slides in ARM's presentation? To me, it means that both companies are very serious about Windows on ARM, including on the server side. I guess we'll see soon enough if the custom-ARM processor MS is apparently working on has Armv9 baked into it already. I would be surprised if it doesn't.
And here's my standard complaint about the LITTLE cores: Quo usque tandem, ARM? When will we see a LITTLE core design with out-of-order execution, so that stock ARM designs aren't 2-3 times worse on Perf/W than Apple's LITTLE cores anymore? That does matter for smartphones, because staying on the LITTLE cores longer and more often improves battery longevity. I know today was about the next big ISA, but some mention of "we're working on it" would have been nice.
Seems unlikely. The ARMv8.6 matrix multiply instructions use the NEON or SVE registers https://community.arm.com/developer/ip-products/pr... and so can provide limited speedup; the Apple scheme uses three totally new (and HUGE) registers, the X, Y, and Z registers. It runs within the CPU but "parallel" to the CPU; there are interlocks to ensure that the matrix instructions are correctly sequenced relative to the non-matrix instructions, but overall the matrix instructions run like an old-style (80s or so) coprocessor, not like part of the SIMD/fp unit.
The Apple scheme feels very much like they appreciate it's a stop-gap solution, an experiment that's being evolved. As such they REALLY don't want you to code directly to it, because I expect they will be modifying it (changing register sizes, register layout, etc) every year, and they can hide that behind API calls, but don't want to have to deal with legacy code that uses direct AMX instructions.
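As an illustration of "hide that behind API calls": on Apple platforms the sanctioned route is a library such as Accelerate, whose BLAS interface lets the OS decide internally how to use whatever matrix/vector hardware is present. A small, Apple-specific sketch:

```cpp
#include <Accelerate/Accelerate.h>

// Multiply a 2x3 matrix A by a 3x2 matrix B into the 2x2 matrix C.
// cblas_sgemm is standard BLAS; the library, not your code, picks the
// underlying hardware path.
void small_gemm() {
    float A[6] = {1, 2, 3, 4, 5, 6};
    float B[6] = {1, 0, 0, 1, 1, 1};
    float C[4] = {0};
    cblas_sgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                2, 2, 3,      // M, N, K
                1.0f, A, 3,   // alpha, A, lda
                B, 2,         // B, ldb
                0.0f, C, 2);  // beta, C, ldc
}
```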
Yes, before ARM had even announced that their 64-bit core was suitable for anything other than the servers it was aimed at, Apple came out with their 2-core version in the A7, shocking the entire industry.
I wouldn't be surprised if they use this in their A15 later this year.
They want to include ray-tracing?! Mobile phones, the biggest market I'm aware of for ARM GPUs, are not even able to afford to include the complete GPU+CPU+caches. They use too much area and power to work in that form factor. How on earth would they get ray-tracing in there too?
My layman's understanding of the ARM ecosystem is they're not exclusively for use in mobile phones. And that licensees can design different processors, with different tradeoffs, to suit different purposes.
So perhaps it's unlikely that someone will design an ARM chip with raytracing silicon for mobile phones any time soon.
But it certainly seems plausible that sometime in the next 5-10 years, someone will be interested in building a different kind of device with a larger form factor that has the thermal and power-consumption envelope to support a ray-tracing-enabled ARM processor.
So long as the operating system running on the ARM chip is capable of updating itself. No ridiculous Android philosophy of placing this task in the hands of inept OEMs. We're gonna need a real OS like Windows, Linux, or even MacOS.
Ah, the good old x86 death-threat comments; how long has it been since the last one? Anyway, AI is not going to dethrone x86: everyone is going to buy the leader's chips (Nvidia) or they will make their own. Also, Intel has FPGAs and Xilinx, a.k.a. AMD, has FPGAs as well, so they can build specialized cores whenever they feel like it.
Apple is not competing in the server space, so they cannot touch AMD's and Intel's x86 volumes; all they do is consumer business, and all their servers also run on x86, lmao. The ARM dominance over x86 doesn't exist: by server market share it doesn't come close, since over 95% is x86, and AMD is now slowly taking away Intel's Xeon share with the EPYC series.
So far no ARM processor has beaten EPYC Rome. Next, the AWS Graviton2 is exclusive to Amazon, and the rumored Microshaft in-house chip will be exclusive too; they want to centralize power into their ecosystem because oil's age of power is over. Anyway, what's left? Google? Hah, their incompetent and politically radical nature is utterly stupid, and their castration of Android is unforgettable. They are simply moving all of AOSP into Google services, turning it into another Apple walled garden, their HW is pathetic, and the only agenda is dumbing down. So ARM works there because the phones can only run on ARM HW. Yes, they outnumber desktop parts by a huge margin, but the world still relies on x86 computing, even if the SW is dumbed down (Win10 UWP etc., Mac OS turning into a phone-hybrid OS, fewer power-user features); there's a massive market of Dell / HP / Lenovo / Supermicro / Gigabyte who all cater to x86 ONLY. So the hero ARM doesn't have an OEM, lol. That latest 80C Ampere Altra of course is available, but it's weak vs AMD. Intel Ice Lake Xeon is coming as well, and fat stacks already went to the CTOs to get Intel HW only. Marvell ThunderX? Last time I heard they were going to build custom chips. Fujitsu A64FX? Custom. Oh, I forgot Nuvia: Qualcomm swallowed them, so are they going to resurrect Centriq, after how they axed all custom in-house designs along with it and now push only stock ARM cores on Android? I guess so.
Finally, what does ARM provide? More custom BS where you cannot do anything, since the OEM owns your HW top to bottom, and you cannot have good backwards compat because the SW is made for dumbed-down users? Hint: the Surface SQ2; to be honest, even the x86 Surface has highly locked-down HW. Macbooks? Everything soldered down and locked down. What else do consumers have to rave so hard about with ARM? I suppose the Raspberry Pi, which is going to dethrone x86 (the Pi is amazing HW, not doubting that at all, but people have to realize what it is that ARM is providing to them over x86 in both the HW and SW stack and in user customization). Finally the Switch: it is huge in numbers, and new HW is on the horizon with the DLSS-equipped Pro edition, but is it comparable to the AMD SoCs in the Xbox and PS? Nope.
"So far no ARM processor beat EPYC Rome, next the AWS Graviton2 is excl. to Amazon"
If you had bothered to read the Milan review, you would know that Ampere Altra not only outperforms Rome by a good margin, but matches Milan as well (1% faster on 1S SPECINT_rate). All that with a 2-year old core and 1/8th of the cache... ~15% of AWS is now Graviton and still growing fast, so it is obviously displacing a huge amount of x86 servers.
Considering EPYC Genoa is 96 cores / 192 threads and will include Xilinx specialty processors for Zen 4, I would have just left that as the comment. Intel's new CEO will ratchet up specialty processing in future Intel solutions as well.
Sorry, but that's actually not even remotely close. Just head over to Phoronix and see how badly Milan whips the competition across the board. And yes, Phoronix has a much larger suite of applications than Anandtech.
Anandtech is one of the few sites that produces accurate benchmark results across different ISAs. SPEC is an industry standard benchmark to compare servers, and I don't see anything like it on Phoronix. Phoronix just runs a bunch of mostly unknown benchmarks without even checking that the results are meaningful across ISAs (they are not in many cases). Quantity does not imply quality.
SPEC is quite flawed; you can go read up on it. It basically only cares about cache and cache latency, and it is not an accurate representation of how stuff performs across different architectures.
It's actually quite difficult to compare between architectures unless you know the specific use case, and Apple has done really well with the translation layer. I think .NET Core/5 from MS will also help MS quite a bit with that over the next few years when they start moving a lot of their products to their own architecture.
SPEC consists of real applications like the GCC compiler. More cache, lower latency memory and higher IPC*frequency give better scores just like any other code. SPEC is not perfect by any means, but it is the best cross-ISA benchmark that exists today.
What Phoronix does is testing how well code is optimized. If you see x86 being much faster than AArch64 then clearly that code hasn't been optimized for AArch64. SimdJson treated AArch64 as first-class from the start and thus has had similar optimization effort as x86, and you can see that in the results. But that's not the case for many other random projects that are not popular (yet) on AArch64. So Phoronix results are completely useless if you are interested in comparing CPU performance.
I just hope they put CCA in client-side SoCs too. So far all those 'realm', 'enclave' or VM-encryption enhancements have only targeted server-side chips, but I don't think the vendor-favored walled-garden approach has much of a future; there is an urgent need for more federation.
Glad to see it. At least with the new arch they finally have to update their small cores; I was so tired of the A55... Only the big cores have been in focus, though in my opinion the small ones are just as important, or even more so.
SVE2 is great; I wonder how Intel and AMD will react to this. They should work on similar features and also create a "Lean86" that gets rid of legacy if they want to defend market share. That, and more flexible features like SVE, would benefit them a lot.
I am quite excited to see what ARMv9.x can do in tablets and Ultrabooks, etc.