Will there be a list of DDR4-only boards as well?
Yes, we're also putting together a guide for DDR4 boards.
Any news on the DDR4 story? It would be nice to know which model is the best for performance and features in the eyes of others.
Intel's actually released a compelling new chipset? I'm surprised to see DDR5 and PCIe 5.0 support, but USB4 seems to be notably absent, despite there being no reason at all to omit it. Intel is finally one-upping AMD after a few years of playing number two.
Thunderbolt 4 is USB4 capable…
Yep, the only thing USB4 adds over "USB 3.2 2x2" is Thunderbolt support. Therefore any Thunderbolt 4 device is automatically USB4. In fact, essentially any board with "Thunderbolt 3" along with USB 3.2 2x2 basically gets "USB4" status for free.
USB 3.2 2x2 is 20 Gbps. USB4 is 40 Gbps.
That's why they mentioned TB3. 40 Gbps support is also optional for USB4.
DP 2.0 is mandatory for USB4, so TB3 support isn't good enough.
That is only the name. The question is what speed you can actually run USB devices at.
"essentially any board with "Thunderbolt 3" along with USB 3.2 2x2 basically get "USB4" status for free."
TB3 can run USB4 devices. While USB 3.2 2x2 should be able to as well, it would be capped at its 20 Gbit/s and run over the USB backwards-compatibility protocol. USB4 ports can be either 20 or 40 Gbps.
I wouldn't want just the 20 Gbps-capped USB4 ports Apple ships. We'll probably see some of that on the AMD side. The best thing is to have TB3 or TB4 so you can be sure you have full-speed 40 Gbps ports.
40 Gbps is just optional. If you have Thunderbolt and 10 Gbit USB, you can call it USB4. See Apple.
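To put rough numbers on the speed tiers being discussed, here's a quick sketch (illustrative only; these are nominal signaling rates, and real-world throughput is lower once encoding and protocol overhead are taken into account):

```python
# Nominal link rates (Gbit/s) for the port types discussed above.
# These are signaling rates; usable throughput is lower in practice.
LINK_RATES_GBPS = {
    "USB 3.2 Gen 2x2":        20,
    "USB4 (20 Gbps variant)": 20,  # 40 Gbps is optional in the USB4 spec
    "USB4 (40 Gbps variant)": 40,
    "Thunderbolt 3":          40,
    "Thunderbolt 4":          40,
}

def gbps_to_gb_per_s(gbps: float) -> float:
    """Convert a line rate in Gbit/s to GB/s (decimal, ignoring overhead)."""
    return gbps / 8.0

for port, rate in LINK_RATES_GBPS.items():
    print(f"{port:26s} {rate:>3d} Gbps  ~{gbps_to_gb_per_s(rate):.1f} GB/s raw")
```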
I agree. How come there are so few boards with USB4 or TB4? And how come the article doesn't mention them at all before it starts listing specific features of individual boards?
Right now, the only way to get USB4 on a PC is by using Intel's Thunderbolt 4 controller (or having it built into Tiger Lake). Since Thunderbolt is kind of a niche thing on desktop PCs, motherboard makers aren't interested in spending the money on Intel's TB4 chip except on high-end or specialty boards. I would assume there will be some third-party USB4 chips coming soon.
Thanks for your reply. So USB4 was built into Tiger Lake but it's not built into Alder Lake / Z690? That would explain some things, but it doesn't explain why on earth Intel would do that, or why AnandTech wouldn't think this major regression worth mentioning! The main reason I want to upgrade from my Skylake system (which I purchased to get built-in USB 3) is to get USB4/TB4.
TB is only integrated into the mobile dies. The desktop die has no TB.
DDR5 is no faster in almost every case, and there are no PCIe 5.0 devices (unlike when PCIe 4.0 launched, where at least you got video cards and storage immediately). Not really an advantage. Prices are too high as well. Frankly, I like PCIe 3.0 boards when they are under $100 USD.
It's the same thing that happened during the DDR3 to DDR4 transition. The first DDR4 products weren't really any faster than the best DDR3. Eventually DDR4 speeds got faster and left DDR3 behind. Same thing will happen with DDR4 to DDR5.
PCIe 4.0 support was significantly delayed on the desktop, but it arrived in servers in 2017 (IBM POWER9). AMD was planning on adopting PCIe 4.0 after Intel on the desktop, but the train wreck of Intel's 10 nm manufacturing node derailed the chips that were going to add it (Ice Lake on desktop).
I would expect both PCIe 5.0 graphics and storage by the end of 2022 on the desktop, though their benefits will be marginal outside of a few niches. (Single-lane PCIe 5.0 chips for USB4/Thunderbolt 4 and 10 Gbit Ethernet, versus using four PCIe 3.0 lanes, are cost-driven examples.)
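A rough sketch of that lane-count arithmetic (the per-lane figures are approximate usable bandwidth after encoding overhead, so treat them as back-of-the-envelope numbers):

```python
# Approximate usable bandwidth per PCIe lane in GB/s (128b/130b encoding).
PER_LANE_GBS = {3: 0.985, 4: 1.969, 5: 3.938}

def link_bandwidth(gen: int, lanes: int) -> float:
    """Approximate usable bandwidth of a PCIe link in GB/s."""
    return PER_LANE_GBS[gen] * lanes

# A single Gen 5 lane carries roughly as much as the four Gen 3 lanes
# that a 10 GbE or Thunderbolt controller typically uses today.
print(link_bandwidth(5, 1))  # ~3.9 GB/s
print(link_bandwidth(3, 4))  # ~3.9 GB/s
print(link_bandwidth(5, 1) >= 10 / 8)  # True: plenty for 10 GbE (~1.25 GB/s)
```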
Kevin G - I agree. I think in a year there will be PCIe 5.0 devices, but the performance advantages, much like with the initial PCIe 4.0 devices (RTX 30-series, NVMe SSDs, etc.), won't be there until 2023-2024, by which time this platform will already be replaced or significantly less expensive.
I don't think Intel is looking to drive a lot of sales with this platform. Not many people are buying $3000 desktop PCs at the moment (and when you consider the platform alone is $500, with a $500 CPU on top of it, $3000 is pretty conservative, considering most people buying something like this will want a $1000+ GPU; that's $2000 for three components).
To put it in perspective, the last launch like this, with a lot of tech you couldn't take advantage of right away, was probably X58: PCIe 2.0 at a time when no PCIe 2.0 products existed, and 36 lanes no less, left a ton of room to expand a platform that was already stacked to the gills with embedded tech. In fact, it would be years before applications were fully optimized for the bandwidth offered by triple-channel memory, let alone the quad-channel memory that Intel introduced on later HEDT platforms. The difference, though, is that Z690 isn't even HEDT.
The current LGA 1700 platform from Intel is set for three generations: Alder Lake, Raptor Lake, and Meteor Lake. In that time frame, future generations, as well as AMD releasing AM5 (also featuring DDR5 and PCIe 5.0 support), will remove the current premium prices and bring parity with what we have today. Even in the short term, Intel supports DDR4 on Alder Lake, and the various Alder Lake i3s and lower will arrive in less than six months, further reducing prices.
The big feature of PCIe 5.0 isn't the additional bandwidth; on the server side, it's that it brings CXL. For consumers, I wouldn't be surprised if AMD enables their PCIe 5.0 slot to switch over to an Infinity Fabric mode when paired with a Radeon graphics card. (AMD recently announced something similar with Epyc and CDNA2.) The benefits wouldn't be the bandwidth but rather lower latencies between devices and increased efficiencies due to coherency/direct memory access. The way things are aligning, this could arrive in late 2022.
X58 did have a few cards that took advantage of PCIe 2.0 right away (dual 10 Gbit networking on an eight-lane PCIe 2.0 card). Beyond that, multi-GPU was emphasized by both nVidia and AMD at the time, giving some usage to those additional lanes. X58 wasn't the first DDR3 platform from Intel, but it still carried a premium over DDR2 when it was first introduced.
Intel HEDT is currently dead, with the glimmer of hope that Sapphire Rapids brings it back. However, I'd expect any Sapphire Rapids HEDT platform to be half of what Intel is offering on the server side (dual chiplet instead of quad chiplet). AMD still has Zen 3 / Zen 3D Threadrippers to launch as a counter.
Ahh, X58. Good stuff. I upgraded from an X58 system in 2017 and it's still going strong as my neighbor's PC. It took 6+ cores and (more importantly) NVMe storage to dislodge its position as "good enough" while I spent the PC money on video cards, and I finally got an 8700K in 2017.
I expect something similar to happen for my next upgrade: a new memory tech, PCIe 5.0, and more and faster USB at the minimum, and at a lower price than these Z690 boards. I don't expect that until Zen 4 at the earliest, so late 2022 / early 2023. But even then, I doubt CPU performance will be the killer feature that forces me to upgrade; it will be something related to I/O or memory. Bleeding-edge CPU performance just isn't that relevant to most gamers anymore, and upgrade cycles have slowed down so much compared to the past. I remember when year- or two-year-old hardware was at risk of being unable to play the latest titles!
I don't see much advantage for PCIe 5.0 dGPUs, but PCIe 5.0 SSDs should offer improved sequential read/write speeds as NAND layer counts climb. Unfortunately, on Z690 you'd need to use a PCIe 5.0-to-NVMe expansion card, as the motherboard M.2 slots are 4.0 only. I wonder how that will be handled; some early Z690 slides listed support in the 5.0 expansion slots for Intel SSDs only.
Oof, these motherboard prices are pretty high. I struggle to justify anything over $400, especially in the mainstream market.
> PCIe 5.0 SSDs should offer improved sequential read/write speeds with more NAND layers.
Why? What's the use case for > 8 GB/sec storage reads/writes, in a consumer desktop? And I'm not aware of a consumer SSD that's even maxed out PCIe 4.0 x4, BTW.
These are probably the reasons Intel didn't bother with it. However long it takes consumer GPUs to support PCIe 5.0, SSDs could take even longer. And with Raptor Lake coming in just <= 1 year, Intel will soon have another chance to re-evaluate whether a PCIe 5.0 M.2 slot makes any kind of sense.
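For context on the "maxed out" question, a quick back-of-the-envelope comparison; the drive figures below are rough, representative sequential-read ratings for 2021-era consumer NVMe drives, used purely as illustration:

```python
# Approximate usable bandwidth of a x4 link per PCIe generation (GB/s).
X4_LINK_GBS = {3: 3.9, 4: 7.9, 5: 15.8}

# Roughly where fast consumer NVMe drives sat in late 2021 (rated
# sequential reads, GB/s); illustrative numbers only.
EXAMPLE_DRIVES_GBS = {"fast Gen 3 drive": 3.5, "fast Gen 4 drive": 7.0}

for name, speed in EXAMPLE_DRIVES_GBS.items():
    headroom = X4_LINK_GBS[4] - speed
    print(f"{name}: {speed} GB/s rated, ~{headroom:.1f} GB/s shy of the Gen 4 x4 ceiling")
```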
Yeah, definitely too much for a "current" system: you can't use half the tech you are paying for because there is literally nothing on the market, and there likely won't be for a year. I mean, we are only at Gen 2 PCIe 4.0 SSDs, and PCIe 4.0 GPUs only recently launched (if you can call it that, since they are still difficult to even buy). On top of that, the difference in most applications (GPUs, for example) between PCIe 3.0 and PCIe 4.0 is almost nothing, because the bus isn't the bottleneck.
Other than TB4, I don't know what PCIe 5.0 is going to be good for in the near term, because PCIe 4.0 is already a lot of bandwidth even at small lane counts. Obviously PCIe 5.0 has more bandwidth in narrower links but, again, on desktop/mobile platforms there isn't a lot of demand for that much bus bandwidth.
Definitely a technology showcase launch more than a marketable one.
I will say, in regards to PCIe 5.0, there is the advantage that you can do more on the board with fewer lanes. Some of these boards split the PCIe 5.0 lanes between two x16 slots (x8/x8 electrically), effectively giving each slot the bandwidth of a full-speed PCIe 4.0 x16 link rather than a Gen 4 x8. While this still won't benefit you much in most games, it's a big help for anyone who uses multiple GPUs for workstation purposes, or needs extremely high-speed storage in the second slot.
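A minimal sketch of that lane-splitting arithmetic, using the same approximate per-lane figures as earlier:

```python
PER_LANE_GBS = {4: 1.969, 5: 3.938}  # approximate usable GB/s per lane

gen5_x8 = 8 * PER_LANE_GBS[5]    # one half of a bifurcated Gen 5 x16 slot
gen4_x16 = 16 * PER_LANE_GBS[4]  # a full-width Gen 4 x16 slot

# Splitting the CPU's Gen 5 x16 into x8/x8 leaves each slot with roughly
# the bandwidth of a full Gen 4 x16 link (assuming the cards negotiate Gen 5).
print(f"Gen 5 x8:  {gen5_x8:.1f} GB/s")
print(f"Gen 4 x16: {gen4_x16:.1f} GB/s")
```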
Did Intel say why they upped the number of SATA ports to 8? If the number were to change at this point, I'd expect it to start going down.
With M.2 SSDs having largely displaced SATA models, the demand for SATA continues to dwindle outside of DIY NAS systems, for which a much cheaper basic-chipset mobo is suitable. With only about 1 in 10 mobos adding the last 2 ports, the mobo makers mostly seem to agree that it was unneeded as well.
It's for RAID; they are adding some RAID features. And M.2 is nothing special vs. SATA. Almost all M.2 drives are TLC; the only MLC ones are the 970 Pro, which is now obsolete, while SATA gets you 860 Pro MLC 4 TB drives. Plus, M.2 is barely useful in gaming; at most it reduces load times by a fraction vs. a regular SATA SSD, and the drives get way too hot for what they offer in terms of endurance, capacity, and usefulness in general-purpose workloads.
The bonus is that SATA gets you NAS-class devices (WD Red, Seagate Exos, etc.), so there's no need to buy a separate NAS and deal with Plex and all that; you can simply run all your media on your own PC. I bet many people like that option when they don't want a TV or other devices involved.
It's really unfortunate that everyone is dropping them; even the ASUS Apex dropped them this time. Only EVGA and ASRock are offering them on their mobos.
It seems to be a tangent on your part, but games currently don't even leverage PCIe 3.0 SSD speeds. When DirectStorage is utilized by actual games, your statement becomes grossly false when it comes to NVMe M.2 SSDs vs. SATA SSDs.
If you need endurance and so on, most pros would opt for Intel Optane in the U.2 format (which you can alternatively slot in through a PCIe or M.2 adapter).
Yeah, I have been hearing the same BS ever since DX12 came out, and where are we with those promises? Talk when the tech is there; don't place ladders in the sky. PCIe 4.0 itself is useless for GPUs even in benchmarks; a minor boost at very, very high FPS is all it delivers right now.
Why fill my PC with loud and hot hard drives? I have 2 M.2 sticks as local storage and a NAS for all the rust drives in another room. I wouldn't want to go back to the days of using my PC for that.
And if you must have tons of SATA, just buy a SAS card. They're cheap and flexible, and each SAS port on the card fans out to 4 SATA ports using a cheap breakout cable.
Since the 100 series chipsets, the lanes for the SATA ports are shared with other things, so you aren't getting dedicated ports like you used to. You have to disable other features if you want to use all the SATA ports. With my current Z390 board, I can't use more than 2 SATA ports without compromising on other features, and I can't use all 6 SATA ports unless I disable both M.2 slots. Since they're sharing lanes, there's little cost and little reason to not have them, and that will probably continue into the future.
Things have changed in the last couple of generations. My Z690 board has 6 SATA ports and 4 PCIe 4.0 x4 M.2 slots. The only thing shared is SATA between one SATA port and one of the M.2 slots. As long as you don't need an M.2 SATA drive, you can run 4 NVMe drives and 6 SATA devices simultaneously.
Nothing has changed. The chipset's IO lanes can be either SATA or PCIe. The reason you have nothing shared is that they saved money on switches; you don't get a choice in how those lanes are used. This has been the case since Rocket Lake: the CPU has additional PCIe lanes, so you don't need to share as much anymore, and the board is already full. There's no space for more M.2, except maybe on the backside.
I am pretty sure Intel has had 8 SATA ports since Z77, but board manufacturers routed 2 SATA ports to M.2 SATA. On Z87 and Z97, 8 SATA ports with 2 ports shared for M.2 SATA was totally a thing.
The silicon has had 8 ports for a long time, but the maximum usable on the consumer Z-series chipsets was 6; eight was workstation-only. If you used the shared SATA on M.2, then you had fewer than six SATA ports usable.
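To make the lane-sharing point concrete, here's a toy model of flexible IO, where each high-speed lane can back exactly one function; the lane map is invented for illustration and doesn't correspond to any specific chipset (real datasheets publish a similar, but different, table):

```python
# Hypothetical flexible-IO map: each lane offers a set of mutually
# exclusive functions. Enabling one consumes the lane for the others.
LANE_OPTIONS = {
    "HSIO_1": {"SATA_0", "M2_A_PCIe"},
    "HSIO_2": {"SATA_1", "M2_A_PCIe"},
    "HSIO_3": {"SATA_2", "M2_B_PCIe"},
    "HSIO_4": {"SATA_3", "M2_B_PCIe"},
}

def config_fits(wanted: set) -> bool:
    """Greedy check: can every wanted function get its own lane?
    (Good enough for this toy example, not a full matching algorithm.)"""
    free = dict(LANE_OPTIONS)
    for func in wanted:
        lane = next((name for name, opts in free.items() if func in opts), None)
        if lane is None:
            return False
        free.pop(lane)  # the lane is now consumed by this function
    return True

print(config_fits({"SATA_0", "SATA_1", "SATA_2", "SATA_3"}))  # True
print(config_fits({"SATA_0", "SATA_1", "M2_A_PCIe"}))         # False: M2_A conflicts with those SATA ports
```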
SATA SSD sales continue to remain strong, and they are much more economical per TB for large file storage than M.2 drives (a 2 TB SATA drive is around $170 now). And if you have a RAID array with 3+ drives, speeds begin to encroach on NVMe territory; a RAID 5 array with 4 SATA III drives will hit around 1.6 GB/s read speeds.
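The arithmetic behind that 1.6 GB/s figure, as a quick sketch; this uses a common rule of thumb and a typical ~550 MB/s SATA SSD sequential-read rating, so real results will vary with the controller and workload:

```python
def raid5_seq_read_gbs(drives: int, per_drive_mbs: float = 550.0) -> float:
    """Rule-of-thumb RAID 5 sequential read: roughly (n - 1) drives' worth
    of throughput, since one drive's worth of each stripe is parity."""
    return (drives - 1) * per_drive_mbs / 1000.0

print(raid5_seq_read_gbs(4))  # ~1.65 GB/s, in line with the figure above
```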
Man, these Z-chipset boards keep going up in price. I'm curious what eventual H670 chipset boards will look like. If they've got everything you need without all the flashy bits, I'll probably shoot for one of those.
I think it's because PCIe 5.0 and DDR5 require better-quality traces and shielding to maintain signal integrity. AM4 X570 boards brought quite a sticker shock when they came out too; that price increase was thanks to PCIe 4.0 requiring higher-quality traces, and it was also reflected in B550 boards, so you can expect the same here.
Better-quality PCBs and/or a lot more signal boosters. A few years ago there was speculation that PCIe 5.0 might be too expensive to implement to show up on consumer boards at all. I'll be really interested to see if, and how far, future generations extend it to the other locations, because the CPU-to-slot-1 run (and slot 1 to slots 3/4 when bifurcated) are the shortest traces.
Looks: DARK #1, XTREME #2, APEX #3. This time ASUS really ruined their design; it's an insane gamer-boy trash look with far too much bling. Their Z590 Apex was superb; shame.
Features: only MSI and ASRock are giving TB4 ports on their top-end range (Ace / Taichi). Nobody else does at that tier; Gigabyte only gives them on the Xtreme, and likewise ASUS only on the Extreme. EVGA has them as usual, and 10G LAN is still not common.
Also, none of the boards have PLX chips. Z690 has a lot of bandwidth, especially those PCIe 5.0 lanes; with more circuitry we could have had four full x8 PCIe slots. All they give is the basic x16 GPU slot like always and a PCIe NVMe slot from the CPU. Utter shame.
Plus, only MSI and ASRock are giving PCIe 4.0 x4 lanes on the PCIe slots; nobody else is doing it, which is even more of a bummer. I think the reasoning is probably that since the board has a ton of I/O on the NVMe side, they don't bother. But boy, that DARK has horrible lane routing; what is that? A single PCIe 5.0 slot, and not even a reinforced second slot; look at Gigabyte, which gives full steel armor, as do many others. Too much greed. EVGA has also had POST-code LED issues since Z490, can you believe it? Z490, Z590, and even their rip-off X570 have LED failures; just check their forum. ASUS and MSI, on the other hand, use that garbage ALC4080 series again; it has horrible issues: pop-ins, nonsensical EMI problems, and insane driver problems all over the place. ASRock and Gigabyte use the ALC1200, which is much better, but this time Gigabyte is saving money on the audio I/O, with no 7.1 and other options missing, not even on the Master. Utter shame from all these companies. The ASUS ROG Z690 series has Noctua clearance issues; they never had that before, but now they do. I bet it's all for that extra crappy bling.
Finally, the price. EVGA will rip people off; their X570 is too damn expensive at $700+ with tax. That's a lot, and the USB I/O on it is pathetic at just 6 ports. Also, unlike their Z590, they cut NVMe down to just 2 slots, and even the bifurcation on the X570 board will cut the GPU lanes. The DARK will be at the same cost as the APEX, over $800. The MSI ACE and Gigabyte Master / Tachyon might honestly be better options (last time, with Z590, it had poor DIMM tuning; no 11900K could hit 3866 MHz in Gear 1; I hope they improved here). Then there's the VRM cooling: Z590 had two boards with that awfulness, the Z590 DARK and the OC Formula. I don't know why they want to ruin a board with such crappy decisions, adding failure-prone components for no reason. Lastly, I hope EVGA isn't going with doubler BS on the DARK; even ASUS dropped doublers from the Z590 Apex.
Now the last aspect: is this platform worth it? Nope. From my analysis, the only reason to buy Alder Lake is if you're stuck on a 2600K, or at a stretch on 7th gen, since that was still the 4-core era, whereas the 300-series chipsets at least give you the 9900K option. And even then, only if you're starving for an uplift and more I/O. All the folks on X470, X570, B550, or Z390/Z490/Z590 don't need this at all; you already know that as a fact by now.
You get improved DMI, sure, and more NVMe without cutting GPU lanes; yep, fantastic. But do you want to play into Intel's LGA 1151 debacle? The NBR forum and other places had people modifying the MLCC capacitors on the back of the socket to make a Z170 run a 9700K; that's from 6th to 9th gen, by the way. That's how Intel is, and you are paying a top premium here. DDR5 is worthless today; only a few SMT-heavy production workloads, and a limited set at that, will show gains. So DDR4 is the sane choice, right? Yes, but as I said, the socket longevity is unknown; it's up in the air with Intel. LGA 1700 physically supports 1800 pins, so Raptor Lake might be a direct drop-in, but what about after that? For Meteor Lake and Lunar Lake they're hinting at even more of those crappy E-cores, but also a new P-core. So you will end up relegated either to a DDR4-locked system or to a premature, expensive DDR5 system.
If anyone is waiting for the DDR5 era, they should wait longer. Once Zen 4 comes with AM5, we will see how DDR5 shapes up. I didn't even mention the cost because it's already known. And buying a DDR4 board and pairing it with an i7-12700K is a bad move. So that's how I feel. On top of that, the E-cores don't do anything; it's the P-cores that are carrying the Alder Lake processors.
Finally, the OC. The 12900K is too hot; the density means very concentrated heat, and it pulls a ton of current. The proof: 200-240 W on a 10900K lands around 75C, while on ADL it will be more than 80C, and once you hit 300 W it shoots to 100C easily, hotter than Rocket Lake. And Intel is binning very tightly: a 5.2 GHz all-core OC is about the max. You can push higher, but MSI's latest 360 mm AIO is peaking at 90C when you run a heavy load (and no, that is not gaming). That's the state of 10nm ESF, which is why Intel put only 8 P-cores on it. The 12700K may have a bit more headroom, but it just loses the E-cores and gets lower P-core bins, so you won't gain much. An i5-12600K, sure, but that's again a mid-range CPU, and running one on an Apex with an AIO just because you can clock it high, and bin-dependent at that, is even more pointless.
Anyway, that's all for now; hope this helps someone.
I forgot the SATA ports. Intel's Z690 supports 8x SATA, but nobody except EVGA is offering them all; what the heck? I thought Intel was finally adding more, and on top of that they're adding some RAID features too (AMD RAID, per Level1Techs, was janky). Now Intel gives more SATA and more NVMe, and the OEMs simply drop SATA. Unfortunate. Plus, look how they're gimping the x4-length slot with Gen 3 instead of Gen 4; an HBA/SAS expansion card is the way to add ports there without cutting GPU lanes, but they should have offered it natively on such an I/O beast of a chipset.
If you buy into the Intel ecosystem, you already know that you're only going to get 1 or 2 CPU generations out of a motherboard. Even AMD, who have been using AM4 for 5 generations now, don't have backward compatibility all the way to the beginning of the socket; they don't support sticking a 5000-series CPU in an X370 motherboard, for example.
Throw everything out and stick with what makes Intel Intel, lol. X370? Hmm, that was treated as a pile of junk because nobody trusted AMD yet, and even so it can run an R9 3900X. Can any Z170 officially run an 8700K or 9900K? Nope, so there goes the "Ryzen 5000 isn't supported, but neither is Intel's" line; pathetic. Same pins on LGA 115x, and they still shafted everyone; the proof is the notebooks where people modded the socket and BIOS and made newer chips work.
How long will people excuse the BS that Intel pulls? Well, we have dumb bovine consumers who just jump onto shiny new toys like Alder Lake, which serves no purpose except to create hype for Intel.
I've only ever used the AMD platform 3 times in my life. K7 Thunderbird, K8 Athlon64 x2 on s939, and Ryzen 5000 on AM4. But all three times, I haven't/don't expect to upgrade the CPU on the same motherboard. It's just not worth the hassle for marginal clock gains or to bump one generation, if I even get that. So I don't view what Intel does as evil. They start with a cleaner slate than if they carry the baggage of socket compatibility. Intel and AMD carry enough x86 baggage as it is. I can feel the weight of the aging AM4. Limited socket current. Limited chipset aggregate throughput. A fan just to support PCIe 4 on secondary slots. Stuff that could be easily fixed by adding more socket pins relative to 5 years ago.
Certainly, there are tradeoffs, keeping a socket; but, as Mr. Tuvok would say, "Ryzen, you are an unending source of astonishment." There was a time when sockets even took CPUs from different manufacturers. I remember my Socket 7 motherboard, though I never tried it, could take a K5 and some Cyrix CPUs as well. Those 5x something, something. How things have changed.
A short-lived socket can be a pain in the behind too. I was one of those unlucky folk who ended up with Socket 754 and missed out on dual-channel DDR and a long upgrade path. In any case, that computer went kaput after four years.
Overclocking is for employees of motherboard companies.
ECC RAM support should have been a standard feature from the beginning. Apple offered it on the Lisa in ‘83 and consumer computing has gone backward since.
Doublers, though... aren’t a bad thing as long as they’re implemented well — as I understand it. Better to have a good doubler implementation than a weak individual phase system. The main thing is to have a board meet the minimum spec for reliable (i.e. not overheating and/or failing) long-term support of its supported CPUs. Anything beyond that is unnecessary.
Weak phases with a mediocre/poor regulator aren't necessarily better than ‘marketing phases’ via the use of doublers. That's the case when the doublers are used correctly.
There are a lot of shenanigans, though — like not even utilizing the doubler fully but counting it as the doubling of phases. I also recall that one of the big tricks was putting extra chokes on the board to make it look like there are more phases.
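A rough sketch of why the effective phase count (real or doubled) matters for per-stage stress; the current and phase numbers below are arbitrary examples, not measurements of any particular board:

```python
def per_phase_current(total_amps: float, phases: int) -> float:
    """Current each power stage carries if the load is shared evenly."""
    return total_amps / phases

cpu_amps = 250.0  # arbitrary example load
for phases in (8, 16):
    amps = per_phase_current(cpu_amps, phases)
    print(f"{phases} effective phases -> ~{amps:.0f} A per stage")

# Doublers raise the effective phase count (less current and heat per
# stage) but don't add control loops, so transient response hinges on
# how well the doubling is implemented; hence "implemented well" above.
```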
In the long term, I think ATX12VO will work out cheaper. An ATX12VO PSU will be cheaper than a comparable-quality ATX PSU, and the BoM for the 12 V-to-5 V and 12 V-to-3.3 V converters would come down if mobo makers decide to stick to a single, standardized design.
The way things are looking, electricity prices are unlikely to go down and will only continue to rise.
If mobo makers can stick to one design why can't PSU makers? They already conform to ATX.
ATX12VO increases costs for piecemeal upgraders because of the simple observation that PSUs outlive motherboards. The question is whether the power savings are worth it. For prebuilts, they're comparing power savings against zero net component cost, so 12VO is already the norm there.
Can I ask why? What does ATX12VO provide to a consumer?
It doesn't make your mobo cheaper, it doesn't make your mobo less complicated, it doesn't make your system run cooler, it doesn't make ADL consume less power; it doesn't even make any sense.
ATX12VO was created because of the trash policies set by the policing state of California over some nonsensical rubbish. Servers and data centers can get away with modular high-density PSUs because of a fully standardized setup, and they also get 3M liquid cooling. This is the consumer market, and here we have people wishing for a step backwards in technology.
A lot of people had the same sentiment about EU RoHS restrictions, and yet, it was implemented worldwide.
With that attitude, the same can be said about energy star, and 80plus certifications. It adds cost to the product, yet it offers not a thing to the consumer.
Not everything is about you. We need to do everything we can to cut down power consumption, and ATX12VO standardization across the entire industry is very low hanging fruit.
Stop being so selfish, there's literally only one habitable planet we have right now.
80 Plus offered plenty to consumers. Less power use means quieter PSUs.
The knock on 80 Plus was unrealistically easy testing. Despite that, it helped raise the efficiency of PSUs. Along with better efficiency, ripple, hold-up time, voltage consistency, and other factors improved — as enthusiasts began to pay more attention to PSU quality.
I don’t doubt that 80 Plus also helped a lot of non-enthusiasts/amateurs by keeping them away from ultra-cheap PSUs that catch fire. Having a high-profile certification that those PSUs can’t reach helped to steer those customers away.
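To put the "less power use means quieter PSUs" point in numbers, here's a small illustration; the efficiency figures are merely representative of a lower vs. higher 80 Plus tier at a given load, not measurements:

```python
def wall_power(dc_load_w: float, efficiency: float) -> float:
    """Power drawn at the wall for a given DC load and PSU efficiency."""
    return dc_load_w / efficiency

load = 400.0  # example DC load in watts
for label, eff in (("~80% efficient", 0.80), ("~92% efficient", 0.92)):
    wall = wall_power(load, eff)
    waste = wall - load  # lost as heat inside the PSU, which the fan must remove
    print(f"{label}: {wall:.0f} W at the wall, ~{waste:.0f} W of heat")
```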
That white metal trim running tight around the molex power connector on the ASUS ROG Maximus Z690 Formula must make it an absolute nightmare to plug/unplug the main power cable to the board.
Correct. And whilst we are correcting that sentence: "upheaved"???? This first page really needs to be read by an AnandTech editor. What's that? They don't have any editors? :-(
I caught that, as well. Even the word "upheaved" is itself somewhat noteworthy. Plenty of better alternatives: "upgraded", "widened", "expanded", "increased", "enlarged", etc.
While "upheaved" is likely an error, it's not far off from the words of today. Unfortunately, the English language is on a downgrade, and it's just going to get worse and worse. The language's genius is not tuned to the over-economical forms we're finding today; and a lot of it seems to be coming from tech. Upthis, upthat. My favourite, though, is leverage. A big, scary word that companies are fond of, and which escaped its programming, game development roots. Soon, we'll be leveraging the kettle to make tea. How about using?
I don't mind "leverage", so long as it's an apt analogy. I think its modern roots might've been in the world of finance, where a "leveraged buyout" is one where a small amount of assets are used as collateral for taking on a greater amount of debt to fund the bulk of the buyout price.
IMO, one of the more annoying abuses is substitution of "learning" for "a lesson learned". People talking about "learnings" sound to me like business-school idiots, who seem to have invented their own jargon out of jealousy of real professions.
Yes, leverage works well when tied to the proper sense, but I see it being "leveraged" more and more as a high-flown synonym of use, much the same way that highly intellectual folk of an earlier era found that "utilise" was shinier than the plain, homely "use." In short, substituting cardboard for a brick.
Watch out, talking about the esteemed language of business school. Mr. B. Swan might be an avid reader of Anandtech.
Do you wear a suit and tie ever? Do you think men who do look more respectable than those wearing ‘casual’ clothes?
Prestige dialects are about maintaining one’s social status — a barrier for competition. They’re not primarily about concision.
Similarly, overly-elaborate clothing like penguin suits with ties aren’t about keeping one’s body suitably regulated when it comes to temperature, protected from sun damage, and protecting others from the horrors of nudity. Overly-elaborate clothing is about maintaining social status.
I agree with your argument, Oxford Guy, and no, I don't use a suit and tie. Having said that, you've caught me on a weak point, because while I don't use them, I think certain styles of the past were fantastic. Just think of James Stewart or Cary Grant. Or, on the side of the ladies, Ingrid Bergman, Grace Kelly, or any classic actress really. Truly, they give the dressers of today a run for their money.
‘Unfortunately, the English language is on a downgrade, and it's just going to get worse and worse.’
No and yes.
Languages are always changing. The worst language is one that remains static, increasingly less able to meet the needs of its speakers.
The prestige dialect of a language is arbitrary and also changing. While a high school textbook from 1920 may impress with its diction there was a lot less competing for energy/time then and ignorance was hardly less. It does, though, make for the illusion that high school students have become less intelligent. IQ is actually up due to, for instance, reductions in lead exposure and improved prenatal nutrition.
Neologisms often enrich languages rather than degrade them (not always). Even when the new terminology is redundant (which it frequently is) — speakers tend to simply abandon the older words/phrases. English is absolutely rife with abandoned words and phrases. Poets trot them out to impress but even they usually don’t bother with what dictionaries label ‘archaic’.
One thing that seems to be increasing in English is a reduction in working vocabulary, due to globalization. Being monolingual has drawbacks but having to learn 5 ways to talk about a cat (to express the same idea) has a price. This ‘global speak’ is one of the reasons listening to tennis players is often painful. It’s vacuous corporatism plus a limited vocabulary (one not merely limited by a lack of humanities education). However, respect for the humanities continues to decline.
Working vocabulary is a bit like the RISC vs. CISC debate. It takes more time/energy to develop a large working vocabulary (CISC instructions) — and it's more difficult to keep it all in one's working vocabulary. The benefit is that it takes fewer words to express an idea. We're generally trained to see the use of a ‘more accurate’ special word as the mark of intellect, versus using a larger number of common words to get the same idea across. If the same amount of energy is involved, then it's arbitrary to prefer one over the other. The reduction in the attractiveness of monolingualism should lead to a reduction in the prestigiousness of ‘50-cent words’.
Euphemism is also perhaps an increasing problem. Orwell wrote an essay about it in the '40s, so it's not new. However, buzzword labels and euphemism seem to be growing in importance. Then again, calling someone a communist or homosexual was once enough to shut down all rational discourse, and prior to that there were witches, homosexuals, and heretics. So perhaps the overall level of this hasn't changed much.
Languages other than English and Chinese are under threat in terms of degradation, though, from loss of speakers and usage. In Salzburg, university physics is now taught in English.
I wouldn’t worry, at all, about the degradation of English and Chinese. I’d be more concerned about the ability to use the languages in the face of increasing censorship, censorship AI (the growing tech power divide) increasingly facilitates. Not being able to speak a language fully, due to that, is a path to greater diminishment.
> One thing that seems to be increasing in English is a reduction in working vocabulary
In terms of importance, I've found clarity of expression to be second only to clarity of thought, in software design. One needs to be clear about semantics not only in one's own mind, but also capable of clearly and concisely expressing them in the form of names and documentation.
So often, bugs are the consequence of confusion. Either on the part of the original author or by maintainers or API users. That's why clear conception of ideas must be paired with clear communication, if an API is to be correctly implemented, used, and maintained.
This point of view has been shaped by decades of experience. I can often tell the difference between someone muddling concepts together in their head vs. simply lacking the vocabulary to express the finer distinctions.
> This ‘global speak’ is one of the reasons listening to tennis players is often painful.
Probably true of pro athletes, in most sports. They're selected for their aptitude on the court or field, and honing those skills is where they spend the bulk of their time & energy. It doesn't help that pro athletes are increasingly deferring college to extend the potential length of their athletic careers.
Quite true. I'm no expert at programming, more of a hobbyist, but I've found that thinking about something beforehand often leads to better code. Writing it "on-the-fly" usually results in a mess, which can persist. I'd like to add to my comment on English that there's an analogy in programming languages. Just like updating English, there's been a constant trend to come up with new languages that address "weaknesses" in C and C++.
Quite right that languages are always changing, but the change may be for the worse as well as better. Despite being a lover of all that is old, I feel that English has actually gone nearer to its roots in the past two decades. People appear to be writing plain, concise English, comparable to the simplicity of Elizabethan prose, I would contend.
People say that a language has to be brought up to date to express new ideas: that may be so in the fields of science and technology, but certainly not in human nature and relations. When I look at the 18th-century writers, it's evident that our distinctions have been blurred and watered down. The way they expressed life was precise, but unfortunately more Latinate, compared with our crude analogues of today. Apart from science and technology, that language isn't lacking at all to express present life (and was much more CISC, to use your example). In fact, there are distinctions that are seemingly lost; and lacking the language, our view on those points is cruder or non-existent. So much for increasing civilisation. Going further back, the Elizabethan English of that fellow from the Globe, or Bacon in prose, if one clears away the archaic usages, all the thous and the eths, is about as "modern" as English can get. I believe there is a true centre, "that mode of phraseology so analogous to the principles of a language," which English has sometimes strayed away from (particularly the 17th and 19th centuries), and I'd argue that the 20th and 21st centuries have seen a return to it in many ways. Unfortunately, there are some frightfully ugly inventions as well, that any true lover of good English will wince when looking at. Selfie, anyone? Hashtags? Upskilling the staff? There are many others but memory, as usual, is failing me on the spot.
Euphemism is a big problem (and I believe you're referring to "Politics and the English Language"), simply because it goes contrary to truth and has an effect on the mind, where the false, blurry idea becomes the thing itself. At its worst, people are able to commit criminal or unjust acts because they're sheltered beneath a euphemistic, polite phraseology. And it spills over into censorship, too, where only the warm, fuzzy forms are acceptable. Again, the importance of being simple, direct, and exact in one's language and "telling it like it is."
I can’t be sure but I believe Orwell critiqued heavy use of Latin derivatives along with passive voice and other strategies as a method for being less clear — a form of euphemism/doublespeak. I think Orwell would have responded to your crudeness point with the opposite point of view — that simplicity and concision are superior. Personally, I think irregularity in grammar and English’s terrible spelling (which can be easily fixed) are vestiges of the past that are ‘degradation’ inefficiencies.
‘but the change may be for the worse as well as better.’
The only changes I can think of that would be for the worse would be having a language lose speakers (a dying language) and a language declining in expressiveness from increasing AI-based censorship. Language change generally favors increasing efficiency, although substituting half-pidgin ‘global speak’ due to polylingualism being more important is also an issue.
All human (non-synthetic/artificial) languages are sorely in need of more change than their speakers are willing to allow in the short term. That’s the main problem — the opposite of degradation from change. English spelling, for instance, is utterly preposterous and one linguist’s reform scheme is very easy to get used to. Stubborn nostalgia, though, is extremely difficult to overcome in the short term. Gender in languages like German and French is also very stupid. It’s a massive waste of energy to ascribe sexual characteristics to clouds, trees, and soup.
It should also be noted that English and Chinese are languages that are strongly characterized by density of meaning per syllable. That’s the opposite of Japanese. It uses a lot of syllables from a small palette of sounds to get meaning across — which calls for rapidity of speech. This is also like the RISC vs. CISC dichotomy. (On the flip side, Japanese has the most complex writing system.)
The demand of English to pack as much meaning as one can into a syllable seems that it would favor short ‘simple’ words. So, calls to use lengthy ‘ornate’ Latin derivatives may miss the mark. Lengthy words are more attractive in certain other languages. (There is jargon for all of these things but I’m trying to minimize that here.)
Perhaps I contradicted myself or wasn't clear, but I am not calling for Latinate English. Not at all. On the contrary, I am a proponent of plain and simple "Saxon" English, and repudiate the Latin style with a passion. I am going to write "get the job done," never "accomplish the task," and use and buy, instead of utilise and purchase. I always try to write using the simplest words to get the sense across. And that extends to syntax too, condensing a sentence to its shortest form. At the end of the day, it comes down to clear thinking. Do that, and one's style becomes more lucid.
18th-century prose was elegant, but its chief defect was overly Latinate words and sentences (exemplified by Dr. Johnson). I am actually praising 20th and 21st century prose---can't believe I'm doing that---when I say it's a return to Elizabethan plainness, to my eyes at any rate. If ever there was a golden age of English, it's undoubtedly that of the late 16th and early 17th centuries.
Being simple doesn't mean being crude or vulgar. One can be elegant as well as simple---after all, true beauty, as the ladies will point out, is simplicity. I feel that while there's a return to plainness in our times, there's been a loss of decorum and good taste. Many of today's made-up words are ugly or distasteful, and I feel there's a twisting of the language away from its grain. Upskill? Even clickbait titles are a symptom of something amiss. Could it just be bad taste, or a reflection of the mind of the age?
You are calling for reform to the language. Here our views depart; for I am more of a conservative and believe in preserving English in all its messiness, spelling and all. One of the beauties of language is that it's an irregular growth, much like a tree, lovely as a whole but messy in detail. (Same goes for programming: I'd take messy C++ any day, instead of the new, slick stuff of the present.)
A contradiction again, where I'm talking about preservation but criticising current English? Not really. I'd say: there's a model of good style already in the language, shaped by some of the greatest writers that ever lived. At its best, it's plain, simple, and elegant, and most of all, easy to understand. Orwell would be one example. There are many others.
> I ... believe in preserving English in all its messiness, spelling and all.
Consider that its messiness isn't free. English speakers, especially those coming to it later in life, waste significant amounts of time, energy, and mental capacity learning some of its unnecessary complexity. Without it, they could be putting those resources towards improving their overall mastery of the language.
As English speakers, we derive numerous and diverse benefits from more people being able to speak it, and from them being able to do so with better aptitude. It's in our interest to lessen the learning curve, particularly given that it's eroding anyhow -- and in ways that have more detrimental consequences.
I agree there are a lot of silly points in English that hinder learning. And yes, we are apt to forget that so many people speaking it makes life easier for us. How many more centuries this will go on for, we can only wonder.
On the other hand, Oxford Guy's comment about globalisation is also true. While asymmetric communication is causing simplification, some beautiful usages are lost along the way. The same happens between American and non-American speakers. Sadly, whom is dying, as well as the first-person, colourless "should," and others. Many a time, one possesses a usage that one feels is idiomatic but is forced to use another because of misunderstanding. And for my part, personally, there appears to be greater misunderstanding between cross-country, native English speakers, than between a native and non-native one. I find it easier speaking with people who are using English as a second language; but so often there's a barrier when talking with a native speaker from another country (or even different culture).
In any case, I'm often disappointed with English, and see features in other languages that are attractive, particularly Afrikaans and French. When I hear Afrikaans in my country, with the classic inflexion, it has a magical effect on me, and I almost sense something that English lost earlier in its history. And then, like most languages, the verb's going to the end is beautiful, whereas we aren't allowed to do that outside of poetry. Taking Afrikaans again, it's astonishing how direct and clear a speaker is when talking in English, whereas we English speakers are lost in a maze of many, empty words. So, increasing CISC expressiveness may not be all it's cut out to be. After all, the stuff of life is simple and needs only a few words for expression. It's only idle sophistication that comes up with imaginary nonsense. Let our words be few and choice, and our actions many and noble! Silent cinema shows us that words are empty.
> Oxford Guy's comment about globalisation is also true. While asymmetric communication is causing simplification, some beautiful usages are lost along the way.
That's basically my point. If those invested in the language don't make the easier and more painless simplifications, the new speakers are going to make much more detrimental ones.
> the stuff of life is simple and needs only a few words for expression.
More like a fractal, I think. From a distance, it seems relatively simple. Yet, the closer you look, the more complexity you see.
> It's only idle sophistication that comes up with imaginary nonsense.
If your needs and thoughts are simple, then a simple language will suffice. Language is a conceptual tool, as much as a means of communication. Comparative language studies have shown people have difficulty grasping concepts for which they lack words.
I prefer to inhabit a world of richness, complexity, and big ideas. I'm grateful not to live in a sparse realm, where anything beyond simplicity of language and simplicity of thought would seem excessive or burdensome.
Good points (and nice one about the fractal of life). Lack of words can lead to poverty of thought. Take a look at older writers, and one realises we've lost many distinctions, expressed admirably. Or worse: similar concepts have been born again under ugly language. Or delete democracy. Then we ask, what, what's that? I suppose there's an ambivalence in me regarding simple vs. complex language---and that's where the apparent contradiction is coming from. Part of me longs for the older speech, and part of me for simplicity. The best model, I think, steers a course between these two whirlpools. And I think people would begin to think more soundly if the bias were towards simplicity. Let one's treasure be buried in the garden and go abroad in plain clothing.
I don't like it, but change is inevitable, especially when a language comes into contact with secondary speakers. In the Middle English era, when it was Saxon against Norman, English lost most of its cases, was simplified, and word order became critical. Doubtless, the same process will happen again, and likely is already happening. Let's keep our fingers crossed that hashtag language doesn't take over. Then we'll get Postmodern English.
> Language change generally favors increasing efficiency
Perhaps, but dialect formation often emphasizes or devises devices to distinguish its speakers from neighbors, outsiders, or newcomers. Here, we see the goals of language in tension with the goals of its speakers. Perhaps you're alluding to that, at the start of the following paragraph.
> English spelling, for instance, is utterly preposterous
I don't mind eliminating exceptions and irregularities from English, so long as nothing substantial is lost in the process.
> Gender in languages like German and French is also very stupid.
Were it expunged, maybe people wouldn't try to import gendering of asexual objects into English, such as the way some refer to ships as female.
That caught my eye, too. I bought an Asus Hero-branded board for my current system last year at approximately $200 USD. I suspect Asus is shifting their marketspeak because the word "Maximus" (used for the z690 board but not mine) usually applies to their most expensive boards.
This. $2000 for a consumer grade motherboard? WTF are they smoking?
Also, I'm pretty sure ASUS will be releasing some TUF Z690s at some point, probably at a lower price point than the primes. My experience with the TUF series has been very positive for the price.
TUF is historically just a bit more expensive than Prime. They already have a TUF DDR4 version; I ordered the WiFi one for $290 the other day. If you're worried about price, DDR5 is the first mistake.
I don't get the "DP IN" ports on the ASUS ProArt Z690 Creator WIFI. I see the author just wrote what was on the ASUS website, but that doesn't really explain anything. Are they passthrough to the Thunderbolt out ports? Is there a capture card built into this motherboard? I'm very confused by the labeling here.
Those are passthrough to the Thunderbolt port. Add-in Thunderbolt cards work the same way. You slot in your discrete GPU, send the output from both DP ports to the Thunderbolt controller, and then use Thunderbolt to output to a Thunderbolt monitor or hub.
And some have tried to dismiss the apparent fact that inflation is a significant cause. Supply constraints don't explain all of it, nor does an increase in sedentary entertainment due to Covid.
You've got your shrinkflation and your price inflation. Both are occurring.
Can ASUS Z690 Maximus Extreme run PCIe 5.0 x16 GPU and PCIe 5.0 x4 M.2 SSD concurrently? Will the GPU (PCIe 5.0 x16 slot 1) drop to PCIe 5.0 x8 instead when SSD is installed on the PCIe 5.0 x4 (M2 slot)?
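I don't know how that particular board wires its slots, but the general trade-off can be sketched as follows (a hypothetical 16 + 4 CPU lane budget like Alder Lake's; the actual sharing rules are whatever the board manual says):

```python
# Alder Lake exposes a Gen 5 x16 block plus a Gen 4 x4 block from the CPU.
# Whether a "Gen 5 M.2" slot borrows from the x16 block is a board-wiring
# decision, so this just enumerates the two common possibilities.
CPU_GEN5_LANES = 16
CPU_GEN4_LANES = 4

def gpu_lanes(m2_uses_gen5_block: bool) -> int:
    """Lanes left for the primary slot when a CPU-attached Gen 5 M.2 is populated."""
    return CPU_GEN5_LANES - 4 if m2_uses_gen5_block else CPU_GEN5_LANES

print(gpu_lanes(True))   # 12 -> in practice the slot runs x8, since x12 isn't a valid width
print(gpu_lanes(False))  # 16 -> the GPU keeps the full slot; the M.2 sits on the Gen 4 x4 block
```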
MSI does show the Audio Codec... just not on the simplified summary. You guys have to click the "Detail" tab on the Specifications page for a given board. All the boards show which audio they're using.
At the time of writing, even the detail sections of the specifications didn't show them. On top of this, all of the information we received prior to launch mentioned no specific HD audio codecs. I will update this though :)
For RAID, obviously. That borderline makes sense. If you're running a 4 or 5-drive RAID of SSDs in a consumer rig, it's more cost-effective and still plenty fast to use SATA. And I think it's not unreasonable to expect anyone using M.2 drives to put them in a PCIe carrier card, which will have better cooling potential anyhow.
You've included MSI's ITX variant in the list (MEG Z690I Unify) but I can't seem to find it on their website. Although if you google you'll find a few mentions on some shops, without pics. Is this because MSI is still working on the board, or?
I was really disappointed not to see more discussion of costs and why the price distribution of these boards tends to skew so high.
However, I was most surprised to see how much lower some of the entry-level models are priced. Do we think these will be produced in sufficient volume, or are they primarily there as a means of upselling would-be buyers who, out of frustration at always seeing them out of stock, eventually end up buying one of the more expensive models?
MSI Pro Z690-A WIFI, MSI Pro Z690-A and many more have the cheaper Realtek ALC897 Codec, the audio table is not accurate and it says Z490 instead of Z690.
WTF is with the PCIe 3.0 slots? I'm looking at the Gigabyte Aorus Master: it has 10 GbE onboard, great, but then the other two PCIe slots are PCIe 3.0. So confused.
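One likely reason is the shared uplink: everything hanging off the chipset funnels through DMI, so faster secondary slots mostly buy you more oversubscription. A rough budget sketch (approximate numbers; Z690's DMI 4.0 x8 uplink is roughly equivalent to a PCIe 4.0 x8 link, and the "USB / SATA" allowance is an arbitrary placeholder):

```python
# Everything attached to the chipset shares one uplink to the CPU.
DMI_GBS = 8 * 1.97  # DMI 4.0 x8, roughly a PCIe 4.0 x8 link (~15.8 GB/s)

downstream_peaks_gbs = {
    "10 GbE NIC":                10 / 8,    # ~1.25 GB/s
    "chipset M.2 #1 (Gen 4 x4)": 4 * 1.97,
    "chipset M.2 #2 (Gen 4 x4)": 4 * 1.97,
    "PCIe 3.0 x4 slot":          4 * 0.985,
    "USB / SATA / etc.":         2.0,       # arbitrary allowance
}

total = sum(downstream_peaks_gbs.values())
print(f"Sum of downstream peaks: ~{total:.1f} GB/s vs ~{DMI_GBS:.1f} GB/s of DMI")
# Peaks rarely coincide, so some oversubscription is normal; keeping the
# secondary slots at Gen 3 just caps how bad it can get.
```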
Maybe mainboards will start getting reshaped/redesigned (vertical M.2, backside slots/connectors?) instead of using retimers. (Does the chipset TDP include retimer power? And at PCIe 5.x speeds of roughly 4 GB/s per lane, about 2 lanes are already sufficient for the fastest consumer SSDs available in 2021, though cooling peripherals at those speeds is another question.)
2 RAM slots? I've seen this on a few of these new DDR5 boards. Most people here are talking about Thunderbolt 4 and USB4. Yes, these are very useful to a select group of people, yet they can be added via add-in cards, and then you can pay for the devices that take advantage of those technologies. But reducing RAM slots from 4 to 2? Wow. Yes, you can buy high-density RAM, but this forces you in that direction. What is wrong with 4x16 or 4x32 RAM kits? If you (like me) are interested in high-performance video work, then affordable and available RAM is a huge consideration. Is it just me?
Dahak - Tuesday, November 9, 2021 - link
Will there be a list of DDR4 only board as well?Ryan Smith - Wednesday, November 10, 2021 - link
Yes, we're also putting together a guide for DDR4 boards.jh20001 - Wednesday, December 1, 2021 - link
Any news on the DDR4 story? Would be nice to know what model is the best for performance/features in the eyes of others.Flunk - Tuesday, November 9, 2021 - link
Intel's actually released a compelling new chipset? I'm surprised to see DDR5 and PCIe 5 support, but USB 4 seems to be notably absent, despite there being no reason at all to omit it. Intel is finally one-upping AMD after a few years of playing #2.Exotica - Tuesday, November 9, 2021 - link
Thunderbolt4 is usb4 capable…CharonPDX - Tuesday, November 9, 2021 - link
Yep, the only thing USB4 adds over "USB 3.2 2x2" is Thunderbolt support. Therefore any Thunderbolt 4 device is automatically USB4. In fact, essentially any board with "Thunderbolt 3" along with USB 3.2 2x2 basically get "USB4" status for free.DigitalFreak - Tuesday, November 9, 2021 - link
USB 3.2 2x2 is 20 Gbps. USB 4 is 40 Gbps.12345 - Wednesday, November 10, 2021 - link
That's why they mentioned TB3. 40Gbps support is also optional for USB4.12345 - Wednesday, November 10, 2021 - link
DP 2.0 is mandatory for USB4 so TB3 support isn't good enough.KarlKastor - Wednesday, November 10, 2021 - link
That is only the name. The question is, with what speed you can run USB devices.Flying Aardvark - Wednesday, November 10, 2021 - link
"essentially any board with "Thunderbolt 3" along with USB 3.2 2x2 basically get "USB4" status for free."TB3 can run USB 4.0 devices, while USB 3.2 2x2 should be able to, it would be capped at its 20Gbit/sec and run over the backwards compatibility protocol for USB. USB4 ports can be either 20 or 40Gb.
I wouldn't want just USB 4.0 ports as Apple has, capped at 20Gbps. We'll probably see some of that on the AMD side. The best thing is just to have TB3 or TB4 to be sure you have fullspeed 40Gbps ports.
KarlKastor - Wednesday, November 10, 2021 - link
Just optional. If you have Thunderbolt and 10 Gbit USB, you can call it USB 4. See Apple.OFelix - Tuesday, November 9, 2021 - link
I agree. How come there are so few boards with USB4 or TB4 ?And how come the article doesn't mention them at all before it starts listing specific features of individual boards?
DigitalFreak - Tuesday, November 9, 2021 - link
The only way to get USB4 on a PC was by using Intel's Thunderbolt 4 chipset (or having it built into Tiger Lake). Since Thunderbolt is kind of a niche thing on desktop PCs, motherboard makers aren't interested int spending the money on Intel's TB4 chip except in high end or specialty boards. I would assume there will be some third-party USB4 chips coming soon.OFelix - Wednesday, November 10, 2021 - link
Thanks for your reply.So USB4 was built in to Tiger Lake but its not built in to Alder Lake / Z690????
That would explain somethings but not explain why on earth Intel would do that or AnandTech would not think this major regression worth mentioning!!!
The main reason I want to upgrade from my Sky Lake system (which i purchased to get built in USB3) is to get USB4/TB4.
KarlKastor - Wednesday, November 10, 2021 - link
TB is only integrated in the mobile Dies. The Desktop Die has no TB.Alistair - Tuesday, November 9, 2021 - link
DDR5 is not faster in almost every case, and there are no PCIe 5 devices (unlike when PCIe4 was launched at least you got video cards and storage immediately). Not really an advantage. Prices are too high also. Frankly I like PCIe 3.0 boards when they are under $100 USD.DigitalFreak - Tuesday, November 9, 2021 - link
It's the same thing that happened during the DDR3 to DDR4 transition. The first DDR4 products weren't really any faster than the best DDR3. Eventually DDR4 speeds got faster and left DDR3 behind. Same thing will happen with DDR4 to DDR5.Kevin G - Tuesday, November 9, 2021 - link
PCIe 4.0 support was significantly delayed on the desktop but it arrived in servers in 2017 (IBM Power9). AMD was planning on adopting PCIe 4.0 after Intel on the desktop but Intel's train wreck of their 10 nm manufacturing node derailed the chips what were going to add it (Ice Lake on desktop).I would expect both PCIe 5.0 graphics and storage by the end of 2022 on the desktop, though their benefits will be marginal outside of a few niches. (Single lane PCIe 5.0 chips for USB4/Thunderbolt 4 and 10 Gbit Ethernet vs. using four PCI 3.0 lanes are cost driven examples.)
Samus - Wednesday, November 10, 2021 - link
Kevin G - I agree, I think in a year there will be PCIe 5.0 devices, but the performance advantages, much like initial PCIe 4.0 devices (RTX 30xx, NVMe SSD's, etc) won't be there until 2023-2024, by which time this platform will already be replaced or significantly less expensive.I don't think Intel is looking to drive a lot of sales with this platform. Not many people are buying $3000 desktop PC's at the moment (and when you consider the platform alone is $500, with a $500 CPU on top of it, $3000 is pretty conservative considering most people buying something like this will want a $1000+ GPU, so that's $2000 for three components.)
Put in perspective, the last launch like this that had a lot of tech that you couldn't take advantage of right away was probably X58. PCIe 2.0 at a time no PCIe 2.0 products existed, and 36 lanes no less, left a ton of room to expand a platform that was already stacked to the gills with embedded tech. In fact it would be years before applications were fully optimized for the bandwidth offered by triple channel memory, let alone quad channel memory that Intel introduced on later HEDT platforms.
The difference though is X690 isn't even HEDT.
Kevin G - Wednesday, November 10, 2021 - link
The current LGA 1700 platform from Intel is set for three generations: Alder Lake, Raptor Lake, and Meteor Lake. In that time frame with future generations as well as AMD releasing AM5, featuring DDR5 and PCIe 5.0 support as well, will remove the current premium prices and reach parity with what we have today. Even in the short term, Intel supports DDR4 on Alder Lake and the various Alder Lake i3's and lower will arrive in less than 6 months further reducing prices.The big feature of PCIe 5.0 isn't the additional bandwidth but on the server side it'll bring CXL. For consumers, I wouldn't be surprised if AMD enables their PCIe 5.0 slot to switch over to an Infinity Fabric mode when paired with a Radeon graphics card. (AMD recently announced something similar with Epyc and CDNA2.) The benefits wouldn't be the bandwidth but rather lower latencies between devices and increased efficiencies due to coherency/direct memory access. The way things are aligning, this could arrive in late 2022.
X58 did have a few cards that took advantage of PCIe 2.0 right away (dual 10 Gbit networking on an eight lane PCIe 2.0 cards). Beyond that, multi-GPU was emphasized by both nVidia and AMD at the time giving some usage to those additional lanes. X58 wasn't the first DDR3 platform from Intel but it still carried a premium over DDR2 when it was first introduced.
Intel HEDT is currently dead, with the glimmer of hope that Sapphire Rapids brings it back. However, I'd expect any Sapphire Rapids HEDT platform to be half of what Intel is offering on the server side (dual chiplet instead of quad chiplet). AMD still has Zen 3/Zen 3D Threadrippers to launch to counter.
Bp_968 - Wednesday, November 10, 2021 - link
Ahh, X58. Good stuff. I upgraded from an X58 system in 2017 and it's still going strong as my neighbor's PC. It took 6+ cores and (more importantly) NVMe storage to dislodge its position as "good enough" while I spent the PC money on video cards, and I finally got an 8700K in 2017.
I expect something similar to happen for my next upgrade. A new memory tech, PCIe 5, and more and faster USB at the minimum, and at a lower price than these Z690 boards. I don't expect that until Zen 4 at the earliest, so late 2022 or early 2023. But even then I doubt CPU performance will be the killer feature that forces me to upgrade. It will be something related to IO or memory. Bleeding edge CPU performance just isn't that relevant to most gamers anymore. And the upgrade cycles have slowed down so much compared to the past. I remember when year or two old hardware was at risk of being unable to play the latest titles!
JasonMZW20 - Monday, November 15, 2021 - link
I don't see much advantage for PCIe 5.0 dGPUs, but PCIe 5.0 SSDs should offer improved sequential read/write speeds with more NAND layers. Unfortunately, on Z690, you'd need to use a PCIe 5.0 -> NVMe expansion card, as the motherboard NVMe slots are 4.0 only. I wonder how that will be handled. Some early Z690 slides listed support in the 5.0 expansion slots for Intel SSDs only.
Oof, these motherboard prices are pretty high. I struggle to justify anything over $400, especially in the mainstream market.
mode_13h - Monday, November 15, 2021 - link
> PCIe 5.0 SSDs should offer improved sequential read/write speeds with more NAND layers.
Why? What's the use case for > 8 GB/sec storage reads/writes, in a consumer desktop? And I'm not aware of a consumer SSD that's even maxed out PCIe 4.0 x4, BTW.
These are probably the reasons Intel didn't bother with it. However long it takes consumer GPUs to support PCIe 5.0, SSDs could take even longer. And with Raptor Lake coming in just <= 1 year, Intel will soon have another chance to re-evaluate whether a PCIe 5.0 M.2 slot makes any kind of sense.
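To put rough numbers on it, here's a quick back-of-the-envelope (per-direction link ceilings, 128b/130b encoding, protocol overhead ignored; real drives land well below these):
    # Per-direction ceiling of an x4 NVMe link (128b/130b encoding, protocol overhead ignored)
    def x4_ceiling_gbps(transfer_rate_gtps):
        return transfer_rate_gtps * (128 / 130) / 8 * 4   # GB/s

    print(round(x4_ceiling_gbps(16), 2))   # PCIe 4.0 x4 -> ~7.88 GB/s
    print(round(x4_ceiling_gbps(32), 2))   # PCIe 5.0 x4 -> ~15.75 GB/s
The fastest consumer Gen4 drives advertise roughly 7 GB/s sequential reads, so even the Gen4 ceiling isn't saturated yet.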
Samus - Tuesday, November 9, 2021 - link
Yeah, definitely too much for a "current" system - you can't use half the tech you are paying for because there is literally nothing on the market, and there likely won't be for a year. I mean, we are only at Gen2 PCIe 4.0 SSDs, and only recently did PCIe 4.0 GPUs launch (if you could call it that, since they are still difficult to even buy.) On top of that, the difference in most applications (like GPUs, for example) between PCIe 3.0 and PCIe 4.0 is almost nothing, because the bus isn't the bottleneck.
Other than TB4, I don't know what PCIe 5.0 is going to be good for in the near term because PCIe 4.0 is already a lot of bandwidth even in small lanes. Obviously PCIe 5.0 has more bandwidth in narrower lanes, but again, in desktop/mobile platforms there isn't a lot of demand for that much bus bandwidth.
Definitely a technology showcase launch more than a marketable one.
Kakkoii - Thursday, November 11, 2021 - link
I will say in regards to PCIe 5, there is the advantage that you can do more on the board with less lanes. Some of these boards split the PCIe 5 lanes between two x16 slots, effectively giving you two full speed PCIe 4.0 x16 slots, instead of x8/x8.
While this still won't benefit you much in most games, it's a big help for anyone who uses multiple GPUs for workstation purposes, or needs extremely high-speed storage in the second slot.
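The bandwidth math does check out, at least once the cards themselves are Gen5 (rough per-direction ceilings, 128b/130b encoding, protocol overhead ignored):
    def link_gbps(transfer_rate_gtps, lanes):
        # per-direction ceiling in GB/s
        return transfer_rate_gtps * lanes * (128 / 130) / 8

    print(round(link_gbps(32, 8), 1))    # PCIe 5.0 x8  -> ~31.5 GB/s
    print(round(link_gbps(16, 16), 1))   # PCIe 4.0 x16 -> ~31.5 GB/s
A Gen4 card dropped into a bifurcated Gen5 slot still negotiates Gen4 x8, though, so the equivalence only applies when the devices speak PCIe 5.0.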
DanNeely - Tuesday, November 9, 2021 - link
Did Intel say why they upped the number of SATA ports to 8? If the number were to change at this point, I'd expect it to start going down.
With M.2 SSDs having largely displaced SATA models the demand for SATA continues to dwindle outside of DIY NAS systems for which a much cheaper basic chipset mobo is suitable. With only about 1 in 10 mobos adding the last 2 ports the mobo makers mostly seem to agree that it was unneeded as well.
Silver5urfer - Tuesday, November 9, 2021 - link
It's for RAID; they are adding some RAID features. And M.2 is nothing vs SATA. M.2 drives are basically all TLC; the only MLC ones are the 970 Pro, which is now obsolete. SATA gets you 860 PRO MLC 4TB drives. Plus M.2 is barely useful in gaming - at most it reduces load times by a fraction vs a regular SATA SSD - and the drives get way too hot for what they offer in endurance, capacity and usefulness in general purpose workloads.
Bonus is SATA gets you NAS-class devices - WD Red / Seagate Exos etc. No need to buy an extra NAS and deal with Plex and all; simply run all your media on your own PC. I bet many like that option when they do not prefer a TV or other devices.
Really unfortunate that everyone is dropping them; even the ASUS Apex dropped it this time. Only EVGA and ASRock are offering it on their mobos.
lilkwarrior - Tuesday, November 9, 2021 - link
It seems to be a tangent on your part, but games currently don't even leverage PCIe 3 SSD speeds. When DirectStorage is utilized by actual games, your statement becomes grossly false when it comes to NVMe M.2 SSDs vs SATA SSDs.
If you need endurance and so on, most pros would opt for Intel Optane in the U.2 format (which you can alternately slot in through a PCIe or M.2 adapter).
Silver5urfer - Tuesday, November 9, 2021 - link
Yeah, I have been hearing the same BS ever since DX12 came out, and where are we with those promises? Talk when the tech is there; do not place ladders in the sky. PCIe 4.0 itself is useless even in benchmarks for GPUs - a minor boost at very, very high FPS is all it is at now.
DigitalFreak - Tuesday, November 9, 2021 - link
Except DirectStorage actually exists in the Xbox Series X. Once the XBSX native games start getting ported, things will start to move.
Bp_968 - Wednesday, November 10, 2021 - link
Why fill my PC with loud and hot hard drives? I have 2 M.2 sticks as local storage and a NAS for all the rust drives in another room. I wouldn't want to go back to the days of using my PC for that.
And if you must have tons of SATA, just buy a SAS card. They're cheap and flexible. Each SAS port on the card fans out to 4 SATA ports using a cheap cable.
The Von Matrices - Tuesday, November 9, 2021 - link
Since the 100 series chipsets, the lanes for the SATA ports are shared with other things, so you aren't getting dedicated ports like you used to. You have to disable other features if you want to use all the SATA ports. With my current Z390 board, I can't use more than 2 SATA ports without compromising on other features, and I can't use all 6 SATA ports unless I disable both M.2 slots. Since they're sharing lanes, there's little cost and little reason to not have them, and that will probably continue into the future.
DigitalFreak - Tuesday, November 9, 2021 - link
Things have changed the last couple of generations. My Z690 board has 6 SATA ports and 4 PCIe 4.0 x4 M.2 slots. The only thing shared is SATA between one SATA port and one of the M.2 slots. As long as you don't need a M.2 SATA drive, you can run 4 NVMe drives and 6 SATA devices simultaneously.
KarlKastor - Wednesday, November 10, 2021 - link
Nothing has changed. The I/O lanes of the chipset can be either SATA or PCIe. The reason you have nothing shared is that they saved money on switches; you don't get to choose how those lanes are used.
This has been the case since Rocket Lake. The CPU has additional PCIe lanes, so you don't need to share as much anymore, and the board is already full. There is no space for more M.2 - maybe on the backside.
12345 - Monday, November 15, 2021 - link
Z690 has an x8 Gen 4 link to the chipset now. You don't have to disable SATA anymore to use all M.2 slots.
meacupla - Tuesday, November 9, 2021 - link
I am pretty sure Intel has had 8 SATA ports since Z77, but board manufacturers routed 2 SATA ports for M.2 SATA. On Z87 and Z97, 8 SATA ports with 2 ports shared for M.2 SATA was totally a thing.
KarlKastor - Wednesday, November 10, 2021 - link
The silicon has had 8 ports for a long time, but the maximum usable on the Z77/Z87/Z97 chipsets was 6; eight were workstation only. If you used shared SATA on M.2, then you had fewer than six SATA ports usable.
TheinsanegamerN - Tuesday, November 9, 2021 - link
SATA SSD sales continue to remain strong, and they are much more economical for large file storage per TB than M.2 drives (a 2TB SATA drive is around $170 now). And if you have a RAID array with 3+ drives, speeds begin to encroach on NVMe speeds: a RAID 5 array with 4 SATA III drives will hit 1.6GB/s read speeds.
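That figure is about what the simple math predicts, assuming ~550 MB/s per SATA III SSD and conservative (N-1) scaling for RAID 5 reads:
    sata_seq_mbps = 550                              # assumed sequential read of a typical SATA III SSD
    drives = 4
    raid5_read_mbps = (drives - 1) * sata_seq_mbps   # parity costs roughly one drive's worth of throughput
    print(raid5_read_mbps)                           # 1650 MB/s, i.e. roughly 1.6 GB/s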
Mr Perfect - Tuesday, November 9, 2021 - link
Man, these Z-chipset boards keep going up in price. I'm curious what eventual H670 chipset boards will look like. If they've got everything you need without all the flashy bits, I'll probably shoot for one of those.
meacupla - Tuesday, November 9, 2021 - link
I think it's because PCIe 5.0 and DDR5 require better quality traces and shielding to maintain signal quality.
AM4 X570 boards had quite a sticker shock when they came out too. The price increase was thanks to PCIe 4.0 requiring higher quality traces. This price increase was also reflected in B550 boards, so you can expect the same.
DanNeely - Tuesday, November 9, 2021 - link
Better quality PCBs and/or a lot more signal boosters. A few years ago there was speculation that PCIe 5 might be too expensive to implement to show up on consumer boards at all. I'll be really interested in seeing if/how much future generations extend it to all the other locations, because CPU to slot 1, and slot 1 to slot 3/4 for bifurcated setups, are the shortest runs.
DigitalFreak - Tuesday, November 9, 2021 - link
I'm guessing that's one of the reasons that Intel only did PCIe 5 for the x16 GPU slot and not the NVMe drive or the chipset.KarlKastor - Wednesday, November 10, 2021 - link
5 cm to the first slot can't cost much. There are no active components used. In the past almost every Z-board had an x8/x8 option. Now most don't.
Silver5urfer - Tuesday, November 9, 2021 - link
Looks - DARK #1, XTREME #2, APEX #3. This time ASUS really ruined their design: insane gamerboy trash look, too much bling. Their Z590 Apex was superb, shame.
Features - Only MSI and ASRock are giving TB4 ports on their top end range - Ace / Taichi. Nobody else at that tier; Gigabyte only gives them on the Xtreme, same for ASUS on the Extreme. EVGA as usual, and 10G LAN is still not common.
Also, none of the boards have PLX chips. I mean, the Z690 has a lot of bandwidth, especially those PCIe 5.0 lanes; we could have got 4x PCIe x8 full lanes with more circuitry to enable more. All they give is basic x16 GPU lanes like always and a PCIe NVMe slot from the CPU. Utter shame.
Plus, only MSI and ASRock are giving PCIe 4.0 x4 lanes on the PCIe slots; nobody else is doing it, which is even more of a bummer. I think the reasoning is probably that since the boards have a ton of I/O on the NVMe side, they skip it. But boy, that DARK has horrible laning - wtf is that? A single slot for PCIe 5.0 and not even reinforced slots for the 2nd slot; look at GB, they give full steel armor and many others do. Too much greed. They also have POST code LED issues since Z490, can you believe it? Z490, Z590 and even their rip-off X570 have LED failures; just check their forum. ASUS and MSI, on the other hand, use that garbage ALC4080 series again; it has horrible trash issues, pop-ins, nonsensical EMI issues, insane driver problems all over the place. ASRock and GB use the ALC1200, which is much better, but this time GB is saving money on the audio I/O with no 7.1 and other options, not even on the Master. Utter shame from all these companies. ASUS ROG series Z690 has Noctua clearance issues; they never had that before, but now they do. I bet it's all for that extra crappy bling.
Finally, the price. EVGA will rip you off: their X570 is too damn expensive at $700+ with tax, that's a LOT, and the USB I/O is pathetic on them, just 6 ports. Also, unlike their Z590, they cut NVMe to just 2 slots, and even the bifurcation on the X570 board will cut the GPU lanes. The DARK will be at the same cost as the APEX, over $800. The MSI ACE and GB Master / Tachyon (last time with Z590 it had poor DIMM tuning - no 11900K could get 3866MHz Gear 1, I hope they improved here) might be better options tbh. Also the VRM cooling: Z590 had 2 boards with that awfulness, the Z590 DARK and OC Formula. Idk why they want to ruin the board with such crappy decisions, adding failure-prone components for no reason. Lastly, I hope EVGA is not going with the doubler BS on the DARK; even ASUS dropped it from the Z590 Apex.
Now the last aspect: is this platform worth it? Nope. From my analysis, the only reason to buy Alder Lake is for those who are stuck on a 2600K, or, a bit more comfortably, on 7th gen, since that's the 4C era, while the 300 series chipsets at least get the 9900K option. That too only if they are starving for some uplift and I/O. All the folks on X470, X570, B550 or Z390, Z490/Z590 do not need this at all; you already know that as a fact by now.
You get improved DMI, sure; more NVMe without cutting GPU lanes, yep, fantastic. But do you want to play into Intel's LGA 1151 debacle? The NBR forum and other places had people modifying the MLC capacitors on the socket back to make a damn Z170 run a 9700K - that's from 6th to 9th gen, btw; yeah, that's how Intel is. And you are paying a top premium here. DDR5 is worthless today; only a few SMT workloads show gains, and those are production workloads with a limited set. So DDR4 is the sane choice, right? Yep, but as I said, the socket longevity is unknown; it's up in the air for Intel. LGA1700 physically has 1800-pin support, so Raptor Lake might be a direct slot-in, but what about after that? Meteor Lake and Lunar Lake hint at even more crappy E cores, but it's a new P core too. So you will not only be relegated to a DDR4-locked system but also a premature, expensive DDR5 system.
If anyone is waiting for the DDR5 era, they should wait more. Once Zen 4 comes with AM5, we will see how DDR5 shapes up. I didn't even mention the cost because it's already known. And buying a DDR4 board and pairing it with an i7 12700K is bad. So that's how I feel. On top of that, the E cores do not do anything; it's all the P cores that are carrying the Alder Lake processors.
Finally, the OC. The 12900K is too hot; the density makes for very high heat and it pulls a ton of current. The proof: 200W-240W on a 10900K will be 75C, while on ADL it will be more than 80C, and once you get to 300W it will shoot to 100C easily, hotter than Rocket Lake. And Intel's binning is very tight; a 5.2GHz all-core OC is the max. You can go higher, but MSI's latest 360mm AIO is peaking at 90C when you run a heavy load - yes, this is not gaming. But that's the state of 10nm ESF, which is why Intel put in only 8 P cores. The 12700K may have a bit more headroom, but it just loses the E cores and the higher P bins, so you won't get more. The i5 12600K, sure, but it's again a mid-range CPU, and running that on an Apex with an AIO is even more stupid just because you can clock high - and that's also bin dependent.
Anyways that's all for now, hope this helps someone.
Silver5urfer - Tuesday, November 9, 2021 - link
I forgot SATA ports. Intel Z690 has 8x SATA but nobody except EVGA is offering those, WTH? I thought finally Intel is adding more, and on top they are adding some RAID features too. AMD RAID, as per Level1Techs, was jank. Now Intel gives more SATA and more NVMe, and the OEMs simply drop SATA. Unfortunate. Plus, look how they are gimping the x4 length slot with Gen3 and not Gen4 offerings; an HBA SAS expansion card is the option there without cutting GPU lanes, but natively they should have it for such a damn I/O beast chipset.
KarlKastor - Wednesday, November 10, 2021 - link
They use the I/O lanes for PCIe instead of SATA.
Gen3 instead of Gen4 I only saw with ASRock. Most boards have 4 Gen4 M.2 slots or a Gen4 x4 slot.
DigitalFreak - Tuesday, November 9, 2021 - link
TLDR
If you buy into the Intel ecosystem, you already know that you're only going to get 1 or 2 CPU generations out of a motherboard. Even AMD, who have been using AM4 for 5 generations now, don't have backward compatibility back to the beginning of the socket. They don't support sticking a 5000 series CPU in an X370 motherboard, for example.
Silver5urfer - Tuesday, November 9, 2021 - link
Throw everything out and stick with what makes Intel Intel, lol. X370, hmm - that was a pile of junk because nobody trusted AMD. And even still it can run an R9 3900X; can any Z170 officially run an 8700K or 9900K? Nope. So there goes your "omg Ryzen 5000 doesn't support it, so Intel is fine too" - pathetic. Same pins on LGA115x and still they shafted everyone; the proof is the notebooks where people modded the socket and BIOS and made them work.
How long will people excuse the BS that Intel pulls off? Well, we have dumb bovine consumers who just jump onto shiny new toys like Alder Lake, which serves no purpose except to create hype for Intel.
Wrs - Wednesday, November 10, 2021 - link
I've only ever used the AMD platform 3 times in my life. K7 Thunderbird, K8 Athlon64 x2 on s939, and Ryzen 5000 on AM4. But all three times, I didn't/don't expect to upgrade the CPU on the same motherboard. It's just not worth the hassle for marginal clock gains or to bump one generation, if I even get that. So I don't view what Intel does as evil. They start with a cleaner slate than if they carry the baggage of socket compatibility. Intel and AMD carry enough x86 baggage as it is. I can feel the weight of the aging AM4. Limited socket current. Limited chipset aggregate throughput. A fan just to support PCIe 4 on secondary slots. Stuff that could be easily fixed by adding more socket pins relative to 5 years ago.
GeoffreyA - Saturday, November 13, 2021 - link
Certainly, there are tradeoffs, keeping a socket; but, as Mr. Tuvok would say, "Ryzen, you are an unending source of astonishment." There was a time when sockets even took CPUs from different manufacturers. I remember my Socket 7 motherboard, though I never tried it, could take a K5 and some Cyrix CPUs as well. Those 5x something, something. How things have changed.
A short-lived socket can be a pain in the behind too. I was one of those unlucky folk who ended up with Socket 754 and missed out on dual-channel DDR and a long upgrade path. In any case, that computer went kaput after four years.
Oxford Guy - Wednesday, November 10, 2021 - link
Overclocking is for employees of motherboard companies.
ECC RAM support should have been a standard feature from the beginning. Apple offered it on the Lisa in ‘83 and consumer computing has gone backward since.
Doublers, though... aren’t a bad thing as long as they’re implemented well — as I understand it. Better to have a good doubler implementation than a weak individual phase system. The main thing is to have a board meet the minimum spec for reliable (i.e. not overheating and/or failing) long-term support of its supported CPUs. Anything beyond that is unnecessary.
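To put rough, purely illustrative numbers on it: the stages see the same load whether eight of them hang off eight true controller phases or off four phases through doublers.
    cpu_current_a = 240                   # hypothetical peak package current
    power_stages  = 8                     # eight true phases, or four phases through doublers
    print(cpu_current_a / power_stages)   # ~30 A per stage at steady state either way
As I understand it, what a doubled design mainly gives up is a bit of transient response at the controller, which is why implementation quality matters more than the advertised phase count.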
GeoffreyA - Saturday, November 13, 2021 - link
The problem with doublers is that they're over-used as a marketing technique to give the impression that a certain board has a large number of phases.
Oxford Guy - Saturday, November 13, 2021 - link
Weak phases with a mediocre/poor regulator aren’t necessarily better than ‘marketing phases’ via the use of doublers. That’s the case when the doublers are used correctly.
There are a lot of shenanigans, though — like not even utilizing the doubler fully but counting it as the doubling of phases. I also recall that one of the big tricks was putting extra chokes on the board to make it look like there are more phases.
GeoffreyA - Sunday, November 14, 2021 - link
Quite right, and one of the reasons why people have got to read a proper analysis of the VRM, or take a look at the lists on hardwareluxx for example.
t.s - Tuesday, November 9, 2021 - link
Wish Intel would go with their ATX12VO. Or do it like the business lines from HP, Dell, Lenovo, etc.: 6 or 8 pin.
shabby - Tuesday, November 9, 2021 - link
Mobo prices will go up even more, screw that.
meacupla - Tuesday, November 9, 2021 - link
In the long term, I think ATX12VO will be cheaper.
An ATX12VO PSU will be cheaper than a comparable quality ATX PSU.
The BoM for 12V to 5V and 12V to 3.3V converters would go down, if mobo makers decide to stick to a single, standardized design.
With the way things are looking, electricity prices are unlikely to go down and will likely continue to rise.
DigitalFreak - Tuesday, November 9, 2021 - link
All ATX12VO is doing is shifting the cost from the PSU to the motherboard.
Wrs - Wednesday, November 10, 2021 - link
If mobo makers can stick to one design, why can't PSU makers? They already conform to ATX.
ATX12VO increases costs for piecemeal upgraders because of the simple observation that PSUs outlive motherboards. The question would be whether the power savings are worth it. For prebuilts they're comparing power savings to 0 net component cost, so 12VO is already the norm.
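Back-of-the-envelope on that question, with made-up but plausible numbers (assume 5 W of idle savings, 8 hours a day at the desk, $0.15/kWh):
    idle_savings_w = 5          # assumed 12VO idle savings vs. a classic multi-rail ATX supply
    hours_per_day  = 8
    usd_per_kwh    = 0.15       # assumed electricity price

    kwh_per_year = idle_savings_w * hours_per_day * 365 / 1000
    print(round(kwh_per_year, 1), round(kwh_per_year * usd_per_kwh, 2))   # ~14.6 kWh, ~$2.19 per year
If the 12VO board costs even $20 more, the payback for a piecemeal upgrader is the better part of a decade, which is why the math mostly works out at prebuilt and fleet scale.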
DanNeely - Tuesday, November 9, 2021 - link
Good point. I thought Intel was pushing hard for 12VO with the 6xx series, but it seems to be completely MIA.
Silver5urfer - Tuesday, November 9, 2021 - link
Can I ask why? What does ATX12VO provide to a consumer?
It doesn't make your mobo cheap, it doesn't make your mobo less complicated, it does not make your system run cooler, it doesn't make ADL consume less power. It doesn't even make any sense.
ATX12VO was created because of the trash policies set by the policing state of California, over some nonsensical rubbish. Servers and data centers can get away with modular high density PSUs because of a fully standardized setup, and they also get 3M liquid cooling. This is the consumer market, and here we have people wishing to go backwards in technology.
meacupla - Thursday, November 11, 2021 - link
A lot of people had the same sentiment about EU RoHS restrictions, and yet, it was implemented worldwide.
With that attitude, the same can be said about Energy Star and 80 Plus certifications. It adds cost to the product, yet it offers not a thing to the consumer.
Not everything is about you.
We need to do everything we can to cut down power consumption, and ATX12VO standardization across the entire industry is very low hanging fruit.
Stop being so selfish, there's literally only one habitable planet we have right now.
Oxford Guy - Thursday, November 11, 2021 - link
80 Plus offered plenty to consumers. Less power use means quieter PSUs.
The knock on 80 Plus was unrealistically easy testing. Despite that, it helped raise the efficiency of PSUs. Along with better efficiency, ripple, hold-up time, voltage consistency, and other factors improved — as enthusiasts began to pay more attention to PSU quality.
I don’t doubt that 80 Plus also helped a lot of non-enthusiasts/amateurs by keeping them away from ultra-cheap PSUs that catch fire. Having a high-profile certification that those PSUs can’t reach helped to steer those customers away.
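The quietness point is simple arithmetic (hypothetical 300 W DC load; efficiency figures purely for illustration): nearly all of the gain shows up as heat the PSU fan no longer has to move.
    dc_load_w = 300                           # hypothetical DC load on the PSU
    for efficiency in (0.80, 0.90):
        wall_w = dc_load_w / efficiency
        waste_w = wall_w - dc_load_w          # heat dissipated inside the PSU itself
        print(efficiency, round(wall_w), round(waste_w))   # 0.8 -> ~75 W of waste heat, 0.9 -> ~33 W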
yacoub35 - Tuesday, November 9, 2021 - link
That white metal trim running tight around the molex power connector on the ASUS ROG Maximus Z690 Formula must make it an absolute nightmare to plug/unplug the main power cable to the board.
Ranguvar - Tuesday, November 9, 2021 - link
Correction:"Previously with 11th gen (Rocket Lake), Intel upheaved it from a PCIe 3.0 x4 uplink on Z490 to a PCIe 3.0 x4 uplink on Z590."
This should say "to a PCIe 3.0 x8 uplink on Z590".
OFelix - Tuesday, November 9, 2021 - link
Correct. And whilst we are correcting that sentence - "upheaved" ????
This first page really needs to be read by an AnandTech editor.
What's that? They don't have any editors? :-(
OFelix - Tuesday, November 9, 2021 - link
"Z490 Motherboard Audio" ... presumably Z690?mode_13h - Friday, November 12, 2021 - link
I caught that, as well. Even the word "upheaved" is itself somewhat noteworthy. Plenty of better alternatives: "upgraded", "widened", "expanded", "increased", "enlarged", etc.
GeoffreyA - Saturday, November 13, 2021 - link
While "upheaved" is likely an error, it's not far off from the words of today. Unfortunately, the English language is on a downgrade, and it's just going to get worse and worse. The language's genius is not tuned to the over-economical forms we're finding today; and a lot of it seems to be coming from tech. Upthis, upthat. My favourite, though, is leverage. A big, scary word that companies are fond of, and which escaped its programming, game development roots. Soon, we'll be leveraging the kettle to make tea. How about using?mode_13h - Sunday, November 14, 2021 - link
I don't mind "leverage", so long as it's an apt analogy. I think its modern roots might've been in the world of finance, where a "leveraged buyout" is one where a small amount of assets are used as collateral for taking on a greater amount of debt to fund the bulk of the buyout price.IMO, one of the more annoying abuses is substitution of "learning" for "a lesson learned". People talking about "learnings" sound to me like business-school idiots, who seem to have invented their own jargon out of jealousy of real professions.
GeoffreyA - Sunday, November 14, 2021 - link
Yes, leverage works well when tied to the proper sense, but I see it being "leveraged" more and more as a high-flown synonym of use, much the same way that highly intellectual folk of an earlier era found that "utilise" was shinier than the plain, homely "use." In short, substituting cardboard for a brick.
Watch out, talking about the esteemed language of business school. Mr. B. Swan might be an avid reader of Anandtech.
Oxford Guy - Sunday, November 14, 2021 - link
Do you wear a suit and tie ever? Do you think men who do look more respectable than those wearing ‘casual’ clothes?
Prestige dialects are about maintaining one’s social status — a barrier for competition. They’re not primarily about concision.
Similarly, overly-elaborate clothing like penguin suits with ties isn’t about keeping one’s body suitably regulated when it comes to temperature, protecting it from sun damage, or protecting others from the horrors of nudity. Overly-elaborate clothing is about maintaining social status.
Oxford Guy - Sunday, November 14, 2021 - link
It’s the unnecessary complexity that’s considered a boon rather than a drawback.
GeoffreyA - Monday, November 15, 2021 - link
I agree with your argument, Oxford Guy, and no, I don't use a suit and tie. Having said that, you've caught me on a weak point, because while I don't use them, I think certain styles of the past were fantastic. Just think of James Stewart or Cary Grant. Or, on the side of the ladies, Ingrid Bergman, Grace Kelly, or any classic actress really. Truly, they give the dressers of today a run for their money.
Oxford Guy - Sunday, November 14, 2021 - link
‘Unfortunately, the English language is on a downgrade, and it's just going to get worse and worse.’
No and yes.
Languages are always changing. The worst language is one that remains static, increasingly less able to meet the needs of its speakers.
The prestige dialect of a language is arbitrary and also changing. While a high school textbook from 1920 may impress with its diction there was a lot less competing for energy/time then and ignorance was hardly less. It does, though, make for the illusion that high school students have become less intelligent. IQ is actually up due to, for instance, reductions in lead exposure and improved prenatal nutrition.
Neologisms often enrich languages rather than degrade them (not always). Even when the new terminology is redundant (which it frequently is) — speakers tend to simply abandon the older words/phrases. English is absolutely rife with abandoned words and phrases. Poets trot them out to impress but even they usually don’t bother with what dictionaries label ‘archaic’.
One thing that seems to be increasing in English is a reduction in working vocabulary, due to globalization. Being monolingual has drawbacks but having to learn 5 ways to talk about a cat (to express the same idea) has a price. This ‘global speak’ is one of the reasons listening to tennis players is often painful. It’s vacuous corporatism plus a limited vocabulary (one not merely limited by a lack of humanities education). However, respect for the humanities continues to decline.
Working vocabulary is a bit like the RISC vs. CISC debate. It takes more time/energy to develop a large working vocabulary (CISC instructions) — and it’s more difficult to keep it all in one’s working vocabulary. The benefit is that it takes fewer words to express an idea. We’re generally trained to see the use of a ‘more accurate’ special word as the mark of intellect, versus using more, more-common words to get the same idea across. If the same amount of energy is involved then it’s arbitrary to prefer one over another. The reduction in the attractiveness of monolingualism should lead to a reduction in the prestigiousness of ‘50-cent words’.
Euphemism is also perhaps an increasing problem. Orwell wrote an essay about it in the ’50s or so, so it’s not new. However, buzzword labels and euphemism seem to be growing in importance. Again, though, calling someone a communist or homosexual was enough to shut down all rational discourse. Prior to that there were witches, homosexuals, and heretics. So, perhaps the overall level of this hasn’t changed much.
Languages other than English and Chinese are under threat in terms of degradation, though, from loss of speakers and usage. In Salzburg, university physics is now taught in English.
I wouldn’t worry, at all, about the degradation of English and Chinese. I’d be more concerned about the ability to use the languages in the face of increasing censorship, censorship AI (the growing tech power divide) increasingly facilitates. Not being able to speak a language fully, due to that, is a path to greater diminishment.
mode_13h - Monday, November 15, 2021 - link
> One thing that seems to be increasing in English is a reduction in working vocabulary
In terms of importance, I've found clarity of expression to be second only to clarity of thought, in software design. One needs to be clear about semantics not only in one's own mind, but also capable of clearly and concisely expressing them in the form of names and documentation.
So often, bugs are the consequence of confusion. Either on the part of the original author or by maintainers or API users. That's why clear conception of ideas must be paired with clear communication, if an API is to be correctly implemented, used, and maintained.
This point of view has been shaped by decades of experience. I can often tell the difference between someone muddling concepts together in their head vs. simply lacking the vocabulary to express the finer distinctions.
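A toy example of the kind of thing I mean (names invented purely for illustration):
    # Ambiguous: size in bytes? elements? pages? Callers end up guessing, and the guesses become bugs.
    def get_size(buffer):
        ...

    # Clearer: the name pins the semantics down, so misuse stands out in review.
    def payload_length_bytes(buffer):
        ...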
> This ‘global speak’ is one of the reasons listening to tennis players is often painful.
Probably true of pro athletes, in most sports. They're selected for their aptitude on the court or field, and honing those skills is where they spend the bulk of their time & energy. It doesn't help that pro athletes are increasingly deferring college to extend the potential length of their athletic careers.
GeoffreyA - Monday, November 15, 2021 - link
Quite true. I'm no expert at programming, more of a hobbyist, but I've found that thinking about something beforehand often leads to better code. Writing it "on-the-fly" usually results in a mess, which can persist. I'd like to add to my comment on English that there's an analogy in programming languages. Just like updating English, there's been a constant trend to come up with new languages that address "weaknesses" in C and C++.
GeoffreyA - Monday, November 15, 2021 - link
Quite right that languages are always changing, but the change may be for the worse as well as better. Despite being a lover of all that is old, I feel that English has actually gone nearer to its roots in the past two decades. People appear to be writing plain, concise English, comparable to the simplicity of Elizabethan prose, I would contend.
People say that a language has to be brought up to date to express new ideas: that may be so in the fields of science and technology, but certainly not in human nature and relations. When I look at the 18th-century writers, it's evident that our distinctions have been blurred and watered down. The way they expressed life was precise, but unfortunately more Latinate, compared with our crude analogues of today. Apart from science and technology, that language isn't lacking at all to express present life (and was much more CISC, to use your example). In fact, there are distinctions that are seemingly lost; and lacking the language, our view on those points is cruder or non-existent. So much for increasing civilisation. Going further back, the Elizabethan English of that fellow from the Globe, or Bacon in prose, if one clears away the archaic usages, all the thous and the eths, is about as "modern" as English can get. I believe there is a true centre, "that mode of phraseology so analogous to the principles of a language," which English has sometimes strayed away from (particularly the 17th and 19th centuries), and I'd argue that the 20th and 21st centuries have seen a return to it in many ways. Unfortunately, there are some frightfully ugly inventions as well, that any true lover of good English will wince when looking at. Selfie, anyone? Hashtags? Upskilling the staff? There are many others but memory, as usual, is failing me on the spot.
Euphemism is a big problem (and I believe you're referring to "Politics and the English Language"), simply because it goes contrary to truth and has an effect on the mind, where the false, blurry idea becomes the thing itself. At its worst, people are able to commit criminal or unjust acts because they're sheltered beneath a euphemistic, polite phraseology. And it spills over into censorship, too, and only the warm, fuzzy forms are acceptable. Again, the importance of being simple, direct, and exact in one's language and "telling it like it is."
Oxford Guy - Monday, November 15, 2021 - link
I can’t be sure but I believe Orwell critiqued heavy use of Latin derivatives along with passive voice and other strategies as a method for being less clear — a form of euphemism/doublespeak. I think Orwell would have responded to your crudeness point with the opposite point of view — that simplicity and concision are superior. Personally, I think irregularity in grammar and English’s terrible spelling (which can be easily fixed) are vestiges of the past that are ‘degradation’ inefficiencies.
‘but the change may be for the worse as well as better.’
The only changes I can think of that would be for the worse would be having a language lose speakers (a dying language) and a language declining in expressiveness from increasing AI-based censorship. Language change generally favors increasing efficiency, although substituting half-pidgin ‘global speak’ due to polylingualism being more important is also an issue.
All human (non-synthetic/artificial) languages are sorely in need of more change than their speakers are willing to allow in the short term. That’s the main problem — the opposite of degradation from change. English spelling, for instance, is utterly preposterous and one linguist’s reform scheme is very easy to get used to. Stubborn nostalgia, though, is extremely difficult to overcome in the short term. Gender in languages like German and French is also very stupid. It’s a massive waste of energy to ascribe sexual characteristics to clouds, trees, and soup.
Oxford Guy - Monday, November 15, 2021 - link
It should also be noted that English and Chinese are languages that are strongly characterized by density of meaning per syllable. That’s the opposite of Japanese. It uses a lot of syllables from a small palette of sounds to get meaning across — which calls for rapidity of speech. This is also like the RISC vs. CISC dichotomy. (On the flip side, Japanese has the most complex writing system.)
The demand of English to pack as much meaning as one can into a syllable seems that it would favor short ‘simple’ words. So, calls to use lengthy ‘ornate’ Latin derivatives may miss the mark. Lengthy words are more attractive in certain other languages. (There is jargon for all of these things but I’m trying to minimize that here.)
GeoffreyA - Monday, November 15, 2021 - link
Perhaps I contradicted myself or wasn't clear, but I am not calling for Latinate English. Not at all. On the contrary, I am a proponent of plain and simple "Saxon" English, and repudiate the Latin style with a passion. I am going to write "get the job done," never "accomplish the task," and use and buy, instead of utilise and purchase. I always try to write using the simplest words to get the sense across. And that extends to syntax too, condensing a sentence to its shortest form. At the end of the day, it comes down to clear thinking. Do that, and one's style becomes more lucid.
18th-century prose was elegant but its chief defect was overly Latinate words and sentences (exemplified by Dr. Johnson). I am actually praising 20th and 21st century prose---can't believe I'm doing that---when I say it's a return to Elizabethan plainness, to my eyes at any rate. If ever there was a golden age of English, it's undoubtedly that of the late 16th and early 17th centuries.
GeoffreyA - Tuesday, November 16, 2021 - link
Being simple doesn't mean being crude or vulgar. One can be elegant as well as simple---after all, true beauty, as the ladies will point out, is simplicity. I feel that while there's a return to plainness in our times, there's been loss of decorum and good taste. Many of today's made-up words are ugly or distasteful, and I feel there's a twisting of the language away from its grain. Upskill? Even clickbait titles are a symptom of something amiss. Could it just be bad taste, or a reflection of the mind of the age?
You are calling for reform to the language. Here our views depart; for I am more of a conservative and believe in preserving English in all its messiness, spelling and all. One of the beauties of language is that it's an irregular growth, much like a tree, lovely as a whole but messy in detail. (Same goes for programming: I'd take messy C++ any day, instead of the new, slick stuff of the present.)
A contradiction again, where I'm talking about preservation but criticising current English? Not really. I'd say: there's a model of good style already in the language, shaped by some of the greatest writers that ever lived. At its best, it's plain, simple, and elegant, and most of all, easy to understand. Orwell would be one example. There are many others.
mode_13h - Tuesday, November 16, 2021 - link
> I ... believe in preserving English in all its messiness, spelling and all.
Consider that its messiness isn't free. English speakers, especially those coming to it later in life, waste significant amounts of time, energy, and mental capacity learning some of its unnecessary complexity. Without it, they could be putting those resources towards improving their overall mastery of the language.
As English speakers, we derive numerous and diverse benefits from more people being able to speak it, and from them being able to do so with better aptitude. It's in our interest to lessen the learning curve, particularly given that it's eroding anyhow -- and in ways that have more detrimental consequences.
GeoffreyA - Wednesday, November 17, 2021 - link
I agree there are a lot of silly points in English that hinder learning. And yes, we are apt to forget that so many people speaking it makes life easier for us. How many more centuries this will go on for, we can only wonder.
On the other hand, Oxford Guy's comment about globalisation is also true. While asymmetric communication is causing simplification, some beautiful usages are lost along the way. The same happens between American and non-American speakers. Sadly, whom is dying, as well as the first-person, colourless "should," and others. Many a time, one possesses a usage that one feels is idiomatic but is forced to use another because of misunderstanding. And for my part, personally, there appears to be greater misunderstanding between cross-country, native English speakers, than between a native and non-native one. I find it easier speaking with people who are using English as a second language; but so often there's a barrier when talking with a native speaker from another country (or even different culture).
In any case, I'm often disappointed with English, and see features in other languages that are attractive, particularly Afrikaans and French. When I hear Afrikaans in my country, with the classic inflexion, it has a magical effect on me, and I almost sense something that English lost earlier in its history. And then, like most languages, the verb's going to the end is beautiful, whereas we aren't allowed to do that outside of poetry. Taking Afrikaans again, it's astonishing how direct and clear a speaker is when talking in English, whereas we English speakers are lost in a maze of many, empty words. So, increasing CISC expressiveness may not be all it's cut out to be. After all, the stuff of life is simple and needs only a few words for expression. It's only idle sophistication that comes up with imaginary nonsense. Let our words be few and choice, and our actions many and noble! Silent cinema shows us that words are empty.
mode_13h - Thursday, November 18, 2021 - link
> Oxford Guy's comment about globalisation is also true. While asymmetric communication is causing simplification, some beautiful usages are lost along the way.
That's basically my point. If those invested in the language don't make the easier and more painless simplifications, the new speakers are going to make much more detrimental ones.
> the stuff of life is simple and needs only a few words for expression.
More like a fractal, I think. From a distance, it seems relatively simple. Yet, the closer you look, the more complexity you see.
> It's only idle sophistication that comes up with imaginary nonsense.
If your needs and thoughts are simple, then a simple language will suffice. Language is a conceptual tool, as much as a means of communication. Comparative language studies have shown people have difficulty grasping concepts for which they lack words.
I prefer to inhabit a world of richness, complexity, and big ideas. I'm grateful not to live in a sparse realm, where anything beyond simplicity of language and simplicity of thought would seem excessive or burdensome.
GeoffreyA - Thursday, November 18, 2021 - link
Good points (and nice one about the fractal of life). Lack of words can lead to poverty of thought. Take a look at older writers, and one realises we've lost many distinctions, expressed admirably. Or worse: similar concepts have been born again under ugly language. Or delete democracy. Then we ask, what, what's that? I suppose there's an ambivalence in me regarding simple vs. complex language---and that's where the apparent contradiction is coming from. Part of me longs for the older speech, and part of me for simplicity. The best model, I think, steers a course between these two whirlpools. And I think people would begin to think more soundly if the bias were towards simplicity. Let one's treasure be buried in the garden and go abroad in plain clothing.
I don't like it, but change is inevitable, especially when a language comes into contact with secondary speakers. In the Middle English era, when it was Saxon against Norman, English lost most of its cases, was simplified, and word order became critical. Doubtless, the same process will happen again, and likely is already happening. Let's keep our fingers crossed that hashtag language doesn't take over. Then we'll get Postmodern English.
mode_13h - Friday, November 19, 2021 - link
> I think people would begin to think more soundly if the bias were towards simplicity.
I fear false simplicity and superficiality.
> Let's keep our fingers crossed that hashtag language doesn't take over.
I'd certainly rather not dwell on the long-term implications of texting on the English language.
GeoffreyA - Friday, November 19, 2021 - link
In the spirit of science, as simple as is consistent with the data but no simpler.
mode_13h - Tuesday, November 16, 2021 - link
> Language change generally favors increasing efficiency
Perhaps, but dialect formation often emphasizes or devises devices to distinguish its speakers from neighbors, outsiders, or newcomers. Here, we see the goals of language in tension with the goals of its speakers. Perhaps you're alluding to that, at the start of the following paragraph.
> English spelling, for instance, is utterly preposterous
I don't mind eliminating exceptions and irregularities from English, so long as nothing substantial is lost in the process.
> Gender in languages like German and French is also very stupid.
Were it expunged, maybe people wouldn't try to import gendering of asexual objects into English, such as the way some refer to ships as female.
Duwelon - Tuesday, November 9, 2021 - link
Asus' prices are completely bananas. If I build a new rig with Z690 it'll probably be my first non-Asus build in a very long time.
Sivar - Tuesday, November 9, 2021 - link
That caught my eye, too. I bought an Asus Hero-branded board for my current system last year at approximately $200 USD.
I suspect Asus is shifting their marketspeak because the word "Maximus" (used for the Z690 board but not mine) usually applies to their most expensive boards.
blppt - Tuesday, November 9, 2021 - link
This. $2000 for a consumer grade motherboard? WTF are they smoking?
Also, I'm pretty sure ASUS will be releasing some TUF Z690s at some point, probably at a lower price point than the primes. My experience with the TUF series has been very positive for the price.
DigitalFreak - Tuesday, November 9, 2021 - link
They know they're not going to sell many of those. Those boards are either for LN2 e-peen competitions or people with more money than sense.
Wrs - Wednesday, November 10, 2021 - link
TUF is historically just a bit more expensive than Prime. They already have a TUF DDR4 version - I ordered the WiFi one for $290 the other day. If you're worried about price, DDR5 is the first mistake.
blppt - Wednesday, November 10, 2021 - link
The X570 TUF was cheaper than the X570 Prime when I went shopping for an AMD board.
COtech - Tuesday, November 9, 2021 - link
Subtitle - "Intel Z690 Chipset: Like Z590, But Now With Native PCIe 4.0"I think "But Now With Native PCIe 5.0" is intended.
gavbon - Thursday, November 18, 2021 - link
The Z690 chipset doesn't have PCIe 5.0, this comes from the CPU. The Z690 chipset does, however, now include PCIe 4.0 lanes, whereas Z590 did not.Someguyperson - Tuesday, November 9, 2021 - link
I don't get the "DP IN" ports on the ASUS ProArt Z690 Creator WIFI. I see the author just wrote what was on the ASUS website, but that doesn't really explain anything. Are they passthrough to the Thunderbolt out ports? Is there a capture card built into this motherboard? I'm very confused by the labeling here.uwsalt - Tuesday, November 9, 2021 - link
Those are passthrough to the Thunderbolt port. Add-in Thunderbolt cards work the same way. You slot in your discrete GPU, send the output from both DP ports to the Thunderbolt controller, and then use Thunderbolt to output to a Thunderbolt monitor or hub.
Pneumothorax - Tuesday, November 9, 2021 - link
These prices are insane. You need to add $300 tax on top of any ADL build for Z690/DDR5.
Gasaraki88 - Tuesday, November 9, 2021 - link
OMG... those mobo prices... wow
Oxford Guy - Wednesday, November 10, 2021 - link
‘Video cards are insane so why not us?’
TeddyBaeeer - Tuesday, November 9, 2021 - link
Gigabyte Z690i Aorus Ultra is NOT DDR5. It is DDR4, and your link goes to the DDR4 board.
Samus - Tuesday, November 9, 2021 - link
Cheapest ITX board is $400-$440. Yikes.
Oxford Guy - Wednesday, November 10, 2021 - link
And some have tried to dismiss the apparent fact that inflation is a significant cause. Supply constraint doesn’t explain all of it, nor does an increase in sedentary entertainment due to Covid.
You’ve got your shrinkflation and your price inflation. Both are occurring.
zodiacsoulmate - Wednesday, November 10, 2021 - link
The MSI Z690 Ace is E-ATX, not ATX.
How do I know? I preordered it on the 4th, and Newegg is showing availability on Dec 3rd… It is so much better than the HERO at $600.
zodiacsoulmate - Wednesday, November 10, 2021 - link
I thought Thunderbolt ports support the DisplayPort protocol? So will those Thunderbolt 4 ports without a DP input port output video from the integrated GPU?
Another error in the article: the ALC4082 list should include the MSI Z690 Ace.
Injuis - Wednesday, November 10, 2021 - link
The inflation just never stops, does it?
Oxford Guy - Wednesday, November 10, 2021 - link
Creating more fiat bills has consequences. Congress literally printed money to give to lobbyists as part of ‘Covid relief’.
fcth - Wednesday, November 10, 2021 - link
Sad to see only one mATX board, though at least it looks like a decent (if expensive) option.
Mite - Wednesday, November 10, 2021 - link
Can the ASUS Z690 Maximus Extreme run a PCIe 5.0 x16 GPU and a PCIe 5.0 x4 M.2 SSD concurrently? Will the GPU (PCIe 5.0 x16 slot 1) drop to PCIe 5.0 x8 instead when an SSD is installed in the PCIe 5.0 x4 (M.2) slot?
Kakkoii - Wednesday, November 10, 2021 - link
MSI does show the Audio Codec... just not on the simplified summary. You guys have to click the "Detail" tab on the Specifications page for a given board. All the boards show which audio they're using.
The Carbon for example has ALC4080.
gavbon - Thursday, November 18, 2021 - link
At the time of writing, even the detail sections of the specifications didn't show them. On top of this, all of the information we received prior to launch mentioned no specific HD audio codecs. I will update this though :)
JackNJ - Friday, November 12, 2021 - link
The GIGABYTE Z690I Aorus Ultra is not DDR5, I think?
chavv - Friday, November 12, 2021 - link
5 M.2 slots? How is this useful for a normal user?!
Or a $600 mobo for desktop usage?!
World gone mad
mode_13h - Saturday, November 13, 2021 - link
For RAID, obviously. That borderline makes sense. If you're running a 4 or 5-drive RAID of SSDs in a consumer rig, it's more cost-effective and still plenty fast to use SATA. And I think it's not unreasonable to expect anyone using M.2 drives to put them in a PCIe carrier card, which will have better cooling potential anyhow.
sunmobo - Friday, November 12, 2021 - link
You've included MSI's ITX variant in the list (MEG Z690I Unify) but I can't seem to find it on their website. Although if you google you'll find a few mentions on some shops, without pics. Is this because MSI is still working on the board, or?
gavbon - Thursday, November 18, 2021 - link
It's likely to launch soon, but it does and will exist.mode_13h - Friday, November 12, 2021 - link
I was really disappointed not to see more discussion of costs and why the price distribution of these boards tends to skew so high.
However, I was most surprised to see how much lower some of the entry-level models are priced. Do we think these will be produced in sufficient volume, or are they primarily there as a means of upselling would-be buyers who, out of frustration at seeing them always out-of-stock, eventually end up buying one of the more expensive models?
mikk - Saturday, November 13, 2021 - link
The MSI Pro Z690-A WIFI, MSI Pro Z690-A and many more have the cheaper Realtek ALC897 codec; the audio table is not accurate, and it says Z490 instead of Z690.
ajollylife - Sunday, November 14, 2021 - link
Wtf is with the PCIe 3.0 slots? I'm looking at the Gigabyte Aorus Master: it has 10gig onboard, great, but then the other two PCIe slots are PCIe 3.0. So confused.
mode_13h - Sunday, November 14, 2021 - link
From what I've read, PCIe 4.0 tends to require retimers, which adds cost and takes space. Those could be reasons why we don't see more PCIe 4.0 slots.
back2future - Monday, November 15, 2021 - link
Maybe mainboards will start getting reshaped/redesigned (vertical M.2, backside slots/connectors?) instead of using retimers. (Does the chipset TDP include retimer power? And what about cooling for peripherals at PCIe 5.x speeds: at ~4 GB/s per lane, ~2 lanes are sufficient for the fastest available consumer SSDs as of 2021.)
ecclesiastes121314 - Wednesday, February 23, 2022 - link
2 RAM slots? I've seen this on a few of these new DDR5 boards. Most people here are talking about Thunderbolt 4 and USB4. Yes, these are very useful to a select group of people, yet they can be achieved with add-on cards; then you can pay for the devices to take advantage of these technologies. Reducing RAM slots from 4 to 2, wow. Yes, you can buy high density RAM, but this is forcing you in that direction. What is wrong with 4x16 or 4x32 RAM kits? If you (me) are interested in high performance video, then affordable and available RAM is a huge consideration. Is it just me?