I am sure TB will be updated again. TB4 at 80 Gbps? However, TB has generally lagged behind the underlying PCIe spec by three years. I wouldn't expect to see TB4 before 2022, and TB5 may never happen, at least not as a simple routing of four PCIe 5.0 lanes, given the serious constraints on even internal PCIe 5.0.
PCIe 4.0 should allow 80 Gbps over the same lanes, but the cables will need to be of much higher quality to maintain signal integrity, and passive cables will probably need to be even shorter than today's. PCIe 5.0 (for TB5 and 160 Gbps) might never happen with passive cables, unless perhaps extensive internal shielding between the wires is employed.
I agree. I wish there were a Thunderbolt 3.1 or 4.0 at 80 Gbps that would support HDMI 2.1 (up to 48 Gbps), 10GbE Ethernet, USB 4.0...
It would finally allow HDMI connectors on TVs to be replaced with universal USB-C connectors supporting 80 Gbps Thunderbolt 4.0 / 3.1, and therefore let you connect a smartphone, computer, USB stick, etc. with a single USB-C cable!
TVs will always use HDMI. They could have used DVI, and later they could have used DisplayPort. The TV OEMs want a standard they and only they control, and that standard is HDMI. It is the reason why no TV has a DisplayPort port, not even one: doing so would reduce the necessity of HDMI connectors on computers.
It is hard to predict technology 5 to 10 years into the future...
So I would say that it is difficult to predict what a TV might look like in 2025 or 2030, but I am hopeful that it will integrate at least some Thunderbolt USB-C connectors, as that would allow manufacturers to slim down TVs and use a USB-C cable to plug one or several external Thunderbolt docks into them...
I believe that TV manufacturers will feel pressure over time to integrate at least one Thunderbolt USB-C connector for connecting a laptop to the TV, as more and more laptops are getting thinner and may soon come equipped with only Thunderbolt USB-C connectors (like Apple MacBook Pro computers...)
Once again, it isn't a technology issue. Have you noticed that not a single TV has even one DisplayPort port? Way back in the day they did have VGA ports, and a few had a single DVI port, but once DP became a competitor to the dominance of HDMI, that ended. The TV OEMs want a standard they control, and they control HDMI, so TVs (and other CE devices) will use HDMI. There will be future versions of HDMI with more capabilities and higher resolutions, but it will be HDMI.
"I believe that TV manufacturer will feel pressure with time to integrate at least one Thunderbolt USB-C connector to connect a laptop to the TV"
You can do that today. Most laptops with USB-C support HDMI Alt Mode; you just need a USB-C to HDMI cable. See the pattern? No matter what the solution is... HDMI, because the consumer electronics OEMs control the HDMI standard.
Thunderbolt 3 was built around the PCIe 3.0 era, and PCIe 5.0 will be out in 2021 (some FPGAs even in 2020). Will there be an update? Or does external PCIe stop at 3.0, with no 4.0 or 5.0? What if we need 6K/8K at 120 Hz?
PCIe 5.0 has some short trace-length requirements. Most boards won't have it on the farther PCIe slots. So I'm not sure (unless someone invents some kind of booster chip) if we will see it externally.
My expectation is that TB4/5 cables will have to switch to fiber, with all that means in terms of even higher costs and the inevitable failures when users ignore minimum bend-radius requirements and try tightly folding the cable back on itself.
Nah, PAM4 signaling at almost the same Nyquist rate with a little FEC thrown in. That's the way things are headed, and copper cables will be fine (or at least as fine as they are now).
PCIe Gen 5 is ridiculous in that they went for 32.0 Gbit/s NRZ signaling. That's a 60% higher Nyquist rate than Thunderbolt 3. AFAIK, aside from Intel Agilex and Xilinx UltraScale+ FPGAs, no other commercial transceivers operate at that rate. I have trouble seeing how PCIe 5.0 makes it to consumer devices anytime soon.
Side note: TB3 should be able to do 8K 10-bit HDR at 120 Hz using dual DP 1.4 streams and DSC. It is right on the line, though; 100 Hz is certainly possible.
Even at 7680 x 4320 with a full 3:1 compression ratio, 120 Hz is too much when you factor in the CVT requirements. 112 Hz is the cutoff, but 110 Hz is probably the highest practical rate.
With 10-bit input you can get up to 3.75:1 compression using DSC.
Goes and reads the specification... Hmmm... 10 bpc may yield a 3.75:1 compression ratio, but AFAICT the decoder is effectively limited to 3 pixels per unit of pixel time. So with VBR you might be able to get some extra down-time and save a little power, but you can't transport more than 3x the number of uncompressed pixels. Having only skimmed through the spec once, though, any additional insight into how this works in practice would be appreciated.
Thunderbolt controllers have connections for up to 4 lanes of PCIe to connect to the host or devices, but the signaling used for the Thunderbolt ports on the other side is a completely different thing, even though you can transport PCIe or DisplayPort packets over a Thunderbolt link. Updating the PCIe interface on the controller from PCIe 3.0 to PCIe 4.0 is completely straightforward, and would be quite welcome, seeing as a dual-port controller currently has two 40 Gbit/s ports with only ~31.5 Gbit/s of PCIe bandwidth available on the back end between them. Historically, Intel has only bumped up the PCIe or DisplayPort revisions once their platforms actually supported those technologies. Which makes sense, seeing as it's their thing after all.
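For anyone wondering where that ~31.5 Gbit/s back-end figure comes from, it's just my arithmetic for the payload rate of a PCIe 3.0 x4 link after 128b/130b encoding overhead:

$$4\ \text{lanes} \times 8\,\mathrm{GT/s} \times \frac{128}{130} \approx 31.5\ \mathrm{Gbit/s}$$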
Thunderbolt controllers also have connections for up to two DisplayPort main links, which is how the DisplayPort packets get on the bus; it has nothing to do with the PCIe side of things. With Titan Ridge Thunderbolt 3 controllers, those are at revision 1.4 with HBR3 support, providing 25.92 Gbit/s apiece, or up to 40 Gbit/s per Thunderbolt link. Updating to DP 1.4a with Display Stream Compression is also relatively straightforward, and even a 2:1 compression ratio would allow you to saturate both links of a dual-port controller.
The logical next step for Thunderbolt 4 is to double bandwidth again by switching from NRZ to PAM4 signaling, which would allow up to 80 Gbit/s per link. However, feeding that on the back end without making the chip enormous and radically more expensive will require waiting for platforms that support both PCIe 5.0 and next generation DisplayPort.
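As a sanity check on that 80 Gbit/s figure: assuming Thunderbolt keeps its current two lanes per link at a 20.625 GBd symbol rate with 64b/66b encoding (which is how TB3 reaches 40 Gbit/s today, since 20.625 × 64/66 = 20 Gbit/s per lane), doubling the bits per symbol with PAM4 works out exactly:

$$2\ \text{lanes} \times 20.625\,\mathrm{GBd} \times 2\ \tfrac{\text{bits}}{\text{symbol}} \times \frac{64}{66} = 80\ \mathrm{Gbit/s}$$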
Without DSC or chroma subsampling, a 16:9 6K display at 10 bpc and 120 Hz requires between 72.11 and 81.96 Gbit/s (depending on how you define 6K). 8K at 10 bpc and 120 Hz is between 127.75 and 145.26 Gbit/s. Thunderbolt 4 might be able to handle some 6K resolutions over a single cable without DSC, but as it stands, you'd need at least three DisplayPort HBR3 main links on the back end.
Incidentally, HDMI 2.1 Fixed Rate Link, which is not available on any shipping products yet, is only 42.67 Gbit/s (48 Gbit/s raw over four lanes, less 16b/18b encoding overhead), so not even close to being able to do 6K or 8K with 10 bpc at 120 Hz without DSC or chroma subsampling.
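If anyone wants to check those 6K/8K figures, here's a minimal sketch. It assumes CVT-RB v2 timing with an 80-pixel horizontal blank and a 460 µs minimum vertical blank rounded up to whole lines (my reading of the timing standard; the exact resolutions picked for "6K" and "8K" are likewise my guesses), and it lands on the ranges quoted above:

```python
import math

# Back-of-envelope DisplayPort bandwidth, assuming CVT-RB v2 timing:
# an 80-pixel horizontal blank and a 460 us minimum vertical blank.

def dp_bandwidth_gbps(h_active, v_active, refresh_hz, bpc=10):
    h_total = h_active + 80                       # fixed horizontal blanking
    v_blank = v_active / (1 - 460e-6 * refresh_hz) - v_active
    v_total = v_active + math.ceil(v_blank)       # whole lines only
    pixel_clock = h_total * v_total * refresh_hz  # in Hz
    return pixel_clock * 3 * bpc / 1e9            # RGB: 3 components/pixel

for name, (h, v) in {"6K (5760 x 3240)": (5760, 3240),
                     "6K (6144 x 3456)": (6144, 3456),
                     "8K (7680 x 4320)": (7680, 4320),
                     "8K (8192 x 4608)": (8192, 4608)}.items():
    print(f"{name}: {dp_bandwidth_gbps(h, v, 120):.2f} Gbit/s")

# -> 72.11, 81.96, 127.75 and 145.26 Gbit/s, matching the ranges above
```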
Ugh, ignore this bit, "and even a 2:1 compression ratio would allow you to saturate both links of a dual-port controller." Obviously the total DisplayPort bandwidth remains at 25.92 Gbit/s per main link, but compression allows you to squeeze more pixel data in there.
I cannot help but wonder if all this PCI-express hotpluggability is well considered. Hotpluggable DMA attacks were already a problem back with Firewire.
Sure, in theory we do have IOMMUs these days, but I'm not even sure how many implementations have an IOMMU group for each Thunderbolt port, nor how well developed driver support is for applying IOMMU protections to devices. OS support for Thunderbolt access control also seems to be very rudimentary at best.
Ah, you're the type of shithead that cries wolf each time a new fuckclown-like "exploit" is announced and spews garbage on forums that we're all fucking doomed huh.
I believe I speak for every normal desktop PC user when I say that none of that shit fucking matters whatsoever. I disable all these retarded "mitigations" as soon as a disable mechanism is available and will continue to do so. None of this shit affects normal users and never will. Maybe you lunix dweebs worry about "hot plug DMA attacks" but to us normal users, this shit is just another cable to plug in and enjoy a device that works.
Normal desktop PC users don't use and don't care about TB. They care more about USB-A than TB. Only Apple users and the blogosphere care about the TB dogma.
The security argument on IEEE 1394 basically boiled down to "DMA is bad". Ultimately, some tradeoff has to be made between security and usability. We COULD just disable all access to the system, but... it wouldn't be very usable at that point.
If USB-IF had any sense they would rebrand all USB Type-C based versions as >4.0 versions, for example:
USB 3.0: USB Type-A based connectors at up to 10 Gbps
USB 4.0: USB Type-C based connectors at up to 20 Gbps
USB 5.0: USB Type-C based connectors at up to 40 Gbps
Quite sensibly, when the USB-IF publishes new versions of their specifications, they increment the version number of the document. Also, quite sensibly, they have always advised vendors not to use these version numbers to indicate the signaling capabilities of their products.
The media thought it would generate a lot more clicks if they suggested that the situation was intractably complicated and that the specification version numbers were actually some sort of brand or trademark that the USB-IF had changed somewhere along the way, which is not at all the case.
The USB-IF has consistently registered silly marketing names for vendors and the media to use to communicate the various signaling capabilities of products to customers (Basic-Speed, Hi-Speed, SuperSpeed, SuperSpeed USB 10 Gbps, SuperSpeed USB 20 Gbps). Everyone universally refuses to acknowledge these as acceptable. So the USB-IF said that if vendors insist on using the specification version number on product packaging, advertising, or other marketing materials, they would also have to indicate the signaling capabilities of the product. That's where you get the Gen 1 or Gen 2 (5 or 10 Gbit/s) for USB 3.1 products, and the x1 or x2 (single or dual-lane) for USB 3.2 products.
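As a quick decoder ring for those qualifiers (my own summary, not official USB-IF wording): the PHY generation and lane count together pin down the actual signaling rate, regardless of which spec version is on the box.

```python
# Per-lane rate is set by the PHY generation; lanes multiply it.
GEN_RATE_GBPS = {1: 5, 2: 10}  # Gen 1 = 5 Gbit/s, Gen 2 = 10 Gbit/s

def usb3_label(gen: int, lanes: int = 1) -> str:
    """Spell out what a 'Gen GxL' qualifier means as a raw signaling rate."""
    return f"USB 3.2 Gen {gen}x{lanes} = {GEN_RATE_GBPS[gen] * lanes} Gbit/s"

for gen, lanes in [(1, 1), (2, 1), (1, 2), (2, 2)]:
    print(usb3_label(gen, lanes))
# USB 3.2 Gen 1x1 = 5 Gbit/s   (the PHY introduced in USB 3.0)
# USB 3.2 Gen 2x1 = 10 Gbit/s  (the PHY introduced in USB 3.1)
# USB 3.2 Gen 1x2 = 10 Gbit/s  (dual-lane, new in USB 3.2)
# USB 3.2 Gen 2x2 = 20 Gbit/s  (dual-lane, new in USB 3.2)
```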
We've always had USB Type-A and Type-B connectors, so I'm not sure why Type-C is so challenging for everyone. Once again, the media is partly to blame for repeatedly ignoring the USB-IF's messaging that Type-C is a cable and connector specification and is entirely separate from any particular version of the USB specification, which is what describes the signaling protocol.
Because version numbering is a convenient way to track capabilities. My older systems don't have USB 3.2 Gen 1 ports, they have USB 2.0 ports. Anything compliant with version 2.0 of the spec is a USB 2.0 device. That future versions of the spec are backwards-compatible doesn't mean older versions stop existing, or that there's no valid reason to reference them.
And imagine the confusion if people DID start following the USB-IF suggestions and solely using cute names for things. Sooooo many shady devices advertising they were high-speed or even full-speed to sucker in the masses. How do you explain to your parents that full-speed is the slowest option, and that high-speed is pretty slow?
The USB-IF is laying out a stupid and confusing naming scheme that only benefits dodgy manufacturers. Better that devices are labelled by the newest version of the spec they comply fully with.
Older versions of the specifications do cease to exist as the USB-IF deprecates them. You can't download older versions of the specs from their site anymore, and they stop certifying new devices against them. It's just like software versions. However, devices that were designed and/or certified for previous versions do not cease to exist. The currently active versions of the USB specification may be 2.0 and 3.2, but billions of USB 1.0, 1.1, 3.0 and 3.1 devices continue to exist. And of course you can continue to use those version numbers and assume that people will know what you're talking about.
However, say I design a new device targeting the USB 3.2 specification (a version that I can currently license) but it only requires 5 Gbit/s signaling (which is totally allowed according to the spec) and follow through with getting it certified according to the USB 3.2 testing procedures. It is dead to rights a USB 3.2 product. The USB-IF can't tell me as a licensee who has jumped through all their hoops that I can't refer to it as a USB 3.2 device. They can however tell me that if I refer to it as USB 3.2 on packaging or in advertising or other marketing materials that I also have to tell you which USB 3.2 signaling mode it's actually capable of, namely 5 Gbit/s. This is where the vendors are playing fast and loose and customers can potentially be misled.
The "cute names" which everyone ridicules are registered trademarks controlled by USB-IF, and as such they carry a legal mechanism to enforce usage. And the speed classes, as they currently stand, are Basic-Speed, Hi-Speed, SuperSpeed, SuperSpeed 10 Gbps, and SuperSpeed 20 Gbps. The only thing my parents might not understand about these names is what heck a Gbps is, and that SuperSpeed without a qualifier only has 5 of them. And I can hear my mom asking, "But how do I know how many Gbps I even need?" No solution is going to work for everyone.
A significant percentage of what are called USB 2.0 devices that are on the market right now are only capable of the low-speed or full-speed signaling modes introduced in USB 1.0, rather than high-speed which was introduced in USB 2.0. And almost nobody cares because devices generally adopt whatever signaling mode is most appropriate, and backwards compatibility is guaranteed. This is pretty much the same deal for USB 3.x devices. If you actually need 10 Gbit/s vs 5 Gbit/s signaling or dual-lane operation to reach 20 Gbit/s, you're not my parents and you can probably take the time to read the fine print or do some research. The place these capabilities probably matter the most is in regards to host ports. I'm guessing 90% of people only care about whether a device works or not, not about the theoretical bandwidth the physical channel can provide to the upper layers.
And you have it exactly reversed. The dodgy manufacturers know that if their device fully complied with the USB 3.0 spec, it is now also fully compliant with the USB 3.1 and 3.2 specs. People who look at spec sheets are more likely to buy whichever product has the bigger number listed. This is exactly why the USB-IF said you can't do that unless you also indicate the maximum signaling capabilities, and explicitly provided at least three different acceptable ways of doing so.
I don't have it backwards. The current approach of the USB-IF is what makes "dodgy manufacturers know that if their device fully complied with the USB 3.0 spec, it is now also fully compliant with the USB 3.1 and 3.2 specs" true. 3.1 and 3.2 should only be applicable to devices using one of those new features. That they are not is nothing more than the USB-IF doing manufacturers a favor so they can conceal their devices' interface functionality.
Extensions of the specification should exist as a "sublevel" designation, like the IEEE does (most obviously with 802.11). And hey, for a while the USB-IF was doing well with that. Then they decided to "clarify" everything by declaring that USB 3.2 was ALL USB EVER, and created a "Gen #x#" nomenclature to replace the existing "USB 2, USB 3, USB 3.1" designations. (And 3.1 should have been USB 4, but that is minor nitpicking relative to the Gen X clusterbomb.)
There's a lot more to the USB spec than just link bandwidth. A naive "bigger number means more faster" does nothing to address all the other things USB needs to do to be useful, which is why the new standards exist in the first place. Power delivery is one of the more obvious ones.
I completely understand where you're coming from, but a lot of what you're saying is misinformation that was propagated by the tech media echo chamber.
Going back to my original assertion, the version numbers are only the version of the specification, which itself is a document distributed in pdf format intended for licensees building USB enabled devices. The working group makes some engineering changes, adds a few new features, and then puts out a new release with an incremented version number. The USB 3.1 specification is the USB 3.0 specification plus ECNs and the addition of the new Gen 2 PHY—90% of the text remained the same. Once 3.1 was released, the USB-IF deprecated USB 3.0 because licensees should all be referring to the new version not the old one from now on. This is like when Apple stops signing iOS 12.2 following the release of 12.3 and customers can no longer download or downgrade to previous versions. This is essentially the way *all* interface specifications work, not just USB.
The reason why manufacturers know that their USB 3.0 devices are most likely 3.1 or 3.2 compliant is because nearly 100% of the text of the USB 3.0 specification is included with only very minor changes in both the 3.1 and 3.2 specs, and 100% of the new features are optional. From the very beginning the USB-IF let everybody know that USB 3.0 was not going to be a one-and-done situation. The industry was stuck at USB 2.0 performance levels for way too long, and therefore, not only would 3.0 be a massive leap forward, but it would also be extensible through updates at regular intervals.
From a licensing and maintenance point of view, the USB-IF is following the only sane path. Yet everyone on teh internets desperately wants specification version numbers that fully determine device capabilities. Once you include even a single feature in a spec that isn't mandatory, that idea goes straight out the window. Once you build an interface based on multiple specifications with lots of optional features in order to address a market of several billion devices, creating a version number that uniquely identifies each permutation would be ludicrous.
With USB, most people don't just want to know the signaling rate, they also want to know which style of plug / receptacle is being used and what the power capabilities are. If a device is advertised as "USB Type-C, 5 Gbit/s signaling, 60 W source/sink power delivery," you probably know everything you need to without including version numbers for a single one of the three specifications referenced. And none of those version numbers would have pinpointed the exact device capabilities anyway.
The Gen X and Gen X x Y nomenclature does not replace the specification version numbers, it's simply how the PHYs were referred to within the USB 3.1 and 3.2 specifications themselves. USB 3.1 introduced a new PHY with different capabilities that could be implemented alongside the original one. The engineers referred to these PHYs as Gen 1 and Gen 2. USB 3.2 introduced channel bonding in the form of dual-lane operation. Now you could have either PHY operating as an x1 or x2 link. It's a little unfair to fault the engineers for using the same terminology as every other engineer ever in the history of personal computer I/O interfaces. I will agree that the USB-IF promoting this terminology for use in public facing materials was ill advised. However, it would seem perfectly suited for the readers of sites like Anandtech, who actually have the desire to do a deep-dive and embrace the terminology used by the engineers themselves.
I swear to God, I have never in my life seen so much mental masturbation as I have in this comment thread.
First of all, Lord Of The Bored is exactly correct and Refluxman27 seems to be a good example, like Paul Krugman, of someone who's educated beyond his intelligence.
The Implementors' Forum is attempting to maintain two parallel naming schemes -- one that's numerical and is therefore IMMEDIATELY and universally intuitive to anyone on the planet who is at least 4-years-of-age, and another with proper nouns constructed of superlatives stacked on top of superlatives, with a few pluses and "gens" thrown in to make things even "clearer." Now, I'm not exactly stoopid, but for the life of me I cannot remember which of the superlative combinations is better/higher/faster than the other superlative combinations.
But it is damned easy to see that 3.2 should be faster/better/more advanced than 3.1, which is likewise faster/better/more advanced than 3.0, which is likewise . . . .
And because USB specifications are always backward compatible, they are all EQUALLY backward compatible, so backward compatibility is not a distinguishing feature between the different versions . . . but forward compatibility is. There's every reason always to purchase the product that is certified to the highest numerical spec because it will be the most future-compatible and future-proof, and NO REASON to purchase a product with a lower spec. It doesn't get any more simple or intuitive for the buying public than this simple numerical progression nomenclature.
Nor can a nomenclature scheme be more UNintuitive and absolutely useless than the IF's moronic proper noun scheme which, as Sir Lord Bored noted, has been roundly rejected by everyone in the known universe. Every DAMN time I'm in Worst Buy picking up a hard drive or looking at the USB ports on a laptop, I'm saying to myself, "OK, is UltraUberSuperDuperSpeed the same as 3.0 or 3.1 or 3.2 Gen1 or Gen2 ? ? ?" . . . And I'm saying that to myself because I already understand what 3.0 and 3.1 and 3.2 mean. What's more is that even someone who knows nothing about USB and isn't even all that bright -- say, Paul Krugman -- even that person could easily pick out which of the three is the latest and greatest.
Sorry, but Monsignor Lord Bored is right. And all the crap about "spec this" and "spec that" and "vendor this" and "vendor that" is just useless claptrap. THE ONLY TWO THINGS THAT MATTER are that the labeling is honest and that THE CONSUMER can understand that labeling INTUITIVELY, without having to do RESEARCH to decipher the inverted pyramid of accumulating superlatives.
And whoever it was on the IF who thought up the "Gen #x#" crap ought to be taken out behind the barn and shot. Each successive bump in speed should get its own numerical I.D. -- it's that simple.
One other UltraUberSuperDuper error the IF is making is NOT requiring all C-ports to be powered by a 3.2 chip and to comply with the 3.2 spec. It is nothing less than moronic for a manufacturer to be able to put a C-port in their product that is merely 2.0 or 3.0 compliant. Yes, I know the argument that the physical port and the protocol spec are two separate things, but the C-port is a major break -- a fundamental change in the USB world, and that new physical standard should ALWAYS be associated at least with the specification which was a major leap forward at the same time the port was introduced. THE WHOLE POINT of buying products that specifically have the C-port is to get the latest jump in speed. Otherwise there IS no point in getting C.
Makes me wonder whether Paul Krugman works at the IF.
My one complaint that no one has brought up is just that the product world is so unbelievably slow in adopting the new ports and the new specs. As I write this in June of 2019, I STILL can't buy an external Western Digital 8TB hard drive that has a C-port. They do have a 4TB 2.5" drive -- the "Passport" -- that has a C-port, but not a big drive that has a C-port. What gives? Have you looked for a thumb drive that is USB-C? Rotsa ruck with that. Even more difficult when you do find a thumb drive that's C . . . try to figure out WHICH VERSION of USB it's compliant with. The CIA or the KGB might know, but no one else does.
My company computer dock (Lenovo) is TB3. Wonderful. It's capable of supporting three 1600p monitors plus Ethernet plus peripherals, and it charges the laptop.
I am not sure, but I read that the 2019 Intel 10 nm Ice Lake processors will natively support Thunderbolt 3, and it is likely that all future Intel processors after that will as well.
That means that from H2 2019 onward, all new computers based on 10 nm or newer processors will come equipped with Thunderbolt 3.
So I would think that Thunderbolt 3 USB-C connectors will soon become more widespread on computers, slowly replacing legacy USB-A connectors.
It has the potential to replace all other connectors (power supply, USB-A, HDMI, Ethernet,...) with one single universal connector, and I really do hope that will be the case!
Well, TB3 was already finished years ago, so what exactly are they working on? Are they going to redesign TB3 around how USB works, or is it just a new implementation based on TB3?
I just hope naming will be human readable, unlike the current mess.
USB is different from TB3 in terms of PCIe, protocols, etc. Hopefully USB4 will keep the ability to use hubs and the like (as TB3 hubs are way expensive because of the PCIe bits).
USB4 replacing SATA for HDD/SSD connectivity at the consumer end would be great: a single cable for signal and power, dirt-cheap hubs instead of $$$ SAS cards if you need lots of ports for a NAS, and the same drive usable internally or externally with the same cable. I suspect latency might not be optimal for boot SSDs, but for spinning rust or general file storage most users probably wouldn't notice any difference.
To date, Thunderbolt has only supported daisy-chain topologies, not tiered-star like USB. The term "Thunderbolt hub" as it is currently used is a complete misnomer when compared to hubs in the USB or Ethernet sense. They're really only single-port or dual-port (daisy-chainable) Thunderbolt docks which happen to provide connectivity in the form of additional ports supporting other protocols.
USB4 will almost certainly be more complex (read: expensive) than USB 3.2, but it won't be adopted for all applications. Just like USB 2.0 lives on for mice, keyboards, and a billion other things, USB 3.2 will probably be around for a long time to come yet. But because backwards compatibility is a hallmark of all of the USB protocols, as long as we get USB4 host ports wherever it matters, the devices can use whichever version makes sense for their particular price point.
" USB4 promises to be more than that. In fact, so much more that the USB Promoter Group is considering a new logotype and branding scheme. The current one is already complex enough, so expect some kind of simplification on that front. "
Actually what I'm expecting is more like: USB4 gen4 4x4 40gbps Ludicrous Speed OMGWTFLOLBBQ
I dunno, Apple was pretty involved for a while there over at the USB-IF, so maybe we'll get something like "Microburst" for the trademark, along with an understated yet evocative logo (and no obvious connection to the technologies which preceded it).
With Intel contributing to the USB4 standard, they already have a significant advantage: nobody understands TB3 better than they do, and USB4 is based on Thunderbolt 3. If you look at the slides for Ice Lake, Thunderbolt 3 is integrated into the CPU, but two things stood out for me:
1. On the CPU block diagram it is referred to as USB-C and Thunderbolt 3 interchangeably.
2. On another slide the protocol between two Thunderbolt 3 controllers is referred to as 40G USB4 Tx and 40G USB4 Rx.
Since the specification is already at 0.7 and the full spec is expected to be ratified soon, not much is going to change. Intel is supposed to release Ice Lake-SP next year alongside support for PCIe 4.0, and having Thunderbolt integrated in the CPU means that a lot of the issues we have with an external controller and PCIe 4.0 can be resolved right on the CPU where Thunderbolt and PCIe are concerned. You can't get any closer than an on-die design. Ice Lake is therefore USB4-ready, with support for up to 4 USB-C ports on laptops, 2 on each side.
I was glad to see ASRock come out with motherboards that support TB3 for Ryzen 3000 CPUs. I think it will be a matter of a software patch to get TB3 to support USB4 even on AMD systems.
Ice Lake-SP, being targeted at servers, probably won't include Thunderbolt at all. Tiger Lake U and Y, which are due as early as Q2 2020, hopefully will have PCIe 4.0 and will also probably be the first official USB4 products.
The slides that you referred to (which I believe are these ones here: https://www.notebookcheck.net/Intel-envisions-a-US... ) indicate that Ice Lake-U will have the equivalent of a PCIe 3.0 x4 link on the back end for each Thunderbolt port, in other words double the width of the interface on discrete controllers. But until such time as Intel releases a client platform with PCIe 4.0, we probably shouldn't expect to see any matching discrete controllers with PCIe 4.0 for Thunderbolt devices.
Also notable is that Ice Lake supports DisplayPort 1.4 HBR3 with DSC 1.1, so I imagine we might see some new Ridge out later this year, unless Titan Ridge is already DSC 1.1 capable.
Intel wants Thunderbolt to dominate USB; there are a number of advantages Thunderbolt has over USB that benefit workstations. Intel favours Thunderbolt over any other I/O, even on workstations. The adoption of Thunderbolt on server motherboards is slow, but it is happening. Supermicro announced support for Thunderbolt on their workstation servers a while back and should be working on Thunderbolt 3 certification. Obviously Apple has their new workstation, the Mac Pro, with up to 12 TB3 ports.
Titan Ridge replaced Alpine Ridge mainly because of DisplayPort 1.4 support. HBR3 was introduced with DisplayPort 1.3. DisplayPort 1.4 adds support for Display Stream Compression 1.2 (DSC) which Titan Ridge supports.
Ice Lake-SP lacks an integrated GPU, and therefore it will not have Thunderbolt on die. Workstations will have to get by with discrete Thunderbolt controllers for a while yet.
DSC support is an entirely optional feature of the DP 1.4 spec. NVIDIA Turing, AMD Navi, and Intel Gen11 GPUs are the first to support DSC despite several previous architectures being certified DisplayPort 1.4 compliant. Not to mention things like Ice Lake having DP 1.4 outputs but only DSC 1.1 capability. There is no indication one way or the other regarding Titan Ridge's ability to transport streams using DSC.
An integrated Thunderbolt controller in the CPU can use DisplayPort lanes from a discrete GPU; typically there would be a DisplayPort input. Thunderbolt controllers are not graphics processors; they can support standards long before those standards are available on graphics processors. Titan Ridge supported DisplayPort 1.4 while Intel iGPUs only supported DisplayPort 1.2 and 1.3; the first DisplayPort 1.4 support from Intel iGPUs is expected with Gen 11. Titan Ridge was announced in January 2018, while the DisplayPort 1.4 and DSC 1.2 specs were announced almost two years earlier, in March 2016. By January 2017, VESA had already started a program to fast-track certification for DisplayPort 1.4 Alt Mode on USB-C. DisplayPort 1.4 support was the main feature of Titan Ridge.
Engineering samples of Ice Lake-SP already exist, and not one source that I'm aware of has said anything about Thunderbolt. Besides, there's no point in integrating a feature if you can only integrate 50% of the back end. And there is no way Intel would waste the die space and reduce yields of the LCC / HCC / XCC dies.
Titan Ridge was developed for release alongside Intel Gen10 Graphics, which included support for DP 1.4. Cannon Lake technically "shipped for revenue" before Titan Ridge, but it also never shipped with an enabled GPU. By far the largest customer for Thunderbolt is Apple, and if they wanted Titan Ridge, Intel was going to sell them Titan Ridge, despite the fact that it would give competing GPUs an advantage. Historically, they were content to allow Thunderbolt to hobble the display output capabilities of discrete GPUs in order to level the playing field with their own offerings. But in reality, the groups that deal with display controller and transport IP at Intel probably work together or at least not in total isolation, so that the source and sink implementations work together as well as possible.
Is it possible we'll see Thunderbolt on the upcoming Intel Xe GPUs? With PCIe 4.0 there is enough bandwidth for both the GPU (PCIe 4.0 x8) and Thunderbolt (PCIe 4.0 x8). You mentioned Apple, which got me thinking about the Mac Pro Expansion (MPX) Modules, which use PCIe 3.0 x16 for graphics and PCIe 3.0 x8 for Thunderbolt. Intel loves Thunderbolt way more than Apple does; they now fully own the tech, to the extent of contributing to USB4 and basing it on Thunderbolt. If we don't see Thunderbolt on the 10 nm Ice Lake-SP CPU, maybe we'll see it on a 10 nm Xe GPU. Ice Lake-SP, Intel Xe GPUs, and USB4 all launch in 2020.
So if the first USB4 devices will supposedly be released in late 2020, when are the first USB 3.2 devices (and, most importantly, controller support) supposed to become available? It's already mid-2019 and they're nowhere to be seen.
Which is why I am skeptical of the late-2020 claim. What the USB-IF wants to happen and what will happen are likely two different things. I mean, we just saw a huge number of product announcements from E3 and none involve USB 3.2 (aka USB 3.2 x2 hyper stupid naming quantum speed).
I really don't care anymore, as long as USB-C becomes a standard everywhere very soon, so that even mainboards have four of those at the back instead of two USB-A. 5 Gbps or 10 or 40? Who really cares? You need big fat USB SSDs for that anyway, because everything else is too slow.
How about improving the labeling of Type-C 3.0 functions before moving on to bigger and better 4.0? Type-C labeling is a mess right now, and a lot of the time it's a guessing game of what works and what doesn't.
It is never going to get fixed. That's one thing I like about TB3: TB3 means a certain set of capabilities. USB4 won't. USB4 *can* be as capable as TB3, but you won't know what it can actually do without digging into the details (if available).
All this bandwidth is pointless when copying a UserData folder, or one of the many modern system folders with hundreds of thousands of tiny files, drops the transfer rate to 23 Kbps...
Tech guys... please sort out small file transfers!
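To put a rough number on why that happens: once files get tiny, fixed per-file overhead (open/close, metadata updates, journaling) dominates, and the link speed barely matters. A back-of-envelope sketch; the ~1 ms per-file cost and the file counts are purely illustrative assumptions:

```python
# Illustrative only: assume ~1 ms of fixed per-file overhead
# (open/close, metadata updates) regardless of link speed.
files = 200_000                 # hypothetical folder of tiny files
avg_size_bytes = 8 * 1024       # ~8 KiB each
link_gbps = 40                  # a full Thunderbolt 3 / USB4 link

payload_s = files * avg_size_bytes * 8 / (link_gbps * 1e9)
overhead_s = files * 1e-3       # assumed 1 ms per file

print(f"payload time:  {payload_s:5.1f} s")   # ~0.3 s on the wire
print(f"overhead time: {overhead_s:5.0f} s")  # ~200 s; the link sits idle
```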
TheUnhandledException - Wednesday, June 12, 2019 - link
That seems optimistic given how slow the move from 5Gbps to 10 Gbps has been and I haven't seen a single usb 3.2 device or host on the market yet.Meaker10 - Wednesday, June 12, 2019 - link
Thunderbolt products are already on the market so it's more of a tweak rather than a new standard.Flunk - Wednesday, June 12, 2019 - link
This is basically just making Thunderbolt 3 part of the USB standard.TheUnhandledException - Wednesday, June 12, 2019 - link
I am sure TB will be updated again. TB4 at 80 Gbps? However TB has generally lagged behind the underlying PCIe spec by three years. I wouldn't expect to see TB4 before 2022 and TB5 may never happen at least not just routing 4 PCIe 5 lanes given the serious constraints on even internal PCIe 5.TheUnhandledException - Wednesday, June 12, 2019 - link
Oops replied to the wrong post. Come on anand allow us to delete or modify posts.Skeptical123 - Thursday, June 13, 2019 - link
^^^Santoval - Wednesday, June 12, 2019 - link
PCIe 4.0 should allow 80 Gbps with the same lanes, but the cables will need to have a much higher quality to sustain the signal integrity, while passive cables will probably need to be even shorter than today. PCIe 5.0 (for TB5 and 160 Gbps) might never happen with passive cables, unless perhaps extensive internal shielding between the wires is employed.Diogene7 - Wednesday, June 12, 2019 - link
I agree. I wish there would be a Thunderbolt 3.1 or 4.0 with 80Gbps that would support HDMI 2.1 (up to 48Gbps), Ethernet 10GbE, USB 4.0...It would finally allow to replace HDMI connectors on TVs with universal USB-C connectors with support of 80Gbps Thunderbolt 4.0 / 3.1, and therefore connect a smartphone, computer, USB stick,... with a USB-C cable !!!
TheUnhandledException - Wednesday, June 12, 2019 - link
TVs will always use HDMI. They could have used DVI and later they could have used Displayport. The TV OEMS want a standard they and only they control. That standard is HDMI. It is the reason why no TV has a displayport port not even one because doing so would reduce the necessity of HDMI connectors on computers.Diogene7 - Wednesday, June 12, 2019 - link
It is hard to predict the future of technology 5 to 10 years in the future...So I would say that it is difficult to predict what a TV migh look like in 2025 or 2030, but I am hopefull that it will integrate at least some Thunderbolt USB-C connectors as it would allow to slim down TVs and use a USB-C cable to plug one or several external Thunderbolt docks on it...
I believe that TV manufacturer will feel pressure with time to integrate at least one Thunderbolt USB-C connector to connect a laptop to the TV, as more and more laptops are getting thinner and may soon come equipped with only Thunderbolt USB-C connector (like Apple MacBook Pro computers...)
TheUnhandledException - Wednesday, June 12, 2019 - link
Once again it isn't a technology issue. Have you noticed not a single TV has a single displayport port. Way back in the day they did have VGA ports and a few had a single DVI port but once DP became a competitor to the dominance of HDMI that ended. The TV OEMs want a standard they control and they control HDMI so TVs (and other CE devices) will use HDMI. There will be future versions of HDMI with more capabilities, and higher resolutions but it will be HDMI."I believe that TV manufacturer will feel pressure with time to integrate at least one Thunderbolt USB-C connector to connect a laptop to the TV"
You can today. Most laptops with usb-c support HDMI alt-mode. You just need a usb-c to HDMI cable. See the pattern? No matter what the solution is .... HDMI because the consumer electronic OEMs control the HDMI standard.
LogitechFan - Sunday, June 16, 2019 - link
Yeah, because the future is not wireless.... sure.lilkwarrior - Sunday, December 1, 2019 - link
You slim down TVs using tech like MicroLED & OLED; not really the ports you use.Diogene7 - Wednesday, June 12, 2019 - link
It is hard to predict the future of technology 5 to 10 years in the future...So I would say that it is difficult to predict what a TV migh look like in 2025 or 2030, but I am hopefull that it will integrate at least some Thunderbolt USB-C connectors as it would allow to slim down TVs and use a USB-C cable to plug one or several external Thunderbolt docks on it...
I believe that TV manufacturer will feel pressure with time to integrate at least one Thunderbolt USB-C connector to connect a laptop to the TV, as more and more laptops are getting thinner and may soon come equipped with only Thunderbolt USB-C connector (like Apple MacBook Pro computers...)
ksec - Wednesday, June 12, 2019 - link
Thunderbolt 3 was build around the PCI-E 3.0 era, PCI-E 5.0 will be out in 2021, ( Some FPGA even in 2020 ), will there be an update? Or are external PCI-E stops at 3.0 there will be no 4.0 and 5.0?What if we need 6/8K and 120hz?
hpglow - Wednesday, June 12, 2019 - link
PCIE 5 has some sort trace requirements. Most boards won't have it on the farther pci slots. So I'm not sure (unless someone invents some kind of booster chip) if we will see it externally.DanNeely - Wednesday, June 12, 2019 - link
My expectation is that TB4/5 cables will have to switch to fiber with all that means in terms of even higher costs and the inevitable failures when users ignore minimum curvature radius requirements and try tightly folding the cable back on itself.repoman27 - Wednesday, June 12, 2019 - link
Nah, PAM4 signaling at almost the same Nyquist rate with a little FEC thrown in. That's the way things are headed, and copper cables will be fine (or at least as fine as they are now).PCIe Gen 5 is ridiculous in that they went for 32.0 Gbit/s NRZ signaling. That's a 60% higher Nyquist rate than Thunderbolt 3. AFAIK, aside from Intel Agilex and Xilinx UltraScale+ FPGAs, no other commercial transceivers operate at that rate. I have trouble seeing how PCIe 5.0 makes it to consumer devices anytime soon.
TheUnhandledException - Wednesday, June 12, 2019 - link
I am sure TB will be updated again. TB4 at 80 Gbps? However TB has generally lagged behind the underlying PCIe spec by three years. I wouldn't expect to see TB4 before 2022 and TB5 may never happen at least not just routing 4 PCIe 5 lanes given the serious constraints on even internal PCIe 5.Side note TB3 should be able to do 8K 10bit HDR 120 Hz using dual DP 1.4 streams and DSC. It is right on the line though but 100 Hz is certainly possible.
repoman27 - Wednesday, June 12, 2019 - link
Even at 7680 x 4320 with a full 3:1 compression ratio, 120 Hz is too much when you factor in the CVT requirements. 112 Hz is the cutoff, but 110 Hz is probably the highest practical rate.TheUnhandledException - Friday, June 14, 2019 - link
With 10 bit input you can get up to 3.75:1 compression using DSC.repoman27 - Friday, June 14, 2019 - link
Goes and reads the specification...Hmmm... 10 bpc may yield a 3.75:1 compression ratio, but AFAICT the decoder is effectively limited to 3 pixels per unit of pixel time. So with VBR you might be able to get some extra down-time and save a little power, but you can't transport more than 3x the number of uncompressed pixels. Having only skimmed through the spec once though, any additional insight into how this works in practice would be appreciated.
repoman27 - Wednesday, June 12, 2019 - link
Thunderbolt controllers have connections for up to 4 lanes of PCIe to connect to the host or devices, but the signaling used for the Thunderbolt ports on the other side is a complete different thing, even though you can transport PCIe or DisplayPort packets over a Thunderbolt link. Updating the PCIe interface on the controller from PCIe 3.0 to PCIe 4.0 is completely straightforward, and would be quite welcome, seeing as a dual-port controller currently has two 40 Gbit/s ports with only ~31.5 Gbit/s of PCIe bandwidth available on the back-end between them. Historically, Intel has only bumped up the PCIe or DisplayPort revisions once their platforms actually supported those technologies. Which makes sense, seeing as it's their thing after all.Thunderbolt controllers also have connections for up to two DisplayPort main links, which is how the DisplayPort packets get on the bus; it has nothing to do with the PCIe side of things. With Titan Ridge Thunderbolt 3 controllers, those are at revision 1.4 with HBR3 support, providing 25.92 Gbit/s apiece, or up to 40 Gbit/s per Thunderbolt link. Updating to DP 1.4a with Display Stream Compression is also relatively straightforward, and even a 2:1 compression ratio would allow you to saturate both links of a dual-port controller.
The logical next step for Thunderbolt 4 is to double bandwidth again by switching from NRZ to PAM4 signaling, which would allow up to 80 Gbit/s per link. However, feeding that on the back end without making the chip enormous and radically more expensive will require waiting for platforms that support both PCIe 5.0 and next generation DisplayPort.
Without DSC or chroma subsampling, a 16:9 6K display at 10 bpc and 120 Hz requires between 72.11 and 81.96 Gbit/s (depending on how you define 6K). 8K at 10 bpc and 120 Hz is between 127.75 and 145.26 Gbit/s. Thunderbolt 4 might be able to handle some 6K resolutions over a single cable without DSC, but as it stands, you'd need at least three DisplayPort HBR3 main links on the back end.
Incidentally, HDMI 2.1 Fixed Rate Link, which is not available on any shipping products yet, is only 42.67 Gbit/s, so not even close to being able to do 6K or 8K with 10 bpc at 120 Hz without DSC or chroma subsampling.
repoman27 - Wednesday, June 12, 2019 - link
Ugh, ignore this bit, "and even a 2:1 compression ratio would allow you to saturate both links of a dual-port controller." Obviously the total DisplayPort bandwidth remains at 25.92 Gbit/s per main link, but compression allows you to squeeze more pixel data in there.Dolda2000 - Wednesday, June 12, 2019 - link
I cannot help but wonder if all this PCI-express hotpluggability is well considered. Hotpluggable DMA attacks were already a problem back with Firewire.Sure, in theory we do have IOMMUs these days, but I'm not even sure how many implementations have an IOMMU group for each Thunderbolt port, nor how well developed driver support is for applying IOMMU protections to devices. OS support for Thunderbolt access control also seems to very rudimentary at best.
timecop1818 - Wednesday, June 12, 2019 - link
Ah, you're the type of shithead that cries wolf each time a new fuckclown-like "exploit" is announced and spews garbage on forums that we're all fucking doomed huh.I believe I speak for every normal desktop PC user when i say that none of that shit fucking matters whatsoever. I disable all these retarded "mitigations" as soon as disable mechanism is available and will continue to do so. None of this shit affects normal users and never will. Maybe you lunix dweebs worry about "hot plug DMA attacks" but to us normal users, this shit is just another cable to plug in and enjoy a device that works.
damianrobertjones - Wednesday, June 12, 2019 - link
?? Steady with the language. It adds nothing to what you're trying to say and makes you seem childish.s-plus - Wednesday, June 12, 2019 - link
What's wrong with you man?AshlayW - Wednesday, June 12, 2019 - link
Wow, rude.id4andrei - Wednesday, June 12, 2019 - link
Normal desktop PC users don't use and don't care about TB. They care more about USB-A than TB. Only Apple users and the blogosphere cares about the TB dogma.Lord of the Bored - Thursday, June 13, 2019 - link
The security argument on IEEE 1394 basically boiled down to "DMA is bad".Ultimately, some tradeoff has to be made between usability and accessibility. We COULD just disable all access to the system, but... it wouldn't be very usable at that point.
valinor89 - Wednesday, June 12, 2019 - link
I am eagerly awaiting the upcoming USB 4.0 Gen 2x2x2 devices! AKA MegaSpeed USB 40Gbps to make it ore understandable./s
s-plus - Wednesday, June 12, 2019 - link
By late 2020 Intel will release Thunderbolt 4 and USB will have the inferiority complex again...Arsenica - Wednesday, June 12, 2019 - link
If USB-IF had any sense they would rebrand all USB type-C based versions as >4.0 versions, for example:USB 3.0: USB type A based connectors at up to 10 Gbps
USB 4.0: USB type C based connectors at up to 20 Gbps
USB 5.0: USB type C based connectors at up to 40 Gbps
TheUnhandledException - Wednesday, June 12, 2019 - link
"If USB-IF had any sense" ok you can stop right there.repoman27 - Wednesday, June 12, 2019 - link
Quite sensibly, when the USB-IF publishes new versions of their specifications, they increment the version number of the document. Also, quite sensibly, they have always advised vendors not to use these version numbers to indicate the signaling capabilities of their products.The media thought it would generate a lot more clicks if they suggested that the situation was intractably complicated and that the specification version numbers were actually some sort of brand or trademark that the USB-IF had changed somewhere along the way, which is not at all the case.
The USB has consistently registered silly marketing names for vendors and the media to use to communicate the various signaling capabilities of products to customers (Basic-Speed, Hi-Speed, SuperSpeed, SuperSpeed USB 10 Gbps, SuperSpeed USB 20 Gbps). Everyone universally refuses to acknowledge these as acceptable. So the USB-IF said that if vendors insist on using the specification version number on product packaging, advertising, or other marketing materials, that they would also have to indicate the signaling capabilities of the product. That's where you get the Gen 1 or Gen 2 (5 or 10 Gbit/s) for USB 3.1 products, and the x 1 or x 2 (single or dual-lane) for USB 3.2 products.
We've always had USB Type-A and Type-B connectors, so I'm not sure why Type-C is so challenging for everyone. Once again, the media is partly to blame for repeatedly ignoring the USB-IF's messaging that Type-C is a cable and connector specification and is entirely separate from any particular version of the USB specification, which is what describes the signaling protocol.
Lord of the Bored - Wednesday, June 12, 2019 - link
Because version numbering is a convenient way to track capabilities. My older systems don't have USB 3.2 Gen 1 ports, they have USB 2.0 ports. Anything compliant with version 2.0 of the spec is a USB 2.0 device. That future versions of the spec are backwards-compatible doesn't mean older versions stop existing, or that there's no valid reason to reference them.And imagine the confusion if people DID start following the USB-IF suggestions and solely using cute names for things. Sooooo many shady devices advertising they were high-speed or even full-speed to sucker in the masses. How do you explain to your parents that full-speed is the slowest option, and that high-speed is pretty slow?
The USB-IF is laying out a stupid and confusing naming scheme that only benefits dodgy manufacturers. Better that devices are labelled by the newest version of the spec they comply fully with.
repoman27 - Wednesday, June 12, 2019 - link
Older versions of the specifications do cease to existing as the USB-IF deprecates them. You can't download older versions of the specs from their site anymore, and they stop certifying new devices against them. It's just like software versions. However, devices that were designed and/or certified for previous versions do not cease to exist. The currently active versions of the USB specification may be 2.0 and 3.2, but billions of USB 1.0, 1.1, 3.0 and 3.1 devices continue to exist. And of course you can continue to use those version numbers and assume that people will know what you're talking about.However, say I design a new device targeting the USB 3.2 specification (a version that I can currently license) but it only requires 5 Gbit/s signaling (which is totally allowed according to the spec) and follow through with getting it certified according to the USB 3.2 testing procedures. It is dead to rights a USB 3.2 product. The USB-IF can't tell me as a licensee who has jumped through all their hoops that I can't refer to it as a USB 3.2 device. They can however tell me that if I refer to it as USB 3.2 on packaging or in advertising or other marketing materials that I also have to tell you which USB 3.2 signaling mode it's actually capable of, namely 5 Gbit/s. This is where the vendors are playing fast and loose and customers can potentially be mislead.
The "cute names" which everyone ridicules are registered trademarks controlled by USB-IF, and as such they carry a legal mechanism to enforce usage. And the speed classes, as they currently stand, are Basic-Speed, Hi-Speed, SuperSpeed, SuperSpeed 10 Gbps, and SuperSpeed 20 Gbps. The only thing my parents might not understand about these names is what heck a Gbps is, and that SuperSpeed without a qualifier only has 5 of them. And I can hear my mom asking, "But how do I know how many Gbps I even need?" No solution is going to work for everyone.
A significant percentage of what are called USB 2.0 devices that are on the market right now are only capable of the low-speed or full-speed signaling modes introduced in USB 1.0, rather than high-speed which was introduced in USB 2.0. And almost nobody cares because devices generally adopt whatever signaling mode is most appropriate, and backwards compatibility is guaranteed. This is pretty much the same deal for USB 3.x devices. If you actually need 10 Gbit/s vs 5 Gbit/s signaling or dual-lane operation to reach 20 Gbit/s, you're not my parents and you can probably take the time to read the fine print or do some research. The place these capabilities probably matter the most is in regards to host ports. I'm guessing 90% of people only care about whether a device works or not, not about the theoretical bandwidth the physical channel can provide to the upper layers.
And you have it exactly reversed. The dodgy manufacturers know that if their device fully complied with the USB 3.0 spec, it is now also fully compliant with the USB 3.1 and 3.2 specs. People who look at spec sheets are more likely to buy whichever product has the bigger number listed. This is exactly why the USB-IF said you can't do that unless you also indicate the maximum signaling capabilities, and explicitly provided at least three different acceptable ways of doing so.
Lord of the Bored - Thursday, June 13, 2019 - link
I don't have it backwards. The current approach of the USB-IF makes "dodgy manufacturers know that if their device fully complied with the USB 3.0 spec, it is now also fully compliant with the USB 3.1 and 3.2 specs" true.3.1 and 3.2 should only be applicable to devices using one of those new features. That they are not is nothing more than the USB-IF doing manufacturers a favor so they can conceal their devices' interface functionality.
Extensions of the specification should exist as a "sublevel" designation, like the IEEE does(most obviously with 802.11). And hey, for a while the USB-IF was doing good with that. Then they decided to "clarify" everything by declaring that USB 3.2 was ALL USB EVER, and created a "gen #x#" nomenclature to replace the existing "USB2, USB3, USB 3.1" designations. (And 3.1 should have been USB4, but that is minor nitpicking relative to the Gen X clusterbomb).
edzieba - Thursday, June 13, 2019 - link
There's a lot more to the USB spec than just link bandwidth. A naive "bigger number means more faster" does nothing to address all the other things USB need to do to be useful, which is why the new standards exist in the first place. Power delivery is one of the more obvious ones.TheUnhandledException - Thursday, June 13, 2019 - link
There is no difference between usb 3.1 and usb 3.0 except a new 10 Gbps mode likewise with usb 3.2 adding a 20 Gbps mode.repoman27 - Thursday, June 13, 2019 - link
I completely understand where you're coming from, but a lot of what you're saying is misinformation that was propagated by the tech media echo chamber.Going back to my original assertion, the version numbers are only the version of the specification, which itself is a document distributed in pdf format intended for licensees building USB enabled devices. The working group makes some engineering changes, adds a few new features, and then puts out a new release with an incremented version number. The USB 3.1 specification is the USB 3.0 specification plus ECNs and the addition of the new Gen 2 PHY—90% of the text remained the same. Once 3.1 was released, the USB-IF deprecated USB 3.0 because licensees should all be referring to the new version not the old one from now on. This is like when Apple stops signing iOS 12.2 following the release of 12.3 and customers can no longer download or downgrade to previous versions. This is essentially the way *all* interface specifications work, not just USB.
The reason why manufacturers know that that their USB 3.0 devices are most likely 3.1 or 3.2 compliant is because nearly 100% of the text of the USB 3.0 specification is included with only very minor changes in both the 3.1 and 3.2 specs, and 100% of the new features are optional. From the very beginning the USB-IF let everybody know that USB 3.0 was not going to be a one and done situation. The industry was stuck at USB 2.0 performance levels for way too long, and therefore, not only would 3.0 be a massive leap forward, but it would also be extensible through updates at regular intervals.
From a licensing and maintenance point of view, the USB-IF is following the only sane path. Yet everyone on teh internets desperately wants specification version numbers that fully determine device capabilities. Once you include even a single feature in a spec that isn't mandatory, that idea goes straight out the window. Once you build an interface based on multiple specifications with lots of optional features in order to address a market of several billion devices, creating a version number that uniquely identifies each permutation would be ludicrous.
With USB, most people don't just want to know the signaling rate, they also want to know which style of plug / receptacle is being used and what the power capabilities are. If a device is advertised as "USB Type-C, 5 Gbit/s signaling, 60 W source/sink power delivery," you probably know everything you need to without including version numbers for a single one of the three specifications referenced. And none of those version numbers would have pinpointed the exact device capabilities anyway.
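To make the combinatorics concrete, here's a minimal sketch of capability-tuple labeling versus a single version number. The field names are purely hypothetical, not anything from a USB-IF document:

    from dataclasses import dataclass

    # Hypothetical model of what a product listing actually needs to convey.
    # None of these field names come from the USB specs; they're illustrative.
    @dataclass
    class UsbPortCapabilities:
        connector: str         # "Type-A", "Type-C", ...
        signaling_gbps: float  # 0.48, 5, 10, 20, ...
        pd_watts: int          # 0 if no Power Delivery support
        alt_modes: tuple       # e.g. ("DisplayPort",) or ()

    port = UsbPortCapabilities("Type-C", 5, 60, ("DisplayPort",))

    # A lone spec version can't recover any of this: a "USB 3.2" device may
    # legally implement any Gen 1/Gen 2, x1/x2 combination, with or without
    # PD, so the version number alone underdetermines the port.
    print(f"{port.connector}, {port.signaling_gbps} Gbit/s, {port.pd_watts} W PD")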
The Gen X and Gen X x Y nomenclature does not replace the specification version numbers, it's simply how the PHYs were referred to within the USB 3.1 and 3.2 specifications themselves. USB 3.1 introduced a new PHY with different capabilities that could be implemented alongside the original one. The engineers referred to these PHYs as Gen 1 and Gen 2. USB 3.2 introduced channel bonding in the form of dual-lane operation. Now you could have either PHY operating as an x1 or x2 link. It's a little unfair to fault the engineers for using the same terminology as every other engineer ever in the history of personal computer I/O interfaces. I will agree that the USB-IF promoting this terminology for use in public facing materials was ill advised. However, it would seem perfectly suited for the readers of sites like Anandtech, who actually have the desire to do a deep-dive and embrace the terminology used by the engineers themselves.
MarcusMo - Saturday, June 15, 2019 - link
These couple of posts were some of the best I’ve read on Anandtech in a long while. Thank you!
Herbertificus - Sunday, June 30, 2019 - link
I swear to God, I have never in my life seen so much mental masturbation as I have in this comment thread.
First of all, Lord Of The Bored is exactly correct and Refluxman27 seems to be a good example, like Paul Krugman, of someone who's educated beyond his intelligence.
The Implementors' Forum is attempting to maintain two parallel naming schemes -- one that's numerical and is therefore IMMEDIATELY and universally intuitive to anyone on the planet who is at least 4 years of age, and another with proper nouns constructed of superlatives stacked on top of superlatives, with a few pluses and "gens" thrown in to make things even "clearer." Now, I'm not exactly stoopid, but for the life of me I cannot remember which of the superlative combinations is better/higher/faster than the other superlative combinations.
But it is damned easy to see that 3.2 should be faster/better/more advanced than 3.1, which is likewise faster/better/more advanced than 3.0, which is likewise . . . .
And because USB specifications are always backward compatible, they are all EQUALLY backward compatible, so backward compatibility is not a distinguishing feature between the different versions . . . but forward compatibility is. There's every reason always to purchase the product that is certified to the highest numerical spec because it will be the most future-compatible and future-proof, and NO REASON to purchase a product with a lower spec. It doesn't get any more simple or intuitive for the buying public than this simple numerical progression nomenclature. Nor can a nomenclature scheme be more UNintuitive and absolutely useless than the IF's moronic proper noun scheme which, as Sir Lord Bored noted, has been roundly rejected by everyone in the known universe. Every DAMN time I'm in Worst Buy picking up a hard drive or looking at the USB ports on a laptop, I'm saying to myself, "OK, is UltraUberSuperDuperSpeed the same as 3.0 or 3.1 or 3.2 Gen1 or Gen2 ? ? ?" . . . And I'm saying that to myself because I already understand what 3.0 and 3.1 and 3.2 mean. What's more is that even someone who knows nothing about USB and isn't even all that bright -- say, Paul Krugman -- even that person could easily pick out which of the three is the latest and greatest.
Sorry, but Monsignor Lord Bored is right. And all the crap about "spec this" and "spec that" and "vendor this" and "vendor that" is just useless claptrap. THE ONLY TWO THINGS THAT MATTER are that the labeling is honest and that THE CONSUMER can understand that labeling INTUITIVELY, without having to do RESEARCH to decipher the inverted pyramid of accumulating superlatives.
And whoever it was on the IF who thought up the "Gen #x#" crap ought to be taken out behind the barn and shot. Each successive bump in speed should get its own numerical I.D. -- it's that simple.
One other UltraUberSuperDuper error the IF is making is NOT requiring all C-ports to be powered by a 3.2 chip and to comply with the 3.2 spec. It is nothing less than moronic for a manufacturer to be able to put a C-port in their product that is merely 2.0 or 3.0 compliant. Yes, I know the argument that the physical port and the protocol spec are two separate things, but the C-port is a major break -- a fundamental change in the USB world, and that new physical standard should ALWAYS be associated at least with the specification which was a major leap forward at the same time the port was introduced. THE WHOLE POINT of buying products that specifically have the C-port is to get the latest jump in speed. Otherwise there IS no point in getting C.
Makes me wonder whether Paul Krugman works at the IF.
My one complaint that no one has brought up is just that the product world is so unbelievably slow in adopting the new ports and the new specs. As I write this in June of 2019, I STILL can't buy an external Western Digital 8TB hard drive that has a C-port. They do have a 4TB 2.5" drive -- the "Passport" -- that has a C-port, but not a big drive that has a C-port. What gives ? Have you looked for a thumb drive that is USB-C ? Rotsa ruck with that. Even more difficult when you do find a thumb drive that's C . . . try to figure out WHICH VERSION of USB it's compliant with. The CIA or the KGB might know, but no one else does.
JSMH.
mooninite - Wednesday, June 12, 2019 - link
Is this going to finally combine Thunderbolt into USB? Who even has Thunderbolt devices (outside of Apple)?
TheUnhandledException - Wednesday, June 12, 2019 - link
My company computer dock (Lenovo) is TB3. Wonderful. One cable capable of supporting three 1600p monitors plus Ethernet, plus peripherals, and charging the laptop.
Diogene7 - Wednesday, June 12, 2019 - link
I am not sure, but I read that the 2019 Intel 10nm Ice Lake processors will natively support Thunderbolt 3, and it is likely that all future Intel processors after that will as well. It means that from H2 2019 going forward, all new computers based on 10nm processors or newer will come equipped with Thunderbolt 3.
So I would think that Thunderbolt 3 USB-C connectors will soon become more widespread on computers, slowly replacing legacy USB-A connectors.
It may have the potential to replace all other connectors (power supply, USB-A, HDMI, Ethernet,...) with one single universal connector, and I do really hope it will be the case !!!
Xajel - Wednesday, June 12, 2019 - link
Well, TB3 was already finished years ago, so what exactly are they working on? Are they going to redesign TB3 around how USB works? Or is it just a new implementation based on TB3?
I just hope the naming will be human readable, unlike the current mess.
dontlistentome - Wednesday, June 12, 2019 - link
USB is different to TB3 in terms of PCIe, protocols etc - hopefully USB4 will keep the ability to use hubs etc (as TB3 hubs are way expensive because of the PCIe bits).
USB4 replacing SATA for HDD/SSD connectivity at the consumer end would be great - single cable for signal and power, dirt cheap hubs instead of $$$ SAS cards if you need lots of ports for a NAS, same drive usable internally or externally with the same cable. Suspect latency might not be optimal for boot SSDs, but for spinning rust or general file storage most users probably wouldn't notice any difference.
repoman27 - Wednesday, June 12, 2019 - link
To date, Thunderbolt has only supported daisy chain topologies and not tiered star like USB. The term "Thunderbolt hub" as it is currently used is a complete misnomer when compared to hubs in the USB or Ethernet sense. They're really only single-port or dual-port (daisy chainable) Thunderbolt docks which happen to provide connectivity in the form of additional ports supporting other protocols.
USB4 will almost certainly be more complex (read: expensive) than USB 3.2, but it won't be adopted for all applications. Just like USB 2.0 lives on for mice, keyboards, and a billion other things, USB 3.2 will probably be around for a long time to come yet. But because backwards compatibility is a hallmark of all of the USB protocols, as long as we get USB4 host ports wherever it matters, the devices can use whichever version makes sense for their particular price point.
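For what it's worth, the daisy chain vs. tiered star distinction is easy to picture. A minimal sketch, with purely illustrative data structures (nothing from either spec):

    # Thunderbolt daisy chain: each device has exactly one downstream port,
    # so the chain is effectively a linked list, and unplugging device N
    # drops everything from N+1 to the end.
    tb_chain = ["host", "dock", "disk", "display"]  # one linear path

    # USB tiered star: hubs fan out, so the bus is a tree and each branch
    # is independent of its siblings.
    usb_tree = {
        "host": ["hub1", "keyboard"],
        "hub1": ["disk", "hub2"],
        "hub2": ["mouse", "webcam"],
    }

    def usb_devices(tree, node="host"):
        """Walk the tree; a node is only lost if its own parent chain breaks."""
        for child in tree.get(node, []):
            yield child
            yield from usb_devices(tree, child)

    print(list(usb_devices(usb_tree)))
    # ['hub1', 'disk', 'hub2', 'mouse', 'webcam', 'keyboard']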
DanNeely - Wednesday, June 12, 2019 - link
" USB4 promises to be more than that. In fact, so much more that the USB Promoter Group is considering a new logotype and branding scheme. The current one is already complex enough, so expect some kind of simplification on that front. "Actually what I'm expecting is more like: USB4 gen4 4x4 40gbps Ludicrous Speed OMGWTFLOLBBQ
repoman27 - Wednesday, June 12, 2019 - link
I dunno, Apple was pretty involved for a while there over at the USB-IF, so maybe we'll get something like "Microburst" for the trademark, along with an understated yet evocative logo (and no obvious connection to the technologies which preceded it).
Herbertificus - Sunday, June 30, 2019 - link
"USB4 gen4 4x4 40gbps Ludicrous Speed OMGWTFLOLBBQ"LOL !
Don't give 'em any ideas.
KimGitz - Wednesday, June 12, 2019 - link
With Intel contributing to the USB4 standard they already have a significant advantage, because nobody understands TB3 better than them and USB4 is based on Thunderbolt 3. If you look at the slides for Ice Lake, Thunderbolt 3 is integrated into the CPU, but two things stood out for me:
1. On the CPU block it is referenced as USB-C and Thunderbolt 3 interchangeably.
2. On another slide the protocol between two Thunderbolt 3 controllers is referenced as 40G USB4 Tx and 40G USB4 Rx.
Since the specification is already at 0.7 and the full spec is expected to be ratified soon, not much is going to change.
Intel is supposed to release Ice Lake-SP next year alongside support for PCIe 4.0; having Thunderbolt integrated in the CPU will mean that a lot of the issues we have with an external controller and PCIe 4.0 can be resolved right on the CPU where Thunderbolt and PCIe are concerned. You can’t get any closer than an on-die design.
Ice Lake is therefore USB4 ready, with support for up to 4 USB-C ports on laptops, 2 on each side.
I was glad to see ASRock come out with motherboards that support TB3 for Ryzen 3000 CPUs. I think it will be a matter of a software patch to get TB3 to support USB4 even on AMD systems.
repoman27 - Wednesday, June 12, 2019 - link
Ice Lake-SP, being targeted at servers, probably won't include Thunderbolt at all. Tiger Lake U and Y, which are due as early as Q2 2020, hopefully will have PCIe 4.0 and will also probably be the first official USB4 products.
The slides that you referred to (which I believe are these ones here: https://www.notebookcheck.net/Intel-envisions-a-US... ) indicate that Icelake U will have the equivalent of a PCIe 3.0 x4 link on the back end for each Thunderbolt port, in other words double the width of the interface on discrete controllers. But until such time as Intel releases a client platform with PCIe 4.0, we probably shouldn't expect to see any matching discrete controllers with PCIe 4.0 for Thunderbolt devices.
Also notable is that Icelake supports DisplayPort 1.4 HBR3 with DSC 1.1, so I imagine we might see some new Ridge out later this year, unless Titan Ridge is already DSC 1.1 capable.
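As a rough back-of-the-envelope on why DSC matters for these links (the link rates are the published DisplayPort numbers; the display mode is just an example I picked):

    # HBR3: 8.1 Gbit/s per lane, 4 lanes, 8b/10b encoding -> 80% efficiency.
    link_gbps = 8.1 * 4 * 0.8  # 25.92 Gbit/s of usable bandwidth

    # Example mode: 4K 144 Hz, 8-bit RGB (24 bpp). This ignores blanking
    # overhead, so the true requirement is a little higher still.
    mode_gbps = 3840 * 2160 * 144 * 24 / 1e9  # ~28.67 Gbit/s

    print(f"link {link_gbps:.2f} Gbit/s vs. mode {mode_gbps:.2f} Gbit/s")
    # Uncompressed, it doesn't fit. With DSC at roughly 3:1 the stream drops
    # to ~9.6 Gbit/s and fits with headroom to spare for a second display.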
KimGitz - Wednesday, June 12, 2019 - link
Intel wants Thunderbolt to dominate USB; there are a number of advantages Thunderbolt has over USB that benefit workstations. Intel favours Thunderbolt over any other I/O even on workstations. The adoption of Thunderbolt in server motherboards is slow but it is happening. Supermicro announced support for Thunderbolt on their workstation servers a while back and should be working on Thunderbolt 3 certification. Obviously Apple has their new workstation, the Mac Pro, with up to 12 TB3 ports.
KimGitz - Wednesday, June 12, 2019 - link
Titan Ridge replaced Alpine Ridge mainly because of DisplayPort 1.4 support. HBR3 was introduced with DisplayPort 1.3. DisplayPort 1.4 adds support for Display Stream Compression 1.2 (DSC), which Titan Ridge supports.
repoman27 - Wednesday, June 12, 2019 - link
Icelake-SP lacks an integrated GPU, and therefore it will not have Thunderbolt on die. Workstations will have to get by with discrete Thunderbolt controllers for a while yet.
DSC support is an entirely optional feature of the DP 1.4 spec. NVIDIA Turing, AMD Navi, and Intel Gen11 GPUs are the first to support DSC despite several previous architectures being certified DisplayPort 1.4 compliant. Not to mention things like Icelake having DP 1.4 outputs but only DSC 1.1 capability. There is no indication one way or the other regarding Titan Ridge's ability to transport streams using DSC.
KimGitz - Friday, June 14, 2019 - link
An integrated Thunderbolt controller in the CPU will use DisplayPort lanes from a discrete GPU; typically there would be a display input. Thunderbolt controllers are not graphics processors; they can support standards long before those standards are available on graphics processors. Titan Ridge supported DisplayPort 1.4 while Intel iGPUs only supported DisplayPort 1.2/1.3; the first DisplayPort 1.4 support is expected with Gen 11 of Intel's iGPUs. Titan Ridge was announced in January 2018, while the DisplayPort 1.4 and DSC 1.2 specs were announced almost 2 years earlier in March 2016. By January 2017 VESA had already started a program to fast-track certification for DisplayPort 1.4 Alt Mode on USB-C. DisplayPort 1.4 support was the main feature for Titan Ridge.
repoman27 - Friday, June 14, 2019 - link
Engineering samples of Ice Lake-SP already exist, and not one source that I'm aware of has said anything about Thunderbolt. Besides, there's no point in integrating a feature if you can only integrate 50% of the back end. And there is no way Intel would waste the die space and reduce yields of the LCC / HCC / XCC dies.
Titan Ridge was developed for release alongside Intel Gen10 Graphics, which included support for DP 1.4. Cannon Lake technically "shipped for revenue" before Titan Ridge, but it also never shipped with an enabled GPU. By far the largest customer for Thunderbolt is Apple, and if they wanted Titan Ridge, Intel was going to sell them Titan Ridge, despite the fact that it would give competing GPUs an advantage. Historically, they were content to allow Thunderbolt to hobble the display output capabilities of discrete GPUs in order to level the playing field with their own offerings. But in reality, the groups that deal with display controller and transport IP at Intel probably work together or at least not in total isolation, so that the source and sink implementations work together as well as possible.
KimGitz - Saturday, June 15, 2019 - link
Is it possible we see Thunderbolt on the upcoming Intel Xe GPUs? With PCIe 4.0 there is enough bandwidth for both the GPU (PCIe 4.0 x8) and Thunderbolt (PCIe 4.0 x8). You mentioned Apple, which got me thinking about the Mac Pro Expansion (MPX) Modules, which use a PCIe 3.0 x16 link for graphics and a PCIe 3.0 x8 link for Thunderbolt. Intel loves Thunderbolt way more than Apple; they now fully own the tech, to the extent of contributing it and basing USB4 on Thunderbolt. If we don’t see Thunderbolt on the 10nm Ice Lake-SP CPU, maybe we see it on a 10nm Xe GPU. Ice Lake-SP, Intel Xe GPUs and USB4 all launch in 2020.
Santoval - Wednesday, June 12, 2019 - link
So if the first USB4 devices will supposedly be released in late 2020, when are the first USB 3.2 devices (and, most importantly, controller support) supposed to become available? It's already mid 2019 and they're nowhere to be seen.
TheUnhandledException - Wednesday, June 12, 2019 - link
Which is why I am skeptical of the late 2020 claim. What the USB-IF wants to happen and what will happen are likely two different things. I mean, we just saw a huge number of product announcements from E3 and none involve USB 3.2 (aka usb 3.2 x2 hyper stupid naming quantum speed).
Beaver M. - Wednesday, June 12, 2019 - link
I really don't care anymore, as long as USB-C becomes a standard everywhere very soon, so that even mainboards have only 4 of those at the back instead of 2 USB-A. 5 Gbps or 10 or 40? Who really cares? You need big fat USB SSDs for that anyway, because everything else is too slow.
wr3zzz - Wednesday, June 12, 2019 - link
How about improving the labeling of Type-C 3.0 functions before moving on to bigger and better 4.0? Type-C labeling is a mess right now, and a lot of the time it's a guessing game of what works and what doesn't.
TheUnhandledException - Thursday, June 13, 2019 - link
It is never going to get fixed. That is one thing I like about TB3: TB3 means a certain set of capabilities. USB4 won't. USB4 'can' be as capable as TB3, but you won't know what it actually can do without digging into the details (if available).
jabber - Friday, June 14, 2019 - link
All this bandwidth is pointless when copying a UserData folder or many modern system folders with hundreds of thousands of microfiles drops the transfer to 23 Kbps... Tech guys... please sort out small file transfers!
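That complaint is really about per-file overhead rather than link bandwidth. A quick sketch with made-up but representative numbers shows why tiny files crawl no matter how fast the cable is:

    # Assume 5 ms of per-file overhead (open/create, metadata, close) and a
    # link that moves bulk data at 500 MB/s. Both numbers are illustrative.
    per_file_overhead_s = 0.005
    link_mb_per_s = 500

    def effective_mb_per_s(file_size_mb):
        transfer_s = file_size_mb / link_mb_per_s
        return file_size_mb / (transfer_s + per_file_overhead_s)

    for size in (1000, 1, 0.004):  # 1 GB file, 1 MB file, 4 KB file
        print(f"{size} MB files -> {effective_mb_per_s(size):.2f} MB/s")
    # 1 GB file: ~499 MB/s; 1 MB file: ~143 MB/s; 4 KB file: ~0.8 MB/s.
    # A faster link barely moves the bottom number; only cutting the
    # per-file overhead does.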
kwinz - Monday, November 9, 2020 - link
November 2020: I am still waiting on any USB4 product announcements for desktop.
compvter - Wednesday, January 13, 2021 - link
Early 2021, and I am still waiting for my USB4 products, especially eGPU docks for AMD laptops =(