Thunderbolt is not “based on PCIe” like that. Thunderbolt 3 controllers can be updated with PCIe 4.0 back ends.
“Thunderbolt 4” will likely be reserved for an increase in the Thunderbolt signaling rate (currently at 20.625 Gbit/s), which has nothing to do with the PCIe generation of the back end.
Thunderbolt is “based on PCIe” like that. TB3 combines a PCIe x4 link and two DP 1.2 signals to reach a 40 Gbps limit, provided the drivers and cable support it. This is a little confusing and took me a while to work out because I couldn't find all the needed info in one place. To help others I have included some references.
“Fundamentally, Thunderbolt is a tunneling architecture designed to take a few underlying protocols, and combine them onto a single interface, so that the total speed and performance of the link can be shared between the underlying usages of these protocols – whether they are data, display, or something else” - linked from the Intel website
“Thunderbolt combines PCI Express (PCIe) and DisplayPort (DP) into two serial signals” - wiki
“Thunderbolt 3 uses PCIe x4 gen 3 data rate with 128kB header sizes. Two links of (4 lane) DisplayPort 1.2 consume 2x (4 x 5.4 Gbps) or 43.2 Gbps. For both of these numbers, the underlying protocol uses some data to provide encoding overhead which is not carried over the Thunderbolt 3 link, reducing the consumed bandwidth by roughly 20 percent (DisplayPort) or 1.5 percent (PCI Express Gen 3). But regardless, adding both together gets you above 40 Gbps.” If demand goes above 40 Gbps the drivers hit their limit and will prioritize the DP signal, though this is rare. - thunderbolttechnology - Intel links to this site on their TB3 page
So the Thunderbolt drivers “tunnel” together up to 4 PCIe gen 3 lanes and two links of DP 1.2.
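If it helps, here is a quick back-of-the-envelope check of those numbers (my own arithmetic, not from the linked sources; the encoding constants are the usual published ones for PCIe 3.0 and DP 1.2, and Thunderbolt's own link-layer overhead is ignored):

```python
# Back-of-the-envelope check of the tunneled-bandwidth figures quoted above.
PCIE_GEN3_LANE_GBPS = 8.0        # raw line rate per PCIe 3.0 lane
PCIE_GEN3_ENCODING = 128 / 130   # 128b/130b encoding (~1.5% overhead)
DP12_LANE_GBPS = 5.4             # HBR2 line rate per DP 1.2 lane
DP12_ENCODING = 8 / 10           # 8b/10b encoding (20% overhead)

pcie_payload = 4 * PCIE_GEN3_LANE_GBPS * PCIE_GEN3_ENCODING  # PCIe x4 link
dp_payload = 2 * 4 * DP12_LANE_GBPS * DP12_ENCODING          # two 4-lane DP 1.2 links

print(f"PCIe x4 Gen 3 payload: {pcie_payload:.1f} Gbps")     # ~31.5
print(f"Two DP 1.2 links:      {dp_payload:.1f} Gbps")       # ~34.6
print(f"Combined demand:       {pcie_payload + dp_payload:.1f} Gbps vs a 40 Gbps link")
```

The point being that the combined potential demand (~66 Gbps) exceeds the 40 Gbps link, which is why the controller has to arbitrate and gives the DP signal priority.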
With DP 2.0 aimed at 2020, and given that Thunderbolt was created by Intel and USB4 is, I believe, also due in 2020, I would expect TB4 to work with PCIe 5.0.
It's also important to note the differences between TB3 and DP: TB3 is bidirectional because it carries more than just display traffic, whereas DP is primarily display. So I wonder if TB could be changed to switch modes and run all lanes in one direction when connected to a display.
There is of course the physical cable limitation of USB-C - so compatibility may be limited - but hopefully this means TB4 gets a new cable option that improves on that.
On consumer products? I think it will be a long while. 4K@120Hz is going to be fine for consumer applications for the foreseeable future. I really only see 8K being used for large commercial signage.
I kind of wish the industry would suck it up and finally start to really push for optical. Even decent pre-terminated OM4 fiber is now down to below $0.69/foot in bulk, and only a bit more (~$0.75-$0.80/ft) as singles. OM3 is even less. OM4 is good for up to 100m (about 330 feet) with a standard 100GBASE module, and those are going for under $100 each at this point.
Yes, an extra $100 on each side for the connector is still significant extra margin right now, but I doubt it will be in the context of 6k-8k monitors or the GPUs needed to drive them for a while. The cable pricing represents major savings *and* far more flexibility; optical cables are thin and go so much farther. Optical transmission also uses a lot less power. And those prices also look to be within the range where massive economies of scale could drive them down significantly. It's not crazy infeasible anymore. It represents a lot more future growth room too. It really seems like the hacks and compromises needed to keep pushing up bandwidth over external copper are starting to not be worth it, which is a drastic change from even a few years ago, but here we are.
Heck, if we're thinking about reusing layers, it might be really interesting if they did just base a display standard around ethernet (or fibrechannel or something) so that all standard switching hardware and the like could be taken advantage of. Just plug a display into a wall keystone and be able to have full connectivity to wherever you want your computer to be elsewhere in the network with 100 Gbps full duplex? I can see uses for that at scale.
+1 for optical. What is the wisdom in preserving copper when you are paying $100 for a damn USB cable (and even then not knowing if it will fry a certain port if you connect the wrong thing to the wrong port)?
At least for regular networking and peripheral connections there are valid arguments to be made for the value of some power over the same cable too, though I wonder, in an optical data world, whether a hybrid optical/copper cable (i.e., a fiber optic cable with a 2-pair just wrapped next to it) would actually be that hard; it'd be less copper and far looser (cheaper) reqs since it'd be exclusively carrying power. But desktop screens, particularly high end desktop screens, always need their own PSU anyway, so no bonus there. And like you say, carrying a lot of power can also plain be a double edged sword: a lot of crummy USB-C cables out there can fry stuff (and in networking it means interference and isolation issues; it's common to do fiber between even close buildings, for example, just for the optoisolation).
If we were still talking $400 a pop for a 100G transceiver fine yeah that'd be too much, even with savings from vastly more manufacturing. But at $100 which might get shrunk down to $50-80 over a few years, that's getting to the point where it's practically paid for just by cheaper cables even ignoring every other advantage. Seems within striking range anyway, and initial monitors will probably all be in the $2000+ range too.
Optical was really tough to go consumer with 8-9 years ago say, back when Intel/Apple were doing the initial TB and it was briefly "LightPeak" instead. Now though?
1) Bring back the LightPeak name! 2) I like the hybrid idea, especially for things like external HDDs where you don't want a separate power cable. High speed optical, paired with a basic copper for power should give the cable some endurance, while being cheaper than a crazy high-speed copper cable alone.
100G transceivers haven't cost more than $50 to manufacture for over 2 years now. The companies charging $400 for them are just ripping people off because they're rare. (Of course, Cisco is still charging $1000 for 10G transceivers that cost $10 to make.) Besides, we don't need to use current network standards for optical connections. We could use a multi-wavelength format for an 8-bit parallel signal with 10G encoding per wavelength across a single fiber, and use a single wavelength at 1Gb for the return control signals. We wouldn't have to use expensive MTP cables that way.
> But desktop screens, particularly high end desktop screens, always need their own PSU anyway so no bonus there.
The most common case for desktop monitors is 1080p res and under 100W power draw, which can easily be satisfied by a single USB-C cable with PD. I'm hoping this configuration becomes ubiquitous over the next few years, thus driving further innovation in USB-C/Thunderbolt to eventually be able to support any and all monitors with a single cable.
> The most common case for desktop monitors is 1080p res and under 100W power draw, which can easily be satisfied by a single USB-C cable with PD. I'm hoping this configuration becomes ubiquitous over the next few years, thus driving further innovation in USB-C/Thunderbolt to eventually be able to support any and all monitors with a single cable.
Apple has had a long-term love affair with monitors that either receive their power from the computer or provide power to the computer. The problem with this design is that either solution makes something more expensive for everybody *just in case* the ideal scenario is used by some percentage of users. (And not just more expensive, but more cooling for either the computer or monitor is required depending on which power supply is taxed with extra work for some other device.)
I think there is zero chance of that happening outside of niche uses like portable monitors. For general use, assuming the PC can supply power is dubious. Say a monitor uses 60W. It has no power cord and only a USB-C connection. What if a given laptop supports PD but only at 45W? Oops. Or what if it can supply 100W but you want to connect two monitors? Nobody is going to design a laptop with a 400W PSU just in case someone might want to plug in three 100W devices (in this case monitors).
The problem with optical is that consumers will try to treat it like copper cable and try putting very tight bends that snap the glass fibers and destroy the cable.
I don't see why that can't be solved with armoring/inert plastic or kevlar side strings or whatever. Direct burial cable has to deal with a lot of abuse already, not just in the ground but because it needs to be able to go in by low paid sloppy contractors yanking the stuff around. I just did a bunch and it's tough stuff, sacrifices some of the real thinness of standard OM3 but it's essentially impossible to bend tight enough to cause any problems.
Minimum radius is important. You can't just coat a piece of fiber in enough rubber to guarantee the minimum radius is met. It's wasteful and no one has space for a 6 inch thick tube of rubber in their living room. People just need to learn to treat fiber nicely, which they would if it was the norm to have it around the house. You only need to break a few pieces of fiber to know what not to do.
Sure, but the rule of thumb is around 10x the diameter, so for a basic simplex cable that should only be around an inch. It's definitely possible to make a cable that is hard to accidentally bend that far, and it doesn't take 6" of rubber at all. There is also such a thing as "bend insensitive fiber", which of course isn't perfectly insensitive, but has a low refractive index trench around the main fiber. Fiber optic cable can also be made out of polymer, and those are more robust vs bending/stretching. If we're imagining a scenario where the consumer industry moves towards this, there are, even beyond education, plenty of avenues to make tougher end user cable.
These things tend to result in reduced performance and thus distance, but remember the context. It's vs active copper cables that are expensive, have unique failure modes themselves (since they have chips inside them), and even then only go 6-9 feet. A rougher-use armored polymer fiber cable that "only" goes 30 feet for ~$15-20 doesn't look so bad.
>You only need to break a few pieces of fiber to know what not to do.
True, like both you and CaedenV said, there will also just be a learning process perhaps. Of course, that'd be a lot easier if they were dealing with $2-20 fiber cables rather than $25-100 chip copper cables!
I wonder if the cheap optical cable used in toslink/spdif could handle that kind of bandwidth over short distances. Does it *have* to be fiber, or will some dirt cheap polymer optical cable be "good enough" for ~10 meters?
Someone should try and see how many gbps you can stuff through one using a more high powered transmitter than the cheap laser LED used for toslink/spdif.
It can't. There is a flaw in the middle of certain kinds of glass when used in fiber optics, and while audio signals don't have an issue with it (at a little over 1Gb bandwidth), high bandwidth network data does have an issue with it.
However, OM3 has far less of that flaw, and is now pretty cheap. So, no need to go with the cheap toslink cable types. I got my 3m 10G OM3 cables for <$10.
Toslink is an optical format that uses S/PDIF signalling. S/PDIF, itself, is 75-ohm coax. So, if you mean just the optical format, then just say Toslink.
What if you coat it in rubber and then a plastic layer with a helical gap cut in it? The ratio of plastic vs. gap would basically constrain the cable to whatever bending radius you want.
This was my first thought regarding the use of fiber in consumer electronics. So, I looked up some values and it seems the bending radius varies between 10-15x the outer diameter of the cable, with the lower value representing a cable under no tension. Carefully chosen sheathing can mitigate these issues.
People seem to forget about Toslink, but this wouldn't be the first time fiber optics has been used in consumer A/V.
Crouching behind an equipment rack, I recently dropped the receiving end of an active Toslink cable, and was pleasantly surprised by how nice it was to have a red glow to show me exactly where the cable fell.
I agree. Tethered cables seem like a really bad option. Just bite the bullet and go optical. I do worry that there is some technological problem with TB3 optical though, since I have yet to see a TB3 optical cable.
>I do worry that there is some technological problem with TB3 optical though, since I have yet to see a TB3 optical cable.
I think it's just lack of market and the fact that the standard was only recently opened up. Corning did announce TB3 optical back at NAB I think? Sometime recently anyway, for release in the next few months. They were the only ones who bothered with TB1/2 optical too IIRC, or maybe one other budget brand eventually did.
I guess it probably is more trouble though to shove a transceiver on each end of a cable rather than just having it be part of the system in the first place and using normal OM3/OM4. Maybe someone will just do an "adapter" instead, TB3<>40GBASE-SR4, and then you can just use your own cables. Then we can enjoy paying an extra TB premium for something that should have just been the standard :\.
The expense is that they have the optical encoder/decoder built into the cable. If it is a dumb light-pipe and the optical bits are in the computer and peripheral then it should not be terribly expensive.
That's not actually true. Many tethered cables are merely attached inside the casing, only requiring you to open the chassis to replace it. Sure, you're at the mercy of the manufacturer for replacements, but you by no means need to scrap the entire device.
Same thoughts since Light Peak was posted here on AnandTech. I think it is utilized quite a few times in those extremely thin OLED TVs which they call wallpaper TVs. Essentially, the TV has the active components on a separate device, connected through fiber alongside copper wires.
Or is it the cost or viability of transceivers and not cabling?
Fiber optic cabling is less than $1 / ft. The problem is the components on each end, that you plug it into, can cost $50-100 or more, if you want really high data rates.
Yeah, but once we're talking $20-40 for active copper cables that still max out at pretty short lengths, that seems like a less compelling cost. Same if we're looking at a market that will be, at least for a while, confined to displays that cost thousands of dollars. Yes, to see widespread adoption everywhere that would have to come down eventually, but vs cutting edge copper high bandwidth interfaces it seems to be in the reasonable range now.
A computer motherboard is typically $40-80 in bulk for an OEM. As pointed out, 40G optical modules alone are $40, meaning double the cost of the cheapest boards and 50% of the cost of a more expensive board. That is not even close to reasonable for an OEM.
For those wanting to do this and maintain power over the cable it will be even more expensive on both the system and the cable side.
Uh, no. FFS. We're not talking about ultra budget machines here, except on like a 10 year horizon. Look at what machines have TB3 right now, for example. DP 2.0 and 6k-8k screens are not going to be the $300 Walmart special on launch, nor would anyone ever build a standard aimed at that. Every standard started farther up the stack than that and then worked down as mass manufacturing brings down the price. The question is whether it's low enough to get the ball going, or if it's still in ultra rarified space. Optical is definitely converging on what high end copper costs at this point, even without consumer market scale.
The main question is how cheap optical transceivers can eventually get. So, what's responsible for the high price - any exotic materials contained therein? That could be a problem, since the price might only *rise* with demand.
Came here to say this - those are non-sale, everyday retail prices from fiberstore, and they can charge that and make a few bucks, meaning it's probably around $25 in BOM to add 40G to a device.
+1 for optical cables here. I love optical cables: no interference, smaller, lighter, only two pins, higher bandwidth overall.
The downside is how fragile they are. I had to replace OM2 and OM3 cables in my test lab all the time. We had a $1200/month budget for new and replacement cables alone. My home network is 10G over OM3, and I haven't had to replace a cable in 3 and a half years, but I do just run the cables around the corners of the room. Having any cables in a potentially hazardous location could be expensive. Simply stepping on one could render it useless. (In my lab, the engineers and QA testers would run them on the floor across aisles, and of course, they'd get stepped on or rolled over and broken often.) If we get optical cables for home use, we'd better get something more durable than the current version.
OMG dude, just get those rubber cable guards at Home Depot. They even make nice wide ones that you can access from the top. Better yet, hang cable trays from the ceiling.
Make anyone who runs a bare cable on the floor, across an aisle, install the next replacement. It's not just a waste of money, but also a safety hazard!
Optical is good for long-distance signaling, but for short distances it is a waste of power and will show no benefit. On the other hand, it would be good to standardize on one. The only question is whether you could standardize on optical, which I doubt: for everyday plugging in and out, like a portable cable, optical will not match metal cable for practical, real-world usability. Copper cable is here to stay, with optical as an add-on for specialized use.
I like the concept of using Ethernet, but if you're grousing about the cost of transceivers (or anticipating such), imagine the complaints about the cost of 100 Gbps Ethernet switches!
Very much looking forward to this upgrade. Monitor selection today is IMO very limited by the interface. I would like to see an 8k/10bit/120Hz 40" screen, but that will not come until we have a reasonable way to connect it. Not in the mainstream, anyway. Btw - how is DP alternate mode over USB-C handled if all 4 high-bandwidth Thunderbolt channels are unidirectional?
Limited by the interface?? Monitors are limited by the screen size and tech, not the interface. It's the reason monitor sales have been flat: no one has any incentive to upgrade when everything out there (besides the $2k+ monitors) doesn't bring anything new to the table.
Dell have an 8K monitor out already, but it requires two cables. Apple have a 6K monitor coming out that will either require two cables or DSC. So it matters for things currently on the market, and the "foreseeable future" is full of products that this is also relevant to.
Nope, 1 Thunderbolt cable (carrying 2 DP HBR3 streams). If your GPU / Thunderbolt controller doesn’t do HBR3, it has a scaler that will allow you to do 5K using HBR2 instead. No DSC involved.
There are 8k panels, but without mass production they are and will be VERY expensive. And without a connection via a single and cheap port (one of the requirements, though not the only one) they will not become mass market solutions. Vicious circle.
A good example is 4k panels. There was a 4k panel sold by IBM starting in 2001. But because it needed a non-standard 2x dual-link DVI connection, and even then supported a maximum of 48Hz, it was not meant for the mass market (around 7000usd). Because it was not a mass market product, it had, instead of ASICs, several high-end FPGAs inside (around 1000usd each) that drove the price up and made it even more of a niche product.
If you get a bunch of monitors that just have a USB-C and a power cable instead of a fat DVI or VGA, my boss will be interested in upgrading our law offices. Anything that makes the tech more invisible is a huge selling point.
I can do one better: there are already displays that are pure USB-C. I have a 16-inch one I carry around as a second display for my laptop, and it's exceedingly handy.
Same here. I was hoping it would be at least 6K/10bit/120Hz, but even that will require ~88 Gbps. And 8K would need ~150 Gbps. And these are effective bandwidths, not raw.
I guess we will have to wait for DisplayPort 3.0 for those... : /
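For anyone wanting to sanity-check those figures, a rough sketch (my own assumptions: RGB at 10 bits per channel and a ~20% blanking allowance, which is only a ballpark since the real figure depends on the timing standard used):

```python
# Rough uncompressed bandwidth for the modes mentioned above.
def required_gbps(width, height, refresh_hz, bpp=30, blanking=1.20):
    return width * height * refresh_hz * bpp * blanking / 1e9

print(f"6K (6016x3384) @ 120 Hz, 10-bit: ~{required_gbps(6016, 3384, 120):.0f} Gbps")
print(f"8K (7680x4320) @ 120 Hz, 10-bit: ~{required_gbps(7680, 4320, 120):.0f} Gbps")
# Both exceed DP 2.0's ~77 Gbps of effective payload without DSC.
```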
I don't like the idea of "visually lossless", which is like saying AAC @ 320Kbps is audibly lossless.
Not a big problem for gaming or normal usage. But if you bought an Apple Pro Display XDR that is capable of 120Hz (ProMotion), I don't think having the connection run with compression is doing it any good.
No, it is more like saying AAC at 2 Mbps is nearly lossless, and... it is. DSC is a compression ratio of roughly 3:1 - not 30:1 or 800:1, but a mere 3:1. Doing 150 Gbps so you can have it uncompressed simply is not going to happen, at least not for consumer priced gear and not this decade. 80 Gbps with DSC should be good for 8K 30-bit at 180 Hz.
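A quick check of that last claim (ignoring blanking, so treat it as an upper bound):

```python
# Upper-bound check of "8K, 30-bit, 180 Hz with 3:1 DSC" (blanking ignored).
link_gbps = 77.37                    # UHBR 20 payload after 128b/132b encoding
frame_bits = 7680 * 4320 * 30        # one 8K frame at 10 bits per channel
max_refresh = link_gbps * 1e9 * 3 / frame_bits   # 3:1 compression
print(f"Max refresh with 3:1 DSC: ~{max_refresh:.0f} Hz")   # ~233 Hz, so 180 Hz fits
```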
It's not intrinsically 3:1. That's just what someone decided it should be. According to Willis' link (below) there's a rate control element and variable quantization. So, the compression ratio could be turned up or down. It'd be nice if they supported it at lower compression ratios, when the mode you're trying to use is *just* beyond the link's raw capacity.
Something to keep in mind: anytime you do single-pass CBR, you can get rate control artifacts, where the bitstream either overshoots or undershoots its target rate and quantization is increased or decreased, to compensate. There will surely be some test patterns designed around this, that will reveal very visible DSC artifacts.
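To illustrate the rate-control point, here is a toy single-pass CBR loop - emphatically not DSC's actual algorithm, just a hypothetical model showing how a burst of hard-to-compress blocks pushes quantization up until the rate buffer drains again:

```python
# Toy single-pass CBR rate controller (illustrative only, not DSC).
def encode_block(complexity, qp):
    # Hypothetical bit-cost model: harder blocks cost more bits, higher QP fewer.
    return max(1, int(complexity * 64 / (1 + qp)))

def cbr_encode(complexities, target_bits=8, qp=8, qp_min=0, qp_max=15):
    buffer_fullness = 0
    for c in complexities:
        bits = encode_block(c, qp)
        buffer_fullness += bits - target_bits
        if buffer_fullness > target_bits:        # overshooting: quantize harder
            qp = min(qp_max, qp + 1)
        elif buffer_fullness < -target_bits:     # undershooting: quantize less
            qp = max(qp_min, qp - 1)
        yield bits, qp

# A burst of complex blocks (think a noise-like test pattern) forces QP up.
for bits, qp in cbr_encode([1, 1, 8, 8, 8, 1, 1]):
    print(f"bits={bits:3d}  qp={qp}")
```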
FEC good, DSC bad. There is no technical reason to have these two things coupled together. I would rather not have some lossy encoder touching visual data. If the pipe really will be that fat then there is little need for it to be enabled. It would be nice if there were some easy way for a user to see, and perhaps set, some rules for the DP behavior. Like if a link sucks, does it silently auto-negotiate to a lower gear and use DSC to support the higher refresh rates? A soft failure is nicer than a hard one, but I'd rather know about it and be able to choose what happens.
There is very good reason to have them both. DSC is great technology, because for a given price it allows you to have a better image. People are for some reason comparing a given image with and without compression (i.e. same resolution and refresh), when they should compare the best image possible without compression to the best image possible with compression - for example, quadruple the luma resolution (lossless) with the same chroma resolution (again, lossless compared to the image without compression at a given bandwidth).
As for what the monitor/GPU will do when it detects a poor connection, that is up to the vendor. It might tell you, or it might not.
I know this argument. It misses the big picture. As implemented DSC is deceiving. Digital video links have been treated as transparent binary transfer pipes. There is no multi-line latency added, there is no non-deterministic addition of artifacts. You could send a 120 Hz white noise image and it will be the same image that the GPU created. That has value. Forced DSC all the time treats it as valueless.
To be clear: the option to enable DSC is good. Having it appear as a monitor mode is nice to have. Needing it in order to use FEC is a bad design decision. Not having it be a controllable feature is a scary prospect.
Are you sure DSC is necessary to use FEC? The link you posted below says the converse, which makes sense. If using compression, any errors will have a greater visual impact.
Also this webinar isn't textbook quality but it does answer a lot of questions about what DSC is. It's a little heavy for those without an information theory background.
You forgot to mention resolution here "The current versions of DisplayPort – 1.3 & 1.4 – offer up to 32.4 Gbps of bandwidth – or 25.9 Gbps after overhead – which is enough for a standard 16.7 million color (24-bit) monitor at up to 120Hz, or up to 98Hz for 1 billion+ (30-bit) monitors."
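Presumably those figures are for 3840x2160; a rough check (ignoring blanking, which is why these land a bit above the article's 120 Hz / 98 Hz):

```python
# Quick check of the quoted refresh limits, assuming 3840x2160 and no blanking.
effective_gbps = 25.9          # DP 1.3/1.4 HBR3 x4 after 8b/10b overhead
pixels = 3840 * 2160

for bpp in (24, 30):
    max_hz = effective_gbps * 1e9 / (pixels * bpp)
    print(f"4K at {bpp} bpp: up to ~{max_hz:.0f} Hz")   # ~130 Hz / ~104 Hz
```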
"Instead, the group envisions UHBR 13.5 and UHBR 20 being tethered setups: manufacturers would ship devices with an appropriate port/cable already attached. "
Would this be saying that devices will no longer have replaceable cables? Because that sounds terrible.
No they didn't. What they meant is that in order to accelerate the specification's release and adoption, they opted to use current cables, which are okay for most people. So when you need the full speed you'll be limited to tethered cables - for now - and future cable development can bring untethered cables that do full speed over longer distances. But again, because the full speed isn't needed yet, it's a better option to release the standard now and have actual products with it soon, so that when we do need the full speed we only need the cables then.
So next year's GPUs might have TB3/USB4 outputs. And 2021 GPUs might have DP 2.0 then.
Right, UHBR 10 uses current cables and ports to enable 40 Gbps. That's all good.
What I'm concerned about is UHBR 13.5 and UHBR 20, which they're saying will require tethered cables. If the tethered cable on a $x000 monitor gets damaged, how are consumers supposed to repair that?
I agree, optical is the way to go. There are some optical HDMI 2.1 cables on the market and they have great reviews from buyers. Thinking about getting one.
More like ultimate cinema resolution. 4k is already plenty for TV; I see no real reason to go beyond 8k. The gains are too small for too much of an increase in bandwidth and storage.
I disagree, I think 4K is already more than enough for the size of screen that'll fit in most homes. You just don't sit close enough for it to make a difference.
Oh, and just to add to this: most people aren't even watching good quality 4K content. Compare "4K" streams with 1080p Full HD blurays that have been mastered well - the bluray will still come out on top because of raw bitrate.
The DisplayPort main link is 4 bonded simplex channels, and has been from the get go. And multi-SST lets you use more than one main link to drive a single display. For externally cabled applications, more than 4 lanes is pretty hard to justify as cable diameter and cost increase more quickly than if you just dial up the signaling rate.
Plus, if you REEEEEALLY need Moar Bandwidth, Multi-Stream Transport has been in the standard for several versions, allowing multiple DP interfaces to be used for the same endpoint device (without the genlocking requirements of doing the same with a pixel stream interface like HDMI).
Just curious on your numbers; if I calculate 80 Gbit/s × (128/132), I get 77.57 Gbit/s, not 77.37. I know the 77.37 is on the VESA press release, but that could be a typo. Your article has a whole table:
UHBR 10: 40 Gbps raw, 38.69 Gbps effective
UHBR 13.5: 54 Gbps raw, 52.22 Gbps effective
UHBR 20: 80 Gbps raw, 77.37 Gbps effective
and all the numbers on the right side seem to be off by a little bit in the same way (52.22 instead of 54 × (128/132) = 52.36, and 38.69 instead of 40 × (128/132) = 38.79). I'm curious if those numbers are all from VESA, or if you calculated them yourself based on the 77.37 Gbit/s figure?
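For reference, redoing that arithmetic for all three rates:

```python
# Redoing the 128b/132b arithmetic for all three UHBR rates.
for name, raw_gbps in [("UHBR 10", 40), ("UHBR 13.5", 54), ("UHBR 20", 80)]:
    effective = raw_gbps * 128 / 132
    print(f"{name}: {raw_gbps} Gbps raw -> {effective:.3f} Gbps after encoding")
# Prints 38.788, 52.364 and 77.576 - all slightly above the table's
# 38.69 / 52.22 / 77.37, so those figures likely aren't a plain 128/132
# scaling of the raw rates.
```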
It was high time DisplayPort adopted 128b/132b encoding. Better late than never. So DP 2.0 is basically a simplex / half duplex variant of TB3, but apparently pushing all 4 lanes' worth of bandwidth in a single direction makes the cable situation even worse than TB3, I guess, at least when the lanes are at full speed.
Maybe a consumer optical solution, with even cheaper cables and modules (though these are already quite cheap) than those used for servers and data centers, should at last be developed, and economies of scale would then drop its price even further? A solution as cheap as Toslink but with much higher bandwidth, for computers rather than audio devices.
This is a welcome development, but also a clear sign of the rather worrying I/O wall that mainstream computing is seemingly running headlong into, leading to either dramatically increased prices, feature stagnation, or both. Not that feature stagnation is necessarily a bad thing (it could push engineering efforts towards _smarter_ features added rather than brute-forcing improvements through bandwidth), but it's a bit of a paradigm shift, which the industry seems a bit slow to accept.
Some examples: PCIe 4.0 requires higher motherboard trace quality, shorter traces (before requiring redrivers), more power, and fewer connectors in between the host and connected device. PCIe 3.0 is low power and can be extended through relatively inexpensive riser cables for surprising distances, even daisy-chaining cables. PCIe 5.0 (and thus 6.0) is likely to only make this worse. Internal connections will be faster (but more power hungry and expensive), but implementations will be somewhat less flexible. The rising cost of entry is also a definite issue - PCB manufacturing prices are already at commodity levels, and aren't going to drop due to increased demand for complex/high-quality boards.
USB 3.x is similar. USB 2.0 could run up to 5 meters on passive cabling (though lengths like that are known to have issues). USB 3.0/3.1G1 cut that to 3m. USB-C 3.1G2 cuts that to 1m (source: https://community.cypress.com/docs/DOC-10693). 1m is _utterly useless_ for anything except temporarily attached peripheral devices. Of course it's possible to some extent to make higher-quality cables that maintain said speeds over longer distances, but it's both expensive and difficult.
And now DP is going to limit itself to similar cable lengths to TB3? So you'll need your GPU to be within 1-1.5m of your monitor's input? That's an extreme limitation. Got your PC on the floor beneath your desk? It likely won't reach. Like to keep your desk clean? Sorry, no cable routing for you - it needs to be a straight shot to get there.
The computing world is backing itself into a corner, where practicality and usability are falling to the wayside in favor of brute-forcing in more performance. Some might say this is a necessity, and of course backwards compatibility alleviates some issues, but if that's the case these new standards are ultimately useless for the vast majority of users if they end up relying on falling back to an older standard anyhow. Then you're just paying more for nothing.
AFAIK the 20 Gbit/s per lane in Thunderbolt 3 is actually the payload. The TB3 interface signals at 20.625 Gbit/s with 128b/132b encoding (20.000 Gbit/s payload), while you're saying DP 2.0 actually signals at 20.000 Gbit/s with 128b/132b encoding (19.39 Gbit/s payload). That's pretty interesting to note they signal at slightly different frequencies, so it's not quite an exact copy-paste of the Thunderbolt 3 interface in terms of operation :P
I would prefer a single standardized video connector rather than having multiple types. HDMI seems better positioned to take the lead given how little DisplayPort is actually in use these days. There's more analog VGA out there than DVI and DP combined so maybe it's a better idea to just discontinue DP.
So we should all move to a proprietary licensed interface rather than a free and open one? Yeah, that's not a good idea, no matter HDMI's dominance. As for "how little DP is actually in use these days", it's in every single laptop with a Type-C USB plug (through DP alt mode, which is native, even if it's also easily converted to HDMI - as is every DP port out there), every PC monitor beyond $150 or so, and every serious GPU on the market. Sure, that's PCs only, and HDMI does seem to own everything else - I wouldn't be surprised if there were exclusivity deals out there that we don't know of. DP has a lot of advantages.
That's a bit dramatic isn't it? Every modern computer out there has HDMI, but not all of them have a DP and the fact that HDMI isn't "free and open" doesn't seem to have adversely impacted its broad adoption.
DP is all around a better tech. MST is ideal for connecting multiple monitors and for use in docking stations. TB3 supports DP natively; now with Titan Ridge it is dual DP 1.4 streams, which is ideal for multiple high resolution monitors. I don't know what bargain junk bin you are looking in that makes DP seem rare.
masimilianzo - Wednesday, June 26, 2019 - link
So we will see new TB3 controllers with DP2.0 support.Curios as to when we will have TB4 based on PCIe 4.0
brakdoo - Wednesday, June 26, 2019 - link
Probably not so soon. TB3 cables are already bad. Higher frequencies would be a pain in the ass.repoman27 - Wednesday, June 26, 2019 - link
Thunderbolt is not “based on PCIe” like that. Thunderbolt 3 controllers can be updated with PCIe 4.0 back ends.“Thunderbolt 4” will likely be reserved for an increase in the Thunderbolt signaling rate (currently at 20.625 Gbit/s), which has nothing to do with the PCIe generation of the back end.
masimilianzo - Thursday, June 27, 2019 - link
Can you elaborate on that? Sorry I am quite ignorant...Skeptical123 - Friday, June 28, 2019 - link
Thunderbolt is "”based on PCIe” like that". TB3 combines a PCIe x4 and two DP1.2 signals to reach a 40Gps limit provided the drivers and cable support it. This is a little confusing and took me a while to work out how it works because I could all the need info in one place. To help others I have included some references.““Fundamentally, Thunderbolt is a tunneling architecture designed to take a few underlying protocols, and combine them onto a single interface, so that the total speed and performance of the link can be shared between the underlying usages of these protocols – whether they are data, display, or something else”“ - linked from intel website
“Thunderbolt combines PCI Express (PCIe) and DisplayPort (DP) into two serial signals,” - wiki “Thunderbolt 3 uses PCIe x4 gen 3 data rate with 128kB header sizes. Two links of (4 lane) DisplayPort 1.2 consume 2x (4 x 5.4 Gbps) or 43.2 Gbps. For both of these numbers, the underlying protocol uses some data to provide encoding overhead which is not carried over the Thunderbolt 3 link reducing the consumed bandwidth by roughly 20 percent (DisplayPort) or 1.5 percent (PCI Express Gen 3). But regardless, adding both together gets you above 40 Gbps.“ If demand goes above 40Gbps the drivers hit their limit and will prioritize the DP single though this is rare. - thunderbolttechnology - intel links to this site on their TB3 page
So the thunderbolt drivers “tunnels” together upto 4 PCIe gen 3 lanes and two links of DP 1.2
HStewart - Wednesday, July 3, 2019 - link
With DP 2.0 aim at 2020 and fact that Thunderbolt is created by Intel and USB4 I believe is also 2020. I would expect that TB4 will work with PCIe 5.0Also it important to note the differences in TB3 and DP, TB3 is bidirectional because it more than display where DP is primary display. So I wonder if TB could be change to switch mode and allow all lanes in one direction which connected to display.
There is of course physical cable limitation of USB-C - so compatibility may limits - but hopefully this means TB 4 has new cable option that improves on that.
Flunk - Monday, July 8, 2019 - link
On consumer products? I think it will be a long while. 4K@120Hz is going to be fine for consumer applications for the foreseeable future. I really only see 8K being used for large commercial signage.zanon - Wednesday, June 26, 2019 - link
I kind of wish the industry would suck it up and finally start to really push for optical. Even decent pre-terminated OM4 fiber is now down to below $.69/foot in bulk, and only a bit more (~$0.75-$0.80/ft) as singles. OM3 is even less. OM4 is good for up to 300 feet (100m) with a standard 100GBASE module, and those are going for under $100 each at this point.Yes, an extra $100 on each side for the connector is still significant extra margin right now, but I doubt it will be in the context of 6k-8k monitors for a while or the GPUs needed to drive them for a while. The cable pricing represents major savings *and* far more flexibility, optical cables are thin and go so much farther. Optical transmission also uses a lot less power. And those prices also look to be within the range where massive economies of scale could drive them down significantly. It's not crazy infeasible anymore. It's represent a lot more future growth room too. It really seems like the hacks and compromises needed to keep pushing up bandwidth over external copper are starting to not be worth it, which is a drastic change from even a few years ago but here we are.
Heck, if we're thinking about reusing layers, it might be really interesting if they did just base a display standard around ethernet (or fibrechannel or something) so that all standard switching hardware and the like could be taken advantage of. Just plug a display into a wall keystone and be able to have full connectivity to wherever you want your computer to be elsewhere in the network with 100 Gbps full duplex? I can see uses for that at scale.
azfacea - Wednesday, June 26, 2019 - link
+1 for optical. what is the wisdom in preserving copper when you are paying 100$ for a dam usb cable (and even then not knowing if it will fry a certain port if you connect the wrong thing to the wrong port)zanon - Wednesday, June 26, 2019 - link
At least for regular networking and peripheral connections there are valid arguments to be made for the value of some power over the same cable too, though I wonder if in an optical data world if a hybrid optical/copper cable (ie., a fiber optic cable with 2-pair just wrapped next to it) would actually be that hard, it'd be less copper and far looser (cheaper) reqs since it'd be exclusively carrying power. But desktop screens, particularly high end desktop screens, always need their own PSU anyway so no bonus there. And like you say carrying a lot of power can also plain be a double edged sword, a lot of crummy USB-C cables out there can fry stuff (and in networking means interference and isolation issues, it's common to do fiber between even close buildings for example just for the optoisolation).If we were still talking $400 a pop for a 100G transceiver fine yeah that'd be too much, even with savings from vastly more manufacturing. But at $100 which might get shrunk down to $50-80 over a few years, that's getting to the point where it's practically paid for just by cheaper cables even ignoring every other advantage. Seems within striking range anyway, and initial monitors will probably all be in the $2000+ range too.
Optical was really tough to go consumer with 8-9 years ago say, back when Intel/Apple were doing the initial TB and it was briefly "LightPeak" instead. Now though?
CaedenV - Wednesday, June 26, 2019 - link
1) Bring back the LightPeak name!2) I like the hybrid idea, especially for things like external HDDs where you don't want a separate power cable. High speed optical, paired with a basic copper for power should give the cable some endurance, while being cheaper than a crazy high-speed copper cable alone.
dgingeri - Thursday, June 27, 2019 - link
100G transceivers haven't cost more than $50 to manufacturer for over 2 years now. The companies charging $400 for them are just ripping people off because they're rare. (Of course, Cisco is still charging $1000 for 10G transceivers that cost $10 to make.) Besides, we don't need to use current network standards for optical connections. We could use a multi-wavelength format for a 8-bit parallel signal with 10G encoding per wavelength across a single fiber, and use a single wavelength at 1Gb for the return control signals. We wouldn't have to use expensive MTP cables that way.The_Assimilator - Thursday, June 27, 2019 - link
> But desktop screens, particularly high end desktop screens, always need their own PSU anyway so no bonus there.The most common case for desktop monitors is 1080p res and under 100W power draw, which can easily be satisfied by a single USB-C cable with PD. I'm hoping this configuration becomes ubiquitous over the next few years, thus driving further innovation in USB-C/Thunderbolt to eventually be able to support any and all monitors with a single cable.
Elstar - Thursday, June 27, 2019 - link
> The most common case for desktop monitors is 1080p res and under 100W power draw, which can easily be satisfied by a single USB-C cable with PD. I'm hoping this configuration becomes ubiquitous over the next few years, thus driving further innovation in USB-C/Thunderbolt to eventually be able to support any and all monitors with a single cable.Apple has had a long-term love affair with monitors that either receive their power from the computer or provide power to the computer. The problem with this design is that either solution makes something more expensive for everybody *just in case* the ideal scenario is used by some percentage of users. (And not just more expensive, but more cooling for either the computer or monitor is required depending on which power supply is taxed with extra work for some other device.)
TheUnhandledException - Thursday, June 27, 2019 - link
I think there is zero chance of that happening outside of niche uses like protable monitors. For general use assumming the PC can supply power is dubious. Say monitor uses 60W. It has no power cord and only a usb-c connection. What if a given laptop support PD but only 45W? Oops. Or what if it can supply 100W but you want to connect two monitors. Nobody is going to design a laptop with a 400W PSU just in case someone might want to plug in three 100W devices (in this case monitors).DanNeely - Wednesday, June 26, 2019 - link
The problem with optical is that consumers will try to treat it like copper cable and try putting very tight bends that snap the glass fibers and destroy the cable.zanon - Wednesday, June 26, 2019 - link
I don't see why that can't be solved with armoring/inert plastic or kevlar side strings or whatever. Direct burial cable has to deal with a lot of abuse already, not just in the ground but because it needs to be able to go in by low paid sloppy contractors yanking the stuff around. I just did a bunch and it's tough stuff, sacrifices some of the real thinness of standard OM3 but it's essentially impossible to bend tight enough to cause any problems.willis936 - Wednesday, June 26, 2019 - link
Minimum radius is important. You can't just coat a piece of fiber in enough rubber to guarantee the minimum radius is met. It's wasteful and no one has space for a 6 inch thick tube of rubber in their living room. People just need to learn to treat fiber nicely, which they would if it was the norm to have it around the house. You only need to break a few pieces of fiber to know what not to do.zanon - Wednesday, June 26, 2019 - link
>Minimum radius is important.Sure, but rule of thumb is around 10x diameter, so for a basic simplex cable that should only be around an inch. It's definitely possible to make a cable that is hard to accidentally bend that far, and it doesn't take 6" of rubber at all. There is also such a thing as "bend insensitive fiber", which of course isn't perfectly insensitive, but has a low refractive index trench around the main fiber. Fiber optic can also be made out of polymer and that are more robust vs bending/stretching. If we're imagining a scenario where the consumer industry moves towards this, there are, even beyond education, plenty of avenues to make tougher end user cable.
These things tend to result in reduced performance and thus distance, but remember the context. It's vs active copper cables that are expensive, have unique failure modes themselves (since they have chips inside them) and even then only go a 6-9 feet. A rougher use armored polymer fiber cable that "only" goes 30 feet for ~$15-20 doesn't look so bad.
>You only need to break a few pieces of fiber to know what not to do.
True, like both you and CaedenV said there will also just be a learning process perhaps. Of course, that'd be a lot easier if they were dealing with $2-20 fiber cables rather then $25-100 chip copper cables!
IndianaKrom - Wednesday, June 26, 2019 - link
I wonder if the cheap optical cable used in toslink/spdif could handle that kind of bandwidth over short distances. Does it *have* to be fiber, or will some dirt cheap polymer optical cable be "good enough" for ~10 meters?Someone should try and see how many gbps you can stuff through one using a more high powered transmitter than the cheap laser LED used for toslink/spdif.
dgingeri - Thursday, June 27, 2019 - link
It can't. There is a flaw in the middle of certain kinds of glass when used in fiber optics, and while audio signals don't have an issue with it (at a little over 1Gb bandwidth), high bandwidth network data does have an issue with it.However, OM3 has far less of that flaw, and is now pretty cheap. So, no need to go with the cheap toslink cable types. I got my 3m 10G OM3 cables for <$10.
mode_13h - Thursday, July 4, 2019 - link
Toslink is an optical format that uses S/PDIF signalling. S/PDIF, itself, is 75-ohm coax. So, if you mean just the optical format, then just say Toslink.mode_13h - Thursday, July 4, 2019 - link
What if you coat it in rubber and then a plastic layer with a helical gap cut in it? The ratio of plastic vs. gap would basically constrain the cable to whatever bending radius you want.CaedenV - Wednesday, June 26, 2019 - link
Meh, they will learn after the first cable or two.haukionkannel - Wednesday, June 26, 2019 - link
They will sue the cable company after They bend their cable... each time...tuxRoller - Wednesday, June 26, 2019 - link
This was my first thought regarding the use of fiber in ce. So, I looked up some values and it seems the bending radius varies between 10-15x the outer diameter of the cable, with the lower value representing a cable under no tension. Carefully chosen sheathing can mitigate these issues.mode_13h - Thursday, July 4, 2019 - link
People seem to forget about Toslink, but this wouldn't be the first time fiber optics has been used in consumer A/V.Crouching behind an equipment rack, I recently dropped the receiving end of an active Toslink cable, and was pleasantly surprised by how nice it was to have a red glow to show me exactly where the cable fell.
DigitalFreak - Wednesday, June 26, 2019 - link
I agree. Tethered cables seems like a really bad option. Just bite the bullet and go optical. I do worry that there is some technological problem with TB3 optical though, since I have yet to see a TB3 optical cable.zanon - Wednesday, June 26, 2019 - link
>I do worry that there is some technological problem with TB3 optical though, since I have yet to see a TB3 optical cable.I think it's just lack of market and that the standard was only recently opened up. Corning did announce TB3 optical back at NAB I think? Sometime recently anyway, for in the next few months. They were the only ones who bothered with TB1/2 optical too IIRC, or maybe one other budget brand eventually did.
I guess it probably is more trouble though to shove a transceiver on each end of a cable rather then just having it be part of a system in the first place and using normal OM3/OM4. Maybe someone will just do an "adapter" instead, TB3<>40GBASE-SR4 and then you can just use your own cables. Then we can enjoy paying an extra TB premium for something that should have just been the standard :\.
melgross - Wednesday, June 26, 2019 - link
Look around. A number of companies have been offering optical TB3 cables for some time. They are expensive though.zanon - Wednesday, June 26, 2019 - link
Are you sure you're not confusing them with TB1/TB2 optical cables? I checked just recently and I couldn't find any TB3 ones yet.CaedenV - Wednesday, June 26, 2019 - link
The expense is that they have the optical encoder/decoder built into the cable. If it is a dumb light-pipe and the optical bits are in the computer and peripheral then it should not be terribly expensive.DanNeely - Wednesday, June 26, 2019 - link
The real problem with tethered is that if you damage the cable you have to replace the entire device, not just a potentially expensive cableInvidiousIgnoramus - Thursday, June 27, 2019 - link
That's not actually true. Many tethered cables are merely attached inside the casing, only requiring you to open the chassis to replace it. Sure, you're at the mercy of the manufacturer for replacements, but you by no means need to scrap the entire device.mode_13h - Thursday, July 4, 2019 - link
> Many tethered cables are merely attached inside the casingExcept, in this case, wouldn't the reason for tethering it be to avoid the connector?
I think those examples you cite are merely tethered for cost/convenience reasons - not signal integrity.
mode_13h - Thursday, July 4, 2019 - link
I hate the tethered cable idea. If you break the cable, send the device in for repair? No thanks!zodiacfml - Wednesday, June 26, 2019 - link
Same thoughts since Light Peak was posted here in Anandtech. I think, it is utilized quite a few times in these extremely thin OLED TVs which they call wallpaper TVs. Essentially, the TV has the active components on a separate device through fiber with copper wires.Or is it the cost or viability of transceivers and not cabling?
WinterCharm - Wednesday, June 26, 2019 - link
It's the transceiver cost. It's very high.Fiber optic cabling is less than $1 / ft. The problem is the components on each end, that you plug it into, can cost $50-100 or more, if you want really high data rates.
zanon - Wednesday, June 26, 2019 - link
>$50-100Yeah, but once we're talking $20-40 for active copper cables that still max out at pretty short lengths that seems like a less compelling cost. Same if we're looking at a market that will be, at least for a while, confined to displays that are thousands of dollars. Yes to see widespread adoption everywhere that would have to come down eventually, but vs cutting edge copper high bandwidth interfaces it seems to be in the reasonable range now.
CaedenV - Wednesday, June 26, 2019 - link
On computers and TVs that cost several hundred if not thousands of dollars?Ya... they can build that in.
Reflex - Wednesday, June 26, 2019 - link
A computer motherboard is typically $40-80 in bulk for an OEM. As pointed out, just 40GB optical modules are $40, meaning double the cost of the cheapest boards and 50% the cost on a more expensive board. That is not even close to reasonable for an OEM.For those wanting to do this and maintain power over the cable it will be even more expensive on both the system and the cable side.
Going optical is not cheap. Period.
zanon - Wednesday, June 26, 2019 - link
Uh, no. FFS. We're not talking about ultra budget machines here, except on like a 10 year horizon. Look at what machines have TB3 for example right now. DP2.0 and 6k-8k screens are not going to be the $300 Walmart special on launch, nor would anyone ever build a standard aimed at that. Every standard started farther up the stack then that and then worked down as mass manufacturing brings down the price. The question is if it's low enough to get the ball going, or if it's still in ultra rarified space. Optical is definitely converging on what high end copper is at this point, even without consumer market scale.mode_13h - Thursday, July 4, 2019 - link
The main question is how cheap optical transceivers can eventually get. So, what's responsible for the high price - any exotic materials contained therein? That could be a problem, since the price might only *rise* with demand.mode_13h - Thursday, July 4, 2019 - link
I don't see motherboard connectors for integrated graphics as being a leading adopter. More like $1k+ graphics cards and monitors.willis936 - Wednesday, June 26, 2019 - link
$40 for 40GBASE-SR4 module.$100 for 100GBASE-SR4 module.
Daeros - Thursday, June 27, 2019 - link
Came here to say this - those are non-sale, everyday retail prices from fiberstore, and they can charge that and make a few bucks, meaning it's probably around $25 in BOM to add 40G to a device.dgingeri - Thursday, June 27, 2019 - link
+1 for optical cables here. I love optical cables: no interference, smaller, lighter, only two pins, higher bandwidth overall.The downside is how fragile they are. I had to replaces OM2 and OM3 cables in my test lab all the time. We had a $1200/month budget for new and replacement cables alone. My home network is 10G in OM3, and I haven't had to replace a cable in 3 and a half years, but I do just run the cables around the corners of the room. Having any cables in a potentially hazardous location could be expensive. Simply stepping on one could render it useless. (In my lab, the engineers and QA testers would run them on the floor across aisles, and of course, they'd get stepped on or rolled over and broken often.) If we get optical cables for home use, we'd better get something more durable than the current version.
mode_13h - Thursday, July 4, 2019 - link
OMG dude, just get those rubber cable guards at Home Depot. They even make nice wide ones that you can access from the top. Better yet, hang cable trays from the ceiling.Make anyone who runs a bare cable on the floor, across an aisle, install the next replacement. It's not just a waste of money, but also a safety hazard!
sharath.naik - Thursday, June 27, 2019 - link
Optical is good for long-distance signaling. but for short distance, it is a waste of power and will show no benefit. But on the other hand, it would be good to standardize on one. the only question is could you standardize on optical, which I doubt as for everyday plugin and plug out like portable cable, optical will not match metal cable for practical world usability. Cable is here to stay. with optical as an addon for specialized use.mode_13h - Thursday, July 4, 2019 - link
I like the concept of using Ethernet, but if your grousing about the cost of transceivers (or anticipating such), imagine the complaints about the cost of 100 Gbps Ethernet switches!qap - Wednesday, June 26, 2019 - link
Very much looking forward to this upgrade. Monitor selection today is IMO very limited by the interface. I would like to see 8k/10bit/120Hz 40" screen, but that will not come until we have a reasonable way to connect it. Not in mainstream.Btw - how is the usb-c over DP/alternate mode handled if all 4 high bandwidth thunderbolt channels are unidirectional?
imaheadcase - Wednesday, June 26, 2019 - link
Limited by the interface?? Monitors are limited by the screen size and tech not the interface. Its the reason monitor sales have been flat, no one has any incentive to upgrade when everything out (beside the $2k+) monitors don't bring anything new to table.melgross - Wednesday, June 26, 2019 - link
The interface limits refresh rates, resolution and bits for c9lor depth. Did you bother to read the article?When Apple designed to 5k iMac, they had to design their own video controller because nobody else had one.
imaheadcase - Wednesday, June 26, 2019 - link
Except none of that maters for anything currently on the market of foreseeable future...Spunjji - Wednesday, June 26, 2019 - link
Dell have an 8K monitor out already, but it requires two cables. Apple have a 6K monitor coming out that will either require two cables or DSC. So it matters for things currently on the market, and the "foreseeable future" is full of products that this is also relevant to.repoman27 - Wednesday, June 26, 2019 - link
Nope, 1 Thunderbolt cable (carrying 2 DP HBR3 streams). If your GPU / Thunderbolt controller doesn’t do HBR3, it has a scaler that will allow you to do 5K using HBR2 instead. No DSC involved.mode_13h - Thursday, July 4, 2019 - link
Scaler? Why would you buy a high-res monitor just to use a scaler?repoman27 - Wednesday, June 26, 2019 - link
For the Apple 6K display. The Dell requires 2 DP cables, and still can’t do 10 bpc at 60 Hz.The_Assimilator - Thursday, June 27, 2019 - link
So it matters for a grand total of two, incredibly niche, products. Gotcha.mode_13h - Thursday, July 4, 2019 - link
HDMI 2.1 has been out for 2.5 years, already!mode_13h - Thursday, July 4, 2019 - link
Sorry, that's when it was announced. It was actually released about 1.75 years ago.qap - Wednesday, June 26, 2019 - link
There are 8k panels but without mass production they are and will be VERY expensive. And without connection via single and cheap port (one of the requirements, not only one) they will not became mass market solutions. Vicious circle.Good example are 4k panels. There was a 4k panel sold by IBM since 2001. But because it needed non-standard 2xdual link DVI connection and even then it supported maximum of 48Hz it was not meant for mass market (around 7000usd). Because it was not mass market product, it had instead of ASICs several high-end FPGAs inside (around 1000usd each) that drove the the price up and made it even more niche product.
mode_13h - Thursday, July 4, 2019 - link
Didn't it also need a special graphics card to drive it?
CaedenV - Wednesday, June 26, 2019 - link
Get a bunch of monitors that just have a USB-C and a power cable instead of a fat DVI or VGA, and my boss will be interested in upgrading our law offices. Anything that makes the tech more invisible is a huge selling point.
Kraszmyl - Wednesday, June 26, 2019 - link
Can do one better: there are already displays that are pure USB-C. I have a 16-inch one I carry around as a second display for my laptop, and it's exceedingly handy.
nevcairiel - Wednesday, June 26, 2019 - link
If you use all 4 channels for display data, there is naturally no room for USB left.
brakdoo - Wednesday, June 26, 2019 - link
You can still use USB 2.0.
ksec - Wednesday, June 26, 2019 - link
Same here. I was hoping it would be at least 6K/10-bit/120Hz, but even that will require ~88 Gbps, and 8K would need ~150 Gbps. And these are effective bandwidth figures, not raw. I guess we will have to wait for DisplayPort 3.0 for those... : /
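For anyone wondering where figures like that come from, here is a rough back-of-the-envelope sketch (my own, not from the article); the exact totals depend on assumptions about blanking intervals and protocol overhead, which is why estimates like the ~88/~150 Gbps above vary a bit:

```python
# Rough uncompressed bandwidth estimates for a couple of display modes.
# Assumptions (mine, for illustration): 10 bits per color, 4:4:4 chroma,
# 6K taken as 6016x3384, and a ~10% allowance for blanking intervals.
# Real timings (CVT-R2, CTA-861) and link overhead will shift these numbers.

def raw_gbps(h, v, hz, bpp=30, blanking=1.10):
    """Approximate uncompressed link bandwidth in Gbit/s."""
    return h * v * hz * bpp * blanking / 1e9

modes = {
    "6K (6016x3384) 10-bit 120Hz": (6016, 3384, 120),
    "8K (7680x4320) 10-bit 120Hz": (7680, 4320, 120),
}

for name, (h, v, hz) in modes.items():
    print(f"{name}: ~{raw_gbps(h, v, hz):.0f} Gbps uncompressed")

# Prints roughly 81 Gbps for 6K and 131 Gbps for 8K with these assumptions --
# either way, well beyond DP 2.0's 77.37 Gbps effective rate without DSC.
```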
Spunjji - Wednesday, June 26, 2019 - link
That's what DSC is for!
ksec - Wednesday, June 26, 2019 - link
I don't like the idea of "visually lossless", which is like saying AAC @ 320Kbps is audibly lossless. Not a big problem for gaming or normal usage. But if you bought an Apple Pro Display XDR that is capable of 120Hz (ProMotion), I don't think having the connection compressed is doing it any good.
TheUnhandledException - Thursday, June 27, 2019 - link
No, it is more like saying AAC at 2 Mbps is nearly lossless, and... it is. DSC is a compression ratio of roughly 3:1 - not 30:1 or 800:1, but a mere 3:1. Doing 150 Gbps so you can have it uncompressed simply is not going to happen, at least not for consumer-priced gear and not this decade. 80 Gbps with DSC should be good for 8K 30-bit at 180 Hz.
Vitor - Friday, June 28, 2019 - link
It would be more like AAC at 448kbps or 512kbps. And that's basically lossless - of course, if we are talking about 44.1 or 48kHz stereo.
mode_13h - Thursday, July 4, 2019 - link
It's not intrinsically 3:1. That's just what someone decided it should be. According to Willis' link (below) there's a rate control element and variable quantization, so the compression ratio could be turned up or down. It'd be nice if they supported it at lower compression ratios when the mode you're trying to use is *just* beyond the link's raw capacity. Something to keep in mind: anytime you do single-pass CBR, you can get rate-control artifacts, where the bitstream either overshoots or undershoots its target rate and quantization is increased or decreased to compensate. There will surely be some test patterns designed around this that reveal very visible DSC artifacts.
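To illustrate the rate-control behavior being described, here is a toy single-pass CBR sketch of my own - explicitly not DSC's actual algorithm - where a buffer-fullness measure drives the quantization step up and down:

```python
# Toy single-pass CBR rate controller: quantization rises when the bitstream
# overshoots its per-block budget and falls when it undershoots.
# Purely illustrative; DSC's real rate control is more sophisticated.

def simulate_cbr(block_costs, budget_per_block, qp_min=0, qp_max=15):
    qp = 4                      # starting quantization parameter
    buffer_bits = 0.0           # how far ahead of / behind budget we are
    history = []
    for cost in block_costs:
        emitted = cost / (1 + qp)                # higher qp -> fewer bits (toy model)
        buffer_bits += emitted - budget_per_block
        if buffer_bits > budget_per_block:       # overshooting: quantize harder
            qp = min(qp_max, qp + 1)
        elif buffer_bits < -budget_per_block:    # undershooting: relax quantization
            qp = max(qp_min, qp - 1)
        history.append((emitted, qp))
    return history

# A noisy, hard-to-compress region (think white-noise test pattern) forces qp up,
# which is exactly where visible artifacts would show.
costs = [100] * 10 + [500] * 10 + [100] * 10
for i, (bits, qp) in enumerate(simulate_cbr(costs, budget_per_block=120)):
    print(f"block {i:2d}: {bits:6.1f} bits, qp={qp}")
```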
mode_13h - Thursday, July 4, 2019 - link
And will you be using an entire DGX Station to drive that thing? Seriously, 8K at 120 Hz? It's still hard enough to render 4K at 120 Hz!
willis936 - Wednesday, June 26, 2019 - link
FEC good
DSC bad
There is no technical reason to have these two things coupled together. I would rather not have some lossy encoder touching visual data. If the pipe really will be that fat, then there is little need for it to be enabled. It would be nice if there were some easy way for a user to see, and perhaps set, some rules for the DP behavior. Like, if a link sucks, does it silently auto-negotiate to a lower gear and use DSC to support the higher refresh rates? A soft failure is nicer than a hard one, but I'd rather know about it and be able to choose what happens.
qap - Wednesday, June 26, 2019 - link
FEC necessary
DSC great
There is a very good reason to have them both. DSC is great technology, because for a given price it allows you to have a better image. People are for some reason comparing a given image with and without compression (i.e. same resolution and refresh), when they should compare the best image without compression against the best image with compression - for example, quadruple the luma resolution (lossless) with the same chroma resolution (again lossless, compared to the uncompressed image at the same bandwidth).
As for what the monitor/GPU will do when it detects a poor connection, that's up to the vendor. It can tell you, or it might not.
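A quick way to see the "best image per unit of bandwidth" point - my own illustration, using nominal bits-per-pixel figures for a 10-bit-per-channel source:

```python
# Average link bits per pixel, comparing chroma subsampling against DSC at
# its typical ~3:1 ratio. Nominal figures for illustration; real links add
# blanking and protocol overhead on top of this.

formats = {
    "4:4:4 uncompressed": 30,        # 3 x 10 bits per pixel
    "4:2:2 uncompressed": 20,        # chroma halved horizontally
    "4:2:0 uncompressed": 15,        # chroma quartered
    "4:4:4 with DSC ~3:1": 30 / 3,   # full chroma resolution, compressed
}

link_gbps = 77.37  # DP 2.0 UHBR 20 effective rate (from the article)
for name, bpp in formats.items():
    mpix_per_s = link_gbps * 1e9 / bpp / 1e6
    print(f"{name:22s}: {bpp:5.1f} bpp -> ~{mpix_per_s:,.0f} Mpixels/s")
```

With these assumptions, DSC at 4:4:4 carries more pixels per second over the same link than even uncompressed 4:2:0 subsampling does.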
willis936 - Wednesday, June 26, 2019 - link
I know this argument. It misses the big picture. As implemented, DSC is deceiving. Digital video links have been treated as transparent binary transfer pipes: there is no multi-line latency added, there is no non-deterministic addition of artifacts. You could send a 120 Hz white-noise image and it will be the same image that the GPU created. That has value. Forcing DSC on all the time treats it as valueless.
willis936 - Wednesday, June 26, 2019 - link
To be clear: the option to enable DSC is good. Having it appear as a monitor mode is nice to have. Needing it to use FEC is a bad design decision. Not having it be a controllable feature is a scary prospect.
mode_13h - Thursday, July 4, 2019 - link
Are you sure DSC is necessary to use FEC? The link you posted below says the converse, which makes sense: if using compression, any errors will have a greater visual impact.
willis936 - Wednesday, June 26, 2019 - link
Also, this webinar isn't textbook quality, but it does answer a lot of questions about what DSC is. It's a little heavy for those without an information theory background: https://www.quantumdata.com/assets/displayport_dsc...
mode_13h - Thursday, July 4, 2019 - link
Thanks. I'd add that anyone familiar with image or video compression techniques will likely find it very comprehensible.
mckirkus - Wednesday, June 26, 2019 - link
You forgot to mention resolution here: "The current versions of DisplayPort – 1.3 & 1.4 – offer up to 32.4 Gbps of bandwidth – or 25.9 Gbps after overhead – which is enough for a standard 16.7 million color (24-bit) monitor at up to 120Hz, or up to 98Hz for 1 billion+ (30-bit) monitors."
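(As the reply below notes, those figures are for 4K.) A quick sanity check, assuming roughly 8% timing overhead for reduced blanking - my own approximation, not the article's exact math:

```python
# Back-of-the-envelope check of the quoted DP 1.3/1.4 figures, assuming
# 4K (3840x2160) and ~8% extra for reduced-blanking timing overhead.

effective_gbps = 25.92          # 32.4 Gbps raw minus 8b/10b overhead
pixels = 3840 * 2160
blanking = 1.08                 # rough allowance; exact CVT-R2 timings differ

for bpp in (24, 30):
    max_hz = effective_gbps * 1e9 / (pixels * bpp * blanking)
    print(f"{bpp}-bit 4K: up to ~{max_hz:.0f} Hz")

# Prints ~121 Hz for 24-bit and ~96 Hz for 30-bit -- in line with the
# article's 120 Hz / 98 Hz figures; the exact values depend on the
# blanking assumption.
```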
Spunjji - Wednesday, June 26, 2019 - link
They did! Spoiler alert: those figures refer to 4K resolution. I have spent way too long on the DisplayPort Wiki. D:
Mr Perfect - Wednesday, June 26, 2019 - link
"Instead, the group envisions UHBR 13.5 and UHBR 20 being tethered setups: manufacturers would ship devices with an appropriate port/cable already attached. "Would this be saying that devices will no longer have replaceable cables? Because that sounds terrible.
Xajel - Wednesday, June 26, 2019 - link
No, they didn't. What they meant is that in order to accelerate the specification's release and adoption, they opted to use current cables, which are okay for most people. So when you need the full speed, you'll be limited to tethered cables - for now; future cable development can bring untethered cables that run at full speed over longer distances. But because the full speed isn't needed yet, it's better to release the standard now and have actual products with it soon, so that when we do need the full speed we only need new cables. So next year's GPUs might have TB3/USB4 outputs, and 2021 GPUs might have DP 2.0.
Mr Perfect - Thursday, June 27, 2019 - link
Right, UHBR 10 uses current cables and ports to enable 40Gbps. That's all good. What I'm concerned about is UHBR 13.5 and UHBR 20, which they're saying will require tethered cables. If the tethered cable on a $x000 monitor gets damaged, how are consumers supposed to repair that?
Vitor - Wednesday, June 26, 2019 - link
I agree optical is the way to go. There are some optical HDMI 2.1 cables on the market and they have great reviews from buyers. Thinking about getting one.
mdrejhon - Wednesday, June 26, 2019 - link
Just a heads up: DisplayPort 2.0 now has enough bandwidth to support a 1000 Hz refresh rate (the spec has no refresh rate limitation): https://www.blurbusters.com/displayport-2-0-announ...
mdrejhon - Wednesday, June 26, 2019 - link
8K 60Hz and 1080p 1000Hz are both nearly the same in pixels/sec -- 2 billion pixels/sec transmitted over the cable!
InvidiousIgnoramus - Thursday, June 27, 2019 - link
It was interesting when they tested that 4K panel at what, 360p500?
nandnandnand - Wednesday, June 26, 2019 - link
With DSC, you can get to what, 16K@60Hz?
GreenReaper - Wednesday, June 26, 2019 - link
Sounds good. Maybe not useful for video yet, but I run an art site accepting 16K pictures (up to 36MB).
Ryan Smith - Wednesday, June 26, 2019 - link
Yep. 16K@60Hz and 30 bpp 4:4:4 HDR.
urbanman2004 - Wednesday, June 26, 2019 - link
TL;DR. The reason I prefer reading about similar tech news updates on other platforms: https://www.pcgamer.com/displayport-20-has-enough-...
AnTech - Wednesday, June 26, 2019 - link
16K will be the ultimate TV resolution.
Vitor - Thursday, June 27, 2019 - link
More like the ultimate cinema resolution. 4K is already plenty for TV; I see no real reason to go beyond 8K. The gains are too small for such a large increase in bandwidth and storage.
piroroadkill - Tuesday, July 2, 2019 - link
Cinema for sure is the only place where such a high resolution could be beneficial.
piroroadkill - Tuesday, July 2, 2019 - link
I disagree, I think 4K is already more than enough for the size of screen that'll fit in most homes. You just don't sit close enough for it to make a difference.
piroroadkill - Tuesday, July 2, 2019 - link
Oh, and just to add to this: most people aren't even watching good quality 4K content. Compare "4K" streams with 1080p Full HD Blu-rays that have been mastered well - the Blu-ray will still come out on top because of raw bitrate.
mode_13h - Thursday, July 4, 2019 - link
It's not only bitrate - if the 1080p signal used the full bandwidth, then you need a higher-resolution display to properly reconstruct it. For the same reason, 8K displays will be the best way to appreciate well-mastered 4K content. That's the only reason they make sense in the home.
PixyMisa - Wednesday, June 26, 2019 - link
Hi DisplayPort engineers. The magic words are channel bonding.
repoman27 - Wednesday, June 26, 2019 - link
repoman27 - Wednesday, June 26, 2019 - link
The DisplayPort main link is 4 bonded simplex channels, and has been from the get-go. And multi-SST lets you use more than one main link to drive a single display. For externally cabled applications, more than 4 lanes is pretty hard to justify, as cable diameter and cost increase more quickly than if you just dial up the signaling rate.
edzieba - Thursday, June 27, 2019 - link
Plus, if you REEEEEALLY need Moar Bandwidth, Multi-Stream Transport has been in the standard for several versions, allowing multiple DP interfaces to be used for the same endpoint device (without the genlocking requirements of doing the same with a pixel-stream interface like HDMI).
mode_13h - Thursday, July 4, 2019 - link
As of 2.1, HDMI is now packet-based.
Glenwing - Wednesday, June 26, 2019 - link
Just curious on your numbers; if I calculate 80 Gbit/s × (128/132), I get 77.57 Gbit/s, not 77.37. I know the 77.37 is on the VESA press release, but that could be a typo. Your article has a whole table:
UHBR 10: 40 Gbps raw, 38.69 Gbps effective
UHBR 13.5: 54 Gbps raw, 52.22 Gbps effective
UHBR 20: 80 Gbps raw, 77.37 Gbps effective
and all the numbers on the write side seem to be off by a little bit in the same way (52.22 instead of 54 × (128/132) = 52.36, and 38.69 instead of 40 × (128/132) = 38.79). I'm curious if those numbers are all from VESA, or if you calculated them yourself based on the 77.37 Gbit/s figure?
Glenwing - Wednesday, June 26, 2019 - link
and by "write side", I of course mean "right side" :3
Ryan Smith - Thursday, June 27, 2019 - link
There is also a bit of overhead from FEC. So that's where the small amount of missing bandwidth is going.
Glenwing - Thursday, June 27, 2019 - link
Thanks!
Santoval - Thursday, June 27, 2019 - link
It was high time DisplayPort adopted 128b/132b encoding. Better late than never. So DP 2.0 is basically a simplex / half-duplex variant of TB3, but apparently driving all 4 lanes in a single direction makes the cable situation even worse than TB3, I guess, at least when the lanes are at full speed. Maybe a consumer optical solution with even cheaper cables and modules (though these are already quite cheap) than those used for servers and data centers should at last be developed, and economies of scale would then drop its price even further? A solution as cheap as Toslink, but with much higher bandwidth and for computers rather than audio devices.
Valantar - Thursday, June 27, 2019 - link
This is a welcome development, but also a clear sign of the rather worrying I/O wall that mainstream computing is seemingly running headlong into, leading to either dramatically increased prices, feature stagnation, or both. Not that feature stagnation is necessarily a bad thing (it could push engineering efforts towards _smarter_ features added rather than brute-forcing improvements through bandwidth), but it's a bit of a paradigm shift, which the industry seems a bit slow to accept. Some examples: PCIe 4.0 requires higher motherboard trace quality, shorter traces (before requiring redrivers), more power, and fewer connectors in between the host and connected device. PCIe 3.0 is low power and can be extended through relatively inexpensive riser cables for surprising distances, even daisy-chaining cables. PCIe 5.0 (and thus 6.0) is likely to only make this worse. Internal connections will be faster (but more power hungry and expensive), but implementations will be somewhat less flexible. The rising cost of entry is also a definite issue - PCB manufacturing prices are already at commodity levels, and aren't going to drop due to increased demand for complex/high-quality boards.
USB 3.x is similar. USB 2.0 could run up to 5 meters on passive cabling (though lengths like that are known to have issues). USB 3.0/3.1G1 cut that to 3m. USB-C 3.1G2 cuts that to 1m (source: https://community.cypress.com/docs/DOC-10693). 1m is _utterly useless_ for anything except temporarily attached peripheral devices. Of course it's possible to some extent to make higher-quality cables that maintain said speeds over longer distances, but it's both expensive and difficult.
And now DP is going to limit itself to similar cable lengths to TB3? So you'll need your GPU to be within 1-1.5m of your monitor's input? That's an extreme limitation. Got your PC on the floor beneath your desk? It likely won't reach. Like to keep your desk clean? Sorry, no cable routing for you - it needs to be a straight shot to get there.
The computing world is backing itself into a corner, where practicality and usability are falling by the wayside in favor of brute-forcing in more performance. Some might say this is a necessity, and of course backwards compatibility alleviates some issues, but if that's the case, these new standards are ultimately useless for the vast majority of users if they end up relying on falling back to an older standard anyhow. Then you're just paying more for nothing.
minde - Thursday, June 27, 2019 - link
Maybe the next-gen 7nm Ampere NVIDIA Quadro GPUs will have PCIe 4 and DP 2.0.
Glenwing - Thursday, June 27, 2019 - link
AFAIK the 20 Gbit/s per lane in Thunderbolt 3 is actually the payload. The TB3 interface signals at 20.625 Gbit/s with 128b/132b encoding (20.000 Gbit/s payload), while you're saying DP 2.0 actually signals at 20.000 Gbit/s with 128b/132b encoding (19.39 Gbit/s payload). That's pretty interesting to note they signal at slightly different frequencies, so it's not quite an exact copy-paste of the Thunderbolt 3 interface in terms of operation :P(Source for TB3 operation: https://www.keysight.com/upload/cmc_upload/All/Thu...
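For what it's worth, a quick check of the 128b/132b arithmetic being discussed here and a few comments up - my own sketch; the small residual on the DP 2.0 side is presumably the FEC overhead Ryan mentions above:

```python
# 128b/132b payload arithmetic for TB3 vs. DP 2.0 UHBR 20.
# Rates taken from the comments/article; the "residual" line just quantifies
# how much extra overhead (e.g. FEC) the published DP 2.0 figure implies.

def payload(signaling_gbps):
    """Payload rate after 128b/132b line coding."""
    return signaling_gbps * 128 / 132

tb3_lane = payload(20.625)       # -> 20.000 Gbps, matching the Keysight doc
dp2_lane = payload(20.000)       # -> ~19.394 Gbps before any FEC overhead
print(f"TB3 per lane:    {tb3_lane:.3f} Gbps payload")
print(f"DP 2.0 per lane: {dp2_lane:.3f} Gbps payload")

published_total = 77.37          # VESA's effective figure for UHBR 20 (4 lanes)
coded_total = payload(80.0)      # ~77.58 Gbps from 128b/132b alone
print(f"Residual overhead: {(1 - published_total / coded_total) * 100:.2f}%")
```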
Glenwing - Thursday, June 27, 2019 - link
^ Looks like the parentheses on the right side got merged into the URL, so you'll have to remove that from the end of the URL for it to work >.>
PeachNCream - Thursday, June 27, 2019 - link
I would prefer a single standardized video connector rather than having multiple types. HDMI seems better positioned to take the lead given how little DisplayPort is actually in use these days. There's more analog VGA out there than DVI and DP combined, so maybe it's a better idea to just discontinue DP.
Valantar - Thursday, June 27, 2019 - link
So we should all move to a proprietary, licensed interface rather than a free and open one? Yeah, that's not a good idea, no matter HDMI's dominance. As for "how little DP is actually in use these days", it's in every single laptop with a Type-C USB plug (through DP alt mode, which is native, even if it's also easily converted to HDMI - as is every DP port out there), every PC monitor beyond $150 or so, and every serious GPU on the market. Sure, that's PCs only, and HDMI does seem to own everything else - I wouldn't be surprised if there were exclusivity deals out there that we don't know of. DP has a lot of advantages.
PeachNCream - Thursday, June 27, 2019 - link
That's a bit dramatic, isn't it? Every modern computer out there has HDMI, but not all of them have DP, and the fact that HDMI isn't "free and open" doesn't seem to have adversely impacted its broad adoption.
TheUnhandledException - Friday, June 28, 2019 - link
DP is all-around a better tech. MST is ideal for connecting multiple monitors and for use in docking stations. TB3 supports DP natively; now with Titan Ridge it is dual DP 1.4 streams, which is ideal for multiple high-resolution monitors. I don't know what bargain junk bin you are looking in where DP is rare.
GreenReaper - Friday, June 28, 2019 - link
Not all do. The Surface Pro has only ever had a mini-DisplayPort. Of course, it has its own display.
akvadrako - Wednesday, July 3, 2019 - link
It's going the other way, because DP is superior in every sense, technically and legally. So TVs are starting to come with DP connectors.
JEmlay - Friday, April 24, 2020 - link
Can someone help me out by filling in the holes?
4K 60 4:4:4 08b-SDR = 17.82 Gbps
4K 60 4:2:2 10b-HDR = 17.82 Gbps
4K 60 4:4:4 10b-HDR = 22.28 Gbps
4K 120 4:4:4 10b-HDR =
4K 144 4:4:4 10b-HDR =
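The filled-in figures above match an HDMI TMDS-style pattern: CTA-861 4K timing (4400 × 2250 pixels total, including blanking) times bits per pixel times a 10/8 coding overhead, with 4:2:2 riding at the 24 bpp rate. Extending that same pattern gives rough estimates for the missing rows; treat them as an assumption-based sketch, since HDMI 2.1's actual FRL signaling uses different overhead and there is no standard CTA timing for 4K 144 Hz:

```python
# Reproduce the known rows and extend the pattern to the missing ones.
# Assumptions: CTA-861 4K totals (4400 x 2250 incl. blanking) and 10/8
# TMDS-style coding overhead, as implied by the 17.82 / 22.28 figures above.

H_TOTAL, V_TOTAL = 4400, 2250

def link_gbps(refresh_hz, bits_per_pixel):
    return H_TOTAL * V_TOTAL * refresh_hz * bits_per_pixel * 10 / 8 / 1e9

print(f"4K 60  4:4:4 08b = {link_gbps(60, 24):.3f} Gbps")   # 17.820 (matches)
print(f"4K 60  4:4:4 10b = {link_gbps(60, 30):.3f} Gbps")   # 22.275 (~22.28, matches)
print(f"4K 120 4:4:4 10b = {link_gbps(120, 30):.3f} Gbps")  # ~44.550 (estimate)
print(f"4K 144 4:4:4 10b = {link_gbps(144, 30):.3f} Gbps")  # ~53.460 (estimate)
```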