40 Comments
Rocket321 - Monday, March 6, 2023 - link
2.5G seems like a perfect bump for home users over the ubiquitous 1G; however, this article makes me glad I invested in used 10G enterprise gear instead. At twenty bucks per card, and surprisingly cheap pricing on optical cables, the investment cost was very affordable.

I've had almost no issues over the past year running 10G on a few machines that benefit from faster than 1G connectivity. I wish consumer brands simply embraced 10G standards, as the enterprise stuff is rock solid.
dwillmore - Monday, March 6, 2023 - link
I have not yet gone down the >1Gb ethernet road, but I'm inclined to wait for 10Gb to come down. I had a temporary need for a short distance high bandwidth link a while back. The most reasonable solution I found was a few 54Gb InfiniBand cards and a single cable to wire the two machines back to back. What looked like a daunting task--IB can be very complex--turned out to be barely harder than an ethernet connection. Fedora had all the drivers and software packages available. All I had to do was install them and start *one* daemon. Then I just brought up the interface and routed IP over it. Total expenditure was <$50 and I had 54Gb each way between two machines.

The cards are old and power hungry, so I'm not leaving them in place all the time, but if I ever have a similar need, I'm set. It did make me look into the state of >1Gb networking--which remains a mess of 2.5Gb devices for high prices, power hungry leftover server hardware for a reasonable price, and super expensive current gen 10Gb hardware. I'll wait.
If I do end up needing more bandwidth, it's going to be between two specific machines most likely, so I'll still not have to upgrade everything. Maybe just point to point the two machines. If I need a third, I guess that would be the time to start looking for a small switch.
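For anyone curious, the back-to-back IPoIB setup described above really is only a handful of commands on Fedora. This is a sketch under stated assumptions: the package names are current Fedora ones, the interface usually comes up as `ib0`, and the 10.0.0.x addresses are made up for illustration. One of the two machines must run the opensm subnet manager (the *one* daemon mentioned above):

```
# Userspace IB stack, diagnostics, and the subnet manager
sudo dnf install rdma-core infiniband-diags opensm

# Start the one daemon: the subnet manager that brings the fabric up
sudo systemctl enable --now opensm

# IPoIB: bring up the InfiniBand interface and assign an address
sudo ip link set ib0 up
sudo ip addr add 10.0.0.1/24 dev ib0   # use 10.0.0.2/24 on the other machine

# Sanity-check the link state, then just use IP as normal
ibstat
ping 10.0.0.2
```

After that, anything that speaks IP (NFS, rsync, iperf3) works over the link unmodified.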
thestryker - Monday, March 6, 2023 - link
I got a pair of dual port Intel X520 SFP+ cards several years ago because switching hardware was so ridiculously expensive. I directly connected the two machines and then had separate 1Gb connections for the rest of the network. This worked extremely well, and SFP+ cards use less power than RJ45, so even though they're old they're not quite the power hogs.

Last year I got a Zyxel 12 port switch (about $150 USD) with 2x 10Gb SFP+, 2x 2.5Gb and 8x 1Gb so I could consolidate hardware a bit, and it has worked very well. Getting something similar from QNAP would probably be better, but the cost has been significantly higher.
MikroTik might have something in the all-10Gb four-port variety, which might be an option if you need more than two ports at high speed and don't want to put down premium money.
DigitalFreak - Monday, March 6, 2023 - link
I've found that it's easy to get 10G over Cat5e in typical homes. The runs are usually short enough that it's not a problem.

Gigaplex - Tuesday, March 7, 2023 - link
I looked into options like that, but nothing was cost effective that could make use of the CAT6 already in my walls and also have a switch for multiple devices.spamaway - Sunday, March 19, 2023 - link
Where are you finding 10gbit optical cards at twenty bucks per? I think maybe you're going to make my year.

I've found reasonably-priced switches (Mikrotik), reasonably-priced transceivers (10GTek), but all of the NICs I've found have been ~150 bucks... nearly as much as the switches.
Sivar - Monday, March 6, 2023 - link
Intel has a bit of a history of unreliable ethernet controllers (remember the I225-V? The 82579LM?)

Broadcom hardware may be a bit more expensive, but it is rock-solid.
boozed - Monday, March 6, 2023 - link
Funny, I was going to say "Intel's ethernet PHYs used to be the bees knees, what happened?"

I remember them being the thing you paid extra for when you were buying a motherboard in the early days of integration.
Sivar - Tuesday, March 7, 2023 - link
I remember the same, and I remember Intel having unbeatable process technology and the most efficient x86 CPU designs. It is sad that they have fallen so far, so fast, but I hope that Pat Gelsinger can right the ship.

Samus - Tuesday, March 7, 2023 - link
I mean, back in the day of Intel Pro/100 controllers, etc., they were a solid alternative to the more expensive 3Com 3c905. When 3Com disappeared and the competition was basically bottom-barrel controllers from Realtek, Qualcomm, Atheros and the like, Intel was seemingly complacent in letting QC slide, because everything after the 82573 has had issues, albeit workaroundable issues.

I'm glad to see the recent drivers (unfortunately not WHQL, so no Windows Update automated patch) disable the problematic features, because at the end of the day, sure, we are trying to save a watt of power here and there, but reliability trumps efficiency in most circumstances.
dwillmore - Tuesday, March 7, 2023 - link
I was using DEC Tulip cards in that era, so I never noticed the problems with the various fast ethernet MACs/PHYs.

thestryker - Monday, March 6, 2023 - link
I have a N5105 based box with 4 of these i226 controllers running pfsense and have never had an issue with their operation. I am putting together a new server box which has two of them onboard, but I'm not sure I will have Windows on it long enough to really test stability, though I would like to try.

Seeing as there haven't been any recalls, new revisions, or much solid info as to what could be going on, it doesn't seem like it's a hardware flaw like the i225. I've not seen anyone experiencing the problem running anything other than Windows, and I've not seen anyone test anything else to see if it still happens.
DigitalFreak - Monday, March 6, 2023 - link
I would assume pfsense disables the power savings features. Either that or the FreeBSD driver is less flawed.

[email protected] - Monday, March 6, 2023 - link
Netgate (the developer of pfSense) wrote the FreeBSD driver.Samus - Tuesday, March 7, 2023 - link
I've seen TORRENT CLIENTS recommend disabling EEE, Green Ethernet, and "allow computer to turn off this device to save power" settings, though the clients do not do it for you.

But it says a lot when software creators whose programs depend on 24/7 connectivity know there are reliability issues with these modern NICs and PHYs and go out of their way to warn you.
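For what it's worth, those three settings can be flipped from an elevated PowerShell prompt on Windows instead of clicking through Device Manager. A sketch using the stock NetAdapter cmdlets; note the adapter name "Ethernet" and the exact DisplayName strings vary by driver and locale, so list what your driver actually exposes first:

```
# See which advanced properties this driver exposes, and their exact names
Get-NetAdapterAdvancedProperty -Name "Ethernet"

# Disable EEE and Green Ethernet (DisplayName strings vary by driver)
Set-NetAdapterAdvancedProperty -Name "Ethernet" -DisplayName "Energy-Efficient Ethernet" -DisplayValue "Disabled"
Set-NetAdapterAdvancedProperty -Name "Ethernet" -DisplayName "Green Ethernet" -DisplayValue "Disabled"

# Equivalent of unticking "Allow the computer to turn off this device to save power"
Disable-NetAdapterPowerManagement -Name "Ethernet"
```

Scripting it this way at least makes the workaround repeatable after driver updates reset the defaults.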
Skeptical123 - Wednesday, March 8, 2023 - link
One user and one system is way too small of a sample size to draw any conclusions.

“Seeing as there haven't been any recalls, new revisions, or much solid info as to what could be going on it doesn't seem like it's a hardware flaw like the i225” Yup, that's not the case nor how it works.
If I recall correctly, the i226 is the latest revision of the i225-V. The i225-V had three hardware revisions, ending in i225-V (REV_03). So they bumped up the number to i226. With the history and nature of NIC chips (network stack/PHYs), I'd be shocked if anyone with knowledge on the subject did not at least suspect some level of issue.
“I've not seen anyone experiencing the problem running anything other than windows, and I've not seen anyone test anything else to see if it still happens” — because that's the only user base really using these atm.
thestryker - Thursday, March 9, 2023 - link
They've been sold in networking boxes out of China for going on 6 months, and this is where they appeared first. Nobody in the giant STH thread on these devices has reported any problems, but nobody runs bare Windows on them. STH reviewed at least 3 different types of the boxes using these controllers and reported no problems there either. They certainly experienced the problems with the first two revisions of the i225, but nothing after.

Skeptical123 - Thursday, March 9, 2023 - link
Yup, like I said, that does not matter much. "Real" deployments in industry are at best just starting to happen. Even LTT noted this in one of their recent server updates. Even pfsense just recently got support for the chip, for context.

It's still too soon to say these NICs are free of major hardware issues. Windows running on them is not really a factor. I'm aware of the great work STH does and their forums. You're welcome to check out my post there on one of said i226 boxes.
lopri - Saturday, March 11, 2023 - link
There have been 3 revisions.

blzd - Monday, March 6, 2023 - link
How is Intel still selling these flawed ethernet controllers?

I had to dig through motherboard options to make sure I didn't get one with it. Of course the boards that use the flawed Intel ethernet are on sale more often, so I ended up paying more just to avoid Intel ethernet.
Samus - Tuesday, March 7, 2023 - link
What's the alternative? Realtek is garbage too.Sivar - Tuesday, March 7, 2023 - link
I look for Broadcom 10Gbit or 25Gbit NICs on eBay. Broadcom does mostly enterprise-class stuff, but it is amazing how cheap you can find enterprise hardware on eBay from datacenter upgrades and similar.
Well, a garbage Realtek that works reliably is infinitely better than a NIC that disconnects or doesn't work.
If it's a choice between Intel and Realtek, I'll pick Intel any day of the week for an integrated ethernet controller.

I have an 11th gen NUC with 2x i225's in it running pfsense; it's been rock solid for nearly 2 years now.
Samus - Tuesday, March 7, 2023 - link
Personally I haven't had an issue with the i225 either, but I have seen the i226 drop and reconnect or take a long time to establish connection when coming out of sleep, so that is anecdotally the more problematic PHY. But in all fairness, I haven't been paying attention to driver versions on any of these PCs, and it's safe to say they are all running whatever WHQL driver is included in whatever Windows image they have.

lopri - Saturday, March 11, 2023 - link
You clearly have not experienced the issue. Good for you, and I hope you don't have to. But for those who have to deal with this issue, imagine your network adapter losing connection every day, or file transfer speed reduced to 10/100 Mbps, or the adapter randomly disappearing.

After having returned 2 motherboards I picked partly because they had Intel NICs, I no longer look for Intel NICs. I might still buy one if other features of a motherboard are compelling *AND* there is no report of NIC problems when I google it, but it will not be an automatic buy.
PeachNCream - Tuesday, March 7, 2023 - link
Eh, whatever. I only have one thing connected by wire at home and it's on a 100Mb port. People have trouble with residential networking and it causes them so much needless heartburn because they can't understand the gradient between wants and needs.

davidedney123 - Tuesday, March 7, 2023 - link
Yes, whatever was wrong with 10Base2. Fools!Sivar - Tuesday, March 7, 2023 - link
We should all just use simple null modems. I hear that U.S. Robotics makes some great hardware for when you want to talk to computers outside of the range of a serial cable, but who would want to do that?

PeachNCream - Wednesday, March 8, 2023 - link
Ah, slippery slope fallacies. Too obvious. Try again.

Sivar - Tuesday, March 7, 2023 - link
I regularly feel constrained by my 25Gbit Broadcom NICs, which are starved of PCI Express bandwidth because my home systems lack the PCI Express lanes to saturate the line. 1Gbit would feel like a dead slug stuck in frozen molasses when transferring files, though granted I am not a typical non-geek home user.

lopri - Saturday, March 11, 2023 - link
File sizes have gotten a lot bigger and virtualization is not unusual even for home users, many of whom prefer NAS for storage. Gigabit is a need, not a want, in 2023.

davidedney123 - Tuesday, March 7, 2023 - link
What on earth is going on with Intel these days? They used to be the absolute high watermark in the industry for quality, reliability, and product execution, but the last few years have just been shambolic.

The Intel who managed to single-handedly make the entire motherboard industry stop turning out crap and up their game with their retail boards, or made the first SSDs that weren't flaky rubbish, or got WiFi not to be hopeless, feels far, far away from any company that would churn out this nonsense.
TheinsanegamerN - Tuesday, March 7, 2023 - link
A decade of laziness, sloth, and greed has come home to roost. Intel started losing talent years ago and has done little to entice them to come back.

Sivar - Tuesday, March 7, 2023 - link
Intel has too long been led by bottom-line-obsessed businessmen with little love of engineering. R&D is expensive. Keeping the best engineers is expensive. Long-term leadership doesn't benefit the next quarterly results.

I have some confidence that Pat Gelsinger can increase thrust before the whole plane stalls over rough terrain. Pat is a Stanford man and led VMware to become a critical part of the world's datacenter infrastructure.
lopri - Saturday, March 11, 2023 - link
They are still bottom-line obsessed. See: Intel On Demand.

hechacker1 - Tuesday, March 7, 2023 - link
It's unfortunate because I'd like to build a custom SFF router, but most have this network chip included. Even the pfsense Netgate boxes use this chip, so I don't know what workaround they are using, or perhaps it's just slightly unstable and nobody has complained loudly.

So, I probably have to use PCI-E add-on cards for enterprise stability or just buy a real 1U server with better chips.
How can Intel not know what is wrong after two generations? So incompetent.
blppt - Tuesday, March 7, 2023 - link
This is nothing new for Intel. How many years has that flawed Puma 6 chipset been out, and they've never managed to fix that epic latency bug?

Makaveli - Wednesday, March 8, 2023 - link
That Puma 6 issue will never be fixed. It's still there in the Puma 7 chipset, but it's not as bad. If one has to use cable internet, make sure your modem is using a Broadcom chipset.
I take offence at your first sentence, because if it was difficult for Intel, that means it was far worse for Intel's customers, who might have lost data, time, customers and serious money due to a defective product and a QA process that didn't catch it.

The first issue, of course, was greed and price: for far too long Intel, like many other Ethernet vendors, decided to make >1Gbit Ethernet a luxury item that would require optical cables, transceivers and massive ASICs with tons of offload gadgetry. They wanted at least Fibre Channel prices, better yet InfiniBand returns, and not just on the NICs but on everything from cables to fabrics and management software.
So when NBase-T finally came along, it was another Microchannel, x86_64 or ARM HPC moment, where for the longest time Intel management simply refused to invest in a product that was nice and cost effective for users, because they had long ago decided it was time to follow IBM's mainframe footsteps and Omni-Path/Optane lock-in customers: their 10Gbit kit wasn't competitive in any shape or form, and Gbit was a horse long since beaten to death.
Teranetics/PLX/Aquantia/Marvell has delivered very cost efficient NBase-T hardware for many years, but for some shady reason it has never caught on in the market, even when Intel itself put Marvell AQC107 kit into its high-end 9th gen NUCs for lack of a working Intel alternative.
The AQC113 supports up to 10Gbit speeds with full NBase-T support, including EEE, from a single PCIe 4.0 lane, and no current mainboard or NUC should really be sold with less.
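As a back-of-the-envelope sanity check on the single-lane claim (this only accounts for the 128b/130b line encoding and ignores PCIe packet/header overhead, which shaves off a bit more in practice):

```python
def pcie_lane_gbps(gt_per_s: float, enc_payload: int = 128, enc_total: int = 130) -> float:
    """Per-lane payload rate in Gbit/s after line encoding (128b/130b for gen 3/4)."""
    return gt_per_s * enc_payload / enc_total

gen3_x1 = pcie_lane_gbps(8.0)   # PCIe 3.0: 8 GT/s per lane
gen4_x1 = pcie_lane_gbps(16.0)  # PCIe 4.0: 16 GT/s per lane

print(f"PCIe 3.0 x1: {gen3_x1:.2f} Gbit/s -> covers 10GbE: {gen3_x1 >= 10}")
print(f"PCIe 4.0 x1: {gen4_x1:.2f} Gbit/s -> covers 10GbE: {gen4_x1 >= 10}")
```

A 3.0 x1 link tops out near 7.9 Gbit/s, which is exactly why 10Gbit NICs needed wider slots before PCIe 4.0; a 4.0 x1 lane at roughly 15.8 Gbit/s has comfortable headroom for 10GbE even after protocol overhead.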
Yes, 10GBase-T might draw more power than a modern SoC just for the PHY, but not when you operate it at 1 or 2.5 Gbit/s on battery.
I believe it's high time for some anti-trust investigation there, and could someone in the meantime please just offer PCIe 4.0 x1 based 10Gbit NICs based on the AQC113?