43 Comments
Hulk - Tuesday, January 21, 2020 - link
Hmm. Getting off a sinking ship?
Santoval - Tuesday, January 21, 2020 - link
The (hiring) tide has turned apparently...
HStewart - Tuesday, January 21, 2020 - link
It is kind of silly: when someone who is almost an unknown at Intel leaves, it gets called getting off a sinking ship - but when one of AMD's top people leaves, they call him a traitor, not useful, and such.
Let's be fair, folks: no matter which company it is, they have turnover of employees, often to increase one's position and pay. Raja has the opportunity to help with one of the biggest industry changes at Intel and its graphics unit.
Intel is not a sinking ship, just a ship changing and improving its gears for the future.
Hulk - Tuesday, January 21, 2020 - link
Well, it was actually supposed to be funny.
Ian Cutress - Tuesday, January 21, 2020 - link
I think he was referring to IBM here, given it's two people from POWER.
sa666666 - Tuesday, January 21, 2020 - link
It's obvious that the comment was intended for the two IBM people leaving POWER-related positions. And it was a joke. That you immediately made it into a fanboi AMD vs. Intel issue, and again did damage control for Intel, says more about your mindset than anything else.
bji - Tuesday, January 21, 2020 - link
But you have to have a mind to have a mindset.
Korguz - Tuesday, January 21, 2020 - link
sa666666, that's HStewart for you... always defending his beloved Intel... even when no defence needs to be made...
HStewart - Tuesday, January 21, 2020 - link
Actually I am not; all I am saying is that it is normal for technical people to move from company to company. The only thing that I would defend about Intel is that they originally created the x86 processor, and IBM messed things up in the early days.
Korguz - Tuesday, January 21, 2020 - link
Oh, but you ARE, HStewart. Your reply was in response to Hulk's "Hmm. Getting off a sinking ship?", to which YOU replied "It is kind of silly: when someone who is almost an unknown at Intel leaves, it gets called getting off a sinking ship - but when one of AMD's top people leaves, they call him a traitor, not useful, and such." You were defending Intel, thinking Hulk's comment meant Intel is the sinking ship, which it was not - IBM is the sinking ship...
And what does "The only thing that I would defend about Intel is that they originally created the x86 processor, and IBM messed things up in the early days." have to do with this?? Again... defending Intel when NO defense needs to be made.
Lord of the Bored - Wednesday, January 22, 2020 - link
Pretty much the only thing I'll agree with HStewart about. If there hadn't been an industry-standard second-source for the 8088, IBM would've used a different processor. We'd all be better off today.
bigvlada - Wednesday, January 22, 2020 - link
MC 68000 still needed more debugging at the time.
Lord of the Bored - Thursday, January 23, 2020 - link
We coulda had a TMS9000-based IBM PC!11
abufrejoval - Thursday, January 23, 2020 - link
While I admire that architecture for being "different", I can't help but think that it wouldn't have gone anywhere: registers in DRAM? Makes me shudder in hindsight, and it had my jaw drop at their audacity at the time! It hardly mattered when even the simplest logic instructions took plenty of cycles at kilohertz clock speeds.
Of course, today it would probably just translate to some RISC-µops anyway.
HStewart - Wednesday, January 22, 2020 - link
Right or wrong, this is what I am referring to. Think about it this way (for any company and any product, not just Intel): in theory, say a future console design from AMD became so important that companies like Microsoft / Sony desired a second source for the GPU to protect their interests, and forced AMD to second-source the GPU to another company - in this case it could be Sony or even NVidia. Would you then say we are better off? Keep in mind that x86 processors are not like ARM CPUs, which are run by an organization and supported by multiple companies. And did Apple do the right thing by modifying the ARM CPU design for its internal purposes?
Lord of the Bored - Thursday, January 23, 2020 - link
HStewart, you act like IBM bullied Intel into a unique arrangement.
At the time, microprocessors were considered a component to make a product with, not a product themselves. EVERYONE had a second-source supplier for any component they wanted to sell outside their own company. Even respected companies with a history of quality product and ability to provide parts in quantity at reasonable prices, like Fairchild, National, and TI.
It prevented product manufacturers from being at the mercy of component suppliers if the latter had production issues, decided to start price-gouging, or just plain old couldn't meet demand.
In fact, AMD was already a second-source manufacturer for several of Intel's components years before the IBM deal (and Intel was a second-source for several of AMD's components).
Why should a memory company just getting into the microprocessor business play by different rules than people with established track records?
(AMD had also reverse-engineered the design of Intel's 8080 and introduced a clone, the Am9080. Again, because reverse-engineering was common in those days.)
Korguz - Thursday, January 23, 2020 - link
Lord of the Bored, welcome to the HStewart way of thinking :-)
sarafino - Friday, January 24, 2020 - link
Intel had every right to refuse to enter that second source agreement. But they wanted all the money that would come from agreeing to the second source. Intel isn't in a position to play the sympathy card. They did it to themselves.
PeachNCream - Wednesday, January 22, 2020 - link
Your hiatus was not long enough.
rrinker - Tuesday, January 21, 2020 - link
ONE guy from Intel, who came from Altera, not Intel's CPU side, and it's a turned tide or rats jumping off a sinking ship? Wow. Fanboy much?
Ian Cutress - Tuesday, January 21, 2020 - link
I think he was referring to the IBM POWER hires. Not everything is Intel vs AMD.
twotwotwo - Tuesday, January 21, 2020 - link
Man, I agree you can't really guess much, but Friedrich's experience with POWER power (ha) and clock gating, and the "CPU/GPU integration" wording, sounds half-relevant to trying to reduce idle/low-load power use in mobile parts, which is maybe the top problem to solve to compete well there. You can't rely on sales from shortages and gaming laptops forever. :)
And, of course, any work started by new hires today won't be in released products for years, so we shouldn't collectively hold our breath.
alufan - Wednesday, January 22, 2020 - link
Hmmm, early benchmark leaks from the new 4000-series laptop chips are showing they not only beat Intel but have better battery life as well. However, these are leaks, so we must wait for the proper releases to be sure... but I for one cannot see AMD allowing a Ryzen release to fall behind an Intel one - maybe in the Bulldozer days, but not now.
Kevin G - Tuesday, January 21, 2020 - link
Interesting moves.
One thing that was rumored in the past was that AMD would be using the same socket as POWER systems. This was a play to help reduce costs from both AMD's and IBM's perspectives. AMD cancelled G3MX, which was part of that plan.
The idea that has been ping-ponging around for a few years is who would buy Xilinx. For a while it seemed like IBM, and then nVidia. Makes me wonder if AMD would pursue such a move now.
rocketbuddha - Tuesday, January 21, 2020 - link
Nope. POWER is direct competition to AMD now that EPYC is basically better than Intel Xeon in every shape and form, and IBM always tries to compare POWER to Xeons rather than Opterons. IBM has offloaded its foundries to GlobalFoundries and then inked a deal with Samsung for 7nm FinFET for the future POWER10 and later series.
HStewart - Tuesday, January 21, 2020 - link
I would not say that - each CPU has its advantages. AMD has more cores, but Intel has advanced instruction sets like AVX-512. POWER has its own line of software.
Korguz - Tuesday, January 21, 2020 - link
" but Intel has advance instruction sets like AVX 512" yea that only a handful of software actually uses, and the performance and power hit intels cpus take, almost makes it not worth it to use, point is ????AshlayW - Wednesday, January 22, 2020 - link
To be fair, 64 Zen 2 cores with AVX2 can actually provide similar FPU throughput to 28 SKL cores with AVX-512, at lower power use.
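For a rough sense of why that comparison is plausible, here is a back-of-the-envelope sketch in C. It assumes 2x 256-bit FMA units per Zen 2 core and 2x 512-bit FMA units per Skylake-SP core, both at the same illustrative 2.5 GHz clock; real sustained throughput also depends on AVX-512 clock offsets, memory bandwidth, and the workload, so treat this as peak-rate arithmetic only.

#include <stdio.h>

int main(void)
{
    /* Assumptions (illustrative, not measured):
     *  - Zen 2: 2x 256-bit FMA units per core -> 16 FP64 FLOPs/cycle
     *  - Skylake-SP: 2x 512-bit FMA units per core -> 32 FP64 FLOPs/cycle
     *  - both at a 2.5 GHz all-core clock */
    const double clock_hz = 2.5e9;
    const double zen2_flops = 64 * 16 * clock_hz; /* 64 cores, AVX2 */
    const double skl_flops  = 28 * 32 * clock_hz; /* 28 cores, AVX-512 */

    printf("64-core Zen 2 (AVX2):      %.2f TFLOPS peak FP64\n", zen2_flops / 1e12);
    printf("28-core Skylake (AVX-512): %.2f TFLOPS peak FP64\n", skl_flops / 1e12);
    return 0;
}

The point of the arithmetic: the wider vectors per core and the larger core count roughly cancel out, which is why the two parts land in the same ballpark before power is even considered.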
Kevin G - Wednesday, January 22, 2020 - link
Context is important, and hence why I mentioned the G3MX socket that AMD abandoned in favor of G34. This was around the time JEDEC had abandoned work on FB2-DIMMs to replace the much-maligned FB-DIMM standard. Intel and IBM had already started work on their next-generation memory buffers and memory controllers to be placed on their respective CPU dies. AMD was looking to adopt the JEDEC standard before it dissolved. What eventually happened is that the memory buffers moved to the motherboards, and if AMD wanted to leverage the work they had already put into that buffered serial memory controller, they'd need a source for a buffer chip or have to develop one themselves. The solution was to approach IBM and move to a common platform so that the buffer chip could be leveraged on AMD systems, and IBM's systems would benefit from reduced costs due to higher volume of the necessary parts. That deal fell through and AMD went with socket G34, taking a dual-die route with their high-end chips. This was all when AMD was short on resources and starting to collapse.
Conceivably AMD and IBM could have shared another socket with the launch of Epyc and the Power9, which have some vast overlap in IO features (512-bit wide memory interfaces, high-speed serial links for coherency based upon the PCIe PHY, etc.). Even if they were not electrically compatible at the socket level, being mechanically identical would save IBM and AMD money on similar platform component costs, as those components would be manufactured in higher volume. Both of these companies are still relatively low volume in comparison to Intel's Xeon lineup right now.
A shared platform has happened in the past. AMD licensed their FSB for the Athlons from DEC, and AMD's chipsets at the time could be used for DEC's Alpha lineup. Similarly, AMD licensed out HyperTransport, and oddly the first HyperTransport product was from Transmeta. Mechanical compatibilities appear randomly for low-volume and embedded stuff as a cost-saving measure. For example, I have a PowerPC chip that fits into Socket 7 via a PCB interposer.
FunBunny2 - Tuesday, January 21, 2020 - link
This is funny. For so many years, no one over 30 could be trusted in IT (esp. coding). The geezers have re-taken control!!! :)
HStewart - Tuesday, January 21, 2020 - link
I think the opposite can be true, especially dealing with coding. 64-bit and extra processors have made developers lazy - I am pretty sure very few developers monitor memory leaks, and they certainly don't count clock cycles.
It is no wonder that in today's world there are so many viruses and so much spyware.
Lord of the Bored - Wednesday, January 22, 2020 - link
Of course you hate 64-bit. Can't stand that Intel had to copy AMD64, can you?
Also, IBM shipped the first 64-bit computer in 1961. It wasn't very good, but a few years later the Cray-1 was 64-bit and it was the best in the world.
Extra word size and address space and multithreading aren't what makes people lazy, as those have been around FOREVER. What makes them lazy is a complete lack of consequences for shitting out bad code.
And there's so much malware because there's so many computers.
Korguz - Wednesday, January 22, 2020 - link
Lord of the Bored, you also forgot that Intel also copied AMD with the on-die memory controller :-)
"It is no wonder that in today's world there are so many viruses and so much spyware." What does that have to do with anything?? I think there are so many of those because some people have nothing better to do with their time, instead of doing something useful with it.
BurntMyBacon - Wednesday, January 22, 2020 - link
@Korguz: ""It is no wonder that in today's world there are so many viruses and so much spyware." What does that have to do with anything?? I think there are so many of those because some people have nothing better to do with their time, instead of doing something useful with it."
I think this statement actually makes some sense in context. Access to a surplus of memory and CPU resources has allowed bad programmers to write inefficient code that often has memory leaks and other flaws that can be exploited by viruses and spyware. Before this surplus of resources, code needed to be better written just to be usable. Now, slow memory leaks and tanking an entire CPU core for no good reason can be hidden to some extent, as many people have more memory and CPU cores than they would otherwise need. The 64-bit part of the comment only makes sense in that it allowed for more memory capacity in PCs. I don't think 64-bit instructions really play into it.
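A minimal C sketch of the kind of slow leak being described - the handle_request() function and its 4 KB scratch buffer are hypothetical, purely to illustrate how abundant RAM lets a bug like this go unnoticed for a long time:

#include <stdlib.h>
#include <string.h>

/* Hypothetical request handler: it allocates a scratch buffer and forgets
 * to free it. At 4 KB per call, a process on a machine with plenty of RAM
 * can run for a long time before anyone notices; on a memory-starved
 * system it would fall over almost immediately. */
static void handle_request(const char *payload)
{
    char *scratch = malloc(4096);
    if (scratch == NULL)
        return;
    strncpy(scratch, payload, 4095);
    scratch[4095] = '\0';
    /* ... pretend to do some work with scratch ... */
    /* BUG: missing free(scratch); every call leaks 4 KB */
}

int main(void)
{
    for (long i = 0; i < 1000000; i++)   /* roughly 4 GB leaked over the run */
        handle_request("example payload");
    return 0;
}

Tools like Valgrind or a build with -fsanitize=address flag the leak immediately; the surplus of memory only hides it from casual observation, it doesn't make the bug go away.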
HStewart - Wednesday, January 22, 2020 - link
I am aware of at least one application I used at work that has a 64/32-bit problem. This is Cisco WebEx - it is still 32-bit, and the developer has yet to upgrade it to 64-bit.
Xyler94 - Monday, January 27, 2020 - link
And you can thank AMD for making it possible to continue to use 32-bit x86 with a 64-bit instruction set.
Intel beat AMD at making a pure 64-bit solution, but the world demanded 32-bit and 64-bit support together, so they had to go back to the drawing board, and AMD won the race; hence AMD64 lives on today.
If you've ever booted a Linux machine running 64-bit, you'll constantly see references to AMD64.
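For what it's worth, here is a minimal C sketch of how a program can report which instruction set it was actually built for. The preprocessor macros are the usual GCC/Clang and MSVC predefines; nothing here is specific to WebEx or any other particular application.

#include <stdio.h>

int main(void)
{
    /* __x86_64__ / _M_X64 are defined for 64-bit x86 (AMD64) builds,
     * __i386__ / _M_IX86 for 32-bit x86 builds. */
#if defined(__x86_64__) || defined(_M_X64)
    printf("built as x86-64 (AMD64); pointers are %u bits\n",
           (unsigned)(sizeof(void *) * 8));
#elif defined(__i386__) || defined(_M_IX86)
    printf("built as 32-bit x86; pointers are %u bits\n",
           (unsigned)(sizeof(void *) * 8));
#else
    printf("built for some other architecture\n");
#endif
    return 0;
}

On a 64-bit Linux install the same story shows up everywhere: uname -m reports x86_64, and Debian/Ubuntu literally name the package architecture amd64.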
Korguz - Tuesday, January 28, 2020 - link
BurntMyBacon, all of that was around before x86-64, so what's the point?? Programmers were lazy with 32-bit, and maybe more lazy with 64-bit.
FreckledTrout - Wednesday, January 22, 2020 - link
They did ship a 64-bit machine, although it wasn't what we would truly call a 64-bit machine by today's standards, but it did have 64-bit registers. That mainframe sold for around $10 million, which in today's prices would be about $86 million. It also weighed around 70,000 pounds and took up an entire large room.
BurntMyBacon - Wednesday, January 22, 2020 - link
@Lord of the Bored: "Extra word size and address space and multithreading aren't what makes people lazy, as those have been around FOREVER."
In this context, you are talking about a small group of people with (at the time) very specialized education trying to program as efficiently as possible to solve mission-critical problems on the most powerful computer in the world. This is not a good equivalent to the rather large group of programmers today programming (in many cases) non-mission-critical software for commonly underutilized computers. Despite the Cray-1's relative power, the compute resources of this machine were extremely limited for the workloads that it was used for. Inefficiencies were not as easy to hide, and the impact was far more severe when compared to today's underutilized PCs.
@Lord of the Bored: "Extra word size and address space and multithreading aren't what makes people lazy, as those have been around FOREVER."
I agree with your point. I don't think the extra word size and address space and multithreading are making programmers lazy. However, I do think that they allow lazy programmers to exist where more restricted resources might have forced them to better optimize or find a different job.
Lord of the Bored - Thursday, January 23, 2020 - link
My point was actually that "better computers" don't make more malware and worse code. It is the AVAILABILITY of computers that makes more malware.
As far as the lack of equivalence between modern software development and the Cray-1, that lack of equivalence is my point. A few elites who understood every step of the process coded for the Cray-1.
The overall environment for most modern software development is... different, to put it mildly, and it generates a lot of shitty code. Why learn the system when you can cut/paste blindly from Stack Exchange?
FunBunny2 - Wednesday, January 22, 2020 - link
"And there's so much malware because there's so many computers."there's rather little DOS-style viruses around these days, thanks in large part to linux. windoze was forced to tighten up; got rid of the vestiges of the self-same DOS. OTOH, I tend to agree that modern kiddie-coders don't grok pointers as well as their elder brethren. but that's always been true; writing bugfree pointer code requires a superior mind and a body full of scars. one might also assert that the linux kernel (nothing to say about the rest of the distributions) is tight just because Linus turned 50!
Much of today's malware is phishing and browser holes; the former is stupid users, while the latter is a coder problem.
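As a concrete illustration of the "scars" point, here is a small, deliberately buggy C sketch (hypothetical code, not taken from any real project) showing how easily a dangling pointer slips past a casual reading:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    char *buf = malloc(16);
    if (buf == NULL)
        return 1;
    strcpy(buf, "hello");

    char *greeting = buf;              /* alias into the same allocation */

    /* Growing the buffer may move it; if it does, 'greeting' now dangles. */
    char *tmp = realloc(buf, 1 << 20);
    if (tmp == NULL) {
        free(buf);
        return 1;
    }
    buf = tmp;

    printf("%s\n", greeting);          /* BUG: read through a possibly-freed block */

    free(buf);
    return 0;
}

A build with -fsanitize=address will typically report the read as a heap-use-after-free, while a plain build frequently appears to "work", which is exactly how bugs like this survive into shipped code.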
HStewart - Wednesday, January 22, 2020 - link
This is not about hating 64-bit - but instead that over time it has made developers lazy.
Korguz - Thursday, January 23, 2020 - link
Maybe not, but as your very first post showed... you will turn anything into an AMD vs Intel debate when there was none to begin with, because you HAVE to defend Intel and make it look like the better brand than AMD, at ALL costs. And PLEASE explain how YOU THINK 64-bit has made developers lazy. Viruses and malware have been around since before 64-bit, and programmers have (according to you) been lazy since before 64-bit. "I am pretty sure very few developers monitor memory leaks, and they certainly don't count clock cycles" - and I'm sure this sentence from you was also true before 64-bit. It seems you are blaming 64-bit for all of the above. So, as Lord of the Bored stated, you hate x86-64 because Intel was forced to copy AMD for it, because MS didn't want to have to support two x86-64 instruction sets with Windows, and Intel's own 64-bit kind of flopped.