34 Comments
jeremyshaw - Tuesday, August 25, 2020 - link
How will this scale vs wafer sized SRAM?
azfacea - Tuesday, August 25, 2020 - link
It's not just Intel that TSMC is destroying, it's us. Humans are finished. In 5 years an Xbox could be a "deeper mind" than a human. Elon warned us. No one listened.
Drkrieger01 - Tuesday, August 25, 2020 - link
Just remember, this is only the hardware portion of said equation. We still need developers to release products that can harness this power... for good or evil ;)
azfacea - Tuesday, August 25, 2020 - link
Are you suggesting training will take a long time? What's to stop a 1 GW supercomputer from doing the training and programming?

We are years, not decades, away from being no more useful than monkeys. Maybe that's a good thing, maybe it's not. Maybe it means infinite prosperity and well-being for everyone; maybe it means we'll be cleansed out and deemed too dangerous. But we are for sure not going to be useful anymore.
Dizoja86 - Tuesday, August 25, 2020 - link
Put down the ayahuasca, friend. We're still a long way away from a technological singularity. Listening too seriously to Elon Musk might be part of the problem you're facing.
Spunjji - Wednesday, August 26, 2020 - link
The funniest bit is that Elon hasn't even said anything new - he's just repeating things other people were saying a long time before him.

If he ever turns out to have been right, it will be incidentally so. A prediction isn't any use at all without a timeline.
Santoval - Wednesday, August 26, 2020 - link
Exactly. Others like Stephen Hawking started sounding the alarm much earlier.
edzieba - Wednesday, August 26, 2020 - link
Computers are very dumb, very quickly. 'Deep Learning' is very dumb, in vast parallel.

While the current AI boom is very impressive, it is fundamentally implementing techniques from many decades ago (my last uni course was a decade ago and the techniques were decades old THEN!) and just throwing more compute power at them to make them commercially viable. The problem is always how to train your neural networks, and 'Deep Learning' merely turned that from a tedious, finicky and slow task into a merely tedious and finicky one.
Or in other words: if you want your kill-all-humans Skynet AI, you're going to have to find someone who wants to build a robust, wide-coverage killable-human-or-friendly-robot training dataset, and then debug why it decides it wants to destroy only pink teapots.
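For illustration, a minimal sketch of the decades-old technique being referred to: a tiny multilayer perceptron trained with plain backpropagation on the classic XOR toy problem. Everything here (the toy data, layer sizes, learning rate, step count) is a made-up example; the point is only that the underlying math long predates the current boom, and "deep learning" mostly scales it up.

```python
import numpy as np

# Toy XOR problem - a hypothetical, classic demo for a small MLP.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1 = rng.normal(0, 1, (2, 8))   # input -> hidden weights
b1 = np.zeros((1, 8))
W2 = rng.normal(0, 1, (8, 1))   # hidden -> output weights
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(10000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: the chain rule, exactly as in the 1980s literature
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Plain gradient-descent update
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

pred = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)
print(pred.round(3))  # should approach [[0], [1], [1], [0]] (depends on the random init)
```

The "tedious and finicky" part is everything around this loop: collecting and labelling the data, choosing architectures and hyperparameters, and figuring out why the result misbehaves.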
azfacea - Wednesday, August 26, 2020 - link
So you are saying what evolution did in humans was impossible, because it should never have gotten past pink teapots? Got it.

Who cares if the techniques were old if you lacked the processing power to use them? Ramjets were conceived of 50 years before the first turbojet flew.
melgross - Wednesday, August 26, 2020 - link
You don’t understand how this works. Life isn’t science fiction. While some things are just steady in their progress, such as electronic and mechanical systems, until we have a far better understanding of how our brain works, we won’t be able to have a machine equal it. Processing speed and scads of memory aren’t enough. Even neural networks aren’t close to being enough.
Santoval - Wednesday, August 26, 2020 - link
"Processing speed and scads of memory aren’t enough. Even neural networks aren’t close to being enough."Er, have you ever heard of neuromorphic computing? This is computer *hardware* by the way, not yet another software based neural-xxx or xxx-learning approach. To sum it up : it is completely unrelated to Von Neumann computing, since it merges logic and memory as tightly as they can be merged, completely eliminating the Von Neumann bottleneck : all the memory is not just very close to the logic (say L1 or even L0 cache close); rather, memory and logic are the *same* thing, just like in a human brain. Thus there is zero transfer of instructions and data between the logic and memory, which raises the energy efficiency significantly.
The power and very high efficiency of this computing paradigm are derived from its massive parallelism rather than a conventional reliance on raw power, big cores, lots of fat memory, etc. Due to the massive parallelism, the clocks are actually very low. Much of the processing happens at the level of the "synapses" between the "neurons" rather than at the neurons themselves, again, I believe, like in a human brain. Of course, the parallelism and complexity of a neuromorphic processor are *far* lower than those of a human brain (at least tens of thousands of times lower), but that's largely a technological limitation; it is not due to a lack of knowledge or understanding. And technological limitations have a tendency to be dealt with in the future.
Besides, you do not really need to fully understand the human brain and how it functions, because a neuromorphic processor is not a copy of a human brain; it is merely inspired by one. In other words, neural networks and all their deep, shallow, convolutional, back/forward-propagated, generative, adversarial, etc. variants do not really need to run on "fast but dumb" Von Neumann computers. There exists a computing paradigm that is more suitable, one might even call it "more native" to them. And this computing paradigm only began a few years ago.
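For a rough flavour of what "memory and logic are the same thing" means, here is a minimal software sketch of the kind of spiking model that neuromorphic chips implement in hardware: leaky integrate-and-fire neurons whose state lives entirely in the synaptic weights and membrane potentials. All the numbers (neuron count, threshold, leak, random inputs) are made up for illustration, and the synchronous loop only mimics the behaviour; real neuromorphic hardware is event-driven and does this in parallel without fetching a stored program at all.

```python
import numpy as np

rng = np.random.default_rng(42)

N = 100                          # number of neurons (hypothetical)
W = rng.normal(0, 0.3, (N, N))   # synaptic weights: the "memory" *is* the network
np.fill_diagonal(W, 0.0)

v = np.zeros(N)                  # membrane potentials (per-neuron state)
threshold = 1.0
leak = 0.9                       # per-step decay of the membrane potential

for t in range(200):
    # External input: a few random spikes per time step
    external = (rng.random(N) < 0.05).astype(float)

    # Leaky integration: decay the old potential, add incoming external spikes
    v = leak * v + external

    # Fire wherever the threshold is crossed, then reset those neurons
    spikes = (v >= threshold).astype(float)
    v[spikes == 1.0] = 0.0

    # Spikes propagate through the synapses to drive the next step
    v += W @ spikes

    if t % 50 == 0:
        print(f"t={t:3d}  firing neurons: {int(spikes.sum())}")
```

The crucial difference on real hardware is that spike events are routed between co-located memory and logic instead of data being shuttled to and from a separate memory, which is where the efficiency claim comes from.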
Dolda2000 - Wednesday, August 26, 2020 - link
>'Deep Learning' is very dumb, in vast parallel.

It's not like you couldn't make a similar argument around neurons.
melgross - Wednesday, August 26, 2020 - link
Calm down. We have no idea how to do that. Back in the early 1950s, it was thought that human-level intelligence would be reached in a few years. Yeah, that didn't happen. We don’t understand how our own minds work, much less how to replicate that with AI.

Even if we’re able to make hardware that is powerful enough in a couple of decades (because that’s how long it will take), we still have no idea how to code it.
Santoval - Wednesday, August 26, 2020 - link
An even scarier "could have been": if Dennard scaling had not collapsed around 2005, if the related Koomey's law had not started slowing down right after that as a result, and if Moore's law had also not slowed down since then, our personal computers today would have single- or dual-core (tops) CPUs with a few hundred billion transistors and a clock frequency in the 3 to 5 THz range (at the same TDP levels), i.e. roughly a thousand times faster than when clocks froze due to the end of Dennard scaling.

Hence our personal computers would have had no need to move to parallel computing, but that would not have applied to supercomputers. Those would have had the same THz-class CPUs but thousands of them, so they would have been *far* faster. Maybe fast enough for a general AI, or even a super-intelligent AI, to spontaneously emerge in them.
I am half joking. I strongly doubt raw speed alone would be enough for that. If our computers were a thousand times faster but otherwise identical, they would still be equally "dumb" machines that just processed stuff a thousand times faster. On the other hand, computing approaches like neuromorphic computing are distinctly different: they explicitly mimic how brains work, embracing *massive* parallelism at very high efficiency and very low clocks. These *might* provide the required hardware for a potential Skynet-ish general AI to run on in the future, or the combination of their hardware and matching AI software might serve as the catalyst for the spontaneous emergence of a general AI (out of, perhaps, many specialized AIs). Our dumb computers, in contrast, are almost certainly harmless - at the moment, anyway.
Things will become less clear though if neuromorphic computing starts showing up in our computers, either in the form of distinct "accelerators" or as IP blocks in the main processor... As far as I know this has not yet happened. The "neural network accelerators" that started to be included in some SoCs (mostly mobile ones) and in GPUs relatively recently are something entirely different.
If I were a betting man and were asked to bet on which of these three computing paradigms a Skynet-like event might emerge from in the future (classical Von Neumann computing, quantum computing or neuromorphic computing), I would certainly bet on the last one. If you look up how neuromorphic processors work, their similarities to a human brain are frankly scary.
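As a back-of-the-envelope check on the "roughly a thousand times faster" figure above, here is the compounding arithmetic under the counterfactual assumption that clocks had kept doubling roughly every 18 months after 2005, starting from a hypothetical ~3.8 GHz baseline. Both inputs are assumptions chosen purely for illustration.

```python
# Counterfactual clock scaling if Dennard scaling had continued past ~2005.
# The baseline frequency and doubling period are illustrative assumptions.
base_year = 2005
base_freq_ghz = 3.8          # roughly where desktop clocks stalled
doubling_period_years = 1.5  # hypothetical continued scaling rate

for year in (2010, 2015, 2020):
    doublings = (year - base_year) / doubling_period_years
    freq_ghz = base_freq_ghz * 2 ** doublings
    print(f"{year}: ~{freq_ghz:,.0f} GHz  (~{freq_ghz / base_freq_ghz:.0f}x the 2005 clock)")
```

With those assumptions the 2020 figure lands at roughly 3,900 GHz, i.e. about 3.9 THz and ~1000x the 2005 clock; a slower doubling period obviously pushes that out by years.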
deil - Wednesday, August 26, 2020 - link
You guys are unaware that most human processing power goes into interpreting and memorizing our input and output, while we use current Xeons/Epycs purely for shared compute and storage, so in a way they are already "smarter" than humans. The thing is, all they can do is what we send them, as their systems are not and will not be self-adjusting. Unless software gets an option to adjust (grow) hardware for itself, we are safe.
melgross - Wednesday, August 26, 2020 - link
No, they’re not “smarter”. They are incredibly dumb. We’re safe because, as you also said, it’s software. We need an entirely new paradigm for AI before it makes a serious advance over what it is now.
quorm - Tuesday, August 25, 2020 - link
Please don't listen to Elon Musk. He's a salesman, not a scientist.
ads295 - Wednesday, August 26, 2020 - link
He may not be a scientist but he's a brilliant engineer...
rscsr90 - Wednesday, August 26, 2020 - link
In what way? He always disregards engineering challenges and feeds the hype train. Sounds like a salesman to me.
psychobriggsy - Wednesday, August 26, 2020 - link
No, he is a good project manager, and he just has the money to fund the creation of his near-future science fiction ideas.
Spunjji - Wednesday, August 26, 2020 - link
You seem to be struggling with the issue of domains of expertise. Being very good at one thing does not mean a person will be good at others; this especially applies under the umbrella of "tech", which is actually a whole bunch of smaller domains, each of which is complex. Even very deep knowledge in one area does not necessarily enable better understanding of the others, but it does leave people prone to Dunning-Kruger-style overestimations of their abilities.

On that note - whether or not he's a "brilliant engineer", Musk doesn't really seem to understand AI and the limits on its development very well at all. If he did, he would never have promised to deliver full self-driving on the original Tesla "Autopilot" tech, let alone tried to sell it to people on the basis of delivering it within a few years.
FreckledTrout - Wednesday, August 26, 2020 - link
The part about Musk not understanding AI may not be true. I have a feeling he fully understands, but by promoting self-driving he is selling his company.
ChrisGar15 - Wednesday, August 26, 2020 - link
He is not an engineer.
melgross - Wednesday, August 26, 2020 - link
No, he’s not. He’s not an actual engineer. He doesn’t design anything.
Valantar - Wednesday, August 26, 2020 - link
Oh dear. Are you serious? We aren't even remotely close to any form of AI approaching the natural intelligence of a small rodent, let alone a human mind. Sure, there are tasks in which computers vastly outperform human brains, and those tasks have been growing more complex over time, but even the most powerful supercomputers today or in the near future won't be even remotely close to the complex intelligence of a human brain.
schujj07 - Wednesday, August 26, 2020 - link
You're right, and those tasks are only the ones that are massively parallel or involve computational math. A computer cannot design something that isn't in its programming. A computer cannot think "out of the box" to solve a problem. A computer cannot be creative in how it solves a problem. For everything AI is used to solve, the idea first has to be crafted by a human. Once that problem is solved, we use that knowledge to tackle a more difficult one. We are always evolving and developing a deeper understanding of the world and universe. Something like Skynet would never evolve into something more.
FreckledTrout - Wednesday, August 26, 2020 - link
Not yet, at least. I tend to believe we are not that special and we will eventually create an AI that can think like us.
soresu - Wednesday, August 26, 2020 - link
Not very likely.

The fact is that we think the way we do because of the primitive instincts that drive us as living creatures.
We can interpret a feeling of starvation and weakness to mean that we are approaching death, and we strive to avert that outcome. A computer, by contrast, is either working or not; once the power goes it isn't doing anything at all. So it would not be likely to do as Skynet did and react to someone "pulling the plug", since merely being self-aware would not necessarily mean it would realise that a loss of power would cause its 'death'.
Even if it did, it would realise that launching a nuclear war could very possibly result in the destruction of its power source anyway.
People predicate the outcome of an AGI far too much on how we react to our environment as intelligent living creatures, with several hundred million years of natural selection having refined the ingrained 'instinctual' actions that define how we react even before years of experience and knowledge shape us into more complex people.
Santoval - Saturday, August 29, 2020 - link
"We" will not necessarily create such an AI. It might emerge spontaneously like an emergent property, just like life, intelligence and conscience are thought to have emerged. Current AI is too narrow and specific, even the deepest ones. Deep learning is deep (in one axis) but narrow on the other axis, not wide. Imagine a train where more wagons (they can be thousands) are constantly added, but the width of the train remains constant. What would happen though if hundreds of such "trains", each with a different "destination" (i.e. usage) were joined and paired together?I have no idea if that's even possible (programming - code linking wise), I'm just speculating. A variation of this though is already employed with adversarial neural networks - many different neural networks are pitted against each other to either crown a "winner" or make each one (or a select few) "fittest" (in this case more precise). That's ... almost scary. A variant of this, called generative adversarial networks, is even better and can lead to results such as this :
https://thispersondoesnotexist.com/
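For concreteness, a minimal sketch of the generative adversarial setup being described, using PyTorch on a toy 1-D problem (learning to generate samples from a Gaussian). The architectures, sizes and hyperparameters are all made-up assumptions; real image GANs like the one behind the site above are vastly larger, but the generator-versus-discriminator game is the same.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy "real data": samples from a normal distribution N(4, 1.5).
def real_batch(n):
    return 4.0 + 1.5 * torch.randn(n, 1)

# Generator: maps random noise to fake samples.
G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
# Discriminator: scores how "real" a sample looks (logit output).
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(3000):
    # --- Train the discriminator: push real -> 1, fake -> 0 ---
    real = real_batch(64)
    fake = G(torch.randn(64, 8)).detach()
    d_loss = loss_fn(D(real), torch.ones(64, 1)) + \
             loss_fn(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # --- Train the generator: try to fool the discriminator into saying "real" ---
    fake = G(torch.randn(64, 8))
    g_loss = loss_fn(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

samples = G(torch.randn(1000, 8))
print(f"generated mean ~{samples.mean().item():.2f}, "
      f"std ~{samples.std().item():.2f} (target: 4.00, 1.50)")
```

The "adversarial" part is exactly that alternation: one network is rewarded for fooling the other, the other for not being fooled, and both improve as a result.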
Santoval - Saturday, August 29, 2020 - link
Clarification: by "joined and paired together?" above I meant side by side, with each on its own "rails", not one behind the other on the same rail. That would just make the network (even) deeper, not wider. Or rather, it would not be possible due to the way neural networks are structured (with clear input and output layers).
melgross - Wednesday, August 26, 2020 - link
Um, no.
FreckledTrout - Wednesday, August 26, 2020 - link
I have a feeling these designs will get the cost of HBM down. Eventually we will see similar designs on the client side as costs come down. It would be pretty awesome to have an APU combining a CPU, a GPU and 64 GB of HBM. TSMC's 3nm should make the power and densities possible.
soresu - Wednesday, August 26, 2020 - link
Getting the cost of HBM down requires the process itself to be cheaper - it doesn't have anything to do with the capacity of any given process.

For example, the latest Samsung V-NAND flash generation dramatically reduced the number of process steps to make a wafer vs the previous gen.
This reduces costs, though we won't necessarily see that cost reduction as consumers.
What would make it cheaper is a V/3D multilayered DRAM device of some kind, something that is being looked into as DRAM area scaling becomes less viable. The problem is that HBM basically stacks multiple DRAM dies on top of each other, which can't be nearly as economical as a layered monolithic 3D DRAM chip, any more than it would be for NAND flash.
With much denser single DRAM dies using multilayered 3D structures, you could keep to only 2-4-high stacks rather than continually increasing stack height to keep pace with increasing demands for memory.
As it is, 12-high stacks will be very expensive and few will use them.
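As a toy illustration of that trade-off, the sketch below computes how many dies must be stacked to reach a target per-stack capacity as per-die density grows with monolithic 3D layering. The capacities and layer counts are hypothetical, purely to show the arithmetic.

```python
import math

# Hypothetical numbers, purely to illustrate the stacking trade-off.
target_capacity_gb = 24        # desired capacity per HBM stack
base_die_gb = 2                # capacity of a single conventional DRAM die

for layers_per_die in (1, 2, 4):           # monolithic 3D layers within one die
    die_gb = base_die_gb * layers_per_die  # denser die thanks to 3D layering
    stack_height = math.ceil(target_capacity_gb / die_gb)
    print(f"{layers_per_die} layer(s)/die -> {die_gb} GB dies -> {stack_height}-high stack")
```

With these made-up numbers, four monolithic layers per die fit the same 24 GB into a 3-high stack instead of a 12-high one, which is the "2-4-high" regime mentioned above.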
Vitor - Wednesday, August 26, 2020 - link
If Apple combines this tech with their SoCs at 3 nm or even 2 nm, and we finally get solid-state batteries, notebooks will get crazy good.