36 Comments
CristianM - Friday, March 27, 2015 - link
I am sure that these chips have more or less redundancy built in. The first iterations from Samsung might have had a lot more than the latest TLC models. The final cost of the product is greatly influenced by yields, so a bigger die that has more redundancy built in and yields better might be cheaper than a smaller die. The big advantage that I see for Samsung is that they have far more experience with the process, and they will continue to fine-tune it.
I hope that everyone will find their way and prices will fall.
nandnandnand - Friday, March 27, 2015 - link
Where is Toshiba's 48 layer NAND?
psychobriggsy - Friday, March 27, 2015 - link
The die size isn't known. However, if you extrapolate Samsung's density to 48 layers you can see it will be very competitive. This will be coming this year.
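To put rough numbers on that extrapolation, a minimal sketch; the 32-layer density figure below is a placeholder, not a number from the article:

```python
# First-order extrapolation: assume bits per mm^2 scale linearly with
# layer count, ignoring peripheral circuitry that doesn't stack.
layers_now, layers_next = 32, 48
density_32 = 1.0  # Gbit per mm^2 for the 32-layer die (hypothetical)

density_48 = density_32 * layers_next / layers_now
print(f"projected: {density_48:.2f} Gbit/mm^2 "
      f"({layers_next / layers_now - 1:.0%} denser)")
```

In practice the gain lands below 50%, since pads and peripheral logic don't shrink as layers are added.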
melgross - Friday, March 27, 2015 - link
Next year.
eddieobscurant - Friday, March 27, 2015 - link
Nope, Toshiba's 48 layer NAND won't be coming into consumer products before Q2 2016.
Shadowmaster625 - Friday, March 27, 2015 - link
So in the year 2015, we can now fit the entire Library of Congress on a 62 sq inch piece of silicon?
RU482 - Friday, March 27, 2015 - link
Impress me and figure out how many times you can fill the Library of Congress with 62 sq inch pieces of silicon.
Mr Perfect - Saturday, March 28, 2015 - link
Time to email Randall at XKCD.
jjj - Friday, March 27, 2015 - link
Toshiba/SanDisk do have 50% more layers, so they are likely more efficient even with half the capacity. To be fair, die size is less relevant now that the number of layers and the technology differ.
Kristian Vättö - Friday, March 27, 2015 - link
I agree with both of your points. I would honestly be surprised if Toshiba-SanDisk's 48-layer isn't more efficient, especially since it will enter the market later. I think die size and bit density are still important metrics when comparing different dies, but you're right that it's no longer possible to use die size as the sole indicator of cost efficiency because there are such dramatic differences in the structures and manufacturing processes.
jjj - Friday, March 27, 2015 - link
Maybe you could try to ask one of the NAND makers to give you some indication of how the costs change with the number of layers. They won't provide precise numbers, but some wide range would be ok. And ofc we all ignore packaging costs; those are a significant factor when it comes to cheap chips (and NAND is relatively cheap).
extide - Friday, March 27, 2015 - link
Die size is still very important, because it determines how many die can fit on a wafer, and thus directly influences the cost.
ats - Friday, March 27, 2015 - link
It has some effect, but less than most think when you get into dies with lots and lots of layers. When you get into 32 to 48 layer dies, your costs are going to be overwhelmingly dominated by processing costs. A 48 layer die is looking at well over 100 litho steps. The cost savings of the multi-layer dies come from the planar dies requiring multiple patterning litho steps. Most of the equipment is going to be the same between multi-layer NAND and planar NAND (even at 19-15nm). Where you'll save is likely faster throughput per litho step (older, easier to work with resist, less precise etch requirements, etc.). If the throughput per wafer is overall higher with multi-layer, then the cost per area for multi-layer will be cheaper; if it isn't, then it won't.
jjj - Friday, March 27, 2015 - link
All I said was that it matters less, not that it doesn't matter at all. Anyway, bare die is cheap; what matters is what you do with it. Here is a quote from a report that's some 6 months old; sadly, I don't remember the source. (They are talking revenue per wafer for the foundry.)
"There is more than a 14 times difference between 0.5-micron 200mm revenue per wafers (US$430) and 28nm 300mm revenue per wafers (US$5,850). Even when normalizing the figures by using revenue per square inch, the difference is dramatic (US$51.77 for the 28nm technology versus US$8.56 for the 0.5-micron technology)."
To make it easier for some, a square inch is 645.16 square mm.
Ofc there are yields and other costs, not just the wafer.
So here you've got different numbers of layers (impacting cost and yield), Intel using floating gate vs others using charge trap, and maybe different processes, so the costs per square inch can be very different.
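For anyone checking the quoted figures, a quick sanity check using gross wafer area (the report presumably subtracts edge exclusion, so these land slightly high):

```python
import math

MM2_PER_SQ_INCH = 645.16

# (node, wafer diameter in mm, quoted revenue per wafer in USD)
for node, diameter_mm, revenue in [("0.5-micron", 200, 430),
                                   ("28nm", 300, 5850)]:
    area_sq_in = math.pi * (diameter_mm / 2) ** 2 / MM2_PER_SQ_INCH
    print(f"{node}: {area_sq_in:.1f} sq in, ${revenue / area_sq_in:.2f}/sq in")

# Prints ~$8.83 and ~$53.40 per square inch, in line with the quoted
# $8.56 and $51.77 once edge exclusion is accounted for.
```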
Kristian Vättö - Saturday, March 28, 2015 - link
3D NAND requires a whole new set of deposition and etch tools because the tools used for planar NAND aren't designed for depositing dozens of layers and the etch tools aren't capable of such high aspect ratio etching with the precision required. 3D NAND basically moves the difficulty away from lithography/patterning to deposition and etching.
ats - Saturday, March 28, 2015 - link
I'm not so sure about new depo and etch tools. The process of CVD is pretty much independent of layer count, and etch is also pretty much independent of layer count. They may be using newer tools with increased deposition rates, but those new tools would work just as well with planar. And etching isn't that heavily tool based; it's primarily chemistry.
Now it may require new deposition, etch, and polishing techniques/formulas, but the equipment should be the same. The mechanics of those three things really don't change at all.
Etch is still "dip wafer in aggressor agent for a set time period." CVD is still "put wafer in a vacuum chamber for a set amount of time," etc.
The way to get to things like high aspect ratios is via recipe manipulations and barrier manipulations.
JatkarP - Friday, March 27, 2015 - link
I heard this is the first use of a floating gate cell in 3D NAND. What did Samsung use then for the 850 EVO?
Kristian Vättö - Friday, March 27, 2015 - link
Samsung's V-NAND uses a charge trap instead of a floating gate.
jhgf1000 - Friday, March 27, 2015 - link
How does the number of layers affect the cost? Intel's has 32 layers while the others will have 48.
Kristian Vättö - Friday, March 27, 2015 - link
The production cost per wafer is higher since there are more steps to build a 48-layer die, but the increased bit density outweighs that, which makes a 48-layer chip generally more cost effective than 32-layer (assuming similar yields, array efficiency, etc.).
Note that Toshiba-SanDisk won't enter mass production until H1'16, so Intel-Micron has an advantage in terms of entering the market and will likely have 2nd gen in mass production before Toshiba-SanDisk. Samsung's 3rd gen V-NAND is still a question mark.
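To make that tradeoff concrete, a toy cost-per-bit comparison; every number below is hypothetical, chosen only to illustrate the direction of the effect:

```python
# Toy model: extra layers raise wafer cost (more deposition/etch steps)
# but raise bits per die faster, so cost per bit falls. Numbers invented.
def cost_per_gbit(wafer_cost, good_dies_per_wafer, gbit_per_die):
    return wafer_cost / (good_dies_per_wafer * gbit_per_die)

gen32 = cost_per_gbit(wafer_cost=5000, good_dies_per_wafer=400, gbit_per_die=256)
gen48 = cost_per_gbit(wafer_cost=6000, good_dies_per_wafer=400, gbit_per_die=384)

print(f"32-layer: ${gen32:.4f}/Gbit, 48-layer: ${gen48:.4f}/Gbit")
# Wafer cost up 20%, bits per die up 50% -> cost per bit down ~20%,
# assuming yields and array efficiency hold, as noted above.
```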
stephenbrooks - Friday, March 27, 2015 - link
I'm curious where this is going to go once they require 100s of layers -- a huge pipeline of lithography machines with one wafer taking months to build up all the layers?
ats - Friday, March 27, 2015 - link
It already takes months for normal production wafers with a planar design. The primary advantage of multi-layer is a significant increase in tolerance around litho-etch. This can result in an increased throughput per litho-etch stage. Also, for <20nm there are multiple layers that have to have multiple litho-etch steps; multi-layer basically doesn't need multiple patterning. What it all means is that while multi-layer has more total die layers, it can have fewer overall steps to get those layers in some cases, and those steps can go faster. Multi-layer also generally results in larger charge capture structures, leading to longer endurance than the equivalent density on a planar design.
stephenbrooks - Sunday, March 29, 2015 - link
Interesting, so they are already limited by the speed of the steps? If a 1% increase in litho scale makes the steps more than 2% faster, they can achieve more bits per wafer-hour at the larger scale.
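Reading that 1%/2% point literally: bits per wafer scale roughly with 1/scale², so the break-even looks like this (a sketch under that simple assumption):

```python
# Relaxing the litho scale by 1% costs ~2% of the bits per wafer
# (area per bit grows with scale^2). It pays off only if wafer
# throughput improves by more than that.
scale_up = 1.01
bits_per_wafer_factor = 1 / scale_up ** 2   # ~0.980

for throughput_gain in (1.01, 1.03):
    bits_per_hour = bits_per_wafer_factor * throughput_gain
    print(f"throughput x{throughput_gain:.2f} -> bits/hour x{bits_per_hour:.3f}")
# x1.01 still loses bits per hour; x1.03 wins, matching the >2% threshold.
```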
hojnikb - Friday, March 27, 2015 - link
Any reason why the MLC V-NAND die is bigger than the TLC one? I mean, they are the same layer count and the same node, AND the MLC die is smaller in capacity.
If anything, these two should be the same, since cutting 128Gbit TLC to MLC yields around 86Gbit.
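The ~86Gbit figure follows directly from the cell count staying fixed while bits per cell drop from 3 to 2:

```python
# Same physical array, fewer bits per cell: reconfigure a 128Gbit
# TLC die (3 bits/cell) as MLC (2 bits/cell).
tlc_gbit = 128
cells_gcell = tlc_gbit / 3          # ~42.7 billion cells
mlc_gbit = cells_gcell * 2
print(f"{mlc_gbit:.1f} Gbit")       # ~85.3, rounded up to the quoted 86Gbit
```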
MrSpadge - Friday, March 27, 2015 - link
Optimization of the peripheral circuitry, according to the article.
dealcorn - Friday, March 27, 2015 - link
Is the die size small enough to permit on-package fabrication with a 14nm Atom class CPU? A smartphone SoC with an economical, reasonably efficient, on-package SSD might have appeal if it reduces BoM costs.
extide - Friday, March 27, 2015 - link
NAND processes are totally different from logic processes, so they're not really comparable.
dealcorn - Friday, March 27, 2015 - link
On-package means not on the same die. Different logic processes are irrelevant.
alacard - Friday, March 27, 2015 - link
I'd still go with Samsung, as the charge trap flash technology is pretty incredible, and I'm more interested in cell reliability than density. When it comes to storing my data I always choose quality over quantity.
If the tech reviewers give the different cell technologies a fair shake when reviewing drives, I think most users will agree and spring for the more expensive, more durable flash, which will bring costs down and force Intel-Micron-Toshiba to improve their technology.
If they don't... let the race to the bottom commence.
frenchy_2001 - Friday, March 27, 2015 - link
1) Reliability is the sum of many factors. Die type (planar, 3D, charge trap or floating gate) is only one.
2) Tech Report just finished an SSD reliability experiment. It took them almost a year to kill planar MLC 256GB SSDs by writing over 1PB of data. Unless you are dealing with uncompressed 4K video on a daily basis, reliability from NAND exhaustion is a non-issue for consumers.
3) Testing for this is a *VERY* long process. Reviewers like AnandTech test what the SMART parameters report, but those are usually programmed to the guaranteed reliability (in this case, 3000 cycles). Real, tested reliability would take a reviewer writing until the SSD dies, like the Tech Report experiment. With 256Gb dies and 10k cycles expected, it will take a LONG while (see the rough numbers below)...
4) The market has proven that consumers go for convenience and cost first. Few will research the underlying tech and the differences.
5) Technology will continue improving nonetheless (NAND is a competitive market, and other techs like RRAM are coming up).
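Rough numbers for point 3, to show why nobody tests this to destruction; drive size and write speed are illustrative assumptions:

```python
# How long a write-until-death endurance test takes, to first order.
capacity_tb = 0.256       # 256GB drive
pe_cycles = 3000          # rated cycles; a 10k-cycle die takes ~3.3x longer
write_speed_mb_s = 100    # sustained sequential writes, 24/7

total_tb = capacity_tb * pe_cycles                 # ~768 TB to write
days = total_tb * 1e6 / write_speed_mb_s / 86400   # TB -> MB -> s -> days
print(f"{total_tb:.0f} TB written over ~{days:.0f} days nonstop")
# ~768 TB / ~89 days at the rated 3k cycles; at 10k cycles it's ~10 months,
# consistent with Tech Report needing nearly a year to push past 1PB.
```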
alacard - Saturday, March 28, 2015 - link
Endurance is good; cells not losing their charge over extended time spans of non-use is better.
menting - Friday, March 27, 2015 - link
Quality is only worth as much as a manufacturer is willing to warranty it for. For example, it doesn't matter if Samsung's process can do 10k write cycles if they only warranty it for 3k cycles. 10k write cycles is an average number; they will still let NAND through, even NAND running marginal to the process curve, as long as it meets the 3k cycle spec.
MrSpadge - Friday, March 27, 2015 - link
You don't throw a 1 year old car away just because its warranty has run out. Sure, it's better to get the write cycles guaranteed, but the manufacturers are notoriously bad at this. Most will just pull some random number out of their.. ehm, marketing department and claim this one for all capacities of an SSD model. The resulting number hardly relates to reality at all, as the number of total writes simply scales linearly with drive capacity.
ats - Friday, March 27, 2015 - link
Actually, there are standardized formulas and procedures for calculating the endurance of an SSD. Pretty much all manufacturers publish specifications in line with these formulas and procedures. Though, many manufacturers tend to be overly conservative with their ratings, both because even the conservative ratings are good enough for the market and because they want further differentiation for their various enterprise-level drives. It's unlikely that anyone running desktop workloads is going to exhaust their SSD's endurance. To really start stressing an SSD's endurance you pretty much have to do full-span random <=4k writes, which are rather unusual in the consumer market.
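The back-of-the-envelope form of those endurance ratings looks roughly like this (the common approximation, not the full standardized procedure, and all inputs here are hypothetical):

```python
# Rated endurance to first order: total writes grow with capacity and
# P/E cycles and shrink with write amplification. Inputs are examples.
def rated_tbw(capacity_gb, pe_cycles, write_amplification):
    return capacity_gb * pe_cycles / write_amplification / 1000  # in TB

print(rated_tbw(256, 3000, write_amplification=1.2))  # ~640 TB, sequential-ish
print(rated_tbw(256, 3000, write_amplification=3.0))  # ~256 TB, random 4k-ish
# Full-span random <=4k writes drive write amplification up, which is why
# they stress endurance so much harder than typical desktop workloads.
```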
menting - Sunday, March 29, 2015 - link
The number of total writes scales with drive capacity, yes, but it stays constant per cell, which is what the OP has to be referring to with "quality over quantity".
sonicmerlin - Sunday, March 29, 2015 - link
So does this mean we'll see 500GB SSDs for under $100 next year? Also, perhaps 64GB will become the minimum capacity in the iPad Air 4?