44 Comments

  • ingwe - Tuesday, January 26, 2016 - link

    This article was a really cool idea. I enjoyed reading it.
  • codylee - Thursday, January 28, 2016 - link

    I agree, great way to write the roundup! VR + external graphics docks look to be the most exciting!
  • hojnikb - Tuesday, January 26, 2016 - link

    >but Mushkin is expecting to be able to ship the 4TB model for a mere $500, which will greatly help it find a niche.

    That's actually not true. This drive is going to cost $2k.
  • Ian Cutress - Tuesday, January 26, 2016 - link

    We were told $500 direct from Mushkin in our meeting with them, and other sites are reporting $500 as well. Google search 'Mushkin 4TB 500'. Unless you have other information...?
  • MikhailT - Tuesday, January 26, 2016 - link

    Please provide your source for this. All of the press were told by Mushkin directly that it will be around $500.
  • hojnikb - Tuesday, January 26, 2016 - link

    They later corrected this and said it's going to cost $2000.

    It's kind of too good to be true that a niche product with two controllers and 4TB of flash would be that cheap.
  • hojnikb - Tuesday, January 26, 2016 - link

    Correction: it's actually going to be close to $1000. They specifically said it's going to be targeted at $0.25/GB, which works out to roughly $1k for a 4TB drive.
  • Billy Tallis - Tuesday, January 26, 2016 - link

    The current Reactor with planar NAND is at $0.25/GB already. The 3D NAND transition should mean that large drives (where the controller costs are a small portion of the total BOM) will be significantly below that price point once production is at full capacity.
  • Kristian Vättö - Tuesday, January 26, 2016 - link

    But the 4TB drive is dual-controller with a hardware RAID controller, so the price should be at least twice that of the 2TB single-controller version. From what I heard from Chris Ramseyer at Tom's, the target price for the 2TB model is $0.25/GB, whereas the pricing of the 4TB is yet to be confirmed.

    Perhaps the long days (and nights) of CES resulted in Mushkin misspeaking about the pricing during meetings.
  • GTRagnarok - Tuesday, January 26, 2016 - link

    Yeah, I didn't believe for one moment that we would get a 4TB SSD for $500 this year.
  • Ian Cutress - Tuesday, January 26, 2016 - link

    We got clarification from Mushkin. Initially they will be aiming for $0.25/GB, which puts the target south of $1000. Going down by another half is a longer-term goal, but it won't happen overnight. I've made an addition clarifying this.
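
For a quick sense of the $/GB arithmetic being discussed in this sub-thread, here is a minimal sketch; the capacity and price-per-GB figures are the targets quoted above, not confirmed retail pricing:

```python
# Rough $/GB arithmetic for the figures quoted in this thread.
# These are the discussed targets, not confirmed retail prices.

def drive_price(capacity_gb, dollars_per_gb):
    """Estimated drive price at a given $/GB target."""
    return capacity_gb * dollars_per_gb

capacity_gb = 4000  # a marketed "4TB" drive, in decimal gigabytes

print(drive_price(capacity_gb, 0.25))   # 1000.0 -> roughly $1000 at the stated $0.25/GB target
print(drive_price(capacity_gb, 0.125))  # 500.0  -> a $500 4TB drive needs roughly $0.12-0.13/GB
```
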
  • hojnikb - Wednesday, January 27, 2016 - link

    It's going to take some time for NAND to come down to $0.12/GB.

    This probably means 3D TLC NAND. I doubt 2D NAND can get that cheap for SSD use.

    By the time NAND is cheap enough for $500/4TB, I'm sure someone will come up with a controller capable of addressing 4TB of flash (so no need for dual controllers + RAID).
  • nathanddrews - Wednesday, January 27, 2016 - link

    In the words of Darth Vader, "NOOOoooooOOOOooOOOOO!"

    It was too good to be true. :-(
  • Dirk_Funk - Tuesday, January 26, 2016 - link

    Totally irrelevant, but is that an Xbone HDMI cord coming out of the MSI tower on the first page?
  • nathanddrews - Tuesday, January 26, 2016 - link

    "This reportedly kills random access performance, but Mushkin is expecting to be able to ship the 4TB model for a mere $500, which will greatly help it find a niche."

    4TB of NAND for $500. Just let that sink in a bit... So basically, they're slapping two decent 2TB SSDs behind a slower controller. Here's a similar device with the same JMicron controller:
    http://amzn.com/B00ITJ7WDC

    What sort of random IO "killing" is happening here? To what extent? I would expect a modern SSD with low random IO to still outperform a HDD by a significant margin, but some rough numbers to put it in perspective would help. This could be awesome for a speedy micro-NAS.
  • Ian Cutress - Tuesday, January 26, 2016 - link

    If I recall correctly, we were told '10K IOPS'. Not sure if that's read or write, or if that was the 2TB version or the 4TB version. Billy might know, he was more awake for that meeting :)
  • Billy Tallis - Tuesday, January 26, 2016 - link

    10K IOPS sounds right, but I don't have any more context for that number.

    Since SM2246EN drives have steady-state random write IOPS of around 5k, that metric might actually not be hurt at all by the RAID overhead. But all the other random access numbers would be.
  • nathanddrews - Tuesday, January 26, 2016 - link

    So what you're telling me is that apart from $$/GB, one of these would make for a significant improvement in IOPS over a HDD-based file server. Say no more. ;-)
  • jasonelmore - Tuesday, January 26, 2016 - link

    I don't agree with the consensus that VIVE is better than oculus. They are different systems for different use case scenarios.

    The Oculus Rift is much better for long gaming sessions at your desk, like playing Star Citizen, Euro Truck, all flight sims, etc., due to its light weight and polished design. It can still track in a small playspace, but the controllers require an additional sensor. The Oculus controllers are amazing and groundbreaking IMO. It's too bad they weren't ready to ship with the CV1.

    The Vive is meant to be a system for a large play area, like the Nintendo Wii. It's much heavier, and it needs a large area to do most of the interactive stuff. The controllers are not as good as Oculus's (the general consensus among most testers), but the Lighthouse trackers are amazing and really do push the envelope.

    Most of the demos I've seen in larger playspaces show the users constantly doing cable management with their feet. Your mind is thinking about the cables subconsciously, thus reducing immersion.

    Both have similar display and performance specs (1080p @ 90 FPS x 2), but Oculus seems to have more exclusives. Good guy Steam will support both headsets, but Oculus will have its own exclusive titles. Maybe HTC can sign some exclusive deals, but I just don't see Steam requiring you to buy their HMD to play HL3.

    HTC, Steam, and Oculus need to focus on getting rid of the cables (maybe a Li-Fi system would have the necessary bandwidth). Wireless freedom is much more important for HTC/Steam because of the larger playspaces. I would hold off on the Vive version 1 and see if version 2.0 is any better. Oculus will be supported by SteamVR, so Steam's involvement shouldn't make you prefer it over the Oculus.

    Finally, the Vive has not been priced yet, and HTC is not a company that can afford to subsidize the hardware. Steam maybe, but not HTC. But we shall see. I'm afraid the Vive is going to cost $999 and up for the HMD, two controllers, and two Lighthouse sensors.
  • Nintendo Maniac 64 - Tuesday, January 26, 2016 - link

    Is Brandon Chester not aware that LG unveiled a new OLED TV line-up for 2016 that adds support for Dolby Vision? That, by definition, is an advancement.
  • Brandon Chester - Tuesday, January 26, 2016 - link

    The fact that it was LG is exactly why it didn't advance OLED as a category. It's a market with basically one vendor. Everyone else is still pushing LCD.
  • Nintendo Maniac 64 - Tuesday, January 26, 2016 - link

    Well that's kind of expected when the majority of the competition are also the incumbent leaders when it comes to flat-panel market-share (that being Samsung and Innolux).

    I mean, there's a reason you don't see Toyota going all-in on electric vehicles: they're an incumbent when it comes to market share of the internal combustion engine.
  • boeush - Tuesday, January 26, 2016 - link

    I see a need for a new high-bandwidth optical cable.

    Intel always envisioned Thunderbolt evolving toward optical signalling (going at least back to the days when it was code-named Light Peak). With VR goggles potentially pushing 8K+ x 2 at 90 Hz in the future, with external graphics enclosures having the potential for CrossFire/SLI and needing at least the equivalent of 2x PCIe 3.0 x8 (or even x16) for optimal performance (never mind the upcoming PCIe 4.0 standard), and with next-gen storage (like 3D XPoint, etc.) promising much higher bandwidths for DAS systems, it seems to me the time is right for a thin, long, light, and flexible optical cable, plus an accompanying compact transmitter/receiver standard, capable of moving data at TB/s rates.

    Surely there has to be some development occurring along these lines?
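
As a rough sense of scale for the bandwidth argument above, here is a back-of-the-envelope sketch of raw, uncompressed video bandwidth; the resolutions, refresh rate, and bit depth are illustrative assumptions, not published VR specs:

```python
# Back-of-the-envelope uncompressed video bandwidth, in Gbit/s.
# Parameters are illustrative assumptions (24-bit color, no blanking, no compression).

def video_gbps(width, height, eyes, hz, bits_per_pixel=24):
    """Raw pixel bandwidth in Gbit/s."""
    return width * height * eyes * hz * bits_per_pixel / 1e9

print(video_gbps(3840, 2160, 2, 90))   # ~36 Gbit/s  (4K per eye at 90 Hz)
print(video_gbps(7680, 4320, 2, 90))   # ~143 Gbit/s (8K per eye at 90 Hz)

# For comparison: Thunderbolt 3 carries 40 Gbit/s, and a PCIe 3.0 x8 link
# moves roughly 63 Gbit/s (~7.9 GB/s) of payload per direction.
```
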
  • Ikefu - Tuesday, January 26, 2016 - link

    The Razer Core Thunderbolt GPU box has me very intrigued. As a traveling engineer I'm always fighting the battle between a bulky laptop I have to drag in to the field with me or a sleek laptop that is under-powered for development and games at a desk. A Dell XPS 15 with a Razer Core would be a juicy option indeed.
  • lmc5b - Tuesday, January 26, 2016 - link

    Page 2, line 2: "and there always" should be "and there is always" I think.
  • Lolimaster - Tuesday, January 26, 2016 - link

    The easy route to detect an Apple fanatic:

    -Brandon Chester, in his twenties.
    -Bashes OLED just to mention MicroLED after Apple bought the company, instead of keeping face by mentioning, say, Sony's CrystalLED.
  • Brandon Chester - Wednesday, January 27, 2016 - link

    1. Nowhere was OLED bashed. In fact, I said it will become the dominant tech in all mobile devices, which is quite a testament to its quality. It simply has inherent issues with aging, and that will cause issues in TVs.

    2. CLED was a one-time Sony tech demo. MicroLED is not exclusive to Apple at all, and to my knowledge the only relation Apple has ever had with it is buying a small company working on MicroLED around two years ago. If anything, using the example of something done by Sony as a CES tech demo instead of an actually disruptive upcoming technology would be the wrong way to go.

    3. I'm not in my twenties.
  • at80eighty - Tuesday, January 26, 2016 - link

    Loved this.

    You guys should make this a running thing, something like a State of Tech; maybe a bi-monthly / quarterly cadence? Get all your editors to chip in.
  • random2 - Wednesday, January 27, 2016 - link

    "Covering PC to smartphone to TV to IoT to the home and the car,..."

    OK, I give... What's IoT?

    "Firstly was storage - Mushkin showed us an early PVT board...."

    PVT? Pressure, volume, temperature? Price, volume, trend? Position, velocity, and time? Paroxysmal Ventricular Tachycardia?

    I stay out of the loop for a while and I have to take a course in leet speak to read an Anandtech article.
  • Ian Cutress - Wednesday, January 27, 2016 - link

    IoT = Internet of Things; basically, can you put a chip into it that'll connect to the web for monitoring/control? It's a term that started around 2008.

    PVT = Production Validation Test, one of the sample stages for validation before mass production. This one is as old as mass production in electronics.

    But point taken, I'll expand acronyms in future on first use :)
  • JonnyDough - Wednesday, January 27, 2016 - link

    "With these things in mind, it does make sense that Samsung is pushing in a different direction. When looking at the TV market, I don’t see OLED as becoming a complete successor to LCD, while I do expect it to do so in the mobile space. TVs often have static parts of the interface, and issues like burn in and emitter aging will be difficult to control."

    Wouldn't that be the opposite? Phones and tablets are often used in uncontrolled environments, and have lock screens and apps that create static impressions on a display as much as any TV, in my opinion. I think OLEDs could definitely penetrate the television market, and as a result they will trickle over into other markets as costs come down. Unless a truly viable alternative to OLED can overtake these spaces, I think continual refinements will keep OLED a constantly used and fairly settled technology. Robots are moving more and more towards organics as well, so it would make sense that in the future we borrow more and more from nature as we come to understand it.
  • Brandon Chester - Wednesday, January 27, 2016 - link

    Relative to TVs, you keep your phone screen on for a comparatively short period of time, so aging is actually less of an issue in the mobile market. Aging is the bigger issue for TV adoption, with burn-in being a secondary thing that could become a larger problem with the adoption of TV boxes that get left on with a very static UI.
  • JonnyDough - Thursday, January 28, 2016 - link

    You brought up some good points. I wonder though how many people have a phablet and watch Netflix or HBO now when on the road in a hotel bed.
  • Kristian Vättö - Thursday, January 28, 2016 - link

    I would say the even bigger factor is the fact that TV upgrade cycles are much longer than smartphones. While the average smartphone upgrade cycle is now 2-2.5 years, most people keep their TVs for much longer than that, and expect them to function properly.
  • Mangemongen - Tuesday, February 2, 2016 - link

    I'm writing this on my 2008, possibly 2010 Panasonic plasma TV which shows static images for hours every day, and I have seen no permanent burn in. There is merely some slight temporary burn in. Is OLED worse than modern plasmas?
  • JonnyDough - Wednesday, January 27, 2016 - link

    What we need are monitors that have a built-in GPU slot. Since AMD is already helping them enable other technologies, why not that? Swappable GPUs on a monitor: monitors already have a PSU built in, so why not? Put a more powerful swappable PSU in the monitor, a mobile-like GPU, and voila, external plug-and-play graphics.
  • Klug4Pres - Wednesday, January 27, 2016 - link

    "The quality of laptops released at CES were clearly a step ahead of what they have been in the past. In the past quality was secondary to quantity, but with the drop in volume, everyone has had to step up their game."

    I don't really agree with this. Yes, we have seen some better screens at the premium end, but still in the sub-optimal 16:9 aspect ratio, a format that arrived in laptops mainly just to shave a few bucks off cost.

    Everywhere we are seeing quality control issues, poor driver quality, woeful thermal dissipation, a pointless pursuit of ever-thinner designs at the expense of keyboard quality, battery life, speaker volume, etc., and a move to unmaintainable soldered CPUs and RAM.

    Prices are low, quality is low, volumes are getting lower. Of course, technology advances in some areas have led to improvements, e.g. Intel's focus on idle power consumption that culminated in Haswell battery-life gains.
  • rabidpeach - Wednesday, January 27, 2016 - link

    Yeah? 16K per eye? Is that real, or did he make up numbers to give Radeon something to shoot for in the future?
  • boeush - Wednesday, January 27, 2016 - link

    There are ~6 million cones (color photoreceptors) per human eye. Each cone perceives only the R, G, or B portion (roughly speaking), making for roughly 2 megapixels per eye. Well, there's much lower resolution in R, so let's say 4 megapixels to be generous.

    That means 4k, spanning the visual field, already exceeds human specs by a factor of 2, at first blush. Going from 4k to 16k boosts pixel count by a factor of 16, so we end up exceeding human photoreceptor count by a factor of 32!

    But there's a catch. First, human acuity exceeds the limit of color vision, because we have 20x more rods (monochromatic receptors) than cones, which provide very fine edge and texture information over which the color data from the cones is kind of smeared or interpolated by the brain. Secondly, most photoreceptors are clustered around the fovea, giving very high angular resolution over a small portion of the visual field, but we are able to rapidly move our eyeballs around (saccades), integrating and interpolating the data to stitch and synthesize together a more detailed view than would be expected from a static analysis of the optics.

    In light of all of which, perhaps 16k uniformly covering the entire visual field isn't such overkill after all if the goal is the absolute maximum possible visual fidelity.

    Of course, running 16k for each eye at 90+ Hz (never even mind higher framerates) would take a hell of a lot of hardware and power, even by 2020 standards. Not to mention, absolute best visual fidelity would require more detailed geometry, and more accurate physics of light, up to full-blown real-time ray-tracing with detailed materials, caustics, global illumination, and many bounces per ray - something that would require a genuine supercomputer to pull off at the necessary framerates, even given today's state of the art.

    So ultimately, it's all about diminishing returns, low-hanging fruit, good-enough designs, and balancing costs against benefits. In light of which, 16k VR is probably impractical for the foreseeable future (meaning, the next couple of decades)... Personally, I'd just be happy with a 4k virtual screen spanning, let's say, 80% of my visual field, kept static in real space via accelerometer-based head-tracking (to address motion sickness), with an option to intentionally reposition it when desired. Then I wouldn't need any monitors any longer, and would be able to carry my high-res screen with/on me everywhere I go...
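
For reference, here is a small sketch of the pixel-count arithmetic behind the factors in the comment above; the "4K" and "16K" figures assume standard 16:9 panel resolutions, and the ~4 megapixel cone-limited figure is the commenter's deliberately generous estimate:

```python
# Pixel counts versus the ~4 megapixel cone-limited estimate used above.
# "4K"/"16K" assume 16:9 panels; actual per-eye VR panels would differ.

cone_limited_mp = 4.0                # generous estimate from the comment above
uhd_4k_mp = 3840 * 2160 / 1e6        # ~8.3 megapixels
uhd_16k_mp = 15360 * 8640 / 1e6      # ~132.7 megapixels (16x the 4K count)

print(uhd_4k_mp / cone_limited_mp)   # ~2x the cone estimate
print(uhd_16k_mp / cone_limited_mp)  # ~33x the cone estimate (the factor of ~32 above)
```
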
  • BMNify - Wednesday, January 27, 2016 - link

    "AMD's Raja Koduri stating that true VR requires 16K per eye at 240 Hz."

    Well, according to BBC R&D, scientific investigations found the optimum to be close to the 300 fps we were recommending back in 2008, and showed that higher frame rates dramatically reduce motion blur, which can be particularly disturbing on large modern displays.

    It seems optimal to just use the official UHD-2 (8K) spec with multichannel surround sound and the higher real frame rates of 100/150/200/250 fps for high-action content, as per the existing BBC/NHK papers... no real need to define a UHD-3 (16K) spec for near-eye/direct-retina displays.
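
To put the numbers in this sub-thread side by side, here is a minimal sketch of the raw data rates they imply; a 16:9 "16K" resolution, 24-bit color, and no compression are assumed for illustration:

```python
# Raw (uncompressed) data rates implied by the figures in this sub-thread, in Gbit/s.
# Assumes 16:9 resolutions, 24 bits per pixel, no blanking or compression.

def gbps(width, height, eyes, fps, bpp=24):
    return width * height * eyes * fps * bpp / 1e9

print(gbps(15360, 8640, 2, 240))  # ~1529 Gbit/s: "16K per eye at 240 Hz"
print(gbps(7680, 4320, 1, 300))   # ~239 Gbit/s:  a single UHD-2 (8K) stream at 300 fps
```
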
  • JonnyDough - Thursday, January 28, 2016 - link

    Higher refresh rates certainly make a difference for people with epilepsy.
  • weewoo87 - Tuesday, February 2, 2016 - link

    Decent article, thanks for the summary. I don't agree with your analysis on OLED at all, but that's just my opinion (as is yours).
  • Denithor - Wednesday, February 3, 2016 - link

    Like the format, keep it up.

    So, let me make sure I've got this right. Three different companies are making external GPU docks, but only one of them is going to use an open/standard connector, potentially making it free for use on any system (not locked to their own hardware)? Are ASUS and MSI actually thinking they are going to sell more than like 12 of these units? I understand wanting to push more of your other hardware, but come on, let this thing stand alone and charge a fair markup on it; don't force me to also buy a specific model laptop from your lineup to go with it.

    If Razer allows this device to be used outside their ecosystem, I think they will have a winner on their hands.
  • anubis44 - Wednesday, February 17, 2016 - link

    Anton's coverage of Razer's Core GPU external graphics card dock was my favourite piece of tech. Now I understand why AMD's Fury X2 card has a TDP of 375 watts! It'll fit into this!
