
  • HStewart - Friday, April 20, 2018 - link

    It sounds like Intel is trimming some fat - probably a non-productive group. But I had never actually heard of this product; Intel does a lot of R&D and some things just don't make it. Possibly newer technology, like that from the FPGA group, is a better fit.
  • Wilco1 - Saturday, April 21, 2018 - link

    Well, this is another huge market lost. Intel went in with a slightly modified 80486 - but x86 is just too complex, large, slow and power hungry. We're starting to see this now that detailed comparisons with Arm servers are available.
  • Ryan Smith - Sunday, April 22, 2018 - link

    Administrative note: a user has been banned for bigoted comments.

    This is a tech site, not a politics site, so please leave the latter at the door.
  • Hifihedgehog - Sunday, April 22, 2018 - link

    It was all a dog and pony show, anyway, merely to appease ignorant investors since their bread and butter has been stuck on 14nm with an outdated architecture for far too long now.
  • wumpus - Friday, April 20, 2018 - link

    Why do I get the feeling that these companies had marginal products (not bad ones, but certainly niches that Apple, Samsung, and the like couldn't be bothered with) to begin with, and that once Intel replaced their ARM chips with Atom, all hope was lost?

    Younger readers might not be familiar with the horrible kludge that is x86, but the whole architecture is warts on top of warts, run by a modern OoO execution engine that powers through all the problems. The "x86 penalty" might be only a few mm^2 and less than a watt of power on a desktop or laptop chip, but it has knocked Intel clean out of the running for phones, and wearables are even more hopeless.
  • HardwareDufus - Friday, April 20, 2018 - link

    True.... and then we had 20-bit addressing. Now x64 uses 40-bit addressing....
    Intel tried to leave it behind with the EPIC architecture of the Itanium, but it is really hard to get a whole generation of programmers to write explicitly parallel code.

    But then of course the CISC instruction set paired with the x86/x64 architectures has the advantage of pretty decent IPC when it's humming along, compared to its ARM counterparts. But as ARM matures, and good productivity apps (accounting software, word processors, spreadsheets, databases, CAD) are created from the ground up and compiled to take advantage of native ARM architectures and instruction sets, we will see that IPC advantage narrow.
  • FunBunny2 - Friday, April 20, 2018 - link

    "really hard to get a whole generation of programmers to write explicitely parallel code."

    well, really hard to get any bunch of programmers to create explicitly parallel user space problems.
  • HStewart - Friday, April 20, 2018 - link

    "Younger readers might not be familiar with the horrible kludge that is x86, but the whole architecture is warts on top of warts, run by a modern OoO execution engine that powers through all the problems"

    As a person with 30 years of development experience on x86, I feel offended when someone is naïve enough to state that the x86 architecture is a kludge. It is just different from ARM - it is basically RISC (ARM) vs CISC (x86). RISC architectures like ARM have been around for a long time, and yes, taking x86 to 64 bit may be considered a kludge - but in other ways it is a natural evolution. Will we need to go to, say, 128- or 256-bit CPUs? I am not sure, but in the days before 32 bit, people were not sure we would ever need more than a megabyte of memory either. As the technology advances, first the hardware evolves and then the software evolves.

    Most people are not knowledgeable about the other parts of the x86 CPU that can make a huge difference. In the early days, besides x86 going to 32 bit, the big change was the introduction of virtual 8086 mode, which allowed virtual DOS sessions to run in early versions of Windows, for example. I see going to 64 bit as a natural evolution, allowing more than 4 GB of memory.

    But there is more that matters in a CPU than just enhanced memory addressing. There are instruction set extensions - now especially AVX2 and AVX-512 - that handle vector array calculations which a RISC design like ARM would take many instructions to handle.
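
    To make that concrete, here is a minimal sketch in plain C with AVX intrinsics - the float-array add and the function name are illustrations I am assuming for this comment, not anything Intel ships:

        #include <immintrin.h>
        #include <stddef.h>

        /* Illustrative example: add two float arrays. One AVX vector add
           covers 8 floats, where a scalar loop needs one add per element. */
        void add_arrays(float *dst, const float *a, const float *b, size_t n)
        {
            size_t i = 0;
            for (; i + 8 <= n; i += 8) {
                __m256 va = _mm256_loadu_ps(a + i);               /* load 8 floats */
                __m256 vb = _mm256_loadu_ps(b + i);
                _mm256_storeu_ps(dst + i, _mm256_add_ps(va, vb)); /* 8 adds at once */
            }
            for (; i < n; i++)                                    /* scalar remainder */
                dst[i] = a[i] + b[i];
        }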

    The big difference between x86 (CISC) and ARM (RISC) is the basic CISC vs RISC trade-off. In CISC you can have a single instruction that would require many instructions in RISC. RISC does have the advantage that execution can be split up and parallelized more easily because the instructions are simpler - but at least for Intel, and I believe also AMD, the larger CISC instructions are broken down into smaller micro-ops which can be parallelized just as well as RISC instructions.
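
    As a small, approximate illustration of that decomposition (my own example; the exact micro-op split is not something Intel or AMD publish):

        /* Illustrative only: the increment below can compile to a single x86
           read-modify-write instruction such as  add dword ptr [rdi], 1.
           The front end then cracks it into simpler micro-ops, roughly
           load / add / store - the same three operations a load/store RISC
           compiler emits as separate instructions. */
        void bump(int *counter)
        {
            *counter += 1;
        }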

    The problem today is that people think about their phones so much. Yes, larger instructions are not the best thing for a phone - but companies like Intel and AMD realize that customers increasingly need smaller and lighter machines. Thus Intel came out with the low-power Atom - but I believe Atom was a testing ground for getting the whole computer onto a chip, which has now been merged into Intel's Y-series chips and will likely show up in more future mobile chips, especially once Intel perfects 10nm.

    As a developer, the biggest reason I don't see ARM replacing the x86 processor is simple: look at the Apple iPad Pro - it claims to be a desktop replacement, but it still requires (as far as I know) a Mac to create code for it.

    Also take your latest game technology - say "Rise of the Tomb Raider": can that run on an ARM machine? The answer is no. But it can run on the Xbox One and PS4, and those are not ARM CPUs.
    Yes, the GPU plays a big part in this - but so does the CPU, which runs the code that drives the GPU and the game.
  • anymous456 - Friday, April 20, 2018 - link

    Take a look at the Geekbench single-core scores of the 2015 15-inch MacBook Pro (which supposedly plays Rise of the Tomb Raider fairly decently) vs the A11 in the iPhone X. Just because ARM traditionally has less power and heat dissipation to work with does not mean that it is architecturally weaker; in fact, the Qualcomm Centriq 2400 can supposedly offer the same performance as an Intel Xeon using less power.
  • HStewart - Friday, April 20, 2018 - link

    Geekbench, in my opinion, is one of the worst benchmarks out there - especially the part with web benchmarks. In any case, the 2015 MacBook Pro is Intel based, not ARM.

    The Qualcomm chip also has 48 cores - and against what kind of tests? I am pretty sure that with tests that require vector math it will come nowhere near the Xeon. Maybe it can handle some web processing that requires no real computation.

    Of course, we all just went through yet another claim of Windows on ARM emulation - with less power than an Atom - why didn't they just name it Windows RT?

    ARM uses simpler instructions, so of course it uses less power, and for some web services that is fine.
  • patrickjp93 - Sunday, April 22, 2018 - link

    Yeah, those GB scores are useless for comparing between ISAs. x86 is a second-class citizen to that organisation. It doesn't even use AVX code, yet uses the equivalent ARM instructions where possible.
  • Wilco1 - Sunday, April 22, 2018 - link

    That's completely false. GB is not only developed on x86, it supported vectorized versions for x86 first, and there are multiple vectorized implementations for SSE and AVX.
  • Hifihedgehog - Sunday, April 22, 2018 - link

    Geekbench does not test sustained performance well at all. 3DMark's physics test and TabletMark are much better indicators of sustained performance.
  • FunBunny2 - Friday, April 20, 2018 - link

    baloney. the issue is simple: Intel built a 1960s-era chip in the 1970s; the 8086 was just an incremental build from the 8080. why that matters may not be obvious, so here's why CISC even existed: until Burroughs and its ALGOL-driven machines, computers were programmed in assembler. for such machines, CISC is necessary. writing user level applications in a RISC assembler was always a non-starter. with the rise of C (in particular) and other 2nd-gen languages, only compiler writers care about the ISA. we don't care how horrible life is for compiler writers. once that became true, RISC (at the ISA level) was a no-brainer. enter Acorn, the progenitor of ARM.

    the fight between CISC and RISC was lost by Intel, et al, decades ago. if CISC really were superior, Intel, et al, would have used those billions and billions of transistors that have been available for decades to implement the x86 ISA directly in silicon. they blatantly didn't do that. they blatantly built RISC hardware behind a CISC "decoder". yes, Itanium died, but it lives on inside Intel.
  • HStewart - Friday, April 20, 2018 - link

    The funny thing about this is that if it were AMD stating this, it would be OK.

    The fight between CISC and RISC has not been lost - they are just different. ARM has its purpose and so does x86.

    I think we should just agree to disagree - we come from two different worlds. Go on believing that an ARM processor can compete with a high-end Xeon processor - I would like to see it do things like real 3D content creation in Lightwave 3D, 3ds Max, AutoCAD and SolidWorks.

    If so, you are correct; otherwise I will laugh and you should be called "Funny Bunny".
  • Wilco1 - Saturday, April 21, 2018 - link

    Arm certainly beats Xeon on image processing, both single- and multi-threaded, and is more than 6 times more power efficient: https://blog.cloudflare.com/neon-is-the-new-black/

    With results like these, it's safe to say x86 will be losing a lot of the server market to Arm, just like it lost mobile.
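
    For a sense of the kind of byte-wide SIMD that post is about, here is a minimal sketch in C with NEON intrinsics (my own made-up image-blend kernel, not code from the post):

        #include <arm_neon.h>
        #include <stdint.h>
        #include <stddef.h>

        /* Illustrative example: average two rows of 8-bit pixels,
           16 pixels per NEON instruction. */
        void average_rows(uint8_t *dst, const uint8_t *a, const uint8_t *b, size_t n)
        {
            size_t i = 0;
            for (; i + 16 <= n; i += 16) {
                uint8x16_t va = vld1q_u8(a + i);           /* load 16 bytes */
                uint8x16_t vb = vld1q_u8(b + i);
                vst1q_u8(dst + i, vrhaddq_u8(va, vb));     /* rounding average of 16 pairs */
            }
            for (; i < n; i++)                             /* scalar remainder */
                dst[i] = (uint8_t)((a[i] + b[i] + 1) / 2);
        }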
  • patrickjp93 - Sunday, April 22, 2018 - link

    Yeah, that bench is bogus. It uses serial x86 code, not vectorised code. Bring in AVX/AVX2 and it flips about 40% in Intel's favor. See GIMP benchmarks.
  • Wilco1 - Sunday, April 22, 2018 - link

    No, the x86 version is vectorized in the same way, as is clearly shown. It was even explained why using AVX2 actually slows things down.
  • mode_13h - Saturday, April 21, 2018 - link

    So C enabled RISC? I never heard that one. I think you're off by at least a decade.

    I think HDL and VLSI enabled pipelined and superscalar CPUs, which were easier to optimize for RISC and could offset its primary disadvantage of more verbose machine code.

    Also, IA64 DOES NOT live inside of modern Intel CPUs. That's almost troll-worthy nonsense.
  • FunBunny2 - Saturday, April 21, 2018 - link

    "So C enabled RISC?"
    indirectly. C made it practical to have a RISC ISA with a 2GL coder space. C proved that a 2GL could reach down to assembler semantics in a processor-independent way, if you work at the pure syntax level. at the same time, the notion of a standard library meant that higher-level semantics could be grafted on, again in a processor-independent way. there's a reason C was/is referred to as the universal assembler: virtually every cpu in existence has a C compiler.

    the result was/is that CISC in the hardware wasn't necessary. the libraries took care of that stuff (C as in complex) when necessary. compiler writers, on the other hand, benefited from RISC semantics at the assembler level, since there's much greater similarity among the various RISC ISAs than among CISC ones. by now, of course, we're down to x86, Z, and ARM. and, I reiterate, if CISC were inherently superior, Intel (and everybody else) would have used those billions and billions of transistors to implement their ISAs in hardware. they didn't. they built RISC in the hardware. calling it "micro-code" is obfuscation. it's a RISC machine. the last x86 chip that wasn't RISC behind a CISC "decoder" was of the Pentium era.

    if one looks at cpu die shots over the years, the core continues to be a shrinking percent of the real estate.

    "Also, IA64 DOES NOT live inside of modern Intel CPUs."
    do you really think that Intel could have built that RISC/X86 machine if they hadn't gone through the Itanium learning curve?
  • mode_13h - Sunday, April 22, 2018 - link

    I wasn't arguing CISC vs. RISC, but I'm amused by the historical fiction involving C. If there's anything to it, you shouldn't have difficulty finding sources to cite.

    > do you really think that Intel could have built that RISC/X86 machine if they hadn't gone through the Itanium learning curve?

    In a word... yes. Not least because the Pentium Pro (their first architecture to translate x86 to RISC micro-ops) launched 6 years before it. And if THAT was substantially influenced by anything else they did, I'd first look to the i860 and i960. Not that I have any evidence to support that it was, but at least my speculation is both qualified and not refuted by basic causality.
  • mode_13h - Sunday, April 22, 2018 - link

    The thing is, your premise is just wrong:

    > writing user level applications in a RISC assembler was always a non-starter.

    I don't think user-level apps should've been written in asm since... I don't know exactly when. But there's nothing especially bad about RISC assembler. I've written RISC assembly at a job, and you can find plenty of it kicking around in kernels and device drivers. You can make life easier with macros and subroutines, as with any assembly language.

    Perhaps you're confusing it with VLIW, because that's legitimately hard to write any substantial quantity of efficient code in. You can't even use tiny macros or small subroutines if you care about keeping your instruction slots and pipelines filled. And then all the registers you have to juggle to keep everything in flight make the exercise especially tedious. And any time you need to add a feature or fix a bug, you get to reschedule the entire block and reallocate all of the registers.

    Are you sure you weren't thinking of VLIW? But that didn't even really hit the scene until after RISC went superscalar and out-of-order, at which point people started thinking it might be a good idea to do the scheduling at compile-time. Again, this was so long after C was already established that it might've been a prerequisite but you can't call it a game-changer.
  • FunBunny2 - Sunday, April 22, 2018 - link

    "Are you sure you weren't thinking of VLIW?"

    no. Intel's 8xxx chips grew from the late 60s, a time when much code really was written in assembler (C hadn't yet fully escaped Bell Labs). said assemblers were CISC. the later IBM 360 (or 370, can't find a link) even added COBOL-assist instructions to the ISA. in today's terms, COBOL and FORTRAN were DSLs, not real programming languages. real coders used assembler, and had since the beginning of electronic computing. RISC came about just because ISAs/assemblers had gotten unwieldy, real-estate hungry, and sloooooooow. one might argue that V(VVVVVVV)LSI is what made assembler passé: memory, transistor, and speed budgets that were not imagined in 1960. if you can be profligate with resources, then the application-building paradigm shifts.

    anyone who used 1-2-3 through the transition from pure assembly to C saw the difference. if memory (mine, not this computer's) serves, that fact generated lots of PC upgrades.

    or, to ask the question from the other end: if you expect your machine to only run 3/4GL, why would you need CISC in the first place? application coders will never be on the metal. the compiler/OS writers need to understand the machine, but nobody else does.
  • mode_13h - Sunday, April 22, 2018 - link

    There were plenty of other programming languages, back then. Lisp dates back to 1958; SNOBOL to 1962. It's pretty remarkable how quickly new languages developed and gained sophistication.

    You talk like C was the only game in town. Sure, if you're writing an OS, it was going to be C or asm or maybe a small handful of other options (Mac OS was written in Pascal, which dates back to 1970; Multics - the inspiration for UNIX - used PL/I).

    I'm not exactly a programming language historian, but I'm just not buying the idea that CPU designers were building out their instruction sets because programmers lacked better tools and were too lazy to write subroutines or use macros. I think they did it simply because each time they got more transistors to play with, they tried to speed up programs by implementing ever higher level functionality in hardware.
  • StevoLincolnite - Friday, April 20, 2018 - link

    Pretty sure modern x86 processors are all RISC these days internally anyway.
  • HStewart - Friday, April 20, 2018 - link

    Actually, CISC and RISC both eventually come down to microcode.
  • mode_13h - Saturday, April 21, 2018 - link

    No. You can't turn one into the other simply by replacing the microcode.

    More troll-worthy nonsense.
  • FunBunny2 - Saturday, April 21, 2018 - link

    "No. You can't turn one into the other simply by replacing the microcode."

    but you can by swapping the "decoder". that's the whole point of RISC on the hardware. calling it a "micro-code engine" is just obfuscation. it's a RISC machine. whether any (past or current) x86 machines shared any specific hardware with Itanium I leave to the discussion.

    for those old enough: the 360/30, the bottom end of the family, implemented the ISA purely in firmware/microcode/whatever. that was 1965. https://en.wikipedia.org/wiki/IBM_System/360_Model...
  • mode_13h - Sunday, April 22, 2018 - link

    I wouldn't over-generalize from the example of modern x86.
  • Wilco1 - Sunday, April 22, 2018 - link

    No, an ISA is not implemented just in the decoder, so you can't swap the decoder and implement a different ISA. ISAs affect *everything* - the registers, ALUs, flags, control logic, caches, memory model, etc. Just think about it for one second. It's simply impossible unless the ISAs are virtually identical (think Arm and Thumb-2).

    Calling a CISC a RISC machine is plain wrong - RISC vs CISC is about the user visible ISA, not about the internal implementation. Micro-ops on modern implementations are very similar to actual instructions. There are complex micro-ops which take multiple cycles to execute.
  • mode_13h - Sunday, April 22, 2018 - link

    That's what I was thinking. The underlying machine state goes a long way towards enabling CISC, and this is not something you change with just the front end.

    RISC instructions often had a latency of more than one cycle - it's just that you could usually issue one every cycle. But that's more about achieving the necessary efficiency and less of a defining characteristic. Division is an example of an instruction many CPUs implement, but one that never would've been pipelined much (if at all) on older CPUs. Still, there are substantial benefits to hard-wiring it.
  • FunBunny2 - Sunday, April 22, 2018 - link

    " ISAs affects *everything* - the registers, ALUs, flags, control logic, caches, memory model, etc. "

    yes, and cheaper machines in a family would do multiplication as serial adds. and so on. and most of those aspects are mediated by the OS, anyway. it was Gates, not Intel, who decided no one needed more than 640K. now, it could be that Intel chose to use the increasing real estate to bring off-chip functions on-chip as a way to lock in clients. it was Grove who said (and wrote the book on it), "only the paranoid survive".
  • wumpus - Saturday, April 21, 2018 - link

    About the only way x86 could possibly be assumed "RISC internal" is that they almost certainly split load/store instructions from other instructions. Beyond that, there's very little to RISC.

    Generally speaking, the truer to RISC an x86 design was, the less well it worked.

    The "most RISC" was K5. That was pretty much a 29000 RISC chip, and even used 29000 assembler to write the microcode. It failed badly.
    The NX5 chip wasn't very RISC (80 bit instructions), but since you *could* code with them (instead of x86) I suspect it qualifies. It only did well enough to be bought by AMD and produce the K6 next.
    Transmeta: the core of the machine didn't execute x86, that was handled with software. About as pure a RISC as x86 could get and failed hard.
    There were tales of x86 PowerPC. If any tried to break out of the lab (where technology goes to die), we don't know about them. Presumably nobody wanted to admit they existed.
  • mode_13h - Sunday, April 22, 2018 - link

    > About the only way x86 could possibly be assumed "RISC internal" is that they almost certainly
    > split load/store instructions from other instructions. Beyond that, there's very little to RISC.

    Doesn't sound like you have a source on that. I think the reason we believe it's RISC is that they've previously referred to it as such, and we know that the typical case (and the only case until Core 2) is for one x86/x86-64 op to translate into multiple micro-ops.

    I'm not aware of any published lists of the micro-op instruction sets in Intel CPUs, but here's some impressive reverse-engineering. You can infer the complexity of the micro-ops by looking at how many are generated by different x86 instructions and to which execution ports they go.

    http://www.agner.org/optimize/instruction_tables.p...

    Looks pretty RISCy to me.
  • Wilco1 - Sunday, April 22, 2018 - link

    It's not RISC: the ISA is still CISC. Micro-ops on x86 implementations are very complex, so they can't possibly qualify as RISC. To give a simple example, the AGUs in most implementations support all the complex x86 addressing modes. Complex addressing modes mean CISC.
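
    As a small illustration of that point (my own C example): the array access below typically compiles on x86-64 to a single load using the base + index*scale + displacement addressing mode, so the AGU behind that one micro-op has to handle the whole combined mode, whereas a simple load/store RISC does the address arithmetic in separate instructions first.

        /* Illustrative only. On x86-64 this is typically one instruction:
               mov eax, dword ptr [rdi + rsi*4 + 16]
           A simple load/store RISC computes the address with separate
           instructions before a plain load. */
        int fetch(const int *table, long i)
        {
            return table[i + 4];
        }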
  • Samus - Friday, April 20, 2018 - link

    I think readers definitely get the everyday problems with x86. Just pulling out your pocket compu...smartphone and browsing the web is a dead giveaway of how limiting x86 is. An iPhone is a better web-browsing experience than many Core i5s. Just look at the SunSpider scores. Those are noticeable in everyday use, and no, it has nothing to do with Windows on your x86. It's the architecture running Windows. The long pipeline. The baggage.

    And if you doubt me, go use a Chromebook running a Pentium, then use a similar Chromebook running an Exynos. Sure, it's still not a perfect comparison (because the Pentium has a piss-poor iGPU), but just browsing the web is smoother.
  • HStewart - Friday, April 20, 2018 - link

    The problem is not with the CPU or GPU; it is with the software and OS running on the device. The reason Windows has more issues with viruses and such is not the CPU but that it is more popular - and only now that Android and iOS are becoming more popular are they starting to see the same, because they are getting more users.

    And you can't really blame Windows either - a lot of it is because some people envy Microsoft's and, yes, Intel's success. But there is a lot of poorly written software out there, and some of it actually tries to take advantage of the situation. For example, the majority of viruses and such come from developers who use variants of Unix.

    SunSpider is not a good example of a benchmark - it is a JavaScript-based benchmark and has many dependencies on things like the browser and OS. One should use a compiled benchmark instead of an interpreted one.

    A Chromebook is a bad example - do you really think Google wants to make the x86 systems better?
  • JoJ - Saturday, April 21, 2018 - link

    I wish I could find better links, but below is RWT's not-really-that-old note on the subject:
    https://www.realworldtech.com/risc-vs-cisc/

    RISC vs. CISC Still Matters
    February 13, 2000 by Paul DeMone

    And then I cop out and offer you search results from HN, but they do seem to be of a high standard.

    The forums at RWT, the site of the first article linked, are excellent on subjects like this, if you're able to find the threads. If you need to narrow things down: on RWT, the Itanium saga had its most comprehensive, unflagging and unwaveringly loyal dissection in those forums, and I consider the discussions which took place there circa 2000 to have been an education. You might narrow your search by looking for the dates of HPE Itanium launches, which prompted debate there.

    https://news.ycombinator.com/item?id=12353489
  • mode_13h - Saturday, April 21, 2018 - link

    Don't you have anything better to do with your time than being offended on these forums?

    The problem with x86 is the complexity of the instruction decoder. It's a kludge because the opcode space and instruction format weren't planned to accommodate all of the various extensions. This means it requires larger, more power-hungry decoders. That's its biggest liability for IoT, where devices need to run on microwatts.
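
    A toy sketch of the boundary problem behind that (entirely made-up encodings, not real x86 or ARM): with fixed-width instructions the next instruction's location is known up front, so several decoders can work in parallel; with a variable-length, prefix-extended format you must parse instruction N before you even know where N+1 begins.

        #include <stdint.h>
        #include <stddef.h>

        /* Fixed 4-byte instructions: the next boundary is known
           before any decoding happens. */
        size_t next_fixed(size_t offset)
        {
            return offset + 4;
        }

        /* Toy variable-length format: optional prefix bytes plus an optional
           immediate, so the length is only known after examining the bytes. */
        size_t next_variable(const uint8_t *code, size_t offset)
        {
            size_t len = 0;
            while (code[offset + len] == 0x66)   /* skip toy prefix bytes */
                len++;
            len++;                               /* opcode byte */
            if (code[offset + len - 1] & 0x80)   /* toy "has 4-byte immediate" flag */
                len += 4;
            return offset + len;
        }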
  • PeachNCream - Friday, April 20, 2018 - link

    It's not a surprise at all to see NDG go. Wearable devices aren't particularly popular. I've seen a small number of Fitbits that made it a few months on a wrist before disappearing, and I know of one person who's bothered with a smart watch, but he's one of those people who has to buy the latest gadget, the most expensive phone, and the nicest new car, so in his case it was no surprise. I'm just glad I'm not his spouse, because that guy is going to have a miserable time later in life when he figures out he's burned through everything he's earned without ever putting anything away for a rainy day or a nest egg.
  • mode_13h - Sunday, April 22, 2018 - link

    IoT will happen. It's just going to take some time for standards, security issues, power issues, and costs to sort out.

    Watches are the natural place to start, but perhaps the bigger market is medical implants and prosthetics.
  • FunBunny2 - Sunday, April 22, 2018 - link

    "IoT will happen."

    well, how many really nasty hacks have already happened on IoT just because there's no such thing as a really secure interTubes? or ISA (Meltdown, etc.)? or OS? it's the wild, wild west where everybody needs a gun just to walk down the street.

    there's a reason the CIA/NSA/MI6/Agent 99 were able to destroy centrifuges. and that was years ago.
  • mode_13h - Sunday, April 22, 2018 - link

    Be contrarian, if you want. I don't own any IoT devices, but I still think it'll happen. Just a bit slower than some people have assumed.
  • Gadgety - Friday, April 20, 2018 - link

    TAG Heuer and Hublot will have to look elsewhere... Is Qualcomm the only one left now?
  • eastcoast_pete - Friday, April 20, 2018 - link

    Intel needs to make a strong effort to hang on to its core business, and that will require focus and investment. Today reminds me of the time they got caught napping and of the rude awakening the first Opteron/Athlon chips from AMD gave them, especially once AMD introduced x64 and dual-core chips. Right now, Apple will likely move to in-house chips for their MacBooks within 18 months, and the Windows side is under threat by AMD on servers and desktops (EPYC, Ryzen) and by ARM-derived designs for ultralight laptops and 2-in-1s (e.g. the Win 10 on Snapdragon project by Qualcomm and MS). Intel needs a successor to the Core arch ASAP, or its sales will shrink a lot.
  • FunBunny2 - Friday, April 20, 2018 - link

    "that will require focus and investment."

    in what, exactly? we're a few years from the end of node shrinks. don't tell me about quantum computing; it won't happen for deterministic applications, which are 99.9999999% of them. there's only one periodic table, not yet patented, so that's on the flat line of the asymptote of progress; not much to mine there. software? not much change since C. and so on. engineering is another matter. 99.44% of the devices you have today are based on basic science from decades ago.
  • fteoath64 - Saturday, April 21, 2018 - link

    New stuff, of course! Like AI chips?! TensorFlow co-processors of various types, FPGA variants - the one Xilinx cooked up, Everest, is particularly juicy. Something along those lines. And a decent "home" server, which is still non-existent - I mean one that can back up and re-categorize photos, videos and music for multiple phones, tablets, etc.
    Also, in VR, can't Intel do a Snapdragon 845 equivalent? Have they been sleeping? PowerVR was not a bad GPU partner in mobile; can't they evolve Atom much, much further internally? There are lots of areas, but Intel hardly touches any of them! What gives?
  • mode_13h - Saturday, April 21, 2018 - link

    Well, they recently created a new graphics group.
  • mode_13h - Saturday, April 21, 2018 - link

    Still not clear how much of that is graphics vs. GPU computing, however.
  • HStewart - Friday, April 20, 2018 - link

    Windows 10 on Snapdragon is DOA. And Apple moving to in-house chips is a pipe dream - has the iPad Pro become a PC replacement? Maybe for some people who only need email and the internet, like my sister.

    Just go into a local Best Buy and count the Intel vs AMD machines and you will see the real state of AMD. As for ARM tablets - yes, they are out there - but even Samsung went with Intel on their tablets.
  • mode_13h - Saturday, April 21, 2018 - link

    They're focusing on AI, datacenter/cloud (processing, networking, and storage), and driverless cars. That's where they're investing.
  • boozed - Friday, April 20, 2018 - link

    Intel made wearables?
  • mode_13h - Saturday, April 21, 2018 - link

    I know, right?

    I don't know if I'm more surprised by that or that they killed off the Edison line (which I hadn't heard).
  • sseemaku - Saturday, April 21, 2018 - link

    End of another experiment from Intel!
