  • HStewart - Thursday, December 28, 2017 - link

    As a developer, limiting support to 64 bit is the smart thing to do now - they can compile the code for 64 bit only and optimize globally to increase performance, without worrying about breaking 32 bit systems.

    The 32->64 debate has long been over and 64 bit has become pretty much the standard. It can, however, lead to sloppiness in development in some cases - by assuming that one has tons of memory.

    The 32->64 bit jump was not as significant as the original 16->32 bit jump - yes, 64 bit allows for much more memory, but it is basically not that different from 32 bit; 64 bit is the natural evolution of 32 bit. I'm not sure we will ever need 128 bit or higher - but then the same was once said about 64 bit, about 32 bit, and even about needing more than 64K of memory in the early days of the PC.

    I would not doubt that someday in the next couple of years we will have a 64 bit only Windows version.
  • ddriver - Thursday, December 28, 2017 - link

    If they have a logistical problem with maintaining two build targets, that most likely has to do with their code base being plagued by non-portable, outdated legacy garbage. The target platform is generally not a concern in development. The same C and C++ code I write runs perfectly fine on x86, x64, armv7 and arm64 platforms, with zero modifications necessary.

    Problematic transitions are only present when your code is crap: shortsighted and rigid architecture design, poor API layer separation, heavy reliance on non-portable APIs across the entire code base rather than a platform integration abstraction layer. If you designed it poorly, it becomes a maintenance nightmare - an issue that can only be overcome by a complete rewrite on top of a better design foundation.

    I don't see computers moving beyond 64 bit anytime soon, if ever. 64 bits allow for over 18 exabytes of addressing space. That is over 18 MILLION TERABYTES. In fact, modern processors are not even touching the 64 bit addressing limit; they only use 48 of the 64 bits.

    Keep in mind that 32 bit to 64 bit might seem like a two-fold increase, but in reality it is over 2 billion fold increase of actual addressing space. 16 to 32 bit was only a 65,536-fold increase, and 8 to 16 bit was merely a 256-fold increase. 32 to 64 bit goes from fairly modest to tremendously ample addressing space; it is no longer necessary to double the addressing space representation every few years as was done in the past.
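
    For reference, a quick sketch of that arithmetic (plain standard C++, no assumptions beyond a C++11 compiler):

        #include <cstdio>

        int main() {
            // Fold increase in addressable space at each pointer-width jump.
            std::printf("8 -> 16 bit:  %llu-fold\n", 1ULL << 8);   // 256
            std::printf("16 -> 32 bit: %llu-fold\n", 1ULL << 16);  // 65,536
            std::printf("32 -> 64 bit: %llu-fold\n", 1ULL << 32);  // 4,294,967,296 - over four billion
            // Current x86-64 CPUs expose 48 address bits: 2^48 bytes = 256 TiB of virtual space,
            // versus the full 2^64 bytes = 16 EiB (roughly 18.4 decimal exabytes).
            return 0;
        }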
  • Santoval - Thursday, December 28, 2017 - link

    Memory addressing aside, would it make any sense to go 128 bit for performance reasons? FPU, SIMD & vector blocks have long gone 128 bit, then 256 bit, and now even 512 bit (AVX-512). Would the integer (or MIMD) blocks see any performance gains from going wider, or would it negatively impact power efficiency? I am not referring to today; I mean perhaps in 10 to 12 years, potentially after Moore's "law" - and MOSFET tech by and large - have completely run out of steam and nothing viable has replaced them.
  • ddriver - Thursday, December 28, 2017 - link

    At this point the ALU/scalar unit is ample as it only serves to drive program flow. It doesn't need to go wider than the pointer size. Traditionally, and with very few exceptions, the platform native integer width has been the same size as the pointer.

    For vector (SIMD) execution there is really no limit, as long as you have the workload to throw at it. Power efficiency will be good as long as the hardware is properly utilized.

    CPUs don't really have MIMD; that's more of a GPU thing, although the Xeon Phi has it too - remember, it came from Intel's failed attempt at making a high end GPU. The same principle applies as with SIMD: as long as you have the bandwidth and the data to throw at it, an indefinite increase of the architecture width is feasible. The first brick wall is die size limitations, but that is already being addressed by fragmenting chip design and taking a departure from monolithic dies.
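
    As a small illustration of how the scalar and vector widths are decoupled (a sketch assuming an x86-64 compiler with AVX enabled, e.g. g++ -mavx): the pointer-sized integer only steers the loop, while the SIMD register width is a property of the execution units and can keep growing independently.

        #include <cstddef>
        #include <immintrin.h>

        // Scalar side: indices and loop counters are pointer-sized on mainstream platforms.
        static_assert(sizeof(std::size_t) == sizeof(void*), "native index width tracks pointer width");

        // Vector side: SIMD width is independent of pointer width.
        static_assert(sizeof(__m128) == 16, "SSE registers are 128 bit");
        static_assert(sizeof(__m256) == 32, "AVX registers are 256 bit");

        // Add two float arrays 8 lanes at a time; the 64-bit scalar unit only drives program flow.
        void add8(const float* a, const float* b, float* out, std::size_t n) {
            for (std::size_t i = 0; i + 8 <= n; i += 8) {
                __m256 va = _mm256_loadu_ps(a + i);
                __m256 vb = _mm256_loadu_ps(b + i);
                _mm256_storeu_ps(out + i, _mm256_add_ps(va, vb));
            }
        }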

    Moore's law is not about performance or transistor density, it is about the number of transistors in an integrated circuit. We are about to hit a process brick wall in the next decade or so, and so far there is no replacement for that. However, the departure from monolithic dies will help keep boosting the number of transistors in an integrated circuit; it is just that the circuit will be spread wider and be a tad less integrated relative to a monolithic die. So I guess that will keep going for as long as the constraints of form factor allow it.
  • extide - Thursday, December 28, 2017 - link

    You can use 128-bit or even bigger values with 32-bit code. When software is described as 32-bit or 64-bit, that refers to the address space and the length of the pointers. You can use 64-bit ints with 32-bit code, for example.
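
    A quick illustration of that point (a sketch in standard C++; assuming a compiler that can target 32-bit, e.g. g++ -m32): the pointer shrinks to 4 bytes in a 32-bit build, but 64-bit arithmetic remains available, and even wider values can be handled as multiple 64-bit words or through libraries.

        #include <cstdint>
        #include <cstdio>

        int main() {
            std::uint64_t big = 0xFFFFFFFFFFFFULL;  // a 48-bit value, fine even in a 32-bit build
            std::printf("sizeof(void*)    = %zu\n", sizeof(void*));          // 4 in a 32-bit build, 8 in a 64-bit one
            std::printf("sizeof(uint64_t) = %zu\n", sizeof(std::uint64_t));  // always 8
            std::printf("big * 2          = %llu\n", (unsigned long long)(big * 2));
            return 0;
        }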

    There is a slight advantage in that x64 code has access to more registers, but that is just because the architecture is newer, not necessarily because it is 64-bit.
  • ddriver - Friday, December 29, 2017 - link

    x64 does increase the number of registers, but in direct comparison that doesn't result in any tangible performance gains. The extra registers are not really a necessity but more of a "why not while we are at it"; at best they save an L1 cache access every once in a while.
  • lmcd - Friday, December 29, 2017 - link

    This post almost "got it" in my opinion -- 32-bit limitations aren't merely the change from 32-bit to 64-bit, but the expected CPU features as well. Switching to 64-bit moves that baseline to a more realistic and modern position.

    I wouldn't be surprised a few years down the road if CPUs prior to Sandy Bridge are cut off due to lacking instructions.
  • llukas11 - Thursday, December 28, 2017 - link

    > If they have a logistical problem with maintaining two build targets, that most likely has to do with their code base being plagued by non-portable, outdated legacy garbage. The target platform is generally not a concern in development.

    If you do not test what you ship, it doesn't work, by definition. I don't think QA/testing is free.
  • ddriver - Thursday, December 28, 2017 - link

    It is done via unit tests. You run the same tests on every platform. The only additional cost is the time the additional tests take, which is not a lot, relatively speaking. I highly doubt that's the reason they are dropping 32 bit support. It is the development and maintenance costs that are the real burden.
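
    As a minimal sketch of what that looks like (hypothetical helper and test, not anyone's actual code): the same assertion-based test builds and runs unchanged on every target, so an extra platform mostly costs machine time rather than engineering time.

        #include <cassert>
        #include <cstddef>

        // Hypothetical portable helper under test: round value up to a multiple of alignment.
        static std::size_t align_up(std::size_t value, std::size_t alignment) {
            return (value + alignment - 1) / alignment * alignment;
        }

        int main() {
            // The exact same checks run on x86, x64, armv7 and arm64 builds with no changes.
            assert(align_up(0, 8) == 0);
            assert(align_up(1, 8) == 8);
            assert(align_up(8, 8) == 8);
            assert(align_up(13, 16) == 16);
            return 0;
        }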
  • ddriver - Thursday, December 28, 2017 - link

    "but in reality it is over 2 billion fold increase"

    but in reality it is over FOUR billion fold increase
  • extide - Thursday, December 28, 2017 - link

    "The same C and C++ code I write runs perfectly fine on x86, x64, armv7 and arm64 platforms, with zero modifications necessary."

    But do you write code that very closely interacts with hardware and the respective kernels in each situation? Sure, it is good practice to keep your code as portable as possible, but with stuff like this you definitely have more limitations than with generic code.
  • ddriver - Friday, December 29, 2017 - link

    That code would be part of the thin and easily maintainable integration abstraction layer. Even for something like kernel space driver development, the bulk of your code can be 100% portable.

    That's one of the most common design mistakes - riddling your code all over with non-portable functionality. This turns porting into a nightmare. I personally never make direct use of any external resources, be that system calls, 3rd party libraries or even the C++ standard library. It is all handled by abstraction layers, so porting only requires reimplementing the abstraction layer, while all the remaining core logic - which is like 99.9% of the code - remains the same. Because it is completely abstracted away, there is no need to change anything; it just works.
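
    A minimal sketch of that kind of layering (hypothetical names, standard C++ only, not anyone's actual code base): the core logic only ever sees a small portable interface, and each target supplies its own implementation of it.

        // platform.h - the only place that knows the OS exists.
        #include <cstddef>
        #include <string>
        #include <vector>

        namespace platform {
            // Thin integration layer: one implementation per target (Win32, POSIX, ...) chosen at build time.
            std::vector<unsigned char> read_file(const std::string& path);
            void log(const std::string& message);
        }

        // core.cpp - 100% portable logic; it never touches a system call directly.
        std::size_t payload_size(const std::string& path) {
            std::vector<unsigned char> bytes = platform::read_file(path);
            platform::log("loaded " + path);
            return bytes.size();
        }

        // platform_stdcpp.cpp - one possible backend, here written against the C++ standard
        // library only; a Win32 or POSIX backend would be a drop-in replacement.
        #include <cstdio>
        #include <fstream>
        #include <iterator>

        std::vector<unsigned char> platform::read_file(const std::string& path) {
            std::ifstream in(path, std::ios::binary);
            return std::vector<unsigned char>(std::istreambuf_iterator<char>(in),
                                              std::istreambuf_iterator<char>());
        }

        void platform::log(const std::string& message) {
            std::fputs((message + "\n").c_str(), stderr);
        }

        int main() {
            // Hypothetical file name, just to exercise the layer end to end.
            return payload_size("driver.cfg") > 0 ? 0 : 1;
        }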
  • jospoortvliet - Sunday, December 31, 2017 - link

    Note that the NVIDIA code also has to be portable between kernels - lin/win/bsd & mac ;-)

    I bet that on at least some of those the API is different between 32 and 64 bit.
  • twtech - Tuesday, January 2, 2018 - link

    If you aren't writing display drivers specifically, then I'd save criticism of their codebase until you know more about the specific reasons why they decided they want to drop 32-bit OS support. Also, the more platforms you support, the more platforms you have to test and optimize for. If some of those OS configurations are not being used with the newer hardware that new driver releases are targeting, then it's a waste of time to continue supporting them.
  • timecop1818 - Friday, December 29, 2017 - link

    > I would not doubt that someday in the next couple of years we will have a 64 bit only Windows version.

    Windows Server already only supports 64-bit; the last 32-bit version was Server 2008 or something, IIRC.
  • ztrouy - Thursday, December 28, 2017 - link

    I am curious about a potential repercussion of this move, and that relates to the x86 emulation provided by the new ARM version of Windows 10, as it's limited to 32 bit x86 emulation. Does this mean that ARM-based Windows 10 devices will be limited to integrated or AMD GPUs for the foreseeable future? Or am I misunderstanding something?
  • haukionkannel - Thursday, December 28, 2017 - link

    Most likely, or there will be a generic driver that does not use any special features (read: slow).
    But that's not a big deal when we're talking about ARM x86 emulation. If it can render text and web pages and maybe even video, it is good enough, and those systems use a built-in GPU anyway, like phones and other ultra-portable devices.
  • r3loaded - Thursday, December 28, 2017 - link

    Nope, hardware drivers for Arm devices will be compiled natively as Arm AArch64 code by the SoC vendor. Emulation pertains only to user mode programs that are compiled for x86.
  • HStewart - Thursday, December 28, 2017 - link

    I don't believe any ARM device has support for NVIDIA cards, and in any case, if they did, the device has about the performance of an Atom CPU - so why do it?

    Also, I seriously doubt there is emulation software for NVIDIA hardware out there.
  • ztrouy - Friday, December 29, 2017 - link

    I'm not talking about some janky third-party Nvidia emulator, I'm talking about the official ARM port of Windows 10 by Microsoft. While lots of code is not specifically x86 reliant, for the code that does rely on x86 instructions, the ARM port has a built-in emulator for the x86 code. However, this emulator is limited to running 32 bit code, which could pose a problem in the future for more powerful ARM-based Windows devices as they would be incapable of connecting to Nvidia graphics, instead being limited to AMD and the integrated graphics.
    And yes, r3loaded, that is the hope; however, one has to wonder whether NVIDIA will actually do the same and compile their code to run natively on ARM devices as well.
  • cyberguyz - Thursday, December 28, 2017 - link

    Old news, but this is a move that makes sense.
    Who are the folks running a 32-bit OS? It is people running older, less capable hardware. That is not the demographic of people buying the latest NVIDIA video cards. The people running older 32-bit hardware more often than not have well-working drivers that don't need updating. There is no point in NVIDIA continuing the expense of porting the latest gaming support to a line of drivers that really doesn't need that kind of update.
  • UtilityMax - Friday, December 29, 2017 - link

    It's the enterprise customers who use some kind of ancient, unmaintained app or driver from the 1980s or 90s. But it's still strange. Can't a 64-bit OS be made compatible with 32-bit apps and even drivers? Doing that would be a lot easier than maintaining a whole 32-bit OS.
  • Pork@III - Thursday, December 28, 2017 - link

    Since we're talking about bits: when will the 128-bit digital era arrive? When 5nm, 3nm or 1nm chips come to the market?
  • quiksilvr - Thursday, December 28, 2017 - link

    128 bits you probably will not see for another decade. 5nm will be coming at around 2021, 3nm will probably be around 2025, and 1nm in 2027. After that it is time to say goodbye to silicon and start using carbon nanotubes.
  • HStewart - Thursday, December 28, 2017 - link

    I am not sure the dependency is on die size - it is more about what the instruction set actually needs - there are of course some operations that need more bits.

    Theoretically, 64 bit gives you about 16 million terabytes of address space - it is going to be a while before even storage gets there - 16 terabyte drives are out, I believe, and you would need a million of those.
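
    A quick check of those numbers (a sketch using binary units, i.e. 1 TiB = 2^40 bytes, and assuming 16 TiB drives):

        #include <cstdio>

        int main() {
            const double address_space = 18446744073709551616.0;            // 2^64 bytes
            const double tib           = 1024.0 * 1024.0 * 1024.0 * 1024.0; // 2^40 bytes
            const double total_tib     = address_space / tib;               // 16,777,216 TiB (~16.8 million)
            const double drives        = total_tib / 16.0;                  // ~1,048,576 drives of 16 TiB
            std::printf("%.0f TiB of address space -> about %.0f drives\n", total_tib, drives);
            return 0;
        }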
  • Pinn - Tuesday, January 2, 2018 - link

    I've worked on clustered server file systems and a bunch of engineering stuff, and I thought 64-bit was supposed to be super awesome for memory-mapped I/O. Not sure what came of that. That was before SSD/XPoint.
  • HStewart - Thursday, December 28, 2017 - link

    We may never see 128 bit in our lifetime, but then that was said about 64 bit and even 32 bit. So who knows.

    Of course we do have AVX-512, which is 512 bit - but that is a different thing.
  • Pork@III - Thursday, December 28, 2017 - link

    I am pretty young :)
  • TennesseeTony - Thursday, December 28, 2017 - link

    Young, maybe, but pretty?
  • Pork@III - Thursday, December 28, 2017 - link

    All of Pork family has this value :)
  • Pinn - Tuesday, January 2, 2018 - link

    avx-512 is SIMD silly boy
  • Pinn - Sunday, December 31, 2017 - link

    Not needed for local memory addressing any time soon, although it could help with IPv6.
  • UtilityMax - Friday, December 29, 2017 - link

    It's honestly amazing that the transition to 64-bit could take so long! Even today I am being pestered with the question: do you want to download the 32-bit or 64-bit app? Geez. Moreover, Microsoft still continues selling a 32-bit OS. I mean, come on, do you really have to keep selling a 32-bit OS just for 32-bit app and driver compatibility? Make a binary interface for 32-bit drivers and stop this insanity with the 32-bit OS.
