
35 Comments


  • Pandamonium - Monday, June 29, 2009 - link

    Anand,

    Is AT doing a review of the ASRock ION 330-BD? I'm very much interested in its FLV capabilities. Sorry to harp on the HTPC potential; it's just that since it doesn't look like Intel will be releasing a dual-core Atom in the near future, the 330 is all we can play with. Tweaktown's review suggests that the ASRock units might be overclockable to 2.1 GHz, and I think it might use the 9400 compared to Zotac's 9300. If that's enough additional horsepower to handle Hulu, then I know what I'll buy. Tweaktown's reviews, however, aren't as thorough or clear as yours, so I'm waiting for the AT verdict.

    Thanks!
  • Tutor - Friday, June 26, 2009 - link

    If you're interested in the possibility of upgrading an i7 or Xeon Mac Pro check out this discussion thread and benchmark result to see what Anand's blog has inspired:

    http://forums.macrumors.com/showthread.php?t=73073...

    http://browse.geekbench.ca/geekbench2/view/143750
  • Tutor - Saturday, June 27, 2009 - link

    Update: Thread is at:

    http://forums.macrumors.com/showthread.php?t=71393...
  • gwolfman - Friday, June 26, 2009 - link

    Thanks Anand for the updates... keeps us eager for more news/reviews. :)
  • procaregiver - Thursday, June 25, 2009 - link

    Please tell me the CPU upgrade was for the 8-core Nehalem Mac Pro. It should be as easy as purchasing two off-the-shelf Xeons (Gainestown) and having a bit of patience in removing the heatspreaders from the replacement CPUs. The only issue I see is with the 3.2GHz W5580, as the TDP is 130 W, and it's only a guess whether it will work until somebody with deep pockets tries it out.
  • doclucas - Thursday, June 25, 2009 - link

    This cool PCIe drive delivers insane performance and huge storage at quite an affordable price (in comparison with other SSD prices). When can we expect a review of this monster?
  • halcyon - Thursday, June 25, 2009 - link

    I know you are in the USA, but a big portion of your readers come from outside.

    Please do not make the same idiotic mistake that 99% of US mobile bloggers do: assuming that their readers only care about Sprint, Verizon, and AT&T, or thinking that the iPhone is the world's best-selling smartphone (it's probably not even in the top 5).

    Symbian totally dominates the smartphone world, even if its market share in the US is poor. While I'm on WM myself, I like to see coverage of all the major platforms, and Symbian is one of them. Talking about the Pre/webOS so much, with its minuscule installed base, US-only availability, and just a few phones sold, really skews the picture.

    More Symbian and Windows smartphones are sold every week than Pres in total. More Symbian devices are sold every day than 3GS units in its entire first week.

    That should give you an idea of what the "rest of the world" (outside the US) is buying and concentrating on.

    And yes, new and upcoming things are interesting, but so are the old, tried, and tested ones.
  • anandtech02148 - Thursday, June 25, 2009 - link

    We are at a point where Apple has little left to improve on except more market share, or a purple or metallic iPhone that can play more Nintendo video games.
    I hate to see Apple or Google dominate the smartphone market with their cheap, built-on-the-fly hardware from Taiwan/China.
    I'm a big fan of Symbian OS on the Nokia E71, one of the best-looking phones for the price without the evil cellphone contract; I feel as though Nokia is like BMW and the Mini Cooper.
    Also, anything coming from RIM is just excessive plastic from China, with their GUI heading toward the Palm Treo's graveyard.
    Nvidia and Intel are going to bring their guns to the smartphone market. Yay for us all.
    Nvidia's Tegra and Intel should make the smartphone market much more interesting.

  • PaulaTejano - Thursday, June 25, 2009 - link

    Well, I'm thinking an SSD is a must for my new rig. I was deciding between the X25-M and the OCZ Vertex 60GB.

    These are the prices in Swiss francs (and the same amount in dollars)

    Intel X25-M 80GB, SATA-II, 2.5-inch: 369.- ($337.60)

    OCZ SSD Vertex 60GB, SATA-II, 2.5-inch, MLC 32MB cache: 315.- ($288)

    The 128GB version will cost me around 479.- ($438).

    I'm thinking it's still worth waiting to see prices come down...
    What do you guys think?
  • djc208 - Thursday, June 25, 2009 - link

    Have to agree, I'm thinking my Win 7 upgrade may be a good time to move my system drive to an SSD. My CPU and GPU are both fast enough for the work and play I do, so this may be a good upgrade path.

    Prices will always be moving, but I'm not sure you'll see a big change, especially in Intel's pricing, till someone gets much closer to their performance level.

    I am curious to see if anything significant is around the corner. It doesn't sound like it but it may change my timeline.
  • fedemaste - Thursday, June 25, 2009 - link

    Could you try using GPUs that aren't Mac versions? Maybe with a firmware update it's possible, and they're cheaper!
    Can you also test SLI configurations for OSX and Windows?
  • danielk - Thursday, June 25, 2009 - link

    More than anything, I would like to know a bit more about SSD roadmaps, specifically Intel's. I'm in the market for an X25-M, but I don't want to see a next-gen 'uber' disk released a month after my purchase.

    vr-zone.com supposedly posted Intel's SSD roadmap earlier this year, vaguely stating the SSDs will get a revamp in Q4 (50nm to 34nm and "an updated controller").

    Given how long TRIM is taking, I'm tempted to think that either TRIM will be launched alongside the next-gen Intel SSDs, or TRIM will be popped 'soon'(tm) and used to milk the current Intel drives a bit more, probably resulting in the next-gen release being delayed.

    Any thoughts?
  • The0ne - Thursday, June 25, 2009 - link

    Just my thoughts...

    If you aren't in a hurry and can wait till the end of the year, I think you should. Competition is hot, designs are changing and improving, and there's still lots of room to improve. While it may be a couple more years before SSDs compete head-to-head with hard drives, they'll eventually get there. Then there's the manufacturing process, which will also improve and drive costs down.
  • lux47 - Thursday, June 25, 2009 - link

    When testing SSDs, it would be great if you managed to include VelociRaptor (10,000 RPM drive) results. You performed VelociRaptor tests some time back; it would be interesting to compare those with SSDs now that TRIM is supported and a new generation of SSDs has arrived.

    Thanks for the great articles!
  • TA152H - Thursday, June 25, 2009 - link

    Isn't x86 plaguing computers bad enough? Do we need it to grow like a cancer and spread to phones?

    x86 costs the world tons of money and needs to go away. We don't need it spreading to graphics cards and phones; we need it gone. It makes processors slower, bigger, and more expensive to make.

    Maybe it's not a huge amount per processor, but when you multiply it by the number out there, it's an enormous amount of waste.

    I don't understand Intel. They tried to kill x86 with the Itanium, and now they are spreading this disease with reckless abandon.

    What the hell are they doing?
  • Rasterman - Thursday, June 25, 2009 - link

    It is unquestionable that x86 wastes some performance and power, but any transistors or energy lost dealing with x86 are insignificant compared to the amount of research, training, tools, development, and programs that exist for it. Supporting x86 isn't really a choice; it's a requirement.

    what would you rather have:

    A 1-10% slower machine, OR 99% fewer programs to run?
    A 1-10% increase in battery life, OR a device that costs 90% more?
  • The0ne - Thursday, June 25, 2009 - link

    If that were the case for many things, you would not have seen advances and innovations in the technical field, software included. At some point there has to be change. People often get too comfortable, and oftentimes too lazy, with what is already familiar, and refuse to change. Someone has to try something they believe will be of more benefit in the long term, or else the field goes nowhere.
  • JarredWalton - Thursday, June 25, 2009 - link

    Back when x86 support on an otherwise RISC-style CPU first came to be (the P6 core in the Pentium Pro), Intel had to use something like 30% of the die to handle the x86 translation. That was huge, and it was space that could perhaps have been put to better use. Now, though, x86 is ubiquitous, well understood, has excellent compilers, and the chips are highly optimized.

    If you have a 5 million transistor processor, using 1.5 million transistors to handle x86 decoding (i.e. translate into RISC-style micro ops) is a costly affair. Now we're looking at chips with over a billion transistors; needless to say, using less than 1% of the transistors to maintain x86 compatibility isn't really a problem.
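
    The fraction argument above can be sanity-checked with a few lines of arithmetic. This is just a hypothetical sketch using the illustrative transistor counts from this comment, not measured die data:

```python
# Back-of-the-envelope check of the decode-overhead argument above.
# All counts are illustrative figures from the comment, not die measurements.

def decode_fraction(total_transistors: int, decode_transistors: int) -> float:
    """Share of a transistor budget spent on x86 decode/translation."""
    return decode_transistors / total_transistors

# P6 era: ~1.5M of a 5M-transistor budget spent on x86 handling.
p6 = decode_fraction(5_000_000, 1_500_000)

# Modern chip: a few million transistors out of over a billion.
modern = decode_fraction(1_000_000_000, 3_000_000)

print(f"P6-era decode overhead:  {p6:.0%}")     # 30% of the budget
print(f"Modern decode overhead: {modern:.2%}")  # well under 1%
```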

    Also note that pretty much no one writes in ASM anymore, so there's another reason x86 isn't a problem. Compilers need to handle it properly, and that's about it.

    Sure, Atom and such are a lot smaller at "only" 47 million transistors, but the extra million (or two or three million) trannies to handle x86 are a drop in the bucket compared to the potential benefits: http://www.anandtech.com/cpuchipsets/showdoc.aspx?...
  • TA152H - Thursday, June 25, 2009 - link

    What you're saying ignores a couple of important issues.

    First, there are no benefits except compatibility, and who needs that compatibility on a phone? ARM is a cleaner instruction set, and the wasted transistors are significant. Keep in mind, a transistor isn't a transistor: you throw around numbers while apparently not taking into account that decode logic takes far more area per transistor than cache memory does.

    You might be too young to know this, but x86 was considered a bad instruction set even when it was new. Most people preferred the 68K, and even the 6809. It was damn annoying then.

    Also, you point to transistor count like it's the only issue. What about the extra steps that have to be performed? They slow the chip down, requiring more hardware to perform at the same level, or simply performing worse.

    Then there is all the legacy garbage, like operating modes that aren't useful anymore. How often do you think 286 protected mode is still used? Then there's the lovely x87 section, which serves no purpose at all anymore and isn't even part of x86-64. Of course, if you buy an AMD, you get 3DNow!, which never really was "3D now" and certainly isn't today. It was rarely used.

    On top of this, there are limits to what you can do with the RISC engine behind it all. It's not like this RISC engine can be anything you want, since it has to convert from a bad instruction set. That does limit IPC.

    Then there are the very few registers: far fewer than in most RISC implementations.

    And, of course, we have multiple forms of software. You can get 32-bit and 64-bit, so there is money spent on that too.

    And AMD actually pads the L1 cache to help with decoding, leaving less than the full 128K for actual data.

    Writing compilers for it is also difficult, since you aren't writing for the execution engine. You're writing for something before it, and that level of indirection adds complexity, and can negatively affect efficiency. It's also harder to implement virtually anything because of this level of indirection. It's not that different to writing to an API rather than the actual hardware, although certainly the penalty is lower in magnitude.

    Maybe you like slower performance, extra cost, and extra power use, but I don't. It's bad enough in a PC, but now in phones? Yes, I really want my battery life chewed up by decode logic. Yup, I really want it chewed up by the extra clock speed needed to compensate for the penalty introduced by extra pipeline stages. Even if it were a good CISC implementation, like the 6809 or 68K, or even the VAX, I could live with it a little easier, since it might make coding easier. Have you ever heard anyone say x86 code helped ease development? It's a damn nightmare. It blows even for CISC. Why do we want to spread this?
  • Penti - Thursday, July 2, 2009 - link

    You're forgetting that Intel got rid of their XScale PXA ARM line, including its engineers. You can't expect Intel to go ARM for MIDs/handhelds again. Nor is the chip only for those devices.
  • JarredWalton - Thursday, June 25, 2009 - link

    I actually learned to program ASM back in college on a 68000 system; you're right, x86 is a nightmare by comparison. However, nearly all of the issues you bring up have been "solved", and in fact most instruction sets are moving back towards CISC in a sense, what with the proliferation of vector instruction sets.

    As an example, the limited number of registers is a non-issue, since we have register renaming and other tricks going on. Yes, it adds to the bulk of the chip, but seriously: how much larger is an Atom compared to an equivalent-performance ARM CPU? Let's say it's still 10%; does Intel have the resources to make up that deficit? Ask AMD the answer to that one. Core i7 and Phenom II are very similar in many ways, but Intel is a lot faster in the vast majority of applications, and it's not just clock speeds or Hyper-Threading.

    I could throw out Apple switching to x86 as an example of how unimportant the instruction set really is in the modern era, though I think there were political reasons for the move as much as anything. Still, with the money invested in x86, anything that uses it benefits, even smartphones. Backwards compatibility for, e.g., DOS/Windows/Linux apps clearly isn't a real concern for a phone, but having a similar code base is a big deal from the programming side. You'll still have to do some extra work, but potentially not as much as if you use/target a completely different instruction set.

    So while I definitely agree that x86 is a Frankenstein instruction set that has been added to and mangled over the years, it's still so honed that the vast majority of systems run that ISA today. About the only real competitors are POWER and ARM, and those are for different markets. Can Intel penetrate both of those markets with x86 modifications and specialized CPUs? Like it or not, they're going to try.
  • emboss - Saturday, June 27, 2009 - link

    "You'll still have to do some extra work, but potentially not as much as if you use/target a completely different instruction set."

    Porting some piece of software from desktop x86 to mobile-phone x86 will involve almost exactly the same amount of work as porting from desktop x86 to any other mobile-phone architecture. Why? Because 99.999% of non-OS code falls into one of two cases:
    1) Non-performance-critical code, written in some HLL. The underlying CPU instruction set doesn't matter here.
    2) Performance-critical code, written in assembler. This is going to have to be completely rewritten even for mobile-phone x86, because the instruction timings etc. mean the code will run poorly on a mobile-phone x86.
    The remainder is non-performance-critical assembler, which IMO should be binned ASAP.

    *If* Intel were merely managing to run a Core 2 (say) at a really low power level, keeping everything else as-is, then the "x86 has a huge codebase" argument would make sense. But they're not.

    The main reason Intel is going for x86 everywhere is partially that it makes it harder for everyone else to compete, and partially an over-reaction to the failure of IA64 (x86 IS still important on the desktop). Besides the whole legal issue of getting Intel to grant an x86 license (they haven't for over a decade), designing a fast x86 decoder is a nightmare. Intel already has the decoders figured out, so they're an easy roadblock to throw in front of any would-be competitors.

    It's the same for Larrabee: there the x86 part is really only going to be used for branching and address computations (any real work will be done with LRBni), so it would have been much better replaced with something more tuned for that task (again, compatibility with existing code is a non-starter). But if they can get Larrabee established as a graphics/HPC platform instead of a vendor-independent thing like OpenCL, they can have a much more secure hold on their market share.
  • The0ne - Thursday, June 25, 2009 - link

    I also started on the 68k-series processors and have been designing controllers for quite some time. I like them much better than x86, of course. I think both of your arguments are correct, and it'll be interesting to see how Intel does.

    One thing I believe wholeheartedly will overshadow both your arguments is the poor programming that will come about in any event. The increase in memory will always invite poor programming from "managers" that don't care, no matter what instruction set you are using.
  • winterspan - Wednesday, June 24, 2009 - link

    I'm pretty sure you are wrong about Apple using SSDs that have the same new controller (Samsung S3C29RBB01) as the Corsair P256.
    The P256 is able to hit read and write rates over 200MB/sec, while Apple said directly during the recent SATA issue that they don't sell SSDs that can make use of anything above SATA/150.

    Additionally, I have seen pretty recent benchmarks of a Macbook Pro ordered with SSD and then replaced with the OCZ Vertex, and the Vertex was far faster in sequential read and write.

    I believe at least most of the Apple SSDs in use are Samsung OEM units using the old S3C49RBX01 controller, just like the Corsair S64/S128 uses. Again, these drives are far different from the Corsair P256, which uses the newer and far faster Samsung S3C29RBB01 controller.
  • Anand Lal Shimpi - Thursday, June 25, 2009 - link

    You're very correct. Samsung is phasing out the older drives and replacing them with the newer ones, but I believe most if not all of the older Macs used the previous-gen controller.

    Take care,
    Anand
  • RamarC - Wednesday, June 24, 2009 - link

    Wow. That's surprising, since it took Samsung/Sprint 8 MONTHS to get the Instinct to a usable state (text input in any app, a browser that worked, video that was actually viewable, GPS that would load 90% of the time). It wasn't that they weren't fixing things; they just wouldn't release the fixes except once every other month!
  • bbomb - Wednesday, June 24, 2009 - link

    Samsung doesn't believe in software updates once a phone is released. It was truly an act of God that you ended up with one for the Instinct.

    I have the Eternity, and Samsung just released the Impression, which is basically the Eternity with a keypad in a different body. It gets TouchWiz UI version 2, yet the Eternity gets jack when it should be a simple update for us.
  • strikeback03 - Thursday, June 25, 2009 - link

    That seems to be the case with most of these cell phone makers. As far as I can tell the LG Versa and Dare are pretty much the same thing, except that the firmware is more ironed out in the Versa.
  • ltcommanderdata - Wednesday, June 24, 2009 - link

    I hope you'll add Boot Camp results of the Mac Pro to Anandtech Bench to provide a reference point to the capabilities of dual processor setups. Certainly the Nehalem Mac Pro and the Harpertown and Clovertown Mac Pros if you have them.

    For your iPhone review, I hope you'll also do some benchmark comparisons between iPhone OS 2.2.1 and iPhone OS 3.0 on the iPhone 3G to see how much the OS update contributes to improved performance. Certainly with load times and web rendering and battery life as you've done, but also fps in games if possible. With the iPhone 3G S being marketed as 2x faster than the iPhone 3G, John Carmack has speculated that a 2x speedup in games is possible on the ARM11+MBX platform if Apple spent the time milking the software. I'd also be curious to know what the memory speeds of the various iPhones are, since CPU and GPU speeds have been confirmed or speculated on.
  • reckert - Wednesday, June 24, 2009 - link

    I'd be interested to see a comparison of the file systems as well.

    I actually found situations where, on flash drives, NTFS beats out FAT32 and exFAT.

    My test scenario came when I installed World of Warcraft on an OCZ Slate drive over USB.
    Initially it was formatted with the default settings as a FAT32 drive. Every time I logged out, it took 15 minutes to save everything.
    I wound up using some of the Sysinternals tools and found there were a lot of 2-30 byte writes occurring. Since the clusters were large, each write ran at the raw performance of the drive, but I/O in the app was slow.
    Long story short, I tried various combinations of exFAT, FAT32, and NTFS and found that in scenarios with a large number of small write I/Os, NTFS outperformed FAT32 and exFAT dramatically. This is on Windows 7 x64 RC with a 32 GB OCZ Slate flash drive, set to 'Optimize for Performance'. (I bought the drive for a laptop I was getting for work, but then they ordered a model without the ExpressCard slot.)
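
    A test like the one described above can be sketched in a few lines. This is a hypothetical, simplified version (the file path and write counts are made up for illustration); run it against files on FAT32-, exFAT-, and NTFS-formatted volumes to compare:

```python
# Rough sketch of the small-write test described above: time thousands of
# tiny writes to a file. The path and counts are placeholders, not from
# the original test setup.
import os
import time

def small_write_benchmark(path: str, count: int = 2000, size: int = 16) -> float:
    """Time `count` unbuffered writes of `size` bytes each; return seconds."""
    payload = b"x" * size
    start = time.perf_counter()
    # buffering=0 forces each tiny write down to the filesystem, mimicking
    # the 2-30 byte write pattern seen in the Sysinternals traces.
    with open(path, "wb", buffering=0) as f:
        for _ in range(count):
            f.write(payload)
    elapsed = time.perf_counter() - start
    os.remove(path)
    return elapsed

if __name__ == "__main__":
    t = small_write_benchmark("small_write_test.bin")
    print(f"2000 x 16-byte unbuffered writes: {t:.3f}s")
```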


  • Fox5 - Wednesday, June 24, 2009 - link

    NTFS has additional caching that FAT32 doesn't, so even if its actual speed isn't as good as FAT32's, perceived performance is better.
    It should be very hard to beat FAT32 in sequential performance, but the FAT file system is rather horrible for random access.
  • meson2000 - Wednesday, June 24, 2009 - link

    Why does it seem like you always leave out RIM as a competitor in the smartphone market?
  • SLEEPER5555 - Thursday, June 25, 2009 - link

    My exact thoughts. The new Curve 8900 and its CDMA twin the 9630 are great phones, and now RIM even has an app store with over 1000 apps to date. I love my 8900, but I am tempted to give the Palm Pre a try and just might.
  • fyleow - Wednesday, June 24, 2009 - link

    Simply because he does not have RIM phones. It was mentioned in the other article.

    Pretty sure Anand is a big BB fan and has nothing against them.
  • PrinceGaz - Thursday, June 25, 2009 - link

    For several moments there I was thinking "I didn't know Anand was a fan of Big Brother, but what has that got to do with smartphones?"... :)
