62 Comments
Guspaz - Wednesday, September 25, 2013 - link
Did I fall back in time to fifteen years ago? There is a big reason why audio processing moved entirely to software... Over time, as CPUs got faster and faster, the cost of doing sound work on the CPU got smaller and smaller, until eventually there was no performance benefit whatsoever to doing anything in hardware. All these years later, with so much more CPU power available to us (not to mention 4+ core chips in gaming rigs), I find it very hard to believe that the situation has somehow reversed itself since about a decade ago when everything moved to software.
B3an - Wednesday, September 25, 2013 - link
So you must think that MS put a dedicated audio DSP chip in the Xbox One for the sake of it?

Advanced 3D positional and reverb effects can use a significant amount of CPU cycles even on high-end chips. That's why no modern games use them: they're too expensive.
coburn_c - Wednesday, September 25, 2013 - link
The Xbox One has a netbook processor. The Xbox One has a planned life-cycle of ten years. There will be phones with more CPU power than the Xbox One well before it stops selling. What a stupid argument.
wlee15 - Wednesday, September 25, 2013 - link
As mentioned in the article, some audio effects are extremely expensive and aren't feasible on current CPUs, including Haswell i7s.
coburn_c - Wednesday, September 25, 2013 - link
Haha, no, there's nothing a 5 dollar DSP can do that an i7 can't. AMD slapped in a cheap DSP, gave it a marketing term, and they finally have their own exclusive API. Which is all it is, since it then has to pass through your soundcard codec. If they actually wanted to advance game audio they would have made an open standard; instead they made a marketing term.
franzeal - Wednesday, September 25, 2013 - link
You're clearly naive, because that's what ASICs do.
pouncedashfly - Wednesday, September 25, 2013 - link
Isn't latency the reason why they need a DSP for audio? Either that or a realtime OS...
cfaalm - Thursday, September 26, 2013 - link
Yep. Of course an i7 can calculate immense FX, whether it is reverb or delay or whatever. It is only when the reverb has to be really dense (high quality), or more reverbs/FX are going on at the same time, that the calculations are better off-loaded to a dedicated piece of hardware (a DSP chip) so the CPU can churn away on other tasks and no latency will occur. Ask anyone who creates music on a computer.
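To put a rough number on what "really dense" costs, here is a minimal C++ sketch of brute-force convolution reverb; the 2-second impulse response and 48 kHz rate are illustrative assumptions, not figures from the article:

```cpp
#include <cstddef>
#include <vector>

// Brute-force convolution reverb: convolve the dry signal with a measured
// impulse response (IR). A "dense" 2-second IR at 48 kHz is 96,000 taps,
// i.e. 96,000 multiply-adds per output sample.
std::vector<float> convolve(const std::vector<float>& dry,
                            const std::vector<float>& ir) {
    std::vector<float> wet(dry.size() + ir.size() - 1, 0.0f);
    for (std::size_t n = 0; n < dry.size(); ++n)
        for (std::size_t k = 0; k < ir.size(); ++k)
            wet[n + k] += dry[n] * ir[k];  // one multiply-add per (sample, tap)
    return wet;
}
// At 48,000 output samples per second that is roughly 4.6 billion
// multiply-adds per second, per channel, done this way.
```

Real engines use partitioned FFT convolution to cut that down by orders of magnitude, but the work stays regular multiply-accumulate arithmetic, which is exactly what a small DSP does cheaply while the CPU gets on with other tasks.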
nathanddrews - Thursday, September 26, 2013 - link
Windows Vista/7/8 may have stopped hardware audio dead on PC, but it's been alive and kicking in the cinema/HT market. My current high-end AVR uses a wide range of DSPs for maintaining quality while improving positional sounds, including height channels. Sadly, it's pre-Atmos, so I'll have to upgrade when Atmos-enabled AVRs hit the market. Anyway, the state of 3-D audio via the Windows stack is a sad one.

Using a $5 ASIC (probably cheaper than that) for 3-D aural calculations is an excellent move. I still use my Audigy 4 Pro w/Alchemy for my older games and the reverb and positional effects are superior to any modern software implementation I've heard so far. The sound quality is noticeably superior as well.
Since AMD is using this same technology in the PS4 and XBone, this means that PC gamers (with select AMD GPUs) won't be left with an inferior experience. The logic will carry over whether it's a console-to-PC port or vice versa. At least we'll have the option. It would be nice if AMD equipped ALL their GPUs with it... or sold a separate PCIe 1X or USB adapter so that all PC gamers could enjoy it. GPU agnostic, but they still get paid... Maybe that will be the next wave.
Either way, I'll be buying my next GPU based upon graphics performance and nothing more.
nezuko - Thursday, September 26, 2013 - link
"Haha, no, there's nothing a 5 dollar DSP can do that an i7 can't. AMD slapped in a cheap DSP, gave it a marketing term, and they finally have their own exclusive API. Which is all it is since it has to then pass through your soundcard codec. If they actually wanted to advance game audio they would have made an open standard, instead they made a marketing term."Correct me, but isn't that what nVidia do with its PhysX? Choose to make it propietary item instead of open source one like OpenCL. That is until all 3 next-gen console using AMD GPU inside and they scared to death that their technology would go vanished and decide to make PhysX available for next-gen console gaming.
There's nothing wrong with AMD introducing TrueAudio as proprietary, because all 3 next-gen consoles have AMD hardware inside (2 of which are likely to use these next-GCN-based GPUs in the next iteration of their consoles). AMD is taking advantage of this situation well, and maybe TressFX will come to the consoles (PS4 and X1) with the Mantle API enabled (via an updated OS on the PS4 and X1 that includes this low-level Mantle API).
Guspaz - Friday, September 27, 2013 - link
Except nVidia didn't create PhysX. A startup company developed a dedicated physics add-in card, that company managed to get their PhysX API adopted by a bunch of games, nVidia bought out the company and migrated the hardware acceleration from using a dedicated physics processor to using a programmable GPU. They didn't choose to make it, they integrated an existing solution.

Complex physics can also get rather more compute intensive than audio effects.
People forget that the reason that games moved to software sound engines wasn't because of anything Microsoft did with Vista, the transition happened years before Vista came out. I think the first point that I realized it was in 2004 when both Doom 3 and Half-Life 2 had moved to software sound (although Creative later sued Id into adding EAX to Doom 3 due to a patent dispute about shadow rendering, of all things). This is at a point where hardware audio effects (EAX) had widespread support, too, because even integrated audio in motherboard chipsets supported EAX.
So why, then, did the whole industry move over to software audio processing, even when hardware support was widespread? I think it's because the CPU impact to do it well enough became sufficiently small, doing it in software added a lot of extra flexibility over EAX, and it gave a good deal of consistency to have your game sound the same everywhere rather than hoping that everybody's EAX implementation was created equal.
Will TrueAudio be able to produce better quality sound than you might achieve on the CPU? Sure. But will people be able to tell the difference between TrueAudio reverb and current "good enough" reverb?
Wolfpup - Wednesday, October 2, 2013 - link
Regarding PhysX, I'm sure it'll be running on the two next gen consoles, although I suppose it might be on the CPUs only, not the GPUs. We saw it on the current gen consoles all the time (and Vita too), though I assume it was always just running on the CPUs. Still, it's a way for developers to do physics cheaply, I guess, and it can be accelerated on PC. I guess this is the audio equivalent of that, maybe.
HisDivineOrder - Thursday, September 26, 2013 - link
I must have missed mention of an i7 in that article...
Guspaz - Wednesday, September 25, 2013 - link
It's a game of diminishing returns. Currently available 3D positional and reverb effects very likely sound nearly indistinguishable from the more advanced ones. To top it off, GPUs already act very well as programmable DSPs, so if they desperately wanted to do this sort of stuff today, they wouldn't need new dedicated silicon to do it.

In PCs, it's a marketing gimmick, nothing more. On home consoles, the XBox One has very limited compute resources as a cost saving measure, to the point where it has to compensate for extremely low single-threaded performance by throwing more cores at the problem. It's possible that in that specific situation, a dedicated DSP might make sense. But the benefits would be far less in a PC with its much more powerful processor, and the advantages would rapidly shrink as CPUs continue to improve.
shonferg - Thursday, September 26, 2013 - link
Isn't it possible that AMD could use existing compute resources in the GPU for accelerating audio? Writing compute shaders that implement audio DSP effects, essentially? This has been demonstrated with OpenCL (https://www.khronos.org/assets/uploads/developers/...). Maybe AMD is providing a ready-made implementation with a direct path to output sound without writing back results to CPU memory?
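For what shonferg describes, the data-parallel shape of a simple effect is the key point. Below is plain C++ standing in for the body of such an OpenCL kernel (the loop index plays the role of get_global_id(0)); the gain and delay parameters are purely illustrative, not any real AMD API:

```cpp
#include <cstddef>
#include <vector>

// One "work-item" per output sample: a feedforward echo reads only the
// input buffer, so every iteration is independent and could run as a
// separate GPU thread.
void echoKernel(const std::vector<float>& in, std::vector<float>& out,
                float gain, std::size_t delaySamples) {
    out.resize(in.size());
    for (std::size_t gid = 0; gid < in.size(); ++gid) {  // gid = get_global_id(0)
        float delayed = (gid >= delaySamples) ? in[gid - delaySamples] : 0.0f;
        out[gid] = in[gid] + gain * delayed;  // dry signal + attenuated echo
    }
}
```

Feedforward effects like this parallelize trivially; feedback effects (long reverb tails) carry sample-to-sample dependencies and need more careful partitioning, which is part of why a purpose-built DSP pipeline is attractive.
Jaybus - Tuesday, October 1, 2013 - link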
No, not exactly. For one thing, the DSP runs independently, so in addition to freeing up CPU cycles it also frees up GPU cycles.
Wolfpup - Wednesday, October 2, 2013 - link
Ooooh duh, I forgot, yeah I guess you could run this sort of thing theoretically on the GPU... It never occurred to me, but that might work lol
Wolfpup - Wednesday, October 2, 2013 - link
I have to wonder about that... I mean you may be right, but the first time I actually heard impressive positional audio was on the Xbox 360, which lacked dedicated hardware (and I quickly quit using my surround speakers as they're a pain, and just use my TV's speakers, and forgot about that).

I was never too impressed by anything my Creative Labs cards did, save for the quality SNR, and my hope that maybe they were offloading work.
I don't know...
tipoo - Wednesday, September 25, 2013 - link
It's still up to 15% of one modern processor core, as they said. It's not a huge deal now, sure, but I'd still rather have that go to some teeny dedicated silicon than taking up CPU time.
coburn_c - Wednesday, September 25, 2013 - link
15% of one core. On the highest end card only... where offloading will do the least good.
tipoo - Wednesday, September 25, 2013 - link
15% of a high end modern CPU core is still a lot. *Shrugs*. Why complain about 15% less load on one core?
coburn_c - Wednesday, September 25, 2013 - link
Physx offloads a lot more than 15% of one core. It's still a crappy gimmick that should be shamed.
B3an - Thursday, September 26, 2013 - link
Don't know why you're butthurt over a useful feature that frees up CPU cycles and enables far better sound effects. Nvidia fanboy or what?

Many people who game don't even have great CPUs, so for them it will be more than just 15% CPU usage for advanced sound effects. It could mean the difference between 40 FPS and 60 FPS. Wouldn't be surprised if Nvidia copies this within the next 18 months.
andrewaggb - Friday, September 27, 2013 - link
like anybody with an amd cpu :-)
Tams80 - Friday, September 27, 2013 - link
It's a nice little bonus.
HisDivineOrder - Wednesday, September 25, 2013 - link
Because that space on the card could be dedicated to improving outright GPU performance instead? Because that space on the card could be nonexistent, reducing the cost of the overall card? Because burning die space for no practical gain in most games, since this is a proprietary solution exclusive to one generation of AMD discrete GPUs, is probably a waste of space?
cfaalm - Thursday, September 26, 2013 - link
Because it would help less powerful CPUs (especially AMD's) overcome the latency involved in adding the sound effects needed. It's so much easier and cheaper to slap in a dedicated DSP than to improve your whole CPU or GPU just for that.
jabber - Monday, September 30, 2013 - link
Considering the lengths some here will go to with their RAM timings etc. to get 1 fps extra... I would have thought they would be all over this.
Wolfpup - Wednesday, October 2, 2013 - link
Guess it depends on the added complexity and cost, since now you've got this thing using transistors, and they're worthless unless actually getting used. In the consoles it probably makes sense? But then why did they abandon them last gen?
Guspaz - Wednesday, September 25, 2013 - link
Dedicated silicon to shave off 3.75% of your total compute power (15% of one core on a quad-core) isn't a terribly good investment. 18 months from now, that'll be 1.875%...
inighthawki - Wednesday, September 25, 2013 - link
You're right. Why bother making efficiency improvements. That's stupid. Let's go back three generations, that's all we need.
frenchy_2001 - Thursday, September 26, 2013 - link
The reason we went back to software is not computation cost, but driver instability.

MS had enough of Creative Labs' crappy drivers crashing the system because they had access to low level hardware and could not use it correctly. This reflected poorly on Microsoft (instability) and gained very little for a negligible part of their market. They chose stability for the sound stack of WinVista+ (user land driver).
Creative has always been at the same time the company pushing to sell its audio products and the one trying to wall off the market using their proprietary API. Aureal was actually better at positioning audio, as they were doing the real calculation, not just occlusion, but they died and got acquired by Creative.
AMD recreating a positional API could be nice *IF* they get enough clout and/or open the API. Otherwise, it will just fracture the market again...
BrightCandle - Saturday, September 28, 2013 - link
They effectively did the same thing with the graphics drivers, forcing most of the driver outside of the kernel space and into user space so that crashes only took the driver down and not the operating system.

People were desperately wanting more stable machines, and Microsoft put out a chart with Vista about the causes of crashing; sound and graphics drivers dominated it (mostly graphics). Aureal A3D was an amazing effect and I was very sad to see it go at the time. I want that sound experience again.
coburn_c - Wednesday, September 25, 2013 - link
The fact that they only included it on the high end cards dooms this. At best it will be a physx level feature: easy to do in software, gimmicky, but restricted to a handful of cards and games.
CrispySilicon - Wednesday, September 25, 2013 - link
Or maybe it's only included in the upcoming cards that aren't re-badged, hence why it's missing from the 80X.

Advanced software audio effects are expensive; you don't WANT them executing on a core that isn't specially designed for it, because that core can be more productive doing something else. Haswell included.
eanazag - Wednesday, September 25, 2013 - link
eanazag - Wednesday, September 25, 2013 - link
It is on at least one low end SKU.
eanazag - Wednesday, September 25, 2013 - link
I have no clue why they didn't include it on everything above the R7 260X.
eanazag - Wednesday, September 25, 2013 - link
R7 260X at $139 has it.
carage - Wednesday, September 25, 2013 - link
So hardware sound acceleration is the name of the game now?

I sure wish nVidia brings back SoundStorm... that will show 'em!
whyso - Wednesday, September 25, 2013 - link
What would be amazing is if they could get this kind of stuff (physics, audio, etc.) to work on the CPU's IGP, which just sits there when gaming.
IanCutress - Thursday, September 26, 2013 - link
Not everyone has an IGP :P
whyso - Thursday, September 26, 2013 - link
Virtually every desktop chip Intel sells has an IGP. Every mobile chip AMD or Intel sells does. Many of AMD's desktop chips do. An IGP in a system is much more prevalent than a special piece of hardware in the GPU. Plus that IGP tends to sit idle.
nezuko - Thursday, September 26, 2013 - link
Oh, believe me when I'm saying AMD will put that TrueAudio on their next-gen APUs to attract Intel and nVidia users.
Wolfpup - Wednesday, October 2, 2013 - link
Ooooh there's an interesting idea. Yeah, I'm always pissed off that CPUs are wasting a gigantic amount of die area on terrible video I don't want. We could get at least one extra core if I'm remembering right, on Sandy Bridge? Probably worse now.

Use that for SOMETHING lol
moozoo - Thursday, September 26, 2013 - link
I actually think the audio DSP would have made sense if the DSP chip had gone into the Xbox One, the PS4 and APUs (i.e. for PCs and Steam boxes). At that point they could have included near hardware level audio in Mantle and have a uniform API across all the platforms.

However, as I understand it, the DSP chip isn't in any of these.
i'm not here - Thursday, September 26, 2013 - link
If AMD gets developers to write for it then it has a chance. Echo/reverb are poor examples even if they take processing power. Positioning and staging would be a better focus to make things more realistic.

[I remember 'SoundStorm.' Still a better sound than most of today's onboard sound chips.]
Daniel Egger - Thursday, September 26, 2013 - link
There are other very good reasons for doing it in hardware and next to the GPU: latency (for calculating the sound and effects), jitter, and A/V sync offset. Interestingly, all gamers seem to care about the lowest display response times, but no one seems to care whether the audio actually matches the video...
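Daniel Egger's latency point is easy to quantify. A small C++ sketch with typical buffer sizes; the numbers are generic, not tied to any vendor:

```cpp
#include <cstdio>
#include <initializer_list>

// Audio is delivered in fixed-size buffers; each buffer of N frames at
// sample rate R adds N/R seconds of delay before the sound can change.
int main() {
    const double sampleRate = 48000.0;  // frames per second
    for (int frames : {256, 512, 1024, 2048}) {
        double ms = 1000.0 * frames / sampleRate;
        std::printf("%4d-frame buffer -> %5.2f ms of delay\n", frames, ms);
    }
    // A software mixer typically keeps 2-3 buffers queued, so a 1024-frame
    // buffer already puts roughly 43-64 ms between a game event and the
    // speaker, before any display lag is counted.
    return 0;
}
```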
risa2000 - Thursday, September 26, 2013 - link
I wanted to write exactly this. Also, if the graphics card misses a frame (or two) because of a heavy scene, few people will notice. However, if the audio misses a frame, everyone will immediately hear it.
boomie - Thursday, September 26, 2013 - link
Maybe I will be able to get positional sound that isn't a load of crap.

Every single game with camera controls right now is doing the thing where, if the sound source is exactly 90° to the left of the camera, the sound only comes from the left channel with nothing on the right. How this passes for acceptable, and why I'm the only one noticing it, I'm not sure.
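The dead channel boomie describes is what naive linear panning produces at ±90°. A minimal C++ sketch of equal-power panning with a crude floor for the far ear; minGain is a made-up knob, and a real solution would use HRTFs with inter-ear delay rather than a simple gain clamp:

```cpp
#include <algorithm>
#include <cmath>

// Map a source azimuth (-pi/2 = hard left, +pi/2 = hard right) to left/right
// gains. Equal-power panning keeps l^2 + r^2 = 1, and the minGain floor
// mimics the fact that a real head never fully silences the far ear.
void panSample(float mono, float azimuthRad, float minGain,
               float& left, float& right) {
    const float pi = 3.14159265f;
    float pos = azimuthRad / pi + 0.5f;   // 0 = full left, 1 = full right
    float l = std::cos(pos * pi * 0.5f);  // equal-power gain curves
    float r = std::sin(pos * pi * 0.5f);
    left  = mono * std::max(l, minGain);  // far ear stays audible at 90°
    right = mono * std::max(r, minGain);
}
```

Even this crude floor avoids the one-channel-only effect at exactly 90°; proper HRTF filtering adds the head-shadow EQ and the sub-millisecond interaural delay on top.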
c4keislie - Thursday, September 26, 2013 - link
As someone who is 75% deaf in one ear (and a whole lot of distortion in what I do hear), I definitely notice the 90-degree sound source issue. Scripted sequences where someone is talking and you have no control over where you are looking are the worst, as I sometimes have to turn my headphones around (L -> R, R -> L) just to hear the conversation.
JNo - Thursday, September 26, 2013 - link
A few months ago, Razer brought out Surround Sound for headphone use; it's free and meant to be pretty good (I haven't tried it yet). There's CMSS for Creative's X-Fi chips, mentioned in the article (which I use), and there's also Dolby Headphone, which some people like, though I don't think it supports full 3D, i.e. I don't think it calculates exact sound effect positioning or elevation etc.

Now that this is coming out from AMD, it would be good to have a round up of the technology behind, and the effectiveness of, the various 3D positional audio solutions for the PC using headphones. And do use headphones with a wide soundstage such as the Audio Technica AD700 or AD900 or Sennheiser 558 or 598 (or even the older 555 or 595)!
mr_tawan - Thursday, September 26, 2013 - link
When I first heard about the 'Programmable Audio Engine', I thought about something similar to shaders in graphics. But the more they revealed, the less sure I am that it is. It sounds like they come with predefined effects rather than letting the developer create their own.

But if that's the case, then it would not be called 'Programmable', right?
Probably they just don't stress the word enough, and instead announce the partners/technology based on it. I don't know, maybe I've got to check the info on the developer website in the future.
MrSpadge - Thursday, September 26, 2013 - link
MrSpadge - Thursday, September 26, 2013 - link
Upon reading "programmable" I first thought it was going to be a software solution running on the shaders. Which should work pretty well, especially if the GPU could finally run several tasks at once without major headaches. Which is a direction they should have moved into anyway. Especially considering this is supposed to be GCN 2.0 rather than 1.0/1.1.

Anyway, this should have made it into DX 11.2 or 11.3, open to be supported by anyone. Let the game enable it or not. And if it's enabled, but no supporting hardware is present, just use the current simple positioning etc. instead.
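The fallback MrSpadge wants is just capability dispatch at startup. A hedged C++ sketch of the structure; every name here (hasHardwareAudioDsp, the mixer classes) is hypothetical, not a real API:

```cpp
#include <memory>

struct Mixer {                  // common interface the game code sees
    virtual void mix3d() = 0;
    virtual ~Mixer() = default;
};
struct DspMixer : Mixer {       // offloaded path when hardware exists
    void mix3d() override { /* submit voices to the audio DSP */ }
};
struct SoftwareMixer : Mixer {  // "current simple positioning" fallback
    void mix3d() override { /* plain CPU panning/attenuation */ }
};

bool hasHardwareAudioDsp() { return false; }  // stub: real driver query here

std::unique_ptr<Mixer> makeMixer() {
    if (hasHardwareAudioDsp()) return std::make_unique<DspMixer>();
    return std::make_unique<SoftwareMixer>();  // same game code either way
}
```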
erple2 - Friday, September 27, 2013 - link
No, I think that this is actually programmable sound, not just enabling reverb and echo (which ultimately was the only thing that EAX did). The trick isn't to figure out whether you have a weird reverberation in a scene, it's to calculate what the sound stage should sound like based on the direction you are looking, the physical makeup of the scene (where are the walls, what material are they made of, and ultimately how does that impact how sound reverberates off that surface), where the sources of the sounds are, and the cumulative effect each surface has on the sound sources as they travel to the listener. That's a hard problem to solve. Think of raytracing but for sound, and add appropriate algorithms to figure out how the listener (with two ears) would perceive the sound, and ship that to the speakers. I imagine that the shader component comes into play when you're determining the effect (by frequency) of a surface on incoming and outgoing sound. That's way more complicated than I originally thought. Doing that in real time is very expensive.
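A hedged sketch of the "raytracing but for sound" idea erple2 outlines: accumulate, per frequency band, the loss from each surface a path bounces off, plus distance falloff. The three bands and the absorption model are illustrative stand-ins, not any engine's data:

```cpp
#include <array>
#include <vector>

// Each surface absorbs a fraction of the energy in each band (low/mid/high).
struct Surface { std::array<float, 3> absorption; };

// Gain that survives one source-to-listener path with the given bounces.
std::array<float, 3> pathGain(const std::vector<Surface>& bounces,
                              float pathMeters) {
    std::array<float, 3> gain = {1.0f, 1.0f, 1.0f};
    for (const Surface& s : bounces)
        for (int b = 0; b < 3; ++b)
            gain[b] *= 1.0f - s.absorption[b];  // keep only reflected energy
    float spread = 1.0f / (1.0f + pathMeters);  // crude distance falloff
    for (float& g : gain) g *= spread;
    return gain;  // a real engine would also track each path's arrival delay
}                 // and run an HRTF per ear, as erple2 says
```

Multiply that by hundreds of paths, dozens of sources, and a thousand updates a second, and the "expensive in real time" point makes itself.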
risa2000 - Friday, September 27, 2013 - link
Actually, this is what Aureal did 15 years ago with their Aureal Vortex 2 chips. Imagine you are in a room, closer to one wall. You reload your weapon. You hear the "click" echo from the closer wall sooner and louder, while from the other side it comes later and fuzzier. Imagine you turn slightly while reloading: you can immediately figure out which wall is closer and which is farther away, even in pitch darkness.

Or imagine you ride a train through a tunnel and the echo of the wheels bumping the rails is literally pressing on you, and suddenly the tunnel expands into a large room and the echo is suddenly much more delayed and attenuated.
Those were some examples of what you can experience in Half-Life (the first one) if you played it with Aureal Vortex 2 hardware and headphones (as I did).
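The wall cue risa2000 remembers falls straight out of geometry: a reflection off a wall d meters away arrives 2d/c seconds after the direct sound. A tiny C++ check with illustrative distances:

```cpp
#include <cstdio>
#include <initializer_list>

int main() {
    const double c = 343.0;  // speed of sound in air, m/s
    for (double wallMeters : {0.5, 2.0, 8.0}) {
        double delayMs = 1000.0 * (2.0 * wallMeters) / c;  // out and back
        std::printf("wall %.1f m away -> echo %5.1f ms after the click\n",
                    wallMeters, delayMs);
    }
    // ~2.9 ms for the near wall vs ~46.6 ms at 8 m: plenty of spread for
    // the ear to separate, which is why it works even in pitch darkness.
    return 0;
}
```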
BrightCandle - Saturday, September 28, 2013 - link
Was an amazing effect, and I used my Aureal Vortex 2 for as long as I could, before finally there was no reason to anymore. Real shame they were sued into destruction, because it was a great technology that gave a real advantage in some games.

But I wish AMD was making a sound card with this instead, so we could get this capability without an AMD graphics card.
Tig3RStylus - Thursday, September 26, 2013 - link
I don't understand why people are complaining. If the work results in improvements, small or large, it is a benefit. If it triggers their competitor to do the same, everybody wins. Granted, it would be better if it was brand agnostic, but at least somebody is doing something to push the envelope.
Flunk - Friday, September 27, 2013 - link
I think maybe it should be mentioned that the amount of GPU space this sort of DSP would take up is minuscule. So you're getting a lot of benefit for almost nothing. I can completely understand why AMD would do this: it fits into their fusion strategy of integrating everything and gives them a feature that Nvidia doesn't have, for very little of their overall transistor budget.
Wolfpup - Wednesday, October 2, 2013 - link
Ooooh hey, yeah if they can stick this on their A series CPUs, that would be a nice little bullet point for 'em. They're already IMO the best choice at the low end.
Sleeper0013 - Tuesday, October 1, 2013 - link
Coming from an audio engineer's perspective: this is going to make the process of audio engineering for PC games much simpler, and also allow more realistic, real-time, free-perspective 3D sound fields, which I promise you an i7 can't handle.

This is also going to make recording into a digital environment more precise, considering digital audio engineers are always fighting latency. I assure you I welcome a dedicated APU, which is vastly going to improve my recording workflow.
This will also translate into using a PC-generated digital effects signal chain with live instruments, which I assure you isn't a possibility today because of latency problems due to the lack of a buffer layer with a dedicated APU.
Wolfpup - Wednesday, October 2, 2013 - link
Hmm... this is all sort of strange. This happened before (and it will all happen again ;) back in 2001 with Nvidia and their Xbox 1 GPU. I was under the impression Nvidia had a great part, but gave up because of licensing nonsense and lawsuits from Creative?

Then around the same time the Xbox 360 ditched dedicated sound processing, putting it all in software I think (when actually even the PSP has DSPs to handle it, I think?) and relying on its 3 general purpose cores. Windows NT 6 does the same thing, and also of course has spare cores around the same time.
It was a bit disappointing, but kind of made sense.
Only now...we've got dedicated hardware back again?
I'd assume this was something Microsoft (and/or Sony) requested, and now AMD can expose it on their new PC parts.
I'm sure it'll be well utilized on the consoles. I suppose that, like that closer-to-the-metal language for the GPU portion of their chips, this might get utilized on PC just thanks to portions of it (perhaps) already getting written for the consoles, making it less of an issue to support on PC.
I don't know... Physics on GPUs is an awesome idea, but isn't getting used tons when only Nvidia can do it (and then really on high end GPUs ideally). I'd imagine this'll be the same boat, maybe kind of cool, but developers will have to make sure everything works in software on the CPU too.
Yash047 - Thursday, October 10, 2013 - link
Will the HD 7790 support TrueAudio, given that the R7 260X utilizes the same GPU (an overclocked version)?