13 Comments

  • brucethemoose - Thursday, April 6, 2023 - link

    > the card’s AV1 encoding quality should be comparable to AV1 slow. These comparisons are based on VMAF scores

    That's crazy. I saw a graph where Nvidia's realtime Ampere AV1 encoder was on par with x264 veryslow, which equates to a quick aomenc AV1 preset. "Slow" AV1, on the other hand, is properly slow.

    The AI parameter control thing is super interesting, as that is undoubtedly where tons of bitrate is wasted in streaming. Even relatively primitive, non-realtime tuning (like av1an's VMAF target testing + dark scene boost) has a huge effect, and that's a long way from region-of-interest encoding.
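    For context on what that VMAF targeting does: av1an probe-encodes each scene and searches for the quantizer that just meets a target VMAF score. A minimal sketch of that search, with a hypothetical stand-in `probe_vmaf()` (real code would run a short probe encode and score it with libvmaf):

```python
def probe_vmaf(crf: int) -> float:
    # Hypothetical stand-in: real code would encode a short probe
    # at this CRF and measure VMAF against the source clip.
    # Monotonic model: higher CRF -> lower quality.
    return 100.0 - 0.8 * crf

def target_crf(target: float, lo: int = 10, hi: int = 60) -> int:
    """Binary-search for the highest CRF whose probe still meets `target` VMAF."""
    best = lo
    while lo <= hi:
        mid = (lo + hi) // 2
        if probe_vmaf(mid) >= target:
            best = mid      # quality still acceptable; try a higher (cheaper) CRF
            lo = mid + 1
        else:
            hi = mid - 1
    return best

print(target_crf(85.0))  # -> 18 with the stand-in model above
```

    The per-scene search is what makes this "a long way from region-of-interest encoding": it adapts bitrate across scenes, but not across regions within a frame.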
  • Farfolomew - Thursday, April 6, 2023 - link

    Yes that AV1 slow quality surprised me too. Seems too good to be true. Maybe a typo?
  • Hifihedgehog - Thursday, April 6, 2023 - link

    It costs $1595 and is a dedicated video encoding ASIC, so I believe it.
  • brucethemoose - Thursday, April 6, 2023 - link

    It's possible, as AV1's default rate control isn't that sophisticated.
  • brucethemoose - Thursday, April 6, 2023 - link

    Also, VMAF is a great model, but I don't think it can, say, recognize that a face is a more important part of a frame to preserve.

    Hence AMD could actually be understating the quality of the output here.
  • Ryan Smith - Thursday, April 6, 2023 - link

    I've double checked, and that is indeed a typo. AMD is advertising the MA35D's video encode quality as comparable to x265 slow, not AV1 slow. Apologies for that.
  • brucethemoose - Friday, April 7, 2023 - link

    That makes more sense, but is still pretty good.
  • MrCommunistGen - Thursday, April 6, 2023 - link

    Wondering/hoping that maybe a cut down version of one of these encode blocks could be used to replace AMD's current VCN architecture. To save space, they could possibly leave out the dedicated AI blocks and utilize GPU compute for that instead.
  • brucethemoose - Thursday, April 6, 2023 - link

    Or slap an existing die onto a PCB and sell it as a "streamer edition." I dunno how it's wired internally, but maybe they could give it enough bandwidth to skip the memory ICs and function as a chiplet hanging off the main GPU die.
  • ballsystemlord - Thursday, April 6, 2023 - link

    It only costs as much as an RTX4090! ;)
  • Samus - Friday, April 7, 2023 - link

    A card that can encode H264 or H265 at 4K using 1W per stream would be a lot more useful to me personally. I have a lot of BlueIris PCs out there that have to record direct to disk because even Intel QuickSync can't keep the power draw low, even on monster chips like an i7-12700. Going with a weaker CPU like a 10th or 11th gen usually results in more power usage because of much higher CPU utilization.

    Throw some AI analysis in there like Deepstack and you're practically pegging a 12th gen i7 with anything more than 20 streams.

    And while one could offload Deepstack to a GPU, that's actually even less power efficient. The only time that would be needed is if you had a CPU that couldn't handle the task at all for the number of cameras/streams you have coming in.

    The less-elegant solution I've started using is to have high quality sub channels (usually 720p, but sometimes 480p is the camera max) and have AI analyze those instead of the 4K stream. It's still good enough at figuring out vehicles, people, shipments, open/closed doors, and differentiating shadows/branches in the wind as false positives. But it isn't perfect and is very clunky to set up. License plate recognition requires at least 1080p, which no sub channel will do.
  • Threska - Friday, April 7, 2023 - link

    Dedicate some FPGA/DSPs to the task.
  • Samus - Thursday, April 13, 2023 - link

    I'm kind of restricted to what BlueIris and software in general support.