"Taking a step back, what Varjo is going for is essentially removing the screen-door effect of current VR HMDs via by offering an area of very high pixel density. But the tradeoff with that pixel density is the limited size and FOV of the OLED microdisplays, and thus foveated projection, with Varjo filling in the rest of the FOV with a lower resolution. Executed correctly, this means that the high resolution display is always aligned with the center of the user's FOV – where a user's eyes can actually resolve a high level of detail – while the coarser peripheral vision is exposed to a lower pixel density. As mentioned earlier, in principle this allows for the benefits of high resolution rendering without all of the drawbacks of a full FOV high density display."
How are they supposed to do this without mechanically moving the high-density display (or the entire display setup)? After all, eyes can move quite a lot without the head moving, and unless I'm misunderstanding this entirely, the high density display will only align with the center of the user's FOV when looking straight ahead. I get the value of this for specific focused tasks where the user needs to look at a specific place/object, but for anything dynamic this would seem inferior to a pure foveated rendering approach - at least that can follow the user's eye movement as quick as the GPU can render new frames (and the eye tracker can inform the GPU), and you won't have the "oh, you're looking a bit to the side? yeah, there's no detail there" effect.
No, it isn't answered. Sure, there's eye tracking. Eye tracking does not move a display, it tracks eye movement. Do the displays actually physically move? Is there some optical trickery making this work? Or are they simply stalling for time, trying to hide that this has a fixed central "high detail" area while they figure out a flexible solution for the next revision? There's no substance at all in this claim. If eye tracking was all you needed, this would be a fixed problem, given how eye tracking HMD concepts have been shown off for years.
As I understand it a mirror moves. It sounds gimmicky but I have read people saying it's works very well after they tried it and works as decribed - you see an extremely clear image.
Most likely, the microdisplay itself won't move; instead they'll probably be moving a small mirror (via a couple of fast X-Y galvo motors) that repositions the microdisplay's projection into/within the optical combiner. Whereas the low-res backdrop would stay fixed, covering the full 100 degree FOV...
How are they going to do that without getting some weird glitches when the mirror moves? Wouldn't you then run into issues of "double vision" (i.e. imperfect display edge overlapping), perspective shifts in the central image, or possibly focus issues from the mirror bringing the microdisplay out of the focal plane of the lens? Sounds immensely complicated to me.
Also, from how I read the descriptions in the article, the microdisplays and peripheral displays are physically overlapping/adjacent, making a solution like this very difficult. For both to be in focus, they'd have to be within a very short distance of each other. Of course, that problem is indeed solveable with a mirror setup.
Yeah it seems like they are trying to go light on the details right now on purpose, BUT I would guess the displays are static and then the lens moves a bit in the X-Y directions which could keep the high res part in your FoV. Although, I'm not sure...
I believe this was definitely how the retrofitted Oculus prototype worked, but as for the Alpha Prototype it's a little unclear - and purposely so, like extide mentioned. According to Varjo's developer FAQ, anyone that has a Varjo headset as part of their early access has to run by them first before showing or demoing to others. So I don't expect us to find out anytime soon.
From what I can tell, these initial HMDs lack whatever 'secret sauce' they intend to use to physically shift a microdisplay faster than average galvanometer. The central panel is fixed resistive to the head like with the modified CV1s, so you need to point your head at a target and keep your eyes centred in the middle of the view to gain any benefit. The version with the mobile microdisplay will come later.
"Microsecond switch times" is actually what they told me verbatim, so I didn't want to twist their words per se. I've reworked the sentence to include a quote in order to make that a little clearer.
sophisticated weaponized Eye tracking for targeting systems on military crafts ( like the Huey Cobra's targeting for it's Gatling gun ) has been an advanced technology concentration as far back as the late 80's? On the other hand... Why would that actual effect require a mechanical result to achieve the focus/blur? Between what that foveated tracking tech returns as the point of interest... and in combination with an HMD's depth camera/inside-out tracking of the actual depth at that position?
Why can't those coordinates be enough to simply calculate a realistic render of focus/blur depending on depth returned at any given point of viewer interest?
Does this optical combiner actually need to do anything other than composite an overlay where ( if AR ) everything CG is simply as blurry as everything else not in focus given every objects supposed/given depth? And if VR... What optic wizardry would even be needed for the displayed content? Wouldn't the render simply take into account returned depth tracked to calculate depth for stereoscopic separation and convergence and then blur and focus depending on interest tracking? ( interest tracking being the actual/only foveat tech needed? )
My brain hurtz... So I assume I am missing something not obvious to me and beyond my meager comprehension. ( I can smell my burning brainz bits )
19 Comments
ZeDestructor - Thursday, November 30, 2017 - link
I'm liking this. If it's not too expensive and works with the Oculus/Vive SDKs, it would be up there for consideration in my book.
MamiyaOtaru - Tuesday, December 5, 2017 - link
you're in luck! it will be "under $10,000"
Valantar - Thursday, November 30, 2017 - link
"Taking a step back, what Varjo is going for is essentially removing the screen-door effect of current VR HMDs via by offering an area of very high pixel density. But the tradeoff with that pixel density is the limited size and FOV of the OLED microdisplays, and thus foveated projection, with Varjo filling in the rest of the FOV with a lower resolution. Executed correctly, this means that the high resolution display is always aligned with the center of the user's FOV – where a user's eyes can actually resolve a high level of detail – while the coarser peripheral vision is exposed to a lower pixel density. As mentioned earlier, in principle this allows for the benefits of high resolution rendering without all of the drawbacks of a full FOV high density display."How are they supposed to do this without mechanically moving the high-density display (or the entire display setup)? After all, eyes can move quite a lot without the head moving, and unless I'm misunderstanding this entirely, the high density display will only align with the center of the user's FOV when looking straight ahead. I get the value of this for specific focused tasks where the user needs to look at a specific place/object, but for anything dynamic this would seem inferior to a pure foveated rendering approach - at least that can follow the user's eye movement as quick as the GPU can render new frames (and the eye tracker can inform the GPU), and you won't have the "oh, you're looking a bit to the side? yeah, there's no detail there" effect.
SunnyNW - Thursday, November 30, 2017 - link
I had this same exact question. How is it that they are using eye-tracking?
jordanclock - Thursday, November 30, 2017 - link
That is answered in the very next paragraph: eye-tracking to move the lens/display with your eyes.
Valantar - Thursday, November 30, 2017 - link
No, it isn't answered. Sure, there's eye tracking. Eye tracking does not move a display, it tracks eye movement. Do the displays actually physically move? Is there some optical trickery making this work? Or are they simply stalling for time, trying to hide that this has a fixed central "high detail" area while they figure out a flexible solution for the next revision? There's no substance at all in this claim. If eye tracking was all you needed, this would be a solved problem, given how eye-tracking HMD concepts have been shown off for years.
Diji1 - Friday, December 1, 2017 - link
As I understand it, a mirror moves. It sounds gimmicky, but I have read people saying it works very well after they tried it and works as described - you see an extremely clear image.
boeush - Thursday, November 30, 2017 - link
Most likely, the microdisplay itself won't move; instead they'll probably be moving a small mirror (via a couple of fast X-Y galvo motors) that repositions the microdisplay's projection into/within the optical combiner, whereas the low-res backdrop would stay fixed, covering the full 100-degree FOV...
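To make the galvo-mirror idea above concrete, a rough sketch of the steering math, assuming the eye tracker reports gaze angles in degrees and that a mirror rotated by theta deflects the reflected beam by 2*theta (standard reflection geometry). The function and its limits are invented for illustration, not taken from Varjo.

# Hypothetical sketch: steer the microdisplay's projection with X-Y galvo
# mirrors so the high-res inset follows the gaze. Not Varjo's implementation.
def gaze_to_mirror_angles(gaze_yaw_deg, gaze_pitch_deg, max_steer_deg=20.0):
    """Convert a tracked gaze direction into mirror rotation commands (degrees)."""
    # Clamp to the mechanical range of the (hypothetical) galvos.
    yaw = max(-max_steer_deg, min(max_steer_deg, gaze_yaw_deg))
    pitch = max(-max_steer_deg, min(max_steer_deg, gaze_pitch_deg))
    # A mirror rotation of theta deflects the beam by 2*theta, so each mirror
    # only needs to rotate half the desired image shift.
    return yaw / 2.0, pitch / 2.0

# The renderer would also have to shift the inset image by the same gaze angles
# so the high-res patch stays registered with the fixed low-res backdrop.
mirror_x_deg, mirror_y_deg = gaze_to_mirror_angles(gaze_yaw_deg=8.0, gaze_pitch_deg=-3.0)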
Valantar - Thursday, November 30, 2017 - link
How are they going to do that without getting some weird glitches when the mirror moves? Wouldn't you then run into issues of "double vision" (i.e. imperfect display edge overlapping), perspective shifts in the central image, or possibly focus issues from the mirror bringing the microdisplay out of the focal plane of the lens? Sounds immensely complicated to me.

Also, from how I read the descriptions in the article, the microdisplays and peripheral displays are physically overlapping/adjacent, making a solution like this very difficult. For both to be in focus, they'd have to be within a very short distance of each other. Of course, that problem is indeed solvable with a mirror setup.
extide - Thursday, November 30, 2017 - link
Yeah, it seems like they are trying to go light on the details right now on purpose, BUT I would guess the displays are static and then the lens moves a bit in the X-Y directions, which could keep the high-res part in your FoV. Although, I'm not sure...
Diji1 - Friday, December 1, 2017 - link
I have read people's impressions of it at trade shows, and apparently it works very well - the image is super clear and very stable.
Nate Oh - Thursday, November 30, 2017 - link
I believe this was definitely how the retrofitted Oculus prototype worked, but as for the Alpha Prototype it's a little unclear - and purposely so, like extide mentioned. According to Varjo's developer FAQ, anyone that has a Varjo headset as part of their early access has to run it by them first before showing or demoing to others. So I don't expect us to find out anytime soon.
edzieba - Friday, December 1, 2017 - link
The retrofitted Oculus prototype lacked any movement whatsoever; the microdisplay was fixed to the centre of the view.
Nate Oh - Friday, December 1, 2017 - link
I wasn't trying to say otherwise. I was actually responding to boeush.
edzieba - Thursday, November 30, 2017 - link
From what I can tell, these initial HMDs lack whatever 'secret sauce' they intend to use to physically shift a microdisplay faster than the average galvanometer. The central panel is fixed relative to the head, like with the modified CV1s, so you need to point your head at a target and keep your eyes centred in the middle of the view to gain any benefit. The version with the mobile microdisplay will come later.
leonlee - Thursday, November 30, 2017 - link
"As for latency, Varjo notes that the displays have microsecond switch times and thus generally have low latency."Did you mean sub-millisecond switching times?
Nate Oh - Thursday, November 30, 2017 - link
"Microsecond switch times" is actually what they told me verbatim, so I didn't want to twist their words per se. I've reworked the sentence to include a quote in order to make that a little clearer.theuglyman0war - Thursday, January 18, 2018 - link
theuglyman0war - Thursday, January 18, 2018 - link
Sophisticated weaponized eye tracking for targeting systems on military craft (like the Huey Cobra's targeting for its Gatling gun) has been an advanced technology concentration as far back as the late '80s?

On the other hand... why would that actual effect require a mechanical solution to achieve the focus/blur? Between what that foveated tracking tech returns as the point of interest, in combination with an HMD's depth camera/inside-out tracking of the actual depth at that position - why can't those coordinates be enough to simply calculate a realistic render of focus/blur depending on the depth returned at any given point of viewer interest?

Does this optical combiner actually need to do anything other than composite an overlay where (if AR) everything CG is simply as blurry as everything else not in focus, given every object's supposed/given depth? And if VR... what optic wizardry would even be needed for the displayed content? Wouldn't the render simply take the tracked depth into account to calculate depth for stereoscopic separation and convergence, and then blur and focus depending on interest tracking? (Interest tracking being the actual/only foveated tech needed?)

My brain hurtz... So I assume I am missing something not obvious to me and beyond my meager comprehension. (I can smell my burning brainz bits)
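What the comment above describes - computing focus/blur purely in software from the depth at the tracked point of interest - is essentially gaze-contingent depth of field, and it can be done without any moving optics. A minimal sketch with a simplified circle-of-confusion model and hypothetical parameter names; note it only fakes blur in the rendered image and adds no extra pixel density where the eye is looking, which is what Varjo's high-res inset display is for.

import numpy as np

# Hypothetical sketch: gaze-contingent depth of field from the renderer's own
# depth buffer plus an eye-tracked gaze point. Not Varjo's implementation.
def gaze_blur_radii(depth_buffer_m, gaze_px, aperture=0.12, max_blur_px=12.0):
    """Per-pixel blur radius, driven by the depth at the point being looked at."""
    focus_depth = depth_buffer_m[gaze_px]              # depth under the gaze point
    # Blur grows with the (normalised) difference from the focus depth,
    # a crude stand-in for a physical circle of confusion.
    coc = aperture * np.abs(depth_buffer_m - focus_depth) / np.maximum(depth_buffer_m, 1e-3)
    return np.clip(coc * 100.0, 0.0, max_blur_px)      # blur radius in pixels

# Example: a toy 4x4 depth buffer with the user looking at the near object.
depth = np.array([[1.0, 1.0, 5.0, 5.0]] * 4)
radii = gaze_blur_radii(depth, gaze_px=(0, 0))         # far pixels get blurred
# A compositor would then apply a variable-radius blur using these radii.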
theuglyman0war - Thursday, January 18, 2018 - link
damn... didn't realize the subject was so necro... :(