33 Comments
kyuu - Friday, May 1, 2015 - link
The limited FOV is definitely something that needs to be addressed before they release this.

Gunbuster - Friday, May 1, 2015 - link
I wonder if the limited field of view is really from a GPU constraint. More FOV = higher resolution = a GPU that is not going to fit on a head-mounted unit with a limited thermal envelope.

maecenas - Friday, May 1, 2015 - link
I suspect this problem will take several years / generations to address. You really have a worst-case scenario here: anything interacting with your eyes in the real world is going to need some serious GPU power (human vision is very high resolution compared to computer graphics), and heat is going to be an issue on something strapped around your head, much more so than in a phone or tablet.

uhuznaa - Friday, May 1, 2015 - link
I think you could either trade lower resolution for a wider field of view (depending on what is more important to you) or even render at a higher resolution in the center of your field of view and at a lower one outside of that. Our eyes work very similarly.

Anyway, it seems that Microsoft fell to the temptation to over-promise and under-deliver. Getting people all giddy and then offering something that is rather bland isn't clever. MS really got me suspicious with the unrealistic demos; these were too good to be true.
Still, this is cool technology, and if you manage not to compare it to the demos there will be some uses for it. As long as it can be used for games (and porn) nobody will complain very loudly. It's novel and potentially more practical than full-immersive VR (due to avoiding disorientation and nausea). Things like a wider field of view are easily addressed with better technology in a fairly linear way sooner or later.
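The two-zone rendering idea above can be put in rough numbers. Here's an illustrative Python sketch; the panel resolution, fovea size and periphery scale are all made-up figures, not HoloLens specs:

```python
# Back-of-envelope: pixels rendered by a uniform full-res pass versus a
# two-zone "foveated" pass (full res in the center, reduced res outside).
def pixels_uniform(width, height):
    return width * height

def pixels_foveated(width, height, fovea_frac, periphery_scale):
    # Central zone (fovea_frac of each dimension) at full resolution.
    fovea = int(width * fovea_frac) * int(height * fovea_frac)
    # Remaining pixels rendered at periphery_scale in each dimension.
    periphery = pixels_uniform(width, height) - fovea
    return fovea + int(periphery * periphery_scale ** 2)

full = pixels_uniform(2560, 1440)
fov = pixels_foveated(2560, 1440, fovea_frac=0.4, periphery_scale=0.5)
print(f"{fov / full:.0%} of the uniform workload")  # well under half
```

With these assumed numbers the foveated pass touches roughly a third of the pixels, which is why the trick is attractive for power-limited headsets.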
Alexvrb - Friday, May 1, 2015 - link
First-gen Oculus hardware wasn't that great either, and it's tethered to big powerful machines that do all the heavy lifting.

For a standalone battery-powered rig with very limited power and thermal budgets I think the HoloLens hardware is surprisingly far along. These things take time, but that doesn't mean they should hide the technology under a rug and work in the dark. The more people they get testing (and eventually using) these and building software for them, the better future-gen units will be.
Krysto - Thursday, May 7, 2015 - link
At least a decade is my guess. Microsoft is rushing this tech to market because it doesn't want people to think "it's falling behind VR". But AR was ALWAYS going to be at least 10 years behind VR. It's just much harder to do AR RIGHT.

Zoomer - Friday, May 1, 2015 - link
I would think it's a power / integration constraint. Assuming the display tech is LCD, how much LCD can they practically wrap around the goggles?

I imagine resolution can be mitigated by rendering peripheral vision at a lower res, perhaps at a lower color depth or even monochrome.
jjj - Friday, May 1, 2015 - link
The available pico projectors are pretty low resolution. Himax or Omnivision are likely at 720p max for compact-size LCoS.
Sony, using Microvision tech, has a 1920 x 720 laser pico projector, but that's likely too big for this, even more so when you have to use two.
So a bigger image wouldn't really mean more pixels, since the display tech doesn't allow it. You can't even go with more than 2 projectors, since size, cost, weight and especially power would go crazy.
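To put numbers on why a fixed-resolution panel can't just be stretched wider: angular resolution drops linearly with field of view. A small sketch, using the 1280x720 panel mentioned above and assumed FOV figures:

```python
# Pixels per degree for a fixed 1280-pixel-wide panel spread over
# different horizontal fields of view (FOV numbers are illustrative).
def pixels_per_degree(h_pixels, fov_degrees):
    return h_pixels / fov_degrees

for fov in (30, 60, 90):
    print(f"{fov} deg FOV -> {pixels_per_degree(1280, fov):.1f} px/deg")
# Human acuity is often quoted around 60 px/deg, so the same panel gets
# visibly coarser the wider you stretch it.
```

Doubling the FOV with the same panel halves the pixels per degree, which is the tradeoff the comment above is pointing at.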
jjj - Friday, May 1, 2015 - link
Maybe using optics and a laser projector they could display a higher res in the center and lower res towards the sides. Laser doesn't need focus, so that would allow for such a trick to be implemented. Not sure how the human eye would react to it, or whether it would be a viable solution to widen the field of view, but maybe it's worth testing. I guess brightness uniformity would be a big problem too, so they would need to adjust it on the fly as the projector paints the image.

MrSpadge - Friday, May 1, 2015 - link
I don't think so. It would be dumb to let this thing render every pixel on the display. Most areas will be unchanged by HoloLens and simply show reality. They only need to generate whatever is overlaid. Tile-based renderers can handle this very well.

So I suspect it's more a question of the projection technology and fitting everything into a device that's not too bulky.
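The point about only rendering the overlay can be sketched as a toy dirty-tile pass. The tile size and screen resolution below are arbitrary, and real tile-based GPUs do this in hardware, but the savings are the same in spirit:

```python
# Toy version of tile-based overlay rendering: only tiles touched by an
# overlay rectangle need any GPU work; the rest stay transparent.
TILE = 32

def dirty_tiles(overlays):
    """overlays: iterable of (x, y, w, h) pixel rectangles."""
    tiles = set()
    for x, y, w, h in overlays:
        for ty in range(y // TILE, (y + h - 1) // TILE + 1):
            for tx in range(x // TILE, (x + w - 1) // TILE + 1):
                tiles.add((tx, ty))
    return tiles

total = (1280 // TILE) * (720 // TILE)
touched = dirty_tiles([(100, 100, 200, 150)])
print(f"render {len(touched)} of {total} tiles")  # 35 of 880
```

A single small AR window touches a few dozen tiles out of hundreds, so the GPU load scales with overlay content, not screen size.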
Frenetic Pony - Friday, May 1, 2015 - link
It's not, it's from resolution. Tiny displays like this (it sounds like they're using a projector type) can't get to very high resolutions relative to how close they are to your face. Oh, they're fine today for phones, but imagine holding a phone 2 inches from your face: even if you've got a 1440p display you'll still be seeing a screen-door effect.

By narrowing the field of view, you are essentially doing the same thing as shrinking the phone's display size in the above example, even though you're keeping the resolution the same. It's a tradeoff that currently has to be made between actually looking like anything beyond an '80s 240x180 display and having a field of view you can actually use.
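The 2-inch phone example works out roughly like this; the display width is an assumption for a typical 1440p phone, not any specific model:

```python
import math

# Angular resolution of a 2560-px-wide, ~5-inch-wide display held
# 2 inches from the eye (illustrative numbers).
def pixels_per_degree(px, size_in, distance_in):
    fov = 2 * math.degrees(math.atan((size_in / 2) / distance_in))
    return px / fov

ppd = pixels_per_degree(2560, 5.0, 2.0)
print(f"{ppd:.0f} px/deg")  # ~25 px/deg, far below the ~60 px/deg
                            # usually cited for human acuity
```

At that distance the panel spans over 100 degrees, so even 1440p falls well short of retinal density, which is exactly the screen-door effect described above.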
edzieba - Sunday, May 3, 2015 - link
It's almost certainly a physical constraint. Going by Oliver Kreylos' description of the HoloLens (http://doc-ok.org/?p=1223 and https://www.reddit.com/r/oculus/comments/34k7pn/re... ), it is using a pretty standard LCOS/DMD light modulator: point illuminator (colour LED) + lightguide (in this case the source of the 'holographic' terminology, due to the use of diffraction gratings rather than pure total internal reflection). These modules are common with industrial HMDs and HUDs, though generally tethered to a stationary external machine or a laptop-in-a-backpack, and with a more robust external tracking system like a Flock of Birds magnetic tracker or a multiple-camera tracking rig.

This small FoV is a fundamental limitation of compact microdisplay optics at the moment. A larger FoV will require fundamentally different optics, e.g. PinLight displays or metamaterial lenses. Microsoft will need to either wait for an optics module producer to start manufacturing displays using these new technologies (once they have matured) or actually take design and manufacture in-house to develop them.
edzieba - Sunday, May 3, 2015 - link
That second link should be: https://www.reddit.com/r/oculus/comments/34k7pn/re...

jjj - Friday, May 1, 2015 - link
Thanks for a more realistic take than what most of the press is going with. The first time they showed it, it was really hard to get the real info too. So it's a Google Glass for both eyes, with Project Tango and some interesting software.
It's way bulkier and for inside use only, so in practice it will be hard to find what to use it for unless the field of view gets wider. Pricing will be tricky too.
I really like that they are trying and they did some positive things, but, as it is, it will be hard to market. Hopefully it gets better and cheaper fast. It's also good that it forces Google and others to keep investing in glasses.
If they widen the field of view and add the capability for the cover "lens" to change opacity, it would be a much, much bigger deal, and hopefully doable at least for the second-gen hardware. It would be, I think, the first to unify VR and AR.
I was curious about the hardware too: what kind of projection tech, how many cams and so on.
jjj - Friday, May 1, 2015 - link
This also shows how Oculus might find itself years behind, because they went with delay after delay and external hardware. For FB every month of delay might end up costing them billions. Instead of having a huge user base and being the default by now, they might end up being far behind by next year and having to waste a few years trying to catch up.
For Google on the OS side it's a reminder that there is a new OS race and that not trying much harder to scale Android to all screen sizes was a huge mistake. Android not being a competitor on the PC side gives M$ a significant advantage.
Krysto - Thursday, May 7, 2015 - link
Oculus just announced shipping in Q1 2016.

uhuznaa - Friday, May 1, 2015 - link
Well, it's a bit like Google Glass in 3D that projects things into your field of view. Not too bad actually. And I guess it will have a better resolution than the 640x360 pixels of Glass. It just looks more like an actual product you could buy.

jjj - Friday, May 1, 2015 - link
Yeah, I said in my first post that it's like a dual Glass, but it is in no way more like an actual product; it's far worse from that point of view. Glass was a lot more discreet, while this is way bulkier and heavier, so it's pretty much indoors only.
Glass had a bunch of scenarios where it was useful; this has none, and making it costs a lot more.
Glass or Oculus can actually be sold in high volume; this one not at all, really.
Of course the final goal is to converge VR and AR, and from that point of view M$ is closer to it. But that's for future generations of hardware, in 1-2 years hopefully, while this version is just a cool toy nobody needs.
Alexvrb - Friday, May 1, 2015 - link
Glass is really nothing like this. Glass doesn't project AR images into your view (like adding virtual objects to a real environment, or floating AR windows). Glass doesn't watch your hand movements for manipulating said AR objects. Saying it's "dual Glass" is drivel.

jjj - Saturday, May 2, 2015 - link
That's Project Tango and Kinect added to it, and both were on track to be used in a consumer version. And it is just like Glass, but for both eyes and in the center of your line of sight. You might have read overexcited articles from other sources and gotten the wrong idea about what this is.
It can be great, but in a few years, not now.
steven75 - Friday, May 1, 2015 - link
I know it's not released yet, but this experience makes it sound like a typical Microsoft-style "over-promise and under-deliver" product.

Microsoft has a history filled with amazing fantasy vaporware products that demo or release with 10% of the promised capabilities. One would think that *eventually* they'd stop doing this to themselves.
Kracer - Friday, May 1, 2015 - link
No practical application has ever been devised (as far as I have heard) for this device. If the PR people can't make one up, I don't think one exists.

Brett Howse - Saturday, May 2, 2015 - link
I have to disagree there; I think there are a pile of great applications for this. The Trimble design demo that I did was an obvious use case for something like this. Like VR, there are plenty of education possibilities too.

Morawka - Friday, May 1, 2015 - link
Wow, everyone is quick to shit all over HoloLens once the first negative experience is blogged about. Calm down folks... this thing is over 6 months from release. Hell, Apple would only do demo loops of the Apple Watch (no touching, no interacting).
Give them time; this stuff is truly bleeding-edge tech.
thomasxstewart - Saturday, May 2, 2015 - link
The HoloLens article in the NYT isn't much about HoloLens, yet does a story on Microsoft. http://www.nytimes.com/2015/05/03/technology/micro... Worth a read. I did Microsoft research from 2004 to 2008, XP SP3 to 7/8. Taking clunky service pack to new high, then large form factor and developing small footprint, which used Dell into libraries, so Truth, so much. Now doing Media Theatre at U of Mn. And PE.

Not a pair of glasses with projection, more party favor and cool-for-school honors rebuff. Maybe this season 7 Dwarfs from Sanderson, put into glasses, could be an EZ way to get playwrights' work up to snuff. Private TS stuff gets public as Machines meet match. Paradigm Shift, New revolution on brink. Notice uptick of writer in Times and selling bit of Fab.
Drashek
Morawka - Saturday, May 2, 2015 - link
Ever heard of adjectives and conjecture?

microsofttech - Saturday, May 2, 2015 - link
It's still in development, which means that Microsoft is still working on it. Why are you people complaining about a device that is still being made?

Antony Newman - Saturday, May 2, 2015 - link
Brett, thanks for the insight.
Q1) Did you get a feel for whether there was noticeable latency?
Q2) Any thoughts on what resolution image you were seeing?
Q3) Do you think that colours looked rich and saturated?
Q4) Were you aware of a framerate?
Q5) Did they talk about using the HoloLens to SEE the image output from another device (like a camera recording an image)?
Thanks,
AJ
uhuznaa - Sunday, May 3, 2015 - link
This thing of course is a compromise that tries to circumvent some problems by creating and accepting others.

Let's look at a hypothetical "perfect" AR/VR setup:
You have two displays closely in front of your eyes. The displays have high resolution (like 4K or more for each eye), and instead of a big lens setup you have adjustable micro-lenses integrated with every single pixel. The display sits on a multi-layer die, and the other side actually is an image sensor with a similar setup: same resolution, micro-lenses on each pixel. The layer between these carries the CPU/GPU, RAM and storage. Each pixel in the sensor has a straight pathway to the display pixel (on the other side) it represents (with the GPU having its hands on all of them), so you can get a lag-free, real-time (but still digital and controllable) 3D display of what the sensors on the outside see. Object tracking and 3D scanning is done purely optically, via the spread of the sensors and a very precise measurement of the distance between them (maybe by fibre optics and lasers). The CPU and GPU can now render objects and UI elements freely into the 3D image your eyes see, or even totally replace the view with something else. Also, continuous scanning of the world before your eyes is used to create a full 3D map of everywhere you were, with a growing database of information about everything you ever saw.
From the outside this would look like an ordinary pair of (black) glasses (if you can figure out wireless power transmission). Or you could also integrate a display in between the sensor pixels on the outside, so you can display something there. Hey, Google: you could display ads here! ;-)
Of course, integrating image sensors on the inside would be nice too, so you could map (with infrared light) the eyeball of the user and automatically correct for all optical weaknesses of that wetware. 20/20 eyesight for everyone! Now that both sides are sensors and displays, you could make both sides the same (saves costs anyway). Now you can dial the display on the outside down to infrared: you can illuminate even darkness with infrared, "see" it and display it in visible light on the inside. 20/20 in darkness! Easy.
Then, shrink that design down further and finally do the very same gadget as contact lenses.
Compared to that, the "HoloLens" of course is a laughable crutch. It's a start, though, and I'm looking forward to it.
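One reason the per-pixel sensor-to-display pathways in this thought experiment matter: pushing the video through a conventional memory bus instead is expensive. Rough arithmetic, with all numbers assumed, for the 4K-per-eye passthrough described above:

```python
# Raw bandwidth of uncompressed camera passthrough for a hypothetical
# 4K-per-eye setup at 90 Hz with 3 bytes per pixel (all assumptions).
width, height, eyes, bytes_per_px, hz = 3840, 2160, 2, 3, 90
bandwidth = width * height * eyes * bytes_per_px * hz
print(f"{bandwidth / 1e9:.1f} GB/s")  # ~4.5 GB/s each way, before any
                                      # rendering work even starts
```

Several gigabytes per second of raw video is why a direct sensor-to-pixel path (rather than round-tripping through shared memory) is attractive in this design.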
DrChemist - Sunday, May 3, 2015 - link
People here continually talk about the possible hardware as if, like Oculus, it actually displays full screens across your view. Instead it is only displaying small portions of things within your view. It doesn't need high-end computing to do that. This is what augmented reality is. All the phones nowadays use these types of views with cameras and GPS. It doesn't require much more than a way to sense the depths and measurements of your surroundings. Stop trying to fit this into the box that others on the market already occupy and start thinking outside the box about what this will do. I am absolutely sure that the final product will be great, though it will not be exactly as clear and crisp as what they show. Much like Google Glass does, but with exponentially more use for things like games. I am absolute in the idea that this will destroy Google Glass.

meacupla - Monday, May 4, 2015 - link
I don't care what any one of you has to say about this device. I just want to play Minecraft on this thing.
Krysto - Thursday, May 7, 2015 - link
You'd be much better off with VR for a game like Minecraft. AR-like headsets are more for tabletop-style games.

andychow - Friday, May 8, 2015 - link
So true, especially on the hardware side. I knew this was bunk when they claimed "it literally processes terabytes of data every second from its sensors"; right away you know it's all BS.
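That skepticism holds up to arithmetic. Even with generous, assumed sensor specs (the camera counts and resolutions below are guesses, not HoloLens's actual loadout), the raw data rate lands orders of magnitude short of "terabytes per second":

```python
# Rough upper bound on raw sensor bandwidth for a headset like this,
# using assumed (generous) sensor specs.
def stream(w, h, bytes_per_px, fps):
    return w * h * bytes_per_px * fps

total = (
    4 * stream(1280, 720, 2, 30)   # four environment-tracking cameras
    + stream(640, 480, 2, 30)      # a depth camera
    + stream(1920, 1080, 3, 30)    # a photo/video camera
)
print(f"{total / 1e6:.0f} MB/s")  # hundreds of MB/s, not terabytes
```

Even doubling every figure here keeps the total under a gigabyte per second, roughly a thousand times below the claimed "terabytes".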