23 Comments
Stahn Aileron - Thursday, March 31, 2016 - link
Okay, I hate to do this because I like the site, but I have to now...
Is anyone on the editorial staff either:
A) A native English speaker or
B) An English major?
I've noticed over the years (beginning around the time Anand started getting hands-off with the site) that the writing quality has slowly decline. It's as if the articles are written and given a 30-second editor review before being posted.
Sentences run long and are awkward. In some case, they're actually hard to parse and understand without re-reading and breaking them up. I find myself re-reading phrases multiple times more often as time goes by. The writing isn't structured very well. I sometimes feel like I'm reading someone's speech pattern or un-edited thought process rather than a deliberate attempt at professional journalistic writing.
I love the site for its content. It's just getting harder to understand some of the said content without some amount of confusion.
Magius - Thursday, March 31, 2016 - link
A few suggestions:
beginning around the time => around the time
slowly decline => slowly declined
In some case => In some cases
hard to parse and understand => hard to comprehend
"re-reading phrases multiple times" => reading phrases multiple times OR re-reading phrases
"I love the site for its content. It's just getting harder to understand some of the said content without some amount of confusion." => "I love the site for its content. It's just getting harder to understand some of the recent articles without some amount of confusion."
Happens to the best of us. Your post is valid but still you might want to reconsider the initial portion, as it might be offensive to some. I do agree that the article leans towards a stream of thought style. Perhaps it is because of my continuous perusal of engineering write-ups that I do not mind as much. I have seen worse, much worse.
Stahn Aileron - Friday, April 1, 2016 - link
And I stand corrected. This is what happens when I kind of rush through and don't copy-edit my own work properly. (Guess that sort of proves the point ^_^) I've seen much worse as well. (Fairly recently, too, at school.) Ryan Smith was kind enough to reply. I've replied in turn below.
I do admit the start of my comments came off harsh. It was the product of seeing the issue for a couple of years. Ryan was still willing to address it though, thankfully. I thank both you and Ryan for the feedback.
nandnandnand - Thursday, March 31, 2016 - link
"Microft then said that all of the code that we used in the demo, and all of the code used on the demos last year, is all available on GitHub to allow devs quicker access to code."^ Perpetuating the problem by giving them free editing.
Ryan Smith - Thursday, March 31, 2016 - link
Yes, virtually the entire staff is native English speakers. As for English majors, I find that teaching them tech is harder than glaring at the technical staff until their English improves. ;-)
Anyhow, while we always strive for the highest quality, much of the time we're working on very short deadlines. This means that there isn't as much time for copy editing as we'd like. We do the best we can, but we have to strike a balance between speed and quality. A poor article is a poor article, and a late article is a late article; neither one is very useful.
Anyhow, I've gone ahead and reworked this article to something that you should find more enjoyable. And though I don't necessarily have the response you'd like to hear, I appreciate the feedback all the same.
Sunrise089 - Friday, April 1, 2016 - link
Ryan,
First, thank you, that was an above-and-beyond reply to the OP.
I always worry though about remarks like 'we can be either sloppy or late and both are bad' since they seem to represent a change in Anand's 'don't be cable news' motto. I feel like slightly late is absolutely better than sloppy, and furthermore that it's not a binary proposition, since you could also be quick+thorough on fewer topics for example.
@OP - One element of this is the site let the author tasked with editing other articles depart a while back. Presumably they thought it made sense to invest those resources instead in more content at potentially lower quality.
Ryan Smith - Friday, April 1, 2016 - link
"I always worry though about remarks like 'we can be either sloppy or late and both are bad' since they seem to represent a change in Anand's 'don't be cable news' motto."This is something we've always had to balance. In situations where we're crunched for time, when we do it right our articles are just clean enough to pass muster, and just soon enough not to be entirely too late. Otherwise if we have enough time, quality is always goal #1.
Stahn Aileron - Friday, April 1, 2016 - link
Ryan,
The direct reply is very appreciated. I understand the issue of deadlines and such. (I had one recently with school.) I'm guessing the staff is just getting spread too thin as of late. Thinking back on it, I do suppose AnandTech has expanded coverage. I guess it could be taking its toll on you guys.
I still like the content, so I'll be coming back for the foreseeable future. It is nice to see staff members listening to the audience. You have my gratitude.
Brett Howse - Saturday, April 2, 2016 - link
Just want to say thanks to Ryan for cleaning this up. I was working on little sleep at Build and trying to get this done before the keynote started on day 2 - which I didn't quite make, so it was finished while listening to the keynote. I was a bit distracted, and next time I'll make sure to give it a couple of read-throughs before posting.
CaedenV - Thursday, March 31, 2016 - link
The HoloLens really seems like the coolest tech in the AR/VR space, with the single exception being why on earth it is a tetherless experience. I get that it is neat... but why? And why on a 1st gen device?
Sure, nobody enjoys tethers. But the amount of processing available in my desktop, or even in my puny little ultrabook, is going to far outstrip what is available in this headset. And while the first few generations of this are going to be very niche products, I don't think tethers tangling with people walking around a room is a valid concern. Having a full desktop or laptop providing the power for this device would allow for much greater complexity, a much lighter headset, no worries about battery life, and the ability to have a much greater field of view (which they have hinted at being a processing/power/expense limitation).
I mean, perhaps there is a reason they want it to be an all-in-one unit... but if there is then they have not done a very good job at explaining it. Still, gen 3-4 of this tech a few years down the road with a wider field of view and longer battery life would be absolutely fantastic!
Reflex - Thursday, March 31, 2016 - link
With a tether the HoloLens loses most of its capability. If you think about its potential uses outside of gaming, if you have to carry a PC around with it, it basically becomes pointless. From a professional point of view, if I am an architect trying to show a client layout options in a building, I need to be able to put a headset on them, not force them to carry around a PC. If this gets used as a HUD in a car or other vehicle, again, I can't have it dependent on a PC.
One of its greatest features is its ability to have holograms follow you or stay pinned to a location; again, this feature is useless if it's a tethered device. At that point you may as well just go VR.
DPOverLord - Thursday, March 31, 2016 - link
What would make more sense is to have the option to use it like the Shield, where the GPU/CPU processing happens on your computer and it wirelessly transmits that power to the headset. Now I am not trying to say the above is EXACTLY what the Shield does. However, approaching this pitfall in this fashion could really fix a lot of problems. I see a time where the Internet is fast enough that most of our work won't even be done from our main computers. We'll have the option for that, or we can pay a "fee" and the majority of our work is done in the cloud and we can just "go".
Interesting sci-fi fantasy, or reality?
Murloc - Thursday, March 31, 2016 - link
That stuff is not sci-fi, but it's not established or mature even in traditional gaming (streaming from the main computer maybe works but is not used much at all; over the internet, I don't think so). Plus there's nothing stopping gen 2 of the HoloLens from supporting said technology as long as it has Wi-Fi, IF those ideas pan out, IF internet speeds grow.
In my experience video over Wi-Fi sucks BIG TIME even when you put a laptop very close to the transmitter, so I don't think we're going to see this any time soon.
Sushisamurai - Thursday, March 31, 2016 - link
Streaming wouldn't work. Playing Halo 5 streamed from my Xbox One to my PC via Ethernet still gives me 50ms of input lag. That's enough to give me a disadvantage. Imagine if you went to wireless streaming/processing and got 100-250ms of input lag. It would be far too noticeable and would be a terrible experience.
Murloc - Thursday, March 31, 2016 - link
If you're going to sit at a desk you might as well use VR with hand sensors or something so that your hands show up in the view.
This is useful for showing architectural renders to decision makers (finally, renders that aren't misleading?), showing instructions and visual cues to people who have to identify and repair stuff, and playing games in the real world with other people (which is what really differentiates it from VR in this sector), and thus it has to be untethered; otherwise it has no reason to exist versus VR sets.
I mean, you could shoot at holograms with holographic projectiles in the real world, but if you're tethered then it's a problem if you want to move around.
Also, if you start off tethered, the software developed will become useless in the first untethered generation due to the drop in processing power.
Zizy - Friday, April 1, 2016 - link
Tethered to a desk is quite useless - you might as well have VR or even a simple 3D screen. The point of AR is to have holograms at least appear in the real world, if not even interact with it (obviously one-way - you could throw that virtual ball, but the virtual ball couldn't break your face. Yet).
Tethered to the cloud wouldn't work far too often. Latency kills you even if the bandwidth is fine.
Tethered to your existing laptop wouldn't work, as that laptop doesn't have an HPU to generate those holograms and probably isn't powerful enough without that bit.
A mixed setup - laptop plus the HPU on the glasses - would work, but you probably wouldn't gain a lot; a lot of the processing would still be done by the headset.
But I could see a "backpack" tethered version, i.e. the headset has just the I/O parts and ALL the processing (as well as the batteries) sits in a special unit in the backpack. So, not all that much different from the laptop version, just with a specialized laptop that couldn't be used for other purposes.
The tradeoff is mainly the need to carry a backpack in return for FOV and battery. This would be an excellent tradeoff in many circumstances, but a completely pointless one in most others. The battery part is mostly fixed even without the backpack if you can save your current setting, load it on another headset, and don't mind a brief interruption in the experience.
Where the backpack tradeoff makes sense is mainly games - those require FOV and detail, as well as the need/desire for several hours of uninterrupted fun. But why bother making this special $3k device for an audience served by VR stuff that is five times cheaper?
But it just isn't needed for most of that boring real life. You don't need a better device to explore the human body, make a Skype call, see or design parts of a car, see how the new house will fit among the others, or select which color would be better for the walls or where to put the kitchen units, etc.
MrSpadge - Tuesday, April 19, 2016 - link
Tether it to a mobile phone, so that the hardware power in that thing can finally be put to some good use (and relieve battery drain on the headset). (I'm not really implying mobile phones would never be used properly. But those relatively powerful GPUs are mostly underutilized for sure.)
Murloc - Thursday, March 31, 2016 - link
Is the FOV limited in the sense that you see black in your peripheral vision, or just that the holograms don't show up in the peripheral vision?
bji - Thursday, March 31, 2016 - link
It's the latter. The holograms are cropped to a small region in the center of your field of view. Everything outside of that small region is clear plastic that does not impede your vision.
bji - Thursday, March 31, 2016 - link
"The field of view issue is still very small, and clearly not something they were not able to address before they shipped to developers"That is a very awkward, confusing, and just plain incorrect sentence!
extide - Thursday, March 31, 2016 - link
How is it incorrect? They said they shipped yesterday, so the hardware they demo'd was the same as shipped to developers, which means it has a limited field of view just like the demo hardware.bji - Thursday, March 31, 2016 - link
The field of view ISSUE is not "still very small". It's the field of view that is "still very small".
Also, it's clearly NOT something they were able to address, not clearly NOT something they were NOT able to address.
ABR - Friday, April 1, 2016 - link
Cool stuff. It does sound and seem like Microsoft is on the Oculus schedule with this though - i.e., it's still got a few years to go before being consumer-ready.