Let’s go to the “Metaverse”
Reposted from ww.linkedin.com/posts/lindaricci_metaverse-concept-digitalexperiences-activity-6917537794756067329-WT5N
#virtualreality #augmentedreality #VR #AR #web3 #mytwocents #opinion
Heard a really sweet story today from Donna Z. Davis, Ph.D. at AWE (Augmented World Expo): she told the audience about an elderly woman with Parkinson’s who regularly “meets” her tuxedo-clad, physically distant son (as an avatar) in VR to go dancing with him. *That’s* the power of VR for me. Not the whiz-bang, isn’t-it-cool game stuff – it’s the human element, and how much better it can make people’s lives. On the We Get Real AF Podcast (airing in June) I was asked what I thought the ultimate benefit of VR would be. My answer, without missing a beat, was “The end of loneliness.” And I really believe that.
#VR #loneliness #VRforhumanity #immersivetech #wegetrealaf #spatialcomputing #AWE2020 #Decahedralist
Fascinating stuff. And, whoa. The inevitable march toward brain-computer interfaces continues! “Researchers from Russian corporation Neurobotics and the Moscow Institute of Physics and Technology have found a way to visualize a person’s brain activity as actual images mimicking what they observe in real time.”
We are rapidly moving from keyboard and mouse input – which, although we’ve done it so long that it *seems* natural, is not – to spatial input; this is truly an astounding leap toward natural computing.
I applaud the application this particular work is aimed at (helping post-stroke patients with rehabilitation devices controlled by brain signals), but imagine a world where we don’t have to interact with technology – and each other – through screens!
One of the many challenges is that although there is a standard model of brain architecture, everyone has their own variation, so no single template can be applied. No doubt there will be a “training” period for the interface. But once “trained,” our personal brain reader should be able to function across all interfaces – unless, of course, Apple and Microsoft put up the usual walled-garden model (a personal gripe, also true of VR headsets: this game only works with this system, etc.).
But inevitably, the early-stage development pays off, enough people adopt, the convoluted hoops early adopters must jump through get ironed out, and mass adoption takes off. And while I realize that a true brain-computer interface is a long way off, I’m heartened by all the work I’ve seen from teams like this (CTRL-Labs in particular – interestingly, just bought by Facebook). I hope it will improve quality of life both for patients with limitations and in mundane everyday life.
https://techxplore.com/news/2019-10-neural-network-reconstructs-human-thoughts.html