In context: With all the hype surrounding the company formerly known as Facebook, which renamed itself Meta last week, it’s no surprise that other big tech industry players are starting to talk about their own visions of what the latest incarnations of virtual reality (VR) and augmented reality (AR), collectively dubbed the metaverse, can bring. What is somewhat unexpected is how different those perspectives are already turning out to be.
At Microsoft’s Ignite conference this week and in pre-briefings for next week’s Nvidia GTC conference, both companies demonstrated a more practical, business-focused approach to the metaverse than the consumer-centric version that Mark Zuckerberg and his team unveiled.
While Meta’s version of the metaverse has focused on things like the ability to carry digital skins acquired in one gaming environment into another, both Microsoft and Nvidia have focused on team collaboration and cross-company communication.
There are many other differences among these visions of the metaverse as well. Those differences reveal the intentions of the various organizations, the means by which they plan to bring products to market, and much more. Microsoft, for example, will integrate its Mesh AR technology into Teams, bringing a different kind of “view” into meetings for all Teams users.
Cartoon-style avatars let participants still offer non-verbal communication cues, such as facial expressions and gestures, without needing to be on camera constantly. Of course, it remains to be seen whether this will actually relieve the fatigue associated with constant video calls.
The Mesh and Teams integration also extends to activities such as sharing a whiteboard across different physical locations and other forms of interaction designed to make participants feel more as if they are in the same physical environment. Given the expected persistence of hybrid work models, even after employees start returning to offices in large numbers, it’s easy to imagine how some of these capabilities could prove practically useful.
In Nvidia’s case, the company has been talking for several years now about its Omniverse platform, which provides a high-quality 3D environment designed for development, content creation, and collaboration.
Omniverse Enterprise is designed for collaboratively creating graphically realistic simulations of real devices and systems, making it useful for everything from designing the latest cars to watching AI-driven versions of those cars navigate simulated environments. In fact, Omniverse provides the foundation for Nvidia’s Drive Sim autonomous driving efforts as well as its Isaac Sim robotics simulation platform.
Omniverse Create is a tool that uses the company’s RTX graphics technology to compose complex scenes and virtual worlds with photorealistic detail, and then share them via Pixar’s USD (Universal Scene Description) format. (Ironically, though we’ll never know for sure, perhaps Meta even used Omniverse Create to build its own fantastical metaverse “worlds.”)
Another interesting aspect of Nvidia’s approach is its partner orientation. Just as Microsoft is doing with Teams, Nvidia continues to position Omniverse as an extensible platform that other software companies can build on, including well-known graphics ISVs such as Adobe and Autodesk. In addition, Nvidia is focused on working with a variety of hardware partners to deliver systems that can serve as the backbone of its Omniverse vision. The company is also working with reseller partners to bring 3D collaboration capabilities to businesses.
All of this highlights another important difference among Meta’s, Microsoft’s, and Nvidia’s offerings: the scale of their ambitions. While Zuckerberg and Meta’s other leaders repeatedly admitted that most of the innovations needed to achieve the extraordinarily impressive graphical vision of their metaverse are still several years away, they also implied that it would become a mainstream option for everyone. In a sense, it sounded like a real-world implementation of the Oasis metaverse from the book and movie Ready Player One, one that practically everyone who currently uses something like the Facebook app would be expected to use regularly.
Microsoft’s and Nvidia’s concepts, on the other hand, seem much more focused on specific enterprise environments. Microsoft’s Mesh for Teams is theoretically designed to work with most types of business meetings, but the fact that it requires dedicated AR/VR headsets (at least to power the avatars) limits its reach. Nvidia’s offering is even more specialized, targeting engineers, designers, and other creative professionals who work with 3D models and virtual worlds. That is certainly an important group, but not an enormous one.
Though more limited in scope, these approaches are, in practice, also more pragmatic than Meta’s, not only in terms of potential numbers but also in terms of likely acceptance. While I certainly expect significant generational differences in preferences, I still think it’s fair to say that most people don’t really want to spend large amounts of time in a Ready Player One-style metaverse, especially given the current state of VR and AR hardware.
While movie-like CGI graphics and fantastical environments are certainly visually appealing to watch, most people don’t want to view them all the time. Also, let’s not forget what goes into creating those visuals. Have you ever seen what actors in sci-fi and effects-heavy films have to go through to get their scenes shot? Surrounding yourself with green screens is not an experience most people will want to repeat more than a few times, and I certainly don’t see how that would translate to home environments, or even most office environments. Using a setup like that several times a day doesn’t seem practical at all.
Finally, we need to think about the privacy implications of technology that could ultimately track every environment in which we live, work, and play, every person we interact with, and everything we do. There is a great deal to think through here.
Part of the problem stems from the limitations of today’s hardware, an issue that will ease over time. But even when those restrictions are lifted, there is still something deeply isolating about purely virtual interactions. Nearly two years into the pandemic-driven changes in our work environments, most of us feel that impact, despite better cameras and near-continuous video interactions. People like to interact with real people in real time, in real life, and we are very far from any technology that can replace that.
In fact, I can’t help but note the irony of the timing of all these announcements. By the time most of these offerings become available next year, many people will likely be back in the office, at least on a semi-regular basis. As a result, the perceived need for these new types of interactions may not be as compelling then as it is now.
To be clear, meeting-based interactions and collaboration with remote colleagues, whether across the world or just across the street, will remain critically important. As a result, even though they are arguably the least flashy of these new technologies, collaboration-focused developments will undoubtedly continue to advance and will likely have the biggest impact, certainly over the next five years.
That’s why, in the end, while it’s certainly fun to think about the intriguing possibilities that technologies and concepts like the metaverse can open up, it’s the practical applications that are likely to succeed in the long run.
Bob O’Donnell is the founder and chief analyst of TECHnalysis Research, LLC, a technology consulting firm that provides strategic consulting and market research services to the technology industry and the professional financial community. You can follow him on Twitter @bobodtech.