By KIM BELLARD
Google received a lot of (deserved) publicity for Project Starline, announced at last week’s I/O conference. Project Starline is a new 3D video chat capability that promises to make your Zoom experience look even more tedious by comparison. It’s nice, but I expect a lot more from holograms – or even better technologies. Fortunately, there are several such candidates.
For all those who have been excited about the advances in telehealth: you ain’t seen nothing yet.
If you missed Google’s announcement, Project Starline was described as follows:
Imagine looking through a sort of magic window, and through that window, you see another person, life-size and in three dimensions. You can talk naturally, gesture and make eye contact.
Google says, “We believe this is where person-to-person communication technology can and should go,” because, “The effect is the feeling of a person right in front of you, as if they were really there.”
Sounds pretty cool. The thing is, though, you’re still looking at images through a screen. Google can call it a “magic window” if it wants, but there’s always a screen between you and what you see.
Not so with Optical Trap Displays (OTDs). These were pioneered by the BYU holography research group three years ago, and, in their latest advance, they created – what else? – floating lightsabers that emit actual beams of light:
Optical trap displays are not, strictly speaking, holograms. They use a laser beam to trap a particle in the air and then push it around, leaving a bright, floating path. As the researchers describe it, it’s like “a 3D printer for light.”
The authors explain:
The particle traces each point of the image several times per second, creating an image through persistence of vision. The higher the system’s resolution and refresh rate, the more convincing this effect becomes: the user will not be able to perceive the updates of the image shown to them, and at sufficient resolution will have difficulty distinguishing display points from real-world points.
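To get a feel for what persistence of vision demands of that single trapped particle, here is a back-of-the-envelope sketch. It is my own illustration, not from the BYU paper; the path length and refresh rate are made-up example numbers.

```python
# Hypothetical estimate (not from the paper): how fast must the trapped
# particle travel to retrace the entire image path once per refresh,
# so persistence of vision fuses the path into a steady image?

def required_particle_speed(path_length_m: float, refresh_hz: float) -> float:
    """Speed (m/s) needed to retrace the full drawing path once per refresh."""
    return path_length_m * refresh_hz

# Example: a small figure whose total drawing path is ~1 m,
# redrawn 10 times per second (a rough persistence-of-vision threshold).
speed = required_particle_speed(path_length_m=1.0, refresh_hz=10.0)
print(f"{speed:.1f} m/s")  # → 10.0 m/s
```

The point of the arithmetic: a higher refresh rate or a more detailed (longer) drawing path multiplies the speed the laser trap must sustain, which is why resolution and refresh rate trade off against each other in this kind of display.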
Principal Investigator Dan Smalley notes:
Most 3D displays require you to look at a screen, but our technology allows us to create images floating in space – and they’re physical, not some mirage. This technology can allow you to create vibrant animated content that orbits around, crawls on, or explodes out of everyday physical objects.
Co-author Wesley Rogers adds: “We can play some fancy tricks with motion parallax and we can make the display look much bigger than it physically is. This methodology would allow us to create the illusion of a much deeper display, up to theoretically an infinite-size display.”
In fact, their paper in Nature speculates: “This result leads us to contemplate the possibility of immersive OTD environments that include not only real images capable of wrapping around physical objects (or the user themselves), but that also provide simulated virtual windows into expansive outdoor spaces.”
I don’t know what that means at all, but it sounds very impressive.
The BYU researchers believe: “Unlike OTDs, holograms are extremely computationally intensive and their computational complexity scales rapidly with display size. Neither is true for OTD displays.” They need to meet Liang Shi, a Ph.D. student at MIT leading a team developing “tensor holography.”
Before anyone with math phobia gets scared off by “tensor,” let’s just say it’s a way to produce holograms almost instantly.
The work was published in Nature last March. The technique uses deep neural networks to generate 3D holograms in almost real time. I’ll skip the technical details of how it all works, but you can watch their video:
Their approach does not require supercomputers or lengthy calculations; instead, it lets neural networks teach themselves to generate holograms. Amazingly, the “compact tensor network” requires less than 1 MB of memory. Images can be computed from a multi-camera setup or a LiDAR sensor, which are becoming standard in smartphones.
“People previously thought that with existing consumer-grade hardware, it was impossible to do real-time 3D holography computations,” says Shi.
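To see why “less than 1 MB” is plausible for a convolutional network, here is a quick parameter-budget sketch. The layer sizes below are my own illustrative guesses, not the MIT team’s actual architecture; the only claim it demonstrates is that a modest stack of small convolutions easily fits in the stated memory budget.

```python
# Hedged sketch: parameter budget for a small convolutional network.
# The layer widths and depth here are hypothetical, chosen only to show
# that such a model can come in well under 1 MB of float32 weights.

def conv_params(in_ch: int, out_ch: int, k: int) -> int:
    """Weights plus biases for one k x k convolutional layer."""
    return in_ch * out_ch * k * k + out_ch

# Invented stack: 4-channel input (e.g., RGB plus depth), six hidden
# layers of 24 channels, 2-channel output (e.g., amplitude and phase).
layers = [(4, 24, 3)] + [(24, 24, 3)] * 6 + [(24, 2, 3)]
total = sum(conv_params(i, o, k) for i, o, k in layers)
size_mb = total * 4 / 1e6  # 4 bytes per float32 weight
print(f"{total} parameters, about {size_mb:.2f} MB")
```

Run it and the whole network weighs in at roughly a tenth of a megabyte – small enough, in principle, for the consumer hardware Shi mentions.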
Joel Kollin, a Microsoft researcher who was not involved in the research, told MIT News that the research “shows that true 3D holographic displays are practical with only moderate computational requirements.”
All of these efforts already have their eyes on health care. Google is currently testing Project Starline in a few of its offices, but it is making a major bet on its future. It explicitly chose health care as one of the first industries to work with, with the goal of testing demonstrations later this year.
The BYU researchers see medicine as a good use for OTDs, helping doctors plan complicated surgeries: “A high-resolution MRI paired with an optical trap display could show, in three dimensions, the specific tissues they are likely to encounter. Like a real-life game of Operation, surgical teams could plan how to navigate delicate aspects of their upcoming procedures.”
MIT researchers believe the approach offers a lot of promise for VR, 3D volumetric imaging, microscopy, medical data visualization, and surface design with unique optical properties.
If you don’t know what “3D volumetric printing” is (and I didn’t), it has been described as like an MRI in reverse: “the shape of the object is projected to form the model instead of scanning the object.” It could revolutionize 3D printing, and, for health care specifically, “Being able to 3D print from all spatial dimensions at the same time could be instrumental in the production of complex organs … This will allow for better and more functional vascularity and multi-cell-material structures.”
As for “viewing medical data,” surgeons at Ohio State University’s Wexner Medical Center, for example, are already using “mixed reality 3D holograms” to assist in shoulder surgery. Holograms have also been used for cardiac, liver, and spine surgeries, among others, as well as in imaging.
2020 was, in essence, a coming-out party for video conferencing in general and for telehealth in particular. The capabilities had been around, but it was only when we were locked down and reluctant to be around others that we began to explore their possibilities. Still, we should think of that as version 1.0.
Versions 2.0 and beyond will be more realistic, more interactive, and less constrained by screens. They could use holograms, tensor holograms, optical trap displays, or other technologies we’re not yet aware of. I just hope it doesn’t take another pandemic for us to realize their potential.
Kim is a former emarketing executive at a major Blues platform, editor of the late & lamented Tincture.io, and now a regular contributor to THCB.