Lumafuse

My first passthrough prototype with the Quest Pro, intended to explore what a physically interactive and eye-tracked virtual desktop app could look like.

00

problem

Expanding digital workspaces is often limited by the cost, inflexibility, and lack of portability of physical monitors, hindering multi-tasking and productivity.

solution

Lumafuse leverages the Quest Pro's passthrough mixed reality to seamlessly augment physical displays with weightless, configurable virtual monitors streamed from your desktop, offering expanded, portable productivity.

Background

For quite some time I've wished that I could easily extend my physical monitors by augmenting virtual monitors alongside them - with my high-end desktop powering them. Not only would this allow me to add, subtract, or move weightless monitors at my whim while still being able to use my beloved mechanical keyboard and ergonomic mouse... but it would also mean that I could take these weightless monitors with me on the go, unlocking a new lightweight productivity powerhouse.

I knew that this was the first concept I wanted to prototype the day I received the Meta Quest Pro, and that's exactly what I set out to do. I landed on the name Lumafuse because the app would be fusing (fuse) illuminated (luma) displays with reality.

The Display Controller

Ultimately I wanted Lumafuse to revolve around being as physically interactive as possible to help ground the interface in reality. If you've read/watched my Empowering Ecosystems With Spatial Computing article/prototype, you'll likely know that I've been a huge proponent of creating spatial computing interfaces that complement reality rather than distract from it.

To achieve this I ended up creating a physics-driven controller operated entirely through mechanical buttons and analog thumbsticks, all manipulated with hand tracking. This controller's sole purpose in this concept is to give the user an intuitive way to modify their virtual displays' position, rotation, scale, and, down the road, various other settings.
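As a rough illustration of how one of these physics-driven buttons might report a press in Unreal Engine C++, here is a minimal sketch. The component and property names (ULumafuseButtonComponent, PressDepthCm, and so on) are assumptions for the example, not code from the actual project; the component simply watches how far the physically simulated button cap has been pushed along its local axis by the tracked hands.

```cpp
#pragma once

#include "CoreMinimal.h"
#include "Components/ActorComponent.h"
#include "Components/StaticMeshComponent.h"
#include "LumafuseButtonComponent.generated.h"

DECLARE_DYNAMIC_MULTICAST_DELEGATE(FOnButtonPressed);

UCLASS(ClassGroup=(Custom), meta=(BlueprintSpawnableComponent))
class ULumafuseButtonComponent : public UActorComponent
{
	GENERATED_BODY()

public:
	ULumafuseButtonComponent() { PrimaryComponentTick.bCanEverTick = true; }

	// The physics-simulated button cap mesh that the hands push down.
	UPROPERTY(EditAnywhere)
	UStaticMeshComponent* ButtonCap = nullptr;

	// Local-space travel (in cm) required to count as a press.
	UPROPERTY(EditAnywhere)
	float PressDepthCm = 0.4f;

	UPROPERTY(BlueprintAssignable)
	FOnButtonPressed OnButtonPressed;

	virtual void BeginPlay() override
	{
		Super::BeginPlay();
		if (ButtonCap)
		{
			RestHeight = ButtonCap->GetRelativeLocation().Z;
		}
	}

	virtual void TickComponent(float DeltaTime, ELevelTick TickType,
	                           FActorComponentTickFunction* ThisTickFunction) override
	{
		Super::TickComponent(DeltaTime, TickType, ThisTickFunction);
		if (!ButtonCap) { return; }

		// How far the cap has been pushed down from its rest position.
		const float Travel = RestHeight - ButtonCap->GetRelativeLocation().Z;
		const bool bPressed = Travel >= PressDepthCm;

		// Fire once per press, on the frame the threshold is crossed.
		if (bPressed && !bWasPressed)
		{
			OnButtonPressed.Broadcast();
		}
		bWasPressed = bPressed;
	}

private:
	float RestHeight = 0.f;
	bool bWasPressed = false;
};
```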

Foveated Compression

Thanks to the Quest Pro's built-in low-latency eye tracking, I thought of a (hopefully) novel concept based on the now well-known foveated rendering technique: instead of using eye tracking to increase 3D rendering performance, why not apply the same logic to reduce streaming bandwidth?
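To make the bandwidth idea concrete, here is a purely illustrative sketch of how an encoder's quality setting could be budgeted per screen tile based on its distance from the gaze point. The tile bands and quality values are invented for the example, and this is not the shelved protocol itself.

```cpp
#include <cmath>

struct Vec2 { float x, y; };

// Distance in normalized UV space between a tile centre and the gaze point.
float Eccentricity(const Vec2& tileCenterUV, const Vec2& gazeUV)
{
    const float dx = tileCenterUV.x - gazeUV.x;
    const float dy = tileCenterUV.y - gazeUV.y;
    return std::sqrt(dx * dx + dy * dy);
}

// Spend most of the bit budget where the user is actually looking,
// and progressively less towards the periphery.
int TileQuality(float eccentricity)
{
    if (eccentricity < 0.10f) { return 90; }  // foveal region: near-lossless
    if (eccentricity < 0.25f) { return 60; }  // parafoveal: moderate compression
    return 30;                                // periphery: aggressive compression
}
```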

Although I did take a stab at a custom C++ pixel streaming protocol, I ended up shelving the implementation because it was taking far too much time and quickly escaping what I had originally set out to create with this project. However, I knew I could still build the first half of the equation: a pixelization shader applied to the render target that would act as the foundation for the compression algorithm further down the pipeline.
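The pixelization effect itself lived in Unreal as a render target shader; the sketch below only mirrors the core math on the CPU, assuming a gaze point in UV space and example tier thresholds and block sizes. The further a pixel is from the gaze, the coarser the grid its UVs get snapped to, which gives a downstream encoder large uniform blocks to compress.

```cpp
#include <cmath>

struct UV { float u, v; };

// Snap UVs to progressively coarser blocks the further they are from the gaze.
UV FoveatedPixelate(UV uv, UV gaze)
{
    const float du = uv.u - gaze.u;
    const float dv = uv.v - gaze.v;
    const float dist = std::sqrt(du * du + dv * dv);

    // Pick a block size (in texels) per tier; thresholds are illustrative.
    float blockTexels = 1.0f;                   // full resolution around the fovea
    if (dist > 0.25f)      { blockTexels = 8.0f; }
    else if (dist > 0.10f) { blockTexels = 4.0f; }

    const float texelsU = 1920.0f, texelsV = 1080.0f; // assumed render target size
    UV out;
    out.u = std::floor(uv.u * texelsU / blockTexels) * blockTexels / texelsU;
    out.v = std::floor(uv.v * texelsV / blockTexels) * blockTexels / texelsV;
    return out;
}
```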

This was a really fun technical challenge, and at the end of it I found that exporting individual render target frames as JPGs with the foveated compression technique applied reduced file size by roughly 15% - 35%, while the effect remained very subtle in-headset.

When I finished the above implementation, I had the idea of trying a dynamic gaussian blur effect rather than a tiered low-res pixelation effect. My thinking was that it would be "smoother" and therefore more difficult to notice in your periphery. Although it was cool to see it working in-headset, this is definitely one of the times when the idea was much better in theory than in practice. The blurring was actually far more distracting and actively made the viewing experience worse.
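For reference, the blur variant amounted to scaling a gaussian sigma smoothly with eccentricity rather than stepping between pixelation tiers. A tiny sketch of that falloff, with made-up constants:

```cpp
#include <algorithm>
#include <cmath>

// Blur strength (in texels) as a smooth function of UV distance from the gaze.
float BlurSigmaForEccentricity(float dist)
{
    const float fovealRadius = 0.08f;   // no blur inside this radius
    const float maxSigma     = 6.0f;    // strongest blur at the far periphery
    const float t = std::clamp((dist - fovealRadius) / (1.0f - fovealRadius), 0.0f, 1.0f);
    return maxSigma * t * t;            // ease in so the falloff stays gradual
}
```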

The Physical Hands

In contrast to my previous Empowering Ecosystems With Spatial Computing prototype, this time I figured out how to create nearly 100% physics-driven hand-tracked hands. I say "nearly" because each hand as a whole is a physically simulated object, while the joints on each hand are not - but I don't think this application really needs such an advanced implementation anyway.
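One common way to drive a single simulated rigid body toward a tracked pose is velocity tracking: each frame, set the body's linear and angular velocity so it closes the gap to the tracked hand. The sketch below shows that approach in Unreal C++; it's an assumption about one reasonable implementation, not code from the project.

```cpp
#include "Components/PrimitiveComponent.h"

// Drive a physically simulated hand body toward the tracked hand pose.
void DriveHandTowardTrackedPose(UPrimitiveComponent* HandBody,
                                const FTransform& TrackedPose,
                                float DeltaTime)
{
	if (!HandBody || DeltaTime <= 0.f)
	{
		return;
	}

	// Linear: a velocity that closes the positional gap within one frame.
	const FVector ToTarget = TrackedPose.GetLocation() - HandBody->GetComponentLocation();
	HandBody->SetPhysicsLinearVelocity(ToTarget / DeltaTime);

	// Angular: shortest-arc rotation to the target, converted to an angular velocity.
	FQuat DeltaRot = TrackedPose.GetRotation() * HandBody->GetComponentQuat().Inverse();
	DeltaRot.EnforceShortestArcWith(FQuat::Identity);
	FVector Axis;
	float AngleRad;
	DeltaRot.ToAxisAndAngle(Axis, AngleRad);
	HandBody->SetPhysicsAngularVelocityInRadians(Axis * (AngleRad / DeltaTime));
}
```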

I felt it was necessary to create physics hands because they need to interact so frequently with other physically simulated interface objects, such as the Display Controller and the displays themselves.

I found that since the entire application uses passthrough mixed reality, you don't want to see these physically simulated hands unless it's absolutely necessary. In this case I deemed it necessary: the hands fade into existence when the simulated and tracked hands deviate too far from one another, giving the user a better understanding of how their virtual hands are interacting with other simulated objects.
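A minimal sketch of that fade behaviour, assuming the hand mesh uses a dynamic material instance with a scalar "Opacity" parameter; the fade distances are made up for the example.

```cpp
#include "Materials/MaterialInstanceDynamic.h"

// Map the gap between the simulated and tracked hand to a 0-1 opacity.
float OpacityForDivergence(float DivergenceCm)
{
	const float FadeStartCm = 2.f;   // fully invisible while the hands agree
	const float FadeEndCm = 10.f;    // fully visible once they drift this far apart
	return FMath::Clamp((DivergenceCm - FadeStartCm) / (FadeEndCm - FadeStartCm), 0.f, 1.f);
}

void UpdateHandVisibility(UMaterialInstanceDynamic* HandMID,
                          const FVector& SimulatedHandLocation,
                          const FVector& TrackedHandLocation)
{
	if (!HandMID)
	{
		return;
	}

	// Distance between where physics put the hand and where tracking says it is.
	const float DivergenceCm = FVector::Dist(SimulatedHandLocation, TrackedHandLocation);
	HandMID->SetScalarParameterValue(TEXT("Opacity"), OpacityForDivergence(DivergenceCm));
}
```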

year

2022

timeframe

1 month

tools

Unreal Engine, C++, Meta Quest Pro, 3ds Max

category

Personal Project

01

The Lumafuse Display Controller
