The best spatial computer could be the one in your pocket
XR is seeing a shift toward offloading components to external devices. Someday, could we pair HMDs with smartphones for a fully featured spatial UX?

When I bought my first pair of AR glasses, I thought I was holding the future of mobile XR in my hands. It was March 2023, and I was quickly becoming heavily invested in XR, thanks to picking up a Meta Quest 2 like so many other folks stuck inside during lockdown. I was eagerly anticipating the Quest 3 that year, and rumors were swirling about Google’s and Apple’s first forays into VR.
Still in the honeymoon phase with my Nreal Air glasses (later known as the Xreal Air after the company rebranded), I thought the path forward was obvious for legacy mobile platform holders: AR glasses should piggyback off the computers we already carry around in our pockets (i.e. smartphones).
My primary use case with the Nreal Airs was basic: plugging them into my Galaxy phone to run Samsung’s DeX mode, which rendered a desktop-like experience on a 0DoF virtual display. Xreal’s Nebula app showed off a more spatialized 3DoF AR ecosystem, though it was effectively a walled garden within Android, unable to fully leverage all the existing apps and services on my phone.
The solution for a more fully featured spatialized experience seemed obvious: more top-down support from platform holders. Surely Google and Apple could deliver a native XR user experience on their own platforms that better integrates users’ apps and services.
Instead, Google’s and Apple’s first releases in the modern XR market aligned closely with existing hardware products. They both put out entire spatial computers you wear on your face. Arguably the biggest innovation from a hardware perspective, besides bleeding-edge optics and proportionally eye bleed-inducing prices, was offloading the battery to a tethered puck.
However, not long after Samsung Galaxy XR hit the market as the first Android XR consumer product, Google used an XR edition of the Android Show to announce another hardware partner’s first Android XR device, one that had been waiting in the wings: Xreal’s Project Aura.
Project Aura uses an optical stack not unlike that of Xreal’s One line of AR display glasses — specifically, an iteration of the prismatic lenses in the more premium One Pro model, but delivering a 70-degree field of view over the One Pro’s 57 degrees. Like Xreal’s other glasses, Project Aura connects to its main compute and battery via a cable. Unlike those earlier models, it comes with a dedicated Android XR compute puck powered by Qualcomm’s Snapdragon XR2+ Gen 2 chip — the same SoC featured in Samsung’s Galaxy XR.
My initial reaction was that this was what I’d been wanting from Xreal for years: a full spatial OS running on AR display glasses. You can get a similar experience today using Xreal’s Beam Pro compute puck, which likewise runs Android, overlaid with Xreal’s latest custom Nebula UI. But surely Android XR, from the purveyors of Android itself, will be a significant leap in functionality and app support.
Sure, Project Aura’s 70-degree FOV makes it inherently more limited than its Galaxy XR cousin or other mixed reality headsets, which boast FOVs closer to 110 degrees. To Google’s credit, the Android Show featured what looked like a pretty realistic illustration of how Android XR would look on Project Aura’s limited FOV.
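To put those numbers in rough perspective, here’s a back-of-the-envelope sketch. It’s my own illustrative math, not anything published by Google or Xreal: simple flat-plane geometry, an assumed two-meter virtual screen distance, and horizontal FOV only, ignoring resolution and vertical extent.

```kotlin
import kotlin.math.PI
import kotlin.math.tan

// Back-of-the-envelope: how wide a flat virtual canvas can each FOV span
// at a given viewing distance? width = 2 * distance * tan(fov / 2)
fun apparentWidthMeters(fovDegrees: Double, distanceMeters: Double): Double =
    2 * distanceMeters * tan(fovDegrees / 2 * PI / 180)

fun main() {
    val distance = 2.0 // assumed 2 m virtual screen distance, purely illustrative
    for (fov in listOf(57.0, 70.0, 110.0)) {
        println("%.0f deg FOV -> ~%.1f m wide canvas at %.1f m"
            .format(fov, apparentWidthMeters(fov, distance), distance))
    }
}
```

By that crude measure, 70 degrees buys roughly 30 percent more horizontal canvas than 57 degrees, while a 110-degree headset still covers about twice as much again.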

Folks wanting the extra real estate of a headset might still be better served with Galaxy XR, provided they can stomach the ergonomics. But we’ve seen how limited adoption has been for Samsung’s and Apple’s uncomfortable, expensive headsets. It may be that a smaller form factor is exactly what spatial computing needs to win over more converts, even if it comes with some inherent compromises in FOV and use cases like fully immersive VR.
So while I am excited for Project Aura and eager to see how it’ll perform in the market, why do I feel a twinge of hesitation?
Part of it is cost. I know I’m an edge case, but I’ve already upgraded to a new pair of Xreal glasses once this year with the Xreal 1S, which replaces the 2025 Xreal One as the company’s new entry-level device. I also hope to put in my order for Valve’s Steam Frame after not too much more of a wait (fingers crossed). Am I really going to buy Project Aura in addition to at least two other 2026 XR purchases? The answer is probably, but not without some internal strife.
My cost consciousness is compounded by ongoing tech market trends. AI data centers are gobbling up all our chips, sending prices skyrocketing for computer components like RAM and SSDs. Essentially the same 32GB DDR5 kit I bought in summer 2023 for $95 is $440 today. While hardware manufacturers don’t pay the same prices as end users, it’s hard not to imagine the MSRP of something like Project Aura ballooning if it ships in current market conditions. Now more than ever, I’m eager to leverage the compute I already have in my home, bought and paid for, rather than splashing cash on inflated hardware prices.
Beyond cost, something that sticks out to me about Project Aura’s split-computing hardware configuration is that maybe it’s a little … redundant? Or at least, maybe there’s a path forward for a more efficient distribution of labor.
Project Aura’s compute puck will use the same Snapdragon XR2+ Gen 2 SoC found in Galaxy XR. The main thing that sets these chips apart from more traditional smartphone processors is that they have dedicated circuits to handle positional tracking, camera passthrough and spatial mapping.
Meanwhile, the glasses themselves will feature an onboard X1S processor, an iteration of the X1 chips found in Xreal’s current One line of glasses. If Xreal’s X1S chip can handle 6DoF tracking natively, why does it still need to be paired with an XR-specific SoC? Why not just any ol’ mobile chipset, such as those already in our smartphones?
Of course, there are power and thermal considerations, but Project Aura will likely be geared toward productivity, media consumption and some light gaming, such as the native version of Demeo reported on during Google’s demo event. Aura also notably uses optical see-through displays. The cameras are for tracking and mapping, not for passthrough as on MR headsets, which saves on compute overhead.
This dual-chip or split-computing configuration is something we’ve begun to see more and more in XR, beyond Xreal’s offerings.
In late 2025, Chinese startup GravityXR demoed a reference headset to show off their G-X100 coprocessor, which handles all the tracking and passthrough while the primary compute is offloaded to an external device. Even Apple Vision Pro has its dedicated R1 chip for tracking and passthrough, though its primary processor is still integrated into the headset.

These examples show the potential of relegating XR-specific compute to a lower-cost onboard processor, while the main compute lives in a tethered external device. Further, if that onboard processor is already handling the XR-specific workloads, why does the tethered computer need any XR-specific circuitry of its own? Why can’t such an HMD just as easily be paired with a standard mobile chipset?
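To make that division of labor concrete, here’s a purely hypothetical sketch of how the responsibilities might be split. None of these interfaces or names exist in Android XR, Snapdragon Spaces or any Xreal SDK; they’re mine, and only meant to illustrate which half of the work needs XR-specific silicon.

```kotlin
// Hypothetical split-compute contract. Illustrative only; not part of any real SDK.

/** 6DoF head pose: position in meters plus orientation as a quaternion. */
data class Pose(
    val x: Float, val y: Float, val z: Float,
    val qx: Float, val qy: Float, val qz: Float, val qw: Float
)

/** Work that stays on the glasses' onboard XR coprocessor (e.g. an X1-class chip). */
interface OnboardCoprocessor {
    fun latestPose(): Pose                  // positional tracking
    fun spatialMesh(): List<FloatArray>     // environment mesh from spatial mapping
    fun reprojectFrame(frame: ByteArray, renderedAt: Pose): ByteArray // late-stage reprojection
}

/** Work that could run on any sufficiently powerful host, XR-specific silicon or not. */
interface TetheredHost {
    fun renderFrame(pose: Pose): ByteArray  // app logic, UI and 3D rendering
}

/** One display cycle: the host renders against a pose, the glasses correct for motion since then. */
fun presentFrame(glasses: OnboardCoprocessor, host: TetheredHost): ByteArray {
    val pose = glasses.latestPose()
    val frame = host.renderFrame(pose)
    return glasses.reprojectFrame(frame, renderedAt = pose)
}
```

The point of the sketch is that nothing on the TetheredHost side is inherently XR-specific; in principle, that half could be a phone’s standard SoC.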
We shouldn’t need to buy a whole new computer every time we buy an XR device. As much as XR enthusiasts like to imagine HMDs as smartphone replacements, people are going to be slow to part with their pocket computers. There may always be some instances where having a smartphone makes sense. In which case, why not further pursue making smartphones the hub of our personal computing, including spatial? Something with which you might pair an HMD, rather than an HMD replacing the compute you already have?
This also aligns with recent trends of making smartphones more compatible with desktop-like experiences. Samsung phones and tablets have had the aforementioned DeX Mode for years, providing a desktop-like Android UI when you plug into an external display. Google’s introducing Android Desktop Mode as a native Android 16 feature on Google Pixel 8 phones and newer. ChromeOS is being merged with Android to create a unified platform dubbed Aluminium OS.
All this points toward a convergence in mobile and desktop/laptop computing, where a smartphone or tablet can serve as the primary brains of your setup, adapting to your displays and peripherals of choice. Devices like the NexDock, essentially a laptop shell for Android phones, demonstrate the potential flexibility of the pocket computer you already have on you at all times.
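The plumbing for that kind of adaptability already exists in stock Android. As a minimal sketch, an app can watch for an external display, whether that’s a DeX dock, a NexDock or a pair of display glasses, and switch layouts accordingly; the DisplayManager API here is real, but the layout-switching helpers are placeholders of my own.

```kotlin
import android.content.Context
import android.hardware.display.DisplayManager
import android.view.Display

// Detect whether any display beyond the phone's built-in panel is connected
// (a DeX dock, a NexDock shell, a pair of display glasses, etc.).
fun hasExternalDisplay(context: Context): Boolean {
    val displayManager = context.getSystemService(DisplayManager::class.java) ?: return false
    return displayManager.displays.any { it.displayId != Display.DEFAULT_DISPLAY }
}

// The layout-switching calls below are hypothetical placeholders for whatever
// desktop-style and phone-style UI an app actually implements.
fun chooseLayout(context: Context) {
    if (hasExternalDisplay(context)) {
        // showDesktopStyleLayout()  // hypothetical: multi-window, pointer-friendly UI
    } else {
        // showPhoneLayout()         // hypothetical: standard single-pane phone UI
    }
}
```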

Let me tell you, as someone who wrote this entire article in DeX mode on my Xreal 1S glasses, it’s refreshing having so much continuity between my mobile and desktop computing. When I got a call from my wife while writing, I was able to answer it with a mouse click and talk to her right on my glasses, rather than fumbling for my phone. It’s a marked improvement over Microsoft’s Phone Link app for Windows, even if DeX itself has some quirks and limitations compared to Microsoft’s deeply entrenched desktop OS.
In a similar vein, though from the other direction: as a one-time Galaxy XR owner (before I decided the ergonomics were a dealbreaker), yes, it was nice being able to sign into so many Google and Samsung apps and services and enjoy some of that same continuity. A lot carried over automatically from my Galaxy phone.
But not everything. Setting up a whole new device for spatial computing, even one from the same hardware manufacturer and the same platform holder, still isn’t as quick and seamless as extending the functionality of the device that already contains my entire everyday digital existence.
In the long term, maybe spatial computing will follow the same trajectory we’re seeing lately in the convergence of mobile and desktop computing. But there’s still a lot of road to travel.
In its current state, Android XR likely isn’t optimized to run on just any ol’ hardware. The current approach with dedicated XR hardware may be necessary to create an optimal user experience. Maybe in the future, through further optimization, standardization and iteration of existing mobile and spatial chipsets, Android XR could be deployed just as easily on a smartphone, either as a separately partitioned OS or as a mode within standard Android.
As sensational as it is to foretell some great convergence, I want to make it clear that I’m not proposing a true all-in-one device that covers absolutely every use case and content type. For example, there will always be a place for high-end VR gaming, which will likely still require your own rig. Similarly, many professionals have greater power needs than what they can get from a smartphone, and may use applications that are only supported on traditional desktop ecosystems. If they want to spatialize their work, they might be better served pairing an XR device with a beefy Windows, Mac or Linux computer.
However, for the majority of everyday users, the hardware trajectory I’m proposing would lower XR’s cost of entry and put spatial computing in millions of people’s hands instantly. Imagine if one day, Google announced that everyone already has a spatial computer in their pockets. Instead of buying a premium MR headset for upwards of $1,800, or paying what I’d estimate will be $1,000 or more for something like Project Aura, you could unlock similar potential with a few-hundred-dollar investment in a pair of AR display glasses.
The road ahead is long and winding. We could see it branch into completely different paths than the one I’m trying to plot out. But at the very least, I believe this is a route worth exploring.
For now, I’ll content myself with following all the steps we take this year with devices like Project Aura, and experimenting with all the potential use cases for the Xreal 1S glasses I have today. I’m planning to post impressions as I put the glasses through their paces. If you’re interested, be sure to subscribe for that and all my XR coverage.

