After years of rumors and speculation, Apple finally revealed its AR/VR headset, the Vision Pro. It will go on sale early next year for $3,500.
Apple Vision Pro is Apple’s first spatial computer. It seamlessly blends digital content with your physical space using revolutionary technology.
www.apple.com
When not connected directly to power, it gets about two hours of battery life from the external battery pack.
The device appears to focus mainly on productivity and entertainment. The main use cases Apple showed during WWDC 2023 were multitasking (browsing the web, listening to music, iMessage, Microsoft Teams, etc.), watching movies, and video calls.
When you set up the device, you scan your face to create your digital persona, which is used for FaceTime and appears on the front of the device for others to see.
You can also take Spatial Photos using the multiple sensors and cameras on the device, then play them back in 3D.
The device is powered by the M2 chip and a new R1 chip; the R1 handles processing the data from all the cameras and sensors. There are 12 cameras, five sensors, and six microphones. One of the sensors is a LiDAR scanner.
During development, Apple filed over 5,000 patents.
According to this ex-Apple researcher, Apple is able to track and predict user intentions through eye tracking, brain activity, and other signals:
I spent 10% of my life contributing to the development of the #VisionPro while I worked at Apple as a Neurotechnology Prototyping Researcher in the Technology Development Group. It’s the longest I’ve ever worked on a single effort. [...]
The large majority of work I did at Apple is under NDA, and was spread across a wide range of topics and approaches. But a few things have become public through patents which I can cite and paraphrase below.
Generally as a whole, a lot of the work I did involved detecting the mental state of users based on data from their body and brain when they were in immersive experiences.
So, a user is in a mixed reality or virtual reality experience, and AI models are trying to predict if you are feeling curious, mind wandering, scared, paying attention, remembering a past experience, or some other cognitive state. And these may be inferred through measurements like eye tracking, electrical activity in the brain, heart beats and rhythms, muscle activity, blood density in the brain, blood pressure, skin conductance etc.
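To make the idea concrete, here is a minimal, purely illustrative sketch of the kind of classification the researcher describes: mapping a few physiological measurements to a predicted cognitive state. Everything here is an assumption for illustration. The feature set, the state labels, the centroid values, and the nearest-centroid approach are invented for this example and are not Apple's actual method or data.

```python
import math

# Hypothetical centroids for each cognitive state, invented for this sketch.
# Feature order: (pupil_dilation_mm, heart_rate_bpm, skin_conductance_uS).
# A real system would use far richer signals (EEG, blood flow, etc.) and a
# trained model with normalized features, not hand-picked centroids.
STATE_CENTROIDS = {
    "curious":        (5.5, 75.0, 6.0),
    "mind_wandering": (4.0, 65.0, 3.0),
    "scared":         (6.5, 110.0, 12.0),
    "attentive":      (5.0, 80.0, 5.0),
}

def classify_state(features):
    """Return the state whose centroid is nearest (Euclidean distance)."""
    return min(
        STATE_CENTROIDS,
        key=lambda state: math.dist(features, STATE_CENTROIDS[state]),
    )

# Elevated heart rate and skin conductance land nearest the "scared" centroid.
print(classify_state((6.3, 105.0, 11.0)))  # → scared
```

In practice the thread suggests these states are inferred by AI models trained on many signals at once; the nearest-centroid rule above just shows the shape of the problem, not the solution.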