Client: Self
Year: 2023
Spatial Computing is a paradigm shift not only in computing but also in the ways we interact with our worlds: real, virtual and everything in between.
Spatial Computing promises to give our personal devices a human-scale understanding of space, its scenes and its elements. Together with a deeper knowledge of the user's context and intentions, it will allow us to create more intimate interactions with our surroundings and to use them as a canvas for our blended realities.
In this research we explore the value of objects through their augmented versions: expanding and building upon their native functionality, enriching it with the user's daily-life data and smart algorithms that extract patterns and insights and, finally, provide assistance.
Throughout the history of computing the digital world has always sat behind a screen, a display. With Spatial Computing / XR we have the opportunity to break this barrier: our devices can track and understand the physical layer and its features, so we can use the world itself as a display.
But Spatial Computing / XR is not just the distribution of our interactions across the physical layer/space; it also brings a deep understanding of that space's context and of its user(s). Through AI/ML algorithms we can cut through the noise of all the data we constantly generate - in our calendars, to-do lists, emails, apps and services - and extract patterns to better understand context and even infer intent, so we can create interactions of value.
As we bring these digital layers in to blend with our real world, the way we interact with them needs to adapt: we will no longer need older input devices with their metaphors, mappings and conventions, and can instead bring back the interfaces we already use in the real world, our natural user interfaces (our body, voice, eyes and hands).
Never have our lives been so distributed between the real and the digital worlds. These days most of what we do is natively digital, and even what we do in the real world usually leaves its mark in the digital one, in the form of calendar events, to-do list items, emails and so on.
All these huge pools of data can now be processed through AI/ML algorithms to cut through the noise and extract a valuable understanding of the user's life, context, needs and even intent.
As we move towards head-worn devices we gain the ability to interact with the digital and hybrid worlds through a single multi/cross-modal interface, using our gaze, voice and hands.
We are offered new ways of interacting with the digital world, leaving behind older conventions and metaphors: truly 'direct manipulation', interacting with the object itself - and not its representation - using your full-bodied self (your hands, eyes and voice). One step closer to interacting with the digital world the way we do with the richer, multisensory real one.
What powers can we extract from our interactions with objects? And how can they be augmented? How can we leverage their already powerful functional and semiotic loads?
In the past we have used objects and their semiotic value to explore relationships native to the real world but transmuted to the digital/virtual ecosystem. From direct manipulation paradigms to skeuomorphism, we have long used objects as shortcuts between the real and digital worlds.
Much of the power objects hold in our real-world interactions stems from the immediate, effortless access they give us to information with direct implications for our well-being and our lives.
We call these objects 'glanceables': objects that provide immediate information with a quick glance - think of the clock and time, or the thermometer and temperature.
One quick glance at our wristwatch and we know the precise time of day; a quick glance out the window and we can sense the weather outside. And as we do this naturally, our minds - in the background - immediately use the information to project and plan accordingly.
In Objects:Augmented we extend that glance and dig a little deeper into the future by crossing the object's native function with the user's egocentric data streams to provide relevant, actionable information and interactions.
Still the same way: with a glance, just a little longer.
A simple clock can, at a glance, immediately give us the exact time of day; we now augment it with the user's egocentric data (in this case a calendar), leveraging both data sources to create interactions of value and/or assistance.
Here, as we glance at the clock, we are given a quick overview of the day and the current task at hand, the last of the workday (see clock display 1).
Yet, because it knows about your planned life, the augmented clock can go a step further: it has identified a location change between the finished task (and workday) and the upcoming event (kids' pick-up) (see clock display 2).
And as it knows the user's preferences, it can calculate the journey time under current driving conditions and suggest an optimal departure time - no frills, no stress (see clock display 3).
Making the transition between work and home modes easier, with a helping hand (from a clock).
Breakdown of the clock display sequence
1. The clock reveals the user's scheduled events throughout the day and signals that the last event of the workday is ending.
2. The clock reveals the upcoming event later today (4pm) and flags it as an exception/alert: a change of mode (work/family).
3. The clock offers a suggestion of assistance.
(In the background it looks up both events' locations, retrieves the travel time under current conditions from the Directions API and provides the user with the optimal departure time - sketched below.)
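The departure-time suggestion itself reduces to a small piece of logic: detect a location change between consecutive calendar events and subtract the traffic-aware travel time (plus a comfort buffer) from the next event's start. A minimal C# sketch, assuming the travel duration has already been fetched from the Google Directions API; the type and member names are illustrative, not the prototype's actual code:

```csharp
using System;

// Sketch of the clock's departure-time suggestion. Two calendar events come
// from the Google Calendar API; the travel duration is assumed to have been
// fetched from the Google Directions API with current traffic conditions.
public static class DepartureAdvisor
{
    public class CalendarEvent
    {
        public string Title;
        public DateTime Start;
        public DateTime End;
        public string Location;
    }

    // Returns a suggested departure time, or null when no trip is needed.
    public static DateTime? SuggestDeparture(
        CalendarEvent current, CalendarEvent next,
        TimeSpan travelTimeWithTraffic,   // from the Directions API
        TimeSpan buffer)                  // user preference, e.g. 10 minutes
    {
        // Only surface a suggestion when the two events happen in different
        // places, i.e. the location change the clock flagged in display 2.
        if (string.Equals(current.Location, next.Location,
                          StringComparison.OrdinalIgnoreCase))
            return null;

        // Leave by: next event start - travel time - comfort buffer.
        return next.Start - travelTimeWithTraffic - buffer;
    }
}
```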
In the same way, the window we approach to check the weather outside can also be augmented: expanding on its data source (weather) and projecting it onto the user's schedule, using both datasets and their interactions to find opportunities for assistance.
In our example, the augmentation gives us current weather information and surfaces exceptions to provide additional value:
The user's calendar has an event (a bike ride) set at an outdoor location later that day. Understanding the nature and location of the event, and projecting the weather data over the day, the algorithm can quickly find an exception - it will rain at the time of the outdoor event - and signal it to the user so they can act accordingly.
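At its core this is a simple cross-check between two datasets: the day's hourly forecast and the day's calendar. A minimal sketch of that check, assuming hourly precipitation probabilities from OpenWeather and an "outdoors" flag inferred from the event's nature/location; all names and thresholds are illustrative:

```csharp
using System;
using System.Collections.Generic;

// Sketch of the window's exception check: project the hourly forecast onto
// the user's calendar and flag outdoor events that coincide with rain.
public static class WeatherExceptionFinder
{
    public class HourlyForecast
    {
        public DateTime Hour;
        public float PrecipitationProbability;  // 0..1, from the weather feed
    }

    public class CalendarEvent
    {
        public string Title;
        public DateTime Start;
        public DateTime End;
        public bool IsOutdoors;                 // inferred from location/type
    }

    public static IEnumerable<(CalendarEvent, HourlyForecast)> FindConflicts(
        IEnumerable<CalendarEvent> events,
        IEnumerable<HourlyForecast> forecast,
        float rainThreshold = 0.5f)             // illustrative cut-off
    {
        foreach (var ev in events)
        {
            if (!ev.IsOutdoors) continue;       // only outdoor events matter here

            foreach (var hour in forecast)
            {
                // Rain expected within the event window -> surface an alert.
                if (hour.Hour >= ev.Start && hour.Hour < ev.End &&
                    hour.PrecipitationProbability >= rainThreshold)
                {
                    yield return (ev, hour);
                    break;                      // one alert per event is enough
                }
            }
        }
    }
}
```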
All of the features in these prototypes are functional and leverage existing technologies: this is not a look into the future but a perspective on what we could already be doing.
Interaction with these augmented objects aims to be seamless and non-disruptive, in an effort to integrate them as much as possible with the pace and rhythms of our lives.
We leveraged a standard multi-modal pattern, using eye tracking/gaze for object selection together with a finger pinch for confirmation/activation. The interaction with these objects/widgets can be broken into two main parts: discovery and activation.
(In headsets without eye tracking capabilities we replaced gaze with the head's orientation.)
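A minimal Unity sketch of that discovery/activation loop, using a head-orientation ray as the gaze proxy (as in the non-eye-tracked fallback above) and the Oculus Integration's OVRHand pinch detection for activation; the AugmentedObject class is a hypothetical stand-in for the prototype's own widgets:

```csharp
using UnityEngine;

// Discovery: a ray from the head (or eye gaze, where available) decides which
// augmented object is being looked at. Activation: an index-finger pinch
// confirms it. AugmentedObject is an illustrative base class, not the
// project's actual code.
public class GlanceSelector : MonoBehaviour
{
    public Transform head;             // centre-eye / head anchor (gaze proxy)
    public OVRHand hand;               // tracked hand used for the pinch gesture
    public float maxGazeDistance = 5f;

    private AugmentedObject focused;
    private bool wasPinching;

    void Update()
    {
        // Discovery: which augmented object does the gaze ray hit, if any?
        var ray = new Ray(head.position, head.forward);
        AugmentedObject hitObject = null;
        if (Physics.Raycast(ray, out RaycastHit hit, maxGazeDistance))
            hitObject = hit.collider.GetComponent<AugmentedObject>();

        if (hitObject != focused)
        {
            if (focused != null) focused.OnGazeExit();   // collapse the expanded layer
            focused = hitObject;
            if (focused != null) focused.OnGazeEnter();  // reveal the glanceable layer
        }

        // Activation: a fresh index-finger pinch confirms the focused object.
        bool isPinching = hand.GetFingerIsPinching(OVRHand.HandFinger.Index);
        if (focused != null && isPinching && !wasPinching)
            focused.Activate();        // e.g. accept the suggested departure time
        wasPinching = isPinching;
    }
}

// Hypothetical base class each augmented object (clock, window) would implement.
public abstract class AugmentedObject : MonoBehaviour
{
    public abstract void OnGazeEnter();
    public abstract void OnGazeExit();
    public abstract void Activate();
}
```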
Egocentric data sources / personal data streams used include the Google Calendar API, the Google Directions API and the OpenWeather API.
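These streams are plain web APIs, so pulling them into the prototype is straightforward. A sketch of one such fetch, assuming OpenWeather's current-weather endpoint and a valid API key (the city and key below are placeholders); the Calendar and Directions feeds follow the same request/parse pattern:

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.Networking;

// Pulls current weather data from OpenWeather via a Unity coroutine.
public class WeatherFeed : MonoBehaviour
{
    [SerializeField] private string apiKey = "YOUR_OPENWEATHER_KEY"; // placeholder
    [SerializeField] private string city = "Lisbon";                 // illustrative

    IEnumerator Start()
    {
        string url =
            $"https://api.openweathermap.org/data/2.5/weather?q={city}&appid={apiKey}&units=metric";

        using (var request = UnityWebRequest.Get(url))
        {
            yield return request.SendWebRequest();

            if (request.result != UnityWebRequest.Result.Success)
            {
                Debug.LogWarning($"Weather request failed: {request.error}");
                yield break;
            }

            // Raw JSON payload; in the prototype this would be parsed into the
            // forecast structures used by the augmentation logic.
            Debug.Log(request.downloadHandler.text);
        }
    }
}
```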
The functional prototype was built in Unity and runs on Meta Quest Pro / Meta Quest 3.