Before getting into what a telic interface is, let me begin with a story about what led me to believe that there needed to be a word for this kind of thing.
I stumbled across this interesting VR project early last year, and was later surprised to discover that its creative director was Alex Coloumbe (iBrews), an old high school friend of mine. He has a degree in architecture and co-founded Agile Lens, a firm specializing in Virtual Reality solutions for architects. The project was for a client who needed to optimize seating arrangements and sight lines to the stage of a large multi-level theater, and Agile Lens created a VR environment that let them test each seat in various configurations.
Part of the firm’s solution was to attach a Vive Tracker puck to the backrest of a real chair from their office (an Eames) that could be rolled around and spun a full 360 degrees. They then modeled the exact same chair in VR and bound the tracker data to it, so that there was a direct 1:1 positional and rotational correspondence between the real chair and the virtual one. The team was then free to model the entire theater and “sit” anywhere they wanted, facing any direction. If they wanted to take a seated position, all they had to do was walk over to the virtual chair, sit down, and trust that a real chair was actually waiting for them in reality. I thought this was pretty cool.
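That 1:1 binding can be sketched in a few lines: each frame, read the tracker’s pose and copy it onto the virtual chair, offset by where the puck sits on the backrest. The Python below is a hypothetical illustration, not Agile Lens’s actual code; `Pose`, `bind_chair_to_tracker`, and the seat offset are names I’m inventing for the sketch.

```python
from dataclasses import dataclass

@dataclass
class Pose:
    position: tuple   # (x, y, z) in meters, world space
    rotation: tuple   # unit quaternion (x, y, z, w)

def quat_rotate(q, v):
    """Rotate vector v by unit quaternion q = (x, y, z, w)."""
    qx, qy, qz, qw = q
    # t = 2 * cross(q.xyz, v)
    tx = 2.0 * (qy * v[2] - qz * v[1])
    ty = 2.0 * (qz * v[0] - qx * v[2])
    tz = 2.0 * (qx * v[1] - qy * v[0])
    # v' = v + w*t + cross(q.xyz, t)
    return (v[0] + qw * tx + (qy * tz - qz * ty),
            v[1] + qw * ty + (qz * tx - qx * tz),
            v[2] + qw * tz + (qx * ty - qy * tx))

def bind_chair_to_tracker(tracker_pose, seat_offset):
    """Virtual chair pose = tracker pose, plus a fixed offset from the
    puck (mounted on the backrest) to the chair model's origin, rotated
    into the tracker's current orientation."""
    ox, oy, oz = quat_rotate(tracker_pose.rotation, seat_offset)
    px, py, pz = tracker_pose.position
    return Pose((px + ox, py + oy, pz + oz), tracker_pose.rotation)
```

Calling `bind_chair_to_tracker` once per rendered frame is all it takes to keep the virtual chair glued to the real one, which is why the trick is so simple and so effective.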
I’d already seen lots of adverts for things like faux guns with tracking attachments that let you play FPS games in VR with enhanced tactile immersion. But this chair… something was different about it. It was a real chair that you could use for its real purpose in an environment that didn’t physically exist. Everything I’d seen before – things like faux guns – seemed like empty vessels by comparison. They weren’t real guns because they didn’t do the business a real gun is supposed to do; probably a good thing. This chair, though, seemed weirdly complete.
On the surface, what Agile Lens did might seem obvious or simple, but I think there’s a certain elegance to that simplicity. Tracking a real physical tool in reality and fulfilling its purpose in both XR and real-life, simultaneously, seems to offer interesting new opportunities for user experiences.
I started making lists of things that fit this description:
- VR gloves with finger tracking
- Full-body tracking
- FPV drones
- The skill toys we were working on; poi, staff, hula hoops, juggling clubs, etc…
- A glass of water?
A Fun Experiment
I actually tried that last one recently as an experiment. By screwing a suction cup into a Vive Tracker puck and sticking the tracker to the bottom of a Coke can, I found that I could drink Coke in VR without any problem. For the images below, note that both were taken with my camera fixed to a tripod between the two shots; all I did was gently press one of my Vive’s lenses up against the lens of the camera for the second photo. I could have just taken a screencap, but I think this makes the demonstration more compelling and helps illustrate how everything is right where it should be inside the virtual environment.
I could reach for my Coke and there it was – as if the Coke was tunneling through the boundary between real life and VR. Select parts of the real world were converted into a cartoon mimic of themselves, but were still tangible. The controller wands do this, too, but they don’t have any purpose beyond their function as a VR interface device. Purpose became the element in need of a word, which ultimately led to “telicity”; more on that in a moment.
Notice that the red and green cylinders are also “there”. In the XR industry, these are what are known as passive haptics – real physical objects and resistive barriers erected around the user that correspond to objects in VR. Haptics help reinforce the immersive illusion by providing a tangible force when the user expects to experience one; a fixed wall or table to touch or bump into, for example. The catch is that if you move passive haptics around IRL – say, the stool or the table – they become misaligned with the virtual red or green cylinder and aren’t where you expect them to be. My Coke ends up on the floor when I try to put it down. If we attach trackers to them, though, then they can tell the simulation where they are.
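The difference between a static passive haptic and a tracked one can be made concrete with a toy sketch. The class and function names below are mine, purely for illustration: a static haptic keeps the pose recorded at calibration time, while a tracked haptic re-reads its tracker every frame, so moving the real object never desynchronizes its virtual stand-in.

```python
class StaticHaptic:
    """A passive haptic whose virtual proxy is placed once, at calibration."""
    def __init__(self, calibrated_position):
        self.calibrated_position = calibrated_position

    def virtual_position(self, _real_position):
        # Ignores where the real object actually is right now.
        return self.calibrated_position

class TrackedHaptic:
    """A passive haptic with a tracker attached: the proxy follows it."""
    def virtual_position(self, real_position):
        # Virtual and real always agree, frame by frame.
        return real_position

def misalignment(haptic, real_position):
    """Distance (meters) between the virtual proxy and the real object."""
    vx, vy, vz = haptic.virtual_position(real_position)
    rx, ry, rz = real_position
    return ((vx - rx) ** 2 + (vy - ry) ** 2 + (vz - rz) ** 2) ** 0.5
```

Slide the real stool one meter and the static haptic is off by exactly that meter, while the tracked version stays at zero misalignment – which is the whole point of attaching the puck.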
This encapsulates roughly half of what makes interfaces telic, what gives them “telicity”; they are tracked passive haptics. It can help to think of telics as haptics in reverse. Haptics fool you into thinking that something in the virtual simulation is part of your physical experience. Conversely, telics fool you into thinking that the physical experience of using something in reality is part of the simulation. They’re props that you can touch and feel IRL, and are tracked so that they move around in XR when you move them around IRL. You can do whatever you want with the appearance of these objects inside the simulation – like turning a Coke can into a Pepsi can – but they’re interfaces now, too, because you can manipulate them physically.
Telomimes, Telicity, and Holotelicity
The other half of what makes an interface telic is that it needs to be something you use for a real reason in real life. From this we arrive at the word “telomimetic”, which we intend to mean, literally, “purpose-mimicking”. Telics are the product of a field of XR interface design we’re calling “telomimetics”, by which we mean: “the principles of using an XR interface to mimic that for the sake of which something in real life is done”. This terminology was derived mainly from concepts invoked by philosophers like Aristotle and Plato.
The Agile Lens chair and my can of Coke are telomimes because they mimic the purpose of a normal, untracked chair and Coke inside XR. They’re both objects built to serve some purpose IRL that you can also use in roughly the same way with your VR headset on. You can see the chair and sit in it, and you can see the can of Coke and drink from it. Note that Vive controllers, Knuckles, etc., do not fulfill this requirement, since they don’t do anything in real life except serve as XR interfaces. This means they are not telomimes.
The main objective in the field of telomimetics is to build experiences in XR based on highly realistic physical interactions with real, tangible objects of utility or purpose. We do this by ensuring that the interface physically resembles the original object closely enough, and is tracked well enough to be used IRL and in XR simultaneously with little or no break in tactile or kinetic immersion. How good of a job you do determines the telicity of your interface.
The gold standard for telicity is called holotelicity, which is achieved when the tactile sensations and kinetic behavior of using the interface are indistinguishable from those that arise when using the original object IRL.
For the minority of you interested in mathematical support for the need for these words, I hope to have a paper on telomimetics out sometime later this year describing interface telicity in detail. For now, simply consider that telicity falls within a range from perfect through partial to nonexistent, and depends on tracking completeness, similarity of utility, and similarity of physical characteristics.
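Ahead of that paper, here is one purely illustrative way such a measure could be shaped – this is my sketch, not the forthcoming formalism. Treating the three factors as scores in [0, 1] and multiplying them captures the intuitions above: any factor at zero (say, no tracking at all) makes the interface non-telic, and all three at 1.0 corresponds to holotelicity.

```python
def telicity(tracking_completeness, utility_similarity, physical_similarity):
    """Toy telicity score: the product of three factors, each in [0, 1].

    1.0 ~ holotelic, 0.0 ~ non-telic, anything between ~ partially telic.
    """
    for factor in (tracking_completeness, utility_similarity, physical_similarity):
        if not 0.0 <= factor <= 1.0:
            raise ValueError("each factor must lie in [0, 1]")
    return tracking_completeness * utility_similarity * physical_similarity
```

Under this toy model my tracked Coke can scores high on tracking and utility but loses points whenever its virtual skin (a Pepsi can, say) stops matching its physical characteristics.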
Prop Logic Studio writes gesture recognition software for telomimetic skill toys
For the uninitiated: skill toys are objects that you play with in a dexterous or skillful way. Think: yo-yos, hula hoops, skateboards, juggling props, etc. These are activities you can get really good at and “speak through”, like a musical instrument for the eyes instead of the ears. Each class of skill toy has a set of tricks/moves associated with its culture that constitutes a primitive gestural language. Artists and athletes learn to use this language to express ideas to one another and for personal fulfillment.
What we’re doing at Prop Logic is identifying these gestural languages and translating them into motion-tracked interfaces for XR applications. Most skill toys are not suitable for simulation in XR due to the fine motor skills and tactile feedback required to use them. They do, however, make wonderful telomimes. I cannot emphasize enough just how well suited they are.
Assuming adequate tracking exists for your application, skill toys:
- Are entirely self-contained devices
- Produce motion you can capture and pull meaningful data from
- Are manipulated for fun
- Let you express yourself with tricks and moves; basically gestures
- Suit XR, which loves using gestures as interface commands
- Accept fantasy physics, enabling games or artistic applications
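As a taste of what “pulling meaningful data from their behavior” can mean, here is a minimal sketch – my own illustration, not Prop Logic’s actual recognizer – that measures how far a spinning prop (a poi head, say) has rotated around the hand from a stream of tracked 2D positions, by unwrapping the angle of each sample.

```python
import math

def total_rotation(points):
    """Sum of signed angle deltas (radians) around the origin for a
    sequence of (x, y) positions; 2*pi per full revolution."""
    angles = [math.atan2(y, x) for x, y in points]
    total = 0.0
    for prev, cur in zip(angles, angles[1:]):
        delta = cur - prev
        # Unwrap: keep each step in (-pi, pi] so crossing the -pi/pi seam
        # doesn't register as a near-full turn in the wrong direction.
        while delta <= -math.pi:
            delta += 2 * math.pi
        while delta > math.pi:
            delta -= 2 * math.pi
        total += delta
    return total

def revolutions(points):
    """Signed revolution count; positive = counter-clockwise."""
    return total_rotation(points) / (2 * math.pi)
```

Once you have a running revolution count, recognizing a trick like a three-beat weave reduces to matching patterns in this kind of derived signal, which is exactly the sort of gesture an XR application can bind a command or effect to.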
I think that telic interfaces, and especially those based on skill toys, present a wonderful path of development for XR. They may also present an interesting solution to something I suspect may detract from the staying power of VR games, which you can read about here.