A Qualm With XR Game Controllers

Charlie Cushing | Discussion, Opinion, Technical

When you play a game, there’s a subtle secondary game taking place in your subconscious, constantly running in parallel with the conscious act of making the things on the screen do what you want them to. This secondary game has to do with the challenge of using your body to talk to the computer through an interface that generally has little to do with the subject on the screen. You don’t walk around in real life by moving a pair of thumbsticks on a gamepad, for example. The interfaces we use to communicate with XR software, however, are trending toward direct first-person control, which emphasizes forms of interaction that mirror reality. I’d like to briefly describe how this trend may be harmful to the staying power of XR games.

My qualm is essentially twofold:

  1. XR games often struggle to retain an audience of consumers who are understandably accustomed to a certain caliber of AAA and indie content. While traditional non-XR games are now produced using design principles refined through decades of trial and error, XR is comparatively in its infancy with its own design principles. It’s deceptively easy to overlook just how different the two industries are. I think it’s fair to posit, however, that XR and traditional games are almost indistinguishable once you exclude the differences in their interfaces, and that the maturation of XR design principles will therefore occur predominantly through innovations in UI.
  2. It sometimes feels to me as though XR interfaces are too streamlined. The progression of XR interfaces toward direct user interaction, in lieu of more abstract interfaces like gamepads, mice, and keyboards, strips away certain aspects of traditional input devices that I suspect people find enjoyable. Mastering your interface is roughly half the fun. Like any gameplay element, controllers can sometimes cause frustration, especially when they’re poorly mapped, but the sense of progression we enjoy as we learn to physically bend a game to our will is an essential element of quality gameplay.

All games consist of three top-level elements:

  • The software
  • The interface
  • The user

The software sets the rules of the game, and the interface provides the user with a means of interacting with the software. The extent to which the user can freely exert their will over the game comes from their understanding of its rules and their ability to fluently manipulate the interface. In XR, the trend is to make the user’s body the interface; the user is, in effect, becoming the interface, and the layer between the user and the software is progressively narrowing. I believe this may at times lead to a feeling of emptiness in certain aspects of XR gaming. Through hundreds or thousands of hours of traditional gaming, the average consumer has been conditioned to subconsciously expect a type of physical challenge that is less prominent in contemporary XR.
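To make that layering concrete, here’s a minimal sketch of the three elements as a pipeline, written in Python purely for illustration. Every name below (Intent, Interface, Software, play_one_frame) is hypothetical and not drawn from any real engine; the point is only that the software ever sees intents, and everything that turns the user’s physical actions into those intents is the interface.

```python
# Minimal sketch of the three top-level elements as a pipeline.
# All names here are hypothetical, chosen purely for illustration.

from dataclasses import dataclass
from typing import Protocol


@dataclass
class Intent:
    """What the software actually consumes: a desired change to the game state."""
    move_x: float  # desired lateral movement
    move_y: float  # desired forward movement
    yaw: float     # desired facing direction, in radians


class Interface(Protocol):
    """The translation layer between the user's physical actions and the software."""
    def encode(self, raw_input: dict) -> Intent: ...


class Software:
    """The rules of the game: it only ever sees intents, never the user's body."""
    def step(self, intent: Intent) -> None:
        ...  # advance the simulation according to the rules


def play_one_frame(software: Software, interface: Interface, raw_input: dict) -> None:
    # Everything between the user's raw physical input and the rules of the game
    # is the interface. How thick that layer should be is a design choice.
    software.step(interface.encode(raw_input))
```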

The Motor-Encoding Layer

Traditional 3D video games on a 2D monitor require surprisingly abstract hand-eye coordination when you think about it. If you identify more closely with either a gamepad or a mouse and keyboard for most of your gaming, then you probably know how hard it is to switch to the other. Remember learning how to play your first 3D shooter? The complex motor-encoding that takes place between the user and the screen involves all sorts of conversions between perspectives and dimensions. For most games out there, the user has to:

  • Infer the position and orientation of a simulated object from a 2D image
  • Decide upon a desired behavior
  • Dexterously manipulate an interface to project that desired behavior back into 2D

Consider the layer of challenge that exists between the user and the interface, something I’ve been referring to lately as the motor-encoding layer. It’s separate from, but related to, the challenge posed by the behavioral rules of the game state. The relationship between the perspective a user perceives on their screen and the deflections of a mouse or thumbstick is tenuous and abstract, so there’s a thick boundary between the user and the software, and I think that might be a good thing for games. This layer constitutes roughly half of the user’s gaming responsibilities, occupying our subconscious with complex motor-encoding duties that feel good when we do them well.
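To give a rough sense of how abstract that encoding is, compare a conventional thumbstick “look” mapping with a head-tracked one. The functions and numbers below are hypothetical, a sketch rather than any particular engine’s input code; the contrast is simply that the traditional mapping is an indirect, rate-based translation the user has to internalize, while the XR mapping is close to an identity function.

```python
# Hypothetical contrast between a thick and a thin motor-encoding layer.

def thumbstick_look(yaw: float, pitch: float,
                    stick_x: float, stick_y: float,
                    dt: float, sensitivity: float = 2.5) -> tuple[float, float]:
    """Traditional gamepad look: stick deflection is a *rate* of rotation.
    Pushing a stick sideways doesn't physically resemble turning to look;
    the user has to internalize the mapping through practice."""
    yaw += stick_x * sensitivity * dt
    pitch += stick_y * sensitivity * dt
    pitch = max(-1.4, min(1.4, pitch))  # clamp so the camera can't flip over
    return yaw, pitch


def head_tracked_look(headset_yaw: float, headset_pitch: float) -> tuple[float, float]:
    """XR look: the tracked head pose passes through essentially unchanged,
    so the motor-encoding layer is nearly an identity function."""
    return headset_yaw, headset_pitch
```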

XR games might oversimplify the motor-encoding layer

Again, a subjective reading of the current XR gaming industry suggests that the novelty of many XR games quickly vanishes after an initial period. All games are at risk of this problem, but XR seems particularly susceptible. Users often ultimately conclude that many games are simply more enjoyable and less strenuous when played with traditional interfaces and a 2D monitor instead. I have to wonder if this is due in part to an excessively direct correlation between the actions of the user and the behaviors those actions produce in the simulation. A thinner interface may be detracting from the staying power of XR video games because a deeply held subconscious expectation for the game to present a more abstract challenge at the motor-encoding layer is not being met.

If this problem does indeed exist, then I can imagine two possible solutions that would improve the staying power of XR video games:

  1. Make the game more fun by gamifying the UI to make it more challenging. A virtual motor-encoding layer built into the UI may be able to compensate for an overly thin physical motor-encoding layer (a rough sketch of what this could look like follows this list). Real-life activities that are known to have staying power may offer some inspiration in this regard.
  2. Make the game more fun by making the physical interface more fun. XR video games that include a more abstract motor-encoding layer should fulfill the subconscious expectation for a greater physical challenge.
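As a sketch of the first idea, a virtual motor-encoding layer could be something as simple as requiring a short, learnable input sequence where a single direct grab would otherwise do. The Python example below is hypothetical (the gesture names and the GesturedReload class are made up for illustration); it just shows how a UI mechanic can create a mastery curve of its own.

```python
# Hypothetical example of gamifying the UI: a reload that demands a learned
# three-step gesture sequence instead of a single direct grab-and-insert.

class GesturedReload:
    """A tiny state machine: the reload only completes when the user performs
    the steps in order, which adds a motor-encoding challenge created by the
    UI itself rather than by the physical controller."""

    SEQUENCE = ("eject_magazine", "insert_magazine", "rack_slide")

    def __init__(self) -> None:
        self.progress = 0

    def feed(self, gesture: str) -> bool:
        """Advance on a correct gesture, reset on a wrong one.
        Returns True once the full sequence has been performed."""
        if gesture == self.SEQUENCE[self.progress]:
            self.progress += 1
        else:
            self.progress = 0  # mistakes cost you, which is where mastery comes from
        if self.progress == len(self.SEQUENCE):
            self.progress = 0
            return True
        return False


# Example usage: a clean run completes the reload.
reload_mechanic = GesturedReload()
results = [reload_mechanic.feed(g) for g in GesturedReload.SEQUENCE]
print(results)  # [False, False, True]
```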

Telomimes are particularly well suited to fulfilling both of these simultaneously because they’re built out of real-life objects which, themselves, can be made into game-like activities that complement the software experience. You can read about telomimes (telic interfaces) here.