How Motion Control Gaming Works
At Christmas 2006, the UK went wild for the Nintendo Wii – people queued through the night, stores sold out, and 6 million consoles have since been sold on these shores alone. By Christmas 2010, motion-control videogame systems were back on the nation’s wish-lists as the technology stepped up a notch.
In September 2010, Sony released the Move controller for the PlayStation 3, and in November, Microsoft launched Kinect, the first controller-free system. We opened each one up to find out how they work.
Let’s start with movement detection: to register its exact position, a handheld controller needs a system that can detect motion and speed in three-dimensional space, along with any tilt and twist. The controllers contain tiny accelerometers – micro-electro-mechanical systems (MEMS) – which measure acceleration, including the constant pull of gravity. Interestingly, we have biological accelerometers in our ears, where tiny hairs wafting around in fluid like reeds in water do the same job as the silicon beams in their man-made counterparts.
MEMS accelerometers consist of tiny strands of silicon attached at one end (cantilevered beams) inside a charged field. The MEMS device measures capacitance (how much charge is stored), so when a beam moves from its neutral position, the change in capacitance can be used to calculate acceleration.
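As a rough sketch of that last step, acceleration can be read off as proportional to the change in capacitance. The sensitivity constant here is invented purely for illustration – real devices use factory-calibrated values – and the optional gravity subtraction mirrors the 1G calibration described next.

```python
# Hypothetical illustration of turning a MEMS capacitance change into
# acceleration. SENSITIVITY is a made-up figure, not a real part's spec.

SENSITIVITY = 0.05   # capacitance change (pF) per m/s^2 of acceleration (invented)
G = 9.81             # 1G, subtracted on the vertical axis during calibration

def acceleration_from_capacitance(delta_c_pf, vertical=False):
    """Convert a capacitance change (pF) into acceleration (m/s^2).

    On the vertical axis, subtract the constant 1G pull of gravity,
    mirroring the manufacturer calibration described in the article.
    """
    a = delta_c_pf / SENSITIVITY
    return a - G if vertical else a

# A beam displaced enough to shift capacitance by 0.1 pF reads 2 m/s^2
print(acceleration_from_capacitance(0.1))                 # 2.0
# The same reading on the vertical axis, with gravity calibrated out
print(acceleration_from_capacitance(0.1, vertical=True))  # negative: gravity dominates
```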
On Earth, all objects at rest near the surface are pulled towards the planet with a force of 1G, so manufacturers calibrate their accelerometers to adjust for this. However, it does mean that your Wii won’t work properly if you take it on holiday to the moon. Information from the accelerometers is processed in the controller’s microchip and beamed back wirelessly, via Bluetooth, to the console.
It’s all very well being able to locate the controller in space, but what about movement around the controller’s axis? Adding gyroscopic sensors gives another three dimensions of movement detection: as well as the X, Y and Z planes used to locate the controller, gyroscopes detect movement of the controller around its central axis – pitch (up/down tilt towards the screen), roll (twist) and yaw (aiming the controller to the left or right of the screen). A basic version of this same technology is used in mobile phones to switch the display from portrait to landscape depending on which way up the device is held – the iPhone 4 being a perfect example of this technology currently put to extremely good use.
Gyroscopes are an ideal way to detect motion about a central axis – the orientation of a spinning or vibrating gyroscope attached to a low-friction mount remains the same regardless of movement in the surface to which the mount is attached.
There are different types of MEMS gyroscope sensors – the sensor inside the PlayStation Move contains a set of three tiny tuning-fork-shaped pieces of quartz placed at mutually perpendicular angles in a charged field. The quartz is piezoelectric, so when a current is applied, the forks vibrate. Rotation about the axis of the forks changes the forces at work in the crystal: the plane of vibration stays the same but the frequency of the vibrations changes. Detectors monitor capacitance fluctuations in the charged field to calculate the movement of the controller relative to the forks.
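A toy model of reading one such fork: treat the rotation rate as proportional to the shift in the fork’s vibration frequency. The base frequency and scale factor below are invented for the example, not real PlayStation Move figures.

```python
# Hypothetical sketch of one tuning-fork gyro axis: rotation rate is
# taken as proportional to the measured shift in vibration frequency.

BASE_FREQ_HZ = 32768.0      # resting vibration frequency of the fork (invented)
DEG_PER_SEC_PER_HZ = 0.5    # angular rate per Hz of frequency shift (invented)

def rotation_rate(measured_freq_hz):
    """Estimate angular rate (deg/s) from the fork's measured frequency."""
    return (measured_freq_hz - BASE_FREQ_HZ) * DEG_PER_SEC_PER_HZ

print(rotation_rate(32768.0))   # no frequency shift -> no rotation: 0.0
print(rotation_rate(32770.0))   # a 2 Hz shift reads as 1.0 deg/s
```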
Motion-control systems combine the data from their internal gyroscopes and accelerometers to produce super-accurate information about location in space (X, Y and Z planes), and movement about the controller’s axis (pitch, roll and yaw). For extra precision, some systems also throw in a micro-compass (like those used in GPS and satnav systems).
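One common way to combine the two sensor types – not necessarily what these consoles use internally – is a complementary filter: the gyroscope is trusted over short timescales, while the accelerometer’s reading of gravity corrects the gyroscope’s long-term drift. A minimal sketch, with illustrative numbers throughout:

```python
import math

# A minimal complementary filter: blend the gyroscope's integrated rate
# (fast but drifts) with the accelerometer's gravity-based tilt (noisy
# but absolute). ALPHA and the sample numbers are illustrative only.

ALPHA = 0.98  # trust the gyro short-term, the accelerometer long-term

def fuse_pitch(prev_pitch_deg, gyro_rate_dps, accel_y, accel_z, dt):
    """Return a fused pitch estimate in degrees."""
    gyro_pitch = prev_pitch_deg + gyro_rate_dps * dt            # integrate rate
    accel_pitch = math.degrees(math.atan2(accel_y, accel_z))    # tilt from gravity
    return ALPHA * gyro_pitch + (1 - ALPHA) * accel_pitch

# Controller held still and level: gravity is entirely on the Z axis,
# so the estimate stays at zero instead of drifting
pitch = 0.0
for _ in range(100):
    pitch = fuse_pitch(pitch, gyro_rate_dps=0.0, accel_y=0.0, accel_z=9.81, dt=0.01)
print(round(pitch, 3))  # 0.0
```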
Once the device has accurately detected motion, this needs to be translated into movements that fit on the screen. Nintendo’s Wii uses infrared tracking to determine the cursor’s position. The sensor bar above or below the screen has a set of five infrared (IR) LEDs at each end. These, together with the IR detector at the top end of the Wii Remote, mean that the controller’s position can be triangulated relative to the screen. So if the LEDs are detected towards the top of the Wii Remote’s field of view, the cursor is displayed at the bottom of the screen, and vice versa. IR LEDs are used because regular visible light-emitting diodes would be too difficult to pick out from other light sources, especially the screen.
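The inversion described above amounts to a simple coordinate flip. The camera and screen resolutions below are illustrative, not Nintendo’s actual specifications:

```python
# Simplified sketch of mapping the sensor bar's position in the remote's
# IR camera view to a cursor position on screen. Coordinates assume
# (0, 0) is the top-left corner of both camera view and screen.

CAM_W, CAM_H = 1024, 768        # IR camera resolution (illustrative)
SCREEN_W, SCREEN_H = 1920, 1080 # display resolution (illustrative)

def cursor_position(ir_x, ir_y):
    """Invert the LEDs' camera coordinates to get the on-screen cursor.

    If the LEDs appear near the top of the camera's field of view, the
    remote must be pointing low, so the cursor goes to the bottom.
    """
    cx = (1 - ir_x / CAM_W) * SCREEN_W
    cy = (1 - ir_y / CAM_H) * SCREEN_H
    return cx, cy

# LEDs dead centre in the camera view -> cursor in the centre of the screen
print(cursor_position(512, 384))  # (960.0, 540.0)
```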
Instead of infrared tracking, Sony’s PlayStation Move uses a camera to track visible light from the glowing orb on top of the handheld controller. When it comes to recognizing who is actually playing, systems now incorporate a rather ingenious face- and voice-recognition feature so players don’t have to register or pick an avatar. For face recognition, the PlayStation Eye camera captures a clear shot of the player’s face and then maps individual characteristics onto a face template stored in the system’s memory. It detects faces using the same technology first used in Sony cameras for ‘smile recognition’.
Motion controllers contain microphones not just for sing-along games but also for voice commands and player recognition. So how does this work? Voice-recognition technology is well established in communications and accessibility software. The sound waves created by speech become vibrations in the microphone, which are converted into digital signals. The processor removes ‘noise’ from the data stream (by subtracting a reading of the background noise in the room) and then breaks down the data into unique speech sounds or ‘phonemes’ – there are roughly 50 phonemes in the English language. The processor then compares the data to its stored library of phoneme combinations to work out which words were said.
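The final lookup step can be sketched like this – the phoneme spellings and the tiny word library are invented for the example:

```python
# Toy illustration of the phoneme-lookup step: compare a decoded phoneme
# sequence against a stored library to work out which word was said.

PHONEME_LIBRARY = {
    ("HH", "EH", "L", "OW"): "hello",
    ("P", "L", "EY"): "play",
    ("P", "AO", "Z"): "pause",
}

def decode_word(phonemes):
    """Look up a phoneme sequence; return the matching word, if any."""
    return PHONEME_LIBRARY.get(tuple(phonemes), "<unknown>")

print(decode_word(["P", "L", "EY"]))        # play
print(decode_word(["Z", "Z", "Z"]))         # <unknown>
```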
So what about all those notoriously tricky words in English which are spelled differently but sound the same (otherwise known as homophones)? To decide which homophone to register, the processor is also equipped with a context-checker: it analyses the words around the homophone, checks the combination against stored examples and then selects the spelling that is statistically most likely. The software is also advanced enough to recognize a number of different accents, and the latest videogame systems ‘recognize’ individual players by storing each user’s unique pitch variations, giving you a personal gaming experience every time you turn the console on.
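A toy version of that context check: score each candidate spelling by how often it has been seen next to the surrounding word in a stored corpus, then pick the highest-scoring one. The counts below are invented for illustration:

```python
# Toy homophone context-checker: choose the spelling most often observed
# before the following word in a (here, invented) corpus of examples.

NEIGHBOUR_COUNTS = {
    ("their", "controller"): 50,
    ("there", "controller"): 2,
    ("they're", "controller"): 1,
}

def pick_homophone(candidates, next_word):
    """Return the candidate most frequently seen before `next_word`."""
    return max(candidates, key=lambda w: NEIGHBOUR_COUNTS.get((w, next_word), 0))

print(pick_homophone(["their", "there", "they're"], "controller"))  # their
```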
Every language requires its own phoneme library, which can delay the release of products using this technology. In 2010, Microsoft’s Kinect was initially available in US and UK English, Japanese and Mexican Spanish – with speakers of other languages having to wait until 2011 for updated versions.
With regard to what’s next for videogame controllers, developers are hard at work on three-dimensional games (for use with 3D television screens), eye-gaze direction detection and other mind-bogglingly futuristic technology. As handsets become increasingly unnecessary and producers create a wider range of videogames, there will be plenty to satisfy both casual and hardcore gamers.