Researchers developed the so-called X-Y position indicator for a display system back in the 1960s. That breakthrough became the mouse, which achieved dominance in the 1980s and has remained the leading user input device ever since.
Yet new developments are nibbling away at the rodent's lead. More and more displays are being built as touchscreens, computers are learning to take voice commands, and new sensors even allow control via gestures.
"After years of quiet, a series of new user interface technologies have achieved market readiness," say the IT market researchers at Gartner, a US-based technology research firm. That includes Microsoft's Surface computer employing multi touch technology and the controls on Nintendo's Wii console.
Project Natal is Microsoft's codename for a system built around a camera with a depth sensor, a microphone and a built-in processor; it is rumoured to be coming to the Xbox gaming console in the near future. The device will not just interpret gestures, Microsoft has indicated, but will also translate speech and pantomimed actions onto the screen, regardless of the lighting. Many game developers have already announced plans to support the new user interface.
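Microsoft has published no details of Natal's recognition pipeline, but the basic appeal of a depth camera is easy to illustrate: each pixel reports distance rather than colour, so a hand can be segmented and tracked without caring about lighting. The Python sketch below is purely hypothetical, with simulated frames and made-up thresholds, and only shows the principle of reading a gesture from depth data.

```python
import numpy as np

def nearest_depth(frame):
    """Smallest depth value (closest point) in a frame, in millimetres;
    zeros are skipped because many depth cameras use 0 for 'no reading'."""
    valid = frame[frame > 0]
    return float(valid.min()) if valid.size else float("inf")

def is_push(depths, threshold_mm=150.0):
    """A 'push' gesture: the closest point has moved toward the camera
    by more than threshold_mm over the buffered window."""
    return len(depths) >= 2 and depths[0] - depths[-1] > threshold_mm

# Simulated frames standing in for a Natal-style depth stream: a flat
# background around 2 m, with a hand-sized region moving closer.
rng = np.random.default_rng(0)
history = []
for hand_mm in (900, 850, 780, 700, 620):
    frame = rng.integers(1800, 2200, size=(240, 320)).astype(np.uint16)
    frame[100:140, 150:190] = hand_mm   # the approaching hand
    history.append(nearest_depth(frame))

if is_push(history[-5:]):
    print("push gesture detected")
```

Note that nothing here depends on brightness or colour, which is what lets such a sensor work "regardless of the lighting".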
"The Natal approach is extremely important," says Franz Koller, general director at the consulting firm User Interface Design in Ludwigsburg, Germany. The system permits a new degree of freedom: "For example, I can point to an object and tell it what I want to do," he says.
Experts warn against inflated expectations, though. "We'll be using the mouse for a while yet," Koller says. Nintendo's Wii remote, for example, has been exceptionally well received by gamers. "Even so, it's unlikely I'll ever want to do my bookkeeping with a Wii remote," Koller says.
Koller expects instead that future devices will increasingly offer user interfaces tailored to different applications and preferences. "The individual systems should be viewed as complementary," the expert says. An example might be a tablet PC with both a multi-touch display and a full keyboard.
Another example of a new user interface that has found a place in daily life is Apple's mobile phone. "The iPhone has set off a lot of change," Koller says. The concept is successful because it is fun to use and practical at the same time: multi-touch systems involve almost no learning curve, since the pinch and flick gestures are intuitive.
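How a pinch becomes a zoom comes down to simple geometry: track the distance between two touch points and compare it with the distance when the fingers first landed. The sketch below is an illustrative reduction in Python, not any platform's actual touch API.

```python
import math

def pinch_scale(start, current):
    """Scale factor implied by a two-finger pinch: the ratio of the
    current finger spacing to the spacing when the gesture began."""
    d0 = math.dist(*start)      # distance between the two initial touches
    d1 = math.dist(*current)    # distance between the two current touches
    return d1 / d0 if d0 else 1.0

# Two fingers land 100 px apart, then spread to 150 px apart:
start   = ((100, 200), (200, 200))
current = ((75, 200), (225, 200))
print(pinch_scale(start, current))   # 1.5 -> zoom in by 50%
```

The gesture feels intuitive precisely because the mapping is so direct: spreading the fingers by half enlarges the content by half.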
The same is true of Microsoft's Surface computer, whose 30-inch display can be operated by several hands at once. "A study has shown that older people also have an easier time with this kind of system," Koller explains. The Surface, which is aimed primarily at project work and presentations, allows graphically prepared scenarios to be discussed and explored. Even so, a keyboard can still be connected to it.
It seems like almost every science fiction film nowadays has the hero working a computer by gesturing freely at a projection in a room. The reality isn't all that far off from the fantasy: the US company Oblong has created a spatial working environment dubbed G-Speak which, it claims, can be used to analyze data sets and examine three-dimensional objects, among other applications. The environment is presented to the user on several large displays and controlled with sensor-equipped gloves.
Long-term use of Natal or G-Speak could turn out to be ergonomically questionable, however. "It's very strenuous in terms of posture," Koller believes. Keyboards and mice let the arms rest on a surface, which staves off premature fatigue. The same applies to Microsoft's Surface, where work is done on a horizontal plane.
Touch functionality is also slowly making its way into laptops that are not designed as tablet PCs. The Dell Latitude Z series, for example, includes a scroll function on the edge of the display as well as a touch toolbar for launching programs.
Voice control, meanwhile, is well established, though usually for special applications such as banking, automated customer service lines and car navigation systems. Situations requiring dialogue, or interpretation of the meaning of spoken phrases, still need more work, Koller says: "We don't have functional voice control like on the Starship Enterprise at present."
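The gap Koller describes is the difference between matching a fixed phrase and understanding one. The hedged sketch below, with invented command names, shows why constrained domains like navigation prompts work today while free dialogue does not.

```python
# Constrained voice control reduces to matching a transcript against a
# fixed command grammar. The command names here are invented for
# illustration; real systems map phrases to application actions.
COMMANDS = {
    "navigate home": "start_navigation",
    "check balance": "query_balance",
    "call support": "dial_support",
}

def match_command(transcript):
    """Exact phrase match only -- anything looser would require genuine
    language understanding, which is where current systems fall short."""
    return COMMANDS.get(transcript.strip().lower())

print(match_command("Navigate home"))                   # 'start_navigation'
print(match_command("Could you get me home somehow?"))  # None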