For a very long time, there have been pushes to move computing beyond the confines of the keyboard. The mouse actually dates from 1963, but it wasn't until the mid-1980s that mice became particularly widespread in computing. In recent years, even the mouse has been updated and improved, whether through the switch from mechanical, ball-based mice to optical and laser devices, the move from cabled to wireless mice, or the more oddball mouse concepts out there, such as Air Mice that double as 3D pointers.
Mice themselves might become a technological oddity as (if you'll pardon the rather obvious pun) touch really does take hold. Tablet PCs are the obvious place where touch is most prominent, but they're not the only platform; a number of vendors offer PCs and notebooks with inbuilt touch capability, thanks to the fact that Windows 7 natively supports touch-based input. To date, I've not been thrilled by touch on Windows 7, largely because while it works, few applications are written in a way that makes touch a more sensible choice than a mouse and keyboard. That doesn't mean a new application can't use touch sensibly, but at this stage it's a nice extra for Windows 7 rather than a key feature.
Operating systems that use touch as the basis for everything and are written that way, such as Google's Android and Apple's iOS, fare better in this regard, because software developers think of them in those terms from the outset.
Touch still relies on physical contact, and one of the other reasons I've yet to be really wowed by a touch-capable notebook is the physical effort involved in reaching over to the screen. Not that this is an onerous task per se, but on a regular notebook you're reaching right past a perfectly usable keyboard and trackpad to press an onscreen button that could simply be clicked instead. That doesn't make a whole lot of sense to me.

Even that effort might rather rapidly become something quaint, however, and via a rather unusual agency: console gaming. Specifically, Microsoft's Kinect, an add-on camera for the Xbox 360 console. The Kinect is intended (at this stage) for games, as it allows a gamer to wriggle, jump, box, or do whatever the game commands, and see those movements mapped onto an in-game character. That's the theory, but it took very little time at all for intrepid hacking types to grab hold of the USB-connected Kinect camera and use its body-mapping technology for all sorts of other purposes on a PC.

Interestingly, Microsoft hasn't jumped on the lawyer-heavy bandwagon to stop this kind of thing, and some press interviews suggest that a Windows version of Kinect might not be that far away. Suddenly, all those cool sci-fi images of people working on virtual floating displays that don't physically exist at all seem very close indeed.