
MIT graduate student Robert Wang and Associate Professor Jovan Popović developed a gesture-based computing system with cheap hardware: an ordinary webcam and a pair of $1 multicolored Lycra gloves.

Other low-cost prototypes, such as the wearable SixthSense, have used tape on the fingertips. Wang said those were limited to “2D information” where “you don’t even know which fingertip [the tape] is corresponding to.”

Wang and Popović’s system can translate the 3D configuration of your hands & fingers to the screen with almost no lag time. (Screencap from the proof-of-concept video shown below.)

Their software compares webcam images of the glove against a reference database of gestures. When a match is found, the software renders the corresponding hand position in a fraction of a second.

Hand-tracking is made possible by the distinctive glove design. The patchwork arrangement is unique to the front and back of the glove; the colors are distinguishable from each other and from background objects–under a range of lighting conditions.
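At its core, the recognition step is a nearest-neighbor lookup: downsample the webcam frame, then find the most similar entry in a precomputed table that maps glove images to hand poses. Here is a minimal sketch of that idea in Python (not the authors’ code; the array layout, image size, and plain Euclidean distance are my assumptions):

    import numpy as np

    def lookup_pose(frame, database_images, database_poses, size=(40, 40)):
        """Return the stored hand pose whose reference glove image best matches frame.

        frame           -- cropped webcam image of the glove, an H x W x 3 array
        database_images -- N x (size[0] * size[1] * 3) array of downsampled reference images
        database_poses  -- list of N corresponding 3D hand configurations
        """
        # Downsample by sampling a coarse grid of pixels so the comparison stays cheap.
        h, w, _ = frame.shape
        ys = np.linspace(0, h - 1, size[0]).astype(int)
        xs = np.linspace(0, w - 1, size[1]).astype(int)
        tiny = frame[np.ix_(ys, xs)].astype(np.float32).ravel()

        # Nearest neighbor under plain Euclidean distance over pixel colors.
        dists = np.linalg.norm(database_images - tiny, axis=1)
        return database_poses[int(np.argmin(dists))]

Because the database is built ahead of time, the per-frame work is just one downsample and one lookup, which is what keeps the lag low.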

Possible applications include video games and engineering. For example, designers could use this system to manipulate 3D models of commercial products or civic structures.

Wang is expanding his idea and plans to design similarly patterned shirts for use in whole-body motion capture.

—–
Those gloves are pretty rad on their own…is it bad that I want a pair to wear and not compute with? A shirt would be fantastic.

According to Microsoft, we have three classes of computers: the fixed desktop, the mobile laptop, and the ultramobile device. The lines have blurred between the mobile and ultramobile classes, as smaller & lighter notebooks and increasingly larger mini-notebooks & portable media players are introduced into the consumer market.

In the future, these classes will converge into a multi-touch composite with a novel form factor that will bring three-dimensional interactivity to flat surfaces and open spaces, with applications in distance communication and augmented reality. It will provide an immersive, intuitive, and revolutionary user experience with two modes of operation: single and collaborative.

We will not be limited to the standard keyboard and mouse. Instead, the future computer will accept multimodal input and incorporate sight, sound, and touch in synchrony. Our displays will not be isolated graphical output screens, but rather mixed-reality planes.

—-
Next-generation interactive displays and immersion technologies:

  1. Picoprojection: holographic, HD
  2. Flat panel displays: OLED, e-Paper/e-Ink, novel optics, rich visualizations
  3. Sensing technologies: optical/audio/resistive/capacitive sensors, 3D range-sensing cameras (human body tracking), image & voice recognition

Mainstream implementation of touch and multi-touch capable devices is the “next move,” if you will. But first, we need more refinement in the technology and standardization of both the marketing terms & the underlying protocols.

Advertising must explain all the features, while hardware must enable touch out of the box, recognizing and supporting a minimum of four input points. There needs to be innovation in manufacturing & operations. Companies must strive for the highest quality in sensor design, integration, and software to deliver the best user experience. Definitions must be set to distinguish “direct manipulation” from “multi-touch” and from “gestural interactivity.”
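To make the “minimum of four input points” requirement concrete, here is a minimal sketch of a test harness that counts simultaneous contacts as the hardware reports them (the event format and class are hypothetical, not any vendor’s API):

    class ContactTracker:
        """Count simultaneous touch contacts from a generic stream of down/up events."""

        def __init__(self):
            self.active = set()  # IDs of contacts currently on the surface
            self.peak = 0        # most contacts seen at the same time

        def handle(self, event_type, touch_id):
            if event_type == "down":
                self.active.add(touch_id)
            elif event_type == "up":
                self.active.discard(touch_id)
            self.peak = max(self.peak, len(self.active))

        def meets_minimum(self, required=4):
            # True once the device has actually demonstrated `required` simultaneous points.
            return self.peak >= required

    tracker = ContactTracker()
    for event in [("down", 1), ("down", 2), ("down", 3), ("down", 4), ("up", 2)]:
        tracker.handle(*event)
    print(tracker.meets_minimum())  # True: four fingers were down at once

A panel that never reports more than two active contact IDs at once would fail meets_minimum(4), no matter how the box is labeled.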

It is going to be a while. Amongst other issues are the barriers of cost and simplicity. Touchscreens offered by third-party hardware vendors must be purchased separately and often require special software & drivers that ramp up the cost. Ideally, touchscreens would be inexpensive monitors that connect via USB or VGA and work with built-in Windows or Mac OS X functionality. Additionally, touchscreen adoption will be driven in part by the development of useful apps.

—-
Word is out that Windows 7 supports multi-touch, which is a huge step in the right direction. When will Apple get in the game (post-Snow-Leopard)? 

Touchscreens will become affordable eventually, no doubt. Retrofitting existing displays is an option for now; after all, the major difference between a touchscreen and an ordinary LCD is that the latter lacks touch sensors. PQ Labs makes touchscreen overlays that you can mount onto your gigantic LCD or plasma TV to enable multi-touch. Their product demos were pretty impressive.

Perceptive Pixel multi-touch wall for storyboarding & ideation

While working on a group project, I noticed how ill-suited mobile computers were for collaborative use. Even with the laptop’s display connected to an external projector and a second mouse attached, it was impossible for more than one person to make edits while we pulled together a PowerPoint presentation. However many ideas were tossed out that could have been explored, only one set of actions went through, relayed as vocal instructions to the person at the laptop. This hampered productivity.

Imagine trying to have a conversation with five of your best friends whom you haven’t seen in a year (yay!), except only one of you can speak at a time, with no interruptions or exclamations. This is no way to work or socialize.

I wished, then, for an operating system that would support a minimum of dual input (at least two mice, two cursors on one screen) for multiple-user single-tasking, AKA “group conversations” on a single workstation.
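The core of that wish is small: route each input device to its own cursor instead of collapsing everything into one system pointer. A minimal sketch, assuming a hypothetical stream of (device_id, dx, dy) motion events (real support would have to come from the operating system and its toolkits):

    def route_pointer_events(events):
        """Map each input device to its own on-screen cursor position.

        events -- iterable of (device_id, dx, dy) relative-motion tuples,
                  e.g. from two separate mice plugged into one workstation.
        """
        cursors = {}  # device_id -> (x, y)
        for device_id, dx, dy in events:
            x, y = cursors.get(device_id, (0, 0))
            cursors[device_id] = (x + dx, y + dy)
        return cursors

    # Two mice moving independently produce two cursors:
    print(route_pointer_events([("mouse_a", 5, 0), ("mouse_b", 0, 7), ("mouse_a", 3, 2)]))
    # {'mouse_a': (8, 2), 'mouse_b': (0, 7)}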

—–

Computing hardware has advanced by leaps & bounds, becoming increasingly powerful, efficient, and reliable, whereas mainstream graphical user interfaces have remained largely unchanged.

Technology has allowed us to amass an immense amount of data in the digital age (satellite imaging, radiology scans, genome sequences), but no conventional user interface can visualize, analyze, and present that data as readily as a multi-touch platform can. Other than being downright cool, touch is ideal for consuming and presenting information. Because it is a more natural interface, it increases user productivity.

I’ve been drawn to it from the start.

—-
Zooming in and out of photographs is direct manipulation using two fingers of one hand, a bare-bones gimmick for ads; it doesn’t scratch the surface of what true multi-touch (more than two input points!) is capable of.
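That two-finger zoom boils down to a single number: the ratio of the distance between the fingertips now to the distance when the gesture began. A quick sketch, with hypothetical function and argument names:

    import math

    def pinch_scale(start_points, current_points):
        """Return the zoom factor implied by two fingertips moving apart or together."""
        (x1, y1), (x2, y2) = start_points
        (x3, y3), (x4, y4) = current_points
        start_dist = math.hypot(x2 - x1, y2 - y1)
        current_dist = math.hypot(x4 - x3, y4 - y3)
        return current_dist / start_dist

    # Fingers that start 100 px apart and end up 150 px apart give a 1.5x zoom.
    print(pinch_scale([(0, 0), (100, 0)], [(0, 0), (150, 0)]))  # 1.5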

For example, Perceptive Pixel offers pressure-sensitive multi-touch displays that can sense an unlimited number of simultaneous touches with accuracy and precision. Their displays come bundled with the right software and have applications in geo-intelligence, broadcasting, medical imaging, data exploration, digital storyboarding, industrial design…the list goes on.

At IDC2009, I had the privilege of meeting Steven Bathiche (Director of Research, Applied Sciences Group, Entertainment & Devices Division, Microsoft Corp.) and listening to his presentation on advances in surface computing. He pulled up a slew of videos demonstrating conceptual and working prototypes from the Microsoft design labs, and I was utterly awestruck.

Until then, I had been steeped in Apple’s powerful marketing campaigns and lost sight of the obvious: that Microsoft is an immense international entity with resources that, if leveraged appropriately, could surpass Apple a hundred times over. Microsoft’s research & development rocks, as far as I’m concerned. They are doing some unbelievable experimentation with surface computers (think Microsoft Surface but 100X more awesome).

—-
How do I begin to describe that which has the feel of pure fiction? It’s better if I show you:

This is the Productivity Future Vision montage from Microsoft Office Labs. Though very much a concept video, it is grounded in research and is a plausible articulation of what to expect by the year 2019. There is more artistic license on the software side, but the actual hardware is all too real. Many of the “concepts” have been prototyped or are already somewhere along in development.

From the video, we see:

  • Speech, text, and cultural translation.
  • Low cost, multi-touch, edge-to-edge displays; flexible, transparent displays.
  • Software clusters brought together in a natural user interface.
  • Active workspaces with rich graphics, achieved with ambient projectors and thin OLED displays.
  • Large displays allowing for different user inputs (touch, mouse, stylus).
  • Mobile devices with modular form factors that can access sensor networks and information resources, plus image analysis and projection abilities.
  • Seamless secure data sharing and integrated workflow tools between devices and across networks.

Check out the coffee mug at 4:12 – it’s to die for. Nothing is impossible! The music makes me feel very optimistic.

—-
You’ll see technology becoming more invisible, but working harder for you in both your work and personal life. Imagine a future where creating a document with a colleague will be as easy as having a conversation. Making connections with people and your content will be secure and seamless. Relevant insight and information will be delivered proactively and in context to the task at hand.

Mobile devices will be more powerful than desktop computers of today. Technology will connect you with the information you need, when and where you need it, whether it be your local coffee shop, an airport, or a roof top in Hong Kong. Software will be there to make getting things done as efficiently as possible in new ways that are more natural.

[“Productivity Re-Imagined” via Microsoft Office Labs]
