MIT graduate student Robert Wang and Associate Professor Jovan Popović developed a gesture-based computing system with cheap hardware: an ordinary webcam and a pair of $1 multicolored Lycra gloves.
Other low-cost prototypes, such as the wearable SixthSense, have used tape on the fingertips. Wang said those were limited to “2D information” where “you don’t even know which fingertip [the tape] is corresponding to.”
Wang and Popović’s system can translate the 3D configuration of your hands and fingers to the screen with almost no lag. (Screencap from the proof-of-concept video shown below.)
Their software compares webcam images of the gloves against a reference database of gestures. When a match is found, the software renders the corresponding hand position in a fraction of a second.
Hand-tracking is made possible by the distinctive glove design. The patchwork arrangement is unique to the front and back of the glove; the colors are distinguishable from each other and from background objects under a range of lighting conditions.
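The lookup step described above amounts to a nearest-neighbor search: reduce the webcam frame to a small feature vector, then find the closest precomputed pose in the database. Here is a minimal sketch of that idea, assuming hypothetical 4-element feature vectors (the real system’s features and database are, of course, MIT’s own):

```python
import numpy as np

def nearest_gesture(query, database):
    """Return the index of the database pose closest to the query.

    query:    1-D feature vector from a downsampled glove image (hypothetical)
    database: 2-D array, one precomputed feature vector per known hand pose
    """
    # Squared Euclidean distance from the query to every stored pose
    dists = np.sum((database - query) ** 2, axis=1)
    return int(np.argmin(dists))

# Toy database: three "poses" as 4-element feature vectors
db = np.array([[0.0, 0.0, 0.0, 0.0],
               [1.0, 1.0, 1.0, 1.0],
               [0.9, 1.1, 1.0, 0.8]])
print(nearest_gesture(np.array([1.0, 0.9, 1.1, 0.9]), db))  # → 1
```

With a large enough database of rendered hand poses, even this brute-force match can run at interactive rates, which is presumably why the lookup-based approach keeps latency so low.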
Possible applications are in video games or in engineering. For example, designers could use this system to manipulate 3D models of commercial products or civic structures.
Wang is expanding his idea and plans to design similarly patterned shirts for use in whole-body motion capture.
Those gloves are pretty rad on their own…is it bad that I want a pair to wear and not compute with? A shirt would be fantastic.
At E3 2009, Sony demo’d the engineering prototype of the new PS3 motion controller, which works with the PlayStation Eye. The “one-to-one” tracking with “sub-millimeter” precision is kind of mesmerizing (skip ahead to 3:44 & 8:35 of the video below). It should launch sometime in 2010.
At the Electronic Entertainment Expo, AKA E3 2009, Microsoft introduced Project Natal, a new interface for the Xbox 360 console that eliminates the need for a physical controller. Instead, players game via an accessory capable of voice & image recognition and full-body 3D motion tracking (microphone + video camera + infrared camera + nifty software).
Way to one-up the Nintendo Wii, Microsoft…conceptually, anyway. We will have to see about the implementation. How awesome would it be if Project Natal was backwards compatible?
Product concept video:
We were shown an example of the raw output of the system, which melds the two sources and then breaks them down into a wireframe of objects, a heatmap (for depth), and a point-map (which is akin to one of those hand imprint needle toys). The software merges all of this together to create a picture of movement in the room, allowing for some pretty crazy detail of what is going on…The accuracy is far better than you would imagine it could be; it’s very impressive stuff. [via Engadget]
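The depth “heatmap” and point-map mentioned in that quote come from unprojecting each depth pixel into 3D space. Natal’s actual pipeline isn’t public, but as a rough sketch, here is how a depth image maps to a point cloud under a pinhole-camera model with made-up intrinsics:

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Unproject a depth image (meters) into an N x 3 point cloud.

    fx, fy, cx, cy are pinhole-camera intrinsics (hypothetical values below).
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx   # back-project pixel columns
    y = (v - cy) * z / fy   # back-project pixel rows
    return np.dstack((x, y, z)).reshape(-1, 3)

# Toy 2x2 depth map: every pixel reports an object 2 m away
pts = depth_to_points(np.full((2, 2), 2.0), fx=500, fy=500, cx=1, cy=1)
print(pts.shape)  # (4, 3)
```

Once you have points in 3D, fitting a skeleton to them is where the “nifty software” earns its keep.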
Back at IDC 2009, I heard that Microsoft acquired a company manufacturing 3D range-sensing cameras. So the rumors were true–it was for the XBox. Human face/body tracking is wicked cool, because of its precision & accuracy.
I grew up in a family where video games were outlawed in favor of actual physical activity. Then Nintendo came along and changed everything. And now we own a Wii.
I wonder if gesture-based control systems will ever replace traditional controllers?
According to Microsoft, we have three classes of computers: the fixed desktop, the mobile laptop, and the ultramobile device. The lines have blurred between the mobile and ultramobile classes, as smaller & lighter notebooks and increasingly larger mini-notebooks & portable media players are introduced into the consumer market.
In the future, these classes will converge into a multi-touch composite with a novel form factor that will bring three-dimensional interactivity to flat surfaces and open spaces, with applications in distance communication and augmented reality. It will provide an immersive, intuitive, and revolutionary user experience with two modes of operation: single and collaborative.
We will not be limited to the standard keyboard and mouse. Instead, the future computer will accept multimodal input and incorporate sight, sound, and touch in synchrony. Our displays will not be isolated graphical output screens, but rather mixed reality planes.
Next-generation interactive displays and immersion technologies:
- Picoprojection: holographic, HD
- Flat panel displays: OLED, e-Paper/e-Ink, novel optics, rich visualizations
- Sensing technologies: optical/audio/resistive/capacitive sensors, 3D range-sensing cameras (human body tracking), image & voice recognition
While working on a group project, I noticed how ill-suited mobile computers were for collaborative use. Even with the computer’s display connected to an external projector and a second mouse attached, it was impossible for more than one person to make edits while pulling together a PowerPoint presentation. Only one set of actions went through, via vocal instructions to the laptop’s user, regardless of how many ideas were tossed out that could have been explored. This hampered productivity.
Imagine trying to have a conversation with five of your best friends that you haven’t seen in a year (yay!) except only one of you can speak at a time, with no interruptions or exclamations. This is no way to work or socialize.
I wished, then, for an operating system that would support a minimum of dual input (at least two mice, two cursors on one screen) for multiple-user single-tasking, AKA “group conversations” on a single workstation.
Computing hardware has advanced by leaps & bounds, becoming increasingly powerful, efficient, and reliable, whereas mainstream graphical user interfaces have remained largely unchanged.
Technology has allowed us to amass an immense amount of data in the digital age (satellite imaging, radiology scans, genome sequences), but no user interfaces exist which can visualize, analyze, and present data as readily as multi-touch platforms can. Other than being downright cool, touch is ideal for consuming and presenting information. Because it is a more natural interface, it increases user productivity.
I’ve been drawn to it from the start.
Zooming in and out of photographs with two fingers of one hand is direct manipulation at its most basic, a bare-bones gimmick for ads; it doesn’t scratch the surface of what true multi-touch (more than two input points!) is capable of.
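For what it’s worth, even that basic two-finger zoom has a tidy bit of geometry behind it: the zoom factor is just the ratio of the current finger spread to the initial spread. A minimal sketch, with hypothetical (x, y) touch coordinates in pixels:

```python
import math

def pinch_scale(p1_start, p2_start, p1_now, p2_now):
    """Zoom factor for a two-finger pinch: current spread / initial spread."""
    def dist(a, b):
        # Euclidean distance between two (x, y) touch points
        return math.hypot(a[0] - b[0], a[1] - b[1])
    return dist(p1_now, p2_now) / dist(p1_start, p2_start)

# Fingers start 100 px apart and spread to 200 px: a 2x zoom
print(pinch_scale((0, 0), (100, 0), (0, 0), (200, 0)))  # 2.0
```

True multi-touch generalizes this to arbitrarily many contact points, which is where rotation, multi-object manipulation, and multi-user input come in.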
For example, Perceptive Pixel offers pressure-sensitive multi-touch displays that can sense an unlimited number of simultaneous touches with accuracy and precision. Their displays come bundled with the right software and have applications in geo-intelligence, broadcasting, medical imaging, data exploration, digital storyboarding, industrial design…the list goes on.
At IDC2009, I had the privilege of meeting Steven Bathiche (Director of Research, Applied Sciences Group, Entertainment & Devices Division – Microsoft Corp.) and listening to his presentation on advances in surface computers. He pulled up a slew of videos demonstrating conceptual and working prototypes from the Microsoft design labs–I was utterly awestruck.
Until then, I had been steeped in Apple’s powerful marketing campaigns and lost sight of the obvious: that Microsoft is an immense international entity with resources that, if leveraged appropriately, could surpass Apple a hundred times over. Microsoft’s research & development rocks, as far as I’m concerned. They are doing some unbelievable experimentation with surface computers (think Microsoft Surface but 100X more awesome).
How do I begin to describe that which has the feel of pure fiction? It’s better if I show you:
This is the Productivity Future Vision montage from Microsoft Office Labs. Though a concept video by all rights, it is very much grounded in research and is a plausible articulation of what to expect by the year 2019. There is more artistic license on the software side, but the actual hardware is all too real. Many of the “concepts” have been prototyped or are somewhere along in development.
From the video, we see:
- Speech, text, and cultural translation.
- Low cost, multi-touch, edge-to-edge displays; flexible, transparent displays.
- Software clusters brought together in a natural user interface.
- Active workspaces with rich graphics, achieved with ambient projectors and thin OLED displays.
- Large displays allowing for different user inputs (touch, mouse, stylus).
- Mobile devices with modular form factors that can access sensor networks and information resources. Image analysis and projection abilities.
- Seamless secure data sharing and integrated workflow tools between devices and across networks.
Check out the coffee mug at 4:12 – it’s to die for. Nothing is impossible! The music makes me feel very optimistic.
You’ll see technology becoming more invisible, but working harder for you in both your work and personal life. Imagine a future where creating a document with a colleague will be as easy as having a conversation. Making connections with people and your content will be secure and seamless. Relevant insight and information will be delivered proactively and in context to the task at hand.
Mobile devices will be more powerful than desktop computers of today. Technology will connect you with the information you need, when and where you need it, whether it be your local coffee shop, an airport, or a roof top in Hong Kong. Software will be there to make getting things done as efficiently as possible in new ways that are more natural.
[“Productivity Re-Imagined” via Microsoft Office Labs]
The conference is in less than two weeks and I am ecstatic! I will be pondering a future involving portable and surface computing with interactive displays in the days leading up to IDC2009.
Here are the majority of the presentation topics that I will be covering in future posts:
Market and Industry Overview
- Touch in a Touchless World
- The Impact of Wireless Social Networking on the Evolution of the Display Industry
Advances in Touch Technology
- The Advantages of Force-Based Touch Technology
- MultiTouch LCD Cell – Tough and Modular
- Multitouch and Some Food for Thought: Designing The Best User Experience
- Capacitive vs. Resistive Multi-Touch: A User-Centric Comparison
- DuoSense: The Hands-On Computing Revolution
- Multi-Discipline Multi-Touch Development at Drexel University
- Moving from Mechanical Buttons to Capacitive UIs: A Solid-State World of Possibilities
- How Multi-Touch, Immersion and 3D Tracking Technologies are Revolutionizing Interactive Displays
- Getting to the Heart of Touch
- Measuring the Effectiveness of Digital Signage Using “Gaze-Tracking” SMS and Other Interactive Technologies
- Tactable: Designing Multi-Touch Experiences
Applications and Case Studies
- Sensitive Object Acoustic Technology: The Next Revolution of Touch
- Trends in Interactive Gaming
- Facilitating Human Interaction with Interactive Devices
- The Evolution of a Revolution – The Next Generation iDrive
- Interactivity in Self-Service Applications
- Making Scents
- Robotic Interactive Displays for Music Entertainment
- Haptics for Interactive Displays
- Trends in Immersive and Holographic Interactive Displays