The future of 3DUIs

Reflecting on our class discussions and projects, here are a few thoughts that came to mind about the future of 3DUIs.

Are 3DUIs easier to create for entertainment?

Most of our projects (which we think are cool) are related to fun.  The application-specific designs we thought of in the beginning either didn’t have the “wow” factor or seemed to be rehashes of previous techniques.  Does this mean that it is easy to design cool interfaces for entertainment?  Entertainment is also the most prominent place for 3DUIs currently (gaming interfaces in particular).  Does this color our perception of what we think the masses will think of as cool?

How to design cool 3DUIs?

One way of doing this is by creating a new piece of hardware (multitouch film). Another way is to take something existing and push it further (Kinect Fusion). The last way is the 3DUI-fication of a current 2D interaction/interface. Our class focused more on the last way. Why? Is it because of the time constraints of the class, or is it because we want more engaging interactions in our current interfaces?

Is it still possible to innovate in 3D UIs?

Through our class experiences and ideas, I think there are many areas that can use 3DUIs.  In particular, there were many things we couldn’t accomplish exactly because the technology wasn’t there yet.  This is a great sign because it means that moving forward we will have new things to try and implement. One important theme we take advantage of in our project is that although not all of the processes are automated, the user still gets the effect of having a world they can interact with.  Prototyping and working with ambiguity in terms of sensor capabilities is important for pushing the field further.

Are the innovations in 3D UIs today different than they were in the past? If so, why?

I think that 3DUIs of the past and today are different because we now have more physical tools to assist us.  Before, there were no HMDs, CAVEs, Kinects, or Moves.  Graphics programming was done from scratch using OpenGL or vector graphics.  Therefore, the focus seemed to be more on how to capture 3D input.  Now 3DUIs are focused more on the interactions.  In particular, we see projects that look at the interplay between virtual and real objects, physical expression, and fun.

How do new technologies spur innovation in 3D UIs?

New technologies played a big role in our projects.  Most of the projects in our class used the Kinect not because there weren’t other ways to accomplish the same motion capture, but because it is new.  Using the Kinect not only adds to the novelty of the interface; its form factor also allows others to experience our creations.  Therefore, we can say that new technologies have the capability of creating new techniques (modeling with the Kinect, multi-touch with capacitive films, etc.) or bringing techniques to the masses (Kinect, Move, retinal displays for AR).

What design philosophies, inspirations, or methodologies can lead to innovation in 3D UIs?

The most important thing is baby steps.  You can go the route of creating new hardware for an interesting technique, but there is still so much left to explore with what has already been created.  It is important to understand that technology shouldn’t be the limiting factor when creating a new 3DUI.  The real limit is the designer’s imagination.  With this in mind, inspiration can come from anywhere: current games, daily tasks, things that you imagined doing, or new ways to approach a problem.

When we use new technologies or philosophies, are we ignoring the design principles and techniques developed in traditional 3D UIs?

We can’t help but be influenced by what we have been previously exposed to.  Not to mention, we still use traditional 3DUIs as a measure for comparing our techniques.  That being said, we need to look outside traditional design principles in order to move forward.  We currently don’t have a perfect understanding of 3DUIs; therefore, we shouldn’t have design principles that are set in stone.  Eventually there may be a standard (like the WIMP interface for desktop interaction) in which we can have these types of concrete dos and don’ts.  Until then, we need to keep breaking the rules :D

Captain’s Log 22.03.2012

The Game

Unity has a great offer making a free iOS developer license available (more details here).  So I have had the pleasure of developing our game Insane Llamas using their software.  It has some great features that allow for easy scripting, editing, and debugging.  This is a much welcomed change from some of the open source projects we were using, because Unity has a large community base and good documentation.

So far, we have functionality built in for launching Kiwis based on camera position, destroying llamas, and rigid body physics.  The next steps are to integrate PTAM with the camera, work on the models, and set up the real-world environment.

The Environment

We have also found a room that we can use for our final environment.  We can add as much real-world clutter as we want to give a fun and challenging setting for our great llama battle. Now the question becomes how big of a volume we can capture with KinFu.  This will be the determining factor for how big the virtual objects will be and how intricate the level can be.

The Prop

We have also finished our slingshot prop!  After much work at Dr. Quek’s workshop, we have a lightweight and protective design for the iPad.  The next step will be painting it some cool, funky colors.

iPad troubles!

So it seems that the iPad doesn’t support having both the front and back cameras capturing data at the same time.  This really puts a halt to our proposed 3D interaction.  Fortunately, there are some cool people doing work in our same area who have posted a neat solution that we can take advantage of.

Now, instead of using vision to determine where the slingshot will be launched, we will use these stretch sensors.

These awesome sensors are made from conductive rubber and change resistance as they stretch!  This will allow us to have position determined through mechanical means instead of using the front camera.  Now the only thing that we need is a way to transmit the resistance measurements to the iPad.  Enter Redpark, a development company which has a cable that connects an Arduino to the iPad.

Now our launching setup looks like this:  

We will use a LilyPad Arduino to perform all the analog input reading.  We chose this board because it is very thin and will be less encumbering to the user and case space.  It will connect to, and be powered by, the iPad via the Redpark cable.

The stretch sensor attaches to the case so that it can detect the release from any quadrant of the iPad face.  We think the best way to do this is by having 4 pieces of the rubber sensor.  By knowing the length of each leg, we can determine the relative position of the user’s pinch and release.
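As a rough illustration of the sensing side, here is a minimal Arduino-style sketch. It assumes each leg is wired as a voltage divider read on analog pins A0–A3 and anchored at one corner of the case; the pin choices, calibration, and position heuristic are placeholders for illustration, not our final circuit.

```cpp
// Sketch: estimate the pinch position from four stretch-sensor legs.
// Assumed wiring: A0 = top-left, A1 = top-right, A2 = bottom-left,
// A3 = bottom-right corner of the case.
const int legPins[4] = {A0, A1, A2, A3};

// Convert a raw ADC reading (0-1023) into an approximate leg length.
// Placeholder linear calibration; the real resistance-vs-stretch curve
// would have to be measured per sensor.
float readingToLength(int raw) {
  return 0.05f * raw;  // arbitrary units
}

void setup() {
  Serial.begin(9600);  // stream estimates out over the Redpark serial cable
}

void loop() {
  float len[4];
  float total = 0.0f;
  for (int i = 0; i < 4; i++) {
    len[i] = readingToLength(analogRead(legPins[i]));
    total += len[i];
  }
  if (total < 1e-3f) return;  // avoid dividing by zero on bad readings

  // The farther the pinch point is from a pair of corners, the longer
  // those two legs are, so normalized leg lengths give a rough
  // relative (x, y) in [0, 1] across the iPad face.
  float x = (len[0] + len[2]) / total;  // left legs long  -> pinch is to the right
  float y = (len[0] + len[1]) / total;  // top legs long   -> pinch is lower

  Serial.print(x);
  Serial.print(",");
  Serial.println(y);
  delay(20);  // ~50 Hz updates
}
```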

 

Launching Kiwis aka Fun with OpenCV

To create our cool 3D launching interaction, we are using computer vision to capture finger positions. The easiest way to do this is to track the fingertips using colored markers.

The only downside to this method is that the user will now need to put on some sort of colored marker in order to interact with our system (or paint their nails / fingers).  For our application we will probably use the ends of colored latex gloves to provide the tracked color without the mess.

The pro of this approach is more robust tracking of the fingers, which seems pertinent for this high-stakes game situation (we don’t want the Insane Llamas to win, do we?).  Although there are compromises that could be made (open-hand and closed-hand gesture recognition would be easier than markerless finger tracking), we think that this approach will provide the best game experience.

To track the fingers we will be using OpenCV, which has a port for iOS devices.  With some sample code from AI Shack I was able to get a prototype working for tracking two colors, in this case red and yellow.  In the image below you can see the paths of the objects’ centers of mass and the threshold used for determining the objects.
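For reference, a stripped-down desktop version of the idea might look like the sketch below. The HSV ranges are rough guesses rather than our tuned values, and the iOS port wraps the same calls a little differently.

```cpp
// Minimal sketch of two-color marker tracking with OpenCV:
// threshold each color in HSV, then take the center of mass of the mask.
#include <opencv2/opencv.hpp>

// Return the center of mass of all pixels inside the given HSV range,
// or (-1, -1) if the color is not visible.
cv::Point2f trackColor(const cv::Mat& hsv,
                       const cv::Scalar& lo, const cv::Scalar& hi) {
    cv::Mat mask;
    cv::inRange(hsv, lo, hi, mask);           // threshold the color
    cv::Moments m = cv::moments(mask, true);  // binary image moments
    if (m.m00 < 1e-3) return cv::Point2f(-1, -1);
    return cv::Point2f(m.m10 / m.m00, m.m01 / m.m00);
}

int main() {
    cv::VideoCapture cap(0);
    cv::Mat frame, hsv;
    while (cap.read(frame)) {
        cv::cvtColor(frame, hsv, cv::COLOR_BGR2HSV);
        // Rough ranges; note that red also wraps around hue ~180,
        // so a second range may be needed in practice.
        cv::Point2f yellow = trackColor(hsv, cv::Scalar(20, 100, 100),
                                             cv::Scalar(35, 255, 255));
        cv::Point2f red    = trackColor(hsv, cv::Scalar(0, 150, 100),
                                             cv::Scalar(10, 255, 255));
        cv::circle(frame, yellow, 8, cv::Scalar(0, 255, 255), 2);
        cv::circle(frame, red,    8, cv::Scalar(0, 0, 255),   2);
        cv::imshow("markers", frame);
        if (cv::waitKey(1) == 27) break;      // Esc to quit
    }
    return 0;
}
```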

The next step is to determine pinch and release events for the two colors. Once that is complete, we can determine trajectory and force for our Kiwis!
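One plausible way to do this is to watch the distance between the two marker centroids and treat a sudden spread of pinched fingers as the release. The pixel thresholds below are placeholders, and the hysteresis is there so the state doesn’t flicker frame to frame.

```cpp
// Sketch: derive pinch / release events from two tracked markers.
#include <opencv2/core.hpp>
#include <cmath>

enum class PinchState { Open, Pinched };

struct PinchDetector {
    PinchState state = PinchState::Open;
    float pinchThresh   = 25.0f;  // px: fingers considered together
    float releaseThresh = 45.0f;  // px: hysteresis before we call it a release

    // Returns true exactly once per release (i.e. when the Kiwi launches).
    bool update(const cv::Point2f& a, const cv::Point2f& b) {
        float d = std::hypot(a.x - b.x, a.y - b.y);
        if (state == PinchState::Open && d < pinchThresh) {
            state = PinchState::Pinched;      // fingers closed: loading
        } else if (state == PinchState::Pinched && d > releaseThresh) {
            state = PinchState::Open;         // fingers opened: fire!
            return true;
        }
        return false;
    }
};
```

The launch force and direction could then come from how far the pinched point was pulled back from a rest position before the release fired.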

Project Insane Llamas

Before the Wii, games were set in 3D environments with limited 2D controls. Now we see more full-body gaming and natural motions with the PlayStation Move and Microsoft Kinect. Mobile gaming is heading in this direction, but the majority of mobile games still follow traditional game design. We look to give the player a more interactive experience by placing the action in the real world and providing more natural camera control. Using augmented reality, we will develop a game in which the objective is to rescue Kiwis, who were kidnapped by insane llamas. Players free the Kiwis by beating the insane llamas that inhabit different levels of the game. These levels are spread throughout the real-world space and seen through an iOS device, allowing players to have physical control over their point of view. A slingshot attached to the device is used to shoot vengeful Kiwis into the virtual environment and defeat the insane llamas. Parts of the virtual and real environments can be destroyed to highlight how the player can interact with both virtual and real objects. With this project, we hope to inspire novel game design and 3D user interaction for mobile gaming and applications.

Design Questions:
With AR we run the risk of registration errors. Is AR the best route?
  • We don’t want to alienate players just because they find the system unresponsive.
  • How much registration error is acceptable?
If there were perfect registration but no occlusion/destruction of real-world objects, would this be as fun/innovative?
What does haptics afford the user? (Slingshot)
  • Is it necessary for understanding the interaction or just cool?
  • Would having to buy an additional prop deter users?
  • How can we make it fun without the prop?
On a side note, if you ever need ideas for project names, this website proved to be helpful for ours :D

Kinect for Product Development

Oftentimes in the product development cycle you end up at the foam-core mockup stage:

  

It is kinda bland and drab but at least you have a tangible product.  What if we took this stage to the next level and gave designers the ability to put new skins on their creations with the click of a button?

Using the Kinect, we can correct projected images for any 3D geometry.  The basic idea is shown with these cubes.

The great thing about this technology is that it is almost here:

 

As seen in this paper, the Kinect can account for the distortion of the cube and project an image of the user’s choosing.  If we apply this technology to the product development stage, we can have an unlimited number of “skinned” tangible prototypes. It is merely a matter of putting the pieces together in a more mobile form factor.  Kinect for iPad anyone?
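To give a flavor of the geometry correction, here is a minimal sketch for a single planar face, assuming the face’s four corner positions in the projector image come from the Kinect-based detection step (function and parameter names are ours for illustration). A full system would use the depth map plus a calibrated projector to handle arbitrary surfaces.

```cpp
// Sketch: warp a "skin" texture onto one planar face of a prototype
// so the projected image lands undistorted on that face.
#include <opencv2/opencv.hpp>
#include <vector>

cv::Mat skinFace(const cv::Mat& skinTexture,
                 const std::vector<cv::Point2f>& faceCornersInProjector, // 4 corners
                 const cv::Size& projectorRes) {
    // Source corners: the full texture, in texture pixel coordinates.
    std::vector<cv::Point2f> src = {
        {0.0f, 0.0f},
        {(float)skinTexture.cols, 0.0f},
        {(float)skinTexture.cols, (float)skinTexture.rows},
        {0.0f, (float)skinTexture.rows}};

    // Homography from texture space into the projector's image plane.
    cv::Mat H = cv::getPerspectiveTransform(src, faceCornersInProjector);

    // Render the warped texture into a black projector frame.
    cv::Mat projectorImage = cv::Mat::zeros(projectorRes, skinTexture.type());
    cv::warpPerspective(skinTexture, projectorImage, H, projectorRes,
                        cv::INTER_LINEAR, cv::BORDER_TRANSPARENT);
    return projectorImage;
}
```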

Kinect Magic for a New Move-in Experience

When moving into a new place, one of the hardest things to do is to visualize all of your stuff in the new space.  Will everything fit?  Will you be able to move around well?

This practical, everyday problem motivated us to think of ways in which we could use 3DUI techniques to visualize and manipulate furniture on the fly when looking for a house / apartment.  Running with this concept, we thought of use-case scenarios to begin informing our design.

First, we want the furniture being manipulated to be the furniture the user already owns.  So the first design challenge will be: how do we get this furniture modeled?  The biggest issue is that every person will have different types of furniture; therefore, having a database with all possible combinations seems impractical. Enter Kinect Fusion:

By using the Kinect, we can not only create a point cloud representing the 3D space, we can also use it to generate textures for the furniture.  Using this, we can customize the experience to the user and ensure the furniture dimensions match the real thing.
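The core of that idea, pairing each depth sample with its color so the model carries its own texture, can be sketched for a single registered frame as below. The intrinsics are typical Kinect values used here only as stand-ins; Kinect Fusion additionally fuses many such frames into a clean mesh.

```cpp
// Sketch: turn one registered Kinect depth + color frame into a
// colored point cloud using the pinhole camera model.
#include <cstdint>
#include <vector>

struct ColoredPoint { float x, y, z; uint8_t r, g, b; };

std::vector<ColoredPoint> depthToCloud(const uint16_t* depthMm, // depth in mm
                                       const uint8_t* rgb,      // 3 bytes per pixel
                                       int w, int h) {
    const float fx = 525.0f, fy = 525.0f;    // focal lengths (assumed)
    const float cx = w / 2.0f, cy = h / 2.0f;

    std::vector<ColoredPoint> cloud;
    for (int v = 0; v < h; ++v) {
        for (int u = 0; u < w; ++u) {
            uint16_t d = depthMm[v * w + u];
            if (d == 0) continue;            // no depth reading at this pixel
            float z = d * 0.001f;            // mm -> meters
            ColoredPoint p;
            p.x = (u - cx) * z / fx;         // back-project through the pinhole
            p.y = (v - cy) * z / fy;
            p.z = z;
            const uint8_t* c = &rgb[(v * w + u) * 3];
            p.r = c[0]; p.g = c[1]; p.b = c[2];  // assumes RGB byte order
            cloud.push_back(p);
        }
    }
    return cloud;
}
```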

Once the furniture is modeled, the next step will be to put it in the empty space and have a way to visualize it.  For this we chose augmented reality because we felt it would have the best impact on understanding the space.  Rather than just having a virtual world where the user places the furniture, the user can now physically walk the space and get a better sense of how the furniture fits.

However, this vision adds many different challenges to the project.  The first is how to create an accurate model of the new space.  We want to keep it lightweight; we don’t want the user to be carrying their Kinect on an apartment tour!  Therefore, we are looking into using images, similar to Photosynth.

This allows us to generate a point cloud from images, which we can use to place the virtual furniture into.  Once this is done, the user will need to have a way of manipulating the objects.  This is the reason something like a smartphone works well.  For one, it can support the AR experience, since it has a camera on the back and the necessary computing power to generate the graphics.  Secondly, we can use the multi-touch screen to select and manipulate the 3D objects.  We will have some smart algorithms to ensure objects align with surfaces (such as the floor), so it should be fairly easy for users to interact with the furniture.
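As an example of what such a “smart” placement step could look like, here is a small sketch that casts the touch ray onto the floor plane and rests the object’s bottom on it. The ray, floor height, and flat-horizontal-floor assumption are ours for illustration; a real scene would get these from the tracking and reconstruction steps.

```cpp
// Sketch: place furniture by intersecting the touch ray with the floor
// plane (y = floorY), then snapping the object's bottom onto the floor.
#include <optional>

struct Vec3 { float x, y, z; };

static Vec3 add(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 scale(Vec3 v, float s) { return {v.x * s, v.y * s, v.z * s}; }

std::optional<Vec3> placeOnFloor(Vec3 rayOrigin, Vec3 rayDir,
                                 float floorY, float objectHalfHeight) {
    if (rayDir.y > -1e-6f) return std::nullopt;   // ray never reaches the floor
    float t = (floorY - rayOrigin.y) / rayDir.y;  // distance along the ray
    Vec3 hit = add(rayOrigin, scale(rayDir, t));
    hit.y = floorY + objectHalfHeight;            // bottom face touches the floor
    return hit;
}
```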

The last big challenge is determining where the phone is so that we can properly generate the scene.  This is a bit more difficult, but we may be able to make it work using a combination of the sensors available within the phone and image recognition on the video.
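One simple way to combine the two sources is a complementary filter: integrate the gyro for responsiveness and pull slowly toward the drift-free vision estimate. The single-axis sketch below uses an assumed blend factor; a real system would do this with quaternions for full 3D orientation.

```cpp
// Sketch: complementary filter blending fast gyro integration with a
// slower, drift-free reference (e.g. an angle recovered from the camera).
struct ComplementaryFilter {
    float angle = 0.0f;   // current estimate (radians)
    float alpha = 0.98f;  // trust in the gyro per step (assumed value)

    // gyroRate:    angular velocity from the gyroscope (rad/s)
    // visionAngle: absolute angle recovered from image recognition
    // dt:          time since the last update (s)
    void update(float gyroRate, float visionAngle, float dt) {
        float gyroAngle = angle + gyroRate * dt;                 // fast but drifts
        angle = alpha * gyroAngle + (1.0f - alpha) * visionAngle; // drift correction
    }
};
```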

Let’s see if this works!

Sony Move Design Ideas

Body Tracking

Using a utility-type belt that has holders for 3 other Moves would allow the user’s hip position to be tracked.  By determining the center of mass as well as tracking body speed and motion, different 3D techniques that rely on full-body tracking could be used more efficiently.
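A rough sketch of that estimate, assuming the PS Eye reports a 3D position for each glowing sphere (the structure and names here are ours, purely for illustration):

```cpp
// Sketch: estimate the hip center and its speed from the tracked 3D
// positions of three Move controllers worn on a belt (positions in meters).
#include <cmath>

struct Vec3 { float x, y, z; };

struct HipTracker {
    Vec3 lastCenter{0, 0, 0};
    bool hasLast = false;

    // Returns the hip speed (m/s); the averaged center approximates the
    // body's center of mass at hip height.
    float update(const Vec3 moves[3], float dt, Vec3* centerOut) {
        Vec3 c{ (moves[0].x + moves[1].x + moves[2].x) / 3.0f,
                (moves[0].y + moves[1].y + moves[2].y) / 3.0f,
                (moves[0].z + moves[1].z + moves[2].z) / 3.0f };
        float speed = 0.0f;
        if (hasLast && dt > 0.0f) {
            float dx = c.x - lastCenter.x, dy = c.y - lastCenter.y,
                  dz = c.z - lastCenter.z;
            speed = std::sqrt(dx * dx + dy * dy + dz * dz) / dt;
        }
        lastCenter = c;
        hasLast = true;
        if (centerOut) *centerOut = c;
        return speed;
    }
};
```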

This type of tracking could also work in a video game setting: imagine playing Zelda where different weapons were mapped to different controllers.  For example, as you swap Move controllers, Link would sheathe his sword and pick up the boomerang.

World Exploration for Learning

The Move controller can be used as a camera to see into an imaginary world, much like the premise of the Magic School Bus.  You can move the controller and virtually observe the inside of a galaxy, the bonds of a molecule, life under the ocean, etc.  Experiencing relative distances between virtual objects, coupled with real props, can provide better spatial judgment and understanding.

Augmented Painting with Wide Tracking Area

Imagine a wall being a virtual canvas where multiple users can use Moves as paintbrushes to draw / tag a space collaboratively.  Depending on the size of the canvas and the number of users, multiple cameras can be used to track the space.

This would pose both a large technical and design challenge.  The technical challenge would be handing off the controllers, because they will need to drop and reconnect seamlessly between systems.  The design challenge would be understanding how users can collaborate in the space and how they would change brush shapes, colors, etc.  This type of virtual drawing allows users to collaborate either locally or remotely.