Launching Kiwis aka Fun with OpenCV

To create our cool 3D launching interaction we are using computer vision to capture finger positions. The easiest way to do this is to track the fingertips using colored markers.

The only downside to this method is that the user now needs to put on some sort of colored marker in order to interact with our system (or paint their nails or fingers). For our application we will probably use the ends of colored latex gloves to provide the tracked color without the mess.

The upside to this approach is more robust tracking of the fingers, which seems pertinent for this high-stakes game situation (we don’t want the Insane Llamas to win, do we?). Although there are compromises that could be made (open-hand and closed-hand gesture recognition would be easier than marker-less finger tracking), we think this approach will provide the best game experience.

To track the fingers we will be using OpenCV, which has a port for iOS devices. With some sample code from AI Shack I was able to get a prototype working for tracking two colors, in this case red and yellow. In the image below you can see the paths of the objects’ centers of mass and the thresholds used to detect the objects.
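For anyone curious what that looks like in code, here is a minimal sketch of the same idea in plain C++ with OpenCV (our prototype uses the iOS port, and the HSV ranges and function names below are placeholders that would need tuning for real lighting):

```cpp
#include <opencv2/opencv.hpp>

// Rough sketch of the two-color tracking idea: threshold each frame in HSV
// space and take the centroid of the resulting blob via image moments.
// The HSV ranges used below are guesses, not calibrated values.
cv::Point2f trackColor(const cv::Mat& frameBGR,
                       const cv::Scalar& lowHSV,
                       const cv::Scalar& highHSV)
{
    cv::Mat hsv, mask;
    cv::cvtColor(frameBGR, hsv, cv::COLOR_BGR2HSV);
    cv::inRange(hsv, lowHSV, highHSV, mask);           // binary threshold

    cv::Moments m = cv::moments(mask, /*binaryImage=*/true);
    if (m.m00 < 1e-3) return cv::Point2f(-1, -1);      // color not found
    return cv::Point2f(m.m10 / m.m00, m.m01 / m.m00);  // center of mass
}

int main()
{
    cv::VideoCapture cap(0);
    cv::Mat frame;
    while (cap.read(frame)) {
        cv::Point2f yellow = trackColor(frame, cv::Scalar(20, 100, 100),
                                               cv::Scalar(35, 255, 255));
        cv::Point2f red    = trackColor(frame, cv::Scalar(0, 150, 100),
                                               cv::Scalar(10, 255, 255));
        if (yellow.x >= 0) cv::circle(frame, yellow, 8, cv::Scalar(0, 255, 255), -1);
        if (red.x >= 0)    cv::circle(frame, red,    8, cv::Scalar(0, 0, 255),   -1);
        cv::imshow("tracking", frame);
        if (cv::waitKey(1) == 27) break;                // Esc to quit
    }
}
```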

The next step is to detect pinch and release events for the two colors. Once that is complete we can compute the trajectory and force for our Kiwis!
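One simple way to get those events is to watch the distance between the two tracked centroids and fire pinch/release with a bit of hysteresis so the state doesn't flicker. This is just a sketch of the idea; the class name and pixel thresholds are made up:

```cpp
#include <opencv2/core.hpp>
#include <cmath>

// Hypothetical pinch/release detector: a "pinch" fires when the two tracked
// fingertips come closer than pinchOn pixels, and a "release" fires once they
// separate past pinchOff. The gap between the two thresholds is hysteresis.
class PinchDetector {
public:
    PinchDetector(float pinchOn = 30.f, float pinchOff = 60.f)
        : onDist(pinchOn), offDist(pinchOff), pinched(false) {}

    // Returns +1 on a new pinch, -1 on a release, 0 otherwise.
    int update(const cv::Point2f& a, const cv::Point2f& b) {
        float d = std::hypot(a.x - b.x, a.y - b.y);
        if (!pinched && d < onDist)  { pinched = true;  return +1; }
        if (pinched  && d > offDist) { pinched = false; return -1; }
        return 0;
    }

private:
    float onDist, offDist;
    bool pinched;
};
```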

Project Insane Llamas

Before the Wii, games were set in 3D environments with limited 2D controls. Now we see more full-body gaming and natural motions with the PlayStation Move and Microsoft Kinect. Mobile gaming is heading in this direction, but the majority of mobile games still follow traditional game design. We aim to give the player a more interactive experience by placing the action in the real world and providing more natural camera control. Using augmented reality, we will develop a game in which the objective is to rescue Kiwis who were kidnapped by insane llamas. Players free the Kiwis by beating the insane llamas that inhabit different levels of the game. These levels are spread throughout the real-world space and seen through an iOS device, allowing players to have physical control over their point of view. A slingshot attached to the device is used to shoot vengeful Kiwis into the virtual environment and defeat the insane llamas. Parts of the virtual and real environments can be destroyed to highlight how the player can interact with both virtual and real objects. With this project, we hope to inspire novel game design and 3D user interaction for mobile gaming and applications.

Design Questions:
With AR we run the risk of registration errors. Is AR the best route?
  • We don’t want to alienate players just because they find the system unresponsive.
  • How much registration error is acceptable?
If there were perfect registration but no occlusion/destruction of real-world objects, would this be as fun and innovative?
What does haptics afford the user? (Slingshot)
  • Is it necessary for understanding the interaction or just cool?
  • Would having to buy an additional prop deter users?
  • How can we make it fun without the prop?
On a side note, if you ever need ideas for project names, this website proved to be helpful for ours :D

Kinect for Product Development

Oftentimes in the product development cycle you end up at the foam-core mockup stage:


It is kinda bland and drab but at least you have a tangible product.  What if we took this stage to the next level and gave designers the ability to put new skins on their creations with the click of a button?

Using the Kinect, we can correct projected images for any 3D geometry. The basic idea is shown with these cubes.

The great thing about this technology is that it is almost here:


As seen in this paper, the Kinect can account for the distortion of the cube and project an image of the user’s choosing. If we apply this technology to the product development stage, we can have an unlimited number of “skinned” tangible prototypes. It is merely a matter of putting the pieces together in a more mobile form factor. Kinect for iPad, anyone?
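To give a feel for the simplest version of this, here is a hedged sketch of correcting the projection for a single planar face with a homography in OpenCV. In the real system the face corners would come from the Kinect geometry; here they are hard-coded placeholders and the texture file name is hypothetical:

```cpp
#include <opencv2/opencv.hpp>

// Warp a chosen "skin" texture so that, when sent to the projector, it lands
// exactly on one planar face of the mockup. The destination corners stand in
// for what the Kinect-derived geometry would provide.
int main()
{
    cv::Mat skin = cv::imread("skin.png");              // the designer's texture
    std::vector<cv::Point2f> src = {
        {0, 0}, {(float)skin.cols, 0},
        {(float)skin.cols, (float)skin.rows}, {0, (float)skin.rows}};
    std::vector<cv::Point2f> dst = {                     // face corners in projector space
        {400, 200}, {880, 240}, {860, 620}, {420, 580}};

    cv::Mat H = cv::getPerspectiveTransform(src, dst);
    cv::Mat projectorFrame(768, 1024, CV_8UC3, cv::Scalar::all(0));
    cv::warpPerspective(skin, projectorFrame, H, projectorFrame.size(),
                        cv::INTER_LINEAR, cv::BORDER_TRANSPARENT);
    cv::imshow("projector output", projectorFrame);
    cv::waitKey(0);
}
```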

Kinect Magic for a New Move-in Experience

When moving into a new place, one of the hardest things to do is visualize all of your stuff in the new space. Will everything fit? Will you be able to move around easily?

This practical, everyday problem motivated us to think of ways we could use 3DUI techniques to visualize and manipulate furniture on the fly when looking for a house or apartment. Running with this concept, we thought through use-case scenarios to begin informing our design.

First, we want the furniture being manipulated to be the furniture the user already owns. So the first design challenge is: how do we get this furniture modeled? The biggest issue is that every person will have different types of furniture, so having a database with all possible combinations seems impractical. Enter Kinect Fusion:

By using the Kinect we can not only create a point cloud representing the 3D space, but also use it to generate textures for the furniture. With this we can customize the experience to each user and ensure the furniture dimensions match the real thing.
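As a rough illustration of the first half of that pipeline, here is a small sketch of back-projecting a single Kinect depth frame into a point cloud with a pinhole camera model. The intrinsics are ballpark numbers for the Kinect depth camera, not calibrated values:

```cpp
#include <opencv2/core.hpp>
#include <vector>

struct Point3 { float x, y, z; };

// Convert a depth image (16-bit, millimetres) into a 3D point cloud by
// back-projecting each pixel through an assumed pinhole camera model.
std::vector<Point3> depthToCloud(const cv::Mat& depthMM)
{
    const float fx = 585.f, fy = 585.f;   // focal lengths in pixels (assumed)
    const float cx = 320.f, cy = 240.f;   // principal point (assumed)

    std::vector<Point3> cloud;
    for (int v = 0; v < depthMM.rows; ++v)
        for (int u = 0; u < depthMM.cols; ++u) {
            uint16_t d = depthMM.at<uint16_t>(v, u);
            if (d == 0) continue;                  // no depth reading here
            float z = d * 0.001f;                  // millimetres -> metres
            cloud.push_back({ (u - cx) * z / fx,
                              (v - cy) * z / fy,
                              z });
        }
    return cloud;
}
```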

Once the furniture is modeled, the next step is to put it in the empty space and provide a way to visualize it. For this we chose augmented reality because we felt it would have the biggest impact on understanding the space. Rather than just having a virtual world where the user places the furniture, the user can now physically walk through the space and get a better sense of how the furniture fits.

However, this vision adds several challenges to the project. The first is how to create an accurate model of the new space. We want to keep it lightweight; we don’t want the user carrying their Kinect on an apartment tour! Therefore, we are looking into building the model from images, similar to Photosynth.

This allows us to generate a point cloud from images, which we can use to place the virtual furniture into. Once this is done, the user will need a way of manipulating the objects. This is where something like a smartphone works well. For one, it can support the AR experience since it has a camera on the back and the necessary computing power to generate the graphics. Secondly, we can use the multi-touch screen to select and manipulate the 3D objects. We will have some smart algorithms to ensure objects align with surfaces (such as the floor), but it should be fairly easy for users to interact with the furniture.
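As one example of those smart algorithms, the floor alignment could be as simple as dropping the furniture model onto the detected floor plane. This is a toy sketch with made-up structure and function names, not code from any particular engine:

```cpp
#include <algorithm>
#include <vector>

struct Vec3 { float x, y, z; };

// Once the floor height is known from the reconstructed point cloud, translate
// the furniture model so its lowest vertex rests exactly on that plane.
void snapToFloor(std::vector<Vec3>& furnitureVerts, float floorY)
{
    if (furnitureVerts.empty()) return;

    float minY = furnitureVerts[0].y;
    for (const Vec3& v : furnitureVerts) minY = std::min(minY, v.y);

    float offset = floorY - minY;          // distance to drop (or lift) the model
    for (Vec3& v : furnitureVerts) v.y += offset;
}
```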

The last big challenge is determining where the phone is so we can render the scene correctly. This is a bit more difficult, but we may be able to make it work using a combination of the sensors available in the phone and image recognition on the video feed.
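If we go the image-recognition route, one plausible approach is to match features in the live camera frame against 3D points from the reconstructed cloud and recover the camera pose with OpenCV's solvePnP. The sketch below assumes those 2D-3D correspondences already exist:

```cpp
#include <opencv2/calib3d.hpp>
#include <vector>

// Estimate the phone's pose (rotation rvec, translation tvec) from matched
// 2D image points and 3D points taken from the reconstructed point cloud.
// The RANSAC variant tolerates a few bad matches from the recognition step.
bool estimatePhonePose(const std::vector<cv::Point3f>& worldPts,   // from the cloud
                       const std::vector<cv::Point2f>& imagePts,   // matched in the frame
                       const cv::Mat& cameraMatrix,                // phone intrinsics
                       cv::Mat& rvec, cv::Mat& tvec)
{
    if (worldPts.size() < 4 || worldPts.size() != imagePts.size())
        return false;
    return cv::solvePnPRansac(worldPts, imagePts, cameraMatrix,
                              cv::noArray(), rvec, tvec);
}
```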

Let’s see if this works!