Kinect for Product Development

Oftentimes in the product development cycle you end up at the foam-core mockup stage:


It is kinda bland and drab, but at least you have a tangible product.  What if we took this stage to the next level and gave designers the ability to put new skins on their creations with the click of a button?

Using the Kinect, we can correct projected images for arbitrary 3D geometry.  The basic idea is shown with these cubes.
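
To make that concrete, here is a minimal sketch of correcting for one planar face with OpenCV.  It assumes the Kinect's depth data has already located the four corners of the face in projector pixel coordinates; the corner values, file name, and projector resolution are made up for illustration.

```python
import cv2
import numpy as np

skin = cv2.imread("skin.png")                       # hypothetical texture
h, w = skin.shape[:2]

# Texture corners (full image) -> projector pixels covering the face.
tex_corners  = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
face_corners = np.float32([[310, 180], [520, 205], [505, 420], [295, 390]])

# Homography that pre-distorts the skin so it lands flat on the face;
# repeat per visible face to cover the whole cube.
H = cv2.getPerspectiveTransform(tex_corners, face_corners)
frame = cv2.warpPerspective(skin, H, (1280, 800))   # projector resolution

cv2.imshow("projector output", frame)
cv2.waitKey(0)
```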

The great thing about this technology is that it is almost here:


As seen in this paper, the Kinect can account for the distortion of the cube and project an image of the user's choosing.  If we apply this technology to the product development stage, we can have an unlimited number of “skinned” tangible prototypes.  It is merely a matter of putting the pieces together in a more mobile form factor.  Kinect for iPad, anyone?

Kinect Magic for a New Move-in Experience

When moving into a new place, one of the hardest things to do is to visualize all of your stuff in the new space.  Will everything fit?  Will you be able to move around comfortably?

This practical everyday problem motivated us to think of ways we could use 3DUI techniques to visualize and manipulate furniture on the fly when looking for a house / apartment.  Running with this concept, we thought through use-case scenarios to begin informing our design.

First, we want the furniture being manipulated to be the furniture the user already owns.  So the first design challenge is: how do we get this furniture modeled?  The biggest issue is that every person will have different types of furniture, so having a database with all possible combinations seems impractical.  Enter Kinect Fusion:

By using the Kinect we can not only create a point cloud representing the 3D space but also generate textures for the furniture.  Using this, we can customize the experience to the user and ensure the virtual furniture's dimensions match the real thing.
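
As a rough sketch of what the Kinect gives us to work with, here is the standard pinhole back-projection from a single depth frame to a point cloud.  Kinect Fusion then fuses many such frames into one model; the intrinsics below are ballpark Kinect values, not a real calibration.

```python
import numpy as np

FX, FY, CX, CY = 585.0, 585.0, 320.0, 240.0   # approximate depth intrinsics

def depth_to_points(depth_m):
    """depth_m: (480, 640) array of depth in meters, 0 where invalid."""
    v, u = np.indices(depth_m.shape)           # pixel rows and columns
    z = depth_m
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    points = np.dstack((x, y, z)).reshape(-1, 3)
    return points[points[:, 2] > 0]            # drop pixels with no reading
```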

Once the furniture is modeled, the next step is to put it in the empty space and have a way to visualize it.  For this we chose augmented reality because we felt it would have the best impact on understanding the space.  Rather than just having a virtual world where the user places the furniture, the user can now physically walk the space and get a better sense of how the furniture fits.

However, this vision adds many different challenges to the project.  The first is how to create an accurate model of the new space.  We want to keep it lightweight; we don't want the user carrying their Kinect on an apartment tour!  Therefore, we are looking into using images, similar to Photosynth.
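
Very roughly, the Photosynth-style approach is structure from motion.  A two-image sketch with OpenCV, assuming a guessed camera matrix K and placeholder image names (a real pipeline would use many photos plus bundle adjustment):

```python
import cv2
import numpy as np

K = np.array([[800.0,   0.0, 640.0],
              [  0.0, 800.0, 360.0],
              [  0.0,   0.0,   1.0]])          # guessed intrinsics

img1 = cv2.imread("room1.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("room2.jpg", cv2.IMREAD_GRAYSCALE)

# Match ORB features between the two views.
orb = cv2.ORB_create(2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# Recover the relative camera pose, then triangulate the matches.
E, mask = cv2.findEssentialMat(pts1, pts2, K, cv2.RANSAC)
_, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
P1 = K @ np.hstack((np.eye(3), np.zeros((3, 1))))
P2 = K @ np.hstack((R, t))
pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
cloud = (pts4d[:3] / pts4d[3]).T               # N x 3 sparse room cloud
```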

A pipeline like that gives us a point cloud of the space, into which we can place the virtual furniture.  Once this is done, the user will need a way of manipulating the objects.  This is why something like a smartphone works well.  For one, it can support the AR experience, since it has a camera on the back and the necessary computing power to generate the graphics.  Secondly, we can use the multi-touch screen to select and manipulate the 3D objects.  We will have some smart algorithms to ensure objects align with surfaces (such as the floor), but it should be fairly easy for users to interact with the furniture.
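
The surface alignment could be as simple as clamping a dragged model onto the detected floor plane.  A toy sketch, with a placeholder plane and an assumed y-up frame:

```python
import numpy as np

floor_point  = np.array([0.0, 0.0, 0.0])   # point on the detected floor
floor_normal = np.array([0.0, 1.0, 0.0])   # assumed y-up coordinate frame

def snap_to_floor(position, bbox_min):
    """Shift an object so its bounding-box bottom sits on the floor."""
    lowest = position + bbox_min                      # object's lowest point
    d = np.dot(lowest - floor_point, floor_normal)    # signed gap to plane
    return position - d * floor_normal                # pull it flush
```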

The last big challenge is determining where the phone is in order to properly generate the scene.  This is a bit more difficult, but we may be able to make it work using a combination of the sensors available within the phone and image recognition on the video.
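
On the sensor side, one plausible starting point is a complementary filter: the gyroscope is smooth but drifts, while the accelerometer's gravity reading is noisy but drift-free.  A minimal sketch for pitch and roll (vision would have to handle yaw and position; ALPHA is a tuning guess):

```python
import numpy as np

ALPHA = 0.98   # how much to trust the integrated gyro each step

def update(angles, gyro, accel, dt):
    """angles: (pitch, roll) rad; gyro: rad/s; accel: m/s^2; dt: s."""
    # Integrate the gyro for a smooth short-term estimate...
    pitch_g = angles[0] + gyro[0] * dt
    roll_g  = angles[1] + gyro[1] * dt
    # ...and pull it toward the attitude implied by gravity.
    pitch_a = np.arctan2(accel[1], np.hypot(accel[0], accel[2]))
    roll_a  = np.arctan2(-accel[0], accel[2])
    return (ALPHA * pitch_g + (1 - ALPHA) * pitch_a,
            ALPHA * roll_g  + (1 - ALPHA) * roll_a)
```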

Let's see if this works!

Sony Move Design Ideas

Body Tracking

Using a utility-style belt with holders for three other Move controllers would allow the user's hip position to be tracked.  By determining center of mass, as well as tracking body speed and motion, different 3D techniques that rely on full-body tracking could be used more efficiently.
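
As a sketch, the center-of-mass estimate is little more than an average of the three tracked belt positions, with body speed falling out of its frame-to-frame difference (the positions themselves would come from the PS Eye tracking each glowing bulb):

```python
import numpy as np

def center_of_mass(hip_positions):
    """hip_positions: (3, 3) array of tracked belt-controller positions (m)."""
    return hip_positions.mean(axis=0)

def body_velocity(com_now, com_prev, dt):
    """Frame-to-frame difference of the estimate gives body speed."""
    return (com_now - com_prev) / dt
```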

This type of tracking could also work in a video game setting: imagine playing Zelda where different weapons map to different controllers.  For example, as you swap to the appropriate Move controller, Link would sheathe his sword and pick up the boomerang.

World Exploration for Learning

The Move controller can be used as a camera to see into an imaginary world, much like the premise of The Magic School Bus.  You can move the controller and virtually observe the inside of a galaxy, the bonds of a molecule, life under the ocean, etc.  Experiencing relative distances between virtual objects, coupled with real props, can provide better spatial judgement and understanding.
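
The mapping itself is straightforward: the tracked pose of the controller drives the virtual camera, scaled up so a hand-sized motion sweeps a galaxy-sized distance.  A sketch, with an invented scale constant:

```python
import numpy as np

WORLD_SCALE = 1.0e18   # virtual meters per meter of hand motion (invented)

def view_matrix(p, R):
    """Build a 4x4 world-to-camera matrix from the tracked pose (p, R)."""
    eye = np.asarray(p) * WORLD_SCALE
    V = np.eye(4)
    V[:3, :3] = R.T               # inverse of the camera rotation
    V[:3, 3] = -R.T @ eye         # inverse of the camera translation
    return V
```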

Augmented Painting with Wide Tracking Area

Imagine a wall being a virtual canvas where multiple users can use Move controllers as paintbrushes to draw / tag a space collaboratively.  Depending on the size of the canvas and the number of users, multiple cameras can be used to track the space.
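
As a sketch of the brush mapping, assume the wall is the plane z = 0 in the tracking frame: the tracked controller tip then maps directly to a dab on the canvas, with distance from the wall repurposed as spray width (all constants invented):

```python
BRUSH_RANGE = 0.5   # meters from the wall within which paint lands (invented)

def dab_from_tip(tip):
    """tip: (x, y, z) controller position in meters -> (x, y, radius)."""
    dist = abs(tip[2])
    if dist > BRUSH_RANGE:
        return None                                # too far to paint
    radius = 0.02 + 0.10 * (dist / BRUSH_RANGE)    # farther = wider spray
    return (tip[0], tip[1], radius)
```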

This would pose both a large technical challenge and a large design challenge.  The technical challenge would be handing off the controllers, because they will need to drop and reconnect seamlessly between systems.  The design challenge would be understanding how users can collaborate in the space and how they would change brush shapes, colors, etc.  This type of virtual drawing allows users to collaborate locally or remotely.
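
For the handoff, one simple policy is to let the canvas trust whichever camera currently reports the best view of each controller.  The per-camera confidence score below is an assumption for the sketch, not part of any real Move API:

```python
def pick_pose(estimates):
    """estimates: list of (position, confidence) pairs, one per camera."""
    visible = [e for e in estimates if e[1] > 0.2]   # drop lost trackers
    if not visible:
        return None                # controller out of every camera's view
    return max(visible, key=lambda e: e[1])[0]
```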