Kinect Magic for a New Move-in Experience

When moving into a new place, one of the hardest things to do is to visualize all of your stuff in the new space.  Will everything fit?  Will you still be able to move around comfortably?

This practical, everyday problem motivated us to think of ways we could use 3DUI techniques to visualize and manipulate furniture on the fly while looking for a house or apartment.  Running with this concept, we thought through use-case scenarios to begin informing our design.

First, we want the furniture being manipulated to be the furniture the user already owns.  So the first design challenge is: how do we get this furniture modeled?  The biggest issue is that every person owns different furniture, so a database with every possible piece seems impractical.  Enter Kinect Fusion:

By using the Kinect we can not only create a point cloud representing the 3D space, we can also use it to generate textures for the furniture.  Using this, we can customize the experience to the user and ensure the virtual furniture has the same dimensions as the real thing.
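As a rough sketch of what this scanning step could look like, here is a minimal fusion loop written with the open-source Open3D library.  This is purely illustrative: the real project would lean on the Kinect SDK's Kinect Fusion pipeline, and the frame list and camera poses below are hypothetical placeholders.

```python
# Sketch only: fuse RGB-D frames from a Kinect-style sensor into a textured mesh.
# Assumes Open3D is installed; the frames and per-frame poses are placeholders.
import open3d as o3d

def fuse_furniture_scan(frames, poses):
    """frames: list of (color_image, depth_image) pairs; poses: 4x4 camera extrinsics."""
    intrinsic = o3d.camera.PinholeCameraIntrinsic(
        o3d.camera.PinholeCameraIntrinsicParameters.PrimeSenseDefault)  # Kinect-like intrinsics
    volume = o3d.pipelines.integration.ScalableTSDFVolume(
        voxel_length=0.005,  # 5 mm voxels: fine enough for furniture-scale detail
        sdf_trunc=0.02,
        color_type=o3d.pipelines.integration.TSDFVolumeColorType.RGB8)

    for (color, depth), pose in zip(frames, poses):
        rgbd = o3d.geometry.RGBDImage.create_from_color_and_depth(
            color, depth, depth_trunc=3.0, convert_rgb_to_intensity=False)
        volume.integrate(rgbd, intrinsic, pose)  # accumulate each view into the volume

    # The extracted mesh carries per-vertex colors, giving us geometry and texture together.
    return volume.extract_triangle_mesh()
```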

Once the furniture is modelled, the next step is to put it in the empty space and have a way to visualize it.  For this we chose augmented reality, because we felt it would give the best sense of the space.  Rather than just placing the furniture in a virtual world, the user can now physically walk the space and get a much better feel for how the furniture fits.

However, this vision adds several challenges to the project.  The first is how to create an accurate model of the new space.  We want to keep it lightweight; we don't want the user carrying a Kinect on an apartment tour!  Therefore, we are looking into reconstructing the space from ordinary photographs, similar to Photosynth.
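To give a flavor of what a Photosynth-style reconstruction involves, here is a minimal two-view sketch using OpenCV.  Again, this is only an illustration: a real pipeline matches many photos and refines everything with bundle adjustment, and the focal length below is an assumed placeholder.

```python
# Sketch only: recover a sparse point cloud from two overlapping photos of the room.
# A real Photosynth-style pipeline uses many images plus bundle adjustment.
import cv2
import numpy as np

def sparse_cloud_from_two_photos(img1, img2, focal=800.0):
    """img1, img2: grayscale photos of the same room from different viewpoints."""
    # Detect and match ORB features between the two views.
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(img1, None)
    k2, d2 = orb.detectAndCompute(img2, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)

    pts1 = np.float32([k1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([k2[m.trainIdx].pt for m in matches])

    # Estimate the relative camera motion, then triangulate the inlier matches.
    center = (img1.shape[1] / 2, img1.shape[0] / 2)
    E, mask = cv2.findEssentialMat(pts1, pts2, focal=focal, pp=center, method=cv2.RANSAC)
    _, R, t, mask = cv2.recoverPose(E, pts1, pts2, focal=focal, pp=center, mask=mask)
    inliers = mask.ravel() > 0

    K = np.array([[focal, 0, center[0]], [0, focal, center[1]], [0, 0, 1]])
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    pts4d = cv2.triangulatePoints(P1, P2, pts1[inliers].T, pts2[inliers].T)
    return (pts4d[:3] / pts4d[3]).T  # Nx3 point cloud in the first camera's frame
```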

This lets us generate a point cloud from the images, which we can then use to place the virtual furniture.  Once this is done, the user needs a way of manipulating the objects, which is why something like a smartphone works well.  For one, it can support the AR experience, since it has a camera on the back and enough computing power to render the graphics.  Secondly, we can use the multi-touch screen to select and manipulate the 3D objects.  We will add some smart snapping so that objects align with surfaces (such as the floor), but it should be fairly easy for users to interact with the furniture.
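As a sketch of the kind of floor-snapping we have in mind (the function name and the y-up axis convention are our assumptions, not a finished design):

```python
# Sketch only: snap a dragged piece of furniture onto the floor plane.
# Assumes y is "up" and the floor height was estimated from the room's point cloud.
import numpy as np

def snap_to_floor(furniture_min_corner, furniture_position, floor_y, threshold=0.15):
    """If the bottom of the furniture is within `threshold` metres of the floor,
    shift the object vertically so it rests exactly on the floor."""
    bottom_y = furniture_min_corner[1] + furniture_position[1]
    gap = bottom_y - floor_y
    if abs(gap) < threshold:
        snapped = np.array(furniture_position, dtype=float)
        snapped[1] -= gap  # close the gap so the object sits on the floor
        return snapped
    return np.asarray(furniture_position, dtype=float)
```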

The last big challenge is determining where the phone is so that the scene can be rendered from the right viewpoint.  This is a bit more difficult, but we may be able to make it work by combining the sensors available in the phone with image-based tracking on the video.
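One possible way to combine the two, sketched below purely as an illustration, is a simple complementary filter: the gyroscope drives orientation between frames, and the image-based estimate pulls it back whenever one is available.  All of the names and the Euler-angle representation here are our assumptions; real phone tracking would be considerably more involved.

```python
# Sketch only: blend gyroscope integration with occasional vision-based corrections.
import numpy as np

def update_orientation(orientation, gyro_rate, dt, vision_estimate=None, alpha=0.98):
    """orientation, vision_estimate: Euler angles (rad); gyro_rate: rad/s; all 3-vectors."""
    # Dead-reckon from the gyroscope: cheap and smooth, but drifts over time.
    predicted = np.asarray(orientation, dtype=float) + np.asarray(gyro_rate, dtype=float) * dt

    if vision_estimate is None:
        return predicted  # no camera fix this frame, trust the gyro

    # Complementary filter: mostly keep the gyro prediction,
    # but pull it gently toward the drift-free vision estimate.
    return alpha * predicted + (1.0 - alpha) * np.asarray(vision_estimate, dtype=float)
```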

Let's see if this works!
