Oftentimes in the product development cycle you end up at the foam core mockup stage:
It's kind of bland and drab, but at least you have a tangible product. What if we took this stage to the next level and gave designers the ability to put new skins on their creations with the click of a button?
Using the Kinect, we can correct projected images for any 3D geometry. The basic idea is shown with these cubes.
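For the simplest case, a single flat face of the object, the correction boils down to estimating the homography that maps the projector's pixels onto the surface as the Kinect sees it, then pre-warping the image with it so it lands undistorted. A minimal sketch of that estimation step (the point coordinates here are made-up placeholders, and a real system would chain per-face homographies from the Kinect's depth map):

```python
import numpy as np

def homography(src, dst):
    """Estimate the 3x3 homography mapping src points to dst points
    via the direct linear transform (DLT).
    src, dst: (4, 2) arrays of corresponding pixel coordinates."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two linear constraints on H.
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    # H (flattened) is the null vector of A: the last right singular vector.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # normalize so H[2,2] == 1

def apply_homography(H, pts):
    """Map (N, 2) points through H using homogeneous coordinates."""
    ph = np.hstack([pts, np.ones((len(pts), 1))])
    mapped = ph @ H.T
    return mapped[:, :2] / mapped[:, 2:3]

# Hypothetical calibration: corners of the projected frame (src) and where
# the Kinect observes them on the mockup's face (dst).
src = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
dst = np.array([[0, 0], [2, 0.1], [2.2, 1.9], [-0.1, 2]], dtype=float)
H = homography(src, dst)
```

Warping the texture through the inverse of `H` before projection would cancel the surface's perspective distortion; for a full 3D mockup you would repeat this per visible face.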
The great thing about this technology is that it is almost here:
As seen in this paper, the Kinect can account for the distortion of the cube and project an image of the user's choosing. If we apply this technology to the product development stage, we can have an unlimited number of "skinned" tangible prototypes. It is merely a matter of putting the pieces together in a more mobile form factor. Kinect for iPad, anyone?