An independent study project exploring motion detection and geometry manipulation using Microsoft Kinect technology.

The Vision

The initial vision was a physics-based game where users would define object properties through intuitive hand gestures:

  • Demonstrating bounce height to set the coefficient of restitution (see the sketch after this list)
  • Mimicking object lifting to indicate mass
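
As a rough illustration of that first idea, the coefficient of restitution of a bouncing object can be estimated from the drop height and rebound height of a demonstrated bounce. The sketch below is a minimal Unity C# example; the class and method names are hypothetical and not taken from the project's code.

    using UnityEngine;

    // Hypothetical sketch: estimate a coefficient of restitution from a
    // demonstrated bounce. Heights would come from tracked hand positions.
    public static class GesturePhysics
    {
        // For a bouncing body, e = sqrt(reboundHeight / dropHeight).
        public static float RestitutionFromBounce(float dropHeight, float reboundHeight)
        {
            if (dropHeight <= 0f) return 0f;
            float e = Mathf.Sqrt(reboundHeight / dropHeight);
            return Mathf.Clamp01(e); // physical restitution lies in [0, 1]
        }
    }

For example, a hand dropped from 1.0 m that rebounds to 0.49 m implies e = sqrt(0.49 / 1.0) = 0.7.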

The Evolution

The project evolved to focus on gesture-driven mesh deformation rather than on defining physics properties.

Hand Recognition

The system distinguishes between open and closed hand poses:

  • Open hands push vertices away
  • Closed hands pull vertices toward the user (a minimal sketch follows this list)
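
A minimal sketch of what this push/pull behaviour could look like on a Unity mesh is shown below; the falloff, radius, strength, and names are assumptions rather than the project's actual implementation.

    using UnityEngine;

    // Illustrative sketch of gesture-driven deformation: an open hand pushes
    // vertices away from the hand point, a closed hand pulls them toward it.
    public class HandDeformer : MonoBehaviour
    {
        public MeshFilter target;
        public float radius = 0.5f;   // assumed influence radius (mesh-local units)
        public float strength = 0.1f; // assumed displacement per update

        // handPosition is the tracked hand in the mesh's local space.
        public void Deform(Vector3 handPosition, bool handIsOpen)
        {
            Mesh mesh = target.mesh;
            Vector3[] vertices = mesh.vertices;

            for (int i = 0; i < vertices.Length; i++)
            {
                Vector3 toVertex = vertices[i] - handPosition;
                float distance = toVertex.magnitude;
                if (distance > radius || distance < 1e-5f) continue;

                // Linear falloff: vertices nearer the hand move more.
                float falloff = 1f - distance / radius;
                Vector3 direction = toVertex / distance;

                // Open hand pushes away; closed hand pulls toward the hand.
                float sign = handIsOpen ? 1f : -1f;
                vertices[i] += sign * strength * falloff * direction;
            }

            mesh.vertices = vertices;
            mesh.RecalculateNormals();
        }
    }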

Classifying each hand as open or closed required image processing techniques to analyze the Kinect depth map and the hand's contour data.
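
One common way to make that open/closed decision from a segmented depth image is to compare the hand contour's area with the area of its convex hull ("solidity"): spread fingers leave gaps under the hull, so an open hand scores lower than a closed fist. The sketch below assumes the OpenCvSharp binding and an already-segmented binary hand mask; the threshold and names are illustrative, not necessarily the project's exact method.

    using OpenCvSharp;

    // Illustrative open/closed classifier using contour "solidity"
    // (contour area / convex hull area). Binding, threshold, and names
    // are assumptions, not the project's code.
    public static class HandClassifier
    {
        // handMask: binary 8-bit image of the segmented hand from the depth map.
        public static bool IsHandOpen(Mat handMask)
        {
            Cv2.FindContours(handMask, out Point[][] contours, out HierarchyIndex[] hierarchy,
                RetrievalModes.External, ContourApproximationModes.ApproxSimple);
            if (contours.Length == 0) return false;

            // Take the largest contour as the hand.
            Point[] hand = contours[0];
            double bestArea = 0;
            foreach (Point[] c in contours)
            {
                double a = Cv2.ContourArea(c);
                if (a > bestArea) { bestArea = a; hand = c; }
            }

            Point[] hull = Cv2.ConvexHull(hand);
            double hullArea = Cv2.ContourArea(hull);
            if (hullArea <= 0) return false;

            // Spread fingers leave gaps between the contour and its hull,
            // so an open hand has lower solidity than a closed fist.
            double solidity = bestArea / hullArea;
            return solidity < 0.8;
        }
    }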

Technical Challenges

Maintaining mesh quality during deformation, without allowing excessive vertex displacement, was crucial. The solution applied Laplacian smoothing periodically, moving each vertex toward the average of its neighbours, to prevent geometric degradation.
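
A minimal sketch of uniform Laplacian smoothing on a Unity mesh is given below, assuming uniform neighbour weights and a single pass; the lambda factor and names are illustrative rather than taken from the project.

    using System.Collections.Generic;
    using UnityEngine;

    // Illustrative uniform Laplacian smoothing: move each vertex part of the
    // way toward the average of its adjacent vertices.
    public static class LaplacianSmoother
    {
        public static void Smooth(Mesh mesh, float lambda = 0.5f)
        {
            Vector3[] vertices = mesh.vertices;
            int[] triangles = mesh.triangles;

            // Build vertex adjacency from the triangle list.
            var neighbors = new HashSet<int>[vertices.Length];
            for (int i = 0; i < vertices.Length; i++) neighbors[i] = new HashSet<int>();
            for (int t = 0; t < triangles.Length; t += 3)
            {
                int a = triangles[t], b = triangles[t + 1], c = triangles[t + 2];
                neighbors[a].Add(b); neighbors[a].Add(c);
                neighbors[b].Add(a); neighbors[b].Add(c);
                neighbors[c].Add(a); neighbors[c].Add(b);
            }

            // Blend each vertex toward its neighbour average by lambda.
            var smoothed = new Vector3[vertices.Length];
            for (int i = 0; i < vertices.Length; i++)
            {
                if (neighbors[i].Count == 0) { smoothed[i] = vertices[i]; continue; }
                Vector3 average = Vector3.zero;
                foreach (int n in neighbors[i]) average += vertices[n];
                average /= neighbors[i].Count;
                smoothed[i] = Vector3.Lerp(vertices[i], average, lambda);
            }

            mesh.vertices = smoothed;
            mesh.RecalculateNormals();
        }
    }

Running a pass every few frames, rather than every frame, is one way to keep the deformation responsive while still reining in degraded geometry.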

Technologies Used

  • Unity3D - Game engine and 3D environment
  • C# - Programming language
  • OpenNI - Skeletal tracking and gesture recognition
  • OpenCV - Hand image processing