As the fifth week ends, our goals become clearer, and with them the challenges we must overcome grow more deeply etched in our thoughts.
Overall, progress has become more difficult as we reach deeper topics and obstacles. Our project has split into two separate projects, since integrating gesture interaction into the 3D projection system made it react very slowly to input. Still, the focus of my research has not changed: How can we create a base system that takes gestures as its main input? How will displays evolve once these systems become commonplace? The challenges of delving into these questions are now clearer. Because this is a new technology, a new form of input for the computer, standard libraries for manipulating the input data do not yet exist.
The challenge, then, is twofold: we must devise intuitive ways for this new “Skeleton” data to interact with the computer, and we must write the code that makes the system recognize that data as input in the first place. Beyond defining the gestures and the system’s reaction to each one, we must build, from the ground up, a layer that treats the data received from the Kinect as input; after all, skeleton data is just data until we define it otherwise. These challenges are no small matter, but I will persist and attempt to define a system capable of interacting with the computer easily and intuitively. I will attempt to build the base on which others may build structures that will, ultimately, expand to control the entire computer.
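To make the idea concrete, here is a minimal sketch of what “turning skeleton data into input” could look like. Everything here is hypothetical — the `SkeletonFrame` fields, the window size, and the distance threshold are assumptions for illustration, not the Kinect SDK’s actual API. The sketch watches the right hand’s horizontal position over a short window of frames and emits a discrete “swipe” event when the hand travels far enough, which is one simple way raw joint positions can be promoted into an input event.

```python
from dataclasses import dataclass
from collections import deque


@dataclass
class SkeletonFrame:
    """One frame of (hypothetical) skeleton data: right-hand position in meters."""
    right_hand_x: float
    right_hand_y: float


class SwipeDetector:
    """Turns a stream of raw skeleton frames into discrete swipe input events."""

    def __init__(self, window=5, threshold=0.4):
        self.history = deque(maxlen=window)  # recent right-hand x-positions
        self.threshold = threshold           # meters of travel that counts as a swipe

    def update(self, frame):
        """Feed one frame; return 'swipe_left'/'swipe_right' or None."""
        self.history.append(frame.right_hand_x)
        if len(self.history) == self.history.maxlen:
            travel = self.history[-1] - self.history[0]
            if travel > self.threshold:
                self.history.clear()  # reset so one motion fires one event
                return "swipe_right"
            if travel < -self.threshold:
                self.history.clear()
                return "swipe_left"
        return None
```

Feeding the detector a hand moving steadily to the right — say frames at x = 0.0, 0.15, 0.3, 0.45, 0.6 — yields `None` until the window fills, then a single `"swipe_right"` event. A real system would layer many such recognizers over the frame stream, which is exactly the “base” described above: the part that decides that data is input.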
These challenges are often frustrating, but that’s why it’s called research: reading, learning, and trial and error until we get there.