Hello again, reader,
This week, a pair of books on the Kinect SDK arrived, and they have been extremely interesting beyond just helping with the programming. I’ve learned a bit of the history of how the Kinect slowly came to be: imagined decades ago, with different projects gradually building the foundation that has made technology like this a reality. In essence, the Kinect detects motion and sound. More precisely, it captures streams of color frames, depth frames, and audio. It uses that data to build a 3D virtual environment, then applies other algorithms (which were slowly ‘taught’ to the program) to decide whether an object at a given depth is a person, and from there to locate that person’s joints. It also uses not one but four microphones, which synchronize their signals to pinpoint very precisely where a sound is coming from.
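The Kinect’s actual beamforming across four microphones is far more sophisticated than I can do justice to here, but the core idea behind locating a sound source is the time difference of arrival between microphones. Here is a toy Python sketch of that idea for a single pair of mics; the function name, constants, and sample values are my own illustration, not anything from the SDK:

```python
import math

# Approximate speed of sound in air at room temperature (m/s).
SPEED_OF_SOUND = 343.0

def source_angle(delay_s, mic_spacing_m):
    """Estimate the bearing of a sound source from the time difference
    of arrival between two microphones.

    delay_s: how much later the sound reached the farther mic (seconds).
    mic_spacing_m: distance between the two microphones (meters).
    Returns the angle in degrees, where 0 means straight ahead.
    """
    # Extra distance the sound travels to reach the farther microphone:
    path_diff = SPEED_OF_SOUND * delay_s
    # Geometrically, path_diff = spacing * sin(angle); clamp for safety:
    ratio = max(-1.0, min(1.0, path_diff / mic_spacing_m))
    return math.degrees(math.asin(ratio))

# Sound arriving at both mics simultaneously comes from straight ahead:
print(source_angle(0.0, 0.1))  # 0.0
# A noticeable delay means the source is off to one side:
print(round(source_angle(0.00025, 0.1), 1))
```

With four microphones instead of two, the system gets several of these pairwise measurements at once, which is what lets it resolve the direction so precisely.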
All of this data combined gives programmers a foundation to build on; what do they build? Essentially, gestures and sounds to which the system reacts: gestures that feel intuitive to users (like sliding a page to the side), and words that tell the system what to do. The real question is how we expand on it. How do we build on that foundation so that, like the current mouse-and-keyboard setup, systems like this become fully functional and ubiquitous? Imagine a world where work gets done simply by moving around or talking naturally to a system. Imagine entering your office, fully equipped with such a system: it turns on the lights, greets you, projects a calendar near the door, shows a reminder for an important meeting on the wall you are facing, and scatters your contacts across the desks with their respective messages and windows. Just by moving your arms, you can scan through the contacts to decide which to look at first, and with a throwing motion, fling one at the far wall to make it bigger. Imagine an office that can do that and more.
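Real gesture recognizers work over full joint histories from the skeleton stream, but a minimal sketch of the “slide a page to the side” idea can be surprisingly small. This Python sketch assumes we already get one hand x-position per frame; the thresholds are made-up values for illustration, not anything tuned from the SDK:

```python
def detect_swipe(hand_xs, min_distance=0.4, max_backtrack=0.05):
    """Detect a left-to-right swipe from a sequence of hand x-positions
    (meters, one sample per skeleton frame).

    Returns True if the hand travels at least min_distance to the right
    without ever retreating more than max_backtrack along the way.
    """
    if not hand_xs:
        return False
    start = peak = hand_xs[0]
    for x in hand_xs[1:]:
        peak = max(peak, x)
        if peak - x > max_backtrack:   # hand pulled back: restart the gesture
            start = peak = x
        if x - start >= min_distance:  # travelled far enough: it's a swipe
            return True
    return False

print(detect_swipe([0.0, 0.1, 0.25, 0.45]))  # True: steady rightward motion
print(detect_swipe([0.0, 0.1, 0.0, 0.1]))    # False: the hand kept retreating
```

The interesting design problem is exactly the one above, scaled up: choosing thresholds and shapes of motion so that gestures trigger when the user means them and never when they don’t.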
THAT is the future, and we’re closing in on it with this technology. As we add more sensors, we build a clearer and more precise virtual environment, making the system more effective at reading your actions; as we break the limits of where we can project images, we get closer to making the entire office a workstation. As we refine the algorithms that read the user, the system becomes smarter and its reactions smoother. That is a future I want to create. That is a future I will create. One single summer is too little time to make all of this happen, but it is more than enough time to give life to new ideas and new goals.