Hello again, reader
My research project has gone through many phases. Through those many phases, I’ve learned a lot more about Computer Science. But more than that, I’ve learned to appreciate it even more. I used to think you could do anything thanks to computers, but I never thought of myself as one of those programmers who make those things real. Now, I think I am more capable. Either that, or more confident in my abilities and knowledge.
The Kinect was to be used with the 3D projection project, but that idea crumbled when we thought “hey, there’s going to be too much lag, especially since we would have to use two computers”. Then, I didn’t have much of a goal, but I did have a lot of reading to do to educate myself on Kinect data and its libraries. Basically, I didn’t have a specific goal in terms of making something, but I did have a goal in understanding the Kinect; learning how to manipulate the information it gave me so I could make something out of it. That was very exciting, especially since I had to learn not only the Kinect libraries, but how to program in C#. I learned that I not only like Computer Science, but also greatly enjoy building new things. This project has helped me realize all this, since I’ve had to learn a whole lot on new topics, have built a library for the Kinect controls, and will be building a simple application. This project has helped me realize that I can bring my ideas to life, and that I should attempt to bring them to life; it is an immensely satisfying feeling to think up an idea and go through the process of making it real.
This week was a little different; we did a lot of interesting things. Independence Day is celebrated differently here than in Puerto Rico. First of all, we went to a good concert that Dr. Seales and his family invited us to. The next day I went downtown to see the 4th of July parade, and at night I went back with the REU group (another research program at UK) to watch the fireworks. I had fun in all these new experiences.
We started off the week pretty well. The first exciting thing is that we got ANOTHER BALL. This time it was a smooth plastic orange ball (kind of shiny, but more on that later). We rethought our approach, and soon I am going to be handed a laptop to move around for the calibration; I still don't know when, though.
Going back to the ball: Julie also handed me a can of white flat spray paint, emphasis on the flat (needed to take out the shininess). As soon as I got it, I went out to a patch of grass and painted a section of the ball as a test, hoping I would not mess the whole thing up. Guess what? It all worked out. On Tuesday I finished painting the whole thing and started working with it.
The rest of my work so far consists of tweaking the calibration as much as I can, working on my presentation (which will turn out great if I get the ball done), and starting my essay draft, mostly to keep track of narrative ideas.
On the other hand, I should mention that for the 4th of July celebration, on the 3rd we went to Gratz Park at Transylvania University to see a little show that they put on every year for the community. They played a couple of patriotic and/or classical pieces. It was nice, and the popcorn was great (you can ask Eric). On the 4th I went with a couple of people to see a parade downtown. So that was it for this week.
This week I have had conflicts with my friend Android, but it looks like we are getting even closer on this journey. I finally managed to get the 3D model file in object (.obj) format from the server, but that wasn’t the problem. The problem was writing code that would read an object-format file, interpret it, and display the 3D model. The code that I have written reads the file and interprets it; the only problem is that sometimes the models are too large to display, so I have to solve that. The main struggle for me this week and the next is caching images and data already obtained from the server, so that the application does not fetch the same data over and over again. Also, I will be working on the user interface for the application, since we are having a meeting on Monday, and I want to keep improving my presentation, my written report, and my speaking skills for next week.
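To give a rough idea of the kind of parsing involved, here is a small Python sketch (my actual code is for Android, and the names here are made up) that assumes a simple Wavefront OBJ file with `v` vertex lines and `f` face lines; the `normalize` step shows one way to deal with models that come out too large to display, by rescaling them into a unit bounding box:

```python
# Minimal Wavefront OBJ reader sketch: collect vertices and faces, then
# rescale the model into a unit bounding box so oversized meshes still
# fit on screen. Hypothetical illustration, not the project's real code.

def load_obj(lines):
    vertices, faces = [], []
    for line in lines:
        parts = line.split()
        if not parts:
            continue
        if parts[0] == "v":                      # vertex line: v x y z
            vertices.append(tuple(float(p) for p in parts[1:4]))
        elif parts[0] == "f":                    # face line: f i j k (1-based,
            faces.append(tuple(                  # may carry /vt/vn suffixes)
                int(p.split("/")[0]) - 1 for p in parts[1:]))
    return vertices, faces

def normalize(vertices):
    """Translate and scale vertices so the model fits a unit bounding box."""
    mins = [min(v[i] for v in vertices) for i in range(3)]
    maxs = [max(v[i] for v in vertices) for i in range(3)]
    scale = max(maxs[i] - mins[i] for i in range(3)) or 1.0
    return [tuple((v[i] - mins[i]) / scale for i in range(3))
            for v in vertices]
```

The display side then only ever sees coordinates between 0 and 1, regardless of the units the model was exported in.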
As the fifth week ends, our goals become clearer, and with them, the challenges that must be overcome also become more deeply etched into our thoughts.
Overall, it has become more difficult to advance as we arrive at deeper topics and obstacles. Our project has split into two separate projects, since integrating gesture interaction into the 3D projection system would cause the system to react very slowly to input. Still, the focus of my research has not changed: How can we create a base system that takes gestures as the main input? How will displays evolve once these systems become commonplace? Now the challenges of delving into these questions are clearer. As this is a new technology, a new form of input for the computer, standard libraries for manipulating the input data are non-existent.
The challenge, then, is that not only must we manipulate and devise intuitive ways in which this new “Skeleton” data interacts with the computer, but we must also write the code which will make the system recognize the data as input of some sort. Not only must we write the gestures and the system’s reactions to each one, but we must also build, from the ground up, a system that understands that the data received from the Kinect is to be considered input. After all, skeleton data is just data until we define it otherwise. These challenges are no small deal, but I will persist and attempt to define a system capable of interacting with the computer easily and intuitively. I will attempt to build the base on which others may build other structures that will, ultimately, expand to control the entirety of the computer.
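As a toy illustration of what it means to make the system treat skeleton data as input, here is a Python sketch (the real work is in C# with the Kinect libraries, and the class, thresholds, and event names are all made up): a detector watches a hand joint's x-coordinate over the last few frames and turns enough horizontal travel into a discrete swipe event, the way a keyboard driver turns voltages into keypresses:

```python
# Toy "skeleton data as input" sketch: track recent hand x positions and
# emit a swipe event once the hand travels far enough within the window.
# SwipeDetector and its thresholds are hypothetical, for illustration only.

from collections import deque

class SwipeDetector:
    def __init__(self, window=10, threshold=0.4):
        self.xs = deque(maxlen=window)   # recent hand x positions (meters)
        self.threshold = threshold       # minimum horizontal travel for a swipe

    def update(self, hand_x):
        """Feed one frame; return 'swipe_left', 'swipe_right', or None."""
        self.xs.append(hand_x)
        if len(self.xs) < self.xs.maxlen:
            return None                  # not enough history yet
        travel = self.xs[-1] - self.xs[0]
        if travel > self.threshold:
            self.xs.clear()              # consume the gesture
            return "swipe_right"
        if travel < -self.threshold:
            self.xs.clear()
            return "swipe_left"
        return None
```

Everything above this layer can then subscribe to swipe events without ever caring that they came from a depth camera.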
These challenges are, at many times, frustrating, but that’s why it’s called research. Reading and learning and trial-and-error until we get there.
Happy 4th of July! Due to this holiday, we didn’t have a speaker this week and we had Wednesday off. So… I will just discuss the work I did this week.
Basically, this week I worked on display and gestures. As a group we decided to organize the app so that once a manuscript is chosen, it opens with the first page's low-res image on the left and the translation on the right. There will also be buttons at the top of the page that you can tap to see another viewer (3D, high res, etc.) instead of the translation.
At the moment I have the low-res image set on the right side so you can swipe your finger along the screen and it will flip to the next image and cache it. I have also imported a view instance of our 3D view controller (a plain UIView would not have worked, because you wouldn't be able to pan, zoom, rotate, or interact with the 3D image). There were a lot of problems with the landscape view conflicting with the 3D gesture recognition. At first, every action did the opposite of what it was supposed to do. It frustratingly took more time than expected, but it's all good now: the 3D viewer had to be rotated to match the gestures.
So now I just need to import 3D images from the server. That is my goal for next week. It would be great to get some other viewers imported too.
Week 3 (Software tools)
This week I have been working with the data on the server. The images that mobile devices download need to come in different sizes. Adobe Photoshop is a powerful tool for working with large quantities of images. My previous work has all been on Photoshop CS3, but now Photoshop CS6 is standard.
MeshLab is the tool we use most for working with the 3D data. It is a general-purpose mesh viewer. It is open source, but it is still buggy.
The 3D data that we have has around 100,000 points for each page. This is too much data for a portable device to use. Fortunately, there is a function in MeshLab that allows us to interpolate and reduce points.
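MeshLab's actual decimation filters are more sophisticated than this, but the basic idea of reducing points can be sketched in Python with vertex clustering: snap each point to a coarse 3D grid cell and keep one averaged point per cell (the function name and cell size are made up for the sketch):

```python
# Vertex-clustering sketch: bucket points into coarse grid cells and
# replace each bucket with its centroid, shrinking the point count.
# Illustration only; not MeshLab's actual decimation algorithm.

from collections import defaultdict

def cluster_points(points, cell=0.1):
    cells = defaultdict(list)
    for x, y, z in points:
        # Integer grid coordinates identify which cell a point falls in.
        key = (int(x // cell), int(y // cell), int(z // cell))
        cells[key].append((x, y, z))
    # One averaged point (centroid) survives per occupied cell.
    return [tuple(sum(coord) / len(pts) for coord in zip(*pts))
            for pts in cells.values()]
```

With a large enough cell size, 100,000 points collapse to a count a tablet can comfortably render.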
Our camera does not use the Bayer pattern. Instead, it takes black-and-white images, which are of much higher quality. When we want a color image, we shine a red light on the subject and take a black-and-white picture. Then we shine a green light on the subject and take a picture, and then a blue light.
After taking multiple photos, we have to merge the images back together. Voxels are multi-dimensional pixels. We use UNU to make .nrrd files. These files store multi-dimensional images, where each dimension is a color spectrum.
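The merging idea can be sketched in Python: given the three grayscale captures taken under red, green, and blue light (each represented here as a 2D list of 0-255 values), zip them into one RGB image. This only illustrates the concept; our actual pipeline goes through UNU and .nrrd files:

```python
# Sketch of combining three grayscale captures (shot under red, green,
# and blue illumination) into a single color image. Each input is a 2D
# list of intensity values; the output is a 2D list of (r, g, b) tuples.

def merge_channels(red, green, blue):
    """Stack three grayscale planes into an RGB image, row by row."""
    return [
        [(r, g, b) for r, g, b in zip(row_r, row_g, row_b)]
        for row_r, row_g, row_b in zip(red, green, blue)
    ]
```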
We have been looking forward to getting the project well calibrated and to moving forward on other aspects of it, but it has proven more challenging than expected. It turns out that all that is left is manual labor.
We finally moved the multi-spectral camera out of the area for our setup and brought in the aluminum pieces we have to work with for now. It took us a while to get everything in place, but it was a good workout, because some of the stuff we moved was fairly heavy.
After we consulted with some more experienced professors, they provided pretty relevant information about the hardware requirements and expectations for making the project work. We'll be putting our efforts into those too. That includes a bigger ball or sphere, and projectors with a wider throw for better focusing and for viewing more relevant details. It happened that a certain someone had already thought of buying a beach ball.
To start off the week, we research students went to Raising Cane's to spend some time together and eat lunch. Then we had a little seminar on design. It tied in with the rest of the week, because we all got together with the graphic designer to plan the layout of the app. This is critical; we have to make sure all the interfaces match for the best user experience.
So far our iOS app can import videos and images from the server through HTTP requests. I worked mainly on caching the low-res images so the user can easily navigate through the images and flip through the pages without waiting for constant loading. John perfected it so it could work with the pages from the server. I also worked on displaying 3D images of the pages. Now to import those from the server! Challenge accepted.
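The caching idea can be sketched in Python (the real app is in Objective-C, and all the names here are made up): keep already-downloaded page images in a small least-recently-used cache, so flipping back to a recent page never re-fetches it from the server:

```python
# LRU page-image cache sketch: recently viewed pages stay in memory and
# only genuinely new pages trigger a fetch. fetch_page stands in for the
# real HTTP request; class and method names are hypothetical.

from collections import OrderedDict

class PageCache:
    def __init__(self, fetch_page, capacity=8):
        self.fetch_page = fetch_page     # callable: page number -> image data
        self.capacity = capacity
        self.cache = OrderedDict()       # insertion order doubles as LRU order

    def get(self, page):
        if page in self.cache:
            self.cache.move_to_end(page)     # mark as most recently used
            return self.cache[page]
        data = self.fetch_page(page)         # cache miss: go to the "server"
        self.cache[page] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)   # evict the least recently used
        return data
```

Swiping back and forth between two pages then costs exactly two fetches, no matter how many times the user flips.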
And since I am almost halfway through the program, I need to take a moment to reflect. Have you ever heard the phrase "hit the ground running"? That's how I felt at the beginning of the program. I felt like I didn't really know how to do what I needed to do. But I've learned how to program for iOS in Objective-C. I've learned how to interact with the server. I've learned how to cooperate with others in order to complete a huge project. And the funny thing is, we aren't even done yet! This is only from four weeks of working, and I already feel so accomplished. I can't wait to see what happens next, and I'm excited for the app to be completed.
During this month there have been a lot of new experiences. At the beginning we were adapting and trying to get an overview of what we would be doing. It was a transition period until now. Now we are all set up, with a clear vision of what we are doing and what we want to achieve. Everyone has divided up the work and has their own goals.
My part of the project is related to the web-based application, so I'm improving my skills as time passes. I have finally started to apply what we have been learning this month. This week I created PHP classes to read the configuration file, fill instances of the classes, and display their data dynamically. I have also been working with the viewers: I created a viewer that displays some manuscript images with "previous" and "next" buttons that switch between the different images. I also displayed a video of the Chad Gospels that Zack e-mailed us. Since the images were located on my computer and the video was from YouTube, next week I will be working on fetching the images and the videos directly from the server.
On Monday we had a seminar by a design professor about interactive design. Then on Thursday the members of the Info Forest had a meeting with the design professor and Aaron (the graphic designer) about the design of the user interface, and we showed them the work we have done so far. At Friday's meeting we gave our presentations in the conference room. I refined the overview of my presentation, added background information, mentioned the tools I have been working with, and added some visuals. In addition to fetching the data from the server, I will also be working on the 3D viewer. After all of these experiences, I'm more confident and clear about what part of the project I am going to be working on. In conclusion, I feel satisfied with what I have done so far, but I'm looking forward to continuing to improve.