Undergraduate Research at the Vis Center

Hey Android, am I done?

Looking back on the past few weeks, I have learned a lot. I have learned more about web development while helping Krystel build a strong background in it, and I have upgraded my Java, programming, and application-development skills as I learn to develop for Android. I have learned how to integrate my work with my partners' work, and I have been testing simple applications and parts of the InfoForest application on a real Android tablet. I have created classes as I expand my knowledge of object-oriented programming, and I have learned about media and how to display it. I have learned how to create an Android application that communicates with a server and displays data obtained dynamically from it. I have learned to create simple applications that display 3D objects, videos, sound, pictures, and data; these are called viewers. I have learned to create an application that parses data from an XML file and stores it in an array of instances of the classes I created for later use. Finally, I learned how to handle events and make something happen when the user clicks a button or touches the screen. All this learning and achieving sounds really nice, but I am not done yet.
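The XML-to-objects step described above can be sketched roughly like this. The actual app is written in Java for Android, so this is only a minimal Python illustration of the same idea, and the element names (`item`, `title`, `media`) and the `Entry` class are my assumptions, not the project's real schema:

```python
import xml.etree.ElementTree as ET
from dataclasses import dataclass

@dataclass
class Entry:
    """One record parsed from the configuration XML (field names are assumed)."""
    title: str
    media: str

def parse_entries(xml_text: str) -> list:
    """Parse the XML and store each <item> as an Entry instance in a list."""
    root = ET.fromstring(xml_text)
    return [
        Entry(title=item.findtext("title", ""), media=item.findtext("media", ""))
        for item in root.iter("item")
    ]

sample = """
<config>
  <item><title>Dragon</title><media>model</media></item>
  <item><title>Chant</title><media>sound</media></item>
</config>
"""
entries = parse_entries(sample)
```

Once the data lives in a list of objects like this, the viewers can pick out whatever they need without touching the XML again.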

This week I have been working with all the viewers some of them are progressing some of them not so much. My goal for next week is to finish most of the viewers so they will be working properly and efficiently so I can create a user interface that contains all of the viewers and that receives almost all the data and media from the server.  The viewers I am going to work on are the 3D Viewer, incorporate image, sound, and video to a piece of the application called media player, the multispectral viewer, the legacy viewer and the main viewer which is the Book Viewer. I am having trouble getting a 3D model from the server and displaying it, getting High Resolution Images displayed like a book, displaying a video with YouTube player and building the user interface for which we had a meeting yesterday. In this case trouble sounds more like a challenge and I am on my way to finish this adventure with the help of my partners and mentors.


Halfway: Week 4

We have almost reached the halfway point with our summer research program. I feel like I am getting to know my teammates better–we all went out for lunch at Canes last Wednesday and had some great “work bonding.” I am doing my best to expose Jean-Carlo, Krystel, Karlo and Eric to our fine dining here in Kentucky (i.e. Raising Canes and Orange Leaf).

In regard to my project, I have finally finished the XML for the Chad Gospels, which is the culmination of five months of work. Last week, I came to the realization that a third XML noting all of the variations between the Latin text and the NIV translation of the Bible would not be the best way to display and organize the information. After consulting with Julie, John, and Becky, we decided that the information would be better presented as notes written into the existing XMLs. These notes follow the TEI XML standard and contain explanations of the variations in the text.
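For illustration, a note of this kind might look roughly like the following. `<note>` is a standard TEI element, but the attributes, surrounding element names, and wording here are my assumptions, not the project's actual markup:

```xml
<!-- Hypothetical illustration: the real XMLs may structure this differently. -->
<ab n="example-verse">
  Beati pauperes spiritu ...
  <note type="variation">The Latin here diverges from the NIV reading;
  the scribe marks the misplaced text with a slash.</note>
</ab>
```

Embedding the explanation next to the text it describes keeps the variation and its context together, which is the point of moving away from a separate third XML.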

One of the main reasons we did this is that the scribe often misplaced verses in the text. Every time he did this, he put a slash to mark the misplaced text. We wanted to make sure this was clearly displayed to whoever is using the app, because it can be difficult to understand at first. I, admittedly, had to spend some time with the manuscript before I understood what was happening.

Example of a page in the manuscript that has these mistakes. Each is noted with a slash.

I have finished the first part of my project. I am not exactly sure what I will be doing next, but it is a little exciting not to know what comes next. We have several ideas for my next move–writing XML for another manuscript, or working with technology that can highlight passages in the image–and each seems interesting in its own way!

I will be gone next week, but I am excited to come back and continue into whatever new venture we deem best for the overall project!

Time flies

Hello once more, reader,

 

They say time flies.  I believe them.  It’s been almost a month and so much has happened, even though it feels like so little time has gone by.  We were given our projects, we adapted to this new environment, we immersed ourselves in the projects, we have learned and read and seen a lot, and we’ve divided the projects further into parts and made them our own, like personal belongings.

When we first arrived, I was clueless, not knowing what I would be doing or how I would be doing it.  It was a very exciting moment when I was given the 3D projection project, for a few reasons.  One of the main reasons was that I didn’t know much about projection, so I knew I was about to learn a whole lot about it; during the first week, it was enlightening just to read the book on projection setups, opening my mind to so many bright ideas and expanding my imagination.  In just a week, I was dreaming of projecting things everywhere.  The next week, I was even more excited when we were told we’d be given a Kinect to integrate some interaction into our project, and that I’d be working on that part.  It was very frustrating because I could not understand the basics of programming for the Kinect with just a few video tutorials on YouTube.  I had ideas and wanted to bring them to life, but I was getting nowhere… The next week, the pair of books on Kinect programming arrived, and reading the introduction on how the Kinect came to be blew me away.  It was quite interesting how the idea had been developing for around 50 years.  Once I immersed myself in the book, I began to get a better grasp of how to work with the data and, once more, I had ideas.  This time around, though, I knew I could bring some of those ideas to life, with the help of the books.

That’s how we’ve arrived at this fourth week.  Reading, learning, expanding my views, letting my imagination fly.  I feel I’ve grown and defined myself a little more.  I feel as if I have discovered a responsibility that is implicitly part of each of us; I feel it’s almost an obligation not only to imagine, but also to create and build and improve and reinvent.  I feel I’ve found even deeper meaning in one of the quotes I posted previously, about things being built from smaller parts.  I feel we’re going to do great things in what little time we have left, and I feel that we’ve all learned to always strive to research and create.  I don’t know… it’s just a very enlightening experience.

They say time flies, and each of us has so many great ideas…

To create the future…

Hello again, reader,

 

This week, a pair of books on the Kinect SDK arrived, which have been extremely interesting (beyond helping with the programming).  I’ve learned a bit of history about how the Kinect slowly came to be: imagined decades ago, with different projects slowly building the foundation that has made technology like this a reality.  In essence, the Kinect detects motion and sound.  In depth, though, it captures streams of color frames, depth frames, and audio.  It uses that data to produce a 3D virtual environment, and uses other algorithms (which were slowly ‘taught’ to the program) to determine whether a given object at a certain depth is a person, and then to locate some of the person’s joints.  It also utilizes not one but four microphones, which synchronize their information to determine very precisely where the source of the audio is.

 

All of this data combined provides a foundation on which programmers can build; what do they build on it?  Essentially, gestures and sounds to which the system reacts: gestures that are intuitive to users (like sliding a page to the side), and words that tell the system what to do.  The real question is: how do we expand on it?  How do we build upon that foundation so that, like the current mouse-and-keyboard setup, systems like this become fully functional and ubiquitous?  Imagine a world where, simply by moving around or talking naturally to a system, work gets done.  Imagine entering your office, fully equipped with such a system: the office quickly turns on all the lights, greets you, and begins projecting a calendar near the door, shows you a reminder for an important meeting on the wall you are facing, and shows a bunch of contacts scattered on the desks with their respective messages and windows; just by moving your arms, you can scan through the contacts to decide which you will look at first, and with a throwing motion, shoot one at the far wall to make it bigger.  Imagine an office that can do that and more.

 

THAT is the future, and we’re closing in on it with this technology.  As we add more sensors to the system, we can build a clearer and more precise virtual environment, making it more effective at reading your actions; as we break the limits of where we can project images, we get closer to making the entire office a workstation.  As we refine the algorithms used to read the user, the system becomes smarter and its reactions smoother.  That is a future I want to create.  That is a future I will create.  One single summer is too little time to make all of this happen, but it is more than enough time to give life to new ideas and new goals.

Lions and tigers and Objective-C (week 3)

My love-hate relationship with Objective-C grows along with the progress of this iOS app.  I guess I struggle because I know Python and C++ very well, and although the concepts are the same, Objective-C has random aspects that are very different.  And the fact that there are different iOS devices to keep in mind is quite the pain.  But I’ve been able to adjust, and I feel like this week has been productive.

For the main page of the app, I figured out how to add buttons until they fill the screen.  And just when you think all hope is lost and you won’t be able to add any more ancient manuscripts to the app…… it adds a scroll view!  I know this isn’t amazingly exciting, but it will definitely be useful.

In more exciting news, the app can now access the server with HTTP requests using a third-party library called AFNetworking.  So I wrote a separate class for calls to the server, and so far there are functions for accessing images and parsing XML.  From the screenshot you can see the image grabbing.  I definitely enjoy the iOS simulator in Xcode.  It blows my mind to think that a couple of weeks ago I lacked a lot of the knowledge I’m using now.  But that’s the exciting part of research.  Just when you think “Is it even possible for someone to learn this much?!” you realize how little you actually know.
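The design described here, one class that owns all the server calls, with separate methods for images and XML, can be sketched in a language-neutral way. The real code is Objective-C on AFNetworking; this Python version only illustrates the shape, and the URL layout, method names, and the injectable fetcher are my assumptions:

```python
import urllib.request
import xml.etree.ElementTree as ET

class ServerClient:
    """Single home for all server calls (method and URL names are assumed)."""

    def __init__(self, base_url, fetch=None):
        self.base_url = base_url.rstrip("/")
        # Allow injecting a fake fetcher so the class can be exercised offline.
        self._fetch = fetch or (lambda url: urllib.request.urlopen(url).read())
        self._cache = {}

    def get_bytes(self, path):
        """Fetch raw bytes (e.g. an image), caching repeated requests."""
        url = "{}/{}".format(self.base_url, path)
        if url not in self._cache:
            self._cache[url] = self._fetch(url)
        return self._cache[url]

    def get_xml(self, path):
        """Fetch a resource and parse it as XML."""
        return ET.fromstring(self.get_bytes(path))

# Offline usage example with a stubbed fetcher:
client = ServerClient("http://example.com", fetch=lambda url: b"<ok/>")
root = client.get_xml("manifest.xml")
```

Keeping every network call behind one class means the rest of the app never needs to know how the server is reached, which is the same benefit the separate Objective-C class provides.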

I would like to give a shout-out to my fellow part-time, high-school team member Zack.  He knows a lot about iOS and less about hardcoding.  In comparison, I know little about iOS and more about hardcoding.  So we’ve kind of been educating and working off of each other.  I’ll write what the app does, and Zack writes how the app displays it.

Also, this week we’ve been learning about the presentation of our research.  At the end of this program we will have a paper and a presentation on our work.  I don’t exactly wake up in the morning craving to give a speech in front of a group of people.  But I know that for the rest of my life I will have to give presentations, whether in my college career or in my work.  Practice makes perfect… right?

Android InfoForest Progress Percentage: 35%

This week we experienced a great and valuable presentation by Dr. Seales, which enlightened us. It was about how to give a great presentation, which we will need in order to give an awesome final presentation. As the days of the week passed, I continued working on the classes, which I managed to fill with the data from the XML configuration file, and started working on the 3D viewer for the InfoForest Android application, using Java and an OBJ-format file to get the vertices and indices of the 3D model. I managed to create a cube and a dragon, figuring out the vertices and indices manually, and to display them in a View so that the user can rotate the model with a finger or, on some mobile devices, with the directional pad. The challenge for next week is to connect the HTTP request to the 3D viewer so the user can see a 3D model sent from the server on request. I also helped Krystel with some aspects of PHP and discussed with her and John how the web-based application will get its data from the server. I will keep working on playing a video, since I have still had no luck with that; but, as superheroes say, never fear, the Acer Iconia tablet is here, and now I can actually see whether the video fails to show on the emulator because of a lack of resources. With an estimated 35% progress, I feel like I have learned so much, but I have also learned how much more there is to learn and do. I hope I can learn and do much more and keep expanding my knowledge.
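Pulling vertices and indices out of an OBJ-format file, as described above, can be sketched as follows. The Android code is Java, so this is only a minimal Python illustration; it handles plain `v` (vertex) and triangular `f` (face) lines and ignores normals, texture coordinates, and larger polygons:

```python
def parse_obj(text):
    """Extract vertex coordinates and triangle indices from OBJ-format text."""
    vertices, indices = [], []
    for line in text.splitlines():
        parts = line.split()
        if not parts:
            continue
        if parts[0] == "v":              # vertex line: v x y z
            vertices.append(tuple(float(p) for p in parts[1:4]))
        elif parts[0] == "f":            # face line: f i j k (1-based indices)
            indices.append(tuple(int(p.split("/")[0]) - 1 for p in parts[1:4]))
    return vertices, indices

# A tiny hand-written fragment, like figuring out a cube's vertices manually:
cube_fragment = """
v 0.0 0.0 0.0
v 1.0 0.0 0.0
v 0.0 1.0 0.0
f 1 2 3
"""
verts, faces = parse_obj(cube_fragment)
```

The vertex and index lists are exactly what a renderer needs to draw the model, which is why manually working them out for a cube is a sensible first step before loading a dragon.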

 

3D Dragon on Android

Third Week: Trial & error. (DEBUGGING)

The whole week went by really fast. It seems that the work we are doing now has that effect on me. By midweek we moved up a couple of tables, the projectors, and the textured ball to mount a temporary display. I got some help from Fred, the guy who made the warping and blending program; he’s a really nice guy and sometimes takes time off from his own work to help get me “in touch” with his program and how it works.

 

The meeting with the content guys was again a kind of success, since we had not made as much progress before as we have now, at the end of the week. It was difficult for them to contemplate my perspective without seeing what the results would actually be. On the other hand, Aaron offered his mighty hand to help us get an image with the properties we need to calibrate the projectors onto the setup.

 

On Friday we presented our little draft of the presentations and had a peer-feedback session. It went smoothly, but there is still a lot to work on.

But performance and results seem to be arriving in a steady stream. Let’s hope things keep that pace.

Shout out to my girlfriend that I miss so much! And to people back home…

Week 3

Let me start off by saying that this has not been my week with technology. Considering that I am a History/Classics major, most of the time this is fine. However, it almost did me in this week.

 

Things I have learned this week:

1. Watching is not learning.

2. Over-communication is never a bad thing.

3. I should NEVER re-size images from small to large. Ever.

 

This week I made my first major mistake in the project. I am a little embarrassed to admit what I did, only because it shows my ineptitude in computer science. However, the whole debacle proved to me that this is a group effort, that I am indeed a History major and not a Computer Science major, and that I do not take notes out of habit but out of necessity.

Here is what happened. In the process of naming and processing all of the files and images for the server, I re-sized close to 20,000 images incorrectly. That was Wednesday. On Thursday, I re-sized 20,000 images correctly. It’s been a week, let me tell you.
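The rule I learned the hard way, never re-size images from small to large, can actually be written down. This is not the script we used (the real pipeline presumably went through an image library), just a sketch of the sizing arithmetic, with names I made up:

```python
def downscale_size(width, height, max_side):
    """Return a (width, height) that fits within max_side on the longest
    edge, but never enlarges: upscaling a small image only degrades it."""
    scale = min(1.0, max_side / max(width, height))
    return (round(width * scale), round(height * scale))

# A 4000x2000 scan shrinks to fit; a 300x200 thumbnail is left alone.
big = downscale_size(4000, 2000, 1000)
small = downscale_size(300, 200, 1000)
```

Clamping the scale factor at 1.0 is the whole lesson: the small images pass through untouched instead of being blown up.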

The moment John and I realized what I had done (which occurred at approximately 2:34 PM–I only know this because the moment was, personally, a very traumatic one), we began working to re-download the images from the servers and databases… and I started again.

Luckily, through the help of John and Zack, I have been able to correct my mistake in a fairly timely manner. I am very thankful for both of them and their patience with me. I have learned more than I ever thought I would need to know about computers this week, and will be very happy to return to my Latin dictionary and ancient manuscripts next week.

Who would have thought naming files would be so complicated? Certainly not I.

(I would also like to note that I wrote a Blog post for this week already about 30 minutes ago, but in the process of submitting it for review, WordPress deleted everything I had written. So I wrote it again. I think this sums up what my interactions with computers have been like this week.)

Third Week: Starting our presentation

We started the week with another seminar by Dr. Seales; it was about communication and technical presentation. He gave us some tips for making our final presentation a good and efficient one. My project is also progressing. I created a sketch of the application and started implementing it in PHP. I have also been working on object-oriented programming (OOP) in PHP and with an XML parser, combining them with the sketch implementation in order to set up the web-based application using the configuration file. We started the first draft of our PowerPoint presentation, which we presented at Friday’s meeting, and received feedback. I bought a new Mac at the bookstore and installed all the tools I need so I can continue my work at the dorm. 🙂

Week Two (Vis U)

So far so good.

This week we spent some time discussing what “research” is.  Wikipedia defines research as “creative work undertaken systematically to increase the stock of knowledge”.  This is a very vague definition.

Most of the day-to-day things that we do here are very applicable. The question “Are we doing research?” has been raised. Earlier this week we were introduced to the three I’s of research.  For work to be considered research, it must fulfill all three requirements:

-Inquiry

-Intensity

-Integrity

Our research does fulfill all the requirements to pass the three I’s of research test.

The research statement that pertains to my section of the project is

“Developing an interactive and flexible cross platform system that visualizes data.”

Right now I am still working on the server side of the application. The computer on which I do most of my work is in the basement, but I control it through a secure shell (SSH). Through the SSH connection I can control the remote server the same way I would control a local Linux machine. It is odd to think that I have never seen the computer (server) I have worked on for the last two weeks.

 

Django is a web framework that we have chosen to use for the InfoForest project. Django allows us to use Python scripts to write the HTTP responses. This makes the InfoForest system very flexible: if more content becomes available, it will be easy to modify the scripts.
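The flexibility point can be sketched. This is not the project's actual code, and a real Django view would take an `HttpRequest` and return an `HttpResponse`; this plain-Python sketch (all names made up) just shows why adding new content only means adding another small function:

```python
import json

# Each piece of content gets its own small handler function; supporting
# new content is just a matter of adding another entry to this table.
def chad_gospels(params):
    return {"title": "Chad Gospels", "kind": "manuscript"}

def dragon_model(params):
    return {"title": "Dragon", "format": "obj"}

HANDLERS = {"chad-gospels": chad_gospels, "dragon": dragon_model}

def respond(content_id, params=None):
    """Look up the handler for a piece of content and return a JSON body,
    the way a Django view would before wrapping it in an HttpResponse."""
    handler = HANDLERS.get(content_id)
    if handler is None:
        return json.dumps({"error": "not found"})
    return json.dumps(handler(params or {}))
```

Because each handler is an ordinary Python function, new content slots into the system without touching the existing ones.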