Android Phone & Computer Application

The concept for my newest project revolves around computational photography and photogrammetry. I am currently working to create an Android application that can build a 3-D model, in real time, of whatever the phone's camera is looking at.

The project, still in development, has shown promising initial results.

The current version implements the FAST interest point algorithm to detect points on three-dimensional structures represented in a two-dimensional photo.

The FAST interest point algorithm.

The algorithm examines each pixel of an image and focuses on the 16 pixels that lie on a circle of radius 3 around it. It compares the brightness of each of these circle pixels to the center pixel, marking each one as "brighter", "darker", or the "same" within some intensity threshold. The pixel is flagged as an interest point if a long enough contiguous run of circle pixels, typically 12 of the 16, is all marked "brighter" or all marked "darker". In my implementation of the algorithm, each interest point is turned into a red dot.
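The segment test above can be sketched as follows. This is an illustrative Python version, not the project's actual (Android/server) code; the circle offsets are the standard radius-3 Bresenham circle, and the threshold and run-length defaults are assumptions for the example.

```python
# Offsets of the 16 pixels on a Bresenham circle of radius 3 around the center
CIRCLE = [(0, -3), (1, -3), (2, -2), (3, -1), (3, 0), (3, 1), (2, 2), (1, 3),
          (0, 3), (-1, 3), (-2, 2), (-3, 1), (-3, 0), (-3, -1), (-2, -2), (-1, -3)]

def is_interest_point(img, x, y, t=20, n=12):
    """FAST segment test on a grayscale image given as a 2-D list of ints.

    (x, y) is an interest point when at least n contiguous circle pixels
    are all brighter than center + t or all darker than center - t.
    """
    center = img[y][x]
    # Classify each circle pixel: +1 brighter, -1 darker, 0 similar
    marks = []
    for dx, dy in CIRCLE:
        p = img[y + dy][x + dx]
        if p > center + t:
            marks.append(1)
        elif p < center - t:
            marks.append(-1)
        else:
            marks.append(0)
    # Look for a contiguous run of n identical non-zero marks;
    # doubling the list lets runs wrap around the circle.
    run, prev = 0, 0
    for m in marks + marks:
        if m != 0 and m == prev:
            run += 1
        else:
            run = 1 if m != 0 else 0
        prev = m
        if run >= n:
            return True
    return False
```

A lone dark pixel on a bright background, for example, makes all 16 circle pixels "brighter" and so passes the test, while a flat region produces no marks at all and is rejected.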

In a quick version of this test, the program first looks at only the 4 compass pixels (up, down, left, right) on that same circle and rejects the candidate unless at least three of them are brighter or darker than the center; only pixels that pass this cheap check go on to the full 16-pixel test. More can be read on the algorithm here.

Within my implementation, the Android device continuously captures images based on the user's movement and uploads them to a server, and the server runs each picture through the FAST algorithm as soon as it is received.
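The server side of that pipeline has the shape of a simple consumer loop. The sketch below uses hypothetical names throughout (the post does not show the actual transport or detector code); it only illustrates receiving images on a queue and processing each one as it arrives:

```python
import queue
import threading

def detect_interest_points(image):
    # Placeholder for the FAST detector sketched earlier;
    # would return a list of (x, y) interest points.
    return []

def processing_loop(incoming, results, stop):
    """Consume images uploaded by the device and run FAST on each.

    `incoming` is a queue.Queue fed by the upload handler, `results`
    collects the detected points, and `stop` is a threading.Event used
    to shut the loop down once the queue has drained.
    """
    while not stop.is_set() or not incoming.empty():
        try:
            image = incoming.get(timeout=0.1)
        except queue.Empty:
            continue
        results.append(detect_interest_points(image))
        incoming.task_done()
```

Decoupling capture from processing this way lets the phone keep streaming frames while the server works through them at its own pace.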