Week 2 Reflection (28/05/2018 – 01/06/2018)

This past week has been a productive one; we finally started working on our projects. Below is a brief overview of the week:

Monday

Gerald Pineda and I met with Sven Bambach, a postgraduate student, at noon for an introduction to deep learning and machine learning. He also briefed us on the development toolkits we will need for our projects.

Tuesday

Gerald Pineda and I met with Dr. David Crandall and the sponsor of IU’s Navy microelectronics project to discuss how we can contribute to the development of machine learning and computer vision algorithms that distinguish counterfeit microchips from authentic ones. Later in the day, we met with the Ph.D. and graduate students in the Computer Science department to discuss some of their ongoing projects.

Wednesday

Gerald and I spent the majority of the day reading the first two chapters of “Programming Computer Vision with Python,” which introduced us to basic image handling and processing using the Python Imaging Library (PIL), NumPy, SciPy, and Matplotlib, as well as local image descriptors. These two chapters were essential to building our first prototype algorithm, which matches features between two images.
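For a sense of what those chapters cover, here is a minimal sketch of the kind of basic image handling involved (the file name is hypothetical):

```python
from PIL import Image
import numpy as np
import matplotlib.pyplot as plt

# Read an image and convert it to a grayscale NumPy array
# ("chip.jpg" is a hypothetical file name).
im = np.array(Image.open("chip.jpg").convert("L"))
print(im.shape, im.dtype)  # e.g. (480, 640) uint8

# Display it with Matplotlib.
plt.imshow(im, cmap="gray")
plt.axis("off")
plt.show()
```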

Thursday

Our main task for the day was to create an algorithm that could take a picture of a cluttered collection of electronic parts (e.g., a bunch of ICs scattered on a table) and produce a detailed report of what the parts are (e.g., 5 Intel processors, 3 7414 chips, etc.). To do this, we needed to collect some data, so Gerald and I took pictures of 20 sample microchips. The purpose of these pictures was to put our algorithm to the test by matching similar features of two test chips.

[Figure: Sobel filters applied to both test images]
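The figure shows the result of Sobel filtering on the two test images, a technique the book covers using SciPy. A minimal sketch of that step (the file name is hypothetical):

```python
from PIL import Image
import numpy as np
from scipy.ndimage import sobel

# Load a test image as a float grayscale array ("chip.jpg" is hypothetical).
im = np.array(Image.open("chip.jpg").convert("L"), dtype=float)

# Horizontal and vertical derivatives via Sobel filters,
# then the gradient magnitude, which highlights edges.
imx = sobel(im, axis=1)
imy = sobel(im, axis=0)
magnitude = np.hypot(imx, imy)
```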

Our first prototype algorithm was complete by the end of the day. It uses SIFT to match corresponding feature points between two chip images, though we still need to add restrictions to filter out false matches.
[Figure: SIFT feature matches drawn as green lines between the two test images]
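Our prototype follows the matching approach from the book; a rough sketch of the same idea using OpenCV (an approximation rather than our exact code, and assuming a build where SIFT is available as cv2.SIFT_create) looks like this:

```python
import cv2

# Load the two test chip photos in grayscale (file names hypothetical).
img1 = cv2.imread("chip_a.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("chip_b.jpg", cv2.IMREAD_GRAYSCALE)

# Detect SIFT keypoints and compute their descriptors.
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Match descriptors with a brute-force matcher, keeping only matches
# that pass Lowe's ratio test (one simple "restriction" on matches).
bf = cv2.BFMatcher()
pairs = bf.knnMatch(des1, des2, k=2)
good = [m for m, n in pairs if m.distance < 0.75 * n.distance]

# Draw the surviving matches as lines between the two images.
out = cv2.drawMatches(img1, kp1, img2, kp2, good, None,
                      flags=cv2.DrawMatchesFlags_NOT_DRAW_SINGLE_POINTS)
cv2.imwrite("matches.jpg", out)
```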

Friday

Gerald and I reported our first prototype to Dr. Crandall, and his feedback was as follows:

“Basically, I think that picture is showing you that each of those green lines is a point that the computer thinks matches across the two images, right? So in this case, it thinks there are ~25 matching points or whatever. For two images that don’t actually match, the number of matching points it finds should be much fewer. (Can you check this by feeding in two images of very different objects?) Assuming that’s true, then I think you could test if 2 images match just by counting the number of matching points, and maybe if there are more than some number (10? 20? I don’t know…) then we say it’s a match.”
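Translating that suggestion into code, a minimal sketch of the counting rule (the threshold of 10 is just a placeholder to tune, as he says):

```python
import cv2

def images_match(des1, des2, threshold=10, ratio=0.75):
    """Decide whether two images show the same part by counting SIFT
    descriptor matches that survive Lowe's ratio test. threshold=10
    is only a placeholder to be tuned on real data."""
    bf = cv2.BFMatcher()
    pairs = bf.knnMatch(des1, des2, k=2)
    good = [p[0] for p in pairs
            if len(p) == 2 and p[0].distance < ratio * p[1].distance]
    return len(good) >= threshold, len(good)
```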

Our next task for the coming days is to develop the algorithm further toward our main goal, which is to generate a report of matching parts.