In my last blog post, I gave an overview of the experiments I intend to conduct. Before that, I had described the data set and the network architecture. Today, I can finally report the first results. However, instead of starting with the baseline for the mapping task (as originally intended), I will begin with the classification results. The reason is that it makes more sense to discuss the baseline and my own transfer learning results together, since that makes them much easier to compare. Before I can talk about my transfer learning results, however, I first need to introduce the classification network on which they are based. So let’s focus today on the classification results I was able to obtain on the sketch data sets.
It’s been a while since my last blog post on this subject. The reason for that is simply that the neural network did not give me the results I wanted. But now it seems that I’m on a better track, so let me give you a quick update on what has changed and an overview of my next steps.
In my last blog post, I introduced my current research project: Learning to map raw input images into the shape space obtained from my prior study. Moreover, I talked a bit about the data set I used and the augmentation steps I took to increase the variety of inputs. Today, I want to share with you the network architecture which I plan to use in my experiments. So let’s get started.
In a previous mini-series of blog posts (see here, here, here, and here), I introduced a small data set of 60 line drawings complemented with pairwise shape similarity ratings, and analyzed it in the form of conceptual similarity spaces. Today, I will start a new mini-series about learning a mapping from images into these similarity spaces, following up on my prior work on the NOUN dataset (see here and here).
Since this is going to be my last blog post for this year, I’m going to use it to reflect a bit on my academic life this year and to talk about my plans for 2021.