Learning To Map Images Into Shape Space (Part 10)

It has been over a year since my last blog post, and I left off with something of a cliffhanger: the results from the last set of machine learning experiments with the autoencoder (see also here and here) were still missing. Now that I have finally handed in my dissertation, I can finish this already quite long series of blog posts. So let’s recap what still needs to be investigated…


Learning To Map Images Into Shape Space (Part 9)

Today it’s finally time to look at the mapping results based on the autoencoder discussed last time. We’ll take a look at both transfer learning and multi-task learning, using the same overall setup as for the respective classification-based experiments described here and here.
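To make the distinction between the two setups concrete, here is a minimal sketch of how they typically differ in code. This is not the code from the experiments; it assumes a PyTorch-style pipeline, and all names (ConvEncoder, MultiTaskNet, SHAPE_DIM, LATENT_DIM, the checkpoint file) are hypothetical stand-ins.

```python
import torch
import torch.nn as nn


class ConvEncoder(nn.Module):
    # Toy stand-in for the encoder half of the autoencoder.
    def __init__(self, latent_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, latent_dim),
        )

    def forward(self, x):
        return self.net(x)


SHAPE_DIM = 4     # dimensionality of the target shape space
LATENT_DIM = 128  # hypothetical size of the latent code

# Transfer learning: load the pretrained encoder, freeze it, and train only
# a small regression head that maps latent codes into shape space.
encoder = ConvEncoder(LATENT_DIM)
# encoder.load_state_dict(torch.load("encoder.pt"))  # hypothetical checkpoint
for p in encoder.parameters():
    p.requires_grad = False
transfer_head = nn.Linear(LATENT_DIM, SHAPE_DIM)


# Multi-task learning: train encoder, decoder, and regression head jointly,
# so the latent space is shaped by both reconstruction and mapping losses.
class MultiTaskNet(nn.Module):
    def __init__(self, latent_dim=LATENT_DIM, shape_dim=SHAPE_DIM):
        super().__init__()
        self.encoder = ConvEncoder(latent_dim)
        # Toy decoder; assumes 32x32 RGB inputs for the reconstruction task.
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 3 * 32 * 32), nn.Sigmoid())
        self.head = nn.Linear(latent_dim, shape_dim)

    def forward(self, x):
        z = self.encoder(x)
        reconstruction = self.decoder(z).view(-1, 3, 32, 32)
        shape_coords = self.head(z)
        return reconstruction, shape_coords
```

In the multi-task case, the training step would then combine the two objectives, e.g. loss = mse(reconstruction, x) + λ · mse(shape_coords, targets), where λ weights the mapping task against the reconstruction task.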

Learning To Map Images Into Shape Space (Part 7)

In the past two blog posts, we discussed both transfer learning and multi-task learning for a four-dimensional target space of shapes. We observed that sketch-based networks seem to work better than photograph-based ones and that multi-task learning gave better results than transfer learning. Today, we’ll see whether these results also generalize to target spaces of other dimensionalities.
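For intuition: changing the dimensionality of the target space typically only changes the size of the final regression layer, so such an experiment amounts to a sweep over that one parameter. A hypothetical version, reusing the sketch above (the specific dimensionalities here are illustrative, not the ones from the experiments), could look like this:

```python
# Hypothetical sweep; only the output size of the regression head changes.
for shape_dim in (2, 4, 8, 16):  # illustrative dimensionalities
    head = nn.Linear(LATENT_DIM, shape_dim)
    # ... train and evaluate as before, recording the error per shape_dim
```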