It has been over a year since my last blog post, and I left you with something of a cliffhanger: the results from the last set of machine learning experiments with the autoencoder (see also here and here) were still missing. Now that I have finally handed in my dissertation, I can finish this already quite long series of blog posts. So let’s recap what still needs to be investigated…
Today it’s finally time to look at the mapping results based on the autoencoder discussed last time. We’ll take a look at both transfer learning and multi-task learning, using the same overall setup as for the respective classification-based experiments described here and here. Continue reading “Learning To Map Images Into Shape Space (Part 9)”
As already mentioned last time, I’m currently running some mapping experiments with autoencoders. Today, I can share some first results on the raw reconstruction capability of the trained autoencoders.
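To make "raw reconstruction capability" concrete: an autoencoder compresses each input through a low-dimensional bottleneck and is trained to reproduce the input from that code, so reconstruction error (here, mean squared error) is the natural quality measure. The sketch below is a minimal, purely illustrative linear autoencoder on toy data — the architecture, dimensions, and training setup are my assumptions for demonstration, not the networks used in these experiments.

```python
import numpy as np

# Hypothetical minimal sketch: a linear autoencoder trained by gradient
# descent on mean-squared reconstruction error. All sizes are illustrative.
rng = np.random.default_rng(0)

X = rng.normal(size=(200, 16))               # toy "images": 200 samples, 16 features
W_enc = rng.normal(scale=0.1, size=(16, 4))  # encoder: 16 -> 4 (bottleneck)
W_dec = rng.normal(scale=0.1, size=(4, 16))  # decoder: 4 -> 16

lr = 0.1
for _ in range(1000):
    Z = X @ W_enc                    # latent codes
    X_hat = Z @ W_dec                # reconstructions
    err = X_hat - X                  # reconstruction error
    # gradients of the mean-squared reconstruction loss
    grad_dec = Z.T @ err / len(X)
    grad_enc = X.T @ (err @ W_dec.T) / len(X)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

mse = float(np.mean((X @ W_enc @ W_dec - X) ** 2))
print(f"final reconstruction MSE: {mse:.4f}")
```

Because the 4-dimensional bottleneck cannot carry all 16 input dimensions, the loss does not reach zero; it settles near the variance left outside the best low-rank subspace, which is exactly the kind of residual error the reconstruction results in this post quantify for the real networks.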
In the past two blog posts, we discussed both transfer learning and multi-task learning for a four-dimensional target space of shapes. We observed that sketch-based networks seem to work better than photograph-based networks and that multi-task learning gave better results than transfer learning. Today, we’ll check whether these results also generalize to target spaces of different dimensionality. Continue reading “Learning To Map Images Into Shape Space (Part 7)”
Last time, we talked about the transfer learning results. Today, it’s time to take a look at the multi-task learning approach. Let’s get started. Continue reading “Learning To Map Images Into Shape Space (Part 6)”