Today it’s finally time to look at the mapping results based on the autoencoder discussed last time. We’ll take a look at both transfer learning and multi-task learning, using the same overall setup as for the respective classification-based experiments described here and here. Continue reading “Learning To Map Images Into Shape Space (Part 9)”
As mentioned last time, I'm currently running mapping experiments with autoencoders. Today, I can share some first results on the raw reconstruction quality of the trained autoencoders.
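Since the post itself carries the actual results, here is only a toy illustration of the underlying idea: a minimal linear autoencoder trained to reconstruct its input, written in plain NumPy. Everything in it (the dimensions, learning rate, and synthetic data) is invented for the sketch and is not the setup used in the experiments above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 samples of dimension 64 lying near a 4-D linear subspace,
# standing in for images that an autoencoder compresses to a small code.
codes = rng.normal(size=(200, 4))
basis = rng.normal(size=(4, 64)) / 8.0
X = codes @ basis + 0.01 * rng.normal(size=(200, 64))

# Linear autoencoder: encoder W_e (64 -> 4), decoder W_d (4 -> 64)
W_e = 0.01 * rng.normal(size=(64, 4))
W_d = 0.01 * rng.normal(size=(4, 64))

def recon_loss(X, W_e, W_d):
    R = X @ W_e @ W_d - X            # reconstruction residual
    return (R ** 2).sum() / len(X)   # mean per-sample squared error

initial = recon_loss(X, W_e, W_d)
lr = 0.1
for _ in range(1000):
    Z = X @ W_e                       # encode
    R = Z @ W_d - X                   # decode, take residual
    G_d = 2.0 * Z.T @ R / len(X)      # gradient w.r.t. decoder weights
    G_e = 2.0 * X.T @ (R @ W_d.T) / len(X)  # gradient w.r.t. encoder weights
    W_d -= lr * G_d
    W_e -= lr * G_e

final = recon_loss(X, W_e, W_d)
print(f"reconstruction error: {initial:.3f} -> {final:.4f}")
```

Watching the reconstruction error drop toward the noise floor is the same kind of sanity check as inspecting the "raw reconstruction capability" before the latent codes are used as a mapping target.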
In the past two blog posts, we discussed both transfer learning and multi-task learning for a four-dimensional target space of shapes. We observed that sketch-based networks seem to work better than photograph-based ones, and that multi-task learning gave better results than transfer learning. Today, we'll examine whether these findings also generalize to target spaces of other dimensionalities. Continue reading “Learning To Map Images Into Shape Space (Part 7)”
Last time, we talked about the transfer learning results. Today, it’s time to take a look at the multi-task learning approach. Let’s get started. Continue reading “Learning To Map Images Into Shape Space (Part 6)”
After having already spent four (!) blog posts on the general and specific setup of my current machine learning study (data set, architecture, experimental setup, and sketch classification), it is finally time to reveal the first results of the mapping task into shape space. Today, we'll focus on the transfer learning setup before turning to the multi-task learning results in an upcoming post.
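To make the transfer learning idea concrete, here is a hedged sketch of what such a setup typically looks like: a frozen feature extractor with only a new regression head fitted on top. The random "backbone", the 4-D shape-space targets, and the closed-form least-squares fit are stand-ins chosen for brevity, not the actual architecture or training procedure from the study.

```python
import numpy as np

rng = np.random.default_rng(1)

# "Frozen backbone": a fixed random projection standing in for the
# feature extractor of a pretrained classification network.
W_frozen = rng.normal(size=(256, 32)) / np.sqrt(256)

def backbone(images):
    # images: (n, 256) flattened inputs; these weights are never updated
    return np.maximum(images @ W_frozen, 0.0)  # ReLU features

# Synthetic regression targets: 4-D shape-space coordinates
X = rng.normal(size=(500, 256))
F = backbone(X)
true_head = rng.normal(size=(32, 4))
Y = F @ true_head + 0.01 * rng.normal(size=(500, 4))

# Transfer learning here means fitting only the new linear head on the
# frozen features (via least squares instead of SGD, for brevity).
head, *_ = np.linalg.lstsq(F, Y, rcond=None)
mse = ((F @ head - Y) ** 2).mean()
print(f"head fit MSE: {mse:.6f}")
```

Only `head` is learned; the backbone stays fixed, which is what distinguishes this from the multi-task setup, where shared layers are trained on both objectives at once.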