Learning To Map Images Into Shape Space (Part 9)

Today it’s finally time to look at the mapping results based on the autoencoder discussed last time. We’ll look at both transfer learning and multi-task learning, using the same overall setup as for the respective classification-based experiments described here and here.

Learning To Map Images Into Shape Space (Part 7)

In the past two blog posts, we discussed both transfer learning and multi-task learning for a four-dimensional target space of shapes. We observed that sketch-based networks seem to outperform photograph-based networks and that multi-task learning gave better results than transfer learning. Today, we’ll see whether these results also generalize to target spaces of other dimensionalities.

Learning To Map Images Into Shape Space (Part 5)

Having already spent four (!) blog posts on the general and specific setup for my current machine learning study (data set, architecture, experimental setup, and sketch classification), it is finally time to reveal the first results of the mapping task in the shape domain. Today, we’ll focus on the transfer learning setup before turning to the multi-task learning results in an upcoming post.
