Last time, I shared the first results obtained by the LTN on the conceptual space of movies. Today, I want to give you a quick update on the first membership function variant I have investigated.
Applying Logic Tensor Networks (Part 4)
Having already written a lot about Logic Tensor Networks, today I will finally share some first results on how they perform in a multi-label classification task on the conceptual space of movies.
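As a reminder of what is being trained here, below is a minimal numpy sketch of how the original LTN paper grounds a predicate (e.g., a genre) as a membership function over points of the space. All dimensions, names, and the random parameters are illustrative assumptions, not the configuration of my experiments.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def ltn_membership(v, W, V, b, u):
    """Grounding of a predicate P on a point v of the space:
    G(P)(v) = sigmoid(u^T tanh(v^T W v + V v + b)),
    where W stacks k square matrices (the 'tensor' part),
    V is a k x n matrix, and b and u are k-vectors."""
    quadratic = np.einsum('i,kij,j->k', v, W, v)  # v^T W^[1:k] v
    return sigmoid(u @ np.tanh(quadratic + V @ v + b))

# illustrative sizes: 3-dimensional space, k = 4 tensor slices
rng = np.random.default_rng(0)
n, k = 3, 4
v = rng.random(n)                      # a point (e.g., a movie) in the space
W = rng.standard_normal((k, n, n))
V = rng.standard_normal((k, n))
b = rng.standard_normal(k)
u = rng.standard_normal(k)
print(ltn_membership(v, W, V, b, u))   # membership degree in [0, 1]
```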
What is a “β Variational Autoencoder”?
I’ve already talked about InfoGAN [1] a couple of times (here, here, and here). InfoGAN is a specific neural network architecture that claims to extract interpretable and semantically meaningful dimensions from unlabeled data sets – exactly what we need in order to automatically extract a conceptual space from data.
However, InfoGAN is not the only architecture that makes this claim. Today, I will talk about the β-variational autoencoder (β-VAE) [2], which uses a different approach to reach the same goal.
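To give a concrete idea of that approach, here is a minimal PyTorch sketch of the β-VAE training objective: the usual VAE reconstruction term plus a KL term weighted by a factor β > 1. The function name and the choice of mean squared error for the reconstruction term are my own illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def beta_vae_loss(x, x_recon, mu, log_var, beta=4.0):
    """beta-VAE objective: reconstruction loss plus beta-weighted KL term.
    For beta = 1 this reduces to the standard VAE; beta > 1 puts more
    pressure on the KL term, which is what is supposed to encourage
    disentangled, interpretable latent dimensions."""
    recon = F.mse_loss(x_recon, x, reduction='sum')
    # KL divergence between N(mu, sigma^2) and the standard normal prior
    kl = -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp())
    return recon + beta * kl
```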
Applying Logic Tensor Networks (Part 3)
Last time, I introduced the evaluation metrics used for the LTN classification task. Today, I will show some first results for the k-nearest neighbor (kNN) classifier, which will serve as a baseline for our LTN results.
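Just to illustrate the kind of baseline this is (the data below is random toy data, not my movie data set), scikit-learn's KNeighborsClassifier handles such a multi-label setting directly when the labels are given as a binary indicator matrix:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(42)
X = rng.random((100, 3))          # points in a 3-dimensional conceptual space
y = rng.integers(0, 2, (100, 5))  # binary indicator matrix: 5 genre labels

knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X, y)
print(knn.predict(X[:2]))         # one 0/1 label vector per query point
```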
Applying Logic Tensor Networks (Part 2)
In my last LTN blog post, I introduced the overall setting of my experiment. Before I can report on first results, I need to describe how we can evaluate the performance of the classifiers in this multi-label classification setting. This is what I'm going to do today.