Some time ago, I wrote two blog posts about a hybrid approach for obtaining the dimensions of a conceptual space (see here and here). I am currently rerunning these experiments in more detail, and today I want to share both the motivation for doing so and some first results.
Last time, I shared the first results obtained with the LTN on the conceptual space of movies. Today, I want to give you a quick update on the first membership function variant that I have investigated.
Having already written a lot about Logic Tensor Networks, today I will finally share some first results on how they perform in a multi-label classification task on the conceptual space of movies.
Last time, I introduced the evaluation metrics used for the LTN classification task. Today, I will show some first results of the k-nearest-neighbor (kNN) classifier, which will serve as a baseline for our LTN results.
In my last LTN blog post, I introduced the overall setting of my experiment. Before I can report on first results, I need to describe how we can evaluate the performance of the classifiers in this multi-label classification setting. That is what I’m going to do today.