Applying Logic Tensor Networks (Part 3)
Last time, I introduced the evaluation metrics used for the LTN classification task. Today, I will show the first results of a k-nearest-neighbor (kNN) classifier, which will serve as a baseline for our LTN results.
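To make the baseline concrete, here is a minimal sketch of a multi-label kNN classifier in scikit-learn. All data, dimensions, and the choice of k are placeholders for illustration, not the actual setup from my experiments:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Toy stand-ins for the real data: feature vectors plus a binary
# label-indicator matrix (one column per label, multi-label setting).
rng = np.random.default_rng(0)
X_train = rng.random((100, 4))                      # 100 points, 4 features
y_train = (rng.random((100, 3)) > 0.5).astype(int)  # 3 possible labels each
X_test = rng.random((10, 4))

# scikit-learn's KNeighborsClassifier handles multi-label targets
# given as an indicator matrix; k=5 is an arbitrary placeholder.
knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X_train, y_train)

predictions = knn.predict(X_test)                   # indicator matrix again
print(predictions.shape)                            # (10, 3)
```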
Applying Logic Tensor Networks (Part 2)
In my last LTN blog post, I introduced the overall setting of my experiment. Before I can report on first results, I need to describe how we can evaluate the performance of the classifiers in this multi-label classification setting. This is what I’m going to do today.
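To give a flavor of what evaluation looks like in a multi-label setting, here is a small sketch using some standard multi-label metrics from scikit-learn. These particular metrics and the toy data are illustrative assumptions, not necessarily the ones discussed in the post:

```python
import numpy as np
from sklearn.metrics import (hamming_loss, f1_score,
                             label_ranking_average_precision_score)

# Hypothetical ground truth and predictions as binary indicator matrices
# (rows are examples, columns are labels).
y_true = np.array([[1, 0, 1],
                   [0, 1, 0],
                   [1, 1, 0]])
y_pred = np.array([[1, 0, 0],
                   [0, 1, 0],
                   [1, 0, 0]])
# Ranking-based metrics need continuous scores rather than hard decisions.
y_score = np.array([[0.9, 0.2, 0.4],
                    [0.1, 0.8, 0.3],
                    [0.7, 0.4, 0.2]])

print(hamming_loss(y_true, y_pred))               # fraction of misassigned labels
print(f1_score(y_true, y_pred, average="micro"))  # F1 pooled over all labels
print(label_ranking_average_precision_score(y_true, y_score))
```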
Applying Logic Tensor Networks (Part 1)
In previous blog posts, I have talked about Logic Tensor Networks in general, their relation to Conceptual Spaces, and several additional membership functions that are in line with the Conceptual Spaces framework. As mentioned before, I want to apply them in a “proof of concept” scenario. Today, I’m going to sketch this scenario in more detail.
A hybrid way for obtaining the dimensions of a conceptual space (Part 2)
Last time, I gave a rough outline of a hybrid approach for obtaining the dimensions of a conceptual space that uses both multidimensional scaling (MDS) and artificial neural networks (ANNs) [1]. Today, I will show our first results (which we will present next week at the AIC workshop in Palermo).
Continue reading “A hybrid way for obtaining the dimensions of a conceptual space (Part 2)”
A hybrid way for obtaining the dimensions of a conceptual space (Part 1)
In earlier blog posts, I have talked about two ways of obtaining the dimensions of a conceptual space: neural networks such as InfoGAN on the one hand and multidimensional scaling (MDS) on the other. Over the past few months, in a collaboration with Elektra Kypridemou, I have worked on a way of combining these two approaches. Today, I would like to give a quick overview of our recent proposal [1] (a rough code sketch of the general idea follows below).
Continue reading “A hybrid way for obtaining the dimensions of a conceptual space (Part 1)”
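As a very rough sketch of how such a combination could look in code (all data, network sizes, and hyperparameters here are invented for illustration and are not the actual setup from [1]): first, MDS turns a matrix of pairwise dissimilarities into coordinates in a low-dimensional space; then, an ANN is trained to map raw inputs onto these coordinates, so that new inputs can be placed into the same space:

```python
import numpy as np
from sklearn.manifold import MDS
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(42)

# Step 1: MDS. Start from a (hypothetical) pairwise dissimilarity matrix,
# e.g. one derived from human similarity ratings, and embed the items
# into a low-dimensional space.
raw = rng.random((50, 10))                   # stand-in stimulus features
dissimilarities = np.linalg.norm(raw[:, None] - raw[None, :], axis=-1)
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
targets = mds.fit_transform(dissimilarities) # coordinates for each item

# Step 2: ANN. Train a small network to map the raw inputs to the MDS
# coordinates, so the learned mapping generalizes to unrated inputs.
net = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
net.fit(raw, targets)

new_item = rng.random((1, 10))
print(net.predict(new_item))                 # place it in the same space
```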