A hybrid way for obtaining the dimensions of a conceptual space (Part 2)

Last time, I gave a rough outline of a hybrid approach that uses both multidimensional scaling (MDS) and artificial neural networks (ANNs) to obtain the dimensions of a conceptual space [1]. Today, I will show our first results, which we will present next week at the AIC workshop in Palermo.


A hybrid way for obtaining the dimensions of a conceptual space (Part 1)

In earlier blog posts, I have already talked about two ways of obtaining the dimensions of a conceptual space: neural networks such as InfoGAN on the one hand, and multidimensional scaling (MDS) on the other. Over the past few months, in a collaboration with Elektra Kypridemou, I have worked on a way of combining these two approaches. Today, I would like to give a quick overview of our recent proposal [1].
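To make the general idea more concrete, here is a simplified sketch (not our actual pipeline; the data, the network architecture, and all parameter choices are made up for illustration): MDS turns pairwise dissimilarity ratings into coordinates in a low-dimensional space, and a neural network is then trained by regression to map the raw stimuli onto these coordinates, so that unseen stimuli can later be placed into the same space.

```python
import numpy as np
from sklearn.manifold import MDS
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Stand-ins for real data: high-dimensional raw representations of 50
# stimuli (e.g., image features) and pairwise dissimilarities for them.
stimuli = rng.random((50, 128))
dissimilarities = np.linalg.norm(
    stimuli[:, None, :] - stimuli[None, :, :], axis=-1)

# Step 1: MDS turns the dissimilarities into coordinates in a
# low-dimensional space, which serves as the candidate conceptual space.
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coordinates = mds.fit_transform(dissimilarities)

# Step 2: a neural network is trained to map raw stimuli onto these
# coordinates, so that it generalizes to unseen stimuli.
net = MLPRegressor(hidden_layer_sizes=(64,), max_iter=2000, random_state=0)
net.fit(stimuli, coordinates)

# A new stimulus can now be placed directly into the learned space.
new_stimulus = rng.random((1, 128))
print(net.predict(new_stimulus))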


What is “Multidimensional Scaling”?

In a previous blog post, I’ve already talked about how one might obtain the dimensions of a conceptual space with artificial neural networks. That approach is based on machine learning techniques, but there’s also a more traditional way of extracting a conceptual space: conducting a psychological experiment and analyzing the resulting similarity judgments with a type of algorithm called “multidimensional scaling” (MDS). Today, I would like to give a quick overview of this approach.
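As a minimal sketch of this route (assuming the scikit-learn library and a small matrix of hypothetical dissimilarity ratings), metric MDS can be applied to averaged pairwise judgments like this:

```python
import numpy as np
from sklearn.manifold import MDS

# Hypothetical dissimilarity ratings for four stimuli, e.g., averaged
# over the participants of a psychological experiment. The matrix is
# symmetric with zeros on the diagonal.
dissimilarities = np.array([
    [0.0, 0.3, 0.8, 0.9],
    [0.3, 0.0, 0.7, 0.8],
    [0.8, 0.7, 0.0, 0.2],
    [0.9, 0.8, 0.2, 0.0],
])

# Metric MDS searches for points in a two-dimensional space whose
# pairwise distances match the given dissimilarities as well as possible.
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=42)
points = mds.fit_transform(dissimilarities)

print(points)       # one 2D coordinate per stimulus
print(mds.stress_)  # remaining mismatch between distances and ratings
```

The stress value measures how well the distances in the resulting space reproduce the original ratings; running MDS for several dimensionalities and comparing their stress values is a common way of choosing the number of dimensions.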


Extending Logic Tensor Networks (Part 2)

A while ago, I introduced Logic Tensor Networks (LTNs) and argued that they lend themselves nicely to a conceptual spaces scenario. In one of my recent posts, I described how to ensure that an LTN can only learn convex concepts. Today, I will take this one step further by introducing additional ways of defining the membership function of an LTN.
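To give a flavor of what such a membership function can look like, here is a simplified NumPy sketch (not the actual LTN implementation; the prototype-based definition and all names are illustrative): membership decays with the distance to a prototype point, which automatically yields convex concept regions, since every level set of such a function is a ball around the prototype.

```python
import numpy as np

def membership(point, prototype, sensitivity=1.0):
    """Graded membership of a point in a concept, decaying with its
    distance to the concept's prototype. Every level set of this
    function is a ball around the prototype, so the concept regions
    it defines are guaranteed to be convex."""
    distance = np.linalg.norm(point - prototype)
    return np.exp(-sensitivity * distance)

# Hypothetical concept with a prototype in a two-dimensional space.
apple_prototype = np.array([0.3, 0.7])
print(membership(np.array([0.3, 0.7]), apple_prototype))  # 1.0 at the prototype
print(membership(np.array([0.9, 0.1]), apple_prototype))  # lower farther away
```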
