Last time, I gave a rough outline of a hybrid approach for obtaining the dimensions of a conceptual space that uses both multidimensional scaling (MDS) and artificial neural networks (ANNs). Today, I will show our first results (which we will present next week at the AIC workshop in Palermo).
In earlier blog posts, I have already talked about two ways of obtaining the dimensions of a conceptual space: neural networks such as InfoGAN on the one hand, and multidimensional scaling (MDS) on the other. Over the past few months, in a collaboration with Elektra Kypridemou, I have worked on a way of combining these two approaches. Today, I would like to give a quick overview of our recent proposal.
I’ve already talked about how to potentially obtain the dimensions of a conceptual space with artificial neural networks in a previous blog post. That approach is based on machine learning techniques, but there’s also a more traditional way of extracting a conceptual space: conducting a psychological experiment and applying an algorithm called “multidimensional scaling” to the collected dissimilarity judgments. Today, I would like to give a quick overview of this approach.
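To make the idea more concrete, here is a minimal sketch of classical (Torgerson) MDS in NumPy. It is only an illustration of the general technique, not the specific variant or experimental setup discussed here: given a matrix of pairwise dissimilarities (as one might collect in a psychological experiment), it recovers point coordinates whose distances approximate those dissimilarities.

```python
import numpy as np

def classical_mds(D, k=2):
    """Recover k-dimensional coordinates from a symmetric matrix D
    of pairwise dissimilarities via classical (Torgerson) MDS."""
    n = D.shape[0]
    # Double-center the matrix of squared dissimilarities.
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J
    # Eigendecomposition; keep the k largest non-negative eigenvalues.
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:k]
    scale = np.sqrt(np.clip(w[idx], 0, None))
    return V[:, idx] * scale

# Toy "experiment": dissimilarities between four stimuli that
# actually live in a two-dimensional space.
points = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
D = np.linalg.norm(points[:, None] - points[None, :], axis=-1)

X = classical_mds(D, k=2)
# The recovered configuration reproduces the original distances
# (up to rotation and reflection).
D_rec = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
print(np.allclose(D, D_rec))  # True
```

In a real study, `D` would of course come from human similarity ratings rather than from known coordinates, and one would try several values of `k` to find an interpretable number of dimensions.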
Over the past few weeks, I have been pretty busy fulfilling my teaching duties. As I haven’t done much research, I won’t talk about research today, but about “Constructive Alignment”, which is an approach for planning lectures, seminars and other courses.
The constructive alignment process consists of three steps:
- Defining the learning targets
- Planning the examination
- Planning the course
But wait a second: why does planning the course appear as the last step in this process?
A while ago, I introduced Logic Tensor Networks (LTNs) and argued that they are nicely applicable in a conceptual spaces scenario. In one of my recent posts, I described how to ensure that an LTN can only learn convex concepts. Today, I will take this one step further by introducing additional ways of defining the membership function of an LTN.
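To illustrate what a membership function over a conceptual space can look like, here is a small sketch of one standard choice from the conceptual spaces literature: membership that decays exponentially with distance to a prototype point. This is only an illustrative example, not necessarily one of the definitions discussed in the post, and the prototype and sensitivity parameter below are made up.

```python
import numpy as np

def membership(x, prototype, c=1.0):
    """Graded membership based on exponentially decaying similarity:
    the closer x is to the prototype, the higher its membership.
    c controls how quickly membership falls off with distance."""
    d = np.linalg.norm(np.asarray(x, dtype=float) - np.asarray(prototype, dtype=float))
    return float(np.exp(-c * d))

proto = [0.2, 0.8]  # hypothetical prototype of some concept

print(membership(proto, proto))                                       # 1.0 at the prototype
print(membership([1.0, 0.0], proto) < membership([0.3, 0.7], proto))  # True: farther means lower
```

A nice property of this choice under the Euclidean metric is that every level set of the membership function is a ball around the prototype, so all “crisp” cutoffs of the concept are convex regions, which fits the convexity constraint from the earlier post.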