A hybrid way for obtaining the dimensions of a conceptual space (Part 1)

In earlier blog posts, I have already talked about two ways of obtaining the dimensions of a conceptual space: Neural networks such as InfoGAN on the one hand and multidimensional scaling (MDS) on the other hand. Over the past few months, in a collaboration with Elektra Kypridemou, I have worked on a way of combining these two approaches. Today, I would like to give a quick overview of our recent proposal [1].

As you may remember, the big advantage of MDS is its inherent grounding in psychological data – because the space obtained by MDS reflects human similarity ratings, it is at least partially aligned with human cognition. However, MDS is also highly dependent on these human similarity ratings: if we observe a new stimulus for which we don't yet have similarity ratings, we are unable to map it onto a point in the MDS space.
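To make this limitation concrete, here is a minimal sketch using scikit-learn's MDS on a small, made-up dissimilarity matrix (the matrix values and the number of stimuli are illustrative assumptions, not data from the paper):

```python
import numpy as np
from sklearn.manifold import MDS

# Hypothetical 4x4 dissimilarity matrix from pairwise human ratings
# (0 = identical, larger = more dissimilar); symmetric, zero diagonal.
dissimilarities = np.array([
    [0.0, 1.0, 3.0, 3.2],
    [1.0, 0.0, 2.8, 3.0],
    [3.0, 2.8, 0.0, 0.9],
    [3.2, 3.0, 0.9, 0.0],
])

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
space = mds.fit_transform(dissimilarities)  # one 2D point per stimulus
print(space.shape)  # (4, 2)

# Note: MDS offers no transform() for new stimuli -- embedding a new
# point would require collecting fresh similarity ratings and refitting.
```

The last comment is the key point: unlike a trained model, the MDS solution only covers the stimuli that were rated.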

Neural networks, on the other hand, usually generalize well to unseen inputs. But as they are trained to optimize a loss function that has mathematical rather than psychological origins, it is entirely unclear whether the conceptual space learned by a neural network has any connection to the way humans conceptualize things.

Our proposal aims at taking the best from both worlds – finding a mapping from stimuli to a conceptual space that is both psychologically grounded and able to generalize to previously unseen stimuli.

In a nutshell, we propose the following procedure:

  1. Find a data set from your domain of interest (e.g., images) that is large enough to apply machine learning to it.
  2. Take a small but representative subset of this big data set and use it in a psychological study to elicit human similarity ratings.
  3. Based on the similarity ratings obtained in step 2, run multidimensional scaling to obtain a psychological similarity space.
  4. Now train a neural network using the stimuli as input and their points in the MDS space as output. In order to fight overfitting (which might happen due to the small number of stimuli in the psychological study), you can introduce an additional learning objective such as minimizing the reconstruction error on the remainder of the data set.
  5. Voilà! Your neural network can now map input stimuli (e.g., images) to points in the MDS space. The mapping performed by the neural network is thus psychologically grounded (as we use a conceptual space based on MDS) and if you avoided overfitting, this mapping will also generalize to unseen inputs.
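Steps 4 and 5 can be sketched as a simple regression problem. The snippet below is only an illustration of the idea, with random vectors standing in for real stimuli and their MDS coordinates; the network size, the regularization strength, and the use of scikit-learn's `MLPRegressor` are all my assumptions, not the setup from the paper:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Hypothetical stand-ins: 60 rated stimuli as 32-dim feature vectors
# (in practice: images), each paired with its 2D point in the MDS space.
features = rng.normal(size=(60, 32))
mds_points = features[:, :2] * 0.5 + rng.normal(scale=0.05, size=(60, 2))

# Step 4: regress from stimuli to MDS coordinates. A small network with
# an L2 penalty (alpha) limits overfitting on the few rated stimuli; the
# additional reconstruction objective mentioned above would require a
# multi-task setup (e.g., an autoencoder branch) and is omitted here.
net = MLPRegressor(hidden_layer_sizes=(16,), alpha=1e-2,
                   max_iter=2000, random_state=0)
net.fit(features, mds_points)

# Step 5: map a previously unseen stimulus into the similarity space.
new_stimulus = rng.normal(size=(1, 32))
print(net.predict(new_stimulus).shape)  # (1, 2)
```

The point of the sketch is the shape of the problem: few labeled examples (the rated stimuli), a regression target given by the MDS space, and generalization to new inputs coming from the learned mapping rather than from additional ratings.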

This is, of course, quite an idealized workflow, and one could argue that it is unrealistic. In particular, training the neural network in step 4 and making sure that it really learns a generalizable mapping might be much more difficult than it first seems.

So in order to check whether this proposal is worth pursuing, we conducted a first feasibility study with a pre-trained network and similarity data available from other researchers. I’ll report on the results of that study in one of my next blog posts, so stay tuned!


[1] Lucas Bechberger and Elektra Kypridemou: “Mapping Images to Psychological Similarity Spaces Using Neural Networks”, AIC 2018.

4 thoughts on “A hybrid way for obtaining the dimensions of a conceptual space (Part 1)”

  1. Very interesting work! If I understand correctly, quality dimensions of the conceptual space generated by MDS on human-generated similarity measurements are not interpretable. Has there been any attempt to “ground” the points on quality domains representing sensory features expressible by language (colour, texture, size…)?

    1. Yes, the dimensions of an MDS solution are typically not interpretable, since rotations, reflections, and translations of the resulting configuration of points do not change their distances. I’ve described one approach to finding such dimensions in the following blog post: http://lucas-bechberger.de/2020/10/01/a-similarity-space-for-shapes-part-4/
      Essentially, you take some candidate features and determine feature values for all of your stimuli. You then try to find a direction in the similarity space that corresponds to a given candidate feature. For instance, you can use a linear regression or a support vector machine (or any linear model, for that matter) to map coordinates in the similarity space to feature values. Some more details and links/references can be found in the post mentioned above. Hope this helps – feel free to follow up 🙂
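The linear-regression variant of this idea can be sketched in a few lines. The coordinates and the candidate feature ("size") below are synthetic assumptions made up for illustration; the approach itself is simply: regress feature values onto the space and read the direction off the coefficients.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Hypothetical data: 2D MDS coordinates for 50 stimuli, plus one candidate
# feature value per stimulus (e.g., rated "size"), constructed here to
# vary mostly along one direction of the space.
coords = rng.normal(size=(50, 2))
size_ratings = coords @ np.array([0.9, 0.1]) + rng.normal(scale=0.1, size=50)

# Regress feature values onto the space; the normalized coefficient
# vector is the direction along which the feature increases.
reg = LinearRegression().fit(coords, size_ratings)
direction = reg.coef_ / np.linalg.norm(reg.coef_)
print(direction)                         # unit vector in the similarity space
print(reg.score(coords, size_ratings))   # R^2: how well a single direction fits
```

A high R² suggests the candidate feature really does correspond to a direction in the space; a low R² suggests it does not.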
