I’m currently writing the background chapter on machine learning for my dissertation. While doing so, I took a closer look at a widely used feature extraction technique called “Principal Component Analysis” (PCA). It can be described either on an intuitive level (“it finds orthogonal directions of greatest variance in the data”) or on a mathematical level (“it computes an eigenvalue decomposition of the data’s covariance matrix”). What I found most difficult to understand was how these two descriptions relate to each other. This is essentially what I want to share in today’s blog post.
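The link between the two descriptions can be made concrete in a few lines of NumPy. The following is a minimal sketch on a hypothetical toy data set: we eigendecompose the covariance matrix and then check that projecting the data onto the top eigenvector yields a variance equal to the largest eigenvalue, i.e., that the eigenvectors really are the directions of greatest variance.

```python
import numpy as np

rng = np.random.default_rng(0)
# toy data set (hypothetical): 200 points in 3 dimensions with unequal spread
X = rng.normal(size=(200, 3)) @ np.array([[3.0, 0.0, 0.0],
                                          [1.0, 1.0, 0.0],
                                          [0.0, 0.0, 0.1]])

# center the data and eigendecompose its covariance matrix
X_centered = X - X.mean(axis=0)
cov = np.cov(X_centered, rowvar=False)
eigenvalues, eigenvectors = np.linalg.eigh(cov)  # eigh returns ascending order

# sort descending: the columns of `components` are the principal components
order = np.argsort(eigenvalues)[::-1]
components = eigenvectors[:, order]

# projecting onto the first component gives the largest possible variance,
# and that variance equals the largest eigenvalue
projected = X_centered @ components[:, 0]
print(np.isclose(projected.var(ddof=1), eigenvalues[order][0]))  # True
```

So the eigenvalue decomposition is not an unrelated piece of linear algebra: each eigenvalue is exactly the variance of the data along the corresponding eigenvector, which is why sorting by eigenvalue recovers the “directions of greatest variance” from the intuitive description.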
In one of my last blog posts, I introduced a data set of shapes which I use to extract similarity spaces for the shape domain. As stated at the end of that post, I want to analyze these similarity spaces based on three predictions of the conceptual spaces framework: the representation of dissimilarities as distances, the presence of small, non-overlapping convex regions, and the presence of interpretable directions. Today, I will focus on the first of these predictions. More specifically, we will compute the correlation between the distances in the MDS spaces and the original dissimilarities, and compare it to three baselines. This will help us see how well the similarity spaces represent shape similarity.
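The core computation is straightforward: take the pairwise distances between points in the similarity space and correlate them with the original dissimilarity judgments. Here is a minimal NumPy sketch on simulated data; the coordinates and the noisy dissimilarities are hypothetical stand-ins for the actual MDS solutions and psychological ratings.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10
# stand-in coordinates (hypothetical; in the study these come from MDS)
points = rng.normal(size=(n, 2))

# pairwise Euclidean distances in the space, upper triangle only
diff = points[:, None, :] - points[None, :, :]
dist_matrix = np.sqrt((diff ** 2).sum(axis=-1))
iu = np.triu_indices(n, k=1)
distances = dist_matrix[iu]

# simulated "original dissimilarities": the distances plus judgment noise
dissimilarities = distances + rng.normal(scale=0.1, size=distances.size)

# Pearson correlation between distances in the space and the dissimilarities
r = np.corrcoef(distances, dissimilarities)[0, 1]
print(round(r, 3))
```

A correlation close to 1 means the space represents the dissimilarities almost perfectly as distances; comparing this number to baselines (e.g., distances in a random configuration) tells us how much of it is a real effect.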
I’ve already introduced the notion of a concept in two older blog posts (see here and here) to set the stage for the conceptual spaces framework and to motivate why concepts are useful. In short, a concept is a mental representation of a category of things in the world. For example, the concept apple ties together all knowledge we have about apples in general, such as their typical shapes and sizes, as well as what they can be used for (e.g., eating or throwing). In order to embed the conceptual spaces framework a bit more in the overall area of concept research, today I will sketch four psychological theories about concepts (based on the great overview by Murphy) and show how they can be related to the conceptual spaces framework.
As mentioned earlier, I want to validate my hybrid proposal for obtaining the dimensions of a conceptual space in a second study, which focuses on the domain of shapes. Today I will start reporting on joint work with Margit Scheibel on obtaining a similarity space for the shape domain based on psychological data. This is the first step of the proposed hybrid procedure and will be followed by training a neural network. But for now, let’s focus on obtaining the similarity spaces.
In the past, we have talked about several machine learning models, including LTNs and the β-VAE. Today, I would like to introduce the basic idea of linear support vector machines (SVMs) and show how they can be useful for analyzing a conceptual space.
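To give a first impression of why linear SVMs are interesting for conceptual spaces: the learned weight vector defines a direction in the space that separates two concepts, so its normalized form can be read as a candidate interpretable axis. Below is a minimal self-contained sketch (no scikit-learn) that trains a linear SVM with Pegasos-style subgradient descent on two hypothetical point clouds standing in for two concepts; everything here, from the data to the hyperparameters, is illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
# two point clouds in a 2D "conceptual space" (toy data for two concepts)
X = np.vstack([rng.normal(loc=[-2.0, 0.0], size=(50, 2)),
               rng.normal(loc=[2.0, 0.0], size=(50, 2))])
y = np.array([-1] * 50 + [1] * 50)

# linear SVM via stochastic subgradient descent on the hinge loss (Pegasos-style)
w = np.zeros(2)
b = 0.0
lam = 0.01                       # regularization strength
for t in range(1, 2001):
    i = rng.integers(len(X))
    eta = 1.0 / (lam * t)        # decreasing step size
    if y[i] * (X[i] @ w + b) < 1:   # hinge loss active: point inside the margin
        w = (1 - eta * lam) * w + eta * y[i] * X[i]
        b += eta * y[i]
    else:                            # only shrink w (regularization step)
        w = (1 - eta * lam) * w

predictions = np.sign(X @ w + b)
accuracy = (predictions == y).mean()
direction = w / np.linalg.norm(w)   # candidate interpretable direction
print(accuracy)
```

The decision boundary is the hyperplane orthogonal to `w`; if the SVM separates the two concept regions well, `direction` points from one concept toward the other, which is exactly the kind of interpretable direction the conceptual spaces framework predicts.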