Since I’m still working on the background chapters of my dissertation and hence currently don’t have any research updates to share, I’m going to focus today’s blog post on my teaching duties. More specifically, I’ll try to convince you that grading sheets are a useful tool for making your grading process more structured and objective, as well as for providing students with valuable feedback. Continue reading “What are “grading sheets” and why do we need them?”
It’s about time for another blog post in my little “What is …?” series. Today I want to talk about a specific type of artificial neural network, namely convolutional neural networks (CNNs). CNNs are the predominant approach for classifying images and have already been used implicitly in my study on the NOUN data set as well as in the analysis of the Shape similarity ratings. With this blog post, I want to clarify the basic underlying structure of this type of neural network.
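The heart of a CNN is the convolution operation itself: a small filter is slid across the image, and the same weights are applied at every position. As a toy illustration (my own sketch in plain NumPy, not code from the studies mentioned above), here is a hand-written “valid” convolution with a vertical-edge filter:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2D cross-correlation: slide the kernel over the image."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# An image with a vertical step edge: dark left half, bright right half
image = np.zeros((5, 5))
image[:, 2:] = 1.0
# A tiny filter that responds to left-to-right brightness increases
kernel = np.array([[-1.0, 1.0]])
response = conv2d(image, kernel)
print(response)  # high response exactly at the edge column
```

Because the same filter is reused everywhere, the edge is detected regardless of where it appears in the image — this weight sharing is what distinguishes convolutional layers from fully connected ones.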
I’m currently in the process of writing the background chapter on Machine Learning for my dissertation. In the context of doing that, I took a closer look at a widely used feature extraction technique called “Principal Component Analysis” (PCA). It can be described either on the intuitive level (“it finds orthogonal directions of greatest variance in the data”) or on the mathematical level (“it computes an eigenvalue decomposition of the data’s covariance matrix”). What I found most difficult to understand was how these two descriptions are related to each other. This is essentially what I want to share in today’s blog post. Continue reading “What is a “Principal Component Analysis”?”
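The connection between the two descriptions can be seen in a few lines of NumPy (a minimal sketch on synthetic data): the eigenvectors of the covariance matrix are exactly the orthogonal directions along which the projected data has the greatest variance, and each eigenvalue equals the variance along its eigenvector.

```python
import numpy as np

rng = np.random.default_rng(42)
# Correlated 2D data: most variance lies along one oblique direction
X = rng.normal(size=(500, 2)) @ np.array([[2.0, 1.5], [0.0, 0.5]])
Xc = X - X.mean(axis=0)                      # center the data first

# Mathematical view: eigendecomposition of the covariance matrix
cov = Xc.T @ Xc / (len(Xc) - 1)
eigvals, eigvecs = np.linalg.eigh(cov)       # ascending order
order = np.argsort(eigvals)[::-1]            # sort descending instead
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Intuitive view: projecting onto the eigenvectors gives uncorrelated
# components whose variances are exactly the eigenvalues
projections = Xc @ eigvecs
print(projections.var(axis=0, ddof=1))       # matches eigvals
```

So “direction of greatest variance” and “eigenvector with the largest eigenvalue” are two names for the same thing.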
We have already talked about some machine learning models in the past, including LTNs and β-VAE. Today, I would like to introduce the basic idea of linear support vector machines (SVMs) and how they can be useful for analyzing a conceptual space. Continue reading “What is a “Support Vector Machine”?”
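To make the basic idea concrete, here is a minimal sketch of a linear SVM trained with a Pegasos-style sub-gradient method on the hinge loss, applied to toy 2D data (my own illustration, not code from any of the studies; for simplicity, the hyperplane passes through the origin). The learned weight vector defines the separating hyperplane — and, when the points live in a conceptual space, its normal can be read as an interpretable direction separating the two classes.

```python
import numpy as np

rng = np.random.default_rng(0)
# Two linearly separable point clouds in a 2D "conceptual space"
X = np.vstack([rng.normal(loc=(-2, -2), size=(50, 2)),
               rng.normal(loc=(2, 2), size=(50, 2))])
y = np.array([-1] * 50 + [1] * 50)

# Stochastic sub-gradient descent on the regularized hinge loss
w, lam = np.zeros(2), 0.01
for t in range(1, 2001):
    eta = 1.0 / (lam * t)                    # decaying learning rate
    i = rng.integers(len(X))
    if y[i] * (X[i] @ w) < 1:                # margin violated: adjust w
        w = (1 - eta * lam) * w + eta * y[i] * X[i]
    else:                                    # only shrink w (regularization)
        w = (1 - eta * lam) * w

pred = np.sign(X @ w)
print((pred == y).mean())                    # training accuracy
```

The key point is that the SVM does not just separate the classes — it maximizes the margin between them, which is what the regularization term controls.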
I’ve already talked about InfoGAN a couple of times (here, here, and here). InfoGAN is a specific neural network architecture that claims to extract interpretable and semantically meaningful dimensions from unlabeled data sets – exactly what we need in order to automatically extract a conceptual space from data.
InfoGAN is, however, not the only architecture that makes this claim. Today, I will talk about the β-variational autoencoder (β-VAE), which uses a different approach to reach the same goal.