What is a “β Variational Autoencoder”?

I’ve already talked about InfoGAN [1] a couple of times (here, here, and here). InfoGAN is a specific neural network architecture that claims to extract interpretable and semantically meaningful dimensions from unlabeled data sets – exactly what we need in order to automatically extract a conceptual space from data.

InfoGAN is, however, not the only architecture that makes this claim. Today, I will talk about the β-variational autoencoder (β-VAE) [2], which takes a different approach to reaching the same goal.
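As a quick preview of where the β in the name comes from: the β-VAE simply re-weights the KL term of the standard VAE objective. Below is a minimal sketch of that objective in Python, assuming a Gaussian encoder and a squared-error reconstruction term; the helper name `beta_vae_loss` and the concrete reconstruction loss are illustrative choices, not the exact setup from [2].

```python
import numpy as np

def beta_vae_loss(x, x_recon, mu, log_var, beta=4.0):
    """Batch-averaged beta-VAE objective (to be minimized).

    x, x_recon : arrays of shape (batch_size, input_dim)
    mu, log_var: Gaussian encoder outputs, shape (batch_size, latent_dim)
    beta       : weight on the KL term; beta = 1 recovers the plain VAE.
    """
    # Reconstruction error per sample (here: squared error).
    recon = np.sum((x - x_recon) ** 2, axis=1)
    # Closed-form KL divergence between N(mu, sigma^2) and the prior N(0, I).
    kl = -0.5 * np.sum(1.0 + log_var - mu ** 2 - np.exp(log_var), axis=1)
    return np.mean(recon + beta * kl)
```

Setting β > 1 puts more pressure on the latent code to stay close to the prior, which is what is claimed to encourage disentangled, interpretable dimensions.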

Continue reading “What is a “β Variational Autoencoder”?”

What is “Multidimensional Scaling”?

In a previous blog post, I’ve already talked about how one might obtain the dimensions of a conceptual space with artificial neural networks. That approach is based on machine learning techniques, but there’s also a more traditional way of extracting a conceptual space: conducting a psychological experiment and applying an algorithm called “multidimensional scaling” to its results. Today, I would like to give a quick overview of this approach.
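As a rough sketch of how this works in practice, the pairwise dissimilarity judgments collected in such an experiment can be handed to an off-the-shelf MDS implementation, which places the stimuli in a low-dimensional space whose distances approximate those judgments. The snippet below assumes scikit-learn is available and uses a small, made-up dissimilarity matrix purely for illustration.

```python
import numpy as np
from sklearn.manifold import MDS

# Hypothetical pairwise dissimilarity ratings for four stimuli,
# e.g. averaged over participants (0 = identical, larger = more dissimilar).
dissimilarities = np.array([
    [0.0, 1.0, 3.0, 4.0],
    [1.0, 0.0, 2.5, 3.5],
    [3.0, 2.5, 0.0, 1.5],
    [4.0, 3.5, 1.5, 0.0],
])

# Metric MDS embeds the stimuli into a low-dimensional space whose
# pairwise distances approximate the given dissimilarities.
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coordinates = mds.fit_transform(dissimilarities)
print(coordinates)  # one 2D point per stimulus
```

The resulting coordinates can then be inspected to see whether the recovered dimensions admit a meaningful interpretation.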

Continue reading “What is “Multidimensional Scaling”?”

What is “Constructive Alignment” and why do we need it?

Over the past few weeks, I have been pretty busy fulfilling my teaching duties. As I haven’t done much research, I won’t talk about research today, but about “Constructive Alignment”, an approach for planning lectures, seminars, and other courses.

The constructive alignment process consists of three steps:

  1. Defining the learning targets
  2. Planning the examination
  3. Planning the course

But wait a second: why does planning the course only appear as the last step of this process?

Continue reading “What is “Constructive Alignment” and why do we need it?”

What are “Logic Tensor Networks”?

About half a year ago, I mentioned “Logic Tensor Networks” in my short summary of the Dagstuhl seminar on neural-symbolic computation. I think that this is a highly interesting approach, and as I intend to work with it in the future, I will briefly introduce this framework today.

Continue reading “What are “Logic Tensor Networks”?”

What are “language games”?

In one of my previous posts, I’ve shown a little overview diagram of my PhD research. One component of this diagram was called “language games”, and so far I have not explained what that means. Well, today I’m going to give a short introduction to this topic.

Language games [1] focus on the question “How can language come into existence?”, i.e., “What are possible mechanisms that allow different individuals to come up with a shared vocabulary that they can use to communicate about things in the world?”. I admit that this sounds a bit abstract, so let me illustrate the problem with an example:
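(The concrete example follows in the full post. In the meantime, a minimal “naming game” simulation gives a rough, hypothetical sketch of the kind of mechanism this field studies: agents repeatedly pair up, name objects, and adopt each other’s words until a shared vocabulary emerges. The code and its parameters are purely illustrative and not taken from [1].)

```python
import random

OBJECTS = ["obj_a", "obj_b", "obj_c"]

class Agent:
    def __init__(self):
        # Maps each object to the set of words this agent associates with it.
        self.lexicon = {}

    def speak(self, obj):
        words = self.lexicon.setdefault(obj, set())
        if not words:
            # No word yet for this object: invent one.
            words.add("w%04d" % random.randint(0, 9999))
        return random.choice(sorted(words))

    def hear(self, obj, word):
        words = self.lexicon.setdefault(obj, set())
        if word in words:
            # Success: keep only the winning word for this object.
            self.lexicon[obj] = {word}
            return True
        # Failure: adopt the speaker's word for future games.
        words.add(word)
        return False

random.seed(0)
agents = [Agent() for _ in range(10)]
for _ in range(5000):
    speaker, hearer = random.sample(agents, 2)
    obj = random.choice(OBJECTS)
    word = speaker.speak(obj)
    if hearer.hear(obj, word):
        speaker.lexicon[obj] = {word}

# After enough games, the population tends to agree on one word per object.
print([sorted(agent.lexicon.get("obj_a", set())) for agent in agents])
```

Running the script prints the word(s) each agent associates with “obj_a”; after enough games, these typically coincide across the whole population, even though no agent ever sees more than one partner at a time.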

Continue reading “What are “language games”?”