A Hybrid Way: Reloaded (Part 2)

In my last blog post, I analyzed the differences between metric and nonmetric MDS when applied to the NOUN database. Today, I want to continue by showing some machine learning results, updating the ones from our 2018 AIC paper (see these two blog posts: part 1 and part 2).
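To make the comparison more concrete, here is a minimal sketch (not the exact code from my experiments) of how the two MDS variants can be run with scikit-learn on a precomputed dissimilarity matrix; the small matrix below is purely hypothetical and only stands in for the NOUN similarity ratings:

```python
import numpy as np
from sklearn.manifold import MDS

# Hypothetical 4x4 dissimilarity matrix (symmetric, zero diagonal),
# standing in for the pairwise dissimilarities derived from the NOUN data.
dissimilarities = np.array([
    [0.0, 0.5, 0.9, 0.7],
    [0.5, 0.0, 0.4, 0.8],
    [0.9, 0.4, 0.0, 0.6],
    [0.7, 0.8, 0.6, 0.0],
])

# Metric MDS: tries to preserve the dissimilarity values themselves.
metric_mds = MDS(n_components=2, metric=True,
                 dissimilarity='precomputed', random_state=42)
metric_points = metric_mds.fit_transform(dissimilarities)

# Nonmetric MDS: only tries to preserve the rank order of the dissimilarities.
nonmetric_mds = MDS(n_components=2, metric=False,
                    dissimilarity='precomputed', random_state=42)
nonmetric_points = nonmetric_mds.fit_transform(dissimilarities)

print("metric stress:", metric_mds.stress_)
print("nonmetric stress:", nonmetric_mds.stress_)
```

Since the metric variant fits the dissimilarity values themselves while the nonmetric variant only fits their rank order, the two can produce noticeably different configurations for the same data, which is exactly the difference I looked at in the last post.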

Continue reading “A Hybrid Way: Reloaded (Part 2)”

A Hybrid Way: Reloaded (Part 1)

Some time ago, I wrote two blog posts about a hybrid way of obtaining the dimensions of a conceptual space (see here and here). I am currently rerunning these experiments in more detail, and today I want to share both the motivation for doing so and some first results.

Continue reading “A Hybrid Way: Reloaded (Part 1)”

What is a “β Variational Autoencoder”?

I’ve already talked about InfoGAN [1] a couple of times (here, here, and here). InfoGAN is a specific neural network architecture that claims to extract interpretable and semantically meaningful dimensions from unlabeled data sets – exactly what we need in order to automatically extract a conceptual space from data.

InfoGAN is, however, not the only architecture that makes this claim. Today, I will talk about the β-variational autoencoder (β-VAE) [2], which uses a different approach to reach the same goal.
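In a nutshell (and glossing over the details of [2]), the β-VAE keeps the standard VAE objective but weights the KL term with a factor β > 1, which puts extra pressure on the latent code to use independent, interpretable dimensions. A minimal sketch of such a loss in PyTorch (assuming a Gaussian encoder that outputs mu and log_var, and using a mean squared error reconstruction term as my own illustrative choice) might look like this:

```python
import torch
import torch.nn.functional as F

def beta_vae_loss(x, x_reconstructed, mu, log_var, beta=4.0):
    # Reconstruction term: how well the decoder reproduces the input.
    recon_loss = F.mse_loss(x_reconstructed, x, reduction='sum')
    # Closed-form KL divergence between the Gaussian posterior N(mu, sigma^2)
    # and the standard normal prior.
    kl_div = -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp())
    # For beta = 1 this is the usual VAE objective; beta > 1 strengthens the
    # pressure towards disentangled latent dimensions.
    return recon_loss + beta * kl_div
```

The function name and the mean squared error term are just my own choices for illustration; the actual reconstruction term and the value of β depend on the data set and are discussed in [2].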

Continue reading “What is a “β Variational Autoencoder”?”