What is a “β Variational Autoencoder”?

I’ve already talked about InfoGAN [1] a couple of times (here, here, and here). InfoGAN is a specific neural network architecture that claims to extract interpretable and semantically meaningful dimensions from unlabeled data sets – exactly what we need in order to automatically extract a conceptual space from data.

InfoGAN is, however, not the only architecture that makes this claim. Today, I will talk about the β-variational autoencoder (β-VAE) [2], which takes a different approach to reaching the same goal.
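
As a small teaser of where the difference lies: the β-VAE reweights the KL term in the usual VAE training objective with a factor β > 1, which puts extra pressure on the latent code. Below is a minimal sketch of this loss in PyTorch; the function name and the choice of a mean-squared reconstruction error are my own illustrative assumptions, not prescribed by the paper.

```python
import torch
import torch.nn.functional as F

def beta_vae_loss(x, x_recon, mu, log_var, beta=4.0):
    """Sketch of the β-VAE objective: reconstruction error plus a
    β-weighted KL term. Setting beta > 1 strengthens the KL pressure,
    which is what is supposed to encourage disentangled dimensions."""
    # Reconstruction term: how well the decoder reproduces the input
    # (MSE is one common choice, assumed here for illustration).
    recon_loss = F.mse_loss(x_recon, x, reduction="sum")
    # Closed-form KL divergence between the approximate posterior
    # N(mu, sigma^2) and the standard normal prior N(0, I).
    kl_div = -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp())
    return recon_loss + beta * kl_div
```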

Continue reading “What is a “β Variational Autoencoder”?”

Applying Logic Tensor Networks (Part 2)

In my last LTN blog post, I introduced the overall setting of my experiment. Before I can report on first results, I need to describe how we can evaluate the performance of the classifiers in this multi-label classification setting. This is what I’m going to do today.
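
To give a flavor of what such an evaluation can look like, here is a minimal sketch using two common multi-label metrics from scikit-learn; the toy label matrices are made up for illustration, and these are not necessarily the exact metrics I will end up using.

```python
import numpy as np
from sklearn.metrics import f1_score, hamming_loss

# Hypothetical ground truth and predictions for 4 samples and 3 labels;
# each row is a binary indicator vector over the label set.
y_true = np.array([[1, 0, 1],
                   [0, 1, 0],
                   [1, 1, 0],
                   [0, 0, 1]])
y_pred = np.array([[1, 0, 0],
                   [0, 1, 0],
                   [1, 0, 0],
                   [0, 1, 1]])

# Hamming loss: fraction of individual label assignments that are wrong.
print(hamming_loss(y_true, y_pred))           # 3 wrong cells / 12 = 0.25
# Micro-averaged F1: pools true/false positives across all labels.
print(f1_score(y_true, y_pred, average="micro"))
```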

Continue reading “Applying Logic Tensor Networks (Part 2)”

How does multidimensional scaling work?

I already talked about multidimensional scaling (MDS) some time ago. Back then, I only gave a rough idea of what MDS does, but I didn’t say much about how MDS arrives at a solution. Today, I want to follow up on this and give you some intuition about what happens behind the scenes.
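
As a small preview: metric MDS searches for low-dimensional coordinates whose pairwise distances reproduce the given dissimilarities as closely as possible, typically by iteratively minimizing a “stress” cost. Here is a minimal sketch using scikit-learn’s SMACOF-based implementation; the dissimilarity matrix is made up for illustration.

```python
import numpy as np
from sklearn.manifold import MDS

# Hypothetical pairwise dissimilarity matrix for 4 items
# (symmetric, with zeros on the diagonal).
dissimilarities = np.array([[0.0, 1.0, 2.0, 3.0],
                            [1.0, 0.0, 1.5, 2.5],
                            [2.0, 1.5, 0.0, 1.0],
                            [3.0, 2.5, 1.0, 0.0]])

# MDS iteratively moves the points to minimize the stress, i.e. the
# mismatch between the embedded distances and the dissimilarities.
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coordinates = mds.fit_transform(dissimilarities)
print(coordinates)   # one 2D point per item
print(mds.stress_)   # remaining mismatch after optimization
```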

Continue reading “How does multidimensional scaling work?”

Applying Logic Tensor Networks (Part 1)

In previous blog posts, I have already talked about Logic Tensor Networks in general, their relation to Conceptual Spaces, and several additional membership functions that are in line with the Conceptual Spaces framework. As I mentioned before, I want to apply them in a “proof of concept” scenario. Today I’m going to sketch this scenario in more detail.

Continue reading “Applying Logic Tensor Networks (Part 1)”