What is “Multidimensional Scaling”?

In a previous blog post, I’ve already talked about how one can potentially obtain the dimensions of a conceptual space with artificial neural networks. That approach is based on machine learning techniques, but there’s also a more traditional way of extracting a conceptual space: conducting a psychological experiment and analyzing its results with a type of algorithm called “multidimensional scaling” (MDS). Today, I would like to give a quick overview of this approach.
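
To make this a bit more concrete: the rough idea is to ask participants for pairwise similarity judgments about a set of stimuli and to let MDS find coordinates whose distances reflect these judgments. Here is a minimal sketch using scikit-learn; the dissimilarity values are made up purely for illustration:

```python
# A minimal sketch (not the experiment from the post): given a matrix of
# pairwise dissimilarity ratings collected from participants, metric MDS
# recovers a low-dimensional embedding whose distances approximate the ratings.
import numpy as np
from sklearn.manifold import MDS

# Hypothetical dissimilarity matrix for four stimuli (symmetric, zero diagonal),
# e.g., averaged ratings from 0 (identical) to 1 (maximally different).
dissimilarities = np.array([
    [0.0, 0.2, 0.8, 0.9],
    [0.2, 0.0, 0.7, 0.8],
    [0.8, 0.7, 0.0, 0.3],
    [0.9, 0.8, 0.3, 0.0],
])

# dissimilarity="precomputed" tells sklearn to treat the input as distances.
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=42)
points = mds.fit_transform(dissimilarities)  # one 2D point per stimulus

print(points)       # coordinates in the recovered low-dimensional space
print(mds.stress_)  # how badly the embedding distorts the original ratings
```

The resulting coordinates can then be interpreted as locations in a low-dimensional conceptual space, and the stress value indicates how faithfully the embedding preserves the original ratings.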

Continue reading “What is “Multidimensional Scaling”?”

What is “Constructive Alignment” and why do we need it?

Over the past few weeks, I have been pretty busy fulfilling my teaching duties. As I haven’t done much research lately, I won’t talk about research today, but about “Constructive Alignment”, an approach for planning lectures, seminars, and other courses.

The constructive alignment process consists of three steps:

  1. Defining the learning targets
  2. Planning the examination
  3. Planning the course

But wait a second, why does planning the course appear as the last step in this process?

Continue reading “What is “Constructive Alignment” and why do we need it?”

Extending Logic Tensor Networks (Part 2)

A while ago, I introduced Logic Tensor Networks (LTNs) and argued that they are nicely applicable in a conceptual spaces scenario. In one of my recent posts, I described how to ensure that an LTN can only learn convex concepts. Today, I will take this one step further by introducing additional ways of defining the membership function of an LTN.
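
To give a rough intuition of what such a membership function can look like: the convexity requirement from my earlier post can be illustrated (in a strongly simplified form that is not the exact formulation I use) by a function that decays with the distance to a prototype point, since all of its α-cuts are then convex regions:

```python
# A strongly simplified sketch (not the exact LTN formulation): a membership
# function that decays with the distance to a prototype point p. Because
# mu(x) = exp(-c * ||x - p||), every alpha-cut {x : mu(x) >= alpha} is a
# ball around p, and balls are convex -- so only convex concepts can be learned.
import numpy as np

def membership(x, prototype, sensitivity=1.0):
    """Degree of membership of point x in the concept around `prototype`."""
    distance = np.linalg.norm(x - prototype)
    return np.exp(-sensitivity * distance)

prototype = np.array([0.5, 0.5])                     # hypothetical concept center
print(membership(np.array([0.5, 0.5]), prototype))   # 1.0 at the prototype
print(membership(np.array([0.9, 0.1]), prototype))   # smaller further away
```

Different choices of distance measure or decay yield different families of membership functions, which hints at why there is more than one reasonable way of defining membership in the first place.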

Continue reading “Extending Logic Tensor Networks (Part 2)”

Some progress with InfoGAN

I’ve recently shared my first (and unfortunately relatively disappointing) results of applying InfoGAN [1] to simple shapes. Over the past weeks, I’ve continued to work on this, and my results are starting to look more promising. Today, I’m going to share the current state of my research.

Continue reading “Some progress with InfoGAN”

Extending Logic Tensor Networks (Part 1)

I have already talked about Logic Tensor Networks (LTNs for short) in the past (see here and here), and I have announced that I would be working with them. Today, I will share with you my first steps in modifying and extending the framework. More specifically, I will talk about a problem with the original membership function and about how I solved it.
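
For context, the membership function from the original LTN framework is, to the best of my knowledge, a neural tensor layer squashed by a sigmoid. Here is a minimal NumPy sketch of that formulation; the parameter shapes and the toy example are my own illustration:

```python
# A sketch of the original LTN membership function (as I recall it from the
# original paper): a neural tensor layer followed by a sigmoid. Shapes: the
# input v has length n, W is (k, n, n), V is (k, n), b and u have length k;
# W, V, b, and u are all learned parameters.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def ltn_membership(v, W, V, b, u):
    """mu(v) = sigmoid(u^T tanh(v^T W v + V v + b))."""
    tensor_term = np.einsum("i,kij,j->k", v, W, v)  # one quadratic form per slice
    hidden = np.tanh(tensor_term + V @ v + b)
    return sigmoid(u @ hidden)

# Tiny hypothetical example: 2D inputs, k = 3 tensor slices.
rng = np.random.default_rng(0)
n, k = 2, 3
W, V = rng.normal(size=(k, n, n)), rng.normal(size=(k, n))
b, u = rng.normal(size=k), rng.normal(size=k)
print(ltn_membership(np.array([0.3, 0.7]), W, V, b, u))  # value in (0, 1)
```

Note that nothing in this parameterization constrains the shape of the learned concept regions, which is worth keeping in mind when thinking about modifications like the convexity constraint discussed in Part 2.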

Continue reading “Extending Logic Tensor Networks (Part 1)”