What is a “Support Vector Machine”?

In previous posts, we have talked about some machine learning models, including LTNs and β-VAE. Today, I would like to introduce the basic idea of linear support vector machines (SVMs) and how they can be useful for analyzing a conceptual space. Continue reading “What is a “Support Vector Machine”?”
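As a small teaser of the idea, here is a minimal sketch of a linear SVM as a region classifier: it learns a separating hyperplane between two classes of points, which can be read as a boundary between two regions of a conceptual space. The toy data and all names below are made up for illustration and are not taken from the post.

```python
# Minimal linear SVM sketch on a toy 2-D "conceptual space" with two
# well-separated point clouds (random stand-in data, not the blog's data).
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(42)
class_a = rng.normal(loc=[-2, 0], scale=0.5, size=(50, 2))  # concept A
class_b = rng.normal(loc=[2, 0], scale=0.5, size=(50, 2))   # concept B
X = np.vstack([class_a, class_b])
y = np.array([0] * 50 + [1] * 50)

svm = LinearSVC().fit(X, y)
# the learned hyperplane w·x + b = 0 separates the two concept regions
w, b = svm.coef_[0], svm.intercept_[0]
print(svm.predict([[-2.0, 0.0], [2.0, 0.0]]))
```

The normal vector `w` of the hyperplane can itself be interpreted as a direction in the space along which the two concepts differ.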

A Hybrid Way: Reloaded (Part 3)

This blog post closes the “A Hybrid Way: Reloaded” mini-series. So far, I have analyzed the MDS solutions in part 1 and presented first regression results in part 2 (with respect to the effects of feature space, correct vs. shuffled targets, and regularization). Today, I want to analyze what happens if we use different MDS algorithms for constructing the similarity spaces, and to what extent our regression results depend on the number of dimensions of the similarity space.

Continue reading “A Hybrid Way: Reloaded (Part 3)”
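A dimensionality sweep of this kind can be roughly illustrated as follows, using scikit-learn's `MDS` on random stand-in data (the actual similarity ratings and the setup from the post are not reproduced here):

```python
# Hypothetical sketch: fit MDS solutions of increasing dimensionality on a
# precomputed dissimilarity matrix and record the stress of each solution.
# The data below is random stand-in data, not actual similarity ratings.
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
points = rng.normal(size=(30, 8))  # 30 items described by 8 hidden features
dissimilarities = np.linalg.norm(points[:, None] - points[None, :], axis=-1)

stress_per_dim = {}
for n_dims in (1, 2, 4, 8):
    mds = MDS(n_components=n_dims, dissimilarity="precomputed", random_state=0)
    mds.fit(dissimilarities)
    stress_per_dim[n_dims] = mds.stress_  # lower stress = better fit

print(stress_per_dim)
```

Plotting stress (or downstream regression performance) against the number of dimensions is a common way to pick a reasonable dimensionality for the similarity space.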

A Hybrid Way: Reloaded (Part 2)

In my last blog post, I analyzed the differences between metric and nonmetric MDS when applied to the NOUN database. Today, I want to continue by showing some machine learning results, updating the ones from our 2018 AIC paper (see these two blog posts: part 1 and part 2).

Continue reading “A Hybrid Way: Reloaded (Part 2)”
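The metric vs. nonmetric contrast can be sketched with scikit-learn (again on random stand-in data, not the NOUN data set): metric MDS tries to preserve the dissimilarity values themselves, while nonmetric MDS only tries to preserve their rank order.

```python
# Sketch: embed the same (random) dissimilarity matrix with metric and
# nonmetric MDS and compare the resulting embeddings. Illustrative only.
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(1)
points = rng.normal(size=(20, 5))
dissim = np.linalg.norm(points[:, None] - points[None, :], axis=-1)

embeddings = {}
for metric in (True, False):
    mds = MDS(n_components=2, metric=metric,
              dissimilarity="precomputed", random_state=0)
    embeddings["metric" if metric else "nonmetric"] = mds.fit_transform(dissim)

for name, emb in embeddings.items():
    print(name, emb.shape)
```

Since psychological similarity judgments are usually only ordinal, nonmetric MDS makes weaker assumptions about the ratings, which is one motivation for comparing the two variants.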

A Hybrid Way: Reloaded (Part 1)

Some time ago, I wrote two blog posts about a hybrid way of obtaining the dimensions of a conceptual space (see here and here). Currently, I am rerunning these experiments in more detail, and today I want to share both the motivation for doing so and some first results.

Continue reading “A Hybrid Way: Reloaded (Part 1)”