What is COBWEB?

Since my current ANN experiments are moving forward rather slowly, I have spent some time preparing the final background chapter for my dissertation. In the course of doing so, I have, among other things, looked at COBWEB [1], which is one of the best-known concept formation algorithms. Today, I want to share the basic idea behind COBWEB and its variants.
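
COBWEB itself will be the topic of the linked post, but as a quick teaser: it incrementally sorts each new instance into a concept hierarchy and, at every node, picks the operation (insert into an existing child, create a new child, merge, or split) that maximizes the so-called category utility of the resulting partition. A minimal sketch of category utility for nominal attributes might look as follows (the data layout and function name are my own illustration, not code from the post):

```python
from collections import Counter

def category_utility(partition, attributes):
    """Category utility of a partition of instances (Gluck & Corter).

    partition: list of clusters, each a list of instances, where an
    instance is a dict mapping attribute name -> nominal value.
    Higher values mean that the clusters make attribute values more
    predictable than they are in the overall data set.
    """
    all_instances = [inst for cluster in partition for inst in cluster]
    n_total = len(all_instances)

    def expected_correct_guesses(instances):
        # sum over attributes A_i and values v of P(A_i = v)^2
        total = 0.0
        for attr in attributes:
            counts = Counter(inst[attr] for inst in instances)
            total += sum((c / len(instances)) ** 2 for c in counts.values())
        return total

    baseline = expected_correct_guesses(all_instances)
    score = sum(
        (len(cluster) / n_total)
        * (expected_correct_guesses(cluster) - baseline)
        for cluster in partition
    )
    return score / len(partition)

# Example: two clusters that separate the attribute "shape" perfectly.
clustering = [
    [{"shape": "round", "size": "small"}, {"shape": "round", "size": "large"}],
    [{"shape": "square", "size": "small"}, {"shape": "square", "size": "large"}],
]
print(category_utility(clustering, attributes=["shape", "size"]))
```

COBWEB evaluates such a score for every candidate placement of a new instance and keeps the best one, which is what makes it an incremental, hill-climbing concept formation algorithm.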

Learning To Map Images Into Shape Space (Part 2)

In my last blog post, I introduced my current research project: Learning to map raw input images into the shape space obtained from my prior study. Moreover, I talked a bit about the data set I used and the augmentation steps I took to increase the variety of inputs. Today, I want to share with you the network architecture which I plan to use in my experiments. So let’s get started.

Learning To Map Images Into Shape Space (Part 1)

In a previous mini-series of blog posts (see here, here, here, and here), I introduced a small data set of 60 line drawings complemented with pairwise shape similarity ratings, and I analyzed this data set by constructing conceptual similarity spaces from it. Today, I will start a new mini-series about learning a mapping from images into these similarity spaces, following up on my prior work on the NOUN data set (see here and here).

What are “Convolutional Neural Networks”?

It’s about time for another blog post in my little “What is …?” series. Today, I want to talk about a specific type of artificial neural network, namely convolutional neural networks (CNNs). CNNs are the predominant approach for classifying images and have already been implicitly used in my study on the NOUN data set as well as in the analysis of the shape similarity ratings. With this blog post, I want to clarify the basic underlying structure of this type of network.
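
As a minimal illustration of that structure (my own toy sketch, not the architecture used in either of those studies): a CNN stacks convolutional layers, which slide small learned filters over the image, pooling layers, which downsample the resulting feature maps, and finally a few dense layers that produce the classification. In Keras, such a network could look roughly like this:

```python
from tensorflow.keras import layers, models

# A tiny illustrative CNN for 64x64 grayscale images and 10 classes.
model = models.Sequential([
    layers.Input(shape=(64, 64, 1)),
    layers.Conv2D(16, kernel_size=3, activation="relu"),  # 16 learned 3x3 filters
    layers.MaxPooling2D(pool_size=2),                      # downsample by a factor of 2
    layers.Conv2D(32, kernel_size=3, activation="relu"),
    layers.MaxPooling2D(pool_size=2),
    layers.Flatten(),                                      # feature maps -> flat vector
    layers.Dense(64, activation="relu"),
    layers.Dense(10, activation="softmax"),                 # class probabilities
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.summary()
```

The convolutional and pooling layers act as a learned feature extractor, while only the dense layers at the top perform the actual classification.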

What is a “Principal Component Analysis”?

I’m currently in the process of writing the background chapter on Machine Learning for my dissertation. While doing so, I took a closer look at a widely used feature extraction technique called “Principal Component Analysis” (PCA). It can be described either on an intuitive level (“it finds orthogonal directions of greatest variance in the data”) or on a mathematical level (“it computes an eigenvalue decomposition of the covariance matrix of the data”). What I found most difficult to understand was how these two descriptions are related to each other. This is essentially what I want to share in today’s blog post.
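
To illustrate how the two descriptions fit together with a small toy example of my own (not taken from the post): the orthogonal directions of greatest variance are exactly the eigenvectors of the covariance matrix of the centered data, and the corresponding eigenvalues are the variances along those directions.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy data: 200 two-dimensional points with strongly correlated coordinates.
x = rng.normal(size=200)
data = np.column_stack([x, 0.5 * x + 0.1 * rng.normal(size=200)])

# Center the data and compute its covariance matrix.
centered = data - data.mean(axis=0)
cov = np.cov(centered, rowvar=False)

# Eigendecomposition of the (symmetric) covariance matrix.
eigenvalues, eigenvectors = np.linalg.eigh(cov)

# Sort by decreasing eigenvalue: the first eigenvector is the direction of
# greatest variance, and its eigenvalue is the variance along that direction.
order = np.argsort(eigenvalues)[::-1]
eigenvalues, eigenvectors = eigenvalues[order], eigenvectors[:, order]

# Projecting the centered data onto the eigenvectors yields the principal components.
components = centered @ eigenvectors
print("principal directions:\n", eigenvectors)
print("variance along each direction:", eigenvalues)
print("variance of projected data:   ", components.var(axis=0, ddof=1))
```

The last two printed lines coincide, which is precisely the link between the intuitive and the mathematical description of PCA.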