So in the last blog post, I argued that it is not a good idea to use convex regions for representing concepts in a conceptual space: with convex regions, we cannot represent correlations between domains. This time, I will show you how star-shaped regions can save the day.
But first of all, what does “star-shaped” mean in this context?
Continue reading “Star-shaped regions can encode correlations”
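To give a flavor of the idea: one way to build a star-shaped region is as a union of convex regions that all share a common point, since every point of the union is then "visible" from that shared point. The following toy sketch is my own construction (the space, the boxes, and the numbers are illustrative assumptions, not code or data from the post):

```python
# Toy 2D conceptual space with dimensions "hue" (0 = green, 1 = red)
# and "taste" (0 = sour, 1 = sweet). "Apple" correlates the two domains:
# red apples tend to be sweet, green apples tend to be sour.

def in_box(point, low, high):
    """Axis-aligned box membership: low <= point <= high in every dimension."""
    return all(l <= x <= h for x, l, h in zip(point, low, high))

# Two convex boxes that overlap in a central region around (0.5, 0.5).
red_sweet  = ((0.5, 0.5), (1.0, 1.0))   # red and sweet
green_sour = ((0.0, 0.0), (0.5, 0.5))   # green and sour

def in_apple(point):
    """Union of the two boxes. Both boxes contain (0.5, 0.5), so the
    union is star-shaped with respect to that point, yet not convex."""
    return in_box(point, *red_sweet) or in_box(point, *green_sour)

print(in_apple((0.9, 0.9)))  # True:  red + sweet
print(in_apple((0.1, 0.1)))  # True:  green + sour
print(in_apple((0.9, 0.1)))  # False: red + sour is excluded
```

The last line is the point of the construction: the star-shaped union keeps the correlated corners while excluding the uncorrelated combination, which no single convex region containing both boxes could do.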
Recently, another one of my papers (take a look at the preprint here) has been accepted at the German Conference on Artificial Intelligence. It is quite a technical paper with a lot of formulas, but I’ll try to illustrate the overall high-level idea in this and one or two future blog posts.
Today, I would like to talk about the starting point of the research presented in this paper: the observation that convex regions in a conceptual space are highly problematic if we want to represent correlations between domains.
Continue reading “Convex regions in a conceptual space are problematic”
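One way to see the issue in miniature, using a toy setting of my own (not taken from the paper): if a concept is built from one convex interval per domain, the combined region is a box, and a box cannot encode a correlation between the domains.

```python
# Toy sketch (my own construction): a concept built as a product of
# per-dimension intervals loses correlations between dimensions.

def product_region(points):
    """Smallest per-dimension interval product containing all points."""
    dims = range(len(points[0]))
    lows  = tuple(min(p[d] for p in points) for d in dims)
    highs = tuple(max(p[d] for p in points) for d in dims)
    return lows, highs

def contains(region, point):
    lows, highs = region
    return all(l <= x <= h for x, l, h in zip(point, lows, highs))

# Observed apples in a (hue, taste) space: red ones are sweet,
# green ones are sour -- the two dimensions are correlated.
observed = [(0.9, 0.9), (0.8, 0.7), (0.2, 0.3), (0.1, 0.1)]
apple = product_region(observed)

print(contains(apple, (0.9, 0.1)))  # True: red + sour is wrongly included
```

Even though no observed apple is both red and sour, the box spanned by the observations includes that combination, which is exactly the kind of lost correlation the post discusses.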
This week, the first paper of my PhD research has been accepted for publication (you can take a look at the preprint here). I would like to seize the opportunity to explain here, at a high level, what this paper is about.
I’ve explained in a previous post what a conceptual space looks like. The aforementioned paper discusses the question posed in the title of this post: “Where do the dimensions of a conceptual space come from?”
Continue reading “Where do the dimensions of a conceptual space come from?”
In one of my previous posts, I showed a little overview diagram of my PhD research. One component of this diagram was called “language games”, and so far I have not explained what that means. Well, today I’m going to give a short introduction to this topic.
Language games focus on the question “How can language come into existence?”, i.e., “What are possible mechanisms that allow different individuals to come up with a shared vocabulary that they can use to communicate about things in the world?” I admit that this sounds a bit abstract, so let me illustrate the problem with an example:
Continue reading “What are ‘language games’?”
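To make the idea less abstract, here is a minimal simulation in the spirit of Steels’ naming game, one classic mechanism studied in this field. The code is my own simplification (agent count, round count, and update rules are illustrative choices, not taken from the post):

```python
import random

# Minimal naming game: agents repeatedly pair up to name a single shared
# object, inventing, adopting, and pruning words until they agree.

random.seed(0)

N_AGENTS, N_ROUNDS = 10, 3000

# Each agent holds a set of candidate words for the object.
agents = [set() for _ in range(N_AGENTS)]
next_word_id = 0  # for inventing fresh words

for _ in range(N_ROUNDS):
    speaker, hearer = random.sample(range(N_AGENTS), 2)
    if not agents[speaker]:              # speaker invents a word if needed
        next_word_id += 1
        agents[speaker].add(f"w{next_word_id}")
    word = random.choice(sorted(agents[speaker]))
    if word in agents[hearer]:           # success: both align on this word
        agents[speaker] = {word}
        agents[hearer] = {word}
    else:                                # failure: hearer adopts the word
        agents[hearer].add(word)

# After enough rounds, the population typically converges on one shared word.
print([sorted(a) for a in agents])
```

The interesting part is that no agent ever sees the others’ vocabularies directly: a shared convention emerges purely from repeated local interactions, which is the core question language games investigate.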
Last week, I had the chance to spend five days at Schloss Dagstuhl learning, talking, and thinking about “Human-Like Neural-Symbolic Computation”.
Usually, one can distinguish two kinds of approaches in artificial intelligence: Symbolic approaches are based on symbols and rules, mostly in the form of logic. They are good at encoding explicit knowledge such as “all cats have tails”. Neural approaches, on the other hand, typically work on raw numbers and use networks of artificial neurons. They are good at learning implicit knowledge, e.g., how to recognize cats in an image. All participants of this invitation-only seminar are actively working on combining these two strands of research.
Continue reading “Dagstuhl Seminar “Human-Like Neural-Symbolic Computation””
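To make the contrast between the two strands concrete, here is a toy illustration of my own (all names, rules, weights, and features are invented for this sketch): the same kind of knowledge encoded once as an explicit rule and once as a single hand-weighted artificial neuron.

```python
# Symbolic flavor: the explicit rule "all cats have tails", applied by
# simple forward chaining over a set of facts.
facts = {("cat", "tom")}
rules = [(("cat",), ("has_tail",))]  # if cat(x) then has_tail(x)

def infer(facts, rules):
    derived = set(facts)
    for pre, post in rules:
        for pred, entity in list(derived):
            if (pred,) == pre:
                derived.add((post[0], entity))
    return derived

print(infer(facts, rules))  # contains ("has_tail", "tom")

# Neural flavor: a single artificial neuron classifying a feature vector.
# Weights are set by hand here; in practice they would be learned from data.
def neuron(x, w, b):
    return sum(xi * wi for xi, wi in zip(x, w)) + b > 0

# features: [has_whiskers, barks]
print(neuron([1, 0], w=[1.0, -1.0], b=-0.5))  # True: cat-like input
```

The rule makes its knowledge inspectable but must be written down, while the neuron’s “knowledge” lives in its weights; bridging these two representations is what neural-symbolic computation is about.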