In one of my previous posts, I showed a little overview diagram of my PhD research. One component of this diagram was called “language games”, and so far I have not explained what that means. Well, today I’m going to give a short introduction to this topic.
Language games focus on the question “How can language come into existence?”, i.e., “What are possible mechanisms that allow different individuals to come up with a shared vocabulary they can use to communicate about things in the world?” I admit that this sounds a bit abstract, so let me illustrate the problem with an example:
Continue reading “What are “language games”?”
Last week, I had the chance to spend five days at Schloss Dagstuhl learning, talking, and thinking about “Human-Like Neural-Symbolic Computation”.
Usually, one can distinguish two kinds of approaches in artificial intelligence: Symbolic approaches are based on symbols and rules, mostly in the form of logic. They are good at encoding explicit knowledge such as “all cats have tails”. Neural approaches, on the other hand, typically work on raw numbers and use networks of artificial neurons. They are good at learning implicit knowledge, e.g., how to recognize cats in an image. All participants of this invitation-only seminar are actively working on combining these two strands of research.
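To make the contrast a bit more concrete, here is a minimal sketch of the two styles. Everything in it — the facts, the feature names, and the numbers — is invented for illustration; real systems on either side are of course far more sophisticated.

```python
# A toy contrast between the two styles; the facts, features, and
# numbers below are all invented for this sketch.

# --- Symbolic: explicit knowledge as facts and a rule ---
facts = {("is_a", "felix", "cat"), ("is_a", "rex", "dog")}

def has_tail(x):
    # Rule: all cats have tails.
    return ("is_a", x, "cat") in facts

# --- Neural: implicit knowledge learned from numeric examples ---
# Feature vectors (ear_pointiness, purr_frequency), label 1 = cat.
data = [([0.9, 0.8], 1), ([0.8, 0.9], 1), ([0.2, 0.1], 0), ([0.1, 0.3], 0)]
w, b = [0.0, 0.0], 0.0

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Perceptron learning rule: nudge the weights whenever a prediction is wrong.
for _ in range(20):
    for x, label in data:
        error = label - predict(x)
        w = [wi + 0.1 * error * xi for wi, xi in zip(w, x)]
        b += 0.1 * error

print(has_tail("felix"))      # True: the rule fires for any known cat
print(predict([0.85, 0.75]))  # 1: the learned weights classify it as a cat
```

The symbolic half gives a crisp answer you can inspect and justify; the neural half gives a decision function whose “knowledge” lives implicitly in the learned weights — which is exactly the gap the seminar’s topic tries to bridge.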
Continue reading “Dagstuhl Seminar “Human-Like Neural-Symbolic Computation””
Having covered different topics in my little “What is …?” series, it is now time to put the pieces together and explain what my PhD research is all about.
The preliminary title of my dissertation is “Concept Formation in Conceptual Spaces”. As you have probably already guessed, it involves both the area of concept formation and the framework of conceptual spaces.
The basic idea is the following:
Continue reading “My PhD project in a nutshell”
In my PhD project, I do research in the area of “concept formation”. Before talking about my PhD research in more detail, I would like to use this post to give a quick introduction to the area of concept formation.
Continue reading “What is “concept formation”?”
Looking at my posts so far, it seems that a little “What is …?” series is emerging (“What is AGI?”, “What are conceptual spaces?”). Today I’d like to add another post to this series – this time about the term “machine learning” and about three different types of machine learning algorithms one can distinguish.
As discussed earlier, “good old-fashioned AI” is based on manually writing rules and having some sort of inference system that applies these rules in a given situation. Machine learning, by contrast, is about discovering such rules from a (usually quite large) number of examples.
One can distinguish three types of machine learning: supervised, unsupervised, and semi-supervised.
Continue reading “What is “machine learning”?”