Last week, I had the chance to spend five days at Schloss Dagstuhl learning, talking, and thinking about “Human-Like Neural-Symbolic Computation”.
Usually, one can distinguish two kinds of approaches in artificial intelligence: Symbolic approaches are based on symbols and rules, mostly in the form of logic. They are good at encoding explicit knowledge, such as “all cats have tails”. Neural approaches, on the other hand, typically work on raw numbers and use networks of artificial neurons. They are good at learning implicit knowledge, e.g., how to recognize cats in images. All participants of this invitation-only seminar are actively working on combining these two strands of research.
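To make the symbolic style a bit more concrete, here is a minimal sketch in plain Python: facts and rules are just strings, and a small forward-chaining loop derives new facts from old ones. The rule encodes one instance of “all cats have tails”. All names here (`forward_chain`, the fact strings) are my own illustration, not part of any particular logic library.

```python
def forward_chain(facts, rules):
    """Repeatedly apply rules of the form (premise, conclusion)
    until no new facts can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            if premise in derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

facts = {"cat(tom)"}
rules = [("cat(tom)", "has_tail(tom)")]  # an instance of "all cats have tails"
print(forward_chain(facts, rules))  # derives has_tail(tom)
```

A real symbolic system would of course use variables and proper unification instead of ground facts, but the principle of explicitly stated rules plus an inference mechanism is the same.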
Continue reading “Dagstuhl Seminar “Human-Like Neural-Symbolic Computation””
After having covered different topics in my little “What is …?” series, it is now time to put the parts together and explain what my PhD research is all about.
The preliminary title of my dissertation is “Concept Formation in Conceptual Spaces”. As you have probably already guessed, it involves both the area of concept formation and the framework of conceptual spaces.
The basic idea is the following:
Continue reading “My PhD project in a nutshell”
In my PhD project, I do research in the area of “concept formation”. Before talking about my PhD research in more detail, I would like to use this post to give a quick introduction to the area of concept formation.
Continue reading “What is “concept formation”?”
Looking at my posts so far, it seems that a little “What is … ?” series is emerging (“What is AGI?”, “What are conceptual spaces?”). Today I’d like to add another post to this series – this time about the term “machine learning” and about three different types of machine learning algorithms one can distinguish.
As already discussed earlier, “good old-fashioned AI” is based on manually writing rules and having some sort of inference system that applies these rules in a given situation. Machine learning, in contrast, is about discovering such rules automatically from a (usually quite large) number of examples.
One can distinguish three types of machine learning: supervised, unsupervised, and reinforcement learning.
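As a toy illustration of the first two types (in plain Python, with made-up data; the function names are my own): a supervised learner sees labeled examples and predicts labels for new inputs, while an unsupervised learner only sees raw inputs and has to find structure, e.g. clusters, on its own.

```python
def nearest_neighbor(train, query):
    """Supervised: predict the label of the closest labeled example."""
    x, label = min(train, key=lambda pair: abs(pair[0] - query))
    return label

def two_means_1d(points, iters=10):
    """Unsupervised: split unlabeled 1-D points into two clusters
    by alternating assignment and mean updates (a tiny k-means)."""
    lo, hi = min(points), max(points)
    for _ in range(iters):
        a = [p for p in points if abs(p - lo) <= abs(p - hi)]
        b = [p for p in points if abs(p - lo) > abs(p - hi)]
        lo = sum(a) / len(a)
        hi = sum(b) / len(b)
    return sorted(a), sorted(b)

labeled = [(1.0, "cat"), (1.2, "cat"), (8.0, "dog")]
print(nearest_neighbor(labeled, 1.5))          # closest labeled example wins
print(two_means_1d([1.0, 1.2, 1.1, 8.0, 8.3])) # clusters emerge without labels
```

Reinforcement learning does not fit into this tiny sketch, since it additionally needs an environment that hands out rewards over time.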
Continue reading “What is “machine learning”?”
Last week, I participated in this year’s interdisciplinary college (https://www.interdisciplinary-college.de). In the course of this spring school, I was able to meet many other students working on exciting research projects. Through the lectures, I acquired basic knowledge of neuroscience, some ideas about creativity (both from the behavioral/neural viewpoint and from the AI perspective), and many impulses on language grounding in robotics.
I was also able to present my overall PhD research project (“Concept Formation in Conceptual Spaces”) both during the poster session and as a “rainbow course” lecture. On both occasions I got very valuable feedback and stimulating impulses for my further research. I’ve uploaded the respective resources in case you are interested: the PDF file of my poster and the slides of my presentation.
Long story short: it was a great week with a lot of input and impulses. 🙂