Dagstuhl Seminar “Human-Like Neural-Symbolic Computation”

Last week, I had the chance to spend five days at Schloss Dagstuhl learning, talking, and thinking about “Human-Like Neural-Symbolic Computation”.

Usually, one can distinguish two kinds of approaches in artificial intelligence: Symbolic approaches are based on symbols and rules, mostly in the form of logic. They are good at encoding explicit knowledge such as "all cats have tails". Neural approaches, on the other hand, typically work on raw numbers and use networks of artificial neurons. They are good at learning implicit knowledge, e.g., how to recognize cats in an image. All participants of this invitation-only seminar are actively working on combining these two strands of research.

The first part of the seminar consisted of talks about the research conducted by the participants. I was a bit surprised by the heterogeneity of the topics, which ranged from "how do people build and understand ontologies" through "explainable machine learning", "commonsense reasoning meets theorem proving", and "structured computer organization for cognition" to "deep learning with symbols". Only through attending this seminar did I realize how many different problems and topics one can look at when trying to connect neural and symbolic approaches. I was also able to give a talk about my own research topic and received great constructive feedback on it.

In the second part of the week, we then did a "hackathon": we split up into small groups, each of which worked on a different topic. One group looked deeper into the notion of "interpretability" in machine learning, another one compared different cognitive architectures (both symbolic and neural ones), a third one focused on symbols in deep learning, and the group in which I participated implemented a little showcase with the "Logic Tensor Networks" framework.

The "Logic Tensor Networks" (LTN) framework provides a way to connect logical rules ("all cats have tails") with feature spaces (where each observed instance is represented as a point) and to automatically learn a mapping between the two. Each part of a rule (e.g., the predicate "cat") is identified with a region in the feature space, and the degree to which a rule is satisfied becomes a differentiable quantity that can be optimized together with the rest of the model. Working with the LTN framework was very interesting, and I think that it is also highly relevant to conceptual spaces.
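To give a flavor of the underlying idea, here is a minimal, self-contained sketch in PyTorch. It is not the actual LTN library API: the two-dimensional feature space, the predicate networks, and the tiny labeled dataset below are all made up for illustration. The sketch grounds the predicates "cat" and "has_tail" as differentiable truth functions over the feature space and turns the rule "all cats have tails" into a fuzzy-logic term whose satisfaction is maximized by gradient descent.

```python
import torch

torch.manual_seed(0)

class Predicate(torch.nn.Module):
    """Grounds a unary predicate as a fuzzy truth function on the feature space."""
    def __init__(self, dim=2):
        super().__init__()
        self.lin = torch.nn.Linear(dim, 1)

    def forward(self, x):
        # Truth degree in (0, 1) for each point in the feature space.
        return torch.sigmoid(self.lin(x)).squeeze(-1)

cat, has_tail = Predicate(), Predicate()

# Hypothetical observations: 2-D feature vectors with "is a cat" labels.
x = torch.tensor([[0.9, 0.8], [0.8, 0.9], [0.1, 0.2], [0.2, 0.1]])
is_cat = torch.tensor([1.0, 1.0, 0.0, 0.0])

params = list(cat.parameters()) + list(has_tail.parameters())
opt = torch.optim.Adam(params, lr=0.1)

for _ in range(200):
    opt.zero_grad()
    c, t = cat(x), has_tail(x)
    # "forall x: cat(x) -> has_tail(x)" with the Reichenbach implication
    # 1 - a + a*b, aggregated over all observed points by the mean.
    rule_sat = (1 - c + c * t).mean()
    # Jointly fit the labels and maximize rule satisfaction.
    data_fit = torch.nn.functional.binary_cross_entropy(c, is_cat)
    loss = data_fit + (1.0 - rule_sat)
    loss.backward()
    opt.step()

print(f"rule satisfaction after training: {rule_sat.item():.2f}")
```

The real framework generalizes this pattern: arbitrary first-order formulas are compiled into such differentiable satisfaction terms, so symbolic background knowledge and raw data end up shaping the same learned feature space.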

To summarize, I am really glad that I was able to participate in this Dagstuhl seminar: I got a better overview of neural-symbolic approaches, I was able to present my research topic and get feedback on it, I met many bright researchers with very interesting ideas, and I familiarized myself with the LTN framework, which might be useful for my own future research.
