The year is coming to an end, Christmas is around the corner, and reviews of 2017’s events are popping up everywhere. I think this is a nice opportunity to also look back at the year 2017, to summarize what has happened in my academic life, and to speculate a bit about 2018.
In my last blog post, I introduced the general idea of Logic Tensor Networks (or LTNs, for short). Today I would like to talk about how LTNs and conceptual spaces can potentially fit together, and about the concrete strands of research I plan to pursue.
About half a year ago, I mentioned “Logic Tensor Networks” in my short summary of the Dagstuhl seminar on neural-symbolic computation. I think that this is a highly interesting approach, and as I intend to work with it in the future, I will briefly introduce this framework today.
A few days ago, I had the chance to attend the workshop “Concept Learning and Reasoning in Conceptual Spaces” in Bochum. Here’s a link to the workshop’s website: CLRCS 2017
It was a really great event with researchers working on conceptual spaces from a wide variety of perspectives, ranging from AI and linguistics through psychology and neuroscience to philosophy. Today, I would like to give a short summary of the workshop for those who were not able to participate but who are nevertheless interested in the topics that were discussed.
It’s nice to have a mathematical definition of concepts in a conceptual space. It’s also nice that we can create new concepts based on old ones, for instance by intersecting them. But being able to talk about the relation of two concepts is certainly also useful. Last time, we talked about the size of a concept. We can use the size of a concept to figure out that the concept of “animal” is more general than the concept of “Granny Smith” – simply because it is larger.
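As a toy illustration of this size-based comparison (a minimal sketch, not the actual formalism used in these posts): if we assume, purely for this example, that concepts are modeled as axis-aligned boxes in a conceptual space, then the “size” of a concept can be taken as the box’s volume, and the larger concept counts as the more general one.

```python
# Hypothetical toy model: a concept as an axis-aligned box in a
# conceptual space, with one (low, high) interval per dimension.
from dataclasses import dataclass
from math import prod


@dataclass
class BoxConcept:
    intervals: list  # list of (low, high) tuples, one per dimension

    def size(self) -> float:
        # "Size" of the concept = volume of the box.
        return prod(high - low for low, high in self.intervals)


# Made-up two-dimensional example: "animal" covers the whole space,
# "Granny Smith" only a small region.
animal = BoxConcept([(0.0, 1.0), (0.0, 1.0)])
granny_smith = BoxConcept([(0.3, 0.4), (0.2, 0.3)])

# The larger (more voluminous) concept is the more general one.
print(animal.size() > granny_smith.size())  # → True
```

The dimensions and interval values here are invented just to make the comparison concrete; real conceptual spaces use psychologically motivated quality dimensions.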
But there are also other ways of describing the relation between two concepts. Two of them, namely subsethood and implication, will be presented in today’s blog post.