I’ve recently shared my first (and unfortunately rather disappointing) results of applying InfoGAN to simple shapes. Over the past weeks, I’ve continued working on this, and my results are starting to look more promising. Today, I’m going to share the current state of my research.
Based on Howard’s comment on my last blog post, today I will give an overview of how I try to stay up to date with current research in the areas of AI and conceptual spaces: which conferences, workshops, mailing lists, and other resources do I consider relevant?
In my last blog post, I introduced the general idea of Logic Tensor Networks (or LTNs, for short). Today I would like to talk about how LTNs and conceptual spaces can potentially fit together, and about the concrete strands of research I plan to pursue.
A few days ago, I had the chance to attend the workshop “Concept Learning and Reasoning in Conceptual Spaces” in Bochum. Here’s a link to the workshop’s website: CLRCS 2017
It was a really great event with researchers working on conceptual spaces from a wide variety of perspectives, ranging from AI and linguistics through psychology and neuroscience to philosophy. Today, I would like to give a short summary of the workshop for those who were not able to participate but who are nevertheless interested in the topics that were discussed.
It’s nice to have a mathematical definition of concepts in a conceptual space. It’s also nice that we can create new concepts based on old ones, for instance by intersecting them. But being able to talk about the relation between two concepts is certainly also useful. Last time, we talked about the size of a concept. We can use the size of a concept to figure out that the concept “animal” is more general than the concept “Granny Smith”, simply because it is larger.
But there are also other ways of describing the relation of two concepts. Two of them, namely subsethood and implication, will be presented in today’s blog post.
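To make the connection between size and subsethood a bit more concrete, here is a minimal sketch. It assumes a deliberately simplified model: concepts are represented as axis-aligned boxes in the conceptual space, size is the box’s volume, and the degree to which concept A is a subset of concept B is the fraction of A’s volume that lies inside B. The box representation, the formula, and the example regions for “Granny Smith” and “apple” are all illustrative assumptions, not the exact definitions used in this blog series.

```python
# Illustrative sketch (assumed model): concepts as axis-aligned boxes,
# size = volume, subsethood(A, B) = volume(A ∩ B) / volume(A).

def volume(box):
    """Volume of a box given as a list of (lo, hi) intervals, one per dimension."""
    v = 1.0
    for lo, hi in box:
        v *= max(0.0, hi - lo)
    return v

def intersect(a, b):
    """Intersection of two boxes (may have zero volume if they don't overlap)."""
    return [(max(lo1, lo2), min(hi1, hi2))
            for (lo1, hi1), (lo2, hi2) in zip(a, b)]

def subsethood(a, b):
    """Degree in [0, 1] to which concept a is contained in concept b."""
    va = volume(a)
    return volume(intersect(a, b)) / va if va > 0 else 0.0

# Hypothetical 2D example regions (two quality dimensions):
granny_smith = [(0.2, 0.3), (0.6, 0.7)]   # small, specific region
apple        = [(0.1, 0.5), (0.5, 0.9)]   # larger, more general region

print(volume(apple) > volume(granny_smith))   # True: "apple" is more general
print(subsethood(granny_smith, apple))        # 1.0: fully contained in "apple"
```

In this toy model, full containment yields a subsethood degree of 1, and partial overlap yields a value between 0 and 1, which already hints at the graded notion of subsethood discussed in the post.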