A while back, I talked about using InfoGAN networks to learn interpretable dimensions for the shape domain of a conceptual space. As that was already a few months ago, I think it is now time for an update. Where do I stand with my research on this topic?
Based on Howard’s comment on my last blog post, I will today give an overview of how I try to stay up to date with current research in the AI and Conceptual Spaces area. Which conferences, workshops, mailing lists, etc. do I consider relevant?
The year is coming to an end, Christmas is around the corner, and reviews of 2017’s events are popping up everywhere. I think this is a nice opportunity to also look back at the year 2017, to summarize what has happened in my academic life, and to speculate a bit about 2018.
In my last blog post, I introduced the general idea of Logic Tensor Networks (or LTNs, for short). Today I would like to talk about how LTNs and conceptual spaces can potentially fit together, and about the concrete strands of research I plan to pursue.
It’s nice to have a mathematical definition of concepts in a conceptual space. It’s also nice that we can create new concepts based on old ones, for instance by intersecting them. But being able to talk about the relation between two concepts is certainly also useful. Last time, we talked about the size of a concept. We can use the size of a concept to figure out that the concept of “animal” is more general than the concept of “Granny Smith” – simply because it is larger.
But there are also other ways of describing the relation of two concepts. Two of them, namely subsethood and implication, will be presented in today’s blog post.
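To make the idea of subsethood a bit more concrete, here is a minimal sketch. It assumes a deliberately simplified model in which concepts are axis-aligned boxes in the conceptual space (the actual formalization in the blog series is richer); subsethood is then measured as the ratio size(A ∩ B) / size(A), a standard fuzzy-set-style degree of containment. All function names and the example regions are hypothetical illustrations:

```python
# Illustrative sketch only: concepts modeled as axis-aligned boxes,
# each box a list of (min, max) intervals, one per dimension.

def size(box):
    """Volume of an axis-aligned box."""
    volume = 1.0
    for lo, hi in box:
        volume *= max(0.0, hi - lo)
    return volume

def intersect(a, b):
    """Intersection of two boxes (may have zero volume if disjoint)."""
    return [(max(lo1, lo2), min(hi1, hi2))
            for (lo1, hi1), (lo2, hi2) in zip(a, b)]

def subsethood(a, b):
    """Degree to which concept a is contained in concept b, in [0, 1]."""
    s = size(a)
    return size(intersect(a, b)) / s if s > 0 else 0.0

# Hypothetical two-dimensional example regions:
granny_smith = [(0.2, 0.3), (0.6, 0.7)]   # small, specific concept
apple        = [(0.1, 0.5), (0.5, 0.9)]   # larger, more general concept

print(subsethood(granny_smith, apple))  # 1.0: fully contained
print(subsethood(apple, granny_smith))  # 0.0625: only partly contained
```

The asymmetry of the measure is what makes it useful: “Granny Smith” lies entirely inside “apple”, but not the other way around, which matches the intuition that the smaller concept is the more specific one.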