A while back, I talked about using InfoGAN networks to learn interpretable dimensions for the shape domain of a conceptual space. As that was already a few months ago, I think it is time for an update: where does my research on this topic stand now?
Staying up to date with current research
Based on Howard’s comment on my last blog post, I will give an overview today of how I try to stay up to date with current research in the areas of AI and Conceptual Spaces. Which conferences, workshops, mailing lists, etc. do I consider relevant?
A summary of 2017
The year is coming to an end, Christmas is around the corner, and reviews of 2017’s events are popping up everywhere. I think this is a nice opportunity to also look back at the year 2017, to summarize what has happened in my academic life, and to speculate a bit about 2018.
Logic Tensor Networks and Conceptual Spaces
In my last blog post, I introduced the general idea of Logic Tensor Networks (or LTNs, for short). Today I would like to talk about how LTNs and conceptual spaces can potentially fit together, and about the concrete strands of research I plan to pursue.
What are “Logic Tensor Networks”?
About half a year ago, I mentioned “Logic Tensor Networks” in my short summary of the Dagstuhl seminar on neural-symbolic computation. I think this is a highly interesting approach, and as I intend to work with it in the future, I will briefly introduce the framework today.