Category: Machine learning

A while ago, I introduced Logic Tensor Networks (LTNs) and argued that they apply nicely to a conceptual spaces scenario. In one of my recent posts, I described how to ensure that an LTN can only learn convex concepts. Today, I will take this one step further by introducing additional ways of defining an LTN’s membership function.
Some progress with InfoGAN
I recently shared my first (and, unfortunately, rather disappointing) results of applying InfoGAN [1] to simple shapes. Over the past weeks, I have continued working on this, and my results are starting to look more promising. Today, I’m going to share the current state of my research.
Extending Logic Tensor Networks (Part 1)
I have already talked about Logic Tensor Networks (LTNs for short) in the past (see here and here), and I announced that I would be working with them. Today, I will share my first steps in modifying and extending the framework. More specifically, I will talk about a problem with the original membership function and how I solved it.
First steps with InfoGAN
A while back, I talked about using InfoGAN networks to learn interpretable dimensions for the shape domain of a conceptual space. As that was already a few months ago, I think it is time for an update. Where does my research on this topic currently stand?
Logic Tensor Networks and Conceptual Spaces
In my last blog post, I introduced the general idea of Logic Tensor Networks (or LTNs, for short). Today I would like to talk about how LTNs and conceptual spaces can potentially fit together, and about the concrete strands of research I plan to pursue.