Logic Tensor Networks and Conceptual Spaces

In my last blog post, I introduced the general idea of Logic Tensor Networks (or LTNs, for short). Today I would like to talk about how LTNs and conceptual spaces can potentially fit together and about the concrete strands of research I plan to pursue.

First of all, a quick recap: LTNs are neural networks that can learn membership functions for concepts, based both on labeled examples and on abstract rules. Once these membership functions have been learned, an LTN can be queried both for the membership values of new points and for the degree to which other logical formulas are fulfilled.
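To make this a bit more concrete, here is a minimal numpy sketch of how an LTN-style grounding of a unary predicate could look: a small neural tensor network that maps a point in the feature space to a membership value between 0 and 1. All names and parameter shapes here are illustrative assumptions for this sketch, not the actual LTN implementation:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class PredicateGrounding:
    """Toy LTN-style grounding of a unary predicate: maps a feature
    vector x to a membership value in (0, 1). The parameter shapes
    follow the neural-tensor-network form; all names are illustrative."""

    def __init__(self, n_features, k=4, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(size=(k, n_features, n_features))  # bilinear part
        self.V = rng.normal(size=(k, n_features))              # linear part
        self.b = rng.normal(size=k)                            # bias
        self.u = rng.normal(size=k)                            # output weights

    def membership(self, x):
        # bilinear term x^T W[i] x for each of the k tensor slices
        bilinear = np.einsum('i,kij,j->k', x, self.W, x)
        hidden = np.tanh(bilinear + self.V @ x + self.b)
        return sigmoid(self.u @ hidden)

# Query the (untrained) membership of a point in a 3-dimensional space.
apple = PredicateGrounding(n_features=3)
print(apple.membership(np.array([0.9, 0.2, 0.4])))  # some value in (0, 1)
```

During training, the parameters W, V, b, and u would be adjusted so that the resulting membership values agree with the labeled examples and the logical rules.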

Remember that we can interpret a conceptual space as a relatively low-dimensional feature space with interpretable dimensions. As LTNs are in principle capable of dealing with any kind of underlying feature space, we can also apply them to a conceptual space.

LTNs can help us to bridge the gap between the conceptual layer and the symbolic layer: They can learn about regions in the conceptual space and map them to symbols on the symbolic layer. Moreover (and I think this is the really nice thing about LTNs), they also give us a mechanism for dealing with abstract rules (like apple ∧ red ⇒ sweet): we can use such rules as additional constraints during learning, and we can automatically derive new rules from our data set. So LTNs provide something like a two-way connection between the conceptual and the symbolic layer: Based on points in the conceptual space, we can learn concepts and rules, but in this learning process we can also incorporate top-down information from the symbolic layer in the form of other rules.
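To illustrate how such a rule can enter the learning process, here is a tiny sketch using Łukasiewicz fuzzy connectives. LTNs can be instantiated with different t-norms, so treat this particular choice (and the membership values) as assumptions of the sketch:

```python
import numpy as np

# Łukasiewicz fuzzy connectives (one common choice of t-norm).
def fuzzy_and(a, b):
    return np.maximum(0.0, a + b - 1.0)

def fuzzy_implies(a, b):
    return np.minimum(1.0, 1.0 - a + b)

# Hypothetical membership values of one point for three concepts,
# as they might be produced by learned groundings.
mu_apple, mu_red, mu_sweet = 0.9, 0.8, 0.3

# Truth degree of the rule "apple AND red IMPLIES sweet" at this point.
truth = fuzzy_implies(fuzzy_and(mu_apple, mu_red), mu_sweet)

# As a soft constraint during learning: the less true the rule,
# the larger the penalty added to the training loss.
rule_penalty = 1.0 - truth
print(truth, rule_penalty)  # 0.6 and 0.4 for these values
```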

Please note that LTNs follow a batch-learning approach: You need a (relatively large) set of data points over which the optimization algorithm can iterate for quite some time in order to adjust the weights of the underlying neural network. As I stated earlier, my main interest lies in learning concepts incrementally. Nevertheless, for the reasons listed above, I think that LTNs are quite an interesting approach and that their combination with conceptual spaces should definitely be explored.

So what are my concrete plans? Well, there is a short-term and a long-term perspective:

In the short term, I will try to create a “proof of concept” application of LTNs to conceptual spaces. Together with a colleague of mine, I want to learn the concepts of different movie genres on a conceptual space for movies. This similarity space will be created using techniques like multi-dimensional scaling. Multi-dimensional scaling (MDS) takes as input a set of objects and, for each pair of objects, a number indicating their semantic distance. The MDS algorithm then tries to find a point in the space for each object, such that the distances between these points accurately reflect the original semantic distances. In our case, the semantic distances will be based on data from an online movie database, where users can rate and tag movies. Our goal is to see whether LTNs can help us derive symbolic rules about the genres and tags (for instance, Children’s movie ⇒ NOT horror movie).
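As a rough illustration of this step (with made-up dissimilarities instead of the real movie data), one could feed a precomputed distance matrix to scikit-learn’s MDS implementation like this:

```python
import numpy as np
from sklearn.manifold import MDS

# Toy symmetric dissimilarity matrix for four movies; in the actual
# experiment these numbers would be derived from user ratings and tags.
movies = ["Movie A", "Movie B", "Movie C", "Movie D"]
dissimilarities = np.array([
    [0.0, 0.2, 0.8, 0.7],
    [0.2, 0.0, 0.9, 0.6],
    [0.8, 0.9, 0.0, 0.3],
    [0.7, 0.6, 0.3, 0.0],
])

# 'precomputed' tells scikit-learn to embed the given distances directly
# instead of computing them from feature vectors.
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
points = mds.fit_transform(dissimilarities)

for name, point in zip(movies, points):
    print(name, point)
```

The resulting points would then serve as the conceptual space on which the LTN can learn the genre concepts.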

If the short-term plans work out nicely, I would like to adapt the LTN framework in the long term to my proposed formalization of conceptual spaces. This means that the LTN should not learn an arbitrary membership function, but should learn concepts as specified in my formalization. The clear advantage of this would be that you could use such a modified LTN to learn concepts and then use my formalization to perform operations on these concepts. LTNs currently assume that the Euclidean metric is used everywhere in the feature space. In the conceptual spaces framework, however, both the Euclidean and the Manhattan metric are used: typically the Euclidean metric within a single domain and the Manhattan metric for combining distances across domains. This is one of the points where LTNs might need to be adapted in order to work with the conceptual spaces framework.
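To make the metric issue concrete, here is a small sketch of the combined metric typically used in the conceptual spaces literature (Euclidean within a domain, Manhattan-style summation across domains); the domain structure and weights below are made up for illustration:

```python
import numpy as np

def combined_distance(x, y, domains, weights=None):
    """Distance in a conceptual space: Euclidean within each domain,
    combined across domains as a weighted sum (Manhattan-style)."""
    weights = weights if weights is not None else [1.0] * len(domains)
    total = 0.0
    for w, dims in zip(weights, domains):
        total += w * np.linalg.norm(x[dims] - y[dims])  # Euclidean inside a domain
    return total

x = np.array([0.1, 0.4, 0.9, 0.2])
y = np.array([0.3, 0.1, 0.5, 0.6])
# e.g. dimensions 0-2 form a color domain and dimension 3 a taste domain
print(combined_distance(x, y, domains=[[0, 1, 2], [3]]))
```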

Currently, I’m only starting to work on the short-term plan, so it might be some time until I revisit this topic and present some results. But I’m confident that I’ll be able to make good progress, and I will certainly post results when they are available.
