Applying Logic Tensor Networks (Part 1)

In previous blog posts I have already talked about Logic Tensor Networks in general, their relation to Conceptual Spaces, and several additional membership functions that are in line with the Conceptual Spaces framework. As mentioned before, I want to apply them in a “proof of concept” scenario. Today I’m going to sketch this scenario in more detail.

The current scenario consists of a conceptual space for movies. It has been extracted by Derrac and Schockaert [1] from a large collection of movie reviews and published online, freely accessible to everyone. This is quite nice, because we can use an off-the-shelf conceptual space instead of having to define one ourselves.

The data set contains 15,000 movies that have been annotated with 25 genres (e.g., children, horror, news, and thriller). Each movie comes with both a point describing its location in the conceptual space and with a list of genres that apply to it. This means that a movie can belong to multiple genres (e.g., action and thriller) at the same time.
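
To make this setup concrete, here is a minimal sketch of how such a data set can be represented in code. The random values are only placeholders standing in for the real points and annotations; the shapes follow the description above.

```python
import numpy as np

# Minimal sketch of the data layout: X holds one point per movie in the
# conceptual space, Y is a binary multi-label matrix over the 25 genres.
# The random values are just placeholders for the real points and labels.
rng = np.random.default_rng(0)
n_movies, n_dims, n_genres = 15000, 50, 25

X = rng.normal(size=(n_movies, n_dims))                    # positions in the space
Y = (rng.random((n_movies, n_genres)) < 0.1).astype(int)   # genre annotations

# Rows of Y may contain several 1s, since a movie can belong to
# multiple genres (e.g., both "action" and "thriller") at once.
```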

Derrac and Schockaert have extracted four different spaces from their movie review data, having 20, 50, 100, and 200 dimensions, respectively. In each of these spaces, they searched for interpretable directions and created an additional version of the space by projecting all movies onto these directions. So in total, we have eight different conceptual spaces, each containing 15,000 movies.
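
In code, these eight variants can simply be enumerated (a trivial sketch; the version names are my own labels, not taken from the data set):

```python
# The eight space variants: four dimensionalities, each in an original
# version and a version projected onto the interpretable directions.
space_variants = [(dims, version)
                  for dims in (20, 50, 100, 200)
                  for version in ("original", "interpretable")]
print(len(space_variants))  # 8
```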

That’s our data set, but what’s our task?

On the one hand, we want to learn a good classifier for the different genres. This means that we want to be able to predict the relevant genres for a given movie based on its position in the conceptual space.
On the other hand, we also want to extract rules from the data set. Based on the movies seen during training, we want to judge how likely rules such as “action AND crime ⇒ thriller” or “children ⇒ NOT horror” are to be true.

Logic Tensor Networks can in theory provide a solution to both tasks. My goal is now to find out whether they are also able to do so in practice. Moreover, I want to investigate the influence of different membership functions on the results.
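
To illustrate the kind of membership function I have in mind, here is a minimal sketch of one common candidate from the Conceptual Spaces literature: a degree of membership that decays exponentially with the distance to a concept’s prototype. The prototype location and the sensitivity parameter c below are placeholders, not values from the actual experiments.

```python
import numpy as np

def membership(x, prototype, c=1.0):
    """Degree of membership of point x in a concept, decaying
    exponentially with the distance to the concept's prototype
    (a common choice in the Conceptual Spaces literature)."""
    return np.exp(-c * np.linalg.norm(x - prototype))

# Hypothetical example: the prototype location and the parameter c
# are placeholders, not values from the actual experiments.
horror_prototype = np.zeros(50)
some_movie = np.full(50, 0.1)
print(membership(some_movie, horror_prototype))  # a value in (0, 1]
```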

In order to carry out such an analysis, we need to measure the performance of the LTN and compare it to some sort of baseline. For now, I will only introduce the two baselines used for the two tasks. I will give more detail on performance measures in one of my future blog posts.

For the classification task, the baseline consists of a simple “k nearest neighbor” (kNN) classifier. In order to classify a new movie, the kNN classifier looks at its position in the conceptual space and retrieves the k closest movies for which the genres are known. The classification is then done by simply counting how often each genre occurs among these k movies. This classifier is quite simple, but nevertheless a standard tool in machine learning. The LTN should achieve a performance that is at least comparable to a kNN classifier.
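
A minimal sketch of this baseline using scikit-learn is shown below. The random arrays again stand in for the real data, and both the train/test split and the value k = 20 are arbitrary placeholder choices, not tuned settings.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Random stand-ins for the real data, shaped as described above.
rng = np.random.default_rng(0)
X = rng.normal(size=(15000, 50))
Y = (rng.random((15000, 25)) < 0.1).astype(int)

# Arbitrary train/test split; k = 20 is a placeholder, not a tuned value.
X_train, X_test = X[:12000], X[12000:]
Y_train, Y_test = Y[:12000], Y[12000:]

knn = KNeighborsClassifier(n_neighbors=20)
knn.fit(X_train, Y_train)   # scikit-learn's kNN handles multi-label targets

# For each test movie, the predicted genres are those occurring in the
# majority of its 20 nearest training movies.
Y_pred = knn.predict(X_test)
```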

For the rule extraction task, our baseline consists of simple label counting. We completely ignore the conceptual space and only look at the genres: In order to estimate the likelihood of “children ⇒ NOT horror”, we simply compute the percentage of movies labeled as “children” that are not labeled as “horror”. Similarly, for “action AND crime ⇒ thriller” we look at all movies that are labeled as both “action” and “crime” and compute the percentage of these movies that are also labeled as “thriller”. Again, this is a simple yet reasonable baseline which the LTN should be able to beat.
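
The following sketch implements this counting baseline. The genre indices and the random label matrix are again hypothetical placeholders chosen only for this illustration.

```python
import numpy as np

# Random stand-in for the real genre annotations; the genre indices
# below are hypothetical, chosen only for this illustration.
rng = np.random.default_rng(0)
Y = (rng.random((15000, 25)) < 0.1).astype(int)
CHILDREN, HORROR, ACTION, CRIME, THRILLER = 0, 1, 2, 3, 4

def rule_likelihood(Y, antecedent, consequent, negated=False):
    """Fraction of movies carrying all antecedent genres that also
    (or, if negated, do not) carry the consequent genre."""
    mask = Y[:, antecedent].all(axis=1)   # movies matching the antecedent
    outcome = Y[mask, consequent]
    return (1 - outcome).mean() if negated else outcome.mean()

print(rule_likelihood(Y, [CHILDREN], HORROR, negated=True))  # children => NOT horror
print(rule_likelihood(Y, [ACTION, CRIME], THRILLER))         # action AND crime => thriller
```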

This concludes the general overview of my “proof of concept” scenario. In one of my next blog posts, I will elaborate on ways to evaluate and compare the performance of the different approaches with respect to the two tasks sketched above.

References

[1] Derrac, Joaquín, and Steven Schockaert. “Inducing semantic relations from conceptual spaces: A data-driven approach to plausible reasoning.” Artificial Intelligence 228 (2015): 66–94.
