A while back, I talked about using InfoGANs to learn interpretable dimensions for the shape domain of a conceptual space. As that was already a few months ago, I think it is now time for an update. Where do I stand with my research on this topic?
In my last blog post, I introduced the general idea of Logic Tensor Networks (or LTNs, for short). Today I would like to talk about how LTNs and conceptual spaces can potentially fit together, and about the concrete strands of research I plan to pursue.
About half a year ago, I mentioned “Logic Tensor Networks” in my short summary of the Dagstuhl seminar on neural-symbolic computation. I think that this is a highly interesting approach, and as I intend to work with it in the future, I will briefly introduce this framework today.
Looking at my posts so far, it seems that a little “What is … ?” series is emerging (“What is AGI?”, “What are conceptual spaces?”). Today I’d like to add another post to this series – this time about the term “machine learning” and about three different types of machine learning algorithms one can distinguish.
As already discussed earlier, “good old fashioned AI” is based on manually writing rules and on some sort of inference system that applies these rules in a given situation. Machine learning, in contrast, is about automatically discovering such rules from a (usually quite large) number of examples.
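To make this contrast concrete, here is a minimal sketch (all names and the toy data are my own, hypothetical illustrations, not from the post): a hand-written rule next to the same kind of rule whose threshold is derived from labeled examples.

```python
# "Good old fashioned AI": a human expert writes the rule directly.
def is_tall_handwritten(height_cm):
    return height_cm > 180  # threshold chosen by hand


# Machine learning: derive the same kind of rule from examples.
def learn_threshold(examples):
    """examples: list of (height_cm, is_tall) pairs."""
    tall = [h for h, label in examples if label]
    short = [h for h, label in examples if not label]
    # Place the decision threshold midway between the two groups.
    return (min(tall) + max(short)) / 2


data = [(170, False), (175, False), (185, True), (190, True)]
threshold = learn_threshold(data)


def is_tall_learned(height_cm):
    return height_cm > threshold
```

With this toy data the learned threshold happens to coincide with the hand-written one, but the important difference is where it comes from: the expert's head versus the examples.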
One can distinguish three types of machine learning: supervised, unsupervised, and semi-supervised learning.
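The three settings differ mainly in what the training data looks like. A small sketch with made-up toy data (the feature vectors and labels below are purely illustrative):

```python
# Supervised: every example comes with a label.
supervised_data = [([1.0, 2.0], "cat"), ([3.0, 4.0], "dog")]

# Unsupervised: only inputs, no labels; the task is to find structure
# in the data itself (e.g., clusters).
unsupervised_data = [[1.0, 2.0], [3.0, 4.0], [1.1, 2.1]]

# Semi-supervised: a few labeled examples plus many unlabeled ones
# (label None marks an unlabeled example).
semi_supervised_data = [
    ([1.0, 2.0], "cat"),
    ([3.0, 4.0], None),
    ([1.1, 2.1], None),
]
```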