A few days ago, I had the chance to attend the workshop “Concept Learning and Reasoning in Conceptual Spaces” in Bochum. Here’s a link to the workshop’s website: CLRCS 2017
It was a really great event with researchers working on conceptual spaces from a wide variety of perspectives, ranging from AI and linguistics through psychology and neuroscience to philosophy. Today, I would like to give a short summary of the workshop for those who were not able to participate but who are nevertheless interested in the topics that were discussed.
First, I'll follow the chronological order of the presentations and briefly summarize what each of them was about (at least as I understood it). Afterwards, I'll sum up the input I've received for my own research. So let's start:
Martha Lewis (Amsterdam): Compositionality in Conceptual Spaces
One of the core problems in linguistics is assigning meanings to sentences. Martha presented her approach of using category theory to establish such a mapping. If we simplify things a bit, her approach basically works as follows: The meaning of each noun (like "clown" or "joke") is represented as a vector in some noun space (which can be either a space obtained by tools like word2vec, or a conceptual space). The meaning of a complete sentence is represented as a vector in some sentence space. Each verb (like "tell") can then be represented as a tensor (something like a three-dimensional generalization of a matrix). In order to get the meaning of a sentence like "Clowns tell jokes", you simply multiply the vector for "clown" with the tensor for "tell" and the vector for "joke" (in this order), and the result will be a vector in the sentence space – the meaning of this sentence. It took me some time to wrap my head around this whole idea because there's some abstract math involved, but now I think it's quite an interesting approach.
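To make the multiplication step concrete, here is a minimal numpy sketch. The dimensionalities are toy values and the random vectors and tensor are mere stand-ins; in the actual framework, these representations would be learned from data.

```python
import numpy as np

# Toy dimensionalities, chosen only for illustration:
# nouns live in a 4-dim noun space, sentences in a 2-dim sentence space.
NOUN_DIM, SENT_DIM = 4, 2
rng = np.random.default_rng(0)

# Noun meanings: vectors in the noun space (random stand-ins here).
clown = rng.random(NOUN_DIM)
joke = rng.random(NOUN_DIM)

# A transitive verb is a tensor with one index for the subject,
# one for the sentence space, and one for the object.
tell = rng.random((NOUN_DIM, SENT_DIM, NOUN_DIM))

# "Clowns tell jokes": contract the subject into the first index and the
# object into the last index, leaving a vector in the sentence space.
sentence_meaning = np.einsum('i,isj,j->s', clown, tell, joke)
print(sentence_meaning.shape)  # (2,) -- the meaning of the sentence
```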
Lucas Bechberger (Osnabrück): Conceptual Spaces for Artificial Intelligence: Formalization, Domain Grounding, and Concept Formation
You already know what my research is about, so I’ll keep this one short. Basically, I’ve presented a summarized version of all the papers I’ve published so far, including an outlook for the remainder of my PhD project. I’ve uploaded both the abstract of my talk and the slides I used, so you can take a look if you like.
Peter Brössel (Bochum): From Perception to Belief and Back Again
Peter's research group was hosting this workshop. In his talk, he outlined his current research project, which is concerned with the interaction of perception and beliefs. Traditionally in philosophy, one assumes a three-stage pipeline from "perceptual experience" via "perceptual belief" to "hypotheses". There have been debates about whether the content of the first stage (perceptual experience) is purely conceptual, purely non-conceptual, or both. Peter argues that conceptual spaces can accommodate a mixed view: Non-conceptual content corresponds to points in a similarity space, whereas conceptual content can be expressed by regions in this similarity space, i.e., concepts. He furthermore presented some ideas about concept learning through Bayesian approaches as well as the role of top-down feedback.
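To illustrate the distinction, here is a tiny sketch of one way to model it; the two-dimensional space, the prototype, and the radius are invented stand-ins, not Peter's actual formalization.

```python
import numpy as np

# Non-conceptual content: a point in a similarity space.
# Conceptual content: a region, modeled here as a ball around a prototype.

def falls_under(point, prototype, radius):
    """Does the percept (a point) lie inside the concept's region?"""
    return np.linalg.norm(point - prototype) <= radius

percept = np.array([0.9, 0.1])            # a perceptual experience as a point
red_region = (np.array([1.0, 0.0]), 0.3)  # (prototype, radius) for "red"

print(falls_under(percept, *red_region))  # True: the percept falls under "red"
```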
Nina Poth (Edinburgh): Concept Learning in Conceptual Spaces: A Hybrid Computational Perspective
Nina wrote her Master's thesis with Peter Brössel on a Bayesian approach to concept learning, and she is now continuing this research in her PhD. After listing some developmental criteria for an account of concept learning (e.g., being able to learn from few examples), she criticized symbolic approaches to concept learning: Concept learning in this setting corresponds to testing hypotheses which are in turn based on preexisting concepts – in order to learn concepts, you already need to have them. Conceptual spaces can help to break out of this circularity because the hypotheses can be expressed as regions in the conceptual space. This way, learning a new concept can be framed as figuring out which region in a conceptual space to use for defining the concept. The "size principle" she introduced states that learners prefer smaller regions to larger ones; it helps to fulfill some of the developmental criteria. Her work is obviously related to my own research and I look forward to reading and hearing more about her progress.
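The size principle can be made concrete with a small Bayesian sketch. The region sizes and priors below are invented, and I'm assuming the "strong sampling" setup in which examples are drawn uniformly from the true region, so a region of size s assigns each consistent example a likelihood of 1/s.

```python
# Two hypothesis regions, both consistent with all observed examples;
# only their sizes differ. Under strong sampling, each example has
# likelihood 1/size, so the smaller region wins more clearly with
# every additional example.

sizes = {'small region': 1.0, 'large region': 4.0}
prior = {'small region': 0.5, 'large region': 0.5}

def posterior(n_examples):
    unnorm = {h: prior[h] * (1.0 / s) ** n_examples for h, s in sizes.items()}
    z = sum(unnorm.values())
    return {h: round(p / z, 3) for h, p in unnorm.items()}

print(posterior(1))  # {'small region': 0.8, 'large region': 0.2}
print(posterior(3))  # {'small region': 0.985, 'large region': 0.015}
```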
Mara Floris (Turin): Concepts and Labels
In her talk, Mara presented two psychological studies conducted with children that illustrate the influence of labels on concepts. On the one hand, there is the so-called "grouping effect": If objects that are perceptually dissimilar are always accompanied by the same label, humans end up categorizing them under the same concept. Without the label information, they usually consider them as separate concepts. On the other hand, there is the so-called "segregation effect": If objects that are perceptually very similar are accompanied by two different labels, humans are able to form two distinct concepts for them. If no labels are present, only a single concept is formed. Relating this to conceptual spaces, there are two possible explanations for the influence of labels on the learning process: Either labels are used as additional features (i.e., the presence/absence of a given label is interpreted as an additional dimension of the conceptual space) or they influence the perception of similarity (e.g., by influencing the salience weights of the different domains and dimensions). In my opinion, this is quite related to the top-down feedback considered by Peter Brössel.
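Here is a toy sketch of both explanations; the encoding and all the numbers are my own invention, meant only to show the mechanics.

```python
import numpy as np

def weighted_distance(x, y, weights):
    # Salience-weighted Euclidean distance, as commonly used in
    # conceptual spaces.
    return np.sqrt(np.sum(weights * (x - y) ** 2))

a = np.array([0.2, 0.5])  # two objects that differ on the first dimension
b = np.array([0.9, 0.5])

# (1) Label as an additional feature: a shared label leaves the distance
# between a and b unchanged, while contrasting labels add distance
# (segregation effect).
same_label = np.linalg.norm(np.append(a, 1.0) - np.append(b, 1.0))
diff_label = np.linalg.norm(np.append(a, 1.0) - np.append(b, 0.0))
print(same_label, diff_label)  # diff_label > same_label

# (2) Label as salience modulation: a shared label could down-weight the
# dimension on which the objects differ, making them appear more similar
# (grouping effect).
print(weighted_distance(a, b, np.array([0.5, 0.5])),   # default weights
      weighted_distance(a, b, np.array([0.1, 0.9])))   # differing dim down-weighted
```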
Peter Gärdenfors (Lund): An Epigenetic Approach to Semantic Categories
In his talk, Peter focused on the origin of the different domains in conceptual spaces. This is one of the core questions with respect to conceptual spaces that needs to be answered: Where do the domains and dimensions come from? Looking at the core knowledge systems of space, objects, actions, and numbers, Peter argued that all of them are characterized by the set of invariances they exhibit and that these invariances lead to a significant dimensionality reduction. For instance, the location of an object and its properties (e.g., its color) are usually independent of each other. Moreover, these core knowledge systems can be related to important word classes – space is expressed by prepositions, objects by nouns, actions by verbs, and numbers by quantifiers. I think this question about the origin of domains and dimensions is a very important one, and Peter's research on the psychological aspects is, in my opinion, nicely complemented by my machine learning approach to it.
Yasmina Jraissati (Beirut): Color categorization: the “local” vs. “global” contrast
In many articles about conceptual spaces, the color domain is used as a prime example. It is intuitively easy to grasp, can be easily visualized, is reasonably well understood, and concepts in it can be easily defined. In her talk, Yasmina highlighted that this does not hold true for other perceptual spaces, like for instance odor. There, no general "odor terms" exist – we usually characterize odors by words like "coffee smell" or "burnt" that refer to certain types of objects and processes. Color terms like "blue", however, are terms in their own right. Such terms do also exist for odor, but only within a certain narrow context. For instance, there are ways of conceptually organizing and labeling the odors of cheese, of wine, or of perfume. These odor terms are, however, highly context-dependent. Yasmina argued that this might also be the case for other perceptual domains, such that the color space might not be a good example to generalize from.
Jon Carr (Edinburgh): Convexity and Expressivity in the Simplicity–Informativeness Tradeoff
Jon presented a number of experiments illustrating that language and its concepts are shaped by two pressures: The learning pressure (a language should be simple and easy to learn) and the communication pressure (the terms and concepts of a language should be fine-grained enough to allow for informative communication). The experimental results he presented indicate that these two pressures can result in a set of convex concepts – which gives some further support for the convexity assumption in conceptual spaces.
Brendan Ritchie (Leuven): Conceptual Spaces, Representational Similarity Analysis, and the Connection Between Mind and Brain
Typically, similarity spaces (and thus also conceptual spaces) are extracted from the similarity judgments made by participants in psychological experiments. However, given that we nowadays have fMRI, EEG, and the like at hand, the question arises whether such similarity spaces can also be extracted based on the similarity of neural activities. In his talk, Brendan analyzed two different approaches for extracting conceptual spaces from neural data and discussed their strengths and weaknesses. He especially urged caution about such dimensional interpretations: "Just because you can characterize your results with dimensions doesn't mean that there are dimensions in the brain!"
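For readers unfamiliar with representational similarity analysis, here is a minimal sketch with made-up data; it shows the general second-order comparison behind such analyses, not necessarily either of the specific approaches Brendan discussed.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_stimuli = 8

# Made-up data: coordinates from behavioral similarity judgments
# (e.g., via multidimensional scaling) and neural response patterns
# (e.g., voxel activations per stimulus).
behavioral_coords = rng.random((n_stimuli, 2))
neural_patterns = rng.random((n_stimuli, 50))

# Representational dissimilarity structures: pairwise distances between
# all stimuli, in condensed form.
rdm_behavior = pdist(behavioral_coords, metric='euclidean')
rdm_neural = pdist(neural_patterns, metric='correlation')

# Second-order comparison: how well does the neural similarity structure
# match the behavioral one?
rho, p = spearmanr(rdm_behavior, rdm_neural)
print(rho, p)
```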
Michael Sienhold (Bern): Why Conceptual Processes are Inherently Experiential and Dynamic
Concepts are typically thought of as something static – a structure that stays pretty much unchanged over time. In symbolic approaches, one also assumes that concepts are amodal, i.e., that they do not involve any direct sensory experience. In his talk, Michael challenged both views: In his opinion, conceptualization is a dynamic process that is inherently tied to sensory experience. Whenever you think about a concept such as "apple", you re-enact (i.e., simulate) prior experiences with example instances of this concept. By interpreting the results of two psychological experiments in the light of this hypothesis, he sparked a lively discussion about the topic.
Marta Sznajder (Prague): Inductive Reasoning as Density Estimation with Conceptual Spaces
In her talk, Marta analyzed the problem of making predictions about the future occurrence of labels in a given environment. She used the example of trying to predict the color of ladybugs observed in a certain environment. In the finite case (three labels, namely "red", "orange", and "yellow", and three hypotheses about their distribution in the environment), this prediction problem can be easily solved by using a Bayesian approach. By introducing so-called Dirichlet processes, Marta generalized from this finite case to the infinite case, where each label corresponds to a point in a conceptual space and where infinitely many hypotheses (i.e., probability distributions over labels) are allowed. She did a really great job in providing an intuitive grasp of this relatively abstract topic, and I think her research is highly relevant to Nina's Bayesian approach.
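The finite case can be sketched in a few lines; the three hypothesized color distributions below are invented for illustration. The Dirichlet process then generalizes exactly this computation to infinitely many hypotheses.

```python
import math

# Each hypothesis is a probability distribution over the three labels
# (the specific numbers are made up).
hypotheses = {
    'mostly red':    {'red': 0.8, 'orange': 0.1, 'yellow': 0.1},
    'mostly orange': {'red': 0.1, 'orange': 0.8, 'yellow': 0.1},
    'uniform':       {'red': 1/3, 'orange': 1/3, 'yellow': 1/3},
}
prior = {h: 1/3 for h in hypotheses}
observations = ['red', 'red', 'orange']

# Posterior over hypotheses via Bayes' rule.
unnorm = {h: prior[h] * math.prod(dist[o] for o in observations)
          for h, dist in hypotheses.items()}
z = sum(unnorm.values())
posterior = {h: p / z for h, p in unnorm.items()}

# Posterior predictive: probability that the next ladybug is red.
p_next_red = sum(posterior[h] * hypotheses[h]['red'] for h in hypotheses)
print(posterior, p_next_red)
```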
Igor Douven (Paris): The Three-Step Model of Categorisation
The last talk was given by Igor Douven, who presented a three-step approach for obtaining concepts in a conceptual space. In a first step, a preliminary partitioning of the conceptual space is obtained starting from a certain number of "design principles". These design principles describe cognitive pressures that influence the representation of concepts, including for instance memory-friendliness and good learnability. In a second step, prototypical regions within the obtained concepts are defined based on a trade-off between representativeness (e.g., being close to the region's centroid) and contrast (i.e., being well-distinguishable from prototypical regions of other concepts). Using a variant of Voronoi tessellations, these prototypical regions can then in a third step give rise to imprecise concept boundaries. All of this sounded very interesting and was supplemented by two detailed examples. I'm really looking forward to reading more about this research!
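As a rough illustration of the third step, here is a simplified sketch of Voronoi-style categorization with imprecise boundaries; the prototypes and the margin parameter are invented stand-ins, not Igor's actual construction (which uses prototypical regions rather than points).

```python
import numpy as np

# Each concept is represented by a prototypical point; an item is
# assigned to the nearest prototype, but counts as a boundary case
# when the two nearest prototypes are almost equally close.
prototypes = {'red': np.array([1.0, 0.0]),
              'orange': np.array([0.7, 0.5]),
              'yellow': np.array([0.2, 1.0])}

def categorize(point, margin=0.1):
    dists = sorted((np.linalg.norm(point - p), name)
                   for name, p in prototypes.items())
    (d1, best), (d2, _) = dists[0], dists[1]
    # Nearly tied distances put the point into the imprecise boundary
    # region between two concepts.
    return best if d2 - d1 > margin else 'boundary case'

print(categorize(np.array([0.95, 0.05])))  # clearly "red"
print(categorize(np.array([0.85, 0.25])))  # between "red" and "orange"
```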
Summing up
I know that this post has already become quite long, so I’ll try to keep this section short and to the point.
The workshop was definitely a great event and I learned a lot about other applications of conceptual spaces in a variety of fields. I’ve also received valuable feedback on my own research, both about concrete next steps (with respect to learning domains with neural networks) and about the way that I need to tell my story (e.g., making clearer why I am trying to define concepts on the overall space and not only on individual domains, and emphasizing that I am using star-shaped concepts not because of any psychological evidence but because it is more convenient for my clustering approach).
I’m already looking forward to the next workshop on conceptual spaces! 🙂
Thanks, Lucas, for this summary! Very helpful for those of us who were not present. A few brief comments:
(1) Martha Lewis: I noticed connections to Plate and Kanerva’s work on Vector Symbolic Representations and hypervectors. While using Word2Vec as a utility for generating word vectors might be convenient, I would be cautious about using Word2Vec as a substitute for, say, a more geometric approach for generating vectors in a conceptual space. There’s a huge difference between how Word2Vec artificially creates a vector from associations between words, and how, say, a SOM creates a vector.
(2) Peter Gärdenfors: The idea that properties tend to be independent of each other is a very important idea. It is a fundamental idea, IMHO. Independent properties have the capability of forming a basis set for a conceptual space. In linear algebra, basis vectors that span an N-dimensional space can, when superimposed, sum to any vector in that space. Why is this important? It provides a means by which the brain can form any complex or original thought from a finite set of basis properties (basis vectors). Further research required.
(3) The experimental ideas of Ritchie and Sienhold seem to show potential. I'm looking forward to reading about their work and the other contributions!
Keep up the good work, Lucas.
You’re welcome 🙂
A quick response to word2vec: Yes, and Martha of course argued that a conceptual space is definitely preferable to a word2vec-like space (because these word2vec-like spaces don't have any real grounding in perception and action – as you said, they basically only capture statistical associations between words).