SPEAKER
All right, let's go ahead and get started. It's my pleasure to welcome Morgan Barense today. She's a professor and Canada Research Chair at the University of Toronto and the Rotman Research Institute. She originally was in the Boston area, receiving her bachelor's degree from Harvard before going to Cambridge in England to do her PhD. She remained there for postdoctoral work on a Peterhouse research fellowship and then joined the faculty at the University of Toronto in 2009. She's since been promoted to associate and, just this year, full professor, and she also directs the Toronto Neuroimaging facility. Her training is really quite broad, and that's one of the things I thought made her a really good fit to bring in. She did some of her earliest work in animal neuroscience, has combined that with human neuropsychology, fMRI, and cognitive psychology, and brings those approaches together to study memory. I've known Morgan for a number of years now through different conferences, and I really enjoy the way she approaches the intersection of topics in very novel ways, challenging whether some of what we typically thought of as separate domains are really so separate after all. Her work has been recognized by a number of awards: the Canada Research Chair I mentioned, as well as, more recently, a Young Investigator Award from the Cognitive Neuroscience Society. Today she'll be talking about when seeing becomes knowing: how the brain integrates perceptual and conceptual information. Please join me in welcoming her. Thanks very much for that kind introduction. It's a pleasure to be here. Can everyone hear me with the microphone? Okay, great. Okay. So the work that I'm going to be talking about today concerns this central question: how do we know what we see? The problem of how we can know the nature of the world external to our mind is one of the oldest and most difficult in philosophy and cognitive science. Dating as far back as Socrates, physicians, philosophers, cognitive scientists, neuroscientists, and filmmakers at Disney and Pixar have all wrestled with a question that's central to human cognition, and that is: when we first interact with a new item in the world, how do we come to know what sort of thing that item is? This ability underpins our interaction with items in the world. And in the late 19th century, even luminary neuropsychologists like Wernicke were grappling with this question, and a lot of their conclusions were incredibly prescient. So Wernicke wrote, a long time ago: "The concept of a rose is composed of a tactile memory image, an image of touch in the central projection field of the somesthetic cortex. It is also composed of a visual memory image located in the visual projection of the cortex. The continuous repetition of similar sensory impressions results in such a firm association between those different memory images that the mere stimulation of one sensory avenue by means of the object is adequate to call up the concept of the object. The sum total of closely associated memory images must be aroused into consciousness for perception, not merely of the sounds of the corresponding words but also for comprehension of their meaning." So this quote, I think, beautifully captures the idea that there are multiple modality-specific modules distributed across the brain. Each of these so-called modules is specialized to represent a particular kind of information. So, for example, you have a part of the brain that seems to be specialized for shape, for color, for motion, and so on and so forth. 
So the central idea here, with what's called this distributed view of semantic cognition, is that knowledge about an item reflects nothing more and nothing less than the associations existing among these modality-specific surface representations. Now, with the advent of neuroimaging, this 19th-century idea of distributed semantic cognition experienced something of a revival. So here I am showing the results of a meta-analysis depicting the peak responses for language comprehension tasks that differentially engaged different kinds of modality-specific semantic information, and the key point here is that retrieving modality-specific semantic information evokes activity in the corresponding modality-specific sensory-motor cortex. To put that more simply, retrieval from semantic memory involves the same or adjacent parts of the brain that were involved in the perception of those very same stimuli in the first place. So, for example, retrieving color knowledge activated an area that was very close to area V4, which we know is associated with color processing, and retrieving the function of an object led to activation in the vicinity of area MT, which is thought to process visually perceived motion. So these data about distributed semantic cognition are very compelling. But if we dig deeper, we can see that this proposed architecture is going to face a problem under some circumstances, and that is because the core similarities that structure conceptual knowledge are often not entirely captured by modality-specific representations. More specifically, items that are similar in kind can vary quite substantially in terms of their surface details, and conversely, items that are different in kind can nonetheless be quite similar in terms of their surface details. So, you know, an ostrich and a hummingbird differ drastically in terms of their size and how they move and their appearance, yet we would all agree we would classify them as very similar kinds of things. And likewise tape and glue: the way that we move with them is quite different, they don't look similar, their actions are very different, yet again we would classify tape and glue as being similar kinds of things. And conversely, a light bulb and a pear are similar in shape and in size but are quite different kinds of things, and a gun and a hairdryer look quite similar and we handle them in a similar way, yet we would classify them as quite different indeed. So given all of this, it's unclear how conceptual knowledge could be stored solely based on associations between modality-specific regions, and it seems that the semantic system must represent conceptual similarity structure that is not directly reflected in any single representational modality. So this is where my branch of research steps in. We're interested in how semantic meaning is extracted from and related to the visual attributes of objects. Now, as I have indicated, and as this sort of silly headline indicated (the only time I've read the Mirror), there isn't always a one-to-one correspondence between the way two objects look and what two objects are. So how, then, are we able to reconcile how something looks with what something is? Many classic neuropsychological studies have provided really compelling dissociations between visual perception and conceptual knowledge. So let's take the famous patient DF, who has an acquired lesion to area LOC, the lateral occipital complex, here in the back of the brain. 
She shows gross visual perceptual deficits. So when she's asked to copy a picture of an apple, she produces something that doesn't look anything like an apple, but when that apple is taken away and she's asked to draw an apple from memory, she does a pretty good job. Now, in contrast, patients with semantic dementia, whose damage, at least early on in the course of the disease, is focused on the anterior temporal lobes, do not show these gross perceptual deficits. They can copy a picture of a frog just fine, probably actually better than I could, but when that picture of the frog is removed and they're asked to draw that frog from memory, they produce a very strange-looking frog indeed. However, these dissociations do not preclude the possibility that visual and conceptual information might be integrated somewhere in the ventral visual stream. And in fact, it might be this convergence of visual and conceptual information that allows us to so easily discriminate between objects whose conceptual and visual features are orthogonal. So one instantiation of this idea of a convergence zone is captured by what's been called the distributed-plus-hub, or hub-and-spoke, model of semantic cognition, and this theory proposes that a modality-invariant hub in the anterior temporal lobe mediates communication across sensory-specific representations. So the central idea here is that although conceptual similarity structure may not be perfectly represented in any single representational modality taken on its own, think of the hummingbird and the ostrich example or the tape and the glue example, the conceptual similarity structure becomes apparent when it's considered across all of the different modalities coded across this entire distributed network. Specifically, this view proposes that a hub somewhere in the anterior temporal lobe connects to the various surface modalities and extracts the core transmodal conceptual similarity structure to allow for generalization across different task contexts. Now, an important but as yet unanswered question in this area of research is what the nature of the information coded in these putative convergence zones is. So is this putative convergence zone just an index that is sort of pointing to these different modules? Or does this convergence zone explicitly integrate these different kinds of sensory information into its own explicit, conjunctive representation that might be a flexible, coherent, cohesive representation of that object concept? So if there is such an integrated representation, where in the brain might we find it? The majority of my talk is going to focus on an anterior medial temporal lobe structure, the perirhinal cortex, shown right here in purple on this coronal slice. So the perirhinal cortex is densely connected with regions throughout the ventral visual stream that are known to be critical for visual object perception, and also with the anterior temporal lobe, known to be important for abstract conceptual processing. So it seems like the ideal region to look: it's been implicated in both visual perceptual processing and conceptual processing. So I spent sort of the first part of my career chasing down this first question, demonstrating that the perirhinal cortex is critical for the perception of complex novel objects. 
So, for example, when deciding whether these two simultaneously presented objects (I've lost my pointer, I think my battery just died or something; if anyone has the pointer...) are the same or different, perirhinal damage impairs performance. Yeah, thank you, super, thanks a lot. Great. So, judging whether two highly visually similar, simultaneously presented objects are the same or different: perirhinal damage impairs this kind of visual perceptual discrimination. In addition, a separate stream of elegant research, a lot of it coming out of Lolly Tyler's lab in Cambridge, has demonstrated that the perirhinal cortex is also sensitive to conceptual similarities between familiar objects. So, for example, responses in perirhinal cortex are more similar for a lime and an avocado than for a lime and a violin. So in sum, here we see that perirhinal cortex is sensitive to visual similarities between objects and also to conceptual similarities between objects. However, for this similarity that's been demonstrated for conceptually related objects, there might be a bit of a confound, and that's because conceptually related objects tend to share visual features, and so sensitivity to conceptual information might be driven by visual factors. So if we really want to address this question of integration of visual and conceptual information, we need to deconfound these attributes by independently varying visual and conceptual overlap across objects. And that's exactly what my stellar postdoc Chris Martin, shown right up there, did. So Chris painstakingly put together a word stimulus set based on these chains of objects in which visual and conceptual similarity were not linked to one another. So, for example, bullet and gun are conceptually but not visually similar, whereas gun and hairdryer are visually but not conceptually similar, and he created this chain of objects that met these properties. Now, I want to highlight that these are word stimuli. So what we're getting at in this study is semantic cognition: how knowledge of an object's visual attributes, the visual semantics of the object, is related to its more conceptual attributes. So a conceptual relationship here between bullet and gun, a visual relationship here between gun and hairdryer. But these are words, so we're talking about visual semantics, just to preempt that question. We are running this study with pictures, but you'll have to wait a year or so before I have those data. Okay, so, armed with this stimulus set, Chris created a visual model that captured the visual similarities among all the objects. We had nearly 1200 participants rate the visual similarity of objects on a scale from 1 to 5. So how visually similar are a gun and a bullet? How visually similar are a gun and a hairdryer? And so on and so forth. These visual similarity ratings created our visual model, which was a matrix depicting the visual similarities across all objects in the stimulus set. So, for example, in this model, gun and hairdryer have a high visual similarity, whereas gun and bullet do not. Yeah. So you're saying that the quantification of that similarity was done using only the words as well? Yes, everything here is with words; we're doing an analogous version with pictures, but we're scanning next week. Okay? 
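To make the construction of that visual model concrete, here is a minimal sketch in Python of how pairwise 1-to-5 ratings could be averaged into a symmetric similarity matrix. The objects and rating values are invented for illustration; they are not the actual stimulus set or participant data.

```python
import numpy as np

# Illustrative objects and pairwise visual similarity ratings on a 1-5 scale.
# Both the objects and the rating values are invented for this sketch.
objects = ["gun", "bullet", "hairdryer"]
ratings = {
    ("gun", "bullet"): [1, 2, 1],          # each list = ratings from different raters
    ("gun", "hairdryer"): [4, 5, 4],
    ("bullet", "hairdryer"): [1, 1, 2],
}

n = len(objects)
idx = {name: i for i, name in enumerate(objects)}
visual_model = np.full((n, n), np.nan)
np.fill_diagonal(visual_model, 5.0)        # an object is maximally similar to itself

# Average the ratings for each pair and fill both halves of the symmetric matrix.
for (a, b), values in ratings.items():
    mean_rating = float(np.mean(values))
    visual_model[idx[a], idx[b]] = mean_rating
    visual_model[idx[b], idx[a]] = mean_rating

print(visual_model)
```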
And so then he took those exact same words and used a different set of nearly 1600 participants to create a model that captured the conceptual similarities of those objects. So 1600 participants produced semantic features, each for one object and only one object. So, for example, 17 people said that a bullet was used to kill. No one said that a hairdryer was used to kill, which is fortunate, but 20 people said that it was used for hair. Now, from these generated features, we calculated the cosine similarity across the different objects. So here, bullet and gun have a very high cosine similarity of 0.42, whereas gun and hairdryer have a lower cosine similarity of 0.07. So, taken together, these values created our conceptual model, which was this large matrix depicting the conceptual similarities across all of the objects in our stimulus set. Now, it's really important to note that the correlation between the visual and the conceptual models was not significant, thus removing the confound that I noted with lime and avocado, that stimuli that are conceptually similar also tend to be visually similar. So with this clean setup, we can now assess the integration of visual and conceptual information. So now Chris, the untiring Chris Martin, took these objects to the scanner with still another group of participants. He scanned eight runs of a property verification task involving visual and conceptual properties. We wanted to create these two different task contexts, the visual task context and the conceptual task context, because we wanted to bias our participants towards either processing the visual attributes of the objects or the conceptual attributes of those objects. Okay, so, for example, for the first half of a run, they made visual judgments for all of the objects, like: is the object angular? Then for the second half of the run, they made conceptual judgments, like: is the object natural? Is it manmade? Across different runs, the property to be verified was unique, and the order was counterbalanced. Okay, so then, armed with this fMRI data, Chris assessed the similarity of brain activity between different object concepts during the visual and the conceptual task contexts. So, for example, Chris measured the brain activity patterns associated with visual judgments for the object concept gun, and likewise he measured the brain activity for the concept bullet. Then he correlated these patterns of activity, asking how similar is the brain activity for bullet to the brain activity for gun. And he did this for every single object in our stimulus set, and he did the exact same thing for both the visual and the conceptual task. So this created two different matrices, reflecting the similarity of brain activity for different objects during visual and conceptual tasks. Okay, so just to summarize, Chris has done a lot of work up to this point. He's created four models in total: two brain models that captured the similarity of the brain data during visual and conceptual judgments about these objects, and two behavioral models that captured the visual and the conceptual similarities of those objects. So we have the behavioral models, which we can kind of think of as ground truth, and we have how the brain is representing the similarity across these different objects. 
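As a concrete illustration of the cosine similarity computation described above, here is a minimal sketch with invented feature-production counts standing in for the real semantic feature norms; the feature names and numbers are hypothetical.

```python
import numpy as np

# Hypothetical feature-production counts: how many participants listed each
# feature for each object. The features and counts are invented for illustration.
features = ["used_to_kill", "used_for_hair", "made_of_metal", "is_loud"]
feature_counts = {
    "bullet":    np.array([17, 0, 30, 2]),
    "gun":       np.array([25, 0, 28, 20]),
    "hairdryer": np.array([0, 20, 10, 15]),
}

def cosine_similarity(u, v):
    """Cosine of the angle between two feature vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

print(cosine_similarity(feature_counts["bullet"], feature_counts["gun"]))     # high
print(cosine_similarity(feature_counts["gun"], feature_counts["hairdryer"]))  # lower
```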
And now what we're going to be looking for is correlations across these different models, both within and across the visual and the conceptual domains. So this allows us to ask, for example, whether the brain activity on the visual task can be described by the visual similarities, but also whether the brain similarities across objects during the visual task can be described by the conceptual similarities of those objects. And if we find that, we get some evidence for integration. So if, when participants are making visual judgments, their patterns of brain data can be described by the conceptual similarities of those objects, that is evidence that the conceptual attributes are integrated, that they're coming along for the ride on these visual judgments. I'll unpack that a little bit more later. Okay, so for the purposes of this talk, I'm going to focus on four ROIs. The first ROI is LOC, which is known to be important for visual perception. We also looked at parahippocampal cortex, which has been implicated in representing contextual associations of objects, and the temporal pole, as I said, implicated in semantic dementia and known to be critical for conceptual processing, and, of course, the perirhinal cortex, which is where we expected to observe convergence. We also did a whole-brain searchlight analysis, which is entirely consistent with our ROI results. These ROIs are the main players, and just for the sake of simplicity and time, I'm going to stick to the ROIs, but I'm happy to talk about any of the whole-brain results in the discussion. Okay, so I'm going to go through each ROI in turn. In the lateral occipital cortex, we found that the patterns of brain activity could be described by the visual similarities of the objects, but only on the visual task. And that's a nice sort of sanity check. We're calling this task-dependent coding of visual attributes, and it's entirely consistent with the well-appreciated role of this part of the brain in visual object processing. The parahippocampal cortex showed exactly the opposite. So here the patterns of brain activity could be described by the conceptual similarities of the objects, but only for the conceptual task. One possible interpretation of these results is that participants might bring to mind a contextual setting for the object when they perform the conceptual task, but not when they perform the visual task. So, for example, thinking about the conceptual properties of a comb and a hairdryer might bring to mind an image of a bathroom or a salon, and we're doing future experiments to get at this by explicitly manipulating the contextual co-occurrence of the objects in our stimulus set. Okay, so, moving on: consistent with its established role in abstract conceptual processing, activity in the temporal pole correlated with the conceptual similarities of the objects for both the visual and the conceptual tasks. That is, it showed task-invariant coding of the conceptual similarities between the objects, and this is consistent with what's been learned about this brain region from semantic dementia. But most interestingly, in my mind, activity in perirhinal cortex was captured by both the visual and the conceptual similarities of the objects, regardless of the task. So this suggests that visual and conceptual information is integrated in perirhinal coding, such that visual information comes along for the ride 
when completing a conceptual task, and vice versa. And our whole-brain analysis supported this claim: we found that a contiguous cluster of voxels in left perirhinal cortex was the only region that showed this integrative coding. That is, in this whole-brain analysis, it was exactly the same subset of perirhinal voxels that carried both the conceptual and the visual information. Now, notably, I want to draw your attention to this interaction that we found here between these models of visual and conceptual similarity. So although the perirhinal cortex always coded both visual and conceptual similarity, visual similarities were more strongly coded in the visual task, and conceptual similarities were more strongly coded in the conceptual task. This is a point that I'm going to be coming back to, but I think it suggests that attentional control modulated these multi-voxel activity patterns, such that the multidimensional structure within perirhinal cortex could be flexibly adapted to task demands. Yes, of course. I'm having trouble reconciling the empirical data from the brain, where I have trouble perceiving order, with the similarity measure from behavior that it's clearly being related to. So we're correlating the matrices. It's a Kendall's tau, a correlation between these two. So if a gun and a hairdryer have similar brain activity, are they also similar in terms of their behavioral similarity ratings? Each one is turned into a dissimilarity matrix, because otherwise the negative is kind of misleading, but yes, that's basically what it is. So we're correlating across these two to say: does the way the data line up in behavior match how the data line up in the brain? Does that make sense? Oh, yes, these numbers are tiny; that is always the case. Sorry if you're a little bit alarmed by the magnitude of the y-axis; that's just typical for these kinds of analyses. The correlations are always very, very small, but the significance is absolutely there. Okay, any other questions? Please feel free to interact. Okay. So what does this all mean? How are we to understand this? Despite the fact that we deconfounded visual and conceptual information, and despite the fact that our task is biasing our participants towards either visual or conceptual features, perirhinal cortex nonetheless represented both visual and conceptual information. So if you just stop and think about this for a moment: while participants are making visual judgments about these objects, so, is a gun angular? Is a bullet smooth? Perirhinal coding captured the conceptual similarities of those objects. And conversely, when they made conceptual judgments about objects, so, for example, is the hairdryer natural? Is a gun pleasant? Perirhinal coding captured the visual similarities of those objects. So the fact that a gun and a bullet are dangerous is not needed to assess whether or not they're angular, and likewise, you don't need to know that a gun and a hairdryer are visually similar to assess whether or not they're pleasant. But in terms of perirhinal coding, this information is just coming along for the ride. 
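The exchange above describes correlating a brain-derived matrix with a behavioral model matrix using Kendall's tau after converting both to dissimilarity matrices. A minimal sketch of that comparison, with random placeholder matrices standing in for the real brain and behavioral data, might look like this:

```python
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(0)
n_objects = 40

def random_similarity_matrix(n):
    """Placeholder symmetric similarity matrix standing in for the real data."""
    m = rng.random((n, n))
    m = (m + m.T) / 2            # make it symmetric
    np.fill_diagonal(m, 1.0)
    return m

brain_sim = random_similarity_matrix(n_objects)   # stand-in for brain-pattern similarities
behav_sim = random_similarity_matrix(n_objects)   # stand-in for behavioral model similarities

# Convert to dissimilarity and keep only the unique off-diagonal pairs.
triu = np.triu_indices(n_objects, k=1)
brain_rdm = 1.0 - brain_sim[triu]
behav_rdm = 1.0 - behav_sim[triu]

tau, p = kendalltau(brain_rdm, behav_rdm)
print(f"Kendall's tau = {tau:.3f}, p = {p:.3f}")
```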
So why would the brain represent information in this way? What I'm going to argue, and it's probably obvious to you, is that in any given situation, only a subset of our full complement of semantic knowledge is relevant to the task at hand. So, for example, if I asked you to group these objects according to which you would want to host a dinner party with versus which you would want for a jam session, I think we could all agree on this arrangement of the objects. If I asked you to sort these objects based on their color, we would very easily come up with a different sorting. And if I asked you to put them on a continuum regarding how you would feel about the objects if a friend asked you to come over and help move them out of her apartment, we might choose different characteristics of these objects to emphasize yet again. So in order to use conceptual knowledge appropriately, our semantic system has to retrieve diverse and often competing, interfering properties to complete different tasks. Now, of course, whether a property is interfering varies according to that particular task context. So adaptive behavior requires that we flexibly resolve interference from features irrelevant to the task at hand, and I am going to argue that this multidimensional representational structure of object concepts in perirhinal cortex is key to resolving interference from these task-irrelevant features. So, in the next study, we wanted to look at the behavioral relevance of these integrated representations in perirhinal cortex. To do this, we investigated what happens when they are damaged. So this is a neuropsychological case study specifically going after this idea that the perirhinal cortex is critical for resolving task-irrelevant interference. We worked with three patients and their age-matched controls. Our critical patient is patient DA, who had a large MTL lesion that included the perirhinal cortex. So if you're looking at his brain here, you can see he's missing his hippocampus, more on the right than on the left, but really a bilateral hippocampal and perirhinal ablation that extends pretty far. And we had two control patients. Patient HC, those are actually her initials, not a shorthand to depict her lesion, had relatively circumscribed bilateral hippocampal damage, and patient RL had ventromedial prefrontal cortex damage due to a stroke. So the most important comparison that I want to emphasize is between patient DA and patient HC. Both of these cases have MTL damage, but it's only DA that has damage to the perirhinal cortex. So my two graduate students at the time, Danielle Douglas and Rachel Newsome, used a similar stimulus creation approach to what I described from Chris. They created a discrimination task involving stimuli whose visual and conceptual similarity were not linked, in a block design. Participants were asked to match the top referent, so here, gun, to the most visually or conceptually similar object. So in the visual task, the match was visually but not conceptually similar to the referent object; here, the correct answer would be hairdryer. And in the conceptual task, the answer was conceptually but not visually related to the referent object, and here the correct answer would be bullet. Moreover, these foils and targets were fully crossed, so that each visual match had a conceptual foil and each conceptual match had its own visual foil. 
So here the foil to bullet is battery, so those are visually related, and the foil to hairdryer is comb, which is conceptually related. So why did we create these foils? What we wanted to do is to really increase the amount of interference, the amount of competition, on any given trial, because we wanted to create this requirement for attentional control to flexibly reshape this representational structure in order to solve the task. And Rachel and Danielle and their collaborators, including Chris, went through extensive stimulus validation to ensure that the relative strength of the visual and conceptual associations was matched. I won't bore you with that, but it was a lot of work. Okay, so what did we find? Whereas both control patients performed normally, DA was exceptionally impaired on both the visual task and the conceptual task. We went back and tested him, and we replicated that paradigm. So we're observing susceptibility to visual and conceptual interference after perirhinal damage. And when we go and look at his errors, we can see that DA is falling for the semantic lure on the visual task, so he's choosing bullet when he should be choosing hairdryer, and likewise, he's falling for the visual lure on the conceptual task, so he's choosing hairdryer when he should be choosing bullet. That is, he can't resolve this interference from the competing dimension. Now, it's really important to note that it is not that he has trouble with this task per se. He understands the instructions, and he can perform normally in the absence of visual and conceptual competition, that is, when we removed the lure from the opposing dimension. So when we got rid of bullet and battery in the visual trial, he was able to select hairdryer, no problem, and likewise, when we removed the visual lure from the conceptual task, he did just fine. We also had additional control conditions that used simpler stimuli, like letters and numbers, that did not require this assessment of multiple visual and conceptual dimensions to get the right answer, and in this case again, he was able to resolve that competition. So how are we to make sense of these findings? Our working hypothesis here is that the multidimensional representational structure in perirhinal cortex enables flexible, task-relevant behavior. In any given trial, there is a massive amount of both visual and conceptual interference: we have objects that are visually similar, so a hairdryer and a gun, and we have objects that are conceptually similar, a hairdryer and a comb, and we're asking our participants to flexibly switch between these two different modes. And I just want you to recall that interaction that I showed you in the neuroimaging study. We think that this switching is enabled by the multidimensional representational structure in perirhinal cortex. It codes both the visual and the conceptual information, and its representational coding can be transiently adapted based on task context. So when it's a visual match trial, the visual similarities can be emphasized, and when it's a conceptual match trial, the conceptual similarities can be emphasized. And as we saw in patient DA, without the multidimensional representational structure in perirhinal cortex, this flexible recombination of information based on task relevance was not possible. 
So Chris is planning future work to better understand the mechanisms that drive this representational flexibility, and we think that is going to be really important to arbitrate between two different potential explanations. In the first, it might be some sort of inhibition account, whereby the perirhinal cortex is exerting inhibitory control over its connected regions. So, for example, in the visual task context, perirhinal cortex might be inhibiting the conceptual information from the anterior temporal pole, and vice versa, in the conceptual task context, perhaps it's inhibiting irrelevant information from earlier in the ventral visual stream. And so, in order to get at this explanation, what we're going to do is look at the strength of the connectivity of perirhinal cortex with these different brain regions and see how it's modulated by task context and by the degree of similarity between the lure and the target. Now, a second possibility is that this multidimensional representational structure in perirhinal cortex is modulated through interactions with some independent semantic control network. Under this explanation, we might find that perirhinal connectivity with LOC or the temporal pole doesn't change, but instead we might be looking at connections between perirhinal cortex and, perhaps, inferior frontal gyrus, reflecting the degree of competition and interference. Okay, so we have lots of work to do. But for now, I think we're building a story that the integrated representations in perirhinal cortex provide this informational bedrock with which we can make sense of the sensory cacophony present in our everyday experience and allow us to interact appropriately with the items in our world. Our neuroimaging studies showed that visual and conceptual information are integrated in perirhinal cortex, with the perirhinal cortex representing both the visual and the conceptual attributes of objects, and doing so in a flexible manner that allowed for seamless discrimination between objects whose visual and conceptual features are orthogonal. And when these representations are damaged, as occurs in MTL amnesia, the representation can't be reshaped in a task-appropriate manner. So in sum, at the level of perirhinal cortex, it might not be possible to fully disentangle visual and conceptual processing. That is to say, for the computational operations in perirhinal cortex, seeing implies knowing. Now, I was going to talk about one really quick study that looked at visually guided reaching. I could stop for questions, or this could be a good time to continue. Okay, so this is really, really new work. In fact, I just got these slides yesterday, but I was excited to share it with you because I think it demonstrates that the influence of interference from visual and conceptual information is really far-reaching. I realize that's a terrible pun, because we're looking at visually guided reaching in this task. So this is again some great work by Chris Martin. So imagine that you're sitting at this desk and you need to jot down a note. You would pretty easily grab that pencil, pick it up, and jot down your note. Now, this ability reflects our semantic knowledge about object concepts: how they look, how we hold them, how they move, what they're for, and so on. But there's a lot going on under the surface that underlies this really 
seemingly simple motion of just going and picking up that pencil, and that involves resolving competition from all of these task-irrelevant objects. So visually guided reaching, i.e., going and grabbing that pencil, might be an informative area of behavior in which to investigate the influence of visual and conceptual competition. So, and this is the task that we're going to be using, let's say that you have to reach for a target in the presence of a competing distracter. Whenever there is any sort of target ambiguity, there are going to be competing motor plans between the distracter and the target, and until that competition is resolved, the movement is going to be an average of those two motor plans. So when you start moving your finger in this situation, there's going to be a straight line to start, because the competition between the two motor plans has not been resolved, but once it has been resolved, we can see that the movement deviates from the midline and heads over to the target. Now, you can imagine how these different reach trajectories might vary based on the nature of the competition between the target and the distracter, and what we're going to do is look at two dependent variables to understand the dynamics of this interference resolution. The first is the area under the curve, with less area reflecting a more efficient reach trajectory, and the second is the position of the deflection point, that is, when the reach deviates from the midline, reflecting when these competing motor plans have been resolved. Okay, so Chris manipulated the degree and the nature of the competition to see how it's going to influence these different reach dynamics. It's a really simple design. Participants just have to move their finger on a touch screen to a target location. That target location is cued by a color, whereas the distracter is cued by another color. Now, importantly, of course, all of this is counterbalanced within and across participants, and important for this design is that the distracter and the target locations are always paired with words that can vary in terms of their conceptual or visual competition. So, sticking with this example of hairdryer that I've used throughout the talk, the target hairdryer could be paired with a visual distracter, and that visual distracter could have a high similarity, something like gun, a medium similarity, like megaphone, or a low similarity, like mud. Or that target hairdryer could be paired with a conceptual distracter, again varying in terms of similarity, so high would be comb, medium would be scissors, and low might be wrench. Okay, so let me just go through really quickly, in the time that I have, the specific details of the study. Participants are seated in front of a large tabletop touch screen, and they completed just under 400 trials. In the first phase of the trial, participants are holding their finger on the start location, and they have to read aloud the words that are associated with the two locations. Next, the locations of the target and the distracter are revealed using the color coding scheme, and here participants have to initiate their reach in less than 500 milliseconds. 
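To spell out the two dependent variables described above, here is a rough sketch of how they might be computed from a sampled fingertip path. The trajectory is synthetic, and the lateral-threshold definition of the deflection point is a simplifying assumption, not necessarily the study's exact criterion.

```python
import numpy as np

# Synthetic fingertip trajectory: y = distance travelled toward the target,
# x = lateral deviation from the straight start-to-target midline (arbitrary units).
# The shape is invented for illustration: straight at first, then veering off.
y = np.linspace(0.0, 1.0, 200)
x = np.where(y < 0.4, 0.0, 0.6 * (y - 0.4))

# Area under the curve: total lateral deviation from the straight path,
# integrated along the movement. Less area = a more efficient reach.
abs_x = np.abs(x)
auc = float(np.sum((abs_x[1:] + abs_x[:-1]) / 2.0 * np.diff(y)))

# Deflection point: how far along the movement the reach first leaves the midline,
# operationalized here as exceeding a small lateral threshold (a simplifying choice).
threshold = 0.01
deflection_point = float(y[np.argmax(abs_x > threshold)])

print(f"area under the curve: {auc:.3f}")
print(f"deflection point (proportion of path travelled): {deflection_point:.2f}")
```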
This is actually really quick; when I first heard it, I thought that seemed quite slow, but some participants need something like 40 minutes of training to be able to do it, and we have to force them to move quite quickly because we want to catch the competition between these different motor plans. And then, lastly, for us to count the trial, they have to reach the target in less than 700 milliseconds after the movement is initiated. So before I go into the specific data, I just want to say that across our different conditions, the average time to initiate the movement was 377 milliseconds, and there were no differences across the conditions. But we did find something interesting when we looked at the area under the curve. Looking first at the visual competitors: when there's a visually related object present, we're actually not seeing anything interesting, there's no modulation. However, when there is a conceptual distracter present, the area under the curve increased with the degree of competitor similarity. That is, participants had the least efficient reach trajectories when there was the most conceptual competition present; this would be like the hairdryer and comb pairings. And our deflection point analysis is telling a really similar story. So on the left, I'm plotting the reach trajectories to the target with the visual distracter; on the right, the trajectories with a conceptual distracter. This is just for illustration purposes; of course, we never had two targets present on the screen at the same time. So again, when there was a visual competitor, we're not really seeing anything interesting in terms of the deflection points, but when there's a conceptual distracter, we're seeing larger deflection points with increased conceptual similarity, indicating that these competing motor plans were resolved later in the trial. Okay, but Chris didn't stop there. He noted that on a significant minority of the trials, the movement was initiated after 500 milliseconds, so they didn't follow our instructions on about 37% of the trials. So Chris decided to look at these trials. This is post hoc, and now we're going to replicate it, or attempt to replicate it, but the idea being that for these trials, participants would have had more time to resolve the competition before making their reach. That is, some of the work that they would otherwise have been doing in flight would have been resolved before they initiated that movement. So now what do we see? Again, we don't see anything for the visual distracters, but for trials with conceptual distracters, now the story completely flips. The deflection point is now lowest for trials with the highest similarity and highest for the trials with low similarity. So now we're seeing the most efficient reach trajectories on the trials that have the most competition. And just finally, this is the last part of my talk: if we're looking at the area under the curve for all of the different conditions and trial types, again, when there's visual competition, the area under the curve isn't modulated by competitor similarity, regardless of whether they initiated the trial quickly, that is, in less than 500 milliseconds, or slowly, in more than 500 milliseconds. But when there's conceptual competition, we see a really striking interaction. When they initiate quickly, the area under the curve increases with competition, but when they take their time to initiate the movement, we see the opposite. 
It's like, when they've taken their time to initiate the movement, I don't know how else to say it, they're bringing their A game: they successfully inhibit that competitor and go straight to the target. Okay, so we're in super early days with this work. We've got lots of different iterations that we're considering. We're going to do this with pictures as well, and we're going to look at how it's affected by perirhinal damage. But the basic take-home points are that visually guided reaching can reveal hidden cognitive states, that here, in this study, the time required to resolve competition between objects scaled with their conceptual similarities, and that resolving competition optimizes reach efficiency, adding to this growing picture that adaptive behavior in many different cognitive domains requires that we flexibly resolve interference from features that are irrelevant to the task at hand. I just want to thank everyone that's involved in this work: Chris, who led most of it, Danielle Douglas and Rachel Newsome, and a really excellent undergraduate honors student, Lisa. Thank you so much for listening. Yeah, so I can understand why you would ask that. We debated, when we first started this: should we do it with pictures, should we do it with words? And it was exactly the rationale that you just gave. We said we're going to start with words, but then I went out and gave this talk, and everyone's like, why didn't you do it with pictures? So we have the data with words, and I feel really good about those data. With pictures, I don't know, I feel like the visual and conceptual are no longer on a level playing field, and that's why we went towards words first. Because when we show them the words, you have to extract either the visual attributes or the conceptual attributes from more than what's there, but when we show them pictures, it's kind of all right there. So yeah, we'll see. But thank you for that; that's actually the first time somebody has said why are you doing it with pictures, as opposed to why aren't you. I felt like I was bullied into doing it with pictures, and so I'm kind of defensive about it, but yeah, thank you for that question. Yeah, a question about the imaging study. So you were saying that you had subjects rating these objects, and what struck me is that while the brain activity seems quite variable, the ratings are very, very similar. I wonder, first of all, why that is, and whether it influences your results; I could imagine things that would tend to drive these two apart. Right, you mean you're referring to the fact that these matrices are really sparse? Yeah, I take your point. I'll say two things. This is a little bit of a cop-out, but it's not quite as bad as it looks here. So for an example like a comb and a bullet, people just didn't use the full range of 1 to 5. That's how people rate things; they don't spread their scores, even when you beg them to. And the Kendall's tau correlation is supposed to take that into account. But yeah, we are aware of it. It's really hard to get a stimulus set that meets these attributes. 
And so, yeah, to have it fully tiled across the space, it just wasn't there. Yeah. Okay. So, with the trials taking longer, the response time: I guess I was wondering, my first thought would be that there might be a difference in response time, that they are taking longer, right? So I should have said that again: there were no overall RT differences between the conditions, so it's about how they're accelerating. It's not just that they moved faster; they're still taking the same amount of time. So these are what I'm showing you here: these are the trials in which the movement was initiated after 500 milliseconds, and it still takes them about, let's say, 400 milliseconds to get from the start point to the target, they just get there differently. And this is something that we're going to be looking at. This is the first time we've ever done any visually guided reaching study, and we realized that we didn't code it in such a way that we have the data to look at acceleration; we don't have the point-by-point data to be able to say what's going on. Like, how is that possible, right? How is it that they're traveling more ground in the pink bars than in the green? It must be that they're taking the same amount of time to get there but traveling more distance, so it may be that they accelerate once the competing motor plans are resolved. That's our hypothesis, but yeah, we need to look at that. So did that answer your question? I feel like it might not have. So there are no overall RT differences across the different conditions; it's just the way in which they're getting there. Is there a difference? I guess I'm just not understanding why. Right, not overall. Well, I think in that time before they've initiated, so here we're looking at them resolving, I feel like when I turn, you can't hear me anymore, um, here they're resolving, resolving, resolving, they've got it, and they go. So this is when they're moving quickly and they're resolving those competing plans in flight, whereas if they stay and kind of think about it before going, then they've made up their mind and they go straight there. Yeah, suppose you cued them to reach after a second or a second and a half? Yes, we're looking at that. This just emerged; Chris just looked at these data, this is a post hoc analysis, and he said, God, we've lost 37% of our trials, maybe we should look to see what goes on there. So now we're going to give those specific instructions to see how it looks. I think that when we give them more time, it's going to look like this. Oh, well, with a second and a half, you mean for visual similarity? I don't know. Yeah, I don't know. It will be interesting. Yes, everything is matched, and also each target they see, it's not like they see hairdryer more than they see comb. Do they take too long on the same number of trials? I think so, but I'm not sure; I'll ask Chris that. Yeah, thank you for asking. I think he would have told me if these were disproportionately driven by one of the trial types, but I'll ask him. Thank you for that question. So, for the trials longer than 500 milliseconds, is there a correlation between how much longer it took them and the degree of similarity? No, we didn't look at that, but we will. Thank you. Yeah, this is great, great to get feedback. 
These are super new data. Yeah, looking at the left side there, looking at the visual distracters, what's that, negative? Yeah. Would you interpret that as meaning people are just really fast, in terms of processing speed, at identifying visual properties? Sure, or it's like the visual distracter just doesn't matter: they see gun and hairdryer and gun isn't interfering. Maybe it's that, when you're reading it, and this doesn't really fit with the neuroimaging data, where I said this visual information comes along when you're doing a conceptual task, but maybe when you're reading it in this way, the visual attributes aren't activated sufficiently to generate that interference. So to get at this, one manipulation that we're going to do is prime people to have them think about the visual attributes, and then have them do something that kind of gets them more into that zone. Yeah. So in the past, when you look at the
SPEAKER 1
perirhinal cortex, how much individual variability
All right Let's go ahead, Get started. It's my pleasure to welcome Morgan Barense today. She's a professor and Canada research chair at the University of Toronto and Rotman Research Institute. She originally was in the Boston area, receiving her bachelor's degree from Harvard before going to Cambridge in England to do her PhD. She remained there for postdoctoral work on Peterson Research Fellowship and then joined the Faculty University of Toronto in 2009. She's since been promoted to associate and just this year full professor. But she also addressed the Toronto Neuroimaging facility. Her training is really quite broad, and that's one of things I thought made her a really good fit to bring in. She did. Some of our earliest work in animal Neuroscience has combined that with human neuropsychology fMRI cognitive psychology and brings those approaches together to study memory. I've known working for a number of years now through different conferences, and I really enjoyed the way she approaches kind of the intersection of topics in very novel ways. Challenging some of what we typically thought of a separate domains is maybe not so separate After all, her work has been recognized by a number of awards. The Canada research chair I mentioned as well as, more recently a young investigator award from the Cognitive Neuroscience Society. Today she'll be talking about when seeing becomes knowing. How the brain integrates perceptual conceptual information. Please join me in welcoming her. Thanks very much for that kind introduction. It's a pleasure to be here. Can everyone hear me with microphone? Okay, great. Okay. So the work that I'm gonna be talking about today concerns this central question. How do we know what we see? The problem of how we can know the nature of the world external to our mind is one of the oldest and most difficult in philosophy and cognitive science. Dating as far back as Socrates physicians, philosophers, causing of scientists, neuroscientists and filmmakers at Disney and Pixar have all wrestled with the question that's central to human cognition. And that is when we first interact with a new item in the world. How do we come to know what sort of thing that item is? So this ability underpins our interaction with items in the world. And in the late 19th century, even luminary neuropsychologist like Vernicke were grappling with this question on a lot of their conclusions were incredibly prescient, so Vernicke wrote a long time ago. The concept of a rose is composed of a tactile memory image, an image of touch in the central projection field of the some aesthetic cortex. It is also composed of a visual memory image located in the visual projection of the cortex. The continuous repetition of similar sensory impressions results in such a firm association between those different memory images that the mere stimulation of one sensory avenue by means of the object is adequate to call up the concept of the object. The sum total of closely associated memory images must be aroused into consciousness for perception, not merely of sounds of the corresponding words but also for comprehension of their meaning. So this quote beautifully. I think in my mind beautifully captures the idea that there are multiple modality specific modules distributed across the brain. So each of these so called modules is specialized to represent a particular kind of information. So, for example, you have a part of the brain that seems to be specialized for shape, for color, for motion and so on, and so forth. 
So the central idea here with what's called this distributed view of semantic cognition, is that the knowledge about an item reflects nothing more and nothing less than the association's existing. Among these modality specific surface representations. Now, with the advent of neuroimaging, this 19th century idea of distributed semantic cognition experienced something of a revival. So here I am showing the results of a meta analysis depicting the peak responses for language comprehension tasks. The differentially engaged different kinds of modality specific semantic information, and the key point here is that retrieving modality specific semantic information evokes activity in corresponding modality specific sensory motor cortex. So if I put that more simply, basically, retrieval from semantic memory involves the same or adjacent parts of the brain that were involved in the perception of those very same stimulant in the first place. So, for example, retrieving color knowledge activated here activated an area that was very close to the four regions that we know are associated with color processing and retrieving the function of an object lead to activation in in the vicinity of Area and T, which is thought to process visually perceived motion so these data about distributed semantic cognition are very compelling. But if we dig deeper, we can see that this proposed architecture, is going to face a problem under some circumstances. And that is because the core similarities that structure conceptual knowledge are often not entirely captured by modality specific representations. So more specifically, items that are similar in kind can vary quite substantially in terms of their surface details. And conversely, items that are different in kind can nonetheless be quite similar in terms of their surface details. So, you know, an ostrich and a hummingbird differ drastically in terms of their size and how they move and their appearance yet we would all agree we would classify them is very similar kinds of things and likewise tape and glue. The way that we move with them, is quite different. They don't look similar. Their actions are very different. Yet again, we would classify tape and glue is being similar kinds of things and again, conversely, a light bulb and a pair are similar in shape, in size but are quite different kinds of things and a gun in a hairdryer. They look quite similar they engage. A similar kind of practice yet we would classify them is quite different indeed. So given all of this, it's unclear how conceptual knowledge could be stored solely based on associations between modality specific regions. And it seems that the semantic system must represent conceptual similarity structure that is not directly reflected in any single representational modality. So this is where my branch of research steps in. So we're interested in how semantic meaning is extracted from and related to visual attributes of objects. Now, as I have indicated, and as this sort of silly headline only time I've read the Mirror. indicated there isn't always a 1 to 1. Correspondence between the way two objects look and what to objects are. So how then are we able to reconcile how something looks with what something is? So many classic classic neuropsychological studies have provided really compelling associations between visual perception and conceptual knowledge. So let's take the famous patient DS, who has an acquired lesion to area loc lateral occipital complex here in the back of the brain. 
She shows gross visual perceptual deficits, So when she's asked to copy a picture of an apple, she produces something that doesn't look anything like an apple. But when that apple is taken away and she's asked to draw an apple from memory, she does a pretty good job now. In contrast, patients with semantic dementia whose damage at least early on in the course of the disease is focused on the anterior temporal lobes. Do not show these gross perceptual deficits. They can copy a picture of a frog just fine, probably actually better than I could do. But when that picture of the frog is removed and they're asked to draw that frog from memory, they produce a very strange looking frog indeed. So, however, these associations do not preclude the possibility that visual and conceptual information might be integrated somewhere in the venture visual stream. And in fact, it might be this convergence of visual and conceptual information that allows us just so easily discriminate between objects whose conceptual and visual features are orthogonal. Yeah, so one I instant creation of this idea of a convergence zone is captured by what's been called this distributed plus hub or hub and spoke model of semantic cognition, and this theory proposes that a modality and variant hub in the anterior temporal lobe mediates communication across sensory specific representations. So the central idea here is that although conceptual similarity structure may not be perfectly represented in any single representational modality taken on its own, so think of the hummingbird and the ostrich example or the tape and the glue example. The conceptual structure similarity structure becomes apparent when it's considered across all of the different modalities across that are coded across this entire distributed network. So specifically, this view proposes that a hub in somewhere somewhere in interior temporal lobe connects to the various surface modalities, that are and extracts the core transmittal conceptual similarity structure to allow for generalization across different task context. Now, an important but yet unanswered question in this area of research is what the nature of the information that's coded in these punitive convergence zones. So is this putative convergence zone just an index that sort of pointing to these different modules? Or does this convergence zone explicitly integrate this different this different kinds of sensory information into its own sort of explicit conductive representation that might be flexible and a coherent, cohesive representation of that object concept? So where in the brain might we find if there is such an integrated representation, where might we find it? So the majority of my talk is going to focus on an anterior medial temporal lobe structure of the Perihinal cortex shown right here in purple on here on this coronal slice. Mhm. So the Paranal cortex is densely connected with regions throughout the ventral visual stream that are known to be important critical for a visual object perception and also with the anterior temporal lobe known to be important for abstract conceptual processing. So it seems like the ideal region to look. and It's been implicated in both visual perceptual processing on conceptual processing. So I spent sort of the first part of my career chasing down these first questions thing. First question to demonstrate that the perirhinal cortex is critical for the perception of complex novel objects. 
So, for example, when deciding whether two simultaneously presented objects are the same or different—I've lost my pointer, I think my battery just died; does anyone have a pointer? Yes, thank you, super, thanks a lot. So, judging whether two highly visually similar, simultaneously presented objects are the same or different: perirhinal damage impairs this kind of visual perceptual discrimination. In addition, a separate stream of elegant research, a lot of it coming out of Lolly Tyler's lab in Cambridge, has demonstrated that the perirhinal cortex is also sensitive to conceptual similarities between familiar objects. For example, responses in perirhinal cortex are more similar for a lime and an avocado than for a lime and a violin. So in sum, here we see that perirhinal cortex is sensitive to visual similarities between objects and also to conceptual similarities between objects. However, in the similarity that's been demonstrated for conceptually related objects there might be a bit of a confound, and that's because conceptually related objects tend to share visual features, and so sensitivity to conceptual information might be driven by visual factors. So if we really want to address this question of integration of visual and conceptual information, we need to deconfound these attributes by independently varying visual and conceptual overlap across objects. And that's exactly what my stellar postdoc Chris Martin, shown right up there, did. Chris painstakingly put together a word stimulus set based on chains of objects in which visual and conceptual similarity were not linked to one another. So, for example, bullet and gun are conceptually but not visually similar, whereas gun and hairdryer are visually but not conceptually similar, and he created a chain of objects that met these properties. Now, I want to highlight that these are word stimuli. So what we're getting at in this study is semantic cognition: how knowledge of an object's visual attributes, the visual semantics of the object, is related to its more conceptual attributes. So there's a conceptual relationship here between bullet and gun, and a visual relationship here between gun and hairdryer, but these are words, so we're talking about visual semantics. Just to preempt: we are running this study with pictures, but you'll have to wait a year or so before I have those data. Okay, so, armed with this stimulus set, Chris created a visual model that captured the visual similarities among all the objects. We had nearly 1,200 participants rate the visual similarity of objects on a scale from 1 to 5: how visually similar are a gun and a bullet? How visually similar are a gun and a hairdryer? And so on and so forth. These visual similarity ratings created our visual model, which was a matrix depicting the visual similarities across all objects in the stimulus set. So, for example, in this model, gun and hairdryer have a high visual similarity, whereas gun and bullet do not. So you're asking whether the quantification of that similarity used only the words as well? Yes, everything here is with words; we're doing an analogous version with pictures, but we're scanning next week. Okay?
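(As a rough illustration of the kind of visual model described here: pairwise ratings, averaged across raters, can be assembled into a symmetric object-by-object similarity matrix. The sketch below is a minimal Python example; the object names, rating values, and variable names are illustrative assumptions, not the study's actual stimuli or data.)

```python
import numpy as np

# Hypothetical pairwise visual-similarity ratings (1-5 scale), averaged across raters.
# Object names and values are illustrative only.
objects = ["gun", "hairdryer", "bullet", "comb"]
ratings = {
    ("gun", "hairdryer"): 4.2,   # visually similar pair
    ("gun", "bullet"): 1.8,      # visually dissimilar pair
    ("gun", "comb"): 1.5,
    ("hairdryer", "bullet"): 1.4,
    ("hairdryer", "comb"): 2.9,
    ("bullet", "comb"): 1.6,
}

# Build a symmetric object-by-object similarity matrix (the "visual model").
n = len(objects)
index = {name: i for i, name in enumerate(objects)}
visual_model = np.zeros((n, n))
np.fill_diagonal(visual_model, 5.0)  # an item is maximally similar to itself
for (a, b), score in ratings.items():
    visual_model[index[a], index[b]] = score
    visual_model[index[b], index[a]] = score

print(visual_model)
```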
Then he took those exact same words and used a different set of nearly 1,600 participants to create a model that captured the conceptual similarities of those objects. Each of those 1,600 participants produced semantic features for one object and only one object. For example, 17 people said that a bullet was used to kill; no one said that a hairdryer was used to kill, which is fortunate; but 20 people said that it was used for hair. Now, from these generated features, we calculated the cosine similarity across the different objects. So here, bullet and gun have a very high cosine similarity of 0.42, whereas gun and hairdryer have a much lower cosine similarity of 0.07. Taken together, these values created our conceptual model, which was a large matrix depicting the conceptual similarities across all of the objects in our stimulus set. Now, it's really important to note that the correlation between the visual and the conceptual models was not significant, thus removing the confound that I noted with lime and avocado, namely that stimuli that are conceptually similar also tend to be visually similar. So with this clean setup, we can now assess the integration of visual and conceptual information. Then Chris, the tireless Chris Martin, took these objects to the scanner with still another group of participants. He scanned eight runs of a property verification task involving visual and conceptual properties. We wanted to create two different task contexts, a visual task context and a conceptual task context, because we wanted to bias our participants towards processing either the visual attributes of the objects or the conceptual attributes of those objects. So, for example, for the first half of a run, they made visual judgments for all of the objects, like "Is the object angular?" Then, for the second half of the run, they made conceptual judgments, like "Is the object natural? Is it man-made?" Across different runs the property to be verified was unique, and the order was counterbalanced. Okay, so then, armed with this MRI data, Chris assessed the similarity of brain activity between different object concepts during the visual and the conceptual task contexts. For example, Chris measured the brain activity pattern associated with visual judgments for the object concept gun, and likewise he measured the brain activity for the concept bullet, and then he correlated these patterns of activity, asking how similar the brain activity for bullet is to the brain activity for gun. He did this for every single object in our stimulus set, and he did the exact same thing for both the visual and the conceptual task. This created two different matrices reflecting the similarity of brain activity for different objects during the visual and the conceptual tasks. Okay, so just to summarize, Chris has done a lot of work up to this point. He's created four models in total: two brain models that captured the similarity of the brain data during visual and conceptual judgments about these objects, and two behavioral models that captured the visual and the conceptual similarities of those objects. So we have the behavioral models, which we can think of as ground truth, and we have how the brain is representing the similarity across these different objects.
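(A minimal sketch of the cosine-similarity measure described above, computed over semantic feature-production counts. The feature vocabulary and counts below are made up for illustration; they are not the study's norms.)

```python
import numpy as np

def cosine_similarity(u, v):
    """Cosine of the angle between two feature-count vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical feature-production counts over a shared feature vocabulary
# (e.g., "used to kill", "used for hair", "made of metal", "has a trigger", "blows air").
features  = ["used_to_kill", "used_for_hair", "made_of_metal", "has_trigger", "blows_air"]
gun       = np.array([17, 0, 12, 15,  0], dtype=float)
bullet    = np.array([17, 0, 14,  0,  0], dtype=float)
hairdryer = np.array([ 0, 20,  5,  3, 18], dtype=float)

print(cosine_similarity(gun, bullet))     # high: many shared conceptual features
print(cosine_similarity(gun, hairdryer))  # low: few shared conceptual features
```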
And now what we're going to be looking for is correlations across these different models, both within and across the visual and the conceptual domains. This allows us to ask, for example, whether the brain similarities across objects during the visual task can be described by the visual similarities of those objects, but also whether the brain similarities during the visual task can be described by the conceptual similarities of those objects. If we find the latter, we have some evidence for integration: if, when participants are making visual judgments, their patterns of brain activity can be described by the conceptual similarities of those objects, that is evidence that the conceptual attributes are integrated—they're coming along for the ride on these visual judgments. I'll unpack that a little bit more later. Okay, so for the purposes of this talk, I'm going to focus on four ROIs. The first ROI is LOC, which is known to be important for visual perception. We also looked at parahippocampal cortex, which has been implicated in representing contextual associations of objects; the temporal pole, which, as I said, is implicated in semantic dementia and known to be critical for conceptual processing; and, of course, the perirhinal cortex, which is where we expected to observe convergence. We also did a whole-brain searchlight analysis, which is entirely consistent with our ROI results. These ROIs are the main players, and just for the sake of simplicity and time I'm going to stick to the ROIs, but I'm happy to talk about any of the whole-brain results in the discussion. Okay, so I'm going to go through each ROI in turn. In the lateral occipital cortex, we found that the patterns of brain activity could be described by the visual similarities of the objects, but only on the visual task. That's a nice sort of sanity check. We're calling this task-dependent coding of visual attributes, and it's entirely consistent with the well-appreciated role of this part of the brain in visual object processing. The parahippocampal cortex showed exactly the opposite: here the patterns of brain activity could be described by the conceptual similarities of the objects, but only for the conceptual task. One possible interpretation of these results is that participants might bring to mind a contextual setting for the object when they perform the conceptual task, but not when they perform the visual task. For example, thinking about the conceptual properties of a comb and a hairdryer might bring to mind an image of a bathroom or a salon, and we're doing future experiments to get at this by explicitly manipulating the contextual co-occurrence of the objects in our stimulus set. Okay, so moving on: consistent with its established role in abstract conceptual processing, activity in the temporal pole correlated with the conceptual similarities of the objects for both the visual and the conceptual tasks. That is, it showed task-invariant coding of the conceptual similarities between the objects, and this is consistent with what's been learned about this brain region from semantic dementia. But most interestingly, to my mind, activity in perirhinal cortex was captured by both the visual and the conceptual similarities of the objects, regardless of the task. This suggests that visual and conceptual information is integrated in perirhinal coding, such that visual information comes along for the ride when completing a conceptual task, and vice versa.
And our whole-brain analysis supported this claim: we found that a contiguous cluster of voxels in left perirhinal cortex was the only region that showed this integrative coding. That is, in the whole-brain analysis, it was exactly the same subset of perirhinal voxels that carried both the conceptual and the visual information. Now, notably, I want to draw your attention to this interaction that we found between the models of visual and conceptual similarity. Although the perirhinal cortex always coded both visual and conceptual similarity, visual similarities were more strongly coded in the visual task, and conceptual similarities were more strongly coded in the conceptual task. This is a point that I'm going to come back to, but I think it suggests that attentional control modulated these multivoxel activity patterns, such that the multidimensional structure within perirhinal cortex could be flexibly adapted to the task. Yes, of course—so the question is how to reconcile the empirical data from the brain with the similarity measure from behavior, and how one is reflected in the other. Right, so we're correlating the matrices: it's a Kendall's tau correlation between these two. So if a gun and a hairdryer have similar brain activity, are they also similar in terms of their behavioral similarity ratings? The similarity matrix is turned into a dissimilarity matrix, because negative similarity is kind of misleading, but yes, that's basically what it is. So we're correlating across the two to ask: does the way the data line up in behavior match the way the data line up in the brain? Does that make sense? And yes, these correlation values are tiny; that is always the case. If you're a little bit alarmed by the magnitude of the y-axis, that's typical for these kinds of analyses; the correlations are always very, very small, but the significance is absolutely there. Okay, any other questions? Please feel free to interact. Okay. So what does this all mean? How are we to understand this? Despite the fact that we deconfounded visual and conceptual information, and despite the fact that our task biased participants towards either visual or conceptual features, perirhinal cortex nonetheless represented both visual and conceptual information. If you just stop and think about this for a moment: while participants were making visual judgments about these objects ("Is a gun angular? Is a bullet smooth?"), perirhinal coding captured the conceptual similarities of those objects, and conversely, when they made conceptual judgments about objects ("Is a hairdryer natural? Is a gun pleasant?"), perirhinal coding captured the visual similarities of those objects. The fact that a gun and a bullet are dangerous is not needed to assess whether or not they're angular, and likewise you don't need to know that a gun and a hairdryer are visually similar to assess whether or not they're pleasant. But in terms of perirhinal coding, this information is just coming along for the ride.
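(A minimal sketch of the model-comparison step described here: a behavioral model and the brain data are each expressed as object-by-object dissimilarity matrices, and their lower triangles are correlated with Kendall's tau. The matrices and values below are illustrative only, not the study's data.)

```python
import numpy as np
from scipy.stats import kendalltau

# Hypothetical representational dissimilarity matrices (RDMs) over the same three objects:
# one from a behavioral similarity model (converted to dissimilarity), one from brain activity.
model_rdm = np.array([[0.0, 0.8, 0.3],
                      [0.8, 0.0, 0.6],
                      [0.3, 0.6, 0.0]])
brain_rdm = np.array([[0.0, 0.7, 0.4],
                      [0.7, 0.0, 0.5],
                      [0.4, 0.5, 0.0]])

# Compare only the lower triangles: each object pair counted once, diagonal excluded.
tri = np.tril_indices_from(model_rdm, k=-1)
tau, p = kendalltau(model_rdm[tri], brain_rdm[tri])
print(f"Kendall's tau = {tau:.3f}, p = {p:.3f}")
```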
So why would the brain represent information in this way? What I'm going to argue, and it's probably obvious to you, is that in any given situation, only a subset of our full complement of semantic knowledge is relevant to the task at hand. For example, if I asked you to group these objects according to which you would want for hosting a dinner party versus which you would want for a jam session, I think we could all agree on this arrangement of the objects. If I asked you to sort these objects based on their color, we would very easily come up with a different sorting. And if I asked you to put them on a continuum regarding how you would feel if a friend asked you to come over and help move them out of her apartment, we might choose yet different characteristics of these objects to emphasize. So in order to use conceptual knowledge appropriately, our semantic system has to retrieve diverse and often competing, interfering properties to complete different tasks. And of course, whether a property is interfering varies according to the particular task context. Adaptive behavior requires that we flexibly resolve interference from features irrelevant to the task at hand, and I am going to argue that this multidimensional representational structure of object concepts in perirhinal cortex is key to resolving interference from these task-irrelevant features. So in the next study, we wanted to look at the behavioral relevance of these integrated representations in perirhinal cortex. To do this, we investigated what happens when they are damaged. This is a neuropsychological case study specifically going after the idea that the perirhinal cortex is critical for resolving task-irrelevant interference. We worked with three patients and their age-matched controls. Our critical patient is patient DA, who had a large MTL lesion that included the perirhinal cortex. If you look at his brain here, you can see he's missing his hippocampus, more on the right than on the left, but really a bilateral lesion that includes the perirhinal cortex and extends quite far. And we had two control patients: patient HC (those are actually her initials, not shorthand for her lesion), who had relatively circumscribed bilateral hippocampal damage, and patient RL, who had ventromedial prefrontal cortex damage due to a stroke. The most important comparison that I want to emphasize is between patient DA and patient HC: both of these cases have MTL damage, but it's only DA who has damage to the perirhinal cortex. So my two graduate students at the time, Danielle Douglas and Rachel Newsome, used a similar stimulus creation approach to what I described from Chris. They created a discrimination task involving stimuli whose visual and conceptual similarity were not linked, in a block design. Participants were asked to match the top referent (here, gun) to the most visually or conceptually similar object. In the visual task, the match was visually but not conceptually similar to the referent object, so here the correct answer would be hairdryer. In the conceptual task, the answer was conceptually but not visually related to the referent object, and here the correct answer would be bullet. Moreover, these foils and targets were fully crossed, so that each visual match had a conceptual foil and each conceptual match had its own visual foil.
So here the foil to bullet is battery, so those are visually related, and the foil to hairdryer is comb, which is conceptually related. So why did we create these foils? We wanted to really increase the amount of interference, the amount of competition, on any given trial, because we wanted to create a requirement for attentional control to flexibly reshape this representational structure in order to solve the task. Rachel and Danielle and their collaborators, including Chris, went through extensive stimulus validation to ensure that the relative strength of the visual and conceptual associations was matched. I won't bore you with that, but it was a lot of work. Okay, so what did we find? Whereas both control patients performed normally, DA was exceptionally impaired on both the visual task and the conceptual task. We went back and tested him again, and we replicated that result with the same paradigm. So we're observing susceptibility to visual and conceptual interference after perirhinal damage. And when we look at his errors, we can see that DA is falling for the semantic lure on the visual task, choosing bullet when he should be choosing hairdryer, and likewise he's falling for the visual lure on the conceptual task, choosing hairdryer when he should be choosing bullet. That is, he can't resolve the interference from the competing dimension. Now, it's really important to note that it's not that he has trouble with this task per se. He understands the instructions, and he can perform normally in the absence of visual and conceptual competition, that is, when we removed the lure from the opposing dimension. So when we got rid of bullet and battery in the visual trial, he was able to select hairdryer, no problem, and likewise, when we removed the visual lure from the conceptual task, he did just fine. We also had additional control conditions that used simpler stimuli, letters and numbers, that did not require the assessment of multiple visual and conceptual dimensions to get the right answer, and in that case again he was able to resolve the competition. So how are we to make sense of these findings? Our working hypothesis is that the multidimensional representational structure in perirhinal cortex enables flexible, task-relevant behavior. In any given trial there is a massive amount of both visual and conceptual interference: we have objects that are visually similar, a hairdryer and a gun, and objects that are conceptually similar, a hairdryer and a comb, and we're asking our participants to flexibly switch between these two different modes. And I just want you to recall that interaction I showed you in the neuroimaging study: we think that this switching is enabled by the multidimensional representational structure in perirhinal cortex. It codes both the visual and the conceptual information, and its representational coding can be transiently adapted based on task context. So when it's a visual match trial, the visual similarities can be emphasized, and when it's a conceptual match trial, the conceptual similarities can be emphasized. And as we saw in patient DA, without the multidimensional representational structure in perirhinal cortex, this flexible recombination of information based on task relevance was not possible.
So Chris is planning future work to better understand the mechanisms that drive this representational flexibility, and we think that is going to be really important for arbitrating between two different potential explanations. The first is an inhibition account, whereby the perirhinal cortex exerts inhibitory control over its connected regions. For example, in the visual task context, perirhinal cortex might be inhibiting the conceptual information from the anterior temporal pole, and vice versa: in the conceptual task context, perhaps it's inhibiting irrelevant information from earlier in the ventral visual stream. So in order to get at this explanation, we're going to look at the strength of the connectivity between perirhinal cortex and these different brain regions and see how it's modulated by task context and by the degree of similarity between the lure and the target. A second possibility is that this multidimensional representational structure in perirhinal cortex is modulated through interactions with an independent semantic control network. Under this explanation, we might find that perirhinal connectivity with LOC and the temporal pole doesn't change, but instead we might see connections between perirhinal cortex and, perhaps, inferior frontal gyrus reflecting the degree of competition and interference. Okay, so we have lots of work to do. But for now, I think we're building a story that the integrated representations in perirhinal cortex provide the informational bedrock with which we can make sense of the sensory cacophony present in our everyday experience and interact appropriately with the items in our world. Our neuroimaging studies showed that visual and conceptual information are integrated in perirhinal cortex, with the perirhinal cortex representing both the visual and the conceptual attributes of objects, and doing so in a flexible manner that allows for seamless discrimination between objects whose visual and conceptual features are orthogonal. And when these representations are damaged, as occurs in MTL amnesia, the representation can't be reshaped in a task-appropriate manner. So in sum, at the level of perirhinal cortex, it might not be possible to fully disentangle visual and conceptual processing. That is to say, for the computational operations in perirhinal cortex, seeing implies knowing. Now, I was going to talk about one really quick study that looked at visually guided reaching. I could stop for questions, or this could be a good time to continue. Okay, so this is really, really new work. In fact, I just got these slides yesterday, but I was excited to share it with you because I think it demonstrates that the influence of interference from visual and conceptual information is really far-reaching. I realize that's a terrible pun, because we're looking at visually guided reaching in this task. So this is again some great work by Chris Martin. Imagine that you're sitting at this desk and you need to jot down a note. You would pretty easily grab that pencil, pick it up, and jot down your note. This ability reflects our semantic knowledge about object concepts: how they look, how we hold them, how they move, what they're for, and so on. But there's a lot going on under the surface that underlies this really
seemingly simple action of just going and picking up that pencil, and that involves resolving competition from all of these task-irrelevant objects. So visually guided reaching, going and grabbing that pencil, might be an informative domain of behavior in which to investigate the influence of visual and conceptual competition. So this is the task we're going to be using. Let's say that you have to reach for a target in the presence of a competing distracter. Whenever there is any sort of target ambiguity, there are going to be competing motor plans for the distracter and the target, and until that competition is resolved, the movement is going to be an average of those two motor plans. So when you start moving your finger in this situation, there's going to be a straight line to start, because the competition between the two motor plans has not been resolved, but once it has been resolved, the movement deviates from the midline and heads over to the target. Now, you can imagine how these reach trajectories might vary based on the nature of the competition between the target and the distracter. What we're going to do is look at two dependent variables to understand the dynamics of this interference resolution. The first is the area under the curve, with less area reflecting a more efficient reach trajectory, and the second is the position of the deflection point, that is, the point at which the reach deviates from the midline, reflecting when these competing motor plans have been resolved. Okay, so Chris manipulated the degree and the nature of the competition to see how it influences these reach dynamics. It's a really simple design: participants just have to move their finger on a touch screen to a target location. The target location is cued by one color and the distracter location by another color; importantly, all of this is counterbalanced within and across participants. And important for this design is that the distracter and the target locations are always paired with words that vary in terms of their conceptual or visual competition. Sticking with the example of hairdryer that I've used throughout the talk, the target hairdryer could be paired with a visual distracter of high similarity (something like gun), medium similarity (like megaphone), or low similarity (like mud). Or that target hairdryer could be paired with a conceptual distracter that again varies in similarity: high would be comb, medium would be scissors, and low might be wrench. Okay, so let me just go through really quickly, in the time that I have, the specific details of the study. Participants are seated in front of a large tabletop touch screen and completed just under 400 trials. In the first phase of a trial, participants hold their finger on the start location, and they have to read aloud the words associated with the two locations. Next, the locations of the target and the distracter are revealed using the color coding scheme, and here participants have to initiate their reach in less than 500 milliseconds. This is actually really quick; when I first heard it, I thought 500 milliseconds sounded quite slow, but it isn't.
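(A minimal sketch of the two reach measures just described, area under the curve and deflection point, computed from a sampled finger trajectory. The deviation threshold and the example trajectory are illustrative assumptions; the study's exact operationalization may differ.)

```python
import numpy as np

def reach_metrics(x, y):
    """Compute reach-efficiency measures from a sampled finger trajectory.

    x, y: arrays of finger positions from the start point (0, 0) toward a target;
    y is the forward (midline) direction, x is lateral deviation from the midline.
    Returns the area under the curve (lateral deviation integrated along the
    forward axis; larger area = less efficient reach) and the index of the
    deflection point (first sample where lateral deviation exceeds a threshold).
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    auc = np.trapz(np.abs(x), y)
    threshold = 0.05 * np.max(np.abs(x))   # illustrative deviation criterion
    deflection_idx = int(np.argmax(np.abs(x) > threshold))
    return auc, deflection_idx

# Illustrative trajectory: straight along the midline at first, then veering
# toward a target located off to the right.
y = np.linspace(0.0, 30.0, 100)
x = np.where(y < 12.0, 0.0, (y - 12.0) * 0.8)
print(reach_metrics(x, y))
```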
But some participants need, like, 40 minutes of training to be able to do it. We have to force them to move quite quickly because we want to capture the competition between these different motor plans, and then, lastly, for us to count the trial, they have to reach the target within 700 milliseconds of initiating the movement. Before I go into the specific data, I just want to say that across our different conditions the average time to initiate the movement was 377 milliseconds, and there were no differences across the conditions. But we did find something interesting when we looked at the area under the curve. Looking first at the visual competitors: when there's a visually related distracter, we're actually not seeing anything interesting; there's no modulation. However, when there is a conceptual distracter present, the area under the curve increased with the degree of competitor similarity. That is, participants had the least efficient reach trajectories when there was the most conceptual competition present, so that would be the hairdryer-comb pairings. And our deflection point analysis is telling a really similar story. On the left, I'm plotting the reach trajectories to the target with the visual distracter, and on the right, the trajectories with the conceptual distracter. This is just for illustration purposes; of course we never had two targets present on the screen at the same time. So again, when there's a visual competitor, we're not really seeing anything interesting in terms of the deflection points, but when there's a conceptual distracter, we're seeing larger deflection points with increased conceptual similarity, indicating that these competing motor plans were resolved later in the trial. Okay, but Chris didn't stop there. He noted that on a significant minority of the trials, the movement was initiated after 500 milliseconds; they didn't follow our instructions on about 37% of the trials. So Chris decided to look at those trials. This is post hoc, and we're going to replicate it, or attempt to replicate it, but the idea is that on these trials participants would have had more time to resolve the competition before making their reach. That is, some of the work that would have been done in flight would have been done before they initiated the movement. So now what do we see? Again, we don't see anything for the visual distracters, but for trials with conceptual distracters, the story completely flips. The deflection point is now lowest for trials with the highest similarity and highest for trials with low similarity. So now we're seeing the most efficient reach trajectories on the trials that have the most competition. And finally, this is the last part of my talk: if we look at the area under the curve for all of the different conditions and trial types, again, when there's visual competition the area under the curve isn't modulated by competitor similarity, regardless of whether they initiated the trial quickly (in less than 500 milliseconds) or slowly (more than 500 milliseconds). But when there's conceptual competition, we see a really striking interaction. When they initiate quickly, the area under the curve increases with competition, but when they take their time to initiate the movement, we see the opposite. So it's like, when they've taken their time to initiate the movement, they, I don't know how else to say it,
it's like they're bringing their A game: they successfully inhibit that competitor and go straight to the target. Okay, so we're in super early days with this work. We've got lots of different iterations that we're considering: we're going to do this with pictures as well, and we're going to look at how it's affected by perirhinal damage. But the basic take-home points are that visually guided reaching can reveal hidden cognitive states, that here the time required to resolve competition between objects scaled with their conceptual similarity, and that resolving competition optimizes reach efficiency, adding to this growing picture that adaptive behavior in many different cognitive domains requires that we flexibly resolve interference from features that are irrelevant to the task at hand. I just want to thank everyone involved in this work: Chris, who led most of it, Danielle Douglas and Rachel Newsome, and a really excellent undergraduate honors student, Lisa. Thank you so much for listening. Yeah, so I can understand why you would ask that. We debated this when we first started: should we do it with pictures, should we do it with words? And it was exactly the rationale that you just gave. We said we were going to start with pictures, but then I went out and gave this talk, and everyone said, why did you do it with pictures? You're talking about visual attributes, but you're not showing them pictures. So we have the data with words, and I feel really good about those data. With pictures, I don't know; I feel like the visual and the conceptual are no longer on a level playing field, and that's why we went towards words first. Because when we show them the words, they have to extract either the visual attributes or the conceptual attributes from more than what's there, but when we show them pictures, it's kind of all right there. So we'll see. But thank you for that; that's actually the first time somebody has asked why we are doing it with pictures as opposed to why we aren't. I felt like I was bullied into doing it with pictures, so I'm kind of defensive about it. But yeah, thank you for that question. Yeah, a question about the imaging study: you had subjects rating these pairs, and what struck me is that while the brain activity varies quite a bit, the ratings look very sparse; why is that, and could it influence your results? Yeah, you're referring to the fact that these matrices are really sparse. I take your point, and I'll say two things. It's not quite as bad as it looks here, which is a little bit of a cop-out. But, for example, for a pair like a comb and a bullet, people didn't use the full range of 1 to 5; that's just how people rate things, they don't spread their scores even when you beg them to. The Kendall's tau correlation is supposed to take that into account. But yes, we are aware of it; it's really hard to get a stimulus set that meets these attributes
and also have it fully tiled across the whole range. Yeah, it wasn't there. Okay. So, on the trials with longer response times: yes, I guess I was wondering whether my first thought would be right, that there might be a different response time, that they're taking longer. Right, so I should have said that again: there were no overall RT differences between the conditions. So it's about how they're accelerating; it's not just that they moved faster. These are the trials in which the movement was initiated after 500 milliseconds; it still takes them about, let's say, 400 milliseconds to get from the start point to the target, they just get there differently. And this is something we're going to be looking at. This is the first time we've ever done a visually guided reaching study, and we realized that we didn't code it in such a way that we have the point-by-point data to look at acceleration. Because how is it possible that they're covering more ground in the pink bars than in the green, yet taking the same amount of time to get there? It must be that they accelerate once the competing motor plans are resolved. That's our hypothesis, but we need to look at that. So, did that answer your question? I feel like it might not have. So there are no overall RT differences across the different conditions; it's just the way in which they're getting there that differs. I guess I'm just not understanding why. Well, I think in that time before they've initiated, they're resolving the competition. Here they're resolving, resolving, resolving, they've got it, and they go. So this is when they're moving quickly and they're resolving those competing plans in flight, whereas if they stay and think about it before going, then they've made up their mind and they go straight there. Suppose you cued them to reach after a second or a second and a half? Yes, we're looking at that. This just emerged: Chris just looked at these data, this is a post hoc analysis. He said, we've lost 37% of our trials, maybe we should look to see what goes on there. So now we're going to give those specific instructions to see how it plays out. I think that when we give them more time, it's going to look like this. With a second and a half, you mean for visual similarity? I don't know. It will be interesting. Yes, everything is matched, and each target is seen equally often; it's not like they see hairdryer more than they see comb. Whether they take too long on the same number of trials across conditions, I think so, but I'm not sure; I'll ask Chris. I think he would have told me if these trials were disproportionately driven by one of the trial types, but I'll ask him. Thank you for that question. For the trials longer than 500 milliseconds, is there a correlation between how much longer it took them and the degree of similarity? No, we didn't look at that, but we will. Thank you. Yeah, this is great; it's great to get feedback.
These are super new data. Yeah, looking at the left side of the figure, the visual condition: how do you interpret that null result? Maybe people are just really fast at processing and identifying the visual properties, or maybe the visual distracter just doesn't matter: they see gun and hairdryer and gun isn't interfering. That doesn't really fit with the neuroimaging data, where I said that visual information comes along for the ride when you're doing a conceptual task. But maybe when you're reading the word in this way, the visual attributes aren't activated sufficiently to generate that interference. So to get at this, one manipulation we're going to do is prime people to think about the visual attributes first and then have them do the reaching task, to get them more into that zone. Yeah. So in the past, when you look at the
SPEAKER 1
perirhinal cortex, how much individual vari