This is, presumably, the condition for a similarity-based representation to be tolerant to changes in viewing conditions.
In this case, some representations of Jim in face-space might be closer to some representations of Dan than to each other. Under such circumstances, it seems that similarity evaluations, and the framework itself, lose their aptness as an account of face processing, of which tolerance is a main characteristic.
Fortunately, this is but an illusory problem: it can be resolved by reconstructing the theoretical link between face-space and tolerance, or perhaps by abandoning that link altogether. On this view, a representation of Jim's face need not relate in any way to Jim's actual face; rather, the similarity between Jim's and Dan's representations should relate to the similarity between their actual faces.
Note that, given the variability in Jim's images induced by varying viewing conditions, Shepard's argument does not imply that different images of Jim have similar representations.
However, it does allow the following: The similarity between Jim and Dan's representations under one viewing condition resembles the similarity between their representations under a different viewing condition. By extension, if, under one viewing condition, Jim and Dan's representations are more similar than Jim and Joe's, this pattern of similarities should also be observed under a different viewing condition.
This common similarity pattern across viewing conditions can serve as a face-space correlate of the fact that all of Jim's images belong to the same individual, even when their locations in the space are very distant from each other.
In other words, the structure of face-space may itself exhibit tolerance to identity-preserving transformations (see also Newell et al.; Figure 1).

Figure 1. An illustration of our hypothesis that similarity patterns in face-space exhibit tolerance to identity-preserving transformations. In both spaces, Jim and Dan are more similar to each other than Jim and Joe (names are for illustration purposes). This common pattern of similarities preserves the structure of face-space across the two viewpoints. Note that the dimensions of the two spaces need not necessarily correspond; only the relative location of a representation with respect to other representations is preserved.
The goal of the present study was, therefore, to examine the tolerance of similarity patterns in face-space to identity-preserving transformations. To quantify the structure of the human face-space, we first collected subjects' inter-item perceived similarity ratings for a set of facial stimuli. These ratings were then converted to a spatial arrangement of the images in a concrete face-space via multidimensional scaling (MDS; Shepard). To evaluate similarities only within each viewing condition, ratings were made separately for two variants of the same stimulus set, differing either in illumination (Experiment 1) or in viewpoint (Experiment 2).
In each experiment, configurations in face-space were separately generated for each variant of the stimulus set, and the correspondence between these configurations was evaluated. Experiment 1 tested the tolerance of similarity patterns in face-space to illumination changes.
If similarity patterns are tolerant to lighting transformations, then faces that are relatively similar under one lighting condition should remain so under a different lighting condition, and faces that are relatively dissimilar should likewise remain dissimilar under a change in illumination.
Therefore, we examined whether the similarity pattern among faces is preserved across the two lighting conditions.

Twenty-four Caucasian subjects (10 males) volunteered to participate in the study. All reported normal or corrected-to-normal vision and were not familiar with the face stimuli used in the study.
Two photographs of each face were included, creating two sets: frontally-lit (FL) and top-lit (TL). All faces were of frontal view, with neutral expression, and free from external features. Each subject was randomly assigned to either the Frontal Lighting or the Top Lighting condition and was accordingly presented with faces from either the FL or the TL variant. In each trial, a centered fixation point appeared briefly, followed by the sequential presentation of two faces.
Each face appeared for 1 s, and the two faces were separated by a brief inter-stimulus interval. After the second face disappeared, the subject was asked to press a key corresponding to the level of perceived similarity between the two faces, on a scale ranging from 1 (identical) to 7 (extremely different). The next trial started 1 s after a response was made (Figure 2). Subjects were encouraged to give their first impression, but the duration of trials was not limited.
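For concreteness, the trial sequence can be sketched in code roughly as follows. This is a minimal illustration only: the display and response helpers are trivial stand-ins for whatever presentation software was actually used, and the fixation and inter-stimulus durations are assumed values, since the exact numbers are not given in this excerpt.

```python
# A minimal sketch of the trial sequence. The helpers below are trivial
# placeholders for the (unspecified) presentation software, and the fixation
# and inter-stimulus durations are assumed values.
import time

def show_fixation():
    print("+")                        # placeholder: draw a centered fixation point

def show_face(face_id):
    print(f"face {face_id}")          # placeholder: draw the face image

def clear_screen():
    print()                           # placeholder: blank the display

def get_rating():
    # 1 = identical ... 7 = extremely different; unspeeded, first impression encouraged
    return int(input("similarity (1-7): "))

def run_trial(face_a, face_b, face_duration=1.0, fixation=0.5, isi=0.5):
    show_fixation()
    time.sleep(fixation)              # assumed fixation duration
    show_face(face_a)
    time.sleep(face_duration)         # each face shown for 1 s
    clear_screen()
    time.sleep(isi)                   # assumed inter-stimulus interval
    show_face(face_b)
    time.sleep(face_duration)
    clear_screen()
    t0 = time.time()
    rating = get_rating()
    rt = time.time() - t0             # reaction time recorded alongside the rating
    time.sleep(1.0)                   # next trial begins 1 s after the response
    return rating, rt
```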
In addition to similarity ratings, reaction time (RT) was also recorded. Prior to the experiment, subjects completed a practice session that included 5 faces not used in the experiment itself.
Figure 2. The perceived similarity rating task used in Experiment 1.

Each subject rated randomly ordered pairs comprising every possible pairing of faces in both presentation orders, plus two pairings of every face with itself.
The session was split into 5 blocks, each lasting approximately 20 min. Upon completion of the experiment, subjects were debriefed by the experimenter as to the purpose of the study.

Correspondence of perceived similarity ratings across lighting transformations. In each experimental condition, the remaining perceived similarity ratings given to each pairing of different faces were averaged across subjects and order of presentation to produce a set of Mean Similarity Ratings (an MSR matrix).
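As a rough illustration, an MSR matrix of this kind could be assembled from the raw trial data along the following lines. The column names and data layout are assumptions rather than the authors' actual format, and the ratings are assumed to have already passed outlier removal.

```python
# A sketch of building a Mean Similarity Rating (MSR) matrix from trial data.
# Assumed layout: one row per trial with columns face_a, face_b (0-based face
# ids) and rating (1-7). Identical pairs are excluded, and the two
# presentation orders of each pair are averaged together with the subjects.
import numpy as np
import pandas as pd

def msr_matrix(trials: pd.DataFrame, n_faces: int) -> np.ndarray:
    diff = trials[trials.face_a != trials.face_b]
    lo = np.minimum(diff.face_a, diff.face_b)     # collapse presentation order
    hi = np.maximum(diff.face_a, diff.face_b)
    means = diff.rating.groupby([lo, hi]).mean()  # mean over subjects and order
    msr = np.full((n_faces, n_faces), np.nan)     # diagonal stays NaN (identical pairs)
    for (i, j), m in means.items():
        msr[i, j] = msr[j, i] = m
    return msr
```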
The ratings given to identical pairs were not considered further. To roughly evaluate whether similarity relations were preserved under the two lighting conditions, we first tested the correlation between corresponding inter-item similarities across the FL and TL MSR matrices. Next, tolerance was more accurately assessed by examining the correspondence between similarity patterns of single faces across illumination changes.
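Both analyses can be sketched as follows, assuming msr_fl and msr_tl are the FL and TL MSR matrices with NaNs on the diagonal and faces in the same order. The construction of the single-face analysis (correlating each face's row of similarities across conditions and summarizing same- versus different-identity correlations with an ROC curve) is one plausible reading of the procedure, not necessarily the authors' exact implementation.

```python
# A sketch of the two correspondence analyses between the FL and TL MSR
# matrices: a global correlation of corresponding inter-item similarities,
# and a single-face analysis summarized by the area under the ROC curve.
import numpy as np
from scipy.stats import spearmanr
from sklearn.metrics import roc_auc_score

def global_correspondence(msr_fl, msr_tl):
    """Correlation between corresponding inter-item similarities."""
    iu = np.triu_indices_from(msr_fl, k=1)
    return spearmanr(msr_fl[iu], msr_tl[iu])

def single_face_auc(msr_fl, msr_tl):
    """Correlate every FL face's similarity pattern with every TL face's
    pattern; same-identity pairs should yield the higher correlations,
    which is summarized by the area under the ROC curve."""
    n = msr_fl.shape[0]
    scores, labels = [], []
    for i in range(n):
        for j in range(n):
            keep = [k for k in range(n) if k not in (i, j)]  # drop the faces' own entries
            rho, _ = spearmanr(msr_fl[i, keep], msr_tl[j, keep])
            scores.append(rho)
            labels.append(int(i == j))
    return roc_auc_score(labels, scores)
```

In this reading, an AUC near 1 would indicate that a face's pattern of similarities under one lighting condition reliably picks out the same face under the other lighting condition, whereas an AUC of 0.5 would indicate chance performance.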
The area under the curve (AUC) was 0.

Figure 3. Dashed gray line indicates chance performance.

Multidimensional scaling and Procrustes analysis. The metric parameter was specified as Euclidean. Since the dimensionality of the spaces underlying the observed similarity ratings is a matter of speculation, FL and TL MDS solutions were generated in 2 to 15 dimensions, showing a decrement in stress (Shepard) with increasing dimensionality.
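This step can be sketched with an off-the-shelf metric MDS routine; note that scikit-learn reports a raw (unnormalized) stress, so the values are not directly comparable to Kruskal's stress formula.

```python
# A sketch of generating metric MDS configurations in 2 to 15 dimensions from
# an MSR matrix. The stress_ attribute is a raw (unnormalized) stress.
import numpy as np
from sklearn.manifold import MDS

def mds_solutions(msr, dims=range(2, 16), seed=0):
    d = np.array(msr, dtype=float)
    np.fill_diagonal(d, 0.0)                     # MDS expects zero self-dissimilarity
    solutions = {}
    for k in dims:
        mds = MDS(n_components=k, dissimilarity="precomputed",
                  metric=True, n_init=8, random_state=seed)
        solutions[k] = (mds.fit_transform(d), mds.stress_)   # (configuration, stress)
    return solutions
```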
Each pair of configurations was analyzed for structural correspondence, which enabled us to exclude the possibility that our results were dependent upon the specific dimensionality of the MDS solutions.
This descriptive analysis minimizes the sum of squared residuals between the point locations of the two spaces by transforming one configuration (e.g., the TL space) to best fit the other (e.g., the FL space).
The transformation is a combination of scaling, orthogonal rotation, reflection, and translation and, hence, does not affect the shape of the transformed configuration. Figure 4 shows the distribution of faces on the first two dimensions of the TL and FL spaces prior to, and following, this transformation; these solutions replicated the similarity data well, as indicated by their low stress values.
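The fitting step itself is available off the shelf; for example, scipy's procrustes routine standardizes both configurations and returns the residual sum of squares after the optimal translation, scaling, rotation, and reflection, which plays the same role as the d statistic described below. This is a sketch of the logic, not the authors' actual code.

```python
# A sketch of the Procrustes fit: both configurations are standardized and one
# is translated, scaled, rotated, and reflected to best match the other; the
# returned disparity is the standardized residual sum of squares.
from scipy.spatial import procrustes

def badness_of_fit(config_ref, config_other):
    """config_ref, config_other: (n_faces x n_dims) MDS configurations with
    rows (faces) in the same order."""
    _, _, disparity = procrustes(config_ref, config_other)
    return disparity                  # smaller values = closer structural correspondence
```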
Figure 4. Since the choice of dimensions for the MDS solution is arbitrary, the configurations in (a) and (b) are plotted following principal component analysis (PCA). Locations connected by a line correspond to FL and TL variants of the same facial identity and are marked by a number. Note that TL and FL variants of the same face have relatively similar locations, as indicated by the short connecting lines.

To assess the significance of this correspondence, permuted versions of the TL configuration were generated; each permuted space was Procrustes-transformed to fit the original FL space, and badness of fit was measured using the d statistic, a standardized sum of squared residuals between the two spaces.
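One way to implement such a permutation baseline is sketched below. The shuffling scheme (permuting the rows, i.e., the face labels, of the TL configuration) and the number of permutations are assumptions; the original procedure may have differed in detail.

```python
# A sketch of a permutation baseline for the Procrustes fit: the rows (face
# labels) of the TL configuration are shuffled before fitting, and the
# observed d is compared with the resulting null distribution.
import numpy as np
from scipy.spatial import procrustes

def permutation_test(config_fl, config_tl, n_perm=1000, seed=0):
    rng = np.random.default_rng(seed)
    d_obs = procrustes(config_fl, config_tl)[2]
    d_null = np.array([
        procrustes(config_fl, config_tl[rng.permutation(len(config_tl))])[2]
        for _ in range(n_perm)
    ])
    p_value = (np.sum(d_null <= d_obs) + 1) / (n_perm + 1)   # how often chance fits as well
    return d_obs, d_null, p_value
```

A small observed d relative to the permutation distribution indicates that the FL and TL configurations correspond more closely than expected by chance.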
Figure 5. The observed d values, indicating the badness of fit of the TL space to the FL space, are plotted both for the raw similarities (white bars) and for similarities following intra-subject ranking (gray bars).
Finally, to ensure that differences between subjects' individual rating tendencies did not influence our results, all analyses were also performed on perceived similarity ratings that had been subjected to intra-subject ranking. Analysis of the ranked data yielded results comparable to those described above (Figure 5).
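This control amounts to a per-subject rank transform of the raw ratings before the MSR matrices are rebuilt; a minimal sketch, assuming the trial-data layout from the earlier example plus a subject column:

```python
# A sketch of the intra-subject ranking control: each subject's raw ratings
# are replaced by their ranks, removing differences in how individual
# subjects used the 1-7 scale, before the MSR matrices are rebuilt.
import pandas as pd
from scipy.stats import rankdata

def rank_within_subject(trials: pd.DataFrame) -> pd.DataFrame:
    out = trials.copy()
    out["rating"] = out.groupby("subject")["rating"].transform(rankdata)
    return out
```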
Our findings indicate that the face-space configuration of frontally-lit face images is similar to that of the corresponding top-lit image variants. It should be noted, however, that the two configurations were each based on ratings obtained from a different group of individuals and were thus also averaged across subjects.
This design did not allow us to evaluate the within-subject correspondence of two individual spaces. In addition, because only one class of transformations was examined, the generalizability of our results to other identity-preserving transformations was limited.
Experiment 2 was designed to address these issues directly. It employed a within-subject design, whereby every subject rated two variants of a stimulus set, each in a separate session.
Furthermore, tolerance was studied not under illumination changes but rather under changes in viewpoint. Twelve undergraduate students from the Department of Psychology, Tel-Aviv University (2 males), none of whom had participated in Experiment 1, took part in the study in exchange for credit toward a course requirement.
One subject was removed from the analysis due to long RTs and apparent misunderstanding of the instructions. All reported normal or corrected-to-normal vision and were not familiar with the faces used as stimuli. To avoid an overly long rating session, we used a subset containing 24 of the 36 faces presented in Experiment 1 (analysis of the Experiment 1 data for this subset revealed findings comparable to those found for the entire set of 36 faces).
All other characteristics of the stimuli and experimental parameters were identical to those in Experiment 1, except that the second stimulus was displaced relative to the first (lower and to the right by a fixed number of pixels). This was done to ensure that subjects compared high-level face representations rather than rated iconic picture similarity.
The procedure was similar to that of Experiment 1, with the exception that each subject rated perceived similarities both within the V0 variant (frontal view) and within the V60 variant (rotated view), in separate sessions. Each session comprised the full set of face pairs for one variant and lasted 45 min. Outlier removal was carried out in the same fashion as in Experiment 1. Before averaging data across sessions to create one MSR matrix for each variant, we sought to confirm that familiarity with the stimuli had not significantly affected subjects' ratings in the 2nd session.
Therefore, we tested the correspondence between the average V0 MSR matrices obtained in the 1st and 2nd sessions, as well as between the average V60 MSR matrices obtained in the 1st and 2nd sessions: If familiarity had a negligible effect, subjects who had been exposed to one of the variants during the 1st session (when the facial identities were still unfamiliar) should have rated perceived similarity in concordance with subjects who were exposed to the same variant during the 2nd session (when the identities might have been familiar).
MDS solutions in 2 to 15 dimensions were generated based on the MSR matrices, following the considerations outlined in Experiment 1. Procrustean analyses revealed that, across the two sessions, both the V0 and the V60 variant shared similar face-space configurations (Table 1).
Table 1. Comparison of ratings from different sessions in Experiment 2.

In addition, as in Experiment 1, an ROC curve was plotted for the Spearman correlations between pairs of V0 and V60 similarity patterns of single faces.
Figure 6. Only the first three dimensions are shown. Locations connected by a horizontal line correspond to V0 and V60 variants of the same facial identity.

Figure 7. Results of Procrustean analysis for comparing V0 and V60 configurations in 2 to 15 dimensions. Plotted are observed d values, indicating the badness of fit of the V0 space to the V60 space (white bars).
Comparison of V0 and V60 similarity ratings was also carried out for each subject separately, using the same analyses as described above. When testing the overall correspondence of individual MSR matrices across experimental conditions, it seemed that similarities did not exhibit much tolerance to viewpoint changes: The correlation between corresponding V0 and V60 inter-item similarities was relatively weak, with a mean Spearman's r of 0.
Figure 8. Subjects' individual MSR matrices were also converted to MDS configurations in 2 to 15 dimensions and analyzed as previously described.
In addition, a separate analysis was performed to test whether the V0 and V60 spaces of the same subject were more concordant than the V0 and V60 spaces of two different subjects.

Figure 9. Results of Procrustean analysis for comparing individual V0 and V60 configurations in 2 to 15 dimensions.
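One plausible way to carry out this comparison is sketched below, assuming per-subject MDS configurations with faces in a common order; it illustrates the logic rather than the authors' actual code.

```python
# A sketch of the within- versus between-subject comparison: the Procrustes
# disparity between a subject's own V0 and V60 configurations is contrasted
# with the disparities obtained when the two configurations come from
# different subjects.
import numpy as np
from scipy.spatial import procrustes

def within_vs_between(v0_spaces, v60_spaces):
    """v0_spaces, v60_spaces: per-subject MDS configurations (same subject
    order in both lists, same face order in every configuration)."""
    n = len(v0_spaces)
    d = np.array([[procrustes(v0_spaces[i], v60_spaces[j])[2]
                   for j in range(n)] for i in range(n)])
    within = np.diag(d)                           # same subject
    between = d[~np.eye(n, dtype=bool)]           # different subjects
    return within.mean(), between.mean()
```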
To further appreciate the pattern of tolerance reflected in the structure of face-space, we used Procrustean analysis to compare each of the V0 and V60 spaces with each of the FL and TL spaces constructed from the Experiment 1 ratings for the corresponding subset of 24 stimuli.
The results, which are purely descriptive, suggest that the FL and TL spaces—both representing configurations of frontal pose stimuli—are more similar to the V0 than to the V60 space.
In addition, the V0 and V60 spaces, both representing configurations of relatively frontally-lit stimuli, are more similar to the FL than to the TL space. These results are in line with the physical changes induced by the illumination and viewpoint transformations, as measured by the Euclidean distance separating the stimuli in pixel space (Figure 10).

Figure 10. Data are presented for 9-dimensional configurations and are representative of the pattern of results for the other dimensionalities. Below each comparison of two spaces, one stimulus from each space is shown for illustration purposes.
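The pixel-space distance measure can be approximated along the following lines; grayscale conversion and the file handling are assumptions, and the images are assumed to share the same dimensions.

```python
# A rough sketch of the pixel-space distance: images of identical size are
# converted to grayscale, flattened, and compared with the Euclidean norm.
# File handling details are assumptions.
import numpy as np
from PIL import Image

def pixel_distance(path_a, path_b):
    a = np.asarray(Image.open(path_a).convert("L"), dtype=float).ravel()
    b = np.asarray(Image.open(path_b).convert("L"), dtype=float).ravel()
    return np.linalg.norm(a - b)
```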
Thus, these results extend the tolerance of similarity relations in face-space, previously observed for illumination changes, to another identity-preserving transformation, that of viewpoint. Such a high degree of tolerance is evident when comparing both group-averaged configurations and individual (single-subject) spaces.
The findings of the current study demonstrate that two variants of a set of faces, differing either in illumination (frontally-lit vs. top-lit) or in viewpoint, give rise to similar face-space configurations. Correspondence between space configurations was found irrespective of dimensionality, at both the group-averaged and individual levels, reflecting the tolerance of similarity relations to the transformations used.
These results are in line with previous studies showing that representations of visual information in similarity spaces preserve their relative locations across different transformations. However, the common approach in such studies has been to examine similarities both within and across viewing conditions: Similarity has usually been measured not only for different objects under the same view but also for different views of the same object.
Interpreting the results of such designs is somewhat problematic, since similarity across viewing conditions only measures the behavioral manifestation of tolerance; it is thus not suited for tapping an underlying representation that may, or may not, reflect that tolerance.
In other words, if subjects are presented with two different views of Jim, measuring their similarity informs us only of subjects' behavioral, surface ability to attribute these images to the same person. In the current study, by contrast, similarities were measured only within each viewing condition. Hence, the observed tolerance of similarities could not have been directly affected by subjects' knowledge that the frontally-lit and top-lit Jim were indeed the same person. Moreover, unlike previous studies, which were concerned with similarity-based representations of non-face objects, the current study examined faces.
Thus, our finding that similarities within one viewing condition correspond to similarities within another viewing condition remains to be tested with regard to objects' shape-spaces in future studies. While we interpret the tolerance of similarities revealed in the current study as indicating correspondence between space configurations under different viewing conditions, an alternative account must also be considered. Although the current study cannot exclude this alternative account, there is evidence that faces are initially encoded using transformation-dependent schemes.
Each of these schemes results, by definition, in a separate space of transformation-dependent representations.