Human vs computer: What effect does the source of information have on cognitive performance and achievement goal orientation? (2024)

1 Introduction

In the next decade, students will increasingly interact with computers in class, in the form of assistive technologies known as tutoring systems. Tutoring systems for human learning are computer environments whose objectives are to promote or encourage learning as well as to supervise and assess knowledge integration [1,2]. Research in psychology, educational science, and educational technology is largely concerned with the bottom-up nature of information presentation, asking questions such as the following: In what form should the information appear? Is the information efficient enough for the task at hand? What is the model of adaptation? Paradoxically, this focus has obscured a fundamental question – the other side of the coin – namely, top-down processes: the evaluation of information in light of its source. Indeed, presenting information for learning or assistance as coming from a computer or algorithm, rather than the same information as coming from a human agent, could result in different sociocognitive processing of that information in terms of perceived reliability or trustworthiness.

Even considering that people tend to treat computers as though they were real people (especially when computers respond in an unexpected way) [3], Friedman, Khan, and Howe assert that “people trust people, not technology” [5]. Therefore, the way in which information is presented and the source of this information could have a strong impact on how the receiver agrees to rely on or follow this information, especially in critical situations. In this study, we address this question in the framework of tutoring systems. We examine whether the integration of information presented by a human or a computer can be considered comparable. Consequently, we aim to evaluate whether the test performance of human learners differs when additional information about the task is presented as either human made (by teachers) or computer made (by algorithms). In light of the relative agreement among the studies described in the next section, we assume that the perceived reliability of the source modulates task outcomes through sociocognitive processing specific to human–human interactions. In addition, we are interested in the role of motivational factors that influence learning processes, such as achievement goals, which have been an important focus of the achievement motivation literature in educational psychology for some decades [6,7].

2 Human–computer interaction

When processing information, the source is one of the main features for the evaluation of the content [8,9,10,11,12]. Depending on the characteristics of the information source, people may be positively or negatively biased according to their evaluation of the source itself [10]. In the context of comparing information from humans and computers, the social identity approach [13] is a psychological theory that explains such biases in evaluation processes. Following the social identity approach, individuals rely on others as a function of their in-group or out-group membership [14,15]. When considering information from their own group members (e.g., humans), individuals are more willing to evaluate the information as reliable compared to that same information from an out-group member (e.g., computers). This process is strengthened by the level of identification with one’s own group and the perceived distance between the in-group and the out-group [16,17,18]. This in-group bias may even result in attributing fewer human characteristics to people from the out-group and therefore lowering their opinion of them [19,20]. Therefore, the distance that individuals perceive between the source of information and their own group (as a symbolic value) modulates their acknowledgment of this information, especially when the information is ambiguous or difficult to process [21]. The source of information is also sufficient to bias decisional processes, even when the information is not accessible. For instance, when individuals are asked to choose between envelopes containing a certain amount of money, without knowing the contents, which were previously allocated by an in-group or out-group member, they are more willing to select the in-group envelope, arguing that group membership is sufficient to bring about group-based reliability [22]. Sharing traits or characteristics with others results in a more positive assessment of their behavior toward the observer.

Now considering human–robot interaction in this framework, researchers have already demonstrated the dichotomy between humans and computers as a cognitive reality; people conceptualize artificial (e.g., robots and algorithms) and natural agents (e.g., human and nonhuman animals) as two different clusters [23] and as a potential basis for human social evaluation processes [24]. For instance, based on this human–computer dichotomy, people declare more trust in human drivers than in autonomous vehicles [25]. This positive bias toward human agents is generalizable to various automation contexts [26]. Apart from the difference in type between humans and computers, another hypothesis to explain this pro-human bias is the lack of potential feedback in decisions involving autonomous agents [27]. Feedback plays an important role in computer–human interactions to maintain the user’s feeling of control, a central predictor of human motivation [28]. Therefore, individuals may not consider information coming from humans and nonhumans as equally reliable, and this difference comes from the perceived nature of the entity rather than the informational content or form. In other words, this difference in the extent to which people will rely on computers compared to humans will depend on their in-group (human) vs out-group (robot) bias and their identification with their (human) group. Little is known about such biases in terms of human cognitive performance, although computers have been introduced in various sectors of human life, including educational learning environments.

3 Motivation in learning: students’ goal orientations

Another determinant that can explain how individuals understand and use information, especially in learning or performance contexts, is their achievement goal orientation. Achievement goal orientations are important motivational factors that affect individuals’ cognitive, motivational, and affective learning outcomes [29,30]. Achievement goal orientations have been defined as students’ general orientation toward learning, that is, the kinds of goals they tend to choose and the kinds of outcomes they prefer in relation to studying [31]. Achievement goal orientations have been differentiated into approach and avoidance goals [30]. In approach goal orientations, behavior is directed by a positive event or possibility (e.g., mastery-approach goals: striving to gain knowledge), whereas in avoidance goal orientations behavior is directed by a negative event or possibility (e.g., performance-avoidance goal orientation: avoiding failure). This theoretical conception is closely related to Atkinson and colleagues’ [32] theoretical conceptualization of achievement motivation, which postulates the existence of a motivation linked to the attraction of success (approach motivation) and another linked to the avoidance of failure (avoidance motivation). Elliot et al. [6] developed the theoretical concept of achievement goal orientations further and proposed a 3 × 2 achievement goal model in which the authors distinguish between the following six goal orientations: task-approach, task-avoidance, self-approach, self-avoidance, other-approach, and other-avoidance. Task-based goals use the demands of the task as the evaluative referent, self-based goals use an individual’s own prior accomplishments and competence development as the evaluative referent, and other-based goals use the accomplishments and competence of others as the evaluative referent.

Taking account of achievement goals is fundamental regarding the incentive to introduce assistive technology in contexts such as schools, because achievement goals are a key factor in predicting students’ performance, academic engagement, and well-being [33,34].

Achievement goals might differ according to the representation of the task as fully abstract (computer version) or anchored in a more standard educational context (teacher version), and the sensitivity of users in terms of their use of a specific learning method could depend on their current achievement goal. The reason is threefold. First, social constructivist theory [35] posits that, in the context of social learning, motivation is inseparable from the instructional process and the social environment. For instance, the social nature of the context results in an internal state of interest and cognitive and affective engagement. Also, motivation is increased in learning situations that involve social settings [36]. However, research has already demonstrated that humans and computers are considered entities of a different nature [23] – even though people may relate to computers in the same way as to social agents [3] – reducing the degree of social nature primarily associated with computers compared to human agents. Therefore, humans and computers could diverge in terms of their likelihood of creating or maintaining the social characteristics of a setting, resulting in a different level and type of contextual motivation. Second, individuals tend to trust computers less than they trust other humans, even when experiencing cooperative attitudes exhibited by these computers, impairing human willingness to cooperate on the task [37]. Therefore, presenting information as human- or computer-generated modulates the social nature of this information as well as the motivation to work on the given task. Processing social vs nonsocial information engages more neural reward circuitry, particularly the striatum, which is also involved in the motivation to process social information [38]. Third, with respect to their goal orientation, individuals may not consider a task, a lesson, or a learning method in the same way and with the same efficiency.
Individuals who tend to compare themselves with others and feel threatened by social comparison, which can impair their performance [39], could feel better working with a computer-assistive technology that does not provide any potentially threatening social feedback. However, the link between learning technology and goal orientations remains poorly explored [40]. Consequently, it is important to investigate whether achievement goals are affected by top-down processing of information displayed by technologies and whether new technologies such as intelligent tutoring systems are perceived as reliable in such use contexts.

3.1 The present study

The first objective of this study was to evaluate whether the source of information (human or computer) may bias the processing of information, as indexed by performance on a reasoning task, which is widely used in IQ assessments and is involved in learning. Because the scarcity of cognitive resources available for a given task increases the saliency of the information source under evaluation, we contrasted different levels of task difficulty with the hypothesis that source effects should be most pronounced when the task is difficult [41,42,43,44]. In addition, considering that the usefulness of pedagogical agents (human or computer) is particularly relevant in school contexts, our second objective was to examine the influence of the source of information on learners. Finally, given that the social dimension is crucial in explaining the influence of the information source on performance, our last objective was to determine whether different types of achievement goals, which rely more or less on social comparison, would mediate the influence of the information source on test performance. We assumed that achievement goal orientations related to social comparison (other-goal orientation) would be particularly strongly affected by the source of information and would in turn be particularly relevant for participants’ test performance.

To test our assumptions, we conducted two experiments. In the first experiment, we tested how the source of information (computer vs human teacher) affected test performance and how the information source interacted with the level of task difficulty in affecting cognitive performance (measured via test performance). In the second experiment, we tested how the source of information (computer vs human teacher) affected test performance and achievement goal orientation, how the information source interacted with the level of task difficulty in affecting motivation, and how motivation (measured via achievement goal orientations) in turn related to test performance (mediation model).

3.2 Experiment 1

The first experiment aimed to investigate the central point of our hypothesis: the role of the source of information as a determinant of the use of the information. To do so, we used a logical reasoning task based on “Raven matrices.” Raven matrices are a family of multiple-choice intelligence tests originally created by John Carlyle Raven. Each item presents a matrix whose missing cell must be completed by the participant. The task of the participants was to complete the Raven matrices with written support presented as developed by either a human teacher or a computer. We hypothesized that participants would perform better when the information about the task was presented as provided by a human teacher rather than a computer, a possible explanation for such an advantage being a positive human bias [26]. We further assumed that performance differences between the teacher group and the computer group would be more pronounced when working on difficult tasks because of the higher saliency of the information source and more stereotypical processing [41,42,43,44].

3.3 Method

The participants (N = 106) were recruited online (M age = 22.3 years, SD = 5.06; 42 males, 61 females, and 3 nondeclared). Participants were informed about the voluntary nature of the experiment and performed the task online. At the beginning of the experiment, participants were instructed with the following text: “In this study you will have to identify the logical sequence of nine matrices. A matrix is a grid divided into nine cells where eight of them contain graphic figures arranged according to a precise logic. You must therefore discover what this logic is in order to choose, among several proposals, the one that can fit into the empty box. At the end of the experiment you will be able to access your score, which represents your success in the different matrices.”

The nine matrices were divided into three levels of difficulty (see Figure 1). The three easy (a), medium (b), and hard (c) matrices were presented in random order for all participants.


Figure 1

Example of matrices for easy (a), medium (b), and hard (c) trials.

For each matrix, participants were given two trials. If participants failed on the first trial, they received a cue to help them solve the problem on the second trial. If they succeeded, the trial ended. For example, on the difficult matrix (c) presented in Figure 1, the cue after the first trial was “Whether horizontally or vertically, the third square is the transformed result of a superimposition.” Participants were randomly assigned to one of our two conditions (teacher vs computer). For half of the participants, the cues were presented as emanating from human teachers who had designed these cues to help them solve the matrices (i.e., teachers’ cue condition). For the other half of the participants, the cues were presented as emanating from an intelligent tutoring system that had designed the cues to assist a better understanding of logic and problem solving (i.e., computer’s cue condition):

“If you make a mistake, you will have a second chance with a clue that has been defined by [a human teacher/an intelligent tutoring system] to assist a better understanding of the logic and a resolution of the problem.” We chose a semantic priming paradigm because, in our experiment, it was a reliable way to ensure that the experimental conditions were comparable. In both conditions, the cues were strictly identical. The only difference was the top-down priming about the source of the information.
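The two-trial procedure can be summarized schematically as follows (an illustrative sketch, not the experimental software; `solve_attempt` stands in for a participant's response and is a hypothetical name):

```python
def run_matrix_trial(solve_attempt, cue):
    """Run one matrix under the two-trial procedure described above.

    solve_attempt: callable taking the cue (None on the uncued first
    attempt) and returning True on a correct answer.
    Returns (solved, received_cue).
    """
    if solve_attempt(None):        # first trial, no instruction
        return True, False         # success ends the trial
    # failure on the first trial: second trial with the framed cue
    return solve_attempt(cue), True
```

Only participants who fail the first, uncued attempt ever see the cue, which is why the second-trial accuracies reported below are conditional probabilities computed over the subsample of initial failures.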

3.4 Variables

We manipulated the type of helping cues, presented as provided by either teachers or a computer, as a between-participants factor. The level of difficulty (easy, medium, and hard) was manipulated as a within-participants factor. We measured the performance of participants as their accuracy on the task.

4 Results

We conducted a mixed-design analysis of variance (ANOVA), with accuracy in the first and second trials (Table 1) as the within factor and the experimental group (teachers’ cue vs computer’s cue) as the between factor. Results showed a significant interaction, F(1, 94) = 6.09, p = 0.015, η_p² = 0.06. Contrasts showed that no difference was observed in the first trials, F(1, 94) = 0.05, p = 0.819, η_p² = 0.01. As expected, in the second trials, participants in the teachers’ cue condition performed better than participants in the computer’s cue condition, F(1, 94) = 6.62, p = 0.012, η_p² = 0.07.
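The partial eta-squared values reported here can be recovered directly from the F statistics and their degrees of freedom via η_p² = (F × df1)/(F × df1 + df2); a minimal sketch (a generic helper, not part of our analysis pipeline):

```python
def partial_eta_squared(f_stat, df_effect, df_error):
    """Partial eta squared recovered from an F statistic and its dfs."""
    return (f_stat * df_effect) / (f_stat * df_effect + df_error)

# The interaction reported above, F(1, 94) = 6.09:
interaction_effect_size = partial_eta_squared(6.09, 1, 94)  # ~= 0.06
```

The same computation applied to the second-trial contrast, F(1, 94) = 6.62, yields approximately 0.07, consistent with the value reported above.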

Table 1

Experiment 1: Participants’ test performance measured via accuracy score as a function of difficulty and trial session

              First trials (control trials)   Second trials (experimental trials)
              Easy    Medium   Hard           Easy       Medium     Hard
Computer      0.80    0.71     0.42           0.76 (10)  0.50 (31)  0.34 (46)
Teacher       0.93    0.68     0.42           0.70 (8)   0.60 (33)  0.51 (47)

Note. The first trials (in italics) represent the trial without any instruction, p(success), and serve as control measures. The second trials represent the trial after an error in the first trial, p(success | initial failure), after receiving an instruction presented as coming from either the computer or a human teacher. The number of participants at each level is presented in parentheses. The proportion of participants failing the first trial and receiving a cue is the complement of p(success) in the first trial (e.g., 0.20 and 0.07 in easy first trials in the computer and teacher conditions, respectively). p refers to a conditional probability.

To further compare the experimental groups at each level of difficulty, we conducted separate mixed-design ANOVAs on test performance (accuracy scores) in the second trials as the dependent variable (DV), with difficulty as the within-participant factor and the source as the between-participant factor. These separate analyses accounted for the differing sample sizes across difficulty levels (more participants reached the second trials on difficult than on easy matrices). In the second trials, we did not find any difference between groups for the easy trials, F(1, 17) = 0.01, p = 0.926, η_p² < 0.01, or medium trials, F(1, 64) = 0.43, p = 0.516, η_p² = 0.01. However, we found an effect for the difficult trials, F(1, 93) = 4.93, p = 0.029, η_p² = 0.05. Participants in the computer’s cue condition showed a lower performance (i.e., lower accuracy scores) than participants in the teacher’s cue condition.

5 Discussion

This first experiment aimed to compare how the use of instructions to solve a task presented as generated by a human vs computer might result in a difference in performance on a standard logical reasoning task. Our results argue for a difference only in difficult tasks, which is congruent with the literature on stereotypical processing of information under cognitive load [41,42,43,44].

5.1 Experiment 2

The first experiment showed an effect of the source of information on logical reasoning performance when using the instructions provided by either a human or a computer. Because performance is not only defined by pure reasoning cognitive processes, this second experiment extends our understanding of the effects that computers vs humans as sources of information have on cognitive processes by integrating achievement motivation as a main determinant of performance. To do so, we used the 3 × 2 achievement goal model proposed by Elliot et al. [6] in which the authors identified six goal orientations: task-approach, task-avoidance, self-approach, self-avoidance, other-approach, and other-avoidance.

As far as we know, the relationship between source-dependent performance cueing and achievement goal orientations has not yet been explored. Our approach is therefore exploratory but informed by achievement goal theory. Students develop their achievement goal orientations based on their perceptions of teachers’ evaluations, autonomy, recognition, and authority. Presenting instruction as coming from two different sources (human vs computer) should change the saliency of individual goal orientations. More specifically, task orientation and self-orientation tend to be enhanced by teachers’ provision of autonomy, recognition, and evaluation [45]. In addition, because students have an inherent need for relatedness to teachers, the incentive to outperform others by being better or not worse than others (other-approach vs other-avoidance orientation) is expected to be higher in the human compared to the computer condition. As we already showed in the first experiment, the simple nature of the source of information is integrated as a relevant dimension in the processing of information. In this second experiment, we extended this approach to test whether the nature of the information source is also integrated in human motivational functioning. We also tested whether the effect of the source of information on achievement motivation (operationalized via achievement goal orientations) depends on task difficulty. We further expected that achievement goal orientations might work as a mediator of the effect of the condition (human vs computer) on test performance, specifically in difficult trials that strengthen the dichotomy between human and computerized information sources.

5.2 Method

The participants (N = 1,009) were recruited online (M age = 21.31 years, SD = 7.05; 302 males, 692 females, and 15 nondeclared). Participants were informed about the voluntary nature of the experiment and performed the task online. The procedure was similar to that of Experiment 1 except for an achievement goal orientation questionnaire at the end of the experiment. The aim of this questionnaire was twofold: (1) to evaluate the difference in goal orientation as a function of the experimental condition and (2) to evaluate whether achievement goal orientation could bias participants’ performance.

5.2.1 Achievement goal orientation

We asked participants to complete the 3 × 2 achievement goal questionnaire [6] to evaluate their achievement goal orientation during the task. Questions were presented in random order. Participants were informed that they would be presented with statements representing different types of goals that they may or may not have for the current task (e.g., “To do well compared to others in the class on the exams” and “To know the right answers to the questions on the exams in this class”). Participants were instructed to indicate how true each statement was for them on a 1 (not true of me) to 7 (extremely true of me) scale.
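Scoring the questionnaire reduces to averaging each construct's item ratings on the 1–7 scale; a minimal sketch (the item identifiers are hypothetical, not those of the published questionnaire):

```python
def score_goal_orientations(responses, item_map):
    """Mean 1-7 rating per goal-orientation construct.

    responses: dict mapping item id -> rating (1-7).
    item_map: dict mapping construct name -> list of its item ids
    (the published 3 x 2 questionnaire assigns three items to each
    of the six constructs).
    """
    return {construct: sum(responses[i] for i in ids) / len(ids)
            for construct, ids in item_map.items()}
```

For example, a participant rating the three (hypothetical) task-approach items 7, 5, and 6 would receive a task-approach score of 6.0.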

5.3 Variables of interest

We manipulated the type of helping cues, presented as provided by either teachers or a computer, as a between-participants factor. The level of difficulty (easy, medium, and hard) was manipulated as a within-participants factor. We measured the performance of participants as their accuracy on the task. We also measured the achievement goal orientation of participants with a questionnaire.

5.4 Results

5.4.1 Analysis strategy

First, we replicated the analysis of Experiment 1 (i.e., Performance section). Second, we checked the reliability of the achievement goal orientation scale and evaluated whether the experimental condition (human vs computer) could influence participants’ achievement goal orientation using a MANOVA (i.e., Achievement goals section). Finally, to investigate the mediating influence of achievement goal orientation on the effect of the experimental condition on performance, we conducted mediation analyses (i.e., Mediation section).

5.4.2 Performance

As in Experiment 1, we first conducted a mixed-design ANOVA on test performance (accuracy scores), with the first and second trials (see Table 2) as the within factor and the experimental group (teacher’s cue vs computer’s cue) as the between factor. Results showed a significant interaction, F(1, 858) = 7.74, p = 0.005, η_p² = 0.01. Contrasts showed that while no difference was observed in the first trials, F(1, 858) = 0.40, p = 0.528, η_p² < 0.01, participants in the teacher’s cue condition performed better in the second trials than participants in the computer’s cue condition, F(1, 858) = 11.22, p = 0.001, η_p² < 0.02, similar to Experiment 1.

Table 2

Experiment 2. Participants’ test performance measured via accuracy score as a function of difficulty and trial session

              First trials (control trials)   Second trials (experimental trials)
              Easy    Medium   Hard           Easy       Medium      Hard
Computer      0.92    0.75     0.46           0.72 (96)  0.68 (265)  0.59 (423)
Teacher       0.93    0.78     0.49           0.78 (93)  0.73 (268)  0.68 (437)

Note. The first trials (in italics) represent the trial without any instruction, p(success), and serve as control measures. The second trials represent the trial after an error in the first trial, p(success | initial failure), after receiving an instruction presented as coming from either the computer or a human teacher. The number of participants at each level is presented in parentheses. The proportion of participants failing the first trial and receiving a cue is the complement of p(success) in the first trial (e.g., 0.08 and 0.07 in easy first trials in the computer and teacher conditions, respectively).

Again, we conducted separate mixed-design ANOVAs on test performance (accuracy scores) in the second trials to compare the experimental groups at each level of difficulty. For the second trials, we did not find any difference between groups in the easy trials, F(1, 188) = 0.86, p = 0.355, η_p² = 0.01, or medium trials, F(1, 532) = 1.80, p = 0.180, η_p² = 0.01. However, as in Experiment 1, we found an effect for the difficult trials, F(1, 858) = 11.22, p = 0.001, η_p² = 0.01, in which participants presented with human cues outperformed those presented with computer cues.

5.4.3 Achievement goals

We first assessed the reliability of the scale with a confirmatory factor analysis. Originally, the scale consisted of six constructs (three items each); however, in our data, the confirmatory factor analysis showed a three-factor solution (the factorial analysis is available in the Appendix, https://osf.io/8xjev/), with no difference between the task-approach and task-avoidance goal constructs (10.30% of explained variance, α = 0.90), the self-approach and self-avoidance goal constructs (18.35% of explained variance, α = 0.93), or the other-approach and other-avoidance goal constructs (47.38% of explained variance, α = 0.97). Therefore, we used these three factors as the task-goal, self-goal, and other-goal constructs in the subsequent analyses.
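The α values reported above follow the standard Cronbach's alpha formula, α = k/(k − 1) × (1 − Σ item variances / variance of total score); a generic sketch (not our analysis script):

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_respondents, n_items) rating matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                               # number of items
    item_variances = scores.var(axis=0, ddof=1).sum() # sum of item variances
    total_variance = scores.sum(axis=1).var(ddof=1)   # variance of sum scores
    return k / (k - 1) * (1.0 - item_variances / total_variance)
```

As a sanity check, a set of perfectly redundant items yields α = 1, while fully uncorrelated items yield α = 0.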

We conducted a MANOVA including the three achievement constructs (task goal, self-goal, and other goal) as DVs and the experimental group as the IV. No difference was observed on the task-goal dimension, F(1, 1009) = 0.19, p = 0.660, η_p² = 0.01. However, participants in the computer’s cue condition declared a higher self-goal orientation, F(1, 1009) = 16.42, p < 0.001, η_p² = 0.02, and a lower other-goal orientation on the task, F(1, 1009) = 6.75, p = 0.010, η_p² = 0.01, compared to participants in the teacher’s cue condition. To assess the independence of the two dimensions, we examined the correlation between self- and other-goal orientations. The results indicate a medium-sized positive correlation between the two goal orientations, r = 0.39, p < 0.001, implying that including both goal orientations in the analysis would not lead to confounded results due to high multicollinearity.

5.4.4 Mediation analyses

Finally, we conducted mediation analyses including the group (human teacher vs computer) as the IV (X), the three achievement goal dimensions (task goal, self-goal, and other goal) as mediators (M), and the test performance (measured via accuracy scores) on easy, medium, and hard second-session trials as DVs (Y). For each mediator, we controlled for the two other achievement dimensions as covariates. Results were only significant for the difficult trials (Figure 2). First, the mediation analysis confirmed that participants in the teacher’s cue condition declared a lower self-goal achievement orientation, b = −0.12, t(855) = −4.65, p < 0.001, CI 95% [−0.301, −0.122], and a higher other-goal achievement orientation, b = 0.10, t(855) = 3.29, p = 0.001, CI 95% [0.077, 0.321], compared to participants in the computer’s cue condition. Again, no difference was observed in the task-goal achievement orientation, b = 0.04, t(855) = 1.57, p = 0.118, CI 95% [−0.016, 0.141]. While self-goal orientation was not related to performance, b = −0.08, t(854) = −1.91, p = 0.058, CI 95% [−0.040, 0.001], task-goal orientation was positively related, b = 0.14, t(854) = 3.29, p = 0.001, CI 95% [0.016, 0.061], and other-goal orientation negatively related, b = −0.10, t(854) = −2.74, p = 0.006, CI 95% [−0.035, −0.006], to performance. Consistent with the previous analysis, only the other-goal achievement mediation was significant, b = −0.01, CI 95% [−0.021, −0.002]. In sum, participants receiving human cues compared their performance on the “other” dimension to a greater extent than participants receiving cues from a computer; also, the higher the other-goal orientation, the lower the participants’ performance on difficult trials. All other ps > 0.05.
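The logic of the indirect-effect test can be sketched as follows: the indirect effect is the product a × b, where a is the effect of the condition on the mediator and b is the effect of the mediator on performance with the condition controlled, and its confidence interval is obtained by percentile bootstrap. This is a simplified single-mediator sketch without the covariate adjustment used in our models, with illustrative variable names:

```python
import numpy as np

def indirect_effect(x, m, y, n_boot=2000, seed=0):
    """Point estimate and percentile-bootstrap 95% CI for the a*b
    indirect effect of x on y through a single mediator m."""
    rng = np.random.default_rng(seed)
    x, m, y = (np.asarray(v, dtype=float) for v in (x, m, y))

    def ab(idx):
        a = np.polyfit(x[idx], m[idx], 1)[0]                   # slope of m ~ x
        design = np.column_stack([np.ones(len(idx)), x[idx], m[idx]])
        b = np.linalg.lstsq(design, y[idx], rcond=None)[0][2]  # m's slope in y ~ x + m
        return a * b

    n = len(x)
    estimate = ab(np.arange(n))
    boots = [ab(rng.integers(0, n, n)) for _ in range(n_boot)]
    return estimate, np.percentile(boots, [2.5, 97.5])
```

A mediation is deemed significant when the bootstrap CI of a × b excludes zero, the criterion applied to the other-goal path above.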


Figure 2

Mediation model with the experimental group (human vs computer) as the IV, the goal orientations as mediators, and the performance on difficult trials as the DV. Green lines represent positive β (human > computer) and red lines negative β (human < computer). **p < 0.01 and ***p < 0.001.

6 Discussion

The first objective of Experiment 2 was to replicate the results of Experiment 1, that is, the effect of a human vs computer information source on accuracy for difficult items. Our results confirmed the observations of Experiment 1. In a second step, we sought to link the source effects on performance with the nature of the participants’ achievement goals, which are crucial for performance, engagement, and well-being in the academic context. First of all, our results showed an absence of effect on task orientation, as we had hypothesized. Contrary to our expectations, however, we found a greater self-orientation when the source of the information was a computer rather than a human, but this was not related to performance. Finally, in line with our hypotheses, participants in the “human” condition reported higher levels of other-goal orientation, which in turn mediated the effect of the experimental condition on test performance (accuracy), as it negatively affected test performance.

6.1 General discussion

Any technology that relies on the use of external information in the production of a response, performance, learning, or any task must address the cognitive processing of this information. Considering the perception of a message from the user’s point of view is fundamental if we want to anticipate the sociocognitive effects that are central to the evaluation processes of individuals. In this study, we were interested in the differences between humans and computers as a source of instructional support on a cognitive task, illustrating the processes involved in the daily lives of individuals. We were able to demonstrate that cognitive performance depended on three factors: the source of information, the difficulty of the task, and the achievement goal orientation.

Our findings reinforce the relevance of current research into human–computer interactions (e.g., chatbots, intelligent tutoring systems) by highlighting the crucial role of the top-down nature of information presentation: Whether the information is perceived as coming from a human source or a computer agent modulates cognitive and motivational task-related processes in humans. Indeed, if effects can already be observed on reasoning tasks that have been shown to predict academic achievement [47,48], it seems likely that similar source effects would be found for more extended content (e.g., reading and arithmetic) in educational settings. Likewise, one can imagine that fundamental cognitive processes such as memorization or comprehension of information could be modulated through manipulation of the source. Moreover, in our study, the experimental priming (human vs computer instruction) was intentionally kept minimal to ensure a highly controlled paradigm. It would therefore be interesting to extend this paradigm to a more ecological interaction with a human agent and a computer, and in particular to look at the long-term effects of information integration.

It is interesting to note that the effect of the source on performance depends on the difficulty of the task, in other words, on the associated cognitive effort. It is under high cognitive demand that the source effect appears. These results echo research on stereotypical assessment under high cognitive load, where the source becomes more important than the message [43]. During a difficult task, a larger part of the individual’s cognitive resources, which would otherwise allow in-depth processing of information, is taken up by the task. As a result, the processing of subsequent information becomes more superficial due to a lack of resources for deeper processing. An individual unable to process information in a detailed manner will therefore use the most easily accessible and assessable information, such as the nature of the source, and rely on their representations of it, which may be individual or cultural, to process and use the information. As mentioned above, it seems important to consider the introduction of assistive technologies within a comprehensive framework of individuals’ perception, processing, and decision processes. Considering assistive agents from a purely technological perspective, neglecting sociocognitive aspects, would mean ignoring a major part of what defines individuals in their relationship to this environment.

In order to support our basic argument, particularly with regard to the school context, we also looked at individuals’ motivation using achievement goal theory [31]. Our data, through confirmatory factor analysis, allowed a more marked dichotomy between the different achievement goal orientations, which are task-orientation, self-orientation, and other-orientation. Interestingly, we found that self-oriented motivation was higher in the computer condition than in the human teacher condition. However, it was only when considering the intrinsic need for relatedness, illustrated by the comparison to others, that we found an effect on performance. Our results argue that the simple top-down inference about the source of information may interact with individuals’ dispositional factors. When the source is human, people who strive for relatedness also perform better. If computers are to be considered as a vehicle for adapting content to the user, we see that we must consider not only the characteristics involved in using a technology but also the psychological characteristics of the user. More specifically, we argue that individuals’ social and motivational needs have to be taken into account in the context of human–computer interactions. These needs go beyond the sole consideration of performance-oriented factors, such as memory and working memory capacity. In addition, goal orientation can change dynamically as individuals progress through a learning or performance experience and has been shown to vary over longer time periods (e.g., a semester) [49,50]. Therefore, we could hypothesize that the present results could vary over time and that a reliable assistive agent should take into account the motivation and emotions of the user.

The mediation analyses also revealed that the use of computer-assistive devices could reduce other-goal orientation and thereby produce positive effects on performance. To interpret this result, we have to take into account social comparison theory [51]. This theory frames the way humans evaluate their abilities (or opinions) in (direct or indirect) comparison to others. In particular, in the absence of objective criteria, the theory details how people will compare themselves to other individuals. On performance tasks, because individuals want to maintain their self-esteem [52], they can feel threatened when they think that they may not reach the expectation or performance of other individuals. This comparison might be direct (comparing two scores) but also indirect (comparing one’s score to an abstract prototype or to what is conceived as the expectation of others [52,53]) [38]. When this comparison is negative, the result may be a feeling of threat and an impairment of performance due to a depletion of the available cognitive resources. With respect to this mediation result on difficult trials, we could hypothesize that, when participants feel threatened by the difficulty, they consider the human cues as relative to an expectation, while the computer cues are dissociated from any comparative feature [40]. The use of computer-assistive tools would therefore reduce the threat for people with a high other-goal orientation and inhibit the impairment of performance related to this comparison situation. However, while this explanation is coherent with social comparison theory and previous results on comparison to artificial agents, it remains conjectural, and further studies will have to investigate this hypothesis.
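The mediation logic above (condition → other-goal orientation → accuracy, with the indirect effect given by the product of the two paths) can be sketched with two ordinary regressions. The snippet below is a minimal, self-contained illustration on simulated data; all variable names and effect sizes are hypothetical, chosen only to mirror the reported sign pattern (a positive path from the human condition to other-goal orientation, a negative path from orientation to accuracy). It is not the analysis code used in the study.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200

# Hypothetical coding: 0 = computer condition, 1 = human condition.
group = rng.integers(0, 2, n).astype(float)

# Simulated pattern: the human condition raises other-goal orientation
# (path a > 0), which in turn lowers accuracy on difficult trials (path b < 0).
orientation = 0.5 * group + rng.normal(0, 1, n)
accuracy = -0.4 * orientation + rng.normal(0, 1, n)

def ols(y, predictors):
    """Least-squares coefficients for y ~ intercept + predictors."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# Path a: condition -> mediator.
a = ols(orientation, [group])[1]
# Path b: mediator -> outcome, controlling for condition.
b = ols(accuracy, [group, orientation])[2]

# Indirect (mediated) effect: negative, as in the reported pattern.
indirect = a * b
print(f"a = {a:.3f}, b = {b:.3f}, indirect effect = {indirect:.3f}")
```

In practice the indirect effect would be accompanied by a bootstrap confidence interval rather than a point estimate alone; the sketch only shows where each path comes from.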

Our study had several strengths but also some limitations. First, the group sizes varied across task difficulty groups, with particularly small easy-task groups. Although our statistical analyses are somewhat robust to unequal group sizes, we aim to conduct further research with larger samples in order to replicate our findings. Second, our operationalization of the perceived reliability of the source of information was effective but could be further elaborated, and future research might add additional measures, such as perceived stress or exhaustion when working on a test. Third, the effect size was relatively small. Several factors could explain this point: In Experiment 2, participants were recruited online, which increased between-participants variability. Also, the task and the number of trials had been set for an online experiment, which reduced the number of points available to estimate each participant’s slope. Finally, in the second study, we did not control for differences in achievement goal orientation prior to the experiment. While random allocation reduces the likelihood that such a difference occurred, we cannot rule this out.

Despite these limitations, this study is unique and highly important for research on assistive systems, as it emphasizes the need to take motivational processes into account when dealing with questions of interactions between humans and computers. We show here that to design better assistive tools one has to consider how individuals perceive these tools (in terms of top-down attributions) and how these tools fit with their achievement goals. Therefore, these measures could be added prior to testing to evaluate their influence on the different tools in development, to ensure their adaptability to each user. Also, our results support the idea of a complementarity between assistive technology and humans. In particular, technological tools provide an interesting adaptation for people who are threatened by social comparison situations. Assistive intelligent technologies (AIT) and AI will likely play a more important role in modern classrooms due to an increasing diversity of the student population and to large classrooms, in which teachers have limited resources to address the individual needs of each learner. AIT and AI offer the possibility to process information from heterogeneous students in parallel and to provide individual pedagogical strategies to each student simultaneously, which is currently limited physically by class size and psychologically by attention. It is therefore important to conduct research on AIT in classrooms and on the related challenges pertaining to ethics, data protection laws, and data privacy, as well as the complexity of teacher–student relationships that cannot be substituted by intelligent tutors. Indeed, AI technology is a resource that should always assist teachers and will be important in the future of classroom research and practice.

  1. Funding: This research was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany’s Excellence Strategy – EXC 2002/1 “Science of Intelligence” – project number 390523135.

  2. Data availability statement: The datasets generated during and/or analysed during the current study are available from the corresponding author on reasonable request.

References

[1] M. Chassignol, A. Khoroshavin, A. Klimova, and A. Bilyatdinova, “Artificial intelligence trends in education: a narrative overview,” Procedia Computer Science, vol. 136, pp. 16–24, 2018, doi: 10.1016/j.procs.2018.08.233.

[2] H. S. Nwana, “Intelligent tutoring systems: an overview,” Artif. Intell. Rev., vol. 4, no. 4, pp. 251–277, 1990, doi: 10.1007/BF00168958.

[3] C. Nass and Y. Moon, “Machines and mindlessness: social responses to computers,” J. Soc. Issues, vol. 56, no. 1, pp. 81–103, 2000, doi: 10.1111/0022-4537.00153.

[4] N. Epley, A. Waytz, and J. T. Cacioppo, “On seeing human: a three-factor theory of anthropomorphism,” Psychol. Rev., vol. 114, no. 4, pp. 864–886, 2007, doi: 10.1037/0033-295X.114.4.864.

[5] B. Friedman, P. H. Kahn, and D. C. Howe, “Trust online,” Commun. ACM, vol. 43, no. 12, pp. 34–40, 2000, doi: 10.1145/355112.355120.

[6] A. J. Elliot, K. Murayama, and R. Pekrun, “A 3 × 2 achievement goal model,” J. Educ. Psychol., vol. 103, no. 3, pp. 632–648, 2011, doi: 10.1037/a0023952.

[7] E. S. Elliott and C. S. Dweck, “Goals: an approach to motivation and achievement,” J. Pers. Soc. Psychol., vol. 54, no. 1, pp. 5–12, 1988, doi: 10.1037/0022-3514.54.1.5.

[8] D. Westerman, P. R. Spence, and B. Van Der Heide, “Social media as information source: recency of updates and credibility of information,” J. Comput. Commun., vol. 19, no. 2, pp. 171–183, 2014, doi: 10.1111/jcc4.12041.

[9] R. Thomson, N. Ito, H. Suda, F. Lin, Y. Liu, R. Hayasaka, et al., “Trusting tweets: the Fukushima disaster and information source credibility on Twitter,” in ISCRAM 2012 Conference Proceedings – 9th International Conference on Information Systems for Crisis Response and Management, 2012, pp. 1–10.

[10] M. S. Eastin, “Credibility assessments of online health information: the effects of source expertise and knowledge of content,” J. Comput. Commun., vol. 6, no. 4, JCMC643, 2001, doi: 10.1111/j.1083-6101.2001.tb00126.x.

[11] M. J. Metzger, A. J. Flanagin, and R. B. Medders, “Social and heuristic approaches to credibility evaluation online,” J. Commun., vol. 60, no. 3, pp. 413–439, 2010, doi: 10.1111/j.1460-2466.2010.01488.x.

[12] C. I. Hovland and W. Weiss, “The influence of source credibility on communication effectiveness,” Public Opin. Q., vol. 15, no. 4, pp. 635–650, 1951, doi: 10.1086/266350.

[13] N. Ellemers and S. A. Haslam, “Social identity theory,” in Handbook of Theories of Social Psychology, P. A. M. Van Lange, A. W. Kruglanski, and E. T. Higgins, Eds., Sage Publications Ltd, 2012, pp. 379–398, doi: 10.4135/9781446249222.n45.

[14] M. J. Platow, M. Foddy, T. Yamagishi, L. Lim, and A. Chow, “Two experimental tests of trust in in-group strangers: the moderating role of common knowledge of group membership,” Eur. J. Soc. Psychol., vol. 42, no. 1, pp. 30–35, 2012, doi: 10.1002/ejsp.852.

[15] M. Tanis and T. Postmes, “A social identity approach to trust: interpersonal perception, group membership and trusting behaviour,” Eur. J. Soc. Psychol., vol. 35, no. 3, pp. 413–424, 2005, doi: 10.1002/ejsp.256.

[16] W. G. Stephan and C. W. Stephan, “Intergroup threat theory,” in The International Encyclopedia of Intercultural Communication, Y. Y. Kim, Ed., John Wiley & Sons, Inc., doi: 10.1002/9781118783665.ieicc0162.

[17] M. R. Fraune, S. Sabanovic, and E. R. Smith, “Teammates first: favoring ingroup robots over outgroup humans,” in RO-MAN 2017 – 26th IEEE International Symposium on Robot and Human Interactive Communication, 2017, doi: 10.1109/ROMAN.2017.8172492.

[18] Y. R. Chen, J. Brockner, and X. P. Chen, “Individual-collective primacy and ingroup favoritism: enhancement and protection effects,” J. Exp. Soc. Psychol., vol. 38, no. 5, pp. 482–491, 2002, doi: 10.1016/S0022-1031(02)00018-5.

[19] N. Haslam and S. Loughnan, “Dehumanization and infrahumanization,” Annu. Rev. Psychol., vol. 65, no. 1, pp. 399–423, 2014, doi: 10.1146/annurev-psych-010213-115045.

[20] R. Gaunt, J. P. Leyens, and S. Demoulin, “Intergroup relations and the attribution of emotions: control over memory for secondary emotions associated with the ingroup and outgroup,” J. Exp. Soc. Psychol., vol. 38, no. 5, pp. 508–514, 2002, doi: 10.1016/S0022-1031(02)00014-8.

[21] A. Cichocka, M. Marchlewska, A. Golec de Zavala, and M. Olechowski, “‘They will not control us’: ingroup positivity and belief in intergroup conspiracies,” Br. J. Psychol., vol. 107, no. 3, pp. 556–576, 2016, doi: 10.1111/bjop.12158.

[22] M. Foddy and R. Dawes, “Group-based trust in social dilemmas,” in New Issues and Paradigms in Research on Social Dilemmas, A. Biel, D. Eek, T. Garling, and M. Gustafsson, Eds., Springer Science and Business Media, New York, 2008, pp. 57–71, doi: 10.1007/978-0-387-72596-3_5.

[23] N. Spatola and K. Urbanska, “God-like robots: the semantic overlap between representation of divine and artificial entities,” AI Soc., vol. 35, pp. 329–341, 2019, doi: 10.1007/s00146-019-00902-1.

[24] N. Spatola, N. Anier, S. Redersdorff, L. Ferrand, C. Belletier, A. Normand, et al., “National stereotypes and robots’ perception: the ‘made in’ effect,” Front. Robot. AI, vol. 6, 2019, doi: 10.3389/frobt.2019.00021.

[25] M. A. Nees, “Acceptance of self-driving cars: an examination of idealized versus realistic portrayals with a self-driving car acceptance scale,” in Proceedings of the Human Factors and Ergonomics Society, 2016, pp. 1448–1452, doi: 10.1177/1541931213601332.

[26] J. D. Lee and K. A. See, “Trust in automation: designing for appropriate reliance,” Human Factors, vol. 46, no. 1, pp. 50–80, 2004, doi: 10.1518/hfes.46.1.50_30392.

[27] D. A. Norman, “The ‘problem’ with automation: inappropriate feedback and interaction, not ‘over-automation’,” Philos. Trans. R. Soc. Lond. B. Biol. Sci., vol. 327, no. 1241, pp. 585–593, 1990, doi: 10.1093/acprof:oso/9780198521914.003.0014.

[28] L. A. Leotti, S. S. Iyengar, and K. N. Ochsner, “Born to choose: the origins and value of the need for control,” Trends Cogn. Sci., vol. 14, no. 10, pp. 457–463, 2010, doi: 10.1016/j.tics.2010.08.001.

[29] E. M. Anderman and C. A. Wolters, “Goals, values, and affect: influences on student motivation,” in Handbook of Educational Psychology, 2nd ed., P. Alexander and P. Winne, Eds., Simon & Schuster/Macmillan, New York, NY, 2015, doi: 10.4324/9780203874790.ch17.

[30] A. J. Elliot and H. A. McGregor, “A 2 × 2 achievement goal framework,” J. Pers. Soc. Psychol., vol. 80, no. 3, pp. 501–519, 2001, doi: 10.1037/0022-3514.80.3.501.

[31] T. C. Urdan, “Examining the relations among early adolescent students’ goals and friends’ orientation toward effort and achievement in school,” Contemp. Educ. Psychol., vol. 22, no. 2, pp. 165–191, 1997, doi: 10.1006/ceps.1997.0930.

[32] T. A. Ryan, J. W. Atkinson, C. N. Cofer, and M. H. Appley, “An introduction to motivation: theory and research,” Am. J. Psychol., vol. 80, no. 2, pp. 319–322, 1967, doi: 10.2307/1421000.

[33] R. Lazarides and C. Rubach, “Instructional characteristics in mathematics classrooms: relationships to achievement goal orientation and student engagement,” Math. Educ. Res. J., vol. 29, no. 2, pp. 201–217, 2017, doi: 10.1007/s13394-017-0196-4.

[34] H. Tuominen-Soini, K. Salmela-Aro, and M. Niemivirta, “Achievement goal orientations and academic well-being across the transition to upper secondary education,” Learn. Individ. Differ., vol. 22, no. 3, pp. 290–305, 2012, doi: 10.1016/j.lindif.2012.01.002.

[35] L. S. Vygotsky, Mind in Society, MIT Press, Cambridge, MA, 2019.

[36] E. Sivan, “Motivation in social constructivist theory,” Educ. Psychol., vol. 21, no. 3, pp. 209–233, 1986, doi: 10.1207/s15326985ep2103_4.

[37] F. Ishowo-Oloko, J.-F. Bonnefon, Z. Soroye, J. Crandall, I. Rahwan, and T. Rahwan, “Behavioural evidence for a transparency–efficiency tradeoff in human–machine cooperation,” Nat. Mach. Intell., vol. 1, no. 11, pp. 517–521, 2019, doi: 10.1038/s42256-019-0113-5.

[38] J. P. Bhanji and M. R. Delgado, “The social brain and reward: social information processing in the human striatum,” Wiley Interdiscip. Rev. Cogn. Sci., vol. 5, no. 1, pp. 61–73, 2014, doi: 10.1002/wcs.1266.

[39] J. Blascovich, W. B. Mendes, S. B. Hunter, and K. Salomon, “Social ‘facilitation’ as challenge and threat,” J. Pers. Soc. Psychol., vol. 77, no. 1, pp. 68–77, 1999, doi: 10.1037/0022-3514.77.1.68.

[40] N. Spatola and A. Normand, “Human vs machine: the psychological and behavioral consequences of being compared to an outperforming artificial agent,” Psychol. Res., 2020, doi: 10.1007/s00426-020-01317-0.

[41] C. N. Macrae, M. Hewstone, and R. J. Griffiths, “Processing load and memory for stereotype-based information,” Eur. J. Soc. Psychol., vol. 23, no. 1, pp. 77–87, 1993, doi: 10.1002/ejsp.2420230107.

[42] J. W. Sherman, A. Y. Lee, G. R. Bessenoff, and L. A. Frost, “Stereotype efficiency reconsidered: encoding flexibility under cognitive load,” J. Pers. Soc. Psychol., vol. 75, no. 3, pp. 589–606, 1998, doi: 10.1037/0022-3514.75.3.589.

[43] J. W. Sherman and L. A. Frost, “On the encoding of stereotype-relevant information under cognitive load,” Personal. Soc. Psychol. Bull., vol. 26, no. 1, pp. 26–34, 2000, doi: 10.1177/0146167200261003.

[44] R. Spears and S. A. Haslam, “Stereotyping and the burden of cognitive load,” in The Social Psychology of Stereotyping and Group Life, R. Spears, P. J. Oakes, N. Ellemers, and S. A. Haslam, Eds., Blackwell Publishing, 1997, pp. 171–207.

[45] T. Chaminade, B. Rauchbauer, B. Nazarian, M. Bourhis, M. Ochs, and L. Prévot, “Brain neurophysiology to objectify the social competence of conversational agents,” in HAI 2018 – Proceedings of the 6th International Conference on Human–Agent Interaction, 2018, pp. 333–335, doi: 10.1145/3284432.3287177.

[46] M. Lüftenegger, B. Schober, R. van de Schoot, P. Wagner, M. Finsterwald, and C. Spiel, “Lifelong learning as a goal – Do autonomy and self-regulation in school result in well prepared pupils?” Learn. Instr., vol. 22, no. 1, pp. 27–36, 2012, doi: 10.1016/j.learninstruc.2011.06.001.

[47] I. Gómez-Veiga, J. O. Vila Chaves, G. Duque, and J. A. García Madruga, “A new look to a classic issue: reasoning and academic achievement at secondary school,” Front. Psychol., vol. 9, art. 400, 2018, doi: 10.3389/fpsyg.2018.00400.

[48] I. J. Deary, S. Strand, P. Smith, and C. Fernandes, “Intelligence and educational achievement,” Intelligence, vol. 35, no. 1, pp. 13–21, 2007, doi: 10.1016/j.intell.2006.02.001.

[49] J. W. Fryer and A. J. Elliot, “Stability and change in achievement goals,” J. Educ. Psychol., vol. 99, no. 4, pp. 700–714, 2007, doi: 10.1037/e633962013-763.

[50] K. R. Muis and O. Edwards, “Examining the stability of achievement goal orientation,” Contemp. Educ. Psychol., vol. 34, no. 4, pp. 265–277, 2009, doi: 10.1016/j.cedpsych.2009.06.003.

[51] L. Festinger, “A theory of social comparison processes,” Hum. Relations, vol. 7, no. 2, pp. 117–140, 1954, doi: 10.1177/001872675400700202.

[52] S. M. Garcia, A. Tor, and R. Gonzalez, “Some affective consequences of social comparison and reflection processes: the pain and pleasure of being close,” Personal. Soc. Psychol. Bull., vol. 32, no. 7, pp. 970–982, 2006, doi: 10.1177/0146167206287640.

[53] S. M. Garcia and A. Tor, “Rankings, standards, and competition: task vs scale comparisons,” Organ. Behav. Hum. Decis. Process., vol. 102, no. 1, pp. 95–108, 2007, doi: 10.1016/j.obhdp.2006.10.004.

This work is licensed under the Creative Commons Attribution 4.0 International License.
