
Impact of Proctoring Environments on Student Performance: Online vs Offline Proctored Exams

  • LEE, Jung Wan (School of International Economics and Trade, Anhui University of Finance and Economics)
  • Received : 2020.05.24
  • Accepted : 2020.07.03
  • Published : 2020.08.30

Abstract

The paper examines the impact of proctoring environments on student performance in two different exam proctoring environments: online versus offline proctored exams. This study employs a set of aggregated data from 1,762 students collected over the eight-year period from 2009 to 2016 at a university. Although nine courses were offered, students could have been counted more than once because they may have taken exams for several different courses. This study employs an independent samples t-test and regression analysis to compare the means of the two independent groups and to test the hypothesis. The results of the independent samples t-test and the regression analysis indicate that there is no difference in mean exam scores; the findings therefore suggest that the exam proctoring environment is unlikely to be related to student performance, whether students take their exams in an online or an offline proctoring environment. This study concludes that the proctoring environment is unlikely to result in a statistically significant difference in exam scores and, thus, does not appear to cause any change in student performance. The findings suggest that exam proctoring environments do not appear to affect student academic achievement or assessment outcomes.


1. Introduction

Increasing numbers of colleges and universities are offering courses in online or hybrid formats. While enrollment in online degree programs and online courses has tripled over the past ten years, little is known about the relationship between exam proctoring environments and student performance in online assessments. With an evolving online learning environment, colleges and universities face the challenge of maintaining rigorous standards, quality, and consistent improvement in online degree programs and courses as well as in online assessments. One challenge in online assessment is maintaining academic integrity. Cheating on exams is contrary to ethical and academic integrity standards, and concerns about student cheating subject online assessments to close scrutiny. Though many institutions at every level are likely to embrace online assessment, some still have concerns about its legitimacy. Many digital solutions, including online learning management systems, identity management systems, and exam management systems, are designed to address the most common challenges to the academic integrity of online degree programs and online assessments.

Online exam proctoring service providers are changing this perception by offering remote proctoring services that are relatively more secure and reliable than in-person proctoring services, helping to protect the academic integrity of online assessments. According to a report by the Boston-based higher education research firm Eduventures (Hartman, 2016), about 2,000 colleges and universities in the United States, roughly half of the total, used some form of online proctored exams in 2016. The market for online proctoring services seems poised to go mainstream for student assessment in online courses in the near future. Providers include Examity (2019), ProctorU, ProctorFree, Xproctor, Microsoft online proctored exams, and Pearson VUE (which acquired ProctorCam in 2015), among others. These providers offer comprehensive menus of online proctoring options, including record-and-review and live online proctoring using a webcam. Thanks to this technology, students are no longer required to physically sit in test centers or classrooms or to use unfamiliar equipment, because online proctoring applications allow them to take exams on their own equipment at a convenient time in a preferred location.

On the other hand, there are various testing center proctoring service providers, including nationwide and regional testing center networks (e.g., the National College Testing Association and the Old Colony Library Network), college and university testing centers, public library testing centers, and for-profit proctoring institutions such as Prometric (2019) and Pearson VUE (2019). Prometric is a United States company providing exam proctoring services through a testing center network of over 10,000 test sites around the world. Independent proctoring service centers provide computer-based test facilities and can be located almost anywhere; libraries, churches, school computer labs, town halls, hospitals, company meeting rooms, and other public places are frequently used as independent proctored exam sites. Exam takers who live far from testing centers, face logistical problems (e.g., military deployment) or disabilities, or cannot be accommodated at a testing center can take their exams at an independent proctoring site.

Though the literature on online assessment methods is substantial, little is known about the relationship between exam proctoring environments (online versus offline proctoring) and student performance in online assessments. To fill this research gap, this paper examines the impact of exam proctoring environments on student performance by comparing exam scores in two different exam proctoring environments: online proctored exams (i.e., remote or online proctoring) versus offline proctored exams (i.e., Prometric, college and university testing centers, and public library testing centers).

2. Literature Review

2.1. Performance Evaluations, Online Assessments and Proctoring Environments

Performance evaluations for students enable us to understand and interpret course examination measures and then improve learning and teaching methods based on the results. According to Brookhart (2003), evaluation is an important part of learning, and an effective evaluation is an episode of genuine learning. Kaufman (2009) reported that a performance measurement system should provide trustworthy and necessary data and information to stakeholders, including instructors and students. Cousins (2003) reported that connections exist between the evaluation context and policy setting, participatory evaluation practices, and various evaluation consequences. Mark and Henry (2004) elaborated on Cousins' (2003) work to visually represent a theory of evaluation influence. Guerra-López and Toker (2012) suggested an application of the impact evaluation process for designing a performance measurement framework. All of these efforts describe mechanisms and pathways to a particular evaluation approach; they identify the time and resources available for conducting an evaluation as well as important aspects of the evaluation context. In that sense, external factors, for example, conditions or events apart from the inherent influence of the evaluation, may affect the extent to which the evaluation accomplishes its intended effects for students.

Kalyuga and Sweller (2005) introduced a method of evaluating learners' expertise based on the assessment of the content of working memory. They reported that the learner-adapted experimental group, for which instruction was dynamically tailored to changing levels of expertise using rapid tests of knowledge combined with measures of cognitive load, demonstrated higher knowledge and cognitive efficiency gains than the control group. Liu (2013) reported that an online software application for assessing students' understanding of curricular content based on concept maps, called the Assessment Agent System, was a useful tool for large-scale concept map-based assessments. Greiff, Wüstenberg, Holt, Goldhammer and Funke (2013) argued that complex problem-solving skills are essential to successfully deal with environments that change dynamically and involve a large number of interconnected and partially unknown causal influences.

Pirnay-Dummer, Ifenthaler and Spector (2010) introduced an integrated set of web-based assessment tools, called highly integrated model assessment technology and tools, which has been shown to scale up for practical use in educational and workplace settings to study basic issues in human learning and performance. Wouters, van der Spek and van Oostendorp (2011) introduced a pathfinder structural assessment that measures learners' knowledge organization and compares it with a referent structure, and showcased its application with games. Hooshyar et al. (2016) introduced a solution-based intelligent tutoring system integrated with an online game-based formative assessment, which combines tic-tac-toe with online assessments for learning computer programming.

Kim and Ryu (2013) introduced a web-based formative peer assessment system that emphasizes learners' metacognitive awareness of their performance in ill-structured tasks and discussed the challenges and implications of the system. Van Gog, Sluijsmans, Brinke and Prins (2010) introduced an online learning environment consisting of formative assessment tasks (i.e., assessments for learning) that center on professional situations. Corn (2010) introduced a user-friendly online survey tool for planning and formative evaluation of technology projects in educational settings.

Considering the aforementioned characteristics of evaluation, Baeten, Dochy and Struyven (2008) reported that students often express a preference for a written, closed-book exam taken under time pressure, which is the typical exam format. Thus, traditional on-campus exams do not have to be abandoned in favor of online exams. This result corroborates the position of educational researchers that a variety of assessment methods and test conditions is desirable (e.g., Birenbaum, 2007; Birenbaum & Rosenau, 2006; Struyven, Dochy, Janssens, & Gielen, 2006; Struyven, Dochy, & Janssens, 2008).

Numerous studies have examined the effectiveness of proctoring formats for conducting assessments in online education (e.g., Anstine & Skidmore, 2005; Deal, 2002; Grijalva, Nowell, & Kerkvliet, 2006; Harmon & Lambrinos, 2008). According to Rovai (2000), a proctored exam can be particularly useful for evaluating student performance since it is much more difficult for students to cheat on such exams. According to Meijer and Riemersma (2002), offering optional assistance enhances problem-solving ability more efficiently than traditional instruction does. Although some evidence indicates that knowledge and skills are highly situational, it is quite clear that certain abstract knowledge and common skills can be successfully applied across various assessment situations. However, according to Stack (2015) and Hollister and Berenson (2009), there was no significant difference in the mean scores of student online exams in proctored versus unproctored environments.

2.2. Hypothesis

Though the literature on online assessment methods is substantial, research on exam proctoring conditions and their impact on student performance is somewhat limited. In that sense, this study differs from previous ones in that it addresses the effects of exam proctoring conditions on student performance and academic achievement. This study assumes that if stricter controls are imposed, students may not be able to fully apply their acquired knowledge and skills to exams. In other words, the friendlier the proctoring condition a student is in, the better he or she will perform on the exam. To find empirical evidence for this speculation, this paper examines the impact of exam proctoring environments on student performance by comparing exam results in two different proctoring environments: online proctored (e.g., ProctorCam or Examity) versus offline proctored (e.g., American College Testing or Prometric sites, on campus, and independent test sites). Accordingly, this study considers the following hypothesis:

H1: There is a mean difference in test scores between online proctored exams and offline proctored exams.
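To make the test explicit, the hypothesis and the statistic reported later in Section 4.1 can be restated formally as follows. This notation is added here for clarity and is not part of the original text; the pooled-variance form shown applies when Levene's test does not reject equality of variances.

```latex
% Null and alternative hypotheses for the two-sample comparison
\[
H_0:\ \mu_{\text{online}} = \mu_{\text{offline}}
\qquad \text{versus} \qquad
H_1:\ \mu_{\text{online}} \neq \mu_{\text{offline}}
\]

% Independent samples t-statistic with pooled variance s_p^2
\[
t = \frac{\bar{x}_{\text{online}} - \bar{x}_{\text{offline}}}
         {s_p \sqrt{\frac{1}{n_{\text{online}}} + \frac{1}{n_{\text{offline}}}}},
\qquad
s_p^2 = \frac{(n_{\text{online}} - 1)\, s_{\text{online}}^2 + (n_{\text{offline}} - 1)\, s_{\text{offline}}^2}
             {n_{\text{online}} + n_{\text{offline}} - 2}
\]
```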

3. Research Methodology

3.1. Data and Sample

A set of aggregated data was obtained from 1,762 students who took exams in online courses offered by a university over the eight-year period from 2009 to 2016. It should be noted that the students who took these exams were enrolled in master's degree programs. Therefore, they could have been counted more than once, as they may have taken exams for several different courses. Although nine courses were offered, a few of them may have been taught more than once during the research period.

All courses were provided entirely online through an online course management system (i.e., Blackboard) and included a final exam. Grading criteria were established for each course independently; for instance, each course required weekly participation in a discussion bulletin board, which accounted for 30% to 40% of the final grade, as well as assignments, which accounted for another 30% to 40%. The exam contributed up to 30% to 40% of the final grade. Each exam combined multiple-choice, true-or-false, and open-ended questions.

Students were scheduled to take exams on a specific date and time during a final exam week, which was normally a five-day period. The exam could be taken at any location with Internet access; however, students were required to take it at a pre-approved test site. Exams were proctored in all cases, whether at test center network sites, on campus, or at independent test sites. Proctors were usually staff or administrators of the testing center. At non-networked test sites, proctors were chosen from among supervisors, clergy, doctors, librarians, or university or college faculty. All proctors at all test sites were provided with identical guidelines.

In addition, all exam takers were required to present valid photo identification in order to access the test system, which was controlled by authentication mechanisms and a login page. The exam was administered for a 100-minute duration, the same for all students regardless of location. Students were not allowed to use notes, books, scrap paper, computer files, or calculators during the exam. Printing or copying the exam, or parts thereof, was prohibited. Moreover, mobile phones and any other electronic communication devices were not allowed in the test room.

3.2. Descriptive Statistics

Table 1 and Figure 1 display descriptive statistics along with various summary statistics of the sample. Table 1 shows the average exam scores across the online proctoring exam modes. There is some variation in mean exam scores, with the highest mean score of 68.21 under the online ProctorCam proctoring condition and the lowest mean score of 66.63 under the online Examity proctoring condition. Figure 1 shows a histogram of student performance on online exams. Table 2 shows descriptive statistics of student performance on class activities by delivery mode.

Table 1: Descriptive statistics of student performance on online proctoring exams



Figure 1: Histogram of student performance on online proctoring exams

Table 2: Descriptive statistics of student performance on class activities by delivery modes

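As an illustration only, group-level summary statistics of the kind reported in Tables 1 and 2 could be produced along the following lines. This is a minimal sketch, not the authors' code; the file name exam_records.csv and the column names proctoring_mode and exam_score are hypothetical placeholders, since the underlying dataset is not publicly available.

```python
# Minimal sketch: per-mode summary statistics for exam scores.
# "exam_records.csv", "proctoring_mode", and "exam_score" are hypothetical names.
import pandas as pd

df = pd.read_csv("exam_records.csv")

# Mean, standard deviation, and count of exam scores for each proctoring mode
summary = df.groupby("proctoring_mode")["exam_score"].agg(["mean", "std", "count"])
print(summary.round(2))
```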

4. Empirical Results

4.1. Independent Samples t-Test

Table 3 reports the results of the independent samples t-test. In testing hypothesis 1, that there is a mean difference in test scores between online proctored exams and offline proctored exams, Table 3 shows that the t-statistic for the mean difference is not significant at the 5% significance level. The results suggest that the null hypothesis (i.e., that there is no mean difference between samples) cannot be rejected. That is, average exam scores do not differ between online proctored exams and offline proctored exams. The results imply that there is no difference in student performance across the two proctoring environments: online versus offline proctored.

Table 3: Results of independent samples t-test of proctoring environments


Notes: 1. Levene's test for equality of variances; 2. t-test for equality of means.
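The test reported in Table 3 can be reproduced along the following lines. This is a minimal sketch rather than the authors' actual code; the file and column names are the same hypothetical placeholders used above, and the sketch simply pairs Levene's test with the corresponding two-sample t-test, as the table notes indicate.

```python
# Minimal sketch: Levene's test followed by an independent samples t-test.
# "exam_records.csv", "proctoring_mode", and "exam_score" are hypothetical names.
import pandas as pd
from scipy import stats

df = pd.read_csv("exam_records.csv")
online = df.loc[df["proctoring_mode"] == "online", "exam_score"]
offline = df.loc[df["proctoring_mode"] == "offline", "exam_score"]

# Levene's test for equality of variances chooses between the pooled and Welch t-test
lev_stat, lev_p = stats.levene(online, offline)
equal_var = lev_p > 0.05

# t-test for equality of means
t_stat, p_value = stats.ttest_ind(online, offline, equal_var=equal_var)
print(f"Levene p = {lev_p:.3f}; t = {t_stat:.3f}, p = {p_value:.3f}")
```

In this setup, a p-value above 0.05 for the t-test corresponds to the paper's result that the null hypothesis of equal means cannot be rejected.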

4.2. Regression Analysis

Table 4 reports the results of the multiple regression analysis. In testing hypothesis 1, that there is a mean difference in test scores between online proctored exams and offline proctored exams, Table 4 shows that the coefficient on the exam proctoring mode is not significant at the 5% significance level. The results suggest that the null hypothesis (i.e., that there is no difference between mean scores) cannot be rejected. That is, average exam scores do not differ between online proctored exams and offline proctored exams. The results indicate that there is no difference in student performance across the two proctoring environments: online versus offline proctored.

Table 4: Results of regression analysis


Dependent variable: Total score for final grade

R-square = 0.978; Adjusted R-square = 0.976; ANOVA F-statistic = 17194.472

Note: Probability values for rejection of the null hypothesis are evaluated at the 0.05 level (*** indicates p-value < 0.01).
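A minimal sketch of a regression of this form is shown below. The exact specification behind Table 4 is not reported in detail, so the variable names (a 0/1 dummy online_proctored for the proctoring mode, with discussion and assignment scores as controls for the total score) are illustrative assumptions only, as is the file name.

```python
# Minimal sketch: OLS regression of total score on a proctoring-mode dummy.
# All file and column names ("exam_records.csv", "total_score",
# "online_proctored", "discussion_score", "assignment_score") are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("exam_records.csv")

model = smf.ols(
    "total_score ~ online_proctored + discussion_score + assignment_score",
    data=df,
).fit()

# H1 is assessed by the significance of the coefficient on online_proctored
print(model.summary())
```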

Overall, the results of the independent samples t-test and the regression analysis indicate that there is no difference in mean exam scores. The results therefore suggest that the exam proctoring environment is unlikely to be related to student performance, whether students take their exams in an online or an offline proctoring environment. The findings suggest that exam proctoring environments do not appear to be related to student performance in online assessments. As a result, this study concludes that the exam proctoring environment is unlikely to result in a statistically significant difference in exam scores and does not appear to cause any change in student performance in online assessments.

5. Discussion and Managerial Implication

It is plausible that the greater the unfamiliarity of an exam proctoring environment, the greater the stress. Unfamiliar conditions or locations of offline proctoring test centers may induce greater anxiety in students and thereby negatively influence their performance. If these assumptions are reasonable, exam proctoring environments could create performance differences between students who are in a stressful condition and those who are in a relaxed condition. Accordingly, exam proctoring environments may influence the cognitive psychology and behaviors of exam takers, which could ultimately change their exam performance.

For example, one student who had taken an exam at an offline proctoring test center provided the researcher with the following anecdote:

[“The offline proctors in the test center tend to add stress to the exam. Since it is very difficult to schedule an exam at a time convenient for everybody, there is an element of stress in having to physically be there. In addition, some proctors are a bit overbearing. For instance, I once backed up my chair to stretch my arms during a test and had to answer for it, losing a few minutes of exam time, with the feeling that the proctor would have investigated me if that person could.”]

Another student, who had taken an exam at an offline proctoring independent test site, provided the researcher with the following anecdote:

[“The process of taking exams did not require me to specify an exact place and time, so I worked it out with a proctor that if I needed to start five minutes later, it was not a problem. The proctor seemed to follow the procedures set forth, but did not take any steps that could put me under stress. For example, it was fine if I had a glass of water. I also found that the proctor was more concerned about my personal comfort, such as the chair and lighting.”]

Even though each anecdote above reflects only a single observation, together they suggest that the proctoring condition in which a student takes an exam may influence student performance. Generally speaking, students' beliefs and stress about exam proctoring situations are formed as a result of their personal experience. As students are exposed to a wide range of different exam proctoring environments across semesters, their cognitive psychology and behavior are shaped by the impact of each exam proctoring environment. Students who had taken exams at an offline independent proctoring test site generally performed well on their exams because they were familiar with, and comfortable in, that proctoring environment, having experienced this type of proctored exam more than once before.

Conversely, it is plausible that greater controls in exam proctoring situations produce greater stress for students. The difference in median scores under the two proctoring environments suggests that the comfort of an exam proctoring condition may influence student performance on exams and can even result in a performance difference. Moreover, according to the anecdotal evidence above, the likelihood of students being put at ease by proctors is greater at offline independent proctoring sites. Consequently, students at offline independent proctoring sites are more likely to apply their best knowledge and ability and perform well in exams under such conditions. Interestingly, although the variance was minimal, it is reasonable to believe that students performed well in their exams under offline independent proctoring conditions over a long period of time.

Although the results of this study reveal that external factors such as exam proctoring environments and proctoring conditions are unlikely to influence student performance, self-efficacy in taking online assessments may also account for students' performance consistency. Self-efficacy refers to beliefs in one's capabilities to organize and execute the courses of action required to produce given attainments. Individuals with a strong sense of self-efficacy treat highly controlled exam proctoring environments as challenges to be dealt with rather than avoided. This type of positive outlook may foster intrinsic interest and strong engagement in their performance, and such an efficacious outlook may improve self-confidence and reduce stress and vulnerability to depression. Therefore, instructors are recommended to encourage students to develop strong self-efficacy beliefs by providing suitable guidance, information about exam proctoring environments, and opportunities for simulated tests under each exam proctoring environment.

If investigations confirm that there are only limited reasons to favor offline proctoring test centers or classrooms, then, given the nature of online assessments, decisions regarding offline proctoring test centers and classrooms versus online or offline independent proctoring test sites should probably be conservative. Instructors should also take the necessary precautions to control for any grading differences that occur due to exam proctoring environments; accordingly, this study offers several implications for faculty and online assessment administrators.

Firstly, it is important to recognize the dissimilarity of exam proctoring environments, in which students may experience anxiety that adversely influences their performance. Guidance can be as simple as advising students to arrive at the test center early so that they become relaxed and familiar with the test center, the proctor, and the computer. Since trivial matters can cause frustration during the exam, instructors should help reduce students' anxiety by advising them in advance of the potential encumbrances of taking an exam in an unfamiliar atmosphere or environment.

Secondly, for test centers and alternative test environments where independent offline proctored exams are available, very similar or identical grading distributions should be considered in order to counter possible unfair variances that may emerge because of the nature of specific exam proctoring conditions.

6. Conclusion and Limitations

This paper examined the impact of exam proctoring environments on student performance in online assessments. The results of the independent samples t-test and the regression analysis indicate that there is no difference in mean exam scores; the results therefore suggest that the exam proctoring environment is unlikely to be related to student performance, whether students take their exams in an online or an offline proctoring environment. The findings suggest that exam proctoring environments do not appear to affect student performance in online assessments. As a result, this study concludes that the exam proctoring environment is unlikely to result in a statistically significant difference in exam scores and does not appear to cause any change in student performance in online assessments.

The paper also indicates that instructors should consider grading consequences that may arise owing to the nature of specific exam proctoring environments, even within a single online class. In that effort, when new exam proctoring environments for online assessments become available, test results from the new proctoring environment should be analyzed to ensure that it does not place any group of students at a grading disadvantage or advantage. Such analysis will reduce some of the issues associated with new technology or new management systems for online assessments.

There are some limitations to the current study. First, students may appear more than once in the data, and this duplication could influence the results. Second, the data are derived from a single program. In addition, the sample consists of graduate students, so it would be difficult to generalize the results to the undergraduate population. Therefore, further research could apply this method to the undergraduate population.

References

  1. Anstine, J., & Skidmore, M. (2005). A small sample study of traditional and online courses with sample selection adjustment. Journal of Economic Education, 36(2), 107-127.
  2. Baeten, M., Dochy, F., & Struyven, K. (2008). Students' approaches to learning and assessment preferences in a portfolio-based learning environment. Instructional Science, 36(5-6), 359-374. https://doi.org/10.1007/s11251-008-9060-y
  3. Birenbaum, M. (2007). Assessment and instruction preferences and their relationship with test anxiety and learning strategies. Higher Education, 53(6), 749-768. https://doi.org/10.1007/s10734-005-4843-4
  4. Birenbaum, M., & Rosenau, S. (2006). Assessment preferences, learning orientations and learning strategies of pre-service and in-service teachers. Journal of Education for Teaching: International Research and Pedagogy, 32(2), 213-225. https://doi.org/10.1080/02607470600655300
  5. Brookhart, S. M. (2003). Developing measurement theory for classroom assessment purposes and uses. Educational Measurement: Issues and Practice, 22(4), 5-12. https://doi.org/10.1111/j.1745-3992.2003.tb00139.x
  6. Corn, J. O. (2010). Investigating the quality of the school technology needs assessment (STNA) 3.0: A validity and reliability study. Educational Technology Research and Development, 58(4), 353-376. https://doi.org/10.1007/s11423-009-9140-y
  7. Cousins, J. B. (2003). Utilization effects of participatory evaluation. In T. Kellaghan. & D. L., Stufflebeam (eds.), International Handbook of Educational Evaluation (pp. 245-266). Dordrecht, Netherlands: Kluwer Academic Publishers.
  8. Deal, W. F. (2002). Distance learning: Teaching technology online. Technology Teacher, 61(8), 21-27.
  9. Examity. (2019). Online Proctoring All Day and All of the Night. Retrieved April 21, 2019, from: http://examity.com/
  10. Greiff, S., Wüstenberg, S., Holt, D. V., Goldhammer, F., & Funke, J. (2013). Computer-based assessment of complex problem solving: concept, implementation, and application. Educational Technology Research and Development, 61(3), 407-421. https://doi.org/10.1007/s11423-013-9301-x
  11. Grijalva, T. C., Nowell, C., & Kerkvliet, J. (2006). Academic honesty and online courses. College Student Journal, 40(l), 180-186.
  12. Guerra-López, I., & Toker, S. (2012). An application of the impact evaluation process for designing a performance measurement and evaluation framework in K-12 environments. Evaluation and Program Planning, 35(2), 222-235. https://doi.org/10.1016/j.evalprogplan.2011.10.001
  13. Harmon, O. R., & Lambrinos, J. (2008). Are online exams an invitation to cheat? Journal of Economic Education, 39(2), 116-125. https://doi.org/10.3200/JECE.39.2.116-125
  14. Hartman, K. (2016). Eduventures' 2016 higher education predictions: a year to unite. Retrieved January 12, 2019, from Eduventures Inc.: http://www.eduventures.com/2016/01/eduventures-2016-higher-ed-predictions-a-year-to-unite/
  15. Hollister, K. K., & Berenson, M. L. (2009). Proctored versus unproctored online exams: Studying the impact of exam environment on student performance. Decision Sciences Journal of Innovative Education, 7(1), 271-294. https://doi.org/10.1111/j.1540-4609.2008.00220.x
  16. Hooshyar, D., Ahmad, R. B., Yousefi, M., Fathi, M., Abdollahi, A., Horng, S.-J., & Lim, H. (2016). A solution-based intelligent tutoring system integrated with an online game-based formative assessment: development and evaluation. Educational Technology Research and Development, 64(4), 787-808. https://doi.org/10.1007/s11423-016-9433-x
  17. Kalyuga, S., & Sweller, J. (2005). Rapid dynamic assessment of expertise to improve the efficiency of adaptive e-learning. Educational Technology Research and Development, 53(3), 83-93. https://doi.org/10.1007/BF02504800
  18. Kaufman, T. E. (2009). Performance management and school reform: A multi-case study of middle and high school performance management for instructional improvement. Dissertation No.3385021. Harvard University, Cambridge, MA.
  19. Kim, M., & Ryu, J. (2013). The development and implementation of a web-based formative peer assessment system for enhancing students' metacognitive awareness and performance in ill-structured tasks. Educational Technology Research and Development, 61(4), 549-561. https://doi.org/10.1007/s11423-012-9266-1
  20. Liu, J. (2013). The assessment agent system: design, development, and evaluation. Educational Technology Research and Development, 61(2), 197-215. https://doi.org/10.1007/s11423-013-9286-5
  21. Mark, M. M., & Henry, G. T. (2004). The mechanisms and outcomes of evaluation influence. Evaluation, 10(1), 35-57. https://doi.org/10.1177/1356389004042326
  22. Meijer, J., & Riemersma, F. (2002). Teaching and testing mathematical problem solving by offering optional assistance. Instructional Science, 30(3), 187-220. https://doi.org/10.1023/A:1015129031935
  23. Pearson VUE. (2019). The global leader in computer-based testing. Retrieved April 21, 2019, from Pearson VUE: https://home.pearsonvue.com/About-Pearson-VUE/What-we-do.aspx
  24. Pirnay-Dummer, P., Ifenthaler, D., & Spector, J. M. (2010). Highly integrated model assessment technology and tools. Educational Technology Research and Development, 58(1), 3-18. https://doi.org/10.1007/s11423-009-9119-8
  25. Prometric. (2019). The Prometric network is global and robust. Retrieved April 21, 2019, from Prometric: https://www.prometric.com/en-us/about-prometric/pages/global-network-strength.aspx
  26. Rovai, A. P. (2000). Online and traditional assessments: What is the difference? The Internet and Higher Education, 3(3), 141-151. https://doi.org/10.1016/S1096-7516(01)00028-8
  27. Stack, S. (2015). The impact of exam environments on student test scores in online courses. Journal of Criminal Justice Education, 26(3), 273-282. https://doi.org/10.1080/10511253.2015.1012173
  28. Struyven, K., Dochy, F., Janssens, S., & Gielen, S. (2006). On the dynamics of students' approaches to learning: The effects of the teaching/learning environment. Learning and Instruction, 16(4), 279-294. https://doi.org/10.1016/j.learninstruc.2006.07.001
  29. Struyven, K., Dochy, F., & Janssens, S. (2008). The effects of hands-on experience on students’ preferences for assessment methods. Journal of Teacher Education, 59(1), 69-88. https://doi.org/10.1177/0022487107311335
  30. Van Gog, T., Sluijsmans, D. M. A., Brinke, D. J., & Prins, F. J. (2010). Formative assessment in an online learning environment to support flexible on-the-job learning in complex professional domains. Educational Technology Research and Development, 58(3), 311-324. https://doi.org/10.1007/s11423-008-9099-0
  31. Wouters, P., van der Spek, E. D., & van Oostendorp, H. (2011). Measuring learning in serious games: a case study with structural assessment. Educational Technology Research and Development, 59(6), 741-763. https://doi.org/10.1007/s11423-010-9183-0
