Fair Assessment Method Reflecting Individual Ability in Capstone Design Course


  • Kim, Jongwan (Division of Computer and Information Engineering, Daegu University)
  • Received : 2019.01.03
  • Accepted : 2019.02.16
  • Published : 2019.03.31

Abstract

Capstone design is a subject taught in a setting where students gather in teams, decide on a topic of their own choosing, and collaborate with one another to carry out a project. Fair assessment is very important to students in a team project-based capstone design course. Many instructors agree that harmonizing creativity evaluation and outcome assessment is difficult in a capstone design class. It is also not easy to assess students' individual efforts and achievements fairly under team-based assessment practices. To resolve this issue, in this paper we survey various engineering design education methodologies and model the existing evaluation elements into a modified creative process and outcome assessment framework for team project assessment. In particular, we focus on a method of assigning grades fairly by combining team-level and individual-level assessments. We analyze students' achievements and grade evaluations and verify the validity of the proposed method.


I. Introduction

With the continuous advancement of technology and increasing pressure for innovation, new products and technologies are announced daily. Today's engineering students are asked by society and industry to learn to be creative so that they can come up with more innovative products. Such products cannot be developed single-handedly; rather, they are developed by talented individuals who work together collaboratively. Companies are also demanding that universities produce talented graduates who combine the creativity and collaboration skills necessary for problem solving.

After digital computer technologies led the third wave beginning in the 1970s, the era of the fourth industrial revolution has arrived, in which information and communication technologies (ICT) such as artificial intelligence (AI), the Internet of Things (IoT), big data, and blockchains converge with the economy and society. The term 'Fourth Industrial Revolution' was introduced at the World Economic Forum (WEF) in 2016 and has become a label for a new industrial age based on ICT (Schwab, 2017). Engineering students living in this era are expected to develop the creative learning ability needed to produce the innovative technologies and products required by society and corporations. Accordingly, it is necessary to respond preemptively to rapid changes in the industrial paradigm by expanding education in promising new technologies such as AI and new industries, and by strengthening on-site practical skills, such as capstone design, through industry-academia cooperation.

The author has a keen interest in team project-based engineering design education. In previous research, he suggested an assessment method that reflects both the progress of a team project and its outcome (Kim, 2014). Building on this, he presented a team project-based assessment method for an engineering design course (Kim, 2016). These studies found that a student's grade was influenced more by the performance of the team to which the individual belonged than by how well the individual performed within the team. To evaluate students fairly, efforts are needed to better account for each individual's contribution to his or her team. For this purpose, the author's short paper (Kim, 2018) modified the evaluation method presented in the previous study (Kim, 2016) and analyzed the new results in accordance with the purpose of this research. This paper gives specific descriptions and analyses of the ideas mentioned in (Kim, 2018): it implements the improved evaluation method and presents concrete analysis results and new findings.

This paper is organized as follows: Section 2 describes related work. The assessment method and rubrics for team project-based capstone design are presented in Section 3. Section 4 explains the course syllabus over a semester. Section 5 evaluates the fairness of the proposed assessment method through student progress and the grades received. Finally, a conclusion is given in Section 6.

II. Related works

We are interested in the fair evaluation of an engineering capstone design class that reflects creativity assessment. Related work on engineering design education and fair evaluation is surveyed as follows.

An attempt has been made to provide a comprehensive list of technical and non-technical skills for creative design engineers, including analytical skills, open-ended problem solving, team communication skills, and modern tool skills (Mourtous, 2012). Lee et al. presented an operating method for capstone design courses and an evaluation method for program learning outcomes, and analyzed their pros and cons. In their work, individual evaluation was carried out through a regular interview with the supervisor once a month, for the fair grading of a capstone design class worth 6 credits per year (Lee et al., 2010). This research differs in that the instructor directly evaluates students' reports every week.

The attributes of a design engineer are difficult to measure and require the development of special rubrics. Every instructor wants to know how to evaluate whether a design or other artifact is creative. Most educators have added a common ideation approach called brainstorming to their engineering design curricula, but brainstorming requires designers to look inward for inspiration. Ogot and Okudan presented their experiences with introducing a systematic creativity method, the theory of inventive problem solving (TRIZ) (Ogot & Okudan, 2006). TRIZ, developed by Genrich Altshuller, is "a problem-solving, analysis and forecasting tool derived from the study of patterns of invention in the global patent literature." Ogot et al. showed, by comparing a group that had learned TRIZ with one that had not, that TRIZ made it easier for students to generate feasible design concepts.

Systematic Inventive Thinking (SIT), developed by Israel's Roni Horowitz and Jacob Goldenberg, is a thinking tool that simplifies TRIZ so that anyone can learn and use it (Horowitz, 2001). Derived from Altshuller's TRIZ engineering discipline, SIT is a practical approach to creativity, innovation, and problem solving that attempts to generalize and simplify TRIZ. Core to SIT's approach is focusing not on what makes inventive solutions different but on what they share in common. Horowitz realized that TRIZ is helpful yet needs improvement. He distilled TRIZ's 40 innovation principles into five Thinking Tools: Subtraction, Task Unification, Multiplication, Division, and Attribute Dependency (Park, 2016). Using SIT's five Thinking Tools helps fulfill the creative product design and problem solving aspects of a capstone design project. By observing students' work, we found that projects were creatively designed using the SIT Thinking Tools.

It is very hard not only for students to perform well on design engineering projects but also for the instructor to assess student project work fairly. Platanitis and Pop-Iliev developed rubrics to evaluate students' level of knowledge application for the three core design courses for 1st-, 2nd-, and 3rd-year students and the capstone design course for seniors (Platanitis & Pop-Iliev, 2010). The rubrics were based on a methodical evaluation tool, the ICE (Ideas, Connections, and Extensions) theory, which evaluates the extent to which students have applied their knowledge in various engineering design projects. Each component of ICE theory represents a level of application: Ideas represents the basic understanding of a concept; Connections means the ability to relate knowledge and articulate relationships among the fundamental elements; and Extensions illustrates the ability to take knowledge and apply it to a novel situation (Young & Wilson, 2000). The resulting comprehensive rubrics can be used as roadmaps for evaluating engineering design project courses.

Current engineering design curricula still need more work on assessing the design process and the final product outcome of a team in a balanced way. Since a creative engineering design course is carried out in teams, team formation greatly influences grade evaluation in such classes. Recently, a mathematical model for team formation was presented, with case studies applying it to a team project-based design class (Kim, J.H., 2018). There are also studies of the relationship between peer evaluation and self-assessment in team-based instruction (Hwang, 2016; Kilpatrick et al., 2001). Building on this related work, we study methods of classroom operation and evaluation for improving the capstone design ability of senior students.

III. Assessment and Rubrics

1. Modified creative process and outcome based assessment

A creative process can be considered as the path from defining a problem to finding a solution in engineering design education. The creative elements fall into three stages: brainstorming, building, and demonstration. We analyzed these creative elements and rearranged them into the creative process of a team project. Novelty, fluency, variety, and feasibility are required for the brainstorming phase; resources, efforts, and cost are needed for the building phase; and value, usefulness, and design are necessary for the demonstration phase (Kim, 2014). These ten creativity elements were modelled in a creative process and outcome assessment framework called the CPOA framework (Kim, 2016). Based on his experience teaching the course in 2014, the author modified the CPOA framework for the 2015 semester, combining the fluency and variety elements required for brainstorming idea formation into a single variety component and adding a motive component in place of the cost element, which was meaningless for software projects, as shown in Fig. 1. In particular, to evaluate individuals' competencies more fairly regardless of team affiliation, compared with the author's previous study (Kim, 2016), we added individual assessment elements to the final evaluation stage in Fig. 1. These individual assessment elements are described in detail in Table 4. The nine creativity elements are shown in ellipses in Fig. 1, written in italics.


Fig. 1 Modified creative process and outcome assessment framework in engineering design course
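
As a structural summary, the following minimal Python sketch (our own illustration, not part of the paper) lists the nine creativity elements of the modified framework by phase, following Fig. 1:

# A minimal structural sketch (not from the paper) of the nine creativity
# elements of the modified CPOA framework, grouped by phase as in Fig. 1.
MCPOA_PHASES = {
    "brainstorming": ["novelty", "motive", "variety", "feasibility"],
    "building":      ["resources", "efforts"],
    "demonstration": ["value", "usefulness", "design"],
}

assert sum(len(v) for v in MCPOA_PHASES.values()) == 9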

A typical engineering design project consists of 3 to 5 students in a team. Team members brainstorm to find as many ideas as they can and evaluate the feasibility of the selected ideas. With the chosen ideas, students build prototypes or products, considering resources, efforts, and cost in the case of a hardware project, and resources/development tools and efforts in the case of a software project. In the previous work (Kim, 2016), which took cost into consideration, the cost element had a single Budget trait with three rubric levels: "buy materials regardless of necessity" was considered fair, "secure necessary materials" good, and "reduce budget by using recyclable materials" excellent. However, most capstone projects were conducted as software projects, and some hardware project teams neither compared prices when purchasing the necessary materials on the Internet nor used recycled materials. In this research, the cost element was therefore eliminated in order to exclude differences in evaluation arising from the nature of the project. Nevertheless, further effort is needed to find a software-project evaluation element corresponding to the cost element of hardware projects.

During the demonstration process, students present their work. The instructor evaluates the project based on its value, usefulness, and design. A team's score is determined not only by the creative process assessment in the brainstorming and building phases but also by the outcome assessment in the demonstration phase (Kim, 2016). The final presentation and peer evaluation are included in the assessment. After the final presentation and demonstration assessment, students evaluate other teams' work as well as their individual level of contribution within their own teams. Peer evaluation helps students develop their critical thinking abilities, while self-assessment helps them reflect on what they have learned from the course. When a problem arises at any stage of the above framework, feedback to the previous stage should occur.

Table 1, Table 2, and Table 3 show the assessment traits and decision criteria for each team in the three phases, respectively. The evaluation elements shown in Tables 1, 2, and 3 were used in the team evaluation: midterm scores were summed over the primary traits of the evaluation elements in Table 1, and final scores over the primary traits in Tables 2 and 3 (a small scoring sketch illustrating this summation follows Table 3). For example, the Feasibility element has three primary traits - possibility, effectiveness, and sketch - each of which is measured from student notes such as the workbook (WB), or mostly by the instructor. The quality of each trait is judged by either the process or the outcome, according to the trait (Kim, 2016).

 Table 1 Assessment traits and decision criteria in the brainstorming phase


 Table 2 Assessment traits and decision criteria in the building phase


 Table 3 Assessment traits and decision criteria in the demonstration phase

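As a hedged sketch of the score aggregation described before Table 1, the following Python fragment sums primary-trait scores within one evaluation element; the 1/2/3 point values for the fair/good/excellent rubric levels and the sample trait grades are illustrative assumptions, not the paper's actual weights:

# Assumed point values for the three rubric levels (illustrative only).
RUBRIC_POINTS = {"fair": 1, "good": 2, "excellent": 3}

def element_score(trait_levels):
    """Sum rubric points over the primary traits of one evaluation element."""
    return sum(RUBRIC_POINTS[level] for level in trait_levels.values())

# Example: the Feasibility element's three primary traits (grades invented).
feasibility = {"possibility": "good", "effectiveness": "excellent", "sketch": "fair"}
print(element_score(feasibility))  # 6; the midterm score sums all Table 1 elements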

Table 4 presents the individual assessment traits and decision criteria. We use the team project proposal, the weekly individual progress report, and the self-assessment list as the rubrics for individual assessment. The self-assessment list covers the individual's efforts and responsibilities, evaluations of other teams, and the logical reasoning behind those evaluations. In particular, the overall success score is evaluated by the instructor based on the contribution made through the student's effort and responsibility.

Table 4 Individual assessment traits and decision criteria


Self-assessment is measured on a 3-point Likert scale. For effort spent on the project, students choose from three options: "I did not put a lot of effort into the project," "I put a lot of effort into the project," and "I put much more effort into the project than my group members." For individual responsibility on the project, they choose from: "I did not fulfill my responsibility on the project," "I fulfilled my responsibility on the project," and "I fulfilled my responsibility and helped peer team members on the project."
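
Purely for illustration, the two self-assessment items could be encoded as follows; the 1-3 numeric mapping is our assumption rather than the paper's stated scoring:

# Illustrative 1-3 encoding of the two self-assessment items (the numeric
# mapping is an assumption, not the paper's stated scoring).
EFFORT_OPTIONS = [
    "I did not put a lot of effort into the project",                 # 1
    "I put a lot of effort into the project",                         # 2
    "I put much more effort into the project than my group members",  # 3
]
RESPONSIBILITY_OPTIONS = [
    "I did not fulfill my responsibility on the project",             # 1
    "I fulfilled my responsibility on the project",                   # 2
    "I fulfilled my responsibility and helped peer team members on the project",  # 3
]

def likert_score(options, answer):
    """Map a chosen option to its 1-3 Likert score."""
    return options.index(answer) + 1

print(likert_score(EFFORT_OPTIONS, EFFORT_OPTIONS[2]))  # 3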

2. Inspired rubrics from ICE approach

Every course requires a fair grading policy to assess its coursework. Midterm and final exams as well as assignments are the usual measurements for this goal. However, some engineering design courses in the ABEEK program in South Korea have no written exams; instead, they are evaluated solely on students' team project activity scores. Thus, an effective assessment method for team project-based capstone design courses must implement a fair team grading policy. Every instructor has his or her own marking criteria and standards, but many grading elements tend to be very subjective when the rubrics are unclear. Clear, descriptive rubrics allow instructors to make the evaluation process consistent and fair, communicate their expectations to the students taking the course, and help co-teachers or teaching assistants grade student work consistently (Kim, 2016). A method called Primary Trait Analysis (PTA) can be used to assess student achievements or portfolios of achievements that include written, oral, assembled, and fabricated work. Walvoord and Anderson demonstrated how teachers can use PTA in class to make criteria and standards clear to themselves and to their students (Walvoord & Anderson, 1998).

The nine creative elements in Fig. 1 are transformed into primary traits for the team project capstone design course. The Novelty element is composed of three traits: difference, keywords, and comparison to previous works. Each trait has three-level descriptive statements called rubrics, inspired by the ICE theory (Young & Wilson, 2000). Under this criterion, for the first trait, difference, "enumerate existing ideas" is considered fair (Ideas level), "converge existing ideas" good (Connections level), and "make an innovative idea" excellent (Extensions level) (Kim, 2016). The Motive element newly introduced in this work is composed of two traits: problem finding and necessity. For the problem finding trait, "students present problems subjectively" is the fair level, "students objectively present problems" the good level, and "students present problems and their improvements together" the excellent level. The other evaluation elements in Table 1, Table 2, and Table 3 also have rubrics assigned at three levels.
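
The ICE-inspired rubrics quoted above can be organized as a simple lookup table; the following sketch (our own structuring, not the paper's notation) records the two traits just described:

# A small lookup-table sketch of the three-level, ICE-inspired rubrics for
# the two traits described above (structuring is ours, text is the paper's).
RUBRICS = {
    ("novelty", "difference"): {
        "fair":      "enumerate existing ideas",   # Ideas level
        "good":      "converge existing ideas",    # Connections level
        "excellent": "make an innovative idea",    # Extensions level
    },
    ("motive", "problem finding"): {
        "fair":      "students present problems subjectively",
        "good":      "students objectively present problems",
        "excellent": "students present problems and their improvements together",
    },
}

print(RUBRICS[("novelty", "difference")]["excellent"])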

In addition, three further evaluation elements - peer evaluation, self-assessment, and overall success by subjective evaluation - are proposed to assess the team's collaborative work. The three-level rubrics of the presentation element were already described in the previous work (Kim, 2016). For peer evaluation, students first examined other teams' ideas, design schemes, and degree of completion, then chose one team they thought did best and another they thought needed improvement, and wrote down the reasons for choosing those two teams. Based on the quality of this write-up, students' critical thinking ability is evaluated at the three levels outlined above. For the self-assessment portion, students are again evaluated at three levels based on the individual proposal, the weekly individual progress reports, and the self-assessment list. Finally, overall success is the instructor's subjective evaluation of the project from its beginning to its ending phase.

IV. Capstone Design Course Syllabus

The capstone design course is an engineering design class offered in the first semester for 4th-year students. In the first three weeks, students learn the theory behind capstone design projects and are introduced to the grading criteria of the course. In the very first week, every registered student writes a 12-week team project proposal of 1-2 pages and submits it to the instructor. The best 25% of the proposals are selected by the instructor based on their excellence. The remaining 75% of the students voluntarily join the selected 25% to form teams of four; occasionally, depending on the students' situation or the subject, a team consists of 3 or 5 students. The 12-week team project is divided into two halves of 6 weeks each. Students give two presentations: one after 6 weeks, and the final presentation after 12 weeks (Kim, 2016).

For the 5 weeks from week 4 to week 8, students brainstorm together, each performing the tasks assigned within the team. They also meet the instructor once a week for help and guidance on the project. Students are required to submit their individual progress reports along with the team workbooks. The instructor examines the progress of the project by reading each team workbook during the weekly appointment with the students. The individual progress reports are used to assess each individual separately after the weekly meeting with each team is over. There is no TA to help out during class, so all inquiries and assessments are handled by the instructor (Kim, 2016).

At the beginning of the term project, team workbook and individual report guidelines were given to the students to help them distinguish between the team workbook and the individual progress report. Many students are confused about the differences between these two reports, which we explain as follows.

Team workbook guideline: The team workbook is a weekly report outlining the purpose and direction of progress of the project, written collaboratively as a team based on each team member's individual progress report. Each workbook should be 3 to 5 pages in length.

Individual progress report guideline: After individual tasks are assigned during the team meeting, each team member writes up his or her own work and role in the team for that week in an individual progress report of 1 to 2 pages (Kim, 2016).

Some students questioned the necessity of individual reports when they also had team reports to hand in. Both reports are necessary for the capstone design course, however: team reports outline what the team did as a whole, whereas individual reports outline what each individual did for the project. The importance of individual reports was emphasized to students as a way to eliminate the unfair grading that can result from being in a bad group.

By week 9, every team must present to the entire class the progress of their project and what they expect to achieve from it. The instructor evaluates the progress of each team project based on the rubrics given in Section 3.

From week 10 to week 14, students carry out the project based on their team workbooks and prepare the final demonstration. The once-a-week appointments with the instructor to check on the project continue through these weeks. In week 15, the final presentations take place: everyone must attend, present their project in teams, and demonstrate the outcome of the project. The same examination process occurs for the final presentation as for the mid-presentation in week 9 (Kim, 2016).

During the author's capstone design course in the 2016 spring semester, 8 team projects were developed. One example of a prototype developed by a team is an Automatic Sun Visor (ASV), a device that automatically operates a sun visor based on the intensity of light; it illustrates the use of the SIT attribute dependency tool. Typical sun visors are operated manually. In this prototype, an illuminance sensor detects the intensity of the sunlight, and an Arduino determines when to drive the motor and at what angle the sun should be blocked by the visor. The motive for the project was to remove the inconvenience of having to switch the visor manually whenever the driver's view is obstructed. During the brainstorming stage, the team members studied automobile companies in Korea to determine whether any had patented the idea, and verified that no cars with an ASV had been released to the market. The prototype is novel in that it uses an Arduino to operate the visor. In the building phase, the Arduino was programmed to deploy the sun visor when the lux value of sunlight glaring into the driver's eyes exceeded a certain threshold. When the lux value was lower than the threshold, the Arduino entered a standby mode while continuously measuring the lux value; if the recalculated lux value remained low, the sun visor was moved back to its original position. Fig. 2 illustrates these steps in more detail and shows the completed product attached to the vehicle.


Fig. 2 Key features of completed prototype and attachment to vehicle

The prototyped ASV automatically measures the amount of light and turns on the motor that moves the sun visor to block incoming light. An adjustment switch with two buttons controls the position of the sun visor to accommodate drivers of varying heights. The sun visor automatically retracts when the vehicle is in the shade or when sunlight is blocked off completely. The prototype senses the darkness of the outside atmosphere using its luminance sensor and prevents the sun visor from malfunctioning due to a sudden burst of light from oncoming vehicles. The team that created this prototype received an excellent grade overall for exceeding expectations on the nine components of creativity, for the members' collaborative efforts in creating the prototype, and for completing the project on time. However, the four members of the team each received a different overall grade (two A+'s, one A, and one B+) after individual assessment marks were taken into consideration. The midterm and final scores of the A+ students and the A and B+ students were the same because they were team scores, but their individual assessment marks - including the weekly personal report, the individual proposal, and the self-assessment list - differed, so different grades were assigned. Other example cases can be found in Table 10. The outcome of this team is a good example of how effective the new grading scheme is in assessing students fairly.
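
The visor control logic described above can be summarized in a short, hypothetical sketch; the lux threshold, polling interval, debounce count, and sensor/motor interfaces are stand-ins for the team's actual Arduino implementation, written here in Python for consistency:

import time

GLARE_LUX = 5000             # hypothetical deployment threshold (lux)
LOW_READINGS_TO_RETRACT = 3  # assumed: retract only after lux stays low

def asv_loop(read_lux, deploy_visor, retract_visor, poll_s=0.5):
    """Poll the illuminance sensor and drive the visor motor accordingly."""
    deployed = False
    low_count = 0
    while True:
        lux = read_lux()                  # illuminance sensor reading
        if lux > GLARE_LUX:
            low_count = 0
            if not deployed:
                deploy_visor()            # motor lowers the visor
                deployed = True
        elif deployed:
            low_count += 1                # standby: keep re-sampling the lux value
            if low_count >= LOW_READINGS_TO_RETRACT:
                retract_visor()           # return the visor to its original position
                deployed = False
        time.sleep(poll_s)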

V. Class Survey and Fair Grading Analysis

Student surveys were conducted after the final presentations of the course in 2014, 2015, and 2016. Thirty-one students participated in the 2016 survey. Table 5 shows the means of the survey questions under the modified CPOA framework-based assessment in 2016. Each answer uses a 5-point scale: 1 (= Strongly Disagree), 2 (= Disagree), 3 (= Neutral), 4 (= Agree), and 5 (= Strongly Agree). Of the 10 questions asked, the first 9 were used to gauge student response; Question 10 was excluded because it asked for suggested class improvements in the form of short subjective comments and opinions.

Table 5 Statistics for the survey taken after final presentation


Students responded to Question 6 (the research's main question) with an average score of 3.87, higher than for Questions 7 and 8, which asked whether the individual or team-oriented components of the assessment should be weighted more heavily. Question 6 also had a lower standard deviation than Questions 7 and 8. This shows that the students gave positive feedback on the new assessment method.

Lastly, further comments and opinions regarding the improvement of future classes were collected. Some students enjoyed working as a team to complete a term project; others said the project was completed successfully through effective cooperation with other team members. One student said they preferred the old evaluation scheme of being assessed purely on the outcome of the final project. However, the old method did not provide a fair grading scheme for each individual, as it failed to account for the individual effort put into the project.

Table 6 additionally compares the results of the previous work (Kim, 2016), which had a high proportion of team evaluation, with the survey of this work in Table 5, which has an increased proportion of individual evaluation.

Table 6 Statistical comparison of survey in 2014 and 2016


A comparison of the surveys in Table 6 reveals, through Questions 2, 3, 4, and 5, that agreement with the evaluation criteria of the proposed team and individual evaluations improved. In addition, Questions 6 and 7 show that students preferred the proposed scheme with its increased proportion of individual assessment. This confirms that the present work is consistent with the research goal of strengthening individual assessment while preventing the unreasonable evaluation structure in which a grade is determined by the team to which the student belongs.

According to the survey results, the proposed modified CPOA framework-based assessment model is an example of assessing a capstone design course along two different criteria: creativity and outcome. The new model adds an individual component to the original assessment method, which was based solely on team reports and team meetings. It emphasizes the use of individual progress reports to take each team member's individual effort into account and prevents individuals from receiving a biased mark based on the quality of their group members. Although this new assessment model requires more effort and time from the instructor, it is a more objective method of assessment that increased course satisfaction for the instructor as well as for the students taking the course.

D. University has a rule for weighting the grading scheme across four categories: 30% for the midterm, 30% for the final, 20% for reports, and 20% for attendance. The suggested evaluation method has three major components: midterm (team-based), final (team-based), and individual reports/self-assessment (individual). To meet the university's criteria, the individual portion of the grading scheme was counted as the reports category. Attendance was excluded as a component because awarding marks simply for attending lectures did not align with the research's purpose of introducing a fair evaluation method.
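
Under these rules, the final score computation can be sketched as follows; the 0-100 normalization of each component and the handling of the excluded attendance weight are our assumptions, since the paper does not state how that 20% was treated:

WEIGHTS = {"midterm": 0.30, "final": 0.30, "individual": 0.20}

def course_score(midterm, final, individual):
    """Weighted total out of 80; the excluded attendance 20% is simply
    omitted here, since the paper does not say how it was redistributed."""
    return (WEIGHTS["midterm"] * midterm
            + WEIGHTS["final"] * final
            + WEIGHTS["individual"] * individual)

print(course_score(90, 85, 95))  # 71.5 (components assumed on a 0-100 scale)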

After the class students' individual grades were evaluated, the Cronbach's alpha coefficient for the 2014 class was 0.822. The correlation matrix for the three grading components is shown in Table 7. As shown there, the Pearson correlation coefficient with the grade was highest for the final, at 0.830. Report represents the individual component in Table 7, Table 8, and Table 9.
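
For reference, both statistics can be computed as below; the component scores are hypothetical placeholders, not the class data behind Tables 7-9:

import numpy as np

def cronbach_alpha(items):
    """items: (n_students, k_components) array of component scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()  # sum of per-component variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the total score
    return k / (k - 1) * (1.0 - item_vars / total_var)

# Hypothetical component scores: columns are midterm, final, individual.
scores = np.array([[28, 27, 18], [25, 24, 15], [29, 28, 19], [22, 20, 12]])
grade = scores.sum(axis=1)

print(cronbach_alpha(scores))                  # internal consistency of the components
print(np.corrcoef(scores[:, 2], grade)[0, 1])  # Pearson r: individual vs. grade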

Table 7 Correlation matrix for 3 grading components in 2014 class


Table 8 Correlation matrix for 3 grading components in 2015 class


Table 9 Correlation matrix for 3 grading components in 2016 class


For the 2015 and 2016 classes, the Cronbach's alpha coefficients were 0.800 and 0.774, respectively. The correlation matrices for the three grading components in these years are given in Table 8 and Table 9, respectively. In both years, the Pearson correlation coefficient with the grade was highest for the individual component, at 0.955 and 0.881 respectively, whereas in 2014, when the suggested CPOA framework was first introduced, the final portion carried a higher weight. This shift is likely the result of the increased emphasis on fair assessment based on effort as well as outcome.

In the 2016 capstone design class, 31 students conducted 8 team projects. Table 10 shows the students' final grades after applying the newly proposed evaluation method. Students may receive a poor grade even when the team they were in did very well overall, or they may earn an excellent grade even if their team did not do very well. For example, not every student in teams T1 and T5 received an A+ in the end even though the team grade was an A+, while a student in team T8 managed to pull off an A even though his/her team portion of the assessment was not an A. These results further show that the proposed evaluation method evaluates students more fairly, since it accounts not only for the team portion of the grading but also for the individual portion.

Table 10 Individual grading for each team in 2016 class


VI. Conclusion

In this research, we presented a new team project-based assessment method, the modified Creative Process and Outcome Assessment (mCPOA), for a creative engineering design course. This work has several features that distinguish it from previous works. Identifying nine creativity assessment elements by surveying related work, we categorized them into three phases following the project's progression to propose the mCPOA framework. A special feature of this framework is that it considers not only the outcome assessment of the project but also the assessment of the creative process. We then derived primary traits from each assessment element and categorized them into three rubric levels: fair, good, and excellent. Lastly, the proposed assessment method was implemented in a capstone design course for senior students, and its effectiveness was surveyed after the final presentations. The survey also suggested that the class benefited students by giving them experience working cooperatively in a team. In particular, the proposed method aims to overcome the limitation of students being assessed purely on how well their team performs rather than on how well they individually perform. As outlined in Section 5, the proposed assessment method received positive feedback from students and showed a high Pearson correlation coefficient for the individual portion of the grading scheme. This supports using the proposed assessment method for evaluating project-based capstone design courses.

In the future, it will be worth shortening the brainstorming phase by two weeks and lengthening the building stage by two weeks to lead to more robust project implementations. There is also a need to cooperate with other researchers who teach similar courses and to apply the proposed team project-based assessment principle to different subjects in order to spread team project-based education. Although this research conducted team evaluation in three phases (brainstorming, building, and demonstration), procedural evaluation within each stage should also be considered.

This research was supported by the Daegu University Research Grant, 2017.

References

  1. Horowitz, R. (2001). From TRIZ to ASIT in 4 steps. https://triz-journal.com/triz-asit-4-steps/
  2. Hwang, S. (2016). Relationship between Peer- and Self-Evaluation in Team Based Learning Class for Engineering Students. Journal of Engineering Education Research. 19 (5), pp. 3-12. https://doi.org/10.18108/JEER.2016.19.5.3
  3. Kilpatrick, D.J. et al. (2001). Procedural justice and the development and use of peer evaluations in business and accounting classes. Journal of Accounting Education. 19 (3), pp. 225-246. https://doi.org/10.1016/S0748-5751(01)00021-5
  4. Kim, J. (2014). An assessment method to evaluate team project based engineering design. Proc. of Int'l Conf. on Computer Science & Education, pp.257-260.
  5. Kim, J. (2016). A Team Project Based Assessment Method for Engineering Design Course. International Journal of Multimedia and Ubiquitous Engineering. 11 (6), pp. 159-170. https://doi.org/10.14257/ijmue.2016.11.10.15
  6. Kim, J. (2018). Fair Assessment for Team Project Engineering Design Course. Convergence Research Letter. 4 (4), pp. 201-204.
  7. Kim, J.H. (2018). A Mathematical Model for Balanced Team Formation in Capstone Design Class. Journal of Engineering Education Research. 21 (4), pp. 28-34.
  8. Lee, H. W., Kim, S. H., Park, K. & Kim, J. Y. (2010). A Study on the Assessment of Program Outcomes Based on Capstone Design Course. Journal of Engineering Education Research. 13 (6), pp. 143-151. https://doi.org/10.18108/jeer.2010.13.6.143
  9. Mourtous, N. (2012). Defining, teaching, and assessing engineering design skills. Int'l Journal for Quality Assurance in Engineering and Technology Education. 2 (1), pp. 14-30. https://doi.org/10.4018/ijqaete.2012010102
  10. Ogot, M. & Okudan, G. E. (2006). Integrating systematic creativity into first-year engineering design curriculum. International Journal of Engineering Education. 22 (1), pp. 109-115. https://www.ijee.ie/articles/Vol22-1/IJEE1618.pdf
  11. Park, Y. (2016). Creative Thinking Theory. Korea Standards Association Media, Seoul, Korea.
  12. Platanitis, G. & Pop-Iliev, R. (2010). Establishing fair objectives and grading criteria for undergraduate design engineering project work: an ongoing experiment. International Journal of Research and Reviews in Applied Sciences. 5 (3), pp. 271-288. http://www.arpapress.com/Volumes/Vol5Issue3/IJRRAS_5_3_09.pdf
  13. Schwab, K. (2017). The Fourth Industrial Revolution. Penguin Books, Limited.
  14. Walvoord, B. E. & Anderson, V. J. (1998). Effective Grading: A Tool for Learning and Assessment. Jossey-Bass Inc., San Francisco, CA.
  15. Young, S. F. & Wilson, R. J. (2000). Assessment and Learning: The ICE Approach. Portage & Main Press, Winnipeg, Canada.