• Title/Summary/Keyword: Memory Analysis


How to improve the accuracy of recommendation systems: Combining ratings and review texts sentiment scores (평점과 리뷰 텍스트 감성분석을 결합한 추천시스템 향상 방안 연구)

  • Hyun, Jiyeon;Ryu, Sangyi;Lee, Sang-Yong Tom
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.1
    • /
    • pp.219-239
    • /
    • 2019
  • As providing customized services to individuals becomes increasingly important, research on personalized recommendation systems is being carried out continuously. Collaborative filtering is one of the most popular approaches in academia and industry. However, it has the limitation that recommendations are mostly based on quantitative information such as users' ratings, which lowers accuracy. To solve this problem, many studies have attempted to improve the performance of recommendation systems by using information beyond the quantitative ratings; good examples are applications of sentiment analysis to customer review text. Nevertheless, existing research has not directly combined the results of sentiment analysis with quantitative rating scores in the recommendation system. Therefore, this study aims to reflect the sentiments expressed in reviews in the rating scores. In other words, we propose a new algorithm that converts a user's own review into quantitative information and reflects it directly in the recommendation system. To do this, we needed to quantify users' reviews, which are originally qualitative information. In this study, a sentiment score was calculated through the sentiment analysis techniques of text mining. The data were movie reviews, and a domain-specific sentiment dictionary was constructed for them. Regression analysis was used to construct the sentiment dictionary: positive/negative dictionaries were built using Lasso regression, Ridge regression, and ElasticNet. The accuracy of each constructed dictionary was verified through a confusion matrix. The accuracy of the Lasso-based dictionary was 70%, that of the Ridge-based dictionary was 79%, and that of the ElasticNet (α = 0.3) dictionary was 83%.
  Therefore, in this study, the sentiment score of each review was calculated with the ElasticNet-based dictionary and combined with the rating to create a new rating. In this paper, we show that collaborative filtering that reflects the sentiment scores of user reviews is superior to the traditional method that considers only the existing ratings. To show this, the proposed approach was applied to memory-based user-based collaborative filtering (UBCF), item-based collaborative filtering (IBCF), and the model-based matrix factorization methods SVD and SVD++. For each algorithm, the mean absolute error (MAE) and root mean square error (RMSE) were calculated to compare the recommendation system using the combined ratings with a system that considers only the ratings. In terms of MAE, the combined ratings improved results by 0.059 for UBCF, 0.0862 for IBCF, 0.1012 for SVD, and 0.188 for SVD++. In terms of RMSE, the improvements were 0.0431 for UBCF, 0.0882 for IBCF, 0.1103 for SVD, and 0.1756 for SVD++. As a result, the prediction performance of the ratings that reflect the sentiment scores proposed in this paper is superior to that of the conventional ratings. In other words, collaborative filtering that reflects the sentiment score of the user review shows better accuracy than conventional collaborative filtering that considers only the quantitative rating. A paired t-test was then performed to validate that the proposed model is the better approach, and it confirmed that the proposed model is superior. To overcome the limitation of previous research that judges user sentiment only by the quantitative rating, this study numerically quantified the review so that the user's opinion could be considered in a more refined way in the recommendation system, improving its accuracy.
  The findings of this study have managerial implications for recommendation system developers, who need to consider both quantitative and qualitative information. The way the combined system is constructed in this paper can be used directly by developers.
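The combination idea described in this abstract can be sketched as follows. This is a minimal illustration, assuming a sentiment score in [0, 1] and a simple weighted blend; the blend rule, the weight `alpha`, and the function names are assumptions for illustration, not the paper's exact formula.

```python
def combine_rating(rating, sentiment_score, alpha=0.5, scale=5.0):
    """Blend a numeric rating with a review sentiment score.

    sentiment_score is assumed to lie in [0, 1] and is rescaled to the
    rating range before mixing. The blend rule and alpha are illustrative
    assumptions, not the paper's exact combination formula.
    """
    return alpha * rating + (1 - alpha) * sentiment_score * scale


def mae(actual, predicted):
    """Mean absolute error over paired rating lists."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)


def rmse(actual, predicted):
    """Root mean square error over paired rating lists."""
    return (sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)) ** 0.5


# A 4-star rating paired with a strongly positive review (sentiment 0.9)
new_rating = combine_rating(4.0, 0.9)
```

The combined rating would then replace the raw rating in whatever collaborative filtering algorithm is used, and MAE/RMSE compare predictions against held-out ratings, as in the paper's evaluation.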

Verifying Execution Prediction Model based on Learning Algorithm for Real-time Monitoring (실시간 감시를 위한 학습기반 수행 예측모델의 검증)

  • Jeong, Yoon-Seok;Kim, Tae-Wan;Chang, Chun-Hyon
    • The KIPS Transactions:PartA
    • /
    • v.11A no.4
    • /
    • pp.243-250
    • /
    • 2004
  • Monitoring is used to see whether a real-time system provides its service on time. Generally, real-time monitoring focuses on investigating the current status of the system. However, to support stable performance, a monitor should not only show the current status of real-time processes but also predict their executions. The legacy prediction model has several limitations when applied to real-time monitoring. First, it performs a static prediction only after a real-time process has finished. Second, it needs a statistical pre-analysis before prediction. Third, its transition probabilities and clustering data are not based on current data. We propose an execution prediction model based on a learning algorithm to solve these problems and apply it to real-time monitoring. This model removes the unnecessary pre-processing and supports precise prediction based on current data. In addition, it supports multi-level prediction through a trend analysis of past execution data. Above all, the model is designed to support dynamic prediction performed during a real-time process's execution. Experimental results show that the judgment accuracy is greater than 80% when the size of the training set is over 10, and, for multi-level prediction, that the prediction difference is minimized when the number of executions is larger than the size of the training set. The proposed model has the limitations that it uses the simplest learning algorithm and that it does not consider a multi-regional space model managing CPU, memory, and I/O data. The execution prediction model based on a learning algorithm proposed in this paper can be used in areas related to real-time monitoring and control.
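The dynamic prediction idea sketched in this abstract, a sliding training set of recent execution times that is updated during execution, might look roughly like the following. The window size, the mean-plus-trend rule, and the class interface are illustrative assumptions; the paper's actual learning algorithm is not reproduced here.

```python
from collections import deque


class ExecutionPredictor:
    """Sketch of a dynamic execution-time predictor driven by current data.

    Keeps a sliding training set of recent execution times and predicts the
    next execution time from their mean plus a simple linear trend (average
    step between the first and last samples). Window size and trend rule
    are illustrative assumptions, not the paper's exact algorithm.
    """

    def __init__(self, window=10):
        self.history = deque(maxlen=window)

    def observe(self, exec_time):
        """Feed a newly measured execution time into the training set."""
        self.history.append(exec_time)

    def predict(self):
        """Predict the next execution time, or None with no data yet."""
        if not self.history:
            return None
        mean = sum(self.history) / len(self.history)
        trend = (self.history[-1] - self.history[0]) / max(len(self.history) - 1, 1)
        return mean + trend

    def will_meet_deadline(self, deadline):
        """Judge whether the next execution is predicted to finish on time."""
        p = self.predict()
        return p is not None and p <= deadline
```

Because the window only holds recent samples, the prediction tracks the current behavior of the process rather than a static pre-analysis, which is the property the abstract emphasizes.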

Development of the Information Delivery System for the Home Nursing Service (가정간호사업 운용을 위한 정보전달체계 개발 I (가정간호 데이터베이스 구축과 뇌졸중 환자의 가정간호 전산개발))

  • Park, J.H;Kim, M.J;Hong, K.J;Han, K.J;Park, S.A;Yung, S.N;Lee, I.S;Joh, H.;Bang, K.S
    • Journal of Home Health Care Nursing
    • /
    • v.4
    • /
    • pp.5-22
    • /
    • 1997
  • The purpose of the study was to develop an information delivery system for the home nursing service and to demonstrate and evaluate its efficiency. The research was conducted from September 1996 to August 31, 1997. At the first stage, an assessment tool was developed through a literature review for patients with cerebrovascular disease, who have the first priority for the home nursing service among patients with various health problems at home. Secondly, after the home care nurse identified patient nursing problems with the assessment tool, the patient classification system developed by Park (1988), consisting of 128 nursing activities under 6 categories, was used to identify the home care nurse's activities for patients with CVA at home. The research team held several workshops with 5 clinical nurse experts to refine it; finally, 110 nursing activities under 11 categories were derived for patients with CVA. At the second stage, algorithms were developed to connect the 110 nursing activities with the patient nursing problems identified by the assessment tool. The algorithms were computerized as follows: they were realized as a computer program using software engineering techniques. Development followed the prototyping method, beginning with a requirement analysis of the software specifications. The basic qualities of usability, compatibility, adaptability, and maintainability were taken into consideration, with particular emphasis on efficient construction of the database. To enhance database efficiency and establish structural cohesion, the data fields were categorized by their weight of relevance to the particular disease. This approach permits easy adaptation when numerous diseases are added in the future. In parallel, expandability and maintainability were stressed throughout the program development, which led to a modular design.
However, since the number of diseases to be covered increases as the project progresses, and since they are interrelated and coupled with each other, expandability and maintainability must be given high priority. Furthermore, since the system is to be integrated with other medical systems in the future, these properties are very important. The prototype developed in this project is to be evaluated through system testing. There are various evaluation metrics, such as cohesion, coupling, and adaptability, but direct measurement of these metrics is very difficult, so analytical and quantitative evaluation is almost impossible. Therefore, instead of analytical evaluation, experimental evaluation is applied through test runs by various users. This system testing will provide analysis from the users' viewpoint, and the detailed and additional requirement specifications arising from the users' real situations will be fed back into the system model. The degrees of freedom of the input and output will also be improved, and hardware limitations will be investigated. After refinement, the prototype system will be used as a design template to develop a more extensive system. In detail, the relevant modules will be developed for various diseases and integrated through a macroscopic design process focusing on inter-modularity, generality of the database, and compatibility with other systems. The Home Care Evaluation System comprises three main modules: (1) general information on a patient, (2) general health status of a patient, and (3) cerebrovascular disease patient. The general health status module has five sub-modules: physical measurement, vitality, nursing, pharmaceutical description, and emotional/cognitive ability.
The CVA patient module is divided into ten sub-modules, such as subjective sense, consciousness, memory, and language pattern. The typical sub-modules are described in Appendix 3.


Analysis of Misconception on the North Korea Cold Current in Secondary-School Science and Earth Science Textbooks (중등학교 과학 및 지구과학 교과서 북한한류 오개념 분석)

  • Park, Kyung-Ae;Lee, Jae Yon;Lee, Eun-Young;Kim, Young Ho;Byun, Do-Seong
    • Journal of the Korean earth science society
    • /
    • v.41 no.5
    • /
    • pp.490-503
    • /
    • 2020
  • Oceanic currents and circulation play an important role as regulators of the earth's energy distribution. The science and earth science textbooks for secondary schools based on the 2015 revised curriculum include a misconception about the seasonal variation of the North Korea Cold Current (NKCC) among the currents around the Korean Peninsula. To analyze this, the contents related to the NKCC were collected from five middle school and six high school textbooks, and a questionnaire survey was conducted with 30 middle school science teachers. The survey asked whether the textbook mentions the NKCC and whether there is an error in the concept of the temporal variation of the NKCC, and the teachers' free opinions related to the NKCC were collected. The textbooks state that the NKCC is strongest in winter, which is not consistent with scientific findings so far; in fact, there is scientific evidence that the NKCC is strongest in summer. In this study, the causes and processes of the misconception were investigated. According to the analysis of the survey, most teachers held the belief that the NKCC is stronger in winter. These errors began with a misconception of the terms, which teachers had imprinted on their memory as firm knowledge. The misconception originated from the knowledge that the teachers themselves acquired in their own secondary school years and has long been transmitted back to teachers and students without the textbooks' errors being revised. This situation risks becoming a recurrent structure that keeps producing serious misconceptions in students. Therefore, this study summarizes existing results on the seasonal variability of the NKCC and suggests the necessity of re-education to improve teachers' professionalism and to eliminate the misconceptions of teachers and students.

The Adoption and Diffusion of Semantic Web Technology Innovation: Qualitative Research Approach (시맨틱 웹 기술혁신의 채택과 확산: 질적연구접근법)

  • Joo, Jae-Hun
    • Asia pacific journal of information systems
    • /
    • v.19 no.1
    • /
    • pp.33-62
    • /
    • 2009
  • Internet computing is a disruptive IT innovation. The Semantic Web can be considered an IT innovation because it has the potential to reduce information overload and enable semantic integration through capabilities such as semantics and machine-processability. How should organizations adopt the Semantic Web? What factors affect the adoption and diffusion of Semantic Web innovation? Most studies on the adoption and diffusion of innovation use empirical analysis as a quantitative research methodology in the post-implementation stage. There is criticism that positivist research requiring theoretical rigor can sacrifice relevance to practice, and rapid advances in technology require studies relevant to practice. In particular, it is realistically impossible to take a quantitative approach to the factors affecting adoption of the Semantic Web because the technology is in its infancy. However, at this early stage of introduction it is necessary to give practitioners and researchers a model and guidelines for the adoption and diffusion of the technology innovation. Thus, the purpose of this study is to present a model of the adoption and diffusion of the Semantic Web and to offer propositions as guidelines for successful adoption, through a qualitative research method including multiple case studies and in-depth interviews. The researcher conducted face-to-face interviews with 15 people and 2 interviews by telephone and e-mail to collect data until the categories were saturated. Nine interviews, including the 2 telephone interviews, were with nine user organizations adopting the technology, and the others were with three supply organizations. Semi-structured interviews were used to collect data. The interviews were recorded on a digital voice recorder and subsequently transcribed verbatim; 196 pages of transcripts were obtained from about 12 hours of interviews.
Triangulation of evidence was achieved by examining each organization's website and various documents such as brochures and white papers. The researcher read the transcripts several times and underlined core words, phrases, and sentences. Data analysis then followed the procedure of open coding, in which the researcher forms initial categories of information about the phenomenon being studied by segmenting the information. QSR NVivo version 8.0 was used to group sentences expressing similar concepts. The 47 categories derived from the interview data were grouped into 21 categories, from which six factors were named. Five factors affecting the adoption of the Semantic Web were identified. The first factor is demand pull, including requirements for improving the search and integration services of existing systems and for creating new services. Second, environmental conduciveness, reference models, uncertainty, technology maturity, potential business value, government sponsorship programs, promising prospects for technology demand, complexity, and trialability affect the adoption of the Semantic Web from the perspective of technology push. Third, absorptive capacity plays an important role in adoption. Fourth, supplier's competence includes communication with and training for users, and the absorptive capacity of the supply organization. Fifth, over-expectation, which results in a gap between users' expectations and perceived benefits, has a negative impact on the adoption of the Semantic Web. Finally, a factor comprising a critical mass of ontology, budget, and visible effects was identified as a determinant affecting routinization and infusion. The researcher suggests a model of the adoption and diffusion of the Semantic Web representing the relationships between the six factors and adoption/diffusion as dependent variables. Six propositions are derived from the adoption/diffusion model to offer guidelines to practitioners and a research model for further studies.
Proposition 1: Demand pull has an influence on the adoption of the Semantic Web.
Proposition 1-1: The stronger the requirements for improving existing services, the more successfully the Semantic Web is adopted.
Proposition 1-2: The stronger the requirements for new services, the more successfully the Semantic Web is adopted.
Proposition 2: Technology push has an influence on the adoption of the Semantic Web.
Proposition 2-1: From the perspective of user organizations, technology push forces such as environmental conduciveness, reference models, potential business value, and government sponsorship programs have a positive impact on the adoption of the Semantic Web, while uncertainty and low technology maturity have a negative impact on its adoption.
Proposition 2-2: From the perspective of suppliers, technology push forces such as environmental conduciveness, reference models, potential business value, government sponsorship programs, and promising prospects for technology demand have a positive impact on the adoption of the Semantic Web, while uncertainty, low technology maturity, complexity, and low trialability have a negative impact on its adoption.
Proposition 3: Absorptive capacities such as organizational formal support systems, officers' or managers' competency in analyzing technology characteristics, their passion or willingness, and top management support are positively associated with successful adoption of the Semantic Web innovation from the perspective of user organizations.
Proposition 4: Supplier's competence has a positive impact on the absorptive capacities of user organizations and on technology push forces.
Proposition 5: The greater the expectation gap between users and suppliers, the later the Semantic Web is adopted.
Proposition 6: Post-adoption activities such as budget allocation, reaching critical mass, and sharing ontologies to offer sustainable services are positively associated with successful routinization and infusion of the Semantic Web innovation from the perspective of user organizations.

Semiological Implication of Dance Images in TV Advertisement (TV광고에 나타난 무용이미지의 기호학적 의미에 관한 연구)

  • Park, Ayoung
    • Trans-
    • /
    • v.1
    • /
    • pp.21-44
    • /
    • 2016
  • An advertisement is composed of symbols and signs carrying the messages it tries to express. In particular, an ad featuring a dancer introduces the goods or the meaning of the content through the motion of dance. The dance content or the dancer's motion contains symbols and signs, so it matters to understand how the ad and the dance express their meanings through these symbols, and what the symbolic meaning of the dance or dancer in the ad is. To that end, this study analyzes whether the symbols expressed through dance correspond with the aim of the advertisement, and explores how ordinary people accept dance by re-evaluating the symbolic meaning of dance itself. In this study, data related to the advertisement producers and directors were secured to understand their direction and intention, and previous studies related to the study's purpose, image, and effect were analyzed to understand the image of dance as a physical sign in TV advertising. Using data from www.TVCF.co.kr, the TV advertisement analysis covers four ads from 2008 (Nam Kwang Eng. & Const Co., Lotte Dept. Store (premium sale/gift card), Hyundai Motor Company Santa Fe - Pilobolus) and one ad from 2011 (PNS The zone Sash, Italy Arena di Verona), the years in which dance was used in advertising with the highest frequency. Also, based on factors judged important from repeatedly watching each advertisement, scenes in which the movement or motion of the dancer and the on-screen text change greatly were analyzed first. The image analysis of dance was conducted as a structural study based on physical image (line, costume, expression) and dance image (type of motion, qualitative feature, mood of dance). As a result, the symbolic dance images appearing in TV advertisements can be discussed as follows. First, the symbols and signs of dance in an advertisement correspond with the material objects of the advertisement. For instance, in the TV advertisement in which Lee Youngwoo appeared, his motion as a signifier, with fast turns, jumps, assembled turning jumps, and slides, signifies the challenge for the future of Nam Kwang Eng. & Const Co.
Second, the physical image of the dancer generally corresponds with the sender's intention, but there are some differences in the image of the dance. This makes people unconsciously recognize the symbolic image of dance in a TV ad while watching it. In particular, advertising is exposed frequently along with a broadcaster's scheduled programs, leaving a long-lasting memory, though this can differ depending on the ideas and character of each receiver. Advertising is a medium through which ordinary people naturally adopt cultural art in their lives. Broadcasting public art through TV advertisement widely exposes pure art, which was once available only to a minority, to the public, sublimating it into an art of public culture.


The Role of Radiation Therapy in the Treatment of Intracranial Glioma : Retrospective Analysis of 96 Cases (뇌 교종 96예에 대한 방사선치료 성적의 후향적 분석)

  • Kim Yeon Sil;Kang Ki Mun;Choi Byung Ock;Yoon Sei Chul;Shinn Kyung Sub;Kang Jun Gi
    • Radiation Oncology Journal
    • /
    • v.11 no.2
    • /
    • pp.249-258
    • /
    • 1993
  • Between March 1983 and December 1989, ninety-six patients with intracranial glioma were treated in the Department of Therapeutic Radiology, Kangnam St. Mary's Hospital, Catholic University Medical College. We retrospectively reviewed each case to evaluate the factors influencing the treatment results and to develop an optimal therapy policy. Median follow-up was 57 months (range: 31-133 months). Of the 96 patients, 60 (63%) were male and 36 (37%) were female. Ages ranged from 3 to 69 years (median 42 years). The most common presenting symptoms were headache (67%), followed by cerebral motor and sensory impairment (54%), nausea and vomiting (34%), seizure (19%), mental change (10%), and memory and calculation impairment (8%). Eighty-five (88.5%) patients, all except the 11 (11.5%) with brain stem lesions, had biopsy-proven intracranial glioma. The distribution by histologic type was 64 astrocytomas (75%), 4 mixed oligoastrocytomas (5%), and 17 oligodendrogliomas (20%). Forty-nine patients (58%) had grade I-II histology and 36 (42%) had grade III-IV histology. Of the 96 patients, 64 (67%) received postoperative radiotherapy and 32 (33%) were treated with primary radiotherapy. Gross total resection was performed in 14 (16%) patients, subtotal resection in 29 (34%), partial resection in 21 (25%), and biopsy only in 21 (25%). Median survival time was 53 months (range 2-133 months), and the 2- and 5-year survival rates were 69% and 49%, respectively. The 5-year survival rate by histologic grade was 70% for grade I, 58% for grade II, 28% for grade III, and 15% for grade IV. Multivariate analysis demonstrated that age at diagnosis (p=0.0121), Karnofsky performance status (KPS) (p=0.0002), histologic grade (p=0.0001), postoperative radiation therapy (p=0.0278), surgical extent (p=0.024), and cerebellar location of the tumor (p=0.0095) were significant prognostic factors influencing survival.


Improved Original Entry Point Detection Method Based on PinDemonium (PinDemonium 기반 Original Entry Point 탐지 방법 개선)

  • Kim, Gyeong Min;Park, Yong Su
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.7 no.6
    • /
    • pp.155-164
    • /
    • 2018
  • Many malicious programs are compressed or encrypted using various commercial packers to prevent reverse engineering, so malicious-code analysts must decompress or decrypt them first. The OEP (Original Entry Point) is the address of the first instruction executed after the encrypted or compressed executable file has been restored to its original binary state. Several unpackers, including PinDemonium, execute the packed file, keep track of the addresses until the OEP appears, and look for the OEP among those addresses. However, instead of finding the exact OEP, unpackers provide a relatively large set of OEP candidates, and sometimes the OEP is missing from the candidates. In other words, existing unpackers have difficulty finding the correct OEP. We have developed a new tool that provides smaller OEP candidate sets by adding two methods based on properties of the OEP. In this paper, we propose two methods that provide fewer OEP candidates by using the property that the function call sequence and its parameters are the same in the packed program and the original program. The first method is based on function calls. Programs written in C/C++ are compiled into binary code, and compiler-specific system functions are added to the compiled program. After examining these functions, we added a method to PinDemonium that detects the unpacking work by matching the patterns of system functions called in packed and unpacked programs. The second method is based on parameters. The parameters include not only user-entered inputs but also system inputs. We added a method to PinDemonium that finds the OEP using the system parameters of a particular function in stack memory. OEP detection experiments were performed on sample programs packed by 16 commercial packers.
We can reduce the number of OEP candidates by more than 40% on average compared to PinDemonium, except for 2 commercial packers that could not be executed due to anti-debugging techniques.
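The call-sequence filtering idea described above can be sketched as a candidate filter: keep only the OEP candidates whose subsequent system-function call sequence matches the sequence observed in the original (unpacked) program. The data shapes, the specific API names, and the prefix-matching rule are illustrative assumptions, not the tool's actual implementation.

```python
def filter_oep_candidates(candidates, packed_calls, original_calls):
    """Keep OEP candidates whose trailing system-function call sequence
    matches the sequence observed in the original program.

    packed_calls maps each candidate address to the list of system
    functions called after that address during execution of the packed
    binary; original_calls is the sequence from the unpacked program.
    Names and structure are illustrative assumptions.
    """
    return [addr for addr in candidates
            if packed_calls.get(addr, [])[:len(original_calls)] == original_calls]


# Hypothetical traces: only the second candidate replays the original
# program's compiler-inserted startup calls.
candidates = [0x401000, 0x402500, 0x403000]
packed_calls = {
    0x401000: ["VirtualAlloc", "LoadLibraryA"],               # unpacking stub
    0x402500: ["GetCommandLineA", "__getmainargs", "exit"],   # real entry
    0x403000: ["ExitProcess"],
}
original_calls = ["GetCommandLineA", "__getmainargs"]
remaining = filter_oep_candidates(candidates, packed_calls, original_calls)
```

The same shape of filter could be applied to the second method by comparing the stack-memory parameters of a chosen function instead of the call names.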

Performance analysis of Frequent Itemset Mining Technique based on Transaction Weight Constraints (트랜잭션 가중치 기반의 빈발 아이템셋 마이닝 기법의 성능분석)

  • Yun, Unil;Pyun, Gwangbum
    • Journal of Internet Computing and Services
    • /
    • v.16 no.1
    • /
    • pp.67-74
    • /
    • 2015
  • In recent years, frequent itemset mining that considers the importance of each item has been intensively studied as an important issue in the data mining field. According to the strategies utilizing item importance, itemset mining approaches that discover itemsets based on item importance are classified as follows: weighted frequent itemset mining, frequent itemset mining using transactional weights, and utility itemset mining. In this paper, we perform an empirical analysis of frequent itemset mining algorithms based on transactional weights. These mining algorithms compute transactional weights from the weight of each item in large databases and discover weighted frequent itemsets on the basis of item frequency and the weight of each transaction. Consequently, we can see the importance of a given transaction through database analysis, because a transaction's weight is higher when it contains many items with high weights. We not only analyze the advantages and disadvantages but also compare the performance of the best-known algorithms in frequent itemset mining based on transactional weights. As a representative of frequent itemset mining using transactional weights, WIS introduced the concept and strategies of transactional weights. In addition, there are other state-of-the-art algorithms, WIT-FWIs, WIT-FWIs-MODIFY, and WIT-FWIs-DIFF, for extracting itemsets with weight information. To mine weighted frequent itemsets efficiently, these three algorithms use a special lattice-like data structure called the WIT-tree. The algorithms do not need an additional database scan after the WIT-tree has been constructed, since each node of the WIT-tree holds item information such as the item and its transaction IDs.
In particular, traditional algorithms perform many database scans to mine weighted itemsets, whereas the WIT-tree-based algorithms avoid this overhead by reading the database only once. Additionally, the algorithms generate each new itemset of length N+1 from two different itemsets of length N. To discover new weighted itemsets, WIT-FWIs performs the itemset combination using the information of the transactions that contain the itemsets. WIT-FWIs-MODIFY has a unique feature that decreases the operations needed to calculate the frequency of the new itemset, and WIT-FWIs-DIFF utilizes a technique based on the difference of two itemsets. To compare and analyze the performance of the algorithms in various environments, we use real datasets of two types (dense and sparse) and measure runtime and maximum memory usage. Moreover, a scalability test is conducted to evaluate the stability of each algorithm as the size of the database changes. As a result, WIT-FWIs and WIT-FWIs-MODIFY show the best performance on the dense dataset, while on the sparse dataset WIT-FWIs-DIFF has better mining efficiency than the other algorithms. Compared to the WIT-tree-based algorithms, WIS, which is based on the Apriori technique, has the worst efficiency because it requires far more computation on average.
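The core quantities in transaction-weighted mining can be sketched in a few lines. This assumes the common convention that a transaction's weight is the mean weight of its items and that an itemset's weighted support is the sum of the weights of the transactions containing it; the exact definitions used by WIS and the WIT-tree algorithms may differ, and the toy data below is invented for illustration.

```python
def transaction_weight(transaction, item_weights):
    """Transaction weight = mean weight of its items (a common convention;
    the exact definition in the surveyed algorithms may differ)."""
    return sum(item_weights[i] for i in transaction) / len(transaction)


def weighted_support(itemset, db, item_weights):
    """Weighted support = sum of the weights of all transactions that
    contain every item of the itemset."""
    return sum(transaction_weight(t, item_weights)
               for t in db if set(itemset) <= set(t))


# Toy database: a transaction holding many high-weight items ("a") ends up
# with a higher weight, so itemsets it supports gain more weighted support.
db = [["a", "b"], ["a", "c"], ["b", "c"]]
w = {"a": 0.9, "b": 0.5, "c": 0.1}
support_a = weighted_support(["a"], db, w)   # transactions 1 and 2
```

A WIT-tree then stores, per node, the itemset and the IDs of its supporting transactions, so these sums can be maintained by intersecting transaction-ID lists instead of rescanning the database.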

An Analysis of the Home Economics Education Discipline Items in the Teacher Recruitment Examination for Secondary School (중등교사 신규임용 후보자 선정 경쟁시험 가정과 교과교육학 출제 문항 분석)

  • Kim, Sung-Sook;Chae, Jung-Hyun
    • Journal of Korean Home Economics Education Association
    • /
    • v.19 no.3
    • /
    • pp.149-168
    • /
    • 2007
  • The purpose of this study was to analyze the home economics education items in the teacher recruitment examination for secondary schools. To achieve this purpose, all the home economics education items from the seven examinations administered from the school year 2001 through the most recent year, 2007, were compared and analyzed. The form of the items was analyzed by frequency and rate, and the behavioral domain of the items was analyzed by content analysis. Some recommendations for improving the quality of home economics education items were also drawn from a discussion of the science education and social studies education items from the same period. The results of this study were as follows. First, the score ratio of home economics education items fluctuated between 20% and 30% from the school year 2001 to 2004 but has been fixed at 30-35% since the school year 2005. Among the subcategories of home economics education, curriculum items accounted for the highest ratio (43%), followed by items on teaching-learning methods (35%), evaluation (19%), and the philosophy of home economics education (3%). Second, from the school year 2001 to 2004 the items mixed single and subordinate forms, but since the school year 2005 they have been 100% single items. Third, regarding content, most of the curriculum items were related to the 7th National Curriculum, the teaching-learning method items were drawn mostly from teaching-learning models, the evaluation items mostly concerned performance assessment, and the philosophy items covered only Habermas's three systems of action, in the school year 2005. Fourth, regarding the behavioral domain, most of the curriculum items were at the level of 'simple knowledge or memory'.
Therefore, it was suggested that the behavioral domain of curriculum items be shifted toward 'complex knowledge or comprehension and application'. The behavioral domain of the teaching-learning method and evaluation items was mostly 'complex knowledge or comprehension and application'; to improve these items, it was suggested that the emphasis shift from 'comprehension' toward more 'application'. Fifth, regarding coverage, the curriculum items were limited to the superficial content of the 7th National Curriculum, so it was suggested that their coverage be extended to theoretical content such as the philosophical background and various principles of curriculum. It was suggested that the coverage of teaching-learning method items be extended to various teaching-learning theories and to the practical reasoning home economics instruction recently proven effective. The evaluation items were drawn mostly from performance assessment, so it was suggested that their coverage be extended to the analysis of evaluation results, item validity and reliability, and the evaluator's philosophical perspective.
