• Title/Summary/Keyword: Frequency-based Text Analysis


A Study on the Use of Supplementary Teaching Materials and Implements in the High School Home Economics Education (고등학교 가정과 교육에서 보조학습 교재.교구의 활용실태 연구)

  • 조은경;김용숙
    • Journal of Korean Home Economics Education Association
    • /
    • v.9 no.1
    • /
    • pp.1-17
    • /
    • 1997
  • This study was conducted to obtain basic data for improving the teaching of Home Economics by theoretically reviewing the supplementary teaching materials and implements usable in teaching the Costume History area, and by examining the types and applications of the supplementary teaching materials and implements that high schools owned. The subjects were 111 high school teachers of the Home Economics and Housework curriculum across the country, surveyed with self-administered questionnaires. The SAS program was used to calculate frequencies, percentages, means, standard deviations, and χ²-test statistics. The results were as follows: 1. Most of the high school teachers used school experiment expenses to prepare the supplementary teaching materials and implements. 2. Among the supplementary teaching materials and implements concerning Costume History, visual implements such as slides and pictures were the most commonly owned; CDs and audio implements such as cassette tapes were not used. 3. Most teachers recognized the importance of audio-visual teaching materials and implements concerning Costume History. 4. Among the audio-visual materials and implements concerning Costume History that teachers of the Home Economics and Housework curriculum can make themselves, the most used was ‘cutting pictorials from magazines and newspapers’, followed by ‘orbital materials’ and ‘copying pictorials’, and the least used was ‘recording from the radio’. 5. Most of the annual budget assigned to the Home Economics department was used for cooking practice, and the least was assigned to buying audio-visual teaching materials and implements. 6. Time assigned to the Home Economics area was mostly one or two hours per week, and within this, time assigned to the history of Western costume and the history of Korean costume was mostly five to eight hours. 7. The areas in which the high school teachers felt the most difficulty in the clothing and textiles curriculum were ‘textiles’, followed by ‘knitting’, ‘Western costume history’, and ‘Korean clothing construction’. 8. The difficulties teachers faced while teaching Costume History were mostly that ‘the pictorials in the text are not fully explainable’, followed by ‘most of the supplementary teaching materials or implements are not owned’, ‘too much has to be explained in a short time’, and ‘verbal explanation alone is insufficient’. 9. The solutions teachers suggested were mostly that ‘information on the audio-visual materials and implements distributed on the market should be easy to obtain’, followed by ‘schools should provide enough experiment and practice expenses to buy audio-visual materials and implements’ and ‘Home Economics teacher-education institutions should take the lead in improving teaching methods and give special lectures on them’.

  • PDF
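The statistical step this survey study reports (frequencies, percentages, and a χ²-test, computed in SAS in the original) can be sketched in plain Python. A minimal sketch; the contingency table below is hypothetical, not the survey's data:

```python
# Pearson's chi-square statistic for a 2-D contingency table,
# as used to compare categorical survey responses across groups.

def chi_square(observed):
    """Return the chi-square statistic for a list-of-lists table."""
    rows = [sum(r) for r in observed]                  # row totals
    cols = [sum(c) for c in zip(*observed)]            # column totals
    total = sum(rows)
    stat = 0.0
    for i, row in enumerate(observed):
        for j, o in enumerate(row):
            e = rows[i] * cols[j] / total              # expected count
            stat += (o - e) ** 2 / e
    return stat

# Hypothetical counts: e.g. teachers owning slides vs. CDs, by school type
table = [[40, 10],
         [25, 36]]
print(round(chi_square(table), 3))
```

The statistic would then be compared against the χ² distribution with (rows−1)×(cols−1) degrees of freedom to judge significance.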

A Study on Establishing a Market Entry Strategy for the Satellite Industry Using Future Signal Detection Techniques (미래신호 탐지 기법을 활용한 위성산업 시장의 진입 전략 수립 연구)

  • Sehyoung Kim;Jaehyeong Park;Hansol Lee;Juyoung Kang
    • Journal of Intelligence and Information Systems
    • /
    • v.29 no.3
    • /
    • pp.249-265
    • /
    • 2023
  • Recently, the satellite industry has been shifting toward the private-led 'New Space' paradigm, a departure from the traditional government-led industry. The space industry, widely regarded as a next-generation growth industry, still receives relatively little attention in Korea compared to the global market. The purpose of this study is therefore to explore future signals that can inform the market entry strategies of private companies in the domestic satellite industry. To this end, the study draws on the theoretical background of future signal theory and the Keyword Portfolio Map method to analyze keyword potential in patent document data, based on keyword growth rate and keyword occurrence frequency. In addition, news data was collected to categorize future signals into first symptoms and early information, respectively; these serve as interpretive indicators of how keywords reveal their actual potential outside of patent documents. This study describes the process of data collection and analysis for exploring future signals, and traces the evolution of each keyword in the collected documents from a weak signal to a strong signal, showing through keyword-map visualizations how the method can be used. The process of this research extends the methodology and scope of existing research on future signals, and the results can contribute to establishing new industry planning and research directions in the satellite industry.
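The keyword-potential step described here, placing each keyword by its occurrence frequency and growth rate to separate weak from strong signals, can be roughly sketched as follows. The thresholds, labels, and yearly counts are hypothetical illustrations, not the paper's actual portfolio-map analysis:

```python
# Classify keywords by average frequency and period-over-period growth,
# in the spirit of the weak-signal framework the study builds on.
from statistics import mean

def growth_rate(counts):
    """Average period-over-period growth of a keyword's yearly counts."""
    return mean((b - a) / a for a, b in zip(counts, counts[1:]))

def classify(counts, freq_cut, growth_cut):
    freq, growth = mean(counts), growth_rate(counts)
    if growth > growth_cut:
        return "strong signal" if freq > freq_cut else "weak signal"
    return "well-known" if freq > freq_cut else "latent"

# Hypothetical yearly patent-keyword counts
yearly = {"cubesat": [2, 4, 7, 12], "transponder": [30, 31, 29, 32]}
for kw, counts in yearly.items():
    print(kw, classify(counts, freq_cut=10, growth_cut=0.2))
```

A low-frequency but fast-growing keyword lands in the weak-signal quadrant, which is exactly the region the study mines for market-entry opportunities.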

Analysis of Safety Education Contents of 『Field of home life』 in Technology·Home Economics Textbook developed by the revised curriculum in 2009 (2009 개정 기술·가정 교과서 『가정생활영역』의 안전교육 내용 분석)

  • Kim, Nam Eun
    • Journal of Korean Home Economics Education Association
    • /
    • v.29 no.2
    • /
    • pp.23-39
    • /
    • 2017
  • The purpose of this study is to present basic data for selecting and improving safety education contents that can practically help middle school students, through an analysis of the safety education contents in the 'field of home life' of the 2009 revised middle school textbooks. The subjects of analysis were 12 types of middle school textbooks, 24 books in total, written by 12 publishers under the 2009 revised curriculum. The analysis criteria were developed by the researcher with reference to preceding studies on safety education, based on the seven safety education standards presented by the Ministry of Education (2015). With these criteria, all words related to the safety education contents were extracted from each textbook: words directly mentioning 'safety', words meaning 'psychological safety' and 'happy life', and words related to 'attention', 'note', 'stability', etc. Under this analytic frame of safety education contents for a home economics textbook, the content analysis method was used to produce the frequency and percentage of those words. The analysis shows that 336.3 pages concern safety education, 9.8% of the 3,412 pages in the 12 types of technology and home economics textbooks. By volume, 224.9 pages are in the first volume and 111.9 pages in the second, so the proportion of safety education in home economics textbooks decreases as grades increase from year one to year three. The unit with the most safety education content is 'Self-management of youth', which includes three areas of safety education. Units emphasizing practice, experience, and practical exercise, such as 'Life of youth' and 'Practice of eco-living', mostly contain safety education content in the area of 'life safety'. Although household accidents (1.4%) and experiment or practice accidents (0.3%) are among the safety accidents students experience most, they appear at low rates. The contents on universal housing and school violence are duplicated in the first and second volumes. The safety education contents most presented in the 12 types of textbooks are proper sexual attitudes, dietary problems, family conflict, and food choice; the least common are dangerous drugs, family welfare, internet addiction, and industrial accident compensation insurance. Since this study analyzed the 12 textbooks developed under the 2009 revised curriculum, it is necessary to compare them with textbooks written under the 2015 revised curriculum, to clarify the content system of safety education, and to avoid duplication of contents. In addition, it is necessary to develop and distribute safety education programs that can support the textbooks.
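The counting step of a content analysis like this one, extracting safety-related words and reporting their frequency and percentage, might look like the minimal sketch below. The term list and sample sentence are hypothetical, not the study's coding scheme:

```python
# Tally occurrences of safety-related terms in a passage of textbook text
# and report each term's count and share of all matched terms.
from collections import Counter
import re

SAFETY_TERMS = {"safety", "attention", "note", "stability"}

def safety_term_stats(text):
    words = re.findall(r"[a-z]+", text.lower())
    counts = Counter(w for w in words if w in SAFETY_TERMS)
    total = sum(counts.values())
    return {w: (c, round(100 * c / total, 1)) for w, c in counts.items()}

sample = "Note the safety rules. Safety first; pay attention to stability."
print(safety_term_stats(sample))
```

In the actual study the percentages were computed over pages rather than matched terms, but the frequency-and-share pattern is the same.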

Twitter Issue Tracking System by Topic Modeling Techniques (토픽 모델링을 이용한 트위터 이슈 트래킹 시스템)

  • Bae, Jung-Hwan;Han, Nam-Gi;Song, Min
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.2
    • /
    • pp.109-122
    • /
    • 2014
  • People nowadays create a tremendous amount of data on Social Network Services (SNS). In particular, the incorporation of SNS into mobile devices has resulted in massive amounts of data generation, greatly influencing society. This is an unmatched phenomenon in history, and we now live in the Age of Big Data. SNS data satisfies the conditions that define Big Data: the amount of data (volume), data input and output speed (velocity), and the variety of data types (variety). If the trend of an issue can be discovered in SNS Big Data, this information can serve as an important new source for creating value, because it covers the whole of society. In this study, a Twitter Issue Tracking System (TITS) is designed and built to meet the need for analyzing SNS Big Data. TITS extracts issues from Twitter texts and visualizes them on the web. The proposed system provides four functions: (1) provide the topic keyword set corresponding to the daily ranking; (2) visualize the daily time-series graph of a topic over a month; (3) show the importance of a topic through a treemap based on the scoring system and frequency; (4) visualize the daily time-series graph of keywords retrieved by keyword search. The present study analyzes Big Data generated by SNS in real time. SNS Big Data analysis requires various natural language processing techniques, including stop-word removal and noun extraction, for processing various unrefined forms of unstructured data. It also requires recent big data technology to rapidly process large amounts of real-time data, such as the Hadoop distributed system or NoSQL, an alternative to relational databases. We built TITS on Hadoop to optimize big data processing, because Hadoop is designed to scale from a single computing node to thousands of machines. Furthermore, we use MongoDB, a NoSQL, open-source, document-oriented database that provides high performance, high availability, and automatic scaling. Unlike relational databases, MongoDB has no schemas or tables, and its most important goals are data accessibility and data processing performance. In the Age of Big Data, visualization is attractive to the Big Data community because it helps analysts examine data easily and clearly. TITS therefore uses the d3.js library as a visualization tool. This library is designed for creating Data-Driven Documents that bind the document object model (DOM) to data; this interaction with data is useful for managing a real-time data stream with smooth animation. In addition, TITS uses Bootstrap, which consists of pre-configured plug-in style sheets and JavaScript libraries, to build the web system. The TITS Graphical User Interface (GUI) is designed using these libraries and can detect issues on Twitter in an easy and intuitive manner. The proposed work demonstrates the effectiveness of our issue detection techniques by matching detected issues with corresponding online news articles. The contributions of the present study are threefold. First, we suggest an alternative approach to real-time big data analysis, which has become an extremely important issue. Second, we apply a topic modeling technique used in various research areas, including Library and Information Science (LIS), and on this basis confirm the utility of storytelling and time-series analysis. Third, we develop a web-based system and make it available for the real-time discovery of topics. The present study conducted experiments with nearly 150 million tweets in Korea during March 2013.
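The first TITS function, a daily topic-keyword ranking, can be approximated in a few lines of pure Python. This is only an illustrative sketch: the real system applies topic modeling over Hadoop and MongoDB, and the stopword list and tweets below are hypothetical:

```python
# Group tweets by day, count non-stopword terms, and rank the top
# keywords per day -- a toy stand-in for TITS's daily topic ranking.
from collections import Counter, defaultdict

STOPWORDS = {"the", "a", "is", "on", "to", "rt"}

def daily_keyword_ranking(tweets, top_n=3):
    """tweets: list of (date, text) pairs; returns top keywords per day."""
    by_day = defaultdict(Counter)
    for day, text in tweets:
        words = [w for w in text.lower().split() if w not in STOPWORDS]
        by_day[day].update(words)
    return {day: [w for w, _ in c.most_common(top_n)]
            for day, c in by_day.items()}

tweets = [
    ("2013-03-01", "election debate tonight"),
    ("2013-03-01", "debate coverage is live"),
    ("2013-03-02", "baseball season opener"),
]
print(daily_keyword_ranking(tweets))
```

Topic modeling (e.g. LDA) would replace the raw term counts here with per-topic keyword distributions, which is what gives TITS coherent issue clusters rather than bare frequency lists.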

Clickstream Big Data Mining for Demographics based Digital Marketing (인구통계특성 기반 디지털 마케팅을 위한 클릭스트림 빅데이터 마이닝)

  • Park, Jiae;Cho, Yoonho
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.3
    • /
    • pp.143-163
    • /
    • 2016
  • The demographics of Internet users are the most basic and important source for target marketing and personalized advertisements on digital marketing channels, which include email, mobile, and social media. However, it has gradually become difficult to collect the demographics of Internet users because their activities are anonymous in many cases. Although a marketing department can obtain demographics through online or offline surveys, these approaches are very expensive, slow, and likely to include false statements. Clickstream data is the record an Internet user leaves behind while visiting websites: as the user clicks anywhere in a webpage, the activity is logged in semi-structured website log files. Such data shows what pages users visited, how long they stayed, how often and when they visited, which sites they prefer, what keywords they used to find a site, whether they purchased anything, and so forth. For this reason, some researchers have tried to infer the demographics of Internet users from their clickstream data, deriving various independent variables likely to be correlated with demographics. These variables include search keywords; frequency and intensity by time, day, and month; variety of websites visited; text information of web pages visited; etc. The demographic attributes to predict are also diverse across papers, covering gender, age, job, location, income, education, marital status, and presence of children. A variety of data mining methods, such as LSA, SVM, decision trees, neural networks, logistic regression, and k-nearest neighbors, were used for building prediction models. However, this body of research has not yet identified which data mining method is appropriate for predicting each demographic variable. Moreover, the independent variables studied so far need to be reviewed, combined as needed, and evaluated for building the best prediction model. The objective of this study is to choose the clickstream attributes most likely to be correlated with demographics from the results of previous research, and then to identify which data mining method is best suited to predict each demographic attribute. Among the demographic attributes, this paper focuses on predicting gender, age, marital status, residence, and job, applying 64 clickstream attributes drawn from previous research. The overall process of predictive model building is composed of four steps. In the first step, we create user profiles that include the 64 clickstream attributes and 5 demographic attributes. The second step performs dimension reduction on the clickstream variables to address the curse of dimensionality and the overfitting problem, using three approaches based on decision trees, PCA, and cluster analysis. In the third step, we build alternative predictive models for each demographic variable, using SVM, neural networks, and logistic regression. The last step evaluates the alternative models by model accuracy and selects the best one. For the experiments, we used clickstream data representing 5 demographic attributes and 16,962,705 online activities for 5,000 Internet users. IBM SPSS Modeler 17.0 was used for the prediction process, and 5-fold cross-validation was conducted to enhance the reliability of the experiments. The experimental results verify that there is a specific data mining method well suited to each demographic variable: for example, age prediction performs best with decision-tree-based dimension reduction and a neural network, whereas the prediction of gender and marital status is most accurate with SVM and no dimension reduction. We conclude that the online behaviors of Internet users, captured through clickstream data analysis, can be used to predict their demographics and thus be utilized for digital marketing.
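The four-step pipeline (profile construction, dimension reduction, per-attribute classifier, evaluation) can be sketched with scikit-learn rather than SPSS Modeler; PCA stands in for the study's three reduction approaches, and the data below are synthetic, not the 5,000-user clickstream set:

```python
# Dimension reduction + classifier pipeline for one demographic
# attribute, on synthetic stand-in clickstream features.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=100)                 # stand-in "gender" label
X = rng.normal(size=(100, 6)) + 3 * y[:, None]   # 6 toy clickstream features

model = Pipeline([("reduce", PCA(n_components=3)),
                  ("clf", LogisticRegression())])
model.fit(X, y)
print(round(model.score(X, y), 2))
```

Swapping the `reduce` and `clf` steps (tree-based selection, clustering; SVM, neural network) and comparing cross-validated accuracy per demographic attribute mirrors the study's model-selection procedure.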

The Effect of Domain Specificity on the Performance of Domain-Specific Pre-Trained Language Models (도메인 특수성이 도메인 특화 사전학습 언어모델의 성능에 미치는 영향)

  • Han, Minah;Kim, Younha;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.28 no.4
    • /
    • pp.251-273
    • /
    • 2022
  • Recently, research on applying deep learning to text analysis has steadily continued. In particular, studies have actively sought to understand the meanings of words and perform tasks such as summarization and sentiment classification through pre-trained language models trained on large datasets. However, existing pre-trained language models show limitations in that they do not understand specific domains well. In recent years, the flow of research has therefore shifted toward creating language models specialized for particular domains. Domain-specific pre-trained language models allow a model to better understand the knowledge of a particular domain and show performance improvements on various tasks in that field. However, domain-specific further pre-training is expensive, since corpus data of the target domain must be acquired, and many cases have been reported in which the performance improvement after further pre-training is insignificant. It is thus difficult to decide to develop a domain-specific pre-trained language model when it is not clear whether performance will improve dramatically. In this paper, we present a way to proactively estimate the expected performance improvement from further pre-training in a domain before actually performing it. Specifically, after selecting three domains, we measured the increase in classification accuracy achieved through further pre-training in each domain. We also developed and present a new indicator that estimates the specificity of a domain based on the normalized frequencies of the keywords used in that domain. Finally, we conducted classification using a general pre-trained language model and domain-specific pre-trained language models for the three domains. As a result, we confirmed that the higher the domain specificity index, the larger the performance improvement obtained through further pre-training.
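The abstract does not give the exact formula of the specificity indicator, so the sketch below is one plausible formulation consistent with its description: the mean ratio of each keyword's normalized frequency in the domain corpus to its normalized frequency in a general corpus. All corpora and keywords here are hypothetical:

```python
# A hypothetical domain-specificity index from normalized keyword
# frequencies: domain-heavy keywords push the index above 1.
from collections import Counter

def normalized_freq(tokens):
    counts = Counter(tokens)
    total = len(tokens)
    return {w: c / total for w, c in counts.items()}

def specificity_index(domain_tokens, general_tokens, keywords):
    d = normalized_freq(domain_tokens)
    g = normalized_freq(general_tokens)
    ratios = [d.get(k, 0.0) / max(g.get(k, 0.0), 1e-9) for k in keywords]
    return sum(ratios) / len(ratios)

legal = "plaintiff filed motion court plaintiff court ruling".split()
news = "market filed report court growth market policy report".split()
print(round(specificity_index(legal, news, ["court", "filed"]), 2))
```

Under the paper's finding, a corpus scoring high on such an index would be the better candidate for costly further pre-training.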