• Title/Summary/Keyword: Text data

A Study of Pre-Service Secondary Science Teacher's Conceptual Understanding on Carbon Neutral: Focused on Eye Tracking System (탄소중립에 관한 중등 과학 예비교사들의 개념 이해 연구 : 시선추적시스템을 중심으로)

  • Younjeong Heo;Shin Han;Hyoungbum Kim
    • Journal of the Korean Society of Earth Science Education / v.16 no.2 / pp.261-275 / 2023
  • The purpose of this study was to analyze the conceptual understanding of carbon neutrality among pre-service secondary school science teachers and to identify their gaze patterns on visual materials. Gaze tracking data from 20 pre-service secondary school science teachers were analyzed. The participants' levels of conceptual understanding of carbon neutrality were categorized, and differences in gaze patterns were analyzed according to the degree of conceptual understanding. The research findings are as follows. First, in a modeling activity predicting carbon emissions and removals until 2100 based on the concept of '2050 carbon neutrality,' 50% of the participants held the conception that carbon emissions would continue to increase. Additionally, 25% of the participants did not properly understand the causal relationship between net carbon dioxide emissions and cumulative concentrations. Second, the participants' gaze movements on visual materials related to carbon neutrality were significantly influenced by the information presented in the text area, and in graphs the focus was mainly on the data area. Moreover, when visual materials with the same function and category were arranged side by side, participants showed the most interest in materials explaining concepts or in materials placed on the left, implying a preference for specific positions or orders. Participants with lower levels of conceptual understanding and an inadequate grasp of the causal relationships among elements exhibited notably reduced concentration and overall gaze flow. These findings suggest that conceptual understanding of carbon neutrality, including climate change and natural disasters, significantly influences interest in and engagement with visual materials.
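
The area-of-interest (AOI) dwell analysis behind gaze-pattern studies like this one can be illustrated with a minimal Python sketch. The gaze samples and AOI rectangles below are invented for illustration and are not the study's data.

```python
import pandas as pd

# Hypothetical gaze samples: timestamp (ms) and x/y screen coordinates.
gaze = pd.DataFrame({
    "t_ms": [0, 16, 33, 50, 66, 83],
    "x":    [120, 125, 510, 515, 520, 130],
    "y":    [300, 305, 410, 412, 415, 310],
})

# Hypothetical areas of interest: name -> (x_min, y_min, x_max, y_max).
aois = {
    "text_area":  (100, 250, 300, 350),
    "graph_data": (480, 380, 700, 500),
}

def label_aoi(row):
    """Assign each gaze sample to the AOI it falls inside, if any."""
    for name, (x0, y0, x1, y1) in aois.items():
        if x0 <= row.x <= x1 and y0 <= row.y <= y1:
            return name
    return "outside"

gaze["aoi"] = gaze.apply(label_aoi, axis=1)

# Dwell share per AOI: fraction of samples landing inside each region.
print(gaze["aoi"].value_counts(normalize=True))
```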

A Study on Tourism Behavior in the New normal Era Using Big Data (빅데이터를 활용한 뉴노멀(New normal)시대의 관광행태 변화에 관한 연구)

  • Kyoung-mi Yoo;Jong-cheon Kang;Youn-hee Choi
    • The Journal of the Convergence on Culture Technology / v.9 no.3 / pp.167-181 / 2023
  • This study used TEXTOM, a social network analysis program, to analyze changes in tourism behavior after the travel restrictions imposed at the outbreak of COVID-19 were eased. Data on the keywords 'domestic travel' and 'overseas travel' were collected from blogs, cafes, and news provided by Naver, Google, and Daum. The collection period was set from April to December 2022, when social distancing was lifted, and full-year data for 2019 and 2020 were collected and compared with 2022. A total of 80 key keywords were extracted through text mining, and centrality analysis was performed using NetDraw. Finally, through CONCOR analysis, the correlated keywords were clustered into four groups. The results show that tourism behavior in 2022 reflects a recovery toward pre-COVID-19 levels, a segmentation of travel around each person's preferred theme, and a tendency to prioritize each country's COVID-19 mitigation policy before selecting a tourist destination. The study is expected to provide basic data for developing tourism marketing strategies and tourism products for the tourism ecosystem newly emerging after COVID-19.
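
A minimal Python sketch of the keyword co-occurrence network and centrality step described above. The study used TEXTOM and NetDraw; networkx stands in here, and the per-document keyword lists are invented.

```python
import itertools
import networkx as nx

# Toy keyword lists, one per collected document.
docs = [
    ["domestic travel", "camping", "healing"],
    ["overseas travel", "quarantine", "visa"],
    ["domestic travel", "healing", "hotel"],
]

# Build a weighted keyword co-occurrence network.
G = nx.Graph()
for keywords in docs:
    for a, b in itertools.combinations(sorted(set(keywords)), 2):
        if G.has_edge(a, b):
            G[a][b]["weight"] += 1
        else:
            G.add_edge(a, b, weight=1)

# Degree centrality ranks the most connected keywords.
centrality = nx.degree_centrality(G)
for kw, c in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(f"{kw}: {c:.2f}")
```

CONCOR-style blockmodeling is not built into networkx, but community-detection routines such as `nx.community.greedy_modularity_communities(G)` give a comparable grouping of correlated keywords.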

The Perception Analysis of Autonomous Vehicles using Network Graph (네트워크 그래프를 활용한 자율주행차에 대한 인식 분석)

  • Hyo-gyeong Park;Yeon-hwi You;Sung-jung Yong;Seo-young Lee;Il-young Moon
    • Journal of Practical Engineering Education / v.15 no.1 / pp.97-105 / 2023
  • Recently, with the development of artificial intelligence technology, many technologies for user convenience are being developed, and interest in autonomous vehicles is increasing day by day. Many automobile companies are currently aiming to commercialize autonomous vehicles. To lay the groundwork for reasonable new government policies supporting commercialization, we analyzed changes in public opinion and perception through news article data. In this paper, 35,891 news articles mentioning terms related to 'autonomous vehicles' over the past three years were collected and analyzed as a network. The analysis derived major keywords such as 'autonomous driving', 'AI', 'future', 'Hyundai Motor', 'autonomous driving vehicle', 'automobile', 'industrial', and 'electric vehicle'. In addition, the autonomous vehicle industry is developing into a faster and more diverse platform and service industry by converging with various industries, including semiconductor and big tech companies as well as automobile companies, drawing attention to industrial convergence. To continuously track changes in public opinion and perception, ongoing analysis of SNS data and technology trends is necessary.
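
As a rough stand-in for the keyword-derivation step, the following sketch ranks terms by TF-IDF weight across a corpus. The article snippets are invented, and a real pipeline would first tokenize the Korean text with a morphological analyzer.

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# Toy stand-ins for news articles mentioning autonomous vehicles.
articles = [
    "autonomous driving AI future Hyundai Motor",
    "electric vehicle semiconductor big tech platform",
    "autonomous driving service industry convergence",
]

vectorizer = TfidfVectorizer()
tfidf = vectorizer.fit_transform(articles)

# Rank terms by their summed TF-IDF weight across the corpus.
scores = tfidf.sum(axis=0).A1
terms = vectorizer.get_feature_names_out()
for term, score in sorted(zip(terms, scores), key=lambda ts: -ts[1])[:8]:
    print(f"{term}: {score:.3f}")
```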

Analysis of Dog-Related Outdoor Public Space Conflicts Using Complaint Data (민원 자료를 활용한 반려견 관련 옥외 공공공간 갈등 분석)

  • Yoo, Ye-seul;Son, Yong-Hoon;Zoh, Kyung-Jin
    • Journal of the Korean Institute of Landscape Architecture / v.52 no.1 / pp.34-45 / 2024
  • Companion animals are increasingly being recognized as members of society in outdoor public spaces. However, the presence of dogs in cities has become a subject of conflict between pet owners and non-pet owners, causing problems such as hygiene and noise. This study applied text mining techniques to public complaint data containing the keywords 'dog,' 'pet,' and 'puppy' to identify the causes of dog-related conflicts in outdoor public spaces and their key issues. The main findings are as follows. First, the majority of dog-related complaints concerned the use of outdoor public spaces. Second, different types of outdoor public spaces showed different spatial issues. Third, dog-related complaints fell into four topics: 'Requesting a dog playground', 'Raising safety issues related to animals', 'Using facilities other than dog-only areas', and 'Requesting increased park management and enforcement related to pet etiquette'. This study analyzed citizens' perceptions surrounding pets at a time when the creation and use of pet-related public spaces are expanding. In particular, it is significant in applying a new method of collecting public opinion by adopting complaint data, which clearly states problems and requests.
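
The four complaint topics suggest a topic-modeling step. Below is a minimal LDA sketch with scikit-learn; the paper does not name its topic-modeling tool, and the complaint snippets are invented English stand-ins for the Korean complaint texts.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Invented complaint snippets containing dog-related keywords.
complaints = [
    "please build a dog playground in the park",
    "off leash dog scared my child safety problem",
    "dogs entering the playground area not allowed",
    "park management should enforce pet etiquette rules",
]

vec = CountVectorizer(stop_words="english")
dtm = vec.fit_transform(complaints)

# Four topics, matching the number reported in the study.
lda = LatentDirichletAllocation(n_components=4, random_state=0)
lda.fit(dtm)

# Print the top terms per topic.
terms = vec.get_feature_names_out()
for i, weights in enumerate(lda.components_):
    top = [terms[j] for j in weights.argsort()[::-1][:4]]
    print(f"topic {i}: {top}")
```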

A Study on an Automatic Classification Model for Facet-Based Multidimensional Analysis of Civil Complaints (패싯 기반 민원 다차원 분석을 위한 자동 분류 모델)

  • Na Rang Kim
    • Journal of Korea Society of Industrial Information Systems / v.29 no.1 / pp.135-144 / 2024
  • In this study, we propose an automatic classification model for quantitative multidimensional analysis based on facet theory to understand public opinion and demands on major issues through big data analysis. Civil complaints, as a form of public feedback, are generated by many individuals on multiple topics, repeatedly, continuously, and in real time, which makes them challenging for officials to read and analyze efficiently. Specifically, our research introduces a new classification framework that applies facet theory and policy analysis models to analyze the characteristics of citizen complaints and feed them into the policy-making process. Furthermore, to reduce the administrative burden of complaint analysis and processing and to facilitate citizen participation in policy, we employ deep learning to automatically extract and classify attributes according to the facet analysis framework. The results of this study are expected to provide important insights into understanding and analyzing the characteristics of complaint-related big data, paving the way for future research on quantifying unstructured data and applying multidimensional analysis in fields beyond the public sector, such as education, industry, and healthcare. In practical terms, improving the processing system for large-scale electronic complaints through deep learning-based automation can enhance the efficiency and responsiveness of complaint handling, and the approach can also be applied to text data processing in other fields.
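
A minimal sketch of the facet-based classification structure: one classifier per facet, each predicting that facet's attribute. A linear baseline stands in for the paper's deep learning model, and the facets, labels, and complaint texts below are all invented.

```python
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Toy complaints with labels for two hypothetical facets.
texts = ["broken streetlight on main road", "noisy bar late at night",
         "pothole near the school", "loud construction on weekends"]
facet_labels = {
    "topic":   ["roads", "noise", "roads", "noise"],
    "urgency": ["high", "low", "high", "low"],
}

# Train one classifier per facet.
models = {}
for facet, labels in facet_labels.items():
    clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
    clf.fit(texts, labels)
    models[facet] = clf

# A new complaint is classified along every facet at once,
# yielding the multidimensional profile used for analysis.
new_complaint = ["streetlight flickering near the park"]
for facet, clf in models.items():
    print(facet, "->", clf.predict(new_complaint)[0])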

Korean Ironic Expression Detector (한국어 반어 표현 탐지기)

  • Seung Ju Bang;Yo-Han Park;Jee Eun Kim;Kong Joo Lee
    • The Transactions of the Korea Information Processing Society / v.13 no.3 / pp.148-155 / 2024
  • Despite the increasing importance of irony and sarcasm detection in natural language processing, research on Korean is relatively scarce compared to other languages. This study experiments with various models for irony detection in Korean text. Irony detection experiments were conducted using KoBERT, a BERT-based model, and ChatGPT. For KoBERT, two methods of additional training on sentiment data were applied: transfer learning and multi-task learning. For ChatGPT, few-shot learning was applied by increasing the number of example sentences provided in the prompt. The experiments showed that the transfer learning and multi-task learning models, trained with additional sentiment data, outperformed the baseline model without such data. ChatGPT, on the other hand, exhibited significantly lower performance than KoBERT, and increasing the number of example sentences did not lead to a noticeable improvement. In conclusion, this study suggests that a KoBERT-based model is more suitable for Korean irony detection than ChatGPT, and it highlights the contribution of additional training on sentiment data to improving irony detection performance.
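
A minimal Hugging Face sketch of the transfer-learning variant (fine-tune on sentiment data first, then on irony data). The checkpoint name and dataset variables are assumptions, not the paper's exact setup; a real run would substitute a KoBERT checkpoint and tokenized Korean corpora.

```python
from transformers import (AutoModelForSequenceClassification,
                          AutoTokenizer, Trainer, TrainingArguments)

CHECKPOINT = "klue/bert-base"  # stand-in for a KoBERT checkpoint

tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT)
model = AutoModelForSequenceClassification.from_pretrained(
    CHECKPOINT, num_labels=2)

def finetune(model, dataset, output_dir):
    """One fine-tuning stage; called first on sentiment, then on irony."""
    args = TrainingArguments(output_dir=output_dir,
                             num_train_epochs=3,
                             per_device_train_batch_size=16)
    Trainer(model=model, args=args, train_dataset=dataset).train()
    return model

# Stage 1: sentiment corpus; stage 2: irony corpus (transfer learning).
# Both datasets are placeholders for tokenized, labeled corpora:
# model = finetune(model, sentiment_dataset, "out/sentiment")
# model = finetune(model, irony_dataset, "out/irony")
```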

A Study on Generation Quality Comparison of Concrete Damage Image Using Stable Diffusion Base Models (Stable diffusion의 기저 모델에 따른 콘크리트 손상 영상의 생성 품질 비교 연구)

  • Seung-Bo Shim
    • Journal of the Korea Institute for Structural Maintenance and Inspection / v.28 no.4 / pp.55-61 / 2024
  • The number of aging concrete structures is steadily increasing as many structures reach their expected lifespan. Such structures require accurate inspection and persistent maintenance; otherwise, their original functions and performance may degrade, potentially leading to safety accidents. Accordingly, research on objective inspection technologies using deep learning and computer vision is being actively conducted. High-resolution images can capture not only fine cracks but also spalling and exposed rebar, and deep learning enables automated detection. However, high detection performance in deep learning is guaranteed only with diverse and numerous training datasets, and because surface damage to concrete is not commonly captured in images, training data are scarce. To overcome this limitation, this study proposed a method for generating images of concrete surface damage, including cracks, spalling, and exposed rebar, using stable diffusion, which synthesizes new damage images from paired text and image data. For this purpose, a training dataset of 678 images was secured, and fine-tuning was performed through low-rank adaptation. The quality of the generated images was compared across three stable diffusion base models, identifying the configuration that synthesizes the most diverse and highest-quality concrete damage images. This research is expected to address the issue of data scarcity and to contribute to improving the accuracy of deep learning-based damage detection algorithms.
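
A minimal diffusers sketch of generating damage images with a LoRA-fine-tuned stable diffusion model. The base model ID, LoRA path, and prompt are assumptions; the paper's comparison would swap in each of its three base models here in turn.

```python
import torch
from diffusers import StableDiffusionPipeline

# Assumed base model; requires a CUDA GPU for float16 inference.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# LoRA weights from low-rank-adaptation fine-tuning on the
# 678-image concrete damage dataset (path is hypothetical).
pipe.load_lora_weights("./concrete-damage-lora")

prompt = "a photo of a concrete wall with cracks and exposed rebar"
image = pipe(prompt, num_inference_steps=30).images[0]
image.save("generated_damage.png")
```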

A Multimodal Profile Ensemble Approach to Development of Recommender Systems Using Big Data (빅데이터 기반 추천시스템 구현을 위한 다중 프로파일 앙상블 기법)

  • Kim, Minjeong;Cho, Yoonho
    • Journal of Intelligence and Information Systems / v.21 no.4 / pp.93-110 / 2015
  • A recommender system recommends products to customers who are likely to be interested in them. Based on automated information filtering technology, various recommender systems have been developed. Collaborative filtering (CF), one of the most successful recommendation algorithms, has been applied in a number of different domains, such as recommending Web pages, books, movies, music, and products. However, CF has a critical shortcoming: it finds neighbors whose preferences are like those of the target customer and recommends the products those neighbors have liked most, so it works properly only when there is a sufficient number of ratings on common products. When customer ratings are scarce, neighborhood formation becomes inaccurate, resulting in poor recommendations. To improve the performance of CF-based recommender systems, most related studies have focused on developing novel algorithms under the assumption of a single profile created from users' item ratings, purchase transactions, or Web access logs. With the advent of big data, companies have come to collect more data and to use larger and more varied information, and many now consider utilizing big data essential to improving competitiveness and creating new value. In particular, the use of personal big data in recommender systems is on the rise, because personal big data facilitate more accurate identification of users' preferences and behaviors. The proposed recommendation methodology is as follows. First, multimodal user profiles are created from personal big data to capture the preferences and behavior of users from various viewpoints; we derive five user profiles based on personal information: rating, site preference, demographic, Internet usage, and topics in text. Next, the similarity between users is calculated based on the profiles, and neighbors are identified from the results. One of three ensemble approaches is applied to calculate the similarity: the similarity of the combined profile, the average of the per-profile similarities, or the weighted average of the per-profile similarities. Finally, the products most preferred by the members of the neighborhood are recommended to the target user. For the experiments, we used demographic data and a very large volume of Web log transactions for 5,000 panel users of a company specialized in analyzing Web site rankings. R was used to implement the proposed recommender system, and SAS E-Miner was used to conduct the topic analysis via keyword search. To evaluate recommendation performance, 60% of the data was used for training and 40% for testing, and 5-fold cross-validation was conducted to enhance the reliability of the experiments. The widely used F1 metric, which gives equal weight to recall and precision, was employed for evaluation. The proposed methodology achieved a significant improvement over the single-profile CF algorithm; in particular, the ensemble approach using weighted average similarity showed the highest performance, with a 16.9 percent improvement in F1, compared with 8.1 percent for the ensemble using the average of the per-profile similarities. From these results, we conclude that the multimodal profile ensemble approach is a viable solution when customer ratings are scarce. This study is significant in suggesting what kinds of information can be used to create profiles in a big data environment and how they can be combined and utilized effectively. However, the methodology should be studied further for real-world application: the proposed method needs to be applied to different recommendation algorithms to compare differences in recommendation accuracy and to identify which combination shows the best performance.
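
A minimal numpy sketch of the weighted-average similarity ensemble: per-profile user-user similarities are combined with profile weights before neighbors are selected. The matrices and weights are invented, and the real system derives five profiles from personal big data.

```python
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

rng = np.random.default_rng(0)

# Toy user-by-feature matrices for three of the five profiles.
profiles = {
    "rating":      rng.random((5, 10)),
    "site_pref":   rng.random((5, 8)),
    "demographic": rng.random((5, 4)),
}
# Hypothetical per-profile weights for the weighted-average ensemble.
weights = {"rating": 0.5, "site_pref": 0.3, "demographic": 0.2}

# Weighted average of the per-profile user-user similarity matrices.
sim = sum(w * cosine_similarity(profiles[p]) for p, w in weights.items())

# Neighborhood of user 0: the most similar other users.
target = 0
order = np.argsort(-sim[target])
neighbors = [u for u in order if u != target][:3]
print("neighbors of user 0:", neighbors)
```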

Korean Word Sense Disambiguation using Dictionary and Corpus (사전과 말뭉치를 이용한 한국어 단어 중의성 해소)

  • Jeong, Hanjo;Park, Byeonghwa
    • Journal of Intelligence and Information Systems / v.21 no.1 / pp.1-13 / 2015
  • As opinion mining in big data applications has been highlighted, a great deal of research on unstructured data has been conducted. Social media on the Internet generate unstructured or semi-structured data every second, often in the natural languages we use in daily life. Many words in human languages have multiple meanings or senses, which makes it very difficult for computers to extract useful information from these datasets. Traditional web search engines are usually based on keyword search, often yielding results far from users' intentions. Although much progress has been made over the last years in enhancing search engine performance, there is still much room for improvement. Word sense disambiguation plays a very important role in natural language processing and is considered one of the most difficult problems in this area. Major approaches to word sense disambiguation can be classified as knowledge-based, supervised corpus-based, and unsupervised corpus-based. This paper presents a method that automatically generates a corpus for word sense disambiguation by taking advantage of the examples in existing dictionaries, avoiding expensive sense-tagging processes. It evaluates the effectiveness of the method with a Naïve Bayes model, a supervised learning algorithm, using the Korean standard unabridged dictionary and the Sejong Corpus. The Korean standard unabridged dictionary has approximately 57,000 sentences; the Sejong Corpus has about 790,000 sentences tagged with both part-of-speech and senses. For the experiment, the Korean standard unabridged dictionary and the Sejong Corpus were evaluated both combined and separately, using cross-validation. Only nouns, the targets of word sense disambiguation in this study, were selected: 93,522 word senses among 265,655 nouns, together with 56,914 sentences from related proverbs and examples, were combined into the corpus. The Sejong Corpus was easily merged with the Korean standard unabridged dictionary because it is tagged with the sense indices defined by that dictionary. Sense vectors were formed after the merged corpus was created, and the terms used in creating the sense vectors were added to the named entity dictionary of a Korean morphological analyzer. Using the extended named entity dictionary, term vectors were extracted from the input sentences. Given an extracted term vector and the sense vector model built during pre-processing, sense-tagged terms were determined by vector space model-based word sense disambiguation. The experiments show that better precision and recall are obtained with the merged corpus, demonstrating the effectiveness of merging the examples in the Korean standard unabridged dictionary with the Sejong Corpus. This study suggests the approach can practically enhance the performance of Internet search engines and help derive more accurate sentence meanings in natural language processing tasks pertinent to search engines, opinion mining, and text mining. The Naïve Bayes classifier used in this study is a supervised learning algorithm based on Bayes' theorem and assumes that all features are independent. Even though this assumption is not realistic and ignores correlations between attributes, the Naïve Bayes classifier is widely used because of its simplicity, and in practice it is known to be very effective in many applications such as text classification and medical diagnosis. However, further research needs to be carried out to consider all possible combinations, or partial combinations, of the senses in a sentence. The effectiveness of word sense disambiguation may also be improved if rhetorical structures or morphological dependencies between words are analyzed through syntactic analysis.
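
A minimal scikit-learn sketch of Naïve Bayes word sense disambiguation over dictionary example sentences: context words are the (assumed independent) features, and dictionary senses are the classes. The English sentences and sense labels are invented stand-ins for the Korean data.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Invented (example sentence, sense) pairs; in the study these come
# from the Korean standard unabridged dictionary and the Sejong Corpus.
examples = [
    ("deposit money at the bank", "bank/finance"),
    ("the bank approved the loan", "bank/finance"),
    ("fish along the river bank", "bank/river"),
    ("the bank of the stream flooded", "bank/river"),
]
sentences, senses = zip(*examples)

# Bag-of-words Naive Bayes trained on the sense-labeled examples.
clf = make_pipeline(CountVectorizer(), MultinomialNB())
clf.fit(sentences, senses)

# Disambiguate the target word in a new context.
print(clf.predict(["she opened an account at the bank"]))
```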

Knowledge Extraction Methodology and Framework from Wikipedia Articles for Construction of Knowledge-Base (지식베이스 구축을 위한 한국어 위키피디아의 학습 기반 지식추출 방법론 및 플랫폼 연구)

  • Kim, JaeHun;Lee, Myungjin
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.43-61 / 2019
  • The development of artificial intelligence technologies has accelerated with the Fourth Industrial Revolution, and AI research has been actively conducted in a variety of fields such as autonomous vehicles, natural language processing, and robotics. Since the 1950s, this research has focused on solving cognitive problems related to human intelligence, such as learning and problem solving, and the field has advanced more than ever thanks to recent interest in the technology and research on various algorithms. The knowledge-based system is a sub-domain of artificial intelligence that aims to enable AI agents to make decisions using machine-readable, processable knowledge constructed from complex and informal human knowledge and rules in various fields. A knowledge base is used to optimize information collection, organization, and retrieval, and recently it has been used together with statistical artificial intelligence such as machine learning. More recently, knowledge bases have aimed to express, publish, and share knowledge on the web by describing and connecting web resources such as pages and data, and they are used for intelligent processing in various AI applications, such as the question answering systems of smart speakers. However, building a useful knowledge base is time-consuming and still requires great expert effort. Much recent research and technology in knowledge-based artificial intelligence uses DBpedia, one of the largest knowledge bases, which aims to extract structured content from the information in Wikipedia. DBpedia contains various information extracted from Wikipedia, such as titles, categories, and links, but the most useful knowledge comes from Wikipedia infoboxes, which present user-created summaries of an article's key facts. This knowledge is created by mapping rules between infobox structures and the DBpedia ontology schema, defined in the DBpedia Extraction Framework. In this way, DBpedia can expect high reliability in terms of knowledge accuracy, since the knowledge is generated from semi-structured infobox data created by users. However, since only about 50% of all wiki pages in the Korean Wikipedia contain an infobox, DBpedia has limitations in terms of knowledge scalability. This paper proposes a method to extract knowledge from text documents according to the ontology schema using machine learning. To demonstrate the appropriateness of this method, we describe a knowledge extraction model that follows the DBpedia ontology schema by learning from Wikipedia infoboxes. Our knowledge extraction model consists of three steps: classifying documents into ontology classes, classifying the sentences appropriate for triple extraction, and selecting values and transforming them into RDF triple structure. Wikipedia infobox structures are defined by infobox templates that provide standardized information across related articles, and the DBpedia ontology schema can be mapped to these templates. Based on these mapping relations, we classify the input document according to infobox categories, which correspond to ontology classes. After determining the document's classification, we classify the appropriate sentences according to the attributes belonging to that class. Finally, we extract knowledge from the sentences classified as appropriate and convert it into triple form. To train the models, we generated a training data set from a Wikipedia dump by adding BIO tags to sentences, and trained about 200 classes and about 2,500 relations for knowledge extraction. Furthermore, we conducted comparative experiments between CRF and Bi-LSTM-CRF models for the knowledge extraction process. Through the proposed process, structured knowledge can be utilized by extracting knowledge according to the ontology schema from text documents, and the methodology can significantly reduce the effort experts spend constructing instances according to the ontology schema.
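
A minimal sketch of the BIO-tagged sequence-labeling step using a linear-chain CRF (via the sklearn-crfsuite package), the first of the two models compared in the paper. The sentence, tags, and attribute name are invented for illustration.

```python
import sklearn_crfsuite

# One BIO-tagged training sentence for a hypothetical 'birthPlace'
# attribute; real data is generated from the Wikipedia dump.
sent = ["Kim", "was", "born", "in", "Seoul", "."]
tags = ["O",   "O",   "O",    "O",  "B-birthPlace", "O"]

def word_features(sent, i):
    """Simple per-token features: surface form, shape, and neighbors."""
    return {
        "word": sent[i].lower(),
        "is_title": sent[i].istitle(),
        "prev": sent[i - 1].lower() if i > 0 else "<s>",
        "next": sent[i + 1].lower() if i < len(sent) - 1 else "</s>",
    }

X = [[word_features(sent, i) for i in range(len(sent))]]
y = [tags]

# Train the linear-chain CRF and tag the sentence; spans tagged
# B-/I-<attribute> become the object values of extracted triples.
crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=50)
crf.fit(X, y)
print(crf.predict(X)[0])
```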