• Title/Summary/Keyword: Subject Classification


A Comparative Study on the Characteristics of Cultural Heritage in China and Vietnam (중국과 베트남의 문화유산 특성 비교 연구)

  • Shin, Hyun-Sil;Jun, Da-Seul
    • Journal of the Korean Institute of Traditional Landscape Architecture
    • /
    • v.40 no.2
    • /
    • pp.34-43
    • /
    • 2022
  • This study compared the characteristics of cultural heritage in China and Vietnam, two countries that have developed under mutual geopolitical and cultural influence throughout history, and reached the following conclusions. First, the definition of cultural heritage has similar meanings in both countries. Regarding the classification of cultural heritage, both countries introduced the legal concept of intangible cultural heritage through UNESCO and are therefore similar in their treatment of intangible heritage. Second, while China has separate laws for managing tangible and intangible cultural heritage, Vietnam manages both types under a single integrated law. Vietnam introduced the concept of cultural heritage later than China, but its system shows a higher degree of integration. Third, cultural heritage in both China and Vietnam is graded, and the grading is applied differently depending on the type of heritage. The designation procedures are similar in that both countries use a vertical structure and proceed through successive review steps. Through such step-by-step review, the value of heritage is restored and its integrity reinforced, while tourism that lets people enjoy heritage and generates economic effects is used to pursue balanced development across the country. Fourth, both countries have a central government agency for cultural heritage management, but in China the authority of local governments is greater than in Vietnam. In addition, unlike Vietnam, where tangible and intangible cultural heritage are managed by a single integrated institution, China has a separate institution in charge of intangible cultural heritage. Fifth, China is establishing a conservation management policy focused on sustainability that harmonizes the protection and utilization of heritage. Vietnam is striving to integrate the contents and spirit of the convention into laws, programs, and projects related to cultural heritage, especially those concerning intangible heritage and the economy and society as a whole; however, it still depends on the influence of international organizations. Sixth, China and Vietnam are now paying attention to the recently introduced concept of intangible heritage, moving away from protection policies centered on tangible heritage. They also aim to unite their people through cultural heritage and to achieve national policy goals of unity. The two countries need to use intangible heritage as an efficient means of preserving local communities and regions. A cultural heritage preservation network should be established for each subject so that the components of intangible heritage can be integrated into one unit, laying the foundation for public enjoyment. This study is limited to a comparison of the cultural heritage systems and the current state of preservation management in China and Vietnam; a comparison of cultural heritage policies by heritage type remains a task for future research.

Present Status and Prospect of Valuation for Tangible Fixed Asset in South Korea (유형고정자산 가치평가 현황: 우리나라 사례를 중심으로)

  • Jin-Hyung Cho;Hyun-Seung O;Sae-Jae Lee
    • Journal of Korean Society of Industrial and Systems Engineering
    • /
    • v.46 no.1
    • /
    • pp.91-104
    • /
    • 2023
  • The double-entry records system is believed to have originated in Italy in the 14th century, in line with the development of trade in Europe. In 1494, Luca Pacioli, an Italian mathematician and Franciscan friar, wrote the first book that described the double-entry accounting process. In many countries, including Korea, government accounting standards long used single-entry bookkeeping rather than double-entry bookkeeping, which can be aggregated by account. The cash-based, single-entry bookkeeping used by the government in the past was limited in providing clear information on financial status and in establishing a performance-oriented financial management system. Accordingly, the National Accounting Act (promulgated in October 2007) stipulated the introduction of double-entry bookkeeping and accrual accounting in the government sector from January 1, 2009. The Korean government has also introduced the International Financial Reporting Standards (IFRS) and the System of National Accounts (SNA), and since 2014 Korea has compiled all five national accounts. In Korea, valuation began with the 1968 National Wealth Statistics Survey. The academic origin of the valuation used in the national wealth statistics, which were surveyed by due diligence every 10 years from 1968, is the 'Engineering Valuation' developed by Professor Marston in the Department of Industrial Engineering at Iowa State University in the 1930s. The field later spread to economics, where it became the basis of capital stock estimation in positive economics such as econometrics. The valuation carried out for the National Wealth Statistics Survey contributed greatly to converting the book values of accounting data into vintage data. In 2000, the National Statistical Office collected actual disposal data for 1-digit asset classes and estimated the average service life (ASL) using Iowa curves. Then, using the fixed capital formation data compiled by the National B/S Team of the Bank of Korea, the national wealth statistics were prepared by the Perpetual Inventory Method (PIM). Assets were classified into 59 types, including 2 types of residential buildings, 4 types of non-residential buildings, 14 types of structures, 9 types of transportation equipment, 28 types of machinery, and 2 types of intangible fixed assets. The tables of useful lives of tangible fixed assets published by the Korea Appraisal Board in 1999 and 2013 were prepared by the Iowa curve method, which Korea has adopted for ASL estimation. There are three variants of the Iowa curve method; of these, the retirement rate method is the best because it is based on the collection and compilation of data on all properties in service during a period of recent years, both properties retired and properties still in service. We hope that the retirement rate method, rather than the individual unit method, will be used for ASL estimation. The Korean government's accounting system has recently been further developed. Because revenue expenditure and capital expenditure were mixed under the past single-entry bookkeeping, we suggest that the Bank of Korea and the National Statistical Office apply their accumulated knowledge of the rational distinction between the two; this is particularly important when capital stock is estimated by the PIM. Korea also needs an empirical study on economic depreciation along the lines of Hulten and Wykoff's Catalog A at the US BEA.
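  The abstract above centers on capital stock estimation by the Perpetual Inventory Method (PIM) with service lives taken from Iowa curves. As a minimal, hypothetical sketch of the PIM mechanics only (the asset class, service life, depreciation pattern, and figures below are assumptions, not values from the paper or from Korean official statistics), a stock series can be accumulated from a gross fixed capital formation series as follows:

```python
# Minimal Perpetual Inventory Method (PIM) sketch.
# Illustration only: the asset class, service life and declining-balance rate are assumed,
# not taken from the paper or from Korean official statistics.

def pim_capital_stock(gfcf, average_service_life, declining_balance=2.0, k0=0.0):
    """Accumulate a capital stock series from gross fixed capital formation (GFCF).

    gfcf                 : list of annual GFCF values (constant prices)
    average_service_life : assumed ASL in years (e.g. from an Iowa retirement curve)
    declining_balance    : factor converting the ASL into a geometric depreciation rate
    k0                   : initial capital stock
    """
    delta = declining_balance / average_service_life   # geometric depreciation rate
    stock = k0
    series = []
    for investment in gfcf:
        stock = stock * (1.0 - delta) + investment      # K_t = (1 - d) * K_{t-1} + I_t
        series.append(stock)
    return series


if __name__ == "__main__":
    # Hypothetical GFCF series for one asset class (e.g. machinery), ASL = 12 years.
    gfcf = [100, 105, 110, 120, 118, 125]
    print(pim_capital_stock(gfcf, average_service_life=12))
```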

Evaluation of Applicability of Sea Ice Monitoring Using Random Forest Model Based on GOCI-II Images: A Study of Liaodong Bay 2021-2022 (GOCI-II 영상 기반 Random Forest 모델을 이용한 해빙 모니터링 적용 가능성 평가: 2021-2022년 랴오둥만을 대상으로)

  • Jinyeong Kim;Soyeong Jang;Jaeyeop Kwon;Tae-Ho Kim
    • Korean Journal of Remote Sensing
    • /
    • v.39 no.6_2
    • /
    • pp.1651-1669
    • /
    • 2023
  • Sea ice currently covers approximately 7% of the world's ocean area, concentrated mainly in polar and high-latitude regions and subject to seasonal and annual variation. Because sea ice forms in various types over large spatial scales, and because oil and gas exploration and other marine activities are increasing rapidly, it is very important to analyze sea ice area and type through time-series monitoring. Research on sea ice type and area is currently conducted using high-resolution satellite images and field measurements, but the need to acquire field measurement data limits the scope of sea ice monitoring. High-resolution optical satellite images can visually detect and identify sea ice types over a wide range and can fill gaps in sea ice monitoring based on the Geostationary Ocean Color Imager-II (GOCI-II), an ocean satellite with a short revisit time. This study examined the feasibility of sea ice monitoring by training a rule-based machine learning model on training data produced from high-resolution optical satellite images and performing detection on GOCI-II images. Training data were extracted for Liaodong Bay in the Bohai Sea for 2021-2022, and a Random Forest (RF) model using GOCI-II was constructed and compared, qualitatively and quantitatively, with sea ice areas obtained from the existing normalized difference snow index (NDSI) approach and from high-resolution satellite images. Unlike the NDSI-based results, which underestimated the sea ice area, this study detected relatively detailed sea ice areas and confirmed that sea ice can be classified by type, enabling sea ice monitoring. If the accuracy of the detection model is improved in the future by building continuous training data and incorporating factors that influence sea ice formation, it is expected to be usable for sea ice monitoring in high-latitude ocean areas.
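  The workflow described above (training labels derived from high-resolution imagery, a Random Forest classifier applied to GOCI-II pixels, and an NDSI threshold as the baseline) can be illustrated with the hypothetical scikit-learn sketch below; the band layout, the synthetic labels, and the 0.4 NDSI threshold are assumptions for illustration, not the paper's data or settings.

```python
# Hypothetical sketch: Random Forest sea-ice classification vs. an NDSI-threshold baseline.
# The feature layout, labels and threshold are placeholders, not the paper's settings.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)

# Placeholder data: rows = pixels, columns = band reflectances (green, NIR, SWIR-like proxy).
X = rng.random((5000, 3))
y = (X[:, 0] - X[:, 2] > 0.2).astype(int)   # synthetic "ice / no-ice" labels

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# NDSI-style baseline: (green - SWIR) / (green + SWIR), thresholded at 0.4 (assumed value).
ndsi = (X_test[:, 0] - X_test[:, 2]) / (X_test[:, 0] + X_test[:, 2] + 1e-9)
ndsi_pred = (ndsi > 0.4).astype(int)

# Random Forest trained on the labelled pixels.
rf = RandomForestClassifier(n_estimators=200, random_state=0)
rf.fit(X_train, y_train)
rf_pred = rf.predict(X_test)

print("NDSI baseline:\n", classification_report(y_test, ndsi_pred, digits=3))
print("Random Forest:\n", classification_report(y_test, rf_pred, digits=3))
```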

Text Mining-Based Emerging Trend Analysis for e-Learning Contents Targeting for CEO (텍스트마이닝을 통한 최고경영자 대상 이러닝 콘텐츠 트렌드 분석)

  • Kyung-Hoon Kim;Myungsin Chae;Byungtae Lee
    • Information Systems Review
    • /
    • v.19 no.2
    • /
    • pp.1-19
    • /
    • 2017
  • Original scripts of e-learning lectures for the CEOs of corporation S were analyzed using topic analysis, a text mining method. Twenty-two topics were extracted from keywords in five years of records spanning 2011 to 2015, and the extracted topics were then analyzed from several angles. Promising topics were selected by evaluating each topic and analyzing its member lectures. In management and economics, members showed high satisfaction and interest in topics on marketing strategy, human resource management, and communication. In the humanities, philosophy, the history of war, and history drew high interest and satisfaction, while in the lifestyle field, mind health did. Topics that made up a high proportion of the content were also identified, but a high proportion did not translate into higher member satisfaction. In the IT field, educational content responds sensitively to the changing times but does not necessarily raise members' interest and satisfaction. The study found that content production for CEOs should draw out deeper implications for value innovation through the application of technology, rather than stopping at the technical delivery of information. Previous studies classified content superficially by program name when analyzing the status of content operation, whereas text mining can derive deeper content and subject classifications from unstructured script data. If the service contents of each theme are tracked by year, this approach can reveal current gaps and the fields that need attention. This study was based on data obtained from influential e-learning companies in Korea; obtaining broader practical results was difficult because data were not acquired from portal sites or social networking services. The e-learning content trends for CEOs were analyzed, along with the intellectual interests of CEOs in each field.
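  The topic analysis applied to the lecture scripts is a standard text mining step. As a generic illustration (not the authors' pipeline), the sketch below extracts topics from a toy corpus with scikit-learn's LDA implementation; the corpus, topic count, and preprocessing are placeholders.

```python
# Generic LDA topic-extraction sketch (illustration only; the toy corpus, topic count
# and preprocessing are placeholders, not the study's data or configuration).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "marketing strategy and customer communication for new markets",
    "human resource management and leadership communication",
    "history of war and its lessons for strategy",
    "mind health, meditation and stress management for executives",
]

vectorizer = CountVectorizer(stop_words="english")
dtm = vectorizer.fit_transform(docs)               # document-term matrix

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(dtm)

# Print the top words of each extracted topic.
terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[::-1][:5]]
    print(f"Topic {k}: {', '.join(top)}")
```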

Automatic Quality Evaluation with Completeness and Succinctness for Text Summarization (완전성과 간결성을 고려한 텍스트 요약 품질의 자동 평가 기법)

  • Ko, Eunjung;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.2
    • /
    • pp.125-148
    • /
    • 2018
  • Recently, as the demand for big data analysis increases, cases of analyzing unstructured data and using the results are also increasing. Among the various types of unstructured data, text is used as a means of communicating information in almost all fields. Many analysts are also interested in text because the amount of available data is very large and it is relatively easy to collect compared with other unstructured and structured data. Among the many text analysis applications, document classification, which assigns documents to predetermined categories; topic modeling, which extracts major topics from a large number of documents; sentiment analysis or opinion mining, which identifies emotions or opinions contained in texts; and text summarization, which summarizes the main contents of one or several documents, have been actively studied. In particular, text summarization is actively applied in business, for example in news summary services and privacy policy summary services. In academia, much research has been done on both the extractive approach, which selectively provides the main elements of a document, and the abstractive approach, which extracts elements of the document and composes new sentences by combining them. However, techniques for evaluating the quality of automatically summarized documents have made much less progress than automatic summarization itself. Most existing studies on summarization quality evaluation manually summarized documents, used those summaries as reference documents, and measured the similarity between the automatic summary and the reference document. Specifically, automatic summarization is performed from the full text through various techniques, and the result is compared with the reference document, an ideal summary, to measure quality. Reference documents are provided in two main ways; the most common is manual summarization, in which a person creates an ideal summary by hand. Because this requires human intervention, preparing summaries takes considerable time and cost, and the evaluation result may differ depending on the subjectivity of the summarizer. To overcome these limitations, attempts have been made to measure the quality of summary documents without human intervention. A representative recent attempt reduces the size of the full text and measures the similarity between the reduced full text and the automatic summary: the more frequently the terms of the full text appear in the summary, the better the summary is judged to be. However, since summarization essentially means condensing a large amount of content while minimizing omissions, a summary judged 'good' solely on term frequency is not necessarily a good summary in this essential sense. To overcome the limitations of this previous work, this study proposes an automatic quality evaluation method for text summarization based on the essential meaning of summarization. Specifically, succinctness is defined as an element indicating how little duplicated content exists among the sentences of the summary, and completeness is defined as an element indicating how little of the original content is missing from the summary. Based on these two concepts, we propose a method for the automatic quality evaluation of text summaries. To evaluate the practical applicability of the proposed methodology, 29,671 sentences were extracted from TripAdvisor hotel reviews, the reviews were summarized for each hotel, and the quality of the summaries was evaluated according to the proposed methodology. The paper also provides a way to integrate completeness and succinctness, which are in a trade-off relationship, into an F-score, and proposes a method for performing optimal summarization by changing the sentence-similarity threshold.
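  The evaluation rests on completeness (how little original content is missing) and succinctness (how little duplication the summary contains), combined into an F-score. The sketch below is one possible similarity-based reading of those ideas, assuming TF-IDF sentence vectors, cosine similarity, and a 0.3 threshold; these specific formulas are illustrative assumptions rather than the paper's exact definitions.

```python
# Minimal sketch of similarity-based completeness / succinctness / F-score.
# The exact definitions and the TF-IDF + cosine + threshold choices are assumptions
# for illustration; they are not necessarily the formulas used in the paper.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def evaluate_summary(original_sents, summary_sents, threshold=0.3):
    vec = TfidfVectorizer().fit(original_sents + summary_sents)
    orig = vec.transform(original_sents)
    summ = vec.transform(summary_sents)

    # Completeness: share of original sentences covered by at least one summary sentence.
    cover = cosine_similarity(orig, summ).max(axis=1)
    completeness = float((cover >= threshold).mean())

    # Succinctness: 1 minus the share of summary-sentence pairs that duplicate each other.
    sim = cosine_similarity(summ)
    n = len(summary_sents)
    dup_pairs = sum(sim[i, j] >= threshold for i in range(n) for j in range(i + 1, n))
    total_pairs = n * (n - 1) / 2 or 1
    succinctness = 1.0 - dup_pairs / total_pairs

    f_score = 2 * completeness * succinctness / (completeness + succinctness or 1)
    return completeness, succinctness, f_score

if __name__ == "__main__":
    original = ["The room was clean.", "Staff were friendly.",
                "Breakfast was great.", "The room was spotless."]
    summary = ["The room was clean.", "Staff were friendly."]
    print(evaluate_summary(original, summary))
```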

Development and Research into Functional Foods from Hydrolyzed Whey Protein Powder with Sialic Acid as Its Index Component - I. Repeated 90-day Oral Administration Toxicity Test using Rats Administered Hydrolyzed Whey Protein Powder containing Normal Concentration of Sialic Acid (7%) with Enzyme Separation Method - (Sialic Acid를 지표성분으로 하는 유청가수분해단백분말의 기능성식품 개발연구 - I. 효소분리로 7% Sialic Acid가 표준적으로 함유된 유청가수분해단백분말(7%)의 랫드를 이용한 90일 반복경구투여 독성시험 평가 연구 -)

  • Noh, Hye-Ji;Cho, Hyang-Hyun;Kim, Hee-Kyong
    • Journal of Dairy Science and Biotechnology
    • /
    • v.34 no.2
    • /
    • pp.99-116
    • /
    • 2016
  • We performed an animal safety assessment in accordance with Good Laboratory Practice (GLP) regulations, with the aim of developing sialic acid from glycomacropeptide (hereafter "GMP") as an index ingredient and functional component of functional foods. GMP is a type of whey protein derived from milk and a safe food with multiple functions, such as antiviral activity. A test substance containing 7% (w/w) sialic acid in mostly hydrolyzed whey protein (hereafter "7%-GNANA") was produced by enzymatic treatment of the GMP substrate. The maximum test dose was selected on the basis of the 5,000 mg/kg/day dose established as the male NOEL (no-observed-effect level) and female NOAEL (no-observed-adverse-effect level) in a dose-range-finding (DRF) test (GLP Center of Catholic University of Daegu, Report No. 15-NREO-001) previously conducted with the same test substance. To evaluate the toxicity of repeated oral doses of the test substance in connection with the previous DRF study, 1,250, 2,500, and 5,000 mg/kg were administered by oral gavage to 6-week-old SPF Sprague-Dawley male and female rats for 90 days. Each test group consisted of 10 male and 10 female rats. To determine the toxicity indices, all parameters were examined according to GLP standards: observation of clinical signs; measurement of body weight and food consumption; ophthalmic examination; urinalysis; electrolyte, hematological, and serum biochemical examinations; measurement of organ weights at necropsy; and gross and histopathological examinations. After evaluating the results against the toxicity assessment criteria, the NOAEL of the test substance, 7%-GNANA, was determined to be 5,000 mg/kg/day for both male and female rats. No animal died in any of the test groups, including the control group, during the study period, and there was no significant test substance-related difference from the control group in general symptoms, body weight changes, food consumption, ophthalmic examination, urinalysis, hematological and serum biochemical examinations, or electrolyte and blood coagulation tests during the administration period (P<0.05). Regarding the effects of the test substance on organ weights, food consumption, necropsy findings, and histopathology, the kidney weight of males in the high-dose group (5,000 mg/kg/day) increased by up to 20% compared with the control group, but this effect of the test substance was judged to be minor. In female rats, reduced food consumption, increased kidney weight, and decreased thymus weight were observed in the high-dose group. Kidney weight increased by 10.2% (left) and 8.9% (right) in the high-dose group, with slight dose dependency compared with the control group, and thymus weight decreased by 25.3%, but these were judged to be minor test substance-related effects. At necropsy, a botryoid tumor was found on the ribs of one animal in the high-dose group, but we concluded that it was a spontaneously occurring finding unrelated to the test substance. Histopathological examination revealed lesions of the kidney, liver, spleen, and other organs in the low-dose group. Since these lesions were considered incidental, spontaneously occurring, or associated with aging, it was checked whether any target organ showed clear lesions caused by the test substance. In conclusion, rats were fed different concentrations of the test substance, and only a minor effect associated with the test substance was observed in the high-dose (5,000 mg/kg/day) group of both male and female rats, with no other significant test substance-related effects. Therefore, the NOAEL of 7%-GNANA (product name: Helicobactrol) in male and female rats was concluded to be 5,000 mg/kg/day, and the substance was judged to be safe for eventual use as an ingredient of health functional foods.

Essay on Form and Function Design (디자인의 형태와 기능에 관한 연구)

  • 이재국
    • Archives of design research
    • /
    • v.2 no.1
    • /
    • pp.63-97
    • /
    • 1989
  • There is nothing more important than form and function in design, because every design product is made on the basis of them. Form and function existed before the word "design" appeared, and the basic organization of all natural and man-made things rests on their organic relations. These organic relations are the source of vitality that identifies the subsistence of all objects, and the evolution of living creatures has changed their appearance according to natural law and order. Design is no exception. Design is a man-made organic thing that develops in its own way according to its intended aim and given situations. If so, what is the ultimate goal of design? It goes without saying that the goal is to contribute to the most desirable life for human beings through a designer who devotes himself to their convenience and well-being. Therefore, the designer can be called a practitioner of the rich life. This phrase carries many meanings, since the essence of design is to improve the quality of life through the man-made things the designer creates. Things exist through the relations between form and function, and they keep their value when they answer to the right purpose. In design, therefore, the main concern is how to create valuable things and use them in the right way, and the subject of study is the designer's outlook on value and the relations between form and function. Christopher Alexander described the importance of form as follows: the ultimate object of design is form; every design problem begins with an effort to achieve fitness between the form and its context; the form is the solution to the problem, and the context defines the problem. In other words, when we speak of design, the real object of discussion is not form alone but the ensemble comprising the form and its context. Good fit is a desirable property of this ensemble which relates to some particular division of the ensemble into form and context. Max Bill maintained how important form is in design: form represents a self-contained concept, and its embodiment in an object results in that object becoming a work of art. Furthermore, this explains why we so frequently use form in a comparative sense for determining whether one thing is less or more beautiful than another, and why the ideal of absolute beauty is always the standard by which we appraise form, and through form, art itself. Hence form has become synonymous with beauty. On the other hand, Laszlo Moholy-Nagy stated the importance of function as follows: function means the task an object is designed to fulfill, and the task instrument shapes the form. Unfortunately, this principle was not appreciated at the time, but through the endeavors of Frank Lloyd Wright and of the Bauhaus group and its many colleagues in Europe, the idea of functionalism became the keynote of the twenties. Functionalism soon became a cheap slogan, however, and its original meaning blurred; it is necessary to reexamine it in the light of present circumstances. Charles William Eliot expressed his idea on the relations between function and beauty: beauty often results chiefly from fitness; indeed it is easy to maintain that nothing is fair except what is fit for its uses or functions. If the function of the product of a machine is useful and valuable, and the machine is eminently fit for its function, it conspicuously has the beauty of fitness.
A locomotive or a steamship has the same sort of beauty, derived from supreme fitness for its function. As functions vary, so will those beauties vary. However, it is impossible to study form and function as separate beings. Function cannot exist without form, and without function, form is nothing. In other words, form is a container for function, and function is the content within form. It can therefore be said that form and function are indispensable, commensal individuals with coeternal relations. From a different point of view, one is sometimes emphasized more than the other, but in that case the logic is only accepted on the assumption that the importance of the other's existence is recognized. This is borne out by Frank Lloyd Wright's saying that form and function are one. In spite of that, form and function should be considered as independent individuals, because they are too important to be treated simply as a single one. Form and function have properties that are flexible to the context. In other words, the context acts as the barometer that defines form and function, and it implies every meaning of the surroundings. Thus, design is formed under the influence of situations. Situations are dynamic, like the design process itself, in which a fixed focus can be crippling. Moreover, situations govern the making of good design. From this standpoint, I defined good design in my thesis "An Analytic Research on Design Ethic": good design is to solve the problem in the most proper way for the situation. Situations are changeable, and so is design. There is no progress without change, but change is not necessarily progress. It is highly desirable that these changes be beneficial to mankind. Our main problem is to be able to discriminate between that which should be discarded and that which should be kept, built upon, and improved. Form and function are no exception. A practical function gives birth to an inevitable form, and a multi-classified function yields varieties of form. All of these depend on changeable situations. That is precisely the situation of "situation design," the concept of moving from the design of things to the design of the circumstances in which things are used. From this point of view, the core of form and function depends on how efficiently the designer can manage them in the given situations. That is to say, the creative designer plays an important role in fulfilling this purpose. Generally speaking, creativity is the organization of a concept in response to a human need, a solution that is both satisfying and innovative. In order to meet human needs, creative design activities require a special intuitive insight which is set into motion by purposeful imagination. Therefore, creativity is the most essential quality of every designer. In addition, designers share with other creative people a compulsive ingenuity and a passion for imaginative solutions which will meet their criteria for excellence. Ultimately, form and function belong to the desire of creative designers who constantly try to bring new things into being. Accordingly, the main purpose of this thesis is to grasp every meaning of form and function and to closely analyze their relations, in order to promote understanding and devise practical applications for gradual progress in design.
The thesis is composed of four parts: Introduction, Form, Function, and Conclusion. In the Introduction, the purpose and background of the research are presented. In Chapter I, the origin, perception, and classification of form are studied. In Chapter II, the generation, development, and diversification of function are considered. In the Conclusion, some concluding remarks are given.


Term Mapping Methodology between Everyday Words and Legal Terms for Law Information Search System (법령정보 검색을 위한 생활용어와 법률용어 간의 대응관계 탐색 방법론)

  • Kim, Ji Hyun;Lee, Jong-Seo;Lee, Myungjin;Kim, Wooju;Hong, June Seok
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.3
    • /
    • pp.137-152
    • /
    • 2012
  • In the era of Web 2.0, as many users create large amounts of web content themselves (so-called user-created content), the World Wide Web is overflowing with information, and finding meaningful information among these vast resources has become the key challenge. Information retrieval is now essential across all fields, and various search services have been developed and are widely used to retrieve the information users really want. In particular, legal information search is an indispensable service, serving as a channel through which people can conveniently find the laws relevant to their present situation. Since 2009, the Office of Legislation in Korea has provided the Korean Law Information portal service for searching legal information such as legislation, administrative rules, and judicial precedents, so that people can conveniently find law-related information. However, this service has a limitation, because current search engine technology basically returns documents according to whether the query terms appear in them. Despite the efforts of the Office of Legislation, it is therefore very difficult for general users who are not familiar with legal terms to retrieve law-related information through simple keyword matching, because there is a large divergence between everyday words and legal terms, which are largely derived from Chinese characters. People generally try to access legal information using everyday words, so they have difficulty getting exactly the results they want. In this paper, we propose a term mapping methodology between everyday words and legal terms for general users who lack a background in legal terminology, and we develop a search service that can provide legal information search results from everyday words, making it possible to search legal information accurately without knowledge of legal terminology. In other words, our research goal is a legal information search system with which general users can retrieve legal information using everyday words. First, the paper takes advantage of internet blog tags, applying the concept of collective intelligence, to discover the mapping relationship between everyday words and legal terms. To this end, we collect tags related to an everyday word from blog posts: when people write a post, they generally add non-hierarchical keywords or synonym-like terms, called tags, to describe, classify, and manage their posts. Second, the collected tags are clustered using the K-means cluster analysis method, and we then find the mapping relationship between an everyday word and a legal term, using our estimation measure to select the legal term that best matches the everyday word. Selected legal terms are given a definite relationship, and the relations between everyday words and legal terms are described using SKOS, an ontology for describing knowledge related to thesauri, classification schemes, taxonomies, and subject headings. Thus, based on the proposed mapping and searching methodologies, when a user tries to retrieve legal information with an everyday word, our legal information search system finds the legal term mapped to the user's query and retrieves legal information using the matched legal term. Therefore, users can obtain accurate results even if they lack knowledge of legal terms, and we expect that general users without a professional legal background will be able to conveniently and efficiently retrieve legal information using everyday words.
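  The mapping step described above (collect blog tags for an everyday word, cluster them with K-means, then select the best-fitting legal term) can be sketched as follows; the tag lists, the candidate legal terms, and the co-occurrence scoring are hypothetical stand-ins for the paper's estimation measure.

```python
# Simplified sketch of the tag-clustering idea: cluster the tag sets collected for an
# everyday word, then pick the candidate legal term that co-occurs most with the
# dominant cluster. Tags, candidate terms and scoring are placeholders for illustration.
from collections import Counter
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Each string is the tag list of one blog post collected for the everyday word "월급" (wage).
tag_docs = [
    "월급 급여 연봉 회사",
    "월급 임금 체불 노동",
    "임금 근로계약 노동법",
    "연봉 협상 이직 회사",
]
candidate_legal_terms = ["임금", "근로계약", "연봉"]   # hypothetical candidates

vec = TfidfVectorizer()
X = vec.fit_transform(tag_docs)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Take the largest cluster and count candidate-term occurrences inside it.
dominant = Counter(labels).most_common(1)[0][0]
counts = Counter()
for doc, label in zip(tag_docs, labels):
    if label == dominant:
        for term in candidate_legal_terms:
            counts[term] += doc.split().count(term)

print("best matching legal term:", counts.most_common(1)[0][0])
```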

A Study on People Counting in Public Metro Service using Hybrid CNN-LSTM Algorithm (Hybrid CNN-LSTM 알고리즘을 활용한 도시철도 내 피플 카운팅 연구)

  • Choi, Ji-Hye;Kim, Min-Seung;Lee, Chan-Ho;Choi, Jung-Hwan;Lee, Jeong-Hee;Sung, Tae-Eung
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.2
    • /
    • pp.131-145
    • /
    • 2020
  • In line with the trend of industrial innovation, IoT technology used in a variety of fields is emerging as a key element in creating new business models and providing user-friendly services when combined with big data. The data accumulated from Internet-of-Things (IoT) devices are being used in many ways to build convenience-oriented smart systems, as they enable customized intelligent systems through the analysis of user environments and patterns. Recently, IoT has been applied to innovation in the public domain, for example in smart cities and smart transportation, such as solving traffic and crime problems using CCTV. In particular, when planning underground services or establishing a passenger-flow control information system to enhance the convenience of citizens and commuters amid the congestion of public transportation such as subways and urban railways, it is necessary to comprehensively consider both the ease of securing real-time service data and the stability of security. However, previous studies that use image data have limitations: object detection performance degrades under privacy constraints and abnormal conditions. The IoT device-based sensor data used in this study are free from privacy issues because they do not require identifying individuals, and can therefore be used effectively to build intelligent public services for unspecified large numbers of people. We use IoT-based infrared sensor devices for an intelligent pedestrian tracking system in the metro service, which many people use daily, and the temperature data measured by the sensors are transmitted in real time. The experimental environment for collecting real-time sensor data was set up at the equally spaced midpoints of a 4×4 grid in the ceiling of subway entrances where passenger flow is high, and the temperature changes of objects entering and leaving the detection spots were measured. The measured data went through preprocessing in which reference values for the 16 areas were set and the differences between the temperatures in the 16 areas and their reference values were calculated per unit of time; this accentuates movement within the detection area. In addition, the values were scaled by a factor of 10 to reflect temperature differences by area more sensitively: for example, if the temperature collected from a sensor at a given time was 28.5℃, the analysis used the value 285. The data collected from the sensors thus have the characteristics of both time series data and image data with 4×4 resolution. Reflecting these characteristics, we propose a hybrid algorithm, CNN-LSTM (Convolutional Neural Network-Long Short-Term Memory), which combines a CNN, strong in image classification, with an LSTM, especially suitable for analyzing time series data. In this study, the CNN-LSTM algorithm is used to predict the number of people passing through one of the 4×4 detection areas.
We validated the proposed model by comparing its performance with other artificial intelligence algorithms, namely the Multi-Layer Perceptron (MLP), LSTM, and RNN-LSTM (Recurrent Neural Network-Long Short-Term Memory). In the experiments, the proposed CNN-LSTM hybrid model showed the best predictive performance compared with MLP, LSTM, and RNN-LSTM. By using the proposed devices and models, it is expected that various metro services, such as real-time monitoring of public transport facilities and congestion-based emergency response services, can be provided without legal issues concerning personal information. However, the data were collected from only one side of the entrances, and data collected over a short period were applied to the prediction, so verification of applicability in other environments remains a limitation. In the future, the reliability of the proposed model is expected to improve if experimental data are sufficiently collected in various environments or if training data are further built up by measuring data from additional sensors.
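  The proposed model combines a CNN over each 4×4 temperature frame with an LSTM over the frame sequence. The Keras sketch below illustrates that kind of architecture under assumed hyperparameters; the sequence length, layer sizes, and the single-area regression target are placeholders, not the configuration reported in the paper.

```python
# Minimal CNN-LSTM sketch for sequences of 4x4 sensor frames (illustration only;
# sequence length, layer sizes and the single-count regression target are assumptions,
# not the configuration reported in the paper).
import numpy as np
from tensorflow.keras import layers, models

SEQ_LEN = 10          # number of consecutive 4x4 frames per sample (assumed)

model = models.Sequential([
    layers.Input(shape=(SEQ_LEN, 4, 4, 1)),
    # CNN applied to each 4x4 frame independently.
    layers.TimeDistributed(layers.Conv2D(16, kernel_size=2, activation="relu", padding="same")),
    layers.TimeDistributed(layers.Flatten()),
    # LSTM over the per-frame feature vectors.
    layers.LSTM(32),
    layers.Dense(1),  # predicted number of passing persons for one detection area
])
model.compile(optimizer="adam", loss="mse")

# Synthetic placeholder data: 256 samples of 10 frames of scaled temperature differences.
X = np.random.rand(256, SEQ_LEN, 4, 4, 1).astype("float32")
y = np.random.rand(256, 1).astype("float32")
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
print(model.predict(X[:1]))
```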

A study on the classification of research topics based on COVID-19 academic research using Topic modeling (토픽모델링을 활용한 COVID-19 학술 연구 기반 연구 주제 분류에 관한 연구)

  • Yoo, So-yeon;Lim, Gyoo-gun
    • Journal of Intelligence and Information Systems
    • /
    • v.28 no.1
    • /
    • pp.155-174
    • /
    • 2022
  • From January 2020 to October 2021, more than 500,000 academic studies related to COVID-19 (the severe respiratory disease caused by SARS-CoV-2) were published. The rapid increase in the number of COVID-19 papers places time and technical constraints on healthcare professionals and policy makers who need to find important research quickly. In this study, we therefore propose a method of extracting useful information from the text of this extensive literature using the LDA and Word2vec algorithms. Papers related to the keywords of interest were extracted from the COVID-19 literature, and their detailed topics were identified. The data used were the CORD-19 data set on Kaggle, a free academic resource prepared by major research groups and the White House in response to the COVID-19 pandemic and updated weekly. The research method has two main parts. First, 41,062 articles were collected through data filtering and preprocessing of the abstracts of 47,110 academic papers that include full text. The number of COVID-19 publications by year was analyzed through exploratory data analysis in Python, and the 10 most active journals were identified. The LDA and Word2vec algorithms were used to derive COVID-19 research topics, and after analyzing related words, their similarity was measured. Second, papers containing 'vaccine' and 'treatment' were extracted from the topics derived from all papers: a total of 4,555 papers related to 'vaccine' and 5,971 papers related to 'treatment'. For each collected set of papers, detailed topics were analyzed using the LDA and Word2vec algorithms, and a clustering method based on PCA dimensionality reduction was applied, with the t-SNE algorithm used to visualize groups of papers with similar themes. A noteworthy result of this study is that topics which did not appear in the topic modeling results for the entire COVID-19 corpus did emerge in the topic modeling results for the individual research topics. For example, topic modeling of the 'vaccine' papers extracted a new topic, Topic 05 'neutralizing antibodies'. A neutralizing antibody is an antibody that protects cells from infection when a virus enters the body and is said to play an important role in the development of therapeutic agents and vaccines. Likewise, topic extraction from the 'treatment' papers uncovered a new topic, Topic 05 'cytokine'; a cytokine storm occurs when the body's immune cells, instead of defending against an attack, attack normal cells. In this way, hidden topics that could not be found across the entire corpus were revealed by classifying papers by keyword and performing topic modeling to find detailed topics. We proposed a method of extracting topics from a large body of literature using the LDA algorithm and extracting similar words using the Skip-gram variant of Word2vec, which predicts surrounding words from a central word. By combining the LDA model and the Word2vec model, we sought better performance by identifying both the relationships between documents and LDA topics and the word-level relationships captured by Word2vec. In addition, as a clustering method based on PCA dimensionality reduction, we presented a way to intuitively group documents with similar themes using the t-SNE technique, forming a structured organization of documents. In a situation where the efforts of many researchers to overcome COVID-19 cannot keep up with the rapid publication of related academic papers, we hope this approach will save the precious time and effort of healthcare professionals and policy makers and help them rapidly gain new insights. It is also expected to serve as basic data for researchers exploring new research directions.
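  The pipeline described above combines LDA topic extraction with Word2vec (Skip-gram) similarity search. The gensim-based sketch below illustrates that combination on a toy corpus (the documents, topic count, and vector sizes are placeholders, and the PCA/t-SNE visualization step is omitted).

```python
# Minimal sketch of the LDA + Word2vec (Skip-gram) combination (illustration only;
# the toy corpus, topic count and vector sizes are placeholders, not the study's data).
from gensim.corpora import Dictionary
from gensim.models import LdaModel, Word2Vec

docs = [
    "vaccine neutralizing antibody clinical trial".split(),
    "treatment cytokine storm immune response".split(),
    "vaccine antibody immune response efficacy".split(),
    "antiviral treatment drug clinical trial".split(),
]

# LDA: derive coarse topics from the corpus.
dictionary = Dictionary(docs)
corpus = [dictionary.doc2bow(d) for d in docs]
lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=2, random_state=0, passes=10)
for topic_id, words in lda.show_topics(num_topics=2, num_words=4, formatted=False):
    print("Topic", topic_id, [w for w, _ in words])

# Word2vec with Skip-gram (sg=1): find words similar to a topic keyword.
w2v = Word2Vec(sentences=docs, vector_size=50, window=3, min_count=1, sg=1, seed=0)
print(w2v.wv.most_similar("vaccine", topn=3))
```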

