• Title/Summary/Keyword: Information Relevance (정보 적합성)

Search Results: 4,946

Exploring the 4th Industrial Revolution Technology from the Landscape Industry Perspective (조경산업 관점에서 4차 산업혁명 기술의 탐색)

  • Choi, Ja-Ho; Suh, Joo-Hwan
    • Journal of the Korean Institute of Landscape Architecture / v.47 no.2 / pp.59-75 / 2019
  • This study explored 4th Industrial Revolution technology from the perspective of the landscape industry, in order to provide the basic data needed to increase its virtuous-circle value. The characteristics of the 4th Industrial Revolution, the landscape industry, and urban regeneration were reviewed, and a methodology was established around a technical classification system suitable for systematic research, which served as the framework. First, 4th Industrial Revolution technologies based on digital data that could increase the virtuous-circle value of the landscape industry were identified. At the 'Element Technology Level', core technologies such as the Internet of Things, cloud computing, big data, artificial intelligence, and robots, together with peripheral technologies such as virtual and augmented reality, drones, 3D/4D printing, and 3D scanning, were highlighted as 4th Industrial Revolution technologies. It was shown that the virtuous-circle value can be increased when these are applied at the 'Trend Level' specific to the landscape industry. The 'System Level' was analyzed as a general-purpose, platform-based layer in which element-level technologies (computers and smart devices) are systematically interconnected and applied as digital-data-based 4th Industrial Revolution technology. Applying technology at the Trend Level specific to the landscape industry proved effective for increasing virtuous-circle values, and the full synergistic effect can be realized by implementing the proposed approach at the Trend Level while applying the Element Technology Level. Smart gardens and smart parks were analyzed as the level the industry should pursue. Smart City, Smart Home, Smart Farm and Precision Agriculture, Smart Tourism, and Smart Health Care were judged to be highly linkable through collaboration among technologies in adjacent areas at the Trend Level. In addition, various ways of applying the related technologies at the Trend Level were identified in urban regeneration, the creation and maintenance of public service spaces, and public services. In other words, with the realization of ubiquitous computing, hyper-connectivity, hyper-reality, hyper-intelligence, and hyper-convergence reflecting the basic characteristics of digital technology can be achieved in the landscape industry. The analysis showed that the landscape industry can effectively accommodate and coordinate the demands of new roles such as education and consulting, as well as existing tasks, when participating in urban regeneration projects. In particular, the overall landscaping field was shown to be effective in increasing the virtuous-circle value when it systematizes the related technologies at the Trend Level, using maintenance as a strategic bridgehead, because its industrial structure is well suited to distributing the data and information produced through various channels. Follow-up research is necessary, such as demonstrating the fusion of digital-data-based 4th Industrial Revolution technology in the creation, maintenance, and servicing of actual landscape spaces.

Knowledge Extraction Methodology and Framework from Wikipedia Articles for Construction of Knowledge-Base (지식베이스 구축을 위한 한국어 위키피디아의 학습 기반 지식추출 방법론 및 플랫폼 연구)

  • Kim, JaeHun; Lee, Myungjin
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.43-61 / 2019
  • The development of artificial intelligence technologies has accelerated with the Fourth Industrial Revolution, and AI research has been actively conducted in fields such as autonomous vehicles, natural language processing, and robotics. Since the 1950s, this research has focused on solving cognitive problems related to human intelligence, such as learning and problem solving, and thanks to recent interest and work on various algorithms, the field has advanced further than ever. The knowledge-based system is a sub-domain of artificial intelligence that aims to enable AI agents to make decisions using machine-readable, processable knowledge constructed from the complex and informal human knowledge and rules of various fields. A knowledge base is used to optimize information collection, organization, and retrieval, and is increasingly combined with statistical AI such as machine learning. More recently, knowledge bases have been built to express, publish, and share knowledge on the web by describing and connecting web resources such as pages and data, and they support intelligent processing in applications such as the question-answering systems of smart speakers. However, building a useful knowledge base is time-consuming and still requires substantial expert effort. Much recent knowledge-based AI research uses DBpedia, one of the largest knowledge bases, which extracts structured content from Wikipedia. DBpedia contains information extracted from Wikipedia such as titles, categories, and links, but its most useful knowledge comes from Wikipedia infoboxes, user-created summaries of an article's unifying aspects. This knowledge is generated by mapping rules between infobox structures and the DBpedia ontology schema defined in the DBpedia Extraction Framework. Because the knowledge is generated from semi-structured, user-created infobox data, DBpedia can expect high accuracy. However, since only about 50% of all wiki pages in Korean Wikipedia contain an infobox, DBpedia is limited in terms of knowledge scalability. This paper proposes a method to extract knowledge from text documents according to the ontology schema using machine learning. To demonstrate the appropriateness of this method, we describe a knowledge extraction model that follows the DBpedia ontology schema and is trained on Wikipedia infoboxes. Our model consists of three steps: classifying documents into ontology classes, selecting the sentences from which triples can be extracted, and selecting values and transforming them into RDF triples. Wikipedia infoboxes are defined by infobox templates that provide standardized information across related articles, and the DBpedia ontology schema can be mapped to these templates. Based on these mapping relations, we classify the input document into infobox categories, which correspond to ontology classes. After classifying the document, we select the sentences appropriate to the attributes belonging to that class. Finally, we extract knowledge from the selected sentences and convert it into triples.
To train the models, we generated a training data set from a Wikipedia dump by adding BIO tags to sentences, covering about 200 classes and about 2,500 relations for knowledge extraction. We also conducted comparative experiments between CRF and Bi-LSTM-CRF for the extraction step. Through the proposed process, structured knowledge can be obtained by extracting knowledge from text documents according to the ontology schema, and the methodology can significantly reduce the expert effort needed to construct instances conforming to the schema.
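
For illustration, here is a minimal sketch of the BIO-tag generation step described above: given a tokenized sentence and a known infobox value, the value's tokens are labeled so that a sequence model such as CRF or Bi-LSTM-CRF can learn to recover them, after which a triple is assembled. The function names and the dbo:country predicate are illustrative assumptions, not the paper's code.

```python
# Hypothetical sketch of BIO tagging for triple extraction; not the paper's API.

def make_bio_tags(tokens, value_tokens):
    """Tag the first occurrence of value_tokens as B/I, everything else O."""
    tags = ["O"] * len(tokens)
    n = len(value_tokens)
    for i in range(len(tokens) - n + 1):
        if tokens[i:i + n] == value_tokens:
            tags[i] = "B"
            for j in range(i + 1, i + n):
                tags[j] = "I"
            break
    return tags

def to_triple(subject, predicate, tokens, tags):
    """Assemble an RDF-style triple from the tokens tagged B/I."""
    value = " ".join(t for t, g in zip(tokens, tags) if g != "O")
    return (subject, predicate, value) if value else None

tokens = "Seoul is the capital of South Korea".split()
tags = make_bio_tags(tokens, ["South", "Korea"])
print(tags)                                    # ['O','O','O','O','O','B','I']
print(to_triple("Seoul", "dbo:country", tokens, tags))
```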

A Systematic Review of Developmental Coordination Disorders in South Korea: Evaluation and Intervention (국내의 발달성협응장애(DCD) 연구에 관한 체계적 고찰 : 평가와 중재접근 중심으로)

  • Kim, Min Joo; Choi, Jeong-Sil
    • The Journal of Korean Academy of Sensory Integration / v.19 no.1 / pp.69-82 / 2021
  • Objective : This work aimed to provide basic information about Developmental Coordination Disorder (DCD) in South Korea for researchers and practitioners related to occupational therapy. Previous research on screening for DCD and on the effects of intervention programs was reviewed. Methods : Peer-reviewed papers on DCD published in Korea from January 1990 to December 2020 were systematically reviewed. The search terms "developmental coordination disorder," "development coordination," and "developmental coordination" were used to identify previous Korean research in three representative databases: the Research Information Sharing Service, the Korean Studies Information Service System, and Google Scholar. A total of 4,878 articles were identified through the three search engines, and seventeen articles were selected for analysis after removing those that were duplicates or met the exclusion criteria. We adopted "the conceptual model" to analyze the selected articles in terms of DCD assessment and intervention. Results : Twelve of the 17 studies were at the qualitative level of Level 2, using a non-randomized two-group design. The Movement Assessment Battery for Children and its second edition were the tools most frequently used to assess children for DCD. Among the intervention studies, eight articles (47%) adopted a dynamic systems approach; a normative functional skill framework and cognitive neuroscience were each used in 18% of the papers; and 11% of the articles applied neurodevelopmental theory. Only one article used a combined approach of normative functional skill and general abilities. These papers mainly focused on the movement characteristics of children with DCD and the intervention effects of exercise or sports programs. Conclusion : Most of the reviewed studies investigated the movement characteristics of DCD or explored the effectiveness of particular intervention programs. In the future, it would be useful to investigate the feasibility of different assessment tools and to establish the effectiveness of the various interventions used in rehabilitation to improve motor performance in children with DCD.

A Study on the Retrieval of River Turbidity Based on KOMPSAT-3/3A Images (KOMPSAT-3/3A 영상 기반 하천의 탁도 산출 연구)

  • Kim, Dahui; Won, You Jun; Han, Sangmyung; Han, Hyangsun
    • Korean Journal of Remote Sensing / v.38 no.6_1 / pp.1285-1300 / 2022
  • Turbidity, a measure of the cloudiness of water, is an important index for water quality management. Turbidity can vary greatly in small river systems, which affects water quality in national rivers; generating high-resolution spatial information on turbidity is therefore very important. In this study, a turbidity retrieval model using Korea Multi-Purpose Satellite-3 and -3A (KOMPSAT-3/3A) images was developed for high-resolution turbidity mapping of the Han River system, based on the eXtreme Gradient Boosting (XGBoost) algorithm. To this end, top-of-atmosphere (TOA) spectral reflectance was calculated from a total of 24 KOMPSAT-3/3A images and 150 Landsat-8 images, and the Landsat-8 TOA reflectance was cross-calibrated to the KOMPSAT-3/3A bands. Turbidity measured by the National Water Quality Monitoring Network was used as the reference dataset. The input variables were the TOA spectral reflectance at the locations of the in situ turbidity measurements; the spectral indices (normalized difference vegetation index, normalized difference water index, and normalized difference turbidity index); and Moderate Resolution Imaging Spectroradiometer (MODIS)-derived atmospheric products (atmospheric optical thickness, water vapor, and ozone). Furthermore, by analyzing the KOMPSAT-3/3A TOA spectral reflectance at different turbidity levels, a new spectral index, the new normalized difference turbidity index (nNDTI), was proposed and added as an input variable to the turbidity retrieval model. The XGBoost model showed excellent retrieval performance, with a root mean square error (RMSE) of 2.70 NTU and a normalized RMSE (NRMSE) of 14.70% against in situ turbidity, and the nNDTI proposed in this study was the most important variable. The developed model was applied to the KOMPSAT-3/3A images to map river turbidity at high resolution, making it possible to analyze the spatiotemporal variations of turbidity. This study confirms that KOMPSAT-3/3A images are very useful for retrieving high-resolution, accurate spatial information on river turbidity.
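
A hedged sketch of the kind of XGBoost regression described above follows. The file names and feature layout (TOA band reflectances, spectral indices such as the paper's nNDTI, and MODIS atmospheric products) are assumptions for illustration, not the authors' exact pipeline.

```python
# Minimal XGBoost turbidity regression sketch under assumed inputs.
import numpy as np
import xgboost as xgb
from sklearn.model_selection import train_test_split

X = np.load("features.npy")    # per sample: bands, indices, AOT, water vapor, ozone
y = np.load("turbidity.npy")   # matched in situ turbidity (NTU)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

model = xgb.XGBRegressor(n_estimators=500, learning_rate=0.05, max_depth=6)
model.fit(X_tr, y_tr)

pred = model.predict(X_te)
rmse = float(np.sqrt(np.mean((pred - y_te) ** 2)))
nrmse = 100.0 * rmse / (y_te.max() - y_te.min())   # one common normalization
print(f"RMSE = {rmse:.2f} NTU, NRMSE = {nrmse:.2f}%")
print(model.feature_importances_)  # per-feature importance (the paper found nNDTI most important)
```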

Performance Optimization of Numerical Ocean Modeling on Cloud Systems (클라우드 시스템에서 해양수치모델 성능 최적화)

  • Jung, Kwangwoog; Cho, Yang-Ki; Tak, Yong-Jin
    • The Sea: Journal of the Korean Society of Oceanography / v.27 no.3 / pp.127-143 / 2022
  • Recently, many attempts have been made to run numerical ocean models in cloud computing environments. A cloud computing environment can be an effective means of running numerical ocean models that require large-scale resources, or of quickly preparing a modeling environment for global or large-scale grids. Many commercial and private cloud systems provide technologies for high-performance computing (HPC) such as virtualization, high-performance CPUs and instances, Ethernet-based high-performance networking, and remote direct memory access. These features facilitate ocean modeling experiments on commercial cloud systems, and many scientists and engineers expect cloud computing to become mainstream in the near future. Analyzing the performance and features of commercial cloud services for numerical modeling is essential for selecting an appropriate system, as it helps minimize execution time and resource usage. The effect of cache memory is large in the processing structure of an ocean numerical model, which handles data input and output in multidimensional array structures, and network speed matters because of the communication patterns that move large amounts of data. In this study, the performance of the Regional Ocean Modeling System (ROMS), the High Performance Linpack (HPL) benchmarking package, and the STREAM memory benchmark was evaluated and compared on commercial cloud systems to inform the migration of other ocean models to cloud computing. Through analysis of actual performance data and configuration settings obtained from virtualization-based commercial clouds, we evaluated the efficiency of the computing resources for various model grid sizes. We found that cache hierarchy and capacity are crucial to the performance of ROMS with its large memory footprint, and that memory latency is also important. Increasing the number of cores to reduce running time is more effective for large grids than for small ones. Our results will serve as a reference for constructing the best cloud computing configuration to minimize time and cost for numerical ocean modeling.
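
Since memory bandwidth is central to the findings above, here is a rough STREAM-style "triad" probe one could run to compare cloud instances before porting a model such as ROMS. This is only a sanity-check sketch; the official STREAM benchmark (written in C) remains the authoritative measurement.

```python
# Rough memory-bandwidth probe in the spirit of the STREAM triad kernel.
import time
import numpy as np

n = 50_000_000                       # three float64 arrays, ~400 MB each
a = np.zeros(n)
b = np.random.rand(n)
c = np.random.rand(n)

best = float("inf")
for _ in range(5):                   # keep the best of several runs
    t0 = time.perf_counter()
    a[:] = b + 2.0 * c               # triad: a = b + scalar * c
    best = min(best, time.perf_counter() - t0)

moved_gb = 3 * n * 8 / 1e9           # bytes read (b, c) plus written (a)
print(f"triad bandwidth: {moved_gb / best:.1f} GB/s")
```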

Comparison of ESG Evaluation Methods: Focusing on the K-ESG Guideline (ESG 평가방법 비교: K-ESG 가이드라인을 중심으로)

  • Chanhi Cho; Hyoung-Yong Lee
    • Journal of Intelligence and Information Systems / v.29 no.1 / pp.1-25 / 2023
  • ESG management is becoming a necessity of the times, but there are about 600 ESG evaluation indicators worldwide, and the different ESG ratings assigned to the same company by different evaluation agencies have caused confusion in the market. In addition, because the methods of applying ESG were not disclosed, companies wanting to introduce ESG management had few places to turn for help. Accordingly, the Ministry of Trade, Industry and Energy, jointly with other ministries, announced the K-ESG guideline. Few previous studies have compared the ratings of ESG evaluation agencies or examined the application of their diagnostic items. Therefore, this study assessed the ease of applying the K-ESG guideline, and identified improvements to it, by applying the guideline to companies that already have ESG ratings. The position of the K-ESG guideline was also established by comparing the scores it produces with the ratings assigned by global and domestic ESG evaluation agencies. The analysis showed the following. First, the K-ESG guideline provides clear and detailed standards with which individual companies can set their own ESG goals and the direction of their ESG practice. Second, the K-ESG guideline is compatible with domestic and global ESG evaluation standards, as its 61 diagnostic items and 12 additional diagnostic items cover the evaluation indicators of representative global ESG evaluation agencies and KCGS in Korea. Third, ratings under the K-ESG guideline were higher than those of a global ESG rating agency and lower than or similar to those of a domestic ESG rating agency. Fourth, the K-ESG guideline was judged easy to apply. Fifth, as improvements, the government should compile industry-average statistics for the diagnostic items in the K-ESG environmental area and publish them on a dedicated government ESG site, and the applied weights of E, S, and G for each industry should be determined and disclosed. This study will help ESG evaluation agencies, corporate management, and ESG managers establish ESG management strategies, and it provides improvements to be referenced when the K-ESG guideline is revised in the future.
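
As a purely illustrative aid to the fifth point, the sketch below shows how disclosed industry-specific E/S/G weights could roll normalized diagnostic-item scores into a single K-ESG-style score. Every number here is invented; the guideline's actual scoring rules may differ.

```python
# Illustrative weighted aggregation of diagnostic-item scores; all values invented.
weights = {"E": 0.40, "S": 0.35, "G": 0.25}      # hypothetical industry weights
item_scores = {                                   # normalized diagnostic items
    "E": [0.8, 0.6],
    "S": [0.7],
    "G": [0.9, 0.5, 0.7],
}

score = sum(
    w * (sum(item_scores[area]) / len(item_scores[area]))   # area mean, then weight
    for area, w in weights.items()
)
print(f"weighted ESG score: {score:.3f}")         # 0.700
```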

End to End Model and Delay Performance for V2X in 5G (5G에서 V2X를 위한 End to End 모델 및 지연 성능 평가)

  • Bae, Kyoung Yul; Lee, Hong Woo
    • Journal of Intelligence and Information Systems / v.22 no.1 / pp.107-118 / 2016
  • The advent of 5G mobile communications, expected in 2020, will enable many services such as the Internet of Things (IoT) and vehicle-to-infra/vehicle/nomadic (V2X) communication. Realizing these services imposes many requirements: reduced latency, high data rate and reliability, and real-time service. In particular, high reliability and delay sensitivity together with an increased data rate are very important for M2M, IoT, and Factory 4.0. Around the world, 5G standardization organizations have considered these services and grouped them to derive the technical requirements and service scenarios. The first scenario is broadcast services that use a high data rate, for example for sporting events or emergencies. The second scenario covers support for e-Health, car reliability, and similar services; the third scenario concerns VR games with delay sensitivity and real-time requirements. These groups have recently been reaching agreement on the requirements for such scenarios and their target levels. Various techniques are being studied to satisfy these requirements and are being discussed in the context of software-defined networking (SDN) as the next-generation network architecture. SDN, which is being standardized by the ONF, refers to a structure that separates the signaling of the control plane from the packets of the data plane. One of the best examples of low latency and high reliability is an intelligent traffic system (ITS) using V2X. Because a car passes through a small cell of the 5G network very rapidly, messages to be delivered in an emergency must be transported in a very short time; this is a typical example of high delay sensitivity. 5G must therefore meet high reliability and delay-sensitivity requirements for V2X in traffic control, which makes V2X a major delay-critical application. V2X (vehicle-to-infra/vehicle/nomadic) covers all types of communication applicable to roads and vehicles; it refers to a connected or networked vehicle. V2X can be divided into three kinds of communication: between a vehicle and infrastructure (vehicle-to-infrastructure; V2I), between vehicles (vehicle-to-vehicle; V2V), and between a vehicle and mobile equipment (vehicle-to-nomadic devices; V2N); further types are expected to be added in various fields. Because the SDN structure is under consideration as the next-generation network architecture, its properties are significant here. However, the centralized architecture of SDN can be unfavorable for delay-sensitive services, since the central controller must communicate with many nodes and supply the processing power for them. Therefore, for emergency V2X communications, delay-related control functions require a tree-like supporting structure, and the architecture of the network that processes vehicle information is a major variable affecting delay. Because it is difficult to meet the desired level of delay sensitivity with a fully centralized SDN structure, research on the optimal size of an SDN for processing the information is needed. This study examined the SDN architecture in light of the V2X emergency delay requirements of a 5G network in the worst-case scenario and performed a system-level simulation over vehicle speed, cell radius, and cell tier to derive the range of cells for information transfer in the SDN network. In the simulation, because 5G provides a sufficiently high data rate, the neighboring-vehicle support information delivered to the car was assumed to be error-free. Furthermore, the 5G small cell was assumed to have a radius of 50-100 m, and vehicle speeds of 30-200 km/h were considered in order to examine the network architecture that minimizes delay.
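
A back-of-the-envelope check of the simulation's geometry helps show why this is delay-critical: the time a vehicle spends inside one small cell bounds the time available to deliver an emergency message. The sketch below uses the paper's stated parameter ranges and the simplest case of crossing a cell along its diameter; the paper's system-level simulation is of course more detailed.

```python
# Cell dwell time under the paper's stated parameter ranges (simplified geometry).
def dwell_time_s(cell_radius_m: float, speed_kmh: float) -> float:
    """Time to cross a cell along its diameter at constant speed."""
    return 2.0 * cell_radius_m / (speed_kmh / 3.6)

for r in (50, 100):                  # assumed small-cell radii (m)
    for v in (30, 200):              # assumed vehicle speeds (km/h)
        print(f"radius {r:>3} m, {v:>3} km/h -> {dwell_time_s(r, v):6.1f} s")
# e.g. a car at 200 km/h crosses a 50 m-radius cell in about 1.8 s
```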

A Study on the Clustering Method of Row and Multiplex Housing in Seoul Using K-Means Clustering Algorithm and Hedonic Model (K-Means Clustering 알고리즘과 헤도닉 모형을 활용한 서울시 연립·다세대 군집분류 방법에 관한 연구)

  • Kwon, Soonjae; Kim, Seonghyeon; Tak, Onsik; Jeong, Hyeonhee
    • Journal of Intelligence and Information Systems / v.23 no.3 / pp.95-118 / 2017
  • Recently, centered on the downtown area, transactions of row housing and multiplex housing have increased, and platform services such as Zigbang and Dabang are growing. Row and multiplex housing nevertheless remain a blind spot for real estate information, creating a social problem owing to changes in market size and the information asymmetry that follows changes in demand. In addition, the 5 or 25 districts used by the Seoul Metropolitan Government and the Korea Appraisal Board (hereafter, KAB) were established within administrative boundaries and have been used in existing real estate studies; because these are urban-planning zones, they are not a district classification suited to real estate research. Building on existing studies, this study found that the spatial structure of Seoul needs to be redefined for estimating future housing prices. It therefore attempted to classify areas without spatial heterogeneity by reflecting the price characteristics of row and multiplex housing. In other words, the simple division by existing administrative districts has proved inefficient, so this study aims to cluster Seoul into new areas for more efficient real estate analysis. A hedonic model was applied to real transaction price data of row and multiplex housing, and the K-Means clustering algorithm was used to cluster the spatial structure of Seoul. The data comprised real transaction prices of Seoul row and multiplex housing from January 2014 to December 2016 and the official land value of 2016, provided by the Ministry of Land, Infrastructure and Transport (hereafter, MOLIT). Data preprocessing consisted of removing underground transactions, standardizing price per area, and removing outlier transactions with standardized prices above 5 or below -5; through preprocessing, the data set was reduced from 132,707 to 126,759 cases. The R program was used as the analysis tool. After preprocessing, the data model was constructed: K-means clustering was performed first, followed by regression analysis using the hedonic model and a cosine similarity analysis. Based on the constructed data model, we clustered Seoul on the basis of longitude and latitude and compared the result with the existing districts. The results indicated that the goodness of fit of the model was above 75% and that the variables used in the hedonic model were significant. In other words, the 5 or 25 existing administrative districts were re-divided into 16 districts. This study thus derived a clustering method for row and multiplex housing in Seoul using the K-Means clustering algorithm and a hedonic model reflecting price characteristics, and it presents academic and practical implications as well as the limitations of the study and directions for future research. The academic implication is that clustering by price characteristics improves on the districts used by the Seoul Metropolitan Government, KAB, and existing real estate research; a further academic implication is that, whereas apartments have been the main subject of existing research, this study proposes a method of classifying areas in Seoul using public information (i.e., real transaction data from MOLIT) under Government 3.0.
The practical implication is that the results can serve as basic data for real estate research on row and multiplex housing; further practical implications are that research on row and multiplex housing is expected to be activated and that the accuracy of models of actual transactions is expected to increase. Future research should involve various analyses to overcome the stated limitations and deeper investigation.
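
A minimal sketch of the clustering step described above, assuming an array of transaction coordinates: the file name and preprocessing are placeholders, and the study itself used R rather than Python, so this only illustrates the K-Means-into-16-districts idea.

```python
# Hypothetical K-Means clustering of Seoul transactions into 16 districts.
import numpy as np
from sklearn.cluster import KMeans

coords = np.load("seoul_row_multiplex_coords.npy")  # rows: [longitude, latitude]

km = KMeans(n_clusters=16, random_state=0, n_init=10).fit(coords)
labels = km.labels_                                 # district id per transaction
print(np.bincount(labels))                          # transactions per district
```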

Status and Management Strategy of Pesticide Use in Golf Courses in Korea (우리나라 골프장의 농약사용 실태 및 관리방안)

  • Kim, Dongjin; Yoon, Jeongki; Yoo, Jiyoung; Kim, Su-Jung; Yang, Jae E.
    • Journal of Applied Biological Chemistry / v.57 no.3 / pp.267-277 / 2014
  • The objective of this paper is to assess the available data on pesticide use and regulation in golf courses and to provide nationwide, systematic management options. The number of golf courses in Korea increased rapidly from the 2000s, reaching 421 sites by the end of 2011, and pesticide usage has increased over the years in direct proportion to the number of courses. The amount of pesticide applied in 2011 was 118,669 kg of active ingredient, in the order fungicides (54.9%) > insecticides (24.4%) > herbicides (13.3%) > growth regulators (0.1%). Average pesticide usage in 2011 was 280.9 kg per golf course, or 5.4 kg/ha. Residual pesticides were detected more frequently in greens and turf than in fairways and soil, respectively, and no residues of highly toxic pesticides were detected. In 2010, the Ministry of Environment developed the 'golf course pesticide monitoring and management system', an online registry of the kinds and amounts of pesticides applied at each golf course, intended to monitor pesticide use and residual levels and to protect the environment from pesticide pollution at golf courses. In 2009, management of pesticides in golf courses became the task of the Ministry of Environment, consolidated from several ministries and agencies. A protocol for site-specific best management practices, based on results from risk assessment, should be established for pesticides on golf courses to minimize environmental impacts.

A Study on Shaker's Free Design from Fashion (유행(流行)으로부터 자유로운 세이커(Shaker) 디자인에 대한 고찰)

  • Choi, Sung-Woon; Huh, Jin
    • Archives of design research / v.20 no.3 s.71 / pp.279-288 / 2007
  • Today, design is not free from fashion, which emerges and vanishes temporarily and aims at homogenization. As a result, products quickly become obsolete because of fashion: their lifespan is determined by an unarticulated social concept, regardless of their function. Unless measures against fashion are found, usable products will gradually disappear, causing serious environmental problems. In this circumstance, it is important to study the characteristics of Shaker design. The Shaker community differs markedly from other societies. Temporary fashion and misleading information cannot interfere with its consciousness, and religion, daily life, and the principles of design developed on the same level within the community. In particular, decoration and differences of material are not allowed in Shaker design, reflecting the belief that all people are equal in the sight of God; decoration signaling social or economic superiority therefore cannot be used. Through this consciousness, the Shakers remain free from fashion and decoration. They also believe that perfection can be reached through practicality and simplicity. The reason Shaker design is not disturbed by fashion is that their belief is embodied in their design. Consequently, if religious or conscious content is established first, design can be free from fashion and products can be used for a long time.
