• Title/Summary/Keyword: 모델링 과정 (modeling process)


Vegetation classification based on remote sensing data for river management (하천 관리를 위한 원격탐사 자료 기반 식생 분류 기법)

  • Lee, Chanjoo; Rogers, Christine; Geerling, Gertjan; Pennin, Ellis
    • Proceedings of the Korea Water Resources Association Conference / 2021.06a / pp.6-7 / 2021
  • Vegetation development in rivers is an important issue not only in academic fields such as geomorphology, ecology, and hydraulics, but also in river management practice. The problem of river vegetation is directly connected to reconciling the conflicting values of flood management and ecosystem conservation. In Korea, the issue of river vegetation and land formation has been raised continuously since the 2000s under various conditions, such as regulated rivers downstream of dams, small eutrophic tributaries, and the floodplain sites of the Four Major Rivers Project. Against this background, this study proposes a method for classifying the distribution of vegetation in rivers based on remote sensing data and presents the results of applying it to the Naeseong Stream. The Naeseong Stream is a representative example of a river landscape that has changed through vegetation development from 2014 to the present. The remote sensing data used in the study are images from the Sentinel-1 and Sentinel-2 satellites, which are operated by the European Space Agency (ESA) and provided through Google Earth Engine. For the ground truth, a manually classified dataset of the Naeseong Stream surface in 2016 was used, in which the area is divided into eight types including water, sand, and herbaceous and woody vegetation. The classification used the random forest technique, one of the machine learning algorithms. 1,000 samples were extracted from 10 pre-selected polygon regions; half were used as training data and the other half as verification data. The accuracy based on the verification data was 82~85%. The model established through training was also applied to images from 2016 to 2020, showing the year-by-year change of the vegetated zones. The technical limitations of the method and measures for improvement are also discussed. By providing quantitative information on vegetation distribution, this technique is expected to be useful not only in technical fields such as flood level calculation and coupled flow-vegetation modeling, but also in practical vegetation management such as thinning and rejuvenation of river vegetation.
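Below is a minimal sketch of the classification step this abstract describes, using scikit-learn's RandomForestClassifier in place of the Google Earth Engine classifier the study actually used; the feature layout, class count, and synthetic data are illustrative assumptions, not the study's pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Hypothetical per-sample features: Sentinel-1 backscatter (VV, VH) plus four
# Sentinel-2 band reflectances; 8 surface classes (water, sand, herbaceous,
# woody vegetation, ...) matching the paper's eight-type ground truth.
n_samples, n_features, n_classes = 1000, 6, 8
X = rng.random((n_samples, n_features))    # placeholder spectral features
y = rng.integers(0, n_classes, n_samples)  # placeholder ground-truth labels

# As in the paper: half of the 1,000 samples for training, half for verification.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print(f"verification accuracy: {accuracy_score(y_test, clf.predict(X_test)):.2f}")
```

With real labeled samples in place of the random placeholders, the same script reproduces the train/verify split the abstract describes.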


Exploring Branch Structure across Branch Orders and Species Using Terrestrial Laser Scanning and Quantitative Structure Model (지상형 라이다와 정량적 구조 모델을 이용한 분기별, 종별 나무의 가지 구조 탐구)

  • Seongwoo Jo; Tackang Yang
    • Korean Journal of Agricultural and Forest Meteorology / v.26 no.1 / pp.31-52 / 2024
  • Considering the significant relationship between a tree's branch structure and its physiology, understanding the detailed branch structure is crucial for fields such as species classification and 3D tree modelling. Recently, terrestrial laser scanning (TLS) and quantitative structure models (QSM) have enhanced the understanding of branch structures by capturing the radius, length, and branching angle of branches. Previous studies examining branch structure with TLS and QSM often relied on the mean or median of branch structure parameters, such as the radius ratio and length ratio in parent-child relationships, as representative values. Additionally, these studies have typically focused on the relationship between the trunk and first-order branches. This study explores the distribution of branch structure parameters up to the third order in Aesculus hippocastanum, Ginkgo biloba, and Prunus yedoensis. The gamma distribution best represented the distributions of branch structure parameters, as evidenced by the average Kolmogorov-Smirnov statistics (radius = 0.048; length = 0.061; angle = 0.050). The mode, mean, and median were compared to determine which measure best indicates the central tendency of branch structure parameters. The estimated distributions showed differences between the mode and mean (average normalized differences: radius ratio = 11.2%; length ratio = 17.0%; branching angle = 8.2%) and between the mode and median (radius ratio = 7.5%; length ratio = 11.5%; branching angle = 5.5%). Comparing the estimated distributions across branch orders and species revealed variations in both. This study suggests that examining the estimated distribution of branch structure parameters offers a more detailed description of branch structure, capturing its central tendencies, and emphasizes the importance of examining higher branch orders to gain a comprehensive understanding of branch structure, given the differences across branch orders.
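The distribution-fitting step the abstract describes can be sketched with scipy; the sample data below are hypothetical radius ratios, not the study's measurements, and the gamma parameters are chosen only for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Placeholder parent-child radius ratios for one species and branch order.
radius_ratio = rng.gamma(shape=5.0, scale=0.12, size=500)

# Fit a gamma distribution (location fixed at 0, since ratios are positive).
shape, loc, scale = stats.gamma.fit(radius_ratio, floc=0)

# Kolmogorov-Smirnov statistic between the data and the fitted distribution,
# the goodness-of-fit measure the abstract reports.
ks_stat, p_value = stats.kstest(radius_ratio, "gamma", args=(shape, loc, scale))
print(f"KS statistic: {ks_stat:.3f}")

# Mode, mean, and median of the fitted gamma, compared as in the paper.
mode = (shape - 1) * scale if shape > 1 else 0.0
mean = shape * scale
median = stats.gamma.median(shape, loc=loc, scale=scale)
print(f"mode={mode:.3f} mean={mean:.3f} median={median:.3f}")
```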

Automatic Quality Evaluation with Completeness and Succinctness for Text Summarization (완전성과 간결성을 고려한 텍스트 요약 품질의 자동 평가 기법)

  • Ko, Eunjung; Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.24 no.2 / pp.125-148 / 2018
  • Recently, as the demand for big data analysis increases, cases of analyzing unstructured data and using the results are also increasing. Among the various types of unstructured data, text is used as a means of communicating information in almost all fields. Text also interests many analysts because it exists in very large amounts and is relatively easy to collect compared to other unstructured and structured data. Among the various text analysis applications, active research topics include document classification, which assigns documents to predetermined categories; topic modeling, which extracts major topics from a large number of documents; sentiment analysis or opinion mining, which identifies emotions or opinions contained in texts; and text summarization, which condenses the main contents of one or several documents. Text summarization in particular is actively applied in business, through news summary services, privacy policy summary services, etc. In academia, much research has followed the extraction approach, which selectively provides the main elements of a document, and the abstraction approach, which extracts elements of the document and composes new sentences by combining them. However, techniques for evaluating the quality of automatically summarized documents have made much less progress than automatic text summarization itself. Most existing studies of summarization quality evaluation manually summarized documents, used them as reference documents, and measured the similarity between the automatic summary and the reference document. Specifically, automatic summarization is performed on the full text through various techniques, and quality is measured by comparison with the reference document, which serves as the ideal summary. Reference documents are produced in two major ways; the most common is manual summarization, in which a person creates the ideal summary by hand. Since this method requires human intervention, it costs considerable time and money, and the evaluation result may differ depending on who writes the summary. To overcome these limitations, attempts have been made to measure the quality of summary documents without human intervention. A representative recent attempt reduces the size of the full text and measures the similarity between the reduced full text and the automatic summary; in this method, the more often frequent terms of the full text appear in the summary, the better the quality of the summary is judged to be. However, since summarization essentially means condensing a large amount of content while minimizing omissions, a summary judged "good" on frequency alone is not always a good summary in this essential sense. To overcome the limitations of these previous evaluation studies, this study proposes an automatic quality evaluation method for text summarization based on the essential meaning of summarization. Specifically, succinctness is defined as an element indicating how little content is duplicated among the sentences of the summary, and completeness as an element indicating how little of the original content is missing from the summary. To evaluate the practical applicability of the proposed methodology, 29,671 sentences were extracted from TripAdvisor's hotel reviews, the reviews were summarized for each hotel, and the results of experiments evaluating summary quality according to the proposed methodology are presented. The paper also shows how to integrate completeness and succinctness, which stand in a trade-off relationship, into an F-Score, and proposes a method for performing optimal summarization by varying the sentence similarity threshold.
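A minimal sketch of the two proposed measures under our own simplifying assumptions: sentence similarity is TF-IDF cosine similarity, and a threshold decides when one sentence covers or duplicates another. The paper's exact similarity measure is not reproduced here, so treat this as an illustration of the completeness/succinctness trade-off, not the authors' implementation.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def summary_quality(source_sents, summary_sents, threshold=0.3):
    vec = TfidfVectorizer().fit(source_sents + summary_sents)
    S = cosine_similarity(vec.transform(source_sents), vec.transform(summary_sents))
    # Completeness: fraction of source sentences covered by some summary sentence.
    completeness = float(np.mean(S.max(axis=1) >= threshold))
    # Succinctness: fraction of summary sentences not duplicating another one.
    P = cosine_similarity(vec.transform(summary_sents))
    np.fill_diagonal(P, 0.0)
    succinctness = float(np.mean(P.max(axis=1) < threshold))
    # Combine the trade-off into an F-Score, as the paper suggests.
    if completeness + succinctness == 0:
        return 0.0, completeness, succinctness
    f = 2 * completeness * succinctness / (completeness + succinctness)
    return f, completeness, succinctness

# Toy hotel-review example (invented sentences).
f, c, s = summary_quality(
    ["The room was clean.", "Staff were friendly.", "Breakfast was great."],
    ["Clean room and friendly staff."],
)
print(f"F={f:.2f} completeness={c:.2f} succinctness={s:.2f}")
```

Sweeping `threshold` over a grid and keeping the summary with the highest F-Score mirrors the optimal-summarization step described above.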

Radiation Therapy Using M3 Wax Bolus in Patients with Malignant Scalp Tumors (악성 두피 종양(Scalp) 환자의 M3 Wax Bolus를 이용한 방사선치료)

  • Kwon, Da Eun; Hwang, Ji Hye; Park, In Seo; Yang, Jun Cheol; Kim, Su Jin; You, Ah Young; Won, Young Jinn; Kwon, Kyung Tae
    • The Journal of Korean Society for Radiation Therapy / v.31 no.1 / pp.75-81 / 2019
  • Purpose: Helmet-type boluses are being manufactured with 3D printers because of the disadvantages of conventional bolus materials when photon beams are used to treat scalp malignancies. However, PLA, the printing material used, has a higher density than tissue-equivalent material, and wearing it is uncomfortable for the patient. In this study, we treat malignant scalp tumors using an M3 wax helmet made with a 3D printer. Methods and materials: For modeling the helmet-type M3 wax, a head phantom was scanned by CT and the images were acquired as DICOM files. The helmet portion covering the scalp was delineated with a helmet contour. The M3 wax helmet was made by melting paraffin wax, mixing in magnesium oxide and calcium carbonate, solidifying the mixture in a PLA 3D-printed helmet, and then removing the PLA helmet from the surface. The treatment plan was a 10-portal Intensity-Modulated Radiation Therapy (IMRT) plan with a therapeutic dose of 200 cGy, calculated with the Analytical Anisotropic Algorithm (AAA) of Eclipse. The dose was then verified using EBT3 film and a MOSFET (Metal Oxide Semiconductor Field Effect Transistor; USA) dosimeter, and the IMRT plan was measured three times at three points by reproducing the head phantom under the same conditions as in the CT simulation room. Results: The Hounsfield unit (HU) of the bolus measured by CT was 52 ± 37.1. The TPS doses at M3 wax bolus measurement points A, B, and C were 186.6 cGy, 193.2 cGy, and 190.6 cGy, while the doses measured three times with the MOSFET were 179.66 ± 2.62 cGy, 184.33 ± 1.24 cGy, and 195.33 ± 1.69 cGy, giving error rates of -3.71 %, -4.59 %, and 2.48 %. The doses measured with EBT3 film were 182.00 ± 1.63 cGy, 193.66 ± 2.05 cGy, and 196 ± 2.16 cGy, giving error rates of -2.46 %, 0.23 %, and 2.83 %. Conclusions: The thickness of the M3 wax bolus was 2 cm, which helped the treatment plan by easily lowering the dose to the brain. In the treatment dose verification, the maximum error rate of the scalp surface dose was within 5 %, and generally within 3 %, across the A, B, and C measurements with both the EBT3 film and MOSFET dosimeters. The M3 wax bolus is quicker and cheaper to make than a 3D-printed one, can be reused, and, as a human-tissue-equivalent material, is very useful for the treatment of scalp malignancies. Therefore, we expect the use of casting-type M3 wax boluses, which complement the production time and cost of large-volume boluses and compensators made with 3D printers, to increase.
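As a quick arithmetic check of the reported results, the sketch below recomputes the error rates, assuming error rate = (measured − planned) / planned × 100; it agrees with the abstract's figures to within rounding.

```python
# Planned (TPS) and mean measured doses in cGy, taken from the abstract.
planned = {"A": 186.6, "B": 193.2, "C": 190.6}
mosfet  = {"A": 179.66, "B": 184.33, "C": 195.33}
ebt3    = {"A": 182.00, "B": 193.66, "C": 196.00}

for point in planned:
    for name, measured in (("MOSFET", mosfet), ("EBT3", ebt3)):
        # Percent deviation of measured dose from the planned dose.
        err = (measured[point] - planned[point]) / planned[point] * 100
        print(f"{point} {name}: {err:+.2f} %")
```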

Documentation of Intangible Cultural Heritage Using Motion Capture Technology Focusing on the documentation of Seungmu, Salpuri and Taepyeongmu (부록 3. 모션캡쳐를 이용한 무형문화재의 기록작성 - 국가지정 중요무형문화재 승무·살풀이·태평무를 중심으로 -)

  • Park, Weonmo; Go, Jungil; Kim, Yongsuk
    • Korean Journal of Heritage: History & Science / v.39 / pp.351-378 / 2006
  • With the development of media, the methods for documenting intangible cultural heritage have also developed and diversified. In addition to the previous analogue methods of documentation, new multimedia technologies focusing on digital pictures, sound sources, movies, etc., have recently been applied. Among the new technologies, documentation of intangible cultural heritage using 'motion capture' has proved especially prominent in fields that require three-dimensional documentation, such as dances and performances. Motion capture refers to the documentation technology that records the signals of time-varying positions derived from sensors attached to the surface of an object. It converts the signals from the sensors into digital data that can be plotted as points on the virtual coordinates of a computer, and records the movement of those points over a period of time as the object moves. It produces scientific data for the preservation of intangible cultural heritage by displaying digital data representing the motion of a holder of an intangible cultural heritage. The National Research Institute of Cultural Properties (NRICP) has been working on the development of a new documentation method for the Important Intangible Cultural Heritage designated by the Korean government, using motion capture equipment of the kind widely used for computer graphics in the movie and game industries. The project, supported by lottery funds, is designed to apply motion capture technology for three years, from 2005 to 2007, to 11 performances from 7 traditional dances whose body gestures have considerable value among the Important Intangible Cultural Heritage performances. In 2005, the first year of the project, data were accumulated for solo dances that are relatively easy in terms of performing skills: Seungmu (monk's dance), Salpuri (a solo dance for spiritual cleansing), and Taepyeongmu (dance of peace). In 2006, group dances such as Jinju Geommu (Jinju sword dance), Seungjeonmu (dance for victory), and Cheoyongmu (dance of Lord Cheoyong) will be documented. In the last year of the project, 2007, an education programme for comparative studies, analysis, and transmission of intangible cultural heritage, together with three-dimensional contents for public service, will be devised based on the accumulated data, alongside the documentation of Hakyeonhwadae Habseolmu (crane dance combined with the lotus blossom dance). By describing the processes and results of the motion capture documentation of the Salpuri dance (Lee Mae-bang), Taepyeongmu (Kang Seon-young), and Seungmu (Lee Mae-bang, Lee Ae-ju, and Jung Jae-man) conducted in 2005, this report introduces a new approach to documenting intangible cultural heritage. During the first year of the project, two questions were raised. First, how can the motions of a holder (dancer) be captured without cutoffs during quite a long performance? After many tests, the motion capture system proved stable, producing continuous results. Second, how can accurate motion be reproduced without the re-targeting process? For the first time in Korea, the project derived digital models of the dancers' body shapes before the motion capture process, and thereby re-created the dancers' gestures most accurately. The accurate three-dimensional body models of the four holders obtained by body scanning enhanced the accuracy of the motion capture of the dances.
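To illustrate the kind of data such a system records, here is a minimal sketch of time-stamped 3D marker positions; the marker names, sampling rate, and coordinates are invented for the example and are not the project's actual data.

```python
from dataclasses import dataclass

@dataclass
class Frame:
    """One motion-capture sample: a timestamp plus labeled 3D marker positions."""
    time_s: float
    markers: dict[str, tuple[float, float, float]]

# Hypothetical capture at 120 Hz for two wrist markers.
trajectory = [
    Frame(0.0, {"left_wrist": (0.12, 1.05, 0.33), "right_wrist": (0.48, 1.02, 0.31)}),
    Frame(1 / 120, {"left_wrist": (0.13, 1.06, 0.34), "right_wrist": (0.47, 1.03, 0.31)}),
]

# Displacement of one marker between consecutive frames.
x0, y0, z0 = trajectory[0].markers["left_wrist"]
x1, y1, z1 = trajectory[1].markers["left_wrist"]
print(((x1 - x0) ** 2 + (y1 - y0) ** 2 + (z1 - z0) ** 2) ** 0.5)
```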

Dismantling and Restoration of the Celadon Stool Treasure with an Openwork Ring Design (보물 청자 투각고리문 의자의 해체 및 복원)

  • KWON, Ohyoung; LEE, Sunmyung; LEE, Jangjon; PARK, Younghwan
    • Korean Journal of Heritage: History & Science / v.55 no.2 / pp.200-211 / 2022
  • The celadon stools with an openwork ring design, which form a single collection of four items, were excavated from Gaeseong, Gyeonggi-do Province. The celadon stools were designated and managed as treasures due to their high art-historical value, demonstrating the excellence of celadon manufacturing techniques and the fanciful lifestyles of the Goryeo Dynasty. However, one of the items, which appeared to have been repaired and restored in the past, had declined in aesthetic value due to the aging of the treatment materials and the limited skill of the previous conservator, and its structural instability raised the need for re-treatment. An examination prior to conservation treatment found structural vulnerabilities, as physical damage had spread throughout an area that had been rendered defective at the time of manufacture. The bonded surfaces of the cracked areas and detached fragments did not fit, and these areas had deteriorated where the adhesive had trickled down onto the celadon surface or secondary contaminants, such as dust, had settled on the adhesive. To investigate the earlier repair and restoration, the study identified the position, scope, and condition of the bonded areas at the cracks using UV light and microscopy. By conducting Fourier-transform infrared spectroscopy (FT-IR) and portable X-ray fluorescence spectroscopy on the materials used in the former treatment, the study confirmed the use of cellulose resins and epoxy resins as adhesives. The analysis also revealed that gypsum (CaSO4·2H2O) and bone meal (Ca10(PO4)6(OH)2) had been added to the adhesive to increase the bonding strength of some load-bearing bonded areas. Based on these results, the conservation treatment focused on completely dismantling the existing bonded areas and then consolidating vulnerable areas through bonding and restoration. After the prior adhesive was removed and the piece dismantled, the celadon stool separated into six large fragments, including the top and bottom, the curved legs, and parts of the ring design. The remaining adhesive and contaminants were then removed chemically and physically, and a steam cleaner was used to clean the fractured surfaces to improve the efficacy of re-bonding. Bonding applied different adhesives depending on the area and size of the join. The cyanoacrylate resin Loctite 401 was used where fragments needed to be held in position, while the acrylic resin Paraloid B-72 20% (in xylene) was applied to cross-sections for reversibility in areas providing structural stability, before the fragments were bonded with the epoxy resin Epo-tek 301-2. For load-bearing areas, such as the top and bottom, kaolin was added to Epo-tek 301-2 to reinforce the bonding strength. For the missing parts of the ring design, where the continuous pattern could be inferred, a frame was made with SN-sheets and the ring design was modeled and restored by connecting the damaged cross-section with Wood epos. Other gaps that appeared during bonding were filled with Wood epos for aesthetic and structural stabilization. Restored and filled areas were color-matched to avoid any sense of disharmony from texture differences in future exhibitions. The investigation and treatment process, involving a variety of scientific techniques, was systematically documented for use as basic data for future conservation and maintenance.

A Proposal of a Keyword Extraction System for Detecting Social Issues (사회문제 해결형 기술수요 발굴을 위한 키워드 추출 시스템 제안)

  • Jeong, Dami; Kim, Jaeseok; Kim, Gi-Nam; Heo, Jong-Uk; On, Byung-Won; Kang, Mijung
    • Journal of Intelligence and Information Systems / v.19 no.3 / pp.1-23 / 2013
  • To discover significant social issues, such as unemployment, economic crisis, and social welfare, that urgently need to be solved in modern society, the existing approach has researchers collect opinions from professional experts and scholars through online or offline surveys. However, such a method is not always effective. Due to the expense involved, a large number of survey replies is seldom gathered, and in some cases it is hard to find professionals dealing with specific social issues, so the sample set is often small and may be biased. Furthermore, regarding a given social issue, several experts may reach totally different conclusions, because each expert has a subjective point of view and a different background. In such cases, it is considerably hard to determine what the current social issues are and which of them are really important. To overcome the shortcomings of the current approach, in this paper we develop a prototype system that semi-automatically detects social issue keywords representing social issues and problems in about 1.3 million news articles issued by about 10 major domestic presses in Korea from June 2009 to July 2012. Our proposed system consists of (1) collecting the news articles and extracting their texts, (2) identifying only the news articles related to social issues, (3) analyzing the lexical items of the Korean sentences, (4) finding a set of topics regarding social keywords over time based on probabilistic topic modeling, (5) matching relevant paragraphs to a given topic, and (6) visualizing the social keywords for easy understanding. In particular, we propose a novel matching algorithm relying on generative models, whose goal is to best match paragraphs to each topic. Technically, using a topic model such as Latent Dirichlet Allocation (LDA), we can obtain a set of topics, each of which has relevant terms and their probability values. In our problem, given a set of text documents (e.g., news articles), LDA yields a set of topic clusters, and each topic cluster is then labeled by human annotators, where each topic label stands for a social keyword. For example, suppose there is a topic (e.g., Topic1 = {(unemployment, 0.4), (layoff, 0.3), (business, 0.3)}) and a human annotator labels Topic1 "Unemployment Problem". From this alone, it is non-trivial to understand what happened to the unemployment problem in our society; looking only at social keywords, we have no idea of the detailed events occurring in society. To tackle this, we develop a matching algorithm that computes the probability of a paragraph given a topic, relying on (i) the topic terms and (ii) their probability values. Given a set of text documents, we segment each document into paragraphs and, using LDA, extract a set of topics from the documents. Through our matching process, each paragraph is assigned to the topic it best matches, so each topic ends up with several best-matched paragraphs. For instance, given a topic (e.g., Unemployment Problem) and its best-matched paragraph (e.g., "Up to 300 workers lost their jobs in XXX company at Seoul"), we can grasp the detailed information behind the social keyword, such as "300 workers", "unemployment", "XXX company", and "Seoul". In addition, our system visualizes social keywords over time. Therefore, through our matching process and keyword visualization, most researchers will be able to detect social issues easily and quickly. Using this prototype system, we have detected various social issues appearing in our society, and our experimental results show the effectiveness of the proposed methods. Note that the proof-of-concept system is also available at http://dslab.snu.ac.kr/demo.html. A sketch of the matching idea appears below.
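A minimal sketch of the matching idea, under our own assumptions: each paragraph is scored against each topic by summing the log-probabilities of its words under the topic's term distribution (with a small floor for out-of-topic words), then assigned to the best-scoring topic. Topic1 follows the abstract's example; the second topic and the paragraph are invented, and a real system would lemmatize and normalize the text first.

```python
import math

# Topic1 is taken from the abstract's example; "Economic Crisis" is invented.
topics = {
    "Unemployment Problem": {"unemployment": 0.4, "layoff": 0.3, "business": 0.3},
    "Economic Crisis": {"crisis": 0.5, "economy": 0.3, "recession": 0.2},
}

def match_topic(paragraph, topics, floor=1e-6):
    """Assign a paragraph to the topic maximizing log p(paragraph | topic)
    under a unigram generative model, flooring out-of-topic word probabilities."""
    words = paragraph.lower().split()
    scores = {
        label: sum(math.log(term_probs.get(w, floor)) for w in words)
        for label, term_probs in topics.items()
    }
    return max(scores, key=scores.get), scores

# Invented example paragraph containing two Topic1 terms.
paragraph = "Massive layoff at XXX company pushed unemployment up in Seoul"
best, scores = match_topic(paragraph, topics)
print(best)  # -> "Unemployment Problem"
```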