• Title/Summary/Keyword: 자동 검증 (automatic verification)

Search Results: 1,672 (processing time: 0.037 seconds)

Characteristics of Pollution Loading from Kyongan Stream Watershed by BASINS/SWAT. (BASINS/SWAT 모델을 이용한 경안천 유역의 오염부하 배출 특성)

  • Jang, Jae-Ho;Yoon, Chun-Gyeong;Jung, Kwang-Wook;Lee, Sae-Bom
    • Korean Journal of Ecology and Environment
    • /
    • v.42 no.2
    • /
    • pp.200-211
    • /
    • 2009
  • A mathematical modeling program, the Soil and Water Assessment Tool (SWAT) developed by the USDA, was applied to the Kyongan stream watershed. It was run under the BASINS (Better Assessment Science for Integrating point and Non-point Sources) program, and the model was calibrated and validated using KTMDL monitoring data from 2004~2008. The model efficiency for flow ranged from very good to fair in the comparison between simulated and observed data, and that for the water quality parameters was good over a similar range. The model's reliability and performance were within expectations considering the complexity of the watershed and its pollutant sources. In the yearly pollutant load estimates (2004~2008), the loadings for 2006 were higher than those of the other years because of high precipitation and flow. Average non-point source (NPS) pollution rates were 30.4%, 45.3%, and 28.1% for SS, TN, and TP, respectively. The NPS pollutant loading for SS, TN, and TP during the monsoon rainy season (June to September) was about 61.8~88.7% of the total NPS pollutant loading, and the flow volume was in a similar range. SS concentration depended on precipitation and pollution loading patterns, but TN and TP concentrations were not necessarily high during the rainy season and showed a decreasing trend with increasing water flow. SWAT under BASINS was applied to the Kyongan stream watershed successfully and without difficulty, and it was found that the model could be used conveniently to assess watershed characteristics and to estimate pollutant loading, including point and non-point sources, at the watershed scale.
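The "very good to fair" flow ratings above are conventionally derived from goodness-of-fit statistics such as the Nash-Sutcliffe efficiency (NSE); the abstract does not name the statistic used, so the sketch below only illustrates how such a simulated-versus-observed comparison is typically computed. The flow values are hypothetical.

```python
import numpy as np

def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency: 1.0 is a perfect fit; values <= 0
    mean the model is no better than the mean of the observations."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum(
        (observed - observed.mean()) ** 2
    )

# Hypothetical daily flow series (m^3/s) for illustration only.
obs = np.array([12.1, 15.3, 40.2, 33.8, 20.5, 14.0])
sim = np.array([11.5, 16.0, 36.9, 35.2, 22.1, 13.2])
print(f"NSE = {nash_sutcliffe(obs, sim):.3f}")
```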

RPC Correction of KOMPSAT-3A Satellite Image through Automatic Matching Point Extraction Using Unmanned Aerial Vehicle Imagery (무인항공기 영상 활용 자동 정합점 추출을 통한 KOMPSAT-3A 위성영상의 RPC 보정)

  • Park, Jueon;Kim, Taeheon;Lee, Changhui;Han, Youkyung
    • Korean Journal of Remote Sensing
    • /
    • v.37 no.5_1
    • /
    • pp.1135-1147
    • /
    • 2021
  • In order to geometrically correct high-resolution satellite imagery, a sensor modeling process that restores the geometric relationship between the satellite sensor and the ground surface at the time of image acquisition is required. In general, high-resolution satellites provide RPC (Rational Polynomial Coefficient) information, but the vendor-provided RPC includes geometric distortion caused by the position and orientation of the satellite sensor. GCPs (Ground Control Points) are generally used to correct the RPC errors. The representative method of acquiring GCPs is a field survey to obtain accurate ground coordinates. However, it can be difficult to find GCPs in a satellite image because of image quality, land cover change, relief displacement, etc. By using image maps acquired from various sensors as reference data, GCP collection can be automated through image matching algorithms. In this study, the RPC of a KOMPSAT-3A satellite image was corrected using matching points extracted from UAV (Unmanned Aerial Vehicle) imagery. We propose a pre-processing method for the extraction of matching points between the UAV imagery and the KOMPSAT-3A satellite image. To this end, the characteristics of matching points extracted by independently applying SURF (Speeded-Up Robust Features) and phase correlation, which are representative feature-based and area-based matching methods, respectively, were compared. The RPC adjustment parameters were calculated using the matching points extracted by each algorithm. To verify the performance and usability of the proposed method, it was compared with the GCP-based RPC correction result. The GCP-based method improved the correction accuracy by 2.14 pixels for the sample and 5.43 pixels for the line compared to the vendor-provided RPC. In the proposed method using the SURF and phase correlation methods, the accuracy of the sample was improved by 0.83 and 1.49 pixels, and that of the line by 4.81 and 5.19 pixels, respectively, compared to the vendor-provided RPC. The experimental results suggest that the proposed UAV-based method is a viable alternative to the GCP-based method for RPC correction.
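For the area-based matching step, a minimal sketch of phase correlation with OpenCV is shown below, assuming two co-registered, equally sized grayscale chips; the file names are placeholders, and the paper's actual preprocessing chain is not reproduced. (SURF additionally requires an opencv-contrib build with non-free modules enabled.)

```python
import cv2
import numpy as np

# Placeholder file names; both chips must be single-band, the same
# size, and resampled to a common ground sample distance beforehand.
uav_chip = cv2.imread("uav_chip.tif", cv2.IMREAD_GRAYSCALE).astype(np.float32)
sat_chip = cv2.imread("k3a_chip.tif", cv2.IMREAD_GRAYSCALE).astype(np.float32)

# Area-based matching: phase correlation estimates the sub-pixel
# (dx, dy) shift between the chips in the Fourier domain; a Hanning
# window suppresses boundary effects.
window = cv2.createHanningWindow(uav_chip.shape[::-1], cv2.CV_32F)
(dx, dy), response = cv2.phaseCorrelate(uav_chip, sat_chip, window)
print(f"shift = ({dx:.2f}, {dy:.2f}) px, peak response = {response:.3f}")
```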

Protective effect of Gabjubaekmok (Diospyros kaki) extract against amyloid beta (Aβ)-induced cognitive impairment in a mouse model (아밀로이드 베타(amyloid beta)로 유도된 인지장애 마우스 모델에서 갑주백목(Diospyros kaki) 추출물의 인지기능 및 뇌 신경세포 보호 효과)

  • Yoo, Seul Ki;Kim, Jong Min;Park, Seon Kyeong;Kang, Jin Yong;Han, Hye Ju;Park, Hyo Won;Kim, Chul-Woo;Lee, Uk;Heo, Ho Jin
    • Korean Journal of Food Science and Technology
    • /
    • v.51 no.4
    • /
    • pp.379-392
    • /
    • 2019
  • The current study investigated the effect of Gabjubaekmok (Diospyros kaki) ethanolic extract (GEE) on H2O2-treated human neuroblastoma MC-IXC cells and amyloid beta (Aβ1-42)-induced ICR (Institute of Cancer Research) mice. GEE showed significant antioxidant activity, evaluated based on ABTS and DPPH scavenging activity and the inhibition of malondialdehyde (MDA) formation and acetylcholinesterase activity. Further, GEE inhibited ROS production and increased cell viability in H2O2-treated MC-IXC cells. Administration of GEE ameliorated cognitive dysfunction in Aβ-induced ICR mice, as evaluated using the Y-maze, passive avoidance, and Morris water maze tests. Results of an ex vivo test using brain tissues showed that GEE protected the cholinergic system and mitochondrial function, as assessed by ROS levels, mitochondrial membrane potential (MMP), and adenosine triphosphate (ATP) levels, against Aβ-induced cognitive dysfunction. Moreover, GEE decreased the expression levels of apoptosis-related proteins such as TNF-α, p-JNK, p-tau, BAX, and caspase 3, while the expression levels of p-Akt and p-GSK3β were higher than in the Aβ group. Finally, gallic acid was identified as the main compound of GEE using high-performance liquid chromatography.

Modeling and mapping fuel moisture content using equilibrium moisture content computed from weather data of the automatic mountain meteorology observation system (AMOS) (산악기상자료와 목재평형함수율에 기반한 산림연료습도 추정식 개발)

  • Lee, HoonTaek;WON, Myoung-Soo;YOON, Suk-Hee;JANG, Keun-Chang
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.22 no.3
    • /
    • pp.21-36
    • /
    • 2019
  • Dead fuel moisture content is a key variable in fire danger rating, as it affects fire ignition and behavior. This study evaluates simple regression models estimating the moisture content of a standardized 10-h fuel stick (10-h FMC) at three sites with different characteristics (urban, and outside/inside the forest). Equilibrium moisture content (EMC) was used as the independent variable, and in-situ measured 10-h FMC was used as the dependent variable and validation data. 10-h FMC spatial distribution maps were created for the dates with the most frequent fire occurrence during 2013-2018, and the 10-h FMC values on those dates were analyzed to investigate under which 10-h FMC conditions forest fires are likely to occur. As a result, the fitted equations explained a considerable part of the variance in 10-h FMC (62~78%). Against the validation data, the models performed well, with R² ranging from 0.53 to 0.68, root mean squared error (RMSE) from 2.52% to 3.43%, and bias from -0.41% to 1.10%. When the 10-h FMC model fitted for one site was applied to the other sites, R² remained the same, while RMSE and bias increased up to 5.13% and 3.68%, respectively. The major deficiency of the 10-h FMC model was that it poorly captured the difference between 10-h FMC and EMC in the drying process after rainfall. From the analysis of 10-h FMC on the dates fires occurred, more than 70% of the fires occurred under a 10-h FMC condition of less than 10.5%. Overall, the present study proposed a simple model estimating 10-h FMC with acceptable performance. Applying the 10-h FMC model to the automatic mountain meteorology observation system was successfully tested to produce a national-scale 10-h FMC spatial distribution map. These data will be fundamental information for forest fire research and will support policy makers.
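A minimal sketch of the model form described above, a simple regression of 10-h FMC on EMC with the reported skill scores (R², RMSE, bias), is given below; the paired observations are hypothetical stand-ins for the AMOS-derived data.

```python
import numpy as np
from scipy import stats

# Hypothetical paired observations: EMC (%) computed from AMOS
# temperature/humidity records, and in-situ measured 10-h FMC (%).
emc = np.array([8.2, 10.5, 12.1, 15.3, 18.0, 21.4, 9.7, 13.6])
fmc = np.array([9.0, 11.2, 12.8, 16.1, 19.5, 22.0, 10.1, 14.4])

# Simple regression of 10-h FMC on EMC, the model form used above.
fit = stats.linregress(emc, fmc)
pred = fit.intercept + fit.slope * emc

rmse = np.sqrt(np.mean((pred - fmc) ** 2))
bias = np.mean(pred - fmc)
print(f"10-h FMC = {fit.slope:.3f} * EMC + {fit.intercept:.3f}")
print(f"R^2 = {fit.rvalue**2:.2f}, RMSE = {rmse:.2f}%, bias = {bias:.2f}%")
```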

Application of deep learning method for decision making support of dam release operation (댐 방류 의사결정지원을 위한 딥러닝 기법의 적용성 평가)

  • Jung, Sungho;Le, Xuan Hien;Kim, Yeonsu;Choi, Hyungu;Lee, Giha
    • Journal of Korea Water Resources Association
    • /
    • v.54 no.spc1
    • /
    • pp.1095-1105
    • /
    • 2021
  • More advanced dam operation is required to cope with the rainy season, typhoons, and torrential rains. Moreover, physical models based on specific rules can be limited in controlling the release discharge of a dam because of inherent uncertainty and complex factors. This study aims to forecast the water level of the station nearest to the dam multiple timesteps ahead and to evaluate its usefulness for deciding the dam's release discharge, based on LSTM (Long Short-Term Memory) deep learning. The LSTM model was trained and tested on eight data sets with a 1-hour temporal resolution, including primary data used in dam operation and downstream water level station data covering about 13 years (2009~2021). The trained model forecasted the water level time series for six lead times (1, 3, 6, 9, 12, and 18 hours), and the forecasts were compared and analyzed against the observed data. As a result, the 1-hour-ahead predictions exhibited the best performance for all cases, with an average MAE of 0.01 m, RMSE of 0.015 m, and NSE of 0.99. In addition, as the lead time increases, the predictive performance of the model tends to decrease slightly. The model reliably reproduces the temporal pattern of the observed water level. Thus, it is judged that the LSTM model can produce predictive data by extracting the characteristics of complex, non-linear hydrological data, and can be used to determine the release discharge from the dam when simulating dam operation.
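A minimal sketch of an LSTM configured along the lines described above (hourly input sequences, one model per lead time) is shown below; the sequence length, feature count, and training arrays are hypothetical, since the abstract does not report the exact architecture or hyperparameters.

```python
import numpy as np
import tensorflow as tf

# Hypothetical dimensions: 24-hour input windows of 8 features (dam
# operation records plus downstream water level stations), predicting
# the water level at one chosen lead time; one model per lead time.
SEQ_LEN, N_FEATURES = 24, 8

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(SEQ_LEN, N_FEATURES)),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(1),  # water level at the lead time (m)
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])

# Dummy arrays standing in for the 13-year hourly record (2009~2021).
X = np.random.rand(1000, SEQ_LEN, N_FEATURES).astype("float32")
y = np.random.rand(1000, 1).astype("float32")
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
```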

Development of Cloud Detection Method Considering Radiometric Characteristics of Satellite Imagery (위성영상의 방사적 특성을 고려한 구름 탐지 방법 개발)

  • Won-Woo Seo;Hongki Kang;Wansang Yoon;Pyung-Chae Lim;Sooahm Rhee;Taejung Kim
    • Korean Journal of Remote Sensing
    • /
    • v.39 no.6_1
    • /
    • pp.1211-1224
    • /
    • 2023
  • Clouds cause many difficult problems in observing land surface phenomena using optical satellites, such as national land observation, disaster response, and change detection. In addition, the presence of clouds affects not only the image processing stage but also the final data quality, so it is necessary to identify and remove them. Therefore, in this study, we developed a new cloud detection technique that automatically performs a series of processes: searching for and extracting the pixels closest to the spectral pattern of clouds in a satellite image, selecting the optimal threshold, and producing a cloud mask based on that threshold. The technique consists of three main steps. In the first step, the Digital Number (DN) image was converted into top-of-atmosphere (TOA) reflectance. In the second step, preprocessing such as Hue-Saturation-Value (HSV) transformation, triangle thresholding, and maximum likelihood classification was applied to the TOA reflectance image, and the threshold for generating the initial cloud mask was determined for each image. In the third step (post-processing), noise in the initial cloud mask was removed and the cloud boundaries and interiors were refined. As experimental data for cloud detection, CAS500-1 L2G images acquired over the Korean Peninsula from April to November, which show the diversity of the spatial and seasonal distribution of clouds, were used. To verify the performance of the proposed method, its results were compared with those generated by a simple thresholding method. In the experiment, the proposed method detected clouds more accurately than the existing method by considering the radiometric characteristics of each image through the preprocessing step. In addition, the results showed that the influence of bright objects other than clouds (panel roofs, concrete roads, sand, etc.) was minimized. The proposed method improved the F1-score by more than 30% over the existing method, but showed limitations in certain images containing snow.
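The first two steps, DN-to-TOA conversion and image-specific triangle thresholding, can be sketched as follows. The calibration coefficients (gain, offset, ESUN), Earth-Sun distance, and sun elevation are placeholders rather than CAS500-1 values, and the HSV transformation and maximum likelihood classification steps are omitted.

```python
import numpy as np
import cv2

def dn_to_toa_reflectance(dn, gain, offset, esun, d_au, sun_elev_deg):
    """DN -> at-sensor radiance -> top-of-atmosphere reflectance.
    gain/offset/esun are sensor calibration values (placeholders here);
    d_au is the Earth-Sun distance in AU, sun_elev_deg the sun elevation."""
    radiance = gain * dn.astype(np.float64) + offset
    theta_s = np.deg2rad(90.0 - sun_elev_deg)  # solar zenith angle
    return np.pi * radiance * d_au**2 / (esun * np.cos(theta_s))

dn = cv2.imread("scene_band.tif", cv2.IMREAD_UNCHANGED)
toa = dn_to_toa_reflectance(dn, gain=0.01, offset=0.0,
                            esun=1850.0, d_au=1.0, sun_elev_deg=45.0)

# Triangle thresholding on the rescaled reflectance histogram picks an
# image-specific threshold for the initial cloud mask.
toa8 = cv2.normalize(toa, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
thr, mask = cv2.threshold(toa8, 0, 255,
                          cv2.THRESH_BINARY + cv2.THRESH_TRIANGLE)
print(f"triangle threshold = {thr}")
```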

Building Change Detection Methodology in Urban Area from Single Satellite Image (단일위성영상 기반 도심지 건물변화탐지 방안)

  • Seunghee Kim;Taejung Kim
    • Korean Journal of Remote Sensing
    • /
    • v.39 no.5_4
    • /
    • pp.1097-1109
    • /
    • 2023
  • Urban areas undergo frequent small-scale changes to individual buildings, so an existing urban building database requires periodic updating to remain usable. However, there are limits to collecting data on building changes over a wide urban area. In this study, we examine the possibility of detecting building changes and updating a building database using satellite images, which can capture a wide urban region in a single image. For this purpose, building areas in a satellite image are first extracted by projecting the 3D coordinates of building corners, available in the building database, onto the image. The building areas are then divided into roof and facade areas. By comparing the textures of the projected roof areas, building changes such as height change or building removal can be detected. New height values are estimated by adjusting building heights until the projected roofs align with the actual roofs observed in the image. If a building is projected onto the image but no building is observed there, it corresponds to a demolished building. New buildings are identified by checking buildings in the original image whose roof and facade areas are not covered by any projection. Based on these results, the building database is updated in three categories: height update, building deletion, or new building creation. This method was tested with a KOMPSAT-3A image over Incheon Metropolitan City and the publicly available Incheon building database. Building change detection and the database update were carried out, and the updated building corners were then projected onto another KOMPSAT-3 image. It was confirmed that the building areas projected using the updated building information agreed very well with the actual buildings in the image. Through this study, the possibility of semi-automatic building change detection and building database updating based on a single satellite image was confirmed. In the future, follow-up research is needed on techniques to further automate the computation in the proposed method.
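The abstract does not state which texture similarity measure is used for the roof comparison; normalized cross-correlation (NCC) is one plausible stand-in, sketched below. The height-search usage at the end is purely illustrative pseudocode in comments.

```python
import numpy as np

def ncc(patch_a, patch_b):
    """Normalized cross-correlation between two equally sized grayscale
    patches; values near 1 mean the projected roof still matches the
    roof observed in the image."""
    a = patch_a.astype(np.float64).ravel()
    b = patch_b.astype(np.float64).ravel()
    a = (a - a.mean()) / (a.std() + 1e-9)
    b = (b - b.mean()) / (b.std() + 1e-9)
    return float(np.mean(a * b))

# Illustrative height search (pseudocode): re-project the roof polygon
# at each candidate height, sample the image texture, and keep the
# height whose projected roof best matches the observed one.
# best_h = max(candidate_heights,
#              key=lambda h: ncc(observed_roof, sample(project_roof(h))))
```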

Pipetting Stability and Improvement Test of the Robotic Liquid Handling System Depending on Types of Liquid (용액에 따른 자동분주기의 분주능력 평가와 분주력 향상 실험)

  • Back, Hyangmi;Kim, Youngsan;Yun, Sunhee;Heo, Uisung;Kim, Hosin;Ryu, Hyeonggi;Lee, Guiwon
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.20 no.2
    • /
    • pp.62-68
    • /
    • 2016
  • Purpose: A cyclosporine assay using a robotic liquid handling system showed deviations in its standard curve and low reproducibility of patient results. This assay differs from others in that methanol is mixed with the samples and the extracts are used for the test. We therefore assumed that the abnormal results came from the use of methanol and conducted this study. The manual of the robotic liquid handling system states that several setting parameters can be chosen depending on the viscosity of the liquid, the size of the sampling tips, and the motor speeds, but gives no exact guidance. This study was undertaken to assess pipetting ability depending on the type of liquid and to investigate the setting parameters for optimum dispensing. Materials and Methods: Four types of liquid (water, serum, methanol, and PEG 6000 (25%)) and ¹²⁵I-TSH tracer (515 kBq) were used to assess pipetting ability, and 29 specimens for the cyclosporine test were used to compare results. Eight plastic tubes were prepared for each liquid; 400 μL of each liquid was dispensed into its eight tubes with a multi-pipette, and 100 μL of ¹²⁵I-TSH tracer was added to every tube. From the prepared samples, 100 μL of liquid was dispensed by the robotic liquid handling system and counted, and the CV (%) was calculated for each liquid type. Then, by adjusting several setting parameters (air gap, dispense time, delay time), the change in CV (%) was calculated to find the optimum settings. The 29 specimens were tested with three methods: (A) the manual method, (B) the robotic liquid handling system with the existing parameters, and (C) the robotic liquid handling system with the adjusted parameters. Pipetting ability per liquid type was assessed with the CV (%). Taking (A) as the reference, patient results were compared between (A) and (B) and between (A) and (C), and assessed with %RE (relative error) and %Diff (difference). Results: The CV (%) of the CPM by liquid type was 0.88 for water, 0.95 for serum, 10.22 for methanol, and 0.68 for PEG. As expected, dispensing methanol with the liquid handling system was the problem; the other liquids were dispensed well. Methanol dispensing was then retested while adjusting the setting parameters. When the transport air gap was changed from 0 to 2 and to 5, the CV (%) was 20.16 and 12.54, respectively; when the system air gap was changed from 0 to 2 and to 5, the CV (%) was 8.94 and 1.36. With system air gap 2 and transport air gap 2 the CV (%) was 12.96, and with system air gap 5 and transport air gap 5 it was 1.33. When the dispense speed was reduced from 300 to 100 the CV (%) was 13.32, and when the dispense delay was reduced from 200 to 100 it was 13.55. Compared to (A), the results of (B) increased by 99.44% with a %RE of 93.59%; the results of (C) (system air gap adjusted from 0 to 5) increased by 6.75% with a %RE of 5.10%. Conclusion: Adjusting the speed and delay time of aspiration and dispensing had no meaningful effect, but changing the system air gap was effective. Proper values were found by adjusting several parameters, and this affected the practical results of the assay. Active testing is needed to optimize the system, and whenever a new type of liquid is to be dispensed, a suitability test should be performed to check that the liquid can be handled by the equipment.
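The CV (%) used throughout these results is the sample standard deviation of the replicate CPM counts divided by their mean. A minimal sketch, with hypothetical counts standing in for the eight replicate tubes:

```python
import numpy as np

def cv_percent(counts):
    """CV (%) of replicate CPM counts: sample standard deviation
    divided by the mean, used here to grade pipetting precision."""
    counts = np.asarray(counts, dtype=float)
    return counts.std(ddof=1) / counts.mean() * 100.0

# Hypothetical CPM counts from the 8 replicate tubes of one liquid.
methanol_cpm = [5210, 4120, 4875, 6033, 5522, 3987, 5308, 4690]
print(f"CV = {cv_percent(methanol_cpm):.2f}%")
```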


Design and Implementation of MongoDB-based Unstructured Log Processing System over Cloud Computing Environment (클라우드 환경에서 MongoDB 기반의 비정형 로그 처리 시스템 설계 및 구현)

  • Kim, Myoungjin;Han, Seungho;Cui, Yun;Lee, Hanku
    • Journal of Internet Computing and Services
    • /
    • v.14 no.6
    • /
    • pp.71-84
    • /
    • 2013
  • Log data, which record the multitude of information created while operating computer systems, are utilized in many processes, from computer system inspection and process optimization to customized user services. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for processing the massive log data of banks. Most of the log data generated during banking operations come from handling clients' business. Therefore, in order to gather, store, categorize, and analyze the log data generated while processing a client's business, a separate log data processing system needs to be established. However, in existing computing environments it is difficult to realize flexible storage expansion for a massive amount of unstructured log data and to execute the considerable number of functions required to categorize and analyze it. Thus, in this study, we use cloud computing technology to realize a cloud-based log data processing system for unstructured log data that are difficult to process with the existing infrastructure's analysis tools and management systems. The proposed system uses an IaaS (Infrastructure as a Service) cloud environment to provide flexible expansion of computing resources, such as storage space and memory, under conditions such as extended storage or a rapid increase in log data. Moreover, to overcome the processing limits of existing analysis tools when real-time analysis of the aggregated unstructured log data is required, the proposed system includes a Hadoop-based analysis module for quick and reliable parallel-distributed processing of the massive amount of log data. Furthermore, because the HDFS (Hadoop Distributed File System) stores data by replicating blocks of the aggregated log data, the proposed system offers automatic restore functions so that it continues operating after recovering from a malfunction. Finally, by establishing a distributed database using the NoSQL-based MongoDB, the proposed system provides methods of effectively processing unstructured log data. Relational databases such as MySQL have strict schemas that are inappropriate for processing unstructured log data, and they cannot easily expand across nodes when rapidly growing data must be distributed. NoSQL does not provide the complex computations that relational databases offer, but it can easily expand through node dispersion when the amount of data increases rapidly; it is a non-relational database with a structure appropriate for processing unstructured data. NoSQL data models are usually classified as key-value, column-oriented, and document-oriented types. Of these, the representative document-oriented database MongoDB, which has a free schema structure, is used in the proposed system. MongoDB was chosen because its flexible schema makes it easy to process unstructured log data, it facilitates node expansion when the amount of data grows rapidly, and it provides an auto-sharding function that automatically expands storage.
The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module. When the log data generated over each bank's entire client business process are sent to the cloud server, the log collector module collects and classifies the data according to log type and distributes them to the MongoDB module and the MySQL module. The log graph generator module generates the log analysis results of the MongoDB module, the Hadoop-based analysis module, and the MySQL module per analysis time and type of aggregated log data, and provides them to the user through a web interface. Log data that require real-time analysis are stored in the MySQL module and provided in real time by the log graph generator module. The log data aggregated per unit time are stored in the MongoDB module and plotted in graphs according to the user's various analysis conditions; they are also parallel-distributed and processed by the Hadoop-based analysis module. A comparative evaluation of log-insert and query performance against a system that uses only MySQL demonstrates the proposed system's superiority, and an optimal chunk size is identified through MongoDB insert-performance tests over various chunk sizes.
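A minimal sketch of the MongoDB module's schema-free ingestion with pymongo is shown below; the connection string, database/collection names, and log fields are placeholders, not the paper's actual configuration.

```python
from datetime import datetime, timezone
from pymongo import MongoClient

# Connection string and names are placeholders for illustration.
client = MongoClient("mongodb://localhost:27017")
logs = client["bank_logs"]["transactions"]

# Schema-free insert: each record carries whatever fields the
# originating banking subsystem produced.
logs.insert_one({
    "ts": datetime.now(timezone.utc),
    "branch": "0012",               # hypothetical field names
    "event": "withdrawal",
    "amount": 150000,
    "raw": "2013-07-01 10:02:11 ATM-7 OK",
})

# Aggregate log counts per event type, e.g. for the graph generator.
for row in logs.aggregate([{"$group": {"_id": "$event", "n": {"$sum": 1}}}]):
    print(row)
```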

A Study on Market Size Estimation Method by Product Group Using Word2Vec Algorithm (Word2Vec을 활용한 제품군별 시장규모 추정 방법에 관한 연구)

  • Jung, Ye Lim;Kim, Ji Hui;Yoo, Hyoung Sun
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.1
    • /
    • pp.1-21
    • /
    • 2020
  • With the rapid development of artificial intelligence technology, various techniques have been developed to extract meaningful information from unstructured text data, which constitute a large portion of big data. Over the past decades, text mining technologies have been utilized in various industries for practical applications. In the field of business intelligence, they have been employed to discover new market and/or technology opportunities and to support rational decision making. Market information such as market size, market growth rate, and market share is essential for setting companies' business strategies, and there has been continuous demand in various fields for product-level market information. However, such information has generally been provided at the industry level or in broad categories based on classification standards, making it difficult to obtain specific and appropriate figures. In this regard, we propose a new methodology that can estimate the market sizes of product groups at more detailed levels than previously offered. We applied the Word2Vec algorithm, a neural-network-based semantic word embedding model, to enable automatic market size estimation from individual companies' product information in a bottom-up manner. The overall process is as follows. First, the data related to product information are collected, refined, and restructured into a form suitable for the Word2Vec model. Next, the preprocessed data are embedded into a vector space by Word2Vec, and product groups are derived by extracting similar product names based on cosine similarity. Finally, the sales data on the extracted products are summed to estimate the market size of each product group. As experimental data, product-name text from Statistics Korea's microdata (345,103 cases) was mapped into a multidimensional vector space by Word2Vec training. We optimized the training parameters and then applied a vector dimension of 300 and a window size of 15 in the subsequent experiments. We employed the index words of the Korean Standard Industry Classification (KSIC) as a product-name dataset to cluster product groups more efficiently; product names similar to the KSIC indexes were extracted based on cosine similarity. The market size of the extracted products, treated as one product category, was calculated from individual companies' sales data. The market sizes of 11,654 specific product lines were automatically estimated by the proposed model. For performance verification, the results were compared with the actual market sizes of some items; the Pearson correlation coefficient was 0.513. Our approach has several advantages over previous studies. First, text mining and machine learning techniques were applied to market size estimation for the first time, overcoming the limitations of traditional methods that rely on sampling or multiple assumptions. In addition, the level of market category can be easily and efficiently adjusted to the purpose of the information by changing the cosine similarity threshold. Furthermore, the approach has high potential for practical application, since it can resolve unmet needs for detailed market size information in the public and private sectors.
Specifically, it can be utilized in the technology evaluation and technology commercialization support programs conducted by governmental institutions, as well as in business strategy consulting and market analysis reports published by private firms. The limitation of our study is that the presented model needs to be improved in terms of accuracy and reliability. The semantic word embedding module could be advanced by imposing a proper ordering on the preprocessed dataset or by combining another measure, such as Jaccard similarity, with Word2Vec. Also, the product group clustering could be replaced with other types of unsupervised machine learning algorithms. Our group is currently working on subsequent studies, and we expect them to further improve the performance of the basic model conceptually proposed in this study.
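A minimal sketch of the embedding-and-grouping core with gensim is shown below, using the parameters reported above (vector dimension 300, window size 15); the toy corpus, sales figures, and similarity threshold are hypothetical.

```python
from gensim.models import Word2Vec

# Toy corpus of tokenized product names, standing in for the 345,103
# product-name records from Statistics Korea's microdata.
corpus = [
    ["stainless", "steel", "pipe"],
    ["steel", "pipe", "fitting"],
    ["copper", "pipe"],
    ["plastic", "water", "pipe"],
]

# Parameters reported in the paper: 300-dim vectors, window size 15.
model = Word2Vec(corpus, vector_size=300, window=15, min_count=1, sg=1)

# Group products around a seed term by cosine similarity; the threshold
# controls how broad the product category is (hypothetical value).
THRESHOLD = 0.5
group = ["pipe"] + [w for w, sim in model.wv.most_similar("pipe", topn=10)
                    if sim >= THRESHOLD]

# Sum the (hypothetical) sales of the matched products to get the
# market size of this product group.
sales = {"pipe": 310.0, "steel": 120.0, "fitting": 45.0, "copper": 60.0}
print(group, sum(sales.get(w, 0.0) for w in group))
```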