• Title/Summary/Keyword: Convergence speed


Detection of flash drought using evaporative stress index in South Korea (증발스트레스지수를 활용한 국내 돌발가뭄 감지)

  • Lee, Hee-Jin;Nam, Won-Ho;Yoon, Dong-Hyun;Svoboda, Mark D.;Wardlow, Brian D.
    • Journal of Korea Water Resources Association / v.54 no.8 / pp.577-587 / 2021
  • Drought is generally considered a natural disaster caused by water shortages accumulated over a long period, developing slowly over months or years. However, climate change has brought rapid changes in the weather and environmental factors that directly affect agriculture, and extreme weather conditions have increased the frequency of droughts that develop rapidly within weeks to months. This phenomenon is defined as 'flash drought', caused by an increase in surface temperature over a relatively short period together with abnormally low and rapidly decreasing soil moisture. Detecting and analyzing flash drought is essential because it significantly affects agriculture and natural ecosystems, and its impacts are closely associated with agricultural drought. Since South Korea has no clear definition of flash drought, the purpose of this study is to detect flash droughts and analyze their characteristics. A flash drought detection condition was formulated based on the satellite-derived Evaporative Stress Index (ESI) for 2014 to 2018. ESI serves as an early warning indicator of rapidly developing flash drought because it responds to reduced soil moisture, lack of precipitation, and increased evaporative demand driven by low humidity, high temperature, and strong winds. The detected flash droughts were characterized hydrometeorologically by comparison with the Standardized Precipitation Index (SPI), soil moisture, maximum temperature, relative humidity, wind speed, and precipitation. Correlations were computed over the eight weeks preceding each flash drought; in most cases, ESI showed strong correlations (0.8 or higher in absolute value) with SPI, soil moisture, and maximum temperature.
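
The abstract does not give the study's exact detection condition, so the sketch below only illustrates the general idea of flagging a rapid, sustained ESI decline; the window length and thresholds are illustrative assumptions, not the paper's values.

```python
import numpy as np

def detect_flash_drought(esi, drop_weeks=4, drop_threshold=-0.7, floor=-1.0):
    """Flag weeks where ESI declined sharply over `drop_weeks` weeks
    and also fell below `floor`. All thresholds are assumed, not the
    paper's actual detection condition."""
    esi = np.asarray(esi, dtype=float)
    flags = np.zeros(esi.shape, dtype=bool)
    for t in range(drop_weeks, len(esi)):
        decline = esi[t] - esi[t - drop_weeks]
        flags[t] = decline <= drop_threshold and esi[t] <= floor
    return flags

# Synthetic weekly ESI series with an abrupt mid-season drying episode.
weeks = np.arange(30)
esi = 0.3 * np.sin(weeks / 5.0)
esi[15:22] -= np.linspace(0.0, 1.8, 7)  # rapid decline over 7 weeks
print(np.where(detect_flash_drought(esi))[0])  # flagged week indices
```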

An Outlier Detection Using Autoencoder for Ocean Observation Data (해양 이상 자료 탐지를 위한 오토인코더 활용 기법 최적화 연구)

  • Kim, Hyeon-Jae;Kim, Dong-Hoon;Lim, Chaewook;Shin, Yongtak;Lee, Sang-Chul;Choi, Youngjin;Woo, Seung-Buhm
    • Journal of Korean Society of Coastal and Ocean Engineers / v.33 no.6 / pp.265-274 / 2021
  • Outlier detection in ocean data has traditionally been performed with statistical and distance-based machine learning algorithms. Recently, AI-based methods have received much attention, mainly so-called supervised learning methods that require classification information for the data. Supervised learning requires considerable time and cost because classification information (labels) must be assigned manually to all training data. In this study, an autoencoder based on unsupervised learning was applied to outlier detection to overcome this problem. Two experiments were designed: univariate learning, using only the SST data from the Deokjeok Island observations, and multivariate learning, using SST, air temperature, wind direction, wind speed, air pressure, and humidity. The data span 25 years, from 1996 to 2020, and were pre-processed in a way that accounts for the characteristics of ocean data. Outlier detection on the real SST data was then attempted with the trained univariate and multivariate autoencoders. To compare model performance, various outlier detection methods were applied to synthetic data with artificially inserted errors. Quantitative evaluation showed multivariate and univariate accuracies of about 96% and 91%, respectively, indicating that the multivariate autoencoder detects outliers better. Outlier detection with an unsupervised autoencoder is expected to find wide use, since it can reduce subjective classification errors as well as the cost and time required for data labeling.
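
As a rough illustration of this approach, the sketch below trains a small multivariate autoencoder on standardized observations and flags points with unusually high reconstruction error. The architecture, the synthetic stand-in data, and the 3-sigma threshold are assumptions for illustration, not the authors' exact setup.

```python
import numpy as np
from tensorflow import keras

# Hypothetical stand-in for standardized observations: columns would be
# [SST, air temperature, wind direction, wind speed, pressure, humidity].
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 6)).astype("float32")

# Small symmetric autoencoder; layer sizes are illustrative only.
model = keras.Sequential([
    keras.layers.Input(shape=(6,)),
    keras.layers.Dense(4, activation="relu"),
    keras.layers.Dense(2, activation="relu"),   # bottleneck
    keras.layers.Dense(4, activation="relu"),
    keras.layers.Dense(6, activation="linear"),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, X, epochs=5, batch_size=64, verbose=0)

# Flag points whose reconstruction error is far above the typical error.
errors = np.mean((model.predict(X, verbose=0) - X) ** 2, axis=1)
threshold = errors.mean() + 3 * errors.std()    # assumed 3-sigma rule
print(np.sum(errors > threshold), "suspected outliers")
```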

A Relative Study of 3D Digital Record Results on Buried Cultural Properties (매장문화재 자료에 대한 3D 디지털 기록 결과 비교연구)

  • KIM, Soohyun;LEE, Seungyeon;LEE, Jeongwon;AHN, Hyoungki
    • Korean Journal of Heritage: History & Science / v.55 no.1 / pp.175-198 / 2022
  • With the development of technology, methods for digitally converting various forms of analog information have become common. As a result, concepts such as digital heritage and digital reconstruction, which record, build, and reproduce data in a virtual space, have been actively used in the preservation and study of cultural heritage. However, few existing studies suggest optimal scanners for small and medium-sized relics, and because scanners are expensive for researchers, related studies remain scarce. A 3D scanner's specifications strongly influence the quality of the resulting 3D model. In particular, since the light reflected from an object's surface varies with the type of light source the scanner uses, choosing a scanner suited to the object's characteristics is the way to work efficiently. This paper therefore examined quality differences among four types of 3D scanners on nine small and medium-sized buried cultural properties of various materials, including earthenware and porcelain, organized by period. The study found optical scanners and small/medium-object scanners the most suitable for digitally recording small and medium-sized relics. Optical scanners excel in both mesh and texture but are very expensive and not portable. Handheld scanners offer excellent portability and speed. Considering results relative to price, the small/medium-object scanner was best, while photogrammetry obtained a 3D model at the lowest cost. 3D scanning technology can be used to produce digital drawings of relics, to restore and duplicate cultural properties, and to build databases. This study contributes by matching the most suitable scanner to buried cultural properties by material and period, supporting the active use of 3D scanning in cultural heritage.

Case Analysis on Platform Business Models for IT Service Planning (IT서비스 기획을 위한 플랫폼 비즈니스 모델 사례 분석연구)

  • Kim, Hyun Ji;Cha, Yun So;Kim, Kyung Hoon
    • Korea Science and Art Forum / v.25 / pp.103-118 / 2016
  • Due to the rapid development of ICT, corporate business models are changing quickly, and the radical growth of IT technology has made sequential or gradual survival difficult. Internet-based new businesses such as IT service companies are seeking convergence business models that did not exist before in order to become more competitive, while the economic efficiency of business models that were successful in the past is wearing off. Because the critical point at which a platform's value becomes extremely high is now reached much faster than before for Internet-based platforms, platformization has become a very important condition for rapid business expansion in all kinds of businesses. This study analyzes the necessity of establishing platform business models in IT service planning and identifies their characteristics through case analyses of platform business models. Four characteristics were derived. First, a platform needs to secure a sufficient base of buyers and sellers. Second, it should provide customers with distinctive value that only the platform can generate. Third, common interests must exist between the platform-driving company and its partners and participants. Fourth, the platform must remain continuously scalable and evolve sustainably by expanding its participant base, upgrading, and extending into adjacent areas. These identified characteristics are expected to strongly influence the establishment of platform business models and the shaping of service planning. We also hope this study serves as a starting point for theories of profit models for platform businesses, which were not covered here, so that planners responsible for platform-based IT service planning can spend less time and produce more ambitious planning drafts.

Evaluating Reverse Logistics Networks with Centralized Centers: Hybrid Genetic Algorithm Approach (집중형센터를 가진 역물류네트워크 평가 : 혼합형 유전알고리즘 접근법)

  • Yun, YoungSu
    • Journal of Intelligence and Information Systems / v.19 no.4 / pp.55-79 / 2013
  • In this paper, we propose a hybrid genetic algorithm (HGA) approach to effectively solve the reverse logistics network with centralized centers (RLNCC). The proposed HGA uses a genetic algorithm (GA) as its main algorithm. For the GA, a new bit-string representation scheme using 0 and 1 values is suggested, which makes it easy to construct the initial population. As genetic operators, the elitist strategy in enlarged sampling space developed by Gen and Chang (1997), a new two-point crossover operator, and a new random mutation operator are used for selection, crossover, and mutation, respectively. For the hybrid element, the iterative hill climbing method (IHCM) developed by Michalewicz (1994) is inserted into the HGA search loop. The IHCM is a local search technique that precisely explores the region to which the GA search has converged. The RLNCC is composed of collection centers, remanufacturing centers, redistribution centers, and secondary markets; of these, only one collection center, one remanufacturing center, one redistribution center, and one secondary market should be opened in the network. Some assumptions are made for effectively implementing the RLNCC. The RLNCC is represented by a mixed integer programming (MIP) model using indexes, parameters, and decision variables. The objective of the MIP model is to minimize the total cost, which consists of transportation cost, fixed cost, and handling cost. The transportation cost is incurred by moving the returned products between the centers and secondary markets. The fixed cost is determined by the opening or closing decision at each center and secondary market: for example, if three collection centers have opening costs of 10.5, 12.1, and 8.9 for centers 1, 2, and 3 respectively, and only collection center 1 is opened, the fixed cost is 10.5. The handling cost is the cost of treating the products returned from customers at each opened center and secondary market at each RLNCC stage. The RLNCC is solved by the proposed HGA approach. In numerical experiments, the proposed HGA and a conventional competing approach, the GA approach of Yun (2013), are compared using several performance measures; the GA approach has no local search technique such as the IHCM used in the proposed HGA. The performance measures are CPU time, optimal solution, and optimal setting. Two types of the RLNCC, with different numbers of customers, collection centers, remanufacturing centers, redistribution centers, and secondary markets, are used in the comparison. The MIP models for the two RLNCC types were programmed in Visual Basic 6.0 and run on an IBM-compatible PC with a 3.06 GHz CPU and 1 GB RAM under Windows XP. The parameters of the HGA and GA approaches are: 10,000 generations in total, population size 20, crossover rate 0.5, mutation rate 0.1, and an IHCM search range of 2.0. In total, 20 runs were made to average out the randomness of the HGA and GA searches.
Based on the performance comparisons, network representations by opening/closing decision, and convergence processes for the two RLNCC types, the experiments show that the HGA achieves significantly better optimal solutions than the GA, though the GA is slightly quicker in CPU time. Finally, the proposed HGA approach proved more efficient than the conventional GA approach on both RLNCC types, since the former combines a GA search process with an additional local search process, while the latter relies on GA search alone. In future work, much larger RLNCCs will be tested to assess the robustness of the approach.
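
As a generic illustration of the hybrid loop described above, the sketch below combines elitist selection, two-point crossover, random mutation (rate 0.1, as in the paper), and a simple hill-climbing refinement on a toy fixed-cost objective in which exactly one center should be opened. The cost function and the particular elitism scheme are illustrative assumptions; the sketch does not reproduce the paper's MIP model or the IHCM.

```python
import random

random.seed(42)

# Toy objective: fixed opening costs of six candidate centers; exactly
# one center should be opened (values are illustrative, not the paper's).
FIXED_COST = [10.5, 12.1, 8.9, 11.3, 9.7, 13.0]
PENALTY = 100.0

def cost(bits):
    opened = sum(f for f, b in zip(FIXED_COST, bits) if b)
    return opened + PENALTY * abs(sum(bits) - 1)  # penalize infeasibility

def two_point_crossover(a, b):
    i, j = sorted(random.sample(range(len(a)), 2))
    return a[:i] + b[i:j] + a[j:]

def mutate(bits, rate=0.1):  # mutation rate 0.1, as in the paper
    return [1 - v if random.random() < rate else v for v in bits]

def hill_climb(bits, tries=10):
    # Stand-in for the IHCM: flip single bits, keep improvements.
    best = bits[:]
    for _ in range(tries):
        cand = best[:]
        k = random.randrange(len(cand))
        cand[k] = 1 - cand[k]
        if cost(cand) < cost(best):
            best = cand
    return best

pop = [[random.randint(0, 1) for _ in FIXED_COST] for _ in range(20)]
for _ in range(200):
    pop.sort(key=cost)
    elite = pop[:10]                      # elitist selection (assumed scheme)
    children = []
    while len(children) < 10:
        p1, p2 = random.sample(elite, 2)
        children.append(mutate(two_point_crossover(p1, p2)))
    pop = elite + children
    pop[0] = hill_climb(pop[0])           # hybrid step: refine the best

best = min(pop, key=cost)
print(best, cost(best))  # expect [0, 0, 1, 0, 0, 0] with cost 8.9
```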