• Title/Summary/Keyword: Division Algorithm


Application of Machine Learning Algorithm and Remote-sensed Data to Estimate Forest Gross Primary Production at Multi-sites Level (산림 총일차생산량 예측의 공간적 확장을 위한 인공위성 자료와 기계학습 알고리즘의 활용)

  • Lee, Bora;Kim, Eunsook;Lim, Jong-Hwan;Kang, Minseok;Kim, Joon
    • Korean Journal of Remote Sensing / v.35 no.6_2 / pp.1117-1132 / 2019
  • Forests cover 30% of the Earth's land area and play an important role in the global carbon flux through their ability to store much greater amounts of carbon than other terrestrial ecosystems. Gross Primary Production (GPP) represents the productivity of forest ecosystems and reflects the effects of climate change on phenology, forest health, and the carbon cycle. In this study, we estimated daily GPP for a forest ecosystem using remote-sensed data from the Moderate Resolution Imaging Spectroradiometer (MODIS) and a machine learning algorithm, the Support Vector Machine (SVM). MODIS products were used to train the SVM model on 75-80% of the data from the total study period, and the model was validated against eddy covariance (EC) measurements at six flux tower sites. We also compared the GPP derived from EC with the MODIS GPP product (MYD17). Two MODIS data sets were used: a Processed MODIS set that included variables calculated by combining products (e.g., Vapor Pressure Deficit), and an Unprocessed MODIS set that used MODIS products without any combined calculation. Statistical measures, including the Pearson correlation coefficient (R), mean squared error (MSE), and root mean square error (RMSE), were used to evaluate the model. In general, the SVM model trained on Unprocessed MODIS data from multiple sites (R = 0.77-0.94, p < 0.001) outperformed models trained at a single site (R = 0.75-0.95, p < 0.001). These results show that models trained on data covering a wider variety of conditions perform better, and suggest that remote-sensed data can be used without complex preprocessing to estimate GPP, including for non-stationary ecological processes.
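The evaluation metrics named in this abstract are simple to state in code. As a minimal sketch (the data values here are invented, not the study's), Pearson's R and RMSE in pure Python:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient (R) between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def rmse(pred, obs):
    """Root mean square error between predicted and observed values."""
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(pred))

# Invented values standing in for flux-tower (EC) GPP vs. model-estimated GPP
ec_gpp  = [2.1, 3.4, 5.0, 6.2, 4.8]
svm_gpp = [2.0, 3.6, 4.7, 6.5, 4.9]
r_value = pearson_r(svm_gpp, ec_gpp)
error   = rmse(svm_gpp, ec_gpp)
```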

A Study on the Retrieval of River Turbidity Based on KOMPSAT-3/3A Images (KOMPSAT-3/3A 영상 기반 하천의 탁도 산출 연구)

  • Kim, Dahui;Won, You Jun;Han, Sangmyung;Han, Hyangsun
    • Korean Journal of Remote Sensing / v.38 no.6_1 / pp.1285-1300 / 2022
  • Turbidity, a measure of the cloudiness of water, is used as an important index for water quality management. Turbidity can vary greatly in small river systems, which in turn affects water quality in national rivers; generating high-resolution spatial information on turbidity is therefore very important. In this study, a turbidity retrieval model based on the eXtreme Gradient Boosting (XGBoost) algorithm was developed for high-resolution turbidity mapping of the Han River system using Korea Multi-Purpose Satellite-3 and -3A (KOMPSAT-3/3A) images. To this end, top-of-atmosphere (TOA) spectral reflectance was calculated from a total of 24 KOMPSAT-3/3A images and 150 Landsat-8 images, and the Landsat-8 TOA reflectance was cross-calibrated to the KOMPSAT-3/3A bands. Turbidity measured by the National Water Quality Monitoring Network was used as the reference dataset. The input variables were the TOA spectral reflectance at the locations of the in situ turbidity measurements, spectral indices (the normalized difference vegetation index, normalized difference water index, and normalized difference turbidity index), and Moderate Resolution Imaging Spectroradiometer (MODIS)-derived atmospheric products (atmospheric optical thickness, water vapor, and ozone). Furthermore, by analyzing the KOMPSAT-3/3A TOA spectral reflectance at different turbidity levels, a new spectral index, the new normalized difference turbidity index (nNDTI), was proposed and added as an input variable to the retrieval model. The XGBoost model showed excellent turbidity retrieval performance, with a root mean square error (RMSE) of 2.70 NTU and a normalized RMSE (NRMSE) of 14.70% against the in situ turbidity, and the nNDTI proposed in this study was identified as the most important variable. The developed model was applied to the KOMPSAT-3/3A images to map river turbidity at high resolution, making it possible to analyze spatiotemporal variations in turbidity. This study confirms that KOMPSAT-3/3A images are very useful for retrieving high-resolution, accurate spatial information on river turbidity.
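The abstract does not give the band combination behind the nNDTI, but all the indices it lists (NDVI, NDWI, NDTI, nNDTI) share the same normalized-difference form. A sketch of that generic form, with a hypothetical band choice that is not the paper's definition:

```python
def normalized_difference(band_a, band_b):
    """Generic normalized difference index: (a - b) / (a + b).
    NDVI, NDWI, and NDTI all follow this form; the specific band
    pairing for the paper's nNDTI is not given in the abstract."""
    return (band_a - band_b) / (band_a + band_b)

# Hypothetical TOA reflectances at one pixel (red and green bands, invented)
red, green = 0.12, 0.08
index_value = normalized_difference(red, green)  # bounded in [-1, 1]
```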

Development and Performance Evaluation of Multi-sensor Module for Use in Disaster Sites of Mobile Robot (조사로봇의 재난현장 활용을 위한 다중센서모듈 개발 및 성능평가에 관한 연구)

  • Jung, Yonghan;Hong, Junwooh;Han, Soohee;Shin, Dongyoon;Lim, Eontaek;Kim, Seongsam
    • Korean Journal of Remote Sensing / v.38 no.6_3 / pp.1827-1836 / 2022
  • Disasters occur unexpectedly and are difficult to predict, their scale and damage are increasing compared to the past, and one disaster can sometimes develop into another. Among the four stages of disaster management, search and rescue are carried out in the response stage, when an emergency occurs, so personnel such as firefighters deployed to the scene are exposed to considerable risk. In the initial response at a disaster site, robots are therefore a technology with high potential to reduce damage to human life and property. In addition, Light Detection and Ranging (LiDAR) can acquire 3D information over a relatively wide range using a laser, and its high accuracy and precision make it a very useful sensor given the characteristics of a disaster site. In this study, development and experiments were conducted so that a robot could perform real-time monitoring at a disaster site. A multi-sensor module was developed by combining a LiDAR, an Inertial Measurement Unit (IMU) sensor, and a computing board; this module was mounted on the robot, and a customized Simultaneous Localization and Mapping (SLAM) algorithm was developed. A method for stably mounting the multi-sensor module on the robot to maintain optimal accuracy at disaster sites was also studied. To check the performance of the module, SLAM tests were conducted inside a disaster building, and various SLAM algorithms and distance measurements were compared. As a result, PackSLAM, developed in this study, showed lower error than the other algorithms, demonstrating its potential for application at disaster sites. In future work, to further enhance usability at disaster sites, various experiments will be conducted in a rough-terrain environment with many obstacles.

A Study on the Clustering Method of Row and Multiplex Housing in Seoul Using K-Means Clustering Algorithm and Hedonic Model (K-Means Clustering 알고리즘과 헤도닉 모형을 활용한 서울시 연립·다세대 군집분류 방법에 관한 연구)

  • Kwon, Soonjae;Kim, Seonghyeon;Tak, Onsik;Jeong, Hyeonhee
    • Journal of Intelligence and Information Systems / v.23 no.3 / pp.95-118 / 2017
  • Recently, centered on the downtown area, transactions of row housing and multiplex housing have become active, and platform services such as Zigbang and Dabang are growing. Row and multiplex housing is nevertheless a blind spot for real estate information, and the change in market size, together with the information asymmetry caused by shifting demand, creates a social problem. Moreover, the 5 or 25 districts used by the Seoul Metropolitan Government and the Korea Appraisal Board (hereafter KAB) were established along administrative boundaries and have been used in existing real estate studies, but because they are zoned for urban planning, they are not a suitable district classification for real estate research. Building on previous work, this study found that Seoul's spatial structure needs to be redefined when estimating future housing prices. This study therefore attempted to classify areas without spatial heterogeneity by reflecting the price characteristics of row and multiplex housing. In other words, the simple division by existing administrative districts has been inefficient, so this study aims to cluster Seoul into new areas for more efficient real estate analysis. A hedonic model was applied to real transaction price data for row and multiplex housing, and the K-Means clustering algorithm was used to cluster the spatial structure of Seoul. The data comprised real transaction prices of Seoul row and multiplex housing from January 2014 to December 2016 and the official land value of 2016, provided by the Ministry of Land, Infrastructure and Transport (hereafter MOLIT). Data preprocessing involved removing underground transactions, standardizing price per area, and removing outlier transactions (above 5 and below -5).

Through preprocessing, the data were reduced from 132,707 to 126,759 cases, and the R program was used for analysis. After preprocessing, the data model was constructed: K-Means clustering was performed first, then a regression analysis using the hedonic model, followed by a cosine similarity analysis. Based on the constructed data model, we clustered on the basis of the longitude and latitude of Seoul and compared the result with the existing districts. The results indicated that the goodness of fit of the model was above 75% and that the variables used in the hedonic model were significant; the 5 or 25 existing administrative districts were re-divided into 16 clusters. This study thus derived a clustering method for row and multiplex housing in Seoul using the K-Means clustering algorithm and a hedonic model that reflects price characteristics, and it presents academic and practical implications as well as the study's limitations and directions for future research. Academically, the study clusters by price characteristics to improve on the districts used by the Seoul Metropolitan Government, KAB, and existing real estate research; and, whereas apartments have been the main subject of existing research, it proposes a method of classifying areas in Seoul using public information (i.e., real transaction data from MOLIT) under Government 3.0. Practically, the results can serve as basic data for research on row and multiplex housing, are expected to stimulate such research, and should increase the accuracy of transaction price models. Future research will involve various analyses to overcome the limitations identified here and will require deeper investigation.
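The clustering step described above can be sketched in plain Python. This is a generic k-means on (longitude, latitude) pairs with invented coordinates, not the paper's R implementation or its hedonic-model features:

```python
import math
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain k-means on 2-D points (e.g., longitude/latitude of transactions)."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        # Assignment step: each point joins its nearest center
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda i: math.dist(p, centers[i]))
            clusters[j].append(p)
        # Update step: move each center to its cluster's mean
        new_centers = []
        for j, cl in enumerate(clusters):
            if cl:
                new_centers.append((sum(x for x, _ in cl) / len(cl),
                                    sum(y for _, y in cl) / len(cl)))
            else:
                new_centers.append(centers[j])  # keep empty cluster's center
        if new_centers == centers:
            break  # converged
        centers = new_centers
    return centers, clusters

# Invented coordinates forming two obvious spatial groups
pts = [(0.0, 0.0), (0.0, 1.0), (10.0, 10.0), (10.0, 11.0)]
centers, clusters = kmeans(pts, 2)
```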

Measuring the Public Service Quality Using Process Mining: Focusing on N City's Building Licensing Complaint Service (프로세스 마이닝을 이용한 공공서비스의 품질 측정: N시의 건축 인허가 민원 서비스를 중심으로)

  • Lee, Jung Seung
    • Journal of Intelligence and Information Systems / v.25 no.4 / pp.35-52 / 2019
  • As public services are provided in various forms, including e-government, public demand for service quality is increasing. Although continuous measurement and improvement of the quality of public services are needed, traditional surveys are costly and time-consuming and have limitations; an analytical technique is therefore needed that can measure public service quality quickly and accurately at any time, based on the data the services generate. In this study, we analyzed the quality of public services using process mining techniques on the building licensing complaint service of N city, which was chosen because it can secure the data necessary for analysis and because the approach can spread to other institutions through public service quality management. We conducted process mining on a total of 3,678 building license complaints in N city over the two years from January 2014, and identified the process maps and the departments with high frequency and long processing times. The analysis showed that some departments were crowded at certain points in time while others handled relatively few cases, and there was reasonable ground to suspect that an increase in the number of complaints increases the time required to complete them. Completion times ranged from the same day to one year and 146 days. The cumulative frequency of the top four departments (the Sewage Treatment Division, the Waterworks Division, the Urban Design Division, and the Green Growth Division) exceeded 50%, and that of the top nine departments exceeded 70%: the heavily involved departments were few, and the load among departments was highly unbalanced. Most complaints followed a variety of different process patterns.

The analysis also shows that the number of 'supplement' decisions (requests for additional documents) has the greatest impact on the length of a complaint. This is because a 'supplement' decision requires a physical period in which the complainant supplements and resubmits the documents, lengthening the time until the whole complaint is completed. Thorough preparation before or during the filing of a complaint can therefore drastically reduce overall processing time. By clarifying and disclosing the causes of 'supplement' decisions and their solutions, the system can help complainants prepare in advance, give them confidence that documents prepared from the disclosed public information will pass, and make complaint processing sufficiently predictable. Documents prepared from pre-disclosed information are likely to be processed without problems, which not only shortens the processing period but also improves work efficiency from the processor's point of view by eliminating renegotiation and duplicated tasks. The results of this study can be used to find departments with a high complaint burden at certain points in time and to manage workforce allocation flexibly between departments. By analyzing the patterns of the departments participating in consultations by complaint characteristics, the results can also support automation or recommendation when a consultation department is requested. Furthermore, by using the various data generated during the complaint process together with machine learning techniques, the patterns of the complaint process can be discovered and applied to the system for the automation and intelligent handling of civil complaints.

This study is expected to inform future public service quality improvement through process mining analysis of civil services.
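The frequency and cumulative-share computation behind findings like "the top four departments exceeded 50%" can be sketched directly. The event log below is invented, not the N city data:

```python
from collections import Counter

def department_load(departments):
    """Rank departments by event frequency and attach cumulative shares,
    the computation behind 'top-k departments exceed X%' style findings."""
    counts = Counter(departments)
    total = sum(counts.values())
    cum, rows = 0, []
    for dept, n in counts.most_common():
        cum += n
        rows.append((dept, n, cum / total))  # (name, count, cumulative share)
    return rows

# Invented event log: one department name per processing event
log = ["Sewage", "Sewage", "Sewage", "Waterworks", "Waterworks", "Urban"]
load = department_load(log)
```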

A Study on the Impact of Artificial Intelligence on Decision Making : Focusing on Human-AI Collaboration and Decision-Maker's Personality Trait (인공지능이 의사결정에 미치는 영향에 관한 연구 : 인간과 인공지능의 협업 및 의사결정자의 성격 특성을 중심으로)

  • Lee, JeongSeon;Suh, Bomil;Kwon, YoungOk
    • Journal of Intelligence and Information Systems / v.27 no.3 / pp.231-252 / 2021
  • Artificial intelligence (AI) is a key technology that will change the future the most, affecting industry as a whole and daily life in various ways. As data availability increases, AI finds optimal solutions and infers and predicts through self-learning, and research and investment in automation that discovers and solves problems on its own continue. Automation with AI has benefits such as cost reduction, minimized human intervention, and compensation for differences in human capability, but it also has side effects, such as limits on the AI's autonomy and erroneous results due to algorithmic bias, and in the labor market it raises fears of job replacement. Prior studies on the use of AI have shown that individuals do not necessarily use the information (or advice) it provides: people are more sensitive to algorithm errors than to human errors and avoid algorithms after seeing them err, a phenomenon called "algorithm aversion." Recently, AI has begun to be understood from the perspective of augmenting human intelligence, and interest has shifted to human-AI collaboration rather than AI alone. A study of 1,500 companies in various industries found that human-AI collaboration outperformed AI alone, and in medicine, pathologist-deep learning collaboration reduced pathologists' cancer diagnosis error rate by 85%. Leading AI companies such as IBM and Microsoft are starting to frame AI as augmented intelligence. Human-AI collaboration is emphasized in decision-making because AI is superior in information-based analysis while intuition is a uniquely human capability, so collaboration can yield optimal decisions.

In an environment where change accelerates and uncertainty increases, the need for AI in decision-making will grow, and active discussion is expected on approaches that use AI for rational decision-making. This study investigates the impact of AI on decision-making, focusing on human-AI collaboration and the interaction between the decision-maker's personality traits and the advisor type. Advisors were classified into three types: human, AI, and human-AI collaboration. We investigated the perceived usefulness of advice, the utilization of advice in decision-making, and whether the decision-maker's personality traits are influencing factors. Three hundred and eleven adult male and female participants performed a task predicting the age of faces in photos. The results showed that advisor type does not directly affect the utilization of advice; decision-makers utilize advice only when they believe it can improve prediction performance. In the case of human-AI collaboration, decision-makers rated the perceived usefulness of advice higher regardless of their personality traits and utilized the advice more actively. When the advisor was AI alone, decision-makers who scored high in conscientiousness, high in extraversion, or low in neuroticism rated the perceived usefulness of the advice higher and utilized it actively. This study is academically significant in that it focuses on human-AI collaboration amid the growing interest in the roles of AI, expanding the research area by considering AI as an advisor in decision-making and judgment research; practically, it suggests considerations for companies seeking to enhance their AI capability.

To improve the effectiveness of AI-based systems, companies must not only introduce high-performance systems but also need employees who properly understand the digital information presented by AI and can add non-digital information when making decisions. To increase the utilization of AI-based systems, task-oriented competencies such as analytical skills and information technology capability are important, and greater performance is expected if employees' personality traits are also considered.

The Comparative Study of NHPP Software Reliability Model Based on Exponential and Inverse Exponential Distribution (지수 및 역지수 분포를 이용한 NHPP 소프트웨어 무한고장 신뢰도 모형에 관한 비교연구)

  • Kim, Hee-Cheul;Shin, Hyun-Cheul
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.9 no.2 / pp.133-140 / 2016
  • Software reliability is an important issue in the software development process, and software process improvement helps deliver a reliable software product. Infinite-failure NHPP software reliability models presented in the literature exhibit constant, monotonically increasing, or monotonically decreasing failure occurrence rates per fault. In this paper, we propose reliability models based on the exponential and inverse exponential distributions and examine their efficient application to software reliability. Parameters were estimated with the maximum likelihood estimator and the bisection method, and model selection was based on the mean squared error (MSE) and the coefficient of determination ($R^2$). Failure analysis using a real data set was carried out to compare the properties of the exponential and inverse exponential distributions, and a Laplace trend test was employed to assure the reliability of the data. The study confirms that the inverse exponential distribution model is also efficient in terms of reliability (its coefficient of determination is 80% or more) and can be used as an alternative to the conventional models in this field. These results should help software developers use prior knowledge of the software's life distribution to identify failure modes.
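The abstract pairs maximum likelihood estimation with the bisection method for the parameter search. As a hedged sketch, the following uses the plain exponential distribution's likelihood equation, whose root λ = n/Σt is known in closed form, as a stand-in for the paper's NHPP mean-value functions; the failure times are invented:

```python
def bisect(f, lo, hi, tol=1e-10, max_iter=200):
    """Bisection root finder: assumes f changes sign on [lo, hi]."""
    flo = f(lo)
    for _ in range(max_iter):
        mid = (lo + hi) / 2.0
        fmid = f(mid)
        if abs(fmid) < tol or (hi - lo) / 2.0 < tol:
            return mid
        if (flo < 0) != (fmid < 0):
            hi = mid             # sign change in the lower half
        else:
            lo, flo = mid, fmid  # sign change in the upper half
    return (lo + hi) / 2.0

# Likelihood equation d(log L)/dλ = n/λ - Σt = 0 for exponential data
times = [0.5, 1.2, 2.0, 3.1, 3.2]   # invented failure times
n, s = len(times), sum(times)
lam_hat = bisect(lambda lam: n / lam - s, 1e-6, 100.0)
```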

Vibration Reduction Simulation of UH-60A Helicopter Airframe Using Active Vibration Control System (능동 진동 제어 시스템을 이용한 UH-60A 헬리콥터 기체의 진동 감소 시뮬레이션)

  • Lee, Ye-Lin;Kim, Do-Young;Kim, Do-Hyung;Hong, Sung-Boo;Park, Jae-Sang
    • Journal of the Korean Society for Aeronautical & Space Sciences / v.48 no.6 / pp.443-453 / 2020
  • This study attempts to numerically alleviate the airframe vibration of a UH-60A helicopter using an active vibration control technique. The Active Vibration Control System (AVCS) is applied to reduce the 4/rev vibration responses at specified locations on the UH-60A airframe. The 4/rev hub vibratory loads of the UH-60A rotor are predicted using the nonlinear flexible dynamics analysis code DYMORE II. Various tools such as NDARC, MSC.NASTRAN, and MATLAB Simulink are used for the AVCS simulation with five CRFGs and seven accelerometers. At a flight speed of 158 knots, the predicted 4/rev hub vibratory loads of the UH-60A rotor excite the airframe, and the 4/rev vibration responses at specified airframe positions such as the pilot seat, rotor-fuselage joint, mid-cabin, and aft-cabin are calculated with and without the AVCS. With the AVCS, the 4/rev vibration responses at all locations and in all directions are reduced by 25.14-96.05% compared to the baseline results without the AVCS.

A small-area implementation of public-key cryptographic processor for 224-bit elliptic curves over prime field (224-비트 소수체 타원곡선을 지원하는 공개키 암호 프로세서의 저면적 구현)

  • Park, Byung-Gwan;Shin, Kyung-Wook
    • Journal of the Korea Institute of Information and Communication Engineering / v.21 no.6 / pp.1083-1091 / 2017
  • This paper describes the design of a cryptographic processor supporting the 224-bit elliptic curve over a prime field defined by NIST. Scalar point multiplication, the core arithmetic function in elliptic curve cryptography (ECC), was implemented by adopting the modified Montgomery ladder algorithm. To eliminate division operations, which have high computational complexity, projective coordinates were used to implement the point addition and point doubling operations, which then require only addition, subtraction, multiplication, and squaring over GF(p). The final result of the scalar point multiplication is converted to affine coordinates, and the required inversion is implemented using Fermat's little theorem. The ECC processor was verified by FPGA implementation on a Virtex5 device. Synthesized with a 0.18 um CMOS cell library, the processor occupies 2.7 Kbit of RAM and 27,739 gate equivalents (GEs), and the estimated maximum clock frequency is 71 MHz. One scalar point multiplication takes 1,326,985 clock cycles, corresponding to a computation time of 18.7 ms at the maximum clock frequency.
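The operations described in this abstract can be modeled in software. The toy Python sketch below uses a small illustrative curve (y² = x³ + 2x + 3 over GF(97), not the NIST P-224 parameters) and affine coordinates for brevity, whereas the processor works in projective coordinates; the Fermat's-little-theorem inversion mirrors the paper's final coordinate conversion:

```python
def inv_mod(x, p):
    """Modular inverse via Fermat's little theorem: x^(p-2) mod p."""
    return pow(x, p - 2, p)

def point_add(P, Q, a, p):
    """Affine point addition/doubling on y^2 = x^3 + ax + b over GF(p).
    None represents the point at infinity."""
    if P is None:
        return Q
    if Q is None:
        return P
    x1, y1 = P
    x2, y2 = Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                      # P + (-P) = infinity
    if P == Q:
        lam = (3 * x1 * x1 + a) * inv_mod(2 * y1, p) % p   # tangent slope
    else:
        lam = (y2 - y1) * inv_mod(x2 - x1, p) % p          # chord slope
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def montgomery_ladder(k, P, a, p):
    """Scalar multiplication k*P with the Montgomery ladder: both branches
    perform one add and one double per bit, giving a regular operation
    pattern (the property the hardware design exploits)."""
    R0, R1 = None, P
    for bit in bin(k)[2:]:
        if bit == '1':
            R0, R1 = point_add(R0, R1, a, p), point_add(R1, R1, a, p)
        else:
            R0, R1 = point_add(R0, R0, a, p), point_add(R0, R1, a, p)
    return R0
```

On the real P-224 curve the same ladder applies, with the toy parameters replaced by the NIST values and the field arithmetic mapped to the processor's GF(p) units.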

A Case Study for Simulation of a Debris Flow with DEBRIS-2D at Inje, Korea (DEBRIS-2D를 이용한 인제지역 토석류 산사태 거동모사 사례 연구)

  • Chae, Byung-Gon;Liu, Ko-Fei;Kim, Man-Il
    • The Journal of Engineering Geology / v.20 no.3 / pp.231-242 / 2010
  • To assess the applicability of debris flow simulation to natural terrain in Korea, this study used the DEBRIS-2D program developed by Liu and Huang (2006). DEBRIS-2D was developed for the simulation of large debris flows composed of fine and coarse materials, using the constitutive relation proposed by Julien and Lan (1991). Based on the theory of DEBRIS-2D, this study selected a valley at Deoksanri, Inje county, Korea, where a large debris flow occurred on July 16th, 2006. The simulation results show that all of the mass had flowed into the stream within 10 minutes of initiation. At 10 minutes, the debris flow reached the first geological bend and an open area, which slowed its velocity and changed its flow direction; it then accelerated again and reached the village after 40 minutes. The maximum velocity was rather low, between 1 m/sec and 2 m/sec, which is why the debris flow took 50 minutes to reach the village. The change in flow depth shows the strong influence of the valley shape. The simulated result is very similar to what happened in the field, indicating that DEBRIS-2D can be applied to the geologic and topographic conditions of Korea without major modification of its analysis algorithm. However, optimal reference values for Korean geologic and topographic properties need to be determined for more reliable simulation of debris flows.