• Title/Summary/Keyword: Algorithm selection


A Study on the Designation of Scenic Sites Considering Visual Perception Intensity (시지각강도를 고려한 명승 구역설정에 관한 연구)

  • Ha, Tae-Il;Kim, Choong-Sik
    • Korean Journal of Heritage: History & Science / v.50 no.1 / pp.58-77 / 2017
  • This study applied an index called Visual Perception Intensity (VPI), which quantitatively captures landscape value by viewpoint, to the designation of cultural heritage areas in Scenic Sites. The results are as follows. First, a VPI selection index was presented for designating cultural heritage areas in Scenic Sites; the index takes into account both the distance from the viewing point to the object and the incident angle. In addition, the VPI analysis process was implemented in GIS and the analysis algorithm was constructed. Second, the validity of the VPI was examined by comparing the simple frequency of cumulative visibility with the VPI results: within the 4.74 km study area, the VPI was influenced more by the incidence angle than by the distance between viewpoint and object. Third, a field survey was performed to investigate the effectiveness of the VPI classification and to examine whether human visual perception was fully reflected; it confirmed that areas with high VPI were indeed visually important. Fourth, a plan for adjusting cultural heritage areas was constructed by applying the VPI to areas already designated as Scenic Sites. After classifying the VPI into three classes, it was found that areas of the second class or higher should be designated as cultural heritage areas, and areas of the third class as Historical and Cultural Environment Preservation Areas.
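The abstract does not give the VPI formula itself. As a loose sketch of the idea it describes (a score that decays with viewing distance, vanishing beyond the 4.74 km analysis range, and with the incidence angle), one hypothetical form is:

```python
import math

def visual_perception_intensity(distance_m, incidence_deg, max_distance_m=4740.0):
    """Toy VPI score: a hypothetical combination of a linear distance decay
    and the cosine of the incidence angle (0 degrees = face-on view).
    The 4.74 km cutoff follows the study's analysis range; the exact
    weighting used in the paper is not published in the abstract."""
    if distance_m >= max_distance_m:
        return 0.0
    distance_weight = 1.0 - distance_m / max_distance_m  # decays with distance
    angle_weight = max(0.0, math.cos(math.radians(incidence_deg)))
    return distance_weight * angle_weight

# A nearby, face-on slope scores higher than a distant, oblique one.
near = visual_perception_intensity(500, 10)
far = visual_perception_intensity(4000, 70)
```

In a GIS implementation, this score would be accumulated per terrain cell over all viewpoints, giving the cumulative intensity surface the study classifies into three grades.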

Doubly-robust Q-estimation in observational studies with high-dimensional covariates (고차원 관측자료에서의 Q-학습 모형에 대한 이중강건성 연구)

  • Lee, Hyobeen;Kim, Yeji;Cho, Hyungjun;Choi, Sangbum
    • The Korean Journal of Applied Statistics / v.34 no.3 / pp.309-327 / 2021
  • Dynamic treatment regimes (DTRs) are decision-making rules designed to provide personalized treatment to individuals in multi-stage randomized trials. Unlike classical methods, in which all individuals are prescribed the same type of treatment, DTRs prescribe patient-tailored treatments that take into account individual characteristics that may change over time. Q-learning, one of the regression-based algorithms for finding optimal treatment rules, has become popular because it is easy to implement. However, its performance relies heavily on correct specification of the Q-function for the response, especially in observational studies. In this article, we examine a number of doubly-robust weighted least-squares estimating methods for Q-learning in high-dimensional settings, where treatment models for the propensity score and penalization for sparse estimation are also investigated. We further consider flexible ensemble machine learning methods for the treatment model to achieve double-robustness, so that the optimal decision rule is correctly estimated as long as at least one of the outcome model or the treatment model is correct. Extensive simulation studies show that the proposed methods work well with practical sample sizes, and their practical utility is demonstrated with a real-data example.
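The paper's full method is multi-stage penalized Q-learning; the single-stage augmented IPW (AIPW) value estimator below is only meant to illustrate the double-robustness property it builds on (the function and its inputs are our own naming, not the paper's):

```python
def aipw_value(outcomes, treatments, propensities, q_hats):
    """Augmented inverse-probability-weighted estimate of the mean outcome
    under treatment A = 1. The estimate stays consistent if either the
    outcome model (q_hats) or the propensity model (propensities) is
    correctly specified: the double-robustness property."""
    n = len(outcomes)
    total = 0.0
    for y, a, p, q in zip(outcomes, treatments, propensities, q_hats):
        total += a / p * (y - q) + q  # IPW residual term + model prediction
    return total / n
```

If the outcome model is exact, the residual term averages out and the propensities can be arbitrarily wrong (and vice versa), which is exactly the safeguard the paper seeks in high dimensions.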

Improving Efficiency of Food Hygiene Surveillance System by Using Machine Learning-Based Approaches (기계학습을 이용한 식품위생점검 체계의 효율성 개선 연구)

  • Cho, Sanggoo;Cho, Seung Yong
    • The Journal of Bigdata / v.5 no.2 / pp.53-67 / 2020
  • This study employs a supervised learning prediction model to detect, in advance, nonconformity among processed-food manufacturing and processing businesses. The study followed the standard machine learning procedure: definition of the objective function, data preprocessing and feature engineering, and model selection and evaluation. The dependent variable was the number of supervised inspection detections over the five years from 2014 to 2018, and the objective was to maximize the probability of detecting nonconforming companies. Preprocessing reflected not only basic attributes such as revenue, operating duration, and number of employees, but also inspection track records and external climate data. After applying a feature-variable extraction method, the machine learning algorithms were applied to the data, with company risk, item risk, environmental risk, and past violation history derived as feature variables that affect the determination of nonconformity. The F1-score of the decision tree, one of the ensemble models, was much higher than those of the other models. Based on these results, it is expected that official food control for food safety management will be strengthened and shifted toward data-evidence-based management and a scientific administrative system.
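The study compares models by F1-score; for reference, the metric reduces to a few lines of standard-library code:

```python
def f1_score(y_true, y_pred):
    """F1 = harmonic mean of precision and recall, the model-selection
    criterion used for the nonconformity classifier. Labels are 1 for
    nonconforming, 0 for conforming."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```

F1 is a sensible choice here because nonconforming businesses are rare, so plain accuracy would reward a model that flags nothing.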

Linear programming models using a Dantzig type risk for portfolio optimization (Dantzig 위험을 사용한 포트폴리오 최적화 선형계획법 모형)

  • Ahn, Dayoung;Park, Seyoung
    • The Korean Journal of Applied Statistics / v.35 no.2 / pp.229-250 / 2022
  • Since the publication of Markowitz's (1952) mean-variance portfolio model, research on portfolio optimization has been conducted in many fields. The classical mean-variance portfolio model is a nonlinear convex problem; by applying Dantzig's linear programming method it can be converted to a linear form, which effectively reduces computation time. In this paper, we propose a Dantzig perturbation portfolio model that reduces management and transaction costs by constructing a portfolio from a stable and small (sparse) set of assets. The average return and risk are adjusted to purpose by a perturbation method in which a certain fraction is invested in the existing benchmark and the rest in the assets selected by the proposed optimization model. For covariance estimation, we propose a Gaussian kernel weighted covariance that reflects time-series characteristics through time-dependent weights. The performance of the proposed model was evaluated against the benchmark portfolio on five real data sets. Empirical results show that the proposed portfolios provide higher expected returns or lower risks than the benchmark, along with sparse and stable asset selection.
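The exact kernel parameterization is not given in the abstract; a plausible reading of "Gaussian kernel weighted covariance" is a covariance whose observations are weighted by a Gaussian decay in time, for example:

```python
import math

def gaussian_time_weights(n_obs, bandwidth):
    """Normalized weights w_t proportional to exp(-(n-1-t)^2 / (2 h^2)),
    so recent returns count more. This parameterization is our assumption,
    not taken from the paper."""
    raw = [math.exp(-((n_obs - 1 - t) ** 2) / (2.0 * bandwidth ** 2))
           for t in range(n_obs)]
    total = sum(raw)
    return [w / total for w in raw]

def weighted_covariance(x, y, weights):
    """Covariance of two return series under time-decaying weights
    (weights assumed to sum to 1)."""
    mx = sum(w * a for w, a in zip(weights, x))
    my = sum(w * b for w, b in zip(weights, y))
    return sum(w * (a - mx) * (b - my) for w, a, b in zip(weights, x, y))
```

Computing this pairwise over all assets yields the covariance matrix fed into the Dantzig-type linear program.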

A study on the selection of the target scope for destruction of personal credit information of customers whose financial transaction effect has ended (금융거래 효과가 종료된 고객의 개인신용정보 파기 대상 범위 선정에 관한 연구)

  • Baek, Song-Yi;Lim, Young-Bin;Lee, Chang-Gil;Chun, Sam-Hyun
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.22 no.3 / pp.163-169 / 2022
  • Under the Credit Information Act, to protect customers by the relationship of credit information subjects, personal credit information must be destroyed or stored separately in two stages, according to the period elapsed after the financial transaction effect ends. In practice, however, the personal credit information of customers whose transactions have ended cannot always be destroyed in bulk at termination, because of the nature of the financial product and the transaction. To handle this, the IT staff in charge investigate the business relationships by transaction type in advance and develop a computerized program that follows the resulting target and order of destruction. In this process, if the parent-child relationships between tables are not clearly identified, the outcome depends on the subjective judgment of the person in charge, creating compliance issues in which credit information that should be destroyed is retained, or information that should be retained is destroyed. Therefore, in this paper, we present and implement a model and algorithm that identify the referenced tables from the SQL executed by the computer program, analyze the parent-child relationships between tables using each table's primary key information, and visualize and objectively select the range of data to be destroyed.
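A minimal sketch of the two core steps, extracting referenced tables from executed SQL and ordering destruction child-first from a parent-to-children map (real SQL needs a proper parser, and the table names below are hypothetical examples, not from the paper):

```python
import re

def referenced_tables(sql):
    """Pull the table names following FROM/JOIN in a SQL statement.
    A deliberately simplified illustration: aliases, subqueries, and
    schema-qualified names would need a real SQL parser."""
    return re.findall(r'\b(?:FROM|JOIN)\s+([A-Za-z_][A-Za-z0-9_]*)',
                      sql, flags=re.IGNORECASE)

def destruction_order(children):
    """Post-order walk of a parent -> child-tables map: every child table
    (holding references to a parent's primary key) appears before its
    parent, so no destroyed row is still referenced."""
    order, seen = [], set()
    def visit(table):
        if table in seen:
            return
        seen.add(table)
        for child in children.get(table, []):
            visit(child)
        order.append(table)
    for table in children:
        visit(table)
    return order
```

For instance, with `{"CUSTOMER": ["LOAN", "CARD"]}` the rows in LOAN and CARD would be scheduled for destruction before CUSTOMER.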

Research on Advanced Measures for Emergency Response to Water Accidents based on Big-Data (빅데이터 기반 수도사고 위기대응 고도화 방안에 관한 연구)

  • Kim, Ho-sung;Kim, Jong-rip;Kim, Jae-jong;Yoon, Young-min;Kim, Dae-kyung
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2022.05a / pp.317-321 / 2022
  • In response to the Incheon tap water accident in 2019, the Ministry of Environment created the "Comprehensive Measures for Water Safety Management" to improve water operation management, provide systematic technical support, and respond to accidents. Accordingly, K-water is building a smart water supply management system covering the entire tap water process. To advance the response to water accidents, it is essential to secure the reliability of real-time water operation data such as flow rate, pressure, and water level, and to develop and apply early-warning algorithms using big data analysis techniques. In this paper, various statistical techniques are applied to water supply operation data (flow, pressure, water level, etc.) to lay the foundation for selecting the optimal operating range and advancing the monitoring and alarm system. In addition, the arrival time is analyzed through cross-correlation analysis of changes in raw water turbidity between the intake and the water treatment plant. The purpose of this paper is to study a model that predicts the raw water turbidity of a water treatment plant from raw water turbidity data, taking into account the time delay that varies with flow rate.
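The cross-correlation arrival-time idea can be sketched as follows: slide the treatment-plant series against the intake series and keep the lag with the highest mean-centered cross-covariance (a simplification of what would be done on real turbidity data, where normalization and detrending also matter):

```python
def best_lag(upstream, downstream, max_lag):
    """Lag (in samples) that maximizes the mean-centered cross-covariance
    between the intake turbidity series (upstream) and the treatment-plant
    series (downstream). The winning lag is the estimated arrival time."""
    def score(lag):
        u = upstream[:len(upstream) - lag]
        d = downstream[lag:]
        mu, md = sum(u) / len(u), sum(d) / len(d)
        return sum((a - mu) * (b - md) for a, b in zip(u, d)) / len(u)
    return max(range(max_lag + 1), key=score)
```

In the paper's setting the lag would vary with flow rate, so the model would be fit per flow regime rather than with one global delay.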


Data analysis by Integrating statistics and visualization: Visual verification for the prediction model (통계와 시각화를 결합한 데이터 분석: 예측모형 대한 시각화 검증)

  • Mun, Seong Min;Lee, Kyung Won
    • Design Convergence Study / v.15 no.6 / pp.195-214 / 2016
  • Predictive analysis is based on probabilistic learning algorithms known as pattern recognition or machine learning. Users who want to extract more information from the data therefore need substantial statistical knowledge, and it is difficult to discover patterns and characteristics in the data. This study conducted statistical and visual data analyses to supplement these weaknesses of predictive analysis, and found implications not reported in previous studies. First, we could identify data patterns by adjusting the data selection according to the splitting criteria of the decision tree method. Second, we could see what types of data were included in the final prediction model. In the statistical analysis, we found relations among multiple variables and derived a prediction model for high box-office performance. In the visualization analysis, we proposed a visual analysis method with various interactive functions. Finally, we verified the final prediction model and suggested an analysis method to extract a variety of information from the data.
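The splitting criteria the first finding refers to are, in the usual decision-tree setting, impurity measures such as the Gini index; a minimal version:

```python
def gini_impurity(labels):
    """Gini impurity, the standard splitting criterion for classification
    trees: 1 minus the sum of squared class shares. 0 means a pure node."""
    n = len(labels)
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

def split_gain(parent, left, right):
    """Impurity reduction achieved by splitting the parent node's labels
    into left and right child nodes; trees greedily pick the split that
    maximizes this gain."""
    n = len(parent)
    return gini_impurity(parent) - (len(left) / n * gini_impurity(left)
                                    + len(right) / n * gini_impurity(right))
```

Inspecting which splits carry the largest gain is one concrete way to "read" the patterns the tree encodes, which is what the study's visual verification makes accessible to non-statisticians.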

Deep learning-based Multi-view Depth Estimation Methodology of Contents' Characteristics (다 시점 영상 콘텐츠 특성에 따른 딥러닝 기반 깊이 추정 방법론)

  • Son, Hosung;Shin, Minjung;Kim, Joonsoo;Yun, Kug-jin;Cheong, Won-sik;Lee, Hyun-woo;Kang, Suk-ju
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2022.06a / pp.4-7 / 2022
  • Recently, multi-view depth estimation methods using deep learning networks for 3D scene reconstruction have gained considerable attention. Multi-view video contents have various characteristics according to their camera composition, environment, and setting; it is important to understand these characteristics and apply the proper depth estimation methods for high-quality 3D reconstruction. The camera setting determines the physical distance, called the baseline, between camera viewpoints. Our proposed methods focus on deciding the appropriate depth estimation methodology according to the characteristics of the multi-view video content. Empirical results revealed limitations when existing multi-view depth estimation methods were applied to divergent or large-baseline datasets. We therefore verified the need to obtain the proper number of source views and to apply a source-view selection algorithm suited to each dataset's capturing environment. In conclusion, when implementing a deep learning-based depth estimation network for 3D scene reconstruction, the results of this study can serve as a guideline for finding adaptive depth estimation methods.
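Source-view selection by baseline can be illustrated with a nearest-camera rule (real multi-view-stereo pipelines also weigh viewing angle and image overlap; the function below is our simplification and assumes the reference camera is not in the candidate list):

```python
def select_source_views(reference_pos, camera_positions, n_views):
    """Return the indices of the n_views candidate cameras closest to the
    reference viewpoint, i.e. those with the smallest baseline. Positions
    are 3D coordinates; smaller baselines ease matching, larger ones
    improve depth triangulation, hence the need to tune n_views per
    capturing setup."""
    def dist(p):
        return sum((a - b) ** 2 for a, b in zip(reference_pos, p)) ** 0.5
    ranked = sorted(range(len(camera_positions)),
                    key=lambda i: dist(camera_positions[i]))
    return ranked[:n_views]
```

For a divergent rig the nearest cameras may still be far apart, which is exactly the regime where the abstract reports existing methods struggling.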


Mapping Mammalian Species Richness Using a Machine Learning Algorithm (머신러닝 알고리즘을 이용한 포유류 종 풍부도 매핑 구축 연구)

  • Zhiying Jin;Dongkun Lee;Eunsub Kim;Jiyoung Choi;Yoonho Jeon
    • Journal of Environmental Impact Assessment / v.33 no.2 / pp.53-63 / 2024
  • Biodiversity holds significant importance within the framework of environmental impact assessment, being utilized in site selection for development, understanding the surrounding environment, and assessing the impact on species due to disturbances. The field of environmental impact assessment has seen substantial research exploring new technologies and models to evaluate and predict biodiversity more accurately. While current assessments rely on data from fieldwork and literature surveys to gauge species richness indices, limitations in spatial and temporal coverage underscore the need for high-resolution biodiversity assessments through species richness mapping. In this study, leveraging data from the 4th National Ecosystem Survey and environmental variables, we developed a species distribution model using Random Forest. This model yielded mapping results of 24 mammalian species' distribution, utilizing the species richness index to generate a 100-meter resolution map of species richness. The research findings exhibited a notably high predictive accuracy, with the species distribution model demonstrating an average AUC value of 0.82. In addition, the comparison with National Ecosystem Survey data reveals that the species richness distribution in the high-resolution species richness mapping results conforms to a normal distribution. Hence, it stands as highly reliable foundational data for environmental impact assessment. Such research and analytical outcomes could serve as pivotal new reference materials for future urban development projects, offering insights for biodiversity assessment and habitat preservation endeavors.
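Once per-species occurrence probabilities are mapped (here from the Random Forest distribution models), the richness index per grid cell is simply a thresholded count; a toy version:

```python
def species_richness(presence_probs, threshold=0.5):
    """Species richness per grid cell: the count of species whose predicted
    occurrence probability meets the threshold. presence_probs maps a cell
    id to the list of per-species probabilities (one entry per modeled
    species, e.g. the 24 mammals in the study). The threshold choice is
    our assumption."""
    return {cell: sum(1 for p in probs if p >= threshold)
            for cell, probs in presence_probs.items()}
```

Running this over every 100 m cell stacks the 24 single-species maps into the single richness surface described in the abstract.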

Development of a Real-time Action Recognition-Based Child Behavior Analysis Service System (실시간 행동인식 기반 아동 행동분석 서비스 시스템 개발)

  • Chimin Oh;Seonwoo Kim;Jeongmin Park;Injang Jo;Jaein Kim;Chilwoo Lee
    • Smart Media Journal / v.13 no.2 / pp.68-84 / 2024
  • This paper describes the development of a system and algorithms for high-quality welfare services by recognizing behavior development indicators (activity, sociability, danger) in children aged 0 to 2 years old using action recognition technology. Action recognition targeted 11 behaviors from lying down in 0-year-olds to jumping in 2-year-olds, using data directly obtained from actual videos provided for research purposes by three nurseries in the Gwangju and Jeonnam regions. A dataset of 1,867 actions from 425 clip videos was built for these 11 behaviors, achieving an average recognition accuracy of 97.4%. Additionally, for real-world application, the Edge Video Analyzer (EVA), a behavior analysis device, was developed and implemented with a region-specific random frame selection-based PoseC3D algorithm, capable of recognizing actions in real-time for up to 30 people in four-channel videos. The developed system was installed in three nurseries, tested by ten childcare teachers over a month, and evaluated through surveys, resulting in a perceived accuracy of 91 points and a service satisfaction score of 94 points.
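The "region-specific random frame selection" for the PoseC3D input is not specified in detail in the abstract; a common reading (split the clip into equal temporal regions and draw one random frame from each) looks like:

```python
import random

def sample_frames(n_frames, n_samples, seed=None):
    """Divide a clip of n_frames into n_samples equal temporal regions and
    draw one random frame index from each, so the samples cover the whole
    action while staying cheap enough for real-time inference. This is our
    interpretation of the scheme, not the paper's exact implementation."""
    rng = random.Random(seed)
    bounds = [round(i * n_frames / n_samples) for i in range(n_samples + 1)]
    return [rng.randrange(bounds[i], max(bounds[i] + 1, bounds[i + 1]))
            for i in range(n_samples)]
```

Because each region is sampled independently, repeated passes over the same clip see slightly different frames, which acts as free temporal augmentation during training.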