• Title/Summary/Keyword: Preprocessing Process


The Robust Skin Color Correction Method in Distorted Saturation by the Lighting (조명에 의한 채도 왜곡에 강건한 피부 색상 보정 방법)

  • Hwang, Dae-Dong; Lee, Keunsoo
    • Journal of the Korea Academia-Industrial cooperation Society / v.16 no.2 / pp.1414-1419 / 2015
  • Skin regions in an image are generally detected from color information. However, when lighting lowers the saturation, the hue information of the affected pixels is lost and skin detection becomes difficult. In this paper, we therefore propose a method for correcting the color of skin regions whose saturation has been reduced by lighting. The correction proceeds as a sequence: acquiring the saturation image, classifying and segmenting the low-saturation regions, extracting the color and saturation values from the segmented low-saturation regions, and then correcting the color. Because the method produces a color close to the original from the color and saturation of the low-saturation region and its surroundings, the low-saturation regions must first be extracted accurately. For more accurate segmentation when obtaining the low-saturation regions, we apply Otsu's multi-threshold method to the hue values of the HSV color space and generate a binary image. Experimental results on 170 portrait images show that the proposed method can serve as an efficient preprocessing step for skin color detection: the detection result with the proposed method is 5.8% higher than without it.
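
A minimal sketch of the segmentation step described above, assuming OpenCV, NumPy, and scikit-image. The `sat_cutoff` value and the choice of which hue class to keep are illustrative assumptions rather than values from the paper.

```python
# Sketch: isolate candidate low-saturation regions, combining a saturation
# cutoff with multi-Otsu thresholding on the HSV hue channel.
import cv2
import numpy as np
from skimage.filters import threshold_multiotsu

def low_saturation_mask(bgr_image, sat_cutoff=60):
    """Return a binary mask (0/255) of low-saturation pixels that fall in
    one hue class produced by multi-Otsu thresholding."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    hue, sat = hsv[:, :, 0], hsv[:, :, 1]

    # Pixels whose saturation the lighting has washed out.
    low_sat = sat < sat_cutoff

    # Multi-Otsu on the hue channel yields two thresholds (three classes);
    # digitizing the hue against them gives a class label per pixel.
    thresholds = threshold_multiotsu(hue, classes=3)
    hue_class = np.digitize(hue, bins=thresholds)

    # Keep low-saturation pixels of the middle hue class (illustrative choice).
    return (low_sat & (hue_class == 1)).astype(np.uint8) * 255
```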

A Path Travel Time Estimation Study on Expressways using TCS Link Travel Times (TCS 링크통행시간을 이용한 고속도로 경로통행시간 추정)

  • Lee, Hyeon-Seok; Jeon, Gyeong-Su
    • Journal of Korean Society of Transportation / v.27 no.5 / pp.209-221 / 2009
  • Estimating travel time under given traffic conditions is important for providing drivers with travel time prediction information, but the current expressway travel time estimation process cannot produce reliable travel times. The objective of this study is to estimate the path travel time spent in a through lane between origin and destination tollgates on an expressway, as a prerequisite for offering reliable prediction information. Abundant and useful toll collection system (TCS) data were used. The path travel time is estimated by combining link travel times obtained through a preprocessing process. Where TCS data are sparse, the TCS travel times of previous intervals are referenced by linear interpolation after analyzing the growth pattern of the travel time; where TCS data are missing over a long period, a dynamic travel time is estimated using the VDS time-space diagram. The travel time estimated by the proposed model is statistically valid when compared to travel times obtained from vehicles that traveled the path directly. The results show that the proposed model can be used to estimate a reliable travel time for long-distance paths in which travel times from the same departure time vary widely, intervals are long, and the representative travel time changes irregularly over short periods.
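
A minimal sketch of the short-gap interpolation idea mentioned above, using pandas; it is a generic illustration, not the paper's estimation model, and the interval length and `max_gap` limit are assumptions.

```python
# Sketch: fill short gaps in interval-aggregated TCS link travel times by
# linear interpolation; longer gaps are left for a separate method
# (e.g., a VDS-based estimate).
import numpy as np
import pandas as pd

def fill_short_gaps(travel_times: pd.Series, max_gap: int = 3) -> pd.Series:
    """travel_times: link travel time in seconds indexed by time interval,
    with NaN where TCS data are missing."""
    return travel_times.interpolate(method="linear", limit=max_gap,
                                    limit_area="inside")

# Example: 5-minute intervals with a two-interval gap.
idx = pd.date_range("2009-01-01 08:00", periods=6, freq="5min")
tt = pd.Series([310.0, np.nan, np.nan, 340.0, 350.0, np.nan], index=idx)
print(fill_short_gaps(tt))  # the interior gap is filled, the trailing NaN is not
```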

Change Attention-based Vehicle Scratch Detection System (변화 주목 기반 차량 흠집 탐지 시스템)

  • Lee, EunSeong; Lee, DongJun; Park, GunHee; Lee, Woo-Ju; Sim, Donggyu; Oh, Seoung-Jun
    • Journal of Broadcast Engineering / v.27 no.2 / pp.228-239 / 2022
  • In this paper, we propose an unmanned deep learning model for vehicle scratch detection in car sharing services. Conventional scratch detection consists of two steps: 1) a deep learning module that detects scratches in images taken before and after rental, and 2) a manual matching process for finding newly generated scratches. To build a fully automatic system, we propose a one-step unmanned scratch detection deep learning model, implemented by applying transfer learning and fine-tuning to a deep learning model that detects changes in satellite images. In the targeted car sharing service, specular reflection strongly affects detection performance, since the brightness of the glossy automobile surface is anisotropic and non-expert users take pictures with ordinary cameras. To reduce detection errors caused by specularly reflected light, we propose a preprocessing process that removes specular reflection components. For data taken with mobile phone cameras, the proposed system provides high matching performance both subjectively and objectively, with change detection scores of 67.90% precision, 74.56% recall, 71.08% F1, and 70.18% kappa.
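
A minimal sketch of a specular-highlight suppression preprocessing step, assuming OpenCV; this is a common generic approach (mask bright, low-saturation pixels and inpaint them), not the paper's exact preprocessing, and the thresholds are illustrative.

```python
# Sketch: treat very bright, nearly colorless pixels as specular reflections
# and fill them in from the surrounding texture by inpainting.
import cv2
import numpy as np

def suppress_specular(bgr, v_thresh=230, s_thresh=40, radius=3):
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    sat, val = hsv[:, :, 1], hsv[:, :, 2]
    # Specular candidates: high brightness, low saturation.
    mask = ((val > v_thresh) & (sat < s_thresh)).astype(np.uint8) * 255
    mask = cv2.dilate(mask, np.ones((3, 3), np.uint8), iterations=1)
    # Reconstruct the masked regions before scratch detection.
    return cv2.inpaint(bgr, mask, radius, cv2.INPAINT_TELEA)
```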

320 Pesticides Analysis of Essential Oils by LC-MS/MS and GC-MS/MS (LC-MS/MS 와 GC-MS/MS 를 이용한 에센셜 오일 중 320 종 잔류농약 분석법 개발)

  • Oh, Ka Hyang; Park, Sung Mak; Lee, So Min; Jung, So Young; Kwak, Byeong-Mun; Lee, Mi-Gi; Lee, Mi Ae; Choi, Sung Min; Bin, Bum-Ho
    • Journal of the Society of Cosmetic Scientists of Korea / v.47 no.4 / pp.317-331 / 2021
  • Essential oils are volatile substances physically extracted from the fragrant material of a single plant species, and are widely used in cosmetics, fragrances, and aromatherapy for their excellent preservative, sterilizing, and antibacterial effects. Because essential oils are produced through extraction and concentration, any agricultural chemicals present are also extracted and concentrated and may be harmful to the human body. This study develops a method for analyzing 320 residual pesticides concentrated in essential oils using LC-MS/MS and GC-MS/MS, improving the preprocessing method by applying a freezing process in place of the conventional hexane purification step. Analysis of essential oils detected chlorpyrifos, piperonyl butoxide, and silafluofen in basil oil and clove leaf oil. Hence, residual pesticides in essential oils should continue to be monitored.

Development of Registration Post-Processing Technology to Homogenize the Density of the Scan Data of Earthwork Sites (토공현장 스캔데이터 밀도 균일화를 위한 정합 후처리 기술 개발)

  • Kim, Yonggun; Park, Suyeul; Kim, Seok
    • KSCE Journal of Civil and Environmental Engineering Research / v.42 no.5 / pp.689-699 / 2022
  • Recently, productivity has improved in various industries through the application of advanced technologies, but improvements in the construction industry have been relatively small, and research on advanced technologies for construction is being pursued rapidly to overcome this low productivity. Among these technologies, 3D scanning is widely used to create 3D digital terrain models of construction sites. In particular, the 3D digital terrain model provides the basic data for construction automation processes such as earthwork machine guidance and control. The quality of the 3D digital terrain model is strongly influenced not only by the performance of the 3D scanner and the acquisition environment, but also by the denoising, registration, and merging steps, the preprocessing process applied to the terrain scan data before the model is created, so the performance of this processing needs to be improved. This study addresses the density inhomogeneity that arises in terrain scan data during the preprocessing step. It proposes a 'pixel-based point cloud comparison algorithm' and verifies its performance using terrain scan data obtained at an actual earthwork site.
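
A minimal sketch of one way to homogenize point density, assuming NumPy: voxel-grid downsampling that keeps one representative point (the centroid) per voxel. This is a generic technique, not the paper's 'pixel-based point cloud comparison algorithm'.

```python
# Sketch: reduce an unevenly dense terrain point cloud to at most one point
# per voxel so that point density becomes roughly uniform.
import numpy as np

def voxel_downsample(points: np.ndarray, voxel_size: float) -> np.ndarray:
    """points: (N, 3) array of x, y, z coordinates from the merged scan."""
    # Assign each point to a voxel index.
    voxel_idx = np.floor(points / voxel_size).astype(np.int64)
    # Group points by voxel and average the points in each group.
    _, inverse, counts = np.unique(voxel_idx, axis=0,
                                   return_inverse=True, return_counts=True)
    sums = np.zeros((counts.size, 3))
    np.add.at(sums, inverse, points)
    return sums / counts[:, None]
```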

Generative optical flow based abnormal object detection method using a spatio-temporal translation network

  • Lim, Hyunseok; Gwak, Jeonghwan
    • Journal of the Korea Society of Computer and Information / v.26 no.4 / pp.11-19 / 2021
  • An abnormal object is a person, object, or mechanical device that behaves in an abnormal or unusual way and requires observation or supervision. To detect such objects with an artificial intelligence algorithm and without continuous human intervention, methods that observe the specificity of temporal features using optical flow are widely used. In this study, abnormal situations are identified by training an algorithm that translates an input image frame into an optical flow image using a Generative Adversarial Network (GAN). In particular, we propose improvements to the preprocessing process, which excludes unnecessary outliers, and to the post-processing process, which increases identification accuracy on the test dataset after training, in order to improve the model's ability to identify abnormal behavior. UCSD Pedestrian and UMN Unusual Crowd Activity were used as training datasets. On the UCSD Ped2 dataset, the proposed method achieves a frame-level AUC of 0.9450 and an EER of 0.1317, an improvement over the models in previous studies.
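
A minimal sketch of the frame-to-flow idea at test time, assuming OpenCV and NumPy: the real flow is computed with Farneback's method and compared with the flow produced by the generator. This scoring scheme is an assumption for illustration, not the paper's exact procedure.

```python
# Sketch: frame-level anomaly score as the mean endpoint error between the
# flow generated from a frame and the real dense optical flow.
import cv2
import numpy as np

def real_flow(prev_gray, curr_gray):
    """Dense optical flow target, shape (H, W, 2)."""
    # Arguments: pyr_scale, levels, winsize, iterations, poly_n, poly_sigma, flags.
    return cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)

def anomaly_score(generated_flow, target_flow):
    """Higher scores indicate more abnormal motion in the frame."""
    endpoint_error = np.linalg.norm(generated_flow - target_flow, axis=-1)
    return float(endpoint_error.mean())
```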

Development of Deep Learning Structure for Defective Pixel Detection of Next-Generation Smart LED Display Board using Imaging Device (영상장치를 이용한 차세대 스마트 LED 전광판의 불량픽셀 검출을 위한 딥러닝 구조 개발)

  • Sun-Gu Lee; Tae-Yoon Lee; Seung-Ho Lee
    • Journal of IKEEE / v.27 no.3 / pp.345-349 / 2023
  • In this paper, we present the development of a deep learning structure for detecting defective pixels in next-generation smart LED display boards using an imaging device. The technique uses imaging devices and deep learning to automatically detect defects in outdoor LED billboards, aiming at effective billboard management and the resolution of various errors and issues. The research proceeds in three stages. First, the planarized image data of the billboard is calibrated to completely remove the background and undergoes the necessary preprocessing to generate a training dataset. Second, the generated dataset is used to train an object recognition network composed of a backbone and a head: the backbone uses CSP-Darknet to extract feature maps, and the head performs object detection based on the extracted feature maps. Throughout this process, the network is tuned against the confidence score and the Intersection over Union (IoU) error, and training continues. In the third stage, the trained model is used to automatically detect defective pixels on actual outdoor LED billboards. In accredited measurement experiments, the proposed method detected 100% of the defective pixels on real LED billboards, confirming improved efficiency in managing and maintaining them. These findings are anticipated to bring about a major advance in the management of LED billboards.
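
A minimal sketch of the Intersection over Union (IoU) measure mentioned above, for axis-aligned boxes given as (x1, y1, x2, y2); the box format is an assumption for illustration.

```python
# Sketch: IoU between two axis-aligned bounding boxes.
def iou(box_a, box_b):
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection rectangle.
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    # Union = sum of the two areas minus the intersection.
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```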

Application Development for Text Mining: KoALA (텍스트 마이닝 통합 애플리케이션 개발: KoALA)

  • Byeong-Jin Jeon; Yoon-Jin Choi; Hee-Woong Kim
    • Information Systems Review / v.21 no.2 / pp.117-137 / 2019
  • In the Big Data era, data science has become popular as large volumes of data are produced in many domains, and data has become a source of competitive power. Interest is growing in unstructured data, which accounts for more than 80% of the world's data. With the everyday use of social media, most unstructured data takes the form of text and plays an important role in areas such as marketing, finance, and distribution. However, text mining of social media is harder to access and use than data mining of numerical data. This study therefore develops the Korean Natural Language Application (KoALA), an integrated application for easy, handy social media text mining that does not depend on a programming language or on high-end hardware or solutions. KoALA is specialized for social media text mining, can analyze both Korean and English, and handles the entire process from data collection through preprocessing, analysis, and visualization. This paper describes the design, implementation, and application of KoALA using the design science methodology, and finally discusses its practical use through a blockchain business case. Through this paper, we hope to popularize social media text mining and support its practical and academic use in various domains.
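
A minimal sketch of the kind of collection-to-analysis preprocessing the abstract describes, in plain Python; it is a generic pipeline, not KoALA's actual interface, and the stopword list is illustrative.

```python
# Sketch: tokenize social media posts (Korean or English words), drop
# stopwords, and count term frequencies for later analysis or visualization.
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "is"}  # illustrative

def preprocess(posts):
    """posts: list of raw text strings -> list of token lists."""
    docs = []
    for text in posts:
        tokens = re.findall(r"[A-Za-z가-힣]+", text.lower())
        docs.append([t for t in tokens if t not in STOPWORDS])
    return docs

def term_frequencies(docs):
    counts = Counter()
    for tokens in docs:
        counts.update(tokens)
    return counts

docs = preprocess(["Text mining of social media is popular.",
                   "KoALA analyzes Korean and English text."])
print(term_frequencies(docs).most_common(5))
```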

A Study on the Applicability of the Crack Measurement Digital Data Graphics Program for Field Investigations of Buildings Adjacent to Construction Sites (건설 현장 인접 건물의 현장 조사를 위한 균열 측정 디지털 데이터 그래픽 프로그램 적용 가능성에 관한 연구)

  • Ui-In Jung; Bong-Joo Kim
    • Journal of the Korean Recycled Construction Resources Institute / v.12 no.1 / pp.63-71 / 2024
  • Advances in construction technology have enabled a variety of projects, such as redevelopment, undergrounding of roads, and expansion of subway and metropolitan rail networks. This has increased the number of construction projects in existing urban centers and neighborhoods, leading to more damage to and disputes with neighboring buildings and residents, as well as more safety accidents due to the aging of existing buildings. In this study, digital data were applied in a graphics program to objectify the progression of cracks: crack formation and increases in length and width are compared through photographic images, and the degree of cracking is presented numerically. Applying the program removes the error caused by subjective judgment of crack change, a noted shortcoming of existing field surveys. With supplementation and improvement through use, the program is expected to become reliable enough for general use in the building diagnosis process. As a follow-up study, the extraction algorithm of the digital graphic data program should be applied so that crack length and width are calculated automatically, without human intervention in the preprocessing work, and the overall change of the building can be checked.
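
A minimal sketch of how crack length and width could be measured from an already-segmented binary crack mask, assuming NumPy and scikit-image; this is an assumed approach for illustration, not the program described in the paper, and converting pixels to millimetres would require a known scale reference in the photograph.

```python
# Sketch: estimate crack length (centerline pixels) and mean width
# (area divided by length) from a binary crack mask.
import numpy as np
from skimage.morphology import skeletonize

def crack_length_and_width(mask: np.ndarray):
    """mask: 2-D boolean array, True where crack pixels were segmented."""
    skeleton = skeletonize(mask)
    length_px = int(skeleton.sum())   # centerline length in pixels
    area_px = int(mask.sum())         # total crack area in pixels
    mean_width_px = area_px / length_px if length_px else 0.0
    return length_px, mean_width_px
```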

A Study on Forecasting Accuracy Improvement of Case Based Reasoning Approach Using Fuzzy Relation (퍼지 관계를 활용한 사례기반추론 예측 정확성 향상에 관한 연구)

  • Lee, In-Ho; Shin, Kyung-Shik
    • Journal of Intelligence and Information Systems / v.16 no.4 / pp.67-84 / 2010
  • In business, forecasting is the task of anticipating what will happen in the future in order to make managerial decisions and plans, so accurate forecasting is essential for major decision making and is the basis for various business strategies. It is, however, very difficult to produce unbiased and consistent estimates because of the uncertainty and complexity of the future business environment. That is why scientific forecasting models are needed to support business decision making, together with efforts to minimize the forecasting error, the difference between the observation and the estimate; minimizing this error is not an easy task. Case-based reasoning is a problem-solving method that uses similar past cases to solve the current problem. To build successful case-based reasoning models, it is important to retrieve not only the most similar case but also the most relevant one, so the measurement of similarity between cases is a key factor, and it is especially difficult when the cases contain symbolic data. The purpose of this study is to improve the forecasting accuracy of the case-based reasoning approach using fuzzy relations and composition. Two methods are adopted to measure the similarity between cases containing symbolic data: deriving the similarity matrix by binary logic (judging whether two symbolic values are the same), and deriving the similarity matrix by fuzzy relation and composition. The study proceeds through data gathering and preprocessing, model building and analysis, validation analysis, and conclusion. First, in data gathering and preprocessing, we collect a cross-sectional data set whose dependent variable is categorical and whose independent variables include several qualitative variables expressed as symbolic data. The research data consist of financial ratios and the corresponding bond ratings of Korean companies; the ratings cover all bonds rated by one of the bond rating agencies in Korea, and the total sample includes 1,816 companies whose commercial papers were rated in the period 1997-2000. Credit grades are defined as outputs and classified into five rating categories (A1, A2, A3, B, C) according to credit level. Second, in model building and analysis, we derive similarity matrices by binary logic and by fuzzy composition, using the max-min, max-product, and max-average compositions, and carry out the analysis with the case-based reasoning approach on the resulting similarity matrices. Third, in the validation analysis, we verify the model with a McNemar test based on the hit ratio. Finally, we draw conclusions. The similarity measurement based on fuzzy relation and composition shows better forecasting performance than the measurement based on binary logic for symbolic data, but the differences in forecasting performance among the types of fuzzy composition are not statistically significant. The contribution of this study is to propose another methodology in which fuzzy relations and fuzzy composition can be applied to the similarity measurement between symbolic data, which is the most important factor in building a case-based reasoning model.
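
A minimal sketch of the fuzzy composition step described above, assuming NumPy; the matrix shapes and the example values are illustrative assumptions, not the paper's data.

```python
# Sketch: compose fuzzy relations over a symbolic attribute with max-min,
# max-product, or max-average to obtain a case-to-case similarity matrix.
import numpy as np

def fuzzy_compose(R: np.ndarray, S: np.ndarray, kind: str = "max-min") -> np.ndarray:
    """R: (n_cases, n_values), S: (n_values, n_cases), entries in [0, 1]."""
    # Pairwise combination of R[i, k] and S[k, j] over the shared index k.
    pair = {
        "max-min": np.minimum(R[:, :, None], S[None, :, :]),
        "max-product": R[:, :, None] * S[None, :, :],
        "max-average": (R[:, :, None] + S[None, :, :]) / 2.0,
    }[kind]
    # The "max" part of the composition: best k for each pair (i, j).
    return pair.max(axis=1)

# Example: two cases, one symbolic attribute with three possible values.
R = np.array([[1.0, 0.4, 0.0],
              [0.2, 1.0, 0.5]])
print(fuzzy_compose(R, R.T, kind="max-min"))  # case-to-case similarity
```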