• Title/Summary/Keyword: Processing Accuracy


Research on feature selection using a heuristic function (휴리스틱 함수를 이용한 feature selection에 관한 연구)

  • Hong, Seok-Mi;Jung, Kyung-Sook;Chung, Tae-Choong
    • The KIPS Transactions:PartB, v.10B no.3, pp.281-286, 2003
  • A large number of features are collected for problem solving in real life, but it is difficult to use all of them, and collecting correct data for every feature is not easy. When all collected data are used for learning, the resulting model becomes complicated and does not perform well. Interrelationships and hierarchical relations also exist among the features. The number of features can be reduced by analyzing these relations with heuristic knowledge or statistical methods. A heuristic technique refers to knowledge gained through repeated trial and error and experience, and experts can approach the relevant problem domain by collecting such experience-based opinions. These properties can be exploited to reduce the number of features used in learning: experts generate new, highly abstract features from the raw data. This paper describes a machine learning model that reduces the number of features with a heuristic function and uses the abstracted features as the input values of a neural network. We applied the model to win/lose prediction in professional baseball games. The results show that the model combining the two techniques not only reduces the complexity of the neural network but also improves classification accuracy significantly compared with using the neural network or the heuristic model alone.
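
To illustrate the approach described above, the following is a minimal sketch of combining a heuristic abstraction step with a small neural network classifier. The raw statistics, the heuristic weighting, and the network configuration are illustrative assumptions, not the features or model actually used in the paper.

```python
# Minimal sketch: heuristic feature abstraction feeding a small neural network.
# The raw statistics, heuristic weights, and network size are illustrative assumptions.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Hypothetical raw features per game: [home batting avg, home ERA, home recent win rate,
#                                      away batting avg, away ERA, away recent win rate]
raw = rng.random((200, 6))
labels = (raw[:, 2] - raw[:, 5] + 0.1 * rng.standard_normal(200) > 0).astype(int)

def heuristic_strength(team_stats):
    """Heuristic abstraction: collapse raw stats into one 'team strength' score.
    The weights encode assumed expert knowledge: hitting and recent form help,
    a high ERA hurts."""
    bat, era, recent = team_stats.T
    return 0.4 * bat - 0.3 * era + 0.3 * recent

# Reduce 6 raw features to 2 abstracted ones (home strength, away strength).
abstracted = np.column_stack([
    heuristic_strength(raw[:, :3]),
    heuristic_strength(raw[:, 3:]),
])

# A smaller network suffices once the heuristic has reduced the input dimension.
clf = MLPClassifier(hidden_layer_sizes=(4,), max_iter=2000, random_state=0)
clf.fit(abstracted[:150], labels[:150])
print("hold-out accuracy:", clf.score(abstracted[150:], labels[150:]))
```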

Measurement and Quality Control of MIROS Wave Radar Data at Dokdo (독도 MIROS Wave Radar를 이용한 파랑관측 및 품질관리)

  • Jun, Hyunjung;Min, Yongchim;Jeong, Jin-Yong;Do, Kideok
    • Journal of Korean Society of Coastal and Ocean Engineers, v.32 no.2, pp.135-145, 2020
  • Waves are generally observed either by direct methods that measure the water surface elevation with a wave buoy or pressure gauge, or by remote-sensing methods. A wave buoy or pressure gauge can produce high-quality wave data but carries a high risk of damage or loss of the instrument and high maintenance costs in offshore areas. Remote observation methods such as radar, on the other hand, are easy to maintain because the equipment is installed on land, but their accuracy is somewhat lower than that of direct observation. This study investigates the data quality of the MIROS Wave and Current Radar (MWR) installed at Dokdo and improves the quality of the remote wave observation data using observations from the wave buoy (CWB) operated by the Korea Meteorological Administration. Three types of wave data quality control were developed and applied: 1) the combined use (Optimal Filter) of the filters designed by MIROS (Reduce Noise Frequency, Phillips Check, Energy Level Check), 2) the Spike Test Algorithm (Spike Test) developed by the OOI (Ocean Observatories Initiative), and 3) a new filter (H-Ts QC) using the relationship between significant wave height and period. With these three quality controls, the significant wave heights observed by the MWR show reasonable reliability, whereas some errors remain in the significant wave periods, so further improvement is required. In addition, because the MWR data differ somewhat from the CWB data for high waves over 3 m, further work such as collecting and analyzing long-term remote wave observation data and developing additional filters is necessary.
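
As an illustration of the kind of quality control discussed above, the sketch below implements a simple spike test and a significant wave height-period consistency check. The window length, thresholds, and steepness limit are assumptions for demonstration, not the filter settings developed in the paper.

```python
# Minimal sketch of two QC checks analogous to those described: a spike test on
# significant wave height and a height-period (H-Ts) consistency check.
import numpy as np

G = 9.81  # gravitational acceleration [m/s^2]

def spike_test(hs, window=5, n_sigma=3.0):
    """Flag points deviating from the local median by more than n_sigma local
    standard deviations (window length and n_sigma are illustrative)."""
    hs = np.asarray(hs, dtype=float)
    flags = np.zeros(hs.size, dtype=bool)
    half = window // 2
    for i in range(hs.size):
        lo, hi = max(0, i - half), min(hs.size, i + half + 1)
        segment = np.delete(hs[lo:hi], i - lo)      # exclude the point itself
        if segment.size and np.std(segment) > 0:
            flags[i] = abs(hs[i] - np.median(segment)) > n_sigma * np.std(segment)
    return flags

def hs_ts_check(hs, ts, max_steepness=0.05):
    """Flag records whose deep-water wave steepness Hs / (g*Ts^2 / (2*pi))
    exceeds an assumed plausibility limit."""
    hs, ts = np.asarray(hs, float), np.asarray(ts, float)
    wavelength = G * ts**2 / (2 * np.pi)
    return hs / wavelength > max_steepness

hs = np.array([0.8, 0.9, 4.5, 1.0, 1.1, 1.0])   # 4.5 m is an obvious spike
ts = np.array([5.0, 5.2, 5.1, 5.0, 2.0, 5.3])   # 2.0 s is inconsistent with 1.1 m
print(spike_test(hs), hs_ts_check(hs, ts))
```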

Detection of Phantom Transaction using Data Mining: The Case of Agricultural Product Wholesale Market (데이터마이닝을 이용한 허위거래 예측 모형: 농산물 도매시장 사례)

  • Lee, Seon Ah;Chang, Namsik
    • Journal of Intelligence and Information Systems, v.21 no.1, pp.161-177, 2015
  • With the rapid evolution of technology, the size, number, and types of databases have increased concomitantly, so data mining approaches face many challenging applications. One such application is the discovery of fraud patterns in agricultural product wholesale transactions. The agricultural product wholesale market in Korea is huge, and vast numbers of transactions are made every day. Demand for agricultural products continues to grow, and electronic auction systems have raised the operational efficiency of the wholesale market. The number of unusual transactions can also be assumed to grow in proportion to the trading volume, and an unusual transaction is often the first sign of fraud. However, it is very difficult to identify and detect such transactions and the corresponding fraud in the agricultural product wholesale market because the types of fraud have become more sophisticated than ever before. Fraud can be detected by manually verifying the entire transaction record, but this requires a significant amount of human resources and is ultimately impractical. Fraud can also be revealed by a victim's report or complaint, but there are usually no victims in agricultural product wholesale fraud because it is committed through collusion between an auction company and an intermediary wholesaler. Nevertheless, transaction records must be monitored continuously and efforts made to prevent fraud, because fraud not only disturbs the fair trading order of the market but also rapidly erodes its credibility. Applying data mining in such an environment is very useful because it can properly discover unknown fraud patterns or features from a large volume of transaction data. The objective of this research is to empirically investigate the factors necessary to detect fraudulent transactions in an agricultural product wholesale market by developing a data mining based fraud detection model. One of the major frauds is the phantom transaction, a colluding transaction in which the seller (auction company or forwarder) and buyer (intermediary wholesaler) pretend to complete a transaction by recording false data in the online transaction processing system without actually selling products, and the seller receives money from the buyer. This leads to overstated sales performance and illegal money transfers, which reduce the credibility of the market. This paper reviews the environment of the wholesale market, including the types of transactions, the roles of market participants, and the various types and characteristics of fraud, and introduces the whole process of developing the phantom transaction detection model. The process consists of four modules: (1) data cleaning and standardization, (2) statistical data analysis such as distribution and correlation analysis, (3) construction of a classification model using a decision-tree induction approach, and (4) verification of the model in terms of hit ratio. We collected real data from six associations of agricultural producers in metropolitan markets. The final decision-tree model revealed that the monthly average trading price of items offered by forwarders is a key variable in detecting phantom transactions. The verification procedure also confirmed the suitability of the results. However, even though the performance achieved in this research is satisfactory, sensitive issues remain for improving the classification accuracy and the conciseness of the rules. One such issue is the robustness of the data mining model. Data mining is very much data-oriented, so data mining models tend to be very sensitive to changes in data or circumstances; this lack of robustness requires continuous remodeling as the data or situation changes. We hope this paper provides a valuable guideline to organizations and companies that consider introducing or building a fraud detection model in the future.
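
The decision-tree induction and hit-ratio verification steps (modules 3 and 4) could look roughly like the following sketch. The feature columns other than the monthly average trading price are hypothetical, and the tiny synthetic table stands in for the real transaction records.

```python
# Minimal sketch of the decision-tree induction step. Column names are hypothetical;
# the paper only identifies the monthly average trading price offered by forwarders
# as a key variable.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical cleaned transaction records (output of module 1).
df = pd.DataFrame({
    "monthly_avg_price": [1200, 300, 2500, 150, 2800, 900, 3100, 200],
    "quantity_kg":       [500, 2000, 120, 3500, 100, 800, 90, 4200],
    "same_day_reversal": [0, 0, 1, 0, 1, 0, 1, 0],
    "is_phantom":        [0, 0, 1, 0, 1, 0, 1, 0],   # label from audited cases
})

X, y = df.drop(columns="is_phantom"), df["is_phantom"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25,
                                                    random_state=0, stratify=y)

# Module 3: decision-tree induction; module 4: verification via hit ratio.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
print(export_text(tree, feature_names=list(X.columns)))
print("hit ratio on hold-out set:", tree.score(X_test, y_test))
```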

Dynamic Prefetch Filtering Schemes to Enhance the Usefulness of Cache Memory (캐시 메모리의 유용성을 높이는 동적 선인출 필터링 기법)

  • Chon Young-Suk;Lee Byung-Kwon;Lee Chun-Hee;Kim Suk-Il;Jeon Joong-Nam
    • The KIPS Transactions:PartA, v.13A no.2 s.99, pp.123-136, 2006
  • Prefetching is an effective way to reduce the latency caused by memory accesses. However, excessively aggressive prefetching not only leads to cache pollution, which cancels out the benefits of prefetching, but also increases bus traffic, degrading overall performance. In this paper, prefetch filtering schemes are proposed that dynamically decide whether to issue a prefetch by consulting a filtering table, in order to reduce the cache pollution caused by unnecessary prefetches. First, the prefetch hashing table 1-bit SC filtering scheme (PHT1bSC) is examined to analyze the shortcomings of the conventional scheme: like the conventional scheme it uses N:1 mapping, but each entry holds a 1-bit value representing two states. A complete block address table filtering scheme (CBAT) is introduced as a reference for the comparative study. A prefetch block address lookup table scheme (PBALT) is then proposed as the main idea of this paper and exhibits the most exact filtering performance. Its table has the same length as in the PHT1bSC scheme, each entry has the same fields as in the CBAT scheme, and recently prefetched but never-referenced data block addresses are mapped 1:1 onto entries of the filter table. The schemes were simulated on commonly used prefetch methods with general benchmarks and multimedia programs while varying the cache parameters. Compared with no filtering, the PBALT scheme shows an improvement of up to 22%, and the cache miss ratio is reduced by 7.9% relative to the conventional PHT2bSC scheme owing to the enhanced filtering accuracy. The MADT of the proposed PBALT scheme is reduced by 6.1% compared with the conventional schemes, thereby reducing the total execution time.
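
The following sketch illustrates the filtering idea in software: a small table records block addresses whose earlier prefetches were never referenced, and new prefetches that hit the table are suppressed. The table size, indexing, and update policy are assumptions for illustration, not the hardware parameters simulated in the paper.

```python
# Minimal software sketch of a prefetch filter table: remember blocks whose
# previous prefetches were useless and suppress further prefetches to them.
BLOCK_SIZE = 64          # bytes per cache block (assumed)
TABLE_ENTRIES = 256      # direct-mapped filter table (assumed)

class PrefetchFilter:
    def __init__(self):
        # One block address per entry (1:1 mapping within an entry).
        self.table = [None] * TABLE_ENTRIES

    def _index(self, block_addr):
        return block_addr % TABLE_ENTRIES

    def should_prefetch(self, addr):
        """Allow a prefetch only if this block is not recorded as useless."""
        block = addr // BLOCK_SIZE
        return self.table[self._index(block)] != block

    def record_useless(self, addr):
        """Called when a prefetched block is evicted without being referenced."""
        block = addr // BLOCK_SIZE
        self.table[self._index(block)] = block

    def record_referenced(self, addr):
        """Called when a previously filtered block is demanded again."""
        block = addr // BLOCK_SIZE
        if self.table[self._index(block)] == block:
            self.table[self._index(block)] = None

f = PrefetchFilter()
f.record_useless(0x1000)            # this block polluted the cache last time
print(f.should_prefetch(0x1000))    # False: the filter suppresses it
print(f.should_prefetch(0x2000))    # True: never recorded as useless
```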

Vitamin B5 and B6 Contents in Fresh Materials and after Parboiling Treatment in Harvested Vegetables (채소류의 수확 후 원재료 및 데침 처리에 의한 비타민 B5 및 B6 함량 변화)

  • Kim, Gi-Ppeum;Ahn, Kyung-Geun;Kim, Gyeong-Ha;Hwang, Young-Sun;Kang, In-Kyu;Choi, Youngmin;Kim, Haeng-Ran;Choung, Myoung-Gun
    • Horticultural Science & Technology, v.34 no.1, pp.172-182, 2016
  • This study aimed to determine the changes in vitamin B5 and B6 contents after parboiling treatment, compared with the fresh materials, of the main vegetables consumed in Korea. The specificity, accuracy, and precision of the vitamin B5 and B6 analysis method were validated using high-performance liquid chromatography (HPLC). The recovery rate of the standard reference material (SRM) was excellent, and all analyses stayed within the control limits of the quality control charts for vitamin B5 and B6. The Z-score for vitamin B6 in the Food Analysis Performance Assessment Scheme (FAPAS) proficiency test was -1.0, confirming the reliability of the analytical performance. The vitamin B5 and B6 contents of a total of 39 fresh and parboiled samples were analyzed. The contents of vitamin B5 and B6 ranged from 0.000 to 2.462 and from 0.000 to 0.127 mg·100 g⁻¹, respectively. The highest vitamin B5 content, 2.462 mg·100 g⁻¹, was found in fresh fatsia shoots (stem vegetables), and the highest vitamin B6 content, 0.127 mg·100 g⁻¹, in fresh spinach beet (leafy vegetables). Moreover, the vitamin B5 and B6 contents after parboiling were reduced or not detected in most vegetables. In particular, the vitamin B5 content of parboiled fatsia shoots and the vitamin B6 contents of parboiled yellow potato and spinach beet decreased 20- and 4-fold, respectively, compared with the fresh materials. These results can serve as important basic data for the utilization and processing of various vegetable crops, dietary information, school meal management, and the national health of Koreans.

A Reflectance Normalization Via BRDF Model for the Korean Vegetation using MODIS 250m Data (한반도 식생에 대한 MODIS 250m 자료의 BRDF 효과에 대한 반사도 정규화)

  • Yeom, Jong-Min;Han, Kyung-Soo;Kim, Young-Seup
    • Korean Journal of Remote Sensing, v.21 no.6, pp.445-456, 2005
  • The land surface parameters should be determined with sufficient accuracy because they play an important role in climate change near the ground. Since the surface reflectance is strongly anisotropic, off-nadir viewing results in a strong dependence of the observations on the Sun-target-sensor geometry, which contributes random noise produced by surface angular effects. The principal objective of this study is to provide a database of accurate surface reflectance, with the angular effects removed, from MODIS 250 m reflective channel data over Korea. The MODIS (Moderate Resolution Imaging Spectroradiometer) sensor provides visible and near-infrared channel reflectance at 250 m resolution on a daily basis. The successive analytic processing steps were first performed on a per-pixel basis to remove cloudy pixels. Geometric distortion was corrected by nearest neighbor resampling using a second-order polynomial obtained from the geolocation information of the MODIS data set. To correct the surface anisotropy effects, this paper applies a semi-empirical kernel-driven Bidirectional Reflectance Distribution Function (BRDF) model. The algorithm inverts the kernel-driven model against the angular components, such as the viewing zenith, solar zenith, viewing azimuth, and solar azimuth angles, from the reflectance observed by the satellite. First, sets of observations composed over a 31-day period are used to fit the BRDF model. In the next step, nadir-view reflectance normalization is carried out by adjusting the angular components separated by the BRDF model for each spectral band and each pixel. The modeled reflectance values show good agreement with the measured reflectance values, with an overall RMSE (Root Mean Square Error) of about 0.01 (maximum = 0.03). Finally, we provide a normalized surface reflectance database consisting of 36 images for 2001 over Korea.
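
The inversion and nadir normalization described above can be sketched as an ordinary least-squares fit of the kernel-driven model R = f_iso + f_vol·K_vol + f_geo·K_geo. A real implementation would compute the volumetric and geometric kernel values from the sun-target-sensor geometry of each observation; in the sketch below those kernel values and the reflectances are illustrative numbers so that only the fitting and normalization steps are shown.

```python
# Minimal sketch of kernel-driven BRDF inversion and nadir normalization.
# Kernel values and reflectances are illustrative; a real implementation would
# evaluate the volumetric and geometric kernels from the observation geometry.
import numpy as np

# Observations collected over e.g. a 31-day window for one pixel/band:
# columns = [K_vol, K_geo], one row per clear-sky observation.
kernels = np.array([
    [ 0.12, -0.55],
    [-0.03, -0.80],
    [ 0.25, -0.30],
    [ 0.05, -0.65],
    [ 0.18, -0.40],
])
reflectance = np.array([0.21, 0.17, 0.26, 0.19, 0.23])  # observed values

# Model: R = f_iso + f_vol * K_vol + f_geo * K_geo  ->  solve for the f's.
A = np.column_stack([np.ones(len(kernels)), kernels])
(f_iso, f_vol, f_geo), *_ = np.linalg.lstsq(A, reflectance, rcond=None)

# Normalized (nadir-view) reflectance: evaluate the fitted model at the kernel
# values of the reference geometry (illustrative numbers here).
k_vol_nadir, k_geo_nadir = 0.02, -0.70
r_nadir = f_iso + f_vol * k_vol_nadir + f_geo * k_geo_nadir

rmse = np.sqrt(np.mean((A @ np.array([f_iso, f_vol, f_geo]) - reflectance) ** 2))
print(f"f_iso={f_iso:.3f}, f_vol={f_vol:.3f}, f_geo={f_geo:.3f}, "
      f"nadir reflectance={r_nadir:.3f}, fit RMSE={rmse:.4f}")
```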

The Effect of Bilateral Eye Movements on Face Recognition in Patients with Schizophrenia (양측성 안구운동이 조현병 환자의 얼굴 재인에 미치는 영향)

  • Lee, Na-Hyun;Kim, Ji-Woong;Im, Woo-Young;Lee, Sang-Min;Lim, Sanghyun;Kwon, Hyukchan;Kim, Min-Young;Kim, Kiwoong;Kim, Seung-Jun
    • Korean Journal of Psychosomatic Medicine, v.24 no.1, pp.102-108, 2016
  • Objectives : A deficit in recognition memory is one of the common neurocognitive impairments in patients with schizophrenia, and these patients have also been reported to fail to show enhanced memory for emotional stimuli. Previous studies have shown that bilateral eye movements enhance memory retrieval. Therefore, this study was conducted to investigate the memory enhancement produced by bilaterally alternating eye movements in patients with schizophrenia. Methods : Twenty-one patients with schizophrenia participated in this study. The participants learned faces (angry or neutral), and then performed a recognition memory task on the faces after bilateral eye movements or central fixation. Recognition accuracy, response bias, and mean response time for hits were compared and analysed using a two-way repeated-measures analysis of variance. Results : There was a significant effect of the bilateral eye movement condition on mean response time (F=5.812, p<0.05) and response bias (F=10.366, p<0.01). No statistically significant interaction was observed between eye movement condition and facial emotion type. Conclusions : Irrespective of the emotional content of the facial stimuli, recognition memory processing was enhanced after bilateral eye movements in patients with schizophrenia. Further study is needed to investigate the neural mechanism underlying memory enhancement induced by bilateral eye movements in patients with schizophrenia.
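
For reference, the 2 (eye-movement condition) x 2 (facial emotion) repeated-measures design described above can be analyzed with a standard two-way repeated-measures ANOVA, for example with statsmodels as sketched below on synthetic response-time data. The numbers are fabricated for illustration and do not reproduce the reported statistics.

```python
# Minimal sketch of a 2x2 within-subject repeated-measures ANOVA on synthetic data.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(1)
subjects = range(21)
conditions, emotions = ["bilateral", "fixation"], ["angry", "neutral"]

rows = []
for s in subjects:
    for c in conditions:
        for e in emotions:
            base = 900 if c == "fixation" else 840   # assumed faster after eye movements
            rows.append({"subject": s, "condition": c, "emotion": e,
                         "rt": base + rng.normal(0, 60)})
df = pd.DataFrame(rows)

# Two within-subject factors; one observation per subject and cell.
result = AnovaRM(df, depvar="rt", subject="subject",
                 within=["condition", "emotion"]).fit()
print(result.summary())
```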

Application of Terrestrial LiDAR for Reconstructing 3D Images of Fault Trench Sites and Web-based Visualization Platform for Large Point Clouds (지상 라이다를 활용한 트렌치 단층 단면 3차원 영상 생성과 웹 기반 대용량 점군 자료 가시화 플랫폼 활용 사례)

  • Lee, Byung Woo;Kim, Seung-Sep
    • Economic and Environmental Geology, v.54 no.2, pp.177-186, 2021
  • For earthquake disaster management and mitigation on the Korean Peninsula, active fault investigations have been conducted for the past five years. In particular, the investigation of sediment-covered active faults integrates geomorphological analysis of airborne LiDAR data, surface geological surveys, and geophysical exploration, and exposes subsurface active faults through trench surveys. However, the fault traces revealed by trench surveys are available for investigation only for a limited time before the trenches are restored to their previous condition, so the geological data describing the trench sites remain only as qualitative descriptions in research articles and reports. To overcome this temporal limitation of geological studies, we used a terrestrial LiDAR to produce 3D point clouds of the fault trench sites and restored them in digital space. Terrestrial LiDAR scanning was conducted at two trench sites located near the Yangsan Fault and acquired amplitude and reflectance from the surveyed area, as well as color information obtained by combining photogrammetry with the LiDAR system. The scanned data were merged into 3D point clouds with an average geometric error of 0.003 m, which is sufficiently accurate to restore the details of the surveyed trench sites. However, we found that further post-processing of the scanned data would be necessary, because the amplitudes and reflectances of the point clouds varied with the scan positions and the colors of the trench surfaces were captured differently depending on the light exposure at the time of the survey. Such point clouds are large in size and can be visualized with only a limited set of software packages, which restricts data sharing among researchers. As an alternative, we suggest Potree, an open-source web-based platform, for visualizing the point clouds of the trench sites. In this study, as a result, we show that terrestrial LiDAR data can be practical for increasing the reproducibility of geological field studies and can be made easily accessible to researchers and students in the Earth Sciences.
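
As a rough illustration of how the registration quality of overlapping scans might be checked with open-source tools, the sketch below uses the Open3D library to refine the alignment of two scans with ICP and report the inlier RMSE. The file names and parameters are placeholders; this is not the software or workflow used in the study, and the merged clouds would still need a converter such as PotreeConverter before web visualization with Potree.

```python
# Minimal sketch: refine the alignment of two overlapping LiDAR scans with ICP
# and report the registration error. File names and parameters are placeholders.
import open3d as o3d

source = o3d.io.read_point_cloud("scan_position_1.ply")   # hypothetical files
target = o3d.io.read_point_cloud("scan_position_2.ply")

# Downsample to speed up the ICP refinement of the initial alignment.
source_ds = source.voxel_down_sample(voxel_size=0.01)
target_ds = target.voxel_down_sample(voxel_size=0.01)

result = o3d.pipelines.registration.registration_icp(
    source_ds, target_ds,
    max_correspondence_distance=0.05,
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint(),
)

# inlier_rmse plays the role of the average geometric error quoted above.
print("fitness:", result.fitness, "inlier RMSE [m]:", result.inlier_rmse)
source.transform(result.transformation)  # apply the refined alignment
```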

Temperature and Solar Radiation Prediction Performance of High-resolution KMAPP Model in Agricultural Areas: Clear Sky Case Studies in Cheorwon and Jeonbuk Province (고해상도 규모상세화모델 KMAPP의 농업지역 기온 및 일사량 예측 성능: 맑은 날 철원 및 전북 사례 연구)

  • Shin, Seoleun;Lee, Seung-Jae;Noh, Ilseok;Kim, Soo-Hyun;So, Yun-Young;Lee, Seoyeon;Min, Byung Hoon;Kim, Kyu Rang
    • Korean Journal of Agricultural and Forest Meteorology, v.22 no.4, pp.312-326, 2020
  • Weather forecasts at 100 m resolution are generated through a statistical downscaling process by the Korea Meteorological Administration Post-Processing (KMAPP) system. KMAPP data have begun to be used in various sectors such as hydrology, agriculture, renewable energy, and sports. The Cheorwon and Jeonbuk areas contain relatively wide horizontal plains in Korea, which is otherwise dominated by complex mountainous terrain. Cheorwon, which has a large amount of in-situ and remotely sensed phenological data over its extensive rice paddy cultivation areas, is considered an appropriate area for verifying KMAPP prediction performance in agricultural regions. In this study, the performance of KMAPP temperature predictions in response to ecological changes in the agricultural areas of Cheorwon was compared and verified using KMA and National Center for AgroMeteorology (NCAM) observations. In addition, during the heat wave in Jeonbuk Province, the solar radiation forecast was verified using Automated Synoptic Observing System (ASOS) data to assess the usefulness of KMAPP forecasts as input data for application models such as livestock heat stress models. Although more cases still need to be collected and examined, the improvement in temperature forecasting performance after harvest in agricultural areas, relative to ordinary residential areas, allows an indirect inference of the biophysical and phenological effects on forecasting accuracy. For solar radiation, the forecasts tend to be consistent with the observed values, although errors are inevitable due to human activity on agricultural land and data unit conversion, so the KMAPP data are expected to be usable in application models as detailed regional forecast data.
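
The point verification against station observations described above typically reduces to a few summary statistics such as bias, RMSE, and correlation; a minimal sketch follows. The arrays are synthetic values, not KMAPP forecasts or ASOS observations.

```python
# Minimal sketch of point-verification statistics (bias, RMSE, correlation)
# for comparing downscaled forecasts with station observations. Synthetic data.
import numpy as np

forecast_t2m = np.array([23.1, 24.8, 26.0, 27.5, 25.9])   # hypothetical forecasts [deg C]
observed_t2m = np.array([22.7, 24.2, 26.4, 28.1, 25.5])   # hypothetical observations

def verify(forecast, observed):
    error = forecast - observed
    return {
        "bias": float(np.mean(error)),
        "rmse": float(np.sqrt(np.mean(error ** 2))),
        "corr": float(np.corrcoef(forecast, observed)[0, 1]),
    }

print(verify(forecast_t2m, observed_t2m))
```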

An Outlier Detection Using Autoencoder for Ocean Observation Data (해양 이상 자료 탐지를 위한 오토인코더 활용 기법 최적화 연구)

  • Kim, Hyeon-Jae;Kim, Dong-Hoon;Lim, Chaewook;Shin, Yongtak;Lee, Sang-Chul;Choi, Youngjin;Woo, Seung-Buhm
    • Journal of Korean Society of Coastal and Ocean Engineers, v.33 no.6, pp.265-274, 2021
  • Outlier detection in ocean data has traditionally been performed using statistical and distance-based machine learning algorithms. Recently, AI-based methods have received much attention, and so-called supervised learning methods that require classification information for the data are mainly used. Supervised learning requires much time and cost because the classification information (labels) must be assigned manually to all the data used for learning. In this study, an autoencoder based on unsupervised learning was applied for outlier detection to overcome this problem. Two experiments were designed: univariate learning, in which only the SST data among the observations at Deokjeok Island were used, and multivariate learning, in which SST, air temperature, wind direction, wind speed, air pressure, and humidity were used. The data cover 25 years, from 1996 to 2020, and pre-processing that considers the characteristics of ocean data was applied. Outlier detection on the real SST data was then attempted with the trained univariate and multivariate autoencoders. To compare model performance, various outlier detection methods were applied to synthetic data with artificially inserted errors. A quantitative evaluation of these methods gave accuracies of about 96% for the multivariate case and 91% for the univariate case, indicating that the multivariate autoencoder had the better outlier detection performance. Outlier detection using an unsupervised autoencoder is expected to be useful in various applications because it can reduce subjective classification errors and the cost and time required for data labeling.
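
A minimal sketch of the unsupervised autoencoder approach is given below: the network is trained to reconstruct (mostly clean) observations, and records with large reconstruction errors are flagged as outliers. The architecture, threshold, training data, and the choice of Keras as the framework are illustrative assumptions, not the configuration used in the paper.

```python
# Minimal sketch of autoencoder-based outlier detection via reconstruction error.
# Architecture, threshold, and synthetic data are illustrative assumptions.
import numpy as np
from tensorflow import keras

rng = np.random.default_rng(0)
# Hypothetical standardized observations:
# [SST, air temp, wind speed, pressure, humidity, wind dir]
x = rng.normal(0.0, 1.0, size=(5000, 6)).astype("float32")

autoencoder = keras.Sequential([
    keras.Input(shape=(6,)),
    keras.layers.Dense(4, activation="relu"),
    keras.layers.Dense(2, activation="relu"),     # bottleneck
    keras.layers.Dense(4, activation="relu"),
    keras.layers.Dense(6, activation="linear"),
])
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(x, x, epochs=20, batch_size=64, verbose=0)

# Score records by reconstruction error; large errors indicate outliers.
x_new = x[:100].copy()
x_new[0, 0] += 8.0                                # inject an artificial SST spike
errors = np.mean((autoencoder.predict(x_new, verbose=0) - x_new) ** 2, axis=1)
threshold = np.percentile(errors, 99)             # illustrative threshold choice
print("flagged indices:", np.where(errors > threshold)[0])
```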