• Title/Abstract/Keyword: Application accuracy


Current Status of Cattle Genome Sequencing and Analysis using Next Generation Sequencing (차세대유전체해독 기법을 이용한 소 유전체 해독 연구현황)

  • Choi, Jung-Woo;Chai, Han-Ha;Yu, Dayeong;Lee, Kyung-Tai;Cho, Yong-Min;Lim, Dajeong
    • Journal of Life Science
    • /
    • Vol. 25, No. 3
    • /
    • pp.349-356
    • /
    • 2015
  • Thanks to recent advances in next-generation sequencing (NGS) technology, diverse livestock species have been dissected at the genome-wide sequence level. As for cattle, there are currently four Korean indigenous breeds registered with the Domestic Animal Diversity Information System of the Food and Agriculture Organization of the United Nations: Hanwoo, Chikso, Heugu, and Jeju Heugu. These native genetic resources were recently whole-genome resequenced using various NGS technologies, providing enormous single nucleotide polymorphism information across the genomes. NGS applications have further provided biological insights, such as the finding that Korean native cattle are genetically distant from some cattle breeds of European origin. In addition, NGS technology was successfully applied to detect structural variations, particularly copy number variations, which were previously difficult to identify at the genome-wide level with reasonable accuracy. Despite these successes, the recent studies also share an inherent limitation: only a representative individual of each breed was sequenced. To elucidate the biological implications of the sequenced data, confirmatory studies should follow that sequence or validate a population of each breed. Because NGS sequencing prices have consistently dropped, various population genomic theories can now be applied to sequencing data obtained from populations of each breed of interest. There are still few such population studies available for the Korean native cattle breeds, but this situation will soon improve with the recent initiative for NGS sequencing of diverse native livestock resources, including the Korean native cattle breeds.

VKOSPI Forecasting and Option Trading Application Using SVM (SVM을 이용한 VKOSPI 일 중 변화 예측과 실제 옵션 매매에의 적용)

  • Ra, Yun Seon;Choi, Heung Sik;Kim, Sun Woong
    • Journal of Intelligence and Information Systems
    • /
    • Vol. 22, No. 4
    • /
    • pp.177-192
    • /
    • 2016
  • Machine learning is a field of artificial intelligence. It refers to an area of computer science concerned with giving machines the ability to perform their own data analysis, decision making, and forecasting. One representative machine learning model is the artificial neural network, a statistical learning algorithm inspired by biological neural network structure. Other machine learning models include the decision tree model, the naive Bayes model, and the SVM (support vector machine) model. Among these, we use the SVM model in this study because it is mainly used for the classification and regression analysis that fits our study well. The core principle of SVM is to find a reasonable hyperplane that separates different groups in the data space. Given information about the data in any two groups, the SVM model judges which group new data belongs to based on the hyperplane obtained from the given data set. Thus, the more meaningful data there is, the better the machine learning ability. In recent years, many financial experts have focused on machine learning, seeing the possibility of combining it with the financial field, where vast amounts of financial data exist. Machine learning techniques have proved powerful in describing non-stationary and chaotic stock price dynamics, and many studies have successfully forecast stock prices using machine learning algorithms. Recently, financial companies have begun to provide the Robo-Advisor service, a compound of "Robot" and "Advisor," which can perform various financial tasks through advanced algorithms using rapidly changing, huge amounts of data. The Robo-Advisor's main tasks are to advise investors according to their personal investment propensity and to manage portfolios automatically.
In this study, we propose a method of forecasting the Korean volatility index, VKOSPI, using the SVM model, one of the machine learning methods, and apply it to real option trading to increase trading performance. VKOSPI is a measure of the future volatility of the KOSPI 200 index based on KOSPI 200 index option prices; it is similar to the VIX index in the United States, which is based on S&P 500 option prices. The Korea Exchange (KRX) calculates and announces the VKOSPI index in real time. VKOSPI behaves like ordinary volatility and affects option prices: the direction of VKOSPI and option prices show a positive relation regardless of the option type (call and put options with various strike prices). If volatility increases, all call and put option premiums increase because the probability of the options being exercised increases. Through Vega, the Black-Scholes measure of an option's sensitivity to changes in volatility, an investor can know in real time how much the option price rises as volatility rises. Therefore, accurate forecasting of VKOSPI movements is one of the important factors that can generate profit in option trading. In this study, we verified through real option data that accurate forecasts of VKOSPI can produce large profits in real option trading. To the best of our knowledge, there have been no studies on predicting the direction of VKOSPI with machine learning and applying the predictions to actual option trading. We predicted daily VKOSPI changes with the SVM model and then entered an intraday option strangle position, which profits as option prices fall, only when VKOSPI was expected to decline during the day. We analyzed the results and tested whether trading based on the SVM's predictions is applicable to real option trading.
The results showed that the prediction accuracy for VKOSPI was 57.83% on average, and the number of position entries was 43.2, less than half of the benchmark (100). A small number of trades is an indicator of trading efficiency. In addition, the experiment showed that the trading performance was significantly higher than the benchmark.
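The trading rule described above, entering an intraday short strangle only on days when the model predicts VKOSPI will fall, can be sketched as follows. The prediction labels and option prices are purely illustrative; the paper's actual SVM features and option contracts are not reproduced here.

```python
def strangle_pnl(call_open, put_open, call_close, put_close):
    """Profit of a short strangle held intraday: premium collected at the
    open minus the cost of buying both options back at the close."""
    return (call_open + put_open) - (call_close + put_close)

def run_strategy(days):
    """days: list of dicts holding a predicted VKOSPI direction and option
    prices; a position is entered only when the prediction is 'down'."""
    total = 0.0
    entries = 0
    for d in days:
        if d["pred"] == "down":  # classifier expects falling volatility
            entries += 1
            total += strangle_pnl(d["c_open"], d["p_open"],
                                  d["c_close"], d["p_close"])
    return total, entries

# Toy data: option prices fall on 'down' days when the forecast is right.
days = [
    {"pred": "down", "c_open": 2.1, "p_open": 1.8, "c_close": 1.6, "p_close": 1.4},
    {"pred": "up",   "c_open": 2.0, "p_open": 1.7, "c_close": 2.5, "p_close": 2.2},
    {"pred": "down", "c_open": 1.9, "p_open": 1.5, "c_close": 2.0, "p_close": 1.7},
]
pnl, n = run_strategy(days)
```

Only two of the three days trigger an entry, mirroring the paper's point that fewer entries than the benchmark indicate trading efficiency.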

A Combat Effectiveness Evaluation Algorithm Considering Technical and Human Factors in C4I System (NCW 환경에서 C4I 체계 전투력 상승효과 평가 알고리즘 : 기술 및 인적 요소 고려)

  • Jung, Whan-Sik;Park, Gun-Woo;Lee, Jae-Yeong;Lee, Sang-Hoon
    • Journal of Intelligence and Information Systems
    • /
    • Vol. 16, No. 2
    • /
    • pp.55-72
    • /
    • 2010
  • Recently, as information technology has advanced, the battlefield environment has changed from platform-centric warfare (PCW), which focuses on maneuvering forces, into network-centric warfare (NCW), which is based on the connectivity of each asset through the warfare information system. In particular, the C4I (Command, Control, Communication, Computer and Intelligence) system can be an important factor in achieving NCW. It is generally used to provide direction across distributed forces and status feedback from those forces. It can provide important information more quickly, and in the correct format, to friendly units, and it can achieve information superiority through SA (Situational Awareness). Most advanced countries have developed such systems and already applied them in military operations. Therefore, ROK forces have also been developing C4I systems such as KJCCS (Korea Joint Command Control System), and the budgets for establishing warfare information systems are increasing. However, it is difficult to evaluate C4I effectiveness properly because of a deficiency of methods: existing evaluation methods lay disproportionate emphasis on technical factors and leave much to be desired regarding human factors. A new combat effectiveness evaluation method suitable for NCW, one that considers both technical and human factors, is therefore needed. In this study, we propose a new combat effectiveness evaluation algorithm called E-TechMan (A Combat Effectiveness Evaluation Algorithm Considering Technical and Human Factors in C4I System). This algorithm follows the form of Newton's second law ($F=\frac{m{\Delta}{\upsilon}}{{\Delta}t}{\Rightarrow}E=\frac{M{\upsilon}I}{T}{\times}C$). Five factors are considered in the combat effectiveness evaluation: network power (M), movement velocity (v), information accuracy (I), command and control time (T), and collaboration level (C).
Previous research did not consider the values of the nodes and arcs when evaluating network power after a C4I system has been established, nor did it consider collaboration level, which can be a major factor in combat effectiveness. The E-TechMan algorithm is applied to the JFOS-K (Joint Fire Operating System-Korea) system, which connects the KJCCS of the Korean armed forces with the JADOCS (Joint Automated Deep Operations Coordination System) of the U.S. armed forces and achieves a real-time sensor-to-shooter system at the JCS (Joint Chiefs of Staff) level. We compared the combat effectiveness evaluation results of E-TechMan with those of other algorithms (e.g., C2 Theory, Newton's second law) and found that E-TechMan evaluates combat effectiveness more effectively and substantively. This study is meaningful because it improves how realistically combat effectiveness is described for C4I systems. Part 2 describes the changes in the war paradigm and previous combat effectiveness evaluation methods such as C2 theory, Part 3 explains the E-TechMan algorithm in detail, Part 4 presents the application to JFOS-K and compares the results with other algorithms, and Part 5 presents the conclusions.
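Read literally, the E-TechMan analogy multiplies the technical factors and divides by command and control time, scaled by collaboration. A minimal sketch under that reading, assuming the five factors are already available as plain numbers (the paper's actual measurement models for each factor are not reproduced here):

```python
def combat_effectiveness(network_power, velocity, info_accuracy,
                         c2_time, collaboration):
    """E = (M * v * I / T) * C: higher network power, speed, and information
    accuracy raise effectiveness; longer command-and-control time lowers it;
    the collaboration level scales the technical product."""
    if c2_time <= 0:
        raise ValueError("command and control time must be positive")
    return (network_power * velocity * info_accuracy / c2_time) * collaboration

# Illustrative factor values only.
e = combat_effectiveness(network_power=10.0, velocity=2.0,
                         info_accuracy=0.9, c2_time=3.0, collaboration=1.2)
```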

Detection of Phantom Transaction using Data Mining: The Case of Agricultural Product Wholesale Market (데이터마이닝을 이용한 허위거래 예측 모형: 농산물 도매시장 사례)

  • Lee, Seon Ah;Chang, Namsik
    • Journal of Intelligence and Information Systems
    • /
    • Vol. 21, No. 1
    • /
    • pp.161-177
    • /
    • 2015
  • With the rapid evolution of technology, the size, number, and type of databases have increased concomitantly, so data mining approaches face many challenging applications. One such application is the discovery of fraud patterns in agricultural product wholesale transactions. The agricultural product wholesale market in Korea is huge, and vast numbers of transactions are made every day. The demand for agricultural products continues to grow, and the use of electronic auction systems raises the operational efficiency of the wholesale market. The number of unusual transactions can also be assumed to grow in proportion to the trading amount, and an unusual transaction is often the first sign of fraud. However, it is very difficult to identify and detect these transactions and the corresponding fraud in the agricultural product wholesale market because the types of fraud are more sophisticated than ever before. Fraud can be detected by verifying the overall transaction records manually, but this requires a significant amount of human resources and ultimately is not a practical approach. Fraud can also be revealed by a victim's report or complaint, but there are usually no victims in agricultural product wholesale frauds because they are committed through collusion between an auction company and an intermediary wholesaler. Nevertheless, transaction records must be monitored continuously to prevent fraud, because fraud not only disturbs the fair trade order of the market but also rapidly reduces the market's credibility. Applying data mining to such an environment is very useful, since it can properly discover unknown fraud patterns or features in a large volume of transaction data.
The objective of this research is to empirically investigate the factors necessary to detect fraudulent transactions in an agricultural product wholesale market by developing a data mining based fraud detection model. One of the major frauds is the phantom transaction, a colluding transaction between the seller (auction company or forwarder) and the buyer (intermediary wholesaler). They pretend to fulfill the transaction by recording false data in the online transaction processing system without actually selling products, and the seller receives money from the buyer. This leads to the overstatement of sales performance and to illegal money transfers, which reduce the credibility of the market. This paper reviews the environment of the wholesale market, including the types of transactions, the roles of market participants, and the various types and characteristics of fraud, and introduces the whole process of developing the phantom transaction detection model. The process consists of the following four modules: (1) data cleaning and standardization, (2) statistical data analysis such as distribution and correlation analysis, (3) construction of a classification model using a decision-tree induction approach, and (4) verification of the model in terms of hit ratio. We collected real data from six associations of agricultural producers in metropolitan markets. The final model, built with a decision-tree induction approach, revealed that the monthly average trading price of an item offered by a forwarder is a key variable in detecting phantom transactions. The verification procedure also confirmed the suitability of the results. However, even though the performance of the model is satisfactory, sensitive issues remain for improving classification accuracy and the conciseness of rules. One such issue is the robustness of the data mining model: data mining is very much data-oriented, so data mining models tend to be very sensitive to changes in data or circumstances.
This non-robustness means the model requires continuous remodeling as data or circumstances change. We hope that this paper suggests valuable guidelines to organizations and companies that consider introducing or constructing a fraud detection model in the future.
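Since the final decision tree singled out the forwarder's monthly average trading price, the classifier can be caricatured as a one-split stump, with the verification module reduced to a hit-ratio computation. The threshold, field names, and sample records below are invented for illustration; the paper's actual tree and data are not shown.

```python
def classify(record, price_threshold=1000.0):
    """Flag a transaction as 'phantom' when the forwarder's monthly average
    trading price falls below a threshold (threshold is illustrative)."""
    return "phantom" if record["avg_price"] < price_threshold else "normal"

def hit_ratio(records):
    """Verification metric from the paper's module (4): fraction of records
    whose predicted label matches the true label."""
    hits = sum(1 for r in records if classify(r) == r["label"])
    return hits / len(records)

data = [
    {"avg_price": 450.0,  "label": "phantom"},
    {"avg_price": 2300.0, "label": "normal"},
    {"avg_price": 800.0,  "label": "phantom"},
    {"avg_price": 1500.0, "label": "normal"},
    {"avg_price": 1200.0, "label": "phantom"},  # missed by the stump
]
ratio = hit_ratio(data)  # 0.8
```

The last record illustrates the robustness issue raised above: a single-variable rule misses phantom transactions priced like normal ones, so the model needs remodeling as the data shift.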

Change Detection for High-resolution Satellite Images Using Transfer Learning and Deep Learning Network (전이학습과 딥러닝 네트워크를 활용한 고해상도 위성영상의 변화탐지)

  • Song, Ah Ram;Choi, Jae Wan;Kim, Yong Il
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • Vol. 37, No. 3
    • /
    • pp.199-208
    • /
    • 2019
  • As the number of available satellites increases and technology advances, image information outputs are becoming increasingly diverse and a large amount of data is accumulating. In this study, we propose a change detection method for high-resolution satellite images that uses transfer learning and a deep learning network to overcome the limitation caused by insufficient training data through the use of pre-trained information. The deep learning network used in this study comprises convolutional layers to extract spatial and spectral information and convolutional long short-term memory layers to analyze time-series information. To use the learned information, the two initial convolutional layers of the change detection network are initialized with values learned from 40,000 patches of the ISPRS (International Society for Photogrammetry and Remote Sensing) dataset. In addition, 2D (2-dimensional) and 3D (3-dimensional) kernels were used to find the optimal structure for the high-resolution satellite images. The experimental results for KOMPSAT-3A (KOrea Multi-Purpose SATellite-3A) images show that this change detection method can effectively extract changed/unchanged pixels while being less sensitive to spurious changes caused by shadow and relief displacement. In addition, the change detection accuracy at both sites was improved by using 3D kernels, because a 3D kernel can consider not only the spatial information but also the spectral information. This study indicates that changes in high-resolution satellite images can be detected effectively using the constructed image information and deep learning network. In future work, the pre-trained change detection network will be applied to newly obtained images to extend the scope of application.
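Change detection is ultimately scored pixel by pixel against a reference change map. A minimal sketch of such an accuracy computation on binary change masks (the arrays below are toy data, not the KOMPSAT-3A results):

```python
def change_detection_accuracy(pred, truth):
    """Fraction of pixels whose predicted changed/unchanged label (1/0)
    matches the reference change map."""
    if len(pred) != len(truth):
        raise ValueError("maps must have the same number of pixels")
    correct = sum(1 for p, t in zip(pred, truth) if p == t)
    return correct / len(pred)

# Toy 8-pixel masks: 1 = changed, 0 = unchanged.
pred  = [1, 0, 0, 1, 1, 0, 0, 1]
truth = [1, 0, 1, 1, 0, 0, 0, 1]
acc = change_detection_accuracy(pred, truth)  # 0.75
```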

Performance Evaluation of Reconstruction Algorithms for DMIDR (DMIDR 장치의 재구성 알고리즘 별 성능 평가)

  • Kwak, In-Suk;Lee, Hyuk;Moon, Seung-Cheol
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • Vol. 23, No. 2
    • /
    • pp.29-37
    • /
    • 2019
  • Purpose: DMIDR (Discovery Molecular Imaging Digital Ready, General Electric Healthcare, USA) is a PET/CT scanner designed to allow application of PSF (Point Spread Function), TOF (Time of Flight), and the Q.Clear algorithm. In particular, Q.Clear is a reconstruction algorithm that can overcome the limitations of OSEM (Ordered Subset Expectation Maximization) and reduce image noise at the voxel level. The aim of this paper is to evaluate the performance of the reconstruction algorithms and to optimize the algorithm combination for accurate SUV (Standardized Uptake Value) measurement and improved lesion detectability. Materials and Methods: A PET phantom was filled with $^{18}F$-FDG so that the hot-to-background radioactivity concentration ratio was 2:1, 4:1, and 8:1, and scans were performed using the NEMA protocols. Scan data were reconstructed using the following combinations: (1) VPFX (VUE Point FX (TOF)), (2) VPHD-S (VUE Point HD+PSF), (3) VPFX-S (TOF+PSF), (4) QCHD-S-400 (VUE Point HD+Q.Clear(${\beta}$-strength 400)+PSF), (5) QCFX-S-400 (TOF+Q.Clear(${\beta}$-strength 400)+PSF), (6) QCHD-S-50 (VUE Point HD+Q.Clear(${\beta}$-strength 50)+PSF), and (7) QCFX-S-50 (TOF+Q.Clear(${\beta}$-strength 50)+PSF). CR (Contrast Recovery) and BV (Background Variability) were compared, as were the SNR (Signal to Noise Ratio) and RC (Recovery Coefficient) of counts and SUV. Results: VPFX-S showed the highest CR value for sphere sizes of 10 and 13 mm, and QCFX-S-50 showed the highest value for spheres larger than 17 mm. In the comparison of BV and SNR, QCFX-S-400 and QCHD-S-400 showed good results. The measured SUV was proportional to the H/B ratio, while the RC for SUV was inversely proportional to the H/B ratio, with QCFX-S-50 showing the highest value; the Q.Clear reconstructions with a ${\beta}$-strength of 400 showed lower values. Conclusion: When a higher ${\beta}$-strength was applied, Q.Clear showed better image quality by reducing noise.
Conversely, when a lower ${\beta}$-strength was applied, Q.Clear showed increased sharpness and decreased PVE (Partial Volume Effect), making it possible to measure SUV with a higher RC than under conventional reconstruction conditions. An appropriate choice among these reconstruction algorithms can improve accuracy and lesion detectability; for this reason, the algorithm parameters should be optimized according to the purpose.
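The CR and BV figures compared above follow the standard NEMA image-quality definitions. A sketch, assuming mean sphere counts, mean background counts, and the background standard deviation have already been measured in regions of interest (the numeric values are illustrative):

```python
def contrast_recovery(c_hot, c_bkg, activity_ratio):
    """Percent contrast recovery of a hot sphere, in the spirit of the NEMA
    NU 2 definition: ((C_hot / C_bkg - 1) / (R - 1)) * 100, where R is the
    hot-to-background activity concentration ratio (2, 4, or 8 here)."""
    return (c_hot / c_bkg - 1.0) / (activity_ratio - 1.0) * 100.0

def background_variability(sd_bkg, c_bkg):
    """Percent background variability: (SD_bkg / C_bkg) * 100."""
    return sd_bkg / c_bkg * 100.0

# Illustrative values for a 4:1 phantom fill.
cr = contrast_recovery(c_hot=3.4, c_bkg=1.0, activity_ratio=4.0)  # ~80%
bv = background_variability(sd_bkg=0.05, c_bkg=1.0)               # ~5%
```

A perfect reconstruction would recover the full 4:1 contrast (CR = 100%); partial volume effects pull CR below that, which is why smaller spheres score lower.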

Application of Terrestrial LiDAR for Reconstructing 3D Images of Fault Trench Sites and Web-based Visualization Platform for Large Point Clouds (지상 라이다를 활용한 트렌치 단층 단면 3차원 영상 생성과 웹 기반 대용량 점군 자료 가시화 플랫폼 활용 사례)

  • Lee, Byung Woo;Kim, Seung-Sep
    • Economic and Environmental Geology
    • /
    • Vol. 54, No. 2
    • /
    • pp.177-186
    • /
    • 2021
  • For earthquake disaster management and mitigation on the Korean Peninsula, active fault investigations have been conducted for the past five years. In particular, the investigation of sediment-covered active faults integrates geomorphological analysis of airborne LiDAR data, surface geological surveys, and geophysical exploration, and unearths subsurface active faults by trench survey. However, the fault traces revealed by trench surveys are available for investigation only for a limited time before the site is restored to its previous condition, so the geological data describing fault trench sites survive only as qualitative data in research articles and reports. To overcome the limitations imposed by this temporal nature of geological studies, we used a terrestrial LiDAR to produce 3D point clouds of the fault trench sites and restored them in a digital space. The terrestrial LiDAR scanning was conducted at two trench sites located near the Yangsan Fault and acquired amplitude and reflectance from the surveyed area, as well as color information obtained by combining photogrammetry with the LiDAR system. The scanned data were merged to form 3D point clouds with an average geometric error of 0.003 m, sufficient accuracy to restore the details of the surveyed trench sites. However, we found that more post-processing of the scanned data would be necessary, because the amplitudes and reflectances of the point clouds varied with scan position and the colors of the trench surfaces were captured differently depending on the light exposure at the time. Such point clouds are also very large and can be visualized only through a limited set of software packages, which limits data sharing among researchers. As an alternative, we suggest Potree, an open-source web-based platform, for visualizing the point clouds of the trench sites.
As a result, we found that terrestrial LiDAR data can practically increase the reproducibility of geological field studies and can be made easily accessible to researchers and students in the Earth Sciences.
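The 0.003 m figure above is an average geometric error between merged scans. Under the simplifying assumption that corresponding points from two scan positions are already paired, such an error can be computed as a mean point-to-point distance (the coordinates below are toy values, not the trench data):

```python
import math

def mean_geometric_error(cloud_a, cloud_b):
    """Mean Euclidean distance between paired points of two co-registered
    point clouds; a simple proxy for a scan-merge error estimate."""
    if len(cloud_a) != len(cloud_b):
        raise ValueError("point clouds must be paired point for point")
    return sum(math.dist(a, b) for a, b in zip(cloud_a, cloud_b)) / len(cloud_a)

# Two toy scans of the same surface, offset by 3 mm along x.
scan_a = [(0.0, 0.0, 0.0), (1.0, 2.0, 0.5)]
scan_b = [(0.003, 0.0, 0.0), (1.003, 2.0, 0.5)]
err = mean_geometric_error(scan_a, scan_b)  # ~0.003 m
```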

Study on the Application of Ultrasound Traits as Selection Trait in Hanwoo (한우 선발형질로써 초음파 형질의 활용방안 연구)

  • Choi, Tae Jeong;Choy, Yun Ho;Park, Byoungho;Cho, Kwang Hyun;Alam, M;Kang, Ha Yeon;Lee, Seung Soo;Lee, Jae Gu
    • Journal of agriculture & life science
    • /
    • Vol. 51, No. 2
    • /
    • pp.117-126
    • /
    • 2017
  • Hanwoo young bulls are selected based on a performance test using weight at 12 months and a pedigree index comprising marbling score. The pedigree index was based not on progeny-tested data but on the breeding values of proven bulls, resulting in lower accuracy. Progeny testing of the young bulls was divided between testing on farms and at the test station, and the farm-tested data were difficult to compare with the test station data because farm-tested bulls were slaughtered at different ages than test station bulls. Therefore, this study considered different ages at slaughter for the respective records on ultrasound traits. Records on body weight at 12 months, ultrasound measures at 12 and 24 months (uIMF, uEMA, uBFT, and uRFT), and carcass traits (CWT, EMA, BFT, and MS) were collected from steers and bulls of the Hanwoo national improvement scheme between 2008 and 2013. Fixed effects of batch, test date, test station, measurement personnel, and judging personnel, together with a linear covariate of weight at measurement, were fitted in the animal models for the ultrasound traits. The heritability estimates of the ultrasound traits at 12 and 24 months ranged over 0.21-0.43 and 0.32-0.47, respectively. Genetic correlations between the ultrasound traits at 12 and 24 months and the corresponding carcass traits were 0.52-0.75 and 0.86-0.89, respectively.

A Study on the Factors Influencing Disabled Workers' Subjective Career Success in the Competitive Labour Market: Application of Multi-Level Analysis of Individual and Organizational Properties (경쟁고용 장애인근로자의 주관적 경력성공에 대한 영향요인 분석: 개인 및 조직특성에 대한 다층분석의 적용)

  • Kwon, Jae-yong;Lee, Dong-Young;Jeon, Byong-Ryol
    • 한국사회정책
    • /
    • Vol. 24, No. 1
    • /
    • pp.33-66
    • /
    • 2017
  • Based on the premise that a systematic career process for workers in the general labor market is a core element of successful achievement at both the individual and organizational levels, this study conducted an empirical analysis of the factors influencing the subjective career success of disabled workers in competitive employment, at both the individual and organizational (corporate) levels, in order to provide practically and statistically grounded implications for the career management of a successful vocational life. To that end, we administered a structured questionnaire to 126 disabled workers at 48 companies in Seoul, Gyeonggi, Chungcheong, and Gangwon and collected data on individual and organizational characteristics. The influential factors were then analyzed with a multilevel analysis technique that takes organizational effects into consideration. The results show that organizational characteristics explained 32.1% of the total variance in subjective career success, which confirms the importance of organizational variables and the legitimacy of applying the multilevel model. The significant influential factors were the degree of disability, desire for growth, self-initiated career attitude, and value-oriented career attitude at the individual level, and the provision of disability-related accommodations, career support, personnel support, and interpersonal support at the organizational level. The organizational-level factors also had significant moderating effects on the influence of the individual-level variables on subjective career success. These findings call for plans to increase subjective career success by activating individual factors on the basis of organizational effects.
The study thus proposed and discussed integrated individual-corporate practice strategies, including setting up an accommodation support system that reflects disability characteristics, applying a worker support program, establishing a frontier career development support system, and providing assistance in building a human network.
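The 32.1% figure above is the share of outcome variance sitting at the organizational level, i.e. the intraclass correlation of a two-level model. A minimal sketch with illustrative variance components (the paper's fitted components are not reported here):

```python
def intraclass_correlation(var_between, var_within):
    """ICC of a two-level null model: the proportion of total variance in
    the outcome attributable to the group (organization) level."""
    total = var_between + var_within
    if total <= 0:
        raise ValueError("variance components must sum to a positive value")
    return var_between / total

# Illustrative components yielding the paper's 32.1% organizational share.
icc = intraclass_correlation(var_between=0.321, var_within=0.679)  # ~0.321
```

An ICC this far above zero is the usual justification for multilevel modeling: treating the 126 workers as independent observations would ignore substantial between-company clustering.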

Validation of Surface Reflectance Product of KOMPSAT-3A Image Data: Application of RadCalNet Baotou (BTCN) Data (다목적실용위성 3A 영상 자료의 지표 반사도 성과 검증: RadCalNet Baotou(BTCN) 자료 적용 사례)

  • Kim, Kwangseob;Lee, Kiwon
    • Korean Journal of Remote Sensing
    • /
    • Vol. 36, No. 6-2
    • /
    • pp.1509-1521
    • /
    • 2020
  • Experiments to validate the surface reflectance produced from Korea Multi-Purpose Satellite (KOMPSAT-3A) imagery were conducted using Chinese Baotou (BTCN) data, one of the four sites of the Radiometric Calibration Network (RadCalNet), a portal that provides spectrophotometric reflectance measurements. The atmospheric reflectance and surface reflectance products were generated using an extension program of the open-source Orfeo ToolBox (OTB), which was redesigned and implemented to extract those reflectance products in batches. Three image data sets, from 2016, 2017, and 2018, were processed with two versions of the sensor model, ver. 1.4 released in 2017 and ver. 1.5 released in 2019, whose gain and offset values are applied in the absolute atmospheric correction. Applying these sensor model variables showed that the reflectance products from ver. 1.4 matched the RadCalNet BTCN data relatively well compared with those from ver. 1.5. In addition, reflectance products obtained from Landsat-8 imagery with the USGS LaSRC algorithm and from Sentinel-2B imagery with the SNAP Sen2Cor program were used to quantitatively verify the differences from those of KOMPSAT-3A. With respect to the RadCalNet BTCN data, the differences in KOMPSAT-3A surface reflectance were highly consistent: -0.031 to 0.034 for the B band, -0.001 to 0.055 for the G band, -0.072 to 0.037 for the R band, and -0.060 to 0.022 for the NIR band. The surface reflectance of KOMPSAT-3A also reached an accuracy level adequate for further applications, compared with that of the Landsat-8 and Sentinel-2B images. These results are meaningful in confirming the applicability of Analysis Ready Data (ARD) to surface reflectance from high-resolution satellites.
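The gain/offset sensor model above feeds a standard DN-to-radiance-to-reflectance chain. A hedged sketch of the top-of-atmosphere step only (the gain, offset, and ESUN values are illustrative, not KOMPSAT-3A calibration coefficients, and the subsequent surface-level atmospheric correction is not shown):

```python
import math

def toa_radiance(dn, gain, offset):
    """Digital number to at-sensor spectral radiance: L = gain * DN + offset.
    Gain/offset are the per-band sensor model coefficients."""
    return gain * dn + offset

def toa_reflectance(radiance, esun, sun_elevation_deg, earth_sun_dist_au=1.0):
    """Top-of-atmosphere reflectance: pi * L * d^2 / (ESUN * cos(theta_s)),
    where theta_s is the solar zenith angle (90 deg minus sun elevation)."""
    theta_s = math.radians(90.0 - sun_elevation_deg)
    return math.pi * radiance * earth_sun_dist_au ** 2 / (esun * math.cos(theta_s))

# Illustrative values: DN 1000, gain 0.01, offset 0, sun directly overhead.
L = toa_radiance(1000, gain=0.01, offset=0.0)
rho = toa_reflectance(L, esun=2000.0, sun_elevation_deg=90.0)
```

Swapping in the ver. 1.4 versus ver. 1.5 gain/offset pairs at the first step is exactly what produces the reflectance differences compared against RadCalNet above.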