Title/Summary/Keyword: Automatic validation


A Numerical Simulation of Blizzard Caused by Polar Low at King Sejong Station, Antarctica (극 저기압(Polar Low) 통과에 의해 발생한 남극 세종기지 강풍 사례 모의 연구)

  • Kwon, Hataek; Park, Sang-Jong; Lee, Solji; Kim, Seong-Joong; Kim, Baek-Min
    • Atmosphere, v.26 no.2, pp.277-288, 2016
  • Polar lows are intense mesoscale cyclones that mainly occur over the sea in polar regions. Owing to their small spatial scale, with diameters of less than 1000 km, simulating polar lows is a challenging task. At King Sejong Station in West Antarctica, polar lows are often observed. Despite the significant climatic changes recently observed over West Antarctica, adequate validation of regional simulations of extreme weather events such as polar lows is rare for this region. To address this gap, simulation results from a recent version of the Polar Weather Research and Forecasting model (Polar WRF), covering the Antarctic Peninsula at a high horizontal resolution of 3 km, are validated against near-surface meteorological observations. We selected a high wind speed event recorded on 7 January 2013 at the Automatic Meteorological Observation Station (AMOS) at King Sejong Station, Antarctica. In situ observations, numerical weather prediction, and reanalysis fields reveal that the synoptic and mesoscale environment of the strong wind event was the passage of a strong mesoscale polar low with a central pressure of 950 hPa. Verifying the 3 km grid resolution simulation against the AMOS observations showed high skill in simulating wind speed and surface pressure, with biases of -1.1 m s⁻¹ and -1.2 hPa, respectively. Our evaluation suggests that Polar WRF can serve as a useful dynamical downscaling tool for simulating Antarctic weather systems, and that the near-surface meteorological instruments installed at King Sejong Station provide invaluable data for polar low studies over West Antarctica.
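
The -1.1 m s⁻¹ and -1.2 hPa figures are mean biases of the model against the AMOS record. As a minimal illustration (not the authors' verification code), the sketch below computes such a bias for co-located model and station time series; the array values are invented.

```python
import numpy as np

def mean_bias(simulated, observed):
    """Mean bias (model minus observation), ignoring missing observations."""
    simulated = np.asarray(simulated, dtype=float)
    observed = np.asarray(observed, dtype=float)
    mask = ~np.isnan(observed)          # drop times with missing AMOS records
    return np.mean(simulated[mask] - observed[mask])

# Illustrative arrays: model and AMOS 10 m wind speed (m/s) at matching times
wrf_wind = np.array([18.2, 21.5, 24.1, 22.8])
amos_wind = np.array([19.0, 22.9, 25.3, 24.0])
print(f"wind speed bias: {mean_bias(wrf_wind, amos_wind):+.1f} m/s")
```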

Auto-detection of Halo CME Parameters as the Initial Condition of Solar Wind Propagation

  • Choi, Kyu-Cheol; Park, Mi-Young; Kim, Jae-Hun
    • Journal of Astronomy and Space Sciences, v.34 no.4, pp.315-330, 2017
  • Halo coronal mass ejections (CMEs) originating from solar activity give rise to geomagnetic storms when they reach the Earth. Variations in the geomagnetic field during a geomagnetic storm induce currents and can damage satellites, communication systems, and electrical power grids. Therefore, automated techniques for detecting and analyzing halo CMEs have been attracting increasing attention for monitoring and predicting the space weather environment. In this study, we developed an algorithm to detect halo CMEs using Large Angle and Spectrometric Coronagraph (LASCO) C3 images from the Solar and Heliospheric Observatory (SOHO) satellite. In addition, we developed an image processing technique to derive the morphological and dynamical characteristics of halo CMEs, namely the source location, width, actual CME speed, and arrival time at 21.5 solar radii. The proposed automatic halo CME analysis model was validated against three past halo CME events. A solar event that occurred at 03:38 UT on Mar. 23, 2014 was predicted to arrive at Earth at 23:00 UT on Mar. 25, whereas the actual arrival time was 04:30 UT on Mar. 26, a difference of 5 hr and 30 min. A solar event that occurred at 12:55 UT on Apr. 18, 2014 was estimated to arrive at Earth at 16:00 UT on Apr. 20, which is 4 hr ahead of the actual arrival time of 20:00 UT on the same day. Nevertheless, the estimation error was reduced significantly compared with the ENLIL model. In further work, the model will be applied to many more events for validation and testing; after such tests are completed, an on-line service will be provided at the Korean Space Weather Center to detect halo CMEs and derive the model parameters.
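
The abstract reports arrival-time errors of a few hours against observed arrivals. As a rough, hedged illustration only (the paper feeds the derived CME parameters into a solar wind propagation model, not a constant-speed extrapolation), the sketch below shows the simplest possible arrival estimate from a CME speed at 21.5 solar radii; all numbers are hypothetical.

```python
from datetime import datetime, timedelta

AU_KM = 1.496e8          # astronomical unit in km
RSUN_KM = 6.957e5        # solar radius in km

def constant_speed_arrival(t_at_21p5_rs, speed_km_s):
    """Crude Earth arrival estimate assuming constant speed from 21.5 Rs to 1 AU."""
    distance_km = AU_KM - 21.5 * RSUN_KM
    travel_s = distance_km / speed_km_s
    return t_at_21p5_rs + timedelta(seconds=travel_s)

# Hypothetical numbers purely for illustration
t0 = datetime(2014, 3, 23, 6, 0)          # time the CME front reaches 21.5 Rs
print(constant_speed_arrival(t0, 700.0))  # roughly 2.2 days later for a 700 km/s CME
```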

Development of a Simultaneous Detection and Quantification Method of Anorectics in Human Urine Using GC-MS and its Application to Legal Cases (GC-MS를 이용한 사람 뇨시료 중 비만치료제 분석 및 비만치료제 남용 현황의 법과학적 고찰)

  • Choi, Hyeyoung; Lee, Jaesin; Jang, Moonhee; Yang, Wonkyung; Kim, Eunmi; Choi, Hwakyung
    • YAKHAK HOEJI, v.57 no.6, pp.420-425, 2013
  • Phentermine (PT) and phenmetrazine (PM) have been widely used as anti-obesity drugs. These drugs should be used with caution because of their close structural and toxicological relation to amphetamine. PT and PM, amphetamine-type anorectics, have recently been considered as alternatives for methamphetamine abuse in Korea. In addition, the misuse and abuse of PT and PM obtained from illegal sources such as the internet has become a serious social problem. In the present study, a simultaneous detection and quantification method for determining PT and PM in human urine was developed and validated according to international guidelines. The urine samples were screened using a fluorescence polarization immunoassay and analyzed by gas chromatography-mass spectrometry (GC-MS) after extraction using automatic solid phase extraction (SPE) with a mixed-mode cation exchange cartridge and derivatization with pentafluoropropionic anhydride (PFPA). The validation results for selectivity, linearity, limits of detection (LOD) and quantification (LOQ), intra- and inter-assay precision and accuracy, and recovery were satisfactory. The validated method was successfully applied to authentic urine samples collected from 38 drug abuse suspects. PT and/or PM were identified, with or without methamphetamine, in the urine samples. Abuse of PT and PM has increased continuously in Korea; therefore, closer supervision of the inappropriate use of anorectics is necessary.
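
Validation of linearity, LOD, and LOQ is mentioned but not detailed in the abstract. The sketch below shows one common (ICH-style) way such figures can be estimated from a calibration curve; the concentrations, peak-area ratios, and acceptance approach are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Hypothetical calibration data: spiked concentration (ng/mL) vs. GC-MS peak-area ratio
conc = np.array([50, 100, 250, 500, 1000, 2000], dtype=float)
ratio = np.array([0.11, 0.21, 0.52, 1.05, 2.08, 4.15])

slope, intercept = np.polyfit(conc, ratio, 1)     # ordinary least-squares line
pred = slope * conc + intercept
r2 = 1 - np.sum((ratio - pred) ** 2) / np.sum((ratio - ratio.mean()) ** 2)

# ICH-style estimates from the residual standard deviation of the calibration line
sd_resid = np.std(ratio - pred, ddof=2)
lod = 3.3 * sd_resid / slope
loq = 10.0 * sd_resid / slope
print(f"r^2 = {r2:.4f}, LOD ~ {lod:.1f} ng/mL, LOQ ~ {loq:.1f} ng/mL")
```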

Validation and selection of GCPs obtained from ERS SAR and the SRTM DEM: Application to SPOT DEM Construction

  • Jung, Hyung-Sup; Hong, Sang-Hoon; Won, Joong-Sun
    • Korean Journal of Remote Sensing, v.24 no.5, pp.483-496, 2008
  • Qualified ground control points (GCPs) are required to construct a digital elevation model (DEM) from a pushbroom stereo pair. An inverse geolocation algorithm for extracting GCPs from ERS SAR data and the SRTM DEM was recently developed. However, not all GCPs established by this method are accurate enough for direct application to the geometric correction of pushbroom images such as SPOT, IRS, etc., and thus a method for selecting and removing inaccurate points from the sets of GCPs is needed. In this study, we propose a method for evaluating GCP accuracy and winnowing sets of GCPs through orientation modeling of the pushbroom image, and we validate its performance using a SPOT stereo pair of Daejon City. We found that the statistical distribution of GCP positional errors is approximately Gaussian without bias, and that the residual errors estimated by orientation modeling are linearly related to the positional errors. Inaccurate GCPs have large positional errors and can be iteratively eliminated by thresholding the residual errors. Forty-one GCPs were initially extracted for the test, with mean positional errors of 25.6 m, 2.5 m, and -6.1 m in the X-, Y-, and Z-directions, respectively, and standard deviations of 62.4 m, 37.6 m, and 15.0 m. Twenty-one GCPs were eliminated by the proposed method, reducing the standard deviations of the positional errors of the 20 final GCPs to 13.9 m, 8.5 m, and 7.5 m in the X-, Y-, and Z-directions, respectively. Orientation modeling of the SPOT stereo pair was performed using the 20 GCPs, and the model was checked against 15 map-based points. The root mean square errors (RMSEs) of the model were 10.4 m, 7.1 m, and 12.1 m in the X-, Y-, and Z-directions, respectively. A SPOT DEM with a 20 m ground resolution was successfully constructed using an automatic matching procedure.
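
The core of the proposed winnowing step is a fit, residual, threshold, refit loop. The sketch below illustrates that loop with a plain 2-D affine model standing in for the pushbroom orientation model (an assumption made only to keep the example short).

```python
import numpy as np

def winnow_gcps(obj_xyz, img_xy, sigma_factor=3.0, max_iter=10):
    """Iteratively drop GCPs whose fit residuals exceed sigma_factor * RMS residual.

    A simple affine mapping from object space (X, Y, Z) to image space stands in
    for the pushbroom orientation model; the elimination loop is the point.
    """
    keep = np.ones(len(obj_xyz), dtype=bool)
    for _ in range(max_iter):
        A = np.hstack([obj_xyz[keep], np.ones((keep.sum(), 1))])   # [X Y Z 1]
        coef, *_ = np.linalg.lstsq(A, img_xy[keep], rcond=None)    # least-squares fit
        resid = np.linalg.norm(img_xy[keep] - A @ coef, axis=1)
        thresh = sigma_factor * np.sqrt(np.mean(resid ** 2))
        bad = resid > thresh
        if not bad.any():
            break
        idx = np.flatnonzero(keep)
        keep[idx[bad]] = False                                     # remove outliers, refit
    return keep
```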

Study on Effect of Micro Tooth Shape Modification on Power Transmission Characteristics based on the Driving Gear of Rotating Machining Unit (마이크로 치형수정이 선회가공 유닛 구동기어의 동력전달 특성에 미치는 영향에 관한 연구)

  • Jang, Jeong-Hwan; Qin, Zhen; Kim, Dong-Seon; Wu, Yu-Ting; Lyu, Sung Ki
    • Journal of the Korean Society of Manufacturing Process Engineers, v.18 no.6, pp.91-97, 2019
  • The rotating machining unit is a rotary-type cutting tool attached to an automatic lathe that cuts spiral grooves on the outer circumference of a round bar, enabling fast and precise machining of worm shafts and spiral shafts. This work presents a study on a micro tooth shape modification method for the driving gear train of the rotating machining unit. To observe the effect on the power transmission characteristics of the driving gear pair, the gear meshing condition and the load distribution on the gear teeth were visualized using the professional gear train analysis program RomaxDesigner. By comparing the repeated analysis results, the effect of micro tooth shape modification on the power transmission characteristics of the driving gear was summarized. The optimized gears were fabricated and measured with a precision tester for validation.

Image Mood Classification Using Deep CNN and Its Application to Automatic Video Generation (심층 CNN을 활용한 영상 분위기 분류 및 이를 활용한 동영상 자동 생성)

  • Cho, Dong-Hee; Nam, Yong-Wook; Lee, Hyun-Chang; Kim, Yong-Hyuk
    • Journal of the Korea Convergence Society, v.10 no.9, pp.23-29, 2019
  • In this paper, the mood of images is classified into eight categories through a deep convolutional neural network, and a video is automatically generated with suitable background music. Based on the collected image data, the classification model is trained using a multilayer perceptron (MLP). Using the MLP, the mood of each image is predicted via multi-class classification, and a video is generated by matching the images with pre-classified music. 10-fold cross-validation yielded 72.4% accuracy, and experiments on actual images yielded a confusion matrix accuracy of 64%. In cases of misclassification, the image was assigned to a similar mood, so the selected music did not greatly mismatch the images.
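
As a hedged illustration of the 10-fold cross-validation protocol described above, the sketch below runs stratified 10-fold cross-validation of an MLP over eight mood classes; the feature dimensionality, layer sizes, and data are placeholders, not the authors' setup.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.neural_network import MLPClassifier

# Hypothetical data: 800 images already encoded as 512-dim feature vectors,
# each labeled with one of 8 mood categories (0..7).
rng = np.random.default_rng(0)
X = rng.normal(size=(800, 512))
y = rng.integers(0, 8, size=800)

clf = MLPClassifier(hidden_layer_sizes=(256, 64), max_iter=500, random_state=0)
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv)
print(f"10-fold accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```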

Implementation of Speech Recognition and Flight Controller Based on Deep Learning for Control to Primary Control Surface of Aircraft

  • Hur, Hwa-La; Kim, Tae-Sun; Park, Myeong-Chul
    • Journal of the Korea Society of Computer and Information, v.26 no.9, pp.57-64, 2021
  • In this paper, we propose a device that can control the primary control surfaces of an aircraft by recognizing speech commands. The command set consists of 19 commands, and a learning model is constructed from a total of 2,500 data samples. The model is a CNN built with the Sequential API of TensorFlow Keras, and features are extracted from the training speech files using the MFCC algorithm. The model consists of two convolution layers for feature recognition and a classifier composed of two fully connected (dense) layers. The accuracy on the validation dataset was 98.4%, and performance evaluation on the test dataset showed an accuracy of 97.6%. In addition, a Raspberry Pi-based control device was designed and implemented, confirming that the system operates normally. In the future, it can be used as a virtual training environment in the fields of voice-controlled automatic flight and aviation maintenance.
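
The abstract specifies a Keras Sequential CNN with two convolution layers and two dense layers over MFCC features for 19 commands. The sketch below is one plausible instantiation; the MFCC input shape, filter counts, and hidden width are assumptions rather than the authors' exact configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_COMMANDS = 19          # size of the command vocabulary (from the abstract)
MFCC_SHAPE = (40, 98, 1)   # assumed: 40 MFCC coefficients x 98 frames, 1 channel

model = models.Sequential([
    layers.Input(shape=MFCC_SHAPE),
    layers.Conv2D(32, (3, 3), activation="relu"),      # first feature-extraction block
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu"),      # second feature-extraction block
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),              # first dense (classifier) layer
    layers.Dense(NUM_COMMANDS, activation="softmax"),  # second dense layer: 19 commands
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```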

An Ensemble Approach to Detect Fake News Spreaders on Twitter

  • Sarwar, Muhammad Nabeel; UlAmin, Riaz; Jabeen, Sidra
    • International Journal of Computer Science & Network Security, v.22 no.5, pp.294-302, 2022
  • Detection of fake news is a complex and challenging task. The generation of fake news is very hard to stop; only steps to control its circulation can help minimize its impact. Humans tend to believe misleading false information, which can mislead individuals or organizations and cause major failures and financial losses. Researchers started with social media sites to categorize news as real or fake, and automatic detection of false information circulating on social media is an emerging area of research that has been gaining the attention of both industry and academia since the 2016 US presidential elections. Fake news has severe negative effects on individuals and organizations and prolonged hostile effects on society, so predicting fake news in a timely manner is important. This research focuses on the detection of fake news spreaders. In this context, six models were developed, trained, and tested with the PAN 2020 dataset. Four N-gram-based approaches and a user-statistics-based model were trained with different hyperparameter values, and an extensive grid search with cross-validation was applied to each machine learning model. For the N-gram-based models, out of numerous machine learning algorithms, this research focused on those yielding better results, as assessed by a close reading of state-of-the-art related work in the field: Random Forest, Logistic Regression, SVM, and XGBoost. All four algorithms were trained with cross-validated, grid-searched hyperparameters. The advantages of this research over previous work are the user-statistics-based model and the ensemble learning model, which were designed to classify Twitter users as fake news spreaders or not with the highest reliability. The user-statistics-based model used 17 features, on the basis of which it categorized a Twitter user as malicious. A new dataset based on the predictions of the machine learning models was then constructed, and three combination techniques (simple mean, logistic regression, and random forest) were applied in the ensemble model. Logistic regression in the ensemble model gave the best training and testing results, achieving an accuracy of 72%.
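
As a hedged sketch of grid-searched base models combined by a logistic-regression ensemble, the code below uses synthetic data, omits XGBoost to avoid an extra dependency, and, for brevity, fits the meta-model on in-sample base predictions rather than proper out-of-fold predictions; the hyperparameter grids are illustrative.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for the PAN 2020 author-profiling features
X, y = make_classification(n_samples=600, n_features=40, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Grid-searched base models
bases = {
    "rf": GridSearchCV(RandomForestClassifier(random_state=0),
                       {"n_estimators": [100, 300]}, cv=5),
    "lr": GridSearchCV(LogisticRegression(max_iter=1000), {"C": [0.1, 1, 10]}, cv=5),
    "svm": GridSearchCV(SVC(probability=True), {"C": [0.1, 1, 10]}, cv=5),
}
for model in bases.values():
    model.fit(X_tr, y_tr)

# Ensemble: logistic regression over the base models' predicted probabilities
meta_tr = np.column_stack([m.predict_proba(X_tr)[:, 1] for m in bases.values()])
meta_te = np.column_stack([m.predict_proba(X_te)[:, 1] for m in bases.values()])
meta = LogisticRegression().fit(meta_tr, y_tr)
print("ensemble test accuracy:", meta.score(meta_te, y_te))
```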

Is Text Mining on Trade Claim Studies Applicable? Focused on Chinese Cases of Arbitration and Litigation Applying the CISG

  • Yu, Cheon; Choi, DongOh; Hwang, Yun-Seop
    • Journal of Korea Trade, v.24 no.8, pp.171-188, 2020
  • Purpose - This is an exploratory study that aims to apply text mining techniques, which computationally extract words from large-scale text data, to legal documents in order to quantify the content of trade claims and enable statistical analysis. Design/methodology - The study is designed to verify the validity of text mining techniques as a quantitative methodology for trade claim studies, which have relied mainly on qualitative approaches. The subjects are 81 cases of arbitration and court judgments from China, published on the UNCITRAL website, in which the CISG was applied. Validation is performed by comparing a manual analysis with an automatic analysis. The manual analysis is a cluster analysis in which the researcher reads and codes the cases; the automatic analysis applies text mining techniques to the result of the cluster analysis. Topic modeling and semantic network analysis are applied for the statistical approach. Findings - The results show that the cluster analysis and the text mining results are consistent with each other, confirming internal validity. Moreover, the degree centrality of words that play a key role in a topic is high, as are the betweenness centrality of words useful for grasping the topic and the eigenvector centrality of the important words in the topic. This indicates that text mining techniques can be applied to content analysis of trade claims for statistical analysis. Originality/value - First, the validity of the text mining technique in the study of trade claim cases is confirmed; prior studies on trade claims have relied on traditional approaches. Second, this study is original in that it attempts to study trade claim cases quantitatively, whereas prior trade claim cases were mainly studied via qualitative methods. Lastly, this study shows that the use of text mining can lower the barrier to acquiring information from large amounts of digitized text.
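
The centrality measures named in the findings (degree, betweenness, eigenvector) can be computed on a word co-occurrence network. The sketch below shows this on a toy corpus; the documents and vocabulary are invented, and the construction is not necessarily the authors' exact pipeline.

```python
from itertools import combinations
import networkx as nx

# Toy corpus standing in for coded trade-claim case texts
docs = [
    ["seller", "delivery", "breach", "damages"],
    ["buyer", "payment", "breach", "damages"],
    ["seller", "conformity", "inspection", "damages"],
]

# Build a word co-occurrence network: words co-occurring in a document share an edge
G = nx.Graph()
for words in docs:
    for w1, w2 in combinations(set(words), 2):
        if G.has_edge(w1, w2):
            G[w1][w2]["weight"] += 1
        else:
            G.add_edge(w1, w2, weight=1)

# The three centrality measures discussed in the findings
degree = nx.degree_centrality(G)
betweenness = nx.betweenness_centrality(G)
eigenvector = nx.eigenvector_centrality_numpy(G)
print(sorted(degree, key=degree.get, reverse=True)[:3])  # top words by degree centrality
```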

Three-Dimensional Evaluation of Skeletal Stability following Surgery-First Orthognathic Approach: Validation of a Simple and Effective Method

  • Nabil M. Mansour; Mohamed E. Abdelshaheed; Ahmed H. El-Sabbagh; Ahmed M. Bahaa El-Din; Young Chul Kim; Jong-Woo Choi
    • Archives of Plastic Surgery, v.50 no.3, pp.254-263, 2023
  • Background: The three-dimensional (3D) evaluation of skeletal stability after orthognathic surgery is a time-consuming and complex procedure, and the complexity increases further when evaluating the surgery-first orthognathic approach (SFOA). Herein, we propose and validate a simple, time-saving method of 3D analysis using a single software package that demonstrates high accuracy and repeatability. Methods: This retrospective cohort study included 12 patients with skeletal class 3 malocclusion who underwent bimaxillary surgery without any presurgical orthodontics. Computed tomography (CT)/cone-beam CT images of each patient were obtained at three time points (preoperative [T0], immediately postoperative [T1], and 1 year after surgery [T2]) and reconstructed into 3D models. After automatic surface-based alignment of the three models on the anterior cranial base, five easily located anatomical landmarks were defined on each model. A set of angular and linear measurements was automatically calculated and used to define the amount of movement (T1-T0) and the amount of relapse (T2-T1). To evaluate reproducibility, two independent observers processed all cases, and one of them repeated the steps after 2 weeks to assess intraobserver variability. Intraclass correlation coefficients (ICCs) were calculated with 95% confidence intervals, and the time required to evaluate each case was recorded. Results: Both intra- and interobserver comparisons showed high ICC values (above 0.95) with low measurement variation (mean linear variation: 0.18 mm; mean angular variation: 0.25 degrees). The time needed for the evaluation process ranged from 3 to 5 minutes. Conclusion: This approach is time-saving, semiautomatic, and easy to learn, and it can be used to effectively evaluate stability after SFOA.
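
The angular and linear measurements are derived from the five landmarks on each aligned model. The sketch below shows how such measurements can be computed from 3D landmark coordinates; the landmark names and coordinates are purely illustrative, not the study's definitions.

```python
import numpy as np

def distance(p, q):
    """Linear measurement: Euclidean distance between two 3-D landmarks (mm)."""
    return float(np.linalg.norm(np.asarray(p) - np.asarray(q)))

def angle_deg(a, vertex, b):
    """Angular measurement: angle a-vertex-b in degrees."""
    u = np.asarray(a) - np.asarray(vertex)
    v = np.asarray(b) - np.asarray(vertex)
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

# Hypothetical landmark coordinates (mm) on one aligned model
nasion, a_point, b_point = [0.0, 80.0, 40.0], [2.0, 75.0, -10.0], [1.5, 70.0, -45.0]
print(distance(a_point, b_point))           # a linear measure, compared T1 vs. T2 for relapse
print(angle_deg(a_point, nasion, b_point))  # an angular measure at the nasion landmark
```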