Title/Summary/Keyword: Automated Error Detection

A Design Procedure for Safety Simulation System Using Virtual Reality

  • Jae-seug Ki
    • Proceedings of the Safety Management and Science Conference / 1999.11a / pp.381-389 / 1999
  • One of the objectives of any task design is to provide a safe and healthy workplace for the employees. The safety and health module may include means for confronting the design with safety and health regulations and standards, as well as tools for obstacle and collision detection (such as error models and simulators). Virtual reality is a leading-edge technology that has only recently become available on platforms and at prices accessible to the majority of simulation engineers. The design of an automated manufacturing system is a complicated, multidisciplinary task that requires the involvement of several specialists. In this paper, a design procedure that facilitates the safety and ergonomic considerations of an automated manufacturing system is described. The procedure consists of the following major steps: data collection and analysis, creation of a three-dimensional simulation model of the work environment, simulation for safety analysis and risk assessment, development of safety solutions, selection of the preferred solutions, implementation of the selected solutions, reporting, and training. When improving the safety of an existing system, the three-dimensional simulation model helps the designer perceive the work from the operator's point of view, objectively and safely, without exposure to the hazards of the actual system.

Realtime Monitoring System using AJAX + XML (AJAX+XML 기반의 모니터링 시스템)

  • Choi, Yun Jeong;Park, Seung Soo
    • Journal of Korea Society of Digital Industry and Information Management / v.5 no.4 / pp.39-49 / 2009
  • With the rapid development of computing environments, information processing and analysis systems have become an active research area. From the data preparation-processing-analysis viewpoint of knowledge technology, the goal of an automated information system is to achieve high reliability and confidence while minimizing human administrator intervention. Such a system is also expected to handle problems and abnormal errors effectively through fault detection and fault tolerance. In this paper, we design a monitoring system as follows. Monitoring information produced by various systems comes in unstructured forms, so the system crawls informative data according to conditions and gathering rules. To present the monitoring information requested by an administrator, running status, such as connection/closed state, can be checked dynamically and systematically in real time. The proposed system can collect and process monitoring information from various types of servers, and it supports the administrator's objective judgement and analysis of the target information system. We implement a semi-realtime monitoring system that uses AJAX for dynamic browsing of web information and processes the information with XML and XPath. We applied the system to an SMS server to check running status, and it showed high utility and reliability.
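
As a loose illustration of the XML-and-XPath processing this abstract builds on, here is a minimal Python sketch that parses a status document and flags servers whose connection state is abnormal. The XML layout and field names are invented for the example; the paper's actual SMS-server schema is not given.

```python
# Hypothetical server-status XML; the real schema is not published here.
import xml.etree.ElementTree as ET

SAMPLE = """
<servers>
  <server name="sms-01"><status>connected</status></server>
  <server name="sms-02"><status>closed</status></server>
</servers>
"""

def check_status(xml_text: str) -> list[str]:
    """Return the names of servers whose status is not 'connected'."""
    root = ET.fromstring(xml_text)
    faulty = []
    # ElementTree supports a subset of XPath; './/server' finds every <server>.
    for server in root.findall(".//server"):
        status = server.findtext("status", default="unknown")
        if status != "connected":
            faulty.append(server.get("name", "?"))
    return faulty

print(check_status(SAMPLE))  # ['sms-02']
```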

COMPUTER AND INTERNET RESOURCES FOR PRONUNCIATION AND PHONETICS TEACHING

  • Makarova, Veronika
    • Proceedings of the KSPS conference / 2000.07a / pp.338-349 / 2000
  • Pronunciation teaching is once again coming into the foreground of ELT. Japan is, however, lagging far behind many countries in the development of pronunciation curricula and in the actual speech performance of Japanese learners of English. The reasons for this can be found in the prevalence of communicative methodologies unfavorable to pronunciation teaching, in the lack of trained professionals, and in the large numbers of students in Japanese foreign language classes. This paper offers a way to promote foreign language pronunciation teaching in Japan and other countries by employing computer and internet facilities. The paper outlines the major directions of using modern speech technologies in pronunciation classes, such as EVF (electronic visual feedback) training at segmental and prosodic levels, and automated error detection, testing, grading, and fluency assessment. The author discusses the applicability of some specific software packages (CSLU, SUGIspeech, Multispeech, Wavesurfer, etc.) for the needs of pronunciation teaching. Finally, the author discusses the globalization of pronunciation education via internet resources, such as computer corpora and web pages related to speech and pronunciation training.

An implementation of automated ECG interpretation algorithm and system (II) - Estimation and Eliminator of interference components (심전도 자동 진단 알고리즘 및 장치 구현(II) - 잡음 성분 평가 및 제거기)

  • Kweon, H.J.;Kong, I.W.;Lee, S.H.;Shin, K.S.;Lee, M.H.
    • Proceedings of the KOSOMBE Conference / v.1996 no.05 / pp.283-287 / 1996
  • This paper describes the estimation and elimination of three kinds of artifacts in the electrocardiogram. Of the four algorithms tested, the most efficient estimate of baseline drift was obtained by cubic spline interpolation over the PQ and TP segments, which are considered isoelectric. Time loss and distortion were avoided by a detection criterion that first checks whether baseline drift is present. Among the five algorithms compared, the AIEF proposed in this paper showed the best removal performance with the least distortion of the QRS complex. Furthermore, the AIEF is well suited to ECG analyzers that need only relatively short data segments, owing to its fast convergence to a stable state. A parabolic filter 11 points wide performed best for the elimination of muscle artifacts. We also obtained 99.7% detection accuracy for spike components and minimized misidentification of the QRS complex as a spike.
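
A minimal sketch of the cubic-spline baseline estimation described above, assuming the isoelectric PQ/TP sample positions (the spline knots) are already known; the paper's knot-selection criteria and the AIEF filter itself are not reproduced here, and the signal is synthetic.

```python
# Cubic-spline baseline-drift estimation over assumed isoelectric knots.
import numpy as np
from scipy.interpolate import CubicSpline

fs = 250                                    # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)
ecg = np.sin(2 * np.pi * 1.2 * t) ** 63     # crude synthetic QRS-like spikes
drift = 0.3 * np.sin(2 * np.pi * 0.2 * t)   # synthetic baseline wander
signal = ecg + drift

# Knots: one roughly isoelectric sample per second (stand-in for PQ/TP picks).
knots = np.arange(0, len(t), fs)
baseline = CubicSpline(t[knots], signal[knots])(t)
corrected = signal - baseline

print(f"residual drift RMS: {np.sqrt(np.mean((corrected - ecg) ** 2)):.4f}")
```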

The GOCI-II Early Mission Marine Fog Detection Products: Optical Characteristics and Verification (천리안 해양위성 2호(GOCI-II) 임무 초기 해무 탐지 산출: 해무의 광학적 특성 및 초기 검증)

  • Kim, Minsang;Park, Myung-Sook
    • Korean Journal of Remote Sensing / v.37 no.5_2 / pp.1317-1328 / 2021
  • This study analyzes early-mission marine fog detection results from the Geostationary Ocean Color Imager-II (GOCI-II). We investigate the optical characteristics of the GOCI-II spectral bands for marine fog between October 2020 and March 2021, during the overlapping mission period of GOCI and GOCI-II. For the Rayleigh-corrected reflectance (Rrc) at the 412 nm band, used as input to the GOCI-II marine fog algorithm, the inter-comparison between GOCI and GOCI-II data showed a small root mean square error (RMSE) of 0.01 with a high correlation coefficient (0.988). Another input variable, the Normalized Local Standard Deviation (NLSD), also showed a reasonable correlation (0.798) between the GOCI and GOCI-II data, with a small RMSE (0.007). The GOCI-II observations also revealed distinctive optical characteristics separating marine fog from clouds: across all bands, the Rrc values for cloud are concentrated in a narrower distribution centered at higher values than those for marine fog. For actual cases, the GOCI-II marine fog detection distribution is similar to that of GOCI but more detailed, owing to the improvement in spatial resolution from 500 m to 250 m. Validation against automated synoptic observing system (ASOS) visibility data confirms the initial reliability of GOCI-II marine fog detection. Further improvements are expected from adding samples sufficient to verify stable performance, refining the post-processing by switching to cloud input data available in real time, and reducing false alarms by adding aerosol information.
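
The inter-comparison statistics quoted above (RMSE and correlation between GOCI and GOCI-II Rrc) reduce to a few lines of Python, sketched below on synthetic arrays. The NLSD helper reads NLSD as a local standard deviation normalized by the local mean, which is one plausible interpretation; the operational GOCI-II definition may differ.

```python
# RMSE and correlation between two reflectance arrays, plus a tentative NLSD.
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

rng = np.random.default_rng(0)
rrc_goci = rng.uniform(0.02, 0.10, (64, 64))            # stand-in 412 nm Rrc
rrc_goci2 = rrc_goci + rng.normal(0, 0.005, (64, 64))   # stand-in GOCI-II Rrc

rmse = np.sqrt(np.mean((rrc_goci - rrc_goci2) ** 2))
corr = np.corrcoef(rrc_goci.ravel(), rrc_goci2.ravel())[0, 1]
print(f"RMSE = {rmse:.4f}, r = {corr:.3f}")

def nlsd(img: np.ndarray, w: int = 3) -> np.ndarray:
    """Sliding-window std / mean: one plausible reading of NLSD."""
    win = sliding_window_view(img, (w, w))
    return win.std(axis=(-1, -2)) / win.mean(axis=(-1, -2))
```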

Computer Assisted EPID Analysis of Breast Intrafractional and Interfractional Positioning Error (유방암 방사선치료에 있어 치료도중 및 분할치료 간 위치오차에 대한 전자포탈영상의 컴퓨터를 이용한 자동 분석)

  • Sohn Jason W.;Mansur David B.;Monroe James I.;Drzymala Robert E.;Jin Ho-Sang;Suh Tae-Suk;Dempsey James F.;Klein Eric E.
    • Progress in Medical Physics / v.17 no.1 / pp.24-31 / 2006
  • Automated analysis software was developed to measure the magnitude of intrafractional and interfractional errors during breast radiation treatments. Error analysis results are important for determining suitable planning target volumes (PTV) prior to implementing breast-conserving 3-D conformal radiation treatment (CRT). The electronic portal imaging device (EPID) used for this study was a Portal Vision LC250 liquid-filled ionization detector (fast frame-averaging mode, 1.4 frames per second, 256 × 256 pixels). Twelve patients were imaged for a minimum of 7 treatment days. During each treatment day, an average of 8 to 9 images per field were acquired (dose rate of 400 MU/minute). We developed automated image analysis software to quantitatively analyze 2,931 images (encompassing 720 measurements). Standard deviations (σ) of intrafractional (breathing motion) and interfractional (setup uncertainty) errors were calculated. The PTV margin needed to include the clinical target volume (CTV) with a 95% confidence level was calculated as 2 × 1.96σ. To compensate for intrafractional error (mainly due to breathing motion), the required PTV margin ranged from 2 mm to 4 mm, whereas PTV margins compensating for interfractional error ranged from 7 mm to 31 mm. The total average error observed for the 12 patients was 17 mm. The interfractional setup error was 2 to 15 times larger than the intrafractional error associated with breathing motion. Prior to 3-D conformal or IMRT breast treatment, the magnitude of setup errors must be measured and properly incorporated into the PTV. To reduce large PTVs for breast IMRT or 3-D CRT, an image-guided system would be extremely valuable, if not required. EPID systems should incorporate automated analysis software, as described in this report, to process and take advantage of the large number of EPID images available for error analysis, helping individual clinics arrive at an appropriate PTV for their practice. Such systems can also provide valuable patient monitoring information with minimal effort.
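
The margin rule quoted above, PTV margin = 2 × 1.96σ (twice the half-width of a 95% interval), is simple enough to state as code. The σ values below are invented stand-ins, not the paper's measured data.

```python
# PTV margin from an error standard deviation, per the 2 * 1.96 * sigma rule.
def ptv_margin_mm(sigma_mm: float) -> float:
    """Margin covering the error with ~95% confidence."""
    return 2 * 1.96 * sigma_mm

sigma_intra_mm = 1.0   # intrafractional (breathing) SD, assumed
sigma_inter_mm = 5.0   # interfractional (setup) SD, assumed
print(f"intrafractional margin: {ptv_margin_mm(sigma_intra_mm):.1f} mm")  # 3.9 mm
print(f"interfractional margin: {ptv_margin_mm(sigma_inter_mm):.1f} mm")  # 19.6 mm
```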

Comparison of ACFAS method and DNPH-LC method for quantitative analysis of formaldehyde in Drinking water (자동연속흐름-흡광광도법과 DNPH-LC법에 의한 먹는물 중 포름알데히드 정량분석 비교)

  • Yi, Geon-Ho;Yun, In-Chul;Kim, Yeong-Kwan;Kim, Chong-Chaul;Choi, Geum-Jong;Lee, Teak-Soo
    • Journal of Korean Society of Water and Wastewater / v.27 no.6 / pp.827-836 / 2013
  • Due to more stringent drinking water quality requirements, formaldehyde will be included in the Korean drinking water standard from 2014; however, a standard analytical method has not yet been established. This study compares two analytical methods, DNPH-LC and ACFAS, with respect to their analysis principles, method detection limit (MDL), limit of quantitation (LOQ), precision, accuracy, reproducibility, convenience, number of samples analyzed per hour, and analysis cost. The methods measure absorption at 360 nm by HPLC after DNPH derivatization (DNPH-LC) and at 410 nm by an Automated Continuous Flow Absorption Spectrophotometer (ACFAS), respectively. Reproducibility was tested by repeating the analysis 7 times with a standard solution for each method. For the DNPH-LC method, the MDL was 0.5 µg/L and the LOQ was 1.58 µg/L, with a standard deviation of 0.16 µg/L; for the ACFAS method, they were 0.27 µg/L and 0.85 µg/L, with a standard deviation of 0.09 µg/L. Both methods satisfied the requirements of the Korean drinking water quality standard. The complexity of the sample pretreatment for the DNPH-LC method can introduce large errors, so its results depend on the skill of the analyst; in contrast, the ACFAS method, which uses only one reagent and an automated injection device, showed little analytical error. Analyzing one sample costs about $5.00 by the DNPH-LC method and about $1.00 by the ACFAS method. Overall, the ACFAS method provided more reliable results and was superior in convenience, ease of use, and cost, suggesting that it could be adopted as a proper method for determining formaldehyde in drinking water.
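
The MDL and LOQ figures above are consistent with the common single-laboratory convention MDL = t(0.99, n−1) × s and LOQ = 10 × s over n replicate measurements; whether the paper used exactly this convention is an assumption. A sketch with invented replicate data:

```python
# MDL/LOQ from replicate measurements (assumed convention, invented data).
import numpy as np
from scipy import stats

replicates_ug_L = np.array([10.1, 9.9, 10.2, 9.8, 10.0, 10.3, 9.7])
s = replicates_ug_L.std(ddof=1)          # sample standard deviation
n = len(replicates_ug_L)
mdl = stats.t.ppf(0.99, n - 1) * s       # t(0.99, 6) ~= 3.143
loq = 10 * s
print(f"s = {s:.3f}, MDL = {mdl:.3f}, LOQ = {loq:.3f} (µg/L)")
```

With the paper's reported standard deviations (0.16 and 0.09 µg/L), these formulas reproduce the quoted MDL and LOQ values to within rounding.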

Implementation of Sonar Bearing Accuracy Measurement Equipment with Parallax Error and Time Delay Error Correction (관측위치오차와 시간지연오차를 보정하는 소나방위정확도 측정 장비 구현)

  • Kim, Sung-Duk;Kim, Do-Young;Park, Gyu-Tae;Shin, Kee-Cheol
    • Journal of the Institute of Convergence Signal Processing / v.20 no.4 / pp.245-251 / 2019
  • Sonar bearing accuracy is the correspondence between the target bearing predicted by the sonar and the actual target bearing, and it is obtained from measurements. However, measured results contain many errors because the measurements are made at sea, where complex and diverse environmental factors apply. In particular, the parallax error caused by the offset between the GPS receiver and the sonar sensor, and the time delay error arising from the difference between the speed of sound underwater and the speed of electromagnetic waves in air, strongly affect the accuracy. Correcting these parallax and time delay errors without an automated tool is laborious. In this study, we therefore propose sonar bearing accuracy measurement equipment that corrects both errors. Tests were carried out on simulation data and real data; they confirmed that systematically correcting the parallax and time delay errors improved the results by 51.7% for the simulation data and by more than 18.5% for the real data. The proposed method is expected to improve the efficiency and accuracy of sonar system detection performance verification.
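
The two corrections named above can be illustrated with a deliberately simplified model: flat 2-D geometry, a fixed GPS-to-sensor offset, a constant sound speed, and a target back-propagated over the acoustic travel time. This is a toy reading of the abstract, not the equipment's actual correction model.

```python
# Toy parallax and time-delay correction for a reference bearing.
import math

SOUND_SPEED = 1500.0   # m/s, nominal seawater value (assumed constant)

def corrected_bearing(target_xy, gps_xy, sensor_offset_xy, target_vel_xy, range_m):
    """Reference bearing corrected for parallax (GPS vs. sensor position)
    and for target motion during the acoustic travel time."""
    # Parallax: measure from the sonar sensor, not the GPS antenna.
    sx = gps_xy[0] + sensor_offset_xy[0]
    sy = gps_xy[1] + sensor_offset_xy[1]
    # Time delay: the sound left the target range/c seconds before reception.
    dt = range_m / SOUND_SPEED
    tx = target_xy[0] - target_vel_xy[0] * dt
    ty = target_xy[1] - target_vel_xy[1] * dt
    return math.degrees(math.atan2(tx - sx, ty - sy)) % 360.0  # bearing from north

print(corrected_bearing((1000.0, 1000.0), (0.0, 0.0), (5.0, -3.0), (4.0, 0.0), 1414.0))
```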

Study on Automated Error Detection Method for Enhancing High Definition Map (정밀도로지도 레이어의 품질향상을 위한 자동오류 판독 연구)

  • Hong, Song Pyo;Oh, Jong Min;Song, Yong Hyun;Shin, Young Min;Sung, Dong Ki
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.38 no.4 / pp.391-399 / 2020
  • Autonomous driving can be limited when only sensors are used, for example if a sensor is blocked by sudden changes in the surrounding environment or by large features such as heavy vehicles. To overcome this limitation, high definition maps are used in addition. In Korea, the NGII (National Geographic Information Institute) produces and supplies high definition maps for autonomous vehicles. Accordingly, in this study, the errors that occur during the data editing and structured editing of the high definition maps provided by the NGII are systematically classified. By presenting an error search process and a solution for each situation, we show how errors in high definition maps can be corrected quickly: the error items are classified broadly into shape integrity, spatial relationship, and reference relationship, and each is examined in detail.
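
A minimal sketch of the three error families named above, shape integrity, spatial relationship, and reference relationship, expressed as automated checks with shapely handling the geometry tests. The layer and field names are invented stand-ins, not the NGII HD-map schema.

```python
# Hypothetical HD-map layers: a road boundary, lane geometries, lane links.
from shapely.geometry import LineString, Polygon

road = Polygon([(0, 0), (10, 0), (10, 4), (0, 4)])
lanes = {
    "lane-1": LineString([(1, 1), (9, 1)]),
    "lane-2": LineString([(1, 3), (12, 3)]),           # leaves the road: error
}
lane_links = {"lane-1": "lane-2", "lane-2": "lane-9"}  # lane-9 missing: error

errors = []
for lane_id, geom in lanes.items():
    if not geom.is_valid or geom.length == 0:          # shape integrity
        errors.append((lane_id, "invalid geometry"))
    if not road.contains(geom):                        # spatial relationship
        errors.append((lane_id, "lane outside road boundary"))
    if lane_links.get(lane_id) not in lanes:           # reference relationship
        errors.append((lane_id, "dangling lane link"))

print(errors)
```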

Automated Image Co-registration Using Pre-qualified Area Based Matching Technique (사전검수 영역기반 정합법을 활용한 영상좌표 상호등록)

  • Kim Jong-Hong;Heo Joon;Sohn Hong-Gyoo
    • Proceedings of the Korean Society of Surveying, Geodesy, Photogrammetry, and Cartography Conference / 2006.04a / pp.181-185 / 2006
  • Image co-registration is the process of overlaying two images of the same scene, one of which serves as a reference image while the other is geometrically transformed to match it. To improve the efficiency and effectiveness of co-registration, the authors propose a pre-qualified area matching algorithm composed of feature extraction with the Canny operator and area matching based on the cross-correlation coefficient. To refine the matching points, outlier detection using studentized residuals was applied, iteratively removing outliers beyond three standard deviations. Through the pre-qualification and refinement processes, computation time was significantly reduced and registration accuracy was enhanced. A prototype of the proposed algorithm was implemented, and performance tests on 3 Landsat images of Korea showed that: (1) the average RMSE of the approach was 0.436 pixel; (2) the average number of matching points was over 38,475; (3) the average processing time was 489 seconds per image on a regular workstation with a 3 GHz Intel Pentium 4 CPU and 1 GB of RAM. The proposed approach achieved robustness, full automation, and time efficiency.
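
Two pieces of the pipeline above are compact enough to sketch: the normalized cross-correlation score used for area matching, and iterative rejection of matching-point residuals beyond three standard deviations. Canny feature extraction and the full search strategy are omitted, and the plain 3σ rule below stands in for the paper's studentized-residual test.

```python
# Normalized cross-correlation and iterative 3-sigma outlier rejection.
import numpy as np

def ncc(a: np.ndarray, b: np.ndarray) -> float:
    """Normalized cross-correlation coefficient of two equal-size patches."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float((a * b).mean())

def reject_outliers(residuals: np.ndarray, k: float = 3.0) -> np.ndarray:
    """Iteratively drop points whose residual exceeds k standard deviations."""
    keep = np.ones(len(residuals), dtype=bool)
    while True:
        kept = residuals[keep]
        bad = keep & (np.abs(residuals - kept.mean()) > k * kept.std())
        if not bad.any():
            return keep
        keep &= ~bad

rng = np.random.default_rng(1)
res = rng.normal(0, 0.3, 200)
res[:5] += 5.0                         # inject a few gross mismatches
mask = reject_outliers(res)
print(mask.sum(), "of", len(res), "matches kept")
```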
