• Title/Summary/Keyword: Robust algorithm

Development of Attitude Heading Reference System based on MEMS for High Speed Autonomous Underwater Vehicle (고속 자율 무인잠수정 적용을 위한 MEMS 기술기반 자세 측정 장치 개발)

  • Hwang, A-Rom;Ahn, Nam-Hyun;Yoon, Seon-Il
    • Journal of the Korean Society of Marine Environment & Safety / v.19 no.6 / pp.666-673 / 2013
  • This paper presents a performance evaluation of an attitude heading reference system (AHRS) suitable for a small, high-speed autonomous underwater vehicle (AUV). Although an IMU can provide detailed attitude information, it is often unsuitable for a small AUV with a short operating time because of its price and electrical power consumption. One alternative to a tactical-grade IMU is an AHRS based on micro-machined electro-mechanical systems (MEMS), which overcomes the cost and power-consumption barriers that have inhibited the adoption of inertial systems in small AUVs. A cost-effective, compact AHRS incorporating measurements from 3-axis MEMS gyroscopes, accelerometers, and 3-axis magnetometers was developed to provide a complete attitude solution for the AUV, and the attitude calculation algorithm was derived from the coordinate transformation equations and a Kalman filter. The developed AHRS was validated through various performance tests, including magnetometer calibration, operating experiments on a land vehicle, and tests on a flight motion simulator (FMS). The magnetometer calibration test shows that the developed MEMS AHRS is robust to changes in the external magnetic field, and the land-vehicle test shows that its leveling error is below $0.5^{\circ}/hr$. The FMS test shows that the AHRS provides measurements with an error of $0.5^{\circ}/hr$ over a 5-minute operation. These performance evaluation tests showed that the developed AHRS provides attitude information with roll and pitch errors below $1^{\circ}$ and a yaw error below $5^{\circ}$, satisfying the required specification. The developed AHRS is expected to provide precise attitude measurements in sea trials with a real AUV.
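
The fusion described above combines MEMS gyroscope, accelerometer, and magnetometer measurements. As a hedged illustration of just the measurement side of such a system (not the paper's Kalman filter), the sketch below computes roll and pitch from a single accelerometer sample and a tilt-compensated yaw from a magnetometer sample; the sign conventions and the example sensor values are assumptions.

```python
# Illustrative only: a standard accelerometer/magnetometer attitude fix,
# not the Kalman filter described in the paper.
import numpy as np

def accel_mag_attitude(accel, mag):
    """Return (roll, pitch, yaw) in radians from one accelerometer and one
    magnetometer sample, assuming a static (non-accelerating) sensor."""
    ax, ay, az = accel / np.linalg.norm(accel)
    # Roll and pitch from the gravity direction measured by the accelerometer.
    roll = np.arctan2(ay, az)
    pitch = np.arctan2(-ax, np.hypot(ay, az))
    # Tilt-compensate the magnetometer before computing the heading (yaw).
    mx, my, mz = mag / np.linalg.norm(mag)
    mx2 = (mx * np.cos(pitch) + my * np.sin(pitch) * np.sin(roll)
           + mz * np.sin(pitch) * np.cos(roll))
    my2 = my * np.cos(roll) - mz * np.sin(roll)
    yaw = np.arctan2(-my2, mx2)
    return roll, pitch, yaw

# Example: level sensor pointing roughly north (made-up sensor values).
print(np.degrees(accel_mag_attitude(np.array([0.0, 0.0, 9.81]),
                                    np.array([0.2, 0.0, 0.4]))))
```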

Digital Image Watermarking Technique using Scrambled Binary Phase Computer Generated Hologram in Discrete Cosine Transform Domain (DCT영역에서 스크램블된 이진 위상 컴퓨터형성홀로그램을 이용한 디지털 영상 워터마킹 기술)

  • Kim, Cheol-Su
    • Journal of Korea Multimedia Society / v.14 no.3 / pp.403-413 / 2011
  • In this paper, we propose a digital image watermarking technique using a scrambled binary phase computer-generated hologram in the discrete cosine transform (DCT) domain. For the embedding process, a binary phase computer-generated hologram (BPCGH) that can perfectly reconstruct the hidden image is generated with a simulated annealing algorithm, used in place of the hidden image itself, and encrypted with a scramble operation. The encrypted watermark is multiplied by a weight function and embedded into the DC coefficients of the DCT of the host image, and an inverse DCT is performed. For the extraction process, the DC coefficients of the watermarked image and of the original host image are compared in the DCT domain, divided by the weight function, and decrypted with a descramble operation. The hidden image is then recovered by inverse Fourier transforming the decrypted watermark. Finally, the correlation between the original hidden image and the recovered hidden image is computed to determine whether a watermark exists in the host image. Because the proposed technique uses the hologram information of the hidden image, which consists of binary values, together with a scramble encryption step, it is very secure and robust to various external attacks such as compression, noise, and cropping. The advantages of the proposed watermarking technique were confirmed through computer simulations.
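
As a rough illustration of the embedding step only (the BPCGH generation by simulated annealing and the extraction step are not reproduced), the sketch below scrambles a binary watermark with a keyed permutation and adds it, scaled by a weight, to the DC coefficient of each 8x8 DCT block of a host image; the block size, weight `alpha`, and key are assumptions, not the paper's parameters.

```python
# Sketch of scrambled-watermark embedding into block-DCT DC coefficients.
import numpy as np
from scipy.fft import dctn, idctn

def embed_dc(host, wm_bits, key=1234, alpha=8.0, block=8):
    """Scramble wm_bits with a keyed permutation and add them (as +/-1 values,
    scaled by alpha) to the DC coefficient of successive DCT blocks of host."""
    h, w = host.shape
    blocks_y, blocks_x = h // block, w // block
    rng = np.random.default_rng(key)
    perm = rng.permutation(len(wm_bits))          # scramble = keyed permutation
    scrambled = np.asarray(wm_bits)[perm]
    marked = host.astype(float).copy()
    for i in range(min(blocks_y * blocks_x, len(scrambled))):
        by, bx = divmod(i, blocks_x)
        sl = (slice(by * block, (by + 1) * block),
              slice(bx * block, (bx + 1) * block))
        coeffs = dctn(marked[sl], norm='ortho')
        coeffs[0, 0] += alpha * (2.0 * scrambled[i] - 1.0)   # embed into DC term
        marked[sl] = idctn(coeffs, norm='ortho')
    return marked

host = np.random.default_rng(0).integers(0, 256, (64, 64)).astype(float)
bits = np.random.default_rng(1).integers(0, 2, 64)
watermarked = embed_dc(host, bits)
```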

An Adaptive Multi-Level Thresholding and Dynamic Matching Unit Selection for IC Package Marking Inspection (IC 패키지 마킹검사를 위한 적응적 다단계 이진화와 정합단위의 동적 선택)

  • Kim, Min-Ki
    • The KIPS Transactions:PartB / v.9B no.2 / pp.245-254 / 2002
  • An IC package marking inspection system using machine vision locates and identifies target elements in an input image and judges marking quality by comparing the extracted elements with standard patterns. This paper proposes an adaptive multi-level thresholding (AMLT) method suitable for the sequence of operations of locating the target IC package, extracting the characters, and detecting the Pin 1 dimple. It also proposes a dynamic matching unit selection (DMUS) method that is robust to noise and effective at catching local marking errors. The main idea of the AMLT method is to restrict the input of Otsu's thresholding algorithm to a specified area and a partial range of gray values, so that it adapts to the specific domain. The DMUS method dynamically selects the matching unit according to the results of character extraction and layout analysis; despite the various error conditions that arise during character extraction and layout analysis, it can therefore select a minimal matching unit in any environment. In an experiment with 280 IC package images of eight types, the correct extraction rate of the IC package and Pin 1 dimple was 100% and the correct marking-quality decision rate was 98.8%. These results show that the proposed methods are effective for IC package marking inspection.
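
The core AMLT idea, restricting Otsu's thresholding to a specified area and a partial gray range, can be sketched as below; the ROI coordinates and gray-range limits are placeholders rather than the paper's settings.

```python
# A minimal sketch of range- and region-restricted Otsu thresholding.
import numpy as np

def otsu(values, lo=0, hi=255):
    """Otsu threshold computed only from pixel values inside [lo, hi]."""
    values = values[(values >= lo) & (values <= hi)]
    hist = np.bincount(values.astype(np.int64), minlength=256).astype(float)
    total = hist.sum()
    best_t, best_var = lo, -1.0
    for t in range(lo, hi):
        w0, w1 = hist[:t + 1].sum(), hist[t + 1:].sum()
        if w0 == 0 or w1 == 0:
            continue
        m0 = (np.arange(t + 1) * hist[:t + 1]).sum() / w0
        m1 = (np.arange(t + 1, 256) * hist[t + 1:]).sum() / w1
        var_between = (w0 / total) * (w1 / total) * (m0 - m1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

img = np.random.default_rng(0).integers(0, 256, (120, 160))
roi = img[20:100, 30:130]            # restrict to the package area (placeholder ROI)
t = otsu(roi, lo=60, hi=200)         # restrict to a partial gray-value range
binary = roi > t
```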

Real-Time Hand Pose Tracking and Finger Action Recognition Based on 3D Hand Modeling (3차원 손 모델링 기반의 실시간 손 포즈 추적 및 손가락 동작 인식)

  • Suk, Heung-Il;Lee, Ji-Hong;Lee, Seong-Whan
    • Journal of KIISE:Software and Applications / v.35 no.12 / pp.780-788 / 2008
  • Modeling hand poses and tracking their movement is one of the challenging problems in computer vision. There are two typical approaches for reconstructing hand poses in 3D, depending on the number of cameras from which images are captured: capturing images from multiple cameras or a stereo camera, or capturing images from a single camera. The former approach is relatively restricted by the environmental constraints of setting up multiple cameras. In this paper we propose a method for reconstructing 3D hand poses from a 2D image sequence captured by a single camera, by means of belief propagation in a graphical model, and for recognizing a finger-clicking motion using a hidden Markov model. We define a graphical model whose hidden nodes represent the joints of a hand and whose observable nodes carry the features extracted from the 2D input image sequence. To track hand poses in 3D, we use a belief propagation algorithm, which provides a robust and unified framework for inference in a graphical model. From the estimated 3D hand pose we extract the motion information of each finger, which is then fed into a hidden Markov model. To recognize natural finger actions, the movements of all fingers are considered when recognizing a single finger's action. We applied the proposed method to a virtual keypad system, which achieved a high recognition rate of 94.66% on 300 test samples.
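
As an illustration of the recognition stage only, the sketch below runs a Viterbi decoder over a discretized finger-motion sequence with a two-state HMM; the states, observation symbols, and probabilities are invented for illustration, and the belief-propagation pose tracker itself is not reproduced.

```python
# Tiny Viterbi decoder for a made-up finger-click HMM (illustrative only).
import numpy as np

def viterbi(obs, start_p, trans_p, emit_p):
    """Most likely hidden-state path for an observation sequence (log domain)."""
    logp = np.log(start_p) + np.log(emit_p[:, obs[0]])
    back = []
    for o in obs[1:]:
        scores = logp[:, None] + np.log(trans_p) + np.log(emit_p[:, o])[None, :]
        back.append(scores.argmax(axis=0))     # best predecessor of each state
        logp = scores.max(axis=0)
    path = [int(logp.argmax())]
    for bp in reversed(back):
        path.append(int(bp[path[-1]]))
    return path[::-1]

# States: 0 = finger idle, 1 = finger clicking.  Observations: quantized
# downward fingertip velocity (0 = still, 1 = slow, 2 = fast).
start = np.array([0.8, 0.2])
trans = np.array([[0.9, 0.1], [0.3, 0.7]])
emit = np.array([[0.7, 0.2, 0.1], [0.1, 0.3, 0.6]])
print(viterbi([0, 1, 2, 2, 1, 0], start, trans, emit))
```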

A Study on Joint Damage Model and Neural Networks-Based Approach for Damage Assessment of Structure (구조물 손상평가를 위한 접합부 손상모델 및 신경망기법에 관한 연구)

  • 윤정방;이진학;방은영
    • Journal of the Earthquake Engineering Society of Korea / v.3 no.3 / pp.9-20 / 1999
  • A method is proposed for estimating the joint damage of a steel structure from modal data using a neural network. The beam-to-column connection in a steel frame is represented by a zero-length rotational spring at the end of the beam element, and a connection fixity factor is defined from the rotational stiffness so that it lies in the range 0-1.0. The severity of joint damage is then defined as the reduction ratio of the connection fixity factor. Several advanced techniques are employed to make the neural-network damage identification robust. The concept of substructural identification is used for localized damage assessment in a large structure. A noise-injection learning algorithm is used to reduce the effect of noise in the modal data, and a data perturbation scheme is employed to assess the confidence of the estimated damage from a few sets of actual measurement data. The feasibility of the proposed method is examined through a numerical simulation study on a 2-bay 10-story structure and an experimental study on a 2-story structure. The joint damage can be reasonably estimated even when the measured modal vectors are limited to a localized substructure and the data are severely corrupted with noise.
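
The noise-injection learning step can be illustrated as below: each clean modal-data sample is replicated with small random perturbations before training so that the estimator becomes less sensitive to measurement noise. The 2% noise level, the number of copies, and the sample shapes are assumptions, not the paper's values.

```python
# Sketch of noise-injection data augmentation for modal-data training samples.
import numpy as np

def inject_noise(modal_data, copies=10, noise_ratio=0.02, seed=0):
    """Return the clean samples plus `copies` noisy replicas of each sample,
    with zero-mean Gaussian noise scaled by `noise_ratio` of each value."""
    rng = np.random.default_rng(seed)
    clean = np.asarray(modal_data, dtype=float)
    noisy = [clean + noise_ratio * np.abs(clean) * rng.standard_normal(clean.shape)
             for _ in range(copies)]
    return np.concatenate([clean] + noisy, axis=0)

# e.g. 50 training cases, each a vector of natural frequencies and mode-shape
# components (20 placeholder values per case).
X = np.random.default_rng(1).random((50, 20))
X_augmented = inject_noise(X)        # shape (550, 20)
```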


An Improvement of Still Image Quality Based on Error Resilient Entropy Coding for Random Error over Wireless Communications (무선 통신상 임의 에러에 대한 에러내성 엔트로피 부호화에 기반한 정지영상의 화질 개선)

  • Kim Jeong-Sig;Lee Keun-Young
    • Journal of the Institute of Electronics Engineers of Korea SP / v.43 no.3 s.309 / pp.9-16 / 2006
  • Many image and video compression algorithms split the image into blocks and produce variable-length code bits for each block. If variable-length code data are transmitted consecutively over an error-prone channel without any error protection, the receiving decoder cannot decode the stream properly, so standard image and video compression algorithms insert redundant information into the stream to provide some protection against channel errors. One such redundancy is the resynchronization marker, which enables the decoder to restart decoding from a known state after a transmission error, but its use must be restricted so as not to consume too much bandwidth. The Error Resilient Entropy Code (EREC) is a well-known method that can regain synchronization without any redundant information and works with general prefix codes, which many image compression methods use. This paper proposes the EREREC method to improve FEREC (Fast Error-Resilient Entropy Coding). It first calculates the initial search position according to the bit lengths of consecutive blocks. Second, the initial offset is decided using the statistical distribution of long and short blocks, and the offset can be adjusted to ensure that all offset sequence values can be used. The proposed EREREC algorithm speeds up the construction of FEREC slots and improves the compressed image quality in the presence of transmission errors. Simulation results show that the quality of the transmitted image is enhanced by about $0.3{\sim}3.5dB$ compared with the existing FEREC when random channel errors occur.
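
For context, the basic EREC slot-filling stage (not the FEREC or proposed EREREC offset-selection improvements) can be sketched as below: variable-length block bitstrings are first packed into equal-length slots, and leftover bits are then placed into the spare space of other slots visited with a simple offset sequence.

```python
# Simplified sketch of basic EREC slot filling (offset sequence 1..n-1 assumed).
import math

def erec_pack(blocks):
    """blocks: list of bit strings, e.g. '10110'.  Returns a list of slots,
    each a string of exactly slot_len bits."""
    n = len(blocks)
    total = sum(len(b) for b in blocks)
    slot_len = math.ceil(total / n)
    slots = [b[:slot_len] for b in blocks]        # stage 0: own slot first
    leftover = [b[slot_len:] for b in blocks]
    for offset in range(1, n):                    # later stages: search other slots
        for k in range(n):
            j = (k + offset) % n
            space = slot_len - len(slots[j])
            if space > 0 and leftover[k]:
                slots[j] += leftover[k][:space]
                leftover[k] = leftover[k][space:]
    # Pad unused space with zeros (only needed when total is not a multiple of n).
    return [s.ljust(slot_len, '0') for s in slots]

print(erec_pack(['1' * 3, '0' * 9, '1' * 5, '01' * 2]))
```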

Efficient Algorithms for Multicommodity Network Flow Problems Applied to Communications Networks (다품종 네트워크의 효율적인 알고리즘 개발 - 정보통신 네트워크에의 적용 -)

  • 윤석진;장경수
    • The Journal of Information Technology / v.3 no.2 / pp.73-85 / 2000
  • Efficient algorithms are suggested in this study for solving multicommodity network flow problems applied to communications systems. These are typical NP-complete optimization problems that require integer solutions and whose computational complexity grows rapidly with problem size. Although the suggested algorithms are not guaranteed to be optimal, they are computationally efficient and produce near-optimal, primal integral solutions. We supplement the traditional Lagrangian method with a price-directive decomposition, which proceeds as follows. First, a primal heuristic is developed from which good initial feasible solutions can be obtained. Second, the dual is initialized using marginal values from the primal heuristic; the Lagrangian optimization is usually started from a naive dual solution ${\lambda}=0$ and then converges very slowly because these values are far from the optimum. Better dual solutions improve the primal solution, and better primal bounds improve the step size used by the dual optimization. Third, a limitation of the Lagrangian decomposition approach is addressed: because the method is dual-based, the solution need not converge to the optimal solution of the multicommodity network problem, so an efficient re-allocation heuristic is devised to adjust the relaxed solution to a feasible one. In addition, the computational performance of various versions of the developed algorithms is compared and evaluated. The commercial LP software LINGO 4.0 (extended version for the LINDO system) is used for a robust and efficient implementation, and the test problem sets are generated randomly. Numerical results on randomly generated examples demonstrate that the algorithm is near-optimal (within 2% of the optimum) and computationally quite efficient.
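
A toy illustration of the price-directive Lagrangian relaxation is sketched below: arc capacities are dualized with multipliers, each commodity solves an independent shortest-path subproblem on the priced network, and the multipliers follow a subgradient update. The network, demands, and step-size rule are invented, and the paper's primal heuristic and re-allocation step are not reproduced.

```python
# Subgradient Lagrangian relaxation on a toy two-commodity network.
import networkx as nx

arcs = {('s1', 'a'): (1.0, 20), ('s2', 'a'): (1.0, 20),
        ('a', 'b'): (1.0, 10),                            # shared bottleneck arc
        ('b', 't1'): (1.0, 20), ('b', 't2'): (1.0, 20),
        ('s1', 't1'): (6.0, 20), ('s2', 't2'): (6.0, 20)}  # arc: (cost, capacity)
commodities = [('s1', 't1', 8), ('s2', 't2', 8)]           # (source, sink, demand)

lam = {e: 0.0 for e in arcs}                               # Lagrange multipliers
for it in range(1, 51):
    G = nx.DiGraph()
    for (u, v), (cost, cap) in arcs.items():
        G.add_edge(u, v, weight=cost + lam[(u, v)])        # priced arc costs
    flow = {e: 0.0 for e in arcs}
    for src, dst, demand in commodities:                   # independent subproblems
        path = nx.shortest_path(G, src, dst, weight='weight')
        for e in zip(path, path[1:]):
            flow[e] += demand
    step = 1.0 / it                                        # diminishing step size
    for e, (cost, cap) in arcs.items():                    # subgradient: flow - capacity
        lam[e] = max(0.0, lam[e] + step * (flow[e] - cap))

print({e: round(v, 3) for e, v in lam.items()})
```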


Diagnosis of Ictal Hyperperfusion Using Subtraction Image of Ictal and Interictal Brain Perfusion SPECT (발작기와 발작간기 뇌 관류 SPECT 감산영상을 이용한 간질원인 병소 진단)

  • Lee, Dong Soo;Seo, Jong-Mo;Lee, Jae Sung;Lee, Sang-Kun;Kim, Hyun Jip;Chung, June-Key;Lee, Myung Chul;Koh, Chang-Soon
    • The Korean Journal of Nuclear Medicine / v.32 no.1 / pp.20-31 / 1998
  • A robust algorithm that discloses and displays the difference between ictal and interictal perfusion may facilitate the detection of ictal hyperperfusion foci. The diagnostic performance of localizing epileptogenic zones with subtracted SPECT images was compared with visual diagnosis using ictal and interictal SPECT, MR, or PET. Ictal and interictal Tc-99m-HMPAO cerebral perfusion SPECT images of 48 patients were processed to obtain parametric subtraction images. The epileptogenic foci of all patients were confirmed by a seizure-free state after resection of the epileptogenic zones. For the subtraction SPECT, we used the normalized difference ratio of pixel counts, (ictal - interictal)/interictal ${\times}100\%$, after correcting the coordinates of the ictal and interictal SPECT in a semi-automated 3-dimensional fashion. We identified epileptogenic zones on the subtraction SPECT and compared its performance with the visual diagnosis of ictal and interictal SPECT, MR, and PET, using the post-surgical diagnosis as the gold standard. The concordance of subtraction SPECT and ictal-interictal SPECT was moderately good (kappa = 0.49). The sensitivity of ictal-interictal SPECT was 73% and that of subtraction SPECT 58%; the positive predictive value of ictal-interictal SPECT was 76% and that of subtraction SPECT 64%. There was no statistical difference between the sensitivity or positive predictive values of subtraction SPECT and those of ictal-interictal SPECT, MR, or PET, and the same held when patients were divided into temporal lobe epilepsy and neocortical epilepsy. We conclude that the subtraction SPECT we produced had diagnostic performance equivalent to ictal-interictal SPECT in localizing epileptogenic zones. The additional value of subtraction SPECT in the clinical interpretation of ictal and interictal SPECT should be evaluated further.
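
The normalized difference-ratio computation can be sketched as below, assuming the ictal and interictal volumes have already been co-registered and count-normalized (the study does this semi-automatically in 3D); the 20% display threshold and the synthetic data are placeholders, not the study's criteria.

```python
# Sketch of the percent-change subtraction image between two SPECT volumes.
import numpy as np

def subtraction_ratio(ictal, interictal, brain_mask, threshold=20.0):
    """Percent change (ictal - interictal)/interictal * 100 inside the brain
    mask, keeping only voxels above `threshold` percent hyperperfusion."""
    ratio = np.zeros_like(ictal, dtype=float)
    valid = brain_mask & (interictal > 0)
    ratio[valid] = (ictal[valid] - interictal[valid]) / interictal[valid] * 100.0
    return np.where(ratio >= threshold, ratio, 0.0)

rng = np.random.default_rng(0)
interictal = rng.poisson(100, (16, 64, 64)).astype(float)
ictal = interictal * (1 + 0.3 * (rng.random((16, 64, 64)) > 0.97))  # focal increase
mask = np.ones_like(ictal, dtype=bool)
hyper = subtraction_ratio(ictal, interictal, mask)
```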


Development of Rotation Invariant Real-Time Multiple Face-Detection Engine (회전변화에 무관한 실시간 다중 얼굴 검출 엔진 개발)

  • Han, Dong-Il;Choi, Jong-Ho;Yoo, Seong-Joon;Oh, Se-Chang;Cho, Jae-Il
    • Journal of the Institute of Electronics Engineers of Korea SP / v.48 no.4 / pp.116-128 / 2011
  • In this paper, we propose the structure of a high-performance face-detection engine that handles facial rotation using a rotation transformation and that minimizes the required memory compared to a previous face-detection engine. The validity of the proposed structure was verified through an FPGA implementation. For high-performance face detection, the Modified Census Transform (MCT), which is robust against lighting change, was used. The AdaBoost learning algorithm was used to create optimized learning data, and a rotation transformation stage was added to maintain effectiveness against facial rotation. The proposed hardware structure is composed of a Color Space Converter, Noise Filter, Memory Controller Interface, Image Rotator, Image Scaler, MCT unit, Candidate Detector / Confidence Mapper, Position Resizer, Data Grouper, and Overlay Processor / Color Overlay Processor. The face-detection engine was tested using a Virtex5 LX330 FPGA board, a QVGA-grade CMOS camera, and an LCD display, and showed excellent performance in diverse real-life environments and on a standard face-detection database. The result is a high-performance real-time face-detection engine that processes at least 60 frames per second, is robust to lighting changes and facial rotation, and can detect 32 faces of various sizes simultaneously.
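
As an illustration of the MCT feature alone (the AdaBoost cascade and the hardware pipeline are not reproduced), the sketch below compares each pixel of every 3x3 window with the window mean and packs the nine comparison bits into a per-pixel index.

```python
# Sketch of the Modified Census Transform on 3x3 neighbourhoods.
import numpy as np

def mct(image):
    """Return the 9-bit MCT index for every interior pixel of a gray image."""
    img = image.astype(float)
    h, w = img.shape
    # The nine shifted copies of the image give each pixel's 3x3 neighbourhood.
    windows = np.stack([img[dy:h - 2 + dy, dx:w - 2 + dx]
                        for dy in range(3) for dx in range(3)])
    mean = windows.mean(axis=0)
    weights = (1 << np.arange(9)).reshape(9, 1, 1)     # bit weight per neighbour
    return ((windows > mean) * weights).sum(axis=0).astype(np.uint16)

img = np.random.default_rng(0).integers(0, 256, (240, 320))
codes = mct(img)       # lighting-invariant local-structure codes in [0, 511)
```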

Real-Time Face Recognition Based on Subspace and LVQ Classifier (부분공간과 LVQ 분류기에 기반한 실시간 얼굴 인식)

  • Kwon, Oh-Ryun;Min, Kyong-Pil;Chun, Jun-Chul
    • Journal of Internet Computing and Services / v.8 no.3 / pp.19-32 / 2007
  • This paper presents a new face recognition method based on an LVQ neural network for constructing a real-time face recognition system. Previous approaches that combined PCA or LDA with a neural network usually required long training times; the supervised LVQ network needs much less training time and can maximize the separability between classes. In the proposed method, the input face image is transformed by PCA and then LDA into low-dimensional feature vectors, and the face is recognized by the LVQ network. To make the system robust to external lighting variation, light compensation is performed on the detected face by max-min normalization as preprocessing, and the PCA and LDA transformations are applied to the normalized face image to produce low-dimensional feature vectors. To determine the initial centers of the LVQ network and speed up its convergence, the K-Means clustering algorithm is adopted, and the class representative vectors are then produced by LVQ2 training from these initial center vectors. Face recognition is performed using the Euclidean distance between the class center vectors and the feature vector of the input image. Experiments show that the proposed method achieves a higher recognition rate than conventional PCA or a hybrid of PCA and LDA, both for still images from the ORL database and for image sequences.
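
As a sketch of the classifier stage alone, the code below trains a minimal LVQ1 network on feature vectors that are assumed to have already been projected by PCA and LDA; the paper's K-Means initialization and LVQ2 refinement are not reproduced, and the learning-rate schedule and toy data are assumptions.

```python
# Minimal LVQ1 training and nearest-prototype classification (illustrative).
import numpy as np

def train_lvq1(X, y, prototypes, proto_labels, epochs=20, lr0=0.1, seed=0):
    """Move the winning prototype toward (same class) or away from (different
    class) each training vector, with a linearly decaying learning rate."""
    rng = np.random.default_rng(seed)
    protos = prototypes.copy()
    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / epochs)
        for i in rng.permutation(len(X)):
            d = np.linalg.norm(protos - X[i], axis=1)
            w = d.argmin()                                  # winning prototype
            sign = 1.0 if proto_labels[w] == y[i] else -1.0
            protos[w] += sign * lr * (X[i] - protos[w])
    return protos

def classify(x, protos, proto_labels):
    return proto_labels[np.linalg.norm(protos - x, axis=1).argmin()]

# Toy data: two classes of low-dimensional "face features" (placeholders).
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (50, 5)), rng.normal(3, 1, (50, 5))])
y = np.array([0] * 50 + [1] * 50)
protos = train_lvq1(X, y, X[[0, 50]].copy(), np.array([0, 1]))
print(classify(rng.normal(3, 1, 5), protos, np.array([0, 1])))
```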
