• Title/Summary/Keyword: normalization method


Recognition Performance Enhancement by License Plate Normalization (번호판 정규화에 의한 인식 성능 향상 기법)

  • Kim, Do-Hyeon;Kang, Min-Kyung;Cha, Eui-Young
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.12 no.7
    • /
    • pp.1278-1290
    • /
    • 2008
  • This paper proposes a preprocessing method and a neural-network-based character recognizer to enhance the overall performance of a license plate recognition system. First, the plate outlines are extracted by virtual line matching, and the four vertices are obtained by calculating the intersection points of the extracted lines. Using these vertices, the plate image is reconstructed as a rectangular image by a bilinear transform. Finally, the license plate is recognized by a neural-network-based classifier trained with the delta-bar-delta algorithm. Various license plate images were used in the experiments, and the proposed plate normalization improved recognition performance by up to 16 percent.
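
As a rough sketch of the normalization step this abstract describes, the following Python fragment warps a four-vertex plate region onto a rectangle by bilinear interpolation; the corner ordering, output size, and function name are illustrative assumptions, not details from the paper.

```python
import numpy as np

def normalize_plate(image, vertices, out_w=144, out_h=48):
    # Warp the quadrilateral bounded by the four detected vertices onto a
    # rectangle. Assumed corner order: top-left, top-right, bottom-right,
    # bottom-left, each as an (x, y) pair.
    tl, tr, br, bl = [np.asarray(v, dtype=float) for v in vertices]
    out = np.zeros((out_h, out_w), dtype=image.dtype)
    for y in range(out_h):
        v = y / (out_h - 1)
        for x in range(out_w):
            u = x / (out_w - 1)
            # A bilinear combination of the corners gives the source pixel
            # for each target (rectangle) pixel.
            sx, sy = ((1 - u) * (1 - v) * tl + u * (1 - v) * tr
                      + u * v * br + (1 - u) * v * bl)
            xi, yi = int(round(sx)), int(round(sy))
            if 0 <= yi < image.shape[0] and 0 <= xi < image.shape[1]:
                out[y, x] = image[yi, xi]
    return out
```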

Automatic Pancreas Detection on Abdominal CT Images using Intensity Normalization and Faster R-CNN (복부 CT 영상에서 밝기값 정규화 및 Faster R-CNN을 이용한 자동 췌장 검출)

  • Choi, Si-Eun;Lee, Seong-Eun;Hong, Helen
    • Journal of Korea Multimedia Society
    • /
    • v.24 no.3
    • /
    • pp.396-405
    • /
    • 2021
  • In surgery to remove pancreatic cancer, it is important to determine the shape of the patient's pancreas. However, previous studies are limited in detecting the pancreas automatically in abdominal CT images because the pancreas varies in shape, size, and location from patient to patient. Therefore, in this paper, we propose a method that learns the various shapes of the pancreas across patients and adjacent slices using Faster R-CNN based on Inception V2, and automatically detects the pancreas in abdominal CT images. Model training and testing were performed on the NIH Pancreas-CT Dataset, and intensity normalization was applied to all data to improve pancreas detection accuracy. Additionally, according to the shape of the pancreas, the test dataset was divided into top, middle, and bottom slices to evaluate the model's performance on each subset. The results show that mAP@.50IoU reached 91.7% on the top slices and 95.4% on the bottom slices, with the highest performance, 98.5% mAP@.50IoU, on the middle slices. Thus, we confirmed that the model can accurately detect the pancreas in CT images.
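
A minimal sketch of CT intensity normalization of the kind mentioned above; the soft-tissue window bounds are a common choice assumed here, since the abstract does not state the paper's values.

```python
import numpy as np

def normalize_ct_intensity(volume_hu, window=(-100, 240)):
    # Clip Hounsfield units to a soft-tissue window and rescale to [0, 1],
    # so slice intensities are comparable across patients and scanners.
    lo, hi = window
    clipped = np.clip(volume_hu.astype(np.float32), lo, hi)
    return (clipped - lo) / (hi - lo)
```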

The development of new electromyographic parameters to diagnose low-back pain patients during sagittal flexion/extension motion

  • Kim, J.Y.
    • Proceedings of the ESK Conference
    • /
    • 1996.10a
    • /
    • pp.21-25
    • /
    • 1996
  • The electromyographic (EMG) signals of flexor-extensor muscle pairs were investigated to identify the neural excitation patterns of low-back pain (LBP) patients during a repetitive bending motion. New parameters and an EMG normalization technique were developed to quantitatively represent the difference in temporal EMG patterns between ten healthy subjects and ten LBP patients. Flexor-extensor muscle pairs such as rectus abdominis (RA)-erector spinae (ES at LS), external oblique (EO)-internal oblique (IO), rectus femoris (quadriceps: QUD)-biceps femoris (hamstrings: HAM), and tibialis anterior (TA)-gastrocnemius (GAS) were selected in this study. Results indicated that temporal EMG patterns such as the peak timing difference of the QUD-HAM muscle pair and the duration of coexcitation of the ES-RA muscle pair showed statistically significant differences between healthy subjects and LBP patients. These results indicated that the new technique and parameters could be used as a diagnostic tool, especially for LBP patients with soft-tissue injuries that are rarely identified by traditional imaging techniques such as X-ray, CT scan, or MRI. Importantly, the new EMG technique did not require the maximal voluntary contraction (MVC) measurement for normalization, which helped patients minimize pain during and after the session. Further study is needed to validate and refine this method for clinical application.
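
The abstract does not spell out the MVC-free normalization or the new parameters in formula form; the sketch below shows one plausible reading: within-trial peak normalization, a peak-timing difference, and a coexcitation duration (the threshold value is assumed).

```python
import numpy as np

def normalize_dynamic_peak(envelope):
    # Scale an EMG linear envelope by its within-trial peak instead of an
    # MVC reference -- one MVC-free normalization scheme, assumed here
    # since the abstract does not give the exact formula.
    peak = float(np.max(envelope))
    return envelope / peak if peak > 0 else envelope

def peak_timing_difference(env_a, env_b, fs):
    # Time (s) between the activation peaks of a flexor-extensor pair.
    return (int(np.argmax(env_a)) - int(np.argmax(env_b))) / fs

def coexcitation_duration(env_a, env_b, fs, threshold=0.2):
    # Seconds during which both normalized envelopes exceed a threshold
    # (the 0.2 threshold is illustrative).
    both_on = (env_a > threshold) & (env_b > threshold)
    return np.count_nonzero(both_on) / fs
```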


Comparison of Muscle Activities between the Diagonal Pattern of Shoulder Exercises Using the %Normalization and %Isolation Method (독립화 비율과 정량화 비율을 사용한 대각선 패턴의 어깨운동 간의 근활성도 비교)

  • Sang-Yeol Lee;Se-Yeon Park
    • PNF and Movement
    • /
    • v.21 no.1
    • /
    • pp.87-94
    • /
    • 2023
  • Purpose: The present study aims to investigate the effects of diagonal exercise patterns on selective activation of the upper-extremity muscles using both normalization and isolation methods. Methods: In total, 17 asymptomatic subjects participated in this study. During the two diagonal patterns of exercise (diagonals 1 and 2), the muscular activities of the upper trapezius (UT), lower trapezius (LT), serratus anterior (SA), anterior deltoid (AD), and infraspinatus (IS) were measured. The collected data were analyzed in two ways: according to the dominance of muscle activities (%Isolation) and according to normalized activities (%MVIC). Results: There were significant differences in LT, SA, AD, and IS between %MVIC and %Isolation (p<0.05), and the diagonal 1 pattern of exercise showed significantly greater LT activity than the diagonal 2 pattern (p<0.05). Further, except for LT, there were no significant differences in muscle activities between the diagonal 1 and 2 exercises. Conclusion: The present study suggests that a diagonal pattern of exercise is advantageous for strengthening the shoulder muscles, but caution is needed when applying it to patients who require selective strengthening. Across both the concentric and eccentric phases of exercise, there was no significant difference in muscular activation, except in LT, between the two diagonal patterns.
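
A small sketch contrasting the two analyses named above; the %Isolation definition used here (each muscle's share of the summed activity) is an assumption, as the abstract does not give the formula.

```python
def percent_mvic(rms, mvic_rms):
    # Conventional normalization: activity as a percentage of the maximal
    # voluntary isometric contraction reference.
    return 100.0 * rms / mvic_rms

def percent_isolation(rms_by_muscle):
    # Each muscle's share of the summed activity across all recorded
    # muscles -- one common reading of an isolation ratio, assumed here.
    total = sum(rms_by_muscle.values())
    return {m: 100.0 * v / total for m, v in rms_by_muscle.items()}

# Example with made-up RMS values for the five recorded muscles:
activity = {"UT": 0.12, "LT": 0.30, "SA": 0.25, "AD": 0.18, "IS": 0.15}
print(percent_isolation(activity))
```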

Truncation Artifact Reduction Using Weighted Normalization Method in Prototype R/F Chest Digital Tomosynthesis (CDT) System (프로토타입 R/F 흉부 디지털 단층영상합성장치 시스템에서 잘림 아티팩트 감소를 위한 가중 정규화 접근법에 대한 연구)

  • Son, Junyoung;Choi, Sunghoon;Lee, Donghoon;Kim, Hee-Joung
    • Journal of the Korean Society of Radiology
    • /
    • v.13 no.1
    • /
    • pp.111-118
    • /
    • 2019
  • Chest digital tomosynthesis has become a practical imaging modality because it can solve the problem of overlapping anatomy in conventional chest radiography. However, because of both the limited scan angle and the finite-size detector, a portion of the chest is not represented in some or all of the projections. This produces intensity discontinuities across the field-of-view boundaries in the reconstructed slices, which we refer to as truncation artifacts. The purpose of this study was to reduce truncation artifacts using a weighted normalization approach and to investigate the performance of this approach on our prototype chest digital tomosynthesis system. The source-to-image distance was 1100 mm, and the center of rotation of the X-ray source was located 100 mm above the detector surface. After obtaining 41 projection views over ±20°, tomosynthesis slices were reconstructed with the filtered back-projection algorithm. For quantitative evaluation, peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) values were evaluated against a reference image reconstructed in simulation, and the mean value along a specified direction was evaluated using real data. The simulation results showed that both PSNR and SSIM improved, and the experimental results showed that the artifacts' effect on the mean value along the specified direction of the reconstructed image was reduced. In conclusion, the weighted normalization method improves image quality by reducing truncation artifacts, suggesting that it could improve the image quality of chest digital tomosynthesis.
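
One generic way to realize a weighted normalization of this kind is to divide each reconstructed pixel by its projection coverage; this is a hedged sketch of that idea, not the paper's exact weighting scheme.

```python
import numpy as np

def weighted_normalization(recon_slice, coverage):
    # Divide each reconstructed pixel by the (normalized) number of
    # projection views that actually cover it, so pixels seen by fewer
    # views are not systematically darker near the field-of-view edges.
    weights = coverage.astype(float) / coverage.max()
    safe = np.where(weights > 0, weights, 1.0)  # avoid division by zero
    return recon_slice / safe
```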

Calculation Model for Function & Cost Score based on Normalization Method in Design VE (정규화 기법 기반의 설계VE 기능 및 비용 점수 산출 모델)

  • Lee, Jongsik
    • Korean Journal of Construction Engineering and Management
    • /
    • v.16 no.4
    • /
    • pp.98-106
    • /
    • 2015
  • VE aims at budget reduction, functional improvement, structural safety, and quality assurance for public construction projects. However, the structural safety and quality assurance reviews may be insufficient because the related regulations consist mostly of analyses of the economic efficiency of a design. In addition, owing to the misconception of VE as a cost-saving methodology, alternatives are still presented that focus mainly on cost savings, with no objective evaluation of function in relation to cost. To improve this, the government adopted life-cycle cost reduction and value-improvement proposals, requiring that the cost and function of the original plan versus the alternative plan, and the value changes between them, be specified. However, because no specific procedure is prescribed, practice is driven mainly by convenience rather than by a theoretical basis. The current method also sets different starting points for function and cost, reflecting their attributional differences. Furthermore, an evaluation standard that correlates the two is an important element in rational decision making when assessing and choosing an alternative. This paper analyzes the process and methods of function and cost scoring in VE and suggests a mathematical normalization model to support rational decision making when selecting an optimum plan.
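
As a hedged illustration of normalization-based value scoring, the sketch below rescales function scores and costs to shares of their totals and forms the standard VE value index V = F/C; the paper's specific normalization model is not reproduced here.

```python
def value_indices(function_scores, costs):
    # Normalize function scores and costs to shares of their totals so the
    # two attributes start from a common scale, then form the value index
    # V = F / C for each alternative (the classic VE value formula).
    f_total = sum(function_scores)
    c_total = sum(costs)
    return [(f / f_total) / (c / c_total)
            for f, c in zip(function_scores, costs)]

# Example: original design vs. two alternatives (made-up scores).
print(value_indices([80, 90, 85], [100, 95, 80]))
```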

A Study of Q_P⁻¹ and Q_S⁻¹ Based on Data of 9 Stations in the Crust of the Southeastern Korea Using Extended Coda Normalization Method (확장 Coda 규격화 방법에 의한 한국남동부 지각의 Q_P⁻¹, Q_S⁻¹ 연구)

  • Chung, Tae-Woong;Sato, Haruo;Lee, Kie-Hwa
    • Journal of the Korean earth science society
    • /
    • v.22 no.6
    • /
    • pp.500-511
    • /
    • 2001
  • For southeastern Korea around the Yangsan fault, we measured Q_P⁻¹ and Q_S⁻¹ simultaneously using the extended coda-normalization method on seismograms recorded at 9 stations deployed by KIGAM. We analyzed 707 seismograms of local earthquakes that occurred between December 1994 and February 2000. From the seismograms, bandpass-filtered traces were made by applying Butterworth filters with frequency bands of 1~2, 2~4, 4~8, 8~16, and 16~32 Hz. The estimated Q_P⁻¹ and Q_S⁻¹ values decrease from (7±2)×10⁻³ and (5±4)×10⁻⁴ at 1.5 Hz to (5±4)×10⁻³ and (5±2)×10⁻⁴ at 24 Hz, respectively. By fitting a power-law frequency dependence to the values estimated over all stations, we obtained 0.009(±0.003)f^(−1.05±0.14) for Q_P⁻¹ and 0.004(±0.001)f^(−0.75±0.14) for Q_S⁻¹, where f is the frequency in Hz.
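
For orientation, a bare-bones coda-normalization estimate of Q⁻¹ at one frequency band might look like the following; the assumed wave speed and the single-phase simplification are mine, and the paper's extended method estimates Q_P⁻¹ and Q_S⁻¹ jointly.

```python
import numpy as np

def coda_normalized_q_inv(r_km, direct_amp, coda_amp, f_hz, v_km_s=3.5):
    # Coda-normalization idea in sketch form: for many events at distances
    # r, regress ln(r * A_direct / A_coda) against r; the slope equals
    # -pi * f / (Q * v), so Q^-1 = -slope * v / (pi * f).
    r_km = np.asarray(r_km, dtype=float)
    y = np.log(r_km * np.asarray(direct_amp) / np.asarray(coda_amp))
    slope, _intercept = np.polyfit(r_km, y, 1)
    return -slope * v_km_s / (np.pi * f_hz)
```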


Adaptable Center Detection of a Laser Line with a Normalization Approach using Hessian-matrix Eigenvalues

  • Xu, Guan;Sun, Lina;Li, Xiaotao;Su, Jian;Hao, Zhaobing;Lu, Xue
    • Journal of the Optical Society of Korea
    • /
    • v.18 no.4
    • /
    • pp.317-329
    • /
    • 2014
  • In vision measurement systems based on structured light, the key to detection precision is accurately determining the central position of the projected laser line in the image. The purpose of this research is to extract laser-line centers based on a decision function generated to distinguish the real centers from candidate points with a high recognition rate. First, the image is preprocessed with a difference-image method to segment the laser line. Second, feature points at the integer-pixel level are selected as initial line centers by the eigenvalues of the Hessian matrix. Third, according to the light-intensity distribution of a laser line, which obeys a Gaussian distribution in the transverse section and a constant distribution in the longitudinal section, a normalized model of the Hessian-matrix eigenvalues for the candidate centers is presented to reasonably balance the two eigenvalues, which indicate the variation tendencies of the second-order partial derivatives of the Gaussian function and the constant function, respectively. The proposed model integrates a Gaussian recognition function and a sinusoidal recognition function. The Gaussian recognition function estimates the characteristic that one eigenvalue approaches zero and enhances the sensitivity of the decision function to that characteristic, which corresponds to the longitudinal direction of the laser line. The sinusoidal recognition function evaluates the feature that the other eigenvalue is negative with a large absolute value, making the decision function more sensitive to that feature, which is related to the transverse direction of the laser line. In the proposed model, the decision function synthetically assigns higher values to the real centers, considering the properties of the laser line in both the longitudinal and transverse directions. Moreover, this method provides a decision value from 0 to 1 for arbitrary candidate centers, which yields a normalized measure for different laser lines in different images. Pixels whose normalized values are close to 1 are determined to be the real centers by progressive scanning of the image columns. Finally, the zero point of a second-order Taylor expansion in the eigenvector's direction is employed to further refine the extracted central points at the subpixel level. The experimental results show that the method based on this normalization model accurately extracts the coordinates of laser-line centers and obtains a higher recognition rate in two groups of experiments.
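
A compact sketch of the per-pixel eigenvalue computation and a normalized decision function in the spirit described above; the exact Gaussian and sinusoidal recognition functions and their weighting are not reproduced from the paper.

```python
import numpy as np

def hessian_eigenvalues(img):
    # Finite-difference Hessian per pixel; returns the two eigenvalue maps.
    gy, gx = np.gradient(img.astype(float))
    gyy, gyx = np.gradient(gy)
    gxy, gxx = np.gradient(gx)
    tr = gxx + gyy
    det = gxx * gyy - gxy * gyx
    disc = np.sqrt(np.maximum(tr ** 2 / 4.0 - det, 0.0))
    return tr / 2.0 + disc, tr / 2.0 - disc  # larger, smaller eigenvalue

def center_decision(lam_along, lam_cross):
    # Decision value in [0, 1]: a Gaussian term rewards the along-line
    # eigenvalue being near zero, and a sinusoidal term rewards a strongly
    # negative cross-line eigenvalue -- an illustrative combination of the
    # two recognition-function ideas.
    scale = np.abs(lam_cross).max() + 1e-12
    gauss_term = np.exp(-(lam_along / scale) ** 2)
    sine_term = np.sin(0.5 * np.pi * np.clip(-lam_cross / scale, 0.0, 1.0))
    return gauss_term * sine_term
```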

Formant-broadened CMS Using the Log-spectrum Transformed from the Cepstrum (켑스트럼으로부터 변환된 로그 스펙트럼을 이용한 포먼트 평활화 켑스트럴 평균 차감법)

  • 김유진;정혜경;정재호
    • The Journal of the Acoustical Society of Korea
    • /
    • v.21 no.4
    • /
    • pp.361-373
    • /
    • 2002
  • In this paper, we propose a channel normalization method to improve the performance of cepstral mean subtraction (CMS), which is widely adopted to normalize channel variation in speech and speaker recognition. CMS, which estimates the channel effect by averaging the long-term cepstrum, has the weakness that the estimated channel is biased by the formants of voiced speech, which carry useful speech information. The proposed formant-broadened cepstral mean subtraction (FBCMS) is based on the facts that formants can be found easily in the log spectrum, obtained from the cepstrum by Fourier transform, and that formants correspond to the dominant poles of the all-pole model commonly used to model the vocal tract. FBCMS selects, from the log spectrum and without polynomial factorization, only the poles to be broadened, and produces a formant-broadened cepstrum by widening the bandwidths of the formant poles. The channel cepstrum can then be estimated effectively by averaging the formant-broadened cepstral coefficients. We performed experiments comparing FBCMS with CMS and pole-filtered CMS (PFCMS) using four simulated telephone channels. In the channel estimation experiment, we evaluated the distance between the cepstrum of the real channel and that of the estimated channel, and found that the mean cepstrum came closer to the channel cepstrum because its bias toward speech was softened. In the text-independent speaker identification experiment, the proposed method was superior to conventional CMS and comparable to pole-filtered CMS. Consequently, the proposed method effectively normalizes channel variation within the framework of conventional CMS.
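
For reference, plain CMS and the cepstrum-to-log-spectrum step that FBCMS builds on can be sketched as follows; the FFT size is an illustrative choice.

```python
import numpy as np

def cepstral_mean_subtraction(cepstra):
    # Plain CMS: the long-term mean cepstrum approximates the channel and
    # is subtracted from every frame. cepstra: (frames, coeffs) array.
    return cepstra - cepstra.mean(axis=0, keepdims=True)

def cepstrum_to_log_spectrum(cepstrum, nfft=512):
    # Fourier-transforming a real, truncated cepstrum back to a
    # log-magnitude spectrum, where formant peaks are easy to locate --
    # the representation FBCMS inspects before broadening formant
    # bandwidths.
    return np.fft.rfft(cepstrum, n=nfft).real
```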

A Suggestion for Counting Efficiency Management of the Automation Instrument (자동화장비 계측효율 관리적 측정방법 제안)

  • Park, Jun Mo;Kim, Han Chul;Choi, Seung Won
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.22 no.2
    • /
    • pp.105-111
    • /
    • 2018
  • Purpose: Quality control of instruments takes up a large part of radioimmunoassay work. For the gamma-ray instrument, one of the important instruments in the laboratory, the condition and performance are monitored and quality control is performed by measuring normalization, calibration, background, and so on. However, some automated instruments cannot measure the counting efficiency of gamma-ray counters, resulting in insufficient management of their performance evaluation. Therefore, the purpose of this paper is to support continuous and regular quality control by suggesting how to measure the counting efficiency of gamma-ray instruments. Materials and Methods: In the comparative measurement method against a gamma-ray instrument dedicated to nuclear medicine examinations, the CPM and counting efficiency can be obtained after the normalization measurement by inserting 200 μL of I-125 (CPM 50,000~500,000) into a test tube. From these CPM and counting-efficiency values, the DPM value can be calculated; the same source is then counted on the automated instrument, and the DPM is entered to calculate its counting efficiency by this comparative method. Another method is to calculate the counting efficiency by estimating the half-life using the radiation-source information of the tracer in the B test reagents of company A. Results: According to the calculation formula using the DPM obtained from the normalization count of the gamma-ray counters, the detection efficiency was 75.16% for Detector 1, 76.88% for Detector 2, 77.13% for Detector 3, 75.36% for Detector 4, and 73.2% for Detector 5. Using the other calculation formula, based on the half-life estimate, the detection efficiencies from Detector 1 to Detector 5 were 74.9%, 75.1%, 76.5%, 74.9%, and 73.2%, respectively. Conclusion: Although the accuracy of the counting efficiencies of both methods is limited, they are considered useful for ongoing quality control if the counting efficiency is managed within set acceptable ranges; for example, the measurement efficiency may be required to be 70% or higher, the allowed %difference between measurements within 3%, and the %difference between detectors within 5%.
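
The two calculations the abstract describes reduce to simple formulas; the sketch below shows the efficiency ratio and a half-life decay correction (the 59.4-day I-125 half-life is supplied here for illustration).

```python
def counting_efficiency_percent(cpm, dpm):
    # Counting efficiency (%) = measured counts per minute divided by the
    # source's known disintegrations per minute, times 100.
    return 100.0 * cpm / dpm

def decay_corrected_dpm(dpm_at_calibration, elapsed_days,
                        half_life_days=59.4):
    # DPM of an I-125 source after decay; 59.4 days is the accepted I-125
    # half-life, used in the half-life-based method the paper describes.
    return dpm_at_calibration * 0.5 ** (elapsed_days / half_life_days)

# Example with made-up counts: a detector registering 375,800 CPM from a
# 500,000 DPM source has a counting efficiency of 75.16%.
print(counting_efficiency_percent(375_800, 500_000))
```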