• Title/Summary/Keyword: error back-projection


A Study on Vital Statistics Survey : its Type, Source of Errors and Improvement Scheme (인구동태조사 개선을 위한 방법론적 고찰)

  • 김일현;최봉호
    • Korea journal of population studies
    • /
    • v.12 no.1
    • /
    • pp.19-29
    • /
    • 1989
  • It is well known that vital statistics are of great importance as basic data for establishing a wide range of national policies. Vital statistics are especially important among demographic information for monitoring and evaluating population policy, constructing life tables, making population projections, and studying various aspects of society. In principle, the production of vital statistics is based on the registration system. It is, however, still observed that there are limitations in fully utilizing the registration system due to inherent problems in its coverage, accuracy, and timeliness. Thus, as an alternative, many countries conduct vital statistics surveys to supplement the registration system and obtain in-depth data. Korea is no exception. The National Bureau of Statistics carries out the so-called Continuous Demographic Survey, a multi-round retrospective survey covering 32,000 households with a reference period of one month. The survey also has the characteristics of a multi-subject sample: surveys on the economic activity status of the population, household income and expenditure, and social indicators are conducted with the same sample. It is, however, found that the survey itself tends to have quality problems. In particular, the quality problems connected with field data collection can be summarized as coverage error, non-response error, and response error. Although these errors cannot be avoided entirely, every effort should be made to reduce them. The schemes proposed in this paper are as follows: 1) strengthening formal quality-control activities; 2) reviewing the survey method, i.e., combining the interview method with a mail-out and mail-back method or a pick-up method; 3) documenting thoroughly the various cases found at every stage of data collection; and 4) strengthening analytical activities. It is also emphasized that the sincerity of planners and interviewers is the most important factor.


Optical Character Recognition for Hindi Language Using a Neural-network Approach

  • Yadav, Divakar;Sanchez-Cuadrado, Sonia;Morato, Jorge
    • Journal of Information Processing Systems
    • /
    • v.9 no.1
    • /
    • pp.117-140
    • /
    • 2013
  • Hindi is the most widely spoken language in India, with more than 300 million speakers. As there is no separation between the characters of texts written in Hindi as there is in English, the Optical Character Recognition (OCR) systems developed for the Hindi language suffer from a very poor recognition rate. In this paper we propose an OCR system for printed Hindi text in Devanagari script that uses an Artificial Neural Network (ANN) to improve recognition efficiency. One of the major reasons for the poor recognition rate is error in character segmentation. The presence of touching characters in the scanned documents further complicates the segmentation process, creating a major problem when designing an effective character segmentation technique. Preprocessing, character segmentation, feature extraction, and finally classification and recognition are the major steps followed by a general OCR system. The preprocessing tasks considered in the paper are conversion of gray-scale images to binary images, image rectification, and segmentation of the document's textual contents into paragraphs, lines, words, and finally basic symbols. The basic symbols, obtained as the fundamental unit from the segmentation process, are recognized by the neural classifier. In this work, three feature extraction techniques (histogram of projection based on mean distance, histogram of projection based on pixel value, and vertical zero crossing) have been used to improve the rate of recognition. These feature extraction techniques are powerful enough to extract features of even distorted characters/symbols. For the neural classifier, a back-propagation neural network with two hidden layers is used. The classifier is trained and tested on printed Hindi texts. A performance of approximately 90% correct recognition rate is achieved.
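The two-hidden-layer back-propagation classifier this abstract describes can be sketched in plain NumPy. The layer sizes, random stand-in features, and learning rate below are illustrative assumptions, not the authors' configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical dimensions; the paper's real inputs are projection-histogram
# and zero-crossing features of segmented Devanagari symbols.
n_in, n_h1, n_h2, n_out = 16, 12, 8, 4
W1 = rng.normal(0, 0.5, (n_in, n_h1))
W2 = rng.normal(0, 0.5, (n_h1, n_h2))
W3 = rng.normal(0, 0.5, (n_h2, n_out))

X = rng.normal(size=(40, n_in))                 # stand-in feature vectors
T = np.eye(n_out)[rng.integers(0, n_out, 40)]   # one-hot class targets

lr, losses = 0.5, []
for _ in range(500):
    # forward pass through the two hidden layers
    h1 = sigmoid(X @ W1)
    h2 = sigmoid(h1 @ W2)
    out = sigmoid(h2 @ W3)
    losses.append(float(np.mean((out - T) ** 2)))
    # back-propagate the output error layer by layer
    d3 = (out - T) * out * (1 - out)
    d2 = (d3 @ W3.T) * h2 * (1 - h2)
    d1 = (d2 @ W2.T) * h1 * (1 - h1)
    W3 -= lr * h2.T @ d3 / len(X)
    W2 -= lr * h1.T @ d2 / len(X)
    W1 -= lr * X.T @ d1 / len(X)
```

The training loss shrinks monotonically on this toy data; a real symbol classifier would feed in the extracted histogram features instead of random vectors.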

Analyzing the Influence of Spatial Sampling Rate on Three-dimensional Temperature-field Reconstruction

  • Shenxiang Feng;Xiaojian Hao;Tong Wei;Xiaodong Huang;Pan Pei;Chenyang Xu
    • Current Optics and Photonics
    • /
    • v.8 no.3
    • /
    • pp.246-258
    • /
    • 2024
  • In aerospace and energy engineering, the reconstruction of three-dimensional (3D) temperature distributions is crucial. Traditional methods like algebraic iterative reconstruction and filtered back-projection depend on voxel division for resolution. Our algorithm, blending deep learning with computer graphics rendering, converts 2D projections into light rays for uniform sampling, using a fully connected neural network to represent the 3D temperature field. Although effective in capturing internal details, it demands multiple cameras for projections from varied angles, increasing cost and computational needs. We assess the impact of camera number on reconstruction accuracy and efficiency, conducting butane-flame simulations with different camera setups (6 to 18 cameras). The results show improved accuracy with more cameras, with 12 cameras achieving optimal computational efficiency (1.263) and low error rates. Verification experiments with 9, 12, and 15 cameras, using thermocouples, confirm the 12-camera setup as the best, balancing efficiency and accuracy. This offers a feasible, cost-effective solution for real-world applications like engine testing and environmental monitoring, improving accuracy and resource management in temperature measurement.

Detection of the co-planar feature points in the three dimensional space (3차원 공간에서 동일 평면 상에 존재하는 특징점 검출 기법)

  • Seok-Han Lee
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.16 no.6
    • /
    • pp.499-508
    • /
    • 2023
  • In this paper, we propose a technique to estimate the coordinates of feature points lying on a 2D planar object in three-dimensional space. The proposed method detects multiple 3D features from the image and excludes those which are not located on the plane. The technique estimates the planar homography between the planar object in 3D space and the camera image plane, and computes the back-projection error of each feature point on the planar object. Any feature point with a large error is then considered an off-plane point and is excluded from the feature estimation phase. The proposed method is achieved on the basis of the planar homography alone, without any additional sensors or optimization algorithms. In the experiments, it was confirmed that the speed of the proposed method is more than 40 frames per second. In addition, compared to an RGB-D camera, there was no significant difference in processing speed, and it was verified that the frame rate was unaffected even when the number of detected feature points continuously increased.
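The back-projection-error test described above can be illustrated with a minimal NumPy sketch. The homography, points, and 1-pixel threshold are hypothetical values for illustration, not data from the paper:

```python
import numpy as np

def backprojection_errors(H, pts_plane, pts_image):
    """Map planar points through homography H and measure the pixel
    distance to their detected image positions."""
    n = len(pts_plane)
    hom = np.hstack([pts_plane, np.ones((n, 1))])   # homogeneous coords
    proj = (H @ hom.T).T
    proj = proj[:, :2] / proj[:, 2:3]               # perspective divide
    return np.linalg.norm(proj - pts_image, axis=1)

# Hypothetical data: three on-plane points and one off-plane outlier.
H = np.eye(3)                        # identity homography for illustration
plane_pts = np.array([[0., 0.], [1., 0.], [0., 1.], [2., 2.]])
image_pts = np.array([[0., 0.], [1., 0.], [0., 1.], [5., 5.]])
errs = backprojection_errors(H, plane_pts, image_pts)
on_plane = errs < 1.0                # threshold is an assumed parameter
```

Points whose error exceeds the threshold are flagged as off-plane, mirroring the paper's exclusion step.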

Recognition of characters on car number plate and best recognition ratio among their layers using Multi-layer Perceptron (다중퍼셉트론을 이용한 자동차 번호판의 최적 입출력 노드의 비율 결정에 관한 연구)

  • Lee, Eui-Chul;Lee, Wang-Heon
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.11 no.1
    • /
    • pp.73-80
    • /
    • 2016
  • Car License Plate Recognition (CLPR) is required for searching for hit-and-run cars, measuring traffic density, and investigating traffic accidents, as well as for pursuing vehicle crimes, given the increasing number of vehicles. Images captured in the real environment of CLPR are contaminated not only by snow, rain, and illumination changes, but also by geometric distortion due to the pose difference between camera and car at the moment of capture. We propose a homographic transformation and an intensity histogram of vertical image projection to transform the distorted input into the original image and to cluster the characters and numbers, respectively. In particular, the Multilayer Perceptron (MLP) algorithm in CLPR is used not only to recognize the characters on the car license plate, but also to determine the optimal ratio among the numbers of input, hidden, and output nodes based on real experimental results.

A Study on Spotlight SAR Image Formation by using Motion Measurement Results of CDGPS (CDGPS의 요동 측정 결과를 이용한 Spotlight SAR 영상 형성에 관한 연구)

  • Hwang, Jeonghun;Ko, Young-Chang;Kim, So-Yeon;Kwon, Kyoung-Il;Yoon, Sang-Ho;Kim, Hyung-Suk;Shin, Hyun-Ik
    • Journal of the Korea Institute of Military Science and Technology
    • /
    • v.21 no.2
    • /
    • pp.166-172
    • /
    • 2018
  • To develop and evaluate a real-time SAR (Synthetic Aperture Radar) motion measurement system, the true antenna phase center (APC) positions during the SAT (Synthetic Aperture Time) are needed. In this paper, a CDGPS (Carrier-phase Differential Global Positioning System) post-processing method is proposed to obtain the true APC positions for spotlight SAR image formation. The CDGPS position is smoothed to remove the high-frequency noise that exists inherently in the carrier phase measurement. This paper shows that the smoothed CDGPS data are sufficient to provide the true APC for high-quality SAR image formation, through motion measurement results, phase error estimation, and IRF (Impulse Response Function) analysis.
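The smoothing step described here can be sketched with a simple moving-average filter. The window length, track shape, and noise level are assumed tuning parameters for illustration, not the filter actually used in the paper's CDGPS post-processing:

```python
import numpy as np

def smooth_positions(positions, window=5):
    """Moving-average smoothing of a noisy 1D position track to suppress
    high-frequency carrier-phase noise (window length is an assumption)."""
    kernel = np.ones(window) / window
    return np.convolve(positions, kernel, mode="same")

t = np.linspace(0.0, 1.0, 200)
truth = np.sin(2 * np.pi * t)          # slowly varying "true APC" track
noise = 0.05 * np.random.default_rng(1).normal(size=t.size)
smoothed = smooth_positions(truth + noise)
```

Away from the edges, the smoothed track sits much closer to the true track than the raw noisy one, which is the effect the paper relies on to recover usable APC positions.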

The Font Recognition of Printed Hangul Documents (인쇄된 한글 문서의 폰트 인식)

  • Park, Moon-Ho;Shon, Young-Woo;Kim, Seok-Tae;Namkung, Jae-Chan
    • The Transactions of the Korea Information Processing Society
    • /
    • v.4 no.8
    • /
    • pp.2017-2024
    • /
    • 1997
  • The main focus of this paper is the recognition of printed Hangul documents in terms of typeface, character size, and character slope for an IICS (Intelligent Image Communication System). Fixed-size blocks extracted from documents are analyzed in the frequency domain for typeface classification. The vertical pixel counts and the projection profile of the bounding box are used for character size classification and character slope classification, respectively. An MLP with a variable number of hidden nodes and the error back-propagation algorithm is used as the typeface classifier, and the Mahalanobis distance is used to classify character size and slope. The experimental results demonstrated the usefulness of the proposed system, with mean rates of 95.19% in typeface classification, 97.34% in character size classification, and 89.09% in character slope classification.
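The Mahalanobis-distance classification used here for size and slope can be sketched as a nearest-class-mean rule weighted by each class's covariance. The two clusters below are hypothetical feature data, not the paper's measurements:

```python
import numpy as np

def mahalanobis(x, mean, cov_inv):
    """Distance of x from a class, scaled by the class covariance."""
    d = x - mean
    return float(np.sqrt(d @ cov_inv @ d))

rng = np.random.default_rng(0)
# Hypothetical 2D feature clusters for two character-size classes.
small = rng.normal([2.0, 2.0], 0.3, (50, 2))
large = rng.normal([6.0, 6.0], 0.3, (50, 2))
classes = {}
for name, data in [("small", small), ("large", large)]:
    classes[name] = (data.mean(axis=0), np.linalg.inv(np.cov(data.T)))

def classify(x):
    """Assign x to the class with the smallest Mahalanobis distance."""
    return min(classes, key=lambda c: mahalanobis(x, *classes[c]))
```

A query feature vector is labeled with whichever class center it is closest to in covariance-scaled distance.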


Comparison of Image Quality among Different Computed Tomography Algorithms for Metal Artifact Reduction (금속 인공물 감소를 위한 CT 알고리즘 적용에 따른 영상 화질 비교)

  • Gui-Chul Lee;Young-Joon Park;Joo-Wan Hong
    • Journal of the Korean Society of Radiology
    • /
    • v.17 no.4
    • /
    • pp.541-549
    • /
    • 2023
  • The aim of this study was to conduct a quantitative analysis of CT image quality according to an algorithm designed to reduce metal artifacts induced by metal components. Ten baseline images were obtained with the standard filtered back-projection algorithm using spectral detector-based CT and the CT ACR 464 phantom, and ten images were also obtained on the identical phantom with the standard filtered back-projection algorithm after inducing metal artifacts. After applying the metal artifact reduction algorithm to the raw data from the images with metal artifacts, ten additional images for each were obtained by applying the virtual monoenergetic algorithm. Regions of interest were set for polyethylene, bone, acrylic, air, and water located in CT ACR 464 phantom module 1 to compare the Hounsfield units for each algorithm. The algorithms were individually analyzed using root mean square error, mean absolute error, signal-to-noise ratio, peak signal-to-noise ratio, and the structural similarity index to assess overall image quality. When the Hounsfield units of each algorithm were compared, a significant difference was found between the images with different algorithms (p < .05), and large changes were observed in images using the virtual monoenergetic algorithm in all regions of interest except acrylic. The image quality analysis indices revealed that images with the metal artifact reduction algorithm had the highest resolution, whereas the structural similarity index was highest for images with the metal artifact reduction algorithm followed by an additional virtual monoenergetic algorithm. In terms of CT images, the metal artifact reduction algorithm was shown to be more effective than the monoenergetic algorithm at reducing metal artifacts; however, to obtain quality CT images it will be important to ascertain the advantages and differences in image quality of the algorithms and to apply them effectively.
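Three of the image quality indices used in this study (RMSE, MAE, PSNR) can be computed directly from pixel arrays. The tiny 4×4 "images" below are hypothetical stand-ins, not phantom data:

```python
import numpy as np

def rmse(ref, img):
    """Root mean square error between reference and test images."""
    return float(np.sqrt(np.mean((ref - img) ** 2)))

def mae(ref, img):
    """Mean absolute error."""
    return float(np.mean(np.abs(ref - img)))

def psnr(ref, img, peak=255.0):
    """Peak signal-to-noise ratio in dB, relative to the pixel peak."""
    e = rmse(ref, img)
    return float("inf") if e == 0.0 else float(20.0 * np.log10(peak / e))

# Hypothetical 4x4 "images" standing in for phantom CT slices.
reference = np.zeros((4, 4))
degraded = np.full((4, 4), 10.0)
scores = (rmse(reference, degraded), mae(reference, degraded),
          psnr(reference, degraded))
```

SSIM is more involved (local means, variances, and cross-covariance) and is typically taken from a library such as scikit-image rather than hand-rolled.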

MCBP Neural Network for Efficient Recognition of Tire Classification Code (타이어 분류 코드의 효율적 인식을 위한 MCBP망)

  • Koo, Gun-Seo;O, Hae-Seok
    • The Transactions of the Korea Information Processing Society
    • /
    • v.4 no.2
    • /
    • pp.465-482
    • /
    • 1997
  • In this paper, we have studied the construction of a code-recognition system based on a neural network, using image processing of the DOT classification code stamped on the tire surface. Recognizing the stamped characters on a tire presents several problems: characters are distorted at the edges by diffuse reflection, two adjacent characters may take the same label, and recognition is very sensitive to illumination. This paper therefore proposes an algorithm for the tire code that takes these properties into account and demonstrates its efficiency through simulation. We also suggest the MCBP network, composed of multiple linked recognizers, to efficiently identify the DOT code used as the tire classification code. The MCBP network extracts projection values for classifying each character's region after taking the projection of each character's region on the X and Y axes, and processes each character with 7×8 normalization. We improved the error rate by 3% through the MCBP network and post-processing against the DOT code database. With this approach, learning time improved by 60% and the recognition rate rose from 90% to 95% compared with back-propagation; including post-processing, the overall tire recognition rate reached 98%.
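The X/Y projection step described in this abstract can be sketched as simple row and column pixel sums of a binarized character image. The tiny image below is a hypothetical example, not tire data:

```python
import numpy as np

def projection_profiles(binary_img):
    """Column- and row-wise pixel sums of a binary character image;
    valleys in the column profile mark boundaries between characters."""
    return binary_img.sum(axis=0), binary_img.sum(axis=1)

# Two 1-pixel-wide hypothetical "characters" separated by blank columns.
img = np.zeros((8, 7), dtype=int)
img[:, 1] = 1
img[:, 5] = 1
x_profile, y_profile = projection_profiles(img)
```

Zero-valued runs in the column profile locate the gaps between characters; each segmented region would then be resampled to a fixed 7×8 grid before classification.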


The Comparative Analysis of External Dose Reconstruction in EPID and Internal Dose Measurement Using Monte Carlo Simulation (몬테 카를로 전산모사를 통한 EPID의 외부적 선량 재구성과 내부 선량 계측과의 비교 및 분석)

  • Jung, Joo-Young;Yoon, Do-Kun;Suh, Tae-Suk
    • Progress in Medical Physics
    • /
    • v.24 no.4
    • /
    • pp.253-258
    • /
    • 2013
  • The purpose of this study is to evaluate and analyze the relationship between the external radiation dose reconstructed from radiation transmitted through a patient receiving radiation treatment, as measured by an electronic portal imaging device (EPID), and the internal dose derived from Monte Carlo simulation. This comparative analysis of the two cases is intended to provide a basic indicator for similar studies. The geometric information of the experiment and of the radiation source was entered into the Monte Carlo n-particle (MCNPX) simulation tool, and to derive the EPID images a tally card in MCNPX was used for visualizing and imaging the dose information. The water phantom was set at a source-to-surface distance (SSD) of 100 cm for the internal measurement, and the EPID was set at an SSD of 90 cm, 10 cm below. The internal dose was collected from the water phantom using the mesh tally function in MCNPX, and accumulated dose data were acquired from four-portal beam exposures. At the same time, after obtaining the dose that had passed through the water phantom, dose reconstruction was performed using the back-projection method. To analyze the two cases, we compared the penetrated dose, calibrated against itself, with the absorbed dose, and evaluated the reconstructed dose using the EPID against the partially accumulated (overlapped) dose in the water phantom from the four-portal beam exposures. The summed dose data of the two cases were calculated as 3.4580 MeV/g (absorbed dose in water) and 3.4354 MeV/g (EPID reconstruction), respectively, showing good agreement with a dose error of 0.6536%.
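The back-projection idea underlying the EPID dose reconstruction can be illustrated with a toy unfiltered back-projection over two orthogonal beam directions. The grid, "hot region," and projections below are hypothetical, far simpler than the four-portal MCNPX setup in the paper:

```python
import numpy as np

def backproject(vert, horiz, shape):
    """Toy unfiltered back-projection: smear two orthogonal 1D dose
    projections uniformly back across the grid and sum them."""
    rows, cols = shape
    recon = np.tile(vert / rows, (rows, 1))               # smear down columns
    recon += np.tile((horiz / cols)[:, None], (1, cols))  # smear across rows
    return recon

dose = np.zeros((4, 4))
dose[1:3, 1:3] = 1.0                  # hot region inside a "water phantom"
recon = backproject(dose.sum(axis=0), dose.sum(axis=1), dose.shape)
```

Even this crude two-view smear peaks over the true hot region; a clinical reconstruction adds attenuation calibration and many more projection angles.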