• Title/Summary/Keyword: Image merging


Destination Address Block Location on Machine-printed and Handwritten Korean Mail Piece Images (인쇄 및 필기 한글 우편영상에서의 수취인 주소 영역 추출 방법)

  • 정선화;장승익;임길택;남윤석
    • Journal of KIISE: Software and Applications, v.31 no.1, pp.8-19, 2004
  • In this paper, we propose an efficient method for locating the destination address block on both machine-printed and handwritten Korean mail piece images. The proposed method extracts connected components from the binary mail piece image, generates text lines by merging them, and then groups the text lines into nine clusters. The destination address block is determined by selecting some of the clusters. Considering the geometric characteristics of address information on Korean mail pieces, we split a mail piece image into nine areas of equal size. The nine clusters are initialized with the center coordinates of these areas. A modified Manhattan distance function is used to compute the distance between text lines and clusters; the function was modified so that the aspect ratio of the mail piece is reflected. An experiment with live Korean mail piece images demonstrated the superiority of the proposed method: the success rate for 1,988 test images was about 93.56%.
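
The nine-cluster grouping described in this abstract can be sketched roughly as follows. This is an illustrative reconstruction, not the authors' code: the function name, the 3x3 grid initialization, and the way the aspect ratio weights the horizontal term of the Manhattan distance are assumptions made for the sketch.

```python
import numpy as np

def cluster_text_lines(line_centers, img_w, img_h, n_iter=10):
    """Group text-line centers into nine clusters initialized at the centers
    of a 3x3 grid over the mail piece image, using a Manhattan distance
    whose horizontal term is weighted by the image aspect ratio (the exact
    weighting in the paper is not given in the abstract)."""
    # Initialize nine cluster centers at the centers of nine equal areas.
    xs = [img_w * (2 * i + 1) / 6 for i in range(3)]
    ys = [img_h * (2 * j + 1) / 6 for j in range(3)]
    centers = np.array([(x, y) for y in ys for x in xs], dtype=float)

    aspect = img_w / img_h  # weight horizontal distances by the aspect ratio
    pts = np.asarray(line_centers, dtype=float)

    for _ in range(n_iter):
        # Modified Manhattan distance from every text line to every cluster.
        dx = np.abs(pts[:, None, 0] - centers[None, :, 0]) / aspect
        dy = np.abs(pts[:, None, 1] - centers[None, :, 1])
        labels = np.argmin(dx + dy, axis=1)
        # Re-center each non-empty cluster on its member text lines.
        for k in range(9):
            if np.any(labels == k):
                centers[k] = pts[labels == k].mean(axis=0)
    return labels, centers
```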

Text Area Extraction Method for Color Images Based on Labeling and Gradient Difference Method (레이블링 기법과 밝기값 변화에 기반한 컬러영상의 문자영역 추출 방법)

  • Won, Jong-Kil;Kim, Hye-Young;Cho, Jin-Soo
    • The Journal of the Korea Contents Association, v.11 no.12, pp.511-521, 2011
  • As the use of image input and output devices increases, the importance of extracting text areas from color images is also increasing. In this paper, in order to extract text areas from images efficiently, we present a text area extraction method for color images based on labeling and a gradient difference method. The proposed method first eliminates non-text areas through labeling and filtering. After generating text area candidates by exploiting the property that text areas exhibit a high gradient difference, the text area is extracted through post-processing consisting of noise removal and text area merging. The benefits of the proposed method are its simplicity and an accuracy higher than that of conventional methods. Experimental results show that the precision, recall, and inverse ratio of non-text extraction (IRNTE) of the proposed method are 99.59%, 98.65%, and 82.30%, respectively.
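
A minimal sketch of the labeling-plus-gradient-difference idea in this abstract, assuming a grayscale input; the window size, gradient-difference threshold, and area limits are placeholders rather than the paper's values.

```python
import numpy as np
from scipy import ndimage

def text_candidates(gray, win=16, grad_thresh=40.0, min_area=30, max_area=5000):
    """Keep blocks whose local gradient difference (max minus min of the
    gradient magnitude inside a window) is high, then label the resulting
    mask and filter components by size to drop non-text areas."""
    # Gradient magnitude of the grayscale image.
    gy, gx = np.gradient(gray.astype(float))
    grad = np.hypot(gx, gy)

    # Gradient difference per window: max minus min of the gradient magnitude.
    g_max = ndimage.maximum_filter(grad, size=win)
    g_min = ndimage.minimum_filter(grad, size=win)
    candidate_mask = (g_max - g_min) > grad_thresh

    # Labeling and size filtering to eliminate non-text components.
    labels, n = ndimage.label(candidate_mask)
    sizes = ndimage.sum(candidate_mask, labels, index=range(1, n + 1))
    keep = [i + 1 for i, s in enumerate(sizes) if min_area <= s <= max_area]
    return np.isin(labels, keep)
```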

Face Detection System Based on Candidate Extraction through Segmentation of Skin Area and Partial Face Classifier (피부색 영역의 분할을 통한 후보 검출과 부분 얼굴 분류기에 기반을 둔 얼굴 검출 시스템)

  • Kim, Sung-Hoon;Lee, Hyon-Soo
    • Journal of the Institute of Electronics Engineers of Korea CI, v.47 no.2, pp.11-20, 2010
  • In this paper we propose a face detection system that consists of a face candidate extraction method using skin color and a face verification method using features of the facial structure. First, the proposed face candidate extraction method applies image segmentation and merging algorithms to skin-color regions and their neighboring regions. These two algorithms make it possible to select face candidates from a variety of faces in images with complicated backgrounds. Second, using a partial face classifier, the proposed face verification method checks the features of the facial structure and then classifies face and non-face. This classifier uses only face images in the learning process and does not consider non-face images, in order to reduce the number of training images. In experiments, the proposed face candidate extraction method found, on average, 9.55% more faces as candidates than other methods. In the face/non-face classification experiment, the proposed face verification method obtained a face classification rate that was on average 4.97% higher than other face/non-face classifiers when the non-face classification rate was about 99%.
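
The skin-color candidate extraction step could look roughly like the sketch below. The YCbCr thresholds, the morphological closing used to merge neighboring skin fragments, and the minimum-size filter are assumptions; the paper's actual segmentation and merging algorithms are more elaborate.

```python
import numpy as np
from scipy import ndimage

def skin_candidates(rgb, min_pixels=400):
    """Threshold skin tones in YCbCr space (a commonly used range, not
    necessarily the paper's), merge neighboring skin fragments with a
    morphological closing, then label regions and keep the large ones
    as face candidate bounding boxes."""
    r, g, b = [rgb[..., i].astype(float) for i in range(3)]
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b

    # Typical skin-tone chrominance range (assumed, tune per dataset).
    skin = (cb > 77) & (cb < 127) & (cr > 133) & (cr < 173)

    # Close small gaps so neighboring skin fragments merge into one region.
    skin = ndimage.binary_closing(skin, structure=np.ones((5, 5)))

    labels, _ = ndimage.label(skin)
    boxes = []
    for i, sl in enumerate(ndimage.find_objects(labels), start=1):
        if (labels[sl] == i).sum() >= min_pixels:  # drop tiny non-face blobs
            boxes.append(sl)                       # candidate bounding box (slices)
    return boxes
```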

Three-dimensional surgical accuracy between virtually planned and actual surgical movements of the maxilla in two-jaw orthognathic surgery

  • Hong, Mihee;Kim, Myung-Jin;Shin, Hye Jung;Cho, Heon Jae;Baek, Seung-Hak
    • The Korean Journal of Orthodontics, v.50 no.5, pp.293-303, 2020
  • Objective: To investigate the three-dimensional (3D) surgical accuracy between virtually planned and actual surgical movements (SM) of the maxilla in two-jaw orthognathic surgery. Methods: The sample consisted of 15 skeletal Class III patients who underwent two-jaw orthognathic surgery performed by a single surgeon using virtual surgical simulation (VSS) software. 3D cone-beam computed tomography (CBCT) images were obtained before (T0) and after surgery (T1). After merging the dental cast image onto the T0 CBCT image, VSS was performed. SM were classified into midline correction (anterior and posterior), advancement, setback, anterior elongation, and impaction (total and posterior). The landmarks were the midpoint between the central incisors, the mesiobuccal cusp tip (MBCT) of both first molars, and the midpoint of the two MBCTs. The amount and direction of SM by VSS and actual surgery were measured using the 3D coordinates of the landmarks. Discrepancies less than 1 mm between VSS and T1 landmarks indicated a precise outcome. The surgical achievement percentage (SAP, [amount of movement in actual surgery/amount of movement in VSS] × 100) (%) and precision percentage (PP, [number of patients with precise outcome/number of total patients] × 100) (%) were compared among SM types using Fisher's exact and Kruskal-Wallis tests. Results: The overall mean discrepancy between VSS and actual surgery, SAP, and PP were 0.13 mm, 89.9%, and 68.3%, respectively. There was no significant difference in the SAP and PP values among the seven SM types (all p > 0.05). Conclusions: VSS could be considered an effective tool for increasing surgical accuracy.
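
A minimal sketch of the two summary metrics defined above (SAP and PP); the variable names are illustrative, and the per-axis handling of movements is simplified to scalar amounts.

```python
import numpy as np

def surgical_achievement(planned_mm, actual_mm, precise_thresh_mm=1.0):
    """SAP = (actual movement / planned movement) x 100 per patient, and
    PP = (patients whose planned-to-actual discrepancy is < 1 mm /
    total patients) x 100, following the definitions in the abstract."""
    planned = np.asarray(planned_mm, dtype=float)
    actual = np.asarray(actual_mm, dtype=float)
    sap = actual / planned * 100.0                        # per-patient SAP (%)
    discrepancy = np.abs(actual - planned)                # |actual - planned| in mm
    pp = np.mean(discrepancy < precise_thresh_mm) * 100.0 # precision percentage (%)
    return sap, pp
```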

Study on Heart Rate Variability and PSD Analysis of PPG Data for Emotion Recognition (감정 인식을 위한 PPG 데이터의 심박변이도 및 PSD 분석)

  • Choi, Jin-young;Kim, Hyung-shin
    • Journal of Digital Contents Society, v.19 no.1, pp.103-112, 2018
  • In this paper, we propose a method of recognizing emotions using a PPG sensor, which measures blood flow that varies with emotion. From the PPG signal, positive and negative emotions are determined in the frequency domain through the power spectral density (PSD). Based on James R. Russell's two-dimensional prototype model, we classify emotions as joy, sadness, irritability, and calmness and examine their association with the magnitude of energy in the frequency domain. It is significant that this study used the same PPG sensor employed in wearable devices to measure these four kinds of emotions in the frequency domain through image-viewing experiments. A questionnaire was used to collect the accuracy, each participant's level of immersion, emotional changes, and biofeedback for the images. The proposed method is expected to enable various developments, such as commercial application services using PPG and mobile prediction services that merge PPG data with the context information already available on smartphones.
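
The frequency-domain analysis can be illustrated with a Welch PSD estimate as below; the sampling rate and the LF/HF band limits are assumptions, since the abstract does not state which bands were used.

```python
import numpy as np
from scipy.signal import welch

def ppg_band_power(ppg, fs=50.0):
    """Estimate the power spectral density of a PPG signal with Welch's
    method and sum the power in HRV-style LF (0.04-0.15 Hz) and HF
    (0.15-0.4 Hz) bands as simple frequency-domain energy features."""
    f, psd = welch(ppg, fs=fs, nperseg=min(len(ppg), 1024))
    lf_mask = (f >= 0.04) & (f < 0.15)
    hf_mask = (f >= 0.15) & (f < 0.40)
    lf = np.trapz(psd[lf_mask], f[lf_mask])   # low-frequency band power
    hf = np.trapz(psd[hf_mask], f[hf_mask])   # high-frequency band power
    ratio = lf / hf if hf > 0 else np.inf     # LF/HF ratio as an energy feature
    return lf, hf, ratio
```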

Region-Based Moving Object Segmentation for Video Monitoring System (비디오 감시시스템을 위한 영역 기반의 움직이는 물체 분할)

  • 이경미;김종배;이창우;김항준
    • Journal of the Institute of Electronics Engineers of Korea CI, v.40 no.1, pp.30-38, 2003
  • This paper presents an efficient region-based motion segmentation method for segmenting moving objects in a traffic scene, with a focus on a Video Monitoring System (VMS). The presented method consists of two phases: motion detection and motion segmentation. Using an adaptive thresholding technique, the differences between two consecutive frames are analyzed to detect the movements of objects in the scene. To segment the detected regions into meaningful objects that have similar intensity and motion information, the regions are initially segmented using a k-means clustering algorithm, and then neighboring regions with similar motion information are merged. Since the segmentation phase deals not with the whole image but only with the detected regions, the computational cost is reduced dramatically. Experimental results demonstrate robustness to occlusions among multiple moving objects as well as to changes in environmental conditions.
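
A rough sketch of the two phases described above (frame-difference motion detection with an adaptive threshold, then k-means segmentation of a detected region). The mean-plus-k-sigma threshold and the 1-D intensity clustering are simplifying assumptions, not the paper's exact rules.

```python
import numpy as np

def detect_motion(prev_frame, curr_frame, k_sigma=2.0):
    """Motion-detection phase: frame differencing with an adaptive threshold
    (here mean + k*std of the difference image)."""
    diff = np.abs(curr_frame.astype(float) - prev_frame.astype(float))
    thresh = diff.mean() + k_sigma * diff.std()   # adaptive threshold
    return diff > thresh                          # binary motion mask

def segment_region(pixels, k=3, n_iter=20):
    """Plain k-means on the 1-D intensities of a detected region; neighboring
    clusters with similar motion would be merged in a later step."""
    pixels = np.asarray(pixels, dtype=float)
    centers = np.linspace(pixels.min(), pixels.max(), k)
    for _ in range(n_iter):
        labels = np.argmin(np.abs(pixels[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean()
    return labels, centers
```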

Maritime Target Image Generation and Detection in a Sea Clutter Environment at High Grazing Angle (높은 지표각에서 해상 클러터 환경을 고려한 해상 표적 영상 생성 및 탐지)

  • Jin, Seung-Hyeon;Lee, Kyung-Min;Woo, Seon-Keol;Kim, Yoon-Jin;Kwon, Jun-Beom;Kim, Hong-Rak;Kim, Kyung-Tae
    • The Journal of Korean Institute of Electromagnetic Engineering and Science, v.30 no.5, pp.407-417, 2019
  • When a free-falling ballistic missile intercepts a maritime target in a sea clutter environment at a high grazing angle, the detection performance of the missile's seeker can be rapidly degraded by the effect of sea clutter. To solve this problem, it is necessary to verify the performance of maritime target detection via simulations based on various scenarios. We accomplish this by applying a two-dimensional cell-averaging constant false alarm rate (CA-CFAR) detector to a two-dimensional radar image, which is generated by merging a sea clutter signal at a high grazing angle with a maritime target signal corresponding to the signal-to-clutter ratio. Simulation results using a computer-aided design model and a commercial numerical electromagnetic solver in various scenarios show that the performance of maritime target detection depends significantly on the grazing and azimuth angles.
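
A minimal 2-D cell-averaging CFAR sketch of the kind of detector named in this abstract; the guard/training window sizes and the false alarm probability are illustrative, not the paper's settings.

```python
import numpy as np

def ca_cfar_2d(power_map, guard=2, train=8, pfa=1e-4):
    """For each cell, average the training ring surrounding the guard cells
    and declare a detection when the cell power exceeds the scaled noise
    estimate (standard CA-CFAR threshold scaling)."""
    rows, cols = power_map.shape
    half = guard + train
    n_train = (2 * half + 1) ** 2 - (2 * guard + 1) ** 2  # number of training cells
    alpha = n_train * (pfa ** (-1.0 / n_train) - 1.0)     # CA-CFAR scale factor
    detections = np.zeros_like(power_map, dtype=bool)

    for r in range(half, rows - half):
        for c in range(half, cols - half):
            window = power_map[r - half:r + half + 1, c - half:c + half + 1]
            guard_block = power_map[r - guard:r + guard + 1, c - guard:c + guard + 1]
            noise = (window.sum() - guard_block.sum()) / n_train
            detections[r, c] = power_map[r, c] > alpha * noise
    return detections
```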

Accuracy of 5-axis precision milling for guided surgical template (가이드 수술용 템플릿을 위한 5축 정밀가공공정의 정확성에 관한 연구)

  • Park, Ji-Man;Yi, Tae-Kyoung;Jung, Je-Kyo;Kim, Yong;Park, Eun-Jin;Han, Chong-Hyun;Koak, Jai-Young;Kim, Seong-Kyun;Heo, Seong-Joo
    • The Journal of Korean Academy of Prosthodontics, v.48 no.4, pp.294-300, 2010
  • Purpose: Template-guided implant surgery offers several advantages over the traditional approach. The purpose of this study was to evaluate the accuracy of a coordinate synchronization procedure with a 5-axis milling machine for surgical template fabrication, by means of reverse engineering through universal CAD software. Materials and methods: The study was performed on ten edentulous models with embedded gutta-percha (GP) stoppings hidden under a silicone gingival form. The platform for synchronization was formed on the bottom side of the models, and these casts were imaged with cone-beam CT. Vectors of the stoppings were extracted and transferred to those of the planned implants in virtual planning software. The depth of the milling process was set to the level of one half of the stoppings, and the coordinates of the data were synchronized to the model image. Synchronization of the milling coordinates was done through a conversion process based on the synchronization platform located on the bottom of the model. The models were fixed on the synchronization plate of the 5-axis milling machine, and drilling was done along the planned vector and depth based on the synchronized data, with a twist drill of the same diameter as the GP stopping. For 3D rendering and image merging, the impression tray was set on the cone-beam CT, and pre- and post-drilling CT acquisition was done with the model fixed on the impression body. The accuracy analysis was done with SolidWorks (Dassault Systèmes, Concord, USA) by measuring the vectors of the stoppings' top and bottom centers in the experimental model after merging and reverse-engineering the planned and post-drilling CT images. Correlations among the parameters were tested by means of the Pearson correlation coefficient and calculated with SPSS (release 14.0, SPSS Inc., Chicago, USA) (α = 0.05). Results: Due to the angulation, GP remnants on the upper half of the stoppings were observed in every drilled bore. The deviation between the planned image and the reverse-engineered drilled bore was 0.31 (0.15-0.42) mm at the entrance and 0.36 (0.24-0.51) mm at the apex, and the angular deviation was 1.62 (0.54-2.27)°. There was a positive correlation between the deviation at the entrance and that at the apex (Pearson correlation coefficient = 0.904, P = .013). Conclusion: The coordinate-synchronized 5-axis milling procedure has adequate accuracy for the production of guided surgical templates.
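
The deviation measures reported in the Results (entrance deviation, apex deviation, and angular deviation between the planned and drilled bore axes) can be computed as sketched below; function and variable names are illustrative, not taken from the paper.

```python
import numpy as np

def drilling_deviation(planned_top, planned_apex, drilled_top, drilled_apex):
    """Linear deviation at the entrance (top centers) and at the apex
    (bottom centers), plus the angle between planned and drilled bore axes.
    Inputs are 3-D coordinates in mm."""
    planned_top, planned_apex = np.asarray(planned_top, float), np.asarray(planned_apex, float)
    drilled_top, drilled_apex = np.asarray(drilled_top, float), np.asarray(drilled_apex, float)

    entrance_dev = np.linalg.norm(drilled_top - planned_top)   # mm
    apex_dev = np.linalg.norm(drilled_apex - planned_apex)     # mm

    v_planned = planned_apex - planned_top                     # planned bore axis
    v_drilled = drilled_apex - drilled_top                     # drilled bore axis
    cos_angle = np.dot(v_planned, v_drilled) / (
        np.linalg.norm(v_planned) * np.linalg.norm(v_drilled))
    angular_dev = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))  # degrees
    return entrance_dev, apex_dev, angular_dev
```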

Face Detection in Color Images Based on Skin Region Segmentation and Neural Network (피부 영역 분할과 신경 회로망에 기반한 칼라 영상에서 얼굴 검출)

  • Lee, Young-Sook;Kim, Young-Bong
    • The Journal of the Korea Contents Association, v.6 no.12, pp.1-11, 2006
  • Many research demonstrations and commercial applications have attempted to develop face detection and recognition systems. Human face detection plays an important role in applications such as access control, video surveillance, human-computer interfaces, and identity authentication. In general, skin region segmentation raises some special problems: a face connected with the background, faces connected to each other via skin color, and a face divided into several small parts. Many face detection techniques can solve the first and second problems. However, it is not easy to detect a face divided into several regions, because of the different illumination conditions involved in the third problem. Therefore, we propose an efficient modified skin segmentation algorithm to solve this problem, since the typical region segmentation algorithm cannot be used for it. Our algorithm detects skin regions over the entire image and then generates face candidate regions using the skin segmentation algorithm. For each face candidate, we apply a region-merging procedure that joins divided regions into one region using the adjacency between homogeneous regions. We use search windows of various sizes to detect faces of different sizes, and a face detection classifier based on a back-propagation algorithm verifies whether each search window contains a face.
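
A rough sketch of the multi-scale search-window verification step; the window sizes, the stride, and the `mlp_predict` callable standing in for the back-propagation classifier are assumptions for illustration.

```python
import numpy as np

def sliding_windows(image, sizes=(24, 32, 48), stride_ratio=0.25):
    """Multi-scale sliding-window generator: yields square patches of several
    sizes so a face/non-face classifier can score each window."""
    h, w = image.shape[:2]
    for size in sizes:
        stride = max(1, int(size * stride_ratio))
        for top in range(0, h - size + 1, stride):
            for left in range(0, w - size + 1, stride):
                yield (top, left, size), image[top:top + size, left:left + size]

def classify_windows(image, mlp_predict):
    """Return the windows the classifier marks as faces; `mlp_predict` is a
    hypothetical callable mapping a flattened patch to a face probability."""
    faces = []
    for box, patch in sliding_windows(image):
        if mlp_predict(patch.reshape(1, -1))[0] > 0.5:
            faces.append(box)
    return faces
```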

MTSAT Satellite Image Features on the Severe Storm Events in Yeongdong Region (영동지역 악기상 사례에 대한 MTSAT 위성 영상의 특징)

  • Kim, In-Hye;Kwon, Tae-Yong;Kim, Deok-Rae
    • Atmosphere, v.22 no.1, pp.29-45, 2012
  • An unusual autumn storm developed rapidly in the western part of the East Sea in the early morning of 23 October 2006. This storm produced record-breaking heavy rain and strong winds in the northern and middle parts of the Yeongdong region: a 24-h rainfall of 304 mm over Gangneung and wind speeds exceeding 63.7 m/s over Sokcho. In this study, MTSAT-1R (Multi-functional Transport Satellite) water vapor and infrared channel imagery are examined to identify features dynamically associated with the development of the storm. These features may be precursor signals of the rapidly developing storm and can be employed for very-short-range forecasting and nowcasting of severe storms. The satellite features are summarized as follows: 1) The MTSAT-1R water vapor imagery shows that a distinct dark region developed over the Yellow Sea about 12 hours before the maximum rainfall, which occurred at about 1100 KST on 23 October 2006; afterwards it gradually changed into a dry intrusion. This dark region in the water vapor image is closely related to the positive anomaly in the 500 hPa potential vorticity field. 2) In the infrared imagery, low stratus (brightness temperature: 0-5°C) develops near Bohai Bay and the Shandong Peninsula and then partially dissipates on the western coast of the Korean Peninsula. These features are found 10-12 hours before the maximum rainfall and are associated with cold and warm advection in the lower troposphere. 3) The IR imagery reveals that two convective cloud cells (brightness temperature below -50°C) merge and, after merging, grow rapidly over the western part of the East Sea about 5 hours before the maximum rainfall. These features suggest that there must be upward flow in the upper troposphere and low-level convergence over the same region of the East Sea. The time of maximum growth of the convective cloud agrees well with the time of the maximum rainfall.