• Title/Summary/Keyword: Block Extraction


Fast information extraction algorithm for object-based MPEG-4 application from MPEG-2 bit-stream (MPEG-2 비트열로부터 객체 기반 MPEG-4 응용을 위한 고속 정보 추출 알고리즘)

  • 양종호;원치선
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.26 no.12A
    • /
    • pp.2109-2119
    • /
    • 2001
  • In this paper, a fast information extraction algorithm for object-based MPEG-4 applications from an MPEG-2 bit-stream is proposed. For object-based MPEG-4 conversion, we need to extract information such as object images, shape images, macroblock motion vectors, and header information from the MPEG-2 bit-stream. Using this extracted information, fast conversion to object-based MPEG-4 is possible. The proposed object extraction algorithm has two important steps: motion vector extraction from the MPEG-2 bit-stream and the watershed algorithm. The algorithm extracts objects with the user's assistance in the intra frame and tracks them in the following inter frames. If the result for a fast-moving object is unsatisfactory, the user can intervene to correct the segmentation. The proposed algorithm consists of two steps: intra-frame object extraction and inter-frame tracking. The object extraction step is where the user extracts a semantic object directly using block classification and watersheds. The object tracking step follows the object in subsequent frames; it is based on a boundary fitting method using motion vectors, the object mask, and modified watersheds. Experimental results show that the proposed method can achieve a fast conversion from the MPEG-2 bit-stream to object-based MPEG-4 input.
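
The intra-frame step described above can be illustrated with a marker-based watershed; the sketch below uses OpenCV's cv2.watershed with hand-placed seeds standing in for the user's assistance. File names and seed coordinates are placeholders, not the paper's implementation.

```python
# A minimal marker-based watershed sketch (not the authors' implementation):
# user-labelled seeds grow into regions bounded by strong edges, yielding a
# binary shape image for the object.
import cv2
import numpy as np

frame = cv2.imread("intra_frame.png")            # decoded intra frame (placeholder path)
markers = np.zeros(frame.shape[:2], np.int32)    # 0 = unknown, >0 = user-labelled seeds

# Hypothetical user assistance: label a few pixels inside the object (1)
# and a few pixels of background (2).
markers[120:130, 200:210] = 1                    # object seed
markers[10:20, 10:20] = 2                        # background seed

cv2.watershed(frame, markers)                    # grows regions up to strong edges, in place
object_mask = (markers == 1).astype(np.uint8) * 255
cv2.imwrite("object_mask.png", object_mask)      # binary shape image for the MPEG-4 object
```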


Fast information extraction algorithm for object-based MPEG-4 conversion from MPEG-1,2 (MPEG-1,2로부터 객체 기반 MPEG-4 변환을 위한 고속 정보 추출 알고리즘)

  • 양종호;박성욱
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.41 no.3
    • /
    • pp.91-102
    • /
    • 2004
  • In this paper, a fast information extraction algorithm for object-based MPEG-4 applications from MPEG-1,2 is proposed. For object-based MPEG-4 conversion, we need to extract information such as object images, shape images, macroblock motion vectors, and header information from the MPEG-1,2 bit-stream. Using this extracted information, fast conversion to object-based MPEG-4 is possible. The proposed object extraction algorithm has two important steps: motion vector extraction from the MPEG-1,2 bit-stream and the watershed algorithm. The algorithm extracts objects with the user's assistance in the intra frame and tracks them in the following inter frames. If the result for a fast-moving object is unsatisfactory, the user can intervene to correct the segmentation. The proposed algorithm consists of two steps: intra-frame object extraction and inter-frame tracking. The object extraction step is where the user extracts a semantic object directly using block classification and watersheds. The object tracking step follows the object in subsequent frames; it is based on a boundary fitting method using motion vectors, the object mask, and modified watersheds. Experimental results show that the proposed method can achieve a fast conversion from the MPEG-1,2 bit-stream to object-based MPEG-4 input.
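
Below is an illustrative sketch of the block-classification step mentioned above, assuming the macroblock motion vectors have already been parsed from the bit-stream; the median-based global-motion estimate and the threshold are illustrative choices, not the paper's rule.

```python
# Flag macroblocks whose motion deviates from the dominant (global) motion as
# moving-object candidates. The motion-vector field here is a random stand-in
# for values parsed from an MPEG-1/2 bit-stream.
import numpy as np

def classify_blocks(mv, threshold=2.0):
    """mv: (rows, cols, 2) array of per-macroblock motion vectors."""
    global_motion = np.median(mv.reshape(-1, 2), axis=0)   # crude camera-motion estimate
    residual = np.linalg.norm(mv - global_motion, axis=2)  # per-block deviation
    return residual > threshold                            # True = object candidate

mv_field = np.random.randn(18, 22, 2)   # stand-in for a parsed motion-vector field
object_blocks = classify_blocks(mv_field)
print(object_blocks.sum(), "candidate macroblocks")
```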

Recognition of Car License Plates Using Difference Operator and ART2 Algorithm (차 연산과 ART2 알고리즘을 이용한 차량 번호판 통합 인식)

  • Kim, Kwang-Baek;Kim, Seong-Hoon;Woo, Young-Woon
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.13 no.11
    • /
    • pp.2277-2282
    • /
    • 2009
  • In this paper, we propose a new recognition method for application systems based on morphological features, a difference operator, and the ART2 algorithm. First, edges are extracted from a car image acquired by a camera using a difference operator, and the edge image is binarized by a block binarization method. To extract the license plate area, noise areas are eliminated by applying the morphological features of new and existing license plate types to an 8-directional edge tracking algorithm on the binarized image. After the license plate area is extracted, mean binarization and mini-max binarization are applied to it to eliminate noise based on the morphological features of the individual elements in the plate area, and each character is then extracted and combined by a labeling algorithm. The extracted and combined characters (letters and number symbols) are recognized after learning with the ART2 algorithm. To evaluate the extraction and recognition performance of the proposed method, 200 vehicle license plate images (100 green-type and 100 white-type) were used in experiments, and the results show that the proposed method is effective.
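
The front end described above (difference-operator edge extraction followed by block binarization) can be sketched as follows; the horizontal difference, block size, and per-block mean threshold are assumptions for illustration, not the paper's exact parameters.

```python
# Difference-operator edges plus block-wise binarization (illustrative only).
import numpy as np
import cv2

gray = cv2.imread("car.jpg", cv2.IMREAD_GRAYSCALE).astype(np.int16)  # placeholder path

# Difference operator: absolute difference between horizontally adjacent pixels.
edges = np.zeros_like(gray)
edges[:, 1:] = np.abs(gray[:, 1:] - gray[:, :-1])

# Block binarization: threshold each BxB block against its own mean.
B = 16
binary = np.zeros_like(edges, dtype=np.uint8)
for y in range(0, edges.shape[0] - B + 1, B):
    for x in range(0, edges.shape[1] - B + 1, B):
        block = edges[y:y+B, x:x+B]
        binary[y:y+B, x:x+B] = (block > block.mean()).astype(np.uint8) * 255

cv2.imwrite("plate_edges_binary.png", binary)
```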

Object-based Stereoscopic Video Coding Using Image Segmentation and Prediction (영역분할 및 예측을 통한 객체기반 스테레오 동영상 부호화)

  • 권순규;배태면;한규필;정의윤;하영호
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.24 no.12B
    • /
    • pp.2349-2358
    • /
    • 1999
  • An object-based stereoscopic video coding scheme is presented in this paper. In conventional BMA-based stereoscopic video coding for low-bit-rate transmission, image prediction errors such as block artifacts and mosquito phenomena occur. To reduce these errors, an object-based coding scheme is adopted. The proposed scheme consists of preprocessing, object extraction, and object update procedures. The preprocessing procedure extracts non-object regions with low reliability for motion and disparity estimation, which prevents inaccurate objects from being extracted. For better prediction of the left-channel image, disparity information is added to the object extraction. The proposed algorithm can also reduce accumulated error through the object update procedure, which detects newly emerging objects, merges objects that have the same object disparity and object motion, and splits objects with large image prediction errors. The experimental results show that the proposed algorithms improve prediction quality without block artifacts and mosquito phenomena.
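
Disparity estimation for the left-channel prediction can be approximated with ordinary block matching; the sketch below uses OpenCV's StereoBM as a stand-in for the paper's estimator, with placeholder file names and parameters.

```python
# Block-matching disparity map between the two stereo channels (illustrative).
import cv2

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right)         # fixed-point disparity (scaled by 16)

vis = cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX).astype("uint8")
cv2.imwrite("disparity.png", vis)
```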


Image Fingerprint for Contents based Video Copy Detection Using Block Comparison (블록 비교를 이용한 내용기반 동영상 복사 검색용 영상 지문)

  • Na, Sang-Il;Jin, Ju-Kyoun;Cho, Ju-Hee;Oh, Weon-Geun;Jeong, Dong-Seok
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.47 no.5
    • /
    • pp.136-144
    • /
    • 2010
  • Two types of information are used for content-based video copy detection: spatial information and temporal information. The spatial information is a content-based image fingerprint, which must have the following characteristics: first, its extraction is simple; second, it is pairwise independent for two randomly selected images; and finally, it is robust to modifications. This paper proposes an image fingerprint method for content-based video copy detection. The proposed method is fast to extract because it uses block averages and first- and second-order differences, which can be calculated with only additions and subtractions, and it is pairwise independent and robust against modifications. In addition, the proposed feature is binarized by comparisons and uses a coarse-to-fine structure, so matching is fast. The proposed method is verified on images modified under the VCE-7 experimental conditions in MPEG-7.
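
A minimal fingerprint sketch in the spirit of the description above: block averages, first- and second-order differences binarized by sign comparisons. The grid size, the row-wise differences, and the bit packing are assumptions for illustration, not the paper's configuration.

```python
# Block-average fingerprint with difference-based binary features and
# Hamming-distance matching (illustrative only).
import numpy as np
import cv2

def fingerprint(path, grid=8):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    gray = cv2.resize(gray, (grid * 16, grid * 16)).astype(np.float32)
    blocks = gray.reshape(grid, 16, grid, 16).mean(axis=(1, 3))  # block averages
    d1 = np.diff(blocks, n=1, axis=1)            # first-order difference
    d2 = np.diff(blocks, n=2, axis=1)            # second-order difference
    bits = np.concatenate([(d1 > 0).ravel(), (d2 > 0).ravel()])
    return np.packbits(bits.astype(np.uint8))    # compact binary fingerprint

fp1 = fingerprint("original.jpg")                # placeholder paths
fp2 = fingerprint("suspect.jpg")
distance = np.unpackbits(fp1 ^ fp2).sum()        # Hamming distance for matching
print("Hamming distance:", distance)
```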

Video Object Extraction Using Contour Information (윤곽선 정보를 이용한 동영상에서의 객체 추출)

  • Kim, Jae-Kwang;Lee, Jae-Ho;Kim, Chang-Ick
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.48 no.1
    • /
    • pp.33-45
    • /
    • 2011
  • In this paper, we present a method for extracting video objects efficiently by using a modified graph cut algorithm based on contour information. First, objects are extracted at the first frame by an automatic object extraction algorithm or by user interaction. To estimate the objects' contours in the current frame, the motion of the objects' contours in the previous frame is analyzed. Block-based histogram back-projection is conducted along the estimated contour points, and color models of the objects and the background are generated from the back-projection images. The probabilities of links between neighboring pixels are decided by a logarithm-based distance transform map obtained from the estimated contour image. The energy of the graph is defined by the color models and the logarithmic distance transform map. Finally, the object is extracted by minimizing this energy. Experimental results on various test images show that our algorithm works more accurately than other methods.
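
The mask-propagation idea can be sketched as below, with OpenCV's grabCut standing in for the paper's contour-guided graph cut; the probable-foreground/background seeding from the previous mask and the file names are assumptions.

```python
# Seed a graph-cut segmentation of the current frame with the previous frame's
# object mask (illustrative stand-in for the modified graph cut above).
import cv2
import numpy as np

curr = cv2.imread("frame_t.png")                                   # current frame
prev_mask = cv2.imread("mask_prev.png", cv2.IMREAD_GRAYSCALE)      # 255 = object

gc_mask = np.full(prev_mask.shape, cv2.GC_PR_BGD, np.uint8)        # probable background
gc_mask[prev_mask > 0] = cv2.GC_PR_FGD                             # propagated object region
bgd, fgd = np.zeros((1, 65), np.float64), np.zeros((1, 65), np.float64)

cv2.grabCut(curr, gc_mask, None, bgd, fgd, 5, cv2.GC_INIT_WITH_MASK)
object_mask = np.where((gc_mask == cv2.GC_FGD) | (gc_mask == cv2.GC_PR_FGD), 255, 0)
cv2.imwrite("mask_t.png", object_mask.astype(np.uint8))
```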

An in vivo study comparing efficacy of 0.25% and 0.5% bupivacaine in infraorbital nerve block for postoperative analgesia

  • Saha, Aditi;Shah, Sonal;Waknis, Pushkar;Aher, Sharvika;Bhujbal, Prathamesh;Vaswani, Vibha
    • Journal of Dental Anesthesia and Pain Medicine
    • /
    • v.19 no.4
    • /
    • pp.209-215
    • /
    • 2019
  • Background: Pain is an unpleasant sensation ranging from mild localized discomfort to agony and is one of the most commonly experienced symptoms in oral surgery. Usually, local anesthetic agents and analgesics are used for pain control in oral surgical procedures. Local anesthetic agents including lignocaine and bupivacaine are routinely used in varying concentrations. The present study was designed to evaluate and compare the efficacy of 0.25% and 0.5% bupivacaine for postoperative analgesia in infraorbital nerve block. Methods: Forty-one patients undergoing bilateral maxillary orthodontic extraction received 0.5% bupivacaine (n = 41) on one side and 0.25% bupivacaine (n = 41) on the other side at an interval of 7 days. The parameters evaluated for both bupivacaine concentrations were onset of action, pain during the procedure (visual analog scale [VAS] score), and duration of action. The results were noted, tabulated, and analyzed using the Wilcoxon signed rank test. Results: The onset of action of 0.5% bupivacaine was quicker than that of 0.25% bupivacaine, but the difference was not statistically significant (P = 0.306). No significant difference was found between the solutions for VAS scores (P = 0.221) and duration of action (P = 0.662). Conclusion: There was no significant difference between 0.25% bupivacaine and 0.5% bupivacaine in terms of onset of action, pain during the procedure, and duration of action. The use of 0.25% bupivacaine is recommended.
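
For readers unfamiliar with the analysis, a paired comparison like the one described (the same patients receiving both concentrations) can be run with SciPy's Wilcoxon signed-rank test; the VAS values below are made up for illustration and are not the study's data.

```python
# Paired Wilcoxon signed-rank test on hypothetical VAS scores.
from scipy.stats import wilcoxon

vas_050 = [2, 3, 1, 4, 2, 3, 2, 1, 3, 2]   # 0.5% bupivacaine side (hypothetical)
vas_025 = [3, 3, 2, 4, 2, 4, 2, 2, 3, 3]   # 0.25% bupivacaine side (hypothetical)

stat, p = wilcoxon(vas_050, vas_025)
print(f"Wilcoxon statistic = {stat}, p = {p:.3f}")   # p >= 0.05 -> no significant difference
```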

Lightweight Single Image Super-Resolution Convolution Neural Network in Portable Device

  • Wang, Jin;Wu, Yiming;He, Shiming;Sharma, Pradip Kumar;Yu, Xiaofeng;Alfarraj, Osama;Tolba, Amr
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.15 no.11
    • /
    • pp.4065-4083
    • /
    • 2021
  • Super-resolution can improve the clarity of low-resolution (LR) images, which can increase the accuracy of high-level computer vision tasks. Portable devices have low computing power and storage performance, so large-scale neural network super-resolution methods are not suitable for them; lightweight image processing methods save computational cost and parameters and improve processing speed on such devices. We therefore propose the Enhanced Information Multiple Distillation Network (EIMDN) to achieve lower delay and cost. The EIMDN uses a feedback mechanism as its framework and obtains low-level features from high-level features. Further, we replace the feature extraction convolution in the Information Multiple Distillation Block (IMDB) with the Ghost module and propose the Enhanced Information Multiple Distillation Block (EIMDB) to reduce the amount of computation and the number of parameters. Finally, coordinate attention (CA) is used at the end of the IMDB and EIMDB to enhance the extraction of important spatial and channel information. Experimental results show that our proposed method converges faster with fewer parameters and less computation than other lightweight super-resolution methods, while achieving higher peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) and significantly improving the reconstruction of image texture and object contours.
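
Below is a minimal sketch of the Ghost-module substitution mentioned above (intrinsic feature maps from a standard convolution plus cheap depthwise "ghost" maps); the channel counts and kernel sizes are illustrative and do not reproduce the EIMDB configuration.

```python
# Ghost module: half the output channels come from a regular convolution, the
# other half from a cheap depthwise convolution applied to those maps.
import torch
import torch.nn as nn

class GhostModule(nn.Module):
    def __init__(self, in_ch, out_ch, ratio=2, kernel=1, cheap_kernel=3):
        super().__init__()
        init_ch = out_ch // ratio                      # "intrinsic" feature maps
        self.primary = nn.Sequential(
            nn.Conv2d(in_ch, init_ch, kernel, padding=kernel // 2, bias=False),
            nn.ReLU(inplace=True),
        )
        self.cheap = nn.Sequential(                    # depthwise "ghost" maps
            nn.Conv2d(init_ch, out_ch - init_ch, cheap_kernel,
                      padding=cheap_kernel // 2, groups=init_ch, bias=False),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        primary = self.primary(x)
        return torch.cat([primary, self.cheap(primary)], dim=1)

x = torch.randn(1, 32, 48, 48)                         # dummy LR feature map
print(GhostModule(32, 64)(x).shape)                    # -> torch.Size([1, 64, 48, 48])
```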

Face Recognitions Using Centroid Shift and Neural Network-based Principal Component Analysis (중심이동과 신경망 기반 주요성분분석을 이용한 얼굴인식)

  • Cho Yong-Hyun
    • The KIPS Transactions:PartB
    • /
    • v.12B no.6 s.102
    • /
    • pp.715-720
    • /
    • 2005
  • This paper presents a hybrid recognition method combining the first moment of the face image with principal component analysis (PCA). The first moment is applied to reduce the dimensionality by shifting the image to its centroid, which excludes needless background in face recognition. PCA is implemented by a single-layer neural network with a learning rule based on Foldiak's algorithm, which has been used as an alternative to numerical PCA; it derives an orthonormal basis that leads directly to dimensionality reduction and to feature extraction from the face image. The proposed method has been applied to recognizing 48 face images (12 persons × 4 scenes) of 64 × 64 pixels. Three distances, city-block, Euclidean, and negative angle, are used as measures when matching probe images to the nearest gallery images. The experimental results show that the proposed method has superior recognition performance in both speed and rate, and that the negative angle achieves a relatively more accurate similarity than the city-block or Euclidean distance.
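
A compact sketch of the pipeline: centre each image on its intensity centroid (first moment), project onto a PCA basis, and match probes to the nearest gallery image by city-block distance. Plain SVD stands in here for the paper's single-layer network trained with the Foldiak rule, and the random images are placeholders.

```python
# Centroid shift + PCA projection + nearest-gallery matching (illustrative).
import numpy as np

def centroid_shift(img):
    ys, xs = np.indices(img.shape)
    total = img.sum() + 1e-9
    cy, cx = (ys * img).sum() / total, (xs * img).sum() / total
    dy, dx = img.shape[0] // 2 - int(cy), img.shape[1] // 2 - int(cx)
    return np.roll(np.roll(img, dy, axis=0), dx, axis=1)

def pca_basis(gallery, k=16):
    X = np.stack([centroid_shift(g).ravel() for g in gallery]).astype(np.float64)
    X -= X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return Vt[:k]                                       # k principal directions

gallery = [np.random.rand(64, 64) for _ in range(48)]   # stand-in for 48 face images
basis = pca_basis(gallery)
features = np.stack([centroid_shift(g).ravel() for g in gallery]) @ basis.T
probe = centroid_shift(np.random.rand(64, 64)).ravel() @ basis.T
match = np.argmin(np.abs(features - probe).sum(axis=1)) # city-block distance
print("nearest gallery index:", match)
```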

Articaine (4%) with epinephrine (1:100,000 or 1:200,000) in inferior alveolar nerve block: Effects on the vital signs and onset, and duration of anesthesia

  • Lasemi, Esshagh;Sezavar, Mehdi;Habibi, Leyla;Hemmat, Seyfollah;Sarkarat, Farzin;Nematollahi, Zahra
    • Journal of Dental Anesthesia and Pain Medicine
    • /
    • v.15 no.4
    • /
    • pp.201-205
    • /
    • 2015
  • Background: This prospective, randomized, double-blind clinical study was conducted to compare the effects of 4% articaine with 1:100,000 epinephrine (A100) and 4% articaine with 1:200,000 epinephrine (A200) on the vital signs and the onset and duration of anesthesia in an inferior alveolar nerve block (IANB). Methods: In the first appointment, an IANB was performed by injecting A100 or A200 on one side of the mouth (right or left), chosen randomly, in patients referred for extraction of both first mandibular molars. In the second appointment, the protocol was repeated and the other anesthetic solution was injected on the side that had not received the block in the previous session. Systolic and diastolic blood pressures (SBP and DBP) and pulse rate were measured during and 5 min after the injection. The onset and duration of anesthesia were also evaluated. Data were analyzed using the t-test and the Mann-Whitney U test, with the significance level set at 0.05. Results: SBP and pulse rate changes were slightly greater with A100, whereas DBP changes were greater with A200, although the differences were not significant (P > 0.05). There were no statistically significant differences in the parameters evaluated in this study: the onset and duration of anesthesia and the changes in SBP, DBP, and pulse rate during and 5 min after the injection were the same in both groups. Conclusions: For an IANB, A200 and A100 were equally efficient and successful in producing the block. Epinephrine concentration did not influence the effects of 4% articaine.