• Title/Summary/Keyword: ROI Detection

Search Results: 210

Fluid Accumulation in Canine Tympanic Bulla: Radiography, CT and MRI Examinations

  • Lee, Young-Won;Kang, Sang-Kyu;Choi, Ho-Jung
    • Journal of Veterinary Clinics
    • /
    • v.25 no.3
    • /
    • pp.176-181
    • /
    • 2008
  • Fluid accumulation within the tympanic bulla is an important diagnostic indicator of canine otitis media, although its identification can be a challenge with currently available imaging techniques. The purpose of this study was to compare radiography, computed tomography (CT), and magnetic resonance imaging (MRI) in identifying fluid accumulation within the canine tympanic bulla. The tympanic bulla on one side in each of 10 beagles was experimentally filled with blood or saline. Quantitative analysis of the CT images was performed using Hounsfield units (HU). MR signal intensity was measured within regions of interest (ROIs) and compared with that of gray matter. On CT images, the presence of blood or saline produced a fluid opacity occupying the tympanic bulla. On MR images, blood in the tympanic bulla appeared isointense on T1-weighted images and hyperintense on T2-weighted images, whereas saline appeared hypointense on T1-weighted images and hyperintense on T2-weighted images. This study suggests that CT and MR imaging are useful methods for the detection and differentiation of fluid in the canine tympanic bulla.
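
A minimal sketch of the kind of ROI measurement this abstract describes, assuming the CT and MR slices are already loaded as NumPy arrays and that binary masks for the bulla and gray-matter ROIs are available; all array names and values here are illustrative stand-ins, not the study's data:

```python
import numpy as np

def mean_in_roi(image, roi_mask):
    """Mean pixel value inside a binary ROI mask."""
    return float(image[roi_mask > 0].mean())

# Dummy data standing in for a CT slice and two MR slices (T1, T2),
# plus binary masks for the tympanic bulla and gray-matter ROIs.
rng = np.random.default_rng(0)
ct_slice = rng.normal(40, 10, (256, 256))       # values in Hounsfield units
t1_slice = rng.normal(300, 30, (256, 256))
t2_slice = rng.normal(500, 50, (256, 256))
bulla_mask = np.zeros((256, 256), dtype=np.uint8)
bulla_mask[100:130, 100:130] = 1
gm_mask = np.zeros((256, 256), dtype=np.uint8)
gm_mask[50:80, 50:80] = 1

bulla_hu = mean_in_roi(ct_slice, bulla_mask)    # CT attenuation inside the bulla ROI
# MR signal intensity of the bulla relative to gray matter, per weighting
t1_ratio = mean_in_roi(t1_slice, bulla_mask) / mean_in_roi(t1_slice, gm_mask)
t2_ratio = mean_in_roi(t2_slice, bulla_mask) / mean_in_roi(t2_slice, gm_mask)
print(bulla_hu, t1_ratio, t2_ratio)
```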

Development of Data Logging Platform of Multiple Commercial Radars for Sensor Fusion With AVM Cameras (AVM 카메라와 융합을 위한 다중 상용 레이더 데이터 획득 플랫폼 개발)

  • Jin, Youngseok;Jeon, Hyeongcheol;Shin, Young-Nam;Hyun, Eugin
    • IEMEK Journal of Embedded Systems and Applications
    • /
    • v.13 no.4
    • /
    • pp.169-178
    • /
    • 2018
  • Currently, various sensors are used in advanced driver assistance systems. To overcome the limitations of individual sensors, sensor fusion has recently attracted attention in the field of intelligent vehicles, and vision-radar sensor fusion has become a popular approach. In a typical fusion scheme, a vision sensor recognizes targets within ROIs (Regions Of Interest) generated by radar sensors. Because the wide-angle lenses of AVM (Around View Monitor) cameras limit detection performance at near distances and around the edges of the field of view, accurate ROI extraction from the radar sensor is essential for high-performance fusion of AVM cameras and radar sensors. To address this problem, we propose a sensor fusion scheme based on commercial radar modules from the vendor Delphi. First, we configured a multiple-radar data logging system together with AVM cameras. We also designed radar post-processing algorithms to extract accurate ROIs. Finally, using the developed hardware and software platforms, we verified the post-processing algorithm in indoor and outdoor environments.
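
A minimal sketch of the basic radar-to-camera ROI generation idea the abstract describes, under simplifying assumptions (radar and camera share an origin, flat ground, a fixed assumed target size); the calibration matrix and ROI sizing are illustrative, not the Delphi-module processing used in the paper:

```python
import numpy as np

def radar_to_image_roi(range_m, azimuth_rad, K, roi_width_m=2.0, roi_height_m=1.5):
    """Project a radar detection (range, azimuth) into a rectangular image ROI.

    K is a 3x3 camera intrinsic matrix; the camera is assumed to sit at the
    radar origin with aligned axes (a simplification for illustration).
    """
    # Radar point in camera coordinates (x right, y down, z forward)
    x = range_m * np.sin(azimuth_rad)
    z = range_m * np.cos(azimuth_rad)
    y = 0.0
    # Project the target center and a point offset by half the assumed target size
    center = K @ np.array([x, y, z])
    corner = K @ np.array([x + roi_width_m / 2, y - roi_height_m / 2, z])
    cx, cy = center[:2] / center[2]
    dx, dy = np.abs(corner[:2] / corner[2] - center[:2] / center[2])
    return (int(cx - dx), int(cy - dy), int(cx + dx), int(cy + dy))  # (x1, y1, x2, y2)

# Example: a pinhole camera with 800-pixel focal length, principal point at (640, 360)
K = np.array([[800.0, 0.0, 640.0],
              [0.0, 800.0, 360.0],
              [0.0, 0.0, 1.0]])
print(radar_to_image_roi(10.0, np.deg2rad(5.0), K))
```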

Effect of a Preprocessing Method on the Inversion of OH* Chemiluminescence Images Acquired for Visualizing SNG Swirl-stabilized Flame Structure (SNG 선회 안정화 화염구조 가시화를 위한 OH* 자발광 이미지 역변환에서 전처리 효과)

  • Ahn, Kwang Ho;Song, Won Joon;Cha, Dong Jin
    • Journal of the Korean Society of Combustion
    • /
    • v.20 no.1
    • /
    • pp.24-31
    • /
    • 2015
  • Flame structure, which contains useful information for studying combustion instability, is often quantitatively visualized with PLIF (planar laser-induced fluorescence) and/or chemiluminescence images. The latter, a line integral of a flame property, needs to be preprocessed before being inverted, mainly because of its inherent noise and the axisymmetry assumption of the inversion. A preprocessing scheme utilizing multi-division of the ROI (region of interest) of the chemiluminescence image is proposed. Its feasibility has been tested with OH PLIF and OH* chemiluminescence images of SNG (synthetic natural gas) swirl-stabilized flames taken from a model gas turbine combustor. The multi-division technique outperforms two conventional approaches, one without preprocessing and the other with uni-division preprocessing, reconstructing the SNG flame structure much more faithfully when compared with the corresponding OH PLIF images. It is also found that the Canny edge detection algorithm used for detecting edges in the multi-division method works better than the Sobel algorithm.
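
A minimal sketch comparing the Canny and Sobel edge detectors mentioned above on an intensity image, using OpenCV; the synthetic test image and thresholds are illustrative assumptions, not the paper's OH* data:

```python
import cv2
import numpy as np

# Synthetic stand-in for a time-averaged OH* chemiluminescence image
img = np.zeros((240, 240), dtype=np.uint8)
cv2.circle(img, (120, 120), 60, 180, -1)            # bright flame-like region
img = cv2.GaussianBlur(img, (15, 15), 0)

# Canny edge detection (hysteresis thresholds chosen by inspection)
edges_canny = cv2.Canny(img, 50, 150)

# Sobel gradient magnitude, thresholded to a binary edge map for comparison
gx = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=3)
gy = cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=3)
mag = cv2.magnitude(gx, gy)
_, edges_sobel = cv2.threshold(np.uint8(255 * mag / mag.max()), 60, 255, cv2.THRESH_BINARY)

print(edges_canny.sum(), edges_sobel.sum())
```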

A Study on the Mark Reader Using the Image Processing (영상처리를 이용한 Mark 판독 기법에 관한 연구)

  • 김승호;김범진;이용구;노도환
    • Proceedings of the Institute of Control, Robotics and Systems Conference (제어로봇시스템학회 학술대회논문집)
    • /
    • 2000.10a
    • /
    • pp.83-83
    • /
    • 2000
  • Recently, vision systems have come into use throughout industry. Sensor-based mark readers, such as optical scanners built on proximity sensing, have many disadvantages, including a limited user interface and difficulty in storing the original specimens. In contrast, a vision-based mark reader offers many advantages, including the ability to be reconfigured for other tasks, high accuracy, and high speed. In this work, we developed a mark reader using a vision system. The processing pipeline consists of image pre-processing steps such as noise reduction, edge detection, and thresholding. We then carried out camera calibration to correct the images acquired from the camera. After searching for reference points within the scanning area (60 pixels × 30 pixels), we calculated crossing points using line equations and defined each ROI (region of interest) from four such points. Next, we converted absolute coordinates into relative coordinates to analyze the translation component. Finally, we performed mark reading with images classified into six patterns. Experiments following the proposed algorithm yielded an error within 0.5% over the entire image.
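
A minimal sketch of the pre-processing and ROI-reading steps outlined above (noise reduction, edge detection, thresholding, an ROI defined by four corner points, and a fill-ratio decision), assuming a grayscale answer-sheet image; the thresholds and the synthetic sheet are illustrative, not the paper's implementation:

```python
import cv2
import numpy as np

def preprocess(gray):
    """Noise reduction, edge detection, and thresholding."""
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    edges = cv2.Canny(blurred, 50, 150)
    _, binary = cv2.threshold(blurred, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    return edges, binary

def to_relative(corners, reference):
    """Express the four ROI corner points relative to a reference point."""
    return [(x - reference[0], y - reference[1]) for (x, y) in corners]

def mark_filled(binary, corners, fill_ratio=0.5):
    """Decide whether the rectangular ROI spanned by four corner points is marked."""
    xs = [c[0] for c in corners]
    ys = [c[1] for c in corners]
    patch = binary[min(ys):max(ys), min(xs):max(xs)]
    return patch.mean() / 255.0 > fill_ratio

# Synthetic answer-sheet patch with one filled mark
sheet = np.full((100, 200), 255, dtype=np.uint8)
cv2.rectangle(sheet, (20, 20), (50, 50), 0, -1)
edges, binary = preprocess(sheet)
print(mark_filled(binary, [(20, 20), (50, 20), (50, 50), (20, 50)]))      # True
print(mark_filled(binary, [(120, 20), (150, 20), (150, 50), (120, 50)]))  # False
```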

Area Classification, Identification and Tracking for Multiple Moving Objects with the Similar Colors (유사한 색상을 지닌 다수의 이동 물체 영역 분류 및 식별과 추적)

  • Lee, Jung Sik;Joo, Yung Hoon
    • The Transactions of The Korean Institute of Electrical Engineers
    • /
    • v.65 no.3
    • /
    • pp.477-486
    • /
    • 2016
  • This paper presents area classification, identification, and tracking for multiple moving objects with similar colors. To do this, first, we use a GMM (Gaussian Mixture Model)-based background modeling method to detect the moving objects. Second, we propose the use of image binarization and morphology operations to eliminate shadows and noise when detecting the moving objects. Third, we recognize the ROI (region of interest) of each moving object through a labeling method. We then propose an area classification method that removes the background from the detected moving objects and a novel method for identifying the classified moving areas. We also propose a method for tracking the identified moving objects using a Kalman filter. Taken together, these steps form an effective tracking method for multiple objects with similar colors. Finally, we demonstrate the feasibility and applicability of the proposed algorithms through experiments.
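
A minimal sketch of the detection chain the abstract describes (GMM background modeling, binarization and morphology, labeling of ROIs, Kalman-filter tracking), using OpenCV's MOG2 and KalmanFilter; the parameters and the single-track update are illustrative simplifications:

```python
import cv2
import numpy as np

# GMM-based background modeling (MOG2) as the detection front end
bg_model = cv2.createBackgroundSubtractorMOG2(history=200, detectShadows=True)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

# One constant-velocity Kalman filter (state: x, y, vx, vy; measurement: x, y);
# in practice one filter would be kept per tracked object.
kf = cv2.KalmanFilter(4, 2)
kf.transitionMatrix = np.array([[1, 0, 1, 0], [0, 1, 0, 1],
                                [0, 0, 1, 0], [0, 0, 0, 1]], np.float32)
kf.measurementMatrix = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], np.float32)
kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2
kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1
kf.errorCovPost = np.eye(4, dtype=np.float32)

def detect_and_track(frame):
    """One iteration: foreground mask -> clean-up -> labeled ROIs -> Kalman update."""
    fg = bg_model.apply(frame)
    _, fg = cv2.threshold(fg, 200, 255, cv2.THRESH_BINARY)   # drop the shadow label (127)
    fg = cv2.morphologyEx(fg, cv2.MORPH_OPEN, kernel)         # remove small noise
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(fg)
    rois = [tuple(stats[i, :4]) for i in range(1, n) if stats[i, cv2.CC_STAT_AREA] > 100]
    if n > 1:
        kf.predict()
        kf.correct(np.array(centroids[1], np.float32).reshape(2, 1))
    return rois

# Toy usage: a static background, then one frame containing a bright object
bg = np.zeros((120, 160, 3), np.uint8)
frame = bg.copy()
cv2.rectangle(frame, (60, 40), (90, 80), (255, 255, 255), -1)
for _ in range(30):
    detect_and_track(bg)          # let the background model settle
print(detect_and_track(frame))    # one ROI for the rectangle
```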

Finger Vein Recognition based on Matching Score-Level Fusion of Gabor Features

  • Lu, Yu;Yoon, Sook;Park, Dong Sun
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.38A no.2
    • /
    • pp.174-182
    • /
    • 2013
  • Most fusion-based finger vein recognition methods fuse different features or matching scores from more than one trait to improve performance. To overcome the shortcomings of the curse of dimensionality and the additional running time of feature extraction, this paper proposes a finger vein recognition technique based on matching score-level fusion of a single trait. To enhance the quality of the finger vein image, the contrast-limited adaptive histogram equalization (CLAHE) method is applied to improve the local contrast of the normalized image after ROI detection. Gabor features are then extracted from eight channels of a Gabor filter bank. Instead of using these features for recognition directly, we analyze the contribution of the Gabor features from each channel and apply a weighted matching score-level fusion rule to obtain the final matching score, which is used for the final recognition. Experimental results demonstrate that the CLAHE method effectively enhances finger vein image quality and that the proposed matching score-level fusion provides better recognition performance.
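
A minimal sketch of the enhancement and fusion steps described above: CLAHE on the ROI, an eight-channel Gabor filter bank, and a weighted sum of per-channel matching scores. Cosine similarity and uniform weights are stand-ins for the paper's learned channel contributions:

```python
import cv2
import numpy as np

def enhance(roi_gray):
    """CLAHE to improve local contrast of the normalized ROI image."""
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(roi_gray)

def gabor_features(img, n_channels=8, ksize=21):
    """One feature vector per channel from a bank of Gabor filters at 8 orientations."""
    feats = []
    for k in range(n_channels):
        theta = k * np.pi / n_channels
        kernel = cv2.getGaborKernel((ksize, ksize), sigma=4.0, theta=theta,
                                    lambd=10.0, gamma=0.5)
        resp = cv2.filter2D(img.astype(np.float32), cv2.CV_32F, kernel)
        feats.append(resp.flatten())
    return feats

def fused_score(probe_feats, gallery_feats, weights):
    """Weighted sum of per-channel matching scores (cosine similarity here)."""
    scores = [float(np.dot(p, g) / (np.linalg.norm(p) * np.linalg.norm(g) + 1e-9))
              for p, g in zip(probe_feats, gallery_feats)]
    return float(np.dot(weights, scores))

# Toy example with random "finger vein" ROIs and uniform channel weights
rng = np.random.default_rng(1)
probe = enhance(rng.integers(0, 255, (64, 128), dtype=np.uint8))
gallery = enhance(rng.integers(0, 255, (64, 128), dtype=np.uint8))
w = np.full(8, 1.0 / 8)
print(fused_score(gabor_features(probe), gabor_features(gallery), w))
```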

Texture analysis of Thyroid Nodules in Ultrasound Image for Computer Aided Diagnostic system (컴퓨터 보조진단을 위한 초음파 영상에서 갑상선 결절의 텍스쳐 분석)

  • Park, Byung eun;Jang, Won Seuk;Yoo, Sun Kook
    • Journal of Korea Multimedia Society
    • /
    • v.20 no.1
    • /
    • pp.43-50
    • /
    • 2017
  • With changes in the living environment, the number of deaths due to thyroid disease has increased. In this paper, we propose an algorithm for characterizing thyroid nodules using texture analysis based on shape, the gray level co-occurrence matrix, and the gray level run length matrix. First, we segmented the region of interest (ROI) using an active contour model algorithm. Then, we computed a total of 18 features (5 first-order descriptors, 10 gray level co-occurrence matrix (GLCM) features, 2 gray level run length matrix (GLRLM) features, and a shape feature) for each thyroid region of interest. The extracted features were then subjected to statistical analysis. Our results show that the first-order statistics (skewness, entropy, energy, smoothness), GLCM features (correlation, contrast, energy, entropy, difference variance, difference entropy, homogeneity, maximum probability, sum average, sum entropy), GLRLM features, and the shape feature help to distinguish benign from malignant thyroid nodules. This algorithm should be helpful for the diagnosis of thyroid nodules on ultrasound images.
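
A minimal sketch of the texture-feature extraction described above, assuming the ROI has already been segmented (the active contour step is omitted); it computes a few first-order and GLCM descriptors with scikit-image rather than the full 18-feature set:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(roi, levels=32):
    """A few GLCM texture descriptors from a segmented (rectangular) thyroid ROI."""
    q = (roi.astype(np.float64) / roi.max() * (levels - 1)).astype(np.uint8)  # quantize
    glcm = graycomatrix(q, distances=[1], angles=[0, np.pi / 2],
                        levels=levels, symmetric=True, normed=True)
    return {name: float(graycoprops(glcm, name).mean())
            for name in ("contrast", "correlation", "energy", "homogeneity")}

def first_order_features(roi):
    """First-order statistics used alongside the GLCM descriptors."""
    p, _ = np.histogram(roi, bins=256, range=(0, 256), density=True)
    p = p[p > 0]
    return {
        "skewness": float(((roi - roi.mean()) ** 3).mean() / (roi.std() ** 3 + 1e-9)),
        "entropy": float(-(p * np.log2(p)).sum()),
        "energy": float((p ** 2).sum()),
        "smoothness": float(1.0 - 1.0 / (1.0 + roi.var())),
    }

# Toy ultrasound-like patch standing in for a segmented nodule ROI
rng = np.random.default_rng(2)
patch = rng.integers(0, 256, (80, 80), dtype=np.uint8)
print(glcm_features(patch))
print(first_order_features(patch))
```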

Visualization and classification of hidden defects in triplex composites used in LNG carriers by active thermography

  • Hwang, Soonkyu;Jeon, Ikgeun;Han, Gayoung;Sohn, Hoon;Yun, Wonjun
    • Smart Structures and Systems
    • /
    • v.24 no.6
    • /
    • pp.803-812
    • /
    • 2019
  • Triplex composite is an epoxy-bonded joint structure that constitutes the secondary barrier in a liquefied natural gas (LNG) carrier. Defects in the triplex composite weaken its shear strength and may cause leakage of the LNG, thus compromising the structural integrity of the LNG carrier. This paper proposes an autonomous triplex composite inspection (ATCI) system for visualizing and classifying hidden defects in the triplex composite installed inside an LNG carrier. First, heat energy is generated on the surface of the triplex composite using halogen lamps, and the corresponding heat response is measured by an infrared (IR) camera. Next, the region of interest (ROI) is traced and noise components are removed to minimize false indications of defects. Once a defect is identified, it is classified as an internal void or uncured adhesive, and its size and shape are quantified and visualized. The proposed ATCI system allows fully automated, contactless detection, classification, and quantification of hidden defects inside the triplex composite. Its effectiveness is validated using data obtained from actual triplex composite installed in an LNG carrier membrane system.
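
A crude sketch of the post-processing idea in the abstract (compare the measured heat response against a defect-free reference within the ROI, then size each flagged region); the temperature threshold, reference frame, and sizing criteria are illustrative assumptions, not the ATCI system's algorithm, and the void/uncured-adhesive classification step is omitted:

```python
import cv2
import numpy as np

def find_defects(thermal_roi, reference_roi, dT=2.0, min_area=25):
    """Flag pixels whose heat response deviates from a defect-free reference,
    then size each connected region above a minimum area."""
    diff = cv2.absdiff(thermal_roi.astype(np.float32), reference_roi.astype(np.float32))
    mask = (diff > dT).astype(np.uint8) * 255
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))  # denoise
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    defects = []
    for i in range(1, n):
        if stats[i, cv2.CC_STAT_AREA] >= min_area:
            x, y, w, h, area = stats[i]
            defects.append({"bbox": (int(x), int(y), int(w), int(h)), "area_px": int(area)})
    return defects

# Toy frames: a uniform reference and a measurement with one warmer (defective) spot
ref = np.full((120, 160), 30.0, np.float32)
meas = ref.copy()
meas[40:60, 70:100] += 5.0
print(find_defects(meas, ref))
```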

Improved Sliding Shapes for Instance Segmentation of Amodal 3D Object

  • Lin, Jinhua;Yao, Yu;Wang, Yanjie
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.12 no.11
    • /
    • pp.5555-5567
    • /
    • 2018
  • State-of-the-art instance segmentation networks are successful at generating 2D segmentation masks for the region proposals with the highest classification scores, yet the 3D object segmentation task has been limited to geocentric embedding or the Sliding Shapes detector. To this end, we propose an amodal 3D instance segmentation network called A3IS-CNN, which extends the Deep Sliding Shapes detector to amodal 3D instance segmentation by adding a new 3D ConvNet branch called the A3IS-branch. The A3IS-branch, which takes a 3D amodal ROI as input and outputs 3D semantic instances, is a fully convolutional network (FCN) that shares convolutional layers with the existing 3D RPN, which takes a 3D scene as input and outputs 3D amodal proposals. Because the two branches share computation, our 3D instance segmentation network adds only a small overhead of 0.25 fps to Deep Sliding Shapes, trading off accurate detection and point-to-point segmentation of instances. Experiments show that our 3D instance segmentation network achieves at least a 10% to 50% improvement in running time over the state-of-the-art network and outperforms state-of-the-art 3D detectors by at least 16.1 AP.
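
A toy PyTorch sketch of the compute-sharing idea only: a voxel backbone evaluated once, with a proposal head and a fully convolutional mask branch both operating on the shared features. The layer sizes, ROI cropping, and heads are illustrative and do not reproduce the A3IS-CNN architecture:

```python
import torch
import torch.nn as nn

class SharedBackbone3D(nn.Module):
    """Tiny voxel backbone whose features are shared by both heads."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, 3, padding=1), nn.ReLU())
    def forward(self, x):
        return self.net(x)

class RPNHead3D(nn.Module):
    """Proposal head: objectness score per voxel (anchors/regression omitted)."""
    def __init__(self):
        super().__init__()
        self.score = nn.Conv3d(32, 1, 1)
    def forward(self, feat):
        return torch.sigmoid(self.score(feat))

class SegBranch3D(nn.Module):
    """Fully convolutional instance-mask branch operating on a 3D ROI crop."""
    def __init__(self):
        super().__init__()
        self.mask = nn.Sequential(nn.Conv3d(32, 16, 3, padding=1), nn.ReLU(),
                                  nn.Conv3d(16, 1, 1))
    def forward(self, roi_feat):
        return torch.sigmoid(self.mask(roi_feat))

backbone, rpn, seg = SharedBackbone3D(), RPNHead3D(), SegBranch3D()
scene = torch.randn(1, 1, 32, 32, 32)        # voxelized depth scene
feat = backbone(scene)                        # computed once, shared by both branches
proposals = rpn(feat)
roi_feat = feat[:, :, 8:24, 8:24, 8:24]       # crop corresponding to one 3D proposal
mask = seg(roi_feat)
print(proposals.shape, mask.shape)
```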

Vision-Based High Accuracy Vehicle Positioning Technology (비전 기반 고정밀 차량 측위 기술)

  • Jo, Sang-Il;Lee, Jaesung
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.41 no.12
    • /
    • pp.1950-1958
    • /
    • 2016
  • Today, techniques for precisely positioning vehicles are very important in C-ITS (Cooperative Intelligent Transport Systems), self-driving cars, and other transportation-related information technologies. Although the most popular technology for vehicle positioning is GPS, its accuracy is unreliable because of large delays caused by the multipath effect, which is problematic for real-time traffic applications. Therefore, in this paper, we propose a vision-based high-accuracy vehicle positioning technology. In the first step of the proposed algorithm, an ROI is set up over the road area for vehicle detection. Then, the center and four corner points of the vehicles found on the road are determined. Lastly, these points are converted into an aerial-view map using a homography matrix. Performance analysis shows that this technique achieves high accuracy, with an average error of less than about 20 cm and a maximum error not exceeding 44.72 cm. In addition, the algorithm is confirmed to run fast enough for real-time positioning at 22-25 FPS.
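
A minimal sketch of the final step described above: mapping detected vehicle points onto an aerial-view (ground-plane) map with a homography. The point correspondences used to estimate the homography here are made up for illustration:

```python
import cv2
import numpy as np

# Four reference points on the road surface, in image pixels and in a metric
# ground ("aerial view") frame; these correspondences are illustrative only.
img_pts = np.float32([[420, 520], [860, 520], [1180, 700], [120, 700]])
ground_pts = np.float32([[0.0, 20.0], [3.5, 20.0], [3.5, 10.0], [0.0, 10.0]])  # meters
H, _ = cv2.findHomography(img_pts, ground_pts)

def to_ground(points_px):
    """Map detected vehicle points (e.g., center and four corners) onto the ground plane."""
    pts = np.float32(points_px).reshape(-1, 1, 2)
    return cv2.perspectiveTransform(pts, H).reshape(-1, 2)

# Example: the center point and four corner points of one detected vehicle
vehicle_pts = [(640, 600), (600, 580), (680, 580), (600, 620), (680, 620)]
print(to_ground(vehicle_pts))   # positions in meters on the aerial-view map
```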