• Title/Summary/Keyword: RGB color image

Perfusion MR Imaging of the Brain Tumor: Preliminary Report (뇌종양의 관류 자기공명영상: 예비보고)

  • 김홍대;장기현;성수옥;한문희;한만청
    • Investigative Magnetic Resonance Imaging
    • /
    • v.1 no.1
    • /
    • pp.119-124
    • /
    • 1997
  • Purpose: To assess the utility of the magnetic resonance (MR) cerebral blood volume (CBV) map in the evaluation of brain tumors. Materials and Methods: We performed perfusion MR imaging preoperatively in 15 consecutive patients with intracranial masses (3 meningiomas, 2 glioblastoma multiformes, 3 low grade gliomas, 1 lymphoma, 1 germinoma, 1 neurocytoma, 1 metastasis, 2 abscesses, 1 radionecrosis). The average age of the patients was 42 years (22-68 yr); 10 were male and 5 female. All MR images were obtained on a 1.5T imager (Signa, GE Medical Systems, Milwaukee, Wisconsin). The regional CBV map was obtained on the theoretical basis of the susceptibility difference induced by the first pass of contrast media (15 cc of gadopentetate dimeglumine, injected by hand at about 2 ml/sec, starting 10 seconds after the first baseline scan). For each patient, a total of 480 images (6 slices, 80 images/slice in 160 sec) were obtained using a gradient echo (GE) single-shot echo-planar imaging (EPI) sequence (TR 2000 ms, TE 50 ms, flip angle $90^{\circ}$, FOV $240\times240$ mm, matrix $128\times128$, slice thickness/gap 5/2.5 mm). After data collection, the raw data were transferred to a GE workstation and rCBV maps were generated by numerical integration of $\Delta R2^{*}$ on a voxel-by-voxel basis with in-house software, where $\Delta R2^{*} = -\ln(S/S_0)/TE$ (a computational sketch follows this entry). For easy visual interpretation, relative rCBV color coding with reference to normal white matter was applied to obtain color rCBV maps. The findings of perfusion MR imaging were retrospectively correlated with Gd-enhanced images, with focus on the degree and extent of perfusion and contrast enhancement. Results: Two cases of glioblastoma multiforme with rim enhancement on Gd-enhanced T1-weighted images showed increased perfusion in the peripheral rim and decreased perfusion in the central necrotic portion. The low grade gliomas appeared as low perfusion areas with poorly defined margins. In 2 cases of brain abscess, the degree of perfusion in the peripheral enhancing rim was similar to that of normal white matter and was low in the central portion. All meningiomas showed diffuse homogeneous increased perfusion of moderate or high degree. The lymphoma and the germinoma each showed homogeneously decreased perfusion with a well defined margin. The central neurocytoma showed multifocal areas of increased perfusion of moderate or high degree. A few nodules of the multiple metastasis showed increased perfusion of moderate degree. The radionecrosis revealed multiple foci of increased perfusion within an area of decreased perfusion. Conclusion: The rCBV map appears to correlate well with the perfusion state of brain tumors, and may be helpful in discriminating between low grade and high grade gliomas. Further study is needed to clarify the role of perfusion MR imaging in the evaluation of brain tumors.

  • PDF
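
The rCBV computation described above reduces to two steps: convert each dynamic signal image to $\Delta R2^{*}(t) = -\ln(S/S_0)/TE$ and integrate over the bolus passage voxel by voxel, then scale relative to normal white matter. A minimal NumPy sketch is given below; the array layout, baseline-frame count, and white-matter normalization helper are illustrative assumptions, not the authors' software.

```python
import numpy as np

def rcbv_map(dsc_series, te=0.05, tr=2.0, n_baseline=10):
    """Relative CBV map from a dynamic susceptibility-contrast EPI series.

    dsc_series : array, shape (n_frames, ny, nx) -- signal S(t) over time.
    te         : echo time in seconds (the study used TE = 50 ms).
    tr         : frame spacing in seconds (TR = 2 s in the study).
    n_baseline : number of pre-bolus frames averaged to estimate S0.
    """
    s = dsc_series.astype(np.float64)
    s0 = s[:n_baseline].mean(axis=0)                      # baseline signal S0
    eps = 1e-6
    # Delta R2*(t) = -ln(S / S0) / TE, computed voxel by voxel
    delta_r2 = -np.log(np.clip(s / (s0 + eps), eps, None)) / te
    # Numerical integration of Delta R2* over the bolus passage (rectangle rule)
    return delta_r2.sum(axis=0) * tr

def normalize_to_white_matter(rcbv, wm_mask):
    """Relative color coding: scale so normal white matter averages 1."""
    return rcbv / rcbv[wm_mask].mean()
```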

A Study on u-CCTV Fire Prevention System Development of System and Fire Judgement (u-CCTV 화재 감시 시스템 개발을 위한 시스템 및 화재 판별 기술 연구)

  • Kim, Young-Hyuk;Lim, Il-Kwon;Li, Qigui;Park, So-A;Kim, Myung-Jin;Lee, Jae-Kwang
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2010.05a
    • /
    • pp.463-466
    • /
    • 2010
  • In this paper, we aim to develop a CCTV-based fire surveillance system. The advantages and disadvantages of existing sensor-based and video-based fire surveillance systems are analyzed, and a fire surveillance system model and fire judgement technique are proposed that suit the ubiquitous environments being deployed with national support, such as U-City, U-Home, and U-Campus. For this study, images were captured with a Microsoft LifeCam VX-1000 and analyzed for apple and tomato test objects, and H.264 was used for encoding. The client was built on an ARM9 S3C2440 board running Linux; its role is to pass the captured images to the server for processing. Client and server basically communicate video 1:1, so multicast support will be specified so that multiple receivers can be served; the fire surveillance system is designed for multi-party video communication. Video data are converted from RGB to YUV format before transfer, and fire detection uses the Y value, which reflects movement. Fire is judged by detecting the red color of flames and by continuously tracking flame movement through the Y value (a detection sketch follows this entry).

  • PDF
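
The detection rule summarized above (convert RGB to YUV, use Y-value frame differences for flame flicker, and gate on red-dominant pixels) can be sketched as follows. The threshold values and function names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def rgb_to_y(frame):
    """Luma (Y) of an RGB frame using the usual BT.601 weights."""
    r, g, b = frame[..., 0], frame[..., 1], frame[..., 2]
    return 0.299 * r + 0.587 * g + 0.114 * b

def fire_candidates(prev_frame, frame, motion_thresh=20.0, red_thresh=150):
    """Pixels that both changed between frames (flicker) and look fire-red.

    prev_frame, frame : uint8 RGB arrays of the same shape.
    motion_thresh, red_thresh : illustrative values that would need tuning
    per camera and scene.
    """
    moving = np.abs(rgb_to_y(frame) - rgb_to_y(prev_frame)) > motion_thresh
    r, g, b = frame[..., 0], frame[..., 1], frame[..., 2]
    reddish = (r > red_thresh) & (r > g) & (g > b)   # red-dominant flame color
    return moving & reddish
```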

A Robust Hand Recognition Method to Variations in Lighting (조명 변화에 안정적인 손 형태 인지 기술)

  • Choi, Yoo-Joo;Lee, Je-Sung;You, Hyo-Sun;Lee, Jung-Won;Cho, We-Duke
    • The KIPS Transactions:PartB
    • /
    • v.15B no.1
    • /
    • pp.25-36
    • /
    • 2008
  • In this paper, we present a hand recognition approach that is robust to sudden illumination changes. The proposed approach constructs a background model with respect to hue and hue gradient in HSI color space and extracts the foreground hand region from an input image using background subtraction. Eighteen features are defined for a hand pose, and a multi-class SVM (Support Vector Machine) approach is applied to learn and classify hand poses based on these eighteen features. The proposed approach robustly extracts the hand contour under variations in illumination by incorporating the hue gradient into the background subtraction. A hand pose is described by two eigenvalues normalized by the size of the OBB (Object-Oriented Bounding Box) and sixteen feature values representing the number of hand contour points in each subrange of the OBB. We compared RGB-based background subtraction, hue-based background subtraction, and the proposed approach under sudden illumination changes and demonstrated the robustness of the proposed approach. In the experiment, we built a hand pose training model from 2,700 sample hand images of six subjects representing the nine digits from one to nine. Our implementation achieves a recognition rate of 92.6% for 1,620 hand images under various lighting conditions using this training model.
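
As a rough illustration of a hue / hue-gradient background model, the sketch below uses OpenCV's HSV hue as a stand-in for the HSI hue used in the paper, ignores hue wraparound, and uses arbitrary thresholds; the eighteen-feature SVM stage is not shown.

```python
import numpy as np
import cv2

def hue_and_gradient(bgr):
    """Hue channel (OpenCV HSV hue, 0-179) and its gradient magnitude."""
    hue = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)[..., 0].astype(np.float32)
    gx = cv2.Sobel(hue, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(hue, cv2.CV_32F, 0, 1, ksize=3)
    return hue, cv2.magnitude(gx, gy)

def build_background(frames):
    """Mean hue / hue-gradient model from a set of background frames."""
    hues, grads = zip(*(hue_and_gradient(f) for f in frames))
    return np.mean(hues, axis=0), np.mean(grads, axis=0)

def foreground_mask(bgr, bg_hue, bg_grad, t_hue=15.0, t_grad=30.0):
    """Pixels whose hue or hue gradient departs from the background model."""
    hue, grad = hue_and_gradient(bgr)
    return (np.abs(hue - bg_hue) > t_hue) | (np.abs(grad - bg_grad) > t_grad)
```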

A Design and Implementation of Fitness Application Based on Kinect Sensor

  • Lee, Won Joo
    • Journal of the Korea Society of Computer and Information
    • /
    • v.26 no.3
    • /
    • pp.43-50
    • /
    • 2021
  • In this paper, we design and implement KITNESS, a Windows application that gives feedback on the accuracy of fitness motions based on the Kinect sensor. The feature of this application is that it uses Kinect's camera and joint recognition sensor to guide the user to exercise in the correct fitness posture. The distance between the user and the Kinect is measured using Kinect's IR emitter and IR depth sensor, and the positions of the user's joints and the skeleton data of each joint are obtained. From these data, the distances between joint positions are calculated for the user's posture, and the accuracy of the posture is determined. The application is implemented so that users can check their posture through Kinect's RGB camera: if the user's posture is correct, the skeleton information is displayed as a green line, and if it is not, the inaccurate part is displayed as a red line to give intuitive feedback. Through this application, users receive feedback on the accuracy of their exercise posture and can therefore exercise in the correct position on their own. The application classifies exercises into three areas, neck, waist, and leg, and increases Kinect's recognition rate by excluding postures that Kinect cannot recognize because of overlapping joints in each exercise area. At the end of a session, the last exercise is shown as an image for 5 seconds to inspire a sense of accomplishment and encourage continued exercise.
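
A minimal sketch of the joint-distance check described above, assuming joint positions are already available as 3-D coordinates (for example from a skeleton tracker); the joint names, the reference-pose table, and the 5 cm tolerance are hypothetical, not taken from the application.

```python
import math

# Hypothetical joint positions in meters, e.g. as returned by a skeleton tracker:
# joints = {"neck": (x, y, z), "spine": (x, y, z), "knee_left": (x, y, z), ...}

def joint_distance(a, b):
    """Euclidean distance between two 3-D joint positions."""
    return math.dist(a, b)

def posture_feedback(joints, reference, tolerance=0.05):
    """Compare each tracked joint pair against a reference pose.

    reference : dict mapping (joint_a, joint_b) -> expected distance in meters.
    Returns a dict mapping joint pairs to True (within tolerance, draw green)
    or False (off by more than `tolerance`, draw red).
    """
    feedback = {}
    for (j1, j2), ref_dist in reference.items():
        feedback[(j1, j2)] = abs(joint_distance(joints[j1], joints[j2]) - ref_dist) <= tolerance
    return feedback
```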

3D object generation based on the depth information of an active sensor (능동형 센서의 깊이 정보를 이용한 3D 객체 생성)

  • Kim, Sang-Jin;Yoo, Ji-Sang;Lee, Seung-Hyun
    • Journal of the Korea Computer Industry Society
    • /
    • v.7 no.5
    • /
    • pp.455-466
    • /
    • 2006
  • In this paper, 3D objects are created from a real scene using an active sensor that provides depth and RGB information. To get the depth information, this paper uses the $Zcam^{TM}$ camera, which has a built-in active sensor module. [abridged] Third, the detailed parameters are calibrated and a 3D mesh model is created from the depth information; neighboring points are then connected to complete the 3D mesh model. Finally, the color image data is applied to the mesh model and mapping is carried out to create the 3D object (see the sketch after this entry). Experiments show that creating 3D objects from the data of a camera with an active sensor is possible, and that this method is easier and more practical than using a 3D range scanner.

  • PDF
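
The depth-to-mesh pipeline summarized above (back-project depth with camera intrinsics, connect neighboring points into triangles, then map the color image onto the mesh) might look roughly like the NumPy sketch below; the intrinsic parameters and function names are assumptions, not the paper's implementation.

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth map (meters) to 3-D points with pinhole intrinsics."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1)            # shape (h, w, 3)

def grid_mesh_faces(h, w):
    """Triangle faces connecting each pixel to its right/bottom neighbors."""
    idx = np.arange(h * w).reshape(h, w)
    a, b = idx[:-1, :-1].ravel(), idx[:-1, 1:].ravel()
    c, d = idx[1:, :-1].ravel(), idx[1:, 1:].ravel()
    return np.concatenate([np.stack([a, b, c], 1), np.stack([b, d, c], 1)])

def colored_mesh(depth, rgb, fx, fy, cx, cy):
    """Vertices, faces, and per-vertex colors for a textured depth mesh."""
    verts = depth_to_points(depth, fx, fy, cx, cy).reshape(-1, 3)
    faces = grid_mesh_faces(*depth.shape)
    colors = rgb.reshape(-1, 3)                    # map the RGB image onto the mesh
    return verts, faces, colors
```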

Detection and Classification of Major Aerosol Type Using the Himawari-8/AHI Observation Data (Himawari-8/AHI 관측자료를 이용한 주요 대기 에어로솔 탐지 및 분류 방법)

  • Lee, Kwon-Ho;Lee, Kyu-Tae
    • Journal of Korean Society for Atmospheric Environment
    • /
    • v.34 no.3
    • /
    • pp.493-507
    • /
    • 2018
  • Because the amount and the optical/microphysical properties of atmospheric aerosols vary strongly in space and time, satellite-based observations are needed for spatiotemporal monitoring of the major aerosol types. Observing heavy aerosol episodes and determining the dominant aerosol type from a geostationary satellite provide a chance to prepare in advance for harmful episodes, since the temporal evolution can be monitored repeatedly. A new geostationary sensor, the Advanced Himawari Imager (AHI) onboard the Himawari-8 platform, has been providing images of high spatial and temporal resolution at sixteen wavelengths since 2016. Using the observed visible spectral reflectance and infrared brightness temperature (BT), an algorithm was developed to identify the major aerosol types: volcanic ash (VA), desert dust (DD), polluted aerosol (PA), and clean aerosol (CA). RGB color composite images show dusty, hazy, and cloudy areas and can be used for comparison with the aerosol detection product (ADP). The CALIPSO level 2 vertical feature mask (VFM) data and the MODIS level 2 aerosol product are used for comparison with the Himawari-8/AHI ADP. The VFM product provides a nearly coincident dataset, but few match-ups are returned because of cloud cover and its very narrow swath. In the case study, the percent correct (PC) values obtained from these comparisons are 0.76 for DD, 0.99 for PA, and 0.87 for CA, respectively. The MODIS L2 aerosol products provide a nearly coincident dataset with many collocated locations over ocean and land. Higher accuracy was obtained in the Asian region (POD = 0.96 over land and 0.69 over ocean), comparable to the full-disc region (POD = 0.93 over land and 0.48 over ocean). The Himawari-8/AHI ADP algorithm will be improved continuously, and validation will proceed by comparison with a larger number of collocated data from other satellites and ground-based observations.
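
The PC and POD scores quoted above are standard categorical verification measures; a sketch of how they are typically computed from collocated predicted and reference aerosol types is given below. The exact collocation and matching rules used in the paper are not reproduced here.

```python
import numpy as np

def contingency(pred, truth, category):
    """2x2 contingency counts for one aerosol category (e.g. 'DD' or 'PA')."""
    hit         = np.sum((pred == category) & (truth == category))
    miss        = np.sum((pred != category) & (truth == category))
    false_alarm = np.sum((pred == category) & (truth != category))
    correct_neg = np.sum((pred != category) & (truth != category))
    return hit, miss, false_alarm, correct_neg

def percent_correct(pred, truth, category):
    """PC = (hits + correct negatives) / all collocated cases."""
    hit, miss, fa, cn = contingency(pred, truth, category)
    return (hit + cn) / (hit + miss + fa + cn)

def probability_of_detection(pred, truth, category):
    """POD = hits / (hits + misses)."""
    hit, miss, _, _ = contingency(pred, truth, category)
    return hit / (hit + miss) if (hit + miss) else np.nan
```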

Extracting Method The New Roads by Using High-resolution Aerial Orthophotos (고해상도 항공정사영상을 이용한 신설 도로 추출 방법에 관한 연구)

  • Lee, Kyeong Min;Go, Shin Young;Kim, Kyeong Min;Cho, Gi Sung
    • Journal of Korean Society for Geospatial Information Science
    • /
    • v.22 no.3
    • /
    • pp.3-10
    • /
    • 2014
  • Digital maps are made by experts who digitize data from aerial images and field surveys, and the maps are updated every two years by the National Geographic Information Institute. Conventional digitizing methods require a great deal of time and cost, while geographic information needs to be modified and updated promptly as geographical features change rapidly. Therefore, in this paper, to update the road information of the digital map rapidly, HSI color conversion was performed on high-resolution aerial orthophotos taken at different times. Classification of the road area was performed with region growing methods. In addition, changes in the target area were analyzed by applying the CVA technique, and the changed road areas were compared to assess the accuracy of the proposed extraction.
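
A minimal sketch of the CVA (Change Vector Analysis) step mentioned above: the per-pixel magnitude of the spectral change vector between two co-registered (for example HSI-converted) images is thresholded into a change map. The threshold value and function names are illustrative assumptions.

```python
import numpy as np

def change_vector_magnitude(img_t1, img_t2):
    """Per-pixel magnitude of the spectral change between two co-registered
    multi-channel images of the same scene taken at different times."""
    diff = img_t2.astype(np.float64) - img_t1.astype(np.float64)
    return np.sqrt((diff ** 2).sum(axis=-1))

def changed_mask(img_t1, img_t2, threshold):
    """Binary change map; the threshold would be chosen empirically."""
    return change_vector_magnitude(img_t1, img_t2) > threshold
```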

Lip Contour Detection by Multi-Threshold (다중 문턱치를 이용한 입술 윤곽 검출 방법)

  • Kim, Jeong Yeop
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.9 no.12
    • /
    • pp.431-438
    • /
    • 2020
  • In this paper, a method to extract the lip contour by multiple thresholds is proposed. Spyridonos et al. proposed a method to extract the lip contour. The first step obtains the Q image by transforming RGB into YIQ. The second step finds the lip corner points by change-point detection and splits the Q image into upper and lower parts at the corner points. Candidate lip contours are obtained by applying thresholds to the Q image. For each candidate contour a feature variance is calculated, and the contour with the maximum variance is adopted as the final contour. The feature variance 'D' is based on the absolute difference near the contour points. The conventional method has three problems. The first is related to the lip corner points: the variance calculation depends on many skin pixels, which decreases the accuracy and affects the splitting of the Q image. Second, no color system other than YIQ is analyzed; YIQ works well, but other color systems such as HSV, CIELUV, and YCrCb should be considered. The final problem is the selection of the optimal contour: the selection process uses the maximum of the average feature variance over the pixels near the contour points, which shrinks the extracted contour compared to the ground-truth contours. To solve the first problem, the proposed method excludes some of the skin pixels, which yields a 30% performance increase. For the second problem, the HSV, CIELUV, and YCrCb coordinate systems are tested, and no particular dependency of the conventional method on the color system is found. For the final problem, the maximum of the total sum of the feature variance is adopted rather than the maximum of the average feature variance, giving a 46% performance increase. Combining all the solutions, the proposed method achieves twice the accuracy and stability of the conventional method.
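
The sketch below illustrates the overall flow, assuming RGB values in [0, 1]: compute the Q channel of YIQ, generate one candidate region per threshold, score each candidate near its boundary, and keep the candidate with the maximum summed (not averaged) score, mirroring the proposed fix. The boundary and scoring approximations are mine, not the paper's exact feature variance D.

```python
import numpy as np

def q_channel(rgb):
    """Q component of the NTSC YIQ transform (RGB assumed in [0, 1])."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return 0.211 * r - 0.523 * g + 0.312 * b

def candidate_contours(q, thresholds):
    """One binary candidate region per threshold on the Q image."""
    return [(t, q > t) for t in thresholds]

def contour_score(q, mask):
    """Summed absolute Q difference across the region boundary, approximated
    here with vertical transitions of the mask (a rough stand-in for D)."""
    edge = mask ^ np.roll(mask, 1, axis=0)
    grad = np.abs(np.diff(q, axis=0, prepend=q[:1]))
    return grad[edge].sum()

def best_contour(q, thresholds):
    """Pick the candidate whose boundary maximizes the summed score."""
    return max(candidate_contours(q, thresholds),
               key=lambda tm: contour_score(q, tm[1]))
```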

A Study on the Feature Extraction Using Spectral Indices from WorldView-2 Satellite Image (WorldView-2 위성영상의 분광지수를 이용한 개체 추출 연구)

  • Hyejin, Kim;Yongil, Kim;Byungkil, Lee
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.33 no.5
    • /
    • pp.363-371
    • /
    • 2015
  • Feature extraction is one of the main goals of many remote sensing analyses. As high-resolution imagery became more available, it became possible to extract more detailed and specific features, and numerous image segmentation algorithms have been developed, because traditional pixel-based analysis proved insufficient for high-resolution imagery due to its inability to handle the internal variability of complex scenes. However, an individual segmentation method that simply uses color layers is limited in its ability to extract various target features with different spectral and shape characteristics. Spectral indices can support effective feature extraction by helping to identify abundant surface materials. This study aims to evaluate a feature extraction method based on a segmentation technique with spectral indices. We tested the extraction of diverse target features, such as buildings, vegetation, water, and shadows, from an eight-band WorldView-2 satellite image using decision tree classification, and used the result to derive the appropriate spectral indices for each specific feature. From the results, we identified that spectral band ratios can be applied to distinguish feature classes simply and effectively.
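
As an illustration of using spectral band ratios to separate such classes, the sketch below computes two normalized ratios from hypothetical WorldView-2 band arrays and applies coarse, made-up thresholds in a fixed priority order; it is not the paper's decision tree or its derived indices.

```python
import numpy as np

def normalized_ratio(a, b):
    """Generic normalized band ratio (a - b) / (a + b)."""
    a, b = a.astype(np.float64), b.astype(np.float64)
    return (a - b) / np.where(a + b == 0, 1, a + b)

def classify_pixels(bands, ndvi_t=0.3, ndwi_t=0.3, shadow_t=0.1):
    """Very coarse rule-based labeling from spectral indices.

    bands : dict of co-registered band arrays, e.g. bands["red"],
    bands["green"], bands["blue"], bands["nir1"]; thresholds illustrative.
    """
    ndvi = normalized_ratio(bands["nir1"], bands["red"])    # vegetation index
    ndwi = normalized_ratio(bands["green"], bands["nir1"])  # water index
    brightness = sum(bands[k].astype(np.float64) for k in ("red", "green", "blue")) / 3

    # Later rules override earlier ones, encoding a simple priority order.
    labels = np.full(ndvi.shape, "building", dtype=object)
    labels[ndvi > ndvi_t] = "vegetation"
    labels[ndwi > ndwi_t] = "water"
    labels[brightness < shadow_t * brightness.max()] = "shadow"
    return labels
```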

Generation of Feature Map for Improving Localization of Mobile Robot based on Stereo Camera (스테레오 카메라 기반 모바일 로봇의 위치 추정 향상을 위한 특징맵 생성)

  • Kim, Eun-Kyeong;Kim, Sung-Shin
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.13 no.1
    • /
    • pp.58-63
    • /
    • 2020
  • This paper proposes a method for improving the localization accuracy of a mobile robot based on a stereo camera. To restore position information from the stereo images, the point on the right image corresponding to each pixel on the left image must be found. The general approach searches for the corresponding point by calculating the similarity of the pixel with the pixels on the epipolar line, but this has disadvantages: every pixel on the epipolar line must be evaluated, and the similarity is computed from pixel values alone, such as RGB color. To make up for this weak point, this paper implements a method that finds the corresponding point simply by calculating the x-coordinate gap when feature points, extracted and matched by a feature matching method, form a pair located at the same y-coordinate in the left and right images. In addition, the proposed method preserves as many feature points as possible by finding the corresponding points of unmatched features with the conventional algorithm, because the number of feature points affects the accuracy of the localization. The position of the mobile robot is compensated based on the 3-D coordinates of the features restored from the feature points and their correspondences. Experimental results show that, with the proposed method, the number of feature points available for position compensation increases and the position of the mobile robot can be compensated better than with feature extraction alone.
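
A sketch of the correspondence rule described above, under the assumption of a rectified stereo pair: matched feature points lying on (nearly) the same image row give the disparity directly as the x-coordinate gap, from which 3-D points follow. The tolerance, intrinsics, and function names are illustrative assumptions.

```python
import numpy as np

def correspondences_from_matches(kps_left, kps_right, matches, y_tol=1.0):
    """Keep feature matches whose keypoints lie on (almost) the same row.

    kps_left, kps_right : lists of (x, y) keypoint coordinates.
    matches : list of (i_left, i_right) index pairs from a feature matcher.
    For a rectified pair the disparity is simply the x-coordinate gap.
    """
    pairs = []
    for i, j in matches:
        (xl, yl), (xr, yr) = kps_left[i], kps_right[j]
        if abs(yl - yr) <= y_tol and xl > xr:      # same row, positive disparity
            pairs.append((xl, yl, xl - xr))
    return pairs

def triangulate(pairs, focal_px, baseline_m, cx, cy):
    """3-D points in the camera frame from (x, y, disparity) triples."""
    pts = []
    for x, y, d in pairs:
        z = focal_px * baseline_m / d              # depth from disparity
        pts.append(((x - cx) * z / focal_px, (y - cy) * z / focal_px, z))
    return np.array(pts)
```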