• Title/Summary/Keyword: Pixel Analysis


Application of Computer-Aided Diagnosis Using a Texture Feature Analysis Algorithm in Breast US Images (유방 초음파영상에서 질감특성분석 알고리즘을 이용한 컴퓨터보조진단의 적용)

  • Lee, Jin-Soo;Kim, Changsoo
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.16 no.1
    • /
    • pp.507-515
    • /
    • 2015
  • This paper proposes a texture feature analysis (TFA) algorithm with six parameters (Mean, VA, RS, SKEW, UN, EN) to examine disease recognition rates for breast lesions using computer-aided diagnosis (CAD) on ultrasound images. From patients who visited a university hospital in Busan between August 2013 and January 2014, 90 breast ultrasound images were selected based on breast US findings and pathology. A 50×50-pixel ROI was selected from each breast US image. After histogram-equalization pre-processing of the acquired test images (negative, benign, malignant), the TFA parameters were computed using MATLAB. Among the proposed TFA parameters, the recognition rate for distinguishing negative from malignant cases was as high as 100%, and for negative versus benign cases it was approximately 83-96% for Mean, SKEW, UN, and EN. Therefore, the method shows potential for automatic diagnosis as a pre-processing step in breast disease screening. Additional study of the proposed algorithm, and of its reliability and reproducibility across various clinical cases, will determine its practicality for CAD, and the technique might be applicable to a wider range of ultrasound images.
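
The six TFA parameters are first-order statistics that can be computed directly from the gray-level histogram of the 50×50 ROI. The paper's implementation is in MATLAB; the Python sketch below illustrates the general idea, under the assumption that VA, RS, UN, and EN denote variance, relative smoothness, uniformity, and entropy (the paper's exact definitions may differ).

```python
import numpy as np

def texture_features(roi):
    """First-order texture statistics of a grayscale ROI with values in 0-255."""
    hist, _ = np.histogram(roi.astype(np.float64).ravel(), bins=256, range=(0, 256))
    p = hist / hist.sum()                      # normalized gray-level histogram
    levels = np.arange(256, dtype=np.float64)
    mean = (levels * p).sum()                                       # Mean
    var = ((levels - mean) ** 2 * p).sum()                          # VA (variance)
    rs = 1.0 - 1.0 / (1.0 + var / 255.0 ** 2)                       # RS (relative smoothness, assumed)
    skew = ((levels - mean) ** 3 * p).sum() / (var ** 1.5 + 1e-12)  # SKEW
    un = (p ** 2).sum()                                             # UN (uniformity)
    en = -(p[p > 0] * np.log2(p[p > 0])).sum()                      # EN (entropy)
    return {"mean": mean, "variance": var, "smoothness": rs,
            "skewness": skew, "uniformity": un, "entropy": en}
```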

A Study on the Feature Extraction Using Spectral Indices from WorldView-2 Satellite Image (WorldView-2 위성영상의 분광지수를 이용한 개체 추출 연구)

  • Hyejin, Kim;Yongil, Kim;Byungkil, Lee
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.33 no.5
    • /
    • pp.363-371
    • /
    • 2015
  • Feature extraction is one of the main goals of many remote sensing analyses. As high-resolution imagery became more available, it became possible to extract more detailed and specific features, and numerous image segmentation algorithms have been developed because traditional pixel-based analysis proved insufficient for high-resolution imagery, owing to its inability to handle the internal variability of complex scenes. However, segmentation that simply uses color layers is limited in its ability to extract target features with different spectral and shape characteristics. Spectral indices can support effective feature extraction by helping to identify abundant surface materials. This study evaluates a feature extraction method that combines a segmentation technique with spectral indices. We tested the extraction of diverse target features, such as buildings, vegetation, water, and shadows, from an eight-band WorldView-2 satellite image using decision tree classification, and used the results to derive appropriate spectral indices for each specific feature. The results show that spectral band ratios can be applied to distinguish feature classes simply and effectively.
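
Spectral-index-based feature extraction of this kind typically thresholds normalized band ratios. The sketch below is a minimal illustration, not the paper's decision tree: the band names follow the usual WorldView-2 layout, and the index choices and thresholds are illustrative assumptions only.

```python
import numpy as np

def nd_index(a, b):
    """Normalized difference index (a - b) / (a + b)."""
    a, b = a.astype(np.float64), b.astype(np.float64)
    return (a - b) / (a + b + 1e-12)

def classify(bands):
    """bands: dict of 2-D arrays keyed by WorldView-2 band name
    (coastal, blue, green, yellow, red, rededge, nir1, nir2)."""
    ndvi = nd_index(bands["nir1"], bands["red"])       # vegetation indicator
    ndwi = nd_index(bands["coastal"], bands["nir2"])   # water indicator
    brightness = np.mean([bands[k].astype(np.float64) for k in bands], axis=0)
    label = np.full(ndvi.shape, "other", dtype=object)
    label[ndwi > 0.3] = "water"                        # thresholds are illustrative
    label[ndvi > 0.4] = "vegetation"
    label[(brightness < np.percentile(brightness, 10)) & (label == "other")] = "shadow"
    return label
```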

Real-Time Human Tracker Based Location and Motion Recognition for the Ubiquitous Smart Home (유비쿼터스 스마트 홈을 위한 위치와 모션인식 기반의 실시간 휴먼 트랙커)

  • Park, Se-Young;Shin, Dong-Kyoo;Shin, Dong-Il;Cuong, Nguyen Quoe
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2008.06d
    • /
    • pp.444-448
    • /
    • 2008
  • The ubiquitous smart home is the home of the future: it takes advantage of context information from the human and the home environment and provides automatic home services for the human. Human location and motion are the most important contexts in the ubiquitous smart home. We present a real-time human tracker that predicts human location and motion for the ubiquitous smart home, using four network cameras for real-time tracking. This paper explains the real-time human tracker's architecture and presents an algorithm, with details of its two functions (prediction of human location and of human motion). Human location estimation uses three kinds of background images (IMAGE1: empty room; IMAGE2: room with furniture and home appliances; IMAGE3: IMAGE2 plus the human). The real-time human tracker decides which piece of furniture (or home appliance) the human is at by analyzing the three images, and predicts human motion using a support vector machine (SVM). A performance experiment on human location using the three images took an average of 0.037 seconds per decision. The SVM feature for motion recognition is the number of foreground pixels in each array line of the moving object. We evaluated each motion 1,000 times; the average accuracy over all motions was 86.5%.
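
The location function rests on differencing the furnished-background image (IMAGE2) against the image that also contains the human (IMAGE3) and then checking which furniture region the foreground overlaps. A minimal sketch of that idea, assuming grayscale frames and hand-specified furniture bounding boxes (both assumptions, not details from the paper):

```python
import numpy as np

def human_mask(image2, image3, thresh=30):
    """Foreground mask: pixels where the frame with the human (IMAGE3) differs
    from the furnished background (IMAGE2); grayscale uint8 frames assumed."""
    diff = np.abs(image3.astype(np.int16) - image2.astype(np.int16))
    return diff > thresh

def locate_human(image2, image3, furniture_regions):
    """furniture_regions: {name: (row0, row1, col0, col1)} bounding boxes
    (hypothetical here; the paper derives furniture layout from IMAGE1 vs. IMAGE2)."""
    mask = human_mask(image2, image3)
    overlap = {name: mask[r0:r1, c0:c1].mean()
               for name, (r0, r1, c0, c1) in furniture_regions.items()}
    # Report the furniture/appliance whose region the foreground overlaps most.
    return max(overlap, key=overlap.get) if overlap else None
```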


Real-Time Human Tracker Based on Location and Motion Recognition of User for Smart Home (스마트 홈을 위한 사용자 위치와 모션 인식 기반의 실시간 휴먼 트랙커)

  • Choi, Jong-Hwa;Park, Se-Young;Shin, Dong-Kyoo;Shin, Dong-Il
    • The KIPS Transactions:PartA
    • /
    • v.16A no.3
    • /
    • pp.209-216
    • /
    • 2009
  • The ubiquitous smart home is the home of the future: it takes advantage of context information from the human and the home environment and provides automatic home services for the human. Human location and motion are the most important contexts in the ubiquitous smart home. We present a real-time human tracker that predicts human location and motion for the ubiquitous smart home, using four network cameras for real-time tracking. This paper explains the real-time human tracker's architecture and presents an algorithm, with details of its two functions (prediction of human location and of human motion). Human location estimation uses three kinds of background images (IMAGE1: empty room; IMAGE2: room with furniture and home appliances; IMAGE3: IMAGE2 plus the human). The real-time human tracker decides which piece of furniture (or home appliance) the human is at by analyzing the three images, and predicts human motion using a support vector machine (SVM). A performance experiment on human location using the three images took an average of 0.037 seconds per decision. The SVM feature for motion recognition is the number of foreground pixels in each array line of the moving object. We evaluated each motion 1,000 times; the average accuracy over all motions was 86.5%.
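
The motion-recognition feature described here is the foreground pixel count per array line of the moving object, fed to an SVM. A sketch of how such a feature vector and classifier could be set up, using scikit-learn's SVC as a stand-in for the paper's SVM (the training masks and motion labels are hypothetical):

```python
import numpy as np
from sklearn import svm

def line_profile(mask, n_lines=32):
    """Feature vector: foreground-pixel count in each horizontal band ("array line")
    of the moving-object mask, scale-normalized. n_lines is an assumed value."""
    bands = np.array_split(np.asarray(mask, dtype=bool), n_lines, axis=0)
    counts = np.array([band.sum() for band in bands], dtype=np.float64)
    return counts / (counts.max() + 1e-12)

# Hypothetical usage with labeled training masks (stand, sit, lie, walk, ...):
#   X = np.stack([line_profile(m) for m in training_masks])
#   clf = svm.SVC(kernel="rbf").fit(X, labels)
#   motion = clf.predict(line_profile(new_mask)[None, :])
```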

Fixed-Point Modeling and Performance Analysis of a SIFT Keypoints Localization Algorithm for SoC Hardware Design (SoC 하드웨어 설계를 위한 SIFT 특징점 위치 결정 알고리즘의 고정 소수점 모델링 및 성능 분석)

  • Park, Chan-Ill;Lee, Su-Hyun;Jeong, Yong-Jin
    • Journal of the Institute of Electronics Engineers of Korea SD
    • /
    • v.45 no.6
    • /
    • pp.49-59
    • /
    • 2008
  • SIFT (Scale Invariant Feature Transform) is an algorithm that extracts feature vectors at pixels around keypoints, i.e., pixels whose values differ strongly from their neighbors, such as corners and edges of an object. The SIFT algorithm is being actively researched for various image processing applications, including 3-D image construction, and its most computation-intensive stage is keypoint localization. In this paper, we develop a fixed-point model of the keypoint localization stage and propose an efficient hardware architecture for embedded applications. The bit-lengths of key variables are determined based on two performance measures: localization accuracy and error rate. Compared with the original algorithm (implemented in MATLAB), the accuracy and error rate of the proposed fixed-point model are 93.57% and 2.72%, respectively. In addition, we found that most of the missing keypoints appear at object edges, which are not very important for keypoint matching. We estimate that the hardware implementation will give a processing speed of 10-15 frames/sec, whereas the fixed-point implementation on a Pentium Core2Duo (2.13 GHz) and an ARM9 (400 MHz) takes 10 seconds and one hour per frame, respectively.
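
Fixed-point modeling of this kind amounts to choosing a bit-length and fractional precision for each key variable and measuring how the quantization affects accuracy and error rate. A generic sketch of that procedure; the Q-format helpers below are illustrative and do not reflect the paper's actual variable widths.

```python
import numpy as np

def to_fixed(x, frac_bits, total_bits=32):
    """Quantize to two's-complement fixed point with frac_bits fractional bits."""
    scale = 1 << frac_bits
    q = np.round(np.asarray(x, dtype=np.float64) * scale).astype(np.int64)
    lo, hi = -(1 << (total_bits - 1)), (1 << (total_bits - 1)) - 1
    return np.clip(q, lo, hi)

def from_fixed(q, frac_bits):
    return np.asarray(q, dtype=np.float64) / (1 << frac_bits)

# Measure worst-case quantization error for a candidate fractional bit-length;
# in a flow like the paper's, this error feeds the accuracy / error-rate comparison.
x = np.random.randn(1000)
print(np.abs(x - from_fixed(to_fixed(x, frac_bits=10), 10)).max())
```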

Automatic Titration Using PC Camera in Acidity Analyses of Vinegar, Milk and Takju (PC 카메라를 이용한 식초, 우유 및 탁주의 산도 적정 자동화)

  • Lee, Hyeong-Choon
    • Journal of the Korean Society of Food Science and Nutrition
    • /
    • v.36 no.12
    • /
    • pp.1583-1588
    • /
    • 2007
  • PC-camera-based automatic titration was carried out for the acidity analyses of vinegar, milk, and Takju. The average hue value (Havg) of 144 pixels in the image of the sample solution being titrated was computed and tracked at regular time intervals during titration in order to detect the titration end point. An increase in Havg of 5 degrees from the initial Havg was taken as the end point for vinegar and milk; for Takju, the increase used to detect the end point was 70 degrees. For vinegar, the volume of added titrant (0.1 N NaOH) was 21.409±0.066 mL in manual titration and 21.403±0.055 mL in automatic titration (p=0.841). For milk, it was 1.390±0.025 mL in manual titration and 1.388±0.027 mL in automatic titration (p=0.907). For Takju, it was 4.738±0.028 mL in manual titration and 4.752±0.037 mL in automatic titration (p=0.518). The high p values indicate good agreement between the manual and automatic titration data for all three food samples. The proposed automatic method is considered applicable not only to acidity titrations but also to most titrations in which the end point can be detected by a color change.
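
The end-point detection reduces to tracking the average hue of a small pixel patch and flagging when it rises by a preset number of degrees above its initial value (5 degrees for vinegar and milk, 70 for Takju). A minimal sketch of that logic; the patch selection and any hue wrap-around handling used in the paper are not reproduced here.

```python
import numpy as np
import colorsys

def average_hue(rgb_patch):
    """Average hue (degrees) of an N x 3 array of RGB pixels with values in 0-255.
    Plain averaging ignores the circularity of hue, which is acceptable only
    while the observed hues stay well away from the 0/360 boundary."""
    hues = [colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)[0] * 360.0
            for r, g, b in np.asarray(rgb_patch, dtype=np.float64)]
    return float(np.mean(hues))

def end_point_reached(first_havg, current_havg, delta=5.0):
    """End point when Havg has risen `delta` degrees above the starting value
    (5 degrees for vinegar and milk, 70 degrees for Takju in the paper)."""
    return (current_havg - first_havg) >= delta
```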

Eliminating Color Mixing of Projector-Camera System for Fast Radiometric Compensation (컬러 보정의 고속화를 위한 프로젝터-카메라 시스템의 컬러 혼합 성분 제거)

  • Lee, Moon-Hyun;Park, Han-Hoon;Park, Jong-Il
    • Journal of Broadcast Engineering
    • /
    • v.13 no.6
    • /
    • pp.941-950
    • /
    • 2008
  • The quality of a projector's output image is influenced by surrounding conditions such as the shape and color of the screen and the environmental light. Therefore, techniques that ensure desirable image quality regardless of such conditions are in demand and are being steadily developed; radiometric compensation is a representative one. In general, radiometric compensation is achieved by measuring the color of the screen and the environmental light from a camera image of the projector output, and then adjusting the color of the projector input image pixel by pixel. This process is not time-consuming for small images, but its speed drops linearly with image size, so for large images reducing the processing time becomes a critical problem. This paper therefore proposes a fast radiometric compensation method. Because the speed of radiometric compensation depends mainly on measuring the color mixing between the projector and the camera, the method uses color filters to eliminate this color mixing, removing the need to measure it. In experiments, the proposed method improved the compensation speed by 44 percent while maintaining the projector output image quality. It is expected to be a key technique for the widespread use of projectors for large-scale, high-quality display.
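
Radiometric compensation is commonly written as a linear projector-camera model C = V·P + F, where V is a per-pixel 3×3 color-mixing matrix; the point of the color filters is that V becomes effectively diagonal, so each channel can be compensated independently with a single gain and offset. A sketch of that per-channel step, assuming the gain and offset maps have already been measured (the measurement procedure itself is not shown and is not taken from the paper):

```python
import numpy as np

def compensate(desired, gain, offset):
    """Per-channel radiometric compensation: once projector-camera color mixing
    is removed by the filters, each channel obeys camera = gain * projector + offset,
    so the projector input is (desired - offset) / gain, clipped to the valid range."""
    p = (desired.astype(np.float64) - offset) / (gain + 1e-12)
    return np.clip(p, 0, 255).astype(np.uint8)

# 'gain' and 'offset' would be per-pixel, per-channel arrays estimated beforehand,
# e.g. from camera responses to a black and a full-intensity projected image.
```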

Improved Binarization and Removal of Noises for Effective Extraction of Characters in Color Images (컬러 영상에서 효율적 문자 추출을 위한 개선된 2치화 및 잡음 저거)

  • 이은주;정장호
    • Journal of Information Technology Application
    • /
    • v.3 no.2
    • /
    • pp.133-147
    • /
    • 2001
  • This paper proposes a new algorithm for binarization and noise removal in color images containing characters and pictures. Binarization is performed with a threshold computed from the color relationship between the numbers of pixels in the background and in the character candidates, together with a pre-threshold that separates background from character candidates in the input image. The pre-threshold is computed from the R, G, and B histograms of the image, and the background and character-candidate pixels are divided by this pre-threshold. Because the final threshold can be decided dynamically according to the amount of noise, the character regions are preserved while the noise is removed as much as possible. In addition, we built a noise pattern table from an analysis of the noise patterns found in various color images, with the aim of removing noise from the images. The noise distribution in an image is obtained from the noise pattern table by pattern matching, and this distribution classifies the noise in the image into three difficulty categories. Because noise removal is then processed through a different procedure according to the classified difficulty, processing time is reduced and the efficiency of noise removal is improved. Recognition experiments on characters extracted from color images with the proposed algorithm confirmed that the recognition rate reaches the same level as that obtained for ordinary documents without color or pictures.
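
The paper derives its pre-threshold from the R, G, B histograms and then refines it using the pixel-count relationship between background and character candidates. As a stand-in for that derivation, the sketch below uses Otsu's histogram-based threshold to illustrate the binarization step (Otsu is an assumption here, not the paper's formula):

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's histogram-based threshold for a 0-255 grayscale image."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 == 0.0 or w1 == 0.0:
            continue
        m0 = (np.arange(t) * p[:t]).sum() / w0
        m1 = (np.arange(t, 256) * p[t:]).sum() / w1
        between_var = w0 * w1 * (m0 - m1) ** 2   # between-class variance
        if between_var > best_var:
            best_t, best_var = t, between_var
    return best_t

def binarize(gray):
    """1 = character candidate (assumes dark characters on a lighter background)."""
    return (gray < otsu_threshold(gray)).astype(np.uint8)
```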


Object-Based Integral Imaging Depth Extraction Using Segmentation (영상 분할을 이용한 객체 기반 집적영상 깊이 추출)

  • Kang, Jin-Mo;Jung, Jae-Hyun;Lee, Byoung-Ho;Park, Jae-Hyeung
    • Korean Journal of Optics and Photonics
    • /
    • v.20 no.2
    • /
    • pp.94-101
    • /
    • 2009
  • A novel method for the reconstruction of 3D shape and texture from elemental images has previously been proposed. Using this method, we can estimate a full 3D polygonal model of objects with seamless triangulation, but in the triangulation process all the objects are stitched together. This generates phantom surfaces that bridge depth discontinuities between different objects. To solve this problem, we need to connect points only within a single object, and we adopt a segmentation process to this end. The entire process of the proposed method is as follows. First, the central pixel of each elemental image is analyzed to extract the spatial positions of objects by correspondence analysis. Second, the object points of the central pixels from neighboring elemental images are projected onto a specific elemental image. Then the center sub-image is segmented and each object is labeled. We used the normalized cut algorithm to segment the center sub-image, and applied the watershed algorithm before the normalized cut to speed up segmentation. Using the segmentation results, the subdivision process is applied only to pixels within the same object. The refined grid is filtered with median and Gaussian filters to improve reconstruction quality. Finally, the vertices are connected and an object-based triangular mesh is formed. We conducted experiments using real objects and verified the proposed method.
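
The correspondence analysis on central pixels yields a disparity between neighboring elemental images, from which depth follows by simple triangulation over the lens pitch and the lens-to-sensor gap. A minimal sketch of that relation under an idealized pinhole lens-array model; the paper's actual estimation and segmentation pipeline are more involved.

```python
def disparity_to_depth(disparity_px, pitch_mm, gap_mm, pixel_mm):
    """Depth (mm) of an object point from its pixel disparity between two
    neighboring elemental images, using z = g * p / d for a lens array with
    pitch p (mm) and lens-to-sensor gap g (mm); pixel_mm is the pixel size."""
    d_mm = disparity_px * pixel_mm
    return gap_mm * pitch_mm / (d_mm + 1e-12)

# Example (all numbers illustrative): a 1 mm lens pitch, 3 mm gap, 0.01 mm pixels
# and a 6-pixel disparity put the point roughly 50 mm in front of the lens array.
print(disparity_to_depth(6, pitch_mm=1.0, gap_mm=3.0, pixel_mm=0.01))
```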

Real-time Moving Object Recognition and Tracking Using The Wavelet-based Neural Network and Invariant Moments (웨이블릿 기반의 신경망과 불변 모멘트를 이용한 실시간 이동물체 인식 및 추적 방법)

  • Kim, Jong-Bae
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.45 no.4
    • /
    • pp.10-21
    • /
    • 2008
  • This paper proposes a real-time moving object recognition and tracking method using a wavelet-based neural network and invariant moments. The candidate moving-region detection phase, the first step of the proposed method, detects candidate regions where pixel-value changes occur due to object movement, based on a difference-image analysis between two consecutive frames. The object recognition phase, the second step, recognizes vehicle regions among the detected candidate regions using a wavelet neural network. The object tracking phase, the third step, tracks the recognized vehicle regions by matching wavelet-based invariant moments of the recognized objects. To detect moving objects in the image sequence, the candidate-region detection phase uses adaptive thresholding between the previous and current images; as a result, it is robust to changes in the surrounding environment and enables reliable moving-object detection. By using wavelet features for vehicle recognition and tracking, the proposed method reduces computation time, minimizes the effect of noise in road images, and improves vehicle recognition accuracy. In experiments on image sequences acquired from ordinary road scenes, the vehicle detection rate was 92.8% and the computation time per frame was 0.24 seconds. The proposed method can be efficiently applied to real-time intelligent road traffic surveillance systems.
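
Invariant moments are translation-, scale-, and rotation-invariant shape descriptors used here for matching the recognized vehicle regions across frames. The sketch below computes the first three classic Hu moments of a binary object mask; the paper applies moment invariants on wavelet-transformed images, which is not reproduced here.

```python
import numpy as np

def hu_moments(mask):
    """First three Hu invariant moments of a binary object mask."""
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return np.zeros(3)
    x0, y0 = xs.mean(), ys.mean()
    def mu(p, q):                      # central moment of order (p, q)
        return (((xs - x0) ** p) * ((ys - y0) ** q)).sum()
    m00 = mu(0, 0)
    def eta(p, q):                     # scale-normalized central moment
        return mu(p, q) / (m00 ** (1 + (p + q) / 2.0))
    h1 = eta(2, 0) + eta(0, 2)
    h2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    h3 = (eta(3, 0) - 3 * eta(1, 2)) ** 2 + (3 * eta(2, 1) - eta(0, 3)) ** 2
    return np.array([h1, h2, h3])

# Tracking would then match candidate regions across frames by comparing these vectors.
```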