• Title/Summary/Keyword: boundary pixel


Performance Analysis of Matching Cost Functions of Stereo Matching Algorithm for Making 3D Contents (3D 콘텐츠 생성에서의 스테레오 매칭 알고리즘에 대한 매칭 비용 함수 성능 분석)

  • Hong, Gwang-Soo;Jeong, Yeon-Kyu;Kim, Byung-Gyu
    • Convergence Security Journal / v.13 no.3 / pp.9-15 / 2013
  • Calculating the matching cost is important for efficient stereo matching. To investigate the performance of the matching process, the concepts of the existing methods are introduced, and their performance and merits are analyzed. The simplest matching costs assume constant intensities at matching image locations. We consider matching cost functions that can be divided into pixel-based and window-based approaches. The pixel-based approaches include absolute differences (AD) and sampling-insensitive absolute differences (BT). The window-based approaches include the sum of absolute differences (SAD), the sum of squared differences (SSD), normalized cross-correlation (NCC), zero-mean normalized cross-correlation (ZNCC), the census transform, and absolute differences combined with the census transform (AD-Census). We evaluate the matching cost functions in terms of accuracy and time complexity. In terms of accuracy, the AD-Census method shows the lowest matching error ratio (the best solution). The ZNCC method shows the lowest matching error ratio in the non-occlusion and all-pixel evaluation parts, but a high matching error ratio in the discontinuities evaluation part because of the blurring effect at object boundaries. In terms of time complexity, the pixel-based AD method shows the lowest complexity.
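For illustration, a minimal Python/NumPy sketch of two of the cost families compared above: the pixel-based AD cost and the window-based census cost (whose Hamming distance is one half of AD-Census). The window size and the use of np.roll for the disparity shift are simplifying assumptions, not the paper's implementation.

```python
import numpy as np

def ad_cost(left, right, d):
    """Pixel-based absolute-difference (AD) cost for disparity d."""
    shifted = np.roll(right.astype(float), d, axis=1)  # wrap-around shift (simplification)
    return np.abs(left.astype(float) - shifted)

def census_transform(img, win=5):
    """Census transform: each pixel becomes a bit string recording which
    neighbours in a win x win window are darker than the centre pixel."""
    r = win // 2
    h, w = img.shape
    padded = np.pad(img, r, mode="edge")
    code = np.zeros((h, w), dtype=np.uint64)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if dy == 0 and dx == 0:
                continue
            neigh = padded[r + dy:r + dy + h, r + dx:r + dx + w]
            code = (code << np.uint64(1)) | (neigh < img).astype(np.uint64)
    return code

def census_cost(left, right, d, win=5):
    """Window-based census cost: Hamming distance between census codes."""
    cl = census_transform(left, win)
    cr = np.roll(census_transform(right, win), d, axis=1)
    diff = cl ^ cr
    return np.vectorize(lambda v: bin(int(v)).count("1"))(diff)
```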

Wood Shrinkage Measurement Using a Flatbed Scanner (평판형 스캐너를 이용한 목재 수축률 측정)

  • Park, Yonggun;Chang, Yoon-Seong;Yang, Sang-Yun;Yeo, Hwanmyeong;Lee, Mi-Rim;Eom, Chang-Deuk;Kwon, Ohkyung
    • Journal of the Korean Wood Science and Technology / v.43 no.1 / pp.43-51 / 2015
  • Wood shrinkage, an important subject with regard to the use of wood, has long been studied. However, when the size of a wood specimen is measured, distortion must be taken into account, which can be handled by applying external force to the specimen; when a large number of specimens must be measured, this technique becomes a lengthy process. If the size is measured and the shrinkage is calculated from images acquired with a flatbed scanner, it is possible to reduce the measurement error and to shorten the measurement time, because the images of many specimens can be acquired in one scan. To clearly establish the boundary between a wood specimen and the background in a scan, an image thresholding method was applied. The size of a wood specimen measured from the scanner image was found to be longer than the value determined with a vernier caliper. The maximum pixel size of a scan image that still gave highly accurate shrinkage values, compared with the vernier caliper, was 0.053 mm/pixel.
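A minimal sketch of the measurement idea: threshold the scan to separate specimen from background, then convert the pixel extent to millimetres with the scan resolution. The global mean threshold and the function name are illustrative assumptions; the 0.053 mm/pixel default is only the resolution bound quoted in the abstract.

```python
import numpy as np

def specimen_width_mm(gray, mm_per_pixel=0.053, threshold=None):
    """Estimate specimen width from a scanned grayscale image (hypothetical sketch)."""
    gray = np.asarray(gray, dtype=float)
    if threshold is None:
        threshold = gray.mean()        # crude global threshold; Otsu etc. also possible
    specimen = gray < threshold        # True where the (dark) specimen is
    cols = np.where(specimen.any(axis=0))[0]
    width_px = cols.max() - cols.min() + 1 if cols.size else 0
    return width_px * mm_per_pixel
```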

A New Method of Estimating Coronary Artery Diameter Using Direction Codes (방향코드를 이용한 관상동맥의 직경 측정 방법)

  • Jeon, Chun-Gi;Gang, Gwang-Nam;Lee, Tae-Won
    • Journal of Biomedical Engineering Research / v.16 no.3 / pp.289-300 / 1995
  • The conventional method requires the centerline of the vessel to estimate the vessel diameter. Two methods of estimating the centerline have been reported: one is a manual, observer-defined method, which potentially contributes to inter- and intra-observer variability; the other is to automatically detect the centerline, but this is a very complicated method. In this paper, we propose a new method of estimating the vessel diameter using direction codes and position information without detecting the centerline. Since this method detects the vessel boundary and the direction code at the same time, it simplifies the procedure and reduces the execution time of estimating the vessel diameter. Compared with a method that automatically estimates the vessel diameter using the centerline, our method provides improved accuracy in images with poor contrast, branching, or obstructed vessels. It also provides good compression of the boundary description, because each direction code element can be coded with only 3 bits, instead of the 4 bytes required to store the coordinates of each border pixel. Our experiments demonstrate the usefulness of the direction-code technique for quantitative analysis of coronary angiography, and the experimental results justify the validity of the proposed method.
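The 3-bit direction codes mentioned above are the standard 8-connected chain code. A minimal Python sketch of how a boundary can be stored this way (the direction numbering is one common convention, not necessarily the paper's):

```python
# 8-connected chain code: each boundary step is one of 8 directions,
# so a single step needs only 3 bits instead of the full (x, y) coordinates.
DIRECTIONS = {(1, 0): 0, (1, 1): 1, (0, 1): 2, (-1, 1): 3,
              (-1, 0): 4, (-1, -1): 5, (0, -1): 6, (1, -1): 7}

def chain_code(boundary_points):
    """Encode an ordered list of boundary pixels (x, y) as 3-bit direction codes."""
    codes = []
    for (x0, y0), (x1, y1) in zip(boundary_points, boundary_points[1:]):
        codes.append(DIRECTIONS[(x1 - x0, y1 - y0)])
    return codes

# Example: a short boundary fragment moving right, up-right, then up.
print(chain_code([(0, 0), (1, 0), (2, 1), (2, 2)]))   # -> [0, 1, 2]
```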


A Study on Object Tracking using Variable Search Block Algorithm (가변 탐색블록을 이용한 객체 추적에 관한 연구)

  • Min, Byoung-Muk;Oh, Hae-Seok
    • The KIPS Transactions:PartB / v.13B no.4 s.107 / pp.463-470 / 2006
  • It is difficult to track and extract the movement of an object through a camera exactly because of noise and changes in lighting. A fast search algorithm is necessary to extract the object and track its movement in real-time video. In this paper, we propose an accurate and fast algorithm that uses a variable search area and a background-image update method to be robust to changes in the background. When the threshold value is smaller than an experimentally determined reference value, the background image is updated; when it is larger, we decide that an object has entered the scene and extract its boundary points through a pixel check. From the extracted boundary points, an area block for the object and a search block that keeps a fixed distance from it are created to detect the precise movement of the object. The designed and implemented system shows more than 95% accuracy in the experiments.
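A minimal sketch of the background-update-or-detect decision described above, assuming a simple background-difference model; the running-average update, the function name, and the bounding-box output are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def process_frame(frame, background, diff_threshold, alpha=0.05):
    """One step of a background-difference tracker (hypothetical sketch)."""
    diff = np.abs(frame.astype(float) - background.astype(float))
    if diff.mean() < diff_threshold:
        # Scene is (nearly) unchanged: refresh the background model.
        background = (1 - alpha) * background + alpha * frame
        return background, None
    # Otherwise assume an object has entered and extract its boundary region.
    mask = diff > diff_threshold
    ys, xs = np.nonzero(mask)
    bbox = (xs.min(), ys.min(), xs.max(), ys.max()) if xs.size else None
    return background, bbox
```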

Displacement Mapping for the Precise Representation of Protrusion (정확한 돌출 형상의 표현을 위한 변위매핑)

  • Yoo, Byoung-Hyun;Han, Soon-Hung
    • Journal of KIISE:Computer Systems and Theory / v.33 no.10 / pp.777-788 / 2006
  • This paper describes a displacement mapping technique that represents shapes protruding from the surface of an object. Previous approaches to image-based displacement mapping can represent only shapes depressed below the polygon surface. The proposed technique can represent shapes protruding from the underlying surface in real time. Two auxiliary surfaces perpendicular to the underlying surface are added along the boundary of the polygon surface in order to represent the pixels that overflow the boundary of the polygon surface. The proposed approach can represent the accurate silhouette of the protruded shape. It can represent not only smooth displacement of the protruded shape, but also abrupt displacement such as perpendicular protrusion, by adding supplementary texture information to the steep surfaces of the protruded shape. Using per-pixel instructions on a programmable GPU, the approach can be executed in real time. It provides an effective solution for representing protruded shapes such as high-rise buildings on the ground.
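For context, the per-pixel work in image-based displacement mapping is essentially a ray march against a height map. The following CPU-side Python sketch shows that core step only; it is an illustration under assumed conventions (unit texture coordinates, fixed step size), not the paper's shader or its auxiliary-surface handling.

```python
import numpy as np

def march_height_field(height, origin, direction, step=0.01, max_t=2.0):
    """March a view ray through a displacement (height) map and return the hit point.

    Minimal sketch of the per-pixel search a displacement-mapping shader performs:
    sample along the ray and report the first point that falls below the stored height.
    """
    h, w = height.shape
    t = 0.0
    while t < max_t:
        p = origin + t * direction                    # current sample point (x, y, z)
        u = int(np.clip(p[0] * (w - 1), 0, w - 1))    # texture lookup (nearest neighbour)
        v = int(np.clip(p[1] * (h - 1), 0, h - 1))
        if p[2] <= height[v, u]:                      # ray has entered the displaced surface
            return p
        t += step
    return None                                       # ray missed the displaced geometry
```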

A study on Iris Recognition using Wavelet Transformation and Nonlinear Function

  • Hur, Jung-Youn;Truong, Le Xuan
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2004.10a / pp.553-559 / 2004
  • In today's security industry, personal identification is also based on biometrics. Biometric identification is performed by measuring and comparing physiological and behavioral characteristics; biometrics used for recognition include voice dynamics, signature dynamics, hand geometry, fingerprints, the iris, etc. The iris can serve as a kind of living passport or living password, and iris recognition is one of the most reliable biometric recognition systems. It is applied to client/server systems such as electronic commerce and electronic banking, as well as to stand-alone systems, networks, ATMs, etc. A new algorithm using a nonlinear function in the recognition process is proposed in this paper. An algorithm is proposed to localize the iris in the image received from the iris input camera at the client. In the first step, the algorithm determines the center of the pupil; in the second step, it determines the outer boundary of the iris and the pupillary boundary. The localized iris area is transformed into polar coordinates. After performing the wavelet transformation three times, normalization is done using a sigmoid function. The binarization step converts the normalized pixel values (0 to 255) into binary values by comparing pairs of adjacent pixels. The binary code of the iris is transmitted to the server over the network. In the server, the comparison process matches the binary value of the presented iris against the reference values in the database, and recognition or rejection depends on the value of the Hamming distance. After matching the binary value of the presented iris with the database stored in the server, the result is transmitted back to the client.
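A minimal sketch of the last three steps described above: sigmoid normalization, adjacent-pixel binarization, and Hamming-distance matching. The exact sigmoid parameters and decision threshold are assumptions; only the structure follows the abstract.

```python
import numpy as np

def sigmoid_normalize(values):
    """Squash (wavelet) coefficients into (0, 1) with a sigmoid (illustrative scaling)."""
    v = np.asarray(values, dtype=float)
    return 1.0 / (1.0 + np.exp(-(v - v.mean()) / (v.std() + 1e-9)))

def binarize_adjacent(normalized):
    """Binary iris code: compare each pair of adjacent values (1 if increasing)."""
    return (normalized[1:] > normalized[:-1]).astype(np.uint8)

def hamming_distance(code_a, code_b):
    """Fraction of disagreeing bits; small distance -> accept, large -> reject."""
    return np.count_nonzero(code_a != code_b) / code_a.size
```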


Hybrid Super-Resolution Algorithm Robust to Cut-Change (컷 전환에 적응적인 혼합형 초고해상도 기법)

  • Kwon, Soon-Chan;Lim, Jong-Myeong;Yoo, Jisang
    • Journal of the Korea Institute of Information and Communication Engineering / v.17 no.7 / pp.1672-1686 / 2013
  • In this paper, we propose a hybrid super-resolution algorithm that is robust to cut changes. Existing single-frame super-resolution algorithms are usually fast, but the amount of information available for interpolation is limited. Existing multi-frame super-resolution algorithms are generally robust to this problem, but their performance strongly depends on the motion in the input video, and their application is limited at cut boundaries. In the proposed method, we detect cut boundaries using a cut-detection algorithm and then adaptively apply a single-frame super-resolution method around the detected cuts. Additionally, we propose algorithms for normalizing motion vectors and analyzing edge patterns to solve various problems of existing super-resolution algorithms. The experimental results show that the proposed algorithm performs better than other conventional interpolation methods.
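A minimal sketch of the cut-detection decision that drives the hybrid switch, using a simple histogram difference between consecutive frames; the bin count and threshold are illustrative assumptions, not the paper's detector.

```python
import numpy as np

def is_cut(prev_frame, frame, bins=64, threshold=0.5):
    """Return True if a cut (shot change) likely lies between the two frames.

    If a cut is detected, a multi-frame method would fail across the boundary,
    so a single-frame super-resolution method is applied around it instead.
    """
    h1, _ = np.histogram(prev_frame, bins=bins, range=(0, 256))
    h2, _ = np.histogram(frame, bins=bins, range=(0, 256))
    h1 = h1 / max(h1.sum(), 1)
    h2 = h2 / max(h2.sum(), 1)
    return 0.5 * np.abs(h1 - h2).sum() > threshold
```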

A Method for Structuring Digital Video

  • Lee, Jae-Yeon;Jeong, Se-Yoon;Yoon, Ho-Sub;Kim, Kyu-Heon;Bae, Younglae-J;Jang, Jong-whan
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 1998.06b / pp.92-97 / 1998
  • For efficient searching and browsing of digital video, it is essential to extract the internal structure of the video contents. For example, a news video consists of several sections such as politics, economics, and sports, and each section consists of individual topics. With this information in hand, users can more easily access the required video frames. This paper addresses the problems of automatic shot boundary detection and selection of representative frames (R-frames), which are essential steps in recognizing the internal structure of video contents. For shot boundary detection, a new algorithm is proposed that has dual detectors designed specifically for abrupt boundaries (cuts) and gradually changing boundaries, respectively. Compared with existing algorithms, which have mostly tried to detect both types with a single mechanism, the proposed algorithm proves to be more robust and accurate. For R-frame selection, simple mechanical approaches such as selecting one frame every other second have been adopted; however, such approaches often select too many R-frames in static shots while dropping important frames in dynamic shots. To improve the selection mechanism, a new R-frame selection algorithm that uses motion information extracted from pixel differences is proposed.
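A minimal sketch of motion-adaptive R-frame selection as described above: accumulate the per-frame pixel difference within a shot and emit an R-frame whenever the accumulated motion exceeds a budget, so static shots yield few R-frames and dynamic shots more. The budget value and function name are illustrative assumptions.

```python
import numpy as np

def select_r_frames(frames, motion_budget=20.0):
    """Select representative frame indices from one shot (hypothetical sketch)."""
    r_frames = [0]                     # always keep the first frame of the shot
    accumulated = 0.0
    for i in range(1, len(frames)):
        diff = np.abs(frames[i].astype(float) - frames[i - 1].astype(float)).mean()
        accumulated += diff            # accumulated motion since the last R-frame
        if accumulated >= motion_budget:
            r_frames.append(i)
            accumulated = 0.0
    return r_frames
```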


A Novel Image Encryption using Complemented MLCA based on NBCA and 2D CAT (NBCA 에 기초한 여원 MLCA와 2D CAT를 이용한 새로운 영상 암호화)

  • Kim, Ha-Kyung;Nam, Tae-Hee;Cho, Sung-Jin;Kim, Seok-Tae
    • The Journal of Korean Institute of Communications and Information Sciences / v.36 no.6C / pp.361-367 / 2011
  • In this paper, we propose an encryption method using complemented MLCA (Maximum Length Cellular Automata) based on NBCA (Null Boundary CA) and the 2D CAT (Two-Dimensional Cellular Automata Transform) for efficient image encryption. The encryption proceeds in the following order. First, a transition matrix T is created using the Wolfram rule matrix. Then, the transition matrix T is multiplied by the original image to be encrypted, which transforms the pixel values of the original image. The converted image then goes through an XOR operation with the complement vector F, which turns it into a complemented-MLCA-applied image. Next, the gateway values are set and the 2D CAT basis function is created. The image is then encrypted with the 2D CAT by multiplying the created basis function by the complemented-MLCA-applied image. Lastly, a stability analysis verifies that the proposed method achieves high encryption quality.
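For orientation, the core complemented-CA step over GF(2) can be sketched as follows: build a null-boundary rule 90/150 transition matrix T and apply next = (T·state mod 2) XOR F. How T and F are applied to image pixel values and how the 2D CAT basis is built follow the paper; this sketch only illustrates the transition itself.

```python
import numpy as np

def null_boundary_transition(rules):
    """Tridiagonal transition matrix T of a null-boundary rule 90/150 hybrid CA.

    rules[i] = 0 selects rule 90 (diagonal entry 0), rules[i] = 1 selects rule 150
    (diagonal entry 1); cells outside the boundary are treated as 0 (null boundary).
    """
    n = len(rules)
    T = np.zeros((n, n), dtype=np.uint8)
    for i, r in enumerate(rules):
        T[i, i] = r
        if i > 0:
            T[i, i - 1] = 1
        if i < n - 1:
            T[i, i + 1] = 1
    return T

def complemented_step(T, state, F):
    """One complemented-CA step over GF(2): next = (T . state mod 2) XOR F."""
    return (T.dot(state) % 2) ^ F
```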

A High Image Compression for Computer Storage and Communication

  • Jang, Jong-Whan
    • The Journal of Natural Sciences / v.4 / pp.191-220 / 1991
  • A new texture-segmentation-based image coding technique is presented that performs segmentation based on the roughness of textural regions and the properties of the human visual system (HVS). This method solves the problems of segmentation-based image coding with constant segments by proposing a methodology for segmenting an image into texturally homogeneous regions with respect to the degree of roughness as perceived by the HVS. The fractal dimension is used to measure the roughness of the textural regions. The segmentation is accomplished by thresholding the fractal dimension so that textural regions are classified into three texture classes: perceived constant intensity, smooth texture, and rough texture. An image coding system with high compression and good image quality is achieved by developing an efficient coding technique for each segment boundary and each texture class. For the boundaries, a binary image representing all boundaries is created. For regions of perceived constant intensity, only the mean intensity values need to be transmitted. The smooth and rough texture regions are first modeled using polynomial functions, so only the coefficients characterizing the polynomials need to be transmitted. The boundaries, the means, and the polynomial coefficients are then each encoded with a lossless coding scheme. Good-quality reconstructed images are obtained at about 0.08 to 0.3 bits per pixel for three different types of imagery: a head-and-shoulders image with little texture variation, a complex image with many edges, and a natural outdoor image with highly textured areas.
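One common way to estimate the fractal dimension of grayscale texture, as used above for roughness-based segmentation, is differential box counting. A minimal sketch follows; the box sizes, the assumed 256 gray levels, and the function name are illustrative, not the paper's exact estimator. Thresholding the returned value would split regions into the constant, smooth, and rough texture classes.

```python
import numpy as np

def fractal_dimension_dbc(gray, box_sizes=(2, 4, 8, 16)):
    """Differential box-counting estimate of a texture's fractal dimension."""
    gray = np.asarray(gray, dtype=float)
    M = min(gray.shape)
    G = 256.0                                  # assumed number of gray levels
    counts, scales = [], []
    for s in box_sizes:
        h = s * G / M                          # box height in gray levels
        n = 0
        for i in range(0, M - s + 1, s):
            for j in range(0, M - s + 1, s):
                block = gray[i:i + s, j:j + s]
                # number of gray-level boxes needed to cover this block
                n += int(np.floor(block.max() / h) - np.floor(block.min() / h)) + 1
        counts.append(n)
        scales.append(1.0 / s)
    # fractal dimension = slope of log N(s) versus log(1/s)
    slope, _ = np.polyfit(np.log(scales), np.log(counts), 1)
    return slope
```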
