• Title/Summary/Keyword: Central Object


Extraction of a Central Object in a Color Image Based on Significant Colors (특이 칼라에 기반한 칼라 영상에서의 중심 객체 추출)

  • SungYoung Kim;Eunkyung Lim;MinHwan Kim
    • Journal of Korea Multimedia Society
    • /
    • v.7 no.5
    • /
    • pp.648-657
    • /
    • 2004
  • A method of extracting central objects in color images without any prior knowledge is proposed in this paper, which basically uses information about significant color distribution. A central object in an image is defined as a set of regions that lie around the center of the image and have a significant color distribution against the other surrounding (or background) regions. Significant colors in an image are first defined as the colors that are distributed more densely around the center of the image than near its borders. Then core object regions (CORs) are selected as the regions in which many pixels have the significant colors. Finally, regions adjacent to the CORs are iteratively merged if they are similar to the CORs, but not to the background regions, in color distribution. The merging result is accepted as the central object, which may include differently color-characterized regions and/or two or more objects of interest. The usefulness of the significant colors in extracting the central object was verified through experiments on several kinds of test images. We expect that central objects will be useful in image retrieval applications.

  • PDF
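The significant-color test described in the abstract above (colors distributed more densely around the image center than near the borders) can be sketched in a few lines. This is a minimal re-implementation sketch, not the paper's code: the 50% central window (`center_frac`) and the quantized color-label grid representation are assumptions.

```python
def significant_colors(labels, center_frac=0.5):
    """Return the set of colors whose density inside a central window
    exceeds their density in the border area.  `labels` is a 2-D grid
    of quantized color labels; `center_frac` (an assumed parameter)
    sets the side length of the central window."""
    h, w = len(labels), len(labels[0])
    ch, cw = int(h * center_frac), int(w * center_frac)
    top, left = (h - ch) // 2, (w - cw) // 2
    center_hist, border_hist = {}, {}
    center_n = border_n = 0
    for y in range(h):
        for x in range(w):
            c = labels[y][x]
            if top <= y < top + ch and left <= x < left + cw:
                center_hist[c] = center_hist.get(c, 0) + 1
                center_n += 1
            else:
                border_hist[c] = border_hist.get(c, 0) + 1
                border_n += 1
    # a color is significant when its central density beats its border density
    return {c for c, n in center_hist.items()
            if n / max(center_n, 1) > border_hist.get(c, 0) / max(border_n, 1)}
```

On a toy 4x4 grid with color 1 clustered in the middle and color 0 elsewhere, only color 1 comes out as significant.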

Classification of Man-Made and Natural Object Images in Color Images

  • Park, Chang-Min;Gu, Kyung-Mo;Kim, Sung-Young;Kim, Min-Hwan
    • Journal of Korea Multimedia Society
    • /
    • v.7 no.12
    • /
    • pp.1657-1664
    • /
    • 2004
  • We propose a method that classifies object images into two types: man-made and natural. A central object is extracted from each image by using the central object extraction method [1] before classification. A central object in an image is defined as a set of regions that lies around the center of the image and has a significant color distribution against its surroundings. We define three measures to classify the object images. The first measure is the energy of the edge direction histogram, calculated from the directions of only non-circular edges. The second measure is an energy difference along directions in the Gabor filter dictionary: the maximum and minimum energies along directions are selected, and the energy difference is computed as the ratio of the maximum to the minimum value. The last measure is the shape of an object, which is also represented by a Gabor filter dictionary; the Gabor filter dictionary for the shape of an object differs from the one for the texture in an object in that the former is computed from a binarized object image. The measures are combined by using a majority rule, in which decisions are made by the majority. A test with 600 images shows a classification accuracy of 86%.

  • PDF
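The first measure above, the energy of the edge direction histogram, can be sketched as the sum of squared normalized bin frequencies: a peaky histogram (man-made-like, one dominant edge direction) gives a value near 1, a uniform one (natural-like) gives a value near 1/bins. The bin count and the non-circular-edge filtering are not specified in the abstract, so `bins=8` and the raw angle list are assumptions.

```python
def direction_histogram_energy(angles_deg, bins=8):
    """Energy of an edge-direction histogram: sum of squared
    normalized bin frequencies.  `angles_deg` is assumed to hold the
    directions (in degrees) of the non-circular edges only; `bins`
    is an assumed parameter."""
    hist = [0] * bins
    for a in angles_deg:
        hist[int(a % 180) * bins // 180] += 1   # fold to [0, 180)
    n = sum(hist)
    return sum((v / n) ** 2 for v in hist) if n else 0.0
```

All edges in one direction yield energy 1.0; edges spread evenly over the bins yield 1/bins, so thresholding this value separates directive from isotropic edge patterns.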

A Study on Extraction of Central Objects in Color Images (칼라 영상에서의 중심 객체 추출에 관한 연구)

  • 김성영;박창민;권규복;김민환
    • Journal of Korea Multimedia Society
    • /
    • v.5 no.6
    • /
    • pp.616-624
    • /
    • 2002
  • An extraction method for central objects in color images is proposed in this paper. A central object is defined as a set of regions near the center of the image that together constitute the object of interest. First, an input image and its decreased-resolution images are segmented. Segmented regions are classified as outer or inner regions; a region is classified as outer when it and its adjacent regions are included in the same region of a decreased-resolution image. Core object regions and core background regions are then selected from the inner and the outer regions, respectively. Core object regions, which are the representative regions of the object, are selected by using information about region size and location. Each inner region is classified as a foreground or background region by comparing the values of a color histogram intersection of the inner region against the core object regions and the core background regions. The core object regions and the foreground regions together constitute the central object in the image.

  • PDF
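The foreground/background decision above rests on the standard color-histogram intersection. A minimal sketch, assuming histograms are given as aligned bin-count lists (the paper's color quantization is not specified here):

```python
def histogram_intersection(h1, h2):
    """Normalized color-histogram intersection between two regions:
    1.0 for identical color distributions, 0.0 for disjoint ones."""
    n1, n2 = sum(h1), sum(h2)
    if n1 == 0 or n2 == 0:
        return 0.0
    return sum(min(a / n1, b / n2) for a, b in zip(h1, h2))

def classify_region(region_hist, core_obj_hist, core_bg_hist):
    """Label an inner region foreground when its intersection with the
    core object region exceeds that with the core background region."""
    return ("foreground"
            if histogram_intersection(region_hist, core_obj_hist)
            > histogram_intersection(region_hist, core_bg_hist)
            else "background")
```

A region whose color distribution overlaps the core object histogram more than the core background histogram is merged into the central object.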

Development and Implementation of Design Object Model for Integrated Structural Design System (통합 구조설계 시스템을 위한 설계 객체 모델의 개발과 구현)

  • 천진호;이창호;이병해
    • Proceedings of the Computational Structural Engineering Institute Conference
    • /
    • 2001.10a
    • /
    • pp.151-158
    • /
    • 2001
  • This paper describes an example of developing an integrated design system, the Integrated Structural Design System for Reinforced Concrete Buildings (INDECON). INDECON incorporates a central database and three design modules: a preliminary design module (PDM), a structural analysis module (SAM), and a detailed design module (DDM). The development of INDECON begins with the development of design models, including the Design Object Model (DOM), which describes design data during the structural design process. The Design Object Model is transformed into the Design Table Model (DTM) for the central database, and is specified in detail for the three design modules. The central database is then implemented and managed by a relational database management system (RDBMS), and the three design modules are implemented using the C++ programming language. The central database in the server computer communicates with the design modules in the client computers using the TCP/IP internet protocol. The development procedure for INDECON in this paper can be applied to developing more comprehensive integrated structural design systems.

  • PDF
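The DOM-to-DTM step above, flattening a design object into a relational row for the central database, can be illustrated with a hypothetical sketch. The entity, field, and table names below are invented for illustration and do not come from the paper:

```python
from dataclasses import dataclass, asdict

@dataclass
class BeamDesignObject:
    """One hypothetical DOM entity describing a beam design."""
    member_id: str
    span_mm: float
    depth_mm: float
    rebar_grade: str

def to_design_table_row(obj):
    """Flatten a DOM instance into a DTM row (column name -> value)
    ready for insertion into the central relational database."""
    return {"table": "BEAM_DESIGN", **asdict(obj)}
```

Each design module would read and write such rows through the server's RDBMS rather than holding the object graph itself.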

Automatic Extraction of Rescue Requests from Drone Images: Focused on Urban Area Images (드론영상에서 구조요청자 자동추출 방안: 도심지역 촬영영상을 중심으로)

  • Park, Changmin
    • Journal of Korea Society of Digital Industry and Information Management
    • /
    • v.15 no.3
    • /
    • pp.37-44
    • /
    • 2019
  • In this study, we propose a method for automatically extracting rescue requesters from drone images. A central object is extracted from each image by using the central object extraction method [7] before classification. A central object in an image is defined as a set of regions that lies around the center of the image and has a significant texture distribution against its surroundings. In the case of artificial objects, straight-line edges are often found and the texture is regular and directive; this is not so for natural objects. Such characteristics are extracted using the edge direction histogram energy and the Gabor texture energy. The edge direction histogram energy is calculated based on the directions of only non-circular edges. The texture Gabor energy is calculated based on a 24-dimensional Gabor filter bank; the maximum and minimum energies along directions in the Gabor filter dictionary are selected. Finally, rescue-requester object areas are extracted using the dominant features of the objects. Through experiments, we obtain an accuracy of more than 75% for the extraction method using each feature.
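The max/min selection over the Gabor filter bank above reduces to a directionality ratio. A sketch under the assumption that the 24-filter bank has already been applied and yields one energy per orientation (the convolution step itself is omitted):

```python
def gabor_direction_energy_ratio(energies):
    """Ratio of the maximum to the minimum per-orientation energy
    from a Gabor filter bank.  Artificial objects with a dominant
    edge direction give large ratios; isotropic natural textures
    give ratios near 1.  `energies` is assumed to be the list of
    responses, one per filter orientation."""
    emin = min(energies)
    return max(energies) / emin if emin > 0 else float("inf")
```

Thresholding this ratio is one way to separate directive (regular, man-made-like) textures from isotropic ones.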

Effect of object position in the field of view and application of a metal artifact reduction algorithm on the detection of vertical root fractures on cone-beam computed tomography scans: An in vitro study

  • Nikbin, Ava;Kajan, Zahra Dalili;Taramsari, Mehran;Khosravifard, Negar
    • Imaging Science in Dentistry
    • /
    • v.48 no.4
    • /
    • pp.245-254
    • /
    • 2018
  • Purpose: To assess the effects of object position in the field of view (FOV) and application of a metal artifact reduction (MAR) algorithm on the diagnostic accuracy of cone-beam computed tomography (CBCT) for the detection of vertical root fractures (VRFs). Materials and Methods: Sixty human single-canal premolars received root canal treatment. VRFs were induced in 30 endodontically treated teeth. The teeth were then divided into 4 groups, with 2 groups receiving metal posts and the remaining 2 only having an empty post space. The roots from different groups were mounted in a phantom made of cow rib bone, and CBCT scans were obtained for the 4 different groups. Three observers evaluated the images independently. Results: The highest frequency of correct diagnoses of VRFs was obtained with the object positioned centrally in the FOV, using the MAR algorithm. Peripheral positioning of the object without the MAR algorithm yielded the highest sensitivity for the first observer (66.7%). For the second and third observers, a central position improved sensitivity, with or without the MAR algorithm. In the presence of metal posts, central positioning of the object in the FOV significantly increased the diagnostic sensitivity and accuracy compared to peripheral positioning. Conclusion: Diagnostic accuracy was higher with central positioning than with peripheral positioning, irrespective of whether the MAR algorithm was applied. However, the effect of the MAR algorithm was more significant with central positioning than with peripheral positioning of the object in the FOV. The clinical experience and expertise of the observers may serve as a confounder in this respect.

Determination of Object Position Using Robot Vision (로보트 비전을 이용한 대상물체의 위치 결정에 관한 연구)

  • Park, K.T.
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.13 no.9
    • /
    • pp.104-113
    • /
    • 1996
  • In a robot system, robot manipulation needs information about the task and the objects to be handled, which may possess a variety of positions and orientations. In current industrial robot systems, determining the position and orientation of objects under industrial environments is one of the major problems. In order to pick up an object, the robot needs information about the position and orientation of the object, and about the relation between the object and the gripper. When sensing is accomplished by a pinhole-model camera, the mathematical relationship between object points and their images is expressed in terms of perspective, i.e., central projection. In this paper, a new approach to determining the information of the supporting points related to the position and orientation of the object using the robot vision system is developed and verified in an experimental setup. The result will be useful for industrial, agricultural, and autonomous robots.

  • PDF
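The central (perspective) projection relation mentioned above is the textbook pinhole model: a 3-D point (X, Y, Z) in camera coordinates maps to the image point (fX/Z, fY/Z). A minimal sketch of that model, ignoring lens distortion (which the abstract does not address):

```python
def project_point(X, Y, Z, f):
    """Central projection of a 3-D point onto the image plane of a
    pinhole camera with focal length f, camera at the origin looking
    down the +Z axis."""
    if Z <= 0:
        raise ValueError("point must lie in front of the camera")
    return (f * X / Z, f * Y / Z)
```

Inverting this relation from two or more views (or from known geometric constraints on the object) is what recovers the supporting-point positions.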

Video Image Tracking Technique Based On Shape-Based Matching Algorithm

  • Chen, Min-Hsin;Chen, Chi-Farn
    • Proceedings of the KSRS Conference
    • /
    • 2003.11a
    • /
    • pp.882-884
    • /
    • 2003
  • We present an application of digital video images for object tracking. In order to track a fixed object that was shot from a moving vehicle, this study develops a shape-based matching algorithm to implement the tracking task. Because the shape-based matching algorithm has scale- and rotation-invariant characteristics, it can be used to calculate the similarity between two variant shapes. An experiment is performed to track a ship object in the open sea. The result shows that the proposed method can track the object in the video images even when the shape changes largely.

  • PDF
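The abstract does not spell out the matching algorithm, so as a stand-in for a scale- and rotation-invariant shape comparison, here is a sketch based on the first Hu moment invariant (eta20 + eta02). It assumes shapes are given as the pixel coordinates of a filled binary region; scale invariance holds only for such dense samplings:

```python
def hu1(points):
    """First Hu moment invariant of a filled 2-D shape given as a
    list of (x, y) pixel coordinates: invariant to translation,
    rotation, and (for dense pixel sets) scale."""
    n = len(points)
    cx = sum(x for x, _ in points) / n
    cy = sum(y for _, y in points) / n
    mu20 = sum((x - cx) ** 2 for x, _ in points)
    mu02 = sum((y - cy) ** 2 for _, y in points)
    return (mu20 + mu02) / float(n) ** 2   # normalize by area squared

def shape_similarity(a, b):
    """Similarity in [0, 1] from the relative difference of hu1."""
    ha, hb = hu1(a), hu1(b)
    return 1.0 - abs(ha - hb) / max(ha, hb)
```

A rotated copy of a shape scores 1.0, so the measure tolerates the viewpoint changes that arise when filming from a moving vehicle; it is only a sketch of the invariance idea, not the paper's matcher.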

MULTI-VIEW STEREO CAMERA CALIBRATION USING LASER TARGETS FOR MEASUREMENT OF LONG OBJECTS

  • Yoshimi, Takashi;Yoshimura, Takaharu;Takase, Ryuichi;Kawai, Yoshihiro;Tomita, Fumiaki
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2009.01a
    • /
    • pp.566-571
    • /
    • 2009
  • A calibration method for multiple sets of stereo vision cameras is proposed. To measure the three-dimensional shape of a very long object, measuring the object at different viewpoints and registering the data are necessary. In this study, two laser beams generate two strings of calibration targets, which form straight lines in the world coordinate system. An evaluation function is defined as the sum of the squares of the distances between each transformed target and the fitted line representing the laser beam, plus the distances between points appearing in the data sets of two adjacent viewpoints. The calculation process for the approximation method based on data linearity is presented. The experimental results show the effectiveness of the method.

  • PDF
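The first term of the evaluation function above, the squared distances from the transformed targets to the fitted laser-beam line, can be sketched directly; the registration transform and the adjacent-viewpoint term are omitted here:

```python
def line_fit_cost(points, p0, d):
    """Sum of squared distances from 3-D target points to the line
    through p0 with unit direction d.  `points` are assumed to be
    the targets already transformed into the world frame."""
    cost = 0.0
    for p in points:
        v = [p[i] - p0[i] for i in range(3)]
        t = sum(v[i] * d[i] for i in range(3))      # projection onto d
        cost += sum((v[i] - t * d[i]) ** 2 for i in range(3))
    return cost
```

Minimizing this cost over the viewpoint transforms pulls every transformed target string onto its straight laser line, which is the constraint the calibration exploits.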

DEVELOPMENT OF AN ORTHOGONAL DOUBLE-IMAGE PROCESSING ALGORITHM TO MEASURE BUBBLE VOLUME IN A TWO-PHASE FLOW

  • Kim, Seong-Jin;Park, Goon-Cherl
    • Nuclear Engineering and Technology
    • /
    • v.39 no.4
    • /
    • pp.313-326
    • /
    • 2007
  • In this paper, an algorithm to reconstruct two orthogonal images into a three-dimensional image is developed in order to measure the bubble size and volume in a two-phase boiling flow. The central-active contour model originally proposed by P. Szczypiński and P. Strumillo is modified to reduce the dependence on the initial reference point and to increase the contour stability. The modified model is then applied to the algorithm to extract the object boundary. This improved central contour model could be applied to obscure objects using a variable threshold value. The extracted boundaries from each image are merged into a three-dimensional image through the developed algorithm. It is shown that the object reconstructed using the developed algorithm is very similar or identical to the real object. Various values such as volume and surface area are calculated for the reconstructed images and the developed algorithm is qualitatively verified using real images from rubber clay experiments and quantitatively verified by simulation using imaginary images. Finally, the developed algorithm is applied to measure the size and volume of vapor bubbles condensing in a subcooled boiling flow.
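The merging of two orthogonal boundaries into a volume can be sketched in visual-hull style: a voxel belongs to the object only if both silhouettes cover it. This is a simplified stand-in for the paper's reconstruction, with the contour extraction replaced by precomputed binary masks sharing a vertical axis:

```python
def bubble_volume(front_mask, side_mask):
    """Voxel count of the volume consistent with two orthogonal
    binary silhouettes: front_mask[y][x] (y-x plane) and
    side_mask[y][z] (y-z plane).  For each height y, the admissible
    voxels form the Cartesian product of the two silhouette rows."""
    h = len(front_mask)
    assert len(side_mask) == h, "masks must share the vertical axis"
    volume = 0
    for y in range(h):
        # voxel (x, y, z) is inside iff both silhouettes cover it
        volume += sum(front_mask[y]) * sum(side_mask[y])
    return volume
```

Two full 2x2 silhouettes reconstruct a 2x2x2 cube of 8 voxels; multiplying by the physical voxel size then yields the bubble volume.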