• Title/Summary/Keyword: Using Two Object Masks

Search results: 9

A New Feature-Based Visual SLAM Using Multi-Channel Dynamic Object Estimation (다중 채널 동적 객체 정보 추정을 통한 특징점 기반 Visual SLAM)

  • Geunhyeong Park;HyungGi Jo
    • IEMEK Journal of Embedded Systems and Applications / v.19 no.1 / pp.65-71 / 2024
  • An indirect visual SLAM takes raw image data and exploits geometric information such as keypoints and line edges. Due to various environmental changes, SLAM performance may decrease. The main problem is caused by dynamic objects, especially in highly crowded environments. In this paper, we propose a robust feature-based visual SLAM, building on ORB-SLAM, via multi-channel dynamic object estimation. An optical flow algorithm and a deep learning-based object detection algorithm each estimate a different type of dynamic object information. The proposed method incorporates both types of dynamic object information and creates multi-channel dynamic masks. In this way, information on both actually moving dynamic objects and potentially dynamic objects can be obtained. Finally, dynamic objects included in the masks are removed in the feature extraction stage. As a result, the proposed method obtains more precise camera poses. Its superiority over conventional ORB-SLAM was verified in experiments on the KITTI odometry dataset.
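
The mask-then-filter idea described in this abstract can be pictured with a short Python sketch. This is a minimal illustration under assumed parameters, not the authors' implementation; the `boxes` input stands in for the output of any deep-learning detector of potentially dynamic classes, and the flow threshold is an assumption.

```python
# A minimal sketch (not the authors' code): build a two-channel dynamic mask
# from optical flow and detector boxes, then drop ORB keypoints inside it.
import cv2
import numpy as np

def multi_channel_dynamic_mask(prev_gray, gray, boxes, flow_thresh=2.0):
    """Channel 0: actually moving pixels (flow magnitude above a threshold).
    Channel 1: potentially dynamic objects (detector bounding boxes)."""
    h, w = gray.shape
    mask = np.zeros((h, w, 2), dtype=np.uint8)

    # Channel 0: dense optical flow between consecutive frames.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    magnitude = np.linalg.norm(flow, axis=2)
    mask[..., 0] = (magnitude > flow_thresh).astype(np.uint8)

    # Channel 1: boxes of potentially dynamic classes (person, car, ...)
    # produced by any deep-learning detector.
    for x0, y0, x1, y1 in boxes:
        mask[y0:y1, x0:x1, 1] = 1
    return mask

def filter_keypoints(keypoints, mask):
    """Keep only keypoints lying outside both dynamic-object channels."""
    return [kp for kp in keypoints
            if mask[int(kp.pt[1]), int(kp.pt[0])].sum() == 0]
```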

Video object segmentation using a novel object boundary linking (새로운 객체 외곽선 연결 방법을 사용한 비디오 객체 분할)

  • Lee Ho-Suk
    • The KIPS Transactions:PartB / v.13B no.3 s.106 / pp.255-274 / 2006
  • The moving object boundary is very important for accurate segmentation of moving objects. We extract the moving object boundary from the moving object edge, but the extracted boundary is broken, so we develop a novel boundary linking algorithm to connect the broken segments. The linking algorithm forms a quadrant around a terminating pixel of the broken boundary and searches clockwise, in concentric circles within a search radius, for another terminating pixel to link to in the forward direction. The algorithm guarantees shortest-distance linking. We register the background from the image sequence using stationary background filtering. We construct two object masks, one from the boundary linking and the other from the initial moving object, and use these two complementary masks to segment the moving objects. The main contribution of the proposed algorithms is the novel object boundary linking algorithm, which enables accurate segmentation. Using the boundary linking and the registered background, we automatically achieve accurate segmentation of a moving object, segmentation of multiple moving objects, segmentation of an object containing a hole, segmentation of thin objects, and segmentation of moving objects against a complex background. We evaluate the algorithms on standard MPEG-4 test video sequences and on real indoor and outdoor video sequences. The proposed algorithms are efficient and process, on average, 70.20 QCIF frames per second and 19.7 CIF frames per second on a 3.4 GHz Pentium IV personal computer, which is sufficient for real-time object-based processing.
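
As a rough illustration of the linking step, the toy sketch below finds terminating pixels of a one-pixel-wide binary boundary and connects each to its nearest counterpart within a search radius. It deliberately omits the quadrant and forward-direction restrictions described in the abstract, so it is only a simplification of the idea.

```python
# A toy sketch of shortest-distance boundary linking on a one-pixel-wide
# binary boundary image (uint8, 0/255). It omits the quadrant and
# forward-direction restrictions and simply links each terminating pixel
# to its nearest counterpart within the search radius.
import cv2
import numpy as np

def terminating_pixels(boundary):
    """Boundary pixels with exactly one 8-connected boundary neighbour."""
    b = (boundary > 0).astype(np.uint8)
    neighbours = cv2.filter2D(b, -1, np.ones((3, 3), np.uint8)) - b
    ys, xs = np.where((b == 1) & (neighbours == 1))
    return list(zip(xs, ys))

def link_boundaries(boundary, radius=10):
    linked = boundary.copy()
    ends = terminating_pixels(boundary)
    for i, (x0, y0) in enumerate(ends):
        best, best_dist = None, radius + 1
        for j, (x1, y1) in enumerate(ends):
            if i == j:
                continue
            dist = np.hypot(x1 - x0, y1 - y0)
            if dist < best_dist:              # shortest-distance linking
                best, best_dist = (x1, y1), dist
        if best is not None:
            cv2.line(linked, (int(x0), int(y0)),
                     (int(best[0]), int(best[1])), 255, 1)
    return linked
```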

Moving Object Segmentation using Space-oriented Object Boundary Linking and Background Registration (공간기반 객체 외곽선 연결과 배경 저장을 사용한 움직이는 객체 분할)

  • Lee Ho Suk
    • Journal of KIISE:Software and Applications / v.32 no.2 / pp.128-139 / 2005
  • The moving object boundary is very important for moving object segmentation, but the extracted boundary is broken. We invent a novel space-oriented boundary linking algorithm to link the broken boundary. The linking algorithm forms a quadrant around a terminating pixel of the broken boundary and searches forward, within a radius, for another terminating pixel to link to. The algorithm guarantees shortest-distance linking. We also register the background from the image sequence. We construct two object masks, one from the result of boundary linking and the other from the registered background, and use these two complementary masks together for moving object segmentation. We also suppress moving cast shadows using the Roberts gradient operator. The major advantages of the proposed algorithms are more accurate moving object segmentation and, thanks to the two object masks, the ability to segment an object that has holes in its region. We evaluate the algorithms on the standard MPEG-4 test sequences and a real video sequence. The proposed algorithms are very efficient and process QCIF images at more than 48 fps and CIF images at more than 19 fps on a 2.0 GHz Pentium 4 computer.
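
A rough sketch of how the two complementary masks and the Roberts-gradient shadow suppression might be combined is given below. The combination rule, the thresholds, and the reading of "low gradient means cast shadow" are my assumptions, not the paper's definitions.

```python
# A rough sketch of using the two complementary masks with Roberts-gradient
# shadow suppression. The combination rule, thresholds, and the "flat region
# = shadow" criterion are assumptions, not the paper's definitions.
import cv2
import numpy as np

ROBERTS_X = np.array([[1, 0], [0, -1]], dtype=np.float64)
ROBERTS_Y = np.array([[0, 1], [-1, 0]], dtype=np.float64)

def roberts_magnitude(gray):
    g = gray.astype(np.float64)
    gx = cv2.filter2D(g, -1, ROBERTS_X)
    gy = cv2.filter2D(g, -1, ROBERTS_Y)
    return np.hypot(gx, gy)

def segment(frame_gray, background_gray, linked_boundary_mask,
            diff_thresh=25, shadow_grad_thresh=8):
    # Mask 1: region enclosed by the linked object boundary (given as input).
    mask_boundary = linked_boundary_mask > 0
    # Mask 2: pixels that differ from the registered background.
    mask_background = np.abs(frame_gray.astype(int) -
                             background_gray.astype(int)) > diff_thresh
    # Cast shadows darken the background while staying texturally flat,
    # so low-Roberts-gradient pixels are dropped from the foreground.
    shadow = roberts_magnitude(frame_gray) < shadow_grad_thresh
    return mask_boundary & mask_background & ~shadow
```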

2D-3D Pose Estimation using Multi-view Object Co-segmentation (다시점 객체 공분할을 이용한 2D-3D 물체 자세 추정)

  • Kim, Seong-heum;Bok, Yunsu;Kweon, In So
    • The Journal of Korea Robotics Society / v.12 no.1 / pp.33-41 / 2017
  • We present a region-based approach for accurate pose estimation of small mechanical components. Our algorithm consists of two key phases: multi-view object co-segmentation and pose estimation. In the first phase, we describe an automatic method to extract binary masks of a target object captured from multiple viewpoints. For initialization, we assume the target object is bounded by a convex volume of interest defined by a few user inputs. The co-segmented target object shares the same geometric representation in space and has color models distinct from those of the backgrounds. In the second phase, we retrieve a 3D model instance with the correct upright orientation and estimate the relative pose of the object observed in the images. Our energy function, combining region and boundary terms for the proposed measures, maximizes the overlap of regions and boundaries between the multi-view co-segmentations and the projected masks of the reference model. Based on high-quality co-segmentations consistent across all viewpoints, our final results are accurate model indices and pose parameters of the extracted object. We demonstrate the effectiveness of the proposed method using various examples.
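
The region-plus-boundary objective can be pictured with the sketch below. It is an illustrative reading of the abstract rather than the paper's energy function: `project_model_mask` is a hypothetical renderer returning the binary mask of the model under a candidate pose for one camera, and the weights are arbitrary.

```python
# An illustrative reading of the region + boundary objective: a candidate
# model pose is scored by how well its projected masks overlap the
# multi-view co-segmentations across all viewpoints.
import cv2
import numpy as np

def mask_boundary(mask):
    """One-pixel-wide boundary of a binary mask."""
    m = (mask > 0).astype(np.uint8)
    return m - cv2.erode(m, np.ones((3, 3), np.uint8))

def iou(a, b):
    a, b = a.astype(bool), b.astype(bool)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 0.0

def pose_score(coseg_masks, pose, cameras, project_model_mask,
               w_region=1.0, w_boundary=0.5):
    """Sum region and boundary overlap over all viewpoints; the best model
    index and pose parameters maximize this score."""
    score = 0.0
    for seg, cam in zip(coseg_masks, cameras):
        proj = project_model_mask(pose, cam, seg.shape)   # hypothetical renderer
        score += w_region * iou(seg, proj)
        score += w_boundary * iou(mask_boundary(seg), mask_boundary(proj))
    return score
```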

Active auto-focusing of high-magnification optical microscopes (고배율 광학현미경의 초정밀 능동 자동초점방법)

  • 이호재;이상윤;김승우
    • Korean Journal of Optics and Photonics / v.7 no.2 / pp.101-111 / 1996
  • Optical microscopes integrated with CCD cameras are widely used for automatic inspection of precision circuit patterns fabricated on glass masks and silicon wafers. For this application it is important to keep the object always in focus so that the image remains of good quality while the microscope scans the object. However, as the magnification of the microscope is increased for finer resolution, the depth of focus becomes small, often in the submicron range, requiring special care in focusing. This study proposes a new auto-focusing method that can be readily incorporated into the existing optical configuration of a microscope. The method is based on optical triangulation using a separate laser beam and two photodiodes, eliminating focus errors caused by surface roughness and waviness. Experimental results prove that the method produces very sensitive focus error signals, with a resolution of 5 nm and an accuracy within 0.5 $\mu\mathrm{m}$.
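
As a purely schematic sketch of such a closed loop (my simplification, not the paper's optics), the two photodiode readings from the triangulation beam can be turned into a normalized focus-error signal that drives the z stage. The gain, dead band, and hardware interfaces below are hypothetical.

```python
# A schematic sketch: two photodiode readings give a normalized focus-error
# signal; one control step nudges the z stage toward best focus.
def focus_error(photodiode_a, photodiode_b):
    """Normalized difference: zero at best focus, sign gives the direction."""
    total = photodiode_a + photodiode_b
    return (photodiode_a - photodiode_b) / total if total > 0 else 0.0

def autofocus_step(read_photodiodes, move_stage_z,
                   gain_um_per_unit=0.5, dead_band=1e-3):
    """One closed-loop iteration: read the error signal and nudge the stage."""
    a, b = read_photodiodes()
    err = focus_error(a, b)
    if abs(err) > dead_band:          # ignore noise near best focus
        move_stage_z(-gain_um_per_unit * err)
    return err
```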


The High-Speed Extraction of Interest Region in the Parcel Image of Large Size (대용량 소포영상에서 관심영역 고속추출 방법에 관한 연구)

  • Park, Moon-Sung;Bak, Sang-Eun;Kim, In-Soo;Kim, Hye-Kyu;Jung, Hoe-Kyung
    • The KIPS Transactions:PartD / v.11D no.3 / pp.691-702 / 2004
  • In this paper, we propose a sequence of methods that rapidly extracts ROIs (Regions of Interest) from large parcel images. In the proposed method, the original image is split into small masks, and the meaningful masks, the ROIs, are extracted by applying two criteria sequentially: the first criterion is the difference of pixel values between inner points, and the second is their deviation. After processing, the informative ROIs, namely the areas of the bar code, characters, label, and object outline, are acquired. Using the diagonal axis of each ROI and the features of various 2D bar codes, the 2D bar code area can then be extracted from the ROIs. In experiments using these methods, the ROIs are extracted from a large parcel image in less than 200 ms, and the 2D bar code region is selected with 100% accuracy.
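
The two-criterion screening lends itself to a compact sketch: the image is tiled into small blocks, and a block is kept as an ROI when both its pixel-value range and its deviation exceed thresholds. The block size and thresholds here are assumptions, not values from the paper.

```python
# A simplified sketch of the block-wise ROI screening with two criteria:
# pixel-value range (difference between inner points) and deviation.
import numpy as np

def extract_roi_blocks(gray, block=32, range_thresh=40, std_thresh=12):
    h, w = gray.shape
    rois = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            patch = gray[y:y + block, x:x + block].astype(np.float64)
            value_range = patch.max() - patch.min()   # first criterion
            deviation = patch.std()                   # second criterion
            if value_range > range_thresh and deviation > std_thresh:
                rois.append((x, y, block, block))     # (x, y, width, height)
    return rois
```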

Keypoint-based Deep Learning Approach for Building Footprint Extraction Using Aerial Images

  • Jeong, Doyoung;Kim, Yongil
    • Korean Journal of Remote Sensing / v.37 no.1 / pp.111-122 / 2021
  • Building footprint extraction is an active topic in the domain of remote sensing, since buildings are a fundamental unit of urban areas. Deep convolutional neural networks successfully perform footprint extraction from optical satellite images. However, semantic segmentation produces coarse results in the output, such as blurred and rounded boundaries, which are caused by the use of convolutional layers with large receptive fields and pooling layers. The objective of this study is to generate visually enhanced building objects by directly extracting the vertices of individual buildings by combining instance segmentation and keypoint detection. The target keypoints in building extraction are defined as points of interest based on the local image gradient direction, that is, the vertices of a building polygon. The proposed framework follows a two-stage, top-down approach that is divided into object detection and keypoint estimation. Keypoints between instances are distinguished by merging the rough segmentation masks and the local features of regions of interest. A building polygon is created by grouping the predicted keypoints through a simple geometric method. Our model achieved an F1-score of 0.650 with an mIoU of 62.6 for building footprint extraction using the OpenCitesAI dataset. The results demonstrated that the proposed framework using keypoint estimation exhibited better segmentation performance when compared with Mask R-CNN in terms of both qualitative and quantitative results.
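
The final grouping step ("a simple geometric method") might look roughly like the sketch below, which orders the predicted vertex keypoints of one building instance by angle around their centroid. This is a guess at a plausible grouping rule, not the paper's exact procedure; `keypoints` is assumed to be the per-instance vertex predictions from the keypoint head.

```python
# A toy sketch of grouping predicted vertex keypoints into a building polygon.
import numpy as np

def keypoints_to_polygon(keypoints):
    """keypoints: (N, 2) array of (x, y) vertex predictions for one building."""
    pts = np.asarray(keypoints, dtype=float)
    centroid = pts.mean(axis=0)
    angles = np.arctan2(pts[:, 1] - centroid[1], pts[:, 0] - centroid[0])
    return pts[np.argsort(angles)]    # vertices ordered by angle around the centroid
```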

An Instance Segmentation using Object Center Masks (오브젝트 중심점-마스크를 사용한 instance segmentation)

  • Lee, Jong Hyeok;Kim, Hyong Suk
    • Smart Media Journal / v.9 no.2 / pp.9-15 / 2020
  • In this paper, we propose a network model composed of multi-path encoder-decoder branches that can recognize each instance in an image. The network has two branches, a Dot branch and a Segmentation branch, for finding the center point of each instance and for recognizing the area of the instance, respectively. In the experiments, the CVPPP dataset was used to distinguish leaves from one another: the center-point detection branch (Dot branch) finds the center point of each leaf, and the object segmentation branch (Segmentation branch) then predicts the pixel area of the leaf corresponding to each center point. Existing segmentation methods have the problem of searching over anchor boxes of many sizes and positions (N > 1k) to detect objects, and they have difficulty estimating the unknown number of instances per image. The proposed network offers an effective way of finding instances based on their center points.
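
A minimal PyTorch sketch of the two-branch layout is shown below; the shared encoder and the layer sizes are illustrative assumptions, not the architecture reported in the paper.

```python
# A minimal sketch of a shared encoder followed by a Dot branch (instance
# centre heatmap) and a Segmentation branch (per-pixel instance area).
import torch
import torch.nn as nn

class TwoBranchNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        def make_decoder(out_channels):
            return nn.Sequential(
                nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(32, out_channels, 4, stride=2, padding=1),
            )
        self.dot_branch = make_decoder(1)   # centre-point heatmap
        self.seg_branch = make_decoder(1)   # instance-area map

    def forward(self, x):
        features = self.encoder(x)
        centers = torch.sigmoid(self.dot_branch(features))
        areas = torch.sigmoid(self.seg_branch(features))
        return centers, areas
```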

A Study on the Extraction of Vectoring Objects in the Color Map Image (칼라지도영상에서의 벡터링 대상물 추출에 관한 연구)

  • 김종민;김성연;김민환
    • Spatial Information Research / v.3 no.2 / pp.179-189 / 1995
  • To produce vector data with a vectoring tool from a map that has no negative plates, the objects to be vectorized must be extracted from a scanned map. In this paper, we study the extraction of vectoring objects from scanned color maps. To do this, we classify vectoring objects into three types: line type, filled-area type, and character/symbol type. To make the extraction effective, we analyzed the characteristics of vectoring objects and the color distribution in scanned color maps, and applied these characteristics to the design of the extraction method. To extract line-type objects, our line tracing method was designed using masks that consider the connectivity and geometric characteristics of lines. Using a local thresholding method and a similarity function that compares the color distribution between two NxN blocks, we extract character/symbol and filled-area objects effectively. The method proposed in this paper can be used to construct small-scale GIS applications economically from existing color maps.
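
One plausible form of the block-level similarity function is sketched below: two NxN blocks are compared through normalized color histograms and histogram intersection. The bin count and the intersection measure are my assumptions; the abstract does not define the function.

```python
# A sketch of comparing the colour distributions of two NxN blocks with
# normalized joint R/G/B histograms and histogram intersection.
import numpy as np

def block_histogram(block, bins=8):
    """Normalized joint R/G/B histogram of an NxN colour block (N, N, 3)."""
    hist, _ = np.histogramdd(block.reshape(-1, 3).astype(np.float64),
                             bins=(bins, bins, bins), range=[(0, 256)] * 3)
    return hist / hist.sum()

def block_similarity(block_a, block_b, bins=8):
    """Histogram intersection in [0, 1]; close to 1 for similar colour blocks."""
    ha = block_histogram(block_a, bins)
    hb = block_histogram(block_b, bins)
    return float(np.minimum(ha, hb).sum())
```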
