• Title/Summary/Keyword: View Object

Search Result 931, Processing Time 0.023 seconds

Voxel-Based Thickness Analysis of Intricate Objects

  • Subburaj, K.;Patil, Sandeep;Ravi, B.
    • International Journal of CAD/CAM
    • /
    • v.6 no.1
    • /
    • pp.105-115
    • /
    • 2006
  • Thickness is a commonly used parameter in product design and manufacture. Its intuitive definition as the smallest dimension of a cross-section or the minimum distance between two opposite surfaces is ambiguous for intricate solids, and there is very little reported work on the automatic computation of thickness. We present three generic definitions of thickness: interior thickness for points inside an object, exterior thickness for points on the object surface, and radiographic thickness along a view direction. Methods for computing and displaying the respective thickness values are also presented. The internal thickness distribution is obtained by peeling, or successive skin removal, eventually revealing the object skeleton (similar to the medial axis transformation). Another method involves radiographic scanning along a viewing direction, with minimum, maximum, and total thickness options, displayed on the surface of the object. The algorithms have been implemented using an efficient voxel-based representation that can handle up to one billion voxels (1000 per axis), coupled with a near-real-time display scheme that uses a look-up table based on voxel neighborhood configurations. Three different types of intricate objects were used to successfully test the algorithms: industrial (press cylinder casting), sculpture (Ganesha idol), and medical (pelvic bone). The results are found to be useful for early evaluation of manufacturability and other lifecycle considerations.
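
The peeling scheme described above (successive skin removal until the skeleton remains) can be sketched as iterative morphological erosion on a boolean occupancy grid. This is an illustrative reimplementation, not the authors' code: it assumes a 6-connected face neighbourhood and labels each voxel with the peel iteration at which it was removed, a coarse interior-thickness measure.

```python
import numpy as np

def interior_thickness(solid):
    """Onion-peeling: repeatedly strip the outermost voxel layer of a
    boolean 3-D occupancy grid. Each voxel's thickness value is the
    peel iteration at which it was removed, so the last surviving
    voxels approximate the object's skeleton."""
    solid = solid.astype(bool)
    thickness = np.zeros(solid.shape, dtype=int)
    remaining = solid.copy()
    layer = 0
    while remaining.any():
        layer += 1
        padded = np.pad(remaining, 1, constant_values=False)
        # A voxel survives a peel only if all 6 face neighbours are occupied.
        interior = (padded[:-2, 1:-1, 1:-1] & padded[2:, 1:-1, 1:-1] &
                    padded[1:-1, :-2, 1:-1] & padded[1:-1, 2:, 1:-1] &
                    padded[1:-1, 1:-1, :-2] & padded[1:-1, 1:-1, 2:])
        skin = remaining & ~interior           # current outermost layer
        thickness[skin] = layer
        remaining = interior
    return thickness
```

For a 5×5×5 solid cube, the outer shell is removed at peel 1, the inner 3×3×3 shell at peel 2, and the centre voxel at peel 3, so the maximum interior thickness is 3 layers.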

Multiple Camera Collaboration Strategies for Dynamic Object Association

  • Cho, Shung-Han;Nam, Yun-Young;Hong, Sang-Jin
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.4 no.6
    • /
    • pp.1169-1193
    • /
    • 2010
  • In this paper, we present and compare two multiple-camera collaboration strategies that reduce false associations when finding correspondences between objects. Collaboration matrices are defined with the minimum separation required for effective collaboration, because homographic lines for object association are ineffective when the separation is insufficient. The first strategy uses the collaboration matrices to select, from many cameras, the best pair with the maximum separation to collaborate efficiently on object association. The association information in the selected cameras is propagated to the unselected cameras through global information constructed from the associated targets. While the first strategy requires a long operation time to achieve a high association rate, owing to the limited view of the best pair, it reduces computational cost by using homographic lines. The second strategy initiates the collaboration process for all camera pairings regardless of separation. In each collaboration process, only targets crossed by a homographic line transformed from the other collaborating camera generate homographic lines. While the repetitive association processes improve association performance, the transformation processes for homographic lines grow exponentially. The proposed methods are evaluated on real video sequences and compared in terms of computational cost and association performance. The simulation results demonstrate that the proposed methods effectively reduce the false association rate compared with basic pair-wise collaboration.
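
The first strategy's pair-selection step can be sketched as a search over a symmetric separation matrix. This is a minimal illustration, not the paper's algorithm: the `MIN_SEPARATION` threshold and the matrix values are assumed placeholders for whatever the collaboration matrices encode in the actual system.

```python
import itertools

def select_best_pair(separation):
    """Pick the camera pair with the greatest mutual separation from a
    symmetric separation matrix, subject to a minimum separation below
    which homographic lines cannot disambiguate objects.
    Returns None if no pair meets the minimum."""
    MIN_SEPARATION = 2.0  # assumed threshold; derived per setup in practice
    n = len(separation)
    best, best_sep = None, MIN_SEPARATION
    for i, j in itertools.combinations(range(n), 2):
        if separation[i][j] >= best_sep:
            best, best_sep = (i, j), separation[i][j]
    return best
```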

Accurate Pig Detection for Video Monitoring Environment (비디오 모니터링 환경에서 정확한 돼지 탐지)

  • Ahn, Hanse;Son, Seungwook;Yu, Seunghyun;Suh, Yooil;Son, Junhyung;Lee, Sejun;Chung, Yongwha;Park, Daihee
    • Journal of Korea Multimedia Society
    • /
    • v.24 no.7
    • /
    • pp.890-902
    • /
    • 2021
  • Although object detection accuracy on still images has improved significantly with advances in deep learning, object detection on video data remains challenging because of the real-time requirement and the accuracy drop caused by occlusion. In this research, we propose a pig detection method for a video monitoring environment. First, we determine motion in video data obtained from a tilted-down-view camera, based on the average size of each pig at each location in the training data, and extract key frames from this motion information. For each key frame, we then apply YOLO, known for its superior trade-off between accuracy and execution speed among deep learning-based object detectors, to obtain bounding boxes for the pigs. Finally, we merge the bounding boxes between consecutive key frames to reduce false positives and false negatives. Based on experimental results with a video data set obtained from a pig farm, we confirmed that the pigs could be detected with an accuracy of 97% at a processing speed of 37 fps.
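
The final merging step (suppressing detections that do not persist across key frames) can be sketched with a simple IoU check. This is an assumed simplification of the paper's merge rule, shown only to illustrate the idea that one-frame detections are treated as likely false positives.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def confirm_detections(prev_boxes, curr_boxes, thresh=0.5):
    """Keep only current-key-frame boxes that overlap some box from the
    previous key frame; isolated single-frame boxes are discarded."""
    return [c for c in curr_boxes
            if any(iou(c, p) >= thresh for p in prev_boxes)]
```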

Development of Personal Mobility Safety Assistants using Object Detection based on Deep Learning (딥러닝 기반 객체 인식을 활용한 퍼스널 모빌리티 안전 보조 시스템 개발)

  • Kwak, Hyeon-Seo;Kim, Min-Young;Jeon, Ji-Yong;Jeong, Eun-Hye;Kim, Ju-Yeop;Hyeon, So-Dam;Jeong, Jin-Woo
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.25 no.3
    • /
    • pp.486-489
    • /
    • 2021
  • Recently, demand for personal mobility vehicles, such as electric kickboards, has increased explosively because of their high portability and usability. However, the number of traffic accidents caused by personal mobility vehicles has also risen rapidly in recent years. To address driver-safety issues, we propose a novel approach that monitors context information around personal mobility vehicles using deep learning-based object detection on smartphone-captured videos. In the proposed framework, a smartphone is attached to the personal mobility device and records a front or rear view to detect approaching objects that may affect the driver's safety. Using detection results from the YOLOv5 model, we report preliminary results and validate the feasibility of the proposed approach.
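
One way to decide that a detected object is "approaching" from per-frame detections is to watch its bounding-box area over recent frames. The paper does not state its criterion, so the heuristic below (monotone area growth approximating decreasing distance) is purely an assumption used for illustration.

```python
def is_approaching(areas, growth=1.2):
    """Heuristic: an object whose bounding-box area grows monotonically
    across recent frames, and by at least `growth`x overall, is flagged
    as approaching. `growth` is an assumed tuning parameter."""
    return (all(b >= a for a, b in zip(areas, areas[1:]))
            and areas[-1] >= growth * areas[0])
```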

Multi-View Image Deblurring for 3D Shape Reconstruction (3차원 형상 복원을 위한 다중시점 영상 디블러링)

  • Choi, Ho Yeol;Park, In Kyu
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.49 no.11
    • /
    • pp.47-55
    • /
    • 2012
  • In this paper, we propose a method to reconstruct an accurate 3D object shape from multi-view images degraded by motion blur. In multi-view deblurring, more precise PSF estimation is possible by exploiting the geometric relationship between the views. The proposed method first estimates initial 2D PSFs from the individual input images. Candidate 3D PSFs are then projected onto the input images one by one to find the candidate most consistent with the initial 2D PSFs. A 3D PSF consists of a direction and a density, and represents the 3D trajectory of the object's motion. To restore the 3D shape from the multi-view images, the method computes a similarity map and estimates the positions of 3D points. The estimated 3D PSF is projected back onto the input images, replacing the initial 2D PSFs, which are finally used in image deblurring. Experimental results show that the quality of both image deblurring and 3D reconstruction improves significantly compared with deblurring each input image independently.
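
The candidate-selection step (choosing the 3D PSF whose per-view projections best agree with the initial 2D PSFs) can be sketched as a minimum-angular-error vote. This is a simplified stand-in: it assumes each candidate's projected blur directions are already available per view, and compares only angles, not densities.

```python
def angular_diff(a, b):
    """Smallest difference between two blur directions in degrees
    (blur direction is defined modulo 180 degrees)."""
    d = abs(a - b) % 180.0
    return min(d, 180.0 - d)

def select_3d_psf(candidate_projections, initial_2d_angles):
    """candidate_projections: {candidate_id: [projected blur angle per view]}.
    Returns the candidate whose projections agree best (least total
    angular error) with the per-view initial 2D PSF estimates."""
    return min(candidate_projections,
               key=lambda c: sum(angular_diff(p, o)
                                 for p, o in zip(candidate_projections[c],
                                                 initial_2d_angles)))
```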

Study on object detection and distance measurement functions with Kinect for windows version 2 (키넥트(Kinect) 윈도우 V2를 통한 사물감지 및 거리측정 기능에 관한 연구)

  • Niyonsaba, Eric;Jang, Jong-Wook
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.21 no.6
    • /
    • pp.1237-1242
    • /
    • 2017
  • Computer vision is becoming more capable as new imaging sensors enable systems to better understand their surrounding environment, imitating the human vision system with artificial intelligence techniques. In this paper, we conducted experiments with the Kinect camera, a depth sensor, on object detection and distance measurement, functions essential to computer vision applications such as unmanned or manned vehicles, robots, and drones. The Kinect camera is used here to estimate the position of objects in its field of view and to accurately measure the distance from them to its depth sensor, checking whether a detected object is a real object and ignoring pixels that are not part of it in order to reduce processing time. Tests showed promising results with this low-cost range sensor, which can be used for object detection and distance measurement, fundamental functions for further processing in computer vision applications.
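
The "ignore pixels that are not part of the real object" idea can be sketched as filtering invalid depth readings before estimating distance. Kinect v2 reports 0 for pixels where depth could not be measured; the median and the zero-filtering here are an assumed robust estimator, not necessarily the paper's exact computation.

```python
import statistics

def object_distance_mm(depth_patch):
    """Median depth (in mm) of valid pixels inside a detected-object
    region of a Kinect v2 depth frame. Zero-valued pixels mean 'no
    measurement' and are excluded so they do not drag the estimate
    toward zero. Returns None if no valid pixel exists."""
    valid = [d for row in depth_patch for d in row if d > 0]
    return statistics.median(valid) if valid else None
```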

Active Object Tracking based on stepwise application of Region and Color Information (지역정보와 색 정보의 단계적 적용에 의한 능동 객체 추적)

  • Jeong, Joon-Yong;Lee, Kyu-Won
    • The KIPS Transactions:PartB
    • /
    • v.19B no.2
    • /
    • pp.107-112
    • /
    • 2012
  • An active object tracking algorithm using a pan-tilt camera, based on the stepwise application of region and color information from real-time image sequences, is proposed. To reduce environmental noise in the input sequences, Gaussian filtering is performed first. Each image is divided into background and objects using an adaptive Gaussian mixture model. Once the target object is detected, an initial search window close to the object region is set up and color information is extracted from that region. We track moving objects in real time with the CAMShift algorithm, which uses this color information to trace objects under an active camera. Proper tracking is accomplished by controlling the amount of pan and tilt so that the center of the object is placed in the middle of the field of view. Experimental results show that the proposed method is more effective than the hand-operated window method.
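
The pan/tilt control step reduces to computing how far the tracked window's centre is from the frame centre and converting that pixel offset to angles. The proportional (small-angle) mapping below is an assumption for illustration; the paper's controller may differ.

```python
def pan_tilt_correction(cx, cy, frame_w, frame_h, hfov_deg, vfov_deg):
    """Angular pan/tilt (degrees) needed to bring the tracked window
    centre (cx, cy) to the middle of the frame, assuming angle is
    proportional to pixel offset (a small-angle approximation).
    Positive pan = turn right, positive tilt = turn down."""
    pan = (cx - frame_w / 2) / frame_w * hfov_deg
    tilt = (cy - frame_h / 2) / frame_h * vfov_deg
    return pan, tilt
```

An object centred at (480, 180) in a 640×360 frame with a 60°×40° field of view would require a 15° pan and no tilt.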

Effect of slice inclination and object position within the field of view on the measurement accuracy of potential implant sites on cone-beam computed tomography

  • Saberi, Bardia Vadiati;Khosravifard, Negar;Nourzadeh, Alireza
    • Imaging Science in Dentistry
    • /
    • v.50 no.1
    • /
    • pp.37-43
    • /
    • 2020
  • Purpose: The purpose of this study was to evaluate the accuracy of linear measurements in the horizontal and vertical dimensions based on object position and slice inclination in cone-beam computed tomography (CBCT) images. Materials and Methods: Ten dry sheep hemi-mandibles, each with 4 sites (incisor, canine, premolar, and molar), were evaluated when either centrally or peripherally positioned within the field of view (FOV) with the image slices subjected to either oblique or orthogonal inclinations. Four types of images were created of each region: central/cross-sectional, central/coronal, peripheral/cross-sectional, and peripheral/coronal. The horizontal and vertical dimensions were measured for each region of each image type. Direct measurements of each region were obtained using a digital caliper in both horizontal and vertical dimensions. CBCT and direct measurements were compared using the Bland-Altman plot method. P values <0.05 were considered to indicate statistical significance. Results: The buccolingual dimension of the incisor and premolar areas and the height of the incisor, canine, and molar areas showed statistically significant differences on the peripheral/coronal images compared to the direct measurements (P<0.05). Molar area height in the central/coronal slices also differed significantly from the direct measurements (P<0.05). Cross-sectional images of either the central or peripheral position had no marked difference from the gold-standard values, indicating sufficient accuracy. Conclusion: Peripheral object positioning within the FOV in combination with applying an orthogonal inclination to the slices resulted in significant inaccuracies in the horizontal and vertical measurements. The most undesirable effect was observed in the molar area and the vertical dimension.
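
The Bland-Altman comparison used above summarizes agreement between two measurement methods by the mean difference (bias) and its 95% limits of agreement. A minimal sketch of that computation, with made-up example numbers rather than the study's data:

```python
import statistics

def bland_altman(method_a, method_b):
    """Bias (mean paired difference) and 95% limits of agreement
    (bias +/- 1.96 * SD of differences) between two sets of paired
    measurements, as in a Bland-Altman analysis."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd
```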

A Study on Characteristics of Prospect from the Mountain Pass - Focusing on Mountain Passes Located in Busan - (고개의 조망특성에 관한 연구 - 부산광역시를 대상으로 -)

  • Kang Young-Jo;Cho Seung-Rae;Kim Hee-Jung
    • Journal of the Korean Institute of Landscape Architecture
    • /
    • v.33 no.4 s.111
    • /
    • pp.22-32
    • /
    • 2005
  • The purpose of this study is to analyze the prospect characteristics of mountain passes by investigating relations between the type of mountain pass and the objects overlooked from it. For this purpose, 44 mountain passes located in Busan, excluding the Gangseo-gu region, were selected and surveyed. According to their locational characteristics, the mountain passes were classified into three types: 'sanmok' (formed between mountain peaks), 'sanheori' (formed on the mountainside), and 'sanmaru' (formed at the tip of a mountain peak). Of the 44 mountain passes, 22 were of the 'sanheori' type, and passes of this type mostly offered a prospect in which downtown and mountain areas overlap. The sight distance and dip of the objects viewed from each pass were examined to determine the relations between the objects and the pass. When overlooked from mountain passes in Busan, most objects lie at an angle of depression between $-3^{\circ}$ and $-1^{\circ}$, within a sight distance of 0.5 km to 14 km. Mountain passes are valuable as important posts for prospecting scenes, but they are now at risk, gradually disappearing because of development projects. It is hoped that this study helps the value of the mountain pass to be recognized and contributes to preserving mountain passes as important viewpoints when their regions are later developed.
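
The angle of depression reported above follows directly from the height difference between viewpoint and object and their horizontal separation. A minimal sketch of that geometry (negative angles mean looking down, matching the study's sign convention):

```python
import math

def depression_angle_deg(observer_h, object_h, horizontal_dist):
    """Angle (degrees) from the observer's horizontal to the line of
    sight toward the object; negative when the object lies below the
    observer. Heights and distance in the same unit (e.g. metres)."""
    return math.degrees(math.atan2(object_h - observer_h, horizontal_dist))
```

For example, an object 100 m below the viewpoint at 100 m horizontal distance sits at a -45° depression, while distant city objects several kilometres away yield the shallow -3° to -1° angles measured in the study.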

Alpha : Java Visualization Tool (Alpha : 자바 시각화 도구)

  • Kim, Cheol-Min
    • The Journal of Korean Association of Computer Education
    • /
    • v.7 no.3
    • /
    • pp.45-56
    • /
    • 2004
  • Java provides support for the Web, concurrent programming, safety, portability, and GUIs, so the number of Java users is steadily increasing. Java is based on object-oriented concepts such as classes, instances, encapsulation, inheritance, and polymorphism; however, the JVM (Java Virtual Machine) hides most of the phenomena related to these concepts, which is why many Java users have difficulty learning and using the language. As a solution to this problem, I have developed Alpha, a tool that visualizes the phenomena occurring in the JVM from the standpoint of these concepts, and this paper describes the tool's design and features. For practicality and extensibility, Alpha has an MVC (Model-View-Controller) architecture and visualizes phenomena such as object instantiations, method invocations, field accesses, cross-references among objects, and execution flows of threads in various ways according to the levels and purposes of its users.
