• Title/Summary/Keyword: 3D Environment Recognition

Search results: 151

A Study on Recognition of Inpatient Room Acoustic Pattern for Hospital Safety (병원안전을 위한 입원실 음향패턴 인식 관한 연구)

  • Ryu, Han-Sul;Ahn, Jong-Young
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.21 no.3
    • /
    • pp.169-173
    • /
    • 2021
  • Currently, safety accidents in hospitals occur steadily. In particular, accidents involving elderly patients with weak immunity, such as those in nursing hospitals, continue to occur, and countermeasures are needed. Most of these accidents are caused by patient movement. As a way to reduce safety accidents by analyzing and recognizing the sounds produced in the inpatient room as the patient moves, this paper classifies sound patterns for sound recognition in the hospital inpatient room using Dynamic Time Warping (DTW), an algorithm applicable to time-series pattern recognition, and analyzes the approach by applying it to the inpatient room environment.
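
The DTW matching described above can be illustrated with a minimal sketch: a query sound pattern is compared against labeled reference patterns and assigned to the closest one. The feature choice (plain amplitude envelopes) and the template labels are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) dynamic-programming DTW between two 1-D sequences."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]

def classify(query, templates):
    """Return the label of the reference pattern with the smallest DTW distance."""
    return min(templates, key=lambda label: dtw_distance(query, templates[label]))

# Hypothetical reference envelopes (e.g. "footsteps", "fall"); real features would differ.
templates = {
    "footsteps": np.array([0.1, 0.4, 0.2, 0.5, 0.2, 0.4]),
    "fall":      np.array([0.1, 0.9, 0.7, 0.2, 0.1]),
}
query = np.array([0.1, 0.5, 0.3, 0.5, 0.2])
print(classify(query, templates))
```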

Implementation of interactive 3D floating image pointing device (인터렉티브 3D 플로팅 영상 포인팅 장치의 구현)

  • Shin, Dong-Hak;Kim, Eun-Soo
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.12 no.8
    • /
    • pp.1481-1487
    • /
    • 2008
  • In this paper, we propose a novel interactive 3D floating-image pointing device for use in a 3D environment. The proposed system consists of a 3D floating-image generation system based on a floating lens array and a user interface based on real-time finger detection. A user selects a single image among the floating images, and the interaction is performed effectively by triggering a button event through finger recognition using two cameras. To show the usefulness of the proposed system, we carried out experiments, and the preliminary results are presented.
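
As a rough illustration of the two-camera interaction step, the sketch below triangulates a detected fingertip with OpenCV and raises a button event when the fingertip comes within a small radius of a floating image. The projection matrices, pixel coordinates, and button position are placeholder values, not the paper's calibration or detection pipeline.

```python
import numpy as np
import cv2

# Placeholder 3x4 projection matrices for two calibrated cameras (assumed known).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))]).astype(np.float64)
P2 = np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])]).astype(np.float64)

def triangulate_fingertip(pt1, pt2):
    """Triangulate one fingertip from its detected positions in the two cameras."""
    pts1 = np.array(pt1, dtype=np.float64).reshape(2, 1)
    pts2 = np.array(pt2, dtype=np.float64).reshape(2, 1)
    X_h = cv2.triangulatePoints(P1, P2, pts1, pts2)   # 4x1 homogeneous point
    return (X_h[:3] / X_h[3]).ravel()

def button_pressed(fingertip_xyz, button_xyz, radius=0.02):
    """Fire a 'press' event when the fingertip enters a sphere around the floating button."""
    return np.linalg.norm(fingertip_xyz - np.asarray(button_xyz)) < radius

tip = triangulate_fingertip((0.31, 0.22), (0.29, 0.22))
print(tip, button_pressed(tip, (0.0, 0.0, 0.5)))
```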

Modeling and Calibration of a 3D Robot Laser Scanning System (3차원 로봇 레이저 스캐닝 시스템의 모델링과 캘리브레이션)

  • Lee Jong-Kwang;Yoon Ji Sup;Kang E-Sok
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.11 no.1
    • /
    • pp.34-40
    • /
    • 2005
  • In this paper, we describe the modeling of a 3D robot laser scanning system consisting of a laser stripe projector, a camera, and a 5-DOF robot, and we propose a calibration method for it. Nonlinear radial distortion is included in the camera model to improve calibration accuracy. The 3D range data are calculated using the optical triangulation principle, which exploits the geometric relationship between the camera and the laser stripe plane. For optimal estimation of the system model parameters, a real-coded genetic algorithm is applied in the calibration process. Experimental results show that the constructed system can measure 3D positions to within about 1 mm of error. The proposed scheme can be applied to kinematically dissimilar robot systems without loss of generality and has potential for recognition of unknown environments.
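
The optical triangulation principle mentioned above amounts to a ray-plane intersection: an undistorted image point defines a viewing ray from the camera, and intersecting that ray with the calibrated laser-stripe plane gives the 3D point. The intrinsics and plane parameters below are illustrative placeholders rather than the paper's calibrated values, and radial distortion is assumed to have been removed already.

```python
import numpy as np

# Placeholder calibrated quantities, expressed in the camera frame.
K = np.array([[800.0,   0.0, 320.0],    # camera intrinsic matrix
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
plane_n = np.array([0.0, -0.7071, 0.7071])   # laser-plane unit normal
plane_d = -0.3                                # plane equation: n . X + d = 0

def triangulate(u, v):
    """Back-project an (already undistorted) pixel and intersect the ray with the laser plane."""
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])   # viewing-ray direction
    t = -plane_d / (plane_n @ ray)                   # ray-plane intersection parameter
    return t * ray                                   # 3D point in camera coordinates

print(triangulate(350.0, 260.0))
```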

A Study on Voice Recognition-Based Interactive Media Art - Focusing on the Sound-Visual Interactive Installation "Water Music" - (음성인식 기반 인터렉티브 미디어아트의 연구 - 소리-시각 인터렉티브 설치미술 "Water Music" 을 중심으로-)

  • Lee, Myung-Hak;Jiang, Cheng-Ri;Kim, Bong-Hwa;Kim, Kyu-Jung
    • Proceedings of the HCI Society of Korea Conference
    • /
    • 2008.02a
    • /
    • pp.354-359
    • /
    • 2008
  • This audio-visual interactive installation is composed of a video projection and digital interface technology combined with the viewer's voice recognition. The viewer can interact with the computer-generated moving images growing on the screen by blowing his/her breath or making sounds. This symbiotic audio and visual installation environment allows viewers to experience an illusionistic space physically as well as psychologically. The main programming technologies used to generate the moving water waves that interact with the viewer in this installation are Visual C++ and the DirectX SDK. For making the water waves, full-3D rendering technology and a particle system were used.
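
The interactive water surface can be approximated with the classic two-buffer height-field ripple scheme, where the viewer's voice or breath amplitude injects a disturbance that is then propagated and damped each frame. The installation itself is implemented in Visual C++ and DirectX; the numpy sketch below only mirrors the idea, and the injected amplitude is a placeholder for a real microphone level.

```python
import numpy as np

N = 128                       # grid resolution of the water surface
prev = np.zeros((N, N))       # height field at time t-1
curr = np.zeros((N, N))       # height field at time t
DAMPING = 0.99

def inject_sound(amplitude, x=N // 2, y=N // 2):
    """Disturb the surface in proportion to the viewer's voice/breath amplitude."""
    curr[y, x] -= amplitude

def step():
    """Two-buffer ripple update: neighbour average minus previous height, then damping."""
    global prev, curr
    nxt = np.zeros_like(curr)
    nxt[1:-1, 1:-1] = (curr[:-2, 1:-1] + curr[2:, 1:-1] +
                       curr[1:-1, :-2] + curr[1:-1, 2:]) / 2.0 - prev[1:-1, 1:-1]
    nxt *= DAMPING
    prev, curr = curr, nxt

inject_sound(amplitude=1.0)        # e.g. RMS level of one microphone frame
for _ in range(10):
    step()
print(curr.min(), curr.max())
```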

Development of Realtime Phonetic Typewriter (실시간 음성타자 시스템 구현)

  • Cho, W.Y.;Choi, D.I.
    • Proceedings of the KIEE Conference
    • /
    • 1999.11c
    • /
    • pp.727-729
    • /
    • 1999
  • We have developed a real-time phonetic typewriter implemented on an IBM PC with a sound card, running on Windows 95. In this system, analysis of the speech signal, learning of the neural network, labeling of the output neurons, and visualization of the recognition results are all performed in real time. The development environment for speech processing is established by adding various functions such as editing, saving, and loading of speech data and 3-D or gray-level display of the spectrogram. Recognition experiments using Korean phones gave accuracies of 71.42% for 13 basic consonants and 90.01% for 7 basic vowels.
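
A small sketch of the spectrogram analysis and gray-level display mentioned in the abstract, applied here to a synthetic signal; the sampling rate and frame parameters are arbitrary assumptions, and the neural-network labeling stage is not reproduced.

```python
import numpy as np
from scipy.signal import spectrogram
import matplotlib.pyplot as plt

fs = 16000                                   # assumed sampling rate
t = np.arange(0, 1.0, 1.0 / fs)
x = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 1200 * t)  # stand-in for speech

# Short-time spectral analysis (frame length and overlap chosen arbitrarily).
f, frames, Sxx = spectrogram(x, fs=fs, nperseg=256, noverlap=128)

# Gray-level spectrogram display, analogous to the system's visualization step.
plt.pcolormesh(frames, f, 10 * np.log10(Sxx + 1e-12), cmap="gray")
plt.xlabel("time [s]")
plt.ylabel("frequency [Hz]")
plt.show()
```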

Korean Single-Vowel Recognition Using Cumulants in Color Noisy Environment (유색 잡음 환경하에서 Cumulant를 이용한 한국어 단모음 인식)

  • Lee, Hyung-Gun;Yang, Won-Young;Cho, Yong-Soo
    • The Journal of the Acoustical Society of Korea
    • /
    • v.13 no.2
    • /
    • pp.50-59
    • /
    • 1994
  • This paper presents a speech recognition method that uses third-order cumulants as the feature vector and a neural network for recognition. The use of higher-order cumulants provides the desirable decoupling of Gaussian noise from speech, which allows the coefficients of the AR model to be estimated without bias. Unlike the conventional method based on second-order statistics, the proposed one exhibits low bias even at an SNR as low as 0 dB, at the expense of higher variance. Computer simulation confirms that the recognition rate for Korean single vowels with the cumulant-based method is much higher than with the conventional method, even at low SNR.
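
The third-order cumulant features can be sketched as sample averages of E[x(n) x(n+t1) x(n+t2)] over a grid of lags; for a zero-mean signal this equals the third moment, and it vanishes for Gaussian noise, which is what decouples the noise from the speech. The lag range and the random frame below are placeholders for real vowel frames.

```python
import numpy as np

def third_order_cumulants(x, max_lag=5):
    """Sample third-order cumulants c3(t1, t2) = E[x(n) x(n+t1) x(n+t2)] of a zero-mean frame."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()                          # zero mean: cumulant equals the third moment
    n = len(x)
    feats = []
    for t1 in range(max_lag + 1):
        for t2 in range(t1, max_lag + 1):     # use the symmetry c3(t1, t2) = c3(t2, t1)
            m = n - t2                        # number of valid triple products
            feats.append(np.mean(x[:m] * x[t1:t1 + m] * x[t2:t2 + m]))
    return np.array(feats)                    # feature vector fed to the recognizer

frame = np.random.default_rng(0).normal(size=400)   # stand-in for one vowel frame
print(third_order_cumulants(frame).shape)
```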

Object Detection and Localization on Map using Multiple Camera and Lidar Point Cloud

  • Pansipansi, Leonardo John;Jang, Minseok;Lee, Yonsik
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2021.10a
    • /
    • pp.422-424
    • /
    • 2021
  • In this paper, we present an approach that fuses multiple RGB cameras, used for visual object recognition based on deep learning with a convolutional neural network, with 3D Light Detection and Ranging (LiDAR) to observe the environment and build a 3D point cloud map for estimating object distance and position. The goal of perception with multiple cameras is to extract the crucial static and dynamic objects around the autonomous vehicle, especially in blind spots, which helps the AV navigate toward its goal. Running object detection on numerous cameras tends to slow real-time processing, so the convolutional neural network used to address this problem must also suit the capacity of the hardware. The localization of the classified detected objects is based on the 3D point cloud environment. The LiDAR point cloud data are first parsed, and the algorithm used is based on the 3D Euclidean clustering method, which localizes the objects accurately. We evaluated the method on our own dataset collected with a VLP-16 and multiple cameras, and the results show the effectiveness of the method and the multi-sensor fusion strategy.
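
The 3D Euclidean clustering step can be sketched as KD-tree region growing: points whose neighbours lie within a distance threshold are grouped together, and small groups are discarded. The tolerance, minimum cluster size, and random points below are placeholders rather than the paper's VLP-16 data or exact parameters.

```python
import numpy as np
from scipy.spatial import cKDTree

def euclidean_clusters(points, tol=0.5, min_size=10):
    """Group 3D points whose neighbours lie within `tol` metres (simple region growing)."""
    tree = cKDTree(points)
    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        queue, cluster = [seed], [seed]
        while queue:
            idx = queue.pop()
            for nb in tree.query_ball_point(points[idx], r=tol):
                if nb in unvisited:
                    unvisited.remove(nb)
                    queue.append(nb)
                    cluster.append(nb)
        if len(cluster) >= min_size:
            clusters.append(np.array(cluster))
    return clusters   # each entry holds the point indices of one detected object

cloud = np.random.default_rng(1).uniform(-5, 5, size=(2000, 3))   # stand-in point cloud
print(len(euclidean_clusters(cloud)))
```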

A Study on the Method for Reconstructing the Shell Plates Surface from Shell Template Offset Drawing (Shell Template Offset 도면을 활용한 선체 곡판 형상 복원 방법에 관한 연구)

  • Hwang, Inhyuck;Son, Seunghyeok
    • Journal of the Society of Naval Architects of Korea
    • /
    • v.56 no.1
    • /
    • pp.66-74
    • /
    • 2019
  • In the field of shipbuilding design, the use of 3D CAD is becoming commonplace, and most large shipyards carry out 3D design. At the production site, however, workers still work from 2D drawings rather than 3D models. This tendency is even more pronounced in small shipyards and block manufacturing shops. In particular, a manufacturing shop doing outsourced block work may not be provided with a 3D model at all, even though the demand for 3D models in the production field is steadily increasing. It would therefore be helpful if a 3D model could be generated from a 2D drawing. In this paper, we propose a method to extract the template and unfolded surface shape information from a shell template offset drawing using computer vision technology. A 3D surface model is then reconstructed and visualized from the extracted information. The result of this study should be helpful in work environments where 3D model information cannot be obtained.
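
Once offset values have been extracted from the drawing, the curved plate surface can be reconstructed by interpolating them over a regular grid. The sketch below does this with scipy's griddata on made-up sample points, which stand in for the template data actually extracted in the paper.

```python
import numpy as np
from scipy.interpolate import griddata

# Hypothetical (x, y, offset) samples extracted from a shell template offset drawing.
rng = np.random.default_rng(2)
xy = rng.uniform(0, 10, size=(200, 2))
z = 0.05 * xy[:, 0] ** 2 + 0.02 * xy[:, 1] ** 2      # stand-in plate offsets

# Interpolate the scattered offsets onto a regular grid to obtain a surface model.
gx, gy = np.meshgrid(np.linspace(0, 10, 50), np.linspace(0, 10, 50))
surface = griddata(xy, z, (gx, gy), method="cubic")

print(np.nanmin(surface), np.nanmax(surface))
```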

Stroke Based Hand Gesture Recognition by Analyzing a Trajectory of Polhemus Sensor (Polhemus 센서의 궤적 정보 해석을 이용한 스트로크 기반의 손 제스처 인식)

  • Kim, In-Cheol;Lee, Nam-Ho;Lee, Yong-Bum;Chien, Sung-Il
    • Journal of the Korean Institute of Telematics and Electronics C
    • /
    • v.36C no.8
    • /
    • pp.46-53
    • /
    • 1999
  • We have developed a glove-based hand gesture recognition system for recognizing the 3D gestures of operators in a remote work environment. A Polhemus sensor attached to a PinchGlove is employed to obtain the sequence of 3D positions along a hand trajectory. These 3D data are then encoded as the input to our recognition system. We propose using strokes, modeled by HMMs, as the basic units. Gesture models are constructed by concatenating stroke HMMs, so HMMs for newly defined gestures can be created without retraining their parameters. By using stroke models rather than gesture models, we thus improve the extensibility of the system. Experimental results for 16 different gestures show that our stroke-based composite HMM performs better than the conventional gesture-based HMM.
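
The stroke-to-gesture composition can be sketched as block-stacking the stroke HMMs' parameters: part of the transition mass of the last state of one stroke is routed into the initial distribution of the next, so a new gesture model is assembled without retraining. The exit probability and the toy left-to-right stroke parameters below are assumptions for illustration, not the paper's trained models.

```python
import numpy as np

def concat_stroke_hmms(strokes, exit_prob=0.1):
    """Build a composite left-to-right HMM from stroke HMMs given as (startprob, transmat, means).

    The last state of each stroke keeps (1 - exit_prob) of its transition mass and passes
    exit_prob into the initial distribution of the following stroke.
    """
    sizes = [len(s[0]) for s in strokes]
    n = sum(sizes)
    start = np.zeros(n)
    trans = np.zeros((n, n))
    means = np.vstack([s[2] for s in strokes])           # stacked emission means

    offset = 0
    start[:sizes[0]] = strokes[0][0]                      # gesture starts in the first stroke
    for k, (pi, A, _) in enumerate(strokes):
        m = sizes[k]
        trans[offset:offset + m, offset:offset + m] = A   # copy the stroke's own transitions
        if k + 1 < len(strokes):                          # bridge to the next stroke
            last = offset + m - 1
            trans[last, :] *= (1.0 - exit_prob)
            nxt = offset + m
            trans[last, nxt:nxt + sizes[k + 1]] = exit_prob * strokes[k + 1][0]
        offset += m
    return start, trans, means

# Two toy 2-state left-to-right stroke models (startprob, transmat, 1-D state means).
stroke_a = (np.array([1.0, 0.0]), np.array([[0.7, 0.3], [0.0, 1.0]]), np.array([[0.0], [1.0]]))
stroke_b = (np.array([1.0, 0.0]), np.array([[0.6, 0.4], [0.0, 1.0]]), np.array([[2.0], [3.0]]))
pi, A, mu = concat_stroke_hmms([stroke_a, stroke_b])
print(A.round(2))
```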

Study of Traffic Sign Auto-Recognition (교통 표지판 자동 인식에 관한 연구)

  • Kwon, Mann-Jun
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.15 no.9
    • /
    • pp.5446-5451
    • /
    • 2014
  • Because manual processing of electronic maps for navigation terminals introduces mistakes, this paper proposes automatic offline recognition of traffic signs, which are an essential ingredient of navigation information. Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA), which have been widely used in 2D face recognition and other computer vision and pattern recognition applications, were used to recognize the traffic signs. First, using PCA, the high-dimensional 2D image data were projected onto a low-dimensional feature vector. LDA then maximized the between-class scatter and minimized the within-class scatter of the low-dimensional feature vectors obtained from PCA. Traffic signs extracted from a real-world road environment were recognized successfully with a 92.3% recognition rate using the 40 features produced by the proposed algorithm.
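
A minimal scikit-learn sketch of the described pipeline: the sign images are projected into a 40-dimensional PCA subspace, and LDA then classifies in that subspace by maximizing between-class scatter relative to within-class scatter. The synthetic arrays stand in for real traffic-sign images.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline

# Stand-in data: 300 flattened 32x32 "sign images" from 10 classes.
rng = np.random.default_rng(3)
X = rng.normal(size=(300, 32 * 32))
y = rng.integers(0, 10, size=300)

# PCA reduces the 1024-D images to 40 features; LDA then classifies in that subspace.
model = make_pipeline(PCA(n_components=40), LinearDiscriminantAnalysis())
model.fit(X[:200], y[:200])
print("accuracy on held-out stand-in data:", model.score(X[200:], y[200:]))
```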