• Title/Abstract/Keywords: 3D image sensor


A study on hand gesture recognition using 3D hand feature (3차원 손 특징을 이용한 손 동작 인식에 관한 연구)

  • Bae Cheol-Soo
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.10 no.4
    • /
    • pp.674-679
    • /
    • 2006
  • In this paper a gesture recognition system using 3D feature data is described. The system relies on a novel 3D sensor that generates a dense range image of the scene. The main novelty of the proposed system, with respect to other 3D gesture recognition techniques, is its capability for robust recognition of complex hand postures such as those encountered in sign language alphabets. This is achieved by explicitly employing 3D hand features. Moreover, the proposed approach does not rely on colour information, and guarantees robust segmentation of the hand under varying illumination conditions and scene contents. Several novel 3D image analysis algorithms are presented, covering the complete processing chain: 3D image acquisition, arm segmentation, hand-forearm segmentation, hand pose estimation, 3D feature extraction, and gesture classification. The proposed system is tested in an application scenario involving the recognition of sign-language postures.
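
A minimal Python sketch of the processing chain named in this abstract (depth-based arm segmentation, hand-forearm split, 3D feature extraction, classification). All thresholds, the feature set, and the nearest-prototype classifier are illustrative assumptions, not the paper's actual algorithms.

```python
import numpy as np

def segment_arm(range_image, max_depth=0.8):
    """Crude depth-threshold segmentation: keep pixels nearer than max_depth (m)."""
    return range_image < max_depth

def split_hand_forearm(points):
    """Split the arm point cloud along its principal axis; distal part ~ hand."""
    centred = points - points.mean(axis=0)
    u = np.linalg.svd(centred, full_matrices=False)[2][0]  # principal direction
    t = centred @ u
    return points[t > np.percentile(t, 60)]                # keep distal 40%

def hand_features(hand):
    """Toy 3D descriptors: bounding extent, spread, and point count."""
    return np.hstack([np.ptp(hand, axis=0), hand.std(axis=0), len(hand)])

def classify(feat, prototypes, labels):
    """Nearest-prototype posture classification over stored feature vectors."""
    return labels[int(np.argmin(np.linalg.norm(prototypes - feat, axis=1)))]

# usage with synthetic data
hand = split_hand_forearm(np.random.rand(500, 3))
print(hand_features(hand).round(2))
```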

Design of a CMOS Image Sensor Based on a Low Power Single-Slope ADC (저전력 Single-Slope ADC를 사용한 CMOS 이미지 센서의 설계)

  • Kwon, Hyuk-Bin;Kim, Dae-Yun;Song, Min-Kyu
    • Journal of the Institute of Electronics Engineers of Korea SD
    • /
    • v.48 no.2
    • /
    • pp.20-27
    • /
    • 2011
  • A CMOS Image Sensor (CIS) mounted on a mobile appliance always requires low power consumption because of the battery life cycle. In this paper, we propose novel power reduction techniques such as a data flip-flop circuit with leakage current elimination and a low power single-slope A/D converter with a novel comparator. Based on a 0.13 μm CMOS process, the chip achieves QVGA resolution (320×240 pixels) with a pixel pitch of 2.25 μm and a 4-Tr active pixel sensor structure. From the experimental results, the ADC embedded in the CIS has a 10-bit resolution, the operating speed of the CIS is 16 frames/s, and the power dissipation is 25 mW at a 3.3 V (analog)/1.8 V (digital) power supply. Compared with conventional CIS designs, the power consumption is reduced by approximately 22% in sleep mode and 20% in operating mode.
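
The single-slope conversion itself is easy to model: a counter runs while a linear ramp stays below the sampled pixel voltage, and the count at the comparator trip point is the output code. A small behavioural sketch (parameter values are arbitrary, not the chip's):

```python
import numpy as np

def single_slope_adc(v_pixel, v_ref=1.0, bits=10):
    """Return the digital code for an analog sample in [0, v_ref)."""
    ramp = np.linspace(0.0, v_ref, 2 ** bits, endpoint=False)  # counter ramp
    return int(np.searchsorted(ramp, v_pixel))  # index where ramp >= v_pixel

print([single_slope_adc(v) for v in (0.1, 0.5, 0.99)])  # [103, 512, 1014]
```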

Development of Stretchable Joint Motion Sensor for Rehabilitation based on Silver Nanoparticle Direct Printing (은 나노입자 프린팅 기반의 재활치료용 신축성 관절센서 개발)

  • Chae, Woen-Sik;Jung, Jae-Hu
    • Korean Journal of Applied Biomechanics
    • /
    • v.31 no.3
    • /
    • pp.183-188
    • /
    • 2021
  • Objective: The purpose of this study was to develop a stretchable joint motion sensor based on silver nanoparticles. Such a sensor can serve as equipment for rehabilitation and for the analysis of joint movement. Method: In this study, a precursor solution was created and a nozzle printer (Musashi, Image Master 350PC) was used to print it on a circuit board. A source meter (Keithley-2450) was used to evaluate the change in electric resistance as the sensor stretches. In addition, the sensor was attached over the center of the knee joint of 2 male adults, who performed knee flexion-extension for the accuracy analysis; 3 infrared cameras (100 Hz, Motion Master 100, Visol Inc., Korea) were also used for three-dimensional motion analysis. Descriptive statistics were used to compare the accuracy of the joint motion variables measured with the sensor against the 3D motion analysis. Results: The electric resistance of the sensor increased about 30-fold from its initial value at 50% elongation, and the resistance values at 10%, 20%, 30%, and 40% elongation were clearly distinguishable. Comparing the movement variables obtained from the sensor and the 3D cameras, agreement was about 99% during knee extension, whereas it was about 80% during the flexion phase. Conclusion: In this research, a stretchable joint motion sensor was created based on highly conductive silver nanoparticles. When the sensor stretches, the distance between nanoparticles increases, which gradually disconnects the electric circuit and raises the electric resistance. Knee joint angles estimated from the sensor's electric resistance showed results and trends similar to those of the 3D motion analysis. However, the sensor's electric resistance became unstable when it was stretched to its maximum length or subjected to numerous joint movements. The sensor therefore needs improvement so that it measures motion stably under all conditions.
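
As a rough illustration of how such a sensor could be read out, the monotonic resistance-elongation relation reported above can be calibrated once and then inverted by interpolation. The calibration numbers below are hypothetical placeholders, consistent only with the reported 30-fold increase at 50% elongation.

```python
import numpy as np

elongation = np.array([0.0, 0.1, 0.2, 0.3, 0.4, 0.5])   # strain
r_ratio = np.array([1.0, 2.5, 6.0, 12.0, 20.0, 30.0])   # R / R0 (placeholder)

def strain_from_resistance(r, r0):
    """Invert the monotonic calibration curve by linear interpolation."""
    return float(np.interp(r / r0, r_ratio, elongation))

print(strain_from_resistance(r=600.0, r0=100.0))  # ~0.2 for R/R0 = 6
```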

3D Head Modeling using Depth Sensor

  • Song, Eungyeol;Choi, Jaesung;Jeon, Taejae;Lee, Sangyoun
    • Journal of International Society for Simulation Surgery
    • /
    • v.2 no.1
    • /
    • pp.13-16
    • /
    • 2015
  • Purpose: We conducted a study on the reconstruction of the head shape in 3D using a ToF depth sensor. A time-of-flight camera (ToF camera) is a range imaging camera system that resolves distance based on the known speed of light, measuring the time of flight of a light signal between the camera and the subject for each point of the image. This is the safest way of measuring the head shape of plagiocephaly patients in 3D. The texture, appearance and size of the head were reconstructed from the measured data, and we used the SDF (signed distance function) method for a precise reconstruction. Materials and Methods: To generate a precise model, the mesh was generated using marching cubes and the SDF. Results: The ground truth was determined by measuring 10 participants three times each, and the corresponding parts of the 3D models created in this experiment were measured as well. The actual head circumference and the reconstructed model were both measured according to the layer 3 standard, and the measurement errors were calculated. As a result, we obtained accurate results with an average error of 0.9 cm (standard deviation 0.9, min 0.2, max 1.4). Conclusion: The suggested method was able to complete the 3D model while minimizing errors, and the model is effective for quantitative and objective evaluation. However, the measurement range somewhat lacks the 3D information needed for the manufacture of protective helmets, since measurements were made only according to the layer 3 standard. The measurement range will therefore need to be widened, by scanning the entire head circumference, to enable production of more precise and effective protective helmets in the future.
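
The mesh-generation step named in the abstract (marching cubes over an SDF volume) can be reproduced in miniature with scikit-image; here the SDF is a synthetic sphere rather than fused depth data.

```python
import numpy as np
from skimage.measure import marching_cubes

grid = np.linspace(-1.0, 1.0, 64)
x, y, z = np.meshgrid(grid, grid, grid, indexing="ij")
sdf = np.sqrt(x**2 + y**2 + z**2) - 0.5      # signed distance to a sphere

verts, faces, normals, values = marching_cubes(sdf, level=0.0)
print(verts.shape, faces.shape)              # mesh vertices / triangles
```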

Development of the Computer Vision based Continuous 3-D Feature Extraction System via Laser Structured Lighting (레이저 구조광을 이용한 3차원 컴퓨터 시각 형상정보 연속 측정 시스템 개발)

  • Im, D. H.;Hwang, H.
    • Journal of Biosystems Engineering
    • /
    • v.24 no.2
    • /
    • pp.159-166
    • /
    • 1999
  • A system to continuously extract real 3-D geometric feature information from the 2-D image of an object fed randomly on a conveyor has been developed. Two sets of structured laser lights were utilized, and the laser structured-light projection image was acquired by the camera on the signal of a photo-sensor mounted on the conveyor. A camera coordinate calibration matrix, which transforms 2-D image coordinates into 3-D world space coordinates, was obtained using 6 known points. The maximum error after calibration was 1.5 mm within a height range of 103 mm. The correlation equation between the shift of the laser light and the height was generated; height estimated with this correlation showed a maximum error of 0.4 mm within the same 103 mm height range. Interactive 3-D geometric feature extraction software was developed using Microsoft Visual C++ 4.0 under the Windows environment. The extracted 3-D geometric feature information was reconstructed into a 3-D surface using MATLAB.
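
The calibration step described here, a projection matrix estimated from 6 known points, matches the classic direct linear transform; a sketch with the standard solver (the correspondences would come from the known calibration points, not shown):

```python
import numpy as np

def dlt_calibrate(world, image):
    """Solve the 3x4 projection matrix from N >= 6 point correspondences."""
    rows = []
    for (X, Y, Z), (u, v) in zip(world, image):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    return vt[-1].reshape(3, 4)               # null-space solution

def project(P, X):
    """Map one 3-D world point to 2-D image coordinates."""
    x = P @ np.append(np.asarray(X, dtype=float), 1.0)
    return x[:2] / x[2]
```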


Application of the 3D Discrete Wavelet Transformation Scheme to Remotely Sensed Image Classification

  • Yoo, Hee-Young;Lee, Ki-Won;Kwon, Byung-Doo
    • Korean Journal of Remote Sensing
    • /
    • v.23 no.5
    • /
    • pp.355-363
    • /
    • 2007
  • The 3D DWT (three-dimensional discrete wavelet transform) scheme is potentially a useful one for analyzing both spatial and spectral information. Nevertheless, few researchers have attempted to process or classify remotely sensed images using the 3D DWT. This study aims to apply the 3D DWT to the land cover classification of optical and SAR (Synthetic Aperture Radar) images. The results are evaluated quantitatively and compared with those of a traditional classification technique. According to the experimental results, the 3D DWT yields classification results superior to conventional techniques, especially for high-resolution imagery and SAR imagery. The 3D DWT scheme can likely be extended to multi-temporal or multi-sensor image classification.
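
The decomposition itself is a one-liner with PyWavelets, which handles n-dimensional input; the image cube below (rows x columns x bands) is random dummy data standing in for a co-registered multi-band scene.

```python
import numpy as np
import pywt

cube = np.random.rand(64, 64, 8)            # spatial x spatial x spectral
coeffs = pywt.dwtn(cube, wavelet="haar")    # one 3D DWT level: 8 subbands
print(sorted(coeffs), coeffs["aaa"].shape)  # 'aaa'..'ddd', (32, 32, 4)
```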

Measurement of GMAW Bead Geometry Using Biprism Stereo Vision Sensor (바이프리즘 스테레오 시각 센서를 이용한 GMA 용접 비드의 3차원 형상 측정)

  • 이지혜;이두현;유중돈
    • Journal of Welding and Joining
    • /
    • v.19 no.2
    • /
    • pp.200-207
    • /
    • 2001
  • The three-dimensional bead profile in GMAW was measured using a biprism stereo vision sensor, which consists of an optical filter, a biprism and a CCD camera. Since a single CCD camera is used, this system has various advantages over a conventional two-camera stereo vision system, such as finding the corresponding points along the same horizontal scanline. In this work, the biprism stereo vision sensor was designed for GMAW, and a linear calibration method was proposed to determine the prism and camera parameters. Image processing techniques were employed to find the corresponding points along the pool boundary. The iso-intensity contour corresponding to the pool boundary was found at the pixel level, and a filter-based matching algorithm was used to refine the corresponding points to subpixel accuracy. Predicted bead dimensions were in broad agreement with the measured results under spray-mode and humping-bead conditions.
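
Once corresponding points are matched along a scanline, the depth recovery implied here is ordinary stereo triangulation over the virtual baseline created by the biprism; the focal length, baseline and disparities below are dummy values.

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_px, baseline_mm):
    """Z = f * B / d for the virtual stereo pair formed by the biprism."""
    return focal_px * baseline_mm / np.asarray(disparity_px, dtype=float)

print(depth_from_disparity([20, 25, 40], focal_px=1200, baseline_mm=30.0))
# larger disparity -> nearer point on the bead surface
```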


3D Omni-directional Vision SLAM using a Fisheye Lens and a Laser Scanner (어안 렌즈와 레이저 스캐너를 이용한 3차원 전방향 영상 SLAM)

  • Choi, Yun Won;Choi, Jeong Won;Lee, Suk Gyu
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.21 no.7
    • /
    • pp.634-640
    • /
    • 2015
  • This paper proposes a novel three-dimensional mapping algorithm for omni-directional vision SLAM based on a fisheye image and laser scanner data. The performance of SLAM has been improved by various estimation methods, by sensors with multiple functions, and by sensor fusion. Conventional 3D SLAM approaches, which mainly employ RGB-D cameras to obtain depth information, are not suitable for mobile robot applications because multi-camera RGB-D systems are larger and compute depth for omni-directional images slowly. In this paper, we used a fisheye camera installed facing downwards and a two-dimensional laser scanner mounted at a constant distance from the camera. We calculated fusion points from the plane coordinates of obstacles obtained by the two-dimensional laser scanner and the outlines of obstacles obtained by the omni-directional image sensor, which acquires the surrounding view at the same time. The effectiveness of the proposed method is confirmed by comparing maps obtained with the proposed algorithm against real maps.
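
A sketch of the fusion step under an assumed equidistant fisheye model (r = f * theta) and a hypothetical mounting geometry: a laser point in the horizontal scan plane is projected into the downward-facing fisheye image so that the two sensors' obstacle measurements can be associated.

```python
import numpy as np

def laser_to_fisheye(x, y, h, f, cx, cy):
    """Project a scan-plane point (x, y), a height h below the camera,
    onto fisheye pixel coordinates with the equidistant model r = f*theta."""
    theta = np.arctan2(np.hypot(x, y), h)   # angle from the optical axis
    r = f * theta                           # equidistant radial projection
    phi = np.arctan2(y, x)                  # azimuth around the axis
    return cx + r * np.cos(phi), cy + r * np.sin(phi)

print(laser_to_fisheye(1.0, 0.5, h=0.4, f=300.0, cx=640, cy=480))
```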

Deep Learning Based Gray Image Generation from 3D LiDAR Reflection Intensity (딥러닝 기반 3차원 라이다의 반사율 세기 신호를 이용한 흑백 영상 생성 기법)

  • Kim, Hyun-Koo;Yoo, Kook-Yeol;Park, Ju H.;Jung, Ho-Youl
    • IEMEK Journal of Embedded Systems and Applications
    • /
    • v.14 no.1
    • /
    • pp.1-9
    • /
    • 2019
  • In this paper, we propose a method of generating a 2D gray image from 3D LiDAR reflection intensity. The proposed method uses a Fully Convolutional Network (FCN) to generate the gray image from the 2D reflection intensity map projected from the LiDAR 3D intensity data. Both the encoder and the decoder of the FCN are configured with several convolution blocks in a symmetric fashion. Each convolution block consists of a convolution layer with a 3×3 filter, a batch normalization layer and an activation function. The performance of the proposed architecture is empirically evaluated by varying the depth of the convolution blocks. The well-known KITTI data set, covering various scenarios, is used for training and performance evaluation. The simulation results show that the proposed method yields improvements of 8.56 dB in peak signal-to-noise ratio and 0.33 in structural similarity index compared with conventional interpolation methods such as inverse distance weighting and nearest neighbor. The proposed method could serve as an assistance tool in night-time driving systems for autonomous vehicles.
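
The convolution block described above (3×3 convolution, batch normalization, activation) is straightforward to express; the PyTorch sketch below uses illustrative channel counts and depth, not the paper's tuned configuration.

```python
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    """3x3 convolution + batch normalization + activation, as in the abstract."""
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, kernel_size=3, padding=1),
        nn.BatchNorm2d(c_out),
        nn.ReLU(inplace=True),
    )

encoder = nn.Sequential(conv_block(1, 32), nn.MaxPool2d(2), conv_block(32, 64))
decoder = nn.Sequential(nn.Upsample(scale_factor=2), conv_block(64, 32),
                        nn.Conv2d(32, 1, kernel_size=1))  # gray image out

x = torch.randn(1, 1, 64, 256)      # projected reflection-intensity map
print(decoder(encoder(x)).shape)    # torch.Size([1, 1, 64, 256])
```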

Comparative study of data selection in data integration for 3D building reconstruction

  • Nakagawa, Masafumi;Shibasaki, Ryosuke
    • Proceedings of the KSRS Conference
    • /
    • 2003.11a
    • /
    • pp.1393-1395
    • /
    • 2003
  • In this research, we present a data integration method that combines ultra-high-resolution images with complementary data for 3D building reconstruction. In our method, Three Line Sensor (TLS) images are used as the ultra-high-resolution imagery, in combination with 2D digital maps, DSMs, or both. The reconstructed 3D buildings, the correctness rate and the accuracy of the results are presented. Based on these results, an optimized combination scheme of data sets, sensors and methods is proposed.
