• Title/Summary/Keyword: Camera Technology

Obtaining 3D Shape of Specular Surface Using Five Degrees of Freedom Camera System

  • Yusuf, Khairi;Miyake, Tetsuo
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 1998.06b / pp.41-44 / 1998
  • In this paper, a new method of obtaining the shape of a specular surface using a five-degrees-of-freedom (5DOF) camera system is described. The normal vectors of the surface are extracted by bringing the camera axis into coincidence with the surface normal vector, the task the 5DOF camera is used to fulfill. From the normal vector data, the shape of the surface is reconstructed. The results show that the methodology measures the 3-D shape of an object with good accuracy.
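
The final step described in this abstract, reconstructing a surface from per-pixel normal vectors, can be sketched by converting normals to slopes and path-integrating them over a regular grid. This is a generic integration sketch, not the paper's algorithm; the grid spacing and integration path are assumptions.

```python
import numpy as np

def height_from_normals(normals, dx=1.0):
    """Reconstruct a height map z(x, y) from unit surface normals.

    normals: (H, W, 3) array of (nx, ny, nz) on a regular grid with
    spacing dx. Slopes are p = -nx/nz and q = -ny/nz; a simple path
    integration (down the first column, then across each row) gives z.
    Assumes a smooth surface with nz well away from zero.
    """
    nx, ny, nz = normals[..., 0], normals[..., 1], normals[..., 2]
    p = -nx / nz  # dz/dx
    q = -ny / nz  # dz/dy
    z = np.zeros(p.shape)
    z[1:, 0] = np.cumsum(q[1:, 0] * dx)               # first column
    z[:, 1:] = z[:, :1] + np.cumsum(p[:, 1:] * dx, axis=1)  # each row
    return z
```

For a tilted plane z = 0.5x, every normal is proportional to (-0.5, 0, 1), and the recovered height grows by 0.5 per unit of x.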

Designing the Optical Structure of a Multiscale Gigapixel Camera (멀티스케일방식의 기가픽셀카메라의 광학구조설계)

  • Moon, Hee jun;Rim, Cheon-Seog
    • Korean Journal of Optics and Photonics / v.27 no.1 / pp.25-31 / 2016
  • We derive 28 optical structural equations based on our two previous theoretical and experimental papers on a gigapixel camera, published in 2013 and 2015 respectively. Utilizing these 28 equations, we obtain an integrated understanding of the optical structure of a multiscale gigapixel camera system and can compute numerical values for the structural parameters directly and easily.

Fuzzy Navigation Control of Mobile Robot equipped with CCD Camera (퍼지제어를 이용한 카메라가 장착된 이동로봇의 경로제어)

  • Cho, Jung-Tae;Lee, Seok-Won;Nam, Boo-Hee
    • Journal of Industrial Technology / v.20 no.B / pp.195-200 / 2000
  • This paper describes a path planning method in an unknown environment for an autonomous mobile robot equipped with a CCD (Charge-Coupled Device) camera. The mobile robot moves along a guideline, and the CCD camera is used to detect the guideline's presence. The wavelet transform is used to find the edges of the guideline, which makes the image processing easier and faster. We build fuzzy control rules that take the image data as input and determine the position and heading of the mobile robot: the center value of the guideline is the input to the fuzzy logic controller, and the steering angle of the mobile robot is its output. Actual experiments show that the mobile robot moves effectively to the target position under the applied fuzzy control.
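
The controller structure this abstract describes (guideline-center offset in, steering angle out) can be sketched with a minimal zero-order Sugeno-style fuzzy rule base. The membership functions and output angles below are illustrative assumptions, not the paper's values.

```python
def tri(x, a, b, c):
    """Triangular membership function: 0 outside [a, c], peak 1 at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_steering(offset):
    """Map the guideline-center offset (normalized to [-1, 1], negative
    meaning the line appears left of the image center) to a steering
    angle in degrees. Each rule's firing strength weights a crisp
    output angle; the result is their weighted average.
    """
    rules = [
        (tri(offset, -2.0, -1.0, 0.0), +30.0),  # line far left  -> steer left
        (tri(offset, -1.0,  0.0, 1.0),   0.0),  # line centered  -> go straight
        (tri(offset,  0.0,  1.0, 2.0), -30.0),  # line far right -> steer right
    ]
    num = sum(w * angle for w, angle in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0
```

With these sets, a half-right offset of 0.5 fires the "centered" and "far right" rules equally, yielding a moderate steering angle of -15 degrees.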

A Prediction Method for Sabot-Trajectory of Projectile by using High Speed Camera Data Analysis (고속카메라 데이터 분석을 통한 발사체 지지대 분산 궤적의 근사적 예측 방법)

  • Park, Yunho;Woo, Hokil
    • Journal of the Korea Institute of Military Science and Technology / v.21 no.1 / pp.1-9 / 2018
  • In this paper, we propose a method for predicting the sabot trajectory of a projectile using high-speed camera data analysis. By analyzing the sabot trajectory from high-speed camera data, we can extract its real velocity and acceleration, including the effects of friction force, plume pressure, etc. Using these data, we predict the sabot trajectory of a projectile with variable acceleration, especially for the minimum and maximum acceleration cases, by interpolating the sabot's velocity and acceleration data. We also perform projectile launching tests to obtain the sabot trajectory under minimum and maximum thrust. Simulation results for velocity, acceleration, and sabot trajectory agree well with the real test data.
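
The interpolation idea in this abstract can be sketched as blending the acceleration profiles measured at minimum and maximum thrust, then integrating to velocity and position. The blending parameter and trapezoidal integration are assumptions for illustration, not the paper's exact scheme.

```python
import numpy as np

def predict_trajectory(t, a_min, a_max, alpha, v0=0.0, x0=0.0):
    """Predict sabot position for an intermediate thrust level.

    t: sample times; a_min, a_max: acceleration profiles extracted from
    high-speed-camera data at minimum and maximum thrust; alpha in [0, 1]
    linearly blends between them. Trapezoidal integration yields the
    velocity, then the position. Returns (position, velocity) arrays.
    """
    a = (1.0 - alpha) * np.asarray(a_min) + alpha * np.asarray(a_max)
    dt = np.diff(t)
    v = v0 + np.concatenate(([0.0], np.cumsum(0.5 * (a[1:] + a[:-1]) * dt)))
    x = x0 + np.concatenate(([0.0], np.cumsum(0.5 * (v[1:] + v[:-1]) * dt)))
    return x, v
```

For a constant acceleration of 2 m/s² over one second, the sketch recovers the textbook result v = 2t and x = t².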

Parameter Calibration of Laser Scan Camera for Measuring the Impact Point of Arrow (화살 탄착점 측정을 위한 레이저 스캔 카메라 파라미터 보정)

  • Baek, Gyeong-Dong;Cheon, Seong-Pyo;Lee, In-Seong;Kim, Sung-Shin
    • Journal of the Korean Society of Manufacturing Technology Engineers / v.21 no.1 / pp.76-84 / 2012
  • This paper presents a measurement system for an arrow's point of impact using a laser scan camera and describes the image calibration method. Calibration of a distorted image is broadly divided into explicit and implicit methods. The explicit method works directly with the camera's optical properties and its parameter-adjustment functionality, while the implicit method relies on a calibration plate that establishes relations between image pixels and target positions. To find these relations in the implicit method, we propose a performance-criteria-based polynomial model that overcomes limitations of conventional image calibration models, such as over-fitting. The proposed method is verified against 2D arrow positions captured by a SICK Ranger-D50 laser scan camera.
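
The implicit calibration this abstract describes, a polynomial map from image pixels to target positions fitted on a calibration plate, can be sketched with a plain least-squares fit. The monomial basis and fitting routine below are generic assumptions; the paper's performance criteria for choosing the polynomial order (to avoid over-fitting) are not reproduced here.

```python
import numpy as np

def poly_features(uv, order):
    """Monomial features u^i * v^j with i + j <= order, for pixels uv (N, 2)."""
    u, v = uv[:, 0], uv[:, 1]
    return np.column_stack([u**i * v**j
                            for i in range(order + 1)
                            for j in range(order + 1 - i)])

def fit_calibration(uv, xy, order):
    """Least-squares polynomial map from image pixels uv (N, 2) to
    target-plane positions xy (N, 2), as measured on a calibration
    plate. Returns a predictor function. In practice the order would
    be selected against a held-out criterion to avoid over-fitting.
    """
    A = poly_features(uv, order)
    coef, *_ = np.linalg.lstsq(A, xy, rcond=None)
    return lambda q: poly_features(np.atleast_2d(np.asarray(q, float)), order) @ coef
```

An order-1 fit recovers an affine pixel-to-plate mapping exactly; higher orders absorb lens distortion at the cost of needing more plate points.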

A Vehicle Recognition Method based on Radar and Camera Fusion in an Autonomous Driving Environment

  • Park, Mun-Yong;Lee, Suk-Ki;Shin, Dong-Jin
    • International journal of advanced smart convergence / v.10 no.4 / pp.263-272 / 2021
  • At a time when driving safety is paramount in the development and commercialization of autonomous vehicles, AI and big-data-based algorithms are being studied to enhance and optimize the recognition and detection of various static and dynamic vehicles. Many studies exploit the complementary strengths of radar and cameras to recognize the same vehicle, but they either do not use deep-learning image processing or, because of radar performance limits, can associate targets only at short range. Radar can detect vehicles reliably in conditions such as night and fog, but determining the object type from RCS values alone is inaccurate, so accurate classification of the object through camera images is required. Therefore, we propose a fusion-based vehicle recognition method that builds data sets collected by the radar and camera devices, computes the errors between the data sets, and recognizes matching detections as the same target.
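
The error-based matching step this abstract describes, declaring a radar detection and a camera detection the same target when their positional error is small, can be sketched as greedy nearest-neighbour association. The common ground frame, the distance metric, and the 2 m gate are all illustrative assumptions.

```python
import numpy as np

def associate(radar_pts, camera_pts, max_err=2.0):
    """Greedily pair radar and camera detections by positional error.

    radar_pts (n, 2) and camera_pts (m, 2) are positions projected into
    a common ground frame; a pair whose Euclidean error is below
    max_err (metres, an assumed gate) is declared the same target.
    Returns a list of (radar_index, camera_index) pairs.
    """
    pairs, used = [], set()
    for i, r in enumerate(radar_pts):
        errs = np.linalg.norm(camera_pts - r, axis=1)
        j = int(np.argmin(errs))
        if errs[j] < max_err and j not in used:
            pairs.append((i, j))
            used.add(j)
    return pairs
```

A full fusion pipeline would classify each matched target from the camera crop while taking range and velocity from the radar track.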

Implementation of a Dashcam System using a Rotating Camera (회전 카메라를 이용한 블랙박스 시스템 구현)

  • Kim, Kiwan;Koo, Sung-Woo;Kim, Doo Yong
    • Journal of the Semiconductor & Display Technology / v.19 no.4 / pp.34-38 / 2020
  • In this paper, we implement a dashcam system capable of shooting 360 degrees using a Raspberry Pi, shock sensors, distance sensors, and a rotating camera driven by a servo motor. When the distance sensor detects an object approaching the vehicle, the camera rotates to record it. In the event of an external shock, videos and images are stored on a server so that the cause of the vehicle's accident can be analyzed and the user is prevented from forging or tampering with them. We also implement functions that, when an accident occurs, transmit a message with the location and intensity of the impact and send the vehicle information to an insurance authority by linking the system with a smart device. The authority can then analyze the transmitted message and provide accident-handling information, improving the user's safety and convenience.

USING WEB CAMERA TECHNOLOGY TO MONITOR STEEL CONSTRUCTION

  • Kerry T. Slattery;Amit Kharbanda
    • International conference on construction engineering and project management / 2005.10a / pp.841-844 / 2005
  • Computer vision technology can be used to interpret the images captured by web cameras installed on construction sites and automatically quantify the results. This information can be used for quality control, productivity measurement, and directing construction. Steel frame construction is particularly well suited to automatic monitoring, as all structural members can be viewed from a small number of camera locations, and three-dimensional computer models of steel structures are frequently available in a standard electronic format. A system is being developed that interprets the 3-D model and directs a camera to look for individual members at regular intervals, determining when each is in place and reporting the results. Results from a simple lab-scale system are presented along with preliminary full-scale development.

Recognition of Model Cars Using Low-Cost Camera in Smart Toy Games (저가 카메라를 이용한 스마트 장난감 게임을 위한 모형 자동차 인식)

  • Minhye Kang;Won-Kee Hong;Jaepil Ko
    • IEMEK Journal of Embedded Systems and Applications / v.19 no.1 / pp.27-32 / 2024
  • Recently, there has been growing interest in integrating physical toys into video gaming within the game content business. This paper introduces a method that leverages a low-cost camera as an alternative to sensor attachments to meet this rising demand. We overcome the inherent limitations of low-cost cameras by proposing an optical design specifically tailored for model car recognition. The approach focuses on recognizing the underside of the car and addresses the challenges of this particular perspective. Our method employs a transfer learning model trained specifically for this task and achieves a 100% recognition rate, highlighting the importance of collecting data under various camera exposures. This paper serves as a valuable case study for incorporating low-cost cameras into vision systems.

An Efficient Camera Calibration Method for Head Pose Tracking (머리의 자세를 추적하기 위한 효율적인 카메라 보정 방법에 관한 연구)

  • Park, Gyeong-Su;Im, Chang-Ju;Lee, Gyeong-Tae
    • Journal of the Ergonomics Society of Korea / v.19 no.1 / pp.77-90 / 2000
  • The aim of this study is to develop and evaluate an efficient camera calibration method for vision-based head tracking. Tracking head movements is important in the design of an eye-controlled human/computer interface, and a vision-based head tracking system was proposed to allow the user's head to move freely during use. We propose an efficient camera calibration method to track the 3D position and orientation of the user's head accurately, and we evaluate its performance. The experimental error analysis shows that the proposed method provides a more accurate and stable camera pose (i.e., position and orientation) than the conventional direct linear transformation (DLT) method commonly used in camera calibration. The results of this study can be applied to head tracking for eye-controlled human/computer interfaces and to virtual reality technology.
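
The conventional baseline this abstract compares against, the direct linear transformation (DLT), estimates the 3x4 camera projection matrix from known 3-D/2-D point correspondences as the smallest-singular-vector solution of a homogeneous system. The sketch below is the textbook DLT, not the paper's proposed method; the point layout in the usage example is invented for illustration.

```python
import numpy as np

def dlt_projection(X, x):
    """Direct linear transformation for camera calibration.

    X: (n, 3) known 3-D calibration points; x: (n, 2) their image
    projections, n >= 6 and not all coplanar. Each correspondence
    contributes two rows of the homogeneous system A p = 0; the 3x4
    projection matrix P is the right singular vector of the smallest
    singular value, reshaped.
    """
    rows = []
    for (Xw, Yw, Zw), (u, v) in zip(X, x):
        p = [Xw, Yw, Zw, 1.0]
        rows.append(p + [0.0] * 4 + [-u * c for c in p])
        rows.append([0.0] * 4 + p + [-v * c for c in p])
    _, _, Vt = np.linalg.svd(np.asarray(rows))
    return Vt[-1].reshape(3, 4)

def project(P, Xw):
    """Project a 3-D point through P and dehomogenize to pixels."""
    h = P @ np.append(Xw, 1.0)
    return h[:2] / h[2]
```

With exact correspondences the recovered P reprojects the calibration points to machine precision; with noisy data, normalizing the coordinates before the SVD markedly improves stability.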
