• Title/Summary/Keyword: Camera Angle


A study on measurement and compensation of automobile door gap using optical triangulation algorithm (광 삼각법 측정 알고리즘을 이용한 자동차 도어 간격 측정 및 보정에 관한 연구)

  • Kang, Dong-Sung;Lee, Jeong-woo;Ko, Kang-Ho;Kim, Tae-Min;Park, Kyu-Bag;Park, Jung Rae;Kim, Ji-Hun;Choi, Doo-Sun;Lim, Dong-Wook
    • Design & Manufacturing / v.14 no.1 / pp.8-14 / 2020
  • In general, automotive parts on an assembly line are mounted automatically by robots. At such production sites, quality problems frequently arise, such as misalignment of the parts (doors, trunks, roofs, etc.) to be joined to the vehicle body, or collisions between assembly robots and components. To address these problems, part quality is currently inspected manually with mechanical jig devices outside the automated production line. Machine vision is the most widely used automotive inspection technology; it covers dimensional checks such as mounting-hole spacing as well as surface inspection for defects, dents, and bends in body panels. It is also used for robot guidance, providing position information to the robot controller so that the robot's path can be adjusted, which improves process productivity and manufacturing flexibility. The most demanding measurement task is to determine the relative position and surface characteristics of parts from images captured as each part enters the field of view of a camera mounted beside or above it. A practical problem for machine vision on automobile production lines is that lighting conditions inside the factory vary widely with time of day and weather (morning versus evening, rainy versus sunny days) because of light entering through the plant's exterior windows. In addition, because body parts are made of steel sheet, specular reflection is severe, so even a small change in illumination greatly degrades the captured image. In this study, the gap and step between the car body and the door are measured with a device that combines a laser slit light source and an LED pattern light source. The measurement result is transferred to an articulated assembly robot, which adjusts the angle and step so that the door is assembled at the optimal position relative to the body.
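
As a rough illustration of the optical-triangulation idea behind such a measuring device, the sketch below converts an imaged laser-spot offset into a range and a body-to-door step, assuming a simplified geometry (laser beam parallel to the camera axis at a known baseline); the function names and numbers are placeholders, not the authors' setup.

```python
# Minimal laser-triangulation sketch under an assumed geometry:
# laser beam parallel to the optical axis, offset by a known baseline.
def range_from_pixel(x_px: float, focal_px: float, baseline_m: float) -> float:
    """Distance to the surface from the imaged laser-spot offset.

    x_px       : offset of the laser spot from the principal point (pixels)
    focal_px   : camera focal length expressed in pixels
    baseline_m : lateral offset between laser and camera optical centre (metres)
    """
    if x_px <= 0:
        raise ValueError("laser spot must appear on the baseline side of the image")
    return focal_px * baseline_m / x_px   # Z = f * b / x for this geometry

def door_step(z_body_m: float, z_door_m: float) -> float:
    """Step between body panel and door edge along the viewing direction."""
    return z_door_m - z_body_m

# Example: 2000 px focal length, 50 mm baseline, spots imaged at 180 px and 175 px.
z_body = range_from_pixel(180.0, 2000.0, 0.05)
z_door = range_from_pixel(175.0, 2000.0, 0.05)
print(f"body {z_body:.4f} m, door {z_door:.4f} m, step {door_step(z_body, z_door) * 1000:.2f} mm")
```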

Effects of Visual Information Blockage on Landing Strategy during Drop Landing (시각 정보의 차단이 드롭랜딩 시 착지 전략에 미치는 영향)

  • Koh, Young-Chul;Cho, Joon-Haeng;Moon, Gon-Sung;Lee, Hae-Dong;Lee, Sung-Cheol
    • Korean Journal of Applied Biomechanics / v.21 no.1 / pp.31-38 / 2011
  • This study aimed to determine the effects of blocking visual feedback on lower-extremity joint dynamics. Fifteen healthy male subjects (age: 24.1 ± 2.3 yr, height: 178.7 ± 5.2 cm, weight: 73.6 ± 6.6 kg) participated. Each subject performed a single-legged landing from a 45 cm platform with the eyes open or closed. During the landing, three-dimensional kinematics of the lower extremity and the ground reaction force (GRF) were recorded using an eight-camera infrared motion analysis system (Vicon MX-F20, Oxford Metric Ltd, Oxford, UK) together with a force platform (ORG-6, AMTI, Watertown, MA). The results showed that at 50 ms before foot contact and at the time of foot contact, the ankle plantar-flexion angle was smaller (p < .05), while the knee valgus and hip flexion angles were greater, with the eyes closed than with the eyes open (p < .05). An increase in anterior GRF was observed during single-legged landing with the eyes closed (p < .05). The time to peak GRF in the medial, vertical, and posterior directions occurred significantly earlier when the eyes were closed (p < .05), and landing with the eyes closed produced a higher peak vertical loading rate (p < .05). In addition, the shock-absorbing power decreased at the ankle joint (p < .05) but increased at the hip joint when landing with the eyes closed (p < .05). With the eyes closed, landing was characterized by a less plantar-flexed ankle, a more flexed hip, and a faster time to peak GRF. These results imply that subjects adapt their landing control to different feedback conditions; training programs should therefore be introduced to reduce these injury risk factors.
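
The abstract compares times to peak GRF and peak vertical loading rates between conditions; the sketch below shows one plausible way to extract those two quantities from a force-platform trace. The sampling rate, contact threshold, and synthetic data are assumptions for illustration, not the study's processing pipeline.

```python
# Minimal sketch: time to peak GRF and peak vertical loading rate from a GRF trace.
import numpy as np

def peak_vertical_loading_rate(fz: np.ndarray, fs_hz: float, contact_thresh_n: float = 10.0):
    """Return (time_to_peak_s, peak_loading_rate_n_per_s) for a vertical GRF trace."""
    contact = np.argmax(fz > contact_thresh_n)       # first sample above threshold = foot contact
    peak = contact + np.argmax(fz[contact:])         # index of peak vertical GRF
    time_to_peak = (peak - contact) / fs_hz
    loading_rate = np.gradient(fz, 1.0 / fs_hz)      # dF/dt over the whole trace
    peak_rate = loading_rate[contact:peak + 1].max() if peak > contact else 0.0
    return time_to_peak, peak_rate

# Example with a crude synthetic single-peak GRF sampled at 1 kHz.
fs = 1000.0
t = np.arange(0.0, 0.3, 1.0 / fs)
fz = 1500.0 * np.clip(np.sin(t * np.pi / 0.15), 0.0, None)
print(peak_vertical_loading_rate(fz, fs))
```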

Production Techniques for Mobile Motion Pictures Based on Smart Phone (스마트폰 시장 확대에 따른 모바일 동영상 편집 기법 연구)

  • Choi, Eun-Young;Choi, Hun
    • The Journal of the Korea Contents Association / v.10 no.5 / pp.115-123 / 2010
  • With the development of information technology, moving pictures can now be delivered on a variety of platforms. Because the convergence of mobile and media technology is expanding full-browsing services on mobile devices, users' viewing attitudes must be considered alongside production technique. Previous research on production techniques for different platforms has focused only on video quality and adjustments to screen size. Beyond these technical aspects, however, production techniques must also change from an aesthetic point of view, in both image production and image editing. Mise-en-scène elements such as camera angle, composition, and lighting change with HD imagery, and image production must likewise be adapted to full-browsing services on mobile devices. We therefore explore production and editing techniques suited to the smartphone. To propose such techniques, we used an e-learning production system and examined the transitions and editing techniques appropriate for the converted format. Such new attempts are leading to a new paradigm and establishing their position by bringing characteristics such as openness and timeliness to mobile media; they can also extend into the personal domain and become established as tools for expression and play.

A Study on the Simulation Modeling Method of LKAS Test Evaluation (LKAS 시험평가의 시뮬레이션 모델링 기법에 관한 연구)

  • Bae, Geon-Hwan;Lee, Seon-bong
    • Journal of the Korea Academia-Industrial cooperation Society / v.21 no.3 / pp.57-64 / 2020
  • The leading technologies of ADAS (Advanced Driver Assistance Systems) are ACC (Adaptive Cruise Control), LKAS (Lane Keeping Assist System), and AEB (Autonomous Emergency Braking). LKAS uses cameras and infrared sensors to control the steering and return the vehicle to its lane in the event of an unintentional deviation. Actual vehicle tests are performed to evaluate and verify the safety of the system; however, research on evaluation methods for the case where an additional steering angle is applied is still insufficient. In this study, a model was developed in Prescan and simulated for the scenarios proposed in a preceding study, and comparative analyses of the simulation and the actual tests were performed. As a result, the validity of the model was verified. A difference in the distance between the front wheels and the lane arose from the return velocity, with a maximum error of 0.56 m; the error occurred because the lateral velocity of the vehicle was relatively small. Nevertheless, the distance from the wheels to the lane showed a consistent tendency of approximately 0.5 m, so the model can be regarded as reliable.
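
The comparison between simulation and track test described above amounts to differencing two wheel-to-lane-distance traces and reporting the largest deviation; the sketch below shows a minimal version of that comparison with made-up data, not the study's Prescan output.

```python
# Minimal sketch of a simulation-versus-test comparison of wheel-to-lane distance.
import numpy as np

def max_lane_offset_error(sim_offset_m: np.ndarray, test_offset_m: np.ndarray) -> float:
    """Maximum absolute difference between simulated and measured wheel-to-lane distance."""
    n = min(len(sim_offset_m), len(test_offset_m))   # truncate to a common length
    return float(np.max(np.abs(sim_offset_m[:n] - test_offset_m[:n])))

# Example with synthetic traces sampled at the same instants.
t = np.linspace(0.0, 5.0, 251)
sim = 0.5 + 0.3 * np.sin(0.8 * t)                    # simulated distance to lane (m)
test = sim + 0.05 * np.random.default_rng(0).standard_normal(t.size)
print(f"max error: {max_lane_offset_error(sim, test):.2f} m")
```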

Comparison of Biomechanical Characteristics of Rowing Performance between Elite and Non-Elite Scull Rowers: A Pilot Study

  • Kim, Jin-Sun;Cho, Hanyeop;Han, Bo-Ram;Yoon, So-Ya;Park, Seonhyung;Cho, Hyunseung;Lee, Joohyeon;Lee, Hae-Dong
    • Korean Journal of Applied Biomechanics / v.26 no.1 / pp.21-30 / 2016
  • Objective: This study aimed to examine the joint kinematics and the synchronicity of the rowing motion in elite and non-elite rowers. Methods: Two elite and two non-elite rowers performed rowing strokes (3 trials, 20 strokes per trial) at three stroke rates (20, 30, and 40 strokes/min) on two stationary rowing ergometers. The rowing motions were captured using a three-dimensional motion analysis system (8-infrared-camera VICON system, Oxford, UK). The range of motion (RoM) of the knee, hip, and elbow joints in the sagittal plane, the lead time (T_Lead) and drive time (T_Drive) for each joint, and the time the knee joint remained fully extended (T_Knee) during the stroke were analyzed and compared between elite and non-elite rowers. The synchronicity of the rowing motion within and between groups was examined using the coefficient of variation (CV) of T_Drive for each joint. Results: Regardless of stroke rate, the RoM of all joints was greater for the elite than for the non-elite rowers, except for the RoM of the knee joint at 30 strokes/min and of the elbow joint at 40 strokes/min (p < .05). Although T_Lead was the same between the groups at all stroke rates, T_Drive for each joint was shorter for the elite rowers. During the drive phase, the elite rowers kept the knee joint fully extended longer than the non-elite rowers (p < .05). The CV values of T_Drive within each group were smaller for the elite rowers, except for the hip at all stroke rates and the elbow at 40 strokes/min. Conclusion: The elite rowers appear able to perform more powerful and efficient strokes, with a large RoM and a short T_Drive for the same T_Lead.
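
Synchronicity is quantified above with the coefficient of variation of per-joint drive times; a minimal sketch of that statistic follows, with invented drive-time values rather than the study's data.

```python
# Minimal sketch: coefficient of variation of per-stroke drive times for one joint.
import numpy as np

def coefficient_of_variation(drive_times_s) -> float:
    """CV (%) of drive times across strokes: 100 * sample std / mean."""
    drive_times_s = np.asarray(drive_times_s, dtype=float)
    return 100.0 * drive_times_s.std(ddof=1) / drive_times_s.mean()

knee_drive_times = [0.82, 0.80, 0.85, 0.79, 0.83]   # seconds, one trial of strokes
print(f"knee T_Drive CV: {coefficient_of_variation(knee_drive_times):.1f}%")
```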

Effect of Golf Shoe Design on Kinematic Variables During Driver Swing (골프화의 구조적 특성 및 내부형태에 따른 스윙의 운동학적 변인에 미치는 영향)

  • Park, Jong-Jin
    • Korean Journal of Applied Biomechanics / v.19 no.1 / pp.167-177 / 2009
  • The purpose of this study was to investigate the effect of golf shoe design on kinematic variables during the golf swing. Five professional male golfers with a shoe size of 270 mm were recruited. Swing motion was captured with an eight-camera high-speed motion analysis system at a sampling rate of 180 Hz, and kinematic variables were calculated with the EVaRT 4.2 software. The driver swing was divided into four events: E1 (address), E2 (top), E3 (impact), and E4 (finish). Time, peak velocity, velocity of the center of mass, foot velocity, and ankle angle during Phase 1 (E1-E2), Phase 2 (E2-E3), and Phase 3 (E3-E4) were analyzed to investigate the relationship between golf shoe design and swing performance. The findings indicated that type C golf shoes would be beneficial for stability and control of movement during address and the swing. Furthermore, faster shoe, center-of-mass, and foot velocities were observed with the type C shoes. Golfers wearing type C shoes are therefore expected to generate greater force, as they control the center of mass faster and increase rotational force at impact compared with the other shoes.
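
As a small illustration of the event-based analysis described above, the sketch below splits a swing into the three phases bounded by E1-E4 and reports each phase's duration and peak centre-of-mass speed; the event indices and velocity trace are fabricated for the example and are not the study's measurements.

```python
# Minimal sketch: phase durations and peak COM speed between swing events E1..E4.
import numpy as np

def phase_summary(com_speed: np.ndarray, events: dict, fs_hz: float):
    """Return {phase: (duration_s, peak_speed)} for phases bounded by E1..E4 frame indices."""
    bounds = [("Phase1", "E1", "E2"), ("Phase2", "E2", "E3"), ("Phase3", "E3", "E4")]
    out = {}
    for name, a, b in bounds:
        i, j = events[a], events[b]
        out[name] = ((j - i) / fs_hz, float(com_speed[i:j + 1].max()))
    return out

fs = 180.0                                         # sampling rate used in the study (Hz)
speed = np.abs(np.sin(np.linspace(0.0, 3.0, 540))) # synthetic COM speed trace
events = {"E1": 0, "E2": 180, "E3": 330, "E4": 539}
print(phase_summary(speed, events, fs))
```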

Three-Dimensional Image Display System using Stereogram and Holographic Optical Memory Techniques (스테레오그램과 홀로그래픽 광 메모리 기술을 이용한 3차원 영상 표현 시스템)

  • 김철수;김수중
    • The Journal of Korean Institute of Communications and Information Sciences / v.27 no.6B / pp.638-644 / 2002
  • In this paper, we implemented a three-dimensional image display system using stereogram and holographic optical memory techniques, which can store many images and reconstruct them automatically. In this system, the incident angle of the reference beam must be controlled in real time to store and reconstruct stereo images, so a BPH (binary phase hologram) and an LCD (liquid crystal display) are used to steer the reference beam. The reference beams are obtained by Fourier transforming BPHs designed with an SA (simulated annealing) algorithm, and the BPHs are displayed on the LCD at 0.05-second intervals by application software to reconstruct the stereo images. The input images are displayed on an LCD without a polarizer/analyzer so that the beam intensity remains uniform regardless of image brightness. The input images and BPHs are edited with application software (Photoshop) so that they share the same recording schedule during storage. The reconstructed stereo images are acquired by capturing the output with a CCD camera behind the analyzer, which converts phase information into image brightness. In the output plane, an LCD shutter synchronized to the monitor alternately displays the left- and right-eye images for depth perception. In an optical experiment, four stereo images were repeatedly stored in BaTiO3 and reconstructed using the proposed holographic optical memory technique.
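
The reference beams above come from binary phase holograms designed with simulated annealing; the sketch below shows the general shape of such an optimization (flip one phase pixel at a time, keep flips that move the Fourier-plane intensity toward a target spot), with the grid size, target, and cooling schedule chosen only for illustration and not taken from the paper.

```python
# Minimal sketch of simulated-annealing design of a binary phase hologram.
import numpy as np

rng = np.random.default_rng(0)
N = 32
target = np.zeros((N, N)); target[8, 20] = 1.0       # desired reference-beam spot (illustrative)

def cost(phase_bits: np.ndarray) -> float:
    field = np.exp(1j * np.pi * phase_bits)           # binary phase: 0 or pi
    intensity = np.abs(np.fft.fftshift(np.fft.fft2(field))) ** 2
    intensity /= intensity.sum()
    return float(np.sum((intensity - target / target.sum()) ** 2))

bits = rng.integers(0, 2, size=(N, N))
temperature, current = 1e-5, cost(bits)
for step in range(20000):
    i, j = rng.integers(0, N, size=2)
    bits[i, j] ^= 1                                   # propose flipping one pixel
    c = cost(bits)
    if c < current or rng.random() < np.exp((current - c) / temperature):
        current = c                                   # accept the flip
    else:
        bits[i, j] ^= 1                               # revert the flip
    temperature *= 0.9995                             # geometric cooling schedule
print(f"final cost: {current:.3e}")
```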

Vision-based Mobile Robot Localization and Mapping using fisheye Lens (어안렌즈를 이용한 비전 기반의 이동 로봇 위치 추정 및 매핑)

  • Lee Jong-Shill;Min Hong-Ki;Hong Seung-Hong
    • Journal of the Institute of Convergence Signal Processing / v.5 no.4 / pp.256-262 / 2004
  • A key capability of an autonomous mobile robot is to localize itself and build a map of the environment simultaneously. In this paper, we propose a vision-based localization and mapping algorithm for a mobile robot using a fisheye lens. To acquire high-level features with scale invariance, a camera with a fisheye lens facing the ceiling is attached to the robot, and these features are used in map building and localization. As pre-processing, the input image from the fisheye lens is calibrated to remove radial distortion, and labeling and convex-hull techniques are then used to segment the ceiling and wall regions in the calibrated image. In the initial map-building step, features are calculated for each segmented region and stored in the map database. Features are then calculated continuously from the sequential input images and matched to the map; features that do not match are added to the map. This matching and updating process continues until map building is finished. Localization is used both during map building and when searching for the robot's location on the map: the features calculated at the robot's position are matched against the existing map to estimate its true position, and the map database is updated at the same time. With the proposed method, map building takes less than 2 minutes for a 50 m² region, the positioning accuracy is ±13 cm, and the error in the robot's heading angle is ±3 degrees.
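
A minimal sketch of the pre-processing described above (fisheye undistortion followed by labeling and a convex hull around the ceiling region) is given below using OpenCV; the intrinsics, distortion coefficients, and threshold choices are placeholders rather than the paper's calibration.

```python
# Minimal sketch: undistort a fisheye ceiling image, label bright regions,
# and take the convex hull of the largest one as the ceiling region.
import cv2
import numpy as np

def segment_ceiling(img_gray: np.ndarray, K: np.ndarray, D: np.ndarray) -> np.ndarray:
    """Return convex-hull points (Nx1x2) of the largest bright region after undistortion."""
    undistorted = cv2.fisheye.undistortImage(img_gray, K, D, Knew=K)
    _, binary = cv2.threshold(undistorted, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    num, labels = cv2.connectedComponents(binary)            # labeling step
    if num < 2:
        return np.empty((0, 1, 2), dtype=np.int32)
    sizes = [(labels == i).sum() for i in range(1, num)]
    largest = 1 + int(np.argmax(sizes))                       # assume ceiling = largest blob
    ys, xs = np.nonzero(labels == largest)
    points = np.stack([xs, ys], axis=1).astype(np.int32)
    return cv2.convexHull(points)

# Placeholder intrinsics and distortion coefficients, for illustration only.
K = np.array([[300.0, 0.0, 320.0], [0.0, 300.0, 240.0], [0.0, 0.0, 1.0]])
D = np.zeros((4, 1))
frame = np.zeros((480, 640), dtype=np.uint8)
cv2.circle(frame, (320, 240), 120, 255, -1)                   # synthetic bright ceiling
hull = segment_ceiling(frame, K, D)
print(f"hull vertices: {len(hull)}")
```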


Mobile Robot Localization and Mapping using Scale-Invariant Features (스케일 불변 특징을 이용한 이동 로봇의 위치 추정 및 매핑)

  • Lee, Jong-Shill;Shen, Dong-Fan;Kwon, Oh-Sang;Lee, Eung-Hyuk;Hong, Seung-Hong
    • Journal of IKEEE / v.9 no.1 s.16 / pp.7-18 / 2005
  • A key capability of an autonomous mobile robot is to localize itself accurately and build a map of the environment simultaneously. In this paper, we propose a vision-based mobile robot localization and mapping algorithm using scale-invariant features. A camera with a fisheye lens facing the ceiling is attached to the robot to acquire high-level, scale-invariant features, which are used in the map-building and localization process. As pre-processing, the input images from the fisheye lens are calibrated to remove radial distortion, and labeling and convex-hull techniques are used to separate the ceiling region from the wall region. In the initial map-building step, features are calculated for the segmented regions and stored in the map database. Features are then continuously calculated from the sequential input images and matched against the existing map until map building is finished; features that do not match are added to the map. Localization is performed simultaneously with feature matching during map building: when features match the existing map, the robot's position is estimated, and the map database is updated at the same time. The proposed method can build a map of a 50 m² area in 2 minutes; the positioning accuracy is ±13 cm, and the average heading error of the positioned robot is ±3 degrees.
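
The map-matching step above relies on matching scale-invariant features between a new ceiling image and the stored map; the sketch below shows such a match using OpenCV's SIFT detector and a ratio test on synthetic images, and is not the authors' implementation.

```python
# Minimal sketch: SIFT feature matching between a new image and a stored map image.
import cv2
import numpy as np

def match_to_map(new_img: np.ndarray, map_img: np.ndarray, ratio: float = 0.75) -> int:
    """Return the number of ratio-test matches between the new image and the map image."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(new_img, None)
    kp2, des2 = sift.detectAndCompute(map_img, None)
    if des1 is None or des2 is None or len(des2) < 2:
        return 0
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    pairs = matcher.knnMatch(des1, des2, k=2)
    good = [p[0] for p in pairs if len(p) == 2 and p[0].distance < ratio * p[1].distance]
    return len(good)

# Synthetic example: a structured "map" image and a horizontally shifted copy.
map_img = np.zeros((240, 320), dtype=np.uint8)
cv2.circle(map_img, (100, 120), 40, 200, -1)
cv2.rectangle(map_img, (180, 60), (260, 160), 120, -1)
new_img = np.roll(map_img, shift=15, axis=1)
print(f"good matches: {match_to_map(new_img, map_img)}")
```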


Study on Measurement Condition Effects of CRP-based Structure Monitoring Techniques for Disaster Response (재해 대응을 위한 CRP기반 시설물 모니터링 기법의 계측조건 영향 분석)

  • Lee, Donghwan;Leem, Junghyun;Park, Jihwan;Yu, Byoungjoon;Park, Seunghee
    • Journal of the Computational Structural Engineering Institute of Korea / v.30 no.6 / pp.541-547 / 2017
  • Climate change has become a main driver of increasingly severe natural disasters. Social overhead capital (SOC) structures need to be checked periodically for displacement and cracking to prevent damage and collapse caused by natural disasters and to ensure safety. For efficient structure maintenance, optical imaging technology is being applied to structural health monitoring (SHM). However, optical images are sensitive to environmental factors, so their validity must be verified. In this paper, the accuracy of estimating vertical displacement was evaluated with respect to environmental conditions such as natural light, measurement distance, and the number of images. The experiments showed that natural light had the greatest effect on the accuracy of the vertical displacement estimate. The measurement angle, which changes with measurement distance, was also important for detecting vertical displacement. These findings can be used to choose measurement conditions that minimize errors when a bridge is surveyed with a camera, and they support the application of optical images to SHM.
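
As an illustration of how a camera-based vertical-displacement estimate depends on measurement distance and viewing angle, the sketch below converts a pixel shift into an object-space displacement under a simple pinhole model; the focal length, pixel pitch, and distances are assumed values, not the paper's measurement conditions.

```python
# Minimal sketch: object-space vertical displacement from an image-space pixel shift,
# assuming a pinhole camera viewing the target roughly perpendicular to its surface.
import math

def vertical_displacement_m(dy_px: float, distance_m: float,
                            focal_mm: float, pixel_pitch_um: float,
                            tilt_deg: float = 0.0) -> float:
    """Vertical displacement on the structure from a measured pixel shift.

    dy_px          : vertical shift of the target between two images (pixels)
    distance_m     : camera-to-target distance (m)
    focal_mm       : lens focal length (mm)
    pixel_pitch_um : sensor pixel size (micrometres)
    tilt_deg       : camera tilt from perpendicular viewing; larger angles stretch
                     the per-pixel scale on the structure's surface
    """
    gsd_m = (pixel_pitch_um * 1e-6) * distance_m / (focal_mm * 1e-3)   # metres per pixel
    return dy_px * gsd_m / math.cos(math.radians(tilt_deg))

# Example: a 12 px shift seen from 20 m with a 50 mm lens and 4.5 um pixels.
print(f"{vertical_displacement_m(12, 20.0, 50.0, 4.5):.4f} m")
```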