• Title/Summary/Keyword: Visual sensor

Search Result 457

Contrast Enhancement Method using Color Components Analysis (컬러 성분 분석을 이용한 대비 개선 방법)

  • Park, Sang-Hyun
• The Journal of the Korea Institute of Electronic Communication Sciences
    • /
    • v.14 no.4
    • /
    • pp.707-714
    • /
    • 2019
  • Recently, as sensor network and camera technologies have developed, there is a growing need to combine the two into visual sensor networks that can effectively observe or monitor areas that are difficult for people to access. Because applications using visual sensors photograph outdoor areas, the captured images may show poor contrast in cloudy weather or during low-light periods such as sunset. In this paper, we first model the color characteristics of a scene as a function of illumination, exploiting the fact that a visual sensor continuously captures the same area. Using this model, a new method for improving low-contrast images in real time is proposed. To build the model, regions of interest consisting of a single color are set up, and the change of each color component with image brightness is measured. A gamma function is fitted to the measured data to model the color characteristics. Experimental results show that the proposed method improves the contrast of a low-contrast image simply and accurately by adjusting its color components.
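
The gamma-based color model described above can be illustrated with a minimal sketch. The function names, the brute-force fitting procedure, and the model form are assumptions for illustration, not the paper's actual implementation:

```python
def gamma_correct(value, gamma, max_val=255.0):
    """Map a color component through a gamma curve (assumed model form)."""
    return max_val * (value / max_val) ** gamma

def fit_gamma(samples, max_val=255.0):
    """Brute-force the gamma that best explains (observed, reference) pairs
    measured from a fixed region of interest under varying illumination."""
    best_g, best_err = 1.0, float("inf")
    for i in range(1, 400):
        g = i / 100.0  # search gamma in (0.01, 4.0]
        err = sum((gamma_correct(o, g, max_val) - r) ** 2 for o, r in samples)
        if err < best_err:
            best_g, best_err = g, err
    return best_g
```

Once fitted from the monitored region, the same gamma curve could be applied to the color components of newly captured low-contrast frames.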

Control of Robot Manipulators Using LQG Visual Tracking Controller (LQG 시각추종제어기를 이용한 로봇매니퓰레이터의 제어)

  • Lim, Tai-Hun;Jun, Hyang-Sig;Choi, Young-Kiu;Kim, Sung-Shin
    • Proceedings of the KIEE Conference
    • /
    • 1999.07g
    • /
    • pp.2995-2997
    • /
    • 1999
  • Recently, real-time visual tracking control of robot manipulators has been performed using visual feedback from a vision sensor. In this paper, optical flow is computed for an eye-in-hand robot configuration. The image Jacobian is employed to calculate the rotational and translational velocity of a 3D moving object. An LQG visual controller generates the real-time visual trajectory, and a VSC controller is employed to control the robot manipulator and improve visual tracking performance. Simulation results show better visual tracking performance than other methods.
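
The image Jacobian mentioned above has a standard form for a point feature under a pinhole camera model. The sketch below shows that form and how a camera twist maps to image-plane feature velocity; it is the textbook interaction matrix, not necessarily the exact formulation used in the paper:

```python
def image_jacobian(x, y, Z, f=1.0):
    """2x6 interaction matrix for a point feature (x, y) at depth Z,
    in the standard pinhole form used in image-based visual servoing."""
    return [
        [-f / Z, 0.0, x / Z, x * y / f, -(f + x * x / f), y],
        [0.0, -f / Z, y / Z, f + y * y / f, -x * y / f, -x],
    ]

def feature_velocity(J, twist):
    """Image-plane velocity induced by a camera twist [vx, vy, vz, wx, wy, wz]."""
    return [sum(j * t for j, t in zip(row, twist)) for row in J]
```

For a feature at the image center, translation along the optical axis induces no image motion, while lateral translation shifts the feature at a rate scaled by depth, which is why depth estimation matters in visual tracking.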


Large-scale Language-image Model-based Bag-of-Objects Extraction for Visual Place Recognition (영상 기반 위치 인식을 위한 대규모 언어-이미지 모델 기반의 Bag-of-Objects 표현)

  • Seung Won Jung;Byungjae Park
    • Journal of Sensor Science and Technology
    • /
    • v.33 no.2
    • /
    • pp.78-85
    • /
    • 2024
  • We propose a method for visual place recognition that represents images using objects as visual words, where the visual words correspond to the various objects present in urban environments. To detect these objects within images, we implemented a zero-shot detector based on a large-scale language-image model; this zero-shot detector enables the detection of diverse objects in urban environments without additional training. When creating histograms with the proposed method, frequency-based weighting is applied to account for the importance of each object. Experiments on open datasets demonstrate the potential of the proposed method in comparison with another method, even under environmental or viewpoint changes.
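
The frequency-weighted object histogram described above can be sketched in a TF-IDF-like style. The exact weighting in the paper is not specified here, so the formula, vocabulary, and document-frequency statistics below are illustrative assumptions:

```python
from collections import Counter
from math import log, sqrt

def bag_of_objects(detections, vocab, num_images, doc_freq):
    """TF-IDF-style weighted histogram over detected object classes.
    `doc_freq[c]` counts how many database images contain class c (assumed stats),
    so frequent classes (e.g. cars everywhere) get down-weighted."""
    counts = Counter(detections)
    return [counts[c] * log(num_images / (1 + doc_freq.get(c, 0))) for c in vocab]

def cosine(a, b):
    """Cosine similarity between two histograms (0.0 if either is all-zero)."""
    num = sum(x * y for x, y in zip(a, b))
    den = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return num / den if den else 0.0
```

Place recognition then reduces to retrieving the database image whose histogram has the highest similarity to the query's.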

Multi-Operation Robot For Fruit Production

  • Kondo, Naoshi;Monta, Mitsuji;Shibano, Yasunori
    • Proceedings of the Korean Society for Agricultural Machinery Conference
    • /
    • 1996.06c
    • /
    • pp.621-631
    • /
    • 1996
  • It is said that a robot can be used for multiple purposes by changing its end-effector and/or visual sensor together with its software. In this study, we investigated what a multi-purpose fruit-production robot should be, using a tomato harvesting robot and a robot for vineyard work. The tomato harvesting robot consisted of a manipulator, an end-effector, a visual sensor, and a traveling device. The plant training system for larger tomatoes is similar to that for cherry tomatoes. Two end-effectors were prepared for harvesting larger tomatoes and cherry tomatoes, while the rest of the components were left unchanged between the two work objects. A color TV camera could be used for both work objects, although the fruit-detecting algorithm and the features extracted from the image had to be changed. As for the grape robot, several end-effectors for harvesting, berry thinning, bagging, and spraying were developed and tested by attaching each end-effector to the manipulator end. The manipulator was of polar-coordinate type with five degrees of freedom, giving it sufficient working space for the operations. It was observed that the visual sensor was necessary for the harvesting, bagging, and berry-thinning operations, whereas the spraying operation required another sensor to keep a certain distance between the trellis and the end-effector. From the experimental results, it was concluded that multiple operations could be performed appropriately by the same robot on the same or a similar plant training system by changing some robot components. One important consequence of this multi-operation capability is that the robot's working period can be made longer.


The Effect of Visual Feedback on One-hand Gesture Performance in Vision-based Gesture Recognition System

  • Kim, Jun-Ho;Lim, Ji-Hyoun;Moon, Sung-Hyun
    • Journal of the Ergonomics Society of Korea
    • /
    • v.31 no.4
    • /
    • pp.551-556
    • /
    • 2012
  • Objective: This study examines the effect of visual feedback on one-hand gesture performance in a vision-based gesture recognition system when people use gestures to control a screen device remotely. Background: Gesture interaction is receiving growing attention because it builds on advanced sensor technology and allows users to interact naturally using their own body motion. In generating motion, visual feedback has been considered a critical factor affecting speed and accuracy. Method: Three types of visual feedback (arrow, star, and animation) were selected and 20 gestures were listed; 12 participants performed the 20 gestures while given each of the 3 types of visual feedback in turn. Results: People made longer hand traces and took longer to make a gesture when given the arrow-shaped feedback than the star-shaped feedback. The animation-type feedback was most preferred. Conclusion: The type of visual feedback had a statistically significant effect on the length of the hand trace, the elapsed time, and the speed of motion in performing a gesture. Application: This study can be applied to any device that needs visual feedback for device control. With larger feedback, people produce shorter motion traces in less time and at higher speed than with smaller feedback; large visual feedback is therefore recommended for situations requiring fast actions, and smaller visual feedback for situations requiring elaborate actions.

Landmark Detection Based on Sensor Fusion for Mobile Robot Navigation in a Varying Environment

  • Jin, Tae-Seok;Kim, Hyun-Sik;Kim, Jong-Wook
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • v.10 no.4
    • /
    • pp.281-286
    • /
    • 2010
  • We propose a space- and time-based sensor fusion method and, building on it, a robust landmark detection algorithm for mobile robot navigation. Exploration of an unknown environment is an important task for the new generation of mobile robots, which may navigate by means of a number of sensing systems such as sonar or vision. To fully utilize the information from these sensors, this paper first proposes a new sensor fusion technique in which the data sets from previous moments are properly transformed and fused into the current data sets to enable accurate measurement. The newly proposed STSF (Space and Time Sensor Fusion) scheme is applied to landmark recognition for mobile robot navigation in both structured and unstructured environments, and the experimental results demonstrate its landmark recognition performance.
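
The core idea of fusing past data into the current data set can be sketched as follows: re-express an earlier observation in the current robot frame using the odometry increment, then combine it with the fresh reading. The frame convention, fixed fusion weights, and function names below are assumptions, not the STSF formulation itself:

```python
from math import cos, sin

def to_current_frame(point, dx, dy, dtheta):
    """Re-express a point observed from the previous robot pose in the current
    frame, given the odometry increment (dx, dy, dtheta) between the poses.
    Assumes planar motion and one particular frame convention."""
    px, py = point[0] - dx, point[1] - dy   # shift into the new origin
    c, s = cos(-dtheta), sin(-dtheta)       # undo the heading change
    return (c * px - s * py, s * px + c * py)

def fuse(old_est, new_meas, w_old=0.3, w_new=0.7):
    """Weighted average of the transformed past estimate and the fresh reading
    (a stand-in for a proper uncertainty-weighted fusion)."""
    return tuple(w_old * a + w_new * b for a, b in zip(old_est, new_meas))
```

In practice the weights would come from sensor uncertainty models rather than fixed constants.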

Active assisted-living system using a robot in WSAN (WSAN에서 로봇을 활용한 능동 생활지원 시스템)

  • Kim, Hong-Seok;Yi, Soo-Yeong;Choi, Byoung-Wook
    • The Journal of Korea Robotics Society
    • /
    • v.4 no.3
    • /
    • pp.177-184
    • /
    • 2009
  • This paper presents an active assisted-living system based on a wireless sensor and actor network (WSAN), in which a mobile robot plays the role of the actor. To provide assisted-living services to elderly people, position recognition of the sensor node attached to the user and localization of the mobile robot must be performed at the same time. For this purpose, we use received signal strength indication (RSSI) to find the position of the person, together with ubiquitous sensor nodes incorporating ultrasonic sensors, which both transmit sensor information and support localization in the manner of a global positioning system. The active services provided are approaching the elderly person when triggered by an activity sensor, visual tracking, and voice chatting with a remote monitoring system.
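
RSSI-based position recognition, as used above, typically inverts a log-distance path-loss model and then fits a position to the resulting ranges. The sketch below uses assumed calibration constants and a coarse grid search; real deployments calibrate per environment and filter the noisy RSSI first:

```python
def rssi_to_distance(rssi, p0=-40.0, n=2.0):
    """Invert the log-distance path-loss model: p0 is the RSSI at 1 m and n the
    path-loss exponent (both assumed values, calibrated per deployment)."""
    return 10 ** ((p0 - rssi) / (10 * n))

def locate(anchors, rssi_values, step=0.25, size=10.0):
    """Coarse grid search for the position whose distances to the anchor nodes
    best match the RSSI-derived ranges (a sketch, not a production solver)."""
    ranges = [rssi_to_distance(r) for r in rssi_values]
    best, best_err = (0.0, 0.0), float("inf")
    steps = int(size / step)
    for i in range(steps + 1):
        for j in range(steps + 1):
            x, y = i * step, j * step
            err = sum((((x - ax) ** 2 + (y - ay) ** 2) ** 0.5 - d) ** 2
                      for (ax, ay), d in zip(anchors, ranges))
            if err < best_err:
                best, best_err = (x, y), err
    return best
```

With three or more anchors at known positions, this yields a 2D estimate of the user's sensor node that the robot can then navigate toward.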


A Study on Visual Servoing Application for Robot OLP Compensation (로봇 OLP 보상을 위한 시각 서보잉 응용에 관한 연구)

  • 김진대;신찬배;이재원
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.21 no.4
    • /
    • pp.95-102
    • /
    • 2004
  • It is necessary to improve the accuracy and adaptability of intelligent robot systems to their working environment, and vision sensors have long been studied for this reason. However, camera and robot calibration are very difficult to perform because three-dimensional reconstruction and many other processes are required in real usage. This paper proposes image-based visual servoing to avoid the problems of the old calibration techniques and to support OLP (Off-Line Programming) path compensation. A virtual camera is modeled from the real camera parameters, and the virtual images obtained from it make the perception process easier. The initial path generated by OLP is then compensated using the pixel-level differences between the real and virtual images. Consequently, the proposed visually assisted OLP teaching removes the calibration and reconstruction processes in the real workspace. In virtual simulation, better performance is observed, and the robot path error is corrected from the image differences.
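
The compensation step described above can be sketched as follows: measure the pixel discrepancy between the real and virtual camera views of a feature, then shift the OLP waypoints accordingly. The direct pixel-to-millimetre scaling and all names here are illustrative assumptions, not the paper's mapping:

```python
def pixel_offset(real_px, virtual_px):
    """Pixel-space discrepancy between the real camera image and the image
    rendered from the virtual (OLP-modelled) camera."""
    return (real_px[0] - virtual_px[0], real_px[1] - virtual_px[1])

def compensate_path(path, offset, mm_per_px):
    """Shift each OLP waypoint by the pixel error scaled into millimetres.
    mm_per_px is an assumed calibration factor for illustration only."""
    du, dv = offset
    return [(x + du * mm_per_px, y + dv * mm_per_px) for x, y in path]
```

A uniform shift like this only corrects a translational offset; in general the correction would be recomputed per path segment as new images arrive.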

Visual Sensing of Fires Using Color and Dynamic Features (컬러와 동적 특징을 이용한 화재의 시각적 감지)

  • Do, Yong-Tae
    • Journal of Sensor Science and Technology
    • /
    • v.21 no.3
    • /
    • pp.211-216
    • /
    • 2012
  • Fires are the most common disaster, and early fire detection is of great importance for minimizing the consequent damage. Simple sensors, including smoke detectors, are widely used for this purpose, but they can sense fires only at close proximity. Recently, due to rapid advances in the relevant technologies, vision-based fire sensing has attracted growing attention. In this paper, a novel visual sensing technique to detect fire automatically is presented. The proposed technique consists of multiple image-processing steps at the pixel, block, and frame levels. In the first step, flame pixel candidates are selected based on their color values in YIQ space from the image of a camera installed as a vision sensor at the fire scene. In the second step, the dynamic parts of the flames are extracted by comparing two consecutive images. These parts are then represented on regularly divided image blocks to reduce pixel-level detection error and simplify subsequent processing. Finally, the temporal change of the detected blocks is analyzed to confirm the spread of fire. The proposed technique was tested on real fire images and worked quite reliably.
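
The pixel-level step above can be sketched with the standard NTSC RGB-to-YIQ conversion and a simple color rule. The conversion coefficients are standard; the thresholds are illustrative, not the paper's tuned values:

```python
def rgb_to_yiq(r, g, b):
    """NTSC RGB -> YIQ conversion (standard coefficients)."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    i = 0.596 * r - 0.274 * g - 0.322 * b
    q = 0.211 * r - 0.523 * g + 0.312 * b
    return y, i, q

def is_flame_candidate(r, g, b, y_min=128.0, i_min=20.0):
    """Pixel-level rule: bright (high Y) and strongly red-orange (high I).
    Thresholds are assumed for illustration."""
    y, i, q = rgb_to_yiq(r, g, b)
    return y >= y_min and i >= i_min
```

Candidates passing this rule would then go through the block-level motion check, since static red-orange objects (signs, clothing) also satisfy a color-only rule.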

Virtual Visual Sensors and Their Application in Structural Health Monitoring (가상 시각 센서의 구조물 건전성 모니터링 응용)

  • Kim, Hee Seung;Choi, Kyoung Kyu;Kim, Tae Jin
    • Magazine of the Korea Institute for Structural Maintenance and Inspection
    • /
    • v.18 no.4
    • /
    • pp.83-88
    • /
    • 2014
  • Deploying a sensor array optimized for a structure is a key element of sensor network design. However, installing and maintaining sensors can be very difficult for a variety of reasons, such as the environment the structure is exposed to, cost, and the limited frequency bands of the sensors. This paper proposes the virtual visual sensor (VVS) as a substitute for the physical sensors currently used under typical conditions and environments. Virtual visual sensors have the great advantages of being easy to install, economical, and easy to maintain. The basic idea of the virtual visual sensor is the application of state-of-the-art computer vision algorithms and marker extraction techniques. This study shows that mode shapes and frequencies can readily be extracted using virtual visual sensors and demonstrates that applying them to structural health monitoring is effective.
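
Once a marker's per-frame displacement has been tracked by vision, extracting a structural frequency amounts to finding the spectral peak of that signal. The naive DFT below illustrates the idea; the tracking itself and any windowing or peak-interpolation refinements are out of scope, and the function names are assumptions:

```python
from math import cos, pi, sin

def dominant_frequency(signal, fps):
    """Peak of a naive DFT magnitude spectrum (DC excluded), in Hz.
    `signal` is a marker's per-frame displacement; `fps` is the camera rate."""
    n = len(signal)
    best_k, best_mag = 1, 0.0
    for k in range(1, n // 2):
        re = sum(s * cos(2 * pi * k * t / n) for t, s in enumerate(signal))
        im = -sum(s * sin(2 * pi * k * t / n) for t, s in enumerate(signal))
        mag = re * re + im * im
        if mag > best_mag:
            best_k, best_mag = k, mag
    return best_k * fps / n

def synthetic_sway(freq, fps, n):
    """Toy stand-in for tracked marker motion: a pure sinusoid at `freq` Hz."""
    return [sin(2 * pi * freq * t / fps) for t in range(n)]
```

Note that the camera frame rate caps the measurable band at fps/2, which is one reason the frequency-band limitation of sensors matters in VVS design.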
