• Title/Summary/Keyword: 어안 (fisheye)

Search Results: 95

Operation Rearrangement for Low-Power VLIW Instruction Fetches (저전력 VLIW 명령어 추출을 위한 연산재배치 기법)

  • Sin, Dong-Gun;Kim, Ji-Hong
    • Journal of KIISE: Computer Systems and Theory, v.28 no.10, pp.530-540, 2001
  • As mobile applications are required to handle more computing-intensive tasks, many mobile devices are designed using VLIW processors for high performance. In VLIW machines, where a single instruction contains multiple operations, the power consumption during instruction fetches varies significantly depending on how the operations are arranged within the instruction. In this paper, we describe a post-pass optimal operation rearrangement method for low-power VLIW instruction fetch. The proposed method modifies the operation placement order within VLIW instructions so that the switching activity between successive instruction fetches is minimized. Our experiments show that the switching activity can be reduced by 34% on average for benchmark programs.
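
The rearrangement step described in the abstract can be sketched as a minimal post-pass search; the 4-slot instructions, bit encodings, and brute-force permutation search below are illustrative assumptions, not the paper's actual algorithm.

```python
# Hypothetical post-pass sketch: for each pair of successive VLIW
# instructions, pick the slot permutation of the second instruction that
# minimizes bit switching (Hamming distance) on the instruction-fetch bus.
from itertools import permutations

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two operation encodings."""
    return bin(a ^ b).count("1")

def switching(prev_slots, next_slots):
    """Total bit transitions between two successive fetches."""
    return sum(hamming(p, n) for p, n in zip(prev_slots, next_slots))

def rearrange(prev_slots, next_slots):
    """Ordering of next_slots with minimal switching activity relative to
    prev_slots (brute force; practical for the usual 4-8 issue slots)."""
    return min(permutations(next_slots), key=lambda p: switching(prev_slots, p))

prev_instr = [0b1010, 0b0110, 0b1111, 0b0000]   # illustrative 4-bit encodings
next_instr = [0b1110, 0b1011, 0b0001, 0b0111]
best = rearrange(prev_instr, next_instr)         # switching drops from 10 to 4
```

A real compiler pass must additionally respect slot constraints (which functional unit each operation may occupy), which this sketch ignores.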

Motion-based ROI Extraction with a Standard Angle-of-View from High Resolution Fisheye Image (고해상도 어안렌즈 영상에서 움직임기반의 표준 화각 ROI 검출기법)

  • Ryu, Ar-Chim;Han, Kyu-Phil
    • Journal of Korea Multimedia Society, v.23 no.3, pp.395-401, 2020
  • In this paper, a motion-based ROI extraction algorithm for high-resolution fisheye images is proposed for multi-view monitoring systems. Recently, fisheye cameras have been widely used because of their wide angle-of-view, and they basically provide a lens correction functionality as well as various viewing modes. However, since the distortion-free angle of conventional algorithms is quite narrow due to the severe distortion ratio, there are many unintended dead areas, and the algorithms require much computation time to find undistorted coordinates. Thus, the proposed algorithm adopts image decimation and motion detection methods that can extract an undistorted ROI image with a standard angle-of-view for fast and intelligent surveillance systems. In addition, a mesh-type ROI is presented to reduce the lens correction time, so that this independent ROI scheme can be parallelized to maximize the processor's utilization.
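
As a rough illustration of extracting a standard angle-of-view ROI from a fisheye image, the sketch below builds the backward pixel mapping under an ideal equidistant projection; the model, the parameter names, and the center-looking ROI are simplifying assumptions, not the paper's method.

```python
# Illustrative backward mapping from a standard angle-of-view ROI to fisheye
# pixel coordinates, assuming an ideal equidistant projection (r = f * theta).
import numpy as np

def roi_to_fisheye_map(roi_w, roi_h, roi_fov_deg, cx, cy, f):
    """For each ROI pixel, return the fisheye (x, y) coordinates to sample."""
    fx = (roi_w / 2) / np.tan(np.radians(roi_fov_deg) / 2)  # perspective focal
    u, v = np.meshgrid(np.arange(roi_w) + 0.5 - roi_w / 2,
                       np.arange(roi_h) + 0.5 - roi_h / 2)
    z = np.full_like(u, fx)
    norm = np.sqrt(u * u + v * v + z * z)
    theta = np.arccos(z / norm)          # ray angle from the optical axis
    phi = np.arctan2(v, u)               # direction around the axis
    r = f * theta                        # equidistant fisheye: r = f * theta
    return cx + r * np.cos(phi), cy + r * np.sin(phi)

# Sampling the fisheye image at (map_x, map_y) yields the undistorted ROI.
map_x, map_y = roi_to_fisheye_map(320, 240, 60, 640.0, 480.0, 300.0)
```

Precomputing this map once per ROI, as in mesh-based correction, keeps the per-frame cost down to a remapping pass.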

Distortion Center Estimation using FOV Model and 2D Pattern (FOV 모델과 2D 패턴을 이용한 왜곡 중심 추정 기법)

  • Seo, Jeong-Goo;Kang, Euiseon
    • The Journal of the Korea Contents Association, v.13 no.8, pp.11-19, 2013
  • This paper presents a simple method to estimate the center of distortion and correct radial distortion from a fish-eye lens. If the center of the image is not aligned with that of the lens, the FOV model becomes inaccurate because it corrects distortion without an estimated center of distortion. We propose a method that accurately estimates the distortion center using the FOV model and a 2D pattern captured through a wide-angle lens. Our method determines the center of distortion that yields the least error between straight lines and the curves produced by the FOV model. The results of experimental measurements on synthetic and real data are presented.
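
The least-error criterion described above can be sketched as follows, assuming the Devernay-Faugeras FOV model in normalized image coordinates; the candidate-center search and the SVD-based straightness measure are illustrative choices, not the paper's exact procedure.

```python
# Sketch: pick the distortion center for which FOV-model undistortion makes
# the calibration-pattern lines straightest. Assumes normalized coordinates.
import numpy as np

def undistort_fov(pts, center, omega):
    """FOV-model undistortion, r_u = tan(r_d * omega) / (2 * tan(omega / 2))."""
    d = pts - center
    r_d = np.linalg.norm(d, axis=1, keepdims=True)
    scale = np.tan(r_d * omega) / (2 * np.tan(omega / 2) * r_d)
    return center + d * scale

def line_residual(pts):
    """RMS distance of points from their best-fit line (smallest SVD value)."""
    q = pts - pts.mean(axis=0)
    return np.linalg.svd(q, full_matrices=False)[1][-1] / np.sqrt(len(pts))

def estimate_center(lines, omega, candidates):
    """Candidate center whose undistorted pattern lines are straightest."""
    def cost(c):
        c = np.asarray(c, float)
        return sum(line_residual(undistort_fov(p, c, omega)) for p in lines)
    return min(candidates, key=cost)
```

With real data, `candidates` would be a dense grid around the image center, and `omega` would typically be refined jointly with the center.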

Multi-robot Formation based on Object Tracking Method using Fisheye Images (어안 영상을 이용한 물체 추적 기반의 한 멀티로봇의 대형 제어)

  • Choi, Yun Won;Kim, Jong Uk;Choi, Jeong Won;Lee, Suk Gyu
    • Journal of Institute of Control, Robotics and Systems, v.19 no.6, pp.547-554, 2013
  • This paper proposes a novel formation algorithm for identical robots based on an object tracking method using omni-directional images obtained through fisheye lenses mounted on the robots. Conventional formation methods for multi-robots often use a stereo vision system or a vision system with a reflector, instead of a general-purpose camera with its small angle of view, in order to enlarge the camera's view angle. In addition, to make up for the lack of image information on the environment, the robots share their position information through communication. The proposed system estimates the regions of the robots using SURF in fisheye images, which carry 360° of image information, without merging images. The whole system controls the formation of the robots based on their moving directions and velocities, which are obtained by applying Lucas-Kanade optical flow estimation to the estimated robot regions. We confirmed the reliability of the proposed formation control strategy for multi-robots through both simulation and experiment.
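
The Lucas-Kanade estimation the abstract relies on reduces to a small least-squares solve per window; the sketch below shows that core solve on precomputed gradients, as a textbook formulation rather than the paper's implementation.

```python
# Textbook Lucas-Kanade solve for one tracking window: the flow (u, v)
# minimizes the brightness-constancy residual over the window's gradients.
import numpy as np

def lucas_kanade_window(Ix, Iy, It):
    """Solve A [u v]^T = b with A = [[sum Ix^2, sum Ix*Iy],
    [sum Ix*Iy, sum Iy^2]] and b = -[sum Ix*It, sum Iy*It];
    Ix, Iy, It are spatial/temporal gradient arrays over the window."""
    A = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    b = -np.array([np.sum(Ix * It), np.sum(Iy * It)])
    return np.linalg.solve(A, b)   # flow vector (u, v) in pixels/frame
```

Each robot's moving direction and speed then follow from the angle and norm of the flow vectors averaged over its estimated region.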

Fish-eye camera calibration and artificial landmarks detection for the self-charging of a mobile robot (이동로봇의 자동충전을 위한 어안렌즈 카메라의 보정 및 인공표지의 검출)

  • Kwon, Oh-Sang
    • Journal of Sensor Science and Technology, v.14 no.4, pp.278-285, 2005
  • This paper describes techniques of camera calibration and artificial landmark detection for the automatic charging of a mobile robot equipped with a fish-eye camera facing its direction of operation for movement or surveillance purposes. For identification against the surrounding environment, three landmarks employing infrared LEDs were installed at the charging station. When the robot reaches a certain point, a signal is sent to the LEDs for activation, which allows the robot to easily detect the landmarks using its vision camera. To eliminate the effects of outside light interference during the process, a difference image is generated by comparing two images taken with the LEDs on and off, respectively. A fish-eye lens was used for the robot's vision camera, but the wide-angle lens resulted in significant image distortion. The radial lens distortion was corrected after a linear perspective projection transformation based on the pin-hole model. In the experiment, the designed system showed a sensing accuracy of ±10 mm in position and ±1° in orientation at a distance of 550 mm.
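
The on/off difference-image idea can be sketched as below; the threshold value and the simple flood-fill blob labeling are illustrative assumptions, not the paper's parameters.

```python
# Sketch: subtracting the LEDs-off frame from the LEDs-on frame cancels
# ambient light; the bright residual blobs are the landmark candidates.
import numpy as np

def _blob_centroids(mask):
    """Centroids of 4-connected components via an explicit-stack flood fill."""
    seen = np.zeros(mask.shape, dtype=bool)
    h, w = mask.shape
    centroids = []
    for sy, sx in zip(*np.nonzero(mask)):
        if seen[sy, sx]:
            continue
        stack, pixels = [(sy, sx)], []
        seen[sy, sx] = True
        while stack:
            y, x = stack.pop()
            pixels.append((y, x))
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                    seen[ny, nx] = True
                    stack.append((ny, nx))
        centroids.append(np.mean(pixels, axis=0))   # (row, col) centroid
    return centroids

def detect_led_landmarks(img_on, img_off, thresh=50):
    """Centroids of bright blobs in the LEDs-on minus LEDs-off image."""
    diff = img_on.astype(int) - img_off.astype(int)
    return _blob_centroids(diff > thresh)
```

Matching the three detected centroids against the known landmark layout then gives the robot's position and orientation relative to the station.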

Real time Omni-directional Object Detection Using Background Subtraction of Fisheye Image (어안 이미지의 배경 제거 기법을 이용한 실시간 전방향 장애물 감지)

  • Choi, Yun-Won;Kwon, Kee-Koo;Kim, Jong-Hyo;Na, Kyung-Jin;Lee, Suk-Gyu
    • Journal of Institute of Control, Robotics and Systems, v.21 no.8, pp.766-772, 2015
  • This paper proposes an object detection method based on motion estimation using background subtraction in fisheye images obtained through an omni-directional camera mounted on a vehicle. Recently, most vehicles have been equipped with a rear camera as a standard option, as well as various camera systems for safety. However, unlike conventional object detection using images obtained from a camera, the embedded system installed in a vehicle makes it difficult to apply complicated algorithms because of its inherently low processing performance. In general, an embedded system needs a system-dependent algorithm because it has lower processing performance than a computer. In this paper, the location of an object is estimated from the object's motion information, obtained by applying a background subtraction method that compares previous frames with the current one. The real-time detection performance of the proposed method is verified experimentally on an embedded board by comparing the proposed algorithm with object detection based on LKOF (Lucas-Kanade optical flow).
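
A background subtraction scheme in the spirit of this embedded-friendly approach can be sketched as a running-average model; the learning rate and threshold below are assumed values, not the authors' parameters.

```python
# Running-average background subtraction: cheap per-pixel operations only,
# which is what makes the approach viable on a low-power embedded board.
import numpy as np

class BackgroundSubtractor:
    def __init__(self, first_frame, alpha=0.05, thresh=25):
        self.bg = first_frame.astype(float)   # background model
        self.alpha = alpha                    # background learning rate
        self.thresh = thresh                  # foreground decision threshold

    def apply(self, frame):
        """Return a boolean foreground mask and update the background."""
        frame = frame.astype(float)
        fg = np.abs(frame - self.bg) > self.thresh
        # update the model only where the scene appears static
        self.bg = np.where(fg, self.bg,
                           (1 - self.alpha) * self.bg + self.alpha * frame)
        return fg
```

Compared with per-feature optical flow, this costs a fixed number of arithmetic operations per pixel per frame, which is the trade-off the paper's embedded-board comparison examines.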

3D Omni-directional Vision SLAM using a Fisheye Lens Laser Scanner (어안 렌즈와 레이저 스캐너를 이용한 3차원 전방향 영상 SLAM)

  • Choi, Yun Won;Choi, Jeong Won;Lee, Suk Gyu
    • Journal of Institute of Control, Robotics and Systems, v.21 no.7, pp.634-640, 2015
  • This paper proposes a novel three-dimensional mapping algorithm for omni-directional vision SLAM based on a fisheye image and laser scanner data. The performance of SLAM has been improved by various estimation methods, sensors with multiple functions, or sensor fusion. Conventional 3D SLAM approaches, which mainly employ RGB-D cameras to obtain depth information, are not suitable for mobile robot applications because an RGB-D camera system with multiple cameras has a larger size and slow processing time when calculating depth information for omni-directional images. In this paper, we used a fisheye camera installed facing downwards and a two-dimensional laser scanner separated from the camera by a constant distance. We calculated fusion points from the plane coordinates of obstacles obtained from the two-dimensional laser scanner and the outlines of obstacles obtained from the omni-directional image sensor, which can acquire a surround view at the same time. The effectiveness of the proposed method is confirmed through comparison between maps obtained using the proposed algorithm and real maps.

Vision-based Self Localization Using Ceiling Artificial Landmark for Ubiquitous Mobile Robot (유비쿼터스 이동로봇용 천장 인공표식을 이용한 비젼기반 자기위치인식법)

  • Lee Ju-Sang;Lim Young-Cheol;Ryoo Young-Jae
    • Journal of the Korean Institute of Intelligent Systems, v.15 no.5, pp.560-566, 2005
  • In this paper, a practical technique is presented for correcting distorted images for the vision-based localization of a ubiquitous mobile robot. The localization of a mobile robot is essential and is realized using a camera vision system. In order to widen the view angle of the camera, the vision system includes a fish-eye lens, which distorts the image. Because a mobile robot moves rapidly, the image processing should be fast enough to recognize the localization. Thus, we propose a practical correction technique for distorted images and verify its performance by experimental tests.

Effect of Static Load Level of Ultrasonic Nanocrystal Surface Modification Technology on Fatigue Characteristics of SKD61 (초음파 나노 표면개질 기술의 정하중 레벨이 SKD61 강의 피로특성에 미치는 영향)

  • Suh, Chang-Min;Kim, Sung-Hwan
    • Journal of Ocean Engineering and Technology, v.22 no.2, pp.99-105, 2008
  • Ultrasonic nanocrystal surface modification (UNSM) is a method of inducing severe plastic deformation in a material surface, so that the structure of the material becomes nanocrystalline from the surface down to a certain depth. It improves the mechanical properties, namely hardness, compressive residual stress, and fatigue characteristics. Specimens of SKD61 were tested to verify the effects of varying the UNSM static load level on fatigue characteristics. The results were as follows: the grain size of SKD61 treated with UNSM became very fine from the material surface to a depth of 100 μm. The surface hardness of SKD61 increased by up to 37% after UNSM. Fatigue strength at 10^7 cycles increased by 8.3, 11.2, and 17.9%, respectively, when the static load levels of UNSM were 4, 6, and 8 kgf.

Fast Light Source Estimation Technique for Effective Synthesis of Mixed Reality Scene (효과적인 혼합현실 장면 생성을 위한 고속의 광원 추정 기법)

  • Shin, Seungmi;Seo, Woong;Ihm, Insung
    • Journal of the Korea Computer Graphics Society, v.22 no.3, pp.89-99, 2016
  • One of the fundamental elements in developing mixed reality applications is to effectively analyze the environmental lighting information and apply it to image synthesis. In particular, interactive applications require dynamically varying light sources to be processed in real time and reflected properly in the rendering results. Previous related works are often not appropriate for this because they are usually designed to synthesize photorealistic images, generating too many, often exponentially increasing, light sources, or having too heavy a computational complexity. In this paper, we present a fast light source estimation technique that searches on the fly for primary light sources in a sequence of video images taken by a camera equipped with a fisheye lens. In contrast to previous methods, our technique can adjust the number of found light sources approximately to the size that a user specifies. Thus, it can be effectively used in Phong-illumination-model-based direct illumination or soft shadow generation through light sampling over area lights.
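
The adjustable light-count idea can be sketched as luminance peak picking with non-maximum suppression; the suppression radius and brightness floor below are illustrative parameters, not the paper's.

```python
# Sketch: pick up to k primary light sources from a fisheye environment
# frame's luminance by repeated argmax with neighborhood suppression, so the
# returned count roughly matches a user-specified budget.
import numpy as np

def estimate_lights(luminance, k=4, radius=3, min_level=0.8):
    """Return up to k (y, x, intensity) peaks above min_level * max."""
    img = luminance.astype(float).copy()
    floor = min_level * img.max()
    lights = []
    for _ in range(k):
        y, x = np.unravel_index(np.argmax(img), img.shape)
        if img[y, x] < floor:
            break                          # remaining peaks too dim to count
        lights.append((y, x, img[y, x]))
        y0, y1 = max(0, y - radius), y + radius + 1
        x0, x1 = max(0, x - radius), x + radius + 1
        img[y0:y1, x0:x1] = -np.inf        # suppress the peak's neighborhood
    return lights
```

The returned peaks can then drive Phong-style direct illumination or serve as sampling sites for area-light soft shadows.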