• Title/Abstract/Keyword: object detect

Search results: 932 (processing time: 0.031 s)

물체 형상인식 알고리즘을 이용한 물고기 로봇 위치 검출에 관한 연구 (A Study of Detecting The Fish Robot Position Using The Object Boundary Algorithm)

  • 아마르나 바르마 앙가니;강민정;신규재
    • 한국정보처리학회 학술대회논문집 / 한국정보처리학회 2015 Fall Conference / pp.1350-1353 / 2015
  • In this paper, we study how to detect fish robot objects in an aquarium. We used the fish robot DOMI ver1.0, which was developed as an aquarium underwater robot. The robot model was analyzed to maximize the momentum of the robot fish, and its body was designed based on an analysis of biological fish swimming. We aimed to find and control the robot's position without external equipment by creating a boundary around the fish robot, and we focused on detecting the fish robot in the aquarium with a boundary algorithm. To find the object boundary, the video is split into picture frames, each frame is filtered, and RGB is converted to gray; the boundary algorithm, a set of equations that computes the boundaries of objects, is then applied. These procedures form an image-processing pipeline that distinguishes objects from the background in the captured video frames. Field tests of the image filtering, object detection, and boundary algorithm confirmed excellent performance.
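
A minimal sketch of the kind of frame-by-frame pipeline this abstract describes (frame capture, RGB-to-gray conversion, filtering, boundary extraction), assuming OpenCV in Python; the Otsu threshold, blur kernel, and contour-area cut-off are placeholder choices, not the authors' boundary equations.

```python
# Boundary extraction per video frame: gray conversion, filtering, contour boundary.
import cv2

cap = cv2.VideoCapture("aquarium.mp4")               # hypothetical input video
while True:
    ok, frame = cap.read()                           # split the video into picture frames
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)   # RGB -> gray
    blur = cv2.GaussianBlur(gray, (5, 5), 0)         # simple filtering step
    # assume the fish robot appears darker than the bright aquarium background
    _, mask = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) > 500:                 # ignore small background blobs
            x, y, w, h = cv2.boundingRect(c)         # boundary box around the fish robot
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("boundary", frame)
    if cv2.waitKey(30) & 0xFF == 27:                 # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```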

딥러닝을 활용한 단안 카메라 기반 실시간 물체 검출 및 거리 추정 (Monocular Camera based Real-Time Object Detection and Distance Estimation Using Deep Learning)

  • 김현우;박상현
    • 로봇학회논문지 / Vol. 14, No. 4 / pp.357-362 / 2019
  • This paper proposes a model and training method for real-time object detection and distance estimation from a monocular camera using deep learning. We use the YOLOv2 model, which is widely applied to autonomous vehicles and robots because of its fast image processing speed. We modified and retrained the loss function so that the YOLOv2 model can detect objects and estimate distances at the same time: in addition to the classification loss, the loss includes terms for the bounding-box values x, y, w, h and the distance value z, and the distance term is multiplied by a parameter to balance the learning. The model was trained on object locations and classes obtained from the camera together with distance data measured by lidar, so that it can estimate distances and objects from a monocular camera even when the vehicle is going up or down a hill. Object detection and distance estimation performance were evaluated with mAP (mean average precision) and adjusted R-squared and compared with previous research, and the FPS (frames per second) of our model was compared with that of the original YOLOv2 model for speed measurement.
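
As a sketch of how a distance term can be folded into a YOLO-style loss as described above, the PyTorch snippet below adds a weighted error term on a predicted distance z alongside the usual box, confidence, and classification losses. The tensor layout, the lambda_dist weight, and the use of squared error are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def yolo_with_distance_loss(pred, target, obj_mask, lambda_dist=0.1):
    """pred/target: (N, 6 + C) rows laid out as [x, y, w, h, conf, z, class scores...];
    obj_mask: (N,) bool, True where a ground-truth object is assigned to the cell/anchor."""
    box_loss  = F.mse_loss(pred[obj_mask, :4], target[obj_mask, :4])          # x, y, w, h
    conf_loss = F.binary_cross_entropy_with_logits(pred[:, 4], target[:, 4])  # objectness
    cls_loss  = F.cross_entropy(pred[obj_mask, 6:], target[obj_mask, 6:].argmax(dim=1))
    # extra term: regress the lidar-supervised distance z, weighted to balance the learning
    dist_loss = F.mse_loss(pred[obj_mask, 5], target[obj_mask, 5])
    return box_loss + conf_loss + cls_loss + lambda_dist * dist_loss
```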

단일 영상에서 안개 제거 방법을 이용한 객체 검출 알고리즘 개선 (Enhancement of Object Detection using Haze Removal Approach in Single Image)

  • 안효창;이용환
    • 반도체디스플레이기술학회지 / Vol. 17, No. 2 / pp.76-80 / 2018
  • In recent years, with the development of automobile technology, smart systems that assist safe driving have been developed. Cameras are installed on the front, rear, and both sides of the vehicle to detect and warn of collision risks and hazards. Beyond simple black-box recording with cameras, intelligent systems that combine various computer vision technologies are being developed. However, most related studies have been optimized for laboratory-like environments that do not take environmental factors such as weather into account. In this paper, we propose a method to detect objects by restoring visibility in images degraded by weather factors such as fog. First, degradation such as fog is detected in a single image, and image quality is improved by restoration with a median filter. We then use an adaptive feature extraction method that removes unnecessary elements such as noise from the improved image and recognizes objects using only the necessary features. Experiments show that the proposed method extracts more feature points from the region of interest of the improved image.
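
The abstract does not spell out the restoration and feature-extraction steps, so the snippet below is only a rough Python/OpenCV illustration of that kind of pipeline: a crude dark-channel haze estimate refined with a median filter, followed by counting ORB feature points before and after restoration. The dehazing formula, constants, and the ORB detector are substitutes assumed for illustration, not the paper's method.

```python
import cv2
import numpy as np

def dehaze(img, omega=0.9, kernel=15):
    dark = cv2.erode(img.min(axis=2).astype(np.uint8),
                     np.ones((kernel, kernel), np.uint8))      # dark-channel estimate
    dark = cv2.medianBlur(dark, 5)                             # median-filter refinement
    A = float(img.reshape(-1, 3).max())                        # crude atmospheric light
    t = np.clip(1.0 - omega * dark / A, 0.1, 1.0)              # transmission map
    restored = (img.astype(np.float32) - A) / t[..., None] + A
    return np.clip(restored, 0, 255).astype(np.uint8)

img = cv2.imread("foggy_road.jpg")                             # hypothetical degraded input
restored = dehaze(img)
orb = cv2.ORB_create(nfeatures=1000)                           # keep only salient features
kp_before = orb.detect(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY), None)
kp_after = orb.detect(cv2.cvtColor(restored, cv2.COLOR_BGR2GRAY), None)
print(len(kp_before), "->", len(kp_after), "keypoints after restoration")
```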

Implementation of YOLOv5-based Forest Fire Smoke Monitoring Model with Increased Recognition of Unstructured Objects by Increasing Self-learning data

  • Gun-wo, Do;Minyoung, Kim;Si-woong, Jang
    • International Journal of Advanced Culture Technology / Vol. 10, No. 4 / pp.536-546 / 2022
  • Society suffers great losses when a forest fire breaks out. If a forest fire can be detected in advance, damage caused by its spread can be prevented early, so we studied how to detect forest fires using already-installed CCTV. In this paper, we present a deep learning model, based on YOLOv5, for monitoring forest fire smoke, an unstructured object, through efficient image data construction. We studied how to accurately detect forest fire smoke, an amorphous object that appears in many forms, with YOLOv5, and we introduce a self-learning method that produces additional data on its own to raise the accuracy of unstructured object recognition when data are insufficient. The proposed method uses the original images and a model trained on them to construct a dataset with fixed labelling positions for images containing objects extracted from the original images. Training the deep learning model on this dataset improved performance (mAP) and reduced the errors caused by detecting objects other than the target object, compared with a model trained only on the original images.
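
A minimal sketch of the self-learning (pseudo-labelling) idea described above: a YOLOv5 model trained on the original smoke images labels newly extracted images, and detections above a confidence threshold are written out in YOLO label format to grow the training set. The paths, the 0.6 threshold, and the torch.hub loading route are assumptions for illustration, not the authors' exact setup.

```python
import glob, os
import torch

# model trained on the original images (hypothetical weights path)
model = torch.hub.load("ultralytics/yolov5", "custom",
                       path="runs/train/smoke/weights/best.pt")

os.makedirs("self_learning/labels", exist_ok=True)
for img_path in glob.glob("self_learning/images/*.jpg"):
    results = model(img_path)
    rows = []
    # results.xywhn: normalized (x_center, y_center, w, h, conf, class) per detection
    for x, y, w, h, conf, cls in results.xywhn[0].tolist():
        if conf >= 0.6:                               # keep only confident smoke detections
            rows.append(f"{int(cls)} {x:.6f} {y:.6f} {w:.6f} {h:.6f}")
    if rows:                                          # fixed labelling position per image
        name = os.path.splitext(os.path.basename(img_path))[0]
        with open(f"self_learning/labels/{name}.txt", "w") as f:
            f.write("\n".join(rows))
```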

동적 물체의 비전 검출을 통한 이동로봇의 장애물 회피 (Mobile Robot Obstacle Avoidance using Visual Detection of a Moving Object)

  • 김인권;송재복
    • 로봇학회논문지 / Vol. 3, No. 3 / pp.212-218 / 2008
  • Collision avoidance is a fundamental and important task for an autonomous mobile robot navigating safely in real environments with high uncertainty. Obstacles are classified as static or dynamic, and dynamic obstacles are difficult to avoid because their positions can change at any time. This paper proposes a scheme for vision-based avoidance of dynamic obstacles. The approach extracts object candidates that can be considered moving objects with a labeling algorithm using depth information, and then detects moving objects among the candidates using motion vectors. When motion vectors cannot be extracted, it can still detect moving objects stably through their color information. The robot avoids the dynamic obstacle using the dynamic window approach (DWA), a well-known technique for reactive collision avoidance, with the object path estimated from the information of the detected obstacles. This paper also proposes an algorithm that autonomously registers the obstacle color. With the proposed scheme, the robot can navigate more safely and efficiently.
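
Since the avoidance step relies on the dynamic window approach, here is a compact generic sketch of one DWA decision step: sample admissible (v, w) pairs inside the dynamic window, roll each candidate forward, and score the resulting trajectory. The robot limits, cost weights, and the simple point-obstacle clearance check are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def dwa_step(state, obstacles, goal, dt=0.1, horizon=2.0,
             v_max=0.5, w_max=1.0, a_v=0.5, a_w=1.5):
    """state = (x, y, theta, v, w); obstacles = (M, 2) array of predicted obstacle positions."""
    x, y, th, v, w = state
    best, best_cmd = -np.inf, (0.0, 0.0)
    for v_c in np.linspace(max(0.0, v - a_v * dt), min(v_max, v + a_v * dt), 7):
        for w_c in np.linspace(max(-w_max, w - a_w * dt), min(w_max, w + a_w * dt), 11):
            # forward-simulate the candidate velocity over the horizon
            px, py, pth = x, y, th
            traj = []
            for _ in range(int(horizon / dt)):
                pth += w_c * dt
                px += v_c * np.cos(pth) * dt
                py += v_c * np.sin(pth) * dt
                traj.append((px, py))
            traj = np.asarray(traj)
            clearance = np.min(np.linalg.norm(traj[:, None, :] - obstacles[None], axis=2))
            if clearance < 0.3:                      # trajectory collides: discard
                continue
            heading = -np.hypot(goal[0] - px, goal[1] - py)    # closer to goal is better
            score = 1.0 * heading + 0.5 * clearance + 0.2 * v_c
            if score > best:
                best, best_cmd = score, (v_c, w_c)
    return best_cmd    # (v, w) command to send to the robot
```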


미지물체를 잡기 위한 로봇 손가락의 3축 힘감지센서 설계 및 제작 (Design and fabrication of robot's finger 3-axis force sensor for grasping an unknown object)

  • 김갑순
    • 한국정밀공학회 학술대회논문집 / 한국정밀공학회 2002 Spring Conference Proceedings / pp.229-232 / 2002
  • This paper describes the development of a robot finger 3-axis force sensor that detects Fx, Fy, and Fz simultaneously for stably grasping an unknown object. In order to safely grasp an unknown object, the robot's fingers should detect the force in the gripping direction and the force in the gravity direction, and perform force control using the detected forces. A 3-axis force sensor that detects Fx, Fy, and Fz simultaneously is required to accurately measure the weight of an unknown object in the gravity direction. In this paper, a robot finger for stably grasping an unknown object is developed, and the 3-axis force sensor that detects Fx, Fy, and Fz simultaneously is newly modeled using several parallel-plate beams and fabricated. The sensor is also calibrated and evaluated.
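
To illustrate how such a calibrated sensor is typically read out, the sketch below maps three bridge output voltages to (Fx, Fy, Fz) through a 3x3 calibration (decoupling) matrix obtained in the calibration step. The matrix entries and the voltages are purely hypothetical values, not the paper's calibration results.

```python
import numpy as np

# Hypothetical calibration matrix [N per volt], including small cross-talk terms
C = np.array([[12.1,  0.3, -0.2],
              [ 0.4, 11.8,  0.1],
              [-0.1,  0.2, 25.6]])

def read_forces(bridge_voltages):
    """bridge_voltages: (Vx, Vy, Vz) amplified outputs of the three strain-gauge bridges."""
    return C @ np.asarray(bridge_voltages)

fx, fy, fz = read_forces([0.05, -0.02, 0.31])
print(f"Fx={fx:.2f} N, Fy={fy:.2f} N, Fz={fz:.2f} N")  # gripping- vs. gravity-direction forces
```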


Development of an Edge-Based Algorithm for Moving-Object Detection Using Background Modeling

  • Shin, Won-Yong;Kabir, M. Humayun;Hoque, M. Robiul;Yang, Sung-Hyun
    • Journal of Information and Communication Convergence Engineering / Vol. 12, No. 3 / pp.193-197 / 2014
  • Edges are a robust feature for object detection. In this paper, we present an edge-based background modeling method for the detection of moving objects. The edges in the image frames were extracted with the robust Canny edge detector. Two edge maps were created and combined to calculate the ultimate moving-edge map. By selecting all edge pixels of the current frame above the defined threshold of the ultimate moving edges, a temporary background-edge map was created. If the frequencies of the temporary background edge pixels over several frames were above the threshold, those edge pixels were treated as background edge pixels. Existing edge-based moving-object detection algorithms have difficulty with changes in background motion, object shape, illumination variation, and noise, so we conducted a performance comparison with previous works. The results of the evaluation show that the proposed algorithm can detect moving objects efficiently in real-world scenarios.
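
A simplified Python/OpenCV sketch of this kind of edge-based background modelling loop: Canny edge maps of consecutive frames are combined into a moving-edge map, and edge pixels that recur over many frames are promoted to the background-edge map. The Canny thresholds and the frequency threshold are placeholder values, not those of the paper.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("scene.mp4")                     # hypothetical input video
ok, prev = cap.read()
prev_edges = cv2.Canny(cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY), 50, 150)
hits = np.zeros(prev_edges.shape, np.uint16)            # per-pixel edge frequency
background = np.zeros_like(prev_edges)                  # background-edge map

while True:
    ok, frame = cap.read()
    if not ok:
        break
    edges = cv2.Canny(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), 50, 150)
    moving = cv2.bitwise_and(edges, cv2.bitwise_not(background))      # edges not yet in background
    changed = cv2.bitwise_and(moving, cv2.bitwise_not(prev_edges))    # new edges vs. previous frame
    hits[edges > 0] += 1
    background[hits > 100] = 255                        # frequently recurring edges become background
    prev_edges = edges
    cv2.imshow("moving edges", changed)
    if cv2.waitKey(30) & 0xFF == 27:
        break
cap.release()
```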

히스토그램과 블록분할을 이용한 매칭 알고리즘 (Matching Algorithm using Histogram and Block Segmentation)

  • 박성곤;최연호;조내수;임성운;권우현
    • 대한전자공학회 학술대회논문집 / 대한전자공학회 2009 Information and Control Symposium Proceedings / pp.231-233 / 2009
  • Object recognition is one of the major fields of computer vision. Feature-based object recognition (e.g., SIFT) finds common features between input images and query images, but feature-based methods suffer from heavy computation when the input and query images are resized. In this paper, we focus on speeding up the search for features in the images and propose a method using block segmentation and histograms: the input image is divided into blocks, and histograms determine the correlation between each block and the query image. The paper confirms that the matching time for object recognition is reduced because fewer blocks need to be processed.
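
A small sketch of the block/histogram idea described above: the input image is divided into blocks, each block's colour histogram is compared with the query image's histogram, and only the best-correlated blocks would then be passed to the expensive feature-matching stage. The block size and the correlation metric are illustrative choices.

```python
import cv2

def block_histogram_scores(image, query, block=64):
    q_hist = cv2.calcHist([query], [0, 1, 2], None, [8, 8, 8], [0, 256] * 3)
    cv2.normalize(q_hist, q_hist)
    scores = []
    h, w = image.shape[:2]
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            b = image[y:y + block, x:x + block]
            b_hist = cv2.calcHist([b], [0, 1, 2], None, [8, 8, 8], [0, 256] * 3)
            cv2.normalize(b_hist, b_hist)
            corr = cv2.compareHist(q_hist, b_hist, cv2.HISTCMP_CORREL)
            scores.append(((x, y), corr))
    # highest-correlation blocks are the candidate regions for SIFT-style matching
    return sorted(scores, key=lambda s: s[1], reverse=True)

image = cv2.imread("scene.jpg")      # hypothetical input image
query = cv2.imread("query.jpg")      # hypothetical query image
print(block_histogram_scores(image, query)[:5])
```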


스테레오 카메라를 이용한 물체의 3D 포즈 인식 (The Object 3D Pose Recognition Using Stereo Camera)

  • 유성훈;강효석;조영완;김은태;박민용
    • 대한전자공학회 학술대회논문집 / 대한전자공학회 2008 Summer Conference / pp.1123-1124 / 2008
  • In this paper, we develop a program that recognizes the 3D pose of an object using a stereo camera. The Canny edge detection algorithm is applied to detect the object, the stereo camera is used to obtain 3D points of the object, and the iterative closest point (ICP) algorithm is applied to recognize the object's pose.
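
A minimal sketch of the ICP step: at each iteration the stereo-derived points are matched to their nearest model points, and the best rigid transform is found with an SVD (Kabsch) fit. This is a generic ICP outline under those assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, iterations=30):
    """source: (N, 3) measured points, target: (M, 3) model points; returns aligned source, R, t."""
    src = source.copy()
    tree = cKDTree(target)
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iterations):
        _, idx = tree.query(src)                 # nearest-neighbour correspondences
        matched = target[idx]
        mu_s, mu_t = src.mean(axis=0), matched.mean(axis=0)
        H = (src - mu_s).T @ (matched - mu_t)    # cross-covariance
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:                 # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_t - R @ mu_s
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return src, R_total, t_total
```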


위치 정보 기반 객체인지에 대한 연구 (A study for object recognition based on location information)

  • 김관중
    • 한국산학기술학회논문지 / Vol. 14, No. 4 / pp.1988-1992 / 2013
  • In this paper, we propose an object recognition method for video objects that enter a given area. The method is needed for application modules that detect and track the behavior patterns of objects entering a specific region. Object recognition of this kind can be applied in many application modules; the goal is to extend recognition from image information alone to recognition of real-world coordinates. By registering GPS coordinates with the image information, the position coordinates of an object are extracted, and the location of a recognized object within the designated area is determined.
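
The abstract does not give the registration details, so the snippet below shows one common way such image-to-GPS registration is realized: a ground-plane homography is fitted from a few pixel points with known GPS coordinates, and the foot point of any detected object is then mapped to (latitude, longitude). The reference points are made-up placeholders and the homography approach is an assumption, not necessarily the paper's method.

```python
import cv2
import numpy as np

# >= 4 pixel points on the ground plane and their surveyed GPS coordinates (placeholders)
pixel_pts = np.array([[120, 700], [1180, 690], [640, 420], [300, 500]], np.float32)
gps_pts = np.array([[37.56610, 126.97790], [37.56612, 126.97830],
                    [37.56660, 126.97810], [37.56640, 126.97795]], np.float32)

H, _ = cv2.findHomography(pixel_pts, gps_pts)

def pixel_to_gps(u, v):
    """Map the bottom-centre pixel of a detected object's bounding box to GPS coordinates."""
    p = cv2.perspectiveTransform(np.array([[[u, v]]], np.float32), H)
    return float(p[0, 0, 0]), float(p[0, 0, 1])   # (latitude, longitude)

print(pixel_to_gps(640, 650))
```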