• Title/Abstract/Keywords: fish image

158 search results (processing time: 0.026 s)

활어 개체어의 광대역 음향산란신호에 대한 시간-주파수 이미지의 어파인 변환과 주성분 분석을 이용한 어종식별 (Identification of Fish Species using Affine Transformation and Principal Component Analysis of Time-Frequency Images of Broadband Acoustic Echoes from Individual Live Fish)

  • 이대재
    • 한국수산과학회지 (Korean Journal of Fisheries and Aquatic Sciences)
    • /
    • Vol. 50, No. 2
    • /
    • pp.195-206
    • /
    • 2017
  • Joint time-frequency images of the broadband echo signals of six fish species were obtained using the smoothed pseudo-Wigner-Ville distribution in controlled environments. Affine transformation and principal component analysis were used to obtain eigenimages that provided species-specific acoustic features for each of the six fish species. The echo images of an unknown fish species, acquired in real time and in a fully automated fashion, were identified by finding the smallest Euclidean or Mahalanobis distance between each combination of weight matrices of the test image of the fish species to be identified and of the eigenimage classes of each of six fish species in the training set. The experimental results showed that the Mahalanobis classifier performed better than the Euclidean classifier in identifying both single- and mixed-species groups of all species assessed.
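The matching step above — project a test image onto a set of eigenimages and assign the class with the smallest Euclidean or Mahalanobis distance in weight space — can be sketched as follows. This is a minimal NumPy illustration with synthetic "time-frequency images"; the image size, class count, and covariance regularization are assumptions, and the affine normalization step is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def pca_basis(images, k):
    """Return the mean image and top-k eigenimages (rows of Vt)."""
    X = images.reshape(len(images), -1).astype(float)
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]

def weights(image, mean, basis):
    """Project one image onto the eigenimage basis."""
    return basis @ (image.ravel() - mean)

# Two synthetic "species", each a noisy version of a distinct template.
t1 = rng.normal(0, 1, (8, 8))
t2 = rng.normal(0, 1, (8, 8))
train1 = np.stack([t1 + rng.normal(0, 0.1, t1.shape) for _ in range(20)])
train2 = np.stack([t2 + rng.normal(0, 0.1, t2.shape) for _ in range(20)])

mean, basis = pca_basis(np.concatenate([train1, train2]), k=5)
W1 = np.stack([weights(im, mean, basis) for im in train1])
W2 = np.stack([weights(im, mean, basis) for im in train2])

def classify(image):
    """Assign the class whose weight cloud is nearest (Mahalanobis)."""
    w = weights(image, mean, basis)
    dists = []
    for W in (W1, W2):
        mu = W.mean(axis=0)
        cov = np.cov(W.T) + 1e-6 * np.eye(W.shape[1])  # regularized
        d = w - mu
        dists.append(float(d @ np.linalg.solve(cov, d)))
    return int(np.argmin(dists))

print(classify(train1[0]), classify(train2[0]))  # → 0 1
```

The Euclidean variant of the classifier would simply replace the Mahalanobis form with `np.sum((w - mu)**2)`, ignoring the within-class covariance.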

Semiautomated Analysis of Data from an Imaging Sonar for Fish Counting, Sizing, and Tracking in a Post-Processing Application

  • Kang, Myoung-Hee
    • Fisheries and Aquatic Sciences
    • /
    • Vol. 14, No. 3
    • /
    • pp.218-225
    • /
    • 2011
  • Dual frequency identification sonar (DIDSON) is an imaging sonar that has been used for numerous fisheries investigations in a diverse range of freshwater and marine environments. The main purpose of DIDSON is fish counting, fish sizing, and fish behavioral studies. DIDSON records video-quality data, so processing power for handling the vast amount of data with high speed is a priority. Therefore, a semiautomated analysis of DIDSON data for fish counting, sizing, and fish behavior in Echoview (fisheries acoustic data analysis software) was accomplished using testing data collected on the Rakaia River, New Zealand. Using this data, the methods and algorithms for background noise subtraction, image smoothing, target (fish) detection, and conversion to single targets were precisely illustrated. Verification by visualization identified the resulting targets. As a result, not only fish counts but also fish sizing information such as length, thickness, perimeter, compactness, and orientation were obtained. The alpha-beta fish tracking algorithm was employed to extract the speed, change in depth, and the distributed depth relating to fish behavior. Tail-beat pattern was depicted using the maximum intensity of all beams. This methodology can be used as a template and applied to data from BlueView two-dimensional imaging sonar.
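The alpha-beta tracking mentioned above is a fixed-gain predictor–corrector that smooths noisy per-frame positions and estimates speed. A minimal sketch, with illustrative gains and a synthetic measurement sequence rather than anything from the DIDSON data:

```python
def alpha_beta_track(measurements, dt=1.0, alpha=0.5, beta=0.1):
    """Return smoothed positions and velocity estimates (g-h filter)."""
    x, v = measurements[0], 0.0
    xs, vs = [], []
    for z in measurements[1:]:
        # Predict forward, then correct with the measurement residual.
        x_pred = x + v * dt
        r = z - x_pred
        x = x_pred + alpha * r
        v = v + beta * r / dt
        xs.append(x)
        vs.append(v)
    return xs, vs

# A target drifting at ~0.5 units per frame with alternating noise.
noisy = [0.5 * i + (0.05 if i % 2 else -0.05) for i in range(40)]
xs, vs = alpha_beta_track(noisy)
print(round(vs[-1], 2))  # velocity estimate converges near 0.5
```

In the paper the same idea runs per detected fish target, yielding speed and depth-change estimates from frame-to-frame positions.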

Fish Injured Rate Measurement Using Color Image Segmentation Method Based on K-Means Clustering Algorithm and Otsu's Threshold Algorithm

  • Sheng, Dong-Bo;Kim, Sang-Bong;Nguyen, Trong-Hai;Kim, Dae-Hwan;Gao, Tian-Shui;Kim, Hak-Kyeong
    • 동력기계공학회지 (Journal of the Korean Society for Power System Engineering)
    • /
    • Vol. 20, No. 4
    • /
    • pp.32-37
    • /
    • 2016
  • This paper proposes two methods for measuring the injured rate of a fish surface using color image segmentation based on the K-means clustering algorithm and Otsu's threshold algorithm. The procedure is as follows. First, an RGB color image of the fish is obtained with a CCD color camera and converted from RGB to HSI. Second, the S channel is extracted from the HSI color space. Third, binary images are obtained by applying the K-means clustering algorithm to the HSI color space and Otsu's threshold algorithm to the S channel. Fourth, morphological operations such as dilation and erosion are applied to the binary images. Fifth, connected-component labeling is used to count pixels, and the defined injured rate is obtained by counting the pixels in the labeled images. Finally, to compare the performance of the two methods, edge detection is performed on the final binary images after morphological processing, and the edges are matched against the gray image of the original RGB image from the CCD camera. The results show that the injured-part edge detected by the K-means clustering algorithm is closer to the real injured edge than that detected by Otsu's threshold algorithm.
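The two segmentation routes can be sketched side by side. This is a hedged NumPy illustration with a synthetic saturation channel standing in for a real HSI fish image; the intensity statistics, patch geometry, and bin count are assumptions.

```python
import numpy as np

def otsu_threshold(channel, bins=256):
    """Classic Otsu: maximize between-class variance over histogram bins."""
    hist, edges = np.histogram(channel, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                 # class-0 weight up to each bin
    m = np.cumsum(p * centers)        # class-0 first moment
    mt, w1 = m[-1], 1 - np.cumsum(p)
    valid = (w0 > 0) & (w1 > 0)
    var_between = np.zeros_like(w0)
    var_between[valid] = (mt * w0[valid] - m[valid])**2 / (w0[valid] * w1[valid])
    return centers[np.argmax(var_between)]

def kmeans2(channel, iters=20):
    """1-D K-means with k=2 on pixel intensities; returns the midpoint cut."""
    x = channel.ravel()
    c = np.array([x.min(), x.max()], dtype=float)
    for _ in range(iters):
        labels = np.abs(x[:, None] - c[None, :]).argmin(axis=1)
        for k in (0, 1):
            if np.any(labels == k):
                c[k] = x[labels == k].mean()
    return c.mean()  # decision boundary between the two cluster centers

rng = np.random.default_rng(1)
# Background saturation ~0.2, "injured" region saturation ~0.8.
img = np.clip(rng.normal(0.2, 0.05, (64, 64)), 0, 1)
img[20:40, 20:40] = np.clip(rng.normal(0.8, 0.05, (20, 20)), 0, 1)

t_otsu, t_km = otsu_threshold(img), kmeans2(img)
injured_rate = (img > t_otsu).mean()  # fraction of pixels flagged injured
print(round(t_otsu, 2), round(t_km, 2), round(injured_rate, 3))
```

On real images the binary mask would then go through the dilation/erosion and connected-component steps described in the abstract before the rate is computed.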

어안 워핑 이미지 기반의 Ego motion을 이용한 위치 인식 알고리즘 (Localization using Ego Motion based on Fisheye Warping Image)

  • 최윤원;최경식;최정원;이석규
    • 제어로봇시스템학회논문지 (Journal of Institute of Control, Robotics and Systems)
    • /
    • Vol. 20, No. 1
    • /
    • pp.70-77
    • /
    • 2014
  • This paper proposes a novel localization algorithm based on ego-motion, which uses Lucas-Kanade optical flow and warping images obtained through fish-eye lenses mounted on the robot. An omnidirectional image sensor is desirable for real-time view-based recognition because all the information around the robot can be obtained simultaneously. Preprocessing (distortion correction, image merging, etc.) of the omnidirectional image, which is obtained by a camera with a reflecting mirror or by combining multiple camera images, is essential because it is difficult to obtain information from the original image. The core of the proposed algorithm may be summarized as follows: First, we capture instantaneous $360^{\circ}$ panoramic images around the robot through fish-eye lenses mounted facing downward. Second, we extract motion vectors using Lucas-Kanade optical flow on the preprocessed image. Third, we estimate the robot's position and angle using an ego-motion method based on the vector directions and the vanishing point obtained by RANSAC. We confirmed the reliability of the proposed localization algorithm by comparing the positions and angles it estimated against those measured by a Global Vision Localization System.
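The third step — recovering a vanishing point from flow-vector directions with RANSAC — can be illustrated on synthetic data. Under pure forward motion every flow vector lies on a line through the focus of expansion, so that point is recoverable as a robust line intersection; the flow field, noise levels, and thresholds below are assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

def foe_from_flow(points, flows):
    """Least-squares intersection of the lines through each point along its flow."""
    # Line constraint: flow_y * x - flow_x * y = flow_y * px - flow_x * py
    A = np.stack([flows[:, 1], -flows[:, 0]], axis=1)
    b = flows[:, 1] * points[:, 0] - flows[:, 0] * points[:, 1]
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    return sol

def foe_ransac(points, flows, iters=200, tol=0.5):
    """Fit on random pairs; keep the candidate with the most inlier lines."""
    n = np.stack([flows[:, 1], -flows[:, 0]], axis=1)
    n = n / (np.linalg.norm(n, axis=1, keepdims=True) + 1e-12)  # line normals
    best = np.zeros(len(points), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(points), 2, replace=False)
        cand = foe_from_flow(points[idx], flows[idx])
        res = np.abs(np.sum((cand - points) * n, axis=1))  # point-line distances
        inliers = res < tol
        if inliers.sum() > best.sum():
            best = inliers
    return foe_from_flow(points[best], flows[best])

# Synthetic expansion field around a true vanishing point at (3, -2).
true_foe = np.array([3.0, -2.0])
pts = rng.uniform(-20.0, 20.0, (60, 2))
flw = (pts - true_foe) * 0.1 + rng.normal(0.0, 0.01, (60, 2))
flw[:8] = rng.uniform(-2.0, 2.0, (8, 2))  # gross outliers
est = foe_ransac(pts, flw)
print(np.round(est, 1))
```

The RANSAC stage is what tolerates the outlier vectors that real Lucas-Kanade flow inevitably produces on moving scene content.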

차량용 어안렌즈영상의 기하학적 왜곡 보정 (Geometric Correction of Vehicle Fish-eye Lens Images)

  • 김성희;조영주;손진우;이중렬;김명희
    • 한국HCI학회:학술대회논문집 (Proceedings of the HCI Society of Korea Conference)
    • /
    • HCI Society of Korea 2009 Conference
    • /
    • pp.601-605
    • /
    • 2009
  • $180^{\circ}$ 이상의 영역을 획득하는 어안렌즈(fish-eye lens)는 최소의 카메라로 최대 시야각을 확보할 수 있는 장점으로 인해 차량 장착 시도가 늘고 있다. 운전자에게 현실감 있는 영상을 제공하고 센서로 이용하기 위해서는 캘리브레이션을 통해 방사왜곡(radial distortion)에 따른 기하학적인 왜곡 보정이 필요하다. 그런데 차량용 어안렌즈의 경우, 대각선 어안렌즈로 일반 원상 어안렌즈로 촬영한 둥근 화상의 바깥둘레에 내접하는 부분을 잘라낸 직사각형 영상과 같으며, 수직, 수평 화각에 따라 왜곡이 비대칭구조로 설계되었다. 본 논문에서는, 영상의 특징점(feature points)을 이용하여 차량용 어안렌즈에 적합한 카메라 모델 및 캘리브레이션 기법을 소개한다. 캘리브레이션한 결과, 제안한 방법은 화각이 다른 차량용 어안렌즈에도 적용 가능하다.


Semiautomatic 3D Virtual Fish Modeling based on 2D Texture

  • Nakajima, Masayuki;Hagiwara, Hisaya;Kong, Wai-Ming;Takahashi, Hiroki
    • 한국방송∙미디어공학회:학술대회논문집 (Proceedings of the Korean Society of Broadcast Engineers Conference)
    • /
    • Korean Society of Broadcast Engineers, 1996 Proceedings International Workshop on New Video Media Technology
    • /
    • pp.18-21
    • /
    • 1996
  • In the field of Virtual Reality, many studies have been reported; in particular, there are many studies on generating virtual creatures on computer systems. In this paper we propose an algorithm to automatically generate 3D fish models from 2D images printed in illustrated books, pictures, or hand drawings. First, 2D fish images are captured by means of an image scanner. Next, the fish image is separated from the background and segmented into several parts such as the body, anal fin, dorsal fin, pectoral fin, and ventral fin using the proposed "Active Balloon model" method. After that, users choose a front-view model and a top-view model from six samples each. A 3D model is automatically generated from the separated body, the fins, and the two chosen view models. The number of patches is decreased, without any influence on the accuracy of the generated 3D model, to reduce the time cost of texture mapping. Finally, a wide variety of 3D fish models can be obtained.


고해상도 사이드 스캔 소나 영상의 보정 및 매핑 알고리즘의 개발 (Development of Algorithms for Correcting and Mapping High-Resolution Side Scan Sonar Imagery)

  • 이동진;박요섭;김학일
    • 대한원격탐사학회지 (Korean Journal of Remote Sensing)
    • /
    • Vol. 17, No. 1
    • /
    • pp.45-56
    • /
    • 2001
  • To obtain information about the seabed, mosaic images of the seabed were generated using a side scan sonar. To compute the tow-fish altitude above the seabed needed for slant-range correction, a short-time energy function was applied to the sound-pressure level of each ping, and mosaic images with the water-column region removed could be generated. For the representative sound-pressure value of each pixel in the mosaic, the maximum, most recent, and mean values were used; when the mean was used, it was computed only from the sound-pressure values of pings transmitted in a given direction, preserving three-dimensional information about the seabed. For mosaicking, a low-resolution (1 m/pixel or coarser) mosaic of the entire survey area was generated first; a region of interest was then selected and a high-quality mosaic with a spatial resolution of 0.1 m/pixel was generated, in which seabed objects such as rocks, ripple marks, mud flats, and artificial reefs could be identified.
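The slant-range correction step can be sketched: find the tow-fish altitude from a short-time-energy jump in the ping (taken as the first bottom return), then convert each sample's slant range to horizontal ground range, which removes the water column. The ping below is an idealized stand-in, and the window and threshold values are assumptions.

```python
import numpy as np

def altitude_from_ping(ping, window=5, factor=10.0):
    """Index of the first short-time-energy jump, taken as the bottom return."""
    # Centered moving-average energy; detection fires a couple of samples
    # before the return itself because the window is centered.
    energy = np.convolve(ping**2, np.ones(window) / window, mode="same")
    noise = energy[:20].mean()  # leading samples assumed to be water column
    return int(np.nonzero(energy > factor * noise)[0][0])

def slant_to_ground(slant_ranges, altitude):
    """Horizontal ground range for each sample, assuming a flat seabed."""
    s = np.asarray(slant_ranges, dtype=float)
    return np.sqrt(np.maximum(s**2 - altitude**2, 0.0))

# Idealized ping: weak water-column level, then a strong seabed return.
ping = np.concatenate([np.full(80, 0.05), np.full(120, 1.0)])
alt_idx = altitude_from_ping(ping)
print(alt_idx, slant_to_ground([5.0], 3.0)[0])
```

With the altitude expressed in the same units as the slant range, a sample at slant range 5 under an altitude of 3 maps to a ground range of 4, the familiar 3-4-5 geometry.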

융합콘텐츠개발을 위한 『자산어보』 해양문화원형 연구 (Study on Maritime Cultural Archetypes of 'Jasan-Urbo' for Convergent Contents Development)

  • 김상남;이영숙
    • 한국멀티미디어학회논문지 (Journal of Korea Multimedia Society)
    • /
    • Vol. 23, No. 3
    • /
    • pp.490-498
    • /
    • 2020
  • This study investigated elements of seafood culture among the cultural archetypes of 『Jasan Urbo』. We formulated a hypothesis about the 100 fish species presented in 『Jasan Urbo』 in order to extract archetypes from them. To test this hypothesis, we analyzed the properties of the archetypes according to recipes. Next, we identified the food preferences of the period by examining 'taste' words in the literature. Finally, we examined the relations between dominant features of the cuisine and fish preferences. We found four attributes of maritime cultural archetypes. Our research has limitations, since we could not account for all the fish species in 『Jasan Urbo』. Nevertheless, it produced a substantial outcome: we applied various archetype-extraction methods and established the dominant maritime cultural archetypes and preferences.

차량용 어안렌즈 카메라 캘리브레이션 및 왜곡 보정 (Camera Calibration and Barrel Undistortion for Fisheye Lens)

  • 허준영;이동욱
    • 전기학회논문지 (The Transactions of the Korean Institute of Electrical Engineers)
    • /
    • Vol. 62, No. 9
    • /
    • pp.1270-1275
    • /
    • 2013
  • Much research on camera calibration and lens distortion for wide-angle lenses has been carried out. Calibration for fish-eye lenses with a FOV (field of view) of 180 degrees or more is especially tricky, so existing research has employed huge calibration patterns or even 3D patterns. It is also important that calibration parameters (such as distortion coefficients) be suitably initialized to obtain accurate calibration results. This can be achieved using manufacturer information or the least-squares method for lenses with relatively narrow FOV (135 or 150 degrees). In this paper, without any prior manufacturer information, camera calibration and barrel undistortion for a fish-eye lens with over 180 degrees of FOV are achieved using only one calibration pattern image. We applied QR decomposition for initialization and regularization for optimization. Experimental results verified that our algorithm achieves camera calibration and image undistortion successfully.
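For intuition about why lenses at or beyond 180 degrees of FOV need a different starting model: the pinhole projection r = f·tan(θ) diverges at 90 degrees, while the equidistant fisheye model r = f·θ keeps the image radius finite, so rays behind the image plane remain representable. A small illustrative sketch; the focal length is an assumed value, not one from the paper.

```python
import math

def equidistant_radius(theta_deg, f):
    """Image radius of a ray at angle theta from the optical axis (r = f*theta)."""
    return f * math.radians(theta_deg)

def equidistant_angle(r, f):
    """Inverse: viewing angle (degrees) of a pixel at radius r."""
    return math.degrees(r / f)

f = 300.0  # focal length in pixels (assumed)
r90 = equidistant_radius(90, f)           # finite, unlike f*tan(90 deg)
print(round(r90, 1), round(equidistant_angle(r90, f), 1))
```

A calibration like the one above would fit the focal length and deviations from this ideal model (distortion coefficients) from a pattern image.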

색상 검출 알고리즘을 활용한 물고기로봇의 위치인식과 군집 유영제어 (Position Detection and Gathering Swimming Control of Fish Robot Using Color Detection Algorithm)

  • 무하마드 아크바르;신규재
    • 한국정보처리학회:학술대회논문집 (Proceedings of the Korea Information Processing Society Conference)
    • /
    • Korea Information Processing Society 2016 Fall Conference
    • /
    • pp.510-513
    • /
    • 2016
  • Detecting an object in image processing is fundamental, but it depends on the object itself and on the environment. An object can be detected either by its shape or by its color. Color is essential for pattern recognition and computer vision; it is an attractive feature because of its simplicity and its robustness to scale changes when detecting an object's position. Generally, the perceived color of an object depends on the characteristics of the perceiving eye and brain; physically, objects can be said to have color because of the light leaving their surfaces. We conducted experiments in an aquarium fish tank, where fish robots of different colors mimic the natural swimming of fish. Unfortunately, in the underwater medium colors are modified by attenuation, making it difficult to identify the color of moving objects. We treat each fish robot as a moving object whose coordinates are found at every instant in the aquarium, detecting its position using OpenCV color detection. In this paper, we propose identifying the position of each fish robot by its color and using the position data to make the fish robots gather at one point in the tank, communicating over a serial link with an RF module. The approach was verified by performance tests of fish-robot position detection.
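The position-detection idea can be sketched without OpenCV: threshold a hue-like channel to the robot's color range and take the centroid of the resulting mask. The tank image and hue values below are synthetic assumptions standing in for a real camera frame.

```python
import numpy as np

def detect_position(hue, lo, hi):
    """Centroid (row, col) of pixels whose hue falls in [lo, hi]."""
    mask = (hue >= lo) & (hue <= hi)
    if not mask.any():
        return None
    rows, cols = np.nonzero(mask)
    return float(rows.mean()), float(cols.mean())

tank = np.full((60, 80), 0.55)   # water background hue
tank[10:20, 30:42] = 0.02        # "red" fish-robot patch
pos = detect_position(tank, 0.0, 0.1)
print(pos)  # → (14.5, 35.5)
```

In the described system the equivalent of `pos` for each robot would be sent over the RF serial link to steer the robots toward a common gathering point.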