• Title/Summary/Keyword: Morphological Segmentation

Embryonic, Larval, and Juvenile Stages in Yellow Puffer, Takifugu obscurus (황복의 난발생과 자치어 발달)

  • Jang, Seon-Il;Kang, Hee-Woung;Han, Hyoung-Kyun
    • Journal of Aquaculture / v.9 no.1 / pp.11-18 / 1996
  • We described the morphological characteristics of the embryonic, larval, and juvenile periods of the yellow puffer, Takifugu obscurus. We defined seven periods of embryogenesis: the zygote, cleavage, blastula, gastrula, segmentation, pharyngula, and hatching periods. The eggs were adhesive and spherical in shape. The egg yolk had numerous tiny oil globules. Hatching began about 280 hours after insemination at a water temperature of 17.0 ± 1.0°C. Star-shaped melanophores were observed on the yolk, head, and trunk during the pharyngula and hatching periods. The hatched larvae, having a large yolk, were 3.00-3.54 mm in size with 25-26 myomeres. The larvae completely absorbed the yolk materials and oil globules within 7 days after hatching and became post-larvae. Larval fish became juveniles within 60 days after hatching, reaching 23.54-30.12 mm in total length and having fin rays.

Automatic Photovoltaic Panel Area Extraction from UAV Thermal Infrared Images

  • Kim, Dusik;Youn, Junhee;Kim, Changyoon
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.34 no.6 / pp.559-568 / 2016
  • For the economic management of photovoltaic power plants, the panels within the plants must be monitored regularly to detect malfunctions. Thermal infrared cameras are generally used for monitoring, since malfunctioning panels emit higher temperatures than functioning ones. Recently, technologies that observe photovoltaic arrays by mounting thermal infrared cameras on UAVs (Unmanned Aerial Vehicles) have been developed for the efficient monitoring of large-scale photovoltaic power plants. However, the technologies developed so far share the shortcoming that the images must be analyzed manually to detect malfunctioning panels, which is time-consuming. In this paper, we propose an automatic photovoltaic panel area extraction algorithm for thermal infrared images acquired by a UAV. In the thermal infrared images, panel boundaries appear as obvious linear features, and the panels are regularly arranged. Therefore, we exaggerate the linear features with a vertical and horizontal filtering algorithm and apply a modified hierarchical histogram clustering method to extract candidate panel boundaries. Among the candidates, initial panel areas are extracted by exclusion editing with the results of photovoltaic array area detection; in this step, thresholding and morphological image-processing algorithms are applied. Finally, the panel areas are refined using the geometry of the surrounding panels. The accuracy of the results is evaluated quantitatively against manually digitized data, and a mean completeness of 95.0%, a mean correctness of 96.9%, and a mean quality of 92.1% are obtained with the proposed algorithm.
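
As a rough illustration of the thresholding and morphological steps named in the abstract above, the sketch below emphasizes linear panel boundaries in a UAV thermal image and cleans the resulting binary mask with OpenCV. The function name extract_panel_candidates, the Sobel/Otsu stand-ins for the paper's filtering and clustering steps, and all kernel sizes and thresholds are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): emphasize horizontal/vertical panel
# boundaries in a UAV thermal image and clean the binary mask with
# morphological operators. Kernel sizes and thresholds are illustrative.
import cv2
import numpy as np

def extract_panel_candidates(thermal_gray: np.ndarray) -> np.ndarray:
    # Exaggerate linear features with horizontal and vertical Sobel responses.
    gx = cv2.Sobel(thermal_gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(thermal_gray, cv2.CV_32F, 0, 1, ksize=3)
    edges = cv2.convertScaleAbs(cv2.magnitude(gx, gy))

    # Threshold to a binary mask (Otsu as a stand-in for the paper's clustering step).
    _, mask = cv2.threshold(edges, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Morphological closing then opening to join panel boundaries and drop speckle.
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    return mask
```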

Object Detection Method in Sea Environment Using Fast Region Merge Algorithm (해양환경에서 고속 영역 병합 알고리즘을 이용한 물표 탐지 기법)

  • Jeong, Jong-Myeon;Park, Gyei-Kark
    • Journal of the Korean Institute of Intelligent Systems / v.22 no.5 / pp.610-616 / 2012
  • In this paper, we present a method to detect objects such as ships, rocks, and buoys in sea IR images for safe navigation. To this end, we first smooth the image and then apply the watershed algorithm to segment it into subregions. Since the watershed algorithm almost always produces over-segmented regions, a posterior merging process is required to obtain meaningful segmented regions. We propose an efficient merging algorithm that requires only two direct passes over the pixels regardless of the number of regions. By analyzing IR images obtained in sea environments, we also found that most horizontal edges originate from object regions. For a given input IR image, we extract horizontal edges and eliminate isolated edges produced by background and noise using a morphological operator. Among the segmented regions, those that contain horizontal edges are extracted as the final results. Experimental results show the adequacy of the proposed method.
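
The sketch below is a minimal, assumed illustration of the processing chain the abstract describes: smoothing, watershed over-segmentation, and horizontal-edge extraction cleaned by a morphological opening. The marker-generation shortcut (distance transform plus connected components), the function name segment_and_edge, and all kernel sizes are illustrative choices; the paper's fast region-merge step is omitted.

```python
# Minimal sketch (assumed, not the paper's code): watershed over-segmentation of a
# smoothed sea IR image, plus horizontal-edge evidence cleaned with a morphological
# opening. Marker generation and kernel sizes are illustrative choices.
import cv2
import numpy as np

def segment_and_edge(ir_gray: np.ndarray):
    smoothed = cv2.GaussianBlur(ir_gray, (5, 5), 0)

    # Watershed needs integer markers; derive them from a distance transform as a stand-in.
    _, binary = cv2.threshold(smoothed, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    dist = cv2.distanceTransform(binary, cv2.DIST_L2, 5)
    _, markers = cv2.connectedComponents((dist > 0.4 * dist.max()).astype(np.uint8))
    labels = cv2.watershed(cv2.cvtColor(smoothed, cv2.COLOR_GRAY2BGR),
                           markers.astype(np.int32))

    # Horizontal edges (vertical gradient), then opening with a wide flat kernel
    # to keep elongated horizontal structures and drop isolated noise edges.
    gy = cv2.convertScaleAbs(cv2.Sobel(smoothed, cv2.CV_32F, 0, 1, ksize=3))
    _, h_edges = cv2.threshold(gy, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    h_edges = cv2.morphologyEx(h_edges, cv2.MORPH_OPEN,
                               cv2.getStructuringElement(cv2.MORPH_RECT, (9, 1)))
    return labels, h_edges
```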

Facial Features and Motion Recovery using multi-modal information and Paraperspective Camera Model (다양한 형식의 얼굴정보와 준원근 카메라 모델해석을 이용한 얼굴 특징점 및 움직임 복원)

  • Kim, Sang-Hoon
    • The KIPS Transactions:PartB / v.9B no.5 / pp.563-570 / 2002
  • Robust extraction of 3D facial features and global motion information from a 2D image sequence for MPEG-4 SNHC face model encoding is described. The facial regions are detected from the image sequence using a multi-modal fusion technique that combines range, color, and motion information. Twenty-three facial features among the MPEG-4 FDP (Face Definition Parameters) are extracted automatically inside the facial region using color transforms (GSCD, BWCD) and morphological processing. The extracted facial features are used to recover the 3D shape and global motion of the object using a paraperspective camera model and the SVD (Singular Value Decomposition) factorization method. A 3D synthetic object is designed and tested to demonstrate the performance of the proposed algorithm. The recovered 3D motion information is transformed into the global motion parameters of the MPEG-4 FAP (Face Animation Parameters) to synchronize a generic face model with a real face.
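
As a hedged sketch of the factorization idea the abstract refers to, the code below shows the rank-3 SVD factorization core that underlies shape-and-motion recovery from tracked 2D features. The paraperspective-specific normalization and metric constraints of the actual method are omitted, and the function name factorize_tracks is an assumption for illustration.

```python
# Minimal sketch (assumed): the rank-3 factorization core underlying
# factorization-based shape/motion recovery. W stacks the 2D tracks of
# P features over F frames (shape 2F x P); paraperspective-specific metric
# constraints are omitted here for brevity.
import numpy as np

def factorize_tracks(W: np.ndarray):
    # Register the measurement matrix to its per-row centroid (translation removed).
    W_reg = W - W.mean(axis=1, keepdims=True)

    # Rank-3 approximation via SVD: W ~ M @ S with M (2F x 3 motion) and S (3 x P shape).
    U, s, Vt = np.linalg.svd(W_reg, full_matrices=False)
    M = U[:, :3] * np.sqrt(s[:3])          # camera/motion factor
    S = np.sqrt(s[:3])[:, None] * Vt[:3]   # 3D shape factor (up to an affine ambiguity)
    return M, S
```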

Traffic Sign Area Detection by using Color Rate and Distance Rate (컬러비와 거리비를 이용한 교통표지판 영역추출)

  • Kwak, Hyun-Wook;Lee, Woo-Beom;Kim, Wook-Hyun
    • The KIPS Transactions:PartB / v.9B no.5 / pp.681-688 / 2002
  • This paper proposes a system for detecting traffic sign areas, which uses the color rate as color information and corner points and distance rates as morphological information. In this system, a candidate area is extracted by performing a dilation operation on the binary image made from the color rates of the R, G, and B components and by detecting corner points and the center point through a mask. The traffic sign area, which may take varied shapes, is then extracted by calculating the distance rates from the center point as the morphological information. The experimental results demonstrate that the system is invariant to sign size and location and can extract the exact area from varied traffic signs such as triangles, circles, inverse triangles, and squares, as well as from images taken both by day and at night, when brightness values differ greatly. Moreover, it demonstrates high accuracy and processing speed.
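
A minimal sketch of the two cues the abstract names, color rate and distance rate, is given below in an OpenCV/NumPy setting. The red-dominance ratio, the 1.5 threshold, the kernel size, and the helper names sign_candidates and distance_rates are illustrative assumptions rather than the authors' implementation.

```python
# Minimal sketch (assumed, not the paper's code): binarize by a red-dominance color
# rate, dilate the mask, and measure corner-to-center distance rates of a candidate
# region. The 1.5 ratio threshold and kernel size are illustrative values.
import cv2
import numpy as np

def sign_candidates(bgr: np.ndarray) -> np.ndarray:
    b, g, r = [c.astype(np.float32) + 1.0 for c in cv2.split(bgr)]
    red_rate = r / np.maximum(g, b)                      # color rate of the R component
    mask = (red_rate > 1.5).astype(np.uint8) * 255       # binary image from the color rate
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    return cv2.dilate(mask, kernel)                      # dilation joins broken sign borders

def distance_rates(corners: np.ndarray) -> np.ndarray:
    # Ratio of each corner-to-center distance to the mean distance; near-constant
    # rates suggest a circle, while distinct groupings suggest triangles or squares.
    center = corners.mean(axis=0)
    d = np.linalg.norm(corners - center, axis=1)
    return d / d.mean()
```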

Three Dimensional Measurement of Ideal Trajectory of Pedicle Screws of Subaxial Cervical Spine Using the Algorithm Could Be Applied for Robotic Screw Insertion

  • Huh, Jisoon;Hyun, Jae Hwan;Park, Hyeong Geon;Kwak, Ho-Young
    • Journal of Korean Neurosurgical Society / v.62 no.4 / pp.376-381 / 2019
  • Objective : To define an optimal method for calculating the safe direction of cervical pedicle screw placement using a computed tomography (CT) image-based three-dimensional (3D) cortical shell model of the human cervical spine. Methods : A cortical shell model of the cervical spine from C3 to C6 was made after segmentation of in vivo CT image data from 44 volunteers. The three-dimensional Cartesian coordinates of all points constituting the surface of the whole vertebra, the bilateral pedicles, and the posterior wall were acquired. The ideal trajectory of pedicle screw insertion was defined as the viewing direction at which the inner area of the pedicle becomes largest when seen through the biconcave tubular pedicle. The ideal trajectories of 352 pedicles (eight pedicles for each of the 44 subjects) were calculated using a custom-made program and were converted from global coordinates to local coordinates according to the three-dimensional position of the posterior wall of each vertebral body. The transverse and sagittal angles of the trajectory were defined as the angles between the ideal trajectory line and the perpendicular line of the posterior wall in the horizontal and sagittal planes. The averages and standard deviations of all measurements were calculated. Results : The average transverse angles were 50.60° ± 6.22° at C3, 51.42° ± 7.44° at C4, 47.79° ± 7.61° at C5, and 41.24° ± 7.76° at C6; the transverse angle becomes steeper from C3 to C6. The mean sagittal angles were 9.72° ± 6.73° downward at C3, 5.09° ± 6.39° downward at C4, 0.08° ± 6.06° downward at C5, and 1.67° ± 6.06° upward at C6; the sagittal angle changes from caudad to cephalad from C3 to C6. Conclusion : The absolute values of the transverse and sagittal angles in our study were not the same as in previous studies, but the trends of change were similar. Because we know the 3D coordinates of all points constituting the cortical shell of the cervical vertebrae, we can easily reconstruct a 3D model and manipulate it freely with a computer program. More creative measurements of morphological characteristics can be carried out than by direct inspection of raw bone. Furthermore, this concept of measurement could be used in a computing program for automated robotic screw insertion.
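
As an illustration of how the reported transverse and sagittal angles can be measured once the ideal trajectory and the posterior-wall perpendicular are known in a vertebra's local frame, the sketch below projects both vectors into the horizontal and sagittal planes and takes the angle between them. The axis convention, the example vectors, and the helper name plane_angle are assumptions, not the authors' custom program.

```python
# Minimal sketch (assumed): given an ideal trajectory vector and the posterior-wall
# normal in the vertebra's local frame, project both into the horizontal (x-y) and
# sagittal (y-z) planes and measure the angles the abstract reports.
import numpy as np

def plane_angle(v: np.ndarray, ref: np.ndarray, drop_axis: int) -> float:
    """Angle (degrees) between v and ref after dropping one coordinate axis."""
    keep = [i for i in range(3) if i != drop_axis]
    v2, r2 = v[keep], ref[keep]
    cosang = np.dot(v2, r2) / (np.linalg.norm(v2) * np.linalg.norm(r2))
    return float(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))

# Hypothetical local frame: x = lateral, y = antero-posterior, z = cranio-caudal.
trajectory = np.array([0.76, 0.63, -0.12])   # example ideal screw direction
wall_normal = np.array([0.0, 1.0, 0.0])      # perpendicular to the posterior wall
transverse = plane_angle(trajectory, wall_normal, drop_axis=2)  # horizontal plane
sagittal = plane_angle(trajectory, wall_normal, drop_axis=0)    # sagittal plane
print(transverse, sagittal)
```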

Water Segmentation Based on Morphologic and Edge-enhanced U-Net Using Sentinel-1 SAR Images (형태학적 연산과 경계추출 학습이 강화된 U-Net을 활용한 Sentinel-1 영상 기반 수체탐지)

  • Kim, Hwisong;Kim, Duk-jin;Kim, Junwoo
    • Korean Journal of Remote Sensing / v.38 no.5_2 / pp.793-810 / 2022
  • Synthetic Aperture Radar (SAR) is considered suitable for near real-time inundation monitoring. The distinctly different intensity between water and land makes it adequate for waterbody detection, but the intrinsic speckle noise and variable intensity of SAR images decrease the accuracy of waterbody detection. In this study, we suggest two modules, named the 'morphology module' and the 'edge-enhanced module', which are combinations of pooling layers and convolutional layers that improve the accuracy of waterbody detection. The morphology module is composed of min-pooling and max-pooling layers, which produce the effect of a morphological transformation. The edge-enhanced module is composed of convolution layers with the fixed weights of a traditional edge detection operator. After comparing the accuracy of various versions of each module within U-Net, we found that the optimal combination was a morphology module consisting of min-pooling followed by successive min-pooling and max-pooling layers, together with an edge-enhanced module using the Scharr filter, with both fed as inputs to conv9. This morphologic and edge-enhanced U-Net improved the F1-score by 9.81% over the original U-Net. Qualitative inspection showed that our model is capable of detecting small waterbodies and detailed water edges, which are the distinct advancements of the model presented in this research compared to the original U-Net.
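
The abstract describes the two modules concretely enough to sketch them: min/max pooling standing in for morphological erosion and dilation, and a convolution with fixed Scharr weights as the edge detector. The PyTorch sketch below is a minimal, assumed rendering of that idea for a single-channel SAR input, not the authors' network; how the module outputs are concatenated into conv9 of U-Net is omitted.

```python
# Minimal sketch (assumed, not the paper's architecture): a min/max-pooling
# "morphology module" and a fixed-weight Scharr "edge-enhanced module" that could
# feed an extra input into a U-Net decoder stage. Input is a single-channel tensor.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MorphologyModule(nn.Module):
    def forward(self, x):
        # Min-pooling (erosion-like) followed by max-pooling (dilation-like):
        # an opening-style transform realised with standard pooling layers.
        eroded = -F.max_pool2d(-x, kernel_size=3, stride=1, padding=1)
        opened = F.max_pool2d(eroded, kernel_size=3, stride=1, padding=1)
        return opened

class ScharrEdgeModule(nn.Module):
    def __init__(self):
        super().__init__()
        kx = torch.tensor([[3., 0., -3.], [10., 0., -10.], [3., 0., -3.]])
        self.register_buffer("wx", kx.view(1, 1, 3, 3))
        self.register_buffer("wy", kx.t().contiguous().view(1, 1, 3, 3))

    def forward(self, x):
        # Fixed (non-learned) Scharr responses; gradient magnitude as an edge map.
        gx = F.conv2d(x, self.wx, padding=1)
        gy = F.conv2d(x, self.wy, padding=1)
        return torch.sqrt(gx * gx + gy * gy + 1e-8)
```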

Spatial Replicability Assessment of Land Cover Classification Using Unmanned Aerial Vehicle and Artificial Intelligence in Urban Area (무인항공기 및 인공지능을 활용한 도시지역 토지피복 분류 기법의 공간적 재현성 평가)

  • Geon-Ung, PARK;Bong-Geun, SONG;Kyung-Hun, PARK;Hung-Kyu, LEE
    • Journal of the Korean Association of Geographic Information Studies / v.25 no.4 / pp.63-80 / 2022
  • As technologies for analyzing and predicting issues by reconstructing real space in virtual space have developed, acquiring precise spatial information in complex cities has become increasingly important. In this study, images were acquired using an unmanned aerial vehicle over an urban area with a complex landscape, and land cover classification was performed using object-based image analysis and semantic segmentation techniques, which are image classification techniques suitable for high-resolution imagery. In addition, based on imagery collected at the same time, the replicability of the land cover classification of each artificial intelligence (AI) model was examined for areas that the models had not learned. When the AI models were trained on the training site, the land cover classification accuracy was 89.3% for OBIA-RF, 85.0% for OBIA-DNN, and 95.3% for U-Net. When the AI models were applied to the replicability assessment site, the accuracy of OBIA-RF decreased by 7.0%, OBIA-DNN by 2.1%, and U-Net by 2.3%. U-Net, which considers both morphological and spectral characteristics, was found to perform well in both land cover classification accuracy and the replicability evaluation. As precise spatial information becomes more important, the results of this study are expected to contribute to urban environment research as a method for generating basic data.