• Title/Abstract/Keyword: eye pairs

Search results: 27

Evaluation of Spatio-temporal Fusion Models of Multi-sensor High-resolution Satellite Images for Crop Monitoring: An Experiment on the Fusion of Sentinel-2 and RapidEye Images

  • 박소연;김예슬;나상일;박노욱
    • 대한원격탐사학회지 / Vol. 36, No. 5_1 / pp.807-821 / 2020
  • This study evaluated the applicability of representative spatio-temporal fusion models, originally developed for fusing medium- and low-resolution satellite images, to the construction of time-series high-resolution imagery for crop monitoring. In particular, considering the principles of spatio-temporal fusion, the prediction performance of the models was compared with respect to differences in the characteristics of the input image pairs. Prediction performance was assessed through fusion experiments on time-series Sentinel-2 and RapidEye images acquired over croplands. Three fusion models were applied: the Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM), the SParse-representation-based SpatioTemporal reflectance Fusion Model (SPSTFM), and Flexible Spatiotemporal DAta Fusion (FSDAF). The three models produced different prediction results in terms of prediction error and spatial similarity. Regardless of the model, however, the correlation between the image pair and the low-resolution image at the prediction date was more important for improving prediction performance than the time difference between the prediction date and the acquisition date of the image pair. It was also confirmed that vegetation indices, which can mitigate the error propagation problem, should be used as input data for spatio-temporal fusion in crop monitoring. These results are expected to serve as basic information for selecting optimal image pairs and input data types, and for developing improved models, in spatio-temporal fusion for crop monitoring.
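
The study scores each fusion model by prediction error and spatial similarity. As a minimal illustration of that kind of evaluation (not the authors' code), the sketch below compares a fused prediction against a reference band using RMSE and the Pearson correlation coefficient; the synthetic arrays merely stand in for real Sentinel-2/RapidEye data.

```python
import numpy as np

def evaluate_fusion(predicted: np.ndarray, reference: np.ndarray) -> dict:
    """Compare a fused (predicted) image band against a reference band.

    RMSE summarizes prediction error; the Pearson correlation coefficient
    serves as a simple proxy for spatial similarity.
    """
    p = predicted.ravel().astype(float)
    r = reference.ravel().astype(float)
    rmse = np.sqrt(np.mean((p - r) ** 2))
    corr = np.corrcoef(p, r)[0, 1]
    return {"rmse": rmse, "correlation": corr}

# Example with synthetic data standing in for a reference band and a fused prediction
rng = np.random.default_rng(0)
reference = rng.random((100, 100))
predicted = reference + rng.normal(0, 0.05, size=(100, 100))
print(evaluate_fusion(predicted, reference))
```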

A Study on Measuring the Speaking Rate of Speaking Signal by Using Line Spectrum Pair Coefficients

  • Jang, Kyung-A;Bae, Myung-Jin
    • The Journal of the Acoustical Society of Korea / Vol. 20, No. 3E / pp.18-24 / 2001
  • Speaking rate indicates how many phonemes a speech signal contains within a given time; it varies with the speaker and with the characteristics of each phoneme. Current speech recognition systems require preprocessing to remove the effect of speaking-rate variability before recognition, so if the speaking rate can be estimated in advance, recognition performance can be improved. Conventional speech vocoders also determine the transmission rate by analyzing frames of fixed length regardless of how quickly the phonemes change; an advance estimate of the speaking rate is therefore valuable information for speech coding as well, since it allows a variable transmission rate and improves the sound quality of the vocoder. In this paper, we propose a method for representing the speaking rate as a parameter in a speech vocoder. The speaking rate is estimated from the rate of phoneme change, which is measured using Line Spectrum Pairs. Compared with a manual reference method performed by eye, the error of the proposed algorithm was 5.38% for fast utterances and 1.78% for slow utterances, and the agreement between the two methods was 98% for slow utterances and 94% for fast utterances at 30 dB and 10 dB SNR, respectively.
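
The abstract reports only the outcome. As a rough, assumption-laden sketch of the underlying idea of tracking phoneme change through Line Spectrum Pairs (not the authors' algorithm), the code below computes LPC coefficients per frame by the autocorrelation method, converts them to line spectral frequencies via the roots of the P(z)/Q(z) polynomials, and uses the average frame-to-frame LSF movement as a crude spectral-change (speaking-rate) proxy; frame lengths and the LPC order are illustrative.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def lpc_autocorr(frame: np.ndarray, order: int) -> np.ndarray:
    """LPC by the autocorrelation method; returns A(z) = [1, -a1, ..., -ap]."""
    x = frame * np.hamming(len(frame))
    r = np.correlate(x, x, mode="full")[len(x) - 1:len(x) + order]
    a = solve_toeplitz(r[:order], r[1:order + 1])   # predictor coefficients a1..ap
    return np.concatenate(([1.0], -a))              # prediction-error filter A(z)

def line_spectrum_pairs(a: np.ndarray) -> np.ndarray:
    """Line spectral frequencies (radians in (0, pi)) from the LPC polynomial A(z)."""
    p = np.concatenate((a, [0.0])) + np.concatenate(([0.0], a[::-1]))  # P(z)
    q = np.concatenate((a, [0.0])) - np.concatenate(([0.0], a[::-1]))  # Q(z)
    angles = np.angle(np.concatenate((np.roots(p), np.roots(q))))
    return np.sort(angles[(angles > 1e-3) & (angles < np.pi - 1e-3)])

def spectral_change_rate(signal: np.ndarray, order: int = 10,
                         frame_len: int = 400, hop: int = 160) -> float:
    """Average frame-to-frame LSP movement; a crude proxy for how fast phonemes change."""
    frames = [signal[i:i + frame_len]
              for i in range(0, len(signal) - frame_len, hop)]
    lsfs = [line_spectrum_pairs(lpc_autocorr(f, order)) for f in frames]
    diffs = [np.mean(np.abs(b - a)) for a, b in zip(lsfs, lsfs[1:]) if len(a) == len(b)]
    return float(np.mean(diffs))

# Example with white noise standing in for speech sampled at 16 kHz
rng = np.random.default_rng(0)
print(spectral_change_rate(rng.normal(size=16000)))
```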


Ultrastructure of the Eye in the Atypus coreanus Kim, 1985

  • 권중균;고명규;정호삼;김주필
    • Applied Microscopy / Vol. 28, No. 4 / pp.477-490 / 1998
  • Most spiders have four pairs of simple eyes. In a few spider families the eyes are developed to the extent that visual cues supply a significant part of the information used to respond to external stimuli. Most spiders respond to the external stimuli around them; in particular, they are very sensitive to vibrations from the air, the ground, their webs, or even the surface of water. The present study was undertaken to examine the evolutionary development and function of the eye by observing the visual ultrastructure of Atypus coreanus Kim, 1985, a species with weak mobility, a limited territory, and poorly developed eyes, using electron microscopy. Atypus coreanus was collected from Mt. Ungil, Namyangju-gun, Kyonggi province. The fine structure of the eyes was examined by electron microscopy, with specimens for scanning electron microscopy prepared by the teasing method. As a result, the eyes of Atypus coreanus were found to be composed of a cornea, lens, vitreous body, retina, and rhabdome.


Emotion Classification Method Using Various Ocular Features

  • 김윤경;원명주;이의철
    • 한국콘텐츠학회논문지 / Vol. 14, No. 10 / pp.463-471 / 2014
  • This paper presents a method for classifying emotion by analyzing various eye features captured with a near-infrared camera. Compared with similar previous studies, the proposed method uses more eye features for emotion classification and verifies that each feature carries meaningful information. Auditory stimuli were used to induce the opposing emotions of positive-negative and arousal-relaxation, minimizing their direct effect on the eye features. Pupil size, pupil size variation rate, blink frequency, and eye-closure duration were used as features for emotion classification; they were extracted from the near-infrared camera images by an automated processing method developed in-house. The analysis showed that pupil size variation rate and blink frequency differed significantly under the arousal-relaxation stimuli, while eye-closure duration differed significantly under the positive-negative stimuli. In particular, pupil size showed no significant difference under either of the opposing stimulus conditions.
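
The paper reports which ocular features differ significantly between the opposing emotional states but the abstract does not name the test. A minimal sketch of such a comparison, assuming a paired t-test and hypothetical per-subject blink-frequency values, could look like this:

```python
import numpy as np
from scipy import stats

# Hypothetical per-subject feature values (e.g., blinks per minute)
blink_arousal    = np.array([18.2, 22.5, 15.1, 20.3, 17.8, 19.6])
blink_relaxation = np.array([12.4, 16.0, 11.9, 14.7, 13.2, 15.5])

# Paired comparison of the same subjects under the two opposing stimuli
t_stat, p_value = stats.ttest_rel(blink_arousal, blink_relaxation)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 -> the feature separates the two states
```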

The Effect of Gaze Angle on Muscle Activity and Kinematic Variables during Treadmill Walking

  • Kim, Bo-Suk;Jung, Jae-Hu;Chae, Woen-Sik
    • 한국운동역학회지 / Vol. 27, No. 1 / pp.35-43 / 2017
  • Objective: The purpose of this study was to determine how gaze angle affects muscle activity and kinematic variables during treadmill walking and to provide scientific information for an effective and safe treadmill training environment. Method: Ten male subjects with no musculoskeletal disorders were recruited. Eight pairs of surface electrodes were attached to the right side of the body to monitor the upper trapezius (UT), rectus abdominis (RA), erector spinae (ES), rectus femoris (RF), biceps femoris (BF), tibialis anterior (TA), medial gastrocnemius (MG), and lateral gastrocnemius (LG). Two digital camcorders were used to obtain 3-D kinematics of the lower extremity. Each subject walked on a treadmill at 5.0 km/h with a TV monitor placed at three different heights (eye level, EL; 20% above eye level, AE; 20% below eye level, BE). For each analyzed trial, five critical instants and four phases were identified from the video recording. For each dependent variable, a one-way ANOVA with repeated measures was used to determine whether there were significant differences among the three conditions (p<.05). When a significant difference was found, post hoc analyses were performed using the contrast procedure. Results: Average and peak IEMG values for EL were generally smaller than the corresponding values for AE and BE, but the differences were not statistically significant. There were also no significant changes in kinematic variables among the three gaze angles. Conclusion: Based on the results of this study, gaze angle does not affect muscle activity or kinematic variables during treadmill walking. However, it is worth noting that walking with BE may increase the muscle activity of the trapezius and the lower extremity and may hinder proper dorsiflexion during the landing phase. It therefore seems reasonable to suggest that an inappropriate gaze angle should be avoided in treadmill walking. Increased walking speed may well cause significant changes in the biomechanical parameters used in this study, so future studies similar to the present investigation but using different walking speeds are recommended.
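
The analysis described is a one-way repeated-measures ANOVA across the three gaze conditions. A minimal sketch of that design, using statsmodels' AnovaRM on placeholder IEMG values (the subject count matches the study; everything else is illustrative), might look like this:

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(2)
subjects = np.repeat(np.arange(1, 11), 3)        # 10 subjects x 3 gaze conditions
condition = np.tile(["EL", "AE", "BE"], 10)
iemg = rng.normal(loc=100, scale=15, size=30)    # placeholder IEMG values

df = pd.DataFrame({"subject": subjects, "gaze": condition, "iemg": iemg})

# One-way repeated-measures ANOVA: does gaze angle change muscle activity?
result = AnovaRM(df, depvar="iemg", subject="subject", within=["gaze"]).fit()
print(result.anova_table)
```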

Extension of the Mantel-Haenszel test to bivariate interval censored data

  • Lee, Dong-Hyun;Kim, Yang-Jin
    • Communications for Statistical Applications and Methods / Vol. 29, No. 4 / pp.403-411 / 2022
  • This article presents an independence test between pairs of interval-censored failure times. The Mantel-Haenszel test is commonly applied to test the independence between two categorical variables in the presence of a stratification variable. Hsu and Prentice (1996) applied a Mantel-Haenszel test to the sequence of 2 × 2 tables formed at grid points composed of failure times. In this article, because the failure times are not observed exactly, suitable grid points must be determined and the failure and at-risk statuses estimated at those grid points. We also consider a weighted test statistic to obtain a more powerful test. Simulation studies are performed to evaluate the power of the test statistics in finite samples. The method is applied to two real data sets: mastitis data from dairy cows and an age-related eye disease study.
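
The extension builds on the classical Mantel-Haenszel statistic pooled over a sequence of 2 × 2 tables. The sketch below implements only that classical statistic for a list of strata (it does not handle interval censoring or the weighting discussed in the paper); the table counts are made up for illustration.

```python
import numpy as np
from scipy.stats import chi2

def mantel_haenszel(tables):
    """Mantel-Haenszel chi-square statistic for a series of 2x2 tables.

    Each table is ((a, b), (c, d)); the statistic pools evidence of
    association across strata (here, grid points).
    """
    a_sum = e_sum = v_sum = 0.0
    for (a, b), (c, d) in tables:
        n = a + b + c + d
        e = (a + b) * (a + c) / n                                   # expected a under independence
        v = (a + b) * (c + d) * (a + c) * (b + d) / (n ** 2 * (n - 1))
        a_sum, e_sum, v_sum = a_sum + a, e_sum + e, v_sum + v
    stat = (a_sum - e_sum) ** 2 / v_sum
    return stat, chi2.sf(stat, df=1)

# Toy example: three strata (grid points) with a consistent association
tables = [((10, 5), (4, 12)), ((8, 7), (3, 10)), ((12, 4), (5, 9))]
print(mantel_haenszel(tables))
```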

First Blindness Cases of Horses Infected with Setaria Digitata (Nematoda: Filarioidea) in the Republic of Korea

  • Shin, Jihun;Ahn, Kyu-Sung;Suh, Guk-Hyun;Kim, Ha-Jung;Jeong, Hak-Sub;Kim, Byung-Su;Choi, Eunsang;Shin, Sung-Shik
    • Parasites, Hosts and Diseases / Vol. 55, No. 6 / pp.667-671 / 2017
  • Ocular setariasis has been reported in cattle, but cases in equine hosts had never been reported in the Republic of Korea (Korea). We found motile worms in the aqueous humor of 15 horses (Equus spp.) from 12 localities in the southern parts of Korea between January 2004 and November 2017. After the affected animals were properly restrained under sedation and local anesthesia, a 10 ml disposable syringe with a 16-gauge needle was inserted into the anterior chamber of the affected eye to remove the parasites. The male worm, found in 7 of the cases, showed a pair of lateral appendages near the posterior end of the body. The papillar arrangement was 3 pairs of precloacal, a pair of adcloacal, and 3 pairs of postcloacal papillae, plus a central papilla just in front of the cloaca. The female worms found in the eyes of 8 horses were characterized by a tapering posterior end of the body with a smooth knob. All worms were identified as Setaria digitata (von Linstow, 1906) by their morphologic characteristics under light and electron microscopy. These are the first reported cases of blindness in horses infected with S. digitata (Nematoda: Filarioidea) in Korea.

Robust Pupil Detection using Rank Order Filter and Pixel Difference

  • 장경식
    • 한국정보통신학회논문지 / Vol. 16, No. 7 / pp.1383-1390 / 2012
  • This paper proposes a method for robustly locating the pupils in a face image using a rank order filter and pixel-value differences. Pupil candidate points are first detected in the face image with an improved rank order filter. Using the fact that pixel values change sharply at the boundary between the pupil and the sclera, candidate points located at non-pupil positions such as the eyebrows are removed. The remaining candidates are grouped into pairs based on the distance and angle between two points, and the final pupils are selected by applying a fitness function based on the brightness of the pupil region. In experiments on 400 face images from the BioID face database, the proposed method achieved a pupil detection rate of 90.25%, a 4% improvement over the existing method; for face images with glasses in particular, the improvement over the existing method was about 12%.
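
One step the abstract describes is grouping pupil candidate points into pairs using the distance and angle between two points. A minimal sketch of that pairing step, with illustrative thresholds that are not taken from the paper (the paper additionally applies a brightness-based fitness function to pick the final pair), could look like this:

```python
import numpy as np
from itertools import combinations

def pair_pupil_candidates(points, min_dist=30, max_dist=120, max_angle_deg=20):
    """Group candidate points into eye pairs using simple geometric constraints:
    a plausible inter-ocular distance and a near-horizontal angle.

    `points` is a list of (x, y) candidate coordinates; the thresholds are
    illustrative and would normally depend on the face-image scale.
    """
    pairs = []
    for (x1, y1), (x2, y2) in combinations(points, 2):
        dist = np.hypot(x2 - x1, y2 - y1)
        angle = np.degrees(np.arctan2(abs(y2 - y1), abs(x2 - x1)))
        if min_dist <= dist <= max_dist and angle <= max_angle_deg:
            pairs.append(((x1, y1), (x2, y2)))
    return pairs

candidates = [(80, 100), (150, 103), (115, 60)]   # e.g., two eyes plus an eyebrow point
print(pair_pupil_candidates(candidates))          # only the near-horizontal pair survives
```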

Robust Pupil Detection using Rank Order Filter and Cross-Correlation

  • 장경식;박성대
    • 한국정보통신학회논문지 / Vol. 17, No. 7 / pp.1564-1570 / 2013
  • This paper proposes a method for robustly locating the pupils using a rank order filter and cross-correlation. Pupil candidate points are detected in the face image with a rank order filter. The eye region is binarized with varying thresholds to locate the eyebrows, and candidate points within the eyebrow region are removed. After the candidate pupil positions are refined, pairs of candidate points are formed based on geometric constraints. The similarity of the two eyes in each pair is measured by cross-correlation, and the pair with the largest value is selected as the final pupils. In experiments on 500 face images from the BioID face database, the proposed method achieved a high pupil detection rate of 96.8%, an improvement of about 11.6% over the existing method.
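
Here the final pair is chosen by measuring the similarity of the two candidate eyes with cross-correlation. A minimal sketch of a normalized cross-correlation score between two equally sized gray-level eye patches, assuming the patches have already been cropped around the candidate points, follows:

```python
import numpy as np

def normalized_cross_correlation(patch_a: np.ndarray, patch_b: np.ndarray) -> float:
    """Similarity of two equally sized gray-level eye patches in [-1, 1];
    a pair of genuine left/right eyes should score higher than an eye/eyebrow pair."""
    a = patch_a.astype(float).ravel()
    b = patch_b.astype(float).ravel()
    a -= a.mean()
    b -= b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0

# The candidate pair with the largest score would be kept as the final pupils.
# Mirroring one patch horizontally before scoring is a common way to make the
# left and right eyes comparable.
```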

Face and Facial Feature Detection under Pose Variation of User Face for Human-Robot Interaction

  • 박성기;박민용;이태근
    • 제어로봇시스템학회논문지 / Vol. 11, No. 1 / pp.50-57 / 2005
  • We present a simple and effective method for detecting the face and facial features under pose variation of the user's face in a complex background for human-robot interaction. Our approach is flexible in that it can be applied to both color and gray facial images and is feasible for detecting facial features in quasi real time. Based on the intensity characteristics of the neighborhood of facial features, a new directional template for facial features is defined. Applying this template to the input facial image produces a novel edge-like blob map (EBM) with multiple intensity strengths. Regardless of the color information of the input image, we show that, using this map together with conditions on facial characteristics, the locations of the face and its features (i.e., two eyes and a mouth) can be successfully estimated. Without information about the facial area boundary, the final candidate face region is determined from both the obtained locations of the facial features and weighted correlation values with standard facial templates. Experimental results on many color images and on well-known gray-level face database images demonstrate the usefulness of the proposed algorithm.
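
The directional template and the edge-like blob map are specific to the paper, but the general idea of correlating a gray image with a zero-mean template that favors a dark horizontal band (an eye or a mouth) on a brighter surround can be sketched as follows; the template shape and size here are assumptions for illustration, not the paper's template.

```python
import numpy as np
from scipy.signal import correlate2d

def dark_band_response(gray: np.ndarray, size: int = 9) -> np.ndarray:
    """Response map that is large where a dark horizontal band (eye or mouth)
    lies on a brighter surround; a toy stand-in for a directional template."""
    template = np.ones((size, size))
    template[size // 3: 2 * size // 3, :] = -2.0   # penalize brightness in the middle band
    template -= template.mean()                    # zero-mean: flat regions score ~0
    return correlate2d(gray.astype(float), template, mode="same", boundary="symm")

# Candidate eye/mouth locations are the strongest responses; they would then be
# screened using the geometric relations expected between two eyes and a mouth.
```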