• Title/Summary/Keyword: Eye detection


Object Detection Based on Deep Learning Model for Two Stage Tracking with Pest Behavior Patterns in Soybean (Glycine max (L.) Merr.)

  • Yu-Hyeon Park;Junyong Song;Sang-Gyu Kim ;Tae-Hwan Jun
    • Proceedings of the Korean Society of Crop Science Conference
    • /
    • 2022.10a
    • /
    • pp.89-89
    • /
    • 2022
  • Soybean (Glycine max (L.) Merr.) is a representative food resource. To preserve the integrity of soybean, it is necessary to protect soybean yield and seed quality from threats of various pests and diseases. Riptortus pedestris is a well-known insect pest that causes the greatest loss of soybean yield in South Korea. This pest not only directly reduces yields but also causes disorders and diseases in plant growth. Unfortunately, no resistant soybean resources have been reported. Therefore, it is necessary to identify the distribution and movement of Riptortus pedestris at an early stage to reduce the damage caused by insect pests. Conventionally, the diagnosis of agronomic traits related to pest outbreaks has been performed by the human eye. However, because human vision is subjective and inconsistent, this approach is time-consuming, labor-intensive, and requires the assistance of specialists. Therefore, the responses and behavior patterns of Riptortus pedestris to the scent of mixture R were visualized as a 3D model from the perspective of artificial intelligence. The movement patterns of Riptortus pedestris were analyzed using time-series image data, and classification was performed through visual analysis based on a deep learning model. In object tracking implemented with the YOLO series of models, the movement paths of the pests showed a negative reaction to mixture R in the video scenes. As a result of 3D modeling using the x, y, and z coordinates of the tracked objects, 80% of the subjects showed behavioral patterns consistent with the mixture R treatment. These studies are now being extended to the soybean field, where applying a pest control platform at the early growth stage should help preserve soybean yield.
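The behavioral classification described above can be sketched from the tracked 3D coordinates. The abstract does not give the authors' actual criterion, so the rule below (an insect counts as avoiding the scent source if it ends up farther away than it started) and all names are hypothetical:

```python
import numpy as np

def classify_avoidance(track, source, min_disp=1.0):
    """Classify a tracked 3D path as avoidance if the insect ends up
    farther from the scent source than it started (hypothetical rule;
    the paper's actual criterion is not given in the abstract)."""
    track = np.asarray(track, dtype=float)    # shape (T, 3): x, y, z per frame
    source = np.asarray(source, dtype=float)  # scent (mixture R) location
    d_start = np.linalg.norm(track[0] - source)
    d_end = np.linalg.norm(track[-1] - source)
    return bool((d_end - d_start) > min_disp)  # True -> negative (avoidance) response

# toy path moving away from a source at the origin
path = [(1, 0, 0), (2, 0, 0), (4, 1, 0)]
print(classify_avoidance(path, (0, 0, 0)))   # True
```

Applying such a rule per tracked object would yield the kind of population-level statistic the abstract reports (80% of subjects avoiding).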


Automatic Detection of Stage 1 Sleep (자동 분석을 이용한 1단계 수면탐지)

  • 신홍범;한종희;정도언;박광석
    • Journal of Biomedical Engineering Research
    • /
    • v.25 no.1
    • /
    • pp.11-19
    • /
    • 2004
  • Stage 1 sleep provides important information for the interpretation of nocturnal polysomnography, particularly sleep onset. It is a short transition period from wakeful consciousness to sleep. The lack of prominent sleep events characterizing stage 1 sleep is a major obstacle to automatic sleep stage scoring. In this study, we attempted to utilize simultaneous EEG and EOG processing and analysis to detect stage 1 sleep automatically. The relative powers of the alpha and theta waves were calculated from spectral estimation. Either a relative alpha power of less than 50% or a relative theta power of more than 23% was regarded as stage 1 sleep. SEM (slow eye movement) was defined as movement of both eyes lasting 1.5 to 4 seconds and was also regarded as stage 1 sleep. If any one of these three criteria was met, the epoch was scored as stage 1 sleep. Results were compared to manual ratings by two polysomnography experts. A total of 169 epochs were analyzed. The agreement rate for stage 1 sleep between automatic detection and manual scoring was 79.3%, and Cohen's Kappa was 0.586 (p<0.01). A significant portion (32%) of automatically detected stage 1 sleep included SEM. Generally, digitally scored sleep staging shows accuracy of up to 70%. Considering the potential difficulties in stage 1 sleep scoring, the accuracy of 79.3% in this study seems robust. The simultaneous analysis of EOG differentiates the present study from previous ones, which mainly depended on EEG analysis. The close relationship between SEM and stage 1 sleep raised by Kinnari et al. remains valid in this study.
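The three-criteria rule from the abstract (relative alpha < 50%, relative theta > 23%, or SEM present) can be sketched as follows. The spectral estimator, band edges for "total" power, and function names are assumptions; the abstract specifies only the thresholds:

```python
import numpy as np

def relative_band_power(eeg, fs, band, total=(0.5, 30.0)):
    """Relative power of `band` within `total`, from a simple FFT
    periodogram (a sketch; the paper's spectral estimator is not
    specified in the abstract)."""
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(eeg)) ** 2
    sel = lambda lo, hi: psd[(freqs >= lo) & (freqs < hi)].sum()
    return sel(*band) / sel(*total)

def is_stage1(eeg, fs, slow_eye_movement=False):
    """Abstract's rule: stage 1 if relative alpha < 50%, or relative
    theta > 23%, or a slow eye movement (SEM) is present."""
    alpha = relative_band_power(eeg, fs, (8.0, 13.0))
    theta = relative_band_power(eeg, fs, (4.0, 8.0))
    return alpha < 0.50 or theta > 0.23 or slow_eye_movement

fs = 100
t = np.arange(0, 30, 1 / fs)                 # one 30-second epoch
theta_epoch = np.sin(2 * np.pi * 6 * t)      # dominant 6 Hz (theta) activity
print(is_stage1(theta_epoch, fs))            # True
```

A pure 10 Hz (alpha-dominant) epoch with no SEM would be rejected by the same rule, matching the idea that sustained alpha indicates wakefulness.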

Freshness Monitoring of Raw Salmon Filet Using a Colorimetric Sensor that is Sensitive to Volatile Nitrogen Compounds (휘발성 질소화합물 감응형 색변환 센서를 활용한 연어 신선도 모니터링)

  • Kim, Jae Man;Lee, Hyeonji;Hyun, Jung-Ho;Park, Joon-Shik;Kim, Yong Shin
    • Journal of Sensor Science and Technology
    • /
    • v.29 no.2
    • /
    • pp.93-99
    • /
    • 2020
  • A colorimetric paper sensor was used to detect volatile nitrogen-containing compounds emitted from spoiled salmon filets to determine their freshness. The sensing mechanism was based on acid-base reactions between acidic pH-indicating dyes and basic volatile ammonia and amines. A sensing layer was simply fabricated by drop-casting a dye solution of bromocresol green (BCG) on a polyvinylidene fluoride substrate, and its color-change response was enhanced by optimizing the amounts of additive chemicals, such as polyethylene glycol, p-toluene sulfonic acid, and graphene oxide, in the dye solution. To avoid the adverse effects of water vapor, the two faces of the sensing layer were enclosed with a polyethylene terephthalate film and a gas-permeable microporous polytetrafluoroethylene sheet, respectively. When exposed to basic gas analytes, the paper-like sensor exhibited a distinct color change from initially yellow, then to green, and finally to blue due to the deprotonation of BCG via the Brønsted acid-base reaction. Using ammonia as a test gas confirmed that the sensing performance of the optimized sensor was reversible and excellent (detection time of < 15 min, sensitive naked-eye detection at 0.25 ppm, good selectivity against common volatile organic gases, and good stability against thermal stress). Finally, the coloration intensity of the sensor was quantified as a function of the storage time of the salmon filet at 28℃ to evaluate its usefulness in monitoring food freshness, alongside measurement of the total viable count (TVC) of microorganisms in the food. The TVC value increased from 3.2 × 105 to 3.1 × 109 cfu/g in 28 h and then became stable, whereas the sensor response changed abruptly in the first 8 h and increased only slightly thereafter. This result suggests that the colorimetric response could be used as an indicator for evaluating the degree of microbially induced decay of salmon.
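The yellow-to-green-to-blue transition described above could be read out automatically from an averaged RGB measurement of the sensing layer. The discrete rules and thresholds below are purely illustrative; the paper quantifies a continuous coloration intensity rather than such categories:

```python
def bcg_color_state(r, g, b):
    """Map an averaged RGB reading of the BCG layer to a freshness state
    (illustrative rules only; the paper quantifies a coloration
    intensity rather than these discrete categories)."""
    if b > r and b > g:
        return "spoiled"        # blue: fully deprotonated BCG
    if g > r:
        return "spoiling"       # green: transition color
    return "fresh"              # yellow: protonated BCG (r, g high, b low)

print(bcg_color_state(220, 210, 40))   # fresh
print(bcg_color_state(60, 160, 80))    # spoiling
print(bcg_color_state(40, 80, 180))    # spoiled
```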

A Study on the quantitative measurement methods of MRTD and prediction of detection distance for Infrared surveillance equipments in military (군용 열영상장비 최소분해가능온도차의 정량적 측정 방법 및 탐지거리 예측에 관한 연구)

  • Jung, Yeong-Tak;Lim, Jae-Seong;Lee, Ji-Hyeok
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.18 no.5
    • /
    • pp.557-564
    • /
    • 2017
  • The purpose of the thermal imaging observation devices mounted on K-series tanks in the Republic of Korea military is to convert infrared rays into visual information, providing information about the environment under conditions of restricted visibility. Among the various performance indicators of thermal observation devices, such as the field of view, magnification, resolution, MTF, NETD, and Minimum Resolvable Temperature Difference (MRTD), the MRTD is the most important because it indicates both the spatial frequency and the resolvable temperature difference. However, the standard NATO method of measuring the MRTD contains many subjective factors. Because the measurement result can vary with subjective factors such as the observer's eyesight, mental condition, and measurement conditions, the MRTD obtained is not stable. In this study, this qualitative MRTD measurement procedure is converted into a quantitative indicator based on gray scale using image processing. By converting the average of the gray-scale differences between the black and white images into the MRTD, the mean value can be used to determine whether the performance requirements of the defense specification are met. This value can also be used to discriminate between detection, recognition, and identification, and the detectable distance of the thermal equipment can be analyzed under various environmental conditions, such as altostratus cloud, heavy rain, and fog.
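The quantitative criterion above, replacing the human observer with a mean gray-level difference over the bar-target regions, can be sketched as follows. The masks, toy image, and function name are assumptions for illustration:

```python
import numpy as np

def mean_gray_difference(image, white_mask, black_mask):
    """Mean gray-level difference between the white and black bar
    regions of a 4-bar MRTD target (a sketch of the quantitative
    criterion the paper proposes; region masks are assumed given)."""
    return float(image[white_mask].mean() - image[black_mask].mean())

# toy 4x8 image: left half "white" bars, right half "black" bars
img = np.zeros((4, 8))
img[:, :4] = 180.0
img[:, 4:] = 60.0
wmask = np.zeros_like(img, dtype=bool)
wmask[:, :4] = True
bmask = ~wmask
print(mean_gray_difference(img, wmask, bmask))   # 120.0
```

Comparing this objective value against a specification limit removes the observer-dependent factors (eyesight, mental condition) that make the standard MRTD procedure unstable.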

The Efficacy of Biofeedback in Reducing Cybersickness in Virtual Navigation (생체신호 피드백을 적용한 가상 주행환경에서 사이버멀미 감소 효과)

  • 김영윤;김은남;정찬용;고희동;김현택
    • Science of Emotion and Sensibility
    • /
    • v.5 no.2
    • /
    • pp.29-34
    • /
    • 2002
  • Our previous studies showed that a narrow field of view (FOV: 50˚) and a slow navigation speed decreased the frequency of occurrence and severity of cybersickness during immersion in virtual reality (VR). We hypothesized that cybersickness would be significantly reduced if a cybersickness-alleviating virtual environment (CAVE) were provided via biofeedback whenever the subject underwent physiological agitation. To verify this hypothesis, we constructed a real-time cybersickness detection and feedback system based on an artificial neural network whose inputs were electrophysiological parameters: blood pulse volume, skin conductance, eye blink, skin temperature, heart period, and EEG. The system temporarily provided a narrow FOV and a decreased navigation speed as feedback outputs whenever the physiological measures signaled the occurrence of cybersickness. We examined the frequency and severity of cybersickness using simulator sickness questionnaires and self-reports in 36 subjects. All subjects experienced VR twice, in the CAVE and non-CAVE conditions, at a one-month interval. The frequency and severity of cybersickness were significantly reduced in the CAVE condition compared with the non-CAVE condition. A virtual environment with narrow FOV and slow navigation, driven by an artificial neural network based on electrophysiological features, caused a significant reduction of cybersickness symptoms. These results showed that the cybersickness detection system we developed was relatively efficient and that subjects felt more comfortable in the virtual navigation environment.
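The feedback stage of such a system is a simple policy on top of the detector's output. The 50˚ FOV comes from the abstract; the probability threshold, the normal-mode FOV, and the speed values below are hypothetical:

```python
def feedback_settings(sickness_prob, threshold=0.5):
    """When the detector signals cybersickness, narrow the FOV to 50
    degrees and slow navigation, as in the CAVE condition (the 50-degree
    FOV is from the abstract; the threshold, normal FOV, and speed
    scales are hypothetical)."""
    if sickness_prob >= threshold:
        return {"fov_deg": 50, "speed_scale": 0.5}   # alleviating mode
    return {"fov_deg": 110, "speed_scale": 1.0}      # normal mode

print(feedback_settings(0.8))   # {'fov_deg': 50, 'speed_scale': 0.5}
```

In the full system, `sickness_prob` would be the neural network's output computed each epoch from the six electrophysiological inputs.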


Detecting Adversarial Example Using Ensemble Method on Deep Neural Network (딥뉴럴네트워크에서의 적대적 샘플에 관한 앙상블 방어 연구)

  • Kwon, Hyun;Yoon, Joonhyeok;Kim, Junseob;Park, Sangjun;Kim, Yongchul
    • Convergence Security Journal
    • /
    • v.21 no.2
    • /
    • pp.57-66
    • /
    • 2021
  • Deep neural networks (DNNs) provide excellent performance for image, speech, and pattern recognition. However, DNNs sometimes misrecognize certain adversarial examples. An adversarial example is a sample created by adding optimized noise to the original data, which causes the DNN to misclassify it even though it appears unchanged to the human eye. Therefore, studies on defense against adversarial example attacks are required. In this paper, we experimentally analyzed the detection success rate for adversarial examples while adjusting various parameters. The performance of the ensemble defense method was analyzed using the fast gradient sign method, the DeepFool method, and the Carlini & Wagner method as adversarial example attacks. We used MNIST as the experimental data and TensorFlow as the machine learning library. As the experimental method, we carried out a performance analysis over the three attack methods, the threshold, the number of models, and random noise. As a result, with 7 models and a threshold of 1, the detection rate for adversarial examples was 98.3%, while 99.2% accuracy on the original samples was maintained.
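One plausible reading of the ensemble detection rule with a threshold is majority-vote disagreement: flag an input when too many ensemble members dissent from the majority label. The abstract does not state the exact rule, so the sketch below is an assumption:

```python
import numpy as np

def detect_adversarial(predictions, threshold=1):
    """Flag an input as adversarial when more than `threshold` ensemble
    members disagree with the majority label (one plausible reading of
    the paper's threshold parameter; the exact rule is not given in the
    abstract)."""
    labels, counts = np.unique(predictions, return_counts=True)
    dissent = len(predictions) - counts.max()   # members outside the majority
    return bool(dissent > threshold)

# 7-model ensemble, threshold of 1 (the best setting reported above)
print(detect_adversarial([3, 3, 3, 3, 3, 3, 3]))   # False: all 7 agree
print(detect_adversarial([3, 3, 5, 3, 8, 3, 5]))   # True: 3 dissenters
```

The intuition is that an adversarial perturbation optimized against one model rarely transfers identically to all members, so disagreement itself becomes the detection signal.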

Pupil Data Measurement and Social Emotion Inference Technology by using Smart Glasses (스마트 글래스를 활용한 동공 데이터 수집과 사회 감성 추정 기술)

  • Lee, Dong Won;Mun, Sungchul;Park, Sangin;Kim, Hwan-jin;Whang, Mincheol
    • Journal of Broadcast Engineering
    • /
    • v.25 no.6
    • /
    • pp.973-979
    • /
    • 2020
  • This study aims to objectively and quantitatively determine the social emotion of empathy by collecting pupillary responses. 52 subjects (26 men and 26 women) voluntarily participated in the experiment. After a 30-second baseline measurement, the experiment was divided into an imitation task and a spontaneous self-expression task. Pairs of subjects interacted through facial expressions while their pupil images were recorded. The pupil data were processed with binarization and a circular edge detection algorithm, and an outlier detection and removal technique was used to reject eye blinks. The pupil size as a function of empathy was tested for statistical significance with a normality test and an independent-samples t-test. The pupil size differed significantly between the empathy (M ± SD = 0.050 ± 1.817) and non-empathy (M ± SD = 1.659 ± 1.514) conditions (t(92) = -4.629, p = 0.000). A rule relating empathy to pupil size was defined through discriminant analysis, and the rule was verified on 12 new subjects (6 men and 6 women, mean age ± SD = 22.84 ± 1.57 years; estimation accuracy: 75%). The method proposed in this study uses non-contact camera technology and is expected to be utilized in various virtual reality settings with smart glasses.
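The blink-rejection step could be implemented with a standard interquartile-range outlier rule on the pupil-diameter series. The abstract does not name the authors' exact technique, so the rule and names below are assumptions:

```python
import numpy as np

def reject_blinks(diameters, k=1.5):
    """Drop pupil-diameter samples outside k IQRs of the quartiles, a
    common outlier rule for eye blinks (the paper's exact removal
    technique is not specified in the abstract)."""
    d = np.asarray(diameters, dtype=float)
    q1, q3 = np.percentile(d, [25, 75])
    iqr = q3 - q1
    keep = (d >= q1 - k * iqr) & (d <= q3 + k * iqr)
    return d[keep]

# near-zero measured diameters correspond to frames with the eyelid closed
samples = [3.1, 3.2, 3.0, 0.0, 3.3, 3.1, 3.2, 3.0, 3.1]
print(reject_blinks(samples))   # the 0.0 blink sample is removed
```

The cleaned series would then feed the binarization/circular-edge pipeline's per-condition pupil-size statistics.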

Font Change Blindness Triggered by the Text Difficulty in Moving Window Technique (움직이는 창 기법에서의 덩이글 난이도에 따른 글꼴 변화맹)

  • Seong-Jun Bak;Joo-Seok Hyun
    • Korean Journal of Cognitive Science
    • /
    • v.34 no.4
    • /
    • pp.259-275
    • /
    • 2023
  • The aim of this study was to investigate font change blindness as a function of text difficulty in the "Moving Window Task" originally introduced by McConkie and Rayner (1975). During reading with the moving window applied, target words were presented in a font style different from the rest of the text. As the participant's gaze reached the position of the target word, its font was changed to match the text font. Before the change, the target word's font was sans-serif when the text font was serif, or serif when the text font was sans-serif. After completing the reading task, more than half of the participants (62.5%) reported not detecting the font change. Observation of eye movements at the target word positions revealed that when understanding the content of the text was difficult, the number of regressions increased, gaze duration lengthened, and saccade length was reduced. Specifically, the increase in the number of regressions was evident only when the text font was serif, in other words, when the font of the target word shifted from sans-serif to serif. These results suggest that sensory interference unrelated to content understanding is not easily detected during reading, but that the possibility of detection increases when comprehension of the content becomes challenging. Furthermore, this exceptional detection possibility implies that it may be higher when the text font is serif than when it is sans-serif.

Wavelet Transform-based Face Detection for Real-time Applications (실시간 응용을 위한 웨이블릿 변환 기반의 얼굴 검출)

  • 송해진;고병철;변혜란
    • Journal of KIISE:Software and Applications
    • /
    • v.30 no.9
    • /
    • pp.829-842
    • /
    • 2003
  • In this paper, we propose a new face detection and tracking method based on template matching for real-time applications such as teleconferencing, telecommunication, the front stage of face-recognition surveillance systems, and video-phone applications. Since the main purpose of this paper is to track a face regardless of environment, we use a template-based face tracking method. To generate robust face templates, we apply the wavelet transform to the average face image and extract three types of wavelet templates from the transformed low-resolution average face. However, since template matching is generally sensitive to changes in illumination, we apply min-max normalization with histogram equalization according to the variation in intensity. A tracking method is also applied to reduce the computation time and predict a precise face candidate region. Finally, facial components are also detected, and from the relative distance between the two eyes, we estimate the size of the facial ellipse.
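The min-max normalization step mentioned above can be sketched as follows: scaling each window to [0, 1] makes the template comparison insensitive to uniform brightness shifts (a minimal sketch; the paper additionally combines this with histogram equalization and wavelet templates):

```python
import numpy as np

def minmax_normalize(img):
    """Scale an image window to [0, 1] to reduce sensitivity to global
    illumination before template matching (a sketch of the min-max
    normalization step described in the abstract)."""
    img = np.asarray(img, dtype=float)
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo) if hi > lo else np.zeros_like(img)

template = minmax_normalize([[10, 20], [30, 40]])
window = minmax_normalize([[110, 120], [130, 140]])  # same pattern, brighter
print(np.allclose(template, window))                 # True
```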

Evaluation of Application Possibility for Floating Marine Pollutants Detection Using Image Enhancement Techniques: A Case Study for Thin Oil Film on the Sea Surface (영상 강화 기법을 통한 부유성 해양오염물질 탐지 기술 적용 가능성 평가: 해수면의 얇은 유막을 대상으로)

  • Soyeong Jang;Yeongbin Park;Jaeyeop Kwon;Sangheon Lee;Tae-Ho Kim
    • Korean Journal of Remote Sensing
    • /
    • v.39 no.6_1
    • /
    • pp.1353-1369
    • /
    • 2023
  • In the event of a disaster accident at sea, the scale of damage varies with weather effects such as wind, currents, and tidal waves, and it is essential to minimize the damage by establishing appropriate control plans through quick on-site identification. In particular, pollutants that exist as a thin film on the sea surface are difficult to identify because of their relatively low viscosity and surface tension among the pollutants discharged into the sea. Therefore, this study aims to develop an algorithm that detects floating pollutants on the sea surface in RGB images using imaging equipment that can be easily used in the field, and to evaluate the performance of the algorithm using input data obtained from actual waters. The developed algorithm uses image enhancement techniques to improve the contrast between the intensity values of pollutants and the general sea surface; through histogram analysis, the background threshold is found, suspended solids other than pollutants are removed, and finally pollutants are classified. To evaluate the performance of the developed algorithm, a real sea test using substitute materials was performed. Most of the floating marine pollutants were detected, but false detections occurred in areas with strong waves. Nevertheless, the detection results are about three times better than those of the existing single-threshold detection method. The results of this R&D are expected to be useful for on-site control response activities by detecting floating marine pollutants that were difficult to identify with the naked eye at sea.
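The enhance-then-threshold idea above can be sketched with a contrast stretch followed by a threshold derived from background statistics. This is a simplified stand-in for the paper's histogram-analysis threshold, and the `offset` parameter is hypothetical:

```python
import numpy as np

def detect_bright_film(gray, offset=2.0):
    """Flag pixels brighter than the background by `offset` standard
    deviations, after contrast stretching (a simplified stand-in for
    the paper's histogram-based background threshold; `offset` is
    hypothetical)."""
    g = np.asarray(gray, dtype=float)
    g = (g - g.min()) / (g.max() - g.min() + 1e-9)   # contrast stretch (enhancement)
    thresh = g.mean() + offset * g.std()             # background-derived threshold
    return g > thresh                                # boolean pollutant mask

# toy scene: dim sea surface with a small bright oil-film patch
sea = np.full((8, 8), 0.2) + np.random.default_rng(0).normal(0, 0.01, (8, 8))
sea[3:5, 3:5] = 0.9
print(detect_bright_film(sea).sum())                 # 4 pixels flagged
```

A further suspended-solids rejection step, as in the paper, would then filter this mask by shape or size before final classification.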