• Title/Summary/Keyword: Color image detection


Estimating vegetation index for outdoor free-range pig production using YOLO

  • Sang-Hyon Oh;Hee-Mun Park;Jin-Hyun Park
    • Journal of Animal Science and Technology
    • /
    • v.65 no.3
    • /
    • pp.638-651
    • /
    • 2023
  • The objective of this study was to quantitatively estimate the level of grazing-area damage in outdoor free-range pig production using an unmanned aerial vehicle (UAV) with an RGB image sensor. Ten corn-field images were captured by the UAV over approximately two weeks, during which gestating sows were allowed to graze freely on a corn field measuring 100 × 50 m. The images were corrected to a bird's-eye view and then divided into 32 segments, which were input sequentially to a YOLOv4 detector to detect the corn according to its condition. The 43 raw training images selected randomly out of the 320 segmented images were flipped to create 86 images, and these were further augmented by rotating them in 5-degree increments, yielding 6,192 images. The 6,192 images were then augmented again by applying three random color transformations to each image, resulting in a dataset of 24,768 images. The occupancy rate of corn in the field was estimated efficiently using You Only Look Once (YOLO). Relative to the first day of observation (day 2), almost all of the corn had disappeared by the ninth day. When grazing 20 sows in a 50 × 100 m cornfield (250 m²/sow), it appears that the animals should be rotated to other grazing areas after at least five days to protect the cover crop. In agricultural technology, most research using machine and deep learning concerns the detection of fruits and pests, and research on other application fields is needed. In addition, applying deep learning requires large-scale image data collected by experts in the field as training data; when such data are insufficient, extensive data augmentation is required.
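The dataset sizes quoted in the abstract are self-consistent; a quick sketch of the augmentation bookkeeping (all numbers taken from the abstract):

```python
# Bookkeeping for the augmentation pipeline described in the abstract.
raw = 43                          # randomly selected segment images
flipped = raw * 2                 # horizontal flipping doubles the set
angles = 360 // 5                 # rotations in 5-degree increments over a full turn
rotated = flipped * angles        # 86 * 72 rotated images
color_variants = 1 + 3            # each image plus three random color transforms
total = rotated * color_variants  # final dataset size
print(flipped, rotated, total)
```

Running this reproduces the 86, 6,192, and 24,768 figures stated in the abstract.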

Current Status and Results of In-orbit Function, Radiometric Calibration and INR of GOCI-II (Geostationary Ocean Color Imager 2) on Geo-KOMPSAT-2B (정지궤도 해양관측위성(GOCI-II)의 궤도 성능, 복사보정, 영상기하보정 결과 및 상태)

  • Yong, Sang-Soon;Kang, Gm-Sil;Huh, Sungsik;Cha, Sung-Yong
    • Korean Journal of Remote Sensing
    • /
    • v.37 no.5_2
    • /
    • pp.1235-1243
    • /
    • 2021
  • Geostationary Ocean Color Imager 2 (GOCI-II) on the Geo-KOMPSAT-2B (GK2B) satellite was developed as the mission successor of GOCI on COMS, which had been operated for around 10 years since its launch in 2010 to observe and monitor ocean color around the Korean peninsula. GOCI-II on GK2B was successfully launched in February 2020 to continue the detection, monitoring, quantification, and prediction of short- and long-term changes in the coastal ocean environment for marine science research and application purposes. GOCI-II has already completed IAC and IOT, including early in-orbit calibration, and has been handed over to the NOSC (National Ocean Satellite Center) in KHOA (Korea Hydrographic and Oceanographic Agency). Radiometric calibration was conducted periodically using the on-board solar calibration system of GOCI-II. The final calibrated gain and offset were applied and validated during IOT, and three video parameter sets for one day and 12 video parameter sets for a year were selected and transferred to NOSC for normal operation. Star-measurement-based INR (Image Navigation and Registration) navigation filtering and landmark-measurement-based image geometric correction were applied to meet all the INR requirements. The GOCI-II INR software was validated through the INR IOT. In this paper, the status and results of the IOT, radiometric calibration, and INR of GOCI-II are analysed and described.
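The gain-and-offset step above can be illustrated with the standard linear radiometric calibration model; the coefficients below are placeholders for illustration only, not the mission's actual on-board values:

```python
def calibrate(dn, gain, offset):
    """Convert a raw digital number (DN) to radiance with the
    standard linear model: L = gain * DN + offset."""
    return gain * dn + offset

# Placeholder coefficients for illustration only.
radiance = calibrate(1024, gain=0.01, offset=0.5)
print(radiance)
```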

Albedo Based Fake Face Detection (빛의 반사량 측정을 통한 가면 착용 위변조 얼굴 검출)

  • Kim, Young-Shin;Na, Jae-Keun;Yoon, Sung-Beak;Yi, June-Ho
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.45 no.6
    • /
    • pp.139-146
    • /
    • 2008
  • Masked fake face detection using ordinary visible images is a formidable task when the mask is accurately made with special makeup. Considering recent advances in special-makeup technology, a reliable method to detect masked fake faces is essential to the development of a complete face recognition system. This research proposes a method for masked fake face detection that exploits the disparity in reflectance due to object material and surface color. First, we show that the measurement of albedo can be simplified to a radiance measurement when a practical face recognition system is deployed in a user-cooperative environment. This enables us to estimate albedo directly from the grey values of the captured image. Second, we found that 850 nm infrared light is effective for discriminating between facial skin and mask material using this reflectance disparity, while 650 nm visible light is known to be suitable for distinguishing the facial skin colors of different ethnic groups. We use a 2-D vector of radiance measurements under 850 nm and 650 nm illumination as the feature vector. Facial skin and mask material show linearly separable distributions in this feature space. By employing FIB, we achieved 97.8% accuracy in fake face detection. Our method is applicable to faces of different skin colors and can easily be implemented in commercial face recognition systems.
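Since skin and mask material are linearly separable in the 2-D radiance space, the decision rule reduces to a line. A minimal sketch of such a rule follows; the weights and bias are illustrative assumptions, not the paper's trained boundary:

```python
def is_skin(r850, r650, w850=1.0, w650=-0.6, bias=-20.0):
    """Linear decision rule on the (850 nm, 650 nm) radiance pair.
    Weights here are hypothetical; the paper learns the boundary from data."""
    return w850 * r850 + w650 * r650 + bias > 0

print(is_skin(80.0, 50.0))  # high 850 nm reflectance: skin-like
print(is_skin(20.0, 60.0))  # low 850 nm reflectance: mask-like
```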

A Study on the Implementation and Development of Image Processing Algorithms for Vein Detection Equipment (정맥 검출 장비 구현 및 영상처리 알고리즘 개발에 대한 연구)

  • Jin-Hyoung, Jeong;Jae-Hyun, Jo;Jee-Hun, Jang;Sang-Sik, Lee
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.15 no.6
    • /
    • pp.463-470
    • /
    • 2022
  • Intravenous injection is widely used in patient treatment, including injectable drugs, fluids, parenteral nutrition, and blood products. It is the most frequently performed invasive procedure for inpatients, covering blood collection, peripheral catheter insertion, and other IV therapy, with more than 1 billion cases per year. Intravenous injection is a difficult procedure performed only by experienced nurses trained in it, and failure can lead to thrombosis, hematoma, or nerve damage in the vein. Even nurses who frequently perform intravenous injections can make mistakes, because veins are not easy to detect owing to factors such as obesity, skin color, and age. Accordingly, studies on auxiliary equipment that can visualize the venous structure of the back of the hand or the arm have been published with the aim of reducing mistakes during intravenous injection. This paper describes the development of vein detection equipment that visualizes venous structure during intravenous injection; the optimal combination was selected by comparing the brightness of images acquired with combinations of near-infrared (NIR) LEDs and filters in different wavelength bands. In addition, an image processing algorithm was derived that thresholds the image and renders the blood-vessel regions in green, using grayscale conversion, histogram equalization, and sharpening filters to clarify the vein images obtained through the implemented vein detection experimental module.
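The histogram-equalization stage of such a pipeline can be sketched in pure Python using the standard cumulative-distribution mapping (images here are plain lists of rows, a simplifying assumption):

```python
def equalize(img, levels=256):
    """Histogram equalization of a 2-D grayscale image (list of rows)
    via the cumulative-distribution-function mapping."""
    pixels = [v for row in img for v in row]
    n = len(pixels)
    counts = [0] * levels
    for v in pixels:
        counts[v] += 1
    cdf, run = [0] * levels, 0
    for v in range(levels):
        run += counts[v]
        cdf[v] = run
    cdf_min = min(c for c in cdf if c > 0)
    if n == cdf_min:  # flat image: nothing to stretch
        return [row[:] for row in img]
    return [[round((cdf[v] - cdf_min) / (n - cdf_min) * (levels - 1))
             for v in row] for row in img]

print(equalize([[1, 1], [2, 3]]))
```

After equalization, a sharpening filter and a threshold would follow, with thresholded vessel pixels recolored green in the display image.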

IMToon: Image-based Cartoon Authoring System using Image Processing (IMToon: 영상처리를 활용한 영상기반 카툰 저작 시스템)

  • Seo, Banseok;Kim, Jinmo
    • Journal of the Korea Computer Graphics Society
    • /
    • v.23 no.2
    • /
    • pp.11-22
    • /
    • 2017
  • This study proposes IMToon (IMage-based carToon), an image-based cartoon authoring system that uses image processing algorithms. IMToon allows general users to produce the frames that make up a cartoon easily and efficiently from images. The authoring system is designed around two main functions: a cartoon effector and an interactive story editor. The cartoon effector automatically converts input images into cartoon-style images through image-based cartoon shading and outline drawing steps. Image-based cartoon shading takes the images of the desired scenes from the user, separates the brightness information from the color model of the input images, simplifies it into a shading range with the desired number of steps, and re-renders the images in cartoon style. The final cartoon-style images are then created in the outline drawing step, in which outlines obtained through edge detection are applied to the shaded images. The interactive story editor is used to enter speech balloons and captions in a dialog structure to create a scene of the completed cartoon that delivers a story, as in a webtoon or comic book. In addition, the cartoon effector is extended from still images to videos, so that it can be applied to both. Finally, various experiments verify that users can easily and efficiently produce the cartoons they want from images with the proposed IMToon system.
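The shading step simplifies brightness into a few flat bands; a minimal sketch of that quantization follows (the four-band default is an assumption; the system lets the user choose the number of steps):

```python
def cartoon_shade(v, steps=4, vmax=255):
    """Quantize a brightness value into `steps` flat shading bands,
    mapped back onto the 0..vmax display range."""
    band = min(v * steps // (vmax + 1), steps - 1)
    return band * vmax // (steps - 1)

print([cartoon_shade(v) for v in (0, 100, 200, 255)])
```

An edge-detection pass over the shaded image would then supply the outlines that complete the cartoon look.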

An Integrated Face Detection and Recognition System (통합된 시스템에서의 얼굴검출과 인식기법)

  • 박동희;이규봉;이유홍;나상동;배철수
    • Proceedings of the Korean Institute of Information and Commucation Sciences Conference
    • /
    • 2003.05a
    • /
    • pp.165-170
    • /
    • 2003
  • This paper presents an integrated approach to unconstrained face recognition in arbitrary scenes. The front end of the system comprises a scale- and pose-tolerant face detector. Scale normalization is achieved through a novel combination of skin color segmentation and a log-polar mapping procedure. Principal component analysis is used with the multi-view approach proposed in [10] to handle pose variations. For a given color input image, the detector encloses a face in a complex scene within a circular boundary and indicates the position of the nose. Next, for recognition, a radial grid mapping centered on the nose yields a feature vector within the circular boundary. As the width of the color-segmented region provides an estimated size for the face, the extracted feature vector is scale-normalized by this estimated size. The feature vector is input to a trained neural network classifier for face identification. The system was evaluated using a database of 20 persons' faces with varying scale and pose obtained against different complex backgrounds. The performance of the face recognizer was also quite good, except for a sensitivity to small-scale face images. The integrated system achieved average recognition rates of 87% to 92%.
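The log-polar mapping is what makes scale normalization tractable: because the radii grow geometrically, uniformly scaling the face becomes a pure shift along the log-radius axis. A minimal sketch of sampling such a grid (the grid dimensions are illustrative assumptions):

```python
import math

def logpolar_grid(cx, cy, rmax, nrho=32, ntheta=64, rmin=1.0):
    """Sample points on a log-polar grid centred at (cx, cy).
    Radii grow geometrically from rmin to rmax, so scaling the image
    shifts the pattern along the rho index instead of resizing it."""
    pts = []
    for i in range(nrho):
        r = rmin * (rmax / rmin) ** (i / (nrho - 1))
        for j in range(ntheta):
            t = 2.0 * math.pi * j / ntheta
            pts.append((cx + r * math.cos(t), cy + r * math.sin(t)))
    return pts

grid = logpolar_grid(0.0, 0.0, 100.0)
print(len(grid), grid[0])
```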

Improvement of Sleep Quality Using Color Histogram (컬러 히스토그램을 활용한 수면의 질 향상)

  • Shin, Seong-Yoon;Shin, Kwang-Seong;Rhee, Yang-Won
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.15 no.6
    • /
    • pp.1283-1288
    • /
    • 2011
  • In this paper, we collect data on the sleep environment in a bedroom and analyze the relationship between the collected condition data and sleep. In addition, we detect scene changes from subjects in a sleeping state and present their physical conditions, reactions during sleep, and physical sensations and stimuli. To detect scene changes in the image sequences, we use the color-histogram difference between the preceding frame and the current frame. To relate tossing and turning to different situations, the subjects were instructed to enter their level of fatigue, level of drinking, and level of stomach emptiness. For the sleep experiment system, we used the H-MOTE2420 sensor, composed of temperature, humidity, and light sensors. This paper is intended to help provide the best sleep environment for enhancing sleep quality, thus helping people today get regular and comfortable sleep.
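Scene-change detection by histogram difference can be sketched as follows; the single-channel frames and 8-bin histograms are simplifying assumptions (the paper compares color histograms of consecutive frames):

```python
def histogram(frame, bins=8, levels=256):
    """Bin the pixel values of one flattened, single-channel frame."""
    h = [0] * bins
    for v in frame:
        h[v * bins // levels] += 1
    return h

def frame_difference(prev, cur, bins=8):
    """L1 distance between consecutive frame histograms; a large value
    flags a scene change (here: the sleeper tossing or turning)."""
    a, b = histogram(prev, bins), histogram(cur, bins)
    return sum(abs(x - y) for x, y in zip(a, b))

print(frame_difference([0, 0, 255, 255], [0, 0, 0, 255]))
```

In practice a threshold on this distance decides whether a movement event is recorded.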

An Integrated Face Detection and Recognition System (통합된 시스템에서의 얼굴검출과 인식기법)

  • 박동희;배철수
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.7 no.6
    • /
    • pp.1312-1317
    • /
    • 2003
  • This paper presents an integrated approach to unconstrained face recognition in arbitrary scenes. The front end of the system comprises a scale- and pose-tolerant face detector. Scale normalization is achieved through a novel combination of skin color segmentation and a log-polar mapping procedure. Principal component analysis is used with the multi-view approach proposed in [10] to handle pose variations. For a given color input image, the detector encloses a face in a complex scene within a circular boundary and indicates the position of the nose. Next, for recognition, a radial grid mapping centered on the nose yields a feature vector within the circular boundary. As the width of the color-segmented region provides an estimated size for the face, the extracted feature vector is scale-normalized by this estimated size. The feature vector is input to a trained neural network classifier for face identification. The system was evaluated using a database of 20 persons' faces with varying scale and pose obtained against different complex backgrounds. The performance of the face recognizer was also quite good, except for a sensitivity to small-scale face images. The integrated system achieved average recognition rates of 87% to 92%.

Color Recognition and Phoneme Pattern Segmentation of Hangeul Using Augmented Reality (증강현실을 이용한 한글의 색상 인식과 자소 패턴 분리)

  • Shin, Seong-Yoon;Choi, Byung-Seok;Rhee, Yang-Won
    • Journal of the Korea Society of Computer and Information
    • /
    • v.15 no.6
    • /
    • pp.29-35
    • /
    • 2010
  • With the spread of inexpensive video equipment, the uses of video have diversified, and augmented reality can overlay additional content on real-world images and video. Although many augmented reality techniques have appeared recently, attempts at accurate character recognition are only now being made. In this paper, characters marked with a visual marker are recognized, and the characters whose color matches the marker color are found and displayed on the screen after recognition. By applying a phoneme-pattern segmentation algorithm based on horizontal projection, we propose segmenting phonemes to match the six structural types of Hangul representation. Experiments on phoneme segmentation samples using augmented reality showed the result of each step, and the detection rate was found to be above 90%.
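Horizontal projection reduces a binary glyph image to a row-sum profile, and contiguous non-zero runs in that profile delimit candidate phoneme bands. A minimal sketch, assuming a 0/1 image as a list of rows:

```python
def horizontal_projection(binary):
    """Row sums of a binary (0/1) glyph image."""
    return [sum(row) for row in binary]

def phoneme_bands(profile):
    """Contiguous runs of non-zero projection = candidate phoneme regions,
    returned as (first_row, last_row) pairs."""
    bands, start = [], None
    for i, v in enumerate(profile):
        if v and start is None:
            start = i
        elif not v and start is not None:
            bands.append((start, i - 1))
            start = None
    if start is not None:
        bands.append((start, len(profile) - 1))
    return bands

glyph = [[0, 0], [1, 1], [1, 0], [0, 0], [1, 1]]
print(phoneme_bands(horizontal_projection(glyph)))
```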

License Plate Location Using SVM (SVM을 이용한 차량 번호판 위치 추출)

  • Hong, Seok-Keun;Chun, Joo-Kwong;An, Myoung-Seok;Shim, Jun-Hwan;Cho, Seok-Je
    • Journal of Navigation and Port Research
    • /
    • v.32 no.10
    • /
    • pp.845-850
    • /
    • 2008
  • In this paper, we propose a license plate locating algorithm using an SVM. Typically, the features of the license plate format include the height-to-width ratio, color, and spatial frequency. The method is divided into three steps: image acquisition, detection of license plate candidate regions, and accurate verification of the license plate. In detecting candidate regions, color filtering and edge detection are performed, and the candidates are then verified using a Support Vector Machine (SVM) with the DCT coefficients of each candidate. This verification step makes reliable license plate location possible because it protects against false detections. We validate our approach with experimental results.
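The verification step feeds DCT coefficients of each candidate region to the SVM. The feature extraction can be sketched with a 1-D type-II DCT (the paper works on 2-D image blocks, where this transform is applied along rows and then columns; the un-normalized form below is a simplification):

```python
import math

def dct2_1d(x):
    """Un-normalized type-II DCT of a 1-D signal."""
    n = len(x)
    return [sum(x[i] * math.cos(math.pi * k * (2 * i + 1) / (2 * n))
                for i in range(n))
            for k in range(n)]

coeffs = dct2_1d([1.0, 1.0, 1.0, 1.0])
print(coeffs[0])  # DC term carries the total; higher terms vanish for a flat signal
```

The resulting coefficient vectors, which concentrate the spatial-frequency content typical of plate regions into a few terms, are what the SVM classifies.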