• Title/Summary/Keyword: automatic image change

Traffic Object Tracking Based on an Adaptive Fusion Framework for Discriminative Attributes (차별적인 영상특징들에 적응 가능한 융합구조에 의한 도로상의 물체추적)

  • Kim Sam-Yong;Oh Se-Young
    • Journal of the Institute of Electronics Engineers of Korea SC / v.43 no.5 s.311 / pp.1-9 / 2006
  • Because most applications of vision-based object tracking operate satisfactorily only in highly constrained environments with simplifying assumptions or specific visual attributes, these approaches cannot track target objects in highly variable, unstructured, and dynamic environments such as a traffic scene. An adaptive fusion framework that takes advantage of the richness of visual information such as color, appearance, and shape is essential, especially in cluttered, dynamically changing scenes with partial occlusion[1]. This paper develops a particle-filter-based adaptive fusion framework and improves its robustness and adaptability by adding a new distinctive visual attribute, an image feature descriptor using SIFT (Scale Invariant Feature Transform)[2], together with an automatic learning scheme that updates the SIFT feature library according to viewpoint, illumination, and background changes. The proposed algorithm is applied to tracking various traffic objects such as vehicles, pedestrians, and bikes in a driver assistance system, an important component of the Intelligent Transportation System.
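
The SIFT attribute above amounts to matching a stored library of target descriptors against each new frame. Below is a minimal sketch of that matching step using OpenCV; `match_target` and the ratio threshold are illustrative assumptions, and the paper's particle filter and attribute-fusion weights are not reproduced here.

```python
# Minimal sketch: match a stored SIFT descriptor library of the tracked
# target against the current frame (assumes OpenCV >= 4.4 with SIFT).
import cv2

sift = cv2.SIFT_create()
matcher = cv2.BFMatcher(cv2.NORM_L2)

def match_target(library_desc, frame_gray, ratio=0.75):
    """Return frame keypoint locations that match the target's library."""
    kps, desc = sift.detectAndCompute(frame_gray, None)
    if desc is None:
        return []
    matches = matcher.knnMatch(library_desc, desc, k=2)
    good = []
    for pair in matches:
        # Lowe's ratio test keeps only distinctive matches.
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good.append(pair[0])
    return [kps[m.trainIdx].pt for m in good]
```

On a viewpoint or illumination change, newly matched descriptors would be appended to `library_desc`, which is the role of the automatic learning scheme described in the abstract.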

An Enhanced Cloud Cover Reading Algorithm Against Aerosol (연무에 강한 구름 판독 알고리즘)

  • Yun, Han-Kyung
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.12 no.1 / pp.7-12 / 2019
  • Clouds in the atmosphere are an important variable in temperature change: they reflect the radiant energy of the Earth's surface and also change the amount of sunshine by reflecting the sun's radiant energy. The amount of sunshine reaching the surface is therefore essential information. Ground-based eye observations of the sky have traditionally been supplemented by satellite photographs or by observation equipment with relatively narrow coverage. Automatic cloud observation systems have been developed to replace human observers, but depending on the season their reliability is not high enough for field use because of pollutants or fog in the atmosphere. We therefore developed a cloud observation algorithm that is robust against smog and fog. It calculates the degree of aerosol from the all-sky image and is added to our previously developed cloud-cover reader, making the algorithm insensitive to season and climate and improving its reliability. Compared with existing cloud readers, the resulting cloud-cover estimates are improved.
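
The abstract does not spell out the cloud-cover computation itself; a common baseline for all-sky images, onto which an aerosol correction such as the one described could be layered, is red/blue ratio thresholding. The sketch below is illustrative only and is not the paper's algorithm; the threshold value is an assumption.

```python
# Illustrative red/blue-ratio cloud segmentation for an all-sky image.
# Clear sky scatters blue strongly (low R/B); clouds are near-gray (high R/B).
import numpy as np

def cloud_cover_fraction(rgb, sky_mask, ratio_thresh=0.72):
    """rgb: HxWx3 array (RGB order); sky_mask: boolean mask of sky pixels."""
    r = rgb[..., 0].astype(float)
    b = rgb[..., 2].astype(float) + 1e-6   # avoid division by zero
    cloudy = (r / b > ratio_thresh) & sky_mask
    return cloudy.sum() / max(int(sky_mask.sum()), 1)
```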

A preliminary study for development of an automatic incident detection system on CCTV in tunnels based on a machine learning algorithm (기계학습(machine learning) 기반 터널 영상유고 자동 감지 시스템 개발을 위한 사전검토 연구)

  • Shin, Hyu-Soung;Kim, Dong-Gyou;Yim, Min-Jin;Lee, Kyu-Beom;Oh, Young-Sup
    • Journal of Korean Tunnelling and Underground Space Association / v.19 no.1 / pp.95-107 / 2017
  • In this study, a preliminary investigation was undertaken toward a machine-learning-based automatic tunnel incident detection system, intended to detect incidents taking place in a tunnel in real time and to identify the type of incident. Two road sites with operating CCTVs were selected, and part of the CCTV footage was processed to produce training data sets. The data sets consist of the position and time information of moving objects on the CCTV screen, extracted by detecting and tracking objects entering the screen with a conventional image processing technique available in this study. The data sets are matched with six categories of events, such as lane change and stopping, which are also included in the training data. The training data were learned by a resilience neural network with two hidden layers; nine architectural models were set up for parametric studies, from which the 300 (first hidden layer)-150 (second hidden layer) model was found to be optimal, giving the highest accuracy on both the training data and test data not used for training. This study showed that highly variable and complex traffic and incident features can be identified, without any explicit feature definitions, by using machine learning. In addition, the detection capability and accuracy of the machine-learning-based system will improve automatically as the body of tunnel CCTV image data grows.
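
The 300-150 architecture above is a plain two-hidden-layer feed-forward classifier over tracked-object position/time features. A minimal stand-in using scikit-learn is sketched below; the synthetic features and the solver are assumptions, not the paper's training setup.

```python
# Sketch of a 6-class event classifier with two hidden layers (300, 150).
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 20))        # placeholder position/time feature vectors
y = rng.integers(0, 6, size=600)      # 6 event categories, e.g. lane change, stopping

model = MLPClassifier(hidden_layer_sizes=(300, 150), max_iter=300, random_state=0)
model.fit(X, y)
print("training accuracy:", model.score(X, y))
```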

Validation of Extreme Rainfall Estimation in an Urban Area derived from Satellite Data : A Case Study on the Heavy Rainfall Event in July, 2011 (위성 자료를 이용한 도시지역 극치강우 모니터링: 2011년 7월 집중호우를 중심으로)

  • Yoon, Sun-Kwon;Park, Kyung-Won;Kim, Jong Pil;Jung, Il-Won
    • Journal of Korea Water Resources Association / v.47 no.4 / pp.371-384 / 2014
  • This study developed a new algorithm for extreme rainfall extraction based on Communication, Ocean and Meteorological Satellite (COMS) and Tropical Rainfall Measuring Mission (TRMM) satellite image data and evaluated its applicability for the July 2011 heavy rainfall event in Seoul, South Korea. A power-series-regression-based Z-R relationship was employed to capture the empirical relationships between TRMM/PR, TRMM/VIRS, COMS, and Automatic Weather System (AWS) data at each elevation. The estimated Z-R relationship ($Z=303R^{0.72}$) agreed well with AWS observations (correlation coefficient = 0.57). The 10-minute rainfall intensities estimated from the COMS satellite using this Z-R relationship were underestimated overall, while for small rainfall events the relationship tended to overestimate intensities. However, the overall patterns of estimated rainfall were very comparable with the observed data. The correlation coefficient and Root Mean Square Error (RMSE) of the 10-minute rainfall series from COMS against AWS were 0.517 and 3.146, respectively. In addition, the averaged error value of the spatial correlation matrix ranged from -0.530 to -0.228, indicating negative correlation. To reduce the error in extreme rainfall estimation from satellite data sets, more extreme-rainfall factors must be taken into account and the algorithm improved through further study. This study showed the potential utility of multi-geostationary satellite data for building up sub-daily rainfall records and establishing real-time flood alert systems in ungauged watersheds.
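
Given the reported relationship $Z=303R^{0.72}$, rain rate follows by inverting it; radar reflectivity is usually quoted in dBZ, so a worked conversion looks like the sketch below (the 30 dBZ input is just an example value).

```python
# Invert Z = 303 * R**0.72 to get rain rate R (mm/h) from reflectivity in dBZ.
def rain_rate(dbz, a=303.0, b=0.72):
    z = 10.0 ** (dbz / 10.0)        # dBZ -> linear reflectivity Z
    return (z / a) ** (1.0 / b)     # R = (Z/a)^(1/b)

print(round(rain_rate(30.0), 1))    # ~5.3 mm/h for a 30 dBZ echo
```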

Automatic Detection of Type II Solar Radio Burst by Using 1-D Convolutional Neural Network

  • Kyung-Suk Cho;Junyoung Kim;Rok-Soon Kim;Eunsu Park;Yuki Kubo;Kazumasa Iwai
    • Journal of The Korean Astronomical Society / v.56 no.2 / pp.213-224 / 2023
  • Type II solar radio bursts show frequency drifts from high to low over time. They are known as a signature of coronal shocks associated with Coronal Mass Ejections (CMEs) and/or flares, which cause abrupt changes in the space environment near the Earth (space weather). Early detection of type II bursts is therefore important for space weather forecasting. In this study, we develop a deep learning (DL) model for the automatic detection of type II bursts. For this purpose, we adopted a 1-D Convolutional Neural Network (CNN), as it is well suited to processing the spatiotemporal information in the applied data set. We utilized a total of 286 radio burst spectrum images obtained by the Hiraiso Radio Spectrograph (HiRAS) from 1991 to 2012, along with 231 spectrum images without bursts from 2009 to 2015, to recognize type II bursts. The burst types were labeled manually according to their spectral features in an answer table. We then applied the 1-D CNN technique to the spectrum images using two filter windows of different sizes along the time axis. To develop the DL model, we randomly selected 412 spectrum images (80%) for training and validation. The training history shows that both training and validation losses drop rapidly, while training and validation accuracies increase, within approximately 100 epochs. To evaluate the model's performance, we used 105 test images (20%) and employed a contingency table. The false alarm ratio (FAR) and critical success index (CSI) were found to be 0.14 and 0.83, respectively. We further confirmed this result by adopting a five-fold cross-validation method, in which five groups were re-sampled randomly; the estimated mean FAR and CSI of the five groups were 0.05 and 0.87, respectively. For experimental purposes, we applied our proposed model to 85 HiRAS type II radio bursts listed in the NGDC catalogue from 2009 to 2016 and 184 quiet (no-burst) spectrum images before and after the type II bursts. Our model successfully detected 79 (93%) of the type II events. These results demonstrate, for the first time, that the 1-D CNN algorithm is useful for detecting type II bursts.
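
A minimal sketch of the two-window 1-D CNN idea is given below in Keras; layer widths, kernel sizes, and the input shape are illustrative assumptions, not the paper's exact architecture. The FAR and CSI used for evaluation follow their standard contingency-table definitions.

```python
# 1-D CNN with two filter windows of different sizes along the time axis.
import tensorflow as tf
from tensorflow.keras import layers

n_time, n_freq = 256, 128            # spectrum image: time steps x frequency channels
inp = tf.keras.Input(shape=(n_time, n_freq))
short = layers.GlobalMaxPooling1D()(layers.Conv1D(32, 3, activation="relu")(inp))
long_ = layers.GlobalMaxPooling1D()(layers.Conv1D(32, 9, activation="relu")(inp))
out = layers.Dense(1, activation="sigmoid")(layers.concatenate([short, long_]))
model = tf.keras.Model(inp, out)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

def far_csi(hits, false_alarms, misses):
    """Standard contingency-table scores: FAR = FP/(TP+FP), CSI = TP/(TP+FP+FN)."""
    return false_alarms / (hits + false_alarms), hits / (hits + false_alarms + misses)
```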

Auto Frame Extraction Method for Video Cartooning System (동영상 카투닝 시스템을 위한 자동 프레임 추출 기법)

  • Kim, Dae-Jin;Koo, Ddeo-Ol-Ra
    • The Journal of the Korea Contents Association / v.11 no.12 / pp.28-39 / 2011
  • While broadband multimedia technologies have been developing, the commercial market for digital content has also been spreading widely. Above all, the digital cartoon market, including internet cartoons, has grown rapidly, so video cartooning has been researched continuously to make up for the shortage and limited variety of cartoon content. Until now, video cartooning research has focused on non-photorealistic rendering and word balloons, but meaningful frame extraction must take priority when a cartooning system is applied as a service. In this paper, we propose a new automatic frame extraction method for video cartooning systems. First, we separate the video and audio of a movie and extract feature parameters such as MFCC and ZCR from the audio data. The audio signal is classified into speech, music, and speech+music by comparison with already-trained audio data using a GMM classifier, which lets us delimit the speech regions. For the video, we extract candidate frames using a general scene change detection method, such as the histogram method, and select the meaningful frames for the cartoon by applying face detection to the extracted frames. Scene-change frames that contain faces within the speech regions are then extracted automatically, and the frames suitable for movie cartooning are chosen from these scene-change frames over continuous periods of the time domain.
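
The histogram-based scene change detection used for candidate frames can be sketched as below with OpenCV; the histogram bins and correlation threshold are assumptions, and the GMM audio classifier and face detector stages are omitted.

```python
# Histogram-based scene change detection: flag frames whose HSV histogram
# correlates poorly with the previous frame's histogram.
import cv2

def scene_change_frames(path, thresh=0.6):
    cap = cv2.VideoCapture(path)
    prev_hist, changes, idx = None, [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        hist = cv2.calcHist([hsv], [0, 1], None, [50, 60], [0, 180, 0, 256])
        cv2.normalize(hist, hist)
        if prev_hist is not None and \
           cv2.compareHist(prev_hist, hist, cv2.HISTCMP_CORREL) < thresh:
            changes.append(idx)          # low correlation => scene change
        prev_hist, idx = hist, idx + 1
    cap.release()
    return changes
```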

Evaluation of High Absorption Photoconductor for Application to Auto Exposure Control Sensor by Screen Printing Method (자동노출제어장치 센서적용을 위한 스크린 프린팅 제작방식의 고흡수율 광도전체 특성평가)

  • Kim, Dae-Kuk;Kim, Kyo-Tae;Park, Jeong-Eun;Hong, Ju-Yeon;Kim, Jin-Seon;Oh, Kyung-Min;Nam, Sang-Hee
    • Journal of the Korean Society of Radiology / v.9 no.2 / pp.67-72 / 2015
  • In diagnostic radiology, the use of an automatic exposure control (AEC) device is internationally recommended for diagnosis and dose optimization. However, conventional commercially available brightness-type sensors require a complicated manufacturing process, and prolonged exposure to radiation causes an overall decline in their performance. In this study, we therefore evaluate the AEC applicability of a photoconductor-based sensor that absorbs X-rays strongly and has the advantage of easy fabrication. The experimental results confirm the feasibility of the sensor through an increased SNR, superior detection efficiency, and accurate turn-off. The transmittance and latent-image experiments also confirm that no ghost effect arises from the photoconductors, and all the photoconductors except PbO showed good transmittance of 80-90%. With its excellent mechanical stability, smaller performance variation with doping concentration than existing products in practical use, and easy photoconductor-based fabrication, the sensor is expected to be applicable as an AEC sensor.

A computer vision-based approach for behavior recognition of gestating sows fed different fiber levels during high ambient temperature

  • Kasani, Payam Hosseinzadeh;Oh, Seung Min;Choi, Yo Han;Ha, Sang Hun;Jun, Hyungmin;Park, Kyu hyun;Ko, Han Seo;Kim, Jo Eun;Choi, Jung Woo;Cho, Eun Seok;Kim, Jin Soo
    • Journal of Animal Science and Technology / v.63 no.2 / pp.367-379 / 2021
  • The objectives of this study were to evaluate convolutional neural network models and computer vision techniques for the classification of swine posture with high accuracy, and to use the results to investigate the effect of dietary fiber level on the behavioral characteristics of pregnant sows under low and high ambient temperatures during the last stage of gestation. A total of 27 crossbred sows (Yorkshire × Landrace; average body weight, 192.2 ± 4.8 kg) were assigned to three treatments in a randomized complete block design during the last stage of gestation (days 90 to 114). The sows in group 1 were fed a 3% fiber diet under neutral ambient temperature; the sows in group 2 were fed a 3% fiber diet under high ambient temperature (HT); the sows in group 3 were fed a 6% fiber diet under HT. Eight popular deep-learning-based feature extraction frameworks (DenseNet121, DenseNet201, InceptionResNetV2, InceptionV3, MobileNet, VGG16, VGG19, and Xception) for automatic swine posture classification were selected and compared using a swine posture image dataset constructed under real swine farm conditions. The neural network models showed excellent performance on previously unseen data (ability to generalize). The DenseNet121 feature extractor achieved the best performance with 99.83% accuracy, and both DenseNet201 and MobileNet showed an accuracy of 99.77% on the image dataset. The behavior of sows classified by the DenseNet121 feature extractor showed that the HT in our study reduced (p < 0.05) the standing behavior of sows and also tended to increase (p = 0.082) lying behavior. The high dietary fiber treatment tended to increase (p = 0.064) lying and decreased (p < 0.05) standing behavior of sows, but there was no change in sitting under HT conditions.
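
A transfer-learning setup of the kind described, with DenseNet121 as a pretrained feature extractor and a small posture head, can be sketched in Keras as below; the input size, frozen base, and three-class head (standing/sitting/lying) are assumptions rather than the paper's exact configuration.

```python
# DenseNet121 as a pretrained feature extractor for 3-way posture classification.
import tensorflow as tf
from tensorflow.keras import layers

base = tf.keras.applications.DenseNet121(
    include_top=False, weights="imagenet", pooling="avg", input_shape=(224, 224, 3))
base.trainable = False                         # reuse ImageNet features as-is

inputs = tf.keras.Input(shape=(224, 224, 3))
x = tf.keras.applications.densenet.preprocess_input(inputs)
outputs = layers.Dense(3, activation="softmax")(base(x))   # standing / sitting / lying
model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```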

Regional Projection Histogram Matching and Linear Regression based Video Stabilization for a Moving Vehicle (영역별 수직 투영 히스토그램 매칭 및 선형 회귀모델 기반의 차량 운행 영상의 안정화 기술 개발)

  • Heo, Yu-Jung;Choi, Min-Kook;Lee, Hyun-Gyu;Lee, Sang-Chul
    • Journal of Broadcast Engineering / v.19 no.6 / pp.798-809 / 2014
  • Video stabilization is performed to remove unexpected shaky and irregular motion from a video, and is often used as preprocessing for robust feature tracking and matching. Typical video stabilization algorithms are developed to compensate motion in surveillance video or outdoor recordings captured by a hand-held camera. However, since vehicle video contains rapid changes of motion and local features, typical stabilization algorithms cannot be applied to it as-is. In this paper, we propose a novel approach that compensates shaky and irregular motion in vehicle video using a linear regression model and vertical projection histogram matching. Toward this goal, we perform vertical projection histogram matching in each sub-region of an input frame, and then build a linear regression model over the estimated regional vertical movement vectors to extract vertical translation and rotation parameters. Multiple binarization with sub-region analysis for generating the linear regression model is effective in typical recording environments where rapid changes of motion and local features occur. We demonstrated the effectiveness of our approach on vehicle black-box videos and showed that the linear regression model achieves robust estimation of motion parameters and generates stabilized video in a fully automatic manner.
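
The core of the method, per-region vertical projection matching followed by a linear fit across regions, can be sketched with NumPy as below; the binarization rule, search range, and the reading of slope as rotation and intercept as translation are assumptions consistent with the abstract, not the paper's exact formulation.

```python
# Regional vertical-projection histogram matching plus linear regression.
import numpy as np

def vertical_projection(region):
    """Binarize a sub-region and sum along rows -> 1-D vertical profile."""
    return (region > region.mean()).sum(axis=1).astype(float)

def regional_shift(prev_region, cur_region, max_shift=20):
    """Vertical shift that best aligns consecutive frames in one sub-region."""
    p, c = vertical_projection(prev_region), vertical_projection(cur_region)
    errs = [np.mean((np.roll(c, s) - p) ** 2)
            for s in range(-max_shift, max_shift + 1)]
    return int(np.argmin(errs)) - max_shift

def global_motion(region_centers_x, shifts):
    """Fit shift ~ a*x + b: slope approximates rotation, intercept translation."""
    a, b = np.polyfit(region_centers_x, shifts, 1)
    return a, b
```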

Effects of Halogen and Light-Shielding Curtains on Acquisition of Hyperspectral Images in Greenhouses (온실 내 초분광 영상 취득 시 할로겐과 차광 커튼이 미치는 영향)

  • Kim, Tae-Yang;Ryu, Chan-Seok;Kang, Ye-seong;Jang, Si-Hyeong;Park, Jun-Woo;Kang, Kyung-Suk;Baek, Hyeon-Chan;Park, Min-Jun;Park, Jin-Ki
    • Korean Journal of Agricultural and Forest Meteorology / v.23 no.4 / pp.306-315 / 2021
  • This study analyzed the effects of light-shielding curtains and halogen lamps on the spectrum when acquiring hyperspectral images in a greenhouse. Image data of a tarp (1.4 × 1.4 m, 12% reflectance) set at an angle of 30 degrees were acquired three times under four conditions at 14 heights, using the automatic image acquisition system installed in the greenhouse of the Department of Southern Area, National Institute of Crop Science. When the image was acquired with neither a light-shielding curtain nor a halogen lamp, the spectral tendencies of the direct-light and shadow parts differed around 550 nm; the average coefficient of variation (CV) of the direct-light and shadow parts was 1.8% and 4.2%, respectively, and rose to 12.5% when computed over all pixels regardless of shadows. When the image was acquired using only a halogen lamp, the average CV of the direct-light and shadow parts was 2.6% and 10.6%, and the spectrum varied more widely because the amount of halogen light changed with height. When only the light-shielding curtain was used, the average CV was 1.6%, and the distinction between direct light and shadow disappeared. When the image was acquired using both the light-shielding curtain and the halogen lamp, the average CV increased to 10.2%, again because the amount of halogen light differed with height. Under halogen with the light-shielding curtain, the average CV depending on height was 1.4% at 0.1 m, 1.9% at 0.2 m, 2.6% at 0.3 m, and 3.3% at 0.4 m. When hyperspectral imagery is acquired, a light-shielding curtain should be used to minimize the effect of shadows; moreover, supplementary halogen lighting is judged to be effective only when the object is smaller than 0.2 m and the distance between the object and the housing is kept constant.
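
The CV figures quoted above are the per-band standard deviation over the mean, averaged across bands; a minimal computation, with an assumed array layout, looks like this:

```python
# Average coefficient of variation (%) across spectral bands.
import numpy as np

def mean_cv(spectra):
    """spectra: (n_samples, n_bands) reflectance values from the tarp."""
    cv = spectra.std(axis=0) / spectra.mean(axis=0)   # per-band CV
    return 100.0 * cv.mean()                          # percent, averaged over bands
```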