• Title/Summary/Keyword: Spatial detection system


Automatic gasometer reading system using selective optical character recognition (관심 문자열 인식 기술을 이용한 가스계량기 자동 검침 시스템)

  • Lee, Kyohyuk; Kim, Taeyeon; Kim, Wooju
    • Journal of Intelligence and Information Systems / v.26 no.2 / pp.1-25 / 2020
  • In this paper, we suggest an application system architecture that provides an accurate, fast, and efficient automatic gasometer reading function. The system captures a gasometer image with a mobile device camera, transmits the image to a cloud server over a private LTE network, and analyzes the image to extract the device ID and gas usage amount using selective optical character recognition based on deep learning. In general, an image contains many types of characters, and optical character recognition extracts all of them. Some applications, however, need to ignore characters that are not of interest and focus only on specific types. For example, an automatic gasometer reading system only needs to extract the device ID and gas usage amount from gasometer images in order to bill users. Character strings that are not of interest, such as the device type, manufacturer, manufacturing date, and specification, are not valuable to the application. The application therefore has to analyze only the regions of interest and specific character types to extract valuable information. We adopted CNN (Convolutional Neural Network) based object detection and CRNN (Convolutional Recurrent Neural Network) technology for selective optical character recognition, which analyzes only the regions of interest for selective character information extraction. We built three neural networks for the application system.
The first is a convolutional neural network that detects the regions of interest containing the gas usage amount and device ID character strings; the second is another convolutional neural network that transforms the spatial information of a region of interest into spatially sequential feature vectors; and the third is a bidirectional long short-term memory network that converts the sequential information into character strings through time-series mapping from feature vectors to characters. In this research, the character strings of interest are the device ID and the gas usage amount. The device ID consists of 12 Arabic numerals and the gas usage amount consists of 4 to 5 Arabic numerals. All system components are implemented in the Amazon Web Services cloud with an Intel Xeon E5-2686 v4 CPU and an NVIDIA Tesla V100 GPU. The system architecture adopts a master-slave processing structure for efficient and fast parallel processing, coping with about 700,000 requests per day. The mobile device captures a gasometer image and transmits it to the master process in the AWS cloud. The master process runs on the Intel Xeon CPU and pushes each reading request from a mobile device onto an input queue with a FIFO (First In, First Out) structure. The slave process consists of the three deep neural networks that perform character recognition and runs on the NVIDIA GPU. The slave process continually polls the input queue for recognition requests. When there are requests from the master process in the input queue, the slave process converts the image into the device ID string, the gas usage amount string, and the position information of the strings, returns this information to an output queue, and switches back to idle mode to poll the input queue. The master process gets the final information from the output queue and delivers it to the mobile device. We used a total of 27,120 gasometer images for training, validation, and testing of the three deep neural networks.
22,985 images were used for training and validation, and 4,135 images were used for testing. We randomly split the 22,985 images at an 8:2 ratio into training and validation sets for each training epoch. The 4,135 test images were categorized into five types (normal, noise, reflex, scale, and slant). Normal data are clean images; noise means images with noise signals; reflex means images with light reflection on the gasometer; scale means images with small object sizes due to long-distance capturing; and slant means images that are not horizontally level. The final character string recognition accuracies for the device ID and gas usage amount on normal data are 0.960 and 0.864, respectively.
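
The master-slave queue flow described above can be sketched as follows. This is an illustrative single-threaded simulation, not the authors' implementation: the function names are assumptions, and the three recognition networks are replaced by a stub returning dummy strings.

```python
from collections import deque

input_queue = deque()    # FIFO: the master pushes, a slave pops
output_queue = deque()

def recognize(image):
    # Stand-in for the three networks (region detector, CNN feature
    # extractor, bidirectional LSTM decoder); returns dummy strings.
    return {"device_id": "0" * 12, "usage": "1234", "region": (0, 0, 10, 10)}

def master_submit(image):
    # Master process: push a reading request onto the input queue.
    input_queue.append(image)

def slave_poll_once():
    # Slave process: poll the input queue, recognize, push the result.
    if input_queue:
        image = input_queue.popleft()          # First In, First Out
        output_queue.append(recognize(image))

master_submit("gasometer.jpg")
slave_poll_once()
result = output_queue.popleft()                # master delivers this to the device
```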

Measurement Accuracy for 3D Structure Shape Change using UAV Images Matching (UAV 영상정합을 통한 구조물 형상변화 측정 정확도 연구)

  • Kim, Min Chul; Yoon, Hyuk Jin; Chang, Hwi Jeong; Yoo, Jong Soo
    • Journal of Korean Society for Geospatial Information Science / v.25 no.1 / pp.47-54 / 2017
  • Recently, there have been many studies on aerial mapping and on 3D shape and model reconstruction using UAV (unmanned aerial vehicle) systems and images. In this study, we create 3D point data by image matching of overlapping UAV images, detect shape changes of a structure, and assess the accuracy of the resulting area (m²) and volume (m³) values. First, we build a test structure model and capture images of its shape before and after the change. Second, in post-processing, the "before" dataset is converted to a raster-format image so it can be compared with the full 3D point cloud of the "after" dataset. The results show high accuracy for shape changes of more than 30 centimeters, but smaller changes remain difficult to measure because of the inherent limits of the image matching technology. Nevertheless, the proposed methodology appears very useful for detecting illegal structures and for quantitative analysis of structural damage and facility management.
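
The area/volume comparison step can be illustrated with a minimal sketch: difference a "before" raster surface against an "after" surface, then sum cell heights into an area and a volume. The grids, the 0.1 m cell size, and the 30 cm reliability cutoff applied per cell are assumed example values, not the paper's actual data.

```python
import numpy as np

cell = 0.1                                   # raster cell size in meters
before = np.zeros((4, 4))                    # flat "before" surface (heights in m)
after = before.copy()
after[1:3, 1:3] = 0.5                        # a small block raised by 0.5 m

dz = after - before                          # per-cell height change
changed = np.abs(dz) > 0.3                   # keep only changes above 30 cm
area_m2 = changed.sum() * cell * cell        # changed area
volume_m3 = dz[changed].sum() * cell * cell  # net volume change
```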

Generation of Sea Surface Temperature Products Considering Cloud Effects Using NOAA/AVHRR Data in the TeraScan System: Case Study for May Data (TeraScan시스템에서 NOAA/AVHRR 해수면온도 산출시 구름 영향에 따른 신뢰도 부여 기법: 5월 자료 적용)

  • Yang, Sung-Soo; Yang, Chan-Su; Park, Kwang-Soon
    • Journal of the Korean Society for Marine Environment & Energy / v.13 no.3 / pp.165-173 / 2010
  • A cloud detection method is introduced to improve the reliability of NOAA/AVHRR Sea Surface Temperature (SST) data processed during the daytime and nighttime in the TeraScan system. In daytime, channels 2 and 4 are used to detect cloud with three tests: spatial uniformity tests of brightness temperature (infrared channel 4) and of channel 2 albedo, and a reflectivity threshold test on visible channel 2. The nighttime cloud detection tests use channels 3 and 4, because channel 2 data are not available at night; this process includes a dual-channel brightness temperature difference (ch3 - ch4) test and an infrared channel brightness temperature threshold test. For a comparison of daytime and nighttime SST images, the two datasets used here were obtained at 00:28 (UTC) and 21:00 (UTC) on May 13, 2009. Six parameters were tested to understand the factors that affect cloud masking in and around the Korean Peninsula. In daytime, the thresholds for ch2_max cover the range 3 through 8, while ch4_delta and ch2_delta are fixed at 5 and 2, respectively. In nighttime, the threshold range of ch3_minus_ch4 is from -1 to 0, while ch4_delta and min_ch4_temp are fixed at 3.5 and 0, respectively. The resulting images acceptably represent the reliability of the SST according to the change of the cloud-masked area at each level. In the future, the accuracy of the SST will be verified, and an assimilation method for SST data should be tested to improve reliability, considering the atmospheric characteristics of the research area around the Korean Peninsula.
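
The threshold tests above can be sketched as follows, using the parameter names from the abstract (ch2_max, ch2_delta, ch4_delta, ch3_minus_ch4, min_ch4_temp). The uniformity measure (local max-min range over a small cell), the default threshold values chosen within the quoted ranges, and the sample arrays are all assumptions for illustration.

```python
import numpy as np

def daytime_cloud_mask(ch2_albedo, ch4_bt, ch2_max=5.0,
                       ch2_delta=2.0, ch4_delta=5.0):
    """Flag a small cell as cloudy if any daytime test fires."""
    refl_test = ch2_albedo > ch2_max                       # visible reflectivity threshold
    ch2_uniform = (ch2_albedo.max() - ch2_albedo.min()) > ch2_delta
    ch4_uniform = (ch4_bt.max() - ch4_bt.min()) > ch4_delta
    return bool(refl_test.any() or ch2_uniform or ch4_uniform)

def nighttime_cloud_mask(ch3_bt, ch4_bt, ch3_minus_ch4=-0.5,
                         ch4_delta=3.5, min_ch4_temp=0.0):
    """Nighttime tests: dual-channel BT difference and BT thresholds."""
    diff_test = (ch3_bt - ch4_bt) < ch3_minus_ch4          # ch3 - ch4 difference test
    temp_test = ch4_bt < min_ch4_temp                      # cold-cloud threshold
    ch4_uniform = (ch4_bt.max() - ch4_bt.min()) > ch4_delta
    return bool(diff_test.any() or temp_test.any() or ch4_uniform)

clear_albedo = np.array([[1.0, 1.2], [1.1, 0.9]])   # dark and uniform
clear_bt = np.array([[18.0, 18.5], [18.2, 18.1]])   # warm and uniform (deg C)
cloudy_albedo = np.array([[9.0, 2.0], [8.5, 1.0]])  # bright and non-uniform
```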

Report about First Repeated Sectional Measurements of Water Property in the East Sea using Underwater Glider (수중글라이더를 활용한 동해 최초 연속 물성 단면 관측 보고)

  • Gyuchang Lim; Jongjin Park
    • The Sea: Journal of the Korean Society of Oceanography / v.29 no.1 / pp.56-76 / 2024
  • We report the first and longest continuous sectional observation in the East Sea made with an underwater glider, conducted over 95 days from September 18 to December 21, 2020, along the 106 Line (129.1°E to 131.5°E at 37.9°N) of the regular shipboard measurements of the National Institute of Fisheries Science (NIFS), obtaining twelve hydrographic sections with high spatiotemporal resolution. The glider was deployed at 129.1°E on September 18 and conducted an 88-day flight from September 19 to December 15, 2020, yielding the twelve hydrographic sections; it was recovered at 129.2°E on December 21 after a final 6-day virtual-mooring operation. Over the total traveled distance of 2,550 km, the estimated deviation from the predetermined zonal path had an RMS distance of 262 m. Based on these high-resolution, long-term glider measurements, we conducted a comparative study with the bimonthly NIFS measurements in terms of spatial and temporal resolution and found distinct features. One is that sub-mesoscale spatial features, such as sub-mesoscale frontal structure and an intensified thermocline, were detected only in the glider measurements, mainly owing to the glider's high spatial resolution. Another is the detection of intramonthly variations in the weekly time series of temperature and salinity extracted from the glider's continuous sections. Lastly, there were deviations and biases between the measurements from the two platforms. We discuss these deviations in terms of the time scale of variation, the spatial scale of fixed-point observation, and the calibration status of the CTD devices on both platforms.
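
The RMS path-deviation statistic quoted above is simply the root-mean-square of the glider's cross-track distances from the predetermined zonal line; a minimal sketch follows. The sample deviations are invented for illustration and merely land in the same order of magnitude as the reported 262 m.

```python
import math

def rms_deviation(cross_track_m):
    # Root-mean-square of signed cross-track offsets (meters).
    return math.sqrt(sum(d * d for d in cross_track_m) / len(cross_track_m))

deviations = [150.0, -300.0, 250.0, -200.0]   # signed offsets from the zonal path
rms = rms_deviation(deviations)
```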

Vegetation Monitoring using Unmanned Aerial System based Visible, Near Infrared and Thermal Images (UAS 기반, 가시, 근적외 및 열적외 영상을 활용한 식생조사)

  • Lee, Yong-Chang
    • Journal of Cadastre & Land InformatiX / v.48 no.1 / pp.71-91 / 2018
  • In recent years, UAVs (unmanned aerial vehicles) have been actively applied to seed sowing and pest control in agriculture. In this study, a UAS (unmanned aerial system) is constructed by combining image sensors of various wavelength bands with SfM (Structure from Motion) based image analysis on a UAV. The utility of UAS-based vegetation surveying was investigated and its applicability to precision farming examined. For this purpose, a UAS was built on a low-cost UAV from a VIS_RGB (visible red, green, and blue) image sensor, a modified BG_NIR (blue green_near infrared) image sensor, and a TIR (thermal infrared) sensor with a wide bandwidth of 7.5 μm to 13.5 μm. In addition, a total of ten vegetation indices were selected to investigate the chlorophyll, nitrogen, and water contents of plants using the visible, near-infrared, and thermal-infrared image sensors. The images of each wavelength band over the test area were analyzed, and the distributions of the vegetation indices were compared with the previously surveyed vegetation and ground-cover status. The ability to detect vegetation state using images obtained from multiple image sensors mounted on a low-cost UAV was thus investigated. Since a UAS carrying VIS_RGB, BG_NIR, and TIR image sensors on a low-cost UAV has proven more economical and efficient than previous vegetation survey methods that depend on satellite and aerial images, it is expected to be used in areas such as precision agriculture and water and forest research.
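
The abstract does not list its ten vegetation indices, but the best-known index of this kind, NDVI, computed from red and near-infrared reflectance, serves as a minimal sketch. The sample reflectance values below are made up for illustration.

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index: (NIR - red) / (NIR + red)."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)   # eps guards against division by zero

red_band = np.array([[0.10, 0.08], [0.30, 0.25]])   # red reflectance
nir_band = np.array([[0.50, 0.45], [0.32, 0.27]])   # near-infrared reflectance
index = ndvi(nir_band, red_band)   # high values -> dense, healthy vegetation
```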

GAP Estimation on Arterial Road via Vehicle Labeling of Drone Image (드론 영상의 차량 레이블링을 통한 간선도로 차간간격(GAP) 산정)

  • Jin, Yu-Jin; Bae, Sang-Hoon
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.16 no.6 / pp.90-100 / 2017
  • The purpose of this study is to detect and label vehicles in drone images and to estimate vehicle gaps on arterial roads, as a way to overcome the limitations of existing point and section detection systems. To select the appropriate time zone, position, and altitude for acquiring the drone image data, the final image data were acquired by shooting under various conditions. Vehicles were detected by applying a Gaussian mixture model, image binarization, and morphology operations, among various image analysis techniques, and the detected vehicles were labeled by applying a Kalman filter. Analysis of the labeling rate confirmed a vehicle labeling rate of 65%, with 185 of 285 vehicles detected. The gap was calculated by converting pixel units to ground units, and the results were verified through comparison with Daum Maps. As a result, the gap error was less than 5 m, and the mean error was 1.67 m with the preceding vehicle and 1.1 m with the following vehicle. The gaps estimated in this study can be used to derive urban road density and as criteria for judging the level of service.
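
The pixel-to-ground conversion step can be sketched in a few lines: a pixel distance between two labeled vehicles is scaled by the image's ground sampling distance (GSD) to obtain a gap in meters. The GSD value, pixel coordinates, and function name are made-up examples, not values from the study.

```python
def pixel_gap_to_meters(lead_rear_px, follow_front_px, gsd_m_per_px):
    """Convert a pixel distance between two vehicles into a gap in meters."""
    return abs(lead_rear_px - follow_front_px) * gsd_m_per_px

# e.g. 120 px between the leading vehicle's rear bumper and the
# following vehicle's front bumper at 0.05 m/px
gap_m = pixel_gap_to_meters(540, 420, 0.05)
```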

A Path Travel Time Estimation Study on Expressways using TCS Link Travel Times (TCS 링크통행시간을 이용한 고속도로 경로통행시간 추정)

  • Lee, Hyeon-Seok; Jeon, Gyeong-Su
    • Journal of Korean Society of Transportation / v.27 no.5 / pp.209-221 / 2009
  • Travel time estimation under given traffic conditions is important for providing drivers with travel time prediction information, but the present expressway travel time estimation process cannot produce a reliable travel time. The objective of this study is to estimate the path travel time in through lanes between origin and destination tollgates on an expressway, as a prerequisite for offering reliable prediction information. Useful and abundant toll collection system (TCS) data were used. The path travel time is estimated by combining link travel times obtained through a preprocessing step. Where TCS data are sparse, the TCS travel times of previous intervals are referenced using linear interpolation after analyzing the growth pattern of the travel time. Where TCS data are absent over a long period, a dynamic travel time is estimated using the VDS time-space diagram. The travel time estimated by the proposed model is statistically valid when compared to the travel time of vehicles traveling the path directly. The results show that the proposed model can be used to estimate a reliable travel time for a long-distance path in which there is a variety of travel times from the same departure time, the intervals are large, and the change in the representative travel time is irregular over short periods.
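
Filling short gaps in a link travel-time series by linear interpolation between the surrounding intervals, as the abstract describes for short-term missing data, can be sketched as follows. The series values and function name are illustrative assumptions.

```python
def interpolate_gap(times):
    """Fill None entries between known values by linear interpolation."""
    filled = list(times)
    for i, v in enumerate(filled):
        if v is None:
            lo = i - 1                      # nearest known value to the left
            hi = i + 1                      # scan right for the next known value
            while filled[hi] is None:
                hi += 1
            frac = (i - lo) / (hi - lo)     # fractional position inside the gap
            filled[i] = filled[lo] + frac * (filled[hi] - filled[lo])
    return filled

series = [300.0, None, None, 360.0]         # link travel times (s) per interval
filled = interpolate_gap(series)
```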

Technology Development for Non-Contact Interface of Multi-Region Classifier based on Context-Aware (상황 인식 기반 다중 영역 분류기 비접촉 인터페이스기술 개발)

  • Jin, Songguo; Rhee, Phill-Kyu
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.20 no.6 / pp.175-182 / 2020
  • Non-contact eye tracking is a nonintrusive human-computer interface that provides hands-free communication for people with severe disabilities. It is also expected to play an important role in non-contact systems prompted by the recent coronavirus (COVID-19) pandemic. This paper proposes a novel approach to an eye mouse using an eye tracking method based on a context-aware AdaBoost multi-region classifier and an ASSL algorithm. The conventional AdaBoost algorithm cannot provide sufficiently reliable performance in face tracking for eye cursor pointing estimation, because it cannot take advantage of the spatial context relations among facial features. We therefore propose an eye-region context based AdaBoost multiple classifier for efficient non-contact gaze tracking and mouse implementation. The proposed method detects, tracks, and aggregates various eye features to estimate the gaze and adjusts active and semi-supervised learning based on the on-screen cursor. The proposed system has been successfully employed in eye localization and can also be used to detect and track eye features. The system moves the computer cursor along the user's gaze, and the output is post-processed with Gaussian modeling to prevent shaking during real-time tracking with a Kalman filter. Target objects were generated at random positions, and the eye tracking performance was analyzed in real time according to Fitts' law. The utilization of non-contact interfaces of this kind is expected to increase.
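
The cursor-smoothing step can be illustrated with a one-dimensional constant-position Kalman filter applied to noisy gaze coordinates, so the cursor stops shaking. This is a generic textbook filter, not the authors' implementation; the process/measurement noise parameters and sample coordinates are assumptions.

```python
def kalman_smooth(measurements, q=1e-3, r=0.5):
    """1-D Kalman filter: q = process noise, r = measurement noise."""
    x, p = measurements[0], 1.0       # initial state estimate and variance
    smoothed = [x]
    for z in measurements[1:]:
        p += q                        # predict: variance grows by process noise
        k = p / (p + r)               # Kalman gain
        x += k * (z - x)              # update: pull estimate toward measurement
        p *= (1 - k)                  # update: variance shrinks
        smoothed.append(x)
    return smoothed

noisy = [100.0, 104.0, 96.0, 101.0, 99.0]   # gaze x-coordinates in pixels
smooth = kalman_smooth(noisy)
```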

Generation of Time-Series Data for Multisource Satellite Imagery through Automated Satellite Image Collection (자동 위성영상 수집을 통한 다종 위성영상의 시계열 데이터 생성)

  • Yunji Nam; Sungwoo Jung; Taejung Kim; Sooahm Rhee
    • Korean Journal of Remote Sensing / v.39 no.5_4 / pp.1085-1095 / 2023
  • Time-series data generated from satellite data are crucial resources for change detection and monitoring across various fields. Existing research in time-series data generation primarily relies on single-image analysis to maintain data uniformity, with ongoing efforts to enhance spatial and temporal resolutions by utilizing diverse image sources. Despite the emphasized significance of time-series data, there is a notable absence of automated data collection and preprocessing for research purposes. In this paper, to address this limitation, we propose a system that automates the collection of satellite information in user-specified areas to generate time-series data. This research aims to collect data from various satellite sources in a specific region and convert them into time-series data, developing an automatic satellite image collection system for this purpose. By utilizing this system, users can collect and extract data for their specific regions of interest, making the data immediately usable. Experimental results have shown the feasibility of automatically acquiring freely available Landsat and Sentinel images from the web and incorporating manually inputted high-resolution satellite images. Comparisons between automatically collected and edited images based on high-resolution satellite data demonstrated minimal discrepancies, with no significant errors in the generated output.

A Quick-and-dirty Method for Detection of Ground Moving Targets in Single-Channel SAR Single-Look Complex (SLC) Images by Differentiation (미분을 이용한 단일채널 SAR SLC 영상 내 지상 이동물체의 탐지방법)

  • Won, Joong-Sun
    • Korean Journal of Remote Sensing / v.30 no.2 / pp.185-205 / 2014
  • SAR ground moving target indication (GMTI) has long been an important issue in advanced SAR applications. As the spatial resolution of spaceborne SAR systems has improved significantly in recent years, GMTI has become a very useful tool. Various GMTI techniques have been developed, particularly for multi-channel SAR systems. It remains problematic, however, to detect ground moving targets in single-channel SAR images, while access to high-resolution multi-channel spaceborne SAR systems is not practical. Once a ground moving target is detected, it is possible to retrieve the two-dimensional velocity of the target from single-channel spaceborne SAR with an accuracy of about 5% if it moves faster than 3 m/s. This paper presents a quick-and-dirty method for detecting ground moving targets in single-channel SAR single-look complex (SLC) images by differentiation. Since the signal power of the derivatives reflects the Doppler centroid and rate, differentiation is an efficient and effective way to detect non-stationary targets. The derivatives correlate well with velocities retrieved by a precise method, with a correlation coefficient R² of 0.62, which is good enough to detect ground moving targets. While the approach is theoretically straightforward, the effects of the residual Doppler rate must be removed before finalizing the ground moving target candidates, and the confidence level of the results depends largely on the efficiency and effectiveness of the residual Doppler rate removal. Application results using TerraSAR-X and truck-mounted corner reflectors validated the efficiency of the method. While the derivatives of moving targets remain easily detectable, the signal energy of stationary corner reflectors was suppressed by about 18.5 dB, which allows easy detection of ground targets moving faster than 8.8 km/h. The proposed method is applicable to any high-resolution single-channel SAR system, including KOMPSAT-5.
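
The core idea, that differentiation along azimuth emphasizes targets whose phase varies rapidly (a moving target's extra Doppler rate) relative to stationary clutter, can be sketched on a synthetic one-dimensional signal. The phase coefficients below are invented stand-ins, not real SAR parameters.

```python
import numpy as np

n = 256
az = np.arange(n)
stationary = np.exp(1j * 0.01 * az)                 # slowly varying phase
moving = np.exp(1j * (0.01 * az + 0.002 * az**2))   # extra quadratic (Doppler-rate) term

def derivative_power(signal):
    """Mean power of the first difference along azimuth."""
    d = np.diff(signal)
    return float(np.mean(np.abs(d) ** 2))

p_stat = derivative_power(stationary)   # small: phase changes slowly
p_move = derivative_power(moving)       # large: phase changes quickly
```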