• Title/Summary/Keyword: Consistency Algorithm

Dust/smoke detection by multi-spectral satellite data over land of East Asia (동아시아 지역의 육상에서 다중채널 위성자료에 의한 황사/연무 탐지)

  • Park, Su-Hyeun;Choo, Gyo-Hwang;Lee, Kyu-Tae;Shin, Hee-Woo;Kim, Dong-Chul;Jeong, Myeong-Jae
    • Korean Journal of Remote Sensing / v.33 no.3 / pp.257-266 / 2017
  • In this study, a dust/smoke detection algorithm was developed as a multi-spectral satellite remote sensing method using Moderate Resolution Imaging Spectroradiometer (MODIS) Level 1B (L1B) data, and the results were validated against RGB composite images of the red (R; band 1), green (G; band 4), and blue (B; band 3) channels from MODIS L1B data and the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO) Vertical Feature Mask (VFM) product. For the daytime cases of March 30, 2007 and April 27, 2012, the consistencies between the dust/smoke detected by this algorithm and the verification data were approximately 56.4% and 72.0%, respectively; for the nighttime case of April 27, 2012, the consistency was 40.5%. Although these results cover only a limited number of cases due to the spatiotemporal matching requirements of the MODIS and CALIPSO satellites, with further research they could be applied to aerosol detection by Korea's next-generation geostationary satellites.
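The consistency values quoted above are pixel-agreement ratios between the algorithm's dust/smoke mask and the CALIPSO VFM reference along spatiotemporally matched tracks. A minimal sketch of such an agreement computation (the function name and data are illustrative, not from the paper):

```python
def consistency(detected, reference):
    """Fraction of reference dust/smoke pixels that the algorithm also
    flags. detected, reference: lists of booleans (True = dust/smoke)
    for spatiotemporally matched pixels along the CALIPSO track."""
    hits = sum(1 for d, r in zip(detected, reference) if r and d)
    total = sum(1 for r in reference if r)
    return hits / total if total else 0.0

# toy example: 3 of 4 reference dust pixels detected -> 75% consistency
detected  = [True, True, False, True, False]
reference = [True, True, True,  True, False]
print(round(consistency(detected, reference) * 100, 1))  # 75.0
```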

A System Recovery using Hyper-Ledger Fabric BlockChain (하이퍼레저 패브릭 블록체인을 활용한 시스템 복구 기법)

  • Bae, Su-Hwan;Cho, Sun-Ok;Shin, Yong-Tae
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.12 no.2 / pp.155-161 / 2019
  • Currently, numerous companies and institutions provide services over the Internet and establish and operate information systems to manage them efficiently and reliably. An information system can lose the ability to provide normal services due to a disaster or failure, and organizations prepare for this with disaster recovery systems. However, existing disaster recovery systems cannot perform normal recovery if the files used for system recovery are corrupted. In this paper, we propose a system that verifies the integrity of system recovery files and proceeds with recovery using a Hyperledger Fabric blockchain. The PBFT consensus algorithm is used to generate blocks and is performed by the leader node of the blockchain network. In the event of a failure, the integrity of the recovery file is verified by comparing its hash value with the hash value stored in the blockchain, and recovery then proceeds. To evaluate the proposed technique, a comparative analysis with existing system recovery techniques was conducted on three items: data consistency, data retention, and recovery file integrity. In addition, the amount of traffic generated by the proposed technique was analyzed to determine whether it is practically applicable.
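The integrity check described above reduces to comparing the recovery file's hash with the hash recorded in the ledger. A minimal sketch (function names are illustrative, and the on-chain lookup is stubbed as a stored value rather than an actual Fabric query):

```python
import hashlib

def file_hash(data: bytes) -> str:
    """SHA-256 digest of a recovery file's contents."""
    return hashlib.sha256(data).hexdigest()

def verify_recovery_file(data: bytes, onchain_hash: str) -> bool:
    """Integrity check: recompute the hash and compare it with the value
    stored in the blockchain ledger; recovery proceeds only on a match."""
    return file_hash(data) == onchain_hash

backup = b"system recovery image v1"
ledger_hash = file_hash(backup)                  # recorded at backup time
assert verify_recovery_file(backup, ledger_hash)             # intact
assert not verify_recovery_file(b"corrupted", ledger_hash)   # tampered
```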

Location Tracking and Visualization of Dynamic Objects using CCTV Images (CCTV 영상을 활용한 동적 객체의 위치 추적 및 시각화 방안)

  • Park, Sang-Jin;Cho, Kuk;Im, Junhyuck;Kim, Minchan
    • Journal of Cadastre & Land InformatiX / v.51 no.1 / pp.53-65 / 2021
  • C-ITS (Cooperative Intelligent Transport System), which pursues traffic safety and convenience, uses various sensors to generate traffic information; it is therefore necessary to improve sensor-related technology to increase the efficiency and reliability of that information. Recently, the role of CCTV in collecting video information has become more important due to advances in AI (Artificial Intelligence) technology. In this study, we propose to identify and track dynamic objects (vehicles, people, etc.) in CCTV images and to analyze and provide information about them in various environments. To this end, we identified and tracked dynamic objects using the YOLOv4 and DeepSORT algorithms, established real-time multi-user support servers based on Kafka, defined transformation matrices between image and spatial coordinate systems, and visualized the dynamic objects on a map. In addition, a positional consistency evaluation was performed to confirm the scheme's usefulness. Through the proposed scheme, we confirmed that CCTVs can go beyond a simple monitoring role and serve as important road-infrastructure sensors that provide relevant information by analyzing road conditions in real time.
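The transformation between image pixels and spatial (map) coordinates mentioned above is typically modeled as a planar homography in homogeneous coordinates. A minimal sketch of applying such a matrix (the matrix values are illustrative, not the paper's calibration):

```python
def apply_homography(H, u, v):
    """Map an image pixel (u, v) to spatial coordinates (x, y) using a
    3x3 transformation matrix H in homogeneous coordinates."""
    x = H[0][0] * u + H[0][1] * v + H[0][2]
    y = H[1][0] * u + H[1][1] * v + H[1][2]
    w = H[2][0] * u + H[2][1] * v + H[2][2]
    return x / w, y / w    # divide out the homogeneous scale

# identity-plus-translation example: shift by (100, 50) map units
H = [[1, 0, 100],
     [0, 1, 50],
     [0, 0, 1]]
print(apply_homography(H, 10, 20))  # (110.0, 70.0)
```

In practice such a matrix is estimated from ground control points (e.g. with a direct linear transform) rather than written by hand.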

Training a semantic segmentation model for cracks in the concrete lining of tunnel (터널 콘크리트 라이닝 균열 분석을 위한 의미론적 분할 모델 학습)

  • Ham, Sangwoo;Bae, Soohyeon;Kim, Hwiyoung;Lee, Impyeong;Lee, Gyu-Phil;Kim, Donggyou
    • Journal of Korean Tunnelling and Underground Space Association / v.23 no.6 / pp.549-558 / 2021
  • To keep infrastructure such as tunnels and underground facilities safe, cracks in the concrete lining of tunnels should be detected through regular inspections. Since regular inspections are carried out manually using maintenance lift vehicles, they cause traffic jams, expose workers to dangerous conditions, and degrade the consistency of crack inspection data. This study provides a methodology to automatically extract cracks from tunnel concrete lining images generated by an existing tunnel image acquisition system. Specifically, we train a deep-learning-based semantic segmentation model on an open dataset and evaluate its performance on data from the existing tunnel image acquisition system. In particular, we compare model performance when training on the entire public dataset, on the subset of the public dataset related to tunnel surfaces, and on that tunnel-related subset augmented with negative examples. The model trained on the tunnel-related subset with negative examples achieved the best performance. We expect this research to inform efficient model-training strategies for crack detection.

LCL Cargo Loading Algorithm Considering Cargo Characteristics and Load Space (화물의 특성 및 적재 공간을 고려한 LCL 화물 적재 알고리즘)

  • Daesan Park;Sangmin Jo;Dongyun Park;Yongjae Lee;Dohee Kim;Hyerim Bae
    • Journal of Intelligence and Information Systems / v.29 no.4 / pp.375-393 / 2023
  • The demand for Less than Container Load (LCL) shipping has been rising due to the growing need for small-scale production of diverse items and the expansion of the e-commerce market. Consequently, more international freight forwarders now handle LCL cargo. Given the variety of cargo sizes and the diverse interests of stakeholders, there is a growing need for a container loading algorithm that optimizes space efficiency. However, because loading plans are currently established in advance and delivered to the Container Freight Station (CFS), variables that only become apparent on site cannot be reflected in the plan. Therefore, this study proposes a container loading methodology that makes the loading plan easy to modify at the industrial site. The site's requirements are reflected by allowing the characteristics of the cargo and the status of the container to be considered, and the three-dimensional space is decomposed into two-dimensional planar layers to reduce the time complexity of establishing a loading plan. The proposed methodology can increase the consistency of loading quality and contribute to the automation of loading planning.
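The abstract does not detail the paper's exact layer heuristic; as a generic illustration of reducing a 3D loading problem to 2D planar layers, here is a simple shelf-packing sketch for one layer (all names and dimensions are illustrative):

```python
def shelf_pack(cargo, container_w, container_l):
    """Greedy shelf packing of one planar layer: place box footprints
    left to right in rows ('shelves'), opening a new row when the
    current one is full. cargo is a list of (width, length) footprints.
    Returns placements as (x, y, w, l), or None if the layer overflows."""
    placements, x, y, row_depth = [], 0, 0, 0
    for w, l in sorted(cargo, key=lambda c: -c[1]):  # deepest boxes first
        if x + w > container_w:          # row full: start a new row
            x, y = 0, y + row_depth
            row_depth = 0
        if y + l > container_l:
            return None                  # does not fit in this layer
        placements.append((x, y, w, l))
        x += w
        row_depth = max(row_depth, l)
    return placements

layer = shelf_pack([(4, 3), (4, 2), (3, 3), (2, 2)],
                   container_w=8, container_l=6)
```

Stacking such layers back-to-front then recovers a full 3D plan; real LCL planners would add constraints such as weight, fragility, and unloading order on top of this geometric core.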

An Implementation of OTB Extension to Produce TOA and TOC Reflectance of LANDSAT-8 OLI Images and Its Product Verification Using RadCalNet RVUS Data (Landsat-8 OLI 영상정보의 대기 및 지표반사도 산출을 위한 OTB Extension 구현과 RadCalNet RVUS 자료를 이용한 성과검증)

  • Kim, Kwangseob;Lee, Kiwon
    • Korean Journal of Remote Sensing / v.37 no.3 / pp.449-461 / 2021
  • Analysis Ready Data (ARD) for optical satellite images is a pre-processed product that accounts for the spectral characteristics and viewing parameters of each sensor. Atmospheric correction is one of the fundamental and complicated steps involved, producing Top-of-Atmosphere (TOA) and Top-of-Canopy (TOC) reflectance from multi-spectral image sets. Most remote sensing software provides algorithms or processing schemes dedicated to these corrections for the Landsat-8 OLI sensor, and Google Earth Engine (GEE) provides direct cloud access to USGS-based ARD (USGS-ARD) Landsat reflectance products. We implemented an atmospheric correction extension for the Orfeo ToolBox (OTB), an open-source remote sensing package for manipulating and analyzing high-resolution satellite images; this is the first such tool, as OTB has not previously provided calibration modules for any Landsat sensor. Using this extension, we performed absolute atmospheric correction on Landsat-8 OLI images of Railroad Valley, United States (RVUS) and validated the resulting reflectance products against RVUS reflectance data sets from the RadCalNet portal. The reflectance products from the OTB extension differed from the RadCalNet RVUS data by less than 5%. In addition, we performed a comparative analysis with reflectance products from other open-source tools, namely the QGIS Semi-Automatic Classification Plugin and SAGA, as well as with USGS-ARD products. Compared to the other two open-source tools, the reflectance products from the OTB extension showed high consistency with USGS-ARD, within an acceptable level of the RadCalNet RVUS measurement range. This study verified the atmospheric calibration processor in the OTB extension and demonstrated its potential applicability to other satellite sensors, such as the Compact Advanced Satellite (CAS)-500 or new optical satellites.
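The TOA reflectance step such a tool implements for Landsat-8 OLI follows the standard USGS conversion from quantized DN to reflectance with a solar elevation correction. A minimal sketch (the metadata constants are typical values; in practice they are read from the scene's MTL file):

```python
import math

# MTL metadata constants (typical Landsat-8 OLI values; read
# REFLECTANCE_MULT_BAND_x / REFLECTANCE_ADD_BAND_x from the MTL file
# of the actual scene in practice)
M_RHO = 2.0e-5
A_RHO = -0.1

def toa_reflectance(q_cal, sun_elevation_deg):
    """TOA reflectance from a quantized DN (Q_cal), corrected for the
    solar elevation angle, following the USGS Landsat-8 formula
    rho = (M_rho * Q_cal + A_rho) / sin(theta_SE)."""
    rho_prime = M_RHO * q_cal + A_RHO
    return rho_prime / math.sin(math.radians(sun_elevation_deg))

rho = toa_reflectance(q_cal=20000, sun_elevation_deg=60.0)
```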

Cross-Calibration of GOCI-II in Near-Infrared Band with GOCI (GOCI를 이용한 GOCI-II 근적외 밴드 교차보정)

  • Eunkyung Lee;Sujung Bae;Jae-Hyun Ahn;Kyeong-Sang Lee
    • Korean Journal of Remote Sensing / v.39 no.6_2 / pp.1553-1563 / 2023
  • The Geostationary Ocean Color Imager-II (GOCI-II) is a satellite designed for ocean color observation, covering the Northeast Asian region and the full disk of the Earth. It commenced operations in 2020, succeeding its predecessor, GOCI, which had been active for the previous decade. In this study, we aimed to enhance the atmospheric correction algorithm, a critical step in producing satellite-based ocean color data, by cross-calibrating the GOCI-II near-infrared (NIR) bands against the GOCI NIR bands. To achieve this, we conducted a cross-calibration study on the top-of-atmosphere (TOA) radiance of the NIR bands and derived vicarious calibration gains for the two NIR bands (745 and 865 nm). Applying these gains decreased the offset between the two sensors and brought their ratio close to 1, showing that the consistency of the two sensors improved. The Rayleigh-corrected reflectance at 745 nm and 865 nm increased by 5.62% and 9.52%, respectively. This changes the ratio of the Rayleigh-corrected reflectances at these wavelengths, potentially affecting the atmospheric correction results across all spectral bands, particularly during the aerosol reflectance correction step of the atmospheric correction algorithm. Due to the limited overlap in the operational periods of the GOCI and GOCI-II satellites, we used data from March 2021 only; nevertheless, we anticipate further enhancements through ongoing cross-calibration research with other satellites. It is also essential to apply the vicarious calibration gains derived here for the NIR bands to vicarious calibration of the visible channels and to assess the impact on the accuracy of the ocean color products.
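Applying a vicarious gain to a band's TOA radiance is a simple per-band scaling. A minimal sketch (the gain values below are illustrative placeholders, not the gains derived in the paper):

```python
# vicarious gains per NIR band in nm (placeholder values, not the
# paper's results)
VICARIOUS_GAIN = {745: 0.97, 865: 1.02}

def apply_gain(l_toa, band_nm):
    """Scale the TOA radiance of a NIR band by its vicarious gain so
    that the two sensors' radiances agree after cross-calibration."""
    return VICARIOUS_GAIN[band_nm] * l_toa

calibrated_745 = apply_gain(10.0, 745)
calibrated_865 = apply_gain(10.0, 865)
```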

The Estimation Model of an Origin-Destination Matrix from Traffic Counts Using a Conjugate Gradient Method (Conjugate Gradient 기법을 이용한 관측교통량 기반 기종점 OD행렬 추정 모형 개발)

  • Lee, Heon-Ju;Lee, Seung-Jae
    • Journal of Korean Society of Transportation / v.22 no.1 s.72 / pp.43-62 / 2004
  • Conventionally, origin-destination (O-D) matrices have been estimated by expanding sampled data obtained from roadside interviews and household travel surveys. In the survey process, larger sample sizes impose greater cost and time burdens for error testing. Estimating the O-D matrix from observed traffic count data has been applied to overcome this limitation, and the gradient model is one of the most popular techniques. However, although the gradient model can minimize the error between observed and estimated traffic volumes, the structure of the prior O-D matrix cannot be maintained exactly; that is, unwanted changes may occur. For this reason, this study adopts a conjugate gradient algorithm that estimates the O-D matrix while preserving the structure of the prior O-D matrix, with the model developed to minimize the error between observed and estimated traffic volumes. We validate the model on a simple network and then apply it to a large-scale network. The tests yield several findings. First, regarding consistency, the upper level of the model clearly plays a key role through its internal relationship with the lower level. Second, regarding estimation precision, the estimation error lies within the tolerance interval. Furthermore, the structure of the estimated O-D matrix does not change much and preserves the attributes of the prior matrix.
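The minimization at the heart of such a model can be carried out with the standard conjugate gradient iteration. A generic sketch on a small symmetric positive-definite system (this is the textbook CG update, not the paper's exact bi-level formulation):

```python
def conjugate_gradient(A, b, iters=50, tol=1e-10):
    """Conjugate gradient for A x = b with A symmetric positive
    definite: the core update used when minimizing a quadratic
    objective such as the observed-vs-estimated link-flow error."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                         # residual b - A x (x starts at 0)
    p = r[:]                         # initial search direction
    rs_old = sum(ri * ri for ri in r)
    for _ in range(iters):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs_old / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:             # residual small enough: converged
            break
        p = [r[i] + (rs_new / rs_old) * p[i] for i in range(n)]
        rs_old = rs_new
    return x

# 2x2 SPD system whose exact solution is x = [1, 2]
A = [[4.0, 1.0], [1.0, 3.0]]
b = [6.0, 7.0]
x = conjugate_gradient(A, b)
```

For an n-variable quadratic, CG converges in at most n iterations in exact arithmetic, which is the efficiency argument behind using it on large networks.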

Analysis of Genetics Problem-Solving Processes of High School Students with Different Learning Approaches (학습접근방식에 따른 고등학생들의 유전 문제 해결 과정 분석)

  • Lee, Shinyoung;Byun, Taejin
    • Journal of The Korean Association For Science Education / v.40 no.4 / pp.385-398 / 2020
  • This study examines the genetics problem-solving processes of high school students with different learning approaches. Two second-year high school students participated in a task requiring them to solve a complicated pedigree problem. The participants had similar academic achievement in life science, but one had a deep learning approach while the other had a surface learning approach. To analyze the students' problem-solving processes in depth, each student's problem solving was video-recorded, and each student conducted a think-aloud interview after solving the problem. Although the students made similar errors on their first attempt, they showed different problem-solving processes on their last. Student A, who had a deep learning approach, voluntarily solved the problem three times and, in the last trial, demonstrated correct conceptual framing of the three constraints using rule-based reasoning. Student A monitored the consistency between the data and her own pedigree and reflected on the problem-solving process in the check phase of the last trial; her third-trial process resembled a successful problem-solving algorithm. Student B, who had a surface learning approach, involuntarily repeated the problem twice and, because of her answer-seeking, goal-oriented attitude, focused on and used only part of the data. Student B showed incorrect conceptual framing based on memory-bank or arbitrary reasoning and maintained that incorrect framing of the constraints across both problem-solving processes. These findings can help in understanding the problem-solving processes of students with different learning approaches, allowing teachers to better support students who have difficulty with genetics problems.

The Understanding and Application of Noise Reduction Software in Static Images (정적 영상에서 Noise Reduction Software의 이해와 적용)

  • Lee, Hyung-Jin;Song, Ho-Jun;Seung, Jong-Min;Choi, Jin-Wook;Kim, Jin-Eui;Kim, Hyun-Joo
    • The Korean Journal of Nuclear Medicine Technology / v.14 no.1 / pp.54-60 / 2010
  • Purpose: Nuclear medicine manufacturers provide various software packages that shorten imaging time using their own image processing techniques, such as UltraSPECT, ASTONISH, Flash3D, Evolution, and nSPEED. Seoul National University Hospital has introduced software from Siemens and Philips, but the algorithmic differences between the two packages were difficult to understand. The purpose of this study was therefore to characterize the difference between the two packages on planar images and to investigate whether they can be applied to images produced with high-energy isotopes. Materials and Methods: First, a phantom study was performed to understand the difference between the packages in static studies. Images with various count levels were acquired and analyzed quantitatively after applying PIXON (Siemens) and ASTONISH (Philips), respectively. We then applied the packages to applicable static studies and examined their merits and demerits, and also applied them to images produced with high-energy isotopes. Finally, a blind test (excluding the phantom images) was conducted by nuclear medicine physicians. Results: In the FWHM test using a capillary source, there was nearly no difference between pre- and post-processing images with PIXON, whereas ASTONISH improved. However, both standard deviation (SD) and variance decreased for PIXON while they increased sharply for ASTONISH. In the background variability comparison using the IEC phantom, PIXON decreased overall while ASTONISH increased somewhat. The contrast ratio of each sphere increased for both methods. Regarding image scale, the window width increased four to five times after processing with PIXON, while ASTONISH showed nearly no difference. From the phantom test analysis, ASTONISH appears applicable to studies that need quantitative analysis or high contrast, and PIXON to studies with insufficient counts or long acquisition times.
Conclusion: The quantitative values used in routine analysis generally improved after applying the two software packages. However, it seems hard to maintain consistency across all nuclear medicine studies, because the resulting images differ due to the characteristics of the algorithms rather than differences between gamma cameras. It is also hard to expect high image quality with time-shortening methods such as whole-body scans. Nevertheless, the packages can be applied to static studies in consideration of each algorithm's characteristics, and changes in image quality can be expected when applying them to high-energy isotope images.
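The background variability and contrast ratio compared in the phantom tests are conventionally computed as simple ROI statistics. A minimal sketch using the standard IEC-style definitions (the ROI values below are illustrative, not measurements from the study):

```python
from statistics import mean, stdev

def background_variability(bg_counts):
    """Coefficient of variation of background ROI counts (IEC body
    phantom convention): a lower value means a smoother background."""
    return stdev(bg_counts) / mean(bg_counts)

def contrast_ratio(sphere_mean, bg_mean):
    """Relative contrast of a hot sphere against the background ROI."""
    return (sphere_mean - bg_mean) / bg_mean

bg = [100, 104, 96, 102, 98]          # toy background ROI counts
bv = background_variability(bg)       # ~0.032, i.e. about 3.2%
cr = contrast_ratio(200, mean(bg))    # sphere twice the background
```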
