• Title/Summary/Keyword: 실시간 전처리

Search Result 344

Water Segmentation Based on Morphologic and Edge-enhanced U-Net Using Sentinel-1 SAR Images (형태학적 연산과 경계추출 학습이 강화된 U-Net을 활용한 Sentinel-1 영상 기반 수체탐지)

  • Kim, Hwisong; Kim, Duk-jin; Kim, Junwoo
    • Korean Journal of Remote Sensing / v.38 no.5_2 / pp.793-810 / 2022
  • Synthetic Aperture Radar (SAR) is considered suitable for near real-time inundation monitoring. The distinctly different intensity between water and land makes it adequate for waterbody detection, but the intrinsic speckle noise and variable intensity of SAR images decrease the accuracy of waterbody detection. In this study, we suggest two modules, named the 'morphology module' and the 'edge-enhanced module', which are combinations of pooling layers and convolutional layers that improve the accuracy of waterbody detection. The morphology module is composed of min-pooling and max-pooling layers, which produces the effect of a morphological transformation. The edge-enhanced module is composed of convolution layers whose weights are fixed to those of a traditional edge detection algorithm. After comparing the accuracy of various versions of each module on U-Net, we found that the optimal combination was a morphology module of min-pooling followed by successive min-pooling and max-pooling layers, together with an edge-enhanced module using the Scharr filter, both fed as inputs to conv9. This morphologic and edge-enhanced U-Net improved the F1-score by 9.81% compared to the original U-Net. Qualitative inspection showed that our model is capable of detecting small waterbodies and detailed water edges, which are the distinct advancements of the model presented in this research compared to the original U-Net.
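To make the two modules concrete, the following is a minimal PyTorch sketch of a morphology module built from min-/max-pooling (min-pooling implemented as -maxpool(-x)) and an edge-enhanced module whose convolution weights are fixed to the Scharr operator. Kernel sizes, the single-channel input, and the way the outputs are concatenated are illustrative assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MorphologyModule(nn.Module):
    """Approximate grayscale erosion/dilation with pooling layers (illustrative sketch)."""
    def __init__(self, kernel_size=3):
        super().__init__()
        self.pool = nn.MaxPool2d(kernel_size, stride=1, padding=kernel_size // 2)

    def forward(self, x):
        dilation = self.pool(x)              # max-pooling ~ morphological dilation
        erosion = -self.pool(-x)             # min-pooling ~ morphological erosion
        opening = self.pool(-self.pool(-x))  # erosion followed by dilation
        return torch.cat([erosion, dilation, opening], dim=1)

class ScharrEdgeModule(nn.Module):
    """Fixed-weight convolution with the Scharr operator (weights are not learned)."""
    def __init__(self):
        super().__init__()
        gx = torch.tensor([[3., 0., -3.], [10., 0., -10.], [3., 0., -3.]])
        gy = gx.t()
        self.register_buffer("kernel", torch.stack([gx, gy]).unsqueeze(1))  # (2, 1, 3, 3)

    def forward(self, x):
        edges = F.conv2d(x, self.kernel, padding=1)
        return torch.sqrt(edges[:, :1] ** 2 + edges[:, 1:] ** 2 + 1e-8)

# Example: feed a single-channel SAR patch through both modules and concatenate the
# results with the original image, e.g. as an extra input to a U-Net stage.
x = torch.randn(1, 1, 64, 64)
features = torch.cat([x, MorphologyModule()(x), ScharrEdgeModule()(x)], dim=1)
```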

Log Collection Method for Efficient Management of Systems using Heterogeneous Network Devices (이기종 네트워크 장치를 사용하는 시스템의 효율적인 관리를 위한 로그 수집 방법)

  • Jea-Ho Yang; Younggon Kim
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.23 no.3 / pp.119-125 / 2023
  • As IT infrastructure operations have advanced, methods for managing systems have become widely adopted. Recently, research has focused on improving system management using Syslog. However, utilizing log data collected through these methods presents challenges, as logs are extracted in various formats that require expert analysis. This paper proposes a system that uses edge computing to distribute the collection of Syslog data and preprocesses duplicate data before storing it in a central database. Additionally, the system constructs a data dictionary to classify and count data in real time, restricting the transmission of already registered data to the central database. This approach maintains the predefined patterns in the data dictionary, controls duplicate and temporally duplicated data, and enables the storage of refined data in the central database, thereby securing fundamental data for big data analysis. The proposed algorithms and procedures are demonstrated through simulations and examples. Real syslog data, including extracted examples, is used to accurately extract the necessary information from log data and to verify that the classification and storage processes execute successfully. This system can serve as an efficient solution for collecting and managing log data in edge environments, offering potential benefits for technology diffusion.
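The preprocessing idea, matching incoming syslog lines against a data dictionary at the edge, counting registered patterns locally, and forwarding only unregistered messages, can be sketched as follows. The pattern names, regular expressions, and sample log lines are illustrative assumptions, not the paper's actual dictionary.

```python
import re
from collections import Counter

class EdgeLogFilter:
    def __init__(self):
        # "Data dictionary": known message patterns, each classified and counted locally.
        self.dictionary = {
            "link_down": re.compile(r"%LINK-\d-UPDOWN.*down", re.I),
            "auth_fail": re.compile(r"authentication fail", re.I),
        }
        self.counts = Counter()

    def process(self, line: str):
        """Return None for registered/duplicate messages, or the line to forward."""
        for name, pattern in self.dictionary.items():
            if pattern.search(line):
                self.counts[name] += 1   # count locally instead of forwarding
                return None
        return line                      # unregistered message: send to the central DB

edge = EdgeLogFilter()
for raw in ["<189>Oct  1 12:00:01 sw1 %LINK-3-UPDOWN: Interface Gi0/1, changed state to down",
            "<86>Oct  1 12:00:02 fw1 sshd[311]: authentication failure for admin",
            "<134>Oct  1 12:00:03 rt1 new unseen event"]:
    forwarded = edge.process(raw)
    if forwarded:
        print("forward to central DB:", forwarded)
print("local counts:", dict(edge.counts))
```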

Improvement of membrane operation for wastewater reuse (하수재이용 막여과 운영 효율 향상)

  • Chang, Dong Eil; Kim, Jae Hun; Lee, Sang Soo; Kim, Kyung Taek; Han, Bong Suk; Ha, Geum Yul
    • Proceedings of the Korea Water Resources Association Conference / 2021.06a / pp.327-327 / 2021
  • In wastewater reuse, the pretreatment membrane filtration process has been attracting attention because its complete solid-liquid separation can reduce damage to the downstream reverse osmosis membranes. In general, the membrane filtration process is operated according to a cycle provided by the membrane manufacturer, such as filtration -> physical cleaning -> refilling -> filtration, and maintenance cleaning is likewise performed on a fixed schedule of once per day or once per week. This operating method cannot respond adequately to feed water quality that changes from moment to moment; in the long term, it reduces production due to membrane fouling and shortens the chemical cleaning interval required to remove that fouling, decreasing overall production. According to Raffine (2012), reversible fouling is not greatly affected by an increase in flux, whereas irreversible fouling increases sharply with flux. Therefore, to increase the amount of treated water produced per limited membrane area, it is necessary to reduce irreversible fouling; to this end, periodic chemically enhanced backwashing (CEB) has been introduced and efficient maintenance cleaning methods are being studied. We installed a 25 m3/day membrane filtration wastewater reuse pilot plant in the I water reclamation center in Ilsan and evaluated the membrane filtration performance using the IntelliFluxControl (IFCr) software developed by company W to improve the operating efficiency of the membrane filtration wastewater reuse process. IFCr is operating software that increases the efficiency of membrane operation by varying the backwash intensity and frequency and the degree of CEB application according to the degree of fouling caused by water quality that changes in real time. Using the effluent of the I water reclamation center as the membrane feed water at a reference flux of 40 LMH, the plant could be operated for 23.7 days without IFCr, whereas 50 days of continuous operation was possible with IFCr. In addition, about 50 m3 of backwash water was used during the operation period without IFCr, whereas only 24.1 m3, roughly half, was used with IFCr, increasing the recovery rate from 91% to 95%. These results show that the performance of the membrane can be maximized by departing from the conventional operating method suggested by membrane manufacturers; if future scale-up studies confirm its applicability to full-scale plants, this technology could simultaneously reduce the number of installed membrane modules and the maintenance cost of a facility.
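The adaptive principle behind IFCr, adjusting backwash strength and deciding when to apply CEB from the real-time fouling state, can be illustrated with a minimal rule-based sketch. This is not the IFCr software; the use of transmembrane pressure (TMP) rise as the fouling indicator and the threshold values are illustrative assumptions only.

```python
def backwash_decision(tmp_kpa: float, baseline_tmp_kpa: float,
                      mild_rise_kpa: float = 10.0, severe_rise_kpa: float = 30.0):
    """Return (backwash_intensity, use_ceb) from the current TMP rise over baseline."""
    rise = tmp_kpa - baseline_tmp_kpa
    if rise < mild_rise_kpa:
        return "normal", False   # little fouling: keep the standard backwash
    if rise < severe_rise_kpa:
        return "strong", False   # moderate fouling: stronger/longer backwash
    return "strong", True        # likely irreversible fouling: add a CEB step

print(backwash_decision(tmp_kpa=62.0, baseline_tmp_kpa=40.0))  # ('strong', False)
```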


A Study on the Real-time Recommendation Box Recommendation of Fulfillment Center Using Machine Learning (기계학습을 이용한 풀필먼트센터의 실시간 박스 추천에 관한 연구)

  • Dae-Wook Cha; Hui-Yeon Jo; Ji-Soo Han; Kwang-Sup Shin; Yun-Hong Min
    • The Journal of Bigdata / v.8 no.2 / pp.149-163 / 2023
  • Due to the continuous growth of the e-commerce market, the volume of orders that fulfillment centers have to process has increased, and diverse customer requirements have raised the complexity of order processing. Along with this trend, and with rising labor costs, the operational efficiency of fulfillment centers is becoming more important from a corporate management perspective. Using historical performance data as training data, this study focused on real-time box recommendation applicable to the packaging area during fulfillment center shipping. Four types of data, namely product information, order information, packaging information, and delivery information, were fed to the machine learning model through pre-processing and feature-engineering steps. Three product specification attributes, width, length, and height, were used as the input vector; their characteristics were extracted through a feature engineering process that converts the real-valued product dimensions into integer section indices. Comparing the performance of each model, we confirmed that the Gradient Boosting model achieved the highest accuracy, 95.2%, when the product specification information was converted into integers over 21 sections. This study proposes a machine learning model as a way to reduce the cost increases and the inefficiency in box packaging time caused by incorrect box selection in the fulfillment center, and also proposes a feature engineering method to effectively extract the characteristics of product specification information.
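A minimal sketch of the described feature engineering and model choice follows: the three product dimensions are discretized into integer section indices (the abstract reports 21 sections performing best) and passed to a gradient boosting classifier that predicts the box type. The synthetic data, bin edges, and class labels are illustrative assumptions, not the study's dataset.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
dims_mm = rng.uniform(50, 600, size=(2000, 3))                 # width, length, height in mm
box_type = np.digitize(dims_mm.max(axis=1), [150, 300, 450])   # stand-in for the true box label

def bin_dimensions(dims, n_sections=21, lo=0.0, hi=600.0):
    """Convert real-valued dimensions into integer section indices (feature engineering)."""
    edges = np.linspace(lo, hi, n_sections + 1)[1:-1]
    return np.digitize(dims, edges)

X = bin_dimensions(dims_mm)                                     # integer features in [0, 20]
X_train, X_test, y_train, y_test = train_test_split(X, box_type, random_state=0)

model = GradientBoostingClassifier().fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```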

Image Registration for PET/CT and CT Images with Particle Swarm Optimization (Particle Swarm Optimization을 이용한 PET/CT와 CT영상의 정합)

  • Lee, Hak-Jae; Kim, Yong-Kwon; Lee, Ki-Sung; Moon, Guk-Hyun; Joo, Sung-Kwan; Kim, Kyeong-Min; Cheon, Gi-Jeong; Choi, Jong-Hak; Kim, Chang-Kyun
    • Journal of radiological science and technology / v.32 no.2 / pp.195-203 / 2009
  • Image registration is a fundamental task in image processing used to match two or more images. It gives new information to radiologists by matching images from different modalities. The objective of this study is to develop a 2D image registration algorithm for PET/CT and CT images acquired by different systems at different times. We first matched the two CT images (one from a standalone CT and the other from a PET/CT), which contain abundant anatomical information. Then, we geometrically transformed the PET image according to the transformation parameters calculated in the previous step. We used an affine transform to match the target and reference images, and mutual information was explored as the similarity measure. A particle swarm algorithm optimized the process by finding the best-matched parameter set within a reasonable amount of time. The results show good agreement between the PET/CT and CT images. We expect the proposed algorithm can be used not only for PET/CT and CT image registration but also for other multi-modality imaging systems such as SPECT/CT, MRI/PET, and so on.
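The registration loop described above, an affine transform whose parameters are searched by particle swarm optimization so as to maximize mutual information, can be sketched as follows. The simplified four-parameter transform (rotation, scale, x/y shift), the search ranges, and the PSO coefficients are illustrative assumptions rather than the authors' exact settings.

```python
import numpy as np
from scipy.ndimage import affine_transform

def mutual_information(a, b, bins=32):
    """Mutual information from the joint intensity histogram of two images."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz]))

def apply_params(img, theta, scale, tx, ty):
    """Rotate/scale/translate an image about its center with scipy's affine_transform."""
    c, s = np.cos(theta), np.sin(theta)
    mat = np.array([[c, -s], [s, c]]) * scale
    center = np.array(img.shape) / 2
    offset = center - mat @ center + np.array([ty, tx])
    return affine_transform(img, mat, offset=offset, order=1)

def pso_register(reference, moving, n_particles=30, n_iter=50, seed=0):
    """Particle swarm search for the affine parameters maximizing mutual information."""
    rng = np.random.default_rng(seed)
    lo = np.array([-0.3, 0.8, -20, -20])   # lower bounds: theta, scale, tx, ty
    hi = np.array([0.3, 1.2, 20, 20])
    pos = rng.uniform(lo, hi, (n_particles, 4))
    vel = np.zeros_like(pos)
    fitness = lambda p: mutual_information(reference, apply_params(moving, *p))
    pbest = pos.copy()
    pbest_val = np.array([fitness(p) for p in pos])
    gbest = pbest[pbest_val.argmax()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, 1))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        vals = np.array([fitness(p) for p in pos])
        better = vals > pbest_val
        pbest[better], pbest_val[better] = pos[better], vals[better]
        gbest = pbest[pbest_val.argmax()].copy()
    return gbest   # best-found (theta, scale, tx, ty)
```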


Factors Affecting True Metabolizable Energy Determination of Poultry Feedingstuffs V. The Effect of Levels of Metabolizable Energy of Basal Diets on the Apparent Metabolizable Energy and True Metabolizable Energy Values of Corn and Soybean Meal (양계사료의 True Metabolizable Energy측정에 영향하는 요인에 관한 시험 V. 기초사료의 에너지수준이 옥수수와 대두박의 Apparent Metabolizable Energy 및 True Metabolizable Energy가에 미치는 영향)

  • 이영철
    • Korean Journal of Poultry Science / v.11 no.2 / pp.109-114 / 1984
  • The experiment was conducted to study the effect of the metabolizable energy level of basal diets on the apparent metabolizable energy (AME) and true metabolizable energy (TME) values of corn and soybean meal. The test materials, corn and soybean meal, were substituted into the basal diet at 50% and 30%, respectively. The excreta of fed and unfed birds were collected for 30 hours. The results obtained were as follows: 1. The AME values of corn were not significantly different among treatments (P>0.05) except for the 2,400 Kcal/13% treatment. The AME values of soybean meal differed significantly between 2,400 Kcal/13% and 2,800 Kcal/15% or 3,000 Kcal/16%, but did not differ between 2,400 Kcal/13% and 2,600 Kcal/14% (5% significance level). 2. The energy levels of the basal diets did not affect the AME values of corn and soybean meal (P>0.05) except for the 2,400 Kcal/13% treatment. This indicates that it is not necessary to change the energy level of the basal diet according to the test material. 3. The standard errors of soybean meal were higher than those of corn because of its lower level of substitution in the basal diet. 4. The TME values of corn showed significant differences (P<0.05) between the 2,400 Kcal/13% treatment and the other treatments, but those of soybean meal did not differ among treatments (P>0.05). 5. The significant reduction of the AME values of corn and soybean meal and the TME values of corn in the 2,400 Kcal/13% treatment could be explained by the effect of interaction among ingredients in the diet.


Real-time Color Recognition Based on Graphic Hardware Acceleration (그래픽 하드웨어 가속을 이용한 실시간 색상 인식)

  • Kim, Ku-Jin; Yoon, Ji-Young; Choi, Yoo-Joo
    • Journal of KIISE: Computing Practices and Letters / v.14 no.1 / pp.1-12 / 2008
  • In this paper, we present a real-time algorithm for recognizing vehicle color from indoor and outdoor vehicle images based on GPU (Graphics Processing Unit) acceleration. In the preprocessing step, we construct feature vectors from sample vehicle images of different colors. We then combine the feature vectors for each color and store them as a reference texture to be used on the GPU. Given an input vehicle image, the CPU constructs its feature vector, and the GPU compares it with the sample feature vectors in the reference texture. The similarities between the input feature vector and the sample feature vectors for each color are measured, and the result is transferred back to the CPU to recognize the vehicle color. The output colors are categorized into seven colors: three achromatic colors (black, silver, and white) and four chromatic colors (red, yellow, blue, and green). We construct feature vectors using histograms that consist of hue-saturation pairs and hue-intensity pairs, with a weight factor applied to the saturation values. Our algorithm achieves a successful color recognition rate of 94.67% by using a large number of sample images captured in various environments, by generating feature vectors that distinguish different colors, and by utilizing an appropriate likelihood function. We also accelerate color recognition by exploiting the parallel computation capability of the GPU. In the experiments, we constructed a reference texture from 7,168 sample images, with 1,024 images used for each color. The average time to generate a feature vector is 0.509 ms for a 150×113 resolution image. After the feature vector is constructed, the execution time for GPU-based color recognition is 2.316 ms on average, which is 5.47 times faster than when the algorithm is executed on the CPU. Our experiments were limited to vehicle images, but the algorithm can be extended to input images of general objects.
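A CPU-side sketch of the feature construction follows: a histogram over hue-saturation pairs, with extra weight given to saturation, serves as the color feature vector, and an input is assigned the color whose reference vector it matches best. The bin counts, weighting, and similarity function are illustrative assumptions; in the paper the hue-intensity histogram is also used and the comparison itself runs on the GPU.

```python
import numpy as np
import cv2

def hs_feature(bgr_image, h_bins=18, s_bins=8, saturation_weight=2.0):
    """Hue-saturation histogram, weighting each pixel by its saturation."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    h, s = hsv[..., 0].ravel(), hsv[..., 1].ravel()
    weights = 1.0 + saturation_weight * (s / 255.0)
    hist, _, _ = np.histogram2d(h, s, bins=[h_bins, s_bins],
                                range=[[0, 180], [0, 256]], weights=weights)
    vec = hist.ravel()
    return vec / (vec.sum() + 1e-12)

def classify_color(image, reference_vectors):
    """Pick the color whose reference vector is most similar (histogram intersection)."""
    feat = hs_feature(image)
    scores = {c: np.minimum(feat, ref).sum() for c, ref in reference_vectors.items()}
    return max(scores, key=scores.get)
```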

A Study on the Delay Analysis Methodologies in Construction of Korea High Speed Railway (경부고속철도 건설사업의 공기지연분석에 관한 연구)

  • Yun Sung-Min; Lee Sang-Hyun; Chae Myung-Jin; Han Seung-Heon
    • Proceedings of the Korean Institute of Construction Engineering and Management / 2004.11a / pp.250-255 / 2004
  • To analyze delays, the Seoul-Daegu line of the Korea High Speed Railway was divided into three sections, each analyzed independently according to its business characteristics. The analysis of the project delay causes was performed at both macro and micro scales; this analytic method was named the 'Macro-Micro Delay Analysis Method (MMDAM)'. The macro-scale analysis has three approaches: (1) scheduling, (2) structural characteristics, and (3) responsibility for project administrative work. The micro analysis also has three methodologies, (1) the As-Planned Method, (2) the As-Built Method, and (3) Modified Time Impact Analysis, used to analyze the most influential section, in which the largest delay occurred. Using the delay causes elicited from the above analysis, a questionnaire survey was carried out to analyze the influence of each project delay cause. The causes of delay were derived from two different aspects: (1) structural characteristics and (2) the responsibility of the people involved in the project. The causes identified from the aforementioned three sections are the delay factors of large-scale government-driven projects. Finally, the authors suggest a methodology for identifying project-delaying factors, analyze delay causes in both overseas and domestic high-speed railway construction cases, and derive benchmarks for future projects.


Trends Analysis on Research Articles of the Sharing Economy through a Meta Study Based on Big Data Analytics (빅데이터 분석 기반의 메타스터디를 통해 본 공유경제에 대한 학술연구 동향 분석)

  • Kim, Ki-youn
    • Journal of Internet Computing and Services / v.21 no.4 / pp.97-107 / 2020
  • This study conducts a comprehensive meta-study, from the perspective of content analysis, to explore trends in Korean academic research on the sharing economy using big data analytics. A comprehensive meta-analysis methodology can examine the entire body of research results, historically and as a whole, to illuminate the tendencies and properties of the overall research trend. Academic research related to the sharing economy first appeared in 2008, the year Professor Lawrence Lessig introduced the concept to the world, but research began in earnest in 2013; in particular, between 2016 and 2018, research output increased dramatically. To grasp the overall flow of domestic academic research trends, eight years of papers, from 2013 to the present, were selected for analysis, focusing on titles, keywords, and abstracts drawn from electronic journal databases. Big data analysis was performed in the order of cleaning, analysis, and visualization of the collected data to derive research trends and insights by year and type of literature. Python 3.7 and the Textom analysis tools were used for data preprocessing, text mining, and frequency analysis for keyword extraction; N-gram charts, centrality and social network analysis, and CONCOR clustering visualization were performed with UCINET6/NetDraw and the Textom program, and the keywords clustered into eight groups were used to derive the typologies of each research trend. The outcomes of this study will provide useful theoretical insights and guidelines for future studies.
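The kind of pipeline the abstract describes, keyword extraction, frequency analysis, a co-word network, and centrality computation, can be sketched briefly. The toy documents, tokenization, stop-word list, and the use of networkx in place of Textom/UCINET6 are illustrative assumptions.

```python
import itertools
import re
from collections import Counter
import networkx as nx

docs = [
    "sharing economy platform business model",
    "sharing economy accommodation trust platform",
    "collaborative consumption sharing economy regulation",
]

stopwords = {"the", "a", "of"}
tokenized = [[w for w in re.findall(r"[a-z]+", d.lower()) if w not in stopwords] for d in docs]

# Term frequency across the corpus (keyword extraction by frequency).
freq = Counter(w for doc in tokenized for w in set(doc))

# Co-word network: an edge between two keywords that appear in the same document.
G = nx.Graph()
for doc in tokenized:
    for u, v in itertools.combinations(sorted(set(doc)), 2):
        w = G.get_edge_data(u, v, {"weight": 0})["weight"] + 1
        G.add_edge(u, v, weight=w)

centrality = nx.degree_centrality(G)   # importance of each keyword in the co-word network
print(freq.most_common(5))
print(sorted(centrality, key=centrality.get, reverse=True)[:5])
```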

A Study on the Intellectual Structure of Metadata Research by Using Co-word Analysis (동시출현단어 분석에 기반한 메타데이터 분야의 지적구조에 관한 연구)

  • Choi, Ye-Jin; Chung, Yeon-Kyoung
    • Journal of the Korean Society for Information Management / v.33 no.3 / pp.63-83 / 2016
  • As the use of information resources produced in various media and forms has increased, metadata has become increasingly crucial as an information organization tool for describing those resources. The purposes of this study are to analyze and demonstrate the intellectual structure of the metadata field through co-word analysis. The data set was collected from journals registered in the Web of Science Core Collection citation database for the period from January 1, 1998 to July 8, 2016. Bibliographic data for 727 journal articles was collected using a topic-category search with the query word 'metadata'. Of these 727 articles, 410 articles with author keywords were selected, and after data preprocessing, 1,137 author keywords were extracted. Finally, a total of 37 final keywords with a frequency of more than six were selected for analysis. To demonstrate the intellectual structure of the metadata field, network analysis was conducted. As a result, 2 domains and 9 clusters were derived, the intellectual relations among keywords in the metadata field were visualized, and keywords with high global centrality and local centrality were proposed. Six clusters from the cluster analysis were shown in a multidimensional scaling map, and the knowledge structure was proposed based on the correlations among the keywords. The results of this study are expected to help readers understand the intellectual structure of the metadata field through visualization and to guide new approaches in metadata-related studies.
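The co-word analysis steps can be illustrated with a small sketch: author keywords per article are turned into a keyword co-occurrence matrix, converted to a dissimilarity matrix, and projected with multidimensional scaling so that related keywords land near each other. The toy keyword lists and the cosine-based dissimilarity are illustrative assumptions, not the study's data.

```python
import numpy as np
from sklearn.manifold import MDS

articles = [
    ["metadata", "dublin core", "interoperability"],
    ["metadata", "digital library", "dublin core"],
    ["linked data", "metadata", "semantic web"],
]
keywords = sorted({k for a in articles for k in a})
index = {k: i for i, k in enumerate(keywords)}

# Keyword co-occurrence matrix: C[i, j] = number of articles mentioning both keywords.
C = np.zeros((len(keywords), len(keywords)))
for a in articles:
    for i in [index[k] for k in a]:
        for j in [index[k] for k in a]:
            if i != j:
                C[i, j] += 1

# Cosine similarity between co-occurrence profiles, then a dissimilarity for MDS.
norms = np.linalg.norm(C, axis=1, keepdims=True) + 1e-12
sim = (C / norms) @ (C / norms).T
coords = MDS(n_components=2, dissimilarity="precomputed", random_state=0).fit_transform(1 - sim)
print(dict(zip(keywords, np.round(coords, 2))))
```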