• Title/Summary/Keyword: distributed algorithm

Correction of Lunar Irradiation Effect and Change Detection Using Suomi-NPP Data (VIIRS DNB 영상의 달빛 영향 보정 및 변화 탐지)

  • Lee, Boram;Lee, Yoon-Kyung;Kim, Donghan;Kim, Sang-Wan
    • Korean Journal of Remote Sensing / v.35 no.2 / pp.265-278 / 2019
  • Visible Infrared Imaging Radiometer Suite (VIIRS) Day/Night Band (DNB) data enable rapid emergency response through the detection of artificial and natural disasters occurring at night. The DNB data distributed by the Korea Ocean Science Center (KOSC) without correction of the lunar irradiance effect are advantageous for rapid change detection because they are received directly. In this study, radiance differences according to the phase of the moon were analyzed for urban and mountain areas on the Korean Peninsula using DNB data received directly at KOSC, and a lunar irradiance correction algorithm was proposed for change detection. Relative correction was performed by regression analysis between the input DNB image and pixels selected, with regard to land cover classification, from a reference DNB image acquired during the new moon. Daily difference-image analysis showed brightness changes of about ±30 radiance units in urban areas and below ±1 in mountain areas. Object-based change detection was performed after extracting the main objects of interest from the temporal average of the time series, in order to reduce matching and geometric errors between DNB images. Brightness changes occurring in mountainous areas were detected effectively after correction of the lunar irradiance effect, showing that the developed technique can be used for real-time change detection.
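The relative correction step described here (regressing the input image against new-moon reference pixels) can be sketched in a few lines. Below is a minimal illustration assuming NumPy arrays for the DNB scenes and a boolean mask of land-cover-selected pixels; all names are illustrative, not from the paper's code.

```python
import numpy as np

def lunar_correction(input_img, reference_img, mask):
    """Relative lunar-irradiance correction of a moonlit DNB image.

    Fits input = a * reference + b over the selected stable pixels,
    then inverts the fit so the corrected input is directly comparable
    to the new-moon reference (a sketch of the paper's regression idea).
    """
    a, b = np.polyfit(reference_img[mask].ravel(), input_img[mask].ravel(), 1)
    return (input_img - b) / a

# change detection via a daily difference image:
# diff = lunar_correction(dnb_today, dnb_new_moon, stable_pixels) - dnb_new_moon
```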

A Security Nonce Generation Algorithm Scheme Research for Improving Data Reliability and Anomaly Pattern Detection of Smart City Platform Data Management (스마트시티 플랫폼 데이터 운영의 이상패턴 탐지 및 데이터 신뢰성 향상을 위한 보안 난수 생성 알고리즘 방안 연구)

  • Lee, Jaekwan;Shin, Jinho;Joo, Yongjae;Noh, Jaekoo;Kim, Jae Do;Kim, Yongjoon;Jung, Namjoon
    • KEPCO Journal on Electric Power and Energy / v.4 no.2 / pp.75-80 / 2018
  • A smart city manages city resources in common to develop its energy systems efficiently, for growth and for a low-carbon society. However, when existing security technologies (authentication, integrity, confidentiality) rely on fixed security keys, the smart city cannot effectively verify and detect anomalous patterns in the big data it generates. To improve on this, this paper proposes a security nonce generation scheme for detecting an adversary's anomalous patterns, in which key safety is raised by having a KDC (Key Distribution Center) generate the keys. In the proposed scheme, the KDC distributes the generated security nonces and authentication keys to each facility system. Security is strengthened because external attack patterns can be detected and the security key can be renewed using the distributed nonce together with the keys. The paper thereby improves the security and accountability of smart city platform management data through anomaly pattern detection and safe key handling.
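As a hedged sketch of the scheme's core idea (a KDC issuing a fresh random nonce alongside each key, so fixed-key reuse and replayed traffic become detectable), the following fragment uses only the Python standard library; the class and method names are illustrative assumptions, not the paper's API.

```python
import hashlib
import hmac
import secrets

class KeyDistributionCenter:
    """Issues a per-session security nonce and a derived authentication key."""

    def __init__(self):
        self.master_key = secrets.token_bytes(32)   # held only by the KDC

    def issue(self, facility_id: str):
        nonce = secrets.token_bytes(16)             # CSPRNG security nonce
        key = hmac.new(self.master_key,
                       facility_id.encode() + nonce,
                       hashlib.sha256).digest()     # key bound to facility + nonce
        return nonce, key                           # distributed to the facility

# kdc = KeyDistributionCenter()
# nonce, key = kdc.issue("substation-07")  # a stale or reused nonce flags an anomaly
```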

Apriori Based Big Data Processing System for Improving Sensor Data Throughput in IoT Environments (IoT 환경에서 센서 데이터 처리율 향상을 위한 Apriori 기반 빅데이터 처리 시스템)

  • Song, Jin Su;Kim, Soo Jin;Shin, Young Tae
    • KIPS Transactions on Computer and Communication Systems / v.10 no.10 / pp.277-284 / 2021
  • The smart home environment is expected to become a platform that collects, integrates, and utilizes various data through convergence with wireless information and communication technology, and the number of smart devices with various sensors inside smart homes is increasing. The amount of data these devices must process is growing as well, and big data processing systems are actively being introduced to handle it effectively. In traditional big data processing systems, however, all requests pass through the cluster driver before being allocated to distributed nodes, so the driver managing the partitioned tasks becomes a bottleneck and cluster-wide performance degrades. The delay is especially pronounced for smart home devices that constantly request processing of small data items. In this paper, we therefore design an Apriori-based big data system for effective data processing in smart home environments where frequent concurrent requests occur. In the performance evaluation, the proposed system reduced data processing time by 19.2% to 38.6% compared with the existing system. This result is related to the type of data measured: because the amount of data collected in a smart home environment is large, the cache server plays a major role in processing, and association analysis with the Apriori algorithm keeps highly related sensor data in the cache.
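The caching decision described at the end of this abstract rests on frequent-itemset mining. A minimal Apriori-style pass (1- and 2-itemsets only), written against plain Python collections, is sketched below; the function name and min_support threshold are assumptions for illustration.

```python
from collections import Counter
from itertools import combinations

def frequent_sensor_sets(requests, min_support=0.3):
    """Find sensors and sensor pairs that co-occur in at least
    min_support of the requests; such sets are cache candidates."""
    n = len(requests)
    singles = Counter(s for r in requests for s in set(r))
    frequent = {s for s, c in singles.items() if c / n >= min_support}
    pairs = Counter(p for r in requests
                    for p in combinations(sorted(set(r) & frequent), 2))
    return frequent, {p for p, c in pairs.items() if c / n >= min_support}

# logs = [["temp", "humidity"], ["temp", "humidity", "co2"], ["temp", "door"]]
# cache_singles, cache_pairs = frequent_sensor_sets(logs)
# -> frequently co-requested sensor data would be kept on the cache server
```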

Computer Vision-based Continuous Large-scale Site Monitoring System through Edge Computing and Small-Object Detection

  • Kim, Yeonjoo;Kim, Siyeon;Hwang, Sungjoo;Hong, Seok Hwan
    • International conference on construction engineering and project management / 2022.06a / pp.1243-1244 / 2022
  • In recent years, growing interest in off-site construction has led factories in the construction sector to scale up their manufacturing and production processes. Consequently, continuous large-scale site monitoring in low-variability environments, such as prefabricated component production plants (precast concrete production), has gained importance. Although many studies on computer vision-based site monitoring have been conducted, challenges remain in deploying this technology in large-scale field applications. One issue is collecting and transmitting vast amounts of video data: continuous site monitoring systems are based on real-time video collection and analysis, which requires excessive computational resources and network traffic. In addition, it is difficult to integrate information on objects of various sizes and scales within a single scene. Objects of many sizes and types (e.g., workers, heavy equipment, and materials) exist in a plant production environment, and these objects should be detected simultaneously for effective site monitoring. With existing object detection algorithms, however, it is difficult to detect objects with significant size differences simultaneously, because collecting and training on massive amounts of object image data at various scales would be necessary. This study therefore developed a large-scale site monitoring system using edge computing and small-object detection to solve these problems. Edge computing is a distributed information technology architecture in which image or video data are processed near the originating source rather than on a centralized server or cloud. By running inference on the AI computing modules attached to the CCTVs and communicating only the processed information to the server, excessive network traffic can be reduced. Small-object detection here means detecting objects of different sizes by cropping the raw image, with the number of rows and columns for image splitting set according to the target object size; small objects are detected in the cropped, magnified images and then expressed back in the original image. For inference, this study used the YOLO-v5 algorithm, known for its fast processing speed and widely used for real-time object detection. This method effectively detected large and even small objects that were difficult to detect with existing object detection algorithms. When the large-scale site monitoring system was tested, it performed well in detecting small objects, such as workers in a wide view of a construction site, which existing algorithms detected inaccurately. Our next goal is to incorporate safety monitoring and risk analysis algorithms into this system, such as collision risk estimation based on the time-to-collision concept, enabling optimization of safe routes by accumulating workers' paths and inferring risky areas from workers' trajectory patterns. Through such developments, this continuous large-scale site monitoring system can guide a construction plant's safety management more effectively.
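The tiling step behind the small-object detection described above can be illustrated briefly. The sketch below assumes NumPy image arrays and some YOLOv5-style detector callable; the row/column counts would be chosen from the target object size, as the paper states, and all names here are illustrative.

```python
def split_into_tiles(image, rows, cols):
    """Crop a frame into rows x cols tiles so small objects cover more
    pixels for the detector; each tile carries its offset so detections
    can be mapped back to original-image coordinates."""
    h, w = image.shape[:2]
    th, tw = h // rows, w // cols
    for r in range(rows):
        for c in range(cols):
            yield image[r * th:(r + 1) * th, c * tw:(c + 1) * tw], (c * tw, r * th)

# boxes = []
# for tile, (ox, oy) in split_into_tiles(frame, rows=2, cols=3):
#     for x1, y1, x2, y2, score, cls in detect(tile):   # e.g. a YOLOv5 model
#         boxes.append((x1 + ox, y1 + oy, x2 + ox, y2 + oy, score, cls))
```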

Study of Surfactant Enhanced Remediation Methods for Organic Pollutant(NAPL) Distributed over the Heterogeneous Medium (계면활성제를 이용한 불균질 매질에서 유기오염물(NAPL)의 정화효율에 관한 실험)

  • 서형기;이민희;정상용
    • Journal of Soil and Groundwater Environment / v.6 no.4 / pp.51-59 / 2001
  • Column and box tests were performed to investigate the removal efficiency of NAPL using surfactant-enhanced flushing in a heterogeneous medium. Homogeneous Ottawa sand and heterogeneous soil were used in the column tests to verify the increase in remediation efficiency with surfactant-enhanced flushing. Box tests with two different heterogeneous sub-structures were performed to quantify the capability of surfactant-enhanced flushing as a method to remove NAPL from a heterogeneous medium. Two sand layers of different grain sizes were alternated in the box to simulate a heterogeneous layered formation, and a modified fault structure was built to simulate a fault system. O-xylene was used as the LNAPL, PCE as the DNAPL, and oleamide as a non-ionic surfactant. The maximum NAPL effluent concentration with 1% oleamide flushing increased about 460-fold over water-only flushing in the homogeneous column test, and about 250-fold in the real-soil column test. In the heterogeneous medium, the maximum effluent concentration increased about 150-fold with 1% oleamide flushing, and most of the NAPL was removed from the box within 8 pore volumes of flushing, indicating a large increase in removal efficiency compared with water-only flushing. These results demonstrate the capability of surfactant-enhanced remediation to remove NAPL even in a heterogeneous medium.

Quantitative Rainfall Estimation for S-band Dual Polarization Radar using Distributed Specific Differential Phase (분포형 비차등위상차를 이용한 S-밴드 이중편파레이더의 정량적 강우 추정)

  • Lee, Keon-Haeng;Lim, Sanghun;Jang, Bong-Joo;Lee, Dong-Ryul
    • Journal of Korea Water Resources Association / v.48 no.1 / pp.57-67 / 2015
  • One of the main benefits of a dual polarization radar is improved quantitative rainfall estimation. In this paper, the performance of two representative rainfall estimation methods for dual polarization radar, the JPOLE and CSU algorithms, is compared using data from a MOLIT S-band dual polarization radar. The paper also evaluates the specific differential phase ($K_{dp}$) retrieval algorithm proposed by Lim et al. (2013). Current $K_{dp}$ retrieval methods are based on range filtering or regression analysis, which can underestimate peak $K_{dp}$ or produce negative values in convective regions, and yield fluctuating $K_{dp}$ in low-rain-rate regions. To resolve these problems, this study applied the $K_{dp}$ distribution method of Lim et al. (2013) and evaluated it by adopting the new $K_{dp}$ in the JPOLE and CSU algorithms. Data were obtained from the Mt. Biseul radar of MOLIT for two rainfall events in 2012. The evaluation showed improvement in peak $K_{dp}$ and no fluctuation or negative $K_{dp}$ values. In heavy rain (daily rainfall > 80 mm), accumulated daily rainfall using the new $K_{dp}$ was closer to AWS observations than that using the legacy $K_{dp}$; in light rain (daily rainfall < 80 mm) the improvement was insignificant, because quantitative rainfall estimation algorithms rely on $K_{dp}$ mainly at heavy rain rates.
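For context, the rainfall-rate step in JPOLE/CSU-style estimators applies a power law to $K_{dp}$. A minimal version is shown below; the coefficients are common textbook S-band values, not necessarily those used by the cited algorithms.

```python
def rain_rate_from_kdp(kdp_deg_per_km, a=40.5, b=0.85):
    """R(Kdp) power law: rain rate in mm/h from specific differential
    phase in deg/km. Negative Kdp (noise) is clamped to zero here;
    operational algorithms handle its sign more carefully."""
    return a * max(kdp_deg_per_km, 0.0) ** b

# rain_rate_from_kdp(2.0)  # ~73 mm/h for Kdp = 2 deg/km
```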

Financial Fraud Detection using Text Mining Analysis against Municipal Cybercriminality (지자체 사이버 공간 안전을 위한 금융사기 탐지 텍스트 마이닝 방법)

  • Choi, Sukjae;Lee, Jungwon;Kwon, Ohbyung
    • Journal of Intelligence and Information Systems / v.23 no.3 / pp.119-138 / 2017
  • Recently, SNS has become an important channel for marketing as well as personal communication. However, cybercrime has evolved along with information and communication technology, and illegal advertising is distributed over SNS in large quantities, so personal information is leaked and monetary damage occurs more frequently. In this study, we propose a method to analyze which sentences and documents posted to SNS are related to financial fraud. As a conceptual framework, we developed a matrix relating the characteristics of cybercriminality on SNS to emergency management, and we suggested an emergency management process consisting of pre-cybercriminality steps (e.g., risk identification) and post-cybercriminality steps; this paper focuses on risk identification. The main process consists of data collection, preprocessing, and analysis. First, we selected the two words 'daechul (loan)' and 'sachae (private loan)' as seed words and collected data containing them from SNS such as Twitter. Two researchers judged whether each collected item was related to cybercriminality, particularly financial fraud, and vocabularies related to nominals and symbols were then selected as keywords. With the selected keywords, we searched and collected data from web sources such as Twitter, news, and blogs, gathering more than 820,000 articles. The collected articles were refined through preprocessing into learning data. Preprocessing is divided into morphological analysis, stop-word removal, and selection of valid parts of speech. In morphological analysis, a complex sentence is decomposed into morpheme units to enable mechanical analysis. In stop-word removal, non-lexical elements such as numbers, punctuation marks, and double spaces are removed from the text. In part-of-speech selection, only nouns and symbols are retained: nouns refer to things and so express the intent of a message better than other parts of speech, and the more illegal a text is, the more frequently symbols appear. To turn the selected data into learning data, each item was labeled 'legal' or 'illegal'. The processed data were then converted into a corpus and a document-term matrix. Finally, the 'legal' and 'illegal' files were mixed and randomly divided into a learning set (70%) and a test set (30%). SVM was used as the discrimination algorithm; since SVM requires gamma and cost values as its main parameters, we set gamma to 0.5 and cost to 10 based on the optimal value function, the cost being higher than in typical cases. To show the feasibility of the proposed idea, we compared the method with MLE (Maximum Likelihood Estimation), term frequency, and collective intelligence methods, using overall accuracy as the metric. The overall accuracy of the proposed method was 92.41% for illegal loan advertisements and 77.75% for illegal door-to-door sales, clearly superior to term frequency, MLE, and the others, suggesting that the proposed method is valid and practically usable.
In this paper, we propose a framework for crisis management caused by anomalies in unstructured data sources such as SNS. We hope this study contributes to academia by identifying what to consider when applying an SVM-like discrimination algorithm to text analysis, and to practitioners in the fields of brand management and opinion mining.
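The classification pipeline described above (document-term matrix, 70/30 split, RBF SVM with gamma = 0.5 and cost = 10) can be reproduced in outline. The paper's tooling is not specified beyond these settings, so the scikit-learn sketch below is an assumed analogue, not the authors' code.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def train_fraud_classifier(documents, labels):
    """documents: preprocessed texts (nouns/symbols only);
    labels: 'legal' or 'illegal' per document."""
    X = CountVectorizer().fit_transform(documents)        # document-term matrix
    X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3)
    clf = SVC(kernel="rbf", gamma=0.5, C=10).fit(X_tr, y_tr)
    return clf, clf.score(X_te, y_te)                     # overall accuracy
```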

Fast Join Mechanism that considers the switching of the tree in Overlay Multicast (오버레이 멀티캐스팅에서 트리의 스위칭을 고려한 빠른 멤버 가입 방안에 관한 연구)

  • Cho, Sung-Yean;Rho, Kyung-Taeg;Park, Myong-Soon
    • The KIPS Transactions:PartC / v.10C no.5 / pp.625-634 / 2003
  • More than a decade after its initial proposal, deployment of IP multicast has remained limited owing to problems with traffic control in multicast routing, multicast address allocation on the global Internet, reliable multicast transport, and so on. Recently, with the growth of multicast application services such as Internet broadcasting and real-time security information services, overlay multicast has been developed as a new Internet multicast technology. In this paper, we describe an overlay multicast protocol and propose a fast join mechanism that takes tree switching into account. To find a potential parent, the existing search algorithm descends the tree from the root one level at a time, which causes long join latency. It also tries to select the nearest node as the potential parent, but the degree limit of nodes can prevent this, so the resulting tree has low efficiency. To reduce join latency and improve tree efficiency, we propose searching two levels of the tree at a time, with each node forwarding the join request to its own children. In ordinary operation there is no extra overhead to maintain the tree, but when a join request arrives, the larger number of search messages reduces the join latency, and searching more nodes helps construct more efficient trees. To evaluate the performance of our fast join mechanism, we measure metrics such as search latency, the number of searched nodes, and the number of switches as functions of the number of members and the degree limit. The simulation results show that our mechanism outperforms the existing one.
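The two-level search can be made concrete with a short sketch. The node objects, their fields, and the distance function below are illustrative assumptions; the point is that a join request is evaluated against the current node, its children, and (via the children) its grandchildren in a single round.

```python
def find_parent_two_level(node, joiner_addr):
    """One round of the proposed join: examine two tree levels at a time
    and pick the nearest candidate that still has free degree."""
    candidates = [node] + list(node.children)
    for child in node.children:
        candidates.extend(child.children)     # grandchildren, forwarded by the child
    open_nodes = [c for c in candidates if len(c.children) < c.degree_limit]
    best = min(open_nodes, key=lambda c: c.distance_to(joiner_addr))
    return best      # repeat from `best` if the search should descend further
```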

Functional MR Imaging of Cerebral Motor Cortex: Comparison between Conventional Gradient Echo and EPI Techniques (뇌 운동피질의 기능적 영상: 고식적 Gradient Echo기법과 EPI기법간의 비교)

  • 송인찬
    • Investigative Magnetic Resonance Imaging / v.1 no.1 / pp.109-113 / 1997
  • Purpose: To evaluate differences in functional imaging patterns between conventional spoiled gradient echo (SPGR) and echo planar imaging (EPI) methods in cerebral motor cortex activation. Materials and Methods: Functional MR imaging of cerebral motor cortex activation was performed on a 1.5 T MR unit with SPGR (TR/TE/flip angle = 50 ms/40 ms/30°, FOV = 300 mm, matrix size = 256×256, slice thickness = 5 mm) and interleaved single-shot gradient-echo EPI (TR/TE/flip angle = 3000 ms/40 ms/90°, FOV = 300 mm, matrix size = 128×128, slice thickness = 5 mm) in five healthy male volunteers. A total of 160 images in one slice and 960 images in 6 slices were obtained with SPGR and EPI, respectively. Right finger movement was performed with a paradigm of 8 activation/8 rest periods. Cross-correlation was used as the statistical mapping algorithm. We evaluated differences between the two techniques in the time series and in the signal intensity changes between rest and activation periods, and compared the locations and areas of the activation sites. Results: The activation sites in the motor cortex were accurately localized with both methods. No significant differences were found between EPI and SPGR in the signal intensity changes between rest and activation periods at the activation regions. The signal-to-noise ratio (SNR) of the time series data was two-fold higher in EPI than in SPGR, and more pixels with small p-values were distributed over the activation sites in EPI. Conclusion: Good-quality functional MR imaging of cerebral motor cortex activation could be obtained with both SPGR and EPI; however, EPI is preferable because its higher sensitivity provides more precise information on the hemodynamics related to neural activity.
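The cross-correlation mapping named in this abstract amounts to correlating each voxel time series with the task reference waveform. A minimal NumPy sketch follows; the boxcar construction and threshold are illustrative assumptions.

```python
import numpy as np

def activation_map(volume_ts, reference, threshold=0.5):
    """Pearson correlation of every voxel time series (..., t) with the
    activation/rest reference; voxels above threshold are marked active."""
    ts = volume_ts - volume_ts.mean(axis=-1, keepdims=True)
    ref = reference - reference.mean()
    r = (ts * ref).sum(axis=-1) / (
        np.linalg.norm(ts, axis=-1) * np.linalg.norm(ref))
    return r > threshold

# reference = np.tile(np.r_[np.ones(8), np.zeros(8)], 10)  # 8-on/8-off paradigm
```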

Study of Crustal Structure in North Korea Using 3D Velocity Tomography (3차원 속도 토모그래피를 이용한 북한지역의 지각구조 연구)

  • So Gu Kim;Jong Woo Shin
    • The Journal of Engineering Geology / v.13 no.3 / pp.293-308 / 2003
  • New results on the crustal structure down to a depth of 60 km beneath North Korea were obtained using seismic tomography. About 1,013 P- and S-wave travel times from local earthquakes recorded by Korean stations and stations in the vicinity were used, and all earthquakes were relocated on the basis of an algorithm proposed in this study. The velocity structure is parameterized with a set of nodes distributed through the study volume according to ray density; 120 nodes at four depth levels were used to obtain the resulting P- and S-wave velocity structures. The results show that the P- and S-wave velocity anomalies of the Rangnim Massif at a depth of 8 km are high and low, respectively, whereas those of the Pyongnam Basin are low down to 24 km. This indicates that the Rangnim Massif contains Archean to early Lower Proterozoic massif foldings with many faults and fractures, which may be saturated with groundwater and/or hot springs, while the Pyongyang-Sariwon area in the Pyongnam Basin is an intraplatform depression filled with sediments of Upper Proterozoic, Silurian and Upper Paleozoic, and Lower Mesozoic origin. In particular, high P- and S-wave velocity anomalies are observed at depths of 8, 16, and 24 km beneath Mt. Backdu, suggesting shallow conduits of solidified magma bodies, while the low P- and S-wave velocity anomalies at a depth of 38 km are likely related to a magma chamber of low-velocity, partially molten bodies. We also found the Moho discontinuity beneath the Pyongnam Basin, including Sariwon, to be about 55 km deep, whereas beneath Mt. Backdu it is about 38 km. The high ratio of P-wave to S-wave velocity at the Moho suggests a partially melting body near the crust-mantle boundary. Consequently, Mt. Backdu may well be considered a dormant volcano holding an intermediate magma chamber near the Moho discontinuity. This study also produced the interesting and important finding that materials with very high P- and S-wave velocity anomalies exist at a depth of about 40 km near the Mt. Myohyang area, at the edge of the Rangnim Massif shield.
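The abstract does not detail the relocation algorithm, so as a generic point of reference only, one iteration of the classic linearized (Geiger-type) relocation in a uniform half-space is sketched below; this is a textbook illustration, not the paper's method.

```python
import numpy as np

def geiger_step(hypocenter, origin_time, stations, obs_times, v):
    """One least-squares update of hypocenter (x, y, z) and origin time
    from P travel-time residuals, assuming constant velocity v."""
    d = stations - hypocenter                     # (n, 3) source-to-station vectors
    dist = np.linalg.norm(d, axis=1)
    resid = obs_times - (origin_time + dist / v)  # observed minus predicted times
    G = np.hstack([-d / (dist[:, None] * v),      # d(travel time)/d(source coords)
                   np.ones((len(dist), 1))])      # d(travel time)/d(origin time)
    dm, *_ = np.linalg.lstsq(G, resid, rcond=None)
    return hypocenter + dm[:3], origin_time + dm[3]
```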