• Title/Summary/Keyword: Log 분산처리 (Distributed Log Processing)


Inversion of Rayleigh-wave Dispersion Curves for Near-surface Shear-wave Velocities in Chuncheon Area (춘천지역의 천부 횡파속도를 구하기 위한 레일리파 분산곡선 역산)

  • Kim, Ki-Young;Kim, Woo-Jung;Park, Yeong-Hwan
    • Geophysics and Geophysical Exploration
    • /
    • v.15 no.1
    • /
    • pp.1-7
    • /
    • 2012
  • To evaluate methods of determining near-surface shear-wave velocities (${\nu}_s$), we derived dispersion curves of Rayleigh waves generated by both passive and active sources in Chuncheon, Korea. Microtremors were recorded for 5 minutes in each of four triangular arrays with radii of 5 ~ 40 m, and those data were analyzed using the Spatial Autocorrelation (SPAC) method. Rayleigh waves were also generated by a hammer source and recorded in the same area for 2 s using 24 4.5-Hz geophones; Multichannel Analysis of Surface Waves (MASW) was applied to those data. Velocity spectra were derived with relatively high signal-to-noise ratios in the frequency ranges of 7 ~ 19 Hz and 11 ~ 50 Hz for the microtremors and the artificially generated Rayleigh waves, respectively. The resultant dispersion curves were combined into a single curve and then inverted to derive shear-wave velocities, which were compared with a lithology log from a nearby well. Shear-wave velocities in the topsoil and soft-rock layers are almost constant, at 221 and 846 m/s respectively, while the inverse-modeled ${\nu}_s$ increases linearly in the gravelly sand, cobble, and weathered-rock layers. If rock type is classified based on shear-wave velocity, the inversion-derived boundary between weathered rock and soft rock may be about 5 m deeper than in the well log.
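
The SPAC step above rests on a standard relation: for a circular array of radius r, the azimuthally averaged coherency of vertical-component microtremors follows a zeroth-order Bessel function, $\rho(f, r) = J_0(2\pi f r / c(f))$, so each measured coherency can be solved for the phase velocity c(f). A minimal sketch under assumed values (the 10 m subarray radius and the coherency numbers are hypothetical, not from the paper):

```python
"""Sketch: phase velocities from SPAC coefficients via rho = J0(2*pi*f*r/c)."""
import numpy as np
from scipy.optimize import brentq
from scipy.special import j0

R = 10.0  # subarray radius in metres (hypothetical, within the 5-40 m range)

def phase_velocity(freq_hz, rho, c_max=3000.0):
    """Solve j0(2*pi*f*R/c) = rho for c on the first branch of J0."""
    c_lo = 2.0 * np.pi * freq_hz * R / 2.4048  # argument at J0's first zero
    f = lambda c: j0(2.0 * np.pi * freq_hz * R / c) - rho
    # On the first branch J0 is monotone in c, so the root in [c_lo, c_max]
    # is unique and brentq brackets it directly.
    return brentq(f, c_lo, c_max)

# Hypothetical SPAC coefficients at a few frequencies in the 7-19 Hz band.
for f_hz, rho in [(7.0, 0.93), (10.0, 0.75), (15.0, 0.10)]:
    print(f"{f_hz:5.1f} Hz -> c = {phase_velocity(f_hz, rho):7.1f} m/s")
```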

Design of an Efficient Turbo Decoder by Initial Threshold Setting (초기 임계값 설정에 의한 효율적인 터보 복호기 설계)

  • Kim, Dong-Han;Hwang, Sun-Young
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.26 no.5B
    • /
    • pp.582-591
    • /
    • 2001
  • Turbo codes were proposed as an error-correction scheme that approaches the Shannon limit on additive white Gaussian noise (AWGN) channels by using an iterative decoding algorithm, but the decoding delay caused by the iterations and by the interleaver makes real-time processing difficult. This paper proposes a turbo decoder architecture that reduces the number of unnecessary decoding iterations by setting an appropriate initial threshold, within a range that does not degrade the performance of the turbo code. The initial threshold is determined as the optimum value through repeated simulations, based on the mean and variance of the LLR (Log-Likelihood Ratio) values and the BER of the decoder output. With a properly chosen initial threshold, the proposed scheme reduces the number of iterations without loss, avoiding the large decoding delay of a conventional fixed iteration count, and the resulting reduction in computation also yields low power consumption. For performance evaluation, simulations were carried out in an IMT-2000 high-speed data transmission environment with BER within $10^{-6}$ and data rates of 32 kbps or higher. The results show that, over SNR variations of 0 ~ 3 dB, the number of iterations is reduced by 55 ~ 90% on average compared with a conventional turbo decoder using a fixed iteration count.
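
The stopping rule the abstract describes can be sketched as follows: after each iteration the decoder checks a statistic of the LLRs against a pre-tuned threshold and stops early once the decisions look reliable. This is a minimal illustration only; `turbo_iteration` is a hypothetical stand-in for a full SISO decoding pass, and the threshold value is an assumed placeholder, not the paper's tuned value.

```python
"""Sketch: early-stopping turbo decoding via an initial LLR threshold."""
import numpy as np

def turbo_iteration(llr):
    """Hypothetical single turbo iteration returning updated LLRs.

    A real decoder would run two SISO (e.g. Log-MAP) passes with
    interleaving here; we fake growing reliability for illustration.
    """
    return llr + np.sign(llr) * np.abs(np.random.randn(llr.size))

def decode(llr_channel, threshold=10.0, max_iter=8):
    """Iterate until mean |LLR| clears the threshold or max_iter is hit.

    The paper tunes the threshold offline from the mean/variance of the
    LLRs and the output BER; 10.0 here is an arbitrary placeholder.
    """
    llr = llr_channel.copy()
    for it in range(1, max_iter + 1):
        llr = turbo_iteration(llr)
        if np.mean(np.abs(llr)) >= threshold:  # decisions reliable: stop
            break
    return (llr < 0).astype(int), it  # hard decisions, iterations used

bits, used = decode(np.random.randn(1024) + 2.0)
print(f"stopped after {used} iteration(s)")
```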


Log Acquisition of the OpenStack Platform for Cloud Forensic (클라우드 포렌식을 위한 오픈스택 플랫폼에서 로그데이터 수집)

  • Han, Su bin;Lee, Byung-Do;Shim, Jongbo;Shin, Sang Uk
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2014.11a
    • /
    • pp.460-463
    • /
    • 2014
  • Despite the many advantages of cloud computing, its security issues have not diminished; in particular, digital forensics is still inadequate for performing its practical functions. With the recent rise in various cyber crimes, cloud computing environments are exposed to cyber crime and carry the risk of malicious attacks. Cloud forensics must be approached differently from traditional forensic investigation, because resources may exist in virtual space and evidence data may be physically distributed. In addition, the evidence data obtainable in cloud-based forensics has not been defined, which makes evidence collection difficult. In this paper, we therefore build a cloud environment on the OpenStack platform, catalog the log data that can be acquired for cloud-platform-based forensics, collect and analyze the logs that can actually be obtained, and examine the limitations of cloud-computing-platform-based forensics and possible solutions.
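
As a rough illustration of the collection step, the sketch below gathers the service logs that an Ubuntu-style OpenStack deployment typically writes under /var/log/<service>/ and records a SHA-256 digest of each file so the evidence can later be shown to be unaltered. The directory list is an assumption about the deployment, not something fixed by OpenStack itself.

```python
"""Sketch: acquire OpenStack service logs with integrity hashes."""
import hashlib
import json
from pathlib import Path

# Typical default log locations; adjust to the actual deployment.
SERVICES = ["nova", "keystone", "glance", "cinder", "neutron"]

def acquire_logs(log_root="/var/log", out="evidence_manifest.json"):
    manifest = []
    for svc in SERVICES:
        svc_dir = Path(log_root, svc)
        if not svc_dir.is_dir():
            continue  # service not installed on this node
        for path in sorted(svc_dir.glob("*.log")):
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            manifest.append({"service": svc,
                             "file": str(path),
                             "sha256": digest})
    Path(out).write_text(json.dumps(manifest, indent=2))
    return manifest

if __name__ == "__main__":
    for entry in acquire_logs():
        print(entry["sha256"][:12], entry["file"])
```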

On the vibration influence on running power plant facilities when the foundation is excavated by cautious blasting works (노천굴착에서 발파진동의 크기를 감량 시키기 위한 정밀파실험식)

  • Huh Ginn
    • Explosives and Blasting
    • /
    • v.9 no.1
    • /
    • pp.3-13
    • /
    • 1991
  • The cautious blasting works used emulsion explosives with electric MS-delay caps. Drill depths ranged from 3 m to 6 m with a ${\phi}70mm$ crawler drill in calcareous sandstone (soft to moderately hard, semi-hard rock). A total of 88 test blasts were carried out, with scaled distances of 15.52 ~ 60.32. The measurements were fitted to the propagation law for blasting vibration, $V=K\left(\frac{D}{W^b}\right)^{-n}$, where V is the peak particle velocity (cm/sec), D the distance between the explosion and recording sites (m), W the maximum charge per delay period of eight milliseconds or more (kg), K the ground transmission constant, determined empirically from the rock, explosive, drilling pattern, etc., b the charge exponent, and n the reduction exponent; the quantity $\frac{D}{W^b}$ is known as the scaled distance. This equation is used by the U.S. Bureau of Mines to determine peak particle velocity. The propagation law can be categorized into three groups: cube-root scaling of charge per delay, square-root scaling of charge per delay, and site-specific scaling of charge per delay. Plots of peak particle velocity versus distance were made on log-log coordinates. The data are grouped by test and peak particle velocity, and the linear grouping of the data permits their representation by an equation of the form $V=K\left(\frac{D}{W^{1/3}}\right)^{-n}$. The values of K (41 or 124) and n (1.41 or 1.66) were determined for each set of data by the method of least squares. Statistical tests showed that a common slope n could be used for all data of a given component. The charge and reduction exponents were obtained by multiple regression analysis, and the data were divided into ranges under and over 100 m because the dominant frequency varies with distance from the blast site. The resulting empirical equations for cautious blasting vibration are: over 30 m and under 100 m, $V = 41\,(D/\sqrt{W})^{-1.41}$ (A); over 100 m, $V = 124\,(D/\sqrt[3]{W})^{-1.66}$ (B), where V is the peak particle velocity in cm/sec, D the distance in m, and W the maximum charge weight per delay in kg. The K values in these equations need further specification; to better understand the effect of explosives, rock strength, and drilling pattern on vibration levels, it is necessary to carry out more tests.
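
Applying the two fitted equations is straightforward; a small sketch using the K and n values quoted in the abstract (units as stated: D in metres, W in kg per delay, V in cm/s):

```python
"""Sketch: peak particle velocity from the fitted site equations."""

def peak_particle_velocity(distance_m: float, charge_kg: float) -> float:
    """Empirical PPV (cm/s): equation A for 30-100 m, equation B beyond."""
    if 30.0 <= distance_m <= 100.0:
        scaled = distance_m / charge_kg ** 0.5    # square-root scaling
        return 41.0 * scaled ** -1.41             # equation A
    if distance_m > 100.0:
        scaled = distance_m / charge_kg ** (1/3)  # cube-root scaling
        return 124.0 * scaled ** -1.66            # equation B
    raise ValueError("fits were derived for distances of 30 m or more")

# Example: 5 kg per delay, observed 50 m and 150 m from the blast.
for d in (50.0, 150.0):
    print(f"D = {d:5.1f} m -> V = {peak_particle_velocity(d, 5.0):.3f} cm/s")
```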


Analysis of Factors for Korean Women's Cancer Screening through Hadoop-Based Public Medical Information Big Data Analysis (Hadoop기반의 공개의료정보 빅 데이터 분석을 통한 한국여성암 검진 요인분석 서비스)

  • Park, Min-hee;Cho, Young-bok;Kim, So Young;Park, Jong-bae;Park, Jong-hyock
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.22 no.10
    • /
    • pp.1277-1286
    • /
    • 2018
  • In this paper, we provide an Apache Hadoop based cloud environment, with the flexible scalability of cloud computing resources, for the analysis of public medical information big data. This includes the ability to quickly and flexibly extend storage, memory, and other resources as log data accumulates or grows over time. In addition, when real-time analysis of the accumulated unstructured log data is required, the system adopts a Hadoop-based analysis module to overcome the processing limits of existing analysis tools, providing fast and reliable parallel distributed processing of large volumes of log data. For the big data analysis, frequency analysis and chi-square tests were performed, followed by multivariate logistic regression at the 0.05 significance level on the significant variables (p < 0.05), carried out for each of the three models.
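
The statistical pipeline described (frequency/chi-square screening, then multivariate logistic regression on the variables that survive) can be sketched with standard Python tooling; the column names and data below are hypothetical, not from the study.

```python
"""Sketch: chi-square screening then multivariate logistic regression."""
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import chi2_contingency

rng = np.random.default_rng(0)
n = 1000
# Hypothetical screening dataset: outcome = attended cancer screening.
df = pd.DataFrame({
    "screened":  rng.integers(0, 2, n),
    "age_group": rng.integers(0, 4, n),   # categorical predictor
    "insured":   rng.integers(0, 2, n),
    "urban":     rng.integers(0, 2, n),
})

# 1) Chi-square test of each candidate factor against the outcome.
candidates = []
for col in ["age_group", "insured", "urban"]:
    table = pd.crosstab(df[col], df["screened"])
    _, p, _, _ = chi2_contingency(table)
    if p < 0.05:                          # keep significant factors only
        candidates.append(col)

# 2) Multivariate logistic regression on the surviving factors.
if candidates:
    X = sm.add_constant(df[candidates].astype(float))
    fit = sm.Logit(df["screened"], X).fit(disp=0)
    print(fit.summary2())
else:
    print("no factor passed the chi-square screen (random demo data)")
```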

Log Collection Method for Efficient Management of Systems using Heterogeneous Network Devices (이기종 네트워크 장치를 사용하는 시스템의 효율적인 관리를 위한 로그 수집 방법)

  • Jea-Ho Yang;Younggon Kim
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.23 no.3
    • /
    • pp.119-125
    • /
    • 2023
  • IT infrastructure operations have advanced, and methods for managing systems at scale have become widely adopted. Recent research has focused on improving system management using Syslog. However, utilizing log data collected through these methods is challenging, because logs arrive in various formats that require expert analysis. This paper proposes a system that uses edge computing to distribute the collection of Syslog data and preprocesses duplicate data before storing it in a central database. The system also constructs a data dictionary to classify and count data in real time, and restricts already-registered data from being transmitted to the central database. This approach maintains the predefined patterns in the data dictionary, controls duplicate data and temporal duplicates, and stores refined data in the central database, thereby securing base data for big data analysis. The proposed algorithms and procedures are demonstrated through simulations and examples: real Syslog data, including extracted examples, is used to show that the necessary information is accurately extracted from the log data and that the classification and storage processes execute as intended. The system can serve as an efficient solution for collecting and managing log data in edge environments and offers potential benefits in terms of technology diffusion.
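
The dictionary-based filtering idea reads roughly like the sketch below: the edge node reduces each Syslog line to a pattern key, counts repeats locally, and forwards only patterns not yet registered centrally. The normalization rules and the forwarding stub are assumptions for illustration, not the paper's exact scheme.

```python
"""Sketch: edge-side Syslog dedup with a pattern dictionary."""
import re
from collections import Counter

# Pattern dictionary: normalized message -> local occurrence count.
pattern_counts: Counter = Counter()

def normalize(line: str) -> str:
    """Reduce a raw syslog line to a pattern key (numbers/hex masked)."""
    line = re.sub(r"\b0x[0-9a-fA-F]+\b", "<HEX>", line)
    return re.sub(r"\d+", "<N>", line)

def forward_to_central(pattern: str) -> None:
    print("forwarded:", pattern)          # stand-in for the real DB write

def ingest(line: str) -> None:
    key = normalize(line)
    first_seen = key not in pattern_counts
    pattern_counts[key] += 1               # always count locally
    if first_seen:
        forward_to_central(key)            # transmit unregistered patterns only

for raw in [
    "sshd[812]: Failed password for root from 10.0.0.5 port 52211",
    "sshd[813]: Failed password for root from 10.0.0.9 port 40112",
    "kernel: eth0 link up",
]:
    ingest(raw)
print(pattern_counts)
```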

Analysis of Commercial Continuous Media Server Workloads on Internet (인터넷 환경에서의 상용 연속미디어 서버의 부하 분석)

  • Kim, Ki-Wan;Lee, Seung-Won;Park, Seong-Ho;Chung, Ki-Dong
    • The KIPS Transactions:PartB
    • /
    • v.10B no.1
    • /
    • pp.87-94
    • /
    • 2003
  • A study of server workload characteristics based on user access patterns offers insights into strategies for continuous media caching and network load distribution. This paper analyzes the characteristics of the continuous media files on each server and the user access requests to them, using log data from three commercial sites that provide continuous media files as real-time streams on the Internet. These servers host more continuous media files than those in previously reported studies and process a very large number of user access requests. We analyze the characteristics of the continuous media files on each server by file size, playback time, and encoding bandwidth. We also analyze the characteristics of user access requests by the distribution of user requests across continuous media files, user access times, access rates based on the popularity of the files, and the number of access requests to serial continuous media files.
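
A popularity analysis of this kind typically reduces to counting requests per file in the access log and inspecting the rank-frequency curve: if a small set of hot files dominates, caching pays off. A minimal sketch over a hypothetical log format (the layout and file names are invented for illustration):

```python
"""Sketch: rank-frequency (popularity) analysis of a streaming access log."""
from collections import Counter

# Hypothetical log lines: "<timestamp> <client> <media-file>".
log_lines = [
    "1046300400 10.1.2.3 news_0301.rm",
    "1046300405 10.1.9.8 news_0301.rm",
    "1046300411 10.4.2.1 drama_e01.rm",
    "1046300415 10.1.2.3 news_0301.rm",
    "1046300421 10.7.3.2 music_live.rm",
]

counts = Counter(line.split()[2] for line in log_lines)

total = sum(counts.values())
print(f"{'rank':>4} {'requests':>8} {'share':>6}  file")
for rank, (name, c) in enumerate(counts.most_common(), start=1):
    print(f"{rank:>4} {c:>8} {c/total:>6.1%}  {name}")
```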

Lazy Garbage Collection of Coordinated Checkpointing Protocol for Avoiding Sympathetic Rollback (동기적 검사점 기법에서 불필요한 복귀를 회피하기 위한 쓰레기 처리 기법)

  • Chung, Kwang-Sik;Yu, Heon-Chang;Lee, Won-Gyu;Lee, Seong-Hoon;Hwang, Chong-Sun
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.29 no.6
    • /
    • pp.331-339
    • /
    • 2002
  • This paper presents a garbage collection protocol for the checkpoints and message logs that are stored on stable or volatile storage for fault tolerance. Previous garbage collection schemes for coordinated checkpointing protocols delete all checkpoints except the last checkpoint of each process. However, when implemented on top of a reliable communication protocol such as TCP/IP, a rollback-recovery protocol based only on the last checkpoints causes sympathetic rollback. We show that the old checkpoints and message logs, not just the last checkpoints, must be preserved in order to replay lost messages. We define the conditions under which the checkpoints and the message logs of lost messages may be garbage collected, and present a garbage collection algorithm for checkpoints and message logs in a coordinated checkpointing protocol. Since the proposed algorithm uses process information for lost messages piggybacked on ordinary messages, no additional messages are required for garbage collection. The algorithm produces a 'lazy garbage collection' effect, because it relies on the checkpoint information piggybacked on send/receive messages; this lazy effect does not break the consistency of the whole system.
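
The piggybacking idea can be made concrete with a toy model: every outgoing message carries the sender's latest checkpoint index, each process remembers the newest index it has seen from each peer, and a checkpoint becomes garbage only when every peer is known to have checkpointed past it. This is an illustrative abstraction of the approach, not the paper's exact conditions.

```python
"""Sketch: lazy garbage collection driven by piggybacked checkpoint indexes."""

class Process:
    def __init__(self, name: str, peers: list):
        self.name = name
        self.checkpoints = [0]                     # own checkpoint indexes
        self.seen = {p: 0 for p in peers}          # newest index seen per peer

    def take_checkpoint(self) -> None:
        self.checkpoints.append(self.checkpoints[-1] + 1)

    def send(self):
        """A message piggybacks the sender's latest checkpoint index."""
        return (self.name, self.checkpoints[-1])

    def receive(self, msg) -> None:
        sender, ckpt = msg
        self.seen[sender] = max(self.seen[sender], ckpt)
        self.collect_garbage()                     # lazy: only on receive

    def collect_garbage(self) -> None:
        """Discard checkpoints every peer has provably moved past."""
        safe = min(self.seen.values())
        kept = [c for c in self.checkpoints if c >= safe]
        self.checkpoints = kept or [self.checkpoints[-1]]

p, q = Process("p", ["q"]), Process("q", ["p"])
for _ in range(3):
    p.take_checkpoint()
    q.receive(p.send())        # q learns p's progress, then collects garbage
    q.take_checkpoint()
    p.receive(q.send())
print("p keeps", p.checkpoints, "| q keeps", q.checkpoints)
```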

A Study on the Data Collection Methods based Hadoop Distributed Environment (하둡 분산 환경 기반의 데이터 수집 기법 연구)

  • Jin, Go-Whan
    • Journal of the Korea Convergence Society
    • /
    • v.7 no.5
    • /
    • pp.1-6
    • /
    • 2016
  • Many studies on big data utilization and analysis technology have been carried out recently, and government agencies and companies are increasingly introducing Hadoop as a processing platform for analyzing big data. With this growing interest in processing and analysis, the technology for collecting big data has become a major issue in parallel. However, compared with research on data analysis techniques, research on collection technology remains insignificant. Therefore, in this paper, we build a big data analysis platform on a Hadoop cluster and collect structured data from relational databases through Apache Sqoop. In addition, we provide a system based on Apache Flume that collects unstructured data, such as sensor data, Web application data files, and log files, as streams. The data collected through this convergence can be utilized as base material for big data analysis.
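
For the structured side, the ingestion step boils down to a Sqoop import from the RDBMS into HDFS (the unstructured side would be a Flume agent tailing log files into HDFS). The sketch below drives Sqoop from Python with subprocess; the connection string, credentials, table name, and target directory are placeholders for illustration.

```python
"""Sketch: launch a Sqoop import of a relational table into HDFS."""
import subprocess

# Placeholder connection details; adjust for the actual database.
cmd = [
    "sqoop", "import",
    "--connect", "jdbc:mysql://db-host:3306/sensors",
    "--username", "etl_user", "--password-file", "/user/etl/.pw",
    "--table", "readings",
    "--target-dir", "/data/raw/readings",
    "--num-mappers", "4",           # parallel import tasks
]
subprocess.run(cmd, check=True)     # raises if the import job fails
```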

A Voronoi Distance Based Searching Technique for Fast Image Registration (고속 영상 정합을 위한 보르노이 거리 기반 분할 검색 기법)

  • Bae Ki-Tae;Chong Min-Yeong;Lee Chil-Woo
    • The KIPS Transactions:PartB
    • /
    • v.12B no.3 s.99
    • /
    • pp.265-272
    • /
    • 2005
  • In this paper, we propose a technique for rapidly finding the correspondence points of two images using the Voronoi distance, as an image registration method for feature-based image mosaics. It extracts feature points from the two images with the SUSAN corner detector, then creates a Voronoi surface holding the distances to the feature points of the base image using a priority-based Voronoi distance algorithm, and selects as the model area the region of the model image whose feature-point coordinates have the maximum variance. We propose a method for searching for the correspondence points on the Voronoi surface of the base image overlapped with the model area, using a queue-based partitive search algorithm. The strength of the method is that correspondence points between adjacent images can be found rapidly, thanks to the new Voronoi distance algorithm, which has $O(width \times height \times \log N)$ time complexity, and the queue-based partitive search algorithm, which reduces the search range by a fourth at each step.
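
A Voronoi distance surface of this kind can be built with a multi-source shortest-path sweep over the pixel grid: seed a priority queue with every feature point at distance 0 and relax neighbors outward, matching the priority-queue flavor and the logarithmic factor in the bound quoted above. A minimal sketch (4-neighborhood, unit steps, hypothetical feature points), not the paper's exact algorithm:

```python
"""Sketch: Voronoi distance surface via a multi-source priority-queue sweep."""
import heapq

def voronoi_surface(width, height, feature_points):
    """Distance from every pixel to its nearest feature point."""
    INF = float("inf")
    dist = [[INF] * width for _ in range(height)]
    heap = []
    for x, y in feature_points:       # all seeds start at distance 0
        dist[y][x] = 0.0
        heapq.heappush(heap, (0.0, x, y))
    while heap:
        d, x, y = heapq.heappop(heap)
        if d > dist[y][x]:
            continue                  # stale queue entry
        for nx, ny in ((x+1, y), (x-1, y), (x, y+1), (x, y-1)):
            if 0 <= nx < width and 0 <= ny < height and d + 1 < dist[ny][nx]:
                dist[ny][nx] = d + 1
                heapq.heappush(heap, (d + 1, nx, ny))
    return dist

# Hypothetical 8x5 image with three detected corners.
surface = voronoi_surface(8, 5, [(1, 1), (6, 3), (4, 0)])
for row in surface:
    print(" ".join(f"{int(v)}" for v in row))
```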