• Title/Summary/Keyword: Network Technique

Simplified Bridge Weigh-In-Motion Algorithm using Strain Response of Short Span RC T-beam Bridge with no Crossbeam installed (가로보가 없는 단지간 RC T빔교의 변형률 응답을 이용한 단순화된 BWIM (Bridge Weigh-In-Motion) 알고리즘)

  • Jeon, Jun-Chang;Hwang, Yoon Koog;Lee, Hee-Hyun
    • Journal of the Korea institute for structural maintenance and inspection
    • /
    • v.25 no.3
    • /
    • pp.57-67
    • /
    • 2021
  • Thorough administration of an arterial road network requires a continuous supply of up-to-date, accurate information about the traffic that travels on its roads. One effective way to obtain the traffic volume and weight distribution of heavy vehicles is the Bridge Weigh-In-Motion (BWIM) technique, which is being actively studied. Unlike previous studies, this study developed a simplified BWIM algorithm that can easily estimate the axle spacing and weight of a traveling vehicle by exploiting the structural characteristics of the bridge. A short-span RC T-beam bridge with no crossbeam was selected, and the strain response characteristics of its deck and girders were examined through a preliminary field test. Based on these results, a simplified BWIM algorithm suited to the studied bridge was derived, and its validity and accuracy were verified through field tests. In the verification tests, the proposed algorithm estimated the axle spacing and gross weight of traveling vehicles with an average percent error of less than 3%.
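
A minimal sketch of the peak-timing idea behind strain-based BWIM (the sensor layout, prominence threshold, and function names are illustrative assumptions, not the paper's formulation): each passing axle produces a local strain peak, so speed follows from the travel time between two sensor lines, and axle spacing from peak-to-peak times at one sensor.

```python
# Hypothetical sketch of a peak-timing BWIM step; not the authors' algorithm.
import numpy as np
from scipy.signal import find_peaks

def estimate_axle_spacing(strain_a, strain_b, sensor_gap_m, fs_hz):
    """Estimate vehicle speed and axle spacings from two strain records.

    strain_a, strain_b : strain time histories from two sensor lines
                         separated by sensor_gap_m metres along the span.
    fs_hz              : sampling frequency of the strain signals.
    """
    # Each axle passing a sensor line produces a local strain peak.
    peaks_a, _ = find_peaks(strain_a, prominence=0.2 * strain_a.max())
    peaks_b, _ = find_peaks(strain_b, prominence=0.2 * strain_b.max())

    # Speed from the travel time of the first axle between the sensor lines.
    dt = (peaks_b[0] - peaks_a[0]) / fs_hz
    speed = sensor_gap_m / dt  # m/s

    # Axle spacings from peak-to-peak times at the first sensor line.
    spacings = np.diff(peaks_a) / fs_hz * speed
    return speed, spacings
```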

MLP-based 3D Geotechnical Layer Mapping Using Borehole Database in Seoul, South Korea (MLP 기반의 서울시 3차원 지반공간모델링 연구)

  • Ji, Yoonsoo;Kim, Han-Saem;Lee, Moon-Gyo;Cho, Hyung-Ik;Sun, Chang-Guk
    • Journal of the Korean Geotechnical Society
    • /
    • v.37 no.5
    • /
    • pp.47-63
    • /
    • 2021
  • Recently, the demand for three-dimensional (3D) underground maps from the perspective of digital twins, and the demand for their linked utilization, are increasing. However, the vastness of national geotechnical survey data and the uncertainty in applying geostatistical techniques pose challenges in modeling regional underground geotechnical characteristics. In this study, an optimal learning model based on a multi-layer perceptron (MLP) was constructed for 3D subsurface lithological and geotechnical classification in Seoul, South Korea. First, the geotechnical layer and 3D spatial coordinates of each borehole dataset in the Seoul area were organized into a geotechnical database in a standardized format, and pre-processing for machine learning, such as correction and normalization of missing values, was performed. An optimal fitting model was designed through hyperparameter optimization of the MLP model and model performance evaluation, such as precision and accuracy tests. Then, a 3D grid network locally assigning geotechnical layer classifications was constructed by applying the MLP-based best-fitting model to each unit lattice. The constructed 3D geotechnical layer map was evaluated by comparison with the results of a geostatistical interpolation technique and with the topsoil properties of the geological map.
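
An illustrative-only sketch of the classification step: an MLP maps borehole sample coordinates to a layer class, then labels every cell of a 3D grid. The hyperparameters, toy labels, and grid resolution below are placeholders, not the paper's optimized configuration.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# X: (n, 3) borehole sample coordinates (easting, northing, elevation);
# y: geotechnical layer label per sample (toy labels stand in for logs).
rng = np.random.default_rng(0)
X = rng.uniform([0, 0, -50], [1000, 1000, 50], size=(500, 3))
y = (X[:, 2] < 0).astype(int)

model = make_pipeline(
    StandardScaler(),  # normalization, mirroring the pre-processing step
    MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=1000),
)
model.fit(X, y)

# Assign a layer class to every cell of a 3D grid ("unit lattice").
grid = np.mgrid[0:1000:50j, 0:1000:50j, -50:50:20j].reshape(3, -1).T
layer_map = model.predict(grid).reshape(50, 50, 20)
```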

3D Mesh Reconstruction Technique from Single Image using Deep Learning and Sphere Shape Transformation Method (딥러닝과 구체의 형태 변형 방법을 이용한 단일 이미지에서의 3D Mesh 재구축 기법)

  • Kim, Jeong-Yoon;Lee, Seung-Ho
    • Journal of IKEEE
    • /
    • v.26 no.2
    • /
    • pp.160-168
    • /
    • 2022
  • In this paper, we propose a 3D mesh reconstruction method from a single image using deep learning and a sphere shape transformation method. The proposed method differs from existing methods in two ways. First, instead of building edges or faces by connecting nearby points, a deep learning network modifies the positions of the vertices of a sphere so that they closely match the 3D point cloud of the object. Because only an addition between each sphere vertex and its predicted offset is required, less memory is needed and the operation is faster. Second, the 3D mesh is reconstructed by applying the surface information of the sphere to the modified vertices. Even when the distances between the repositioned points are not uniform, the face information of the sphere, which indicates whether points are connected, is preserved, preventing simplification or loss of expressive detail. To evaluate the objective reliability of the proposed method, experiments were conducted in the same way as in the comparison papers using ShapeNet, an open standard dataset. The proposed method achieved an IoU of 0.581 and a chamfer distance of 0.212; a higher IoU and a lower chamfer distance indicate better results. The efficiency of the 3D mesh reconstruction was therefore demonstrated relative to previously published methods.
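
A minimal sketch of the sphere-deformation idea under an assumed architecture (the layer sizes, feature dimension, and class name are not the authors' network): a network predicts per-vertex offsets that are simply added to a template sphere's vertices, while the sphere's face list is reused so connectivity is never lost.

```python
import torch
import torch.nn as nn

class SphereDeformer(nn.Module):
    """Predicts per-vertex offsets for a template sphere (illustrative)."""

    def __init__(self, img_feat_dim=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(img_feat_dim + 3, 256), nn.ReLU(),
            nn.Linear(256, 3),  # one (dx, dy, dz) offset per vertex
        )

    def forward(self, img_feat, sphere_verts):
        # img_feat: (B, F) image feature; sphere_verts: (V, 3) template.
        B, V = img_feat.shape[0], sphere_verts.shape[0]
        feat = img_feat[:, None, :].expand(B, V, -1)
        x = torch.cat([feat, sphere_verts.expand(B, V, 3)], dim=-1)
        offsets = self.mlp(x)
        # Only an addition between template vertices and predicted offsets
        # is needed; the sphere's face list is kept for connectivity.
        return sphere_verts + offsets
```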

A Study on the Activation Plan for Professional Sport League through Exploration of Inducing Factors of Match Fixing (승부조작 유발요인 탐색을 통한 프로스포츠 활성화 방안)

  • Bang, Shin-Woong;Park, In-Sil;Kim, Wook-Ki
    • Journal of Korea Entertainment Industry Association
    • /
    • v.15 no.3
    • /
    • pp.153-170
    • /
    • 2021
  • This study sought to derive strategic implications for revitalizing professional sports by identifying factors that induce match fixing and deriving prevention strategies from them, through in-depth interviews with professional sports officials such as players, teams, federations, and agencies. Eight people with more than three years of experience working in professional sports were selected using the snowball sampling technique, and data were collected and analyzed through semi-structured in-depth interviews. The analysis derived five core categories of match-fixing inducing factors: the learning effect of the cartel for university entrance, the culture learned in camp training, the manifestation of latent learning effects, the negative effects of personal networks, and personal disposition. The core strategies to prevent match fixing were: first, improving the college entrance examination system to center on individual ability; second, improving the education system for student athletes; third, establishing a prevention system; fourth, continuing education; and fifth, activating the agent system. Implications of the results and future research directions are discussed.

Optimum conditions for artificial neural networks to simulate indicator bacteria concentrations for river system (하천의 지표 미생물 모의를 위한 인공신경망 최적화)

  • Bae, Hun Kyun
    • Journal of Korea Water Resources Association
    • /
    • v.54 no.spc1
    • /
    • pp.1053-1060
    • /
    • 2021
  • Current water quality monitoring in Korea is carried out based on in-situ grab-sample analysis. It is difficult to improve the current system, e.g., by shortening the sampling period or increasing the number of sampling points, because it is both cost- and labor-intensive. One possible improvement is to adopt a modeling approach. In this study, a modeling technique was introduced to support the current water quality monitoring system: an artificial neural network, a computational tool that mimics the biological processes of the human brain, was applied to predict river water quality. The approach was used to predict Total coliform concentrations at the river outlet and produced somewhat poor estimates, since Total coliform concentrations fluctuate rapidly. It could, however, forecast whether Total coliform concentrations would exceed the water quality standard. Modeling approaches are therefore expected to assist the current monitoring system when used to judge whether water quality factors will exceed the standards, which would support proper water resource management.
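
A hedged sketch of the exceedance-forecasting idea: rather than regressing the rapidly fluctuating concentration itself, classify whether it exceeds the standard. The covariates, coefficients, and standard value below are placeholders, not the paper's monitoring variables.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 4))            # e.g. rainfall, flow, temp, turbidity
conc = np.exp(X @ [0.8, 0.5, -0.3, 0.2] + rng.normal(size=300))
STANDARD = 2.0                           # hypothetical standard value
y = (conc > STANDARD).astype(int)        # exceedance indicator, not the value

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000).fit(X, y)
print(clf.predict(X[:5]))                # 1 = exceedance expected
```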

Construction of an Audio Steganography Botnet Based on Telegram Messenger (텔레그램 메신저 기반의 오디오 스테가노그래피 봇넷 구축)

  • Jeon, Jin;Cho, Youngho
    • Journal of Internet Computing and Services
    • /
    • v.23 no.5
    • /
    • pp.127-134
    • /
    • 2022
  • Steganography is an information-hiding technique in which secret messages are concealed in various multimedia files; it is widely exploited for cyber crime and attacks because it is very difficult for third parties other than the sender and receiver to detect the presence of hidden information in communication messages. A botnet typically consists of a botmaster, bots, and C&C (Command & Control) servers, and is a botmaster-controlled network with structures such as centralized, distributed (P2P), and hybrid. Recently, to enhance the concealment of botnets, research on stego botnets, which use SNS platforms instead of C&C servers and apply steganography techniques for C&C communication, has been actively conducted, but prior work has focused on image or video media. Audio files such as music and recordings are also actively shared on SNS, so research on stego botnets based on audio steganography is needed. Therefore, in this study, using a stego botnet that performs hidden C&C communication over Telegram Messenger with audio files as the cover medium, we present a comparative analysis of hiding capacity by file type and tool through experiments.
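
A minimal LSB-substitution sketch for 16-bit audio samples. LSB embedding is only one common audio steganography method; the study compares existing tools rather than prescribing this particular scheme, so treat the function names and payload as illustrative.

```python
import numpy as np

def embed_lsb(samples: np.ndarray, payload: bytes) -> np.ndarray:
    """Hide payload bits in the least significant bit of each int16 sample."""
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    if bits.size > samples.size:
        raise ValueError("payload exceeds cover capacity")
    stego = samples.copy()
    stego[: bits.size] = (stego[: bits.size] & ~1) | bits
    return stego

def extract_lsb(stego: np.ndarray, n_bytes: int) -> bytes:
    bits = (stego[: n_bytes * 8] & 1).astype(np.uint8)
    return np.packbits(bits).tobytes()

cover = np.random.randint(-2**15, 2**15, size=44100, dtype=np.int16)
stego = embed_lsb(cover, b"C&C command")           # hypothetical payload
assert extract_lsb(stego, len(b"C&C command")) == b"C&C command"
```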

A Comparative study on smoothing techniques for performance improvement of LSTM learning model

  • Tae-Jin, Park;Gab-Sig, Sim
    • Journal of the Korea Society of Computer and Information
    • /
    • v.28 no.1
    • /
    • pp.17-26
    • /
    • 2023
  • In this paper, several smoothing techniques are compared and applied to increase the applicability and effectiveness of an LSTM-based learning model. The applied smoothing techniques are Savitzky-Golay filtering, exponential smoothing, and weighted moving average. The LSTM model with the Savitzky-Golay filter applied in pre-processing showed significantly better prediction performance on Bitcoin data than the plain LSTM model. To confirm this, the training and validation loss rates of the Savitzky-Golay LSTM model were compared with those of the LSTM used to remove complex factors from Bitcoin price prediction, averaging over 20 runs to increase reliability; values of (3.0556, 0.00005) and (1.4659, 0.00002) were obtained. Because cryptocurrencies such as Bitcoin are more volatile than stocks, removing noise with the Savitzky-Golay filter during data pre-processing yielded the most significant improvement in the Bitcoin prediction rate through LSTM neural network learning.
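
A sketch of the pre-processing step: Savitzky-Golay smoothing of the price series before it is windowed for LSTM training. The window length, polynomial order, and lookback below are illustrative, not the paper's values.

```python
import numpy as np
from scipy.signal import savgol_filter

prices = np.cumsum(np.random.randn(1000)) + 100.0  # stand-in for Bitcoin prices

# Polynomial least-squares smoothing over a sliding window.
smoothed = savgol_filter(prices, window_length=21, polyorder=3)

def make_windows(series, lookback=30):
    """Turn a 1-D series into (samples, lookback, 1) inputs and next-step targets."""
    X = np.stack([series[i:i + lookback] for i in range(len(series) - lookback)])
    y = series[lookback:]
    return X[..., None], y  # trailing axis = 1 feature, as LSTM layers expect

X_train, y_train = make_windows(smoothed)
```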

A Study on Research Trends in Metaverse Platform Using Big Data Analysis (빅데이터 분석을 활용한 메타버스 플랫폼 연구 동향 분석)

  • Hong, Jin-Wook;Han, Jung-Wan
    • Journal of Digital Convergence
    • /
    • v.20 no.5
    • /
    • pp.627-635
    • /
    • 2022
  • As the non-face-to-face situation caused by COVID-19 persists, the underlying technologies of the 4th industrial revolution, such as IoT, AR, VR, and big data, are broadly shaping the metaverse platform. Such changes in the external environment, in society and culture, can affect the development of scholarship, so systematically organizing existing achievements in preparation for change is very important. Data containing the keyword 'metaverse platform' were collected from the Korea Educational Research Information Service (RISS), and text mining, a big data analysis technique, was applied. The collected data were analyzed with word cloud frequency, keyword connection strength, and semantic network analysis to examine trends in metaverse platform research. In the word cloud analysis, keywords appeared in the order 'use', 'digital', 'technology', and 'education'. In the keyword connection strength (N-gram) analysis, 'Edu→Tech' showed the highest connection strength, and three word-chain clusters were derived. Detailed research areas were classified into five areas, including 'digital technology'. Considering the results comprehensively, more active discovery and discussion of research topics appears necessary from the long-term perspective of developing the metaverse platform.
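
A toy sketch of the keyword bigram ("connection strength") count; the actual study ran this kind of analysis over RISS records, which are not included here, so the documents and tokens below are invented.

```python
# Requires Python 3.10+ for itertools.pairwise.
from collections import Counter
from itertools import pairwise

docs = [
    "metaverse platform education technology use",
    "digital technology education metaverse use",
]
bigrams = Counter()
for doc in docs:
    tokens = doc.split()
    bigrams.update(pairwise(tokens))   # adjacent keyword pairs

print(bigrams.most_common(3))          # strongest keyword-to-keyword links
```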

Efficient Privacy-Preserving Duplicate Elimination in Edge Computing Environment Based on Trusted Execution Environment (신뢰실행환경기반 엣지컴퓨팅 환경에서의 암호문에 대한 효율적 프라이버시 보존 데이터 중복제거)

  • Koo, Dongyoung
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.11 no.9
    • /
    • pp.305-316
    • /
    • 2022
  • With the flood of digital data owing to the Internet of Things and big data, cloud service providers that process and store vast amounts of data from multiple users can apply duplicate-data-elimination techniques for efficient data management. The edge computing paradigm, introduced as an extension of cloud computing, improves the user experience by mitigating problems such as network congestion at a central cloud server and reduced computational efficiency. However, adding a new edge device that is not entirely trusted can increase computational complexity, since additional cryptographic operations are needed to preserve data privacy during duplicate identification and elimination. In this paper, we propose an efficiency-improved, privacy-preserving duplicate-data-elimination protocol with an optimized user-edge-cloud communication framework that utilizes a trusted execution environment. Direct sharing of secret information between the user and the central cloud server minimizes the computational burden on edge devices and enables efficient encryption algorithms on the cloud service provider's side. Users also benefit from offloading data to edge devices, which enables duplicate elimination and independent operation. Experiments show the efficiency of the proposed scheme, with up to 78x faster computation during the data outsourcing process compared to a previous study that does not exploit a trusted execution environment in the edge computing architecture.
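
A greatly simplified sketch of duplicate elimination over ciphertexts: the store keeps one copy per ciphertext digest and records ownership. The paper's TEE-based sharing of secret information between user and cloud is not modeled here; the sketch assumes all users already hold the same data encryption key, and all names are hypothetical.

```python
import hashlib

class DedupStore:
    def __init__(self):
        self.blobs = {}    # digest -> ciphertext (stored once)
        self.owners = {}   # digest -> set of user ids

    def upload(self, user_id: str, ciphertext: bytes) -> bool:
        """Return True if the blob was a duplicate and not re-stored."""
        digest = hashlib.sha256(ciphertext).hexdigest()
        duplicate = digest in self.blobs
        if not duplicate:
            self.blobs[digest] = ciphertext
        self.owners.setdefault(digest, set()).add(user_id)
        return duplicate

store = DedupStore()
store.upload("alice", b"\x01\x02")        # first copy is stored
print(store.upload("bob", b"\x01\x02"))   # True: only ownership is recorded
```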

Detection of Signs of Hostile Cyber Activity against External Networks based on Autoencoder (오토인코더 기반의 외부망 적대적 사이버 활동 징후 감지)

  • Park, Hansol;Kim, Kookjin;Jeong, Jaeyeong;Jang, Jisu;Youn, Jaepil;Shin, Dongkyoo
    • Journal of Internet Computing and Services
    • /
    • v.23 no.6
    • /
    • pp.39-48
    • /
    • 2022
  • Cyberattacks around the world continue to increase, and their damage extends beyond government facilities to civilians. These issues emphasize the importance of developing a system that can identify and detect cyber anomalies early. To identify cyber anomalies effectively, several studies have trained machine learning models on BGP (Border Gateway Protocol) data to classify anomalies. However, BGP data are imbalanced: abnormal records are far fewer than normal ones. This biases the learned model and reduces the reliability of its results. In addition, security personnel cannot readily grasp the cyber situation from the raw output of a machine learning model during an actual incident. Therefore, in this paper, we examine BGP data, which record network behavior around the world, and address the data imbalance using SMOTE. Then, assuming a cyber range situation, an autoencoder classifies cyber anomalies and the classified data are visualized. By learning the pattern of normal data, the model classified abnormal data with 92.4% accuracy, and the auxiliary index also reached 90%, ensuring the reliability of the results. Visualizing the congested cyberspace allows the situation to be recognized effectively, which is expected to enable an effective defense against cyberattacks.
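
A hedged sketch of the pipeline shape: balance BGP-derived features with SMOTE, train an autoencoder on normal traffic only, and flag records whose reconstruction error is large. The feature set, architecture, and threshold are placeholders, not the paper's configuration.

```python
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
X_norm = rng.normal(0, 1, size=(950, 8))   # normal BGP feature rows (toy)
X_anom = rng.normal(4, 1, size=(50, 8))    # rare anomalous rows (toy)
X = np.vstack([X_norm, X_anom])
y = np.r_[np.zeros(950), np.ones(50)]

X_bal, y_bal = SMOTE(random_state=0).fit_resample(X, y)  # fix imbalance

# Autoencoder trained on normal data only (identity regression).
ae = MLPRegressor(hidden_layer_sizes=(4,), max_iter=2000)
ae.fit(X_norm, X_norm)

err = ((ae.predict(X) - X) ** 2).mean(axis=1)  # reconstruction error
threshold = np.percentile(err[y == 0], 95)     # assumed cut-off
flagged = err > threshold                      # candidate anomalies
```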