• Title/Summary/Keyword: Network Technique


3D Mesh Reconstruction Technique from Single Image using Deep Learning and Sphere Shape Transformation Method (딥러닝과 구체의 형태 변형 방법을 이용한 단일 이미지에서의 3D Mesh 재구축 기법)

  • Kim, Jeong-Yoon;Lee, Seung-Ho
    • Journal of IKEEE
    • /
    • v.26 no.2
    • /
    • pp.160-168
    • /
    • 2022
  • In this paper, we propose a 3D mesh reconstruction method from a single image using deep learning and a sphere shape transformation method. The proposed method differs from existing methods in the following ways. First, instead of building edges or faces by connecting nearby points, a deep learning network displaces the vertices of a sphere so that they closely match the 3D point cloud of the object. Because only an offset is added to each sphere vertex, the method requires less memory and runs faster. Second, the 3D mesh is reconstructed by mapping the surface information of the sphere onto the displaced vertices. Even when the distances between the points of the resulting point cloud are not constant, the face information of the sphere, which records which points are connected, is preserved, preventing simplification or loss of expressive detail. To evaluate the objective reliability of the proposed method, experiments were conducted in the same way as in the comparative papers using ShapeNet, an open standard dataset. The proposed method achieved an IoU of 0.581 and a chamfer distance of 0.212; a higher IoU and a lower chamfer distance indicate better results. The results therefore demonstrate the efficiency of the proposed 3D mesh reconstruction compared to previously published methods.
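The key mechanism described above, keeping the sphere's face list fixed and only adding per-vertex offsets, can be sketched as follows. This is a minimal illustration with a toy mesh and a hand-made offset array standing in for the network's prediction, not the authors' actual model:

```python
import numpy as np

def reconstruct_mesh(sphere_vertices, sphere_faces, predicted_offsets):
    """Displace sphere vertices by per-vertex offsets; faces are reused as-is.

    Only an element-wise addition is performed, which is why the method
    needs little memory and runs quickly.
    """
    new_vertices = sphere_vertices + predicted_offsets  # (N, 3) + (N, 3)
    return new_vertices, sphere_faces  # sphere connectivity is kept intact

# Toy "sphere": 4 vertices of a tetrahedron and its 4 triangular faces.
vertices = np.array([[0, 0, 1], [1, 0, 0], [0, 1, 0], [0, 0, -1]], dtype=float)
faces = np.array([[0, 1, 2], [0, 1, 3], [0, 2, 3], [1, 2, 3]])
offsets = np.full_like(vertices, 0.1)  # stand-in for a network prediction

new_v, new_f = reconstruct_mesh(vertices, faces, offsets)
```

Because the face array is returned unchanged, point connectivity survives even where displaced vertices end up unevenly spaced.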

A Study on the Activation Plan for Professional Sport League through Exploration of Inducing Factors of Match Fixing (승부조작 유발요인 탐색을 통한 프로스포츠 활성화 방안)

  • Bang, Shin-Woong;Park, In-Sil;Kim, Wook-Ki
    • Journal of Korea Entertainment Industry Association
    • /
    • v.15 no.3
    • /
    • pp.153-170
    • /
    • 2021
  • This study sought to derive strategic implications for revitalizing professional sports by conducting in-depth interviews with professional sports officials (players, teams, federations, agencies, etc.), exploring the factors that induce match fixing, and deriving prevention strategies from them. Eight people with more than three years of experience working in professional sports were selected using snowball sampling, and data were collected and analyzed through semi-structured in-depth interviews. The analysis derived five core categories of factors inducing match fixing: the learning effect of the cartel for university admission, the culture learned in camp training, the manifestation of latent learning effects, the negative effects of personal networks, and individual disposition. The prevention strategies derived as core categories were: first, reforming the college entrance examination system to center on individual ability; second, improving the education system for student athletes; third, establishing a prevention system; fourth, providing continuing education; and fifth, activating the agent system. Implications of these results and future research directions are discussed.

Optimum conditions for artificial neural networks to simulate indicator bacteria concentrations for river system (하천의 지표 미생물 모의를 위한 인공신경망 최적화)

  • Bae, Hun Kyun
    • Journal of Korea Water Resources Association
    • /
    • v.54 no.spc1
    • /
    • pp.1053-1060
    • /
    • 2021
  • Current water quality monitoring in Korea is carried out through in-situ grab sampling and analysis. Because this approach is both cost- and labor-intensive, the monitoring system is difficult to improve, for example by shortening the sampling period or increasing the number of sampling points. One possible improvement is to adopt a modeling approach. In this study, a modeling technique was introduced to support the current monitoring system: an artificial neural network, a computational tool that mimics the biological processes of the human brain, was applied to predict the water quality of a river. The model predicted Total coliform concentrations at the river outlet; the estimates were somewhat poor because the concentrations fluctuated rapidly. The model could, however, forecast whether Total coliform concentrations would exceed the water quality standard. Modeling approaches are therefore expected to assist the current monitoring system when used to judge whether water quality factors will exceed the standards, which would support proper water resource management.
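The usable output described above is a binary exceedance judgment rather than an exact concentration. A minimal sketch of that idea, a small feed-forward network whose prediction is only compared against the standard, is shown below; the network weights, input features, and the standard value are all hypothetical stand-ins, not the paper's trained model:

```python
import numpy as np

STANDARD = 1000.0  # hypothetical Total coliform standard (CFU/100 mL)

def forward(x, w1, b1, w2, b2):
    """One-hidden-layer network: tanh hidden layer, linear output."""
    h = np.tanh(x @ w1 + b1)
    return h @ w2 + b2

def exceeds_standard(x, params, standard=STANDARD):
    """Judge exceedance from the predicted concentration: exact values
    fluctuate too much to trust, but the binary judgment is usable."""
    predicted = forward(x, *params)
    return predicted > standard

rng = np.random.default_rng(0)
# Hypothetical inputs, e.g. rainfall, temperature, upstream concentration.
params = (rng.normal(size=(3, 8)), np.zeros(8),
          rng.normal(size=(8, 1)), np.zeros(1))
x = rng.normal(size=(5, 3))
flags = exceeds_standard(x, params)  # one True/False flag per sample
```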

Construction of an Audio Steganography Botnet Based on Telegram Messenger (텔레그램 메신저 기반의 오디오 스테가노그래피 봇넷 구축)

  • Jeon, Jin;Cho, Youngho
    • Journal of Internet Computing and Services
    • /
    • v.23 no.5
    • /
    • pp.127-134
    • /
    • 2022
  • Steganography is a data-hiding technique in which secret messages are concealed in multimedia files; it is widely exploited for cyber crime and attacks because it is very difficult for third parties other than the sender and receiver to detect the presence of hidden information in communication messages. A botnet typically consists of a botmaster, bots, and C&C (Command & Control) servers, and is a network controlled by the botmaster with a centralized, distributed (P2P), or hybrid structure. Recently, to enhance the concealment of botnets, research on stego botnets, which use SNS platforms instead of C&C servers and apply steganography techniques to C&C communication, has been actively conducted, but existing work has focused on image or video media. Audio files such as music and recordings are also actively shared on SNS, so research on stego botnets based on audio steganography is needed. Therefore, in this study, we build a stego botnet that performs hidden C&C communication on Telegram Messenger using audio files as the cover medium, and present a comparative analysis of hiding capacity by file type and tool through experiments.
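Audio steganography of the kind exploited above is commonly done by overwriting the least significant bit of each PCM sample, which is inaudible but recoverable. A minimal sketch of LSB embedding and extraction follows; the integer list standing in for audio samples and the `b"run"` payload are illustrative, not the tools the paper compares:

```python
def embed_lsb(samples, message):
    """Hide message bytes in the least significant bits of PCM samples."""
    bits = [(byte >> i) & 1 for byte in message for i in range(8)]
    if len(bits) > len(samples):
        raise ValueError("cover audio too short for message")
    stego = list(samples)
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & ~1) | bit  # overwrite the LSB only
    return stego

def extract_lsb(samples, length):
    """Recover `length` bytes from the sample LSBs."""
    out = bytearray()
    for b in range(length):
        byte = 0
        for i in range(8):
            byte |= (samples[b * 8 + i] & 1) << i
        out.append(byte)
    return bytes(out)

cover = list(range(200))          # stand-in for 16-bit PCM samples
stego = embed_lsb(cover, b"run")  # hypothetical hidden C&C command
```

Each sample changes by at most 1, so the stego audio sounds identical to the cover; hiding capacity is one bit per sample, which is why capacity varies with file type and length.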

A Comparative Study on Smoothing Techniques for Performance Improvement of an LSTM Learning Model

  • Tae-Jin, Park;Gab-Sig, Sim
    • Journal of the Korea Society of Computer and Information
    • /
    • v.28 no.1
    • /
    • pp.17-26
    • /
    • 2023
  • In this paper, several smoothing techniques are compared and applied to increase the applicability and effectiveness of an LSTM-based learning model. The smoothing techniques applied are the Savitzky-Golay filter, exponential smoothing, and the weighted moving average. The LSTM model with the Savitzky-Golay filter applied during preprocessing showed significantly better prediction performance on Bitcoin data than the LSTM model without it. To confirm this, the training loss and validation loss of the Savitzky-Golay LSTM model were compared with those of the plain LSTM used for Bitcoin price prediction, and each experiment was averaged over 20 runs to increase reliability; values of (3.0556, 0.00005) and (1.4659, 0.00002) were obtained. Because cryptocurrencies such as Bitcoin are more volatile than stocks, removing noise with the Savitzky-Golay filter during preprocessing yielded the most significant improvement in Bitcoin prediction through LSTM neural network learning.
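Two of the three smoothers compared above are simple enough to sketch directly (Savitzky-Golay filtering is typically done with a library such as `scipy.signal.savgol_filter`). The price list and parameter choices here are illustrative, not the paper's Bitcoin data:

```python
def exponential_smoothing(series, alpha=0.5):
    """s[t] = alpha * x[t] + (1 - alpha) * s[t-1]."""
    smoothed = [series[0]]
    for x in series[1:]:
        smoothed.append(alpha * x + (1 - alpha) * smoothed[-1])
    return smoothed

def weighted_moving_average(series, weights=(1, 2, 3)):
    """Windowed average with higher weight on more recent points."""
    n, total = len(weights), sum(weights)
    return [sum(w * x for w, x in zip(weights, series[i - n + 1:i + 1])) / total
            for i in range(n - 1, len(series))]

prices = [1.0, 2.0, 3.0, 4.0]   # stand-in for a Bitcoin price series
es = exponential_smoothing(prices)
wma = weighted_moving_average(prices)
```

Either smoothed series would then replace the raw prices as the LSTM's input; the study's point is that the choice of smoother measurably changes prediction quality.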

A Study on Research Trends in Metaverse Platform Using Big Data Analysis (빅데이터 분석을 활용한 메타버스 플랫폼 연구 동향 분석)

  • Hong, Jin-Wook;Han, Jung-Wan
    • Journal of Digital Convergence
    • /
    • v.20 no.5
    • /
    • pp.627-635
    • /
    • 2022
  • As the non-face-to-face situation caused by COVID-19 persists, underlying technologies of the 4th industrial revolution such as IoT, AR, VR, and big data are broadly affecting the metaverse platform. Changes in the external environment, such as society and culture, can affect the development of scholarship, so systematically organizing existing achievements in preparation for change is very important. Data containing 'metaverse platform' as a keyword were collected from the Korea Educational Research Information Service (RISS), and text mining, one of the big data analysis techniques, was applied. The collected data were analyzed with word cloud frequency, connection strength between keywords, and semantic network analysis to examine trends in metaverse platform research. In the word cloud analysis, keywords appeared in the order 'use', 'digital', 'technology', and 'education'. In the analysis of connection strength between keywords (N-gram), 'Edu→Tech' showed the highest connection strength, and three word-chain clusters were derived. Detailed research areas were classified into five areas, including 'digital technology'. Considering the results comprehensively, it seems necessary to discover and discuss more active research topics from the long-term perspective of developing the metaverse platform.
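The N-gram connection-strength measure above amounts to counting how often two keywords occur adjacently across documents. A minimal sketch with hypothetical pre-tokenized posts (not the RISS corpus) follows:

```python
from collections import Counter

def bigram_strength(tokenized_docs):
    """Count adjacent keyword pairs (2-grams) across documents; a higher
    count means a stronger connection between the two keywords."""
    pairs = Counter()
    for tokens in tokenized_docs:
        pairs.update(zip(tokens, tokens[1:]))
    return pairs

# Hypothetical tokenized posts standing in for the collected data.
docs = [["edu", "tech", "metaverse"],
        ["edu", "tech", "platform"],
        ["digital", "technology"]]
strength = bigram_strength(docs)
```

The pair with the highest count plays the role of the 'Edu→Tech' result reported in the study.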

Efficient Privacy-Preserving Duplicate Elimination in Edge Computing Environment Based on Trusted Execution Environment (신뢰실행환경기반 엣지컴퓨팅 환경에서의 암호문에 대한 효율적 프라이버시 보존 데이터 중복제거)

  • Koo, Dongyoung
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.11 no.9
    • /
    • pp.305-316
    • /
    • 2022
  • With the flood of digital data from the Internet of Things and big data, cloud service providers that process and store vast amounts of data from multiple users can apply deduplication techniques for efficient data management. The edge computing paradigm, introduced as an extension of cloud computing, improves the user experience by mitigating problems such as network congestion at a central cloud server and reduced computational efficiency. However, adding a new, not fully trusted edge device may increase the computational complexity of the additional cryptographic operations needed to preserve data privacy during duplicate identification and elimination. In this paper, we propose an efficiency-improved, privacy-preserving deduplication protocol with an optimized user-edge-cloud communication framework that utilizes a trusted execution environment. Directly sharing secret information between the user and the central cloud server minimizes the computational load on edge devices and enables the use of efficient encryption algorithms on the cloud service provider's side. Users also benefit from offloading data to edge devices, which enables deduplication and independent operation. Experiments show the efficiency of the proposed scheme, with up to a 78x improvement in computation during the data outsourcing process compared to a previous study that does not exploit a trusted execution environment in the edge computing architecture.
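The duplicate-identification step underlying any such protocol can be sketched as a content-addressed store: identical blocks hash to the same tag, so only the first copy is kept. This is only the plaintext skeleton of the idea; the paper's actual scheme deduplicates encrypted data with help from the trusted execution environment, which this sketch does not model:

```python
import hashlib

class DedupStore:
    """Content-addressed store: identical blocks are stored once.

    Hashing plaintext here only illustrates duplicate identification;
    a privacy-preserving scheme must derive tags without exposing data.
    """
    def __init__(self):
        self.blocks = {}

    def put(self, data: bytes) -> str:
        tag = hashlib.sha256(data).hexdigest()  # duplicate-identification tag
        if tag not in self.blocks:              # keep only the first copy
            self.blocks[tag] = data
        return tag

store = DedupStore()
t1 = store.put(b"report.pdf contents")
t2 = store.put(b"report.pdf contents")  # same upload from another user
```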

Detection of Signs of Hostile Cyber Activity against External Networks based on Autoencoder (오토인코더 기반의 외부망 적대적 사이버 활동 징후 감지)

  • Park, Hansol;Kim, Kookjin;Jeong, Jaeyeong;Jang, Jisu;Youn, Jaepil;Shin, Dongkyoo
    • Journal of Internet Computing and Services
    • /
    • v.23 no.6
    • /
    • pp.39-48
    • /
    • 2022
  • Cyberattacks around the world continue to increase, and their damage extends beyond government facilities to civilians. These issues emphasize the importance of developing a system that can identify and detect cyber anomalies early. To this end, several studies have trained machine learning models on BGP (Border Gateway Protocol) data to identify anomalies. However, BGP data are imbalanced: abnormal records are far fewer than normal ones. This biases the model's learning and reduces the reliability of its results. In addition, in an actual cyber situation, security personnel cannot readily grasp the situation from typical machine learning output. Therefore, in this paper, we examine BGP data, which record network routing behavior around the world, and address the data imbalance with SMOTE. Then, assuming a cyber range situation, an autoencoder classifies cyber anomalies and the classified data are visualized. By learning the pattern of normal data, the model classified abnormal data with 92.4% accuracy, and an auxiliary index also showed 90% performance, ensuring the reliability of the results. Visualizing the congested cyber space is also expected to support effective situation awareness and defense against cyberattacks.
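The imbalance fix above, SMOTE, generates synthetic minority samples by interpolating between a minority point and one of its nearest minority neighbors. A minimal numpy sketch of that core idea follows (a simplified stand-in, not the library implementation or the paper's BGP features):

```python
import numpy as np

def smote_like(minority, n_new, k=2, seed=0):
    """Synthesize minority samples on segments between a sample and one
    of its k nearest minority neighbors (the core SMOTE idea)."""
    rng = np.random.default_rng(seed)
    out = []
    for _ in range(n_new):
        i = rng.integers(len(minority))
        d = np.linalg.norm(minority - minority[i], axis=1)
        neighbors = np.argsort(d)[1:k + 1]   # skip the point itself
        j = rng.choice(neighbors)
        lam = rng.random()                   # position along the segment
        out.append(minority[i] + lam * (minority[j] - minority[i]))
    return np.array(out)

# Toy minority class standing in for the rare abnormal BGP records.
minority = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
synthetic = smote_like(minority, n_new=4)
```

The balanced set (original plus synthetic minority samples) is what the autoencoder would then be trained and evaluated on.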

Comparative Study of Anomaly Detection Accuracy of Intrusion Detection Systems Based on Various Data Preprocessing Techniques (다양한 데이터 전처리 기법 기반 침입탐지 시스템의 이상탐지 정확도 비교 연구)

  • Park, Kyungseon;Kim, Kangseok
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.10 no.11
    • /
    • pp.449-456
    • /
    • 2021
  • An intrusion detection system is a technology that detects abnormal behaviors violating security policy and prevents system attacks. Existing intrusion detection systems have been designed using statistical analysis or anomaly detection techniques over traffic patterns, but modern systems generate traffic that differs from that of older systems because of rapidly evolving technologies, so the existing methods have limitations. To overcome these limitations, intrusion detection methods applying various machine learning techniques are being actively studied. In this study, a comparative study was conducted on data preprocessing techniques that can improve anomaly detection accuracy, using NGIDS-DS (Next Generation IDS Dataset), generated by simulation equipment for traffic in various network environments. Padding and sliding windows were used for data preprocessing, and an oversampling technique based on an Adversarial Auto-Encoder (AAE) was applied to address the imbalance between normal and abnormal data. In addition, detection accuracy improved when Skip-gram, one of the Word2Vec techniques that can extract feature vectors from preprocessed sequence data, was used. PCA-SVM and GRU were used as models for the comparative experiments, and the best performance was obtained when sliding windows, Skip-gram, AAE, and GRU were applied together.
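The padding and sliding-window preprocessing above turns variable-length event sequences into fixed-length inputs for sequence models such as GRU. A minimal sketch (the event IDs and window size are illustrative, not the NGIDS-DS parameters):

```python
def pad_and_window(events, window=4, stride=1, pad=0):
    """Pad a short event sequence, then cut fixed-length overlapping
    windows, the shape of input a sequence model such as a GRU expects."""
    if len(events) < window:
        events = events + [pad] * (window - len(events))
    return [events[i:i + window]
            for i in range(0, len(events) - window + 1, stride)]

syscalls = [3, 5, 5, 7, 1]   # stand-in for system-call / event IDs
windows = pad_and_window(syscalls)
```

Each resulting window would then be mapped to feature vectors (e.g. via Skip-gram embeddings) before being fed to the detector.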

Analysis of Perception on Happy Housing Using Blog Mining Technique (블로그 마이닝을 활용한 행복주택의 인식 분석)

  • Hwang, Ji Hyoun
    • The Journal of the Korea Contents Association
    • /
    • v.22 no.2
    • /
    • pp.211-223
    • /
    • 2022
  • This study aims to verify the possibility of using blog mining to collect public opinion in the field of housing policy. Blog posts containing the keyword 'Happy Housing' were collected, key terms were extracted from them, and the public's perception was analyzed through keyword and word-cluster analysis. A total of 137,002 blog posts from May 2013, when social discussion about Happy Housing spread, to August 2021 were used as analysis data, and the extracted words were analyzed by dividing the period into three stages in consideration of major housing policies and data collection. The results are as follows. In the keyword analysis, words related to the location, number, size, and occupancy conditions of Happy Housing were important throughout: government policy implementation was prominent in the first stage; the application process for Happy Housing in the second; and recruitment notices, occupancy qualifications, and rental conditions in the third. In the cluster analysis, project progress, application process, and project area were the main themes at all stages; in particular, policy implementation and implementation plans in the first stage, occupancy qualifications and financial support in the second, and policy implementation and occupancy qualifications in the third. These results show the potential of blog mining as a method of collecting public opinion: sharing policy-related information, reflecting social issues, evaluating whether policies are delivered, and inferring the public's engagement with policies.
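The keyword-extraction step above reduces, at its simplest, to counting term frequencies after filtering stopwords, stage by stage. A minimal sketch follows; the sample posts and stopword list are hypothetical stand-ins for the 137,002 collected posts and a proper Korean-language tokenizer:

```python
from collections import Counter

STOPWORDS = {"the", "a", "of", "is", "in"}  # toy list; real analysis needs more

def top_keywords(posts, n=3):
    """Word-cloud-style frequency count over whitespace-tokenized posts."""
    counts = Counter(w for post in posts
                     for w in post.lower().split() if w not in STOPWORDS)
    return counts.most_common(n)

# Hypothetical first-stage posts, not the collected corpus.
stage1 = ["the location of happy housing", "happy housing occupancy notice"]
keywords = top_keywords(stage1)
```

Running the same count separately per stage is what lets the study compare which terms dominate each policy period.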