• Title/Abstract/Keyword: train


Machine Learning-based Detection of HTTP DoS Attacks for Cloud Web Applications (머신러닝 기반 클라우드 웹 애플리케이션 HTTP DoS 공격 탐지)

  • Jae Han Cho;Jae Min Park;Tae Hyeop Kim;Seung Wook Lee;Jiyeon Kim
    • Smart Media Journal / v.12 no.2 / pp.66-75 / 2023
  • Recently, the number of cloud web applications has been increasing owing to the accelerated migration of enterprise and public-sector information systems to the cloud. Traditional network attacks on cloud web applications are characterized by Denial of Service (DoS) attacks, which consume network resources with a large number of packets. However, HTTP DoS attacks, which consume application resources, have also been increasing recently; as such, developing security technologies to prevent them is necessary. In particular, since low-bandwidth HTTP DoS attacks do not consume network resources, they are difficult to identify using traditional security solutions that monitor network metrics. In this paper, we propose a new model for detecting HTTP DoS attacks on cloud web applications by collecting the application metrics of web servers and learning from them using machine learning. We collected 18 types of application metrics from an Apache web server and trained five machine learning and two deep learning models on the collected data. Further, we confirmed the superiority of the application-metrics-based machine learning model by additionally collecting six network metrics, training on them, and comparing their performance with that of the proposed models. Among HTTP DoS attacks, we injected the RUDY and HULK attacks, which are low- and high-bandwidth attacks, respectively. When detecting these two attacks with the proposed model, we found that the F1 scores of the application-metrics-based machine learning model were about 0.3 and 0.1 higher, respectively, than those of the network-metrics-based model.
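
A minimal sketch of the pipeline described above, assuming a CSV of the 18 application metrics plus a traffic label; the file name, column layout, and the choice of a random forest (one of the classical classifiers the abstract mentions) are illustrative assumptions, not the paper's code.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

# Hypothetical dataset: 18 application-metric columns plus a "label" column
# with values such as "normal", "RUDY", "HULK".
df = pd.read_csv("apache_app_metrics.csv")
X, y = df.drop(columns=["label"]), df["label"]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=42)

clf = RandomForestClassifier(n_estimators=200, random_state=42)
clf.fit(X_tr, y_tr)

# Macro-averaged F1 over the classes, analogous to the scores compared above.
print("F1:", f1_score(y_te, clf.predict(X_te), average="macro"))
```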

Similar Contents Recommendation Model Based On Contents Meta Data Using Language Model (언어모델을 활용한 콘텐츠 메타 데이터 기반 유사 콘텐츠 추천 모델)

  • Donghwan Kim
    • Journal of Intelligence and Information Systems / v.29 no.1 / pp.27-40 / 2023
  • With the increased spread of smart devices and the impact of COVID-19, the consumption of media contents through smart devices has grown significantly. Along with this trend, the amount of media contents viewed through OTT platforms is increasing, which makes content recommendation on these platforms more important. Previous content-based recommendation studies have mostly utilized structured metadata describing the characteristics of the contents, and few have utilized the contents' own descriptive text metadata. In this paper, various text data, including titles and synopses that describe the contents, were used to recommend similar contents. KLUE-RoBERTa-large, a Korean language model with excellent performance, was used to train the model on the text data. A dataset of metadata for over 20,000 contents, including titles, synopses, composite genres, directors, actors, and hash tag information, was used as training data. To feed the various text features into the language model, the features were concatenated using special tokens that indicate each feature. The test set was designed to evaluate the model's similarity classification ability in a relative and objective manner by using a three-contents comparison method and applying multiple rounds of inspection to label the test set. Genre classification and hash tag classification prediction tasks were used to fine-tune the embeddings of the contents meta text data. As a result, the hash tag classification model showed an accuracy of over 90% on the similarity test set, more than 9% better than the baseline language model. Through hash tag classification training, the language model's ability to classify similar contents was improved, which demonstrates the value of using a language model for content-based filtering.
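
The feature-concatenation and similarity step can be pictured with a short sketch; the marker strings, mean pooling, and example texts below are assumptions for illustration, not the paper's exact setup (the paper additionally fine-tunes on genre and hash tag classification).

```python
import torch
from transformers import AutoTokenizer, AutoModel

tok = AutoTokenizer.from_pretrained("klue/roberta-large")
model = AutoModel.from_pretrained("klue/roberta-large").eval()

def embed(title: str, synopsis: str, genre: str) -> torch.Tensor:
    # Concatenate the metadata fields with marker strings (illustrative choice).
    text = f"[TITLE] {title} [SYNOPSIS] {synopsis} [GENRE] {genre}"
    inputs = tok(text, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        out = model(**inputs).last_hidden_state
    return out.mean(dim=1).squeeze(0)          # mean pooling over tokens

a = embed("드라마 A", "시골 마을의 성장 이야기", "드라마")
b = embed("드라마 B", "도시를 배경으로 한 성장담", "드라마")
print("cosine similarity:", torch.nn.functional.cosine_similarity(a, b, dim=0).item())
```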

Spatialization of Unstructured Document Information Using AI (AI를 활용한 비정형 문서정보의 공간정보화)

  • Sang-Won YOON;Jeong-Woo PARK;Kwang-Woo NAM
    • Journal of the Korean Association of Geographic Information Studies / v.26 no.3 / pp.37-51 / 2023
  • Spatial information is essential for interpreting urban phenomena. Methodologies for spatializing urban information, especially when it lacks location details, have been developed consistently. Typical methods include geocoding using structured address information or place names, spatial integration with existing geospatial data, and manual work based on reference data. However, the vast number of documents produced by administrative agencies has not been dealt with in depth because of their unstructured nature, even though there is demand for their spatialization. This research utilizes the natural language processing model BERT to spatialize public documents related to urban planning. It focuses on extracting sentence elements containing addresses from documents and converting them into structured data. The study used 18 years of urban planning public announcement documents as training data to train the BERT model and enhanced its performance by manually adjusting its hyperparameters. After training, the test results showed accuracy rates of 96.6% for classifying urban planning facilities, 98.5% for address recognition, and 93.1% for address cleaning. When the resulting data were mapped in a GIS, it was possible to effectively display the change history of specific urban planning facilities. This research provides a deeper understanding of the spatial context of urban planning documents, and it is hoped that stakeholders can thereby make more effective decisions.
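
The two-step idea of flagging address-bearing sentences with a fine-tuned BERT classifier and then geocoding the extracted address could look roughly like the sketch below; the checkpoint name, label mapping, and geocoder are hypothetical placeholders, not artifacts of this study.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from geopy.geocoders import Nominatim

# Hypothetical fine-tuned checkpoint for "does this sentence contain an address?"
tok = AutoTokenizer.from_pretrained("my-bert-address-classifier")
clf = AutoModelForSequenceClassification.from_pretrained("my-bert-address-classifier").eval()

def contains_address(sentence: str) -> bool:
    inputs = tok(sentence, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = clf(**inputs).logits
    return logits.argmax(dim=-1).item() == 1    # class 1 = "has address" (assumed label map)

geocoder = Nominatim(user_agent="doc-spatialization-demo")

sentence = "대전광역시 유성구 대학로 99 일원에 도시계획시설을 결정함"
if contains_address(sentence):
    loc = geocoder.geocode("대전광역시 유성구 대학로 99")
    if loc is not None:
        print(loc.latitude, loc.longitude)      # point that can be placed on the GIS map
```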

Developing a New Algorithm for Conversational Agent to Detect Recognition Error and Neologism Meaning: Utilizing Korean Syllable-based Word Similarity (대화형 에이전트 인식오류 및 신조어 탐지를 위한 알고리즘 개발: 한글 음절 분리 기반의 단어 유사도 활용)

  • Jung-Won Lee;Il Im
    • Journal of Intelligence and Information Systems / v.29 no.3 / pp.267-286 / 2023
  • Conversational agents such as AI speakers rely on voice conversation for human-computer interaction, and voice recognition errors often occur in conversational situations. Recognition errors in user utterance records can be categorized into two types. The first is misrecognition, where the agent fails to recognize the user's speech at all. The second is misinterpretation, where the user's speech is recognized and a service is provided, but the interpretation differs from the user's intention. Of these, misinterpretation errors require separate error detection because they are recorded as successful service interactions. In this study, various text separation methods were applied to detect misinterpretation. For each of these text separation methods, the similarity of consecutive utterance pairs was computed using word embedding and document embedding techniques, which convert words and documents into vectors. This approach goes beyond simple word-based similarity calculation to explore a new method for detecting misinterpretation errors. The research method involved using real user utterance records to train and develop a detection model by applying patterns of the causes of misinterpretation errors. The results revealed that the most significant gains were obtained through initial consonant extraction for detecting misinterpretation errors caused by the use of unregistered neologisms. Comparison with other separation methods showed that different error types could be observed. This study has two main implications. First, for misinterpretation errors that are difficult to detect because they are not recorded as recognition failures, the study proposed diverse text separation methods and found a novel method that improved performance remarkably. Second, if this is applied to conversational agents or voice recognition services requiring neologism detection, the patterns of errors occurring from the voice recognition stage onward can be specified. The study proposed and verified that, even for utterances not categorized as errors, services can be provided according to the results users actually want.
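
Initial-consonant (choseong) extraction, the decomposition the abstract found most effective for neologism-related errors, works from the arithmetic layout of precomposed Hangul syllables; the sketch below shows that extraction plus a simple string-similarity score, with the similarity measure being an assumption for illustration.

```python
from difflib import SequenceMatcher

CHOSEONG = list("ㄱㄲㄴㄷㄸㄹㅁㅂㅃㅅㅆㅇㅈㅉㅊㅋㅌㅍㅎ")   # the 19 initial consonants

def initial_consonants(word: str) -> str:
    out = []
    for ch in word:
        code = ord(ch) - 0xAC00
        if 0 <= code < 11172:                  # precomposed Hangul syllable block
            out.append(CHOSEONG[code // 588])  # 588 = 21 medials x 28 finals
        else:
            out.append(ch)                     # keep non-syllable characters as-is
    return "".join(out)

def choseong_similarity(a: str, b: str) -> float:
    return SequenceMatcher(None, initial_consonants(a), initial_consonants(b)).ratio()

print(initial_consonants("신조어"))             # -> "ㅅㅈㅇ"
print(choseong_similarity("신조어", "신조아"))   # high similarity despite a misrecognized syllable
```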

Development of Deep Learning Structure for Defective Pixel Detection of Next-Generation Smart LED Display Board using Imaging Device (영상장치를 이용한 차세대 스마트 LED 전광판의 불량픽셀 검출을 위한 딥러닝 구조 개발)

  • Sun-Gu Lee;Tae-Yoon Lee;Seung-Ho Lee
    • Journal of IKEEE / v.27 no.3 / pp.345-349 / 2023
  • In this paper, we present the development of a deep learning structure for defective pixel detection in next-generation smart LED display boards using an imaging device. A technique utilizing imaging devices and deep learning is introduced to automatically detect defects in outdoor LED billboards, with the aim of managing LED billboards effectively and resolving various errors and issues. The research process consists of three stages. First, the planarized image data of the billboard are processed through calibration to completely remove the background and undergo the preprocessing necessary to generate a training dataset. Second, the generated dataset is used to train an object recognition network composed of a backbone and a head. The backbone employs CSP-Darknet to extract feature maps, while the head uses the extracted feature maps as the basis for object detection. Throughout this process, the network is adjusted to align the confidence score with the Intersection over Union (IoU) error, sustaining continuous learning. In the third stage, the trained model is employed to automatically detect defective pixels on actual outdoor LED billboards. The proposed method achieved 100% detection of defective pixels on real LED billboards in accredited measurement experiments, confirming improved efficiency in managing and maintaining LED billboards. These research findings are anticipated to bring about a revolutionary advancement in the management of LED billboards.
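
For reference, the Intersection over Union (IoU) measure that the training procedure aligns with the confidence score is the overlap area divided by the union area of two boxes; the sketch below uses made-up box coordinates.

```python
def iou(box_a, box_b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection rectangle
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

# Predicted vs. ground-truth box around a defective-pixel region (toy numbers).
print(iou((10, 10, 50, 50), (20, 20, 60, 60)))   # ~0.391
```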

APPLICATION OF WIFI-BASED INDOOR LOCATION MONITORING SYSTEM FOR LABOR TRACKING IN CONSTRUCTION SITE - A CASE STUDY in Guangzhou MTR

  • Sunkyu Woo;Seongsu Jeong;Esmond Mok;Linyuan Xia;Muwook Pyeon;Joon Heo
    • International conference on construction engineering and project management / 2009.05a / pp.869-875 / 2009
  • Safety is a major issue on construction sites. For safe and secure management, tracking the locations of construction resources such as laborers, materials, machinery, and vehicles is important. Materials, machinery, and vehicles can be controlled by computer, whereas the movement of laborers does not follow a fixed pattern, so the location and movement of laborers need to be monitored continuously for safety. In general, the Global Positioning System (GPS) is an apt solution for obtaining location information in outdoor environments, but it cannot be used for indoor positioning because it requires a clear Line-Of-Sight (LOS) to satellites. Therefore, an indoor location monitoring system can be a convenient alternative for environments such as tunnels and indoor building construction sites. This paper presents a case study investigating the feasibility of a Wi-Fi-based indoor location monitoring system on a construction site. The system is built on a fingerprint map of Received Signal Strength Indication (RSSI) values gathered from each Access Point (AP). The signal information is gathered by Radio Frequency Identification (RFID) tags, which are attached to laborers' helmets to track their locations, and is sent to a server computer. Experiments were conducted at a shield tunnel construction site in Guangzhou, China. The study consists of three phases. First, we performed a tracking test in the entrance area of the tunnel construction site to find an effective geometry for AP installation; the AP geometry was varied, and the experiment was performed using one and more tags. Second, the APs were separated into two groups connected with LAN cable in the tunnel construction site to check the validity of the group-separation strategy: one group was installed around the entrance and the other inside the tunnel. Finally, we installed the system in the inner area of the tunnel, around the boring machine, and checked its performance under varying conditions (the presence of obstacles such as trains, workers, and so on). Accuracy was calculated from data collected at known points. The experimental results showed that the Wi-Fi-based indoor location system achieves an accuracy of a few meters in the tunnel construction site. From these results, it is inferred that the location tracking system can track the approximate locations of laborers on the construction site and can alert them when they come close to dangerous zones such as poisonous regions or cave-in areas.
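
The offline/online fingerprinting idea can be sketched as a nearest-neighbour lookup over RSSI vectors; the radio-map numbers and the k-NN matcher below are illustrative assumptions, not the system deployed in the case study.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

# Offline phase: RSSI from 3 APs measured at known (x, y) positions along the tunnel.
rssi_map = np.array([[-40, -70, -80],
                     [-55, -60, -75],
                     [-70, -50, -65],
                     [-80, -45, -55]])
positions = np.array([[0, 0], [10, 0], [20, 0], [30, 0]])   # metres

model = KNeighborsRegressor(n_neighbors=2).fit(rssi_map, positions)

# Online phase: a tag on a labourer's helmet reports a new RSSI vector.
print(model.predict([[-60, -58, -72]]))   # estimated (x, y) in metres
```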


An Evaluation of Development Plans for Rolling Stock Maintenance Shop Using Computer Simulation - Emphasizing CDC and Generator Car - (시뮬레이션 기법을 이용한 철도차량 중정비 공장 설계검증 - 디젤동차 및 발전차 중정비 공장을 중심으로 -)

  • Jeon, Byoung-Hack;Jang, Seong-Yong;Lee, Won-Young;Oh, Jeong-Heon
    • Journal of the Korea Society for Simulation / v.18 no.3 / pp.23-34 / 2009
  • In a railroad rolling stock depot, long-term maintenance tasks are performed regularly, on a two- or four-year basis, to maintain the functionality of the equipment and rolling stock body, or to repair rolling stock heavily damaged by fatal accidents. This paper addresses the building of a computer simulation model of the rolling stock maintenance shop for the CDC (Commuter Diesel Car) and generator cars planned to be constructed at Daejon Rolling Stock Depot, which will be relocated from Yongsan Rolling Stock Depot. We evaluated the processing capacity of two layout design alternatives based on the maintenance process chart using the developed simulation models. The performance measures are the number of processed cars per year, the cycle time, shop utilization, work in process, and the average number of cars waiting for input. The simulation results show that one design alternative outperforms the other in every aspect, and that the superior alternative can process a total of 340 trains per year, 15% more than the proposed target, within the current average cycle time.
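
As a toy illustration of this kind of shop model, the discrete-event sketch below (SimPy) has cars arriving, queuing for a limited number of maintenance bays, and being processed; the arrival rate, bay count, and task times are made-up assumptions, not the evaluated layout alternatives.

```python
import random
import simpy

def car(env, name, bays, log):
    with bays.request() as req:
        yield req                                     # wait for a free maintenance bay
        start = env.now
        yield env.timeout(random.uniform(20, 30))     # long-term maintenance task (days)
        log.append((name, start, env.now))

def arrivals(env, bays, log):
    i = 0
    while True:
        yield env.timeout(random.expovariate(1 / 8))  # a new car roughly every 8 days
        i += 1
        env.process(car(env, f"CDC-{i}", bays, log))

random.seed(1)
env = simpy.Environment()
bays = simpy.Resource(env, capacity=3)                # hypothetical layout with 3 bays
log = []
env.process(arrivals(env, bays, log))
env.run(until=365)
print("cars processed per year:", len(log))
```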

Cross-Lingual Style-Based Title Generation Using Multiple Adapters (다중 어댑터를 이용한 교차 언어 및 스타일 기반의 제목 생성)

  • Yo-Han Park;Yong-Seok Choi;Kong Joo Lee
    • KIPS Transactions on Software and Data Engineering / v.12 no.8 / pp.341-354 / 2023
  • The title of a document is a brief summary of the document, and readers can easily understand a document if we provide its title in their preferred style and language. In this research, we propose a cross-lingual and style-based title generation model using multiple adapters. Training such a model requires a parallel corpus covering several languages and different styles, which is quite difficult to construct; however, a monolingual title generation corpus of a single style can be built easily. Therefore, we apply a zero-shot strategy to generate a title in a different language and with a different style for an input document. The baseline model is a Transformer consisting of an encoder and a decoder, pre-trained on several languages. The model is then equipped with multiple adapters for translation, languages, and styles. It first learns a translation task from a parallel corpus and then learns a title generation task from a monolingual title generation corpus. When training the model on a task, we activate only the adapter that corresponds to that task; when generating a cross-lingual, style-based title, we activate only the adapters that correspond to the target language and target style. Experimental results show that our proposed model performs on par with a pipeline model that first translates into the target language and then generates a title. Natural language generation has changed significantly with the emergence of large-scale language models, but research on improving natural language generation with limited resources and limited data needs to continue; this study explores the significance of such research.
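
The adapter mechanism itself can be sketched in a few lines: small bottleneck modules inserted alongside a frozen Transformer, with only the adapters for the chosen language and style active at a time. The module sizes and adapter names below are assumptions for illustration, not the paper's configuration.

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter: down-project, non-linearity, up-project, residual."""
    def __init__(self, hidden=768, bottleneck=64):
        super().__init__()
        self.down = nn.Linear(hidden, bottleneck)
        self.up = nn.Linear(bottleneck, hidden)

    def forward(self, x):
        return x + self.up(torch.relu(self.down(x)))

# One adapter per language and per style; only the active pair is applied,
# while the shared Transformer weights stay frozen.
adapters = nn.ModuleDict({
    "lang_ko": Adapter(), "lang_en": Adapter(),
    "style_a": Adapter(), "style_b": Adapter(),
})

def apply_adapters(hidden_states, language, style):
    h = adapters[f"lang_{language}"](hidden_states)
    return adapters[f"style_{style}"](h)

h = torch.randn(2, 16, 768)          # (batch, seq_len, hidden) from a frozen encoder layer
out = apply_adapters(h, language="en", style="a")
print(out.shape)
```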

Effects of Stabilization Exercise with and without Respiratory Muscle Training on Respiratory Function and Postural Sway in Healthy Adults (호흡근훈련 유무에 따른 안정화 운동이 건강한 성인의 호흡 기능과 자세 동요에 미치는 영향)

  • Hye-Ri Seo;Duk-Hyun An;Mi-Hyun Kim;Min-Joo Ko;Jae-Seop Oh
    • Journal of The Korean Society of Integrative Medicine / v.11 no.3 / pp.25-33 / 2023
  • Purpose: Stabilization exercise and respiratory muscle training are both used to train the trunk muscles that affect postural control and respiratory function, but no study has combined them. The purpose of this study is to investigate the effects of stabilization exercise with and without respiratory muscle training on respiratory function and postural sway. Methods: Fifteen healthy adults were recruited for this experiment. All subjects performed stabilization exercise both with and without respiratory muscle training. For stabilization exercise with respiratory muscle training, the subjects sat on a gym ball wearing a stretch sensor and inspired maximally for as long as possible while lifting one foot off the ground, alternating feet, for 30 seconds. The stretch sensor was placed on both anterior superior iliac spines (ASIS) and was used to monitor inspiration. For stabilization exercise without respiratory muscle training, the subjects sat on a gym ball and lifted one foot off the ground without respiratory muscle training. The Kinovea program was used to track postural sway during exercise. The maximum inspiratory pressure (MIP) and maximum expiratory pressure (MEP) were measured with a spirometer to investigate changes in respiratory muscle strength before and after exercise. A paired t-test was used to determine significant differences in postural sway tracking, MIP, and MEP between stabilization exercise with and without respiratory muscle training. Results: The postural sway tracking distance was significantly lower during stabilization exercise with respiratory muscle training than during stabilization exercise without it (p<.05). The MIP and MEP were significantly increased after stabilization exercise with respiratory muscle training compared with before it (p<.05). Conclusion: These results suggest that stabilization exercise with respiratory muscle training is recommended to improve postural control and respiratory muscle strength.
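
For reference, the paired t-test used in the analysis can be reproduced with a short sketch; the MIP values below are fabricated purely to show the procedure and do not come from this study.

```python
import numpy as np
from scipy import stats

# Hypothetical MIP (cmH2O) for n=15 subjects, before and after the exercise condition.
mip_before = np.array([78, 85, 92, 70, 88, 95, 81, 76, 90, 84, 79, 86, 93, 72, 87])
mip_after  = np.array([85, 90, 99, 78, 93, 101, 88, 83, 96, 90, 86, 92, 100, 80, 94])

t, p = stats.ttest_rel(mip_after, mip_before)
print(f"t = {t:.2f}, p = {p:.4f}")   # p < .05 would indicate a significant increase
```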

Lightening of Human Pose Estimation Algorithm Using MobileViT and Transfer Learning

  • Kunwoo Kim;Jonghyun Hong;Jonghyuk Park
    • Journal of the Korea Society of Computer and Information / v.28 no.9 / pp.17-25 / 2023
  • In this paper, we propose a model that can perform human pose estimation with fewer parameters and faster inference through a MobileViT-based architecture. The base model achieves lightweight performance through a structure that combines features of convolutional neural networks with features of the Vision Transformer. The Transformer, a central mechanism in this study, has become increasingly influential as Transformer-based models have outperformed convolutional neural network-based models in the field of computer vision. Similarly, in human pose estimation, the Vision Transformer-based ViTPose maintains the best performance on all major benchmarks such as COCO, OCHuman, and MPII. However, because the Vision Transformer has a heavy model structure with a large number of parameters and requires a relatively large amount of computation, it is costly for users to train. Accordingly, the base model compensates for the Vision Transformer's weak inductive bias, which otherwise demands a large amount of computation, by obtaining local representations through a convolutional neural network structure. Finally, the proposed model obtained a mean average precision of 0.694 on the MS COCO benchmark with 3.28 GFLOPs and 9.72 million parameters, which are about 1/5 and 1/9 of those of ViTPose, respectively.
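
The parameter count quoted above can be checked for any PyTorch model with a few lines; the timm checkpoint name below is an assumption, and the paper's exact model definition is not reproduced here.

```python
import timm
import torch

def count_params(model: torch.nn.Module) -> float:
    """Return the number of trainable parameters in millions."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad) / 1e6

# Assumed checkpoint: the small MobileViT variant shipped with timm.
backbone = timm.create_model("mobilevit_s", pretrained=False)
print(f"MobileViT-S backbone: {count_params(backbone):.2f}M parameters")
```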