• Title/Summary/Keyword: Learning Processing

Threat Situation Determination System Through AWS-Based Behavior and Object Recognition (AWS 기반 행위와 객체 인식을 통한 위협 상황 판단 시스템)

  • Ye-Young Kim;Su-Hyun Jeong;So-Hyun Park;Young-Ho Park
    • KIPS Transactions on Software and Data Engineering / v.12 no.4 / pp.189-198 / 2023
  • As street crime occurs frequently, the deployment of CCTV is increasing. However, because conventional CCTV must be monitored passively, intelligent CCTV is attracting attention. Existing intelligent CCTV systems are heavyweight and require high-performance devices, making it expensive to replace general CCTV. Solving this problem requires an intelligent CCTV system that recognizes low-quality images and runs even on low-performance devices. This paper therefore proposes a "Saying CCTV" system that detects threats in real time by using the AWS cloud platform to lighten the system and convert video into text. Based on data extracted with YOLO v4 and OpenPose, the system determines the risk object, threat behavior, and threat situation, and calculates a risk score using machine learning. As a result, the system can be operated anytime and anywhere as long as a network connection is available, and it runs on devices with only the minimal performance needed for video capture and image upload. Furthermore, by analyzing the video and automating meaningful crime statistics from the data stored as text, crime can be prevented quickly.
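
A minimal Python sketch of the fusion step described above: detected object labels and pose keypoints are combined into a per-frame feature vector and scored by a classifier, then rendered as a short text record. The risk-weight table, the random training data, and the input format are hypothetical stand-ins for the paper's YOLO v4 / OpenPose outputs and its actual training set.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical prior risk weights for detected object labels.
RISK_OBJECTS = {"knife": 1.0, "bat": 0.7, "person": 0.1}

def frame_to_features(detections, keypoints):
    """detections: list of (label, confidence); keypoints: (18, 2) array."""
    obj_score = max((RISK_OBJECTS.get(lbl, 0.0) * conf for lbl, conf in detections),
                    default=0.0)
    pose_vec = np.asarray(keypoints, dtype=float).ravel()  # crude behavior descriptor
    return np.concatenate(([obj_score], pose_vec))

# Placeholder training data: feature vectors labeled 0 (normal) / 1 (threat).
rng = np.random.default_rng(0)
X_train = rng.random((100, 1 + 18 * 2))
y_train = rng.integers(0, 2, 100)
clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)

def describe(detections, keypoints):
    """Convert one frame into a text record, in the spirit of "Saying CCTV"."""
    risk = clf.predict_proba([frame_to_features(detections, keypoints)])[0, 1]
    return f"threat detected (risk={risk:.2f})" if risk > 0.5 else "normal"

print(describe([("knife", 0.9)], rng.random((18, 2))))
```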

Comparing the 2015 with the 2022 Revised Primary Science Curriculum Based on Network Analysis (2015 및 2022 개정 초등학교 과학과 교육과정에 대한 비교 - 네트워크 분석을 중심으로 -)

  • Jho, Hunkoog
    • Journal of Korean Elementary Science Education / v.42 no.1 / pp.178-193 / 2023
  • The aim of this study was to investigate differences in achievement standards between the 2015 and 2022 revised national science curricula and to present implications for science teaching under the revised curriculum. Achievement standards relevant to primary science education were extracted from the national curriculum documents; conceptual domains in the two curricula were analyzed for differences; several kinds of centrality were computed; and the Louvain algorithm was used to identify clusters. These methods revealed that, in the revised curriculum compared with the preceding one, the total number of nodes and links had increased, while the number of achievement standards had decreased by 10 percent. In the revised curriculum, keywords relevant to procedural skills and behavior received more emphasis and were connected to collaborative learning and digital literacy. Observation, survey, and explanation remained important but varied in application across the fields of science. Clustering revealed that the number of categories in each field of science remained mostly unchanged from the previous curriculum, but that each category highlighted different skills or behaviors. Based on these findings, implications for science instruction in the classroom are discussed.
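
As a rough illustration of the method, the sketch below builds a keyword co-occurrence network from tokenized achievement standards, computes centralities, and clusters it with the Louvain algorithm (networkx ≥ 2.8). The token lists are invented placeholders, not the actual curriculum text.

```python
import itertools
import networkx as nx

# Placeholder tokenized achievement standards (one list per standard).
standards = [
    ["observation", "matter", "state"],
    ["observation", "explanation", "energy"],
    ["survey", "life", "collaborative-learning"],
    ["digital-literacy", "survey", "explanation"],
]

# Weighted co-occurrence network: keywords are nodes, shared standards are edges.
G = nx.Graph()
for tokens in standards:
    for u, v in itertools.combinations(sorted(set(tokens)), 2):
        if G.has_edge(u, v):
            G[u][v]["weight"] += 1
        else:
            G.add_edge(u, v, weight=1)

degree = nx.degree_centrality(G)
betweenness = nx.betweenness_centrality(G, weight="weight")
clusters = nx.community.louvain_communities(G, weight="weight", seed=42)

print(sorted(degree, key=degree.get, reverse=True)[:3])  # most central keywords
print(clusters)                                          # Louvain clusters
```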

Quality Visualization of Quality Metric Indicators based on Table Normalization of Static Code Building Information (정적 코드 내부 정보의 테이블 정규화를 통한 품질 메트릭 지표들의 가시화를 위한 추출 메커니즘)

  • Chansol Park;So Young Moon;R. Young Chul Kim
    • KIPS Transactions on Software and Data Engineering / v.12 no.5 / pp.199-206 / 2023
  • Modern software has grown to enormous source-code sizes, which increases the importance and necessity of static analysis for high-quality products. Static analysis must identify the defects and complexity of the code, and visualizing these problems helps developers and stakeholders understand them in the source code. Our previous visualization research focused only on storing static-analysis results in database tables, querying the calculations for quality indicators (CK metrics, coupling, number of function calls, bad smells), and finally visualizing the extracted information. That approach takes considerable time and space to analyze a code base: because the tables are not normalized, joining the tables (classes, functions, attributes, etc.) to extract information about the code wastes space and time. To solve these problems, we propose a normalized design of the database tables, an extraction mechanism for quality metric indicators inside the code, and a visualization of the extracted quality indicators on the code. Through this mechanism, we expect the code visualization process to be optimized and developers to be guided toward the modules that need refactoring. In the future, we will apply learning to parts of this process.
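
The sketch below illustrates the general idea under an assumed schema: static-analysis facts live in normalized tables, and a metric such as WMC (weighted methods per class, one of the CK metrics) falls out of a single join instead of repeated scans over a denormalized dump. The tables, rows, and query are illustrative, not the paper's exact design.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE class   (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE function(id INTEGER PRIMARY KEY,
                      class_id INTEGER REFERENCES class(id),
                      name TEXT, cyclomatic INTEGER);
CREATE TABLE call    (caller_id INTEGER REFERENCES function(id),
                      callee_id INTEGER REFERENCES function(id));
""")
con.executemany("INSERT INTO class VALUES (?, ?)", [(1, "Parser"), (2, "Lexer")])
con.executemany("INSERT INTO function VALUES (?, ?, ?, ?)",
                [(1, 1, "parse", 7), (2, 1, "expect", 2), (3, 2, "next_token", 4)])
con.executemany("INSERT INTO call VALUES (?, ?)", [(1, 2), (1, 3)])

# WMC per class: a single join over normalized tables.
for name, wmc in con.execute("""
        SELECT c.name, SUM(f.cyclomatic) AS wmc
        FROM class c JOIN function f ON f.class_id = c.id
        GROUP BY c.id ORDER BY wmc DESC"""):
    print(name, wmc)
```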

Automatic Collection of Production Performance Data Based on Multi-Object Tracking Algorithms (다중 객체 추적 알고리즘을 이용한 가공품 흐름 정보 기반 생산 실적 데이터 자동 수집)

  • Lim, Hyuna;Oh, Seojeong;Son, Hyeongjun;Oh, Yosep
    • The Journal of Society for e-Business Studies / v.27 no.2 / pp.205-218 / 2022
  • Recently, digital transformation in manufacturing has been accelerating, making shop-floor data collection technologies increasingly important. Existing approaches focus primarily on obtaining specific manufacturing data using various sensors and communication technologies. To expand the channels for field data collection, this study proposes a method to collect manufacturing data automatically using vision-based artificial intelligence: real-time video is analyzed with object detection and tracking technologies to obtain manufacturing data. The research team collects per-frame object motion information by applying YOLO (You Only Look Once) for detection and DeepSORT for tracking. The motion information is then converted through post-processing into two kinds of manufacturing data: production performance and time. A dynamically moving factory model is created to obtain training data for deep learning, and operating scenarios are proposed to reproduce real-world shop-floor conditions. The scenarios assume a flow shop consisting of six facilities. Collecting manufacturing data under these scenarios yielded an accuracy of 96.3%.
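
The post-processing step can be pictured as below: per-frame track records, of the kind a YOLO + DeepSORT pipeline would emit, are reduced to a production count and per-item cycle times. The finish-line rule, the record layout, and all numbers are assumptions for illustration.

```python
FINISH_X = 300  # assumed pixel x-coordinate past the last facility
FPS = 30        # assumed camera frame rate

# (frame, track_id, x_center) records from the tracker (placeholder data)
tracks = [(10, 1, 250), (40, 1, 310), (15, 2, 200), (90, 2, 320)]

first_seen, finished_at = {}, {}
for frame, tid, x in sorted(tracks):
    first_seen.setdefault(tid, frame)          # first frame the item appears
    if x >= FINISH_X and tid not in finished_at:
        finished_at[tid] = frame               # frame it crosses the finish line

production_count = len(finished_at)            # production performance
cycle_times = {tid: (finished_at[tid] - first_seen[tid]) / FPS
               for tid in finished_at}         # time data, in seconds
print(production_count, cycle_times)           # -> 2 {1: 1.0, 2: 2.5}
```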

Adverse Effects on EEGs and Bio-Signals Coupling on Improving Machine Learning-Based Classification Performances

  • SuJin Bak
    • Journal of the Korea Society of Computer and Information / v.28 no.10 / pp.133-153 / 2023
  • In this paper, we propose a novel approach to investigating brain-signal measurement technology using electroencephalography (EEG). Traditionally, researchers have combined EEG signals with bio-signals (BSs) to enhance the classification performance of emotional states. Our objective was to explore the synergistic effects of coupling EEG and BSs and to determine whether the combination EEG+BS improves the classification accuracy of emotional states compared with using EEG alone or combining EEG with pseudo-random signals (PS) generated arbitrarily by random generators. Employing four feature extraction methods, we examined four combinations, EEG alone, EEG+BS, EEG+BS+PS, and EEG+PS, using data from two widely used open datasets. Emotional states (task versus rest) were classified using Support Vector Machine (SVM) and Long Short-Term Memory (LSTM) classifiers. Our results revealed that with the highest-accuracy configuration, SVM-FFT, the average error rates of EEG+BS were 4.7% and 6.5% higher than those of EEG+PS and EEG alone, respectively. We also conducted a thorough analysis of EEG+BS by combining numerous PSs. The error rate of EEG+BS+PS displayed a V-shaped curve, initially decreasing due to the deep double descent phenomenon and then increasing due to the curse of dimensionality. Consequently, our findings suggest that the combination EEG+BS may not always yield promising classification performance.
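
A minimal sketch of the comparison protocol, with synthetic signals standing in for the open datasets: task/rest trials are classified from FFT magnitude features of EEG alone versus EEG concatenated with a pseudo-random channel. The signal model and classifier settings are assumptions for illustration.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_trials, n_samples = 120, 256
y = rng.integers(0, 2, n_trials)                   # task / rest labels
eeg = rng.standard_normal((n_trials, n_samples))
eeg += y[:, None] * np.sin(np.linspace(0, 8 * np.pi, n_samples))  # class effect
ps = rng.standard_normal((n_trials, n_samples))    # pseudo-random signal (PS)

def fft_features(x):
    return np.abs(np.fft.rfft(x, axis=1))          # magnitude spectrum per trial

for name, X in [("EEG", fft_features(eeg)),
                ("EEG+PS", np.hstack([fft_features(eeg), fft_features(ps)]))]:
    acc = cross_val_score(SVC(kernel="rbf"), X, y, cv=5).mean()
    print(name, f"{acc:.3f}")   # extra uninformative channels tend to hurt
```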

Gear Fault Diagnosis Based on Residual Patterns of Current and Vibration Data by Collaborative Robot's Motions Using LSTM (LSTM을 이용한 협동 로봇 동작별 전류 및 진동 데이터 잔차 패턴 기반 기어 결함진단)

  • Baek Ji Hoon;Yoo Dong Yeon;Lee Jung Won
    • KIPS Transactions on Software and Data Engineering / v.12 no.10 / pp.445-454 / 2023
  • Recently, various fault diagnosis studies have been conducted using data from collaborative robots. Existing studies perform fault diagnosis on collaborative robots with static data collected under the assumed operation of predefined devices, so the diagnosis model becomes heavily dependent on the learned data patterns. In addition, because experiments used a single motor, diagnoses could not reflect the characteristics of collaborative robots that operate with multiple joints. This paper proposes an LSTM diagnostic model that overcomes both limitations. The proposed method selects representative normal patterns through correlation analysis of vibration and current data in single-axis and multi-axis work environments, and generates residual patterns as the differences from these representative patterns. An LSTM model that performs gear wear diagnosis for each axis is built with the residual patterns as inputs. This fault diagnosis model not only reduces dependence on learned data patterns, through per-operation representative patterns, but also diagnoses faults occurring during multi-axis operation. Finally, by reflecting both internal and external data characteristics, the fault diagnosis performance was improved, reaching a high diagnostic performance of 98.57%.
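
A minimal sketch of the residual-pattern idea: normal cycles of current and vibration are averaged into a per-motion representative pattern, residuals are formed by subtraction, and residual windows feed a small LSTM classifier. The shapes, the placeholder recordings, and the tiny PyTorch model are assumptions, not the paper's configuration.

```python
import numpy as np
import torch
import torch.nn as nn

T, F = 200, 2                        # time steps; features = (current, vibration)
normal_cycles = np.random.randn(30, T, F)         # placeholder normal recordings
representative = normal_cycles.mean(axis=0)       # per-motion representative pattern

def to_residual(cycle):
    return cycle - representative                 # residual pattern

class GearLSTM(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(F, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)          # normal vs. gear wear
    def forward(self, x):
        _, (h, _) = self.lstm(x)
        return self.head(h[-1])                   # classify from last hidden state

model = GearLSTM()
batch = torch.tensor(
    np.stack([to_residual(c) for c in normal_cycles[:8]]), dtype=torch.float32)
print(model(batch).shape)                         # (8, 2) class scores
```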

Development of SVM-based Construction Project Document Classification Model to Derive Construction Risk (건설 리스크 도출을 위한 SVM 기반의 건설프로젝트 문서 분류 모델 개발)

  • Kang, Donguk;Cho, Mingeon;Cha, Gichun;Park, Seunghee
    • KSCE Journal of Civil and Environmental Engineering Research / v.43 no.6 / pp.841-849 / 2023
  • Construction projects carry risks from various factors such as construction delays and construction accidents. Given these risks, construction periods are mostly estimated by subjective judgment that relies on supervisor experience. In addition, unreasonably shortening construction to recover schedules delayed by construction delays and disasters leads to negative consequences such as poor construction quality, and delayed schedules cause economic losses through the absence of infrastructure. Data-based, scientific approaches and statistical analysis are needed to manage these project risks. Because data collected in actual construction projects is stored as unstructured text, applying data-based risk analysis requires pre-processing that consumes substantial manpower and cost, so basic data produced by a text-mining classification model is required. In this study, therefore, construction project documents were collected and a document classification model for risk management was developed using text mining and an SVM (Support Vector Machine). Through quantitative analysis in future research, the model is expected to serve as efficient, objective basic data that makes risk management possible in construction project process management.
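
In the spirit of the study's classifier, a minimal scikit-learn sketch: TF-IDF features over construction documents and a linear SVM assigning a risk category. The documents, labels, and category names are invented placeholders.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Placeholder construction-project documents with risk-category labels.
docs = [
    "concrete pour delayed by heavy rain",
    "worker injured near crane during lifting",
    "schedule recovered after overtime work",
    "scaffolding collapse risk flagged by inspector",
]
labels = ["delay", "accident", "delay", "accident"]

# TF-IDF text mining + linear SVM, as a single pipeline.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
clf.fit(docs, labels)
print(clf.predict(["crane operation delayed by inspection"]))
```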

Hybrid Offloading Technique Based on Auction Theory and Reinforcement Learning in MEC Industrial IoT Environment (MEC 산업용 IoT 환경에서 경매 이론과 강화 학습 기반의 하이브리드 오프로딩 기법)

  • Bae Hyeon Ji;Kim Sung Wook
    • KIPS Transactions on Computer and Communication Systems / v.12 no.9 / pp.263-272 / 2023
  • The Industrial Internet of Things (IIoT), with its large-scale connectivity for data collection, exchange, and analysis, is an important factor in increasing production efficiency in industrial sectors. However, as traffic has grown explosively with the recent spread of IIoT, an allocation method that can process this traffic efficiently is required. This thesis proposes a two-stage task offloading decision method to increase successful task throughput in an IIoT environment. It considers a hybrid offloading system that can offload compute-intensive tasks either to a mobile edge computing (MEC) server via a cellular link or to a nearby IIoT device via a device-to-device (D2D) link. The first stage designs an incentive mechanism that prevents devices participating in task offloading from acting selfishly and hindering throughput improvement; McAfee's mechanism is used to control the selfish behavior of the devices that process tasks and to increase overall system throughput. In the second stage, a multi-armed bandit (MAB)-based offloading decision method handles the non-stationary environment created by the irregular movement of IIoT devices. Experimental results show that the proposed method outperforms existing methods in overall system throughput, communication failure rate, and regret.
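
The two stages can be sketched as follows. Stage 1 runs McAfee's double auction to decide, truthfully and budget-balancedly, which buyer/seller device pairs trade offloading service and at what price. Stage 2 uses a discounted epsilon-greedy bandit to pick an offloading target under non-stationarity; this is one concrete MAB variant chosen for illustration, since the abstract does not name the exact algorithm. All values are toy numbers.

```python
import random

def mcafee(bids, asks):
    """McAfee (1992) double auction: returns (trades, buyer price, seller price).
    Truthful and budget-balanced; may sacrifice one efficient trade."""
    b, a = sorted(bids, reverse=True), sorted(asks)
    n = min(len(b), len(a))
    k = 0
    while k < n and b[k] >= a[k]:   # largest k with all k pairs profitable
        k += 1
    if k == 0:
        return 0, None, None
    if k < n:
        p = (b[k] + a[k]) / 2
        if a[k - 1] <= p <= b[k - 1]:
            return k, p, p          # all k pairs trade at single price p
    return k - 1, b[k - 1], a[k - 1]  # else k-1 pairs; buyers pay b_k, sellers get a_k

trades, pay, receive = mcafee(bids=[9, 7, 5, 2], asks=[1, 3, 6, 8])  # -> 2, 5.5, 5.5

# Stage 2 (illustrative): discounted epsilon-greedy over offloading targets.
targets, gamma, eps = ["MEC", "D2D"], 0.95, 0.1
value = {t: 0.0 for t in targets}
weight = {t: 1e-9 for t in targets}
random.seed(0)
for step in range(1000):
    t = (random.choice(targets) if random.random() < eps
         else max(targets, key=lambda s: value[s] / weight[s]))
    reward = random.gauss(1.0 if t == "MEC" else 0.8, 0.1)  # toy reward model
    for s in targets:               # discount old observations (non-stationarity)
        value[s] *= gamma
        weight[s] *= gamma
    value[t] += reward
    weight[t] += 1

print(trades, pay, max(targets, key=lambda s: value[s] / weight[s]))
```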

Quality of Radiomics Research on Brain Metastasis: A Roadmap to Promote Clinical Translation

  • Chae Jung Park;Yae Won Park;Sung Soo Ahn;Dain Kim;Eui Hyun Kim;Seok-Gu Kang;Jong Hee Chang;Se Hoon Kim;Seung-Koo Lee
    • Korean Journal of Radiology / v.23 no.1 / pp.77-88 / 2022
  • Objective: Our study aimed to evaluate the quality of radiomics studies on brain metastases based on the radiomics quality score (RQS), the Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) checklist, and the Image Biomarker Standardization Initiative (IBSI) guidelines. Materials and Methods: PubMed MEDLINE and EMBASE were searched for articles on radiomics for evaluating brain metastases published until February 2021. Of the 572 articles, 29 relevant original research articles were included and evaluated according to the RQS, the TRIPOD checklist, and the IBSI guidelines. Results: External validation was performed in only three studies (10.3%). The median RQS was 3.0 (range, -6 to 12), with a low basic adherence rate of 50.0%. The adherence rate was low for comparison with a "gold standard" (10.3%), statement of potential clinical utility (10.3%), cut-off analysis (3.4%), reporting of calibration statistics (6.9%), and provision of open science and data (3.4%). None of the studies involved test-retest or phantom studies, prospective studies, or cost-effectiveness analyses. The overall rate of adherence to the TRIPOD checklist was 60.3%, and adherence was low for reporting of the title (3.4%), blind assessment of outcome (0%), description of the handling of missing data (0%), and presentation of the full prediction model (0%). The majority of studies lacked pre-processing steps, with bias-field correction, isovoxel resampling, skull stripping, and gray-level discretization performed in only six (20.7%), nine (31.0%), four (13.8%), and four (13.8%) studies, respectively. Conclusion: The overall scientific and reporting quality of radiomics studies on brain metastases published during the study period was insufficient. Radiomics studies should adhere to the RQS, TRIPOD, and IBSI guidelines to facilitate the translation of radiomics into the clinical field.
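
The scoring bookkeeping behind such a review can be sketched briefly: given per-study RQS item scores, compute the median total score and per-item adherence rates. The item names and the two study rows below are fabricated placeholders, not the review's data.

```python
import statistics

items = ["gold_standard", "clinical_utility", "cutoff", "calibration", "open_science"]
# Placeholder per-study RQS item scores (one dict per included study).
studies = [
    {"gold_standard": 0, "clinical_utility": 1, "cutoff": 0, "calibration": 0, "open_science": 0},
    {"gold_standard": 2, "clinical_utility": 0, "cutoff": 0, "calibration": 1, "open_science": 0},
]

totals = [sum(s.values()) for s in studies]
print("median RQS:", statistics.median(totals))
for it in items:
    adherence = sum(s[it] > 0 for s in studies) / len(studies)
    print(f"{it}: {adherence:.1%}")   # per-item adherence rate
```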

Analysis of the Effectiveness of Big Data-Based Six Sigma Methodology: Focus on DX SS (빅데이터 기반 6시그마 방법론의 유효성 분석: DX SS를 중심으로)

  • Kim Jung Hyuk;Kim Yoon Ki
    • KIPS Transactions on Software and Data Engineering / v.13 no.1 / pp.1-16 / 2024
  • Over recent years, Six Sigma has become a key methodology in manufacturing for quality improvement and cost reduction. However, challenges have arisen from the difficulty of analyzing the large-scale data generated by smart factories and from the methodology's traditional, rigidly formal application. To address these limitations, a big data-based Six Sigma approach has been developed that integrates the strengths of Six Sigma and big data analysis, including statistical verification, mathematical optimization, interpretability, and machine learning. Despite its potential, the practical impact of big data-based Six Sigma on manufacturing processes and management performance has not been adequately verified, limiting its reliability and its use in practice. This study investigates the efficiency impact of DX SS, a big data-based Six Sigma methodology, on manufacturing processes and identifies key success policies for its effective introduction and implementation in enterprises. The study highlights the importance of involving all executives and employees and of researching key success policies, as shown by cases in which implementation failed because of incorrect policies. This research aims to help manufacturing companies achieve successful outcomes by actively adopting and utilizing the methodologies presented.