• Title/Summary/Keyword: Performance-Based

Search Result 49,058, Processing Time 0.076 seconds

A study on the advancement of makerspace as a digital transformation platform (디지털전환 플랫폼으로서의 메이커스페이스 고도화 방안연구)

  • Jung, Won-Joong;Choi, NakHyeok
    • Journal of Digital Convergence / v.20 no.5 / pp.599-608 / 2022
  • There has been criticism that government-led makerspaces have not reflected makers' actual demand, limiting the performance they create. Accordingly, this study aims to diagnose the current status and problems of makerspaces and to suggest implications for responding to policy demand. To this end, we analyzed the current status of makerspaces using government documents, then surveyed SMEs dealing with ICT devices that had used makerspaces, analyzing their opinions on D·N·A (Data·Network·AI) technology demand, management difficulties, and government support policies. Based on the results, the study proposes the following measures to upgrade makerspaces into digital transformation platforms. First, given the nature of the existing industries adopting D·N·A technology, companies face limits in entering the market on their own, so comprehensive government support is needed. Second, empirical test beds for developing new products and services should be established and expanded, so that various types of metaverse content can be discovered and digital transformation convergence models for existing businesses can be derived. Third, the support method should be revised so that makerspaces operate as platforms implementing government start-up support policy, boldly transferring to the private sector what it can do well. Participants in industry-university-institute collaboration should freely share ideas and solve common problems together. Based on the identified problems and improvement measures, the current makerspaces are expected to be upgraded into digital transformation platforms suited to the demand of the field.

DNN Model for Calculation of UV Index at The Location of User Using Solar Object Information and Sunlight Characteristics (태양객체 정보 및 태양광 특성을 이용하여 사용자 위치의 자외선 지수를 산출하는 DNN 모델)

  • Ga, Deog-hyun;Oh, Seung-Taek;Lim, Jae-Hyun
    • Journal of Internet Computing and Services / v.23 no.2 / pp.29-35 / 2022
  • UV rays have beneficial or harmful effects on the human body depending on the degree of exposure, so accurate UV information is required for proper individual exposure. In Korea, the ultraviolet index (UVI) is provided by the Korea Meteorological Administration as one component of daily weather information, but it is a regional value and does not give an accurate UVI at the user's exact location. Operating a measuring instrument yields an accurate UVI, but this is costly and inconvenient. Studies estimating the UVI from environmental factors such as solar radiation and cloud amount have been introduced, but they also could not provide a service to individuals. Therefore, this paper proposes a deep learning model that calculates the UVI from solar object information and sunlight characteristics to provide an accurate UVI at an individual's location. After selecting factors considered highly correlated with the UVI, such as the location, size, and illuminance of the sun, obtained by analyzing sky images and solar characteristics data, a data set for a DNN model was constructed. A DNN model that calculates the UVI was then realized by feeding it the solar object information and sunlight characteristics extracted through Mask R-CNN. Considering domestic UVI recommendation standards, the performance evaluation, conducted separately for days with UVI above and below 8, showed that the model calculates the UVI within an MAE of 0.26 compared to the standard equipment.
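
To make the evaluation metric concrete, here is a small sketch of the MAE comparison the abstract describes, split into high-UVI (≥ 8) and low-UVI days; the arrays are made-up illustrative values, not the paper's data.

```python
import numpy as np

# Hypothetical reference (standard equipment) and model-predicted UVI values.
reference = np.array([2.1, 4.5, 8.3, 9.0, 6.2, 8.8])
predicted = np.array([2.3, 4.2, 8.1, 9.4, 6.0, 8.5])

def mae(y_true, y_pred):
    """Mean absolute error between reference and model UVI values."""
    return float(np.mean(np.abs(y_true - y_pred)))

high = reference >= 8                                # days with UVI at or above 8
print(round(mae(reference, predicted), 3))           # overall MAE
print(round(mae(reference[high], predicted[high]), 3))
print(round(mae(reference[~high], predicted[~high]), 3))
```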

ECG Compression and Transmission based on Template Matching (템플릿 매칭 기반의 심전도 압축 전송)

  • Lee, Sang-jin;Kim, Sang-kon;Kim, Tae-kon
    • Journal of Internet Computing and Services / v.23 no.1 / pp.31-38 / 2022
  • An electrocardiogram (ECG) is a recording of the electrical signals of the heart's cyclic activity and important body information for diagnosing myocardial rhythm. A large amount of information is generated continuously, and a significant period of cumulative signal is required to diagnose a specific disease, so compression research, including clinically acceptable lossy techniques, has been developed to reduce the amount of information significantly. Recently, wearable smart heart-monitoring devices that can transmit the ECG are being developed, and the use of the ECG, important personal information for healthcare services, is increasing rapidly. However, such devices generally have limited capability and power consumption for user convenience, and it is often difficult to apply existing compression methods directly; techniques that can process and transmit a large volume of signals with limited resources are essential. This paper proposes a method for compressing and transmitting ECG signals efficiently using the cumulative average (template) of the unit waveform. The ECG is coded losslessly using template matching. Analysis shows that the proposed method is superior to existing compression methods at high compression ratios while its complexity is relatively low, and that compression methods can additionally be applied to the template matching values.
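
As a rough illustration of the cumulative-average idea (a sketch under our own assumptions, not the paper's actual codec), the encoder below transmits each unit waveform's residual against a running-average template; since the decoder mirrors the template update, reconstruction is exact. Beat segmentation and entropy coding of the residuals are omitted.

```python
import numpy as np

def encode(beats):
    """Encode each beat as its residual against a cumulative-average template."""
    template = np.zeros_like(beats[0], dtype=float)
    residuals = []
    for i, beat in enumerate(beats, start=1):
        residuals.append(beat - template)      # small values, cheap to entropy-code
        template += (beat - template) / i      # running (cumulative) average
    return residuals

def decode(residuals):
    """Rebuild beats by mirroring the encoder's template updates."""
    template = np.zeros_like(residuals[0], dtype=float)
    beats = []
    for i, res in enumerate(residuals, start=1):
        beat = template + res
        beats.append(beat)
        template += (beat - template) / i
    return beats

beats = [np.array([0.0, 1.0, 5.0, 1.0]), np.array([0.1, 1.1, 4.9, 1.0])]
recovered = decode(encode(beats))
print(np.allclose(beats, recovered))           # True: the round-trip is exact
```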

Human Rights Sensitivity of University Varsity Teams (대학운동부의 인권감수성)

  • Kim, Eon-Hye;Chang, Ik-Young
    • Journal of Korea Entertainment Industry Association / v.14 no.8 / pp.427-436 / 2020
  • This study aims to understand human rights sensitivity in university varsity teams and to compare and analyze differences in human rights sensitivity across variables related to those teams. To achieve this purpose, 188 student-athletes from 10 universities were selected. The collected data were analyzed with descriptive statistics, reliability analysis, one-way ANOVA, and Scheffe post-hoc tests using IBM SPSS 24.0. First, among the episodes, those with the highest human rights sensitivity concerned migrant workers' right to labor and the right to happiness, and those with the lowest concerned the right to freedom from detention and the right to privacy. In addition, among the sub-factors of human rights sensitivity, perception of responsibility and perception of behavior were higher than perception of outcome. Second, human rights sensitivity differed depending on the size and performance level of the university varsity team. Third, human rights sensitivity differed depending on the educational characteristics (volunteer activity and human rights education) of the university varsity team.

Experimental Comparison of Network Intrusion Detection Models Solving Imbalanced Data Problem (데이터의 불균형성을 제거한 네트워크 침입 탐지 모델 비교 분석)

  • Lee, Jong-Hwa;Bang, Jiwon;Kim, Jong-Wouk;Choi, Mi-Jung
    • KNOM Review / v.23 no.2 / pp.18-28 / 2020
  • With the development of the virtual community, the benefits that IT technology provides in fields such as healthcare, industry, communication, and culture are increasing, and the quality of life is also improving. Accordingly, various malicious attacks target this developed network environment. Firewalls and intrusion detection systems exist to detect such attacks in advance, but they are limited in detecting malicious attacks that evolve day by day. To address this, intrusion detection research using machine learning is being actively conducted, but false positives and false negatives occur due to imbalance in the training dataset. In this paper, random oversampling is used to solve the imbalance problem of the UNSW-NB15 dataset used for network intrusion detection. Through experiments, we compare and analyze the accuracy, precision, recall, F1-score, training and prediction time, and hardware resource consumption of the models. Building on this study of random oversampling, we plan to develop more efficient network intrusion detection models using other methods and high-performance models that can solve the imbalanced data problem.
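
Random oversampling itself is simple to sketch; the toy implementation below (not tied to UNSW-NB15) duplicates minority-class rows at random, with replacement, until every class matches the majority-class count.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_oversample(X, y):
    """Balance classes by duplicating minority-class samples at random."""
    classes, counts = np.unique(y, return_counts=True)
    target = counts.max()
    idx = []
    for c, n in zip(classes, counts):
        members = np.flatnonzero(y == c)
        idx.extend(members)                               # keep all originals
        idx.extend(rng.choice(members, size=target - n))  # random duplicates
    idx = np.array(idx)
    return X[idx], y[idx]

X = np.arange(10).reshape(5, 2)
y = np.array([0, 0, 0, 0, 1])             # 4:1 imbalance (e.g. normal vs. attack)
Xb, yb = random_oversample(X, y)
print(np.bincount(yb))                    # → [4 4], classes are now balanced
```

In practice the oversampling is applied only to the training split, so that duplicated rows never leak into the evaluation data.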

Evaluation of Practical Requirements for Automated Detailed Design Module of Interior Finishes in Architectural Building Information Model (건축 내부 마감부재의 BIM 기반 상세설계 자동화를 위한 실무적 요구사항 분석)

  • Hong, Sunghyun;Koo, Bonsang;Yu, Youngsu;Ha, Daemok;Won, Youngkwon
    • Korean Journal of Construction Engineering and Management / v.23 no.5 / pp.87-97 / 2022
  • Although the use of BIM in architectural projects has increased, repetitive modeling tasks and frequent design errors remain obstacles to its practical application. In particular, interior finishing elements involve the most varied and detailed requirements, and thus require improved modeling efficiency and resolution of potential design errors. Recently, visual programming-based modules have gained traction as a way to automate a series of repetitive modeling tasks, but existing approaches do not adequately reflect practical modeling needs and focus only on replacing simple, repetitive tasks. This study developed and evaluated the performance of three modules for the automatic detailing of walls, floors, and ceilings. The three elements were selected by analyzing the man-hours and the number of errors that typically occur when detailing BIM models. The modules were then applied to automatically detail a sample commercial facility BIM model. The results showed that the implementations met the practical modeling requirements identified by actual modelers at a construction management firm.

A Study on Global Blockchain Economy Ecosystem Classification and Intelligent Stock Portfolio Performance Analysis (글로벌 블록체인 경제 생태계 분류와 지능형 주식 포트폴리오 성과 분석)

  • Kim, Honggon;Ryu, Jongha;Shin, Woosik;Kim, Hee-Woong
    • Journal of Intelligence and Information Systems / v.28 no.3 / pp.209-235 / 2022
  • Starting from 2010, blockchain technology, along with the development of artificial intelligence, has been in the spotlight as a key technology leading the 4th industrial revolution, and research on its technological applications has been ongoing. However, few studies have examined standards for classifying the blockchain economic ecosystem from a capital market perspective. This study combines interviews with software developers, entrepreneurs, market participants, and experts who use blockchain technology, together with case studies of the blockchain economic ecosystem organized by the application fields of the technology, in order to utilize that ecosystem from a capital market perspective for equity investment. In addition, as an approach that can be connected to equity investment in the capital market, the blockchain economic ecosystem classification methodology was used to form an investment universe of global blue-chip stocks, to construct an intelligent portfolio through quantitative and qualitative analysis based on quant and artificial intelligence strategies, and to evaluate its performance. Finally, a successful investment strategy reflecting the growth of the blockchain economic ecosystem is presented. This study not only classifies and analyzes blockchain standardization as a blockchain economic ecosystem from a capital market, rather than a technical, point of view, but also constructs a portfolio targeting global blue-chip stocks and develops strategies to achieve superior performance. It provides insights that fuse global equity investment with the perspectives of investment theory and the economy, and thus has practical implications for the development of capital markets.

A Method of Machine Learning-based Defective Health Functional Food Detection System for Efficient Inspection of Imported Food (효율적 수입식품 검사를 위한 머신러닝 기반 부적합 건강기능식품 탐지 방법)

  • Lee, Kyoungsu;Bak, Yerin;Shin, Yoonjong;Sohn, Kwonsang;Kwon, Ohbyung
    • Journal of Intelligence and Information Systems / v.28 no.3 / pp.139-159 / 2022
  • As interest in health functional foods has increased since COVID-19, the importance of imported food safety inspections is growing. However, in contrast to the annual increase in imports of health functional foods, the budget and manpower available for import/export inspections are reaching their limit. The purpose of this study is therefore to propose a machine learning model that efficiently detects noncompliant products, suited to the characteristics of the imported-food data held by government offices. First, the components of food import/export inspection data that affect nonconformity judgments were examined, and derived variables were newly created. Second, to select features for machine learning, class imbalance and nonlinearity were considered in the exploratory analysis of imported-food data. Third, we compare the performance and interpretability of various machine learning techniques. The ensemble model performed best, confirming that the derived variables and models proposed in this study can support the system used in import/export inspections.

Corneal Ulcer Region Detection With Semantic Segmentation Using Deep Learning

  • Im, Jinhyuk;Kim, Daewon
    • Journal of the Korea Society of Computer and Information / v.27 no.9 / pp.1-12 / 2022
  • Traditional methods of assessing corneal ulcers relied on the subjective judgment of medical staff examining photographs taken with special equipment, making it difficult to present an objective basis for diagnosis. In this paper, we propose a method to detect the ulcer area on a pixel basis in corneal ulcer images using a semantic segmentation model. We conducted experiments based on the DeepLab model, which has the highest performance among semantic segmentation models. Training and test data were selected, and the DeepLab model was evaluated with its backbone network set to Xception and to ResNet, comparing the performances using the Dice similarity coefficient and IoU as indicators. Experimental results show that when 'crop & resized' images are added to the dataset, the DeepLab model with a ResNet101 backbone segments the ulcer area with an average Dice similarity coefficient of about 93%. This study shows that semantic segmentation models used for object detection can also produce significant results when classifying objects with irregular shapes such as corneal ulcers. In future studies, we will extend the datasets and experiment with adaptive learning methods so that the approach can be applied in real medical diagnosis environments.
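
The two indicators named above are standard mask-overlap metrics; a minimal sketch on toy binary masks (not corneal data):

```python
import numpy as np

def dice(pred, gt):
    """Dice similarity coefficient: 2|A∩B| / (|A| + |B|)."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def iou(pred, gt):
    """Intersection over union: |A∩B| / |A∪B|."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union

gt   = np.array([[0, 1, 1], [0, 1, 0]])   # ground-truth ulcer pixels
pred = np.array([[0, 1, 0], [0, 1, 1]])   # model prediction
print(dice(pred, gt), iou(pred, gt))      # 2*2/(3+3) ≈ 0.667, 2/4 = 0.5
```

Dice weights the overlap against the two mask sizes, while IoU penalizes the union, so Dice is always at least as large as IoU on the same pair of masks.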

Improved Estimation of Hourly Surface Ozone Concentrations using Stacking Ensemble-based Spatial Interpolation (스태킹 앙상블 모델을 이용한 시간별 지상 오존 공간내삽 정확도 향상)

  • KIM, Ye-Jin;KANG, Eun-Jin;CHO, Dong-Jin;LEE, Si-Woo;IM, Jung-Ho
    • Journal of the Korean Association of Geographic Information Studies / v.25 no.3 / pp.74-99 / 2022
  • Surface ozone is produced by photochemical reactions of nitrogen oxides (NOx) and volatile organic compounds (VOCs) emitted from vehicles and industrial sites, and adversely affects vegetation and the human body. In South Korea, ozone is monitored in real time at stations (i.e., point measurements), but its continuous spatial distribution is difficult to monitor and analyze. In this study, surface ozone concentrations were interpolated to a spatial resolution of 1.5 km every hour using a stacking ensemble technique, evaluated with 5-fold cross-validation. The base models for the stacking ensemble were cokriging, multiple linear regression (MLR), random forest (RF), and support vector regression (SVR), while MLR was used as the meta model, taking all base model results as additional input variables. The stacking ensemble model yielded better performance than the individual base models, with an averaged R of 0.76 and an RMSE of 0.0065 ppm over the study period of 2020. The surface ozone concentration distribution generated by the stacking ensemble model had a wider range, with a spatial pattern similar to terrain and urbanization variables, compared to those of the base models. The proposed model should not only be capable of producing the hourly spatial distribution of ozone but also be highly applicable for calculating daily maximum 8-hour ozone concentrations.
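
A stacking ensemble with an MLR meta-model can be sketched in a few lines. The toy example below uses synthetic data and simple noisy stand-ins for the base-model outputs (the paper's base learners are cokriging, MLR, RF, and SVR, and its base predictions come from cross-validation, not in-sample fits).

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
X = rng.normal(size=(n, 3))                              # e.g. met/terrain predictors
y = X @ np.array([0.5, -0.2, 0.1]) + rng.normal(scale=0.05, size=n)

# Stand-ins for two base models' predictions (normally out-of-fold predictions).
base1 = y + rng.normal(scale=0.1, size=n)
base2 = y + rng.normal(scale=0.2, size=n)

# Meta model: multiple linear regression over the original inputs
# plus the base-model predictions as additional input variables.
Z = np.column_stack([np.ones(n), X, base1, base2])       # intercept + features
coef, *_ = np.linalg.lstsq(Z, y, rcond=None)
pred = Z @ coef
rmse = float(np.sqrt(np.mean((pred - y) ** 2)))
print(round(rmse, 4))                                    # small residual error
```

Because the meta-model sees both the raw predictors and every base model's output, it can weight the base learners by how informative each one is, which is what gives stacking its edge over any single base model.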