• Title/Summary/Keyword: 데이터 분석성능 (data analysis performance)


Electric Vehicle Wireless Charging Control Module EMI Radiated Noise Reduction Design Study (전기차 무선충전컨트롤 모듈 EMI 방사성 잡음 저감에 관한 설계 연구)

  • Seungmo Hong
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.16 no.2
    • /
    • pp.104-108
    • /
    • 2023
  • With the recent expansion of the electric vehicle market, the need to improve vehicle performance and safety has grown rapidly. EMI problems caused by the interaction of electrical components, which can lead to safety hazards such as fires in electric vehicles, continue to emerge. In this study, the wireless charging control module, one of the key components of an electric vehicle, was designed and tested to achieve optimal charging efficiency by combining various technologies and to reduce the radiated component of its EMI noise. To analyze the EMI problems occurring in the wireless charging control module, an optimized module was designed by applying optimization techniques that learn from accumulated test data on critical factors, using the Python-based scripting capability of the Ansys simulation tool. The optimized module showed an EMI noise improvement of 25 dBuV/m compared with the original charging control module. These results not only contribute to more stable and reliable wireless charging in electric vehicles, but also increase their usability and efficiency, making them a more viable environmentally friendly alternative.
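The paper's optimization loop is not publicly specified, but the kind of script-driven sweep it describes (Python driving a simulator over critical design factors and keeping the design that minimizes radiated noise) can be sketched as follows. The `simulate_emi` objective and its two parameters are stand-in assumptions, not the Ansys API or the study's actual factors.

```python
import itertools

def simulate_emi(filter_c_nf, shield_gap_mm):
    """Stand-in for a solver call (the real study scripts Ansys via Python).
    Returns radiated noise in dBuV/m; this closed form is purely illustrative."""
    return 60.0 + 0.8 * abs(filter_c_nf - 47) + 2.5 * shield_gap_mm

def optimize(candidates_c, candidates_gap):
    """Exhaustive sweep over the design factors, keeping the quietest design."""
    best = min(itertools.product(candidates_c, candidates_gap),
               key=lambda p: simulate_emi(*p))
    return best, simulate_emi(*best)

design, noise = optimize([22, 33, 47, 68], [0.5, 1.0, 2.0])
```

In practice, the simulator call dominates the run time, so a learned surrogate model over the accumulated test data (as the abstract suggests) would replace the exhaustive sweep.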

Development of BIM and Augmented Reality-Based Reinforcement Inspection System for Improving Quality Management Efficiency in Railway Infrastructure (철도 인프라 품질관리 효율성 향상을 위한 BIM 기반 AR 철근 점검 시스템 구축)

  • Suk, Chaehyun;Jeong, Yujeong;Jeon, Haein;Yu, Youngsu;Koo, Bonsang
    • Korean Journal of Construction Engineering and Management
    • /
    • v.24 no.6
    • /
    • pp.63-65
    • /
    • 2023
  • BIM and AR technologies have been assessed as a means of enhancing productivity in the construction industry by providing effortless on-site access to critical data, achieved by projecting 3D models and associated information onto actual structures. However, most previous research on applying AR to construction quality management has addressed construction projects in general, yielding only broad on-site management solutions. The few studies that applied AR to the quality management of specific elements such as reinforcements focused only on simple projection, so specific quality inspection was impossible. Hence, this study aimed to develop a practically applicable BIM-based AR quality management system targeted at reinforcements. To develop the system, the reinforcement inspection items on the quality checklist used at railway construction sites were analyzed, and four types of AR functions that effectively address these items were developed and installed. Validation of the system on an actual railway bridge initially showed degraded projection stability; this problem was solved through model simplification and an upgrade of the AR device's hardware, after which normal operation of the system was confirmed. Finally, on-site quality experts evaluated the developed reinforcement inspection system for practical applicability and judged that inspection efficiency would increase significantly with the AR system compared to the current inspection method for reinforcements.

Optimal Design of Overtopping Wave Energy Converter Substructure based on Smoothed Particle Hydrodynamics and Structural Analysis (SPH 및 구조해석에 기반한 월파수류형 파력발전기 하부구조물 최적 설계)

  • Sung-Hwan An;Jong-Hyun Lee;Geun-Gon Kim;Dong-hoon Kang
    • Journal of the Korean Society of Marine Environment & Safety
    • /
    • v.29 no.7
    • /
    • pp.992-1001
    • /
    • 2023
  • The Overtopping Wave Energy Converter (OWEC) is a wave power generation system that uses wave overtopping. The performance and safety of the OWEC are affected by wave characteristics such as wave height and period, so optimal OWEC designs based on wave characteristics must be investigated. In this study, the environmental conditions along the Ulleungdo coast were used. The hydraulic efficiency of the OWEC was calculated using Smoothed Particle Hydrodynamics (SPH) for four models with different substructures, and the comparison confirmed that the substructure could be changed. Through design optimization, a new truss-type substructure capable of carrying the design load was proposed. A case study using member diameter and thickness as design variables showed that structural safety was secured under allowable stress conditions. Considering the wave load, the natural frequency of the proposed structure was compared with the wave periods of the relevant sea area, and harmonic response analysis was performed using a wave with a one-year return period as the load. The proposed substructure showed a reduced response magnitude under the same excitation force and achieved a weight reduction of more than 32%.
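The abstract's case study (member diameter and thickness as design variables under an allowable-stress condition) can be illustrated with a simple axial-stress check on a circular hollow section; the load, allowable stress, and candidate dimensions below are placeholder assumptions, not the paper's values.

```python
import math

def axial_stress(force_n, d_outer_mm, thickness_mm):
    """Axial stress (MPa) in a circular hollow section member."""
    d_inner = d_outer_mm - 2.0 * thickness_mm
    area_mm2 = math.pi / 4.0 * (d_outer_mm**2 - d_inner**2)
    return force_n / area_mm2  # N/mm^2 == MPa

def feasible_designs(force_n, sigma_allow_mpa, diameters, thicknesses):
    """Enumerate (D, t) pairs satisfying the allowable-stress condition,
    sorted by cross-sectional area (a proxy for member weight)."""
    ok = [(d, t) for d in diameters for t in thicknesses
          if axial_stress(force_n, d, t) <= sigma_allow_mpa]
    return sorted(ok, key=lambda dt: math.pi / 4.0 *
                  (dt[0]**2 - (dt[0] - 2.0 * dt[1])**2))

# hypothetical 500 kN member load, 150 MPa allowable stress
designs = feasible_designs(500e3, 150.0, [150, 200, 250], [6, 8, 10])
lightest = designs[0]  # smallest area that still satisfies the stress limit
```

The real study adds natural-frequency and harmonic-response checks on top of this stress screening, which a one-variable sweep like this cannot capture.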

Hybrid Offloading Technique Based on Auction Theory and Reinforcement Learning in MEC Industrial IoT Environment (MEC 산업용 IoT 환경에서 경매 이론과 강화 학습 기반의 하이브리드 오프로딩 기법)

  • Bae Hyeon Ji;Kim Sung Wook
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.12 no.9
    • /
    • pp.263-272
    • /
    • 2023
  • The Industrial Internet of Things (IIoT) is an important factor in increasing production efficiency in industrial sectors through large-scale connectivity combined with data collection, exchange, and analysis. However, as traffic grows explosively with the recent spread of IIoT, an allocation method that can process this traffic efficiently is required. In this thesis, I propose a two-stage task offloading decision method to increase successful task throughput in an IIoT environment. I consider a hybrid offloading system that can offload compute-intensive tasks either to a mobile edge computing (MEC) server via a cellular link or to a nearby IIoT device via a device-to-device (D2D) link. In the first stage, an incentive mechanism is designed to prevent devices participating in task offloading from acting selfishly and hindering improvements in task throughput; McAfee's mechanism is used to control the selfish behavior of the devices that process tasks and to increase overall system throughput. In the second stage, I propose a multi-armed bandit (MAB)-based task offloading decision method for a non-stationary environment that accounts for the irregular movement of IIoT devices. Experimental results show that the proposed method outperforms existing methods in terms of overall system throughput, communication failure rate, and regret.
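The stage-2 idea (an MAB policy for a non-stationary environment) can be sketched with a sliding-window UCB over the two offload targets; the window variant, arm rewards, and drift pattern below are illustrative assumptions, not the paper's exact algorithm or setup.

```python
import math
import random
from collections import deque

class SlidingWindowUCB:
    """Sliding-window UCB for non-stationary bandits: only the last `window`
    observations per arm inform the estimate, so the policy can re-adapt when
    a link's quality drifts (e.g. as an IIoT device moves away)."""
    def __init__(self, n_arms, window=50, c=1.0):
        self.hist = [deque(maxlen=window) for _ in range(n_arms)]
        self.c, self.t = c, 0

    def select(self):
        self.t += 1
        for arm, h in enumerate(self.hist):   # play each arm once first
            if not h:
                return arm
        return max(range(len(self.hist)), key=lambda a:
                   sum(self.hist[a]) / len(self.hist[a])
                   + self.c * math.sqrt(math.log(self.t) / len(self.hist[a])))

    def update(self, arm, reward):
        self.hist[arm].append(reward)

# toy run: arm 0 = MEC server via cellular, arm 1 = nearby device via D2D
random.seed(0)
bandit = SlidingWindowUCB(n_arms=2)
for step in range(400):
    arm = bandit.select()
    # D2D success rate collapses halfway through, simulating device movement
    p = [0.6, 0.9 if step < 200 else 0.2][arm]
    bandit.update(arm, 1.0 if random.random() < p else 0.0)
```

Because stale observations fall out of each deque, the policy shifts back toward the MEC arm after the D2D link degrades, which is the behavior a non-stationary MAB is chosen for.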

The Measurement Algorithm for Microphone's Frequency Character Response Using OATSP (OATSP를 이용한 마이크로폰의 주파수 특성 응답 측정 알고리즘)

  • Park, Byoung-Uk;Kim, Hack-Yoon
    • The Journal of the Acoustical Society of Korea
    • /
    • v.26 no.2
    • /
    • pp.61-68
    • /
    • 2007
  • The frequency response of a microphone, which indicates the frequency range over which the microphone can output within an approved level, is one of the most significant measures of a microphone's characteristics. At present, conventional methods of measuring the frequency response are complicated and require expensive equipment. To address these disadvantages, this paper suggests a new algorithm that can measure the frequency response of a microphone in a simple manner. The suggested algorithm plays the Optimized Aoshima's Time Stretched Pulse (OATSP) signal, generated by a computer, through a standard speaker, and measures the impulse response of the microphone under test by convolving the inverse OATSP signal with the signal received by that microphone. The frequency response of the microphone is then calculated from the resulting impulse response. The algorithm's performance was tested through a comparative analysis of reference frequency response data against the frequency responses measured by the algorithm. The test proved that the algorithm is suitable for measuring the frequency response of a microphone and that, despite a few errors, all results fall within the error tolerance.
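The measurement idea, recovering the microphone's impulse response by convolving the recorded signal with the inverse pulse and then inspecting its spectrum, can be verified numerically. The swept-pulse definition below is a simplified TSP-like stand-in, not Aoshima's exact OATSP formula.

```python
import numpy as np

N, m = 1024, 256
k = np.arange(N)
# Simplified swept-sine (TSP-like) signal defined in the frequency domain:
# quadratic phase (linear group delay), flat magnitude. A stand-in for the
# exact OATSP definition.
half = np.exp(-1j * 2 * np.pi * m * (k[:N // 2 + 1] / N) ** 2)
S = np.concatenate([half, np.conj(half[-2:0:-1])])   # Hermitian -> real signal
tsp = np.real(np.fft.ifft(S))
inv_tsp = np.real(np.fft.ifft(1.0 / S))              # inverse filter: S * 1/S = 1

# Simulate a microphone as a short FIR system and "record" the TSP through it.
h_true = np.zeros(N)
h_true[:4] = [1.0, 0.5, 0.25, 0.125]
recorded = np.real(np.fft.ifft(np.fft.fft(tsp) * np.fft.fft(h_true)))

# Circular convolution with the inverse TSP recovers the impulse response;
# the magnitude of its FFT is the frequency response to be compared/plotted.
h_est = np.real(np.fft.ifft(np.fft.fft(recorded) * np.fft.fft(inv_tsp)))
freq_response = np.abs(np.fft.fft(h_est))
```

Since the pulse has unit magnitude at every frequency bin, the deconvolution is exact up to floating-point error, which is why the TSP family is attractive for impulse response measurement.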

Investigation of Verification and Evaluation Methods for Tampering Response Techniques Using HW Security Modules (HW 보안 모듈을 활용한 탬퍼링 대응 기술의 검증 및 평가 방안 조사)

  • Dongho Lee;Younghoon Ban;Jae-Deok Lim;Haehyun Cho
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.34 no.2
    • /
    • pp.335-345
    • /
    • 2024
  • In the digital era, data security has become an increasingly critical issue and draws significant attention. In particular, anti-tampering technology has emerged as a key defense mechanism against indiscriminate hacking and unauthorized access. This paper explores case studies that exemplify trends in the development and application of the Trusted Platform Module (TPM) and software anti-tampering technology in today's digital ecosystem. By analyzing various existing security guides and guidelines, this paper identifies ambiguous areas within them and investigates recent domestic and international research trends in software anti-tampering. While guidelines exist for applying anti-tampering techniques, it was found that methods for evaluating them are lacking. Therefore, this paper proposes a comprehensive and systematic framework for assessing both existing and future software anti-tampering techniques, drawing on the various verification methods employed in recent research. The proposed evaluation framework synthesizes these methods and categorizes them into three aspects (functionality, implementation, and performance), thereby providing a comprehensive and systematic approach for evaluating software anti-tampering technology in detail.

The Pattern Analysis of Financial Distress for Non-audited Firms using Data Mining (데이터마이닝 기법을 활용한 비외감기업의 부실화 유형 분석)

  • Lee, Su Hyun;Park, Jung Min;Lee, Hyoung Yong
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.4
    • /
    • pp.111-131
    • /
    • 2015
  • Only a handful of studies have analyzed patterns of corporate distress, compared with the large body of research on bankruptcy prediction. The few that exist mainly focus on audited firms, because financial data are easier to collect for them. In reality, however, corporate financial distress is a far more common and critical phenomenon for non-audited firms, which are mainly small and medium-sized. The purpose of this paper is to classify distressed non-audited firms according to their financial ratios using a data mining technique, the Self-Organizing Map (SOM). The SOM is a type of artificial neural network trained with unsupervised learning to produce a low-dimensional discretized representation of the input space of the training samples, called a map. It differs from other artificial neural networks in that it applies competitive learning rather than error-correction learning such as backpropagation with gradient descent, and in that it uses a neighborhood function to preserve the topological properties of the input space; it is one of the most popular and successful clustering algorithms. In the empirical test, we collected 10 financial ratios of 100 non-audited firms that were under distress in 2004, covering the previous two years (2002 and 2003). Using these financial ratios and the SOM algorithm, five distinct patterns were distinguished. In pattern 1, financial distress was very serious in almost all financial ratios; 12% of the firms fell into this pattern. In pattern 2, financial distress was weak in almost all financial ratios; 14% of the firms fell into this pattern. In pattern 3, the growth ratio was the worst among all patterns; it is speculated that these firms may be under distress due to severe competition in their industries, and approximately 30% of the firms fell into this group.
In pattern 4, the growth ratio was higher than in any other pattern, but the cash and profitability ratios did not keep pace with it; these firms appear to have fallen into distress while expanding their business, and about 25% of the firms showed this pattern. Finally, pattern 5 encompassed very solvent firms; these firms were perhaps distressed by a bad short-term strategic decision or by problems with the entrepreneurs running them, and approximately 18% of the firms fell into this pattern. This study makes both academic and empirical contributions. Academically, it applies a data mining technique (the Self-Organizing Map) to non-audited companies, which tend to go bankrupt easily and whose financial data are unstructured or easily manipulated, rather than to large audited firms with well-prepared, reliable financial data. Empirically, even though only the financial data of non-audited firms were analyzed, the results are useful for detecting the first symptoms of financial distress, which supports bankruptcy forecasting and early-warning signals. A limitation of this research is that only 100 companies were analyzed, owing to the difficulty of collecting financial data for non-audited firms, which made analysis by category or size difference impossible. In addition, non-financial qualitative data are crucial for bankruptcy analysis, so non-financial qualitative factors should be taken into account in future work. This study sheds some light on distress prediction for non-audited small and medium-sized firms.
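The SOM training loop used to map firms' financial ratios onto a low-dimensional grid can be sketched in a few lines of numpy. This is a generic SOM with competitive learning and a shrinking Gaussian neighborhood, not the study's exact configuration, and the input here is random stand-in data for the 100 firms x 10 ratios.

```python
import numpy as np

def train_som(data, grid=(5, 5), epochs=100, lr0=0.5, sigma0=2.0, seed=42):
    """Train a small self-organizing map: competitive learning (winner and its
    grid neighbors move toward each sample), no error backpropagation."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.random((h * w, data.shape[1]))
    coords = np.array([(i, j) for i in range(h) for j in range(w)], float)
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)                  # decaying learning rate
        sigma = sigma0 * (1 - t / epochs) + 1e-3     # shrinking neighborhood
        for x in rng.permutation(data):
            bmu = np.argmin(((weights - x) ** 2).sum(axis=1))  # best matching unit
            d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
            theta = np.exp(-d2 / (2 * sigma ** 2))   # Gaussian neighborhood
            weights += lr * theta[:, None] * (x - weights)
    return weights

# stand-in for 100 firms x 10 financial ratios (the real data is not public)
rng = np.random.default_rng(0)
firms = rng.random((100, 10))
som = train_som(firms)
clusters = np.argmin(((som[None, :, :] - firms[:, None, :]) ** 2).sum(-1), axis=1)
```

After training, firms mapped to the same (or adjacent) grid units form the distress patterns; the study's five patterns would come from grouping such units and reading off their characteristic ratios.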

KNU Korean Sentiment Lexicon: Bi-LSTM-based Method for Building a Korean Sentiment Lexicon (Bi-LSTM 기반의 한국어 감성사전 구축 방안)

  • Park, Sang-Min;Na, Chul-Won;Choi, Min-Seong;Lee, Da-Hee;On, Byung-Won
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.4
    • /
    • pp.219-240
    • /
    • 2018
  • Sentiment analysis, one of the text mining techniques, is a method for extracting subjective content embedded in text documents. Sentiment analysis methods have recently been widely used in many fields: for example, data-driven surveys analyze the subjectivity of text data posted by users, and market research quantifies users' reputation of a target product by analyzing their review posts. The basic method of sentiment analysis uses a sentiment dictionary (or lexicon), a list of sentiment vocabulary items with positive, neutral, or negative semantics. In general, the meaning of many sentiment words differs across domains. For example, the sentiment word 'sad' carries a negative meaning in most domains, but not necessarily in movie reviews. To perform accurate sentiment analysis, we therefore need to build a sentiment dictionary for the given domain. However, building such a lexicon is time-consuming, and many sentiment vocabulary items are missed unless a general-purpose sentiment lexicon is used as a starting point. To address this problem, several studies have constructed domain-specific sentiment lexicons based on the general-purpose lexicons 'OPEN HANGUL' and 'SentiWordNet'. However, OPEN HANGUL is no longer in service, and SentiWordNet does not work well because of language differences in the process of converting Korean words into English. Such restrictions limit the use of these general-purpose lexicons as seed data for building a domain-specific sentiment lexicon. In this article, we construct the 'KNU Korean Sentiment Lexicon (KNU-KSL)', a new general-purpose Korean sentiment dictionary that is more advanced than existing general-purpose lexicons.
The proposed dictionary, a list of domain-independent sentiment words such as 'thank you', 'worthy', and 'impressed', is built so that a sentiment dictionary for a target domain can be constructed quickly. In particular, it builds sentiment vocabulary by analyzing the glosses contained in the Standard Korean Language Dictionary (SKLD) through the following procedure: first, we propose a sentiment classification model based on Bidirectional Long Short-Term Memory (Bi-LSTM); second, the proposed deep learning model automatically classifies each gloss as having positive or negative meaning; third, positive words and phrases are extracted from the glosses classified as positive, while negative words and phrases are extracted from those classified as negative. Our experimental results show that the average accuracy of the proposed sentiment classification model reaches 89.45%. The sentiment dictionary is further extended using various external sources, including SentiWordNet, SenticNet, Emotional Verbs, and Sentiment Lexicon 0603, and we add sentiment information about frequently used coined words and emoticons that appear mainly on the Web. KNU-KSL contains a total of 14,843 sentiment vocabulary items, each of which is a 1-gram, 2-gram, phrase, or sentence pattern. Unlike existing sentiment dictionaries, it is composed of words that are not affected by particular domains. The recent trend in sentiment analysis is to use deep learning techniques without sentiment dictionaries, and the perceived importance of developing sentiment dictionaries has gradually declined. However, a recent study shows that the words in a sentiment dictionary can be used as features of deep learning models, yielding higher sentiment analysis accuracy (Teng, Z., 2016). This result indicates that a sentiment dictionary is useful not only for lexicon-based sentiment analysis but also as a feature source for deep learning models.
The proposed dictionary can be used as basic data for constructing the sentiment lexicon of a particular domain and as features for deep learning models. It is also useful for automatically and quickly building large training sets for deep learning models.
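The core of the pipeline, running each dictionary gloss through a bidirectional LSTM and classifying it as positive or negative, can be sketched with a forward-only numpy Bi-LSTM over toy embeddings. The weights here are random and untrained (a real system would learn them, e.g. in PyTorch or Keras); the dimensions are illustrative assumptions.

```python
import numpy as np

def lstm_forward(x, W, U, b):
    """One-direction LSTM over a (T, d) sequence; returns the final hidden
    state. Gate order in the stacked weights: input, forget, cell, output."""
    hdim = U.shape[1]
    h, c = np.zeros(hdim), np.zeros(hdim)
    for t in range(x.shape[0]):
        z = W @ x[t] + U @ h + b
        i, f, g, o = np.split(z, 4)
        i, f, o = 1/(1+np.exp(-i)), 1/(1+np.exp(-f)), 1/(1+np.exp(-o))
        g = np.tanh(g)
        c = f * c + i * g
        h = o * np.tanh(c)
    return h

def bilstm_classify(seq, params):
    """Concatenate forward and backward final states, then a logistic head:
    output > 0.5 -> positive gloss, otherwise negative."""
    h_fwd = lstm_forward(seq, *params["fwd"])
    h_bwd = lstm_forward(seq[::-1], *params["bwd"])
    logit = params["w_out"] @ np.concatenate([h_fwd, h_bwd]) + params["b_out"]
    return 1 / (1 + np.exp(-logit))

rng = np.random.default_rng(1)
d, hdim = 8, 16   # toy embedding and hidden sizes (assumptions)
params = {
    "fwd": (rng.normal(0, 0.1, (4*hdim, d)), rng.normal(0, 0.1, (4*hdim, hdim)),
            np.zeros(4*hdim)),
    "bwd": (rng.normal(0, 0.1, (4*hdim, d)), rng.normal(0, 0.1, (4*hdim, hdim)),
            np.zeros(4*hdim)),
    "w_out": rng.normal(0, 0.1, 2*hdim), "b_out": 0.0,
}
gloss_embedding = rng.normal(0, 1, (5, d))   # 5 tokens of one gloss, as vectors
p_positive = bilstm_classify(gloss_embedding, params)
```

Glosses scored above the threshold would feed the positive word/phrase extraction step, and the rest the negative one, matching the three-step procedure the abstract describes.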

Application of MicroPACS Using the Open Source (Open Source를 이용한 MicroPACS의 구성과 활용)

  • You, Yeon-Wook;Kim, Yong-Keun;Kim, Yeong-Seok;Won, Woo-Jae;Kim, Tae-Sung;Kim, Seok-Ki
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.13 no.1
    • /
    • pp.51-56
    • /
    • 2009
  • Purpose: Recently, most hospitals have introduced PACS systems, and use of these systems continues to expand. A small-scale PACS, called MicroPACS, can already be built with open source programs. The aim of this study is to prove the utility of operating a MicroPACS as a substitute back-up device for conventional storage media such as CDs and DVDs, in addition to the full PACS already in use. This study describes how to set up a MicroPACS with open source programs and assesses its storage capacity, stability, compatibility, and the performance of operations such as "retrieve" and "query". Materials and Methods: 1. To establish the MicroPACS, we searched for open source software meeting the following criteria: (1) it must run on the Windows operating system; (2) it must be freeware; (3) it must be compatible with the PET/CT scanner; (4) it must be easy to use; (5) it must not limit storage capacity; (6) it must support DICOM. 2. (1) To evaluate data storage capability, we compared the time spent backing up data with the open source software against optical discs (CDs and DVD-RAMs), and likewise compared the time needed to retrieve data from each. (2) To estimate work efficiency, we measured the time spent finding data on CDs, DVD-RAMs, and the MicroPACS; seven technologists participated in this part of the study. 3. To evaluate the stability of the software, we examined whether any data loss occurred while the system was maintained for a year, and compared this with the number of errors found in 500 randomly selected CDs. Results: 1. Among 11 open source candidates, we chose the Conquest DICOM Server, which uses MySQL as its database management system. 2.
(1) Comparison of back-up and retrieval times (min) gave the following results: DVD-RAM (5.13, 2.26) vs. Conquest DICOM Server (1.49, 1.19) for GE DSTE (p<0.001); CD (6.12, 3.61) vs. Conquest (0.82, 2.23) for GE DLS (p<0.001); CD (5.88, 3.25) vs. Conquest (1.05, 2.06) for SIEMENS. (2) The time (sec) needed to find data was: CD ($156{\pm}46$), DVD-RAM ($115{\pm}21$), and Conquest DICOM Server ($13{\pm}6$). 3. There was no data loss (0%) over the year, during which 12,741 PET/CT studies were stored in 1.81 TB of storage. In contrast, 14 errors occurred among the 500 CDs (2.8%). Conclusions: We found that a MicroPACS could be set up with open source software and that its performance was excellent. The system built with open source proved more efficient and more robust than the back-up process using CDs or DVD-RAMs. We believe the MicroPACS can serve as an effective data storage device as long as its operators continue to develop and systematize it.


A Polarization-based Frequency Scanning Interferometer and the Measurement Processing Acceleration based on Parallel Programing (편광 기반 주파수 스캐닝 간섭 시스템 및 병렬 프로그래밍 기반 측정 고속화)

  • Lee, Seung Hyun;Kim, Min Young
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.50 no.8
    • /
    • pp.253-263
    • /
    • 2013
  • The Frequency Scanning Interferometry (FSI) system, one of the most promising optical surface measurement techniques, generally achieves superior optical performance compared with other 3-dimensional measurement methods because its hardware remains fixed during operation: only the light frequency is scanned over a specific spectral band, without vertical scanning of the target surface or the objective lens. An FSI system collects a set of interference fringe images while changing the frequency of the light source, transforms the intensity data of the acquired images into frequency information, and calculates the height profile of target objects through frequency analysis based on the Fast Fourier Transform (FFT). However, it still suffers from optical noise on target surfaces and relatively long processing times due to the number of images acquired in the frequency scanning phase. To address these problems: 1) a Polarization-based Frequency Scanning Interferometry (PFSI) is proposed for robustness to optical noise. It consists of a tunable laser as the light source, a ${\lambda}/4$ plate in front of the reference mirror, a ${\lambda}/4$ plate in front of the target object, a polarizing beam splitter (PBS), a polarizer in front of the image sensor, a polarizer in front of the fiber-coupled light source, and a ${\lambda}/2$ plate between the PBS and the polarizer of the light source. The proposed system solves the problem of low-contrast fringe images through the polarization technique and allows control of the light distribution between the object beam and the reference beam. 2) A signal processing acceleration method is proposed for the PFSI, based on a parallel processing architecture consisting of parallel hardware and software such as the Graphics Processing Unit (GPU) and the Compute Unified Device Architecture (CUDA); as a result, the processing time reaches the tact-time level of real-time processing.
Finally, the proposed system is evaluated in terms of accuracy and processing speed through a series of experiments, and the results show the effectiveness of the proposed system and method.
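The height-recovery step the abstract describes, stacking fringe intensities across the frequency scan and mapping each pixel's dominant fringe frequency (found via FFT) to a height, can be sketched for a single pixel as follows. The scan parameters and the synthetic fringe are placeholder assumptions, not the paper's optical setup.

```python
import numpy as np

c = 3.0e8                      # speed of light (m/s)
n_steps, df = 256, 50e9        # scan samples and frequency step (assumptions)

def height_from_scan(intensity):
    """Estimate height from one pixel's intensity sequence: the interference
    phase is 2*pi*f*OPD/c, so scanning f in steps df makes the fringe cycle
    OPD*df/c times per step. The FFT peak bin k therefore gives
    OPD = k * c / (n_steps * df), and height = OPD / 2 (round trip)."""
    spec = np.abs(np.fft.rfft(intensity - intensity.mean()))
    k = np.argmax(spec)
    return k * c / (n_steps * df) / 2.0

# synthesize a fringe for a surface point at height z (round-trip OPD = 2z),
# choosing z to land exactly on an FFT bin so there is no spectral leakage
z_true = 25 * c / (n_steps * df) / 2.0
steps = np.arange(n_steps)
intensity = 1.0 + 0.8 * np.cos(2 * np.pi * (2 * z_true / c) * df * steps)
z_est = height_from_scan(intensity)
```

Running this per pixel over the whole image stack is the independent, data-parallel workload that makes the GPU/CUDA acceleration in the paper effective.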