• Title/Summary/Keyword: Power Feature


A Study on the ZVZCS Three Level DC/DC Converter without Primary Freewheeling Diodes (1차측 환류 다이오드를 제거한 ZVZCS Three Level DC/DC 컨버터에 관한 연구)

  • Bae, Jin-Yong;Kim, Yong;Baek, Soo-Hyun;Kwon, Soon-Do;Kim, Pil-Soo;Gye, Sang-Bum
    • Journal of the Korean Institute of Illuminating and Electrical Installation Engineers
    • /
    • v.16 no.6
    • /
    • pp.66-73
    • /
    • 2002
  • This paper presents a ZVZCS (Zero-Voltage and Zero-Current Switching) three-level DC/DC converter without primary freewheeling diodes. The new converter uses phase-shift control with a flying capacitor on the primary side to achieve ZVS for the outer switches. A secondary auxiliary circuit, consisting of one small capacitor, two small diodes, and one coupled inductor, is added on the secondary side to provide ZVZCS conditions for the primary switches: ZVS for the outer switches and ZCS for the inner switches. Its advantages, including a simple secondary auxiliary circuit topology, high efficiency, and low cost, make the new converter attractive for high-power applications. In conventional topologies, circulating current flows through the circuit and causes needless conduction loss in the devices and the transformer; the new converter has no primary auxiliary diodes for freewheeling current. The principle of operation, features, and design considerations are illustrated and verified through experiments with a 1 [kW], 50 [kHz] IGBT-based experimental circuit.

Geographical Name Denoising by Machine Learning of Event Detection Based on Twitter (트위터 기반 이벤트 탐지에서의 기계학습을 통한 지명 노이즈제거)

  • Woo, Seungmin;Hwang, Byung-Yeon
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.4 no.10
    • /
    • pp.447-454
    • /
    • 2015
  • This paper proposes geographical-name denoising through machine learning for Twitter-based event detection. The increasing number of smartphone users has driven the growth of SNS. In particular, the short-message format (140 characters or fewer) and the follow service give Twitter the power to convey and diffuse information quickly. These characteristics, together with its mobile-optimized design, give Twitter a fast information-conveying speed, so it can play a role in reporting disasters or events. Related research used individual Twitter users as sensors to detect events that occur in reality, employing geographical names as keywords based on the fact that an event occurs in a specific place. However, that work ignored the noise arising from homographs of geographical names, which became an important factor lowering the accuracy of event detection. In this paper, we applied two denoising methods: removal and forecasting. First, after a filtering step built on a database of noise-related terms, we determined whether a term is actually used as a geographical name by Naive Bayesian classification. Finally, using the experimental data, we obtained the probability values from machine learning. On the basis of the forecasting technique proposed in this paper, the reliability of the denoising technique turned out to be 89.6%.
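To make the Naive Bayesian step concrete, the sketch below shows how such a denoising classifier could look. It is a minimal illustration, not the authors' implementation: the training tweets, labels, and pipeline choices are hypothetical stand-ins for the noise-related database described above.

```python
# A minimal sketch of the Naive Bayesian denoising step: classify whether
# a tweet mentioning a place-name token actually refers to the place or to
# a homograph. The training tweets and labels are hypothetical examples,
# not the authors' data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Label 1: token used as a geographical name; label 0: homograph noise.
train_tweets = [
    "heavy rain flooding the streets downtown",    # 1
    "traffic accident near the station exit",      # 1
    "just finished reading a novel of that name",  # 0
    "my friend has the same name, so funny",       # 0
]
train_labels = [1, 1, 0, 0]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(train_tweets, train_labels)

# Filter new tweets: keep only those classified as true place-name usage.
new_tweets = ["fire reported near the market street"]
print(model.predict(new_tweets))        # predicted class per tweet
print(model.predict_proba(new_tweets))  # class probabilities
```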

Modeling and analysis of dynamic heat transfer in the cable penetration fire stop system by using a new hybrid algorithm (새로운 혼합알고리즘을 이용한 CPFS 내에서의 일어나는 동적 열전달의 수식화 및 해석)

  • Yoon En Sup;Yun Jongpil;Kwon Seong-Pil
    • Journal of the Korean Institute of Gas
    • /
    • v.7 no.4 s.21
    • /
    • pp.44-52
    • /
    • 2003
  • In this work, dynamic heat transfer in a CPFS (cable penetration fire stop) system built into the firewall of a nuclear power plant is investigated three-dimensionally to develop a test simulator that can be used to verify the effectiveness of the sealant. Dynamic heat transfer in the fire stop system is formulated as a parabolic PDE (partial differential equation) subject to a set of initial and boundary conditions. First, the PDE model is divided into two parts: one corresponding to heat transfer in the axial direction and the other corresponding to heat transfer on the vertical planes. The axial PDE is converted into a series of ODEs (ordinary differential equations) at finite discrete axial points so that the numerical method of SOR (successive over-relaxation) can be applied to the problem. The ODEs are solved by using an ODE solver; in this manner, the axial heat flux can be calculated at least at the finite discrete points. After that, each plane is divided into finite elements, where the time and spatial functions at each element are approximated by orthogonal collocation. The initial condition of each finite element can be obtained from the above solution. The heat fluxes on the vertical planes are calculated by the Galerkin FEM (finite element method). The CPFS system was modeled, simulated, and analyzed, and the simulation results were illustrated in three-dimensional graphics. The simulation showed clearly that the temperature distribution is influenced strongly by the number, position, and temperature of the cable streams; that dynamic heat transfer through the cable stream is one of the most dominant factors; and that the heat conduction behaves as an unsteady-state process.
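The axial step described above, converting the PDE into ODEs at discrete axial points and handing them to an ODE solver, is the classic method of lines. The sketch below illustrates that idea under stated assumptions; the geometry, diffusivity, and fire-side temperature ramp are hypothetical values, not the paper's CPFS parameters.

```python
# A minimal method-of-lines sketch: the 1-D heat equation
# dT/dt = alpha * d2T/dx2 is discretized at finite axial points, turning
# the PDE into a system of ODEs that a standard ODE solver integrates.
import numpy as np
from scipy.integrate import solve_ivp

alpha = 1e-6               # sealant thermal diffusivity [m^2/s], assumed
L, n = 0.3, 51             # penetration length [m], number of axial nodes
x = np.linspace(0.0, L, n)
dx = x[1] - x[0]

def fire_temp(t):
    # hypothetical fire-side boundary: ramp to 1000 degC, then hold
    return min(20.0 + 50.0 * t, 1000.0)

def rhs(t, T_int):
    # T_int holds interior nodes; both boundary temperatures are prescribed
    T = np.concatenate(([fire_temp(t)], T_int, [20.0]))
    return alpha * (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dx**2

T0 = np.full(n - 2, 20.0)  # initial ambient temperature at interior nodes
sol = solve_ivp(rhs, (0.0, 3600.0), T0, method="BDF")
print(sol.y[:, -1])        # axial temperature profile after one hour
```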


Comparative analysis of linear model and deep learning algorithm for water usage prediction (물 사용량 예측을 위한 선형 모형과 딥러닝 알고리즘의 비교 분석)

  • Kim, Jongsung;Kim, DongHyun;Wang, Wonjoon;Lee, Haneul;Lee, Myungjin;Kim, Hung Soo
    • Journal of Korea Water Resources Association
    • /
    • v.54 no.spc1
    • /
    • pp.1083-1093
    • /
    • 2021
  • Predicting water usage is essential for establishing an optimal supply operation plan and reducing power consumption. However, water usage by consumers has non-linear characteristics due to various factors such as user type, usage pattern, and weather conditions. To predict water consumption, we therefore propose a methodology linking several techniques that can capture these non-linear characteristics, which we call the KWD framework. K-means (K) cluster analysis is performed to classify similar patterns according to each individual consumer's usage; a Wavelet (W) transform is applied to derive the main periodic pattern of the usage by removing noise components; and a Deep (D) learning algorithm is used to learn the non-linear characteristics of water usage. The performance of the proposed framework was analyzed by comparison with the ARMA model, a linear time series model. As a result, the proposed model showed a correlation of 92%, while the ARMA model showed about 39%. The proposed model therefore outperforms a linear time series model, and the KWD framework can be applied to other nonlinear time series with patterns similar to water usage. Using the KWD framework, it should be possible to predict water usage accurately and establish an optimal supply plan for various events.
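A minimal sketch of the KWD idea follows, assuming synthetic data: K-means groups consumers with similar patterns, a wavelet transform strips noise components from a cluster's mean series, and a learner is fitted on lagged values. MLPRegressor stands in here for the paper's deep learning model.

```python
# K-means (K) + Wavelet (W) + learner (D) pipeline sketch on synthetic
# hourly usage data; all values below are illustrative assumptions.
import numpy as np
import pywt
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# 20 hypothetical consumers, 14 days of hourly usage each
t = np.arange(24 * 14)
usage = np.stack([np.sin(2 * np.pi * t / 24 + p)
                  + 0.3 * rng.standard_normal(t.size)
                  for p in rng.uniform(0, np.pi, 20)])

# K: cluster consumers by usage pattern, take one cluster's mean series
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(usage)
series = usage[labels == labels[0]].mean(axis=0)

# W: wavelet denoising -- zero out the finest detail coefficients
coeffs = pywt.wavedec(series, "db4", level=3)
coeffs[-1] = np.zeros_like(coeffs[-1])
denoised = pywt.waverec(coeffs, "db4")[: series.size]

# D: learn next-hour usage from the previous 24 hours
lag = 24
X = np.stack([denoised[i: i + lag] for i in range(denoised.size - lag)])
y = denoised[lag:]
model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                     random_state=0).fit(X[:-24], y[:-24])
# correlation between prediction and truth on the last held-out day
print(np.corrcoef(model.predict(X[-24:]), y[-24:])[0, 1])
```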

A Study on the Characteristics in Chinese Contemporary Tragic Films - Focused on the film <Coming Home> - (영화 <5일의 마중>으로 본 현대 중국 비극 영화의 특성 연구)

  • Wu, Ying Zhe
    • Journal of Korea Entertainment Industry Association
    • /
    • v.15 no.3
    • /
    • pp.65-73
    • /
    • 2021
  • This research analyzes the characteristics of Chinese tragic films with traditional Chinese ethical ideology at their core. It examines how that ideology manifests in the plot and in the ending of the film <Coming Home>, and analyzes the director's tragic narrative strategy of cultural reconciliation in the face of the political environment, in order to understand the characteristics of contemporary Chinese tragic films. <Coming Home> is a contemporary Chinese tragic film with the Cultural Revolution as its historical background, and it is representative of the genre. Its classic plot expresses traditional Chinese ethical ideas such as fatalism and an optimistic attitude toward life, and the changes in the male lead's thinking convey the characteristics of Chinese-style tragedy grounded in that traditional ideology. In its ending, the film breaks through the "happy ending" model of traditional Chinese tragedy and chooses the open ending of "tragedy to the end", further showing the contemporary character of Chinese tragic film. The euphemism and tenderness of the film as a tragedy stem not only from compromise with the political culture of power but also from the director's deep understanding of the aesthetics of Chinese tragedy. Through symbolic signs in the film language, the narrative takes on an implicit quality within the tragic aesthetic experience. In this paper, the author conducts a textual analysis of the film and discusses how director Yimou Zhang presents his tragic sensibility and uses the tragic narrative strategy of cultural reconciliation to show his creative wisdom in pursuing artistic breakthroughs under political pressure.

Improvement of Face Recognition Algorithm for Residential Area Surveillance System Based on Graph Convolution Network (그래프 컨벌루션 네트워크 기반 주거지역 감시시스템의 얼굴인식 알고리즘 개선)

  • Tan Heyi;Byung-Won Min
    • Journal of Internet of Things and Convergence
    • /
    • v.10 no.2
    • /
    • pp.1-15
    • /
    • 2024
  • The construction of smart communities is a new approach and an important measure for ensuring the security of residential areas. To solve the problem of low face recognition accuracy caused by facial-feature distortion from surveillance camera angles and other external factors, this paper proposes the following optimization strategies in designing a face recognition network. First, a global graph convolution module is designed to encode facial features as graph nodes, and a multi-scale feature-enhancement residual module is designed to extract facial keypoint features in conjunction with the global graph convolution module. Second, after the facial keypoints are obtained, they are assembled into a directed graph structure, and graph attention mechanisms are used to enhance the representation power of the graph features. Finally, tensor computations are performed on the graph features of two faces, and the aggregated features are extracted and discriminated by a fully connected layer to determine whether the two identities are the same. In experimental tests, the network achieves an AUC of 85.65% for facial keypoint localization on the 300W public dataset and 88.92% on a self-built dataset. In terms of face recognition accuracy, it achieves 83.41% on the IBUG public dataset and 96.74% on a self-built dataset. The results demonstrate that the network exhibits high detection and recognition accuracy for faces in surveillance video.
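The graph attention step can be sketched as below. This is a minimal single-head layer over keypoint features, not the authors' exact network; the dimensions, chain adjacency, and random inputs are illustrative assumptions.

```python
# Minimal single-head graph attention over facial keypoints: each keypoint
# is a graph node, and attention re-weights each node's neighbours.
import torch
import torch.nn as nn
import torch.nn.functional as F

class KeypointGraphAttention(nn.Module):
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)  # node projection
        self.a = nn.Linear(2 * out_dim, 1, bias=False)   # edge scorer

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x: (N, in_dim) keypoint features; adj: (N, N) directed adjacency
        # (include self-loops so every row has at least one valid edge).
        h = self.W(x)
        n = h.size(0)
        pairs = torch.cat([h.unsqueeze(1).expand(n, n, -1),
                           h.unsqueeze(0).expand(n, n, -1)], dim=-1)
        scores = F.leaky_relu(self.a(pairs).squeeze(-1))
        scores = scores.masked_fill(adj == 0, float("-inf"))
        alpha = torch.softmax(scores, dim=-1)            # attention weights
        return alpha @ h                                 # aggregated features

# Toy usage: 68 keypoints with 2-D coordinates, chain graph plus self-loops.
n = 68
adj = torch.eye(n) + torch.diag(torch.ones(n - 1), 1)
layer = KeypointGraphAttention(2, 32)
f1 = layer(torch.randn(n, 2), adj).mean(dim=0)  # pooled features, face 1
f2 = layer(torch.randn(n, 2), adj).mean(dim=0)  # pooled features, face 2
print(F.cosine_similarity(f1, f2, dim=0))       # higher = more similar
```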

A Study on Extending of the Addressable Object of Address of Things (사물주소 부여대상 확대 방안 연구)

  • Yang, Sungchul
    • Journal of Cadastre & Land InformatiX
    • /
    • v.54 no.1
    • /
    • pp.75-87
    • /
    • 2024
  • There is a difference in terms of administrative effect in that addresses of things are not addresses under public law. In terms of location expression, an address of things can express a location more flexibly and in more detail than a road-name address, so the system should be improved so that such addresses can be assigned and managed at appropriate locations, allowing the entire territory to be located in combination with road-name addresses. Comparing road-name addresses with addresses of things, based on analysis of related laws such as the existing Road Name Address Act, the Building Act, and the Regulations on the Preparation and Management of Basic Address Information, confirmed that the address-of-things system has fundamental limitations. Accordingly, this study suggests ways to improve the system, divided broadly into the legal aspect and the addressable-object aspect. From the legal point of view: first, the upper- and lower-level laws should be unified, together with a clear definition of the term "addressable object"; second, under the Building Act, facilities that are not used for residence should be given an address of things; and third, the registration items of the basic address information map should be classified by the type of geographical feature to be addressed, making the map easy to use and to link with heterogeneous public data. From the addressable-object point of view: first, addresses should be given to all facilities in the relevant category, so that it can be recognized that every facility of a given type has an address of things; and second, it should be possible to assign an address of things to places used by many people even where no facility exists.

Acoustic images of the submarine fan system of the northern Kumano Basin obtained during the experimental dives of the Deep Sea AUV URASHIMA (심해 자율무인잠수정 우라시마의 잠항시험에서 취득된 북 구마노 분지 해저 선상지 시스템의 음향 영상)

  • Kasaya, Takafumi;Kanamatsu, Toshiya;Sawa, Takao;Kinosita, Masataka;Tukioka, Satoshi;Yamamoto, Fujio
    • Geophysics and Geophysical Exploration
    • /
    • v.14 no.1
    • /
    • pp.80-87
    • /
    • 2011
  • Autonomous underwater vehicles (AUVs) have the important advantage of being able to approach the seafloor much more closely than surface-vessel surveys can. The multibeam echosounder, sidescan sonar (SSS), and subbottom profiler (SBP) mounted on an AUV are powerful tools for collecting bathymetric data, bottom material information, and sub-surface images. The 3,000 m-class AUV URASHIMA was developed by the Japan Agency for Marine-Earth Science and Technology (JAMSTEC). After the engineering development and examination phase of the fuel-cell power supply system was finished, a renovated lithium-ion battery power system was installed in URASHIMA, and the AUV was redeployed from its prior engineering tasks to scientific use. Various scientific instruments were loaded on the vehicle, and experimental dives for science-oriented missions were conducted from 2006. During the experimental cruise of 2007, high-resolution acoustic images were obtained by the SSS and SBP on URASHIMA around the northern Kumano Basin off Japan's Kii Peninsula. The map of backscatter intensity revealed many debris objects, and the SBP images revealed the subsurface structure around the north-eastern end of the study area. These features suggest a structure related to the formation of the latest submarine fan. In the south-western area, however, a strong reflection layer exists at roughly 20 ms below the seafloor, which we interpret as a denudation feature now covered with younger surface sediments. We continue to improve the vehicle's performance and expect many fruitful results from URASHIMA.

Corporate Default Prediction Model Using Deep Learning Time Series Algorithm, RNN and LSTM (딥러닝 시계열 알고리즘 적용한 기업부도예측모형 유용성 검증)

  • Cha, Sungjae;Kang, Jungseok
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.4
    • /
    • pp.1-32
    • /
    • 2018
  • Corporate defaults have a ripple effect on the local and national economy, beyond stakeholders such as managers, employees, creditors, and investors of the bankrupt companies. Before the Asian financial crisis, the Korean government analyzed only SMEs and tried to improve the forecasting power of a single default prediction model rather than developing various corporate default models; as a result, even large corporations known as 'chaebol enterprises' went bankrupt. Even afterwards, analysis of past corporate defaults focused on specific variables, and when the government restructured companies immediately after the global financial crisis, it concentrated on a few main variables such as the debt ratio. A multifaceted study of corporate default prediction models is essential to protect diverse interests and to avoid a sudden total collapse like the Lehman Brothers case of the global financial crisis. The key variables behind corporate defaults vary over time: comparing the analyses of Beaver (1967, 1968) and Altman (1968) with Deakin's (1972) study shows that the major factors affecting corporate failure have changed, and Grice (2001) likewise found shifts in the importance of the predictive variables of the Zmijewski (1984) and Ohlson (1980) models. However, past studies use static models, and most do not consider changes that occur over time. To construct consistent prediction models, it is therefore necessary to compensate for time-dependent bias with a time series algorithm that reflects dynamic change. Against the background of the global financial crisis, which had a significant impact on Korea, this study uses ten years of annual corporate data from 2000 to 2009, divided into training, validation, and test sets of 7, 2, and 1 years respectively. To construct a bankruptcy model that is consistent over time, we first train a deep learning time series model on the pre-crisis data (2000-2006). Parameter tuning of the existing models and the deep learning time series algorithm is conducted on validation data that includes the financial crisis period (2007-2008). The resulting model shows patterns similar to the training results and excellent prediction power. Each bankruptcy prediction model is then retrained on the combined training and validation data (2000-2008), applying the optimal parameters from validation, and finally evaluated and compared on the test data (2009). In this way, the usefulness of the corporate default prediction model based on the deep learning time series algorithm is demonstrated. In addition, by adding Lasso regression to the existing variable-selection methods (multiple discriminant analysis, logit model), we show that the deep learning time series model is useful for robust corporate default prediction across all three variable groups. The definition of bankruptcy is the same as in Lee (2015). Independent variables include financial information such as the financial ratios used in previous studies. Multivariate discriminant analysis, the logit model, and the Lasso regression model are used to select the optimal variable groups.
The multivariate discriminant analysis model proposed by Altman (1968), the logit model proposed by Ohlson (1980), non-time-series machine learning algorithms, and deep learning time series algorithms are compared. Corporate data poses difficulties: nonlinear variables, multicollinearity among variables, and lack of data. The logit model handles nonlinearity, the Lasso regression model mitigates the multicollinearity problem, and the deep learning time series algorithm, with a variable data generation method, compensates for the lack of data. Big data technology is moving from simple human analysis to automated AI analysis and, eventually, toward intertwined AI applications. Although the study of corporate default prediction models using time series algorithms is still in its early stages, the deep learning algorithm is much faster than regression analysis for default prediction modeling and is more effective in prediction power. Through the Fourth Industrial Revolution, the Korean government and governments overseas are working hard to integrate such systems into the everyday life of their nations and societies, yet deep learning time series research for the financial industry remains insufficient. This is an initial study on deep learning time series analysis of corporate defaults, and we hope it will serve as comparative material for non-specialists who begin to combine financial data with deep learning time series algorithms.
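As a rough illustration of the time series idea, the sketch below trains an LSTM that reads several years of annual financial ratios per firm and outputs a default probability. The layer sizes, the ten ratios, and the synthetic panel are assumptions for illustration, not the paper's specification.

```python
# Minimal LSTM default-prediction sketch on a synthetic firm panel.
import torch
import torch.nn as nn

class DefaultLSTM(nn.Module):
    def __init__(self, n_ratios: int, hidden: int = 32):
        super().__init__()
        self.lstm = nn.LSTM(n_ratios, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):          # x: (batch, years, n_ratios)
        _, (h, _) = self.lstm(x)   # final hidden state per firm
        return torch.sigmoid(self.head(h[-1])).squeeze(-1)

# Synthetic panel: 256 firms, 7 annual observations, 10 financial ratios.
torch.manual_seed(0)
X = torch.randn(256, 7, 10)
y = (X[:, -1, 0] < -0.5).float()   # hypothetical default label

model = DefaultLSTM(n_ratios=10)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()
for epoch in range(200):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()
print(float(loss))                 # training loss after 200 epochs
```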

Ensemble of Nested Dichotomies for Activity Recognition Using Accelerometer Data on Smartphone (Ensemble of Nested Dichotomies 기법을 이용한 스마트폰 가속도 센서 데이터 기반의 동작 인지)

  • Ha, Eu Tteum;Kim, Jeongmin;Ryu, Kwang Ryel
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.4
    • /
    • pp.123-132
    • /
    • 2013
  • As smartphones are equipped with various sensors such as the accelerometer, GPS, gravity sensor, gyros, ambient light sensor, proximity sensor, and so on, there has been much research on making use of these sensors to create valuable applications. Human activity recognition is one such application, motivated by welfare applications such as support for the elderly, measurement of calorie consumption, analysis of lifestyles, analysis of exercise patterns, and so on. One of the challenges faced when using smartphone sensors for activity recognition is that the number of sensors used should be minimized to save battery power. When the number of sensors used is restricted, it is difficult to realize a highly accurate activity recognizer or classifier because it is hard to distinguish between subtly different activities relying on only limited information. The difficulty becomes especially severe when the number of different activity classes to be distinguished is very large. In this paper, we show that a fairly accurate classifier can be built that distinguishes ten different activities using only a single sensor, the smartphone accelerometer. The approach we take to this ten-class problem is the ensemble of nested dichotomies (END) method, which transforms a multi-class problem into multiple two-class problems. END builds a committee of binary classifiers in a nested fashion using a binary tree. At the root of the binary tree, the set of all classes is split into two subsets of classes by a binary classifier. At a child node of the tree, a subset of classes is again split into two smaller subsets by another binary classifier. Continuing in this way, we obtain a binary tree where each leaf node contains a single class. This binary tree can be viewed as a nested dichotomy that can make multi-class predictions. Depending on how a set of classes is split into two subsets at each node, the final tree can differ. Since some classes may be correlated, a particular tree may perform better than the others, but we can hardly identify the best tree without deep domain knowledge. The END method copes with this problem by building multiple dichotomy trees randomly during learning, and then combining the predictions made by each tree during classification. The END method is generally known to perform well even when the base learner is unable to model complex decision boundaries. As the base classifier at each node of the dichotomy, we have used another ensemble classifier called the random forest. A random forest is built by repeatedly generating a decision tree, each time with a different random subset of features, using a bootstrap sample. By combining bagging with random feature subset selection, a random forest enjoys more diverse ensemble members than simple bagging. As an overall result, our ensemble of nested dichotomies can be seen as a committee of committees of decision trees that deals with a multi-class problem with high accuracy. The ten activity classes that we distinguish in this paper are 'Sitting', 'Standing', 'Walking', 'Running', 'Walking Uphill', 'Walking Downhill', 'Running Uphill', 'Running Downhill', 'Falling', and 'Hobbling'.
The features used for classifying these activities include not only the magnitude of the acceleration vector at each time point but also the maximum, minimum, and standard deviation of the vector magnitude within a time window covering the last 2 seconds. For the experiments comparing the performance of END with other methods, accelerometer data were collected every 0.1 second for 2 minutes per activity from 5 volunteers. Among the 5,900 (= 5 × (60 × 2 − 2) / 0.1) data points collected for each activity (the data for the first 2 seconds are discarded because they lack time-window data), 4,700 were used for training and the rest for testing. Although 'Walking Uphill' is often confused with similar activities, END classified all ten activities with a fairly high accuracy of 98.4%, while a decision tree, a k-nearest neighbor classifier, and a one-versus-rest support vector machine achieved 97.6%, 96.5%, and 97.6%, respectively.
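The nested-dichotomy construction described above can be sketched as follows: a random binary tree over the class set, with a random forest fitted at each internal node on the induced two-class problem. The window features and labels below are synthetic stand-ins for the accelerometer data; a full END would build many such trees and average their predictions.

```python
# Minimal sketch of one random nested dichotomy with random forests at
# the internal nodes, illustrating the END idea on synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

class NestedDichotomy:
    def __init__(self, classes, rng):
        self.classes = list(classes)
        if len(self.classes) > 1:
            split = rng.permutation(self.classes)
            k = rng.integers(1, len(split))          # random class split
            self.left_set = set(split[:k])
            self.right_set = set(split[k:])
            self.clf = RandomForestClassifier(n_estimators=50, random_state=0)
            self.left = NestedDichotomy(self.left_set, rng)
            self.right = NestedDichotomy(self.right_set, rng)

    def fit(self, X, y):
        if len(self.classes) > 1:
            side = np.isin(y, list(self.left_set)).astype(int)
            self.clf.fit(X, side)     # binary task: left vs right subset
            mask = side == 1
            self.left.fit(X[mask], y[mask])
            self.right.fit(X[~mask], y[~mask])
        return self

    def predict_one(self, x):
        if len(self.classes) == 1:
            return self.classes[0]
        go_left = self.clf.predict(x.reshape(1, -1))[0] == 1
        return (self.left if go_left else self.right).predict_one(x)

# Toy usage: synthetic window features (e.g. magnitude mean/max/min/std).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = rng.integers(0, 10, size=500)     # ten activity classes
tree = NestedDichotomy(range(10), rng).fit(X, y)
print(tree.predict_one(X[0]))
```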