• Title/Summary/Keyword: A* algorithm


The Design of a Bidirectional DC-Module-Type PCS for Securing the Expandability of a Grid-Connected PV-ESS (계통 연계 PV-ESS 확장성 확보를 위한 병렬 DC-모듈형 PCS 설계)

  • Hwang, Lark-Hoon;Na, Seung-Kwon;Choi, Byung-Sang
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.14 no.1 / pp.56-69 / 2021
  • A grid-connected PV system requires, in addition to the basic properties of an inverter, features such as small capacity, high power factor, high reliability, low harmonic output, maximum power point operation of the solar cells, and low cost. Transferring the energy of a photovoltaic power generation system to the grid and the load requires a bidirectional PCS. The purpose of this paper is to confirm stable power supply through load leveling by presenting a PCS that incorporates an ESS for photovoltaic power generation. To achieve this, a five-step operation-mode algorithm based on solar insolation and load capacity was used, and a controller for charge/discharge control was designed. For bidirectional and effective energy transfer, a bidirectional converter and a battery were connected at the DC-link stage, and the DC-link voltage and inverter output voltage were controlled through the grid-interactive inverter. To verify the validity of the suggested system, simulations using PSIM were performed and reviewed for validity and stability. A 3 kW PCS was manufactured and tested to confirm these results. The test results verified the suggested system characteristics and showed that the proposed PCS outperforms the previous system.
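
The five-step operation-mode algorithm is described only at the level of its inputs (solar insolation and load capacity) and its purpose (charge/discharge control for load leveling). The following is a minimal sketch of how such a mode selector might look; the mode names, SOC limits, and decision order are illustrative assumptions, not the authors' design.

```python
# Hypothetical sketch of a five-step operation-mode selector for a PV-ESS PCS.
# Mode names, SOC limits, and decision order are illustrative assumptions only.

def select_mode(pv_kw: float, load_kw: float, soc: float) -> str:
    """Pick an operating mode from PV output, load demand, and battery SOC."""
    surplus = pv_kw - load_kw
    if surplus > 0 and soc < 0.9:
        return "PV_CHARGE"          # excess PV charges the battery
    if surplus > 0:
        return "PV_EXPORT"          # battery nearly full: export surplus to the grid
    if surplus < 0 and soc > 0.2:
        return "BATTERY_DISCHARGE"  # shortfall covered by the battery (load leveling)
    if surplus < 0:
        return "GRID_SUPPLY"        # battery depleted: the grid feeds the load
    return "PV_DIRECT"              # PV output matches the load

if __name__ == "__main__":
    for pv, load, soc in [(3.0, 1.0, 0.5), (3.0, 1.0, 0.95), (0.5, 2.0, 0.6), (0.0, 2.0, 0.1)]:
        print(f"PV={pv} kW, load={load} kW, SOC={soc} -> {select_mode(pv, load, soc)}")
```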

The Analysis of Changes in East Coast Tourism using Topic Modeling (토픽 모델링을 활용한 동해안 관광의 변화 분석)

  • Jeong, Eun-Hee
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.13 no.6 / pp.489-495 / 2020
  • The amount of data is increasing through various IT devices in a hyper-connected society where the 4th industrial revolution is progressing, and new value can be created by analyzing that data. For this paper, a total of 1,526 articles published from 2017 to 2019 in central magazines, economic magazines, regional associations, and major broadcasting companies were collected through BigKinds with the keyword "(East Coast Tourism or East Coast Travel) and Gangwon-do". Topic modeling with the LDA algorithm, implemented in the R language, was performed to analyze the 1,526 collected articles. Keywords were extracted for each year from 2017 to 2019, and high-frequency keywords were classified and compared by year. The optimal number of topics was set to 8 using log likelihood and perplexity, and the 8 topics were then inferred using Gibbs sampling. The inferred topics were Gangneung and beaches, Goseong and Mt. Geumgang, KTX and the Donghae-Bukbu line, weekend sea tours, Sokcho and the Unification Observatory, Yangyang and surfing, experience tours, and transportation network infrastructure. The changes in articles on East Coast tourism were analyzed using the proportions of the eight inferred topics. As a result, compared to 2017, in 2018 the proportions of the Unification Observatory and Mt. Geumgang showed no significant change, the proportions of KTX and experience tours increased, and the proportions of the other topics decreased. In 2019, the proportions of KTX and experience tours decreased, while the proportions of the other topics showed no significant change.
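
The study selects the number of topics with log likelihood and perplexity before fitting LDA by Gibbs sampling in R. As a rough Python analogue (scikit-learn's LDA uses variational inference rather than Gibbs sampling), the snippet below shows how log likelihood and perplexity over a range of topic counts can guide that choice; the toy corpus is purely illustrative.

```python
# Rough Python analogue of choosing the LDA topic count by log likelihood and
# perplexity. The paper uses an R implementation with Gibbs sampling; this
# scikit-learn sketch uses variational inference and a toy corpus.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = ["gangneung beach summer tour", "ktx donghae bukbu line opening",
        "sokcho unification observatory visit", "yangyang surfing lesson",
        "goseong mt geumgang tour", "weekend sea tour transportation"]

X = CountVectorizer().fit_transform(docs)

for k in range(2, 9):
    lda = LatentDirichletAllocation(n_components=k, random_state=0).fit(X)
    # higher log likelihood / lower perplexity suggests a better topic count
    print(f"k={k}  log-likelihood={lda.score(X):8.1f}  perplexity={lda.perplexity(X):7.1f}")
```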

Technical Development for Extraction of Discontinuities in Rock Mass Using LiDAR (LiDAR를 이용한 암반 불연속면 추출 기술의 개발 현황)

  • Lee, Hyeon-woo;Kim, Byung-ryeol;Choi, Sung-oong
    • Tunnel and Underground Space / v.31 no.1 / pp.10-24 / 2021
  • Rock mass classification for the construction of underground facilities is essential for securing their stability. Reliable classification values derived from precise information on rock discontinuities are therefore among the most important factors, because discontinuities dominantly affect the physical and mechanical properties of a rock mass. Conventional rock mass classification has usually been performed by hand mapping; however, many issues arise concerning its precision and reliability, for instance in large-scale surveys for regional geological investigation or when classification is carried out by non-professional engineers. For these reasons, automated rock mass classification using LiDAR has become popular for obtaining quick and precise information. Several algorithms have been suggested for analyzing rock mass discontinuities from the point cloud data produced by LiDAR scanning, and it is known that different algorithms usually give different solutions; moreover, it is not simple to obtain values identical to those from hand mapping. In this paper, several discontinuity extraction algorithms are explained, and their processes for extracting rock mass discontinuities are simulated on a real rock bench. The application of these algorithms is anticipated to be a good reference for future research on extracting rock mass discontinuities from digital point cloud data acquired by laser scanners such as LiDAR.
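
The individual extraction algorithms are not detailed in the abstract, but most of them share two steps: estimating a local orientation (normal vector) for each point and grouping points with similar orientations into discontinuity sets. The sketch below illustrates that generic idea with local PCA plane fitting and k-means clustering of the normals; it is an assumption-based outline, not a reproduction of any surveyed method.

```python
# Generic outline of discontinuity-set extraction from a point cloud:
# estimate per-point normals by local PCA, then cluster the normals into
# candidate joint sets. Not a reproduction of any algorithm surveyed in the paper.
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
points = rng.random((500, 3))            # stand-in for a LiDAR point cloud (N x 3)

# local normal = direction of least variance of each point's neighbourhood
nbrs = NearestNeighbors(n_neighbors=15).fit(points)
_, idx = nbrs.kneighbors(points)
normals = []
for neigh in idx:
    q = points[neigh] - points[neigh].mean(axis=0)
    _, _, vt = np.linalg.svd(q, full_matrices=False)
    n = vt[-1]
    normals.append(n if n[2] >= 0 else -n)   # orient consistently upward
normals = np.array(normals)

# cluster the normals into (here) three orientation groups = candidate joint sets
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(normals)
for k in range(3):
    mean_n = normals[labels == k].mean(axis=0)
    print(f"set {k}: {np.sum(labels == k):3d} points, mean normal {np.round(mean_n, 2)}")
```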

An Addition-Chain Heuristics and Two Modular Multiplication Algorithms for Fast Modular Exponentiation (모듈라 멱승 연산의 빠른 수행을 위한 덧셈사슬 휴리스틱과 모듈라 곱셈 알고리즘들)

  • 홍성민;오상엽;윤현수
    • Journal of the Korea Institute of Information Security & Cryptology / v.7 no.2 / pp.73-92 / 1997
  • A modular exponentiation (M^E mod N) is one of the most important operations in public-key cryptography. However, it is time-consuming because modular exponentiation deals with very large operands, such as 512-bit integers. Modular exponentiation consists of repeated modular multiplications, and the number of repetitions equals the length of the addition chain of the exponent E. Therefore, the execution time of modular exponentiation can be reduced either by finding a shorter addition chain (i.e., reducing the number of repetitions) or by reducing the execution time of each modular multiplication. In this paper, we propose an addition-chain heuristic and two fast modular multiplication algorithms. Of the two modular multiplication algorithms, one is for modular multiplication between different integers, and the other is for modular squaring. The proposed addition-chain heuristic finds the shortest addition chain among existing algorithms. The two proposed modular multiplication algorithms require fewer than half the single-precision multiplications required by previous algorithms. Implemented on a PC, the proposed algorithms reduce execution times by 30-50% compared with the Montgomery algorithm, which is the best among previous algorithms.
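
The central observation is that an addition chain for the exponent E fixes the sequence of modular squarings and multiplications. The proposed heuristic and multiplication algorithms are not reproduced here; the sketch below only illustrates the chain-driven structure, using the standard binary (square-and-multiply) chain as the baseline.

```python
# Exponentiation driven by an addition chain. The binary chain shown here is only
# the baseline; the paper's heuristic, which finds shorter chains, and its fast
# modular multiplication algorithms are not reproduced in this illustration.

def binary_addition_chain(e: int) -> list[int]:
    """Addition chain for e produced by the square-and-multiply rule."""
    chain = [1]
    for bit in bin(e)[3:]:                  # bits of e after the leading '1'
        chain.append(chain[-1] * 2)         # squaring step
        if bit == "1":
            chain.append(chain[-1] + 1)     # multiply-by-M step
    return chain

def modexp_from_chain(m: int, chain: list[int], n: int) -> int:
    """Compute m**chain[-1] mod n by walking the addition chain."""
    values = {1: m % n}
    for c in chain[1:]:
        # each chain element is the sum of two earlier elements
        a = c // 2 if c % 2 == 0 and c // 2 in values else c - 1
        b = c - a
        values[c] = (values[a] * values[b]) % n
    return values[chain[-1]]

if __name__ == "__main__":
    m, e, n = 7, 55, 101
    chain = binary_addition_chain(e)        # [1, 2, 3, 6, 12, 13, 26, 27, 54, 55]
    print(len(chain) - 1, "multiplications for exponent", e)
    print(modexp_from_chain(m, chain, n) == pow(m, e, n))   # True
```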

High Performance Hardware Implementation of the 128-bit SEED Cryptography Algorithm (128비트 SEED 암호 알고리즘의 고속처리를 위한 하드웨어 구현)

  • 전신우;정용진
    • Journal of the Korea Institute of Information Security & Cryptology / v.11 no.1 / pp.13-23 / 2001
  • This paper presents a hardware implementation of SEED, the Korean standard 128-bit block cipher. First, from the viewpoint of hardware implementation, we compared and analyzed SEED with the AES finalist algorithms (MARS, RC6, RIJNDAEL, SERPENT, TWOFISH), which are secret-key block encryption algorithms. SEED encryption is faster than MARS, RC6, and TWOFISH, but about five times slower than RIJNDAEL, the fastest. We propose a SEED hardware architecture that improves the encryption speed. Because SEED repeatedly executes the same operation 16 times, we divided one round into three parts, a J1 function block, a J2 function block, and a J3 function block including the key mixing block, and pipelined them to make the design faster. The G-function is implemented simply by XORing four extended 4-byte SS-boxes. We tested the design on an ALTERA FPGA using Verilog HDL. When synthesized with a 0.5 um Samsung standard cell library, ECB encryption and ECB, CBC, and CFB decryption, which can be pipelined, take 50 clock cycles to process 384 bits of plaintext, yielding 745.6 Mbps at a 97.1 MHz clock frequency. CBC, OFB, and CFB encryption and OFB decryption, which cannot be pipelined, achieve 258.9 Mbps under the same conditions.
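
The quoted throughput figures follow directly from the block size, cycle count, and clock frequency. The short calculation below reproduces that arithmetic; the 144-cycle figure for the non-pipelined modes is inferred from the quoted 258.9 Mbps rather than stated in the abstract.

```python
# Reproducing the throughput arithmetic quoted in the abstract.
CLOCK_HZ = 97.1e6          # quoted clock frequency
BITS = 384                 # three 128-bit SEED blocks

pipelined_cycles = 50      # quoted for ECB encryption / ECB, CBC, CFB decryption
print(f"pipelined:     {CLOCK_HZ * BITS / pipelined_cycles / 1e6:.1f} Mbps")   # ~745.7 (abstract: 745.6)

serial_cycles = 144        # inferred: ~48 cycles per 128-bit block without pipelining
print(f"non-pipelined: {CLOCK_HZ * BITS / serial_cycles / 1e6:.1f} Mbps")      # ~258.9 (abstract: 258.9)
```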

Doubly-robust Q-estimation in observational studies with high-dimensional covariates (고차원 관측자료에서의 Q-학습 모형에 대한 이중강건성 연구)

  • Lee, Hyobeen;Kim, Yeji;Cho, Hyungjun;Choi, Sangbum
    • The Korean Journal of Applied Statistics / v.34 no.3 / pp.309-327 / 2021
  • Dynamic treatment regimes (DTRs) are decision-making rules designed to provide personalized treatment to individuals in multi-stage randomized trials. Unlike classical methods, in which all individuals are prescribed the same type of treatment, DTRs prescribe patient-tailored treatments that take into account individual characteristics that may change over time. The Q-learning method, one of the regression-based algorithms for finding optimal treatment rules, has become popular because it can be easily implemented. However, the performance of the Q-learning algorithm relies heavily on the correct specification of the Q-function for the response, especially in observational studies. In this article, we examine a number of doubly-robust weighted least-squares estimating methods for Q-learning in high-dimensional settings, where treatment models for the propensity score and penalization for sparse estimation are also investigated. We further consider flexible ensemble machine learning methods for the treatment model to achieve double robustness, so that the optimal decision rule is correctly estimated as long as at least one of the outcome model or the treatment model is correct. Extensive simulation studies show that the proposed methods work well with practical sample sizes. The practical utility of the proposed methods is demonstrated with a real-data example.
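
The doubly-robust construction combines an outcome (Q-function) regression with a treatment (propensity score) model, so the decision rule stays consistent if either one is correctly specified. The sketch below illustrates that combination for a single-stage, binary-treatment setting using inverse-propensity weights in a weighted least-squares Q-function fit; the simulated data and model choices are illustrative assumptions, not the authors' estimator.

```python
# Single-stage sketch of a doubly-robust, propensity-weighted least-squares fit
# of a Q-function with a binary treatment. Data generation, weighting scheme,
# and model forms are assumptions for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression, LinearRegression

rng = np.random.default_rng(1)
n, p = 500, 10
X = rng.normal(size=(n, p))
ps_true = 1 / (1 + np.exp(-X[:, 0]))                           # treatment depends on covariates
A = rng.binomial(1, ps_true)
Y = X[:, 1] + A * (1.0 + 2.0 * X[:, 2]) + rng.normal(size=n)   # effect varies with X[:, 2]

# treatment model: estimated propensity scores -> inverse-probability weights
ps_hat = LogisticRegression(max_iter=1000).fit(X, A).predict_proba(X)[:, 1]
w = A / ps_hat + (1 - A) / (1 - ps_hat)

# outcome (Q) model: weighted least squares with a treatment-covariate interaction
design = np.column_stack([X, A, A * X[:, 2]])
q_model = LinearRegression().fit(design, Y, sample_weight=w)

# estimated rule: treat when the fitted treatment contrast is positive
contrast = q_model.coef_[p] + q_model.coef_[p + 1] * X[:, 2]
true_rule = (1.0 + 2.0 * X[:, 2]) > 0
print("agreement with the true optimal rule:", ((contrast > 0) == true_rule).mean())
```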

A preliminary study for numerical and analytical evaluation of surface settlement due to EPB shield TBM excavation (토압식 쉴드 TBM 굴착에 따른 지반침하 거동 평가에 관한 해석적 기초연구)

  • An, Jun-Beom;Kang, Seok-Jun;Kim, Jung Joo;Kim, Kyoung Yul;Cho, Gye-Chun
    • Journal of Korean Tunnelling and Underground Space Association / v.23 no.3 / pp.183-198 / 2021
  • The EPB (Earth Pressure Balanced) shield TBM method restrains ground deformation through continuous excavation and support. Still, significant surface settlement can occur depending on the ground conditions, tunnel dimensions, and construction conditions. Therefore, it is necessary to clarify the settlement behavior and its influence factors and to evaluate the possible settlement during construction. In this study, an analytical model of surface settlement based on the influence factors and their mechanisms was proposed. A parametric study of the factors controllable during excavation was then conducted by numerical analysis. Through the numerical analysis, the settlement behavior according to the construction conditions was quantitatively derived, and the qualitative trend according to the ground conditions was visualized by coupling the numerical results with the analytical settlement model. The results of this study are expected to contribute to the derivation of a settlement prediction algorithm for EPB shield TBM excavation.
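
The study's analytical settlement model is not given in the abstract. As a generic reference point, the transverse surface settlement above a tunnel is often described by a Gaussian trough (a Peck-type formula); the sketch below evaluates such a trough from assumed volume loss and trough-width parameters, which are placeholders rather than the study's values.

```python
# Generic Gaussian (Peck-type) transverse settlement trough, shown only as a
# reference formula; diameter, depth, volume loss, and trough width are assumed.
import numpy as np

D = 7.0      # tunnel diameter [m] (assumed)
z0 = 20.0    # tunnel axis depth [m] (assumed)
V_L = 0.01   # volume loss ratio, 1% (assumed)
K = 0.5      # trough width parameter (assumed)

i = K * z0                                # trough width [m]
V_s = V_L * np.pi * D**2 / 4              # settlement volume per metre of tunnel [m^2/m]
S_max = V_s / (np.sqrt(2 * np.pi) * i)    # maximum settlement above the tunnel axis [m]

for x in np.linspace(0, 40, 5):           # transverse offset from the tunnel axis [m]
    S = S_max * np.exp(-x**2 / (2 * i**2))
    print(f"x = {x:5.1f} m   settlement = {S * 1000:6.2f} mm")
```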

Improving Efficiency of Food Hygiene Surveillance System by Using Machine Learning-Based Approaches (기계학습을 이용한 식품위생점검 체계의 효율성 개선 연구)

  • Cho, Sanggoo;Cho, Seung Yong
    • The Journal of Bigdata / v.5 no.2 / pp.53-67 / 2020
  • This study employs a supervised learning prediction model to detect nonconforming processed-food manufacturing and processing businesses in advance. The study was conducted according to the standard machine learning procedure: definition of the objective function, data preprocessing, feature engineering, and model selection and evaluation. The dependent variable was set as the number of detections in supervised inspections over the five years from 2014 to 2018, and the objective was to maximize the probability of detecting nonconforming companies. The data were preprocessed to reflect not only basic attributes such as revenue, operating duration, and number of employees, but also inspection track records and external climate data. After applying the feature extraction method, the machine learning algorithms were applied to the data, with company risk, item risk, environmental risk, and past violation history derived as feature variables that affect the determination of nonconformity. The F1-score of the decision tree, one of the ensemble-type models evaluated, was much higher than those of the other models. Based on the results of this study, it is expected that official food control for food safety management will be enhanced and geared toward data-evidence-based management as well as a scientific administrative system.
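
The reported comparison boils down to fitting candidate classifiers on company-level features and ranking them by F1-score. The snippet below shows the general shape of such a pipeline (train/test split, a tree-based ensemble against a baseline, F1 evaluation) on synthetic imbalanced data, since the inspection data and exact model configurations are not public.

```python
# Generic sketch of the supervised pipeline described in the abstract: fit
# classifiers to company features and compare them by F1-score. Synthetic,
# imbalanced data stands in for the non-public inspection records.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

# most businesses conform; a minority are detected as nonconforming
X, y = make_classification(n_samples=2000, n_features=12, weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

for name, model in [("logistic baseline", LogisticRegression(max_iter=1000)),
                    ("tree ensemble", RandomForestClassifier(n_estimators=200, random_state=0))]:
    pred = model.fit(X_tr, y_tr).predict(X_te)
    print(f"{name:>17}: F1 = {f1_score(y_te, pred):.3f}")
```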

Counseling Outcomes Research Trend Analysis Using Topic Modeling - Focus on 「Korean Journal of Counseling」 (토픽 모델링을 활용한 상담 성과 연구동향 분석 - 「상담학연구」 학술지를 중심으로)

  • Park, Kwi Hwa;Lee, Eun Young;Yune, So Jung
    • Journal of Digital Convergence / v.19 no.11 / pp.517-523 / 2021
  • Counseling outcomes are important to both counselors and researchers. Analyzing the trends of counseling outcome research carried out so far will help to comprehensively structure counseling outcomes. The purpose of this research is to analyze research trends in Korea, focusing on studies related to counseling outcomes published from 2011 to 2021 in 「Korean Journal of Counseling」, one of the well-known academic journals in the field of counseling in Korea, and thereby to explore the direction of future research by mapping the knowledge structure of the field. A total of 197 studies were used for the analysis, and 339 final keywords were extracted during the node extraction process. As a result of extracting latent topics using the LDA algorithm, "measurement and evaluation of counseling outcomes", "emotions and mediating factors affecting interpersonal relationships", and "career stress and coping strategies" emerged as the main topics. Identifying major topics through this trend analysis of counseling outcome research contributes to structuring counseling outcomes. In-depth research on these topics should continue.
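
Labeling the inferred topics (e.g. "measurement and evaluation of counseling outcomes") relies on inspecting each topic's highest-weight terms. The snippet below shows that step with scikit-learn on a toy English corpus; the study's Korean-language corpus and keyword preprocessing are not reproduced here.

```python
# Listing the top terms of each LDA topic, the step used to label topics.
# Toy English corpus only; the study's Korean keyword extraction is not reproduced.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = ["counseling outcome measurement scale evaluation",
        "emotion regulation interpersonal relationship mediation",
        "career stress coping strategy college students",
        "therapy effectiveness outcome evaluation measurement"]

vec = CountVectorizer()
X = vec.fit_transform(docs)
lda = LatentDirichletAllocation(n_components=3, random_state=0).fit(X)

terms = vec.get_feature_names_out()
for t, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[::-1][:4]]   # four highest-weight terms
    print(f"topic {t}: {', '.join(top)}")
```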

A Non-annotated Recurrent Neural Network Ensemble-based Model for Near-real Time Detection of Erroneous Sea Level Anomaly in Coastal Tide Gauge Observation (비주석 재귀신경망 앙상블 모델을 기반으로 한 조위관측소 해수위의 준실시간 이상값 탐지)

  • LEE, EUN-JOO;KIM, YOUNG-TAEG;KIM, SONG-HAK;JU, HO-JEONG;PARK, JAE-HUN
    • The Sea: Journal of the Korean Society of Oceanography / v.26 no.4 / pp.307-326 / 2021
  • Real-time sea level observations from tide gauges include missing and erroneous values; the latter can be classified as abnormal values by a quality control procedure. Although the 3𝜎 (three standard deviations) rule has generally been applied to eliminate them, it is difficult to apply to sea-level data, where extreme values can exist due to weather events, etc., or where erroneous values can exist even within the 3𝜎 range. The artificial intelligence model set designed in this study consists of non-annotated recurrent neural networks and ensemble techniques that do not require pre-labeling of the abnormal values. The developed model can identify an erroneous value within 20 minutes of the tide gauge recording an abnormal sea level. The validated model separates normal and abnormal values well during both normal conditions and weather events. It was also confirmed that abnormal values can be detected even for years whose sea level data were not used for training. The artificial neural network algorithm utilized in this study is not limited to coastal sea level data, and hence it can be extended to models for detecting erroneous values in various oceanic and atmospheric data.
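
The abstract describes an ensemble of recurrent networks trained without labeled outliers, with large disagreement between observation and prediction used to flag erroneous values. The sketch below shows one way such a scheme can be wired up in PyTorch: several small LSTM one-step-ahead predictors whose averaged prediction error, compared against a robust residual threshold, flags suspect samples. The architecture, thresholding rule, and synthetic data are illustrative assumptions, not the authors' model.

```python
# Illustrative non-annotated recurrent ensemble for flagging erroneous sea-level
# samples: LSTM one-step-ahead predictors trained on the raw series; points whose
# ensemble prediction error greatly exceeds typical residuals are flagged.
import math
import torch
import torch.nn as nn

torch.manual_seed(0)

class OneStepLSTM(nn.Module):
    def __init__(self, hidden: int = 16):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                   # x: (batch, window, 1)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])     # predict the next value

# synthetic tide-like series with a few injected spikes standing in for bad values
t = torch.arange(0, 2000, dtype=torch.float32)
level = torch.sin(2 * math.pi * t / 120) + 0.05 * torch.randn_like(t)
level[[500, 1200, 1700]] += 3.0

window = 24
X = torch.stack([level[i:i + window] for i in range(len(level) - window)]).unsqueeze(-1)
y = level[window:].unsqueeze(-1)

ensemble = [OneStepLSTM() for _ in range(3)]
for model in ensemble:                      # each member gets its own short training run
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    for _ in range(30):
        opt.zero_grad()
        nn.functional.mse_loss(model(X), y).backward()
        opt.step()

with torch.no_grad():
    pred = torch.stack([m(X) for m in ensemble]).mean(dim=0)
residual = (pred - y).abs().squeeze()
cutoff = residual.median() + 6 * (residual - residual.median()).abs().median()   # robust threshold
flagged = (torch.nonzero(residual > cutoff).squeeze(-1) + window).tolist()
print("flagged sample indices:", flagged)   # should cluster around the injected spikes
```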