• Title/Summary/Keyword: Implementation technique


Implementation of DTW-kNN-based Decision Support System for Discriminating Emerging Technologies (DTW-kNN 기반의 유망 기술 식별을 위한 의사결정 지원 시스템 구현 방안)

  • Jeong, Do-Heon;Park, Ju-Yeon
    • Journal of Industrial Convergence / v.20 no.8 / pp.77-84 / 2022
  • This study aims to present a method for implementing a decision support system that can be used for selecting emerging technologies by applying a machine learning-based automatic classification technique. To conduct the research, the architecture of the entire system was built and detailed research steps were carried out. First, emerging technology candidate items were selected and trend data were automatically generated using a big data system. After defining the conceptual model and pattern classification structure of technological development, an efficient machine learning method was presented through an automatic classification experiment. Finally, the analysis results of the system were interpreted and methods for their utilization were derived. In the classification experiment combining the Dynamic Time Warping (DTW) method and the k-Nearest Neighbors (kNN) classification model proposed in this study, identification performance reached up to 87.7%; in particular, in the 'eventual' section, where the trend fluctuates sharply, the method outperformed the Euclidean Distance (ED) algorithm by up to 39.4 percentage points. In addition, the analysis results presented by the system confirmed that this decision support system can be effectively utilized to automatically classify and filter large amounts of trend data by type.
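
To make the core idea concrete, the following minimal Python sketch shows how a DTW distance can drive a kNN classifier over trend series. The function names and toy data are illustrative assumptions, not the paper's system.

    import numpy as np

    def dtw_distance(a, b):
        # Classic O(len(a) * len(b)) dynamic-programming DTW:
        # aligns two series that may be locally stretched or shifted in time.
        n, m = len(a), len(b)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = abs(a[i - 1] - b[j - 1])
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return D[n, m]

    def dtw_knn(query, train_series, train_labels, k=3):
        # Vote among the k training series nearest to the query under DTW.
        order = np.argsort([dtw_distance(query, s) for s in train_series])
        top = [train_labels[i] for i in order[:k]]
        return max(set(top), key=top.count)

    # Toy trend curves: a rising pattern vs. a fluctuating one.
    train = [np.array([1, 2, 3, 4, 5]), np.array([1, 3, 2, 4, 3]),
             np.array([2, 3, 4, 5, 6]), np.array([2, 4, 2, 5, 3])]
    labels = ["rising", "fluctuating", "rising", "fluctuating"]
    print(dtw_knn(np.array([1, 2, 4, 4, 6]), train, labels, k=3))

Because DTW tolerates local time shifts, it tends to separate fluctuating patterns better than a rigid point-by-point Euclidean comparison, which is consistent with the performance gap the study reports in the highly fluctuating section.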

A Review of Seismic Full Waveform Inversion Based on Deep Learning (딥러닝 기반 탄성파 전파형 역산 연구 개관)

  • Sukjoon, Pyun;Yunhui, Park
    • Geophysics and Geophysical Exploration / v.25 no.4 / pp.227-241 / 2022
  • Full waveform inversion (FWI) in the field of seismic data processing is an inversion technique used to estimate the velocity model of the subsurface for oil and gas exploration. Recently, deep learning (DL) technology has been increasingly used for seismic data processing, and its combination with FWI has attracted remarkable research effort. For example, DL-based data processing techniques have been used to preprocess input data for FWI, and DL technology has also enabled the direct implementation of FWI itself. DL-based FWI can be divided into the following methods: pure data-based, physics-based neural network, encoder-decoder, reparameterized FWI, and physics-informed neural network. In this review, we describe the theory and characteristics of these methods, systematized in the order of their development. In the early days of DL-based FWI, the DL model predicted the velocity model from a large training data set, faithfully adopting the basic principles of data science with a pure data-based prediction model. The current research trend is to supplement the shortcomings of the pure data-based approach by building the loss function of the deep neural network from the seismic data or from physical information in the wave equation itself. Based on these developments, DL-based FWI has evolved to require less training data, to alleviate the cycle-skipping problem, an intrinsic limitation of FWI, and to reduce computation time dramatically. The value of DL-based FWI is expected to increase continually in seismic data processing.
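
As one concrete illustration of such a physics-informed loss, the sketch below (PyTorch) combines a data misfit with the residual of a 1-D acoustic wave equation, u_tt = c^2 u_xx. The network u_net, the scalar velocity, the 1-D setting, and the weighting are simplifying assumptions for readability; real FWI uses 2-D/3-D wave physics and data recorded at receivers.

    import torch

    def physics_informed_loss(u_net, c, x, t, observed_u, pde_weight=1.0):
        # u_net maps (x, t) pairs to the predicted wavefield u(x, t);
        # c is the velocity (a scalar here; a full model in real FWI).
        x = x.clone().requires_grad_(True)
        t = t.clone().requires_grad_(True)
        u = u_net(torch.stack([x, t], dim=-1)).squeeze(-1)

        # Data term: misfit against observed values at the sample points.
        data_loss = torch.mean((u - observed_u) ** 2)

        # Physics term: residual of u_tt = c^2 * u_xx via autograd.
        u_t = torch.autograd.grad(u.sum(), t, create_graph=True)[0]
        u_tt = torch.autograd.grad(u_t.sum(), t, create_graph=True)[0]
        u_x = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
        u_xx = torch.autograd.grad(u_x.sum(), x, create_graph=True)[0]
        physics_loss = torch.mean((u_tt - c ** 2 * u_xx) ** 2)

        return data_loss + pde_weight * physics_loss

The physics term constrains the network even where no data exist, which is one way such methods reduce the amount of training data required.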

Organizational Reform for the Successful Implementation of Infrastructure Asset Management using Balanced Score Cards (균형성과지표를 활용한 사회기반시설 자산관리 조직 개선 방안)

  • Chae, Myung Jin;Park, Ha Jin;Lee, Gu;Lee, Geon Hee
    • KSCE Journal of Civil and Environmental Engineering Research / v.29 no.6D / pp.745-752 / 2009
  • Management of social infrastructure has advanced from facility management (FM) to asset management (AM), which adopts aggressive and proactive methods to predict the deterioration of infrastructure, prevent failures, and ultimately save maintenance costs. Infrastructure asset management is not a simple engineering technique; it is a new paradigm that has evolved from facility management practice. To implement infrastructure asset management successfully, organizational reform is very important. This paper suggests critical success factors and key performance indicators for implementing infrastructure asset management, aimed at facility managers of government-owned social infrastructure such as roads and bridges. Reorganizing the facility management group requires a new vision, objectives, and strategies for the paradigm-changing asset management. This paper uses the Balanced Score Card (BSC), a proven method for measuring performance and setting new objectives for an organization. Once the performance indicators have been reviewed repeatedly by facility managers through expert workshops, the developed BSC can be used in practice. This paper discusses the development of a robust BSC scoring method through in-depth literature review and an investigation of asset management practices in domestic and international cases.
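
As a rough illustration of how such a scorecard can be aggregated, the Python sketch below combines weighted KPI scores across the four classic BSC perspectives. The perspective weights and KPI names are hypothetical placeholders, not the indicators developed in the paper.

    # Hypothetical BSC: each perspective holds KPIs scored 0-100 and a weight.
    PERSPECTIVES = {
        "financial":        {"weight": 0.25, "kpis": {"maintenance_cost_saving": 70}},
        "customer":         {"weight": 0.25, "kpis": {"road_user_satisfaction": 65}},
        "internal_process": {"weight": 0.30, "kpis": {"inspection_coverage": 80,
                                                      "failure_response_time": 60}},
        "learning_growth":  {"weight": 0.20, "kpis": {"staff_am_training": 55}},
    }

    def bsc_score(perspectives):
        # Average KPIs within each perspective, then take the weighted sum.
        total = 0.0
        for p in perspectives.values():
            avg = sum(p["kpis"].values()) / len(p["kpis"])
            total += p["weight"] * avg
        return total

    print(round(bsc_score(PERSPECTIVES), 1))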

A Multi-Compartment Secret Sharing Method (다중 컴파트먼트 비밀공유 기법)

  • Cheolhoon Choi;Minsoo Ryu
    • The Transactions of the Korea Information Processing Society / v.13 no.2 / pp.34-40 / 2024
  • Secret sharing is a cryptographic technique that involves dividing a secret or a piece of sensitive information into multiple shares or parts, which can significantly increase the confidentiality of the secret. There has been a great deal of research on secret sharing for different contexts and situations. Tassa's conjunctive secret sharing method employs polynomial derivatives to facilitate hierarchical secret sharing. However, the use of derivatives introduces several limitations. First, only a single group of participants can be created at each level, because the shares are generated from a sole derivative. Second, the method can only reconstruct a secret through conjunction, restricting the specification of arbitrary secret reconstruction conditions. Third, Birkhoff interpolation is required, adding complexity compared to the more accessible Lagrange interpolation used in polynomial-based secret sharing. This paper introduces the multi-compartment secret sharing method as a generalization of conjunctive hierarchical secret sharing. Our proposed method first encrypts a secret using external groups' shares and then generates internal shares for each group by embedding the encrypted secret value in a polynomial. While the polynomial can be reconstructed with the internal shares, it yields only the encrypted secret, so the external shares are required for decryption. This approach enables the creation of multiple participant groups at a single level, supports arbitrary secret reconstruction conditions as well as conjunction, and, through its use of polynomials, allows the application of Lagrange interpolation.
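
For context, a hedged sketch of the polynomial building block may help: standard Shamir-style secret sharing with Lagrange interpolation, which the multi-compartment method generalizes by embedding an encrypted value rather than the secret itself. The prime modulus and integer encoding below are illustrative assumptions.

    import random

    PRIME = 2**127 - 1  # illustrative prime modulus; any prime > secret works

    def make_shares(secret, k, n):
        # Random polynomial of degree k-1 over GF(PRIME) with f(0) = secret.
        coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
        def f(x):
            acc = 0
            for c in reversed(coeffs):  # Horner evaluation
                acc = (acc * x + c) % PRIME
            return acc
        return [(x, f(x)) for x in range(1, n + 1)]

    def reconstruct(shares):
        # Lagrange interpolation at x = 0 recovers the constant term f(0).
        secret = 0
        for xi, yi in shares:
            num, den = 1, 1
            for xj, _ in shares:
                if xj != xi:
                    num = (num * -xj) % PRIME
                    den = (den * (xi - xj)) % PRIME
            secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
        return secret

    shares = make_shares(secret=123456789, k=3, n=5)
    assert reconstruct(shares[:3]) == 123456789  # any 3 of 5 shares suffice

In the paper's scheme, the value embedded at f(0) would be the secret encrypted under the external groups' shares, so reconstructing the polynomial alone is not enough to reveal the secret.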

High-Speed Implementation and Efficient Memory Usage of Min-Entropy Estimation Algorithms in NIST SP 800-90B (NIST SP 800-90B의 최소 엔트로피 추정 알고리즘에 대한 고속 구현 및 효율적인 메모리 사용 기법)

  • Kim, Wontae;Yeom, Yongjin;Kang, Ju-Sung
    • Journal of the Korea Institute of Information Security & Cryptology / v.28 no.1 / pp.25-39 / 2018
  • NIST (National Institute of Standards and Technology) recently published the second draft of SP 800-90B, the document for evaluating the security of an entropy source, a key element of a cryptographic random number generator (RNG), and provided a tool implemented in Python. In SP 800-90B, the security evaluation of an entropy source is a process of estimating its min-entropy with several estimators. The estimation process is divided into an IID track and a non-IID track. In the IID track, min-entropy is estimated with the MCV (Most Common Value) estimator alone; in the non-IID track, it is estimated with ten estimators, including MCV. The running time of NIST's tool on the non-IID track is approximately 20 minutes, and its memory usage exceeds 5.5 GB. For evaluation agencies that must repeatedly evaluate various samples, and for developers or researchers who must experiment in various environments, estimating entropy with the tool can therefore be inconvenient and, depending on the environment, impossible. In this paper, we propose high-speed implementations and an efficient memory usage technique for the min-entropy estimation algorithms of SP 800-90B. Our major contributions are three methods for improving speed and reducing memory usage: exploiting the advantages of C++ to speed up the MultiMCW estimator, rebuilding the data storage structure of MultiMMC to reduce its memory usage and improve its speed, and rebuilding the data structure of LZ78Y to improve its speed. A tool incorporating the proposed methods is 14 times faster and uses 13 times less memory than NIST's tool.
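
As an illustration of the simplest estimator involved, here is a minimal Python sketch of the MCV (Most Common Value) estimate along the lines of SP 800-90B: it upper-bounds the probability of the most frequent sample value with a 99% confidence interval and converts that bound to min-entropy. This is a readability sketch, not the optimized C++ implementation the paper describes.

    import math
    from collections import Counter

    def mcv_min_entropy(samples):
        # p_hat: observed frequency of the most common value.
        L = len(samples)
        p_hat = Counter(samples).most_common(1)[0][1] / L
        # 99% upper confidence bound on p_hat (z = 2.576), capped at 1.
        p_u = min(1.0, p_hat + 2.576 * math.sqrt(p_hat * (1 - p_hat) / (L - 1)))
        return -math.log2(p_u)  # min-entropy estimate in bits per sample

    print(mcv_min_entropy([0, 1, 1, 2, 1, 0, 3, 1] * 1000))

The heavier non-IID estimators (MultiMCW, MultiMMC, LZ78Y) maintain large prediction tables over millions of samples, which is where the paper's data-structure rebuilding yields its speed and memory gains.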

Parallel Processing of Satellite Images using CUDA Library: Focused on NDVI Calculation (CUDA 라이브러리를 이용한 위성영상 병렬처리 : NDVI 연산을 중심으로)

  • LEE, Kang-Hun;JO, Myung-Hee;LEE, Won-Hee
    • Journal of the Korean Association of Geographic Information Studies / v.19 no.3 / pp.29-42 / 2016
  • Remote sensing allows information about a large area to be acquired without contacting the objects, and has therefore developed rapidly through application to many fields. With this development, satellite image resolution has advanced rapidly as well, and satellites using remote sensing have been applied in research across many areas of the world. However, while remote sensing research is being carried out in various areas, research on data processing remains insufficient; that is, as satellite resources develop further, data processing continues to lag behind. Accordingly, this paper discusses how to maximize the performance of satellite image processing by utilizing NVIDIA's CUDA (Compute Unified Device Architecture) library, a parallel processing technique. The discussion proceeds as follows. First, standard KOMPSAT (Korea Multi-Purpose Satellite) images of various sizes are subdivided into five types, and NDVI (Normalized Difference Vegetation Index) is computed for the subdivided images. Next, NDVI is computed with ArcMap and with two implementations, one CPU-based and one GPU-based. The histograms of the resulting images are then compared to verify correctness and to analyze the processing speeds of the CPU and GPU versions. The results indicate that the CPU-version and GPU-version images match the ArcMap images, and the histogram comparison confirms that the NDVI code was implemented correctly. In terms of processing speed, the GPU version was 5 times faster than the CPU version. Accordingly, this research shows that a parallel processing technique using the CUDA library can enhance the processing speed of satellite image data, and that the benefit should be even greater for more advanced remote sensing computations than for a simple per-pixel computation like NDVI.
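
For reference, NDVI is a per-pixel computation, NDVI = (NIR - Red) / (NIR + Red), which makes it embarrassingly parallel: every output pixel depends only on the matching input pixels. A minimal NumPy sketch of the CPU-side computation is given below; the toy band arrays are illustrative, and the paper's actual CUDA kernel is not reproduced here.

    import numpy as np

    def ndvi(nir, red, eps=1e-6):
        # NDVI = (NIR - Red) / (NIR + Red), computed independently per pixel;
        # eps guards against division by zero on dark pixels.
        nir = nir.astype(np.float32)
        red = red.astype(np.float32)
        return (nir - red) / (nir + red + eps)

    # Toy 2x2 bands; real inputs would be KOMPSAT NIR and red bands.
    nir = np.array([[200, 180], [150, 90]], dtype=np.uint16)
    red = np.array([[100, 120], [140, 80]], dtype=np.uint16)
    print(ndvi(nir, red))

A CUDA port of this computation would assign one GPU thread per pixel and apply the same formula, which is why a large speedup over a sequential CPU loop is plausible.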

Implementation of Water Bolus in Patient with Large Tissue Defect (조직결손이 큰 환자에서 물 볼루스의 적용에 관한 고찰)

  • Park, Hyo-Kuk;Lee, Sang-Kyu;Yoon, Jong-Won;Cho, Jeong-Hee;Kim, Dong-Wook;Kim, Joo-Ho
    • The Journal of Korean Society for Radiation Therapy / v.18 no.2 / pp.105-112 / 2006
  • Purpose: To demonstrate that a water bolus on the patient surface can decrease the dose inhomogeneity caused by a large tissue defect when the surface lies in an electron-beam field, and to find an easy way to control the water. Methods and Materials: To demonstrate the clinical use of a water bolus on an irregular surface, we considered the case of a patient with myxofibrosarcoma of the chest wall who was treated with electrons. We obtained the dose distribution using the missing-tissue option of Pinnacle 6.2b (ADAC, USA). We fabricated a Mev-Green mold for the water bolus to fit the patient's tissue defect, placed the water bolus, vinyl-packed water, into the designed mold, and performed a CT scan with a CT simulator. Three-dimensional (3D) dose distributions with and without the water bolus on the large irregular chest wall were calculated for a representative patient. The resulting dose distributions and dose-volume histograms of the water-bolus plan were compared with the missing-tissue-option and non-bolus plans. We also fabricated a new water-control device. Results: The controlled water bolus markedly decreased the dose heterogeneity and minimized the normal-tissue exposure caused by the surface irregularities of the chest-wall mass. In the test case, the non-bolus plan had a maximum target dose of 132%; after applying the water bolus, the maximum target dose was reduced substantially to 110.4%, a reduction of 21.6 percentage points. Conclusion: The results showed that a controlled water bolus, used with the water-control device, can significantly improve the dose homogeneity in the PTV for patients treated with electron therapy. This technique may reduce the incidence of normal-organ complications after electron-beam therapy on irregular surfaces, and our new device makes water control convenient.


Study on the Selecting of Suitable Sites for Integrated Riparian Eco-belts Connecting Dam Floodplains and Riparian Zone - Case Study of Daecheong Reservoir in Geum-river Basin - (댐 홍수터와 수변구역을 연계한 통합형 수변생태벨트 적지 선정방안 연구 - 금강 수계 대청호 사례 연구 -)

  • Bahn, Gwonsoo;Cho, Myeonghyeon;Kang, Jeonkyeong;Kim, Leehyung
    • Journal of Wetlands Research / v.23 no.4 / pp.327-341 / 2021
  • The riparian eco-belt is an efficient technique that can reduce non-point pollution sources in a basin and improve ecological connectivity and health. In Korea, a legal system for the construction and management of riparian eco-belts is in operation; however, rivers and floodplains in dam reservoirs, although advantageous for buffer functions such as the control of non-point pollutants and the provision of ecological habitats, are currently excluded from it. Accordingly, this study presented and analyzed a method for selecting sites for an integrated riparian eco-belt that comprehensively evaluates the water quality and ecosystem characteristics of each dam floodplain and riparian zone, applied to the Daecheong Dam basin in the Geum River watershed. First, the Daecheong Dam basin was divided into 138 sub-basins with GIS, and the riparian zones adjacent to the dam floodplain were analyzed. Sixteen evaluation factors related to ecosystem and water-quality impacts that affect the selection of an integrated riparian eco-belt were selected, and weights for the importance of each factor were set through AHP (Analytic Hierarchy Process) analysis. Site-suitability priorities were derived through an integrated evaluation that applied the weights to the floodplain and riparian-zone factors of each sub-basin. To determine whether the sites derived through the GIS analysis are suitable for actual implementation, five sites were inspected against three factors: land use, pollution sources, and ecological connectivity. As a result, all sites were confirmed to be appropriate for an integrated riparian eco-belt. The riparian eco-belt site-analysis technique proposed in this study can serve as a useful tool when establishing an integrated riparian-zone management policy in the future. However, it may be necessary to experiment with various evaluation factors and weights for each item according to the characteristics and issues of each dam, and additional research is needed on elaborated conservation and restoration strategies that consider the Green-Blue Network, the evaluation of ecosystem services, and the interconnection between related laws and policies and their improvement.
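
To make the weighting step concrete, the following Python sketch derives AHP weights from a pairwise comparison matrix via the common geometric-mean approximation and applies them as a weighted suitability sum. The 3-factor matrix and scores are illustrative; the study used 16 factors weighted from expert judgments.

    import numpy as np

    # Hypothetical 3-factor pairwise comparison matrix (Saaty's 1-9 scale).
    pairwise = np.array([
        [1.0, 3.0, 5.0],
        [1/3, 1.0, 2.0],
        [1/5, 1/2, 1.0],
    ])

    def ahp_weights(A):
        # Geometric mean of each row approximates the principal eigenvector.
        gm = np.prod(A, axis=1) ** (1.0 / A.shape[0])
        return gm / gm.sum()

    weights = ahp_weights(pairwise)

    # Integrated evaluation: weighted sum of normalized factor scores
    # per sub-basin; higher scores indicate more suitable sites.
    factor_scores = np.array([
        [0.8, 0.4, 0.6],   # sub-basin 1
        [0.5, 0.9, 0.3],   # sub-basin 2
    ])
    print(factor_scores @ weights)

Ranking the resulting scores across all sub-basins yields the site-suitability priorities described above, with field inspection as the final check.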

Memory Organization for a Fuzzy Controller.

  • Jee, K.D.S.;Poluzzi, R.;Russo, B.
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 1993.06a / pp.1041-1043 / 1993
  • Fuzzy logic based control theory has gained much interest in the industrial world, thanks to its ability to formalize and solve, in a very natural way, many problems that are very difficult to quantify at an analytical level. This paper shows a solution for treating membership functions inside hardware circuits. The proposed hardware structure optimizes the memory size by using a particular form of vectorial representation. The process of memorizing fuzzy sets, i.e. their membership functions, has always been one of the more problematic issues for hardware implementation, due to the quite large memory space that is needed. To simplify such an implementation, it is common [1,2,8,9,10,11] to limit the membership functions either to those having a triangular or trapezoidal shape, or to a pre-defined shape. These kinds of functions can cover a large spectrum of applications with a limited usage of memory, since they can be memorized by specifying very few parameters (height, base, critical points, etc.). This, however, results in a loss of computational power due to computation on the intermediate points. A solution to this problem is obtained by discretizing the universe of discourse U, i.e. by fixing a finite number of points and memorizing the value of the membership functions on such points [3,10,14,15]. Such a solution provides a satisfying computational speed, a very high precision of definition, and gives users the opportunity to choose membership functions of any shape. However, significant memory waste can be registered as well: it is quite possible that, for each of the given fuzzy sets, many elements of the universe of discourse have a membership value equal to zero. It has also been noticed that in almost all cases the common points among fuzzy sets, i.e. points with non-null membership values, are very few. More specifically, in many applications there exist, for each element u of U, at most three fuzzy sets for which the membership value is not null [3,5,6,7,12,13]. Our proposal is based on these hypotheses. Moreover, we use a technique that, even though it does not restrict the shapes of the membership functions, strongly reduces the computation time for the membership values and optimizes the function memorization. Figure 1 represents a term set whose characteristics are common for fuzzy controllers and to which we refer in the following. The term set has a universe of discourse with 128 elements (so as to have a good resolution), 8 fuzzy sets that describe the term set, and 32 levels of discretization for the membership values. Clearly, the numbers of bits necessary for the given specifications are 5 for the 32 truth levels, 3 for the 8 membership functions, and 7 for the 128 levels of resolution. The memory depth is given by the dimension of the universe of discourse (128 in our case) and is represented by the memory rows. The length of a word of memory is defined by Length = nfm x (dm(m) + dm(fm)), where nfm is the maximum number of non-null membership values on any element of the universe of discourse, dm(m) is the dimension of a membership value m, and dm(fm) is the dimension of the word representing the index of the corresponding membership function. In our case, Length = 3 x (5 + 3) = 24, and the memory dimension is therefore 128 x 24 bits. If we had chosen to memorize all values of the membership functions, we would have needed to memorize the membership value of every fuzzy set on each memory row; the word dimension would then be 8 x 5 = 40 bits, and the dimension of the memory would have been 128 x 40 bits. Coherently with our hypothesis, in fig. 1 each element of the universe of discourse has a non-null membership value on at most three fuzzy sets; the elements 32, 64, and 96 of the universe of discourse, for example, are memorized as triples of (function index, membership value) pairs. The computation of the rule weights is done by comparing the bits that represent the index of the membership function with the word of the program memory. The output bus of the Program Memory (μCOD) is given as input to a comparator (combinatory net). If the index is equal to the bus value, then one of the non-null weights derived from the rule is produced as output; otherwise the output is zero (fig. 2). It is clear that the memory dimension of the antecedent is reduced in this way, since only non-null values are memorized. Moreover, the time performance of the system is equivalent to that of a system using vectorial memorization of all weights. The dimensioning of the word is influenced by some parameters of the input variable. The most important parameter is the maximum number of membership functions (nfm) having a non-null value on each element of the universe of discourse. From our studies in the field of fuzzy systems, we see that typically nfm <= 3 and that there are at most 16 membership functions. At any rate, this value can be increased up to the physical dimensional limit of the antecedent memory. A less important role in the optimization of the word dimension is played by the number of membership functions defined for each linguistic term. A table in the paper shows the required word dimension as a function of these parameters and compares our proposed method with the method of vectorial memorization [10]. Summing up, the characteristics of our method are: users are not restricted to membership functions with specific shapes; the number of fuzzy sets and the resolution of the vertical axis have a very small influence on memory space; weight computations are done by a combinatorial network, so the time performance of the system is equivalent to that of the vectorial method; and the number of non-null membership values on any element of the universe of discourse is limited, a constraint that is usually not very restrictive, since many controllers obtain good precision with only three non-null weights. The method briefly described here has been adopted by our group in the design of an optimized version of the coprocessor described in [10].
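
As a software illustration of the memorization scheme just described, the following minimal Python sketch stores, for each element of the universe of discourse, only the non-null (function index, membership value) pairs, and looks a value up by index comparison the way the combinatory net does. The names and the toy term set are assumptions for readability, not the hardware design itself.

    # Sparse antecedent memory: each row holds at most NFM (index, value)
    # pairs instead of one value per fuzzy set.
    NFM = 3       # max non-null memberships per element (the paper's hypothesis)
    N_SETS = 8    # fuzzy sets in the example term set

    def build_memory(membership):
        # membership[s][u]: discretized membership value of fuzzy set s at u.
        universe = len(membership[0])
        memory = []
        for u in range(universe):
            row = [(s, membership[s][u])
                   for s in range(N_SETS) if membership[s][u] > 0]
            assert len(row) <= NFM, "more than NFM non-null values at element"
            memory.append(row)
        return memory

    def weight(memory, u, s):
        # Mimics the combinatory net: compare stored indices with the
        # requested index s; emit the value on a match, zero otherwise.
        for idx, val in memory[u]:
            if idx == s:
                return val
        return 0

    # 8 sets over a 4-element toy universe; at most 3 non-null per column.
    demo = [[5, 0, 0, 0], [3, 7, 0, 0], [0, 2, 6, 0], [0, 0, 1, 4],
            [0, 0, 0, 2], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]
    mem = build_memory(demo)
    print(weight(mem, 1, 1))  # -> 7

With the paper's figures, each row shrinks from 8 x 5 = 40 bits (full vectorial storage) to 3 x (5 + 3) = 24 bits, while lookups remain constant-time.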


A study on the Success Factors and Strategy of Information Technology Investment Based on Intelligent Economic Simulation Modeling (지능형 시뮬레이션 모형을 기반으로 한 정보기술 투자 성과 요인 및 전략 도출에 관한 연구)

  • Park, Do-Hyung
    • Journal of Intelligence and Information Systems / v.19 no.1 / pp.35-55 / 2013
  • Information technology is a critical resource for any company hoping to support and realize its strategic goals, which contribute to growth promotion and sustainable development. The selection of information technology and its strategic use are imperative for the enhanced performance of every aspect of company management, leading a wide range of companies to invest continuously in information technology. Despite the keen interest of researchers, managers, and policy makers in how information technology contributes to organizational performance, there is uncertainty and debate about the results of information technology investment. In other words, researchers and managers cannot easily identify the independent factors that impact the investment performance of information technology, mainly because many factors, ranging from a company's internal components to its strategies and external customers, are interconnected with that performance. Using an agent-based simulation technique, this research extracts factors expected to affect investment performance, simplifies the analysis of their relationships with economic modeling, and examines how performance depends on changes in the factors. In terms of economic modeling, I expand the model in which product quality moderates the relationship between information technology investments and economic performance (Thatcher and Pingry, 2004) by considering the cost of information technology investment and the demand creation resulting from product quality enhancement. For quality enhancement and its consequences for demand creation, I apply the concept of information quality and decision-maker quality (Raghunathan, 1999). This concept implies that investment in information technology improves the quality of information, which in turn improves decision quality and performance, thus enhancing the level of product or service quality. Additionally, I consider the effect of word of mouth among consumers, which creates new demand for a product or service through an information diffusion effect. This demand creation is analyzed with an agent-based simulation model of the kind widely used for network analyses. Results show that investment in information technology enhances the quality of a company's product or service, which indirectly affects the company's economic performance, particularly with regard to consumer surplus, company profit, and company productivity. Specifically, when a company makes its initial investment in information technology, the resulting increase in the quality of its product or service immediately has a positive effect on consumer surplus, but the investment cost has a negative effect on company productivity and profit. As time goes by, the enhanced quality of the company's product or service creates new consumer demand through the information diffusion effect, and this new demand positively affects the company's profit and productivity. In terms of investment strategy, the results also reveal that the selection of information technology needs to be based on an analysis of the service and of the network effects among customers, and demonstrate that information technology implementation should fit the company's business strategy. Specifically, if a company seeks a short-term enhancement of performance, it needs a one-shot strategy (making a large investment at one time); if it seeks a long-term sustainable profit structure, it needs a split strategy (making several small investments at different times). The findings make several contributions to the literature. In terms of methodology, the study integrates economic modeling and simulation techniques to overcome the limitations of each. It also indicates the mediating effect of product quality on the relationship between information technology and company performance. Finally, it analyzes the effect of information technology investment strategies and of information diffusion among consumers on the investment performance of information technology.
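
To illustrate the word-of-mouth diffusion mechanism, here is a minimal agent-based sketch in Python: adopters spread awareness to random neighbors, and cumulative demand grows over time. All parameters (network size, adoption probability, seeding) are illustrative assumptions, not the calibrated model of the study.

    import random

    def simulate_wom(n_agents=500, n_neighbors=6, adopt_prob=0.1,
                     seed_fraction=0.05, steps=20, seed=42):
        # Each agent is linked to random neighbors; adoption spreads by
        # word of mouth from adopters to their non-adopting neighbors.
        rng = random.Random(seed)
        neighbors = [rng.sample(range(n_agents), n_neighbors)
                     for _ in range(n_agents)]
        adopted = [rng.random() < seed_fraction for _ in range(n_agents)]
        demand = [sum(adopted)]
        for _ in range(steps):
            nxt = list(adopted)
            for a in range(n_agents):
                if not adopted[a]:
                    informed = sum(adopted[b] for b in neighbors[a])
                    # Chance of adopting grows with adopting neighbors.
                    if rng.random() < 1 - (1 - adopt_prob) ** informed:
                        nxt[a] = True
            adopted = nxt
            demand.append(sum(adopted))
        return demand  # cumulative adopters (new demand) per step

    print(simulate_wom())

In such a model, quality improvements can be represented by raising adopt_prob, which is one simple way an information-technology-driven quality gain translates into the delayed demand growth described above.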