• Title/Summary/Keyword: pre-computation


Study on Diagnosis by Facial Shapes and Signs as a Disease-Prediction Data for a Construction of the Ante-disease Pattern Diagno-Therapeutic System - Focusing on Gallbladder's versus Bladder's Body and Masculine versus Feminine Shape - (미병학(未病學) 체계구축을 위한 질병예측자(疾病豫側子)로서의 형상진단연구 - 담방광체(膽膀胱體)와 남녀형상(男女形象)을 중심으로 -)

  • Kim, Jong-Wan;Kim, Kyung-Chul;Lee, Yang-Tae;Lee, In-Seon;Kim, Kyu-Kon;Chi, Gyoo-Yang
    • Journal of Physiology & Pathology in Korean Medicine
    • /
    • v.23 no.3
    • /
    • pp.540-547
    • /
    • 2009
  • Disease-predictive signs are needed to enable preventive diagnosis and therapy. Traditional Chinese medicine currently applies various diagnostic instruments borrowed from western medicine to diagnose the sub-healthy state, but such data do not originate from oriental medicine itself and are not easily obtained in ordinary clinical practice. This paper provides a partial synopsis of ante-disease diagno-therapeutics and presents predictive data based on facial shapes and signs, focusing on the gallbladder's versus bladder's body and the masculine versus feminine shape. Ante-disease refers not only to the completely healthy state but also to the stage of disease development in which no symptoms are yet macroscopically visible. It comprises two stages: the pre-disease state and the untransmitted state of a disease. The patterns of ante-disease consist of latent disease, pre-disease, and transmission types such as senescent syndrome, abnormal reactive syndrome (變證), and syndromes of transmission and transmutation. The classification into gallbladder and bladder types reflects differences in the shape, color, and size of each organ relative to universal, standard human figures, while the classification into masculine and feminine shapes contrasts innate sexual differences and the shape and characteristics that arise from them. Each classification theory has its own pathologic and syndrome types for each disease, so disease-predictive data can be constructed from these relationships. Furthermore, this facial shape-and-sign diagnostic method can be applied to all stages of disease, from the prenatal state to the present, even when the cause and inducement are unclear. An ante-disease diagno-therapeutic system based on the gallbladder's versus bladder's body and the masculine versus feminine shape is increasingly important for chronic and internal diseases compared with acute and traumatic diseases, and this study can thus compensate for the limits of ante-disease diagnosis in oriental medical clinics.

Design and Evaluation of an Edge-Fog Cloud-based Hierarchical Data Delivery Scheme for IoT Applications (사물인터넷 응용을 위한 에지-포그 클라우드 기반 계층적 데이터 전달 방법의 설계 및 평가)

  • Bae, Ihn-Han
    • Journal of Internet Computing and Services
    • /
    • v.19 no.1
    • /
    • pp.37-47
    • /
    • 2018
  • The number and capabilities of Internet of Things (IoT) devices will grow exponentially over the coming years, and these devices may generate a vast amount of time-constrained data. In the IoT context, data management should act as a layer between the objects and devices generating the data and the applications accessing the data for analysis and services. In addition, most IoT services will be content-centric rather than host-centric, to increase data availability and the efficiency of data delivery. IoT will interconnect all communicating devices and make the data generated by or associated with devices or objects globally accessible. Fog computing keeps data and computation close to end users at the edge of the network, and thus provides a new breed of geographically distributed applications and services with low latency and high bandwidth. In this paper, we propose an Edge-Fog cloud-based Hierarchical Data Delivery ($EFcHD^2$) method that effectively and reliably delivers IoT data to the associated IoT applications while ensuring time sensitivity. The proposed $EFcHD^2$ method is built on a fully decentralized hybrid of the edge and fog compute cloud models, the Edge-Fog cloud, and uses information-centric networking and Bloom filters. It stores replicas of IoT data, or feature data pre-processed by edge nodes, at appropriate locations in the Edge-Fog cloud, considering the characteristics of IoT data: locality, size, time sensitivity, and popularity. The performance of the $EFcHD^2$ method is evaluated through an analytical model and compared with fog server-based and Content-Centric Networking (CCN)-based data delivery methods.
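
The $EFcHD^2$ scheme summarized above relies on information-centric names and Bloom filters to locate replicas in the Edge-Fog cloud. The abstract gives no code; below is a minimal illustrative sketch, under assumptions, of a node-side Bloom filter over content names and a requester-side candidate lookup. The class name `BloomFilter`, the node identifiers, and the filter parameters are hypothetical, not the authors' implementation.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter summarizing the content names stored at a node."""
    def __init__(self, num_bits=1024, num_hashes=4):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(num_bits // 8)

    def _positions(self, name):
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{name}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.num_bits

    def add(self, name):
        for pos in self._positions(name):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def might_contain(self, name):
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(name))

# Each edge/fog node advertises a Bloom filter of the content names it replicates.
node_filters = {"edge-1": BloomFilter(), "fog-1": BloomFilter()}
node_filters["edge-1"].add("/sensor/42/temperature")

def candidate_nodes(content_name):
    """Return nodes whose filter says they may hold the named data
    (false positives possible, false negatives not)."""
    return [n for n, bf in node_filters.items() if bf.might_contain(content_name)]

print(candidate_nodes("/sensor/42/temperature"))   # ['edge-1'] (possibly more)
```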

Study for Progress Rate of Standard Deviation of Irregularity Based on Track Properties for the Railway Track Maintenance Cycle Analysis (궤도 유지보수 주기 예측을 위한 구간 특성에 따른 궤도틀림 표준편차 진전정도 분석)

  • Jeong, Min Chul;Kim, Jung Hoon;Lee, Jee Ha;Kang, Yun Suk;Kong, Jung Sik
    • Journal of the Korea institute for structural maintenance and inspection
    • /
    • v.16 no.3
    • /
    • pp.31-40
    • /
    • 2012
  • Track irregularity affects not only ride comfort, through noise and vibration, but also the safety of train operation. Designing a reliable and sustainable railway track system and analyzing the train movement mechanism therefore call for systematic approaches that account for the causes of track irregularity possible in a specific local environment. Irregularity data inspected by the EM-120, a track inspection vehicle used in Korea, inevitably contain incomplete and erratic records, so analyzing the data without appropriate pre-refining processes is problematic. In this research, the progress rate of the standard deviation of track irregularity is quantified for the efficient management and maintenance of the railway system. The computation takes important track components such as rail joints, ballast, roadbed, and fasteners into account. Probabilistic distributions of irregularity growth over time are then computed to predict the remaining service life of the track and to support safety assessment.
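
The study quantifies the progress rate of the standard deviation of irregularity over time. As a rough illustration of that computation (the paper's actual statistical model and data are not reproduced here), the sketch below fits a least-squares slope to hypothetical per-section inspection data; the dates, values, and maintenance threshold are invented for illustration.

```python
import numpy as np

# Hypothetical inspection history for one track section: days since a
# reference date, and the standard deviation (mm) of the measured
# irregularity over that section at each inspection.
days = np.array([0, 90, 180, 270, 360], dtype=float)
sd_mm = np.array([1.10, 1.22, 1.31, 1.45, 1.58])

# Progress rate = slope of an ordinary least-squares fit of SD against time.
slope, intercept = np.polyfit(days, sd_mm, deg=1)
print(f"progress rate: {slope * 365:.3f} mm/year")

# With a maintenance threshold on the SD, the fitted line also gives a rough
# remaining-life estimate for the section.
threshold_mm = 2.0
remaining_days = (threshold_mm - sd_mm[-1]) / slope
print(f"estimated days until threshold: {remaining_days:.0f}")
```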

Development and Validation of A Decision Support System for the Real-time Monitoring and Management of Reservoir Turbidity Flows: A Case Study for Daecheong Dam (실시간 저수지 탁수 감시 및 관리를 위한 의사결정지원시스템 개발 및 검증: 대청댐 사례)

  • Chung, Se-Woong;Jung, Yong-Rak;Ko, Ick-Hwan;Kim, Nam-Il
    • Journal of Korea Water Resources Association
    • /
    • v.41 no.3
    • /
    • pp.293-303
    • /
    • 2008
  • Reservoir turbidity flows degrade the efficiency and sustainability of water supply systems in many countries in the monsoon climate region. A decision support system called RTMMS, aimed at assisting reservoir operations, was developed for the real-time monitoring, modeling, and management of turbidity flows induced by flood runoff in Daecheong reservoir. RTMMS consists of a real-time data acquisition module that collects and stores field monitoring data, a data assimilation module that supports pre-processing of model input data, a two-dimensional numerical model for simulating reservoir hydrodynamics and turbidity, and a post-processor that aids the analysis of simulation results and alternative management scenarios. RTMMS was calibrated with field data obtained during the 2004 flood season and applied to real-time simulations of the flood events of July 2006 to assess its predictive capability. The system reproduced the density flow regimes and the fate of turbidity plumes in the reservoir with fairly satisfactory accuracy and with the efficient computation time that is a vital requirement for real-time application. The RTMMS configuration suggested in this study can be adopted in many reservoirs with similar turbidity issues for better management of water supply utilities and the downstream aquatic ecosystem.
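
RTMMS is described as four cooperating modules: real-time acquisition, data assimilation/pre-processing, a two-dimensional hydrodynamic and turbidity model, and a post-processor. The skeleton below is only an illustrative sketch of such a pipeline; every function, field name, and the trivial stand-in for the 2-D model are assumptions, not RTMMS code.

```python
def acquire():
    """Real-time data acquisition: return the latest field observations."""
    return [
        {"time": "2006-07-16T00:00", "inflow_cms": 850.0, "turbidity_ntu": 310.0},
        {"time": "2006-07-16T01:00", "inflow_cms": 910.0, "turbidity_ntu": 340.0},
    ]

def assimilate(records):
    """Pre-process observations into model input (unit checks, gap filtering)."""
    return [(r["inflow_cms"], r["turbidity_ntu"]) for r in records
            if r["turbidity_ntu"] is not None]

def simulate(model_input):
    """Placeholder for the 2-D hydrodynamic/turbidity model run."""
    return [{"inflow": q, "predicted_outflow_turbidity": 0.4 * t}
            for q, t in model_input]

def post_process(results):
    """Summarize results for operators and management scenarios."""
    peak = max(r["predicted_outflow_turbidity"] for r in results)
    return f"peak predicted outflow turbidity: {peak:.0f} NTU"

print(post_process(simulate(assimilate(acquire()))))
```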

Subband Sparse Adaptive Filter for Echo Cancellation in Digital Hearing Aid Vent (디지털 보청기 벤트 반향제거를 위한 부밴드 성긴 적응필터)

  • Bae, Hyeonl-Deok
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.11 no.5
    • /
    • pp.538-542
    • /
    • 2018
  • Echo generated in the vent of a digital hearing aid causes user discomfort. To cancel the feedback echo in the vent, the vent impulse response must be estimated accurately. The vent impulse response is time-varying and sparse, and IPNLMS is known to be a useful adaptive algorithm for estimating impulse responses with these characteristics. In this paper, a subband sparse adaptive filter that applies IPNLMS to a subband hearing aid structure is proposed to cancel the vent echo by estimating the sparse vent impulse response. In the proposed method, decomposing the input signal into subbands pre-whitens each subband signal, which improves the convergence speed of the adaptive filter, and the polyphase decomposition of the adaptive filter increases the sparsity of each component, enabling better echo cancellation without additional computation. To derive the coefficient update equation of the adaptive filter, a cost function based on weighted NLMS is defined and the coefficient update equation for each subband is derived. To verify the performance of the filter, the convergence speed and steady-state error for white-noise input and the echo cancellation results for real speech input are evaluated against conventional adaptive filters.
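
The proposal applies IPNLMS within a subband structure; the derivation is not reproduced in the abstract. For reference, the sketch below shows a standard fullband IPNLMS coefficient update (not the authors' subband variant), with step size `mu`, proportionality factor `alpha`, and regularization terms chosen arbitrarily for illustration.

```python
import numpy as np

def ipnlms_update(w, x_buf, d, mu=0.5, alpha=0.0, eps=1e-6, delta=1e-4):
    """One IPNLMS coefficient update (standard fullband form).

    w     : current filter coefficients, shape (L,)
    x_buf : most recent L input samples, newest first, shape (L,)
    d     : desired (microphone) sample at this time step
    Returns (updated w, a-priori error e).
    """
    L = w.size
    e = d - w @ x_buf                                   # a-priori error
    # Proportionate gains: mix of uniform NLMS and magnitude-proportionate terms.
    k = (1 - alpha) / (2 * L) + (1 + alpha) * np.abs(w) / (2 * np.sum(np.abs(w)) + eps)
    w = w + mu * e * k * x_buf / (x_buf @ (k * x_buf) + delta)
    return w, e

# Toy usage: identify a sparse 64-tap "vent" impulse response from white noise.
rng = np.random.default_rng(0)
h = np.zeros(64); h[[3, 10, 25]] = [0.8, -0.4, 0.2]     # sparse true response
w = np.zeros(64)
x = rng.standard_normal(5000)
for n in range(64, x.size):
    x_buf = x[n:n - 64:-1]                              # newest-first input window
    d = h @ x_buf
    w, e = ipnlms_update(w, x_buf, d)
print("coefficient error:", np.linalg.norm(w - h))
```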

Real-time Watermarking Algorithm using Multiresolution Statistics for DWT Image Compressor (DWT기반 영상 압축기의 다해상도의 통계적 특성을 이용한 실시간 워터마킹 알고리즘)

  • 최순영;서영호;유지상;김대경;김동욱
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.13 no.6
    • /
    • pp.33-43
    • /
    • 2003
  • In this paper, we propose a real-time watermarking algorithm designed to be combined with a DWT (Discrete Wavelet Transform)-based image compressor. To reduce the amount of computation in selecting watermarking positions, the algorithm uses a pre-established look-up table of critical values, built statistically by computing the correlation according to the energy of the corresponding wavelet coefficients. That is, the watermark is embedded into coefficients whose values exceed the critical value retrieved from the look-up table, which is indexed by the energy of the corresponding level-1 subband coefficients. The algorithm can therefore operate in real time, because the watermarking process runs in parallel with the compression process without affecting the operation of the image compressor. It also mitigates both the loss of the watermark caused by quantization and Huffman coding during compression and the degradation of compression efficiency caused by watermark insertion. Visually recognizable patterns such as binary images were used as watermarks. The experimental results showed that the proposed algorithm satisfies robustness and imperceptibility, the major requirements of watermarking.
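
The embedding rule described above selects wavelet coefficients whose values exceed a critical value looked up from subband energy. The sketch below is a simplified illustration that uses a fixed threshold in place of the paper's statistically derived look-up table; the `pywt` usage, threshold, and embedding strength are assumptions for demonstration only.

```python
import numpy as np
import pywt

def embed_watermark(image, watermark_bits, wavelet="haar", threshold=30.0, strength=4.0):
    """Illustrative DWT-domain embedding: add watermark bits to level-1 detail
    coefficients whose magnitude exceeds a threshold (a stand-in for the
    paper's statistically derived look-up table of critical values)."""
    cA, (cH, cV, cD) = pywt.dwt2(image.astype(float), wavelet)
    coeffs = cH.ravel()
    positions = np.flatnonzero(np.abs(coeffs) > threshold)[: len(watermark_bits)]
    # +strength for bit 1, -strength for bit 0, on the selected coefficients.
    coeffs[positions] += strength * (2 * np.asarray(watermark_bits[: len(positions)]) - 1)
    cH = coeffs.reshape(cH.shape)
    return pywt.idwt2((cA, (cH, cV, cD)), wavelet), positions

# Toy usage with a random "image" and a short binary watermark.
rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(64, 64))
marked, used_positions = embed_watermark(img, watermark_bits=[1, 0, 1, 1, 0, 1])
print("coefficients actually marked:", len(used_positions))
```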

Distributed Certificate Authority under the GRID-Location Aided Routing Protocol (Ad hoc 네트워크에서 GRID-Location Aided Routing 프로토콜을 이용한 분산 CA 구성)

  • Lim, Ji-Hyung;Kang, Jeon-Il;Koh, Jae-Young;Han, Kwang-Taek;Nyang, Dae-Hun
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.15 no.6
    • /
    • pp.59-69
    • /
    • 2005
  • An ad hoc network operates without a pre-constructed infrastructure, and mobile nodes can join the network freely. However, the participation of mobile nodes imposes a heavy burden of re-computation for new routes, because connections are frequently lost, and it also creates a serious security problem: wrong information can be broadcast by malicious users. Authentication of mobile nodes is therefore required. Two approaches are possible: a single CA and a distributed CA. With a single CA, the wireless network can collapse once the CA is exposed, whereas the distributed CA method is somewhat safer because the network collapses only if many CAs are attacked. A secret sharing scheme can be used to construct the distributed CA system, but it is weak when the network becomes too large. In this paper, we propose a hierarchical structure for the authentication method to solve this problem, and we present simulation results for this proposal.
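
The distributed CA discussed above is built on a secret sharing scheme. As background, the sketch below shows a minimal (t, n) Shamir secret sharing over a prime field, which is one standard way such a CA key can be split among nodes; the prime, threshold, and share count are illustrative choices, not parameters from the paper.

```python
import random

# Minimal (t, n) Shamir secret sharing over a prime field, the kind of scheme
# a distributed CA can use to split its signing key among nodes.
PRIME = 2**127 - 1  # a Mersenne prime large enough for a toy secret

def split(secret, t, n):
    """Split `secret` into n shares; any t of them reconstruct it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

shares = split(secret=123456789, t=3, n=5)
print(reconstruct(shares[:3]) == 123456789)   # True: any 3 of the 5 shares suffice
```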

Text Classification Using Heterogeneous Knowledge Distillation

  • Yu, Yerin;Kim, Namgyu
    • Journal of the Korea Society of Computer and Information
    • /
    • v.27 no.10
    • /
    • pp.29-41
    • /
    • 2022
  • Recently, with the development of deep learning technology, a variety of huge models with excellent performance have been built by pre-training on massive amounts of text data. However, for such models to be applied in real-life services, inference must be fast and the amount of computation low, so model compression techniques are attracting attention. Knowledge distillation, a representative compression technique, is of particular interest because it can be used in many ways to transfer the knowledge already learned by a teacher model to a relatively small student model. Conventional knowledge distillation has a limitation, however: because the teacher learns only the knowledge needed for the given task and distills it to the student from the same point of view, it struggles on problems with low similarity to the previously learned data. We therefore propose a heterogeneous knowledge distillation method in which the teacher model learns a higher-level concept rather than the knowledge required for the task the student model must solve, and distills this knowledge to the student. Through classification experiments on about 18,000 documents, we confirmed that heterogeneous knowledge distillation outperforms traditional knowledge distillation in both learning efficiency and accuracy.
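
The proposed method changes what the teacher learns, but the distillation step itself still transfers soft targets from teacher to student. For orientation, the sketch below shows the standard temperature-scaled soft-target distillation loss (not the paper's heterogeneous variant); the temperature `T` and mixing weight `alpha` are illustrative hyperparameters.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Standard soft-target distillation loss: KL between temperature-softened
    teacher and student distributions, mixed with the usual cross-entropy on
    the hard labels."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)                       # rescale gradients for the temperature
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

# Toy usage: a batch of 4 documents, 3 classes.
student_logits = torch.randn(4, 3, requires_grad=True)
teacher_logits = torch.randn(4, 3)
labels = torch.tensor([0, 2, 1, 0])
loss = distillation_loss(student_logits, teacher_logits, labels)
loss.backward()
print(float(loss))
```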

A Study on the Intelligent Quick Response System for Fast Fashion(IQRS-FF) (패스트 패션을 위한 지능형 신속대응시스템(IQRS-FF)에 관한 연구)

  • Park, Hyun-Sung;Park, Kwang-Ho
    • Journal of Intelligence and Information Systems
    • /
    • v.16 no.3
    • /
    • pp.163-179
    • /
    • 2010
  • Recently, the concept of fast fashion is drawing attention as customer needs diversify and supply lead times shorten in the fashion industry. As competition intensifies, how quickly and efficiently customer needs are satisfied is emphasized as one of the critical success factors in the industry. Because fast fashion is inherently trend-sensitive, it is very important for fashion retailers to make quick decisions on which items to launch, the quantities based on demand prediction, and the timing of the response, and these planning decisions must be executed through the procurement, production, and logistics processes in real time. To adapt to this trend, the fashion industry urgently needs the support of an intelligent quick response (QR) system; however, the traditional functions of QR systems have not been able to fully satisfy the demands of fast fashion. This paper proposes an intelligent quick response system for fast fashion (IQRS-FF). Models are presented for the QR process, QR principles and execution, and QR quantity and timing computation. The IQRS-FF models support decision makers by providing useful information through automated, rule-based algorithms: if the predefined conditions of a rule are satisfied, the actions defined in the rule are taken automatically or reported to the decision makers. In IQRS-FF, QR decisions are made in two stages: pre-season and in-season. In pre-season, a master demand prediction is first performed based on macro-level analysis such as the local and global economy, fashion trends, and competitors, and it feeds the master production and procurement plans. Checking the availability and delivery of materials for production, decision makers must make reservations or request procurement, and for outsourced materials they must check partners' availability and capacity. With these master plans, QR performance during the in-season is greatly enhanced, and QR items are selected with full consideration of material availability in the warehouse as well as partners' capacity. During in-season, decision makers must find the right time for QR as actual sales occur in stores. They then decide which items to QR based not only on qualitative criteria such as the opinions of sales staff but also on quantitative criteria such as sales volume, the recent sales trend, inventory level, the remaining period, the forecast for the remaining period, and competitors' performance. To calculate QR quantity in IQRS-FF, two calculation methods are designed: QR Index-based calculation and attribute-similarity-based calculation using demographic clusters. In the early period of a new season, the attribute-similarity-based calculation is preferred because there are not enough historical sales data; QR quantity is computed by analyzing the sales trends of categories or items with similar attributes. When there is enough information to analyze sales trends or forecast, the QR Index-based method can be used instead. Having defined the decision-making models for QR, we design KPIs (Key Performance Indicators) to test the reliability of the models in critical decisions: the difference in sales volume between QR and non-QR items, the accuracy rate of QR, and the lead time spent on QR decision-making.
To verify the effectiveness and practicality of the proposed models, a case study was performed for a representative fashion company that recently developed and launched the IQRS-FF. The case study shows that the average sales rate of QR items increased by 15%, the difference in sales rate between QR and non-QR items increased by 10%, the QR accuracy was 70%, and the lead time for QR decreased dramatically from 120 hours to 8 hours.
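
IQRS-FF is described as a rule-based system: when a rule's predefined conditions hold, its action is taken or reported to the decision makers. The snippet below is a toy illustration of that pattern; the item fields, rules, and thresholds are hypothetical and do not reflect the paper's actual rules, QR Index, or similarity-based calculation.

```python
# Illustrative rule-based QR decision: if a rule's condition holds for an item,
# its recommended action is printed. All fields and thresholds are hypothetical.

ITEM = {
    "name": "slim denim jacket",
    "weekly_sales": [120, 150, 190],   # recent sales trend (units/week)
    "inventory": 260,                  # units on hand
    "weeks_remaining": 6,              # weeks left in the season
}

RULES = [
    {
        "name": "trigger QR reorder",
        "condition": lambda it: it["weekly_sales"][-1] * it["weeks_remaining"] > it["inventory"],
        "action": lambda it: f"reorder about "
                             f"{it['weekly_sales'][-1] * it['weeks_remaining'] - it['inventory']} units",
    },
    {
        "name": "flag declining item",
        "condition": lambda it: it["weekly_sales"][-1] < it["weekly_sales"][0],
        "action": lambda it: "do not QR; consider markdown",
    },
]

for rule in RULES:
    if rule["condition"](ITEM):
        print(f"{rule['name']}: {rule['action'](ITEM)}")
```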

Memory Organization for a Fuzzy Controller.

  • Jee, K.D.S.;Poluzzi, R.;Russo, B.
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1993.06a
    • /
    • pp.1041-1043
    • /
    • 1993
  • Fuzzy logic-based control theory has gained much interest in the industrial world, thanks to its ability to formalize and solve, in a very natural way, many problems that are very difficult to quantify at an analytical level. This paper presents a solution for handling membership functions inside hardware circuits. The proposed hardware structure optimizes the memory size by using a particular form of vectorial representation. Memorizing fuzzy sets, i.e. their membership functions, has always been one of the more problematic issues for hardware implementation because of the large memory space required. To simplify the implementation, it is common [1,2,8,9,10,11] to restrict membership functions to triangular, trapezoidal, or other pre-defined shapes. Such functions cover a large spectrum of applications with limited memory usage, since they can be memorized by specifying very few parameters (height, base, critical points, etc.); however, this results in a loss of computational power due to the computation of intermediate points. A solution to this problem is obtained by discretizing the universe of discourse U, i.e. by fixing a finite number of points and memorizing the value of each membership function at those points [3,10,14,15]. This provides satisfactory computational speed, very high precision, and the freedom to choose membership functions of any shape, but it can also waste significant memory: for each fuzzy set, many elements of the universe of discourse may have a membership value equal to zero. It has also been observed that in almost all cases the points shared among fuzzy sets, i.e. points with non-null membership values, are very few; in many applications, for each element u of U there exist at most three fuzzy sets with a non-null membership value [3,5,6,7,12,13]. Our proposal is based on these hypotheses. Moreover, we use a technique that, without restricting the shapes of the membership functions, strongly reduces the computation time for membership values and optimizes their memorization. Figure 1 shows a term set whose characteristics are typical of fuzzy controllers and to which we refer in the following: a universe of discourse with 128 elements (for good resolution), 8 fuzzy sets describing the term set, and 32 discretization levels for the membership values. The numbers of bits required are therefore 5 for 32 truth levels, 3 for 8 membership functions, and 7 for 128 resolution levels. The memory depth is given by the dimension of the universe of discourse (128 in our case) and corresponds to the memory rows. The length of a memory word is defined by Length = nfm × (dm(m) + dm(fm)), where nfm is the maximum number of non-null membership values for any element of the universe of discourse, dm(m) is the number of bits for a membership value, and dm(fm) is the number of bits for the index of the corresponding membership function. In our case Length = 3 × (5 + 3) = 24, so the memory size is 128 × 24 bits. Had we chosen to memorize all membership values, each memory row would have to store the membership value of every fuzzy set, giving a word of 8 × 5 bits and a memory of 128 × 40 bits. Coherently with our hypothesis, in Fig. 1 each element of the universe of discourse has a non-null membership value for at most three fuzzy sets; the paper illustrates how elements 32, 64, and 96 of the universe of discourse are memorized under this scheme. The rule weights are computed by comparing the bits that represent the index of the membership function with the word of the program memory: the output bus of the program memory (μCOD) is fed to a comparator (combinatory net), and if the index equals the bus value, the corresponding non-null weight of the rule is produced as output; otherwise the output is zero (Fig. 2). The memory dimension of the antecedent is thus reduced, since only non-null values are memorized, while the time performance of the system is equivalent to that of a system using vectorial memorization of all weights. The word dimension is influenced by some parameters of the input variable, the most important being the maximum number of membership functions (nfm) with a non-null value at each element of the universe of discourse. From our study of fuzzy systems, typically nfm ≤ 3 and there are at most 16 membership functions; in any case, this value can be increased up to the physical limit of the antecedent memory. A less important role in optimizing the word dimension is played by the number of membership functions defined for each linguistic term. A table in the paper gives the required word dimension as a function of these parameters and compares the proposed method with vectorial memorization [10]. Summing up, the characteristics of our method are: users are not restricted to membership functions of specific shapes; the number of fuzzy sets and the resolution of the vertical axis have very little influence on memory usage; weight computations are performed by a combinatorial network, so the time performance of the system is equivalent to that of the vectorial method; and the number of non-null membership values for any element of the universe of discourse is limited, a constraint that is usually not very restrictive since many controllers achieve good precision with only three non-null weights. The method briefly described here has been adopted by our group in the design of an optimized version of the coprocessor described in [10].
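
The memory layout above packs, for each of the 128 universe elements, at most three (membership-function index, membership value) pairs into a 24-bit word (3 + 5 bits per pair). The sketch below illustrates that packing in software; the field order and the empty-slot convention are assumptions, not the paper's exact encoding.

```python
# Pack/unpack one 24-bit memory word holding up to three
# (membership-function index, membership value) pairs:
# 3 bits for the index (8 functions) + 5 bits for the value (32 levels).

def pack_word(pairs):
    """pairs: up to 3 (index, value) tuples with index < 8 and value < 32."""
    word = 0
    for slot in range(3):
        idx, val = pairs[slot] if slot < len(pairs) else (0, 0)  # empty slot
        word |= ((idx << 5) | val) << (8 * slot)                 # 8 bits per slot
    return word

def unpack_word(word):
    pairs = []
    for slot in range(3):
        field = (word >> (8 * slot)) & 0xFF
        idx, val = field >> 5, field & 0x1F
        if val:                                                  # skip empty slots
            pairs.append((idx, val))
    return pairs

# One row of the 128x24-bit antecedent memory: an element with non-null
# membership on fuzzy sets 2, 3 and 4, with discretized values 12, 31 and 5.
row = pack_word([(2, 12), (3, 31), (4, 5)])
print(f"{row:06x}", unpack_word(row))   # 24-bit word and the recovered pairs
```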
