• Title/Summary/Keyword: Real-Time Computation Methods


Aberrant Expression of CCAT1 Regulated by c-Myc Predicts the Prognosis of Hepatocellular Carcinoma

  • Zhu, Hua-Qiang;Zhou, Xu;Chang, Hong;Li, Hong-Guang;Liu, Fang-Feng;Ma, Chao-Qun;Lu, Jun
    • Asian Pacific Journal of Cancer Prevention
    • /
    • v.16 no.13
    • /
    • pp.5181-5185
    • /
    • 2015
  • Background: CCAT1 has been reported to be linked with the pathogenesis of malignancies including colon and gastric cancer. However, its regulatory role in hepatocellular carcinoma (HCC) remains unclear. The purpose of this research was to identify the role of CCAT1 in the progression of HCC. Materials and Methods: Real-time PCR was performed to measure the relative expression of CCAT1 in HCC tissues. A computational screen of the CCAT1 promoter was conducted to search for transcription-factor-binding sites. The association of c-Myc with the CCAT1 promoter in vivo was tested by Pearson correlation analysis and chromatin immunoprecipitation assay. Additionally, Kaplan-Meier and Cox proportional hazards analyses were performed. Results: c-Myc directly binds to the E-box element in the promoter region of CCAT1 and, when ectopically expressed, increases the promoter activity and expression of CCAT1. Moreover, Kaplan-Meier analysis showed that patients with low expression of CCAT1 had better overall and relapse-free survival than the high-expression group. Cox proportional hazards analysis showed that CCAT1 expression was an independent prognostic factor for HCC patients. Conclusions: These findings demonstrate that CCAT1, a potential biomarker for predicting the prognosis of HCC, is regulated by c-Myc.
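The Kaplan-Meier estimator behind the survival comparison above can be sketched in a few lines. This is a generic illustration with hypothetical data, not the authors' analysis code; the estimator is S(t) = ∏ (1 - d_i / n_i) over event times up to t.

```python
# Minimal Kaplan-Meier estimator: S(t) = prod over event times t_i <= t
# of (1 - d_i / n_i), where d_i = deaths at t_i and n_i = subjects at risk
# just before t_i. Illustrative only -- not the paper's code.
def kaplan_meier(times, events):
    """times: follow-up times; events: 1 = death observed, 0 = censored.
    Returns a list of (time, survival probability) at each death time."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    s = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        d = sum(1 for tt, e in data if tt == t and e == 1)  # deaths at t
        c = sum(1 for tt, e in data if tt == t and e == 0)  # censored at t
        if d > 0:
            s *= 1.0 - d / n_at_risk
            curve.append((t, s))
        n_at_risk -= d + c
        while i < len(data) and data[i][0] == t:
            i += 1  # skip past all records with this time
    return curve
```

Feeding the resulting step curve for the low- and high-expression groups into a log-rank test would reproduce the kind of comparison reported in the abstract.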

A Fast SAD Algorithm for Area-based Stereo Matching Methods (영역기반 스테레오 영상 정합을 위한 고속 SAD 알고리즘)

  • Lee, Woo-Young;Kim, Cheong Ghil
    • Journal of Satellite, Information and Communications
    • /
    • v.7 no.2
    • /
    • pp.8-12
    • /
    • 2012
  • Area-based stereo matching algorithms are widely used in image analysis for stereo vision. The SAD (Sum of Absolute Differences) algorithm is one of the best-known area-based stereo matching algorithms and is a data-intensive computing application. It therefore requires very high computation capability, and a straightforward software implementation runs slowly. This paper proposes a fast SAD algorithm utilizing SSE (Streaming SIMD Extensions) instructions based on SIMD (Single Instruction, Multiple Data) parallelism. A CPU supporting SSE instructions has 16 XMM registers of 128 bits each. For performance evaluation, we compare the processing speed of SAD with and without SSE instructions. The proposed scheme achieves a fourfold performance improvement over the general SAD, which shows the feasibility of a real-time software implementation of the SAD algorithm.
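The paper's speedup comes from C-level SSE intrinsics; as a language-neutral sketch of the same idea, the scalar SAD loop can be contrasted with a whole-array version, where NumPy's vectorized kernels stand in for the 128-bit registers processing many pixels per instruction.

```python
import numpy as np

def sad_naive(win_a, win_b):
    """Scalar SAD over two equally sized windows (reference version)."""
    total = 0
    h, w = win_a.shape
    for y in range(h):
        for x in range(w):
            total += abs(int(win_a[y, x]) - int(win_b[y, x]))
    return total

def sad_vectorized(win_a, win_b):
    """SAD computed on whole arrays at once. The widening to int32 avoids
    uint8 wrap-around, analogous to how SSE SAD instructions accumulate
    absolute differences into wider lanes."""
    return int(np.abs(win_a.astype(np.int32) - win_b.astype(np.int32)).sum())
```

In area-based matching, either function would be evaluated over candidate disparity windows and the disparity with the minimum SAD selected.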

Deep Learning based Image Recognition Models for Beef Sirloin Classification (딥러닝 이미지 인식 기술을 활용한 소고기 등심 세부 부위 분류)

  • Han, Jun-Hee;Jung, Sung-Hun;Park, Kyungsu;Yu, Tae-Sun
    • Journal of Korean Society of Industrial and Systems Engineering
    • /
    • v.44 no.3
    • /
    • pp.1-9
    • /
    • 2021
  • This research examines deep learning based image recognition models for beef sirloin classification. Beef sirloin can be classified into the upper sirloin, the lower sirloin, and the ribeye, whereas during the distribution process these are often simply unified into a single sirloin region. In this work, for detailed classification of beef sirloin regions, we develop a model that can learn image information in a reasonable computation time using the MobileNet algorithm. In addition, to increase the accuracy of the model we introduce data augmentation methods, which amplify the image data collected during the distribution process. This augmentation makes it possible to train on a larger data set, by which the accuracy of the model can be significantly improved. The augmented data were tested using the MobileNet algorithm, where the test data set was obtained from real-world distribution processes. Through computational experiments we confirm that the accuracy of the suggested model reaches up to 83%. We expect that the classification model of this study can contribute to a more accurate and detailed information exchange between suppliers and consumers during the distribution of beef sirloin.
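Image-data augmentation of the kind described can be sketched with simple geometric transforms. The specific transform set used by the authors is not stated in the abstract, so the flips and rotations below are a generic, assumed example:

```python
import numpy as np

def augment(image):
    """Generate flipped and rotated variants of one (H, W, C) image array,
    multiplying one collected photo into six training samples.
    A generic augmentation sketch; the paper's exact transforms may differ."""
    variants = [image]
    variants.append(np.fliplr(image))   # horizontal flip
    variants.append(np.flipud(image))   # vertical flip
    for k in (1, 2, 3):                 # 90, 180, 270 degree rotations
        variants.append(np.rot90(image, k))
    return variants
```

Applied to every collected image, this alone multiplies the effective training-set size sixfold before the MobileNet classifier ever sees the data.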

Divisible Electronic Cash System based on a Blinding ECDSA (Blinding ECDSA를 기반으로 한 분할가능 전자화폐 시스템)

  • 전병욱;권용진
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.9 no.1
    • /
    • pp.103-114
    • /
    • 1999
  • Recently, various prototypes for electronic commerce have been realized, and related research is active as its extended applications become a reality. Above all, actual demand is increasing for more secure and efficient electronic payment systems. Electronic cash, one such payment system, must have several properties of real money. The blind signature scheme by D. Chaum is a standard method for obtaining privacy. In this paper, we propose a method for obtaining blind signatures based on elliptic curve cryptosystems, which are known to resolve some problems of conventional cryptosystems with respect to computation time and key space. We also present a method for making the electronic cash divisible, based on our proposal, by re-signing spare cash. Applying the proposed method, an efficient electronic payment system can be developed.
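The blinding idea Chaum introduced is easiest to see in its textbook RSA form, sketched below with toy parameters (NOT secure, and not the paper's elliptic-curve construction; the paper transplants the same blind/sign/unblind algebra onto ECDSA):

```python
# Chaum's blind signature, textbook RSA form with toy parameters.
# Illustrative only: real systems use large moduli, padding, and -- in this
# paper's case -- elliptic curve groups instead of RSA.
p, q = 61, 53
n = p * q                            # RSA modulus
e = 17                               # public exponent
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent (Python 3.8+)

def blind(m, r):
    """Customer blinds message m with a random factor r coprime to n."""
    return (m * pow(r, e, n)) % n

def sign(m_blind):
    """Bank signs the blinded message without ever seeing m."""
    return pow(m_blind, d, n)

def unblind(s_blind, r):
    """Customer strips the blinding: (m * r^e)^d = m^d * r, so divide by r."""
    return (s_blind * pow(r, -1, n)) % n

def verify(m, s):
    """Anyone can check the signature with the public key."""
    return pow(s, e, n) == m % n
```

The bank thus certifies a coin it cannot later link to the customer, which is the privacy property the divisible-cash scheme builds on.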

Moving Object Contour Detection Using Spatio-Temporal Edge with a Fixed Camera (고정 카메라에서의 시공간적 경계 정보를 이용한 이동 객체 윤곽선 검출 방법)

  • Kwak, Jae-Ho;Kim, Whoi-Yul
    • Journal of Broadcast Engineering
    • /
    • v.15 no.4
    • /
    • pp.474-486
    • /
    • 2010
  • In this paper, we propose a new method for detecting moving object contours using spatial and temporal edges. In general, contour pixels of a moving object are likely to lie near pixels with high gradient values along both the time axis and the spatial axes, so the contour of a moving object can be detected by finding such pixels. We introduce a new computation method, termed the temporal edge, to compute the gradient value along the time axis for any pixel in an image. The temporal edge is computed from the two input gray images at times t and t-2 using the Sobel operator. It is used to detect a candidate region of the moving object contour, and the candidate region is then used to extract spatial edge information. The final contour of the moving object is detected by combining these two kinds of edge information, after which post-processing such as a morphological operation and a background-edge removal procedure is applied to remove noise regions. The complexity of the proposed method is very low because it uses neither a background scene nor computationally expensive operations, so it can be applied to real-time applications. Experimental results show that the proposed method outperforms conventional contour extraction methods in terms of processing effort and the ghost effect that occurs with the entropy method.
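One plausible reading of the temporal-edge computation (the abstract does not pin down the exact formula) is a Sobel gradient magnitude applied to the difference of the frames at t and t-2, sketched here without external dependencies:

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def convolve2d(img, kernel):
    """Plain 'valid'-mode 2-D correlation, written out to stay dependency-free."""
    kh, kw = kernel.shape
    out = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = (img[y:y + kh, x:x + kw] * kernel).sum()
    return out

def temporal_edge(frame_t, frame_t2):
    """Sobel gradient magnitude of the frame difference at t and t-2 --
    one assumed interpretation of the paper's temporal edge, for illustration."""
    diff = frame_t.astype(float) - frame_t2.astype(float)
    gx = convolve2d(diff, SOBEL_X)
    gy = convolve2d(diff, SOBEL_Y)
    return np.hypot(gx, gy)
```

A static scene yields a zero temporal edge everywhere, while a moving bright region produces a strong response along its boundary, which is the candidate region the method then refines with spatial edges.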

Optimal Design of Water Distribution System considering the Uncertainties on the Demands and Roughness Coefficients (수요와 조도계수의 불확실성을 고려한 상수도관망의 최적설계)

  • Jung, Dong-Hwi;Chung, Gun-Hui;Kim, Joong-Hoon
    • Journal of the Korean Society of Hazard Mitigation
    • /
    • v.10 no.1
    • /
    • pp.73-80
    • /
    • 2010
  • Optimal design of water distribution systems started with least-cost design of a single objective function using fixed hydraulic variables, e.g., fixed water demand and pipe roughness. A more adequate design, however, considers the uncertainties inherent in water distribution systems, such as uncertain future water demands, and thus better estimates a real network's behavior. Many researchers have therefore suggested a variety of approaches that account for uncertainty using uncertainty quantification methods, and multi-objective optimal design has also been studied. This paper suggests a new multi-objective optimization approach seeking the minimum cost and maximum robustness of the network based on two uncertain variables, nodal demand and pipe roughness. The design procedure consists of two stages: least-cost design, then final optimal design under uncertainty. The uncertainties of demand and roughness are modeled with the Latin Hypercube sampling technique using beta probability density functions, and a multi-objective genetic algorithm (MOGA) is used for the optimization. The suggested approach is tested on the real network known as the New York Tunnels to check its applicability. As the computation proceeds, the initial population spreads toward the lower-right section of the solution space, yielding Pareto-optimal solutions that build the Pareto front.
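The stratification step of Latin Hypercube sampling can be sketched as follows. This generic version samples on the unit hypercube; the paper's beta-distributed demands and roughnesses would be obtained by pushing each column through the beta inverse CDF (e.g. scipy.stats.beta.ppf), which is omitted here to keep the sketch dependency-light.

```python
import numpy as np

def latin_hypercube(n_samples, n_vars, seed=None):
    """Latin Hypercube sample on [0, 1)^n_vars: each variable's range is cut
    into n_samples equal strata, and every stratum is hit exactly once.
    Mapping each column through a beta inverse CDF would give the
    beta-distributed demand/roughness samples used in the paper."""
    rng = np.random.default_rng(seed)
    u = rng.random((n_samples, n_vars))        # position inside each stratum
    samples = np.empty((n_samples, n_vars))
    for j in range(n_vars):
        perm = rng.permutation(n_samples)      # shuffle stratum order per variable
        samples[:, j] = (perm + u[:, j]) / n_samples
    return samples
```

Compared with plain Monte Carlo of the same size, this guarantees coverage of the whole uncertainty range per variable, which is why far fewer hydraulic simulations are needed per candidate design.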

Control of pH Neutralization Process using Simulation Based Dynamic Programming in Simulation and Experiment (ICCAS 2004)

  • Kim, Dong-Kyu;Lee, Kwang-Soon;Yang, Dae-Ryook
    • 제어로봇시스템학회:학술대회논문집
    • /
    • 2004.08a
    • /
    • pp.620-626
    • /
    • 2004
  • General nonlinear processes are difficult to control with linear model-based methods, so nonlinear control approaches are considered. Among the numerous approaches suggested, the most rigorous is dynamic optimization. Many general engineering problems such as control, scheduling, and planning are expressed as functional optimization problems, and most of them can be cast as dynamic programming (DP) problems. However, DP is used in only a few cases because, as the problem size grows, it suffers from a computational burden known as the 'curse of dimensionality'. To avoid this problem, the Neuro-Dynamic Programming (NDP) approach was proposed by Bertsekas and Tsitsiklis (1996). Interest in NDP for the control of seriously nonlinear processes has grown; the NDP algorithm has been applied to diverse areas such as retailing, finance, inventory management, and communication networks, and it has been extended to chemical engineering. In the NDP approach, we select the optimal control input policy to minimize a cost computed as the sum of the current stage cost and the cost of future stages starting from the next state; the cost value is a weighted square sum of error and input movement. During the calculation of the optimal input policy, if an approximate cost-to-go function built from simulation data is used with Bellman iteration, the computational burden can be relieved and the curse of dimensionality can be overcome. How to construct a cost-to-go function with good approximation performance is a very important issue. A neural network is an eager learning method and works as a global approximator of the cost-to-go function; its training is the important and difficult part of the algorithm and significantly affects control performance.
To avoid the difficulty of neural network training, a lazy learning method such as the k-nearest neighbor method can be exploited: it needs no training, but requires more computation time and greater data storage at query time. The pH neutralization process has long been taken as a representative benchmark problem of nonlinear chemical process control due to its nonlinearity and time-varying nature. In this study, the NDP algorithm was applied to the pH neutralization process. First, control using the NDP algorithm was performed in simulations with various approximators; both global and local approximators were used for the NDP calculation. After that, NDP was verified on the real system through a pH neutralization experiment. The control results with the NDP algorithm were compared with those of the traditionally used PI controller in both simulations and experiments. The NDP algorithm showed faster and better control performance than the PI controller, and it also gave good results when applied to cases with disturbances and multiple set-point changes.
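The lazy-learning cost-to-go approximation and the one-step NDP decision it enables can be sketched on a toy scalar system. Everything here (the quadratic stage cost, the dynamics x_next = x + step·u, the function names) is an assumed illustration, not the authors' pH model:

```python
import numpy as np

def knn_cost_to_go(query, states, costs, k=3):
    """Lazy (k-nearest-neighbor) approximation of the cost-to-go J(x):
    no training step at all -- just average the costs of the k sampled
    states nearest to the query, paying the search cost at query time."""
    nearest = np.argsort(np.abs(states - query))[:k]
    return costs[nearest].mean()

def greedy_input(x, candidate_inputs, step, states, costs):
    """One NDP-style decision for toy dynamics x_next = x + step * u:
    pick the input minimizing stage cost (weighted squares of error and
    input movement) plus the approximated future cost."""
    best_u, best_val = None, float("inf")
    for u in candidate_inputs:
        x_next = x + step * u
        val = x_next ** 2 + 0.1 * u ** 2 + knn_cost_to_go(x_next, states, costs)
        if val < best_val:
            best_u, best_val = u, val
    return best_u
```

With a stored sample of (state, cost) pairs from simulation, this is the whole controller; swapping `knn_cost_to_go` for a trained neural network gives the eager, global-approximator variant the study compares against.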


Lightweight Super-Resolution Network Based on Deep Learning using Information Distillation and Recursive Methods (정보 증류 및 재귀적인 방식을 이용한 심층 학습법 기반 경량화된 초해상도 네트워크)

  • Woo, Hee-Jo;Sim, Ji-Woo;Kim, Eung-Tae
    • Journal of Broadcast Engineering
    • /
    • v.27 no.3
    • /
    • pp.378-390
    • /
    • 2022
  • With the recent development of deep convolutional neural network learning, deep learning techniques applied to single-image super-resolution have shown good results, and the strong expressive ability of deep networks enables complex nonlinear mapping between low-resolution and high-resolution images. However, the growing parameter counts and computational loads caused by heavy use of convolutional layers limit application to real-time or low-power devices. This paper uses blocks that gradually extract hierarchical features via information distillation and proposes the Recursive Distillation Super-Resolution Network (RDSRN), a lightweight network that improves performance by producing more accurate high-frequency components through high-frequency residual refinement blocks. We confirmed that the proposed network restores images of quality similar to RDN 3.5 times faster with about 32 times fewer parameters and about 10 times less computation, and achieves 0.16 dB better performance with about 2.2 times fewer parameters and 1.8 times faster processing than the existing lightweight network CARN.
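Where the parameter savings of a recursive design come from can be seen with simple conv-layer arithmetic. The channel counts and depths below are illustrative assumptions, not the paper's configuration:

```python
def conv_params(in_ch, out_ch, k):
    """Parameters in one k x k convolution layer: weights plus biases."""
    return in_ch * out_ch * k * k + out_ch

def stacked_params(n_layers, ch, k):
    """An unrolled stack of distinct layers: cost scales with depth."""
    return n_layers * conv_params(ch, ch, k)

def recursive_params(n_iterations, ch, k):
    """A recursive block applies the SAME weights at every iteration,
    so its parameter count is that of a single pass, regardless of depth."""
    return conv_params(ch, ch, k)
```

Running a 64-channel 3x3 block recursively four times costs 36,928 parameters instead of the 147,712 of four distinct layers; computation per forward pass is unchanged, which is why the paper pairs recursion with distillation blocks to cut FLOPs as well.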

Ensemble Learning with Support Vector Machines for Bond Rating (회사채 신용등급 예측을 위한 SVM 앙상블학습)

  • Kim, Myoung-Jong
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.2
    • /
    • pp.29-45
    • /
    • 2012
  • Bond rating is regarded as an important event for measuring the financial risk of companies and for determining the investment returns of investors. As a result, predicting companies' credit ratings by applying statistical and machine learning techniques has been a popular research topic. The statistical techniques traditionally used in bond rating include multiple regression, multiple discriminant analysis (MDA), logistic models (LOGIT), and probit analysis. However, one major drawback is that they rest on strict assumptions: linearity, normality, independence among predictor variables, and pre-existing functional forms relating the criterion variables and the predictor variables. Those strict assumptions have limited the application of traditional statistics to the real world. Machine learning techniques used in bond rating prediction models include decision trees (DT), neural networks (NN), and the Support Vector Machine (SVM). In particular, SVM is recognized as a new and promising classification and regression method. SVM learns a separating hyperplane that maximizes the margin between two categories. It is simple enough to be analyzed mathematically and leads to high performance in practical applications. SVM implements the structural risk minimization principle and seeks to minimize an upper bound on the generalization error. In addition, the solution of SVM may be a global optimum, so overfitting is unlikely to occur. SVM also does not require many data samples for training, since it builds prediction models using only the representative samples near the boundaries, called support vectors. A number of experimental studies have indicated that SVM has been successfully applied in a variety of pattern recognition fields. However, there are three major drawbacks that can degrade SVM's performance.
First, SVM was originally proposed for solving binary-class classification problems. Methods for combining SVMs for multi-class classification, such as One-Against-One and One-Against-All, have been proposed, but they do not perform as well in multi-class problems as SVM does in binary-class classification. Second, approximation algorithms (e.g., decomposition methods or the sequential minimal optimization algorithm) can be used to reduce multi-class computation time, but they can deteriorate classification performance. Third, a difficulty in multi-class prediction is the data imbalance problem, which occurs when the number of instances in one class greatly outnumbers that in another. Such data sets often cause a default classifier to be built due to the skewed boundary, reducing classification accuracy. SVM ensemble learning is one machine learning approach to coping with these drawbacks. Ensemble learning is a method for improving the performance of classification and prediction algorithms. AdaBoost is one of the widely used ensemble learning techniques. It constructs a composite classifier by sequentially training classifiers while increasing the weight on misclassified observations through iterations: observations incorrectly predicted by previous classifiers are chosen more often than correctly predicted ones. Boosting thus attempts to produce new classifiers that better predict the examples on which the current ensemble performs poorly, reinforcing the training of the misclassified observations of the minority class. This paper proposes multiclass Geometric Mean-based Boosting (MGM-Boost) to resolve the multiclass prediction problem.
Since MGM-Boost introduces the notion of the geometric mean into AdaBoost, its learning process can account for geometric mean-based accuracy and errors across the classes. This study applies MGM-Boost to a real-world bond rating case for Korean companies to examine its feasibility. 10-fold cross-validation was performed three times with different random seeds to ensure that the comparison among the three classifiers did not happen by chance. For each 10-fold cross-validation, the entire data set is first partitioned into ten equal-sized sets, and each set is in turn used as the test set while the classifier trains on the other nine; cross-validated folds are thus tested independently for each algorithm. Through these steps, we obtained results for the classifiers on each of the 30 experiments. In arithmetic mean-based prediction accuracy, MGM-Boost (52.95%) outperforms both AdaBoost (51.69%) and SVM (49.47%). MGM-Boost (28.12%) also outperforms AdaBoost (24.65%) and SVM (15.42%) in geometric mean-based prediction accuracy. A t-test is used to examine whether the performance of each classifier over the 30 folds differs significantly; the results indicate that the performance of MGM-Boost differs significantly from the AdaBoost and SVM classifiers at the 1% level. These results mean that MGM-Boost can provide robust and stable solutions to multi-class problems such as bond rating.
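The geometric mean-based accuracy that MGM-Boost optimizes can be computed as the geometric mean of the per-class recalls, sketched below (a standard formulation of the metric, inferred from the abstract rather than taken from the paper's code):

```python
def geometric_mean_accuracy(y_true, y_pred):
    """Geometric mean of per-class recalls. Unlike the arithmetic mean,
    it collapses toward zero whenever any single class is predicted poorly,
    which is why it suits imbalanced multi-class problems like bond rating."""
    classes = sorted(set(y_true))
    gm = 1.0
    for c in classes:
        idx = [i for i, t in enumerate(y_true) if t == c]
        recall = sum(1 for i in idx if y_pred[i] == c) / len(idx)
        gm *= recall
    return gm ** (1.0 / len(classes))
```

A classifier that ignores a minority rating class entirely can still post a high arithmetic accuracy, but its geometric-mean accuracy drops to zero, matching the large SVM gap (49.47% arithmetic vs. 15.42% geometric) reported above.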

Numerical Analysis of Steel-strengthened Concrete Panels Exposed to Effects of Blast Wave and Fragment Impact Load Using Multi-solver Coupling (폭풍파 및 파편 충돌에 대한 강판보강 콘크리트 패널의 복합적 수치해석)

  • Yun, Sung-Hwan;Park, Taehyo
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.31 no.1A
    • /
    • pp.25-33
    • /
    • 2011
  • The impact damage behavior of steel-strengthened concrete panels exposed to explosive loading is investigated. Since real explosion experiments require vast facility costs and the blast and impact damage mechanisms are complicated, numerical analysis has lately attracted special attention. However, for engineering problems involving blast waves and fragment impact, no single numerical method is appropriate for all of the various problems. To evaluate the retrofit performance of a steel-strengthened concrete panel subject to blast wave and fragment impact loading, the explicit analysis program AUTODYN is used in this work. Multi-solver coupling methods, such as the Euler-Lagrange and SPH-Lagrange coupling methods, are implemented to improve the efficiency and accuracy of the numerical analysis. Simplified, idealized two-dimensional and axisymmetric models are used to obtain reasonable computation times. The analysis shows that concrete panels without the steel plate exhibit scabbing and perforation under either blast wave or fragment impact loading, and that perforation can be prevented by reinforcing the panels with a steel plate. The numerical results show good agreement with the results of the experiments.