• Title/Summary/Keyword: Bank performance


A Rapid Signal Acquisition Scheme for Noncoherent UWB Systems (비동기식 초광대역 시스템을 위한 고속 신호 동기획득 기법)

  • Kim Jae-Woon;Yang Suck-Chel;Choi Sung-Soo;Shin Yo-An
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.31 no.4C
    • /
    • pp.331-340
    • /
    • 2006
  • In this paper, we propose to extend the TSS-LS (Two-Step Search scheme with Linear search based Second step), previously proposed by the authors for coherent UWB (Ultra Wide Band) systems, to rapid and reliable acquisition in noncoherent UWB systems over multipath channels. The proposed noncoherent TSS-LS employs simple energy window banks and utilizes two different thresholds and search windows to achieve fast acquisition. Furthermore, a linear search is adopted for the second step to correctly find the starting point within the effective delay spread of the multipath channels and to obtain reliable BER (Bit Error Rate) performance for the noncoherent UWB systems. Simulation results with the IEEE 802.15.3a multipath channel models show that the proposed two-step search scheme significantly reduces the required mean acquisition time compared to general search schemes. In addition, the proposed scheme achieves quite good BER performance at large signal-to-noise ratios, favorably comparable to the case of ideal perfect timing.
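As a rough illustration of the two-step idea in this abstract (a coarse threshold test over widely spaced positions, then a linear search to pin down the starting point within the delay spread), the following Python sketch uses made-up energy values; the function name, window handling, and thresholds are assumptions, not the paper's exact TSS-LS design:

```python
import numpy as np

def two_step_acquisition(energy, coarse_step, th1, th2):
    # Step 1 (coarse): test widely spaced candidate positions against th1
    for pos in range(0, len(energy), coarse_step):
        if energy[pos] >= th1:
            # Step 2 (linear): scan from the previous coarse cell to find
            # the first cell above th2, i.e. the multipath starting point
            start = max(0, pos - coarse_step)
            for fine in range(start, min(pos + coarse_step, len(energy))):
                if energy[fine] >= th2:
                    return fine
            return pos
    return None  # no energy window exceeded the coarse threshold

# Toy energy profile: multipath energy begins at index 37
energy = np.concatenate([np.full(37, 0.1), np.full(63, 1.0)])
print(two_step_acquisition(energy, coarse_step=10, th1=0.8, th2=0.8))  # → 37
```

The coarse step trades off search speed against the risk of skipping a short energy burst, which is why the scheme pairs it with a reliability-oriented linear second step.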

Low Power TLB System by Using Continuous Accessing Distinction Algorithm (연속적 접근 판별 알고리즘을 이용한 저전력 TLB 구조)

  • Lee, Jung-Hoon
    • The KIPS Transactions:PartA
    • /
    • v.14A no.1 s.105
    • /
    • pp.47-54
    • /
    • 2007
  • In this paper we present a translation lookaside buffer (TLB) system with low power consumption for embedded processors. The proposed TLB is constructed as multiple banks, each with an associated block buffer and a corresponding comparator. Either the block buffer or the main bank is selectively accessed on the basis of two bits in the block buffer (tag buffer). Dynamic power savings are achieved by reducing the number of entries accessed in parallel, as a result of using the tag buffer as a filtering mechanism. The performance overhead of the proposed TLB is negligible compared with other hierarchical TLB structures. For example, the two-cycle overhead of the proposed TLB is only about 1%, compared with 5% overhead for a filter (micro) TLB and 14% overhead for the same structure without the continuous accessing distinction algorithm. We show that the average hit ratios of the block buffers and the main banks of the proposed TLB are 95% and 5%, respectively. Dynamic power is reduced by about 95% with respect to a fully associative TLB, 90% with respect to a filter-TLB, and 40% relative to the same structure without the continuous accessing distinction algorithm.
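The filtering idea in this abstract (probe a small block buffer first, touch the main bank only on a buffer miss) can be sketched in software as follows; the class, the bank-selection rule, and the single-entry buffer are illustrative assumptions, not the paper's hardware design:

```python
class FilteredTLB:
    # Toy software model: each bank is fronted by a one-entry block
    # buffer; the main bank is probed only on a buffer miss, which is
    # the source of the dynamic power savings described above.
    def __init__(self, num_banks=2):
        self.banks = [dict() for _ in range(num_banks)]
        self.buffers = [None] * num_banks  # last (vpn, ppn) per bank
        self.buffer_hits = 0
        self.bank_accesses = 0

    def insert(self, vpn, ppn):
        self.banks[vpn % len(self.banks)][vpn] = ppn

    def translate(self, vpn):
        b = vpn % len(self.banks)          # bank selection (assumption)
        entry = self.buffers[b]
        if entry is not None and entry[0] == vpn:
            self.buffer_hits += 1          # filtered: main bank untouched
            return entry[1]
        self.bank_accesses += 1            # fall through to the main bank
        ppn = self.banks[b].get(vpn)
        if ppn is not None:
            self.buffers[b] = (vpn, ppn)   # exploit sequential locality
        return ppn

tlb = FilteredTLB()
tlb.insert(0x10, 0xA0)
for _ in range(10):                        # ten sequential same-page accesses
    tlb.translate(0x10)
print(tlb.buffer_hits, tlb.bank_accesses)  # → 9 1
```

Sequential accesses to the same page hit the tiny buffer nine times out of ten here, mirroring the 95%/5% buffer/bank hit-ratio split the paper reports.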

Using noise filtering and sufficient dimension reduction method on unstructured economic data (노이즈 필터링과 충분차원축소를 이용한 비정형 경제 데이터 활용에 대한 연구)

  • Jae Keun Yoo;Yujin Park;Beomseok Seo
    • The Korean Journal of Applied Statistics
    • /
    • v.37 no.2
    • /
    • pp.119-138
    • /
    • 2024
  • Text indicators are increasingly valuable in economic forecasting, but are often hindered by noise and high dimensionality. This study aims to explore post-processing techniques, specifically noise filtering and dimensionality reduction, to normalize text indicators and enhance their utility through empirical analysis. Predictive target variables for the empirical analysis include monthly leading index cyclical variations, BSI (business survey index) All industry sales performance, BSI All industry sales outlook, as well as quarterly real GDP SA (seasonally adjusted) growth rate and real GDP YoY (year-on-year) growth rate. This study explores the Hodrick and Prescott filter, which is widely used in econometrics for noise filtering, and employs sufficient dimension reduction, a nonparametric dimensionality reduction methodology, in conjunction with unstructured text data. The analysis results reveal that noise filtering of text indicators significantly improves predictive accuracy for both monthly and quarterly variables, particularly when the dataset is large. Moreover, this study demonstrated that applying dimensionality reduction further enhances predictive performance. These findings imply that post-processing techniques, such as noise filtering and dimensionality reduction, are crucial for enhancing the utility of text indicators and can contribute to improving the accuracy of economic forecasts.

A study on the legal relationship between the change in the date of performance of trade contracts and the date of shipment of letters of credit (무역계약의 이행기일과 신용장 선적기일의 변경 간의 법률관계에 대한 연구)

  • Je-Hyun Lee
    • Korea Trade Review
    • /
    • v.48 no.3
    • /
    • pp.23-41
    • /
    • 2023
  • The seller and the buyer write the agreed details into the trade contract as contract clauses. Where a letter of credit is agreed as the payment condition, the buyer must open a letter of credit in favor of the seller, through its bank, with the shipment date specified in the trade contract. In this case, legal interpretation problems arise concerning the relationship between the performance date of the trade contract and the shipment date of the letter of credit, the change of the shipment date specified in the letter of credit following a change of the contract's performance date, and the seller's period for accepting the letter of credit or requesting its amendment. Therefore, this paper analyzed the precedents of the Seongnam Branch of the Suwon District Court and the Seoul High Court related to these legal issues. The performance date of a trade contract is the seller's delivery date and the buyer's payment date. In a letter of credit transaction, the performance date of the trade contract is regarded as the shipment date and the document negotiation date specified in the letter of credit. The seller must decide whether to accept the letter of credit within five banking days after receiving it from the buyer; after this period has elapsed, the seller cannot refuse the letter of credit. However, if the seller is unable to decide whether to accept the letter of credit within five banking days due to reasons attributable to the buyer, the delivery date specified in the letter of credit is extended. If the seller requests an amendment to the letter of credit, the buyer must accept it and open the letter of credit the seller desires. In the analyzed case, when the buyer refused the seller's request to amend the letter of credit, company A was held to have the obligation to amend and reopen the letter of credit as requested by company B. Since this constitutes a fundamental breach of contract under Article 25 of the United Nations Convention on Contracts for the International Sale of Goods, company B can cancel the trade contract and claim damages from company A. Compensation for damages caused by company A's breach of the trade contract is an amount equal to the loss suffered by company B as a result of the breach, including loss of profits.

Feasibility of Deep Learning Algorithms for Binary Classification Problems (이진 분류문제에서의 딥러닝 알고리즘의 활용 가능성 평가)

  • Kim, Kitae;Lee, Bomi;Kim, Jong Woo
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.1
    • /
    • pp.95-108
    • /
    • 2017
  • Recently, AlphaGo, a Go-playing artificial intelligence program developed by Google DeepMind, achieved a decisive victory against Lee Sedol. Many people thought that machines could not beat a human at Go because, unlike chess, the number of possible move sequences exceeds the number of atoms in the universe, but the result was the opposite of what people predicted. After the match, artificial intelligence came into focus as a core technology of the fourth industrial revolution and attracted attention from various application domains. In particular, deep learning drew attention as the core artificial intelligence technique behind the AlphaGo algorithm. Deep learning is already being applied to many problems and shows especially good performance in image recognition. It also performs well on high-dimensional data such as voice, images, and natural language, where traditional machine learning techniques struggled. In contrast, however, it is difficult to find deep learning research on traditional business data and structured data analysis. In this study, we investigated whether deep learning techniques can be used not only for recognition of high-dimensional data but also for binary classification problems in traditional business data analysis, such as customer churn analysis, marketing response prediction, and default prediction, and we compared the performance of deep learning techniques with that of traditional artificial neural network models. The experimental data are telemarketing response data of a bank in Portugal, with input variables such as age, occupation, loan status, and the number of previous telemarketing contacts, and a binary target variable recording whether the customer intends to open an account.
To evaluate deep learning algorithms on this binary classification problem, we compared the performance of models using the CNN and LSTM algorithms and the dropout technique, which are widely used in deep learning, with that of MLP models, a traditional artificial neural network. Since not all network design alternatives can be tested, the experiment was conducted with restricted settings on the number of hidden layers, the number of neurons per hidden layer, the number of output filters, and the application of dropout. The F1 score was used to evaluate the models, to show how well they classify the class of interest rather than overall accuracy. The methods for applying each deep learning technique were as follows. The CNN algorithm reads adjacent values around a specific value and recognizes local features, but the distance between business data fields does not matter because each field is usually independent. We therefore set the filter size of the CNN to the number of fields, to learn the characteristics of the whole record at once, and added a hidden layer to make decisions based on the extracted features. For the model with two LSTM layers, the input direction of the second layer was reversed relative to the first in order to reduce the influence of field position. For the dropout technique, neurons in each hidden layer were dropped with a probability of 0.5. The experimental results show that the model with the highest F1 score was the CNN model with dropout, and the next best was the MLP model with two hidden layers and dropout.
We obtained several findings from the experiment. First, models using dropout make slightly more conservative predictions than those without it and generally show better classification performance. Second, CNN models show better classification performance than MLP models; this is interesting because CNN performed well on a binary classification problem to which it has rarely been applied, as well as in fields where its effectiveness has already been proven. Third, the LSTM algorithm seems unsuitable for binary classification problems because its training time is too long relative to the performance improvement. From these results, we can confirm that some deep learning algorithms can be applied to business binary classification problems.
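The F1 score used to evaluate the models in this abstract can be computed as follows; the toy labels mimic the class imbalance of bank telemarketing responses but are not the actual dataset:

```python
def f1_score(y_true, y_pred, positive=1):
    # Harmonic mean of precision and recall for the class of interest,
    # used instead of overall accuracy on imbalanced data.
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# A classifier that always predicts the majority class scores high
# accuracy (6/8) but F1 = 0 on the interesting class.
y_true = [1, 0, 0, 0, 1, 0, 0, 0]
print(f1_score(y_true, [0] * 8))                    # → 0.0
print(f1_score(y_true, [1, 0, 0, 0, 0, 0, 0, 1]))   # → 0.5
```

This is why the paper reports F1 rather than accuracy: accuracy rewards ignoring the rare "opens an account" class entirely.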

Evaluation of IH-1000 for Automated ABO-Rh Typing and Irregular Antibody Screening (ABO 및 RhD 혈액형 검사와 비예기항체 선별검사를 위한 자동화장비 IH-1000의 평가)

  • Park, Youngchun;Lim, Jinsook;Ko, Younghuyn;Kwon, Kyechul;Koo, Sunhoe;Kim, Jimyung
    • The Korean Journal of Blood Transfusion
    • /
    • v.23 no.2
    • /
    • pp.127-135
    • /
    • 2012
  • Background: Despite modern advances in laboratory automation, work processes in the blood bank are still handled manually. Several automated immunohematology instruments have been developed and are available on the market. The IH-1000 (Bio-Rad Laboratories, Hercules, CA, USA), a fully automated instrument for immunohematology, was recently introduced. In this study, we evaluated the performance of the IH-1000 for ABO/Rh typing and irregular antibody screening. Methods: In October 2011, a total of 373 blood samples for ABO/Rh typing and 303 for unexpected antibody screening were collected. The IH-1000 was compared to the manual tube and slide methods for ABO/Rh typing and to the microcolumn agglutination method (DiaMed-ID system) for antibody screening. Results: For ABO/Rh typing, the concordance rate was 100%. For unexpected antibody screening, positive results for both column agglutination and the IH-1000 were observed in 10 cases (four cases of anti-E and anti-c, three of anti-E, one of anti-D, one of anti-M, and one of anti-Xg), and negative results for both were observed in 289 cases. The concordance rate between the IH-1000 and column agglutination was 98.7%. Sensitivity and specificity were 90.9% and 99.3%, respectively. Conclusion: The automated IH-1000 showed good correlation with the manual tube and slide methods and with the microcolumn agglutination method for ABO/RhD typing and irregular antibody screening. The IH-1000 can be used for routine pre-transfusion testing in the blood bank.
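The agreement metrics reported in this abstract (concordance, sensitivity, specificity against the column agglutination reference) follow directly from a 2x2 table; the counts below are illustrative, not the paper's exact table:

```python
def screening_metrics(tp, fp, fn, tn):
    # The reference method (here, column agglutination) defines true
    # status; the instrument under evaluation supplies the predictions.
    total = tp + fp + fn + tn
    return {
        "concordance": (tp + tn) / total,  # both methods agree
        "sensitivity": tp / (tp + fn),     # reference-positives detected
        "specificity": tn / (tn + fp),     # reference-negatives cleared
    }

m = screening_metrics(tp=10, fp=2, fn=1, tn=289)
print(round(m["sensitivity"], 3), round(m["specificity"], 3))  # → 0.909 0.993
```

With these toy counts, one reference-positive screen is missed (sensitivity 10/11 ≈ 90.9%) while nearly all negatives agree, matching the pattern of the reported figures.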

High Serum Level of Retinol and α-Tocopherol Affords Protection Against Oral Cancer in a Multiethnic Population

  • Athirajan, Vimmitra;Razak, Ishak Abdul;Thurairajah, Nalina;Ghani, Wan Maria Nabillah;Ching, Helen-Ng Lee;Yang, Yi-Hsin;Peng, Karen-Ng Lee;Rahman, Zainal Ariff Abdul;Mustafa, Wan Mahadzir Wan;Abraham, Mannil Thomas;Kiong, Tay Keng;Mun, Yuen Kar;Jalil, Norma;Zain, Rosnah Binti
    • Asian Pacific Journal of Cancer Prevention
    • /
    • v.15 no.19
    • /
    • pp.8183-8189
    • /
    • 2014
  • Background: A comparative cross-sectional study involving oral cancer patients and healthy individuals was designed to investigate associations of retinol, α-tocopherol and β-carotene with the risk of oral cancer. Materials and Methods: This study included a total of 240 matched cases and controls, with subjects selected from the Malaysian Oral Cancer Database and Tissue Bank System (MOCDTBS). Retinol, α-tocopherol and β-carotene levels and intake were examined by high-performance liquid chromatography (HPLC) and food frequency questionnaire (FFQ), respectively. Results: Results from the two methods did not correlate, so further analysis was done using the HPLC method on blood serum. Serum levels of retinol and α-tocopherol among cases (0.177±0.081, 1.649±1.670 μg/ml) were significantly lower than in controls (0.264±0.137, 3.225±2.054 μg/ml) (p<0.005). Although serum levels of β-carotene among cases (0.106±0.159 μg/ml) were lower than in controls (0.134±0.131 μg/ml), statistical significance was not observed. Logistic regression analysis showed that high serum levels of retinol (OR=0.501, 95% CI=0.254-0.992, p<0.05) and α-tocopherol (OR=0.184, 95% CI=0.091-0.370, p<0.05) were significantly related to lower risk of oral cancer, whereas no relationship was observed between β-carotene and oral cancer risk. Conclusions: High serum levels of retinol and α-tocopherol confer protection against oral cancer risk.

An Analysis on the Economic Impact of ICT Based Innovation within Creative Industries in South Korea (창조산업 내 ICT기반 혁신의 경제적 파급효과 분석)

  • Lee, Youngjoo;Kim, Byungchae;Lee, Yeonwoo
    • Journal of Technology Innovation
    • /
    • v.23 no.3
    • /
    • pp.341-372
    • /
    • 2015
  • While creativity and innovation are the keys to driving the creative economy in South Korea, the development of analysis frameworks to evaluate its size and performance has been limited. The present study suggests a framework and a method to assess the economic impact of the creative economy using inter-industry analysis, which employs input-output coefficients published by the Bank of Korea and empirical data from the national informatization survey conducted by the National Information Society Agency (NIA). The results indicate that, as of 2013, despite the economic downturn, innovation based on information and communication technology (ICT) significantly led production, value-added, and employment inducement. The effect is predominant in the creative industry in a broad sense, that is, technology-intensive manufacturing. Theoretical and policy implications are discussed.

SNP Discovery in the Leptin Promoter Gene and Association with Meat Quality and Carcass Traits in Korean Cattle

  • Chung, E.R.;Shin, S.C.;Shin, K.H.;Chung, K.Y.
    • Asian-Australasian Journal of Animal Sciences
    • /
    • v.21 no.12
    • /
    • pp.1689-1695
    • /
    • 2008
  • Leptin, the hormone product of the obese gene, is secreted predominately from white adipose tissue and regulates feed intake, energy metabolism and body composition. It has been considered a candidate gene for performance, carcass and meat quality traits in beef cattle. The objective of this study was to identify SNPs in the promoter region of the leptin gene and to evaluate the possible association of the SNP genotypes with carcass and meat quality traits in Korean cattle. We identified a total of 25 SNPs in the promoter region (1,208-3,049 bp upstream from the transcription start site) of the leptin gene, eleven (g.1508C>G, g.1540G>A, g.1545G>A, g.1551C>T, g.1746T>G, g.1798ins(G), g.1932del(T), g.1933del(T), g.1934del(T), g.1993C>T and g.2033C>T) of which have not been reported previously. Their sequences were deposited in GenBank database with accession number DQ202319. Genotyping of the SNPs located at positions g.2418C>G and g.2423G>A within the promoter region was performed by direct sequencing and PCR-SSCP method to investigate the effects of SNP genotypes on carcass and meat quality traits in Korean cattle. The SNP and SSCP genotypes from the two mutations of the leptin promoter were shown to be associated with the BF trait. The average BF value of animals with heterozygous SNP genotype was significantly greater than that of animals with the homozygous SNP genotypes for the g.2418C>G and g.2423G>A SNPs (p<0.05). Analysis of the combined genotype effect in both SNPs showed that animals with the AC SSCP genotype had higher BF value than animals with BB or AA SSCP genotypes (p<0.05). These results suggest that SNP of the leptin promoter region may be useful markers for selection of economic traits in Korean cattle.

Application of Random Over Sampling Examples(ROSE) for an Effective Bankruptcy Prediction Model (효과적인 기업부도 예측모형을 위한 ROSE 표본추출기법의 적용)

  • Ahn, Cheolhwi;Ahn, Hyunchul
    • The Journal of the Korea Contents Association
    • /
    • v.18 no.8
    • /
    • pp.525-535
    • /
    • 2018
  • If the frequency of a particular class is much higher than that of the other classes in a classification problem, data imbalance problems occur, which distort machine learning. Corporate bankruptcy prediction often suffers from data imbalance because the ratio of insolvent companies is generally very low, whereas the ratio of solvent companies is very high. To mitigate these problems, a proper sampling technique must be applied. Until now, oversampling techniques, which adjust the class distribution of a data set by sampling the minor class with replacement, have been popular; however, they carry a risk of overfitting. Against this background, this study applies the ROSE (Random Over Sampling Examples) technique, proposed by Menardi and Torelli in 2014, to effective corporate bankruptcy prediction. The ROSE technique creates new training samples by synthesizing them from the existing samples, leading to better prediction accuracy of the classifiers while avoiding the risk of overfitting. Specifically, our study proposes to combine the ROSE method with SVM (support vector machine), which is known as one of the best binary classifiers. We applied the proposed method to a real-world bankruptcy prediction case of a major Korean bank and compared its performance with other sampling techniques. Experimental results showed that ROSE improved the prediction accuracy of SVM in bankruptcy prediction compared with other techniques, with statistical significance. These results shed light on the fact that ROSE can be a good alternative for resolving data imbalance problems in prediction tasks in social science areas beyond bankruptcy prediction.
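ROSE generates synthetic minority samples by a smoothed bootstrap: resample existing minority rows with replacement, then perturb each with Gaussian kernel noise, so new samples come from a neighborhood of the originals rather than being exact copies. The sketch below is a minimal version of that idea, with a simple fixed bandwidth rather than ROSE's plug-in bandwidth rule:

```python
import numpy as np

def rose_sample(X_minor, n_new, h=0.5, rng=None):
    # Smoothed bootstrap: resample minority rows, then jitter each with
    # Gaussian kernel noise so synthetic points lie near, but not on,
    # the originals (avoiding the duplication that causes overfitting).
    if rng is None:
        rng = np.random.default_rng(0)
    idx = rng.integers(0, len(X_minor), size=n_new)
    scale = h * X_minor.std(axis=0)  # per-feature kernel width (fixed h)
    noise = rng.normal(0.0, scale, size=(n_new, X_minor.shape[1]))
    return X_minor[idx] + noise

X_minor = np.array([[1.0, 2.0], [1.2, 1.8], [0.9, 2.2]])  # minority class
synthetic = rose_sample(X_minor, n_new=10)
print(synthetic.shape)  # → (10, 2)
```

The augmented minority set would then be concatenated with the majority class before training the SVM, as in the paper's pipeline.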