• Title/Summary/Keyword: 결합 알고리즘 (combination algorithm)

Search Results: 1,723

Radiation Dose Reduction in Digital Mammography by Deep-Learning Algorithm Image Reconstruction: A Preliminary Study (딥러닝 알고리즘을 이용한 저선량 디지털 유방 촬영 영상의 복원: 예비 연구)

  • Su Min Ha;Hak Hee Kim;Eunhee Kang;Bo Kyoung Seo;Nami Choi;Tae Hee Kim;You Jin Ku;Jong Chul Ye
    • Journal of the Korean Society of Radiology / v.83 no.2 / pp.344-359 / 2022
  • Purpose To develop a denoising convolutional neural network-based image processing technique and investigate its efficacy in diagnosing breast cancer using low-dose mammography imaging. Materials and Methods A total of 6 breast radiologists were included in this prospective study. All radiologists independently evaluated low-dose images for lesion detection and rated them for diagnostic quality using a qualitative scale. After application of the denoising network, the same radiologists evaluated lesion detectability and image quality. For clinical application, a consensus on lesion type and localization on preoperative mammographic examinations of breast cancer patients was reached after discussion. Thereafter, coded low-dose, reconstructed full-dose, and full-dose images were presented and assessed in a random order. Results In the evaluation using mastectomy specimens as a reference, lesions were better perceived on the 40% reconstructed full-dose images than on the low-dose images. In clinical application, compared with the 40% reconstructed images, higher values were given to full-dose images for resolution (p < 0.001), diagnostic quality for calcifications (p < 0.001), and diagnostic quality for masses, asymmetry, or architectural distortion (p = 0.037). The 40% reconstructed images showed values comparable to those of the 100% full-dose images for overall quality (p = 0.547), lesion visibility (p = 0.120), and contrast (p = 0.083), without significant differences. Conclusion Effective denoising and image reconstruction techniques can enable breast cancer diagnosis with a substantial reduction in radiation dose.
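
The denoise-and-compare workflow the study evaluates can be sketched in a few lines. The moving-average filter below is only a stand-in for the paper's denoising convolutional network, and the signal, noise level, and window size are arbitrary illustrative choices:

```python
import math
import random

def add_noise(signal, sigma, rng):
    """Simulate a low-dose acquisition by adding Gaussian noise."""
    return [s + rng.gauss(0.0, sigma) for s in signal]

def moving_average(signal, k=5):
    """Trivial stand-in denoiser; the study's method is a denoising CNN."""
    half = k // 2
    out = []
    for i in range(len(signal)):
        window = signal[max(0, i - half):i + half + 1]
        out.append(sum(window) / len(window))
    return out

def mse(a, b):
    """Mean squared error against the full-dose reference."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

rng = random.Random(0)
full_dose = [math.sin(0.05 * i) for i in range(400)]   # reference "image" row
low_dose = add_noise(full_dose, sigma=1.0, rng=rng)    # simulated low-dose scan
reconstructed = moving_average(low_dose)               # denoised reconstruction
```

Any real pipeline would replace `moving_average` with a trained network, but the quantitative comparison against the full-dose reference proceeds the same way.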

Single Trace Analysis against HyMES by Exploitation of Joint Distributions of Leakages (HyMES에 대한 결합 확률 분포 기반 단일 파형 분석)

  • Park, ByeongGyu;Kim, Suhri;Kim, Hanbit;Jin, Sunghyun;Kim, HeeSeok;Hong, Seokhie
    • Journal of the Korea Institute of Information Security & Cryptology / v.28 no.5 / pp.1099-1112 / 2018
  • The field of post-quantum cryptography (PQC) is an active area of research as cryptographers look for public-key cryptosystems that can resist quantum adversaries. Among the categories in PQC, code-based cryptosystems provide high security along with efficiency. Recent works on code-based cryptosystems focus on side-channel-resistant implementations, since previous works have indicated possible side-channel vulnerabilities in existing algorithms. In this paper, we recover the secret key of HyMES (Hybrid McEliece Scheme) using a single power consumption trace. HyMES is a variant of the McEliece cryptosystem that provides smaller keys and faster encryption and decryption. During decryption, the algorithm computes the parity-check matrix required for the syndrome computation. We analyze HyMES using the fact that the joint distributions of the nonlinear functions used in this process depend on the secret key. To the best of our knowledge, this is the first side-channel analysis based on joint distributions of leakages against a public-key cryptosystem.
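
As a toy illustration of key recovery from the leakage of a nonlinear function, the sketch below uses a standard correlation-based distinguisher rather than the paper's joint-distribution method, and a 16-entry S-box (the first row of the AES S-box, chosen only as a convenient nonlinear map) stands in for the nonlinear functions in the HyMES parity-check computation; all traces are simulated:

```python
import random

def hw(x):
    """Hamming weight, the usual power-leakage model."""
    return bin(x).count("1")

# First 16 AES S-box entries, used here purely as a nonlinear toy function.
SBOX = [0x63, 0x7c, 0x77, 0x7b, 0xf2, 0x6b, 0x6f, 0xc5,
        0x30, 0x01, 0x67, 0x2b, 0xfe, 0xd7, 0xab, 0x76]

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a) ** 0.5
    vb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (va * vb)

rng = random.Random(1)
secret = 11
inputs = [rng.randrange(16) for _ in range(500)]
# Simulated power samples: leakage of the nonlinear output plus noise.
trace = [hw(SBOX[x ^ secret]) + rng.gauss(0.0, 0.5) for x in inputs]

# Rank each key guess by how well its predicted leakage matches the trace.
scores = {k: pearson([hw(SBOX[x ^ k]) for x in inputs], trace)
          for k in range(16)}
recovered = max(scores, key=scores.get)
```

The single-trace attack in the paper is stronger: it needs only one execution, exploiting the joint distribution of several leakage points instead of averaging over many known inputs.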

Implementation of Evolving Neural Network Controller for Inverted Pendulum System (도립진자 시스템을 위한 진화형 신경회로망 제어기의 실현)

  • 심영진;김태우;최우진;이준탁
    • Journal of the Korean Institute of Illuminating and Electrical Installation Engineers / v.14 no.3 / pp.68-76 / 2000
  • The stabilization control of an Inverted Pendulum (IP) system is difficult because of its nonlinearity and structural instability. Furthermore, conventional techniques such as pole placement and optimal control based on local linearizations have narrow stabilizable regions, and the fine tuning of their gain parameters is troublesome. Thus, in this paper, an Evolving Neural Network Controller (ENNC), whose structure and connection weights are optimized simultaneously by a Real Variable Elitist Genetic Algorithm (RVEGA), is presented for the stabilization of an IP system with nonlinearity. The proposed ENNC is encoded as a simple genetic chromosome, and operations such as the addition or deletion of neurons are applied according to the various flag types. Therefore, the connection weights, the structure, and the neuron types of the given ENNC can be optimized by the proposed evolution strategy. The proposed ENNC was implemented successfully on an ADA-2310 data acquisition board and an 80586 microprocessor in order to stabilize the IP system. Through simulation and experimental results, we show that the finally acquired optimal ENNC is very useful in the stabilization control of the IP system.

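
An elitist real-coded genetic algorithm of the kind the paper uses (RVEGA) can be sketched as follows. The quadratic objective is only a stand-in for the actual inverted-pendulum stabilization cost, and the population and operator parameters are illustrative:

```python
import random

def fitness(w):
    """Stand-in cost; in the paper this would score IP stabilization."""
    return sum((x - 1.0) ** 2 for x in w)

def evolve(dim=4, pop_size=20, gens=100, elite=2, sigma=0.3, seed=0):
    rng = random.Random(seed)
    pop = [[rng.uniform(-5.0, 5.0) for _ in range(dim)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        nxt = [w[:] for w in pop[:elite]]           # elitism: best survive unchanged
        while len(nxt) < pop_size:
            p1, p2 = rng.sample(pop[:pop_size // 2], 2)  # select from better half
            nxt.append([(a + b) / 2 + rng.gauss(0.0, sigma)  # arithmetic crossover
                        for a, b in zip(p1, p2)])           # plus Gaussian mutation
        pop = nxt
    return min(pop, key=fitness)

best = evolve()
```

Because the elite individuals are copied forward unchanged, the best fitness is monotonically non-increasing across generations, which is the property that makes elitist GAs attractive for controller tuning.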

A New Face Detection Method using Combined Features of Color and Edge under the illumination Variance (컬러와 에지정보를 결합한 조명변화에 강인한 얼굴영역 검출방법)

  • 지은미;윤호섭;이상호
    • Journal of KIISE:Software and Applications / v.29 no.11 / pp.809-817 / 2002
  • This paper describes a new face detection method used as a pre-processing algorithm for on-line face recognition. To complement the weakness of previous face detection methods that use only edge or only color features, we propose two face detection methods: one combines edge and color features, and the other uses color sampling in the center area of the image. To keep a face area from being connected to background areas of similar color, we first propose a new adaptive edge detection algorithm. The adaptive edge detection algorithm is robust to illumination variance, reliably extracting strong edges at the border between background and face areas. Because of this strong edge detection, the face area may appear as one region or as multiple regions. We merge these isolated regions using color information and obtain the final face area in the form of an MBR (Minimum Bounding Rectangle). If the size of the final face area falls below or above a threshold, the color sampling method in the center area of the input image is used to detect a new face area. To evaluate the proposed method, we experimented with 2,100 face images and obtained a high face detection rate of 96.3%.
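
A minimal sketch of the two feature masks and the MBR extraction, assuming a toy color rule and a fixed gradient threshold; the paper's adaptive, illumination-robust edge detector and its region-merging step are simplified away:

```python
def to_gray(img):
    return [[(r + g + b) // 3 for (r, g, b) in row] for row in img]

def skin_mask(img):
    """Crude color rule for face-like pixels (illustrative only)."""
    return [[(r > 120 and r > g and r > b) for (r, g, b) in row] for row in img]

def edge_mask(gray, thresh=40):
    """Simple gradient edges; the paper's detector additionally adapts
    its threshold to the illumination."""
    h, w = len(gray), len(gray[0])
    mask = [[False] * w for _ in range(h)]
    for y in range(h - 1):
        for x in range(w - 1):
            gx = abs(gray[y][x + 1] - gray[y][x])
            gy = abs(gray[y + 1][x] - gray[y][x])
            mask[y][x] = (gx + gy) > thresh
    return mask

def mbr(mask):
    """Minimum bounding rectangle (x0, y0, x1, y1) of the set pixels."""
    pts = [(x, y) for y, row in enumerate(mask) for x, v in enumerate(row) if v]
    if not pts:
        return None
    xs, ys = [p[0] for p in pts], [p[1] for p in pts]
    return (min(xs), min(ys), max(xs), max(ys))

# Synthetic 10x10 image: dark background with a face-colored block.
BG, FACE = (30, 30, 30), (200, 80, 80)
img = [[FACE if 3 <= x <= 6 and 2 <= y <= 5 else BG for x in range(10)]
       for y in range(10)]
face_box = mbr(skin_mask(img))
edges = edge_mask(to_gray(img))
```

In the paper, the edge mask is what keeps same-colored face and background regions from merging; here the two masks are simply computed side by side.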

8.1 Gbps High-Throughput and Multi-Mode QC-LDPC Decoder based on Fully Parallel Structure (전 병렬구조 기반 8.1 Gbps 고속 및 다중 모드 QC-LDPC 복호기)

  • Jung, Yongmin;Jung, Yunho;Lee, Seongjoo;Kim, Jaeseok
    • Journal of the Institute of Electronics and Information Engineers / v.50 no.11 / pp.78-89 / 2013
  • This paper proposes a high-throughput, multi-mode quasi-cyclic (QC) low-density parity-check (LDPC) decoder based on a fully parallel structure. The fully parallel structure is employed to provide very high throughput. The high interconnection complexity, a general problem of fully parallel structures, is solved by using a broadcasting-based sum-product algorithm and by proposing a low-complexity cyclic shift network. The high hardware complexity caused by the large number of check node processors and variable node processors is solved by proposing a combined check and variable node processor (CCVP). The proposed decoder supports multi-mode decoding through a routing-based interconnection network, a flexible CCVP, and a flexible cyclic shift network. Operating at a 100 MHz clock frequency, the proposed QC-LDPC decoder supports multi-mode decoding and provides 8.1 Gbps throughput for a (1944, 1620) QC-LDPC code.
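
The cyclic shift at the heart of QC-LDPC message passing can be sketched in software. The staged version mirrors how a low-complexity shift network is often organized in hardware; it assumes the submatrix size Z is a power of two, which is an illustrative simplification (the paper's 802.11-style code uses other sizes):

```python
def cyclic_shift(vec, s):
    """Reference rotation of a length-Z message vector by s positions."""
    s %= len(vec)
    return vec[s:] + vec[:s]

def barrel_shift(vec, s):
    """The same rotation built from log2(Z) fixed-stride stages, the way a
    low-complexity hardware shift network is usually organized.
    Assumes len(vec) is a power of two (illustrative simplification)."""
    out, stride, bit = vec[:], 1, 0
    while stride < len(vec):
        if (s >> bit) & 1:
            out = out[stride:] + out[:stride]  # this stage rotates by `stride`
        stride <<= 1
        bit += 1
    return out

msgs = list(range(8))           # one Z-sized block of messages (Z = 8 here)
rotated = barrel_shift(msgs, 5)
```

The staged design needs only log2(Z) layers of fixed multiplexers instead of a full Z-to-Z crossbar, which is the complexity saving the decoder's shift network exploits.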

Optimizing Similarity Threshold and Coverage of CBR (사례기반추론의 유사 임계치 및 커버리지 최적화)

  • Ahn, Hyunchul
    • KIPS Transactions on Software and Data Engineering / v.2 no.8 / pp.535-542 / 2013
  • Since case-based reasoning (CBR) has many advantages, it has been used to support decision making in various areas, including medical checkups, production planning, and customer classification. However, several factors must be set heuristically when designing effective CBR systems. Among these factors, this study addresses the selection of appropriate neighbors in the case retrieval step. As the criterion for selecting appropriate neighbors, conventional studies have used a preset number of neighbors to combine (i.e., the k of the k-nearest neighbor method) or a relative portion of the maximum similarity. This study instead proposes an absolute similarity threshold, varying from 0 to 1, as the criterion for selecting the neighbors to combine. In this case, too high a similarity threshold may leave the model rarely able to produce a solution. To avoid this, we propose adopting the coverage, i.e., the ratio of cases for which solutions are produced to the total number of training cases, and setting it as a constraint when optimizing the similarity threshold. To validate the usefulness of the proposed model, we applied it to a real-world target marketing case of an online shopping mall in Korea. As a result, we found that the proposed model may significantly improve the performance of CBR.
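
The proposed retrieval rule, an absolute similarity threshold optimized under a coverage constraint, can be sketched as follows with a toy one-dimensional case base:

```python
def retrieve(query, cases, sim, threshold):
    """Neighbors whose similarity to the query meets the absolute threshold."""
    return [c for c in cases if sim(query, c) >= threshold]

def coverage(queries, cases, sim, threshold):
    """Fraction of queries for which at least one neighbor is retrieved."""
    hits = sum(1 for q in queries if retrieve(q, cases, sim, threshold))
    return hits / len(queries)

def best_threshold(queries, cases, sim, min_coverage, grid):
    """Largest candidate threshold whose coverage satisfies the constraint."""
    ok = [t for t in grid if coverage(queries, cases, sim, t) >= min_coverage]
    return max(ok) if ok else None

# Toy one-dimensional case base; similarity decays linearly with distance.
sim = lambda a, b: max(0.0, 1.0 - abs(a - b))
case_base = [0.0, 0.5, 1.0]
holdout = [0.25, 0.75]                 # each 0.25 away from its nearest case
grid = [i / 10 for i in range(1, 10)]  # candidate thresholds 0.1 .. 0.9
chosen = best_threshold(holdout, case_base, sim, min_coverage=1.0, grid=grid)
```

Raising the threshold past the chosen value starts leaving queries with no neighbors at all, which is exactly the failure mode the coverage constraint guards against.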

Regionalization of Extreme Rainfall with Spatio-Temporal Pattern (극치강수량의 시공간적 특성을 이용한 지역빈도분석)

  • Lee, Jeong-Ju;Kwon, Hyun-Han;Kim, Byung-Sik;Yoon, Seok-Yeong
    • Proceedings of the Korea Water Resources Association Conference / 2010.05a / pp.1429-1433 / 2010
  • When designing hydraulic structures, establishing water resources management plans, or assessing disaster impacts, probabilistic rainfall, flood, and storage volumes for a given return period are estimated. Typically, the probability distribution of hydrologic events is fitted to long-term observation records at the target site and extrapolated to the desired design frequency; for ungauged sites or sites with short observation records, regional frequency analysis is used instead. The most fundamental step in regional frequency analysis is assessing the homogeneity of the rainfall records, which requires a preceding statistical classification analysis. To test hydrologic homogeneity, the L-moment method and K-means cluster analysis are commonly used, with results visualized by spatial interpolation over station coordinates. Precipitation, however, is a hydrologic variable that varies in both space and time, so its temporal characteristics are also essential to defining its behavior. Accordingly, this study determines factors that can incorporate temporal effects, in addition to the conventional classification based on station coordinates and precipitation amounts, and presents a classification procedure that exploits them. That is, the objective is to compute temporal statistics for each station using circular statistics, which permit quantitative analysis of the timing of extreme precipitation, to combine these with the extreme precipitation amounts to generate spatio-temporal feature data, and to develop a clustering model based on them. Circular statistics were used to quantify and generalize the temporal attributes, and the means and standard deviations of the extreme precipitation amounts and their occurrence times were used as attributes. The combined data were clustered with the K-means algorithm, and the regionalization results were verified with the L-moment method. The clustering effect of the combined attribute data was confirmed through simulation experiments, and the analysis was performed using data from 58 weather stations in Korea. In a preliminary step, an average centroid was computed over 100 clustering runs and used as the initial centroid of the main analysis, stabilizing the otherwise variable clustering tendency and preventing the results from changing across repeated runs. Cluster numbers were also assigned in order of the within-cluster sums of spatial distances computed by K-means, so that physically related clusters receive adjacent numbers, avoiding errors when extracting cluster boundaries. The regional frequency analysis results were plotted with a three-dimensional spline technique.

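
The proposed combination of circular timing statistics with rainfall amounts before clustering can be sketched as below. The station data are synthetic, and this bare-bones K-means omits the averaged-centroid initialization and the L-moment verification described above:

```python
import math

def timing_features(days, period=365.0):
    """Circular-statistics encoding of extreme-rainfall dates: the mean
    resultant vector on the annual cycle (its length measures how
    concentrated the occurrence times are)."""
    angles = [2.0 * math.pi * d / period for d in days]
    c = sum(math.cos(a) for a in angles) / len(angles)
    s = sum(math.sin(a) for a in angles) / len(angles)
    return c, s

def kmeans(points, k, iters=20):
    # Naive initialization; the paper instead averages centroids over 100
    # preliminary runs so repeated clusterings do not drift.
    centers = list(points[:k])
    groups = []
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda j: sum((a - b) ** 2
                                      for a, b in zip(p, centers[j])))
            groups[i].append(p)
        centers = [tuple(sum(v) / len(g) for v in zip(*g)) if g else centers[i]
                   for i, g in enumerate(groups)]
    return groups

# Synthetic stations: two summer-peak, heavy-rainfall sites and two
# winter-peak, lighter-rainfall sites (days of year, mean extreme depth).
stations = [([175, 180, 185], 300.0), ([170, 182, 190], 310.0),
            ([25, 30, 35], 100.0), ([20, 33, 40], 110.0)]
points = [timing_features(days) + (depth / 300.0,) for days, depth in stations]
groups = kmeans(points, 2)
```

Encoding dates as points on a circle avoids the wrap-around artifact of raw day-of-year values (day 360 and day 5 are close in time but far apart numerically).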

A Classification Model for Illegal Debt Collection Using Rule and Machine Learning Based Methods

  • Kim, Tae-Ho;Lim, Jong-In
    • Journal of the Korea Society of Computer and Information / v.26 no.4 / pp.93-103 / 2021
  • Despite the efforts of financial authorities in directly managing and supervising collection agents and in issuing bond-collecting guidelines, illegal and unfair debt collection still exists. To effectively prevent such activities, a method is needed for strengthening the monitoring of illegal collection even with little manpower, using technologies such as machine learning on unstructured data. In this study, we propose a classification model for illegal debt collection that combines machine learning, such as the Support Vector Machine (SVM), with a rule-based technique; collection transcripts of loan companies are obtained and converted into text data to identify illegal activities. The study also compares the identification accuracy achieved by different machine learning algorithms. The results show that combining rule-based illegality rules with machine learning yields higher classification accuracy than the model of a previous study that applied machine learning alone. This study is the first attempt to classify illegalities by combining rule-based detection rules with machine learning. If further research improves the model's completeness, it will contribute greatly to preventing consumer damage from illegal debt collection activities.
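
The rule-plus-model combination can be sketched as follows. The rule patterns and training sentences are hypothetical, and a small naive Bayes classifier stands in for the SVM and the other learners compared in the paper:

```python
import math
import re
from collections import Counter

# Hypothetical rule patterns for illegal collection phrasing (illustrative).
RULES = [re.compile(r"threat|midnight call|third party")]

def rule_flag(text):
    return any(p.search(text) for p in RULES)

class NaiveBayes:
    """Tiny multinomial naive Bayes; a stand-in for the paper's SVM."""
    def fit(self, texts, labels):
        self.counts = {0: Counter(), 1: Counter()}
        self.docs = Counter(labels)
        for t, y in zip(texts, labels):
            self.counts[y].update(t.split())
        self.vocab = set(self.counts[0]) | set(self.counts[1])
        return self

    def predict(self, text):
        scores = {}
        for y in (0, 1):
            total = sum(self.counts[y].values())
            s = math.log(self.docs[y] / sum(self.docs.values()))
            for w in text.split():
                s += math.log((self.counts[y][w] + 1)
                              / (total + len(self.vocab)))
            scores[y] = s
        return max(scores, key=scores.get)

def classify(text, model):
    # Combined decision: a fired rule flags the transcript as illegal (1)
    # regardless of the statistical model; otherwise the model decides.
    return 1 if rule_flag(text) else model.predict(text)

texts = ["pay immediately or we visit your family",   # illegal (1)
         "ignore this and face consequences",         # illegal (1)
         "please review your statement",              # legal (0)
         "your payment is due next friday"]           # legal (0)
model = NaiveBayes().fit(texts, [1, 1, 0, 0])
```

Letting hard rules override the learned model is one simple way to realize the combination; the paper's exact fusion logic may differ.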

Grasping a Target Object in Clutter with an Anthropomorphic Robot Hand via RGB-D Vision Intelligence, Target Path Planning and Deep Reinforcement Learning (RGB-D 환경인식 시각 지능, 목표 사물 경로 탐색 및 심층 강화학습에 기반한 사람형 로봇손의 목표 사물 파지)

  • Ryu, Ga Hyeon;Oh, Ji-Heon;Jeong, Jin Gyun;Jung, Hwanseok;Lee, Jin Hyuk;Lopez, Patricio Rivera;Kim, Tae-Seong
    • KIPS Transactions on Software and Data Engineering / v.11 no.9 / pp.363-370 / 2022
  • Grasping a target object among clutter objects without collision requires machine intelligence, including environment recognition, target and obstacle recognition, collision-free path planning, and the object-grasping intelligence of robot hands. In this work, we implement such a system in simulation and hardware to grasp a target object without collision. We use an RGB-D image sensor to recognize the environment and objects. Various path-finding algorithms have been implemented and tested to find collision-free paths. Finally, for an anthropomorphic robot hand, object-grasping intelligence is learned through deep reinforcement learning. In our simulation environment, grasping a target out of five clutter objects showed an average success rate of 78.8% and a collision rate of 34% without path planning, whereas our system combined with path planning showed an average success rate of 94% and an average collision rate of 20%. In our hardware environment, grasping a target out of three clutter objects showed an average success rate of 30% and a collision rate of 97% without path planning, whereas our system combined with path planning showed an average success rate of 90% and an average collision rate of 23%. Our results show that grasping a target object in clutter is feasible with vision intelligence, path planning, and deep RL.
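
A collision-free path search of the kind combined with the grasping policy can be sketched on a grid. Breadth-first search stands in for the various path-finding algorithms the paper tests, and the grid and obstacle layout are illustrative:

```python
from collections import deque

def bfs_path(grid, start, goal):
    """Shortest collision-free path on a grid via breadth-first search.
    grid[y][x] == 1 marks a cell blocked by a clutter object."""
    h, w = len(grid), len(grid[0])
    prev = {start: None}
    queue = deque([start])
    while queue:
        cur = queue.popleft()
        if cur == goal:
            path = []
            while cur is not None:       # walk the predecessor chain back
                path.append(cur)
                cur = prev[cur]
            return path[::-1]
        x, y = cur
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            nx, ny = nxt
            if (0 <= nx < w and 0 <= ny < h
                    and grid[ny][nx] == 0 and nxt not in prev):
                prev[nxt] = cur
                queue.append(nxt)
    return None  # goal unreachable

# A wall of obstacles with a single gap at the bottom row.
grid = [[0, 0, 1, 0, 0],
        [0, 0, 1, 0, 0],
        [0, 0, 1, 0, 0],
        [0, 0, 1, 0, 0],
        [0, 0, 0, 0, 0]]
path = bfs_path(grid, start=(0, 0), goal=(4, 0))
```

A real manipulator plans in continuous 3D configuration space rather than on a 2D grid, but the obstacle-avoiding search structure is the same.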

Data analysis by Integrating statistics and visualization: Visual verification for the prediction model (통계와 시각화를 결합한 데이터 분석: 예측모형 대한 시각화 검증)

  • Mun, Seong Min;Lee, Kyung Won
    • Design Convergence Study / v.15 no.6 / pp.195-214 / 2016
  • Predictive analysis is based on probabilistic learning algorithms known as pattern recognition or machine learning. Therefore, if users want to extract more information from the data, they need considerable statistical knowledge, and it is difficult to discern the patterns and characteristics of the data. This study conducted statistical data analyses and visual data analyses to complement these weaknesses of predictive analysis, and found implications not reported in previous studies. First, we could identify data patterns by adjusting the data selection according to the splitting criteria of the decision tree method. Second, we could see what types of data were included in the final prediction model. In the statistical analysis, we found relations among the multiple variables and derived a prediction model for predicting high box-office performance. In the visualization analysis, we proposed a visual analysis method with various interactive functions. Finally, through this study we verified the final prediction model and suggested an analysis method that extracts a variety of information from the data.
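
The role of the splitting criterion in what a decision tree selects can be sketched with a single-feature stump; the data and criteria shown (Gini impurity and entropy) are standard choices, and the example values are illustrative:

```python
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def gini(labels):
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def best_split(xs, ys, impurity):
    """Best threshold on one feature under the given splitting criterion:
    minimizes the size-weighted impurity of the two child nodes."""
    best = None
    for t in sorted(set(xs)):
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        if not left or not right:
            continue  # skip degenerate splits
        score = (len(left) * impurity(left)
                 + len(right) * impurity(right)) / len(ys)
        if best is None or score < best[1]:
            best = (t, score)
    return best

xs = [1, 2, 3, 4, 5, 6]
ys = [0, 0, 0, 1, 1, 1]
```

Comparing `best_split(xs, ys, gini)` against `best_split(xs, ys, entropy)` on real data is one way to see how the chosen criterion changes which cases end up in each branch, which is the effect the study inspects visually.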