• Title/Summary/Keyword: Feature Transformation (특징변환)

Search results: 1,728

Stereo Vision Based 3D Input Device (스테레오 비전을 기반으로 한 3차원 입력 장치)

  • Yoon, Sang-Min;Kim, Ig-Jae;Ahn, Sang-Chul;Ko, Han-Seok;Kim, Hyoung-Gon
    • Journal of the Institute of Electronics Engineers of Korea SP / v.39 no.4 / pp.429-441 / 2002
  • This paper concerns extracting 3D motion information from a 3D input device in real time, focusing on effective human-computer interaction. In particular, we develop a novel algorithm for extracting 6 degrees-of-freedom motion information from a 3D input device by employing the epipolar geometry of a stereo camera together with color, motion, and structure information, without requiring a camera calibration object. To extract 3D motion, we first determine the epipolar geometry of the stereo camera by computing the perspective projection matrix and the perspective distortion matrix. We then apply the proposed Motion Adaptive Weighted Unmatched Pixel Count algorithm, which performs color transformation, unmatched pixel counting, discrete Kalman filtering, and principal component analysis. The extracted 3D motion information can be applied to controlling virtual objects or to a navigation device that controls the user's viewpoint in a virtual reality setting. Since the stereo vision-based 3D input device is wireless, it provides users with a more natural and efficient interface, effectively realizing a feeling of immersion.
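
The MAWUPC algorithm itself is specific to this paper, but two reusable ingredients it names, unmatched pixel counting and principal component analysis, can be sketched briefly. Below is a minimal illustration assuming grayscale frames as NumPy arrays and an arbitrary change threshold; it is not the paper's weighted, motion-adaptive formulation.

```python
# A minimal sketch (NOT the paper's MAWUPC algorithm): count "unmatched"
# pixels between consecutive frames, then run PCA on their coordinates to
# estimate the dominant motion axis. The threshold is a placeholder.
import numpy as np

def unmatched_pixel_motion(prev_frame, curr_frame, threshold=25.0):
    """Return the unmatched pixel count and the principal motion axis."""
    diff = np.abs(curr_frame.astype(np.float64) - prev_frame.astype(np.float64))
    ys, xs = np.nonzero(diff > threshold)      # "unmatched" pixel positions
    count = len(xs)
    if count < 2:
        return count, None
    # PCA: the covariance eigenvector with the largest eigenvalue gives the
    # axis along which the moving pixels are most spread out.
    coords = np.stack([xs, ys], axis=1).astype(np.float64)
    coords -= coords.mean(axis=0)
    eigvals, eigvecs = np.linalg.eigh(coords.T @ coords / (count - 1))
    return count, eigvecs[:, np.argmax(eigvals)]
```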

A High Performance License Plate Recognition System (고속처리 자동차 번호판 인식시스템)

  • 남기환;배철수
    • Journal of the Korea Institute of Information and Communication Engineering / v.6 no.8 / pp.1352-1357 / 2002
  • This paper describes an algorithm to extract license plates from vehicle images. Conventional methods preprocess the entire vehicle image to produce an edge image and binarize it; the Hough transform is applied to the binary image to find horizontal and vertical lines, and the license plate area is extracted using the characteristics of license plates. Problems with this approach are that real-time processing is not feasible due to the long processing time, and that the license plate area is not extracted when lighting is irregular, such as at night, or when the plate boundary does not show up in the image. This research uses the gray-level transition characteristics of license plates: it verifies the digit area by examining the digit width and the level difference between the background area and the digit area, and then extracts the plate area by testing the distance between the verified digits. This approach avoids the conventional methods' failure on plates with degraded boundaries and meets the time requirement by processing in real time, so that practical application is possible. The resulting automated license plate recognition system is able to read license numbers of cars even under circumstances that are far from ideal. In a real-life test, the percentage of rejected plates was 13%, whereas 0.4% of the plates were misclassified. Suggestions for further improvements are given.
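
The gray-level transition test lends itself to a compact illustration. The sketch below assumes a grayscale image as a NumPy array; the jump and transition-count thresholds are illustrative placeholders, not the paper's tuned values.

```python
# A minimal sketch of the gray-level-transition idea: image rows crossing a
# license plate show many large intensity jumps (digit edges). Thresholds
# are illustrative, not the paper's values.
import numpy as np

def candidate_plate_rows(gray, jump=40.0, min_transitions=10):
    """Return row indices whose transition count suggests plate digits."""
    rows = []
    for y in range(gray.shape[0]):
        line = gray[y].astype(np.float64)
        # Count positions where the gray level jumps sharply.
        transitions = np.count_nonzero(np.abs(np.diff(line)) > jump)
        if transitions >= min_transitions:
            rows.append(y)
    return rows
```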

ID-Based Proxy Re-encryption Scheme with Chosen-Ciphertext Security (CCA 안전성을 제공하는 ID기반 프락시 재암호화 기법)

  • Koo, Woo-Kwon;Hwang, Jung-Yeon;Kim, Hyoung-Joong;Lee, Dong-Hoon
    • Journal of the Institute of Electronics Engineers of Korea CI / v.46 no.1 / pp.64-77 / 2009
  • A proxy re-encryption scheme allows Alice to temporarily delegate decryption rights to Bob via a proxy: Alice gives the proxy a re-encryption key so that the proxy can convert a ciphertext for Alice into a ciphertext for Bob. Recently, ID-based proxy re-encryption schemes have been receiving considerable attention for a variety of applications such as distributed storage, DRM, and email-forwarding systems, and a non-interactive identity-based proxy re-encryption scheme achieving CCA security was proposed by Green and Ateniese. In this paper, we show that their identity-based proxy re-encryption scheme is unfortunately vulnerable to a collusion attack: the collusion of a proxy and a malicious user enables the two parties to derive other honest users' private keys and thereby decrypt ciphertexts intended only for an honest user. To solve this problem, we propose two ID-based proxy re-encryption schemes, which are proved secure under CPA and CCA in the random oracle model. To achieve CCA security, we present a self-authentication tag based on a short signature. An important feature of the proposed schemes is that the ciphertext structure is preserved after re-encryption, so re-encryption does not lead to ciphertext expansion, and there is no limitation on the number of re-encryptions.
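
The basic conversion a proxy performs is easiest to see in the classic (non-ID-based) ElGamal scheme of Blaze, Bleumer, and Strauss, which the toy sketch below implements; it is not the paper's ID-based, CCA-secure construction, and the tiny parameters are insecure by design.

```python
# A toy ElGamal-based proxy re-encryption in the style of Blaze-Bleumer-
# Strauss -- NOT the paper's ID-based CCA-secure scheme, and these demo
# parameters are hopelessly insecure. It only illustrates how a proxy turns
# a ciphertext for Alice into one for Bob without learning the plaintext.
import random

p = 467            # safe prime: p = 2q + 1
q = (p - 1) // 2   # prime order of the subgroup
g = 4              # generator of the order-q subgroup

def keygen():
    sk = random.randrange(1, q)
    return sk, pow(g, sk, p)            # (secret key a, public key g^a)

def encrypt(pk, m):
    r = random.randrange(1, q)
    return (m * pow(g, r, p) % p, pow(pk, r, p))   # (m*g^r, g^{a*r})

def rekey(sk_a, sk_b):
    # Re-encryption key rk = b/a mod q. Computing it here needs both
    # secrets -- one reason later schemes strive to be non-interactive.
    return sk_b * pow(sk_a, -1, q) % q

def reencrypt(rk, ct):
    c1, c2 = ct
    return (c1, pow(c2, rk, p))         # g^{a*r} -> g^{b*r}; c1 untouched

def decrypt(sk, ct):
    c1, c2 = ct
    g_r = pow(c2, pow(sk, -1, q), p)    # (g^{sk*r})^{1/sk} = g^r
    return c1 * pow(g_r, -1, p) % p

sk_a, pk_a = keygen()
sk_b, pk_b = keygen()
ct_a = encrypt(pk_a, 123)
ct_b = reencrypt(rekey(sk_a, sk_b), ct_a)
assert decrypt(sk_b, ct_b) == 123
```

Note that the first ciphertext component is untouched by re-encryption, which is the same structure-preserving, no-expansion property the abstract highlights for the proposed schemes.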

On Optimizing LDA-extensions Using a Pre-Clustering (사전 클러스터링을 이용한 LDA-확장법들의 최적화)

  • Kim, Sang-Woon;Koo, Byum-Yong;Choi, Woo-Young
    • Journal of the Institute of Electronics Engineers of Korea CI / v.44 no.3 / pp.98-107 / 2007
  • In high-dimensional pattern recognition tasks such as face classification, the Small Sample Size problem arises when the number of training samples is smaller than the dimensionality. Recently, various LDA-extensions, including PCA+LDA and Direct-LDA, have been developed to address this problem. This paper proposes a method of improving classification efficiency by increasing the number of (sub-)classes through pre-clustering the training set prior to executing Direct-LDA. In LDA (or Direct-LDA), the number of classes in the training set limits the dimensionality of the reduced space, so this number is increased to the number of sub-classes obtained through clustering, improving the classification performance of the LDA-extensions. In other words, the eigenspace of the training set consists of the range space and the null space, and the dimensionality of the range space increases as the number of classes increases. Therefore, when constructing the transformation matrix, minimizing the null space minimizes the loss of discriminative information that would result from discarding it. Experimental results on artificial XOR data as well as the benchmark face databases AT&T and Yale demonstrate that the proposed method improves classification efficiency.
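
The pre-clustering step is straightforward to sketch. The code below uses scikit-learn's ordinary LDA as a stand-in for Direct-LDA, with an illustrative cluster count per class; splitting each class into k sub-classes raises the class count, and with it the number of dimensions LDA can retain.

```python
# A minimal sketch of the pre-clustering idea (ordinary LDA stands in for
# Direct-LDA; k is illustrative). With C*k sub-classes, LDA can keep up to
# C*k - 1 dimensions instead of C - 1.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def pre_clustered_lda(X, y, k=2):
    """Fit LDA on sub-class labels obtained by k-means within each class."""
    sub_labels = np.empty(len(y), dtype=int)
    next_label = 0
    for c in np.unique(y):
        idx = np.where(y == c)[0]
        km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X[idx])
        sub_labels[idx] = km.labels_ + next_label   # disjoint sub-class ids
        next_label += k
    lda = LinearDiscriminantAnalysis().fit(X, sub_labels)
    return lda, sub_labels
```

Since sub-class ids are assigned contiguously per class, a predicted sub-class maps back to its parent class (in `np.unique(y)` order) by integer division by k.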

A Novel Video Copy Detection Method based on Statistical Analysis (통계적 분석 기반 불법 복제 비디오 영상 감식 방법)

  • Cho, Hye-Jeong;Kim, Ji-Eun;Sohn, Chae-Bong;Chung, Kwang-Sue;Oh, Seoung-Jun
    • Journal of Broadcast Engineering / v.14 no.6 / pp.661-675 / 2009
  • Carelessly and illegally copied contents are raising serious social problems as internet and multimedia technologies advance, so the development of video copy detection systems is urgent. In this paper, we propose a hierarchical video copy detection method that estimates the similarity between an original video and a manipulated (transformed) copy using statistical characteristics. We rank frames according to their luminance values to be robust to spatial transformations, and select similar videos as candidate segments from a large database to reduce processing time and complexity. Copied videos generally insert black areas at the edges of the image, so we remove the black area and decide whether a video is a copy by comparing the statistical characteristics of the original and the copy over the center part of each frame, which contains the video's important information. Experimental results show that the proposed method achieves keyframe accuracy similar to the reference method while using less memory to store feature information, because it uses 61% fewer keyframes. The proposed method also detects copies efficiently despite extensive spatial transformations such as blurring, contrast change, zoom in, zoom out, aspect ratio change, and caption insertion.
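
The luminance-rank signature and black-border removal can be illustrated compactly. The sketch below is a minimal ordinal-signature scheme over a 4 × 4 block grid; the grid size, crop threshold, and distance measure are assumptions, not the paper's settings.

```python
# A minimal sketch of an ordinal (luminance-rank) signature with black-
# border cropping. Rank order is invariant to contrast/brightness shifts,
# which is what makes it robust to spatial transformations.
import numpy as np

def crop_black_border(gray, thresh=16.0):
    """Drop near-black rows/columns often inserted at copy-video edges."""
    rows = np.where(gray.mean(axis=1) > thresh)[0]
    cols = np.where(gray.mean(axis=0) > thresh)[0]
    return gray[rows.min():rows.max() + 1, cols.min():cols.max() + 1]

def ordinal_signature(gray, grid=4):
    """Rank the mean luminance of grid x grid blocks of the cropped frame."""
    g = crop_black_border(gray).astype(np.float64)
    h, w = g.shape[0] // grid * grid, g.shape[1] // grid * grid
    blocks = g[:h, :w].reshape(grid, h // grid, grid, w // grid)
    means = blocks.mean(axis=(1, 3)).ravel()
    return np.argsort(np.argsort(means))      # rank order of the blocks

def signature_distance(a, b):
    return float(np.abs(a - b).mean())        # small for copies
```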

On-Line Music Score Recognition by DP Matching (DP매칭에 의한 On-Line 악보인식)

  • 구상훈;이병선;김수경;이은주
    • Proceedings of the Korea Society of Information Technology Applications Conference / 2002.11a / pp.502-511 / 2002
  • Advances in computer technology have had an enormous impact on many fields of society, including music score recognition. However, research on converting scores drawn on-line into a standardized score form in real time has been insufficient, so further study is needed. This paper proposes an on-line music score recognition method using DP (Dynamic Programming) matching to recognize scores in real time and improve user convenience. To recognize musical symbols as they are input, the most informative features, namely the pen direction and the x, y coordinates within each symbol, are extracted as vectors and recognized through DP matching against standard patterns divided into two groups: notes and non-notes (rests and other symbols). First, 16-direction chain coding of the pen movement is performed using the x, y coordinates generated as symbols are drawn in real time on a tablet. To reduce the time needed to separate notes from non-notes, four-quadrant coding is applied instead of the 16-direction coding: when a note is drawn in shorthand, the coordinates of the note head fall in the third and fourth quadrants, whereas for a closed-curve note head the coordinates are distributed evenly over all four quadrants. A symbol satisfying the closed-curve condition is judged to be a note if the number of y coordinates crossing the midpoint between the minimum and maximum input y values exceeds a threshold; otherwise it is treated as a non-note. After the note/non-note decision, penalties are computed through DP matching between the input pattern and the standard patterns, and the symbol with the minimum accumulated penalty along the search path is taken as the recognized symbol. Experimental results show that dividing the standard patterns into the two groups improved the processing speed of the DP matching, and good recognition rates were obtained even for patterns with local deformations or differing numbers of features.
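
The DP matching at the core of the method can be sketched as a standard edit-distance-style recurrence over chain-code sequences. The circular penalty function below and the templates are illustrative; the paper's standard patterns and exact penalty scheme are assumed, not reproduced.

```python
# A minimal DP-matching sketch for direction-code sequences: the classic
# symmetric recurrence over match / skip-input / skip-template moves.
def dp_match(pattern, template, n_dirs=16):
    """Accumulated penalty between two chain-code sequences."""
    def cost(a, b):
        d = abs(a - b) % n_dirs
        return min(d, n_dirs - d)       # circular direction difference
    n, m = len(pattern), len(template)
    INF = 10 ** 9
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i][j] = cost(pattern[i - 1], template[j - 1]) + min(
                D[i - 1][j - 1], D[i - 1][j], D[i][j - 1])
    return D[n][m]

# Recognition: within the note or non-note group, the template with the
# minimal accumulated penalty wins.
```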


Removing SAR Speckle Noise Based on the Edge Sharpening Algorithm (경계선 보존을 기반으로 한 SAR 영상의 잡영 제거 알고리즘에 대한 연구)

  • 손홍규;박정환;피문희
    • Proceedings of the Korean Association of Geographic Information Studies Conference / 2003.04a / pp.3-8 / 2003
  • All SAR images contain speckle noise caused by interference between electromagnetic waves, and removing it is an essential preprocessing step for obtaining high-quality SAR imagery. However, previously proposed speckle-removal algorithms reduce noise effectively but also degrade intrinsic image information such as edges. This study implements an algorithm that removes unnecessary speckle from SAR images while preserving edges, and evaluates its efficiency against existing algorithms. Unlike conventional algorithms based on the statistical characteristics of the image, the edge-sharpening algorithm first determines the presence of edges and feature information with a wavelet transform and then applies a mean filter, which improves the reliability of the edge information; by applying a one-dimensional filter in the horizontal, vertical, diagonal, and anti-diagonal directions, the presence of an edge in every direction around each pixel can be checked. In this study, an edge-sharpening filter with a 5 × 5 window was applied to a 512 × 512 1-look SAR image, and its suitability was judged by comparing the results with those of the conventional Lee, Kuan, and Frost filters on the same image. The numerical evaluation used (1) the normalized mean to check preservation of the mean value, (2) the coefficient of variation to check speckle removal, and (3) the edge preservation index (EPI) to measure edge preservation. The experimental results confirm that the edge-sharpening filter differs little from the other filters in mean preservation and speckle removal, but achieves the best edge preservation index.
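
The combination of wavelet-based edge detection and selective mean filtering can be sketched as follows, assuming PyWavelets and SciPy. The Haar wavelet, the edge threshold, and the window size are illustrative choices, not the study's exact design, which applies directional one-dimensional filters.

```python
# A minimal sketch of the edge-preserving idea: flag edge pixels from
# Haar-wavelet detail energy, then mean-filter only the non-edge
# (homogeneous, speckle-dominated) pixels.
import numpy as np
import pywt
from scipy.ndimage import uniform_filter

def edge_preserving_despeckle(img, win=5, k=2.0):
    img = img.astype(np.float64)
    cA, (cH, cV, cD) = pywt.dwt2(img, 'haar')
    detail = np.sqrt(cH**2 + cV**2 + cD**2)
    # Upsample the half-resolution detail map back to pixel resolution.
    detail = np.kron(detail, np.ones((2, 2)))[:img.shape[0], :img.shape[1]]
    is_edge = detail > detail.mean() + k * detail.std()
    smoothed = uniform_filter(img, size=win)
    return np.where(is_edge, img, smoothed)   # keep edges, smooth the rest
```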


Novel Detection Algorithm of The Upstroke of Pulse Waveform for Continuously Varying Contact Pressure Method (연속 가압방식의 맥파 측정방법을 위한 시작점 검출 알고리즘 개발)

  • Bae, Jang-Han;Jeon, Young-Ju;Kim, Jong-Yeol;Kim, Jae-Uk
    • Journal of the Institute of Electronics Engineers of Korea SC / v.49 no.2 / pp.46-54 / 2012
  • We propose a continuously varying contact pressure (CVCP)-adaptive feature extraction algorithm for pulse diagnostic analysis. The CVCP method measures the pulse waveform while continuously increasing the contact pressure (CP), offering a high-resolution signal of the pulse waveform amplitude (PWA) as a function of the contact pressure. It thus overcomes the limitation of commercially available pulse-taking devices, whose analyses rely on only a small number of PWA-CP pairs. We show that an efficient feature extraction algorithm covering the features of the CVCP method can be developed by sequentially applying the Fast Fourier Transform, peak detection by the center-to-edges method, baseline drift removal, detection of the percussion wave upstroke by the intersecting tangent method, and detection of the analysis region. Finally, in a clinical study with 30 subjects, our CVCP-adaptive feature extraction algorithm detected the upstroke with an accuracy of 99.46% and a sensitivity of 99.51%, increases of about 4.82% and 2.46%, respectively, over a conventional feature extraction method. The proposed CVCP method and CVCP-adaptive feature extraction algorithm are expected to improve the accuracy of pulse diagnostic algorithms such as those for floating/sunken and deficient/excess pulse qualities.
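
The intersecting tangent step can be illustrated on a single beat. The sketch below assumes a uniformly sampled beat segment as a NumPy array: it finds the steepest point of the rising edge and intersects its tangent with the horizontal line through the preceding foot of the beat.

```python
# A minimal sketch of the intersecting-tangent onset detector (sampling
# assumptions are illustrative): the upstroke is where the tangent at the
# maximum-slope point meets the horizontal line through the preceding foot.
import numpy as np

def upstroke_by_intersecting_tangents(beat):
    """Return the fractional sample index of the pulse-wave onset."""
    slope = np.gradient(beat)
    i_max = int(np.argmax(slope))              # steepest point of the rise
    i_foot = int(np.argmin(beat[:i_max + 1]))  # foot preceding the upstroke
    # Tangent at i_max: y = beat[i_max] + slope[i_max] * (x - i_max);
    # intersect with the horizontal line y = beat[i_foot].
    return i_max + (beat[i_foot] - beat[i_max]) / slope[i_max]
```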

Mesozoic Gold-Silver Mineralization in South Korea: Metallogenic Provinces Reestimated to the Geodynamic Setting (남한의 중생대 금-은광화작용: 지구동력학적 관점에서 재검토된 금-은광상구)

  • Choi, Seon-Gyu;Park, Sang-Joon;Kim, Sung-Won;Kim, Chang-Seong;Oh, Chang-Whan
    • Economic and Environmental Geology / v.39 no.5 s.180 / pp.567-581 / 2006
  • The Au-Ag lode deposits in South Korea are closely associated with the Mesozoic granitoids: the Jurassic deposits formed in mesozonal environments related to deep-seated granitoids, whereas the Cretaceous ones developed in porphyry-related environments associated with subvolcanic granitoids. The time-space relationships of the Au-Ag lode deposits in South Korea are closely related to the changing plate motions during the Mesozoic. Most of the Jurassic auriferous deposits (about 165~145 Ma) show fluid characteristics typical of orogenic-type gold deposits and were probably generated in a compressional to transpressional regime caused by orthogonal to oblique convergence of the Izanagi Plate with the East Asian continental margin. On the other hand, strike-slip faults and caldera-related fractures, together with subvolcanic activity, are associated with major strike-slip faults reactivated by northward (oblique) to northwestward (orthogonal) convergence, and probably played an important role in the formation of the Cretaceous Au-Ag lode deposits (about 110~45 Ma) in a continental arc setting. The temporal and spatial distinctions between the two typical Mesozoic deposit styles in South Korea probably reflect different thermal episodes (i.e., late-orogenic and post-orogenic) and ore-forming fluids related to different depths of magma emplacement caused by regional changes in the tectonic environment.

Isolated Word Recognition Using k-clustering Subspace Method and Discriminant Common Vector (k-clustering 부공간 기법과 판별 공통벡터를 이용한 고립단어 인식)

  • Nam, Myung-Woo
    • Journal of the Institute of Electronics Engineers of Korea TE / v.42 no.1 / pp.13-20 / 2005
  • In this paper, Korean isolated words are recognized using CVEM, proposed by M. Bilginer et al. CVEM easily extracts common properties from training voice signals, requires no complex calculation, and shows high accuracy in recognition results. However, CVEM has two problems: it cannot be used with a large number of training voices, and the extracted common vectors carry no discriminant information. To obtain optimal common vectors for given voice classes, many and varied voices should be used for training, but CVEM's limitation on the number of training voices prevents it from sustaining high recognition accuracy, and the absence of discriminant information among the common vectors can be a source of critical errors. To solve these problems and improve the recognition rate, a k-clustering subspace method and DCVEM are proposed, and various experiments on a voice signal database built by ETRI were performed to prove the validity of the proposed methods. The experimental results show improved performance: with the proposed methods, all of the CVEM problems can be solved without computational difficulty.
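
The common-vector idea with per-class pre-clustering can be sketched as follows. This is the plain common-vector classifier, without the paper's discriminant extension (DCVEM), and the cluster count is illustrative.

```python
# A minimal sketch: each k-means sub-class gets a common vector, i.e. the
# component of its samples lying outside the sub-class difference subspace;
# a test vector is assigned to the nearest sub-class in that sense.
import numpy as np
from sklearn.cluster import KMeans

def common_vector(X):
    """Return (orthonormal basis of the difference subspace, common vector)."""
    diffs = X[1:] - X[0]                       # difference vectors
    _, s, Vt = np.linalg.svd(diffs, full_matrices=False)
    B = Vt[s > 1e-10]                          # basis rows with nonzero s
    return B, X[0] - B.T @ (B @ X[0])          # project onto the complement

def fit_subclass_common_vectors(X, y, k=2):
    models = []                                # (class, basis, common vector)
    for c in np.unique(y):
        Xc = X[y == c]
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(Xc)
        for j in range(k):
            B, cv = common_vector(Xc[labels == j])
            models.append((c, B, cv))
    return models

def predict(models, x):
    # Distance between x's complement-space projection and each sub-class's
    # common vector; the parent class of the nearest sub-class wins.
    dists = [np.linalg.norm((x - B.T @ (B @ x)) - cv) for _, B, cv in models]
    return models[int(np.argmin(dists))][0]
```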