• Title/Summary/Keyword: New Algorithm


3D Model Retrieval Using Sliced Shape Image (단면 형상 영상을 이용한 3차원 모델 검색)

  • Park, Yu-Sin;Seo, Yung-Ho;Yun, Yong-In;Kwon, Jun-Sik;Choi, Jong-Soo
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.45 no.6
    • /
    • pp.27-37
    • /
    • 2008
  • Applications of 3D data are increasing with advances in multimedia techniques and content, so 3D data must be managed and retrieved efficiently. In this paper, we propose a new method that uses sliced shapes to efficiently extract a feature descriptor for shape-based retrieval of 3D models. Since the feature descriptor of a 3D model should be invariant to translation, rotation, and scale, a 3D model retrieval system requires normalization of the models. This paper uses principal component analysis (PCA) to normalize all models. The proposed algorithm finds the direction of each axis by PCA and creates n planes orthogonal to each axis. These planes are used to extract sliced shape images: the 2D cross-sections created by intersecting the 3D model with the planes. The proposed feature descriptor is the distribution of Euclidean distances from the center point of each sliced shape image to its outline. Performance is evaluated with the average of the normalized modified retrieval rank (ANMRR), a standard evaluation measure from MPEG-7. Our experimental results demonstrate that the proposed method is efficient for 3D model retrieval.
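
The per-slice descriptor described above can be sketched as follows: given the outline points of one sliced shape image, the feature is the distribution of Euclidean distances from the slice center to the outline. This is a minimal sketch; the PCA normalization and plane-slicing steps are omitted, and the histogram binning is an illustrative choice, not taken from the paper.

```python
import math

def slice_descriptor(outline, bins=8):
    """Distance distribution from the centroid of a sliced shape image
    to its outline points, binned into a normalized histogram.
    Normalizing by the maximum distance makes the feature scale-invariant."""
    cx = sum(p[0] for p in outline) / len(outline)
    cy = sum(p[1] for p in outline) / len(outline)
    dists = [math.hypot(x - cx, y - cy) for x, y in outline]
    dmax = max(dists) or 1.0
    hist = [0] * bins
    for d in dists:
        idx = min(int(d / dmax * bins), bins - 1)
        hist[idx] += 1
    return [h / len(dists) for h in hist]

# A square outline: all corners are equidistant from the center,
# so the whole mass falls into the last bin.
square = [(0, 0), (0, 2), (2, 2), (2, 0)]
print(slice_descriptor(square))
```

Because distances are normalized by their maximum, a uniformly scaled copy of the same outline yields an identical descriptor, which is the invariance property the abstract requires.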

Quantitative Assessment of 3D Reconstruction Procedure Using Stereo Matching (스테레오 정합을 이용한 3차원 재구성 과정의 정량적 평가)

  • Woo, Dong-Min
    • Journal of IKEEE
    • /
    • v.17 no.1
    • /
    • pp.1-9
    • /
    • 2013
  • Quantitative evaluation of a DEM (Digital Elevation Map) is very important for assessing the effectiveness of an applied 3D image analysis technique. This paper presents a new quantitative evaluation method for the 3D reconstruction process using synthetic images. The proposed method is based on the assumption that a pre-acquired DEM and ortho-image serve as pseudo ground truth. The evaluation process begins by generating a pair of photo-realistic synthetic images of the terrain from an arbitrary viewpoint, by applying the constructed ray-tracing algorithm to the pseudo ground truth. By comparing the DEM obtained from this pair of synthetic images with the pseudo ground truth, we can analyze the quantitative error in the DEM and evaluate the effectiveness of the applied 3D analysis method. To verify the effectiveness of the proposed evaluation method, we carry out quantitative and qualitative experiments. For the quantitative experiment, we establish the accuracy of the photo-realistic synthetic images. The proposed evaluation method is also tested on 3D reconstruction with varying matching-window sizes. Since the experimental results agree with expectations, we can qualitatively confirm the effectiveness of the proposed evaluation method.
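
The core comparison step, measuring the reconstructed DEM against the pseudo ground truth, reduces to cell-wise error statistics. A minimal sketch (the error metrics chosen here, bias and RMSE, are common choices and an assumption, not necessarily the paper's exact measures):

```python
import math

def dem_error(dem_test, dem_truth):
    """Cell-wise error statistics between a reconstructed DEM and the
    pseudo ground truth: mean error (bias) and RMSE, in the same
    elevation units as the inputs. Grids must have identical shape."""
    diffs = [t - g for row_t, row_g in zip(dem_test, dem_truth)
                   for t, g in zip(row_t, row_g)]
    n = len(diffs)
    mean = sum(diffs) / n
    rmse = math.sqrt(sum(d * d for d in diffs) / n)
    return mean, rmse

truth = [[10.0, 12.0], [11.0, 13.0]]   # pseudo-ground-truth elevations
recon = [[10.5, 11.5], [11.5, 13.5]]   # DEM from the synthetic stereo pair
print(dem_error(recon, truth))          # -> (0.25, 0.5)
```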

Traffic Congestion Estimation by Adopting Recurrent Neural Network (순환인공신경망(RNN)을 이용한 대도시 도심부 교통혼잡 예측)

  • Jung, Hee jin;Yoon, Jin su;Bae, Sang hoon
    • The Journal of The Korea Institute of Intelligent Transport Systems
    • /
    • v.16 no.6
    • /
    • pp.67-78
    • /
    • 2017
  • Traffic congestion cost is increasing annually. In particular, congestion caused by CBD traffic accounts for more than half of the total congestion cost. Recent advances in Big Data and AI have paved the way for Industry 4.0, and these new technologies are transforming traffic information dissemination. Accurate and timely traffic information can therefore help reduce congestion cost. This study focused on developing both recurrent and non-recurrent congestion prediction models for urban roads by adopting a Recurrent Neural Network (RNN), a class of machine learning model. A network with two hidden layers trained by the scaled conjugate gradient backpropagation algorithm was selected and tested. The analysis identified 25 meaningful links out of 33 total links with acceptable mean square errors. The authors conclude that the RNN model is a feasible model for predicting congestion.
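
The recurrence that lets an RNN carry congestion history forward can be illustrated with a single-unit Elman cell. This is a sketch only: the weights are arbitrary, and the paper's actual network (two hidden layers, scaled conjugate gradient training) is not reproduced.

```python
import math

def rnn_forward(xs, w_in, w_rec, w_out):
    """One-unit Elman RNN forward pass over a scalar time series:
    h_t = tanh(w_in*x_t + w_rec*h_{t-1}),  y_t = w_out*h_t.
    The hidden state h carries past observations into each prediction."""
    h, ys = 0.0, []
    for x in xs:
        h = math.tanh(w_in * x + w_rec * h)
        ys.append(w_out * h)
    return ys

# Hypothetical normalized link speeds drifting toward congestion.
speeds = [0.8, 0.6, 0.3, 0.2]
preds = rnn_forward(speeds, w_in=1.0, w_rec=0.5, w_out=1.0)
print(preds)
```

The key design point is `w_rec`: with it set to zero the model degenerates to a memoryless per-step regression, which cannot capture recurrent congestion patterns.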

Planning of Optimal Work Path for Minimizing Exposure Dose During Radiation Work in Radwaste Storage (방사성 폐기물 저장시설에서의 방사선 작업 중 피폭선량 최소화를 위한 최적 작업경로 계획)

  • Park, Won-Man;Kim, Kyung-Soo;Whang, Joo-Ho
    • Journal of Radiation Protection and Research
    • /
    • v.30 no.1
    • /
    • pp.17-25
    • /
    • 2005
  • Since the safety of nuclear power plants has become a major social issue, workers' radiation exposure dose has been one of the important safety factors. Existing dose calculation methods used in radiation work planning assume that the dose rate does not depend on the location within a work space, so the variation of exposure dose along different work paths is not considered. In this study, a modified numerical method was presented to estimate the exposure dose during radiation work in a radwaste storage facility, considering the effect of the distance between a worker and the sources. A new numerical algorithm was also suggested to search for the optimal work path minimizing the exposure dose in a predefined work space with given radiation sources. Finally, a virtual work simulation program was developed to visualize the radiation exposure dose during radiation work in radwaste storage and to provide simulation capability for work planning. As a numerical example, a test radiation work was simulated in a given space with two radiation sources, and the suggested optimal work path was compared with three predefined work paths. The optimal work path obtained in the study reduced the exposure dose for the given test work. Based on these results, the developed numerical method and simulation program could be useful tools for planning radiation work.
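
A location-dependent dose rate turns path planning into a shortest-path problem over the work space. The sketch below uses Dijkstra's algorithm on a grid with an inverse-square dose-rate field; both the dose model and the grid discretization are simplifying assumptions for illustration, not the paper's numerical method.

```python
import heapq

def dose_rate(cell, sources):
    """Inverse-square dose-rate field (a simplifying assumption; the
    paper's dose estimation is more detailed). Each source is
    (x, y, strength); +1.0 avoids a singularity at the source cell."""
    x, y = cell
    return sum(s / ((x - sx) ** 2 + (y - sy) ** 2 + 1.0)
               for sx, sy, s in sources)

def min_dose_path(w, h, start, goal, sources):
    """Dijkstra search where the edge cost is the dose picked up when
    stepping into a cell, yielding the path of least accumulated dose."""
    pq = [(0.0, start, [start])]
    seen = set()
    while pq:
        dose, cell, path = heapq.heappop(pq)
        if cell == goal:
            return dose, path
        if cell in seen:
            continue
        seen.add(cell)
        x, y = cell
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < w and 0 <= ny < h and (nx, ny) not in seen:
                step = dose_rate((nx, ny), sources)
                heapq.heappush(pq, (dose + step, (nx, ny), path + [(nx, ny)]))
    return float("inf"), []

# Two strong sources near the straight route force a detour.
sources = [(2, 0, 50.0), (2, 1, 50.0)]
dose, path = min_dose_path(6, 4, (0, 0), (5, 0), sources)
print(round(dose, 2), path)
```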

Performance Comparison of Out-Of-Vocabulary Word Rejection Algorithms in Variable Vocabulary Word Recognition (가변어휘 단어 인식에서의 미등록어 거절 알고리즘 성능 비교)

  • 김기태;문광식;김회린;이영직;정재호
    • The Journal of the Acoustical Society of Korea
    • /
    • v.20 no.2
    • /
    • pp.27-34
    • /
    • 2001
  • Utterance verification is used in variable vocabulary word recognition to reject words that do not belong to the in-vocabulary set or that are not correctly recognized. Utterance verification is an important technology for designing user-friendly speech recognition systems. We propose a new utterance verification algorithm for a training-free utterance verification system based on the minimum verification error. First, using a PBW (Phonetically Balanced Words) database of 445 words, we create training-free anti-phoneme models that include many PLUs (Phoneme-Like Units), so that the anti-phoneme models attain the minimum verification error. Then, for OOV (Out-Of-Vocabulary) rejection, the phoneme-based confidence measure, which uses the likelihood ratio between the phoneme model (null hypothesis) and the anti-phoneme model (alternative hypothesis), is normalized by the null hypothesis, making it more robust for OOV rejection. The word-based confidence measure built from the phoneme-based confidence measure provides improved detection of near-misses in speech recognition as well as better discrimination between in-vocabulary words and OOVs. Using the proposed anti-model and confidence measure, we achieve significant performance improvement: CA (Correct Accept for In-Vocabulary) is about 89% and CR (Correct Reject for OOV) is about 90%, an improvement of about 15-21% in ERR (Error Reduction Rate).
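
The likelihood-ratio test at the heart of this scheme can be sketched as below. The exact normalization and the word-level aggregation rule (a mean over phoneme scores) are assumptions for illustration; only the idea of dividing the log-likelihood ratio by the null-hypothesis score comes from the abstract.

```python
def phoneme_confidence(ll_phoneme, ll_anti):
    """Normalized log-likelihood-ratio confidence for one phoneme
    segment: (log P(O|phoneme) - log P(O|anti)) / |log P(O|phoneme)|.
    Positive when the phoneme model explains the observation better."""
    return (ll_phoneme - ll_anti) / abs(ll_phoneme)

def word_confidence(phone_scores):
    """Word-level confidence as the mean of phoneme confidences
    (aggregation rule assumed for illustration)."""
    return sum(phone_scores) / len(phone_scores)

def is_oov(phone_pairs, threshold=0.0):
    """Reject the word as OOV when the word confidence falls below
    the decision threshold."""
    scores = [phoneme_confidence(p, a) for p, a in phone_pairs]
    return word_confidence(scores) < threshold

# In-vocabulary word: phoneme models outscore the anti-models.
iv = [(-40.0, -55.0), (-38.0, -50.0)]
# OOV word: anti-models outscore the phoneme models.
oov = [(-60.0, -45.0), (-58.0, -44.0)]
print(is_oov(iv), is_oov(oov))   # -> False True
```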


Impulse Response Filtration Technique for the Determination of Phase Velocities from SASW Measurements (SASW시험에 의한 위상속도 결정을 위한 임펄스 응답필터 기법)

  • Stokoe, K.H., II
    • Geotechnical Engineering
    • /
    • v.13 no.1
    • /
    • pp.111-122
    • /
    • 1997
  • The calculation of phase velocities in Spectral-Analysis-of-Surface-Waves (SASW) measurements requires unwrapping phase angles. In layered systems with strong stiffness contrast, such as a pavement system, the conventional phase unwrapping algorithm, which adds integer multiples of 2π to the principal value of a phase angle, may lead to wrong phase velocities. This is because it is difficult to count the number of jumps in the phase spectrum, especially at receiver spacings where the measurements lie in the transition zone between different modes. A new phase interpretation scheme, called the "Impulse Response Filtration (IRF) Technique," is proposed, based on the separation of wave groups by filtering the impulse response determined between two receivers. The separation of a wave group relies on the impulse response filtered using information from the Gabor spectrogram, which visualizes the propagation of wave groups in frequency-time space. The filtered impulse response leads to a clear interpretation of the phase spectrum, eliminating the difficulty of counting the number of jumps. The IRF technique was verified by theoretical simulation of SASW measurements on a pavement system, which complicates wave propagation.
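
The conventional unwrapping step that the IRF technique makes reliable is simple to state: whenever the jump between consecutive wrapped phase samples exceeds π, add or subtract 2π so the sequence becomes continuous. A minimal sketch:

```python
import math

def unwrap(phases):
    """Conventional phase unwrapping: accumulate a 2*pi offset each
    time the jump between consecutive wrapped samples exceeds pi.
    This is exactly the step that fails in the transition zone between
    modes, which the IRF technique avoids by isolating one wave group."""
    out = [phases[0]]
    offset = 0.0
    for prev, cur in zip(phases, phases[1:]):
        d = cur - prev
        if d > math.pi:
            offset -= 2 * math.pi
        elif d < -math.pi:
            offset += 2 * math.pi
        out.append(cur + offset)
    return out

# A linearly increasing phase, wrapped into (-pi, pi] and recovered.
true_phase = [0.8 * i for i in range(10)]
wrapped = [math.atan2(math.sin(p), math.cos(p)) for p in true_phase]
recovered = unwrap(wrapped)
print([round(r, 6) for r in recovered])
```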


Design of an Adaptive Reed-Solomon Decoder with Varying Block Length (가변 블록길이를 갖는 적응형 리드솔로몬 복호기의 설계)

  • Song, Moon-Kyou;Kong, Min-Han
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.28 no.4C
    • /
    • pp.365-373
    • /
    • 2003
  • In this paper, we design a versatile RS decoder which can decode RS codes of any block length n as well as any message length k, based on a modified Euclid's algorithm (MEA). This unique feature is favorable for shortened RS codes of any block length, since it eliminates the need to insert zeros before decoding a shortened RS code. Furthermore, the error-correcting capability t can be changed in real time for every codeword block; thus, when a return channel is available, the error-correcting capability can be adaptively altered according to the channel state. The decoder permits 4-step pipelined processing: (1) syndrome calculation, (2) the MEA block, (3) error magnitude calculation, and (4) decoder failure check. Each step is designed with a structure suitable for decoding an RS code with varying block length. A new architecture is proposed for the MEA block in step (2), and an architecture that outputs in reversed order is employed for polynomial evaluation in step (3). To maintain the throughput rate with less circuitry, the MEA block uses multiplexing and recursion as well as an overclocking technique. The adaptive RS decoder over GF($2^8$) with a maximal error-correcting capability of 10 was designed in VHDL and successfully synthesized on an FPGA.
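
Step (1) of the pipeline, syndrome calculation, can be sketched in software. Assumptions to note: the field polynomial 0x11D and the evaluation points α¹..α²ᵗ are common conventions, not necessarily the ones used in this design, and the paper's hardware architecture is of course not reflected here.

```python
def gf_mul(a, b, poly=0x11D):
    """Carry-less multiply in GF(2^8), reducing by the field
    polynomial x^8+x^4+x^3+x^2+1 (an illustrative choice)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0x100:
            a ^= poly
    return r

def syndromes(received, t, alpha=2):
    """Evaluate the received polynomial at alpha^1 .. alpha^(2t).
    All-zero syndromes mean no detectable error, regardless of the
    (possibly shortened) block length n."""
    out = []
    x = alpha
    for _ in range(2 * t):
        acc = 0
        for c in received:          # Horner's rule, highest term first
            acc = gf_mul(acc, x) ^ c
        out.append(acc)
        x = gf_mul(x, alpha)
    return out

clean = [0] * 15                 # the all-zero word is always a codeword
corrupt = clean[:]
corrupt[3] = 0x1F                # inject a single symbol error
print(syndromes(clean, t=2), syndromes(corrupt, t=2))
```

Note that `syndromes` works unchanged for a shortened code: omitted leading symbols are simply absent from the list, which mirrors the abstract's point that no zero-padding is needed.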

Coherence Time Estimation for Performance Improvement of IEEE 802.11n Link Adaptation (IEEE 802.11n에서 전송속도 조절기법의 성능 향상을 위한 Coherence Time 예측 방식)

  • Yeo, Chang-Yeon;Choi, Mun-Hwan;Kim, Byoung-Jin;Choi, Sung-Hyun
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.36 no.3A
    • /
    • pp.232-239
    • /
    • 2011
  • The IEEE 802.11n standard provides a framework for new link adaptation. A station can request that another station provide Modulation and Coding Scheme (MCS) feedback in order to fully exploit channel variations on a link. However, if the time elapsed between the MCS feedback request and the data frame transmission using that feedback grows too large, the previously received feedback information may be obsolete, and the effectiveness of feedback-based link adaptation is compromised. If a station can estimate how fast the channel quality to the target station changes, it can improve the accuracy of link adaptation. The contribution of this paper is twofold. First, through thorough NS-2 simulations, we show how the coherence time affects the performance of MCS-feedback-based link adaptation in 802.11n networks. Second, we propose an effective algorithm for coherence time estimation: using the Allan variance statistic, a station estimates the coherence time of the receiving link. The proposed link adaptation scheme, which considers the coherence time, provides better performance.
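
The Allan variance statistic mentioned above can be sketched as follows. The non-overlapping estimator and its application to a channel-quality time series are assumptions for illustration; the paper's exact estimator and mapping to coherence time are not reproduced.

```python
def allan_variance(samples, m):
    """Non-overlapping Allan variance at averaging factor m:
    half the mean squared difference between successive bin averages.
    A flat channel-quality series gives ~0; a fast-varying one gives a
    large value, the cue for judging how fresh MCS feedback still is."""
    bins = [sum(samples[i:i + m]) / m
            for i in range(0, len(samples) - m + 1, m)]
    diffs = [(b2 - b1) ** 2 for b1, b2 in zip(bins, bins[1:])]
    return 0.5 * sum(diffs) / len(diffs)

stable = [10.0] * 16             # slowly varying link: long coherence time
varying = [10.0, -10.0] * 8      # rapidly varying link: short coherence time
print(allan_variance(stable, 1), allan_variance(varying, 1))  # -> 0.0 200.0
```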

A Study on the Safety Distance of Benzene and Acrylonitrile Releases in Accordance with Dike and Hole Size (벤젠 및 아크릴로나이트릴 누출시 방류벽 유무 및 누출공에 따른 피해 영향범위 산정에 관한 연구)

  • Kawg, Youngmin;Oak, Jaemin;Yoon, Sukyoung;Jung, Seungho
    • Journal of the Korean Institute of Gas
    • /
    • v.22 no.1
    • /
    • pp.18-25
    • /
    • 2018
  • As industries have developed, the amounts of hazardous materials handled have increased, and with them the possibility of plant accidents. In particular, dispersion of toxic materials causes serious harm to human life and the environment, so it is very important to determine the safety distance of a release accident. In this paper, we propose new algorithms for toxic liquids such as benzene and acrylonitrile, and use them to predict safety distances. The worst-case accidental release scenario was assumed to be instantaneous release of the entire quantity within 10 minutes; a release from a partial rupture of a line was used as the alternative scenario, following NICS (National Institute of Chemical Safety) guidelines. Using the ALOHA program and the algorithm for liquid toxic materials, we present graphs as well as correlation equations that emergency responders can utilize.
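
The notion of a safety distance can be illustrated with a heavily simplified Gaussian-plume sketch: march downwind until the centerline concentration drops below the toxic endpoint. Everything here is an assumption for illustration: ALOHA's models are far more detailed, the dispersion coefficients are rough class-D values, and the same sigma is reused for both plume axes.

```python
import math

def centerline_conc(q, u, x, a=0.08, b=0.0001):
    """Ground-level centerline concentration of a neutrally buoyant
    Gaussian plume, C = Q / (pi * u * sigma^2), with a single dispersion
    coefficient sigma = a*x/sqrt(1 + b*x) used for both axes
    (a deliberate simplification; real models use separate sigma_y,
    sigma_z and stability-dependent coefficients)."""
    sigma = a * x / math.sqrt(1.0 + b * x)
    return q / (math.pi * u * sigma * sigma)

def safety_distance(q, u, threshold, x_max=20000.0, step=10.0):
    """Smallest sampled downwind distance (m) at which the
    concentration falls below the toxic endpoint."""
    x = step
    while x <= x_max:
        if centerline_conc(q, u, x) < threshold:
            return x
        x += step
    return x_max

# Hypothetical release: 0.5 kg/s source, 3 m/s wind, 1e-3 kg/m^3 endpoint.
print(safety_distance(q=0.5, u=3.0, threshold=1e-3))
```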

Color-Depth Combined Semantic Image Segmentation Method (색상과 깊이정보를 융합한 의미론적 영상 분할 방법)

  • Kim, Man-Joung;Kang, Hyun-Soo
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.18 no.3
    • /
    • pp.687-696
    • /
    • 2014
  • This paper presents a semantic object extraction method using a user's stroke input, color, and depth information. It is assumed that a semantically meaningful object is surrounded by a few strokes from the user and has similar depths over the whole object. In the proposed method, the region of interest (ROI) is decided from the stroke input, and the semantically meaningful object is extracted using color and depth information. Specifically, the proposed method consists of two steps. The first step is over-segmentation inside the ROI using color and depth information. The second step is object extraction, where over-segmented regions are classified into the object region and the background region according to the depth of each region. For the over-segmentation step, we propose a new marker extraction method with two components: an adaptive thresholding scheme that maximizes the number of segmented regions, and an adaptive weighting scheme for the color and depth components in the computation of the morphological gradients required for marker extraction. For object extraction, we classify over-segmented regions into object and background regions in order from the boundary regions to the inner regions, comparing the average depth of each region with the average depth of all regions already classified as object. In experimental results, we demonstrate that the proposed method yields reasonable object extraction results.
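
The weighted color-depth morphological gradient used in the marker extraction step can be sketched as follows. The 3x3 structuring element and the fixed weight are illustrative assumptions; the paper adapts the weight per image.

```python
def morph_gradient(img):
    """Morphological gradient on a 2D grid: dilation minus erosion
    with a 3x3 structuring element, i.e. max minus min over the
    neighborhood of each cell (clipped at the borders)."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[ny][nx]
                    for ny in range(max(0, y - 1), min(h, y + 2))
                    for nx in range(max(0, x - 1), min(w, x + 2))]
            out[y][x] = max(vals) - min(vals)
    return out

def combined_gradient(color, depth, w_color=0.5):
    """Weighted combination of the color and depth morphological
    gradients, so that edges present in either channel survive.
    A fixed w_color is used here purely for illustration."""
    gc, gd = morph_gradient(color), morph_gradient(depth)
    return [[w_color * c + (1 - w_color) * d
             for c, d in zip(rc, rd)] for rc, rd in zip(gc, gd)]

color = [[0, 0, 9, 9],          # color edge between columns 1 and 2
         [0, 0, 9, 9]]
depth = [[1, 1, 1, 5],          # depth edge between columns 2 and 3
         [1, 1, 1, 5]]
print(combined_gradient(color, depth))
```

Note how the combined map responds at both edge locations even though each single channel sees only one of them, which is the rationale for fusing color and depth before marker extraction.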