• Title/Summary/Keyword: Number Matching


A Study on Motion Estimator Design Using DCT DC Value (DCT 직류 값을 이용한 움직임 추정기 설계에 관한 연구)

  • Lee, Gwon-Cheol;Park, Jong-Jin;Jo, Won-Gyeong
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.38 no.3
    • /
    • pp.258-268
    • /
    • 2001
  • Compression is essential for transmitting high-quality moving pictures, which contain large amounts of image data. In moving picture compression, motion estimation algorithms are used to reduce temporal redundancy. Block matching algorithms are commonly divided into partial (fast) search and full search algorithms. The full search algorithm used in this paper compares the reference block with every candidate block in the search window; it is very efficient and has a simple data flow and control circuit, but the larger the search window, the larger the hardware, because a large amount of computation is required. In this paper, we design a full search block matching motion estimator. Using the DCT DC values, we determine luminance, and we apply a 3-bit compare-selector using bit planes to the I (intra-coded) picture instead of the full 8-bit luminance signal. We also suggest using the same selected bits for the P (predictive-coded) and B (bidirectionally-coded) pictures. We compare the proposed method with the baseline full search in terms of PSNR (peak signal-to-noise ratio) using a C-language model, with an 8$\times$8 reference block, a 24$\times$24 search window, and 352$\times$288 gray-scale standard video images. The resulting difference is too small to be noticeable. The suggested motion estimator reduces hardware size by 38.3% for structure I and 30.7% for structure II, and reduces memory by 31.3% for both structures.
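The full search strategy described in the abstract, comparing the reference block with every candidate position in the search window, can be sketched minimally in Python (SAD is assumed here as the matching criterion; the paper's hardware uses reduced bit-plane comparisons, which this sketch omits):

```python
import numpy as np

def full_search(ref_block, search_window):
    """Exhaustive block matching: slide the reference block over every
    position in the search window and return the offset with the
    minimal sum of absolute differences (SAD)."""
    bh, bw = ref_block.shape
    sh, sw = search_window.shape
    best_sad, best_offset = float("inf"), (0, 0)
    for dy in range(sh - bh + 1):
        for dx in range(sw - bw + 1):
            cand = search_window[dy:dy + bh, dx:dx + bw]
            sad = int(np.abs(cand.astype(int) - ref_block.astype(int)).sum())
            if sad < best_sad:
                best_sad, best_offset = sad, (dy, dx)
    return best_offset, best_sad

# 8x8 reference block inside a 24x24 search window, matching the paper's setup
rng = np.random.default_rng(0)
window = rng.integers(0, 256, (24, 24), dtype=np.uint8)
ref = window[5:13, 9:17].copy()   # plant the block at offset (5, 9)
offset, sad = full_search(ref, window)
print(offset, sad)
```

The exhaustive double loop is what makes the hardware cost grow with the search window, which is the motivation for the paper's reduced-bit comparisons.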


A Prediction Search Algorithm by using Temporal and Spatial Motion Information from the Previous Frame (이전 프레임의 시공간 모션 정보에 의한 예측 탐색 알고리즘)

  • Kwak, Sung-Keun;Wee, Young-Cheul;Kimn, Ha-Jine
    • Journal of the Korea Computer Graphics Society
    • /
    • v.9 no.3
    • /
    • pp.23-29
    • /
    • 2003
  • In a video sequence, there is temporal correlation between the motion vector of the current block and the motion vector of the corresponding block in the previous frame. If useful and sufficient information can be obtained from the motion vector of the same-coordinate block of the previous frame, the total number of search points used to find the motion vector of the current block may be reduced significantly. In this paper, we propose block-matching motion estimation using an adaptive initial search point predicted from the motion information of the same block in the previous frame. The first search point of the proposed algorithm is moved to this initial point at the most probable location, and the search then proceeds according to a fast search pattern. Simulation results show that PSNR (peak signal-to-noise ratio) values are improved by up to 1.05 dB depending on the image sequence, and by about 0.33~0.37 dB on average, while search times are reduced by about 29~97% compared with other fast search algorithms. The results also show that the proposed scheme gives better subjective picture quality than the other fast search algorithms and comes closer to the performance of the FS (full search) algorithm.
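The core idea, seeding the search with the motion vector of the same-coordinate block from the previous frame and then refining locally, can be sketched as follows (the refinement below is a simple exhaustive neighborhood scan, not the paper's fast search pattern; `radius` and the frame sizes are illustrative assumptions):

```python
import numpy as np

def sad(a, b):
    # sum of absolute differences between two equally sized blocks
    return int(np.abs(a.astype(int) - b.astype(int)).sum())

def predictive_search(cur, prev, block_xy, block=8, prev_mv=(0, 0), radius=2):
    """Start the search at the motion vector predicted from the
    same-coordinate block of the previous frame, then refine in a
    small neighborhood around that initial point."""
    y, x = block_xy
    ref = cur[y:y + block, x:x + block]
    best_mv, best_cost = prev_mv, None
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            my, mx = prev_mv[0] + dy, prev_mv[1] + dx
            yy, xx = y + my, x + mx
            if 0 <= yy <= prev.shape[0] - block and 0 <= xx <= prev.shape[1] - block:
                cost = sad(ref, prev[yy:yy + block, xx:xx + block])
                if best_cost is None or cost < best_cost:
                    best_mv, best_cost = (my, mx), cost
    return best_mv, best_cost

# synthetic example: the current frame is the previous frame shifted by (-1, -2)
rng = np.random.default_rng(1)
prev = rng.integers(0, 256, (32, 32), dtype=np.uint8)
cur = np.roll(prev, shift=(1, 2), axis=(0, 1))
mv, cost = predictive_search(cur, prev, (8, 8))
print(mv, cost)
```

A good prediction keeps the refinement window small, which is where the reduction in search points comes from.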


Exploitation of Auxiliary Motion Vector in Video Coding for Robust Transmission over Internet (화상통신에서의 오류전파 제어를 위한 보조모션벡터 코딩 기법)

  • Lee, Joo-Kyong;Choi, Tae-Uk;Chung, Ki-Dong
    • The KIPS Transactions:PartB
    • /
    • v.9B no.5
    • /
    • pp.571-578
    • /
    • 2002
  • In this paper, we propose a video sequence coding scheme called AMV (Auxiliary Motion Vector) to minimize error propagation caused by transmission errors over the Internet. Unlike conventional coding schemes, the AMV coder selects, for each macroblock in a frame, the two best matching blocks among several preceding frames. The best matching block, called the primary block, is used for motion compensation of the destination macroblock. The other block, called the auxiliary block, replaces the primary block if it is lost at the decoder. When a primary block is corrupted or lost during transmission, the decoder can efficiently and simply suppress error propagation to subsequent frames by substituting the auxiliary block. This scheme reduces both the number and the impact of error propagations. We implemented the proposed coder by modifying the H.263 standard codec and evaluated its performance in simulation. The simulation results show that the AMV coder is more efficient than the H.263 baseline coder at high packet loss rates.

Integration Application of Node-Link Data Using Open LR Method (Open LR 기법을 이용한 노드-링크 데이터의 통합활용 방안에 관한 연구)

  • Kwon, Tae Ho;Choi, Yun-Soo
    • Journal of the Korean Institute of Gas
    • /
    • v.25 no.5
    • /
    • pp.78-87
    • /
    • 2021
  • This paper analyzes the range and attributes of traffic information services between domestic public institutions and private companies, and suggests how node-link information from each company could be applied jointly and how private traffic information could be shared. For this purpose, the current state and attributes of domestic and foreign traffic information node-links (link length, node ID number, U-turn information, lane information, left-turn information, right-turn information, etc.) were analyzed. The analysis covered the node-links of the national standard and those of two private companies. Jongno-gu, Seoul was selected as the experimental area because its standard-link information is complex, its traffic volume is high, and various standard-links exist there. The experiment compared and analyzed the traffic information attributes of the three types of node-links and performed overlapping node-link matching (utilizing an encoding/decoding method), and the possibility of matching node-links and attributes of different specifications was analyzed using the Open LR technique.

3D Model Construction and Evaluation Using Drone in Terms of Time Efficiency (시간효율 관점에서 드론을 이용한 3차원 모형 구축과 평가)

  • Son, Seung-Woo;Kim, Dong-Woo;Yoon, Jeong-Ho;Jeon, Hyung-Jin;Kang, Young-Eun;Yu, Jae-Jin
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.19 no.11
    • /
    • pp.497-505
    • /
    • 2018
  • In situations where the amount of bulky waste needs to be quantified, a three-dimensional model of the waste can be constructed using drones. This study constructed drone-based 3D models with a range of flight parameters and GCP surveys, analyzed the relationship between accuracy and the time required, and derived a suitable drone application technique for estimating the amount of waste in a short time. Images of the waste were captured with the drone, and auto-matching was performed to produce a model with 3D coordinates. The accuracy of each 3D model was evaluated by RMSE calculations. An analysis of the time required and the characteristics of the top 15 models with the highest accuracy showed that Model 1, which had the highest accuracy with an RMSE of 0.08, required 954.87 min. The RMSE of the 10th 3D model, which required the shortest time (98.27 min), was 0.15, not significantly different from that of the most accurate model. The most efficient flight parameters were a high overlap ratio at a flight altitude of 150 m (60-70% overlap and 30-40% sidelap), and the minimum number of GCPs required for image matching was 10.
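The RMSE evaluation against surveyed check points can be sketched as follows (the coordinate values below are hypothetical, for illustration only):

```python
import numpy as np

def rmse_3d(model_pts, check_pts):
    """Root-mean-square error between model coordinates and surveyed
    check-point coordinates, over the 3D point-to-point distances."""
    d = np.linalg.norm(np.asarray(model_pts) - np.asarray(check_pts), axis=1)
    return float(np.sqrt(np.mean(d ** 2)))

# hypothetical model vs. GCP check coordinates (metres)
model = [(10.00, 20.05, 5.02), (30.10, 40.00, 6.00)]
check = [(10.00, 20.00, 5.00), (30.00, 40.00, 6.05)]
print(round(rmse_3d(model, check), 3))
```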

Restoration of implant-supported fixed dental prosthesis using the automatic abutment superimposition function of the intraoral scanner in partially edentulous patients (부분무치악 환자에서 구강스캐너의 지대주 자동중첩기능을 이용한 임플란트 고정성 보철물 수복 증례)

  • Park, Keun-Woo;Park, Ji-Man;Lee, Keun-Woo
    • The Journal of Korean Academy of Prosthodontics
    • /
    • v.59 no.1
    • /
    • pp.79-87
    • /
    • 2021
  • The digital workflow of optical impressions taken with an intraoral scanner and CAD-CAM manufacture of dental prostheses is actively developing. The complex process of traditional impression taking, definitive cast fabrication, wax pattern making, and casting has been shortened, and the number of patient visits can also be reduced. Advances in intraoral scanner technology have increased the precision and accuracy of optical impressions, and their indication is progressively widening toward long-span fixed dental prostheses. This case report describes a long-span implant case in which the operator fully utilized a digital workflow, including a computer-guided implant surgical template and a CAD-CAM restoration produced after digital impression taking. The provisional restoration and customized abutments were prepared from an optical impression taken on the day of implant surgery. Moreover, the final prosthesis was fabricated from a digital scan while reusing the customized abutments from the provisional restoration. During data acquisition, the STL data of the customized abutments, previously scanned at the time of provisional restoration delivery, were imported and automatically aligned with the digital impression data using the 'A.I. abutment matching algorithm' of the intraoral scanner software. This algorithm made it possible to obtain the subgingival margin without gingival retraction or abutment removal. Using the intraoral scanner's advanced functions, the operator could shorten the total treatment time, so that both the patient and the clinician experienced convenient and effective treatment, and a prosthesis could be manufactured with predictability.

Matching Analysis between Actress Son Ye-jin's Core Persona and Audience Responses to Her Starring Works (배우 손예진의 코어 페르소나와 주연 작품에 대한 수용자 반응과의 정합성 분석)

  • Kim, Jeong-Seob
    • Journal of Korea Entertainment Industry Association
    • /
    • v.13 no.4
    • /
    • pp.93-106
    • /
    • 2019
  • A persona is an actor's external ego constructed by playing various roles, another self-portrait in the eyes of the audience. This study was conducted to analyze persona identity, including the core persona (CP), and to draw implications for the growth strategy of the actress Son Ye-jin, called the "melo queen," by verifying the consistency between her CP and audience responses to her past starring works. Following related theories and models, the persona was first set as image, visuality, personality, and consistency, and these dimensions were used to extract and sort descriptive texts about Son from news articles in the six major Korean newspapers over the last five years using content analysis. We then analyzed the number of viewers of her movies and the audience share of her dramas by genre. Son's persona components were found to comprise 54.2% image (34.0% melo/romance images, 20.2% non-melo/romance images), 25.6% visuality, 13.8% consistency, and 6.4% personality; her CP was thus derived from the melo/romance image. Comparing this with audience reactions, the melo/romance genre dominated her dramas, showing consistency with the CP, but in her films the non-melo/romance genre was dominant, so the two did not match. These results were attributed to wide gaps between dramas and movies in key viewers, box-office factors, and the degree of genre hybridity and experimentation. Therefore, Son should actively use her CP in dramas while challenging various roles in films in order to expand her persona spectrum.

A Proposal of a Keyword Extraction System for Detecting Social Issues (사회문제 해결형 기술수요 발굴을 위한 키워드 추출 시스템 제안)

  • Jeong, Dami;Kim, Jaeseok;Kim, Gi-Nam;Heo, Jong-Uk;On, Byung-Won;Kang, Mijung
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.3
    • /
    • pp.1-23
    • /
    • 2013
  • To discover significant social issues, such as unemployment, economic crisis, and social welfare, that urgently need to be solved in modern society, the existing approach is for researchers to collect opinions from professional experts and scholars through online or offline surveys. However, this method is not always effective. Due to cost, a large number of survey replies is seldom gathered, and in some cases it is hard to find experts dealing with specific social issues, so the sample set is often small and may be biased. Furthermore, regarding the same social issue, several experts may reach totally different conclusions because each has a subjective point of view and a different background. In such cases, it is considerably hard to figure out what the current social issues are and which of them are really important. To surmount these shortcomings, in this paper we develop a prototype system that semi-automatically detects social issue keywords, representing social issues and problems, from about 1.3 million news articles issued by about 10 major domestic presses in Korea from June 2009 until July 2012. Our proposed system consists of (1) collecting the news articles and extracting their texts, (2) identifying only the news articles related to social issues, (3) analyzing the lexical items of the Korean sentences, (4) finding a set of topics regarding social keywords over time based on probabilistic topic modeling, (5) matching relevant paragraphs to each topic, and (6) visualizing social keywords for easy understanding. In particular, we propose a novel matching algorithm relying on generative models, whose goal is to best match paragraphs to each topic.
Technically, using a topic model such as Latent Dirichlet Allocation (LDA), we can obtain a set of topics, each of which has relevant terms and their probability values. In our problem, given a set of text documents (e.g., news articles), LDA produces a set of topic clusters, and each topic cluster is then labeled by human annotators, where each topic label stands for a social keyword. For example, suppose there is a topic (e.g., Topic1 = {(unemployment, 0.4), (layoff, 0.3), (business, 0.3)}) and a human annotator labels Topic1 "Unemployment Problem". In this example, it is non-trivial to understand what happened to the unemployment problem in our society: looking only at social keywords, we have no idea of the detailed events occurring around them. To tackle this matter, we develop a matching algorithm that computes the probability of a paragraph given a topic, relying on (i) the topic terms and (ii) their probability values. Given a set of text documents, we segment each document into paragraphs; in the meantime, using LDA, we extract a set of topics from the documents. Based on our matching process, each paragraph is assigned to the topic it best matches, so that each topic ends up with several best-matched paragraphs. Furthermore, suppose there is a topic (e.g., Unemployment Problem) with the best-matched paragraph (e.g., "Up to 300 workers lost their jobs in XXX company at Seoul"). In this case, we can grasp detailed information about the social keyword, such as "300 workers", "unemployment", "XXX company", and "Seoul". In addition, our system visualizes social keywords over time. Therefore, through our matching process and keyword visualization, most researchers will be able to detect social issues easily and quickly.
Through this prototype system, we have detected various social issues appearing in our society and have also shown the effectiveness of our proposed methods in experimental results. Note that our proof-of-concept system is available at http://dslab.snu.ac.kr/demo.html.
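One plausible reading of the paragraph-to-topic matching step, scoring each paragraph by the log-likelihood of its terms under a topic's term distribution, is sketched below; the topic dictionaries, floor probability, and example paragraph are assumptions for illustration, not the authors' implementation:

```python
import math

# hypothetical LDA output: topic label -> {term: probability}
topics = {
    "Unemployment Problem": {"unemployment": 0.4, "layoff": 0.3, "business": 0.3},
    "Housing Problem": {"rent": 0.5, "housing": 0.3, "price": 0.2},
}

def match_topic(paragraph, topics, floor=1e-6):
    """Assign the paragraph to the topic maximizing the log-likelihood
    of its terms under that topic's term distribution; terms absent
    from a topic get a small floor probability."""
    words = paragraph.lower().split()
    best_topic, best_ll = None, -math.inf
    for name, dist in topics.items():
        ll = sum(math.log(dist.get(w, floor)) for w in words)
        if ll > best_ll:
            best_topic, best_ll = name, ll
    return best_topic

p = "layoff wave hits business as unemployment climbs"
print(match_topic(p, topics))
```

Each topic then accumulates its best-matched paragraphs, which is what lets a reader recover concrete events behind a bare social keyword.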

Fast Sequential Bundle Adjustment Algorithm for Real-time High-Precision Image Georeferencing (실시간 고정밀 영상 지오레퍼런싱을 위한 고속 연속 번들 조정 알고리즘)

  • Choi, Kyoungah;Lee, Impyeong
    • Korean Journal of Remote Sensing
    • /
    • v.29 no.2
    • /
    • pp.183-195
    • /
    • 2013
  • Real-time high-precision image georeferencing is important for realizing image-based precise navigation and sophisticated augmented reality. In general, high-precision image georeferencing can be achieved using the conventional simultaneous bundle adjustment algorithm, which, due to its processing time, can only be performed as post-processing. The recently proposed sequential bundle adjustment algorithm can rapidly produce results of similar accuracy and thus opens the possibility of real-time processing. However, since its processing time still increases linearly with the number of images, real-time processing is not guaranteed when the number of images grows too large. Based on this algorithm, we propose a modified fast algorithm whose processing time stays within a limit regardless of the number of images. Since the proposed algorithm considers only the existing images that are highly correlated with the newly acquired image, it can both bound the processing time and produce accurate results. We applied the proposed algorithm to images acquired at 1 Hz. The processing time was about 0.02 seconds per image at acquisition time on average, and the accuracy was about ${\pm}5$ cm in ground point coordinates compared with the results of the conventional simultaneous bundle adjustment algorithm. If this algorithm is combined with a fast, highly reliable image matching algorithm, it will enable high-precision real-time georeferencing of moving images acquired from a smartphone or UAV by complementing the performance of the position and attitude sensors mounted with the camera.

Why Gabor Frames? Two Fundamental Measures of Coherence and Their Role in Model Selection

  • Bajwa, Waheed U.;Calderbank, Robert;Jafarpour, Sina
    • Journal of Communications and Networks
    • /
    • v.12 no.4
    • /
    • pp.289-307
    • /
    • 2010
  • The problem of model selection arises in a number of contexts, such as subset selection in linear regression, estimation of structures in graphical models, and signal denoising. This paper studies non-asymptotic model selection for the general case of arbitrary (random or deterministic) design matrices and arbitrary nonzero entries of the signal. In this regard, it generalizes the notion of incoherence in the existing literature on model selection and introduces two fundamental measures of coherence, termed the worst-case coherence and the average coherence, among the columns of a design matrix. It utilizes these two measures of coherence to provide an in-depth analysis of a simple, model-order agnostic one-step thresholding (OST) algorithm for model selection, and proves that OST is feasible for exact as well as partial model selection as long as the design matrix obeys an easily verifiable property, termed the coherence property. One of the key insights offered by the ensuing analysis is that OST can successfully carry out model selection even when methods based on convex optimization, such as the lasso, fail due to rank deficiency of submatrices of the design matrix. In addition, the paper establishes that if the design matrix has reasonably small worst-case and average coherence, then OST performs near-optimally when either (i) the energy of any nonzero entry of the signal is close to the average signal energy per nonzero entry, or (ii) the signal-to-noise ratio in the measurement system is not too high. Finally, two other key contributions of the paper are that (i) it provides bounds on the average coherence of Gaussian matrices and Gabor frames, and (ii) it extends the results on model selection using OST to low-complexity, model-order agnostic recovery of sparse signals with arbitrary nonzero entries.
In particular, this part of the analysis in the paper implies that an Alltop Gabor frame together with OST can successfully carry out model selection and recovery of sparse signals irrespective of the phases of the nonzero entries even if the number of nonzero entries scales almost linearly with the number of rows of the Alltop Gabor frame.
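The two coherence measures can be computed directly from the Gram matrix of a column-normalized design matrix. A small numpy sketch following the usual definitions (worst-case: largest off-diagonal inner-product magnitude; average: largest magnitude of a column's mean inner product with the others) — treat the exact normalization as an assumption rather than the paper's notation:

```python
import numpy as np

def coherence_measures(A):
    """Worst-case and average coherence of a matrix with unit-norm
    columns: the largest off-diagonal |Gram| entry, and the largest
    magnitude of a column's average inner product with the others."""
    A = A / np.linalg.norm(A, axis=0)   # normalize columns to unit norm
    G = A.T @ A
    p = G.shape[0]
    off = G - np.eye(p)                 # zero out the diagonal
    worst = float(np.abs(off).max())
    avg = float(np.abs(off.sum(axis=1) / (p - 1)).max())
    return worst, avg

# orthonormal columns (the identity) have zero coherence of both kinds
w, a = coherence_measures(np.eye(4))
print(w, a)
```

For structured dictionaries such as Gabor frames, these quantities are what the paper's coherence property is checked against.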