• Title/Summary/Keyword: Segment Algorithm

Search Results: 590

Mobile Robot Localization and Mapping using Scale-Invariant Features (스케일 불변 특징을 이용한 이동 로봇의 위치 추정 및 매핑)

  • Lee, Jong-Shill;Shen, Dong-Fan;Kwon, Oh-Sang;Lee, Eung-Hyuk;Hong, Seung-Hong
    • Journal of IKEEE / v.9 no.1 s.16 / pp.7-18 / 2005
  • A key capability of an autonomous mobile robot is to localize itself accurately while simultaneously building a map of the environment. In this paper, we propose a vision-based mobile robot localization and mapping algorithm using scale-invariant features. A camera with a fisheye lens facing the ceiling is attached to the robot to acquire high-level, scale-invariant features, which are used in the map building and localization processes. As pre-processing, the fisheye images are calibrated to remove radial distortion, and labeling and convex-hull techniques are then used to segment the ceiling region from the wall region. During initial map building, features are computed for the segmented regions and stored in the map database. Features are continuously computed from sequential input images and matched against the existing map until map building is finished; features that do not match are added to the map. Localization is performed simultaneously with feature matching during map building: when features are matched against the existing map, the robot pose is estimated and the map database is updated at the same time. The proposed method can build a map of a 50 m² area in 2 minutes. The positioning accuracy is ±13 cm, and the average heading error at the estimated position is ±3 degrees.
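
The pre-processing chain described in this abstract (fisheye undistortion, labeling, convex hull, scale-invariant features) can be sketched with OpenCV. The snippet below is only an illustrative outline, not the authors' code; the camera matrix `K` and distortion coefficients `dist_coeffs` are assumed to come from a prior calibration step, and a standard radial-distortion model stands in for a full fisheye model.

```python
# Hypothetical preprocessing sketch: undistort a ceiling-facing image, label
# the largest bright blob, and mask it with its convex hull before extracting
# scale-invariant features.
import cv2
import numpy as np

def ceiling_mask(gray, K, dist_coeffs):
    # 1) Remove radial distortion using assumed calibration parameters.
    undistorted = cv2.undistort(gray, K, dist_coeffs)

    # 2) Rough ceiling/wall separation: threshold, then keep the largest
    #    connected component (the labeling step).
    _, binary = cv2.threshold(undistorted, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(binary)
    largest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])  # skip background

    # 3) The convex hull of the largest blob approximates the ceiling region.
    pts = np.column_stack(np.where(labels == largest))[:, ::-1]  # (x, y)
    hull = cv2.convexHull(pts.astype(np.int32))
    mask = np.zeros_like(undistorted)
    cv2.fillConvexPoly(mask, hull, 255)
    return undistorted, mask

# Scale-invariant features restricted to the ceiling region, e.g. with SIFT:
# img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
# und, mask = ceiling_mask(img, K, dist_coeffs)
# kp, desc = cv2.SIFT_create().detectAndCompute(und, mask)
```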


The Speaker Recognition System using the Pitch Alteration (피치변경을 이용한 화자인식 시스템)

  • Jung JongSoon;Bae MyungJin
    • Proceedings of the Acoustical Society of Korea Conference / spring / pp.115-118 / 2002
  • Parameters used in a speaker recognition system should fully express the speaker characteristics contained in speech. That is, a feature whose inter-speaker variance is larger than its intra-speaker variance is useful for distinguishing between speakers. To minimize the error between speakers, improved recognition techniques are required in addition to such discriminative features. Recent simulation results show that more accurate performance is obtained by using both the dynamic and the constant characteristics of a speaking habit. We therefore propose the following approach: prosodic information is used as the feature vector of speech. The feature vector generally used in speaker recognition systems models spectral information and performs well in noise-free conditions; however, this feature is distorted in noisy conditions, which reduces the recognition rate. In this paper, the pitch contour is divided into segments from which dynamic characteristics can be estimated, and these are used as recognition features. Simulations confirm that this dynamic characteristic is very robust in noisy conditions. Acceptance or rejection is decided by pattern comparison, and the recognition rate of the proposed algorithm improves on that obtained with spectral and prosodic information alone. In particular, a stable recognition rate is obtained in noisy conditions.
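
As a rough illustration of segment-wise dynamic pitch features, the sketch below splits an already extracted pitch contour into fixed-length segments and fits a line to each; this is my own minimal interpretation, not the authors' method, and the segment length and decision threshold are arbitrary assumptions.

```python
# Illustrative sketch: per-segment slope/mean/std of the pitch contour as
# dynamic prosodic features for speaker verification.
import numpy as np

def segment_pitch_features(f0, seg_len=20):
    """f0: 1-D array of pitch values per frame (0 for unvoiced frames)."""
    feats = []
    for start in range(0, len(f0) - seg_len + 1, seg_len):
        seg = f0[start:start + seg_len]
        voiced = seg[seg > 0]
        if len(voiced) < 2:
            continue  # skip mostly unvoiced segments
        t = np.arange(len(voiced))
        slope, _intercept = np.polyfit(t, voiced, 1)  # line fit per segment
        feats.append([slope, voiced.mean(), voiced.std()])
    return np.asarray(feats)

# A speaker template could be the mean feature vector; accept/reject is then a
# distance test against the test utterance (THRESHOLD is hypothetical):
# template = segment_pitch_features(train_f0).mean(axis=0)
# test = segment_pitch_features(test_f0).mean(axis=0)
# accept = np.linalg.norm(template - test) < THRESHOLD
```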


Time-Scale Modification of Polyphonic Audio Signals Using Sinusoidal Modeling (정현파 모델링을 이용한 폴리포닉 오디오 신호의 시간축 변화)

  • 장호근;박주성
    • The Journal of the Acoustical Society of Korea / v.20 no.2 / pp.77-85 / 2001
  • This paper proposes a method for time-scale modification of polyphonic audio signals based on a sinusoidal model. The signal is modeled as a sinusoidal component plus a noise component. A multiresolution filter bank is designed that splits the input signal into six octave-spaced subbands without aliasing, and sinusoidal modeling is applied to each subband signal. To alleviate the smearing of transients during time-scale modification, a dynamic segmentation method is applied to the subbands, adaptively determining the analysis-synthesis frame size to fit the time-frequency characteristics of each subband signal. To extract the sinusoidal components and calculate their parameters, a matching pursuit algorithm is applied to each analysis frame of the subband signal. In accordance with the spectrum analysis, a psychoacoustic model implementing the effect of frequency masking is incorporated into the matching pursuit to provide a reasonable stopping condition for the iteration and to reduce the number of sinusoids. The noise component, obtained by subtracting the signal synthesized from the sinusoidal components from the original signal, is modeled by a line-segment model of the short-time spectral envelope. For various polyphonic audio signals, simulation results show that the proposed sinusoidal modeling can synthesize the original signal without loss of perceptual quality and performs more robust, higher-quality time-scale modification at large scale factors, because transients are represented without perceptual loss.
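
A bare-bones version of per-frame matching pursuit with a sinusoidal dictionary is sketched below; it is not the paper's implementation. A simple residual-energy test stands in for the psychoacoustic masking stop condition, and the DFT bins play the role of the sinusoidal dictionary.

```python
# Rough sketch: repeatedly pick the strongest DFT bin of the residual,
# subtract the corresponding sinusoid, and stop when the residual is small.
import numpy as np

def matching_pursuit_sinusoids(frame, max_partials=40, stop_ratio=1e-3):
    n = len(frame)
    residual = frame * np.hanning(n)            # analysis window
    target = stop_ratio * np.sum(residual ** 2) # stand-in stop condition
    partials = []                               # (bin index, complex amplitude)
    t = np.arange(n)
    for _ in range(max_partials):
        spectrum = np.fft.rfft(residual)
        k = int(np.argmax(np.abs(spectrum[1:-1])) + 1)  # skip DC and Nyquist
        amp = spectrum[k] / n
        # Real sinusoid corresponding to bin k (and its conjugate bin).
        atom = 2.0 * np.real(amp * np.exp(2j * np.pi * k * t / n))
        residual = residual - atom
        partials.append((k, amp))
        if np.sum(residual ** 2) < target:
            break
    return partials, residual   # residual feeds the noise (line-segment) model
```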


A Study on Optical Condition and preprocessing for Input Image Improvement of Dented and Raised Characters of Rubber Tires (고무타이어 문자열 입력영상 개선을 위한 전처리와 광학조건에 관한 연구)

  • 류한성;최중경;권정혁;구본민;박무열
    • Journal of the Korea Institute of Information and Communication Engineering / v.6 no.1 / pp.124-132 / 2002
  • In this paper, we present a vision algorithm and method for improving and pre-processing input images of dented and raised characters on the sidewall of tires. We define the optical condition relating the reflection coefficient and the reflectance through physical vector calculation, and recognize the engraved characters using computer vision techniques. Tire input images have almost the same grey levels for the characters and the background, and little light is reflected from the tire surface, so it is very difficult to segment the characters from the background. Moreover, one side of a character string is raised and the other is dented, so the captured images vary with the angle of the camera and the illumination. For optimal input images, the angle between the camera and the illumination was found to be within 90°. In addition, we used combined filtering with low-pass and high-pass filters to obtain clearer input images. Finally, we formulate the relation between the reflection coefficient and the reflectance. By doing this, we obtained tire images suitable for pattern recognition.
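
One plausible reading of the combined low-pass/high-pass filtering is a band-pass style enhancement: subtract a heavily blurred background to expose the raised and dented edges, then lightly smooth to suppress noise. The sketch below is an assumption of how such a step could look, not the authors' exact pipeline; the sigma values are arbitrary.

```python
# Illustrative band-pass style enhancement for low-contrast embossed characters.
import numpy as np
from scipy import ndimage

def enhance_tire_characters(gray, background_sigma=25.0, noise_sigma=1.0):
    gray = gray.astype(np.float32)
    background = ndimage.gaussian_filter(gray, background_sigma)  # low-pass
    highpass = gray - background      # removes slowly varying illumination
    smoothed = ndimage.gaussian_filter(highpass, noise_sigma)     # denoise
    # Stretch to 0-255 for later binarization / character segmentation.
    smoothed -= smoothed.min()
    return (255.0 * smoothed / max(smoothed.max(), 1e-6)).astype(np.uint8)
```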

Trajectory Index Structure based on Signatures for Moving Objects on a Spatial Network (공간 네트워크 상의 이동객체를 위한 시그니처 기반의 궤적 색인구조)

  • Kim, Young-Jin;Kim, Young-Chang;Chang, Jae-Woo;Sim, Chun-Bo
    • Journal of Korea Spatial Information System Society / v.10 no.3 / pp.1-18 / 2008
  • Because much useful information can be obtained by analyzing the trajectories of moving objects on spatial networks, efficient trajectory index structures are required to achieve good retrieval performance on these trajectories. However, there has been little research on trajectory index structures for spatial networks beyond the FNR-tree and MON-tree. Moreover, because the FNR-tree and MON-tree store moving-object data at the segment level, they cannot support queries on the whole trajectories of moving objects. In this paper, we propose an efficient signature-based trajectory index structure for spatial networks, named SigMO-Tree. For this, we divide moving-object data into spatial and temporal attributes and design an index structure that supports not only range queries but also trajectory queries by preserving the whole trajectory of each moving object. In addition, we divide user queries into trajectory queries based on spatio-temporal area and similar-trajectory queries, and propose query processing algorithms to support them. The algorithms use a signature file to retrieve candidate trajectories efficiently. Finally, our performance analysis shows that the proposed trajectory index structure outperforms existing index structures such as the FNR-tree and MON-tree.
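
The general signature-file idea used for candidate retrieval can be illustrated with a toy filter: hash each network segment a trajectory traverses into a fixed-size bit signature, then prune trajectories whose signatures cannot contain the query. This is my own illustration of the principle, not the SigMO-Tree layout.

```python
# Toy signature-based filtering: false positives are possible, false negatives
# are not, so the surviving candidates are verified exactly afterwards.
SIG_BITS = 128

def make_signature(segment_ids):
    sig = 0
    for seg in segment_ids:
        sig |= 1 << (hash(seg) % SIG_BITS)   # set one bit per traversed segment
    return sig

def candidate_trajectories(trajectories, query_segments):
    """trajectories: dict trajectory_id -> list of traversed segment ids."""
    qsig = make_signature(query_segments)
    # A trajectory can contain all query segments only if its signature
    # covers every bit of the query signature.
    return [tid for tid, segs in trajectories.items()
            if make_signature(segs) & qsig == qsig]
```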


A Study on Optimal Shape-Size Index Extraction for Classification of High Resolution Satellite Imagery (고해상도 영상의 분류결과 개선을 위한 최적의 Shape-Size Index 추출에 관한 연구)

  • Han, You-Kyung;Kim, Hye-Jin;Choi, Jae-Wan;Kim, Yong-Il
    • Korean Journal of Remote Sensing / v.25 no.2 / pp.145-154 / 2009
  • Classification of high spatial resolution satellite images is limited when only spectral information is used, due to the complex spatial arrangement of features and the spectral heterogeneity within each class. Therefore, extracting spatial information is one of the most important steps in high resolution satellite image classification. This study proposes a new spatial feature extraction method, named SSI (Shape-Size Index). SSI uses a simple region-growing based image segmentation and allocates a spatial property value to each segment. The extracted feature is integrated with the spectral bands to improve overall classification accuracy. Classification is performed with an SVM (Support Vector Machines) classifier. To evaluate the proposed feature extraction method, KOMPSAT-2 and QuickBird-2 data are used in the experiments. It is demonstrated that the proposed SSI algorithm leads to a notable increase in classification accuracy.
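
The general idea of stacking a per-pixel segment-based spatial feature with the spectral bands before SVM classification can be sketched as below. This is a hedged illustration only: segment area is used as the spatial property, while the paper's SSI definition may differ in detail.

```python
# Sketch: per-pixel (log) segment area as an extra band, then RBF-SVM.
import numpy as np
from sklearn.svm import SVC

def size_feature(segment_labels):
    """segment_labels: 2-D array of segment ids from any region-growing step."""
    ids, counts = np.unique(segment_labels, return_counts=True)
    area = dict(zip(ids, counts))
    sizes = np.vectorize(area.get)(segment_labels).astype(np.float32)
    return np.log1p(sizes)   # compress the dynamic range of segment areas

def classify(spectral_bands, segment_labels, train_mask, train_classes):
    """spectral_bands: (H, W, B); train_mask: boolean (H, W);
    train_classes: one label per True pixel in train_mask."""
    feat = np.dstack([spectral_bands, size_feature(segment_labels)])
    X = feat.reshape(-1, feat.shape[-1])
    clf = SVC(kernel="rbf")
    clf.fit(X[train_mask.ravel()], train_classes)
    return clf.predict(X).reshape(segment_labels.shape)
```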

Context Sharing Framework Based on Time Dependent Metadata for Social News Service (소셜 뉴스를 위한 시간 종속적인 메타데이터 기반의 컨텍스트 공유 프레임워크)

  • Ga, Myung-Hyun;Oh, Kyeong-Jin;Hong, Myung-Duk;Jo, Geun-Sik
    • Journal of Intelligence and Information Systems / v.19 no.4 / pp.39-53 / 2013
  • The emergence of Internet technology and SNS has increased information flow and has changed the way people communicate from one-way to two-way. Users not only consume and share information, they also create and share it among their friends across social network services. Social media has thereby become one of the most important communication tools, a development that also includes Social TV, a format in which people watch a TV program and at the same time share information about its content with friends through social media. Social news, also known as participatory social media, is becoming popular: it shapes user interest in societal issues through the Internet and builds news credibility based on users' reputations. However, conventional news-service platforms focus only on news recommendation. Recent developments in SNS have changed this landscape to allow users to share and disseminate news, but conventional platforms provide no particular way for news to be shared. Currently, social news services only allow users to access an entire news item; users cannot access the partial content related to their interests. For example, if a user is interested in only part of a news item and wants to share that part, it is still hard to do so, and in the worst case the recipients might understand the news in a different context. To solve this, a social news service must provide a way to supply additional information. For example, Yovisto, an academic video search service, provides time-dependent metadata for videos: users can search and watch parts of a video according to the time-dependent metadata and share content with friends on social media. Yovisto divides and synchronizes a video whenever the slide presentation changes to another page. However, this method cannot be employed for news video, since news video does not incorporate any slide presentation, so a segmentation method is required to separate the news video and create time-dependent metadata. In this paper, a time-dependent metadata-based framework is proposed to segment news content and to provide time-dependent metadata so that users can use context information to communicate with their friends. The news transcript is divided using the proposed story segmentation method. A tag is provided to represent the entire content of the news, and sub-tags indicate the segmented news items, including their starting times. The time-dependent metadata helps users track news information and allows them to leave a comment on each segment of the news. Users may also share the news, based on the time metadata, either as segmented news or as a whole, which helps recipients understand the shared news. To demonstrate the performance, we evaluate the story segmentation accuracy and the tag generation. For this purpose, we measured story segmentation accuracy through semantic similarity and compared it with a benchmark algorithm. Experimental results show that the proposed method outperforms the benchmark algorithms in terms of story segmentation accuracy. It is important to note that sub-tag accuracy matters most in the proposed framework, since sub-tags are what allow a specific news context to be shared with others.
To extract more accurate sub-tags, we created a stop-word list of terms unrelated to the news content, such as the names of anchors or reporters, and applied it to the framework. We analyzed the accuracy of the tags and sub-tags that represent the news context. The analysis indicates that the proposed framework helps users share their opinions with context information on social media and social news.
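
A minimal story-segmentation sketch is given below, assuming sentence-level transcripts and TF-IDF cosine similarity as the "semantic similarity" signal; the paper's actual segmentation and tag-generation methods may differ, and the boundary threshold is an arbitrary assumption.

```python
# Sketch: cut the transcript where adjacent sentences become dissimilar.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def segment_transcript(sentences, boundary_threshold=0.1):
    """Return lists of sentence indices, one list per detected story segment."""
    tfidf = TfidfVectorizer().fit_transform(sentences)
    segments, current = [], [0]
    for i in range(1, len(sentences)):
        sim = cosine_similarity(tfidf[i - 1], tfidf[i])[0, 0]
        if sim < boundary_threshold:      # topic shift -> new story segment
            segments.append(current)
            current = []
        current.append(i)
    segments.append(current)
    return segments

# Each segment can then carry time-dependent metadata, e.g. a sub-tag plus the
# start time of its first sentence, so a user can share just that segment.
```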

The Evaluation of Usefulness of Wide Beam Reconstruction Method on Segmental Perfusion and Regional Wall Motion in Myocardial Perfusion SPECT (심근관류 SPECT의 분절별 관류 및 국소벽 운동에서 Wide Beam Reconstruction기법의 유용성 평가)

  • Seong, Yong-Joon;Kim, Tae-Yeob;Moon, Il-Sang;Cho, Seong-Wook;Woo, Jae-Ryong
    • The Korean Journal of Nuclear Medicine Technology / v.15 no.1 / pp.51-57 / 2011
  • Purpose: The aim of this study is to assess the clinical usefulness of Wide Beam Reconstruction (WBR, Xpress.cardiac™) by checking the agreement between segmental perfusion and regional wall motion in the myocardium, compared with the conventional OSEM method. Materials and Methods: Subjects were separated into two groups: the first consisted of 20 normal controls and the second of 10 patients (abnormal group) with coronary artery disease. Subjects underwent myocardial perfusion SPECT (²⁰¹Tl rest and ⁹⁹ᵐTc-MIBI stress). For image acquisition and reconstruction, the rest stage was acquired at 30 and 15 seconds per step and the stress stage at 25 and 13 seconds per step, and both the OSEM and WBR methods were applied. Segmental perfusion and regional wall motion were assessed with the 20-segment model of the QPS and QGS algorithms in AutoQuant. Perfusion status was scored on a 5-point system (0=normal, 1=mild, 2=moderate, 3=severe hypokinesia, 4=dyskinesia), and regional wall motion was scored on the same 5-point system. We evaluated the agreement between conventional OSEM and WBR through automatic quantification values. Results: In the normal group, the agreement between conventional OSEM and WBR for rest segmental perfusion was 99% (396/400, k=0.662, p<0.0001) and for rest regional wall motion 83.8% (335/400, k=0.283); for stress segmental perfusion it was 95.8% (383/400, k=0.656) and for stress regional wall motion 87.3% (349/400, k=0.390). In the abnormal group, the agreement for rest segmental perfusion was 83% (166/200, k=0.605, p<0.0001) and for rest regional wall motion 55.5% (111/200, k=0.385); for stress segmental perfusion it was 79.5% (159/200, k=0.682) and for stress regional wall motion 63.5% (127/200, k=0.486). Conclusion: Compared with conventional OSEM, the WBR method showed good agreement for segmental myocardial perfusion in both the normal and abnormal groups; however, regional wall motion showed meaningfully lower agreement. Although WBR offers high resolution and contrast, it is not a useful method for gated myocardial perfusion SPECT.
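
The agreement statistics reported above (exact-match rate and Cohen's kappa between OSEM and WBR segment scores) can be reproduced with a short calculation; the sketch below assumes one 0-4 score per myocardial segment for each method and is only an illustration of the computation.

```python
# Exact agreement and chance-corrected agreement between two raters/methods.
import numpy as np
from sklearn.metrics import cohen_kappa_score

def agreement(osem_scores, wbr_scores):
    osem = np.asarray(osem_scores)
    wbr = np.asarray(wbr_scores)
    exact = np.mean(osem == wbr)            # e.g. 396 matches / 400 -> 0.99
    kappa = cohen_kappa_score(osem, wbr)    # chance-corrected agreement (k)
    return exact, kappa
```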


An Implementation of Dynamic Gesture Recognizer Based on WPS and Data Glove (WPS와 장갑 장치 기반의 동적 제스처 인식기의 구현)

  • Kim, Jung-Hyun;Roh, Yong-Wan;Hong, Kwang-Seok
    • The KIPS Transactions:PartB / v.13B no.5 s.108 / pp.561-568 / 2006
  • A WPS (Wearable Personal Station) for the next-generation PC can be defined as a core terminal for ubiquitous computing that provides information processing and network functions and overcomes spatial limitations in acquiring new information. As a way to acquire users' dynamic gesture data from haptic devices, the traditional desktop-PC-based gesture recognizer using a wired communication module has several restrictions, such as spatial constraints, the complexity of the transmission medium (cable elements), limited motion, and inconvenience of use. Accordingly, to overcome these problems, in this paper we implement a hand gesture recognition system using a fuzzy algorithm and a neural network for the Post-PC (an embedded ubiquitous environment using a Bluetooth module and the WPS). We also propose the most efficient and reasonable hand gesture recognition interface for the Post-PC through evaluation and analysis of the performance of each recognition module. The proposed gesture recognition system consists of three modules: 1) a gesture input module that converts dynamic hand motion into input data, 2) a Relational Database Management System (RDBMS) module that segments significant gestures from the input data, and 3) two different recognition modules, a fuzzy max-min module and a neural network module, that recognize significant gestures among continuous/dynamic gestures. Experimental results show an average recognition rate of 98.8% for the fuzzy max-min module and 96.7% for the neural network module on significant dynamic gestures.
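
A generic fuzzy max-min classification step can be sketched as follows. This is a hypothetical illustration of the standard max-min composition, since the paper's membership functions and relation matrices are not given here; each gesture class is assumed to have a reference fuzzy relation matrix.

```python
# Sketch: score each gesture class by max-min composition of the input
# membership vector with that class's fuzzy relation, then pick the best.
import numpy as np

def fuzzy_max_min_score(input_membership, class_relation):
    """input_membership: (F,) memberships in [0, 1];
    class_relation: (F, R) fuzzy relation matrix for one gesture class."""
    # min of input with each relation entry, then max over features per rule,
    # then aggregate over rules with another max.
    composed = np.minimum(input_membership[:, None], class_relation).max(axis=0)
    return composed.max()

def recognize(input_membership, relations):
    """relations: dict gesture_name -> (F, R) relation matrix."""
    scores = {g: fuzzy_max_min_score(input_membership, rel)
              for g, rel in relations.items()}
    return max(scores, key=scores.get)
```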

A Proposal of a Keyword Extraction System for Detecting Social Issues (사회문제 해결형 기술수요 발굴을 위한 키워드 추출 시스템 제안)

  • Jeong, Dami;Kim, Jaeseok;Kim, Gi-Nam;Heo, Jong-Uk;On, Byung-Won;Kang, Mijung
    • Journal of Intelligence and Information Systems / v.19 no.3 / pp.1-23 / 2013
  • To discover significant social issues such as unemployment, economic crisis, and social welfare, which are urgent problems to be solved in modern society, researchers in the existing approach usually collect opinions from professional experts and scholars through online or offline surveys. However, such a method is not always effective. Due to the expense involved, a large number of survey replies are seldom gathered, and in some cases it is hard to find professionals dealing with specific social issues, so the sample set is often small and may be biased. Furthermore, regarding a given social issue, several experts may reach totally different conclusions because each has a subjective point of view and a different background. In this situation, it is considerably hard to figure out what the current social issues are and which of them are really important. To surmount the shortcomings of the current approach, in this paper we develop a prototype system that semi-automatically detects social issue keywords, representing social issues and problems, from about 1.3 million news articles issued by about 10 major domestic presses in Korea from June 2009 until July 2012. Our proposed system consists of (1) collecting and extracting texts from the collected news articles, (2) identifying only the news articles related to social issues, (3) analyzing the lexical items of Korean sentences, (4) finding a set of topics regarding social keywords over time based on probabilistic topic modeling, (5) matching relevant paragraphs to a given topic, and (6) visualizing social keywords for easy understanding. In particular, we propose a novel matching algorithm relying on generative models; its goal is to best match paragraphs to each topic. Technically, using a topic model such as Latent Dirichlet Allocation (LDA), we can obtain a set of topics, each of which has relevant terms and their probability values. In our problem, given a set of text documents (e.g., news articles), LDA produces a set of topic clusters, and each topic cluster is then labeled by human annotators, where each topic label stands for a social keyword. For example, suppose there is a topic (e.g., Topic1 = {(unemployment, 0.4), (layoff, 0.3), (business, 0.3)}) and a human annotator labels Topic1 "Unemployment Problem". In this example, it is non-trivial to understand what happened to the unemployment problem in our society: looking only at social keywords, we have no idea of the detailed events. To tackle this, we develop a matching algorithm that computes the probability of a paragraph given a topic, relying on (i) the topic terms and (ii) their probability values. Given a set of text documents, we segment each document into paragraphs; in the meantime, using LDA, we extract a set of topics from the documents. Based on our matching process, each paragraph is assigned to the topic it best matches, so that each topic ends up with several best-matched paragraphs. For instance, suppose there are a topic (e.g., Unemployment Problem) and its best-matched paragraph (e.g., "Up to 300 workers lost their jobs in XXX company in Seoul"). In this case, we can grasp the detailed information behind the social keyword, such as "300 workers", "unemployment", "XXX company", and "Seoul". In addition, our system visualizes social keywords over time.
Therefore, through our matching process and keyword visualization, most researchers will be able to detect social issues easily and quickly. Through this prototype system, we have detected various social issues appearing in our society and have also shown the effectiveness of our proposed methods in experimental results. Note that our proof-of-concept system is also available at http://dslab.snu.ac.kr/demo.html.
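
An illustrative version of the paragraph-to-topic matching idea is sketched below, assuming an LDA topic model fitted with scikit-learn (CountVectorizer plus LatentDirichletAllocation). It scores each paragraph against each topic by summing the log probabilities of its tokens under that topic; the exact generative scoring in the paper may differ, and topic labels such as "Unemployment Problem" would still be assigned by annotators.

```python
# Sketch: fit LDA on the documents, then assign each paragraph to the topic
# under which its tokens are most probable.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

def match_paragraphs(documents, paragraphs, n_topics=10):
    vec = CountVectorizer()
    doc_counts = vec.fit_transform(documents)
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
    lda.fit(doc_counts)

    # P(word | topic): normalize each topic's word weights.
    word_probs = lda.components_ / lda.components_.sum(axis=1, keepdims=True)
    log_probs = np.log(word_probs + 1e-12)

    # log P(paragraph | topic) = sum over tokens of log P(word | topic);
    # each paragraph is assigned to its best-matching topic.
    para_counts = vec.transform(paragraphs)
    scores = para_counts @ log_probs.T          # (n_paragraphs, n_topics)
    return np.asarray(scores.argmax(axis=1)).ravel()
```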