• Title/Summary/Keyword: 패턴검색 (pattern search)

Search results: 575

Profiles of Toxin genes and Antibiotic Susceptibility of Staphylococcus aureus Isolated from Perilla Leaf Cultivation Area (들깻잎 재배단지에서 분리한 Staphylococcus aureus의 독소 유전자와 항생제 감수성 분석)

  • Kim, Se-Ri;Cha, Min-Hee;Chung, Duck-Hwa;Shim, Won-Bo
    • Journal of Food Hygiene and Safety
    • /
    • v.30 no.1
    • /
    • pp.51-58
    • /
    • 2015
  • Thirty-one Staphylococcus aureus isolates from perilla leaf cultivation areas in Miryang were investigated for characteristics such as enterotoxin genes and antibiotic susceptibility. Five toxin genes (sea, seb, sec, sed, and see) were examined by PCR. The disc diffusion method, using 18 types of antibiotic discs at different concentrations, was used to examine the antibiotic susceptibility of S. aureus. Among the enterotoxin-encoding genes, the sea and sed genes were co-detected in 4 isolates (12.9%), the sed gene was found in 9 isolates (29.0%), and the see gene was found in 1 isolate (3.2%). However, the seb, sec, and tsst genes were not detected in any isolate. In the antibiotic susceptibility test, 7 isolates (22.6%) were resistant to 12 antibiotics (penicillin, ampicillin, oxacillin, amoxicillin-clavulanic acid, cefazolin, cephalothin, imipenem, gentamicin, tetracycline, ofloxacin, norfloxacin, and erythromycin), and 2 isolates (6.5%) were resistant to 5 antibiotics (penicillin, ampicillin, amoxicillin-clavulanic acid, gentamicin, and telithromycin). MRSA (methicillin-resistant Staphylococcus aureus) was found on packing vinyl, hands, and perilla leaves.

The Dispersion Phenomenon of Journal Citations in a Digital Environment (디지털 환경에서 학술지 인용의 분산화 현상에 관한 연구)

  • Shin, Eun-Ja
    • Journal of the Korean Society for Information Management
    • /
    • v.26 no.2
    • /
    • pp.211-222
    • /
    • 2009
  • Electronic publishing has influenced, and in some ways changed, information seeking, reading patterns, and citation behaviours. This study collected Cited Half-Lives, an indicator of the life-span of scholarly journals, from the JCR Social Science edition for periods before and after electronic journals became prevalent, and examined whether there were changes between the two periods. The analysis of eight disciplines shows that the average Cited Half-Lives were higher in 2007 than in 1994 for seven disciplines, the exception being demography. In the four disciplines of economics, education, finance, and sociology in particular, the average Cited Half-Lives increased significantly. These results show that concentration, in which researchers cite more recent articles and focus their citations on fewer of them, is weakening and that the dispersion of citations is actually increasing. With the online availability of articles and journals, older online materials can be accessed, used, and cited more frequently, giving Cited Half-Lives more room to grow in a digital environment. Further research is needed to investigate whether this phenomenon becomes more pronounced across disciplines after a few years.
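For readers unfamiliar with the indicator used above, the sketch below shows one way a Cited Half-Life can be computed from a journal's citation-age distribution: the number of years, counting back from the census year, that account for half of all citations. The citation profile is hypothetical, and this is a simplified illustration rather than the JCR's exact procedure.

```python
# A minimal sketch (not the JCR implementation) of computing a Cited Half-Life
# from a citation-age distribution, with linear interpolation inside the year
# in which the cumulative share of citations crosses 50%.

def cited_half_life(citations_by_age):
    """citations_by_age[i] = citations to articles i years old (i = 0, 1, 2, ...)."""
    total = sum(citations_by_age)
    if total == 0:
        raise ValueError("no citations")
    half = total / 2.0
    cumulative = 0.0
    for age, count in enumerate(citations_by_age):
        if cumulative + count >= half:
            # Interpolate within this year to locate the 50% point.
            fraction = (half - cumulative) / count
            return age + fraction
        cumulative += count
    return float(len(citations_by_age))  # not reached when total > 0

if __name__ == "__main__":
    # Hypothetical citation-age profile: recent articles are cited most.
    profile = [120, 150, 130, 90, 70, 50, 40, 30, 20, 10]
    print(f"Cited Half-Life: {cited_half_life(profile):.2f} years")
```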

The Consensus String Problem based on Radius is NP-complete (거리반경기반 대표문자열 문제의 NP-완전)

  • Na, Joong-Chae;Sim, Jeong-Seop
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.36 no.3
    • /
    • pp.135-139
    • /
    • 2009
  • The problems of computing the distances or similarities of multiple strings have been studied extensively in diverse fields such as pattern matching, web searching, bioinformatics, and computer security. One well-known way to compare the strings in a given set is to find a consensus string, which serves as a representative of the set. Two objective functions are frequently used to find a consensus string: the radius and the consensus error. The radius of a string x with respect to a set S of strings is the smallest number r such that the distance between x and each string in S is at most r; a consensus string based on radius is a string that minimizes the radius with respect to the given set. The consensus error of a string x with respect to a set S is the sum of the distances between x and all the strings in S; a consensus string of S based on consensus error is a string that minimizes the consensus error with respect to S. In this paper, we show that the problem of finding a consensus string based on radius is NP-complete when the distance function is a metric.
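The two objective functions defined above translate directly into code. The sketch below is a minimal illustration using Hamming distance as the metric (the NP-completeness result in the paper holds for any metric); the example strings are hypothetical.

```python
# A minimal sketch of the radius and consensus error of a candidate string
# with respect to a set of strings, using Hamming distance as the metric.

def hamming(x, y):
    """Hamming distance between two equal-length strings."""
    assert len(x) == len(y)
    return sum(a != b for a, b in zip(x, y))

def radius(x, strings):
    """Smallest r such that d(x, s) <= r for every s in the set."""
    return max(hamming(x, s) for s in strings)

def consensus_error(x, strings):
    """Sum of the distances from x to every string in the set."""
    return sum(hamming(x, s) for s in strings)

if __name__ == "__main__":
    S = ["ACGT", "ACGA", "TCGT"]
    candidate = "ACGT"
    print("radius         :", radius(candidate, S))           # 1
    print("consensus error:", consensus_error(candidate, S))  # 2
```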

Analysis of Radio Propagation Environment in Busan Area for DTV Service (디지털 TV서비스를 위한 부산지역 전파환경 분석)

  • Sung Tae-Kyung;Weon Young-Su;Cho Hyung-Rae;Kim Ki-Moon
    • Journal of Navigation and Port Research
    • /
    • v.28 no.10 s.96
    • /
    • pp.869-874
    • /
    • 2004
  • Digital TV broadcasting offers far better resolution and sound quality than analog broadcasting and has many advantages, including multimedia functions such as home shopping, home banking, internet search, telecommuting, and VOD. Because it is essential to analyze the regional electromagnetic environment before digital TV broadcasting begins, this study analyzed the limitations of the Busan area using the ETRI propagation model. To assess the conditions for maintaining high-quality digital TV signals, we measured electric field intensity across a wide area of Busan, including mountainous districts and areas with high-rise buildings. In general, the measured values were lower than the values simulated with the standard ETRI propagation model, although the distribution patterns were similar. Comparing theoretical values with the measured results, the two agreed for flat areas but differed greatly for dense urban and mountainous areas, so we conclude that the ETRI propagation model and the free-space theoretical model are not suitable for Busan.
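The ETRI propagation model itself is not specified in this abstract, so the sketch below uses a generic log-distance path-loss relation as a hypothetical stand-in to illustrate the kind of model-versus-measurement comparison described: predicted field strengths are computed per distance and compared with illustrative measured values.

```python
# A minimal sketch of a model-vs-measurement comparison. A generic log-distance
# path-loss model stands in for the (unspecified) ETRI model; the distances and
# measured field strengths are illustrative only.
import math

def log_distance_field(p0_dbuv, d_km, d0_km=1.0, n=3.5):
    """Predicted field strength (dBuV/m) at distance d_km, given a reference
    level p0_dbuv at d0_km and a path-loss exponent n."""
    return p0_dbuv - 10.0 * n * math.log10(d_km / d0_km)

if __name__ == "__main__":
    # distance (km) -> measured field strength (dBuV/m), hypothetical values
    measurements = {2.0: 68.0, 5.0: 55.0, 10.0: 41.0, 20.0: 30.0}
    p0 = 80.0  # hypothetical reference field strength at 1 km
    errors = []
    for d, measured in sorted(measurements.items()):
        predicted = log_distance_field(p0, d)
        errors.append(measured - predicted)
        print(f"{d:5.1f} km  predicted {predicted:6.1f}  measured {measured:6.1f}  diff {measured - predicted:+5.1f}")
    rmse = math.sqrt(sum(e * e for e in errors) / len(errors))
    print(f"RMSE: {rmse:.1f} dB")
```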

On Optimizing Dissimilarity-Based Classifications Using a DTW and Fusion Strategies (DTW와 퓨전기법을 이용한 비유사도 기반 분류법의 최적화)

  • Kim, Sang-Woon;Kim, Seung-Hwan
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.47 no.2
    • /
    • pp.21-28
    • /
    • 2010
  • This paper reports an experimental result on optimizing dissimilarity-based classification (DBC) by simultaneously using dynamic time warping (DTW) and a multiple fusion strategy (MFS). DBC is a way of defining classifiers among classes that is based not on the feature measurements of individual samples, but rather on a suitable dissimilarity measure among the samples. In DTW, the dissimilarity is measured in two steps: first, we adjust the object samples by finding the best warping path with a correlation-coefficient-based DTW technique; we then compute the dissimilarity distance between the adjusted objects with conventional measures. In MFS, fusion strategies are used repeatedly both in generating dissimilarity matrices and in designing classifiers: we first combine the dissimilarity matrices obtained with the DTW technique into a new matrix, and after training some base classifiers on the new matrix, we again combine the results of the base classifiers. Our experimental results for well-known benchmark databases demonstrate that the proposed mechanism achieves further improvements in classification accuracy compared with previous approaches. This suggests that the method could also be applied to other high-dimensional tasks, such as multimedia information retrieval.
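As a rough illustration of the dissimilarity-generation step in DBC, the sketch below computes a pairwise dissimilarity matrix with a plain DTW distance (absolute-difference cost). The paper's method finds the warping path with a correlation-coefficient-based cost before measuring the distance; the standard DTW here is a simplified stand-in, and the sample sequences are hypothetical.

```python
# A minimal sketch of filling a pairwise dissimilarity matrix with a standard
# DTW distance between 1-D sample sequences.

def dtw_distance(a, b):
    """Dynamic time warping distance between sequences a and b."""
    n, m = len(a), len(b)
    INF = float("inf")
    dp = [[INF] * (m + 1) for _ in range(n + 1)]
    dp[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            dp[i][j] = cost + min(dp[i - 1][j], dp[i][j - 1], dp[i - 1][j - 1])
    return dp[n][m]

def dissimilarity_matrix(samples):
    """Symmetric matrix of pairwise DTW distances among the samples."""
    n = len(samples)
    d = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            d[i][j] = d[j][i] = dtw_distance(samples[i], samples[j])
    return d

if __name__ == "__main__":
    samples = [[0, 1, 2, 3, 2, 1], [0, 0, 1, 2, 3, 2, 1, 0], [3, 2, 1, 0, 1, 2]]
    for row in dissimilarity_matrix(samples):
        print(["%5.1f" % v for v in row])
```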

Development of Workbench for Analysis and Visualization of Whole Genome Sequence (전유전체(Whole genome) 서열 분석과 가시화를 위한 워크벤치 개발)

  • Choe, Jeong-Hyeon;Jin, Hui-Jeong;Kim, Cheol-Min;Jang, Cheol-Hun;Jo, Hwan-Gyu
    • The KIPS Transactions:PartA
    • /
    • v.9A no.3
    • /
    • pp.387-398
    • /
    • 2002
  • As the whole genome sequences of many organisms have been revealed by small-scale genome projects, intensive research on individual genes and their functions has been performed. However, in-memory algorithms are inefficient for analyzing whole genome sequences, since an individual whole genome ranges from several million to hundreds of billions of base pairs. To manipulate such huge sequence data effectively, an indexed data structure for external memory is necessary. In this paper, we introduce a workbench system for the analysis and visualization of whole genome sequences, built on a string B-tree that is suitable for analyzing huge data. The system consists of two parts: an analysis query part and a visualization part. The query system supports operations such as sequence search, k-occurrence search, and k-mer analysis. The visualization system helps biologists easily understand the overall structure and specific features through visualizations of the whole genome sequence, annotations, CGR (Chaos Game Representation), k-mers, and RWP (Random Walk Plot). Using the workbench, one can explore the relations among organisms, predict genes in a genome, and study the function of junk DNA.
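Of the query operations listed above, k-mer analysis and k-occurrence search are easy to illustrate. The sketch below is an in-memory simplification (the workbench itself relies on a string B-tree precisely because whole genomes exceed main memory); the example sequence is hypothetical.

```python
# A minimal sketch of k-mer counting and k-occurrence search on a sequence.
# An in-memory dictionary is used here purely for illustration; it does not
# scale to whole genomes the way an external-memory string B-tree does.
from collections import Counter

def kmer_counts(sequence, k):
    """Return a Counter mapping each k-mer to its number of occurrences."""
    sequence = sequence.upper()
    return Counter(sequence[i:i + k] for i in range(len(sequence) - k + 1))

def k_occurrence(sequence, pattern):
    """Return all start positions of pattern in sequence (overlaps allowed)."""
    k = len(pattern)
    return [i for i in range(len(sequence) - k + 1) if sequence[i:i + k] == pattern]

if __name__ == "__main__":
    genome = "ACGTACGTGACGTT"           # hypothetical toy sequence
    print(kmer_counts(genome, 3).most_common(3))
    print(k_occurrence(genome, "ACGT"))  # [0, 4, 9]
```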

Character-based Subtitle Generation by Learning of Multimodal Concept Hierarchy from Cartoon Videos (멀티모달 개념계층모델을 이용한 만화비디오 컨텐츠 학습을 통한 등장인물 기반 비디오 자막 생성)

  • Kim, Kyung-Min;Ha, Jung-Woo;Lee, Beom-Jin;Zhang, Byoung-Tak
    • Journal of KIISE
    • /
    • v.42 no.4
    • /
    • pp.451-458
    • /
    • 2015
  • Previous multimodal learning methods focus on problem-solving aspects, such as image and video search and tagging, rather than on knowledge acquisition via content modeling. In this paper, we propose the Multimodal Concept Hierarchy (MuCH), a content modeling method that uses a cartoon video dataset, together with a character-based subtitle generation method based on the learned model. The MuCH model has a multimodal hypernetwork layer, in which the patterns of the words and image patches are represented, and a concept layer, in which each concept variable is represented by a probability distribution over the words and image patches. The model can learn the characteristics of the characters as concepts from the video subtitles and scene images using a Bayesian learning method, and can generate character-based subtitles from the learned model when text queries are provided. As an experiment, the MuCH model learned concepts from 'Pororo' cartoon videos totaling 268 minutes in length and generated character-based subtitles. Finally, we compare the results with those of other multimodal learning models. The experimental results indicate that, given the same text query, our model generates more accurate and more character-specific subtitles than the other models.
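As a very rough illustration of the concept-layer idea, the sketch below represents each character concept as a smoothed probability distribution over subtitle words and samples character-flavoured word sequences from it. The full MuCH model also couples image patches through a multimodal hypernetwork, which is omitted here; the tiny corpus and the Dirichlet-smoothing choice are assumptions for illustration only.

```python
# A highly simplified sketch of a concept layer: each character concept is a
# probability distribution over subtitle words, estimated from (character,
# subtitle) pairs with Dirichlet smoothing. Image patches and the hypernetwork
# layer of MuCH are omitted; the corpus below is hypothetical.
import random
from collections import Counter, defaultdict

class ConceptLayer:
    def __init__(self, alpha=0.1):
        self.alpha = alpha                       # Dirichlet smoothing parameter
        self.word_counts = defaultdict(Counter)  # character -> word counts
        self.vocab = set()

    def learn(self, character, subtitle):
        words = subtitle.lower().split()
        self.word_counts[character].update(words)
        self.vocab.update(words)

    def word_probability(self, character, word):
        counts = self.word_counts[character]
        total = sum(counts.values())
        return (counts[word] + self.alpha) / (total + self.alpha * len(self.vocab))

    def generate(self, character, length=5):
        """Sample a character-flavoured word sequence from the concept."""
        vocab = sorted(self.vocab)
        weights = [self.word_probability(character, w) for w in vocab]
        return " ".join(random.choices(vocab, weights=weights, k=length))

if __name__ == "__main__":
    random.seed(0)
    layer = ConceptLayer()
    layer.learn("Pororo", "let's play outside together")
    layer.learn("Crong", "crong crong wants the toy")
    print(layer.generate("Pororo"))
```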

A Quality-Attribute-Driven Software Architecture Brokering Mechanism for Intelligent Service Robots (지능형 서비스 로봇을 위한 품질특성 기반의 소프트웨어 아키텍처 브로커링 방법)

  • Seo, Seung-Yeol;Koo, Hyung-Min;Ko, In-Young
    • Journal of KIISE:Software and Applications
    • /
    • v.36 no.1
    • /
    • pp.21-29
    • /
    • 2009
  • An intelligent service robot is a robot that monitors its surroundings and then provides a service to meet a user's goal. It is normally impossible for a robot to anticipate all the needs of its user and all the situations that may arise in its surroundings, and to prepare in advance every function needed to cope with them. Therefore, a self-growing capability is required, by which robots can extend their functionality based on users' needs and external conditions. In this paper, as an enabler of this self-growing capability, we propose a method that allows a robot to select a component-composition pattern represented in an architectural form (called a sub-architecture), and to extend its functionality by obtaining the set of software components prescribed in that pattern. A sub-architecture is selected and instantiated based not only on the functionality required but also on the quality requirements of the user and the surrounding environment. To support this method, we constructed a quality-attributes-in-use ontology and developed a brokering mechanism that matches the quality requirements of users and surroundings against the quality attributes of sub-architectures. The ontology provides common vocabularies for representing quality requirements and attributes, and enables semantically based reasoning in matching and instantiating appropriate sub-architectures when providing services to users. This ontology-based approach provides great flexibility in extending robot functionality with available software components and narrows the gap between users' quality requirements and the quality of the services actually provided by a robot.
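The brokering idea can be illustrated with a much-simplified matching step: candidate sub-architectures advertise quality-attribute profiles, and the broker picks the one that best satisfies the weighted quality requirements of the user and environment. The attribute names, profiles, and scoring rule below are hypothetical; the paper performs this matching through an ontology with semantic reasoning rather than a flat weighted sum.

```python
# A minimal sketch of quality-attribute-driven brokering: score each candidate
# sub-architecture's profile against weighted requirements and pick the best.
# Attribute names, profiles, and the scoring rule are illustrative assumptions.

def score(requirements, profile):
    """Weighted sum of how well a profile (0..1 per attribute) meets the
    requirements (attribute -> importance weight)."""
    return sum(weight * profile.get(attr, 0.0) for attr, weight in requirements.items())

def select_sub_architecture(requirements, candidates):
    """Return the (name, score) of the best-matching sub-architecture."""
    best_name, best_profile = max(candidates.items(),
                                  key=lambda item: score(requirements, item[1]))
    return best_name, score(requirements, best_profile)

if __name__ == "__main__":
    requirements = {"responsiveness": 0.5, "accuracy": 0.3, "power_efficiency": 0.2}
    candidates = {
        "vision_pipeline_fast":     {"responsiveness": 0.9, "accuracy": 0.6, "power_efficiency": 0.4},
        "vision_pipeline_accurate": {"responsiveness": 0.5, "accuracy": 0.9, "power_efficiency": 0.3},
    }
    print(select_sub_architecture(requirements, candidates))
```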

The Estimation of IDF Curve Considering Climate Change (기후변화를 고려한 IDF곡선 추정방안에 대한 연구)

  • Kim, Byung-Sik;Kyoung, Min-Soo;Lee, Keon-Haeng;Kim, Hyung-Soo
    • Proceedings of the Korea Water Resources Association Conference
    • /
    • 2007.05a
    • /
    • pp.774-779
    • /
    • 2007
  • IDF curves have traditionally been constructed from historical rainfall time series observed at individual stations, under the assumption that the historical rainfall data are stationary and representative of the future. However, climate change is occurring globally, and in Korea its reality is no longer a matter of debate. The water resources sector, which is directly exposed to climate change, has suffered from repeated floods and droughts since the 1990s. Moreover, because Korea's small land area and large population make the intensity of land and water resource use much higher than in other countries, even a modest climatic shift caused by global warming could create serious problems. Climate change affects basin-scale rainfall occurrence patterns and the increase or decrease of rainfall amounts, so rainfall time series become non-stationary and exhibit trends; nevertheless, such trends have so far been ignored when constructing IDF curves. In this study, IDF curves were constructed using GCM climate change scenarios in order to analyze the impact of climate change on the curves. First, global-scale climate change scenarios were produced from the control and transient experiments of the YONU CGCM, and daily hydrometeorological time series at the target station were simulated using a statistical downscaling technique and a stochastic weather generator. The simulated daily rainfall was then disaggregated into hourly data using the BLRP (Bartlett-Lewis Rectangular Pulse) model and the disaggregation technique of Koutsoyiannis (2000), and IDF curves were constructed from these data. The results show that, under climate change, rainfall amounts for each duration and return period increase substantially compared with the present.
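The last step described above, constructing IDF curves from the disaggregated rainfall, comes down to fitting an intensity-duration relation per return period. The sketch below is a minimal illustration of that fitting step, assuming the common empirical form I(t) = a / (t + b)^c and hypothetical 100-year intensities; it is not the procedure used in the paper.

```python
# A minimal sketch of fitting an IDF relation I(t) = a / (t + b)^c for one
# return period. The intensities are hypothetical, and a coarse grid search
# stands in for a proper nonlinear least-squares fit.

def fit_idf(durations_min, intensities_mmhr):
    """Grid-search a, b, c minimising the squared error of I = a / (t + b)^c."""
    best = None
    for b in [i * 2.0 for i in range(0, 31)]:       # offset b: 0..60 min
        for c in [i * 0.05 for i in range(5, 31)]:  # exponent c: 0.25..1.5
            # For fixed b and c the optimal a has a closed form (least squares).
            xs = [(t + b) ** -c for t in durations_min]
            a = sum(x * y for x, y in zip(xs, intensities_mmhr)) / sum(x * x for x in xs)
            err = sum((a * x - y) ** 2 for x, y in zip(xs, intensities_mmhr))
            if best is None or err < best[0]:
                best = (err, a, b, c)
    return best[1:]

if __name__ == "__main__":
    durations = [10, 30, 60, 120, 360]             # minutes
    intensities = [120.0, 75.0, 52.0, 33.0, 15.0]  # mm/hr, hypothetical 100-yr values
    a, b, c = fit_idf(durations, intensities)
    print(f"I(t) = {a:.1f} / (t + {b:.1f})^{c:.2f}")
```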

Designing mobile personal assistant agent based on users' experience and their position information (위치정보 및 사용자 경험을 반영하는 모바일 PA에이전트의 설계)

  • Kang, Shin-Bong;Noh, Sang-Uk
    • Journal of Internet Computing and Services
    • /
    • v.12 no.1
    • /
    • pp.99-110
    • /
    • 2011
  • As mobile environments change rapidly and digital convergence becomes widespread, mobile devices, including smartphones, have been playing a critical role in changing users' lifestyles in the areas of entertainment, business, and information services. Services using mobile devices are evolving to meet users' personal needs in mobile environments. In particular, LBS (Location-Based Services) are combined with other services and content such as augmented reality, mobile SNS (Social Network Services), games, and search, which can provide convenient and useful services to mobile users. In this paper, we design and implement a prototype of a mobile personal assistant (PA) agent. Our PA agent helps users carry out tasks by hiding the complexity of difficult tasks, performing tasks on behalf of the users, and reflecting their preferences. To identify users' preferences and provide personalized services, clustering and classification algorithms from data mining are applied. Clusters of the log data are formed by measuring the dissimilarity between objects based on usage patterns. The classification algorithms then produce user profiles within each cluster, which makes it possible for PA agents to provide users with personalized services and content. In the experiments, we measured the classification accuracy of the user models built from the clustered data. The classification accuracy of our method was 17.42% higher than that obtained with other clustering algorithms.
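The profiling pipeline described above, clustering log data by usage-pattern dissimilarity and then classifying sessions into profiles, can be sketched as follows. The feature layout, the k-means and decision-tree choices, and the sample log data are illustrative assumptions, not the paper's exact algorithms.

```python
# A minimal sketch of a clustering-then-classification profiling pipeline:
# usage-pattern vectors from (hypothetical) log data are clustered, and a
# classifier is trained on the clustered data so a new session can be assigned
# a profile for personalized content selection.
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier

# Each row: [news_views, game_minutes, shopping_views, avg_session_min]
usage_logs = [
    [12, 5, 1, 10], [10, 3, 0, 12], [11, 6, 2, 9],    # news-oriented sessions
    [1, 60, 0, 45], [2, 75, 1, 50], [0, 55, 0, 40],   # game-oriented sessions
    [3, 2, 20, 25], [2, 1, 25, 30], [4, 3, 18, 22],   # shopping-oriented sessions
]

# 1) Cluster the log data by usage-pattern dissimilarity (Euclidean here).
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(usage_logs)

# 2) Train a classifier that maps a usage pattern to its cluster (profile).
profile_classifier = DecisionTreeClassifier(random_state=0)
profile_classifier.fit(usage_logs, kmeans.labels_)

# 3) Classify a new session so the PA agent can pick personalized content.
new_session = [[2, 70, 0, 48]]
print("assigned profile:", profile_classifier.predict(new_session)[0])
```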