• Title/Summary/Keyword: Sequential Search (순차탐색)


A Semantic-Based Mashup Development Tool Supporting Various Open API Types (다양한 Open API 타입들을 지원하는 시맨틱 기반 매쉬업 개발 툴)

  • Lee, Yong-Ju
    • Journal of Internet Computing and Services
    • /
    • v.13 no.3
    • /
    • pp.115-126
    • /
    • 2012
  • Mashups have become very popular over the last few years, and they are also used for a variety of IT convergence services. In spite of their popularity, there are several challenging issues when combining Open APIs into mashups. First, since portal sites may have a large number of APIs available for mashups, manually searching for and finding compatible APIs can be a tedious and time-consuming task. Second, none of the existing portal sites provides a way to leverage semantic techniques, such as those developed to assist users in locating and integrating traditional SOAP-based web services. Third, even after suitable APIs have been discovered, integrating them requires in-depth programming knowledge. To solve these issues, we first show that existing techniques and algorithms used for finding and matching SOAP-based web services can be reused with only minor changes. Next, we show how the characteristics of APIs can be syntactically defined and semantically described, and how these syntactic and semantic descriptions can aid the discovery and composition of Open APIs. Finally, we propose a goal-directed interactive approach for the dynamic composition of APIs, in which the final mashup is gradually generated by forward chaining of APIs: at each step, a new API is added to the composition.
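The goal-directed, forward-chaining composition described in this abstract can be illustrated with a minimal sketch: starting from the user's available inputs, any API whose inputs are already satisfied may be added, and its outputs extend the working set until the goal type is produced. The API names, type labels, and goal below are hypothetical examples, not taken from the paper.

```python
# Minimal sketch of goal-directed forward chaining over Open APIs.
# All API names and type labels here are hypothetical examples.

apis = {
    "geocode":      {"inputs": {"address"}, "outputs": {"lat_lng"}},
    "weather":      {"inputs": {"lat_lng"}, "outputs": {"forecast"}},
    "photo_search": {"inputs": {"lat_lng"}, "outputs": {"photos"}},
}

def compose(available, goal):
    """Add one compatible API per step until the goal type is produced."""
    plan, facts = [], set(available)
    while goal not in facts:
        candidates = [name for name, api in apis.items()
                      if api["inputs"] <= facts and not api["outputs"] <= facts]
        if not candidates:
            return None            # no API can extend the current composition
        chosen = candidates[0]     # an interactive tool would let the user choose
        plan.append(chosen)
        facts |= apis[chosen]["outputs"]
    return plan

print(compose({"address"}, "forecast"))   # e.g. ['geocode', 'weather']
```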

Antifungal Activity of Methylene Chloride Fraction of Pimpinella brachycarpa Against Aspergillus niger (참나물 Methylene Chloride 분획의 Aspergillus niger에 대한 항진균 활성)

  • Ahn, Seon-Mi;Choi, Tae-Ho;Kwun, In-Sook;Sohn, Ho-Yong
    • Microbiology and Biotechnology Letters
    • /
    • v.39 no.2
    • /
    • pp.168-174
    • /
    • 2011
  • In order to develop safe and economical novel antifungal agents, we prepared 73 methanol extracts from medicinal and edible herbs and examined their 365 solvent fractions, obtained with n-hexane, methylene chloride, ethyl acetate, butanol and water residue according to a sequential organic solvent fractionation method. Screening these fractions for antifungal activity identified the ethyl acetate fraction of Morus alba L., the methylene chloride fraction of Pimpinella brachycarpa (MCPB), and the n-hexane fraction of Salvia miltiorrhiza Bunge, whose methanol extracts all showed activity, as potential sources of antifungal agents. Among these, the antifungal activity of P. brachycarpa has not been reported to date. The mycelial growth inhibition and spore germination inhibition activities of MCPB against A. niger were confirmed by disc-diffusion assay over a 10-day culture. The MIC and MFC of MCPB were determined as 0.25 and 0.5 mg/ml, respectively. MCPB showed no hemolytic activity against human RBCs at 0.5 mg/ml, and glycoside flavonoids are presumed to be its active constituents. These results suggest that MCPB has prominent antifungal activity and that screening sequential organic solvent fractions, instead of simple natural product extracts, is useful in the search for novel bioactive substances.

Establishment of a Selection System for the Site-Specific Incorporation of Unnatural Amino Acids into Protein (비천연 아미노산의 위치특이적 단백질 삽입을 위한 Amino Acyl-tRNA Synthetase 선별시스템 개발)

  • Edan, Dawood Salim;Choi, Inkyung;Park, Jungchan
    • Korean Journal of Microbiology
    • /
    • v.50 no.1
    • /
    • pp.1-7
    • /
    • 2014
  • Site-specific incorporation of unnatural amino acids (SSIUA) into proteins can be achieved in vivo by coexpression of an orthogonal pair consisting of a suppressor tRNA and an engineered aminoacyl-tRNA synthetase (ARS) that specifically ligates an unnatural amino acid to the suppressor tRNA. As a step toward developing the SSIUA technique in Escherichia coli, we established a new two-step screening system that can be used to select ARS variants that ligate an unnatural amino acid to a suppressor tRNA. The positive selection system consists of a chloramphenicol acetyltransferase gene containing an amber mutation at the $27^{th}$ residue, and it efficiently enriched amber-suppressible ARS with a maximum enrichment factor of $9.0{\times}10^5$. The negative selection system was constructed by adding multiple amber codons in front of a lethal gene encoding the control of cell death B toxin (ccdB), which acts as an inhibitor of bacterial topoisomerase II. Amber suppression of ccdB by an orthogonal pair of Saccharomyces cerevisiae tyrosyl-tRNA synthetase (TyrRS) and an amber suppressor tRNA significantly inhibits bacterial growth. This selection system was also able to efficiently remove amber-suppressible ARS that ligate natural amino acids to the suppressor tRNA. Thus, the sequential combination of these two selection systems may serve as a powerful tool for selecting, from an ARS mutant pool, an ARS variant that specifically ligates an unnatural amino acid to the suppressor tRNA.

Technique for Placing Continuous Media on a Disk Array under Fault-Tolerance and Arbitrary-Rate Search (결함허용과 임의 속도 탐색을 고려한 연속 매체 디스크 배치 기법)

  • O, Yu-Yeong;Kim, Seong-Su;Kim, Jae-Hun
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.26 no.9
    • /
    • pp.1166-1176
    • /
    • 1999
  • End-user operations on continuous media (e.g., video data) include arbitrary-rate search, pause, and other operations as well as normal-rate play. Among these, fast-forward (FF) and fast-backward (FB) are useful for quickly finding a scene of interest, but unlike playback they require non-sequential disk access. If disk loads are not balanced in such cases, accesses concentrate on a few disks and the quality of service degrades. This paper proposes PRRgp (Prime Round Robin with Grouped Parities), a scheme for placing continuous media on disk-array-based storage systems so that disk accesses are distributed uniformly, while improving reliability by using for parity information the disk space that was wasted in the earlier PRR (Prime Round Robin) scheme. Like PRR, PRRgp balances the load across all disks of the array under arbitrary-rate search, and in addition it stores parity information in the previously wasted disk space to improve reliability. Reliability is evaluated and compared using combinatorial and Markov models that account for fault occurrence and repair rates. When continuous media are placed with PRR and parity information is simply stored in the wasted space, two or more simultaneous faults cannot be recovered; with PRRgp, about 30 percent of double-fault cases can be recovered using the stored parity information, and when there are two or more parity groups, more than two simultaneous faults can also be tolerated.
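The load-balancing argument behind PRR can be made concrete with a small sketch: when blocks are striped round robin over a prime number of disks, reading every k-th block during fast-forward still visits all disks almost equally, whereas a non-prime disk count can concentrate accesses on a few disks. The parity grouping of PRRgp is omitted here; the disk and rate values are arbitrary illustrations.

```python
# Minimal sketch of why a prime number of disks balances arbitrary-rate search.
# Block i of a stream sits on disk i mod num_disks (round robin); fast-forward at
# rate k reads blocks 0, k, 2k, ... . PRRgp's parity groups are not modeled here.

from collections import Counter

def access_distribution(num_disks, rate, num_blocks=10_000):
    """Count how many reads land on each disk when every `rate`-th block is read."""
    hits = Counter((i * rate) % num_disks for i in range(num_blocks))
    return [hits[d] for d in range(num_disks)]

print(access_distribution(7, 3))   # prime disk count: reads spread almost evenly
print(access_distribution(8, 4))   # non-prime count: only two disks are ever hit
```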

Efficient Collaboration Method Between CPU and GPU for Generating All Possible Cases in Combination (조합에서 모든 경우의 수를 만들기 위한 CPU와 GPU의 효율적 협업 방법)

  • Son, Ki-Bong;Son, Min-Young;Kim, Young-Hak
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.7 no.9
    • /
    • pp.219-226
    • /
    • 2018
  • One systematic way to generate all possible cases is to construct a combination tree, whose time complexity is O($2^n$). Combination trees are used for various purposes, such as the graph homogeneity problem and as the initial model for computing frequent item sets. However, algorithms that must search all cases of a combination are difficult to use in practice because of this high time complexity. Nevertheless, as data volumes grow and more studies attempt to exploit such data, the need to search all possible cases is increasing. Recently, as GPU environments have become popular and easily accessible, various attempts have been made to reduce execution time by parallelizing algorithms that have high time complexity in a serial environment. Because the usual method of generating all cases of a combination is sequential and the sub-task sizes are skewed, it is not directly suitable for parallel implementation. The efficiency of parallel algorithms is maximized when all threads are given tasks of similar size. In this paper, we propose a method in which the CPU and GPU collaborate efficiently to parallelize the generation of all possible cases. To evaluate the proposed algorithm, we analyze its time complexity theoretically and compare its measured execution time with that of other algorithms in CPU and GPU environments. Experimental results show that, compared with previous algorithms, the proposed CPU-GPU collaboration algorithm keeps the execution times of the CPU and GPU balanced, and its execution time improves remarkably as the number of elements increases.
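The load-balancing idea in this abstract, splitting the exponential search space so that every worker receives a task of the same size, can be sketched as follows: the $2^n$ subsets are partitioned by fixing the first few binary choices on the CPU side, and each equally sized chunk would then be handed to a GPU thread. The chunk sizes and variable names are illustrative assumptions; no actual GPU code is shown.

```python
# Minimal sketch of balanced partitioning of all 2**n cases for CPU-GPU collaboration.
# Fixing the first `prefix_bits` choices yields 2**prefix_bits chunks of equal size,
# each of which a (GPU) worker could enumerate independently.

def enumerate_chunk(n, prefix_bits, prefix):
    """Yield every subset (as a bitmask) whose first prefix_bits choices equal `prefix`."""
    suffix_len = n - prefix_bits
    for suffix in range(2 ** suffix_len):
        yield (prefix << suffix_len) | suffix

n, prefix_bits = 5, 2
chunks = [list(enumerate_chunk(n, prefix_bits, p)) for p in range(2 ** prefix_bits)]
assert sum(len(c) for c in chunks) == 2 ** n       # all 32 cases are generated once
assert len({len(c) for c in chunks}) == 1          # every chunk has the same size
```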

Exploring Teachers' Perceptions of Computational Thinking Embedded in Professional Development Program (컴퓨팅 사고를 반영한 교사연수 과정에서 나타난 교사의 인식 탐색)

  • Hwang, Gyu Jin;Park, Young-Shin
    • Journal of the Korean earth science society
    • /
    • v.42 no.3
    • /
    • pp.344-364
    • /
    • 2021
  • This study explored how two elementary school teachers perceived computational thinking, reflected it in curriculum revision, and taught it in the classroom during a nine-month longitudinal professional development program (PDP). Computational thinking is a new direction in educational policy-making, including science education; we therefore investigated the participating teachers' perceptions of computational thinking to provide a fundamental understanding of them. Nine meetings, lasting about two hours each, were held with the participating teachers, and they developed 11 lesson plans for one unit each as they formed new understandings of computational thinking. Data were collected throughout the PDP as the two teachers began perceiving computational thinking, revising their curriculum, and implementing it in their classes over nine months. The results were as follows. First, the teachers' perception of computational thinking was that the definition of scientific literacy as the purpose of science education had been extended; that is, it refers to scientific literacy that prepares students to be creative problem solvers. Second, STEAM (science, technology, engineering, arts, and mathematics) lessons were divided into two stages: a concept formation stage, where scientific thinking is emphasized, and a concept application stage, where computational thinking is emphasized. Third, computational thinking is a cognitive thinking process, whereas ICT (information and communications technology) is a functional tool. Fourth, computational thinking components appear repeatedly and are not necessarily sequential. Finally, STEAM education can be improved by utilizing computational thinking. Based on this study, we suggest that STEAM education can be activated by computational thinking when teachers are equipped with the competencies to understand and implement computational thinking within systematic PDPs, which is essential for new policies.

Exploring the Applicability of PLC Protocol for Enhancing Science Teachers' Teaching Expertise on Inquiry Class (과학 교사의 탐구 수업 전문성 신장을 위한 교사학습공동체(PLC) 프로토콜의 활용 가능성 탐색)

  • Lee, Kiyoung;Jeong, Eunyoung;Kwak, Youngsun
    • Journal of The Korean Association For Science Education
    • /
    • v.42 no.4
    • /
    • pp.439-448
    • /
    • 2022
  • The goal of this study is to develop a protocol that can be used to build inquiry-class expertise in science teacher PLCs, and to explore the applicability of the developed protocol in the field through a trial application with in-service teachers. A five-stage PLC protocol for science inquiry classes was developed and applied sequentially to six participating teachers. To check the applicability of the protocol, the participating teachers wrote a reflection journal for each stage, and after completion of the five-stage protocol their perceptions of it were investigated through a group interview. The results are as follows. First, the protocol for enhancing science teachers' expertise in inquiry classes was developed in five stages: (1) Revealing ideas about science inquiry classes, (2) Sharing science inquiry class experiences, (3) Looking together at students' scientific inquiry results, (4) Building literacy for science inquiry teaching, and (5) Making science inquiry lesson plans. Second, the possibility of broadly applying the PLC protocol developed in this study was confirmed through analysis of the participants' reflection journals and post-interviews. According to the participating teachers, the protocol supported the systematic operation of the PLC and teachers' participation. In addition, by experiencing the five-stage protocol, the teachers had an opportunity to reflect on their inquiry classes and consider improvements, and they gained confidence in inquiry classes. Based on these results, ways to develop and utilize the PLC protocol for science teachers are suggested.

Exploring the Essence of Missionary Kid's Experience of Ethnic Identity as TCK(Third Culture Kids) (선교사 자녀의 TCK(Third Culture Kids)로서의 민족정체성 겪음에 대한 본질 탐색)

  • Mun Mikyung
    • Journal of Christian Education in Korea
    • /
    • v.76
    • /
    • pp.193-212
    • /
    • 2023
  • Purpose of Study: This is a qualitative study to understand the essential experiences of missionary kids related to ethnic identity. Research Contents and Methods: Ten missionary kids who had re-entered their home country to receive a university education were selected as study participants. Two preliminary surveys (2016, 2019) were conducted to determine the direction and subject of the study, and two in-depth interviews and one non-face-to-face survey were conducted with the participants. Based on the preliminary surveys and prior research, the questionnaire explored identity experiences across four domains: language, culture, group, and place. In addition, rich results were derived from the schematized interview data, surveys using Phinney's (1992) ethnic identity measure, non-face-to-face surveys with the participants' parents, and self-reported identity graphs. Conclusions and Suggestions: The study found that missionary kids, as TCKs, came to name who they are amid identity confusion by sequentially experiencing international mobility, separation, and discrepancy in the four domains. Ultimately, TCKs appear to suffer identity difficulties because they remain primarily a 'minority' with respect to the four domains. This study is meaningful in that it specifically identified the support needed for TCK missionary kids with multicultural backgrounds, revealing the importance of providing visits to Korea and school (herd) experience before entering Korean universities to help TCKs readapt.

Self-optimizing feature selection algorithm for enhancing campaign effectiveness (캠페인 효과 제고를 위한 자기 최적화 변수 선택 알고리즘)

  • Seo, Jeoung-soo;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.4
    • /
    • pp.173-198
    • /
    • 2020
  • Many studies in academia have long been conducted on predicting the success of customer campaigns, and prediction models applying various techniques are still being studied. Recently, as campaign channels have expanded in various ways with the rapid growth of online business, companies are carrying out far more campaigns of various types than in the past. However, customers increasingly perceive campaigns as spam as fatigue from duplicate exposure grows. From a corporate standpoint, the effectiveness of the campaigns themselves is also decreasing: investment costs rise while actual success rates remain low. Accordingly, various studies are under way to improve campaign effectiveness in practice. A campaign system ultimately aims to increase the success rate of campaigns by collecting and analyzing customer-related data and using it for campaigns. In particular, there have been recent attempts to predict campaign response using machine learning. Because campaign data contain many diverse features, selecting appropriate features is very important. If all input features are used when classifying a large amount of data, training time grows as the number of classes expands, so a minimal input feature set must be extracted from the entire data. In addition, when a trained model is built with too many features, prediction accuracy may be degraded due to overfitting or correlation between features. Therefore, to improve accuracy, a feature selection technique that removes near-noise features should be applied; feature selection is a necessary step for analyzing high-dimensional data sets. Among the greedy algorithms, SFS (Sequential Forward Selection), SBS (Sequential Backward Selection), and SFFS (Sequential Floating Forward Selection) are widely used as traditional feature selection techniques. However, when the data contain many features, these methods suffer from poor classification performance and long training times. Therefore, this study proposes an improved feature selection algorithm to enhance the effectiveness of existing campaigns. The goal is to improve the existing sequential SFFS method in searching for the feature subsets that underpin machine learning model performance, using statistical characteristics of the data processed in the campaign system. Features with a strong influence on performance are derived first, features with a negative effect are removed, and the sequential method is then applied, improving search efficiency and enabling generalized prediction. The proposed model showed better search and prediction performance than traditional greedy algorithms. Compared with the original data set, the greedy algorithm, a genetic algorithm (GA), and recursive feature elimination (RFE), campaign success prediction accuracy was higher. In addition, when predicting campaign success, the improved feature selection algorithm was found to help analyze and interpret the prediction results by providing the importance of the derived features.
Among these are features such as age, customer rating, and sales, which were already known statistically to be important. In addition, features such as the combined product name, the average data consumption rate over three months, and wireless data usage over the last three months, which campaign planners had rarely used when selecting campaign targets, were unexpectedly selected as important features for campaign response. It was confirmed that basic attributes can also be very important features depending on the type of campaign. This makes it possible to analyze and understand the important characteristics of each campaign type.
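For reference, the greedy baseline that the proposed algorithm refines can be sketched as plain Sequential Forward Selection (SFS): the feature whose addition most improves a model score is added one at a time. The `score` callback and the toy feature names below are assumptions for illustration, not the paper's data or evaluation routine.

```python
# Minimal sketch of Sequential Forward Selection (SFS), the greedy baseline.
# `score(features)` stands for any model-evaluation routine, e.g. cross-validated
# accuracy; it is an assumed callback, not the paper's implementation.

def sfs(all_features, score, k):
    """Greedily add the feature that most improves `score` until k features are chosen."""
    selected = []
    while len(selected) < k:
        remaining = [f for f in all_features if f not in selected]
        best = max(remaining, key=lambda f: score(selected + [f]))
        if selected and score(selected + [best]) <= score(selected):
            break                  # stop early when no candidate improves the score
        selected.append(best)
    return selected

# Toy usage: the score simply counts informative (non-noise) features.
print(sfs(["age", "rating", "sales", "noise"],
          score=lambda feats: sum(f != "noise" for f in feats), k=3))
```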

Development of Three-Dimensional Trajectory Model for Detecting Source Region of the Radioactive Materials Released into the Atmosphere (대기 누출 방사성물질 선원 위치 추적을 위한 3차원 궤적모델 개발)

  • Suh, Kyung-Suk;Park, Kihyun;Min, Byung-Il;Kim, Sora;Yang, Byung-Mo
    • Journal of Radiation Protection and Research
    • /
    • v.41 no.1
    • /
    • pp.31-39
    • /
    • 2016
  • Background: It is necessary to consider overall countermeasures for analyzing nuclear activities, given the growing number of nuclear facilities such as nuclear power and reprocessing plants in neighboring countries including China, Taiwan, North Korea, Japan and South Korea. South Korea and the Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO) operate monitoring instruments to detect radionuclides released into the air. Estimating the origin of the measured radionuclides is important, along with the monitoring analysis, for investigating and safeguarding against nuclear activities in neighboring countries. Materials and methods: A three-dimensional forward/backward trajectory model has been developed to estimate the origin of radionuclides from a covert nuclear activity. The developed trajectory model consists of forward and backward modules that track particle positions using a finite difference method. Results and discussion: The three-dimensional trajectory model was validated using data measured during the Chernobyl accident. The calculated results agreed well when high-concentration measurements and locations near the release point were used. The model showed some uncertainty depending on the release time, release height, and the time interval of the trajectory at each release point. An atmospheric dispersion model called the long-range accident dose assessment system (LADAS), based on the fields of regard (FOR) technique, was applied to reduce the uncertainties of the trajectory model and to improve the detection technology for estimating the radioisotope emission area. Conclusion: The detection technology developed in this study can estimate the release area and origin of covert nuclear activities from radioisotopes measured at monitoring stations, and it may serve as a critical tool for improving capabilities in the nuclear safety field.
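The backward module described in this abstract can be illustrated by a minimal finite-difference (Euler) trajectory step: a particle is stepped backwards in time by reversing the wind vector at its current position. The `wind` interface, time step, and constant wind used below are assumptions for illustration; the actual model also handles meteorological data input, map projections, and release-height uncertainty.

```python
# Minimal sketch of a backward trajectory with a finite-difference (Euler) step.
# `wind(x, y, z, t)` is an assumed interface returning (u, v, w) in m/s.

def backward_trajectory(x, y, z, t_end, t_start, dt, wind):
    """Step a particle backwards from (x, y, z) at t_end toward t_start."""
    path, t = [(t_end, x, y, z)], t_end
    while t > t_start:
        u, v, w = wind(x, y, z, t)
        x, y, z = x - u * dt, y - v * dt, z - w * dt   # reverse the wind vector
        t -= dt
        path.append((t, x, y, z))
    return path

# Toy usage: a constant 10 m/s eastward wind traces the particle 36 km back to the west.
path = backward_trajectory(0.0, 0.0, 500.0, t_end=3600.0, t_start=0.0, dt=600.0,
                           wind=lambda x, y, z, t: (10.0, 0.0, 0.0))
print(path[-1])   # (0.0, -36000.0, 0.0, 500.0)
```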