• Title/Summary/Keyword: optimal algorithm


Fragment Combination From DNA Sequence Data Using Fuzzy Reasoning Method (퍼지 추론기법을 이용한 DNA 염기 서열의 단편결합)

  • Kim, Kwang-Baek;Park, Hyun-Jung
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.10 no.12
    • /
    • pp.2329-2334
    • /
    • 2006
  • In this paper, we propose a method that remedies the combination failures of conventional contig assembly programs when joining DNA fragments. In the proposed method, very long DNA sequence data are divided into prototype fragments of about 700 bases, the length an automatic sequence analyzer can read at one time, and a matching ratio is then calculated by comparing each standard prototype with three fragmented clones of about 700 bases generated by the PCR method. In this process, the time to calculate the matching ratio is reduced by the Compute Agreement algorithm. Two combination candidates are extracted for every prototype according to the degree of overlap of the calculated fragment pairs, and the degree of combination is then decided by a fuzzy reasoning method that uses the matching ratio of each extracted fragment, the A, C, G, T membership degrees of each DNA sequence, and the prior frequencies of A, C, G, T. DNA sequence combination is completed by iterating this process, joining the selected optimal test fragments, until no fragment remains. For the experiments, fragments of about 700 bases were generated from sequences of 10,000 bases and 100,000 bases extracted from 'PCC6803', a complete protein genome. Experiments that applied random mutations to these fragments showed that the proposed method was faster than the FAP program and that the combination failures of conventional contig assembly programs did not occur.
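The overlap-scoring step this abstract describes can be sketched as follows. This is a hypothetical helper, not the paper's Compute Agreement implementation: it scores how well the suffix of one fragment matches the prefix of another, the kind of matching ratio used to rank combination candidates.

```python
# Hypothetical sketch of the matching-ratio step (not the paper's code):
# score how well the suffix of one ~700-base fragment overlaps the
# prefix of another, and find the best overlap length.
def matching_ratio(frag_a: str, frag_b: str, overlap: int) -> float:
    """Fraction of agreeing bases in an overlap of the given length."""
    tail, head = frag_a[-overlap:], frag_b[:overlap]
    agree = sum(1 for x, y in zip(tail, head) if x == y)
    return agree / overlap

def best_overlap(frag_a: str, frag_b: str, min_len: int = 3) -> tuple[int, float]:
    """Return (overlap_length, ratio) with the highest matching ratio."""
    best = (0, 0.0)
    for k in range(min_len, min(len(frag_a), len(frag_b)) + 1):
        r = matching_ratio(frag_a, frag_b, k)
        # Prefer longer overlaps at equal ratio, mimicking the
        # degree-of-overlap ranking used to pick combination candidates.
        if (r, k) > (best[1], best[0]):
            best = (k, r)
    return best
```

The fuzzy-reasoning stage of the paper would then weigh these ratios together with base membership degrees; that stage is omitted here.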

A Multi-sensor based Very Short-term Rainfall Forecasting using Radar and Satellite Data - A Case Study of the Busan and Gyeongnam Extreme Rainfall in August 2014 - (레이더-위성자료 이용 다중센서 기반 초단기 강우예측 - 2014년 8월 부산·경남 폭우사례를 중심으로 -)

  • Jang, Sangmin;Park, Kyungwon;Yoon, Sunkwon
    • Korean Journal of Remote Sensing
    • /
    • v.32 no.2
    • /
    • pp.155-169
    • /
    • 2016
  • In this study, we developed a multi-sensor blending short-term rainfall forecasting technique using radar and satellite data during the extreme rainfall that occurred in the Busan and Gyeongnam region in August 2014. The Tropical Z-R relationship ($Z=32R^{1.65}$) was applied as the optimal radar Z-R relation, and its accuracy was confirmed to improve for heavy rainfall of 20 mm/h. In addition, a multi-sensor blending technique combining radar and COMS (Communication, Ocean and Meteorological Satellite) data was applied for quantitative precipitation estimation. Multi-sensor blending improved very-short-term rainfall forecasting performance for strong heavy rainfall events of 60 mm/h or more. AWS (Automatic Weather System) and MAPLE data were used to verify rainfall prediction accuracy. The results ensured about 50% or better accuracy of heavy rainfall prediction at a 1-hour lead time; correlations ranged from 0.80 at a 10-minute lead time down to 0.53, and root mean square errors from 3.99 mm/h to 6.43 mm/h. This study shows that multi-sensor blending techniques using radar and satellite data can provide more reliable very-short-term rainfall forecasting data. Further case studies are needed, and quantitative precipitation estimation and prediction by multi-sensor blending should be pursued along with improving the satellite rainfall estimation algorithm.
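The Tropical Z-R relation quoted in the abstract, $Z=32R^{1.65}$, can be inverted to estimate rain rate from radar reflectivity. A minimal sketch, assuming reflectivity arrives in the usual logarithmic dBZ units (the dBZ-to-linear conversion is standard radar practice, not specific to this paper):

```python
import math

# Invert the Tropical Z-R relation Z = 32 * R**1.65 to get the
# rain rate R (mm/h) from a radar reflectivity measurement in dBZ.
def rain_rate_from_dbz(dbz: float, a: float = 32.0, b: float = 1.65) -> float:
    z_linear = 10.0 ** (dbz / 10.0)     # dBZ -> linear Z (mm^6/m^3)
    return (z_linear / a) ** (1.0 / b)  # invert Z = a * R**b
```

For example, the 20 mm/h threshold the study highlights corresponds to roughly 36.5 dBZ under this relation.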

Sensitivity Analysis of Satellite BUV Ozone Profile Retrievals on Meteorological Parameter Errors (기상 입력장 오차에 대한 자외선 오존 프로파일 산출 알고리즘 민감도 분석)

  • Shin, Daegeun;Bak, Juseon;Kim, Jae Hwan
    • Korean Journal of Remote Sensing
    • /
    • v.34 no.3
    • /
    • pp.481-494
    • /
    • 2018
  • Accurate radiative transfer model simulation is essential for accurate ozone profile retrieval by optimal estimation from backscattered ultraviolet (BUV) measurements. The input parameters of the radiative transfer model are the main factors that determine model accuracy. In particular, meteorological parameters such as temperature and surface pressure directly affect the simulated radiation spectrum, since they enter the calculation of the ozone absorption cross section and Rayleigh scattering. Hence, the sensitivity of UV ozone profile retrievals to these parameters was investigated using a radiative transfer model. Surface pressure shows an average error within 100 hPa in the daily/monthly climatological data based on the numerical weather prediction model, and the resulting ozone retrieval error is less than 0.2 DU for each layer. On the other hand, temperature shows an error of 1-7 K depending on observation station and altitude for the same daily/monthly climatological data, and the resulting ozone retrieval error is about 4 DU for each layer. These results can help in understanding the vertical ozone information obtained from satellites. They are also expected to be useful in selecting meteorological input data and establishing the system design direction when applying the algorithm to satellite operations.
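The perturbation experiment this abstract describes follows a generic pattern: run the forward model with and without an input error and report the change in the retrieved quantity. A minimal sketch, where `forward_model` is a stand-in callable, not the actual radiative transfer code used in the paper:

```python
# Generic one-at-a-time sensitivity sketch (hypothetical, illustrative):
# perturb one meteorological input (e.g. temperature by 1-7 K, surface
# pressure by up to 100 hPa) and difference the model output.
def sensitivity(forward_model, params: dict, key: str, error: float) -> float:
    base = forward_model(params)
    perturbed = dict(params, **{key: params[key] + error})
    return forward_model(perturbed) - base
```

Applied per retrieval layer, this yields the per-layer DU errors the study reports (under 0.2 DU for pressure, about 4 DU for temperature).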

Analytical Solution for Attitude Command Generation of Agile Spacecraft (고기동 인공위성의 해석적 자세명령생성 기법 연구)

  • Mok, Sung-Hoon;Bang, Hyochoong;Kim, Hee-Seob
    • Journal of the Korean Society for Aeronautical & Space Sciences
    • /
    • v.46 no.8
    • /
    • pp.639-651
    • /
    • 2018
  • An analytical solution to generate an attitude command profile for agile spacecraft is proposed. In a realistic environment, obtaining an analytical minimum-time optimal solution is very difficult because of the following constraints: 1) actuator saturation, 2) flexible mode excitation, and 3) uplink command bandwidth limits. For these reasons, this paper applies two simplifications, an eigen-axis rotation and a finite-jerk approximated profile, to derive the solution analytically. The resulting attitude profile can be used as a feedforward or reference input to the on-board attitude controller, enhancing spacecraft agility. Equations of the attitude command profile are derived for two general boundary conditions: rest-to-rest maneuvers and spin-to-spin maneuvers. Simulation results demonstrate that the initial and final boundary conditions, in terms of time, attitude, and angular velocity, are well satisfied by the proposed analytical solution. The derived attitude command generation algorithm can be used to minimize the number of parameters uploaded to the spacecraft or to automate the attitude command generation sequence on board.
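The rest-to-rest case can be illustrated with a simplified eigen-axis profile under a bang-bang acceleration limit only (the paper additionally shapes the profile with finite jerk; that refinement is omitted in this sketch):

```python
import math

# Simplified rest-to-rest eigen-axis maneuver sketch (bang-bang
# acceleration, infinite jerk): accelerate for half the slew angle,
# then decelerate symmetrically. Not the paper's finite-jerk solution.
def rest_to_rest_time(theta: float, a_max: float) -> float:
    """Minimum slew time for a bang-bang eigen-axis maneuver of angle theta."""
    return 2.0 * math.sqrt(theta / a_max)

def angle_at(t: float, theta: float, a_max: float) -> float:
    """Attitude command (angle about the eigen-axis) at time t."""
    t_half = math.sqrt(theta / a_max)
    if t <= t_half:                   # accelerating half
        return 0.5 * a_max * t * t
    t2 = 2.0 * t_half - t             # mirrored decelerating half
    return theta - 0.5 * a_max * t2 * t2
```

By construction the profile starts and ends at rest: the angle is 0 at t = 0 and exactly theta at the final time, matching the boundary conditions the abstract emphasizes.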

An Equality-Based Model for Real-Time Application of A Dynamic Traffic Assignment Model (동적통행배정모형의 실시간 적용을 위한 변동등식의 응용)

  • Shin, Seong-Il;Ran, Bin;Choi, Dae-Soon;Baik, Nam-Tcheol
    • Journal of Korean Society of Transportation
    • /
    • v.20 no.3
    • /
    • pp.129-147
    • /
    • 2002
  • This paper presents a variational equality formulation by providing a new dynamic route choice condition for a link-based dynamic traffic assignment model. The concepts of used paths, used links, and used departure times are employed to derive the new link-based dynamic route choice condition. The route choice condition is formulated as a time-dependent variational equality problem, and necessity and sufficiency conditions are provided to prove the equivalence of the variational equality model. A solution algorithm is proposed based on a physical network approach and a diagonalization technique. An asymmetric network computational study shows that the ideal dynamic user-optimal route condition is satisfied as the length of each time interval is shortened. The I-394 corridor study shows a computational speed improvement of more than 93% compared to the conventional variational inequality approach; furthermore, the larger the network, the greater the expected computational gain. This paper concludes that the variational equality formulation, with its fast computational performance, could be a promising approach for real-time application of a dynamic traffic assignment model.

A Case Study for Simulation of a Debris Flow with DEBRIS-2D at Inje, Korea (DEBRIS-2D를 이용한 인제지역 토석류 산사태 거동모사 사례 연구)

  • Chae, Byung-Gon;Liu, Ko-Fei;Kim, Man-Il
    • The Journal of Engineering Geology
    • /
    • v.20 no.3
    • /
    • pp.231-242
    • /
    • 2010
  • In order to assess the applicability of debris flow simulation on natural terrain in Korea, this study adopted the DEBRIS-2D program developed by Liu and Huang (2006). For the simulation of large debris flows composed of fine and coarse materials, DEBRIS-2D uses the constitutive relation proposed by Julien and Lan (1991). Based on the theory of DEBRIS-2D, this study selected a valley where a large debris flow occurred on July 16th, 2006 at Deoksanri, Inje county, Korea. The simulation results show that all the mass had flowed into the stream within 10 minutes of initiation. At 10 minutes, the debris flow reached the first geological turn and an open area, which slowed it down and changed its flow direction. After that, the debris flow accelerated again and reached the village after 40 minutes. The maximum velocity was rather low, between 1 m/sec and 2 m/sec, which is why the debris flow took 50 minutes to reach the village. The depth change of the debris flow shows the enormous effect of the valley shape. The simulated result is very similar to what happened in the field, meaning that the DEBRIS-2D program can be applied to the geologic and topographic conditions in Korea without large modification of the analysis algorithm. However, it is necessary to determine optimal reference values of Korean geologic and topographic properties for more reliable simulation of debris flows.

Data Congestion Control Using Drones in Clustered Heterogeneous Wireless Sensor Network (클러스터된 이기종 무선 센서 네트워크에서의 드론을 이용한 데이터 혼잡 제어)

  • Kim, Tae-Rim;Song, Jong-Gyu;Im, Hyun-Jae;Kim, Bum-Su
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.21 no.7
    • /
    • pp.12-19
    • /
    • 2020
  • A clustered heterogeneous wireless sensor network is comprised of sensor nodes and cluster heads, hierarchically organized for different objectives. In such a network, node resources must be managed carefully to enhance network performance under memory and battery capacity constraints. For instance, if interesting events occur frequently in the vicinity of particular sensor nodes, those nodes may receive massive amounts of data. Data congestion can then arise from a memory bottleneck or link disconnection at cluster heads whose remaining memory space fills up with those data. In this paper, we utilize drones as mobile sinks to resolve data congestion, and we model the network, sensor nodes, and cluster heads. We also design a cost function and a congestion indicator to calculate the degree of congestion, and we propose a data congestion map index and a data congestion mapping scheme to deploy drones at optimal points. Using control variables, we explore the relationship between the degree of congestion and the number of drones to be deployed, i.e., how many drones are needed to keep congestion below a given level while remaining within communication range. Furthermore, we show that our algorithm outperforms previous work by at least 20% in terms of memory overflow.
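The congestion-mapping idea can be sketched with a hypothetical scoring function: rate each cluster head by how full its memory is, then pick deployment points above a congestion threshold. The paper's cost function and map index are richer; this only illustrates the thresholding step, and all names here are placeholders.

```python
# Hypothetical sketch of congestion scoring and drone placement:
# each cluster head is (used_bytes, capacity_bytes); heads whose
# memory-fill ratio exceeds the threshold get a drone, most
# congested first.
def congestion_degree(used_bytes: int, capacity_bytes: int) -> float:
    return used_bytes / capacity_bytes

def drone_deployment_points(heads: dict[str, tuple[int, int]],
                            threshold: float = 0.8) -> list[str]:
    """Return cluster-head ids exceeding the threshold, most congested first."""
    scored = {h: congestion_degree(u, c) for h, (u, c) in heads.items()}
    return sorted((h for h, d in scored.items() if d > threshold),
                  key=lambda h: -scored[h])
```

A real deployment would additionally check drone communication range against head positions, as the abstract notes.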

An Efficient Subsequence Matching Method Based on Index Interpolation (인덱스 보간법에 기반한 효율적인 서브시퀀스 매칭 기법)

  • Loh Woong-Kee;Kim Sang-Wook
    • The KIPS Transactions:PartD
    • /
    • v.12D no.3 s.99
    • /
    • pp.345-354
    • /
    • 2005
  • Subsequence matching is one of the most important operations in the field of data mining. The existing subsequence matching algorithms use only one index, and their performance degrades as the difference grows between the length of a query sequence and the size of the windows, the fixed-length subsequences extracted from data sequences to construct the index. In this paper, we propose a new subsequence matching method based on index interpolation to overcome this problem. An index interpolation method constructs two or more indexes and performs searching by selecting the most appropriate index among them according to the given query sequence length. We first examine how performance varies with the difference between the query sequence length and the window size through preliminary experiments, and formulate a search cost model that reflects the distribution of query sequence lengths from the viewpoint of physical database design. Next, we propose a new subsequence matching method based on index interpolation to improve search performance. We also present an algorithm based on this search cost formula to construct optimal indexes for better search performance. Finally, we verify the superiority of the proposed method through a series of experiments using real and synthesized data sets.
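The index-selection rule can be sketched as follows. This assumes the usual subsequence-matching constraint that an index is usable only when its window size does not exceed the query length, and that the best choice is the usable index whose window is closest to the query length; the paper's cost-model-driven selection is more refined.

```python
# Sketch of index selection under index interpolation (assumed rule,
# not the paper's cost model): among indexes built with different
# window sizes, pick the largest window not exceeding the query length,
# since performance degrades as the query-window gap grows.
def select_index(window_sizes: list[int], query_len: int) -> int:
    """Return the window size of the most appropriate index."""
    usable = [w for w in window_sizes if w <= query_len]
    if not usable:
        raise ValueError("query shorter than every indexed window")
    return max(usable)
```

With indexes at window sizes 64, 128, and 256, a query of length 200 would be served by the 128-window index rather than the distant 64-window one.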

An improvement in FGS coding scheme for high quality scalability (고화질 확장성을 위한 FGS 코딩 구조의 개선)

  • Boo, Hee-Hyung;Kim, Sung-Ho
    • The KIPS Transactions:PartB
    • /
    • v.18B no.5
    • /
    • pp.249-254
    • /
    • 2011
  • FGS (fine granularity scalability), which supports scalability in MPEG-4 Part 2, is a scalable video coding scheme that adapts the bit rate to varying network bandwidth, thereby achieving optimal video quality. In this paper, we propose an FGS coding scheme that performs one additional bit-plane coding pass on the residue signal occurring in the enhancement layer of the basic FGS coding scheme. The experiment evaluated the video quality scalability of the proposed FGS coding scheme against the FGS coding scheme of the MPEG-4 verification model (VM-FGS), by analyzing the PSNR values of three test video sequences. The results showed that with the VM5+ rate control algorithm, the proposed FGS coding scheme obtained Y, U, V PSNR gains of 0.4 dB, 9.4 dB, and 9 dB on average, and with a fixed QP value of 17, gains of 4.61 dB, 20.21 dB, and 16.56 dB on average, over the existing VM-FGS. From these results, we find that the proposed FGS coding scheme achieves higher video quality scalability, from minimum to maximum quality, than VM-FGS.
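The bit-plane coding at the heart of FGS can be illustrated with a minimal decomposition sketch. This is illustrative only, not the MPEG-4 FGS entropy coder: it splits residue magnitudes into bit-planes, most significant plane first, so that truncating the stream at any plane still refines quality progressively (sign handling is omitted).

```python
# Minimal bit-plane decomposition sketch (illustrative, not MPEG-4's
# actual FGS coder): emit the magnitude bits of each residue
# coefficient plane by plane, MSB plane first.
def to_bitplanes(residues: list[int]) -> list[list[int]]:
    n_planes = max(abs(r) for r in residues).bit_length()
    planes = []
    for p in range(n_planes - 1, -1, -1):   # most significant plane first
        planes.append([(abs(r) >> p) & 1 for r in residues])
    return planes
```

The proposed scheme in the abstract applies one more such pass to the enhancement-layer residue, recovering detail the basic FGS layer leaves behind.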

Prediction of Lung Cancer Based on Serum Biomarkers by Gene Expression Programming Methods

  • Yu, Zhuang;Chen, Xiao-Zheng;Cui, Lian-Hua;Si, Hong-Zong;Lu, Hai-Jiao;Liu, Shi-Hai
    • Asian Pacific Journal of Cancer Prevention
    • /
    • v.15 no.21
    • /
    • pp.9367-9373
    • /
    • 2014
  • In the diagnosis of lung cancer, rapid distinction between small cell lung cancer (SCLC) and non-small cell lung cancer (NSCLC) tumors is very important. Serum markers, including lactate dehydrogenase (LDH), C-reactive protein (CRP), carcino-embryonic antigen (CEA), neurone specific enolase (NSE) and Cyfra21-1, are reported to reflect lung cancer characteristics. In this study, lung tumors were classified on the basis of biomarkers (measured in 120 NSCLC and 60 SCLC patients) by setting up optimal biomarker joint models with a powerful computerized tool, gene expression programming (GEP). GEP is a learning algorithm that combines the advantages of genetic programming (GP) and genetic algorithms (GA). It focuses specifically on relationships between variables in data sets and then builds models to explain these relationships, and it has been used successfully in formula finding and function mining. As a basis for defining a GEP environment for SCLC and NSCLC prediction, three explicit predictive models were constructed. CEA and NSE are frequently used lung cancer markers in clinical trials, while CRP, LDH and Cyfra21-1 also have significant meaning in lung cancer; based on CEA and NSE, we set up three GEP models: GEP1 (CEA, NSE, Cyfra21-1), GEP2 (CEA, NSE, LDH), and GEP3 (CEA, NSE, CRP). The best classification result was obtained when CEA, NSE and Cyfra21-1 were combined: 128 of 135 subjects in the training set and 40 of 45 subjects in the test set were classified correctly, for accuracy rates of 94.8% in the training set and 88.9% in the test set. With GEP2, the accuracy decreased by 1.5% and 6.6% in the training and test sets; with GEP3, by 0.82% and 4.45%, respectively. Serum Cyfra21-1 is a useful and sensitive serum biomarker in discriminating between NSCLC and SCLC. GEP modeling is a promising and excellent tool in the diagnosis of lung cancer.
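The accuracy figures reported above come down to a simple count of correct predictions over labeled samples. A sketch of that evaluation step, where `model` is a placeholder callable (the GEP-evolved formulas themselves are not reproduced here):

```python
# Sketch of the accuracy computation behind the reported 94.8% / 88.9%
# figures: `model` maps a biomarker dict to a predicted label and is a
# hypothetical stand-in for a fitted GEP formula.
def accuracy(model, samples: list[tuple[dict, str]]) -> float:
    """Fraction of samples whose predicted label matches the true label."""
    correct = sum(1 for features, label in samples if model(features) == label)
    return correct / len(samples)
```

For instance, 128 correct out of 135 training subjects gives 128/135 ≈ 94.8%, matching the abstract's training-set figure.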