• Title/Summary/Keyword: multiple targets


Test Case Generation for Simulink/Stateflow Model Based on a Modified Rapidly Exploring Random Tree Algorithm (변형된 RRT 알고리즘 기반 Simulink/Stateflow 모델 테스트 케이스 생성)

  • Park, Han Gon;Chung, Ki Hyun;Choi, Kyung Hee
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.5 no.12
    • /
    • pp.653-662
    • /
    • 2016
  • This paper describes a test case generation algorithm for Simulink/Stateflow models based on the Rapidly-exploring Random Tree (RRT) algorithm, which has been successfully applied to path finding. An important factor influencing the performance of the RRT algorithm is the metric used to calculate the distance between nodes in the RRT space. Since a test case for a Simulink/Stateflow (SL/SF) model is an input sequence that checks a specific condition (called a test target in this paper) at a specific status of the model, it is necessary to drive the model to that status before checking the condition. A status maps to a node of the RRT. It is usually necessary to check various conditions at a specific status; for example, when the status represents an SL/SF model state from which multiple transitions are made, multiple conditions must be checked to measure transition coverage. We propose a distance calculation metric based on the observation that test targets gather around certain statuses, such as SL/SF states, named key nodes in this paper. The proposed metric increases the probability that the RRT is extended from key nodes by imposing penalties on non-key nodes. A test case generation algorithm utilizing the proposed metric is proposed. Three models of Electronic Control Units (ECUs) embedded in a commercial vehicle are used for the performance evaluation. Performance is evaluated in terms of penalties and compared with that of a typical RRT algorithm.
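The penalty idea above can be sketched as a weighted nearest-neighbor rule: when choosing which tree node to extend toward a random sample, non-key nodes have their distance inflated so key nodes are preferred. This is only an illustrative sketch under assumed data structures (`pos`, `is_key`, and the penalty factor are not from the paper).

```python
import math

# Hypothetical sketch of a penalty-based RRT distance metric: nodes flagged
# as "key" keep their raw Euclidean distance, while non-key nodes are
# penalized so the tree preferentially extends from key nodes.

def penalized_distance(node, sample, penalty=2.0):
    """Euclidean distance, multiplied by a penalty for non-key nodes."""
    d = math.dist(node["pos"], sample)
    return d if node["is_key"] else d * penalty

def nearest_node(tree, sample, penalty=2.0):
    """Pick the tree node from which the RRT is extended toward the sample."""
    return min(tree, key=lambda n: penalized_distance(n, sample, penalty))

tree = [{"pos": (0.0, 0.0), "is_key": False},
        {"pos": (0.0, 1.5), "is_key": True}]
best = nearest_node(tree, (0.0, 1.0))
```

Here the non-key node is geometrically closer to the sample (distance 1.0 vs. 0.5 for the key node), but its penalized distance (2.0) loses, so the key node is extended.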

Detection Method for Bean Cotyledon Locations under Vinyl Mulch Using Multiple Infrared Sensors

  • Lee, Kyou-Seung;Cho, Yong-jin;Lee, Dong-Hoon
    • Journal of Biosystems Engineering
    • /
    • v.41 no.3
    • /
    • pp.263-272
    • /
    • 2016
  • Purpose: Pulse crop damage due to wild birds is a serious problem, to the extent that the rate of damage between seeding and the cotyledon stage reaches 45.4% on average. This study investigated a method of fundamentally blocking birds from eating crops by conducting vinyl mulching after seeding and identifying the growing locations of beans in order to perform punching. Methods: Infrared (IR) sensors that can measure temperature without contact were used to recognize the locations of soybean cotyledons below the vinyl mulch. To expand the measurable range, 10 IR sensors were arranged in a linear array, and a sliding mechanical device was used to reconstruct two-dimensional spatial variance information of the targets. Spatial interpolation was applied to the two-dimensional temperature distribution measured in real time to improve the resolution of the bean coleoptile locations. The temperature distributions above the vinyl mulch for five species of soybeans were analyzed over a period of six days from the appearance of the cotyledon stage. Results: During the experimental period, cases where bean cotyledons did and did not come into contact with the bottom of the vinyl mulch were both observed, depending on the degree of growth of the bean cotyledons. Although the locations of bean cotyledons could be estimated through temperature distribution analyses when they came into contact with the bottom of the vinyl mulch, this estimation showed somewhat large errors depending on the time elapsed after the cotyledon stage. The detection results were similar for similar types of crops, so this method could be applied to crops with similar growth patterns. According to the results of the 360 experiments conducted (five species of bean × six days × four speed levels × three repetitions), the location detection performance had an accuracy of 36.9%, and the range of location errors was 0-4.9 cm (RMSE = 3.1 cm). During the period 3-5 days after the cotyledon stage, the location detection performance had an accuracy of 59% (RMSE = 3.9 cm). Conclusions: In the present study, to fundamentally solve the problem of damage to beans from birds in the early stage after seeding, a working method was proposed in which punching is carried out after seeding, breaking away from the existing method in which seeding is carried out after punching. Methods for the accurate detection of soybean growing locations were studied to allow punching that promotes the continuous growth of soybeans that have reached the cotyledon stage. Through experiments using multiple IR sensors and a sliding mechanical device, it was found that the locations of the crop could be partially identified 3-5 days after reaching the cotyledon stage, regardless of the kind of pulse crop. It can be concluded that additional studies of robust detection methods considering environmental and crop-growth factors are necessary.
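The spatial-interpolation step described above can be illustrated with a small sketch: upsample a coarse 2-D temperature grid by separable linear interpolation and take the hottest cell as the estimated cotyledon contact point. The grid values and upsampling factor are made up for illustration; the paper's sensor geometry and interpolation method may differ.

```python
import numpy as np

def upsample_linear(grid, factor):
    """Separable linear interpolation of a coarse 2-D temperature grid."""
    rows = np.linspace(0, grid.shape[0] - 1, grid.shape[0] * factor)
    cols = np.linspace(0, grid.shape[1] - 1, grid.shape[1] * factor)
    # interpolate along columns first, then along rows
    tmp = np.array([np.interp(cols, np.arange(grid.shape[1]), r) for r in grid])
    out = np.array([np.interp(rows, np.arange(grid.shape[0]), tmp[:, j])
                    for j in range(tmp.shape[1])]).T
    return out

def hottest_cell(grid):
    """Index of the maximum-temperature cell (candidate cotyledon location)."""
    return np.unravel_index(np.argmax(grid), grid.shape)

# toy 4x4 IR reading with one warm spot where a cotyledon touches the mulch
g = np.zeros((4, 4))
g[1, 2] = 5.0
up = upsample_linear(g, 3)
```

The upsampled grid localizes the warm spot on a 3x finer lattice; in the study's setting this finer estimate would drive the punching position.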

Analysis Method for Full-length LiDAR Waveforms (라이다 파장 분석 방법론에 대한 연구)

  • Jung, Myung-Hee;Yun, Eui-Jung;Kim, Cheon-Shik
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.44 no.4 s.316
    • /
    • pp.28-35
    • /
    • 2007
  • Airborne laser altimeters have been utilized for 3D topographic mapping of the Earth, Moon, and planets with high resolution and accuracy; this rapidly growing remote sensing technique measures the round-trip time of an emitted laser pulse to determine the topography. The traveling time from the laser scanner to the Earth's surface and back is directly related to the distance from the sensor to the ground. When there are several objects within the travel path of the laser pulse, the reflected laser pulses are distorted by surface variation within the footprint, generating multiple echoes because each target transforms the emitted pulse. The shapes of the received waveforms also contain important information about surface roughness, slope, and reflectivity. Waveform processing algorithms parameterize and model the return signal resulting from the interaction of the transmitted laser pulse with the surface, so that each of the multiple targets within the footprint can be identified. Assuming each response is Gaussian, the returns are modeled as a Gaussian mixture distribution, and the parameters of the model are estimated by the LMS method or the EM algorithm. However, each response actually shows right-side skewness with a slowly decaying tail. For applications that require more accurate analysis, the tail information must be quantified by an approach that decomposes the tail. One method to handle this problem is proposed in this study.
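The Gaussian-mixture step mentioned above can be sketched with a minimal 1-D EM fit for two components, as might model a two-return waveform. This is a generic textbook EM sketch on synthetic data, not the paper's estimator (which also addresses the skewed tail that a pure Gaussian mixture misses).

```python
import numpy as np

def em_two_gaussians(x, iters=100):
    """Minimal EM for a two-component 1-D Gaussian mixture."""
    mu = np.array([x.min(), x.max()], dtype=float)   # crude initialization
    sigma = np.array([x.std(), x.std()])
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibility of each component for each sample
        # (the 1/sqrt(2*pi) constant cancels in the normalization)
        dens = pi * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / sigma
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and standard deviations
        nk = resp.sum(axis=0)
        pi = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    return pi, mu, sigma

# synthetic "waveform" sample: two well-separated echo components
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0.0, 1.0, 500), rng.normal(10.0, 1.0, 500)])
pi, mu, sigma = em_two_gaussians(x)
```

On real waveforms the right-skewed tails would bias these Gaussian estimates, which is the motivation for the tail-decomposition approach the paper proposes.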

Multi-threaded Web Crawling Design using Queues (큐를 이용한 다중스레드 방식의 웹 크롤링 설계)

  • Kim, Hyo-Jong;Lee, Jun-Yun;Shin, Seung-Soo
    • Journal of Convergence for Information Technology
    • /
    • v.7 no.2
    • /
    • pp.43-51
    • /
    • 2017
  • Background/Objectives : The purpose of this study is to design and implement a multi-threaded web crawler using queues that can solve the time delay of the single-processing method, the cost increase of the parallel-processing method, and the waste of manpower in utilizing multiple bots connected over a wide area network. Methods/Statistical analysis : This study designs and analyzes applications that run on independent systems based on a multi-threaded system configuration using queues. Findings : We propose a multi-threaded web crawler design using queues. In addition, the throughput of web documents can be analyzed by dividing it by client and thread according to the formula, and the efficiency and optimal number of clients can be confirmed by checking the efficiency of each thread. The proposed system is based on distributed processing; clients in each independent environment provide fast and reliable web documents using queues and threads. Application/Improvements : There is a need for a system that quickly and efficiently navigates and collects various web sites by applying queues and multiple threads to a general-purpose web crawler, rather than a web crawler design that targets a particular site.
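The queue-plus-threads pattern the paper describes can be sketched with a thread-safe frontier queue feeding worker threads. This is a minimal generic sketch, not the authors' implementation: `fetch` is a caller-supplied stand-in for a real HTTP request.

```python
import queue
import threading

def crawl(seed_urls, fetch, workers=4):
    """Multi-threaded crawl driven by a thread-safe queue.

    `fetch` takes a URL and returns the list of URLs linked from it
    (a stand-in here for a real HTTP request + link extraction).
    """
    frontier = queue.Queue()
    seen = set(seed_urls)
    results = {}                 # url -> discovered links
    lock = threading.Lock()

    for url in seed_urls:
        frontier.put(url)

    def worker():
        while True:
            url = frontier.get()            # blocks until work is available
            try:
                links = fetch(url)
                with lock:
                    results[url] = links
                    for link in links:
                        if link not in seen:   # enqueue each URL only once
                            seen.add(link)
                            frontier.put(link)
            finally:
                frontier.task_done()

    for _ in range(workers):
        threading.Thread(target=worker, daemon=True).start()
    frontier.join()              # wait until every queued URL is processed
    return results
```

The `Queue.join()`/`task_done()` pairing lets the main thread wait until the frontier drains, while the daemon workers share one lock-protected `seen` set to avoid duplicate fetches.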

Machine learning-based Fine Dust Prediction Model using Meteorological data and Fine Dust data (기상 데이터와 미세먼지 데이터를 활용한 머신러닝 기반 미세먼지 예측 모형)

  • KIM, Hye-Lim;MOON, Tae-Heon
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.24 no.1
    • /
    • pp.92-111
    • /
    • 2021
  • As fine dust negatively affects disease, industry, and the economy, people are sensitive to fine dust. Therefore, if the occurrence of fine dust can be predicted, countermeasures can be prepared in advance, which can be helpful for life and the economy. Fine dust is affected by the weather and by the concentration of fine dust emission sources. The industrial sector has the largest amount of fine dust emissions, and in industrial complexes, factories are major fine dust emission sources. This study targets regions with old industrial complexes in local cities. The purpose of this study is to explore the factors that cause fine dust and to develop a model that can predict its occurrence. Weather data and fine dust data were used, and variables that influence the generation of fine dust were extracted through multiple regression analysis. Based on the results of the multiple regression analysis, a model with high predictive power was obtained by training machine learning regression models, and the performance of the models was confirmed using test data. As a result, the models with high predictive power were the linear regression, Gaussian process regression, and support vector machine models. The proportion of training data and predictive power were not proportional. In addition, the average difference between predicted and measured values was not large, but when the measured value was high, predictive power decreased. The results of this study can be developed into a more systematic and precise fine dust prediction service by combining meteorological data and urban big data through local government data hubs. Lastly, it will be an opportunity to promote the development of smart industrial complexes.
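The regression workflow above (fit on training data, check predictive power on held-out test data) can be sketched with ordinary least squares on synthetic data. Everything here is illustrative: the variables, coefficients, and train/test split are invented, not the study's data or models.

```python
import numpy as np

# Illustrative sketch, not the authors' data: regress a fine-dust
# concentration on synthetic meteorological variables, then evaluate
# predictive power (R^2) on a held-out test set.

rng = np.random.default_rng(42)
n = 500
wind = rng.uniform(0, 10, n)          # wind speed, synthetic
humidity = rng.uniform(20, 90, n)     # relative humidity, synthetic
emissions = rng.uniform(0, 5, n)      # emission-source proxy, synthetic
pm = 50 - 3.0 * wind + 0.2 * humidity + 6.0 * emissions + rng.normal(0, 2, n)

X = np.column_stack([np.ones(n), wind, humidity, emissions])
train, test = slice(0, 400), slice(400, None)

# multiple linear regression via least squares on the training portion
coef, *_ = np.linalg.lstsq(X[train], pm[train], rcond=None)

# predictive power on the test portion
pred = X[test] @ coef
ss_res = np.sum((pm[test] - pred) ** 2)
ss_tot = np.sum((pm[test] - pm[test].mean()) ** 2)
r2 = 1 - ss_res / ss_tot
```

In the study's setting, the fitted coefficients would indicate which meteorological variables influence fine dust generation, and the same split-based evaluation would compare the regression learners.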

Evaluation of the Modified Hybrid-VMAT for multiple bone metastatic cancer (다중표적 뼈 전이암의 하이브리드 세기변조(modified hybrid-VMAT) 방사선치료계획 유용성 평가)

  • Jung, Il Hun;Cho, Yoon Jin;Chang, Won Suk;Kim, Sei Joon;Ha, Jin Sook;Jeon, Mi Jin;Jung, In Ho;Kim, Jong Dea;Shin, Dong Bong;Lee, Ik Jae
    • The Journal of Korean Society for Radiation Therapy
    • /
    • v.30 no.1_2
    • /
    • pp.161-167
    • /
    • 2018
  • Purpose : This study evaluates the usefulness of the Modified Hybrid-VMAT scheme with consideration of background radiation when establishing a treatment plan for multiple bone metastatic cancer including multiple tumors on the same axis. Materials and Methods : The subjects of this study were five patients with multiple bone metastatic cancer on the same axis. The planning target volume (PTV) prescription dose was 30 Gy, and the treatment plan was established using RayStation (RayStation, 5.0.2.35, Sweden). In the treatment plan for each patient, two or more tumors were set to one isocenter. A volumetric modulated arc therapy (VMAT) plan, a hybrid VMAT plan with no consideration of background radiation (VMAT(h)), and a modified hybrid VMAT plan with consideration of background radiation (VMAT(mh)) were established. Then, using each dose volume histogram (DVH), the PTV maximum dose (D_max), mean dose (D_mean), conformity index (CI), and homogeneity index (HI) were compared among the plans. In addition, the organ at risk (OAR) of each treatment site was evaluated, and the total MU (monitor unit) and treatment time were also analyzed. Results : The PTV D_max values of VMAT, VMAT(h), and VMAT(mh) were 3188.33 cGy, 3526 cGy, and 3285.67 cGy; the D_mean values were 3081 cGy, 3252 cGy, and 3094 cGy; the CI values were 1.35±0.19, 1.43±0.12, and 1.30±0.06; the HI values were 1.06±0.01, 1.14±0.06, and 1.09±0.02; and the OAR dose increased by 3 % for VMAT(h) and decreased by 18 % for VMAT(mh). Furthermore, the mean MU values were 904.90, 911.73, and 1202.13, and the mean beam-on times were 128.67±10.97, 167.33±7.57, and 190.33±4.51, respectively. Conclusions : Applying Modified Hybrid-VMAT when treating multiple targets can prevent overdose by correcting the overlapping of doses. Furthermore, it is possible to establish a treatment plan that protects surrounding normal organs more effectively while satisfying PTV dose coverage. Long-term follow-up of many patients is necessary to confirm the clinical efficacy of Modified Hybrid-VMAT.
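One way to read the reported numbers is through a simple homogeneity-index definition, HI = D_max / D_prescription. This definition is an assumption (HI definitions vary, e.g. ICRU 83 uses dose-volume points, and the abstract's HI values are per-patient means, so the ratios of mean D_max only approximate them), but it reproduces the reported ordering of the three plans.

```python
# Hedged sketch: one simple homogeneity-index definition, HI = D_max / D_rx.
# The abstract does not state which HI definition was used; this ratio is
# assumed here only because it roughly matches the reported values.

D_RX = 3000.0  # prescription dose in cGy (30 Gy, per the abstract)

# mean PTV D_max per plan, from the abstract (cGy)
d_max = {
    "VMAT": 3188.33,
    "VMAT(h)": 3526.00,
    "VMAT(mh)": 3285.67,
}

hi = {plan: d / D_RX for plan, d in d_max.items()}
```

Under this reading, VMAT(h) (no background-radiation correction) is the least homogeneous plan and VMAT(mh) falls between it and plain VMAT, matching the abstract's conclusion that the correction prevents overdose from overlapping fields.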


The Change of Cell-cycle Related Proteins and Tumor Suppressive Effect in Non-small Cell Lung Cancer Cell Line after Transfection of p16(MTS1) Gene (폐암세포에 p16 (MTS1) 유전자 주입후 암생성능의 변화 및 세포주기관련 단백질의 변동에 관한 연구)

  • Kim, Young-Whan;Kim, Jae-Yeol;Yoo, Chul-Gyu;Han, Sung-Koo;Shim, Young-Soo;Lee, Kye-Young
    • Tuberculosis and Respiratory Diseases
    • /
    • v.44 no.4
    • /
    • pp.796-805
    • /
    • 1997
  • Background : It is clear that deregulation of cell cycle progression is a hallmark of neoplastic transformation, and genes involved in the G1/S transition of the cell cycle are especially frequent targets for mutations in human cancers, including lung cancer. The p16 gene product, one of the recently identified G1 cell-cycle related proteins, plays an important role in the negative regulation of the kinase activity of the cyclin-dependent kinase (cdk) enzymes. Therefore, the p16 gene is known to be an important tumor suppressor gene and is also called MTS1 (multiple tumor suppressor 1). No other gene alteration has been reported as frequently across multiple different malignancies as that of the p16 gene. In non-small cell lung cancer in particular, there was no expression of p16 in more than 70% of the cell lines examined, and it is speculated that the p16 gene could play a key role in the development of non-small cell lung cancer. This study was designed to evaluate whether the p16 gene could be used as a candidate for gene therapy of non-small cell lung cancer. Methods : After extraction of total RNA from a normal fibroblast cell line and subsequent reverse transcriptase reaction and polymerase chain reaction, the amplified p16 cDNA was subcloned into the eukaryotic expression plasmid vector pRC-CMV. The constructed pRC-CMV-p16 was transfected into the NCI-H441 NSCLC cell line using lipofectin. Changes in the G1 cell-cycle related proteins were investigated with Western blot analysis and immunoprecipitation after extraction of proteins from cell lysates, and the tumor suppressive effect was observed by clonogenic assay. Results : (1) The p16(-) NCI-H441 cell line transfected with pRC-CMV-p16 showed formation of the p16:cdk4 complex and decreased phosphorylated Rb protein, while the control cell line did not. (2) The clonogenic assay demonstrated that colony formation was markedly decreased in the p16(-) NCI-H441 cell line transfected with pRC-CMV-p16 compared with the control cell line. Conclusion : It is confirmed that expression of p16 protein in a p16-absent NSCLC cell line after gene transfection leads to p16:cdk4 complex formation, a subsequent decrease in phosphorylated pRb protein, and ultimately tumor suppressive effects. It also provides a foundation for the application of the p16 gene as an important candidate for the gene therapy of NSCLC.


Multi-Vector Document Embedding Using Semantic Decomposition of Complex Documents (복합 문서의 의미적 분해를 통한 다중 벡터 문서 임베딩 방법론)

  • Park, Jongin;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.3
    • /
    • pp.19-41
    • /
    • 2019
  • According to the rapidly increasing demand for text data analysis, research and investment in text mining are being actively conducted not only in academia but also in various industries. Text mining is generally conducted in two steps. In the first step, the text of the collected documents is tokenized and structured to convert the original documents into a computer-readable form. In the second step, tasks such as document classification, clustering, and topic modeling are conducted according to the purpose of the analysis. Until recently, text mining studies have focused on applications of the second step, such as document classification, clustering, and topic modeling. However, with the discovery that the text structuring process substantially influences the quality of the analysis results, various embedding methods have been actively studied to improve analysis quality by preserving the meaning of words and documents when representing text data as vectors. Unlike structured data, which can be applied directly to a variety of operations and traditional analysis techniques, unstructured text must first be structured into a form the computer can understand. Mapping arbitrary objects to a space of a specific dimension while maintaining their algebraic properties, in order to structure text data, is called "embedding." Recently, attempts have been made to embed not only words but also sentences, paragraphs, and entire documents in various ways. In particular, as the demand for document embedding increases rapidly, many algorithms have been developed to support it. Among them, doc2Vec, which extends word2Vec and embeds each document into one vector, is the most widely used. However, the traditional document embedding method represented by doc2Vec generates a vector for each document using the whole corpus of words included in the document. This causes the limitation that the document vector is affected not only by core words but also by miscellaneous words. Additionally, traditional document embedding schemes usually map each document to a single corresponding vector, so it is difficult to represent a complex document with multiple subjects accurately with the traditional approach. In this paper, we propose a new multi-vector document embedding method to overcome these limitations of the traditional document embedding methods. This study targets documents that explicitly separate body content and keywords. For a document without keywords, the method can be applied after extracting keywords through various analysis methods; since this is not the core subject of the proposed method, however, we introduce the process of applying the proposed method to documents that predefine keywords in the text. The proposed method consists of (1) Parsing, (2) Word Embedding, (3) Keyword Vector Extraction, (4) Keyword Clustering, and (5) Multiple-Vector Generation. The specific process is as follows. All text in a document is tokenized, and each token is represented as a vector of N-dimensional real values through word embedding. After that, to overcome the limitation that the traditional document embedding method is affected not only by core words but also by miscellaneous words, the vectors corresponding to the keywords of each document are extracted to form a set of keyword vectors per document. Next, clustering is conducted on each document's set of keyword vectors to identify the multiple subjects included in the document. Finally, a multi-vector is generated from the vectors of the keywords constituting each cluster. Experiments on 3,147 academic papers revealed that the single-vector-based traditional approach cannot properly map complex documents because of interference among subjects within each vector. With the proposed multi-vector-based method, we ascertained that complex documents can be vectorized more accurately by eliminating the interference among subjects.
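Steps (3)-(5) above can be illustrated with a toy sketch: given a document's keyword vectors, cluster them and emit one document vector per cluster. A trivial k-means with deterministic initialization stands in for the clustering step, and the vector values are made up; the paper's actual embedding and clustering choices may differ.

```python
import numpy as np

def kmeans(vectors, k, iters=20):
    """Tiny k-means sketch with deterministic (first-k) initialization."""
    centroids = vectors[:k].copy()
    labels = np.zeros(len(vectors), dtype=int)
    for _ in range(iters):
        # assign each keyword vector to its nearest centroid
        d = np.linalg.norm(vectors[:, None] - centroids[None], axis=2)
        labels = d.argmin(axis=1)
        # move each centroid to the mean of its assigned vectors
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = vectors[labels == j].mean(axis=0)
    return labels, centroids

def multi_vector_embedding(keyword_vectors, k):
    """One document vector per keyword cluster (steps 4 and 5)."""
    labels, _ = kmeans(keyword_vectors, k)
    return [keyword_vectors[labels == j].mean(axis=0) for j in range(k)]

# toy keyword vectors for one document with two clearly separated subjects
kvecs = np.array([[0., 0.], [10., 10.], [0., 1.],
                  [1., 0.], [10., 11.], [11., 10.]])
vecs = multi_vector_embedding(kvecs, 2)
```

Each resulting vector summarizes one subject's keywords, so a downstream similarity search can match a query against any single subject instead of a blurred average of all of them.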

A Cannabinoid Receptor Agonist N-Arachidonoyl Dopamine Inhibits Adipocyte Differentiation in Human Mesenchymal Stem Cells

  • Ahn, Seyeon;Yi, Sodam;Seo, Won Jong;Lee, Myeong Jung;Song, Young Keun;Baek, Seung Yong;Yu, Jinha;Hong, Soo Hyun;Lee, Jinyoung;Shin, Dong Wook;Jeong, Lak Shin;Noh, Minsoo
    • Biomolecules & Therapeutics
    • /
    • v.23 no.3
    • /
    • pp.218-224
    • /
    • 2015
  • Endocannabinoids can affect multiple cellular targets, such as cannabinoid (CB) receptors, transient receptor potential cation channel subfamily V member 1 (TRPV1), and peroxisome proliferator-activated receptor γ (PPARγ). The stimuli that induce adipocyte differentiation in hBM-MSCs increase the gene transcription of the CB1 receptor, TRPV1, and PPARγ. In this study, the effects of three endocannabinoids, N-arachidonoyl ethanolamine (AEA), N-arachidonoyl dopamine (NADA), and 2-arachidonoyl glycerol (2-AG), on adipogenesis in hBM-MSCs were evaluated. Adipocyte differentiation was promoted by AEA but inhibited by NADA, and no change was observed with treatment at non-cytotoxic concentrations of 2-AG. The difference between AEA and NADA in the regulation of adipogenesis is associated with their effects on PPARγ transactivation. AEA can directly activate PPARγ; its effect on PPARγ in hBM-MSCs may prevail over that on CB1 receptor-mediated signal transduction, giving rise to the AEA-induced promotion of adipogenesis. In contrast, NADA had no effect on PPARγ activity in the PPARγ transactivation assay. The inhibitory effect of NADA on adipogenesis in hBM-MSCs was reversed not by capsazepine, a TRPV1 antagonist, but by rimonabant, a CB1 antagonist/inverse agonist. Rimonabant by itself promoted adipogenesis in hBM-MSCs, which may be interpreted as the result of inverse agonism of the CB1 receptor. This result suggests that the constitutively active CB1 receptor may contribute to suppressing the adipocyte differentiation of hBM-MSCs. Therefore, selective CB1 agonists that do not affect cellular PPARγ activity inhibit adipogenesis in hBM-MSCs.

Crepe Search System Design using Web Crawling (웹 크롤링 이용한 크레페 검색 시스템 설계)

  • Kim, Hyo-Jong;Han, Kun-Hee;Shin, Seung-Soo
    • Journal of Digital Convergence
    • /
    • v.15 no.11
    • /
    • pp.261-269
    • /
    • 2017
  • The purpose of this paper is to design a search system that accesses the web in real time without using a database server, in order to guarantee up-to-date information within a single network, rather than using a plurality of bots connected by a wide area network. The method of the research is to design and analyze a system that can search for people and keywords quickly and accurately in the Crepe system. When a user registers information on the Crepe server, the body-tag-matching conversion process stores all of the information as it is, since each user applies various styles, such as font, font size, and color; thus the Crepe server itself does not cause a body-tag-matching problem. However, when executing the Crepe retrieval system, the styles and characteristics of users cannot be formalized. This problem can be solved by using the html_img_parser function and the Go language html parser package. By applying queues and multiple threads to a general-purpose web crawler, rather than a web crawler design that targets a specific site, it is possible to build a crawler that quickly and efficiently searches and collects various web sites for various applications.
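The parsing step can be illustrated with Python's standard-library HTML parser: extract image sources and visible text from user-styled markup while ignoring the styling itself. This is only a sketch analogous in spirit to the html_img_parser function mentioned above, whose actual implementation belongs to the authors' Go code.

```python
from html.parser import HTMLParser

class ImgParser(HTMLParser):
    """Collect img sources and visible text, ignoring user styling."""
    def __init__(self):
        super().__init__()
        self.images = []
        self.text = []

    def handle_starttag(self, tag, attrs):
        # record image sources regardless of surrounding style attributes
        if tag == "img":
            src = dict(attrs).get("src")
            if src:
                self.images.append(src)

    def handle_data(self, data):
        # keep only non-empty visible text fragments
        if data.strip():
            self.text.append(data.strip())

parser = ImgParser()
parser.feed('<p style="color:red">hello <img src="a.png"> world</p>')
```

Because the parser keys on tags rather than on each user's styling, the same extraction works regardless of font, size, or color attributes, which is the point the abstract makes about formalizing user styles.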