• Title/Summary/Keyword: 벡터법 (vector method)


Koreanized Analysis System Development for Groundwater Flow Interpretation (지하수유동해석을 위한 한국형 분석시스템의 개발)

  • Choi, Yun-Yeong
    • Journal of the Korean Society of Hazard Mitigation
    • /
    • v.3 no.3 s.10
    • /
    • pp.151-163
    • /
    • 2003
  • In this study, an algorithm for the groundwater flow process was established to develop a Koreanized groundwater program that handles the geographic and geologic conditions of aquifers with dynamic behaviour in the groundwater flow system. All input data settings of the 3-DFM model developed in this study are organized in Korean, and the model contains a help function for each input item, so that detailed information about an input parameter is shown when the mouse pointer is placed on it. The model also makes it easy to specify the geologic boundary condition for each stratum or the initial head data in the worksheet. In addition, it displays input boxes for each analysis condition, so that setting parameters for steady and unsteady flow analyses, as well as for the analysis of the characteristics of each stratum, is less complicated than in the existing MODFLOW. Descriptions of the input data are displayed on the right side of the window, the analysis results are displayed on the left side, and the results can also be viewed as a TXT file. The model developed in this study is a numerical model based on the finite difference method, and its applicability was examined by comparing observed groundwater heads with heads simulated using the actual recharge amount and estimated parameters. The 3-DFM model was applied to the Sehwa-ri and Songdang-ri area of Jeju, Korea, to analyze the groundwater flow system under pumping; the observed and computed groundwater heads were almost in accordance with each other, with errors in the range of 0.03-0.07 percent. Equipotential lines and velocity vectors computed from the simulation performed before pumping started show that groundwater flows evenly from Nopen-orum and Munseogi-orum toward Wolang-bong, Yongnuni-orum, and Songja-bong. These results agree with those of MODFLOW.
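
The abstract describes a finite difference groundwater model. As a rough illustration only, and not the 3-DFM code, the sketch below solves a steady-state head field on a small 2-D grid with assumed boundary heads and then derives Darcy velocity vectors from the computed heads, loosely mirroring the equipotential and velocity-vector computation mentioned above; the grid size, boundary heads, and conductivity are made-up values.

```python
# Rough illustration only (not the 3-DFM code): steady-state groundwater heads on a
# small 2-D grid by Jacobi iteration of the finite difference equations, with fixed-head
# boundaries on the left/right and no-flow boundaries on the top/bottom.
import numpy as np

def solve_head(nx=50, ny=50, h_left=100.0, h_right=95.0, tol=1e-6, max_iter=20000):
    h = np.full((ny, nx), h_right)
    h[:, 0] = h_left                     # fixed-head (Dirichlet) boundary
    h[:, -1] = h_right                   # fixed-head boundary
    for _ in range(max_iter):
        h_old = h.copy()
        # Jacobi update: each interior head becomes the average of its four neighbours
        h[1:-1, 1:-1] = 0.25 * (h_old[1:-1, :-2] + h_old[1:-1, 2:] +
                                h_old[:-2, 1:-1] + h_old[2:, 1:-1])
        h[0, :] = h[1, :]                # no-flow (Neumann) boundary at the top
        h[-1, :] = h[-2, :]              # no-flow boundary at the bottom
        if np.max(np.abs(h - h_old)) < tol:
            break
    return h

heads = solve_head()
K = 10.0                                     # hypothetical uniform hydraulic conductivity [m/day]
qy, qx = np.gradient(-K * heads, 1.0, 1.0)   # Darcy flux (velocity vector) q = -K * grad(h)
print("head range:", float(heads.min()), "-", float(heads.max()))
```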

The pattern of movement and stress distribution during retraction of maxillary incisors using a 3-D finite element method (상악 전치부 후방 견인 시 이동 양상과 응력 분포에 관한 삼차원 유한요소법적 연구)

  • Chung, Ae-Jin;Kim, Un-Su;Lee, Soo-Haeng;Kang, Seong-Soo;Choi, Hee-In;Jo, Jin-Hyung;Kim, Sang-Cheol
    • The korean journal of orthodontics
    • /
    • v.37 no.2 s.121
    • /
    • pp.98-113
    • /
    • 2007
  • Objective: The purpose of this study was to evaluate the displacement pattern and the stress distribution shown on a finite element model, built from 3-D CT visualization of a dry human skull, during retraction of the upper anterior teeth. Methods: Experimental conditions were differentiated into 8 groups according to corticotomy, anchorage (buccal: a mini implant between the maxillary second premolar and first molar, or the second premolar reinforced with a mini implant; palatal: a mini implant between the maxillary first and second molars, or a mini implant on the midpalatal suture), and force application point (use of a power arm or not). Results: When the anterior teeth were retracted by a conventional T-loop archwire, the anterior teeth tipped more postero-inferiorly and the posterior teeth moved slightly in a mesial direction. When the anterior teeth were retracted with corticotomy, the stress in the anterior bone segment was distributed widely, and the anterior teeth showed a smaller degree of tipping but a greater amount of displacement. When the anterior teeth were retracted from the buccal side with force applied from the mini implant placed between the maxillary second premolar and first molar to a canine power arm, a smaller degree of tipping was generated than when force was applied from the second premolar reinforced with a mini implant to the canine bracket. When the anterior teeth were retracted from the palatal side with force applied to the mini implant on the midpalatal suture, a greater degree of tipping resulted than when force was applied to the mini implant between the maxillary first and second molars. Conclusion: The results of this study verify the effects of corticotomies and of controlling orthodontic force vectors during tooth movement.

Development of Marker-free TaGlu-Ax1 Transgenic Rice Harboring a Wheat High-molecular-weight Glutenin Subunit (HMW-GS) Protein (벼에서 밀 고분자 글루테닌 단백질(TaGlu-Ax1) 발현을 통하여 쌀가루 가공적성 증진을 위한 마커프리(marker-free) 형질전환 벼의 개발)

  • Jeong, Namhee;Jeon, Seung-Ho;Kim, Dool-Yi;Lee, Choonseok;Ok, Hyun-Choong;Park, Ki-Do;Hong, Ha-Cheol;Lee, Seung-Sik;Moon, Jung-Kyung;Park, Soo-Kwon
    • Journal of Life Science
    • /
    • v.26 no.10
    • /
    • pp.1121-1129
    • /
    • 2016
  • High-molecular-weight glutenin subunits (HMW-GSs) are extremely important determinants of the functional properties of wheat dough. To enhance the bread-making quality of rice dough, transgenic rice plants containing the wheat TaGlu-Ax1 gene, which encodes an HMW-GS from the Korean wheat cultivar ‘Jokyeong’, were produced using the Agrobacterium-mediated co-transformation method. Two expression cassettes, separate DNA fragments containing only the TaGlu-Ax1 gene or the hygromycin phosphotransferase II (HPTII) resistance gene, were introduced separately into the Agrobacterium tumefaciens EHA105 strain for co-infection. Rice calli were infected with the two EHA105 strains harboring TaGlu-Ax1 or HPTII at a 3:1 ratio of TaGlu-Ax1 to HPTII. Among 210 hygromycin-resistant T0 plants, 20 transgenic lines harboring both the TaGlu-Ax1 and HPTII genes in the rice genome were obtained. The integration of the TaGlu-Ax1 gene into the rice genome was reconfirmed by Southern blot analysis. The transcripts and proteins of wheat TaGlu-Ax1 were stably expressed in rice T1 seeds. Finally, marker-free plants harboring only the TaGlu-Ax1 gene were successfully screened in the T1 generation. There were no morphological differences between the wild-type and marker-free transgenic plants. Expression of a single HMW-GS (TaGlu-Ax1) alone was not sufficient to make transgenic rice dough suitable for bread making; greater numbers and combinations of wheat HMW- and LMW-GSs and gliadins are required to further improve the processing quality of rice dough. Nevertheless, TaGlu-Ax1 marker-free transgenic plants provide good material for developing transgenic rice with improved bread-making quality.

Expression of Yippee-Like 5 (YPEL5) Gene During Activation of Human Peripheral T Lymphocytes by Immobilized Anti-CD3 (인체 말초혈액의 활성화 과정 중 yippee-like 5 (YPEL5) 유전자의 발현 양상)

  • Jun, Do-Youn;Park, Hye-Won;Kim, Young-Ho
    • Journal of Life Science
    • /
    • v.17 no.12
    • /
    • pp.1641-1648
    • /
    • 2007
  • Yippee-like proteins, identified as homologs of the Drosophila yippee protein containing a zinc-finger domain, are known to be highly conserved among eukaryotes; however, their functional roles are still poorly understood. Recently we used ordered differential display (ODD)-polymerase chain reaction (PCR) to isolate genes whose expression is altered following activation of human T cells. On the ODD-PCR image, one PCR product detected in unstimulated T cells was no longer detectable at the time the activated T cells traversed near the G1/S boundary following activation by immobilized anti-CD3. Cloning and nucleotide sequence analysis revealed that the PCR product was the yippee-like 5 (YPEL5) gene, known as a human homolog of the Drosophila yippee gene. Northern blot analysis confirmed that the ~2.2 kb YPEL5 mRNA detectable in unstimulated T cells was sustained until 1.5 hr after activation and then rapidly declined to an undetectable level by 5 hr. Ectopic expression of the YPEL5 gene in human cervix epithelioid carcinoma HeLa cells caused a significant reduction in cell proliferation, to 47% of the control level. Expression of a GFP-YPEL5 fusion protein in HeLa cells showed its nuclear localization. These results demonstrate that the expression of human YPEL5 mRNA is negatively regulated in the early stage of T cell activation and suggest that YPEL5, as a nuclear protein, may exert an inhibitory effect on cell proliferation.

Development of Marker-free Transgenic Rice for Increasing Bread-making Quality using Wheat High Molecular Weight Glutenin Subunits (HMW-GS) Gene (밀 고분자 글루테닌 유전자를 이용하여 빵 가공적성 증진을 위한 마커 프리 형질전환 벼의 개발)

  • Park, Soo-Kwon;Shin, DongJin;Hwang, Woon-Ha;Oh, Se-Yun;Cho, Jun-Hyun;Han, Sang-Ik;Nam, Min-Hee;Park, Dong-Soo
    • Journal of Life Science
    • /
    • v.23 no.11
    • /
    • pp.1317-1324
    • /
    • 2013
  • High-molecular-weight glutenin subunits (HMW-GS) have been shown to play a crucial role in determining the processing properties of wheat grain. We produced marker-free transgenic rice plants containing the wheat Glu-1Bx7 gene, which encodes an HMW-GS from the Korean wheat cultivar 'Jokyeong', using the Agrobacterium-mediated co-transformation method. The native Glu-1Bx7 promoter was inserted into a binary vector for seed-specific expression of the Glu-1Bx7 gene. Two expression cassettes, separate DNA fragments containing only the Glu-1Bx7 gene or the hygromycin phosphotransferase II (HPTII) resistance gene, were introduced separately into the Agrobacterium tumefaciens EHA105 strain for co-infection. Rice calli were infected with the two EHA105 strains harboring Glu-1Bx7 or HPTII at a 3:1 ratio of Glu-1Bx7 to HPTII. Among 216 hygromycin-resistant T0 plants, we obtained 24 transgenic lines with both the Glu-1Bx7 and HPTII genes inserted into the rice genome. Integration of the Glu-1Bx7 gene into the rice genome was reconfirmed by Southern blot analysis. Transcripts and proteins of wheat Glu-1Bx7 were stably expressed in the rice T1 seeds. Finally, marker-free plants harboring only the Glu-1Bx7 gene were successfully screened in the T1 generation.

Development of a Classification Method for Forest Vegetation on the Stand Level, Using KOMPSAT-3A Imagery and Land Coverage Map (KOMPSAT-3A 위성영상과 토지피복도를 활용한 산림식생의 임상 분류법 개발)

  • Song, Ji-Yong;Jeong, Jong-Chul;Lee, Peter Sang-Hoon
    • Korean Journal of Environment and Ecology
    • /
    • v.32 no.6
    • /
    • pp.686-697
    • /
    • 2018
  • Due to advances in remote sensing technology, it has become easier to obtain high resolution imagery more frequently and to detect delicate changes over extensive areas, particularly in forest, which is not readily sub-classified. Time-series analysis of high resolution images, however, requires an extensive amount of ground truth data. In this study, the potential of the land coverage map as ground truth data was tested in classifying high-resolution imagery. The study site was Wonju-si, Gangwon-do, South Korea, which has a mix of urban and natural areas. KOMPSAT-3A imagery taken in March 2015 and the land coverage map published in 2017 were used as source data. Two pixel-based classification algorithms, Support Vector Machine (SVM) and Random Forest (RF), were selected for the analysis. Classification of the forest only was compared with classification of the whole study area except wetland. Confusion matrices from the classification showed that the overall accuracies for both targets were higher with the RF algorithm than with SVM. While the overall accuracy of the forest-only analysis by the RF algorithm was higher than that of SVM by 18.3%, for the whole-region analysis the difference was smaller, at 5.5%. For the SVM algorithm, adding a Majority analysis step gave a marginal improvement of about 1% over the plain SVM analysis. The RF algorithm was more effective at identifying broad-leaved forest within the forest, whereas the SVM algorithm was more effective for the other classes. As only two pixel-based classification algorithms were tested here, it is expected that future classification will improve the overall accuracy and reliability by introducing time-series analysis and object-based algorithms. This approach is expected to contribute to improving large-scale land planning by providing an effective land classification method at higher spatial and temporal scales.
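
Neither the imagery nor the authors' classification scripts are reproduced here; the sketch below only illustrates, with placeholder data, how a pixel-based SVM versus Random Forest comparison of the kind described above can be set up with scikit-learn, using spectral band values as features and land coverage classes as labels.

```python
# Illustrative sketch only, not the authors' workflow: pixel-based classification of
# multispectral band values with SVM and Random Forest, using land-cover classes as labels.
# The arrays below stand in for KOMPSAT-3A band values and land coverage map labels.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.metrics import confusion_matrix, accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((5000, 4))          # hypothetical pixels x 4 spectral bands (e.g. B, G, R, NIR)
y = rng.integers(0, 5, size=5000)  # hypothetical land-cover class per pixel from the map

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

for name, clf in [("SVM", SVC(kernel="rbf", C=1.0, gamma="scale")),
                  ("RF", RandomForestClassifier(n_estimators=200, random_state=0))]:
    clf.fit(X_train, y_train)
    pred = clf.predict(X_test)
    print(name, "overall accuracy:", round(accuracy_score(y_test, pred), 3))
    print(confusion_matrix(y_test, pred))
```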

Numerical modeling of secondary flow behavior in a meandering channel with submerged vanes (잠긴수제가 설치된 만곡수로에서의 이차류 거동 수치모의)

  • Lee, Jung Seop;Park, Sang Deog;Choi, Cheol Hee;Paik, Joongcheol
    • Journal of Korea Water Resources Association
    • /
    • v.52 no.10
    • /
    • pp.743-752
    • /
    • 2019
  • The flow in a meandering channel is characterized by the spiral motion of secondary currents that typically causes erosion along the outer bank. Hydraulic structures, such as spur dikes and groynes, are commonly installed on the channel bottom near the outer bank to mitigate the strength of the secondary currents. This study investigates the effects of submerged vanes installed in a 90° meandering channel on the development of secondary currents through three-dimensional numerical modeling, using a hybrid RANS/LES method for turbulence and the volume of fluid method for capturing the free surface, based on the OpenFOAM open source toolbox, at a Froude number of 0.43. We employ second-order-accurate finite volume methods in space and time and compare the numerical results with experimental measurements to evaluate the predictions. The results show that the present simulations reproduce the experimental measurements well in terms of the time-averaged streamwise velocity and the secondary velocity vector fields in the bend with submerged vanes. The computed flow fields reveal that the streamwise velocity near the bed along the outer bank at the end section of the bend decreases dramatically, by one third of the mean velocity, after the installation of the vanes, which supports the view that submerged vanes mitigate the strength of the primary secondary flow and are helpful for channel stability along the outer bank. The flow between the top of the vanes and the free surface accelerates, and the maximum free-surface velocity near the flow impingement along the outer bank increases by about 20% due to the installation of the submerged vanes. The numerical solutions show the formation of horseshoe vortices in front of the vanes and lee wakes behind them, which are responsible for strong local scour around the vanes. Additional study on the shapes and arrangement of the vanes is required to mitigate this local scour.
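
The paper's OpenFOAM case files are not shown here. As a loose sketch of the kind of post-processing the abstract refers to, the snippet below computes a time-averaged streamwise velocity field, a secondary (in-plane) velocity magnitude, and the bulk Froude number Fr = U/sqrt(gh) from placeholder velocity snapshots on a bend cross-section; all array shapes and values are assumptions, not data from the study.

```python
# Illustrative post-processing sketch, not the authors' OpenFOAM case: given velocity
# snapshots sampled on a bend cross-section, compute the time-averaged streamwise velocity,
# the secondary (in-plane) velocity magnitude, and the bulk Froude number.
# All shapes and numbers below are placeholders.
import numpy as np

g = 9.81
n_t, n_z, n_y = 200, 40, 60                 # snapshots x vertical points x transverse points
u = np.random.rand(n_t, n_z, n_y)           # streamwise velocity component [m/s]
v = np.random.rand(n_t, n_z, n_y) * 0.1     # transverse component [m/s]
w = np.random.rand(n_t, n_z, n_y) * 0.1     # vertical component [m/s]

u_mean = u.mean(axis=0)                                      # time-averaged streamwise velocity
secondary = np.sqrt(v.mean(axis=0)**2 + w.mean(axis=0)**2)   # secondary velocity magnitude

U_bulk = u_mean.mean()                       # cross-section-averaged streamwise velocity
depth = 0.15                                 # hypothetical flow depth [m]
froude = U_bulk / np.sqrt(g * depth)         # Fr = U / sqrt(g h)
print(f"Fr = {froude:.2f}, max secondary/bulk = {secondary.max() / U_bulk:.2f}")
```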

A New Approach to Automatic Keyword Generation Using Inverse Vector Space Model (키워드 자동 생성에 대한 새로운 접근법: 역 벡터공간모델을 이용한 키워드 할당 방법)

  • Cho, Won-Chin;Rho, Sang-Kyu;Yun, Ji-Young Agnes;Park, Jin-Soo
    • Asia pacific journal of information systems
    • /
    • v.21 no.1
    • /
    • pp.103-122
    • /
    • 2011
  • Recently, numerous documents have been made available electronically. Internet search engines and digital libraries commonly return query results containing hundreds or even thousands of documents. In this situation, it is virtually impossible for users to examine complete documents to determine whether they might be useful to them. For this reason, some on-line documents are accompanied by a list of keywords specified by the authors in an effort to guide users by facilitating the filtering process. In this way, a set of keywords is often considered a condensed version of the whole document and therefore plays an important role in document retrieval, Web page retrieval, document clustering, summarization, text mining, and so on. Since many academic journals ask authors to provide a list of five or six keywords on the first page of an article, keywords are most familiar in the context of journal articles. However, many other types of documents do not benefit from the use of keywords, including Web pages, email messages, news reports, magazine articles, and business papers. Although the potential benefit is large, the implementation itself is the obstacle: manually assigning keywords to all documents is a daunting task, or even impractical, in that it is extremely tedious and time-consuming and requires a certain level of domain knowledge. Therefore, it is highly desirable to automate the keyword generation process. There are mainly two approaches to achieving this aim: the keyword assignment approach and the keyword extraction approach. Both approaches use machine learning methods and require, for training purposes, a set of documents with keywords already attached. In the former approach, there is a given vocabulary, and the aim is to match its terms to the texts. In other words, the keyword assignment approach seeks to select the words from a controlled vocabulary that best describe a document. Although this approach is domain dependent and is not easy to transfer and expand, it can generate implicit keywords that do not appear in a document. On the other hand, in the latter approach, the aim is to extract keywords with respect to their relevance in the text, without a prior vocabulary. In this approach, automatic keyword generation is treated as a classification task, and keywords are commonly extracted based on supervised learning techniques; keyword extraction algorithms classify candidate keywords in a document into positive or negative examples. Several systems, such as Extractor and Kea, were developed using the keyword extraction approach. The most indicative words in a document are selected as keywords for that document, and as a result keyword extraction is limited to terms that appear in the document. Therefore, keyword extraction cannot generate implicit keywords that are not included in a document. According to the experimental results of Turney, about 64% to 90% of keywords assigned by authors can be found in the full text of an article. Conversely, this also means that 10% to 36% of the keywords assigned by authors do not appear in the article and thus cannot be generated by keyword extraction algorithms. Our preliminary experimental results also show that 37% of keywords assigned by authors are not included in the full text. This is the reason why we decided to adopt the keyword assignment approach. In this paper, we propose a new approach to automatic keyword assignment, namely IVSM (Inverse Vector Space Model). The model is based on the vector space model, which is a conventional information retrieval model that represents documents and queries as vectors in a multidimensional space. IVSM generates an appropriate keyword set for a specific document by measuring the distance between the document and the keyword sets. The keyword assignment process of IVSM is as follows: (1) calculate the vector length of each keyword set based on each keyword weight; (2) preprocess and parse a target document that does not have keywords; (3) calculate the vector length of the target document based on term frequency; (4) measure the cosine similarity between each keyword set and the target document; and (5) generate keywords that have high similarity scores. Two keyword generation systems were implemented by applying IVSM: an IVSM system for a Web-based community service and a stand-alone IVSM system. First, the IVSM system was implemented in a community service for sharing knowledge and opinions on current trends such as fashion, movies, social problems, and health information. The stand-alone IVSM system is dedicated to generating keywords for academic papers, and it has been tested on a number of academic papers, including those published by the Korean Association of Shipping and Logistics, the Korea Research Academy of Distribution Information, the Korea Logistics Society, the Korea Logistics Research Association, and the Korea Port Economic Association. We measured the performance of IVSM by the number of matches between the IVSM-generated keywords and the author-assigned keywords. In our experiments, the precision of IVSM applied to the Web-based community service and to academic journals was 0.75 and 0.71, respectively. The performance of both systems is much better than that of baseline systems that generate keywords based on simple probability, and IVSM shows performance comparable to Extractor, a representative keyword extraction system developed by Turney. As the number of electronic documents increases, we expect that the IVSM proposed in this paper can be applied to many electronic documents in Web-based communities and digital libraries.
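
As a rough sketch of the five-step assignment process described above, and not the authors' IVSM implementation, the snippet below scores made-up keyword-set vectors against a target document's term-frequency vector by cosine similarity and keeps the highest-scoring keywords; in a real system the keyword-set vectors would be built from a training corpus of documents with author-assigned keywords.

```python
# Minimal sketch of the cosine-similarity keyword assignment step described above,
# not the authors' IVSM implementation. The keyword sets and the target document
# are made-up examples.
import re
from collections import Counter
from math import sqrt

keyword_sets = {                      # hypothetical keyword -> term-weight vectors
    "logistics": {"shipping": 3.0, "port": 2.0, "cargo": 2.5, "supply": 1.5},
    "retrieval": {"query": 3.0, "document": 2.0, "index": 2.5, "search": 2.0},
}

def tf_vector(text):
    """Steps (2)-(3): preprocess/parse the target document and build a term-frequency vector."""
    tokens = re.findall(r"[a-z]+", text.lower())
    return Counter(tokens)

def cosine(a, b):
    """Step (4): cosine similarity between two sparse vectors."""
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    norm_a = sqrt(sum(w * w for w in a.values()))
    norm_b = sqrt(sum(w * w for w in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def assign_keywords(text, top_k=1):
    """Step (5): rank keyword sets by similarity to the document and keep the best ones."""
    doc = tf_vector(text)
    scores = {kw: cosine(vec, doc) for kw, vec in keyword_sets.items()}
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

print(assign_keywords("A study of port cargo shipping networks and supply chains."))
```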

A Study on the Effect of Network Centralities on Recommendation Performance (네트워크 중심성 척도가 추천 성능에 미치는 영향에 대한 연구)

  • Lee, Dongwon
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.1
    • /
    • pp.23-46
    • /
    • 2021
  • Collaborative filtering, which is often used in personalized recommendation, is recognized as a very useful technique for finding similar customers and recommending products to them based on their purchase history. However, the traditional collaborative filtering technique has difficulty calculating similarity for new customers or products, because similarities are calculated from direct connections and common features among customers. For this reason, hybrid techniques have been designed that use content-based filtering together with collaborative filtering. Another line of effort addresses these problems by applying the structural characteristics of social networks: similarities are calculated indirectly through similar customers placed between a pair of customers. This means creating a customer network based on purchasing data and calculating the similarity between two customers from the features of the network paths that indirectly connect them. Such similarity can be used as a measure to predict whether a target customer will accept a recommendation, and the centrality metrics of the network can be utilized for this calculation. Different centrality metrics are important in that they may have different effects on recommendation performance, and, as examined in this study, the effect of a centrality metric on recommendation performance may vary depending on the recommender algorithm. In addition, recommendation techniques using network analysis can be expected to increase recommendation performance not only for new customers or products but also for the entire set of customers or products. By considering a customer's purchase of an item as a link between the customer and the item on the network, the prediction of whether the user will accept a recommendation is treated as a prediction of whether a new link will be created between them. Since classification models fit the binary problem of whether a link is formed or not, decision tree, k-nearest neighbors (KNN), logistic regression, artificial neural network, and support vector machine (SVM) models were selected for this research. The data for performance evaluation were order data collected from an online shopping mall over four years and two months. The first three years and eight months of data were used to construct the social network, and the records of the following four months were used to train and evaluate the recommender models. Experiments with the centrality metrics applied to each model show that the recommendation acceptance rates obtained with the different centrality metrics differ for each algorithm at a meaningful level. In this work, we analyzed four commonly used centrality metrics: degree centrality, betweenness centrality, closeness centrality, and eigenvector centrality. Eigenvector centrality records the lowest performance in all models except the support vector machine. Closeness centrality and betweenness centrality show similar performance across all models. Degree centrality ranks in the middle across the models, while betweenness centrality always ranks higher than degree centrality. Finally, closeness centrality is characterized by distinct differences in performance depending on the model: it ranks first in logistic regression, the artificial neural network, and the decision tree, with numerically high performance, but records very low rankings with low performance in the support vector machine and KNN models. As the experimental results reveal, network centrality metrics over the subnetwork that connects two nodes can effectively predict the connectivity between those nodes in a social network when used in a classification model. Furthermore, each metric performs differently depending on the type of classification model, which implies that choosing appropriate metrics for each algorithm can lead to higher recommendation performance. In general, betweenness centrality can guarantee a high level of performance in any model, and introducing closeness centrality could be considered to obtain higher performance for certain models.
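
As an illustrative sketch only (the shopping-mall data and the authors' pipeline are not available here), the snippet below computes the four centrality metrics named above on a synthetic customer-item network with networkx and uses them as features of candidate customer-item pairs for a link-prediction classifier; logistic regression stands in for any of the five models listed in the abstract.

```python
# Illustrative sketch, not the authors' pipeline: compute the four centrality metrics
# named above on a customer-item purchase network and feed them, as features of a
# candidate (customer, item) pair, to a link-prediction classifier.
# The graph, candidate pairs, and labels here are synthetic placeholders.
import networkx as nx
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

G = nx.bipartite.random_graph(50, 80, 0.05, seed=1)   # customers (0-49) and items (50-129)

# Node-level centrality metrics used in the study
deg = nx.degree_centrality(G)
btw = nx.betweenness_centrality(G)
clo = nx.closeness_centrality(G)
eig = nx.eigenvector_centrality(G, max_iter=1000)

def pair_features(u, v):
    # Simple pair features: sum of each centrality over the two endpoints
    return [deg[u] + deg[v], btw[u] + btw[v], clo[u] + clo[v], eig[u] + eig[v]]

customers, items = range(50), range(50, 130)
pairs = [(u, v) for u in customers for v in items]
X = np.array([pair_features(u, v) for u, v in pairs])
y = np.array([int(G.has_edge(u, v)) for u, v in pairs])   # 1 if the link (purchase) exists

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1, stratify=y)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)   # any of the five classifiers could be swapped in
print("link-prediction accuracy:", round(accuracy_score(y_te, clf.predict(X_te)), 3))
```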