• Title/Summary/Keyword: algorithm expression


Conservative Genes among 1,309 Species of Prokaryotes (원핵생물 1,309종의 보존적 유전자)

  • Lee, Dong-Geun
    • Journal of Life Science
    • /
    • v.32 no.6
    • /
    • pp.463-467
    • /
    • 2022
  • When the COG (Clusters of Orthologous Groups of proteins) algorithm was applied to 1,309 species to identify the conserved genes of prokaryotes, ribosomal protein S11 (COG0100) was identified. The numbers of conserved genes found in 1,308, 1,307, 1,306, and 1,305 species were 2, 5, 5, and 6, respectively. Twenty-nine genes were conserved in more than 1,302 species; they encoded 23 ribosomal proteins, 3 tRNA synthetases, 2 translation factors, and 1 RNA polymerase subunit. Most of them are related to protein production, suggesting the importance of protein expression in prokaryotes. Among the 29 COGs, the most highly conserved was COG0048 (ribosomal protein S12). Each of the 29 conserved genes is usually present as a single protein per prokaryote. COG0090 (ribosomal protein L2) had not only the lowest conservation value but also the largest standard deviation of phylogenetic distance. As COG0090 is not only a component of the ribosome but also a regulator of replication and transcription, it can be inferred that prokaryotes maintain large variation in COG0090 to survive in diverse environments. This study provides data useful for basic science, tumor control, and the development of antibacterial agents.
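The conservation counts described above amount to tallying, for each COG, how many of the 1,309 genomes contain at least one member. The sketch below only illustrates that tally on invented data; the COG identifiers are real, but the genome assignments are made up for the example and are not the study's data.

```python
# Toy tally of COG conservation across genomes; assignments are invented for illustration.
from collections import Counter

genome_to_cogs = {
    "genome_A": {"COG0100", "COG0048", "COG0090"},
    "genome_B": {"COG0100", "COG0048"},
    "genome_C": {"COG0100", "COG0090"},
}

# For each COG, count how many genomes carry at least one member.
conservation = Counter(cog for cogs in genome_to_cogs.values() for cog in cogs)
for cog, n_species in conservation.most_common():
    print(f"{cog}: conserved in {n_species} of {len(genome_to_cogs)} genomes")
```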

Development of Detailed Design Automation Technology for AI-based Exterior Wall Panels and its Backframes

  • Kim, HaYoung;Yi, June-Seong
    • International conference on construction engineering and project management
    • /
    • 2022.06a
    • /
    • pp.1249-1249
    • /
    • 2022
  • The facade, the exterior finish of a building, is one of the crucial factors that determine its morphological identity and its functional performance, such as energy efficiency, earthquake resistance, and fire resistance. However, regardless of the type of exterior material, serious property damage and human casualties continue to occur because of frequent detachment accidents involving exterior materials. The quality of the building envelope depends on the detailed design and is closely related to the back frames that support the exterior material. Detailed design refers to the creation of shop drawings, the stage at which the basic design is developed to a buildable level by specifying the exact details required. However, because of chronic problems in the construction industry, such as reduced working hours and a shortage of design personnel, detailed design is not being implemented appropriately. Considering these characteristics, it is necessary to develop the detailed design process for exterior materials and works based on the domain expertise of the construction industry using artificial intelligence (AI). Therefore, this study aims to establish a detailed design automation algorithm for AI-based, condition-responsive exterior wall panels and their back frames. The scope of the study is limited to the "detailed design" performed on the basis of working drawings during exterior work and to "stone panels" among exterior materials. First, working-level data on stone work are collected to analyze the existing detailed design process. Design parameters are then derived by analyzing the factors that affect the design of the building's exterior wall and back frames, such as structure, floor height, wind load, lift limit, and transportation constraints. The relational expressions between the derived parameters are formulated and turned into an algorithm to implement a rule-based AI design. These algorithms can be applied to detailed designs based on 3D BIM to automatically calculate quantities and unit prices. The next goal is to identify the repetitive elements in the process and implement a robotic process automation (RPA)-based system that links the entire "detailed design - quantity calculation - order" process. This study is significant because it extends design automation research, which has been largely limited to basic and implementation design, to the detailed design stage at the beginning of construction execution, and because it increases productivity by using AI. In addition, it can help fundamentally improve the working environment of the construction industry through the development of directly applicable technologies for practice.
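The abstract does not give the actual relational expressions between design parameters, so the following is only a minimal sketch of what a rule-based check for a stone panel and its back frame might look like. All parameter names, threshold values, and formulas (panel weight from density and thickness, anchor count from wind load) are hypothetical assumptions for illustration, not the study's algorithm.

```python
# Hypothetical sketch of a rule-based detailed-design check for a stone exterior panel.
# All thresholds and formulas are illustrative assumptions, not values from the study.
from dataclasses import dataclass

@dataclass
class PanelSpec:
    width_mm: float
    height_mm: float
    thickness_mm: float
    density_kg_m3: float = 2700.0   # assumed stone density
    wind_load_kpa: float = 2.0      # assumed design wind pressure

def design_back_frame(panel: PanelSpec, lift_limit_kg: float = 80.0) -> dict:
    """Derive simple back-frame parameters from panel conditions (rule-based sketch)."""
    area_m2 = (panel.width_mm / 1000) * (panel.height_mm / 1000)
    weight_kg = area_m2 * (panel.thickness_mm / 1000) * panel.density_kg_m3

    # Rule 1: panels above the handling (lift) limit must be split.
    needs_split = weight_kg > lift_limit_kg

    # Rule 2: the number of anchors grows with the wind load on the panel face.
    wind_force_kn = panel.wind_load_kpa * area_m2
    anchors = max(4, round(wind_force_kn / 0.5))  # assumed 0.5 kN capacity per anchor

    return {
        "panel_weight_kg": round(weight_kg, 1),
        "needs_split": needs_split,
        "anchor_count": anchors,
    }

if __name__ == "__main__":
    print(design_back_frame(PanelSpec(width_mm=1200, height_mm=900, thickness_mm=30)))
```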


Selective Word Embedding for Sentence Classification by Considering Information Gain and Word Similarity (문장 분류를 위한 정보 이득 및 유사도에 따른 단어 제거와 선택적 단어 임베딩 방안)

  • Lee, Min Seok;Yang, Seok Woo;Lee, Hong Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.4
    • /
    • pp.105-122
    • /
    • 2019
  • Dimensionality reduction is one of the methods for handling big data in text mining. When reducing dimensionality, we should consider the density of the data, which has a significant influence on the performance of sentence classification. Data of higher dimensionality require many computations, which can lead to high computational cost and overfitting in the model. Thus, a dimensionality reduction process is necessary to improve the performance of the model. Diverse methods have been proposed, from merely reducing noise in the data, such as misspellings or informal text, to incorporating semantic and syntactic information. In addition, the representation and selection of text features affect the performance of the classifier in sentence classification, which is one of the fields of Natural Language Processing. The common goal of dimensionality reduction is to find a latent space that is representative of the raw data in the observation space. Existing methods utilize various algorithms for dimensionality reduction, such as feature extraction and feature selection. Beyond these algorithms, word embeddings, which learn low-dimensional vector-space representations of words and can capture semantic and syntactic information from data, are also utilized. To improve performance, recent studies have suggested methods in which the word dictionary is modified according to the positive and negative scores of pre-defined words. The basic idea of this study is that similar words have similar vector representations. Once a feature selection algorithm identifies words as unimportant, we assume that the words similar to those selected words also have no impact on sentence classification. This study proposes two ways to achieve more accurate classification, which perform selective word elimination under specific rules and construct word embeddings based on Word2Vec. To select words of low importance from the text, we use the information gain algorithm to measure importance and cosine similarity to search for similar words. First, we eliminate words with comparatively low information gain values from the raw text and build word embeddings. Second, we additionally remove words that are similar to the words with low information gain values and build word embeddings. Finally, the filtered text and word embeddings are fed into the deep learning models: a Convolutional Neural Network and an Attention-Based Bidirectional LSTM. This study uses customer reviews on Kindle from Amazon.com, IMDB, and Yelp as datasets and classifies each dataset with the deep learning models. Reviews that received more than five helpful votes, with a ratio of helpful votes over 70%, were classified as helpful reviews. Yelp only shows the number of helpful votes, so we extracted 100,000 reviews with more than five helpful votes by random sampling from 750,000 reviews. Minimal preprocessing, such as removing numbers and special characters from the text data, was applied to each dataset. To evaluate the proposed methods, we compared their performance with that of Word2Vec and GloVe embeddings built with all the words, and showed that one of the proposed methods is better than the embeddings using all the words. By removing unimportant words, we can obtain better performance; however, removing too many words lowers performance. For future research, diverse preprocessing methods and an in-depth analysis of word co-occurrence should be considered for measuring similarity between words. Also, we only applied the proposed method with Word2Vec; other embedding methods such as GloVe, fastText, and ELMo can be combined with the proposed methods, making it possible to explore the combinations of word embedding methods and elimination methods.
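The abstract describes the elimination procedure only at a high level, so the following is a rough sketch of the general idea under stated assumptions: scikit-learn's mutual information is used as a stand-in for the information gain score, a gensim Word2Vec model supplies the cosine similarities, and the toy corpus, thresholds, and variable names are illustrative only, not the paper's settings.

```python
# Illustrative sketch: drop low-information-gain words plus their Word2Vec neighbours.
# The thresholds and the toy corpus are assumptions, not the paper's configuration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import mutual_info_classif
from gensim.models import Word2Vec

docs = ["great book easy to read", "boring plot and flat characters",
        "easy read and great story", "flat boring characters"]
labels = [1, 0, 1, 0]  # 1 = helpful review, 0 = not helpful (toy labels)

# 1) Importance per word (information gain approximated here by mutual information).
vec = CountVectorizer()
X = vec.fit_transform(docs)
ig = mutual_info_classif(X, labels, discrete_features=True, random_state=0)
vocab = vec.get_feature_names_out()
low_ig = {w for w, s in zip(vocab, ig) if s < 0.05}   # assumed cutoff

# 2) Expand the removal set with words that are similar in Word2Vec space.
tokenized = [d.split() for d in docs]
w2v = Word2Vec(tokenized, vector_size=50, min_count=1, seed=0)
to_remove = set(low_ig)
for w in low_ig:
    if w in w2v.wv:
        to_remove |= {sim for sim, score in w2v.wv.most_similar(w, topn=3) if score > 0.5}

# 3) Filter the corpus before training the CNN / BiLSTM classifier.
filtered_docs = [" ".join(t for t in d.split() if t not in to_remove) for d in docs]
print(filtered_docs)
```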

Recommender system using BERT sentiment analysis (BERT 기반 감성분석을 이용한 추천시스템)

  • Park, Ho-yeon;Kim, Kyoung-jae
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.2
    • /
    • pp.1-15
    • /
    • 2021
  • When it is difficult to make a decision, we ask for advice from friends or people around us; when we decide to buy products online, we read anonymous reviews before buying. With the advent of the data-driven era, the development of IT is producing vast amounts of data about both individuals and objects. Companies and individuals have accumulated, processed, and analyzed such large amounts of data that decisions which once depended on experts can now be made or executed directly using data. Nowadays, the recommender system plays a vital role in determining users' preferences for purchasing goods, and web services (Facebook, Amazon, Netflix, YouTube) use recommender systems to induce clicks. For example, YouTube's recommender system, used by one billion people worldwide every month, draws on videos users have liked and videos they have watched. Recommender system research is deeply linked to practical business, so many researchers are interested in building better solutions. Recommender systems use the information obtained from their users to generate recommendations, because developing a recommender system requires information on the items that are likely to be preferred by the user. Through recommender systems, we have come to trust patterns and rules derived from data rather than empirical intuition. The growing capacity and availability of data have led machine learning to develop into deep learning. However, recommender systems are not a universal solution: they require data that are sufficient in amount and free of scarcity, as well as detailed information about the individual, and they work correctly only when these conditions hold. When the interaction log is insufficient, recommendation becomes a complex problem for both consumers and sellers, because the seller needs to make recommendations at a personal level while the consumer needs to receive appropriate recommendations based on reliable data. In this paper, to improve the accuracy of "appropriate recommendations" for consumers, a recommender system combined with context-based deep learning is proposed. This research combines user-based data to create a hybrid recommender system. The hybrid approach developed here is not a purely collaborative recommender system but a collaborative extension that integrates user data with deep learning. Customer review data were used as the dataset. Consumers buy products in online shopping malls and then write product reviews; ratings and reviews from buyers who have already purchased give users confidence before purchasing the product. However, recommender systems mainly use scores or ratings rather than reviews to suggest items purchased by many users. In fact, consumer reviews contain product opinions and user sentiment that can be used for evaluation. By incorporating these elements, this paper aims to improve the recommender system. This study addresses the algorithm used when individuals have difficulty selecting an item: consumer reviews and record patterns make it possible to rely on recommendations appropriately. The algorithm implements a recommender system through collaborative filtering, and its predictive accuracy is measured by Root Mean Squared Error (RMSE) and Mean Absolute Error (MAE). Netflix strategically uses its recommender system in its programs through competitions that reduce RMSE every year, making practical use of predictive accuracy. Research on hybrid recommender systems that combine NLP approaches, deep learning, and other techniques for personalized recommendation has been increasing. Among NLP studies, sentiment analysis began to take shape in the mid-2000s as user review data increased. Sentiment analysis is a text classification task based on machine learning; machine learning-based sentiment analysis has the disadvantage that it is difficult to capture the information expressed in a review, because the characteristics of the text are hard to take into account. In this study, we propose a deep learning recommender system that utilizes BERT's sentiment analysis to minimize these disadvantages of machine learning. The comparison models were recommender systems based on Naive-CF (collaborative filtering), SVD (singular value decomposition)-CF, MF (matrix factorization)-CF, BPR-MF (Bayesian personalized ranking matrix factorization)-CF, LSTM, CNN-LSTM, and GRU (Gated Recurrent Units). As a result of the experiment, the BERT-based recommender system performed best.
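The abstract does not detail how the BERT sentiment scores are fused with collaborative filtering, so the sketch below only illustrates the general idea under stated assumptions: a Hugging Face sentiment pipeline stands in for the fine-tuned BERT described in the paper, a trivial user-mean rating stands in for a real CF estimate, and the blending weight is invented; RMSE and MAE are computed as named in the abstract.

```python
# Rough sketch of a review-aware rating predictor; the blending weight, the baseline,
# and the pretrained sentiment model are assumptions, not the paper's configuration.
import numpy as np
from transformers import pipeline

reviews = ["Fast shipping and great quality", "Broke after two days, very disappointed"]
true_ratings = np.array([5.0, 1.0])
cf_estimate = np.array([4.2, 3.8])   # stand-in for a collaborative-filtering prediction

# BERT-based sentiment score in [0, 1] (probability of the positive class).
clf = pipeline("sentiment-analysis")
sent = np.array([r["score"] if r["label"] == "POSITIVE" else 1.0 - r["score"]
                 for r in clf(reviews)])

# Blend the CF-style estimate with the sentiment signal mapped onto the 1-5 scale.
alpha = 0.5                          # assumed weight between the two sources
pred = alpha * cf_estimate + (1 - alpha) * (1.0 + 4.0 * sent)

rmse = np.sqrt(np.mean((pred - true_ratings) ** 2))
mae = np.mean(np.abs(pred - true_ratings))
print(f"RMSE={rmse:.3f}  MAE={mae:.3f}")
```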

A Destructive Method in the Connection of the Algorithm and Design in the Digital media - Centered on the Rapid Prototyping Systems of Product Design - (디지털미디어 환경(環境)에서 디자인 특성(特性)에 관한 연구(硏究) - 실내제품(室內製品) 디자인을 중심으로 -)

  • Kim Seok-Hwa
    • Journal of Science of Art and Design
    • /
    • v.5
    • /
    • pp.87-129
    • /
    • 2003
  • The purpose of this thesis is to propose a new concept of design for the 21st century, on the basis of a study of the general signification of the structures and signs of industrial product design, by examining the difference between modern and post-modern design, which is expected to lead users to different design practices and interpretations. The starting point of this study is the difference in the styles and patterns of 'Gestalt', the determining factor in industrial product design, between the post-modern design of the late 20th century and modern design. That is to say, unlike the functional and rational styles of modern product design, late 20th-century design is based upon a pluralism characterized by complexity, syntheticity, and decorativeness. So far, most previous studies on design seem to have excluded visual aspects and usability, focusing only on the effective communication of design phenomena. These partial studies, blinded by phenomenal aspects, have failed to discover the principle of a fundamental system. However, design varies according to the times, and the transformation of design is reflected in Design Pragnanz to constitute a new text of design. Therefore, it can be argued that Design Pragnanz serves as an essential factor under the influence of the significance of the text. In this thesis, therefore, I analyze 20th-century product design in the light of Gestalt theory and Design Pragnanz, which have functioned as the principles of past design. For this study, I attempted to identify the fundamental elements in modern and post-modern designs and to examine the formal structure of product design, users' aesthetic preferences, and their semantics from an integrative viewpoint. Also, with reference to the history and theory of design, my emphasis is more on fundamental visual phenomena than on structural analysis or the process of visualization in product design, in order to examine the formal properties of modern and post-modern designs. First, in Chapter 1, 'Issues and Background of the Study', I investigated Gestalt theory and Design Pragnanz, on the premise of a formal distinction between modern and post-modern designs. These theories are founded upon the discussion of the visual perception of Gestalt in Germany in the 1910s, in pursuit of a principle of perception centered on human visual perception. In Chapter 2, I dealt with the functionalism of modern design, as preparation for the further study of late 20th-century product design. First of all, in Chapter 2-1, I examined the tendency of modern design toward functionalism, exemplified by the famous statement 'Form follows function'. Excluding all unessential elements in design, such as decoration, this tendency attained the position of the international style based on the spirit of the Bauhaus, universality and regularity, in search of geometric order, standardization, and rationalization. In Chapter 2-2, I investigated the anthropological viewpoint that modern design came to represent culture in a symbolic way, encompassing overall aspects of society such as politics, economics, and ethics, together with the criticism of functionalist design that aesthetic value is lost in exchange for excessive simplicity of style. Moreover, I examined pluralist phenomena in post-modern design such as kitsch, eclecticism, reactionism, hi-tech, and digital design, breaking away from the functionalist purism of modern design. In Chapter 3, I analyzed Gestalt Pragnanz in design in a practical way, against the background of design trends. To begin with, I selected mass-produced product designs of the 20th century as targets of analysis, highlighting representative styles in each product category. For this analysis, I adopted the theory of J. M. Lehnhardt, who graded in percentages the aesthetic and semantic levels of Pragnanz in design expression, and that of J. K. Grutter, who expressed it in the formula M = O : C. I also employed eight units of dichotomies, according to G. D. Birkhoff's aesthetic criteria, for the purpose of a scientific classification of the degree of order and complexity in design, and I analyzed the phenomenal aspects of design form represented in each unit. In Chapter 4, I conducted a questionnaire on the semiological phenomena of Design Pragnanz using 28 pairs of antonymous adjectives, based upon the research of the previous chapter, and then analyzed the process of signification of Design Pragnanz founded on this research. Furthermore, the interpretation of the analysis served as an explanation of preference, through a systematic analysis of Gestalt and Design Pragnanz in product design of the late 20th century. In Chapter 5, I determined the position of Design Pragnanz by integrating the analyses of Gestalt and Pragnanz in modern and post-modern designs. In this process, I revealed the differences between the respective Design Pragnanz in formal respects, in order to suggest a vision of the future that will provide systemic and structural stimulation to current design.
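For reference, the formula M = O : C cited from Grutter follows the form of Birkhoff's classical aesthetic measure, in which the aesthetic value grows with perceived order and shrinks with complexity. The notation below is the standard textbook form of that measure, given only as orientation, not as a reconstruction of the thesis's own scoring scheme.

```latex
% Birkhoff's aesthetic measure: order O divided by complexity C.
M = \frac{O}{C}
```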


A Study on the Digital Drawing of Archaeological Relics Using Open-Source Software (오픈소스 소프트웨어를 활용한 고고 유물의 디지털 실측 연구)

  • LEE Hosun;AHN Hyoungki
    • Korean Journal of Heritage: History & Science
    • /
    • v.57 no.1
    • /
    • pp.82-108
    • /
    • 2024
  • With the transition of archaeological recording methods from analog to digital, 3D scanning technology has been actively adopted in the field, and research on the digital archaeological data gathered from 3D scanning and photogrammetry is continuously being conducted. However, due to cost and manpower issues, most buried cultural heritage organizations are hesitant to adopt such digital technology. This paper presents a digital recording method for relics utilizing open-source software and photogrammetry, which is believed to be the most efficient of the 3D scanning methods. The digital recording process of relics consists of three stages: acquiring a 3D model, creating a joining map from the edited 3D model, and creating a digital drawing. To enhance accessibility, this method uses only open-source software throughout the entire process. The results of this study confirm that, in the quantitative evaluation, the deviation between numerical measurements of the actual artifact and of the 3D model was minimal. In addition, the quantitative quality analyses from the open-source software and the commercial software showed high similarity. However, data processing was overwhelmingly faster with the commercial software, which is believed to result from the higher computational speed of its improved algorithms. In the qualitative evaluation, some differences in mesh and texture quality occurred. In the 3D models generated by open-source software, the following problems occurred: noise on the mesh surface, rough mesh surfaces, and difficulty in confirming the production marks and the expression of patterns on the relics. Nevertheless, some of the open-source software generated quality comparable to that of the commercial software in both the quantitative and qualitative evaluations. Open-source software for editing 3D models was able not only to post-process, match, and merge the 3D models, but also to adjust scale, produce joining surfaces, and render the images necessary for the actual measurement of relics. The final drawing was traced in a CAD program, which is also open-source software. In archaeological research, photogrammetry is applicable to various processes, including excavation, report writing, and research on numerical data from 3D models. With the breakthrough development of computer vision, the types of open-source software have diversified and their performance has significantly improved. Given the high accessibility of such digital technology, the acquisition of 3D model data in archaeology can serve as basic data for the preservation of and active research on cultural heritage.
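As a simple illustration of the kind of quantitative check described above (comparing measurements taken on the actual artifact with measurements taken on the photogrammetric model), the sketch below computes absolute and relative deviations per dimension; the dimension names and measurement values are invented for the example, not the study's data.

```python
# Toy comparison of hand-measured artifact dimensions against a 3D-model measurement.
# All numbers are invented for illustration; they are not the study's data.
measurements_mm = {
    # dimension: (caliper measurement of artifact, measurement on 3D model)
    "rim_diameter": (152.4, 152.1),
    "base_diameter": (88.0, 88.5),
    "height": (210.3, 209.8),
}

for name, (actual, model) in measurements_mm.items():
    deviation = model - actual
    relative = 100.0 * deviation / actual
    print(f"{name:14s} actual={actual:7.1f} mm  model={model:7.1f} mm  "
          f"deviation={deviation:+.2f} mm ({relative:+.2f} %)")
```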