• Title/Summary/Keyword: Web Novel

Search Results: 253

Biodiversity and Enzyme Activity of Marine Fungi with 28 New Records from the Tropical Coastal Ecosystems in Vietnam

  • Pham, Thu Thuy;Dinh, Khuong V.;Nguyen, Van Duy
    • Mycobiology
    • /
    • v.49 no.6
    • /
    • pp.559-581
    • /
    • 2021
  • The coastal marine ecosystems of Vietnam are among the global biodiversity hotspots, but the biodiversity of their marine fungi is not well known. To fill this major knowledge gap, we assessed the genetic diversity (ITS sequences) of 75 fungal strains isolated from 11 sites in surface and deeper coastal waters of Nha Trang Bay and Van Phong Bay using a culture-dependent approach, and identified 5 fungal OTUs (Operational Taxonomic Units) at three representative sampling sites using next-generation sequencing. The two approaches agreed on the most abundant phylum (Ascomycota), genera (Candida and Aspergillus) and species (Candida blankii) but differed for less common taxa. The culturable fungal strains in this study belong to 3 phyla, 5 subdivisions, 7 classes, 12 orders, 17 families, 22 genera and at least 40 species, of which 29 have been identified and several are likely novel. Among the identified species, 12 and 28 are new records for global and Vietnamese marine areas, respectively. The analysis of enzyme activity, together with a checklist of trophic modes and guild assignments, provided valuable additional biological information and suggested the ecological function of planktonic fungi in the marine food web. This is the largest dataset on the morphology, phylogeny and enzyme activity of marine fungi in the tropical coastal ecosystems of Vietnam and Southeast Asia. Biogeographic aspects, ecological factors and human impact may structure mycoplankton communities in such aquatic habitats.

Assessment of the performance of composite steel shear walls with T-shaped stiffeners

  • Zarrintala, Hadi;Maleki, Ahmad;Yaghin, Mohammad Ali Lotfollahi
    • Earthquakes and Structures
    • /
    • v.23 no.3
    • /
    • pp.297-313
    • /
    • 2022
  • The composite steel plate shear wall (CSPSW) is a relatively novel structural system proposed to improve the performance of steel plate shear walls by adding one or two layers of concrete walls to the infill plate. Buckling of the infill steel plate has a significant negative effect on the shear strength and energy dissipation capacity of the overall system. Accordingly, in this study the performance and behavior of composite steel shear walls with T-shaped stiffeners, intended to prevent buckling of the infill steel plate and increase the capacity of CSPSW systems, were investigated using the finite element (FE) method. After modeling composite steel plate shear walls with and without steel plates and calibrating the models against experimental results, the effects of parameters such as the number of stiffeners and the use of vertical, horizontal, diagonal, and combined T-shaped stiffeners in the composite wall were investigated with respect to ultimate capacity, web-plate buckling, von Mises stress, and failure modes. The results showed that the arrangement of stiffeners has no significant effect on the capacity and performance of the CSPSW: vertical or horizontal stiffeners did not change the capacity noticeably. On the other hand, diagonal stiffeners substantially affected the performance of CSPSWs, increasing the capacity of steel shear walls by up to 25%.

A Framework for Implementing Information Systems Integration to Optimize Organizational Performance

  • Ali Sirageldeen Ahmed
    • International Journal of Computer Science & Network Security
    • /
    • v.23 no.10
    • /
    • pp.11-20
    • /
    • 2023
  • The primary aim of this study is to investigate the influence of Service Provider Quality (SPQ), System Quality (SQ), Information Quality (IQ), and Training Quality (TQ) on an interconnected aspect of organizational performance known as growth and development (GD). The study examined the influence of information systems (IS) on organizational performance and provided a theory-based technique for conducting the research. The theoretical foundation is derived from the IS success model [1], which is widely employed in information systems research. The study's framework incorporates several novel elements, drawn from a comprehensive review of both recent and earlier literature that researchers have used to evaluate the dimensions of [1]. We collected data from a diverse group of 348 individuals representing various industries through a web-based questionnaire, and the collected data were analyzed using SPSS. A multiple regression analysis involving 15 factors was conducted to test several hypotheses regarding the relationship between the independent construct, IS effectiveness, and the dependent construct, organizational performance. Several noteworthy descriptive statistics emerged that hold significance for management. The findings strongly indicate that information systems exert a significant and beneficial influence on organizational performance. To sustain and continually enhance organizational effectiveness, the study recommends that managers periodically scrutinize and assess their information systems.

Application of Sensor Fault Detection Scheme Based on AANN to Risk Measurement System (AANN-기반 센서 고장 검출 기법의 방재시스템에의 적용)

  • Kim Sung-Ho;Lee Young-Sam
    • The Sea:JOURNAL OF THE KOREAN SOCIETY OF OCEANOGRAPHY
    • /
    • v.11 no.2
    • /
    • pp.92-96
    • /
    • 2006
  • NLPCA (Nonlinear Principal Component Analysis) is a novel technique for multivariate data analysis, similar to the well-known method of principal component analysis. NLPCA is carried out by a feedforward neural network called an AANN (Auto-Associative Neural Network), which performs the identity mapping. In this work, a sensor fault detection scheme based on NLPCA is presented. To verify its applicability, a simulation study on data supplied from a risk management system is executed.
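The abstract above describes fault detection via an identity-mapping network: a model trained on healthy sensor data reconstructs its input, and a large reconstruction residual flags a faulty sensor. A minimal numpy sketch of that residual test follows; here a linear PCA reconstruction stands in for a trained AANN, and the data, sensor layout, and threshold factor are all assumed for illustration.

```python
import numpy as np

def fit_reconstructor(X, n_components=2):
    """Fit a linear reconstructor (a PCA stand-in for a trained AANN)."""
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_components]

def residuals(X, mean, W):
    """Reconstruction error per sample: ||x - x_hat||."""
    X_hat = (X - mean) @ W.T @ W + mean  # project onto subspace, reconstruct
    return np.linalg.norm(X - X_hat, axis=1)

rng = np.random.default_rng(0)
# "Healthy" readings from 3 correlated sensors: sensor 3 tracks s1 + s2.
f = rng.normal(size=(200, 2))
X = np.column_stack([f[:, 0], f[:, 1], f[:, 0] + f[:, 1]])
X += 0.01 * rng.normal(size=X.shape)

mean, W = fit_reconstructor(X)
threshold = residuals(X, mean, W).max() * 1.5  # assumed illustrative threshold

faulty = X[0].copy()
faulty[0] += 5.0  # sensor 0 stuck with a large bias
flags = residuals(np.vstack([X[0], faulty]), mean, W) > threshold
print(flags)  # expect [False  True]
```

A nonlinear AANN would replace the linear projection with a bottleneck autoencoder, but the fault test itself, thresholding the reconstruction residual, is the same.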

Satellite Imagery and AI-based Disaster Monitoring and Establishing a Feasible Integrated Near Real-Time Disaster Monitoring System (위성영상-AI 기반 재난모니터링과 실현 가능한 준실시간 통합 재난모니터링 시스템)

  • KIM, Junwoo;KIM, Duk-jin
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.23 no.3
    • /
    • pp.236-251
    • /
    • 2020
  • As remote sensing technologies evolve and more satellites are placed in orbit, the demand for using satellite data for disaster monitoring is rapidly increasing. Although natural and social disasters have been monitored using satellite data, the constraints on establishing an integrated satellite-based near real-time disaster monitoring system have not yet been identified, and a novel framework for establishing such a system remains to be presented. This research identifies constraints on establishing satellite-based near real-time disaster monitoring systems by devising and testing a new conceptual framework of disaster monitoring, and then presents a feasible disaster monitoring system that relies mainly on acquirable satellite data. Implementing near real-time disaster monitoring by satellite remote sensing is constrained by technological and economic factors and, more significantly, by interactions between organisations and policy that hamper timely acquisition of appropriate satellite data, as well as by institutional factors related to satellite data analyses. Such constraints could be eased by employing an integrated computing platform, such as Amazon Web Services (AWS), which enables obtaining, storing and analysing satellite data, and by developing a toolkit with which the satellite sensors required for monitoring specific types of disaster, and their orbits, can be analysed. It is anticipated that the findings of this research could serve as a meaningful reference when establishing a satellite-based near real-time disaster monitoring system in any country.

WordNet-Based Category Utility Approach for Author Name Disambiguation (저자명 모호성 해결을 위한 개념망 기반 카테고리 유틸리티)

  • Kim, Je-Min;Park, Young-Tack
    • The KIPS Transactions:PartB
    • /
    • v.16B no.3
    • /
    • pp.225-232
    • /
    • 2009
  • Author name disambiguation is essential for improving the performance of document indexing, retrieval, and web search. It resolves the conflict that arises when multiple authors share the same name label. This paper introduces a novel approach that exploits ontologies and WordNet-based category utility for author name disambiguation. Our method utilizes author knowledge in the form of a populated ontology that uses various types of properties: the titles, abstracts and co-authors of papers, and the authors' affiliations. The author ontology was constructed semi-automatically for the artificial intelligence and semantic web areas using the OWL API and heuristics. Author name disambiguation determines the correct author from among the candidate authors in the populated author ontology. Candidate authors are evaluated using the proposed WordNet-based category utility to resolve ambiguity. Category utility is a tradeoff between the intra-class similarity and inter-class dissimilarity of author instances, where author instances are described in terms of attribute-value pairs. The WordNet-based category utility is proposed to exploit concept information in WordNet for semantic analysis during disambiguation. In experiments, the WordNet-based category utility improved disambiguation by about 10% compared with plain category utility, reaching an overall accuracy of around 98%.
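The abstract above evaluates candidate clusterings of author instances with category utility, the tradeoff between within-cluster predictability of attribute values and their overall distribution. A small self-contained sketch of plain category utility over attribute-value instances follows; the instances, attribute names, and clusters are hypothetical, and the WordNet-based semantic extension from the paper is not reproduced.

```python
from collections import Counter

def category_utility(clusters):
    """Category utility over clusters of instances given as
    attribute -> value dicts: rewards clusters whose attribute
    values are more predictable within-cluster than overall."""
    all_items = [item for c in clusters for item in c]
    n = len(all_items)
    attrs = {a for item in all_items for a in item}

    def sq_prob_sum(items):
        # Sum over attributes of sum_v P(attr = v)^2 within `items`.
        total = 0.0
        for a in attrs:
            counts = Counter(item[a] for item in items if a in item)
            m = sum(counts.values())
            total += sum((v / m) ** 2 for v in counts.values()) if m else 0.0
        return total

    base = sq_prob_sum(all_items)
    return sum(len(c) / n * (sq_prob_sum(c) - base) for c in clusters) / len(clusters)

# Two authors sharing a name: the good split groups instances by research area.
a1 = [{"area": "AI", "venue": "IJCAI"}, {"area": "AI", "venue": "AAAI"}]
a2 = [{"area": "DB", "venue": "VLDB"}, {"area": "DB", "venue": "SIGMOD"}]
good = category_utility([a1, a2])
bad = category_utility([[a1[0], a2[0]], [a1[1], a2[1]]])
print(good > bad)  # expect True: the area-consistent split scores higher
```

In the paper's setting, WordNet concepts would generalize the raw attribute values before this score is computed, so that semantically related values count as matches.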

Introducing Keyword Bibliographic Coupling Analysis (KBCA) for Identifying the Intellectual Structure (지적구조 규명을 위한 키워드서지결합분석 기법에 관한 연구)

  • Lee, Jae Yun;Chung, EunKyung
    • Journal of the Korean Society for information Management
    • /
    • v.39 no.1
    • /
    • pp.309-330
    • /
    • 2022
  • Intellectual structure analysis, which quantitatively identifies the structure, characteristics, and sub-domains of fields, has rapidly increased in recent years. Analysis techniques traditionally used in intellectual structure research include bibliographic coupling analysis, co-citation analysis, co-occurrence analysis, and author bibliographic coupling analysis. This study proposes a novel intellectual structure analysis method, Keyword Bibliographic Coupling Analysis (KBCA). KBCA is a variation of author bibliographic coupling analysis that targets keywords instead of authors: the number of references shared by two keywords is taken as the degree of coupling between them. A set of 1,366 articles in the field of 'Open Data' retrieved from the Web of Science was collected and analyzed with the proposed KBCA technique. A total of 63 keywords that appeared more than 7 times in the 1,366-article set were selected as core keywords in the open data field. The intellectual structure produced by KBCA with these 63 core keywords identified the main areas of open government and open science, along with 10 sub-areas. By contrast, the intellectual structure network from co-occurrence word analysis was found to be insufficient in both overall structure and detailed domain structure. This result can be attributed to the fact that KBCA adequately measures the relationship between keywords using the degree of bibliographic coupling.
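The coupling measure described above, the number of references shared by two keywords, can be sketched in a few lines. The sketch below assumes one plausible reading of the abstract: each keyword accumulates the references cited by the articles carrying it, and the coupling degree is the size of the intersection of two keywords' reference sets. The toy corpus and reference identifiers are invented for illustration.

```python
from itertools import combinations

def kbca_coupling(articles):
    """Degree of coupling between two keywords = number of cited
    references shared by the articles carrying those keywords."""
    refs_by_kw = {}
    for art in articles:
        for kw in art["keywords"]:
            refs_by_kw.setdefault(kw, set()).update(art["references"])
    return {
        (k1, k2): len(refs_by_kw[k1] & refs_by_kw[k2])
        for k1, k2 in combinations(sorted(refs_by_kw), 2)
    }

# Toy corpus: references r1..r4 cited by articles tagged with keywords.
articles = [
    {"keywords": ["open data", "open government"], "references": {"r1", "r2"}},
    {"keywords": ["open science"], "references": {"r2", "r3"}},
    {"keywords": ["open government"], "references": {"r4"}},
]
coupling = kbca_coupling(articles)
print(coupling[("open data", "open science")])    # shared: r2 -> 1
print(coupling[("open data", "open government")]) # shared: r1, r2 -> 2
```

The resulting keyword-keyword weight matrix is what a network layout or clustering step would then consume to draw the intellectual structure map.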

A Multimodal Profile Ensemble Approach to Development of Recommender Systems Using Big Data (빅데이터 기반 추천시스템 구현을 위한 다중 프로파일 앙상블 기법)

  • Kim, Minjeong;Cho, Yoonho
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.4
    • /
    • pp.93-110
    • /
    • 2015
  • A recommender system recommends products to customers who are likely to be interested in them. Based on automated information filtering technology, various recommender systems have been developed. Collaborative filtering (CF), one of the most successful recommendation algorithms, has been applied in a number of different domains, such as recommending Web pages, books, movies, music and products. However, CF has a critical shortcoming. CF finds neighbors whose preferences are like those of the target customer and recommends the products those neighbors have most liked; thus, CF works properly only when there is a sufficient number of ratings on common products from customers. When customer ratings are scarce, neighborhood formation becomes inaccurate, resulting in poor recommendations. To improve the performance of CF-based recommender systems, most related studies have focused on developing novel algorithms under the assumption of a single profile created from a user's item ratings, purchase transactions, or Web access logs. With the advent of big data, companies have come to collect more data and to use a greater variety of large-scale information. Many companies consider the use of big data important because it improves their competitiveness and creates new value. In particular, the use of personal big data in recommender systems is on the rise, because personal big data facilitate more accurate identification of users' preferences and behaviors. The proposed recommendation methodology is as follows. First, multimodal user profiles are created from personal big data in order to grasp the preferences and behavior of users from various viewpoints; we derive five user profiles based on rating, site preference, demographic, Internet usage, and text topic information.
Next, the similarity between users is calculated based on the profiles, and neighbors are found from the results. One of three ensemble approaches is applied to calculate the similarity: the similarity of the combined profile, the average similarity of each profile, or the weighted average similarity of each profile. Finally, the products that the neighborhood most prefers are recommended to the target users. For the experiments, we used demographic data and a very large volume of Web log transactions for 5,000 panel users of a company that specializes in analyzing the rankings of Web sites. R was used to implement the proposed recommender system, and SAS E-miner was used to conduct the topic analysis via keyword search. To evaluate recommendation performance, we used 60% of the data for training and 40% for testing; 5-fold cross validation was also conducted to enhance the reliability of the experiments. The widely used F1 metric, which gives equal weight to both recall and precision, was employed for evaluation. The proposed methodology achieved a significant improvement over the single-profile-based CF algorithm. In particular, the ensemble approach using weighted average similarity showed the highest performance: the rate of improvement in F1 was 16.9 percent for the weighted average similarity ensemble and 8.1 percent for the ensemble using the average similarity of each profile. From these results, we conclude that the multimodal profile ensemble approach is a viable solution to the problems encountered when customer ratings are scarce. This study has significance in suggesting what kinds of information can be used to create profiles in a big data environment and how they can be combined and utilized effectively.
However, the methodology should be studied further for real-world application. The differences in recommendation accuracy obtained by applying the proposed method to different recommendation algorithms need to be compared in order to identify which combination shows the best performance.
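The ensemble step described above, combining per-profile user similarities by simple or weighted averaging, can be sketched briefly. The profile vectors, weights, and cosine similarity measure below are assumptions for illustration; the paper does not specify its similarity function here, and only two of the five profiles are mocked up.

```python
import numpy as np

def ensemble_similarity(profiles_u, profiles_v, weights=None):
    """Combine per-profile cosine similarities between two users.
    weights=None -> simple average; otherwise a weighted average."""
    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    sims = np.array([cosine(pu, pv) for pu, pv in zip(profiles_u, profiles_v)])
    if weights is None:
        return float(sims.mean())
    w = np.asarray(weights, dtype=float)
    return float(sims @ w / w.sum())

# Hypothetical users, each with two profiles (e.g., ratings and site visits).
u = [np.array([5.0, 3.0, 0.0]), np.array([1.0, 0.0, 1.0])]
v = [np.array([4.0, 2.0, 1.0]), np.array([1.0, 1.0, 0.0])]
avg = ensemble_similarity(u, v)
weighted = ensemble_similarity(u, v, weights=[0.8, 0.2])  # trust ratings more
print(round(avg, 3), round(weighted, 3))
```

In the full method this combined similarity would rank candidate neighbors, and the neighborhood's most-preferred items would then be recommended to the target user.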

Online Information Sources of Coronavirus Using Webometric Big Data (코로나19 사태와 온라인 정보의 다양성 연구 - 빅데이터를 활용한 글로벌 접근법)

  • Park, Han Woo;Kim, Ji-Eun;Zhu, Yu-Peng
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.21 no.11
    • /
    • pp.728-739
    • /
    • 2020
  • Using webometric big data, this study examines the diversity of online information sources about the novel coronavirus that caused the COVID-19 pandemic. Specifically, it focuses on the roughly 28 countries where confirmed coronavirus cases had occurred by February 2020. In the results, the online visibility of Australia, Canada, and Italy was the highest, as they produced the most relevant information. There was a statistically significant correlation between the hit counts per country and the frequency of visits to the domains that act as information channels. Interestingly, Japan, China, and Singapore, which had large numbers of confirmed cases at the time, were providing web data related to the novel coronavirus. Online sources were classified using an N-tuple helix model, which showed that government agencies were the largest suppliers of coronavirus information in cyberspace. Furthermore, the two-mode network technique revealed that media companies, university hospitals, and public healthcare centers had taken a positive attitude toward the online circulation of coronavirus research and epidemic prevention information. However, semantic network analysis showed that health, school, home, and public had high centrality values, meaning that people were concerned not only with the personal prevention rules prompted by the coronavirus outbreak, but also with responses to the resulting disruptions to daily life and work.

In silico Design of Discontinuous Peptides Representative of B and T-cell Epitopes from HER2-ECD as Potential Novel Cancer Peptide Vaccines

  • Manijeh, Mahdavi;Mehrnaz, Keyhanfar;Violaine, Moreau;Hassan, Mohabatkar;Abbas, Jafarian;Mohammad, Rabbani
    • Asian Pacific Journal of Cancer Prevention
    • /
    • v.14 no.10
    • /
    • pp.5973-5981
    • /
    • 2013
  • At present, the most common cause of cancer-related death in women is breast cancer. A large proportion of breast cancers overexpress human epidermal growth factor receptor 2 (HER2). This receptor is a 185 kDa growth factor glycoprotein, also known as the first tumor-associated antigen for different types of breast cancer. Moreover, HER2 is an appropriate cell-surface-specific antigen for passive immunotherapy, which relies on the repeated administration of monoclonal antibodies to the patient. However, vaccination is preferable because it stimulates the patient's own immune system to actively respond to the disease. In the current study, several bioinformatics tools were used to design synthetic peptide vaccines. PEPOP was used to predict peptides from the HER2 ECD subdomain III in the form of discontinuous-continuous B-cell epitopes. Then, the T-cell epitope prediction web servers MHCPred, SYFPEITHI, HLA peptide motif search, ProPred, and SVMHC were used to identify class I and class II MHC peptides. In this way, PEPOP selected 12 discontinuous peptides from the 3D structure of the HER2 ECD subdomain III, and T-cell epitope prediction analyses identified four peptides containing the segments 77 (384-391) and 99 (495-503) as both B and T-cell epitopes. To our knowledge, this is the only study focusing on the in silico design of potential novel cancer peptide vaccines from the HER2 ECD subdomain III that contain epitopes for both B and T-cells. These findings, based on bioinformatics analyses, may be used in vaccine design and cancer therapy, saving time and minimizing the number of tests needed to select the best possible epitopes.