• Title/Summary/Keyword: Web Base System

Search results: 360

Investigation of Study Items for the Patterns of Care Study in the Radiotherapy of Laryngeal Cancer: Preliminary Results (후두암의 방사선치료 Patterns of Care Study를 위한 프로그램 항목 개발: 예비 결과)

  • Chung Woong-Ki;Kim Il-Han;Ahn Sung-Ja;Nam Taek-Keun;Oh Yoon-Kyeong;Song Ju-Young;Nah Byung-Sik;Chung Gyung-Ai;Kwon Hyoung-Cheol;Kim Jung-Soo;Kim Soo-Kon;Kang Jeong-Ku
    • Radiation Oncology Journal
    • /
    • v.21 no.4
    • /
    • pp.299-305
    • /
    • 2003
  • Purpose: In order to develop national guidelines for the standardization of radiotherapy, we are planning to establish a web-based, on-line database system for laryngeal cancer. As a first step, this study was performed to accumulate basic clinical information on laryngeal cancer and to determine the items needed for the database system. Materials and Methods: We analyzed the clinical data of patients who were treated under the diagnosis of laryngeal cancer from January 1998 through December 1999 in the southwest area of Korea. Eligibility criteria were as follows: 18 years or older, currently diagnosed with primary epithelial carcinoma of the larynx, and no history of previous treatment for other cancers or other laryngeal diseases. The study items were developed and filled out by radiation oncologists who are members of the Korean Southwest Radiation Oncology Group. SPSS v10.0 software was used for statistical analysis. Results: Data on forty-five patients were collected. Patient age ranged from 28 to 88 years (median, 61). Laryngeal cancer occurred predominantly in males (10:1 sex ratio). Twenty-eight patients (62%) had primary cancers in the glottis and 17 (38%) in the supraglottis. Most were diagnosed pathologically as squamous cell carcinoma (44/45, 98%). Twenty-four of 28 glottic cancer patients (86%) had AJCC (American Joint Committee on Cancer) stage I/II disease, compared with 50% (8/16) of supraglottic cancer patients (p=0.02). Most patients (89%) had the symptom of hoarseness. Indirect laryngoscopy was done in all patients, and direct laryngoscopy was performed in 43 (98%) patients. Twenty-one of 28 (75%) glottic cancer cases and 6 of 17 (35%) supraglottic cancer cases were treated with radiation alone.
The combined treatment of surgery and radiation was used in 5 (18%) glottic and 8 (47%) supraglottic patients. Chemotherapy and radiation were used in 2 (7%) glottic and 3 (18%) supraglottic patients. There was no statistically significant difference in the use of combined-modality treatments between glottic and supraglottic cancers (p=0.20). In all patients, 6 MV X-rays were used with conventional fractionation. The fraction size was 2 Gy in 80% of glottic cancer patients, compared with 1.8 Gy in 59% of patients with supraglottic cancers. The mean total doses delivered to the primary lesion were 65.98 Gy and 70.15 Gy in glottic and supraglottic patients treated with radiation alone, respectively. Based on the collected data, 12 modules with 90 items were developed for the study of the patterns of care in laryngeal cancer. Conclusion: The study items for laryngeal cancer were developed. In the near future, a web system will be established based on the items investigated, and a nation-wide analysis of laryngeal cancer will then be conducted toward the standardization and optimization of radiotherapy.

Recommender system using BERT sentiment analysis (BERT 기반 감성분석을 이용한 추천시스템)

  • Park, Ho-yeon;Kim, Kyoung-jae
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.2
    • /
    • pp.1-15
    • /
    • 2021
  • When it is difficult for us to make decisions, we ask for advice from friends or the people around us. When we decide to buy products online, we read anonymous reviews before buying. With the advent of the data-driven era, the development of IT technology is producing vast amounts of data, from individuals to objects. Companies and individuals have accumulated, processed, and analyzed so much data that decisions which once depended on experts can now be made or executed directly from data. Nowadays, recommender systems play a vital role in determining users' preferences for purchasing goods, and web services (Facebook, Amazon, Netflix, Youtube) use recommender systems to induce clicks. For example, Youtube's recommender system, used by a billion people worldwide every month, suggests videos based on the videos users have "liked" and watched. Recommender system research is deeply linked to practical business, and therefore many researchers are interested in building better solutions. Recommender systems generate recommendations from information obtained from their users, because building a recommender system requires information on the items a user is likely to prefer. Through recommender systems, we have begun to trust patterns and rules derived from data rather than empirical intuition. The growing capacity of data has driven machine learning onward into deep learning. However, recommender systems are not a universal solution: they require a sufficient amount of data with no scarcity, as well as detailed information about the individual, and they work correctly only when these conditions hold. When the interaction log is insufficient, recommender systems become a complex problem for both consumers and sellers: from the seller's perspective, recommendations must be made at a personal level for each consumer, while from the consumer's perspective, appropriate recommendations require reliable data.
In this paper, to improve the accuracy of "appropriate recommendations" to consumers, a recommender system combined with context-based deep learning is proposed. This research combines user-based data to create a hybrid recommender system. The hybrid approach developed is not a purely collaborative recommender system, but a collaborative extension that integrates user data with deep learning. Customer review data were used for the data set. Consumers buy products in online shopping malls and then write product reviews. Ratings and reviews come from buyers who have already purchased, giving users confidence before purchasing a product. However, recommender systems mainly use scores or ratings, rather than reviews, to suggest items purchased by many users. In fact, consumer reviews contain product opinions and user sentiment relevant to evaluation. By incorporating these elements, this paper aims to improve the recommender system. The proposed algorithm targets situations where individuals have difficulty selecting an item; consumer reviews and usage patterns make it possible to rely on recommendations appropriately. The algorithm implements a recommender system through collaborative filtering. This study's predictive accuracy is measured by Root Mean Squared Error (RMSE) and Mean Absolute Error (MAE). Netflix has strategically advanced its recommender system through annual competitions to reduce RMSE, making fair use of predictive accuracy. Research on hybrid recommender systems that combine NLP approaches, deep learning, and other techniques for personalized recommendation has been increasing. Among NLP studies, sentiment analysis began to take shape in the mid-2000s as user review data increased.
Sentiment analysis is a text classification task based on machine learning. Conventional machine-learning-based sentiment analysis has the disadvantage that it is difficult to capture the information expressed in a review, because the characteristics of the text are hard to take into account. In this study, we propose a deep learning recommender system that utilizes BERT's sentiment analysis to minimize the disadvantages of machine learning. The comparison models were recommender systems based on Naive-CF (collaborative filtering), SVD (singular value decomposition)-CF, MF (matrix factorization)-CF, BPR-MF (Bayesian personalized ranking matrix factorization)-CF, LSTM, CNN-LSTM, and GRU (Gated Recurrent Units). As a result of the experiment, the BERT-based recommender system performed best.
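The RMSE and MAE metrics named above are standard. The following is a minimal sketch of both, plus one hypothetical way a review-derived sentiment score could be blended into a collaborative-filtering rating prediction; the blending rule and the `weight` parameter are assumptions for illustration, not the paper's BERT-based model:

```python
import math

def rmse(actual, predicted):
    """Root Mean Squared Error between actual and predicted ratings."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

def mae(actual, predicted):
    """Mean Absolute Error between actual and predicted ratings."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

def sentiment_adjusted(cf_score, sentiment, weight=0.3):
    """Hypothetical hybrid: blend a CF rating prediction (1-5 scale) with a
    review sentiment score in [0, 1], rescaled to the same 1-5 rating scale."""
    return (1 - weight) * cf_score + weight * (1 + 4 * sentiment)

# Toy held-out ratings vs. model predictions
actual = [4, 3, 5, 2]
predicted = [3.8, 3.4, 4.5, 2.6]
print(round(rmse(actual, predicted), 3))
print(round(mae(actual, predicted), 3))
```

Lower RMSE/MAE means more accurate rating prediction, which is how the Naive-CF, SVD-CF, and BERT-based variants in the abstract would be ranked against each other.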

Development of Drawing & Specification Management System Using 3D Object-based Product Model (3차원 객체기반 모델을 이용한 설계도면 및 시방서관리 시스템 구축)

  • Kim Hyun-nam;Wang Il-kook;Chin Sang-yoon
    • Korean Journal of Construction Engineering and Management
    • /
    • v.1 no.3 s.3
    • /
    • pp.124-134
    • /
    • 2000
  • In construction projects, the design information, which should contain accurate product information in a systematic way, needs to be applicable throughout the life cycle of a project. However, paper-based 2D drawings and related documents have difficulty communicating and sharing the owner's and architect's intentions and requirements effectively, and in building a corporate knowledge base through ongoing projects, due to the lack of interoperability between task- or function-oriented software and the burden of handling massive amounts of information. Meanwhile, computer and information technologies are developing so rapidly that practitioners find it hard to adopt them efficiently in the industry. 3D modeling capabilities in CAD systems have developed enormously and enable users to associate 3D models with other relevant information. However, representing all design information in a CAD system still requires a great deal of effort and cost, and such a sophisticated system is difficult to manage. This research focuses on the transition period from 2D-based to 3D-based design information management, in which 2D- and 3D-based management co-exist. It proposes a compound 2D/3D CAD system that presents general design information using a 3D model integrated with 2D CAD drawings for detailed design information. This research developed an integrated information management system for designs and specifications by associating 2D drawings and 3D models, where the 2D drawings represent detailed designs and parts that are hard to express as 3D objects. To do this, the related management processes were analyzed to build an information model, which in turn became the basis of the integrated information management system.

Prediction Model for Gas-Energy Consumption using Ontology-based Breakdown Structure of Multi-Family Housing Complex (온톨로지 기반 공동주택 분류체계를 활용한 가스에너지 사용량 예측 모델)

  • Hong, Tae-Hoon;Park, Sung-Ki;Koo, Choong-Wan;Kim, Hyun-Joong;Kim, Chun-Hag
    • Korean Journal of Construction Engineering and Management
    • /
    • v.12 no.6
    • /
    • pp.110-119
    • /
    • 2011
  • Global warming caused by excessive greenhouse gas emissions is causing climate change all over the world. In Korea, greenhouse gas emissions from residential buildings account for about 10% of gross domestic emissions. Also, the number of deteriorated multi-family housing complexes is increasing. Therefore, the goal of this research is to establish a basis for managing energy consumption continuously and methodically during the MR&R period of multi-family housing. The research process and methodologies are as follows. First, the research team collected data on the project characteristics and energy consumption of multi-family housing complexes in Seoul. Second, an ontology-based breakdown structure was established from the primary characteristics affecting energy consumption, which were selected by statistical analysis. Finally, a predictive model of energy consumption was developed on top of the ontology-based breakdown structure, applying CBR (case-based reasoning), ANN (artificial neural networks), MRA (multiple regression analysis), and GA (genetic algorithms). In this research, PASW (Predictive Analytics SoftWare) Statistics 18, Microsoft Excel, and Protege 4.1 were utilized for data analysis and prediction. In future research, the model will be made more continuous and methodical by developing a web-based system, which will help facility managers in central and local governments, or in multi-family housing complexes, make decisions with clear references regarding appropriate energy consumption.
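Of the four prediction techniques the abstract names, case-based reasoning is the most direct to illustrate. The sketch below is a minimal, hypothetical CBR predictor: retrieve the most similar past complexes by distance over normalized attributes and average their known consumption. The attribute names and numbers are invented for illustration and are not from the study:

```python
def predict_cbr(cases, query, k=2):
    """Minimal case-based reasoning sketch: find the k most similar past
    cases by Euclidean distance over normalized attributes, then average
    their known gas-energy consumption as the prediction."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    ranked = sorted(cases, key=lambda c: dist(c["attrs"], query))
    return sum(c["gas_use"] for c in ranked[:k]) / k

# Hypothetical past cases: attrs = (age_norm, area_norm, households_norm)
cases = [
    {"attrs": (0.2, 0.5, 0.4), "gas_use": 110.0},
    {"attrs": (0.8, 0.3, 0.6), "gas_use": 150.0},
    {"attrs": (0.3, 0.6, 0.5), "gas_use": 120.0},
]
print(predict_cbr(cases, (0.25, 0.55, 0.45)))  # averages the two nearest cases
```

In the study's setting, the ontology-based breakdown structure would supply which attributes are comparable between complexes; here that structure is flattened into a plain tuple.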

Establishment and service of user analysis environment related to computational science and engineering simulation platform

  • Kwon, Yejin;Jeon, Inho;On, Noori;Seo, Jerry H.;Lee, Jongsuk R.
    • Journal of Internet Computing and Services
    • /
    • v.21 no.6
    • /
    • pp.123-132
    • /
    • 2020
  • The EDucation-research Integration through Simulation On the Net (EDISON) platform, a web-based platform that provides computational science and engineering simulation execution environments, offers various analysis environments to students and general users as well as to computational science and engineering researchers. To expand the user base of its simulation environment services, the EDISON platform holds a challenge every year and seeks to increase the platform's competitiveness and excellence by analyzing user requirements for the various simulation environments offered. The challenge platform system in the field of computational science and engineering is provided to users in connection with the simulation services of the existing EDISON platform. Previously, the EDISON challenge services operated independently of the simulation services, so features such as end-user review and intermediate simulation results could not be linked. To meet these user requirements, the challenge platform for computational science and engineering currently in service is linked to the existing computational science and engineering services. In addition, it was possible to increase the efficiency of service resources by providing limited services informed by analyses of all users participating in the challenge. In this study, by analyzing users' simulation and usage environments, we provide an improved challenge platform and analyze ways to improve the simulation execution environment.

GIS-based Disaster Management System for a Private Insurance Company in Case of Typhoons(I) (지리정보기반의 재해 관리시스템 구축(I) -민간 보험사의 사례, 태풍의 경우-)

  • Chang Eun-Mi
    • Journal of the Korean Geographical Society
    • /
    • v.41 no.1 s.112
    • /
    • pp.106-120
    • /
    • 2006
  • Natural and man-made disasters have been expected to be one of the potential themes that can integrate human geography and physical geography. Typhoons like Rusa and Maemi caused great losses to insurance companies as well as to the public sector. We implemented a natural disaster management system for a private insurance company to better estimate the hazards of high wind and to calculate vulnerability to damage. Climatic gauge sites and the addresses of insured objects were geo-coded, and the pressure values along all typhoon tracks were vectorized into line objects. National GIS topographic maps at a scale of 1:5,000 were updated into base maps, and a digital elevation model with 30-meter spacing and land cover maps were used to reflect land roughness in wind velocity. All data were converted to a grid coverage of 1 km × 1 km. The vulnerability curve of Munich Re was adopted, and a preprocessor and postprocessor for the wind velocity model were implemented. Overlaying the locations of contracts on the grid-value coverage shows the relative risk under a given scenario. The wind velocities calculated by the model were compared with observed values (average R² = 0.68). The wind speed models were calibrated by dropping two climatic gauge data points, which enhanced the R² values. The comparison of calculated losses with the insurance company's actual historical losses showed both underestimation and overestimation. This system enables the company to obtain quantitative data for optimizing its re-insurance ratio, to plan the allocation of enterprise resources, and to improve its international creditability. A flood model, a storm surge model, and a flash flood model are being added; finally, a combined disaster vulnerability will be calculated for a total disaster management system.
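The wind model above is validated by the coefficient of determination (R²) between calculated and observed wind speeds at gauge sites. A minimal sketch of that statistic follows; the gauge values are invented for illustration, not the study's data:

```python
def r_squared(observed, predicted):
    """Coefficient of determination: 1 - SS_res / SS_tot, where SS_res is the
    residual sum of squares against the model and SS_tot is the total sum of
    squares against the mean of the observations."""
    mean_obs = sum(observed) / len(observed)
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    return 1 - ss_res / ss_tot

# Hypothetical gauge-site wind speeds (m/s): observed vs. model output
obs = [12.0, 18.0, 25.0, 30.0]
pred = [13.0, 16.5, 26.0, 29.0]
print(round(r_squared(obs, pred), 3))
```

Dropping outlier gauge sites, as the study does for calibration, shrinks SS_res relative to SS_tot and thus raises R².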

Gene Expression Analysis of Inducible cAMP Early Repressor (ICER) Gene in Longissimus dorsi of High- and Low Marbled Hanwoo Steers (한우 등심부위 근육 내 조지방함량에 따른 inducible cAMP early repressor (ICER) 유전자발현 분석)

  • Lee, Seung-Hwan;Kim, Nam-Kuk;Kim, Sung-Kon;Cho, Yong-Min;Yoon, Du-hak;Oh, Sung-Jong;Im, Seok-Ki;Park, Eung-Woo
    • Journal of Life Science
    • /
    • v.18 no.8
    • /
    • pp.1090-1095
    • /
    • 2008
  • Marbling (intramuscular fat) is an important factor in determining meat quality in the Korean beef market. A grain-based finishing system for improving marbling leads to inefficient meat production due to excessive fat production. Identification of intramuscular fat-specific genes might enable more targeted meat production through alternative genetic improvement programs such as marker-assisted selection (MAS). We carried out ddRT-PCR in 12- and 27-month-old Hanwoo steers and detected a 300 bp PCR product of the inducible cAMP early repressor (ICER) gene, showing high gene expression at 27 months of age. A 1.5 kb sequence was re-sequenced using primers designed based on the Hanwoo EST sequence. We then predicted the open reading frame (ORF) of the ICER gene with the ORF Finder web program. The tissue distribution of ICER gene expression was analysed in eight Hanwoo tissues using real-time PCR. The highest ICER gene expression was found in the small intestine, followed by the longissimus dorsi. Interestingly, the ICER gene was expressed 2.5 times higher in the longissimus dorsi than in the rump, the same muscle type. For gene expression analysis in high- and low-marbled individuals, we selected 4 and 3 animals, respectively, based on muscle crude fat content (high: 17-32%, low: 6-7% crude fat). ICER gene expression was analysed using an ANOVA model. Marbling (muscle crude fat content) was affected by the ICER gene (P=0.012). In particular, ICER gene expression was 4 times higher in the high group (n=4) than in the low group (n=3). Therefore, the ICER gene might be a functional candidate gene related to marbling in Hanwoo.

Discussions about Expanded Fests of Cartoons and Multimedia Comics as Visual Culture: With a Focus on New Technologies (비주얼 컬처로서 만화영상의 확장된 장(場, fest)에 대한 논의: 뉴 테크놀로지를 중심으로)

  • Lee, Hwa-Ja;Kim, Se-Jong
    • Cartoon and Animation Studies
    • /
    • s.28
    • /
    • pp.1-25
    • /
    • 2012
  • The rapid digitalization across all aspects of society since 1990 has led to the digitalization of cartoons. As the medium of cartoons moved from paper to the web, a powerful visual culture emerged. The encounter between cartoons and multimedia technologies has helped cartoons evolve into a video culture. Today cartoons are no longer merely a literate culture. It is critical to pay attention to cartoons as an "expanded fest" and as a visual and video culture with much broader significance. In this paper, the investigator set out to diagnose the current position of cartoons in the rapidly changing digital age and to discuss the future directions they should pursue. She thus discussed cases of change from 1990, when colleges began to provide specialized education in cartoons and animation, to the present day, when cartoon and Multimedia Comics fests exist alongside the digitalization of cartoons. The encounter between new technologies and cartoons broke down the conventional forms of cartoons. In particular, the massive appearance of artists who make active use of new technologies in their works has facilitated changes in the content and forms of cartoons and the expansion of character uses. The development of high technologies extends its influence beyond the artists' works to the roles of appreciators. Today readers voice their opinions about works actively, build fan bases, promote the works and artists they favor, and help them rise to stardom. As artist groups in various genres were formed, the possibilities of new stories and texts and the appearance of diverse styles and world views have expanded the essence of cartoon texts and the overall system of cartoon culture, industry, education, institutions, and technology. It is expected that cartoons and Multimedia Comics will continue to contribute as a messenger that reflects the next generation of culture, mediates it, and communicates with it.
Today there is no longer a distinction between print and video cartoons. Cartoons will expand into every field through a wide range of forms and styles, given current developments involving installation concept cartoons, blockbuster digital videos, fancy goods, and characters at narrative-based theme parks. It is therefore necessary to diversify cartoon and Multimedia Comics education in a variety of ways. Today educators face the task of bringing up future generations of talent capable of leading a culture of the overall senses, grounded in literate and video culture, by incorporating humanities, social studies, and new technology education into their creative artistic abilities.

Korean Word Sense Disambiguation using Dictionary and Corpus (사전과 말뭉치를 이용한 한국어 단어 중의성 해소)

  • Jeong, Hanjo;Park, Byeonghwa
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.1
    • /
    • pp.1-13
    • /
    • 2015
  • As opinion mining in big data applications has been highlighted, a great deal of research on unstructured data has been carried out. Much social media on the Internet generates unstructured or semi-structured data every second, often in the natural, human languages we use in daily life. Many words in human languages have multiple meanings or senses, which makes it very difficult for computers to extract useful information from these datasets. Traditional web search engines are usually based on keyword search, resulting in incorrect search results far from users' intentions. Even though much progress has been made over recent years in enhancing the performance of search engines to provide users with appropriate results, there is still much room for improvement. Word sense disambiguation can play a very important role in natural language processing and is considered one of the most difficult problems in this area. Major approaches to word sense disambiguation can be classified as knowledge-based, supervised corpus-based, and unsupervised corpus-based. This paper presents a method that automatically generates a corpus for word sense disambiguation by taking advantage of the examples in existing dictionaries, avoiding expensive sense-tagging processes. It tests the effectiveness of the method with a Naïve Bayes model, a supervised learning algorithm, using the Korean standard unabridged dictionary and the Sejong Corpus. The Korean standard unabridged dictionary has approximately 57,000 sentences. The Sejong Corpus has about 790,000 sentences tagged with both part-of-speech and senses. For the experiment, the Korean standard unabridged dictionary and the Sejong Corpus were evaluated both combined and as separate entities, using cross validation. Only nouns, the target subjects of word sense disambiguation, were selected.
93,522 word senses among 265,655 nouns, and 56,914 sentences from related proverbs and examples, were additionally combined into the corpus. The Sejong Corpus was easily merged with the Korean standard unabridged dictionary because the Sejong Corpus was tagged with the sense indices defined by the dictionary. Sense vectors were formed after the merged corpus was created. The terms used in creating the sense vectors were added to the named entity dictionary of a Korean morphological analyzer. Using the extended named entity dictionary, term vectors were extracted from the input sentences, and term vectors for the sentences were created. Given an extracted term vector and the sense vector model built during the pre-processing stage, the sense-tagged terms were determined by vector space model based word sense disambiguation. In addition, this study shows the effectiveness of the corpus merged from the examples in the Korean standard unabridged dictionary and the Sejong Corpus: the experiments show better precision and recall with the merged corpus. This suggests the method can practically enhance the performance of Internet search engines and help us understand the meaning of sentences more accurately in natural language processing tasks pertinent to search engines, opinion mining, and text mining. The Naïve Bayes classifier used in this study is a supervised learning algorithm based on Bayes' theorem, under the assumption that all senses are independent. Even though this assumption is not realistic and ignores correlations between attributes, the Naïve Bayes classifier is widely used because of its simplicity, and in practice it is known to be very effective in many applications such as text classification and medical diagnosis. However, further research needs to be carried out to consider all possible combinations and/or partial combinations of the senses in a sentence.
Also, the effectiveness of word sense disambiguation may be improved if rhetorical structures or morphological dependencies between words are analyzed through syntactic analysis.
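The core idea of the abstract — training a Naïve Bayes sense classifier on sense-tagged dictionary example sentences — can be sketched minimally as follows. The toy English examples and the ambiguous word "bank" stand in for the Korean dictionary data, and the Laplace smoothing is an assumption for the sketch, not a detail reported by the paper:

```python
import math
from collections import Counter, defaultdict

def train(examples):
    """examples: list of (tokens, sense) pairs, e.g. dictionary example
    sentences tagged with the sense they illustrate."""
    sense_docs = Counter()                # how many examples per sense
    word_counts = defaultdict(Counter)    # word frequencies per sense
    vocab = set()
    for tokens, sense in examples:
        sense_docs[sense] += 1
        word_counts[sense].update(tokens)
        vocab.update(tokens)
    return sense_docs, word_counts, vocab

def disambiguate(tokens, sense_docs, word_counts, vocab):
    """Pick the sense maximizing log P(sense) + sum of log P(word | sense),
    with Laplace (add-one) smoothing for unseen words."""
    total_docs = sum(sense_docs.values())
    best, best_score = None, -math.inf
    for sense, n_docs in sense_docs.items():
        score = math.log(n_docs / total_docs)
        denom = sum(word_counts[sense].values()) + len(vocab)
        for w in tokens:
            score += math.log((word_counts[sense][w] + 1) / denom)
        if score > best_score:
            best, best_score = sense, score
    return best

# Hypothetical sense-tagged dictionary examples for the word "bank"
examples = [
    (["deposit", "money", "account"], "bank/finance"),
    (["loan", "interest", "money"], "bank/finance"),
    (["river", "water", "shore"], "bank/river"),
]
model = train(examples)
print(disambiguate(["money", "loan"], *model))
```

In the paper's setting, the merged dictionary-plus-Sejong corpus plays the role of `examples`, with senses indexed by the dictionary's sense numbers rather than string labels.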

Ontology-Based Process-Oriented Knowledge Map Enabling Referential Navigation between Knowledge (지식 간 상호참조적 네비게이션이 가능한 온톨로지 기반 프로세스 중심 지식지도)

  • Yoo, Kee-Dong
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.2
    • /
    • pp.61-83
    • /
    • 2012
  • A knowledge map describes a network of related knowledge in the form of a diagram, and therefore underpins the structure of knowledge categorization and archiving by defining the relationships of referential navigation between pieces of knowledge. Referential navigation between knowledge refers to the cross-referencing relationships exhibited when a piece of knowledge is used: to understand its contents, a user usually requires additional information or knowledge related to it in a cause-and-effect relation. This relation expands as the effective connections between knowledge increase, finally forming a network of knowledge. A network display of knowledge, using nodes and links to arrange and represent the relationships between concepts, can capture a more complex knowledge structure than a hierarchical display, and it helps users draw inferences through the links shown on the network. For this reason, building a knowledge map on ontology technology has been emphasized as a way to describe knowledge and its relationships formally and objectively. As the necessity of building a knowledge map on the structure of an ontology has been emphasized, quite a few studies have been proposed to fulfill this need. However, most studies applying ontologies to knowledge maps have focused only on formally expressing knowledge and its relationships with other knowledge in order to promote knowledge reuse. Although many types of ontology-based knowledge maps have been proposed, no study has tried to design and implement a knowledge map that enables referential navigation. This paper addresses a methodology for building an ontology-based knowledge map enabling referential navigation between knowledge.
The ontology-based knowledge map resulting from the proposed methodology can not only express referential navigation between knowledge but also infer additional relationships among knowledge based on the referential relationships. The most notable benefits of applying ontology technology to the knowledge map include: formal expression of knowledge and its relationships with other knowledge; automatic identification of the knowledge network through self-inference on the referential relationships; and automatic expansion of the knowledge base designed to categorize and store knowledge according to the network between knowledge. To enable referential navigation between the knowledge included in the knowledge map, and therefore to form the knowledge map as a network, the ontology must describe knowledge in relation to processes and tasks. A process is composed of component tasks, and a task is activated after its required knowledge is provided as input. Since the cause-and-effect relation between knowledge is inherently determined by the sequence of tasks, the referential relationship between knowledge can be implemented indirectly by modeling each piece of knowledge as an input or output of a task. To describe knowledge with respect to its related processes and tasks, Protege-OWL, an editor that enables users to build ontologies for the Semantic Web, is used. An OWL ontology-based knowledge map includes descriptions of classes (process, task, and knowledge), properties (relationships between process and task, and between task and knowledge), and their instances. Given such an ontology, the OWL formal semantics specifies how to derive its logical consequences, i.e. facts not literally present in the ontology but entailed by the semantics. Therefore a knowledge network can be automatically formulated based on the defined relationships, and referential navigation between knowledge is enabled.
To verify the validity of the proposed concepts, two real business process-oriented knowledge maps are exemplified: the knowledge maps of the 'Business Trip Application' and 'Purchase Management' processes. The performance of the implemented ontology-based knowledge map was examined by applying DL-Query, provided as a plug-in module of Protege-OWL. Two kinds of queries were tested: whether the knowledge is networked with respect to the referential relations, and whether the ontology-based knowledge network can infer further facts not literally described. The test results show that referential navigation between knowledge was correctly realized and that the additional inference was accurately performed.
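The core inference described above — deriving knowledge-to-knowledge referential links from modeling knowledge as task inputs and outputs — can be sketched without OWL in plain Python. This is only a stand-in for what the OWL semantics entails via DL-Query, and the 'Business Trip Application' task and knowledge names below are illustrative, not taken from the paper's ontology:

```python
def knowledge_network(tasks):
    """Given tasks in sequence, each with input/output knowledge, derive
    referential edges: a piece of knowledge output by one task refers
    forward to the knowledge produced by any later task that consumes it
    (the cause-and-effect relation determined by the task sequence)."""
    edges = set()
    for i, task in enumerate(tasks):
        for out in task["outputs"]:
            for later in tasks[i + 1:]:
                if out in later["inputs"]:
                    for nxt in later["outputs"]:
                        edges.add((out, nxt))
    return edges

# Hypothetical 'Business Trip Application' process
tasks = [
    {"name": "draft",   "inputs": ["trip-policy"],   "outputs": ["trip-request"]},
    {"name": "approve", "inputs": ["trip-request"],  "outputs": ["approval-doc"]},
    {"name": "book",    "inputs": ["approval-doc"],  "outputs": ["itinerary"]},
]
print(sorted(knowledge_network(tasks)))
```

Navigating the resulting edge set from any piece of knowledge yields the referential path a user would follow, which is what the OWL reasoner materializes automatically from the class and property definitions.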