
Application of Westgard Multi-Rules for Improving Nuclear Medicine Blood Test Quality Control (핵의학 검체검사 정도관리의 개선을 위한 Westgard Multi-Rules의 적용)

  • Jung, Heung-Soo;Bae, Jin-Soo;Shin, Yong-Hwan;Kim, Ji-Young;Seok, Jae-Dong
    • The Korean Journal of Nuclear Medicine Technology, v.16 no.1, pp.115-118, 2012
  • Purpose: The Levey-Jennings chart controls measurement values that deviate from the tolerance limits (mean ±2SD or ±3SD). The upgraded Westgard Multi-Rules, on the other hand, are actively recommended for hospital certification as a more efficient, specialized form of internal quality control. Applying the Westgard Multi-Rules to quality control requires a credible quality control substance and target value. However, because physical examinations commonly use the quality control substance provided within the test kit, calculating the target value is difficult owing to frequent changes in concentration and the insufficient credibility of the control substance. This study attempts to improve the professionalism and credibility of quality control by applying the Westgard Multi-Rules and calculating a credible target value using a commercialized quality control substance. Materials and Methods: This study used Immunoassay Plus Control Levels 1, 2, and 3 of Company B as the quality control substance for Total T3, the thyroid test implemented at the relevant hospital. The target value was established as the mean of 295 cases collected over one month, excluding values that deviated by more than ±2SD. The hospital's quality control calculation program was used to enter the target value. The 1-2s, 2-2s, 1-3s, 2 of 3-2s, R-4s, 4-1s, 10x̄, and 7T Westgard Multi-Rules were applied to the Total T3 test, which was conducted 194 times over 20 days in August. Based on the applied rules, the data were classified into random error and systematic error for analysis. Results: The target values of quality control substances 1, 2, and 3 were established as 84.2 ng/dl, 156.7 ng/dl, and 242.4 ng/dl for Total T3, with standard deviations of 11.22 ng/dl, 14.52 ng/dl, and 14.52 ng/dl, respectively.
According to the error-type analysis performed after applying the Westgard Multi-Rules to the established target values, the following random errors were detected: 1-2s 48 times, 1-3s 13 times, and R-4s 6 times; the systematic errors were 2-2s 10 times, 4-1s 11 times, 2 of 3-2s 17 times, and 10x̄ 10 times, while 7T was not triggered. For uncontrollable random errors, the entire experimental process was rechecked and greater emphasis was placed on re-testing. For controllable systematic errors, the cause of the error was investigated, recorded in the action form, and reported to the internal quality control committee when necessary. Conclusions: This study applied the Westgard Multi-Rules by using a commercialized substance as the quality control material and establishing target values. As a result, precise analysis of random and systematic errors was achieved through the 1-2s, 2-2s, 1-3s, 2 of 3-2s, R-4s, 4-1s, 10x̄, and 7T rules. Furthermore, ideal quality control was achieved by analyzing all data within the ±3SD range. The quality control method based on the systematic application of the Westgard Multi-Rules is therefore more effective than the Levey-Jennings chart and can maximize error detection.
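The rule logic the abstract describes can be sketched in code. The following is an illustrative implementation (not the hospital's program) of several of the named Westgard rules, evaluated on control results expressed as z-scores z = (x - target) / SD; the 2 of 3-2s and 7T rules are omitted for brevity, and the example measurements are hypothetical.

```python
# Sketch: evaluating a subset of the Westgard Multi-Rules on a z-score series.
def westgard_flags(z):
    """Return the rules violated by the most recent point of a z-score series."""
    flags = set()
    if abs(z[-1]) > 2:
        flags.add("1-2s")            # warning rule
    if abs(z[-1]) > 3:
        flags.add("1-3s")            # random error
    if len(z) >= 2 and (all(v > 2 for v in z[-2:]) or all(v < -2 for v in z[-2:])):
        flags.add("2-2s")            # systematic error
    if len(z) >= 2 and max(z[-2:]) - min(z[-2:]) > 4:
        flags.add("R-4s")            # random error (range rule)
    if len(z) >= 4 and (all(v > 1 for v in z[-4:]) or all(v < -1 for v in z[-4:])):
        flags.add("4-1s")            # systematic error
    if len(z) >= 10 and (all(v > 0 for v in z[-10:]) or all(v < 0 for v in z[-10:])):
        flags.add("10x")             # systematic error (mean shift)
    return flags

target, sd = 156.7, 14.52            # level-2 target and SD from the abstract
series = [160.0, 190.0, 188.0]       # hypothetical control results
z = [(x - target) / sd for x in series]
print(westgard_flags(z))
```

With the hypothetical series above, the last two points both exceed +2SD, so the 1-2s warning and the 2-2s systematic-error rule fire together.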


Studies on absorption of ammonium, nitrate-and urea-N by Jinheung and Tongil rice using labelled nitrogen (중질소(重窒素)를 이용(利用)한 진흥(振興)과 통일(統一)벼의 암모니움, 질산(窒酸) 및 요소태(尿素態) 질소(窒素)의 흡수특성(吸收特性) 연구(硏究))

  • Park, Hoon;Seok, Sun Jong
    • Korean Journal of Soil Science and Fertilizer, v.10 no.4, pp.225-233, 1978
  • Uptake and distribution of labelled urea, NH4+, and NO3- by Tongil and Jinheung rice grown with each nitrogen source until the ear formation stage under a water culture system were as follows. 1. When the previous nitrogen source was the same as the one tested, the uptake rate (mg 15N/g dry-weight root per 2 hrs at 28°C, light) decreased in the order NH4 > urea > NO3 and was higher (especially for NH4) in Tongil than in Jinheung. The rate-limiting (slowest) step appears to lie at R (root) → LS (leaf sheath) for urea, LS → LB (leaf blade) for NH4, and M (medium) → R for NO3. The fastest step of translocation appears to be M → R for urea, R → LS for NH4, and LS → LB for NO3. 2. The uptake rate of NH4 by urea-fed plants increased almost linearly from 18°C through 28°C to 38°C in Tongil (Q10 = 1.21 and 1.32, respectively), while it did not change in Jinheung (Q10 = 0.99 and 1.00, respectively). It decreased by 12% in Jinheung in the dark but did not change in Tongil. 3. The uptake rate of a nitrogen source by plants fed a different source decreased in the order NH4→15NO3, NO3→15NH4, urea→15NO3 and was higher (especially NH4→15NO3) in Tongil. In the case of urea→15NH4, it was the same as NH4→15NO3 for Tongil and slightly lower than NO3→15NH4 for Jinheung. It was lower (especially in Tongil) in NH4→15NO3 than in NH4→15NH4. 4. The uptake rate (in NH4→15NO3) was higher during 15 minutes than during 2 hours and was always higher in Tongil. 5. The 15N excess % and content in each part, and the uptake rate of the root, seem to have their own significance, relating to metabolism and translocation respectively.
The change of the nitrogen nutritional environment and the source preference of the varieties are discussed in relation to field conditions and the efficient use of nitrogen fertilizer.
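The Q10 values quoted in the abstract follow the standard temperature-coefficient definition, Q10 = (R2/R1)^(10/(T2-T1)). A minimal sketch, using hypothetical uptake rates rather than the paper's raw data:

```python
# Illustrative only: the Q10 temperature coefficient for uptake rates
# R1, R2 measured at temperatures T1, T2 (in degrees C).
def q10(r1, t1, r2, t2):
    return (r2 / r1) ** (10.0 / (t2 - t1))

# Hypothetical numbers: a rate rising from 1.00 to 1.21 between 18 C and
# 28 C gives Q10 = 1.21, matching the value reported for Tongil.
print(round(q10(1.00, 18, 1.21, 28), 2))  # → 1.21
```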


Optimization of Multiclass Support Vector Machine using Genetic Algorithm: Application to the Prediction of Corporate Credit Rating (유전자 알고리즘을 이용한 다분류 SVM의 최적화: 기업신용등급 예측에의 응용)

  • Ahn, Hyunchul
    • Information Systems Review, v.16 no.3, pp.161-177, 2014
  • Corporate credit rating assessment consists of complicated processes in which various factors describing a company are taken into consideration. Such assessment is known to be very expensive, since domain experts must be employed to assess the ratings. As a result, data-driven corporate credit rating prediction using statistical and artificial intelligence (AI) techniques has received considerable attention from researchers and practitioners. In particular, statistical methods such as multiple discriminant analysis (MDA) and multinomial logistic regression analysis (MLOGIT), and AI methods including case-based reasoning (CBR), artificial neural networks (ANN), and multiclass support vector machines (MSVM), have been applied to corporate credit rating. Among them, MSVM has recently become popular because of its robustness and high prediction accuracy. In this study, we propose a novel optimized MSVM model and apply it to corporate credit rating prediction in order to enhance accuracy. Our model, named 'GAMSVM (Genetic Algorithm-optimized Multiclass Support Vector Machine),' is designed to simultaneously optimize the kernel parameters and the feature subset selection. Prior studies such as Lorena and de Carvalho (2008) and Chatterjee (2013) show that proper kernel parameters may improve the performance of MSVMs. Also, results from studies such as Shieh and Yang (2008) and Chatterjee (2013) imply that appropriate feature selection may lead to higher prediction accuracy. Based on these prior studies, we propose to apply GAMSVM to corporate credit rating prediction. As the tool for optimizing the kernel parameters and the feature subset selection, we use the genetic algorithm (GA). GA is known as an efficient and effective search method that simulates biological evolution. By applying genetic operations such as selection, crossover, and mutation, it gradually improves the search results.
In particular, the mutation operator prevents the GA from falling into local optima, so a globally optimal or near-optimal solution can be found. GA has been widely applied to search for optimal parameters or feature subsets of AI techniques, including MSVM. For these reasons, we also adopt GA as the optimization tool. To empirically validate the usefulness of GAMSVM, we applied it to a real-world case of credit rating in Korea. Our application is bond rating, the most frequently studied area of credit rating for specific debt issues or other financial obligations. The experimental dataset was collected from a large credit rating company in South Korea. It contained 39 financial ratios of 1,295 companies in the manufacturing industry, together with their credit ratings. Using statistical methods including one-way ANOVA and stepwise MDA, we selected 14 financial ratios as candidate independent variables. The dependent variable, i.e., the credit rating, was labeled with four classes: 1 (A1), 2 (A2), 3 (A3), 4 (B and C). Eighty percent of the data for each class was used for training, and the remaining 20 percent was used for validation. To mitigate the small sample size, we applied five-fold cross-validation to the dataset. To examine the competitiveness of the proposed model, we also experimented with several comparative models, including MDA, MLOGIT, CBR, ANN, and MSVM. For MSVM, we adopted the One-Against-One (OAO) and DAGSVM (Directed Acyclic Graph SVM) approaches because they are known to be the most accurate among the various MSVM approaches. GAMSVM was implemented using LIBSVM, an open-source library, and Evolver 5.5, a commercial software package for GA. The other comparative models were run using various statistical and AI packages such as SPSS for Windows, Neuroshell, and Microsoft Excel VBA (Visual Basic for Applications). Experimental results showed that the proposed model, GAMSVM, outperformed all the competitive models.
In addition, the model was found to use fewer independent variables while showing higher accuracy. In our experiments, five variables, X7 (total debt), X9 (sales per employee), X13 (years since founding), X15 (accumulated earnings to total assets), and X39 (an index related to cash flows from operating activities), were found to be the most important factors in predicting corporate credit ratings. The values of the finally selected kernel parameters were almost the same across the data subsets. To examine whether the predictive performance of GAMSVM was significantly greater than that of the other models, we used the McNemar test. As a result, we found that GAMSVM was better than MDA, MLOGIT, CBR, and ANN at the 1% significance level, and better than OAO and DAGSVM at the 5% significance level.
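The GAMSVM encoding can be sketched as a chromosome holding the kernel parameters plus a binary feature mask, evolved by selection, crossover, and mutation. The sketch below is not the authors' implementation: the SVM cross-validation fitness is replaced by a hypothetical stand-in objective so the example stays self-contained, and all parameter ranges are assumptions.

```python
import random

N_FEATURES = 14  # candidate financial ratios, as in the abstract

def fitness(chrom):
    # Stand-in objective (hypothetical): rewards log2(C) near 5, log2(gamma)
    # near -3, and selection of exactly the first five features. In GAMSVM
    # proper this would be the cross-validated accuracy of an MSVM trained
    # on the masked features.
    log_c, log_g, mask = chrom[0], chrom[1], chrom[2:]
    target_mask = [1] * 5 + [0] * (N_FEATURES - 5)
    mask_score = sum(1 for a, b in zip(mask, target_mask) if a == b)
    return mask_score - abs(log_c - 5) - abs(log_g + 3)

def random_chrom():
    return [random.uniform(-10, 10), random.uniform(-10, 10)] + \
           [random.randint(0, 1) for _ in range(N_FEATURES)]

def crossover(a, b):
    cut = random.randrange(1, len(a))       # single-point crossover
    return a[:cut] + b[cut:]

def mutate(chrom, rate=0.1):
    chrom = chrom[:]
    for i in range(len(chrom)):
        if random.random() < rate:
            if i < 2:
                chrom[i] += random.gauss(0, 1)   # perturb kernel parameter
            else:
                chrom[i] = 1 - chrom[i]          # flip feature bit
    return chrom

def evolve(generations=200, pop_size=30):
    pop = [random_chrom() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[:pop_size // 2]              # truncation selection, elitist
        pop = elite + [mutate(crossover(random.choice(elite),
                                        random.choice(elite)))
                       for _ in range(pop_size - len(elite))]
    return max(pop, key=fitness)

random.seed(0)
best = evolve()
print("selected feature mask:", best[2:])
```

Because the elite half is carried over unchanged each generation, the best fitness is monotone non-decreasing, which is the property the abstract relies on when crediting mutation with escaping local optima.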

Comparison of Virtual Wedge versus Physical Wedge Affecting on Dose Distribution of Treated Breast and Adjacent Normal Tissue for Tangential Breast Irradiation (유방암의 방사선치료에서 Virtual Wedge와 Physical Wedge사용에 따른 유방선량 및 주변조직선량의 차이)

  • Kim Yeon-Sil;Kim Sung-Whan;Yoon Sel-Chul;Lee Jung-Seok;Son Seok-Hyun;Choi Ihl-Bong
    • Radiation Oncology Journal, v.22 no.3, pp.225-233, 2004
  • Purpose: The ideal breast irradiation method should provide an optimal dose distribution in the treated breast volume and a minimum scatter dose to the nearby normal tissue. Physical wedges have been used to improve the dose distribution in the treated breast, but unfortunately introduce an increased scatter dose outside the treatment field, particularly to the contralateral breast. The typical physical wedge (PW) was compared with the virtual wedge (VW) to determine the difference in the dose distribution affecting the treated breast and the contralateral breast, lung, heart, and surrounding peripheral soft tissue. Methods and Materials: The data consisted of measurements taken with solid water, a humanoid Alderson Rando phantom, and patients. The radiation doses at the ipsilateral breast and skin, the contralateral breast and skin, the surrounding peripheral soft tissue, and the ipsilateral lung and heart were compared between the physical wedge and the virtual wedge, and the radiation dose distribution and DVH of the treated breast were compared. The beam-on time of each treatment technique was also compared. Furthermore, the doses at the treated breast skin, the contralateral breast skin, and the skin 1.5 cm away from the field margin were measured using TLD in 7 patients undergoing tangential breast irradiation, and the results were compared with the phantom measurements. Results: The virtual wedge showed a lower peripheral dose than the typical physical wedge at 15°, 30°, 45°, and 60°. According to the TLD measurements with the 15° and 30° virtual wedges, the irradiation dose decreased by 1.35% and 2.55% in the contralateral breast and by 0.87% and 1.9% in the skin of the contralateral breast, respectively. Furthermore, the irradiation dose decreased by 2.7% and 6.0% in the ipsilateral lung and by 0.96% and 2.5% in the heart.
The VW fields had peripheral doses lower than those of the PW fields by 1.8% and 2.33%. However, the skin dose increased by 2.4% and 4.58% in the ipsilateral breast. VW fields, in general, use fewer monitor units than PW fields and shortened the beam-on time to about half that of PW. The DVH analysis showed that each delivery technique results in a comparable dose distribution in the treated breast. Conclusion: A modest dose reduction to the surrounding normal tissue and uniform target homogeneity were observed using the VW technique compared with the PW beam in tangential breast irradiation. The VW field is dosimetrically superior to the PW beam and can be an efficient method for minimizing acute and late radiation morbidity, and it reduces linear accelerator loading by decreasing the radiation delivery time.

The Effect of Nasal BiPAP Ventilation in Acute Exacerbation of Chronic Obstructive Airway Disease (만성 기도폐쇄환자에서 급성 호흡 부전시 BiPAP 환기법의 치료 효과)

  • Cho, Young-Bok;Kim, Ki-Beom;Lee, Hak-Jun;Chung, Jin-Hong;Lee, Kwan-Ho;Lee, Hyun-Woo
    • Tuberculosis and Respiratory Diseases, v.43 no.2, pp.190-200, 1996
  • Background: Mechanical ventilation constitutes the last therapeutic option for acute respiratory failure when oxygen therapy and medical treatment fail to improve the respiratory status of the patient. This invasive ventilation, classically administered by endotracheal intubation or tracheostomy, is associated with significant mortality and morbidity. Consequently, any less invasive method able to avoid endotracheal ventilation would appear useful in high-risk patients. Over recent years, the efficacy of nasal mask ventilation has been demonstrated in the treatment of chronic restrictive respiratory failure, particularly in patients with neuromuscular diseases. More recently, this method has been used successfully in the treatment of acute respiratory failure due to parenchymal disease. Method: We assessed the efficacy of bilevel positive airway pressure (BiPAP) in the treatment of acute exacerbation of chronic obstructive pulmonary disease (COPD). This study prospectively evaluated the clinical effectiveness of a treatment schedule with positive pressure ventilation via nasal mask (Respironics BiPAP device) in 22 patients with acute exacerbations of COPD. Eleven patients with acute exacerbations of COPD were treated with nasal pressure support ventilation, delivered via a nasal ventilatory support system, plus standard treatment for 3 consecutive days. An additional 11 control patients were treated with standard treatment only. The standard treatment consisted of medical and oxygen therapy. Nasal BiPAP was delivered by a pressure support ventilator in spontaneous timed mode, with an inspiratory positive airway pressure of 6-8 cmH2O and an expiratory positive airway pressure of 3-4 cmH2O. Patients were evaluated with physical examination (respiratory rate), the modified Borg scale, and arterial blood gases before and after the acute therapeutic intervention.
Results: Before treatment and after 3 days of treatment, mean PaO2 was 56.3 mmHg and 79.1 mmHg (p<0.05) in the BiPAP group and 56.9 mmHg and 70.2 mmHg (p<0.05) in the conventional treatment (CT) group, and PaCO2 was 63.9 mmHg and 56.9 mmHg (p<0.05) in the BiPAP group and 53 mmHg and 52.8 mmHg in the CT group, respectively. pH was 7.36 and 7.41 (p<0.05) in the BiPAP group and 7.37 and 7.38 in the CT group, respectively. Before and after treatment, the mean respiratory rate was 28 and 23 breaths/min in the BiPAP group and 25 and 20 breaths/min in the CT group, respectively. The Borg scale was 7.6 and 4.7 in the BiPAP group and 6.4 and 3.8 in the CT group, respectively. There were significant differences between the two groups in the changes of mean PaO2, PaCO2, and pH. Conclusion: We conclude that short-term nasal pressure-support ventilation delivered via nasal BiPAP in the treatment of acute exacerbation of COPD is an efficient mode of assisted ventilation for improving blood gas values and dyspnea, and may reduce the need for endotracheal intubation with mechanical ventilation.


A Study on the Regional Characteristics of Broadband Internet Termination by Coupling Type using Spatial Information based Clustering (공간정보기반 클러스터링을 이용한 초고속인터넷 결합유형별 해지의 지역별 특성연구)

  • Park, Janghyuk;Park, Sangun;Kim, Wooju
    • Journal of Intelligence and Information Systems, v.23 no.3, pp.45-67, 2017
  • According to the Internet Usage Survey performed in 2016, the number of internet users and internet usage have been increasing. The smartphone, compared with the computer, is taking a more dominant role as an internet access device. As the number of smart devices increases, some expect the demand for high-speed internet to decrease; however, the high-speed internet market is expected to grow slightly for a while due to the speedup of Giga Internet and the growth of the IoT market. As the broadband internet market saturates, telecom operators are over-competing to win new customers; but if they understand the causes of customer churn, more effective marketing can be expected to reduce marketing costs. In this study, we analyzed the relationship between the cancellation rates of telecommunication products and the factors affecting them by combining a telecommunication company's data for the three cities of Anyang, Gunpo, and Uiwang with regional data from KOSIS (Korean Statistical Information Service). In particular, based on the assumption that neighboring areas affect the distribution of cancellation rates by coupling type, we conducted spatial cluster analysis on the three types of cancellation rates of each region using the spatial analysis tool SaTScan and analyzed the various relationships between the cancellation rates and the regional data. In the analysis phase, we first summarized the characteristics of the clusters derived by combining spatial information with the cancellation data. Next, based on the results of the cluster analysis, analysis of variance, correlation analysis, and regression analysis were used to analyze the relationship between the cancellation data and the regional data. Based on the results, we proposed appropriate marketing methods for each region.
Unlike previous studies of regional characteristics, this study has academic novelty in that it performs clustering based on spatial information, so that adjacent regions with similar cancellation types are grouped together. In addition, few previous studies on the determinants of subscription to high-speed internet services have considered regional characteristics; here, we analyzed the relationship between the clusters and the regional characteristics data under the assumption that different factors operate in different regions. We thereby sought a more efficient marketing method that considers the characteristics of each region in new-subscription and customer management for high-speed internet. The analysis of variance confirmed significant differences in regional characteristics among the clusters, and the correlation analysis showed stronger correlations within the clusters than across all regions. Regression analysis was then used to relate the cancellation rate to the regional characteristics. As a result, we found that the cancellation rate differs depending on regional characteristics, making differentiated target marketing possible for each region. The biggest limitation of this study was the difficulty of obtaining enough data to carry out the analysis. In particular, it is difficult to find variables that represent regional characteristics at the Dong (administrative district) level; most data were disclosed at the city level rather than the Dong level, which limited detailed analysis. Data such as income, card usage information, and telecommunication company policies or characteristics that could affect churn were not available at the time. The most urgent requirement for a more sophisticated analysis is to obtain Dong-level data on regional characteristics.
Future studies should perform target marketing based on these results. It would also be meaningful to analyze the effect of marketing by comparing results before and after target marketing, and to use clusters based on new subscription data as well as cancellation data.
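The analysis-of-variance step the abstract describes, checking whether a regional characteristic differs significantly among clusters, can be illustrated with a hand-rolled one-way ANOVA F statistic. The cluster values below are hypothetical, not the study's data:

```python
# Illustrative sketch: one-way ANOVA F statistic comparing a regional
# characteristic (e.g., a cancellation-rate proxy) across three clusters.
def anova_f(groups):
    all_vals = [v for g in groups for v in g]
    n, k = len(all_vals), len(groups)
    grand = sum(all_vals) / n
    # between-group sum of squares: spread of group means around the grand mean
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    # within-group sum of squares: spread of values around their group mean
    ss_within = sum((v - sum(g) / len(g)) ** 2 for g in groups for v in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

clusters = [
    [3.1, 2.9, 3.3, 3.0],   # hypothetical values, cluster A
    [4.2, 4.5, 4.1, 4.4],   # cluster B
    [2.0, 1.8, 2.2, 2.1],   # cluster C
]
print(round(anova_f(clusters), 1))
```

A large F (compared against the F distribution with k-1 and n-k degrees of freedom) supports the study's finding that regional characteristics differ significantly among clusters.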

A Mobile Landmarks Guide : Outdoor Augmented Reality based on LOD and Contextual Device (모바일 랜드마크 가이드 : LOD와 문맥적 장치 기반의 실외 증강현실)

  • Zhao, Bi-Cheng;Rosli, Ahmad Nurzid;Jang, Chol-Hee;Lee, Kee-Sung;Jo, Geun-Sik
    • Journal of Intelligence and Information Systems, v.18 no.1, pp.1-21, 2012
  • In recent years, the mobile phone has undergone an extremely fast evolution. It is equipped with high-quality color displays, high-resolution cameras, and real-time accelerated 3D graphics; other features include a GPS sensor and a digital compass. This evolution significantly helps application developers use the power of smartphones to create a rich environment that offers a wide range of services and exciting possibilities. In outdoor mobile AR research to date, there are many popular location-based AR services, such as Layar and Wikitude. These systems have a major limitation: the AR contents are rarely overlaid accurately on the real target. Another line of research is context-based AR services using image recognition and tracking, where the AR contents are precisely overlaid on the real target; however, real-time performance is restricted by the retrieval time, and such systems are hard to implement over a large area. In our work, we combine the advantages of location-based AR with those of context-based AR. The system first finds the surrounding landmarks and then performs recognition and tracking on them. The proposed system consists of two major parts: the landmark browsing module and the annotation module. In the landmark browsing module, the user can view augmented virtual information (information media) such as text, pictures, and video in the smartphone viewfinder when pointing the smartphone at a certain building or landmark. For this, a landmark recognition technique is applied. SURF point-based features are used in the matching process because of their robustness. To ensure that the image retrieval and matching processes are fast enough for real-time tracking, we exploit the contextual device information (GPS and digital compass). This information is used to select from the database only the nearest landmarks in the pointed direction, and the queried image is matched only against this selected data; therefore, the matching speed is significantly increased.
The second part is the annotation module. Instead of viewing only the augmented information media, the user can create virtual annotations based on linked data. Full knowledge of the landmark is not required: the user can simply look for the appropriate topic by searching with a keyword in linked data. This helps the system find the target URI in order to generate correct AR contents. To recognize target landmarks, images of the selected building or landmark are captured from different angles and distances; this procedure resembles building a connection between the real building and the virtual information that exists in the Linked Open Data. In our experiments, the search range in the database is reduced by clustering images into groups according to their coordinates. A grid-based clustering method and user location information are used to restrict the retrieval range. Compared with existing research using clusters and GPS information, where the retrieval time is around 70-80 ms, our approach reduces the retrieval time to around 18-20 ms on average. Therefore, the total processing time is reduced from 490-540 ms to 438-480 ms. The performance improvement will be more obvious as the database grows. This demonstrates that the proposed system is efficient and robust in many cases.
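The contextual filtering step, keeping only database landmarks that are near the user and roughly in the compass direction the phone is pointing, can be sketched with standard geodesic formulas. The landmark coordinates and thresholds below are hypothetical, not the paper's database:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters (haversine formula)."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial compass bearing from point 1 to point 2, in [0, 360)."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dl = math.radians(lon2 - lon1)
    y = math.sin(dl) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dl)
    return (math.degrees(math.atan2(y, x)) + 360) % 360

def candidates(user, heading, landmarks, max_dist=500, max_angle=45):
    """Landmarks within max_dist meters and max_angle degrees of the heading."""
    out = []
    for name, lat, lon in landmarks:
        if haversine_m(user[0], user[1], lat, lon) > max_dist:
            continue
        diff = abs(bearing_deg(user[0], user[1], lat, lon) - heading)
        if min(diff, 360 - diff) <= max_angle:
            out.append(name)
    return out

landmarks = [  # hypothetical database entries: (name, lat, lon)
    ("north_tower", 37.4700, 126.6500),
    ("east_hall",   37.4660, 126.6560),
    ("far_dome",    37.5200, 126.7000),
]
print(candidates((37.4660, 126.6500), 0.0, landmarks))
```

Only the surviving candidates would then go through SURF feature matching, which is why the retrieval time drops: the expensive image comparison runs on a handful of landmarks instead of the whole database.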

Open Digital Textbook for Smart Education (스마트교육을 위한 오픈 디지털교과서)

  • Koo, Young-Il;Park, Choong-Shik
    • Journal of Intelligence and Information Systems, v.19 no.2, pp.177-189, 2013
  • In smart education, the role of the digital textbook is very important as the face-to-face medium for learners. The standardization of digital textbooks will promote the industrialization of digital textbooks for content providers and distributors as well as for learners and instructors. In this study, ways to standardize digital textbooks are sought with the following three objectives: (1) digital textbooks should undertake the role of a medium for blended learning that supports on- and off-line classes, should operate on a common EPUB viewer without a special dedicated viewer, and should utilize the existing framework of e-learning contents and learning management. The reason to consider EPUB as the standard for digital textbooks is that digital textbooks then do not need another standard for the form of books and can take advantage of the industrial base of EPUB: standards-rich content and an established distribution structure. (2) Digital textbooks should provide a low-cost open market service using currently available standard open software. (3) To provide appropriate learning feedback to students, digital textbooks should provide a foundation that accumulates and manages all learning activity information on a standard infrastructure for educational big data processing. In this study, the digital textbook in a smart education environment is referred to as the open digital textbook.
The components of the open digital textbook service framework are (1) digital textbook terminals such as smart pads, smart TVs, smart phones, and PCs; (2) a digital textbook platform to display and run digital contents on the terminals; (3) a learning contents repository, residing in the cloud, that maintains accredited learning contents; (4) an app store providing and distributing secondary learning contents and learning tools from content developing companies; and (5) an LMS as a learning support/management tool that classroom teachers use for creating instruction materials. In addition, locating all of the hardware and software implementing the smart education service within the cloud takes advantage of cloud computing for efficient management and reduced expense. The open digital textbook for smart education can be considered an e-book-style interface to the LMS for learners. In an open digital textbook, the representation of text, images, audio, video, equations, etc. is a basic function, but painting, writing, and problem solving are beyond the capabilities of a simple e-book. Teacher-to-student, learner-to-learner, and team-to-team communication is required through the open digital textbook. To represent student demographics, portfolio information, and class information, the standards used in e-learning are desirable. To pass learner tracking information about the activities of the learner to the LMS (Learning Management System), the open digital textbook must have a recording function and a communication function with the LMS. DRM is a function for protecting various copyrights. Currently, the DRM of an e-book is controlled by the corresponding book viewer. If the open digital textbook accommodates the DRM standards used by the various e-book viewers, the implementation of redundant features can be avoided.
Security/privacy functions are required to protect information about study or instruction from third parties. UDL (Universal Design for Learning) is a learning support function for those whose disabilities make learning courses difficult. The open digital textbook, based on the e-book standard EPUB 3.0, must (1) record learning activity log information and (2) communicate with the server to support learning activities. While the recording and communication functions, which are not determined in current standards, can be implemented in JavaScript and utilized in current EPUB 3.0 viewers, a strategy of proposing such recording and communication functions as a next-generation e-book standard, or as a special standard (EPUB 3.0 for education), is needed. Future research will implement an open-source program following the proposed open digital textbook standard and present new educational services including big data analysis.

Subject-Balanced Intelligent Text Summarization Scheme (주제 균형 지능형 텍스트 요약 기법)

  • Yun, Yeoil;Ko, Eunjung;Kim, Namgyu
    • Journal of Intelligence and Information Systems, v.25 no.2, pp.141-166, 2019
  • Recently, channels like social media and SNS create enormous amount of data. In all kinds of data, portions of unstructured data which represented as text data has increased geometrically. But there are some difficulties to check all text data, so it is important to access those data rapidly and grasp key points of text. Due to needs of efficient understanding, many studies about text summarization for handling and using tremendous amounts of text data have been proposed. Especially, a lot of summarization methods using machine learning and artificial intelligence algorithms have been proposed lately to generate summary objectively and effectively which called "automatic summarization". However almost text summarization methods proposed up to date construct summary focused on frequency of contents in original documents. Those summaries have a limitation for contain small-weight subjects that mentioned less in original text. If summaries include contents with only major subject, bias occurs and it causes loss of information so that it is hard to ascertain every subject documents have. To avoid those bias, it is possible to summarize in point of balance between topics document have so all subject in document can be ascertained, but still unbalance of distribution between those subjects remains. To retain balance of subjects in summary, it is necessary to consider proportion of every subject documents originally have and also allocate the portion of subjects equally so that even sentences of minor subjects can be included in summary sufficiently. In this study, we propose "subject-balanced" text summarization method that procure balance between all subjects and minimize omission of low-frequency subjects. For subject-balanced summary, we use two concept of summary evaluation metrics "completeness" and "succinctness". 
Completeness means the summary should fully cover the contents of the original documents, and succinctness means the summary contains minimal internal duplication. The proposed method has three phases. The first phase constructs subject-term dictionaries. Topic modeling is used to calculate topic-term weights, which indicate how strongly each term is related to each topic. From these weights, the terms most related to each topic can be identified, and the subjects of the documents can be found from topics composed of terms with similar meanings. A few terms that represent each subject well are then selected; in this method they are called "seed terms". However, these terms alone are too few to describe each subject adequately, so a sufficient number of terms similar to the seed terms is needed to build a well-constructed subject dictionary. Word2Vec is used for this word expansion, finding terms similar to the seed terms. After Word2Vec modeling produces word vectors, the similarity between any two terms can be derived with cosine similarity: the higher the cosine similarity between two terms, the stronger the relationship between them. Terms with high similarity to the seed terms of each subject are selected, and after filtering these expanded terms the subject dictionary is finally constructed. The next phase allocates a subject to every sentence in the original documents. To grasp the content of each sentence, a frequency analysis is first conducted with the terms that make up the subject dictionaries. The TF-IDF weight of each subject is then calculated, making it possible to measure how much each sentence says about each subject. However, TF-IDF weights can grow without bound, so the TF-IDF weights of every subject in each sentence are normalized to values between 0 and 1.
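The seed-term expansion step described above can be sketched in Python. This is a minimal illustration, not the authors' implementation: the toy 2-d vectors stand in for a trained Word2Vec model, and the `expand_seed_terms` helper and its 0.6 similarity threshold are assumptions.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def expand_seed_terms(seed, vectors, threshold=0.6):
    """Return terms whose vectors are cosine-similar to the seed term's vector."""
    sv = vectors[seed]
    return {t for t, v in vectors.items()
            if t != seed and cosine(sv, v) >= threshold}

# toy 2-d "word vectors" standing in for a trained Word2Vec model
vectors = {
    "room":  (0.9, 0.1),
    "suite": (0.85, 0.2),
    "meal":  (0.1, 0.95),
}
print(expand_seed_terms("room", vectors))  # {'suite'}
```

In practice the vectors would come from a Word2Vec model trained on the review corpus, and the threshold (or a top-k cut) controls how aggressively each subject dictionary is expanded.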
Each sentence is then allocated to the subject with the maximum TF-IDF weight, and sentence groups are finally constructed for each subject. The last phase is summary generation. Sen2Vec is used to measure the similarity between the sentences of each subject, from which a similarity matrix is formed. Through repeated sentence selection, a summary can be generated that fully covers the contents of the original documents while minimizing duplication within the summary itself. To evaluate the proposed method, 50,000 TripAdvisor reviews were used to construct the subject dictionaries and 23,087 reviews were used to generate summaries. A comparison between summaries from the proposed method and frequency-based summaries verified that the proposed method better preserves the balance of subjects that the documents originally have.
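The normalize-and-allocate step can likewise be sketched. This is a hypothetical sketch rather than the paper's code: it assumes per-subject min-max normalization to the 0-to-1 range, and the `sent_weights` mapping of sentences to raw TF-IDF weights is invented for illustration.

```python
def allocate_subjects(sent_weights):
    """Assign each sentence to one subject group.

    sent_weights maps sentence -> {subject: raw TF-IDF weight}.
    Each subject's weights are min-max normalized to [0, 1] across
    all sentences, then every sentence joins the group of its
    highest-weight subject."""
    subjects = {s for w in sent_weights.values() for s in w}
    lo = {s: min(w.get(s, 0.0) for w in sent_weights.values()) for s in subjects}
    hi = {s: max(w.get(s, 0.0) for w in sent_weights.values()) for s in subjects}
    groups = {}
    for sent, w in sent_weights.items():
        norm = {s: (w.get(s, 0.0) - lo[s]) / (hi[s] - lo[s])
                if hi[s] > lo[s] else 0.0 for s in subjects}
        groups.setdefault(max(norm, key=norm.get), []).append(sent)
    return groups

# toy review sentences scored against two subjects
weights = {
    "s1": {"room": 3.0, "food": 0.5},
    "s2": {"room": 0.5, "food": 2.0},
    "s3": {"room": 2.5, "food": 0.2},
}
print(allocate_subjects(weights))  # {'room': ['s1', 's3'], 'food': ['s2']}
```

The resulting groups are what the final phase draws from: selecting sentences per group keeps minor subjects such as "food" represented even when their raw frequency is low.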

Collision of New and Old Ruling Ideologies, Witnessed through the Relocation of Jeongreung (Tomb of Queen Sindeok) and the Repair of Gwangtong-gyo (정릉(貞陵) 이장과 광통교(廣通橋) 개수를 통해 본 조선 초기 지배 이데올로기의 대립)

  • Nam, Hohyun
    • Korean Journal of Heritage: History & Science
    • /
    • v.53 no.4
    • /
    • pp.234-249
    • /
    • 2020
  • The dispute over the construction of the Tomb of Queen Sindeok (hereinafter "Jeongreung"), the wife of King Taejo, in Seoul, and the later relocation of that tomb, most clearly demonstrates the collision of new and old ideologies between political powers in early Joseon. Jeongreung, the tomb of Queen Sindeok of the Kang clan, was built inside the capital fortress, but in 1409 King Taejong forced the tomb to be moved outside the capital, and the stone relics remaining at the original site were used to build the stone bridge Gwangtong-gyo. According to an unofficial story, King Taejong moved the tomb outside the capital and used its stones to build Gwang-gyo on the Cheonggyecheon so that the people would tread upon them, thereby cursing Lady Kang. In King Taejo's final years, Lady Kang and King Taejong were indeed in political conflict, but until Taejo became king they had been close political partners. The Sillok records concerning Jeongreung and Gwangtong-gyo in fact state matters more plainly: the relocation of Jeongreung followed a sangeon (a written statement to the king) from the Uijeongbu (the highest administrative agency of Joseon), which argued that having the tomb of a king or queen inside the capital was inappropriate and that, being close to the official quarters of envoys, it had to be moved. The assertion that the repair of Gwangtong-gyo was aimed at degrading Jeongreung thus does not reflect the facts. This article raises the possibility that the use of stones from Jeongreung to repair Gwangtong-gyo instead reflected a need for efficient material procurement amid a sharp increase in demand for materials for civil works both inside and outside the capital.
The reasons for constructing Jeongreung within the capital, and for later moving it outside, are therefore attributable to the differing ideological backgrounds of King Taejo and King Taejong. King Taejo ruled a Confucian state, having taken the throne through a dynastic (yeokseong) revolution, yet he built the tomb and the temple Hongcheon-sa inside the capital for his wife Queen Sindeok. In this respect, it appears he attempted, with the power of Buddhism, to rally supporters and gather the force needed to establish Queen Sindeok's authority. Yi Seong-gye, who was raised in a family that held the Yuan office of darughachi and lived as a military man in the border region, would not have had a high level of understanding of Confucian scholarship; rather, he was a man of the old order with Buddhist leanings. King Taejong Yi Bang-won, by contrast, was an elite Confucian who passed the state examination at the end of the Goryeo era and is known to have had a profound understanding of Neo-Confucianism. In other words, his understanding of the symbolic implications of the capital of a Confucian state would have been the deeper of the two. Since the establishment of a state system governed by law after the Three Kingdoms era, the principle of burial outside the capital, with graves constructed on the outskirts, had been upheld without exception. Jeongreung was built inside the capital because of King Taejo's strong personal wishes, but to King Taejong, who had been a Confucian scholar before taking the throne, this would not have been acceptable. After taking the throne, King Taejong took the initiative in overhauling the capital in order to realize clearly, in the scenery of the capital Hanyang, a Confucian ideology emphasizing yechi ("rule through propriety").
It is reasonable to conclude that the relocation of Jeongreung was undertaken against this historical background.