• Title/Summary/Keyword: Limited approach


Comparison of Cognitive Loads between Koreans and Foreigners in the Reading Process

  • Im, Jung Nam;Min, Seung Nam;Cho, Sung Moon
    • Journal of the Ergonomics Society of Korea
    • /
    • v.35 no.4
    • /
    • pp.293-305
    • /
    • 2016
  • Objective: This study aims to measure cognitive load levels by analyzing the EEG of Koreans and foreigners as they carefully read Korean texts selected by level of grammar and vocabulary, and to compare the cognitive load levels through quantitative values. The results can serve as basic data for a more scientific approach to developing Korean texts or books and to building evaluation methods for foreigners who encounter them for learning or assignments. Background: As of 2014, 84,801 foreign students were studying in Korea, and the number increases annually. Most come from Asia to enter a university or graduate school in Korea. Because these students aim to study at Korean universities, they receive Korean-language education while preparing to study in Korea. To enter a university in Korea, they must attain level 4 or higher on the Test of Proficiency in Korean (TOPIK), or they must complete an educational program at a university-affiliated language institution. In such programs, learners of Korean are taught primarily through texts, except in the speaking domain, and text comprehension can determine their academic achievement after they enter their desired schools (Jeon, 2004). However, many foreigners who finish a short-term language course and must begin university study cannot keep up with university classes requiring expertise, given the vocabulary and grammar levels learned during the language course. Therefore, reading education centered on strategies for understanding university textbooks, which foreigners regard as top-level reading texts, is necessary (Kim and Shin, 2015).
This study carried out an experiment from the perspective that quantitative data on readers, the main agents of reading education, and on teaching materials are needed to support reading education for university-bound learners and to approach educational design scientifically. Specifically, this study assessed reading difficulty by measuring the cognitive load in the reading activity for each text, dividing teaching-material (book) difficulty into eight levels and readers into Koreans and foreigners. Method: To identify the cognitive loads of Koreans and foreigners while carefully reading Korean texts, this study recruited 16 participants (eight Koreans and eight foreigners). The foreigners were limited to students in intermediate-level Korean courses at university-affiliated language institutions in the Seoul Metropolitan Area. As participants read texts selected by level from the Korean books (difficulty: eight levels) published by King Sejong Institute (Sejonghakdang.org), EEG sensors were attached over the frontal lobe (Fz) and occipital lobe (Oz). After the experiment, a questionnaire measured subjective evaluation, comprehension, and the perceived difficulty of grammar and vocabulary. To examine the effect of schema on text comprehension, the Korean texts were controlled, and EEG and subjective satisfaction were measured. Results: To identify the brain's cognitive load, the beta band was extracted. As a result, interactions between reader group (Koreans vs. foreigners) and text difficulty were revealed (Fz: p=0.48; Oz: p=0.00). The cognitive loads of Koreans, whose mother tongue is Korean, were lower when reading Korean texts than those of the foreigners, and the foreigners' cognitive loads increased gradually with text difficulty.
Remarkable differences between Koreans and foreigners, which were absent in the beginner-level texts, started to appear from text 4, which is of intermediate difficulty. In the subjective evaluation, an interaction between reader group and text difficulty was revealed (p=0.00), and satisfaction decreased as text difficulty increased. Conclusion: When readers had background knowledge, that is, when a schema was formed, comprehension and satisfaction were higher even when the texts contained vocabulary and grammar above the readers' levels. For texts whose grammar was subjectively rated as difficult, foreigners' cognitive loads were also high, rising in proportion to the increase in difficulty. This means that the grammar factor functions as a stress factor in foreigners' reading comprehension. Application: This study quantitatively evaluated the cognitive loads of Koreans and foreigners through EEG, according to reader and text difficulty, while reading Korean texts. The results can be used in making Korean teaching materials and in selecting content and topics for Korean education for foreigners. If the research scope is expanded to the reading process using an eye-tracker, a reading education program and evaluation method for foreigners can be developed on the basis of quantitative values.
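For readers who wish to reproduce the beta-band extraction step, a minimal sketch follows. The 13~30 Hz band definition, the sampling rate, and the synthetic signals are assumptions for illustration; the abstract does not give these details.

```python
import numpy as np
from scipy.signal import welch
from scipy.integrate import trapezoid

def beta_band_power(eeg, fs, band=(13.0, 30.0)):
    """Estimate beta-band power of one EEG channel from Welch's PSD."""
    freqs, psd = welch(eeg, fs=fs, nperseg=int(fs * 2))
    mask = (freqs >= band[0]) & (freqs <= band[1])
    # Integrate the power spectral density over the beta band.
    return trapezoid(psd[mask], freqs[mask])

# Sanity check on synthetic signals: a 20 Hz (beta) tone carries more
# beta-band power than a 5 Hz (theta) tone of equal amplitude.
fs = 250  # Hz, an assumed sampling rate
t = np.arange(0, 10, 1 / fs)
beta_signal = np.sin(2 * np.pi * 20 * t)
theta_signal = np.sin(2 * np.pi * 5 * t)
print(beta_band_power(beta_signal, fs) > beta_band_power(theta_signal, fs))
```

In practice the same computation would be applied per channel (Fz, Oz) and per text-difficulty condition before testing for interactions.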

Change Analysis of Aboveground Forest Carbon Stocks According to the Land Cover Change Using Multi-Temporal Landsat TM Images and Machine Learning Algorithms (다시기 Landsat TM 영상과 기계학습을 이용한 토지피복변화에 따른 산림탄소저장량 변화 분석)

  • LEE, Jung-Hee;IM, Jung-Ho;KIM, Kyoung-Min;HEO, Joon
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.18 no.4
    • /
    • pp.81-99
    • /
    • 2015
  • The acceleration of global warming has required a better understanding of carbon cycles over local and regional areas such as the Korean peninsula. Since forests serve as a carbon sink that stores a large amount of terrestrial carbon, there has been demand to accurately estimate forest carbon sequestration. In Korea, the National Forest Inventory (NFI) has been used to estimate forest carbon stocks based on the amount of growing stock per hectare measured at sampled locations. However, because such data are based on point (i.e., plot) measurements, it is difficult to identify the spatial distribution of forest carbon stocks. This study focuses on urban areas, which have a limited number of NFI samples and have shown rapid land cover change, to estimate grid-based forest carbon stocks based on UNFCCC Approach 3 and Tier 3. Land cover change and forest carbon stocks were estimated using Landsat 5 TM data acquired in 1991, 1992, 2010, and 2011, high-resolution airborne images, and the 3rd and 5th~6th NFI data. Machine learning techniques (i.e., random forest and support vector machines/regression) were used for land cover change classification and forest carbon stock estimation. Forest carbon stocks were estimated using reflectance, band ratios, vegetation indices, and topographical indices. Results showed that 33.23 tonC/ha of carbon was sequestered in forest areas that remained unchanged between 1991 and 2010, while 36.83 tonC/ha was sequestered in areas changed from other land-use types to forest. A total of 7.35 tonC/ha of carbon was released from areas changed from forest to other land-use types. This study offers a good opportunity to understand quantitative forest carbon stock change according to land cover change, and its results can contribute to effective forest management.
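The estimation step, regressing carbon stocks on spectral and topographic predictors with a random forest, can be sketched as follows. The pixel features, the hypothetical ground truth, and all numeric choices below are assumptions for illustration; they only loosely mirror the predictors named in the abstract.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 500  # synthetic "pixels"

# Stand-ins for Landsat-derived predictors: red/NIR reflectance,
# a band ratio, NDVI, and elevation.
red = rng.uniform(0.02, 0.2, n)
nir = rng.uniform(0.2, 0.5, n)
ndvi = (nir - red) / (nir + red)
elev = rng.uniform(0, 800, n)
X = np.column_stack([red, nir, nir / red, ndvi, elev])

# Hypothetical carbon stock (tonC/ha), driven mainly by NDVI plus noise.
y = 10 + 40 * ndvi + 0.01 * elev + rng.normal(0, 1.0, n)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
print("training R^2:", round(model.score(X, y), 3))
```

In the study itself the response would come from NFI plot measurements, with per-pixel predictions aggregated over the change/no-change areas.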

A Study on the Risk Factors for Maternal and Child Health Care Program with Emphasis on Developing the Risk Score System (모자건강관리를 위한 위험요인별 감별평점분류기준 개발에 관한 연구)

  • 이광옥
    • Journal of Korean Academy of Nursing
    • /
    • v.13 no.1
    • /
    • pp.7-21
    • /
    • 1983
  • For the flexible and rational distribution of limited existing health resources based on measurements of individual risk, the so-called Risk Approach is proposed by the World Health Organization as a managerial tool in maternal and child health care programs. This approach, in principle, requires developing a technique by which we can measure the degree of risk or discriminate the future outcomes of pregnancy on the basis of prior information obtainable at prenatal care delivery settings. Numerous recent studies have focused on identifying relevant risk factors as the prior information and on defining the adverse outcomes of pregnancy to be discriminated, and have also explored how to develop scoring systems of risk factors for the quantitative assessment of those factors as determinants of pregnancy outcomes. Once the scoring system is established, a technique for classifying patients into those with normal and those with adverse outcomes can easily be developed. The scoring system should be developed to meet the following four basic requirements: 1) Easy to construct 2) Easy to use 3) Theoretically sound 4) Valid. In searching for a feasible methodology to meet these requirements, the author applied the "Likelihood Method," one of the well-known principles in statistical analysis, to develop such a scoring system through the following process. Step 1. Classify the patients into four groups: Group $A_1$: with adverse outcomes on the fetal (neonatal) side only. Group $A_2$: with adverse outcomes on the maternal side only. Group $A_3$: with adverse outcomes on both maternal and fetal (neonatal) sides. Group B: with normal outcomes. Step 2. Construct the marginal tabulation of the distribution of risk factors for each group. Step 3. For the calculation of risk scores, take the logarithmic transformation of the relative proportions of the distribution and round them off to integers. Step 4.
Test the validity of the score chart. A total of 2,282 maternity records registered during the period January 1, 1982 - December 31, 1982 at Ewha Womans University Hospital were used for this study, and the "Questionnaire for Maternity Record for Prenatal and Intrapartum High Risk Screening" developed by the Korean Institute for Population and Health was used to rearrange the information on the records into an easily analyzable form. The findings of the study are summarized as follows. 1) The risk score chart constructed on the basis of the "Likelihood Method" is presented in Table 4 in the main text. 2) From the analysis of the risk score chart, a total of 24 risk factors were identified as having significant predictive power for discriminating pregnancy outcomes into the four groups defined above: (1) age (2) marital status (3) age at first pregnancy (4) medical insurance (5) number of pregnancies (6) history of Cesarean sections (7) number of living children (8) history of premature infants (9) history of overweight newborns (10) history of congenital anomalies (11) history of multiple pregnancies (12) history of abnormal presentation (13) history of obstetric abnormalities (14) past illness (15) hemoglobin level (16) blood pressure (17) heart status (18) general appearance (19) edema status (20) result of abdominal examination (21) cervix status (22) pelvis status (23) chief complaints (24) reasons for examination. 3) The validity of the score chart turned out to be as follows: a) Sensitivity: Group $A_1$: 0.75; Group $A_2$: 0.78; Group $A_3$: 0.92; all combined: 0.85. b) Specificity: 0.68. 4) The diagnosabilities of the score chart for a set of hypothetical prevalences of adverse outcomes were calculated as follows (the sensitivity for "all combined" was used). Hypothetical prevalence: 5% 10% 20% 30% 40% 50% 60%; Diagnosability: 12% 23% 40% 53% 64% 75% 80%.
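The "diagnosability" series in item 4 is consistent with the positive predictive value computed from the combined sensitivity (0.85) and specificity (0.68) by Bayes' rule; the sketch below reproduces the reported series to within rounding at most prevalence points. This reading is an interpretation, not stated explicitly in the abstract.

```python
def diagnosability(prevalence, sensitivity=0.85, specificity=0.68):
    """Positive predictive value: P(adverse outcome | positive screen)."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

for p in (0.05, 0.10, 0.20, 0.30, 0.40, 0.50, 0.60):
    print(f"prevalence {p:.0%}: diagnosability {diagnosability(p):.0%}")
```

For example, at 5% prevalence this gives about 12%, and at 60% prevalence about 80%, matching the chart's endpoints.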


A PLS Path Modeling Approach on the Cause-and-Effect Relationships among BSC Critical Success Factors for IT Organizations (PLS 경로모형을 이용한 IT 조직의 BSC 성공요인간의 인과관계 분석)

  • Lee, Jung-Hoon;Shin, Taek-Soo;Lim, Jong-Ho
    • Asia Pacific Journal of Information Systems
    • /
    • v.17 no.4
    • /
    • pp.207-228
    • /
    • 2007
  • Measuring Information Technology (IT) organizations' activities has long been limited mainly to financial indicators. However, given the multifarious functions of information systems, a number of studies have explored new measurement methodologies that combine financial measurement with new measurement methods. In particular, research on the IT Balanced Scorecard (BSC), a concept derived from the BSC for measuring IT activities, has been carried out in recent years. The BSC provides more advantages than the mere integration of non-financial measures into a performance measurement system. Its core rests on the cause-and-effect relationships between measures that allow prediction of value chain performance, communication, and realization of corporate strategy and incentive-controlled actions. More recently, BSC proponents have focused on the need to tie measures together into a causal chain of performance, and to test the validity of these hypothesized effects to guide the development of strategy. Kaplan and Norton [2001] argue that one of the primary benefits of the balanced scorecard is its use in gauging the success of strategy. Norreklit [2000] insists that the cause-and-effect chain is central to the balanced scorecard; it is also central to the IT BSC. However, prior research on the relationship between information systems and enterprise strategies, as well as on the connections among various IT performance measurement indicators, is scarce. Ittner et al. [2003] report that 77% of all surveyed companies with an implemented BSC place no or only little interest on soundly modeled cause-and-effect relationships, despite the importance of cause-and-effect chains as an integral part of the BSC. This shortcoming can be explained with one theoretical and one practical reason [Blumenberg and Hinz, 2006].
From a theoretical point of view, causalities within the BSC method and their application are only vaguely described by Kaplan and Norton. From a practical consideration, modeling corporate causalities is a complex task due to tedious data acquisition and the subsequent maintenance of reliability. However, cause-and-effect relationships are an essential part of BSCs because they differentiate performance measurement systems like BSCs from simple key performance indicator (KPI) lists. KPI lists present an ad-hoc collection of measures to managers but do not allow a comprehensive view of corporate performance. Instead, a performance measurement system like the BSC tries to model the relationships of the underlying value chain as cause-and-effect relationships. Therefore, to overcome the deficiencies of causal modeling in the IT BSC, sound and robust causal modeling approaches are required in theory as well as in practice. The purpose of this study is to suggest critical success factors (CSFs) and KPIs for measuring the performance of IT organizations and to empirically validate the causal relationships between those CSFs. For this purpose, we define four perspectives of the BSC for IT organizations according to Van Grembergen's study [2000] as follows. The Future Orientation perspective represents the human and technology resources needed by IT to deliver its services. The Operational Excellence perspective represents the IT processes employed to develop and deliver the applications. The User Orientation perspective represents the user evaluation of IT. The Business Contribution perspective captures the business value of the IT investments. Each of these perspectives has to be translated into corresponding metrics and measures that assess the current situation. This study suggests 12 CSFs for the IT BSC based on previous IT BSC studies and COBIT 4.1. These CSFs consist of 51 KPIs.
We define the cause-and-effect relationships among BSC CSFs for IT organizations as follows: the Future Orientation perspective will have positive effects on the Operational Excellence perspective; the Operational Excellence perspective will have positive effects on the User Orientation perspective; and the User Orientation perspective will have positive effects on the Business Contribution perspective. This research tests the validity of these hypothesized causal effects and the sub-hypothesized causal relationships. For this purpose, we used the Partial Least Squares approach to Structural Equation Modeling (PLS Path Modeling) to analyze multiple IT BSC CSFs. PLS path modeling has special abilities that make it more appropriate than other techniques, such as multiple regression and LISREL, when analyzing small sample sizes. The use of PLS path modeling has been gaining interest among IS researchers in recent years because of its ability to model latent constructs under conditions of non-normality and with small to medium sample sizes (Chin et al., 2003). The empirical results of our study using PLS path modeling show that the hypothesized causal effects in the IT BSC are partially significant.

Is Video-assisted Thoracoscopic Resection for Treating Apical Neurogenic Tumors Always Safe? (흉강 첨부 양성 신경종의 흉강경을 이용한 절제술: 언제나 안전하게 시행할 수 있나?)

  • Cho, Deog Gon;Jo, Min Seop;Kang, Chul Ung;Cho, Kyu Do;Choi, Si Young;Park, Jae Kil;Jo, Keon Hyeon
    • Journal of Chest Surgery
    • /
    • v.42 no.1
    • /
    • pp.72-78
    • /
    • 2009
  • Background: Mediastinal neurogenic tumors are generally benign lesions and are ideal candidates for resection via video-assisted thoracoscopic surgery (VATS). However, benign neurogenic tumors at the thoracic apex present technical problems for the surgeon because of the limited exposure of the neurovascular structures, and the optimal surgical access to these tumors is still a matter of debate. This study aims to clarify the feasibility and safety of the VATS approach for surgical resection of benign apical neurogenic tumors (ANT). Material and Method: From January 1996 to September 2008, 31 patients with benign ANT (15 males/16 females, mean age: 45 years, range: 8~73) were operated on by various surgical methods: 14 VATS, 10 lateral thoracotomies, 6 cervical or cervicothoracic incisions and 1 median sternotomy. Three patients had associated von Recklinghausen's disease. The perioperative variables and complications were retrospectively reviewed according to the surgical approach, and the surgical results of VATS were compared with those of the other, more invasive surgeries. Result: In the VATS group, the histologic diagnosis was schwannoma in 9 cases, neurofibroma in 4 cases and ganglioneuroma in 1 case, and the median tumor size was 4.3 cm (range: 1.2~7.0 cm). The operation time, amount of chest tube drainage and postoperative stay in the VATS group were significantly less than those in the invasive surgical group (p<0.05). No conversion thoracotomy was required. There were 2 cases of Horner's syndrome and 2 brachial plexus neuropathies in the VATS group; there were 1 case of Horner's syndrome, 1 brachial plexus neuropathy, 1 vocal cord palsy and 2 non-neurologic complications in the invasive surgical group, and all the complications developed postoperatively. The operative method (that is, non-enucleation of the tumor) was an independent predictor of postoperative neuropathies in the VATS group (p=0.029).
Conclusion: The VATS approach for treating benign ANT is a less invasive, safe and feasible method. Enucleation of the tumor during the VATS procedure may be an important technique to decrease the postoperative neurological complications.

Design Evaluation Model Based on Consumer Values: Three-step Approach from Product Attributes, Perceived Attributes, to Consumer Values (소비자 가치기반 디자인 평가 모형: 제품 속성, 인지 속성, 소비자 가치의 3단계 접근)

  • Kim, Keon-Woo;Park, Do-Hyung
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.4
    • /
    • pp.57-76
    • /
    • 2017
  • Recently, consumer needs are diversifying as information technologies evolve rapidly. Many IT devices such as smartphones and tablet PCs are being launched following the trend of information technology. While IT devices focused on technical advances and improvements a few years ago, the situation has changed: there is little difference in functional aspects, so companies are trying to differentiate IT devices in terms of appearance design. Consumers also consider design a more important factor in the decision-making of smartphones. Smartphones have become fashion items, revealing consumers' own characteristics and personality. As the design and appearance of the smartphone become important, it is necessary to examine the consumer values derived from the design and appearance of IT devices. Furthermore, it is crucial to clarify the mechanisms of consumers' design evaluation and to develop a design evaluation model based on those mechanisms. Since the influence of design continues to grow stronger, many and various design-related studies have been carried out. These studies fall into three main streams. The first focuses on the role of design from the perspective of marketing and communication. The second seeks effective and appealing designs from the perspective of industrial design. The third examines the consumer values created by a product design, meaning consumers' perception or feeling when they look at and feel it. These numerous studies have dealt somewhat with consumer values, but they either do not include product attributes or do not cover the whole process and mechanism from product attributes to consumer values. In this study, we develop a holistic design evaluation model based on consumer values using a three-step approach from product attributes, through perceived attributes, to consumer values. Product attributes are the real, physical characteristics each smartphone has.
They consist of bezel, length, width, thickness, weight and curvature. Perceived attributes are derived from consumers' perception of product attributes. We consider perceived size of device, perceived size of display, perceived thickness, perceived weight, perceived bezel (top-bottom / left-right side), perceived curvature of edge, perceived curvature of back side, gap of each part, perceived gloss and perceived screen ratio. They are factorized into six clusters named 'Size,' 'Slimness,' 'No-Frame,' 'Roundness,' 'Screen Ratio,' and 'Looseness.' We conducted qualitative research to find consumer values, which are categorized into two kinds: look values and feel values. We identified the values named 'Silhouette,' 'Neatness,' 'Attractiveness,' 'Polishing,' 'Innovativeness,' 'Professionalism,' 'Intellectualness,' 'Individuality,' and 'Distinctiveness' as look values, and 'Stability,' 'Comfortableness,' 'Grip,' 'Solidity,' 'Non-fragility,' and 'Smoothness' as feel values. These are factorized into five key values: 'Sleek Value,' 'Professional Value,' 'Unique Value,' 'Comfortable Value,' and 'Solid Value.' Finally, we developed the holistic design evaluation model by analyzing each relationship from product attributes, through perceived attributes, to consumer values. This study has several theoretical and practical contributions. First, we found consumer values in terms of design evaluation and an implicit chain relationship from objective, physical characteristics to subjective, mental evaluation; that is, the model explains the mechanism of design evaluation in consumers' minds. Second, we suggest a general design evaluation process from product attributes, through perceived attributes, to consumer values. It is an adaptable methodology for other IT products as well as smartphones. Practically, this model can support decision-making when companies initiate new product development.
It can help product designers focus their capacities with limited resources. Moreover, if this model is combined with machine learning on collected data such as consumers' purchasing data, most-preferred values, and sales data, it can evolve into an intelligent design decision support system.
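The factorization of perceived attributes into clusters can be illustrated with an exploratory factor analysis on synthetic rating data. The two factors and six observed ratings below are assumptions standing in for the study's six clusters and larger set of perceived attributes.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(1)
n = 300  # synthetic respondents

# Two underlying perception factors (e.g. 'Size' and 'Slimness', assumed).
size = rng.normal(size=n)
slim = rng.normal(size=n)

# Six observed ratings, each loading mainly on one factor plus noise.
ratings = np.column_stack([
    size + rng.normal(scale=0.3, size=n),  # perceived device size
    size + rng.normal(scale=0.3, size=n),  # perceived display size
    size + rng.normal(scale=0.3, size=n),  # perceived weight
    slim + rng.normal(scale=0.3, size=n),  # perceived thickness
    slim + rng.normal(scale=0.3, size=n),  # perceived bezel
    slim + rng.normal(scale=0.3, size=n),  # perceived edge curvature
])

fa = FactorAnalysis(n_components=2, random_state=0).fit(ratings)
loadings = fa.components_  # shape: (2 factors, 6 observed attributes)
print(np.round(loadings, 2))
```

Inspecting which attributes load on which factor is what yields named clusters such as 'Size' or 'Slimness'.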

PRESENT SITUATION AND PROSPECT OF PEDIATRIC DENTISTRY IN KOREA - FOCUSED ON MANAGEMENT OF DENTAL CARIES - (한국 소아치과의 현재와 전망 - 치아우식증관리 분야를 중심으로 -)

  • Lee, Sang-Ho
    • Journal of the Korean Academy of Pediatric Dentistry
    • /
    • v.39 no.2
    • /
    • pp.206-225
    • /
    • 2012
  • The general state of pediatric dentistry in Korea encompasses vigorous academic activity and specialized medical care centered on the Korean Association of Pediatric Dentistry (KAPD), which has about 1,000 pediatric dentists as members, the pediatric dentistry departments of 11 colleges of dentistry, numerous pediatric dentistry training institutions, and private clinics specializing in children. From 1996, accredited pediatric dentists were produced by the KAPD, and from 2008 the state began to accredit pediatric dentists. Since then, as doctors with expertise in pediatric care opened private clinics in addition to practicing at university hospitals, the specialty of pediatric dentistry has deepened. The dentistry community of Korea has recently been going through rapid and profound changes, and the underlying reasons can be classified largely into a few categories: (1) a decreasing population and structural changes in the population, (2) an increase in the number of dentists, (3) changes in the pattern of dental diseases, and (4) changes in the medical environment. In Korea, the population of children aged 0~14 decreased by 2 million in 2010 compared with 2000 due to the reduced birth rate. The population of children aged 0~14 in 2010 made up 16.2% of the total population, but this percentage is estimated to decrease to 8.0% by 2050, far below the estimated global mean of 19.6% for 2050. On the other hand, the number of dentists increased greatly from 18,000 in 2000 to 25,000 in 2010, and is estimated to reach 41,000 by 2030. In addition, the specialized personnel of pediatric dentistry increased 2.5-fold during the past 10 years.
Regarding changes in the pattern of dental diseases, including dental caries, the df rates of 5-year-old and 12-year-old children decreased by 21.9% and 16.7% respectively in 2010 compared with 2000, and the df indices decreased by 2.5 teeth and 1.2 teeth respectively. Korea's medical expenditure is less than the OECD average; more specifically, expenditure from the National Health Plan is lower than the OECD average while the expenditure covered by households is larger. These facts indicate that the coverage of the national health plan needs further reinforcement and that such reinforcement requires continuous promotion. In the medical examination pattern of pediatric dentistry, preventive and corrective treatment increased whereas restorative treatment decreased. This change is considered to result from the decrease in dental caries following the activation of prevention projects at the national level. Within restorative treatment, the proportions of dental amalgam restorations, gold crowns and endodontic treatment decreased while composite resin restorations increased. These changes are considered to result from the changing demands of patients and guardians, who desire more aesthetic improvement along with the socio-economic growth of Korean society. In response to such changes in dentistry, pediatric dentistry in Korea is also changing its patterns of medical examination as follows. It tends to implement early-stage treatment through early diagnosis utilizing various diagnostic tools such as FOTI or QLF. Early-stage dental caries, the so-called white spot, has been included among the targets of dental care and management, and accordingly care guidelines accompanied by remineralization treatment as well as minimally invasive treatment are gradually becoming generalized.
Also, centering on pediatric dentists, the importance of caries risk assessment is being recognized, and the management of dental caries is shifting from a surgical approach to an internal medicinal approach. Recently, efforts have begun to emerge to increase the number of patients managed by dentists and to expand the application scope of pediatric dentistry through such changes. The interest and activities of pediatric dentists, which had so far been limited to the examination room, are now expanding outward, as they participate in the preventive policy-making processes of the community and the state and support the underlying policy theories. Opinions are also converging toward selecting future-oriented strategic policy tasks and conducting research and presentations on the theoretical rationale of such tasks at the association level.

User-Perspective Issue Clustering Using Multi-Layered Two-Mode Network Analysis (다계층 이원 네트워크를 활용한 사용자 관점의 이슈 클러스터링)

  • Kim, Jieun;Kim, Namgyu;Cho, Yoonho
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.2
    • /
    • pp.93-107
    • /
    • 2014
  • In this paper, we report what we have observed with regard to user-perspective issue clustering based on multi-layered two-mode network analysis. This work is significant in the context of companies' collection of data about customer needs. Most companies have failed to properly uncover such needs for products or services in terms of demographic data such as age, income level, and purchase history. Because of excessive reliance on limited internal data, most recommendation systems do not provide decision makers with appropriate business information for current business circumstances. Part of the problem is the increasing regulation of personal data gathering and privacy, which makes demographic or transaction data collection more difficult and is a significant hurdle for traditional recommendation approaches, because these systems demand a great deal of personal data or transaction logs. Our motivation for presenting this paper is our strong belief, and evidence, that most customers' requirements for products can be effectively and efficiently analyzed from unstructured textual data such as Internet news text. In order to derive users' requirements from textual data obtained online, the approach proposed in this paper constructs double two-mode networks, a user-news network and a news-issue network, and integrates them into one quasi-network as the input for issue clustering. One contribution of this research is the development of a methodology that utilizes enormous amounts of unstructured textual data for user-oriented issue clustering by leveraging existing text mining and social network analysis. In order to build multi-layered two-mode networks from news logs, tools such as text mining and topic analysis are needed. We used SAS Enterprise Miner 12.1, which provides text miner and cluster modules for textual data analysis, as well as NetMiner 4 for network visualization and analysis.
Our approach to user-perspective issue clustering is composed of six main phases: crawling, topic analysis, access pattern analysis, network merging, network conversion, and clustering. In the first phase, we collect visit logs for news sites with a crawler. After gathering unstructured news article data, the topic analysis phase extracts issues from each news article in order to build an article-issue network. For simplicity, 100 topics are extracted from 13,652 articles. In the third phase, a user-article network is constructed from access patterns derived from web transaction logs. The double two-mode networks are then merged into a user-issue quasi-network. Finally, in the user-oriented issue-clustering phase, we classify issues through structural equivalence and compare these with the clustering results from statistical tools and network analysis. An experiment with a large dataset was performed to build a multi-layered two-mode network, after which we compared the issue-clustering results from SAS with those of the network analysis. The experimental dataset came from a web site ranking service and the biggest portal site in Korea; the sample contains 150 million transaction logs and 13,652 news articles from 5,000 panels over one year. User-article and article-issue networks were constructed and merged into a user-issue quasi-network using NetMiner. Our issue-clustering results, obtained with the Partitioning Around Medoids (PAM) algorithm and Multidimensional Scaling (MDS), are consistent with the results from SAS clustering. In spite of extensive efforts to provide user information through recommendation systems, most projects succeed only when companies have sufficient data about users and transactions. Our proposed methodology, user-perspective issue clustering, can provide practical support for decision-making in companies because it enriches user-related data from unstructured textual data.
To overcome the insufficient-data problem of traditional approaches, our methodology infers customers' real interests from web transaction logs. In addition, we suggest topic analysis and issue clustering as practical means of issue identification.
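The merging of the double two-mode networks into a user-issue quasi-network, followed by structural-equivalence clustering of issues, can be sketched in a few lines. This is a minimal illustration with toy incidence matrices, not the paper's data; the paper used NetMiner and SAS with PAM, and hierarchical clustering stands in for PAM here.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

# Toy incidence matrices (assumed shapes, not the paper's data):
# A: user-article network (users x articles), from access patterns
# B: article-issue network (articles x issues), from topic analysis
A = np.array([[1, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 0, 1, 1],
              [0, 1, 0, 1]])
B = np.array([[1, 0],
              [1, 0],
              [0, 1],
              [0, 1]])

# Merge the double two-mode networks into a user-issue quasi-network:
# entry (u, i) counts how often user u accessed articles on issue i.
Q = A @ B

# Cluster issues by structural equivalence: issues whose user-access
# profiles (columns of Q) are similar fall into the same cluster.
d = pdist(Q.T, metric="euclidean")
labels = fcluster(linkage(d, method="average"), t=2, criterion="maxclust")
```

With real data, `A` and `B` would be built from the crawled transaction logs and the extracted topics, and `Q` would feed PAM (or SAS clustering) instead of the hierarchical step shown here.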

Truncation Artifact Reduction Using Weighted Normalization Method in Prototype R/F Chest Digital Tomosynthesis (CDT) System (프로토타입 R/F 흉부 디지털 단층영상합성장치 시스템에서 잘림 아티팩트 감소를 위한 가중 정규화 접근법에 대한 연구)

  • Son, Junyoung;Choi, Sunghoon;Lee, Donghoon;Kim, Hee-Joung
    • Journal of the Korean Society of Radiology
    • /
    • v.13 no.1
    • /
    • pp.111-118
    • /
    • 2019
  • Chest digital tomosynthesis has become a practical imaging modality because it can solve the problem of overlapping anatomy in conventional chest radiography. However, because of the limited scan angle and the finite-size detector, a portion of the chest cannot be represented in some or all of the projections. This causes intensity discontinuities across the field-of-view boundaries in the reconstructed slices, which we refer to as truncation artifacts. The purpose of this study was to reduce truncation artifacts using a weighted normalization approach and to investigate the performance of this approach on our prototype chest digital tomosynthesis system. The source-to-image distance of the system was 1,100 mm, and the center of rotation of the X-ray source was located 100 mm above the detector surface. After obtaining 41 projection views over ±20°, tomosynthesis slices were reconstructed with the filtered back-projection algorithm. For quantitative evaluation, peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) values were computed against a reference image reconstructed in simulation, and mean values along a specific direction were evaluated using real data. The simulation results showed that both PSNR and SSIM improved, and the experimental results showed that the artifacts' effect on the mean directional profile of the reconstructed image was reduced. In conclusion, the weighted normalization method improves image quality by reducing truncation artifacts, suggesting that it could improve the image quality of chest digital tomosynthesis.
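The core idea behind weighted normalization can be illustrated with a minimal sketch: voxels near the field-of-view boundary are covered by fewer projections, so dividing the backprojected accumulation by a per-voxel sum of coverage weights removes the resulting intensity discontinuity. The function below is an illustrative simplification under that assumption, not the authors' implementation, and omits the filtering step of filtered back-projection.

```python
import numpy as np

def weighted_normalization_bp(projections, coverage_masks):
    """Backprojection with weighted normalization (illustrative sketch).

    Each slice pixel accumulates contributions only from projections
    whose field of view covers it; dividing by the per-pixel weight
    sum equalizes pixels seen by different numbers of projections,
    suppressing the intensity step at field-of-view boundaries.
    """
    acc = np.zeros_like(projections[0], dtype=float)
    wsum = np.zeros_like(projections[0], dtype=float)
    for proj, mask in zip(projections, coverage_masks):
        acc += proj * mask   # contribution where this view saw the pixel
        wsum += mask         # per-pixel count of covering views
    # Avoid division by zero where no projection covers the pixel.
    return acc / np.maximum(wsum, 1e-6)
```

In a real tomosynthesis pipeline the masks would come from the system geometry (source positions over the ±20° sweep and the detector extent) rather than being supplied directly.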

Data-centric XAI-driven Data Imputation of Molecular Structure and QSAR Model for Toxicity Prediction of 3D Printing Chemicals (3D 프린팅 소재 화학물질의 독성 예측을 위한 Data-centric XAI 기반 분자 구조 Data Imputation과 QSAR 모델 개발)

  • ChanHyeok Jeong;SangYoun Kim;SungKu Heo;Shahzeb Tariq;MinHyeok Shin;ChangKyoo Yoo
    • Korean Chemical Engineering Research
    • /
    • v.61 no.4
    • /
    • pp.523-541
    • /
    • 2023
  • As accessibility to 3D printers increases, exposure to chemicals associated with 3D printing is becoming more frequent. However, research on the toxicity and harmfulness of chemicals generated by 3D printing is insufficient, and the performance of toxicity prediction using in silico techniques is limited by missing molecular structure data. In this study, a quantitative structure-activity relationship (QSAR) model based on a data-centric AI approach was developed to predict the toxicity of new 3D printing materials by imputing missing values in molecular descriptors. First, the MissForest algorithm was used to impute missing values in the molecular descriptors of hazardous 3D printing materials. Then, machine learning (ML)-based QSAR models were developed on four different learners (decision tree, random forest, XGBoost, SVM) to predict the bioconcentration factor (Log BCF), the octanol-air partition coefficient (Log Koa), and the partition coefficient (Log P). Furthermore, the reliability of the data-centric QSAR model was validated through the Tree-SHAP (SHapley Additive exPlanations) method, one of the explainable artificial intelligence (XAI) techniques. The proposed MissForest-based imputation enlarged the molecular structure data approximately 2.5-fold compared to the existing data. Based on the imputed molecular descriptor dataset, the developed data-centric QSAR model achieved prediction performance of approximately 73%, 76%, and 92% for Log BCF, Log Koa, and Log P, respectively. Lastly, the Tree-SHAP analysis demonstrated that the data-centric QSAR model achieved its high prediction performance by identifying the key molecular descriptors most strongly correlated with the toxicity indices. Therefore, the proposed QSAR model based on the data-centric XAI approach can be extended to predict the toxicity of potential pollutants in emerging printing chemicals and in chemical, semiconductor, or display processes.
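The imputation-then-QSAR pipeline described above can be sketched as follows. This is a toy illustration: scikit-learn's `IterativeImputer` with a random-forest estimator stands in for MissForest (which likewise imputes each descriptor iteratively with random forests), and synthetic descriptors with a linear toy target replace the paper's molecular data and Log P values.

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 5))                        # toy molecular descriptors
y = X @ np.array([1.0, -0.5, 0.3, 0.0, 0.2])        # toy target (stand-in for Log P)

# Knock out ~20% of descriptor values to mimic missing structure data.
X_missing = X.copy()
X_missing[rng.random(X.shape) < 0.2] = np.nan

# MissForest-style imputation: iteratively regress each descriptor
# on the others with a random forest until the fill-ins stabilize.
imputer = IterativeImputer(
    estimator=RandomForestRegressor(n_estimators=50, random_state=0),
    random_state=0, max_iter=5)
X_imputed = imputer.fit_transform(X_missing)

# Fit one of the paper's four learners (random forest) on the imputed data.
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_imputed, y)
```

In the actual study the fitted tree models would then be passed to Tree-SHAP to rank which descriptors drive each toxicity index.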