  • Title/Summary/Keyword: Processing Accuracy


Knowledge Extraction Methodology and Framework from Wikipedia Articles for Construction of Knowledge-Base (지식베이스 구축을 위한 한국어 위키피디아의 학습 기반 지식추출 방법론 및 플랫폼 연구)

  • Kim, JaeHun;Lee, Myungjin
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.1
    • /
    • pp.43-61
    • /
    • 2019
  • Development of artificial intelligence technologies has accelerated with the Fourth Industrial Revolution, and AI research is being actively conducted in a variety of fields such as autonomous vehicles, natural language processing, and robotics. Since the 1950s, this research has focused on solving cognitive problems related to human intelligence, such as learning and problem solving. Thanks to recent interest in the technology and research on various algorithms, the field of artificial intelligence has advanced more than ever. The knowledge-based system is a sub-domain of artificial intelligence that aims to enable AI agents to make decisions by using machine-readable and processable knowledge constructed from complex and informal human knowledge and rules in various fields. A knowledge base is used to optimize information collection, organization, and retrieval, and has recently been combined with statistical artificial intelligence such as machine learning. More recently, knowledge bases have aimed to express, publish, and share knowledge on the web by describing and connecting web resources such as pages and data. Such knowledge bases are used for intelligent processing in various fields of artificial intelligence, such as the question-answering systems of smart speakers. However, building a useful knowledge base is time-consuming and still requires considerable expert effort. In recent years, much research on knowledge-based artificial intelligence has used DBpedia, one of the largest knowledge bases, which aims to extract structured content from the various information in Wikipedia. DBpedia contains various information extracted from Wikipedia, such as titles, categories, and links, but its most useful knowledge comes from Wikipedia infoboxes, which present user-created summaries of some unifying aspect.
This knowledge is created by mapping rules between infobox structures and the DBpedia ontology schema, defined in the DBpedia Extraction Framework. Because the knowledge is generated from semi-structured infobox data created by users, DBpedia can expect high reliability in terms of accuracy. However, since only about 50% of all wiki pages in Korean Wikipedia contain an infobox, DBpedia has limitations in terms of knowledge scalability. This paper proposes a method to extract knowledge from text documents according to the ontology schema using machine learning. To demonstrate the appropriateness of this method, we describe a knowledge extraction model that follows the DBpedia ontology schema by learning from Wikipedia infoboxes. Our knowledge extraction model consists of three steps: classifying documents into ontology classes, classifying the sentences suitable for triple extraction, and selecting values and transforming them into RDF triples. The structure of a Wikipedia infobox is defined by infobox templates that provide standardized information across related articles, and the DBpedia ontology schema can be mapped to these templates. Based on these mapping relations, we classify the input document into infobox categories, which correspond to ontology classes. After determining the classification of the input document, we classify the appropriate sentences according to the attributes belonging to that classification. Finally, we extract knowledge from the sentences classified as appropriate and convert it into triples. To train the models, we generated a training data set from a Wikipedia dump by adding BIO tags to sentences, training about 200 classes and about 2,500 relations for knowledge extraction. Furthermore, we conducted comparative experiments with CRF and Bi-LSTM-CRF for the knowledge extraction process.
Through this proposed process, structured knowledge can be utilized by extracting it from text documents according to the ontology schema. In addition, this methodology can significantly reduce the expert effort required to construct instances according to the ontology schema.
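The pipeline described above ends by converting BIO-tagged sentences into RDF triples. A minimal sketch of that final span-to-triple step; the tag names and the English example sentence are illustrative, not from the paper:

```python
def extract_spans(tokens, tags):
    """Collect contiguous B-/I- tagged token runs into (label, text) spans."""
    spans, current_label, current_tokens = [], None, []
    for token, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if current_tokens:
                spans.append((current_label, " ".join(current_tokens)))
            current_label, current_tokens = tag[2:], [token]
        elif tag.startswith("I-") and current_label == tag[2:]:
            current_tokens.append(token)
        else:  # "O" tag or a broken I- sequence closes the current span
            if current_tokens:
                spans.append((current_label, " ".join(current_tokens)))
            current_label, current_tokens = None, []
    if current_tokens:
        spans.append((current_label, " ".join(current_tokens)))
    return spans

tokens = ["Seoul", "is", "the", "capital", "of", "South", "Korea"]
tags   = ["B-subject", "O", "O", "B-predicate", "O", "B-object", "I-object"]
spans = dict(extract_spans(tokens, tags))
# Assemble an RDF-style triple from the extracted spans.
triple = (spans["subject"], spans["predicate"], spans["object"])
print(triple)  # ('Seoul', 'capital', 'South Korea')
```

In the paper, the sequence tagger producing such tags is a CRF or Bi-LSTM-CRF; the span-collection step afterwards is the same either way.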

Development of System for Real-Time Object Recognition and Matching using Deep Learning at Simulated Lunar Surface Environment (딥러닝 기반 달 표면 모사 환경 실시간 객체 인식 및 매칭 시스템 개발)

  • Jong-Ho Na;Jun-Ho Gong;Su-Deuk Lee;Hyu-Soung Shin
    • Tunnel and Underground Space
    • /
    • v.33 no.4
    • /
    • pp.281-298
    • /
    • 2023
  • Continuous research efforts are being devoted to unmanned mobile platforms for lunar exploration. There is an ongoing demand for real-time information processing to accurately determine the positioning and mapping of areas of interest on the lunar surface. To apply deep learning processing and analysis techniques to practical rovers, research on software integration and optimization is imperative. In this study, a foundational investigation was conducted on real-time analysis of virtual lunar base construction site images, aimed at automatically quantifying spatial information of key objects. The study transitioned from an existing region-based object recognition algorithm to a bounding-box-based algorithm, enhancing object recognition accuracy and inference speed. To facilitate extensive data-based object matching training, the Batch Hard Triplet Mining technique was introduced, and both the training and inference processes were optimized. Furthermore, an improved software system for object recognition and identical-object matching was integrated, accompanied by the development of visualization software for automatically matching identical objects within input images. Using simulated satellite-captured video data for training and video data captured from a moving platform for inference, training and inference for identical-object matching were successfully executed. The outcomes of this research suggest the feasibility of building 3D spatial information from continuously captured video data of mobile platforms and utilizing it for positioning objects within regions of interest. These findings are expected to contribute to an integrated, automated on-site system for video-based construction monitoring and control of significant target objects at future lunar base construction sites.
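Batch Hard Triplet Mining, mentioned above, selects for each anchor in a batch the hardest positive (farthest sample of the same class) and hardest negative (nearest sample of a different class) before applying the triplet margin loss. A minimal NumPy sketch of that loss computation; the margin value and toy embeddings are illustrative, not from the paper:

```python
import numpy as np

def batch_hard_triplet_loss(embeddings, labels, margin=0.2):
    """Mean triplet loss using the hardest positive/negative per anchor."""
    # Pairwise Euclidean distance matrix for the whole batch.
    d = np.linalg.norm(embeddings[:, None, :] - embeddings[None, :, :], axis=-1)
    same = labels[:, None] == labels[None, :]
    losses = []
    for i in range(len(labels)):
        pos = same[i].copy()
        pos[i] = False          # other samples of the same class
        neg = ~same[i]          # samples of other classes
        if not pos.any() or not neg.any():
            continue            # anchor has no valid triplet in this batch
        hardest_pos = d[i][pos].max()   # farthest positive
        hardest_neg = d[i][neg].min()   # nearest negative
        losses.append(max(hardest_pos - hardest_neg + margin, 0.0))
    return float(np.mean(losses))

# Two tight, well-separated clusters -> zero loss at the default margin.
emb = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [1.1, 1.0]])
labels = np.array([0, 0, 1, 1])
print(batch_hard_triplet_loss(emb, labels))  # 0.0
```

In practice this mining step runs inside the training loop of the matching network, over embeddings produced by the model rather than fixed vectors.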

An event-related potential study of global-local visual perception in female college students with binge drinking (폭음 여자대학생의 전체-세부 시지각 처리에 관한 사건관련전위 연구)

  • So-yeon Lim;Myung-Sun Kim
    • Korean Journal of Cognitive Science
    • /
    • v.34 no.2
    • /
    • pp.111-151
    • /
    • 2023
  • It has been reported that binge drinkers show cognitive impairment similar to that of patients with alcohol use disorder. Previous studies using neuropsychological tests and brain imaging techniques to investigate the visual perception of alcohol use disorder patients reported that they had global-local visual perception deficits. Although a neurological basis for the global-local visual perception deficit in heavy drinkers has been proposed, no studies to date have directly investigated global-local visual perception in this group. This study investigated local-biased visual perception in female college students with binge drinking (BD) using event-related potentials (ERPs). Based on the scores of the Korean version of the Alcohol Use Disorder Identification Test and the Alcohol Use Questionnaire, participants were assigned to BD (n=25) and non-BD (n=25) groups. Local-global visual processing was assessed using a local-global paradigm, in which large stimuli (global level) composed of small stimuli (local level) were presented. The stimuli presented at the global and local levels were either congruent or incongruent. The behavioral results showed that the BD and non-BD groups did not differ in accuracy or response time. In terms of ERPs, the BD and non-BD groups did not differ in N100, P150, or N200 amplitude. However, the BD group showed significantly smaller P300 amplitude than the non-BD group, especially in the local condition. In addition, a negative correlation between P300 amplitude and binge drinking score was observed, i.e., more severe binge drinking was associated with smaller P300 amplitude. The P300 is known to reflect cognitive inhibition and attentional allocation. In the global-local paradigm, the local condition required participants to attend to the local target while ignoring the global non-target.
Therefore, the present results indicate that female college students with BD do not have local-biased visual processing; instead, they seem to have difficulty inhibiting irrelevant stimuli.

A Study on the Establishment of Comparison System between the Statement of Military Reports and Related Laws (군(軍) 보고서 등장 문장과 관련 법령 간 비교 시스템 구축 방안 연구)

  • Jung, Jiin;Kim, Mintae;Kim, Wooju
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.3
    • /
    • pp.109-125
    • /
    • 2020
  • The Ministry of National Defense is pushing the Defense Acquisition Program to build strong defense capabilities, and it spends more than 10 trillion won annually on defense improvement. As the Defense Acquisition Program is directly related to the security of the nation as well as the lives and property of the people, it must be carried out very transparently and efficiently by experts. However, the excessive diversification of laws and regulations related to the Defense Acquisition Program has made it challenging for many working-level officials to carry out the program smoothly. Many officials reportedly realize only after pushing ahead with their work that there are related regulations they were unaware of. In addition, statutory statements related to the Defense Acquisition Program tend to cause serious issues even if only a single expression within a sentence is wrong. Despite this, efforts to establish a sentence comparison system to correct such issues in real time have been minimal. Therefore, this paper proposes an implementation plan for a "Comparison System between the Statement of Military Reports and Related Laws" that uses a Siamese Network-based artificial neural network, a model from the field of natural language processing (NLP), to measure the similarity between sentences likely to appear in Defense Acquisition Program-related documents and sentences from related statutory provisions, to determine and classify the risk of illegality, and to make users aware of the consequences. Various artificial neural network models (Bi-LSTM, Self-Attention, D_Bi-LSTM) were studied using 3,442 pairs of "Original Sentences" (described in actual statutes) and "Edited Sentences" (sentences derived by editing the originals).
Among the many Defense Acquisition Program-related statutes, the DEFENSE ACQUISITION PROGRAM ACT, the ENFORCEMENT RULE OF THE DEFENSE ACQUISITION PROGRAM ACT, and the ENFORCEMENT DECREE OF THE DEFENSE ACQUISITION PROGRAM ACT were selected. The "Original Sentence" set consists of the 83 clauses that actually appear in these statutes and are most accessible to working-level officials in their work. The "Edited Sentence" set comprises 30 to 50 similar sentences per clause that are likely to appear, in modified form, in military reports. During the creation of the edited sentences, the original sentences were modified using 12 predefined rules, and the edited sentences were produced in proportion to the number of rules applied to each original sentence. After conducting 1:1 sentence similarity performance evaluation experiments, it was possible to classify each "Edited Sentence" as legal or illegal with considerable accuracy. The "Edited Sentence" dataset used to train the neural network models contains a variety of actual statutory statements ("Original Sentences") varied by the 12 rules. On the other hand, when trained only on the "Original Sentence" and "Edited Sentence" dataset, the models could not effectively classify other sentences that appear in actual military reports; the dataset is not ample enough for the models to recognize new incoming sentences. Hence, model performance was reassessed on an additional 120 newly written sentences that better resemble those in actual military reports while remaining associated with the original sentences. We were thereby able to confirm that the models' performance surpassed a certain level even when trained merely on the "Original Sentence" and "Edited Sentence" data.
If sufficient model learning is achieved by improving and expanding the full training set with sentences that actually appear in reports, the models will be able to better classify sentences from military reports as legal or illegal. Based on the experimental results, this study confirms the possibility and value of building a "Real-Time Automated Comparison System Between Military Documents and Related Laws". The approach developed in this study can identify which specific clause, among the several that appear in the related laws, is most similar to a sentence appearing in Defense Acquisition Program-related military reports, which helps determine whether the contents of the report sentences are at risk of illegality when compared with the law clauses.
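As a rough illustration of the pairwise comparison idea, the sketch below stands in a bag-of-character-bigrams encoder for the Bi-LSTM twin encoder the paper actually trains; the example sentences and the 0.5 threshold are invented for the illustration:

```python
from collections import Counter
import math

def encode(sentence, n=2):
    """Shared 'twin' encoder: bag of character bigrams (a toy stand-in
    for the trained Bi-LSTM encoder of a Siamese network)."""
    s = sentence.replace(" ", "")
    return Counter(s[i:i + n] for i in range(len(s) - n + 1))

def cosine(a, b):
    dot = sum(a[k] * b[k] for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def classify(report_sentence, statute_sentence, threshold=0.5):
    """Flag a report sentence for review if it drifts from the statute wording."""
    sim = cosine(encode(report_sentence), encode(statute_sentence))
    return ("legal" if sim >= threshold else "review"), sim

label, sim = classify("the contract shall be awarded by open bidding",
                      "the contract shall be awarded by open competitive bidding")
print(label, round(sim, 3))
```

The key property shared with the paper's models is that both sentences pass through the same encoder before a single similarity score drives the legal/illegal decision.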

A Study on Knowledge Entity Extraction Method for Individual Stocks Based on Neural Tensor Network (뉴럴 텐서 네트워크 기반 주식 개별종목 지식개체명 추출 방법에 관한 연구)

  • Yang, Yunseok;Lee, Hyun Jun;Oh, Kyong Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.2
    • /
    • pp.25-38
    • /
    • 2019
  • Selecting high-quality information that meets the interests and needs of users from the overflowing content is becoming ever more important. In this flood of information, efforts are being made to better reflect the intention of the user in search results, rather than treating an information request as a simple string. Large IT companies such as Google and Microsoft also focus on developing knowledge-based technologies, including search engines, that provide users with satisfaction and convenience. Finance, in particular, is one of the fields expected to benefit from text data analysis, because it constantly generates new information, and the earlier the information is, the more valuable it is. Automatic knowledge extraction can be effective in areas such as the financial sector, where the information flow is vast and new information continues to emerge. However, automatic knowledge extraction faces several practical difficulties. First, it is difficult to build corpora from different fields with the same algorithm and to extract good-quality triples. Second, producing labeled text data by hand becomes harder as the extent and scope of knowledge increase and patterns are constantly updated. Third, performance evaluation is difficult due to the characteristics of unsupervised learning. Finally, defining the problem of automatic knowledge extraction is not easy because of the ambiguous conceptual characteristics of knowledge. To overcome the limits described above and to improve the semantic performance of stock-related information search, this study attempts to extract knowledge entities using a neural tensor network and to evaluate the performance of the result. Unlike other studies, the purpose of this study is to extract knowledge entities related to individual stock items.
Various but relatively simple data processing methods are applied in the presented model to solve the problems of previous research and to enhance the effectiveness of the model. This study thus has three contributions. First, it presents a practical and simple automatic knowledge extraction method that can be readily applied. Second, it demonstrates the possibility of performance evaluation through a simple problem definition. Finally, it increases the expressiveness of the knowledge by generating input data on a sentence basis without complex morphological analysis. The results of the empirical analysis and an objective performance evaluation method are also presented. For the empirical study confirming the usefulness of the presented model, experts' reports about 30 individual stocks, the top 30 items by publication frequency from May 30, 2017 to May 21, 2018, are used. The total number of reports is 5,600; 3,074 reports, about 55% of the total, are designated as the training set, and the remaining 45% as the testing set. Before constructing the model, all reports in the training set are classified by stock, and their entities are extracted using KKMA, a named entity recognition tool. For each stock, the top 100 entities by appearance frequency are selected and vectorized using one-hot encoding. Then, using the neural tensor network, one score function per stock is trained. Thus, when a new entity from the testing set appears, we can calculate its score with every score function, and the stock whose function yields the highest score is predicted as the item related to the entity. To evaluate the presented model, we confirm its predictive power and whether the score functions are well constructed by calculating the hit ratio over all reports in the testing set.
As a result of the empirical study, the presented model shows 69.3% hit accuracy on the testing set of 2,526 reports. This hit ratio is meaningfully high despite some constraints on the research. Looking at the prediction performance for each stock, only three stocks, LG ELECTRONICS, KiaMtr, and Mando, show performance far below average; this result may be due to interference from other similar items and the generation of new knowledge. In this paper, we propose a methodology to find the key entities, or combinations of them, needed to search related information in accordance with the user's investment intention. Graph data is generated using only the named entity recognition tool and applied to the neural tensor network without a learning corpus or word vectors for the field. The empirical test confirms the effectiveness of the presented model as described above. However, some limits remain; most notably, the especially poor performance on a few stocks shows the need for further research. Finally, through the empirical study, we confirmed that the learning method presented here can be used to match new text information semantically with related stocks.
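The scoring-and-argmax step above can be sketched with the standard Neural Tensor Network score function u^T tanh(e1^T W e2 + V[e1; e2] + b). This is a simplified reinterpretation, not the paper's exact architecture: the parameters are random rather than trained, the stock names are placeholders, and a dense random vector stands in for the one-hot entity encoding:

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 8, 4  # entity/stock vector size, number of tensor slices

def ntn_score(e1, e2, W, V, b, u):
    """NTN score: u^T tanh(e1^T W[i] e2 + V [e1; e2] + b), one slice per i."""
    bilinear = np.array([e1 @ W[i] @ e2 for i in range(k)])
    return float(u @ np.tanh(bilinear + V @ np.concatenate([e1, e2]) + b))

def random_params():
    # In the paper these would be learned; random here for the sketch.
    return (rng.normal(size=(k, d, d)), rng.normal(size=(k, 2 * d)),
            rng.normal(size=k), rng.normal(size=k))

stocks = ["STOCK_A", "STOCK_B", "STOCK_C"]           # placeholder tickers
stock_vecs = {s: rng.normal(size=d) for s in stocks}
stock_params = {s: random_params() for s in stocks}  # one score function per stock

entity = rng.normal(size=d)  # stand-in for a one-hot encoded entity
scores = {s: ntn_score(entity, stock_vecs[s], *stock_params[s]) for s in stocks}
predicted = max(scores, key=scores.get)  # stock with the highest-scoring function
print(predicted)
```

The hit-ratio evaluation then simply checks, per report, whether the argmax stock matches the report's actual stock.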

Latent topics-based product reputation mining (잠재 토픽 기반의 제품 평판 마이닝)

  • Park, Sang-Min;On, Byung-Won
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.2
    • /
    • pp.39-70
    • /
    • 2017
  • Data-driven analytics techniques have recently been applied to public surveys. Instead of simply gathering survey results or expert opinions to research the preference for a recently launched product, enterprises need a way to collect and analyze various types of online data and then accurately figure out customer preferences. In existing data-based survey methods, a sentiment lexicon for a particular domain is first constructed by domain experts, who judge the positive, neutral, or negative meanings of the words frequently used in the collected text documents. To research the preference for a particular product, the existing approach (1) collects review posts related to the product from several product review web sites; (2) extracts sentences (or phrases) from the collection after pre-processing steps such as stemming and stop-word removal; (3) classifies the polarity (positive or negative) of each sentence (or phrase) based on the sentiment lexicon; and (4) estimates the positive and negative ratios of the product by dividing the numbers of positive and negative sentences (or phrases) by the total number of sentences (or phrases) in the collection. Furthermore, the existing approach automatically finds important sentences (or phrases) carrying positive or negative meaning toward the product. As a motivating example, given a product like the Sonata made by Hyundai Motors, customers often want to see a summary note of the positive points in the 'car design' aspect as well as the negative points in the same aspect. They also want more useful information on other aspects such as 'car quality', 'car performance', and 'car service'. Such information will enable customers to make good choices when they attempt to purchase brand-new vehicles.
In addition, automobile makers will be able to figure out the preference and positive/negative points for new models on the market, and in the near future the weak points of the models can be improved based on the sentiment analysis. For this, the existing approach computes the sentiment score of each sentence (or phrase) and then selects the top-k sentences (or phrases) with the highest positive and negative scores. However, the existing approach has several shortcomings that limit its use in real applications: (1) The main aspects of a product (e.g., the design, quality, performance, and service of a Hyundai Sonata) are not considered. With sentiment analysis that ignores aspects, the summary reported to customers and car makers contains only the overall positive and negative ratios of the product and the top-k sentences (or phrases) with the highest sentiment scores in the entire corpus. This is not enough; the main aspects of the target product need to be considered in the sentiment analysis. (2) Since the same word has different meanings across different domains, a sentiment lexicon proper to each domain needs to be constructed, and an efficient way to construct it per domain is required because sentiment lexicon construction is labor-intensive and time-consuming. To address these problems, this article proposes a novel product reputation mining algorithm that (1) extracts topics hidden in review documents written by customers; (2) mines main aspects based on the extracted topics; (3) measures the positive and negative ratios of the product using the aspects; and (4) presents a digest in which a few important sentences with positive and negative meanings are listed for each aspect. Unlike the existing approach, using hidden topics lets experts construct the sentiment lexicon easily and quickly.
Furthermore, by reinforcing topic semantics, we can improve the accuracy of product reputation mining well beyond that of the existing approach. In the experiments, we collected large sets of review documents for domestic vehicles such as the K5, SM5, and Avante; measured the positive and negative ratios of the three cars; showed top-k positive and negative summaries per aspect; and conducted statistical analysis. Our experimental results clearly show the effectiveness of the proposed method compared with the existing method.
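The per-aspect positive/negative ratio in step (3) reduces to simple counting once each sentence has been assigned an aspect and a polarity. A toy sketch with invented aspect labels and polarities (the actual assignment comes from the topic model and sentiment lexicon):

```python
from collections import defaultdict

# Hypothetical pre-classified review sentences: (aspect, polarity).
classified = [
    ("design", "pos"), ("design", "pos"), ("design", "neg"),
    ("quality", "neg"), ("quality", "pos"),
    ("performance", "pos"), ("performance", "pos"),
]

# Tally polarity counts per aspect.
counts = defaultdict(lambda: {"pos": 0, "neg": 0})
for aspect, polarity in classified:
    counts[aspect][polarity] += 1

# Report positive/negative ratios per aspect.
for aspect, c in counts.items():
    total = c["pos"] + c["neg"]
    print(f"{aspect}: {c['pos'] / total:.0%} positive, {c['neg'] / total:.0%} negative")
```

The digest in step (4) would then attach the highest-scoring sentences of each polarity under each aspect heading.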

Establishment Status of the Mandatory Courses for the Qualification of Sensory Developmental Rehabilitation Specialist - Within Curriculums of Baccalaureate Occupational Therapy Programs (감각발달재활사 자격기준 관련 필수과목 개설현황 조사연구 - 4년제 작업치료학과를 중심으로)

  • Kim, Ji-Hyun
    • The Journal of Korean society of community based occupational therapy
    • /
    • v.7 no.3
    • /
    • pp.23-34
    • /
    • 2017
  • Objective : The purpose of this study was to investigate the establishment status of the mandatory courses designated by the Ministry of Health & Welfare for the qualification of sensory developmental rehabilitation specialist (SDRS) within the curriculums of baccalaureate occupational therapy (BOT) programs in Korea. Methods : This is a narrative study investigating and analyzing the relevant courses established in the curriculums of all 32 four-year occupational therapy (OT) programs. Results : 1) The shared mandatory subject, 'Understanding Children with Disabilities (UDC)', has been established at 9 schools. Among the branch mandatory subjects, 'Neuroscience (NS) or Neuroanatomy' has been established at all 32 schools, 'Sensory Processing Dysfunctions and Intervention (SPDI)' or 'Sensory Integration' at 31 schools, and 'Assessment & Evaluation for Children (AEC)' and 'Practicum of Sensory Rehabilitation (PSR)' at 7 schools each. 2) Counting both designated and alternative courses, all 32 schools were offering NS, SPDI, and AEC, but the number of schools offering the practicum course did not change because it had no alternative course. 3) In terms of the general provision score, 4 schools scored 7, 4 schools scored 6, 2 schools scored 5, 1 school scored 4, 2 schools scored 3, and 19 schools scored 2. Conclusion : The establishment of the mandatory courses required for the SDRS qualification among the BOT programs in the nation was investigated. Including alternative courses, all the branch mandatory courses except the practicum course are established at all 32 schools. However, the shared mandatory subject, UDC, and the practicum subject were established at only a few schools.
The provision-level evaluation of BOT programs for the SDRS qualification shows that many schools have already started the provision, but many schools' curriculums still do not reflect the requirements accurately. For schools planning successful accreditation in the near future, it is recommended that they prioritize establishing the shared mandatory course and the practicum course, since these two subjects are recognized as critical factors. In addition, comparative inspection of course titles and syllabi against the guideline provided by the Ministry of Health & Welfare is also needed.

A Double-Blind Comparison of Paroxetine and Amitriptyline in the Treatment of Depression Accompanied by Alcoholism : Behavioral Side Effects during the First 2 Weeks of Treatment (주정중독에 동반된 우울증의 치료에서 Paroxetine과 Amitriptyline의 이중맹 비교 : 치료초기 2주 동안의 행동학적 부작용)

  • Yoon, Jin-Sang;Yoon, Bo-Hyun;Choi, Tae-Seok;Kim, Yong-Bum;Lee, Hyung-Yung
    • Korean Journal of Biological Psychiatry
    • /
    • v.3 no.2
    • /
    • pp.277-287
    • /
    • 1996
  • Objective : It has been proposed that cognition and related aspects of mental functioning are decreased in depression as well as in alcoholism. The objective of the study was to compare behavioral side effects of paroxetine and amitriptyline in depressed patients accompanied by alcoholism. The focused comparisons were drug effects on psychomotor performance, cognitive function, sleep, and daytime sleepiness during the first 2 weeks of treatment. Methods : After an alcohol detoxification period (3 weeks) and a washout period (1 week), a total of 20 male inpatients with alcohol use disorder (DSM-IV), who also had a major depressive episode (DSM-IV), were treated double-blind with paroxetine 20mg/day (n=10) or amitriptyline 25mg/day (n=10) for 2 weeks. All patients were required to have a score of at least 18 on both the Hamilton Rating Scale for Depression (HAM-D) and the Beck Depression Inventory (BDI) at pre-drug baseline. Patients randomized to paroxetine received active medication in the morning and placebo in the evening, whereas those randomized to amitriptyline received active medication in the evening and placebo in the morning. All patients performed the various tasks in a test battery at baseline and at days 3, 7, and 14. The test battery included : critical flicker fusion threshold for sensory information processing capacity ; choice reaction time for gross psychomotor performance ; tracking accuracy and latency of response to a peripheral stimulus as a measure of fine sensorimotor co-ordination and divided attention ; and digit symbol substitution as a measure of sustained attention and concentration. To rate perceived sleep and daytime sleepiness, 10-cm line visual analogue scales were employed at baseline and at days 3, 7, and 14. The subjective rating scales were adapted for this study from the Leeds Sleep Evaluation Questionnaire and the Epworth Sleepiness Scale.
In addition, a comprehensive side effect assessment using the UKU side effect rating scale was carried out at baseline and at days 7 and 14. The efficacy of treatment was evaluated using the HAM-D, the BDI, and clinical global impressions for severity and improvement at days 7 and 14. Results : The pattern of results indicated that paroxetine improved performance on most of the test variables and also improved sleep, with no effect on daytime sleepiness over the study period. In contrast, amitriptyline disrupted performance on some tests and improved sleep with increased daytime sleepiness, in particular at day 3. On the UKU side effect rating scale, more side effects were registered on amitriptyline. Therapeutic efficacy was observed in favor of paroxetine as early as day 7. Conclusion : These results demonstrate that paroxetine is much better than amitriptyline for the treatment of depressed patients accompanied by alcoholism, at least in terms of behavioral safety and tolerability. Furthermore, the results may help explain the therapeutic outcome of paroxetine ; for example, an earlier onset of antidepressant action of paroxetine may be caused by early improvement in cognitive function or by contributing to good compliance with treatment.


Local Shape Analysis of the Hippocampus using Hierarchical Level-of-Detail Representations (계층적 Level-of-Detail 표현을 이용한 해마의 국부적인 형상 분석)

  • Kim Jeong-Sik;Choi Soo-Mi;Choi Yoo-Ju;Kim Myoung-Hee
    • The KIPS Transactions:PartA
    • /
    • v.11A no.7 s.91
    • /
    • pp.555-562
    • /
    • 2004
  • Both global volume reduction and local shape changes of the hippocampus within the brain indicate abnormal neurological states. Hippocampal shape analysis consists of two main steps: first, construct a hippocampal shape representation model; second, compute shape similarity from this representation. This paper proposes a novel method for the analysis of hippocampal shape using an integrated Octree-based representation containing meshes, voxels, and skeletons. First of all, we create multi-level meshes by applying the Marching Cubes algorithm to the hippocampal region segmented from MR images. This model is converted to an intermediate binary voxel representation, and we extract the 3D skeleton from these voxels using a slice-based skeletonization method. Then, to acquire a multiresolutional shape representation, we hierarchically store the meshes, voxels, and skeletons in the nodes of the Octree, and we extract sample meshes using a ray-tracing-based mesh sampling technique. Finally, as a similarity measure between shapes, we compute the L2 norm and the Hausdorff distance for each sampled mesh pair by shooting rays fired from the extracted skeleton. By using a mouse-picking interface for analyzing a local shape interactively, we provide interaction- and multiresolution-based analysis of local shape changes. Our experiments show that the approach is robust to rotation and scale, is especially effective in discriminating changes between local shapes of the hippocampus, and moreover increases the speed of analysis without degrading accuracy thanks to the hierarchical level-of-detail approach.
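The Hausdorff distance used as a similarity measure above takes, for each point of one sampled mesh, the distance to its nearest point on the other mesh, and then the worst case over both directions. A small sketch over toy 3D point sets (not the actual ray-sampled meshes):

```python
import math

def hausdorff(A, B):
    """Symmetric Hausdorff distance between two 3D point sets (brute force)."""
    def directed(X, Y):
        # For each point of X, the distance to its nearest point in Y; take the worst.
        return max(min(math.dist(x, y) for y in Y) for x in X)
    return max(directed(A, B), directed(B, A))

A = [(0, 0, 0), (1, 0, 0)]
B = [(0, 0, 0), (1, 0.5, 0)]
print(hausdorff(A, B))  # 0.5
```

A large Hausdorff distance between corresponding local samples flags exactly the kind of localized shape change the analysis is after, whereas an averaged L2-type measure can smooth such deviations out.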

Test-Retest Reliability of Attention Network Test Scores in Schizophrenia (조현병 환자가 시행한 주의력 네트워크 검사 점수의 검사-재검사 신뢰도)

  • Lee, Jae-Chang;Kim, Ji-Eun;Kim, Min-Young;Yang, Jisun;Han, Myung-Hun;Kwon, Hyukchan;Kim, Kiwoong;Lim, Sanghyun;Jung, Eun-eui;Kim, Ji-Woong;Im, Woo-Young;Lee, Sang-Min;Kim, Seung Jun
    • Korean Journal of Psychosomatic Medicine
    • /
    • v.25 no.2
    • /
    • pp.210-217
    • /
    • 2017
  • Objectives : Although the Attention Network Test (ANT) has been widely used to assess selective attention, including alerting, orienting, and conflict processing, data on its test-retest reliability are lacking for clinical populations. The objective of the current study was to investigate the test-retest reliability of the ANT in healthy controls and patients with schizophrenia. Methods : Fourteen patients with schizophrenia and 23 healthy controls participated in the study. They were tested with the ANT twice, with a 1-week interval. Test-retest reliability was analyzed with Pearson and intra-class correlations. Results : Patients with schizophrenia showed high test-retest correlations for mean reaction time, orienting effect, and conflict effect. They also showed moderate to high test-retest correlations for mean accuracy and moderate test-retest correlations for alerting effect and conflict error rate. Healthy controls, on the other hand, showed high test-retest correlations for mean reaction time and moderate to high test-retest correlations for conflict error rate. In addition, they showed moderate test-retest correlations for alerting effect, orienting effect, and conflict effect. Conclusions : The mean reaction time, alerting effect, orienting effect, conflict effect, and conflict error rate of the ANT showed acceptable test-retest reliability in healthy controls as well as patients with schizophrenia. Therefore, analysis of these reliable ANT measures is recommended for case-control studies in patients with schizophrenia.
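Test-retest reliability via Pearson correlation, as used above, can be sketched as follows; the reaction-time values are invented for illustration:

```python
import math

def pearson(x, y):
    """Pearson correlation between test and retest scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

test1 = [420, 515, 480, 600, 455]  # hypothetical mean reaction times (ms), session 1
test2 = [430, 520, 470, 590, 460]  # same participants, one week later
print(round(pearson(test1, test2), 3))
```

The intra-class correlation also used in the study additionally penalizes systematic shifts between sessions (e.g. uniform practice effects), which Pearson's r ignores.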