• Title/Abstract/Keyword: Part accuracy

Search results: 1,654 items (processing time: 0.035 seconds)

Local Shape Analysis of the Hippocampus using Hierarchical Level-of-Detail Representations (계층적 Level-of-Detail 표현을 이용한 해마의 국부적인 형상 분석)

  • Kim Jeong-Sik;Choi Soo-Mi;Choi Yoo-Ju;Kim Myoung-Hee
    • The KIPS Transactions: Part A / Vol. 11A, No. 7 / pp.555-562 / 2004
  • Both global volume reduction and local shape changes of the hippocampus within the brain indicate abnormal neurological states. Hippocampal shape analysis consists of two main steps: first, construct a hippocampal shape representation model; second, compute a shape similarity from this representation. This paper proposes a novel method for analyzing hippocampal shape using an integrated Octree-based representation containing meshes, voxels, and skeletons. First, we create multi-level meshes by applying the Marching Cube algorithm to the hippocampal region segmented from MR images. This model is converted to an intermediate binary voxel representation, and the 3D skeleton is extracted from these voxels using a slice-based skeletonization method. Then, to acquire a multiresolutional shape representation, we store the meshes, voxels, and skeletons hierarchically in the nodes of the Octree and extract sample meshes using a ray-tracing-based mesh sampling technique. Finally, as a similarity measure between shapes, we compute the L2 norm and the Hausdorff distance for each sampled mesh pair by shooting rays from the extracted skeleton. Because a mouse-picking interface is used to analyze a local shape interactively, we provide an interactive, multiresolution-based analysis of local shape changes. Our experiments show that the approach is robust to rotation and scale, is especially effective in discriminating changes between local shapes of the hippocampus, and moreover increases the speed of analysis without degrading accuracy by using a hierarchical level-of-detail approach.
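
The final similarity step (L2 norm and Hausdorff distance over sampled mesh pairs) can be illustrated with a minimal sketch; the pairing of the sampled points and the function name below are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch: L2 (RMS) distance between paired samples and the symmetric
# Hausdorff distance between two sampled point sets.
import numpy as np
from scipy.spatial.distance import cdist

def l2_and_hausdorff(points_a: np.ndarray, points_b: np.ndarray):
    """points_a, points_b: (N, 3) arrays of sampled surface points."""
    # L2 norm between corresponding samples (assumes equal-length, paired samples)
    l2 = np.sqrt(np.mean(np.sum((points_a - points_b) ** 2, axis=1)))

    # Symmetric Hausdorff distance between the two point sets
    d = cdist(points_a, points_b)          # pairwise Euclidean distances
    hausdorff = max(d.min(axis=1).max(),   # sup_a inf_b d(a, b)
                    d.min(axis=0).max())   # sup_b inf_a d(a, b)
    return l2, hausdorff
```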

Verifying Execution Prediction Model based on Learning Algorithm for Real-time Monitoring (실시간 감시를 위한 학습기반 수행 예측모델의 검증)

  • Jeong, Yoon-Seok;Kim, Tae-Wan;Chang, Chun-Hyon
    • The KIPS Transactions: Part A / Vol. 11A, No. 4 / pp.243-250 / 2004
  • Monitoring is used to see whether a real-time system provides its services on time. Generally, real-time monitoring focuses on investigating the current status of the system. To support stable performance of a real-time system, however, monitoring should provide not only a function to observe the current status of real-time processes but also a function to predict their executions. The legacy prediction model has several limitations when applied to real-time monitoring. First, it performs a static prediction only after a real-time process has finished. Second, it requires a statistical pre-analysis before prediction. Third, its transition probabilities and clustering data are not based on current data. We propose an execution prediction model based on a learning algorithm to solve these problems and apply it to real-time monitoring. This model removes unnecessary pre-processing and supports precise prediction based on current data. In addition, it supports multi-level prediction through a trend analysis of past execution data. Most of all, we designed the model to support dynamic prediction performed while a real-time process is executing. The experimental results show that the judgment accuracy is greater than 80% when the size of the training set is over 10 and, in the case of multi-level prediction, that the prediction difference is minimized when the number of executions is larger than the size of the training set. The proposed model still has limitations: it uses the simplest learning algorithm and does not consider a multi-regional space model managing CPU, memory, and I/O data. The execution prediction model based on a learning algorithm proposed in this paper can be used in areas related to real-time monitoring and control.
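
As an illustration only: the abstract does not specify its learning algorithm, so the sketch below stands in with a trivial sliding-window mean predictor and an assumed relative-error tolerance, to show how a "judgment accuracy" over a training set of a given size could be computed.

```python
# Illustrative stand-in for the (unspecified) "simplest learning algorithm":
# predict the next execution time from the last `window` observations and
# report the fraction of predictions within a relative tolerance.
from collections import deque

def predict_and_score(exec_times, window=10, tolerance=0.1):
    history = deque(maxlen=window)
    correct, total = 0, 0
    for t in exec_times:
        if len(history) == window:            # predict only once the training set is full
            prediction = sum(history) / window
            total += 1
            if abs(prediction - t) <= tolerance * t:
                correct += 1
        history.append(t)                      # update the model with the current execution
    return correct / total if total else None

# e.g. predict_and_score([10.2, 10.5, 9.8, 10.1, 10.4, 9.9, 10.3, 10.0, 10.2, 10.1, 10.5])
```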

Analysis on Longitudinal Dose according to Change of Field Width (선속 폭(Field Width) 변화에 따른 종축선량 분석)

  • Jung, Won-Seok;Back, Jong-Geal;Shin, Ryung-Mi;Oh, Byung-Cheon;Jo, Jun-Young;Kim, Gi-Chul;Choi, Tae-Gu
    • The Journal of Korean Society for Radiation Therapy / Vol. 23, No. 2 / pp.109-117 / 2011
  • Purpose: To analyze the accuracy of the tumor volume dose following a change in field width, to check the difference in dose change by using a self-made moving car, and to evaluate the tumor dose actually delivered when tomotherapy is used to treat organs influenced by breathing. Materials and Methods: Using the self-made moving car, different longitudinal movements (0.0 cm, 1.0 cm, 1.5 cm, 2.0 cm) were applied, the calculated dose was compared with the measured dose for each field width (1.05 cm, 2.50 cm, 5.02 cm), and the margin of error was determined. The degree of photosensitivity of the DQA film was then comparatively analyzed using Gafchromic EBT film, with dose profiles and gamma histograms used for the measurement. Results: For field widths of 1.05 cm, 2.50 cm, and 5.02 cm, the margin of error of the dose delivery coefficient was -2.00%, -0.39%, and -2.55%, respectively. In the dose-profile analysis of the Gafchromic EBT film, the larger the longitudinal motion of the moving car and the narrower the field width, the more the error in the high-dose region increased compared with the calculated dose. The narrower the field width, the more strongly the gamma index in the gamma histogram was influenced by the motion. Conclusion: We confirmed the difference in the longitudinal dose of a moving organ. To use a small field width while minimizing organ motion due to breathing, a breathing control unit and fixation tools need to be developed.
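
The gamma-histogram comparison of measured and calculated dose can be sketched with a standard one-dimensional gamma index; the 3%/3 mm criteria below are conventional assumptions and are not stated in the abstract.

```python
# Hedged sketch of a 1-D gamma analysis between a calculated (reference) and
# measured dose profile on the same position grid.
import numpy as np

def gamma_1d(pos, dose_ref, dose_meas, dose_tol=0.03, dta_mm=3.0):
    """pos: positions in mm; dose_ref/dose_meas: calculated and measured dose arrays.
    Returns the gamma value at each measured point (gamma <= 1 passes)."""
    gammas = np.empty_like(dose_meas, dtype=float)
    dmax = dose_ref.max()
    for i, (x_m, d_m) in enumerate(zip(pos, dose_meas)):
        dist2 = ((pos - x_m) / dta_mm) ** 2                   # distance-to-agreement term
        dose2 = ((dose_ref - d_m) / (dose_tol * dmax)) ** 2   # dose-difference term
        gammas[i] = np.sqrt((dist2 + dose2).min())
    return gammas

# pass_rate = (gamma_1d(pos, calc, meas) <= 1.0).mean()
```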


Extracting Beginning Boundaries for Efficient Management of Movie Storytelling Contents (스토리텔링 콘텐츠의 효과적인 관리를 위한 영화 스토리 발단부의 자동 경계 추출)

  • Park, Seung-Bo;You, Eun-Soon;Jung, Jason J.
    • Journal of Intelligence and Information Systems / Vol. 17, No. 4 / pp.279-292 / 2011
  • Movies are a representative medium that transmits stories to audiences. Basically, a story is carried by the characters in the movie. Unlike other simple videos, movies deploy narrative structures that explain the various conflicts or collaborations between characters. These narrative structures consist of three main acts: beginning, middle, and ending. The beginning act includes 1) the introduction of the main characters and backgrounds and 2) the implication of conflicts and clues for incidents. The middle act describes the events developed by both internal and external factors, and the story's dramatic tension heightens. Finally, in the ending act, the events that have developed are resolved, and the topic of the story and the message of the writer are conveyed. When story information is extracted from a movie, it must be considered that the information has different weights depending on the narrative structure; that is, extracted information influences the story deployment differently depending on whether it is located in the beginning, middle, or ending act. The beginning act is the part that exposes various information to audiences to set up the story, such as the setting of characters and the depiction of backgrounds, so it is necessary to extract many kinds of information from the beginning act in order to summarize a movie or retrieve character information. This paper therefore proposes a novel method for extracting the beginning boundary: a method that detects the boundary scene between the beginning act and the middle act using the accumulation graph of characters. The beginning act consists of scenes that introduce the important characters, imply the conflict relationships between them, and suggest clues for resolving troubles. First, the scene after which no new important characters appear is detected in order to find the scene that completes their introduction. The important characters are the major and minor characters, which are treated as important because they lead the story progression. Extras, meaning characters that appear in only a few scenes, are excluded from the accumulation graph in order to detect the scene that completes the introduction of the important characters. Second, the inflection point is detected in the accumulation graph of characters: the point where the increasing line changes to a horizontal line. That is, when the slope of the line stays at zero over a long run of scenes, the starting point of this zero-slope line becomes the inflection point. The inflection point is detected in the accumulation graph of characters without extras. Third, several additional scenes are considered as further story progression, such as conflict implication and clue suggestion. In practice, the movie story arrives at the scene located between the beginning and middle acts only after several additional scenes have elapsed following the introduction of the important characters. The ratio of these additional scenes to the total number of scenes was determined by experiment as 7.67%; the story inflection point at which the beginning act changes to the middle act is found by adding this ratio to the inflection point of the graph. Our proposed method consists of these three steps. We selected 10 movies of various genres for the experiments and, by measuring the accuracy of boundary detection, we show that the proposed method is effective.
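
A minimal sketch of the three-step idea, assuming an illustrative data layout (one set of character names per scene) and an assumed flat-run threshold for detecting the inflection point; only the 7.67% additional-scene ratio comes from the paper.

```python
# Sketch: accumulate distinct important characters per scene, find the point where
# the curve goes flat (inflection), then shift by the additional-scene ratio.
def find_beginning_boundary(scene_characters, extras, flat_run=5, extra_ratio=0.0767):
    """scene_characters: list of sets of character names per scene (in story order);
    extras: set of characters appearing in only a few scenes (excluded)."""
    seen, curve = set(), []
    for chars in scene_characters:
        seen |= (chars - extras)               # ignore extras in the accumulation graph
        curve.append(len(seen))

    # inflection point: first scene that starts a long run with zero slope
    inflection = len(curve) - 1
    for i in range(len(curve) - flat_run):
        if curve[i + flat_run] == curve[i]:
            inflection = i
            break

    # shift forward by the ratio of additional scenes (conflict implication, clues)
    boundary = inflection + round(extra_ratio * len(scene_characters))
    return min(boundary, len(scene_characters) - 1)
```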

Inexpensive Visual Motion Data Glove for Human-Computer Interface Via Hand Gesture Recognition (손 동작 인식을 통한 인간 - 컴퓨터 인터페이스용 저가형 비주얼 모션 데이터 글러브)

  • Han, Young-Mo
    • The KIPS Transactions: Part B / Vol. 16B, No. 5 / pp.341-346 / 2009
  • The motion data glove is a representative human-computer interaction tool that inputs human hand gestures to computers by measuring their motions. It is essential equipment for new computer technologies including home automation, virtual reality, biometrics, and motion capture. For its popular use, this paper attempts to develop an inexpensive visual-type motion data glove that can be used without any special equipment. The proposed approach has a special feature: it can be produced at low cost because it does not use the high-cost motion-sensing fibers used in conventional approaches, which makes easy production and popular use possible. The approach adopts a visual method, obtained by improving conventional optical motion-capture technology, instead of a mechanical method using motion-sensing fibers. Compared to conventional visual methods, the proposed method has the following advantages and original features. First, conventional visual methods use many cameras and much equipment to reconstruct the 3D pose while eliminating occlusions, whereas the proposed method adopts a mono-vision approach that allows simple and low-cost equipment. Second, conventional mono-vision methods have difficulty reconstructing the 3D pose of occluded parts in images because they are weak against occlusions, whereas the proposed approach can reconstruct occluded parts by using originally designed thin-bar-shaped optic indicators. Third, many conventional methods use nonlinear numerical image-analysis algorithms, which are inconvenient in terms of initialization and computation time; the proposed method avoids this by using a closed-form image-analysis algorithm obtained from an original formulation. Fourth, many conventional closed-form algorithms use approximations in their formulation processes, so they suffer from low accuracy and limited applicability due to singularities; the proposed method improves on this through an original formulation technique in which a closed-form algorithm is derived using exponential-form twist coordinates instead of approximations or local parameterizations such as Euler angles.
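
For background, the exponential map that turns twist coordinates into a rigid-body pose can be written in closed form as below; this sketch only illustrates the exponential-twist notation the paper refers to, not the paper's image-analysis algorithm itself.

```python
# Closed-form SE(3) exponential: twist coordinates (rho, phi) -> 4x4 pose matrix.
import numpy as np

def skew(w):
    return np.array([[0, -w[2], w[1]],
                     [w[2], 0, -w[0]],
                     [-w[1], w[0], 0]])

def twist_exp(rho, phi):
    """rho: translational part (3,); phi: rotational part (3,) = angle * unit axis."""
    theta = np.linalg.norm(phi)
    W = skew(phi)
    if theta < 1e-9:                           # small-angle limit
        R, V = np.eye(3) + W, np.eye(3)
    else:
        R = (np.eye(3) + np.sin(theta) / theta * W
             + (1 - np.cos(theta)) / theta**2 * W @ W)       # Rodrigues' formula
        V = (np.eye(3) + (1 - np.cos(theta)) / theta**2 * W
             + (theta - np.sin(theta)) / theta**3 * W @ W)   # left Jacobian of SO(3)
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, V @ np.asarray(rho, dtype=float)
    return T
```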

A Document Collection Method for More Accurate Search Engine (정확도 높은 검색 엔진을 위한 문서 수집 방법)

  • Ha, Eun-Yong;Gwon, Hui-Yong;Hwang, Ho-Yeong
    • The KIPS Transactions: Part A / Vol. 10A, No. 5 / pp.469-478 / 2003
  • Internet search engines using web robots visit servers connected to the Internet periodically or non-periodically. They extract and classify the collected data according to their own methods and construct their databases, which are the basis of web information search engines. These procedures are repeated very frequently on the Web. Many search engine sites operate this processing strategically in order to become popular Internet portal sites that provide users with ways to find information on the web. A web search engine contacts thousands of web servers, maintains its existing databases, and navigates to obtain data about newly connected web servers. These jobs, however, are decided and conducted by the search engines alone: they run web robots to collect data from web servers without any knowledge of the state of those servers. Each search engine issues many requests and receives responses from web servers, which is one cause of increased Internet traffic. If each web server instead notified web robots with a summary of its public documents, and each web robot used this summary to collect only the corresponding documents, the unnecessary Internet traffic would be eliminated, the accuracy of the data held by search engines would become higher, and the processing overhead of web-related jobs on both web servers and search engines would become lower. In this paper, a monitoring system on the web server is designed and implemented; it monitors the state of the documents on the web server, summarizes the changes of modified documents, and sends the summary information to web robots that want to get documents from the web server. An efficient web robot for the web search engine is also designed and implemented, which uses the notified summary, gets the corresponding documents from the web servers, extracts the index, and updates its databases.
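
A hypothetical sketch of the summary-driven collection idea: the abstract does not define the summary format or transport, so the JSON layout, URL path, and field names below are assumptions.

```python
# Sketch: a robot fetches a server-side change summary and re-collects only the
# documents modified since its last crawl.
import json
import urllib.request

def fetch_changed_documents(server: str, last_crawl_time: float):
    """server: base URL, e.g. "http://example.com"; last_crawl_time: Unix timestamp."""
    with urllib.request.urlopen(f"{server}/robot-summary.json") as resp:
        summary = json.load(resp)   # e.g. [{"path": "/a.html", "modified": 1690000000.0}, ...]

    changed = [entry["path"] for entry in summary if entry["modified"] > last_crawl_time]
    documents = {}
    for path in changed:
        with urllib.request.urlopen(f"{server}{path}") as resp:
            documents[path] = resp.read()      # re-index only what actually changed
    return documents
```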

Identification of Flavonoids from Extracts of Opuntia ficus-indica var. saboten and Content Determination of Marker Components Using HPLC-PDA (손바닥선인장 추출물의 플라보노이드 구조 규명 및 HPLC-PDA를 이용한 지표성분의 함량 분석)

  • Park, Seungbae;Kang, Dong Hyeon;Jin, Changbae;Kim, Hyoung Ja
    • Journal of the Korean Society of Food Science and Nutrition / Vol. 46, No. 2 / pp.210-219 / 2017
  • This study aimed to establish an optimal extraction process and a high-performance liquid chromatography (HPLC)-photodiode array (PDA) analytical method for the determination of the marker compounds dihydrokaempferol (DHK) and 3-O-methylquercetin (3-MeQ), as part of materials standardization for the development of health functional foods from stems of Opuntia ficus-indica var. saboten (OFS). The quantitative determination of the marker compounds was optimized by HPLC analysis, and the correlation coefficient of the calibration curve showed very good linearity. The HPLC-PDA method was applied successfully to the quantification of the marker compounds in OFS after validation of the method in terms of linearity, accuracy, and precision. Ethanolic extracts from stems of O. ficus-indica var. saboten (OFSEs) were prepared by reflux extraction at 70 and 80°C with 50, 70, and 80% ethanol for 3, 4, 5, and 6 h. Among the OFSEs, OFS70E extracted at 80°C showed the highest contents of DHK and 3-MeQ, 26.42±0.65 and 3.88±0.29 mg/100 g OFS, respectively. Furthermore, the antioxidant activities of the OFSEs were determined by measuring 1,1-diphenyl-2-picrylhydrazyl (DPPH) radical scavenging and lipid peroxidation (LPO) inhibitory activities in rat liver homogenate. OFS70E extracted at 70°C showed the most potent antioxidant activities, with IC50 values of 1.19±0.11 and 0.89±0.09 mg/mL in the DPPH radical scavenging and LPO inhibitory assays, respectively. To identify the active components of OFS, various chromatographic separations of OFS70E led to the isolation of 11 flavonoids: dihydrokaempferol, dihydroquercetin, 3-O-methylquercetin, quercetin, isorhamnetin 3-O-glucoside, isorhamnetin 3-O-galactoside, narcissin, kaempferol 7-O-glucoside, quercetin 3-O-galactoside, isorhamnetin, and kaempferol 3-O-rutinoside. The results suggest that standardization of DHK in OFSEs using HPLC-PDA analysis would be an acceptable method for the development of health functional foods.
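
A minimal sketch of the calibration-curve step used for quantification; the standard concentrations and units are placeholders, and the linearity check shown (correlation coefficient of a least-squares fit) is the usual approach rather than the paper's exact validation protocol.

```python
# Sketch: fit peak area vs. standard concentration, check linearity, quantify a sample.
import numpy as np

def calibrate_and_quantify(std_conc, std_area, sample_area):
    """std_conc (e.g. ug/mL) and std_area: calibration standards; sample_area: unknown."""
    slope, intercept = np.polyfit(std_conc, std_area, 1)   # linear calibration curve
    r = np.corrcoef(std_conc, std_area)[0, 1]              # correlation coefficient
    concentration = (sample_area - intercept) / slope      # back-calculate the sample
    return concentration, r

# e.g. calibrate_and_quantify([1, 5, 10, 50, 100], [12.1, 60.4, 121.0, 600.2, 1203.5], 350.0)
```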

Habitat Distribution Change Prediction of Asiatic Black Bears (Ursus thibetanus) Using Maxent Modeling Approach (Maxent 모델을 이용한 반달가슴곰의 서식지 분포변화 예측)

  • Kim, Tae-Geun;Yang, DooHa;Cho, YoungHo;Song, Kyo-Hong;Oh, Jang-Geun
    • Korean Journal of Ecology and Environment / Vol. 49, No. 3 / pp.197-207 / 2016
  • This study aims to provide basic data for objectively evaluating areas suitable for reintroduction of the Asiatic black bear (Ursus thibetanus), in order to effectively preserve Asiatic black bears in Korean protected areas, including national parks, and to make species restoration succeed. To this end, the study predicted the potential habitats in East Asia, Southeast Asia, and India, where records of Asiatic black bear occurrence exist, using the Maxent model and environmental variables related to climate, topography, roads, and land use. The study also evaluated the effects of the relevant climatic and environmental variables, and analyzed the extent of the habitat range suitable for Asiatic black bears and its geographic change under future climate change. The judgment accuracy of the Maxent model, which is widely used in habitat distribution research on wildlife of conservation concern, yielded an AUC value of 0.893 (sd=0.121). The model was useful for predicting the Asiatic black bear's potential habitat and for evaluating habitat change under future climate change. Compared with the distribution map of Asiatic black bears assessed by the IUCN, the habitat suitability produced by the Maxent model was regionally diverse in extant areas and low in the areas marked extinct on the IUCN map, which may reflect regional differences in the environmental conditions in which Asiatic black bears live. Among the factors affecting the potential habitat distribution, land cover type had the greatest effect on habitat suitability, compared with climate, topography, and anthropogenic factors such as distance from roads. In particular, deciduous broadleaf forest was predicted to be preferred over other land cover types. Annual mean precipitation and precipitation during the driest period were projected to have a greater effect than the annual range of temperature, and habitat suitability was higher the farther an area was from roads, presumably because Asiatic black bears prefer stable areas without human intervention as well as abundant prey resources. The habitat range was predicted to expand gradually to the southern part of India, China's southeast coast and adjacent inland areas, and Vietnam, Laos, and Malaysia along the eastern coastal areas of Southeast Asia. The following areas are forecast to be the core areas where Asiatic black bears can live in the Asian region: the Jeonnam, Jeonbuk, and Gangwon areas in South Korea; the border areas of Kyushu, Chugoku, Shikoku, Chubu, Kanto, and Tohoku in Japan; and the border area of Jiangxi, Zhejiang, and Fujian in China. This study is expected to serve as basic data for the preservation and efficient management of Asiatic black bear habitat, for selecting release areas for artificially introduced individuals, and for managing zones of conflict with humans.
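
The reported discrimination measure (AUC = 0.893) can be sketched as a rank-based estimate over presence and background suitability scores; the scores would come from a fitted Maxent model, which is not reproduced here.

```python
# Sketch: Mann-Whitney estimate of AUC from model suitability scores.
def auc(presence_scores, background_scores):
    """Probability that a random presence site scores higher than a random
    background site (ties count as 0.5)."""
    wins = 0.0
    for p in presence_scores:
        for b in background_scores:
            if p > b:
                wins += 1.0
            elif p == b:
                wins += 0.5
    return wins / (len(presence_scores) * len(background_scores))

# e.g. auc([0.81, 0.74, 0.92], [0.22, 0.35, 0.60, 0.48])
```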

The Optimization of Reconstruction Method Reducing Partial Volume Effect in PET/CT 3D Image Acquisition (PET/CT 3차원 영상 획득에서 부분용적효과 감소를 위한 재구성법의 최적화)

  • Hong, Gun-Chul;Park, Sun-Myung;Kwak, In-Suk;Lee, Hyuk;Choi, Choon-Ki;Seok, Jae-Dong
    • The Korean Journal of Nuclear Medicine Technology / Vol. 14, No. 1 / pp.13-17 / 2010
  • Purpose: The partial volume effect (PVE) is a phenomenon that lowers image accuracy through underestimation and occurs in PET/CT 3D image acquisition. The lower the resolution and the smaller the lesion, the larger the resulting error, which can influence the test result. We studied the optimal image reconstruction method by varying the parameters that can influence the PVE. Materials and Methods: Images were acquired for 10 minutes on a GE Discovery STE 16 using a NEMA 2001 IEC phantom, with 18F-FDG injected into the hot spheres of each size and the background at a 4:1 ratio. Iterative reconstruction was used, varying the number of iterations from 2 to 50 and the subset number from 1 to 56. For the analysis, a fixed region of interest was placed on the detailed part of the image, and the % difference and signal-to-noise ratio (SNR) were computed using SUVmax. Results: For the 10 mm sphere, with the number of iterations fixed and the subset number changed to 2, 5, 8, 20, and 56, the SNR was 0.19, 0.30, 0.40, 0.48, and 0.45, and the total SNR over all spheres was 2.73, 3.38, 3.64, 3.63, and 3.38, respectively. Conclusion: From 6 to 20 iterations, the % difference and SNR showed similar values (3.47±0.09). Above 20 iterations, SUVmax was increasingly underestimated because of the influence of noise. In addition, at the same number of iterations, the SNR was high for subset numbers of 8 to 20. Therefore, considering reconstruction time, the partial volume effect for small lesions can be reduced by using 6 iterations and a subset number of 8 to 20.
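
A hedged sketch of the two reported metrics: the abstract does not define its % difference and SNR formulas, so the definitions below (relative error of SUVmax against the known 4:1 sphere-to-background ratio, and a contrast-over-background-noise SNR) are assumptions for illustration only.

```python
# Sketch: metrics for one hot sphere in the NEMA IEC phantom (4:1 sphere-to-background).
def pve_metrics(suv_max_sphere, suv_mean_background, suv_sd_background, true_ratio=4.0):
    expected = true_ratio * suv_mean_background
    pct_difference = (suv_max_sphere - expected) / expected * 100.0   # underestimation => negative
    snr = (suv_max_sphere - suv_mean_background) / suv_sd_background  # contrast-to-noise style SNR
    return pct_difference, snr

# e.g. pve_metrics(suv_max_sphere=3.1, suv_mean_background=1.0, suv_sd_background=0.12)
```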


A Review on Ultimate Lateral Capacity Prediction of Rigid Drilled Shafts Installed in Sand (사질토에 설치된 강성현장타설말뚝의 극한수평지지력 예측에 관한 재고)

  • Cho Nam Jun;Kulhawy F.H
    • Journal of the Korean Geotechnical Society / Vol. 21, No. 2 / pp.113-120 / 2005
  • An understanding of soil-structure interaction is the key to the rational and economical design of laterally loaded drilled shafts. Although extensive research on the behavior of deep foundations subjected to lateral loads has been conducted for several decades, it is very difficult to formulate the ultimate lateral capacity into a general equation because of the inherent soil nonlinearity and nonhomogeneity and the complexity added by the three-dimensional and asymmetric nature of the problem. This study reviews the four best-known methods (i.e., Reese, Broms, Hansen, and Davidson) among the many design methods, according to the specific site conditions, the drilled shaft geometric characteristics (D/B ratios), and the loading conditions. The hyperbolic lateral capacities (Hh), interpreted by the hyperbolic transformation of the load-displacement curves obtained from model tests carried out as part of this research, were compared with the ultimate lateral capacities (Hu) predicted by the four methods. The Hu/Hh ratios from Reese's and Hansen's methods are 0.966 and 1.015, respectively, which shows that both methods yield results very close to the test results. Whereas the Hu predicted by Davidson's method is larger than Hh by about 30%, the C.O.V. of the capacities predicted by Davidson is the smallest among the four. Broms' method, the simplest of the four, gives Hu/Hh = 0.896; it estimates the ultimate lateral capacity lower than the others because some sources of resistance against lateral loading are neglected, but it turns out to be one of the most reliable methods, with the smallest S.D. in predicting the ultimate lateral capacity. In conclusion, none of the four methods is superior to the others in terms of the accuracy of predicting the ultimate lateral capacity; moreover, regardless of how sophisticated or complicated the calculation procedures are, the reliability of the lateral capacity predictions appears to be a separate issue.
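
A minimal sketch of the hyperbolic transformation used to interpret the model-test load-displacement curves, assuming the common form in which y/H is fitted linearly against y and the asymptote 1/slope is taken as the hyperbolic capacity Hh.

```python
# Sketch: hyperbolic model H = y / (a + b*y); the transformed curve y/H = a + b*y is
# fitted linearly, and the asymptote 1/b is the hyperbolic lateral capacity Hh.
import numpy as np

def hyperbolic_capacity(displacement, load):
    """displacement y and lateral load H from a model test (equal-length arrays).
    Exclude the zero-load origin before calling, to avoid division by zero."""
    y = np.asarray(displacement, dtype=float)
    H = np.asarray(load, dtype=float)
    b, a = np.polyfit(y, y / H, 1)   # slope b and intercept a of the transformed curve
    return 1.0 / b                    # asymptotic (hyperbolic) lateral capacity Hh
```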