• Title/Summary/Keyword: software system

A Study on a Basic Model for GIS Audit, Based on Various Types of GIS Projects (GIS 사업유형을 고려한 GIS 감리의 기반 모델 연구)

  • Koh, Kwang-Chul;Kim, Eun-Hyung
    • Journal of Korea Spatial Information System Society / v.2 no.2 s.4 / pp.5-23 / 2000
  • Since 1995, national and local governments have competitively initiated many large GIS projects, and auditing these projects has become an important issue. So far, audits in the Information Technology (IT) area have tried to address this issue, but they have proven ineffective for successful GIS project management. Effective auditing is a critical element of project management. In order to establish a proper audit model for GIS projects and to promote auditing activities within them, this study constructs and tests two hypotheses: 1) a sound GIS audit model requires identifying the unique characteristics of GIS projects, the audit items, and the scope of the audit; and 2) the scope of the audit needs to be classified according to the requirements of the tasks in the projects. To test these hypotheses, this study analyzes the positive aspects of audits in IT and construction projects, clarifies the audit items in GIS projects by comparison with them, and classifies the scope of the GIS audit according to the various types of GIS projects. As a result, five types of GIS audit are identified: (1) audit for project management, (2) audit focused on IT, (3) audit characterized by GIS technologies, (4) GIS database audit, and (5) consulting services for critical problems in the projects. In addition, four criteria for classifying GIS projects are suggested for the GIS audit: domain, scope, duration, and GIS application technologies. In particular, the GIS technology considered in this study includes GIS software, methodologies for GIS development, GIS databases, and quality control of GIS data, which are not usually reflected in existing studies on GIS audit. Because the GIS audit depends on the type of GIS project, the scope of the audit can be flexibly reconstructed in accordance with the project type. This is a key to effective and realistic audits for future GIS projects. Strategies for effective GIS audit are also proposed in terms of GIS project management, goal establishment in each audit stage, documentation from the GIS audit, timing strategies for intensive GIS audit, and team structure design.

Effect of Implant Types and Bone Resorption on the Fatigue Life and Fracture Characteristics of Dental Implants (임플란트 형태와 골흡수가 임플란트 피로 수명 및 파절 특성에 미치는 효과에 관한 연구)

  • Won, Ho-Yeon;Choi, Yu-Sung;Cho, In-Ho
    • Journal of Dental Rehabilitation and Applied Science / v.26 no.2 / pp.121-143 / 2010
  • To investigate the effect of implant types and bone resorption on fracture characteristics, four types of Osstem$^{(R)}$ implants were chosen and classified into external parallel, internal parallel, external tapered, and internal tapered groups. Finite element analysis was conducted with ANSYS Multiphysics software. The fatigue fracture test was performed by connecting the mold to a dynamic-load fatigue testing machine with a maximum load of 600 N and a minimum load of 60 N. The entire fatigue test was performed at a frequency of 14 Hz, and fractured specimens were observed with a Hitachi S-3000 H scanning electron microscope. The results were as follows: 1. In the fatigue test of the group with 2 mm of exposed implant, the tapered and externally connected types had higher fatigue life. 2. In the fatigue test of the group with 4 mm of exposed implant, the parallel and externally connected types had higher fatigue life. 3. The fracture patterns of all implant systems exposed by 4 mm appeared transversely near the dead space of the fixture. At an exposure level of 2 mm, all internally connected implant systems fractured transversely at the platform of the fixture facing the abutment, but externally connected ones fractured at the fillet of the abutment body and the hex of the fixture, or near the dead space of the fixture. 4. Many fatigue striations were observed near the crack initiation and propagation sites. Cleavage with facet or dimple fractures appeared at the final fracture sites. 5. The effective stress at the buccal site under compressive stress is higher than that at the lingual site under tensile stress, and the effective stress acting on the fixture is higher than that on the abutment screw. The maximum effective stress acting on parallel-type fixtures is also higher. Care should therefore be taken when using internally connected implant systems in the posterior area.

Optimal Selection of Classifier Ensemble Using Genetic Algorithms (유전자 알고리즘을 이용한 분류자 앙상블의 최적 선택)

  • Kim, Myung-Jong
    • Journal of Intelligence and Information Systems / v.16 no.4 / pp.99-112 / 2010
  • Ensemble learning is a method for improving the performance of classification and prediction algorithms. It finds a highly accurate classifier on the training set by constructing and combining an ensemble of weak classifiers, each of which needs only to be moderately accurate on the training set. Ensemble learning has received considerable attention in the machine learning and artificial intelligence fields because of its remarkable performance improvement and flexible integration with traditional learning algorithms such as decision trees (DT), neural networks (NN), and support vector machines (SVM). In that research, DT ensemble studies have consistently demonstrated impressive improvements in the generalization behavior of DT, while NN and SVM ensemble studies have not shown gains as remarkable as those of DT ensembles. Recently, several works have reported that the performance of an ensemble can be degraded when its classifiers are highly correlated, resulting in a multicollinearity problem that leads to performance degradation of the ensemble; they have also proposed differentiated learning strategies to cope with this problem. Hansen and Salamon (1990) argued that it is necessary and sufficient for performance enhancement that the ensemble contain diverse classifiers. Breiman (1996) found that ensemble learning can increase the performance of unstable learning algorithms, but does not show remarkable improvement for stable learning algorithms. Unstable learning algorithms such as decision tree learners are sensitive to changes in the training data, so small changes in the training data can yield large changes in the generated classifiers; ensembles of unstable learners can therefore guarantee some diversity among the classifiers. In contrast, stable learning algorithms such as NN and SVM generate similar classifiers in spite of small changes in the training data, so the correlation among the resulting classifiers is very high. This high correlation results in a multicollinearity problem, which leads to performance degradation of the ensemble. Kim's work (2009) compared traditional prediction algorithms such as NN, DT, and SVM for bankruptcy prediction on Korean firms. It reports that the stable learning algorithms NN and SVM have higher predictability than the unstable DT, while in ensemble learning the DT ensemble shows more improved performance than the NN and SVM ensembles. Further analysis with the variance inflation factor (VIF) empirically shows that the performance degradation of the ensemble is due to multicollinearity, and it proposes that optimization of the ensemble is needed to cope with this problem. This paper proposes a hybrid system for coverage optimization of NN ensembles (CO-NN) in order to improve the performance of NN ensembles. Coverage optimization is a technique for choosing a sub-ensemble from an original ensemble to guarantee the diversity of the classifiers. CO-NN uses a genetic algorithm (GA), which has been widely used for various optimization problems, to deal with the coverage optimization problem. The GA chromosomes for coverage optimization are encoded as binary strings, each bit of which indicates an individual classifier. The fitness function is defined as maximization of error reduction, and a constraint on the variance inflation factor (VIF), one of the generally used measures of multicollinearity, is added to ensure the diversity of the classifiers by removing high correlation among them. We use Microsoft Excel and the GA software package Evolver. Experiments on company failure prediction have shown that CO-NN effectively achieves stable performance enhancement of NN ensembles by choosing classifiers while considering the correlations within the ensemble. Classifiers with potential multicollinearity problems are removed by the coverage optimization process, and CO-NN thereby shows higher performance than a single NN classifier and an NN ensemble at the 1% significance level, and than a DT ensemble at the 5% significance level. However, further research issues remain. First, a decision optimization process to find the optimal combination function should be considered in future research. Second, various learning strategies to deal with data noise should be introduced in more advanced future research.
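
The abstract describes the optimization only at a high level; the paper itself used Microsoft Excel with the Evolver GA package. As a rough illustration of the idea, the following Python sketch encodes the sub-ensemble as a binary chromosome, hand-rolls the VIF computation, and treats `preds` (a validation-set matrix of per-classifier predicted probabilities), majority voting, and all GA parameters as assumptions rather than the authors' actual settings.

```python
import numpy as np

def vif(preds):
    """Variance inflation factor for each column of `preds`
    (each column = one classifier's outputs on a validation set)."""
    vifs = []
    for i in range(preds.shape[1]):
        y = preds[:, i]
        X = np.column_stack([np.ones(len(preds)), np.delete(preds, i, axis=1)])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        r2 = 1.0 - (y - X @ beta).var() / y.var()
        vifs.append(1.0 / max(1.0 - r2, 1e-9))
    return np.array(vifs)

def fitness(mask, preds, y_true, vif_limit=10.0):
    """Accuracy of the selected sub-ensemble (majority vote), with a VIF constraint."""
    idx = np.flatnonzero(mask)
    if len(idx) < 2:
        return 0.0
    sub = preds[:, idx]
    if vif(sub).max() > vif_limit:                 # reject highly collinear subsets
        return 0.0
    vote = (sub.mean(axis=1) >= 0.5).astype(int)   # simple majority vote
    return float((vote == y_true).mean())

def select_sub_ensemble(preds, y_true, pop_size=30, generations=50, p_mut=0.05, seed=0):
    """Binary-encoded GA: each bit switches one classifier in or out of the ensemble."""
    rng = np.random.default_rng(seed)
    n = preds.shape[1]
    pop = rng.integers(0, 2, size=(pop_size, n))
    for _ in range(generations):
        fit = np.array([fitness(ind, preds, y_true) for ind in pop])
        parents = pop[np.argsort(fit)][-pop_size // 2:]       # keep the fitter half
        children = []
        while len(children) < pop_size:
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n)                          # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            child ^= (rng.random(n) < p_mut)                  # bit-flip mutation
            children.append(child)
        pop = np.array(children)
    best = max(pop, key=lambda ind: fitness(ind, preds, y_true))
    return np.flatnonzero(best)                               # indices of kept classifiers
```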

An Efficient Heuristic for Storage Location Assignment and Reallocation for Products of Different Brands at Internet Shopping Malls for Clothing (의류 인터넷 쇼핑몰에서 브랜드를 고려한 상품 입고 및 재배치 방법 연구)

  • Song, Yong-Uk;Ahn, Byung-Hyuk
    • Journal of Intelligence and Information Systems / v.16 no.2 / pp.129-141 / 2010
  • An Internet shopping mall for clothing operates a warehouse for packing and shipping products to fulfill its orders. All products in the warehouse are put into boxes of the same brand, and the boxes are stored in a row on shelves equipped in the warehouse. To make picking and management easy, boxes of the same brand are located side by side on the shelves. When new products arrive at the warehouse for storage, the products of a brand are put into boxes and those boxes are placed adjacent to the existing boxes of the same brand. If there is not enough space for the incoming boxes, however, some boxes of other brands must be moved away, and the incoming boxes are then placed adjacently in the resulting vacant spaces. We want to minimize the movement of the existing boxes of other brands to other places on the shelves during the warehousing of incoming boxes, while keeping all boxes of the same brand side by side on the shelves. First, we define the adjacency of boxes by viewing the shelves as a one-dimensional series of spaces (cells) for storing boxes, tagging the series of cells with a series of numbers starting from one, and considering any two boxes stored in the cells to be adjacent if their cell numbers are consecutive. We then formulate the problem as an integer programming model to obtain an optimal solution. An integer programming formulation with a branch-and-bound technique may not be tractable for this problem, because solving it would take too long given the number of cells and boxes in the warehouse and the computing power available to the Internet shopping mall. As an alternative, we designed a fast heuristic for this reallocation problem by focusing only on the unused spaces (empty cells) on the shelves, which results in an assignment problem model. In this approach, the incoming boxes are assigned to empty cells and then reorganized so that the boxes of a brand are adjacent to each other. The objective of this new approach is to minimize the movement of boxes during the reorganization process while keeping the boxes of a brand adjacent to each other. The approach, however, does not ensure the optimality of the solution for the original problem, that is, minimizing the movement of existing boxes while keeping boxes of the same brand adjacent. Even though this heuristic may produce a suboptimal solution, we could obtain a satisfactory solution within a satisfactory time, acceptable to real-world experts. To assess the quality of the heuristic solutions, we randomly generate 100 problems in which the number of cells ranges from 2,000 to 4,000, solve them both with our heuristic and with the original integer programming approach using a commercial optimization software package, and then compare the heuristic solutions with the corresponding optimal solutions in terms of solution time and the number of box movements. We also implement our heuristic approach in a storage location assignment system for the Internet shopping mall.
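
The assignment-problem reformulation can be sketched compactly. The cost definition below (distance from an empty cell to the existing block of the box's brand) and all data are hypothetical, and the second pass that makes same-brand boxes contiguous is omitted; this is only an outline of the modeling idea, not the authors' heuristic.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def assign_boxes_to_empty_cells(empty_cells, new_box_brands, brand_anchor):
    """Assign each incoming box to an empty cell.  Illustrative cost: distance
    from the cell to the existing block of the box's brand; the follow-up pass
    that makes same-brand boxes contiguous is not shown here."""
    cost = np.zeros((len(new_box_brands), len(empty_cells)))
    for i, brand in enumerate(new_box_brands):
        for j, cell in enumerate(empty_cells):
            cost[i, j] = abs(cell - brand_anchor.get(brand, cell))
    rows, cols = linear_sum_assignment(cost)   # optimal assignment on this cost matrix
    return [(i, new_box_brands[i], empty_cells[j]) for i, j in zip(rows, cols)]

# Tiny example with hypothetical shelf data: cells are numbered from 1 upward.
empty = [3, 4, 9, 10]                 # currently empty cells
incoming = ["A", "A", "B"]            # brands of the incoming boxes
anchors = {"A": 2, "B": 11}           # cell index where each brand's block starts
print(assign_boxes_to_empty_cells(empty, incoming, anchors))
```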

EVALUATING THE RELIABILITY AND REPEATABILITY OF THE DIGITAL COLOR ANALYSIS SYSTEM FOR DENTISTRY (치과용 디지털 색상 분석용 기기의 정확성과 재현 능력에 대한 평가)

  • Jeong, Joong-Jae;Park, Su-Jung;Cho, Hyun-Gu;Hwang, Yun-Chan;Oh, Won-Mann;Hwang, In-Nam
    • Restorative Dentistry and Endodontics / v.33 no.4 / pp.352-368 / 2008
  • This study was done to evaluate the reliability of a digital color analysis system for dentistry (ShadeScan, CYNOVAD, Montreal, Canada). Sixteen tooth models were made by injecting A2-shade chemically cured resin for temporary crowns into impressions acquired from 16 adults. The surfaces of the model teeth were polished with resin polishing cloth. The window of the ShadeScan handpiece was placed on the labial surface of each tooth, tooth images were captured, and each tooth shade was analyzed with the ShadeScan software. Captured images were selected in groups and compared with one another. Two models were selected to evaluate the repeatability of ShadeScan, and shade analysis was performed 10 times for each tooth. To ascertain the color difference of the same shade code analyzed by ShadeScan, CIE $L^*a^*b^*$ values of the Gradia Direct shade guide (GC, Tokyo, Japan) were measured on white and black backgrounds using a Spectrolino (GretagMacbeth, USA), and a shade map of each shade guide was captured using the ShadeScan. No teeth were analyzed as A2 shade or as a single unique shade. Shade mapping analyses of the same tooth revealed similar shade and distribution except in the incisal third. The color differences (${\Delta}E^*$) among shade maps analyzed as the same shade by ShadeScan were above 3. Within the limits of this study, the digital color analysis instrument for dentistry has relatively high repeatability, but its accuracy remains questionable.
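
For reference, the color difference reported here is presumably the CIE76 formula, ${\Delta}E^*_{ab} = \sqrt{(\Delta L^*)^2 + (\Delta a^*)^2 + (\Delta b^*)^2}$. A minimal Python sketch with made-up readings (not the study's measurements):

```python
import math

def delta_e_cie76(lab1, lab2):
    """CIE76 color difference between two CIE L*a*b* triples."""
    return math.sqrt(sum((c1 - c2) ** 2 for c1, c2 in zip(lab1, lab2)))

# Invented readings of one shade tab measured on white and on black backgrounds
tab_on_white = (72.1, 1.8, 18.5)
tab_on_black = (69.4, 2.6, 15.9)
print(delta_e_cie76(tab_on_white, tab_on_black))   # values above ~3 are usually taken as perceptible
```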

Development and Effectiveness of the Primary Hospice Education Program for Nurses (간호사를 위한 호스피스 기초 교육 프로그램 및 효과)

  • In, Sook-Jin
    • 한국호스피스완화의료학회:학술대회논문집 / 2004.07a / pp.100-102 / 2004
  • Under the current medical system, terminal patients and their families are often neglected and inevitably face various crises, including not only physical but also psychological, social, economic, spiritual, and legal problems. Nurses often look after many terminal patients with these complicated problems. Therefore, educating the nurses who will take care of such patients would greatly reduce stress, so that the patients could end their lives in peace and without losing their dignity. This research is a quasi-experimental study with a nonequivalent control group, pretest-posttest design, in which a basic education program was developed for nurses who frequently treat terminal patients, to help them understand the importance of the role of hospice and to apply that understanding to the care of terminal cancer patients. The sample of nurses was taken from those working in general wards at two general hospitals in Seoul from October 2003 to December 2003. The study comprised an experimental group of 46 and a control group of 43. A basic hospice education program was developed by taking the emphasized and overlapping parts of the advanced-practice hospice nursing education course and short-term education courses, through an extensive literature survey, and by consulting three professionals. With a group of 5 professors with vast experience in oncology, 5 nursing administrators, and 3 nursing practitioners, a tentative first version of the program was developed and reviewed. Afterwards, person-to-person interviews were conducted with 2 head nurses experienced with terminal patients and 1 nurse in charge of hospice, 1 nurse reviewed the contents of the program, and 1 nurse rated the educational medium. The final version of the basic education program was developed after a second revision. The basic hospice education program consists of an introduction to hospice, hospice and communication, pain management for terminal cancer patients, physical care of terminal cancer patients, socio-psychological care of terminal cancer patients, and management of death and separation. Total education time was four hours, organized into 50 minutes of instruction and 10 minutes of break. $Powerpoint^{(R)}$ software was used as the educational medium. As research tools, "Knowledge of Hospice" was developed by the author after review by one expert; "Attitude toward Hospice Nursing" was a revision of Kim (2001)'s attitude-measuring tool, which was based on the tools of Wang (1998), Kwon (1989), and Park and Sung (1991); and "Burden of nursing terminal patients" used the tool developed by Zarit (1980) and Montgomery (1985) and translated by Lee (1985). For data collection, a preliminary investigation one week before the basic hospice education program and post-investigations one week and four weeks after the education were carried out for nurses in general wards who understood and agreed to the purpose of the program. Collected data were analyzed with the t-test, $x^2$-test, MANOVA, and Bonferroni correction in the $SAS^{(R)}$ program. The results are summarized as follows. Hypothesis 1, "the educated experimental group will possess more knowledge of hospice than the un-educated control group," was supported after 1 week (F=12.14, p=.00) and 4 weeks (F=5.3, p=.02) of education. Hypothesis 2, "the educated experimental group will take a more positive attitude toward hospice nursing than the un-educated control group," was supported after 1 week (F=3.92, p=.05) and 4 weeks (F=5.05, p=.02) of education. Hypothesis 3, "the educated experimental group will feel less burden than the un-educated control group in nursing terminal cancer patients," was rejected. In this study, knowledge of hospice was found to improve significantly. By applying the basic hospice education program to nurses, the program helped nurses take a positive attitude toward terminal patients. It was, however, seen that the education program had no effect on alleviating the burden of nursing terminal patients. Therefore, it is expected that this educational program will help hospices and nurses in general wards to understand the concept and the role of hospice, so that terminal patients, now neglected under the current medical system, will be able to end their lives in peace.
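
The statistical analysis (independent-groups comparison with a Bonferroni adjustment for the repeated follow-ups) can be illustrated in outline. The study itself used $SAS^{(R)}$; the sketch below uses Python with synthetic scores and is not the study's data or code.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Synthetic post-test "knowledge of hospice" scores, for illustration only.
experimental = rng.normal(loc=16.0, scale=2.0, size=46)   # educated group, n = 46
control      = rng.normal(loc=14.0, scale=2.0, size=43)   # un-educated group, n = 43

t, p = stats.ttest_ind(experimental, control, equal_var=False)

# Bonferroni correction when the same hypothesis is tested at two follow-up points
n_comparisons = 2
p_adjusted = min(p * n_comparisons, 1.0)
print(f"t = {t:.2f}, raw p = {p:.4f}, Bonferroni-adjusted p = {p_adjusted:.4f}")
```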

A Performance Evaluation of the e-Gov Standard Framework on PaaS Cloud Computing Environment: A Geo-based Image Processing Case (PaaS 클라우드 컴퓨팅 환경에서 전자정부 표준프레임워크 성능평가: 공간영상 정보처리 사례)

  • KIM, Kwang-Seob;LEE, Ki-Won
    • Journal of the Korean Association of Geographic Information Studies / v.21 no.4 / pp.1-13 / 2018
  • Platform as a Service (PaaS), one of the cloud computing service models, and the e-government (e-Gov) standard framework from the Ministry of the Interior and Safety (MOIS) both provide developers with practical computing environments for building web-based services. Web application developers in the geo-spatial information field can utilize and deploy much of the middleware and many of the common functions provided by either cloud-based services or the e-Gov standard framework. However, there are still few studies of their applicability and performance for actual geo-spatial information applications. The motivation of this study was therefore to investigate the relevance of these technologies and platforms. The applicability of these computing environments was examined and a performance evaluation was performed after deploying a test application: a spatial image processing service using Web Processing Service (WPS) 2.0 on the e-Gov standard framework. The test service was supported by a cloud environment based on Cloud Foundry, one of the open-source PaaS cloud platforms. Using these components, the performance of the test system under 300 and 500 threads was assessed through a comparison of two kinds of service: one deployed on the PaaS alone and one running the e-Gov framework on the PaaS. The performance measurements were based on recording response times to users' requests over 3,600 seconds. According to the experimental results, all the test cases of the e-Gov framework on the PaaS showed greater performance. The e-Gov standard framework on a PaaS cloud is expected to be an important factor in building web-based spatial information services, especially in the public sector.
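
The load-testing setup (fixed numbers of concurrent threads issuing requests for a fixed duration while response times are recorded) might be approximated as follows. The endpoint URL, run duration, and request form are placeholders, and the study's actual tooling is not specified in the abstract.

```python
import time
import threading
import statistics
import requests

# Hypothetical WPS endpoint; the abstract does not give the actual test URL or request.
URL = "http://example.org/wps?service=WPS&version=2.0.0&request=GetCapabilities"
DURATION = 60        # seconds per run here; the study recorded over 3,600 seconds
N_THREADS = 300      # the study compared 300 and 500 concurrent threads

response_times = []
lock = threading.Lock()

def worker(stop_at):
    while time.time() < stop_at:
        start = time.time()
        try:
            requests.get(URL, timeout=30)
        except requests.RequestException:
            continue
        with lock:
            response_times.append(time.time() - start)

stop_at = time.time() + DURATION
threads = [threading.Thread(target=worker, args=(stop_at,)) for _ in range(N_THREADS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

if response_times:
    print(f"{len(response_times)} requests, "
          f"mean response time {statistics.mean(response_times):.3f} s")
```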

Effect of abutment superimposition process of dental model scanner on final virtual model (치과용 모형 스캐너의 지대치 중첩 과정이 최종 가상 모형에 미치는 영향)

  • Yu, Beom-Young;Son, Keunbada;Lee, Kyu-Bok
    • The Journal of Korean Academy of Prosthodontics / v.57 no.3 / pp.203-210 / 2019
  • Purpose: The purpose of this study was to verify the effect of the abutment superimposition process on the final virtual model in the scanning of single-crown and 3-unit bridge models using a dental model scanner. Materials and methods: Gypsum models for single crowns and 3-unit bridges were manufactured for evaluation, and working casts with removable dies were made using the Pindex system. A dental model scanner (3Shape E1 scanner) was used to obtain a CAD reference model (CRM) and a CAD test model (CTM). The CRM was scanned without removing the abutments after sectioning them in the working cast. The CTM was then scanned with the sectioned abutments separated and superimposed onto the CRM (n=20). Finally, three-dimensional analysis software (Geomagic Control X) was used to analyze the root mean square (RMS) deviation, and the Mann-Whitney U test was used for statistical analysis (${\alpha}=.05$). Results: The mean RMS of the abutments for single full-crown preparations was $10.93{\mu}m$, and the mean RMS of the abutments for 3-unit bridge preparations was $6.9{\mu}m$. The RMS means of the two groups showed a statistically significant difference (P<.001). In addition, the positive and negative errors averaged $9.83{\mu}m$ and $-6.79{\mu}m$ for the single-crown abutments and $6.22{\mu}m$ and $-3.3{\mu}m$ for the 3-unit bridge abutments, respectively. The mean positive and negative errors were all statistically significantly lower for the 3-unit bridge abutments (P<.001). Conclusion: Although the number of abutments increased during the scanning of the working cast with removable dies, the error due to the superimposition of abutments did not increase. The error was significantly higher for single abutments, but within the range of clinically acceptable scan accuracy.
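
A minimal sketch of the reported analysis, assuming the RMS is taken over point-wise deviations between test and reference scans and the two abutment groups are compared with the Mann-Whitney U test; the numbers below are invented for illustration and are not the study's measurements.

```python
import numpy as np
from scipy.stats import mannwhitneyu

def rms(deviations_um):
    """Root mean square of point-wise deviations (micrometers)
    between a superimposed test scan and the reference scan."""
    d = np.asarray(deviations_um, dtype=float)
    return float(np.sqrt(np.mean(d ** 2)))

# Invented per-specimen RMS values (the study used n=20 per group and Geomagic Control X)
single_crown_rms = [10.1, 11.4, 10.8, 11.9, 10.5]
bridge_rms       = [6.5, 7.1, 6.8, 7.3, 6.6]

u, p = mannwhitneyu(single_crown_rms, bridge_rms, alternative="two-sided")
print(f"U = {u}, p = {p:.4f}")   # the paper reports P < .001 at alpha = .05
```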

A Study on the Development of High Sensitivity Collision Simulation with Digital Twin (디지털 트윈을 적용한 고감도 충돌 시뮬레이션 개발을 위한 연구)

  • Ki, Jae-Sug;Hwang, Kyo-Chan;Choi, Ju-Ho
    • Journal of the Society of Disaster Information / v.16 no.4 / pp.813-823 / 2020
  • Purpose: In order to maximize the safety and productivity of high-risk, high-cost work, such as dismantling facilities inside a reactor, through prior simulation, we intend to use digital twin technology that can closely mirror and control the specifications of the actual control equipment. Motion control errors, which can be caused by the time gap between the precision control equipment and the simulation when applying digital twin technology, can cause hazards such as collisions between hazardous facilities and control equipment. Prior research is needed to eliminate and control these situations. Method: Unity 3D is currently the most popular engine for developing simulations, but control errors can be caused by time correction inside the Unity 3D engine. Such errors are expected in many environments and may vary depending on the development environment, such as the system specifications. To demonstrate this, we develop a collision simulation using the Unity 3D engine, conduct collision experiments under various conditions, organize and analyze the results, and derive tolerances for the precision control equipment based on them. Result: In the collision simulation experiments, a time correction of 1/1000 second in an engine-internal function call produces a per-update distance error in the motion control of the colliding objects, and the distance error is proportional to the velocity of the collision. Conclusion: Remote dismantling simulators using digital twin technology are considered to require limits on the speed of movement, according to the precision required of the control devices, in both the hardware and software environment and in manual control. In addition, the development environment, hardware specifications, and the size of the modeling data for the control equipment and facilities imitated in the simulation must also be taken into account, together with the available and acceptable errors of the operational control equipment and the speed required for the work.
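
The reported proportionality between object velocity and the positional error caused by a 1/1000-second time correction can be illustrated with a back-of-the-envelope sketch; the velocities, units, and number of corrected updates below are arbitrary assumptions, not values from the paper.

```python
# The abstract reports that a time correction of about 1/1000 s in an engine-internal
# call produces a per-update distance error proportional to the object's velocity.
# Values below are arbitrary and only illustrate that proportionality.

TIME_CORRECTION = 0.001   # seconds of correction per affected update

def position_error(velocity_mm_per_s, corrected_updates):
    """Accumulated distance error after a number of time-corrected updates."""
    return velocity_mm_per_s * TIME_CORRECTION * corrected_updates

for v in (10, 100, 1000):   # mm/s
    print(f"v = {v:5d} mm/s -> error after 100 corrected updates: "
          f"{position_error(v, 100):.1f} mm")
```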

Analysis of Skin Color Pigments from Camera RGB Signal Using Skin Pigment Absorption Spectrum (피부색소 흡수 스펙트럼을 이용한 카메라 RGB 신호의 피부색 성분 분석)

  • Kim, Jeong Yeop
    • KIPS Transactions on Software and Data Engineering / v.11 no.1 / pp.41-50 / 2022
  • In this paper, a method is proposed to calculate the major components of skin color, such as melanin and hemoglobin, directly from the RGB signal of a camera. The main components of skin color are typically obtained by measuring spectral reflectance with dedicated equipment and recombining the values at selected wavelengths of the measured light. The values calculated in this way include the melanin index and erythema index, and they require special equipment such as a spectral reflectance measuring device or a multi-spectral camera. A direct way to calculate these components from a general digital camera is hard to find, and a method has instead been proposed that indirectly calculates the concentrations of melanin and hemoglobin using independent component analysis. This method targets a region of an RGB image, extracts characteristic vectors of melanin and hemoglobin, and calculates the concentrations in a manner similar to principal component analysis. Its disadvantages are that per-pixel calculation is difficult, because a group of pixels in a certain area is used as input, and that the extracted feature vectors, being obtained by an optimization method, tend to take different values each time the method is executed. The final result is produced as an image representing the melanin and hemoglobin components by converting back to the RGB coordinate system rather than using the feature vectors themselves. To improve on these disadvantages, the proposed method calculates the melanin and hemoglobin component values in a feature space rather than in the RGB coordinate system using feature vectors, estimates the spectral reflectance corresponding to the skin color from a general digital camera, and uses that spectral reflectance to calculate the detailed components constituting the skin pigments, such as melanin, oxygenated hemoglobin, deoxygenated hemoglobin, and carotenoids. The proposed method does not require special equipment such as a spectral reflectance measuring device or a multi-spectral camera; unlike the existing method, direct per-pixel calculation is possible, and the same characteristics are obtained over repeated executions. The standard deviation of the melanin and hemoglobin densities produced by the proposed method was 15% of that of the conventional method, making it about 6 times more stable.
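
The conventional baseline that the paper improves on (region-wise independent component analysis in the optical-density domain) might look roughly like the sketch below. The patch data are random, the two components are only interpreted as melanin and hemoglobin, and this is not the paper's proposed per-pixel method.

```python
import numpy as np
from sklearn.decomposition import FastICA

def pigment_densities(rgb_patch):
    """Region-wise ICA decomposition of a skin patch into two pigment components
    (commonly interpreted as melanin and hemoglobin).  `rgb_patch` is an
    (H, W, 3) array with values in (0, 1]."""
    h, w, _ = rgb_patch.shape
    # Optical-density (negative log) domain, where pigment absorption mixes ~linearly
    od = -np.log(rgb_patch.reshape(-1, 3).clip(1e-6, 1.0))
    ica = FastICA(n_components=2, random_state=0)
    sources = ica.fit_transform(od)            # per-pixel strengths of the two components
    return sources.reshape(h, w, 2)

# Tiny usage example on a random patch; a real input would be a skin image region.
patch = np.random.default_rng(0).uniform(0.3, 0.9, size=(8, 8, 3))
print(pigment_densities(patch).shape)          # (8, 8, 2)
```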