• Title/Summary/Keyword: 정보의 변환 (Transformation of Information)

A Lifelog Management System Based on the Relational Data Model and its Applications (관계 데이터 모델 기반 라이프로그 관리 시스템과 그 응용)

  • Song, In-Chul;Lee, Yu-Won;Kim, Hyeon-Gyu;Kim, Hang-Kyu;Haam, Deok-Min;Kim, Myoung-Ho
    • Journal of KIISE:Computing Practices and Letters / v.15 no.9 / pp.637-648 / 2009
  • As the cost of disks decreases, PCs are soon expected to be equipped with a disk of 1TB or more. Assuming that a single person generates 1GB of data per month, 1TB is enough to store the data of a person's entire lifetime. This has led to the growth of research on lifelog management, which manages what people see and listen to in everyday life. Although many different lifelog management systems have been proposed, based on the relational data model, on ontology, or on file systems, each has advantages and disadvantages: those based on the relational data model provide good query processing performance but do not support complex queries properly; those based on ontology handle more complex queries but their performance is not satisfactory; those based on file systems support only keyword queries. Moreover, these systems lack support for lifelog group management and do not provide a convenient user interface for modifying and adding tags (metadata) to lifelogs for effective lifelog search. To address these problems, we propose a lifelog management system based on the relational data model. The proposed system models lifelogs with the relational data model and transforms queries on lifelogs into SQL statements, which results in good query processing performance. It also supports a simplified relationship query that finds a lifelog based on other lifelogs directly related to it, to overcome the disadvantage of not supporting complex queries properly. In addition, the proposed system supports the management of lifelog groups by providing ways to create, edit, search, play, and share them. Finally, it is equipped with a tagging tool that helps the user modify and add tags conveniently through the recommendation of various tags. This paper describes the design and implementation of the proposed system and its various applications.
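
The paper's central move, storing lifelogs as relations so that lifelog queries compile to SQL, can be illustrated with a small sketch. The schema and the one-hop "related" join below are assumptions invented for illustration; the paper's actual schema and query syntax are not reproduced here.

```python
import sqlite3

# Minimal, hypothetical lifelog schema: one relation for lifelogs,
# one for tags, and one for direct relationships between lifelogs.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE lifelog (id INTEGER PRIMARY KEY, kind TEXT, path TEXT, created TEXT);
CREATE TABLE tag     (lifelog_id INTEGER REFERENCES lifelog(id), name TEXT);
CREATE TABLE related (src INTEGER REFERENCES lifelog(id),
                      dst INTEGER REFERENCES lifelog(id));
""")

def simplified_relationship_query(tag_of_target, tag_of_related):
    """Find lifelogs carrying one tag that are directly related to a
    lifelog carrying another tag -- a one-hop join rather than a general
    graph query, in the spirit of the 'simplified relationship query'."""
    sql = """
    SELECT DISTINCT a.id, a.path
    FROM lifelog a
    JOIN tag ta    ON ta.lifelog_id = a.id AND ta.name = ?
    JOIN related r ON r.src = a.id
    JOIN tag tb    ON tb.lifelog_id = r.dst AND tb.name = ?
    """
    return conn.execute(sql, (tag_of_target, tag_of_related)).fetchall()
```

A one-hop join like this keeps the translated SQL simple and fast, which is the trade-off a simplified relationship query makes against fully general graph queries.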

A Reflectance Normalization Via BRDF Model for the Korean Vegetation using MODIS 250m Data (한반도 식생에 대한 MODIS 250m 자료의 BRDF 효과에 대한 반사도 정규화)

  • Yeom, Jong-Min;Han, Kyung-Soo;Kim, Young-Seup
    • Korean Journal of Remote Sensing / v.21 no.6 / pp.445-456 / 2005
  • Land surface parameters should be determined with sufficient accuracy because they play an important role in climate change near the ground. Since surface reflectance exhibits strong anisotropy, off-nadir viewing results in a strong dependency of the observations on the Sun-target-sensor geometry, and these angular effects contribute random noise to the data. The principal objective of this study is to provide a database of accurate surface reflectance over Korea, with the angular effects removed, derived from MODIS 250m reflective channel data. The MODIS (Moderate Resolution Imaging Spectroradiometer) sensor provides visible and near-infrared channel reflectance at 250m resolution on a daily basis. Successive analytic processing steps were performed: cloudy pixels were first removed on a per-pixel basis, and geometric distortion was then corrected by nearest-neighbor resampling using a second-order polynomial obtained from the geolocation information of the MODIS data set. To correct the surface anisotropy effects, this paper applied a semiempirical kernel-driven Bidirectional Reflectance Distribution Function (BRDF) model. The algorithm inverts the kernel-driven model against the angular components, namely the viewing zenith, solar zenith, viewing azimuth, and solar azimuth angles, from the reflectance observed by the satellite. First, sets of observations composed over a 31-day period are used to fit the BRDF model. Next, nadir-view reflectance normalization is carried out by modifying the angular components separated by the BRDF model, for each spectral band and each pixel. The modeled reflectance values show good agreement with the measured reflectance values, with an overall RMSE (Root Mean Square Error) of about 0.01 (maximum 0.03). Finally, we provide a normalized surface reflectance database consisting of 36 images over Korea for 2001.
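
A minimal sketch of the kernel-driven inversion follows. For brevity it uses only an isotropic term plus the RossThick volumetric kernel (the semiempirical model in the paper also includes a geometric kernel, omitted here), fits the kernel weights over a stack of multi-angle observations by linear least squares, and evaluates the fitted model at nadir view; all function and variable names are assumptions.

```python
import numpy as np

def ross_thick(theta_s, theta_v, phi):
    """RossThick volumetric scattering kernel (all angles in radians)."""
    cos_xi = (np.cos(theta_s) * np.cos(theta_v)
              + np.sin(theta_s) * np.sin(theta_v) * np.cos(phi))
    xi = np.arccos(np.clip(cos_xi, -1.0, 1.0))  # phase angle
    return (((np.pi / 2 - xi) * np.cos(xi) + np.sin(xi))
            / (np.cos(theta_s) + np.cos(theta_v)) - np.pi / 4)

def fit_and_normalize(refl, theta_s, theta_v, phi, theta_s_ref):
    """Fit R = f_iso + f_vol * K_vol over a 31-day stack of observations
    for one pixel and band, then predict the nadir-view reflectance."""
    K = ross_thick(theta_s, theta_v, phi)
    A = np.column_stack([np.ones_like(K), K])       # design matrix
    (f_iso, f_vol), *_ = np.linalg.lstsq(A, refl, rcond=None)
    # Nadir normalization: viewing zenith = 0 at a reference solar zenith.
    return f_iso + f_vol * ross_thick(theta_s_ref, 0.0, 0.0)
```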

Application of Terrestrial LiDAR for Reconstructing 3D Images of Fault Trench Sites and Web-based Visualization Platform for Large Point Clouds (지상 라이다를 활용한 트렌치 단층 단면 3차원 영상 생성과 웹 기반 대용량 점군 자료 가시화 플랫폼 활용 사례)

  • Lee, Byung Woo;Kim, Seung-Sep
    • Economic and Environmental Geology / v.54 no.2 / pp.177-186 / 2021
  • For disaster management and mitigation of earthquakes in the Korean Peninsula, active fault investigation has been conducted for the past five years. In particular, the investigation of sediment-covered active faults integrates geomorphological analysis of airborne LiDAR data, surface geological survey, and geophysical exploration, and unearths subsurface active faults by trench survey. However, the fault traces revealed by trench surveys are available for investigation only for a limited time before the sites are restored to their previous condition, so the geological data describing the fault trench sites survive only as qualitative data in research articles and reports. To overcome this temporal limitation of geological studies, we utilized a terrestrial LiDAR to produce 3D point clouds of the fault trench sites and restored them in a digital space. The terrestrial LiDAR scanning was conducted at two trench sites located near the Yangsan Fault and acquired amplitude and reflectance from the surveyed area, as well as color information, by combining photogrammetry with the LiDAR system. The scanned data were merged to form 3D point clouds with an average geometric error of 0.003 m, accurate enough to restore the details of the surveyed trench sites. However, we found that more post-processing of the scanned data would be necessary, because the amplitudes and reflectances of the point clouds varied with scan position, and the colors of the trench surfaces were captured differently depending on the light available at the time. Such point clouds are very large and can be visualized with only a limited set of software packages, which restricts data sharing among researchers. As an alternative, we suggest Potree, an open-source web-based platform, for visualizing the point clouds of the trench sites. As a result, we find that terrestrial LiDAR data can be a practical means of increasing the reproducibility of geological field studies and can be made easily accessible to researchers and students in the Earth Sciences.
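
As an illustration of the kind of registration check behind the reported 0.003 m average geometric error, the sketch below measures scan-to-scan misalignment as the mean nearest-neighbor distance between two already-registered point clouds. It is a generic quality metric computed on synthetic data, not the authors' processing chain or the Potree pipeline.

```python
import numpy as np
from scipy.spatial import cKDTree

def mean_geometric_error(cloud_a, cloud_b):
    """Mean nearest-neighbor distance (meters) from cloud_a to cloud_b,
    both given as (N, 3) arrays of XYZ coordinates in the same frame."""
    tree = cKDTree(cloud_b)
    dists, _ = tree.query(cloud_a, k=1)
    return float(dists.mean())

# Example with synthetic, slightly perturbed scans (assumed data).
rng = np.random.default_rng(0)
scan1 = rng.uniform(0, 10, size=(10_000, 3))
scan2 = scan1 + rng.normal(scale=0.003, size=scan1.shape)
print(f"average geometric error ~ {mean_geometric_error(scan1, scan2):.4f} m")
```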

Reconstruction of Stereo MR Angiography Optimized to View Position and Distance using MIP (최대강도투사를 이용한 관찰 위치와 거리에 최적화 된 입체 자기공명 뇌 혈관영상 재구성)

  • Shin, Seok-Hyun;Hwang, Do-Sik
    • Investigative Magnetic Resonance Imaging / v.16 no.1 / pp.67-75 / 2012
  • Purpose: We studied an enhanced method for viewing the vessels in the brain using Magnetic Resonance Angiography (MRA). Noting that the Maximum Intensity Projection (MIP) image is often used to evaluate the arteries of the neck and brain, we propose a new method for viewing brain vessels as a stereo image in 3D space, more flexible and more accurate than the conventional method. Materials and Methods: We used a 3T Siemens Tim Trio MRI scanner with a 4-channel head coil and obtained 3D MRA brain data by fixing the volunteer's head and applying a Phase Contrast pulse sequence. The MRA brain data are rotated in 3D according to the view angle of each eye. The optimal view angle (projection angle) is determined by the distance between the eye and the center of the data. The rotated MRA data are projected along the projection line, and only the highest values are displayed. The left-view and right-view MIP images are then combined by anaglyph imaging, yielding an optimal stereoscopic MIP image. Results: The resulting images show that the proposed method enables viewing the MIP image from any direction of the MRA data, which is impossible with the conventional method. Moreover, by considering disparity and the distance from the viewer to the center of the MRA data in spherical coordinates, we can obtain a more realistic stereo image. In conclusion, we can obtain optimal stereoscopic images according to the position the viewer wants to see from and the distance between the viewer and the MRA data. Conclusion: The proposed method overcomes the limitation of the conventional method, which shows only a specific projected image (z-axis projection), and provides optimal depth information by converting the mono MIP image into a stereoscopic image that takes the viewer's position into account. It can display any view of the MRA data in spherical coordinates. If an optimization algorithm and parallel processing are applied, it may provide useful medical information for diagnosis and treatment planning in real time.
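
The pipeline, rotating the volume once per eye, taking a maximum intensity projection, and fusing the two views into an anaglyph, can be sketched as follows. The rotation axis, the disparity angle, and the red/cyan channel assignment are assumed conventions for illustration, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import rotate

def mip(volume, angle_deg):
    """Rotate the volume about its vertical axis, then take the
    maximum intensity projection along the viewing (depth) axis."""
    rotated = rotate(volume, angle_deg, axes=(0, 2), reshape=False, order=1)
    return rotated.max(axis=2)

def stereo_mip_anaglyph(volume, disparity_deg=4.0):
    """Left/right MIPs half a disparity angle apart, fused as a
    red/cyan anaglyph (left eye -> red, right eye -> green+blue)."""
    left = mip(volume, -disparity_deg / 2)
    right = mip(volume, +disparity_deg / 2)
    rgb = np.stack([left, right, right], axis=-1)
    return rgb / rgb.max()  # normalize for display

# Example on a synthetic vessel-like volume (assumed data).
vol = np.zeros((64, 64, 64), dtype=np.float32)
vol[30:34, :, 30:34] = 1.0  # a straight "vessel"
anaglyph = stereo_mip_anaglyph(vol)
```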

Development of a Comprehensive Model of Disaster Management in Korea Based on the Result of Response to the Sampung Building Collapse (1995), the Disaster Law, and the '98 Disaster Preparedness Plan of Seoul City (우리나라 사고예방과 재난관리 모형 개발을 위한 연구)

  • Lee, In-Sook
    • Research in Community and Public Health Nursing / v.11 no.1 / pp.289-316 / 2000
  • In Korea, community disaster management plans and drills are based on a civil defense model rather than a health care model, so triage at the accident scene, rational patient distribution and transport, and emergency room response are not carried out systematically, and the community cannot respond immediately. This study analyzes the response to the Sampoong collapse and the Korean emergency medical system since then, in order to suggest improvements to the emergency medical system for major accident prevention and disaster management and the preparation needed in nursing education. 1. At the time of the Sampoong accident, there was no legal basis, that is, no disaster management law covering man-made disasters. A medical command system therefore could not be secured at the scene, and no emergency care was provided on site. The major problems were on-scene triage, emergency treatment and referral, the complete breakdown of communication between hospitals, the field command post, and ambulances, and the lack of personnel, equipment, and communication systems for receiving medical direction during patient transport. Hospital emergency rooms, in turn, either had no disaster plan or failed to activate the one they had and convert the hospital's operating system. 2. One month after the Sampoong Department Store collapse, a disaster management law covering man-made disasters was enacted, requiring each level of government to establish an annual disaster management plan suited to local needs. From a health care perspective, such a law should cover field response, resident participation, emergency medical response, dissemination of information, and education and training. Even after this legal basis was established, however, the emergency medical response areas in Korean disaster plans specify inter-agency roles only weakly and remain nominal statements without operating guidelines for carrying out emergency response in the field, so it is difficult to activate the plans and operate them in the community. The plans do not clearly assign roles and tasks, among personnel and among agencies, for accident identification and notification, emergency dispatch, needs assessment, triage and stabilization of casualties, casualty collection, field treatment, transport accompanied by life-preserving medical and surgical emergency care, post-accident psychological stress management, and overall evaluation of the accident; as a result, the coordinated work and intersectoral cooperation that matter most when an accident occurs are hard to achieve. Links of mutual cooperation between medical institutions, emergency rooms and intensive care units, and the agencies responsible for public safety are insufficient. In short, current disaster preparedness plans secure neither a clear division of tasks by agency, nor scenario-based plans for disaster situations, nor a framework for training them. 3. Local government disaster plans stipulate that the public health center plays the central role in all health care matters when a disaster occurs. The health center should therefore organize and operate a community-based disaster management plan, delegating emergency treatment at the disaster scene to the fire department, the public agency responsible for rescue and lifesaving, and to local emergency hospitals. That is, the community disaster management plan should be drawn up under the leadership of the health center together with local hospitals and related agencies (fire and police departments), with tasks clearly divided and linked; this is a key factor in the success of disaster management. 4. The Korean Red Cross's education programs for community residents run year-round, but most topics belong to health promotion; emergency medical management accounts for 8% of total education hours, and there is no resident education program for disaster preparedness. Schools, where specific age groups gather, have no regular health education hours, so there is no opportunity to learn and practice lifesaving or first aid systematically, and the base of national disaster preparedness is not expanding. 5. Hospitals should organize disaster management committees, establish comprehensive disaster management plans that take into account the resources within their catchment areas, and conduct drills that include the community. At present, however, hospitals hold only nominal disaster management plans. 6. When disaster preparedness was evaluated, the personnel and equipment of hospital emergency room treatment teams met the standards relatively well, but hospital disaster plans were not drilled at all. Korea's disaster preparedness can therefore improve only if a field emergency medical system, disaster response plans, and resident education through drills come first; this requires long-term effort and investment of resources based on an emergency medical services model rather than a civil defense drill model, along with community-centered response preparation, strategies for activating it, training and exercises, and education. 7. No role is legally specified for first responders at the scene. In Korea, the education and regulation of level-1 and level-2 emergency medical technicians have been set by the Emergency Medical Services Act since 1995. The curriculum is similar to the EMT curriculum standards of the United States, but laboratory and field practice hours are severely insufficient. In accredited EMT training institutions abroad, moreover, instructors not only meet instructor qualification standards, but practice instructors generally spend half of each week on ambulances doing continuous field work, and practice is run in scenario form. In Korea, therefore, practice within the curriculum should be strengthened so that emergency medical technicians can serve as field technical personnel, and graduates should build field competence through internships. 8. As nurses can now be certified as emergency nurse practitioners, standard curriculum guidelines should be developed to strengthen their competence in prehospital care and disaster response. Considering the content of the current certification program, the nursing curriculum should also be partially supplemented so that registered nurses can serve as first responders in the field.

Corporate Credit Rating based on Bankruptcy Probability Using AdaBoost Algorithm-based Support Vector Machine (AdaBoost 알고리즘기반 SVM을 이용한 부실 확률분포 기반의 기업신용평가)

  • Shin, Taek-Soo;Hong, Tae-Ho
    • Journal of Intelligence and Information Systems / v.17 no.3 / pp.25-41 / 2011
  • Recently, support vector machines (SVMs) have come to be recognized as competitive with other data mining techniques for pattern recognition and classification problems, and many studies have shown them to be more powerful than traditional artificial neural networks (ANNs) (Amendolia et al., 2003; Huang et al., 2004; Huang et al., 2005; Tay and Cao, 2001; Min and Lee, 2005; Shin et al., 2005; Kim, 2003). Classification decisions, whether binary or multi-class, are highly cost-sensitive in financial problems such as credit rating: if credit ratings are misclassified, investors or financial decision makers may suffer severe economic losses. It is therefore necessary to convert the classifier outputs into well-calibrated posterior probabilities and to derive multi-class credit ratings from the resulting bankruptcy probabilities. SVMs, however, do not directly provide such probabilities, so a separate method is required to produce them (Platt, 1999; Drish, 2001). This paper applies AdaBoost-based SVMs to bankruptcy prediction, formulated as a binary classification problem for IT companies in Korea, and then produces multi-class credit ratings of the companies by shaping the posterior bankruptcy probabilities, obtained from the loss functions extracted from the SVMs, into a normal distribution. The proposed approach also shows that misclassification can be minimized by adjusting the credit grade interval ranges, on the condition that each credit grade for loan borrowers carries its own credit risk, i.e., bankruptcy probability.
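
A minimal sketch of the two stages, an AdaBoost ensemble of SVM base learners followed by Platt-style sigmoid calibration of the ensemble scores into bankruptcy probabilities, is shown below on toy data. The scikit-learn parameter names (which vary across versions) and the quantile grade-binning at the end are assumptions, not the paper's exact procedure.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression

# Toy data standing in for firm financial ratios (assumed).
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # 1 = bankrupt

# AdaBoost over SVM base learners. SAMME works because SVC accepts
# sample_weight; the keyword was 'base_estimator' in older sklearn.
ada_svm = AdaBoostClassifier(estimator=SVC(kernel="rbf"),
                             algorithm="SAMME", n_estimators=10)
ada_svm.fit(X, y)

# Platt-style calibration: logistic sigmoid fitted on ensemble scores.
scores = ada_svm.decision_function(X).reshape(-1, 1)
platt = LogisticRegression().fit(scores, y)
p_bankrupt = platt.predict_proba(scores)[:, 1]

# Map probabilities to credit grades by quantile bins, so that grade
# counts follow a roughly bell-shaped distribution (assumed scheme).
edges = np.quantile(p_bankrupt, [0.05, 0.25, 0.75, 0.95])
grades = np.digitize(p_bankrupt, edges)  # 0 (best) .. 4 (worst)
```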

The Effect of Meta-Features of Multiclass Datasets on the Performance of Classification Algorithms (다중 클래스 데이터셋의 메타특징이 판별 알고리즘의 성능에 미치는 영향 연구)

  • Kim, Jeonghun;Kim, Min Yong;Kwon, Ohbyung
    • Journal of Intelligence and Information Systems / v.26 no.1 / pp.23-45 / 2020
  • Big data is being created in a wide variety of fields such as medical care, manufacturing, logistics, retail, and social networking services, and the characteristics of the resulting datasets are equally diverse. To secure competitiveness, companies need to improve their decision-making capacity using classification algorithms. However, most practitioners do not have sufficient knowledge of which classification algorithm suits a specific problem area; determining the appropriate algorithm for the characteristics of a dataset has been a task requiring expertise and effort. This is because the relationship between the characteristics of datasets (called meta-features) and the performance of classification algorithms has not been fully understood, and there has been little research on meta-features reflecting the characteristics of multi-class data. The purpose of this study is therefore to analyze empirically whether the meta-features of multi-class datasets have a significant effect on the performance of classification algorithms. The meta-features of multi-class datasets were grouped into two factors (data structure and data complexity), and seven representative meta-features were selected. Among them, we included the Herfindahl-Hirschman Index (HHI), originally a market concentration index, to replace the IR (Imbalance Ratio), and we added a newly developed index, the Reverse ReLU Silhouette Score, to the meta-feature set. From the UCI Machine Learning Repository, six representative datasets (Balance Scale, PageBlocks, Car Evaluation, User Knowledge-Modeling, Wine Quality (red), and Contraceptive Method Choice) were selected. Each dataset was classified with the algorithms selected for the study (KNN, Logistic Regression, Naïve Bayes, Random Forest, and SVM) under 10-fold cross-validation; oversampling from 10% to 100% was applied to each fold, and the meta-features of the dataset were measured. The selected meta-features are the HHI, number of classes, number of features, entropy, Reverse ReLU Silhouette Score, nonlinearity of a linear classifier, and hub score; the F1-score was selected as the dependent variable. The results showed that the six meta-features, including the Reverse ReLU Silhouette Score and the HHI proposed in this study, have a significant effect on classification performance: (1) the HHI proposed in this study is significant for classification performance; (2) unlike the number of classes, the number of features has a significant, and positive, effect; (3) the number of classes has a negative effect on performance; (4) entropy has a significant effect; (5) the Reverse ReLU Silhouette Score is significant at the 0.01 level; and (6) the nonlinearity of linear classifiers has a significant negative effect. The analyses by individual classification algorithm were also consistent, except that in the per-algorithm regression the number of features is not significant for the Naïve Bayes algorithm, unlike for the other algorithms.
This study makes two theoretical contributions: (1) two new meta-features (the HHI and the Reverse ReLU Silhouette Score) were shown to be significant, and (2) the effects of data characteristics on classification performance were investigated using meta-features. Practically, (1) the results can be used to develop a system that recommends classification algorithms according to dataset characteristics, and (2) because data characteristics differ, data scientists often search for the optimal algorithm by repeatedly adjusting its parameters, wasting hardware, cost, time, and manpower; this study can reduce that waste. The study is expected to be useful for machine learning and data mining researchers, practitioners, and developers of machine-learning-based systems. The paper consists of an introduction, related research, the research model, experiments, and a conclusion and discussion.
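
Of the meta-features listed, the HHI is the easiest to make concrete: treat each class's share of the samples as a "market share" and sum the squares, so that larger values indicate stronger class imbalance. The sketch below shows that computation; the paper's exact normalization, and the Reverse ReLU Silhouette Score, are not reproduced here.

```python
import numpy as np

def hhi(labels):
    """Herfindahl-Hirschman Index of a label vector: the sum of squared
    class shares. Equals 1/n_classes for a perfectly balanced dataset
    and 1.0 when one class holds every sample."""
    _, counts = np.unique(labels, return_counts=True)
    shares = counts / counts.sum()
    return float(np.sum(shares ** 2))

print(hhi([0, 0, 1, 1, 2, 2]))   # balanced 3-class: 1/3
print(hhi([0] * 98 + [1] * 2))   # highly imbalanced: ~0.96
```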

The Efficient Merge Operation in Log Buffer-Based Flash Translation Layer for Enhanced Random Writing (임의쓰기 성능향상을 위한 로그블록 기반 FTL의 효율적인 합병연산)

  • Lee, Jun-Hyuk;Roh, Hong-Chan;Park, Sang-Hyun
    • The KIPS Transactions:PartD / v.19D no.2 / pp.161-186 / 2012
  • Recently, flash memory has steadily increased in storage capacity while its price has fallen, which has made mass-storage SSDs (Solid State Drives) popular. Flash memory, however, has several hardware restrictions, and a special layer, the FTL (Flash Translation Layer), is required to compensate for them: to handle the hardware restrictions efficiently, the FTL translates the logical sector numbers used by file systems into the physical sector numbers of the flash memory. Poor performance is attributed above all to the erase-before-write restriction, and although many log-block-based schemes have been studied, some problems remain in operating mass-storage flash memory. Under a log-block scheme such as FAST, random writes with wide locality trigger merge operations even though many sectors in the data block are unused; in other words, ineffective block thrashing occurs and flash memory performance degrades. When overwrites are absorbed by the log block, the log block behaves like a cache, and this technique helps to improve flash memory performance. To improve random-write performance, this study operates log blocks not only as a cache but across the entire flash memory, so that merge and erase operations are reduced; a distinct mapping table, called the offset mapping table, is maintained for this purpose. The new FTL is named XAST (eXtensively-Associative Sector Translation); XAST manages the offset mapping table efficiently by exploiting spatial and temporal locality.
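
To make the mapping idea concrete, here is a toy log-block FTL in which overwrites are appended to a log block and an offset mapping table redirects subsequent reads to the newest copy, deferring the expensive merge until the log block fills. It is a generic illustration of log-block mapping with assumed structures, not the paper's XAST design.

```python
# Toy log-block FTL: logical sector -> (offset in log block) mapping.
class ToyLogBlockFTL:
    def __init__(self, sectors_per_block=4):
        self.spb = sectors_per_block
        self.data = {}        # logical sector -> value in its data block
        self.log = []         # append-only log block (acts like a cache)
        self.offset_map = {}  # logical sector -> offset in the log block

    def write(self, lsn, value):
        """Overwrites go to the log block instead of erasing the data
        block (erase-before-write); the offset map tracks the newest copy."""
        if len(self.log) == self.spb:
            self.merge()                      # log block full: merge first
        self.offset_map[lsn] = len(self.log)
        self.log.append((lsn, value))

    def read(self, lsn):
        if lsn in self.offset_map:            # newest copy is in the log
            return self.log[self.offset_map[lsn]][1]
        return self.data.get(lsn)

    def merge(self):
        """Fold valid log sectors back into the data blocks (one costly
        erase+write cycle on real hardware), then reuse the log block."""
        for lsn, value in self.log:
            self.data[lsn] = value
        self.log.clear()
        self.offset_map.clear()
```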

A Research of Standards for Radiopharmaceutical Doses in Pediatric Nuclear Medicine (소아 핵의학 검사 시 사용되는 방사성의약품의 양 산출 기준 조사)

  • Do, Yong-Ho;Kim, Gye-Hwan;Lee, Hong-Jae;Kim, Jin-Eui;Kim, Hyun-Joo
    • The Korean Journal of Nuclear Medicine Technology / v.13 no.1 / pp.47-50 / 2009
  • Purpose: At present, no exact worldwide standard exists for radiopharmaceutical doses in pediatric nuclear medicine, so hospitals follow vial kit manuals or the American and European guidelines, which are based on recommended adult doses adjusted for body mass (MBq/kg) or body surface area (MBq/$m^2$). However, it is hard to estimate exact dosages, especially for children younger than 1 year or heavier than 50 kg. Materials and Methods: To obtain objective data on the multipliers used for pediatric studies, we surveyed four major hospitals in Korea. After receiving their feedback, we converted the reported dosages into multipliers and compared the Korean multipliers with the American and European ones. Results: Most hospitals in Korea follow the body mass formula (MBq/kg); on the other hand, the standards do not include proper factors for children younger than 1 year or heavier than 50 kg. For 3 kg children, who are injected with lower doses than needed, the multipliers are 0.12 (America), 0.09 (Europe), and 0.05 (Korea); for 30 kg children, who are injected with proper doses, 0.58 (America), 0.51 (Europe), and 0.45 (Korea); and for 60 kg children, who are injected with higher doses than needed, 0.95 (America), 0.95 (Europe), and 0.91 (Korea). Conclusions: The survey shows that pediatric doses are usually derived from adult doses adjusted for body mass (MBq/kg), but none of the compared standards reflect exact multipliers for children younger than 1 year or heavier than 50 kg. We should therefore strive to reduce needless radiation exposure in children by establishing a proper dose standard and by developing better image reconstruction software.
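
The body-mass scaling the survey describes reduces to multiplier = child weight / reference adult weight. The sketch below assumes a 70 kg reference adult, which approximately reproduces the trend of the surveyed Korean multipliers; the actual reference weight and rounding rules used by each hospital are assumptions here.

```python
ADULT_WEIGHT_KG = 70.0  # assumed reference adult weight; guidelines may differ

def body_mass_multiplier(weight_kg):
    """Fraction of the adult dose for a child of the given weight,
    capped at 1.0 (a child is never given more than the adult dose)."""
    return min(1.0, weight_kg / ADULT_WEIGHT_KG)

for w in (3, 30, 60):
    print(f"{w:>2} kg -> multiplier {body_mass_multiplier(w):.2f}")
# 0.04, 0.43, 0.86 -- the same trend as the surveyed Korean values
# (0.05, 0.45, 0.91) but not identical, since each hospital's exact
# formula and reference weight differ.
```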

Development of a Testing Environment for Parallel Programs based on MSC Specifications (MSC 명세를 기반으로 한 병렬 프로그램 테스팅 환경의 개발)

  • Kim, Hyeon-Soo;Bae, Hyun-Seop;Chung, In-Sang;Kwon, Yong-Rae;Chung, Young-Sik;Lee, Byung-Sun;Lee, Dong-Gil
    • Journal of KIISE:Computing Practices and Letters / v.6 no.2 / pp.135-149 / 2000
  • Most prior work on testing parallel programs has concentrated on guaranteeing reproducibility by employing event traces recorded during executions of a program. Consequently, little work has been done on generating test cases, especially from the specifications produced during the software development process. In this work, we devise techniques for deriving test cases automatically from specifications written in Message Sequence Charts (MSCs), which are widely used in telecommunications, and we develop a testing environment for module testing of parallel programs with the derived test cases. To derive test cases from MSCs, we must uncover the causality relations among events that are embedded implicitly in the MSCs. For this, we devise a method for adapting vector time stamping to MSCs; valid event sequences satisfying the causality relations are then generated and used as test cases. The generated test cases, written in TTCN, are translated into CHILL source code, which interacts with the target module and tests the validity of its behavior. Since the testing method extracts test cases from the MSC specifications produced during the telecommunications software development process, no auxiliary specifications need to be written for testing. In addition, because adapting vector time stamping generates the event sequences automatically, the event sequences generated for the whole system can also be used for testing individual modules.
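
The vector time stamping step can be sketched directly: walk each process's event list in MSC (top-to-bottom) order, tick that process's own clock component at every event, and on a receive merge component-wise with the timestamp of the matching send, so that comparing vectors recovers the causality relation between any two events. The MSC encoding below is a hypothetical one invented for illustration, not the paper's tool or notation.

```python
# Vector time stamps for MSC events. Each event is (kind, msg_id) with
# kind in {"send", "recv"}; events are listed per process in
# top-to-bottom MSC order (a hypothetical encoding of an MSC).
def vector_timestamps(msc):
    procs = sorted(msc)
    idx = {p: i for i, p in enumerate(procs)}
    clocks = {p: [0] * len(procs) for p in procs}
    send_ts = {}   # msg_id -> timestamp of its send event
    stamps = []    # (process, event, timestamp) in a causally valid order
    pending = {p: list(events) for p, events in msc.items()}
    progress = True
    while progress:
        progress = False
        for p in procs:
            while pending[p]:
                kind, msg = pending[p][0]
                if kind == "recv" and msg not in send_ts:
                    break                 # its send has not happened yet
                clocks[p][idx[p]] += 1    # own component ticks
                if kind == "recv":        # merge with the send's clock
                    clocks[p] = [max(a, b)
                                 for a, b in zip(clocks[p], send_ts[msg])]
                ts = tuple(clocks[p])
                if kind == "send":
                    send_ts[msg] = ts
                stamps.append((p, pending[p].pop(0), ts))
                progress = True
    return stamps

msc = {"A": [("send", "m1"), ("recv", "m2")],
       "B": [("recv", "m1"), ("send", "m2")]}
for p, ev, ts in vector_timestamps(msc):
    print(p, ev, ts)
```

Generating valid test sequences then amounts to enumerating event orders consistent with these vectors, i.e., topological orders of the causality relation.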
