• Title/Abstract/Keyword: C language

An Improved Technique of Fitness Evaluation for Automated Test Data Generation (테스트 데이터 자동 생성을 위한 적합도 평가 방법의 효율성 향상 기법)

  • Lee, Sun-Yul;Choi, Hyun-Jae;Jeong, Yeon-Ji;Bae, Jung-Ho;Kim, Tae-Ho;Chae, Heung-Suk
    • Journal of KIISE: Software and Applications / v.37 no.12 / pp.882-891 / 2010
  • Many automated dynamic test data generation techniques have been proposed. These techniques evaluate the fitness of test data by executing an instrumented Software Under Test (SUT) and then generate new test data based on the evaluated fitness values and optimization algorithms. Previous research and experiments have shown that these techniques generate effective test data. However, the optimization algorithms in these techniques require much time to generate test data, which results in a high test case generation cost. In this paper, we propose a technique for reducing the time spent evaluating the fitness of test data, one of the steps of dynamic test data generation methods. We introduce the concept of a Fitness Evaluation Program (FEP), derived from a path constraint of the SUT. We suggest a test data generation method based on the FEP and implement a test generation tool named ConGA. We also apply ConGA to generate test cases for C programs and evaluate the efficiency of the FEP-based test case generation technique. The experiments show that the proposed technique reduces test data generation time by 20% on average.
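
The abstract does not reproduce the FEP construction, so the sketch below only illustrates the general idea of evaluating fitness directly from a path constraint using classic branch-distance rules; the constraint, constants, and function names are assumptions for illustration, not ConGA's code.

```c
#include <stdio.h>
#include <math.h>

/* Hypothetical fitness evaluation for the path constraint
   (x > 10) && (y == 20): the value is 0 when the path is taken
   and grows with the distance from satisfying each condition. */
static double branch_gt(double lhs, double rhs) {
    return lhs > rhs ? 0.0 : (rhs - lhs) + 1.0;   /* offset K = 1 */
}

static double branch_eq(double lhs, double rhs) {
    return fabs(lhs - rhs);
}

double fitness(double x, double y) {
    /* a conjunction sums the distances of its conditions */
    return branch_gt(x, 10.0) + branch_eq(y, 20.0);
}

int main(void) {
    printf("%.1f\n", fitness(5.0, 18.0));   /* 8.0: far from the target path */
    printf("%.1f\n", fitness(12.0, 20.0));  /* 0.0: the path is covered */
    return 0;
}
```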

A Unified Design Methodology using UML Classes for XML Application based on RDB (관계형 데이터베이스 기반의 XML 응용을 위한, UML 클래스를 이용한 통합 설계 방법론)

  • Bang, Sung-Yoon;Joo, Kyung-Soo
    • The KIPS Transactions: Part D / v.9D no.6 / pp.1105-1112 / 2002
  • Information exchange based on XML, as in B2B electronic commerce, is spreading, so a systematic and stable management mechanism for storing the exchanged information is needed. To this end, there are many research activities concerning the connection between XML applications and relational databases. But because XML data has a hierarchical structure and relational databases can store only flat-structured data, we need a conversion rule that changes the hierarchical architecture into a two-dimensional format, and accordingly a modeling methodology for storing such structured information in relational databases. In order to build good-quality application systems, modeling is an important first step. In 1997, the OMG adopted UML as its standard modeling language, and since industry has warmly embraced UML, its importance will only grow. A design methodology based on UML is therefore needed to develop efficient XML applications. In this paper, we propose a unified design methodology for XML applications based on relational databases using UML. To reach this goal, we first introduce an XML modeling methodology that designs W3C XML Schema using UML, and second we propose a data modeling methodology for relational database schemas that stores XML data efficiently in relational databases.
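
As a toy illustration of the hierarchical-to-flat conversion described above (not the paper's actual rule set), the C sketch below flattens a nested, XML-like structure into two relational tables linked by a foreign key; all names and fields are hypothetical.

```c
#include <stdio.h>

/* An XML fragment like <order><item/><item/></order> is hierarchical;
   relational storage needs flat rows, so child elements become rows
   in a second table that carry the parent's key. */
typedef struct { int id; const char *name; } Item;
typedef struct { int id; const char *customer; Item items[2]; int n_items; } Order;

int main(void) {
    Order o = { 1, "Kim", { { 10, "pen" }, { 11, "ink" } }, 2 };

    /* table ORDERS(id, customer): one row per <order> element */
    printf("ORDERS: %d | %s\n", o.id, o.customer);

    /* table ITEMS(id, order_id, name): one row per <item>, with a
       foreign key (order_id) encoding the lost nesting */
    for (int i = 0; i < o.n_items; i++)
        printf("ITEMS: %d | %d | %s\n", o.items[i].id, o.id, o.items[i].name);
    return 0;
}
```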

A Trustworthiness Improving Link Evaluation Technique for LOD considering the Syntactic Properties of RDFS, OWL, and OWL2 (RDFS, OWL, OWL2의 문법특성을 고려한 신뢰향상적 LOD 연결성 평가 기법)

  • Park, Jaeyeong;Sohn, Yonglak
    • Journal of KIISE: Databases / v.41 no.4 / pp.226-241 / 2014
  • LOD (Linked Open Data) is composed of RDF triples which are based on ontologies. They are identified, linked, and accessed under the principles of linked data. Publications of LOD data sets lead to the extension of the LOD cloud and ultimately progress toward the web of data. However, if ontologically identical things in different LOD data sets are identified by different URIs, it is difficult to recognize their sameness and to provide trustworthy links among them. To solve this problem, we suggest a Trustworthiness Improving Link Evaluation (TILE) technique. TILE evaluates links in four steps. Step 1 is to consider the inference properties of the syntactic elements in an LOD data set and generate the RDF triples that until then had existed only implicitly. In Step 2, TILE appoints predicates, compares their objects across triples, and then evaluates links between the subjects of those triples. In Step 3, TILE evaluates each predicate's syntactic property from the standpoints of subject description and vocabulary definition and adjusts the evaluation results of Step 2 accordingly. The syntactic elements considered by TILE cover RDFS, OWL, and OWL2, which are recommended by W3C. Finally, TILE has the publisher of the LOD data set review the evaluation results and decide whether to re-evaluate or finalize the links. This causes the publisher's responsibility to be reflected in the trustworthiness of the links among the published data.
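
As a rough sketch of Step 2 only (comparing the objects of an appointed predicate to score a link between two subjects), consider the C fragment below; the data structures and the ratio-based score are illustrative assumptions, not TILE's implementation.

```c
#include <stdio.h>
#include <string.h>

/* An RDF triple reduced to three strings for illustration. */
typedef struct { const char *s, *p, *o; } Triple;

/* Score the link between the subjects of two triple sets by the
   fraction of matching objects under one appointed predicate. */
double link_score(const Triple *a, int na, const Triple *b, int nb,
                  const char *predicate) {
    int compared = 0, matched = 0;
    for (int i = 0; i < na; i++) {
        if (strcmp(a[i].p, predicate) != 0) continue;
        for (int j = 0; j < nb; j++) {
            if (strcmp(b[j].p, predicate) != 0) continue;
            compared++;
            if (strcmp(a[i].o, b[j].o) == 0) matched++;
        }
    }
    return compared ? (double)matched / compared : 0.0;
}

int main(void) {
    Triple a[] = { { "ex:s1", "foaf:name", "Kim" } };
    Triple b[] = { { "ex:s2", "foaf:name", "Kim" } };
    printf("link score = %.2f\n", link_score(a, 1, b, 1, "foaf:name"));
    return 0;
}
```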

A Study on the Contextual Layout Process of Exhibit Space: With a Focus on the Expo Comm Wireless Korea '99 KT Pavilion (전시공간 맥락화 구성 프로세스 사례연구 expo Comm Wireless Korea '99 -한국통신관을 중심으로)

  • 김준호
    • Archives of Design Research / v.13 no.1 / pp.121-130 / 2000
  • This study gathers and forms the practical process of structuring an exhibition space and its application. To model an engaging exhibition space, achieved through the pertinent adjustment and production of the interplay between spectator and exhibition constituents along an exhibition storyline, the study systematically follows this method through a case example. An exhibition space becomes an impressive and attractive space through proper adjustment and production of the man-machine/computer interface in the exhibition storyline. Quantitative space is transformed into qualitative space through the transmission of the exhibition, which can be defined as the point of an exhibition structure, and it can further be transformed into a bodily-sensed space inherently full of interactive constituents. Changeable exhibition constituents (exhibition items and text) that appear sporadically, in untrimmed original form, during the structuring of an exhibition yield a much higher-quality optimal solution when they are aggregated again into the contextual flow of a synthetic exhibition scenario. Reconstructing individual exhibition constituents into a new story, that is, transferring exhibition text into exhibition context, breathes life into an exhibition by maximizing its effect. However, since no single pattern of exhibition structure can meet the varied space-time of every exhibition environment, this study presents, centered on the exhibition plan of the Korea Telecom pavilion at Expo Comm Wireless Korea '99, the sustained process from design proposal, research, and analysis to synthesis, development, transmission, and management as an example of its application.

Dosimetry and Three Dimensional Planning for Stereotactic Radiosurgery with SIEMENS 6-MV LINAC (6-MV선형가속기를 이용한 입체방사선수술의 선량측정 및 3차원적 치료계획)

  • Choi Dong-Rak;Cho Byong Chul;Suh Tae-Suk;Chung Su Mi;Choi Il Bong;Shinn Kyung Sub
    • Radiation Oncology Journal / v.11 no.1 / pp.175-181 / 1993
  • Radiosurgery requires an integrated procedure in which special devices and computer systems are needed for localization, dose planning, and treatment. The aim of this work was to verify the overall mechanical accuracy of our LINAC and to develop a dose calculation algorithm for LINAC radiosurgery. The alignment of the treatment machine and the performance of the entire system were extensively tested, and basic data such as percent depth dose, off-axis ratio, and output factor were measured. A three-dimensional treatment planning system for stereotactic radiosurgery was developed, using an IBM personal computer (IBM Personal System/2, Model 80386, IBM Co., USA) and the C programming language to calculate the dose distribution. As a result, deviations at the isocenter under gantry and table rotation for our treatment machine were acceptable, being less than 2 mm. In the phantom experiments, the isocenter was focused to within an error of less than 2 mm. Finally, the accuracy of our three-dimensional planning system was confirmed by film dosimetry in a spherical phantom.
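
The paper's 3-D algorithm is not given in the abstract, but the beam data it lists (percent depth dose, off-axis ratio, output factor) combine in the standard point-dose hand calculation; the sketch below uses hypothetical values and is not the authors' code.

```c
#include <stdio.h>

/* Standard monitor-unit point-dose model: dose = MU x output factor
   x (PDD / 100) x off-axis ratio. A full 3-D calculation would sum
   such contributions over arc segments; that part is omitted here. */
double point_dose(double mu, double output_factor,
                  double pdd_percent, double off_axis_ratio) {
    return mu * output_factor * (pdd_percent / 100.0) * off_axis_ratio;
}

int main(void) {
    /* hypothetical values for a single 6-MV beam segment */
    printf("dose = %.2f cGy\n", point_dose(100.0, 0.97, 78.5, 0.95));
    return 0;
}
```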

Development of a Chinese cabbage model using Microsoft Excel/VBA (엑셀/VBA를 이용한 배추 모형 제작)

  • Moon, Kyung Hwan;Song, Eun Young;Wi, Seung Hwan;Oh, Sooja
    • Korean Journal of Agricultural and Forest Meteorology / v.20 no.2 / pp.228-232 / 2018
  • Process-based crop models have been used to assess the impact of climate change on crop production. These models are implemented in procedural or object-oriented programming languages including FORTRAN, C++, Delphi, and Java, which have a steep learning curve. The requirement for a high level of computer programming is one of the barriers to efforts to develop and improve crop models based on biophysical processes. In this study, we attempted to develop a Chinese cabbage model using Microsoft Excel with Visual Basic for Applications (VBA), which is easy enough for most agricultural scientists to use to develop a simple crop growth simulation model. Results from Soil-Plant-Atmosphere Research (SPAR) experiments under six temperature conditions were used to determine the parameters of the Chinese cabbage model. During the growing season in the SPAR chambers, the number of leaves, leaf area, and growth rate of the plants were measured six times. Leaf photosynthesis was also measured using an LI-6400 Portable Photosynthesis System. The Farquhar, von Caemmerer, and Berry (FvCB) model was used to simulate the leaf-level photosynthesis process, and a sun/shade model was used to scale up to canopy-level photosynthesis. An Excel add-in, a small VBA program to assist crop modeling, was used to implement the Chinese cabbage model within Excel, organizing all of the equations into a single crop model. The model was able to simulate hourly changes in photosynthesis, growth rate, and other physiological variables using meteorological input data. Estimates and measurements of dry weight obtained from the six SPAR chambers were linearly related ($R^2=0.985$). This result indicates that Excel/VBA can be widely used by crop scientists to develop crop models.
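
The FvCB model cited above has a well-known closed form: net assimilation is the minimum of the Rubisco-limited and RuBP-regeneration-limited rates minus day respiration. A minimal sketch follows (in C rather than the paper's VBA), using typical 25 °C literature constants rather than the paper's fitted parameters.

```c
#include <stdio.h>
#include <math.h>

/* FvCB net assimilation A = min(Ac, Aj) - Rd, where
   Ac = Vcmax (Ci - G*) / (Ci + Kc (1 + O / Ko))   Rubisco-limited
   Aj = J (Ci - G*) / (4 Ci + 8 G*)                RuBP-regeneration-limited */
double fvcb(double ci, double vcmax, double j, double rd,
            double kc, double ko, double o2, double gamma_star) {
    double ac = vcmax * (ci - gamma_star) / (ci + kc * (1.0 + o2 / ko));
    double aj = j * (ci - gamma_star) / (4.0 * ci + 8.0 * gamma_star);
    return fmin(ac, aj) - rd;
}

int main(void) {
    /* typical C3 constants at 25 C: Ci, Kc, gamma* in umol/mol; Ko, O in mmol/mol */
    double a = fvcb(250.0, 60.0, 120.0, 1.0, 404.9, 278.4, 210.0, 42.75);
    printf("A = %.2f umol m-2 s-1\n", a);   /* about 12 */
    return 0;
}
```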

DEM_Comp Software for Effective Compression of Large DEM Data Sets (대용량 DEM 데이터의 효율적 압축을 위한 DEM_Comp 소프트웨어 개발)

  • Kang, In-Gu;Yun, Hong-Sik;Wei, Gwang-Jae;Lee, Dong-Ha
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.28 no.2 / pp.265-271 / 2010
  • This paper discusses a new software package, DEM_Comp, developed for effectively compressing large digital elevation model (DEM) data sets based on Lempel-Ziv-Welch (LZW) compression and Huffman coding. DEM_Comp was developed in C++ for Windows operating systems and was tested on various sites with different terrain attributes, and the results were evaluated. Recently, high-resolution DEMs have been produced with new equipment and the related technologies of LiDAR (Light Detection and Ranging) and SAR (Synthetic Aperture Radar). DEM compression is useful because it reduces disk space and transmission bandwidth. Generally, data compression is divided into two processes: i) analyzing the relationships in the data and ii) deciding on the compression and storage methods. DEM_Comp uses a three-step compression algorithm consisting of pre-processing of the regular-grid DEM, Lempel-Ziv compression, and Huffman coding. When pre-processing alone was used on high- and low-relief terrain, the efficiency was approximately 83%, but after completing all three steps of the algorithm, it increased to 97%. Compared with general commercial compression software, this represents approximately 14% better performance. DEM_Comp as developed in this research thus offers a more efficient way of distributing, storing, and managing large high-resolution DEMs.
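
To make the Lempel-Ziv step concrete, here is a minimal string-based LZW encoder in C; it is only an illustration of the coding principle, not DEM_Comp itself, which additionally pre-processes the gridded elevations and Huffman-codes the output.

```c
#include <stdio.h>
#include <string.h>

#define MAX_ENTRIES 4096   /* 12-bit code space, as in classic LZW */
#define MAX_LEN     64

static char dict[MAX_ENTRIES][MAX_LEN];
static int  dict_size;

static int find(const char *s) {
    for (int i = 0; i < dict_size; i++)
        if (strcmp(dict[i], s) == 0) return i;
    return -1;
}

int main(void) {
    const char *input = "ABABABA";   /* stand-in for a run of DEM samples */
    char cur[MAX_LEN] = "";

    /* seed the dictionary with all single symbols */
    for (int c = 0; c < 256; c++) { dict[c][0] = (char)c; dict[c][1] = '\0'; }
    dict_size = 256;

    for (const char *p = input; *p; p++) {
        char next[MAX_LEN];
        snprintf(next, sizeof next, "%s%c", cur, *p);
        if (find(next) >= 0) {
            strcpy(cur, next);                 /* extend the current match */
        } else {
            printf("%d ", find(cur));          /* emit code for longest match */
            if (dict_size < MAX_ENTRIES) strcpy(dict[dict_size++], next);
            cur[0] = *p; cur[1] = '\0';
        }
    }
    if (cur[0]) printf("%d\n", find(cur));     /* prints: 65 66 256 258 */
    return 0;
}
```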

Test Time Reduction of BIST by Primary Input Grouping Method (입력신호 그룹화 방법에 의한 BIST의 테스트 시간 감소)

  • Chang, Yoon-Seok;Kim, Dong-Wook
    • Journal of the Institute of Electronics Engineers of Korea SD / v.37 no.8 / pp.86-96 / 2000
  • Among the costs that increase with integration density, test cost is representative. As the relative cost of hardware decreases, BIST has drawn attention as a future-oriented test method; its biggest drawback is the test time required to obtain acceptable fault coverage. This paper proposes a BIST implementation method to reduce test time. The method uses input grouping and test point insertion, where the definition of a test point differs from the conventional one: test points are defined on the basis of the internal nodes that serve as the reference points of the input grouping and as the merging points of the grouped signals. The main algorithms of the proposed method were implemented in the C language, and various circuits were used in the experiments. The results showed that the test time could be reduced to as little as $1/2^{40}$ of the pseudo-random pattern case, while fault coverage also increased compared with the conventional BIST method. The relative hardware overhead of the proposed method decreases as the size of the circuit under test increases, and the delay overhead introduced by the BIST circuitry is negligible compared with that of the original circuit. This means the proposed method can be applied efficiently to large VLSI circuits.
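
The pseudo-random patterns mentioned above are conventionally produced in BIST by a linear feedback shift register (LFSR). The sketch below shows such a generator with an assumed 16-bit maximal-length tap set; it is generic, not the paper's pattern generator.

```c
#include <stdio.h>
#include <stdint.h>

/* 16-bit Fibonacci LFSR with taps 16, 14, 13, 11
   (polynomial x^16 + x^14 + x^13 + x^11 + 1, maximal length) */
static uint16_t lfsr_next(uint16_t s) {
    uint16_t bit = ((s >> 0) ^ (s >> 2) ^ (s >> 3) ^ (s >> 5)) & 1u;
    return (uint16_t)((s >> 1) | (bit << 15));
}

int main(void) {
    uint16_t pattern = 0xACE1u;               /* any non-zero seed works */
    for (int i = 0; i < 8; i++) {
        printf("test pattern %d: 0x%04X\n", i, pattern);
        pattern = lfsr_next(pattern);         /* feeds the circuit under test */
    }
    return 0;
}
```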

Research for the Element to Analyze the Performance of Modern-Web-Browser Based Applications (모던 웹 브라우저(Modern-Web-Browser) 기반 애플리케이션 성능분석을 위한 요소 연구)

  • Park, Jin-tae;Kim, Hyun-gook;Moon, Il-young
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2018.10a / pp.278-281 / 2018
  • Early web technology served to show text information through a browser. As web technology has advanced, however, it has become possible to show large amounts of multimedia data through browsers. Web technologies are applied in a variety of fields such as sensor networks, hardware control, and data collection and analysis for big data and AI services. As a result, standards have been prepared for the Internet of Things, in which a sensor is typically controlled via HTTP communication and information is provided to users through a web browser installed on the device's interface. In addition, the recent development of WebAssembly has enabled 3D objects and virtual/augmented reality content that previously could not run in web browsers to be executed from native languages such as C. The factors used to evaluate existing web applications include performance, network resources, and security; however, since web applications are now applied in so many areas, it is time to revisit and review these factors. In this paper, we analyze the factors used to assess the performance of a web application. By reviewing each factor, its main points, and what needs to be supplemented, we intend to establish indicators for the development of web-based applications.
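
As a minimal illustration of the C-to-WebAssembly path mentioned above (assuming the Emscripten toolchain, which the paper does not specify):

```c
/* add.c: a C function exported to JavaScript via WebAssembly.
   Hypothetical build command: emcc add.c -O2 -o add.html */
#include <emscripten/emscripten.h>

/* EMSCRIPTEN_KEEPALIVE keeps the symbol callable from the browser */
EMSCRIPTEN_KEEPALIVE
int add(int a, int b) {
    return a + b;
}
```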

A Study on Motion Estimator Design Using DCT DC Value (DCT 직류 값을 이용한 움직임 추정기 설계에 관한 연구)

  • Lee, Gwon-Cheol;Park, Jong-Jin;Jo, Won-Gyeong
    • Journal of the Institute of Electronics Engineers of Korea SP / v.38 no.3 / pp.258-268 / 2001
  • Compression is necessarily used to transmit high-quality moving pictures, which contain large amounts of data, in image processing. In moving picture compression, motion estimation algorithms are used to reduce temporal redundancy. The block matching algorithms usually employed are divided into partial search and full search algorithms. The full search algorithm used in this paper compares the reference block with every block in the search window; it is very effective and has a simple data flow and control circuit, but the bigger the search window, the larger the hardware, because a large amount of computation is needed. In this paper, we design a full search block matching motion estimator. Using the DCT DC values, we determine luminance and apply a 3-bit compare-selector based on bit planes to I (intra-coded) pictures instead of 8-bit luminance signals; we also suggest using the same selected bits for P (predictive-coded) and B (bidirectionally coded) pictures. We compared the proposed method against the baseline full search by PSNR (peak signal-to-noise ratio) using a C language model, with an 8×8 reference block, a 24×24 search window, and 352×288 gray-scale standard video images; the difference is too small to be noticeable. The proposed motion estimator reduces hardware size by 38.3% for structure I and 30.7% for structure II, and memory by 31.3% for both structures.
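
The full search at the core of the design can be sketched as an exhaustive SAD (sum of absolute differences) scan; the 8×8 block and 24×24 window match the experiment above, while everything else is illustrative C, not the hardware design.

```c
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>
#include <limits.h>

#define B 8    /* reference block size (8x8, as in the paper) */
#define W 24   /* search window size (24x24) */

/* Exhaustive block matching: try every (dx, dy) and keep the best SAD. */
static void full_search(uint8_t ref[B][B], uint8_t win[W][W],
                        int *best_dx, int *best_dy) {
    int best = INT_MAX;
    for (int dy = 0; dy <= W - B; dy++)
        for (int dx = 0; dx <= W - B; dx++) {
            int sad = 0;
            for (int y = 0; y < B; y++)
                for (int x = 0; x < B; x++)
                    sad += abs((int)ref[y][x] - (int)win[dy + y][dx + x]);
            if (sad < best) { best = sad; *best_dx = dx; *best_dy = dy; }
        }
}

int main(void) {
    static uint8_t ref[B][B], win[W][W];
    /* plant the reference pattern at offset (9, 5) inside the window */
    for (int y = 0; y < B; y++)
        for (int x = 0; x < B; x++) {
            ref[y][x] = (uint8_t)(16 * y + x + 1);
            win[5 + y][9 + x] = ref[y][x];
        }
    int dx, dy;
    full_search(ref, win, &dx, &dy);
    printf("best match at dx=%d, dy=%d\n", dx, dy);   /* dx=9, dy=5 */
    return 0;
}
```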
