Title/Summary/Keyword: software architecture


3D Architecture Modeling and Quantity Estimation using SketchUp (스케치업을 활용한 3D 건축모델링 및 물량산출)

  • Kim, Min Gyu;Um, Dae Yong
    • Asia-pacific Journal of Multimedia Services Convergent with Art, Humanities, and Sociology / v.7 no.6 / pp.701-708 / 2017
  • The construction cost is estimated from the drawings at the design stage, and the constructor seeks construction methods that are efficient and appropriate to the budget. Accurate quantity estimation and budgeting are critical to determining whether a project is profitable. However, since this process is mostly performed manually from 2D drawings, errors are likely to occur, and BIM (Building Information Modeling) software, which can automate it, is very expensive and difficult to apply in the field. In this study, 3D architectural modeling was performed using SketchUp, a 3D modeling package, and a methodology for quantity estimation is suggested. As a result, 3D modeling was performed effectively using the 2D drawings of buildings. Based on the modeling results, the difference between the quantities estimated from the 2D drawings and from the 3D model could be calculated. The research suggests that 3D modeling with SketchUp and the resulting quantity calculation can prevent the errors of the conventional 2D calculation method. If the applicability of the method is verified through continued research, it will contribute to increasing the efficiency of architectural modeling and quantity estimation work.
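
A minimal quantity-takeoff sketch in the spirit of the abstract's 3D approach: sum element volumes from a model and compare them with a 2D-based figure. The element names, dimensions, and the 2D estimate are illustrative stand-ins, not data from the paper, and this is plain Python rather than SketchUp's own scripting API.

```python
# Hypothetical building elements taken off a 3D model (dimensions in metres).
elements = {
    "wall_1": (12.0, 0.2, 3.0),   # length, thickness, height
    "wall_2": (8.0, 0.2, 3.0),
    "slab_1": (12.0, 8.0, 0.25),  # length, width, depth
}

# Concrete volume from the 3D model vs. a hypothetical manual 2D takeoff.
total_3d = sum(x * y * z for x, y, z in elements.values())
total_2d = 37.5  # illustrative figure read off the 2D drawings
print(f"3D takeoff: {total_3d:.2f} m3, 2D takeoff: {total_2d:.2f} m3, "
      f"difference: {total_3d - total_2d:+.2f} m3")
```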

Violation Detection of Application Network QoS using Ontology in SDN Environment (SDN 환경에서 온톨로지를 활용한 애플리케이션 네트워크의 품질 위반상황 식별 방법)

  • Hwang, Jeseung;Kim, Ungsoo;Park, Joonseok;Yeom, Keunhyuk
    • The Journal of Korean Institute of Next Generation Computing / v.13 no.6 / pp.7-20 / 2017
  • The advancement of cloud computing and big data, and the considerable growth of traffic, have increased the complexity of existing networks and the inefficiency of their management. The software-defined networking (SDN) environment was developed to solve this problem. SDN enables us to control network equipment through programming by separating the equipment's transmission and control functions. Accordingly, several studies have been conducted to improve the performance of SDN controllers, such as methods of connecting existing legacy equipment with SDN, packet management methods for efficient data communication, and methods of distributing controller load in a centralized architecture. However, there is insufficient research on controlling SDN in terms of the quality experienced by the applications using the network. To support the establishment and change of routing paths that meet the required network service quality, we need a mechanism to identify network requirements based on a contract for application network service quality, to collect information about the current network status, and to identify violations of network service quality. This study proposes a method of identifying quality violations of network paths through an ontology to ensure the network service quality of applications and provide efficient services in an SDN environment.
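
Setting the ontology machinery aside, the violation-identification step the abstract describes reduces to comparing a path's measured state against contracted thresholds. A minimal sketch of that comparison follows; the contract fields and values are assumptions, not the paper's ontology vocabulary.

```python
# Contracted QoS level for an application's network path (assumed fields).
contract = {"max_latency_ms": 20.0, "min_bandwidth_mbps": 100.0, "max_loss_pct": 0.5}
# Current network status collected from monitoring (illustrative values).
measured = {"latency_ms": 35.2, "bandwidth_mbps": 142.0, "loss_pct": 0.1}

def violations(contract, measured):
    """Return the list of contract clauses the measured path violates."""
    v = []
    if measured["latency_ms"] > contract["max_latency_ms"]:
        v.append("latency")
    if measured["bandwidth_mbps"] < contract["min_bandwidth_mbps"]:
        v.append("bandwidth")
    if measured["loss_pct"] > contract["max_loss_pct"]:
        v.append("loss")
    return v

print(violations(contract, measured))  # ['latency'] -> trigger a path change
```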

Empirical and Numerical Analyses of a Small Planing Ship Resistance using Longitudinal Center of Gravity Variations (경험식과 수치해석을 이용한 종방향 무게중심 변화에 따른 소형선박의 저항성능 변화에 관한 연구)

  • Michael;Jun-Taek Lim;Nam-Kyun Im;Kwang-Cheol Seo
    • Journal of the Korean Society of Marine Environment & Safety / v.29 no.7 / pp.971-979 / 2023
  • Small ships (<499 GT) constitute 46% of existing ships and therefore account for a relatively high share of CO2 emissions. Operating in optimal trim conditions can reduce a ship's resistance, which results in fewer greenhouse-gas emissions. An affordable approach to trim optimization is to adjust the weight distribution to obtain an optimum longitudinal center of gravity (LCG). Therefore, in this study, the effect of LCG changes on the resistance of a small planing ship is studied using empirical and numerical analyses. The Savitsky method, employed through Maxsurf Resistance, and the STAR-CCM+ commercial computational fluid dynamics (CFD) software are used for the empirical and numerical analyses, respectively. Finally, the total resistance from the ship design process is compared to obtain the optimum LCG. In summary, using numerical analysis, the optimum LCG is achieved at 46.2% of the length overall (LoA) at Froude number 0.56 and at 43.4% LoA at Froude number 0.63, which provides a significant resistance reduction of 41.12-45.16% compared to the reference point at 29.2% LoA.
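
The study's workflow amounts to sweeping LCG positions and picking the one with minimum total resistance. A sketch of that sweep is below; the resistance function is a hypothetical bowl-shaped stand-in for the Savitsky/Maxsurf or STAR-CCM+ computation, with its optimum placed near the values reported above.

```python
import numpy as np

def total_resistance_kN(lcg_pct_loa, froude):
    # Stand-in response surface only, NOT the Savitsky method or CFD;
    # the optimum is seeded from the paper's reported results.
    opt = 46.2 if froude < 0.6 else 43.4
    return 20.0 + 0.05 * (lcg_pct_loa - opt) ** 2

for fr in (0.56, 0.63):
    lcg = np.linspace(29.2, 50.0, 209)      # candidate LCG positions [% LoA]
    r = total_resistance_kN(lcg, fr)
    best = lcg[np.argmin(r)]
    print(f"Fr={fr}: optimum LCG ~ {best:.1f}% LoA, R={r.min():.2f} kN")
```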

Analysis on the Positional Accuracy of the Non-orthogonal Two-pair kV Imaging Systems for Real-time Tumor Tracking Using XCAT (XCAT를 이용한 실시간 종양 위치 추적을 위한 비직교 스테레오 엑스선 영상시스템에서의 위치 추정 정확도 분석에 관한 연구)

  • Jeong, Hanseong;Kim, Youngju;Oh, Ohsung;Lee, Seho;Jeon, Hosang;Lee, Seung Wook
    • Progress in Medical Physics / v.26 no.3 / pp.143-152 / 2015
  • In this study, we design the architecture of a kV imaging system for tumor tracking in a dual-head gantry system and analyze its accuracy through simulations. We established mathematical formulas and algorithms to track the tumor position with two-pair kV imaging systems placed in non-orthogonal positions. The algorithms are designed in a homogeneous coordinate framework, and the positions of the sources and the detector coordinates are used to estimate the tumor position. 4D XCAT (4D extended cardiac-torso) software was used in the simulation to identify the influence of the angle between the two-pair kV imaging systems and of the detector resolution on the accuracy of the position estimation. A metal fiducial marker was inserted into a numerical human phantom of XCAT, and kV projections were acquired at various angles and resolutions using the CT projection software of XCAT. As a result, a positional accuracy of better than about 1 mm was achieved when the detector resolution is finer than 1.5 mm/pixel and the angle between the kV imaging systems is approximately between 50° and 90°. When the resolution is coarser than 1.5 mm/pixel, the positional errors exceeded 1 mm and the error fluctuation across angles was greater. The detector resolution was critical to the positional accuracy of tumor tracking and determines the acceptable range of angles between the kV imaging systems. We also found that the positional accuracy analysis method using XCAT developed in this study is highly useful and will be an invaluable tool for the further refined design of kV imaging systems for tumor tracking.
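
The paper's estimation works in homogeneous coordinates; the underlying geometric step can be illustrated more simply as a least-squares intersection of the two source-to-marker rays. A sketch follows, with illustrative geometry values that are not taken from the paper.

```python
import numpy as np

def closest_point_to_rays(sources, directions):
    # Least-squares point p minimizing sum_i || (I - d_i d_i^T)(p - s_i) ||^2,
    # i.e. the point closest to all (possibly skew) source-to-marker rays.
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for s, d in zip(sources, directions):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)   # projector orthogonal to the ray
        A += P
        b += P @ s
    return np.linalg.solve(A, b)

# Two kV source positions ~60 degrees apart and rays toward the detected
# marker pixel (all coordinates illustrative, in mm).
src = [np.array([1000.0, 0.0, 0.0]), np.array([500.0, 866.0, 0.0])]
dirs = [np.array([-1.0, 0.02, 0.01]), np.array([-0.5, -0.87, 0.02])]
print(closest_point_to_rays(src, dirs))  # estimated 3D marker position
```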

A Polarization-based Frequency Scanning Interferometer and the Measurement Processing Acceleration based on Parallel Programing (편광 기반 주파수 스캐닝 간섭 시스템 및 병렬 프로그래밍 기반 측정 고속화)

  • Lee, Seung Hyun;Kim, Min Young
    • Journal of the Institute of Electronics and Information Engineers / v.50 no.8 / pp.253-263 / 2013
  • The frequency scanning interferometry (FSI) system, one of the most promising optical surface measurement techniques, generally achieves superior optical performance compared with other 3-dimensional measuring methods because its hardware structure is fixed in operation and only the light frequency is scanned over a specific spectral band, without vertical scanning of the target surface or the objective lens. An FSI system collects a set of interference fringe images while changing the frequency of the light source. It then transforms the intensity data of the acquired images into frequency information and calculates the height profile of target objects through frequency analysis based on the Fast Fourier Transform (FFT). However, it still suffers from optical noise on target surfaces and from relatively long processing times due to the number of images acquired in the frequency scanning phase. 1) A polarization-based frequency scanning interferometry (PFSI) system is proposed for robustness to optical noise. It consists of a tunable laser as the light source, a λ/4 plate in front of the reference mirror, a λ/4 plate in front of the target object, a polarizing beam splitter (PBS), a polarizer in front of the image sensor, a polarizer in front of the fiber-coupled light source, and a λ/2 plate between the PBS and the polarizer of the light source. Using the proposed system, we can solve the problem of low-contrast fringe images by exploiting polarization, and we can control the light distribution between the object beam and the reference beam. 2) A signal processing acceleration method is proposed for PFSI based on a parallel processing architecture consisting of parallel processing hardware and software, namely a Graphics Processing Unit (GPU) and the Compute Unified Device Architecture (CUDA). As a result, the processing time reaches the tact-time level required for real-time processing. Finally, the proposed system is evaluated in terms of accuracy and processing speed through a series of experiments, and the obtained results show the effectiveness of the proposed system and method.
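
The FFT-based height extraction described above can be sketched per pixel: stack the fringe frames along the frequency-scan axis, take an FFT, and map the peak bin to a height. All constants below are synthetic illustrations, not the paper's optical parameters, and the CPU version here stands in for the GPU/CUDA implementation.

```python
import numpy as np

# Synthetic stack of fringe frames recorded while scanning the wavenumber k:
# per pixel, I(k) = A + B*cos(2*k*h). All constants are illustrative.
N, H, W = 256, 4, 4
dk = 1.0e4                               # wavenumber step per frame [rad/m]
k = np.arange(N) * dk
h_true = 50e-6                           # 50 um surface height
frames = 1.0 + 0.5 * np.cos(2.0 * k * h_true)[:, None, None] * np.ones((N, H, W))

# Per-pixel FFT along the scan axis; the dominant bin encodes the height.
spec = np.abs(np.fft.rfft(frames - frames.mean(axis=0), axis=0))
peak = spec[1:].argmax(axis=0) + 1       # skip the DC bin
h_map = peak * np.pi / (N * dk)          # cos(2*k*h) has cycle frequency h/pi in k
print(h_map[0, 0])                       # ~5e-05 m, close to h_true
```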

A Study on the Selection of Parameter Values of FUSION Software for Improving Airborne LiDAR DEM Accuracy in Forest Area (산림지역에서의 LiDAR DEM 정확도 향상을 위한 FUSION 패러미터 선정에 관한 연구)

  • Cho, Seungwan;Park, Joowon
    • Journal of Korean Society of Forest Science / v.106 no.3 / pp.320-329 / 2017
  • This study evaluates whether the accuracy of a LiDAR DEM is affected by changes across five input levels ('1', '3', '5', '7', and '9') of the median parameter (F_md) and mean parameter (F_mn) of the Filtering Algorithm (FA) in the GroundFilter module, and of the median parameter (I_md) and mean parameter (I_mn) of the Interpolation Algorithm (IA) in the GridSurfaceCreate module of FUSION, in order to present the combination of parameter levels producing the most accurate LiDAR DEM. Accuracy is measured by the residuals calculated as the difference between field elevation values and their corresponding DEM elevation values. A multi-way ANOVA is used to statistically examine whether parameter level changes affect the means of the residuals, with the Tukey HSD conducted as a post-hoc test. The results of the multi-way ANOVA show that changes in the levels of F_md, F_mn, and I_mn have significant effects on DEM accuracy, with a significant interaction effect between F_md and F_mn. Therefore, the levels of F_md and F_mn, and the interaction between the two variables, are considered factors affecting LiDAR DEM accuracy, as is the level of I_mn. According to the Tukey HSD test on the combined levels of F_md*F_mn, the '9*3' combination yields the residual mean with the highest accuracy while the '1*1' combination yields the lowest. Regarding the I_mn levels, both '3' and '1' provide the highest accuracy. This study can contribute to improving the accuracy of forest attributes as well as of the topographic information extracted from LiDAR data.
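
A sketch of the statistical procedure the study describes, assuming a hypothetical CSV of check points with columns for the four parameter levels and the field/DEM elevations; the model formula includes the F_md x F_mn interaction the paper found significant.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical input: one row per check point and parameter combination,
# with columns F_md, F_mn, I_md, I_mn (levels 1,3,5,7,9) and elevations.
df = pd.read_csv("dem_checkpoints.csv")
df["residual"] = df["field_elev"] - df["dem_elev"]

# Multi-way ANOVA with the F_md x F_mn interaction.
model = ols("residual ~ C(F_md) * C(F_mn) + C(I_md) + C(I_mn)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Tukey HSD post-hoc test on the F_md*F_mn level combinations.
combo = df["F_md"].astype(str) + "*" + df["F_mn"].astype(str)
print(pairwise_tukeyhsd(df["residual"], combo))
```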

THE EFFECTS OF DIETARY CONSISTENCY ON THE TRABECULAR BONE ARCHITECTURE IN GROWING MOUSE MANDIBULAR CONDYLE : A STUDY USING MICRO-COMPUTED TOMOGRAPHY (성장 중인 쥐에서 음식물의 경도가 하악 과두의 해면골에 미치는 영향 : 미세전산화 단층촬영을 이용한 연구)

  • Youn, Seok-Hee;Lee, Sang-Dae;Kim, Jung-Wook;Lee, Sang-Hoon;Hahn, Se-Hyun;Kim, Chong-Chul
    • Journal of the Korean Academy of Pediatric Dentistry / v.31 no.2 / pp.228-235 / 2004
  • The development and proliferation of the mandibular condyle can be altered by changes in the biomechanical environment of the temporomandibular joint. Biomechanical loads were varied by feeding diets of different consistencies. The purpose of the present study was to determine whether changes in masticatory forces induced by a soft diet can alter the trabecular bone morphology of the growing mouse mandibular condyle, by means of micro-computed tomography. Thirty-six female, 21-day-old C57BL/6 mice were randomly divided into two groups. Mice in the hard-diet control group were fed standard hard rodent pellets for 8 weeks. The soft-diet group received ground soft diets for 8 weeks, and their lower incisors were shortened with a wire cutter twice a week to reduce incision. After 8 weeks, all animals were weighed and sacrificed, and the right mandibular condyle was removed. High-spatial-resolution tomography was performed with a Skyscan Micro-CT 1072. Cross-sections were scanned and three-dimensional images were reconstructed from the 2D sections. Morphometric and nonmetric parameters such as bone volume (BV), bone surface (BS), total volume (TV), bone volume fraction (BV/TV), surface-to-volume ratio (BS/BV), trabecular thickness (Tb.Th.), structure model index (SMI), and degree of anisotropy (DA) were directly determined with the software package of the micro-CT system. From the directly determined indices, the trabecular number (Tb.N.) and trabecular separation (Tb.Sp.) were calculated according to the parallel plate model of Parfitt et al. After micro-tomographic imaging, the samples were decalcified, dehydrated, embedded, and sectioned for histological observation. The results were as follows: 1. The bone volume fraction, trabecular thickness (Tb.Th.), and trabecular number (Tb.N.) were significantly decreased in the soft-diet group compared with the control group (p<0.05). 2. The trabecular separation (Tb.Sp.) was significantly increased in the soft-diet group (p<0.05). 3. There were no significant differences in the surface-to-volume ratio (BS/BV), structure model index (SMI), or degree of anisotropy (DA) between the soft-diet group and the hard-diet control group (p>0.05). 4. Histological sections showed that the thickness of the proliferative layer and the total cartilage thickness were significantly reduced in the soft-diet group.
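
The derived indices follow from the parallel plate model of Parfitt et al. cited above; a small sketch of those two formulas, with illustrative inputs rather than the study's measurements:

```python
def plate_model_indices(BV, TV, Tb_Th):
    """Derived trabecular indices under the parallel plate model.

    BV, TV in mm^3; Tb_Th (directly measured trabecular thickness) in mm.
    """
    BVTV = BV / TV                  # bone volume fraction
    Tb_N = BVTV / Tb_Th             # trabecular number [1/mm]
    Tb_Sp = 1.0 / Tb_N - Tb_Th      # trabecular separation [mm]
    return Tb_N, Tb_Sp

# Illustrative values only (not measurements from the study).
print(plate_model_indices(BV=0.8, TV=2.0, Tb_Th=0.05))
```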


An Ontology Model for Public Service Export Platform (공공 서비스 수출 플랫폼을 위한 온톨로지 모형)

  • Lee, Gang-Won;Park, Sei-Kwon;Ryu, Seung-Wan;Shin, Dong-Cheon
    • Journal of Intelligence and Information Systems / v.20 no.1 / pp.149-161 / 2014
  • The export of domestic public services to overseas markets faces many potential obstacles stemming from different export procedures, target services, and socio-economic environments. To alleviate these problems, a business incubation platform, as an open business ecosystem, can be a powerful instrument to support the decisions taken by participants and stakeholders. In this paper, we propose an ontology model and its implementation processes for a business incubation platform with an open and pervasive architecture to support public service exports. For the conceptual model of the platform ontology, export case studies are used for requirements analysis. The conceptual model shows the basic structure, with a vocabulary and its meaning, the relationships between ontologies, and key attributes. For the implementation and test of the ontology model, the logical structure is edited using the Protégé editor. The core engine of the business incubation platform is the simulator module, where the various contexts of export businesses should be captured, defined, and shared with other modules through ontologies. It is well known that an ontology, in which concepts and their relationships are represented using a shared vocabulary, is an efficient and effective tool for organizing meta-information to develop structural frameworks in a particular domain. The proposed model consists of five ontologies derived from a requirements survey of major stakeholders and their operational scenarios: service, requirements, environment, enterprise, and country. The service ontology contains several components that can find and categorize public services through a case analysis of public service exports. Its key attributes comprise the categories objective, requirements, activity, and service. The objective category, which has sub-attributes including the operational body (organization) and user, acts as a reference to search and classify public services. The requirements category relates to the functional needs at a particular phase of system (service) design or operation; its sub-attributes are user, application, platform, architecture, and social overhead. The activity category represents business processes during the operation and maintenance phase and has sub-attributes including facility, software, and project unit. The service category, with sub-attributes such as target, time, and place, acts as a reference to sort and classify the public services. The requirements ontology is derived from the basic and common components of public services and target countries; its key attributes are business, technology, and constraints. Business requirements represent the needs of processes and activities for public service export; technology represents the technological requirements for the operation of public services; and constraints represent the business laws, regulations, or cultural characteristics of the target country. The environment ontology is derived from case studies of target countries for public service operation. Its key attributes are user, requirements, and activity: a user includes stakeholders in public services, from citizens to operators and managers; the requirements attribute represents the managerial and physical needs during operation; and the activity attribute represents business processes in detail.
The enterprise ontology is introduced from a previous study, and its attributes are activity, organization, strategy, marketing, and time. The country ontology is derived from the demographic and geopolitical analysis of the target country, and its key attributes are economy, social infrastructure, law, regulation, customs, population, location, and development strategies. The priority list of target services for a certain country and/or the priority list of target countries for certain public services are generated by a matching algorithm. These lists are used as input seeds to simulate the consortium partners and the government's policies and programs. In the simulation, the environmental differences between Korea and the target country can be customized through a gap analysis and a work-flow optimization process. When the process gap between Korea and the target country is too large for a single corporation to cover, a consortium is considered as an alternative, and various alternatives are derived from the capability index of enterprises. For financial packages, a mix of various foreign aid funds can be simulated during this stage. It is expected that the proposed ontology model and the business incubation platform can be used by various participants in the public service export market. It could be especially beneficial to small and medium businesses that have relatively fewer resources and less experience with public service export. We also expect that the open and pervasive service architecture in a digital business ecosystem will help stakeholders find new opportunities through information sharing and collaboration on business processes.
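
As a concrete illustration of the five-ontology structure, a minimal RDF sketch follows. The namespace, property names, and the example individual are stand-ins of my own, not the authors' vocabulary (the paper edits its ontology in Protégé).

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

# Hypothetical namespace for the platform ontology.
EX = Namespace("http://example.org/psx#")
g = Graph()
g.bind("psx", EX)

# The five top-level ontologies named in the abstract.
for cls in ["Service", "Requirements", "Environment", "Enterprise", "Country"]:
    g.add((EX[cls], RDF.type, RDFS.Class))

# A few of the key attributes, attached to their ontology as domain.
for prop, dom in [("objective", "Service"), ("activity", "Service"),
                  ("constraint", "Requirements"), ("regulation", "Country")]:
    g.add((EX[prop], RDF.type, RDF.Property))
    g.add((EX[prop], RDFS.domain, EX[dom]))

# Example individual: a public service with an objective attribute.
g.add((EX["eGovService"], RDF.type, EX["Service"]))
g.add((EX["eGovService"], EX["objective"], Literal("civil administration")))
print(g.serialize(format="turtle"))
```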

Edge to Edge Model and Delay Performance Evaluation for Autonomous Driving (자율 주행을 위한 Edge to Edge 모델 및 지연 성능 평가)

  • Cho, Moon Ki;Bae, Kyoung Yul
    • Journal of Intelligence and Information Systems / v.27 no.1 / pp.191-207 / 2021
  • Mobile communications have evolved rapidly over the decades, mainly focusing on speed-ups to meet growing data demands from 2G to 5G. With the start of the 5G era, efforts are being made to provide customers with various services, such as IoT, V2X, robots, artificial intelligence, augmented and virtual reality, and smart cities, which are expected to change the environment of our lives and industries as a whole. To provide those services, on top of high data speeds, reduced latency and high reliability are critical for real-time services. Thus, 5G has paved the way for service delivery with a maximum speed of 20 Gbps, a delay of 1 ms, and 10^6 connected devices per km². In particular, in intelligent traffic control systems and in services using vehicle-based Vehicle-to-X (V2X) communication, reduced delay and reliability for real-time services are very important in addition to high data speed. 5G communication uses the high frequencies of 3.5 GHz and 28 GHz. These high-frequency waves support high speeds thanks to their straight-line propagation, while their short wavelength and small diffraction angle limit their reach and prevent them from penetrating walls, restricting their use indoors. Therefore, it is difficult to overcome these constraints with existing networks. The underlying centralized SDN also has limited capability in offering delay-sensitive services, because communication with many nodes overloads its processing. Basically, SDN, a structure that separates control-plane signals from data-plane packets, requires control of the delay-related tree structure available in the event of an emergency during autonomous driving. In these scenarios, the network architecture that handles in-vehicle information is a major determinant of delay. Since SDNs with conventional centralized structures have difficulty meeting the desired delay level, studies on the optimal size of SDNs for information processing should be conducted. Thus, SDNs need to be partitioned at a certain scale to construct a new type of network that can efficiently respond to dynamically changing traffic and provide high-quality, flexible services. Moreover, the structure of these networks is closely related to ultra-low latency, high reliability, and hyper-connectivity, and should be based on a new form of split SDN rather than the existing centralized SDN structure, even under worst-case conditions. In such SDN-structured networks, where automobiles pass through small 5G cells very quickly, the information change cycle, the round-trip delay (RTD), and the data processing time of the SDN are highly correlated with the overall delay. Of these, RTD is not a significant factor because the link is fast enough to keep it below 1 ms of delay, but the information change cycle and the SDN data processing time greatly affect the delay. Especially in an emergency in a self-driving environment linked to an ITS (Intelligent Traffic System), which requires low latency and high reliability, information must be transmitted and processed very quickly; this is a case in point where delay plays a very sensitive role. In this paper, we study the SDN architecture for emergencies during autonomous driving and, through simulation, analyze the correlation with the cell layer from which the vehicle should request relevant information according to the information flow.
For the simulation, as the data rate of 5G is high enough, we assume that the information supporting neighboring vehicles reaches the car without errors. Furthermore, we assumed 5G small cells of 50-250 m in cell radius, and the maximum vehicle speed considered was 30-200 km/h, in order to examine the network architecture that minimizes the delay.
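
A back-of-the-envelope sketch of the quantities the simulation varies: the time a vehicle spends inside a small cell, and a delay budget split into the abstract's three components. The cell radii and speeds are the paper's stated ranges; the individual delay component values are illustrative assumptions.

```python
def cell_dwell_time_s(cell_radius_m, speed_kmh):
    """Time a vehicle spends crossing a cell, approximating the path
    as the cell diameter."""
    return 2.0 * cell_radius_m / (speed_kmh / 3.6)

def total_delay_ms(info_cycle_ms, rtd_ms, sdn_proc_ms):
    # Per the abstract, RTD stays under ~1 ms; the information change
    # cycle and the SDN processing time dominate the budget.
    return info_cycle_ms + rtd_ms + sdn_proc_ms

for radius in (50, 250):                 # paper's cell radius range [m]
    for speed in (30, 200):              # paper's speed range [km/h]
        dwell = cell_dwell_time_s(radius, speed)
        print(f"R={radius} m, v={speed} km/h -> dwell {dwell:.2f} s")

print(total_delay_ms(info_cycle_ms=5.0, rtd_ms=1.0, sdn_proc_ms=3.0), "ms")
```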

The Ontology-Based Movie Contents Recommendation Scheme Using Relations of Movie Metadata (온톨로지 기반 영화 메타데이터간 연관성을 활용한 영화 추천 기법)

  • Kim, Jaeyoung;Lee, Seok-Won
    • Journal of Intelligence and Information Systems / v.19 no.3 / pp.25-44 / 2013
  • Accessing movie contents has become easier with the advent of smart TV, IPTV, and web services that can be used to search for and watch movies. In this situation, searches for users' preferred movie contents are increasing. However, since the amount of available movie content is very large, users need considerable effort and time to find the movies they want. Hence, there has been much research on recommending personalized items through analysis and clustering of user preferences and user profiles. In this study, we propose a recommendation system that uses an ontology-based knowledge base. Our ontology can represent not only the relations between movie metadata but also the relations between the metadata and the user's profile, and the relations between metadata items capture the similarity between movies. To build the knowledge base, our ontology model considers two aspects: the movie metadata model and the user model. For the ontology-based movie metadata model, we selected as the main metadata the genre, actor/actress, keywords, and synopsis, since these affect users' choice of movies. The user model contains the demographic information of the user and the relations between the user and the movie metadata. In our model, the movie ontology consists of seven concepts (Movie, Genre, Keywords, Synopsis Keywords, Character, and Person), eight attributes (title, rating, limit, description, character name, character description, person job, person name), and ten relations between concepts. For our knowledge base, we entered individual data of 14,374 movies for each concept in the contents ontology model. This movie metadata knowledge base is used to search for movies related to the metadata the user is interested in, and it can find similar movies through the relations between concepts. We also propose an architecture for movie recommendation, consisting of four components. The first component searches for candidate movies based on the demographic information of the user. In this component, we group users according to demographic information in order to recommend movies for each group, define the rules for assigning users to groups, and generate the query used to search for the candidate movies. The second component searches for candidate movies based on user preferences. When users choose a movie, they consider metadata such as genre, actor/actress, synopsis, and keywords; users input their preferences, and the system searches for movies based on them. Unlike existing movie recommendation systems, the proposed system can find similar movies through the relations between concepts. Each metadata item of the recommended candidate movies has a weight that is used to decide the recommendation order. The third component merges the results of the first and second components: we calculate the weight of each movie using the weight values of its metadata and then sort the movies by weight. The fourth component analyzes the result of the third component, decides the contribution level of each metadata type, and applies the contribution weights to the metadata. Finally, we use the result of this step as the recommendation for users. We tested the usability of the proposed scheme with a web application implemented for the experiment using JSP, JavaScript, and the Protégé API.
In our experiment, we collected the results of 20 men and women ranging in age from 20 to 29, and we used 7,418 movies with ratings of at least 7.0. We provided Top-5, Top-10, and Top-20 recommended movie lists to users, who then chose the movies they found interesting. On average, users chose 2.1 interesting movies from the Top-5 list, 3.35 from the Top-10, and 6.35 from the Top-20, which is better than the results yielded by any single metadata type alone.
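
A minimal sketch of the merge-and-rank step (third and fourth components): weight each candidate movie's per-metadata scores by a contribution weight and sort. The weights and movies are illustrative, not values from the experiment.

```python
# Per-metadata similarity scores for each candidate movie (illustrative).
candidates = {
    "movie_a": {"genre": 0.9, "actor": 0.4, "keywords": 0.7},
    "movie_b": {"genre": 0.5, "actor": 0.8, "keywords": 0.6},
}
# Contribution weight per metadata type, as decided by the fourth component
# (assumed values).
contribution = {"genre": 0.5, "actor": 0.3, "keywords": 0.2}

def score(meta_scores):
    """Weighted sum of a movie's metadata scores."""
    return sum(contribution[m] * s for m, s in meta_scores.items())

ranking = sorted(candidates, key=lambda m: score(candidates[m]), reverse=True)
print(ranking)  # recommendation order, e.g. ['movie_a', 'movie_b']
```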