• Title/Summary/Keyword: software engineering

Search Results: 12,283 (processing time: 0.038 seconds)

A Study on the Development items of Korean Marine GIS Software Based on S-100 Universal Hydrographic Standard (S-100 표준 기반 해양 GIS 소프트웨어 국산화 개발 방향에 관한 연구)

  • LEE, Sang-Min;CHOI, Tae-Seok;KIM, Jae-Myung;CHOI, Yun-Soo
    • Journal of the Korean Association of Geographic Information Studies / v.25 no.3 / pp.17-28 / 2022
  • This study aims to set the direction for developing next-generation marine-information mapping software and to build a foundation for localizing maritime production tools. The GIS data-processing products and technologies currently used in Korea's marine sector depend on foreign applications, which entails renewal costs, delayed technical updates, and failure to reflect domestic characteristics. Meanwhile, the S-100 standard, the next-generation hydrographic data model that remedies the shortcomings of S-57 in marine GIS data processing, has been adopted as the new marine data standard. This study presents the current status and problems of marine GIS technology in Korea and suggests a development direction for GIS software based on the S-100 next-generation hydrographic data model of the IHO (International Hydrographic Organization). Research on S-100-based marine GIS localization technology and on building the related industrial ecosystem is expected to support scientific decision-making on policy issues that arise with other countries, such as marine territory management and the development and use of marine resources.

A Study on Precise Tide Prediction at the Nakdong River Estuary using Long-term Tidal Observation Data (장기조석관측 자료를 이용한 낙동강 하구 정밀조위 예측 연구)

  • Park, Byeong-Woo;Kim, Tae-Woo;Kang, Du Kee;Seo, Yongjae;Shin, Hyun-Suk
    • Journal of the Korean Society of Marine Environment & Safety / v.28 no.6 / pp.874-881 / 2022
  • Until 2016, before discussions on restoring brackish water to the Nakdong River Estuary began in earnest, the downstream water level was predicted using data from existing tide level observatories (Busan and Gadeokdo) several kilometers away from the estuary. However, prediction was difficult because of differences in tide level and phase. Therefore, this study was conducted to predict tides more accurately through tidal harmonic analysis using water levels measured in the offshore waters adjacent to the Nakdong River Estuary. As a research method, the completeness of the observation records over the period and any abnormal data were checked at 10-minute intervals in the offshore sea area near the Nakdong River Estuary bank, and observed and predicted tides were analyzed using the TASK2000 (Tidal Analysis Software Kit) package, a tidal harmonic analysis program. Regression analysis based on a one-to-one comparison showed a high correlation between the two components, with a correlation coefficient of 0.9334. In predicting tides for the current year, more accurate results can be obtained by harmonically analyzing one year of tide observation data from the previous year and performing tide prediction with the resulting harmonic constants. Based on this method, predicted tides for 2022 were generated and are being used to calculate the seawater inflow for the restoration of brackish water to the Nakdong River Estuary.
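The harmonic analysis step that a package like TASK2000 performs can be illustrated with a minimal sketch: estimating the amplitude and phase of a few tidal constituents by projecting a long water-level record onto sinusoids at the constituent frequencies. The M2 and S2 constituent speeds below are standard values, but the two-constituent fit and the synthetic record are illustrative assumptions, not the study's data or TASK2000's actual algorithm.

```python
import math

# Constituent speeds in degrees per hour (standard values for M2 and S2).
SPEEDS = {"M2": 28.9841042, "S2": 30.0}

def harmonic_analysis(heights, dt_hours):
    """Estimate mean level plus (amplitude, phase) per constituent by
    projecting the record onto cos/sin at each constituent frequency.
    A valid approximation for long records, where distinct constituents
    are nearly orthogonal."""
    n = len(heights)
    z0 = sum(heights) / n
    consts = {}
    for name, speed in SPEEDS.items():
        w = math.radians(speed)  # rad per hour
        c = sum((h - z0) * math.cos(w * i * dt_hours) for i, h in enumerate(heights))
        s = sum((h - z0) * math.sin(w * i * dt_hours) for i, h in enumerate(heights))
        a, b = 2.0 * c / n, 2.0 * s / n
        consts[name] = (math.hypot(a, b), math.atan2(b, a))  # amplitude, phase (rad)
    return z0, consts

def predict(z0, consts, t_hours):
    """Predicted tide height at time t from the harmonic constants."""
    return z0 + sum(amp * math.cos(math.radians(SPEEDS[k]) * t_hours - ph)
                    for k, (amp, ph) in consts.items())

# Synthetic one-year record at 10-minute intervals, as in the study's setup.
dt = 1.0 / 6.0
times = [i * dt for i in range(int(365 * 24 / dt))]
obs = [1.0 + 0.8 * math.cos(math.radians(28.9841042) * t - 0.5)
       + 0.3 * math.cos(math.radians(30.0) * t - 1.2) for t in times]
z0, consts = harmonic_analysis(obs, dt)
```

Running the analysis on the synthetic record recovers the mean level and both constituents, and `predict` can then generate tides for any future time, mirroring the previous-year-analysis / current-year-prediction workflow described above.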

Dental Surgery Simulation Using Haptic Feedback Device (햅틱 피드백 장치를 이용한 치과 수술 시뮬레이션)

  • Yoon Sang Yeun;Sung Su Kyung;Shin Byeong Seok
    • KIPS Transactions on Software and Data Engineering / v.12 no.6 / pp.275-284 / 2023
  • Virtual reality simulations are used for education and training in various fields and have recently become especially common in medicine. An education/training simulator consists of hardware for tactile/force feedback and image/sound output, which provides sensations similar to a doctor treating a real patient with real surgical tools, and software that produces realistic images and tactile feedback. Existing simulators are complicated and expensive because they must use several types of hardware to simulate the various surgical instruments used during surgery. In this paper, we propose a dental surgery simulation system using a force feedback device and a morphable haptic controller. The haptic hardware determines whether a surgical tool collides with the surgical site and provides a sense of resistance and vibration. In particular, haptic controllers that can be deformed, for example by changing length or bending, can express the various sensations felt with differently shaped surgical tools. When the user manipulates the haptic feedback device, events such as device movement or button clicks are delivered to the simulation system, producing interaction between the dental surgical tools and the oral cavity model, and the resulting haptic feedback is delivered back to the device. Using these basic techniques, we provide a realistic training experience of impacted wisdom tooth extraction, a representative dental surgical procedure, in a virtual environment represented by sophisticated three-dimensional models.
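The abstract does not specify the force-rendering model; a common minimal approach in haptic simulators is penalty-based rendering, where the resistance force grows with how far the tool tip penetrates the tissue surface. The sketch below assumes a flat surface and Hooke's-law stiffness purely for illustration; it is not the authors' implementation.

```python
def penalty_force(tool_height, surface_height, stiffness=300.0):
    """Penalty-based haptic rendering: if the tool tip is below the surface,
    push back with a force proportional to the penetration depth (Hooke's
    law); otherwise apply no force. Units are arbitrary for this sketch."""
    depth = surface_height - tool_height  # > 0 means the tool has penetrated
    return stiffness * depth if depth > 0 else 0.0
```

In a real haptic loop this computation runs at roughly 1 kHz, with the resulting force sent to the feedback device on every cycle.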

A Methodology for Making Military Surveillance System to be Intelligent Applied by AI Model (AI모델을 적용한 군 경계체계 지능화 방안)

  • Changhee Han;Halim Ku;Pokki Park
    • Journal of Internet Computing and Services / v.24 no.4 / pp.57-64 / 2023
  • The ROK military faces a significant challenge in its vigilance mission due to demographic problems, particularly the aging population and the population cliff. This study demonstrates the crucial role of the 4th industrial revolution and its core artificial intelligence algorithms in maximizing work efficiency within the Command&Control room by mechanizing simple tasks. To achieve a fully developed military surveillance system, we chose multi-object tracking (MOT) technology as the essential artificial intelligence component, in line with our goal of an intelligent and automated surveillance system. Additionally, we prioritized data visualization and the user interface to ensure system accessibility and efficiency. These complementary elements come together to form a cohesive software application. The CCTV video data for this study were collected from the cameras installed at the 1st and 2nd main gates of the 00 unit, with the cooperation of the Command&Control room. Experimental results indicate that an intelligent and automated surveillance system delivers more information to the operators in the room. However, it is important to acknowledge the limitations of the software system developed in this study. By highlighting these limitations, we present a future direction for the development of military surveillance systems.
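The abstract does not name the specific MOT algorithm used. As a hedged illustration of the general idea, frame-to-frame association in many trackers reduces to matching new detections to existing tracks by bounding-box overlap (IoU); the greedy matcher below is a minimal sketch of that principle, not the authors' system.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

class GreedyIoUTracker:
    """Greedy frame-to-frame association: each detection keeps the ID of the
    best-overlapping existing track, or gets a fresh ID if none overlaps."""

    def __init__(self, thresh=0.3):
        self.thresh = thresh
        self.next_id = 1
        self.tracks = {}  # track id -> last seen box

    def update(self, detections):
        assigned = {}
        remaining = dict(self.tracks)  # tracks not yet matched this frame
        for det in detections:
            best_id, best_iou = None, self.thresh
            for tid, box in remaining.items():
                score = iou(det, box)
                if score > best_iou:
                    best_id, best_iou = tid, score
            if best_id is None:
                best_id = self.next_id
                self.next_id += 1
            else:
                del remaining[best_id]
            assigned[best_id] = det
        self.tracks = assigned
        return assigned
```

Stable IDs across frames are what let a surveillance UI show "object 1 moving toward the gate" rather than unrelated per-frame detections.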

Comparison of Adversarial Example Restoration Performance of VQ-VAE Model with or without Image Segmentation (이미지 분할 여부에 따른 VQ-VAE 모델의 적대적 예제 복원 성능 비교)

  • Tae-Wook Kim;Seung-Min Hyun;Ellen J. Hong
    • Journal of the Institute of Convergence Signal Processing / v.23 no.4 / pp.194-199 / 2022
  • Preprocessing for high-quality data is required for high accuracy and usability in various complex industries based on image data. However, when a contaminated adversarial example, created by combining noise with existing image or video data, is introduced, it can pose a great risk to a company, so the damage must be restored to ensure reliability, security, and complete results. As a countermeasure, restoration was previously performed using Defense-GAN, but it had disadvantages such as long training time and low restoration quality. To improve on this, this paper proposes a method that uses adversarial examples created through FGSM, with and without image segmentation, together with the VQ-VAE model. First, the generated examples are classified with a general classifier. Next, the unsegmented data are put into the pre-trained VQ-VAE model, restored, and then classified. Finally, the data divided into quadrants are put into the 4-split-VQ-VAE model, the reconstructed fragments are combined, and the result is put into the classifier. After comparing the restored results and accuracies, the performance is analyzed according to whether the images are split and the order in which the two models are combined.
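FGSM itself is standard: it perturbs each input feature by a step of size ε in the direction of the sign of the loss gradient with respect to the input. The sketch below applies it to a toy logistic classifier (the weights and numbers are illustrative assumptions, not the paper's models) to show how a correctly classified input gets flipped.

```python
import math

def fgsm_perturb(x, grad, eps):
    """FGSM: move each input feature eps in the sign of the loss gradient."""
    return [xi + eps * (1 if g > 0 else -1 if g < 0 else 0)
            for xi, g in zip(x, grad)]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def grad_loss_wrt_x(w, x, y):
    """Gradient of the cross-entropy loss of p = sigmoid(w.x) w.r.t. x,
    which works out to (p - y) * w."""
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)))
    return [(p - y) * wi for wi in w]

w = [2.0, -1.0]      # toy classifier weights (illustrative)
x = [1.0, 0.5]       # w.x = 1.5 > 0, correctly classified as y = 1
g = grad_loss_wrt_x(w, x, y=1)
x_adv = fgsm_perturb(x, g, eps=1.0)  # pushes w.x negative -> misclassified
```

A defense such as the VQ-VAE restoration described above would then try to map `x_adv` back toward `x` before classification.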

A Study on Feature Selection and Feature Extraction for Hyperspectral Image Classification Using Canonical Correlation Classifier (정준상관분류에 의한 하이퍼스펙트럴영상 분류에서 유효밴드 선정 및 추출에 관한 연구)

  • Park, Min-Ho
    • KSCE Journal of Civil and Environmental Engineering Research / v.29 no.3D / pp.419-431 / 2009
  • The core of this study is finding an efficient band selection or extraction method that discovers the optimal spectral bands when applying the canonical correlation classifier (CCC) to hyperspectral data. The optimal bands under each separability decision technique were selected using the MultiSpec© software developed by Purdue University (USA). Six separability decision techniques were used: Divergence, Transformed Divergence, Bhattacharyya, Mean Bhattacharyya, Covariance Bhattacharyya, and Noncovariance Bhattacharyya. For feature extraction, PCA and MNF transformations were performed with the ERDAS Imagine and ENVI software. To compare and assess the effects of feature selection and feature extraction, land cover classification was performed with the CCC. The overall accuracy of the CCC using the initially selected 60 bands was 71.8%; the highest classification accuracy, 79.0%, was obtained by running the CCC after applying Noncovariance Bhattacharyya. In conclusion, only the Noncovariance Bhattacharyya separability decision method proved valuable as a feature selection algorithm for CCC-based hyperspectral image classification. Classification accuracy with the other feature selection and extraction algorithms, except Divergence, actually declined.
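The Bhattacharyya distance underlying several of these separability measures has a closed form for one-dimensional Gaussian class distributions; a minimal sketch of ranking bands by it follows. The per-band class statistics below are made up for illustration, not taken from the study.

```python
import math

def bhattacharyya_1d(m1, v1, m2, v2):
    """Bhattacharyya distance between two 1-D Gaussians N(m1, v1), N(m2, v2):
    a mean-separation term plus a variance-mismatch term."""
    return (0.25 * (m1 - m2) ** 2 / (v1 + v2)
            + 0.5 * math.log((v1 + v2) / (2.0 * math.sqrt(v1 * v2))))

def rank_bands(stats):
    """stats[band] = ((mean1, var1), (mean2, var2)) for two classes.
    Returns band indices ordered from most to least separable."""
    scores = [(bhattacharyya_1d(*c1, *c2), i)
              for i, (c1, c2) in enumerate(stats)]
    return [i for _, i in sorted(scores, reverse=True)]

# Hypothetical per-band class statistics: band 1 separates the classes best.
stats = [((0.2, 0.01), (0.25, 0.01)),   # band 0: small mean gap
         ((0.2, 0.01), (0.60, 0.01)),   # band 1: large mean gap
         ((0.2, 0.04), (0.30, 0.04))]   # band 2: moderate gap, high variance
```

A band selector would keep the top-k bands from this ranking before handing the reduced data to the classifier.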

Automated Story Generation with Image Captions and Recursive Calls (이미지 캡션 및 재귀호출을 통한 스토리 생성 방법)

  • Isle Jeon;Dongha Jo;Mikyeong Moon
    • Journal of the Institute of Convergence Signal Processing / v.24 no.1 / pp.42-50 / 2023
  • The development of technology has brought digital innovation throughout the media industry, including production and editing technologies, and has diversified how consumers watch content through OTT services and the streaming era. The convergence of big data and deep learning networks has enabled automatic text generation in formats such as news articles, novels, and scripts, but studies that reflect the author's intention and generate contextually smooth stories have been insufficient. In this paper, we describe the flow of pictures in a storyboard using image caption generation techniques and automatically generate story-tailored scenarios through language models. Using a CNN and an attention mechanism for image captioning, we generate sentences describing the pictures on the storyboard and feed them into KoGPT-2, an artificial intelligence natural language processing model, to automatically generate scenarios that meet the planning intention. Through this work, author-intended, story-customized scenarios can be created in large quantities to ease the burden of content creation, and artificial intelligence participates in the overall process of digital content production, advancing media intelligence.
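The caption-then-generate flow can be sketched as a loop that feeds each storyboard caption, together with the story so far, into a language model, followed by further "recursive" passes over the accumulated text. `continue_fn` below is a stand-in for KoGPT-2 generation, and the function names and loop structure are illustrative assumptions, not the paper's code.

```python
def generate_story(captions, continue_fn, refine_passes=1):
    """Build a story from storyboard captions: each caption is appended to
    the story so far and extended by the language model; afterwards the
    whole story is fed back in (the 'recursive call') for refinement."""
    story = ""
    for caption in captions:
        prompt = (story + " " + caption).strip()
        story = continue_fn(prompt)
    for _ in range(refine_passes):
        story = continue_fn(story)
    return story

def stub_model(prompt):
    """Stub standing in for KoGPT-2: just marks each extension it makes."""
    return prompt + " <gen>"
```

With a real model, `continue_fn` would wrap a `generate` call on KoGPT-2; the stub only makes the control flow testable.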

Difference Factors Analysis of between Quantity Take-off Using BIM Model and Using 2D Drawings in Reinforced Concrete Building Frame (건물 골조수량 산출 시 BIM모델 기반 수량과 2D도면 기반 수량 차이 요인 분석)

  • Kim, Gwang-Hee
    • Journal of the Korea Institute of Building Construction / v.23 no.5 / pp.651-662 / 2023
  • Recently, research on the use of Building Information Modeling (BIM) for various construction management activities has been actively conducted, and interest in 3D model-based estimation is increasing because it can be performed automatically using the attribute information of the 3D model. Therefore, this study compared the quantities calculated from the 2D drawings of a building with those extracted from a 3D model created in Revit, and tried to identify the causes of the differences. The difference between the two methods was largest for formwork, followed by rebar, and smallest for concrete. The differences arise because some quantity extraction from the 3D model does not match the quantity calculation standards; in particular, for formwork, it was difficult to separate out only the quantities of the necessary parts. In addition, since rebar quantities were not separated by member, it was impossible to compare them accurately and identify the causes of the differences. Therefore, it appears most reasonable to use application software that imports only the numerical information necessary for quantity calculation from the 3D model and applies separate calculation formulas.
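The "separate calculation formula" approach the authors recommend can be sketched for a single member type: take only the member dimensions from the model and apply the standard hand-takeoff formulas. The rectangular-column formulas below are the usual ones; counting only the side faces as formwork is an illustrative simplification of the standards discussed above.

```python
def column_quantities(width, depth, height):
    """Concrete volume (m^3) and side-formwork area (m^2) for a rectangular
    column, per the usual hand-takeoff formulas. Top and bottom faces are
    excluded from formwork in this simplified sketch."""
    volume = width * depth * height            # concrete volume
    formwork = 2.0 * (width + depth) * height  # area of the four side forms
    return volume, formwork
```

Summing such per-member results over all members pulled from the model reproduces a drawing-based takeoff while still using the 3D model's dimensions as the data source.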

R-lambda Model based Rate Control for GOP Parallel Coding in A Real-Time HEVC Software Encoder (HEVC 실시간 소프트웨어 인코더에서 GOP 병렬 부호화를 지원하는 R-lambda 모델 기반의 율 제어 방법)

  • Kim, Dae-Eun;Chang, Yongjun;Kim, Munchurl;Lim, Woong;Kim, Hui Yong;Seok, Jin Wook
    • Journal of Broadcast Engineering / v.22 no.2 / pp.193-206 / 2017
  • In this paper, we propose a rate control method based on the $R-{\lambda}$ model that supports a parallel encoding structure at the GOP level or IDR-period level for 4K UHD input video in real time. For this, a slice-level bit allocation method is proposed for parallel rather than sequential encoding. When a rate control algorithm is applied with GOP-level or IDR-period-level parallelism, information about how many bits have been consumed cannot be shared among frames belonging to the same frame level, except at the lowest level of the hierarchical-B structure, so the bit budget cannot be managed with existing bit allocation methods. To solve this problem, we improve on conventional procedures that allocate target bits sequentially in encoding order: the proposed strategy first assigns target bits to GOPs, then distributes each GOP's bits from the lowest to the highest depth level of the HEVC hierarchical-B structure. In addition, we propose a preprocessing-based method that improves subjective image quality by allocating bits according to the coding complexity of the frames. Experimental results show that the proposed bit allocation method works well for frame-level parallel HEVC software encoders, and the performance of our rate controller improves with a more elaborate bit allocation strategy that uses the preprocessing results.
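The two-stage allocation can be sketched as follows: a per-GOP budget is fixed first, then split over the hierarchical-B depth levels by weight, so no cross-GOP bit-consumption feedback is needed. The depth weights and frame counts below are illustrative assumptions, not the values used in the paper.

```python
def distribute_gop_bits(gop_bits, frames_per_depth, depth_weights):
    """Distribute one GOP's target bits over hierarchical-B depth levels:
    each frame at depth d receives bits proportional to depth_weights[d].
    Returns the per-frame bit budget for each depth level."""
    total_w = sum(w * n for w, n in zip(depth_weights, frames_per_depth))
    return [gop_bits * w / total_w for w in depth_weights]

# A GOP-8 hierarchy: 1 frame at depth 0, 1 at depth 1, 2 at depth 2,
# 4 at depth 3, with lower depths weighted more heavily (illustrative).
per_frame = distribute_gop_bits(1000.0, [1, 1, 2, 4], [8.0, 4.0, 2.0, 1.0])
```

Because each GOP's budget is decided up front, GOPs can be encoded in parallel without waiting for each other's actual bit consumption, which is the point of the scheme described above.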

Design and Implementation of an Execution-Provenance Based Simulation Data Management Framework for Computational Science Engineering Simulation Platform (계산과학공학 플랫폼을 위한 실행-이력 기반의 시뮬레이션 데이터 관리 프레임워크 설계 및 구현)

  • Ma, Jin;Lee, Sik;Cho, Kum-won;Suh, Young-kyoon
    • Journal of Internet Computing and Services / v.19 no.1 / pp.77-86 / 2018
  • For the past few years, KISTI has operated an online simulation execution platform, called EDISON, allowing users to conduct simulations for various scientific applications across diverse computational science and engineering disciplines. Typically, these simulations involve large-scale computation and accordingly produce a huge volume of output data. One critical issue with running such simulations on an online platform is that many users simultaneously submit simulation requests (or jobs) with the same (or almost unchanged) input parameters or files, placing a significant burden on the platform. In other words, identical computing jobs consume duplicate computing and storage resources at an undesirably fast pace. To curb excessive resource usage from such identical simulation requests, this paper introduces a novel framework, called IceSheet, that efficiently manages simulation data based on execution metadata, that is, provenance. The IceSheet framework captures and stores the provenance associated with each conducted simulation. The collected provenance records are used not only to detect duplicate simulation requests but also to search existing simulation results via an open-source search engine, ElasticSearch. In particular, this paper elaborates on the core components of the IceSheet framework that support search over and reuse of the stored simulation results. We implemented the proposed framework as a prototype using that engine in conjunction with the online simulation execution platform, and evaluated it on real simulation execution-provenance records collected on the platform. Once the prototyped IceSheet framework fully functions with the platform, users can quickly search for past parameter values entered into a given simulation software and retrieve any existing results for the same input parameter values. We therefore expect the proposed framework to eliminate duplicate resource consumption and significantly reduce execution time for requests identical to previously executed simulations.
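The duplicate-detection idea can be sketched as fingerprinting each request with a canonical hash of its software name and input parameters, then reusing any stored result for the same fingerprint. The names `request_fingerprint` and `ProvenanceStore` are illustrative, not IceSheet's actual API.

```python
import hashlib
import json

def request_fingerprint(software, params):
    """Canonical hash of a simulation request: the same software plus the
    same input parameters always yields the same fingerprint (sort_keys
    makes the hash independent of dictionary key order)."""
    canon = json.dumps({"software": software, "params": params}, sort_keys=True)
    return hashlib.sha256(canon.encode("utf-8")).hexdigest()

class ProvenanceStore:
    """Toy provenance store: run a simulation only if no identical request
    has been executed before; otherwise return the stored result."""

    def __init__(self):
        self._results = {}  # fingerprint -> stored result

    def run_or_reuse(self, software, params, run_fn):
        key = request_fingerprint(software, params)
        if key in self._results:
            return self._results[key], True    # reused an existing result
        result = run_fn(params)                # actually execute the job
        self._results[key] = result
        return result, False
```

In the real framework the stored records would live in ElasticSearch rather than an in-memory dict, so the same fingerprints also power the user-facing result search.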