• Title/Summary/Keyword: virtual-simulation


Optimal deployment of sonobuoy for unmanned aerial vehicles using reinforcement learning considering the target movement (표적의 이동을 고려한 강화학습 기반 무인항공기의 소노부이 최적 배치)

  • Geunyoung Bae;Juhwan Kang;Jungpyo Hong
    • The Journal of the Acoustical Society of Korea / v.43 no.2 / pp.214-224 / 2024
  • Sonobuoys are disposable devices that use sound waves for information gathering, detecting engine noise and capturing various acoustic characteristics. They play a crucial role in accurately detecting underwater targets, making them effective detection systems in anti-submarine warfare. Existing sonobuoy deployment methods in multistatic systems often rely on fixed patterns or heuristic-based rules and lack efficiency in terms of the number of sonobuoys deployed and the operational time, owing to the unpredictable mobility of underwater targets. Thus, this paper proposes an optimal sonobuoy placement strategy for Unmanned Aerial Vehicles (UAVs) to overcome the limitations of conventional deployment methods. The proposed approach uses reinforcement learning in a simulation-based experimental environment that considers the movements of the underwater targets. The Unity ML-Agents framework is employed, and the Proximal Policy Optimization (PPO) algorithm is used to train the UAV in a virtual operational environment with real-time interactions. The reward function is designed to account for the number of sonobuoys deployed and the cost associated with sound sources and receivers, enabling effective learning. Compared with conventional sonobuoy deployment methods in the same experimental environment, the proposed reinforcement learning-based strategy demonstrates superior performance in terms of detection success rate, number of sonobuoys deployed, and operational time.
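
As a concrete illustration of the reward design sketched in the abstract (fewer buoys, lower source/receiver cost, shorter operations), here is a minimal Python sketch of such a reward term. The study itself uses Unity ML-Agents with PPO; the function name, weights, and inputs below are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical reward shaping for a sonobuoy-deployment agent (illustrative only).
def deployment_reward(detection_success: bool,
                      num_sources: int,       # active sound-source buoys deployed
                      num_receivers: int,     # passive receiver buoys deployed
                      elapsed_steps: int,
                      source_cost: float = 2.0,
                      receiver_cost: float = 1.0,
                      time_penalty: float = 0.01,
                      success_bonus: float = 100.0) -> float:
    """Favour detecting the target with few, cheap buoys, as quickly as possible."""
    reward = -source_cost * num_sources - receiver_cost * num_receivers
    reward -= time_penalty * elapsed_steps        # penalise long operations
    if detection_success:
        reward += success_bonus                   # large terminal bonus on detection
    return reward

# Example: a successful detection using 2 sources and 4 receivers after 300 steps
print(deployment_reward(True, num_sources=2, num_receivers=4, elapsed_steps=300))
```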

A Study on Precision of 3D Spatial Model of a Highly Dense Urban Area based on Drone Images (드론영상 기반 고밀 도심지의 3차원 공간모형의 정밀도에 관한 연구)

  • Choi, Yeon Woo;Yoon, Hye Won;Choo, Mi Jin;Yoon, Dong Keun
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.40 no.2 / pp.69-77 / 2022
  • The 3D spatial model is an analysis framework for solving urban problems and is used in various fields such as urban planning, environment, land and housing management, and disaster simulation. Drones, which can capture 3D images in a short time at low cost, are increasingly used to construct 3D spatial models. For building a virtual city and running simulation modules, the location accuracy of the aerial survey and the precision of the 3D spatial model are important factors, so methods to increase them have been proposed. This study analyzed the location accuracy of the aerial survey and the precision of the 3D spatial model under each aerial-survey condition for an urban area with densely located buildings. We selected Daerim 2-dong, Yeongdeungpo-gu, Seoul as the target area and varied the shooting angle, shooting altitude, and overlap rate as aerial-survey conditions. We calculated the location accuracy of the aerial survey by analyzing the difference between the actual surveyed values of the checkpoints (CPs) and the values predicted from the 3D spatial model, and we calculated the precision of the 3D spatial model by analyzing the difference between the positions of the point cloud and the 3D spatial model (3D mesh). As a result, location accuracy tended to be high at a relatively high overlap rate, but the higher the overlap rate, the lower the precision of the 3D spatial model, while a higher shooting angle yielded higher precision; no other significant relationship with precision was found. In terms of the baseline-height ratio, precision tended to improve as the ratio increased.
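
The accuracy and precision measures described above come down to comparing two sets of 3D points. The following minimal Python sketch computes an RMSE between surveyed checkpoint coordinates and model-predicted coordinates; the array names and numbers are illustrative assumptions, not the study's data.

```python
# Illustrative 3D RMSE between surveyed checkpoints and model-predicted positions.
import numpy as np

def rmse_3d(measured: np.ndarray, predicted: np.ndarray) -> float:
    """Root-mean-square of the 3D point-to-point distances (same units as input)."""
    d = np.linalg.norm(measured - predicted, axis=1)
    return float(np.sqrt(np.mean(d ** 2)))

cps_surveyed   = np.array([[10.00, 20.00, 5.00], [30.00, 40.00, 6.00]])  # toy CP coordinates (m)
cps_from_model = np.array([[10.03, 19.96, 5.05], [29.95, 40.04, 5.93]])  # toy model predictions (m)
print(f"location accuracy (RMSE): {rmse_3d(cps_surveyed, cps_from_model):.3f} m")
```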

Development of a prototype simulator for dental education (치의학 교육을 위한 프로토타입 시뮬레이터의 개발)

  • Mi-El Kim;Jaehoon Sim;Aein Mon;Myung-Joo Kim;Young-Seok Park;Ho-Beom Kwon;Jaeheung Park
    • The Journal of Korean Academy of Prosthodontics / v.61 no.4 / pp.257-267 / 2023
  • Purpose. The purpose of the study was to fabricate a prototype robotic simulator for dental education, to test whether it could simulate mandibular movements, and to assess whether the simulator could respond to stimuli during dental practice. Materials and methods. A virtual simulator model was developed based on segmentation of the hard tissues from cone-beam computed tomography (CBCT) data. The simulator frame was 3D printed in polylactic acid (PLA), and dentiforms and silicone face skin were also installed. Servo actuators were used to control the movements of the simulator, and the simulator's response to dental stimuli was created using pressure and water-level sensors. A water-level test was performed to determine the specific threshold of the water-level sensor. The mandibular movements and the mandibular range of motion of the simulator were tested through computer simulation and on the actual model. Results. The prototype robotic simulator consisted of an operational unit, an upper body with an electric device, and a head with a temporomandibular joint (TMJ) and dentiforms. The TMJ of the simulator provided two degrees of freedom, implementing rotational and translational movements. In the water-level test, the specific threshold of the water-level sensor was 10.35 ml. The mandibular range of motion of the simulator was 50 mm in both the computer simulation and the actual model. Conclusion. Although further advancements are still required to improve its efficiency and stability, the upper-body prototype simulator has the potential to be useful in dental practice education.
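
The water-level test above yields a single threshold that the simulator reacts to. The toy Python sketch below shows that kind of threshold check; it is an assumption about the control logic, not the authors' firmware, and the 10.35 ml value is the only figure taken from the abstract.

```python
# Toy threshold logic for a water-level-triggered response (illustrative only).
WATER_LEVEL_THRESHOLD_ML = 10.35  # specific threshold reported in the abstract

def should_trigger_response(volume_ml: float) -> bool:
    """Return True once the accumulated water volume reaches the sensor threshold."""
    return volume_ml >= WATER_LEVEL_THRESHOLD_ML

for volume in (5.0, 10.0, 10.35, 12.5):   # simulated sensor readings in ml
    print(volume, "->", "respond" if should_trigger_response(volume) else "idle")
```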

Development of a Stock Trading System Using M & W Wave Patterns and Genetic Algorithms (M&W 파동 패턴과 유전자 알고리즘을 이용한 주식 매매 시스템 개발)

  • Yang, Hoonseok;Kim, Sunwoong;Choi, Heung Sik
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.63-83 / 2019
  • Investors prefer to look for trading points based on the shapes shown in charts rather than on complex analyses such as corporate intrinsic-value analysis or technical auxiliary indices. However, pattern analysis is difficult and has been computerized far less than users need. In recent years, there have been many studies of stock-price patterns using various machine-learning techniques, including neural networks, in the field of artificial intelligence (AI). In particular, advances in IT have made it easier to analyze huge amounts of chart data to find patterns that can predict stock prices. Although short-term price-forecasting power has improved, long-term forecasting power remains limited, so such models are used for short-term trading rather than long-term investment. Other studies have focused on mechanically and accurately identifying patterns that earlier technology could not recognize, but this can be vulnerable in practice because whether the patterns found are suitable for trading is a separate question. Those studies find a point that matches a meaningful pattern and then measure performance after n days, assuming a purchase at that point in time. Since this approach calculates virtual revenues, there can be many disparities with reality. Whereas existing research tries to find patterns with price-prediction power, this study proposes to define the patterns first and to trade when a pattern with a high success probability appears. The M&W wave patterns published by Merrill (1980) are simple because each can be distinguished by five turning points. Despite reports that some of these patterns have price predictability, there have been no performance reports from the actual market. The simplicity of a pattern consisting of five turning points has the advantage of reducing the cost of increasing pattern-recognition accuracy. In this study, the 16 upward-reversal patterns and 16 downward-reversal patterns are reclassified into ten groups so that they can be easily implemented in the system, and only the one pattern with the highest success rate per group is selected for trading. Patterns that had a high probability of success in the past are likely to succeed in the future, so we trade when such a pattern occurs. The evaluation reflects a real trading situation because performance is measured assuming that both the buy and the sell were executed. We tested three ways to calculate the turning points. The first method, the minimum-change-rate zig-zag, removes price movements below a certain percentage and then calculates the vertices. In the second method, the high-low-line zig-zag, a high that meets the n-day high line is taken as a peak, and a low that meets the n-day low line is taken as a valley. In the third method, the swing-wave method, a central high that is higher than the n highs on its left and right is taken as a peak, and a central low that is lower than the n lows on its left and right is taken as a valley. The swing-wave method was superior to the other methods in the tests, which we interpret as evidence that trading after confirming the completion of a pattern is more effective than trading while the pattern is still unfinished.
Because the number of cases was far too large to search exhaustively in this simulation, genetic algorithms (GA) were the most suitable way to find patterns with high success rates. We also performed the simulation using the walk-forward analysis (WFA) method, which tests the training section and the application section separately, so we were able to respond appropriately to market changes. We optimize at the level of the stock portfolio because optimizing the variables for each individual stock carries a risk of over-optimization; we therefore set the number of constituent stocks to 20 to gain the benefit of diversification while avoiding over-fitting. We tested the KOSPI market by dividing it into six categories. In the results, the small-cap portfolio was the most successful and the high-volatility portfolio was the second best. This shows that some price volatility is needed for patterns to form, but the highest volatility is not necessarily the best.
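
The swing-wave rule for turning points is described precisely enough to sketch in code: a bar is a peak if its high exceeds the n highs on both sides, and a valley if its low is below the n lows on both sides. The Python sketch below is an illustrative reading of that rule, not the authors' implementation; the sample prices and the parameter n are assumptions.

```python
# Illustrative swing-wave turning-point detection.
from typing import List, Tuple

def swing_wave_turning_points(highs: List[float], lows: List[float],
                              n: int = 2) -> List[Tuple[int, str]]:
    """Return (index, 'peak'|'valley') for bars that dominate n neighbours on each side."""
    points = []
    for i in range(n, len(highs) - n):
        left_h, right_h = highs[i - n:i], highs[i + 1:i + 1 + n]
        left_l, right_l = lows[i - n:i], lows[i + 1:i + 1 + n]
        if highs[i] > max(left_h + right_h):        # higher than the n highs on both sides
            points.append((i, "peak"))
        elif lows[i] < min(left_l + right_l):       # lower than the n lows on both sides
            points.append((i, "valley"))
    return points

highs = [10, 11, 13, 12, 11, 12, 14, 13, 12, 11]    # toy daily highs
lows  = [ 9, 10, 12, 11, 10, 11, 13, 12, 11, 10]    # toy daily lows
print(swing_wave_turning_points(highs, lows, n=2))  # [(2, 'peak'), (4, 'valley'), (6, 'peak')]
```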

Edge to Edge Model and Delay Performance Evaluation for Autonomous Driving (자율 주행을 위한 Edge to Edge 모델 및 지연 성능 평가)

  • Cho, Moon Ki;Bae, Kyoung Yul
    • Journal of Intelligence and Information Systems / v.27 no.1 / pp.191-207 / 2021
  • Mobile communications have evolved rapidly over the decades, mainly focusing on higher speeds to meet growing data demands from 2G to 5G. With the start of the 5G era, efforts are being made to provide services such as IoT, V2X, robots, artificial intelligence, augmented and virtual reality, and smart cities, which are expected to change our lives and industries as a whole. To deliver those services, reduced latency and high reliability are critical for real-time applications, on top of high data rates. Accordingly, 5G targets a maximum speed of 20 Gbps, a delay of 1 ms, and a connection density of 10⁶ devices/㎢. In particular, for intelligent traffic control systems and services using vehicle-based Vehicle-to-X (V2X) communication, such as traffic control, low delay and high reliability for real-time services are very important in addition to high data rates. 5G communication uses high frequencies of 3.5 GHz and 28 GHz. These high-frequency waves support high speeds thanks to their straight-line propagation, but their short wavelength and small diffraction angle limit their reach and prevent them from penetrating walls, restricting indoor use. It is therefore difficult to overcome these constraints with existing networks. The underlying centralized SDN also has limited capability to offer delay-sensitive services because communication with many nodes overloads its processing. SDN, a structure that separates control-plane signaling from data-plane packets, requires control of the delay-related tree structure available in the event of an emergency during autonomous driving. In these scenarios, the network architecture that handles in-vehicle information is a major determinant of delay. Since SDNs with a conventional centralized structure have difficulty meeting the desired delay level, studies on the optimal size of an SDN for information processing are needed. SDNs therefore need to be partitioned at a certain scale into a new type of network that can efficiently respond to dynamically changing traffic and provide high-quality, flexible services. The structure of such networks is closely related to ultra-low latency, high reliability, and hyper-connectivity and should be based on a new form of split SDN rather than the existing centralized SDN structure, even under worst-case conditions. In these SDN-structured networks, where automobiles pass through small 5G cells very quickly, the information change cycle, the round-trip delay (RTD), and the SDN data-processing time are highly correlated with the overall delay. Of these, the RTD is not a significant factor because the link is fast enough and contributes less than 1 ms, but the information change cycle and the SDN data-processing time greatly affect the delay. Especially in an emergency in a self-driving environment linked to an ITS (Intelligent Traffic System), which requires low latency and high reliability, information must be transmitted and processed very quickly; this is a case where delay plays a very sensitive role. In this paper, we study the SDN architecture for emergencies during autonomous driving and analyze, through simulation, the correlation with the cell layer from which the vehicle should request the relevant information according to the information flow.
For the simulation, since the data rate of 5G is high enough, we assume that the information supporting neighboring vehicles reaches the car without errors. Furthermore, we assumed 5G small cells with radii of 50 m to 250 m, and vehicle speeds of 30 km/h to 200 km/h were considered in order to examine the network architecture that minimizes the delay.
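
Two of the quantities the simulation varies, cell radius and vehicle speed, determine how long a car stays inside a small cell, which in turn bounds the usable information change cycle. The short Python sketch below works through that arithmetic; the cycle and processing times plugged in are assumptions within the ranges quoted above, not results from the paper.

```python
# Back-of-the-envelope dwell-time and delay-budget arithmetic (illustrative only).
def cell_dwell_time_s(cell_radius_m: float, speed_kmh: float) -> float:
    """Time to cross a cell diameter at constant speed."""
    speed_ms = speed_kmh / 3.6
    return (2 * cell_radius_m) / speed_ms

def total_delay_ms(info_cycle_ms: float, sdn_proc_ms: float, rtd_ms: float = 1.0) -> float:
    """Information change cycle + SDN processing time + round-trip delay."""
    return info_cycle_ms + sdn_proc_ms + rtd_ms

print(f"dwell time: {cell_dwell_time_s(50, 200):.2f} s")   # worst case: 50 m cell, 200 km/h
print(f"delay:      {total_delay_ms(10.0, 5.0):.1f} ms")   # assumed cycle and processing times
```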

Analysis of Emerging Geo-technologies and Markets Focusing on Digital Twin and Environmental Monitoring in Response to Digital and Green New Deal (디지털 트윈, 환경 모니터링 등 디지털·그린 뉴딜 정책 관련 지질자원 유망기술·시장 분석)

  • Ahn, Eun-Young;Lee, Jaewook;Bae, Junhee;Kim, Jung-Min
    • Economic and Environmental Geology / v.53 no.5 / pp.609-617 / 2020
  • After introducing its Industry 4.0 policy, the Korean government announced the 'Digital New Deal' and the 'Green New Deal' as the 'Korean New Deal' in 2020. We analyzed the Korea Institute of Geoscience and Mineral Resources (KIGAM)'s research projects related to this policy and conducted a market analysis focused on digital twin and environmental monitoring technologies. Regarding the 'Data Dam' policy, we suggested digital geo-contents using Augmented Reality (AR) and Virtual Reality (VR) and a public geo-data collection and sharing system. It is necessary to expand and support smart mining and digital oil field research for the '5th-generation mobile communication (5G) and artificial intelligence (AI) convergence into all industries' policy. The Korean government is proposing downtown 3D maps for its 'Digital Twin' policy; KIGAM can provide 3D geological maps and Internet of Things (IoT) systems for social overhead capital (SOC) management. The 'Green New Deal' proposed developing technologies for green industries, including resource circulation, Carbon Capture Utilization and Storage (CCUS), and electric and hydrogen vehicles. KIGAM has carried out related research projects and currently conducts research on domestic energy-storage minerals. Oil and gas are presented as representative application industries of the digital twin, much progress has been made in mining automation and digital mapping, and Digital Twin Earth (DTE) is an emerging research subject. These emerging research subjects are deeply related to data analysis, simulation, AI, and the IoT; therefore, KIGAM should collaborate with sensor and computing software and system companies.

Treatment Planning for Minimizing Carotid Artery Dose in the Radiotherapy of Early Glottic Cancer (조기 성문암의 방사선치료에서 경동맥을 보호하기 위한 치료 계획)

  • Ki, Yang-Kan;Kim, Won-Taek;Nam, Ji-Ho;Kim, Dong-Hyun;Lee, Ju-Hye;Park, Dal;Kim, Don-Won
    • Radiation Oncology Journal / v.29 no.2 / pp.115-120 / 2011
  • Purpose: To examine the feasibility of treatment planning that minimizes the carotid artery dose in the radiotherapy of early glottic cancer. Materials and Methods: From 2007 to 2010, computed tomography simulation images of 31 patients treated with radiotherapy for early glottic cancer were analyzed. Virtual planning was used to compare parallel-opposed fields (POF) with modified oblique fields (MOF) angled to exclude the ipsilateral carotid arteries. The planning target volume (PTV), irradiated volume, carotid artery, and spinal cord were analyzed in terms of mean dose, $V_{35}$, $V_{40}$, $V_{50}$, and percent dose-volume. Results: The beam angles were arranged 25 degrees anteriorly in 23 patients and 30 degrees anteriorly in 8 patients; the percent dose-volume of the carotid artery showed a significant difference (p<0.001). The mean doses to the carotid artery were 38.5 Gy for POF and 26.3 Gy for MOF, a statistically significant difference (p=0.012). Similarly, $V_{35}$, $V_{40}$, and $V_{50}$ also showed significant differences between POF and MOF. Conclusion: Based on these results, the modified oblique fields are expected to prevent carotid artery stenosis and reduce the incidence of stroke.
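
The dose-volume metrics compared above ($V_{35}$, $V_{40}$, $V_{50}$) are simply the percentage of a structure's volume receiving at least the stated dose. The Python sketch below computes them from a per-voxel dose array; the voxel doses are randomly generated for illustration and are not patient data.

```python
# Illustrative mean-dose and V_x computation from a per-voxel dose array.
import numpy as np

def v_x(dose_gy: np.ndarray, threshold_gy: float) -> float:
    """Percentage of voxels receiving at least `threshold_gy`."""
    return 100.0 * np.mean(dose_gy >= threshold_gy)

carotid_dose = np.random.default_rng(0).uniform(10, 60, size=10_000)  # toy voxel doses (Gy)
print(f"mean dose: {carotid_dose.mean():.1f} Gy")
for x in (35, 40, 50):
    print(f"V{x}: {v_x(carotid_dose, x):.1f} %")
```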

Study on the structure of the articulation jack and skin plate of the sharp curve section shield TBM in numerical analysis (수치해석을 통한 급곡선 구간 Shield TBM의 중절잭 및 스킨플레이트 구조에 관한 연구)

  • Kang, Sin-Hyun;Kim, Dong-Ho;Kim, Hun-Tae;Song, Seung-Woo
    • Journal of Korean Tunnelling and Underground Space Association / v.19 no.3 / pp.421-435 / 2017
  • Recently, the saturation of surface structures and the overcrowding of underground pipeline facilities have required the development of underground structures as an alternative to surface structures. Accordingly, mechanized tunnelling with the shield TBM method has been increasingly adopted for urban infrastructure construction in order to avoid the vibration and noise problems of NATM tunnel construction. Tunnel alignments often have to be planned with sharp curves to avoid building foundations and existing underground structures, so it is inevitable to develop shield TBM technology suited to sharp-curve tunnel construction. This study therefore concerns the structural stability of the articulation jack, shield jack, and skin plate under shield TBM thrust for mechanized tunnel construction along straight and sharply curved alignments. Construction case studies and the operating principle of the shield TBM are examined and analyzed through a theoretical approach. In shield TBM construction along straight and sharply curved alignments, the cutter-head torque, the thrust of the articulation jack and the shield jack, and the amount of overcutting for the curve are each important. In addition, securing the stability of the skin plate structure is essential to ensure the safety of workers inside. This study examines the general structure and construction of the equipment, and an experimental simulation was carried out through numerical analysis to investigate the main factors and the structural stability of the skin plate. The structural stability of the skin plate was evaluated, and its shape was optimized by comparing the articulation jack loads for virtual ground conditions selected for straight and sharply curved alignments. Since the structures and operating methods of the shield TBM types currently used in domestic construction are very similar, this study will help in developing localized shield TBM technology for new equipment and in reviewing vulnerability and stability.

A study of lower facial change according to facial type when virtually vertical dimension increases (가상적 수직 교합 고경 증가 시 안모의 유형에 따른 하안모 변화에 관한 연구)

  • Kim, Nam-Woo;Lee, Gung-Chol;Moon, Cheol-Hyun;Bae, Jung-Yoon;Kim, Ji-Yeon
    • The Journal of Korean Academy of Prosthodontics / v.54 no.1 / pp.1-7 / 2016
  • Purpose: The aim of this study was to evaluate the effect of an increased vertical dimension of occlusion on lower facial changes by facial type. Materials and methods: Lateral cephalograms from 261 patients were obtained and classified by sagittal (Class I, II, and III) and vertical (hypodivergent, normodivergent, and hyperdivergent) facial patterns. Retrusive displacement of soft-tissue Pogonion and downward displacement of soft-tissue Menton were measured in each group after the vertical dimension of occlusion was increased by 2 mm at the lower central incisor using a virtual simulation program. The ratio of the two displacements was calculated in all groups. Statistical analysis was done by two-way ANOVA, with post hoc analysis by the Tukey test (5% level of significance). Results: Retrusive displacement of soft-tissue Pogonion in the Class III group was statistically different from that in Class I and II, and in the vertical facial groups all three groups were significantly different (P<.05). Downward displacement of soft-tissue Menton showed statistically significant differences among all sagittal groups and vertical groups (P<.05). The ratio of the two displacements showed statistically significant differences among all sagittal groups and vertical groups (P<.05), with the Class II hyperdivergent group having the highest value. Conclusion: Lower facial change was statistically significant according to facial type when the vertical dimension of occlusion was increased, and the Class II hyperdivergent facial type showed the highest ratio.
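
The core measurement above is a ratio of two soft-tissue displacements after a simulated 2 mm increase in vertical dimension. The Python sketch below shows one way such a ratio could be computed from landmark coordinates; the coordinate values and sign conventions are illustrative assumptions, not the study's data.

```python
# Illustrative displacement-ratio calculation from cephalometric landmark coordinates.
from dataclasses import dataclass

@dataclass
class Landmark:
    x: float  # horizontal coordinate (anterior positive), mm
    y: float  # vertical coordinate (inferior positive), mm

def displacement_ratio(pog_before: Landmark, pog_after: Landmark,
                       me_before: Landmark, me_after: Landmark) -> float:
    """Ratio of Pogonion retrusion to Menton downward movement."""
    retrusion = pog_before.x - pog_after.x     # backward movement of soft-tissue Pogonion
    downward = me_after.y - me_before.y        # downward movement of soft-tissue Menton
    return retrusion / downward if downward else float("nan")

ratio = displacement_ratio(Landmark(82.0, 110.0), Landmark(80.9, 110.6),
                           Landmark(78.0, 118.0), Landmark(78.0, 119.6))
print(f"Pogonion retrusion / Menton downward ratio: {ratio:.2f}")
```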

A Study on Metaverse Construction Based on 3D Spatial Information of Convergence Sensors using Unreal Engine 5 (언리얼 엔진 5를 활용한 융복합센서의 3D 공간정보기반 메타버스 구축 연구)

  • Oh, Seong-Jong;Kim, Dal-Joo;Lee, Yong-Chang
    • Journal of Cadastre & Land InformatiX / v.52 no.2 / pp.171-187 / 2022
  • Recently, demand for and development of non-face-to-face services have progressed rapidly due to the COVID-19 pandemic, and attention has focused on the metaverse. Entering the era of the 4th industrial revolution, the metaverse, which refers to a world beyond virtual and physical reality, combines various sensing technologies and 3D reconstruction technologies to provide diverse information and services to users easily and quickly. In particular, with the miniaturization and improved affordability of convergence sensors such as unmanned aerial vehicles (UAVs) capable of high-resolution imaging and high-precision LiDAR (Light Detection and Ranging) sensors, research on digital twins that create and simulate real-world counterparts is actively under way. In addition, game engines in the field of computer graphics are developing into metaverse engines by extending their strong 3D graphics reconstruction and dynamics-based simulation capabilities. This study constructed a mirror-world-type metaverse that reflects real-world, coordinate-based reality using Unreal Engine 5, a recently announced metaverse engine, with accurate 3D spatial information data from convergence sensors based on an unmanned aerial system (UAS) and LiDAR. Spatial information contents and simulations for users were then produced based on various public data to verify the reconstruction accuracy, confirming that a more realistic and highly usable metaverse can be built. In addition, when constructing a metaverse that users can access intuitively and easily through Unreal Engine, various uses of the contents and their effectiveness could be confirmed through coordinate-based 3D spatial information with high reproducibility.
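
One step implied by a "real-world coordinate-based" mirror world is re-projecting UAS/LiDAR coordinates into a local engine frame. The Python sketch below uses pyproj to convert WGS84 positions to a Korean projected CRS and shift them to a local origin in centimetres (Unreal Engine's default unit); the EPSG code, origin, and sample point are assumptions, and the engine's axis handedness/flip is deliberately left out.

```python
# Illustrative georeferencing step: WGS84 -> projected CRS -> local engine-frame centimetres.
from pyproj import Transformer

# WGS84 -> Korea 2000 / Central Belt 2010 (assumed CRS for the survey area)
to_tm = Transformer.from_crs("EPSG:4326", "EPSG:5186", always_xy=True)

ORIGIN_E, ORIGIN_N = 200000.0, 550000.0   # assumed local origin in the projected CRS (m)

def to_engine_cm(lon: float, lat: float, height_m: float) -> tuple:
    """Shift a geographic position to the local origin and scale metres to centimetres."""
    e, n = to_tm.transform(lon, lat)
    return ((e - ORIGIN_E) * 100.0, (n - ORIGIN_N) * 100.0, height_m * 100.0)

print(to_engine_cm(126.97, 37.56, 35.0))  # example point near Seoul
```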