• Title/Summary/Keyword: Computational Load

Search Results: 1,708

Diagnosis and Visualization of Intracranial Hemorrhage on Computed Tomography Images Using EfficientNet-based Model (전산화 단층 촬영(Computed tomography, CT) 이미지에 대한 EfficientNet 기반 두개내출혈 진단 및 가시화 모델 개발)

  • Youn, Yebin;Kim, Mingeon;Kim, Jiho;Kang, Bongkeun;Kim, Ghootae
    • Journal of Biomedical Engineering Research
    • /
    • v.42 no.4
    • /
    • pp.150-158
    • /
    • 2021
  • Intracranial hemorrhage (ICH) refers to acute bleeding inside the intracranial vault. Not only does this devastating disease record a very high mortality rate, but it can also cause serious chronic impairment of sensory, motor, and cognitive functions. A prompt and professional diagnosis of the disease is therefore critical. Noninvasive brain imaging data are essential for clinicians to efficiently diagnose the locus of the brain lesion, the volume of bleeding, and subsequent cortical damage, and to plan clinical interventions. In particular, computed tomography (CT) images are used most often for the diagnosis of ICH. Diagnosing ICH from CT images not only requires medical specialists with sufficient diagnostic experience; even when this condition is met, bleeding often cannot be detected because of factors such as a low signal-to-noise ratio and artifacts in the image itself. In addition, discrepancies between interpretations, or even misinterpretations, can occur and cause critical clinical consequences. To resolve these clinical problems, we developed a diagnostic model that predicts intracranial bleeding and its subtypes (intraparenchymal, intraventricular, subarachnoid, subdural, and epidural) by applying deep learning algorithms to CT images. We also constructed a visualization tool that highlights the regions of a CT image most important for predicting ICH. Specifically, 1) 27,758 brain CT images from RSNA were pre-processed to minimize the computational load. 2) Three CNN-based models (ResNet, EfficientNet-B2, and EfficientNet-B7) were trained on a training image data set. 3) The diagnostic performance of each of the three models was evaluated on an independent test image data set: in the model comparison, EfficientNet-B7's performance (classification accuracy = 91%) was considerably better than that of the other models. 4) Finally, based on the EfficientNet-B7 results, we visualized the internal bleeding lesions using Grad-CAM. Our research suggests that artificial intelligence-based diagnostic systems can help diagnose and treat brain diseases, resolving various problems in clinical practice.
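The Grad-CAM step described above can be reproduced in a few lines for any CNN classifier. The sketch below, assuming a torchvision EfficientNet-B7 backbone with a hypothetical 6-way output head, is illustrative only and is not the authors' pipeline; the preprocessing, hooked layer, and class head are assumptions.

```python
import torch
import torch.nn.functional as F
from torchvision.models import efficientnet_b7

# Hypothetical 6-way ICH head on an un-pretrained EfficientNet-B7 backbone.
model = efficientnet_b7(weights=None)
model.classifier[1] = torch.nn.Linear(model.classifier[1].in_features, 6)
model.eval()

features, grads = {}, {}

def fwd_hook(_module, _inp, output):        # capture the last conv feature maps
    features["value"] = output

def bwd_hook(_module, _grad_in, grad_out):  # capture gradients w.r.t. those maps
    grads["value"] = grad_out[0]

target_layer = model.features[-1]
target_layer.register_forward_hook(fwd_hook)
target_layer.register_full_backward_hook(bwd_hook)

def grad_cam(x, class_idx):
    """x: (1, 3, H, W) pre-processed CT slice; returns a (1, 1, H, W) heatmap."""
    logits = model(x)
    model.zero_grad()
    logits[0, class_idx].backward()
    weights = grads["value"].mean(dim=(2, 3), keepdim=True)    # channel importance
    cam = F.relu((weights * features["value"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
    return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalize to [0, 1]

heatmap = grad_cam(torch.randn(1, 3, 320, 320), class_idx=0)   # dummy input
```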

Single Image Super Resolution Based on Residual Dense Channel Attention Block-RecursiveSRNet (잔여 밀집 및 채널 집중 기법을 갖는 재귀적 경량 네트워크 기반의 단일 이미지 초해상도 기법)

  • Woo, Hee-Jo;Sim, Ji-Woo;Kim, Eung-Tae
    • Journal of Broadcast Engineering
    • /
    • v.26 no.4
    • /
    • pp.429-440
    • /
    • 2021
  • With the recent development of deep convolutional neural network learning, deep learning techniques applied to single-image super-resolution are showing good results. One existing deep learning-based super-resolution technique is RDN (Residual Dense Network), in which initial feature information is transmitted to the last layer through residual dense blocks, and subsequent layers are restored using the input information of previous layers. However, when all hierarchical features are connected and learned and a large number of residual dense blocks are stacked, the network performs well but requires a large number of parameters and a heavy computational load; training takes a long time, processing is slow, and the model is not applicable to mobile systems. In this paper, we use the residual dense structure, a continuous-memory structure that reuses previous information, together with a residual dense channel attention block that uses channel attention to determine the importance of each channel of the feature map. We propose a method that increases depth to obtain a large receptive field while keeping the model compact. Experimental results show that, at 4× magnification, the proposed network's PSNR is on average only 0.205 dB lower than RDN's, while processing is about 1.8 times faster, the number of parameters is about 10 times smaller, and the computation is about 1.74 times less.
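As a rough illustration of the building block named in the title, the following sketch combines dense intra-block connections with SE-style channel attention; the channel counts, growth rate, and number of layers are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.body = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                       # squeeze: global average pool
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),                                  # per-channel importance weights
        )

    def forward(self, x):
        return x * self.body(x)

class ResidualDenseChannelAttentionBlock(nn.Module):
    def __init__(self, channels=64, growth=32, layers=4):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv2d(channels + i * growth, growth, 3, padding=1) for i in range(layers)
        )
        self.fuse = nn.Conv2d(channels + layers * growth, channels, 1)  # local feature fusion
        self.ca = ChannelAttention(channels)

    def forward(self, x):
        feats = [x]
        for conv in self.convs:
            feats.append(torch.relu(conv(torch.cat(feats, dim=1))))     # dense connections
        out = self.ca(self.fuse(torch.cat(feats, dim=1)))
        return x + out                                                   # local residual learning

block = ResidualDenseChannelAttentionBlock()
y = block(torch.randn(1, 64, 48, 48))   # output has the same shape as the input
```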

A Study on Non-Fungible Token Platform for Usability and Privacy Improvement (사용성 및 프라이버시 개선을 위한 NFT 플랫폼 연구)

  • Kang, Myung Joe;Kim, Mi Hui
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.11 no.11
    • /
    • pp.403-410
    • /
    • 2022
  • Non-Fungible Tokens (NFTs) created on a blockchain have their own unique value, so they cannot be forged or exchanged one-for-one with other tokens or coins. Using these characteristics, NFTs can be issued for digital assets such as images, videos, artworks, game characters, and items to claim ownership of those assets among the many users and objects in cyberspace, as well as to prove originality. However, interest in NFTs has exploded since the beginning of 2020, placing a heavy load on blockchain networks; as a result, users experience problems such as delays in computational processing and very high fees in the mining process. Additionally, all user actions are stored on the blockchain, and digital assets are stored in a blockchain-based distributed file storage system, which may unnecessarily expose the personal information of users who do not want to identify themselves on the Internet. In this paper, we propose an NFT platform using cloud computing, an access gate, a conversion table, and a cloud ID to improve the usability and privacy problems of existing systems. For performance comparison between the local and cloud systems, we measured the gas used for smart contract deployment and for NFT issuance transactions. As a result, even though the cloud system used the same experimental environment and parameters, it saved about 3.75% of the gas for smart contract deployment and about 4.6% for NFT issuance transactions, confirming that the cloud system can handle computations more efficiently than the local system.
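The gas comparison described above can be reproduced against any Ethereum-compatible node with web3.py. The sketch below is a hedged outline, not the authors' code: the compiled ABI and bytecode must be supplied, and the mint function name `mintNFT` is a placeholder for whatever the contract actually exposes.

```python
from web3 import Web3

def measure_gas(rpc_url, abi, bytecode, token_uri):
    """Deploy an NFT contract and mint one token, returning (deploy_gas, mint_gas).

    `abi`/`bytecode` come from compiling the contract elsewhere, and `mintNFT`
    is a placeholder for the contract's actual mint function.
    """
    w3 = Web3(Web3.HTTPProvider(rpc_url))
    account = w3.eth.accounts[0]              # assumes an unlocked node account

    # 1) deploy the smart contract and record the deployment gas
    factory = w3.eth.contract(abi=abi, bytecode=bytecode)
    deploy_receipt = w3.eth.wait_for_transaction_receipt(
        factory.constructor().transact({"from": account})
    )

    # 2) issue one NFT and record the issuance-transaction gas
    nft = w3.eth.contract(address=deploy_receipt.contractAddress, abi=abi)
    mint_receipt = w3.eth.wait_for_transaction_receipt(
        nft.functions.mintNFT(account, token_uri).transact({"from": account})
    )
    return deploy_receipt.gasUsed, mint_receipt.gasUsed

# e.g.: deploy_gas, mint_gas = measure_gas("http://127.0.0.1:8545", abi, bytecode, "ipfs://...")
```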

Reliability of mortar filling layer void length in in-service ballastless track-bridge system of HSR

  • Binbin He;Sheng Wen;Yulin Feng;Lizhong Jiang;Wangbao Zhou
    • Steel and Composite Structures
    • /
    • v.47 no.1
    • /
    • pp.91-102
    • /
    • 2023
  • To study the evaluation standard and control limit of mortar filling layer void length, in this paper the train sub-model was developed in MATLAB and the track-bridge sub-model, which considers the mortar filling layer void, was established in ANSYS. The two sub-models were assembled into a train-track-bridge coupled dynamic model through the wheel-rail contact relationship, and the validity of the coupled model was corroborated against a model from the literature. Considering the randomness of fastening stiffness, mortar elastic modulus, length of mortar filling layer void, and pier settlement, the test points were designed by the Box-Behnken method using Design-Expert software. The coupled dynamic model was evaluated at these points, and a support vector regression (SVR) nonlinear mapping model of the wheel-rail system was established; training, prediction, and verification were carried out. Finally, the reliable probability of the amplification coefficient distribution of the train and structure response indices in different ranges was obtained based on the SVR nonlinear mapping model and Latin hypercube sampling, and the limit of the mortar filling layer void length was thus obtained. The results show that the SVR nonlinear mapping model developed in this paper has a high fitting accuracy of 0.993 and improves computational efficiency by 99.86%; it can be used to calculate the dynamic response of the wheel-rail system. The length of the mortar filling layer void significantly affects the wheel-rail vertical force, wheel load reduction ratio, rail vertical displacement, and track plate vertical displacement. The dynamic response of the track structure has a more significant effect on the limit value of the void length than the dynamic response of the vehicle, with the rail vertical displacement being the most significant. At train running speeds of 250 km/h to 350 km/h, the grade I, II, and III limit values of the mortar filling layer void length are 3.932 m, 4.337 m, and 4.766 m, respectively. The results can provide a reference for the long-term service performance and reliability of the ballastless track-bridge system of HSR.
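As a hedged illustration of the surrogate-plus-sampling step (an SVR fitted to coupled-model results, then Latin hypercube sampling to estimate a reliable probability), the sketch below uses scikit-learn and SciPy with made-up variable bounds, training data, and limit value rather than the paper's Box-Behnken design and dynamic responses.

```python
import numpy as np
from scipy.stats import qmc
from sklearn.svm import SVR

rng = np.random.default_rng(0)

# Inputs: fastening stiffness, mortar elastic modulus, void length, pier settlement.
l_bounds = [2.0e7, 2.0e4, 0.0, 0.00]     # assumed lower bounds
u_bounds = [6.0e7, 4.0e4, 6.0, 0.02]     # assumed upper bounds

# Stand-in for the coupled-model results at the design points (the paper uses
# a Box-Behnken design; here a small Latin hypercube plays that role).
X_train = qmc.scale(qmc.LatinHypercube(d=4, seed=1).random(30), l_bounds, u_bounds)
y_train = 1.0 + 0.2 * X_train[:, 2] + 0.01 * rng.standard_normal(30)  # toy amplification factor

surrogate = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X_train, y_train)

# Latin hypercube sampling of the random variables, pushed through the surrogate.
X_mc = qmc.scale(qmc.LatinHypercube(d=4, seed=2).random(10_000), l_bounds, u_bounds)
limit = 1.8                                   # illustrative response limit
reliability = np.mean(surrogate.predict(X_mc) <= limit)
print(f"reliable probability ~ {reliability:.3f}")
```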

Study on the Selection of Optimal Operation Position Using AI Techniques (인공지능 기법에 의한 최적 운항자세 선정에 관한 연구)

  • Dong-Woo Park
    • Journal of the Korean Society of Marine Environment & Safety
    • /
    • v.29 no.6
    • /
    • pp.681-687
    • /
    • 2023
  • The optimal operation position selection technique is used to present the initial bow and stern drafts with minimum resistance, that is, optimal fuel consumption efficiency, at a given operating displacement and speed. The main purpose of this study is to develop a program that selects the optimal operating position with maximum energy efficiency under given operating conditions, based on the effective power data of the target ship. The program was written as a Python-based GUI (Graphic User Interface) using artificial intelligence techniques so that ship owners can easily use it. The introduction of the target ship, the collection of effective power data through computational fluid dynamics (CFD), the training of the effective power model using deep learning, and the program for presenting the optimal operation position using the deep neural network (DNN) model are explained in detail. Ships are loaded and unloaded on each voyage, which changes the cargo load and thus the displacement. The shipowner wants to know the operating position with minimum resistance, that is, maximum energy efficiency, for a given speed at each displacement. The developed GUI can be installed on the ship's tablet PC as an application and used to select the optimal operating position.
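A minimal sketch of the described workflow, with synthetic stand-in data instead of the paper's CFD-derived effective power table, might look as follows; the network size, variable ranges, and toy resistance law are assumptions made only for illustration.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic stand-in for the effective-power table: [displacement, speed, Tf, Ta] -> P_E.
X = rng.uniform([5000, 10, 4.0, 4.0], [9000, 16, 7.0, 7.0], size=(2000, 4))
trim = X[:, 3] - X[:, 2]
P_E = 0.02 * X[:, 0] * (X[:, 1] / 12) ** 3 * (1 + 0.05 * trim ** 2)   # toy resistance law

dnn = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(64, 64, 64), max_iter=2000, random_state=0),
)
dnn.fit(X, P_E)

def optimal_drafts(displacement, speed, draft_grid=np.linspace(4.0, 7.0, 31)):
    """Return the (fore, aft) draft pair with minimum predicted effective power.

    A real tool would also constrain the draft pair to match the given displacement.
    """
    tf, ta = np.meshgrid(draft_grid, draft_grid)
    cand = np.column_stack([np.full(tf.size, displacement), np.full(tf.size, speed),
                            tf.ravel(), ta.ravel()])
    best = np.argmin(dnn.predict(cand))
    return cand[best, 2], cand[best, 3]

print(optimal_drafts(displacement=7000, speed=13.5))
```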

Development of Agent-based Platform for Coordinated Scheduling in Global Supply Chain (글로벌 공급사슬에서 경쟁협력 스케줄링을 위한 에이전트 기반 플랫폼 구축)

  • Lee, Jung-Seung;Choi, Seong-Woo
    • Journal of Intelligence and Information Systems
    • /
    • v.17 no.4
    • /
    • pp.213-226
    • /
    • 2011
  • In global supply chains, the scheduling problems of large products such as ships, airplanes, space shuttles, assembled constructions, and automobiles are complicated by nature. New scheduling systems are often developed to reduce this inherent computational complexity: a problem is decomposed into small sub-problems, each handled by an independent small scheduling system, and the results are integrated back into the original problem. As one of the authors experienced, DAS (Daewoo Shipbuilding Scheduling System) adopted a two-layered hierarchical architecture. In this hierarchical architecture, the individual scheduling systems, composed of a high-level dock scheduler (DAS-ERECT) and low-level assembly plant schedulers (DAS-PBS, DAS-3DS, DAS-NPS, and DAS-A7), each search for the best schedules under their own constraints. Moreover, the rapid growth of communication technology and logistics makes it possible to operate distributed multinational production plants in which different parts are produced by designated plants. Vertical and lateral coordination among the decomposed scheduling systems is therefore necessary. No standard coordination mechanism for multiple scheduling systems exists, even though various scheduling systems have been developed in the scheduling research area. Previous research on coordination mechanisms has mainly focused on external conversation without a capacity model. In agent research, prior work has focused heavily on agent-based coordination, yet no scheduling domain has been developed, and previous research on agent-based scheduling has paid ample attention to internal coordination of the scheduling process, which has not been efficient. In this study, we suggest a general framework for agent-based coordination of multiple scheduling systems in a global supply chain. The purpose of this study was to design a standard coordination mechanism. To do so, we first define an individual scheduling agent responsible for its own plant and a meta-level coordination agent involved with each individual scheduling agent. We then suggest variables and values describing the individual scheduling agents and the meta-level coordination agent, represented in Backus-Naur Form. Second, we suggest scheduling agent communication protocols for each scheduling agent topology, classified by system architecture, the existence or nonexistence of a coordinator, and the direction of coordination. If a coordinating agent exists, an individual scheduling agent communicates with another individual agent indirectly through the coordinator; otherwise, it communicates with the other agent directly. To apply the agent communication language specifically to the scheduling coordination domain, we additionally define an inner language that suitably expresses scheduling coordination; the scheduling agent communication language itself is devised for communication among agents independent of domain. We adopt three message layers: the ACL layer, the scheduling coordination layer, and the industry-specific layer. The ACL layer is a domain-independent outer language layer, the scheduling coordination layer contains the terms necessary for scheduling coordination, and the industry-specific layer expresses the industry specification.
Third, in order to improve the efficiency of communication among scheduling agents and to avoid possible infinite loops, we suggest a look-ahead load balancing model that supports monitoring participating agents and analyzing their status. To build the look-ahead load balancing model, the status of participating agents should be monitored; above all, the amount of shared information should be considered. If complete information is collected, the cost of updating and maintaining shared information increases even though the frequency of communication decreases; the level of detail and the update period of shared information should therefore be decided contingently. By means of this standard coordination mechanism, we can easily model the coordination processes of multiple scheduling systems in a supply chain. Finally, we apply this mechanism to the shipbuilding domain and develop a prototype system consisting of a dock-scheduling agent, four assembly-plant-scheduling agents, and a meta-level coordination agent. A series of experiments using real-world data empirically examines this mechanism. The results show that the effect of the agent-based platform on coordinated scheduling is evident in terms of the number of tardy jobs, tardiness, and makespan.
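As a hedged illustration of the three-layer message structure (ACL layer, scheduling coordination layer, industry-specific layer), the sketch below shows one possible encoding in Python; the field names and example values are invented for illustration and are not the paper's BNF definitions.

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class SchedulingMessage:
    # ACL layer: domain-independent outer envelope (FIPA-ACL-like fields).
    performative: str                  # e.g. "request", "inform", "propose"
    sender: str
    receiver: str
    # Scheduling coordination layer: terms needed for coordination itself.
    coordination: dict = field(default_factory=dict)   # e.g. due dates, capacity offers
    # Industry-specific layer: shipbuilding-specific content.
    industry: dict = field(default_factory=dict)       # e.g. block IDs, dock assignments

msg = SchedulingMessage(
    performative="request",
    sender="meta-coordinator",
    receiver="assembly-plant-agent-3",
    coordination={"action": "reschedule", "job": "BLOCK-117", "latest_finish": "2011-10-20"},
    industry={"dock": "DOCK-1", "erection_sequence": 42},
)
print(json.dumps(asdict(msg), indent=2))
```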

Wintertime Extreme Storm Waves in the East Sea: Estimation of Extreme Storm Waves and Wave-Structure Interaction Study in the Fushiki Port, Toyama Bay (동해의 동계 극한 폭풍파랑: 토야마만 후시키항의 극한 폭풍파랑 추산 및 파랑 · 구조물 상호작용 연구)

  • Lee, Han Soo;Komaguchi, Tomoaki;Yamamoto, Atsushi;Hara, Masanori
    • Journal of Korean Society of Coastal and Ocean Engineers
    • /
    • v.25 no.5
    • /
    • pp.335-347
    • /
    • 2013
  • In February 2008, high storm waves caused by a developed atmospheric low-pressure system propagating from the west off Hokkaido, Japan, to the south and southwest throughout the East Sea (ES) caused extensive damage along the central coast of Japan and along the east coast of Korea. This study consists of two parts. In the first part, we estimate extreme storm wave characteristics in Toyama Bay, where heavy coastal damage occurred, using a non-hydrostatic meteorological model and a spectral wave model, considering extreme conditions for two factors of wind wave growth, namely wind intensity and duration. The estimated extreme significant wave height and corresponding wave period were 6.78 m and 18.28 sec, respectively, at Fushiki, Toyama. In the second part, we perform numerical experiments on wave-structure interaction in the Fushiki Port, Toyama Bay, where the long North-Breakwater was heavily damaged by the storm waves of February 2008. The experiments are conducted using a non-linear shallow-water equation model with adaptive mesh refinement (AMR) and a wet-dry scheme. The estimated extreme storm waves of 6.78 m and 18.28 sec are used for the incident wave profile. The results show that the Fushiki Port would be overtopped and flooded by extreme storm waves if the North-Breakwater did not function properly after being damaged, and the storm waves would also overtop the seawalls and sidewalls of the Manyou Pier behind the North-Breakwater. The results also show that the meshes refined by the AMR method with the wet-dry scheme capture the coastline and coastal structures well while keeping the computational load low.
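For reference, a model of the kind described (non-linear shallow-water equations with a wet-dry scheme) typically solves the depth-averaged system below; the notation is generic and written in 1D for brevity, and is not taken from the paper.

```latex
% Depth-averaged nonlinear shallow-water equations (1D conservative form);
% h: water depth, u: depth-averaged velocity, b: bed elevation,
% g: gravitational acceleration, c_f: bottom friction coefficient.
\begin{aligned}
\frac{\partial h}{\partial t} + \frac{\partial (hu)}{\partial x} &= 0,\\
\frac{\partial (hu)}{\partial t} + \frac{\partial}{\partial x}\!\left(h u^{2} + \tfrac{1}{2} g h^{2}\right)
  &= -\, g h \frac{\partial b}{\partial x} - c_f\, u\, \lvert u \rvert .
\end{aligned}
```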

A Basis Study on the Optimal Design of the Integrated PM/NOx Reduction Device (일체형 PM/NOx 동시저감장치의 최적 설계에 대한 기초 연구)

  • Choe, Su-Jeong;Pham, Van Chien;Lee, Won-Ju;Kim, Jun-Soo;Kim, Jeong-Kuk;Park, Hoyong;Lim, In Gweon;Choi, Jae-Hyuk
    • Journal of the Korean Society of Marine Environment & Safety
    • /
    • v.28 no.6
    • /
    • pp.1092-1099
    • /
    • 2022
  • Research on exhaust aftertreatment devices to reduce air pollutant and greenhouse gas emissions is being actively conducted. However, in the case of the particulate matter/nitrogen oxides (PM/NOx) simultaneous reduction device for ships, problems with diesel engine back pressure and with replacement of the filter carrier have arisen. In this study, for the optimal design of an integrated device that can simultaneously reduce PM and NOx, an appropriate standard was presented by studying the flow inside the device and the change in back pressure obtained from the inlet/outlet pressures. Ansys Fluent was used to apply porous media conditions to the diesel particulate filter (DPF) and selective catalytic reduction (SCR) zones, with the porosity set to 30%, 40%, 50%, 60%, and 70%. In addition, the effect on back pressure was analyzed by applying inlet velocities of 7.4 m/s, 10.3 m/s, 13.1 m/s, and 26.2 m/s, corresponding to engine load, as boundary conditions. As a result of the computational fluid dynamics analysis, the rate of change of back pressure with inlet velocity was greater than with inlet temperature, and the maximum change was 27.4 mbar. The device was evaluated as suitable for 1,800 kW ships because the back pressure under all boundary conditions did not exceed the classification standard of 68 mbar.
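The back-pressure trend described above follows the Darcy-Forchheimer porous-media relation that Fluent applies to porous zones, dp/L = (mu/alpha)*v + C2*(rho/2)*v^2. The sketch below evaluates that relation for the four inlet velocities; the permeability, inertial resistance, substrate lengths, and gas properties are illustrative placeholders, not the paper's calibrated values.

```python
# Darcy-Forchheimer pressure drop per porous zone: dp = ((mu/alpha)*v + C2*0.5*rho*v**2) * L
MU = 1.8e-5     # exhaust-gas dynamic viscosity [Pa*s] (assumed)
RHO = 0.9       # exhaust-gas density [kg/m^3] (assumed)

def porous_dp(v, alpha, c2, length):
    """Pressure drop [Pa] across a porous zone of thickness `length` [m]."""
    return ((MU / alpha) * v + c2 * 0.5 * RHO * v ** 2) * length

for v in (7.4, 10.3, 13.1, 26.2):                               # inlet velocities by engine load
    dp_dpf = porous_dp(v, alpha=1.0e-7, c2=20.0, length=0.15)   # assumed DPF zone parameters
    dp_scr = porous_dp(v, alpha=2.0e-7, c2=10.0, length=0.30)   # assumed SCR zone parameters
    total_mbar = (dp_dpf + dp_scr) / 100.0                      # 1 mbar = 100 Pa
    print(f"v = {v:5.1f} m/s -> estimated back pressure ~ {total_mbar:5.1f} mbar")
```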