• Title/Summary/Keyword: 손실 데이터 (data loss)

Search Results: 2,060

Unsupervised Non-rigid Registration Network for 3D Brain MR images (3차원 뇌 자기공명 영상의 비지도 학습 기반 비강체 정합 네트워크)

  • Oh, Donggeon; Kim, Bohyoung; Lee, Jeongjin; Shin, Yeong-Gil
    • The Journal of Korean Institute of Next Generation Computing / v.15 no.5 / pp.64-74 / 2019
  • Although non-rigid registration is in high demand in clinical practice, it has high computational complexity, and it is difficult to ensure the accuracy and robustness of the registration. This study proposes a method for applying non-rigid registration to 3D magnetic resonance images of the brain in an unsupervised learning environment using a deep-learning network. The network receives images from two different patients as inputs, produces a feature vector between them, and transforms the target image to match the source image by generating a displacement vector field. The network is designed with a U-Net shape so that the feature vectors capture both global and local differences between the two images during registration. Because a regularization term is added to the loss function, a transformation similar to real brain movement is obtained after applying trilinear interpolation. The method performs non-rigid registration with a single-pass deformation from just two arbitrary input images through unsupervised learning, so it runs faster than non-learning-based registration methods that require iterative optimization. In an experiment with 3D magnetic resonance images of 50 human brains, the Dice similarity coefficient showed an improvement of approximately 16% after registration with our method. The method also achieved performance comparable to a non-learning-based method while running about 10,000 times faster. The proposed method can be applied to non-rigid registration of various kinds of medical image data.
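  • The abstract does not give the exact loss, so the following is a minimal sketch, assuming an MSE similarity term, an L2 smoothness regularizer on the displacement field, and trilinear warping via SciPy; it is illustrative, not the paper's implementation.

```python
# Minimal sketch (not the paper's code): warp a 3D volume with a displacement
# vector field using trilinear interpolation, then compute an unsupervised
# loss = similarity term + smoothness regularizer.
import numpy as np
from scipy.ndimage import map_coordinates

def warp(volume, dvf):
    """volume: (D, H, W); dvf: (3, D, H, W) displacement in voxels."""
    grid = np.indices(volume.shape).astype(np.float32)   # identity sampling grid
    coords = grid + dvf                                   # displaced sampling points
    return map_coordinates(volume, coords, order=1, mode='nearest')  # trilinear

def registration_loss(source, target, dvf, lam=0.01):
    warped = warp(target, dvf)
    similarity = np.mean((warped - source) ** 2)          # MSE similarity (assumed)
    grads = np.gradient(dvf, axis=(1, 2, 3))              # spatial gradients of the field
    smoothness = sum(np.mean(g ** 2) for g in grads)      # L2 smoothness regularizer (assumed)
    return similarity + lam * smoothness
```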

Automatic Bee-Counting System with Dual Infrared Sensor based on ICT (ICT 기반 이중 적외선 센서를 이용한 꿀벌 출입 자동 모니터링 시스템)

  • Son, Jae Deok; Lim, Sooho; Kim, Dong-In; Han, Giyoun; Ilyasov, Rustem; Yunusbaev, Ural; Kwon, Hyung Wook
    • Journal of Apiculture / v.34 no.1 / pp.47-55 / 2019
  • Honey bees are a vital part of the food chain as the most important pollinators for a broad palette of crops and wild plants. Climate change and the colony collapse disorder (CCD) phenomenon make it challenging to develop ICT solutions that predict changes in the beehive and alert beekeepers to potential threats. In this paper, we report test results for a bee-counting system that stands out against previous analogues thanks to its comprehensive components: an improved dual infrared sensor that detects honey bees entering and leaving the hive, environmental sensors that measure ambient and in-hive conditions, a Bluetooth Low Energy (BLE) wireless network that transmits the sensing data to the gateway in real time, and a cloud platform that accumulates and analyzes the data. To assess the system's accuracy, three people manually counted the outgoing and incoming honey bees using a 360-minute video recording. The differences between the automatic and manual counts were 3.98% for outgoing and 4.43% for incoming bees. These differences are lower than those of previous analogues, suggesting that the tested system is a good candidate for use in precision apiculture, scientific research, and education.
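  • As a rough illustration of how a dual infrared gate can separate incoming from outgoing bees, the sketch below infers direction from the firing order of two beams; sensor names and timing values are hypothetical, not taken from the paper.

```python
# Illustrative sketch (not the paper's firmware): infer bee direction at a hive
# gate from the firing order of two infrared sensors, A (outer) and B (inner).
def classify_event(trigger_times):
    """trigger_times: dict like {'A': t_outer, 'B': t_inner} in seconds."""
    t_a, t_b = trigger_times.get('A'), trigger_times.get('B')
    if t_a is None or t_b is None:
        return 'unknown'            # only one beam broken; ignore the event
    return 'incoming' if t_a < t_b else 'outgoing'

counts = {'incoming': 0, 'outgoing': 0, 'unknown': 0}
for event in [{'A': 0.10, 'B': 0.14}, {'A': 0.52, 'B': 0.49}]:   # toy events
    counts[classify_event(event)] += 1
print(counts)   # {'incoming': 1, 'outgoing': 1, 'unknown': 0}
```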

Collision Risk Assessment by using Hierarchical Clustering Method and Real-time Data (계층 클러스터링과 실시간 데이터를 이용한 충돌위험평가)

  • Vu, Dang-Thai; Jeong, Jae-Yong
    • Journal of the Korean Society of Marine Environment & Safety / v.27 no.4 / pp.483-491 / 2021
  • The identification of regional collision risks in water areas is significant for the safety of navigation. This paper introduces a new method of collision risk assessment that incorporates a distance-based clustering method (hierarchical clustering) and uses real-time data when several vessels are in the vicinity, together with a grouping methodology and a preliminary assessment to classify vessels and form the basis of collision risk evaluation (called HCAAP processing). Vessels are clustered with the hierarchical procedure to obtain clusters of encountering vessels, and the preliminary assessment filters out relatively safe vessels. Subsequently, the distance at the closest point of approach (DCPA) and the time to the closest point of approach (TCPA) between encountering vessels within each cluster are calculated and related to the collision risk index (CRI). The mathematical relationship between the CRI of each cluster of encountering vessels and DCPA and TCPA is constructed using a negative exponential function. Operators can easily evaluate the safety of all vessels navigating in the defined area using the calculated CRI. Therefore, this framework can improve the safety and security of vessel traffic and reduce the loss of life and property. To illustrate the effectiveness of the proposed framework, an experimental case study was conducted in the coastal waters of Mokpo, Korea. The results demonstrated that the framework was effective and efficient in detecting and ranking collision risk indexes between encountering vessels within each cluster, allowing automatic risk prioritization of encountering vessels for further investigation by operators.
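  • The sketch below shows the standard DCPA/TCPA computation the assessment relies on, together with a negative-exponential CRI of the general form described above; the coefficients a and b are hypothetical placeholders, not the fitted values from the paper.

```python
# Sketch of DCPA/TCPA between two vessels plus a negative-exponential CRI.
import numpy as np

def dcpa_tcpa(p_own, v_own, p_tgt, v_tgt):
    """Positions in nautical miles, velocities in knots (2D numpy arrays)."""
    r = p_tgt - p_own                        # relative position
    v = v_tgt - v_own                        # relative velocity
    if np.allclose(v, 0):
        return np.linalg.norm(r), np.inf     # no relative motion
    tcpa = -np.dot(r, v) / np.dot(v, v)      # time to closest point of approach (hours)
    dcpa = np.linalg.norm(r + v * tcpa)      # distance at that time
    return dcpa, tcpa

def cri(dcpa, tcpa, a=1.0, b=0.5):
    """Negative-exponential risk index in [0, 1]; higher means riskier."""
    if tcpa < 0:                             # vessels already diverging
        return 0.0
    return float(np.exp(-(a * dcpa + b * tcpa)))

d, t = dcpa_tcpa(np.array([0.0, 0.0]), np.array([10.0, 0.0]),
                 np.array([5.0, 5.0]), np.array([0.0, -8.0]))
print(round(d, 2), round(t, 2), round(cri(d, t), 3))
```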

A study on the reliability and availability improvement of wireless communication in the LTE-R (철도통합무선망(LTE-R) 환경에서 무선통신 안정성과 가용성 향상을 위한 방안 연구)

  • Choi, Min-Suk; Oh, Sang-Chul; Lee, Sook-Jin; Yoon, Byung-Sik; Kim, Dong-Joon; Sung, Dong-Il
    • Journal of the Korea Institute of Information and Communication Engineering / v.24 no.9 / pp.1172-1179 / 2020
  • With the establishment of the railway integrated radio network (LTE-R) environment, radio-based train control transmission and reception and various other services are provided. Smooth delivery of these services requires improved performance in a highly reliable and available wireless environment. To improve the reliability and availability of radio communication in the LTE-R, this paper measured the LTE-R radio environment, analyzed the results, and built a wireless environment model. Based on this model, we propose an improved radio-access algorithm for train control to increase reliability, suggest a way to improve the stability of handovers that occur during open-air operation, and propose an automatic frequency-healing algorithm to improve availability. For the simulation, data were collected on the Korea Rail Network Authority (Daejeon) Manjong-Gangneung KTX route, where actual LTE-R wireless environment data can be measured, and the simulation results show that the proposed algorithms improve performance.
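  • The abstract does not detail the improved radio-access algorithm, so the following is only a generic illustration of a handover stability mechanism of the kind discussed (an A3-style hysteresis and time-to-trigger check); all parameter values are assumptions.

```python
# Illustrative A3-style handover check (standard LTE concept, not the paper's
# algorithm): trigger handover only when the neighbour cell beats the serving
# cell by a hysteresis margin for a full time-to-trigger window.
def should_handover(rsrp_serving, rsrp_neighbor, hysteresis_db=3.0, ttt_samples=4):
    """rsrp_* : lists of RSRP samples in dBm, most recent last."""
    window_s = rsrp_serving[-ttt_samples:]
    window_n = rsrp_neighbor[-ttt_samples:]
    if len(window_s) < ttt_samples:
        return False                                   # not enough measurements yet
    return all(n > s + hysteresis_db for s, n in zip(window_s, window_n))

serving  = [-95, -99, -101, -103, -105]
neighbor = [-96, -95, -94, -93, -92]
print(should_handover(serving, neighbor))   # True: neighbour beats serving + 3 dB for 4 samples
```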

Evaluation of Classification Performance of Inception V3 Algorithm for Chest X-ray Images of Patients with Cardiomegaly (심장비대증 환자의 흉부 X선 영상에 대한 Inception V3 알고리즘의 분류 성능평가)

  • Jeong, Woo-Yeon; Kim, Jung-Hun; Park, Ji-Eun; Kim, Min-Jeong; Lee, Jong-Min
    • Journal of the Korean Society of Radiology / v.15 no.4 / pp.455-461 / 2021
  • Cardiomegaly is one of the most common diseases seen on chest X-rays, but if it is not detected early it can cause serious complications. Accordingly, with the development of various fields of science and technology, much research has recently been conducted on medical image analysis that applies deep-learning algorithms based on artificial intelligence. In this paper, we evaluate whether the Inception V3 deep-learning model is useful for classifying cardiomegaly from chest X-ray images. A total of 1,026 chest X-ray images from Kyungpook National University Hospital were used, comprising patients diagnosed with normal hearts and patients diagnosed with cardiomegaly. In the experiment, the classification accuracy and loss of the Inception V3 model for the presence or absence of cardiomegaly were 96.0% and 0.22, respectively. The results indicate that the Inception V3 model is an excellent deep-learning model for feature extraction and classification of chest image data. It is therefore considered a useful deep-learning model for the classification of chest diseases, and if similarly strong results can be obtained with a wider variety of medical image data, it could be of great help in supporting physicians' diagnoses in the future.
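  • A hedged sketch of binary cardiomegaly classification with Inception V3 via transfer learning is shown below; the frozen ImageNet base, dropout, and hyperparameters are assumptions rather than the paper's actual training setup.

```python
# Transfer-learning sketch: Inception V3 backbone + binary head for
# normal vs. cardiomegaly chest X-ray classification.
import tensorflow as tf

base = tf.keras.applications.InceptionV3(weights="imagenet", include_top=False,
                                          input_shape=(299, 299, 3))
base.trainable = False                                   # freeze ImageNet features
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(1, activation="sigmoid"),      # normal vs. cardiomegaly
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=20)  # train_ds/val_ds: labeled X-ray datasets
```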

The Effect of Communication Distance and Number of Peripheral on Data Error Rate When Transmitting Medical Data Based on Bluetooth Low Energy (저 전력 블루투스 기반으로 의료데이터 전송 시 통신 거리와 연동 장치의 수가 데이터 손실률에 미치는 영향)

  • Park, Young-Sang; Son, ByeongJin; Son, Jaebum; Lee, Hoyul; Jeong, Yoosoo; Song, Chanho; Jung, Euisung
    • Journal of Biomedical Engineering Research / v.42 no.6 / pp.259-267 / 2021
  • Recently, the market for personal healthcare and medical devices based on Bluetooth Low Energy (BLE) has grown rapidly. BLE is used in a variety of medical data communication devices thanks to its low power consumption and universal compatibility. However, since data errors during the transmission of medical data can lead to medical accidents, it is necessary to analyze the causes of errors and study ways to reduce them. In this paper, the minimum communication speed for use in medical devices was set to at least 800 bytes/sec, based on the wireless electrocardiography regulations of the Ministry of Food and Drug Safety, and the data loss rate was measured when data were transmitted at speeds above 800 bytes/sec. The factors that cause communication data errors were classified, and the relationship between each factor and the data error rate was analyzed through experiments. When two or more activated peripherals were connected to the central device, data errors occurred due to channel hopping and bottlenecks, and the data error rate increased in proportion to the communication distance and the number of activated peripherals. Based on these experiments, when BLE is used in a medical device that transmits biosignal data intermittently, the risk of a medical accident is predicted to be low if the number of peripherals is three or fewer. However, BLE was judged unsuitable for developing biosignal measuring devices that must transmit continuously in real time, such as an electrocardiograph.
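  • The bookkeeping below illustrates how a data error (loss) rate and the 800 byte/sec requirement cited above might be checked; the byte counts and run length are made-up examples, not measurements from the paper.

```python
# Simple sketch of computing a data error (loss) rate and checking effective
# throughput against the 800 byte/sec baseline cited above.
REQUIRED_RATE = 800          # bytes per second

def data_error_rate(bytes_sent, bytes_received):
    return (bytes_sent - bytes_received) / bytes_sent * 100.0   # percent lost

def effective_rate(bytes_received, seconds):
    return bytes_received / seconds

sent, received, seconds = 500_000, 492_000, 600        # hypothetical 10-minute run
print(f"error rate: {data_error_rate(sent, received):.2f} %")               # 1.60 %
print(f"throughput OK: {effective_rate(received, seconds) >= REQUIRED_RATE}")  # True (820 byte/sec)
```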

Development of The Safe Driving Reward System for Truck Digital Tachograph using Hyperledger Fabric (하이퍼레저 패브릭을 이용한 화물차 디지털 운행기록 단말기의 안전운행 보상시스템 구현)

  • Kim, Yong-bae; Back, Juyong; Kim, Jongweon
    • Journal of Internet Computing and Services / v.23 no.3 / pp.47-56 / 2022
  • The safe driving reward system aims to reduce the loss of life and property by reducing the occurrence of accidents: it motivates safe driving and encourages active participation by providing direct rewards to vehicle drivers who drive safely. Whereas the existing digital tachograph aims to limit dangerous driving by recording the vehicle's driving status, the safe driving reward system is a supporting measure that increases the accident-prevention effect by inducing safe driving with a financial reward when safe driving is performed. In other words, in areas where accidents due to speeding are frequent, direct rewards are provided when safe-driving instructions such as speed compliance, maintaining distance between vehicles, and driving in designated lanes are followed, motivating safe driving and preventing traffic accidents. Since these safe-operation data and reward histories must be managed transparently and safely, the reward evidence and histories were built on Hyperledger Fabric, a permissioned blockchain. However, while a blockchain system guarantees transparency and safety, low data-processing speed is a problem. In this study, sequential block generation was as slow as 10 TPS (transactions per second); after applying an acceleration function, a high-performance network of 1,000 TPS or more was implemented.
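  • The sketch below is a hypothetical off-chain scoring rule showing how a tachograph trip record could be turned into reward points before being written to the ledger; field names, weights, and thresholds are assumptions, and the on-chain Hyperledger Fabric chaincode is not shown.

```python
# Hypothetical off-chain scoring sketch: turning a digital-tachograph trip
# record into reward points prior to recording them on the ledger.
def safe_driving_reward(trip):
    """trip: dict with speed_compliance, headway_kept, lane_kept as ratios in [0, 1]."""
    score = (0.5 * trip["speed_compliance"]     # speed-limit compliance weighted highest
             + 0.3 * trip["headway_kept"]       # maintaining distance between vehicles
             + 0.2 * trip["lane_kept"])         # driving in the designated lane
    return round(score * 100)                   # reward points to record on the ledger

trip = {"speed_compliance": 0.98, "headway_kept": 0.91, "lane_kept": 0.95}
print(safe_driving_reward(trip))                # 95
```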

LSTM Prediction of Streamflow during Peak Rainfall of Piney River (LSTM을 이용한 Piney River유역의 최대강우시 유량예측)

  • Kareem, Kola Yusuff; Seong, Yeonjeong; Jung, Younghun
    • Journal of Korean Society of Disaster and Security / v.14 no.4 / pp.17-27 / 2021
  • Streamflow prediction is a vital disaster mitigation approach for effective flood management and water resources planning. Lately, torrential rainfall caused by climate change has been reported to have increased globally, causing enormous losses of infrastructure, property, and lives. This study evaluates the contribution of rainfall to streamflow prediction in normal and peak rainfall scenarios, typical of the recent flood at Piney Resort in Vernon, Hickman County, Tennessee, United States. Daily streamflow, water level, and rainfall data for 20 years (2000-2019) from two USGS gage stations (03602500 upstream and 03599500 downstream) of the Piney River watershed were obtained, preprocessed, and fitted with a long short-term memory (LSTM) model. The TensorFlow and Keras machine learning frameworks were used with Python to predict streamflow values with a sequence size of 14 days, to determine whether the model could have predicted the flooding event of August 21, 2021. Model skill analysis showed that the LSTM model with full data (water level, streamflow, and rainfall) performed better than the naive model, whereas some rainfall-only models did not, indicating that rainfall alone is insufficient for streamflow prediction. The final LSTM model recorded optimal NSE and RMSE values of 0.68 and 13.84 m3/s and predicted peak flow with the lowest prediction error of 11.6%, indicating that the final model could have predicted the flood of August 24, 2021 given a peak rainfall scenario. Adequate knowledge of rainfall patterns will guide hydrologists and disaster prevention managers in designing efficient early warning systems and policies aimed at mitigating flood risks.
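  • A minimal sketch of the modelling setup described above appears below: 14-day windows of (water level, streamflow, rainfall) feeding an LSTM that predicts next-day streamflow, plus the NSE skill score; layer sizes and training settings are assumptions, not the paper's.

```python
# Sketch of 14-day sequence windowing, an LSTM regressor, and the NSE metric.
import numpy as np
import tensorflow as tf

SEQ_LEN, N_FEATURES = 14, 3     # 14-day window of (water level, streamflow, rainfall)

def make_windows(series, seq_len=SEQ_LEN):
    """series: (T, 3) array; returns (X, y) where y is next-day streamflow (column 1)."""
    X = np.stack([series[i:i + seq_len] for i in range(len(series) - seq_len)])
    y = series[seq_len:, 1]
    return X, y

model = tf.keras.Sequential([
    tf.keras.Input(shape=(SEQ_LEN, N_FEATURES)),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

def nse(obs, sim):
    """Nash-Sutcliffe efficiency, the skill score reported above."""
    obs, sim = np.asarray(obs), np.asarray(sim)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)
```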

Semantic Computing-based Dynamic Job Scheduling Model and Simulation (시멘틱 컴퓨팅 기반의 동적 작업 스케줄링 모델 및 시뮬레이션)

  • Noh, Chang-Hyeon; Jang, Sung-Ho; Kim, Tae-Young; Lee, Jong-Sik
    • Journal of the Korea Society for Simulation / v.18 no.2 / pp.29-38 / 2009
  • In a computing environment with heterogeneous resources, a job scheduling model is necessary for effective resource utilization and high-speed data processing, and it must cope with dynamic changes in resource conditions. There has been much research on resource estimation methods and heuristic algorithms for distributing and allocating jobs to heterogeneous resources. However, existing approaches are weak in system compatibility and scalability because they do not support a standard language, and they cannot process jobs effectively or handle the variety of computing situations in which resource conditions change dynamically in real time. To solve these problems, this paper proposes a semantic computing-based dynamic job scheduling model that defines various knowledge-based rules for scheduling methods adaptable to changes in resource conditions and allocates each job to the best-suited resource through inference. This paper also constructs a resource ontology to manage information about heterogeneous resources easily, using OWL, the standard ontology language established by the W3C. Experimental results show that the proposed scheduling model outperforms existing scheduling models in terms of throughput, job loss, and turnaround time.
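  • As a toy illustration of rule-based matching of jobs to dynamically changing resources (not the paper's OWL ontology or inference engine), the sketch below applies simple knowledge-style rules to live resource status; attribute names are hypothetical.

```python
# Toy rule-based scheduler: pick the best-suited resource for a job from
# dynamically reported resource conditions.
def schedule(job, resources):
    """job: dict of minimum requirements; resources: list of dicts with live status."""
    def eligible(r):
        return (r["free_cpu"] >= job["cpu"] and
                r["free_mem_gb"] >= job["mem_gb"] and
                r["online"])                              # rule: never schedule to offline nodes
    candidates = [r for r in resources if eligible(r)]
    if not candidates:
        return None                                       # job is queued, not lost
    # rule: prefer the least-loaded eligible resource
    return min(candidates, key=lambda r: r["load"])["name"]

resources = [
    {"name": "nodeA", "free_cpu": 8, "free_mem_gb": 16, "load": 0.7, "online": True},
    {"name": "nodeB", "free_cpu": 4, "free_mem_gb": 32, "load": 0.2, "online": True},
]
print(schedule({"cpu": 4, "mem_gb": 8}, resources))       # nodeB
```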

Extending StarGAN-VC to Unseen Speakers Using RawNet3 Speaker Representation (RawNet3 화자 표현을 활용한 임의의 화자 간 음성 변환을 위한 StarGAN의 확장)

  • Bogyung Park; Somin Park; Hyunki Hong
    • KIPS Transactions on Software and Data Engineering / v.12 no.7 / pp.303-314 / 2023
  • Voice conversion, a technology that allows an individual's speech data to be regenerated with the acoustic properties (tone, cadence, gender) of another, has countless applications in education, communication, and entertainment. This paper proposes an approach based on the StarGAN-VC model that generates realistic-sounding speech without requiring parallel utterances. To overcome the constraint of the existing StarGAN-VC model, which uses one-hot vectors of source and target speaker information, this paper extracts feature vectors of target speakers using a pre-trained RawNet3. This yields a latent space in which voice conversion can be performed without direct speaker-to-speaker mappings, enabling an any-to-any structure. In addition to the loss terms used in the original StarGAN-VC model, the Wasserstein distance is used as a loss term to ensure that generated voice segments match the acoustic properties of the target voice. The two time-scale update rule (TTUR) is also used to facilitate stable training. Experimental results show that the proposed method outperforms previous methods, including the StarGAN-VC network on which it was based.
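  • The sketch below illustrates the TTUR and Wasserstein-style loss terms mentioned above: the critic is updated with a larger learning rate than the generator, and the critic objective is the Wasserstein difference of scores; the learning rates and beta values are assumptions, not the paper's settings.

```python
# TTUR sketch: separate optimizers with different learning rates, plus
# Wasserstein-style generator/critic objectives.
import tensorflow as tf

gen_opt  = tf.keras.optimizers.Adam(learning_rate=1e-4, beta_1=0.5)   # slower generator updates
disc_opt = tf.keras.optimizers.Adam(learning_rate=4e-4, beta_1=0.5)   # faster critic updates (TTUR)

def wasserstein_critic_loss(real_scores, fake_scores):
    # Critic maximizes E[D(real)] - E[D(fake)]; we minimize the negative.
    return tf.reduce_mean(fake_scores) - tf.reduce_mean(real_scores)

def wasserstein_generator_loss(fake_scores):
    # Generator tries to raise the critic's score on converted speech features.
    return -tf.reduce_mean(fake_scores)
```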