• Title/Summary/Keyword: 정보처리기술 (Information Processing Technology)

Search Results: 13,602

Voice Synthesis Detection Using Language Model-Based Speech Feature Extraction (언어 모델 기반 음성 특징 추출을 활용한 생성 음성 탐지)

  • Seung-min Kim;So-hee Park;Dae-seon Choi
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.34 no.3
    • /
    • pp.439-449
    • /
    • 2024
  • Recent rapid advancements in voice generation technology have enabled the natural synthesis of voices using text alone. However, this progress has led to an increase in malicious activities, such as voice phishing (vishing), where generated voices are exploited for criminal purposes. Numerous models have been developed to detect the presence of synthesized voices, typically by extracting features from the voice and using these features to determine the likelihood of voice generation. This paper proposes a new model for extracting voice features to address misuse cases arising from generated voices. It utilizes a deep learning-based audio codec model and the pre-trained natural language processing model BERT to extract novel voice features. To assess the suitability of the proposed voice feature extraction model for voice detection, four generated voice detection models were created using the extracted features, and performance evaluations were conducted. For performance comparison, three voice detection models based on Deepfeature proposed in previous studies were evaluated against the other models in terms of accuracy and EER. The model proposed in this paper achieved an accuracy of 88.08% and a low EER of 11.79%, outperforming the existing models. These results confirm that the voice feature extraction method introduced in this paper can be an effective tool for distinguishing between generated and real voices.
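
As a minimal illustration of the evaluation step described in the abstract above (not the paper's codec+BERT feature extractor), the Python sketch below assumes fixed-length feature vectors have already been extracted per utterance and shows how a simple detector could be trained and scored with accuracy and EER; the data, model, and threshold here are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, roc_curve

# Hypothetical pre-extracted feature vectors: label 1 = generated voice, 0 = real voice.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 128))
y_train = rng.integers(0, 2, size=500)
X_test = rng.normal(size=(200, 128))
y_test = rng.integers(0, 2, size=200)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
scores = clf.predict_proba(X_test)[:, 1]          # likelihood of "generated"

acc = accuracy_score(y_test, scores > 0.5)

# EER: the operating point where the false-accept rate equals the false-reject rate.
fpr, tpr, _ = roc_curve(y_test, scores)
fnr = 1.0 - tpr
eer = fpr[np.argmin(np.abs(fpr - fnr))]

print(f"accuracy={acc:.4f}  EER={eer:.4f}")
```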

Establishment of Risk Database and Development of Risk Classification System for NATM Tunnel (NATM 터널 공정리스크 데이터베이스 구축 및 리스크 분류체계 개발)

  • Kim, Hyunbee;Karunarathne, Batagalle Vinuri;Kim, ByungSoo
    • Korean Journal of Construction Engineering and Management
    • /
    • v.25 no.1
    • /
    • pp.32-41
    • /
    • 2024
  • In the construction industry, not only safety accidents but also various complex risks such as construction delays, cost increases, and environmental pollution occur, and management technologies are needed to address them. Among them, process risk management, which directly affects the project, lacks related information relative to its importance. This study aimed to develop an NATM tunnel process risk classification system to resolve the difficulty of retrieving risk information caused by the use of different classification systems for each project. Risks were collected through a review of the existing literature and experience-mining techniques, and the database was built using natural language processing concepts. For the structure of the classification system, the existing WBS structure was adopted in consideration of data compatibility, and an RBS linked to the work types of the WBS was established. As a result, a risk classification system was completed that easily identifies risks by work type and intuitively reveals the risk characteristics and risk factors linked to each risk. Verification of the usability of the established classification system showed that it was effective, as risks and risk factors for each work type were easily identified from user keyword input. This study is expected to contribute to preventing cost and schedule overruns by identifying risks by work type in advance when planning and designing NATM tunnels and by establishing countermeasures suited to those factors.
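
To make the keyword-lookup idea concrete, here is a minimal, hypothetical sketch of a WBS-linked risk classification database queried by keyword; the work types, risks, and risk factors are invented for illustration and are not the study's actual database.

```python
# Hypothetical WBS work type -> (risk, risk factor) records, illustrating
# keyword-based retrieval from a process-risk classification DB.
risk_db = {
    "excavation": [("face collapse", "weak rock grade"), ("overbreak", "blasting control")],
    "shotcrete": [("delayed strength", "mix proportion"), ("rebound loss", "spraying method")],
    "lining": [("crack occurrence", "curing management")],
}

def find_risks(keyword: str):
    """Return (work type, risk, risk factor) rows whose fields contain the keyword."""
    keyword = keyword.lower()
    hits = []
    for work_type, records in risk_db.items():
        for risk, factor in records:
            if keyword in work_type or keyword in risk or keyword in factor:
                hits.append((work_type, risk, factor))
    return hits

print(find_risks("collapse"))
```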

Investigating Data Preprocessing Algorithms of a Deep Learning Postprocessing Model for the Improvement of Sub-Seasonal to Seasonal Climate Predictions (계절내-계절 기후예측의 딥러닝 기반 후보정을 위한 입력자료 전처리 기법 평가)

  • Uran Chung;Jinyoung Rhee;Miae Kim;Soo-Jin Sohn
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • v.25 no.2
    • /
    • pp.80-98
    • /
    • 2023
  • This study explores the effectiveness of various data preprocessing algorithms for improving subseasonal to seasonal (S2S) climate predictions from six climate forecast models and their Multi-Model Ensemble (MME) using a deep learning-based postprocessing model. A pipeline of data transformation algorithms was constructed to convert raw S2S prediction data into training data processed with several statistical distributions. A dimensionality reduction algorithm was applied to select features based on rankings of the correlation coefficients between the observed and the input data. The training model was designed with the TimeDistributed wrapper applied to all convolutional layers of a U-Net: the TimeDistributed wrapper allows a U-Net convolutional layer to be applied directly to 5-dimensional time series data while maintaining the time axis, given that every input to U-Net must be at least 3D. We found that the Robust and Standard transformation algorithms are most suitable for improving S2S predictions. The dimensionality reduction based on feature selection did not significantly improve predictions of daily precipitation for the six climate models and even worsened predictions of daily maximum and minimum temperatures. While deep learning-based postprocessing also improved MME S2S precipitation predictions, it did not have a significant effect on temperature predictions, particularly for lead times of weeks 1 and 2. Further research is needed to develop an optimal deep learning model for improving S2S temperature predictions by testing various models and parameters.
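
As a rough sketch of two of the ideas mentioned above (Robust/Standard scaling of input variables, and wrapping U-Net style convolutions in TimeDistributed so they apply along the time axis of 5-D data), the snippet below uses scikit-learn and Keras; the shapes and layer sizes are placeholders, not the study's actual configuration.

```python
import numpy as np
import tensorflow as tf
from sklearn.preprocessing import RobustScaler

# Placeholder S2S-like data: (samples, time, lat, lon, variables).
x = np.random.rand(8, 4, 32, 32, 3).astype("float32")

# Scale each variable; RobustScaler expects 2-D input, so flatten and restore shape.
flat = x.reshape(-1, x.shape[-1])
x_scaled = RobustScaler().fit_transform(flat).reshape(x.shape)

# TimeDistributed applies the same Conv2D to every time step of the 5-D input,
# keeping the time axis intact (Conv2D alone only accepts 4-D batches).
inputs = tf.keras.Input(shape=x.shape[1:])                     # (time, H, W, C)
h = tf.keras.layers.TimeDistributed(
    tf.keras.layers.Conv2D(16, 3, padding="same", activation="relu"))(inputs)
h = tf.keras.layers.TimeDistributed(tf.keras.layers.MaxPooling2D())(h)
outputs = tf.keras.layers.TimeDistributed(
    tf.keras.layers.Conv2D(1, 1, activation="linear"))(h)
model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="mse")
print(model.output_shape)
```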

Predicting Crime Risky Area Using Machine Learning (머신러닝기반 범죄발생 위험지역 예측)

  • HEO, Sun-Young;KIM, Ju-Young;MOON, Tae-Heon
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.21 no.4
    • /
    • pp.64-80
    • /
    • 2018
  • In Korea, citizens can access only general information about crime, so it is difficult for them to know how much they are exposed to it. If the police could predict crime-risky areas, crime could be handled efficiently even with insufficient police and enforcement resources. However, there is no such prediction system in Korea, and related research is scarce. Against this background, the final goal of this study is to develop an automated crime prediction system. As a first step, we built a big data set consisting of local real crime information and urban physical and non-physical data, developed a crime prediction model through machine learning, and finally assumed several possible scenarios, calculated the probability of crime, and visualized the results on a map to increase public understanding. Among the factors affecting crime occurrence identified in previous and case studies, the data were processed into a form suitable for machine learning: real crime information, weather information (temperature, rainfall, wind speed, humidity, sunshine, insolation, snowfall, cloud cover), and local information (average building coverage, average floor area ratio, average building height, number of buildings, average appraised land value, average area of residential buildings, average number of ground floors). Among supervised machine learning algorithms, the decision tree, random forest, and SVM models, which are known to be powerful and accurate in various fields, were used to construct the crime prediction model. The decision tree model with the lowest RMSE was selected as the optimal prediction model. Based on this model, several scenarios were set for theft and violence, the most frequent crimes in the case city J, and the probability of crime was estimated on a 250 m × 250 m grid. We found that high crime-risk areas occur in three patterns in city J. The probability of crime was divided into three classes and visualized on a map by the 250 m × 250 m grid. In conclusion, we developed a crime prediction model using machine learning algorithms and visualized the crime-risky areas on a map that can recalculate the model and re-render the results as time and urban conditions change.
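
The model-selection step described above (train decision tree, random forest, and SVM regressors and keep the one with the lowest RMSE) can be sketched with scikit-learn as below; the synthetic features stand in for the real per-grid-cell crime, weather, and urban variables.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

# Synthetic stand-in for per-grid-cell features and a crime-probability target.
rng = np.random.default_rng(42)
X = rng.random((1000, 15))
y = X[:, 0] * 0.6 + X[:, 1] * 0.3 + rng.normal(0, 0.05, 1000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

models = {
    "decision_tree": DecisionTreeRegressor(max_depth=8, random_state=0),
    "random_forest": RandomForestRegressor(n_estimators=200, random_state=0),
    "svm": SVR(kernel="rbf"),
}

rmse = {}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    rmse[name] = mean_squared_error(y_te, pred) ** 0.5   # RMSE per model

best = min(rmse, key=rmse.get)
print(rmse, "-> selected:", best)
```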

Handover Functional Architecture for Next Generation Wireless Networks (차세대 무선 네트워크를 위한 핸드오버 기능 구조 제안)

  • Baek, Joo-Young;Kim, Dong-Wook;Kim, Hyun-Jin;Choi, Yoon-Hee;Kim, Duk-Jin;Kim, Woo-Jae;Suh, Young-Joo;Kang, Suk-Yang;Kim, Kyung-Suk;Shin, Kyung-Chul
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2006.10d
    • /
    • pp.268-273
    • /
    • 2006
  • Next-generation wireless networks (4G) are an area that requires extensive research along with the development of new radio access technologies. Among the required technologies, handover, which provides seamless mobility for mobile terminals, is the most important. Next-generation wireless networks are expected to be used together with existing WLANs and cellular networks in addition to new radio access technologies, and to use Mobile IPv6 for mobility support at the network layer. To provide seamless mobility in such networks, a comprehensive handover function that considers increasingly diverse network environments and QoS is needed, in addition to the handover functions and architectures studied so far. In this paper, we identify the functions required to provide seamless handover for terminals in next-generation wireless networks, define the organic relationships among them, and propose a comprehensive handover functional architecture that considers diverse network environments, user priorities, and application QoS requirements. The proposed handover architecture is divided into three modules, Monitoring, Triggering, and Handover, and each module is further subdivided into sub-modules as needed. The key feature of the proposed architecture is that it comprehensively considers the various factors that can trigger a handover and performs multi-stage rather than flat comparisons among them, enabling more accurate triggering. In addition, functions for guaranteeing the QoS requirements of the terminal and for handling network congestion and load balancing are added to the handover function, so that network resources can be used efficiently.
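
To illustrate the Monitoring / Triggering / Handover decomposition and the multi-stage (rather than flat) comparison described above, here is a highly simplified, hypothetical Python sketch; the metrics, thresholds, and staging are invented for illustration and are not the paper's architecture.

```python
from dataclasses import dataclass

@dataclass
class LinkReport:
    """Measurements a Monitoring module might collect per candidate network."""
    name: str
    rssi_dbm: float
    congestion: float   # 0..1
    supports_qos: bool

def monitoring() -> list[LinkReport]:
    # Hypothetical snapshot of candidate access networks.
    return [
        LinkReport("wlan-1", -62.0, 0.7, True),
        LinkReport("cellular", -85.0, 0.2, True),
    ]

def triggering(current: LinkReport, candidates: list[LinkReport],
               app_needs_qos: bool) -> LinkReport | None:
    """Multi-stage comparison: filter by QoS support, then congestion, then signal."""
    stage1 = [c for c in candidates if c.supports_qos or not app_needs_qos]
    stage2 = [c for c in stage1 if c.congestion < current.congestion]
    stage3 = sorted(stage2, key=lambda c: c.rssi_dbm, reverse=True)
    return stage3[0] if stage3 else None

def handover(target: LinkReport) -> None:
    print(f"executing handover to {target.name}")

current = LinkReport("wlan-0", -78.0, 0.9, True)
target = triggering(current, monitoring(), app_needs_qos=True)
if target:
    handover(target)
```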


Research Trend Analysis Using Bibliographic Information and Citations of Cloud Computing Articles: Application of Social Network Analysis (클라우드 컴퓨팅 관련 논문의 서지정보 및 인용정보를 활용한 연구 동향 분석: 사회 네트워크 분석의 활용)

  • Kim, Dongsung;Kim, Jongwoo
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.1
    • /
    • pp.195-211
    • /
    • 2014
  • Cloud computing services provide IT resources as services on demand. This is considered a key concept that will lead a shift from an ownership-based paradigm to a new pay-for-use paradigm, which can reduce the fixed cost of IT resources and improve flexibility and scalability. As IT services, cloud services have evolved from earlier computing concepts such as network computing, utility computing, server-based computing, and grid computing, so research into cloud computing is highly related to and combined with various relevant computing research areas. To seek promising research issues and topics in cloud computing, it is necessary to understand the research trends in cloud computing more comprehensively. In this study, we collect bibliographic information and citation information for cloud computing related research papers published in major international journals from 1994 to 2012, and analyze macroscopic trends and network changes in the citation relationships among papers and the co-occurrence relationships of keywords by utilizing social network analysis measures. Through the analysis, we identify the relationships and connections among research topics in cloud computing related areas and highlight new potential research topics. In addition, we visualize dynamic changes of research topics relating to cloud computing using a proposed cloud computing "research trend map." A research trend map visualizes the positions of research topics in two-dimensional space. The frequencies of keywords (X-axis) and the rates of increase in the degree centrality of keywords (Y-axis) are used as the two dimensions of the research trend map. Based on the values of the two dimensions, the two-dimensional space of a research map is divided into four areas: maturation, growth, promising, and decline. An area with high keyword frequency but a low rate of increase of degree centrality is defined as a mature technology area; the area where both keyword frequency and the increase rate of degree centrality are high is defined as a growth technology area; the area where the keyword frequency is low but the rate of increase in degree centrality is high is defined as a promising technology area; and the area where both keyword frequency and the rate of increase of degree centrality are low is defined as a declining technology area. Based on this method, cloud computing research trend maps make it possible to easily grasp the main research trends in cloud computing and to explain the evolution of research topics. According to the results of the analysis of citation relationships, research papers on security, distributed processing, and optical networking for cloud computing rank at the top based on the PageRank measure. From the analysis of keywords in research papers, cloud computing and grid computing showed high centrality in 2009, and keywords dealing with main elemental technologies such as data outsourcing, error detection methods, and infrastructure construction showed high centrality in 2010-2011. In 2012, security, virtualization, and resource management showed high centrality. Moreover, it was found that interest in the technical issues of cloud computing increased gradually. From the annual cloud computing research trend maps, it was verified that security is located in the promising area, virtualization has moved from the promising area to the growth area, and grid computing and distributed systems have moved to the declining area. The study results indicate that distributed systems and grid computing received a lot of attention as similar computing paradigms in the early stage of cloud computing research. The early stage of cloud computing was a period focused on understanding and investigating cloud computing as an emergent technology, linked to relevant established computing concepts. After the early stage, security and virtualization technologies became the main issues in cloud computing, which is reflected in the movement of security and virtualization technologies from the promising area to the growth area in the cloud computing research trend maps. Moreover, this study reveals that current research in cloud computing has rapidly shifted from a focus on technical issues to a focus on application issues, such as SLAs (Service Level Agreements).
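
The quadrant rule of the research trend map (keyword frequency on one axis, growth rate of degree centrality on the other, split into maturation, growth, promising, and decline) can be sketched as follows; the keyword statistics and cut-off values are placeholders chosen only so the output loosely mirrors the trends reported above.

```python
# Hypothetical keyword statistics: (frequency, increase rate of degree centrality),
# with arbitrary cut-off values for the two axes.
keywords = {
    "security":            (80, 0.45),
    "virtualization":      (150, 0.40),
    "grid computing":      (60, -0.10),
    "distributed systems": (70, -0.05),
    "SLA":                 (30, 0.50),
}
FREQ_CUT, RATE_CUT = 100, 0.20

def quadrant(freq: float, rate: float) -> str:
    if freq >= FREQ_CUT and rate >= RATE_CUT:
        return "growth"        # frequent and still gaining centrality
    if freq >= FREQ_CUT:
        return "maturation"    # frequent but centrality growth has slowed
    if rate >= RATE_CUT:
        return "promising"     # not yet frequent, centrality rising fast
    return "decline"           # neither frequent nor rising

for kw, (f, r) in keywords.items():
    print(f"{kw}: {quadrant(f, r)}")
```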

Interaction Between TCP and MAC-layer to Improve TCP Flow Performance over WLANs (유무선랜 환경에서 TCP Flow의 성능향상을 위한 MAC 계층과 TCP 계층의 연동기법)

  • Kim, Jae-Hoon;Chung, Kwang-Sue
    • Journal of KIISE:Information Networking
    • /
    • v.35 no.2
    • /
    • pp.99-111
    • /
    • 2008
  • In recent years, the need for WLAN (Wireless Local Area Network) technology that provides Internet access anywhere has increased dramatically, particularly in SOHO (Small Office Home Office) and hot-spot environments. However, unlike wired networks, wireless networks have some unique characteristics, including burst packet losses due to the unreliable wireless channel. Burst packet losses, which occur when the distance between the wireless station and the AP (Access Point) increases or when obstacles move temporarily between the station and the AP, are very frequent in 802.11 networks. Consequently, due to burst packet losses, the performance of 802.11 networks is not always sufficient for current applications, particularly when TCP is used at the transport layer. The high packet loss rate over wireless links can trigger unnecessary execution of the TCP congestion control algorithm, resulting in performance degradation. To overcome the limitations of the WLAN environment, the MAC-layer LDA (Loss Differentiation Algorithm) has been proposed. MAC-layer LDA prevents TCP timeouts by increasing the CRD (Consecutive Retry Duration) beyond the burst packet loss duration. However, in wireless channels with high packet loss rates, MAC-layer LDA does not work well for two reasons: (a) if the CRD is shorter than the burst packet loss duration because the retry limit can only be increased to a limited extent, end-to-end performance is degraded; (b) the energy of the mobile device and the bandwidth of the wireless link are wasted unnecessarily because the increased CRD reduces the drainage speed of the network buffer. In this paper, we propose a new retransmission module based on a cross-layer approach, called the BLD (Burst Loss Detection) module, to overcome the limitations of previous link-layer retransmission schemes. The BLD module is a retransmission mechanism for IEEE 802.11 networks that performs retransmission based on the interaction between the retransmission mechanisms of the MAC layer and TCP. Simulations using ns-2 (Network Simulator) show that the proposed scheme achieves higher TCP throughput and better energy efficiency than previous mechanisms.
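
As a hedged, simplified sketch of the cross-layer idea (the MAC layer reports consecutive link-level losses so that TCP-level retransmission can react to burst loss instead of treating it as congestion), consider the following Python pseudologic; the thresholds and interfaces are hypothetical and not the paper's BLD implementation.

```python
class MacLayer:
    """Tracks consecutive link-layer retransmission failures (a burst-loss hint)."""
    def __init__(self, retry_limit: int = 7):
        self.retry_limit = retry_limit
        self.consecutive_failures = 0

    def on_frame_result(self, acked: bool) -> None:
        self.consecutive_failures = 0 if acked else self.consecutive_failures + 1

    def burst_loss_detected(self) -> bool:
        return self.consecutive_failures >= self.retry_limit

class TcpSender:
    """Consults the MAC-layer hint before reacting to a retransmission timeout."""
    def __init__(self, mac: MacLayer):
        self.mac = mac
        self.cwnd = 10.0

    def on_timeout(self) -> None:
        if self.mac.burst_loss_detected():
            # Loss attributed to the wireless link: retransmit without
            # shrinking the congestion window.
            print("burst loss: retransmit, keep cwnd =", self.cwnd)
        else:
            # Loss attributed to congestion: normal TCP reaction.
            self.cwnd = max(1.0, self.cwnd / 2)
            print("congestion: retransmit, cwnd ->", self.cwnd)

mac = MacLayer()
tcp = TcpSender(mac)
for acked in [False] * 7:          # a burst of link-layer losses
    mac.on_frame_result(acked)
tcp.on_timeout()
```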

Analysis of $^1H$ MR Spectroscopy of parietal white matter material Phantom (두정부 백질 물질을 이용한 수소 자기 공명 분광 분석)

  • Lee, Jae-Yeong;Lim, Cheong-Hwan;Kim, Myeong-Soo
    • Journal of radiological science and technology
    • /
    • v.26 no.2
    • /
    • pp.57-61
    • /
    • 2003
  • The purpose of this study is to compare 1.5T and 4.7T using a parietal white matter material phantom and the same methodology at both field strengths. Data at both field strengths are compared in terms of $T_2$ relaxation times, line widths, and SNRs. MR imaging and $^1H$ MR spectroscopy were performed on a GE 1.5T SIGNA system and a Bruker Biospec 4.7T/30 MRI/MRS system. After an axial scan of the phantom, $^1H$ MRS was obtained from the T2-weighted image using a 3-dimensional localization technique (PRESS: Point RESolved Spectroscopy Sequence). The phantom is composed of an aqueous solution of 36.7 mmol/L of NAA, 25.0 mmol/L of Cr, 6.3 mmol/L of choline chloride, 30.0 mmol/L of Glu, and 22.5 mmol/L of MI (adjusted to a pH of 7.15 in a phosphate buffer). Data were processed using software developed in-house. At 1.5T, $T_2$ relaxation times for Cho, Cr, and NAA were $0.41{\pm}0.07,\;0.26{\pm}0.04,\;0.46{\pm}0.07$, while at 4.7T they were $0.17{\pm}0.03,\;0.14{\pm}0.05,\;0.20{\pm}0.03$, respectively. At 1.5T, line widths for water, Cho, Cr, and NAA were $2.9{\pm}0.7,\;1.6{\pm}0.7,\;1.7{\pm}0.8,\;2.2{\pm}0.02Hz$, while at 4.7T they were $5.2{\pm}1.1,\;4.6{\pm}1.9,\;4.01{\pm}1.8,\;4.8{\pm}1.9Hz$, respectively. The $T_2$ relaxation times were significantly shorter at 4.7T than at 1.5T, and the line widths were broader. The average SNRs for NAA at short and long TEs were $23.5{\pm}11.3$ at TE = 20 msec and $15.4{\pm}7.7$ at TE = 272 msec at 1.5T, and $40{\pm}8.3$ and $17{\pm}3.5$, respectively, at 4.7T. A higher field strength is superior because of improved sensitivity and chemical shift dispersion. However, these improvements are partially offset by increased line widths and decreased $T_2$ relaxation times, which act to reduce both sensitivity and resolution. In our experiments with the equipment available to us, 4.7T proton spectra at short TEs exhibit moderately improved sensitivity compared to 1.5T.
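
The $T_2$ values reported above come from multi-TE acquisitions; for a monoexponential decay $S(TE) = S_0 e^{-TE/T_2}$, $T_2$ can be estimated from signals at two echo times, as in the minimal NumPy sketch below (the echo times mirror those in the abstract, but the signal amplitudes are invented, not the phantom data).

```python
import numpy as np

def t2_from_two_echoes(s1: float, s2: float, te1: float, te2: float) -> float:
    """Two-point T2 estimate assuming S(TE) = S0 * exp(-TE / T2)."""
    return (te2 - te1) / np.log(s1 / s2)

# Invented metabolite signal amplitudes at TE = 20 ms and TE = 272 ms.
s_te20, s_te272 = 100.0, 40.0
t2 = t2_from_two_echoes(s_te20, s_te272, te1=0.020, te2=0.272)
print(f"estimated T2 = {t2:.3f} s")
```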


A Study on the Component-based GIS Development Methodology using UML (UML을 활용한 컴포넌트 기반의 GIS 개발방법론에 관한 연구)

  • Park, Tae-Og;Kim, Kye-Hyun
    • Journal of Korea Spatial Information System Society
    • /
    • v.3 no.2 s.6
    • /
    • pp.21-43
    • /
    • 2001
  • The environment for developing information systems, including GIS, has changed drastically in recent years in terms of software complexity and diversity, distributed processing, and network computing. This has shifted the software development paradigm toward CBD (Component-Based Development) grounded in object-oriented technology. To support these movements, OGC has released abstract and implementation standards that enable services for heterogeneous geographic information processing. Developing GIS applications for municipal governments based on component technology is also a common trend in the domestic field. It is therefore imperative to adopt component technology in light of these movements, yet little related research has been done. This research proposes a component-based GIS development methodology, ATOM (Advanced Technology Of Methodology), and verifies its applicability through a case study. ATOM can be used as a methodology both to develop components themselves and to build enterprise GIS, supporting the whole software development life cycle based on conventional reusable components. ATOM defines a stepwise development process comprising the activities and work units of each phase. It also provides inputs and outputs, standardized items and specifications for documentation, and detailed instructions for easy understanding of the development methodology. The major characteristic of ATOM is that it is a component-based development methodology that considers the numerous features of the GIS domain to generate components with simple functions, minimal size, and maximum reusability. The case study conducted to validate the applicability of ATOM showed that it is an efficient tool for generating components, providing relatively systematic and detailed guidelines for component development. Therefore, ATOM should promote quality and productivity in developing GIS application software and eventually contribute to the automated production of GIS software, our final goal.
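
To illustrate what a small, maximally reusable GIS component with a well-defined interface (the kind of unit a CBD methodology such as ATOM is meant to produce) might look like, here is a hypothetical Python sketch; the interface and the buffering example are illustrative only and are not taken from the paper.

```python
from abc import ABC, abstractmethod

class GeoComponent(ABC):
    """Hypothetical minimal contract a reusable GIS component might expose."""

    @abstractmethod
    def execute(self, feature: dict) -> dict:
        """Consume a GeoJSON-like feature and return a derived feature."""

class PointBufferComponent(GeoComponent):
    """Single-purpose component: approximate square buffer around a point."""

    def __init__(self, distance: float):
        self.distance = distance

    def execute(self, feature: dict) -> dict:
        x, y = feature["geometry"]["coordinates"]
        d = self.distance
        ring = [[x - d, y - d], [x + d, y - d], [x + d, y + d],
                [x - d, y + d], [x - d, y - d]]
        return {"type": "Feature",
                "geometry": {"type": "Polygon", "coordinates": [ring]},
                "properties": dict(feature.get("properties", {}))}

point = {"type": "Feature",
         "geometry": {"type": "Point", "coordinates": [127.0, 37.5]},
         "properties": {"name": "site"}}
print(PointBufferComponent(0.01).execute(point)["geometry"]["type"])
```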


Analysis of Optimal Resolution and Number of GCP Chips for Precision Sensor Modeling Efficiency in Satellite Images (농림위성영상 정밀센서모델링 효율성 재고를 위한 최적의 해상도 및 지상기준점 칩 개수 분석)

  • Choi, Hyeon-Gyeong;Kim, Taejung
    • Korean Journal of Remote Sensing
    • /
    • v.38 no.6_1
    • /
    • pp.1445-1462
    • /
    • 2022
  • Compact Advanced Satellite 500-4 (CAS500-4), scheduled to be launched in 2025, is a mid-resolution satellite with a 5 m resolution developed for wide-area agriculture and forest observation. To utilize satellite images, it is important to establish a precision sensor model and obtain accurate geometric information. Previous research reported that a precision sensor model could be established automatically by matching ground control point (GCP) chips to satellite images. Therefore, to improve the geometric accuracy of satellite images, it is necessary to improve GCP chip matching performance. This paper proposes an improved GCP chip matching scheme for precision sensor modeling of mid-resolution satellite images. When matching high-resolution GCP chips against mid-resolution satellite images, there are two major issues: handling the resolution difference between the GCP chips and the satellite images, and finding the optimal number of GCP chips. To resolve these issues, this study compared and analyzed chip matching performance for various satellite image upsampling factors and various numbers of chips. RapidEye images with a resolution of 5 m were used as the mid-resolution satellite images. GCP chips were prepared from aerial orthoimages with a resolution of 0.25 m and satellite orthoimages with a resolution of 0.5 m. Accuracy analysis was performed using manually extracted reference points. The experiment results show that upsampling factors of two and three significantly improved sensor model accuracy, and that the accuracy was maintained when the number of GCP chips was reduced to around 100. The results confirm the possibility of applying high-resolution GCP chips for automated precision sensor modeling of mid-resolution satellite images with improved accuracy. The results of this study are expected to be used to establish a precise sensor model for CAS500-4.
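
To make the upsampling-and-matching idea concrete, the sketch below upsamples a mid-resolution image and a chip by the same factor and locates the chip by normalized cross-correlation with OpenCV; the arrays are synthetic and this is not the matcher used in the paper.

```python
import numpy as np
import cv2

# Synthetic 5 m "satellite image" and a chip cut from it to guarantee a match.
rng = np.random.default_rng(1)
satellite = rng.random((400, 400)).astype(np.float32)
chip = satellite[150:182, 200:232].copy()          # 32 x 32 chip

def match_chip(image: np.ndarray, chip: np.ndarray, upsample: int):
    """Upsample both image and chip by the same factor, then template-match."""
    big_img = cv2.resize(image, None, fx=upsample, fy=upsample,
                         interpolation=cv2.INTER_CUBIC)
    big_chip = cv2.resize(chip, None, fx=upsample, fy=upsample,
                          interpolation=cv2.INTER_CUBIC)
    result = cv2.matchTemplate(big_img, big_chip, cv2.TM_CCOEFF_NORMED)
    _, score, _, loc = cv2.minMaxLoc(result)
    # Convert the match location back to original-image pixel coordinates.
    return score, (loc[0] / upsample, loc[1] / upsample)

for factor in (1, 2, 3):
    score, (col, row) = match_chip(satellite, chip, factor)
    print(f"x{factor}: score={score:.3f}, col={col:.1f}, row={row:.1f}")
```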