• Title/Summary/Keyword: Interpolation function


Prediction of Wind Damage Risk based on Estimation of Probability Distribution of Daily Maximum Wind Speed (일 최대풍속의 추정확률분포에 의한 농작물 강풍 피해 위험도 판정 방법)

  • Kim, Soo-ock
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • v.19 no.3
    • /
    • pp.130-139
    • /
    • 2017
  • The crop damage caused by strong wind was predicted using the wind speed data available from the Korea Meteorological Administration (KMA). Wind speed data measured at 19 automatic weather stations in 2012 were compared with wind data available from the KMA's digital forecast. Linear regression equations were derived using the maximum value of the wind speed measurements for the three-hour period prior to a given hour and the digital forecasts at the three-hour interval. Estimates of the daily maximum wind speed were obtained from the regression equations by finding the greatest value among the three-hour maximum wind speeds. The estimation error for the daily maximum wind speed was expressed using the probability density functions of the normal and Weibull distributions. The daily maximum wind speed was compared with the critical wind speed that could cause crop damage to determine the wind damage stage, e.g., "watch" or "warning." Spatial interpolation was performed on the regression coefficients for the maximum wind speed, the standard deviation of the estimation error at the automatic weather stations, and the parameters of the Weibull distribution. These interpolated values at four synoptic weather stations, including Suncheon, Namwon, Imsil, and Jangsu, were used to estimate the daily maximum wind speed in 2012. The wind damage risk was determined using a critical wind speed of 10 m/s under the assumption that the fruit of the pear variety Mansamgil would begin to drop at 10 m/s. The results indicated that the Weibull distribution was more effective than the normal distribution as the estimation error probability distribution for assessing wind damage risk.
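The pipeline above, regressing forecast wind speeds to a daily maximum and judging risk against a critical speed under an assumed error distribution, can be sketched in plain Python. This is a minimal illustration, not the paper's implementation: the regression coefficients, forecast values, and error standard deviation are assumed, and the normal distribution is used for the error, one of the two candidates the paper compares.

```python
import math

def estimate_daily_max_wind(forecast_speeds, a, b):
    """Apply the linear regression y = a*x + b to each 3-hourly forecast
    and take the greatest value as the daily maximum wind speed."""
    return max(a * x + b for x in forecast_speeds)

def damage_risk_probability(estimated_max, critical_speed, sigma):
    """P(true max > critical) assuming the estimation error is
    normally distributed with standard deviation sigma."""
    z = (critical_speed - estimated_max) / sigma
    return 1.0 - 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Hypothetical 3-hourly digital-forecast wind speeds (m/s) for one day
forecasts = [4.2, 5.1, 7.8, 9.0, 8.3, 6.5, 5.0, 4.1]
est = estimate_daily_max_wind(forecasts, a=1.1, b=0.4)   # assumed coefficients
risk = damage_risk_probability(est, critical_speed=10.0, sigma=1.5)
```

A station would be flagged "watch" or "warning" by thresholding `risk`; with a Weibull error model, only `damage_risk_probability` changes.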

3D Pointing for Effective Hand Mouse in Depth Image (깊이영상에서 효율적인 핸드 마우스를 위한 3D 포인팅)

  • Joo, Sung-Il;Weon, Sun-Hee;Choi, Hyung-Il
    • Journal of the Korea Society of Computer and Information
    • /
    • v.19 no.8
    • /
    • pp.35-44
    • /
    • 2014
  • This paper proposes a 3D pointing interface designed for the efficient operation of a hand mouse. The proposed method uses depth images to secure high-quality results even under changes in lighting and environmental conditions, and uses the normal vector of the palm to perform 3D pointing. First, the hand region is detected and tracked using a conventional method; based on the information thus obtained, the region of the palm is predicted and the region of interest is obtained. Once the region of interest has been identified, it is approximated by a plane equation and the normal vector is extracted. Next, to ensure stable control, interpolation is performed using the extracted normal vector and the intersection point is detected. For stability and efficiency, a dynamic weight using the sigmoid function is applied to the detected intersection point, which is finally converted into the 2D coordinate system. This paper explains the methods of detecting the region of interest and the direction vector, and proposes a method of interpolating and applying the dynamic weight in order to stabilize control. Lastly, qualitative and quantitative analyses are performed on the proposed 3D pointing method to verify its ability to deliver stable control.
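The dynamic-weight idea, damping small jittery displacements of the pointer while letting deliberate movements pass through, can be illustrated with a short sketch. The sigmoid gain `k` and offset `d0` below are assumed values for illustration, not the paper's parameters.

```python
import math

def sigmoid(x, k=1.0, x0=0.0):
    return 1.0 / (1.0 + math.exp(-k * (x - x0)))

def smooth_pointer(prev, new, k=2.0, d0=3.0):
    """Blend the previous and newly detected 2D pointer positions with a
    dynamic weight: small displacements (likely jitter) are damped,
    while large deliberate movements pass through almost unchanged."""
    d = math.hypot(new[0] - prev[0], new[1] - prev[1])
    w = sigmoid(d, k=k, x0=d0)          # weight in (0, 1), grows with distance
    return (prev[0] + w * (new[0] - prev[0]),
            prev[1] + w * (new[1] - prev[1]))
```

Feeding each frame's detected intersection point through `smooth_pointer` yields a cursor that is stable at rest yet responsive to real motion.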

Classification of Scaled Textured Images Using Normalized Pattern Spectrum Based on Mathematical Morphology (형태학적 정규화 패턴 스펙트럼을 이용한 질감영상 분류)

  • Song, Kun-Woen;Kim, Gi-Seok;Do, Kyeong-Hoon;Ha, Yeong-Ho
    • Journal of the Korean Institute of Telematics and Electronics B
    • /
    • v.33B no.1
    • /
    • pp.116-127
    • /
    • 1996
  • In this paper, a scheme for classifying scaled textured images using a normalized pattern spectrum based on mathematical morphology is proposed. It handles arbitrary scale changes, addressing the more general setting in which a camera may zoom in and out. The normalized pattern spectrum is obtained by first calculating the pattern spectrum and then interpolating it to account for scale changes according to the scale-change ratio within the same textured-image class. The pattern spectrum is computed efficiently using both opening and closing: the opening method is applied to pixels whose values are above a threshold, and the closing method to pixels whose values are below it. We also compare the classification accuracy of the gray-scale and binary methods. The proposed approach has the advantages of efficient information extraction, high accuracy, low computation, and parallel implementability. An important advantage of the proposed method is that high classification accuracy can be obtained with only (1:1) scale images in the training phase.
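The core morphological tool here, the pattern spectrum, measures how much area is removed at each opening scale. The following is a minimal NumPy sketch for the binary case, using shift-based erosion/dilation with square structuring elements (no SciPy dependency); it normalizes by total area only, whereas the paper's normalization additionally interpolates spectra according to the scale-change ratio.

```python
import numpy as np

def erode(img, r):
    """Binary erosion with a (2r+1)x(2r+1) square structuring element,
    implemented with padded shifts."""
    p = np.pad(img.astype(bool), r, constant_values=False)
    out = np.ones(img.shape, dtype=bool)
    H, W = img.shape
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            out &= p[dy:dy + H, dx:dx + W]
    return out

def dilate(img, r):
    """Binary dilation, the dual of erode above."""
    p = np.pad(img.astype(bool), r, constant_values=False)
    out = np.zeros(img.shape, dtype=bool)
    H, W = img.shape
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            out |= p[dy:dy + H, dx:dx + W]
    return out

def opening(img, r):
    return img.astype(bool) if r == 0 else dilate(erode(img, r), r)

def normalized_pattern_spectrum(img, max_r):
    """Area removed at each opening scale, normalized by total area."""
    areas = [opening(img, r).sum() for r in range(max_r + 1)]
    total = float(areas[0])
    return [(areas[r] - areas[r + 1]) / total for r in range(max_r)]
```

For a single 5x5 square, all of the area disappears at the opening scale matching the square's size, so the spectrum concentrates at that scale.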


Development of Wave by Wave Analysis Program using MATLAB (MATLAB을 이용한 개별파 분석 프로그램 개발)

  • Choi, Hyukjin;Jeong, Shin Taek;Cho, Hong Yeon;Ko, Dong Hui;Kang, Keum Seok
    • Journal of Korean Society of Coastal and Ocean Engineers
    • /
    • v.29 no.5
    • /
    • pp.239-246
    • /
    • 2017
  • When only wave height and period are observed in the field, various wave characteristics are mainly calculated by the wave-by-wave analysis method. In this paper, a wave-by-wave analysis program is developed using the MATLAB language. It performs functions such as 1) mean water level correction, 2) zero-crossing time calculation, 3) individual wave height calculation, and 4) time-interval calculation using the zero up-crossing and down-crossing methods. The applicability of the developed program was examined with 0.2-second-interval data observed using the WaveGuide Radar installed on HeMOSU-1. Tidal level variation removal and zero-crossing time estimation were performed by linear or quadratic interpolation. The Goda method was judged appropriate for calculating individual wave heights, and the method proposed in this study is expected to be improved through subsequent research. Due to the fineness of the sampling, the characteristics of the representative waves can be seen to differ from the results calculated by the zero up-crossing and down-crossing methods.
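The zero up-crossing step with linear interpolation can be sketched as follows. This is an illustrative Python/NumPy version rather than the paper's MATLAB code; function names are assumptions, and the mean is presumed already removed from the elevation record.

```python
import numpy as np

def zero_upcrossings(t, eta):
    """Zero up-crossing times of a surface elevation record eta(t)
    (mean already removed), using linear interpolation between samples."""
    t = np.asarray(t, dtype=float)
    s = np.asarray(eta, dtype=float)
    idx = np.where((s[:-1] < 0) & (s[1:] >= 0))[0]
    # linear interpolation for the crossing time inside each interval
    frac = -s[idx] / (s[idx + 1] - s[idx])
    return t[idx] + frac * (t[idx + 1] - t[idx])

def individual_waves(t, eta):
    """Split the record into individual waves between consecutive
    up-crossings; return a list of (height, period) pairs."""
    tc = zero_upcrossings(t, eta)
    t_arr = np.asarray(t, dtype=float)
    s = np.asarray(eta, dtype=float)
    waves = []
    for t0, t1 in zip(tc[:-1], tc[1:]):
        seg = s[(t_arr >= t0) & (t_arr <= t1)]
        waves.append((seg.max() - seg.min(), t1 - t0))
    return waves
```

Running the same record with down-crossings instead (swap the sign conditions) gives the companion zero down-crossing analysis.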

A 3D Magnetic Inversion Software Based on Algebraic Reconstruction Technique and Assemblage of the 2D Forward Modeling and Inversion (대수적 재구성법과 2차원 수치모델링 및 역산 집합에 기반한 3차원 자력역산 소프트웨어)

  • Ko, Kwang-Beom;Jung, Sang-Won;Han, Kyeong-Soo
    • Geophysics and Geophysical Exploration
    • /
    • v.16 no.1
    • /
    • pp.27-35
    • /
    • 2013
  • In this study, we developed a trial product for 3D magnetic inversion, tentatively named 'KMag3D'. We also briefly introduce, in the form of a user manual, its functions and the graphical user interface on which development especially focused. KMag3D rests on two fundamental frames for 3D magnetic inversion. First, the algebraic reconstruction technique was selected as the 3D inversion algorithm instead of the least-squares method conventionally used in magnetic inversion. By comparison, the algebraic reconstruction algorithm turned out to be more effective and economical than least squares in terms of both computation time and memory. Second, for the effective determination of the 3D initial and a priori information models required by our algorithm, we propose a practical technique based on assembling 2D forward modeling and inversion results for individual user-selected 2D profiles. The initial and a priori information models are then constructed by appropriate interpolation along the strike direction. From this, we conclude that our technique is both suitable and very practical for 3D magnetic inversion problems.
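The algebraic reconstruction technique adopted here is, at its core, Kaczmarz's row-action iteration: the model is updated by projecting onto the hyperplane of one linear equation at a time, so only one row of the kernel matrix is needed in memory per step. A minimal NumPy sketch follows; the relaxation factor and iteration count are assumed values, not KMag3D's settings.

```python
import numpy as np

def art_inversion(A, d, n_iter=50, relax=0.5, x0=None):
    """Algebraic reconstruction technique (Kaczmarz iteration) for the
    linear system A x = d: sweep over the rows, projecting the current
    model onto the hyperplane of each equation in turn."""
    m, n = A.shape
    x = np.zeros(n) if x0 is None else x0.astype(float).copy()
    row_norm = (A ** 2).sum(axis=1)
    for _ in range(n_iter):
        for i in range(m):
            if row_norm[i] == 0:
                continue
            # move x toward the hyperplane A[i] . x = d[i]
            x += relax * (d[i] - A[i] @ x) / row_norm[i] * A[i]
    return x
```

The memory advantage over least squares comes from never forming or factorizing the full normal-equation matrix.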

A LQR Controller Design for Performance Optimization of Medium Scale Commercial Aircraft Turbofan Engine (II) (중형항공기용 터보팬 엔진의 성능최적화를 위한 LQR 제어기 설계 (II))

  • 공창덕;기자영
    • Journal of the Korean Society of Propulsion Engineers
    • /
    • v.2 no.3
    • /
    • pp.99-106
    • /
    • 1998
  • The performance of the turbofan engine for a medium scale civil aircraft, which has been under development in the Republic of Korea, was analyzed, and a control scheme for optimizing the performance was studied. Dynamic and real-time linear simulations were performed in the previous study. The result was that a step-increase fuel schedule overshot the limit temperature (3105 $^{\circ}R$) of the high pressure turbine and yielded a small surge margin for the high pressure compressor. Therefore, a control scheme based on the LQR (Linear Quadratic Regulator) was applied in this study to optimize the performance. A linear model was derived for designing the controller, and the real-time linear model was developed to closely match the nonlinear simulation results. The system matrices were derived by sampling operating points in the scheduled range, and the least-squares method was applied to interpolate between these sampling points, where each element of the matrices is a function of the rotor speed. The control variables were the fuel flow and the low pressure compressor bleed air. The controlled linear model eliminated the inlet temperature overshoot of the high pressure turbine and obtained maximum surge margins within 0.55. The SFC was stabilized in the range of 0.355 to 0.43.
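For a linearized model of this kind, the LQR gain can be obtained by iterating the Riccati equation to steady state. The sketch below uses a discrete-time formulation with a hypothetical 2-state, 2-input system (the matrices are illustrative placeholders, not the paper's engine model; the two inputs stand in for fuel flow and LPC bleed air).

```python
import numpy as np

def dlqr_gain(A, B, Q, R, n_iter=500):
    """Discrete-time LQR: iterate the Riccati difference equation to a
    steady-state solution P, then form the feedback gain K (u = -K x)."""
    P = Q.copy()
    for _ in range(n_iter):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

# Hypothetical linearized engine model at one operating point
A = np.array([[0.9, 0.1], [0.0, 0.8]])
B = np.array([[0.1, 0.0], [0.0, 0.2]])   # columns: fuel flow, LPC bleed air
Q = np.eye(2)                            # state weighting
R = np.eye(2)                            # control weighting
K = dlqr_gain(A, B, Q, R)
```

In a gain-scheduled design such as the one described, `A` and `B` would themselves be evaluated from the least-squares fits as functions of rotor speed before computing `K` at each operating point.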


Unsupervised Non-rigid Registration Network for 3D Brain MR images (3차원 뇌 자기공명 영상의 비지도 학습 기반 비강체 정합 네트워크)

  • Oh, Donggeon;Kim, Bohyoung;Lee, Jeongjin;Shin, Yeong-Gil
    • The Journal of Korean Institute of Next Generation Computing
    • /
    • v.15 no.5
    • /
    • pp.64-74
    • /
    • 2019
  • Although non-rigid registration is in high demand in clinical practice, it has high computational complexity, and ensuring the accuracy and robustness of the registration is very difficult. This study proposes a method for applying non-rigid registration to 3D magnetic resonance brain images in an unsupervised learning environment using a deep-learning network. The network receives images from two different patients as inputs, produces a feature vector between the two images, and transforms the target image to match the source image by creating a displacement vector field. The network is designed on a U-Net shape so that feature vectors considering all global and local differences between the two images can be constructed when performing the registration. As a regularization term is added to the loss function, a transformation result similar to real brain movement can be obtained after applying trilinear interpolation. This method enables non-rigid registration with a single-pass deformation, receiving only two arbitrary images as inputs, through unsupervised learning. It can therefore run faster than non-learning-based registration methods that require iterative optimization. Our experiment was performed with 3D magnetic resonance images of 50 human brains, and the Dice similarity coefficient confirmed an approximately 16% improvement in similarity after registration with our method. It also showed performance similar to that of the non-learning-based method, with an approximately 10,000-fold speed increase. The proposed method can be used for non-rigid registration of various kinds of medical image data.
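Warping a volume with a displacement field requires sampling at fractional voxel coordinates, which is where trilinear interpolation enters. A self-contained NumPy sketch of that sampling step (coordinate order and border handling are assumptions, not the paper's exact implementation):

```python
import numpy as np

def trilinear(volume, pts):
    """Sample a 3D volume at fractional (z, y, x) coordinates by
    trilinear interpolation, as used when warping an image with a
    displacement field."""
    z, y, x = pts[:, 0], pts[:, 1], pts[:, 2]
    z0 = np.floor(z).astype(int); y0 = np.floor(y).astype(int); x0 = np.floor(x).astype(int)
    z1, y1, x1 = z0 + 1, y0 + 1, x0 + 1
    # clip to keep indices inside the volume (border replication)
    Z, Y, X = volume.shape
    z0, z1 = np.clip(z0, 0, Z - 1), np.clip(z1, 0, Z - 1)
    y0, y1 = np.clip(y0, 0, Y - 1), np.clip(y1, 0, Y - 1)
    x0, x1 = np.clip(x0, 0, X - 1), np.clip(x1, 0, X - 1)
    wz, wy, wx = z - np.floor(z), y - np.floor(y), x - np.floor(x)
    # gather the 8 corner values and blend along x, then y, then z
    c000 = volume[z0, y0, x0]; c001 = volume[z0, y0, x1]
    c010 = volume[z0, y1, x0]; c011 = volume[z0, y1, x1]
    c100 = volume[z1, y0, x0]; c101 = volume[z1, y0, x1]
    c110 = volume[z1, y1, x0]; c111 = volume[z1, y1, x1]
    c00 = c000 * (1 - wx) + c001 * wx
    c01 = c010 * (1 - wx) + c011 * wx
    c10 = c100 * (1 - wx) + c101 * wx
    c11 = c110 * (1 - wx) + c111 * wx
    c0 = c00 * (1 - wy) + c01 * wy
    c1 = c10 * (1 - wy) + c11 * wy
    return c0 * (1 - wz) + c1 * wz
```

Because every operation above is differentiable in the corner values and weights, the same scheme lets gradients flow back through the warp during network training.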

Providing the combined models for groundwater changes using common indicators in GIS (GIS 공통 지표를 활용한 지하수 변화 통합 모델 제공)

  • Samaneh, Hamta;Seo, You Seok
    • Journal of Korea Water Resources Association
    • /
    • v.55 no.3
    • /
    • pp.245-255
    • /
    • 2022
  • Evaluating the qualitative status of water resources using various indicators, one of the most prevalent methods for optimally managing water bodies, is necessary for a regular water quality protection plan. In this study, zoning maps were developed on a yearly basis by collecting, reviewing, validating, and statistically testing the qualitative parameters' data of Iranian aquifers from 1995 to 2020 using a Geographic Information System (GIS), based on the Inverse Distance Weighting (IDW), Radial Basis Function (RBF), and Global Polynomial Interpolation (GPI) methods and on Kriging and Co-Kriging techniques of three types: simple, ordinary, and universal. The model with minimum uncertainty and zoning error, and with close ASE and RMSE values, was selected as the optimum model. The selected model was then zoned using Schoeller and Wilcox diagrams. A general evaluation of the groundwater situation of Iran revealed that 59.70% and 39.86% of the resources fall into the class unsuitable for agricultural and drinking purposes, respectively, indicating a groundwater quality crisis in Iran. Finally, to validate the extracted results, spatial changes in water quality were evaluated using the Groundwater Quality Index (GWQI), indicating high sensitivity of the aquifers to small quantitative changes in water level, in addition to a severe shortage of groundwater reserves in Iran.
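Of the interpolation methods compared above, IDW is the simplest to state: each unsampled location receives a weighted average of the measured values, with weights decaying as an inverse power of distance. A minimal NumPy sketch (the power parameter and function name are illustrative, not the study's GIS settings):

```python
import numpy as np

def idw(xy_known, values, xy_query, power=2.0, eps=1e-12):
    """Inverse Distance Weighting: each query point receives a weighted
    average of the known values, with weights 1 / distance**power."""
    xy_known = np.asarray(xy_known, dtype=float)
    values = np.asarray(values, dtype=float)
    out = np.empty(len(xy_query), dtype=float)
    for i, q in enumerate(np.asarray(xy_query, dtype=float)):
        d = np.linalg.norm(xy_known - q, axis=1)
        if d.min() < eps:                 # query coincides with a sample
            out[i] = values[d.argmin()]
        else:
            w = d ** -power
            out[i] = (w * values).sum() / w.sum()
    return out
```

IDW is an exact interpolator (it honors the measured values at the sample points) and its estimates are always bounded by the minimum and maximum observed values, which is why it is a common baseline against Kriging.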

An Iterative, Interactive and Unified Seismic Velocity Analysis (반복적 대화식 통합 탄성파 속도분석)

  • Suh Sayng-Yong;Chung Bu-Heung;Jang Seong-Hyung
    • Geophysics and Geophysical Exploration
    • /
    • v.2 no.1
    • /
    • pp.26-32
    • /
    • 1999
  • Among the various seismic data processing sequences, velocity analysis is the most time-consuming and man-hour-intensive processing step. For production seismic data processing, a good velocity analysis tool as well as a high performance computer is required. The tool must give fast and accurate velocity analysis. There are two different approaches to velocity analysis: batch and interactive. In batch processing, a velocity plot is made at every analysis point. Generally, the plot consists of a semblance contour, a super gather, and a stack panel. The interpreter chooses the velocity function by analyzing the velocity plot. The technique is highly dependent on the interpreter's skill and requires human effort. As high speed graphic workstations have become more popular, various interactive velocity analysis programs have been developed. Although these programs enable faster picking of the velocity nodes using a mouse, their main improvement is simply the replacement of the paper plot by the graphic screen. The velocity spectrum is highly sensitive to the presence of noise, especially the coherent noise often found in the shallow region of marine seismic data. For accurate velocity analysis, this noise must be removed before the spectrum is computed. Also, the velocity analysis must be carried out by carefully choosing the location of the analysis point and accurately computing the spectrum. The analyzed velocity function must be verified by mute and stack, and the sequence must usually be repeated. Therefore an iterative, interactive, and unified velocity analysis tool is highly desirable. An interactive velocity analysis program, xva (X-Window based Velocity Analysis), was developed. The program handles all processes required in velocity analysis, such as composing the super gather, computing the velocity spectrum, NMO correction, mute, and stack.
Most of the parameter changes produce the final stack via a few mouse clicks, thereby enabling iterative and interactive processing. A simple trace indexing scheme is introduced, and a program to make the index of the Geobit seismic disk file was developed. The index is used to reference the original input, i.e., the CDP sort, directly. A transformation technique for the mute function between the T-X domain and the NMOC domain is introduced and adopted in the program. The result of the transform is similar to the remove-NMO technique in suppressing shallow noise such as the direct wave and the refracted wave. However, it has two improvements: no interpolation error and very fast computation. With this technique, the mute times can be easily designed in the NMOC domain and applied to the super gather in the T-X domain, thereby producing a more accurate velocity spectrum interactively. The xva program consists of 28 files, 12,029 lines, 34,990 words, and 304,073 characters. The program references Geobit utility libraries and can be installed in a Geobit preinstalled environment. The program runs in the X-Window/Motif environment, and its menu is designed according to the Motif style guide. A brief usage of the program is discussed. The program allows fast and accurate seismic velocity analysis, which is necessary for computing the AVO (Amplitude Versus Offset) based DHI (Direct Hydrocarbon Indicator) and for making high quality seismic sections.
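The NMO correction at the heart of this workflow maps each zero-offset time to its hyperbolic arrival time and reads the trace there, an operation that is itself an interpolation. A minimal NumPy sketch with a hypothetical single-event trace (event time, offset, and velocity are assumed values for illustration):

```python
import numpy as np

def nmo_correct(trace, t, offset, v):
    """Normal moveout correction of one trace: for each zero-offset time
    t0, read the input sample at t = sqrt(t0**2 + (offset/v)**2) using
    linear interpolation (np.interp)."""
    t_src = np.sqrt(t ** 2 + (offset / v) ** 2)
    return np.interp(t_src, t, trace, left=0.0, right=0.0)

# Hypothetical example: a reflection at t0 = 1.0 s, offset 1000 m, v = 2000 m/s
dt = 0.004
t = np.arange(0.0, 2.0, dt)
t_event = np.sqrt(1.0 ** 2 + (1000.0 / 2000.0) ** 2)     # hyperbolic arrival
trace = np.exp(-((t - t_event) ** 2) / (2 * 0.02 ** 2))  # Gaussian wavelet
corrected = nmo_correct(trace, t, offset=1000.0, v=2000.0)
```

With the correct velocity, the event flattens to its zero-offset time across all offsets, which is exactly the criterion the interpreter checks on the semblance and stack panels.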


A Deep Learning Based Approach to Recognizing Accompanying Status of Smartphone Users Using Multimodal Data (스마트폰 다종 데이터를 활용한 딥러닝 기반의 사용자 동행 상태 인식)

  • Kim, Kilho;Choi, Sangwoo;Chae, Moon-jung;Park, Heewoong;Lee, Jaehong;Park, Jonghun
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.1
    • /
    • pp.163-177
    • /
    • 2019
  • As smartphones become widely used, human activity recognition (HAR) tasks for recognizing the personal activities of smartphone users with multimodal data have been actively studied. The research area is expanding from the recognition of an individual user's simple body movements to the recognition of low-level and high-level behavior. However, HAR tasks for recognizing interaction behavior with other people, such as whether the user is accompanying or communicating with someone else, have received less attention so far. Previous research on recognizing interaction behavior has usually depended on audio, Bluetooth, and Wi-Fi sensors, which are vulnerable to privacy issues and require much time to collect enough data. In contrast, physical sensors, including accelerometer, magnetic field, and gyroscope sensors, are less vulnerable to privacy issues and can collect a large amount of data within a short time. In this paper, a method for detecting accompanying status with a deep learning model using only multimodal physical sensor data, such as the accelerometer, magnetic field, and gyroscope, is proposed. The accompanying status was defined as a subset of user interaction behavior, covering whether the user is accompanied by an acquaintance at a close distance and whether the user is actively communicating with the acquaintance. A framework based on convolutional neural networks (CNN) and long short-term memory (LSTM) recurrent networks for classifying accompanying and conversation is proposed. First, a data preprocessing method is introduced, consisting of time synchronization of multimodal data from different physical sensors, data normalization, and sequence data generation. We applied nearest-neighbor interpolation to synchronize the timestamps of data collected from different sensors.
Normalization was performed on each x, y, and z axis value of the sensor data, and the sequence data were generated using the sliding window method. The sequence data then became the input to the CNN, where feature maps representing local dependencies of the original sequence are extracted. The CNN consisted of 3 convolutional layers and had no pooling layer, in order to maintain the temporal information of the sequence data. Next, the LSTM recurrent networks received the feature maps, learned long-term dependencies from them, and extracted features. The LSTM recurrent networks consisted of two layers, each with 128 cells. Finally, the extracted features were used for classification by a softmax classifier. The loss function of the model was the cross entropy function, and the weights of the model were randomly initialized from a normal distribution with a mean of 0 and a standard deviation of 0.1. The model was trained using the adaptive moment estimation (ADAM) optimization algorithm, and the mini-batch size was set to 128. We applied dropout to the input values of the LSTM recurrent networks to prevent overfitting. The initial learning rate was set to 0.001 and decayed exponentially by a factor of 0.99 at the end of each training epoch. An Android smartphone application was developed and released to collect data, and smartphone data were collected from a total of 18 subjects. Using these data, the model classified accompanying and conversation with accuracies of 98.74% and 98.83%, respectively. Both the F1 score and the accuracy of the model were higher than those of the majority vote classifier, support vector machine, and deep recurrent neural network. In future research, we will focus on more rigorous multimodal sensor data synchronization methods that minimize timestamp differences. In addition, we will further study transfer learning methods that enable trained models tailored to the training data to transfer to evaluation data that follows a different distribution. A model capable of exhibiting robust recognition performance against changes in data not considered in the learning stage is expected to be obtained.
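The nearest-neighbor time synchronization step in the preprocessing can be sketched in NumPy: each reference timestamp takes the sensor sample whose timestamp is closest. The function name and array shapes below are illustrative assumptions, not the paper's code.

```python
import numpy as np

def sync_nearest(t_ref, t_sensor, x_sensor):
    """Resample a sensor stream onto reference timestamps: each reference
    time takes the sample whose timestamp is closest (nearest-neighbor
    interpolation, as used for multimodal time synchronization)."""
    t_ref = np.asarray(t_ref, dtype=float)
    t_sensor = np.asarray(t_sensor, dtype=float)
    x_sensor = np.asarray(x_sensor)
    idx = np.searchsorted(t_sensor, t_ref)
    idx = np.clip(idx, 1, len(t_sensor) - 1)
    left = t_ref - t_sensor[idx - 1]      # gap to the previous sample
    right = t_sensor[idx] - t_ref         # gap to the next sample
    idx -= (left < right).astype(int)     # pick whichever is nearer
    return x_sensor[idx]
```

Applying this per sensor stream puts the accelerometer, magnetic field, and gyroscope samples on a common time axis before normalization and sliding-window sequence generation.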