• Title/Summary/Keyword: model processing


A Performance Comparison of Super Resolution Model with Different Activation Functions (활성함수 변화에 따른 초해상화 모델 성능 비교)

  • Yoo, Youngjun;Kim, Daehee;Lee, Jaekoo
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.9 no.10
    • /
    • pp.303-308
    • /
    • 2020
  • The ReLU (Rectified Linear Unit) function has been the dominant standard activation function in most deep artificial neural network models since it was proposed. Later, the Leaky ReLU, Swish, and Mish activation functions were presented as replacements and showed improved performance over ReLU in image classification tasks. We therefore examined whether performance improvements could also be achieved by replacing ReLU with other activation functions in the super resolution task. In this paper, performance was compared by changing the activation function in the EDSR model, which has shown stable performance in super resolution. In experiments where the resolution was upscaled twofold, the existing activation function, ReLU, showed performance similar to or higher than the other activation functions tested. For fourfold upscaling, the Leaky ReLU and Swish functions showed slightly improved performance over ReLU: PSNR and SSIM, which quantitatively evaluate image quality, showed average improvements of 0.06% and 0.05% with Leaky ReLU, and 0.06% and 0.03% with Swish. For eightfold upscaling, the Mish function showed a slight average improvement over ReLU, with average PSNR and SSIM gains of 0.06% and 0.02%. In conclusion, Leaky ReLU and Swish outperformed ReLU for fourfold super resolution, and Mish outperformed ReLU for eightfold super resolution. In future work, comparative experiments replacing the activation function with Leaky ReLU, Swish, and Mish should be conducted on other super resolution models.
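As a rough illustration (not the authors' code), the activation functions compared above and an EDSR-style residual block in which they can be swapped might be sketched in PyTorch as follows; the Leaky ReLU slope of 0.2 and the block layout are assumptions.

```python
import torch
import torch.nn as nn

class Swish(nn.Module):
    """Swish: x * sigmoid(x) (recent PyTorch also provides nn.SiLU)."""
    def forward(self, x):
        return x * torch.sigmoid(x)

class Mish(nn.Module):
    """Mish: x * tanh(softplus(x))."""
    def forward(self, x):
        return x * torch.tanh(nn.functional.softplus(x))

def make_activation(name):
    # Hypothetical factory for swapping activations during experiments.
    return {"relu": nn.ReLU(),
            "leaky_relu": nn.LeakyReLU(0.2),  # slope assumed, not from the paper
            "swish": Swish(),
            "mish": Mish()}[name]

class ResidualBlock(nn.Module):
    """Simplified EDSR-style residual block (no batch norm)."""
    def __init__(self, channels=64, act="relu"):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            make_activation(act),
            nn.Conv2d(channels, channels, 3, padding=1))

    def forward(self, x):
        return x + self.body(x)
```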

Techniques for Acquisition of Moving Object Location in LBS (위치기반 서비스(LBS)를 위한 이동체 위치획득 기법)

  • Min, Gyeong-Uk;Jo, Dae-Su
    • The KIPS Transactions:PartD
    • /
    • v.10D no.6
    • /
    • pp.885-896
    • /
    • 2003
  • The types of services using location information are becoming diverse and extending their domain as wireless internet technology develops and its applications spread, so LBS (Location-Based Services) are expected to be a killer application among wireless internet services. Location information is basic, high value-added information, and services based on it make existing GIS (Geographic Information System) useful to anybody. Acquiring this location information from moving objects is a very important part of LBS, as is the interface for moving-object acquisition between the MODB (moving object database) and the telecommunication network. When LBS become familiar to everybody, we can predict that the system load will be heavy because of the acquisition of so many subscribers and vehicles; that is, LBS platform performance degrades because of the increased overhead of acquiring moving objects between the MODB and the wireless telecommunication network. To keep the LBS platform stable, the MODB system should reduce the number of acquisitions of unnecessary moving-object locations. We study the problems in acquiring the locations of a huge number of moving objects and design acquisition models that use the past movement pattern of each object to reduce telecommunication overhead. After implementing these models, we evaluate the performance of each model.
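The acquisition-reduction idea can be made concrete with a small, purely illustrative sketch (all names and thresholds are hypothetical, not from the paper): the platform dead-reckons each object from its past movement pattern and polls the network for a real fix only when the predicted error could exceed the service tolerance.

```python
from dataclasses import dataclass

@dataclass
class TrackState:
    x: float          # last reported position (m)
    y: float
    vx: float         # velocity estimated from past reports (m/s)
    vy: float
    t: float          # time of the last report (s)

def predicted_position(s: TrackState, now: float):
    # Dead-reckon from the object's past movement pattern.
    dt = now - s.t
    return s.x + s.vx * dt, s.y + s.vy * dt

def needs_acquisition(s: TrackState, now: float,
                      sigma_v: float = 2.0, tol_m: float = 100.0) -> bool:
    """Poll the telecom network only when the dead-reckoned position
    could already be off by more than the service tolerance.
    sigma_v (speed error, m/s) and tol_m are assumed values."""
    return sigma_v * (now - s.t) > tol_m
```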

Quantitative Analysis of Digital Radiography Pixel Values to absorbed Energy of Detector based on the X-Ray Energy Spectrum Model (X선 스펙트럼 모델을 이용한 DR 화소값과 디텍터 흡수에너지의 관계에 대한 정량적 분석)

  • Kim, Do-Il;Kim, Sung-Hyun;Ho, Dong-Su;Choe, Bo-young;Suh, Tae-Suk;Lee, Jae-Mun;Lee, Hyoung-Koo
    • Progress in Medical Physics
    • /
    • v.15 no.4
    • /
    • pp.202-209
    • /
    • 2004
  • Flat panel based digital radiography (DR) systems have recently become useful and important in the field of diagnostic radiology. In DRs with amorphous silicon photosensors, CsI(Tl) is normally used as the scintillator, which produces visible light corresponding to the absorbed radiation energy. The visible light photons are converted into electric signals in the amorphous silicon photodiodes, which constitute a two-dimensional array. In order to produce good quality images, the detailed response of DR detectors to radiation must be studied. The relationship between air exposure and DR outputs has been investigated in many studies, but only under fixed tube voltage. In this study, we investigated the relationship between DR outputs and X-rays in terms of the energy absorbed in the detector, rather than air exposure, using SPEC-l8, an X-ray energy spectrum model. Measured exposure was compared with calculated exposure to obtain the inherent filtration, which is an important input variable of SPEC-l8. The energy absorbed in the detector was calculated with an algorithm for computing the absorbed energy in a material, and pixel values of real images were obtained under various conditions. The characteristic curve was derived from the relationship between the two parameters, and the results were verified using phantoms made of water and aluminum. The pixel values of the phantom images were estimated and compared with the characteristic curve under various conditions. The relationship between DR outputs and the energy absorbed in the detector was found to be almost linear. In the phantom experiments, the estimated pixel values agreed with the characteristic curve, although scattered photons introduced some errors. The effect of scattered X-rays must be studied further because it was not included in the calculation algorithm. The results of this study can provide useful information about the pre-processing of digital radiography.
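A crude numpy sketch of the idea (not the paper's algorithm): integrate spectrum-weighted energy deposition over energy bins to get the absorbed energy, then fit the near-linear characteristic curve between absorbed energy and pixel value. The absorbed-fraction approximation and all numbers are illustrative assumptions.

```python
import numpy as np

def absorbed_energy(energies_keV, fluence, mu_en_rho, areal_density_g_cm2):
    """Approximate energy absorbed per unit area in the CsI layer.

    energies_keV        : spectrum bin centers (e.g. from a spectrum model)
    fluence             : photons/cm^2 in each bin
    mu_en_rho           : mass energy-absorption coefficients of CsI (cm^2/g)
    areal_density_g_cm2 : thickness x density of the scintillator

    Crude approximation: treats 1 - exp(-mu_en/rho * t) as the absorbed
    fraction and ignores scatter (as the paper notes its algorithm does).
    """
    absorbed_fraction = 1.0 - np.exp(-np.asarray(mu_en_rho) * areal_density_g_cm2)
    return float(np.sum(np.asarray(fluence) * np.asarray(energies_keV)
                        * absorbed_fraction))

# Fit the characteristic curve pixel = a * E_abs + b from calibration shots
# (values below are illustrative, not measured data).
E_abs = np.array([0.5, 1.1, 2.0, 3.8, 5.2])
pixels = np.array([310, 640, 1180, 2230, 3060])
a, b = np.polyfit(E_abs, pixels, 1)
```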


Selective Word Embedding for Sentence Classification by Considering Information Gain and Word Similarity (문장 분류를 위한 정보 이득 및 유사도에 따른 단어 제거와 선택적 단어 임베딩 방안)

  • Lee, Min Seok;Yang, Seok Woo;Lee, Hong Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.4
    • /
    • pp.105-122
    • /
    • 2019
  • Dimensionality reduction is one of the methods to handle big data in text mining. For dimensionality reduction, we should consider the density of data, which has a significant influence on the performance of sentence classification. Data of higher dimensions requires many computations, which can eventually cause high computational cost and overfitting in the model. Thus, a dimension reduction process is necessary to improve the performance of the model. Diverse methods have been proposed, from merely lessening the noise of data, such as misspellings or informal text, to including semantic and syntactic information. On top of that, the expression and selection of text features affect the performance of classifiers for sentence classification, one of the fields of Natural Language Processing. The common goal of dimension reduction is to find a latent space that is representative of the raw data in the observation space. Existing methods utilize various algorithms for dimensionality reduction, such as feature extraction and feature selection. In addition to these algorithms, word embeddings, which learn low-dimensional vector space representations of words that capture semantic and syntactic information, are also utilized. To improve performance, recent studies have suggested methods in which the word dictionary is modified according to the positive and negative scores of pre-defined words. The basic idea of this study is that similar words have similar vector representations. Once a feature selection algorithm selects words that are not important, we expect that words similar to the selected words also have no impact on sentence classification. This study proposes two ways to achieve more accurate classification: conducting selective word elimination under specific rules and constructing word embeddings based on Word2Vec. To select words of low importance from the text, we use the information gain algorithm to measure importance and cosine similarity to search for similar words. First, we eliminate words that have comparatively low information gain values from the raw text and form word embeddings. Second, we additionally remove words that are similar to the words with low information gain values and build word embeddings. In the end, the filtered text and word embeddings are fed to deep learning models: a Convolutional Neural Network and an attention-based bidirectional LSTM. This study uses customer reviews of Kindle products on Amazon.com, IMDB, and Yelp as datasets and classifies each dataset using the deep learning models. Reviews that received more than five helpful votes and whose ratio of helpful votes exceeded 70% were classified as helpful reviews; Yelp only shows the number of helpful votes, so we extracted 100,000 reviews that received more than five helpful votes by random sampling from 750,000 reviews. Minimal preprocessing, such as removing numbers and special characters from the text data, was applied to each dataset. To evaluate the proposed methods, we compared their performance with Word2Vec and GloVe word embeddings that use all the words. One of the proposed methods performed better than the embeddings using all the words: by removing unimportant words, we can obtain better performance. However, removing too many words lowered the performance. For future research, diverse ways of preprocessing and an in-depth analysis of word co-occurrence for measuring similarity among words should be considered. Also, we only applied the proposed method with Word2Vec; other embedding methods such as GloVe, fastText, and ELMo can be combined with the proposed elimination methods, and the possible combinations between word embedding and elimination methods can be explored.
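A minimal sketch of the two-step filter, assuming documents are token sets, labels are class tags, and `w2v` is a trained gensim `KeyedVectors` model; the cutoff, `topn`, and similarity threshold are assumptions, not the paper's settings.

```python
import numpy as np
from collections import Counter

def entropy(probs):
    probs = np.asarray([p for p in probs if p > 0])
    return float(-np.sum(probs * np.log2(probs)))

def class_distribution(labels):
    counts = Counter(labels)
    total = sum(counts.values())
    return [c / total for c in counts.values()]

def information_gain(docs, labels, word):
    """IG of the binary feature 'word occurs in the document'."""
    labels = np.asarray(labels)
    present = np.array([word in d for d in docs])
    ig = entropy(class_distribution(labels))
    for mask in (present, ~present):
        if mask.any():
            ig -= mask.mean() * entropy(class_distribution(labels[mask]))
    return ig

def words_to_remove(vocab, docs, labels, w2v, ig_cutoff=1e-3):
    # Step 1: drop words whose information gain is below the cutoff.
    removed = {w for w in vocab if information_gain(docs, labels, w) < ig_cutoff}
    # Step 2: also drop words distributionally similar to them (Word2Vec).
    for w in list(removed):
        if w in w2v:
            removed |= {s for s, sim in w2v.most_similar(w, topn=10) if sim > 0.8}
    return removed
```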

Comparative Analysis of GNSS Precipitable Water Vapor and Meteorological Factors (GNSS 가강수량과 기상인자의 상호 연관성 분석)

  • Kim, Jae Sup;Bae, Tae-Suk
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.33 no.4
    • /
    • pp.317-324
    • /
    • 2015
  • GNSS was first proposed for application to weather forecasting in the mid-1980s. Its practical use in GNSS meteorology has since been demonstrated, and related research continues. Precipitable Water Vapor (PWV), calculated from the GNSS signal delays caused by the Earth's troposphere, represents the amount of water vapor in the atmosphere and is therefore widely used in the analysis of various weather phenomena, such as monitoring of weather conditions and detection of climate change. In this study we calculated the PWV from the meteorological information of an Automatic Weather Station (AWS) together with GNSS data processing of a Continuously Operating Reference Station (CORS) in order to analyze the heavy snowfall in the Ulsan area in early 2014. Song's model was adopted for the weighted mean temperature (Tm), which is the most important parameter in the calculation of PWV. The study period is a total of 56 days (February 2013 and February 2014). The average PWV for February 2014 was 11.29 mm, which is 11.34% lower than that of the heavy snowfall period. The average PWV for February 2013 was 10.34 mm, which is 8.41% lower than that of the period without heavy snowfall. In addition, certain meteorological factors obtained from the AWS were compared, resulting in a very low correlation of 0.29 with the saturated vapor pressure calculated using the empirical formula of Magnus. The behavior of PWV tends to change depending on the precipitation type, specifically snow or rain. The PWV showed a sudden increase and a subsequent rapid drop about 6.5 hours before precipitation. It can be concluded that pattern analysis of GNSS PWV is an effective method for analyzing precursors of precipitation.
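For context, a minimal sketch of the standard ZWD-to-PWV conversion that such studies use; the refractivity constants below are common literature values, not taken from this paper, and Tm would come from a weighted mean temperature model such as Song's.

```python
def pwv_from_zwd(zwd_m: float, tm_kelvin: float) -> float:
    """Convert GNSS zenith wet delay (m) to precipitable water vapor (mm)."""
    rho_w = 1000.0   # density of liquid water, kg/m^3
    r_v = 461.5      # specific gas constant of water vapor, J/(kg K)
    k2p = 0.221      # K/Pa   (= 22.1 K/hPa, widely used value)
    k3 = 3.739e3     # K^2/Pa (= 3.739e5 K^2/hPa)
    pi_factor = 1e6 / (rho_w * r_v * (k3 / tm_kelvin + k2p))  # ~0.15-0.16
    return pi_factor * zwd_m * 1000.0

# e.g. a 70 mm zenith wet delay at Tm = 278 K gives roughly 11 mm of PWV
print(pwv_from_zwd(0.070, 278.0))
```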

Deep Learning Architectures and Applications (딥러닝의 모형과 응용사례)

  • Ahn, SungMahn
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.2
    • /
    • pp.127-142
    • /
    • 2016
  • A deep learning model is a kind of neural network that allows multiple hidden layers. There are various deep learning architectures, such as convolutional neural networks, deep belief networks, and recurrent neural networks. They have been applied to fields like computer vision, automatic speech recognition, natural language processing, audio recognition, and bioinformatics, where they have been shown to produce state-of-the-art results on various tasks. Among these architectures, convolutional neural networks and recurrent neural networks are classified as supervised learning models, and in recent years they have gained more popularity than unsupervised learning models such as deep belief networks, because they have shown successful applications in the fields mentioned above. Deep learning models can be trained with the backpropagation algorithm. Backpropagation is an abbreviation for "backward propagation of errors" and is a common method of training artificial neural networks, used in conjunction with an optimization method such as gradient descent. The method calculates the gradient of an error function with respect to all the weights in the network. The gradient is fed to the optimization method, which in turn uses it to update the weights in an attempt to minimize the error function. Convolutional neural networks use a special architecture that is particularly well adapted to classifying images. This architecture makes convolutional networks fast to train, which in turn helps us train deep, multi-layer networks that are very good at classifying images. These days, deep convolutional networks are used in most neural networks for image recognition. Convolutional neural networks rest on three basic ideas: local receptive fields, shared weights, and pooling. By local receptive fields, we mean that each neuron in the first (or any) hidden layer is connected to a small region of the input (or previous layer's) neurons. Shared weights mean that the same weights and bias are used for each of the local receptive fields, so all the neurons in a hidden layer detect exactly the same feature, just at different locations in the input image. In addition to the convolutional layers just described, convolutional neural networks also contain pooling layers, usually used immediately after convolutional layers, which simplify the information in the output from the convolutional layer. Recent convolutional network architectures have 10 to 20 hidden layers and billions of connections between units. Training deep networks took weeks several years ago, but thanks to progress in GPUs and algorithmic enhancements, training time has been reduced to several hours. Neural networks with time-varying behavior are known as recurrent neural networks, or RNNs. A recurrent neural network is a class of artificial neural network in which connections between units form a directed cycle. This creates an internal state of the network, which allows it to exhibit dynamic temporal behavior. Unlike feedforward neural networks, RNNs can use their internal memory to process arbitrary sequences of inputs. Early RNN models turned out to be very difficult to train, harder even than deep feedforward networks. The reason is the unstable gradient problem, namely vanishing and exploding gradients. The gradient can get smaller and smaller as it is propagated back through the layers, which makes learning in early layers extremely slow. The problem gets worse in RNNs, since gradients are propagated backward not just through layers but also through time; if the network runs for a long time, the gradient can become extremely unstable and hard to learn from. It has become possible to incorporate an idea known as long short-term memory units (LSTMs) into RNNs. LSTMs make it much easier to get good results when training RNNs, and many recent papers make use of LSTMs or related ideas.
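The three convolutional ideas and the LSTM remedy can be seen in a few lines of PyTorch; this is a generic textbook-style sketch (layer sizes assumed, sized for 28x28 grayscale inputs), not tied to any model in the article.

```python
import torch.nn as nn

# Local receptive fields + shared weights are exactly what nn.Conv2d does:
# each output unit sees one 5x5 patch, and a single 5x5 kernel is reused
# at every location, so every unit in a feature map detects the same feature.
cnn = nn.Sequential(
    nn.Conv2d(1, 20, kernel_size=5),   # 28x28 -> 24x24, 20 feature maps
    nn.ReLU(),
    nn.MaxPool2d(2),                   # pooling simplifies: 24x24 -> 12x12
    nn.Conv2d(20, 40, kernel_size=5),  # 12x12 -> 8x8
    nn.ReLU(),
    nn.MaxPool2d(2),                   # 8x8 -> 4x4
    nn.Flatten(),
    nn.Linear(40 * 4 * 4, 10),
)

# An LSTM keeps gradients usable over long sequences, where a plain RNN
# suffers from the vanishing/exploding gradient problem described above.
rnn = nn.LSTM(input_size=100, hidden_size=128, batch_first=True)
```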

Analysis of Optimal Pathways for Terrestrial LiDAR Scanning for the Establishment of Digital Inventory of Forest Resources (디지털 산림자원정보 구축을 위한 최적의 지상LiDAR 스캔 경로 분석)

  • Ko, Chi-Ung;Yim, Jong-Su;Kim, Dong-Geun;Kang, Jin-Taek
    • Korean Journal of Remote Sensing
    • /
    • v.37 no.2
    • /
    • pp.245-256
    • /
    • 2021
  • This study was conducted to identify the applicability of a LiDAR sensor to forest resource inventories by comparing data on tree position, height, and DBH obtained by the sensor with those from existing forest inventory methods, for the species Cryptomeria japonica in the Jeolmul forest in Jeju, South Korea. To this end, a backpack personal LiDAR (Greenvalley International, Model D50) was employed. To facilitate data collection, seven scanning patterns were defined, considering the density of sample plots and work efficiency, and the accuracy of estimating the variables of each tree was assessed for each pattern. The amount of time spent acquiring and processing the data by each method was compared to evaluate efficiency. The findings showed that the rate of detecting standing trees by the LiDAR was 100%. High statistical accuracy was also observed in both Pattern 5 (DBH: RMSE 1.07 cm, bias -0.79 cm; height: RMSE 0.95 m, bias -3.2 m) and Pattern 7 (DBH: RMSE 1.18 cm, bias -0.82 cm; height: RMSE 1.13 m, bias -2.62 m), compared to the results obtained in the typical inventory manner. Concerning time, 115 to 135 minutes per hectare were required to process the data with the LiDAR, while 375 to 1,115 minutes were spent with the existing method, proving the higher efficiency of the device. It can thus be concluded that using a backpack personal LiDAR helps increase efficiency in conducting a forest resource inventory in a planted coniferous forest with understory vegetation, implying a need for further research in a variety of forests.
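The accuracy figures above are plain RMSE and bias between sensor-derived and field-measured values; a minimal sketch (with made-up numbers) of how they are computed:

```python
import numpy as np

def rmse(estimated, reference):
    d = np.asarray(estimated) - np.asarray(reference)
    return float(np.sqrt(np.mean(d ** 2)))

def bias(estimated, reference):
    return float(np.mean(np.asarray(estimated) - np.asarray(reference)))

# Illustrative values only: LiDAR-derived vs. calipered DBH (cm)
dbh_lidar = [23.1, 30.4, 27.8, 25.0]
dbh_field = [24.0, 31.0, 28.9, 26.1]
print(rmse(dbh_lidar, dbh_field), bias(dbh_lidar, dbh_field))
```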

Seismic response characteristics of the hypothetical subsea tunnel in the fault zone with various material properties (다양한 물성의 단층대를 통과하는 가상해저터널의 지진 시 응답 특성)

  • Jang, Dong In;Kwak, Chang-Won;Park, Inn-Joon;Kim, Chang-Yong
    • Journal of Korean Tunnelling and Underground Space Association
    • /
    • v.20 no.6
    • /
    • pp.1061-1071
    • /
    • 2018
  • A subsea tunnel, being a super-sized underground structure, must ensure safety during earthquakes as well as under ordinary conditions. During an earthquake, in particular, a subsea tunnel exhibits a variety of response behaviors owing to its rigidity relative to the surrounding ground and differences in displacement, so its behavior is hard to anticipate. This investigation aims to understand the seismic response characteristics of a hypothetical subsea tunnel that passes through a fault zone whose physical properties differ from those of the surrounding ground. To this end, the dynamic response of a subsea tunnel passing through a fault zone was observed by means of laboratory experiments. For improved earthquake resistance, a tunnel configuration with flexible segments was considered. It is expected that a database can then be established through 3-dimensional seismic analysis of various grounds, based on results verified by experiments and analyses under various conditions. The present investigation performed a 1 g shaking table test to verify the results of the 3-dimensional seismic analysis. A model satisfying the similitude law (1:100) was manufactured, and tests for three cases were carried out. The input motion was an artificial seismic wave with both long-period and short-period characteristics, applied horizontally, perpendicular to the tunnel axis, and the fault zone was modeled accordingly. For the numerical analysis, the elastic modulus of the fault zone was assumed to be 1/5 of that of the ground surrounding the tunnel in order to simulate the fault zone. As a result, the acceleration was confirmed to decrease as the physical properties of the fault zone increased, and the shaking table test showed the same tendency as the 3-dimensional analysis.
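For orientation, common 1 g shaking-table similitude relations for a 1:100 model are sketched below; whether the study used exactly these factors is not stated in the abstract, so treat them as an assumption.

```python
# Common 1 g scaling: same material and gravity in model and prototype,
# so acceleration scales by 1, length by lambda, and time by sqrt(lambda).
LAMBDA = 100  # prototype/model length ratio for a 1:100 model

scale = {
    "length": LAMBDA,
    "acceleration": 1.0,
    "time": LAMBDA ** 0.5,
    "frequency": LAMBDA ** -0.5,   # prototype frequencies are lower
    "velocity": LAMBDA ** 0.5,
}

# e.g. a 10 Hz table motion represents a 1 Hz motion at prototype scale
print(10 * scale["frequency"])
```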

Assessment of Fire-Damaged Mortar Using Color Image Analysis (색도 이미지 분석을 이용한 화재 피해 모르타르의 손상 평가)

  • Park, Kwang-Min;Lee, Byung-Do;Yoo, Sung-Hun;Ham, Nam-Hyuk;Roh, Young-Sook
    • Journal of the Korea institute for structural maintenance and inspection
    • /
    • v.23 no.3
    • /
    • pp.83-91
    • /
    • 2019
  • The purpose of this study is to assess fire-damaged concrete structures using a digital camera and image processing software. To simulate fire damage, mortar and paste samples of W/C=0.5 (normal strength) and W/C=0.3 (high strength) were heated in an electric furnace from 100°C to 1000°C. The paste was processed into a powder to measure CIELAB chromaticity, and the samples were photographed with a digital camera. The RGB chromaticity was measured with color intensity analyzer software. As a result, the residual compressive strength of W/C=0.5 and W/C=0.3 was 87.2% and 86.7%, respectively, at a heating temperature of 400°C. However, there was a sudden decrease in strength above 500°C, where W/C=0.5 and W/C=0.3 retained 55.2% and 51.9% of their strength. At 700°C or higher, W/C=0.5 and W/C=0.3 retained 26.3% and 27.8%, so the durability of the structure could not be secured. The L*a*b* color analysis showed that b* increases rapidly after 700°C, indicating that the yellow component becomes strong after 700°C. Further, the RGB analysis found that the histogram kurtosis and frequency of red and green increase after 700°C, that is, the numbers of red and green pixels increase. Therefore, it is deemed possible to estimate the degree of damage by checking the change in yellow (b* or R+G) when analyzing the chromaticity of fire-damaged concrete structures.
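A short sketch of the kind of chromaticity measurement described, using scikit-image (the filename is hypothetical; the study used dedicated color-analysis software):

```python
import numpy as np
from skimage import io
from skimage.color import rgb2lab

img = io.imread("mortar_700C.jpg")[..., :3]   # hypothetical sample photo
lab = rgb2lab(img)                            # CIELAB; b* is channel 2
mean_b = lab[..., 2].mean()                   # b* rises sharply above 700 C

# R and G histograms, whose kurtosis/frequency increased after 700 C
r_hist, _ = np.histogram(img[..., 0], bins=256, range=(0, 255))
g_hist, _ = np.histogram(img[..., 1], bins=256, range=(0, 255))
```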

Establishment of Geospatial Schemes Based on Topo-Climatology for Farm-Specific Agrometeorological Information (농장맞춤형 농업기상정보 생산을 위한 소기후 모형 구축)

  • Kim, Dae-Jun;Kim, Soo-Ock;Kim, Jin-Hee;Yun, Eun-Jeong
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • v.21 no.3
    • /
    • pp.146-157
    • /
    • 2019
  • One of the most distinctive features of the South Korean rural environment is that the variation of weather or climate is large even within a small area, owing to complex terrain. The Geospatial Schemes based on Topo-Climatology (GSTP) were developed to simulate such variations effectively. In the present study, we reviewed the progress of these geospatial schemes for the production of farm-scale agricultural weather data. Efforts have been made to improve the GSTP since the 2000s. The schemes were used to provide climate information based on the current normal year and on future climate scenarios at a landscape scale. The digital climate maps for the normal year include maps of monthly minimum temperature, maximum temperature, precipitation, and solar radiation over the past 30 years at 30 m or 270 m spatial resolution. Based on these digital climate maps, future climate change scenario maps were also produced at high spatial resolution. These maps have been used for climate change impact assessment at the field scale by reprocessing and transforming them into various forms. In the 2010s, the GSTP model was used to produce farm-specific weather conditions and weather forecast data on a landscape scale. The microclimate models that constitute the GSTP model have been improved to provide detailed weather condition data based on daily weather observations. Using such daily data, the early warning service for agrometeorological hazards has been developed to provide forecasts in real time by processing digital forecast and mid-term weather forecast data from the KMA (Korea Meteorological Administration) at 30 m spatial resolution. Currently, daily minimum temperature, maximum temperature, precipitation, solar radiation, and sunshine duration are produced as detailed weather condition and forecast information. Moreover, based on farm-specific past, current, and future weather information, growth information for various crops and agrometeorological disaster forecasts have been produced.
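As one illustrative ingredient of such landscape-scale schemes (an assumption for illustration, not the actual GSTP formulation), a coarse temperature field can be corrected to a 30 m grid with an elevation lapse rate:

```python
import numpy as np

def downscale_tmin(coarse_tmin, coarse_elev, fine_elev, lapse_rate=-0.0065):
    """Lapse-rate elevation correction of a coarse minimum-temperature field.

    coarse_tmin, coarse_elev : coarse-cell values resampled to the 30 m grid
    fine_elev                : 30 m DEM elevations (m)
    lapse_rate               : K/m; standard environmental value, assumed here
    """
    return np.asarray(coarse_tmin) + lapse_rate * (
        np.asarray(fine_elev) - np.asarray(coarse_elev))
```

Real topoclimatic schemes add terms beyond elevation, such as cold-air drainage and slope/aspect effects on solar radiation.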