• Title/Summary/Keyword: weight function (가중치 함수)

Search results: 544

Integrated Color Matching in Stereoscopic Image by Combining Local and Global Color Compensation (지역과 전역적인 색보정을 결합한 스테레오 영상에서의 색 일치)

  • Shu, Ran;Ha, Ho-Gun;Kim, Dae-Chul;Ha, Yeong-Ho
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.50 no.12
    • /
    • pp.168-175
    • /
    • 2013
  • Color consistency in stereoscopic contents is important for 3D display systems. Even with a stereo camera of the same model and the same hardware settings, complex color discrepancies occur when acquiring high-quality stereo images. In this paper, we propose an integrated color matching method that uses a cumulative histogram for global matching and an estimated 3D distance for the local matching stage. The distance between the current pixel and the target local region is computed using depth information and the spatial distance in the 2D image plane. The 3D distance is then used to determine the similarity between the current pixel and the target local region. The overall algorithm is as follows: first, cumulative histogram matching is introduced to reduce global color discrepancies; then, the proposed local color matching reduces local discrepancies; finally, a weight-based combination of the global and local matching results is computed. Experimental results show that the proposed algorithm improves global and local error-correction performance for stereoscopic contents compared with other approaches.
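The global stage described above is standard cumulative histogram matching. A minimal single-channel sketch (function name and bin count are illustrative, not from the paper):

```python
import numpy as np

def histogram_match(source, target, bins=256):
    """Global color matching: remap source intensities so their cumulative
    histogram follows the target's (one channel, integer values 0..255)."""
    s_hist, _ = np.histogram(source, bins=bins, range=(0, bins))
    t_hist, _ = np.histogram(target, bins=bins, range=(0, bins))
    s_cdf = np.cumsum(s_hist) / source.size
    t_cdf = np.cumsum(t_hist) / target.size
    # For each source level, find the target level with the closest CDF value.
    lut = np.searchsorted(t_cdf, s_cdf).clip(0, bins - 1)
    return lut[source]
```

In the paper this would be applied per channel before the local, depth-weighted stage.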

An integrated framework of security tool selection using fuzzy regression and physical programming (퍼지회귀분석과 physical programming을 활용한 정보보호 도구 선정 통합 프레임워크)

  • Nguyen, Hoai-Vu;Kongsuwan, Pauline;Shin, Sang-Mun;Choi, Yong-Sun;Kim, Sang-Kyun
    • Journal of the Korea Society of Computer and Information
    • /
    • v.15 no.11
    • /
    • pp.143-156
    • /
    • 2010
  • Faced with increasing malicious threats from the Internet as well as local area networks, many companies are considering deploying a security system. To help a decision maker select a suitable security tool, this paper proposes a three-step integrated framework using linear fuzzy regression (LFR) and physical programming (PP). First, based on experts' estimations of security criteria, the analytic hierarchy process (AHP) and quality function deployment (QFD) are employed to specify an intermediate score for each criterion and the relationships among the criteria. Next, the evaluation value of each criterion is computed using LFR. Finally, a goal programming (GP) method is customized to obtain the most appropriate security tool for an organization, considering a tradeoff among the multiple objectives associated with quality, credibility, and cost, using the relative weights calculated by the physical programming weights (PPW) algorithm. A numerical example illustrates the advantages and contributions of this approach. The proposed approach is anticipated to help a decision maker select a suitable security tool by taking advantage of experts' experience, with noise eliminated, as well as the accuracy of mathematical optimization methods.
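The final selection step can be pictured as a weighted goal-deviation minimization. A toy sketch, where criterion names, goal levels, and the weights (which stand in for the PPW-derived relative weights) are all hypothetical:

```python
def select_tool(tools, goals, weights):
    """Score each candidate security tool by the weighted sum of absolute
    deviations from per-criterion goals and return the best (smallest)."""
    def deviation(tool):
        return sum(w * abs(tool[c] - goals[c]) for c, w in weights.items())
    return min(tools, key=deviation)
```

The real framework solves a customized GP model rather than this direct enumeration, but the weighted-deviation objective is the same idea.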

Initialization by using truncated distributions in artificial neural network (절단된 분포를 이용한 인공신경망에서의 초기값 설정방법)

  • Kim, MinJong;Cho, Sungchul;Jeong, Hyerin;Lee, YungSeop;Lim, Changwon
    • The Korean Journal of Applied Statistics
    • /
    • v.32 no.5
    • /
    • pp.693-702
    • /
    • 2019
  • Deep learning has gained popularity for classification and prediction tasks, and neural networks become deeper as more data becomes available. Saturation is the phenomenon in which the gradient of an activation function approaches 0; it can occur when weight values are too large, and increasing attention has been paid to this issue because saturation limits the ability of the weights to learn. To resolve this problem, Glorot and Bengio (Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249-256, 2010) argued that efficient neural network training is possible when data flows evenly between layers, and proposed an initialization method that makes the variance of each layer's output equal to the variance of its input. In this paper, we propose a new initialization method based on the truncated normal and truncated Cauchy distributions. We decide where to truncate each distribution while following the initialization principle of Glorot and Bengio (2010): the output and input variances are equated by setting both equal to the variance of the truncated distribution. This shapes the distribution so that initial weight values neither grow too large nor collapse toward zero. To compare the performance of the proposed method with existing methods, we conducted experiments on the MNIST and CIFAR-10 data using a DNN and a CNN. The proposed method outperformed existing methods in terms of accuracy.
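The idea can be sketched as follows: sample from a normal distribution, reject values beyond a truncation point, then rescale so the variance still matches the Glorot target 2/(fan_in + fan_out). The truncation point and rescaling step here are an illustrative simplification, not the paper's derivation:

```python
import numpy as np

def glorot_truncated_normal(fan_in, fan_out, trunc=2.0, rng=None):
    """Glorot-style initializer with a truncated normal: redraw samples
    outside +/- trunc*sigma, then rescale so the empirical variance
    matches the Glorot target 2 / (fan_in + fan_out)."""
    rng = rng or np.random.default_rng()
    target_var = 2.0 / (fan_in + fan_out)
    sigma = np.sqrt(target_var)
    w = rng.normal(0.0, sigma, size=(fan_in, fan_out))
    # Redraw out-of-range samples (simple rejection step).
    mask = np.abs(w) > trunc * sigma
    while mask.any():
        w[mask] = rng.normal(0.0, sigma, size=mask.sum())
        mask = np.abs(w) > trunc * sigma
    # Truncation shrinks the variance; rescale back to the Glorot target.
    w *= sigma / w.std()
    return w
```

The paper instead chooses the truncation point analytically so the truncated distribution itself has the required variance; this sketch only illustrates the constraint being enforced.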

Two-Stage Evolutionary Algorithm for Path-Controllable Virtual Creatures (경로 제어가 가능한 가상생명체를 위한 2단계 진화 알고리즘)

  • Shim Yoon-Sik;Kim Chang-Hun
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.32 no.11_12
    • /
    • pp.682-691
    • /
    • 2005
  • We present a two-stage evolution system that produces controllable virtual creatures in a physically simulated 3D environment. Previous evolutionary methods for virtual creatures did not allow any user intervention during the evolution process, because they generated a creature's shape, locomotion, and high-level behaviors such as target-following and obstacle avoidance simultaneously in a single evolution process. In this work, we divide the single system into two manageable sub-systems, which makes user interaction more feasible. In the first stage, the body structure and low-level motor controllers of a creature for straight movement are generated by an evolutionary algorithm. Next, high-level control to follow a given path is achieved by a neural network whose connection weights are optimized by a genetic algorithm. The evolved controller could follow any given path fairly well. Moreover, users can choose or abort creatures according to their taste before the entire evolution process is finished. This paper also presents a new sinusoidal controller and a simplified hydrodynamics model for a capped cylinder, which is the basic body primitive of a creature.
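The second stage, optimizing a fixed-topology controller's connection weights with a genetic algorithm, can be sketched as a simple truncation-selection loop. Population size, generation count, and mutation scale below are illustrative choices, not the paper's settings:

```python
import numpy as np

def evolve_weights(fitness, n_weights, pop_size=20, generations=50,
                   sigma=0.1, rng=None):
    """Toy genetic loop over a real-valued weight vector: keep the best
    half of the population each generation and refill with mutated
    copies of the survivors."""
    rng = rng or np.random.default_rng()
    pop = rng.normal(0.0, 1.0, size=(pop_size, n_weights))
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        order = np.argsort(scores)[::-1]          # higher fitness first
        elite = pop[order[: pop_size // 2]]
        children = elite + rng.normal(0.0, sigma, size=elite.shape)
        pop = np.vstack([elite, children])
    scores = np.array([fitness(ind) for ind in pop])
    return pop[scores.argmax()]
```

In the paper the fitness function would score a neural controller by how closely the simulated creature tracks the given path.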

Application of a Convolution Method for the Fast Prediction of Wind-Induced Surface Current in the Yellow Sea and the East China Sea (표층해류 신속예측을 위한 회선적분법의 적용)

  • 강관수;정경태
    • Journal of Korean Society of Coastal and Ocean Engineers
    • /
    • v.7 no.3
    • /
    • pp.265-276
    • /
    • 1995
  • In this paper, the performance of the convolution method has been investigated in an effort to develop a simple system for predicting wind-driven surface currents on a real-time basis. In this approach, wind stress is assumed to be spatially uniform and the effect of atmospheric pressure is neglected. The discrete convolution weights are determined in advance at each point using a linear three-dimensional Galerkin model with linear shape functions (Galerkin-FEM model). Wind stress of unit magnitude in four directions (NE, SW, NW, SE) is imposed in the model calculation to construct the database of convolution weights. Given the time history of wind stress, it is then possible to predict wind-driven currents promptly using a convolution product of finite length. An unsteady wind stress of arbitrary form can be approximated by a series of wind pulses with the magnitude of 6-hour averaged values; a total of 12 pulses are involved in the convolution product. To examine the accuracy of the convolution method, a series of numerical experiments has been carried out in an idealized basin representing the scale of the Yellow Sea and the East China Sea, with wind stress varying sinusoidally in time. The predicted surface currents and elevation fields were in good agreement with the results computed by direct integration of the Galerkin model. A model with a grid of 1/8° in latitude and 1/6° in longitude was established, covering the entire region of the Yellow Sea and the East China Sea. The numerical prediction in terms of the convolution product has been carried out with particular attention to the formation of upwind flow in the middle of the Yellow Sea under northerly wind.

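The prediction step itself is just a finite discrete convolution of the wind-pulse history with precomputed unit-response weights. A minimal sketch (names and shapes are illustrative; the real weights come from the Galerkin-FEM runs):

```python
import numpy as np

def convolve_response(wind_pulses, unit_response):
    """Surface current at step n as a finite convolution of 6-hour wind
    pulses with precomputed unit-response weights (12 terms in the paper)."""
    k = len(unit_response)
    u = np.zeros(len(wind_pulses))
    for n in range(len(wind_pulses)):
        for j in range(min(k, n + 1)):
            u[n] += unit_response[j] * wind_pulses[n - j]
    return u
```

Because the weights are precomputed per grid point and wind direction, this replaces a full model integration with a 12-term sum.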

A System Model of Iterative Image Reconstruction for High Sensitivity Collimator in SPECT (SPECT용 고민감도 콜리메이터를 위한 반복적 영상재구성방법의 시스템 모델 개발)

  • Bae, Seung-Bin;Lee, Hak-Jae;Kim, Young-Kwon;Kim, You-Hyun;Lee, Ki-Sung;Joung, Jin-Hun
    • Journal of radiological science and technology
    • /
    • v.33 no.1
    • /
    • pp.31-36
    • /
    • 2010
  • The low energy high resolution (LEHR) collimator is the most widely used collimator in SPECT imaging. LEHR has an advantage in image resolution but has difficulty achieving high sensitivity due to its narrow hole size and long septa. Throughput in SPECT can be improved by increasing counts per second through the use of high sensitivity collimators. The purpose of this study is to develop a system model for iterative image reconstruction that recovers the resolution degradation caused by high sensitivity collimators with larger hole sizes. We used a fan-beam model instead of a parallel-beam model to calculate detection probabilities, in order to accurately model the high sensitivity collimator with wider holes. In addition, weight factors were calculated and applied to the probabilities as a function of the incident angle of incoming photons and the distance from the source to the collimator surface. The proposed system model yielded equivalent performance with the same counts (i.e., in a shortened acquisition time) and improved image quality for the same acquisition time. The proposed method can be effectively applied to improve the resolution of the pixel collimators of next-generation solid-state detectors.
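A weight factor of this kind can be pictured as a response that falls off with incident angle and source-to-collimator distance. The functional form and both scale parameters below are hypothetical; the paper derives its factors from the actual collimator geometry:

```python
import numpy as np

def detection_weight(incident_angle_rad, source_to_collimator_mm,
                     angle_scale=0.2, distance_scale=100.0):
    """Illustrative weight applied to a detection probability: unity for
    a photon arriving head-on at the collimator face, decreasing with
    incident angle and with source-to-collimator distance."""
    angle_term = np.cos(incident_angle_rad) * \
        np.exp(-(incident_angle_rad / angle_scale) ** 2)
    distance_term = 1.0 / (1.0 + source_to_collimator_mm / distance_scale)
    return angle_term * distance_term
```

Such a factor multiplies each element of the system matrix before the iterative (e.g., ML-EM style) update.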

Prediction of Potential Habitat of Japanese evergreen oak (Quercus acuta Thunb.) Considering Dispersal Ability Under Climate Change (분산 능력을 고려한 기후변화에 따른 붉가시나무의 잠재서식지 분포변화 예측연구)

  • Shin, Man-Seok;Seo, Changwan;Park, Seon-Uk;Hong, Seung-Bum;Kim, Jin-Yong;Jeon, Ja-Young;Lee, Myungwoo
    • Journal of Environmental Impact Assessment
    • /
    • v.27 no.3
    • /
    • pp.291-306
    • /
    • 2018
  • This study was designed to predict the potential habitat of the Japanese evergreen oak (Quercus acuta Thunb.) on the Korean Peninsula considering its dispersal ability under climate change. We used species distribution models (SDMs) based on the current species distribution and climatic variables. To reduce the uncertainty of the SDMs, we applied nine single-model algorithms and the pre-evaluation weighted ensemble method. Two representative concentration pathways (RCP 4.5 and 8.5) were used to simulate the distribution of the Japanese evergreen oak in 2050 and 2070. The final future potential habitat was determined by considering whether it can be reached by dispersal from the current habitat. The dispersal ability was modeled using MigClim, applying three coefficient values (θ = -0.005, θ = -0.001, and θ = -0.0005) to the dispersal-limited function, along with an unlimited-dispersal case. All projections revealed that the potential habitat of the Japanese evergreen oak will increase on the Korean Peninsula except under RCP 4.5 in 2050. However, the future potential habitat was found to be limited once the dispersal ability of the species was considered. Therefore, estimation of dispersal ability is required to understand the effect of climate change on the habitat distribution of the species.
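A pre-evaluation weighted ensemble combines single-model predictions in proportion to each model's evaluation score, discarding poorly evaluated models. A minimal sketch with illustrative arrays (the study uses nine SDM algorithms and its own skill metric):

```python
import numpy as np

def weighted_ensemble(predictions, scores, threshold=0.5):
    """Combine habitat-suitability maps: keep models whose evaluation
    score meets the threshold, weight each by its score, and average."""
    predictions = np.asarray(predictions, dtype=float)  # (n_models, n_cells)
    scores = np.asarray(scores, dtype=float)
    keep = scores >= threshold
    w = scores[keep] / scores[keep].sum()
    return w @ predictions[keep]
```

The dispersal constraint is then applied afterwards, masking ensemble cells unreachable from the current range.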

Mobbing-Value Algorithm based on User Profile in Online Social Network (온라인 소셜 네트워크에서 사용자 프로파일 기반의 모빙지수(Mobbing-Value) 알고리즘)

  • Kim, Guk-Jin;Park, Gun-Woo;Lee, Sang-Hoon
    • The KIPS Transactions:PartD
    • /
    • v.16D no.6
    • /
    • pp.851-858
    • /
    • 2009
  • Mobbing is not restricted to young people; recently, the bigger problem has been occurring in workplaces. According to ILO reports and domestic cases, workplace mobbing is increasing steadily, from 9.1% (2003) to 30.7% (2008), bringing both personal and social losses. The proposed algorithm makes it possible to identify not only current mobbing victims but also potential ones through user profiles, contributing to efficient personnel management. First, this paper extracts a mobbing-related user profile by selecting seven factors and fifty attributes related to mobbing. Next, each attribute is expressed as '1' if it applies to the user and '0' otherwise, and a similarity function is applied to the attribute sums within each factor to calculate the similarity between users. Third, optimized weights for the factors are calculated by applying the neural network algorithm of SPSS Clementine, and the weighted sum yields the Mobbing-Value (MV). Finally, by mapping the MV of online social network users onto the G2 mobbing propensity classification model designed in this paper (four groups: the ideal group of the online social network, bullies, aggressive victims, and victims), the mobbing propensity of users can be identified, which will contribute to efficient personnel management.
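The binary-attribute similarity and weighted sum can be sketched as follows. Factor names, attribute vectors, and weights are illustrative; the paper uses seven factors, fifty attributes, and weights learned with SPSS Clementine:

```python
def mobbing_value(user, reference, weights):
    """Weighted profile score: each factor is a list of binary attributes
    ('1' if it applies); per-factor similarity is the fraction of matching
    attributes, and the score is the weighted sum over factors."""
    mv = 0.0
    for factor, w in weights.items():
        a, b = user[factor], reference[factor]
        matches = sum(1 for x, y in zip(a, b) if x == y)
        mv += w * matches / len(a)
    return mv
```

The resulting score would then be binned against the G2 classification thresholds to assign a user to one of the four groups.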

Edge-adaptive demosaicking method for complementary color filter array of digital video cameras (디지털 비디오 카메라용 보색 필터를 위한 에지 적응적 색상 보간 방법)

  • Han, Young-Seok;Kang, Hee;Kang, Moon-Gi
    • Journal of Broadcast Engineering
    • /
    • v.13 no.1
    • /
    • pp.174-184
    • /
    • 2008
  • The complementary color filter array (CCFA) is widely used in consumer-level digital video cameras, since it not only has high sensitivity and a good signal-to-noise ratio in low-light conditions but is also compatible with the interlaced scanning used in broadcast systems. However, full-color images obtained from a CCFA suffer from color artifacts such as false color and zipper effects. These artifacts can be removed with edge-adaptive demosaicking (ECD) approaches, which are generally used with the primary color filter array (PCFA). Unfortunately, the unique array pattern of the CCFA makes it difficult to adopt ECD approaches directly, so applying ECD approaches suited to the CCFA is one of the major issues in reconstructing full-color images. In this paper, we propose a new ECD algorithm for the CCFA. To estimate the edge direction precisely and enhance the quality of the reconstructed image, a function of spatial variances is used as a weight, and new color conversion matrices are presented to account for various edge directions. Experimental results indicate that the proposed algorithm outperforms the conventional method with respect to both objective and subjective criteria.
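Variance-based edge weighting of the kind described can be sketched on a small patch: the direction with the lower variance of pixel differences is the more likely edge direction and receives the larger interpolation weight. The exact weight function (and the CCFA-specific color conversion) in the paper differs:

```python
import numpy as np

def edge_direction_weights(patch):
    """Return normalized (horizontal, vertical) interpolation weights for
    a patch: each direction is weighted inversely to the variance of its
    pixel differences, so the smoother direction dominates."""
    dh = np.diff(patch, axis=1)   # differences along rows (horizontal)
    dv = np.diff(patch, axis=0)   # differences along columns (vertical)
    w_h = 1.0 / (1.0 + dh.var())
    w_v = 1.0 / (1.0 + dv.var())
    s = w_h + w_v
    return w_h / s, w_v / s
```

Interpolating missing color samples along the higher-weight direction is what suppresses the false-color and zipper artifacts.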

Methodology for Estimating the Probability of Damage to a Heat Transmission Pipe (열수송관 파손확률 추정 방법론 개발)

  • Kong, Myeongsik;Kang, Jaemo
    • Journal of the Korean GEO-environmental Society
    • /
    • v.22 no.11
    • /
    • pp.15-21
    • /
    • 2021
  • Losses of life and property from damage to underground pipes, such as heat transmission pipes buried beneath city centers, have increased as these pipes gradually age. Considering that a heat transmission pipe is not exposed to the outside, making problems such as damage difficult to identify immediately, it is realistic to check the condition of the facility indirectly based on historical information collected periodically through facility maintenance. In this study, a methodology for estimating the damage probability was developed by examining the history information of heat transmission pipes and deriving evaluation factors related to the damage probability. The contributing factors were reviewed by analyzing not only the maintenance guidelines for heat transmission pipes of advanced European countries and domestic district heating companies, but also cases from waterworks, which have similar characteristics. Evaluation factors were selected by considering both their correlation with the damage probability and the availability of data. Taking 1999, when the construction technology and standards for heat transmission pipes changed, as the dividing point, the damage probability estimation function over the period of use was presented separately for pipes buried before 1998 and pipes buried after 1999. In addition, the damage probability was corrected by assigning weights according to the measured data for each evaluation factor, such as diameter, use, and management authority.
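The structure of such an estimate, a base probability growing with service years (steeper for the pre-1999 cohort) corrected by multiplicative factor weights, can be sketched as below. All coefficients here are hypothetical; the paper fits its functions to the actual damage history data:

```python
import math

def damage_probability(age_years, buried_before_1999, factor_weights):
    """Illustrative damage-probability estimate: an exponential-growth
    base curve by service years, with a steeper rate for pipes buried
    before the 1999 change in construction standards, corrected by
    multiplicative weights for evaluation factors (diameter, use,
    management authority)."""
    rate = 0.08 if buried_before_1999 else 0.04   # hypothetical rates
    base = 1.0 - math.exp(-rate * age_years)
    correction = 1.0
    for w in factor_weights.values():
        correction *= w
    return min(1.0, base * correction)
```

For example, a 20-year-old pre-1999 pipe would score higher than a post-1999 pipe of the same age, before any factor corrections are applied.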