• Title/Summary/Keyword: Descent image

Search Results: 35

Fast Block-Matching Motion Estimation Using Constrained Diamond Search Algorithm (구속조건을 적용한 다이아몬드 탐색 알고리즘에 의한 고속블록정합움직임추정)

  • 홍성용
    • Journal of the Korea Society of Computer and Information / v.8 no.4 / pp.13-20 / 2003
  • Based on studies of the motion vector distributions estimated from image sequences, we propose a constrained diamond search (DS) algorithm for fast block-matching motion estimation. Considering the fact that motion vectors fall, on average, within a distance of 2 pixels both vertically and horizontally, we confirmed that the DS algorithm achieves an error ratio close to that of the new three-step search (NTSS) algorithm while requiring less computation. Also, by applying the displaced frame difference (DFD) to the DS algorithm, we reduced the computational load needed to estimate motion vectors in stationary blocks that have no motion, and we reduced the possibility of falling into local minima in the course of motion vector estimation. As a result, the proposed constrained DS algorithm achieved better results in terms of error ratio and the number of required search points than the conventional DS algorithm, the four-step search (FSS) algorithm, and the block-based gradient descent search algorithm. A sketch of the search procedure appears below.

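A minimal sketch of diamond search with a DFD-based stationarity check, assuming grayscale frames as 2-D NumPy arrays. The block size and the DFD threshold are illustrative values, not the paper's; the constrained step simply skips the search when the zero-displacement difference is already small.

```python
import numpy as np

# large and small diamond search patterns (LDSP: 9 points, SDSP: 5 points)
LDSP = [(0, 0), (0, -2), (0, 2), (-2, 0), (2, 0),
        (-1, -1), (-1, 1), (1, -1), (1, 1)]
SDSP = [(0, 0), (0, -1), (0, 1), (-1, 0), (1, 0)]

def sad(cur, ref, y, x, dy, dx, n):
    """Sum of absolute differences between an n x n block and a candidate."""
    ry, rx = y + dy, x + dx
    if ry < 0 or rx < 0 or ry + n > ref.shape[0] or rx + n > ref.shape[1]:
        return np.inf  # candidate falls outside the reference frame
    return np.abs(cur[y:y+n, x:x+n].astype(int)
                  - ref[ry:ry+n, rx:rx+n].astype(int)).sum()

def diamond_search(cur, ref, y, x, n=16, dfd_thresh=2.0):
    # constrained step: if the displaced frame difference at (0, 0) is tiny,
    # treat the block as stationary and skip the search entirely
    if sad(cur, ref, y, x, 0, 0, n) / (n * n) < dfd_thresh:
        return (0, 0)
    cy = cx = 0
    while True:  # repeat LDSP until the best point is the centre
        _, dy, dx = min((sad(cur, ref, y, x, cy + dy, cx + dx, n), dy, dx)
                        for dy, dx in LDSP)
        if (dy, dx) == (0, 0):
            break
        cy, cx = cy + dy, cx + dx
    # final refinement with the small diamond pattern
    _, dy, dx = min((sad(cur, ref, y, x, cy + dy, cx + dx, n), dy, dx)
                    for dy, dx in SDSP)
    return (cy + dy, cx + dx)
```

The early exit is what saves computation on motionless background blocks: no LDSP iteration is ever started for them.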

Optimal Algorithm and Number of Neurons in Deep Learning (딥러닝 학습에서 최적의 알고리즘과 뉴론수 탐색)

  • Jang, Ha-Young;You, Eun-Kyung;Kim, Hyeock-Jin
    • Journal of Digital Convergence / v.20 no.4 / pp.389-396 / 2022
  • Deep learning is based on the perceptron and is currently used in various fields such as image recognition, voice recognition, object detection, and drug development. Accordingly, a variety of learning algorithms have been proposed, and the number of neurons constituting a neural network varies greatly among researchers. This study analyzed the learning characteristics of the currently used SGD, momentum, AdaGrad, RMSProp, and Adam methods according to the number of neurons. To this end, a neural network was constructed with one input layer, three hidden layers, and one output layer. ReLU was used as the activation function, cross-entropy error (CEE) as the loss function, and MNIST as the experimental dataset. As a result, it was concluded that 100-300 neurons, the Adam algorithm, and 200 training iterations are the most efficient for deep learning training. This study will provide implications for the algorithm to be developed and a reference value for the number of neurons given new learning data in the future. The update rules of the five optimizers are sketched below.
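The five optimizers compared in the study can be written as a few lines of NumPy each. The hyperparameters below are common textbook defaults, not values taken from the paper.

```python
import numpy as np

def sgd(w, g, lr=0.01):
    # plain stochastic gradient descent step
    return w - lr * g

def momentum(w, g, v, lr=0.01, beta=0.9):
    # velocity accumulates past gradients, damping oscillations
    v = beta * v - lr * g
    return w + v, v

def adagrad(w, g, h, lr=0.01, eps=1e-8):
    # per-parameter learning rates shrink with accumulated squared gradients
    h = h + g * g
    return w - lr * g / (np.sqrt(h) + eps), h

def rmsprop(w, g, h, lr=0.001, rho=0.9, eps=1e-8):
    # like AdaGrad, but with an exponentially decaying average
    h = rho * h + (1 - rho) * g * g
    return w - lr * g / (np.sqrt(h) + eps), h

def adam(w, g, m, v, t, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    # combines momentum (m) and RMSProp-style scaling (v); t starts at 1
    m = b1 * m + (1 - b1) * g
    v = b2 * v + (1 - b2) * g * g
    m_hat = m / (1 - b1 ** t)  # bias correction for the zero-initialized moments
    v_hat = v / (1 - b2 ** t)
    return w - lr * m_hat / (np.sqrt(v_hat) + eps), m, v
```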

Towards a Pedestrian Emotion Model for Navigation Support (내비게이션 지원을 목적으로 한 보행자 감성모델의 구축)

  • Kim, Don-Han
    • Science of Emotion and Sensibility / v.13 no.1 / pp.197-206 / 2010
  • For implementing an emotion retrieval system to support pedestrian navigation, coordinating the pedestrian emotion model with the system user's emotion is considered a key component. This study proposes a new method for capturing the user's model that corresponds to the pedestrian emotion model and examines the validity of the method. In the first phase, a database comprising a set of interior images representing hypothetical destinations was developed. In the second phase, 10 subjects were recruited and asked to evaluate navigation and satisfaction for each interior image in five rounds of navigation experiments. In the last phase, the subjects' feedback data was used to update the pedestrian emotion model, a process called 'learning' in this study. After the subjects' evaluations, the learning effect was analyzed in terms of four aspects: recall ratio, precision ratio, retrieval ranking, and satisfaction. Findings of the analysis verify that all four aspects were significantly improved after the learning. This study demonstrates the effectiveness of the learning algorithm for the proposed pedestrian emotion model, and the potential for such a pedestrian emotion model to be applied in the development of various mobile content service systems dealing with visual images, such as commercial interiors, in the future. The retrieval metrics are sketched below.

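As a rough illustration of the first three evaluation aspects, they can be computed from a ranked retrieval list. The image identifiers below are hypothetical, not the study's data.

```python
def precision_recall(retrieved, relevant):
    """Precision, recall, and rank of the first relevant item in a ranked list."""
    hits = [r for r in retrieved if r in relevant]
    precision = len(hits) / len(retrieved)
    recall = len(hits) / len(relevant)
    rank = next((i + 1 for i, r in enumerate(retrieved) if r in relevant), None)
    return precision, recall, rank

# e.g. five retrieved interior images, four relevant images in the database
retrieved = ["img3", "img7", "img1", "img9", "img5"]
relevant = {"img7", "img1", "img4", "img8"}
print(precision_recall(retrieved, relevant))  # (0.4, 0.5, 2)
```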

Effect of the Thermal Changes of Five-shu-points(五輸穴) of the Lung Meridian with Acupuncture Stimulation on Taeyon(L9, 太淵) (태연(太淵)(L9)자침(刺鍼)이 수태음폐경(手太陰肺經)의 오수혈(五輸穴) 영역(領域) 온도변화(溫度變化)에 미치는 영향(影響))

  • Song, Beom-Yong;Yook, Tae-Han
    • Journal of Acupuncture Research / v.17 no.3 / pp.219-232 / 2000
  • Objective: The meridians and acupuncture points of Oriental medicine are very important in the department of acupuncture and moxibustion, and their phenomena need to be studied with objective data. I therefore studied the effect of acupuncture at Taeyon($L_9$, 太淵) on the thermal changes of the Five-shu-points(五輸穴) of the Lung Meridian, using digital infrared thermal imaging (D.I.T.I.). Method: This study gathered clinical statistics on 60 men in good health. The subjects were divided into two groups: a control group (CON, N=30) and an acupuncture group (ACU, N=30). First, all 60 men were imaged with D.I.T.I.; after 10 minutes, a second image was taken of each group following the experimental method. Results: 1. The mean temperatures of the Sasang($L_{11}$), Oje($L_{10}$), Taeyon($L_9$), Kyonggo($L_8$), Choldaek($L_5$) and Taenung($P_7$) areas in healthy adult men showed no significant difference between the left and right side points. 2. Acupuncture stimulation at Taeyon($L_9$) produced greater thermal changes in the Sasang($L_{11}$), Oje($L_{10}$), Taeyon($L_9$), Kyonggo($L_8$) and Choldaek($L_5$) areas than in the control group. The thermal changes of the meridian point areas of the Lung Meridian in the acupuncture group differed significantly from the control group in both the decreasing and increasing temperature classes, and each class of ascending and descending thermal change was statistically significant compared with the control group. 3. Acupuncture stimulation at Taeyon($L_9$) had no effect on the thermal changes of the Taenung($P_7$) area compared with the control group, and the increasing and decreasing temperature classes of the acupuncture group did not differ significantly from the control group. Conclusion: Acupuncture at Taeyon($L_9$) appears to affect the thermal changes of the Five-shu-points areas of the Lung Meridian, and these results can be related to the existence of the meridians and acupuncture points.


Deep Learning Architectures and Applications (딥러닝의 모형과 응용사례)

  • Ahn, SungMahn
    • Journal of Intelligence and Information Systems / v.22 no.2 / pp.127-142 / 2016
  • A deep learning model is a kind of neural network that allows multiple hidden layers. There are various deep learning architectures, such as convolutional neural networks, deep belief networks, and recurrent neural networks. These have been applied to fields like computer vision, automatic speech recognition, natural language processing, audio recognition, and bioinformatics, where they have been shown to produce state-of-the-art results on various tasks. Among these architectures, convolutional neural networks and recurrent neural networks are classified as supervised learning models, and in recent years the supervised models have gained more popularity than unsupervised learning models such as deep belief networks, because they have shown successful applications in the fields mentioned above. Deep learning models can be trained with the backpropagation algorithm. Backpropagation is an abbreviation for "backward propagation of errors" and is a common method of training artificial neural networks, used in conjunction with an optimization method such as gradient descent. The method calculates the gradient of an error function with respect to all the weights in the network; the gradient is fed to the optimization method, which in turn uses it to update the weights in an attempt to minimize the error function. Convolutional neural networks use a special architecture which is particularly well adapted to classifying images. Using this architecture makes convolutional networks fast to train, which in turn helps us train deep, multi-layer networks that are very good at classifying images. These days, deep convolutional networks are used in most neural networks for image recognition. Convolutional neural networks use three basic ideas: local receptive fields, shared weights, and pooling. By local receptive fields, we mean that each neuron in the first (or any) hidden layer is connected to a small region of the input (or previous layer's) neurons. Shared weights mean that the same weights and bias are used for each local receptive field, so all the neurons in the hidden layer detect exactly the same feature, just at different locations in the input image. In addition to the convolutional layers just described, convolutional neural networks also contain pooling layers, which are usually used immediately after convolutional layers; what the pooling layers do is simplify the information in the output of the convolutional layer. Recent convolutional network architectures have 10 to 20 hidden layers and billions of connections between units. Training deep networks took weeks several years ago, but thanks to progress in GPUs and algorithmic enhancements, training time has been reduced to several hours. Neural networks with time-varying behavior are known as recurrent neural networks, or RNNs. A recurrent neural network is a class of artificial neural network in which connections between units form a directed cycle. This creates an internal state that allows the network to exhibit dynamic temporal behavior; unlike feedforward neural networks, RNNs can use their internal memory to process arbitrary sequences of inputs. Early RNN models turned out to be very difficult to train, harder even than deep feedforward networks. The reason is the unstable gradient problem, i.e., vanishing and exploding gradients: the gradient can get smaller and smaller as it is propagated back through layers, which makes learning in early layers extremely slow. The problem actually gets worse in RNNs, since gradients aren't just propagated backward through layers, they're propagated backward through time; if the network runs for a long time, the gradient can become extremely unstable and hard to learn from. It has become possible to incorporate an idea known as long short-term memory units (LSTMs) into RNNs; LSTMs make it much easier to get good results when training RNNs, and many recent papers make use of LSTMs or related ideas. A minimal backpropagation and gradient-descent sketch follows below.
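As a rough illustration of how backpropagation feeds gradient descent, here is a minimal two-layer network with a ReLU hidden layer and softmax cross-entropy loss. The toy data, layer sizes, and learning rate are assumptions for the sketch, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 4))                 # 64 samples, 4 features (toy data)
y = (X[:, 0] + X[:, 1] > 0).astype(int)      # toy binary labels
W1 = rng.normal(scale=0.1, size=(4, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.1, size=(8, 2)); b2 = np.zeros(2)
lr = 0.1

for step in range(200):
    # forward pass
    h = np.maximum(0, X @ W1 + b1)           # ReLU hidden layer
    logits = h @ W2 + b2
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)        # softmax probabilities
    loss = -np.log(p[np.arange(len(y)), y]).mean()

    # backward pass: propagate the error to get gradients for every weight
    d_logits = p.copy()
    d_logits[np.arange(len(y)), y] -= 1
    d_logits /= len(y)
    dW2 = h.T @ d_logits; db2 = d_logits.sum(axis=0)
    dh = d_logits @ W2.T
    dh[h <= 0] = 0                           # gradient through ReLU
    dW1 = X.T @ dh; db1 = dh.sum(axis=0)

    # gradient-descent update: step against the gradient to reduce the loss
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(f"final loss: {loss:.3f}")
```

The same two phases, a forward pass that computes the error and a backward pass that distributes its gradient, underlie the training of the convolutional and recurrent architectures described in the abstract.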