• Title/Summary/Keyword: Activation

In vitro Activation of Procaspase-8 by Forming the Cytoplasmic Component of the Death-inducing Signaling Complex (cDISC)

  • Roy, Ankoor; Hong, Jong hui; Lee, Jin-Hee; Lee, Young-Tae; Lee, Bong-Jin; Kim, Key-Sun
    • Molecules and Cells, v.26 no.2, pp.165-170, 2008
  • Procaspase-8 is activated by forming a death-inducing signaling complex (DISC) with the Fas-associated death domain (FADD) and the Fas receptor, but the mechanism of its activation is not well understood. Procaspase-8 devoid of the death effector domain at its N-terminus (Δn-procaspase-8) was reported to be activated by kosmotropic salts, but it has not been induced to form a DISC in vitro because it cannot interact with FADD. Here, we report the production of full-length procaspase-8 and show that it is activated by adding the Fas death domain (Fas-DD) and FADD, forming the cytoplasmic part of the DISC (cDISC). Furthermore, mutations known to affect DISC formation in vivo were shown to have the same effect on procaspase-8 activation in vitro. An antibody that induces Fas-DD association enhanced procaspase-8 activation, suggesting that the Fas ligand is not required for low-level activation of procaspase-8, but that Fas receptor clustering is needed for the high-level activation that leads to cell death. In vitro activation of procaspase-8 through cDISC formation will be invaluable for investigating ligand-mediated apoptosis and the numerous interactions affecting procaspase-8 activation.

The Course of Schema Activation in Processing of Humor Text (유머텍스트 처리에서 스키마의 활성화 과정)

  • Choi, Young-Geon; Shin, Hyun-Jung
    • The Journal of the Korea Contents Association, v.15 no.9, pp.425-435, 2015
  • Although most researchers studying humor in recent years agree that 'incongruity' is an essential factor in humor elicitation, they hold different views on the course of schema activation in processing humor text. One point of disagreement is whether schemata are activated concurrently or selectively. The concurrent-activation view holds that different schemata are activated at the same time because we perceive them simultaneously, whereas the selective-activation view holds that they are activated one at a time because we attend to them selectively. This study was conducted to test these two views. We verified that different schemata were activated in processing humor text, and we examined whether Vaid's experiment had failed. The experiment used a mixed 2 (schema 1, schema 2) × 3 (setup, incongruity, resolution) × 2 (humor, control) factorial design with Latin-square counterbalancing. The results showed that different schemata were activated at the same time during the 'incongruity' stage and, above all, remained activated during the 'resolution' stage. These results suggest that different schemata are activated concurrently.

Performance Improvement Method of Convolutional Neural Network Using Agile Activation Function (민첩한 활성함수를 이용한 합성곱 신경망의 성능 향상)

  • Kong, Na Young; Ko, Young Min; Ko, Sun Woo
    • KIPS Transactions on Software and Data Engineering, v.9 no.7, pp.213-220, 2020
  • A convolutional neural network is composed of convolutional layers and fully connected layers, and a nonlinear activation function is used in each of them. The activation function used in a neural network simulates the way a neuron transmits information between neurons: it passes the signal on only if the input exceeds a certain criterion and blocks it otherwise. Because the conventional activation function has no relationship with the loss function, the process of finding an optimal solution is slow. To improve this, an agile activation function that generalizes the conventional activation function is proposed. The agile activation function can improve the performance of a deep neural network by selecting an optimal agile parameter during training, using the first derivative of the loss function with respect to the agile parameter in the backpropagation process. On the MNIST classification problem, the agile activation function showed superior performance over conventional activation functions.
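
The abstract does not specify the functional form of the agile activation function; the following is a minimal PyTorch sketch of the idea it describes, assuming a single learnable "agile" parameter that rescales the input of a fixed base nonlinearity (the name `alpha` and the sigmoid base are illustrative, not taken from the paper):

```python
import torch
import torch.nn as nn

class AgileActivation(nn.Module):
    """Activation with a learnable 'agile' parameter.

    The parameter is updated like any weight, i.e. by the first derivative
    of the loss with respect to the parameter during backpropagation, which
    is the mechanism the abstract describes. The functional form (a sigmoid
    with a learnable input scale) is an illustrative assumption.
    """

    def __init__(self, alpha_init: float = 1.0):
        super().__init__()
        self.alpha = nn.Parameter(torch.tensor(alpha_init))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # d(loss)/d(alpha) flows through this expression automatically,
        # so the optimizer tunes alpha together with the network weights.
        return torch.sigmoid(self.alpha * x)
```

Dropping such a module in place of a fixed activation in a small MNIST network lets the same gradient descent that fits the weights also select the activation's shape, which is the coupling to the loss function that the abstract argues the conventional activation lacks.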

Performance Improvement Method of Convolutional Neural Network Using Combined Parametric Activation Functions (결합된 파라메트릭 활성함수를 이용한 합성곱 신경망의 성능 향상)

  • Ko, Young Min; Li, Peng Hang; Ko, Sun Woo
    • KIPS Transactions on Software and Data Engineering, v.11 no.9, pp.371-380, 2022
  • Convolutional neural networks are widely used to process data arranged in a grid, such as images. A general convolutional neural network consists of convolutional layers and fully connected layers, and each layer contains a nonlinear activation function. This paper proposes a combined parametric activation function to improve the performance of convolutional neural networks. The combined parametric activation function is created by summing parametric activation functions, each of which applies parameters that change the scale and location of the base activation function. Various nonlinear intervals can be formed depending on the multiple scale and location parameters, and the parameters can be learned in the direction that minimizes the loss function computed from the given input data. Testing the convolutional neural network with the combined parametric activation function on the MNIST, Fashion MNIST, CIFAR10, and CIFAR100 classification problems confirmed that it performs better than other activation functions.
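
Under the same caveat that the abstract does not give exact formulas, a plausible reading is f(x) = Σ_k c_k · g(s_k·x + t_k): a sum of copies of a base nonlinearity with learnable scale and location. A minimal PyTorch sketch follows; the number of terms, the tanh base, and all names are assumptions for illustration:

```python
import torch
import torch.nn as nn

class CombinedParametricActivation(nn.Module):
    """Sum of K parametric activations with learnable scale and location.

    f(x) = sum_k coef[k] * tanh(scale[k] * x + loc[k]); every parameter is
    trained by minimizing the loss, as the abstract describes. K and the
    tanh base are illustrative choices, not the paper's.
    """

    def __init__(self, k: int = 3):
        super().__init__()
        self.scale = nn.Parameter(torch.ones(k))      # input scale s_k
        self.loc = nn.Parameter(torch.zeros(k))       # input location t_k
        self.coef = nn.Parameter(torch.ones(k) / k)   # output weight c_k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Broadcast x against the K parameter sets, then sum over K.
        y = torch.tanh(self.scale * x.unsqueeze(-1) + self.loc)
        return (self.coef * y).sum(dim=-1)
```

With several shifted and rescaled copies, the sum can bend at different points of its domain, which is one way to realize the "various nonlinear intervals" the abstract mentions.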

The Comparison of Activation Protocols for PEMFC MEA with PtCo/C Catalyst (PtCo/C 촉매를 사용한 PEMFC MEA의 활성화 프로토콜 비교)

  • Lee, Giseong; Jung, Hyeon Seung; Hyun, Jinho; Pak, Chanho
    • Transactions of the Korean Hydrogen and New Energy Society, v.34 no.2, pp.178-186, 2023
  • Three activation methods (constant voltage, current cycling, and hydrogen pumping) were applied to investigate their effects on the performance of a membrane electrode assembly (MEA) loaded with a PtCo/C catalyst. The current cycling protocol took the shortest time to activate the MEA, but its performance after activation was the worst of the three methods. The constant voltage method took a moderate activation time and exhibited the best performance after activation. The hydrogen pumping protocol took the longest time to activate the MEA, with moderate performance after activation. According to distribution of relaxation times analysis, the improved performance after activation comes mainly from a decrease in the charge transfer resistance rather than the ionic resistance of the cathode catalyst layer, which suggests that the presence of water on the electrode is the key factor in activation.

The effects of action observation and motor imagery of a serial reaction time task (SRTT) on mirror neuron activation (연속 반응 시간 과제 수행의 행위 관찰과 운동 상상이 거울신경활성에 미치는 영향)

  • Lee, Sang-Yeol; Lee, Myung-Hee; Bae, Sung-Soo; Lee, Kang-Seong; Gong, Won-Tae
    • Journal of the Korean Society of Physical Medicine, v.5 no.3, pp.395-404, 2010
  • Purpose: The objective of this study was to examine how the method of motor learning affects brain activation. Methods: Brain activation was measured in nine men by fMRI. The subjects were divided into groups according to the method of motor learning: an actual practice (AP, n=3) group, an action observation (AO, n=3) group, and a motor imagery (MI, n=3) group. To examine the effect of each method, brain activation data were acquired by fMRI during learning. Results: The results measured before and during learning were as follows: (1) during learning, the AP group showed activation in the primary motor area located in the precentral gyrus; the somatosensory area located in the postcentral gyrus; the supplemental motor area and prefrontal association area located in the precentral gyrus, middle frontal gyrus, and superior frontal gyrus; the speech area located in the superior temporal gyrus and middle temporal gyrus; Broca's area located in the inferior parietal lobe; and the somatosensory association area of the precuneus. (2) During learning, the AO group showed activation in the primary motor area located in the precentral gyrus; the prefrontal association area located in the middle frontal gyrus and superior frontal gyrus; the speech area and supplemental motor area located in the superior temporal gyrus and middle temporal gyrus; Broca's area located in the inferior parietal lobe; the somatosensory area and primary motor area located in the precentral gyrus of the right and left cerebrum; and the somatosensory association area located in the precuneus. (3) During learning, the MI group showed activation in the speech area located in the superior temporal gyrus, the supplemental motor area, and the somatosensory association area located in the precuneus. Conclusion: Given these results, action observation is suggested as an alternative to motor learning through actual practice in a serial reaction time task: it produced brain activation similar to actual practice, obtained through the activation of mirror neurons observed during action observation. The mirror neurons are located in the primary motor area, somatosensory area, premotor area, supplemental motor area, and somatosensory association area. In sum, when planning a physiotherapy training program to improve the reeducation of movement, action observation, in addition to rest, is needed to increase the effect of motor learning in patients who cannot engage in actual practice.

Activation of apoptotic protein in U937 cells by a component of turmeric oil

  • Lee, Yong-Kyu
    • BMB Reports, v.42 no.2, pp.96-100, 2009
  • Aromatic (ar)-turmerone from turmeric oil displays anti-tumorigenic activity, including inhibition of cell proliferation. This study investigated ar-turmerone-mediated activation of apoptotic proteins in human lymphoma U937 cells. Ar-turmerone treatment inhibited U937 cell viability in a concentration-dependent fashion, with inhibition exceeding 84%. Moreover, the treatment produced nucleosomal DNA fragmentation, and the percentage of sub-diploid cells increased in a concentration-dependent manner; both are hallmarks of apoptosis. The apoptotic effect of ar-turmerone was associated with the induction of Bax and p53 proteins, rather than Bcl-2 and p21. Activation of mitochondrial cytochrome c and caspase-3 demonstrated that caspase activation accompanied the apoptotic effect of ar-turmerone, which mediated cell death. These results suggest that the apoptotic effect of ar-turmerone on U937 cells may involve caspase-3 activation through the induction of Bax and p53, rather than Bcl-2 and p21.

The Activation-Only VSIMM Algorithm for Maneuvering Target Tracking (기동표적 추적을 위한 Activation-Only VSIMM)

  • Choe, Seong-Hui; Song, Taek-Ryeol
    • The Transactions of the Korean Institute of Electrical Engineers D, v.51 no.9, pp.381-388, 2002
  • This paper proposes the activation-only VSIMM estimator, applied mainly to target tracking problems. The algorithm is much simpler and easier to implement than the ordinary VSIMM algorithm, and it provides a substantial reduction in computation while delivering performance identical to the ordinary VSIMM and FSIMM estimators. More importantly, the drawbacks related to improper termination and activation inherent in the VSIMM algorithm are eliminated. The performance of the estimator is demonstrated through Monte Carlo simulations of maneuvering target tracking, in comparison with the FSIMM and VSIMM estimators.
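
The abstract does not reproduce the estimator's equations; the sketch below shows only the mode-probability bookkeeping of one IMM cycle restricted to an active model subset, the part a variable-structure variant manages. The renormalization over the active set, the function name, and the interfaces are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

def vsimm_mode_update(mu, Pi, lam, active):
    """One IMM mode-probability cycle over an active model subset.

    mu     : (M,) prior mode probabilities over the full model bank
    Pi     : (M, M) Markov transition matrix, Pi[i, j] = P(mode j | mode i)
    lam    : (M,) model-conditioned measurement likelihoods (only the
             entries indexed by `active` are used)
    active : indices of the currently active models
    """
    a = np.asarray(active)
    # Predicted probability of each active mode: c_j = sum_i Pi[i, j] mu[i]
    c = Pi[np.ix_(a, a)].T @ mu[a]
    c /= c.sum()                 # renormalize over the active subset
    post = c * lam[a]            # Bayes update with the model likelihoods
    post /= post.sum()
    mu_new = np.zeros_like(mu)
    mu_new[a] = post             # inactive models carry zero probability
    return c, mu_new
```

In the full estimator, the likelihoods `lam` come from the model-conditioned Kalman filters, and an activation rule decides which models enter `active` as the target maneuvers; running only that subset is where the computational saving over the fixed-structure FSIMM comes from.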

Comparison of Image Classification Performance by Activation Functions in Convolutional Neural Networks (컨벌루션 신경망에서 활성 함수가 미치는 영상 분류 성능 비교)

  • Park, Sung-Wook; Kim, Do-Yeon
    • Journal of Korea Multimedia Society, v.21 no.10, pp.1142-1149, 2018
  • Recently, computer vision applications increasingly use the CNN, one of the deep learning algorithms. However, the CNN does not provide perfect classification performance, owing to the vanishing gradient problem. Most CNNs use the ReLU activation function to mitigate this problem. In this study, four activation functions that can replace ReLU were applied to four networks with different structures. Across 20 experiments, ReLU showed the lowest performance in accuracy, loss, and speed of initial learning convergence. The optimal activation function varied from network to network, but all four alternatives performed better than ReLU.
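
A minimal sketch of this kind of comparison, assuming PyTorch and a toy MNIST-sized CNN; the abstract does not name the four replacement functions, so ELU, SELU, LeakyReLU, and Softplus below are stand-ins:

```python
import torch.nn as nn

def make_cnn(act: nn.Module) -> nn.Sequential:
    """A small CNN whose nonlinearity is swapped per experiment."""
    return nn.Sequential(
        nn.Conv2d(1, 16, 3, padding=1), act,
        nn.MaxPool2d(2),
        nn.Conv2d(16, 32, 3, padding=1), act,
        nn.MaxPool2d(2),
        nn.Flatten(),
        nn.Linear(32 * 7 * 7, 10),  # 28x28 input halved twice -> 7x7
    )

candidates = {
    "ReLU": nn.ReLU(),            # the baseline the study compares against
    "ELU": nn.ELU(),
    "SELU": nn.SELU(),
    "LeakyReLU": nn.LeakyReLU(),
    "Softplus": nn.Softplus(),
}

for name, act in candidates.items():
    model = make_cnn(act)
    # Train each variant on MNIST and record accuracy, loss, and
    # epochs to initial convergence, repeating across network structures.
    print(name, sum(p.numel() for p in model.parameters()))
```

Repeating the loop over several network structures (four in the study, with repeated runs for 20 experiments in total) gives the per-network comparison the abstract summarizes.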

Effect of Nonlinear Transformations on Entropy of Hidden Nodes

  • Oh, Sang-Hoon
    • International Journal of Contents, v.10 no.1, pp.18-22, 2014
  • Hidden nodes play a key role in the information processing of feed-forward neural networks, in which inputs are processed through a series of weighted sums and nonlinear activation functions. To understand the role of hidden nodes, we must analyze the effect of the nonlinear activation functions on the weighted sums to the hidden nodes. In this paper, we focus on the effect of the nonlinear functions from the viewpoint of information theory. Under the assumption that the nonlinear activation function can be approximated piecewise linearly, we prove that the entropy of the weighted sums to the hidden nodes decreases after the piecewise-linear functions are applied. We therefore argue that the nonlinear activation function decreases the uncertainty among hidden nodes. Furthermore, the more the hidden nodes are saturated, the more their entropy decreases. Based on this result, we can say that, after successful training of a feed-forward neural network, hidden nodes tend to lie not in the linear regions but in the saturated regions of the activation function, with the effect of reducing uncertainty.
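
A short calculation, not taken from the paper but consistent with its piecewise-linear assumption, makes the claim concrete. For an invertible piecewise-linear map g with slope a_k on piece R_k, the differential entropy of Y = g(X) is

```latex
h(Y) = h(X) + \mathbb{E}\left[\log\lvert g'(X)\rvert\right]
     = h(X) + \sum_{k} P(X \in R_k)\,\log\lvert a_k\rvert .
```

Saturated pieces have |a_k| ≪ 1, so their log terms are strongly negative: the more probability mass the weighted sums place in saturated regions, the further h(Y) falls below h(X). (For a standard sigmoid the slope never exceeds 1/4, so the entropy decreases everywhere.)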