• Title/Summary/Keyword: temporal feature


Effect of a Hot Water Extract of Sparassis crispa on the Expression of Tight Junction-Associated Genes in HaCaT Cells (꽃송이버섯 열수추출물이 HaCaT의 세포 연접 관련 유전자의 발현에 대한 영향)

  • Han, Hyo-Sang
    • Journal of The Korean Society of Integrative Medicine
    • /
    • v.9 no.2
    • /
    • pp.83-92
    • /
    • 2021
  • Purpose : Keratinocytes are the main cellular components involved in wound healing during re-epithelialization and inflammation. Dysfunction of tight junction (TJ) adhesions is a major feature in the pathogenesis of various diseases. The purpose of this study was to identify the various effects of a Sparassis crispa water extract (SC) on HaCaT cells and to investigate whether these effects might be applicable to human skin. Methods : We assessed the effect of SC on HaCaT cell viability using the MTS assay. The antioxidant effect of SC was analyzed with the ABTS assay and compared with that of the well-known antioxidant resveratrol. Quantitative reverse-transcription polymerase chain reaction (qRT-PCR), the most widely applied method for quantifying gene expression, was used to examine whether SC affects the mRNA expression of tight-junction genes associated with skin moisturization in HaCaT cells. In addition, because wound healing is one of the most complex processes in the human body, involving the spatial and temporal synchronization of a variety of cell types with distinct roles in the phases of hemostasis, inflammation, growth, re-epithelialization, and remodeling, a wound healing assay was performed to assess cell migration in SC-treated HaCaT cells. Results : MTS analysis showed that SC was more cytotoxic to HaCaT cells at a concentration of 0.5 mg/mL. Compared to 100 µM resveratrol, 4 mg/mL SC exhibited similar or superior antioxidant effects. SC treatment of HaCaT cells reduced the mRNA expression of claudin 1, claudin 3, claudin 4, claudin 6, claudin 7, claudin 8, ZO-1, ZO-2, JAM-A, occludin, and tricellulin by about 1.13-fold. The wound healing assay demonstrated altered cell migration in SC-treated HaCaT cells, with HaCaT cell migration reduced to 73.2 % by SC treatment. Conclusion : SC, which acts as an antioxidant, reduces oxidative stress and prevents aging of the skin. Further research is needed to address the effects of SC on human skin, given the observed alteration of tight-junction gene mRNA expression and the decreased migration of HaCaT cells.
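
Where the abstract reports fold-level changes in mRNA expression from qRT-PCR, such numbers are typically derived with a relative quantification scheme such as the 2^-ΔΔCt method. The sketch below only illustrates that calculation with hypothetical Ct values and a hypothetical reference gene; the abstract does not state which quantification method the authors used.

```python
# Illustrative 2^-ddCt fold-change calculation for qRT-PCR data.
# The Ct values and the GAPDH reference gene are hypothetical; the abstract
# does not specify the quantification scheme actually used.
ct_target_control, ct_ref_control = 24.1, 18.0   # e.g. claudin 1 vs. GAPDH, untreated cells
ct_target_treated, ct_ref_treated = 24.4, 18.1   # after SC treatment (hypothetical)

d_ct_control = ct_target_control - ct_ref_control
d_ct_treated = ct_target_treated - ct_ref_treated
dd_ct = d_ct_treated - d_ct_control
fold_change = 2 ** (-dd_ct)                       # < 1 means reduced expression
print(f"fold change vs. control: {fold_change:.2f}")
```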

A SVR Based-Pseudo Modified Einstein Procedure Incorporating H-ADCP Model for Real-Time Total Sediment Discharge Monitoring (실시간 총유사량 모니터링을 위한 H-ADCP 연계 수정 아인슈타인 방법의 의사 SVR 모형)

  • Noh, Hyoseob;Son, Geunsoo;Kim, Dongsu;Park, Yong Sung
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.43 no.3
    • /
    • pp.321-335
    • /
    • 2023
  • Monitoring sediment loads in natural rivers is a key process in river engineering, but it is costly and dangerous. In practice, suspended loads are measured directly, and total loads, which are the sum of suspended loads and bed loads, are estimated. This study proposes a real-time sediment discharge monitoring system using a horizontal acoustic Doppler current profiler (H-ADCP) and support vector regression (SVR). The proposed system comprises two SVR models: one for suspended sediment concentration (SVR-SSC) and one for total load (SVR-QTL). SVR-SSC estimates the SSC, and SVR-QTL mimics the modified Einstein procedure. Grid search with K-fold cross-validation (Grid-CV) and recursive feature elimination (RFE) were employed to determine the SVR hyperparameters and input variables. The two SVR models showed reasonable cross-validation scores (R2), 0.885 for SVR-SSC and 0.860 for SVR-QTL. During the time-series sediment load monitoring period, we successfully detected various sediment transport phenomena in natural streams, such as hysteresis loops and sensitive sediment fluctuations. The newly proposed sediment monitoring system depends only on features gauged by the H-ADCP, without additional assumptions about hydraulic variables (e.g., friction slope and suspended sediment size distribution). This method can be applied economically to any discharge monitoring station with an ADCP installed and is expected to enhance the temporal resolution of sediment monitoring.
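
As a rough illustration of the model-selection workflow named in the abstract (RFE for input variables, then grid search with K-fold cross-validation for the SVR hyperparameters), the sketch below uses scikit-learn with synthetic data; the feature set and the parameter grid are assumptions, not the authors' configuration.

```python
# A minimal sketch (not the authors' code) of combining recursive feature
# elimination (RFE) with grid-searched, K-fold cross-validated SVR.
import numpy as np
from sklearn.svm import SVR
from sklearn.feature_selection import RFE
from sklearn.model_selection import GridSearchCV, KFold
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))          # stand-ins for H-ADCP features (stage, index velocity, backscatter, ...)
y = X @ rng.normal(size=6) + rng.normal(scale=0.1, size=200)  # stand-in for SSC or total load

# Step 1: RFE with a linear SVR ranks and selects the input variables.
selector = RFE(SVR(kernel="linear"), n_features_to_select=3).fit(X, y)
X_sel = selector.transform(X)

# Step 2: grid search with K-fold cross-validation tunes an RBF SVR.
pipe = Pipeline([("scale", StandardScaler()), ("svr", SVR(kernel="rbf"))])
grid = {"svr__C": [1, 10, 100], "svr__gamma": ["scale", 0.1, 1.0], "svr__epsilon": [0.01, 0.1]}
search = GridSearchCV(pipe, grid, cv=KFold(n_splits=5, shuffle=True, random_state=0), scoring="r2")
search.fit(X_sel, y)
print(search.best_params_, round(search.best_score_, 3))
```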

Derivation of Engineered Barrier System (EBS) Degradation Mechanism and Its Importance in the Early Phase of the Deep Geological Repository for High-Level Radioactive Waste (HLW) through Analysis on the Long-Term Evolution Characteristics in the Finnish Case (핀란드 고준위방폐물 심층처분장 장기진화 특성 분석을 통한 폐쇄 초기단계 공학적방벽 성능저하 메커니즘 및 중요도 도출)

  • Sukhoon Kim;Jeong-Hwan Lee
    • The Journal of Engineering Geology
    • /
    • v.33 no.4
    • /
    • pp.725-736
    • /
    • 2023
  • The compliance of deep geological disposal facilities for high-level radioactive waste with safety objectives requires consideration of uncertainties owing to temporal changes in the disposal system. A comprehensive review and analysis of the characteristics of this evolution should be undertaken to identify the effects on multiple barriers and the biosphere. We analyzed the evolution of the buffer, backfill, plug, and closure regions during the early phase of the post-closure period as part of a long-term performance assessment for an operating license application for a deep geological repository in Finland. Degradation mechanisms generally expected in engineered barriers were considered, and long-term evolution features were examined for use in performance assessments. The importance of evolution features was classified into six categories based on the design of the Finnish case. Results are expected to be useful as a technical basis for performance and safety assessment in developing the Korean deep geological disposal system for high-level radioactive waste. However, for a more detailed review and evaluation of each feature, it is necessary to obtain data for the final disposal site and facility-specific design, and to assess its impact in advance.

A Deep Learning Based Approach to Recognizing Accompanying Status of Smartphone Users Using Multimodal Data (스마트폰 다종 데이터를 활용한 딥러닝 기반의 사용자 동행 상태 인식)

  • Kim, Kilho;Choi, Sangwoo;Chae, Moon-jung;Park, Heewoong;Lee, Jaehong;Park, Jonghun
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.1
    • /
    • pp.163-177
    • /
    • 2019
  • As smartphones have become widely used, human activity recognition (HAR) tasks that recognize the personal activities of smartphone users from multimodal data have been actively studied. The research area is expanding from the recognition of the simple body movements of an individual user to the recognition of low-level and high-level behavior. However, HAR tasks for recognizing interaction behavior with other people, such as whether the user is accompanying or communicating with someone else, have received less attention so far. Moreover, previous research on recognizing interaction behavior has usually depended on audio, Bluetooth, and Wi-Fi sensors, which are vulnerable to privacy issues and require much time to collect enough data. In contrast, physical sensors such as the accelerometer, magnetic field sensor, and gyroscope are less vulnerable to privacy issues and can collect a large amount of data within a short time. In this paper, a deep learning-based method for detecting accompanying status using only multimodal physical sensor data (accelerometer, magnetic field, and gyroscope) is proposed. The accompanying status was defined as a redefinition of part of the user's interaction behavior, covering whether the user is accompanying an acquaintance at close distance and whether the user is actively communicating with that acquaintance. A framework based on convolutional neural networks (CNN) and long short-term memory (LSTM) recurrent networks for classifying accompanying and conversation was proposed. First, a data preprocessing method was introduced, consisting of time synchronization of the multimodal data from the different physical sensors, data normalization, and sequence data generation. Nearest-neighbor interpolation was applied to synchronize the timestamps of data collected from the different sensors. Normalization was performed on each x, y, and z axis value of the sensor data, and the sequence data were generated with a sliding-window method. The sequence data then became the input to the CNN, which extracts feature maps representing local dependencies of the original sequence. The CNN consisted of three convolutional layers and had no pooling layer, so as to maintain the temporal information of the sequence data. Next, the LSTM recurrent networks received the feature maps, learned long-term dependencies from them, and extracted features. The LSTM recurrent networks consisted of two layers, each with 128 cells. Finally, the extracted features were used for classification by a softmax classifier. The loss function of the model was cross-entropy, and the weights of the model were randomly initialized from a normal distribution with a mean of 0 and a standard deviation of 0.1. The model was trained using the adaptive moment estimation (Adam) optimization algorithm, and the mini-batch size was set to 128. Dropout was applied to the inputs of the LSTM recurrent networks to prevent overfitting. The initial learning rate was set to 0.001, and it decayed exponentially by a factor of 0.99 at the end of each training epoch. An Android smartphone application was developed and released to collect data, and smartphone data were collected from a total of 18 subjects. Using these data, the model classified accompanying and conversation with 98.74% and 98.83% accuracy, respectively. Both the F1 score and the accuracy of the model were higher than those of a majority-vote classifier, a support vector machine, and a deep recurrent neural network. In future research, we will focus on more rigorous multimodal sensor data synchronization methods that minimize timestamp differences. In addition, we will further study transfer learning methods that allow models trained on the training data to be transferred to evaluation data that follows a different distribution. This is expected to yield a model whose recognition performance is robust to changes in the data that were not considered during training.
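
A minimal sketch of the CNN-LSTM classifier as described (three convolutional layers without pooling, two 128-cell LSTM layers, softmax output, cross-entropy loss, N(0, 0.1) weight initialization, Adam with a 0.001 learning rate decayed by 0.99 per epoch, batch size 128, dropout on the LSTM inputs), assuming TensorFlow/Keras; the window length, channel count, filter sizes, and dropout rate are assumptions not given in the abstract.

```python
# A sketch of the CNN-LSTM architecture described in the abstract, not the
# authors' released code. Layer counts and training settings follow the text;
# window length, filter counts, kernel sizes, and dropout rate are assumed.
import tensorflow as tf

WINDOW_LEN, N_CHANNELS = 128, 9   # assumed: 3 sensors x 3 axes per timestep
init = tf.keras.initializers.RandomNormal(mean=0.0, stddev=0.1)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW_LEN, N_CHANNELS)),
    # Three 1-D convolutional layers, no pooling, to keep temporal resolution.
    tf.keras.layers.Conv1D(64, 5, padding="same", activation="relu", kernel_initializer=init),
    tf.keras.layers.Conv1D(64, 5, padding="same", activation="relu", kernel_initializer=init),
    tf.keras.layers.Conv1D(64, 5, padding="same", activation="relu", kernel_initializer=init),
    # Dropout on the inputs to the LSTM layers, as described.
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.LSTM(128, return_sequences=True, kernel_initializer=init),
    tf.keras.layers.LSTM(128, kernel_initializer=init),
    tf.keras.layers.Dense(2, activation="softmax", kernel_initializer=init),
])

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# Exponential learning-rate decay by a factor of 0.99 at the end of each epoch.
decay = tf.keras.callbacks.LearningRateScheduler(lambda epoch: 1e-3 * 0.99 ** epoch)
# model.fit(x_train, y_train, batch_size=128, epochs=50, callbacks=[decay])
```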

PCA-based Waveform Classification of Rabbit Retinal Ganglion Cell Activity (주성분분석을 이용한 토끼 망막 신경절세포의 활동전위 파형 분류)

  • 진계환;조현숙;이태수;구용숙
    • Progress in Medical Physics
    • /
    • v.14 no.4
    • /
    • pp.211-217
    • /
    • 2003
  • Principal component analysis (PCA) is a well-known data analysis method that is useful for linear feature extraction and data compression. PCA is a linear transformation that applies an orthogonal rotation to the original data so as to maximize the retained variance. It is a classical technique for obtaining an optimal overall mapping of linearly dependent patterns of correlation between variables (e.g., neurons). PCA provides, in the mean-squared-error sense, an optimal linear mapping of the signals that are spread across a group of variables. These signals are concentrated into the first few components, while the noise, i.e., variance that is uncorrelated across variables, is sequestered in the remaining components. PCA has been used extensively to resolve temporal patterns in neurophysiological recordings. Because the retinal signal is a stochastic process, PCA can be used to identify retinal spikes. The retina was isolated from an excised rabbit eye, and a piece of retina was attached, ganglion cell side down, to the surface of a microelectrode array (MEA). The MEA consisted of a glass plate with 60 substrate-integrated, insulated gold connection lanes terminating in an 8×8 array (spacing 200 µm, electrode diameter 30 µm) in the center of the plate. The MEA 60 system was used for recording retinal ganglion cell activity. The action potentials of each channel were sorted with an off-line analysis tool. Spikes were detected with a threshold criterion and sorted according to their principal component composition. The first (PC1) and second (PC2) principal component values were calculated using all the waveforms of each channel and all n time points in the waveform, and several clusters could be separated clearly in two dimensions. We verified that PCA-based waveform detection is effective as an initial approach to spike sorting.
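
To make the sorting procedure concrete, the following sketch projects threshold-detected spike waveforms onto PC1 and PC2 and separates clusters in that two-dimensional space, assuming NumPy and scikit-learn; the synthetic waveforms and the k-means clustering step are illustrative stand-ins, since the article does not name the clustering method applied after PCA.

```python
# A minimal sketch of PCA-based spike waveform sorting. The synthetic
# waveforms below stand in for threshold-detected spikes from one MEA channel.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 32)                        # 32 samples per spike waveform
unit_a = -np.exp(-((t - 0.30) / 0.05) ** 2)      # two template spike shapes
unit_b = -0.6 * np.exp(-((t - 0.45) / 0.08) ** 2)
spikes = np.vstack([unit_a + 0.05 * rng.normal(size=(100, 32)),
                    unit_b + 0.05 * rng.normal(size=(100, 32))])

# Project each waveform onto PC1 and PC2, then cluster in that 2-D space.
scores = PCA(n_components=2).fit_transform(spikes)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(scores)
print(scores.shape, np.bincount(labels))
```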


Comparison of Forest Carbon Stocks Estimation Methods Using Forest Type Map and Landsat TM Satellite Imagery (임상도와 Landsat TM 위성영상을 이용한 산림탄소저장량 추정 방법 비교 연구)

  • Kim, Kyoung-Min;Lee, Jung-Bin;Jung, Jaehoon
    • Korean Journal of Remote Sensing
    • /
    • v.31 no.5
    • /
    • pp.449-459
    • /
    • 2015
  • The conventional National Forest Inventory (NFI)-based forest carbon stock estimation method is suitable for national-scale estimation but not for regional-scale estimation, owing to the lack of NFI plots. In this study, for the purpose of regional-scale carbon stock estimation, we created grid-based forest carbon stock maps using spatial ancillary data and two types of up-scaling methods. Chungnam province was chosen as the study area, for which the 5th NFI (2006~2009) data were collected. The first method (method 1) uses the forest type map as ancillary data and a regression model for forest carbon stock estimation, whereas the second method (method 2) uses satellite imagery and the k-Nearest Neighbor (k-NN) algorithm. Additionally, in order to consider uncertainty effects, the final AGB carbon stock maps were generated by performing 200 iterations with Monte Carlo simulation. As a result, compared to the NFI-based estimate (21,136,911 tonC), the total carbon stock was over-estimated by method 1 (22,948,151 tonC) but under-estimated by method 2 (19,750,315 tonC). In a paired t-test with 186 independent data points, the average carbon stock estimate from the NFI-based method was statistically different from method 2 (p < 0.01) but not from method 1 (p > 0.01). In particular, the Monte Carlo simulation revealed that the smoothing effect of the k-NN algorithm and mis-registration error between NFI plots and the satellite imagery can lead to large uncertainty in carbon stock estimation. Although method 1 was found suitable for carbon stock estimation of the heterogeneous forest stands found in Korea, a satellite-based method is still needed to provide periodic estimates over large, uninvestigated forest areas. In these respects, future work will focus on extending the spatial and temporal extent of the study area and on robust carbon stock estimation with various satellite images and estimation methods.
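
The up-scaling and uncertainty steps described (k-NN prediction from satellite features plus 200 Monte Carlo iterations) can be sketched as below with scikit-learn; the spectral features, plot-level error magnitude, and cell area are hypothetical stand-ins, not values from the study.

```python
# A minimal sketch of k-NN based carbon stock up-scaling with Monte Carlo
# uncertainty propagation. Features and error magnitudes are hypothetical
# stand-ins for Landsat TM bands and NFI plot AGB carbon.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)
plot_bands = rng.uniform(0, 1, size=(186, 6))           # spectral features at NFI plots
plot_carbon = 50 + 80 * plot_bands[:, 3] + rng.normal(scale=5, size=186)  # tonC/ha
grid_bands = rng.uniform(0, 1, size=(10_000, 6))         # features for every map grid cell
cell_area_ha = 0.09                                      # 30 m x 30 m Landsat cell

totals = []
for _ in range(200):                                     # 200 Monte Carlo iterations
    noisy = plot_carbon + rng.normal(scale=5, size=plot_carbon.size)  # plot-level error
    knn = KNeighborsRegressor(n_neighbors=5).fit(plot_bands, noisy)
    totals.append(knn.predict(grid_bands).sum() * cell_area_ha)

totals = np.array(totals)
print(f"total carbon: {totals.mean():,.0f} ± {totals.std():,.0f} tonC")
```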

Deep Learning Architectures and Applications (딥러닝의 모형과 응용사례)

  • Ahn, SungMahn
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.2
    • /
    • pp.127-142
    • /
    • 2016
  • A deep learning model is a kind of neural network that allows multiple hidden layers. There are various deep learning architectures, such as convolutional neural networks, deep belief networks, and recurrent neural networks. These have been applied to fields like computer vision, automatic speech recognition, natural language processing, audio recognition, and bioinformatics, where they have been shown to produce state-of-the-art results on various tasks. Among these architectures, convolutional neural networks and recurrent neural networks are classified as supervised learning models. In recent years, these supervised learning models have gained more popularity than unsupervised learning models such as deep belief networks, because supervised learning models have shown successful applications in the fields mentioned above. Deep learning models can be trained with the backpropagation algorithm. Backpropagation is an abbreviation for "backward propagation of errors" and is a common method of training artificial neural networks, used in conjunction with an optimization method such as gradient descent. The method calculates the gradient of an error function with respect to all the weights in the network. The gradient is fed to the optimization method, which in turn uses it to update the weights in an attempt to minimize the error function. Convolutional neural networks use a special architecture that is particularly well adapted to classifying images. Using this architecture makes convolutional networks fast to train, which in turn helps us train deep, multi-layer networks that are very good at classifying images. These days, deep convolutional networks are used in most neural networks for image recognition. Convolutional neural networks use three basic ideas: local receptive fields, shared weights, and pooling. By local receptive fields, we mean that each neuron in the first (or any) hidden layer is connected to a small region of the input (or previous layer's) neurons. Shared weights mean that the same weights and bias are used for each of the local receptive fields, so that all the neurons in the hidden layer detect exactly the same feature, just at different locations in the input image. In addition to the convolutional layers just described, convolutional neural networks also contain pooling layers. Pooling layers are usually used immediately after convolutional layers; what they do is simplify the information in the output from the convolutional layer. Recent convolutional network architectures have 10 to 20 hidden layers and billions of connections between units. Training deep networks took weeks several years ago, but thanks to progress in GPUs and algorithmic enhancements, training time has been reduced to several hours. Neural networks with time-varying behavior are known as recurrent neural networks, or RNNs. A recurrent neural network is a class of artificial neural network in which connections between units form a directed cycle. This creates an internal state of the network, which allows it to exhibit dynamic temporal behavior. Unlike feedforward neural networks, RNNs can use their internal memory to process arbitrary sequences of inputs. Early RNN models turned out to be very difficult to train, harder even than deep feedforward networks. The reason is the unstable gradient problem, including vanishing and exploding gradients. The gradient can get smaller and smaller as it is propagated back through the layers.
This makes learning in early layers extremely slow. The problem actually gets worse in RNNs, since gradients aren't just propagated backward through layers, they're propagated backward through time. If the network runs for a long time, that can make the gradient extremely unstable and hard to learn from. It has been possible to incorporate an idea known as long short-term memory units (LSTMs) into RNNs. LSTMs make it much easier to get good results when training RNNs, and many recent papers make use of LSTMs or related ideas.
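
As a small, self-contained illustration of the three convolutional ideas described above (local receptive fields, shared weights, and pooling), the sketch below defines a toy convolutional network in TensorFlow/Keras; the 28×28 single-channel input and the layer sizes are assumptions for illustration, not an architecture from the article.

```python
# A toy convolutional network illustrating local receptive fields, shared
# weights, and pooling. Input shape and layer sizes are illustrative only.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(28, 28, 1)),
    # Each 5x5 kernel is a local receptive field whose weights (and bias) are
    # shared across all positions of the image, producing 32 feature maps.
    tf.keras.layers.Conv2D(32, kernel_size=5, activation="relu"),
    # The pooling layer simplifies (downsamples) each feature map.
    tf.keras.layers.MaxPooling2D(pool_size=2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()
```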

A Study of Segmental and Syllabic Intervals of Canonical Babbling and Early Speech

  • Chen, Xiaoxiang;Xiao, Yunnan
    • Cross-Cultural Studies
    • /
    • v.28
    • /
    • pp.115-139
    • /
    • 2012
  • The interval, or duration, of segments, syllables, words, and phrases is an important acoustic feature that influences the naturalness of speech. A number of cross-sectional studies of the acoustic characteristics of children's speech development have found that the intervals of segments, syllables, words, and phrases tend to change with increasing age. One hypothesis assumed that decreases in intervals would be greater when children were younger and smaller when they were older (Thelen, 1991); it has been supported by quite a number of studies based on cross-sectional designs (Tingley & Allen, 1975; Kent & Forner, 1980; Chermak & Schneiderman, 1986). The other hypothesis predicted that decreases in intervals would be smaller when children were younger and greater when they were older (Smith, Kenney & Hussain, 1996). Researchers thus seem to have come up with conflicting postulations and inconsistent results about the trends of change in the intervals of segments, syllables, words, and phrases, leaving this an unresolved issue. Most acoustic investigations of children's speech production have been conducted with cross-sectional designs, which involve studying several groups of children; so far there are only a few longitudinal studies. This issue needs more longitudinal investigation; moreover, acoustic measures of the intervals of child speech are hardly available. All former studies focus on the word stages and exclude the babbling stages, especially the canonical babbling stage, but we need to find out when concrete changes in intervals begin to occur and what causes the changes. Therefore, we conducted an acoustic study of the interval characteristics of segments and words in canonical babble (CB) and early speech in an infant acquiring Mandarin Chinese, aged from 0;9 to 2;4. The current research addresses the following two questions: 1. Whether decreases in interval would be greater when children were younger and smaller when they were older, or vice versa? 2. Whether child speech, in terms of the acoustic feature of interval, drifts in the direction of the language the child is exposed to? The female infant, whose L1 was Southern Mandarin and who lived in Changsha, was audio- and video-taped at her home for about one hour almost on a weekly basis from age 0;9 to 2;4 under natural observation by the investigators. The recordings were digitized, and parts of the digitized material were labeled. All repetitions were excluded. The utterances were extracted from 44 sessions ranging from 30 minutes to one hour and were divided into segments as well as syllable-sized units. The age stages are 0;9-1;0, 1;1-1;5, 1;6-2;0, and 2;1-2;4. The subject was a normal monolingual child of well-educated parents. Segments and syllables from the 44 sessions spanning the transition from babble to speech were transcribed in narrow IPA and coded for analysis; babble was coded from age 0;9-1;0, and words were coded from 1;0 to 2;4. The data were checked by two professionally trained persons who majored in phonetics. The present investigation is a longitudinal analysis of some temporal characteristics of the child's speech during the age periods 0;9-1;0, 1;1-1;5, 1;6-2;0, and 2;1-2;4. The answer to Research Question 1 is that our results are in agreement with neither of the hypotheses. 
On the whole, there is a tendency for segmental and syllabic duration to decrease with increasing age, but the changes are not drastic or abrupt. For example, /a/ after /k/ in Table 1 shows a greater decrease during 1;1-1;5, while /a/ after /p/, /t/, and /w/ shows a greater decrease during 2;1-2;4; /ka/ shows a greater decrease during 1;1-1;5, while /ta/ and /na/ show greater decreases during 2;1-2;4. Across the age periods, interval change fluctuates throughout. The answer to Research Question 2 is yes. The babbling stage is a period in which the acoustic features of the intervals of the child's segments, syllables, words, and phrases shift in the direction of the language to be learned; babbling and the emergence of children's speech are greatly influenced by the ambient language. The phonetic changes in duration continue until as late as 10-12 years of age before reaching adult-like levels. With increasing exposure to the ambient language, the variation becomes smaller and smaller until the child attains adult-like competence. Analysis with SPSS 15.0 showed that the decrease in segmental and syllabic intervals across the four age periods is not statistically significant (p > 0.05), which means that the change in segmental and syllabic intervals is continuous and that the process of child speech development is gradual and cumulative.
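
The significance comparison reported at the end (no significant difference in interval decrease across the four age periods, p > 0.05) was run in SPSS 15.0; the sketch below only illustrates the same kind of test as a one-way ANOVA in SciPy on synthetic durations, not the study's data.

```python
# One-way ANOVA across the four age periods, as an illustration only.
# The duration samples are synthetic; the study's analysis was done in SPSS 15.0.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical syllable durations (seconds) for 0;9-1;0, 1;1-1;5, 1;6-2;0, 2;1-2;4.
periods = [rng.normal(loc=m, scale=0.08, size=40) for m in (0.42, 0.41, 0.40, 0.39)]

f_stat, p_value = stats.f_oneway(*periods)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
```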

A Study on Transformed "Shimcheong-jeon" in The Juvenile Literature - focusing on juvenile literature since the 2000s - (<심청전>의 어린이문학 변용 양상 - 2000년대 이후 창작동화를 중심으로 -)

  • Jin, Eun-jin
    • (The) Research of the performance art and culture
    • /
    • no.36
    • /
    • pp.223-253
    • /
    • 2018
  • The purpose of this study is to examine how the Korean classic novel "Shimcheong-jeon" has been transformed in juvenile literature since the 2000s. Classical novels are temporally remote from the present and differ from modern culture; they are also removed from the lives and thoughts of modern children. It is therefore difficult for modern child readers to easily understand or identify with classical novels. In order for classical novels to have meaning in the present, it is necessary to pay attention to the encounter between classical novels and children's literature. In the case of "Shimcheong-jeon", unlike other classical novels, there are many creative fairy-tale adaptations. There are seven kinds of fairy tales that transform "Shimcheong-jeon". They are diverse in genre, including picture books, fairy tales, and juvenile fiction, and are intended for a variety of ages. These works are told from various perspectives, such as Shimcheong who is full of desire, Shim Hakgyu who is disabled, Ppaengdeog's mother who has maternity and subjectivity, the dragon of the dragon king and Byeogdeog who loves Shimcheong, and Shin Cheong who has a dream. The themes of the works vary, so these works extend our expectations for classical literature. Fairy tales that transform "Shimcheong-jeon" reflect the lives of children and youths; this is important because it can reduce the distance between classical novels and young readers. Classical novels are thus modernized and given new meaning for modern children and youths, while the adaptations reflect the characteristics of the pansori-based novel "Shimcheong-jeon" and preserve the value of the classics. Tears of Paengdeok is a story that explains the origin of the pansori "Shimcheong-ga" and inserts some pansori lyrics, and in the case of Cheong, Cheong, a pansori style is used. Although humor is the greatest feature of pansori, there is little of it in the fairy tales that transform "Shimcheong-jeon"; this is a point to consider, and a direction to pursue, when transforming "Shimcheong-jeon" into a fairy tale.

Report about First Repeated Sectional Measurements of Water Property in the East Sea using Underwater Glider (수중글라이더를 활용한 동해 최초 연속 물성 단면 관측 보고)

  • GYUCHANG LIM;JONGJIN PARK
    • The Sea:JOURNAL OF THE KOREAN SOCIETY OF OCEANOGRAPHY
    • /
    • v.29 no.1
    • /
    • pp.56-76
    • /
    • 2024
  • We made the first and longest successful continuous sectional observation in the East Sea with an underwater glider, over 95 days from September 18 to December 21, 2020, along the 106 Line (129.1°E ~ 131.5°E at 37.9°N) of the regular shipboard measurements by the National Institute of Fishery Science (NIFS), and obtained twelve hydrographic sections with high spatiotemporal resolution. The glider was deployed at 129.1°E on September 18 and conducted an 88-day flight from September 19 to December 15, 2020, yielding the twelve hydrographic sections; it was then recovered at 129.2°E on December 21 after a final 6-day virtual mooring operation. Over the total traveled distance of 2,550 km, the estimated deviation from the predetermined zonal path had an average RMS distance of 262 m. Based on these high-resolution, long-term glider measurements, we conducted a comparative study with the bi-monthly NIFS measurements in terms of spatial and temporal resolution and found distinct features. One is that sub-mesoscale spatial features, such as sub-mesoscale frontal structures and an intensified thermocline, were detected only in the glider measurements, mainly owing to the glider's high spatial resolution. The other is the detection of intra-monthly variations in the weekly time series of temperature and salinity, which were extracted from the glider's continuous sections. Lastly, there were deviations and biases between the measurements from the two platforms. We discussed these deviations in terms of the time scale of variation, the spatial scale of fixed-point observation, and the calibration status of the CTD devices of both platforms.
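
The track-keeping figure quoted above (an average RMS deviation of 262 m from the zonal 106 Line) can be estimated from the glider's position fixes in a straightforward way; the sketch below, assuming NumPy, uses synthetic latitude fixes around 37.9°N and a flat-earth meters-per-degree conversion, so it is an illustration rather than the authors' processing.

```python
# A minimal sketch of estimating RMS cross-track deviation from a predetermined
# zonal path (here the 106 Line at 37.9°N). The GPS fixes are synthetic.
import numpy as np

TARGET_LAT = 37.9                      # latitude of the zonal (east-west) line
M_PER_DEG_LAT = 111_320.0              # approximate meters per degree of latitude

rng = np.random.default_rng(0)
fix_lats = TARGET_LAT + rng.normal(scale=0.002, size=500)   # hypothetical surfacing fixes

cross_track_m = (fix_lats - TARGET_LAT) * M_PER_DEG_LAT     # signed offsets in meters
rms_m = np.sqrt(np.mean(cross_track_m ** 2))
print(f"RMS cross-track deviation: {rms_m:.0f} m")
```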