• Title/Summary/Keyword: moment feature

Search Results: 159

A Contents-based Drug Image Retrieval System Using Shape Classification and Color Information (모양분류와 컬러정보를 이용한 내용기반 약 영상 검색 시스템)

  • Chun, Jun-Chul;Kim, Dong-Sun
    • Journal of Internet Computing and Services / v.12 no.6 / pp.117-128 / 2011
  • In this paper, we present a novel approach to content-based medication image retrieval from a medication image database using shape classification and the color information of the medication. A major problem in developing a content-based drug image retrieval system is that many images are similar in shape and color, which makes it difficult to identify a specific medication from a single feature of the drug image. To resolve this difficulty, we propose a hybrid approach that retrieves a medication image based on both the shape and color features of the medication. In the first phase of the proposed method, we classify the medications by the shape of their images. In the second phase, we identify them by color matching between a query image and the images pre-classified in the first phase. For the shape classification, the shape signature, a unique shape descriptor of the medication, is extracted from the boundary of the medication. Once images are classified by the shape signature, the Hue and Saturation (HS) color model is used to retrieve the database image that most closely matches the query image. The proposed system is designed and developed especially for a specific population, seniors, so that they can browse medication images using the visual information of the medication in a feasible fashion. The experiments show that the proposed automatic image retrieval system is reliable and convenient for identifying medication images.
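The two-phase pipeline this abstract describes (shape filtering first, then HS color matching) can be sketched as follows. This is a minimal, hypothetical illustration: the centroid-distance signature, the threshold, and all function names are assumptions standing in for the paper's exact descriptors, not the authors' implementation.

```python
import numpy as np

def shape_signature(boundary, n_samples=64):
    """Centroid-distance signature: distances from the shape's centroid
    to n_samples boundary points, normalized for scale invariance.
    (A common boundary-based shape descriptor; the paper's exact
    signature may differ.)"""
    boundary = np.asarray(boundary, dtype=float)
    centroid = boundary.mean(axis=0)
    dists = np.linalg.norm(boundary - centroid, axis=1)
    idx = np.linspace(0, len(dists) - 1, n_samples).astype(int)
    sig = dists[idx]
    return sig / (sig.max() + 1e-9)

def hs_histogram(hs_pixels, bins=8):
    """2-D Hue/Saturation histogram (values in [0, 1]), normalized to sum to 1."""
    h, s = hs_pixels[:, 0], hs_pixels[:, 1]
    hist, _, _ = np.histogram2d(h, s, bins=bins, range=[[0, 1], [0, 1]])
    return hist / hist.sum()

def retrieve(query_sig, query_hist, db):
    """Phase 1: keep DB entries whose shape signature is close to the query.
    Phase 2: rank the survivors by HS-histogram intersection."""
    shape_ok = [e for e in db
                if np.abs(e['sig'] - query_sig).mean() < 0.1]
    if not shape_ok:
        return None
    return max(shape_ok,
               key=lambda e: np.minimum(e['hist'], query_hist).sum())
```

Phase 1 prunes the database with the cheap, scale-normalized shape descriptor; only the survivors pay for the color comparison, mirroring the classify-then-match design of the abstract.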

Weaving the realities with video in multi-media theatre centering on Schaubuhne's Hamlet and Lenea de Sombra's Amarillo (멀티미디어 공연에서 비디오를 활용한 리얼리티 구축하기 - 샤우뷔네의 <햄릿>과 리니아 드 솜브라의 <아마릴로>를 중심으로 -)

  • Choi, Young-Joo
    • Journal of Korean Theatre Studies Association / no.53 / pp.167-202 / 2014
  • When video composes the mise-en-scène of a performance, it reflects contemporary image culture, in which the individual as creator participates through the cell phone and the computer, devices that remediate earlier video technology. It is also closely related to the contemporary theatre culture in which the video art of the 1960s and 1970s was woven into performance theatre. Against this cultural background, theatre practitioners regarded media-friendly mise-en-scène as an alternative in a cultural landscape where the linear representational narrative no longer corresponded to the present culture. Nonetheless, it cannot be ignored that video in performance theatre remediates its historical functions: to criticize social reality and to enrich aesthetic or emotional reality. I focus on how video in performance theatre can feature the object through the image by realizing real-time relay, emphasizing the situation within the frame, and strengthening reality by alluding to the object as a gesture. I then explore its two historical functions. First, in its critical function, video recorded the spot, communicated information, and raised the audience's awareness of the object. Second, in its aesthetic function, video in performance theatre could redistribute perception through editing methods such as close-up, slow motion, multiple perspectives, montage and collage, and transformation of the image. Bearing these historical functions in mind, I analyze two productions, Schaubuhne's Hamlet and Lenea de Sombra's Amarillo, which were introduced to Korean audiences during the 2010 Seoul Theatre Olympics. It is known that Ostermeier took real social reality as a text and made the play its context; here, he used video as a vehicle to penetrate social reality through the hero's perspective. It is also noteworthy that Ostermeier understood Hamlet's dilemma as a propensity of today's young generation, who delay action while immersed in image culture. Moreover, his use of video in the piece revitalized its aesthetic function through a hypermedial mode of perception. Amarillo combined documentary theatre methods with installation, physical theatre, and on-the-spot video relay, activating the aesthetic function through intermediality, the interacting co-relationship between media. In this performance, video recorded and pursued the absent presence of the real people who died or were lost in the desert. At the same time, it gave fantastic form to the emotional state of those people at the moment of their death, which would otherwise remain opaque or invisible. In conclusion, I find that video in contemporary performance theatre visualizes the rupture between media and performs their intermediality. It attempts to disturb transparent immediacy in order to awaken the spectator's perception of the theatrical situation, to open its emotional and spiritual aspects, and to recall the realities, as in Schaubuhne's Hamlet and Lenea de Sombra's Amarillo.

A Study on the Conservation of Excavated Features (발굴유구의 보존방법과 적용)

  • An, Jin Hwan
    • Korean Journal of Heritage: History & Science / v.43 no.3 / pp.26-47 / 2010
  • When the term conservation is used with regard to excavated features, it means not only conservation but also restoration. Restoring the features here does not imply restoring their original form, but restoring the form they had at the moment of excavation. That is, the conservation of excavated features includes the concepts of both reparation and restoration. Methods of conserving excavated features can be broadly categorized into on-site conservation and transfer conservation. On-site conservation means conserving excavated features as they were at the excavation site. It can be further divided into soil-covered on-site conservation, in which excavated features are covered with soil to protect them from damage, and exposed on-site conservation, in which the features are conserved as exposed. Transfer conservation operates on the premise that excavated features are transferred to another place. It can be further divided into original-form transfer, transcribing transfer, reproduction transfer, and dismantlement transfer. Original-form transfer refers to moving the original forms of excavated features to another place. Transcribing transfer refers to moving some of the surfaces of excavated features to another place. Reproduction transfer refers to restoring the forms of excavated features in another place after copying them at the excavation site. Dismantlement transfer refers to dismantling the features at the excavation site and restoring them elsewhere in the reverse order of dismantlement. The most fundamental issue in conserving excavated features is the conservation of their original forms. In practice, however, the method of conservation tends to be decided by a variety of conditions, such as social, economic, cultural, and local circumstances.
In order to conserve excavated features more effectively, more detailed and specialized conservation methods should be created. Furthermore, continuing research is needed to find the most effective way of conserving them through exchange with other neighboring academic fields and scientific technology.

Valence Band Photoemission Study of Co/Pd Multilayer (광전자분광법을 이용한 Co/Pd 다층박막의 전자구조연구)

  • Kang, J.-S.;Kim, S.K.;Jeong, J.I.;Hong, J.H.;Lee, Y.P.;Shin, H.J.;Olson, C.G.
    • Journal of the Korean Magnetics Society / v.3 no.1 / pp.48-55 / 1993
  • We report photoemission (PES) studies of the Co/Pd multilayer. The Co 3d PES spectrum of Co/Pd exhibits two interesting features, one near the Fermi energy, $E_{F}$, and another at ~2.5 eV below $E_{F}$. The Co 3d peak near $E_{F}$ of Co/Pd is much narrower than that of bulk Co, consistent with the enhanced Co magnetic moment in Co/Pd compared to that in bulk Co. The Co 3d feature at ~2.5 eV below $E_{F}$ resembles the Pd valence band structure, which suggests substantial hybridization between the Co and Pd sublayers. The Co 3d PES spectrum of Co/Pd is compared with existing band structures obtained from local spin-density functional calculations. Reasonable agreement is found for the bandwidth of the occupied part of the Co 3d band, whereas the narrow Co 3d peak near $E_{F}$ does not seem to be described by the band structure calculations.


A Development of Torsional Analysis Model and Parametric Study for PSC Box Girder Bridge with Corrugated Steel Web (복부 파형강판을 사용한 PSC 복합 교량의 비틀림 해석모델의 제안 및 변수해석)

  • Lee, Han-Koo;Kim, Kwang-Soo
    • KSCE Journal of Civil and Environmental Engineering Research / v.28 no.2A / pp.281-288 / 2008
  • Prestressed Concrete (hereinafter PSC) box girder bridges with corrugated steel webs have been drawing attention as a new type of PSC bridge that fully utilizes the features of concrete and steel. However, previous studies have focused on the shear buckling of the corrugated steel web and on developing the connection between the concrete flange and the steel web. A study of the torsional behavior, and a rational torsional analysis model, are therefore needed for PSC box girders with corrugated steel webs. In this study, a torsional analysis model is developed using Rausch's equation based on the space truss model, an equilibrium equation that considers the softening effect of reinforced concrete elements, and a compatibility equation. The developed model is validated through comparison with experimental results from loading tests of PSC box girders with corrugated steel webs. Parametric studies are also performed to investigate the effects of prestressing force and concrete strength on the torsional behavior. Finally, a modified correction factor for the torsional coefficient of PSC box girders with corrugated steel webs is derived from the parametric study using the proposed analytical model.
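For context, the space-truss analogy behind Rausch's equation treats the closed box section as a thin-walled tube carrying a constant shear flow. In its standard textbook form (not reproduced from the paper, and simplified relative to the authors' softened-truss model), the torque is:

```latex
T = 2 A_o q,
\qquad
q = \frac{A_t f_{yv}}{s}\cot\theta
```

where $A_o$ is the area enclosed by the shear-flow path, $A_t$ the area of one leg of the transverse reinforcement, $s$ its spacing, $f_{yv}$ its yield stress, and $\theta$ the inclination of the diagonal compression struts. The paper's model augments this framework with the softening of the concrete and a compatibility condition.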

Simulation and Post-representation: a study of Algorithmic Art (시뮬라시옹과 포스트-재현 - 알고리즘 아트를 중심으로)

  • Lee, Soojin
    • 기호학연구 (Semiotic Inquiry) / no.56 / pp.45-70 / 2018
  • The postmodern critique of the system of representation, which had persisted since the Renaissance, is based on a critique of the dichotomy that separates subject from object and the environment from the human being. Interactivity, highlighted in a series of works emerging from postmodern trends in the 1960s, carried over into the interactive dimension of digital art in the late 1990s. The key feature of digital art is the possibility of infinite variation reflecting unpredictable changes driven by public participation on the spot. In this process, the importance of computer programs comes to the fore. Instead of using existing programs as they are, more and more artists write and program their own algorithms, or create unique algorithms through collaborations with programmers. We live in an era of paradigm shift in which programming itself must be considered a creative act. Simulation and VR technologies draw attention as techniques for representing the meaning of reality, and simulation technology helps artists create experimental works. Indeed, Baudrillard's concept of simulation defines another reality that has nothing to do with our reality, rather than an image that faithfully represents it. His book Simulacra and Simulation posits a reality entirely different from the traditional concept of reality; his argument is not about right and wrong, and carries no metaphysical meaning. Applying the concept of simulation to algorithmic art, the artist models the complex attributes of reality in a digital system and aims to build and integrate the internal laws that structure and activate a world (specific or individual), that is to say, to simulate a world. If the images of the traditional order correspond to the reproduction of the real world, the synthesized images and simulated space-time of algorithmic art are forms of art that facilitate experience. The moment of seeing and listening to the work of Ian Cheng presented in this article is a moment of personal experience, and perception takes place at that moment; it is not a complete and closed process, but a continuous and changing one. It is this active and situational awareness that is required of the audience for the comprehension of post-representation's forms.

A Deep Learning Based Approach to Recognizing Accompanying Status of Smartphone Users Using Multimodal Data (스마트폰 다종 데이터를 활용한 딥러닝 기반의 사용자 동행 상태 인식)

  • Kim, Kilho;Choi, Sangwoo;Chae, Moon-jung;Park, Heewoong;Lee, Jaehong;Park, Jonghun
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.163-177 / 2019
  • As smartphones become widely used, human activity recognition (HAR) tasks that recognize the personal activities of smartphone users from multimodal data have been actively studied. The research area is expanding from recognizing the simple body movements of an individual user to recognizing low-level and high-level behaviors. However, HAR tasks for recognizing interaction behavior with other people, such as whether the user is accompanying or communicating with someone else, have received less attention so far. Previous research on recognizing interaction behavior has usually depended on audio, Bluetooth, and Wi-Fi sensors, which are vulnerable to privacy issues and require much time to collect enough data. In contrast, physical sensors, including the accelerometer, magnetic field sensor, and gyroscope, are less vulnerable to privacy issues and can collect a large amount of data within a short time. In this paper, we propose a method for detecting accompanying status with a deep learning model using only multimodal physical sensor data from the accelerometer, magnetic field sensor, and gyroscope. The accompanying status is defined as a subset of user interaction behavior: whether the user is accompanied by an acquaintance at close distance and whether the user is actively communicating with that acquaintance. We propose a framework based on convolutional neural networks (CNN) and long short-term memory (LSTM) recurrent networks for classifying accompanying and conversation. First, we introduce a data preprocessing method consisting of time synchronization of the multimodal data from the different physical sensors, data normalization, and sequence data generation. Nearest-neighbor interpolation is applied to synchronize the timestamps of the data collected from the different sensors.
Normalization is performed on each x, y, and z axis of the sensor data, and the sequence data are generated with a sliding window. The sequences are then fed to the CNN, which extracts feature maps representing local dependencies of the original sequence. The CNN consists of three convolutional layers and has no pooling layer, in order to preserve the temporal information of the sequence data. Next, the LSTM recurrent networks receive the feature maps and learn long-term dependencies from them; they consist of two layers, each with 128 cells. Finally, the extracted features are classified by a softmax classifier. The loss function is cross entropy, and the model weights are randomly initialized from a normal distribution with mean 0 and standard deviation 0.1. The model is trained with the adaptive moment estimation (ADAM) optimizer with a mini-batch size of 128. Dropout is applied to the inputs of the LSTM recurrent networks to prevent overfitting. The initial learning rate is 0.001 and decays exponentially by a factor of 0.99 at the end of each training epoch. An Android smartphone application was developed and released to collect data from a total of 18 subjects. On these data, the model classified accompanying and conversation with 98.74% and 98.83% accuracy, respectively. Both the F1 score and the accuracy of the model were higher than those of a majority vote classifier, a support vector machine, and a deep recurrent neural network. In future research, we will focus on more rigorous multimodal sensor data synchronization methods that minimize timestamp differences. In addition, we will further study transfer learning methods that allow models trained on the training data to transfer to evaluation data that follows a different distribution.
It is expected that this will yield a model with robust recognition performance against changes in data not considered during training.
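A minimal NumPy sketch of the preprocessing steps described in this abstract: nearest-neighbor time synchronization, per-axis normalization, and sliding-window sequence generation. The window length, stride, and function names are illustrative assumptions, not the paper's code.

```python
import numpy as np

def sync_nearest(ref_t, sensor_t, sensor_vals):
    """Resample a sensor stream onto reference timestamps by
    nearest-neighbor interpolation (the synchronization step above)."""
    idx = np.abs(sensor_t[None, :] - ref_t[:, None]).argmin(axis=1)
    return sensor_vals[idx]

def normalize(data):
    """Standardize each x/y/z axis (column) to zero mean, unit variance."""
    return (data - data.mean(axis=0)) / (data.std(axis=0) + 1e-9)

def sliding_windows(data, window=128, stride=64):
    """Cut a (T, channels) stream into overlapping fixed-length sequences."""
    return np.stack([data[i:i + window]
                     for i in range(0, len(data) - window + 1, stride)])
```

Each (window, channels) sequence produced this way would then feed the three-layer CNN (no pooling), the two-layer 128-cell LSTM, and the softmax classifier described in the abstract.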

Crystal Structure and Mossbauer Studies of 57Fe Doped TiO2 (57Fe가 치환된 TiO2의 결정학적 및 뫼스바우어 분광학적 연구)

  • Lee, Hi-Min;Shim, In-Bo;Kim, Chul-Sung
    • Journal of the Korean Magnetics Society / v.13 no.6 / pp.237-242 / 2003
  • $Ti_{1-x}{}^{57}Fe_{x}O_{2}$ ($0.0 \leq x \leq 0.07$) compounds were fabricated using the sol-gel method, and their crystal structure and magnetic properties were investigated as a function of the doped $^{57}$Fe concentration. X-ray diffraction patterns showed a pure anatase single phase, without any segregation of Fe into particulates. By varying the $^{57}$Fe concentration, we observed unusual magnetic phenomena in these materials. Doping $^{57}$Fe into the nonmagnetic TiO$_2$ semiconductor induced magnetic properties, but a gradual increase of the $^{57}$Fe concentration rapidly decreased the ferromagnetic properties rather than enhancing them. Obvious ferromagnetic behavior was observed for the samples with $x \leq 0.01$, while paramagnetic behavior was observed for the samples with $x \geq 0.03$. These phenomena were verified by Mossbauer measurements, which separated the ferromagnetic phase (sextet) and the paramagnetic phase (doublet) of samples with different $^{57}$Fe concentrations. Samples with $x \leq 0.01$ show both a sextet and a doublet, whereas samples with $x \geq 0.03$ show only a doublet at room temperature, indicating that the $x \leq 0.01$ samples retain a ferromagnetic phase at room temperature. This result corresponds with the M-H loops mentioned above and reveals an interesting feature: there is a critical limit of $^{57}$Fe concentration, between $x = 0.01$ and $x = 0.03$, for ferromagnetism, and the magnetism of the $x \leq 0.01$ samples is attributable to the paramagnetic phase as well as the ferromagnetic phase.

Corporate Default Prediction Model Using Deep Learning Time Series Algorithm, RNN and LSTM (딥러닝 시계열 알고리즘 적용한 기업부도예측모형 유용성 검증)

  • Cha, Sungjae;Kang, Jungseok
    • Journal of Intelligence and Information Systems / v.24 no.4 / pp.1-32 / 2018
  • Corporate defaults have ripple effects on local and national economies, in addition to affecting stakeholders such as managers, employees, creditors, and investors of bankrupt companies. Before the Asian financial crisis, the Korean government analyzed only SMEs and tried to improve the forecasting power of a single default prediction model rather than developing various corporate default models. As a result, even large corporations, the so-called 'chaebol' enterprises, went bankrupt. Even after that, analysis of past corporate defaults focused on specific variables, and when the government restructured companies immediately after the global financial crisis, it concentrated only on certain main variables such as the debt ratio. A multifaceted study of corporate default prediction models is essential to protect diverse interests and to avoid situations like the 'Lehman Brothers case' of the global financial crisis, in which everything collapses in a single moment. The key variables used in corporate default prediction vary over time: comparing the analyses of Beaver (1967, 1968) and Altman (1968) with Deakin's (1972) study shows that the major factors affecting corporate failure have changed. In Grice's (2001) study, shifts in the importance of predictive variables were likewise found across Zmijewski's (1984) and Ohlson's (1980) models. However, past studies use static models, and most do not consider changes that occur over time. Therefore, in order to construct consistent prediction models, it is necessary to compensate for time-dependent bias by means of a time series algorithm that reflects dynamic change. Motivated by the global financial crisis, which had a significant impact on Korea, this study uses 10 years of annual corporate data, from 2000 to 2009. The data are divided into training, validation, and test sets covering 7, 2, and 1 years, respectively.
To construct a bankruptcy model that remains consistent over time, we first train the deep learning time series models using data from before the financial crisis (2000~2006). Parameter tuning of the existing models and the deep learning time series algorithms is conducted on validation data that includes the financial crisis period (2007~2008). As a result, we obtain models that show patterns similar to those on the training data and excellent prediction power. Each bankruptcy prediction model is then rebuilt by merging the training and validation data (2000~2008) and applying the optimal parameters found during validation. Finally, the corporate default prediction models trained on the nine years of data are evaluated and compared on the test data (2009), demonstrating the usefulness of the corporate default prediction model based on the deep learning time series algorithm. In addition, by adding Lasso regression to the existing variable selection methods (multiple discriminant analysis and the logit model), we show that the deep learning time series model based on the three bundles of variables is useful for robust corporate default prediction. The definition of bankruptcy follows Lee (2015). Independent variables include financial information, such as the financial ratios used in previous studies. Multivariate discriminant analysis, the logit model, and the Lasso regression model are used to select the optimal variable groups. The performance of the multivariate discriminant analysis model proposed by Altman (1968), the logit model proposed by Ohlson (1980), non-time-series machine learning algorithms, and the deep learning time series algorithms is compared. Corporate data pose the limitations of nonlinear variables, multi-collinearity among variables, and lack of data.
The logit model addresses the nonlinearity of variables, the Lasso regression model mitigates multi-collinearity, and the deep learning time series algorithm, with its data generation method, compensates for the lack of data. Big Data technology, a leading technology of the future, is moving from simple human analysis to automated AI analysis and, ultimately, toward intertwined AI applications. Although research on corporate default prediction models using time series algorithms is still in its early stages, the deep learning algorithm is much faster than regression analysis for corporate default prediction modeling and is more effective in prediction power. Through the Fourth Industrial Revolution, the current government and other governments overseas are working hard to integrate such systems into the everyday life of their nations and societies, yet research on deep learning time series methods for the financial industry remains insufficient. This is an initial study of deep learning time series analysis of corporate defaults, and it is hoped that it will serve as comparative material for non-specialists who begin studies combining financial data with deep learning time series algorithms.
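The chronological train/validation/test design described in this abstract can be sketched as follows; the function name and the index-mask representation are illustrative assumptions, not the authors' code.

```python
import numpy as np

def chronological_split(years):
    """Index masks for the study's design: train on 2000-2006, tune on
    the crisis years 2007-2008, and evaluate on 2009."""
    years = np.asarray(years)
    train = np.flatnonzero(years <= 2006)
    val = np.flatnonzero((years >= 2007) & (years <= 2008))
    test = np.flatnonzero(years == 2009)
    # After hyperparameter tuning, the model is refit on train + val
    # (2000-2008) before the final 2009 evaluation.
    refit = np.concatenate([train, val])
    return train, val, test, refit
```

Splitting by calendar year, rather than at random, is what lets the study measure whether a model tuned through the crisis period generalizes to the post-crisis year.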