• Title/Summary/Keyword: Feature space

Search results: 1,356

Integrating Discrete Wavelet Transform and Neural Networks for Prostate Cancer Detection Using Proteomic Data

  • Hwang, Grace J.;Huang, Chuan-Ching;Chen, Ta Jen;Yue, Jack C.;Ivan Chang, Yuan-Chin;Adam, Bao-Ling
    • Proceedings of the Korean Society for Bioinformatics Conference / 2005.09a / pp.319-324 / 2005
  • An integrated approach for prostate cancer detection using proteomic data is presented. Because of the high dimensionality of proteomic data, the discrete wavelet transform (DWT) is used in the first stage for data reduction and noise removal. After the DWT, the dimensionality is reduced from 43,556 to 1,599, so each proteomic sample can be represented by 1,599 wavelet coefficients. In the second stage, a voting method selects a common set of wavelet coefficients across all samples, producing a 987-dimensional subspace of wavelet coefficients. In the third stage, an autoassociator reduces the dimensionality from 987 to 400. Finally, an artificial neural network (ANN) is applied to the 400-dimensional space for prostate cancer detection. The integrated approach is examined on 9 categories of 2-class experiments as well as on 3- and 4-class experiments. All experiments were run with 10 repetitions of ten-fold cross-validation (i.e., 10 partitions, 100 runs in total). For the 9 categories of 2-class experiments, the average testing accuracies lie between 81% and 96%, and the average testing accuracies of the 3- and 4-way classifications are 85% and 84%, respectively. The integrated approach achieves promising results for the early detection and diagnosis of prostate cancer.
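
The staged pipeline above (DWT reduction, coefficient voting, then a neural classifier) can be sketched roughly as follows; the wavelet family, decomposition level, voting threshold, and network size are illustrative assumptions rather than the authors' settings, and a plain MLP stands in for the autoassociator-plus-ANN stages.

```python
# Rough sketch of a DWT -> coefficient-voting -> neural-network pipeline.
# Synthetic data; all hyperparameters are assumptions, not the paper's values.
import numpy as np
import pywt
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X_raw = rng.normal(size=(100, 2048))       # stand-in for 43,556-point spectra
y = rng.integers(0, 2, size=100)           # cancer / non-cancer labels

# Stage 1: per-sample DWT for denoising and dimensionality reduction.
def dwt_features(x, wavelet="db4", level=4):
    return np.concatenate(pywt.wavedec(x, wavelet, level=level))

X_dwt = np.array([dwt_features(x) for x in X_raw])

# Stage 2: voting -- keep coefficients that are "large" in enough samples.
votes = (np.abs(X_dwt) > np.abs(X_dwt).mean()).sum(axis=0)
X_sel = X_dwt[:, votes >= 0.4 * X_dwt.shape[0]]

# Stages 3-4: an MLP classifier stands in for the autoassociator + ANN,
# evaluated with ten-fold cross-validation as in the paper.
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=300, random_state=0)
print(cross_val_score(clf, X_sel, y, cv=10).mean())
```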

Application of Multi-agent Reinforcement Learning to CELSS Material Circulation Control

  • Hirosaki, Tomofumi;Yamauchi, Nao;Yoshida, Hiroaki;Ishikawa, Yoshio;Miyajima, Hiroyuki
    • Proceedings of the Korea Intelligent Information System Society Conference / 2001.01a / pp.145-150 / 2001
  • A Controlled Ecological Life Support System (CELSS) is essential for humans to live for long periods in a closed space such as a lunar or Mars base. Such a system may be extremely complex, with many facilities circulating multiple substances, so controlling the whole CELSS is a very difficult task. By regarding the facilities constituting the CELSS as agents, and their status and actions as information, the whole CELSS can be treated as a multi-agent system (MAS). Treating a CELSS as a MAS brings three advantages: first, the MAS does not need a central computer; second, the extensibility of the CELSS increases; third, its fault tolerance improves. However, it is difficult to hand-craft the cooperation protocol among the agents of a MAS. Therefore, this study proposes applying reinforcement learning (RL), because RL enables an agent to acquire a control rule automatically. To show that the MAS and RL are effective, we implemented the system in Java, which readily provides the distributed environment characteristic of agents. In this paper, we report simulation results for material circulation control of the CELSS using the MAS and RL.
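
As a rough illustration of the control idea (not the authors' Java implementation), the sketch below has two facility agents learn, by tabular Q-learning with a shared reward, to keep a common substance level at a set point; the states, actions, and rewards are invented for the example.

```python
# Minimal multi-agent Q-learning sketch for circulating one shared substance.
import random

ACTIONS = [-1, 0, 1]                    # consume, idle, produce one unit
LEVELS, TARGET = range(0, 11), 5        # admissible stock levels and set point
alpha, gamma, eps = 0.1, 0.9, 0.1

agents = [{(s, a): 0.0 for s in LEVELS for a in ACTIONS} for _ in range(2)]
level = 5

for _ in range(20000):
    chosen = []
    for q in agents:                    # each facility picks its own action
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: q[(level, act)])
        chosen.append(a)
    new_level = min(max(level + sum(chosen), 0), 10)
    reward = -abs(new_level - TARGET)   # shared, global reward signal
    for q, a in zip(agents, chosen):    # independent Q-learning updates
        best_next = max(q[(new_level, b)] for b in ACTIONS)
        q[(level, a)] += alpha * (reward + gamma * best_next - q[(level, a)])
    level = new_level

print("substance level after learning:", level)
```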

Controlling robot by image-based visual servoing with stereo cameras

  • Fan, Jun-Min;Won, Sang-Chul
    • Proceedings of the Korea Society of Information Technology Applications Conference / 2005.11a / pp.229-232 / 2005
  • In this paper, an image-based "approach-align-grasp" visual servo control design is proposed for the object grasping problem, based on a stand-alone binocular system. The basic idea is to treat the vision system as a task-specific sensor embedded in the servo control loop; automatic grasping then follows the classical approach of splitting the task into preparation and execution stages. During the execution stage, once the image-based control model is established, the control task can be performed automatically. The proposed visual servoing scheme ensures convergence of the image features to the desired trajectories through the image Jacobian matrix, as proved by Lyapunov stability theory. We also stress the importance of projectively invariant object/gripper alignment: the alignment between two solids in 3-D projective space can be represented in a view-invariant way and, more precisely, can be mapped directly into an image set point without any knowledge of the camera parameters. The main feature of this method is that the accuracy of the task is not affected by discrepancies between the Euclidean setups at the preparation and execution stages. The set point is then computed from the projective alignment, and the robot gripper moves to the desired position under the image-based control law; a constant Jacobian is adopted online. The method integrates vision, robotics, and automatic control, overcomes the disadvantages caused by discrepancies between the different Euclidean setups, and proposes a control law for the stand-alone binocular case. Simulation experiments show that this image-based approach is effective for precise alignment between the robot end-effector and the object.
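
The control law underlying this kind of image-based servoing is the classical one, v = -λ L⁺ (s - s*), applied here with a constant image Jacobian L. The sketch below simulates it on made-up feature coordinates and Jacobian entries; none of the numbers come from the paper.

```python
# Image-based visual servoing with a constant (approximate) image Jacobian.
import numpy as np

L = np.array([[-1.0,  0.0,  0.2],      # illustrative 4x3 image Jacobian
              [ 0.0, -1.0, -0.1],
              [-1.0,  0.1,  0.3],
              [ 0.1, -1.0,  0.2]])
s_star = np.array([0.0, 0.0, 0.1, 0.1])    # desired image features (set point)
s = np.array([0.4, -0.3, 0.5, -0.2])       # current image features
lam, dt = 0.5, 0.05

for _ in range(200):
    error = s - s_star
    v = -lam * np.linalg.pinv(L) @ error   # commanded camera/gripper velocity
    s = s + L @ v * dt                     # first-order model of feature motion

print("final feature error norm:", np.linalg.norm(s - s_star))
```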

A Multiresolution Model Generation Method Preserving View Directional Feature (시점과의 방향관계를 고려한 다단계 모델 생성 기법)

  • Kim, HyungSeok;Jung, SoonKi;Wohn, KwangYun
    • Journal of the Korea Computer Graphics Society / v.4 no.1 / pp.1-10 / 1998
  • The idea of level-of-detail rendering based on multiresolution models is gaining popularity as a natural means of handling the complexity of real-time rendering of virtual environments. To generate an effective multiresolution model, prominent visual features must be captured while simplifying the original complex model. In this paper, we incorporate view-dependent features, such as silhouettes and back-facing regions, into the generation of the multiresolution model. To capture the view-direction parameter, we propose a multiresolution view sphere that maps the directional relationship between the object surface and the viewpoint; through it, coherence in the directional space is mapped to spatial coherence on the view sphere. The view sphere is generated in a multiresolution fashion to simplify the object, and it is organized as a quadtree for efficient access. We also devise a mechanism for real-time simplification using the proposed view sphere; with it, a simplified model can be regenerated in real time in time on the order of the number of rendered vertices.
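
The view-dependent features the paper folds into simplification can be illustrated with the standard backface and silhouette tests; the tiny mesh and viewpoint below are assumptions for the example, and the multiresolution view sphere and its quadtree are not shown.

```python
# Backface classification and silhouette-edge detection for one viewpoint.
from collections import defaultdict
import numpy as np

vertices = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
faces = [(0, 2, 1), (0, 1, 3), (0, 3, 2), (1, 2, 3)]   # tetrahedron, outward winding
eye = np.array([2.0, 2.0, 2.0])

def front_facing(face):
    a, b, c = (vertices[i] for i in face)
    normal = np.cross(b - a, c - a)
    return float(np.dot(normal, eye - a)) > 0.0

front = {f: front_facing(f) for f in faces}

edge_faces = defaultdict(list)
for f in faces:
    for e in [(f[0], f[1]), (f[1], f[2]), (f[2], f[0])]:
        edge_faces[tuple(sorted(e))].append(f)

# A silhouette edge separates a front-facing face from a back-facing one.
silhouette = [e for e, fs in edge_faces.items()
              if len(fs) == 2 and front[fs[0]] != front[fs[1]]]
print("back-facing faces:", [f for f in faces if not front[f]])
print("silhouette edges:", silhouette)
```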

Feature-Based Light and Shadow Estimation for Video Compositing and Editing (동영상 합성 및 편집을 위한 특징점 기반 조명 및 그림자 추정)

  • Hwang, Gyu-Hyun;Park, Sang-Hun
    • Journal of the Korea Computer Graphics Society / v.18 no.1 / pp.1-9 / 2012
  • Video-based modeling and rendering, developed to produce photorealistic video content, has been an important research topic in computer graphics and computer vision. To combine original input video clips smoothly with 3D graphics models, geometric information about the light sources and cameras used to capture the real-world scene is essential. In this paper, we present a simple technique to estimate the position and orientation of an optimal light source from the topology of objects and the silhouettes of the shadows that appear in the original video clips. The technique can render the inserted models under the estimated light sources and generate well-matched shadows. Shadows are an important visual cue that indicates the relative location of objects in 3D space, so the proposed real-time shadow generation and rendering algorithms enhance realism in the final composited videos.
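
The geometric core of such an estimate can be pictured as follows: if an object point P casts a shadow at point S on a ground plane, the light direction is parallel to S - P, so a directional light can be recovered by averaging over several object/shadow feature pairs. The points below are synthetic assumptions, not data from the paper.

```python
# Estimating a directional light from object points and their shadow points.
import numpy as np

true_dir = np.array([1.0, 0.5, -1.0])
true_dir /= np.linalg.norm(true_dir)

# Object feature points above the ground plane z = 0.
P = np.array([[0.0, 0.0, 1.0], [0.5, 0.2, 1.5], [1.0, -0.3, 0.8]])
# Their shadows: intersection of the light ray through each P with z = 0.
S = P + true_dir * (-P[:, 2:3] / true_dir[2])

dirs = S - P
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
estimate = dirs.mean(axis=0)
estimate /= np.linalg.norm(estimate)
print("estimated light direction:", estimate)   # matches true_dir
```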

Similar Movie Contents Retrieval Using Peak Features from Audio (오디오의 Peak 특징을 이용한 동일 영화 콘텐츠 검색)

  • Chung, Myoung-Bum;Sung, Bo-Kyung;Ko, Il-Ju
    • Journal of Korea Multimedia Society / v.12 no.11 / pp.1572-1580 / 2009
  • Combing through entire video files to recognize and retrieve matching movies requires much time and memory. Instead, most current movie-matching methods analyze only part of each movie's video-image information. These methods, however, share a critical problem: videos that match but have merely been altered in resolution or converted with a different codec are erroneously recognized as different. This paper proposes an audio-based search algorithm for identifying similar movies. The proposed method builds and searches a database of each movie's spectral-peak information, which remains relatively stable under changes in bit rate, codec, or sample rate. The method showed a 92.1% search success rate on a set of 1,000 video files whose audio bit rate had been altered or which had been deliberately re-encoded with a different codec.
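
A minimal version of a spectral-peak fingerprint can be sketched as follows: take the dominant frequency bin of each short audio frame and compare two clips by the fraction of frames whose peak bins agree. The frame length, hop size, and matching rule are assumptions rather than the paper's exact parameters.

```python
# Peak-based audio fingerprinting and a simple frame-agreement similarity score.
import numpy as np

def peak_fingerprint(signal, frame=1024, hop=512):
    peaks = []
    for start in range(0, len(signal) - frame, hop):
        spectrum = np.abs(np.fft.rfft(signal[start:start + frame]))
        peaks.append(int(np.argmax(spectrum)))      # dominant bin per frame
    return np.array(peaks)

def similarity(fp_a, fp_b):
    n = min(len(fp_a), len(fp_b))
    return float(np.mean(fp_a[:n] == fp_b[:n]))

rng = np.random.default_rng(1)
t = np.arange(44100 * 2) / 44100.0
clip = np.sin(2 * np.pi * 440 * t) + 0.1 * rng.normal(size=t.size)
re_encoded = clip + 0.05 * rng.normal(size=t.size)  # stand-in for a codec change

print(similarity(peak_fingerprint(clip), peak_fingerprint(re_encoded)))
```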

Distance Measurement of the Multi Moving Objects using Parallel Stereo Camera in the Video Monitoring System (영상감시 시스템에서 평행식 스테레오 카메라를 이용한 다중 이동물체의 거리측정)

  • 김수인;이재수;손영우
    • Journal of the Korean Institute of Illuminating and Electrical Installation Engineers / v.18 no.1 / pp.137-145 / 2004
  • In this paper, a new algorithm for segmenting multiple moving objects in 3D space and a method for measuring the distance from the camera to each moving object using a stereo video monitoring system are proposed. Left and right input images are obtained from the stereo video monitoring system, and the regions of the multiple moving objects are segmented using adaptive thresholding and a pixel recursive algorithm (PRA). Each object is isolated with a window mask, and the coordinates and stereo disparity of each moving object are obtained from the window masks. The distance to each moving object can then be calculated from this disparity, the parameters of the stereo vision system, and trigonometric relations. Experimental results show that the distance measurement error stays within 7.28%; therefore, the proposed algorithm can be applied in practice to stereo security systems, autonomous mobile robot systems, and stereo remote control systems.
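
For a parallel stereo rig, the distance computation reduces to the standard relation Z = f·B / d, where f is the focal length in pixels, B the baseline, and d the disparity of the matched object between the left and right images. The numbers below are illustrative assumptions.

```python
# Distance from disparity for a parallel stereo camera pair.
def stereo_distance(focal_px, baseline_m, disparity_px):
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite distance")
    return focal_px * baseline_m / disparity_px

# One moving object: centroid at x = 412 px (left) and x = 380 px (right).
disparity = 412 - 380
print(stereo_distance(focal_px=700.0, baseline_m=0.12, disparity_px=disparity))
# -> 2.625 metres from the camera rig
```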

FTFM: An Object Linkage Model for Virtual Reality (가상현실을 위한 객체 연결 모델)

  • Ju, U-Seok;Choe, Seong-Un;Park, Gyeong-Hui;Lee, Hui-Seung
    • The Transactions of the Korea Information Processing Society / v.3 no.1 / pp.95-106 / 1996
  • The most fundamental difference between general three-dimensional computer graphics technology and virtual reality technology lies in the degree of perceived realism, so virtual reality relies heavily on tools such as data gloves and 3D auditory systems to enhance human perception and recognition. Although these tools serve that purpose, a more essential ingredient is still missing. This paper provides further realism by modeling active interactions between the objects inside a scene. For this purpose, the paper proposes and implements a field model in which the virtual reality space is treated as a physical field defined by the characteristic stimulus and sense radii of each object. In the field model, cause and effect, an essential feature of realism, can be interpreted simply as an energy exchange between objects; consequently, varying the radius information together with the behavioral logic alone is enough to build a virtual environment in which each object reacts to other objects actively and controllably.
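
A toy version of such a field model might look like the sketch below: each object carries a stimulus radius and a sense radius, and an interaction (the "energy exchange") fires when one object's stimulus field reaches another's sense field. The class and attribute names are invented for illustration.

```python
# Stimulus/sense field interaction between two virtual objects.
from dataclasses import dataclass
import math

@dataclass
class FieldObject:
    name: str
    x: float
    y: float
    stimulus_radius: float
    sense_radius: float

def interacts(source, target):
    dist = math.hypot(source.x - target.x, source.y - target.y)
    return dist <= source.stimulus_radius + target.sense_radius

lamp = FieldObject("lamp", 0.0, 0.0, stimulus_radius=3.0, sense_radius=0.5)
moth = FieldObject("moth", 2.0, 2.0, stimulus_radius=0.1, sense_radius=1.0)

if interacts(lamp, moth):
    print("moth reacts to lamp")   # the object's behavioural logic would run here
```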

3D Reconstruction and Self-calibration based on Binocular Stereo Vision (스테레오 영상을 이용한 자기보정 및 3차원 형상 구현)

  • Hou, Rongrong;Jeong, Kyung-Seok
    • Journal of the Korea Academia-Industrial cooperation Society / v.13 no.9 / pp.3856-3863 / 2012
  • A 3D reconstruction technique from stereo images that requires minimal intervention from the user has been developed. The reconstruction problem consists of three steps, each estimating a specific geometry group. The first step estimates the epipolar geometry between the stereo image pair, which includes feature matching in both images. The second estimates the affine geometry, a process of finding a special plane in projective space by means of vanishing points. The third step, which includes camera self-calibration, recovers a metric geometry from which a 3D model of the scene can be obtained. The major advantage of this method is that the stereo images do not need to be calibrated beforehand. The camera calibration and reconstruction results show that a 3D model can be obtained directly from features in the images.
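
The first of the three steps, estimating the epipolar geometry from matched features, can be illustrated with the linear eight-point algorithm on synthetic correspondences; the camera intrinsics, baseline, and points below are assumptions, and the affine and metric upgrade steps are not shown.

```python
# Fundamental-matrix estimation from feature matches (eight-point algorithm).
import numpy as np

rng = np.random.default_rng(0)
X = np.c_[rng.uniform(-1, 1, (20, 2)), rng.uniform(4, 6, 20), np.ones(20)]  # 3-D points

K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], float)   # intrinsics
P1 = K @ np.c_[np.eye(3), np.zeros(3)]                           # left camera
P2 = K @ np.c_[np.eye(3), np.array([-0.2, 0.0, 0.0])]            # right camera

def project(P, X):
    x = (P @ X.T).T
    return x[:, :2] / x[:, 2:]

x1, x2 = project(P1, X), project(P2, X)          # matched image features

# Each correspondence contributes one row of A in A f = 0.
A = np.column_stack([x2[:, 0] * x1[:, 0], x2[:, 0] * x1[:, 1], x2[:, 0],
                     x2[:, 1] * x1[:, 0], x2[:, 1] * x1[:, 1], x2[:, 1],
                     x1[:, 0], x1[:, 1], np.ones(len(x1))])
F = np.linalg.svd(A)[2][-1].reshape(3, 3)
U, S, Vt = np.linalg.svd(F)
F = U @ np.diag([S[0], S[1], 0.0]) @ Vt          # enforce the rank-2 constraint

h = lambda pts: np.c_[pts, np.ones(len(pts))]
residual = np.abs(np.einsum("ij,jk,ik->i", h(x2), F, h(x1))).max()
print("max epipolar residual:", residual)        # should be near zero
```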

Behavior Recognition Algorithm Using Skeleton Vector Information and RNN Learning (스켈레톤 벡터 정보와 RNN 학습을 이용한 행동인식 알고리즘)

  • Kim, Mi-Kyung;Cha, Eui-Young
    • Journal of Broadcast Engineering / v.23 no.5 / pp.598-605 / 2018
  • Behavior recognition is a technology that recognizes human behavior from data and can be used in applications such as detecting risky behavior through video surveillance systems. Conventional behavior recognition algorithms rely on 2D camera images, multi-modal sensors, multi-view setups, or 3D equipment. With two-dimensional data alone, the recognition rate for behaviors in three-dimensional space was low, while the other approaches suffer from complicated equipment configurations and expensive additional hardware. In this paper, we propose a method for recognizing human behavior using only CCTV (RGB and depth) images, without additional equipment. First, a skeleton extraction algorithm is applied to extract the joint and body-part points. These points are transformed into vectors, including displacement vectors and relational vectors, and the resulting continuous vector sequences are learned with an RNN model. Applying the trained model to various data sets and evaluating the recognition accuracy shows that performance similar to that of existing algorithms based on 3D information can be achieved with 2D information alone.
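
A rough sketch of this kind of pipeline is given below: per-frame joint coordinates are converted into displacement vectors (frame-to-frame motion) and relational vectors (joints relative to a reference joint), and the resulting sequence is classified with a recurrent network. The tensor sizes, the reference joint, and the model are assumptions, not the authors' exact design.

```python
# Skeleton-vector features fed to a simple RNN classifier (PyTorch).
import torch
import torch.nn as nn

frames, joints, classes = 30, 15, 5
skeleton = torch.randn(8, frames, joints, 2)          # batch of 2-D joint tracks

displacement = skeleton[:, 1:] - skeleton[:, :-1]      # motion between frames
relational = skeleton[:, 1:] - skeleton[:, 1:, :1]     # joints relative to joint 0
features = torch.cat([displacement, relational], dim=-1).flatten(2)

class BehaviorRNN(nn.Module):
    def __init__(self, in_dim, hidden=64, out_dim=classes):
        super().__init__()
        self.rnn = nn.RNN(in_dim, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, out_dim)

    def forward(self, x):
        _, h = self.rnn(x)             # final hidden state summarises the sequence
        return self.fc(h[-1])

model = BehaviorRNN(features.shape[-1])
print(model(features).shape)           # (batch, classes) behaviour scores
```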