• Title/Summary/Keyword: Visual Features


Application Of Information Technologies In Network Mass Communication Media

  • Ulianova, Kateryna;Kovalova, Tetiana;Mostipan, Tetiana;Lysyniuk, Maryna;Parfeniuk, Ihor
    • International Journal of Computer Science & Network Security
    • /
    • v.21 no.12
    • /
    • pp.344-348
    • /
    • 2021
  • The article examines one of the most important means of visualizing mass information on the Internet: information graphics, understood in the broadest sense as a visual technology for presenting mass information. The main objectives of the article are to determine the genre-typological features of infographics and their basic technological principles, and to identify how information graphics are created and used in modern network media. Notable benefits of online infographic editors include savings in resources and time: they allow a user with basic PC skills to create standardized infographics from their own data. In addition, the use of online services develops visual thinking, gives the user a sense of quality criteria and current trends in infographics, and provides initial experience in the visual presentation of data.

Features of Attention Shown at Continuous Observation of Department-Store Space (백화점 공간의 연속 주시에 나타난 주의집중 특성)

  • Choi, Gae-Young
    • Korean Institute of Interior Design Journal
    • /
    • v.24 no.6
    • /
    • pp.128-136
    • /
    • 2015
  • This research, planned to understand the features of continuous observation of space, applies a procedure for acquiring continuous visual information as the act of watching unfolds over time and analyzes the spatial characteristics by scene and by time, so that the features of attention shown while acquiring visual information from continuous scenes can be estimated. For the analysis of continuous observation, the premise was set that features of observation and perception vary with gender, and women shopping in department stores were selected as the research subjects. The observation features found during continuous observation of selling spaces in department stores were examined with two analysis methods in order to compare their differences and characteristics. The findings are as follows. First, the area with predominant observation was found to be 87.1% in both methods, and the analysis of observation features by "Analysis I" proved useful for inter-sectional comparison of continuous images. Second, when predominant sections were extracted, the ceiling and the structures forming the background rarely attracted any eyes; depending on the analysis method, there was a gap of 14.3%~25.0% between observed sections. Third, when the hall is curved, the eyes were found to expand from side to side and up and down. Reviewing the observation numbers of predominant sections makes it possible to decide whether this should be regarded as (1) instability or (2) expanding search, and when the images enlarge from distant view to close-range view, the weakening vanishing point increases the expanded search of the surroundings. Accordingly, the characteristics of the images affect the observation features when a space is continuously observed, and the choice of analysis method can also cause large differences in the results of analyzing observation features. (A minimal sketch of section-level fixation analysis follows below.)
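
As a rough, hypothetical illustration of the section-level observation analysis described above (the study's actual data format, section labels, and thresholds are not reproduced here), a minimal sketch in Python:

```python
# Hypothetical sketch: share of fixations per scene section, and sections
# flagged as "predominant" when their share exceeds a chosen threshold.
# Section labels, counts, and the threshold are invented for illustration.
from collections import Counter

# one section label per recorded fixation (hypothetical eye-tracking data)
fixations = ["display", "aisle", "display", "signage", "display", "ceiling",
             "aisle", "display", "signage", "display"]

counts = Counter(fixations)
total = sum(counts.values())
shares = {section: n / total for section, n in counts.items()}

THRESHOLD = 0.20  # hypothetical cut-off for a "predominant" section
predominant = {s: round(p, 2) for s, p in shares.items() if p >= THRESHOLD}

print("fixation shares:", {s: round(p, 2) for s, p in shares.items()})
print("predominant sections:", predominant)
```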

VOQL : A Visual Object Query Language (VOQL : 시각적 객체 질의어)

  • Kim, Jeong-Hee;Cho, Wan-Sup;Lee, Suk-Kyoon;Whang, Kyung-Young
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.38 no.5
    • /
    • pp.1-15
    • /
    • 2001
  • Expressing complex query conditions in a concise and intuitive way has been a challenge in the design of visual object-oriented query languages. We propose a visual query language called VOQL (Visual Object oriented Query Language) for object-oriented databases. By employing the visual notations of graphs and Venn diagrams, the database schema and advanced features of object-oriented queries, such as multi-valued path expressions and quantifiers, can be represented in a simple way. Compared with previous visual object-oriented query languages, VOQL offers a simple and intuitive syntax, well-defined semantics, and excellent expressive power for object-oriented queries. (A textual analogue of such a query is sketched below.)

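VOQL itself is a diagram-based language, so the following is only a textual analogue, over a made-up schema, of the kind of condition it expresses visually: a multi-valued path expression with an existential quantifier. Nothing here is VOQL syntax.

```python
# Textual analogue (not VOQL syntax) of an object-oriented query with a
# multi-valued path expression and an existential quantifier, the kind of
# condition VOQL draws as a graph/Venn diagram. Schema and data are made up.
from dataclasses import dataclass, field

@dataclass
class Course:
    title: str
    credits: int

@dataclass
class Student:
    name: str
    courses: list = field(default_factory=list)  # multi-valued attribute

students = [
    Student("Ann", [Course("Databases", 3), Course("Graphics", 2)]),
    Student("Bob", [Course("Drawing", 2)]),
]

# "Find students for whom SOME course reached via the path student.courses
#  has credits >= 3" -- an existentially quantified path condition.
result = [s.name for s in students if any(c.credits >= 3 for c in s.courses)]
print(result)  # -> ['Ann']; Bob has no course with credits >= 3
```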

A New Covert Visual Attention System by Object-based Spatiotemporal Cues and Their Dynamic Fusioned Saliency Map (객체기반의 시공간 단서와 이들의 동적결합 된돌출맵에 의한 상향식 인공시각주의 시스템)

  • Cheoi, Kyungjoo
    • Journal of Korea Multimedia Society
    • /
    • v.18 no.4
    • /
    • pp.460-472
    • /
    • 2015
  • Most previous visual attention systems find attention regions based on a saliency map combined from multiple extracted features; these systems differ in their methods of feature extraction and combination. This paper presents a new system that improves the extraction of color and motion features and the weighting of spatial and temporal features. The proposed system dynamically extracts the single color with the strongest response between two opponent colors, and detects moving objects rather than moving pixels. To combine the spatial and temporal features, it sets the weights dynamically according to each feature's relative activity. Comparative results show that the proposed feature extraction and integration method improves the detection rate of attention regions. (A numeric sketch of such activity-weighted fusion follows below.)
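
A minimal numeric sketch of activity-weighted fusion of a spatial and a temporal feature map, in the spirit of the abstract above; the maps are random stand-ins and the activity measure (map variance) is an assumption, not the paper's exact formulation.

```python
# Hypothetical sketch: fuse a spatial (color) and a temporal (motion) feature
# map into one saliency map, weighting each map by its relative activity.
# Map variance is used as a proxy for "activity"; the paper's actual measure
# and feature maps are not reproduced here.
import numpy as np

rng = np.random.default_rng(0)
spatial = rng.random((64, 64))          # stand-in for a color-opponency map
temporal = rng.random((64, 64)) * 0.3   # stand-in for an object-motion map

def activity(feature_map: np.ndarray) -> float:
    """Relative 'activity' of a feature map (variance used as a proxy)."""
    return float(feature_map.var())

w_s, w_t = activity(spatial), activity(temporal)
total = w_s + w_t
w_s, w_t = w_s / total, w_t / total     # dynamic, data-dependent weights

saliency = w_s * spatial + w_t * temporal
peak = np.unravel_index(saliency.argmax(), saliency.shape)
print(f"weights: spatial={w_s:.2f}, temporal={w_t:.2f}, peak at {peak}")
```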

A new approach for content-based video retrieval

  • Kim, Nac-Woo;Lee, Byung-Tak;Koh, Jai-Sang;Song, Ho-Young
    • International Journal of Contents
    • /
    • v.4 no.2
    • /
    • pp.24-28
    • /
    • 2008
  • In this paper, we propose a new approach to content-based video retrieval using non-parametric motion classification in a shot-based video indexing structure. The proposed system supports real-time video retrieval through spatio-temporal feature comparison, measuring the similarity between visual features and between motion features after extracting a representative frame and non-parametric motion information from shot-based video clips segmented by a scene-change detection method. The non-parametric motion features are extracted, after normalized motion vectors are created from an MPEG-compressed stream, by discretizing each normalized motion vector into angle bins and considering the mean, variance, and direction of the motion vectors in these bins. To obtain the visual feature of the representative frame, we use an edge-based spatial descriptor. Experimental results show that our approach is superior to conventional methods in video indexing and retrieval performance. (A sketch of the angle-bin motion descriptor follows below.)
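
A minimal sketch of an angle-bin motion descriptor of the kind described above: motion vectors are discretized into angle bins and per-bin statistics form the feature. The bin count and the statistics kept (count, mean magnitude, variance) are assumptions; the paper's exact descriptor and similarity measure may differ.

```python
# Hypothetical sketch of a non-parametric motion descriptor: normalized motion
# vectors are discretized into angle bins; per-bin count, mean magnitude, and
# variance form the feature vector. Two shots are then compared by Euclidean
# distance between their descriptors.
import numpy as np

def motion_descriptor(vectors: np.ndarray, n_bins: int = 8) -> np.ndarray:
    """vectors: (N, 2) array of (normalized) motion vectors from one shot."""
    mags = np.linalg.norm(vectors, axis=1)
    angles = np.arctan2(vectors[:, 1], vectors[:, 0])            # [-pi, pi]
    bins = ((angles + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins

    feat = []
    for b in range(n_bins):
        m = mags[bins == b]
        feat += [m.size, m.mean() if m.size else 0.0, m.var() if m.size else 0.0]
    return np.asarray(feat, dtype=float)

rng = np.random.default_rng(1)
shot_a = motion_descriptor(rng.normal(size=(200, 2)))           # hypothetical shot
shot_b = motion_descriptor(rng.normal(loc=0.5, size=(200, 2)))  # another shot

print("motion-feature distance:", round(float(np.linalg.norm(shot_a - shot_b)), 3))
```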

An Intelligent Visual Servoing Method using Vanishing Point Features

  • Lee, Joon-Soo;Suh, Il-Hong
    • Journal of Electrical Engineering and Information Science
    • /
    • v.2 no.6
    • /
    • pp.177-182
    • /
    • 1997
  • A visual servoing method is proposed for a robot with a camera in hand. Specifically, vanishing point features are introduced by employing a perspective-projection viewing model to calculate the relative roll, pitch, and yaw angles between the object and the camera. To compensate for the dynamic characteristics of the robot, desired feature trajectories for learning visually guided line-of-sight robot motion are obtained by measuring features with the hand-mounted camera, not over the entire workspace but along a single linear path on which the robot moves under a commercially provided linear-motion function. Control actions of the camera are then approximated by fuzzy-neural networks to follow the desired feature trajectories. To show the validity of the proposed algorithm, experimental results are presented using a four-axis SCARA robot with a B/W CCD camera. (The classical vanishing-point construction is sketched below.)

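For readers unfamiliar with vanishing-point features, the sketch below shows the classical construction of a vanishing point as the intersection of the images of parallel scene lines, using homogeneous coordinates. It is a generic illustration with hypothetical pixel coordinates, not the paper's feature computation or its fuzzy-neural control scheme.

```python
# Classical construction: a vanishing point is the intersection of the image
# projections of parallel scene lines, computed conveniently in homogeneous
# coordinates via cross products. Pixel coordinates below are hypothetical.
import numpy as np

def line_through(p, q):
    """Homogeneous line through two image points given as (x, y)."""
    return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])

def intersection(l1, l2):
    """Intersection of two homogeneous lines, returned as (x, y)."""
    p = np.cross(l1, l2)
    return p[:2] / p[2]

# two images of parallel scene edges (hypothetical pixel coordinates)
l1 = line_through((100, 400), (220, 300))
l2 = line_through((500, 420), (360, 310))

vp = intersection(l1, l2)
print("vanishing point (pixels):", vp.round(1))
```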

Study on Analysis of Driver's Visual Characteristics in Road Traffic (도로교통에 있어서 운전자 주시특성분석)

  • 김대웅;임채문
    • Journal of Korean Society of Transportation
    • /
    • v.8 no.2
    • /
    • pp.7-25
    • /
    • 1990
  • In road traffic, road circumstances, vehicles, and drivers are closely related to each other. When road facilities are established during road planning, only the road structure has typically been considered, and relatively little work has addressed the relation between road circumstances and the driver. This study focuses on the analysis of drivers' visual characteristics in order to improve road circumstances. Drivers' visual characteristics were measured with an eye-mark recorder and analyzed statistically. Visual characteristics, visual range, visual time, distribution of fixation duration, and visual moving angle with respect to road circumstances were established qualitatively and quantitatively by driving a test vehicle on streets, roads, and highways. The main findings are as follows. Drivers' visual ranges differ by more than 10% depending on the lane on multi-lane roads, and at 85% of whole vision the visual range on two-lane roads is more than twice as large as that on multi-lane roads. At 95% of visual rate, the right and left visual ranges by speed are $34^{\circ}$ for 30-50 km/h, $28^{\circ}$ for 50-70 km/h, $22^{\circ}$ for 70-90 km/h, and $16^{\circ}$ for over 90 km/h. Accordingly, increasing speed results in a narrower visual range.


Computer Vision and Neuro-Net Based Automatic Grading of a Mushroom (Lentinus Edodes L.) (컴퓨터시각과 신경회로망에 의한 표고등급의 자동판정)

  • Hwang, Heon;Lee, Choongho;Han, Joonhyun
    • Journal of Bio-Environment Control
    • /
    • v.3 no.1
    • /
    • pp.42-51
    • /
    • 1994
  • Visual features of a mushroom (Lentinus Edodes L.) are critical in sorting and grading, as they are for most agricultural products. Because of their complex and varied visual features, grading and sorting of mushrooms have been done manually by human experts. Though the actions involved in human grading look simple, the decision making underneath them results from complex neural processing of the visual image. Recently, artificial neural networks have drawn great attention because of their functional capability as a partial substitute for the human brain. Since most agricultural products are not uniquely defined by their physical properties and do not have a well-defined job structure, neuro-net based computer visual information processing is a promising approach toward automation in the agricultural field. In this paper, neuro-net based classification of simple geometric primitives was first carried out, and the generalization property of the network was tested on degraded primitives. A neuro-net based grading system was then developed for mushrooms. A computer vision system was utilized to extract and quantify the qualitative visual features of sampled mushrooms, and the extracted visual features and their corresponding grades were used as input/output pairs for training the neural network. The grading performance of the trained network on mushrooms previously graded by the expert is also presented. (A minimal sketch of the feature-to-grade pipeline follows below.)

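A minimal sketch of the feature-to-grade pipeline described above: quantified visual features paired with expert grades train a small neural network, which then grades a new sample. The feature set, the numbers, and the use of scikit-learn's MLPClassifier are all stand-ins for the paper's actual features and network.

```python
# Hypothetical sketch of the feature -> grade pipeline: invented visual
# features of sampled mushrooms (cap diameter, roundness, surface-crack ratio)
# paired with expert grades are used to train a small neural network.
import numpy as np
from sklearn.neural_network import MLPClassifier

# (cap_diameter_mm, roundness, crack_ratio) -- hypothetical extracted features
X = np.array([
    [62, 0.95, 0.40], [58, 0.92, 0.35], [49, 0.80, 0.10],
    [45, 0.78, 0.05], [38, 0.60, 0.02], [35, 0.55, 0.01],
])
y = np.array(["A", "A", "B", "B", "C", "C"])   # expert-assigned grades

clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
clf.fit(X, y)

# grade a newly measured mushroom (hypothetical feature vector)
print(clf.predict([[50, 0.82, 0.12]]))
```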

Multi-Object Goal Visual Navigation Based on Multimodal Context Fusion (멀티모달 맥락정보 융합에 기초한 다중 물체 목표 시각적 탐색 이동)

  • Jeong Hyun Choi;In Cheol Kim
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.12 no.9
    • /
    • pp.407-418
    • /
    • 2023
  • Multi-Object Goal Visual Navigation (MultiOn) is a visual navigation task in which an agent must visit multiple object goals in an unknown indoor environment in a given order. Existing models for the MultiOn task suffer from the limitation that they cannot utilize an integrated view of multimodal context because they use only a unimodal context map. To overcome this limitation, this paper proposes a novel deep neural network-based agent model for the MultiOn task. The proposed model, MCFMO, uses a multimodal context map containing visual appearance features, semantic features of environmental objects, and goal object features. It effectively fuses these three heterogeneous features into a global multimodal context map using a point-wise convolutional neural network module, and adopts an auxiliary task learning module to predict the observation status, goal direction, and goal distance, which guides efficient learning of the navigation policy. Through various quantitative and qualitative experiments using the Habitat-Matterport3D simulation environment and scene dataset, we demonstrate the superiority of the proposed model. (A minimal sketch of point-wise fusion with auxiliary heads follows below.)
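
A minimal PyTorch sketch of the fusion idea described above: three per-cell context features are concatenated channel-wise and fused with a 1x1 (point-wise) convolution, with small auxiliary heads predicting goal direction and distance. Channel sizes, head shapes, and the pooling are assumptions, not the MCFMO architecture.

```python
# Sketch: concatenate visual, semantic, and goal feature maps along channels,
# fuse with a 1x1 (point-wise) convolution, and attach small auxiliary heads.
# All dimensions are hypothetical; this is not the MCFMO implementation.
import torch
import torch.nn as nn

class ContextFusion(nn.Module):
    def __init__(self, c_vis=32, c_sem=16, c_goal=8, c_out=64):
        super().__init__()
        self.fuse = nn.Conv2d(c_vis + c_sem + c_goal, c_out, kernel_size=1)
        self.dir_head = nn.Linear(c_out, 8)    # e.g., 8 coarse goal directions
        self.dist_head = nn.Linear(c_out, 1)   # scalar goal-distance estimate

    def forward(self, vis, sem, goal):
        fused = torch.relu(self.fuse(torch.cat([vis, sem, goal], dim=1)))
        pooled = fused.mean(dim=(2, 3))        # global context summary
        return fused, self.dir_head(pooled), self.dist_head(pooled)

# hypothetical map-shaped inputs: batch of 2, 24x24 context-map cells
vis = torch.randn(2, 32, 24, 24)
sem = torch.randn(2, 16, 24, 24)
goal = torch.randn(2, 8, 24, 24)

model = ContextFusion()
fused_map, goal_dir, goal_dist = model(vis, sem, goal)
print(fused_map.shape, goal_dir.shape, goal_dist.shape)
```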

Visual servoing by a fuzzy reasoning method (퍼지추론에 의한 시각적 구동방법)

  • 김태원;서일홍;오상록
    • Proceedings of the Institute of Control, Robotics and Systems Conference (제어로봇시스템학회 학술대회논문집)
    • /
    • 1991.10a
    • /
    • pp.984-989
    • /
    • 1991
  • In this paper, a novel visual servoing method is proposed for eye-in-hand robots by employing a self-organizing fuzzy controller. For this purpose, a new Jacobian is defined that is not a function of the relative position of the object but a function of the image features only. Instead of obtaining an analytic form of the proposed Jacobian, a self-organizing fuzzy controller is proposed to alleviate difficulties in real-time implementation. To show its validity, the proposed method is applied to a 2-dimensional visual servoing task. (The classical Jacobian-based servoing step that the fuzzy controller replaces is sketched below.)

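For context, the sketch below shows the classical image-based servoing step that the paper avoids computing analytically: a velocity command obtained from the image-feature error through the pseudo-inverse of an image Jacobian. The paper instead learns this mapping with a self-organizing fuzzy controller; the features, Jacobian entries, and gain below are hypothetical.

```python
# Classical image-based visual servoing step (not the paper's fuzzy method):
# camera velocity = gain * pinv(image Jacobian) * feature error.
# Feature values, Jacobian entries, and the gain are hypothetical.
import numpy as np

# current and desired image features (e.g., pixel coordinates of two points)
f_current = np.array([120.0, 80.0, 300.0, 90.0])
f_desired = np.array([160.0, 120.0, 260.0, 120.0])
error = f_desired - f_current

# hypothetical image Jacobian relating camera motion (vx, vy) to feature change
J = np.array([
    [1.0, 0.0],
    [0.0, 1.0],
    [-1.0, 0.1],
    [0.0, 1.0],
])

gain = 0.5
camera_velocity = gain * np.linalg.pinv(J) @ error   # classical IBVS law
print("commanded camera velocity:", camera_velocity.round(3))
```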