• Title/Summary/Keyword: Visual Models

Search Results: 602

QUANTITATIVE ANALYSES USING 4D MODELS - AN EXPLORATIVE STUDY

  • Rogier Jongeling;Jonghoon Kim;Claudio Mourgues;Martin Fischer;Thomas Olofsson
    • International conference on construction engineering and project management / 2005.10a / pp.830-835 / 2005
  • 4D models help construction planners develop and evaluate construction plans. However, current analyses using 4D models are mainly visual and limit the quantitative comparison of construction alternatives. This paper explores the usefulness of extracting quantitative information from 4D models to support time-space analyses. We use two 4D models of an industry test case to illustrate how to analyze 4D content quantitatively (i.e., work space areas and distances between concurrent activities). This paper shows how these two types of 4D content can be extracted from 4D models to support 4D-based analyses and novel presentations of construction planning information. We suggest further research to formalize the content of 4D models to enable comparative quantitative analyses of construction planning alternatives. Formalized 4D content will enable the development of reasoning mechanisms that automate 4D-model-based analyses and provide the information content for informative presentations of construction planning information.

  • PDF
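
As a side note on the time-space analysis described in the entry above, the following is a minimal sketch under a hypothetical, simplified data model (a schedule window plus an axis-aligned work-space footprint per activity) of how work-space areas and centroid distances between concurrent activities could be computed. It is illustrative only, not the authors' 4D tooling; all names and numbers are invented.

```python
from dataclasses import dataclass
from itertools import combinations

# Hypothetical, simplified stand-in for 4D content: each activity gets a
# schedule window and an axis-aligned work-space footprint.
@dataclass
class Activity:
    name: str
    start: int                      # schedule day
    end: int
    box: tuple                      # (xmin, ymin, xmax, ymax) footprint in metres

def footprint_area(box):
    xmin, ymin, xmax, ymax = box
    return (xmax - xmin) * (ymax - ymin)

def centroid_distance(a, b):
    ax, ay = (a[0] + a[2]) / 2, (a[1] + a[3]) / 2
    bx, by = (b[0] + b[2]) / 2, (b[1] + b[3]) / 2
    return ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5

def concurrent(a, b):
    return a.start <= b.end and b.start <= a.end

activities = [
    Activity("formwork, zone A", 1, 5, (0.0, 0.0, 10.0, 8.0)),
    Activity("rebar placement, zone B", 3, 7, (12.0, 0.0, 20.0, 8.0)),
]

for a, b in combinations(activities, 2):
    if concurrent(a, b):
        print(f"{a.name} || {b.name}: "
              f"areas {footprint_area(a.box)} m2 / {footprint_area(b.box)} m2, "
              f"centroid distance {centroid_distance(a.box, b.box):.1f} m")
```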

A Basic Study on the Curriculum Evaluation of Gifted Education in Visual Art (미술영재 교육과정 평가를 위한 이론적 기초)

  • Lee, Kyung-Jin;Kim, Sun-Ah
    • Journal of Gifted/Talented Education / v.22 no.3 / pp.639-662 / 2012
  • The purpose of this study is to develop an evaluation model for gifted curricula in visual art. To this end, it first discusses the issues that have been raised about gifted education in visual art. Second, it critically reviews existing evaluation models for gifted curricula and investigates which is suitable as a basis for a curriculum evaluation model for the visually gifted. Third, it suggests an appropriate perspective and evaluation model for gifted curricula in visual art. Along with the change in the concept of creativity, recent studies on gifted education in visual art emphasize that gifted learners with potential should find their own ways of creating art, and they stress contextual implementation, which recognizes the significance of interaction among field, domain, and individual. Based on this inquiry, existing evaluation models of gifted curricula have limited suitability as evaluation models for gifted curricula in visual art. This study suggests that the curriculum evaluation of visual art gifted programs should be approached from the decision-making perspective. It also develops a conceptual framework and an evaluation model for gifted curricula in visual art based on the CIPP model, the representative model of the decision-making approach, and concludes with implications and a discussion of the role of evaluators.

Real-Time Comprehensive Assistance for Visually Impaired Navigation

  • Amal Al-Shahrani;Amjad Alghamdi;Areej Alqurashi;Raghad Alzahrani;Nuha Imam
    • International Journal of Computer Science & Network Security / v.24 no.5 / pp.1-10 / 2024
  • Individuals with visual impairments face numerous challenges in their daily lives, with navigating streets and public spaces being particularly daunting. The inability to identify safe crossing locations and assess the feasibility of crossing significantly restricts their mobility and independence. Globally, an estimated 285 million people suffer from visual impairment, with 39 million categorized as blind and 246 million as visually impaired, according to the World Health Organization. In Saudi Arabia alone, there are approximately 159,000 blind individuals, according to unofficial statistics. The profound impact of visual impairments on daily activities underscores the urgent need for solutions that improve mobility and enhance safety. This study addresses this pressing issue by leveraging computer vision and deep learning techniques to enhance object detection capabilities. Two detection models were trained: one focused on street-crossing obstacles, the other on general object search. The first model was trained on a labeled dataset of 5283 annotated images of road obstacles and traffic signals using both YOLOv8 and YOLOv5, with YOLOv5 achieving a satisfactory accuracy of 84%. The second model was trained on the COCO dataset using YOLOv5, yielding an accuracy of 94%. By improving object detection capabilities through advanced technology, this research seeks to empower individuals with visual impairments, enhancing their mobility, independence, and overall quality of life.
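
For context on the detection step described in the entry above, here is a minimal YOLOv5 inference sketch using the public COCO-pretrained `yolov5s` weights via `torch.hub`, roughly corresponding to the "object search" model. The authors' custom obstacle/traffic-signal weights and dataset are not public, so the image path and confidence threshold below are placeholder assumptions.

```python
import torch

# Load COCO-pretrained YOLOv5 weights from the public ultralytics/yolov5 hub repo.
model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)
model.conf = 0.4                       # confidence threshold (assumed value)

results = model("street_scene.jpg")    # placeholder image path
detections = results.pandas().xyxy[0]  # one row per detected object

for _, det in detections.iterrows():
    # e.g. report the detected class and rough horizontal position to the user
    x_center = (det.xmin + det.xmax) / 2
    print(f"{det['name']} (conf {det.confidence:.2f}) at x~{x_center:.0f}px")
```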

Discrimination of a Pleasant and an Unpleasant State by Autoregressive Models from EEG Signals (EEG신호의 시계열분석에 의한 쾌, 불쾌 감성분류에 관한 연구)

  • Im, Seong-Sik;Kim, Jin-Ho;Kim, Chi-Yong
    • Journal of the Ergonomics Society of Korea / v.17 no.1 / pp.67-77 / 1998
  • The objective of this study is to extract information from electroencephalogram (EEG) signals with which mental states can be discriminated. Seven university students participated. Ten stimuli based on the IAPS (International Affective Picture System) were presented in random order according to the experimental schedule. Eight-channel ($O_1$, $O_2$, $F_3$, $F_4$, $F_7$, $F_8$, $FP_1$, and $FP_2$) EEG signals were recorded at a sampling rate of 204.8 Hz during the visual stimuli and analyzed. After the ten stimuli had been presented, each subject rated every stimulus on a scale from -5 to 5, scoring the best stimulus 5 and the worst -5. Only the EEG signals corresponding to each subject's maximum and minimum ratings were selected for analysis. The EEG signals were transformed into feature vectors of scalar autoregressive-model coefficients and classified with discriminant analysis for each channel. The features produced the best classification accuracy of 85.7% in $O_1$ and $O_2$ for visual stimuli. This study could be extended to establish an algorithm that quantifies and classifies emotions evoked by visual stimuli using autoregressive models.

  • PDF
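
A minimal sketch of the per-channel feature pipeline summarized in the entry above, assuming synthetic single-channel epochs and an arbitrary AR order of 6 (the paper's order and channel handling are not reproduced): fit a scalar autoregressive model per epoch, take the AR coefficients as features, and classify with linear discriminant analysis.

```python
import numpy as np
from statsmodels.tsa.ar_model import AutoReg
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Synthetic noise stands in for the 204.8 Hz EEG recordings; the AR order (6)
# and the number of epochs are assumptions for illustration.
rng = np.random.default_rng(0)
fs, seconds, order = 204.8, 5, 6

def ar_features(epoch, lags=order):
    """AR coefficients (excluding the intercept) of one EEG epoch."""
    fitted = AutoReg(epoch, lags=lags).fit()
    return fitted.params[1:]

# 20 synthetic single-channel epochs: half labelled pleasant (1), half unpleasant (0)
epochs = [rng.standard_normal(int(fs * seconds)) for _ in range(20)]
labels = np.array([1] * 10 + [0] * 10)
X = np.array([ar_features(e) for e in epochs])

clf = LinearDiscriminantAnalysis().fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```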

A Multi-category Task for Bitrate Interval Prediction with the Target Perceptual Quality

  • Yang, Zhenwei;Shen, Liquan
    • KSII Transactions on Internet and Information Systems (TIIS) / v.15 no.12 / pp.4476-4491 / 2021
  • Video service providers often face user network problems when transmitting video streams and strive to provide users with superior video quality within a limited bitrate budget. It is therefore necessary to accurately determine the target bitrate range of a video under different quality requirements. Several schemes have recently been proposed to meet this requirement, but they do not take visual perceptual effects into account. In this paper, we propose a new multi-category model that accurately predicts the target bitrate range for a target visual quality using machine learning. First, a dataset is constructed for training the multi-category models; the quality score ladders and the corresponding bitrate-interval categories are defined in the dataset. Second, several types of spatial-temporal features related to VMAF evaluation metrics and visual factors are extracted and processed statistically for classification. Finally, bitrate prediction models trained on the dataset with a RandomForest classifier are used to predict the target bitrate of input videos at the target video quality. The classification accuracy of the model reaches 0.705, and video encoded at the bitrate predicted by the model achieves the target perceptual quality.
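
A hedged sketch of the classification step described above, with random placeholder features and labels standing in for the paper's VMAF-related spatial-temporal features and bitrate-interval categories (which come from its own dataset and quality ladders).

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Per-clip spatial-temporal features mapped to a bitrate-interval category for a
# target quality level. All values below are synthetic placeholders.
rng = np.random.default_rng(42)
n_clips, n_features, n_intervals = 500, 12, 5

X = rng.random((n_clips, n_features))            # e.g. texture, motion, SI/TI statistics
y = rng.integers(0, n_intervals, size=n_clips)   # bitrate-interval category labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", model.score(X_te, y_te))
```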

Anatomical and Functional Comparison of the Caudate Tail in Primates and the Tail of the Striatum in Rodents: Implications for Sensory Information Processing and Habitual Behavior

  • Keonwoo Lee;Shin-young An;Jun Park;Seoyeon Lee;Hyoung F. Kim
    • Molecules and Cells / v.46 no.8 / pp.461-469 / 2023
  • The tail of the striatum (TS) is located at the caudal end in the striatum. Recent studies have advanced our knowledge of the anatomy and function of the TS but also raised questions about the differences between rodent and primate TS. In this review, we compare the anatomy and function of the TS in rodent and primate brains. The primate TS is expanded more caudally during brain development in comparison with the rodent TS. Additionally, five sensory inputs from the cortex and thalamus converge in the rodent TS, but this convergence is not observed in the primate TS. The primate TS, including the caudate tail and putamen tail, primarily receives inputs from the visual areas, implying a specialized function in processing visual inputs for action generation. This anatomical difference leads to further discussion of cellular circuit models to comprehend how the primate brain processes a wider range of complex visual stimuli to produce habitual behavior as compared with the rodent brain. Examining these differences and considering possible neural models may provide better understanding of the anatomy and function of the primate TS.

Determination of Target Value under Automatic Vision Inspection Systems (자동시각검사환경하에서 공정 목표치의 설정)

  • 서순근;이성재
    • Journal of Korean Society for Quality Management / v.29 no.3 / pp.66-78 / 2001
  • This paper deals with the problem of determining the process target value under an automated visual inspection (AVI) system. Three independent error sources closely related to the performance of the AVI system are considered: digitizing error, illumination error, and positional error. Assuming that the digitizing error is uniformly or normally distributed and that the illumination and positional errors are normally distributed, the distribution function for the error of measured lengths is derived for the case where the length of a product is measured by the AVI system. Optimal target values under the two error models of the AVI system are then obtained by minimizing the total expected cost function, which consists of give-away, rework, and penalty costs. To validate the two process-setting models, an AVI system for a drink-filling process was built and test results are discussed.

  • PDF
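
To make the target-setting idea in the entry above concrete, here is an illustrative Monte Carlo sketch rather than the paper's closed-form model: the measured length is the true length plus the three independent AVI errors, and the target is chosen to minimize an assumed give-away/rework/penalty cost. The cost coefficients, error magnitudes, and specification limit are made-up values.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)
L = 100.0                                        # lower specification limit (assumed)
c_giveaway, c_rework, c_penalty = 1.0, 5.0, 20.0 # unit costs (assumed)
sigma_process = 0.5                              # process standard deviation (assumed)
n = 200_000

z = rng.normal(0.0, sigma_process, n)            # process variation (common random numbers)
err = (rng.uniform(-0.1, 0.1, n)                 # digitizing error (uniform case)
       + rng.normal(0.0, 0.05, n)                # illumination error
       + rng.normal(0.0, 0.05, n))               # positional error

def expected_cost(target):
    true = target + z
    measured = true + err                        # what the AVI system reports
    accepted = measured >= L
    giveaway = c_giveaway * np.where(accepted, true - L, 0.0).clip(min=0).mean()
    rework = c_rework * (~accepted).mean()
    penalty = c_penalty * (accepted & (true < L)).mean()  # under-spec items shipped
    return giveaway + rework + penalty

res = minimize_scalar(expected_cost, bounds=(L, L + 3.0), method="bounded")
print("approximate optimal target value:", round(res.x, 3))
```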

Voting based Cue Integration for Visual Servoing

  • Cho, Che-Seung;Chung, Byeong-Mook
    • Institute of Control, Robotics and Systems: Conference Proceedings (제어로봇시스템학회 학술대회논문집) / 2003.10a / pp.798-802 / 2003
  • The robustness and reliability of vision algorithms are key issues in robotics research and industrial applications. In this paper, robust real-time visual tracking in complex scenes is considered. A common approach to increasing the robustness of a tracking system is to use different models (CAD models, etc.) known a priori, and fusion of multiple features also facilitates robust detection and tracking of objects in scenes of realistic complexity. Because voting requires only a very simple model, or none at all, voting-based fusion of cues is applied here. The approach is tested on a 3D Cartesian robot that tracks a toy vehicle moving along a 3D rail, and a Kalman filter is used to estimate the motion parameters, namely the system state vector of a moving object with unknown dynamics. Experimental results show that combining cue fusion with motion estimation gives the tracking system robust performance.

  • PDF
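
As a small illustration of the motion-estimation part mentioned above, the following is a one-dimensional constant-velocity Kalman filter sketch; the voting-based cue fusion that would supply the position measurements is omitted, and all noise covariances and measurements are assumed toy values.

```python
import numpy as np

dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition for [position, velocity]
H = np.array([[1.0, 0.0]])              # only position is measured
Q = 1e-3 * np.eye(2)                    # process noise covariance (assumed)
R = np.array([[0.05]])                  # measurement noise covariance (assumed)

x = np.zeros((2, 1))                    # initial state estimate
P = np.eye(2)                           # initial estimate covariance

def kalman_step(x, P, z):
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the fused position measurement z
    y = np.array([[z]]) - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Toy measurements: a target moving at about 0.5 units/s, observed with noise
measurements = 0.5 * dt * np.arange(50) + 0.05 * np.random.randn(50)
for z in measurements:
    x, P = kalman_step(x, P, z)
print("estimated [position, velocity]:", x.ravel())
```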

A Study on the Artificial Recognition System on Visual Environment of Architecture (건축의 시각적 환경에 대한 지능형 인지 시스템에 관한 연구)

  • Seo, Dong-Yeon;Lee, Hyun-Soo
    • KIEAE Journal / v.3 no.2 / pp.25-32 / 2003
  • This study investigates the structure of human recognition of the architectural environment and its reconstruction by artificial intelligence. To test the feasibility of this reconstruction, the recognition structure for the architectural environment is analysed and each step of the structure is matched with a computational method: edge detection and a neural network were selected as the methods corresponding to the steps of the recognition process. The visual perception system established with the selected methods is trained and tested, and its output is compared with the results of a human-subject experiment. Assuming the artificial system resembles the process of human recognition of the architectural environment, does it give responses similar to a human's? The results show that it is possible to establish an artificial visual perception system that gives responses similar to a human's when it is modeled after the human recognition structure and process.
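
A rough sketch of the two-stage pipeline described in the entry above (edge detection feeding a small neural network), using synthetic 32x32 "scenes" and a simple gradient-based edge descriptor; it does not reproduce the study's stimuli, features, or network design.

```python
import numpy as np
from scipy.ndimage import sobel
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

def edge_features(img):
    """Edge descriptor: per-axis mean gradient magnitude plus a magnitude histogram."""
    gx, gy = sobel(img, axis=0), sobel(img, axis=1)
    mag = np.hypot(gx, gy)
    hist, _ = np.histogram(mag, bins=8, range=(0, 8))
    return np.concatenate(([np.abs(gx).mean(), np.abs(gy).mean()], hist / img.size))

def synthetic_scene(vertical):
    """Toy stand-in for a scene dominated by vertical or horizontal structure."""
    img = rng.random((32, 32)) * 0.1
    if vertical:
        img[:, ::4] = 1.0
    else:
        img[::4, :] = 1.0
    return img

X = np.array([edge_features(synthetic_scene(v)) for v in [True] * 50 + [False] * 50])
y = np.array([1] * 50 + [0] * 50)

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, y)
print("training accuracy:", clf.score(X, y))
```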

A Spatial Planning Model for Supporting Facilities Allocation and Visual Evaluation in Improvement of Rural Villages (농촌마을개발의 시설배치 및 시각적 평가 지원을 위한 공간계획 모형)

  • 김대식;정하우
    • Magazine of the Korean Society of Agricultural Engineers / v.44 no.6 / pp.71-82 / 2002
  • The purpose of this study is to develop a 3-dimensional spatial planning model (3DSPLAM) for facility allocation and visual evaluation in rural village improvement planning. For the model development, this study defined both planning layers and a modelling process for the spatial planning of rural villages. The 3DSPLAM generates road networks and village facility locations automatically from a built-area plan map and a digital elevation model produced with a geographic information system, and it simulates a 3-dimensional villagescape for visual presentation of the planned results. The 3DSPLAM can be conveniently used for automatic allocation of roads, easy partition of land lots, and reasonable siting of facilities, and the planned results can be presented as stereoscopic models with varied viewing positions and angles.
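
To illustrate one sub-task of the model described in the entry above, here is a toy least-cost road-routing sketch over a synthetic digital elevation model, penalizing steep slopes; the DEM, cost weights, and 4-neighbour routing are illustrative assumptions rather than the 3DSPLAM implementation.

```python
import heapq
import numpy as np

rng = np.random.default_rng(3)
dem = rng.random((20, 20)) * 10.0          # synthetic elevations in metres
cell = 10.0                                # grid cell size in metres

def least_cost_road(dem, start, goal, slope_weight=5.0):
    """Dijkstra over grid cells; step cost grows with the slope between cells."""
    rows, cols = dem.shape
    dist, prev = {start: 0.0}, {}
    pq = [(0.0, start)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue
        r, c = node
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                slope = abs(dem[nr, nc] - dem[r, c]) / cell
                nd = d + cell * (1.0 + slope_weight * slope)
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = node
                    heapq.heappush(pq, (nd, (nr, nc)))
    # Reconstruct the road as a list of grid cells from start to goal
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]

road = least_cost_road(dem, (0, 0), (19, 19))
print("road length (cells):", len(road))
```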