• Title/Summary/Keyword: Training Scenarios (훈련 시나리오)

Search results: 184

Effects of Cognitive Heuristics on the Decisions of Actual Judges and Mock Jury Groups for Simulated Trial Issues (가상적인 재판 쟁점에서의 현역판사의 판단과 모의배심의 집단판단에 대한 인지적 방략의 효과)

  • Kwang B. Park;Sang Joon Kim;Mi Young Han
    • Korean Journal of Culture and Social Issue
    • /
    • v.11 no.1
    • /
    • pp.59-84
    • /
    • 2005
  • Three studies were conducted to examine the degree to which three common heuristics (the anchoring heuristic, the framing effect, and the representativeness heuristic) influence the decision-making processes of actual judges and five-person mock juries. Using scenarios on issues commonly raised in actual criminal and civil trials, Study 1 examined the decisions of 158 actual judges. In Study 2, the decisions of 80 mock jury groups consisting of college students were examined with similar scenarios, and individual decisions were examined in Study 3 for comparison with the group decisions of Study 2. The decision processes of the actual judges and the mock jury groups alike were found to be influenced by "anchors", but the biases produced by the anchoring heuristic were more pronounced in the group decisions than in the judges' decisions. With respect to the framing effect, the actual judges were found to be resistant, while a small effect was found in the decisions of the mock jury groups. Representativeness biases were found in the decisions of neither the actual judges nor the mock juries. The implications of the results for judicial systems are discussed.

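The anchoring effect reported in the abstract above can be quantified as the fraction of the gap between two anchor values that reappears in the judgments themselves. The sketch below is purely illustrative; the function name and all numbers are assumptions, not data from the study:

```python
# Hypothetical sketch: an anchoring index of 0.0 means judgments ignored the
# anchors entirely; 1.0 means they moved one-for-one with the anchor values.

def anchoring_index(low_anchor_awards, high_anchor_awards, low_anchor, high_anchor):
    """Return the fraction of the anchor gap reflected in mean judgments."""
    mean_low = sum(low_anchor_awards) / len(low_anchor_awards)
    mean_high = sum(high_anchor_awards) / len(high_anchor_awards)
    return (mean_high - mean_low) / (high_anchor - low_anchor)

# Illustrative damage awards (arbitrary units) under two anchor conditions.
low_group = [10, 12, 11, 13]   # judged after a low demand of 10
high_group = [30, 28, 32, 30]  # judged after a high demand of 50
print(anchoring_index(low_group, high_group, low_anchor=10, high_anchor=50))
```

A stronger pull of the group decisions toward the anchors, as the study reports for mock juries, would show up as a larger index for group data than for individual judges.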

Analysis of Research Trends in Deep Learning-Based Video Captioning (딥러닝 기반 비디오 캡셔닝의 연구동향 분석)

  • Lyu Zhi;Eunju Lee;Youngsoo Kim
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.13 no.1
    • /
    • pp.35-49
    • /
    • 2024
  • Video captioning, a significant outcome of the integration of computer vision and natural language processing, has emerged as a key research direction in artificial intelligence. The technology aims at automatic understanding and linguistic expression of video content, enabling computers to transform the visual information in videos into text. This paper analyzes research trends in deep-learning-based video captioning, categorizes the approaches into four main groups (CNN-RNN-based, RNN-RNN-based, Multimodal-based, and Transformer-based models), explains the concept of each model, and discusses their features, advantages, and disadvantages. The paper also lists the datasets and performance evaluation methods commonly used in the video captioning field. The datasets cover diverse domains and scenarios, offering extensive resources for training and validating video captioning models, while the discussion of performance evaluation covers the major evaluation indicators and provides practical references for evaluating models from various angles. Finally, major challenges that remain to be addressed, such as maintaining temporal consistency and accurately describing dynamic scenes, both of which increase complexity in real-world applications, are presented as future research tasks, along with new tasks to be studied such as temporal relationship modeling and multimodal data integration.
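Among the evaluation indicators the survey above refers to, BLEU is the most common for captioning; its core is clipped n-gram precision between a generated caption and a reference. A minimal sketch of the unigram case (BLEU-1 without the brevity penalty), with illustrative sentences of my own choosing:

```python
from collections import Counter

def unigram_precision(candidate, reference):
    """Clipped unigram precision: the core of BLEU-1, without brevity penalty.

    Each candidate word is credited at most as many times as it appears
    in the reference, then the total credit is divided by candidate length.
    """
    cand_counts = Counter(candidate.split())
    ref_counts = Counter(reference.split())
    clipped = sum(min(c, ref_counts[w]) for w, c in cand_counts.items())
    return clipped / sum(cand_counts.values())

print(unigram_precision("a man is riding a horse", "a man rides a horse"))
```

Full BLEU averages clipped precisions over n-grams up to length 4 and multiplies by a brevity penalty; metrics such as METEOR and CIDEr refine this by adding synonym matching and consensus weighting.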

Development of a deep neural network model to estimate solar radiation using temperature and precipitation (온도와 강수를 이용하여 일별 일사량을 추정하기 위한 심층 신경망 모델 개발)

  • Kang, DaeGyoon;Hyun, Shinwoo;Kim, Kwang Soo
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • v.21 no.2
    • /
    • pp.85-96
    • /
    • 2019
  • Solar radiation is an important variable for estimating the energy balance and water cycle in natural and agricultural ecosystems. A deep neural network (DNN) model was developed to estimate daily global solar radiation. Temperature and precipitation, which are more widely available from weather stations than variables such as sunshine duration, were used as inputs to the DNN model. Five-fold cross-validation was applied to train and test the DNN models, using long-term meteorological data (> 30 years) collected at 15 weather stations in Korea. The DNN model obtained from cross-validation had a relatively small RMSE (3.75 MJ m⁻² d⁻¹) for estimates of daily solar radiation at the Suwon weather station, explaining about 68% of the variation in the observed solar radiation there. The solar radiation measurements in 1985 and 1998 were found to be considerably low for short periods compared with sunshine duration, suggesting that the quality of the solar radiation observations should be assessed in further studies. When data for those years were excluded from the analysis, the DNN model showed slightly better agreement statistics; for example, R² and RMSE were 0.72 and 3.55 MJ m⁻² d⁻¹, respectively. Our results indicate that a DNN would be useful for developing a solar radiation estimation model from temperature and precipitation, which are usually available in downscaled scenario data for future climate conditions. Such a DNN model would therefore be useful for assessing the impact of climate change on crop production, where solar radiation is a required input to crop models.
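The evaluation protocol described above, five-fold cross-validation scored with RMSE and R², can be sketched independently of any particular network. The helpers below are a minimal stdlib illustration, not the authors' code:

```python
import math

def kfold_indices(n, k=5):
    """Yield (train, test) index lists for contiguous k-fold cross-validation."""
    fold = math.ceil(n / k)
    for i in range(k):
        test = list(range(i * fold, min((i + 1) * fold, n)))
        held_out = set(test)
        train = [j for j in range(n) if j not in held_out]
        yield train, test

def rmse(obs, est):
    """Root-mean-square error between observed and estimated values."""
    return math.sqrt(sum((o - e) ** 2 for o, e in zip(obs, est)) / len(obs))

def r_squared(obs, est):
    """Coefficient of determination: 1 - SS_residual / SS_total."""
    mean = sum(obs) / len(obs)
    ss_res = sum((o - e) ** 2 for o, e in zip(obs, est))
    ss_tot = sum((o - mean) ** 2 for o in obs)
    return 1 - ss_res / ss_tot

# Illustrative: 10 days of data split into 5 folds of 2 test days each.
for train, test in kfold_indices(10, k=5):
    print(len(train), len(test))
```

In the study's setup each fold would train a DNN on the training indices and accumulate predictions on the test indices, with the pooled predictions scored by the two statistics above (e.g., R² = 0.72, RMSE = 3.55 MJ m⁻² d⁻¹ after outlier years were removed).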

Rainfall image DB construction for rainfall intensity estimation from CCTV videos: focusing on experimental data in a climatic environment chamber (CCTV 영상 기반 강우강도 산정을 위한 실환경 실험 자료 중심 적정 강우 이미지 DB 구축 방법론 개발)

  • Byun, Jongyun;Jun, Changhyun;Kim, Hyeon-Joon;Lee, Jae Joon;Park, Hunil;Lee, Jinwook
    • Journal of Korea Water Resources Association
    • /
    • v.56 no.6
    • /
    • pp.403-417
    • /
    • 2023
  • In this research, a methodology was developed for constructing an appropriate rainfall image database for estimating rainfall intensity from CCTV video. The database was built in the Large-Scale Climate Environment Chamber of the Korea Conformity Laboratories, which can control variables that show high irregularity and variability in real environments. 1,728 scenarios were designed under five different experimental conditions, from which 36 scenarios and a total of 97,200 frames were selected. Rain streaks were extracted with the k-nearest neighbor algorithm by calculating the difference between each image and the background. To prevent overfitting, data with pixel values greater than a set threshold, relative to the average pixel value of each image, were selected. The area with the greatest pixel variability was found by shifting a window in 10-pixel steps and was set as a representative 180×180 region of the original image. After resizing to 120×120 as input for a convolutional neural network model, image augmentation was performed under unified shooting conditions. 92% of the data fell within an absolute PBIAS of 10%. The final results of this study have the potential to enhance the accuracy and efficacy of existing real-world CCTV systems through transfer learning.
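The streak-extraction step above rests on differencing each frame against a background model and thresholding the result. A crude stdlib stand-in for that idea (the paper's actual pipeline uses KNN background subtraction; the function, threshold, and toy frames here are illustrative assumptions):

```python
def rain_streak_mask(frame, background, threshold=20):
    """Flag pixels as rain (1) where the frame is brighter than the
    background by more than `threshold`, else 0.

    Frames are grayscale images given as nested lists of intensities;
    this is a simplified stand-in for KNN background subtraction.
    """
    return [[1 if f - b > threshold else 0 for f, b in zip(f_row, b_row)]
            for f_row, b_row in zip(frame, background)]

# Toy 2x3 grayscale frames: two bright streak pixels against a dark background.
background = [[10, 10, 10], [10, 10, 10]]
frame      = [[10, 80, 10], [10, 10, 90]]
print(rain_streak_mask(frame, background))
```

In the described methodology, statistics over such masks (e.g., the fraction of flagged pixels relative to each image's mean intensity) would then drive the frame-selection threshold and the choice of the 180×180 representative region.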