• Title/Summary/Keyword: 영상 표현 (image representation)


A Study on the Cartoon Style in Image Contents (영상 콘텐츠에 나타나는 만화적 표현에 관한 연구)

  • Lee, Young-Sook;Lee, Heon-Woo
    • Cartoon and Animation Studies
    • /
    • s.24
    • /
    • pp.65-82
    • /
    • 2011
  • Since the arrival of the internet, numerous creative content industries have been active online. In particular, growing interest in e-Book publishing and the opening and expansion of the online comics industry have made additional training necessary. However, in the study of film grammar, there is a lack of research on the lively and diverse expression found in cartoon directing. This study examines examples of cartoon-style expression in animation and film in order to suggest new directing possibilities. In addition, as a "one source, multi use" asset, the comics industry is a key element of value-added related industries, so broad research on the expressive techniques of comic culture in other media should serve as a core element of such content.

Bit Depth Expansion using Error Distribution (에러 분포의 예측을 이용한 비트 심도 확장 기술)

  • Woo, Jihwan;Shim, Woosung
    • Journal of Broadcast Engineering
    • /
    • v.22 no.1
    • /
    • pp.42-50
    • /
    • 2017
  • Bit-depth expansion is a method of increasing the number of bits per sample. It is becoming important as demand for HDR (High Dynamic Range) displays and for higher display resolutions grows, because the achievable luminance levels and color expressiveness are proportional to the bit depth of the display. In this paper, we present an effective bit-depth expansion algorithm for displaying conventional standard 8-bit content on a high bit-depth (10-bit) device. The proposed method outperforms recently developed methods quantitatively (PSNR) with low complexity: it measures about 1 dB higher in PSNR while running roughly 40 times faster.
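The paper's error-distribution method is not spelled out in the abstract, but the baseline it improves on can be sketched. Below is a minimal illustration of 8-to-10-bit expansion by zero padding versus bit replication; both are common textbook baselines, not the authors' algorithm.

```python
# Two common 8-to-10-bit expansion baselines (illustrative, not the
# paper's proposed error-distribution method).

def zero_padding(v8):
    """Shift an 8-bit value into 10-bit range by appending zero bits."""
    return v8 << 2

def bit_replication(v8):
    """Fill the 2 new low bits with the 2 most significant bits of v8."""
    return (v8 << 2) | (v8 >> 6)

# Bit replication maps 0 -> 0 and 255 -> 1023, using the full 10-bit
# range, whereas zero padding tops out at 1020.
print(bit_replication(255))  # 1023
print(zero_padding(255))     # 1020
```

Bit replication preserves black and white levels exactly, which is why it is the usual reference point for more elaborate expansion methods.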

Passive sonar signal classification using graph neural network based on image patch (영상 패치 기반 그래프 신경망을 이용한 수동소나 신호분류)

  • Guhn Hyeok Ko;Kibae Lee;Chong Hyun Lee
    • The Journal of the Acoustical Society of Korea
    • /
    • v.43 no.2
    • /
    • pp.234-242
    • /
    • 2024
  • We propose a passive sonar signal classification algorithm using a Graph Neural Network (GNN). The proposed algorithm segments spectrograms into image patches and represents them as graphs through connections between adjacent patches. A Graph Convolutional Network (GCN) is then trained on the represented graphs to classify signals. In experiments with publicly available underwater acoustic data, the proposed algorithm captures the line-frequency features of the spectrograms in graph form, achieving a classification accuracy of 92.50 %, which is 8.15 percentage points higher than that of a conventional Convolutional Neural Network (CNN).
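The graph-construction step described above can be sketched independently of the GCN itself: each spectrogram patch becomes a node, and edges link 4-adjacent patches in the grid. The following is a minimal illustration under that assumption; patch features and the network are omitted.

```python
# Sketch of the patch-graph construction: a spectrogram tiled into an
# n_rows x n_cols grid of patches, with edges between 4-adjacent patches.

def patch_graph(n_rows, n_cols):
    """Return node list and undirected edge list for a patch grid."""
    nodes = [(r, c) for r in range(n_rows) for c in range(n_cols)]
    idx = {rc: i for i, rc in enumerate(nodes)}
    edges = []
    for r, c in nodes:
        for dr, dc in ((0, 1), (1, 0)):   # right and down neighbours
            if (r + dr, c + dc) in idx:
                edges.append((idx[(r, c)], idx[(r + dr, c + dc)]))
    return nodes, edges

nodes, edges = patch_graph(3, 4)
print(len(nodes), len(edges))  # 12 patches, 17 undirected edges
```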

A Design of Art-Robot Technique for Drawing Shade and Shadow of a Picture (그림의 명암과 그림자 표현을 위한 아트로봇 기술 설계)

  • Song, Myeongjin;Kim, Paul;Lee, Geunjoo;Kim, Sangwook
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2011.11a
    • /
    • pp.1027-1030
    • /
    • 2011
  • Among humanoid robots there are robots that draw portraits, but robots that take diverse images as input and draw them with shading and shadows are rare. Existing painter robots take an image of the user's face, extract only the contour lines, and draw those; because synchronization between the input image and the robot-arm control is poor, drawing is slow and the result differs considerably from the original image. In this study, shading and shadows are recognized in the input image and rendered, making it possible to draw pictures with a sense of depth. We also propose a heuristic arm-control technique that produces natural drawings by controlling line thickness through fine control of the robot arm, and that improves accuracy by increasing drawing speed. To implement this, levels are determined from the intensity of shading and shadow in the image, neighboring shaded pixels are smoothed on the basis of these levels, and coordinate sets are extracted. Valid trajectories are analyzed from the coordinate values to extract the path the robot arm will follow, and the shading is drawn using an efficient drawing technique.
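The first step the paper describes, determining drawing levels from shade/shadow intensity, amounts to quantizing grayscale values into a small number of bins. A minimal sketch of that quantization (the level count and mapping are illustrative assumptions, not the authors' parameters):

```python
# Quantize an 8-bit grayscale value into a small number of drawing
# levels; level 0 is the darkest (most heavily shaded) region.

def intensity_level(pixel, n_levels=4):
    """Map a pixel value in 0..255 to a level in 0..n_levels-1."""
    return min(pixel * n_levels // 256, n_levels - 1)

print([intensity_level(v) for v in (0, 64, 128, 255)])  # [0, 1, 2, 3]
```

Neighboring pixels sharing a level would then be smoothed and grouped into the coordinate sets from which arm trajectories are extracted.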

Automatic Depth-of-Field Control for Stereoscopic Visualization (입체영상 가시화를 위한 자동 피사계 심도 조절기법)

  • Kang, Dong-Soo;Kim, Yang-Wook;Park, Jun;Shin, Byeong-Seok
    • Journal of Korea Multimedia Society
    • /
    • v.12 no.4
    • /
    • pp.502-511
    • /
    • 2009
  • Several studies in computer graphics have simulated the real-world depth-of-field effect. The effect represents an out-of-focus scene by computing the focal plane: when a point in 3D space lies farther from or nearer than the focal plane, it appears as a blurred circle on the image plane, according to the characteristics of the aperture and lens. Simulating this effect produces realistic images because it yields an out-of-focus scene just as the human eye does. In this paper, we propose a method to calculate a viewer's disparity value using a customized stereoscopic eye-tracking system, together with a GPU-based depth-of-field control method. Together they generate more realistic images while reducing side effects such as dizziness. Since a stereoscopic imaging system forces users to fix their focal position, users typically feel discomfort while watching stereoscopic images; the proposed method reduces this side effect of stereoscopic display systems and generates more immersive images.
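The blur-circle geometry the abstract relies on can be written down directly. A minimal sketch of the standard thin-lens circle-of-confusion diameter (aperture diameter, focal length, focus distance, object distance; all numeric values below are illustrative, not from the paper):

```python
# Thin-lens circle-of-confusion diameter for a point at obj_dist when
# the lens is focused at focus_dist (all lengths in the same unit).

def coc_diameter(aperture, focal, focus_dist, obj_dist):
    return (aperture * (focal / (focus_dist - focal))
            * abs(obj_dist - focus_dist) / obj_dist)

# A point exactly in the focal plane produces no blur:
print(coc_diameter(25.0, 50.0, 2000.0, 2000.0))  # 0.0
```

Points farther from the focal plane yield larger blur circles, which is what the GPU pass uses to spread each sample over the image plane.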


Study on the estimation and representation of disparity map for stereo-based video compression/transmission systems (스테레오 기반 비디오 압축/전송 시스템을 위한 시차영상 추정 및 표현에 관한 연구)

  • Bak Sungchul;Namkung Jae-Chan
    • Journal of Broadcast Engineering
    • /
    • v.10 no.4 s.29
    • /
    • pp.576-586
    • /
    • 2005
  • This paper presents a new method for estimating and representing a disparity map for stereo-based video communication systems. Several pixel-based and block-based algorithms have been proposed to estimate disparity maps. While pixel-based algorithms achieve high accuracy, they require a large number of bits to represent the disparity information; block-based algorithms reduce the bit rate at the cost of representation accuracy. In this paper, a block enclosing a distinct edge is divided into two regions, and the disparity of each region is set to that of a neighboring block. The proposed algorithm employs accumulated histograms and a neural network to classify the block type. Several experiments show that the proposed algorithm is more effective than conventional algorithms in estimating and representing disparity maps.
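For context, the block-based matching that the paper refines can be sketched as a sum-of-absolute-differences (SAD) search along a scanline. This is the generic baseline only; the paper's edge-splitting and neural-network block classification are beyond this illustration.

```python
# Minimal block-matching sketch: find the disparity at position x of
# `left` by SAD search over candidate shifts in `right`.

def block_disparity(left, right, x, block=3, max_d=4):
    ref = left[x:x + block]
    best_d, best_cost = 0, float("inf")
    for d in range(min(max_d, x) + 1):
        cand = right[x - d:x - d + block]
        cost = sum(abs(a - b) for a, b in zip(ref, cand))
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d

left = [0, 0, 9, 8, 7, 0, 0, 0]
right = [9, 8, 7, 0, 0, 0, 0, 0]  # same pattern shifted left by 2
print(block_disparity(left, right, 2))  # 2
```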

ORMN: A Deep Neural Network Model for Referring Expression Comprehension (ORMN: 참조 표현 이해를 위한 심층 신경망 모델)

  • Shin, Donghyeop;Kim, Incheol
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.7 no.2
    • /
    • pp.69-76
    • /
    • 2018
  • Referring expressions are natural language constructions used to identify particular objects within a scene. In this paper, we propose a new deep neural network model for referring expression comprehension. The proposed model locates the region of the referred object in the given image by making use of rich information about the referred object itself, the context object, and the relationship with the context object mentioned in the referring expression. In the proposed model, the object matching score and the relationship matching score are combined to compute the fitness score of each candidate region according to the structure of the referring expression sentence. The model therefore consists of four sub-networks: the Language Representation Network (LRN), the Object Matching Network (OMN), the Relationship Matching Network (RMN), and the Weighted Composition Network (WCN). We demonstrate that our model achieves state-of-the-art comprehension results on three referring expression datasets.
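The score-combination idea can be illustrated in miniature: a candidate region's fitness is a weighted composition of its object-matching and relationship-matching scores, with weights driven by the expression structure. All numbers and the normalization below are illustrative assumptions, not the WCN's actual learned weighting.

```python
# Toy weighted composition of two matching scores for one candidate
# region; in ORMN the weights come from the expression structure.

def fitness(obj_score, rel_score, w_obj, w_rel):
    """Normalized weighted sum of object and relationship scores."""
    return (w_obj * obj_score + w_rel * rel_score) / (w_obj + w_rel)

# e.g. "man holding a cup": relationship evidence gets some weight
print(fitness(0.9, 0.6, w_obj=0.7, w_rel=0.3))
```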

Improvement of Character-net via Detection of Conversation Participant (대화 참여자 결정을 통한 Character-net의 개선)

  • Kim, Won-Taek;Park, Seung-Bo;Jo, Geun-Sik
    • Journal of the Korea Society of Computer and Information
    • /
    • v.14 no.10
    • /
    • pp.241-249
    • /
    • 2009
  • Recently, a number of studies on video annotation and representation have been proposed to analyze video for search and abstraction. In this paper, we present picture elements for identifying conversational participants in video, and an enhanced representation of the characters based on those elements, collectively called Character-net. Because the previous Character-net decides the participants simply as the characters detected while a script is displayed, it has a serious limitation: some listeners cannot be detected as participants. Yet the participants who carry the story are a very important factor in understanding the context of a conversation. The picture elements for detecting conversational participants consist of six items: subtitle, scene, order of appearance, characters' eyes, patterns, and lip motion. We describe how to use these elements to detect conversational participants and how to improve the Character-net representation. The participants can be detected accurately when the proposed elements are combined and satisfy certain conditions. Experimental evaluation shows that the proposed method brings significant advantages in both detecting conversational participants and enhancing the representation of Character-net.

Encoding Method for Interactive Genetic Algorithm by Wavelet Transform (웨이블렛 변환을 이용한 대화형 유전자 알고리즘의 인코딩 방법)

  • Lee, Ju-Young;Cho, Sung-Bae
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1997.10a
    • /
    • pp.131-134
    • /
    • 1997
  • Unlike conventional genetic algorithms, an interactive genetic algorithm lets a human provide the fitness values, so it can effectively express human intuition and sensibility. We have previously built a content-based image retrieval system based on an interactive genetic algorithm; the system retrieves images described by wavelet transforms on the basis of their content. In this paper, we experimentally evaluate whether the coefficients obtained by the wavelet transform are effective as the chromosome representation for the genetic algorithm. Experiments on a small image database confirm that chromosomes described by the wavelet transform find meaningful candidates under the crossover operation of the genetic algorithm.
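The encoding idea can be sketched with a one-level Haar transform written out by hand: the coefficients (coarse averages followed by details) form the chromosome on which crossover operates. The sample signal and the single-level decomposition are illustrative choices, not the paper's setup.

```python
# One-level Haar wavelet transform of a signal of even length; the
# resulting coefficient vector serves as the GA chromosome, with the
# coarse (approximation) genes first.

def haar_1d(signal):
    avgs = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    diffs = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return avgs + diffs

chromosome = haar_1d([4, 2, 5, 5])
print(chromosome)  # [3.0, 5.0, 1.0, 0.0]
```

Placing coarse coefficients first means single-point crossover tends to exchange whole frequency bands between parents, which is one plausible reason the representation behaves well under crossover.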


Edge Class Design for the Development of Edge-based Image Analysis Algorithm (표준화된 Edge기반 영상분석 알고리즘 개발을 위한 윤곽선 클래스 설계 및 구현)

  • Ahn, Ki-Ok;Hwang, Hye-Jeong;Chae, Ok-Sam
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2003.10b
    • /
    • pp.589-591
    • /
    • 2003
  • Edges extracted from an image contain the essential shape information of objects and thus form the basis of image recognition and analysis. Accordingly, much research has been devoted to accurate edge detection, and its applications are diverse. However, there has been little research on a standardized data structure for efficiently representing and using the extracted edge information, which makes it difficult to share research results. In this paper, we design and implement a data class for efficiently representing, managing, searching, and manipulating detected edges, thereby promoting the standardization and reuse of edge-detection algorithms and enabling a variety of applications of the detected edges.
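The kind of edge class the paper argues for can be sketched as an ordered point list with query helpers. The interface below is purely illustrative; the method names and fields are assumptions, not the authors' actual design.

```python
# Illustrative edge data class: an ordered list of (x, y) contour
# samples with simple query/manipulation helpers.

class Edge:
    def __init__(self, points):
        self.points = list(points)      # ordered (x, y) contour samples

    def length(self):
        """Polyline length of the edge."""
        return sum(((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
                   for (x1, y1), (x2, y2) in zip(self.points, self.points[1:]))

    def bounding_box(self):
        """Axis-aligned bounding box as (min_x, min_y, max_x, max_y)."""
        xs = [p[0] for p in self.points]
        ys = [p[1] for p in self.points]
        return (min(xs), min(ys), max(xs), max(ys))

e = Edge([(0, 0), (3, 4), (3, 8)])
print(e.length())        # 9.0
print(e.bounding_box())  # (0, 0, 3, 8)
```

Standardizing on one such container is what lets independently developed detectors and analysis algorithms exchange results.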
