• Title/Summary/Keyword: KCGS

Search Results: 271

The Effect of Corporate Social Responsibility Activities on Corporate Earnings Persistence: Financial Companies (기업의 사회적 책임활동이 기업의 이익지속성에 미치는 영향: 금융 기업을 중심으로)

  • Park, AJin;Kim, JeongYeon
    • The Journal of Society for e-Business Studies / v.25 no.4 / pp.155-168 / 2020
  • Although many studies have examined how corporate social responsibility activities, driven by growing social awareness, affect financial and non-financial performance, studies focusing on financial companies are relatively scarce compared to those on non-financial companies such as manufacturing and service firms. Accordingly, this study explores the impact of corporate social responsibility activities on the earnings persistence of financial companies through a regression analysis that uses the converted ESG rating scores of Korean listed companies, provided by the Korea Corporate Governance Service (KCGS), as the variable for corporate social responsibility activities. The analysis found that, among the ESG scores, the governance (G) score had a significant positive (+) effect on earnings persistence. In addition, when the same analysis was repeated after classifying the full sample into six sub-industries, the governance score in the banking industry was even more significant than in the regression on the entire sample. The study therefore concludes that the soundness and reliability of corporate governance have a positive effect on corporate earnings persistence.
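As a rough illustration of the kind of persistence regression the abstract describes, the sketch below fits next-period earnings on current earnings interacted with a governance score, using synthetic data. The column names (earnings_t, earnings_t1, esg_g) and the interaction specification are illustrative assumptions, not the authors' exact model.

```python
# Illustrative persistence regression on synthetic data; not the paper's actual model.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 300
esg_g = rng.uniform(0, 100, n)                 # governance (G) score proxy
earnings_t = rng.normal(0.05, 0.02, n)         # current earnings (scaled by assets)
# Assumed data-generating process: a higher G score makes earnings carry over more strongly.
persistence = 0.4 + 0.004 * esg_g
earnings_t1 = persistence * earnings_t + rng.normal(0, 0.01, n)

df = pd.DataFrame({"earnings_t": earnings_t, "earnings_t1": earnings_t1, "esg_g": esg_g})

# Persistence test: a positive earnings_t:esg_g coefficient indicates that better
# governance is associated with more persistent earnings.
model = smf.ols("earnings_t1 ~ earnings_t * esg_g", data=df).fit()
print(model.summary().tables[1])
```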

Adaptive Slicing by Merging Vertical Layer Polylines for Reducing 3D Printing Time (3D 프린팅 시간 단축을 위한 상하 레이어 폴리라인 병합 기반 가변 슬라이싱)

  • Park, Jiyoung;Kang, Joohyung;Lee, Hye-In;Shin, Hwa Seon
    • Journal of the Korea Computer Graphics Society / v.22 no.5 / pp.17-26 / 2016
  • This paper presents an adaptive slicing method based on merging vertical layer polylines. First, we slice the input 3D polygon model uniformly at the minimum printable thickness, which yields the bounding polylines of the cross section at each layer. Next, we group the layer polylines according to their vertical connectivity and remove polylines in overly dense areas of each group. The number of layers to merge is determined by the layer thickness computed from the cusp height of the layer, and a set of layer polylines is merged into a single polyline by removing the polylines that fall within that thickness. The proposed method preserves shape features while reducing printing time. For evaluation, we sliced ten 3D polygon models using our method and a global adaptive slicing method and measured the total polyline length, which determines the printing time. For all ten models, the total length produced by our method was shorter, indicating that our method achieves a shorter printing time.
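The cusp-height rule mentioned above is commonly formulated as limiting the layer thickness to the cusp height divided by the vertical component of the surface normal. The sketch below illustrates that rule under assumed clamping bounds and a simple merge-count heuristic; it is not the paper's implementation.

```python
# Cusp-height rule for adaptive slicing (illustrative assumptions, not the paper's code).

def layer_thickness(cusp_height, normal_z, t_min, t_max):
    """Allowed layer thickness for a surface region whose unit normal has vertical
    component normal_z, so the stair-step (cusp) error stays below cusp_height."""
    if abs(normal_z) < 1e-9:          # vertical wall: cusp error is negligible
        return t_max
    t = cusp_height / abs(normal_z)
    return max(t_min, min(t, t_max))

def layers_to_merge(cusp_height, normal_z, t_min, t_max):
    """Number of uniform (t_min) layers whose polylines can be merged into one."""
    t = layer_thickness(cusp_height, normal_z, t_min, t_max)
    return max(1, int(t / t_min + 1e-9))

# Steep region (normal nearly horizontal): several thin layers can be merged.
print(layers_to_merge(0.1, 0.2, t_min=0.1, t_max=0.4))
# Near-horizontal surface (normal nearly vertical): keep the thin layers.
print(layers_to_merge(0.1, 0.9, t_min=0.1, t_max=0.4))
```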

IMToon: Image-based Cartoon Authoring System using Image Processing (IMToon: 영상처리를 활용한 영상기반 카툰 저작 시스템)

  • Seo, Banseok;Kim, Jinmo
    • Journal of the Korea Computer Graphics Society / v.23 no.2 / pp.11-22 / 2017
  • This study proposes IMToon (IMage-based carToon), an image-based cartoon authoring system built on image processing algorithms. IMToon allows general users to easily and efficiently produce the frames that make up an image-based cartoon. The authoring system is designed around two main functions: a cartoon effector and an interactive story editor. The cartoon effector automatically converts input images into cartoon-style images through two steps, image-based cartoon shading and outline drawing. Image-based cartoon shading takes the images of the desired scenes from the user, separates the brightness information from the color model of the input images, quantizes it into a desired number of shading levels, and re-renders the result as a cartoon-style image. The final cartoon-style image is then created in the outline drawing step, where outlines obtained through edge detection are applied to the shaded image. The interactive story editor is used to add speech balloons and captions in a dialog structure, producing a finished cartoon scene that tells a story, as in a webtoon or comic book. In addition, the cartoon effector is extended from still images to video, so that it can be applied to videos as well. Finally, various experiments verify that users can easily and efficiently produce the cartoons they want from images with the proposed IMToon system.
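A minimal sketch of the two cartoon-effector steps, assuming OpenCV as the toolkit (the abstract does not name one): brightness is quantized into a few shading levels and Canny edges are overlaid as outlines. The file path and parameter values are placeholders.

```python
# Illustrative cartoon shading + outline drawing with OpenCV; "input.jpg" is a placeholder.
import cv2
import numpy as np

def cartoonize(bgr, levels=4):
    # Separate brightness from color (HSV) and quantize V into `levels` shading steps.
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    h, s, v = cv2.split(hsv)
    step = 256 // levels
    v_int = v.astype(np.int32)
    v_quant = np.clip((v_int // step) * step + step // 2, 0, 255).astype(np.uint8)
    shaded = cv2.cvtColor(cv2.merge([h, s, v_quant]), cv2.COLOR_HSV2BGR)

    # Outline drawing: detect edges on the original brightness and draw them in black.
    edges = cv2.Canny(v, 80, 160)
    shaded[edges > 0] = (0, 0, 0)
    return shaded

if __name__ == "__main__":
    img = cv2.imread("input.jpg")          # placeholder input frame
    if img is not None:
        cv2.imwrite("cartoon.jpg", cartoonize(img))
```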

Massive Fluid Simulation Using a Responsive Interaction Between Surface and Wave Foams (수면거품과 웨이브거품의 미세한 상호작용을 이용한 대규모 유체 시뮬레이션)

  • Kim, Jong-Hyun
    • Journal of the Korea Computer Graphics Society / v.23 no.2 / pp.29-39 / 2017
  • This paper presents a unified framework to efficiently and realistically simulate surface and wave foams. The framework first projects the 3D water particles of an underlying water solver onto 2D screen space to reduce the computational cost of determining where foam particles should be generated. Because foam effects arise mainly in fast and complicated water flows, we analyze acceleration and curvature values to identify the areas exhibiting such flow patterns. Foam particles are emitted from the identified areas in 3D space, and each foam particle is advected according to its type, which is classified on the basis of velocity, thereby capturing the essential characteristics of foam wave motion. We improve the realism of the resulting foam by classifying it into two types: surface foam and wave foam. Wave foam is characterized by the sharp wave patterns of torrential flows, while surface foam retains a cloudy shape even in water with reduced motion. Based on these features, we propose a technique to correct the velocity and position of foam particles. In addition, we propose a kernel technique that uses screen-space density to efficiently remove redundant foam particles, improving overall memory efficiency without loss of visual detail in the foam effects. Experiments demonstrate that the proposed approach is efficient and easy to use while delivering high-quality results.
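The sketch below illustrates the screen-space emission test in a highly simplified form: particles with large acceleration and curvature are flagged as foam emission sites and projected with a view-projection matrix. The thresholds and the projection setup are made-up values, not the paper's implementation.

```python
# Conceptual foam-emission test with made-up thresholds; not the paper's implementation.
import numpy as np

def project_to_screen(positions, view_proj, width, height):
    """Project Nx3 world positions to pixel coordinates with a 4x4 view-projection matrix."""
    homo = np.hstack([positions, np.ones((len(positions), 1))])
    clip = homo @ view_proj.T
    ndc = clip[:, :3] / clip[:, 3:4]
    x = (ndc[:, 0] * 0.5 + 0.5) * width
    y = (1.0 - (ndc[:, 1] * 0.5 + 0.5)) * height
    return np.stack([x, y], axis=1)

def foam_candidates(acceleration, curvature, a_min=5.0, c_min=0.8):
    """Flag water particles in fast, strongly curved flow as foam emission sites."""
    return (np.linalg.norm(acceleration, axis=1) > a_min) & (curvature > c_min)

# Example with random particle data standing in for the water solver's output.
rng = np.random.default_rng(1)
pos = rng.uniform(-1, 1, (1000, 3))
acc = rng.normal(0, 4, (1000, 3))
curv = rng.uniform(0, 1.5, 1000)
mask = foam_candidates(acc, curv)
screen = project_to_screen(pos[mask], np.eye(4), 1280, 720)
print(f"{mask.sum()} emission candidates; first screen position: "
      f"{screen[0] if len(screen) else None}")
```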

Speech Animation Synthesis based on a Korean Co-articulation Model (한국어 동시조음 모델에 기반한 스피치 애니메이션 생성)

  • Jang, Minjung;Jung, Sunjin;Noh, Junyong
    • Journal of the Korea Computer Graphics Society / v.26 no.3 / pp.49-59 / 2020
  • In this paper, we propose a speech animation synthesis method specialized for Korean through a rule-based co-articulation model. Speech animation is widely used in cultural industries such as film, animation, and games that require natural and realistic motion. However, because audio-driven speech animation techniques have been developed mainly for English, the results for domestic content are often visually very unnatural; for example, dubbed voice-actor audio is played with no mouth motion at all or, at best, with an unsynchronized loop of a few simple mouth shapes. Language-independent speech animation models exist, but they are not specialized for Korean and do not yet provide quality sufficient for domestic content production. Therefore, we propose a natural speech animation synthesis method, driven by input audio and text, that reflects the linguistic characteristics of Korean. Reflecting the fact that vowels largely determine mouth shape in Korean, we define a co-articulation model that separates the lips and the tongue, solving the previous problems of lip distortion and occasional loss of some phoneme characteristics. Our model also reflects differences in prosodic features for improved dynamics in the speech animation. Through user studies, we verify that the proposed model can synthesize natural speech animation.
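A toy illustration of the lips/tongue separation idea follows: the vowel of a syllable selects the lip viseme, while consonants only set a tongue pose. The phoneme sets and pose names are invented for illustration and are not the paper's actual co-articulation rules.

```python
# Toy lips/tongue separation; phoneme sets and pose names are invented for illustration.
VOWEL_LIPS = {"a": "open_wide", "i": "spread", "u": "rounded", "o": "rounded_open", "eu": "neutral"}
CONSONANT_TONGUE = {"n": "tip_up", "l": "tip_up", "g": "back_up", "s": "tip_front"}

def visemes_for_syllable(onset, nucleus, coda=None):
    """Return (lip, tongue) targets for one Korean syllable (onset-nucleus-coda)."""
    lips = VOWEL_LIPS.get(nucleus, "neutral")        # the vowel dominates the mouth shape
    tongue = CONSONANT_TONGUE.get(onset, "rest")     # the consonant only moves the tongue
    if coda in CONSONANT_TONGUE:                     # a coda consonant may override the tongue pose
        tongue = CONSONANT_TONGUE[coda]
    return lips, tongue

print(visemes_for_syllable("n", "u"))        # ('rounded', 'tip_up')
print(visemes_for_syllable("g", "a", "n"))   # ('open_wide', 'tip_up')
```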

Development of Management and Evaluation System for Realistic Virtual Reality Field Training Exercise Contents: A Case Study (실감형 가상현실 실전훈련 콘텐츠를 위한 관리 평가 시스템 개발 사례연구)

  • Kim, J.;Park, D.;Lee, P.;Cho, J.;Yoon, S.H.;Park, S.
    • Journal of the Korea Computer Graphics Society / v.26 no.3 / pp.111-121 / 2020
  • Realistic training contents that exploit the intense immersion of virtual reality are being used in various fields such as industry, education, and medical care. High-risk, high-cost training in particular is difficult to conduct in reality, but with the latest virtual reality technology it can be experienced efficiently and safely in an environment similar to reality, which enhances its educational effectiveness. This study introduces a management system that systematically manages realistic virtual training contents and visualizes training results in schematic views based on defined evaluation elements. The management system stores the information generated by the content in a database and manages each trainee's training records in a practical way. In addition, multiple scenarios can be created from a single content by setting training goals, the number of participants, and how the evaluation elements are applied. This paper describes how the management system was built and presents its results, using virtual reality training content as an application example.
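As a rough sketch of how such a system might persist per-trainee records for later visualization, the snippet below stores evaluation-element scores in SQLite and aggregates them per trainee. The schema and field names are assumptions for illustration, not the system described in the paper.

```python
# Illustrative trainee-record storage; schema and field names are assumptions.
import sqlite3
from dataclasses import dataclass

@dataclass
class TrainingRecord:
    trainee: str
    scenario: str
    evaluation_element: str   # e.g. "procedure order", "response time"
    score: float

def save_records(db_path, records):
    with sqlite3.connect(db_path) as con:
        con.execute("""CREATE TABLE IF NOT EXISTS records
                       (trainee TEXT, scenario TEXT, element TEXT, score REAL)""")
        con.executemany("INSERT INTO records VALUES (?, ?, ?, ?)",
                        [(r.trainee, r.scenario, r.evaluation_element, r.score) for r in records])

def trainee_summary(db_path, trainee):
    """Average score per evaluation element for one trainee (input for a schematic view)."""
    with sqlite3.connect(db_path) as con:
        rows = con.execute("""SELECT element, AVG(score) FROM records
                              WHERE trainee = ? GROUP BY element""", (trainee,)).fetchall()
    return dict(rows)

save_records("training.db", [TrainingRecord("kim", "fire_drill", "response time", 82.0),
                             TrainingRecord("kim", "fire_drill", "procedure order", 91.0)])
print(trainee_summary("training.db", "kim"))
```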

Visual Feedback System for Manipulating Objects Using Hand Motions in Virtual Reality Environment (가상 환경에서의 손동작을 사용한 물체 조작에 대한 시각적 피드백 시스템)

  • Seo, Woong;Kwon, Sangmo;Ihm, Insung
    • Journal of the Korea Computer Graphics Society / v.26 no.3 / pp.9-19 / 2020
  • With the recent development of various kinds of virtual reality devices, there has been active research on increasing the sense of reality by recognizing users' physical behavior rather than relying on classical input methods. Among such devices, the Leap Motion controller recognizes the user's hand gestures and can realistically trace the user's hand in a virtual reality environment. However, when a recognized hand is used to manipulate a virtual object, the hand often passes through the object, which could not happen in the real world. This study presents a visual feedback system for enhancing the user's sense of interaction between hands and objects in virtual reality. The user's hand is examined precisely with a ray tracing method to determine whether it collides with a virtual object, and when a collision occurs, visual feedback is provided by reconstructing the hand: the positions of the fingertips that have penetrated the object are corrected using a signed distance field and inverse kinematics. This enables realistic interaction in virtual reality in real time.
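A simplified sketch of the correction step described above: if a fingertip sample has a negative signed distance to an object, it is pushed back along the SDF gradient by the penetration depth. A sphere SDF stands in for the object, and the inverse-kinematics update of the finger chain is omitted.

```python
# SDF-based fingertip correction sketch; a sphere SDF stands in for the virtual object.
import numpy as np

def sphere_sdf(p, center, radius):
    return np.linalg.norm(p - center) - radius

def sdf_gradient(sdf, p, eps=1e-4):
    """Central-difference gradient of the SDF, normalized to a unit direction."""
    grad = np.zeros(3)
    for i in range(3):
        d = np.zeros(3)
        d[i] = eps
        grad[i] = (sdf(p + d) - sdf(p - d)) / (2 * eps)
    return grad / np.linalg.norm(grad)

def correct_fingertip(p, sdf):
    dist = sdf(p)
    if dist >= 0:                             # no penetration: keep the tracked position
        return p
    return p - sdf_gradient(sdf, p) * dist    # move outward by the penetration depth

obj = lambda p: sphere_sdf(p, center=np.array([0.0, 0.0, 0.0]), radius=0.05)
tip = np.array([0.0, 0.0, 0.03])              # tracked fingertip inside the virtual object
print(correct_fingertip(tip, obj))            # pushed out to roughly [0, 0, 0.05]
```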

Technology to create a 360-degree panorama of a square room using a single projector and a hemispherical mirror (1대의 프로젝터와 반구형 반사경을 이용한 사각방 360도 파노라마 생성 기법)

  • Lee, Jung-jik;Park, Yoen-yong;Lee, Yun-sang;Lee, Jun-yuep;Jung, Eun-yeong;Yu, Rim;Kang, Myongjin;Jung, Moon-ryul
    • Journal of the Korea Computer Graphics Society / v.26 no.3 / pp.133-142 / 2020
  • In this research, we describe a method of implementing a 360-degree panorama using a single projector, covering both the hardware setup and the production of the pre-distorted projection image. We propose installing the projector and the reflector on the ceiling at the center of the space to minimize the shadows cast by spectators. In Unity, we built a virtual space and a virtual camera in which the positions of the projector and the hemisphere match those of the exhibition space. After the image to be shown was mapped onto the walls of the virtual space, the pre-distorted image was created by capturing it from the virtual camera using ray tracing. When this pre-distorted image is projected by the projector installed at the same position as the virtual camera and reflected off the hemispherical mirror, it is projected 360 degrees onto the panoramic screen.
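The core of the pre-distortion rendering is reflecting projector rays off the hemispherical mirror toward the walls. The sketch below shows a bare-bones ray-sphere intersection with mirror reflection; the mirror placement, radius, and example ray are arbitrary values, not the installation in the paper.

```python
# Ray-sphere intersection and mirror reflection; example geometry is arbitrary.
import numpy as np

def reflect_off_hemisphere(origin, direction, center, radius):
    """Return (hit_point, reflected_dir) for a ray hitting a sphere of given center/radius,
    or None if the ray misses. The lower half of the sphere models the hemispherical mirror."""
    d = direction / np.linalg.norm(direction)
    oc = origin - center
    b = 2.0 * np.dot(oc, d)
    c = np.dot(oc, oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None
    t = (-b - np.sqrt(disc)) / 2.0            # nearer intersection along the ray
    if t <= 0:
        return None
    hit = origin + t * d
    n = (hit - center) / radius               # outward surface normal
    reflected = d - 2.0 * np.dot(d, n) * n    # mirror reflection
    return hit, reflected

# A projector ray aimed slightly off the mirror axis is bent outward on reflection.
projector = np.array([0.02, -0.5, 0.0])
hit, out_dir = reflect_off_hemisphere(projector, np.array([0.0, 1.0, 0.0]),
                                      center=np.array([0.0, 0.0, 0.0]), radius=0.25)
print(hit, out_dir)
```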

Development of VR Healing Content 'NORNIR' Using Color Therapy (컬러테라피를 활용한 VR 힐링 콘텐츠, '노르니르' 개발)

  • Choi, Seyoung;Kim, Sujin;Lee, Nayoung;Lee, Kihan;Ko, Hyeyoung
    • Journal of the Korea Computer Graphics Society / v.26 no.3 / pp.143-153 / 2020
  • This study implements and proposes 'Nornir', VR color-therapy healing content for managing stress in daily life. 'Nornir' applies the CRR analysis method to provide a customized VR color-therapy experience based on the three colors selected by the user. It is designed so that users can understand themselves through their color journey, receive various color interactions and stimuli, and obtain healing that lowers their stress level. To check changes in user stress, the Korean version of the mood state test, 'K-POMS', was administered before and after the content demonstration. The experiments showed a clear decrease in negative emotions and an increase in positive emotions. By combining VR technology with the rules of color psychotherapy, the content offers a way to relieve stress for users who are frequently exposed to stress in daily life.
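As an illustration of the pre/post comparison described above, the snippet below runs a paired t-test on made-up K-POMS-style negative-mood scores; the numbers are not the study's data.

```python
# Paired pre/post comparison on made-up negative-mood scores (not the study's data).
import numpy as np
from scipy import stats

pre = np.array([18, 22, 15, 20, 17, 25, 19, 21])    # negative-mood subscale before the session
post = np.array([12, 17, 14, 15, 13, 19, 16, 18])   # the same participants afterwards

t_stat, p_value = stats.ttest_rel(pre, post)
print(f"mean change = {np.mean(post - pre):.1f}, t = {t_stat:.2f}, p = {p_value:.4f}")
```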

Automatic Sagittal Plane Detection for the Identification of the Mandibular Canal (치아 신경관 식별을 위한 자동 시상면 검출법)

  • Pak, Hyunji;Kim, Dongjoon;Shin, Yeong-Gil
    • Journal of the Korea Computer Graphics Society / v.26 no.3 / pp.31-37 / 2020
  • Identification of the mandibular canal path in Computed Tomography (CT) scans is important in dental implantology. Typically, prior to implant planning, dentists manually identify the mandibular canal on the sagittal plane in which the canal path is most clearly observed; however, this is time-consuming and requires extensive experience. In this paper, we propose a deep-learning-based framework that detects the desired sagittal plane automatically. It combines two main techniques: 1) a modified version of the iterative transformation network (ITN) method for obtaining initial planes, and 2) a fine searching method based on a convolutional neural network (CNN) classifier for detecting the desirable sagittal plane. This combination enables accurate plane detection, which is a limitation of the stand-alone ITN method. Tests on a number of CT datasets demonstrate that the proposed method achieves more satisfactory results than the ITN method alone. This allows dentists to identify the mandibular canal path efficiently and provides a foundation for future research into more efficient, automatic mandibular canal detection methods.
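A schematic sketch of the fine-search stage: candidate planes around the ITN-provided initial plane are scored and the best-scoring one is kept. The plane parameterization (offset, angle) and the score_plane stub standing in for the CNN classifier are illustrative assumptions.

```python
# Grid fine-search around an initial plane; score_plane is a dummy stand-in for the CNN.
import numpy as np

def score_plane(volume, plane_params):
    """Stand-in for the CNN classifier: how 'canal-like' the resampled plane looks.
    A dummy score with a peak near (2.0 mm, 0.05 rad) keeps the search loop runnable."""
    offset, angle = plane_params
    return -abs(offset - 2.0) - abs(angle - 0.05)

def fine_search(volume, init_offset=0.0, init_angle=0.0,
                offsets=np.linspace(-3, 3, 13), angles=np.linspace(-0.1, 0.1, 9)):
    best, best_score = None, -np.inf
    for do in offsets:
        for da in angles:
            params = (init_offset + do, init_angle + da)
            s = score_plane(volume, params)
            if s > best_score:
                best, best_score = params, s
    return best, best_score

volume = np.zeros((64, 64, 64))        # placeholder CT volume
print(fine_search(volume))             # returns the parameters closest to the dummy peak
```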