• Title/Summary/Keyword: visual layers

Search Result 127, Processing Time 0.023 seconds

A Study on Layer's Method Applied Long & Middle Hair Design (레이어 법칙을 활용한 긴 머리형과 중간 머리형의 디자인 연구)

  • Park, Sang-Kook;Seo, Yun-Kyung
    • Korean Journal of Human Ecology
    • /
    • v.18 no.3
    • /
    • pp.793-798
    • /
    • 2009
  • Hair cutting is one of the most useful technical tools for hair styling. Applying the rule of layers to hair design, this study analyzes by formative principles the balance, visual perception, and form that the technique creates, and aims to establish a basis on which students of hair beauty and practitioners in the field can create a variety of hair designs. The results can be outlined as follows. First, on the relationship between the step line and movement: the same-layer technique, in which the hair above and below is cut to the same length along a vertical section, produces an overall round form with appropriate volume and movement; the low layer is used when a slightly rounder feeling with less movement is wanted, and the high layer when light, abundant movement is wanted. Second, the rule of layers was analyzed in terms of the principles of over-direction, lifting, section, line control, and weight control, and the process by which these principles form the final shape was identified. Third, the rule of layers expands the expressive power of hair design: section and over-direction, together with line control, determine the length of the outline, while lifting adjusts form through weight control, widening the range of expression. It is hoped that hair designers will carry this research further into the formative aspects of layer styles, and that the results, despite the study's limitations, will serve as reference material for hair design practice and hair beauty education in the field.

Dependent Quantization for Scalable Video Coding

  • Pranantha, Danu;Kim, Mun-Churl;Hahm, Sang-Jin;Lee, Keun-Sik;Park, Keun-Soo
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2006.11a
    • /
    • pp.127-132
    • /
    • 2006
  • Quantization in video coding plays an important role in controlling the bit-rate of compressed video bit-streams. It has been used as an important control means to adjust the amount of bit-streams to the allowed bandwidth of delivery networks and storage. Due to the dependent nature of video coding, dependent quantization has been proposed and applied to MPEG-2 video coding to better maintain the quality of reconstructed frames under given target bit-rate constraints. Since Scalable Video Coding (SVC), currently being standardized, exhibits a highly dependent coding nature not only between frames but also between lower and higher scalability layers, where dependent quantization can be effectively applied, in this paper we propose a dependent quantization scheme for SVC and compare its performance in visual quality and bit-rate with the current JSVM reference software for SVC. The proposed technique exploits the frame dependences within each GOP of the SVC scalability layers to formulate dependent quantization. We utilize Lagrange optimization, which is widely accepted in R-D (rate-distortion) based optimization, and construct a trellis graph to find the optimal-cost path in the trellis by minimizing the R-D cost. The optimal-cost path in the trellis graph is the optimal set of quantization parameters (QPs) for the frames within a GOP. In order to reduce complexity, we employ a pruning procedure using the monotonicity property in the trellis optimization, and we cut the frame dependency at one GOP to decrease the dependency depth. The optimal Lagrange multiplier used for SVC is the same as in H.264/AVC, which is also used in the mode prediction of the JSVM reference software. The experimental results show that the dependent quantization outperforms the current JSVM reference software encoder, which takes a linearly increasing QP across temporal scalability layers. The superiority of the dependent quantization amounts to up to a 1.25 dB gain in PSNR and 20% bit savings for the enhancement layer of SVC.

  • PDF
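The trellis search described in the abstract above can be sketched as a small dynamic program: each stage is a frame in a GOP, each node a candidate QP, and each edge carries an R-D cost J = D + λR that depends on the predecessor's QP (the dependent-quantization coupling). The cost model below is synthetic, not the actual JSVM model; it only illustrates the Viterbi-style minimum-cost-path search:

```python
# Viterbi-style trellis search for a QP sequence minimizing total R-D cost.
# The cost model is a hypothetical stand-in: distortion grows with a frame's
# own QP and with its reference frame's QP; rate shrinks as QP grows.

def rd_cost(qp_prev, qp, frame_idx, lam=0.85):
    distortion = (qp ** 2) * 0.1 + (qp_prev ** 2) * 0.02 + frame_idx
    rate = 1000.0 / (1 + qp)
    return distortion + lam * rate

def best_qp_path(num_frames, qp_candidates):
    # cost[q] = minimal accumulated cost ending at QP q for the current frame;
    # each dict in `back` maps a QP to its best predecessor QP.
    # The first frame has no reference, so it is costed against itself.
    cost = {q: rd_cost(q, q, 0) for q in qp_candidates}
    back = []
    for t in range(1, num_frames):
        new_cost, pred = {}, {}
        for q in qp_candidates:
            # A monotonicity-based pruning step could skip dominated
            # predecessors here; for clarity we scan all of them.
            p = min(qp_candidates,
                    key=lambda qp_prev: cost[qp_prev] + rd_cost(qp_prev, q, t))
            new_cost[q] = cost[p] + rd_cost(p, q, t)
            pred[q] = p
        cost = new_cost
        back.append(pred)
    # Backtrack from the cheapest terminal node to recover the QP path.
    q = min(cost, key=cost.get)
    path = [q]
    for pred in reversed(back):
        q = pred[q]
        path.append(q)
    return list(reversed(path))

path = best_qp_path(num_frames=4, qp_candidates=[22, 27, 32, 37])
print(path)  # one QP per frame in the GOP
```

Cutting the dependency at one GOP, as the paper does, corresponds here to running the search independently per GOP so the trellis depth stays bounded.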

Study on the Visual Cells in the Retina of Macropodus ocellatus (Pisces, Osphronemidae) Freshwater Fish from Korea (한국산 담수어류 버들붕어, Macropodus ocellatus (Pisces, Osphronemidae) 망막의 시각세포에 관한 연구)

  • Kim, Jae Goo;Park, Jong Yong
    • Korean Journal of Ichthyology
    • /
    • v.29 no.3
    • /
    • pp.218-223
    • /
    • 2017
  • Using both light and scanning electron microscopy, the visual cells as well as the eyes of Macropodus ocellatus (Pisces, Osphronemidae) were investigated. This species had a circular lens and a yellowish cornea. The eye diameter was 3.5±0.2 mm, which is 31.1±3.0% of head length. The retina (158.2±10.6 μm) was built of several layers, including the visual cell layer, which consists of three types of cells: single cones (27.8±1.6 μm), equal double cones (33.9±3.7 μm), and large rods (57.3±1.3 μm). The visual cell layer was arranged in a regular pattern. All visual cells were clearly divided into two parts, the inner and outer segments. The elongated rod cells extended to the bottom of the retinal pigment epithelium. In scanning electron microscopy, the outer segment is linked to the inner segment by so-called calyceal processes. The single and double cones of M. ocellatus form a flower-petal arrangement, a regular mosaic pattern containing quadrilateral units of four double cones surrounding a single cone.

An Adaptive FLIP-Levelset Hybrid Method for Efficient Fluid Simulation (효율적인 유체 시뮬레이션을 위한 FLIP과 레벨셋의 적응형 혼합 기법)

  • Lim, Jae-Gwang;Kim, Bong-Jun;Hong, Jeong-Mo
    • Journal of the Korea Computer Graphics Society
    • /
    • v.19 no.3
    • /
    • pp.1-11
    • /
    • 2013
  • The Fluid Implicit Particle (FLIP) method is used frequently in the Visual Effects (VFX) industry because FLIP-based simulations show high performance with good visual quality. However, in large-scale fluid simulations the efficiency of the FLIP method is low because it requires many particles to represent a large volume of water. In this paper, we propose a novel hybrid method of simulating fluids to address this drawback. To improve the performance of the FLIP method by reducing the number of particles, particles are deployed only inside a thin layer beneath the surface of the water volume. The coupling between the less-dissipative solutions of the FLIP method and the viscosity solutions of the level set method is achieved by introducing a new surface reconstruction method motivated by an existing surface reconstruction method [1] and the moving least squares (MLS) method [2]. Our hybrid method can efficiently generate high-quality water simulations with various multiscale features.
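The central idea of the abstract above, seeding FLIP particles only in a thin layer just inside the liquid surface while the level set handles the interior, can be sketched with a signed-distance field. This is a toy illustration under assumed conventions (φ < 0 inside the liquid, grid stored as a dict), not the authors' implementation:

```python
import random

def seed_narrow_band(phi, cell_size, band_width, particles_per_cell=4):
    """Seed FLIP particles only in cells whose signed distance phi lies in a
    thin band just inside the surface (phi < 0 inside the liquid).  Deep
    interior cells (phi <= -band_width) are left to the level set solver."""
    particles = []
    for (i, j), d in phi.items():
        if -band_width < d < 0.0:  # thin layer just beneath the surface
            for _ in range(particles_per_cell):
                # Jitter particles uniformly within the cell.
                particles.append((
                    (i + random.random()) * cell_size,
                    (j + random.random()) * cell_size,
                ))
    return particles

# Toy field: phi decreases with depth j, so the surface sits at j = 0.
phi = {(i, j): -float(j) for i in range(4) for j in range(8)}
pts = seed_narrow_band(phi, cell_size=1.0, band_width=2.5)
print(len(pts))  # particles appear only in the two cell rows nearest the surface
```

Reducing the particle count to this band is what makes the hybrid cheaper than full-volume FLIP while keeping splash detail near the interface.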

Fashion Typography from a Conceptual Art Perspective (개념미술 관점의 패션 타이포그래피)

  • Park, Soo-Jin
    • Journal of the Korea Convergence Society
    • /
    • v.11 no.1
    • /
    • pp.109-117
    • /
    • 2020
  • This study aims to analyze typographic expressions used in recent fashion trends from the perspective of conceptual art and thereby identify a variety of meanings. To this end, theoretical considerations were made regarding the main concepts of the research, such as typography and conceptual art, and the ready-made, documentation, intervention, and language, which are the expressive features of conceptual art, were applied to fashion typography as analytical frameworks. The results are as follows. The ready-made appears in ways that borrow or transform the visual identity of other brands, and documentation is utilized in ways that juxtapose tautological or contradictory texts. Intervention arises, leading to more complex layers of meaning, when the visual identity of a conceptually unrelated brand is borrowed. Language is expressed as statements on contemporary social issues such as environmental protection, ethical consumption, and gender. Based on these findings, it can be confirmed that in fashion design typography serves as an effective marketing tool and a medium of social statement, and that as a novel mode of visual expression it opens the possibility of generating new meanings.

Transfer Learning using Multiple ConvNet Layers Activation Features with Principal Component Analysis for Image Classification (전이학습 기반 다중 컨볼류션 신경망 레이어의 활성화 특징과 주성분 분석을 이용한 이미지 분류 방법)

  • Byambajav, Batkhuu;Alikhanov, Jumabek;Fang, Yang;Ko, Seunghyun;Jo, Geun Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.1
    • /
    • pp.205-225
    • /
    • 2018
  • The Convolutional Neural Network (ConvNet) is one class of powerful Deep Neural Network that can analyze and learn hierarchies of visual features. The first such neural network (the Neocognitron) was introduced in the 1980s. At that time, neural networks were not broadly used in either industry or academia because of the shortage of large-scale datasets and low computational power. A few decades later, however, in 2012, Krizhevsky made a breakthrough in the ILSVRC-12 visual recognition competition using a Convolutional Neural Network. That breakthrough revived people's interest in neural networks. The success of the Convolutional Neural Network rests on two main factors. The first is the emergence of advanced hardware (GPUs) for sufficient parallel computation. The second is the availability of large-scale datasets, such as the ImageNet (ILSVRC) dataset, for training. Unfortunately, many new domains are bottlenecked by these factors. For most domains, gathering a large-scale dataset to train a ConvNet is difficult and requires a lot of effort. Moreover, even with a large-scale dataset, training a ConvNet from scratch requires expensive resources and is time-consuming. These two obstacles can be overcome with transfer learning, a method for transferring knowledge from a source domain to a new domain. There are two major transfer learning cases: using the ConvNet as a fixed feature extractor, and fine-tuning the ConvNet on a new dataset. In the first case, a pre-trained ConvNet (e.g., trained on ImageNet) computes feed-forward activations of the image, and activation features are extracted from specific layers. In the second case, the ConvNet classifier is replaced and retrained on the new dataset, and the weights of the pre-trained network are fine-tuned with backpropagation. In this paper, we focus on using multiple ConvNet layers as a fixed feature extractor only.
However, applying the high-dimensional features extracted directly from multiple ConvNet layers is still a challenging problem. We observe that features extracted from different ConvNet layers capture different characteristics of the image, which means a better representation could be obtained by finding the optimal combination of multiple ConvNet layers. Based on that observation, we propose to employ multiple ConvNet layer representations for transfer learning instead of a single ConvNet layer representation. Overall, our pipeline has three steps. First, images from the target task are fed forward through a pre-trained AlexNet, and the activation features of its three fully connected layers are extracted. Second, the activation features of the three layers are concatenated to obtain a multiple-ConvNet-layer representation, which carries more information about the image; when the three fully connected layer features are concatenated, the resulting image representation has 9,192 (4,096+4,096+1,000) dimensions. However, features extracted from multiple ConvNet layers are redundant and noisy, since they come from the same ConvNet. Thus, in a third step, we use Principal Component Analysis (PCA) to select salient features before the training phase. With salient features, the classifier can classify images more accurately, and the performance of transfer learning improves. To evaluate the proposed method, experiments were conducted on three standard datasets (Caltech-256, VOC07, and SUN397) to compare multiple-ConvNet-layer representations against single-layer representations, using PCA for feature selection and dimension reduction. Our experiments demonstrate the importance of feature selection for the multiple-ConvNet-layer representation.
Moreover, our proposed approach achieved 75.6% accuracy compared to the 73.9% achieved by the FC7 layer on the Caltech-256 dataset, 73.1% compared to the 69.2% achieved by the FC8 layer on the VOC07 dataset, and 52.2% compared to the 48.7% achieved by the FC7 layer on the SUN397 dataset. We also show that our approach achieved superior performance, with accuracy improvements of 2.8%, 2.1%, and 3.1% on Caltech-256, VOC07, and SUN397, respectively, compared to existing work.
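The three-step pipeline described in the abstract above (concatenate activations from several layers, then PCA before the classifier) can be sketched with stand-in features. The layer widths (4096, 4096, 1000 for AlexNet's fully connected layers) come from the abstract; the random features and the NumPy-only PCA below are illustrative assumptions, not the paper's code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in activations for N images from AlexNet's three fully connected
# layers (FC6, FC7, FC8 have 4096, 4096 and 1000 units respectively).
n_images = 50
fc6 = rng.standard_normal((n_images, 4096))
fc7 = rng.standard_normal((n_images, 4096))
fc8 = rng.standard_normal((n_images, 1000))

# Steps 1-2: concatenate the multiple ConvNet layer representations.
features = np.concatenate([fc6, fc7, fc8], axis=1)  # shape (N, 9192)

# Step 3: PCA via SVD, keeping the top-k principal components to strip
# the redundancy and noise of the concatenated representation.
def pca_reduce(X, k):
    Xc = X - X.mean(axis=0)                      # center the features
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ vt[:k].T                         # project onto top-k components

reduced = pca_reduce(features, k=32)
print(features.shape, reduced.shape)  # (50, 9192) (50, 32)
```

The `reduced` matrix would then be fed to an ordinary classifier (e.g. a linear SVM) in place of any single layer's activations.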

Study on the Fine Structure of Retina of Anterior Lateral Eyes in Pardosa astrigera L. Koch (Aranea: Lycosidae) (별늑대거미 (Pardosa astrigera L. Koch) 전측안(前側眼) 망막(綱膜)의 미세구조(微細構造)에 관한 연구)

  • Jeong, Moon-Jin;Moon, Myung-Jin
    • Applied Microscopy
    • /
    • v.24 no.3
    • /
    • pp.1-9
    • /
    • 1994
  • Pardosa astrigera possessed eight eyes arranged in three rows on the frontal carapace. A pair of small anterior lateral eyes (ALE), each flanked by an anterior median eye (AME), lay along the anterior margin of the clypeus in the anterior row. The anterior lateral eye was composed of a cornea, a vitreous body, and a retina. The cornea was made up mainly of exocuticle lining the cuticle. The lens of the anterior lateral eye was of the biconvex type, bulging into the cavity of the eyecup. The outer and inner central regions of the lens were approximately spherical, with radii of curvature of 5.6 μm and 12.5 μm, respectively. The vitreous body formed a biconcave layer between the cuticular lens and the retina. The retina of the anterior lateral eyes was composed of three types of cells: visual cells, glial cells, and pigment cells. The visual cells were unipolar neurons, as are the receptors of the posterior lateral eye, but their cell bodies were unique to the anterior lateral eyes: they were giant cells, relatively few in number, lying under the layer of vitreous bodies. Each visual cell held rhabdomeres along a short stretch beneath the cell body. The rhabdoms formed an irregular pattern in the retina, with electron-dense pigment granules scattered between them. Glial cells were situated at the cell bodies of the visual cells, and glial cell processes reached the rhabdomere region. Below the rhabdoms, the tapetum, composed of 4-5 layers, lay about 30 μm from the lens. The intermediate segment of the distal portion of the visual cell was about 25 μm in length, and electron-dense pigment granules were observed between the intermediate segments.

  • PDF

Fine Structure of Retinae of Cephalopods (Todarodes pacificus And Octopus minor) Inhabiting the Korean Waters I (한국 연근해산 두족류 (Todarodes pacificus And Octopus minor) 망막 (Retina)의 미세구조 I)

  • Han, Jong-Min;Chang, Nam-Sub
    • Applied Microscopy
    • /
    • v.32 no.1
    • /
    • pp.17-30
    • /
    • 2002
  • The retinae of Todarodes pacificus and Octopus minor are divided into four layers: an outer segment, a rod base region, an inner segment, and a plexiform layer. The retina of Octopus minor is about 20 μm thicker (400~420 μm) than that of Todarodes pacificus (385~400 μm). The retina is composed of visual cells and supporting cells. Microvilli 0.6~0.7 μm in length are densely packed on top of the supporting cells of Octopus minor, while they are not found in Todarodes pacificus. The visual cells and supporting cells have pigment granules that exclude light. In Todarodes pacificus, the pigment granules of the visual cells are larger (2.0×0.5 μm) than those of the supporting cells (1.0×0.3 μm), whereas in Octopus minor the sizes in both cell types are similar. In the upper portion of a visual cell, comb-shaped microvilli form a rhabdom (diameter 60 nm) with a hexagonal structure. The rhabdom consists of four rhabdomeres, and the total area of a rhabdom of Octopus minor is larger than that of Todarodes pacificus. The synaptosomes constituting the plexiform layer of Todarodes pacificus are divided into two types, possessing electron-dense-core vesicles and electron-lucent vesicles, respectively. Octopus minor also has two types of synaptosomes, but one type comprises a mixture of electron-dense and electron-lucent vesicles and the other electron-lucent vesicles only, which differs from the case of Todarodes pacificus.

Visual Cells in the Retina of Iksookimia longicorpa (Pisces; Cobitidae) of Korea (한국산 미꾸리과 어류 왕종개 Iksookimia longicorpa 망막의 시각세포)

  • Kim, Jae Goo;Park, Jong Young
    • Korean Journal of Ichthyology
    • /
    • v.27 no.4
    • /
    • pp.257-262
    • /
    • 2015
  • The visual cells in the retina of Iksookimia longicorpa (Pisces, Cobitidae) were investigated by light and scanning electron microscopy. The retina (216.42±13.36 μm) has several layers, and the visual cell layer consists of unequal double cones and large rods. In a double cone, the two members are unequal, one cone being longer than the other (long element 26.42±1.7 μm, short element 16.82±1.1 μm). The cones form a row mosaic pattern in which the partners of the double cones are linearly oriented with a large rod. The visual cells observed have an outer segment (hematophilic) and an inner segment (eosinophilic). In scanning electron microscopy, the outer segment is linked to the inner segment by so-called calyceal piles (calyceal processes) of membrane discs surrounded by double membranes.

An automated visual inspection of solder joints using 2D and 3D features (2차원 및 3차원 특징값을 이용한 납땜 시각 검사)

  • 김태현;문영식;박성한
    • Journal of the Korean Institute of Telematics and Electronics B
    • /
    • v.33B no.11
    • /
    • pp.53-61
    • /
    • 1996
  • In this paper, efficient techniques for solder joint inspection are described. Using three layers of ring-shaped LEDs with different illumination angles, three frames of images are sequentially obtained. From these images the regions of interest (soldered regions) are segmented, and their characteristic features, including the average gray level and the percentage of highlights (referred to as 2D features), are extracted. Based on the backpropagation algorithm of neural networks, each solder joint is classified into one of the pre-defined types. If the output value is not in the confidence interval, the distribution of tilt angles (referred to as 3D features) is calculated, and the solder joint is classified with a Bayes classifier. The second classifier requires more computation while providing more information and better performance. The proposed inspection system has been implemented and tested with various types of solder joints in SMDs. The experimental results have verified the validity of this scheme in terms of speed and recognition rate.

  • PDF
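The two-stage decision in the abstract above — accept the neural network's verdict when its output is decisive, otherwise fall back to a Bayes classifier on 3D (tilt-angle) features — can be sketched as follows. The thresholds, the single-feature Gaussian class models, and the class names are illustrative assumptions, not values from the paper:

```python
import math

# Hypothetical per-class Gaussian models (mean, std) of the mean tilt angle,
# a 3D feature derived from the three ring-LED images.
CLASS_MODELS = {"good": (10.0, 3.0), "insufficient": (25.0, 5.0)}

def classify_solder_joint(nn_output, tilt_angle_mean, confidence=(0.2, 0.8)):
    """Stage 1: trust the backpropagation network when its scalar output
    falls outside the ambiguous band.  Stage 2: otherwise classify by a
    maximum-likelihood Gaussian Bayes rule (equal priors assumed)."""
    lo, hi = confidence
    if nn_output <= lo:
        return "good"
    if nn_output >= hi:
        return "insufficient"

    def likelihood(x, mu, sigma):
        # Gaussian density; priors are equal so it suffices to compare these.
        return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

    return max(CLASS_MODELS, key=lambda c: likelihood(tilt_angle_mean, *CLASS_MODELS[c]))

print(classify_solder_joint(0.1, 12.0))  # decisive NN output -> "good"
print(classify_solder_joint(0.5, 24.0))  # ambiguous -> Bayes decision on tilt angle
```

The design mirrors the paper's trade-off: the cheap 2D/NN stage handles the clear cases, and the costlier 3D Bayes stage is invoked only when needed.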