• Title/Summary/Keyword: color software

Search Result 446, Processing Time 0.03 seconds

The Analysis on the Trend of the Women's Wear Researches - In Consideration of the Apparel Related Journals Publication Listed on the KCI(Korea Citation Index) from 2001 to 2010 - (여성복 관련 연구경향 분석 - 2001~2010년까지 학회지 게재논문 중심으로 -)

  • Park, Se Hee;Park, Gin Ah
    • Journal of the Korean Society of Costume
    • /
    • v.62 no.8
    • /
    • pp.1-18
    • /
    • 2012
  • The purpose of the study was to offer an in-depth understanding of the women's wear research trend in South Korea and thus to provide insights for setting appropriate directions for the further development of women's wear related research in the clothing and textiles field. The study considered research papers published by the 6 major apparel-related journals listed on the KCI (Korea Citation Index), i.e. the journals of the Korean Society of Clothing and Textiles (KSCT), the Korean Society of Costume (KSC), the Costume Culture Association (CCA), the Korean Society of Fashion Business (KSFB), the Korean Home Economics Association (KHEA) and the Korean Society for Clothing Industry (KSCI). A total of 380 research papers related to women's wear, published from 2001 to 2010, were selected and analyzed with descriptive statistics using SPSS software ver. 18.0. The analysis was categorized by journal, year and research theme. The research themes were divided into categories such as clothing construction, textile science, fashion aesthetics and design, costume history and culture, apparel psychology and fashion marketing. The results were: (1) the 380 women's wear papers represented 5.6% of the 6,815 papers published by the 6 journals from 2001 to 2010; (2) the journal of the KSCT published the most women's wear research papers (N=149, 39.2%), followed by the journals of the CCA (N=69, 18.2%), the KSC (N=68, 17.9%), the KSFB (N=52, 13.7%), the KHEA (N=39, 10.3%) and the KSCI (N=3, 0.7%); (3) the research themes ranked in the order of case studies in marketing (N=135, 35.5%), body measurements and sizing systems in clothing construction (N=88, 23.2%), fashion design and aesthetics (N=83, 21.8%), pattern-making (N=63, 16.6%) and color studies (N=11, 2.9%).

Characterization of Rabbit Retinal Ganglion Cells with Multichannel Recording (다채널기록법을 이용한 토끼 망막 신경절세포의 특성 분석)

  • Cho Hyun Sook;Jin Gye-Hwan;Goo Yong Sook
    • Progress in Medical Physics
    • /
    • v.15 no.4
    • /
    • pp.228-236
    • /
    • 2004
  • Retinal ganglion cells transmit the visual scene as action potentials to the visual cortex through the optic nerve. Conventional recording with a single intra- or extracellular electrode reveals the response of one neuron at a time, so it cannot show how nerve impulses in a population of retinal ganglion cells collectively encode the visual stimulus; that requires recording the electrical signals of many neurons simultaneously. Recent advances in multi-electrode recording have brought us closer to understanding how visual information is encoded by populations of retinal ganglion cells. We examined how ganglion cells act together to encode a visual scene with a multi-electrode array (MEA). With light stimulation (on duration: 2 sec, off duration: 5 sec) generated on a color monitor driven by custom-made software, we isolated three functional types of ganglion cell activity: ON (35.0 ± 4.4%), OFF (31.4 ± 1.9%), and ON/OFF cells (34.6 ± 5.3%) (total number of retinal pieces = 8). We observed that nearby neurons often fire action potentials in near synchrony (< 1 ms), and this narrow correlation is seen among cells within a cluster of 6~8 cells. As there are many more synchronized firing patterns than ganglion cells, such a distributed code might allow the retina to compress a large number of distinct visual messages into a small number of ganglion cells.
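The near-synchrony criterion above (spike pairs within 1 ms) can be sketched as a simple two-pointer sweep over sorted spike trains. This is a minimal illustration, not the authors' analysis code; the spike times and the 1 ms window below are made-up values.

```python
# Sketch: counting near-synchronous firing (< 1 ms lag) between two
# ganglion-cell spike trains, as in the MEA analysis described above.

def synchronous_pairs(train_a, train_b, window=0.001):
    """Count spike pairs from two sorted spike-time lists (seconds)
    that fall within `window` seconds of each other."""
    count, j = 0, 0
    for t in train_a:
        # advance past spikes in train_b that end before the window
        while j < len(train_b) and train_b[j] < t - window:
            j += 1
        k = j
        while k < len(train_b) and train_b[k] <= t + window:
            count += 1
            k += 1
    return count

cell1 = [0.0100, 0.0500, 0.1200, 0.3000]
cell2 = [0.0104, 0.0700, 0.1195, 0.4000]
print(synchronous_pairs(cell1, cell2))  # two pairs fall within 1 ms
```

Applied pairwise across all recorded cells, such counts (against a shuffled-spike baseline) are what reveal the clustered, correlated firing the abstract describes.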


Comparative esthetic evaluation of anterior zone with immediate, early, and delay implantation (전치부 영역 임플란트의 식립 시기에 따른 심미적 평가)

  • Kim, Jung-Hwa;Seo, Seong-Yong;Kim, Na-Hong;Yu, Jung-Hyun;Lee, Dong -Woon
    • Journal of the Korean Academy of Esthetic Dentistry
    • /
    • v.26 no.1
    • /
    • pp.17-23
    • /
    • 2017
  • Purpose: This retrospective study evaluated whether the timing of implant placement is related to the esthetic outcome. Materials and Methods: Among the patients who underwent single implant surgery in the anterior area from 2010 to 2013 at the Veterans Health Service Medical Center, 34 implants in 27 patients (24 male and 3 female) were selected and categorized into 3 groups according to the timing of placement: group D (Delayed), group E (Early) and group I (Immediate). The esthetic index used was the Pink Esthetic Score (PES), which comprises 7 variables: mesial papilla, distal papilla, level of the soft-tissue margin, soft-tissue contour, alveolar process, soft-tissue color, and soft-tissue texture. Each variable is scored from 0 to 2, for a maximum total of 14 points. All patients received regular follow-up for at least 1 year. One examiner measured the PES on intraoral photographs. Each patient was treated as a statistical unit. Statistical analyses were performed using commercially available statistical software (SPSS Statistics 21.0, IBM Corp., Armonk, NY, USA). The Kruskal-Wallis test was used for inter-group comparisons, with statistical significance set at P<0.05. Result: Mean scores in Group D, Group I, and Group E were 11.5 ± 1.5, 11.4 ± 1.8, and 11.3 ± 1.8 respectively; the Kruskal-Wallis test showed no significant differences (P=0.989). Conclusion: Within the limits of this study, the results suggest that clinically acceptable esthetic results can be achieved with all three treatment protocols, although various factors may influence the esthetic outcome.
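The group comparison above rests on the Kruskal-Wallis H statistic, which compares mean ranks across groups. A bare-bones version (without the tie correction that SPSS or scipy.stats.kruskal would apply) can be sketched as follows; the PES scores below are invented illustrations, not the study's data.

```python
# Sketch: Kruskal-Wallis H for comparing PES scores across the three
# placement groups (Delayed / Early / Immediate). No tie correction.

def kruskal_h(*groups):
    """Rank all observations jointly (average ranks for ties),
    then compare rank sums across groups."""
    pooled = sorted((v, gi) for gi, g in enumerate(groups) for v in g)
    n_total = len(pooled)
    rank_sums = [0.0] * len(groups)
    i = 0
    while i < n_total:
        j = i
        while j < n_total and pooled[j][0] == pooled[i][0]:
            j += 1
        avg_rank = (i + 1 + j) / 2  # mean of ranks i+1 .. j
        for k in range(i, j):
            rank_sums[pooled[k][1]] += avg_rank
        i = j
    return 12 / (n_total * (n_total + 1)) * sum(
        rs * rs / len(g) for rs, g in zip(rank_sums, groups)
    ) - 3 * (n_total + 1)

delayed   = [12, 11, 13, 10]
early     = [11, 12, 10, 12]
immediate = [13, 11, 12, 9]
print(round(kruskal_h(delayed, early, immediate), 3))
```

With near-identical group distributions, as reported in the study (P=0.989), H stays close to zero.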

Development of a Prototype Patient Monitoring System with Module-Based Bedside Units and Central Stations: Overall Architecture and Specifications (모듈형 환자감시기와 중앙 환자감시기로 구성되는 환자감시시스템 시제품의 개발: 전체구조 및 사양)

  • Woo, E.J.;Park, S.H.;Jun, B.M.;Moon, C.W.;Lee, H.C.;Kim, S.T.;Kim, H.J.;Seo, J.J.;Chae, K.M.;Park, J.C.;Choi, K.H.;Lee, W.J.;Kim, K.S.
    • Proceedings of the KOSOMBE Conference
    • /
    • v.1996 no.05
    • /
    • pp.315-319
    • /
    • 1996
  • We have developed a prototype patient monitoring system including module-based bedside units, an interbed network, and central stations. A bedside unit consists of a color monitor and a main CPU unit with peripherals including a module controller; it can also include up to 3 module cases and 21 different modules. In addition to the 3-channel recorder module, six physiological parameters are provided as parameter modules: ECG, respiration, invasive blood pressure, noninvasive blood pressure, body temperature, and arterial pulse oximetry with plethysmograph. Modules and the module controller communicate at up to a 1 Mbps data rate through an intrabed network based on RS-485 and the HDLC protocol. Bedside units can display up to 12 channels of waveforms with any related numeric information simultaneously, while communicating with other bedside units and central stations through an interbed network based on 10 Mbps Ethernet and the TCP/IP protocol. Software for the bedside units and central stations fully utilizes graphical user interface techniques, and all functions are controlled by a rotate/push button on the bedside unit and a mouse on the central station. The entire system satisfies the requirements of AAMI and ANSI standards in terms of electrical safety and performance. In order to accommodate more advanced data management capabilities such as 24-hour full disclosure, we are developing a relational database server dedicated to the patient monitoring system. We are also developing a clinical workstation with which physicians can review and examine patient data through various kinds of computer networks for diagnosis and report generation. Portable bedside units with LCD displays and wired or wireless data communication capability will be developed in the near future, and new parameter modules including cardiac output, capnography, and other gas analysis functions will be added.


Text Region Extraction from Videos using the Harris Corner Detector (해리스 코너 검출기를 이용한 비디오 자막 영역 추출)

  • Kim, Won-Jun;Kim, Chang-Ick
    • Journal of KIISE:Software and Applications
    • /
    • v.34 no.7
    • /
    • pp.646-654
    • /
    • 2007
  • In recent years, the use of text inserted into TV contents has grown to provide viewers with better visual understanding. In this paper, video text is defined as superimposed text located at the bottom of the video frame. Video text extraction is the first step for video information retrieval and video indexing. Most video text detection and extraction methods in previous work are based on text color, contrast between text and background, edges, character filters, and so on. However, video text extraction suffers from the low resolution of video and complex backgrounds. To solve these problems, we propose a method to extract text from videos using the Harris corner detector. The proposed algorithm consists of four steps: corner map generation using the Harris corner detector, extraction of text candidates based on the density of corners, text region determination using labeling, and post-processing. The proposed algorithm is language independent and can be applied to text of various colors. Text region update between frames is also exploited to reduce the processing time. Experiments on diverse videos confirm the efficiency of the proposed method.
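The second step of the pipeline above, extracting text candidates from corner density, can be sketched as block-wise thresholding of a binary corner map. This is an illustration only: the corner map below is hand-made (in practice it would come from a Harris detector such as cv2.cornerHarris), and the block size and density threshold are assumed values.

```python
# Sketch: text-candidate extraction by corner density. Superimposed
# caption text produces tightly packed corners, so blocks whose corner
# count exceeds a threshold are kept as candidates.

def dense_blocks(corner_map, block=4, min_corners=3):
    """Return (row, col) origins of block-size tiles whose corner
    count meets the threshold -- candidate text regions."""
    h, w = len(corner_map), len(corner_map[0])
    hits = []
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            count = sum(
                corner_map[r + dr][c + dc]
                for dr in range(block) for dc in range(block)
            )
            if count >= min_corners:
                hits.append((r, c))
    return hits

# 8x8 map: corners clustered in the bottom-left 4x4 tile (text-like),
# plus one isolated corner that should be rejected
corner_map = [[0] * 8 for _ in range(8)]
for r, c in [(4, 0), (5, 1), (6, 2), (7, 3), (0, 7)]:
    corner_map[r][c] = 1
print(dense_blocks(corner_map))  # only the dense bottom-left tile
```

The labeling and post-processing steps would then merge adjacent candidate tiles into text regions.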

Automatic 3D data extraction method of fashion image with mannequin using watershed and U-net (워터쉐드와 U-net을 이용한 마네킹 패션 이미지의 자동 3D 데이터 추출 방법)

  • Youngmin Park
    • The Journal of the Convergence on Culture Technology
    • /
    • v.9 no.3
    • /
    • pp.825-834
    • /
    • 2023
  • The demands of people who purchase fashion products via Internet shopping are gradually increasing, and attempts are being made to provide user-friendly 3D contents through web 3D software instead of the pictures and videos of products currently provided. The most pressing complaint in the fashion web shopping industry is that the product received differs from the image shown at the time of purchase. Various image processing technologies have been introduced to address this, but 2D images impose a limit on quality. In this study, we propose an automatic conversion technology that converts 2D images into 3D, grafts them onto web 3D technology that allows customers to examine products from various viewpoints, and reduces the cost and computation time required for conversion. We developed a system that photographs a mannequin on a rotating turntable using only 8 cameras. To extract only the clothing from the captured images, markers are removed using U-net, and we propose an algorithm that isolates the clothing area by identifying the color feature information of the background and mannequin areas. Using this algorithm, extracting the clothing area takes 2.25 seconds per image, or a total of 144 seconds (2 minutes and 24 seconds) for the 64 images of one piece of clothing. The system extracts 3D objects with very good performance compared to existing systems.
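The color-based separation step above can be sketched as keeping only pixels that are far, in RGB distance, from both the background color and the mannequin color. The two reference colors and the threshold are illustrative assumptions, and the U-net marker-removal stage the paper pairs this with is not reproduced here.

```python
# Sketch: clothing-area extraction from the color features of the
# background and mannequin regions, per the abstract above.

def clothing_mask(pixels, background, mannequin, thresh=60):
    """Mark a pixel 1 (clothing) only if its squared RGB distance to
    both reference colors exceeds the squared threshold."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    t2 = thresh * thresh
    return [
        1 if dist2(p, background) > t2 and dist2(p, mannequin) > t2 else 0
        for p in pixels
    ]

bg = (0, 255, 0)        # hypothetical green-screen background color
skin = (230, 200, 180)  # hypothetical mannequin surface color
row = [(0, 250, 5), (200, 30, 40), (228, 205, 178), (10, 10, 120)]
print(clothing_mask(row, bg, skin))  # garment pixels kept, rest dropped
```

Run per image over all 64 turntable views, such masks yield the clothing-only silhouettes used for 3D reconstruction.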

Comparative Analysis of Self-supervised Deephashing Models for Efficient Image Retrieval System (효율적인 이미지 검색 시스템을 위한 자기 감독 딥해싱 모델의 비교 분석)

  • Kim Soo In;Jeon Young Jin;Lee Sang Bum;Kim Won Gyum
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.12 no.12
    • /
    • pp.519-524
    • /
    • 2023
  • In hashing-based image retrieval, the hash code of a manipulated image differs from that of the original, making it difficult to retrieve the same image. This paper proposes and evaluates a self-supervised deep hashing model that generates perceptual hash codes from feature information such as the texture, shape, and color of images. The comparison models are autoencoder-based variational inference models whose encoders are designed with fully connected layers, convolutional neural networks, and transformer modules. The proposed model is a variational inference model that includes a SimAM module for extracting geometric patterns and positional relationships within images. The SimAM module can learn latent vectors highlighting objects or local regions through an energy function based on the activation values of neurons and their surrounding neurons. The proposed method is a representation learning model that generates low-dimensional latent vectors from high-dimensional input images, and the latent vectors are binarized into distinguishable hash codes. Experimental results on public datasets such as CIFAR-10, ImageNet, and NUS-WIDE show that the proposed model is superior to the comparison models and performs on par with supervised deep hashing models. The proposed model can be used in application systems that require low-dimensional representations of images, such as image search or copyright image determination.
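The retrieval mechanism described above (binarizing latent vectors into hash codes and matching by code similarity) can be sketched with sign thresholding and Hamming distance. The latent vectors below are toy numbers, not outputs of the SimAM-augmented encoder; the point is only that a perceptual code changes little under mild manipulation.

```python
# Sketch: latent-vector binarization and Hamming-distance matching,
# the final retrieval step of the deep hashing pipeline above.

def to_hash(latent):
    """Binarize a real-valued latent vector into a 0/1 hash code."""
    return tuple(1 if v > 0 else 0 for v in latent)

def hamming(a, b):
    """Number of differing bits between two equal-length hash codes."""
    return sum(x != y for x, y in zip(a, b))

original  = to_hash([0.9, -0.2, 0.4, -0.7, 0.1, 0.3])
edited    = to_hash([0.8, -0.1, 0.5, -0.6, -0.05, 0.2])  # mild change
unrelated = to_hash([-0.3, 0.7, -0.9, 0.2, 0.6, -0.4])

# the edited image stays close in code space; the unrelated one does not
print(hamming(original, edited), hamming(original, unrelated))
```

A retrieval system then returns all stored images whose codes lie within a small Hamming radius of the query code.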

RGB Channel Selection Technique for Efficient Image Segmentation (효율적인 이미지 분할을 위한 RGB 채널 선택 기법)

  • 김현종;박영배
    • Journal of KIISE:Software and Applications
    • /
    • v.31 no.10
    • /
    • pp.1332-1344
    • /
    • 2004
  • With the development of the information superhighway and multimedia-related technologies in recent years, more efficient technologies to transmit, store and retrieve multimedia data are required. First, semantic-based image retrieval typically requires separate annotation to attach meaning to image data alongside low-level property information such as color, texture, and shape. Although semantic-based information retrieval has been realized using vocabulary dictionaries of given keywords, it has not escaped the limits of existing keyword-based text information retrieval. The second problem is the decreased retrieval performance of content-based image retrieval systems: it is difficult to separate an object from an image with a complex background, difficult to extract regions because of their excessive division, and difficult to separate multiple objects from a complex scene. To solve these problems, this paper establishes a content-based retrieval system that proceeds in 5 steps. The most critical of these steps extracts, among the RGB channel images, the ones with the largest and the smallest backgrounds. In particular, we propose a method that extracts both the subject and the background using the channel image with the largest background. To address the second problem, we propose a method that separates multiple objects using RGB channel selection, with the excessive division of regions optimized by the water-merge threshold value. Tests showed that the proposed methods are superior to existing methods in retrieval performance, and can replace methods developed for retrieving complex objects that have so far been difficult to retrieve.
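The channel-selection idea above can be sketched by estimating, for each of the R, G, B channel images, the background size as the count of the most frequent intensity, then picking the channels with the largest and smallest backgrounds. The 3x3 "channels" and the background estimate are illustrative assumptions, not the paper's exact water-merge procedure.

```python
# Sketch: selecting the RGB channels with the largest and smallest
# backgrounds, the critical step of the 5-step system described above.

from collections import Counter

def background_size(channel):
    """Most frequent intensity count, a crude background-area estimate
    for scenes with a flat background."""
    return Counter(v for row in channel for v in row).most_common(1)[0][1]

def pick_channels(r, g, b):
    sizes = {"R": background_size(r),
             "G": background_size(g),
             "B": background_size(b)}
    return max(sizes, key=sizes.get), min(sizes, key=sizes.get)

r = [[10, 10, 10], [10, 200, 10], [10, 10, 10]]  # flat background
g = [[5, 9, 14], [20, 200, 30], [40, 50, 60]]    # busy channel
b = [[7, 7, 80], [7, 90, 100], [7, 7, 110]]      # in between
print(pick_channels(r, g, b))  # ('R', 'G')
```

The channel with the largest background then gives the cleanest subject/background split for the subsequent segmentation steps.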

Visualizing the Results of Opinion Mining from Social Media Contents: Case Study of a Noodle Company (소셜미디어 콘텐츠의 오피니언 마이닝결과 시각화: N라면 사례 분석 연구)

  • Kim, Yoosin;Kwon, Do Young;Jeong, Seung Ryul
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.4
    • /
    • pp.89-105
    • /
    • 2014
  • After the emergence of the Internet, social media with highly interactive Web 2.0 applications has provided a very user-friendly means for consumers and companies to communicate with each other. Users routinely publish contents involving their opinions and interests on social media such as blogs, forums, chat rooms, and discussion boards, and the contents are released in real time on the Internet. For that reason, many researchers and marketers regard social media contents as a source of information for business analytics, and many studies have reported results on mining business intelligence from social media content. In particular, opinion mining and sentiment analysis, techniques to extract, classify, understand, and assess the opinions implicit in text contents, are frequently applied to social media content analysis because they emphasize determining sentiment polarity and extracting authors' opinions. A number of frameworks, methods, techniques and tools have been presented by these researchers. However, we have found weaknesses in these methods, which are often technically complicated and not sufficiently user-friendly for supporting business decisions and planning. In this study, we attempted to formulate a more comprehensive and practical approach to opinion mining with visual deliverables. First, we described the entire cycle of practical opinion mining using social media content, from the initial data gathering stage to the final presentation session. Our proposed approach consists of four phases: collecting, qualifying, analyzing, and visualizing. In the first phase, analysts choose the target social media; each target medium requires a different means of access, such as an open API, searching tools, DB-to-DB interfaces, or purchased contents. The second phase is pre-processing to generate useful materials for meaningful analysis: if garbage data are not removed, the results of social media analysis will not provide meaningful and useful business insights, so natural language processing techniques should be applied to clean the data. The next step is the opinion mining phase, where the cleansed social media content set is analyzed. The qualified data set includes not only user-generated contents but also content identification information such as creation date, author name, user id, content id, hit counts, review or reply, favorites, etc. Depending on the purpose of the analysis, researchers or data analysts can select a suitable mining tool: topic extraction and buzz analysis usually relate to market trend analysis, while sentiment analysis is utilized for reputation analysis; other applications include stock prediction, product recommendation, and sales forecasting. The last phase is visualization and presentation of the analysis results. The major purpose of this phase is to explain the results and help users comprehend their meaning; deliverables should therefore be simple, clear and easy to understand rather than complex and flashy. To illustrate our approach, we conducted a case study on a leading Korean instant noodle company, NS Food, which has held the No. 1 position in the Korean "Ramen" business for several decades with a 66.5% market share. We collected a total of 11,869 pieces of content, including blogs, forum contents and news articles. After collecting the social media content data, we generated instant noodle business-specific language resources for data manipulation and analysis using natural language processing, and classified contents into more detailed categories such as marketing features, environment, and reputation. In this phase, we used free software such as the tm, KoNLP, ggplot2 and plyr packages of the R project. As a result, we presented several useful visualization outputs, such as domain-specific lexicons, volume and sentiment graphs, topic word clouds, heat maps, valence tree maps, and other visualized images, providing vivid, full-colored examples using open library software packages of the R project. Business actors can detect at a glance which areas are weak, strong, positive, negative, quiet or loud. The heat map shows the movement of sentiment or volume in a category-by-time matrix through the density of color over time periods. The valence tree map, one of the most comprehensive and holistic visualization models, should be very helpful for analysts and decision makers to quickly grasp the "big picture" business situation, since a tree map can present buzz volume and sentiment hierarchically for a given period. This case study offers real-world business insights from market sensing, demonstrating to practical-minded business users how they can use such results for timely decision making in response to ongoing changes in the market. We believe our approach can provide a practical and reliable guide to opinion mining with visualized results that are immediately useful, not just in the food industry but in other industries as well.
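The aggregation that feeds the heat map described above can be sketched as scoring each post against a small sentiment lexicon and tallying volume and net sentiment into a (category, month) matrix. The lexicon and posts below are invented English stand-ins; the study itself used Korean language resources (tm, KoNLP) in R.

```python
# Sketch: lexicon-based sentiment scoring and category-by-time
# aggregation, the input matrix behind the heat-map visualization.

POSITIVE = {"tasty", "love", "great"}     # hypothetical domain lexicon
NEGATIVE = {"salty", "bad", "expensive"}

def score(text):
    """Net sentiment: positive word hits minus negative word hits."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def heatmap_cells(posts):
    """posts: (category, month, text) triples ->
    {(category, month): (volume, net_sentiment)}"""
    cells = {}
    for cat, month, text in posts:
        vol, net = cells.get((cat, month), (0, 0))
        cells[(cat, month)] = (vol + 1, net + score(text))
    return cells

posts = [
    ("taste", "2014-01", "so tasty love it"),
    ("taste", "2014-01", "too salty"),
    ("price", "2014-02", "expensive for a noodle"),
]
print(heatmap_cells(posts))
```

Rendering each cell's volume as size and net sentiment as color yields exactly the quiet/loud, positive/negative view the abstract describes.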

A Study of the shade of between maxillary and mandibular anterior teeth in the Korean (한국인의 상하악 전치부 색조에 관한 연구)

  • Kim, Tae-Jin;Kwon, Kung-Rock;Kim, Hyeong-Seob;Woo, Yi-Hyung
    • The Journal of Korean Academy of Prosthodontics
    • /
    • v.46 no.4
    • /
    • pp.343-350
    • /
    • 2008
  • Purpose: The purpose of this study was to spectrophotometrically evaluate the shade difference between maxillary and mandibular anterior teeth in Koreans, by the standard of the Vita classical shade guide, using SpectroShade™. Material and methods: The shades of healthy anterior teeth were examined and analyzed using the digital shade analysis of SpectroShade™. This study examined 80 individuals in their twenties, thirties, forties and fifties (40 males and 40 females), each presenting 12 healthy, unrestored maxillary and mandibular anterior teeth. Tooth brushing and oral prophylaxis were performed prior to evaluation. The SpectroShade™ was used to acquire images of the 12 maxillary and mandibular anterior teeth. These images were analyzed using the SpectroShade™ software, and shade maps of each tooth were acquired. Differences in shade between upper and lower teeth, between genders and between age groups were investigated and analyzed in the CIE L*a*b* color order system. A one-way ANOVA test was used to determine whether there were significant differences between the groups tested, and the Scheffé multiple comparison was used to identify where the differences were. Results: 1. Shade differences were significant (P < .05) between the maxillary and mandibular central incisors, lateral incisors and canines. 2. No significant differences in shade distribution were seen between lateral incisors and central incisors. 3. The shade difference of the canines was greater than that of the central and lateral incisors. 4. No significant differences in shade distribution were seen between genders for the maxillary and mandibular central incisors, lateral incisors and canines. 5. No significant differences in shade distribution were seen across age groups for the maxillary and mandibular central incisors, lateral incisors and canines. Conclusions: 1. A shade difference was found between the maxillary and mandibular anterior teeth, with a ΔE* value of more than 2.0. 2. The shade difference of the canines was greater than that of the central and lateral incisors, while the difference between central and lateral incisors was not significant. 3. No significant differences in shade distribution were seen between genders in the maxillary and mandibular anterior teeth. 4. No significant differences in shade distribution were seen across age groups in the maxillary and mandibular anterior teeth.
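The ΔE* criterion above is the Euclidean distance between two measurements in CIE L*a*b* space, with values above roughly 2.0 commonly treated as perceptible (the abstract's threshold). The two Lab triples below are illustrative, not measured tooth shades.

```python
# Sketch: CIE L*a*b* color difference (Delta E*ab) between two
# spectrophotometric shade readings, as used in the study above.

import math

def delta_e_ab(lab1, lab2):
    """Euclidean distance in L*a*b* space: the classic Delta E*ab."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(lab1, lab2)))

maxillary  = (72.0, 1.5, 16.0)   # hypothetical central-incisor reading
mandibular = (70.0, 1.0, 14.5)
d = delta_e_ab(maxillary, mandibular)
print(round(d, 2), d > 2.0)  # above the 2.0 perceptibility threshold
```

Later refinements such as CIEDE2000 weight lightness, chroma and hue differently, but ΔE*ab remains the form reported in this abstract.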