• Title/Summary/Keyword: Typical set


Mode Analysis of Silica Waveguides with Semi Circular Cross Section by using the Method of Harmonic Expansion in Finite Area (유한영역에서 조화함수 전개법을 이용한 반달꼴 단면 이산화규소 광도파로의 모우드 분석)

  • 김진승
    • Korean Journal of Optics and Photonics
    • /
    • v.4 no.1
    • /
    • pp.90-100
    • /
    • 1993
  • A computer routine for personal computers (PC/AT class) is developed to analyze the mode characteristics of silica-based optical waveguides whose cross sections are semicircular or of other typical shapes. The basic algorithm converts the wave equation into a matrix equation by expanding the wave function in terms of simple harmonic functions. The matrix elements are a set of overlap integrals of sinusoidal functions, weighted by the distribution of the refractive index over the waveguide cross section. The eigenvectors and eigenvalues of the matrix are then computed via diagonalization. We explain some practical problems that arise when implementing the algorithm in the routine. Using this routine, we analyze the mode characteristics of silica-based optical waveguides with semicircular and some other typical cross sections.

  • PDF
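The expansion-into-a-matrix step can be sketched in one dimension (all parameters below are hypothetical, and a slab index profile stands in for the semicircular cross section): the scalar wave equation is projected onto a sine basis, the matrix of index-weighted overlap integrals is built, and diagonalization yields the propagation constants.

```python
import numpy as np

# 1-D sketch of the method: expand the scalar wave equation
#   psi'' + k0^2 n(x)^2 psi = beta^2 psi
# in sine harmonics on [0, L]; eigenvalues of the resulting matrix
# approximate beta^2. All parameters are illustrative.
L, N = 10.0, 40                          # box size (um), number of basis functions
k0 = 2.0 * np.pi / 1.55                  # vacuum wavenumber at 1.55 um
x = np.linspace(0.0, L, 2001)
dx = x[1] - x[0]
n2 = np.where(np.abs(x - L / 2) < 2.0, 1.46, 1.44) ** 2   # step-index profile

basis = np.array([np.sqrt(2.0 / L) * np.sin(m * np.pi * x / L)
                  for m in range(1, N + 1)])              # orthonormal sine basis

# Overlap integrals of sinusoids weighted by the refractive-index
# distribution, plus the diagonal kinetic term -(m*pi/L)^2.
H = k0**2 * (basis * n2) @ basis.T * dx
H[np.diag_indices(N)] -= (np.arange(1, N + 1) * np.pi / L) ** 2

beta2, modes = np.linalg.eigh(H)         # diagonalize for eigenvalues/eigenvectors
n_eff = np.sqrt(beta2[-1]) / k0          # fundamental mode's effective index
print(round(n_eff, 4))                   # should fall between 1.44 and 1.46
```

The practical issues the abstract alludes to appear even in this sketch: the basis must be large enough for the eigenvalues to converge, and the integration grid must resolve the index step.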

Thermal Load Calculations on Stud-Frame Walls by Response Coefficient Method (응답계수(應答係數)를 이용(利用)한 건물벽에서의 열부하(熱負荷) 계산(計算))

  • Hwang, Y.K.;Pak, E.T.
    • The Magazine of the Society of Air-Conditioning and Refrigerating Engineers of Korea
    • /
    • v.17 no.4
    • /
    • pp.357-368
    • /
    • 1988
  • An application of the thermal response coefficient method for obtaining the thermal load on stud-frame walls in a typical house is presented. A set of stud-frame walls poses a two-dimensional transient heat conduction problem with a composite structure. The ambient temperature on the right-hand face of the stud-frame walls is a typical day-cycle input, and the room temperature on the left-hand face is a constant input. The desired output is the thermal load at the left-hand face. The time-dependent ambient temperature is approximated by a continuous piecewise-linear function with one-hour intervals. The conduction problem is spatially discretized into eight finite-element models to obtain the thermal response coefficients. The discretization and round-off errors can be neglected given an adequate number of nodes; a 60-node discretization is recommended as the optimum among the eight models. Several sets of response coefficients of the stud-frame walls are generated, from which the rate of heat transfer through the walls, or temperatures within the walls, can be calculated for different input histories.

  • PDF
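The convolution at the heart of the response-coefficient method can be sketched as follows; the response factors and temperature cycle below are invented for illustration, not taken from the paper.

```python
import math

# Hypothetical hourly response factors for the wall (W/m^2.K): once these
# are computed (e.g. by finite elements), the load for ANY ambient history
# is just a discrete convolution over recent hourly temperatures.
response = [0.0, 1.2, 0.8, 0.45, 0.25, 0.12, 0.05]
room_temp = 20.0                                    # constant indoor input (deg C)

# Typical day-cycle ambient temperature, sampled hourly (the method treats
# it as piecewise linear between samples).
ambient = [15.0 + 10.0 * math.sin(2 * math.pi * (h - 9) / 24) for h in range(24)]

def thermal_load(hour):
    """Load at the indoor face from the last len(response) hourly temperatures."""
    return sum(r * (ambient[(hour - j) % 24] - room_temp)
               for j, r in enumerate(response))

loads = [thermal_load(h) for h in range(24)]
print(round(max(loads), 2), round(min(loads), 2))   # afternoon gain, nighttime loss
```

The point of the method is visible here: the expensive conduction model is solved once to get `response`, after which each new input history costs only a short sum.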

TsCNNs-Based Inappropriate Image and Video Detection System for a Social Network

  • Kim, Youngsoo;Kim, Taehong;Yoo, Seong-eun
    • Journal of Information Processing Systems
    • /
    • v.18 no.5
    • /
    • pp.677-687
    • /
    • 2022
  • We propose a detection algorithm based on tree-structured convolutional neural networks (TsCNNs) that finds pornography, propaganda, or other inappropriate content on a social media network. The algorithm sequentially applies the typical convolutional neural network (CNN) algorithm in a tree-like structure to minimize classification errors among similar classes, and thus improves accuracy. We implemented the detection system and conducted experiments on a data set composed of 6 ordinary classes and 11 inappropriate classes collected from the Korean military social network. Each model of the proposed algorithm was trained, and performance was then evaluated on the identified images and videos. Experimental results with 20,005 new images showed that the overall accuracy in image identification reached 99.51%, and the algorithm reduced the identification errors of the typical CNN algorithm by 64.87%. By reducing false alarms in video identification from the domain, the TsCNNs achieved optimal performance of 98.11% when using 10-minute frame-sampling intervals. This indicates that classification with proper frame sampling reduces both the computational burden and false alarms.
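The tree-structured routing can be illustrated with stub classifiers standing in for trained CNNs; the class names, threshold, and feature keys below are hypothetical.

```python
# Sketch of a tree-structured classifier: a coarse root model routes each
# image to a specialist model, so confusable fine-grained classes are
# separated by a dedicated sub-classifier instead of one flat CNN.
def root_model(image):
    # Stub for the level-1 CNN: coarse ordinary-vs-inappropriate decision.
    return "inappropriate" if image["score"] > 0.5 else "ordinary"

def inappropriate_model(image):
    # Stub for a level-2 CNN specialized in inappropriate content.
    return "propaganda" if image["kind"] == "p" else "pornography"

def ordinary_model(image):
    # Stub for a level-2 CNN specialized in ordinary content.
    return "landscape" if image["kind"] == "l" else "portrait"

def classify(image):
    branch = root_model(image)              # level 1: coarse decision
    if branch == "inappropriate":
        return inappropriate_model(image)   # level 2: fine-grained decision
    return ordinary_model(image)

print(classify({"score": 0.9, "kind": "p"}))   # → propaganda
print(classify({"score": 0.1, "kind": "l"}))   # → landscape
```

Errors at the root propagate down the tree, which is why the paper trains and evaluates each model in the tree separately.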

Neo-Chinese Style Furniture Design Based on Semantic Analysis and Connection

  • Ye, Jialei;Zhang, Jiahao;Gao, Liqian;Zhou, Yang;Liu, Ziyang;Han, Jianguo
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.16 no.8
    • /
    • pp.2704-2719
    • /
    • 2022
  • Lately, neo-Chinese style furniture has drawn frequent notice from product design professionals for the large part it plays in promoting traditional Chinese culture. This article attempts to use big data semantic analysis to provide an effective research method for neo-Chinese furniture design. Using the big data mining program TEXTOM for data collection and analysis, the data obtained from typical websites over a set time period are sorted and analyzed. On the basis of "neo-Chinese furniture" samples, key data are compared, classification analysis of the overall data is conducted, and horizontal analysis of typical data is performed through word frequency analysis, connection centrality analysis, and TF-IDF analysis. We then attempt a summary in light of related design views and theories. The research results show that the outcomes of the data analysis are close to the relevant definitions of design. The core high-frequency vocabulary obtained from the analysis, such as popular, furniture, and modern, can provide a reasonable and effective focus of attention for designs, and the results of systematically sorting and summarizing the data can reliably guide the direction of design. This research attempts to introduce big data mining and semantic analysis methods into the product design industry, to supply scientific and objective data and channels for design studies, and to provide a case of the practical application of big data analysis in the industry.
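The TF-IDF step named above works as follows; the toy corpus stands in for the crawled documents.

```python
import math

# Toy stand-in for the tokenized web documents.
docs = [["furniture", "modern", "popular", "wood"],
        ["furniture", "classic", "wood", "wood"],
        ["modern", "design", "popular", "furniture"]]

def tf_idf(term, doc, corpus):
    """Term frequency in one document times inverse document frequency."""
    tf = doc.count(term) / len(doc)
    df = sum(1 for d in corpus if term in d)          # documents containing term
    return tf * math.log(len(corpus) / df)

# "furniture" appears in every document, so its idf (hence tf-idf) is zero;
# terms concentrated in fewer documents score higher.
print(tf_idf("furniture", docs[0], docs))   # → 0.0
print(tf_idf("wood", docs[1], docs) > 0)    # → True
```

This is why the paper pairs TF-IDF with plain word-frequency counts: the highest-frequency words ("furniture") can carry the least discriminating weight.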

Differentiation of Legal Rules and Individualization of Court Decisions in Criminal, Administrative and Civil Cases: Identification and Assessment Methods

  • Egor, Trofimov;Oleg, Metsker;Georgy, Kopanitsa
    • International Journal of Computer Science & Network Security
    • /
    • v.22 no.12
    • /
    • pp.125-131
    • /
    • 2022
  • The diversity and complexity of criminal, administrative, and civil cases resolved by the courts make it difficult to develop universal automated tools for the analysis and evaluation of justice. However, the big data generated in the justice domain give hope that this problem can be resolved. Applying big data makes it possible to identify typical options for resolving cases, to form detailed rules for the individualization of a court decision, and to correlate these rules with the abstract provisions of law. This approach allows us to somewhat overcome the contradiction between the abstract and the concrete in law, to automate the analysis of justice, and to model e-justice for scientific and practical purposes. The article presents the results of using dimension reduction, SHAP values, and p-values to identify, analyze, and evaluate the individualization of justice and the differentiation of legal regulation. Processing and analyzing arrays of court decisions by computational methods makes it possible to identify the typical views of courts on questions of fact and questions of law. This automatically obtained knowledge is promising for the scientific study of justice issues, the improvement of the prescriptions of the law, and the probabilistic prediction of a court decision from a known set of facts.
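As one hedged illustration of the statistical toolkit named above, a permutation-test p-value can check whether a case feature differentiates outcomes; the sanction data below are synthetic and the feature is invented.

```python
import random

random.seed(0)

# Synthetic sanction levels for decided cases, split by whether a
# hypothetical mitigating fact was present in the case materials.
with_fact    = [3, 2, 2, 1, 2, 3, 1, 2]
without_fact = [4, 5, 3, 4, 5, 4, 3, 5]

observed = sum(without_fact) / 8 - sum(with_fact) / 8   # observed mean difference
pooled = with_fact + without_fact

# Permutation test: how often does a random relabeling of the cases
# produce a difference at least as large as the observed one?
trials, count = 5000, 0
for _ in range(trials):
    random.shuffle(pooled)
    a, b = pooled[:8], pooled[8:]
    if sum(b) / 8 - sum(a) / 8 >= observed:
        count += 1
p_value = count / trials
print(p_value < 0.05)   # a small p-value suggests the fact differentiates outcomes
```

On real decision corpora the same logic applies after feature extraction; the test itself needs no model of how judges weigh the fact.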

A Study on Programming Concepts of Programming Education Experts through Delphi and Conceptual Metaphor Analysis

  • Kim, Dong-Man;Lee, Tae-Wuk
    • Journal of the Korea Society of Computer and Information
    • /
    • v.25 no.11
    • /
    • pp.277-286
    • /
    • 2020
  • In this paper, we propose a new educational approach to help learners form concepts by identifying the properties of programming concepts, targeting a group of experts in programming education. We confirmed the typical properties that programming education experts attribute to programming learning elements through conceptual metaphor analysis, a qualitative research method, and confirmed their validity through the Delphi method. As a result, we identified 17 typical properties of programming concepts that learners should form in programming education. The study concludes that educational content needs to be composed more concretely for learners' conceptualization of programming, as follows: 1) the concept of a variable covers how to store data, how to set a name, what an address is, how to change a value, the various types of variables, and the meaning of a variable's size; 2) the concept of an operator covers how to perform the four arithmetic operations, how to handle logic, how to connect operations according to priority, the meaning of operation symbols, and how to compare values; 3) the concept of a control structure covers how to control the execution flow, how to make a logical judgment, how to set an execution rule, the meaning of sequential execution, and how to repeat execution.

An Adaptive Algorithm for Plagiarism Detection in a Controlled Program Source Set (제한된 프로그램 소스 집합에서 표절 탐색을 위한 적응적 알고리즘)

  • Ji, Jeong-Hoon;Woo, Gyun;Cho, Hwan-Gue
    • Journal of KIISE:Software and Applications
    • /
    • v.33 no.12
    • /
    • pp.1090-1102
    • /
    • 2006
  • This paper suggests a new algorithm for detecting plagiarism within a set of source codes constrained to be functionally equivalent, such as those submitted for a programming assignment or a programming contest problem. The typical algorithms largely exploited up to now are based on Greedy String Tiling, which seeks perfect matches of substrings, and on similarity analysis based on the local alignment of two strings. This paper introduces a new method for detecting similar intervals of the given programs based on an adaptive similarity matrix, each entry of which is the logarithm of a keyword's probability, estimated from keyword frequencies in the given set of programs. We tested this method on sets of programs submitted for more than 10 real programming contests. According to the experimental results, this method has several advantages over the previous one, which uses a fixed similarity matrix (+1 for a match, -1 for a mismatch, -2 for a gap), and the adaptive similarity matrix proves useful for detecting various plagiarism cases.
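A minimal sketch of the adaptive-matrix idea, assuming a token-level view of the programs (the token lists and penalty values below are invented): the reward for matching a keyword is the negative log of its frequency across the submission set, so matches on rare keywords count more, and a Smith-Waterman-style local alignment then scores the most similar interval.

```python
import math
from collections import Counter

# Toy "submission set": each program is a keyword sequence.
programs = [["for", "int", "if", "return"],
            ["for", "int", "memcpy", "return"],
            ["while", "int", "if", "return"]]

# Adaptive match reward: -log of each keyword's frequency in the whole set.
freq = Counter(tok for p in programs for tok in p)
total = sum(freq.values())
score = {tok: -math.log(freq[tok] / total) for tok in freq}

def local_align(a, b, mismatch=-1.0, gap=-2.0):
    """Smith-Waterman local alignment with the adaptive match scores."""
    best = 0.0
    prev = [0.0] * (len(b) + 1)
    for i in range(1, len(a) + 1):
        cur = [0.0]
        for j in range(1, len(b) + 1):
            s = score[a[i - 1]] if a[i - 1] == b[j - 1] else mismatch
            cur.append(max(0.0, prev[j - 1] + s, prev[j] + gap, cur[j - 1] + gap))
            best = max(best, cur[j])
        prev = cur
    return best

print(local_align(programs[0], programs[1]) > 0)   # → True
```

Unlike the fixed +1/-1/-2 matrix, a shared interval built from rare keywords here outscores an equally long interval of ubiquitous keywords, which is the paper's central point.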

Efficient Loop Accelerator for Motion Estimation Specific Instruction-set Processor (움직임 추정 전용 프로세서를 위한 효율적인 루프 가속기)

  • Ha, Jae Myung;Jung, Ho Sun;Sunwoo, Myung Hoon
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.50 no.7
    • /
    • pp.159-166
    • /
    • 2013
  • This paper proposes an efficient loop accelerator for a motion estimation (ME) specific instruction-set processor. ME algorithms by nature contain complex, multiply nested loop operations. To support efficient hardware (HW) loop operations, this paper introduces four loop instructions and their dedicated HW architecture. The simulation results show that the proposed loop accelerator can reduce average instruction cycles by about 29% for ME early-termination schemes compared with a typical implementation using a combination of compare and conditional jump instructions. The proposed loop accelerator significantly reduces the number of program memory accesses and thus saves considerable power. Hence, it is well suited for low-power, flexible ME implementations.

Missing Pattern Matching of Rough Set Based on Attribute Variations Minimization in Rough Set (속성 변동 최소화에 의한 러프집합 누락 패턴 부합)

  • Lee, Young-Cheon
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.10 no.6
    • /
    • pp.683-690
    • /
    • 2015
  • In rough set theory, missing attribute values cause several problems, such as in reduct and core estimation; further, they do not yield discernible patterns for decision tree construction. Several methods exist, such as substitution of typical attribute values, assignment of every possible value, event covering, C4.5, and the special LEMS algorithm. However, these mainly substitute the most frequent or common attribute values, so decision rules with high information loss are derived when important attribute values are missing during pattern matching; in particular, cross-validation of the resulting decision rules is difficult to implement. In this paper, we suggest a new method that substitutes missing attribute values with those of high information gain, using the entropy variation among the given attributes, thereby completing the information table. The suggested method is validated by conducting the same rough set analysis on the incomplete information system using the software ROSE.
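The entropy-variation substitution can be sketched on a toy information table (the table and candidate values are invented): each candidate value is tried for the missing entry, and the one minimizing the conditional entropy of the decision attribute, i.e. maximizing information gain, is kept.

```python
import math
from collections import Counter

# Rows of (condition attribute value, decision); None marks the missing entry.
table = [("a", "yes"), ("a", "yes"), ("b", "no"), ("b", "no"), (None, "yes")]

def entropy(labels):
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def conditional_entropy(rows):
    """Entropy of the decision attribute given the condition attribute."""
    groups = {}
    for v, d in rows:
        groups.setdefault(v, []).append(d)
    n = len(rows)
    return sum(len(g) / n * entropy(g) for g in groups.values())

candidates = {"a", "b"}
filled = {v: conditional_entropy([(v if x is None else x, d) for x, d in table])
          for v in candidates}
best = min(filled, key=filled.get)   # lowest conditional entropy = highest gain
print(best)                          # → a (groups the missing "yes" row with the other "yes" rows)
```

Substituting the most frequent value instead would tie here ("a" and "b" each appear twice), illustrating why the entropy criterion is more informative than frequency alone.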

A Study on the Development of Set Menu according to Market Segmentation of Chinese Restaurant (중식당의 시장세분화에 따른 세트메뉴 개발에 관한 연구)

  • Kim, Hyun-Duk
    • Culinary science and hospitality research
    • /
    • v.23 no.5
    • /
    • pp.109-120
    • /
    • 2017
  • This study aimed to develop a Chinese restaurant set menu suited to market segmentation tendencies, using conjoint analysis. To examine these tendencies, the study investigated the important factors and effective values of the whole market and of the segment markets. First, the whole market and the segment markets preferred seafood to meat, except Cluster 3 (gentle demand type). Second, regarding the efficiency of attribute levels, crab soup was favored over seafood soup in both the whole market and the segment markets, except Cluster 1 (strong demand type). Third, Cluster 1 (strong demand type) showed high efficiency for a menu mixing meat and seafood; Cluster 2 (middle demand type) showed high efficiency for a meat menu; and Cluster 3 (gentle demand type) showed high efficiency for a seafood menu. Fourth, rice and Western dessert menus showed high efficiency in the analysis of both the whole market and the segment markets. The study therefore suggests that the preference for seafood is higher than the preference for meat, meaning that current customers care about their health more than before. Accordingly, those developing Chinese restaurant menus should focus on seafood more than meat, and marketers of Chinese restaurants should not only present new awareness and a fresh atmosphere but also provide a typical composition of set menu for target customers.
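The part-worth logic behind conjoint analysis can be sketched with invented ratings for a balanced two-attribute design; the levels loosely mirror the paper's main-ingredient and soup attributes, and none of the numbers come from the study.

```python
# Toy balanced design: every combination of the two attributes is rated once,
# so a level's part-worth is simply its mean rating minus the grand mean.
profiles = [  # (main ingredient, soup, rating)
    ("seafood", "crab_soup",    8.5),
    ("seafood", "seafood_soup", 7.5),
    ("meat",    "crab_soup",    6.5),
    ("meat",    "seafood_soup", 5.5),
]

grand = sum(r for *_, r in profiles) / len(profiles)

def part_worth(attr_index, level):
    """Mean rating of profiles with this level, relative to the grand mean."""
    rs = [p[2] for p in profiles if p[attr_index] == level]
    return sum(rs) / len(rs) - grand

print(part_worth(0, "seafood"))    # → 1.0  (seafood preferred over meat)
print(part_worth(1, "crab_soup"))  # → 0.5  (crab soup slightly preferred)
```

Segment-level analysis, as in the paper's clusters, repeats the same computation within each respondent group, so the same attribute can carry opposite part-worths in different segments.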