• Title/Summary/Keyword: computer programming ability

Search Results: 206

Embodiment of PWM converter by using the VHDL (VHDL을 이용한 PWM 컨버터의 구현)

  • Baek, Kong-Hyun;Joo, Hyung-Jun;Lee, Hyo-Sung;Lim, Yong-Kon;Lee, Heung-Ho
    • Proceedings of the KIEE Conference
    • /
    • 2002.11d
    • /
    • pp.197-199
    • /
    • 2002
  • The advent of VHDL (Very High Speed Integrated Circuit Hardware Description Language) marked a turning point in digital circuit design, which is becoming ever more complex and highly integrated. Because of its expressive power, VHDL is used not only for hardware design but also for simulation and verification, and for the exchange, preservation, and synthesis of design data. It is also significant that VHDL is a hardware description language standardized by the IEEE, an authoritative international body. Modern circuit design faces two main problems: coping with rapidly growing circuit complexity, and minimizing the design and manufacturing cycle in order to survive fierce competition. To promote the use of VHDL beyond simple simulation, it should also be applied to logic synthesis and chip fabrication; improving design technique in this way can contribute to the domestic ASIC design industry. In this paper, an SMPS (switching-mode power supply) is designed by programming the PWM logic in VHDL; the circuit outputs a stable voltage under a variable load, and the design is downloaded over a ByteBlaster cable from a PC into an Altera MAX (EPM7064SLC84-5) chip. To achieve this, VHDL is used for modeling, simulation, logic synthesis, and production of the FPGA chip. Despite limits on size and operating speed imposed by the FPGA device itself, this method deserves wider adoption because a design can be realized promptly once it is completed.
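
The abstract describes the standard digital PWM scheme, a free-running counter compared against a duty-cycle register, implemented in VHDL. The paper's source is not included in this listing, so the following is only a minimal Python model of that counter-comparator logic; the 8-bit counter width and the function names are assumptions, not details from the paper.

```python
# Minimal Python model of the counter-comparator PWM logic the paper
# implements in VHDL. The register width and names are illustrative,
# not taken from the paper.

def pwm_cycle(duty: int, period: int = 256):
    """Yield one PWM output bit per clock tick for a full period.

    duty   -- compare value: output is high while counter < duty
    period -- counter rollover value (an 8-bit counter gives 256)
    """
    for counter in range(period):
        yield 1 if counter < duty else 0

# Example: 25% duty cycle on an 8-bit counter
wave = list(pwm_cycle(duty=64))
print(sum(wave) / len(wave))  # -> 0.25
```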


Optimal Location of FACTS Devices Using Adaptive Particle Swarm Optimization Hybrid with Simulated Annealing

  • Ajami, Ali;Aghajani, Gh.;Pourmahmood, M.
    • Journal of Electrical Engineering and Technology
    • /
    • v.5 no.2
    • /
    • pp.179-190
    • /
    • 2010
  • This paper describes a new stochastic heuristic algorithm for engineering optimization problems, especially power system applications. An improved particle swarm optimization (PSO), called adaptive particle swarm optimization (APSO), is hybridized with simulated annealing (SA) and referred to as APSO-SA. The algorithm uses APSO to increase the convergence rate and incorporates SA's ability to avoid entrapment in local optima. The efficiency of APSO-SA is verified on several benchmark functions. The paper then applies APSO-SA to find the optimal location, type, and size of flexible AC transmission system (FACTS) devices. Two device types are considered: the thyristor-controlled series capacitor (TCSC) and the static VAR compensator (SVC). The main objectives of the presented method are to increase the voltage stability index and the overload factor while decreasing investment cost and total real power losses in the power system. Two cases are examined: single-type devices (FACTS devices of the same type) and multi-type devices (a combination of TCSC and SVC). Using the proposed method, the locations, types, and sizes of FACTS devices are obtained that optimize the objective function. APSO-SA is used to solve this nonlinear programming problem for better accuracy and fast convergence, and its results are compared with those of conventional PSO. The presented method expands the search space, improves performance, and accelerates convergence relative to the conventional PSO algorithm, which confirms the efficiency and validity of the proposed method. The approach is examined and tested on the IEEE 14-bus system in MATLAB. Numerical results demonstrate that APSO-SA is fast and has a much lower computational cost.
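
The abstract outlines the hybrid idea: PSO with an adaptive inertia weight, plus an SA-style acceptance step to escape local optima. The sketch below illustrates that combination on a benchmark (sphere) function in Python; the inertia schedule, acceptance rule, and coefficients are generic textbook choices, not the paper's exact APSO-SA update equations.

```python
# PSO-with-SA-acceptance sketch on the sphere benchmark function.
# The adaptive inertia schedule and Metropolis acceptance shown here
# are generic choices, not the exact APSO-SA rules from the paper.
import numpy as np

def apso_sa(f, dim=10, n_particles=30, iters=200, T0=1.0, alpha=0.95):
    rng = np.random.default_rng(0)
    x = rng.uniform(-5, 5, (n_particles, dim))      # positions
    v = np.zeros_like(x)                            # velocities
    pbest, pbest_f = x.copy(), np.apply_along_axis(f, 1, x)
    gbest = pbest[pbest_f.argmin()].copy()
    T = T0
    for t in range(iters):
        w = 0.9 - 0.5 * t / iters                   # assumed adaptive inertia schedule
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + 2.0 * r1 * (pbest - x) + 2.0 * r2 * (gbest - x)
        x = x + v
        fx = np.apply_along_axis(f, 1, x)
        # SA-style acceptance: keep a worse personal best with prob exp(-df/T)
        df = fx - pbest_f
        accept = (df < 0) | (rng.random(n_particles) < np.exp(-np.clip(df, 0, 50) / T))
        pbest[accept], pbest_f[accept] = x[accept], fx[accept]
        gbest = pbest[pbest_f.argmin()].copy()
        T *= alpha                                  # cooling schedule
    return gbest, f(gbest)

best, best_f = apso_sa(lambda z: float(np.sum(z**2)))
print(best_f)  # should approach 0 on the sphere function
```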

A Study on "Wittgenstein" Album (비트겐슈타인(Wittgenstein)앨범에 관한 고찰)

  • Kim, Jun-Soo;Cho, Tae-seon
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.22 no.3
    • /
    • pp.374-380
    • /
    • 2021
  • The band Wittgenstein was a comparatively small band-format project that followed Shin Hae-Chul's earlier big band "Next." The album, which features Shin Hae-Chul's distinctive lyrics and a specific concept, is similar to the Next albums in that respect. It differs, however, in its sound, which skillfully fuses sampling-based work with computer music. It was a low-budget home-recording album, produced at a total cost of three million won. Shin Hae-Chul handled the main vocals and the programming, and all of the work was done jointly by the band members. On this album, Shin Hae-Chul focused on teamwork rather than on producing music his own way. The low budget could have constrained the production, but the album deserves high praise as a novel attempt. Musicians who create music always face a conflict between the music they love and the music that sells; yet without creative effort there is no evolution or development in the music industry. It is clear that constant change can keep developing musical ability, and that this leads to the development of Korean pop music.

Development of a Forensic Analyzing Tool based on Cluster Information of HFS+ filesystem

  • Cho, Gyu-Sang
    • International Journal of Internet, Broadcasting and Communication
    • /
    • v.13 no.3
    • /
    • pp.178-192
    • /
    • 2021
  • File system forensics typically focuses on the contents or timestamps of files, and analysis is commonly file- and directory-centric. But to recover a deleted file from disk, or to use a carving technique to find and connect partially missing content, the evidence must be examined with cluster-centered analysis. Forensic tools such as EnCase, TSK, and X-Ways provide basic access to disk cluster information, but this is not a core function of those tools. Sysinternals' DiskView offers a more intuitive visualization, which makes it easier to obtain information about disk clusters; however, most current tools target Windows. Forensic analysis tools for macOS are very few, and cluster analysis tools are rarer still. In this paper, we develop a tool named FACT (Forensic Analyzer based Cluster Information Tool) for analyzing the state of clusters in an HFS+ file system for digital forensics. FACT provides three kinds of analysis: cluster-based, B-tree-based, and directory-based. Cluster-based analysis is the main feature, and the tool's cluster visualization plays the central role. FACT is written in two programming languages: the core HFS+ parsing is implemented in C/C++, and the visualization is implemented with the Python Tkinter library. The features developed in this study can evolve into key forensic tools for macOS, and the additional GUI capabilities make the tool especially useful for cluster-centric forensic analysis.
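
The cluster-based analysis the abstract describes rests on reading the volume's allocation state. Since the paper's code is not part of this listing, here is a minimal Python sketch of one underlying step, decoding an HFS+ allocation-file bitmap (one bit per allocation block, most significant bit first) into allocated/free flags; the function name and example data are illustrative only.

```python
# Minimal sketch of the cluster-state idea behind FACT: decode an HFS+
# allocation-file bitmap (1 bit per allocation block, MSB first) into
# per-block allocated/free flags. The actual FACT tool parses the full
# volume structures in C/C++; this function is illustrative.

def decode_allocation_bitmap(bitmap: bytes, total_blocks: int):
    """Return a list of booleans: True means the block is allocated."""
    states = []
    for block in range(total_blocks):
        byte = bitmap[block // 8]
        bit = 7 - (block % 8)          # HFS+ bitmaps are MSB-first per byte
        states.append(bool((byte >> bit) & 1))
    return states

# Example: 16 blocks, first byte 0b11110000 -> blocks 0-3 allocated
states = decode_allocation_bitmap(bytes([0b11110000, 0x00]), 16)
print([i for i, used in enumerate(states) if used])  # -> [0, 1, 2, 3]
```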

Implementing RPA for Digital to Intelligent(D2I) (디지털에서 인텔리전트(D2I)달성을 위한 RPA의 구현)

  • Dong-Jin Choi
    • Information Systems Review
    • /
    • v.21 no.4
    • /
    • pp.143-156
    • /
    • 2019
  • Types of innovation can be categorized as simplification, information, automation, and intelligence. Intelligence is the highest level of innovation, and RPA can be seen as one form of it. Robotic Process Automation (RPA), software robots augmented with artificial intelligence, is well suited to simple, repetitive, high-volume transaction processing tasks. RPA is already in operation at many companies in Korea, and its introduction has come about naturally from the need to let people concentrate on core tasks, at a time when organizations increasingly emphasize voluntary leadership, strong teamwork and execution, and a professional working culture. RPA is a technology that replaces human tasks with the goal of handling structured work quickly and efficiently. It is implemented through software robots that mimic humans using software such as ERP systems or productivity tools. An RPA robot is software installed on a computer, called a robot because of how it operates. Unlike traditional software, which communicates with other IT systems through the back end, RPA integrates with IT systems through the front end. In practice, this means the software robot uses IT systems the way a human does: it repeats the correct steps and responds to events on the computer screen instead of communicating with the system's application programming interface (API). Designing software that mimics a human to communicate with other software may seem less intuitive, but the approach has several advantages. First, RPA can be integrated with virtually any software in use, regardless of its openness to third-party applications; many enterprise IT systems are proprietary, lack common APIs, and are severely limited in communicating with other systems, and RPA solves this problem. Second, RPA can be implemented in a very short time: traditional approaches such as enterprise software integration are relatively time-consuming, whereas an RPA can be deployed in roughly two to four weeks. Third, automated processes built on software robots can easily be modified by system users; while traditional approaches require advanced coding to drastically change how a system works, RPA can be re-instructed by modifying relatively simple logical statements, screen captures, or graphical charts of the human-run process. This makes RPA very versatile and flexible, and a good example of the application of digital-to-intelligent (D2I).
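
The abstract's key technical point is that RPA drives applications through the front end, responding to what is on screen, rather than through an API. A minimal sketch of that screen-driven style using the pyautogui library is shown below; the screenshot file name and the typed text are hypothetical, and this illustrates the principle rather than reproducing code from the paper.

```python
# Front-end-driven automation sketch in the spirit the abstract describes:
# the "robot" locates a UI element on screen and acts on it, rather than
# calling the application's API. Uses the real pyautogui library; the
# screenshot file and typed text are hypothetical.
import pyautogui

def click_button(image_path: str, tries: int = 10) -> bool:
    """Find a button by its screenshot and click it, as a human would."""
    for _ in range(tries):
        try:
            pos = pyautogui.locateCenterOnScreen(image_path)
        except pyautogui.ImageNotFoundException:
            pos = None                # older versions return None instead
        if pos is not None:
            pyautogui.click(pos)
            return True
        pyautogui.sleep(1)            # give the screen time to update
    return False

if click_button("submit_button.png"):               # hypothetical screenshot
    pyautogui.write("INV-2019-001", interval=0.05)  # hypothetical form input
    pyautogui.press("enter")
```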

Efficient Topic Modeling by Mapping Global and Local Topics (전역 토픽의 지역 매핑을 통한 효율적 토픽 모델링 방안)

  • Choi, Hochang;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.3
    • /
    • pp.69-94
    • /
    • 2017
  • Recently, growing demand for big data analysis has driven vigorous development of related technologies and tools. At the same time, advances in IT and the increased penetration of smart devices are producing enormous amounts of data, so data analysis technology is rapidly becoming mainstream: attempts to gain insight through data analysis keep increasing, and big data analysis will matter in ever more industries for the foreseeable future. Big data analysis has generally been performed by a small number of experts and delivered to those who request it. However, rising interest in the field has spurred computer programming education and the development of many analysis programs, so the entry barriers are gradually falling and the technology is spreading; as a result, big data analysis is increasingly expected to be performed by the demanders themselves. Interest in unstructured data, and in text data in particular, is also growing continually. New web platforms and techniques are mass-producing text data and prompting active attempts to analyze it, and the results of text analysis are being used in many fields. Text mining is a concept that embraces the various theories and techniques for text analysis; among them, topic modeling is one of the most widely used and studied. Topic modeling extracts the major issues from a large set of documents, identifies the documents corresponding to each issue, and returns them as clusters; it is valued as a very useful technique because it reflects the semantic elements of the documents. Traditional topic modeling is based on the distribution of key terms across the entire corpus, so the whole corpus must be analyzed at once to identify the topic of each document. This makes analysis slow when topic modeling is applied to many documents, and it creates a scalability problem: processing time grows sharply with the number of analysis objects, which is particularly noticeable when the documents are distributed across multiple systems or regions. To overcome these problems, a divide-and-conquer approach can be applied to topic modeling: divide the documents into sub-units and derive topics by running topic modeling repeatedly on each unit. This makes it possible to run topic modeling on a large corpus with limited system resources and improves processing speed; it can also greatly reduce analysis time and cost, since documents can be analyzed where they reside without first being combined. Despite these advantages, the method has two major problems. First, the relationship between the local topics derived from each unit and the global topics derived from the entire corpus is unclear: local topics can be identified within each unit, but the corresponding global topics cannot. Second, a method is needed for measuring the accuracy of such an approach; taking the global topics as the ideal answer, the deviation of each local topic from its global topic must be measured. Because of these difficulties, this line of work has been studied less than other topic modeling research. In this paper, we propose a topic modeling approach that solves both problems. First, we divide the entire document cluster (global set) into sub-clusters (local sets) and generate a reduced global set (RGS) consisting of delegate documents extracted from each local set. We address the first problem by mapping RGS topics to local topics. We then verify the accuracy of the proposed methodology by checking whether documents are assigned to the same topic in the global and local results. Using 24,000 news articles, we conduct experiments to evaluate the practical applicability of the proposed methodology. An additional experiment confirms that the proposed methodology can provide results similar to topic modeling over the entire corpus, and we also propose a reasonable method for comparing the results of the two approaches.
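
The divide-and-conquer scheme in the abstract can be made concrete: run topic modeling per local set, then map each local topic to a global topic. The sketch below uses scikit-learn's LDA and maps local topics to global ones by cosine similarity of their topic-word distributions; this similarity-based mapping is a generic stand-in for the paper's RGS-based mapping step, and the toy corpus is invented.

```python
# Divide-and-conquer topic modeling sketch: fit LDA per document sub-cluster,
# then map each local topic to its nearest global topic by cosine similarity
# of topic-word distributions. The mapping rule is a generic stand-in for
# the paper's RGS-based mapping, not the authors' exact method.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.metrics.pairwise import cosine_similarity

def fit_lda(texts, vectorizer, n_topics):
    X = vectorizer.transform(texts)
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0).fit(X)
    # rows: topics; columns: shared vocabulary (normalized word distributions)
    return lda.components_ / lda.components_.sum(axis=1, keepdims=True)

docs = ["stocks fell on rate fears", "the team won the final match",
        "bond yields rose again", "striker scores twice in derby"] * 10
vec = CountVectorizer().fit(docs)                 # shared vocabulary
global_topics = fit_lda(docs, vec, n_topics=2)    # "ideal answer" topics

# Split into local sets and model each one independently
halves = [docs[:len(docs) // 2], docs[len(docs) // 2:]]
for i, local_docs in enumerate(halves):
    local_topics = fit_lda(local_docs, vec, n_topics=2)
    mapping = cosine_similarity(local_topics, global_topics).argmax(axis=1)
    print(f"local set {i}: local topic -> global topic {mapping.tolist()}")
```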