• Title/Summary/Keyword: C-language


Hardware Approach to Fuzzy Inference―ASIC and RISC―

  • Watanabe, Hiroyuki
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1993.06a
    • /
    • pp.975-976
    • /
    • 1993
  • This talk presents an overview of the author's research and development activities on fuzzy inference hardware, pursued through two distinct approaches. The first approach uses application-specific integrated circuit (ASIC) technology: the fuzzy inference method is implemented directly in silicon. The second approach, which is in its preliminary stage, uses a more conventional microprocessor architecture; here, we apply the quantitative technique used by designers of reduced instruction set computers (RISC) to modify the architecture of a microprocessor. In the ASIC approach, we implemented the most widely used fuzzy inference mechanism directly in silicon. The mechanism is based on the max-min compositional rule of inference and Mamdani's method of fuzzy implication. Two VLSI fuzzy inference chips were designed, fabricated, and fully tested; both used a full-custom CMOS technology. The second and more elaborate chip was designed at the University of North Carolina (UNC) in cooperation with MCNC. Both digital fuzzy inference chips had multiple datapaths for rule evaluation, and they executed multiple fuzzy if-then rules in parallel. The AT&T chip is the first digital fuzzy inference chip in the world. It ran with a 20 MHz clock and achieved approximately 80,000 Fuzzy Logical Inferences Per Second (FLIPS). It stored and executed 16 fuzzy if-then rules. Since it was designed as a proof-of-concept prototype chip, it had a minimal amount of peripheral logic for system integration. The UNC/MCNC chip consists of 688,131 transistors, of which 476,160 are used for RAM. It ran with a 10 MHz clock. The chip has a 3-stage pipeline and initiates the computation of a new inference every 64 cycles, achieving approximately 160,000 FLIPS. The new architecture has the following important improvements over the AT&T chip: programmable rule set memory (RAM).
On-chip fuzzification by a table-lookup method. On-chip defuzzification by a centroid method. A reconfigurable architecture for processing two rule formats. RAM/datapath redundancy for higher yield. The chip can store and execute 51 if-then rules of the following format: IF A and B and C and D THEN Do E and Do F. With this format, the chip takes four inputs and produces two outputs. By software reconfiguration, it can store and execute 102 if-then rules of the following simpler format using the same datapath: IF A and B THEN Do E. With this format the chip takes two inputs and produces one output. We have built two VME-bus board systems based on this chip for Oak Ridge National Laboratory (ORNL). The board is now installed in a robot at ORNL, where researchers use it for experiments in autonomous robot navigation. The fuzzy logic system board places the fuzzy chip into a VMEbus environment. High-level C language functions hide the operational details of the board from the application programmer, who treats rule memories and fuzzification function memories as local structures passed as parameters to the C functions. ASIC fuzzy inference hardware is extremely fast but limited in generality: many aspects of the design are limited or fixed. We have therefore proposed designing a fuzzy information processor as an application-specific processor using a quantitative approach. The quantitative approach was developed by RISC designers; in effect, we are interested in evaluating the effectiveness of a specialized RISC processor for fuzzy information processing. As a first step, we measured the possible speed-up of a fuzzy inference program based on if-then rules from the introduction of specialized instructions, i.e., min and max instructions. The minimum and maximum operations are heavily used in fuzzy logic applications as fuzzy intersection and union.
We performed measurements using a MIPS R3000 as the base microprocessor. The initial result is encouraging: we could achieve as much as a 2.5-fold increase in inference speed if the R3000 had min and max instructions. These instructions are also useful for speeding up other fuzzy operations such as bounded product and bounded sum. An embedded processor's main task is to control some device or process, and it usually runs a single dedicated program or a few of them, so modifying a microprocessor into an embedded processor for fuzzy control is very effective. Table I shows the measured speed of inference for a MIPS R3000 microprocessor, a fictitious MIPS R3000 microprocessor with min and max instructions, and the UNC/MCNC ASIC fuzzy inference chip. The software used on the microprocessors is a simulator of the ASIC chip. The first row is the computation time in seconds for 6,000 inferences using 51 rules, where each fuzzy set is represented by an array of 64 elements. The second row is the time required to perform a single inference. The last row is the fuzzy logical inferences per second (FLIPS) measured for each device. There is a large gap in run time between the ASIC and software approaches even if we resort to a specialized fuzzy microprocessor. As for design time and cost, the two approaches represent two extremes: an ASIC approach is extremely expensive. It is, therefore, an important research topic to design a specialized computing architecture for fuzzy applications that falls between these two extremes in both run time and design time/cost.

TABLE I. INFERENCE TIME BY 51 RULES

Time             MIPS R3000 (regular)   MIPS R3000 (with min/max)   ASIC
6000 inferences  125 s                  49 s                        0.0038 s
1 inference      20.8 ms                8.2 ms                      6.4 μs
FLIPS            48                     122                         156,250
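The role of the min and max operations in rule evaluation can be sketched in C. This is a minimal illustration of max-min (Mamdani) inference, not the authors' simulator; the 8-bit membership representation and all function names are assumptions.

```c
#define SET_LEN 64   /* each fuzzy set is an array of 64 elements, as in the abstract */
#define N_INPUTS 4   /* the four-input rule format: IF A and B and C and D THEN ... */

/* Fuzzy intersection (AND) and union (OR) on 8-bit membership grades.
 * These two operations dominate the inner loop of rule evaluation,
 * which is why dedicated min/max instructions give a large speed-up. */
static unsigned char fuzzy_and(unsigned char a, unsigned char b) { return a < b ? a : b; }
static unsigned char fuzzy_or(unsigned char a, unsigned char b)  { return a > b ? a : b; }

/* Max-min (Mamdani) evaluation of one rule: the firing strength is the
 * minimum of the antecedent membership grades; the consequent set is
 * clipped at that strength and merged into the output set by maximum. */
void eval_rule(const unsigned char grade[N_INPUTS],
               const unsigned char consequent[SET_LEN],
               unsigned char out[SET_LEN])
{
    unsigned char strength = 255;
    for (int i = 0; i < N_INPUTS; i++)
        strength = fuzzy_and(strength, grade[i]);
    for (int j = 0; j < SET_LEN; j++)
        out[j] = fuzzy_or(out[j], fuzzy_and(strength, consequent[j]));
}
```

Every sample of every rule passes through `fuzzy_and`/`fuzzy_or`, which is why replacing each with a single instruction yields the roughly 2.5-fold speed-up the paper measures.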


Improved Original Entry Point Detection Method Based on PinDemonium (PinDemonium 기반 Original Entry Point 탐지 방법 개선)

  • Kim, Gyeong Min;Park, Yong Su
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.7 no.6
    • /
    • pp.155-164
    • /
    • 2018
  • Many malicious programs are compressed or encrypted with various commercial packers to hinder reverse engineering, so malicious-code analysts must first decompress or decrypt them. The OEP (Original Entry Point) is the address of the first instruction executed after the encrypted or compressed executable has been restored to its original binary state. Several unpackers, including PinDemonium, execute the packed file, keep track of the visited addresses until the OEP appears, and search for the OEP among those addresses. However, instead of finding a single exact OEP, unpackers produce a relatively large set of OEP candidates, and sometimes the OEP is missing from the candidates; in other words, existing unpackers have difficulty finding the correct OEP. We have developed a new tool that yields smaller OEP candidate sets by adding two methods based on properties of the OEP, namely that the function-call sequence and the parameters are the same in the packed program and the original program. The first method is based on function calls. Programs written in C/C++ are compiled into binary code, and compiler-specific system functions are added to the compiled program. After examining these functions, we added a method to PinDemonium that detects the unpacking work by matching the patterns of system functions called in packed and unpacked programs. The second method is based on parameters, which include not only user-entered inputs but also system inputs. We added a method to PinDemonium that finds the OEP using the system parameters of a particular function in stack memory. OEP detection experiments were performed on sample programs packed by 16 commercial packers.
On average, we reduce the OEP candidates by more than 40% compared to PinDemonium, excluding 2 commercial packers that could not be executed due to their anti-debugging techniques.
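The call-sequence matching idea can be sketched as follows. This is a hypothetical C illustration, not PinDemonium's actual code: the traced names shown are typical MSVC runtime startup calls, and the data structures and function names are assumptions.

```c
#include <string.h>

#define MAX_TRACE 256

/* Trace of system functions observed while the packed program runs. */
typedef struct {
    const char *name[MAX_TRACE];
    int count;
} CallTrace;

/* Compiler-inserted startup calls expected near the original entry point.
 * This example pattern resembles a typical MSVC runtime prologue; a real
 * tool would carry one pattern per supported compiler. */
static const char *prologue[] = {
    "GetSystemTimeAsFileTime",
    "GetCurrentProcessId",
    "GetCurrentThreadId",
};
enum { PROLOGUE_LEN = 3 };

void record_call(CallTrace *t, const char *fn)
{
    if (t->count < MAX_TRACE)
        t->name[t->count++] = fn;
}

/* Returns 1 when the tail of the trace matches the expected startup
 * pattern, i.e. execution has likely reached the unpacked original code. */
int matches_prologue(const CallTrace *t)
{
    if (t->count < PROLOGUE_LEN)
        return 0;
    for (int i = 0; i < PROLOGUE_LEN; i++)
        if (strcmp(t->name[t->count - PROLOGUE_LEN + i], prologue[i]) != 0)
            return 0;
    return 1;
}
```

The instruction address reached when `matches_prologue` first succeeds would be reported as an OEP candidate, shrinking the candidate set.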

In vitro evaluation of the wear resistance of provisional resin materials fabricated by different methods (제작방법에 따른 임시 수복용 레진의 마모저항성에 관한 연구)

  • Ahn, Jong-Ju;Huh, Jung-Bo;Choi, Jae-Won
    • The Journal of Korean Academy of Prosthodontics
    • /
    • v.57 no.2
    • /
    • pp.110-117
    • /
    • 2019
  • Purpose: This study evaluated the wear resistance of 3D-printed, milled, and conventionally cured provisional resin materials. Materials and methods: Four types of resin materials made by different methods were examined: stereolithography apparatus (SLA) 3D-printed resin (S3P), digital light processing (DLP) 3D-printed resin (D3P), milled resin (MIL), and conventionally self-cured resin (CON). For the 3D-printed resin specimens, the build orientation and layer thickness were set to $0^{\circ}$ and $100{\mu}m$, respectively. The specimens were tested in a 2-axis chewing simulator with steatite as the antagonist under thermocycling conditions (5 kg, 30,000 cycles, 0.8 Hz, $5^{\circ}C/55^{\circ}C$). Wear losses of the specimens were calculated using CAD software, and scanning electron microscopy (SEM) was used to investigate the wear surfaces. Statistical significance was determined using one-way ANOVA and Dunnett T3 analysis (${\alpha}=.05$). Results: Wear losses of the S3P, D3P, and MIL groups were significantly smaller than those of the CON group (P < .05). There was no significant difference among the S3P, D3P, and MIL groups (P > .05). In the SEM observations, vertical cracks along the sliding direction of the antagonist were observed in the S3P and D3P groups. The MIL group showed an overall uniform wear surface, whereas the CON group showed a distinct wear track and numerous bubbles. Conclusion: Within the limits of this study, provisional resin materials made by 3D printing show adequate wear resistance for applications in dentistry.

A Kinematic Analysis of Uchi-mata(inner thigh reaping throw) by Kumi-kata types in Judo (유도 맞잡기 타입에 따른 허벅다리걸기의 Kinematic 분석[I])

  • Kim, Eui-Hwan;Cho, Dong-Hee;Kwon, Moon-Seok
    • Korean Journal of Applied Biomechanics
    • /
    • v.12 no.1
    • /
    • pp.63-87
    • /
    • 2002
  • The purpose of this study was to analyze the kinematic variables of Uchi-mata (inner thigh reaping throw) performed with Kumi-kata (engagement position, basic hold) types A and B (A: grasping the behind-neck lapel; B: the chest lapel) in Judo, using three-dimensional videography with the DLT method. The subjects were four male judokas at the Korean national representative level who have been training at Yong-In University (YIU) and whose tokui-nage (favorite technique) is Uchi-mata. The throwing form was filmed with two S-VHS 16mm video cameras (30 frames/sec, Panasonic). The kinematic variables were temporal, posture, and COG variables. Six good trials were collected for each condition (types A and B) from more than 10 trials. The mean values and standard deviations of each variable were obtained and used as basic factors for examining the characteristics of Uchi-mata by Kumi-kata type. The results of this analysis were as follows: 1) Temporal variables: The total time elapsed (TE) for Uchi-mata of types A and B was 1.45 and 1.56 sec, respectively; type A was shorter than type B. 2) Posture variables: In performing Uchi-mata, the range of left-elbow flexion from Event 2 (E2) to Event 6 (E6) was $45^{\circ}$ for type A and $89^{\circ}$ for type B. Types A and B differed considerably in right-elbow angle at Event 1 (E1). The left shoulder of type A extended while that of type B flexed at E4. The right-shoulder angles of both types showed similar patterns, as did both hip angles (right/left). When type A performed Uchi-mata, the knee angle of the supporting foot was $142^{\circ}$ in the 1st stage of the kake phase (KP) and extended to $147^{\circ}$ in the 2nd stage of KP, and the ankle angle of the supporting foot was $83^{\circ}$ in the 1st stage of KP and extended to $86^{\circ}$ in the 2nd stage.
Moreover, the knee angle of the attacking foot was $126^{\circ}$ in the 1st stage of KP and extended to $132^{\circ}$ in the 2nd stage, and the ankle angle of the attacking foot was $106^{\circ}$ in the 1st stage and extended to $121^{\circ}$ in the 2nd stage. When type B performed Uchi-mata, the knee angle of the supporting foot was $144^{\circ}$ in the 1st stage of KP and extended to $154^{\circ}$ in the 2nd stage, and the ankle angle of the supporting foot was $83^{\circ}$ in the 1st stage and extended to $92^{\circ}$ in the 2nd stage. Moreover, the knee angle of the attacking foot was $132^{\circ}$ in the 1st stage of KP and extended to $140^{\circ}$ in the 2nd stage, and the ankle angle of the attacking foot was $103^{\circ}$ in the 1st stage and extended to $115^{\circ}$ in the 2nd stage. During Uchi-mata, type A showed a pulling pattern and type B a lift-pulling pattern. Between the Kumi-kata types, the upper body (elbow and shoulder angles) differed, but the lower body (hip, knee, and ankle angles) was mostly similar. 3) COG variables: When the subjects performed Uchi-mata, the vertical COG height from the foot in the 2nd stage of KP was 71 cm for type A and 73.8 cm for type B. The Kumi-kata types differed in the medial-lateral direction but did not differ in the vertical direction during the Kuzushi phase.

A Dynamic Management Method for FOAF Using RSS and OLAP cube (RSS와 OLAP 큐브를 이용한 FOAF의 동적 관리 기법)

  • Sohn, Jong-Soo;Chung, In-Jeong
    • Journal of Intelligence and Information Systems
    • /
    • v.17 no.2
    • /
    • pp.39-60
    • /
    • 2011
  • Since the introduction of Web 2.0 technology, social network services have been recognized as a foundation of important future information technology. The advent of Web 2.0 has changed who creates content: in the existing web, content creators were service providers, whereas in the recent web they are service users. Users share experiences with other users and improve content quality, which has increased the importance of social networks. As a result, diverse forms of social network service have emerged from the relations and experiences of users. A social network is a network that constructs and expresses social relations among people who share interests and activities. Today's social network services are not merely confined to showing user interactions; they have developed to a level at which content generation and evaluation interact with each other. As the volume of content generated from social network services and the number of connections between users have drastically increased, social network extraction has become more complicated, and the following problems arise. The first problem is the insufficient representational power of objects in the social network. The second is the inability to express the diverse connections among users. The third is the difficulty of reflecting dynamic changes in the social network caused by changes in user interests. The last is the lack of a method for integrating and processing data efficiently in a heterogeneous distributed computing environment. The first and last problems can be solved by using FOAF, a tool for describing ontology-based user profiles for the construction of social networks. However, solving the second and third problems requires a novel technology that reflects dynamic changes in user interests and relations.
In this paper, we propose a novel method that overcomes the above problems of existing social network extraction methods by applying FOAF (a tool for describing user profiles) and RSS (a web content publishing mechanism) to an OLAP system in order to dynamically update and manage FOAF. We employ data interoperability, an important characteristic of FOAF. Next, we use RSS to reflect changes over time and in user interests; RSS provides a standard vocabulary for distributing web site content in RDF/XML form. We collect personal information and relations of users by utilizing FOAF, and user contents by utilizing RSS. Finally, the collected data are inserted into the database using a star schema. The proposed system generates an OLAP cube from the data in the database, which is then processed by the Dynamic FOAF Management Algorithm. The algorithm consists of two functions: find_id_interest() and find_relation(). Find_id_interest() extracts user interests during the input period, and find_relation() extracts users matching those interests. Finally, the proposed system reconstructs FOAF by reflecting the extracted relationships and interests. To justify the suggested idea, we present the implemented result together with its analysis. We used the C# language and an MS-SQL database, with FOAF and RSS data collected from livejournal.com. The implemented result shows that users' foaf:interest entries increased by an average of 19 percent over four weeks. In proportion to this change, the number of users' foaf:knows entries grew by an average of 9 percent over four weeks.
Because FOAF and RSS are basic data formats with wide support in Web 2.0 and social network services, the method has a definite advantage in utilizing user data distributed across diverse web sites and services regardless of language and type of computer. Using the method suggested in this paper, better services can be provided that cope with rapid changes in user interests through the automatic updating of FOAF.
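The roles of the two functions can be illustrated with a small sketch. The paper's implementation is C# over an OLAP cube; the following C fragment only mirrors the described behavior of find_id_interest() and find_relation() over a flattened fact table, and the data layout and signatures are assumptions.

```c
#include <string.h>

/* One row of the flattened interest table built from FOAF profiles and
 * RSS posts (layout is an assumption; the paper stores this in a star
 * schema and queries it through an OLAP cube). */
typedef struct {
    int user_id;
    const char *interest; /* a foaf:interest topic */
    int week;             /* when the interest was observed, in weeks */
} InterestFact;

/* find_id_interest: return the first interest of user_id observed in
 * the period [from, to], or NULL when none exists. */
const char *find_id_interest(const InterestFact *facts, int n,
                             int user_id, int from, int to)
{
    for (int i = 0; i < n; i++)
        if (facts[i].user_id == user_id &&
            facts[i].week >= from && facts[i].week <= to)
            return facts[i].interest;
    return NULL;
}

/* find_relation: count other users sharing the given interest in the
 * period; such users become foaf:knows candidates when FOAF is rebuilt. */
int find_relation(const InterestFact *facts, int n,
                  const char *interest, int self_id, int from, int to)
{
    int matches = 0;
    for (int i = 0; i < n; i++)
        if (facts[i].user_id != self_id &&
            facts[i].week >= from && facts[i].week <= to &&
            strcmp(facts[i].interest, interest) == 0)
            matches++;
    return matches;
}
```

Restricting both scans to the time window is what lets the rebuilt FOAF track interest drift instead of accumulating stale relations.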

Evaluations of Chinese Brand Name by Different Translation Types: Focusing on The Moderating Role of Brand Concept (영문 브랜드네임의 중문 브랜드네임 전환 방식에 대한 중화권 소비자들의 브랜드 평가에 관한 연구 -브랜드컨셉의 조절효과를 중심으로-)

  • Lee, Jieun;Jeon, Jooeon;Hsiao, Chen Fei
    • Asia Marketing Journal
    • /
    • v.12 no.4
    • /
    • pp.1-25
    • /
    • 2011
  • Brand names are often considered part of the product and important extrinsic cues in product evaluation when consumers make purchasing decisions. For a company, brand names are also important assets. Building a strong brand name in the Chinese commonwealth is a major challenge for many global companies, and one of the first problems a global company faces is how to translate an English brand name into a Chinese brand name. It is a very difficult decision because of cultural and linguistic differences: Western languages are based on an alphabetic phonetic system, whereas Chinese is based on ideograms. Chinese speakers are more likely to recall stimuli presented as brand names in visual rather than spoken recall, whereas English speakers are more likely to recall the names in spoken rather than visual recall. We interpret these findings in terms of the fact that mental representations of verbal information in Chinese are coded primarily in a visual manner, whereas verbal information in English is coded primarily in a phonological manner. A key linguistic difference affecting the decision to standardize or localize when transferring an English brand name to a Chinese brand name is the writing system. Prior Chinese brand naming research suggests that the popular translation methods foreign companies adopt are phonetic, semantic, and phonosemantic translation. Phonetic translation reproduces the speech sound, i.e., the pronunciation of the brand name. Semantic translation conveys the actual meaning of, and associations made with, the brand name. Phonosemantic translation preserves both the sound of the brand name and the brand meaning. Prior brand naming research has dealt with word-level analysis in examining English brand names that are desirable for improving memorability. We predict that Chinese brand name suggestiveness under different translation methods leads to different levels of consumer evaluation.
This research investigates the structural linguistic characteristics of the Chinese language and their impact on brand name evaluation. A further purpose of this study is to examine the effect of brand concept on the evaluation of the brand name, and whether the evaluation is moderated by the Chinese translation type. 178 Taiwanese participants were recruited for the research. The following findings come from the empirical analysis of the hypotheses established in this study. Under the functional brand concept, participants evaluated Chinese translations by semantics more positively than translations by phonetics. On the contrary, under the symbolic brand concept, participants evaluated translations by phonetics more positively than by semantics. We also found that phonosemantic translation received the most favorable evaluations regardless of brand concept. The implications of these findings are discussed for marketers in the Chinese commonwealth with respect to brand name strategies. The proposed model helps companies to select brand names effectively, making it highly applicable for academia and practitioners.


Writing and Sijo in new media culture age (새로운 매체문화시대의 글쓰기와 시조)

  • Jung Ki-chul
    • Sijohaknonchong
    • /
    • v.22
    • /
    • pp.27-55
    • /
    • 2005
  • Visual media occupy the highest position in modern society, and modern poems have also changed into visual poems. This is the result of considering only individual talents while ignoring traditions. Now, new Sijo should concentrate on the mythological and historical voice arising from the true nature and body of human beings. That is, it should convert resolutely to an ecological world view and restore a form of expression granting the specific characteristics of our language. The advantages the computer media have brought, that is, equality, freedom, human rights, harmony, and pro-environmental value, can be maximized by positively accepting the ecological world view of Sijo, which has embraced the daily lives and spirit of the nation. Moreover, all these changes of new Sijo have to be established and recreated within the traditional expressions of Sijo. The aesthetic value of Sijo should be found in expressive forms such as phonetic harmony, rules of versification, and rhythm. Then we can overcome modern society's pathological phenomena, such as severance, separation, dissolution, estrangement, and psychiatric syndromes, which the superiority of visual media has brought. At the same time, it will cure the ills of modern poems, Sijo, and writing, and can establish true happiness and development.


The Development and Application of Biotop Value Assessment Tool(B-VAT) Based on GIS to Measure Landscape Value of Biotop (GIS 기반 비오톱 경관가치 평가도구(B-VAT)의 개발 및 적용)

  • Cho, Hyun-Ju;Ra, Jung-Hwa;Kwon, Oh-Sung
    • Journal of Korean Society of Rural Planning
    • /
    • v.18 no.4
    • /
    • pp.13-26
    • /
    • 2012
  • The purpose of this study is to select a study area that will be developed into Daegu Science Park as a national industrial complex, to assess its landscape value based on a biotop classification with different polygon forms, and to develop and computerize the Biotop Value Assessment Tool (B-VAT) based on GIS. The results are as follows. First, according to the biotop classification based on an analysis of preliminary data, a field study, and a literature review, a total of 13 biotop groups, such as forest biotops, and a total of 63 biotop types were classified. Second, based on prior research on landscape value assessment models for biotops, we developed the biotop value assessment tool using the Visual Basic programming language on ArcGIS. The first application of B-VAT classified 19 types, including riverside forest (BE), into the first grade; 12 types, including artificial plantation (ED), into the second grade; and 12, 2, and 18 types into the third, fourth, and fifth grades, respectively. Also, in a second evaluation based on these results, we delineated 31 areas of special meaning for landscape conservation (1a, 1b) and 34 areas of meaning for landscape conservation (2a, 2b, 2c). The biotop type classification and landscape value evaluation suggested in this study will help to understand the landscape value of a target area scientifically before undertaking reckless development, and will provide important preliminary data for restoring landscapes damaged by development and for managing landscape planning in the future. In particular, we expect that the GIS-based B-VAT will help overcome the limited applicability of current value evaluation models, which are based on complicated algorithms, and will contribute greatly to convenience and popularity.
In addition, it will save time and improve accuracy compared with hand counting. However, this study was limited to the aesthetic-visual part of biotop assessment; future research should therefore conduct a comprehensive assessment that includes conservation and recreation perspectives.

A Study on Robust and Precise Position Control of PMSM under Disturbance Variation (외란의 변화가 있는 PMSM의 강인하고 정밀한 위치 제어에 대한 연구)

  • Lee, Ik-Sun;Yeo, Won-Seok;Jung, Sung-Chul;Park, Keon-Ho;Ko, Jong-Sun
    • The Transactions of The Korean Institute of Electrical Engineers
    • /
    • v.67 no.11
    • /
    • pp.1423-1433
    • /
    • 2018
  • Recently, small- and medium-capacity permanent magnet synchronous motors have offered high torque, high-precision control, and good acceleration/deceleration characteristics. However, existing control schemes face several problems, including unpredictable disturbances and parameter changes in high-accuracy, high-rigidity control applications, and nonlinear dynamic characteristics of the driving part that are not considered. In addition, in drives aiming at low-vibration, high-precision control, the coupling between the permanent magnet synchronous motor and the load may make the system response very unstable, cause vibration, and overload the system. To solve these problems, various approaches such as adaptive control, optimal control, robust control, and artificial neural networks have been actively studied. In this paper, an incremental encoder on the permanent magnet synchronous motor is used to detect the position of the rotor, and the detected rotor position is used for low-vibration, high-precision position control. As the controller, we propose augmented state feedback control with a speed observer and a first-order deadbeat disturbance observer. The augmented state feedback controller drives the rotor position to the reference position quickly and precisely. Adding the speed observer to this augmented state feedback controller compensates for the drop in speed response characteristics by using the previously calculated speed value in the control. The first-order deadbeat disturbance observer reduces the vibration of the motor by compensating for the vibrating component or disturbance inherent in the mechanism. Since the deadbeat disturbance observer is vulnerable to noise, it is supplemented with a moving-average filter to reduce the influence of noise.
Thus, the new controller with the first-order deadbeat disturbance observer can achieve more robust and precise position control under the influence of large inertial loads and natural frequencies. Simulation stability and efficiency were verified using the C language and MATLAB Simulink, and the results were further verified in experiments on an actual 2.5 kW permanent magnet synchronous motor.
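The moving-average smoothing applied to the disturbance-observer output can be sketched in C. This is a minimal illustration; the window length and interface are assumptions, as the abstract does not specify them.

```c
#define MA_LEN 8  /* window length: an assumed value */

typedef struct {
    double buf[MA_LEN]; /* circular buffer of recent observer outputs */
    double sum;         /* running sum of the buffer */
    int idx;            /* next slot to overwrite */
} MovAvg;

void ma_init(MovAvg *f)
{
    for (int i = 0; i < MA_LEN; i++) f->buf[i] = 0.0;
    f->sum = 0.0;
    f->idx = 0;
}

/* Push one noisy disturbance estimate, return the smoothed value.
 * O(1) per sample: subtract the oldest sample, add the newest. */
double ma_update(MovAvg *f, double x)
{
    f->sum += x - f->buf[f->idx];
    f->buf[f->idx] = x;
    f->idx = (f->idx + 1) % MA_LEN;
    return f->sum / MA_LEN;
}
```

The constant-time update matters here because the filter sits inside the position-control loop, where each sample must be processed within one control period.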

Parameter Optimization and Automation of the FLEXPART Lagrangian Particle Dispersion Model for Atmospheric Back-trajectory Analysis (공기괴 역궤적 분석을 위한 FLEXPART Lagrangian Particle Dispersion 모델의 최적화 및 자동화)

  • Kim, Jooil;Park, Sunyoung;Park, Mi-Kyung;Li, Shanlan;Kim, Jae-Yeon;Jo, Chun Ok;Kim, Ji-Yoon;Kim, Kyung-Ryul
    • Atmosphere
    • /
    • v.23 no.1
    • /
    • pp.93-102
    • /
    • 2013
  • The atmospheric transport pathway of an air mass is an important constraint controlling the chemical properties of the air mass observed at a designated location. Such information can be utilized for understanding observed temporal variabilities in atmospheric concentrations of long-lived chemical compounds, whose sinks and/or sources are related particularly to natural and/or anthropogenic processes at the surface, as well as for performing inversions to constrain the fluxes of such compounds. The Lagrangian particle dispersion model FLEXPART provides a useful tool for estimating detailed particle dispersion during atmospheric transport, a significant improvement over the traditional "single-line" trajectory models that have been widely used. However, those without a modeling background who seek to create simple back-trajectory maps may find it challenging to optimize FLEXPART for their needs. In this study, we explain how to set up, operate, and optimize FLEXPART for back-trajectory analysis, and we also provide automation programs based on the open-source R language. Discussions include setting up an "AVAILABLE" file (a directory of the input meteorological fields stored on the computer), creating C-shell scripts for initiating FLEXPART runs and storing the output in directories designated by date, as well as processing the FLEXPART output to create figures for a back-trajectory "footprint" (potential emission sensitivity within the boundary layer). Step-by-step instructions are given for an example case of calculating back trajectories for Anmyeon-do, Korea, for January 2011. One application is also demonstrated in interpreting observed variability in the atmospheric $CO_2$ concentration at Anmyeon-do during this period. The back-trajectory modeling information introduced in this study should facilitate the creation and automation of the most common back-trajectory calculations in atmospheric research.