• Title/Summary/Keyword: Task Function Approach

Search Result 96, Processing Time 0.022 seconds

Implementing Dynamic Obstacle Avoidance of Autonomous Multi-Mobile Robot System (자율 다개체 모바일 로봇 시스템의 동적 장애물 회피 구현)

  • Kim, Dong W.; Yi, Cho-Ho
    • Journal of the Korea Society of Computer and Information, v.18 no.1, pp.11-19, 2013
  • For an autonomous multi-mobile robot system, path planning and collision avoidance are important functions used to perform a given task collaboratively and cooperatively. This study considers these important and challenging problems. The proposed approach is based on a potential field method and fuzzy logic system. First, a global path planner selects the paths of the robots that minimize the cost function from each robot to its own target using a potential field. Then, a local path planner modifies the path and orientation from the global planner to avoid collisions with static and dynamic obstacles using a fuzzy logic system. In this paper, each robot independently selects its destination and considers other robots as dynamic obstacles, and there is no need to predict the motion of obstacles. This process continues until the corresponding target of each robot is found. To test this method, an autonomous multi-mobile robot simulator (AMMRS) is developed, and both simulation-based and experimental results are given. The results show that the path planning and collision avoidance strategies are effective and useful for multi-mobile robot systems.
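The first layer of the two-stage scheme this abstract describes (a potential-field global planner, later refined by a fuzzy local planner) can be illustrated with a minimal gradient step on an attractive-plus-repulsive potential. This is a generic sketch, not the authors' implementation; the gains, influence radius, and step size are illustrative values.

```python
import math

def potential_step(pos, goal, obstacles, step=0.1, k_att=1.0, k_rep=0.5, d0=1.0):
    """One gradient-descent step on a potential field.

    pos, goal: (x, y) tuples; obstacles: list of (x, y) tuples.
    k_att, k_rep, d0 are illustrative gains, not values from the paper.
    """
    # Attractive force: pulls the robot straight toward its own target.
    fx = k_att * (goal[0] - pos[0])
    fy = k_att * (goal[1] - pos[1])
    # Repulsive force: pushes away from any obstacle inside radius d0
    # (other robots are treated the same way, as dynamic obstacles).
    for ox, oy in obstacles:
        dx, dy = pos[0] - ox, pos[1] - oy
        d = math.hypot(dx, dy)
        if 0 < d < d0:
            mag = k_rep * (1.0 / d - 1.0 / d0) / d ** 2
            fx += mag * dx / d
            fy += mag * dy / d
    norm = math.hypot(fx, fy) or 1.0
    return (pos[0] + step * fx / norm, pos[1] + step * fy / norm)
```

Iterating this step until the target is reached gives the global path; the paper's fuzzy local planner then adjusts heading near obstacles, which this sketch omits.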

Object detection in financial reporting documents for subsequent recognition

  • Sokerin, Petr; Volkova, Alla; Kushnarev, Kirill
    • International Journal of Advanced Smart Convergence, v.10 no.1, pp.1-11, 2021
  • Document page segmentation is an important step in building a quality optical character recognition module. The study reviewed existing work on page segmentation and focused on developing a segmentation model with greater practical value for an organization, as well as broad capabilities for managing model quality. The main problems of document segmentation were highlighted, including complex backgrounds of intersecting objects. The detection classes included not only the classic text, table, and figure, but also additional types such as signature, logo, and table without borders (or with partially missing borders), posing the non-trivial task of detecting non-standard document elements. The authors compared existing neural network architectures for object detection based on published research data; the most suitable was RetinaNet. To enable quality control of the model, a method based on neural network modeling with the RetinaNet architecture is proposed. Several models were built, and their quality was assessed on the test sample using the mean Average Precision (mAP) metric. The best result among the constructed algorithms was shown by a model comprising four neural networks: the first focused on detecting tables and borderless tables, the second on seals and signatures, the third on pictures and logos, and the fourth on text. The analysis revealed that the four-network approach showed the best results for most detection classes on the test sample, in line with the objectives of the study. The proposed method can be used to recognize other objects. A promising direction for further analysis is the segmentation of tables, with functionally distinct areas of a table acting as classes: heading, cell with a name, cell with data, empty cell.
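The four-detector ensemble the abstract describes can be sketched as a route-and-merge step: each detector is responsible for a disjoint subset of classes, and the page-level result is the union of their outputs. The class grouping, dictionary format, and model names below are illustrative assumptions; the RetinaNet detectors themselves are stubbed out.

```python
# Each specialized model covers a disjoint group of document-element classes.
# These group names and class labels are illustrative, not the paper's schema.
CLASS_GROUPS = {
    "tables": {"table", "borderless_table"},
    "stamps": {"seal", "signature"},
    "graphics": {"figure", "logo"},
    "text": {"text"},
}

def merge_detections(per_model_outputs):
    """Merge the outputs of the specialized detectors into one page result.

    per_model_outputs maps a model name to a list of detections, each a dict
    with at least a 'label' key. A detection is kept only if its class belongs
    to the group that model is responsible for, so stray predictions outside a
    model's specialty are dropped.
    """
    merged = []
    for model_name, detections in per_model_outputs.items():
        allowed = CLASS_GROUPS[model_name]
        merged.extend(d for d in detections if d["label"] in allowed)
    return merged
```

Per-class mAP can then be computed on the merged list exactly as for a single detector, which is how the comparison in the study is framed.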

Short-term Scheduling Optimization for Subassembly Line in Ship Production Using Simulated Annealing (시뮬레이티드 어닐링을 활용한 조선 소조립 라인 소일정계획 최적화)

  • Hwang, In-Hyuck; Noh, Jac-Kyou; Lee, Kwang-Kook; Shin, Jon-Gye
    • Journal of the Korea Society for Simulation, v.19 no.1, pp.73-82, 2010
  • Productivity improvement is considered one of the hottest topics in international shipyards due to the increasing volume of orders. To improve line productivity, shipbuilders have been researching and developing new work methods, process automation, advanced planning and scheduling, and so on. This research presents an optimization approach to short-term scheduling of subassembly lines. The subassembly line scheduling problem turns out to be a non-deterministic polynomial-time (NP) problem with regard to the SKID pattern's sequence and the worker assignment to each station. The problem was solved with simulated annealing, one of the meta-heuristic methods; the algorithm avoids local minima by accepting changed results with a probability function. The optimization result was compared with that of a discrete-event simulation to identify the pros and cons of each. This paper will help planners with scheduling and decision-making by providing a basis for evaluation.
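The acceptance rule mentioned in the abstract (accepting worse schedules with a probability function so the search can escape local minima) is the core of simulated annealing. A generic sketch over a job sequence follows; the neighborhood move (a pairwise swap), the temperature schedule, and all parameter values are illustrative, not the paper's settings.

```python
import math
import random

def anneal(seq, cost, t0=100.0, cooling=0.995, t_min=1e-3, rng=None):
    """Simulated annealing over a sequence of jobs.

    seq: initial ordering; cost: callable returning a scalar to minimize.
    Neighbor = swap two positions; a worse neighbor is accepted with
    probability exp(-delta / T), which shrinks as the temperature T cools.
    """
    rng = rng or random.Random(0)  # seeded for reproducibility
    cur = list(seq)
    best = list(seq)
    cur_c = best_c = cost(cur)
    t = t0
    while t > t_min:
        i, j = rng.sample(range(len(cur)), 2)
        cand = list(cur)
        cand[i], cand[j] = cand[j], cand[i]       # neighborhood move: swap
        cand_c = cost(cand)
        # Accept improvements always; accept worse moves probabilistically.
        if cand_c < cur_c or rng.random() < math.exp((cur_c - cand_c) / t):
            cur, cur_c = cand, cand_c
        if cur_c < best_c:
            best, best_c = list(cur), cur_c
        t *= cooling                               # geometric cooling
    return best, best_c
```

In the paper's setting, `seq` would encode the SKID sequence and worker assignment, and `cost` would come from the schedule evaluation; here any scalar objective works.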

The Embodiment of a Performer and Character: Psychophysical Pathway to the Practical Attunement of a Performer's Body

  • BongHee Son
    • International Journal of Internet, Broadcasting and Communication, v.16 no.2, pp.68-74, 2024
  • This thesis explores the embodiment of a performer and a character/role, specifically by examining what the term "character" implies for the performer's bodily training and what happens to the body through that training. First, the research investigates the relationship between a performer and a character, centred on the performer's bodily experience through training and/or studio work. From the performer's perspective, the concept and practical approach of a character essentially includes and signifies all the given circumstances of a specific play, which must be acknowledged and then inhabited through the performer's body. That is, the internal structure of the text parallels the articulation and development of the spine of a specific character; this substance leads the performer's body to organic action corresponding to what the character needs and wants to obtain through a series of moments on stage. Here, we argue that purposeful action, as a process and result of applying and inhabiting this substance, enhances the performer's body as the whole being participates in the given environment, within which the body can work or function as an integrated oneness. Second, at the most fundamental level, both the ethics of acting and the central task of a performer remind us of the significance of allowing, and thereby experiencing, subtle bodily movement, namely responses to visible or invisible stimuli from inside and outside the body; at the same time, such a journey of self-discovery empowers the performer to explore new potential possibilities. Finally, the research findings suggest that these practical insights need to be acknowledged as a point of departure through which the quality of a performer's body is cultivated as a changeable wholeness, in order to be present on stage.

Effect of a PNF Intervention Strategy with the ICF Tool Applied to a Patient with Bilateral Total Hip Replacement Walking a Crosswalk (양측 엉덩관절 전치환술 환자의 횡단보도 걷기 개선을 위해 ICF Tool을 적용한 PNF 중재전략: 사례보고 )

  • Jin-cheol Kim; Jae-heon Lim
    • Journal of the Korean Society of Physical Medicine, v.19 no.1, pp.95-105, 2024
  • PURPOSE: This study aimed to utilize the International Classification of Functioning, Disability, and Health (ICF) tool to identify a problem list and explore the effects of proprioceptive neuromuscular facilitation (PNF) interventions for improving the crosswalk performance of a patient who had undergone bilateral total hip arthroplasty. METHODS: The subject was a 43-year-old male who had undergone bilateral hip arthroplasty. To address his functional status, a clinical decision-making process was carried out in the order of examination, evaluation, diagnosis, prognosis, intervention, and outcome. Patient information during the examination was collected using the ICF core set. The evaluation listed the items of each problem using the ICF assessment sheet and identified the interaction between activity limitations and the impairment level. The diagnosis explicitly described the causal relationships derived from the evaluation using ICF terminology. The prognosis presented goals at the activity and participation levels, along with body function and structure goals, that needed to be achieved for the individual's functional status. The intervention approached the problems through the four components of PNF (philosophy, basic principles and procedures, techniques, and patterns) in an indirect-direct-task sequence. Results were compared before and after the intervention using the ICF evaluation display. RESULTS: The primary activity limitation, the time taken to walk across the crosswalk, improved, and the trunk's counter-rotation and the weight-bearing capacity of both lower limbs, the impairment-level indicators, were enhanced. CONCLUSION: This study suggests that PNF intervention strategies can serve as a positive approach for improving crosswalk walking in patients with bilateral hip arthroplasty.

Knowledge graph-based knowledge map for efficient expression and inference of associated knowledge (연관지식의 효율적인 표현 및 추론이 가능한 지식그래프 기반 지식지도)

  • Yoo, Keedong
    • Journal of Intelligence and Information Systems, v.27 no.4, pp.49-71, 2021
  • Users who intend to utilize knowledge to actively solve given problems proceed with cross-wise and sequential exploration of knowledge associated by certain criteria, such as content relevance. A knowledge map is a diagram or taxonomy giving an overview of the knowledge currently managed in a knowledge base, and it supports users' knowledge exploration based on the relationships between pieces of knowledge. A knowledge map must therefore be expressed in networked form by linking related knowledge according to certain types of relationships, and should be implemented with technologies or tools specialized in defining and inferring those relationships. To this end, this study suggests a methodology for developing a knowledge graph-based knowledge map using a graph DB, known to be well suited to expressing and inferring the entities stored in a knowledge base and the relationships between them. The procedures of the proposed methodology are modeling graph data; creating nodes, properties, and relationships; and composing knowledge networks by combining the identified links between knowledge. Among the various graph DBs, Neo4j is used in this study for its credibility and applicability, demonstrated through a wide range of application cases. To examine the validity of the proposed methodology, a knowledge graph-based knowledge map is implemented on the graph DB, and a performance comparison test is performed by applying a previous study's data to check whether this study's knowledge map yields the same level of performance. The previous study built a process-based knowledge map using ontology technology, identifying links between related knowledge based on the sequences of tasks producing, or activated by, knowledge.
In other words, since a task is activated by knowledge as an input and also produces knowledge as an output, input and output knowledge are linked as a flow through the task. And since a business process is composed of affiliated tasks fulfilling the purpose of the process, the knowledge networks within a business process can be derived from the sequences of the tasks composing the process. Therefore, using Neo4j, processes, tasks, and knowledge, as well as the relationships among them, are defined as nodes and relationships so that knowledge links can be identified from the task sequences. The resulting knowledge network, obtained by aggregating the identified knowledge links, is a knowledge map with the functionality of a knowledge graph, so its performance needs to be tested against the level of the previous study's validation results. The performance test examines two aspects, the correctness of knowledge links and the possibility of inferring new types of knowledge: the former is examined using 7 questions, and the latter is checked by extracting two new types of knowledge. As a result, the knowledge map constructed through the proposed methodology showed the same level of performance as the previous one, and handled knowledge definition as well as knowledge-relationship inference more efficiently. Furthermore, compared to the previous study's ontology-based approach, this study's graph DB-based approach showed more beneficial functionality: intensively managing only the knowledge of interest, dynamically defining knowledge and relationships to reflect various meanings from situations to purposes, agilely inferring knowledge and relationships through Cypher-based queries, and easily creating new relationships by aggregating existing ones.
This study's artifacts can be applied to implement user-friendly knowledge exploration reflecting the user's cognitive process toward associated knowledge, and can further underpin the development of an intelligent knowledge base that expands autonomously through the inference-driven discovery of new knowledge and relationships. Beyond this, the study has an immediate benefit for implementing the networked knowledge map essential to contemporary users searching for the right knowledge to use.
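The link-inference rule described above (input knowledge is connected to output knowledge through the task between them) can be sketched without a graph DB. The pure-Python version below, and the Cypher pattern included as a string, use illustrative task and knowledge names and an assumed node/relationship schema, not the paper's actual Neo4j model.

```python
def knowledge_links(process):
    """Infer knowledge links from a task sequence.

    process: ordered list of {'task', 'inputs', 'outputs'} dicts, one per
    task in a business process. Each task links every piece of knowledge it
    consumes to every piece it produces, and chaining tasks whose outputs
    feed later inputs yields the process-level knowledge network.
    Returns (input_knowledge, task, output_knowledge) triples.
    """
    links = []
    for step in process:
        for k_in in step["inputs"]:
            for k_out in step["outputs"]:
                links.append((k_in, step["task"], k_out))
    return links

# An equivalent Cypher pattern over a graph DB (labels and relationship
# types here are assumptions for illustration, not the paper's schema):
CYPHER = """
MATCH (a:Knowledge)-[:INPUT_TO]->(t:Task)-[:PRODUCES]->(b:Knowledge)
RETURN a.name AS input, t.name AS task, b.name AS output
"""
```

The dictionary version makes the inference rule explicit; the Cypher form shows why a graph DB can answer the same question in a single declarative query.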

Hardware Approach to Fuzzy Inference―ASIC and RISC―

  • Watanabe, Hiroyuki
    • Proceedings of the Korean Institute of Intelligent Systems Conference, 1993.06a, pp.975-976, 1993
  • This talk presents an overview of the author's research and development activities on fuzzy inference hardware, pursued through two distinct approaches. The first approach uses application-specific integrated circuit (ASIC) technology: the fuzzy inference method is implemented directly in silicon. The second approach, which is in its preliminary stage, uses a more conventional microprocessor architecture; here, a quantitative technique used by designers of reduced instruction set computers (RISC) is applied to modify a microprocessor architecture. In the ASIC approach, we implemented the most widely used fuzzy inference mechanism directly on silicon. The mechanism is based on the max-min compositional rule of inference and Mamdani's method of fuzzy implication. Two VLSI fuzzy inference chips were designed, fabricated, and fully tested, both in a full-custom CMOS technology. The second and more elaborate chip was designed at the University of North Carolina (UNC) in cooperation with MCNC. Both digital fuzzy inference chips had multiple datapaths for rule evaluation, and they executed multiple fuzzy if-then rules in parallel. The AT&T chip is the first digital fuzzy inference chip in the world. It ran with a 20 MHz clock and achieved approximately 80,000 Fuzzy Logical Inferences Per Second (FLIPS). It stored and executed 16 fuzzy if-then rules; since it was designed as a proof-of-concept prototype, it had a minimal amount of peripheral logic for system integration. The UNC/MCNC chip consists of 688,131 transistors, of which 476,160 are used for RAM. It ran with a 10 MHz clock, has a 3-stage pipeline, and initiates a new inference every 64 cycles, achieving approximately 160,000 FLIPS. The new architecture has the following important improvements over the AT&T chip: programmable rule-set memory (RAM); on-chip fuzzification by table lookup; on-chip defuzzification by a centroid method; a reconfigurable architecture for processing two rule formats; and RAM/datapath redundancy for higher yield. It can store and execute 51 if-then rules of the format: IF A and B and C and D THEN Do E and Do F. With this format, the chip takes four inputs and produces two outputs. By software reconfiguration, it can store and execute 102 if-then rules of the simpler format IF A and B THEN Do E, using the same datapath; with this format the chip takes two inputs and produces one output. We have built two VME-bus board systems based on this chip for Oak Ridge National Laboratory (ORNL). The board is now installed in a robot at ORNL, where researchers use it for experiments in autonomous robot navigation. The fuzzy logic system board places the fuzzy chip into a VMEbus environment. High-level C language functions hide the operational details of the board from the applications programmer, who treats rule memories and fuzzification function memories as local structures passed as parameters to the C functions. ASIC fuzzy inference hardware is extremely fast, but limited in generality: many aspects of the design are limited or fixed. We have therefore proposed designing a fuzzy information processor as an application-specific processor using a quantitative approach developed by RISC designers. In effect, we are interested in evaluating the effectiveness of a specialized RISC processor for fuzzy information processing. As a first step, we measured the possible speed-up of a rule-based fuzzy inference program from the introduction of specialized instructions, i.e., min and max instructions; the minimum and maximum operations are heavily used in fuzzy logic applications as fuzzy intersection and union.
We performed measurements using a MIPS R3000 as a base microprocessor. The initial result is encouraging: we can achieve as high as a 2.5x increase in inference speed if the R3000 had min and max instructions, which are also useful for speeding up other fuzzy operations such as bounded product and bounded sum. An embedded processor's main task is to control some device or process, and it usually runs a single, fixed program, so specializing an embedded processor for fuzzy control is very effective. Table I shows the measured inference speed of a MIPS R3000 microprocessor, a fictitious MIPS R3000 with min and max instructions, and the UNC/MCNC ASIC fuzzy inference chip; the software used on the microprocessors was a simulator of the ASIC chip. The first row is the computation time in seconds for 6000 inferences using 51 rules, where each fuzzy set is represented by an array of 64 elements; the second row is the time required for a single inference; the last row is the fuzzy logical inferences per second (FLIPS) measured for each device. There is a large gap in run time between the ASIC and software approaches even with a specialized fuzzy microprocessor. As for design time and cost, the two approaches represent two extremes: an ASIC approach is extremely expensive. It is therefore an important research topic to design a specialized computing architecture for fuzzy applications that falls between these two extremes in both run time and design time/cost.
TABLE I. INFERENCE TIME BY 51 RULES
                     MIPS R3000 (regular)   MIPS R3000 (with min/max)   ASIC
  6000 inferences    125 s                  49 s                        0.0038 s
  1 inference        20.8 ms                8.2 ms                      6.4 us
  FLIPS              48                     122                         156,250
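The max-min compositional inference that both chips implement in silicon can be sketched in a few lines of software, which also makes clear why min and max instructions dominate the workload: each rule clips its consequent set by the minimum of its antecedent memberships (Mamdani implication), rule outputs combine by maximum, and a centroid defuzzifies the result as on the UNC/MCNC chip. The 64-element discretization mirrors the fuzzy-set arrays mentioned above; the rule shapes in the usage below are illustrative.

```python
def infer(rules, n=64):
    """Max-min compositional inference with centroid defuzzification.

    rules: list of (antecedent_memberships, consequent_set) pairs, where
    antecedent_memberships is a list of membership degrees in [0, 1]
    (one per input, as produced by fuzzification) and consequent_set is a
    length-n discretized fuzzy set. Returns the centroid index of the
    combined output set.
    """
    out = [0.0] * n
    for antecedents, consequent in rules:
        strength = min(antecedents)  # Mamdani implication: min over inputs
        for i in range(n):
            # Clip the consequent by the rule strength, combine rules by max.
            out[i] = max(out[i], min(strength, consequent[i]))
    total = sum(out)
    if total == 0.0:
        return 0.0
    # Centroid defuzzification over the discretized universe 0..n-1.
    return sum(i * v for i, v in enumerate(out)) / total
```

Every inner-loop operation is a min or a max, which is exactly why adding those two instructions to the R3000 yields the measured speed-up.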


Archival Appraisal of Public Records Regarding Urban Planning in Japanese Colonial Period (조선총독부 공문서의 기록학적 평가 -조선총독부 도시계획 관련 공문서군을 중심으로-)

  • Lee, Seung Il
    • The Korean Journal of Archival Studies, no.12, pp.179-235, 2005
  • In this article, the official documents created and issued by the Joseon Governor General office during the Japanese occupation period are appraised from a new perspective based on the macro-appraisal approach developed by Canadian scholars and archivists. Recently, Canadian archival authorities have tended to evaluate the meaning and importance of a particular document by considering the historical situation and background conditions that gave birth to it as a more important factor than the quality and condition of the document itself. Such an approach requires archivists to decide whether to preserve a document based on the meaning, functions, and status of the entity that produced it, or on the meaning of the documentation practice itself, rather than on the actual document. For the appraisal of the Joseon Governor General office's documents concerning the city plans the office devised, this author established four primary tasks crucial to deciding whether a particular theme, event, or ideology should be selected and the documents involving it preserved as important sources on the Korean history of the Japanese occupation period. The four tasks are as follows. First, archivists should study current and past trends in historical research. Archivists, who usually do not have comprehensive access to historical details, must consult historians' studies, and the trends mirrored in them, when selecting important historical events and themes.
Second, archivists should determine the level of importance of the officials who worked inside the Joseon Governor General office, as they were the ones who produced the documents: it is natural to assume that the importance of a document was determined by the functional importance of the official who authorized it and ordered its release. Third, archivists should be well aware of the inner structure and official functions of the Joseon Governor General office, so that their analyses are more appropriate. Fourth, to collect historically important documents involving the Koreans (the Joseon people), archivists should analyze not only the functions of the Joseon Governor General office in general but also the areas of the Office's business in which Japanese officials and Koreans interacted. Analyzing documents only by their apparent importance might lead archivists to miss documents that reflected the Koreans' situation or were related to the general interest of the Korean people. This kind of evaluation should provide the data required to appraise how well the Joseon Governor General office's city-planning function was documented back then, and how well it is preserved today, through a comparative study of the Office's own evaluations of its documentation and the current status of the documents in the custody of the National Archive. The task would also propose the specialized collecting strategy direly needed in establishing a well-designed comprehensive archive.
We should establish a plan regarding documents that the Joseon Governor General office created but that do not survive today, and devise a task model for the primary collecting that will take place in the future.

Effect of Home-Visit Occupational Therapy on Community Dwelling Stroke Survivors: A Case Study (지역사회 거주 뇌졸중 환자의 가정방문 작업치료 효과: 사례 연구)

  • Jeong, Eun-Hwa
    • Therapeutic Science for Rehabilitation, v.9 no.2, pp.87-98, 2020
  • Objective: The purpose of this study was to verify the effectiveness of home-visit occupational therapy in stroke patients. Methods: Two patients with stroke who applied for home-based occupational therapy services at a health center in Seoul were enrolled. The home-visit occupational therapy program evaluated each subject's daily living, task performance, cognitive, and emotional functions, set occupational therapy goals, and planned interventions based on a client-centered approach. The programs consisted of 12 sessions addressing each client's major problems. Results: COPM scores improved in both cases; in Case 2 there were also improvements in MBI and K-MMSE scores, and in Case 1 in the KGDS score. Conclusion: Home-visit occupational therapy was found to be effective in improving the daily activities, cognition, and mental function of stroke patients. During home and community integration, continuous rehabilitation services need to be activated, from institutional rehabilitation to community-based rehabilitation. Active home-visit occupational therapy is needed to promote physical, cognitive, mental, and social access for stroke patients discharged from hospitals.

Correction of Secondary cleft lip-nasal deformity; secondary rhinoplasty in children and adults (구순열 이차비기형의 교정; 아동과 성인에서의 이차 비성형술)

  • Song Gin-Ah; Myung Hoon; Hwang Soon-Jung; Seo Byoung-Moo; Lee Jong-Ho; Choung Pill-Hoon; Kim Myung-Jin; Choi Jin-Young
    • Korean Journal of Cleft Lip And Palate, v.6 no.1, pp.17-25, 2003
  • Correction of the cleft-lip nasal deformity is a difficult task that requires a clear understanding of the associated complex anatomy and function, as well as of the operation timing and the selection of an operation method. Expecting that it would help enhance understanding of the current trend in cleft rhinoplasty, the authors analyzed secondary rhinoplasties performed between 1999 and 2002. In both unilateral and bilateral cleft lip rhinoplasty, we reviewed the timing of repair, the site of correction and its major technique, the incision or approach method, and the autogenous cartilage graft method. None of the patients with a septal deviation had septal surgery; we were active in alar and nasal tip surgery and conservative in correcting septal and dorsal deformities. For children we used a conservative method, but for adults a radical approach. Most surgeries focused on the esthetic goal, and we concluded that objective evaluation of nasal obstruction is needed for a better and more predictable outcome.
