A Performance Analysis Framework Considering the Hierarchy of Embedded Linux Systems Software Architecture


  • 곽상헌 (Dept. of Computer Engineering, Kookmin University) ;
  • 이남승 (Dept. of Computer Engineering, Kookmin University) ;
  • 이호림 (Dept. of Computer Engineering, Kookmin University) ;
  • 임성수 (Dept. of Computer Engineering, Kookmin University)
  • Received : 2010.01.04
  • Accepted : 2010.04.06
  • Published : 2010.06.15

Abstract

Recent embedded systems are becoming more complex due to their hierarchical software architecture, which includes an operating system. The performance of such complex software cannot be properly understood by analyzing each software layer separately; the combined effects and interactions among all the software layers must be considered. In this paper, we present the design and implementation of a performance analysis framework that enables hierarchical performance analysis of Linux-based embedded systems while taking the interactions among software layers into account. Using the proposed framework, we can obtain useful run-time information about a hierarchical software structure that typically consists of a user-defined function layer, a library function layer, a system call layer, and a kernel event layer. Experimental results show that the proposed framework accurately identifies performance bottlenecks, together with the software layers in which they occur, during the execution of target applications through the accompanying analysis sub-steps: reconstructing actual execution paths, measuring the execution time of each observed event in each software layer, and tracking control flow across the software layers.
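At the system-call and kernel-event layers, the kind of run-time information described above amounts to per-event timings collected as the application runs. As a rough illustration only (not the authors' implementation), the sketch below aggregates per-syscall durations from `strace -T`-style output to expose the dominant syscall; the sample log lines are fabricated for the example:

```python
import re
from collections import defaultdict

# Sample lines in the format produced by `strace -T` (duration in <...> at
# the end of each line); these log lines are invented for illustration.
TRACE = """\
open("/data/db", O_RDONLY) = 3 <0.000042>
read(3, "...", 4096) = 4096 <0.000120>
write(4, "...", 4096) = 4096 <0.013500>
read(3, "...", 4096) = 4096 <0.000110>
write(4, "...", 4096) = 4096 <0.012900>
close(3) = 0 <0.000008>
"""

LINE = re.compile(r'^(\w+)\(.*\)\s*=\s*\S+\s*<([\d.]+)>$')

def syscall_totals(trace_text):
    """Sum the time spent in each observed system call across the trace."""
    totals = defaultdict(float)
    for line in trace_text.splitlines():
        m = LINE.match(line.strip())
        if m:
            totals[m.group(1)] += float(m.group(2))
    return dict(totals)

totals = syscall_totals(TRACE)
bottleneck = max(totals, key=totals.get)
print(bottleneck)  # the syscall consuming the most time: write
```

In this fabricated trace, `write` dominates the elapsed time, which is the kind of per-layer bottleneck the framework's syscall-layer analysis is meant to surface.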

Recent embedded systems have evolved toward complex software hierarchies that include an operating system. To analyze the performance of such software, the entire software hierarchy must be considered, not just a single layer. This paper presents the design and implementation of a performance analysis tool that covers every software layer of a Linux-based embedded system. The proposed technique collects the measurement data needed for whole-stack performance analysis without recompiling applications or libraries. With it, a Linux-based embedded system can be analyzed at the level of the user-defined functions, middleware library functions, kernel system calls, and kernel events triggered by application execution. Experiments confirm that the implemented tool can analyze actual execution paths, the elapsed time of functions and events in each software layer, and the flow of execution across layers, and can thereby locate performance bottlenecks across the entire software stack.
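The user-defined-function layer can be observed in the same spirit: call and return events are recorded as the program runs, without modifying or recompiling the observed code. The sketch below is a minimal Python stand-in for that idea, using the interpreter's profiling hook in place of the paper's binary-level instrumentation; the `workload` and `helper` functions are invented for the example:

```python
import sys
import time

def collect_call_events(func, *args):
    """Run func while recording (event, function-name, timestamp) tuples
    for every Python-level call and return, without modifying func itself."""
    events = []

    def hook(frame, event, arg):
        # 'c_call'/'c_return' (C-level) events are ignored in this sketch.
        if event in ("call", "return"):
            events.append((event, frame.f_code.co_name, time.perf_counter()))

    sys.setprofile(hook)
    try:
        result = func(*args)
    finally:
        sys.setprofile(None)
    return result, events

# Invented workload: a user-level function calling a helper twice.
def helper(n):
    return sum(range(n))

def workload():
    return helper(10) + helper(20)

result, events = collect_call_events(workload)
print(result)  # 45 + 190 = 235
print([(e, name) for e, name, _ in events if name == "helper"])
```

Pairing each `return` timestamp with its matching `call` timestamp yields per-function elapsed times, i.e., the per-layer timing information the abstract refers to, here restricted to the user-function layer.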

