"Behind every stack of books there is a flood of knowledge."
When looking at future embedded systems and their design, especially (but not exclusively) in the multimedia domain, we observe several problems:
To solve these problems we foresee the use of programmable multi-processor platforms with an advanced memory hierarchy, combined with an advanced design trajectory. These platforms may contain different processors, ranging from general-purpose processors to processors highly tuned for a specific application or application domain. This course treats several processor architectures, shows how to program them and generate (compile) code for them, and compares their efficiency in terms of cost, power, and performance. Furthermore, the tuning of processor architectures is treated.
Several advanced multi-processor platforms, combining the discussed processors, are treated. A set of lab exercises complements the course.
This course aims to provide an understanding of the processor architectures that will be used in future multi-processor platforms, including their memory hierarchy, especially in the embedded domain. The treated processors range from general-purpose to highly optimized ones. Trade-offs will be made between performance, flexibility, programmability, energy consumption, and cost. It will be shown how to tune processors in various ways.
Furthermore, this course looks into the required design trajectory, concentrating on code generation, scheduling, and efficient data management (exploiting the advanced memory hierarchy) for high performance and low power. The student will learn to apply a methodology for a step-wise (source code) transformation and mapping trajectory, going from an initial specification to an efficient and highly tuned implementation on a particular platform. The final implementation can be an order of magnitude more efficient in terms of cost, power, and performance.
In this course we treat different processor architectures: DSPs (digital signal processors), VLIWs (very long instruction word architectures, including transport triggered architectures), ASIPs (application-specific instruction-set processors), and highly tuned, weakly programmable processors. In all cases it is shown how to program these architectures. Code generation techniques, especially for VLIWs, are treated, including methods to optimize code at the source or assembly level. Furthermore, the design of advanced data and instruction memory hierarchies is detailed, and a methodology for the efficient use of the data memory hierarchy is discussed.
Most of the topics will be supplemented by hands-on exercises.
For a preliminary schedule see: schedule.
The lecture slides will be made available during the course; see also below.
Papers and other reading material
** Slides (as far as available); these will be updated regularly during the course.
As part of this lecture you have to study a hot topic related to this course and prepare a short slide presentation about this topic.
The slides have to be presented during the oral exam.
Guidelines are as follows:
To become a very good embedded computer architect you have to practice a lot. Therefore, as part of this course, we have put a lot of effort into preparing three interesting lab assignments. For each lab there is a website with all the required documentation and preparation material. These lab assignments can be made on your own laptop, with remote access to our server systems for certain parts.
For every lab you have to write a report, which has to be sent to one of the course assistants.
In the past we had several architecture design space exploration (DSE) labs: one using the Transport Triggered Architecture (TTA) framework, one using the Imagine processor, and one using the AR|T tools. This year we base the first lab on the reconfigurable processor from Silicon Hive.
For this exercise:
In this lab you are asked to program a (multi-)processor platform. In the past we developed various labs:
This year, 2012, we will take an x86 plus a graphics processing unit (GPU) as the platform.
Graphics processing units (GPUs) can contain up to hundreds of processing engines (PEs). They achieve performance levels of hundreds of GFLOPS (10^9 floating-point operations per second). In the past GPUs were very dedicated, not generally programmable, and could only be used to speed up graphics processing. Today they are becoming more and more general purpose. The latest GPUs of ATI and NVIDIA can be programmed in C and OpenCL. For this lab we will use NVIDIA GPUs together with the CUDA (C-based) programming environment. Start by setting up the CUDA environment, studying the available learning materials, and running the example programs.
We added one extensive example program, based on matrix multiplication, which demonstrates various GPU programming optimizations.
You will see that getting something running using CUDA is not so difficult, but getting it running efficiently will take quite some effort.
After studying the example and the learning material you have to perform your own assignment and hand in a small report. The purpose is to use your GPU as efficiently as possible.
All the details about this assignment can be found on the GPU-assignment site.
The assignment was made by Dongrui She and Zhenyu Ye. For questions contact d.she _at_ tue.nl.
When finished, send a small report about your results and the various applied optimizations to Dongrui She.
In this exercise you are asked to optimize a C algorithm using the discussed data management techniques. This should result in an implementation with much improved memory behavior, which improves both performance and energy consumption. In this exercise we mainly concentrate on reducing energy consumption. You need to download the following and follow the instructions.
The 2011 assignment can be found here. The algorithm is based on Harris corner detection.
You will start with a default platform containing two levels of cache. First calculate the results of your code optimizations for this platform. Thereafter you are free to tune the platform for the given application, e.g. by changing the caches, or even by using scratchpad memory (SRAM) instead of, or in addition to, caches.
The examination will be oral, covering the treated course theory, the lab report(s), and the studied articles.
Likely week: 4th week of January 2013.
Grading depends on your results on the theory, the lab exercises, and your presentation.
Interesting processor architectures: