Category Archives: Uncategorized
From July 27th to August 2nd, Prof. Dr. Nicholas Mc Guire (Austria) from DSLab at Lanzhou University held a training course at Shanghai SPECTRUM CONTROL SYSTEMS Corp. The topic of the training was RT-Preempt Embedded Linux.
ANNOUNCEMENT OF SEVENTH NATIONAL STUDENTS’ CONFERENCE ON INFORMATION TECHNOLOGY (NaSCoIT 2013) AND CALL FOR PAPER
Nepal College of Information Technology proudly presents an International IT Conference on ICT for Glocalization on 28th September 2013.
- Abstract Submission: Aug 15, 2013
- Acceptance Notice: Aug 20, 2013
- Draft Submission: Sept 8, 2013
- Final Camera-ready submission: Sept 15, 2013
- Registration Deadline: September 23, 2013
- Conference Day: September 28, 2013
- Venue: Park Village Resort, Budanilkantha, Kathmandu, Nepal
The sixth NaSCoIT was held on 8th August 2009, the fifth on 18th August 2007, the fourth on 20th May 2006, the third on 21st May 2005, the second on 15th May 2004 and the first on 26th April 2003.
Considering the growing interest in IT education in the country, an attempt was made by Nepal College of Information Technology (NCIT), for the first time in Nepal, to provide a common platform for IT students to share their views and ideas by organizing a national-level IT conference. Prof. Suresh Raj Sharma, Hon. Vice Chancellor of Kathmandu University (KU), inaugurated the fourth NaSCoIT. Rt. Hon. Deputy Chairman of the Council of Ministers, Mr. Kirtinidhi Bista, inaugurated the third edition of NaSCoIT, while the Hon. Vice Chancellor of the Nepal Academy of Science and Technology (NAST), Prof. Dayanand Bajracharya, inaugurated the second, and Hon. Vice Chairman of the National Planning Commission, Dr. Shankar Sharma, inaugurated the first NaSCoIT. In these conferences, papers were submitted from within the country as well as from abroad, including institutions like NCIT, NEC, Kathmandu University, Pulchowk Campus, Technical University of Vienna, Jadavpur University and Sikkim Manipal University.
Every effort is being made to include students studying various IT-related subjects in universities/colleges in Nepal as well as abroad.
Information Communication Technology (ICT) for Glocalization is the main theme of Seventh National Students’ Conference on Information Technology (NaSCoIT 2013).
Papers are invited from university/college students (including recent graduates and Ph.D. degree holders) in a wide variety of information technology related areas including, but not limited to:
- Mobile computing
- Cloud computing
- Ubiquitous computing
- Grid computing
- Big data
- Optical communications and networking
- Network management and services
- Semantic web technologies
- Ad-hoc networks
- Security in wireless communication
Paper Submission Guidelines
Papers should be submitted in soft copy, either by e-mail (email@example.com) or on a pen drive or CD, in MS Word or PDF format, formatted as per the guidelines below.
- Target paper size is A4.
- All material on each page should fit within the rectangular area that remains after leaving a margin of 1″ from the top, bottom and left, and a margin of 0.75″ from the right. The text should be set in two equal-sized columns with a spacing of 1 cm in between.
- Paper Heading: Helvetica 18-point bold Font
- Authors’ Names: Helvetica 12-point Font
- Authors’ Affiliations: Helvetica 10-point Font, run across the full width of the page – one column wide.
- Authors’ Phone Number: Helvetica 10-point Font
- Authors’ E-mail Address: Helvetica 12-point Font
- Body Text: 10-point Times Roman Font
- Footnotes: 8-point Times New Roman Font, and justified to the full width of the column.
- References and Citations: 10-point Times New Roman Font, and justified to the full width of the column.
- Headers and Footers: Should not be included
- Page Numbering: Should not be numbered
- Figures and Captions: Tables/Figures/Images in text should be placed as close to the reference as possible. It may extend across both columns to a maximum width of the rectangular text area.
- Section Headings: 12-point Times New Roman bold Font, in all-capitals, flush left, with an additional 6 points of white space above the section head. Sections and subsequent sub-sections should be numbered and flush left. For a section head and a subsection head together (such as Section 3 and subsection 3.1), use no additional space above the subsection head.
- Sub-section Headings: 12-point Times New Roman bold Font with only the initial letters capitalized. (Note: For sub-sections and sub-subsections, a word like “the” or “a” is not capitalized unless it is the first word of the header.)
- Sub-subsection Headings: 11-point Times New Roman italic Font with initial letters capitalized and 6 points of white space above the sub-subsection head.
- Columns on the Last Page: Should be made as close as possible to equal length.
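For authors who prepare the PDF with LaTeX rather than MS Word, the layout above can be approximated with a preamble along the following lines. This is only a sketch under the stated margins; the guideline does not prescribe LaTeX, and the package choices here are assumptions:

```latex
\documentclass[10pt,a4paper,twocolumn]{article}
% 1" margin on top, bottom and left; 0.75" on the right; 1 cm between columns
\usepackage[top=1in,bottom=1in,left=1in,right=0.75in,columnsep=1cm]{geometry}
\usepackage{helvet}   % Helvetica family for the title and author lines
\pagestyle{empty}     % the guideline asks for no headers, footers or page numbers
\begin{document}
% Body text defaults to 10 pt with the class option above, as required.
\section{INTRODUCTION}  % section heads numbered, flush left, all-capitals
Body text in two equal columns.
\end{document}
```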
Link to the workshop that we held in Nepal:
Link to the Conference Call for Paper: http://www.ncit.net.np/content/nascoit-2013-call-paper
GPU/CUDA Programming for High Performance Computing
(in Mandarin, Spring 2013)
Total number of lectures: 18 (3 hours per week)
Programming Assignments: 4
This course is concerned with programming GPUs for general-purpose high performance computing (not for graphics). GPUs have evolved from supporting graphics to providing a computing engine for high performance computing. The world’s fastest compute system, the Tianhe-1A, achieves its performance (2.507 Petaflops) through the use of 7000 GPUs. Many clusters and computer systems are being designed to incorporate GPUs into their compute nodes to achieve orders-of-magnitude speed improvements. In this course, we will learn how to program such systems. The platform can be either a Windows or a Linux system: we will learn how to use Windows systems that have GPUs and appropriate software installed in a departmental computing lab, and also a departmental Linux server that has a high performance 100-core GPU installed. Tentative topics will include:
–History of GPUs leading to their use and design for HPC
–Introduction to the GPU programming model and CUDA, host and device memories
–Basic CUDA program structure, kernel calls, threads, blocks, grid, thread addressing, predefined variables, example code: vector and matrix addition, matrix multiplication
–Using Windows and Linux environments to compile and execute simple CUDA programs.
–Measuring execution time
–Routines called from device.
–Incorporating graphical output.
–Global barrier synchronization.
–Coalesced global memory access
–Shared memory and constant memory usage
–Critical sections and atomics. Example use: counter and histogram programs
–Pinned memory, zero copy memory, multiple GPUs, portable pinned memory
–Optimizing performance using knowledge of warps and other characteristics of GPUs, overlapping computations, effects of control flow
–Parallel algorithms suitable for GPUs, parallel sorting
–Building complex applications, debugging tools
–Hybrid programming incorporating OpenMP and/or MPI with CUDA, GPU clusters, distributed clusters, …
–Possible advanced materials: texture memory, using GPU also for graphics
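As a taste of the “basic CUDA program structure” topic above, a minimal vector-addition sketch in the style the course covers might look like the following (names and launch parameters are illustrative, not actual course material):

```cuda
#include <stdio.h>
#include <cuda_runtime.h>

// Each thread adds one element; blockIdx/blockDim/threadIdx give its index.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];   // guard threads beyond the array end
}

int main(void) {
    const int n = 1024;
    size_t bytes = n * sizeof(float);
    float *h_a = (float *)malloc(bytes), *h_b = (float *)malloc(bytes),
          *h_c = (float *)malloc(bytes);
    for (int i = 0; i < n; i++) { h_a[i] = (float)i; h_b[i] = 2.0f * i; }

    float *d_a, *d_b, *d_c;            // device (GPU) memory
    cudaMalloc(&d_a, bytes); cudaMalloc(&d_b, bytes); cudaMalloc(&d_c, bytes);
    cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, h_b, bytes, cudaMemcpyHostToDevice);

    // 256 threads per block; enough blocks to cover all n elements
    vecAdd<<<(n + 255) / 256, 256>>>(d_a, d_b, d_c, n);
    cudaMemcpy(h_c, d_c, bytes, cudaMemcpyDeviceToHost);

    printf("c[10] = %f\n", h_c[10]);   // 10 + 20 = 30 on a working device
    cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
    free(h_a); free(h_b); free(h_c);
    return 0;
}
```

Compiled with `nvcc`, this exercises the kernel call, thread addressing and host/device memory topics in one short program.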
(in English, Autumn 2013)
Total number of lectures: 18 (3 hours per week)
Number of assignments: 2
This course is planned and developed for graduate students. As multicore CPUs and many-core GPUs become ever more popular, parallel computing platforms are easier to find every day. This course intends to cover multicore CPU and CUDA architectures, and will introduce OpenMP, MPI, CUDA and OpenCL with examples. Students will have opportunities to acquire hands-on programming experience. NVIDIA CUDA and OpenCL will be used to learn GPU programming on NVIDIA and ATI GPUs, and OpenMP and MPI to explore the computational power of multicore CPU clusters. Tentative topics will include:
–Study Multicore CPU and GPU architectures,
–Study network topologies,
–Learn how to write parallel programs using OpenMP, MPI, OpenCL and CUDA
–Study the issues that influence the speedup and efficiency of parallel programs
–Study some parallel algorithms, such as sorting, image processing, graphs, and numerical computation
1) Barry Wilkinson and Michael Allen, “Parallel Programming: Techniques and Applications Using Networked Workstations and Parallel Computers”, 2nd Edition, Prentice Hall
2) Michael J. Quinn, “Parallel Programming in C with MPI and OpenMP”, McGraw-Hill
3) Jason Sanders and Edward Kandrot, “CUDA by Example: An Introduction to General-Purpose GPU Programming”, Addison-Wesley Professional, 2010
4) David B. Kirk and Wen-mei W. Hwu, “Programming Massively Parallel Processors: A Hands-on Approach”, Morgan Kaufmann, 2010
5) Wen-mei W. Hwu (Editor in Chief), “GPU Computing Gems Emerald Edition”, Morgan Kaufmann, 2011
This special track of the Embedded World Conference on Thursday, March 3, 2011 is organized by OSADL’s Safety Coordinator Prof. Nicholas Mc Guire and will focus on the use of Free and Open Source Software for safety critical systems. For a direct link to the related section of the online program of the Embedded World Conference click here.
Call for Papers – Abstract Submission – Submitted Papers
The eRTL release XM-eRTL-4.0, a replacement for the legacy RTLinux/GPL 3.2, is now ready for download as release candidate 1 (XM-eRTL-4.0-rc1).
XM/eRTL, based on the hypervisor XtratuM 1 as well as PaRTiKLe developed at the Universitat Politecnica de Valencia, is now being continued by DSLab at Lanzhou University and has been extended to be a full-featured replacement of RTLinux/GPL.
The wiki of XM/eRTL is http://dslab.lzu.edu.cn/mediawiki
The development tree has been moved to git and is publicly available at:
You can also use:
git clone http://dslab.lzu.edu.cn/xmertl.git
and appropriate infrastructure to allow community interaction, patch submission and repository access is being set up.
The DSLab team will continue to develop and enhance XM-eRTL in the future in tight coordination with the Universitat Politecnica de Valencia, DISCA, based on strong POSIX binding and compatibility with the vanilla Linux kernel as its root domain.
Enhancements in this first release candidate include:
XM-FIFO : FIFO communication extension between RT and non-RT domains
XM-SHM : shared memory module
XM-TRACE: a runtime tracer for XtratuM core and RT domains
XM-DEV : XtratuM device driver domain
Modules under Development:
XM-PPC : still in the test phase; XM-eRTL-4.0 is in the alpha stage
on PowerPC 440 and 405.
XM-MIPS : still in an early development stage; XM-eRTL-4.0 is targeting
support for the Loongson MIPS processors (2F).
The DSLab XtratuM team.
L4eRTL 0.91 and L4eRTL 0.92 have been announced. They provide a POSIX interface and have good real-time performance.
L4eRTL is a real-time virtualization solution based on the L4/Fiasco microkernel; it allows a hard real-time operating system and a soft real-time operating system to coexist.