Archive
Making sure variables are initialized
One source of program bugs is the use of variables before they have been initialized. In C/C++, all static variables are zero-initialized if no initializer is specified, so it is only local variables we need to worry about. Bugs caused by use of uninitialized local variables can be particularly nasty, because the value of such a variable depends on whatever previously occupied the same stack location. Read more…
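As a minimal sketch of the problem (the function and variable names here are my own, not from the post):

    static int call_count;              /* static storage: zero-initialized by the language */

    int sum_readings(const int *readings, int n)
    {
        int total;                      /* automatic storage: NOT initialized */
        for (int i = 0; i < n; ++i) {
            total += readings[i];       /* bug: adds to whatever was left on the stack */
        }
        ++call_count;                   /* fine: guaranteed to have started at 0 */
        return total;                   /* result depends on stale stack contents */
    }

Writing int total = 0; removes the dependence on the stack's previous contents.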
Using and Abusing Unions
The C union type is one of those features that is generally frowned on by those who set programming standards for critical systems, yet is quite often used. MISRA C 2004 rule 18.4 bans them (“unions shall not be used”) on the grounds that there is a risk that the data may be misinterpreted. However, it goes on to say that deviations are acceptable for packing and unpacking of data, and for implementing variant records provided that the variants are differentiated by a common field. Read more…
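A rough sketch of the "variant record with a common field" deviation (the message kinds and field names are illustrative, not taken from the post):

    enum MsgKind { MSG_TEMPERATURE, MSG_PRESSURE };

    struct Message {
        enum MsgKind kind;              /* common discriminant field, always valid */
        union {
            int   temperature_c;        /* meaningful only when kind == MSG_TEMPERATURE */
            float pressure_kpa;         /* meaningful only when kind == MSG_PRESSURE */
        } u;
    };

    float pressure_or_zero(const struct Message *m)
    {
        /* Check the discriminant before touching a union member,
           so the stored data cannot be misinterpreted. */
        return (m->kind == MSG_PRESSURE) ? m->u.pressure_kpa : 0.0f;
    }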
Safer arrays: using a C++ array class
In a previous post, I remarked that arrays in C leave much to be desired, and that in C++ it is better to avoid using naked arrays. You can avoid naked arrays in C++ programming by wrapping them up in a suitable array class instead. The Joint Strike Fighter C++ Coding Standards document takes a similar view; rule 97 in that standard states: Read more…
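A minimal sketch of what such a wrapper might look like (an assumed design, not the class discussed in the post; std::array serves much the same purpose in modern C++):

    #include <cstddef>
    #include <cassert>

    template <typename T, std::size_t N>
    class Array {
    public:
        T &operator[](std::size_t i)             { assert(i < N); return data_[i]; }
        const T &operator[](std::size_t i) const { assert(i < N); return data_[i]; }
        std::size_t size() const                 { return N; }
    private:
        T data_[N];                              // the naked array, hidden behind a checked interface
    };

    Array<int, 8> samples;                       // knows its own size; indexing is bounds-checked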
How (un)safe is pointer arithmetic?
I recognize that this is a controversial topic – if you’re a safety-critical professional using C or C++, I’d be glad to hear your views.
Using explicit pointer arithmetic in critical software is generally frowned upon. MISRA C 2004 rules 17.1 to 17.3 prohibit certain cases of explicit pointer arithmetic that do not give rise to well-defined results. Read more…
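As a sketch of the distinction those rules draw (the identifiers are mine; the commented-out lines show the ill-defined cases):

    #include <stddef.h>

    void pointer_examples(void)
    {
        int buf[10];
        int *p = buf;

        int *one_past   = p + 10;        /* well-defined: one past the end of buf */
        ptrdiff_t count = one_past - p;  /* well-defined: both pointers address buf */

        /* int *bad = p + 11;            -- undefined: goes beyond the end of the array   */
        /* int x; ptrdiff_t d = &x - p;  -- undefined: operands are not in the same array */
        (void)count;
    }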
Using Unicode in embedded software
Unicode provides a single character set that can represent nearly all of the world’s written languages. Mainstream software development has largely moved to Unicode already, helped by the fact that in modern languages such as Java and C#, type char is defined to be a Unicode character. However, in C a char is invariably 8 bits on modern architectures, and the associated character set is ASCII. Does this matter, for embedded software? Read more…
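For example, Unicode text can still be stored in plain 8-bit chars if it is encoded as UTF-8; a minimal sketch (the string contents are my own):

    #include <stdio.h>

    int main(void)
    {
        /* "25 °C" encoded as UTF-8: the degree sign U+00B0 becomes the two
           bytes 0xC2 0xB0, so the whole string fits in ordinary 8-bit chars. */
        const char reading[] = "25 \xC2\xB0" "C";
        printf("%s\n", reading);         /* a UTF-8 aware console prints: 25 °C */
        return 0;
    }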