I got a new affair

A few months ago a colleague of mine pointed me to Rust.

It took some time, but it has now superseded D as my favorite programming language. Even though it is not as mature as D, I especially love its error handling concept.

And to my knowledge it is the only language that can describe the lifetimes of objects. Not even Ada can do this.

It feels like this language really scales, i.e. you can use it for small embedded projects as well as for really big backend server applications. Go, give it a try!

Libraries are Evil

We all know that reading books can be dangerous, but this is not why software libraries are evil 😉

There is a different reason for this.

Let’s assume we have a module we would like to create a library from:

module libx;

struct X {
    version(B) {
        int y = 42;  // this member only exists when compiled with -version=B
    }
    int z = 3;
}

void doX(ref X x) {
    version(B) {
        x.y = 13;
    }
    x.z = 5;
}

The module libx provides a structure X and a function doX. However, both behave differently depending on the version they are compiled with (you can think of B as e.g. a debug version).

Now to the program using the library:

int main() {
    import libx;
    import std.stdio;

    X myX;
    version (B) {
        writeln("y ", myX.y);
    }
    writeln("z ", myX.z);
    doX(myX);
    version (B) {
        writeln("y ", myX.y);
    }
    writeln("z ", myX.z);
    return 0;
}

Even though our program is aware of the different versions, building the library and the program that uses it with different version flags leads to really hard-to-debug errors: the two binaries disagree about the size and the member offsets of X, so the program reads and writes the wrong bytes.
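To make this concrete, here is a minimal sketch (the module name checklayout and the concrete sizes are my own illustration, assuming a 4-byte int) that prints the layout each build bakes in. Compile it once with -version=B and once without:

module checklayout;

import libx;
import std.stdio;

void main() {
    // Each binary hard-codes its own idea of X's layout:
    //   with    -version=B : X contains y and z -> X.sizeof is 8
    //   without -version=B : X contains only z  -> X.sizeof is 4
    writeln("X.sizeof = ", X.sizeof);
}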

You could argue that the problem is the multi-version library. However, the issue is even worse for compiler options. Just think of the options that define the layout of structures in memory, i.e. the alignment and padding of their members.
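Here is a minimal sketch of how much packing alone changes a layout (the struct names Padded and Packed are made up, and the exact numbers assume a 4-byte int with default alignment):

import std.stdio;

struct Padded {
    byte flag;
    int value;   // placed at offset 4 with default alignment
}

struct Packed {
    align(1):
    byte flag;
    int value;   // placed at offset 1 when the members are packed
}

void main() {
    writeln(Padded.value.offsetof, " vs ", Packed.value.offsetof);  // typically 4 vs 1
    writeln(Padded.sizeof, " vs ", Packed.sizeof);                  // typically 8 vs 5
}

A library built with one of these layouts and a program built with the other will silently access the wrong offsets.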

I want you to be able to sleep at night, so don’t even start thinking about dynamic link libraries 😉

Summary: If you want a program to work, do not use pre-compiled libraries. Always compile everything with the same compiler and options in one run and statically link every object together.

Sliding Windows

Let’s assume you would like to sum up all k-tuples in an array like this:

   [0, 1, 2, 3, 4, 5, 6]
=> [[0, 1], [1, 2], [2, 3], [3, 4], [4, 5], [5, 6]]
=> [1, 3, 5, 7, 9, 11]  (here k is 2)

In a language with functional programming elements (like D), you could do the following:

import std.algorithm : map, min, sum;
import std.range : generate, take;

auto i = 0;
auto sums = generate!(() => inputArray[i .. min(i++ + k, inputArray.length)])
    .take(inputArray.length - k + 1)  // one window per valid start index
    .map!(sum);                       // re-sums every window from scratch

However, for a large inputArray and a large k this takes an unreasonably long time, since the sum is recalculated from scratch for every tuple. The following version updates the sum incrementally instead:

import std.algorithm : sum;

auto v = inputArray[0 .. k].sum;
// Do something with v, the sum of the first k-tuple, here.
foreach (i; 0 .. inputArray.length - k) {
    v -= inputArray[i];      // drop the element that leaves the window
    v += inputArray[i + k];  // add the element that enters the window
    // Do something with v, the sum of the (i + 1)-th k-tuple, here.
}

This speeds things up a lot.
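As a quick sanity check, here is a small self-contained sketch of the rolling-sum loop applied to the array from the example above (the helper name slidingSums is made up for illustration):

import std.algorithm : sum;
import std.stdio : writeln;

int[] slidingSums(int[] inputArray, size_t k) {
    auto result = new int[](inputArray.length - k + 1);
    auto v = inputArray[0 .. k].sum;
    result[0] = v;
    foreach (i; 0 .. inputArray.length - k) {
        v -= inputArray[i];
        v += inputArray[i + k];
        result[i + 1] = v;
    }
    return result;
}

void main() {
    writeln(slidingSums([0, 1, 2, 3, 4, 5, 6], 2));  // [1, 3, 5, 7, 9, 11]
}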

You may also like to have a look at this question on stackoverflow.com.

Fight Club

Over the weekend I discovered codefights.com.

At work, I have to read a lot of code, but I rarely get the chance to do some of the creative coding work myself. So this is a welcome activity for the evenings…

I am not much into the competition thing, but this is a nice opportunity to improve my D skills.

My Favorite

My favorite programming language, the one I would use these days for creating embedded or large-scale software, is D.

C++ brings more libraries and tool support, and its type safety increased tremendously with C++11 and later. However, the syntax (of the newer, useful features) and the required usage of the preprocessor are awful.

I tried learning D by implementing the ChaCha algorithm. You can find it on my GitHub site.
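To give an impression of what such an exercise involves, here is a generic sketch of the ChaCha quarter-round as specified in RFC 7539 (this is not code taken from that repository):

// Rotate a 32-bit word left by n bits (0 < n < 32).
uint rotl(uint x, uint n) {
    return (x << n) | (x >> (32 - n));
}

// The ChaCha quarter-round: it mixes four words of the 16-word state.
void quarterRound(ref uint a, ref uint b, ref uint c, ref uint d) {
    a += b; d ^= a; d = rotl(d, 16);
    c += d; b ^= c; b = rotl(b, 12);
    a += b; d ^= a; d = rotl(d, 8);
    c += d; b ^= c; b = rotl(b, 7);
}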

Why do things go wrong?

In discussions about software safety you often end up arguing about something that actually comes down to a fault model of software.

In this post I would like to try to sketch such a fault model.

The most obvious class is logic faults: the software functionally does not do what it is intended to do. We could also call this fault class algorithmic faults. Examples are division by zero, uninitialized variables, and faults in e.g. state machines.

The next, and very bad, class is memory management faults. These are wild pointers (pointing to something invalid), dangling pointers (used after the object they pointed to has ceased to exist), buffer overflows, and misunderstandings about memory ownership.
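A minimal sketch of a dangling pointer (using the C allocator directly, so the D garbage collector does not get in the way):

import core.stdc.stdlib : free, malloc;
import std.stdio : writeln;

void main() {
    int* p = cast(int*) malloc(int.sizeof);
    *p = 42;
    free(p);

    // p still holds the old address, but the memory behind it is gone.
    // The following read is undefined behaviour: it may print 42, print
    // garbage, or crash, which is exactly what makes this class so nasty.
    writeln(*p);
}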

Recently, we also stumbled over a non-obvious issue created by integer promotion. This represents another class of potential issues. It may be specific to C; however, I guess even a more type-safe language may have similar problems, e.g. when casting types.
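A minimal sketch of the kind of surprise meant here, written in D, which inherits C's integer promotion rules (the concrete values are my own example):

import std.stdio : writeln;

void main() {
    ubyte a = 200;
    ubyte b = 100;

    // Both operands are promoted to int before the addition,
    // so the comparison sees 300, not a wrapped-around 8-bit value.
    writeln(a + b);        // 300
    writeln(a + b > 255);  // true

    // Forcing the result back into 8 bits silently wraps around.
    auto c = cast(ubyte)(a + b);
    writeln(c);            // 44
}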

Introducing parallel processing (e.g. multi-threaded programming, use of multiple cores or even interrupt handling) creates two new classes of potential faults in software: data consistency problems and locking issues (live-lock, dead-lock).
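A minimal sketch of a data consistency problem (the counter example is my own illustration): two threads increment a shared counter without any synchronization, and updates get lost.

import core.thread : Thread;
import std.stdio : writeln;

__gshared int counter;  // visible to all threads, but not protected by any lock

void worker() {
    foreach (_; 0 .. 1_000_000) {
        ++counter;  // read-modify-write without synchronization: a data race
    }
}

void main() {
    auto t1 = new Thread(&worker);
    auto t2 = new Thread(&worker);
    t1.start();
    t2.start();
    t1.join();
    t2.join();
    writeln(counter);  // usually well below 2_000_000, and different on every run
}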

Faults that should not be considered in discussions about software faults are reliance on implementation-specific behavior (it can be prevented by static code analysis, or better: just don’t do it!) and hardware faults (e.g. single event upsets). Software can detect such hardware faults, but they are not caused by software.

Maybe one day I will add a page here describing those faults in detail and think more about completeness.