I have been programming since 1982, which I guess shows my age. But in those 36 years, I have never designed or implemented a programming language. The current anti-C++ sentiment on Twitter made me ponder what a better language would look like. So let's take the first tiny steps towards eliminating that "design a programming language" bucket-list item.
Let's start with what people jokingly call the hardest part, but which is really the easiest part: the name. As I want my hypothetical language to succeed the prince of all programming languages, C, it only follows that I should name it as a successor to C. Seeing that C++ and D are already taken, I see no other option than to call it 10. Or, for better Googlability: 10Lang.
If you look at C, you can see that the code is modeled pretty closely after the hardware. Things in the language often have a direct representation in transistors. And when they don't, they are at least not as remote from the silicon as constructs in more complex, more abstract languages are.
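To make that mapping concrete, here is a trivial sketch; the exact instructions will of course vary per compiler and target, so read the comments as a rough indication only.

```c
#include <stddef.h>

/* Sums an array of floats. On a typical target, each construct
 * maps almost one-to-one onto machine operations. */
float sum(const float *p, size_t n)
{
    float acc = 0.0f;               /* lives in a register  */
    for (size_t i = 0; i < n; ++i)  /* compare + branch     */
        acc += p[i];                /* indexed load + add   */
    return acc;
}
```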
What would a good programming model be for today's hardware? For that, let's just look at the main differences between a 1970s processor and a 2018 processor.
One big difference is that today's CPU has relatively slow memory. And by slow, I mean very slow. The CPU has to wait an eternity for data that is not in a register or a cache. This speed discrepancy means that today's CPU can be crippled by things as simple as branches, virtual function calls, or irregular data access. What could we do to lead the programmer's mind away from OOP or AoS (Array-of-Structures) thinking, and naturally guide her into the SoA, or Structure-of-Arrays, approach?
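To illustrate the two layouts in plain C (the particle fields are just a made-up example): an AoS loop that only updates positions still drags the unused velocity fields through the cache, while the SoA layout lets every fetched cache line be fully used.

```c
#define N 1024

/* Array-of-Structures: one struct per particle. A loop over the
 * positions also pulls the interleaved velocities into the cache. */
struct particle { float x, y, z, vx, vy, vz; };
struct particle aos[N];

/* Structure-of-Arrays: one array per field. A loop over x touches
 * nothing but x values, which also suits SIMD loads nicely. */
struct particles
{
    float x[N], y[N], z[N];
    float vx[N], vy[N], vz[N];
};
struct particles soa;
```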
The next big difference between that 1970s processor and today's is the SIMD unit. It's probably one of the most distinctive features of a modern processor, and it dictates what the register file looks like. So, if we are going to model the programming language after the transistors, then there really are no two ways about it...
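For a taste of what such a unit does, here is how eight float additions collapse into one AVX instruction with today's C intrinsics (compile with -mavx; the pointers are assumed to be 32-byte aligned):

```c
#include <immintrin.h>

/* Adds eight pairs of floats with a single AVX instruction.
 * dst, a and b must point to 32-byte aligned storage. */
void add8(float *dst, const float *a, const float *b)
{
    __m256 va = _mm256_load_ps(a);                /* load 8 floats  */
    __m256 vb = _mm256_load_ps(b);                /* load 8 floats  */
    _mm256_store_ps(dst, _mm256_add_ps(va, vb));  /* 8 adds at once */
}
```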
Modern processors are complex monstrosities. Because we don't want to stall on branches, CPUs go to tremendous efforts to predict them. At what cost? At the cost of CPU vulnerabilities like Spectre. Branches hinder efficiency. Instead of making them faster with prediction, why can't we just focus on reducing them instead?
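Reducing branches is often surprisingly mechanical. A conditional assignment can become a select or a min/max instead of a jump, and SIMD hardware has dedicated instructions for exactly this. A small sketch, again assuming AVX:

```c
#include <immintrin.h>

/* Scalar version: the conditional may compile to a branch. */
float clamp_scalar(float x)
{
    return x < 0.0f ? 0.0f : x;
}

/* Branchless SIMD version: clamps 8 floats to zero with a single
 * max instruction, so there is nothing to mispredict. */
__m256 clamp8(__m256 x)
{
    return _mm256_max_ps(x, _mm256_setzero_ps());
}
```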
With these three main design decisions, it should be possible to sketch out a programming language: a C-like language, but for arrays and SIMD hardware. And I wonder whether it would be possible to implement a rudimentary prototype using just a preprocessor, by having the 10Lang code translated into plain C with the help of the immintrin.h header file.
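No syntax exists yet, so purely as a strawman: a vector-native statement in 10Lang could look like the commented line below, which a preprocessor could then expand into plain C plus intrinsics. Everything about this snippet, including the slice notation, is hypothetical.

```c
#include <immintrin.h>

/* Hypothetical 10Lang source (invented syntax):
 *
 *     c[:] = a[:] + b[:];
 *
 * A possible expansion into C, assuming n is a multiple of 8: */
void vec_add(float *c, const float *a, const float *b, int n)
{
    for (int i = 0; i < n; i += 8)
        _mm256_storeu_ps(c + i,
                         _mm256_add_ps(_mm256_loadu_ps(a + i),
                                       _mm256_loadu_ps(b + i)));
}
```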
But for now, it is feedback time. After reading this, it would be great if you could drop a note in the comments. A folly? An exercise worth pursuing? Let me know!