I was just reading a blurb that said Microsoft wanted to get rid of all the C/C++ code in the kernel and replace it with Rust, by 2030. Is it even possible to write a Windows kernel driver in Rust currently
Jan
Sure. Rust compiles to machine code binaries, just like C and C++.
In my opinion, this move makes good sense. There are, roughly speaking, two classes of compiled languages. In one class, the compiler lets you do whatever you want, and you spend your life debugging. That’s C and, to a lesser extent, C++. In the other class, you spend a great deal of time battling the compiler, but once you have satisfied your compiler, the code is quite likely to work. That’s Rust (and Pascal). Memory ownership in Rust is an explicit first-class concept, making accidents much less likely.
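A minimal sketch of what "memory ownership as a first-class concept" means in practice (illustrative names, not from the thread). The commented-out line is the kind of use-after-move bug that C would happily compile and Rust rejects before the program ever runs:

```rust
fn main() {
    let buffer = String::from("kernel data");
    let view = &buffer;          // shared borrow: read-only access, owner unchanged
    println!("{}", view);

    let moved = buffer;          // ownership moves to `moved`
    // println!("{}", buffer);   // compile error: borrow of moved value `buffer`
    println!("{}", moved);       // the new owner frees the String when it goes out of scope
}
```

The compiler fight the post describes is exactly this: code that would be a dangling access in C simply doesn't build.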
Sure. There is even a WDK crate, built with bindgen. You have access to everything (except deprecated functions) that the normal WDK offers.
They both compile to machine code, so they both do the job. There is nothing you can do in C that you can't do in Rust. It will take some time to convert all that C/C++ code to Rust, but yeah, they can.
Call me a cynic, but I doubt that a significant portion of Windows will ever be ported to rust or any other language. Think about the cumulative effort required to do this. Not just to change the code itself, but to retrain the staff, develop new tooling etc. And then the deployment of migrated components
And to what end? A change of language by itself won’t eliminate bugs. Even if the new language has less of a propensity to bugs of a certain class, the change in and of itself is likely to cause bugs (at least edge case behaviour changes that affect 3rd party code). And the result is no more commercially marketable than it is now. Might it have a lower maintenance cost? Maybe. But there are plenty of stability and security issues that have nothing to do with memory safety.
Do you still have to run the static verifier and include the results for WHQL certification? Will THAT work for Rust code? Seems doubtful.
Jan
The posts I saw about Rust were Principal Software Engineer (CoreAI) | Microsoft Careers | Galen Hunt | 27 comments and Microsoft to Replace All C/C++ Code With Rust by 2030 - Thurrott.com
Jan
The requirement is to run the remarkably awful thing on GitHub that replaced SDV, which was quite awful to start with. No reason the GitHub thing couldn't do Rust; I think it supports multiple languages.
Reminds me of developing safety-critical software in Ada. For sure Ada/Rust have their advantages; however, the issue is CPU overhead.
C, being one step over assembler, remains blazingly fast when necessary, which you probably all know. Although someone made the mistake of telling me that Rust produces code as fast as C (which it cannot).
the question I have is why suddenly start shouting about Rust when Ada has been around for years and is tried and tested?
So why Rust all of a sudden?
Is there full support in WinDbg for debugging kernel-mode code written in Rust?
Well, you would be wrong. In most cases, the language compilers all share the same back end, and therefore have the same optimizations. Most of the hard work in Rust happens in the compiler, so the compiled code is streamlined.
And, repeating what has been said before, the amount of truly time-critical code in the world is a very tiny fraction. If a language allows me to write safer code more quickly, that is a huge win.
the question I have is why suddenly start shouting about Rust when Ada has been around for years and is tried and tested?
Ada was designed by committee to be safe, not efficient. The design documents are now 45 years old, and we’ve learned a lot since then.
Okay this is an interesting point and I would like to understand further. If the output of a compiler for one language performs rigorous type checking while the compiler for another language does not perform rigorous type checking then surely the former language creates more work for the CPU and therefore takes longer to execute? Or what am I missing?
Type checking is handled at compile time, not in the code (except in rare languages). I led multiple compiler teams in my career and never needed any code generation to handle types.
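A small sketch of that point, using hypothetical wrapper types: the type distinctions below exist only at compile time, so satisfying the checker adds no runtime data or instructions, only a plain floating-point divide survives in the binary.

```rust
// Newtype wrappers: checked at compile time, zero-sized overhead at runtime.
#[derive(Clone, Copy)]
struct Meters(f64);
#[derive(Clone, Copy)]
struct Seconds(f64);

fn speed(d: Meters, t: Seconds) -> f64 {
    d.0 / t.0 // the wrappers compile away; only an f64 division remains
}

fn main() {
    let v = speed(Meters(100.0), Seconds(9.58));
    // speed(Seconds(9.58), Meters(100.0)); // rejected at compile time: wrong types
    println!("{v:.2}");
}
```

Swapping the arguments is a compile error, yet the emitted code is identical to the untyped version, which is why rigorous checking doesn't cost the CPU anything.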
Also, as a person who was on a team a professor put together to review the four Ada proposals for the DOD: it was designed to be type safe and to have some runtime checks, but Ada had two big problems. One was that the original decisions were as much political as engineering. The other was GNU Ada, which did have a crappy code generator, but Congress basically legislated that people had to use that compiler when there were other compilers that produced much better code.
Type checking is handled at compile time, not in the code (except in rare languages). I led multiple compiler teams in my career and never needed any code generation to handle types.
Oh, I think I see: so there is no runtime checking. The compiler does the type checking, with different degrees of rigor in different languages/compilers.
Also, as a person who was on a team a professor put together to review the four Ada proposals for the DOD: it was designed to be type safe and to have some runtime checks, but Ada had two big problems. One was that the original decisions were as much political as engineering. The other was GNU Ada, which did have a crappy code generator, but Congress basically legislated that people had to use that compiler when there were other compilers that produced much better code.
I used Ada to perform railway signaling in Europe: stopping tilting trains from tilting when they should not tilt (e.g. going through a tunnel), automatically changing signaling systems when crossing borders in Europe, and implementing moving-block systems. The computers were essentially triplicated to provide redundancy, and the code for each computer was produced with its own compiler (so an error in one would not translate to an error in all three). The code also had to be SPARK(ed) to ensure it met safety-critical standards.
Yup. In Store WinDbg at least. Local variables etc. all work flawlessly. The mangling might be a problem. But you can overcome that with #[unsafe(no_mangle)].
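A sketch of the mangling workaround, with an illustrative function name (not a real driver callback). With the attribute, the symbol in the binary is literally the function's name, so WinDbg or a C caller can resolve it; without it, the name is mangled. Note the Rust 2024 edition spells the attribute `#[unsafe(no_mangle)]`; older editions use the plain form below.

```rust
// Exported with an unmangled symbol name (`rust_checksum` is illustrative).
#[no_mangle]
pub extern "C" fn rust_checksum(data: *const u8, len: usize) -> u32 {
    // Trivial byte sum, standing in for real driver logic.
    let bytes = unsafe { core::slice::from_raw_parts(data, len) };
    bytes.iter().map(|&b| b as u32).sum()
}

fn main() {
    let buf = [1u8, 2, 3];
    assert_eq!(rust_checksum(buf.as_ptr(), buf.len()), 6);
}
```

`extern "C"` additionally pins the calling convention, which matters for any entry point the OS or a C toolchain invokes directly.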
There won’t be a lot of staff to retrain. AI will write and maintain most of the code. At least so they say.
there is too much to say in a simple post, but the design of language is complex. And it isn’t possible to produce equally performant assembly from any old swag
maybe I know something, but my wife is telling me to come
And it isn’t possible to produce equally performant assembly from any old swag
Agreed and I see that Rust may well have some runtime checking. Rust looks a bit like C with a pedantic compiler (glad I am near retirement).
Going back to safety critical stuff we used C on processors with a lot of integration to write bare metal drivers however that C was heavily managed (I think they call it MISRA nowadays).
That said the level of software engineering involved at all stages in terms of documentation (requirements, design specification, Yourdon, pseudocode) was under constant review.
Writing a compiler is a complex project. And each language has features that make it easier or harder for a compiler to produce effective assembly sequences.
The simple approach is that the front end interprets the code itself and produces an abstract syntax tree. Then the back end encodes instructions for the chosen architecture to evaluate it.
For r-value expressions, that's true enough. But flow of control presents more challenges for the compiler. Even a simple if, else if, else combination has multiple potential implementations, and looping, loop unrolling and anything more complex is - well, more complex. Choices made by the compiler can have a significant effect on the resulting execution speed because of stalls within the execution pipeline. And side effects of expressions used within control flow statements need to follow the language semantics. Producing a side-effect flow tree can identify optimizations, but side effects can be hard to identify, and they differ depending on how variables can be affected by language constructs (besides direct assignment).
And indirection adds another layer of complexity. Think about aliased pointers for example. The most common example is memcpy versus memmove. The C language provides no information to the compiler about the independence of the pointers, and that inherently limits either the safety or performance of the resulting instruction stream. But when the compiler can know about or prevent aliased pointers, this isn’t an issue
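A small illustration of that aliasing point in Rust (my example, not from the post). A C memcpy can't prove its two pointers don't overlap; in Rust, `&mut` borrows are statically guaranteed not to alias, so the non-overlapping and possibly-overlapping cases go through different, explicit APIs:

```rust
fn main() {
    let mut buf = [1u8, 2, 3, 4, 5, 6];

    // split_at_mut proves to the compiler the two halves can't overlap,
    // so this copy can be optimized as aggressively as a memcpy.
    let (dst, src) = buf.split_at_mut(3);
    dst.copy_from_slice(&src[..3]);
    assert_eq!(buf, [4, 5, 6, 4, 5, 6]);

    // An overlapping copy must use an API with memmove semantics:
    buf.copy_within(0..3, 1);
    assert_eq!(buf, [4, 4, 5, 6, 5, 6]);
}
```

That is the "when the compiler can know about or prevent aliased pointers" case: the knowledge is in the type system rather than in a `restrict` annotation the programmer must get right.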
And then the optimization happens at multiple levels. Idiom recognition is probably the biggest difference between compilers. For example, bit rotation is easily accomplished in ASM, but there is no direct way to express that operation in C. Quite a lot of work has been done on that for C, but other languages present different challenges for compiler authors.
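The rotate case can be made concrete. Below, `rotl_idiom` is the shift-or pattern a C programmer writes and hopes the compiler recognizes; Rust instead exposes the operation directly as `rotate_left`, and on x86 both should lower to a single ROL instruction (illustrative sketch, equivalence asserted, codegen not shown):

```rust
// The classic C-style rotate idiom; the `% 32` guards make n == 0 well defined.
fn rotl_idiom(x: u32, n: u32) -> u32 {
    (x << (n % 32)) | (x >> ((32 - n) % 32))
}

fn main() {
    let x: u32 = 0x8000_0001;
    // The hand-written idiom and the intrinsic agree:
    assert_eq!(rotl_idiom(x, 4), x.rotate_left(4));
    assert_eq!(x.rotate_left(4), 0x18);
}
```

Having the idiom in the language means the compiler doesn't have to pattern-match it back out of shifts and ORs, which is exactly the recognition burden the post describes.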
There is lots more to say, of course; this is only the most cursory summary. But there are lots of things that go into it besides just memory lifetime. And that doesn't even touch on algorithm flaws - where the programmer has simply written the wrong thing.
There is an active investigation to support Rust in WDK. For details on this see Towards Rust in Windows Drivers | Microsoft Community Hub
This looks like a red flag to me: if Microsoft likes some technology too enthusiastically, it can become a new Silverlight, Windows 8… Windows Phone… and so on. Python was a lucky exception. Also, Rust is too ugly.
If the industry should jump to a new language, can we have something more elegant and human friendly? Something like Mojo on the heavy end or Zig on the lower end? Else I’d rather stay with modern C++. The Profiles initiative is especially interesting.