There’s a lot of content about how great Rust is at this or that, but not much on evaluating what makes Rust code good or bad, so I’ll throw in my two cents, more to get some feedback than to deliver the official truth :-) I’m happy to withdraw my views; I’m just offering this given the lack of “grown up”, educated viewpoints on efficient Rust code.
Consider e.g. async, which addresses the lack of non-blocking I/O in the standard library. It brings essentially a workqueue or thread-pool abstraction, with syntactic sugar for polling and for scheduling tasks on the pool. Inefficient code is still a problem, because the thread pool is a limited resource, and keeping it too hot can make code stall just like before the feature existed. async is pretty much the same thing as struct workqueue in the kernel, and not much else.
That makes Rust features boring and uninteresting, I guess :-) But it is good to do this sort of mind exercise for language features that are essentially glue-code generators.
E.g. here’s a snippet from my zmodem2 crate:
```rust
for (i, field) in payload.split('\0').enumerate() {
    if i == 0 {
        state.file_name = String::from_str(field).or(Err(Error::Data))?;
    }
    if i == 1 {
        if let Some(field) = field.split_ascii_whitespace().next() {
            state.file_size = u32::from_str(field).or(Err(Error::Data))?;
        }
    }
}
```
In terms of how nice-looking Rust code this is, well, it is not that nice looking, but it is a heck of a lot more efficient than a one-liner with collect’s in between. If this were std code (it actually uses heapless::String), the only heap allocation would happen in String::from_str, and there you genuinely need heap-allocated memory, because the string length is not known at compile time.
I think that to go beyond the basics of Rust to actual production code, you need to step up from nice conventions to a sort of requirements-based thinking. If code uses Vec, there should always be an answer on the plate for why it requires Vec, and nice syntax is not a great answer to that question.
Even with e.g. enterprise Java, a lot of understanding of how the JVM JIT and GC work is required for efficient production code. A great tool needs educated use to actually deliver its added value, or else something like Go might be a better option from a purely productivity standpoint.
If the masking gets in the way (which it rarely does in practice, because the compiler is smart), some corner cases can be sorted out by calling convention, e.g. std::io::Read::read(self, buf). This happens super rarely in practice.
To make manageable #Rust stacks over time, I’ve ended up with the following conclusions on how to build I/O connectors between library crates:

- Read, Seek and Write traits that specify an API that is a more constrained version of the std::io counterparts. Here the masking is used for benefit, so that all in-crate code depends on exactly the internal traits.
- A std.rs containing the std::io implementations and macro shenanigans to use it [1]. This way std can be compiled out if required (e.g. when used with embedded_io).

This way my own crates are pretty easy to glue to std, embedded_io and other crates providing the I/O backend, so this sort of indirection makes a lot of sense. The goal is a structure with a clean separation between the internals and the connectors to the outside world (which may or may not include std).
[1]

```rust
#![cfg_attr(not(feature = "std"), no_std)]

#[cfg(feature = "std")]
mod std;
```
@dpom use a Vec if you need a Vec because it is a Vec :-) It is not a great choice just to make code cleaner, unless you actually need to grow it dynamically later on. A function like collect() cleans up code at the cost of using the heap for small, useless allocations. Allocating from the stack, on the other hand, is just pointer arithmetic.
If the code aims to be on par with equivalent C code, then these things are relevant. If the use of Rust is more about integrating available components and frameworks without losing the productivity of something like JavaScript or Python, then this whole thing probably does not matter all that much…
One common theme in Rust programs, and one that is not great for optimizing memory footprint, is the heavy use of collect(). Iterators are a great abstraction exactly because you can view items as a set without having to brute-force deploy them as such in memory. One thing no language paradigm can protect against is the overuse of computing resources.
One good recipe against this, e.g. in a commercial setting, would be to set a constraint that core components must be no_std compatible, and allow total artistic freedom in the user-interfacing parts or e.g. the storage-backend-interfacing part, i.e. only in I/O code.
Then the early steps are slower, but they are investments instead of debt when memory is considered early on…
There’s little gain over something like Go from the added complexity of Rust if this consideration is not done. Sometimes something like Go could even do a better job, because then at least a garbage collector considers memory constraints…
Preparing for v0.1 of my #zmodem2 crate: https://github.com/jarkkojs/zmodem2/issues/9. It is heapless and has a context structure that takes less than 2 KB of memory. Not async but sequential, because for the first version I want just a correct implementation of the protocol. It also works in a no_std environment.
Great, my little zmodem2 crate now supports no_std. Not that useful yet before I have made the file transfer API sequential (repeated calls, one per subpacket), or even fully async compatible (or I might postpone async to 0.2).
https://github.com/jarkkojs/zmodem2/commit/bc83180cacf04b5611c4068062408ef0ed75f797
I also need to make unescaping a separate stage to get a clean (and fast) async implementation. Now that escaping/unescaping is data instead of code, it factors the complexity of the original problem down to half.
Sometimes the most #fortran solution is the best :-) Not pretty, probably not too “rustacean”, but it gets the job done…
https://github.com/jarkkojs/zmodem2/commit/a4ad4508a99b66f46ab9daf0f08956c532285107
Now it is pretty easy to also add quirks later on without having to maintain a crazy ruleset.
Typography: it matters!
#KerningToo #humor #humour #typography #graphicdesign #writing #writingcommunity
November 2023 - My Linux Kernel work
“-Wstringop-overflow: Late in October I sent a patch to globally enable the -Wstringop-overflow compiler option, which finally landed in linux-next on November 28th. It’s expected to be merged into mainline during the next merge window, likely in the last couple of weeks of December, but “we’ll see”. I plan to send a pull request for this to Linus when the time is right. 🙂 [...]”
You can read the whole post here:
https://embeddedor.com/blog/2023/12/05/november-2023-linux-kernel-work/