reading files, deleting files, I can go on
@ptesarik @vbabka @oleksandr Sector sizes of block devices yes, because of the block layer abstraction. Not on the filesystem level, unless you want more OMG blatant layering violations.
0
0
2
@vbabka @oleksandr one dd back and forth seemed easier at first sight. I can try again with manual partitioning though.
1
0
1
@oleksandr @vbabka I don't know, but I assume this works: fdisk would create partitions at the right places and the filesystem uses logical disk addresses.
0
0
0
@oleksandr @vbabka So, it's not that simple.

What I did:
1. dd device to file
2. format NVMe to 4096 bytes/sector
3. dd file to device
4. no partitions detected
5. reboot, stuck in BIOS

An attempt to copy the partition table from the file to the device using sfdisk showed errors: sector size of 4096, wrong number of sectors in the GPT table, etc.

In the log there was a line like "inconsistent atomic write size, namespace will not be added subsystem=4096bytes controller/namespace=512 bytes".

Recovery:
1. boot rescue system
2. format NVMe back to 512 bytes/sector
3. dd file to device
4. partitions detected again
5. reboot, back to the system

I don't know what exactly went wrong; the sfdisk partition recreation worked fine with a Kingston (KC3000). There are potential problems with partitions and some kind of mismatch in what the drive reports as sectors.
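
If you want to double-check what the kernel thinks the drive's sector sizes are after such a reformat, the logical and physical block sizes can be read via the BLKSSZGET/BLKPBSZGET ioctls; a minimal sketch (the device path is just an example):

/* Print the logical and physical sector sizes the kernel reports
 * for a block device. Minimal sketch, no fancy error handling. */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/fs.h>

int main(int argc, char **argv)
{
	const char *dev = argc > 1 ? argv[1] : "/dev/nvme0n1";
	int fd = open(dev, O_RDONLY);
	int logical = 0;
	unsigned int physical = 0;

	if (fd < 0) {
		perror("open");
		return 1;
	}
	if (ioctl(fd, BLKSSZGET, &logical) < 0 ||
	    ioctl(fd, BLKPBSZGET, &physical) < 0) {
		perror("ioctl");
		close(fd);
		return 1;
	}
	printf("%s: logical %d, physical %u bytes per sector\n", dev, logical, physical);
	close(fd);
	return 0;
}

(blockdev --getss and --getpbsz report the same values from the command line.)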
2
0
0
@oleksandr @vbabka And I have the first victim, a Verbatim Vi3000: the system does not boot, though the device is visible in the BIOS.
1
1
2
@oleksandr @vbabka My sample is too small to tell. I have converted Kingston drives, one of them is the backing drive for testing VMs, so I'll know when it fails.
1
1
1
@vbabka @oleksandr Probably a firmware bug; those do exist on devices from all vendors and are unfortunately not that rare.
0
0
1
@oleksandr @vbabka Yes, basically two commands: nvme id-ns -H /dev/nvmeX (check the LBA format profiles) and nvme format --lbaf=1 /dev/nvmeX to change it. Obviously, it destroys the data.
1
0
2
NVMes are formatted to a 512B sector size, and most of them have an option to be formatted to 4096. And it works. I'm amazed.
1
0
2
repeated

Always remember The Register is a trash publication that publishes straight-up lies to suit the author's bias.

Their kernel shit is worse than the usual tripe you see in the press (LWN being a massive exception to that, of course!)

0
1
1
repeated

Dorinda was followed by @vbabka !

0
2
2
@karolherbst

> Doing more isn't slower than doing less
> Simpler code isn't faster than complex code

I don't know if the state-of-the-art tools are capable of that, but the points seem to correspond to human evaluation (counting lines of code, or instructions at the CPU abstraction level) rather than to a model (even an imperfect one) that's closer to what actually happens.

The naive evaluation is that 1 line of code has a cost of 1; on a lower level, 1 asm instruction costs 1 unit. Then there's the memory access cost, not obvious from the code, depending on caching, prefetch, temporal/spatial locality and instruction ordering.

The best idea I have so far is a model based on physics, with cost assigned by local state (nearby instructions and their effects) as some sort of energy function, then global state (memory accesses, cache levels), and the dependencies or ordering constraints. This is too complex for a human to grasp, so that's probably why we'd be more likely wrong than right.

A smart compiler can afford to spend CPU cycles to find a good solution to the problem: code -> instructions, reformulated in any way that gives the best result in a reasonable time.
1
0
0
@karolherbst

Picking in particular the point "Branching isn't slower than not branching at all": it confused my intuition, and actual measurement plus asm analysis proved me wrong.

The case study was my attempt to optimize a comparator function with multiple criteria, cascading the -1/0/1 results of the comparisons. The hypothesis was that encoding the 3-state results of the comparisons into one int value would be faster than a series of ifs. How wrong I was. The ifs always won, even for an artificial worst case (all ifs wrong), vs. a fixed sequence of bit encoding returning one int as a "strcmp"-style result.

The analysis was based on llvm-mca (instruction-level analysis, a really great tool providing insights into micro-arch optimizations). It was fun to reinvent new bit tricks only to find them on https://graphics.stanford.edu/~seander/bithacks.html . I guess branch prediction works well even for hard-to-predict data like arrays being sorted.
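
As a rough sketch of the two shapes that were compared (hypothetical struct and field names, not the actual code from the case study), the cascade of ifs vs. a weighted encoding of the -1/0/1 results looks roughly like this:

/* Hypothetical 3-key record; the real case study used different fields. */
struct key {
	unsigned long a;
	unsigned long b;
	unsigned long c;
};

/* Cascaded ifs: return at the first differing criterion. */
static int cmp_branchy(const struct key *x, const struct key *y)
{
	if (x->a != y->a)
		return x->a < y->a ? -1 : 1;
	if (x->b != y->b)
		return x->b < y->b ? -1 : 1;
	if (x->c != y->c)
		return x->c < y->c ? -1 : 1;
	return 0;
}

/* Branchless variant: compute all three -1/0/1 results and weight them
 * so that the most significant differing criterion decides the sign. */
static int cmp_encoded(const struct key *x, const struct key *y)
{
	int ra = (x->a > y->a) - (x->a < y->a);
	int rb = (x->b > y->b) - (x->b < y->b);
	int rc = (x->c > y->c) - (x->c < y->c);

	return 4 * ra + 2 * rb + rc;
}

The encoded variant always pays for all three comparisons while the cascade usually returns after the first one, and branch prediction handles that well; llvm-mca can be run on the compiled asm of both variants to compare the estimated throughput.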
1
4
7
While in memory management I think you guys do it on purpose:

mm/hmm: Hold a mmgrab from hmm to mm (https://lore.kernel.org/all/20190606184438.31646-4-jgg@ziepe.ca)
0
2
4
Reading some paths in linux.git aloud must sound funny and have SG-1 vibes: net/mac/mesh.c!
1
0
1
@ljs Yes, yes, yes, yes. Yes, yes. Yes, yes.
1
0
2
I think I won in testing:

Failed 1095 of 1095 tests
0
0
4
repeated

Good write-up about Linux Kernel Maintainer duties
https://lwn.net/Articles/1007325/

0
6
2
Funny. I'm writing guidelines for command line options and parameters. Writing about the anti-pattern of using optional_argument for getopt(). A quick grep to check the known cases. And there's one more. I added it, in 2013. Fortunately it's not documented to take a parameter, so it can be fixed, but eww.
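
For context on why optional_argument is an anti-pattern: with GNU getopt_long() the optional value is only recognized when glued to the option (--compress=zstd or -czstd); a separate following word is not consumed. A minimal sketch with a hypothetical --compress option:

/* Demonstrates the optional_argument surprise with GNU getopt_long(). */
#include <stdio.h>
#include <getopt.h>

int main(int argc, char **argv)
{
	static const struct option opts[] = {
		{ "compress", optional_argument, NULL, 'c' },
		{ NULL, 0, NULL, 0 }
	};
	int c;

	while ((c = getopt_long(argc, argv, "c::", opts, NULL)) != -1) {
		if (c == 'c')
			printf("compress, optarg=%s\n", optarg ? optarg : "(none)");
	}
	return 0;
}

Running './prog --compress=zstd' prints optarg=zstd, but './prog --compress zstd' prints optarg=(none) and leaves zstd behind as a stray positional argument.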
0
0
1
@monsieuricon I'd say much of that is generated from cached files: almost-static content, no images, no delayed loading by JS. Page size is in the range of tens of KB to low hundreds on average, or low MB for long threads when viewed as flat/nested. Sounds like a typical load for a web page in the '90s, now served by a '20s CPU.
0
0
1