reading files, deleting files, I can go on
https://git.kernel.org/linus/006568ab4c5ca2309ceb36fa553e390b4aa9c0c7 Adding a Link: tag to the original discussion is OK, but it should not replace a proper changelog, namely when there's a discussion about what exactly the fix is actually doing. Also, it's for an out-of-tree module. And it got a CVE.
cloudflare down = internet down
@ffmancera You should not skip 42, for reasons that only the Universe can understand.
Days since I fixed somebody's printer: 0
@ljs The most absurd thing I've heard in that regard was: sometimes I "threaten" that I'll merge something if people don't review it.
@ffmancera @ljs Now imagine what offers people contributing to crypto/ get.
@ptesarik @vbabka @oleksandr Sector sizes of block devices, yes, because of the block layer abstraction. Not at the filesystem level, unless you want another OMG blatant layering violation.
@vbabka @oleksandr One dd back and forth looked easier at first sight. I can try again with manual partitioning, though.
@oleksandr @vbabka I don't know, but I assume this works; fdisk would create the partitions at the right places, and the filesystem uses logical disk addresses.
@oleksandr @vbabka So, it's not that simple.

What I did:
1. dd device to file
2. format NVMe to 4096 bytes/sector
3. dd file to device
4. no partitions detected
5. reboot, stuck in BIOS

An attempt to copy the partition table from the file to the device using sfdisk showed errors: a sector size of 4096, a wrong number of sectors in the GPT table, etc.

In the log there was a line like "inconsistent atomic write size, namespace will not be added subsystem=4096bytes controller/namespace=512 bytes".

Recovery:
1. boot rescue system
2. format NVMe back to 512 bytes/sector
3. dd file to device
4. partitions detected again
5. reboot, back to the system

I don't know what exactly went wrong; the sfdisk partition recreation worked fine with a Kingston (KC3000). There are potential problems with partitions and some kind of mismatch in what the drive reports as sector size.
@oleksandr @vbabka And I have the first victim, a Verbatim Vi3000: the system does not boot, though the device is visible in the BIOS.
@oleksandr @vbabka My sample is too small to tell. I have converted Kingston drives; one of them is the backing drive for testing VMs, so I'll know when it fails.
@vbabka @oleksandr Probably a firmware bug; those do exist, happen across all devices/vendors, and are unfortunately not that rare.
@oleksandr @vbabka Yes, basically two commands: nvme id-ns -H /dev/nvmeX (check the supported LBA formats) and nvme format --lbaf=1 /dev/nvmeX to change it. Obviously, it destroys the data.
NVMes are formatted to a 512B sector size, and most of them have an option to be reformatted to 4096. And it works. I'm amazed.
Always remember The Register is a trash publication that publishes straight-up lies to suit the author's bias.

Their kernel shit is worse than the usual tripe you see in the press (LWN being a massive exception to that, of course!)

@karolherbst

> Doing more isn't slower than doing less
> Simpler code isn't faster than complex code

I don't know if the state-of-the-art tools are capable of that, but the points seem to correspond to human evaluation (counting lines of code or CPU instructions as an abstraction) rather than to a model (even an imperfect one) that's closer to what actually happens.

The naive evaluation is that 1 line of code has a cost of 1. On the lower level, 1 asm instruction is 1 unit. Then there's the memory access cost, not obvious from the code, depending on caching, prefetching, temporal/spatial locality, and instruction ordering.

The best idea I have so far is a model based on physics, with cost assigned by local state (nearby instructions and their effects), some sort of energy function, then global state (memory access, cache levels), and the dependencies or ordering constraints. This is too complex for a human to grasp, which is probably why we'd be more likely wrong than right.

A smart compiler can afford to spend CPU cycles to find a good solution to the problem: code -> instructions, reformulated in any way that gives the best result in a reasonable time.
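As a toy illustration of such a cost model (every op category and cost number here is invented for the sketch, not measured; a real model would be far richer): local per-instruction costs plus a global penalty for memory accesses depending on a modeled cache state.

```c
#include <assert.h>
#include <stddef.h>

/* Toy cost model: each abstract op gets a local cost; loads/stores add a
 * global penalty depending on whether the access is modeled as a cache hit.
 * All constants are made-up illustration values. */
enum op_kind { OP_ALU, OP_BRANCH, OP_LOAD, OP_STORE };

struct op {
	enum op_kind kind;
	int cache_hit;	/* 1 = modeled as hitting cache, 0 = going to DRAM */
};

static int op_cost(const struct op *o)
{
	switch (o->kind) {
	case OP_ALU:	return 1;	/* cheap local work */
	case OP_BRANCH:	return 2;	/* assume mostly predicted */
	case OP_LOAD:
	case OP_STORE:	return o->cache_hit ? 4 : 100;	/* cache vs DRAM */
	}
	return 0;
}

/* "Energy" of a straight-line sequence: sum of per-op costs. */
static int sequence_cost(const struct op *ops, size_t n)
{
	int total = 0;

	for (size_t i = 0; i < n; i++)
		total += op_cost(&ops[i]);
	return total;
}
```

Even this trivial version already diverges from "1 line = 1 unit": a single cache-missing load dominates a whole run of ALU ops.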
@karolherbst

Picking out the point "Branching isn't slower than not branching at all" in particular: it confused my intuition, and actual measurement plus asm analysis proved me wrong.

The case study was my attempt to optimize a comparator function where there are multiple criteria, cascaded results of -1/0/1 comparisons. The hypothesis was that encoding the 3-state results of the comparisons into one int value would be faster than a series of ifs. How wrong I was. The ifs always won, even in an artificial worst case (all ifs mispredicted), versus a fixed sequence of bit encoding returning one int as a "strcmp"-style result.

The analysis was based on llvm-mca (instruction-level analysis, a really great tool providing insight into micro-architectural optimizations). It was fun to reinvent new bit tricks only to find them on https://graphics.stanford.edu/~seander/bithacks.html . I guess branch prediction works well even for hard-to-predict data like arrays being sorted.
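A minimal sketch of the two comparator styles compared above (the struct and field names are made up for illustration; the actual measured code was different). Both return a strcmp-style result, the first with cascaded ifs, the second by encoding the two -1/0/1 results branchlessly into one int:

```c
#include <assert.h>

/* Hypothetical two-key record, invented for this sketch. */
struct item {
	int primary;
	int secondary;
};

/* Cascaded ifs: the version that won in the measurements. */
static int cmp_ifs(const struct item *a, const struct item *b)
{
	if (a->primary < b->primary)
		return -1;
	if (a->primary > b->primary)
		return 1;
	if (a->secondary < b->secondary)
		return -1;
	if (a->secondary > b->secondary)
		return 1;
	return 0;
}

/* Branchless -1/0/1 of two ints, a classic bit trick. */
static int cmp3(int x, int y)
{
	return (x > y) - (x < y);
}

/* Encoded variant: combine both 3-state results into one int whose sign
 * is decided by the first nonzero criterion.  With c1, c2 in {-1, 0, 1},
 * 2*c1 + c2 is negative/zero/positive exactly when the cascade says so. */
static int cmp_encoded(const struct item *a, const struct item *b)
{
	int c1 = cmp3(a->primary, b->primary);
	int c2 = cmp3(a->secondary, b->secondary);

	return 2 * c1 + c2;
}
```

The two functions agree on sign for every input, so any difference is purely in how the CPU executes them; per the post above, the branchy cascade still won in practice.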