reading files, deleting files, I can go on
@ljs @pony There are at least two in development, pijul and jj (jujutsu). Neither has convinced me that it's better than git /for what I need/, but you can't complain that nothing is happening in the VCS space. Also there's fossil (by the sqlite gang); I once tried to import the linux kernel sources into it and gave up after a few hours, when the repo.db had like 1G and progress was at 10%. The db was stored on tmpfs for speed.

If you're a database engine developer, every problem can be solved by a database. Still, somehow the "Ad-hoc pile-of-files key/value database" beats it.
@Aissen @jarkko I don't know, my guess is that it could be sold as the "best compiler" for ia64 in its prime years. One comment at https://lwn.net/Articles/320795/ confirms that, mentioning SGI, and there are more insightful comments there.
@DrHyde @ludicity @banana I have "git commit -save" in my typing muscle memory, where 'e' opens the editor and 'v' also shows the diff, which is quite convenient.
@Aissen @jarkko Intel C compiler support was removed in 6.3, probably broken for years too, but that's another compiler that was supported besides GCC, and maybe before LLVM. From the toolchain side I think GNU ld was always needed, e.g. for arch/x86/boot/setup.ld
A document I compiled from feedback and community experience on where things can go bad, not counting filesystem bugs: https://btrfs.readthedocs.io/en/latest/Hardware.html

There's a ZDNet article from 2010, "The universe hates your data" (https://www.zdnet.com/article/the-universe-hates-your-data/). There's only so much a filesystem can do.

Sometimes I feel that btrfs is a decent faulty-hardware detector that also happens to be a filesystem.
@karl @kernellogger Well, what else can we say than "we disagree with your assessment of btrfs but we understand it's been an integral part of bcachefs' marketing". I've been trying to fix btrfs' reputation for years; it has improved, but there will always be people with problems who are quick to blame the filesystem. I'm always pleased when I read on reddit or HN about people's experience of saved data, early detected HW problems, or otherwise uninteresting daily usage.
@vbabka On one hand, all the _ext things make people happy because getting the extension interface is the last thing they need before never having to talk to the community again.

On the other hand, you can always reject a report because "oh you're using your own _ext code". And it won't be entirely wrong.

The extensions can of course spark interest in experimenting in some difficult areas, like the scheduler. The best outcome is to incorporate the good things back, if that happens. If. IF.
I'm backporting some patches to the AMD GPU code, and in some files there's code that actually starts at column 80: drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.c
@lkundrak "Total of 16 processors activated (115246.00 BogoMIPS)"
@ljs Thanks for the pointers; at a quick look this is what I'd need, but yes, it's vma-specific. I'm mainly interested in the logic, which can be copied and adapted for the data structure I need it for. We have range trees for extent state tracking, but they're a bit heavy for the purpose. A generic way would be better so I can rely on tested code (the same way I don't want to reimplement rb-tree balancing). Alternatively, implementing it on a linked list is maybe the easiest way, with obvious performance limitations (and the bonus of iterator-friendly changes to the whole data structure), but I could start writing the actual thing built on the range trees.
@ljs I'd need a generic structure that gets built up from intervals, potentially overlapping, and after that enumerate the intervals. The tricky part is the merging; I had similar sketches for the cases as you had, on the whiteboard for my (unfinished) prototype. The maple tree has a single value key, so the merging is still left as an exercise. The linux.git/include/interval_tree.h is close but IIRC doesn't do what I wanted.
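
For illustration, a minimal userspace sketch of the "build up from possibly overlapping intervals, then merge and enumerate" step. The names (struct ivl, merge_intervals) are made up and the flat array plus qsort is just a toy stand-in, not the range-tree or list-based structure this would actually live on.

/*
 * Toy sketch: collect possibly overlapping [start, end) intervals,
 * then merge the overlapping/touching ones and enumerate the result.
 * Hypothetical names, userspace only; not kernel or btrfs code.
 */
#include <stdio.h>
#include <stdlib.h>

struct ivl {
	unsigned long start;
	unsigned long end;	/* exclusive */
};

static int ivl_cmp(const void *a, const void *b)
{
	const struct ivl *x = a, *y = b;

	if (x->start < y->start)
		return -1;
	if (x->start > y->start)
		return 1;
	return 0;
}

/*
 * Sort by start, then fold each interval into the previous one when they
 * overlap or touch. Returns the number of merged intervals left in v[].
 */
static size_t merge_intervals(struct ivl *v, size_t n)
{
	size_t out = 0;

	if (n == 0)
		return 0;

	qsort(v, n, sizeof(*v), ivl_cmp);

	for (size_t i = 1; i < n; i++) {
		if (v[i].start <= v[out].end) {
			/* Overlapping or touching: extend the current interval. */
			if (v[i].end > v[out].end)
				v[out].end = v[i].end;
		} else {
			/* Gap: start a new merged interval. */
			v[++out] = v[i];
		}
	}
	return out + 1;
}

int main(void)
{
	struct ivl v[] = {
		{ 10, 20 }, { 15, 25 }, { 40, 50 }, { 25, 30 }, { 60, 61 },
	};
	size_t n = merge_intervals(v, sizeof(v) / sizeof(v[0]));

	/* Enumerate: expect [10, 30), [40, 50), [60, 61). */
	for (size_t i = 0; i < n; i++)
		printf("[%lu, %lu)\n", v[i].start, v[i].end);
	return 0;
}

The same fold would work on any walk ordered by interval start (rb-tree, plain list), which is where the merging logic would plug into whatever structure ends up holding the intervals.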
@ljs Interval trees, or range trees? Not sure what the common naming is.
@bagder @ondrej @icing Moving code to new files due to refactoring totally hides the whole contribution history, so the tip for developers to get to the top of the charts is obvious :)
repeated
being picky about quality IS part of kernel culture, it does vary quite a bit (some parts of the kernel are much better than others), and it did originate from Linus, but most interactions with Linus are perfectly civil.

If you stop fucking confirmation biasing and look at his usual interactions (oh whoops - that'd require actual understanding of what's being discussed) you'd see that it's usually civil but with the same adherence to caring about quality at least to a reasonable degree that is a part of the culture.

Comparing it to software where there is literally no attempt whatsoever to control quality and saying 'because Linus is an arsehole based on .01% of his emails' is you know - fucking stupid really isn't it? It's just not true.

But you don't get to sound as 'special' if you say this kind of thing as you do if you say 'yes yes, look at this cherry-picked email - this is how linux is and that's why the code isn't as awful as [insert random project here]'.

No. Shut up.
repeated
Saw a bit of a video where somebody did the 'Linus is mean but that is why Linux is good quality' thing that a lot of people believe, and I'm not sure my eyes could roll any further.

Like do you guys think Linus controls every bit of code going in?

Do you think the sweary emails he used to send are representative? And even at his worst, who did he send them to? And how often?

The desire to excuse arseholeness by people who quote memeable stuff with usually zero understanding of the discussion is just so fucking frustrating.

How about asking people actually involved in the kernel? There's a kind of bloody arrogance to it, that somebody reads a couple of these (from *checks notes* 11 years ago) and speaks as if they're an authority.

You're fucking not, shut up.
The Command-R+ (104B) gives a good answer, 10%, and an explanation. Quoted in full below.
"10%. The text provided contains highly technical content and specialized terminology related to software development and file systems. It discusses specific changes made in a codebase or project log, including performance optimizations, lock scope adjustments, API conversions, error handling improvements, thread management, and various core modifications. This level of detail suggests that the writer has expertise in low-level programming concepts and is familiar with the inner workings of operating system processes.

While AI language models have made significant advancements in generating human-like text on a variety of topics, they typically struggle to produce content requiring deep domain knowledge or highly technical understanding without specific training data related to these complex subjects. The depth of software development expertise displayed here would be challenging for an LLM like ChatGPT unless specifically trained and fine-tuned with extensive data from the field of file systems and low-level programming.

Therefore, it is unlikely that this text was generated solely by AI without significant human input or a highly specialized training dataset tailored to such technical content generation tasks in software development domains."
For fun I've asked llama3.1:8B to evaluate how likely it is that my 6.12 pull request summary was written by AI (LLM, ChatGPT). It's a weaker model, but I'm still not sure if I should be mad because of "I would assign a probability that it was written by an AI as 95%", or pleased by "I wouldn't rule out the possibility of a skilled human developer having written it."

Anyway, llama3.1:70B says it's 20% written by AI. Neither likes the formal tone and enumerated lists, while acknowledging the technical and specific context of the linux kernel, where AI can "struggle".

When One Door Bug Closes Another Opens, But Often We Look So Long Upon the Closed Door Bug That We Do Not See the Open Door Bug

I've been looking around for projects with similar potential as xz. Unsurprisingly, lots of GNU projects: they do the groundwork, are direct or indirect dependencies everywhere, were feature complete years ago, have no new development, and have one last guardian of the code with credentials to ftp.gnu.org.

There's one hurdle that could raise the difficulty of infiltration, though: the FSF copyright assignment paperwork.