Conversation

Steven Rostedt

It’s the month of December. Do you know what that means? It means it’s time to run my workstation and server with branch tracing enabled! https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/include/linux/compiler.h#n50
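
For readers following the link: include/linux/compiler.h is where the kernel's likely()/unlikely() hints live. With branch profiling disabled they are, roughly, thin wrappers around GCC's __builtin_expect(); a trimmed-down sketch, not the exact kernel text:

```c
/* Roughly what the kernel's hints boil down to when branch profiling is
 * off (see include/linux/compiler.h); simplified for illustration. */
#define likely(x)	__builtin_expect(!!(x), 1)
#define unlikely(x)	__builtin_expect(!!(x), 0)

/* Typical use: tell the compiler which way a branch usually goes so it
 * can keep the expected path as the straight-line fall-through. */
static int check(int err)
{
	if (unlikely(err))	/* error path expected to be rare */
		return err;
	return 0;
}
```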

And it looks to have found a bug right off the bat!

@vbabka @rostedt I'm happy to see somebody actually getting data for likely/unlikely; the number of times I've seen people in various code bases determine this totally on the basis of 'what I reckon will happen' is unreal
@ljs but do you know how @rostedt does this profiling? Via "#define if ..."! Mental.
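
A simplified sketch of that trick (the real thing is CONFIG_PROFILE_ALL_BRANCHES in include/linux/compiler.h; this toy version is illustrative, not the kernel's exact code): the function-like macro shadows the `if` keyword and wraps every condition in a statement expression that bumps a per-site counter before the branch is taken.

```c
/* Toy version of the branch-profiling "#define if" (GCC/Clang statement
 * expressions required). The real macro also takes __VA_ARGS__ so that
 * commas inside the condition still work, puts the records in a special
 * ELF section, and exposes the totals through tracefs. */
struct branch_stat {
	const char *file;
	int line;
	unsigned long hits[2];	/* [0] = condition false, [1] = condition true */
};

#define if(cond) if (({						\
	static struct branch_stat ____stat = {			\
		.file = __FILE__,				\
		.line = __LINE__,				\
	};							\
	int ____r = !!(cond);					\
	____stat.hits[____r]++;	/* unlocked: estimates only */	\
	____r;							\
}))

/* Any ordinary-looking branch below this point now updates its own
 * counters while still behaving like a normal if. */
static int clamp_positive(int v)
{
	if (v < 0)
		return 0;
	return v;
}
```
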
@vbabka @rostedt only batman has permission to do this

@ljs @rostedt @vbabka You're generally right, but there are a few rare cases where you *WANT* to mispredict.

For example, when waiting for a spinlock, the most likely outcome is yet another spin, but I don't care much about the speed of spinning. What I do care about is leaving the spinlock loop ASAP when the lock is finally acquired.
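
A toy illustration of that point (not the kernel's actual spinlock code): the hint is deliberately the statistically "wrong" way around, because the exit latency is what matters.

```c
#include <stdatomic.h>

#define likely(x)	__builtin_expect(!!(x), 1)

/* Toy spin loop: while we wait, the test-and-set usually fails, so
 * "spin again" is the statistically likely outcome. We still hint the
 * acquire path as likely, since the only latency we care about is how
 * fast we fall out of the loop once the lock is finally released;
 * mispredicting while spinning costs little. */
static void toy_spin_lock(volatile atomic_flag *lock)
{
	for (;;) {
		if (likely(!atomic_flag_test_and_set_explicit(
				lock, memory_order_acquire)))
			return;	/* acquired: the path we optimize for */
	}
}
```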

@ptesarik @rostedt @vbabka presumably that would be picked up in appropriate measurements.

All I ask, nay beg for on my hands and knees, is for developers not to do perf stuff without measuring, or at least thinking.

It's so pervasive it kills me

@ljs @rostedt @vbabka Proper performance measurements are hard and extremely time-consuming. And the results are sometimes inconclusive, e.g. variant A runs faster than variant B on hardware X but slower on hardware Y where X and Y are both widely used in practice. Same with different workloads...

I mean, I agree with you; if someone is not able to do all this hard stuff, they'd better stay away from perf stuff. People not realizing the difficulty of perf are so pervasive it kills me.

@ptesarik @rostedt @vbabka the _absolute_ fundamentals are not hard at all.

Yes, the details can be infuriatingly complicated (various forms of jitter, 95th-99th percentiles, variance, worst case vs. best case, are you measuring the right thing or accidentally measuring I/O stuff, latency vs. throughput, yada yada), but the _fundamentals_ are not.

As with many things, it's not that people are wrong in the gritty details, they simply don't do ANY measurement, then just guess by 'intuition'.

I've worked in jobs in the past where somebody said 'oh that guy's really good he just knows where the hot spots are by instinct' and that was their whole argument.

Simply measuring _at all_ would be a step up for many programmers.

But there's a billy big bollocks aspect to it too with a lot of gate keeping and ugh yeah.

My TL;DR message is - just measure _something_ bro
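
In the spirit of "just measure _something_", a minimal sketch (illustrative only, nothing from the thread; workload() is a hypothetical stand-in for the code under test): time a few runs with a monotonic clock and report the spread, which already beats intuition.

```c
#include <stdint.h>
#include <stdio.h>
#include <time.h>

static uint64_t now_ns(void)
{
	struct timespec ts;
	clock_gettime(CLOCK_MONOTONIC, &ts);
	return (uint64_t)ts.tv_sec * 1000000000ull + ts.tv_nsec;
}

/* Hypothetical stand-in for the code you actually care about. */
static void workload(void)
{
	volatile unsigned long x = 0;
	for (unsigned long i = 0; i < 1000000; i++)
		x += i;
}

int main(void)
{
	uint64_t min = UINT64_MAX, max = 0;

	for (int run = 0; run < 10; run++) {
		uint64_t t0 = now_ns();
		workload();
		uint64_t dt = now_ns() - t0;
		if (dt < min)
			min = dt;
		if (dt > max)
			max = dt;
	}
	printf("min %llu ns, max %llu ns over 10 runs\n",
	       (unsigned long long)min, (unsigned long long)max);
	return 0;
}
```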

@ljs @ptesarik @rostedt @vbabka i measured it and your program is slow

@ljs @ptesarik @rostedt @vbabka $ /usr/bin/time -v a.out
slow.
$

@ljs @ptesarik @rostedt @vbabka Minor (reclaiming a frame) page faults: a lot.

@lkundrak @ptesarik @rostedt @vbabka man, this frame stuff, c'mon lad, it's 2023. Pages, frames, let's hug it out!

Minor faults are a lot less of a thing than major ones though

Usually in tight loops I like to read memory mapped files that I've ensured aren't in the page cache

This is how I do my things

@ljs @ptesarik @rostedt @vbabka is memory management slowing my program down again????/

@ljs @ptesarik @rostedt @vbabka my computer once had a major fault

@lkundrak @ptesarik @rostedt @vbabka lol bro we've played you for absolute fools, memory doesn't need managing, yet we've got you idiots to accept our bullshit for years 🤣
@lkundrak @ptesarik @rostedt @vbabka the major fault is not in our stars, but in ourselves dear Ľubomír

@ljs @ptesarik @rostedt @vbabka it must be the electron runtime then

@ljs @ptesarik @rostedt @vbabka unfortunately i don't understand this due to my poor educational choices

@lkundrak @ptesarik @rostedt @vbabka lol I don't understand anything at all, I just mash

@ljs @ptesarik @rostedt @vbabka that is very disappointing

HAMMER SMASHED FILESYSTEM 🇺🇦

@ljs @ptesarik @rostedt @vbabka i'm not reading it unless you include very random remarks about fish faced people of neptune like this gentleman does https://archive.org/details/the_8088_project_book_grossblatt_robert_1989/mode/2up

@lkundrak @ptesarik @rostedt @vbabka Brno may as well be on Neptune

I bet Steve is really happy he's tagged on all of these stupid messages lol

@ljs @ptesarik @rostedt @vbabka wondering how they compare shipping cost-wise.

Steven i'm so sorry but at least you'll finally have a motivation to explore the filtering features

@ljs @lkundrak @ptesarik @rostedt that's what you get for redefining "if"

@ljs @lkundrak @rostedt @vbabka Steven is a very nice guy. I even met him in person and I'm still alive. 🤣


@ljs @vbabka This tooling has been around for over a decade. During the month of December, I figure I can run it on my server and workstation, as this month is usually slower than other months. But I recently upgraded my server (28 cores / 56 hyperthreads), and since the profiling updates a single integer with no locking (I'm looking for estimates, not exact numbers), cache line contention is hitting this server much harder than my older one. It has actually slowed things down a lot more. I'm thinking of running it for just one week and not three like I used to.

Anyway, likely/unlikely annotations can be a big performance win. I sped up the ftrace ring buffer by around 50% by placing strategic likely()s and unlikely()s.

@ptesarik @ljs @vbabka Noted. There are a lot of bad annotations, and most of them I ignore because it's not obvious that they are indeed wrong. Things like "stats" used to be unlikely (before they were switched over to static_branch), because they were runtime switches and the idea was "if you have it disabled, you likely want it to be faster". So with stats enabled, the annotation would show up as 100% incorrect, but that was indeed the correct outcome.

Before sending any patch to fix or remove a likely()/unlikely(), I analyze it to understand the reason it is incorrect. I never blindly send a patch because the tool said so; I send it after I understand the situation. A lot of the fixes I have done were because the likely used to be correct, but a later change made it incorrect. One example was a case where a condition was added ahead of the unlikely() that caused the annotated condition to almost always be true: the added check exited the function in 90% of the cases that used to make the unlikely condition rare. Now, when the annotated condition is reached, it is true about 90% of the time.
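
A hypothetical reconstruction of that pattern (invented names, not the code he actually fixed): the unlikely() was accurate until the early-exit check above it was added.

```c
#define unlikely(x)	__builtin_expect(!!(x), 0)

struct buffer {
	int free_slots;
	int full;
};

#define LOW_WATERMARK	8

static int take_slot_fast(struct buffer *buf) { return buf->free_slots--; }
static int take_slot_slow(struct buffer *buf) { return buf->free_slots--; }

static int reserve_slot(struct buffer *buf)
{
	/* Added later: returns early in ~90% of the calls where buf->full
	 * used to be false. */
	if (buf->free_slots > LOW_WATERMARK)
		return take_slot_fast(buf);

	/* Was rare across all calls when the annotation went in; of the
	 * calls that still reach this point, it is now true most of the
	 * time, so the hint predicts the wrong way. */
	if (unlikely(buf->full))
		return -1;

	return take_slot_slow(buf);
}
```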

Here’s an article I wrote while at VMware. Read it now before they take it down (VMware no longer exists!) I need to copy it too.

@ljs @lkundrak @ptesarik @vbabka Honestly, I didn’t notice until now.
