Posts 564 · Following 103 · Followers 116
A relatively new professional kernel hacker, born on August 6, 2000, and living in Korea (South!).

- Linux Kernel Developer @ Oracle (Linux Kernel MM) (2025.02 ~ Present)
- Reviewer for the Linux Slab & Reverse Mapping subsystem
- Former Intern @ NVIDIA, SK Hynix, Panmnesia (Security, MM and CXL)
- B.Sc. in Computer Science & Engineering, Chungnam National University (Class of 2025)

Opinions are my own.

My interests are:
- Memory management
- Computer architecture
- Circuit design
- Virtualization

Harry (Hyeonggon) Yoo

Edited 2 years ago
@lkundrak
expert? me?
@vbabka @ljs @sj are the experts.
I am just a dumb/curious undergraduate without a Ph.D. or even a B.S. (yet) XD

The main benefit of the NUMA architecture is to distribute memory bus traffic across several memory buses instead of a single global bus, because the global bus can become a bottleneck as the number of CPUs and the memory capacity grow.

A set of CPUs and the memory near those CPUs is called a NUMA node. If a CPU wants to access memory that is not in its local node, it reads the data from a remote node via an interconnect (instead of the faster, local bus).

Because local and remote NUMA nodes have different access latency and bandwidth, the OS tries to use the local node's memory first (of course, that depends on the NUMA memory policy of the task/VMA).
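The node layout the kernel detected can be inspected from userspace; here's a minimal Python sketch (the sysfs paths are the standard Linux ones, the helper name is mine) that reads each node's distance row:

```python
# Minimal sketch: read the NUMA topology the kernel exports in sysfs.
# On a non-NUMA machine only node0 exists.
from pathlib import Path

def numa_distances(sysfs="/sys/devices/system/node"):
    """Return {node_id: [distance to node 0, 1, ...]} read from sysfs."""
    nodes = {}
    for d in Path(sysfs).glob("node[0-9]*"):
        nid = int(d.name[4:])
        # Each row is the relative access cost from this node to every node;
        # by ACPI SLIT convention the local distance is 10.
        nodes[nid] = [int(x) for x in (d / "distance").read_text().split()]
    return nodes

if __name__ == "__main__":
    for nid, row in sorted(numa_distances().items()):
        print(f"node {nid}: distances {row}")
```

`numactl --hardware` prints the same distance matrix, if you have numactl installed.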

But a laptop is too cheap and small a system for a single bus to be a bottleneck, so I don't get why the hardware designers decided to adopt a NUMA architecture.

And it's really strange that different ranges of physical memory from a single DIMM belong to different NUMA nodes. Do they really have different performance characteristics?
@vbabka that would have been much slower 🤣
hmm it makes no sense because it has 8GB and 32GB DIMMs and node 0 has 12GB ❓

Maybe the board designer knows why

Harry (Hyeonggon) Yoo

Until yesterday I didn't know that my laptop has 2 NUMA nodes. But why?
@vbabka that's right ;)
@vbabka
interviewer: Throughout your career, how much value have you added to Linux so far?
vbabka: I removed more than I added.

Harry (Hyeonggon) Yoo

Edited 2 years ago
@ljs @cwayne
hmm, it might sound strange, but relatively young people in Korea don't seem to care about who lives next door anymore...

Harry (Hyeonggon) Yoo

Edited 2 years ago
@cwayne
looks like they hate something first and then find a (wrong) reason to explain it

Harry (Hyeonggon) Yoo

Edited 2 years ago
@cwayne
would he/she help them if they went to church :(

Harry (Hyeonggon) Yoo

Edited 2 years ago
Learning how to write a LAVA test definition, but the trickier thing is deciding which tests to run to verify that a kernel works fine.

Candidates:
- LTP
- KUnit
- kselftests

btw it's funny that the entire LTP suite gets killed every time it runs the OOM test cases. And LTP takes quite a long time for lightweight testing.

hmm... maybe run only a smaller subset of them?
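For a smaller subset, the test definition could look roughly like this sketch (the repository layout, the kselftest install path, and the chosen collection are my assumptions, not a working setup):

```yaml
# Sketch of a LAVA test definition running a small kselftest subset
# instead of the full LTP suite. Paths and collection name are assumptions.
metadata:
  format: Lava-Test Test Definition 1.0
  name: quick-kernel-smoke
  description: "Run a small subset of kernel selftests as a smoke test"
run:
  steps:
    - cd /opt/kselftests
    - ./run_kselftest.sh -c timers
```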
@ljs @kernellogger
it won't land in mainline ;)
(the consensus seems to be to not use static calls in __exit)
but it was fun!
Hmm, both ways seem to be very unstable for emulated CXL memory.
It crashes very easily when the memory is actually accessed by applications or the kernel.

Harry (Hyeonggon) Yoo

Edited 2 years ago
Hmm, I still don't get the point of accessing volatile CXL memory via mmap() on /dev/daxX.Y files.
@kernellogger
why did I read it as Linux SRCU (Sleepable RCU) ;)
it's a nice tool though!

Harry (Hyeonggon) Yoo

Edited 2 years ago
Today I learned:

CXL memory can be mapped as 'System RAM' or 'Soft Reserved' by platform firmware, or it can be dynamically provisioned (since v6.3) by the CXL region driver.

And a 'Soft Reserved' or dynamically provisioned CXL RAM region can be used in two ways:

1. Applications mmap() the /dev/daxX.Y files, just like with traditional persistent memory devices.
2. The kernel uses it as System RAM via the dax_kmem driver.

And a weird fact: when dax_kmem onlines CXL memory (and other performance-differentiated memory like pmem), it onlines it to ZONE_NORMAL, not ZONE_MOVABLE.
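Way 1 is just mmap() on a character device; a small Python sketch of it (the device name /dev/dax0.0 is hypothetical, and the code falls back to anonymous memory when no devdax device exists, so it is illustrative only):

```python
# Sketch of consuming a devdax region from userspace via mmap().
# /dev/dax0.0 is a hypothetical device name; without one, fall back to an
# anonymous mapping so the same code path can still be exercised.
import mmap
import os

def map_dax(path="/dev/dax0.0", length=2 * 1024 * 1024):
    """mmap a devdax device (or anonymous memory as a stand-in)."""
    try:
        fd = os.open(path, os.O_RDWR)
    except OSError:
        return mmap.mmap(-1, length)      # no devdax here: anonymous memory
    try:
        # devdax mappings must match the device alignment (often 2MiB).
        return mmap.mmap(fd, length, mmap.MAP_SHARED)
    finally:
        os.close(fd)

buf = map_dax()
buf[:4] = b"\xca\xfe\xba\xbe"             # stores go straight to the mapped memory
```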

Harry (Hyeonggon) Yoo

Edited 2 years ago
@lkundrak @bagder

for a moment I was wondering whose face it was
Working on a series for 6.8 probably...
So far...
26 files changed, 39 insertions(+), 4375 deletions(-)
@djlink

looks like a horror movie poster lol
@ljs oh no, take care!

Harry (Hyeonggon) Yoo

Edited 2 years ago
Booting the latest kernel is always fun ;)

I was hit by a BUG_ON() because I enabled CONFIG_DEBUG_VIRTUAL=y.
It made my machine crash, and I wrote and sent a quick hack to fix it.

https://lore.kernel.org/lkml/CAB=+i9QiJ=BXkQuCFJTh3dMXrkKQvVA2EM51Mj6SsDMimWQ71g@mail.gmail.com

Today I learned:

- Better to use the proper tags for regzbot next time I report something, instead of making @kernellogger do that for me.

- __init and __exit are compiler attributes that place functions in specific ELF sections (.init.text and .exit.text) so that unused portions of kernel code can be dropped.

- When a kernel feature is built into the kernel rather than built as a module, functions marked __init are dropped after initialization, and functions marked __exit are dropped and never used, because you can't unload a built-in kernel feature ;)

- Some architectures drop the .exit.text section at link time, but some drop it at runtime. This is due to the complexity of link-time reference counting between functions (? which I have no idea about yet).

- On architectures that drop it at runtime, functions marked __exit are dropped in free_initmem(), because the .exit.text section sits between __init_begin and __init_end.

- I need an automatic bisection system to save time in the future.

One piece of information I'm still missing is why it did not crash before that commit :(
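The .init.text/.exit.text placement can be checked from userspace by listing an ELF's section names. A rough pure-Python sketch of that (64-bit little-endian ELF only; readelf -S does this properly):

```python
# Sketch: list the section names of a 64-bit little-endian ELF, enough to
# spot sections like .init.text/.exit.text in a vmlinux or a .ko file.
import struct

def elf_section_names(path):
    with open(path, "rb") as f:
        data = f.read()
    assert data[:4] == b"\x7fELF" and data[4] == 2 and data[5] == 1, \
        "64-bit little-endian ELF only"
    (e_shoff,) = struct.unpack_from("<Q", data, 0x28)
    e_shentsize, e_shnum, e_shstrndx = struct.unpack_from("<HHH", data, 0x3A)

    def shdr(i):  # (sh_name, sh_type, sh_flags, sh_addr, sh_offset, ...)
        return struct.unpack_from("<IIQQQQIIQQ", data, e_shoff + i * e_shentsize)

    strtab = shdr(e_shstrndx)[4]          # file offset of the section-name table
    names = []
    for i in range(e_shnum):
        start = strtab + shdr(i)[0]
        names.append(data[start:data.index(b"\0", start)].decode())
    return names
```

Running it on a vmlinux built with the feature enabled should show .init.text and (on architectures that drop it at runtime) .exit.text.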