• 9 Posts
  • 36 Comments
Cake day: November 25th, 2024

  • My point is simply that it’s probably not worth it to add another language. Doesn’t have anything to do with Rust really.

    Though I do think the language is a bit overhyped. It’s obvious that companies and projects used to advertise that they’re using Rust not just because they want to attract young developers or because they like the language, but because it’s a way to get VC money. Like AI and blockchain.

    I do like Rust, but mostly because it encourages functional-style programming. And the tooling is of course awesome, especially compared to C and C++. However, I do believe that statically typed pure functional languages are superior to Rust.


  • I don’t think you get my point.

    Of course I don’t mean that you should introduce Lisp or Scheme into the Linux kernel. However, I don’t rule out anything when it comes to the future of programming. Kernel programming isn’t that special. If you need to build a scheduler, a dynamic memory manager or an interpreter as part of the kernel because it solves your problem, you do it. Maybe you want the kernel to generate thread-optimised FPGA configurations and microcode on the fly, and this is done with some kind of interpreter? Who knows.

    My point is that it’s probably a bad idea to introduce any new language into the kernel. A new backwards-compatible version of memory-safe C might be a good idea, though. If it can be done.

    I haven’t touched the Linux kernel in 10+ years, but my guess is that a good approach would be to write a new microkernel in Rust, one that is compatible with most existing drivers and board support packages. And of course it has to maintain the userspace ABI and POSIX, yada yada. Probably what the Redox project aims for, but I don’t know.

    Keeping the Rust bindings in a separate project might be unnecessary, though. I’m sceptical about allowing upstream drivers written in Rust, simply because I find there is such great value in sticking to one language. I also know that many kernel developers are getting old, and it gets harder to learn new languages the older you get, especially if the language comes with a decent share of sugar and bling (the minimalism of Lisp and C is valuable).

    If there is a problem finding driver developers who want to write C code, then sure. But breaking the flow of the senior maintainers/developers likely isn’t worth it, unless they ask for it.

    And also, I really haven’t been following this Rust in the Linux kernel debate.



    My gut tells me that any benefits of adding Rust are massively negated by the addition of a second language.

    If one wants to write Rust, there is always Redox and probably a bunch of other kernels.

    I like Rust, but it’s for sure an overhyped language. In a year or two, people will push for Zig, Mojo or some new pure and polished functional low-level language. Maybe a Scheme or a Lisp? That seems to be what the cool kids use nowadays.

    Or maybe we’ll just replace the kernel with an AI that generates machine code according to what it decides your intention should be.



  • Thanks for the high-effort reply.

    The Chinese companies will probably use SMIC rather than TSMC from now on. They were able to do low-volume 7 nm last year. Also, Nvidia and “China” are not at the same spot on the tech S-curve. It will be much cheaper for China (and Intel/AMD) to catch up than it will be for Nvidia to maintain the lead. Technological leaps and reverse engineering vs diminishing returns.

    Also, I expect the Chinese government to throw insane amounts of capital at this sector right now. So unless Stargate becomes a thing (though I believe the Chinese will invest much, much more), there will not be fair competition (as if that has ever been a thing anywhere, anytime). China also has many more tools, like an optional command economy. The US has nothing but printing money and manipulating oligarchs in a broken market.

    I’m not sure about 80/10 exactly, of course, but it is in that order of magnitude if you’re willing to not run the newest fancy stuff. I believe the MI300X goes for approx 1/2 of the H100 nowadays and is MUCH better on paper. We don’t know the real performance because of NDAs (I believe). It used to be 1/4. If you look at VRAM per $, the ratio is about 1/10 for the 1/4 case. Of course, the price gap will shrink as ROCm matures and customers feel it’s safe to use AMD hardware for training.

    So, my bet is max 2 years for “China”, at least when it comes to high-end performance per dollar. Max 1 year for AMD and Intel (if Intel survives).
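
    The VRAM-per-dollar claim above is easy to sanity-check with back-of-the-envelope arithmetic. A sketch, assuming the commonly cited memory sizes (80 GB H100, 192 GB MI300X) and an illustrative placeholder price for the H100 (not a real quote), with the MI300X at the 1/4 price point mentioned above:

```python
# Back-of-the-envelope VRAM-per-dollar comparison.
# Memory sizes are the commonly cited specs; the H100 price
# is an illustrative placeholder, not a real quote.
h100_vram_gb = 80        # H100 (SXM)
mi300x_vram_gb = 192     # MI300X
h100_price = 30_000      # assumed placeholder
mi300x_price = h100_price / 4   # the "1/4 of the H100 price" case

h100_gb_per_dollar = h100_vram_gb / h100_price
mi300x_gb_per_dollar = mi300x_vram_gb / mi300x_price

# How much VRAM per dollar the H100 gives relative to the MI300X.
ratio = h100_gb_per_dollar / mi300x_gb_per_dollar
print(f"H100 offers {ratio:.2f}x the VRAM per dollar of the MI300X")
```

    With those numbers the ratio comes out around 1/10, which is where the "about 1/10 for the 1/4 case" figure comes from; the ratio is independent of the absolute price chosen.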


  • Yeah. I don’t believe market value is a great indicator in this case. In general, I would say that capital markets are rational at a macro level, but not at a micro level. This is all speculation/gambling.

    My guess is that AMD and Intel are at most 1 year behind Nvidia when it comes to tech stack. “China”, maybe 2 years, probably less.

    However, if you can make chips with 80% of the performance at 10% of the price, it’s a win. People can keep telling themselves that big tech will always buy the latest and greatest whatever the cost. That doesn’t make it true. I mean, it hasn’t been true for a really long time. Google, Meta and Amazon already make their own chips. That’s probably true for DeepSeek as well.
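
    Spelled out, the 80/10 scenario is just a performance-per-dollar ratio; the numbers are the hypothetical ones from the claim, not measurements:

```python
# Performance per dollar of a hypothetical competitor with
# 80% of the incumbent's performance at 10% of its price.
competitor_perf = 0.80    # relative performance (hypothetical)
competitor_price = 0.10   # relative price (hypothetical)

perf_per_dollar_ratio = competitor_perf / competitor_price
print(f"{perf_per_dollar_ratio:.0f}x the performance per dollar")
```

    An 8x performance-per-dollar advantage is hard to ignore for anyone whose workload scales horizontally.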



  • Is that still true, though? My impression is that AMD works just fine for inference with ROCm and llama.cpp nowadays. And you get much more VRAM per dollar, which means you can stuff a bigger model in there. You might get fewer tokens per second compared with a similar Nvidia card, but that shouldn’t really be a problem for a home assistant, I believe. Even an Arc A770 should work with IPEX-LLM. Buy two Arc or Radeon cards with 16 GB VRAM each, and you can fit a Llama 3.2 11B or a Pixtral 12B without any quantization. Just make sure that ROCm supports that specific Radeon card, if you go for team red.
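
    The "fits without quantization" arithmetic is straightforward: at fp16, each parameter takes 2 bytes, so weights cost roughly 2 GB per billion parameters. A sketch of that lower-bound estimate (weights only; it ignores KV cache, activations and framework overhead, which eat into the remaining headroom):

```python
# Rough weights-only memory estimate for fp16 models.
# Ignores KV cache, activations and framework overhead,
# so treat the result as a lower bound.
BYTES_PER_PARAM_FP16 = 2

def weights_gb(n_params_billion: float) -> float:
    """Approximate fp16 weight footprint in GB (2 GB per billion params)."""
    return n_params_billion * BYTES_PER_PARAM_FP16

total_vram_gb = 2 * 16  # two 16 GB cards

for name, size_b in [("Llama 3.2 11B", 11), ("Pixtral 12B", 12)]:
    gb = weights_gb(size_b)
    print(f"{name}: ~{gb:.0f} GB of weights, fits in {total_vram_gb} GB: {gb < total_vram_gb}")
```

    Both models land in the 22–24 GB range for weights alone, which leaves a handful of GB across the two cards for context and overhead.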