  • Yes. The Lemmy instance I’m commenting from is running on a Raspberry Pi 4. A couple of things you’ll need to consider, though (there’s a quick sanity-check sketch after this list):

    • Any containers/applications you run need to be compiled for arm64. This is way more common now than it used to be, but there are still some things that only work on x86 (like many game servers).
    • You should hook up external storage to your Pi. You can boot from an SSD via USB 3 and you’ll get way better performance, capacity, and write endurance than an SD card.
    • RAM will likely be your first limitation. Many services run comfortably under 4 GB, but once you start stacking more of them, memory fills up fast if you’re not careful.
    • You probably already knew this, but even though the Pi has WiFi, plug it into the network via Ethernet. As a rule, you should never run servers off WiFi if you can avoid it. You’ll get much better speeds and reliability.
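
    For anyone wanting to check their own Pi, here’s a rough pre-flight sketch in Python (my own illustration, not a requirement; it assumes a Linux host, since it reads /proc/meminfo) that surfaces the first three points above: CPU architecture, free disk, and available RAM.

    ```python
    import platform
    import shutil

    # 1. Containers must match the CPU architecture: 'aarch64' means arm64.
    print("arch:", platform.machine())

    # 2. Free space on the root filesystem (ideally a USB 3 SSD, not an SD card).
    total, used, free = shutil.disk_usage("/")
    print(f"free disk: {free / 1e9:.1f} GB")

    # 3. Available memory, read straight from the kernel (Linux-specific).
    with open("/proc/meminfo") as f:
        meminfo = dict(line.split(":", 1) for line in f)
    print("MemAvailable:", meminfo["MemAvailable"].strip())
    ```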

  • as a starting point to learn about a new topic

    No. I’ve used several models to “teach” me about subjects I already know a lot about, and they all frequently get many facts wrong. Why would I then trust them to teach me about something I don’t know about?

    to look up a song when you can only remember a small section of lyrics

    No, because traditional search engines do that just fine.

    when you want to write a block of code that is simple but monotonous to write yourself

    See this comment.

    suggest plans for how to create simple structures/inventions

    I guess I’ve never tried this.

    Anything with a verifiable answer that you’d ask on a forum can generally be answered by an LLM, because they’re largely trained on forums, and there’s a decent chance the training data included someone asking the question you are currently asking.

    Kind of, but here’s the thing: it’s rarely faster than just using a good traditional search, especially if you know where to look and how to use advanced filtering features. Also (and this is key), verifying the accuracy of an LLM’s answer requires about the same amount of work as not using an LLM in the first place, so I default to skipping the middleman.

    Lastly, I haven’t even touched on the privacy nightmare that these systems pose if you’re not running local models.


  • Creating software is a great example, actually. Coding absolutely requires reasoning. I’ve tried using code-focused LLMs to write blocks of code, or even some basic YAML files, but the output is often unusable.

    It rarely makes syntax errors, but it will do things like reference libraries that haven’t been imported or hallucinate functions that don’t exist. It also constantly misunderstands the assignment and creates something that technically works but doesn’t accomplish the intended task.
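
    To make that concrete, here’s a hypothetical Python snippet of the pattern I keep seeing (the invented call is my illustration; PyYAML really has no yaml.validate_schema function):

    ```python
    import yaml  # PyYAML

    # The kind of plausible-looking call an LLM will invent. PyYAML has
    # no validate_schema function, so this line would raise an
    # AttributeError if uncommented:
    # yaml.validate_schema(config_text, schema)

    # What PyYAML actually provides is parsing; validating against a
    # schema needs a separate library (e.g. jsonschema), and that’s
    # exactly the kind of distinction that gets hallucinated away.
    config = yaml.safe_load("retries: 3\ntimeout: 30\n")
    print(config["retries"])  # -> 3
    ```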

  • The sad reality is that the quality of modern Blu-ray releases has declined significantly. Sure, the picture looks great, but they barely come with special features anymore. Also, the QA is atrocious: I buy a lot of UHD Blu-rays, and roughly 30% of them arrive corrupted or damaged out of the box.

    I really want physical media to become popular again so companies start actually putting in effort.

    EDIT: I still love physical media. It’s pretty much the only way to own a copy of media anymore. I just wish it were as beloved as it was in the DVD days.