• 0 Posts
  • 203 Comments
Joined 2 years ago
Cake day: June 20th, 2023







  • it suffers from the same enshittification problems that have killed Twitter, Reddit, BoingBoing, Digg, Slashdot

    I’ll easily agree that these platforms are bad, but saying anything “killed” them is very, VERY generous. Reddit and Slashdot are very much still a thing, and they don’t look like they’re slowing down, despite the supposedly insurmountable issues. Keep in mind that the goal of a “social network” (for lack of a better word) is having an audience. Reddit literally shat on its user base, AND on the people who kept the site usable, and communities are still thriving there.



  • People go to the platform that’s easy, attractive, and works, instead of the very beautiful, finely crafted, exquisite solution that requires days of reading, followed by fiddling every other day to barely get the same immediate result, complete with surprises like hidden moderation and silent failure situations that lead to fragmentation of the whole network. What a surprise.

    Also, “don’t get pedantic with me” does not sit well with the current goals of bluesky. Sure, right now, they’ve focused on making something that works and is usable by everyone. Whoop fucking doo, that’s exactly what mastodon/lemmy/most activitypub services skipped. And that’s why the general public looks at them with contempt. I can’t see the future (maybe you can, lucky you), but for now, bluesky works, and the plan they’re still following is aimed at a decentralized solution.









  • No, not that either. Unless you consider “use an LLM to summarize the changes/errors/inaccuracies, then have a human read the whole thing again” an improvement over “just have a human read the whole thing”.

    Because an LLM will do all of these things:

    • point you toward issues
    • point you toward non-issues
    • not point you toward issues
    • change stuff even when “instructed” not to

    If there is one thing you don’t want to throw an LLM at without full, unbiased review, it’s documents where the wording is legally binding. And if you have to do a full, unbiased review to begin with, where you can’t even trust your tool to have highlighted all the important parts, you may as well not bother with the tool.


  • If you consider debugging broken LLM-generated code to be a skill… sure, go for it. But since generated code can lean on tons of unknown side effects and other stuff that looks random (to humans) to achieve its goal, I’d rather take the other approach: let a human spend half an hour writing the code that some LLM could generate in seconds, get a working result, and skip having to learn how to parse random mumbo jumbo from a machine (a contrived sketch of that contrast follows below).

    Writing code is far from the longest part of the job, and you blithely decided that making the tedious part even more tedious is a great idea to shorten the already short part of it…
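
    Purely for illustration (a made-up Python sketch, not actual LLM output): the same trivial task written in the side-effect-reliant style described above, next to the plainer version a human would write and a reviewer can trust at a glance.

        # Hypothetical example: the same task, two ways.

        # Side-effect-reliant style: a mutable default argument quietly shares
        # state across calls. It "works" for one narrow use, but the reviewer
        # has to spot the hidden behaviour before trusting it.
        def collect(item, bucket=[]):
            bucket.append(item)
            return bucket

        # The boring, explicit version: takes a bit longer to write,
        # takes seconds to review.
        def collect_explicit(item, bucket=None):
            if bucket is None:
                bucket = []
            bucket.append(item)
            return bucket

        a, b = collect(1), collect(2)
        print(a, b)   # [1, 2] [1, 2] -- both names point at the same hidden list
        c, d = collect_explicit(1), collect_explicit(2)
        print(c, d)   # [1] [2]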