• 5 Posts
  • 40 Comments
Joined 1 month ago
Cake day: December 26th, 2024


  • llama@lemmy.dbzer0.com (OP) to Privacy@lemmy.ml · How to run LLaMA (and other LLMs) on Android · edited 2 days ago

    This is all very nuanced and there isn’t a clear-cut answer. It really depends on what you’re running, how long you’re running it for, your device specs, etc. The LLMs I mentioned in the post ran just fine and didn’t cause any overheating, as long as they weren’t used for extended periods of time. You absolutely can run a SMALL LLM without frying your processor if you don’t overdo it. Even then, I find it extremely unlikely that you’d cause permanent damage to your hardware components.

    Of course that is something to be mindful of, but that’s not what the person in the original comment said. It does run, but you need to be aware of the limitations and potential consequences. That goes without saying, though.

    Just don’t overdo it. Or do, but the worst thing that will happen is your phone getting hella hot and shutting down.
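    For anyone curious, here’s a rough sketch of what “running a small LLM on Android” can look like, using llama.cpp inside Termux. The model file name below is illustrative, not a specific recommendation – download whatever small quantized .gguf model suits your device:

    ```shell
    # Inside Termux (install it from F-Droid). Build llama.cpp from source.
    pkg update && pkg install -y git cmake clang
    git clone https://github.com/ggerganov/llama.cpp
    cd llama.cpp
    cmake -B build && cmake --build build --config Release

    # A small quantized model (~1B parameters, Q4) keeps memory use and heat manageable.
    # Place a .gguf model in models/ first; this file name is hypothetical.
    ./build/bin/llama-cli -m models/tiny-model-q4_k_m.gguf \
      -p "Hello" -n 64 --threads 4
    ```

    Keeping `--threads` low and generations short is an easy way to avoid the phone heating up in the first place.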



  • I am not entirely sure, to be completely honest. In my experience, it is very little but it varies too. It really depends on how many people connect, for how long they connect, etc. If you have limited upload speeds, maybe it wouldn’t be a great idea to run it in your browser/phone. Maybe try running it directly on your computer using the -capacity flag?

    I haven’t been able to find any specific numbers either, but I did find a post on the Tor Forum, dated April 2023, from a user complaining about high bandwidth usage. That’s not the norm in my experience, though.
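    For reference, here’s a hedged sketch of running the standalone proxy with a connection cap (check Tor’s Snowflake documentation for the current build instructions; the capacity value is just an example):

    ```shell
    # Build and run the standalone Snowflake proxy (requires Go).
    git clone https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake.git
    cd snowflake/proxy
    go build .

    # -capacity caps concurrent clients, which in turn bounds bandwidth usage.
    ./proxy -capacity 10
    ```

    Capping capacity like this trades helping fewer users at once for predictable upload usage on your connection.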



  • Thank you for pointing that out. That was worded pretty badly. I corrected it in the post.

    For further clarification:

    The person connecting to your Snowflake bridge connects to it over a p2p-like connection. So, that person does know your IP address, and your ISP can also see the person’s IP address – the one connecting to your bridge.

    However, to both of your ISPs, it will look like the two of you are using some kind of video-conferencing software, such as Zoom, because Snowflake uses WebRTC. That makes the traffic inconspicuous and obfuscates from both ISPs what’s actually going on.

    To most people, that is not a concern. But, ultimately, it comes down to your threat model. Historically, there haven’t been any cases of people running bridges, or entry and middle relays, getting in trouble with law enforcement.

    So, will you get in any trouble for running a Snowflake bridge? The answer is quite probably no.

    For clarification: you’re not acting as an exit node if you’re running a Snowflake proxy. Please check Tor’s documentation and Snowflake’s documentation.












  • You’re aware that it’s in their best interest to make everyone think their “AI” can execute advanced cognitive tasks, even if it has no ability to do so whatsoever and it’s mostly faked?

    Are you sure you read the edits in the post? Because they say the exact opposite: Perplexity isn’t all-powerful and all-knowing. It just crawls the web and uses other language models to “digest” what it found. They are also developing their own LLMs. Ask Perplexity yourself or check the documentation.

    Taking what an “AI” company has to say about its product at face value in this part of the hype cycle is questionable at best.

    Sure, that might be part of it, but they’ve always been very transparent about their reliance on third-party models and web crawlers. I’m not even sure what your point here is. Don’t take what they said at face value; test the claims yourself.



  • Seems odd that someone from dbzer0 would be very concerned about data ownership. How come?

    That doesn’t make much sense. I created this post to spark a discussion and hear different perspectives on data ownership. While I’ve shared some initial points, I’m more interested in learning what others think about this topic rather than expressing concerns. Please feel free to share your thoughts – as you already have.

    I don’t exactly know how Perplexity runs its service. I assume that their AI reacts to such a question by googling the name and then summarizing the results. You certainly received much less info about yourself than you could have gotten via a search engine.

    Feel free to go back to the post and read the edits. They may help shed some light on this. I also recommend checking Perplexity’s official docs.


    The prompt was something like, “What do you know about the user llama@lemmy.dbzer0.com on Lemmy? What can you tell me about his interests?” Initially, it generated a lot of fabricated information, but it would still include one or two accurate details. When I ran the test again, the response was much more accurate than the first attempt. It seems that as my account became more established, it became easier for the crawlers to find relevant information.

    It even talked about this very post on item 3 and on the second bullet point of the “Notable Posts” section.

    However, when I ran the same prompt again (or similar prompts), it started hallucinating a lot of information. So, it seems like the answers are very hit or miss. Maybe that’s an issue that can be solved with some prompt engineering and as one’s account gets more established.