• 0 Posts
  • 117 Comments
Joined 2 years ago
Cake day: March 14th, 2023

  • Well, it’s not his tool, as you clarified. But we know Marc is in contact with both ai16z and Musk, and he would certainly like to do the things alleged.

    I believe the breadcrumb he’s referring to is using Trump as the public example. Maybe the prompt correlates to bot messages posted to Twitter too.

    He says ‘we left’, so he personally wouldn’t have needed the perms.

    H-1B doesn’t seem like a red flag to me. Plenty of people in my year got entry-level roles at FAANG companies in the US after their degrees and had to get H-1B sponsorship. Musk got rid of most other staff and only has a skeleton crew of loyalists and H-1B holders. I imagine most of the remaining 10% are aware of the intricacies of the various interference he’s demanded (much of which is documented elsewhere).

    They are ‘advanced’ in that LLMs can now trick most people, when they couldn’t a few years ago. I think you’re reading too much into that. It seems unlikely that the people who want to manipulate others with chatbots would have the self-awareness to create an ad like this. Maybe it’s a leftist putting together the obvious and embellishing, but that seems a bit too coincidental.

  • I mean, ‘don’t friend or put high trust in people you don’t know’ is a pretty strong rule. Due to the “six degrees of separation” phenomenon, it scales pretty easily as well. If you have stupid friends that friend bots, you can cut them all off, or just lower your trust in them.

    Know IRL? That seems like it would inherently limit discoverability and openness. New users or those outside the immediate social graph would face significant barriers to entry, and you’d still be vulnerable to manipulation, such as bots infiltrating through unsuspecting friends or malicious actors leveraging connections to gain credibility.

    “Post-Turing” is pretty strong. People who’ve spent much time interacting with LLMs can easily spot them. For whatever reason, they all seem to have similar styles of writing.

    Not the good ones; many conversations online are fleeting, and those tell-tale signs can be removed with the right prompt and context. We’re post-Turing in the sense that in most interactions online, people wouldn’t be able to tell they were speaking to a bot, especially if they weren’t looking for it, which most aren’t.
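The friend-graph idea above — assign trust to people you know, let it propagate a few hops, and lower your trust in friends who friend bots — can be sketched as a small web-of-trust calculation. This is a minimal illustration, not anyone’s actual system: the graph, names, decay rule, and six-hop cap are all hypothetical assumptions.

```python
def trust_score(graph, me, target, max_hops=6):
    """Best-path trust from `me` to `target`, multiplying per-edge
    trust values in [0, 1] along each hop (web-of-trust style sketch).
    The six-hop cap echoes the 'six degrees of separation' idea."""
    best = {me: 1.0}          # highest trust found for each node so far
    frontier = {me: 1.0}      # nodes whose trust improved last hop
    for _ in range(max_hops):
        nxt = {}
        for node, score in frontier.items():
            for friend, edge_trust in graph.get(node, {}).items():
                s = score * edge_trust   # trust decays multiplicatively
                if s > best.get(friend, 0.0):
                    best[friend] = s
                    nxt[friend] = s
        if not nxt:
            break
        frontier = nxt
    return best.get(target, 0.0)

# Hypothetical graph: edge values are how much each person trusts a friend.
graph = {
    "me": {"alice": 0.9, "bob": 0.5},
    "alice": {"carol": 0.8},
    "bob": {"suspected_bot": 0.9},
}

print(trust_score(graph, "me", "carol"))          # 0.9 * 0.8 = 0.72
print(trust_score(graph, "me", "suspected_bot"))  # 0.5 * 0.9 = 0.45
```

Lowering your trust in "bob" (the careless friend) automatically lowers the score of everything he vouches for, which is the scaling property the comment describes.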

  • Your default should be to try to disprove, or at least verify, any claims.

    And surround yourself with good people.

    I’ve spoken to a lot of people who can’t think independently, and I can’t figure out what’s wrong with them tbh. I understand it on paper: cognitive dissonance, emotive narratives reinforced by echo chambers that have blinded them. But how do you deny basic facts when they’re explained to you one on one? I used to think they were lying, but it’s clear to me now most aren’t.